Learning Invariant Representation through Warping Layers
Thesis posted on 2022-05-01, authored by Yingyi Ma
Invariance is an effective prior that has been extensively used in supervised learning. Existing works usually use invariance to bias learning with given representations of data. Direct factorization of parametric models is feasible for only a small range of invariances, and the regularization approach, despite improved generality, can lead to nonconvex optimization. In this thesis, we break these limitations by designing new algorithms to learn representations that can incorporate various invariances. Our first approach is based on warping a Reproducing Kernel Hilbert Space (RKHS) in a data-dependent fashion. By applying finite approximations, it is computationally efficient and leads to a deep kernel through multiple layers. To explore more general invariances, our second approach incorporates invariances as semi-norm functionals. In this way, an RKHS can be warped into a semi-inner-product space, e.g., a Reproducing Kernel Banach Space (RKBS). To restore convexity, we then embed the kernel representer into Euclidean spaces and demonstrate how to accomplish this in a convex and efficient fashion. We further construct a warping layer that makes kernel warping compatible with any deep neural network setting, and demonstrate its usage in learning representations for label structure in the low-data regime.
Degree grantor: University of Illinois at Chicago
Degree name: PhD, Doctor of Philosophy
Committee members: Ziebart, Brian; Tang, Wei; Ravi, Sathya; Yu, Yaoliang
Submitted date: May 2022