University of Illinois at Chicago

Learning Invariant Representation through Warping Layers

thesis
posted on 2022-05-01, 00:00 authored by Yingyi Ma
Invariance is an effective prior that has been used extensively in supervised learning. Existing works usually use invariance to bias learning with given representations of data. Direct factorization of parametric models is feasible for only a small range of invariances, and the regularization approach, despite improved generality, can lead to nonconvex optimization. In this thesis, we break these limitations by designing new algorithms to learn representations that can incorporate various invariances. Our first approach is based on warping a Reproducing Kernel Hilbert Space (RKHS) in a data-dependent fashion. By applying finite approximations, it is computationally efficient and leads to a deep kernel through multiple layers. To accommodate more general invariances, our second approach incorporates invariances as semi-norm functionals. In this way, an RKHS can be warped into a semi-inner-product space, e.g., a Reproducing Kernel Banach Space (RKBS). To restore convexity, we then embed the kernel representer into Euclidean space and demonstrate how to accomplish this in a convex and efficient fashion. We further construct warping layers that make kernel warping compatible with any deep neural network architecture, and demonstrate their use in learning representations for label structure in the low-data regime.
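As a rough illustration of the "finite approximation leading to a deep kernel through multiple layers" idea, the sketch below stacks finite-dimensional random Fourier feature maps of an RBF kernel so that inner products of the final representation approximate a composed kernel. This is only a minimal sketch under assumed details; the class name RandomFeatureWarpingLayer, the choice of RBF kernel, and all parameters are hypothetical and are not the warping construction developed in the thesis.

import numpy as np

class RandomFeatureWarpingLayer:
    """Illustrative layer: maps inputs to a finite random Fourier feature
    approximation of an RBF kernel's feature space. Stacking such layers
    gives a finite-dimensional 'deep kernel' built through multiple layers."""

    def __init__(self, in_dim, n_features, bandwidth=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # Random frequencies (scaled by the kernel bandwidth) and phases.
        self.W = rng.normal(scale=1.0 / bandwidth, size=(in_dim, n_features))
        self.b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
        self.scale = np.sqrt(2.0 / n_features)

    def __call__(self, X):
        # phi(x) with phi(x)^T phi(y) ~= exp(-||x - y||^2 / (2 * bandwidth^2))
        return self.scale * np.cos(X @ self.W + self.b)

# Two stacked layers: the second kernel acts on the warped representation.
X = np.random.randn(8, 5)
layer1 = RandomFeatureWarpingLayer(in_dim=5, n_features=256)
layer2 = RandomFeatureWarpingLayer(in_dim=256, n_features=256, seed=1)
deep_features = layer2(layer1(X))
approx_kernel = deep_features @ deep_features.T  # finite-dimensional deep kernel estimate

Because each layer is an explicit finite-dimensional map, the composed kernel never requires storing or inverting a full Gram matrix, which is the computational benefit the abstract alludes to.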

History

Advisor

Zhang, Xinhua

Chair

Zhang, Xinhua

Department

Computer Science

Degree Grantor

University of Illinois at Chicago

Degree Level

  • Doctoral

Degree name

PhD, Doctor of Philosophy

Committee Member

Ziebart, Brian
Tang, Wei
Ravi, Sathya
Yu, Yaoliang

Submitted date

May 2022

Format

application/pdf

Language

  • en
