Invariant Kernels for Few-shot Learning
Thesis posted on 06.08.2019 by Amlaan Bhoi
Recent advances in few-shot learning focus on the development of meta-learning algorithms or improvements to distance-based methods. However, the majority of these approaches do not consider the robustness of the model to data augmented by various transforms. Transformations such as rotation, scaling, shifting, shearing, and other affine transforms can negatively impact the performance of a trained model. While robustness to such variations can be induced through random augmentation at training time, the computational cost of training models this way is significant. In this thesis, we propose a general framework to induce a variety of invariances, such as rotation, shifting, and shearing, in few-shot learning by extending orbit embeddings to patch-wise image representations, thereby preserving the spatially invariant structure required by convolutional neural networks. However, computing these kernels can be expensive when applied to millions of patches or samples in a dataset. We therefore use Nyström's method to construct a low-rank approximation of the kernel matrix by sampling a small subset of data points. Finally, the transformed features are passed as input to a modified Relation Network for few-shot learning. Analyzing the performance of these models under heavy test-time data augmentation, we present results that empirically demonstrate the effectiveness of our approach compared to baseline methods.
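The Nyström approximation mentioned above can be illustrated with a minimal sketch. The kernel choice (an RBF kernel), the landmark count, and all variable names below are illustrative assumptions, not the thesis's actual configuration: the full n×n kernel matrix K is approximated as C W⁺ Cᵀ, where W is the kernel among m sampled landmark points and C is the kernel between all n points and the landmarks.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """RBF kernel between rows of X and rows of Y (illustrative choice)."""
    d2 = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def nystrom_approximation(X, m, gamma=0.5, seed=0):
    """Low-rank Nystrom approximation of the kernel matrix: K ~= C W+ C^T."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)  # sample m landmark points
    landmarks = X[idx]
    C = rbf_kernel(X, landmarks, gamma)              # n x m cross-kernel
    W = rbf_kernel(landmarks, landmarks, gamma)      # m x m landmark kernel
    return C @ np.linalg.pinv(W) @ C.T               # n x n rank-<=m approximation

# Toy data: 200 samples standing in for image patches (hypothetical sizes).
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8))
K = rbf_kernel(X, X)                       # exact kernel: O(n^2) entries
K_approx = nystrom_approximation(X, m=50)  # built from only 50 landmarks
rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
```

Only the m landmark columns of the kernel ever need to be evaluated, which is what makes the approach tractable when n is in the millions; the approximation quality improves as m grows toward the effective rank of K.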