Robustness and Explainability of Deep Neural Networks: Architectures and Applications
Thesis posted on 2020-08-01, authored by Chirag Agarwal
In this work, I study the performance, robustness, and explainability of machine learning models, in particular deep neural networks, and propose solutions to the challenges each presents. I consider these three properties to be the three pillars of modern Artificial Intelligence. In particular, I address: (a) the convergence of backpropagation for skip-connected architectures, (b) how to design a new model architecture for generating time signals, (c) how to increase the robustness of deep neural networks by designing new objective functions, and (d) how to address the explainability and interpretability of deep neural networks by 1) visualizing attribution maps for classifier models using generative models, and 2) designing interpretable models using Deep Unfolding.