Posted on 2025-05-01. Authored by Harish Ganapati Naik.
Many real-world problems are modeled as graphs that represent relationships between entities. Graph Neural Networks (GNNs) are a powerful variant of neural networks that combine vertex and edge attributes with node neighborhood structure to infer properties of graph data. Message Passing Neural Networks (MPNNs), a common type of GNN, are at most as expressive as the first-order Weisfeiler-Leman (1-WL) algorithm when learning representations for classification tasks. However, 1-WL has known limitations in expressiveness, which in turn constrain GNN performance. Separately, eXplainable Artificial Intelligence (XAI) is a sub-field of Machine Learning focused on addressing the “black-box” nature of neural networks. Several projects, such as GNNExplainer, have explored providing post-hoc explanations for GNN predictions. The present work combines XAI methods with graph mining to develop a computational framework that improves GNN performance.
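As background for the expressiveness discussion, the sketch below is a minimal, hypothetical illustration of 1-WL color refinement (it is not code from this dissertation): each node is repeatedly recolored by the multiset of its neighbors' colors until the coloring stabilizes, which is the same mechanism that bounds the distinguishing power of MPNNs.

```python
# Minimal sketch of 1-WL color refinement on an adjacency-list graph.
# Hypothetical illustration only; names (wl_refine, adj) are assumptions.

def wl_refine(adj, rounds=3):
    """adj: dict node -> list of neighbors. Returns a color id per node."""
    colors = {v: 0 for v in adj}  # start from a uniform coloring
    for _ in range(rounds):
        # signature = own color plus sorted multiset of neighbor colors
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        # compress distinct signatures back to small integer ids
        ids = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        colors = {v: ids[sigs[v]] for v in adj}
    return colors

# A path on 4 nodes: the two endpoints end up with one color,
# the two interior nodes with another.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(wl_refine(path))
```

The classic limitation mentioned above can be seen with regular graphs: on a 6-cycle and on two disjoint triangles, every node has degree 2, so 1-WL assigns all nodes the same color and cannot tell the two graphs apart.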
The following are the main themes of this work:
(1) Explanation Enhanced Graph Learning (EEGL): a new computational framework that addresses the performance limitations of GNNs. We achieve this by annotating the input with relevant local structural information derived from explanation artifacts and graph mining. Through experiments, we show that data annotated in this way yields higher model performance.
(2) Noise and learnability: We study four types of noise in our synthetic data and their effects on GNN learnability, and show that EEGL mitigates these adverse effects, improving performance even on noisy data.
(3) GNNs as logical classifiers: Logical characterization offers a structured way to analyze and define the expressiveness of GNN models; for example, “learning a query” means learning a node classification problem in a unified manner across all graphs using a single logic formula. Through experiments, we examine the inductive learning characteristics of GNNs and the models’ ability to generalize across structurally diverse graphs that encode the same logical rule.
(4) A philosophy of science perspective on Explainable AI: We briefly explore Explainable AI from the perspective of the philosophy of science, and discuss a high-level framework called ExpSpec for contextualizing and defining a set of requirements for explanations.
History
Advisor
György Turán
Department
Computer Science
Degree Grantor
University of Illinois Chicago
Degree Level
Doctoral
Degree name
PhD, Doctor of Philosophy
Committee Member
Bhaskar DasGupta
Xiaorui Sun
Sourav Medya
Tamás Horváth