University of Illinois Chicago

Graph Representation Learning with Deep Recurrent Models

Thesis posted on 2020-05-01, authored by Aynaz Taheri
Representing and comparing graphs is a central problem in many fields. We propose sequence-to-sequence architectures for graph representation learning in both static and dynamic settings. Our methods use recurrent neural networks to encode and decode information from graph-structured data. In the static setting, we investigate the graph representation learning problem in unsupervised and supervised regimes. Recurrent neural networks require sequences, so we choose several methods of traversing graphs using different types of substructures with various levels of granularity. We train our unsupervised approaches using long short-term memory (LSTM) encoder-decoder models to embed the graph sequences into a continuous vector space. We then represent a graph by aggregating its graph sequence representations. Our supervised architecture uses the substructure embeddings obtained by our unsupervised encoder-decoder models together with an attention mechanism to collect information from neighborhoods. The attention module enables the model to focus on the neighborhoods that are most relevant to the supervised task. We demonstrate the effectiveness of our approaches by showing improvements over the existing state of the art on several graph classification tasks.

Moreover, we propose an unsupervised representation learning architecture for dynamic graphs, designed to learn both the topological and temporal features of graphs that evolve over time. The approach consists of a sequence-to-sequence encoder-decoder model that combines gated graph neural networks (GGNNs) and LSTMs. The GGNN learns the topology of the graph at each time step, while the LSTMs propagate temporal information across time steps. The encoder learns the temporal dynamics of an evolving graph, and the decoder reconstructs those dynamics over the same period of time from the representation provided by the encoder.
We demonstrate that our approach is capable of learning the representation of a dynamic graph through time by applying the embeddings to dynamic graph classification using two real-world datasets of animal behavior and brain networks.
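The abstract's static-setting pipeline begins by traversing a graph to produce node sequences that a recurrent encoder-decoder can consume. As a rough illustration of that traversal step, here is a minimal random-walk sketch in pure Python; the function name, parameters, and toy graph are illustrative assumptions, not the thesis's actual code, and the thesis itself uses several traversal strategies over different substructure types.

```python
import random

def random_walks(adj, walks_per_node=2, walk_length=5, seed=0):
    """Turn a graph into node sequences via random walks.

    `adj` maps each node to a list of its neighbors. The returned
    sequences are the kind of substructure traversals an LSTM
    encoder-decoder could embed into a continuous vector space.
    (Names and defaults here are hypothetical.)
    """
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_length:
                neighbors = adj[walk[-1]]
                if not neighbors:
                    break  # dead end: stop this walk early
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# A toy 4-node graph: a triangle (0, 1, 2) plus a pendant node 3.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walks = random_walks(graph)
```

Each walk is a list of node ids; in the thesis's setting, such sequences would be fed to the LSTM encoder, and a graph's representation would be obtained by aggregating the embeddings of its sequences.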

History

Advisor

Berger-Wolf, Tanya

Chair

Berger-Wolf, Tanya

Department

Computer Science

Degree Grantor

University of Illinois at Chicago

Degree Level

  • Doctoral

Degree name

PhD, Doctor of Philosophy

Committee Member

Zhang, Xinhua; Ziebart, Brian; Zheleva, Elena; Gimpel, Kevin

Submitted date

May 2020

File format

application/pdf

Language

  • en
