University of Illinois Chicago

User-Centric Adversarial Perturbations to Protect Privacy in Social Networks

Thesis
Posted on 2019-08-06, authored by Sachin G Biradar
Social media has pervaded our lives in recent years as access to the internet has grown. It has drastically changed the way people interact with one another, with users posting extensively about every facet of their lives on these websites. These social networks have become a treasure trove of data, which has in turn driven rapid advances in the field of social media mining. In the interest of personalizing the user experience, highly performant and specialized machine learning models have been developed that can learn users' behavioral patterns and make reliable predictions about them. However, several organizations have leveraged this data to predict user preferences and classify users without their express permission, raising serious privacy concerns among social network users. Social networks are typically represented as graphs, with users as the nodes and the connections between them as the edges. Recent research has shown how effective deep learning is at analyzing this graph data and accurately performing node (user) classification. This, however, also presents an opportunity to "fool" these models into misclassifying users by making relatively minor changes to their data, a technique commonly known as adversarial example generation. Users in a social network have no control over other nodes or the links between them, so they cannot change the whole graph to protect their privacy. They do, however, control their own profile information and friendship links, so we propose a user-centric algorithm that suggests minimal changes to a user's own profile and links that cause conventional models to misclassify their node.
We find that the algorithm consistently finds small user-centric adversarial perturbations that lead to a strong decrease in the performance of graph convolutional networks. This is therefore a viable approach by which users can ensure that their sensitive information remains truly private and cannot be reliably exploited without their explicit approval.
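To make the idea of a user-centric perturbation concrete, the following is a minimal illustrative sketch, not the thesis's actual algorithm: a tiny two-layer graph convolutional network (GCN) forward pass in NumPy, plus a greedy routine that flips only the target user's own binary profile features, within a small budget, to lower the model's score for that user's true class. All function names, the budget parameter, and the greedy flip strategy are assumptions for illustration.

```python
import numpy as np

def gcn_scores(A, X, W1, W2):
    """Two-layer GCN forward pass: A_norm @ relu(A_norm @ X @ W1) @ W2."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetric normalization
    H = np.maximum(A_norm @ X @ W1, 0.0)        # hidden layer with ReLU
    return A_norm @ H @ W2                      # per-node class logits

def user_centric_perturbation(A, X, W1, W2, target, true_class, budget=2):
    """Hypothetical sketch: greedily flip the target user's own binary
    features (never anyone else's) to lower the GCN's score for the
    user's true class, stopping when no flip helps or budget runs out."""
    X = X.copy()
    for _ in range(budget):
        base = gcn_scores(A, X, W1, W2)[target, true_class]
        best_drop, best_f = 0.0, None
        for f in range(X.shape[1]):             # try flipping each feature
            X[target, f] = 1.0 - X[target, f]
            drop = base - gcn_scores(A, X, W1, W2)[target, true_class]
            X[target, f] = 1.0 - X[target, f]   # undo the trial flip
            if drop > best_drop:
                best_drop, best_f = drop, f
        if best_f is None:                      # no flip lowers the score
            break
        X[target, best_f] = 1.0 - X[target, best_f]
    return X

# Usage on a toy 4-user graph with random (untrained) weights:
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.integers(0, 2, size=(4, 6)).astype(float)
W1 = rng.standard_normal((6, 8))
W2 = rng.standard_normal((8, 3))
before = gcn_scores(A, X, W1, W2)[0, 1]
X_adv = user_centric_perturbation(A, X, W1, W2, target=0, true_class=1)
after = gcn_scores(A, X_adv, W1, W2)[0, 1]
```

Because the greedy loop accepts a flip only when it strictly lowers the targeted class score, the perturbed score is never higher than the original; the thesis's actual method additionally considers perturbing the user's own friendship links, which would amount to editing the target's row and column of `A` under the same budget.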

History

Advisor

Zheleva, Elena

Chair

Zheleva, Elena

Department

Computer Science

Degree Grantor

University of Illinois at Chicago

Degree Level

  • Masters

Committee Member

Kanich, Chris; Caragea, Cornelia

Submitted date

May 2019

Issue date

2019-04-19
