
Distributionally-Robust and Specification-Robust Fairness for Machine Learning

Thesis posted on 2023-08-01, authored by Omid Memarrast
Developing machine learning methods that achieve high accuracy while avoiding unfair treatment of different groups has become increasingly important for data-driven decision making in social applications. This thesis develops fair machine learning algorithms through two main paradigms. First, we leverage the distributionally robust framework to develop fair algorithms for classification and ranking, proposing novel approaches that minimize disparities across groups while preserving overall classification or ranking performance. Second, we introduce a specification-robust approach to fair classification, inspired by ideas from imitation learning.

Supervised learning often relies on Empirical Risk Minimization (ERM) (Vapnik, 1992) to train models that generalize well to unseen data. However, ERM methods are susceptible to noise and outliers because of their dependence on sample means. To address this issue, adversarial robust learning (Asif et al., 2015) has emerged as a promising alternative, formulating supervised learning as a minimax game between a predictor and an adversary (sketched schematically after the abstract). This thesis investigates how this framework can be applied to fair machine learning tasks and proposes two approaches for fair classification: fair and robust log loss classification (Rezaei et al., 2020) and fair and robust log loss classification under covariate shift (Rezaei et al., 2021). Furthermore, this thesis leverages adversarial robust learning to develop fair and robust models for structured prediction problems, where the goal is to predict a ranking of items or outputs rather than a single label. It introduces a fair and robust learning-to-rank approach (Memarrast et al., 2023a) that achieves fairness of exposure for protected groups (such as race or gender) while maximizing utility for users; the fairness-of-exposure criterion is illustrated below. Overall, this part of the thesis explores the potential of adversarial robust learning for addressing fairness in classification and structured prediction tasks and provides new approaches to building fair and robust machine learning models.

The second part of this thesis develops the specification-robust approach to fairness in machine learning. Most fairness approaches optimize a specified trade-off between performance measures (such as accuracy, log loss, or AUC) and fairness metrics (such as demographic parity or equalized odds), which raises the question of whether the right trade-offs are being specified. To address this issue, the thesis proposes a new approach called superhuman fairness (Memarrast et al., 2023b), which recasts fair machine learning as an imitation learning task. Rather than relying on a pre-specified trade-off, superhuman fairness seeks to simultaneously outperform human decisions on multiple predictive performance and fairness measures (a schematic dominance criterion is given below). The thesis demonstrates the benefits of this approach given suboptimal human decisions, showing that it can improve both performance and fairness outcomes. Finally, we outline directions for our ongoing and future research.
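To make the minimax framing concrete, the following is a minimal schematic of the adversarial formulation for fair robust log loss classification, assuming moment-matching constraints on the adversary; the notation is illustrative rather than copied from the thesis, and the sets F, Xi and the feature map phi are placeholder symbols for this sketch:

    \min_{P \in \Delta \cap \mathcal{F}} \; \max_{Q \in \Delta \cap \Xi} \;
    \mathbb{E}_{x \sim \tilde{P},\, y \sim Q(\cdot \mid x)} \left[ -\log P(y \mid x) \right]

Here \tilde{P} is the empirical input distribution, \Xi constrains the adversary Q to match empirical feature statistics, \mathbb{E}_{\tilde{P} Q}[\phi(X, Y)] = \tilde{\mathbb{E}}[\phi(X, Y)], and \mathcal{F} imposes fairness constraints, such as demographic parity, on the predictor P. The predictor thus minimizes the worst-case log loss over distributions consistent with the data while satisfying the fairness constraints.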
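To illustrate the fairness-of-exposure criterion from the learning-to-rank work, here is a minimal Python sketch, assuming a standard logarithmic position-bias model of exposure; the function names, the toy ranking, and the group labels are hypothetical and not taken from the thesis:

import math

def position_exposure(rank):
    # Logarithmic position-bias discount: exposure decays with rank (rank 1 is best).
    return 1.0 / math.log2(rank + 1)

def average_group_exposure(ranking, groups):
    """ranking: list of item ids, best first; groups: item id -> group label."""
    totals, counts = {}, {}
    for rank, item in enumerate(ranking, start=1):
        g = groups[item]
        totals[g] = totals.get(g, 0.0) + position_exposure(rank)
        counts[g] = counts.get(g, 0) + 1
    # Average exposure received by each group's items under this ranking.
    return {g: totals[g] / counts[g] for g in totals}

# Toy example: four items from two groups, alternating in the ranking.
ranking = ["a", "b", "c", "d"]
groups = {"a": "g1", "b": "g2", "c": "g1", "d": "g2"}
print(average_group_exposure(ranking, groups))
# -> g1 receives noticeably more average exposure than g2.

Fairness-of-exposure methods constrain or penalize such disparities in average group exposure while preserving ranking utility to users; the thesis develops an adversarially robust formulation of this trade-off.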
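The superhuman fairness objective can be sketched as a dominance condition; this is a simplified rendering assuming K measures mu_1, ..., mu_K (spanning predictive performance and fairness) that are each better when larger, and it omits how the criterion is actually optimized in Memarrast et al. (2023b):

    \mu_k(h) \;\ge\; \mu_k(h_{\text{demo}}) \quad \text{for all } k \in \{1, \dots, K\}

That is, the learned predictor h is sought to weakly dominate the human demonstrations h_demo simultaneously on every measure, rather than optimizing a single pre-specified trade-off among them.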

History

Advisor

Ziebart, Brian

Chair

Ziebart, Brian

Department

Computer Science

Degree Grantor

University of Illinois at Chicago

Degree Level

  • Doctoral

Degree name

PhD, Doctor of Philosophy

Committee Member

  • Zhang, Xinhua
  • Kash, Ian
  • Asudeh, Abolfazl
  • Liu, Anqi

Submitted date

August 2023

Format

application/pdf

Language

  • en
