University of Illinois Chicago

Reward-Driven Group Fairness in Machine Learning

Thesis
Posted on 2024-05-01, authored by Jack Dwight Blandin
This thesis addresses fundamental challenges and underlying assumptions associated with group fairness definitions in machine learning, aiming to develop more effective strategies both for defining group fairness and for identifying fair policies. To start, we identify four problematic assumptions behind existing group fairness definitions: that fair predictions guarantee fair outcomes, that predictions are independent of target variables, that predicting an unobserved target is the objective, and that decisions for one individual do not impact others. To address these issues, we define a utility-based group fairness framework grounded in the principles of individual utility and counterfactual outcomes. The framework allows standard group fairness definitions to extend naturally to a variety of ML environments, including classification, clustering, and reinforcement learning (RL). We evaluate our approach on real-world public datasets, including the German credit dataset and 2020 US Census Bureau CVAP data.

We then focus on achieving group fairness in RL, extending the exploration beyond classification. We emphasize the distinction between decision-maker utility and individual utility, the latter of which is often ignored by existing policy-learning techniques. We also explore solutions that improve on the concept of "no-harm", so that policies do not inadvertently harm people in their attempt to be "fair". To achieve these goals, we propose a multi-objective reward function that directly optimizes decision-maker utility, individual utility improvement, and individual utility equality. Using a simulated sequential loan-application environment modeled as a Markov decision process (MDP), we benchmark our approach against state-of-the-art methods, underscoring its effectiveness in decision-making applications.

Finally, we address the challenge of defining fairness in new domains where a fairness definition has not yet been agreed upon. Our research introduces a novel method for learning and applying group fairness preferences across different classification domains without the need for manual fine-tuning. Utilizing concepts from inverse reinforcement learning (IRL), our approach enables the extraction and application of fairness preferences from human experts or established algorithms. We propose the first technique that uses IRL to recover and adapt group fairness preferences to new domains, offering a low-touch solution for implementing fair classifiers in settings where expert-established fairness tradeoffs are not yet defined.
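As a minimal illustrative sketch of the multi-objective reward idea described above (not the thesis's actual formulation): the reward can be thought of as a weighted combination of decision-maker utility, mean individual utility improvement, and an equality term over individual utilities. The weights, the variance-based equality term, and the function name below are hypothetical.

```python
import numpy as np

def multi_objective_reward(dm_utility, utilities_before, utilities_after,
                           w_dm=1.0, w_improve=1.0, w_equal=1.0):
    """Hypothetical sketch of a multi-objective fairness reward.

    Combines (1) the decision-maker's utility, (2) the average
    improvement in individual utility, and (3) a penalty on how
    unequal the resulting individual utilities are. The weights and
    the use of variance as the equality term are illustrative
    assumptions, not the thesis's exact reward function.
    """
    utilities_before = np.asarray(utilities_before, dtype=float)
    utilities_after = np.asarray(utilities_after, dtype=float)

    improvement = float(np.mean(utilities_after - utilities_before))
    # Lower variance across individuals -> more equal utilities.
    equality = -float(np.var(utilities_after))

    return w_dm * dm_utility + w_improve * improvement + w_equal * equality

# Example: a lending decision that helps applicants on average
# while keeping their resulting utilities close together.
r = multi_objective_reward(dm_utility=0.4,
                           utilities_before=[0.2, 0.5, 0.3],
                           utilities_after=[0.4, 0.6, 0.5])
print(r)
```

In an MDP-based environment such as the sequential loan setting mentioned above, a scalar reward of this shape could be returned at each step so that a standard RL policy-learning method optimizes all three objectives jointly.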

History

Advisor

Ian Kash

Department

Computer Science

Degree Grantor

University of Illinois Chicago

Degree Level

  • Doctoral

Degree name

Doctor of Philosophy

Committee Member

  • Chris Kanich
  • Mesrob Ohannessian
  • Brian Ziebart
  • Andrew Perrault

File format

application/pdf

Language

  • en
