University of Illinois at Chicago

File(s) under embargo: 1 year(s), 8 month(s), 15 day(s) until file(s) become available

Unbiased and Fair Learning-To-Rank Systems

thesis
posted on 2023-12-01, 00:00 authored by Zohreh Ovaisi
Ranking algorithms used in recommender systems are mediators that connect users on the demand side to content providers on the supply side. With the expansion of online marketplaces, these algorithms are critical not only to users seeking to find desirable items (e.g., rentals, movies, job applicants) but also to content providers seeking enough visibility and attention from users. To provide users with a desirable ranking, learning-to-rank (LTR) systems must predict user-item relevance scores accurately and rank items accordingly. To this end, such systems often utilize user-item interaction data (e.g., clicks) as a relevance signal. However, this data suffers from a number of biases, including position and selection bias. Position bias arises because higher-ranked results are more likely to receive interaction, while selection bias arises because users are exposed to a truncated list of results, leaving some items with zero chance of being observed even if they are relevant. Unless accounted for, these biases can lead to biased relevance prediction that hurts the user experience. At the same time, ranking items based on relevance scores alone may hurt content providers' experience through rich-get-richer dynamics: less relevant suppliers in the long tail of the ranking struggle to attract user attention, which can lead to unfair rankings, an effect sometimes amplified by position and selection bias.

I study the problem of selection bias and position bias in learning-to-rank systems and propose several methods to overcome these biases for unbiased relevance prediction. In my first work, I propose Heckman-rank, an algorithm for addressing selection bias in the context of learning-to-rank systems, together with two bias-correcting ensembles that combine Heckman-rank with existing position-bias-correcting algorithms and thus address both selection and position bias.
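Heckman-rank builds on Heckman's classical two-step sample-selection correction from statistics. As a rough illustration of that underlying idea only (not the thesis algorithm: the function name, the least-squares stage-2 fit, and the assumption that a stage-1 probit selection index has already been fitted elsewhere are all illustrative), the second stage can be sketched as:

```python
import numpy as np
from scipy.stats import norm

def heckman_stage2(X, y, selection_index):
    """Stage 2 of a Heckman-style correction (illustrative sketch).

    `selection_index` is the fitted probit index g = X @ gamma from a
    stage-1 model of which items get selected/observed. The inverse
    Mills ratio phi(g) / Phi(g) is appended as an extra regressor so
    the outcome model corrects for non-random selection.
    """
    imr = norm.pdf(selection_index) / norm.cdf(selection_index)
    X_aug = np.column_stack([X, imr])
    # ordinary least squares on the augmented design matrix
    beta, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
    return beta  # last coefficient multiplies the bias-correction term
```

A nonzero coefficient on the inverse Mills ratio term is what signals that selection and the outcome are correlated, i.e., that an uncorrected model would have been biased.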
The position-bias correction component of the ensemble method relies on the Inverse Propensity Weighting (IPW) technique, where the propensity is the probability of an item being observed given its position. As proper propensity estimation is infeasible in many real-world scenarios and sometimes requires intervention, I also propose a two-stage bivariate method that jointly corrects for both position and selection bias without relying on propensity knowledge. Though these proposed methods do not necessitate any intervention that could impact the user experience, they come with a limitation: they draw inspiration from the Heckman sample selection bias method, and the theoretical guarantees for inference outlined in the Heckman method constrain their outcome model to a pointwise linear model. To overcome this issue, in my second work, I propose two other novel counterfactual approaches that mitigate selection bias in the context of pairwise, non-linear, state-of-the-art lambda-based LTR methods. The first adapts the IPW technique, with pairwise propensities estimated solely from item features, thus avoiding interventions; the estimated pairwise propensities are incorporated into LambdaMART, a well-known nonlinear pairwise LTR model, to account for selection bias. While previous work on IPW for selection bias correction assumes probabilistic selection bias, I assume the harder case of deterministic selection bias, in which there is no relevance information about unobserved items. The second adapts the Heckman Probit correction method, another established counterfactual approach that mitigates selection bias by accounting for the relationship between the selection process and the outcome of interest. Unlike my previous work, which considers Heckman's correction under a linear model assumption, this approach allows nonlinearity in the outcome.
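The IPW idea can be sketched in a few lines: each click is re-weighted by the inverse of its position's observation propensity, so interactions at rarely observed positions count more. This is a minimal, hypothetical pointwise sketch; the function name, the squared-error loss, and the logarithmic propensity curve are illustrative assumptions, not the thesis's pairwise model:

```python
import numpy as np

def ipw_loss(clicks, positions, relevance_scores, propensities):
    """Toy inverse-propensity-weighted pointwise loss (illustrative).

    Each clicked item is up-weighted by 1 / propensity(position), so
    clicks at positions users rarely examine contribute more to the
    loss, de-biasing the "click implies relevant" signal.
    """
    weights = clicks / propensities[positions]  # 1/p weighting for clicked items
    # squared error against a clicked-implies-relevant target, IPW-corrected
    return np.mean(weights * (1.0 - relevance_scores) ** 2)

# assumed example: observation propensity decays with rank as 1 / log2(rank + 1)
propensities = 1.0 / np.log2(np.arange(1, 11) + 1)
clicks = np.array([1, 0, 1, 0, 0, 0, 0, 0, 0, 0], dtype=float)
positions = np.arange(10)
scores = np.full(10, 0.5)
loss = ipw_loss(clicks, positions, scores, propensities)
```

Note the click at position 2 contributes twice the weight of the click at position 0 under this propensity curve, which is exactly the de-biasing effect IPW is after.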
Both approaches enhance the model's robustness to selection bias by assigning higher weights to infrequent item pairs in the top-k results. In my third work, I propose a post-processing framework for trading off fairness and utility under position, selection, and trust bias. My proposed methods outperform several state-of-the-art unbiased and fair ranking algorithms. Lastly, I propose a library that evaluates recommender system robustness under subpopulation shifts, distributional shift, data transformations, and attacks.
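As a rough illustration of the flavor of such post-processing re-ranking (this sketch, including the linear utility/fairness mixing and the exposure-deficit heuristic, is an illustrative assumption, not the thesis framework):

```python
import numpy as np

def rerank_with_fairness(relevance, past_exposure, alpha=0.5):
    """Toy post-processing re-ranker trading off utility and provider
    fairness (illustrative sketch). Items whose providers have received
    little exposure so far get a boost; alpha in [0, 1] controls the
    utility/fairness trade-off (1.0 = pure relevance ranking).
    """
    # normalize both signals to [0, 1] so alpha is interpretable
    rel = (relevance - relevance.min()) / (np.ptp(relevance) + 1e-12)
    deficit = 1.0 - past_exposure / (past_exposure.max() + 1e-12)
    score = alpha * rel + (1.0 - alpha) * deficit
    return np.argsort(-score)  # item indices, best first
```

Sweeping alpha traces out a fairness-utility frontier, which is the kind of trade-off a post-processing framework lets the platform choose along.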

History

Advisor

Elena Zheleva

Department

Computer Science

Degree Grantor

University of Illinois Chicago

Degree Level

  • Doctoral

Degree name

PhD, Doctor of Philosophy

Committee Member

Mesrob I. Ohannessian, Kathryn Vasilaky, Cornelia Caragea, Brian Ziebart

Thesis type

application/pdf

Language

  • en
