University of Illinois at Chicago
LIU-DISSERTATION-2018.pdf (2.69 MB)

Robust Prediction Methods for Covariate Shift and Active Learning

thesis
Posted on 2018-11-27, authored by Anqi Liu
In real-world machine learning applications, it is often unrealistic to assume that the training data distribution matches the testing data distribution. A common relaxation is to assume that the distribution shift occurs only in the input variables (covariates), while the conditional distribution of the output given the covariates remains the same. This is known as the covariate shift setting. Beyond the many examples of covariate shift in supervised learning tasks, a typical scenario is the sampling bias problem in pool-based active learning: the learner selects the labeled set itself, introducing an input distribution that differs from that of the unlabeled pool at each step of learning and prediction. In this thesis, we propose a general framework for robust prediction under covariate shift. Rather than minimizing a reweighted empirical loss on training data, we directly optimize the expected test loss with a minimax approach. The resulting predictor makes more randomized predictions on test data that lack training distribution support, and therefore avoids the losses that other predictors can incur through overly optimistic extrapolation. The framework supports minimization of different loss functions and incorporates different feature functions and feature generalization assumptions. We discuss how the framework reduces to specific forms and the corresponding approaches for estimating their parameters. Moreover, we investigate active learning using robust prediction, where each active learning step is cast as a special case of the robust covariate shift problem. We conduct experiments on synthetically biased benchmark datasets and natural covariate shift datasets to demonstrate the performance of robust prediction on real data. Additionally, we evaluate pool-based active learning with robust prediction on real benchmark datasets, demonstrating a number of benefits over existing methods.
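The covariate shift setting described above, and the importance-weighted empirical loss baseline that the thesis contrasts with its minimax approach, can be sketched in a few lines. This is a minimal illustrative example only, not the thesis's robust method: the Gaussian sampling distributions, the logistic model, and all variable names here are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Covariate shift: train and test inputs come from different
# distributions, but p(y | x) is identical under both.
# Illustrative choice: x_train ~ N(0, 1), x_test ~ N(2, 1),
# and y = 1 with probability sigmoid(x) in both cases.
n = 5000
x_train = rng.normal(0.0, 1.0, n)
y_train = (rng.random(n) < sigmoid(x_train)).astype(float)

# Density ratio w(x) = p_test(x) / p_train(x), known exactly here
# because we chose both sampling distributions ourselves.
w = gauss_pdf(x_train, 2.0, 1.0) / gauss_pdf(x_train, 0.0, 1.0)

def weighted_logloss(theta, x, y, weights):
    """Weighted average logistic loss for the model p(y=1|x) = sigmoid(theta * x)."""
    p = sigmoid(theta * x)
    return np.average(-(y * np.log(p) + (1 - y) * np.log(1 - p)), weights=weights)

# Importance-weighted ERM: reweight each training loss term by w(x)
# so the empirical objective targets the test distribution.
thetas = np.linspace(0.1, 3.0, 60)
weighted_losses = [weighted_logloss(t, x_train, y_train, w) for t in thetas]
theta_iw = thetas[int(np.argmin(weighted_losses))]
```

Note that the weights grow rapidly where test density exceeds training density, which makes this baseline unstable when training support is poor; the thesis's robust predictor instead hedges by producing more randomized predictions in exactly those under-supported regions.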

History

Advisor

Ziebart, Brian

Chair

Ziebart, Brian

Department

Computer Science

Degree Grantor

University of Illinois at Chicago

Degree Level

  • Doctoral

Committee Member

Liu, Bing; Yu, Philip; Reyzin, Lev; Dudik, Miro

Submitted date

August 2018

Issue date

2018-08-02
