University of Illinois Chicago

Generalized Conditional Gradient for Sparse Estimation

Download (868.2 kB)
journal contribution
posted on 2018-06-19, authored by Yaoliang Yu, Xinhua Zhang, and Dale Schuurmans
Sparsity is an important modeling tool that expands the applicability of convex formulations for data analysis; however, it also creates significant challenges for efficient algorithm design. In this paper we investigate the generalized conditional gradient (GCG) algorithm for solving sparse optimization problems, demonstrating that, with some enhancements, it can provide a more efficient alternative to current state-of-the-art approaches. After studying the convergence properties of GCG for general convex composite problems, we develop efficient methods for evaluating polar operators, a subroutine required in each GCG iteration. In particular, we show how the polar operator can be evaluated efficiently when learning low-rank matrices, instantiated with detailed examples on matrix completion and dictionary learning. A further improvement is achieved by interleaving GCG with fixed-rank local subspace optimization. A series of experiments on matrix completion, multi-class classification, and multi-view dictionary learning shows that the proposed method can significantly reduce the training cost of current alternatives.
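The full paper is not reproduced on this page, but the abstract describes the core GCG iteration in enough detail to sketch it. Below is a minimal, illustrative Python sketch of a GCG-style update for trace-norm-regularized matrix completion, one of the instantiations mentioned above: the polar operator reduces to the top singular pair of the negative gradient (computed here by power iteration), and each iteration adds a single rank-one atom with a closed-form weight for the quadratic loss. All function and parameter names (polar_operator, gcg_matrix_completion, lam, and so on) are assumptions made for illustration, not the authors' code, and the fixed-rank local subspace optimization that the paper interleaves with GCG is omitted here.

import numpy as np
from numpy.linalg import norm


def polar_operator(G, n_iter=50):
    """Polar operator for the trace norm: the rank-one atom u v^T maximizing
    <-G, u v^T>, i.e. the top singular pair of -G, found by power iteration."""
    u = np.random.randn(G.shape[0])
    u /= norm(u)
    v = np.zeros(G.shape[1])
    for _ in range(n_iter):
        v = -G.T @ u
        v /= norm(v) + 1e-12
        u = -G @ v
        u /= norm(u) + 1e-12
    return np.outer(u, v)


def gcg_matrix_completion(Y, mask, lam=1.0, iters=200):
    """GCG-style iterations for  min_X  0.5*||mask*(X - Y)||_F^2 + lam*||X||_tr,
    where mask is a 0/1 matrix of observed entries (a sketch, not the paper's code)."""
    X = np.zeros_like(Y)
    for t in range(iters):
        grad = mask * (X - Y)            # gradient of the smooth loss term
        A = polar_operator(grad)         # new rank-one atom from the polar operator
        theta = 2.0 / (t + 2.0)          # standard open-loop step size
        Xs = (1.0 - theta) * X           # shrink the current iterate
        # Closed-form weight for the new atom under the quadratic loss:
        # minimize over c >= 0 of 0.5*||mask*(Xs + c*A - Y)||_F^2 + lam*c.
        MA = mask * A
        c = max(0.0, (-np.sum(mask * (Xs - Y) * A) - lam) / (np.sum(MA * MA) + 1e-12))
        X = Xs + c * A
    return X


if __name__ == "__main__":
    # Hypothetical usage: recover a rank-5 matrix from 30% of its entries.
    rng = np.random.default_rng(0)
    Y_true = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))
    mask = (rng.random(Y_true.shape) < 0.3).astype(float)
    X_hat = gcg_matrix_completion(mask * Y_true, mask, lam=2.0)
    rmse = norm(mask * (X_hat - Y_true)) / np.sqrt(mask.sum())
    print(f"RMSE on observed entries: {rmse:.4f}")

Each iteration costs only a leading singular pair rather than a full SVD, which is what makes the polar-operator view attractive for large low-rank problems; the scaling rule above is one simple choice, whereas the paper studies more refined enhancements.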

History

Publisher Statement

Copyright © Journal of Machine Learning Research

Citation

Yu, Y., Zhang, X., and Schuurmans, D. Generalized conditional gradient for sparse estimation. Journal of Machine Learning Research, 18, 2017.

Publisher

Journal of Machine Learning Research

Language

  • English (en_US)

ISSN

1532-4435

Issue date

2017-01-01
