University of Illinois at Chicago

Generalized Conditional Gradient for Sparse Estimation

Journal contribution, posted on 2018-06-19, authored by Yaoliang Yu, Xinhua Zhang, and Dale Schuurmans
Sparsity is an important modeling tool that expands the applicability of convex formulations for data analysis; however, it also creates significant challenges for efficient algorithm design. In this paper we investigate the generalized conditional gradient (GCG) algorithm for solving sparse optimization problems, demonstrating that, with some enhancements, it can provide a more efficient alternative to current state-of-the-art approaches. After studying the convergence properties of GCG for general convex composite problems, we develop efficient methods for evaluating polar operators, a subroutine required in each GCG iteration. In particular, we show how the polar operator can be efficiently evaluated when learning low-rank matrices, instantiated with detailed examples on matrix completion and dictionary learning. A further improvement is achieved by interleaving GCG with fixed-rank local subspace optimization. A series of experiments on matrix completion, multi-class classification, and multi-view dictionary learning shows that the proposed method can significantly reduce the training cost of current alternatives.
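To illustrate the kind of iteration the abstract refers to, the following is a minimal sketch of the classical conditional gradient (Frank-Wolfe) method, of which GCG is a generalization. It is not the paper's algorithm: it uses a plain least-squares loss over an L1 ball, where the linear minimization oracle (the analogue of the polar operator discussed above) reduces to selecting a single signed coordinate; the function name and step-size rule are illustrative choices.

```python
import numpy as np

def conditional_gradient(A, b, tau=1.0, iters=50):
    """Minimize 0.5*||Ax - b||^2 over the L1 ball {x : ||x||_1 <= tau}
    with the classical conditional gradient (Frank-Wolfe) method.
    Illustrative sketch only; GCG as studied in the paper handles
    general convex composite objectives via a polar operator."""
    n = A.shape[1]
    x = np.zeros(n)
    for t in range(iters):
        grad = A.T @ (A @ x - b)          # gradient of the smooth loss
        # Linear minimization oracle for the L1 ball: the best vertex
        # is a single signed coordinate -- this is what keeps the
        # iterates sparse (at most one new nonzero per iteration).
        i = np.argmax(np.abs(grad))
        s = np.zeros(n)
        s[i] = -tau * np.sign(grad[i])
        gamma = 2.0 / (t + 2)             # standard diminishing step size
        x = (1 - gamma) * x + gamma * s   # convex combination stays feasible
    return x
```

Each iteration calls the oracle once and takes a convex combination, so every iterate is feasible and has at most one more nonzero than the last; this cheap per-iteration cost is what makes the family attractive for sparse estimation.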


Publisher Statement

Copyright © Journal of Machine Learning Research


Yu, Y., Zhang, X., and Schuurmans, D. Generalized conditional gradient for sparse estimation. Journal of Machine Learning Research, 18, 2017.

