
Sparse Convex Optimization on GPUs

Files in this item: Maggioni_Marco.pdf (PDF, 3 MB; no description provided)
Title: Sparse Convex Optimization on GPUs
Author(s): Maggioni, Marco
Advisor(s): Berger-Wolf, Tanya
Contributor(s): Kshemkalyani, Ajay; Khokhar, Ashfaq; Liang, Jie; Santambrogio, Marco D.
Department / Program: Computer Science
Graduate Major: Computer Science
Degree Granting Institution: University of Illinois at Chicago
Degree: PhD, Doctor of Philosophy
Genre: Doctoral
Subject(s): SpMV; GPU; Interior Point Method; Convex Optimization; Linear Programming; Integer Linear Programming; Adaptive Conjugate Gradient; Cholesky
Abstract: Convex optimization is a fundamental mathematical framework for general problem solving. The computational time required to solve problems formulated as Linear Programming, Integer Linear Programming, or Quadratic Programming has an immediate impact on countless application fields, and it is critical in determining which problems we will be able to solve in the future. From the very beginning, the research community has investigated new algorithmic and numerical techniques to speed up convex optimization. Recently, the focus has extended to parallel computer architectures and their ability to perform high-throughput computation. This dissertation continues in the same research direction, developing novel computational techniques tailored for modern GPUs. We focus on problems with sparse structure, which are arguably the most challenging to solve on throughput-oriented many-core architectures naturally well-suited for dense computations. As an original contribution, we combine the leading ideas in SpMV optimization on GPUs into an advanced sparse format known as AdELL+. We also speed up the class of optimization algorithms known as Interior Point Methods with GPU-based adaptive strategies that select between Cholesky factorization and Conjugate Gradient. Last, we design an incremental matrix data structure that provides the foundation for implementing "branch-and-cut" ILP solvers.
Issue Date: 2016-02-16
Genre: thesis
Rights Information: Copyright 2015 Marco Maggioni
Date Available in INDIGO: 2016-02-16
Date Deposited: 2015-12
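The abstract centers on sparse matrix-vector multiplication (SpMV) as the computational kernel behind GPU-accelerated Interior Point Methods. As a minimal illustration of that kernel, the sketch below implements baseline SpMV over the standard CSR format in plain Python with NumPy; the function name and the toy matrix are invented for this example, and this is the textbook CSR kernel, not the AdELL+ format contributed by the dissertation.

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product y = A @ x, with A stored in CSR form:
    values  - nonzero entries, row by row
    col_idx - column index of each nonzero
    row_ptr - start offset of each row in values (length n_rows + 1)
    """
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        start, end = row_ptr[i], row_ptr[i + 1]
        # Dot product of row i's nonzeros with the gathered entries of x
        y[i] = values[start:end] @ x[col_idx[start:end]]
    return y

# Toy 3x3 matrix: A = [[4, 0, 1], [0, 3, 0], [2, 0, 5]]
values  = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 1.0, 1.0])
print(spmv_csr(values, col_idx, row_ptr, x))  # → [5. 3. 7.]
```

On a GPU, the per-row loop becomes the parallel unit of work, and formats such as AdELL+ reorganize `values`/`col_idx` so that threads in a warp access memory contiguously even when row lengths vary.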




