About the Project
Convex composite conic optimization problems have found widespread applications in domains such as operations research (e.g., relaxations of combinatorial and polynomial optimization problems), machine learning (e.g., classification, clustering, matrix completion), statistics (e.g., regression, covariance estimation, graphical models, compressed sensing), and engineering (e.g., optimal power flow, signal/image processing, sensor network localization, limit analysis). In particular, semidefinite programming (SDP) and its generalizations have been widely used to model problems across these areas.
Previously, the second author (jointly with Michael Todd) designed highly efficient primal-dual interior-point methods, implemented in his popular software package SDPT3, for solving medium-scale SDP problems. Very often, however, one must solve large-scale SDPs or even more general large-scale conic optimization problems. In the past few years, the authors have been working on the design, analysis, and implementation of efficient solvers for various classes of large-scale conic optimization problems (such as linear SDP, doubly nonnegative SDP, convex quadratic SDP, nuclear norm minimization, and L1-regularized convex programming), with algorithms based on the powerful framework of inexact proximal-point and augmented Lagrangian methods.
Our current work focuses on two broad topics. The first is the development of efficient and robust software packages for solving various classes of large-scale conic optimization problems, such as linear SDP and doubly nonnegative SDP, based on a two-phase framework: in the first phase, an efficient symmetric Gauss-Seidel ADMM first-order method generates a reasonably good starting point, which is used to warm-start the second-phase algorithm (a semismooth Newton-CG based augmented Lagrangian or proximal-point algorithm) in which second-order information is exploited. The second is the design, analysis, and implementation of semismooth Newton based augmented Lagrangian or proximal-point algorithms for solving huge-scale convex quadratic/linear optimization problems in machine learning and statistics, in particular L1-regularized structured regression and convex programming problems such as support vector machines and distance weighted discrimination.
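To make the two-phase idea concrete, here is a minimal, self-contained Python sketch on a small Lasso problem, min 0.5||Ax - b||^2 + lam*||x||_1. It is an illustration only, not the project's sGS-ADMM or semismooth Newton-CG solvers: Phase I uses plain proximal gradient as a stand-in for the first-order phase, Phase II applies a semismooth Newton method to the proximal-gradient residual, and all function names and the random test data are hypothetical.

```python
import numpy as np

def soft_threshold(y, tau):
    # Proximal map of tau*||.||_1 (componentwise soft-thresholding).
    return np.sign(y) * np.maximum(np.abs(y) - tau, 0.0)

def phase1_proxgrad(A, b, lam, t, iters=200):
    # Phase I stand-in: plain proximal gradient (ISTA) producing a
    # cheap warm start (the project's framework uses sGS-ADMM here).
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - t * A.T @ (A @ x - b), t * lam)
    return x

def phase2_ssn(A, b, lam, t, x, iters=20, tol=1e-10):
    # Phase II stand-in: semismooth Newton on the prox-gradient residual
    # F(x) = x - prox_{t*lam*||.||_1}(x - t*A'(Ax - b)); no line search,
    # so it relies on the Phase I warm start for convergence.
    n = A.shape[1]
    AtA = A.T @ A
    for _ in range(iters):
        y = x - t * A.T @ (A @ x - b)
        F = x - soft_threshold(y, t * lam)
        if np.linalg.norm(F) < tol:
            break
        # One element of the Clarke generalized Jacobian of F is
        # I - D (I - t*AtA), with D diagonal and D_ii = 1 iff |y_i| > t*lam.
        d = (np.abs(y) > t * lam).astype(float)
        J = np.eye(n) - d[:, None] * (np.eye(n) - t * AtA)
        x = x + np.linalg.solve(J, -F)  # exact semismooth Newton step
    return x

# Small synthetic instance, for illustration only.
rng = np.random.default_rng(0)
m, n = 200, 50
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = 0.1 * np.linalg.norm(A.T @ b, np.inf)
t = 1.0 / np.linalg.norm(A, 2) ** 2  # step size 1/Lipschitz(grad)

x1 = phase1_proxgrad(A, b, lam, t)   # Phase I: first-order warm start
x2 = phase2_ssn(A, b, lam, t, x1)    # Phase II: second-order refinement
res = x2 - soft_threshold(x2 - t * A.T @ (A @ x2 - b), t * lam)
print("optimality residual after Phase II:", np.linalg.norm(res))
```

The division of labor this sketch illustrates is the point of the framework: the cheap first-order phase does the bulk of the work of locating an approximately correct solution, after which a few second-order steps suffice to reach high accuracy.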
Publications and Preprints

- A highly efficient semismooth Newton augmented Lagrangian method for solving Lasso problems (2016). X.D. Li, D.F. Sun, and K.C. Toh.
- Fast algorithms for large scale generalized distance weighted discrimination (2016). X.Y. Lam, J.S. Marron, D.F. Sun, and K.C. Toh.
- QSDPNAL: A two-phase proximal augmented Lagrangian method for convex quadratic semidefinite programming (2015). X.D. Li, D.F. Sun, and K.C. Toh.
- Linear rate convergence of the alternating direction method of multipliers for convex composite programming (Aug 2015; revised in November 2016 from the first part). Deren Han, Defeng Sun, and Liwei Zhang.
- A unified formulation and fast accelerated proximal gradient method for classification, preprint (2016). N. Ito, A. Takeda, and K.C. Toh.
- Max-norm optimization for robust matrix recovery, preprint at Optimization Online (2016). Ethan Fang, H. Liu, K.C. Toh, and W.-X. Zhou.