About the Research Project
In particular, recent randomized algorithms such as stochastic gradient descent (SGD), stochastic variance-reduced gradient (SVRG), and stochastic average gradient (SAG) were motivated by the need to solve unconstrained optimization problems whose objective is the sum of a large number of terms, making it expensive to compute the gradient of the entire objective. Such problems are typical in machine learning, where one tries to fit the best explanatory model to a huge number of data points.
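To make the finite-sum setting concrete, the following is a minimal sketch (in Python with NumPy) of SGD on a least-squares objective with many terms; the problem sizes, step-size schedule, and data are illustrative assumptions, not taken from the proposal. The point is that each iteration computes the gradient of only one sampled term, rather than the full sum.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 10_000, 20                      # many data points, few parameters
    A = rng.standard_normal((n, d))
    x_true = rng.standard_normal(d)
    b = A @ x_true + 0.01 * rng.standard_normal(n)

    # Objective: f(x) = (1/n) * sum_i (A[i] @ x - b[i])**2
    x = np.zeros(d)
    for t in range(1, 50_001):
        i = rng.integers(n)                # sample a single term uniformly
        g = 2.0 * (A[i] @ x - b[i]) * A[i] # gradient of the i-th term only
        x -= (1.0 / t**0.5) * g            # diminishing step size

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

Each step costs O(d) instead of the O(nd) required for a full gradient, which is exactly the economy that motivates the randomized methods named above.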
In this proposal, we aim to apply the same philosophy to optimization problems with a large number of constraints. This class of problems is extremely prevalent (e.g., risk-aware optimization, robust optimization, and semi-infinite programming) and deserves serious consideration. Our specific aim is two-fold: to create (i) new randomized augmented Lagrangian methods and (ii) new randomized cutting plane methods. Both approaches will yield nearly optimal and nearly feasible solutions with high probability at only modest computational effort, in contrast to deterministic methods, which do not scale.
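As a purely illustrative sketch of the constraint-sampling philosophy, and not the proposal's actual augmented Lagrangian or cutting plane methods, consider a random-projection scheme that touches one of m halfspace constraints per iteration while taking diminishing steps on the objective. The scheme, the penalty-free projection step, the step sizes, and the synthetic data are all assumptions introduced here for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    m, d = 10_000, 10                     # many halfspace constraints a_i @ x <= b_i
    A = rng.standard_normal((m, d))
    A /= np.linalg.norm(A, axis=1, keepdims=True)  # unit rows simplify projection
    b = A @ rng.standard_normal(d) + 0.1  # feasible set nonempty by construction
    z = 3.0 * rng.standard_normal(d)      # objective: min 0.5 * ||x - z||**2

    x = np.zeros(d)
    for t in range(1, 200_001):
        i = rng.integers(m)               # sample a single constraint uniformly
        viol = A[i] @ x - b[i]
        if viol > 0.0:                    # project onto the sampled halfspace
            x -= viol * A[i]              # (exact since A[i] has unit norm)
        x -= (1.0 / t) * (x - z)          # diminishing objective-gradient step

    print("distance to target:", np.linalg.norm(x - z))
    print("max violation over all m constraints:", (A @ x - b).max())

The per-iteration cost depends on d but not on m, mirroring the per-term savings of SGD. The proposed methods would go further: randomized augmented Lagrangian methods would presumably maintain multiplier updates for sampled constraints, and randomized cutting plane methods would add sampled violated constraints to a working model, but the sketch already shows where the scalability comes from.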