joint work with Fatma Kilinc-Karzan
Many optimization problems in practice must be solved under uncertainty, i.e., with only noisy estimates of the input parameters. We present a unified primal-dual framework for deriving first-order algorithms for two paradigms of optimization under uncertainty: robust optimization (RO), where constraint parameters are uncertain, and joint estimation-optimization (JEO), where objective parameters are uncertain. The key step is to study a related ‘dynamic saddle point’ problem, and we show how both RO and JEO are covered by our framework by choosing the dynamic parameters appropriately.
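As a toy illustration of the dynamic saddle-point idea (not the authors' actual algorithm), the sketch below runs plain gradient descent-ascent on min_x max_y L_t(x, y), where the parameter c_t is only a converging estimate of the true value c, standing in for the uncertain inputs in RO and JEO. The specific L_t, the estimate sequence, and the step size are all assumptions.

```python
# Illustrative gradient descent-ascent (GDA) on a dynamic saddle-point
# problem: at step t we only see an estimate c_t of the true parameter c.
# The toy objective L_t(x, y) = 0.5*(x - c_t)**2 + y*x - 0.5*y**2 and all
# constants here are assumptions for illustration.

def dynamic_gda(c_seq, eta=0.05, x0=0.0, y0=0.0):
    """One primal-dual GDA step per incoming parameter estimate c_t."""
    x, y = x0, y0
    for c_t in c_seq:
        gx = (x - c_t) + y      # d/dx of L_t
        gy = x - y              # d/dy of L_t
        x -= eta * gx           # primal descent step
        y += eta * gy           # dual ascent step
    return x, y

# Noisy-free but drifting estimates c_t -> c = 1.
cs = [1.0 + 0.5 / (t + 1) for t in range(4000)]
x, y = dynamic_gda(cs)
# The limit problem's saddle point is x* = y* = 0.5.
```

Even though each step uses a different (inexact) parameter, the iterates track the saddle point of the limiting problem, which is the flavor of guarantee the dynamic framework formalizes.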
joint work with Digvijay Boob, Qi Deng
We present novel proximal-point methods for solving a class of nonconvex optimization problems with nonconvex constraints and establish their computational complexity for both deterministic and stochastic problems. We will also illustrate the effectiveness of these methods on application problems in machine learning.
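The core proximal-point mechanism can be sketched on an unconstrained smooth nonconvex toy problem (the abstract's method additionally handles nonconvex constraints, which this sketch omits): each outer step adds a strongly convex quadratic centered at the current iterate and approximately minimizes the resulting subproblem. The objective f, the parameter rho, and the step sizes are assumptions.

```python
# Minimal inexact proximal-point sketch for a smooth nonconvex f:
# outer step k minimizes f(x) + (rho/2)*(x - x_k)**2, which is convex
# near the trajectory, via a few inner gradient steps.
# f(x) = x**4 - x**2 is an assumed toy objective.

def f_grad(x):
    # gradient of f(x) = x**4 - x**2
    return 4 * x**3 - 2 * x

def prox_point(x0=1.0, rho=2.0, outer=50, inner=20, eta=0.05):
    x = x0
    for _ in range(outer):
        z, center = x, x
        for _ in range(inner):
            # gradient step on the proximal subproblem
            z -= eta * (f_grad(z) + rho * (z - center))
        x = z  # new prox-center
    return x

x_star = prox_point()
# Converges to a stationary point of f; here x* = 1/sqrt(2) ~ 0.7071.
```

The added quadratic is what makes each subproblem tractable despite the nonconvexity of f; complexity analyses of such schemes count the total number of inner (gradient or stochastic gradient) steps needed to reach an approximate stationary point.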
We discuss a new class of primal-dual methods for solving nonsmooth convex optimization problems, ranging from unconstrained to constrained settings. The proposed methods achieve optimal convergence rates, and faster rates on the last iterate rather than on an averaged sequence, while essentially maintaining the same per-iteration complexity as existing methods. Our approach relies on two different frameworks: quadratic penalty and augmented Lagrangian functions. In addition, it can be combined with other techniques, such as alternating, linearization, acceleration, and adaptive strategies, in a single algorithm.
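Of the two frameworks the abstract names, the augmented Lagrangian one can be sketched on a small linearly constrained toy problem (the specific problem, rho, and step sizes are assumptions, and this basic version uses no acceleration or adaptivity): inner gradient steps approximately minimize the augmented Lagrangian in the primal variable, followed by a multiplier update.

```python
# Hedged augmented-Lagrangian sketch for min x1^2 + x2^2 s.t. x1 + x2 = 1.
# L_rho(x, y) = f(x) + y*(x1 + x2 - 1) + (rho/2)*(x1 + x2 - 1)**2.
# Problem data, rho, and step sizes are assumed for illustration.

def aug_lagrangian(rho=1.0, outer=30, inner=100, eta=0.1):
    x1 = x2 = 0.0
    y = 0.0  # multiplier for the constraint x1 + x2 - 1 = 0
    for _ in range(outer):
        for _ in range(inner):
            r = x1 + x2 - 1.0           # constraint residual
            g1 = 2 * x1 + y + rho * r   # d/dx1 of L_rho
            g2 = 2 * x2 + y + rho * r   # d/dx2 of L_rho
            x1 -= eta * g1              # primal gradient steps
            x2 -= eta * g2
        y += rho * (x1 + x2 - 1.0)      # dual (multiplier) update
    return x1, x2, y

x1, x2, y = aug_lagrangian()
# Optimum is x = (0.5, 0.5) with multiplier y = -1.
```

The quadratic-penalty framework is the special case that drops y and instead drives rho upward; the augmented Lagrangian's dual update is what lets rho stay bounded while the constraint residual is driven to zero.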