joint work with Fei Li
We propose and analyse an inexact augmented Lagrangian (I-AL) algorithm for solving large-scale composite, nonsmooth, constrained convex optimization problems. Each subproblem is solved inexactly with self-adaptive stopping criteria, without requiring the target accuracy to be specified a priori, as in many existing variants of I-AL methods. In addition, each inner problem is solved by an accelerated coordinate descent method, making the algorithm more scalable when the problem dimension is high.
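A minimal NumPy sketch of the outer I-AL loop for a linearly constrained problem min f(x) s.t. Ax = b. The 1/(k+1)^2 tolerance rule, the function names, and the use of plain (rather than accelerated) randomized coordinate descent for the inner solve are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def coordinate_descent_inner(grad, L_coord, x, tol, max_iters=2000, rng=None):
    """Randomized coordinate descent on the AL subproblem.

    grad(x) is the full gradient of the subproblem (used here only to check the
    stopping criterion, for simplicity); L_coord[i] bounds the Lipschitz
    constant of its i-th partial derivative.
    """
    rng = rng or np.random.default_rng(0)
    for _ in range(max_iters):
        g = grad(x)
        if np.linalg.norm(g) <= tol:        # self-adaptive stopping test
            break
        i = rng.integers(x.size)             # pick a coordinate uniformly at random
        x[i] -= g[i] / L_coord[i]            # coordinate gradient step
    return x

def inexact_al(f_grad, L_f, A, b, x0, rho=1.0, outer_iters=30):
    """Inexact augmented Lagrangian loop for min f(x) s.t. Ax = b."""
    x, lam = x0.copy(), np.zeros(A.shape[0])
    # coordinate Lipschitz constants of f(x) + (rho/2)||Ax - b||^2
    L_coord = L_f + rho * np.sum(A * A, axis=0)
    for k in range(outer_iters):
        eps_k = 1.0 / (k + 1) ** 2                                   # tightening inner tolerance
        grad = lambda z: f_grad(z) + A.T @ (lam + rho * (A @ z - b)) # subproblem gradient
        x = coordinate_descent_inner(grad, L_coord, x, tol=eps_k)
        lam = lam + rho * (A @ x - b)                                # multiplier update
    return x, lam
```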
joint work with Alexander Gasnikov, Alexander Tiurin
We consider smooth convex optimization problems with simple constraints and with inexactness in the oracle information, such as the value, partial derivatives, or directional derivatives of the objective function. We introduce a unifying framework that allows us to construct different types of accelerated randomized methods for such problems and to prove convergence rate theorems for them. We focus on accelerated random block-coordinate descent, accelerated random directional search, and accelerated random derivative-free methods.
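A small sketch of one instance from this family: random directional search driven by an inexact (finite-difference) directional-derivative oracle. The acceleration sequences, block structure, and constrained/proximal steps from the talk are omitted for brevity, and all names and constants below are illustrative assumptions.

```python
import numpy as np

def random_directional_search(f, x0, L, n_iters=5000, fd_eps=1e-7, rng=None):
    """Random directional search with an inexact directional-derivative oracle.

    At each step a random unit direction e is drawn, the directional derivative
    f'(x; e) is approximated by a forward difference (the inexact oracle), and a
    step of size 1/L is taken along e.  L is a Lipschitz constant of the gradient.
    """
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    for _ in range(n_iters):
        e = rng.standard_normal(x.size)
        e /= np.linalg.norm(e)
        dir_deriv = (f(x + fd_eps * e) - f(x)) / fd_eps   # inexact directional derivative
        x -= (1.0 / L) * dir_deriv * e                    # gradient-type step along e
    return x

# toy usage: minimize a simple quadratic with minimizer (1, 0, ..., 0)
f = lambda x: 0.5 * np.dot(x, x) - x[0]
x_approx = random_directional_search(f, x0=np.zeros(10), L=1.0)
```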
joint work with Peter Richtárik, Robert Gower, Alibek Sailanbayev, Nicolas Loizou, Egor Shulgin
We propose a general yet simple theorem describing the convergence of SGD under the arbitrary sampling paradigm. Our analysis relies on the recently introduced notion of expected smoothness and does not require a uniform bound on the variance of the stochastic gradients. By specializing our theorem to different mini-batching strategies, such as sampling with replacement and independent sampling, we derive exact expressions for the stepsize as a function of the mini-batch size.
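A minimal sketch of mini-batch SGD with a stepsize tied to an expected-smoothness constant. The constant is supplied by the caller rather than computed from the sampling scheme, and the 1/(2L) stepsize is a common choice in such analyses, not necessarily the exact expression derived in the talk; all names are illustrative.

```python
import numpy as np

def minibatch_sgd(grad_i, n, x0, expected_smoothness, batch_size=8,
                  n_iters=10_000, rng=None):
    """Mini-batch SGD with stepsize 1/(2*L), L an expected-smoothness constant.

    grad_i(x, i) returns the gradient of the i-th component function f_i, and
    expected_smoothness is the constant L associated with the chosen sampling
    and mini-batch size.
    """
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    step = 1.0 / (2.0 * expected_smoothness)
    for _ in range(n_iters):
        batch = rng.choice(n, size=batch_size, replace=True)      # sampling with replacement
        g = np.mean([grad_i(x, i) for i in batch], axis=0)        # mini-batch gradient estimate
        x -= step * g
    return x
```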