joint work with Sien Deng
We provide various level-set-based error bounds to study necessary and sufficient conditions for the linear convergence of the variable Bregman proximal gradient (VBPG) method for a broad class of nonsmooth and nonconvex optimization problems. We also offer a fresh perspective that allows us to explore the connections among many known sufficient conditions for the linear convergence of various first-order methods.
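As a minimal illustration of the proximal gradient template underlying VBPG (not the VBPG method itself): when the Bregman kernel is the squared Euclidean norm, the Bregman proximal step reduces to the classical proximal gradient update, sketched below on a LASSO-type problem. The problem data, step size, and function names are illustrative assumptions, not from the talk.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal map of tau * ||x||_1.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_grad_step(x, grad_f, step, lam):
    # x+ = argmin_y <grad f(x), y> + lam*||y||_1 + (1/(2*step))*||y - x||^2,
    # i.e. the Bregman proximal step with the squared Euclidean kernel.
    return soft_threshold(x - step * grad_f(x), step * lam)

# Illustrative composite objective: f(x) = 0.5*||Ax - b||^2 + lam*||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam = 0.1
grad_f = lambda x: A.T @ (A @ x - b)
step = 1.0 / np.linalg.norm(A.T @ A, 2)  # 1/L, L = Lipschitz constant of grad f

x = np.zeros(5)
for _ in range(200):
    x = prox_grad_step(x, grad_f, step, lam)
print(np.round(x, 3))
```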
joint work with Aryan Mokhtari, Asuman Ozdaglar
In this talk, we consider solving convex-concave saddle point problems. We focus on two variants of gradient descent-ascent algorithms, the extra-gradient (EG) and optimistic gradient descent-ascent (OGDA) methods, and show that they admit a unified analysis as approximations of the classical proximal point method for solving saddle point problems. This viewpoint enables us to generalize OGDA (in terms of parameters) and to obtain new convergence rate results for these algorithms in both the bilinear case and the strongly convex-concave setting.
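A minimal numerical sketch of OGDA on the bilinear problem min_x max_y x^T A y, whose saddle point is the origin and on which plain gradient descent-ascent is known to diverge. The data, step size, and iteration count are illustrative assumptions, not the settings analyzed in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))          # bilinear coupling matrix
eta = 1.0 / (4 * np.linalg.norm(A, 2))   # conservative step size

x, y = rng.standard_normal(3), rng.standard_normal(3)
x_prev, y_prev = x.copy(), y.copy()

for _ in range(2000):
    # OGDA: a gradient step plus a correction using the previous gradient,
    # which approximates the proximal point update (the talk's viewpoint).
    x_new = x - 2 * eta * (A @ y) + eta * (A @ y_prev)
    y_new = y + 2 * eta * (A.T @ x) - eta * (A.T @ x_prev)
    x_prev, y_prev, x, y = x, y, x_new, y_new

print(np.linalg.norm(x), np.linalg.norm(y))  # both tend toward 0
```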
We study systems in which online learning algorithms, such as online gradient descent and multiplicative weights, compete against each other in zero-sum games. We prove that these systems exhibit Poincaré recurrence and can be interpreted as Hamiltonian dynamics. We discuss the implications of these results for discrete-time dynamics and for designing new algorithms with provable guarantees beyond convex-concave games, and we present some open questions.
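A small simulation sketch, assuming multiplicative weights with a constant step size in matching pennies (a zero-sum game with Nash equilibrium at the uniform strategies). The trajectories approximately cycle around the equilibrium rather than converging; in discrete time with a fixed step they also drift slowly outward relative to the recurrent continuous-time flow, one of the discrete-time implications discussed. The payoff matrix and step size are illustrative.

```python
import numpy as np

A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # row player's payoffs in matching pennies
eta = 0.05

x = np.array([0.6, 0.4])  # row player's mixed strategy (maximizer)
y = np.array([0.3, 0.7])  # column player's mixed strategy (minimizer)

for t in range(2000):
    gx = A @ y          # row player's payoff vector
    gy = -(A.T @ x)     # column player's payoff vector (minimizing)
    # Simultaneous multiplicative weights updates, then renormalize.
    x = x * np.exp(eta * gx); x /= x.sum()
    y = y * np.exp(eta * gy); y /= y.sum()
    if t % 500 == 0:
        print(t, round(x[0], 3), round(y[0], 3))  # oscillates around 0.5, 0.5
```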