joint work with Krishnakumar Balasubramanian
We analyze zeroth-order stochastic approximation algorithms for nonconvex optimization, with a focus on constrained optimization, high-dimensional settings, and saddle-point avoidance. In particular, we generalize the conditional stochastic gradient method to the zeroth-order setting and highlight an implicit regularization phenomenon in which the stochastic gradient method adapts to the sparsity of the problem simply by varying the step-size. Furthermore, we provide a zeroth-order variant of the cubic regularized Newton method and discuss its rate of convergence to local minima.
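The abstract itself contains no code; as a rough illustration of the zeroth-order oracle model it refers to, the following minimal sketch shows a two-point Gaussian-smoothing gradient estimate plugged into a plain stochastic gradient iteration. All names, the smoothing radius `mu`, and the averaging over `num_directions` are illustrative assumptions, not details from the talk.

```python
import numpy as np

def zo_gradient_estimate(f, x, mu=1e-4, num_directions=10, rng=None):
    """Two-point Gaussian-smoothing gradient estimate of f at x.

    Only function evaluations are used (zeroth-order oracle); mu is the
    smoothing radius and num_directions the number of random directions
    averaged over.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(num_directions):
        u = rng.standard_normal(d)
        g += (f(x + mu * u) - f(x)) / mu * u
    return g / num_directions

def zo_sgd(f, x0, step_size=1e-2, iters=500):
    """Plain stochastic gradient descent driven by the zeroth-order estimate."""
    x = x0.copy()
    for _ in range(iters):
        x -= step_size * zo_gradient_estimate(f, x)
    return x

if __name__ == "__main__":
    # Toy smooth nonconvex test function, purely for illustration.
    f = lambda x: np.sum(x**2) + 0.1 * np.sum(np.sin(5 * x))
    x_out = zo_sgd(f, x0=np.ones(20), step_size=1e-2, iters=1000)
    print(f(x_out))
```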
joint work with Nicolas Gillis and Panagiotis Patrinos
We propose inertial versions of block coordinate descent methods for solving nonconvex nonsmooth composite optimization problems. Our methods do not require a restarting step, allow the use of two different extrapolation points, and take advantage of randomly shuffled block updates. To prove the convergence of the whole generated sequence to a critical point, we modify the well-known proof recipe of Bolte, Sabach and Teboulle and combine it with the use of auxiliary functions. Applied to nonnegative matrix factorization (NMF) problems, our methods compete favourably with state-of-the-art NMF algorithms.
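For readers unfamiliar with the inertial idea, here is a minimal sketch of an extrapolated (inertial) block coordinate descent for NMF, assuming a simple projected-gradient update per block. The single inertia weight `beta` and the random shuffling of the two blocks are illustrative assumptions and do not reproduce the authors' exact scheme, which allows two different extrapolation points and needs no restarting step.

```python
import numpy as np

def inertial_bcd_nmf(X, r, iters=200, beta=0.5, rng=None):
    """Sketch of inertial block coordinate descent for NMF.

    Minimizes 0.5 * ||X - W H||_F^2 over W, H >= 0 by visiting the two
    blocks in a randomly shuffled order each pass, taking a projected
    gradient step from an extrapolated point (inertia weight beta).
    """
    rng = np.random.default_rng() if rng is None else rng
    m, n = X.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    W_prev, H_prev = W.copy(), H.copy()

    for _ in range(iters):
        for block in rng.permutation(["W", "H"]):  # randomly shuffled block order
            if block == "W":
                W_bar = W + beta * (W - W_prev)        # extrapolation point
                L = np.linalg.norm(H @ H.T, 2)          # Lipschitz constant of the block gradient
                grad = (W_bar @ H - X) @ H.T
                W_prev, W = W, np.maximum(W_bar - grad / L, 0.0)
            else:
                H_bar = H + beta * (H - H_prev)
                L = np.linalg.norm(W.T @ W, 2)
                grad = W.T @ (W @ H_bar - X)
                H_prev, H = H, np.maximum(H_bar - grad / L, 0.0)
    return W, H
```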