Wed.3 16:00–17:15 | H 2013 | NON

New Trends in Optimization Methods and Machine Learning (3/3)

Chair: El houcine Bergou
16:00

Saeed Ghadimi

joint work with Krishnakumar Balasubramanian

Zeroth-order Nonconvex Stochastic Optimization: Handling Constraints, High-Dimensionality, and Saddle-Points

We analyze zeroth-order stochastic approximation algorithms for nonconvex optimization, focusing on constrained optimization, high-dimensional settings, and avoiding saddle points. In particular, we generalize the conditional stochastic gradient method to the zeroth-order setting and highlight an implicit regularization phenomenon in which the stochastic gradient method adapts to the sparsity of the problem simply by varying the step-size. Furthermore, we provide a zeroth-order variant of the cubic-regularized Newton method and discuss its rate of convergence to local minima.
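
As a rough illustration of the zeroth-order oracle such methods build on (a minimal sketch, not the authors' exact scheme; the function name, smoothing radius mu, and sample count are illustrative assumptions), a Gaussian-smoothing gradient estimator uses only function evaluations:

```python
import numpy as np

def zo_gradient_estimate(f, x, mu=1e-4, num_samples=10, rng=None):
    """Gaussian-smoothing two-point gradient estimator:
    g = (1/m) * sum_i [(f(x + mu*u_i) - f(x)) / mu] * u_i,  u_i ~ N(0, I).
    Only zeroth-order (function-value) access to f is required."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x, dtype=float)
    fx = f(x)  # baseline evaluation, reused across all samples
    for _ in range(num_samples):
        u = rng.standard_normal(x.shape[0])
        g += (f(x + mu * u) - fx) / mu * u
    return g / num_samples

# Usage: estimate the gradient of f(x) = ||x||^2 at x = (1, 2).
f = lambda x: float(np.dot(x, x))
print(zo_gradient_estimate(f, np.array([1.0, 2.0]), num_samples=10000))
# The true gradient is (2, 4); the estimate approaches it as mu -> 0
# and the sample count grows.
```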

16:25

Hien Le

joint work with Nicolas Gillis, Panagiotis Patrinos

Inertial Block Mirror Descent Method for Non-convex Non-smooth Optimization

We propose inertial versions of block coordinate descent methods for solving nonconvex nonsmooth composite optimization problems. Our methods do not require a restarting step, allow the use of two different extrapolation points, and can take advantage of a randomly shuffled block order. To prove convergence of the whole generated sequence to a critical point, we modify the well-known proof recipe of Bolte, Sabach, and Teboulle and combine it with the use of auxiliary functions. Applied to nonnegative matrix factorization (NMF) problems, our methods compare favourably with state-of-the-art NMF algorithms.
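
A minimal sketch of an inertial block step with two extrapolation points, in the spirit of the abstract (a Euclidean special case for illustration only; the step-size, extrapolation weights beta1/beta2, and the toy two-block objective are assumptions, and the authors' method additionally handles nonsmooth terms via mirror/proximal maps):

```python
import numpy as np

def inertial_block_step(x, x_prev, grad, step, beta1, beta2):
    """One inertial block update with two extrapolation points:
    the gradient is evaluated at y, and the step is taken from z.
    Setting beta1 == beta2 recovers the single-extrapolation scheme."""
    d = x - x_prev            # inertial (momentum) direction
    y = x + beta1 * d         # extrapolation point for the gradient
    z = x + beta2 * d         # extrapolation point the step starts from
    return z - step * grad(y)

# Usage: alternate inertial steps over the two blocks of the toy
# objective f(w, h) = 0.5 * (w*h - 6)**2, with a shuffled block order.
rng = np.random.default_rng(0)
x = {"w": np.array([1.0]), "h": np.array([1.0])}
prev = {k: v.copy() for k, v in x.items()}
for _ in range(200):
    for k in rng.permutation(["w", "h"]):           # randomly shuffled order
        other = x["h"] if k == "w" else x["w"]
        grad = lambda v: (v * other - 6.0) * other  # partial gradient in block k
        new = inertial_block_step(x[k], prev[k], grad, 0.1, 0.3, 0.3)
        prev[k], x[k] = x[k], new
print(x["w"] * x["h"])  # the product should approach 6
```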