joint work with Zhaosong Lu
In this talk we propose two nonmonotone enhanced proximal DC algorithms for solving a class of structured nonsmooth difference-of-convex (DC) minimization problems, in which the first convex component is the sum of a smooth function and a nonsmooth function, while the second is the supremum of finitely many smooth convex functions. We show that any accumulation point of the generated sequence is a D-stationary point. We then propose randomized counterparts of these algorithms and show that any accumulation point of the generated sequence is almost surely a D-stationary point. Numerical results will also be presented.
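A rough formalization of this problem class (the symbols below are illustrative, not necessarily the paper's notation):

\[
\min_{x \in \mathbb{R}^n} \; f(x) := g(x) + h(x) - \max_{1 \le i \le m} \psi_i(x),
\]

where $g$ is smooth convex, $h$ is nonsmooth convex, and each $\psi_i$ is smooth convex. A point $\bar{x}$ is D-stationary if the directional derivative satisfies $f'(\bar{x}; d) \ge 0$ for every direction $d$; for DC programs this is a sharper condition than the criticality notion obtained from subdifferential calculus.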
joint work with Akiko Takeda
Our work focuses on stochastic proximal gradient methods for optimizing a smooth nonconvex loss function with a nonsmooth nonconvex regularizer and convex constraints. To the best of our knowledge, we present the first non-asymptotic convergence results for this class of problems. We propose two simple stochastic proximal gradient algorithms, one for finite-sum and one for general stochastic optimization problems, whose convergence complexities match or improve upon the current state of the art for the unconstrained problem setting.
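A minimal sketch of a generic stochastic proximal gradient iteration of this flavor, assuming the proximal mapping absorbs both the regularizer and the convex constraint set; the function names and the toy instance are illustrative, not the algorithms from the talk:

```python
import numpy as np

# Schematic stochastic proximal gradient iteration. Here `prox` is the
# proximal mapping of the regularizer plus the indicator of the convex
# constraint set (a modeling choice made for this sketch).
def spg_step(x, stoch_grad, prox, eta):
    g = stoch_grad(x)              # unbiased stochastic gradient of the smooth loss
    return prox(x - eta * g, eta)  # prox step handles regularizer and constraints

# Toy instance: least squares with an l1 regularizer over the box [-1, 1]^n.
# (The talk's regularizer is nonconvex; l1 is used here only to keep the
# prox in closed form.)
rng = np.random.default_rng(0)
n, lam, eta = 5, 0.1, 0.05

def stoch_grad(x):
    a = rng.standard_normal(n)
    b = a @ np.ones(n)             # noiseless target: the all-ones vector
    return (a @ x - b) * a

def prox(v, t):
    v = np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)  # l1 soft-threshold
    return np.clip(v, -1.0, 1.0)                           # box projection

x = np.zeros(n)
for _ in range(500):
    x = spg_step(x, stoch_grad, prox, eta)
print(x)  # entries hover near 0.9, the soft-thresholded all-ones target
```

For the separable l1-plus-box case above, soft-thresholding followed by projection is the exact proximal mapping, which is why the two steps can be composed inside `prox`.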
joint work with Bruno Figueira Lourenço and Akiko Takeda
In this paper we propose a new random projection method that reduces the number of inequalities of a linear program (LP). More precisely, we randomly aggregate the constraints of an LP into a new LP with far fewer constraints while approximately preserving its optimal value. We will also see how to extend this approach to conic linear programming.
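An illustrative sketch of constraint aggregation by random projection, under my own assumptions: a nonnegative random aggregator T keeps TAx <= Tb a valid relaxation of Ax <= b, so the sketched optimal value can only decrease for a minimization; the sampling distribution below is a plain choice for illustration, not necessarily the one analyzed in the paper:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 2000, 10, 50            # 2000 inequalities aggregated into 50

A = rng.standard_normal((m, n))
z = rng.standard_normal(n)
b = A @ z + 1.0                   # z is strictly feasible, so the LP is feasible
c = rng.standard_normal(n)
box = [(-5.0, 5.0)] * n           # bounded variables so both LPs have finite values

T = rng.random((k, m)) / m        # nonnegative rows = random constraint weightings

full = linprog(c, A_ub=A, b_ub=b, bounds=box)
sketched = linprog(c, A_ub=T @ A, b_ub=T @ b, bounds=box)
print(full.fun, sketched.fun)     # sketched.fun <= full.fun (relaxation)
```

The nonnegativity of T matters: any x satisfying Ax <= b automatically satisfies TAx <= Tb, so the aggregated LP is a relaxation of the original one.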