joint work with Adrian S. Lewis
The idea of partial smoothness unifies active-set and constant-rank properties in optimization and variational inequalities. We sketch two local algorithmic schemes in partly smooth settings. One promising heuristic solves finite minimax subproblems using only partial knowledge of the component functions. The second, more conceptual, approach relies on splitting schemes for variational inequalities.
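For orientation, a finite minimax subproblem of the kind referred to above has the generic form (a standard formulation, not necessarily the exact subproblem of the talk)
\[
\min_{x \in \mathbb{R}^n} \; \max_{i = 1, \dots, m} f_i(x),
\]
where the component functions $f_i$ are smooth; the indices attaining the maximum at a solution play the role of an active set, which is the structure that partial smoothness captures.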
joint work with Masao Yamagishi
In sparsity-aware signal processing and machine learning, ideal optimization models are often formulated as the minimization of a cost function defined as the sum of convex and possibly nonconvex terms, whose globally optimal solutions are in general hard to compute. We present a unified translation of a certain class of such convex optimization problems into monotone inclusion problems and propose iterative algorithms with guaranteed convergence to a globally optimal solution.
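As a generic illustration of the translation (the precise model class of the talk may differ), consider a cost of the composite form
\[
\min_{x \in \mathcal{H}} \; f(x) + g(Lx),
\]
with $f$ convex, $L$ a bounded linear operator, and $g$ a possibly nonconvex term chosen so that the overall cost remains convex. First-order optimality can then be expressed as a monotone inclusion
\[
0 \in A x^\star + L^{*} B (L x^\star)
\]
for suitable maximally monotone operators $A$ and $B$, to which operator-splitting iterations with global convergence guarantees can be applied.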
joint work with Patrick L. Combettes
We introduce an optimization model for general maximum likelihood-type estimation (M-estimation). The model leverages the observation that convex M-estimators with concomitant scale, as well as various regularizers, are instances of perspective functions. Such functions are amenable to proximal analysis and solution methods via splitting techniques. Using a geometrical approach based on duality, we derive novel proximity operators for several perspective functions. Numerical examples are provided.
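To fix notation (a standard construction, not specific to the talk), the perspective of a proper lower semicontinuous convex function $\varphi$ is
\[
\widetilde{\varphi}(\eta, y) =
\begin{cases}
\eta\, \varphi(y/\eta), & \eta > 0,\\
(\operatorname{rec} \varphi)(y), & \eta = 0,\\
+\infty, & \eta < 0,
\end{cases}
\]
where $\operatorname{rec}\varphi$ denotes the recession function. A convex M-estimator with concomitant scale can, for instance, be written in the generic form
\[
\min_{x, \sigma} \; \sum_{i=1}^{n} \widetilde{\varphi}\bigl(\sigma, \, b_i - \langle a_i, x \rangle\bigr) + c\,\sigma,
\]
whose terms are perspective functions and hence amenable to the proximal techniques mentioned above.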