Tue.3 14:45–16:00 | H 1028 | CNV

Operator Splitting and Proximal Decomposition Methods

Chair: Patrick L. Combettes
Organizers: Jonathan Eckstein, Patrick L. Combettes
14:45

Yu Du

An outer-inner linearization method for non-convex and non-differentiable composite regularization problems

We propose a new outer-inner linearization method for nonconvex statistical learning problems involving nonconvex structural penalties and nonconvex loss functions. The method linearizes the outer concave functions and solves the resulting convex, but still nonsmooth, subproblems by a special alternating linearization method. We prove convergence of the method under mild conditions. Finally, numerical examples involving nonconvex structural penalties and nonconvex loss functions demonstrate the efficacy and accuracy of the method.
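
As a rough illustration of the outer-inner idea only (not the authors' exact algorithm), the Python sketch below assumes a hypothetical log-type concave penalty: the outer concave term is linearized at the current iterate, yielding a weighted-l1 subproblem, which is then solved by plain ISTA standing in for the paper's alternating linearization inner solver.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of a weighted l1 norm: componentwise soft-thresholding.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def outer_inner_linearization(A, b, lam=1.0, eps=0.1, outer_iters=20, inner_iters=200):
        # Illustrative sketch: minimize 0.5*||Ax - b||^2 + lam * sum_i log(eps + |x_i|).
        # The concave outer function log(eps + .) is linearized at the current iterate;
        # the resulting convex weighted-LASSO subproblem is solved by ISTA (a stand-in
        # for the paper's alternating linearization inner method).
        n = A.shape[1]
        x = np.zeros(n)
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
        for _ in range(outer_iters):
            w = lam / (eps + np.abs(x))          # outer linearization: per-coordinate weights
            for _ in range(inner_iters):         # inner convex, nonsmooth subproblem
                grad = A.T @ (A @ x - b)
                x = soft_threshold(x - grad / L, w / L)
        return x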

15:10

Maicon Marques Alves

joint work with Jonathan Eckstein, Marina Geremia

Relative-Error Inertial-Relaxed Inexact Versions of Douglas-Rachford and ADMM Splitting Algorithms

This paper derives new inexact variants of the Douglas-Rachford splitting method for maximal monotone operators and of the alternating direction method of multipliers (ADMM) for convex optimization. The analysis is based on a new inexact version of the proximal point algorithm that includes both an inertial step and overrelaxation. We apply our new inexact ADMM method to LASSO and logistic regression problems and obtain better computational performance than earlier inexact ADMM methods.
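
For orientation only, here is a minimal Python sketch of ADMM for LASSO with an inertial extrapolation and over-relaxation. The placement of the inertial step on the (z, u) pair is an assumption for illustration, and the paper's relative-error inexactness criteria are omitted (the x-subproblem is solved exactly here), so this is a simplified stand-in rather than the proposed method.

    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def inertial_relaxed_admm_lasso(A, b, lam=1.0, rho=1.0, alpha=1.5, beta=0.2, iters=500):
        # Toy ADMM for: minimize 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z,
        # with inertial extrapolation (beta) and over-relaxation (alpha).
        n = A.shape[1]
        x = np.zeros(n)
        z = np.zeros(n)
        u = np.zeros(n)                                 # scaled dual variable
        z_old, u_old = z.copy(), u.copy()
        M = np.linalg.inv(A.T @ A + rho * np.eye(n))    # cached for repeated x-updates
        Atb = A.T @ b
        for _ in range(iters):
            z_h = z + beta * (z - z_old)                # inertial extrapolation
            u_h = u + beta * (u - u_old)
            z_old, u_old = z, u
            x = M @ (Atb + rho * (z_h - u_h))           # exact x-subproblem (quadratic loss)
            x_r = alpha * x + (1.0 - alpha) * z_h       # over-relaxation
            z = soft_threshold(x_r + u_h, lam / rho)    # prox of the l1 term
            u = u_h + x_r - z                           # dual update
        return z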

15:35

Jonathan Eckstein

joint work with Patrick Johnstone

Projective Splitting with Co-coercive Operators

This talk describes a new variant of monotone operator projective splitting in which co-coercive operators can be processed with a single forward step per iteration. This result establishes a symmetry between projective splitting algorithms, the classical forward-backward method, and Tseng’s forward-backward-forward method: in situations in which Lipschitz continuous monotone operators require two forward steps, co-coercive operators may be processed with a single forward step. The single forward step may be interpreted as one step of the classical forward-backward method for the standard “prox” problem associated with the co-coercive operator, starting at the previously known point in the operator graph. Proving convergence of the algorithm requires some departures from the usual proof framework for projective splitting. We have also developed a backtracking variant of the method for cases in which the co-coercivity constant is unknown, and we present computational tests of this method on large-scale optimization problems from data fitting and portfolio applications.
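
To fix ideas, a minimal Python sketch of the classical forward-backward step referenced in the abstract follows; the projective splitting machinery itself (separating hyperplane construction, multiple operators, the backtracking variant) is not shown, and the usage comment at the end names hypothetical inputs.

    def forward_backward(prox_g, grad_f, x0, step, iters=1000):
        # Classical forward-backward iteration: x+ = prox_{step*g}(x - step*grad_f(x)).
        # Convergent when grad_f is 1/step-cocoercive, e.g. step <= 1/L for the
        # L-Lipschitz gradient of a smooth convex f. Shown only to illustrate the
        # single forward step the talk refers to; in projective splitting such a
        # step is used to build a separating hyperplane, not as the iteration itself.
        x = x0
        for _ in range(iters):
            x = prox_g(x - step * grad_f(x))    # one forward step, then one prox step
        return x

    # Hypothetical use for LASSO: grad_f = lambda x: A.T @ (A @ x - b),
    # prox_g = soft-thresholding for g = lam*||.||_1, step = 1 / ||A||_2**2.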