Thu.3 13:30–14:45 | H 2032 | NON

Nonlinear Optimization – Contributed Session 2/3

Chair: Jourdain Lamperski
13:30

Simone Rebegoldi

joint work with Silvia Bonettini, Marco Prato

Convergence analysis of an inexact forward-backward scheme using the forward-backward envelope

We present an inexact variable-metric linesearch-based forward-backward method for minimizing the sum of an analytic function and a subanalytic convex term. Building on the concept of the forward-backward envelope, we construct a continuously differentiable surrogate function that coincides with the objective function at every stationary point and satisfies a relative error condition which may fail for the objective function itself. By exploiting the Kurdyka-Lojasiewicz property of the surrogate, we prove convergence of the iterates to a stationary point, as well as convergence rates for the function values.
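To fix ideas, here is a minimal Python sketch of a plain (exact, Euclidean-metric) forward-backward iteration with a backtracking linesearch for the composite model min f(x) + g(x), taking f(x) = 0.5*||Ax - b||^2 and g = lam*||.||_1 as stand-ins; the inexactness, variable metric, and forward-backward envelope machinery of the talk are not reproduced here.

import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam, x0, max_iter=500, gamma0=1.0, eta=0.5, tol=1e-8):
    """Forward-backward (proximal gradient) iteration with a backtracking
    linesearch on the step size, for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = x0.copy()
    gamma = gamma0
    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)                 # forward (gradient) step data
        f_x = 0.5 * np.linalg.norm(A @ x - b) ** 2
        while True:                              # backtrack until sufficient decrease holds
            z = soft_threshold(x - gamma * grad, gamma * lam)   # backward (prox) step
            diff = z - x
            f_z = 0.5 * np.linalg.norm(A @ z - b) ** 2
            if f_z <= f_x + grad @ diff + np.linalg.norm(diff) ** 2 / (2 * gamma):
                break
            gamma *= eta
        if np.linalg.norm(diff) <= tol * max(1.0, np.linalg.norm(x)):
            break
        x = z
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
x_hat = forward_backward(A, b, lam=0.1, x0=np.zeros(100))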

13:55

Andrea Cristofari

An almost cyclic 2-coordinate descent method for minimization problems with one linear equality constraint

A block decomposition method is proposed for minimizing a non-convex continuously differentiable function subject to one linear equality constraint and simple bounds on the variables. The working set is computed according to an almost cyclic strategy that does not use first-order information, so that the whole gradient of the objective function does not need to be computed during the algorithm. Under an appropriate assumption on the level set, global convergence to stationary points is established by using different line search strategies. Finally, numerical results are provided.
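As an illustration of the block structure, the sketch below applies a cyclically chosen 2-coordinate update to a box-constrained problem with the single equality constraint sum(x) = const, using a convex quadratic objective as a stand-in: the pair (i, j) is picked without gradient information, and the move along e_i - e_j preserves the equality constraint. The paper's working-set rule, line searches, and convergence theory are not reproduced here.

import numpy as np

def acd2(Q, c, l, u, x0, sweeps=200):
    """Illustrative 2-coordinate descent for
        min 0.5*x'Qx - c'x   s.t.  sum(x) = sum(x0),  l <= x <= u."""
    x = x0.copy()
    n = x.size
    for _ in range(sweeps):
        for i in range(n):
            j = (i + 1) % n                    # cyclic pair selection, no gradient used
            g = Q @ x - c                      # gradient of the quadratic objective
            curv = Q[i, i] + Q[j, j] - 2.0 * Q[i, j]
            if curv <= 1e-12:
                continue                       # skip directions without positive curvature
            t = -(g[i] - g[j]) / curv          # exact line search along e_i - e_j
            t = min(t, u[i] - x[i], x[j] - l[j])   # clip so both coordinates stay in bounds
            t = max(t, l[i] - x[i], x[j] - u[j])
            x[i] += t
            x[j] -= t
    return x

n = 10
rng = np.random.default_rng(1)
M = rng.standard_normal((n, n))
Q = M @ M.T + np.eye(n)
c = rng.standard_normal(n)
l, u = np.zeros(n), np.ones(n)
x0 = np.full(n, 0.5)                           # feasible start: sum(x0) = n/2
x_hat = acd2(Q, c, l, u, x0)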

14:20

Jourdain Lamperski

Solving Linear Inequalities via Non-convex Optimization

We mainly consider solving homogeneous linear inequalities. We formulate a non-convex optimization problem, in a slightly lifted space, whose critical points map to feasible solutions of the linear inequalities. We establish various properties of the non-convex objective function and develop an algorithm that computes its critical points, thus yielding an algorithm for solving linear inequalities. We establish convergence guarantees for the algorithm and further investigate its performance via computational experiments.
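For intuition only, the following sketch attacks a strictly feasible homogeneous system Ax > 0 by projected gradient descent on a smooth non-convex penalty restricted to the unit sphere; this is a generic non-convex reformulation chosen for illustration, not the lifted formulation or the algorithm of the talk.

import numpy as np

def find_strict_solution(A, iters=2000, step=0.1, seed=0):
    """Heuristic sketch: look for x with A x > 0 by projected gradient
    descent on   min_{||x|| = 1}  0.5 * sum_i min(a_i' x, 0)^2,
    i.e. penalizing violated inequalities on the unit sphere."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        r = np.minimum(A @ x, 0.0)            # negative parts = constraint violations
        grad = A.T @ r                        # gradient of 0.5 * sum(min(Ax, 0)^2)
        x = x - step * grad
        x /= np.linalg.norm(x)                # project back onto the unit sphere
        if np.all(A @ x > 0):
            return x
    return x

# Example: a system constructed to be strictly feasible
rng = np.random.default_rng(2)
x_true = rng.standard_normal(5)
A = rng.standard_normal((20, 5))
A = A * np.sign(A @ x_true)[:, None]          # flip rows so that A @ x_true > 0
x = find_strict_solution(A)
print(np.all(A @ x > 0))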