Thu.3 13:30–14:45 | H 0104 | NON

Advances in Nonlinear Optimization With Applications (2/2)

Chair: Yu-Hong Dai Organizers: Yu-Hong Dai, Deren Han
13:30

Xingju Cai

An indefinite proximal point algorithm for maximal monotone operators

We investigate the possibility of relaxing the positive definiteness requirement on the proximal matrix in the proximal point algorithm (PPA). A new indefinite PPA for finding a root of a maximal monotone operator is proposed by choosing an indefinite proximal regularization term. The proposed method is more flexible, especially when the operator has special structure. We prove global convergence. We also allow the subproblems to be solved approximately and propose two flexible inexact criteria.
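For reference, the classical PPA iteration that this talk generalizes can be sketched as follows. This is a minimal illustration with the standard positive proximal weight r > 0 and the simple monotone operator T(x) = x (whose root is 0); it is not the indefinite variant proposed in the talk, and the operator and parameters are illustrative.

```python
import numpy as np

def ppa(x0, r=1.0, iters=60):
    """Classical proximal point algorithm for the operator T(x) = x.

    Each step evaluates the resolvent x_{k+1} = (I + (1/r) T)^{-1}(x_k),
    i.e. solves r*(y - x_k) + T(y) = 0 for y. The talk's method replaces
    the positive weight r with an indefinite proximal regularization term.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = r * x / (r + 1.0)  # closed-form resolvent for T(y) = y
    return x

print(ppa([5.0]))  # iterates converge to the root x = 0
```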

13:55

Caihua Chen

On the Linear Convergence of the ADMM for Regularized Non-Convex Low-Rank Matrix Recovery

In this paper, we investigate the convergence behavior of the ADMM for solving regularized non-convex low-rank matrix recovery problems. We show that the ADMM converges globally to a critical point of the problem without any assumption on the sequence generated by the method. If the objective function of the problem satisfies the Łojasiewicz inequality with exponent 1/2 at every (globally) optimal solution, then with suitable initialization the ADMM converges linearly. We exhibit concrete instances for which the ADMM converges linearly.
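For readers unfamiliar with the ADMM template analyzed in the talk, the following is a minimal sketch on the standard convex lasso problem, min 0.5||Ax - b||^2 + lam*||z||_1 s.t. x = z, rather than the paper's non-convex matrix recovery model; the splitting, step size rho, and random data are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam, rho = 0.1, 1.0

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(iters=2000):
    """Two-block ADMM for min 0.5||Ax-b||^2 + lam||z||_1 s.t. x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)  # u: scaled multiplier
    M = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # smooth block
        z = soft_threshold(x + u, lam / rho)         # nonsmooth block
        u = u + x - z                                # dual (multiplier) update
    return x, z

x, z = admm_lasso()  # primal residual ||x - z|| shrinks toward 0
```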

14:20

[canceled] Deren Han

On the convergence rate of a splitting augmented Lagrangian method

In this talk, we present a new ADMM-based prediction-correction method (APCM) for solving three-block convex minimization problems with linear constraints. In the prediction step the subproblems are solved partially in parallel, and in the correction step only the third variable and the Lagrange multiplier are corrected; this reduces the cost of the prediction step and allows different step sizes in the correction step. We establish global convergence and an O(1/t) convergence rate under mild conditions, and a globally linear convergence rate under additional assumptions.
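As a point of reference for the three-block setting, the following toy sketch runs a plain Gauss-Seidel splitting of the augmented Lagrangian on a scalar instance, min sum_i 0.5*(x_i - c_i)^2 s.t. x_1 + x_2 + x_3 = b. It is not the APCM of the talk (there is no parallel prediction or separate correction step), and the data c, b and penalty rho are illustrative.

```python
# Toy three-block splitting of the augmented Lagrangian:
#   min 0.5*(x1-1)^2 + 0.5*(x2-2)^2 + 0.5*(x3-3)^2  s.t. x1+x2+x3 = 3.
# Blocks are minimized one at a time (Gauss-Seidel), then the
# multiplier is updated. This is the plain splitting scheme, not the
# talk's prediction-correction APCM.
c, b, rho = (1.0, 2.0, 3.0), 3.0, 1.0

def splitting_alm(iters=200):
    x = [0.0, 0.0, 0.0]
    lam = 0.0
    for _ in range(iters):
        for i in range(3):
            rest = sum(x) - x[i]
            # minimize 0.5*(xi - c[i])^2 + lam*xi + 0.5*rho*(xi + rest - b)^2
            x[i] = (c[i] - lam - rho * (rest - b)) / (1.0 + rho)
        lam += rho * (sum(x) - b)  # multiplier (dual) update
    return x, lam

x, lam = splitting_alm()
# x approaches the KKT solution (0, 1, 2) with multiplier lam = 1
```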