Tue.2 13:15–14:30 | H 3025 | NON

Advances in Interior Point Methods

Chair: Margherita Porcelli
Organizer: Coralia Cartis
13:15

Renke Kuhlmann

Learning to Steer Nonlinear Interior-Point Methods

Interior-point methods handle nonlinear programs by sequentially solving barrier subproblems with a decreasing sequence of barrier parameters. In practice, adaptive barrier updates have been shown to lead to superior performance. In this talk we interpret the adaptive barrier update as a reinforcement learning task and train a deep Q-learning agent to solve it. Numerical results based on an implementation within the nonlinear programming solver WORHP show that the agent successfully learns to steer the barrier parameter and additionally improves WORHP's performance on the CUTEst test set.
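A toy sketch of the idea follows. It is not WORHP's actual scheme: the stateless agent, the three-element action set, and the reward below are illustrative assumptions only, while the talk uses a deep Q-network. Here an epsilon-greedy agent picks the barrier reduction factor for a one-dimensional log-barrier method and is rewarded for cheap inner Newton solves.

# Toy sketch, not WORHP's scheme: an epsilon-greedy Q-learning agent picks the
# barrier reduction factor for a log-barrier method on  min 0.5*x^2 - x  s.t.  x >= 0.
import numpy as np

rng = np.random.default_rng(0)
actions = [0.1, 0.2, 0.5]          # candidate reduction factors for the barrier parameter mu
Q = np.zeros(len(actions))         # stateless Q-values (the talk's agent uses a deep network)
eps, alpha = 0.2, 0.1              # exploration rate, learning rate

def inner_newton(x, mu, tol=1e-8, max_it=50):
    """Solve the barrier subproblem min 0.5*x**2 - x - mu*log(x) by damped Newton steps."""
    for k in range(max_it):
        g = x - 1.0 - mu / x       # gradient of the barrier objective
        if abs(g) < tol:
            return x, k
        h = 1.0 + mu / x**2        # Hessian (positive, so the step is well defined)
        step = -g / h
        t = 1.0
        while x + t * step <= 0:   # damp the step to keep x strictly feasible
            t *= 0.5
        x += t * step
    return x, max_it

for episode in range(200):
    x, mu = 2.0, 1.0
    while mu > 1e-6:
        a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q))
        mu *= actions[a]                   # the agent's adaptive barrier update
        x, iters = inner_newton(x, mu)
        reward = -iters                    # cheap subproblem solves earn higher reward
        Q[a] += alpha * (reward - Q[a])    # running-average Q update

print("Q-value per reduction factor:", dict(zip(actions, np.round(Q, 2))))
print("final x:", round(x, 4))             # analytic optimum of the toy problem is x = 1

Per the abstract, the actual method replaces these tabular values with a deep Q-network trained within WORHP.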

13:40

Andre Tits

joint work with M. Paul Laiu

Constraint Reduction in Infeasible-Start Interior-Point for Imbalanced Large Sparse Convex Quadratic Optimization

A framework for accommodating infeasible starts, given a user-supplied feasible-start "base" iteration for CQP, is proposed and analyzed. Requirements to be satisfied by the base iteration allow for inexact computation of search directions. The framework is applied to a recent feasible-start "constraint-reduced" MPC algorithm, involving a small "working set" of constraints, updated at each iteration. In early numerical tests with Matlab/CVX, speedups of up to 10x over SDPT3/SeDuMi were observed on both randomly generated and data-fitting problems with many more constraints than variables.
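The sketch below shows only the generic constraint-reduction idea, not the authors' base iteration or working-set update rule: for a CQP min 0.5*x'Hx + c'x s.t. Ax >= b with m >> n, the n-by-n normal-equations-type matrix is assembled from a small working set of the most nearly active constraints, cutting the per-iteration cost from O(m*n^2) to O(q*n^2). The slack-based selection rule and all names are assumptions made for illustration.

# Illustration of constraint reduction for an imbalanced CQP (m >> n),
# not the algorithm from the talk.
import numpy as np

def working_set(s, q):
    """Indices of the q constraints with the smallest slacks (most nearly active)."""
    return np.argsort(s)[:q]

def reduced_normal_matrix(H, A, s, lam, q):
    """H + A_Q' diag(lam_Q / s_Q) A_Q, assembled from the working set Q only."""
    Q = working_set(s, q)
    d = lam[Q] / s[Q]                       # diagonal scaling from the current primal-dual iterate
    return H + (A[Q].T * d) @ A[Q], Q

# tiny example: 2 variables, 10,000 inequality constraints A x >= b
rng = np.random.default_rng(1)
n, m, q = 2, 10_000, 50
H = np.eye(n)                               # quadratic term of the CQP objective
A = rng.standard_normal((m, n))
b = -np.abs(rng.standard_normal(m)) - 0.1   # chosen so that x = 0 is strictly feasible
x = np.zeros(n)
s = A @ x - b                               # slacks at the current iterate (all positive)
lam = np.ones(m)                            # dual estimates

M, Q = reduced_normal_matrix(H, A, s, lam, q)
print("working set size:", len(Q), "of", m, "constraints")
print("reduced matrix:\n", M)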

14:05

Margherita Porcelli

joint work with Stefania Bellavia, Jacek Gondzio

A New Interior-Point Approach for Low-Rank Semidefinite Programs

We address the solution of semidefinite programming (SDP) problems in which the primal variable X is expected to be low-rank at optimality, a common situation in relaxations of combinatorial optimization problems (e.g., maximum cut) and in matrix completion. SDPs are solved efficiently using interior-point methods (IPMs), but such algorithms typically converge to a maximum-rank solution. We propose a new IPM that works with a low-rank X and gradually reveals the optimal (minimum) rank. Preliminary results show that an alternating-directions approach improves the efficiency of the linear algebra.
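As a hedged illustration of the structure such a method exploits (not the proposed IPM or its alternating-directions linear algebra), the snippet below keeps only a low-rank factor U with X = U U' in the max-cut relaxation, where the constraint diag(X) = 1 and the objective <L/4, X> can be evaluated from U without ever forming the dense matrix X.

# Low-rank parametrization X = U U' in the max-cut SDP relaxation
#   max <L/4, X>  s.t.  diag(X) = 1,  X positive semidefinite,
# shown here only to illustrate the structure, not the talk's algorithm.
import numpy as np

rng = np.random.default_rng(2)
n, r = 6, 2
W = rng.integers(0, 2, size=(n, n))             # adjacency of a small random graph
W = np.triu(W, 1)
W = W + W.T
L = np.diag(W.sum(axis=1)) - W                  # graph Laplacian

U = rng.standard_normal((n, r))                 # low-rank factor; X = U @ U.T is never formed
U /= np.linalg.norm(U, axis=1, keepdims=True)   # unit-norm rows enforce diag(X) = 1

obj = np.sum(U * (L @ U)) / 4.0                 # <L/4, X> = trace((L/4) @ U @ U.T) in O(n^2 r)
residual = np.sum(U**2, axis=1) - 1.0           # diag(X) - 1, read off the factor directly

print("objective <L/4, X>:", round(obj, 4))
print("max |diag(X) - 1|:", float(np.abs(residual).max()))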