joint work with Roummel Marcia, Cosmin Petra
For large-scale optimization problems, limited-memory quasi-Newton methods are an efficient means of approximating Hessian matrices. We present two L-BFGS trust-region methods for large-scale optimization problems with a small number of linear equality constraints. Both methods exploit a compact representation of the (1,1) block of the inverse KKT matrix. One of the proposed methods uses an implicit eigendecomposition to compute search directions by an analytic formula, and the other uses a one-dimensional root solver. We compare the methods with alternative quasi-Newton algorithms on a set of CUTEst problems.
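As a rough illustration of the linear algebra involved, the sketch below forms the constrained quasi-Newton step p = -K11 g, where K11 is the (1,1) block of the inverse KKT matrix for the constraints A p = 0, from a compact inverse-Hessian representation H = gamma*I + Psi M Psi^T. The names gamma, Psi, M and the layout are assumptions for illustration; the eigendecomposition-based trust-region machinery and the root solver of the talk are not reproduced here.

```python
import numpy as np

def equality_constrained_step(gamma, Psi, M, A, g):
    """Sketch: step p = -K11 @ g from the compact inverse-Hessian form
    H = gamma*I + Psi @ M @ Psi.T (gamma, Psi, M are hypothetical names
    for the L-BFGS compact-representation factors)."""
    def Hv(v):
        # apply H = gamma*I + Psi M Psi^T without forming it explicitly
        return gamma * v + Psi @ (M @ (Psi.T @ v))

    Hg = Hv(g)
    AH = gamma * A + (A @ Psi) @ M @ Psi.T   # A @ H via the low-rank factors
    AHA = AH @ A.T                           # small m-by-m system, m = #constraints
    lam = np.linalg.solve(AHA, A @ Hg)       # Lagrange multiplier estimate
    return -(Hg - Hv(A.T @ lam))             # -K11 @ g; satisfies A @ p = 0
```

Because the number of equality constraints is small, AHA is a tiny dense system, so the per-iteration cost stays at the usual limited-memory level.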
joint work with Nobuo Yamashita
Recently, Sugimoto and Yamashita proposed the regularized limited-memory BFGS method (RL-BFGS) with a nonmonotone technique for unconstrained optimization and reported that it is competitive with the standard L-BFGS. However, RL-BFGS does not use a line search and hence cannot take longer steps. In order to take longer steps, we propose to perform a line search with the Wolfe conditions whenever an iteration of RL-BFGS is successful and the search direction is still a descent direction. Numerical results show that RL-BFGS with the proposed technique is more efficient and stable than both the standard L-BFGS and the RL-BFGS method.
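A minimal sketch of the proposed step rule follows, assuming SciPy's line_search for the Wolfe conditions; the flag `successful` stands for the outcome of the RL-BFGS ratio test, and the regularization update itself is omitted.

```python
import numpy as np
from scipy.optimize import line_search

def take_step(f, grad, x, p, g, successful):
    """Hypothetical sketch: choose between the unit RL-BFGS step and a
    Wolfe line search along the regularized direction p at x, where
    g = grad(x) and `successful` is the RL-BFGS ratio-test outcome."""
    if successful and g @ p < 0:              # direction is still descent
        # a Wolfe line search permits steps longer than the unit step
        alpha = line_search(f, grad, x, p, gfk=g, c1=1e-4, c2=0.9)[0]
        if alpha is not None:
            return x + alpha * p
    return x + p                              # plain RL-BFGS unit step
```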
A recent class of symmetric conjugate gradient methods for large-scale unconstrained optimization will be considered. A quasi-Newton-like condition will be introduced to this class of methods, in a sense to be defined. It will be shown that the proposed methods converge globally and have some useful features. Numerical results will be presented to illustrate how some new techniques substantially improve the performance of several conjugate gradient methods.
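Since the quasi-Newton-like condition is defined only in the talk, the sketch below merely illustrates one standard way a secant condition can shape a conjugate gradient direction, in the spirit of Dai and Liao; it is not the proposed method, and the parameter t and the Armijo backtracking are illustrative choices.

```python
import numpy as np

def secant_flavoured_cg(f, grad, x0, t=0.1, tol=1e-6, max_iter=500):
    """Illustrative CG iteration whose beta enforces the secant-like
    condition d_{k+1}^T y_k = -t * g_{k+1}^T s_k (Dai-Liao style)."""
    x, g = x0.copy(), grad(x0)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha, f0 = 1.0, f(x)                 # simple Armijo backtracking
        while f(x + alpha * d) > f0 + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        s = alpha * d
        x_new, g_new = x + s, grad(x + s)
        y = g_new - g
        # beta derived from a quasi-Newton (secant) condition on d_{k+1}
        beta = g_new @ (y - t * s) / (d @ y)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```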