joint work with Michael Saunders, Ron Estrin, Michael Friedlander
We develop a general constrained optimization algorithm based on a smooth penalty function proposed by Fletcher. The main computational kernel is the solution of structured linear systems. We show how to solve these systems efficiently by storing a single factorization per iteration when the matrices are available explicitly, and we also give a factorization-free implementation. The penalty function shows particular promise when these linear systems can be solved cheaply, e.g., for PDE-constrained optimization problems for which effective preconditioners exist.
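For concreteness, consider the equality-constrained setting min_x f(x) subject to c(x) = 0 (a representative special case; the abstract does not fix the problem class). One common statement of Fletcher's penalty, sketched here with the caveat that sign and scaling conventions vary, is

\[
\phi_\sigma(x) = f(x) - c(x)^T y_\sigma(x),
\qquad
y_\sigma(x) = \operatorname*{arg\,min}_y \; \tfrac{1}{2}\,\|A(x)\,y - g(x)\|^2 + \sigma\, c(x)^T y,
\]

where g(x) = \nabla f(x) and the columns of A(x) are the constraint gradients \nabla c_i(x). Evaluating \phi_\sigma and its gradient then reduces to solving saddle-point systems of the form

\[
\begin{bmatrix} I & A(x) \\ A(x)^T & 0 \end{bmatrix}
\begin{bmatrix} q \\ y_\sigma(x) \end{bmatrix}
=
\begin{bmatrix} g(x) \\ \sigma\, c(x) \end{bmatrix},
\]

which is the structured kernel described above: one factorization per iteration when the matrices are explicit, or a preconditioned iterative solver in the factorization-free case.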
joint work with Dominique Orban
Like the conjugate gradient method (CG), the conjugate residual method (CR), proposed by Hestenes and Stiefel, has properties that are desirable in both linesearch and trust-region contexts for optimization, but it is defined only for symmetric positive-definite operators. We investigate modifications that make CR suitable even in the presence of negative curvature, and compare it with CG on convex and nonconvex problems. We also give an extension suitable for nonlinear least-squares problems. CR performs as well as or better than CG, and yields savings in operator-vector products.
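As a point of reference, below is a minimal sketch of the classical CR iteration for a symmetric operator A, augmented with a simple exit on nonpositive curvature (one plausible truncation rule; the talk's modifications are more refined, and practical variants also monitor p^T A p). Note that A p is maintained by a recurrence, so each iteration costs a single operator-vector product.

    import numpy as np

    def truncated_cr(A, b, tol=1e-8, maxiter=None):
        """Conjugate residual iteration for a symmetric operator A, with a
        basic negative-curvature exit (a sketch, not the talk's algorithm)."""
        Aop = A if callable(A) else (lambda v: A @ v)
        x = np.zeros_like(b)
        r = b.copy()                 # residual b - A x for x = 0
        Ar = Aop(r)
        p, Ap = r.copy(), Ar.copy()
        rho = r @ Ar                 # curvature term r^T A r
        for _ in range(maxiter or 2 * len(b)):
            if rho <= 0:
                # nonpositive curvature detected; a trust-region method would
                # instead follow the current direction to the boundary
                return x, "negative curvature"
            alpha = rho / (Ap @ Ap)
            x = x + alpha * p
            r = r - alpha * Ap
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                return x, "converged"
            Ar = Aop(r)              # the one operator-vector product per iteration
            rho_new = r @ Ar
            beta = rho_new / rho
            p = r + beta * p
            Ap = Ar + beta * Ap      # recurrence keeps A p without an extra product
            rho = rho_new
        return x, "max iterations reached"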
joint work with Juliano Francisco, Fermin Bazan, Lila Paredes
We consider the problem of minimizing a differentiable functional restricted to the set of n x p matrices with orthonormal columns. We present a nonmonotone line search variant of the inexact restoration method, along with implementation details. The restoration phase employs the Cayley transform, which leads to an SVD-free scheme. By exploiting the Sherman-Morrison-Woodbury formula, this strategy proves quite efficient when p is much smaller than n. Numerical experiments on a wide class of problems validate the reliability of our approach against a well-established algorithm.
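To illustrate the p much smaller than n mechanism: in Cayley-transform feasible updates (in the style of Wen and Yin; the restoration phase of the talk may differ in its details), the skew-symmetric matrix W = G X^T - X G^T factors as W = U V^T with U = [G, X] and V = [X, -G], so the Sherman-Morrison-Woodbury formula collapses the n x n Cayley inverse to a 2p x 2p solve. A minimal sketch, with cayley_update a hypothetical name:

    import numpy as np

    def cayley_update(X, G, tau):
        """One feasible update on the Stiefel manifold {X : X^T X = I}.
        Sherman-Morrison-Woodbury reduces (I + (tau/2) W)^{-1} with
        W = U V^T to a (2p) x (2p) linear solve."""
        n, p = X.shape
        U = np.hstack([G, X])                        # n x 2p
        V = np.hstack([X, -G])                       # n x 2p: U V^T = G X^T - X G^T
        B = np.eye(2 * p) + (tau / 2.0) * (V.T @ U)  # small 2p x 2p matrix
        return X - tau * U @ np.linalg.solve(B, V.T @ X)

    # quick check: orthonormal columns are preserved up to rounding
    rng = np.random.default_rng(0)
    X, _ = np.linalg.qr(rng.standard_normal((1000, 5)))  # random Stiefel point
    G = rng.standard_normal((1000, 5))                   # stand-in gradient
    Y = cayley_update(X, G, tau=0.1)
    print(np.linalg.norm(Y.T @ Y - np.eye(5)))           # ~ 1e-14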