Non-convex optimization is typically intractable, but this intractability can be surmounted when the problem possesses enough structure. This talk focuses on “geodesic convexity” as that structure. Such hidden convexity not only yields proofs of tractability but also suggests practical algorithmic approaches. I will highlight several examples from machine learning and statistics where non-Euclidean geometry plays a valuable role in obtaining efficient solutions to fundamental tasks (e.g., eigenvector computation, Gaussian mixtures, Wasserstein barycenters).
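A minimal sketch (illustrative, not taken from the talk) of how non-Euclidean geometry makes one of these tasks concrete: computing a leading eigenvector by Riemannian gradient ascent of the Rayleigh quotient on the unit sphere. The step size, iteration count, and test matrix below are illustrative choices.

import numpy as np

def leading_eigenvector(A, iters=1000, seed=0):
    # Riemannian gradient ascent of f(x) = 0.5 * x^T A x on the unit sphere.
    x = np.random.default_rng(seed).standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    step = 0.5 / np.linalg.norm(A, 2)            # conservative fixed step (illustrative)
    for _ in range(iters):
        g = A @ x                                # Euclidean gradient
        rg = g - (x @ g) * x                     # project onto the tangent space: Riemannian gradient
        x = x + step * rg                        # gradient step in the tangent direction
        x /= np.linalg.norm(x)                   # retract back onto the sphere
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50)); A = A + A.T   # symmetric test matrix
v = leading_eigenvector(A)
print(np.linalg.norm(A @ v - (v @ A @ v) * v))   # eigenvector residual; small if converged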
Joint work with Paul Breiding.
The canonical tensor rank approximation problem consists of approximating a real-valued tensor by one of low canonical rank. This is a challenging non-linear, non-convex, constrained optimization problem whose constraint set forms a non-smooth semi-algebraic set. We discuss Riemannian optimization methods for solving this problem for small-scale, dense tensors. The proposed method achieves up to two orders of magnitude improvement in computational time on challenging problems, where difficulty is measured by the condition number of the tensor rank decomposition.
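To make the underlying optimization problem concrete, here is a minimal sketch of rank-R canonical (CP) approximation of a third-order tensor by plain alternating least squares. This is a textbook baseline, not the Riemannian method of the talk, and the rank, iteration count, and initialization are illustrative assumptions.

import numpy as np

def khatri_rao(B, C):
    # Column-wise Kronecker product: rows indexed by the pair (j, k), columns by r.
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

def cp_als(T, R=3, iters=200, seed=0):
    # Alternating least squares for min over A, B, C of || T - sum_r A[:,r] o B[:,r] o C[:,r] ||.
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
    T1 = T.reshape(I, -1)                         # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, -1)      # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, -1)      # mode-3 unfolding
    for _ in range(iters):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C)).T   # least-squares update of A
        B = T2 @ np.linalg.pinv(khatri_rao(A, C)).T   # least-squares update of B
        C = T3 @ np.linalg.pinv(khatri_rao(A, B)).T   # least-squares update of C
    approx = np.einsum('ir,jr,kr->ijk', A, B, C)
    return A, B, C, np.linalg.norm(T - approx) / np.linalg.norm(T)

# Usage: approximate a synthetic rank-3 tensor; the relative error should be near zero.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 3)) for n in (6, 7, 8))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
print(cp_als(T, R=3)[-1])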
Unconstrained optimization generalizes naturally to optimization on Riemannian manifolds. Over the years, standard algorithms such as gradient descent, trust-region methods and cubic regularization (to name a few) have been extended to the Riemannian setting, with excellent numerical success. I will show how, under appropriately chosen assumptions, their worst-case iteration complexity analyses carry over seamlessly from the Euclidean to the Riemannian case, and then discuss finer points related to these assumptions.
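As one concrete instance of such a transfer (a standard sketch under illustrative assumptions, not a statement of the talk's specific results): if f is bounded below by f^* and satisfies a Lipschitz-type bound f(R_x(s)) \le f(x) + \langle \operatorname{grad} f(x), s\rangle + \tfrac{L}{2}\|s\|^2 along the retraction R, then Riemannian gradient descent with step 1/L gives

f\big(R_{x_k}(-\tfrac{1}{L}\operatorname{grad} f(x_k))\big) \le f(x_k) - \tfrac{1}{2L}\,\|\operatorname{grad} f(x_k)\|^2,
\qquad\text{so}\qquad
\min_{0 \le k < K} \|\operatorname{grad} f(x_k)\| \le \sqrt{\tfrac{2L\,(f(x_0) - f^\ast)}{K}},

i.e., an \varepsilon-approximate first-order stationary point is reached after at most O\big(L\,(f(x_0)-f^\ast)/\varepsilon^2\big) iterations, exactly as in the Euclidean case. The bound hinges on the Lipschitz-type condition along the retraction, which is the kind of assumption the abstract's last sentence refers to.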