joint work with Carola-Bibiane Schönlieb, Matthias Ehrhardt
Stochastic gradient methods have cemented their role as workhorses for solving problems in machine learning, but it remains unclear whether they can achieve the same success when solving inverse problems in imaging. In this talk, we explore the unique challenges that stochastic gradient methods face on imaging problems. We also propose a family of accelerated stochastic gradient methods with fast convergence rates. Our numerical experiments demonstrate that these algorithms can solve imaging and machine learning problems faster than existing methods.
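As a minimal sketch of the kind of method in question (not the accelerated algorithms proposed in the talk; the model, dimensions, and step size below are hypothetical choices for illustration), consider plain stochastic gradient descent on a least-squares imaging model whose forward operator is split into blocks of measurements:

```python
import numpy as np

# Minimal stochastic gradient sketch for min_x (1/n) * sum_i 0.5*||A_i x - b_i||^2,
# where the forward operator is split into n row blocks of measurements.
# All dimensions and the step size are hypothetical choices for illustration.
rng = np.random.default_rng(0)
n_blocks, rows_per_block, n_pixels = 10, 20, 50
A_blocks = [rng.standard_normal((rows_per_block, n_pixels)) for _ in range(n_blocks)]
x_true = rng.standard_normal(n_pixels)
b_blocks = [A @ x_true for A in A_blocks]

x = np.zeros(n_pixels)
step = 1e-3
for k in range(2000):
    i = rng.integers(n_blocks)                            # sample one measurement block
    g = A_blocks[i].T @ (A_blocks[i] @ x - b_blocks[i])   # stochastic gradient
    x -= step * g                                         # gradient step

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```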
joint work with Coralia Cartis, Jan Fiala
We discuss sketching techniques for sparse Linear Least Squares (LLS). We give theoretical bounds on the accuracy of the sketched solution and residual when hashing matrices are used for sketching, carefully quantifying the trade-off between the coherence of the original matrix and the sparsity of the hashing matrix. We use these bounds to quantify the success of our algorithm, which employs a sparse factorisation of the sketched matrix as a preconditioner for the original LLS problem before applying LSQR. We extensively compare our algorithm to state-of-the-art direct and iterative solvers for large sparse LLS problems.
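To make the sketch-and-precondition pipeline concrete, the following is a simplified stand-in rather than the solver presented in the talk: a hashing matrix with two nonzeros per column sketches the matrix, a QR factorisation of the sketched matrix (dense here purely for simplicity, whereas the talk uses a sparse factorisation) supplies the preconditioner, and LSQR is run on the right-preconditioned problem. All dimensions and densities are hypothetical.

```python
import numpy as np
import scipy.sparse as sp
from scipy.linalg import solve_triangular
from scipy.sparse.linalg import LinearOperator, lsqr

# Illustrative sketch-and-precondition pipeline for min_x ||A x - b||_2:
# a hashing matrix S (s nonzeros of +-1/sqrt(s) per column) sketches A,
# the R factor of S @ A acts as a right preconditioner, and LSQR runs on A R^{-1}.
rng = np.random.default_rng(0)
m, n, m_sketch, s = 2000, 100, 400, 2

A = sp.random(m, n, density=0.02, format="csr", random_state=rng)
b = rng.standard_normal(m)

# Build the hashing matrix: s distinct random rows per column, entries +-1/sqrt(s).
rows = np.concatenate([rng.choice(m_sketch, size=s, replace=False) for _ in range(m)])
cols = np.repeat(np.arange(m), s)
vals = rng.choice([-1.0, 1.0], size=s * m) / np.sqrt(s)
S = sp.coo_matrix((vals, (rows, cols)), shape=(m_sketch, m)).tocsr()

# Factorise the sketched matrix (dense QR for simplicity) and keep R as preconditioner.
R = np.linalg.qr((S @ A).toarray(), mode="r")

# Operator y -> A R^{-1} y and its adjoint, passed to LSQR; recover x = R^{-1} y.
AR_inv = LinearOperator(
    (m, n),
    matvec=lambda y: A @ solve_triangular(R, y),
    rmatvec=lambda z: solve_triangular(R, A.T @ z, trans="T"),
)
y = lsqr(AR_inv, b, atol=1e-12, btol=1e-12)[0]
x = solve_triangular(R, y)
print("residual norm:", np.linalg.norm(A @ x - b))
```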
We introduce a framework for developing and analyzing distributed optimization algorithms that are guaranteed to converge under very general conditions. The setting we consider allows the use of reduced-order and approximate models, and we only assume that these approximate models are consistent with a certain probability. To motivate the theory, we report preliminary results from applying a particular instance of the proposed framework to large-scale non-convex optimization models.
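As a toy illustration of the probabilistic-consistency ingredient only (a serial sketch, not the distributed framework of the talk; the objective, the probability p, and all constants below are hypothetical), consider a gradient method whose model of the gradient is accurate only with probability p, combined with a sufficient-decrease test and an adaptive step size so that steps produced by inconsistent models tend to be rejected:

```python
import numpy as np

# Toy illustration of optimization with models that are only consistent with a
# certain probability: the gradient model is exact with probability p and
# corrupted otherwise; a sufficient-decrease test with an adaptive step size
# rejects steps computed from inconsistent models. All constants are hypothetical.
rng = np.random.default_rng(0)

def f(x):                                   # simple non-convex test objective
    return 0.5 * np.sum(x ** 2) + 2.0 * np.sum(np.cos(x))

def grad(x):
    return x - 2.0 * np.sin(x)

p_consistent = 0.8                          # probability the model is consistent
x = 3.0 * rng.standard_normal(20)
alpha, theta = 1.0, 1e-4                    # step size and sufficient-decrease constant

for k in range(300):
    g = grad(x)
    if rng.random() > p_consistent:         # inconsistent model: corrupted gradient
        g = g + 5.0 * rng.standard_normal(g.shape)
    trial = x - alpha * g
    if f(trial) <= f(x) - theta * alpha * np.dot(g, g):
        x, alpha = trial, min(2.0 * alpha, 1.0)   # accept and expand the step
    else:
        alpha *= 0.5                              # reject and shrink the step

print("final objective value:", f(x))
```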