The randomized Kaczmarz method is an incremental method for solving linear systems of equations. It uses only a single row of the system in each step and hence needs very little storage. The standard Kaczmarz method approximates (in the underdetermined case) the minimum-norm solution. In this talk we will study two extensions of the method: first, we show how to adapt the method to produce sparse solutions of underdetermined systems, and second, we investigate how a mismatch in the adjoint matrix affects the convergence.
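For orientation, the following is a minimal sketch of the standard randomized Kaczmarz iteration with norm-proportional row sampling (not the sparse or mismatched-adjoint variants discussed in the talk); the function name and interface are illustrative assumptions.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=1000, seed=0):
    """Sketch: one randomly chosen row of A is used per step.

    Starting from x = 0, in the consistent underdetermined case the
    iterates approximate the minimum-norm solution of A x = b.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A**2, axis=1)
    probs = row_norms / row_norms.sum()   # sample rows proportional to ||a_i||^2
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)        # single row per iteration
        a_i = A[i]
        # project the current iterate onto the hyperplane a_i^T x = b_i
        x += (b[i] - a_i @ x) / row_norms[i] * a_i
    return x
```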
We consider a constrained convex problem with linear inequalities and provide an inexact penalty reformulation. The novelty is in the choice of the penalty functions, which are smooth and can induce a non-zero penalty at some points in the feasible region of the original problem. For problems with a large number of constraints, a particular advantage of such a smooth penalty-based reformulation is that it renders the penalized problem suitable for fast incremental gradient methods, which require only one sample from the inequality constraints at each iteration (such as SAGA).
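As an illustration of the overall scheme (not the authors' specific penalty functions or the SAGA variance reduction), the sketch below penalizes the constraints A x <= b with a smooth squared-hinge term and takes a plain incremental gradient step that samples one constraint per iteration; grad_f, the penalty weight mu, and the stepsize are assumed inputs.

```python
import numpy as np

def penalty_incremental_gradient(grad_f, A, b, mu=10.0, step=1e-3,
                                 iters=10000, seed=0):
    """Sketch: minimize f(x) + mu * sum_i phi(a_i^T x - b_i)
    with phi(t) = max(t, 0)^2 / 2, sampling one constraint per step."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.integers(m)                    # one inequality constraint
        viol = max(A[i] @ x - b[i], 0.0)       # smooth penalty derivative factor
        # grad_f(x) plus an unbiased estimate of the penalty-sum gradient
        g = grad_f(x) + mu * m * viol * A[i]
        x -= step * g
    return x
```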
We present convergence rates for minibatch stochastic first-order methods with constant or variable stepsize for convex optimization problems, expressed in terms of the expectation operator. We show that these methods achieve a linear convergence rate up to a constant proportional to the stepsize and, under a strong stochastic bounded-gradient condition, even pure linear convergence. Moreover, when a variable stepsize is chosen, we derive sublinear convergence rates that show an explicit dependence on the minibatch size. Applications of these results to convex feasibility problems will also be discussed.
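The basic iteration under discussion can be sketched as follows, assuming a hypothetical interface grad_sample(x, i) that returns the gradient of the i-th component function; the stepsize rules and the convergence constants analyzed in the talk are not modeled here.

```python
import numpy as np

def minibatch_sgd(grad_sample, n_samples, dim, batch=32, step=0.1,
                  iters=1000, seed=0):
    """Sketch of a minibatch stochastic first-order method with
    constant stepsize: average the gradient over a random minibatch."""
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    for _ in range(iters):
        idx = rng.choice(n_samples, size=batch, replace=False)
        g = np.mean([grad_sample(x, i) for i in idx], axis=0)  # minibatch average
        x -= step * g
    return x
```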