We describe fast gradient methods for solving the symmetric nonnegative matrix factorization problem (SymNMF). Building on recent results on non-Euclidean gradient methods, we show that the SymNMF problem is smooth relative to a well-chosen Bregman divergence. This approach yields a simple, hyperparameter-free method with theoretical convergence guarantees. We also discuss accelerated variants. Numerical experiments on clustering problems show that our algorithm scales well and achieves state-of-the-art convergence speed and clustering accuracy among SymNMF methods.
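To make the flavor of such an iteration concrete, here is a minimal Python sketch of a Bregman (non-Euclidean) gradient step for min_{H >= 0} 1/4 ||X - H H^T||_F^2 with a quartic kernel h(H) = alpha/4 ||H||_F^4 + beta/2 ||H||_F^2, for which the constrained Bregman step reduces to clipping followed by a scalar cubic equation. The constants alpha, beta, the step size, and the function name are illustrative placeholders, not the values or implementation from the talk; the actual constants follow from the relative-smoothness analysis.

```python
import numpy as np

def bregman_symnmf_sketch(X, r, alpha=1.0, beta=1.0, step=1.0, iters=500, seed=0):
    """Illustrative Bregman gradient iteration for
    min_{H >= 0} f(H) = 1/4 ||X - H H^T||_F^2
    with kernel h(H) = alpha/4 ||H||_F^4 + beta/2 ||H||_F^2 (constants are placeholders)."""
    rng = np.random.default_rng(seed)
    H = rng.random((X.shape[0], r))
    for _ in range(iters):
        grad = (H @ H.T - X) @ H                               # gradient of f at H
        G = (alpha * np.sum(H**2) + beta) * H - step * grad    # grad h(H) - step * grad f(H)
        Gp = np.maximum(G, 0.0)                                # nonnegativity constraint
        g = np.linalg.norm(Gp)
        if g == 0.0:
            return np.zeros_like(H)
        # Radial part: unique nonnegative root of alpha*t^3 + beta*t = ||G_+||
        roots = np.roots([alpha, 0.0, beta, -g])
        t = np.real(roots[np.isreal(roots)]).max()
        H = (t / g) * Gp
    return H
```

In a clustering application, X would be a symmetric similarity matrix and the rows of the returned H act as soft cluster memberships.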
joint work with Uday Shanbhag
We consider an equivalent formulation of the l0-minimization problem over a polyhedral set as a mathematical program with complementarity constraints (MPCC). It is known that the l0-norm penalty implicitly promotes sparsity of the solution. We show that, under suitable assumptions, KKT points of this formulation are equivalent to its local minimizers. We propose a perturbed variant of proximal ADMM for which each subproblem is tractable and convergence can be claimed under mild assumptions. Preliminary numerics suggest that this scheme may significantly outperform its standard nonconvex ADMM counterpart.
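For orientation, the sketch below shows the plain nonconvex ADMM baseline that such a perturbed scheme would be compared against, on the simple polyhedral instance {x : Ax = b}; it is not the perturbed proximal ADMM or the MPCC reformulation from the talk. The split is min ||z||_0 subject to Ax = b and x = z, so the x-subproblem is a projection onto the affine set and the z-subproblem is the hard-thresholding prox of the l0 term.

```python
import numpy as np

def l0_admm_baseline(A, b, rho=1.0, iters=500):
    """Standard (non-perturbed) nonconvex ADMM sketch for
    min ||z||_0  s.t.  A x = b,  x = z."""
    m, n = A.shape
    AAt_inv = np.linalg.inv(A @ A.T)
    def proj(v):                                   # projection onto {x : A x = b}
        return v - A.T @ (AAt_inv @ (A @ v - b))
    x = proj(np.zeros(n))
    z, u = x.copy(), np.zeros(n)
    thr = np.sqrt(2.0 / rho)                       # threshold of the prox of (1/rho)||.||_0
    for _ in range(iters):
        x = proj(z - u)                            # tractable x-subproblem
        v = x + u
        z = np.where(np.abs(v) > thr, v, 0.0)      # hard thresholding
        u += x - z                                 # dual update
    return z
```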
joint work with Deniz Akkaya
Robust regression analysis using Huber's linear-quadratic loss function has been studied in the context of numerical optimization since the 1980s. Its statistical properties under deviations from normality are well known. We couple the Huber loss function with an L0-norm term to induce sparsity, and study the local and global minimizers of the resulting non-convex objective, inspired by results of Nikolova on least squares regression adjusted for sparsity.
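To make the objective F(x) = sum_i huber_delta(y_i - a_i^T x) + lam ||x||_0 concrete, here is a short Python sketch that searches for a local minimizer by proximal gradient with hard thresholding; this is only an illustration of the objective, not the analysis or method of the talk, and the tuning constant delta = 1.345 is the usual robustness default rather than a value from the abstract.

```python
import numpy as np

def huber_l0_sketch(A, y, lam, delta=1.345, iters=300):
    """Proximal-gradient illustration for
    F(x) = sum_i huber_delta(y_i - a_i^T x) + lam * ||x||_0."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the Huber part
    step = 1.0 / L
    thr = np.sqrt(2.0 * lam * step)            # hard-threshold level for the l0 prox
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        r = y - A @ x
        psi = np.clip(r, -delta, delta)        # Huber influence function of the residual
        x = x + step * (A.T @ psi)             # gradient step on the smooth Huber loss
        x = np.where(np.abs(x) > thr, x, 0.0)  # prox of step*lam*||.||_0
    return x
```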