Wed.1 11:30–12:45 | H 0110 | BIG

Modern Optimization Methods in Machine Learning

Chair: Qingna Li | Organizer: Qingna Li
11:30

Chengjing Wang

joint work with Peipei Tang, Defeng Sun, Kim-Chuan Toh

A sparse semismooth Newton based proximal majorization-minimization algorithm for nonconvex square-root-loss regression problems

In this paper, we consider high-dimensional nonconvex square-root-loss regression problems and introduce a proximal majorization-minimization (PMM) algorithm for solving them. Our key idea for making the proposed PMM algorithm efficient is to develop a sparse semismooth Newton method to solve the corresponding subproblems. By using the Kurdyka-Łojasiewicz property exhibited by the underlying problems, we prove that the PMM algorithm converges to a d-stationary point. Extensive numerical experiments are presented to demonstrate the high efficiency of the proposed PMM algorithm.
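For orientation, a sketch of the model class in generic notation (the symbols and the regularizer below are illustrative, not taken from the paper): the problem has the form

$\min_{x \in \mathbb{R}^n} \; \|Ax - b\|_2 + \lambda\, p(x),$

where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and $p$ is a possibly nonconvex regularizer. One PMM step majorizes the nonconvex part of $p$ at the current iterate $x^k$ by a convex surrogate $\hat p(\,\cdot\,; x^k)$ and adds a proximal term,

$x^{k+1} \in \arg\min_{x} \; \|Ax - b\|_2 + \lambda\, \hat p(x; x^k) + \tfrac{\sigma_k}{2}\|x - x^k\|^2,$

and it is this convex subproblem that a sparse semismooth Newton method is designed to solve efficiently.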

11:55

Peipei Tang

joint work with Dunbiao Niu, Chengjing Wang, Qingsong Wang, Enbin Song

A semismooth Newton based augmented Lagrangian method for solving the support vector machine problems

In this talk, we propose a highly efficient semismooth Newton based augmented Lagrangian method for solving a large-scale convex quadratic programming problem arising from the dual of the SVM, whose constraints consist of one linear equality constraint and a simple box constraint. By leveraging available error bound results to realize the asymptotic superlinear convergence of the augmented Lagrangian method, and by exploiting the second-order sparsity of the problem through the semismooth Newton method, the proposed algorithm can efficiently solve these difficult problems.
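As a point of reference, and in standard SVM notation rather than that of the talk, the dual problem in question can be written as

$\min_{\alpha \in \mathbb{R}^m} \; \tfrac{1}{2}\,\alpha^{\top} Q \alpha - e^{\top}\alpha \quad \text{s.t.} \quad y^{\top}\alpha = 0, \; 0 \le \alpha \le C e,$

where $Q = \mathrm{diag}(y)\,K\,\mathrm{diag}(y)$ for a kernel matrix $K$, $y \in \{\pm 1\}^m$ are the labels, $e$ is the all-ones vector, and $C > 0$ is the penalty parameter: a convex QP with exactly one linear equality constraint and a box constraint, matching the structure described above.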

12:20

Qingna Li

joint work with Juan Yin

A Semismooth Newton Method for Support Vector Classification and Regression

The support vector machine (SVM) is an important and fundamental technique in machine learning. In this talk, we apply a semismooth Newton method to solve two typical SVM models: the L2-loss SVC model and the $\epsilon$-L2-loss SVR model. A common view of the semismooth Newton method is that it enjoys a fast convergence rate but incurs a high computational complexity. Our contribution is that, by exploiting the sparse structure of the models, we significantly reduce the computational complexity while keeping the quadratic convergence rate. Extensive numerical experiments demonstrate the outstanding performance of the method.
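For concreteness, the two models are commonly written as follows (notation chosen here, not necessarily that of the talk):

L2-loss SVC: $\min_{w} \; \tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{m} \max\bigl(0,\, 1 - y_i\, w^{\top} x_i\bigr)^2$

$\epsilon$-L2-loss SVR: $\min_{w} \; \tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{m} \max\bigl(0,\, |w^{\top} x_i - y_i| - \epsilon\bigr)^2$

Both objectives are piecewise quadratic and continuously differentiable with a semismooth gradient, so a semismooth Newton method can be applied to the optimality equation $\nabla f(w) = 0$; the sparse structure exploited in the talk presumably corresponds to the fact that only the samples active at the current iterate contribute to the generalized Hessian.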