Tue.2 13:15–14:30 | H 2013 | BIG

Optimization for Data Science (2/2)

Chair: Jong-Shi Pang
Organizers: Jong-Shi Pang, Ying Cui
13:15

Kim-Chuan Toh

joint work with Xudong Li, Defeng Sun

An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for LP

Powerful interior-point method (IPM) based commercial solvers have been hugely successful in solving large-scale linear programming problems. Their success is more limited when the data is dense, however, and the natural remedy of matrix-free iterative solvers, although it avoids the explicit computation of the coefficient matrix and its factorization, is not practically viable due to the inherent extreme ill-conditioning of the large-scale normal equation arising in each interior-point iteration. To provide a better alternative for solving large-scale LPs with dense data, we propose a semismooth Newton based inexact proximal augmented Lagrangian method.
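The overall structure of an augmented Lagrangian method for LP can be illustrated with a toy sketch. The inner subproblem here is solved by projected gradient steps, used only as a simple stand-in for the semismooth Newton subproblem solver the abstract describes; all names and parameter choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def alm_lp(c, A, b, sigma=1.0, outer=30, inner=200):
    """Toy augmented Lagrangian method for  min c'x  s.t.  Ax = b, x >= 0.

    L_sigma(x, y) = c'x - y'(Ax - b) + (sigma/2)||Ax - b||^2, minimized
    over x >= 0 by projected gradient (a stand-in for semismooth Newton),
    followed by the standard multiplier update.
    """
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    step = 0.9 / (sigma * np.linalg.norm(A, 2) ** 2)  # safe step size
    for _ in range(outer):
        for _ in range(inner):
            grad = c - A.T @ y + sigma * A.T @ (A @ x - b)
            x = np.maximum(x - step * grad, 0.0)      # project onto x >= 0
        y = y - sigma * (A @ x - b)                   # multiplier update
    return x, y

# min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  has solution x = (1, 0), y = 1.
x, y = alm_lp(np.array([1.0, 2.0]), np.array([[1.0, 1.0]]), np.array([1.0]))
```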

13:40

Zhengling Qi

joint work with Yufeng Liu, Jong-Shi Pang

Learning Optimal Individualized Decision Rules with Risk Control

With the emergence of precision medicine, the estimation of optimal individualized decision rules (IDRs) has attracted tremendous attention in many scientific areas. Motivated by complex decision-making procedures and the popular conditional value-at-risk (CVaR), we propose a robust criterion for evaluating IDRs that controls the lower tail of each subject's outcome. The resulting optimal IDRs are robust in controlling adverse events. The related nonconvex optimization algorithm will be discussed. Finally, I will present some optimization challenges in learning optimal IDRs from observational studies.
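The lower-tail quantity underlying this kind of robust criterion can be sketched with an empirical conditional value-at-risk; the function below is a generic illustration of lower-tail CVaR, not the paper's specific criterion.

```python
import numpy as np

def lower_cvar(outcomes, alpha):
    """Empirical lower-tail CVaR: the mean of the worst alpha-fraction of
    outcomes (exact when alpha * n is an integer). Controlling this value
    guards against adverse (low-outcome) events, unlike the plain mean.
    """
    y = np.sort(np.asarray(outcomes, dtype=float))
    k = max(1, int(round(alpha * len(y))))  # number of worst outcomes kept
    return y[:k].mean()
```

For outcomes [1, 2, 3, 4], the mean is 2.5, but the worst-half CVaR (alpha = 0.5) is 1.5, so a rule optimized under CVaR favors subjects' bad-case outcomes rather than the average.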

14:05

Meisam Razaviyayn

First and Second Order Nash Equilibria of Non-Convex Min-Max Games: Existence and Computation

Recent applications arising in machine learning have spurred significant interest in solving min-max saddle-point games. While this problem has been extensively studied in the convex-concave regime, our understanding of the non-convex case is very limited. In this talk, we discuss the existence and computation of first- and second-order Nash equilibria of such games in the non-convex regime. We then present applications of our theory to training Generative Adversarial Networks.
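The notion of a first-order Nash equilibrium (both players' gradients vanish) can be illustrated with plain simultaneous gradient descent-ascent on a simple convex-concave quadratic; this is a generic textbook iteration, not the algorithms of the talk.

```python
def gda(grad_x, grad_y, x, y, eta=0.1, iters=500):
    """Simultaneous gradient descent-ascent: x descends on f, y ascends."""
    for _ in range(iters):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x, y = x - eta * gx, y + eta * gy
    return x, y

# f(x, y) = 0.5*x**2 + x*y - 0.5*y**2 has its saddle point at (0, 0),
# where both partial gradients vanish: a first-order Nash equilibrium.
gx = lambda x, y: x + y   # df/dx
gy = lambda x, y: x - y   # df/dy
x, y = gda(gx, gy, 1.0, 1.0)
```

For non-convex f, such stationarity alone is too weak, which motivates the second-order refinements discussed in the talk.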