Mon.1 11:00–12:15 | H 3025 | ROB

Advances in Optimization Under Uncertainty

Chair: Fatma Kilinc-Karzan | Organizer: Fatma Kilinc-Karzan
11:00

Paul Grigas

New Results for Contextual Stochastic Optimization

We consider a class of stochastic optimization problems where the distribution defining the objective function can be estimated based on a contextual feature vector. We propose an estimation framework that leverages the structure of the underlying optimization problem that will be solved given an observed feature vector. We provide theoretical, algorithmic, and computational results to demonstrate the validity and practicality of our framework.
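The idea of exploiting the downstream optimization problem during estimation can be illustrated with a toy predict-then-optimize pipeline. The sketch below is a hypothetical setup, not the framework from the talk: a contextual newsvendor whose demand depends linearly on a scalar feature, estimated by ordinary least squares.

```python
import statistics

def fit_linear(xs, ys):
    """Ordinary least squares for y ~ a*x + b."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def order_quantity(a, b, x):
    """Decision given context x: with symmetric under-/over-stocking
    costs, the optimal order equals the predicted mean demand."""
    return a * x + b

# Contextual training data: demand = 2*feature + 1 (noise-free toy case).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]
a, b = fit_linear(xs, ys)
q = order_quantity(a, b, x=5.0)  # order for a newly observed context
```

With asymmetric costs the decision would instead be a quantile of the estimated conditional demand distribution; the point is that the estimation target is dictated by the decision problem, not by generic prediction accuracy.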

11:25

Anirudh Subramanyam

joint work with Kibaek Kim

Wasserstein Distributionally Robust Two-Stage Convex Optimization with Discrete Uncertainty

We study two-stage distributionally robust convex programs over Wasserstein balls centered at the empirical distribution. In this setting, recent work has studied single-stage convex programs and two-stage linear programs with continuous uncertain parameters. We study two-stage convex conic programs whose uncertain parameters are supported on mixed-integer-programming-representable sets. We show that these problems also admit equivalent reformulations as finite convex programs, and we propose an exact solution algorithm that scales with the size of the training dataset.
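A flavor of why Wasserstein balls around the empirical distribution lead to tractable finite reformulations can be seen in a much simpler, well-known special case (single-stage, not the two-stage conic setting of this talk): for an L-Lipschitz loss on an unbounded support, the worst-case expectation over a 1-Wasserstein ball of radius eps reduces to the empirical mean plus eps * L.

```python
def worst_case_mean(losses, eps, lip):
    """Worst-case expected loss over a 1-Wasserstein ball of radius eps
    around the empirical distribution of `losses`, for a lip-Lipschitz
    loss on an unbounded support: empirical mean plus eps * lip.
    (A classical single-stage result, used here only as illustration.)"""
    return sum(losses) / len(losses) + eps * lip

losses = [1.0, 2.0, 3.0]          # losses at the training samples
wc = worst_case_mean(losses, eps=0.1, lip=5.0)  # 2.0 + 0.1 * 5.0 = 2.5
```

The two-stage setting with discrete (mixed-integer-representable) support requires a genuinely different reformulation, which is the subject of the talk.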

11:50

Prateek R Srivastava

joint work with Purnamrita Sarkar, Grani A. Hanasusanto

A Semidefinite Programming Based Kernel Clustering Algorithm for Gaussian Mixture Models with Outliers

We consider the problem of clustering data points generated from a mixture of Gaussians in the presence of outliers. We propose a semidefinite programming (SDP) based algorithm that takes the original data as input in the form of a Gaussian kernel matrix, uses the SDP formulation to denoise the data, and applies spectral clustering to recover the true cluster labels. We show that, under a reasonable separation condition, our algorithm is weakly consistent. We compare its performance with existing algorithms such as k-means, vanilla spectral clustering, and other SDP-based formulations.
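The front and back ends of such a pipeline can be sketched without a conic solver. The toy code below builds the Gaussian kernel matrix and runs a crude spectral step (sign of the leading eigenvector of the centered kernel, via power iteration) for two clusters; the SDP denoising stage, which is the core of the talk's algorithm, is deliberately omitted here.

```python
import math

def gaussian_kernel(points, bandwidth):
    """Gaussian kernel matrix K[i][j] = exp(-||xi - xj||^2 / (2 h^2))."""
    n = len(points)
    return [[math.exp(-sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
                      / (2 * bandwidth ** 2)) for j in range(n)]
            for i in range(n)]

def top_eigvec(mat, iters=200):
    """Leading eigenvector by power iteration."""
    n = len(mat)
    v = [float(i + 1) for i in range(n)]  # start not orthogonal to the answer
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

def two_cluster_labels(points, bandwidth=1.0):
    """Two clusters from the sign pattern of the leading eigenvector of
    the (double-)centered kernel matrix. Illustrative stand-in only:
    the SDP denoising step of the actual algorithm is skipped."""
    K = gaussian_kernel(points, bandwidth)
    n = len(K)
    row = [sum(K[i]) / n for i in range(n)]
    tot = sum(row) / n
    C = [[K[i][j] - row[i] - row[j] + tot for j in range(n)] for i in range(n)]
    v = top_eigvec(C)
    return [1 if x > 0 else 0 for x in v]

labels = two_cluster_labels([(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)])
```

On two well-separated point pairs, the sign split recovers the two groups; the role of the SDP step in the actual algorithm is to make this spectral step robust to outliers.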