Tue.4 16:30–17:45 | H 1058 | ROB

Recent Advances in Distributionally Robust Optimization

Chair: Peyman Mohajerin Esfahani | Organizer: Peyman Mohajerin Esfahani
16:30

Jose Blanchet

joint work with Peter Glynn, Zhengqing Zhou

Distributionally Robust Performance Analysis with Martingale Constraints

We consider distributionally robust performance analysis of path-dependent expectations over a distributional uncertainty region that includes both a Wasserstein ball around a benchmark model and martingale constraints. Constraints of this type arise naturally in the context of dynamic optimization. We show that these problems, which are infinite-dimensional in nature, can be approximated with canonical sample complexity (i.e., error decaying as the square root of the number of samples). We also provide various statistical guarantees, likewise in line with canonical statistical rates.

16:55

John Duchi

joint work with Hongseok Namkoong

Distributional Robustness, Uniform Predictions, and Applications in Machine Learning

A common goal in statistics and machine learning is to learn models that perform well under distributional shifts, such as latent heterogeneous subpopulations, unknown covariate shifts, or unmodeled temporal effects. We develop and analyze a distributionally robust stochastic optimization framework that learns a model with good performance under perturbations to the data-generating distribution. We give a convex optimization formulation for the problem and provide several convergence guarantees, including finite-sample upper and lower minimax bounds.
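The abstract does not spell out the formulation, but one well-known instance of this line of work admits a simple closed form: for a chi-square divergence ball around the empirical distribution, the worst-case expected loss is approximately the empirical mean plus a variance penalty. The sketch below assumes the divergence (1/2n)·Σ(n·p_i − 1)² and a radius small enough that the optimal weights stay nonnegative; the function name is illustrative, not from the talk.

```python
import math


def chi2_worst_case_mean(losses, rho):
    """Worst-case mean of `losses` over distributions whose chi-square
    divergence from the empirical distribution is at most `rho`.

    Closed form (mean + sqrt(2 * rho * variance)) is exact only when
    the maximizing weights p_i = 1/n + t*(l_i - mean) remain >= 0,
    i.e., for sufficiently small rho.
    """
    n = len(losses)
    mean = sum(losses) / n
    # Population variance of the observed losses.
    var = sum((l - mean) ** 2 for l in losses) / n
    return mean + math.sqrt(2.0 * rho * var)
```

At radius zero the penalty vanishes and the objective reduces to the ordinary empirical mean, which is one way to sanity-check an implementation.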

17:20

Peyman Mohajerin Esfahani

joint work with Daniel Kuhn, Viet Anh Nguyen, Soroosh Shafieezadeh Abadeh

Wasserstein Distributionally Robust Optimization: Theory and Applications in Machine Learning

In this talk, we review recent developments in the area of distributionally robust optimization, with particular emphasis on the data-driven setting and robustness with respect to a Wasserstein ambiguity set. We argue that this approach has several conceptual and computational benefits. We also show that Wasserstein distributionally robust optimization has interesting ramifications for statistical learning and motivates new approaches to fundamental learning tasks such as classification, regression, maximum likelihood estimation, and minimum mean square error estimation, among others.
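One concrete ramification of the Wasserstein approach for regression is its connection to norm regularization: under a type-1 Wasserstein ball where the transport cost perturbs only the features (measured in the infinity norm, labels held fixed), the worst-case absolute-loss objective equals the empirical mean absolute error plus the ball radius times the L1 norm of the coefficients. The sketch below illustrates that reformulation under those assumptions; the function name is hypothetical.

```python
def robust_regression_objective(theta, X, y, eps):
    """Worst-case absolute-loss regression objective over a type-1
    Wasserstein ball of radius `eps` (feature perturbations only,
    infinity-norm transport cost): empirical MAE + eps * ||theta||_1.
    """
    n = len(y)
    # Empirical mean absolute error of the linear model theta.
    mae = sum(
        abs(yi - sum(t * xj for t, xj in zip(theta, xi)))
        for xi, yi in zip(X, y)
    ) / n
    # The dual of the infinity norm is the 1-norm, hence the L1 penalty.
    l1 = sum(abs(t) for t in theta)
    return mae + eps * l1
```

Setting the radius to zero recovers the nominal (non-robust) empirical objective, which makes the regularization interpretation of the ambiguity radius explicit.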