joint work with Peter Glynn, Zhengqing Zhou
We consider distributionally robust performance analysis of path-dependent expectations over a distributional uncertainty region that combines a Wasserstein ball around a benchmark model with martingale constraints. Constraints of this type arise naturally in dynamic optimization. We show that these problems, although infinite-dimensional in nature, can be approximated with a canonical sample complexity (i.e., the approximation error decays at a square-root rate in the number of samples). We also provide various statistical guarantees, likewise in line with canonical statistical rates.
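As a rough sketch of this problem class (the notation below is mine, not the talk's, and suppresses technical conditions), the worst-case expectation is taken over laws that are both close to a benchmark in Wasserstein distance and satisfy a martingale property:

```latex
\sup_{P \in \mathcal{P}} \; \mathbb{E}_P\!\left[ f(X_0, X_1, \ldots, X_T) \right],
\qquad
\mathcal{P} = \Bigl\{ P : \mathcal{W}(P, P_0) \le \delta, \;
\mathbb{E}_P\!\left[ X_{t+1} \mid X_0, \ldots, X_t \right] = X_t \ \text{for all } t \Bigr\},
```

where $f$ is a path-dependent payoff, $P_0$ is the benchmark model, $\mathcal{W}$ is a Wasserstein distance, and $\delta$ is the uncertainty radius.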
joint work with Hongseok Namkoong
A common goal in statistics and machine learning is to learn models that perform well under distributional shifts, such as latent heterogeneous subpopulations, unknown covariate shifts, or unmodeled temporal effects. We develop and analyze a distributionally robust stochastic optimization framework that learns a model with good performance under perturbations of the data-generating distribution. We give a convex optimization formulation of the problem and provide several convergence guarantees, including finite-sample upper and lower minimax bounds.
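To give a concrete, simplified instance of robustness to latent subpopulations (this is an illustrative special case, not the exact formulation of the talk): the worst-case average loss over any subpopulation carrying at least an alpha fraction of the data equals, up to a boundary-rounding term, the mean of the top alpha fraction of the per-sample losses (a CVaR-type objective).

```python
import numpy as np

def cvar(losses, alpha):
    """Approximate worst-case average loss over any subpopulation of
    mass >= alpha: the mean of the largest ceil(alpha * n) losses.
    (Exact when alpha * n is an integer; otherwise a close upper bound.)"""
    k = max(1, int(np.ceil(alpha * len(losses))))
    return float(np.mean(np.sort(losses)[-k:]))

# Toy per-sample losses: four easy samples and one hard subgroup.
losses = np.array([0.1, 0.2, 0.2, 0.3, 3.0])
print(cvar(losses, 0.2))   # → 3.0: the robust objective focuses on the hardest 20%
print(float(np.mean(losses)))  # → 0.76: the average loss hides the hard subgroup
```

Minimizing such an objective instead of the empirical average forces the model to do well on the worst-performing subpopulations rather than only on average.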
joint work with Daniel Kuhn, Viet Anh Nguyen, Soroosh Shafieezadeh Abadeh
In this talk, we review recent developments in distributionally robust optimization, with particular emphasis on the data-driven setting and on robustness over the so-called Wasserstein ambiguity set. We argue that this approach has several conceptual and computational benefits. We also show that Wasserstein distributionally robust optimization has interesting ramifications for statistical learning and motivates new approaches to fundamental learning tasks such as classification, regression, maximum likelihood estimation, and minimum mean square error estimation, among others.
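One example of the connection to statistical learning (a well-known result in this literature, stated here informally and under simplifying assumptions, e.g. a loss that is Lipschitz in the features with unperturbed labels): the Wasserstein distributionally robust objective reduces to the empirical objective plus a norm penalty,

```latex
\min_{\beta} \; \sup_{P :\, \mathcal{W}(P, \widehat{P}_n) \le \varepsilon}
\mathbb{E}_P\!\left[ \ell(\beta^{\top} x, \, y) \right]
\;=\;
\min_{\beta} \; \mathbb{E}_{\widehat{P}_n}\!\left[ \ell(\beta^{\top} x, \, y) \right]
+ \varepsilon \, L \, \|\beta\|_{*},
```

where $\widehat{P}_n$ is the empirical distribution, $\varepsilon$ is the ambiguity radius, $L$ is the Lipschitz constant of $\ell$ in its first argument, and $\|\cdot\|_{*}$ is the dual of the norm defining the transport cost. This recovers regularized learning methods as exact worst-case formulations.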