Tue.1 10:30–11:45 | H 3012 | ROB

Advances in Modeling Uncertainty via Budget-Constrained Uncertainty Sets in Robust and Distributionally Robust Optimization

Chair: Karthyek Murthy
Organizer: Karthyek Murthy
10:30

Bradley Sturt

joint work with Shimrit Shtern, Dimitris Bertsimas

A Data-Driven Approach for Multi-Stage Linear Optimization

We propose a novel data-driven framework for multi-stage stochastic linear optimization where uncertainty is correlated across stages. The proposed framework optimizes decision rules by averaging over historical sample paths, and, to avoid overfitting, each sample path is slightly perturbed by an adversary. We show that this new framework is asymptotically optimal (even if uncertainty is arbitrarily correlated across stages) and can be tractably approximated using techniques from robust optimization. We also present new results on possible Wasserstein-based alternatives and their limitations.
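The adversarially perturbed sample-average objective described above can be sketched in the following illustrative form (the notation is mine, not taken from the talk: x denotes a decision rule, the ζ̂^i are the N historical sample paths, and ε is the perturbation radius granted to the adversary):

```latex
\min_{x \in \mathcal{X}} \; \frac{1}{N} \sum_{i=1}^{N}
\sup_{\zeta \in \, \mathcal{U}^i_{\varepsilon}} c\bigl(x, \zeta\bigr),
\qquad
\mathcal{U}^i_{\varepsilon} \;=\; \bigl\{ \zeta \;:\; \|\zeta - \hat{\zeta}^i\| \le \varepsilon \bigr\}.
```

With ε = 0 this reduces to ordinary sample-average approximation over the historical paths; the small adversarial balls around each path are what guard against overfitting.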

10:55

Omar El Housni

joint work with Vineet Goyal

On the Optimality of Affine Policies for Budgeted Uncertainty Sets

We study the performance of affine policies for the two-stage adjustable robust optimization problem under budget of uncertainty sets. This problem is hard to approximate within a factor better than \Omega(log n / log log n), where n is the number of decision variables. We show that, surprisingly, affine policies provide an approximation for this class of uncertainty sets that matches this hardness of approximation, thereby further confirming the power of affine policies. Our analysis also yields a significantly faster algorithm for computing near-optimal affine policies.
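To make the "budget of uncertainty" notion concrete, here is a minimal sketch (my own illustration, not code from the talk) of a standard budget set in the Bertsimas–Sim style, U = {ζ in [0,1]^n : Σ ζ_i ≤ Γ}, together with the worst-case value of a linear function over it. The worst case is attained greedily: spend the budget Γ on the largest positive coefficients, fractionally on the last one.

```python
def worst_case_linear(c, gamma):
    """Worst-case value of sum_i c[i] * zeta[i] over the budget set
    {zeta in [0,1]^n : sum_i zeta[i] <= gamma}.

    Greedy argument: the maximizer puts weight 1 on the largest
    positive coefficients until the budget runs out, then a
    fractional weight on the next one.
    """
    total, budget = 0.0, float(gamma)
    for coeff in sorted((v for v in c if v > 0), reverse=True):
        if budget <= 0:
            break
        weight = min(1.0, budget)  # full or fractional deviation
        total += weight * coeff
        budget -= weight
    return total
```

This one-dimensional worst case is what the inner adversary solves coordinate-wise in the simplest robust counterparts; the two-stage adjustable problem studied in the talk layers recourse decisions on top of sets of exactly this shape.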

11:20

Rui Gao

Wasserstein Distributionally Robust Optimization and Statistical Learning

In this talk, we consider a framework, called Wasserstein distributionally robust optimization, that aims to find a solution hedging against a set of distributions that are close to some nominal distribution in the Wasserstein metric. We will discuss the connection between this framework and regularization problems in statistical learning, and study its generalization error bounds.
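One well-known instance of the DRO-regularization connection mentioned above can be sketched as follows (an illustrative special case with assumptions of my choosing, not a statement of the talk's results): for absolute-loss linear regression with a type-1 Wasserstein ball of radius ε around the empirical distribution P̂_N, with transport cost given by a norm on (x, y) over all of R^{d+1}, the robust objective collapses to dual-norm regularized empirical risk:

```latex
\sup_{Q \,:\, W_1(Q, \hat{P}_N) \le \varepsilon}
\mathbb{E}_{Q}\bigl[\, |y - \beta^{\top} x| \,\bigr]
\;=\;
\mathbb{E}_{\hat{P}_N}\bigl[\, |y - \beta^{\top} x| \,\bigr]
\;+\; \varepsilon \,\bigl\| (\beta, -1) \bigr\|_{*}.
```

The right-hand penalty is the Lipschitz constant of the loss ξ = (x, y) ↦ |y - β^T x| in the chosen norm, which is why the Wasserstein radius ε acts exactly like a regularization weight.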