joint work with Shimrit Shtern, Dimitris Bertsimas
We propose a novel data-driven framework for multi-stage stochastic linear optimization in which uncertainty is correlated across stages. The proposed framework optimizes decision rules by averaging over historical sample paths, and, to avoid overfitting, each sample path is slightly perturbed by an adversary. We show that this new framework is asymptotically optimal, even if uncertainty is arbitrarily correlated across stages, and can be tractably approximated using techniques from robust optimization. We also present new results on the limitations of possible alternatives based on the Wasserstein metric.
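A schematic form of the objective described above might look as follows; the notation is illustrative and not taken from the talk:

```latex
% Sketch of a sample-path objective with adversarial perturbation
% (hypothetical notation). Given historical sample paths
% \hat{\xi}^1, \dots, \hat{\xi}^N and a perturbation radius
% \epsilon \ge 0, decision rules x(\cdot) are chosen to solve
\min_{x(\cdot)} \; \frac{1}{N} \sum_{j=1}^{N}
  \sup_{\xi :\, \|\xi - \hat{\xi}^j\| \le \epsilon} c\big(x(\xi), \xi\big),
% where c(\cdot,\cdot) denotes the multi-stage cost and each inner
% supremum lets the adversary slightly perturb one historical
% sample path within a small ball.
```

When \epsilon = 0 this reduces to averaging over the historical sample paths; the adversarial radius is what guards against overfitting.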
joint work with Vineet Goyal
We study the performance of affine policies for the two-stage adjustable robust optimization problem under budget-of-uncertainty sets. This problem is hard to approximate within a factor better than \Omega(\log n / \log\log n), where n is the number of decision variables. We show that, surprisingly, affine policies achieve an approximation guarantee matching this hardness bound for this class of uncertainty sets, further confirming the power of affine policies. Our analysis also yields a significantly faster algorithm for computing near-optimal affine policies.
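For concreteness, a standard form of the problem and policy class discussed above can be sketched as follows; the specific notation is an assumption, not drawn from the talk:

```latex
% Illustrative two-stage adjustable robust problem (hypothetical notation):
z_{\mathrm{AR}} \;=\; \min_{x \ge 0} \; c^\top x
  + \max_{h \in U} \; \min_{y(h) \ge 0} \; d^\top y(h)
  \quad \text{s.t.}\quad A x + B y(h) \ge h \;\; \forall h \in U,
% with a budget-of-uncertainty set
U \;=\; \Big\{ h \in [0,1]^m \;:\; \textstyle\sum_{i=1}^m h_i \le k \Big\}.
% An affine policy restricts the recourse decision to the form
y(h) \;=\; P h + q,
% with (P, q) chosen so that feasibility holds for all h \in U.
```

Restricting y(\cdot) to affine functions of h is what makes the problem tractable; the result above says this restriction loses at most the unavoidable \Omega(\log n / \log\log n) factor.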
In this talk, we consider a framework, called Wasserstein distributionally robust optimization, that aims to find a solution hedging against the set of distributions close to a nominal distribution in the Wasserstein metric. We discuss the connection between this framework and regularization problems in statistical learning, and study its generalization error bounds.
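The framework described above can be written schematically as follows; the symbols are illustrative choices, not notation from the talk:

```latex
% Wasserstein distributionally robust optimization (schematic):
\min_{x} \; \sup_{\mathbb{Q} :\, W_p(\mathbb{Q}, \hat{\mathbb{P}}_N) \le \rho}
  \; \mathbb{E}_{\mathbb{Q}}\big[ \ell(x, \xi) \big],
% where \hat{\mathbb{P}}_N is the nominal (e.g., empirical) distribution,
% W_p is the order-p Wasserstein metric, \rho \ge 0 is the radius of the
% ambiguity ball, and \ell is the loss. For certain loss functions, this
% worst-case expectation coincides with a norm-regularized empirical risk,
% which underlies the connection to regularization in statistical learning.
```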