joint work with Peyman Mohajerin Esfahani, Daniel Kuhn, John Lygeros
We consider linear programming (LP) problems in infinite-dimensional spaces and develop an approximation bridge from the infinite-dimensional LP to tractable finite convex programs, with an explicit quantification of the approximation error. We illustrate our theoretical results on the optimal control problem for Markov decision processes on Borel spaces whose dynamics are unknown and must be learned from data, and derive an explicit probabilistic error bound between the data-driven finite convex program and the original infinite-dimensional LP.
joint work with Karthyek Murthy, Jose Blanchet
We provide a theory of confidence regions for Wasserstein distributionally robust optimization (DRO) formulations. In addition to showing the asymptotic normality of these types of estimators (under natural convex constraints), we characterize optimal confidence regions (in a sense informed by the underlying DRO formulation) and study the asymptotic shape of these regions in a suitable topology defined for convergence of sets.
joint work with Daniel Kuhn, Viet Anh Nguyen, Peyman Mohajerin Esfahani
We bridge the Bayesian and the minimax mean square error estimators by leveraging tools from modern distributionally robust optimization. We show that the optimal estimator and the least favorable distribution form a Nash equilibrium for the Wasserstein ambiguity set and can be computed by solving a single tractable convex program. We further devise a Frank-Wolfe algorithm with a linear convergence rate to solve this convex program, whose direction-finding subproblem admits a quasi-closed-form solution. Using these ingredients, we introduce an image denoiser and a robust Kalman filter.
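To illustrate the algorithmic template mentioned above, the following is a minimal, generic Frank-Wolfe sketch, not the authors' method: it minimizes an illustrative convex quadratic over the probability simplex, a feasible set for which the direction-finding (linear minimization) subproblem has a closed-form solution. All problem data (`A`, `b`, dimensions, iteration count) are hypothetical.

```python
import numpy as np

# Generic Frank-Wolfe sketch: minimize f(x) = 0.5 * ||A x - b||^2 over the
# probability simplex {x >= 0, sum(x) = 1}. The linear-minimization oracle
# over the simplex is closed-form: put all mass on the coordinate where the
# gradient is smallest. Problem data are illustrative, not from the paper.

rng = np.random.default_rng(0)
n, d = 20, 5
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def objective(x):
    return 0.5 * np.sum((A @ x - b) ** 2)

def grad(x):
    return A.T @ (A @ x - b)

x = np.ones(d) / d  # start at the barycenter of the simplex
f0 = objective(x)
for k in range(200):
    g = grad(x)
    # Linear-minimization oracle: vertex e_i with i = argmin_i g_i
    s = np.zeros(d)
    s[np.argmin(g)] = 1.0
    gamma = 2.0 / (k + 2)  # standard diminishing step size
    x = (1 - gamma) * x + gamma * s

# x stays feasible by construction (convex combination of simplex points)
```

Each iterate remains a convex combination of simplex vertices, so feasibility holds automatically; this projection-free structure is what makes Frank-Wolfe attractive when the subproblem, as in the Wasserstein setting described above, can be solved in quasi-closed form.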