Thu.3 13:30–14:45 | H 2033 | CON

Miscellaneous Topics in Conic Programming

Chair: Sandra S. Y. Tan | Organizer: Etienne de Klerk
13:30

Antonios Varvitsiotis

Robust self-testing of quantum systems via noncontextuality inequalities

Self-testing unknown quantum systems is a fundamental problem in quantum information processing. We study self-testing via noncontextuality inequalities and show that the celebrated Klyachko-Can-Binicioğlu-Shumovsky (KCBS) noncontextuality tests admit robust self-testing. Our results rely crucially on the graph-theoretic framework for contextuality introduced by Cabello, Severini, and Winter, combined with well-known uniqueness and robustness properties of solutions to semidefinite programs.
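In the Cabello-Severini-Winter framework, the maximal quantum value of the KCBS noncontextuality inequality equals the Lovász theta number of the 5-cycle, which is √5 (the classical bound is 2). A minimal numerical sketch, assuming the standard Lovász "umbrella" orthonormal representation of C5 (the variable names are ours, not the authors'):

```python
import numpy as np

# Lovász "umbrella" for the 5-cycle C5: five unit vectors whose
# cycle-adjacent pairs are orthogonal, plus a "handle" state.
# The KCBS quantum value then equals theta(C5) = sqrt(5).
c = np.cos(np.pi / 5)
cos2_theta = c / (1 + c)           # squared cosine of the opening angle
sin_theta = np.sqrt(1 - cos2_theta)
cos_theta = np.sqrt(cos2_theta)

# Azimuthal step 4*pi/5 makes neighbors on the 5-cycle orthogonal.
vecs = np.array([
    [sin_theta * np.cos(4 * np.pi * k / 5),
     sin_theta * np.sin(4 * np.pi * k / 5),
     cos_theta]
    for k in range(5)
])
handle = np.array([0.0, 0.0, 1.0])  # the state |psi> (umbrella handle)

# Adjacent vectors are orthogonal: exclusive events in the KCBS test.
for k in range(5):
    assert abs(vecs[k] @ vecs[(k + 1) % 5]) < 1e-12

kcbs_value = sum((vecs[k] @ handle) ** 2 for k in range(5))
print(kcbs_value)  # 2.2360..., i.e. sqrt(5) = theta(C5) > classical bound 2
```

The uniqueness of the optimal SDP solution behind this value is what makes the robust self-testing statement possible.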

13:55

Lucas Slot

joint work with Monique Laurent

Convergence analysis of measure-based bounds for polynomial optimization on compact sets

We investigate a hierarchy of upper bounds introduced by Lasserre (2011) for the minimization of a polynomial f over a set K, obtained by searching for a degree 2r sum-of-squares (sos) density function minimizing the expected value of f over K with respect to a given measure. For the hypercube, de Klerk and Laurent (2018) show that the convergence rate of these bounds to the global minimum is in O(1/r^2). We extend this result to a large subclass of convex bodies. For general compact K, we show a convergence rate in O((log r)/r) under a minor condition and a rate in O(((log r)/r)^2) when K is a convex body.
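In the univariate case, Lasserre's order-r upper bound reduces to a generalized eigenvalue problem over two moment matrices, since minimizing the linear-fractional objective over sos densities is attained at a single square p(x)^2. A minimal sketch for the toy instance f(x) = x^2 on K = [-1, 1] with the Lebesgue measure (our own example, not one from the paper):

```python
import numpy as np
from scipy.linalg import eigh

def lebesgue_moment(k):
    """Integral of x^k over [-1, 1]."""
    return 2.0 / (k + 1) if k % 2 == 0 else 0.0

def lasserre_upper_bound(r):
    """Order-r measure-based upper bound for f(x) = x^2 on [-1, 1].

    Minimizing (a^T A a)/(a^T B a) over coefficient vectors a of p,
    with sigma = p^2 the sos density, is the smallest generalized
    eigenvalue of (A, B), where B holds Lebesgue moments and A the
    moments weighted by f (index shifted by 2 since f = x^2).
    """
    n = r + 1
    B = np.array([[lebesgue_moment(i + j) for j in range(n)]
                  for i in range(n)])
    A = np.array([[lebesgue_moment(i + j + 2) for j in range(n)]
                  for i in range(n)])
    return eigh(A, B, eigvals_only=True)[0]

bounds = [lasserre_upper_bound(r) for r in (1, 2, 4, 8)]
print(bounds)  # decreasing toward the true minimum 0
```

The bounds shrink monotonically toward the global minimum 0, consistent with the O(1/r^2) rate the abstract discusses for convex bodies.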

14:20

Sandra S. Y. Tan

joint work with Antonios Varvitsiotis, Vincent Y. F. Tan

A Unified Framework for the Convergence Analysis of Optimization Algorithms via Sum-of-Squares

Given the popularity of machine learning, understanding the convergence properties of optimization algorithms is of theoretical importance. However, existing convergence proofs consist mostly of ad hoc arguments. We introduce a novel framework for bounding the convergence rates of various algorithms by means of sum-of-squares certificates, thereby unifying their convergence analyses. Using our approach, we recover known convergence bounds for three widely used first-order algorithms, putting forth a promising framework for unifying the convergence analyses of other optimization algorithms.
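One classical rate that any such unified analysis must recover: for an L-smooth convex f, gradient descent with step size 1/L satisfies f(x_k) - f* <= L ||x_0 - x*||^2 / (2k). A numerical sanity check of this known bound on a toy quadratic (our own instance, not the paper's certificates):

```python
import numpy as np

# Check the classical O(1/k) gradient-descent bound
#   f(x_k) - f* <= L * ||x0 - x*||^2 / (2k)
# on f(x) = 0.5 * x^T Q x, where f* = 0 at x* = 0.
Q = np.diag([1.0, 10.0])
L = 10.0                    # largest eigenvalue of Q = smoothness constant
f = lambda x: 0.5 * x @ Q @ x

x = np.array([3.0, -2.0])
x0_dist2 = x @ x            # ||x0 - x*||^2

for k in range(1, 51):
    x = x - (1.0 / L) * (Q @ x)                  # gradient step, size 1/L
    assert f(x) <= L * x0_dist2 / (2 * k) + 1e-12
print(f(x))  # close to f* = 0 after 50 iterations
```

A sum-of-squares certificate for this rate amounts to writing the gap between the bound and f(x_k) - f* as a nonnegative combination of the smoothness and convexity inequalities, which is the kind of algebraic identity the proposed framework searches for automatically.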