Mon.2 13:45–15:00 | H 2053 | NON

Variational Inequalities, Minimax Problems and GANs (1/2)

Chair: Konstantin Mishchenko | Organizer: Konstantin Mishchenko
13:45

Simon Lacoste-Julien

Negative Momentum for Improved Game Dynamics

Games generalize the single-objective optimization paradigm by introducing different objective functions for different players. Differentiable games are often trained by simultaneous or alternating gradient updates. In machine learning, games are gaining new importance through formulations such as generative adversarial networks (GANs) and actor-critic systems. However, compared to single-objective optimization, game dynamics are more complex and less well understood. In this work, we analyze gradient-based methods with negative momentum on simple games and show that alternating updates with negative momentum are stable.
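The effect described in the abstract can be sketched on the simplest differentiable game. Below is a minimal illustration, assuming the toy bilinear objective min_x max_y x·y and illustrative step size, momentum, and iteration values (these are not the talk's exact settings).

```python
import math

def alternating_momentum(beta, lr=0.1, steps=1000):
    """Alternating gradient updates with heavy-ball momentum on min_x max_y x*y."""
    x, y = 1.0, 1.0        # the two players' parameters
    vx, vy = 0.0, 0.0      # momentum buffers
    for _ in range(steps):
        vx = beta * vx - lr * y   # descent for the min player (d/dx of x*y is y)
        x += vx
        vy = beta * vy + lr * x   # ascent for the max player, using the fresh x
        y += vy
    return math.hypot(x, y)       # distance to the equilibrium (0, 0)
```

With beta = 0 this reduces to plain alternating gradient descent-ascent, which cycles around the equilibrium; a moderate negative beta contracts the iterates toward it, while positive momentum drives them away.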

14:10

Dmitry Kovalev

Revisiting Stochastic Extragradient Method

We revisit stochastic extragradient methods for variational inequalities. We propose a new scheme that uses the same stochastic realization of the operator for both steps of the extragradient update. We prove convergence under the assumption that the variance is bounded only at the optimal point, and we generalize the method to the proximal setting. We improve on state-of-the-art results for bilinear problems and analyze the method's convergence for non-convex minimization.
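The key twist, drawing one stochastic sample per iteration and reusing it for both the extrapolation and the update, can be sketched on a noisy bilinear operator. The operator F(x, y) = (y, -x) with additive Gaussian noise is an illustrative toy, not the talk's setting; note its noise variance is bounded at the solution (0, 0), matching the flavor of the assumption above.

```python
import random

def operator(z, xi):
    """One stochastic realization F(z; xi) of a noisy bilinear operator."""
    x, y = z
    return (y + xi, -x + xi)

def same_sample_extragradient(z0=(1.0, 1.0), lr=0.1, steps=2000, seed=0):
    rng = random.Random(seed)
    x, y = z0
    for _ in range(steps):
        xi = rng.gauss(0.0, 0.1)            # draw ONE sample per iteration
        gx, gy = operator((x, y), xi)       # extrapolation uses F(z; xi)
        xh, yh = x - lr * gx, y - lr * gy   # extrapolated (half-step) point
        gx, gy = operator((xh, yh), xi)     # update reuses the SAME xi
        x, y = x - lr * gx, y - lr * gy
    return (x, y)
```

In contrast, the classical stochastic extragradient scheme draws an independent sample for each of the two operator evaluations.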

14:35

Gauthier Gidel

New Optimization Perspectives on Generative Adversarial Networks

Generative adversarial networks (GANs) form a generative modeling approach known for producing appealing samples, but they are notably difficult to train. One common way to tackle this issue has been to propose new formulations of the GAN objective. Yet, surprisingly few studies have looked at optimization methods designed for this adversarial training. In this work, we cast GAN optimization problems in the general variational inequality framework and investigate the effect of stochastic noise in this context.
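The variational inequality view replaces the two competing objectives with a single vector field. A minimal sketch of the construction, assuming the toy objective f(x, y) = x·y in place of a GAN loss: for min_x max_y f(x, y), the associated operator stacks the minimizing player's gradient with the negated gradient of the maximizing player, and a solution z* of the VI satisfies <F(z*), z - z*> >= 0 for all feasible z.

```python
def vi_operator(grad_x, grad_y):
    """Build F(z) = (df/dx, -df/dy) from the two players' gradient functions."""
    def F(z):
        x, y = z
        return (grad_x(x, y), -grad_y(x, y))
    return F

# Toy objective f(x, y) = x * y (illustrative, not a GAN objective):
F = vi_operator(lambda x, y: y, lambda x, y: x)
print(F((1.0, 2.0)))  # (2.0, -1.0)
```

Once the problem is in this form, classical VI methods such as extragradient or averaging can be applied directly to the operator F.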