Main Building of TU Berlin
Straße des 17. Juni 135
10623 Berlin
(OpenStreetMap, Google Maps)
Sat, Aug 3: 08:00 - 09:00
Sun, Aug 4: 08:30 - 09:00
The summer school on Aug 3-4, 2019 at TU Berlin is part of ICCOPT 2019. To participate in the summer school, select the corresponding option during the registration and payment process.
Abstract: Large-scale optimization problems occur in many domains. In this tutorial, I will focus on optimization problems with partial differential equation (PDE) constraints. I will start with formulations for inverse and design problems before introducing the reduced-space approach and adjoints for computing derivatives. I will then cover numerical methods for solving the reduced-space problems, including nonlinear conjugate gradient and limited-memory quasi-Newton methods for bound-constrained optimization, as well as available software packages such as the Toolkit for Advanced Optimization.
Short Bio: Dr. Todd Munson is a computational scientist in the Mathematics and Computer Science Division at Argonne National Laboratory, where he is currently the lead developer of the Toolkit for Advanced Optimization, the area lead in numerical optimization for the FASTMath SciDAC Institute, and the deputy director for the Center for Online Data Analysis and Reduction in the Exascale Computing Project. He received a Presidential Early Career Award for Scientists and Engineers in 2006 for his "pioneering work in the development of algorithms, software, and problem-solving environments for the solution of large-scale optimization problems" and the Beale-Orchard-Hays Prize from the Mathematical Optimization Society in 2003. PATH, an implementation of the Josephy-Newton method that he helped develop, is a popular code for solving nonlinear complementarity problems.
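The tutorial above centers on the reduced-space approach, adjoint-based derivatives, and bound-constrained solvers. As a rough illustration only (not material from the tutorial, and not TAO), here is a minimal sketch of an adjoint gradient for a toy discretized 1D Poisson control problem, solved with SciPy's bound-constrained L-BFGS-B; the problem data and parameter values are placeholder assumptions.

```python
# Minimal sketch (illustration only): adjoint-based reduced-space gradient for the
# toy problem  min_f  0.5*||u - u_d||^2 + 0.5*alpha*||f||^2
# subject to the discretized 1D Poisson equation  A u = f,  with the bound f >= 0.
import numpy as np
from scipy.optimize import minimize

n, alpha = 50, 1e-4                       # assumed problem size and regularization
h = 1.0 / (n + 1)
# 1D finite-difference Laplacian with homogeneous Dirichlet boundary conditions
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
x = np.linspace(h, 1.0 - h, n)
u_d = np.sin(np.pi * x)                   # assumed desired state

def objective_and_gradient(f):
    u = np.linalg.solve(A, f)             # state solve:   A u = f
    p = np.linalg.solve(A.T, u - u_d)     # adjoint solve: A^T p = u - u_d
    J = 0.5 * np.sum((u - u_d) ** 2) + 0.5 * alpha * np.sum(f ** 2)
    grad = p + alpha * f                  # reduced gradient via the adjoint state
    return J, grad

# Reduced-space, bound-constrained problem solved with limited-memory quasi-Newton
res = minimize(objective_and_gradient, np.zeros(n), jac=True,
               method="L-BFGS-B", bounds=[(0.0, None)] * n)
print(res.fun, res.nit)
```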
Abstract: In these lectures, we will introduce a set of PDE-constrained optimization problems of practical interest and explore some of their basic analytical properties in both finite and infinite dimensions. The study will encompass the existence of minima, the existence of Lagrange multipliers, and optimality conditions. The concept of the adjoint state will be particularly highlighted, and the derivation of optimality systems characterizing optimal solutions will be explained by means of different steady-state and time-dependent examples. Finally, some numerical algorithms for the solution of the resulting optimality systems will be presented, together with their main convergence properties.
Links: Presentation
Short Bio: De los Reyes is the Founding Director of the Ecuadorian Research Center on Mathematical Modelling (MODEMAT) and Professor of Optimization and Control at Escuela Politécnica Nacional de Ecuador. He obtained his Ph.D. in 2003 at Karl-Franzens University of Graz and worked from 2005 to 2006 as a postdoctoral researcher at the Technical University of Berlin. In 2009 he was awarded an Alexander von Humboldt Fellowship for Experienced Researchers to carry out research in Germany, and in 2010 he was awarded a J.T. Oden Faculty Fellowship to work at The University of Texas at Austin. He has held Visiting Professor positions at the Humboldt University of Berlin (2010) and at the University of Hamburg (2013). He has been a member of the Ecuadorian Academy of Sciences (ACE) since 2015 and a Fellow of The World Academy of Sciences (TWAS) since 2016.
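To make the adjoint state and the optimality systems mentioned in the abstract above concrete, here is a standard linear-quadratic textbook example (not taken from these lectures): minimize J(u, f) = 1/2 ||u - u_d||^2 + alpha/2 ||f||^2 subject to -Δu = f in Ω with u = 0 on ∂Ω. Its first-order optimality system couples the state equation, the adjoint equation, and a gradient (optimality) condition:

```latex
% Illustrative optimality system for the standard linear-quadratic example above
\begin{aligned}
  -\Delta u    &= f       &&\text{in } \Omega, \qquad u = 0 \text{ on } \partial\Omega  &&\text{(state equation)}\\
  -\Delta p    &= u - u_d &&\text{in } \Omega, \qquad p = 0 \text{ on } \partial\Omega  &&\text{(adjoint equation)}\\
  \alpha f + p &= 0       &&\text{in } \Omega                                           &&\text{(optimality condition)}
\end{aligned}
```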
Abstract: Stochastic gradient descent (SGD), in one of its many variants, is the workhorse method for training modern supervised machine learning models. However, the world of SGD methods is vast and expanding, which makes it hard to understand its landscape and inhabitants. In this tutorial I will offer a guided walk through the zoo of SGD methods. I will chart the landscape of this beautiful world and make it easier to understand its inhabitants and their properties. In particular, I will introduce a unified analysis of a large family of variants of proximal stochastic gradient descent which have so far required different intuitions and convergence analyses, have different applications, and have been developed separately in various communities. This framework covers methods with and without the following tricks, and their combinations: variance reduction, data sampling, coordinate sampling, importance sampling, mini-batching, and quantization. As a by-product, the presented framework offers the first unified theory of SGD and randomized coordinate descent (RCD) methods, the first unified theory of variance-reduced and non-variance-reduced SGD methods, and the first unified theory of quantized and non-quantized methods.
Links: main slides, SGD-SR slides, SEGA slides
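As a rough illustration of one member of the zoo discussed above (plain proximal SGD with mini-batching, not the unified framework from the tutorial), the following sketch minimizes an l1-regularized least-squares objective; the data, step size, and batch size are placeholder assumptions.

```python
# Minimal sketch (illustration only): proximal SGD with mini-batching for
#   min_x  (1/n) * sum_i 0.5*(a_i^T x - b_i)^2 + lam*||x||_1,
# using the soft-thresholding prox of the l1 regularizer.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 1000, 20, 0.01                       # assumed problem size and regularization
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def prox_l1(x, t):
    # proximal operator of t*lam*||.||_1 (soft thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

x = np.zeros(d)
step, batch = 1e-3, 32                           # assumed step size and mini-batch size
for it in range(5000):
    idx = rng.choice(n, size=batch, replace=False)   # data sampling / mini-batching
    g = A[idx].T @ (A[idx] @ x - b[idx]) / batch     # stochastic gradient estimate
    x = prox_l1(x - step * g, step)                  # proximal step
loss = 0.5 * np.mean((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
print(loss)
```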
Schedule
Sat, Aug 3: 08:00-09:00 | 09:00-10:30 | 11:00-12:30 | 12:30-14:00 | 14:00-15:30 | 16:00-17:30 | 17:30-20:00
Sun, Aug 4: 08:30-09:00 | 09:00-10:30 | 11:00-12:30 | 12:30-14:00 | 14:00-15:30 | 16:00-17:30 | 17:00-20:00