Thu.2 10:45–12:00 | H 3025 | DER

Recent Advances in Derivative-Free Optimization (3/3)

Chair: Francesco Rinaldi
10:45

Tyler Chang

joint work with Layne Watson, Thomas Lux

A Surrogate for Local Optimization using Delaunay Triangulations

The Delaunay triangulation is an unstructured simplicial mesh, which is often used to perform piecewise linear interpolation. In this talk, the Delaunay triangulation is presented as a surrogate model for blackbox optimization. If the objective function is smooth, then the estimated gradient in each simplex is guaranteed to converge to the true gradient, and the proposed algorithm will emulate gradient descent. This algorithm is useful when paired with an effective global optimizer, particularly in situations where function evaluations are expensive and gradient information is unavailable.
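The following is a minimal, illustrative sketch (not the authors' code) of the core idea: build a Delaunay triangulation of the sample points with scipy.spatial.Delaunay, locate the simplex containing a query point, and recover the gradient of the piecewise linear interpolant on that simplex. The function name delaunay_gradient and the test data are hypothetical.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_gradient(X, f, x):
    """Estimate the gradient at x from the piecewise linear interpolant
    defined on the Delaunay triangulation of the sample points X."""
    tri = Delaunay(X)
    idx = tri.find_simplex(x)
    if idx == -1:
        raise ValueError("query point lies outside the convex hull of X")
    verts = tri.simplices[idx]      # indices of the d+1 vertices of the simplex
    P = X[verts]                    # (d+1, d) vertex coordinates
    y = f[verts]                    # (d+1,) objective values at the vertices
    # The linear model f(z) ~ g.z + c on this simplex satisfies
    # g.(P_i - P_0) = y_i - y_0 for i = 1..d; solve this system for g.
    A = P[1:] - P[0]
    b = y[1:] - y[0]
    return np.linalg.solve(A, b)

# Hypothetical use on a smooth test function: the estimate approaches the true
# gradient (2*x1, 4*x2) as the sampling becomes denser near x.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 2))
f = np.array([xi[0]**2 + 2 * xi[1]**2 for xi in X])
x = np.array([0.3, -0.2])
print(delaunay_gradient(X, f, x))
```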

11:10

Alexander Matei

joint work with Stefan Ulbrich, Tristan Gally, Peter Groche, Florian Hoppe, Anja Kuttich, Marc Pfetsch, Martin Rakowitsch

Detecting model uncertainty via parameter estimation and optimal design of experiments

In this talk we propose an approach to identify model uncertainty using both parameter estimation and optimal design of experiments. The key idea is to optimize sensor locations and input configurations for the experiments such that the identifiable model parameters can be determined with minimal variance. This enables us to determine confidence regions in which the true parameters, or the parameter estimates obtained from different test sets, have to lie. We test the proposed method on examples from load-carrying mechanical structures. Numerical results are presented.
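As a rough illustration of the experimental-design step, the sketch below greedily selects measurements to maximize the determinant of the Fisher information matrix (a standard D-optimality criterion, which shrinks the parameter confidence ellipsoid). This is an assumption about one possible criterion, not the authors' method; the helper greedy_d_optimal and the sensitivity matrix J are hypothetical.

```python
import numpy as np

def greedy_d_optimal(J, k):
    """Greedily pick k rows of the sensitivity matrix J (one row per candidate
    sensor/measurement, one column per model parameter) so that the Fisher
    information of the selected set has approximately maximal log-determinant."""
    n, p = J.shape
    selected = []
    M = 1e-8 * np.eye(p)          # small regularization so early determinants exist
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            # log-determinant of the information matrix after adding measurement i
            gain = np.linalg.slogdet(M + np.outer(J[i], J[i]))[1]
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
        M = M + np.outer(J[best], J[best])
    return selected

# Hypothetical use: choose 5 of 50 candidate measurements for a 3-parameter model.
rng = np.random.default_rng(1)
J = rng.normal(size=(50, 3))
print(greedy_d_optimal(J, k=5))
```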

11:35

Douglas Vieira

joint work with Adriano Lisboa, Matheus de Oliveira Mendonça

Convergence properties of line search strategies for multi-modal functions

We present line search optimization methods within a mathematical framework built on the simple concept of a v-pattern and its properties, which provides theoretical guarantees that the localizing interval preserves a local optimum no worse than the starting point. The framework can be applied to arbitrary one-dimensional functions, including multimodal functions and functions taking infinite values. Enhanced versions of the golden section and Brent's methods are proposed and analyzed within this framework; they inherit the guarantee of improving local optimality.
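The sketch below illustrates the flavor of such a guarantee, assuming a v-pattern means a bracketing triple a < b < c with f(b) no larger than f(a) and f(c). A golden-section-style refinement that only replaces the interior point when the objective improves returns a point no worse than the starting point b. This is a simplified reading of the abstract, not the authors' algorithm; golden_section_v and the test function are hypothetical.

```python
import numpy as np

def golden_section_v(f, a, b, c, tol=1e-6):
    """Refine a bracketing triple a < b < c with f(b) <= min(f(a), f(c))
    by golden-section steps; the interior point is only replaced when the
    objective improves, so the result is never worse than the start b."""
    invphi = (np.sqrt(5) - 1) / 2       # golden-ratio section factor
    fb = f(b)
    while c - a > tol:
        # probe inside the larger of the two subintervals
        if c - b > b - a:
            x = b + (1 - invphi) * (c - b)
        else:
            x = b - (1 - invphi) * (b - a)
        fx = f(x)
        if fx < fb:                     # improvement: x becomes the interior point
            if x > b:
                a, b, fb = b, x, fx
            else:
                c, b, fb = b, x, fx
        else:                           # no improvement: shrink toward b
            if x > b:
                c = x
            else:
                a = x
    return b, fb

# Hypothetical multimodal example: the returned value is <= f(1.2).
f = lambda t: np.sin(3 * t) + 0.1 * t**2
print(golden_section_v(f, a=0.0, b=1.2, c=3.0))
```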