Expected decrease for derivative-free algorithms using random subspaces

Abstract

When optimising functions that are black-box, noisy and/or computationally expensive, it may be impractical to obtain gradient information, requiring the use of derivative-free optimisation (DFO) algorithms. Compared to traditional nonlinear optimisation methods, DFO methods typically do not scale as well with the number of decision variables. However, recent DFO approaches based on iterative steps in randomly drawn subspaces have shown promise as a way of improving scalability. In this talk, I will outline these approaches, and a novel average-case analysis that demonstrates why lower-dimensional subspaces typically perform well (even though this is not guaranteed by existing theory). This is joint work with Warren Hare (UBC) and Clément Royer (Université Paris-Dauphine PSL).
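To illustrate the general idea (not the specific algorithm or analysis from the talk), the sketch below shows a minimal random-subspace direct-search step in Python: at each iteration, search directions are drawn from a randomly generated low-dimensional subspace of R^n, so the per-iteration work depends on the subspace dimension p rather than the full dimension n. The function name `random_subspace_ds` and all parameter choices here are illustrative assumptions.

```python
import numpy as np

def random_subspace_ds(f, x0, p=2, alpha=1.0, max_iters=200, tol=1e-8, seed=0):
    """Minimal sketch of direct search in random subspaces.

    At each iteration, poll f along +/- directions spanning a random
    p-dimensional subspace of R^n. Illustrative only; not the specific
    method analysed in the talk.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    for _ in range(max_iters):
        if alpha < tol:  # step size too small: stop
            break
        # Basis for a random subspace: p scaled Gaussian directions in R^n.
        P = rng.standard_normal((n, p)) / np.sqrt(p)
        improved = False
        # Poll along plus and minus each subspace direction.
        for d in np.hstack([P, -P]).T:
            trial = x + alpha * d
            ft = f(trial)
            if ft < fx:  # simple decrease; a sufficient-decrease test is also common
                x, fx, improved = trial, ft, True
                break
        # Expand the step on success, shrink it on failure.
        alpha = 2.0 * alpha if improved else 0.5 * alpha
    return x, fx

# Example usage: a smooth test problem in 50 dimensions.
f = lambda x: float(np.sum((x - 1.0) ** 2))
x_best, f_best = random_subspace_ds(f, np.zeros(50), p=3)
```

Even with p much smaller than n, each iteration only needs 2p function evaluations rather than a number growing with n, which is the scalability benefit the subspace approach targets.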

Date
11 Dec 2023
Location
University of Sydney
Lindon Roberts
Lecturer

My research is in numerical analysis and data science, particularly nonconvex and derivative-free optimization.