Expected decrease for derivative-free algorithms using random subspaces [slides available]

Abstract

A promising approach for improving the scalability of derivative-free optimization (DFO) methods is to work in low-dimensional subspaces that are iteratively drawn at random. For such methods, the connection between the subspace dimension and the algorithmic guarantees is not yet fully understood. I will introduce a new average-case analysis for direct search and model-based DFO in random subspaces, which allows us to better understand why working in low-dimensional subspaces often outperforms working in higher-dimensional ones.
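To make the idea concrete, here is a minimal sketch of a direct-search method that polls along directions drawn from a random low-dimensional subspace at each iteration. This is an illustration only, not the algorithm analysed in the talk; the Gaussian sketching matrix, the sufficient-decrease test, and all parameter names are assumptions chosen for the example.

```python
import numpy as np

def subspace_direct_search(f, x0, p=1, alpha0=1.0, max_iters=500,
                           gamma_inc=2.0, gamma_dec=0.5, alpha_min=1e-8,
                           seed=0):
    """Direct search polling in random p-dimensional subspaces (sketch).

    Each iteration draws a random n x p matrix P and polls along its
    columns and their negatives, scaled by the step size alpha.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    n = x.size
    fx = f(x)
    alpha = alpha0
    for _ in range(max_iters):
        if alpha < alpha_min:
            break
        # Random subspace basis: scaled Gaussian directions (an assumption;
        # other sketching distributions could be used instead)
        P = rng.standard_normal((n, p)) / np.sqrt(p)
        improved = False
        for j in range(p):
            for d in (P[:, j], -P[:, j]):
                x_trial = x + alpha * d
                f_trial = f(x_trial)
                # Accept only if a (small) sufficient decrease is achieved
                if f_trial < fx - 1e-8 * alpha**2:
                    x, fx = x_trial, f_trial
                    improved = True
                    break
            if improved:
                break
        # Expand the step on success, contract it on failure
        alpha = gamma_inc * alpha if improved else gamma_dec * alpha
    return x, fx
```

With `p = 1` the method polls only two directions per iteration regardless of the ambient dimension `n`, which is the source of the scalability gain discussed in the abstract.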

Date
28 Jun 2024
Location
University of Padova
Lindon Roberts
Lecturer

My research is in numerical analysis and data science, particularly nonconvex and derivative-free optimization.