Recent & Upcoming Talks

2024

Randomised Subspace Methods for Scalable Derivative-Free Optimisation [slides available]

Most algorithms for optimising nonlinear functions rely on access to (possibly stochastic) derivative information. However, for …

Expected decrease for derivative-free algorithms using random subspaces [slides available]

A promising approach for improving the scalability of DFO methods is to work in low-dimensional subspaces that are iteratively drawn …
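
The abstract is truncated above, but the rough idea of iterating over randomly drawn subspaces can be sketched in a few lines. The sketch below is only an illustrative assumption (the function name `random_subspace_dfo`, the Gaussian sketch matrix, the forward-difference gradient estimate, and the backtracking step are all my own choices, not the algorithm presented in the talk):

```python
import numpy as np

def random_subspace_dfo(f, x0, p=2, iters=200, h=1e-6, step0=1.0, seed=0):
    """Illustrative random-subspace DFO loop (not the algorithm from the talk).

    Each iteration draws a random p-dimensional subspace, estimates the
    gradient of the reduced function z -> f(x + P z) by forward differences
    (p extra evaluations), and takes a backtracking step along its negative.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    for _ in range(iters):
        P = rng.standard_normal((n, p)) / np.sqrt(p)   # random sketch matrix
        g = np.array([(f(x + h * P[:, j]) - fx) / h for j in range(p)])
        d = -P @ g                                     # search direction in the full space
        t = step0
        while t > 1e-12 and f(x + t * d) >= fx:        # backtracking using function values only
            t *= 0.5
        if t > 1e-12:
            x = x + t * d
            fx = f(x)
    return x, fx

# Usage: minimise a simple smooth function in R^100 using only function values.
if __name__ == "__main__":
    f = lambda v: float(np.sum((v - 1.0) ** 2))
    x, fx = random_subspace_dfo(f, np.zeros(100), p=5, iters=500)
    print(fx)
```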

Differentiation: the good, the bad, and the ugly [slides available]

For most (sufficiently simple) smooth functions, a high school calculus student can compute the derivative by repeated application …
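
The truncated abstract refers to computing derivatives by repeatedly applying standard rules. As a minimal illustration of doing this mechanically on a computer, the sketch below implements forward-mode algorithmic differentiation with dual numbers; the `Dual` class and the example function are assumptions for illustration, not material taken from the talk:

```python
import math

# Illustrative sketch of forward-mode differentiation via dual numbers;
# the class and example are assumptions, not taken from the talk.
class Dual:
    """Carries a value and the derivative of that value w.r.t. one chosen input."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)  # product rule
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)  # chain rule

# d/dx of x*sin(x) + 3x at x = 1.2, exact up to floating-point rounding.
x = Dual(1.2, 1.0)
y = x * sin(x) + 3 * x
print(y.val, y.dot)   # y.dot == sin(1.2) + 1.2*cos(1.2) + 3
```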

2023

Expected decrease for derivative-free algorithms using random subspaces [slides available]

When optimising functions that are black-box, noisy and/or computationally expensive, it may be impractical to get gradient …

Large-scale derivative-free optimization using random subspace methods [slides available]

Many standard optimization algorithms require being able to cheaply and accurately compute derivatives for the objective and/or …

Analyzing Inexact Hypergradients for Bilevel Learning [slides available]

Estimating hyperparameters has been a long-standing problem in machine learning. We consider the case where the task at hand is …

2022

Analyzing Inexact Hypergradients for Bilevel Learning [slides available]

Estimating hyperparameters is an important and long-standing problem in machine learning. We consider the case where hyperparameter …

Black-Box Optimisation Techniques for Complex Systems [slides available]

I will introduce some optimisation techniques suitable for complex systems, such as those involving significant computation or …

Large-scale derivative-free optimization using random subspace methods

Many standard optimization algorithms require being able to cheaply and accurately compute derivatives for the objective and/or …

Large-scale derivative-free optimization using random subspace methods [slides and video available]

Many standard optimization algorithms require being able to cheaply and accurately compute derivatives for the objective and/or …

2021

Derivative-Free Optimization with Convex Constraints [slides available]

When optimizing functions which are computationally expensive and/or noisy, gradient information is often impractical to obtain or …

Derivative-Free Optimization with Convex Constraints [slides available]

When optimizing functions which are computationally expensive and/or noisy, gradient information is often impractical to obtain or …

Inexact Derivative-Free Optimization for Bilevel Learning [slides available]

When variational regularisation methods are used to solve inverse problems, they suffer from the drawback of having potentially many …

Inexact Derivative-Free Optimization for Bilevel Learning [slides available]

When variational regularisation methods are used to solve inverse problems, they suffer from the drawback of having potentially many …

Scalable Subspace Methods for Derivative-Free Nonlinear Least-Squares Optimization [slides available]

When optimizing functions which are computationally expensive and/or noisy, gradient information is often impractical to obtain or …

Large-Scale Derivative-Free Optimization using Subspace Methods [slides available]

In existing techniques for model-based derivative-free optimization, the computational cost of constructing local models and Lagrange …

Large-Scale Derivative-Free Optimization using Subspace Methods [slides available]

In existing techniques for model-based derivative-free optimization, the computational cost of constructing local models and Lagrange …

Large-Scale Derivative-Free Optimization using Subspace Methods [slides available]

In existing techniques for model-based derivative-free optimization, the computational cost of constructing local models and Lagrange …

Inexact Derivative-Free Optimization for Bilevel Learning [slides available]

When variational regularisation methods are used to solve inverse problems, they suffer from the drawback of having potentially many …

2020

Block Methods for Scalable Derivative-Free Optimisation [slides available]

Derivative-free optimisation (DFO) methods are an important class of optimisation routines with applications in areas such as image …

Scalable Derivative-Free Optimization for Nonlinear Least-Squares Problems [slides & video available]

Derivative-free optimisation (DFO) methods are an important class of optimisation routines with applications in areas such as image …

Inexact Derivative-Free Optimisation for Bilevel Learning [slides available]

When variational regularisation methods are used to solve inverse problems, they suffer from the drawback of having potentially many …

Derivative-free optimisation for least-squares problems [slides available]

Least-squares problems (such as parameter estimation) are ubiquitous across quantitative disciplines. Optimisation algorithms for …

2019

Improving the scalability of model-based derivative-free optimization [slides available]

Derivative-free optimization (DFO) methods are an important class of optimization routines for many problems in data science, such as …

Improving the efficiency and robustness of black-box optimisation

In classical nonlinear optimisation, the availability of first-order information is crucial to constructing accurate local models for …

Derivative-Free Algorithms for Nonconvex Optimisation

Classical nonconvex optimisation algorithms require the availability of gradient evaluations for constructing local approximations to …

Improving the scalability of derivative-free optimization for nonlinear least-squares problems

In existing techniques for model-based derivative-free optimization, the computational cost of constructing local models and Lagrange …

Improving the scalability of derivative-free optimization for nonlinear least-squares problems

In existing techniques for model-based derivative-free optimization, the computational cost of constructing local models and Lagrange …

Improving the scalability of derivative-free optimization for nonlinear least-squares problems

In existing techniques for model-based derivative-free optimization, the computational cost of constructing local models and Lagrange …

Improving the efficiency and robustness of black-box optimisation

In classical nonlinear optimisation, the availability of first-order information is crucial to constructing accurate local models for …

Improving the Flexibility and Robustness of Derivative-Free Optimisation Solvers

Classical nonlinear optimisation algorithms require the availability of gradient evaluations for constructing local approximations to …

2018

Improving the Flexibility and Robustness of Derivative-Free Optimization Solvers

Classical nonlinear optimization algorithms require the availability of gradient evaluations for constructing local approximations to …

A flexible, robust and efficient derivative-free solver for least-squares

We present DFO-LS, a software package for derivative-free optimization (DFO) of nonlinear least-squares problems, which has simplified …

Improving the efficiency of derivative-free methods for nonlinear least squares problems

We present DFO-GN, a derivative-free version of the Gauss-Newton method for solving nonlinear least-squares problems. As is common in …
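
To give a feel for the Gauss-Newton idea behind derivative-free least-squares solvers, the sketch below estimates the Jacobian of the residuals by forward differences and solves the linearised least-squares problem at each step. This is only an illustrative assumption: DFO-GN itself builds interpolation models of the residuals inside a trust region, and the name `fd_gauss_newton` and the test problem are mine, not from the talk.

```python
import numpy as np

def fd_gauss_newton(r, x0, iters=20, h=1e-7):
    """Gauss-Newton with a forward-difference Jacobian (illustrative sketch only;
    DFO-GN itself uses interpolation models of the residuals in a trust region)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        rx = r(x)
        # Estimate the Jacobian column by column from residual evaluations.
        J = np.column_stack([(r(x + h * e) - rx) / h for e in np.eye(x.size)])
        # Gauss-Newton step: minimise || J dx + r(x) || in the least-squares sense.
        dx, *_ = np.linalg.lstsq(J, -rx, rcond=None)
        x = x + dx
    return x

# Usage: fit y = a * exp(b * t) to synthetic data using residual values alone.
if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 20)
    y = 2.0 * np.exp(-1.5 * t)
    residual = lambda params: params[0] * np.exp(params[1] * t) - y
    print(fd_gauss_newton(residual, np.array([1.0, 0.0])))   # approaches [2.0, -1.5]
```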

2017

Derivative-Free Optimisation Methods for Nonlinear Least-Squares Problems

Derivative-free optimisation (DFO) algorithms are a category of optimisation methods for situations when one is unable to compute or …