When the objective function is computationally expensive and/or noisy, gradient information is often impractical to obtain or inaccurate. As a result, so-called ‘derivative-free’ optimization methods are a suitable alternative. In existing techniques for derivative-free optimization, the linear algebra cost of constructing function approximations grows rapidly with the problem dimension. As a result, these algorithms are not as suitable for large-scale problems as derivative-based methods. In this talk, I will discuss a new derivative-free algorithm based on exploration of random subspaces, its worst-case complexity bounds, and some numerical results. This is joint work with Coralia Cartis (Oxford).
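
For intuition only, the following is a minimal sketch (not the algorithm presented in the talk) of the general idea of restricting derivative-free steps to a random low-dimensional subspace: at each iteration a random n-by-p sketch matrix defines a p-dimensional subspace, and candidate points are generated and compared using function values alone. All names and parameters here are illustrative assumptions.

    import numpy as np

    def random_subspace_search(f, x0, n_iters=200, p=2, sigma0=1.0, seed=0):
        """Toy direct search in random p-dimensional subspaces (illustrative only)."""
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        fx, sigma, n = f(x), sigma0, x.size
        for _ in range(n_iters):
            # Draw a random n x p sketch matrix defining this iteration's subspace.
            P = rng.standard_normal((n, p)) / np.sqrt(p)
            improved = False
            # Poll along +/- each subspace direction using function values only.
            for j in range(p):
                for s in (+1.0, -1.0):
                    trial = x + s * sigma * P[:, j]
                    ft = f(trial)
                    if ft < fx:
                        x, fx, improved = trial, ft, True
                        break
                if improved:
                    break
            # Expand the step on success, shrink it on failure.
            sigma = 2.0 * sigma if improved else 0.5 * sigma
        return x, fx

    # Example: minimize a simple quadratic in 100 dimensions.
    if __name__ == "__main__":
        f = lambda z: float(np.sum(z**2))
        x_best, f_best = random_subspace_search(f, np.ones(100))
        print(f_best)

The point of the sketch is that per-iteration work involving the sampled directions scales with the subspace dimension p rather than the full dimension n, which is the motivation for subspace-based methods in large-scale derivative-free optimization.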