In existing techniques for model-based derivative-free optimization, the computational cost of constructing local models and maintaining Lagrange polynomials can be high. As a result, these algorithms scale less well to large problems than derivative-based methods. In this talk, I will discuss a model-based derivative-free algorithm that explores random subspaces, its worst-case complexity bounds, and some numerical results.