A promising approach for improving the scalability of derivative-free optimization (DFO) methods is to work in low-dimensional subspaces that are drawn at random at each iteration. For such methods, the connection between the subspace dimension and the algorithmic guarantees is not yet fully understood. I will introduce a new average-case analysis of direct search and model-based DFO in random subspaces, which helps explain why working in low-dimensional subspaces often outperforms working in higher-dimensional ones.
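To make the setting concrete, the sketch below illustrates direct search restricted to randomly drawn p-dimensional subspaces. It is a minimal illustration under stated assumptions (Gaussian sketch matrices, a quadratic sufficient-decrease test, and simple step-size updates), not the specific algorithms or analysis presented in the talk; the function names and parameters are hypothetical.

```python
import numpy as np

def random_subspace_direct_search(f, x0, p=2, alpha0=1.0, max_iters=200,
                                  tol=1e-8, seed=0):
    """Illustrative sketch: direct search polling in a random p-dimensional
    subspace of R^n at each iteration. Assumptions (not from the source):
    Gaussian sketches, sufficient-decrease acceptance, expand/shrink steps."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    n = x.size
    alpha = alpha0
    fx = f(x)
    for _ in range(max_iters):
        if alpha < tol:
            break
        # Draw a random subspace: the rows of P span a p-dimensional
        # subspace of R^n (scaled Gaussian sketch).
        P = rng.standard_normal((p, n)) / np.sqrt(p)
        improved = False
        # Poll along +/- each subspace direction: 2p evaluations per iteration,
        # independent of the ambient dimension n.
        for d in np.vstack([P, -P]):
            trial = x + alpha * d
            ft = f(trial)
            # Sufficient-decrease test, a common acceptance rule in direct search.
            if ft < fx - 1e-4 * alpha**2:
                x, fx = trial, ft
                improved = True
                break
        # Expand the step size on success, shrink after an unsuccessful poll.
        alpha = 2.0 * alpha if improved else 0.5 * alpha
    return x, fx

# Example usage on a simple quadratic in dimension n = 50:
if __name__ == "__main__":
    n = 50
    x_opt, f_opt = random_subspace_direct_search(
        lambda x: float(np.dot(x, x)), x0=np.ones(n), p=2)
    print(f_opt)
```

The per-iteration cost here scales with the subspace dimension p rather than the ambient dimension n, which is the source of the scalability benefit; the trade-off against the quality of the randomly drawn directions is what the dimension-dependent analysis addresses.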