We discuss random and deterministic subspace methods for nonconvex optimisation problems. We are interested in the global optimisation of functions with low effective dimensionality, which vary only along a few important directions or components. We show that the effective subspace of variation can be efficiently learned in advance of the optimisation process; we contrast this with random embedding techniques that focus directly on optimisation rather than learning. For local optimisation, time permitting, we will also discuss efficient choices of subspaces that blend randomisation techniques with expert deterministic choices.
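To illustrate the random embedding idea mentioned above, here is a minimal sketch (not the speaker's actual method): a 100-dimensional function with effective dimensionality 2 is minimised by composing it with a random Gaussian matrix `A` and searching only over the low-dimensional embedded variable. All names, dimensions, and the choice of a general-purpose local solver are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

D, d = 100, 2  # ambient dimension and embedding dimension (illustrative)

# A function of D variables with effective dimensionality 2:
# it varies only along the first two coordinates.
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

# Random embedding: search over y in R^d, but evaluate f at A @ y in R^D.
# With probability one, the random subspace spanned by A's columns
# intersects the effective subspace, so the minimiser is reachable.
A = rng.standard_normal((D, d))
res = minimize(lambda y: f(A @ y), x0=np.zeros(d))

x_star = A @ res.x  # approximate minimiser lifted back to R^D
```

This is the basic template behind random-embedding approaches: the low-dimensional search succeeds because the objective is constant along the directions the embedding misses.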