In many fields of science, remarkably comprehensive and realistic computational models are available nowadays.
Often, the corresponding numerical calculations require powerful supercomputers, and therefore only a limited number of cases can be investigated explicitly.
This prevents straightforward approaches to important tasks like uncertainty quantification and sensitivity analysis.
This challenge can be overcome with our recently developed sensitivity-driven dimension-adaptive sparse grid interpolation strategy.
The method exploits, via adaptivity, the structure of the underlying model (such as lower intrinsic dimensionality and anisotropic coupling of the uncertain inputs) to enable efficient and accurate uncertainty quantification and sensitivity analysis at scale.
We demonstrate the efficiency of our approach in the context of fusion research.
In a realistic and computationally expensive scenario of turbulent transport in a magnetic confinement device comprising more than $264$ million degrees of freedom and eight uncertain parameters, our approach required a total of only $57$ high-fidelity simulations.
In contrast, a full-grid approach with only three points per dimension would require $3^8 = 6{,}561$ high-fidelity simulations, which is computationally infeasible.
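The cost gap quoted above follows directly from the combinatorics of tensor grids; a minimal sketch of the arithmetic (using only the figures stated in the text, not a re-implementation of the adaptive algorithm itself):

```python
# Cost of a full tensor grid versus the reported adaptive sparse grid run.
# The numbers (8 uncertain parameters, 3 points per dimension, 57 adaptive
# simulations) come from the text; everything else is plain arithmetic.
n_params = 8
points_per_dim = 3

full_grid = points_per_dim ** n_params  # 3^8 = 6561 high-fidelity simulations
adaptive = 57                           # simulations needed by the adaptive method

print(full_grid)             # 6561
print(full_grid // adaptive) # roughly two orders of magnitude fewer runs
```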
Moreover, we show that our method intrinsically provides an accurate reduced model that is nine orders of magnitude cheaper than the high-fidelity model.