Learning hierarchies of reduced-dimension and context-aware low-fidelity models for multi-fidelity Monte Carlo sampling

Ionut-Gabriel Farcas (U of Texas, Austin)

May 04, 2022, 16:15–16:45

In traditional model reduction, low-cost low-fidelity models are explicitly constructed to replace computationally expensive high-fidelity models and thereby speed up computations. In contrast, multi-fidelity methods use low- and high-fidelity models together; the primary purpose of the low-fidelity models is therefore to support computations with the high-fidelity models rather than to approximate and replace them.

In the first part of this talk, we introduce a Data-driven Multi-fidelity Monte Carlo approach in which a hierarchy of low-fidelity models is constructed using both the full set of uncertain inputs and subsets comprising only selected, important parameters. We illustrate the power of this method by applying it to a realistic plasma turbulence problem with 14 stochastic parameters, demonstrating that it is about two orders of magnitude more efficient than standard Monte Carlo sampling in terms of single-core performance, which translates into a runtime reduction from around eight days to about one hour on 240 cores of a parallel machine.
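
As background for the estimator underlying this approach, the sketch below shows a generic two-model multi-fidelity Monte Carlo estimator in Python. The models f_hi and f_lo and all numerical values are hypothetical stand-ins for illustration, not the plasma turbulence models or the model hierarchy from the talk.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-ins for an expensive high-fidelity model and a
    # cheap, correlated low-fidelity surrogate (not the talk's models).
    def f_hi(z):
        return np.sin(z).sum(axis=1) + 0.1 * (z ** 2).sum(axis=1)

    def f_lo(z):
        return np.sin(z).sum(axis=1)

    d = 14                 # number of uncertain inputs
    m_hi, m_lo = 50, 5000  # evaluation counts, m_hi <= m_lo

    # Shared inputs: the first m_hi low-fidelity samples reuse the
    # high-fidelity inputs, as the control-variate coupling requires.
    z = rng.uniform(-1.0, 1.0, size=(m_lo, d))
    y_hi = f_hi(z[:m_hi])
    y_lo = f_lo(z)

    # Optimal control-variate weight alpha = Cov(hi, lo) / Var(lo),
    # estimated from the shared samples.
    cov_mat = np.cov(y_hi, y_lo[:m_hi])
    alpha = cov_mat[0, 1] / cov_mat[1, 1]

    # Two-model multi-fidelity Monte Carlo estimate of E[f_hi]:
    # the high-fidelity mean plus a low-fidelity correction term.
    s_mfmc = y_hi.mean() + alpha * (y_lo.mean() - y_lo[:m_hi].mean())
    print(f"MFMC estimate: {s_mfmc:.4f}")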

In the second part, we present our Context-aware Multi-fidelity Monte Carlo sampling algorithm, in which context-aware low-fidelity models are explicitly constructed to be used together with high-fidelity models. This is realized by quasi-optimally trading off adapting the low-fidelity models (to improve their deterministic approximation quality) with sampling the models (to reduce the statistical error). Our analysis shows that the quasi-optimal computational effort to spend on improving the low-fidelity models is bounded, meaning that low-fidelity models can become too accurate for multi-fidelity methods, in stark contrast to traditional model reduction. We illustrate our context-aware algorithm on a realistic plasma turbulence simulation with 12 uncertain parameters, in which, for example, only 263 high-fidelity samples are necessary to train a fully-connected feed-forward deep neural network model.
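
To make the trade-off concrete, the following sketch uses the standard two-model multi-fidelity Monte Carlo error formula, with high-fidelity standard deviation sigma_1, model costs w_1 and w_2, and model correlation rho. The training-cost function c(n) and the dependence of rho and w_2 on the training effort n are schematic assumptions for illustration, not the talk's exact analysis.

    % Of a total budget p, c(n) is invested in n units of training for
    % the low-fidelity model, leaving p - c(n) for sampling. The
    % quasi-optimal training effort minimizes the resulting
    % mean-squared error of the two-model estimator:
    \[
      n^{*} \in \operatorname*{arg\,min}_{n}\;
      \frac{\sigma_1^{2}}{p - c(n)}
      \left( \sqrt{w_1 \bigl(1 - \rho(n)^{2}\bigr)}
             + \sqrt{w_2(n)}\,\lvert \rho(n) \rvert \right)^{2}.
    \]
    % Because additional training both shrinks 1 - rho(n)^2 and eats
    % into the sampling budget p - c(n), n^* stays bounded as p grows:
    % past a point, effort is better spent on sampling than on making
    % the low-fidelity model more accurate.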

Further Information
Venue:
ESI Boltzmann Lecture Hall
Associated Event:
Computational Uncertainty Quantification: Mathematical Foundations, Methodology & Data (Thematic Programme)
Organizer(s):
Clemens Heitzinger (TU Vienna)
Fabio Nobile (EPFL, Lausanne)
Robert Scheichl (U Heidelberg)
Christoph Schwab (ETH Zurich)
Sara van de Geer (ETH Zurich)
Karen Willcox (U of Texas, Austin)