Due to the Covid-19 pandemic, the thematic program "Computational Uncertainty Quantification: Mathematical Foundations, Methodology & Data" has been canceled and will be re-proposed for May 2 to June 24, 2022 with the same workshop structure, including an opening workshop on "Multilevel and multifidelity sampling methods in UQ for PDEs" as originally planned.
As a precursor to this event in 2022, however, we propose a reduced version of this opening workshop, to be held online on May 4-5, 2020, from 14:00 to 19:00 CET, with the aim of initiating a 2-year collaborative ESI research effort in analysis and computation for UQ.
The reduced online version of the workshop includes 8 speakers and one recorded talk:
Tiangang Cui (Monash University, Melbourne)
Josef Dick (UNSW Sydney)
Alex Gorodetsky (University of Michigan)
Abdul-Lateef Haji-Ali (Heriot-Watt University, Edinburgh)
Dirk Nuyens (KU Leuven)
Benjamin Peherstorfer (Courant Institute)
Raul Tempone (RWTH Aachen & KAUST)
Elisabeth Ullmann (TU Munich)
Matti Vihola (University of Jyväskylä)
as well as some "round table" online discussion and interaction.
Follow-Up Session & Collaboration Kick-Off on May 13, 2020, 15:30 - 16:30
Description of the Workshop
This workshop will cover multilevel and multifidelity sampling methods in uncertainty quantification for PDEs, with a particular focus on moving beyond forward propagation of uncertainty.
Powerful and attractive tools for uncertainty propagation and for Bayesian inference in random and parametric PDEs are multilevel sampling approaches, such as multilevel Monte Carlo, multilevel quasi-Monte Carlo, and multilevel stochastic collocation, to name but a few. These methods exploit the natural hierarchies of numerical approximations (mesh size, polynomial degree, truncations of expansions, regularisations, model order reduction) to efficiently tackle the arising high- or infinite-dimensional quadrature problems. Their efficiency is based on variance reduction through a systematic use of control variates and importance sampling (in the stochastic setting) and of the sparse grid recombination idea (in the deterministic setting). The variance reduction is intrinsically tied to a priori and a posteriori error control in the numerical approximations, which in turn has spawned a resurgence in fundamental research in the mathematical and numerical analysis of PDEs with spatiotemporally heterogeneous data.
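To illustrate how such a hierarchy yields variance reduction, the following is a minimal multilevel Monte Carlo sketch on a toy problem (not taken from the workshop material): the quantity of interest is the expectation over a uniform random parameter of an integral that is approximated by a midpoint rule, with level l using 2^l subintervals. The telescoping sum combines many cheap coarse-level samples with few expensive fine-level corrections, each correction evaluated on the *same* random inputs so that its variance is small.

```python
import numpy as np

rng = np.random.default_rng(0)

def P(theta, level):
    """Level-`level` approximation of the QoI: midpoint rule for
    int_0^1 exp(theta * x) dx with 2**level subintervals."""
    n = 2 ** level
    x = (np.arange(n) + 0.5) / n
    return np.exp(np.outer(theta, x)).mean(axis=1)

def mlmc(L, N):
    """Multilevel Monte Carlo estimate of E_theta[int_0^1 exp(theta x) dx],
    theta ~ U(0, 1): the coarse level plus telescoping corrections
    P_l - P_{l-1}, each averaged over N[l] samples that share the same
    random inputs (the coupling that makes the corrections small)."""
    est = 0.0
    for l in range(L + 1):
        theta = rng.uniform(0.0, 1.0, N[l])
        Y = P(theta, l) if l == 0 else P(theta, l) - P(theta, l - 1)
        est += Y.mean()
    return est

# Many samples on the coarse (cheap) levels, few on the fine (expensive) ones.
estimate = mlmc(L=5, N=[100_000, 25_000, 6_000, 1_500, 400, 100])
```

The sample allocation here is fixed by hand for brevity; in practice it would be chosen from estimated per-level variances and costs, and the same structure carries over to quasi-Monte Carlo or stochastic collocation on each level.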
A related body of work has focused on combining models of varying fidelity (such as 1D and 3D models, reduced-order models, differing physical assumptions, data-fit surrogate models, and look-up tables of experimental results) in multifidelity uncertainty quantification methods. These methods similarly use control variate formulations (the multifidelity Monte Carlo method) and importance sampling. This multifidelity setting differs from the multilevel setting above because the different models do not need to form a fixed hierarchy. The relationships among the models are typically unknown a priori and are learned adaptively as the computation advances.
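The control-variate idea underlying the multifidelity setting can be sketched in a few lines. The two models below are hypothetical stand-ins, not from the workshop: a nominally expensive high-fidelity model is evaluated on a small sample, a cheap correlated surrogate on a large one, and the surrogate's estimated mean shift corrects the high-fidelity average. Note that the surrogate need not be an accurate approximation of the high-fidelity model, only well correlated with it.

```python
import numpy as np

rng = np.random.default_rng(1)

def f_hi(theta):
    """Hypothetical high-fidelity model (expensive in a real application)."""
    return np.exp(theta) * np.sin(theta)

def f_lo(theta):
    """Hypothetical low-fidelity surrogate: cheap and correlated with f_hi,
    but biased -- the estimator only uses it as a control variate."""
    return theta * (1.0 + theta)

# Few expensive evaluations, many cheap ones.
n_hi, n_lo = 200, 20_000
theta_hi = rng.uniform(0.0, 1.0, n_hi)
theta_lo = rng.uniform(0.0, 1.0, n_lo)

y_hi = f_hi(theta_hi)
y_lo_small = f_lo(theta_hi)   # surrogate on the shared (coupled) samples
y_lo_large = f_lo(theta_lo)   # surrogate on its own large sample

# Control-variate coefficient estimated from the shared samples.
alpha = np.cov(y_hi, y_lo_small)[0, 1] / np.var(y_lo_small, ddof=1)

# Multifidelity (control-variate) estimator of E[f_hi].
mf_estimate = y_hi.mean() + alpha * (y_lo_large.mean() - y_lo_small.mean())
```

The variance reduction relative to plain Monte Carlo on the 200 expensive samples is governed by the correlation between the two models, which, as the surrounding text notes, is typically not known a priori and is estimated as the computation proceeds.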
All these methodologies have most notably been developed in the context of forward propagation of uncertainty, or quadrature with respect to a known (prior) distribution, but they have also been extended to inverse problems and to the intractable (posterior) distributions that arise when incorporating data in a Bayesian inference framework, as well as to optimization under uncertainty.