Computational Uncertainty Quantification: Mathematical Foundations, Methodology & Data

This Thematic Programme (TP) will gather at the ESI leading researchers from applied mathematics, scientific computing and high-dimensional computational statistics around the emerging area of numerical uncertainty quantification (UQ for short) in engineering and in the sciences. The TP will concentrate on the mathematical foundations of novel computational strategies for the efficient numerical approximation of PDEs with uncertain inputs, as well as on the analysis of statistical methodologies for high-dimensional data resulting from such PDE simulations. Both forward and inverse problems will be considered.

Once (prior) probability measures are placed on input parameter spaces, randomized (sampling) approximations can be employed to sample from the parametric solution manifolds. One focus of the programme will therefore be Monte Carlo and quasi-Monte Carlo methods for high-dimensional random inputs, with particular attention to multilevel strategies. Other algorithmic techniques to be considered include adaptive ("stochastic") collocation and Galerkin methods, in particular combined with Model Order Reduction (MOR), Reduced Basis Methods (RBM), low-rank approximations in tensor formats and compressed-sensing-based algorithms.

Another focus will be the statistical modelling of large-scale, spatially or temporally heterogeneous data for use as inputs of random PDEs. Regression and least-squares-based methodologies from high-dimensional statistics will be analyzed in the particular case of noisy responses of PDE outputs, and one workshop will be dedicated to kernel- and machine-learning-based approximations of input-output maps for PDEs with high-dimensional inputs, as well as to new directions at the intersection of UQ and machine learning in general. While engineering models such as diffusion, acoustic, elastic and electromagnetic wave propagation and viscous flow will be foundational applications, extensions to kinetic and more general integrodifferential equations with random input data will be considered.

Application areas will include computational directions in life sciences, medicine, geosciences, quantum chemistry, nanotechnology, computational mechanics and aerospace engineering.

Theme: WS1 Multilevel and multifidelity sampling methods in UQ for PDEs
organizers: Karen Willcox, Rob Scheichl, Fabio Nobile, Kody Law
May 2 to May 6, 2022

Theme: WS2 Approximation of high-dimensional parametric PDEs in forward UQ
organizers: A. Cohen, F. Nobile, C. Powell, Ch. Schwab, L. Tamellini
May 9 to May 13, 2022

Theme: WS3 PDE-constrained Bayesian inverse problems: Interplay of spatial statistical models with Machine Learning in PDE discretizations 
organizers: S. Reich, Ch. Schwab, A.M. Stuart, S. van de Geer 
May 16 to May 20, 2022

Theme: WS4 Statistical estimation and deep learning in UQ for PDEs
organizers: F. Bach, C. Heitzinger, J. Schmidt-Hieber, S. van de Geer
May 30 to June 3, 2022

Theme: WS5 UQ in kinetic and transport equations and in high-frequency wave propagation
organizers: L. Borcea, C. Heitzinger, S. Jin, R. Scheichl, E. Spence
June 13 to June 17, 2022

Abstract WS1: Multilevel and multifidelity sampling methods in UQ for PDEs

This workshop will cover multilevel and multifidelity sampling methods in uncertainty quantification for PDEs, with a particular focus on moving beyond forward propagation of uncertainty.

Multilevel sampling approaches, such as multilevel Monte Carlo, multilevel quasi-Monte Carlo and multilevel stochastic collocation, to name but a few, are a powerful and attractive means of uncertainty propagation and Bayesian inference in random and parametric PDEs. These methods exploit the natural hierarchies of numerical approximations (mesh size, polynomial degree, truncations of expansions, regularisations, model order reduction) to efficiently tackle the arising high- or infinite-dimensional quadrature problems. Their efficiency rests on variance reduction through a systematic use of control variates and importance sampling (in the stochastic setting) and of the sparse-grid recombination idea (in the deterministic setting). The variance reduction is intrinsically tied to a priori and a posteriori error control in the numerical approximations, which in turn has spawned a resurgence in fundamental research in the mathematical and numerical analysis of PDEs with spatiotemporally heterogeneous data.
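The telescoping structure behind these methods can be illustrated on a toy problem. In the sketch below (all functions and numbers are illustrative), the "PDE solve" is replaced by a one-dimensional midpoint quadrature whose resolution plays the role of the mesh level, and the expectation is estimated via the multilevel identity E[Q_L] = E[Q_0] + Σ_l E[Q_l − Q_{l−1}], with many samples on the cheap coarse level and few on the expensive fine corrections:

```python
import numpy as np

rng = np.random.default_rng(0)

def solve(theta, level):
    """Toy stand-in for a PDE solve at mesh level `level`: midpoint-rule
    approximation of the integral of sin(theta*x) over [0, 1] with
    2**level subintervals (finer level -> smaller discretisation error)."""
    n = 2 ** level
    x = (np.arange(n) + 0.5) / n
    return np.mean(np.sin(theta * x))

def mlmc(max_level, samples_per_level):
    """Multilevel Monte Carlo estimate of E[Q] for theta ~ U(0, 1), via the
    telescoping sum E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}], with each
    correction term estimated from its own independent samples."""
    est = 0.0
    for level, n_samp in zip(range(max_level + 1), samples_per_level):
        thetas = rng.uniform(0.0, 1.0, n_samp)
        fine = np.array([solve(t, level) for t in thetas])
        if level == 0:
            corr = fine
        else:
            coarse = np.array([solve(t, level - 1) for t in thetas])
            corr = fine - coarse  # small variance: few samples suffice here
        est += corr.mean()
    return est

# Many cheap coarse samples, geometrically fewer on the finer levels
estimate = mlmc(max_level=4, samples_per_level=[4000, 2000, 1000, 500, 250])
```

Because Q_l − Q_{l−1} has small variance when adjacent levels are close, far fewer samples are needed on the fine levels; this is exactly the control-variate mechanism described above. (For this toy integrand the exact value is Cin(1) ≈ 0.2398.)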

A related body of work has focused on combining models of varying fidelity (such as 1D and 3D models, reduced-order models, differing physical assumptions, data-fit surrogate models, and look-up tables of experimental results) in multifidelity uncertainty quantification methods. These methods similarly use control variate formulations (the multifidelity Monte Carlo method) and importance sampling. This multifidelity setting differs from the multilevel setting above because the different models do not need to form a fixed hierarchy. The relationships among the models are typically unknown a priori and are learned adaptively as the computation advances.
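A minimal control-variate sketch of the multifidelity idea follows, with two illustrative analytic "models" standing in for an expensive high-fidelity solve and a cheap surrogate; the weight alpha is estimated from the data rather than assumed known, mirroring the adaptive learning of model relationships mentioned above:

```python
import numpy as np

rng = np.random.default_rng(1)

def f_hi(x):
    """Illustrative 'high-fidelity' model (stand-in for an expensive solve)."""
    return np.exp(x) * np.cos(3 * x)

def f_lo(x):
    """Cheap low-fidelity surrogate: correlated with f_hi but biased."""
    return (1 + x) * np.cos(3 * x)

# Few expensive evaluations; many cheap ones. The low-fidelity model is
# also evaluated on the small set so the two sample means can be coupled.
x_small = rng.uniform(0.0, 1.0, 100)
x_large = rng.uniform(0.0, 1.0, 10_000)
hi_s, lo_s, lo_l = f_hi(x_small), f_lo(x_small), f_lo(x_large)

# Control-variate weight alpha = Cov(hi, lo) / Var(lo), learned from data
alpha = np.cov(hi_s, lo_s)[0, 1] / np.var(lo_s, ddof=1)

# Multifidelity estimator of E[f_hi]: high-fidelity sample mean, corrected
# by the discrepancy between large- and small-sample low-fidelity means
mf_est = hi_s.mean() + alpha * (lo_l.mean() - lo_s.mean())
```

The stronger the correlation between the two models, the larger the variance reduction over plain Monte Carlo with the same 100 high-fidelity evaluations.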

All these methodologies have most notably been developed in the context of forward propagation of uncertainty, i.e. quadrature with respect to a known (prior) distribution, but they have also been extended to inverse problems and to the intractable (posterior) distributions that arise when incorporating data in a Bayesian inference framework, as well as to optimization under uncertainty.

Abstract WS2: Approximation of high-dimensional parametric PDEs in forward UQ
This workshop focuses on efficient numerical methods for the propagation of uncertainty in partial differential equations (PDEs). Of primary interest is the case where the uncertain inputs belong to separable Banach spaces, a situation which commonly arises when inputs to PDE models are represented as random processes or fields. The task of quantifying numerically the uncertainty in the PDE solution can be recast as an approximation problem for a parametric family of PDE solutions, seen as a map between data and solution (Banach) spaces. One therefore deals with deterministic maps defined on potentially very high-dimensional or infinite-dimensional parameter spaces.

Key mathematical questions to be addressed in this workshop concern regularity and compressibility of random input data (measured, for instance, in terms of summability of expansion coefficients over a basis or a dictionary), compressibility results for the corresponding data-to-solution map (for instance, sparsity estimates of generalized polynomial chaos expansions of the parametric family of responses), bounds on the Kolmogorov n-width of the solution manifold, as well as low rank estimates in tensor-formatted representations, etc.

Theoretical sparsity bounds on solutions to parametric PDEs provide benchmarks for high-dimensional approximation schemes, a wide variety of which have emerged in recent years in computational UQ. Such methodologies include Stochastic Galerkin and Collocation methods, Model Order Reduction (MOR) and Reduced Basis (RB) methods, tensor-formatted numerical linear algebra techniques, compressed sensing and LASSO regularisation, kernel-based approximations and Gaussian Process regression. Recent advances on convergence and complexity results for such approximation schemes in the high-dimensional setting will be presented.
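As a minimal illustration of one such scheme (all choices here are illustrative, not a specific method from the workshop), the sketch below fits a total-degree polynomial surrogate to a smooth two-parameter response by oversampled random least squares, the setting in which the sparsity and convergence results above apply:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

def q(y1, y2):
    """Toy smooth parametric response (stand-in for a PDE quantity of interest)."""
    return 1.0 / (2.0 + 0.5 * y1 + 0.25 * y2)

# Total-degree multi-index set: all monomials y1**i * y2**j with i + j <= degree
degree = 4
indices = [(i, j) for i, j in product(range(degree + 1), repeat=2) if i + j <= degree]

# Oversampled random least-squares fit on the parameter box [-1, 1]^2
n_pts = 10 * len(indices)
Y = rng.uniform(-1.0, 1.0, (n_pts, 2))
A = np.column_stack([Y[:, 0] ** i * Y[:, 1] ** j for i, j in indices])
coef, *_ = np.linalg.lstsq(A, q(Y[:, 0], Y[:, 1]), rcond=None)

def surrogate(y1, y2):
    """Evaluate the fitted polynomial surrogate at a parameter point."""
    return sum(c * y1 ** i * y2 ** j for c, (i, j) in zip(coef, indices))
```

Because the parametric map is analytic, the coefficients decay rapidly and a low total degree already yields small error; the theoretical sparsity bounds mentioned above quantify this decay in the genuinely high-dimensional case.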

Abstract WS3: PDE-constrained Bayesian inverse problems: interplay of spatial statistical models with advanced PDE discretizations
Numerical Bayesian inversion of PDE models with uncertain input data has gained substantial momentum within the general area of data-driven computational UQ in recent years.

Bayesian analysis consists of computing expectations of so-called Quantities of Interest (QoIs for short), constrained by forward PDE models, under a prior probability on uncertain PDE inputs, while taking into account the availability of possibly massive, noisy and/or redundant data.

Prominent examples are climate and weather forecasts, subsurface flow, social media, and biomedical and genomic data. Variants include UQ in classification tasks.

Standard numerical approaches are Markov Chain Monte Carlo (MCMC) methods and their variants. Alternative approaches include deterministic high-dimensional integration methods of Quasi-Monte Carlo type or variational techniques. While fundamental advances have been made in understanding the convergence of these methods, running such algorithms on complex forward PDE models and large data, possibly streamed in real time, entails prohibitive computational work.
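For concreteness, here is a minimal random-walk Metropolis sketch for a toy one-dimensional Bayesian inverse problem; the scalar "forward model" and all constants are illustrative stand-ins for an expensive PDE solve:

```python
import numpy as np

rng = np.random.default_rng(3)

def forward(theta):
    """Illustrative scalar 'forward model' (stand-in for a PDE solve)."""
    return theta + 0.1 * theta ** 3

# Synthetic data: ground truth theta = 1.0 observed with Gaussian noise
sigma = 0.1
data = forward(1.0) + sigma * rng.normal()

def log_post(theta):
    """Unnormalised log-posterior: Gaussian likelihood times N(0, 1) prior."""
    return -0.5 * ((data - forward(theta)) / sigma) ** 2 - 0.5 * theta ** 2

# Random-walk Metropolis: one forward solve per step -- the cost bottleneck
# that the acceleration strategies discussed in this workshop target
n_steps, step = 20_000, 0.5
chain = np.empty(n_steps)
theta, lp = 0.0, log_post(0.0)
for k in range(n_steps):
    prop = theta + step * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain[k] = theta

posterior_mean = chain[5_000:].mean()  # discard burn-in
```

Each of the 20,000 steps calls the forward model once; replacing that call with a certified surrogate is the essence of the surrogate-accelerated MCMC discussed below.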

This WS will therefore explore the analysis and implementation of computational acceleration strategies including novel approximation techniques driven by advances from Machine Learning. 

One acceleration strategy for MCMC in UQ for Bayesian PDE inversion and data assimilation is to run MCMC on small-scale PDE surrogate models, obtained, e.g., by MOR or reduced basis methods; this approach raises the issue of how to perform PDE MOR with certification for all states within reach of the sampler. Other acceleration methodologies comprise (semi-)supervised machine-learning surrogates of Bayesian posteriors, and reparametrization of the Bayesian posterior through a transport of measures. Novel approximation techniques include random feature maps, reservoir computing, deep neural networks, and tensor trains. Furthermore, highly nonlinear and complex PDE-based forward models stand to benefit from recent advances in derivative-free inference methods and affine-invariant interacting particle samplers.

In addition to these algorithm-driven themes, the WS will address the principled selection of prior distributions and their impact on the posterior consistency of the QoIs from a frequentist perspective.

Overall, this WS will explore recent foundational and application advances in PDE-constrained Bayesian inference in step with WS2 and WS4.

Abstract WS4: Statistical estimation and deep learning in UQ for PDEs
Both uncertainty quantification and machine learning extract information from noisy data. In machine learning the data are usually outcomes of some random mechanism. Random data resulting from forward UQ, on the other hand, are generated as solutions of PDEs with uncertain coefficients, and exhibit the additional structure of PDE solution manifolds.

Deep neural networks (DNN) have empirically been shown to perform well in various supervised machine learning tasks. Recently, some important steps towards a theoretical understanding of deep learning algorithms have been taken. For example, insight has been gained concerning their approximation properties in terms of network depth for various function classes.
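One classical depth-separation example (a Telgarsky-type construction; the code is an illustrative reconstruction) can be written in a few lines: composing a two-neuron ReLU "hat" layer k times yields a sawtooth with 2^(k−1) teeth, a function that a depth-k network represents exactly with O(k) neurons but that shallow networks can only match with exponentially many:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    """A two-neuron ReLU layer realising the hat function on [0, 1]:
    h(x) = 2*relu(x) - 4*relu(x - 1/2), i.e. 2x on [0, 1/2] and 2(1-x) on [1/2, 1]."""
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def sawtooth(x, depth):
    """Composing the hat layer `depth` times yields a sawtooth with
    2**(depth - 1) teeth: a depth-`depth` ReLU network with O(depth) neurons,
    while a shallow network needs exponentially many to represent it."""
    for _ in range(depth):
        x = hat(x)
    return x

xs = np.linspace(0.0, 1.0, 9)   # dyadic grid, exact in floating point
vals = sawtooth(xs, depth=3)    # alternates 0, 1, 0, 1, ... on this grid
```

Results of this kind underpin the network-depth approximation theory referred to above.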

In this workshop, we survey the state of the art in theoretical approximation results for DNNs, in particular for many-parametric data-to-solution maps for PDEs with uncertain inputs from function spaces, to advance statistical methodologies specifically tailored to corresponding parametric manifolds of PDE responses.

In particular, given a description of the "richness" of a neural network, one may use statistical machinery to evaluate its performance on noisy data. Moreover, regularization methods allow one to deal with the many parameters in the network and to improve generalization. Closely related are Bayesian approaches, where the prior serves as a regularizer. An important question is, for example, whether such methods allow unsupervised learning of the sparsity of a DNN on PDE solution manifolds or, if not, whether semi-supervised learning methods can be designed based on a priori information on the sparsity structure of PDE solution manifolds.

Research in deep learning involves information and approximation theory, in conjunction with scientific computing and statistics. A key aim of this workshop is to foster interaction between experts in computational PDE UQ and leaders in high-dimensional mathematical statistics, to explore theory and applications of state-of-the-art methods in high-dimensional statistics for data resulting from uncertainty propagation in PDEs.

Abstract WS5: UQ in kinetic and transport equations and in high-frequency wave propagation
This workshop will focus on uncertainty quantification (UQ) in kinetic equations and in high-frequency wave propagation. While there are very active international research communities in the analysis and numerical analysis of both those areas, with a strong presence also in Vienna, both are fairly new application areas for UQ. The two areas are also related via mathematical tools such as the Wigner transform, which gives rise to a kinetic radiative transfer equation for the high-frequency wave equation in random media. Thus, bringing together researchers from those two communities will be mutually beneficial.

The goal of the workshop will be to foster interaction between the UQ experts present at the program and domain specialists in the two application areas. The key aims of the workshop will be to study (i) kinetic models such as the Boltzmann transport equation (among others) where uncertainty often arises in the characterization of particle interactions and other model parameters and (ii) wave scattering problems, where uncertainty arises, for example, in parameters describing the medium or in the geometry of the scatterer. 

The talks will present and formulate central questions of uncertainty and stochasticity in those models, as well as existing approaches to handle them analytically and numerically. This will include theoretical questions of existence, uniqueness, regularity, inversion, and hypocoercivity as well as numerical aspects such as efficient solvers, approximation, and quadrature especially in high dimensions. Applications will include all areas where kinetic equations and wave equations have proven useful such as quantum mechanics, waves in random media and imaging, and more generally engineering, biology, and economics.


  • Pierre Alquier (RIKEN AIP)
  • Markus Bachmayr (U Mainz)
  • Bamdad Hosseini (Caltech, Pasadena)
  • Peter Bartlett (UC, Berkeley)
  • Helmut Bölcskei (ETH Zürich)
  • Francesca Bonizzoni (U Augsburg)
  • Liliana Borcea (U of Michigan, Ann Arbor)
  • Peng Chen (U of Texas, Austin)
  • Andres Christen (CIMAT, Guanajuato)
  • Colin Cotter (Imperial College, London)
  • Tiangang Cui (Monash U, Melbourne)
  • Masoumeh Dashti (U Sussex)
  • Alexis Derumigny (TU Delft)
  • Nicholas Dexter (Simon Fraser U, Vancouver)
  • Giacomo Dimarco (U of Ferrara)
  • Matthieu Dolbeault (Sorbonne U, Paris)
  • Qiang Du (Columbia University, New York)
  • Virginie Ehrlacher (ENPC, Paris)
  • Bjorn Engquist (U of Texas, Austin)
  • Ionut-Gabriel Farcas (U of Texas, Austin)
  • Michael Feischl (TU Vienna)
  • Xiaobing Feng (U of Tennessee, Knoxville)
  • Martin Frank (KIT, Karlsruhe)
  • Josselin Garnier (Ecole Polytechnique, Palaiseau)
  • Susana Gomes (U Warwick)
  • Alex Gorodetsky (U of Michigan, Ann Arbor)
  • Remi Gribonval (INRIA, Rocquencourt)
  • Diane Guignard (U of Ottawa)
  • Abdul-Lateef Haji-Ali (Heriot-Watt U, Edinburgh)
  • Helmut Harbrecht (University of Basel)
  • Clemens Heitzinger (TU Vienna) — Organizer
  • Ralf Hiptmair (ETH Zürich)
  • Gianluca Iaccarino (Stanford U)
  • John Jakeman (Sandia National Laboratories)
  • Carlos Jerez-Hanckes (U Adolfo Ibanez)
  • Barbara Kaltenbacher (AAU, Klagenfurt)
  • Kristin Kirchner (TU Delft)
  • Michael Kohler (TU Darmstadt)
  • Peter Kritzer (RICAM)
  • Karl Kunisch (U Graz)
  • Frances Kuo (UNSW Sydney)
  • Annika Lang (CUT, Gothenburg)
  • Sophie Langer (TU Darmstadt)
  • Jonas Latz (U Cambridge)
  • Qin Li (U of Wisconsin-Madison)
  • Po-Ling Loh (U Cambridge)
  • Youssef Marzouk (MIT, Cambridge)
  • Hrushikesh Mhaskar (Claremont Graduate U)
  • Giovanni Migliorati (Sorbonne U, Paris)
  • Mohammad Motamed (U of New Mexico, Albuquerque)
  • Olga Mula Hernandez (Dauphine U, Paris)
  • Akil Narayan (U of Utah, Salt Lake City)
  • Richard Nickl (U Cambridge)
  • Fabio Nobile (EPF Lausanne) — Organizer
  • Anthony Nouy (Centrale Nantes)
  • Dirk Nuyens (KU Leuven)
  • Houman Owhadi (Caltech, Pasadena)
  • Iason Papaioannou (TU Munich)
  • Lorenzo Pareschi (U of Ferrara)
  • Benjamin Peherstorfer (CIMS, New York)
  • Philipp Petersen (U Vienna)
  • Catherine Powell (U Manchester)
  • Dirk Praetorius (TU Vienna)
  • Elizabeth Qian (Caltech, Pasadena)
  • Sebastian Reich (U of Potsdam)
  • Gianluigi Rozza (SISSA, Trieste)
  • Michele Ruggeri (TU Vienna)
  • Olof Runborg (KTH Stockholm)
  • Laura Scarabosio (Radboud U)
  • Robert Scheichl (U Heidelberg) — Organizer
  • Claudia Schillings (U of Mannheim)
  • Carola-Bibiane Schönlieb (U Cambridge)
  • Christoph Schwab (ETH Zürich) — Organizer
  • Elnaz Seylabi (U of Nevada, Reno)
  • Ian Sloan (UNSW Sydney)
  • Björn Sprungk (TU Freiberg)
  • Taiji Suzuki (U Tokyo)
  • Lorenzo Tamellini (CNR-IMATI, Pavia)
  • Aretha Teckentrup (U Edinburgh)
  • Elisabeth Ullmann (TU Munich)
  • Sara van de Geer (ETH Zürich) — Organizer
  • Sven Wang (MIT, Cambridge)
  • Karen Willcox (U of Texas, Austin) — Organizer
  • Marie-Therese Wolfram (U Warwick)
  • Mattia Zanella (U Pavia)
  • Lenka Zdeborova (EPF Lausanne)
At a glance
Thematic Programme
May 2, 2022 — June 24, 2022
ESI Boltzmann Lecture Hall
Clemens Heitzinger (TU Vienna)
Fabio Nobile (EPF Lausanne)
Robert Scheichl (U Heidelberg)
Christoph Schwab (ETH Zürich)
Sara van de Geer (ETH Zürich)
Karen Willcox (U of Texas, Austin)