Computational Uncertainty Quantification: Mathematical Foundations, Methodology & Data

This ESI Thematic Programme (TP) will gather at the ESI leading researchers from applied mathematics, scientific computing and high-dimensional computational statistics around the emerging area of numerical uncertainty quantification (UQ for short) in engineering and in the sciences. The TP will concentrate on the mathematical foundations and underpinnings of novel computational strategies for the efficient numerical approximation of PDEs with uncertain inputs, as well as on the analysis of statistical methodologies for the high-dimensional statistical data resulting from such PDE simulations. Both forward and inverse problems will be considered.

Upon placing (prior) probability measures on input parameter spaces, randomized (sampling) approximations can be employed to sample from the parametric solution manifolds: the proposed thematic program will, therefore, have one focus on Monte Carlo and quasi-Monte Carlo methods for high-dimensional random inputs, with particular attention to multilevel strategies. Other algorithmic techniques to be considered will include adaptive ("stochastic") collocation and Galerkin methods, in particular combined with Model Order Reduction (MOR), Reduced Basis Methods (RBM), low-rank approximations in tensor formats and compressed sensing based algorithms.
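The gap between plain Monte Carlo and quasi-Monte Carlo sampling that motivates this focus can be sketched in a few lines of Python. The smooth product integrand, the Halton point set, and the sample size below are illustrative choices for this sketch only, not material from the programme:

```python
import numpy as np

def halton(n, d):
    """First n points of the Halton sequence in [0,1)^d (bases = first d primes)."""
    primes = [2, 3, 5, 7, 11, 13][:d]
    pts = np.empty((n, d))
    for j, b in enumerate(primes):
        for i in range(n):
            f, x, k = 1.0, 0.0, i + 1   # radical inverse of i+1 in base b
            while k > 0:
                f /= b
                x += f * (k % b)
                k //= b
            pts[i, j] = x
    return pts

# Smooth product integrand on [0,1]^4 with exact integral 1.
f = lambda x: np.prod(12.0 * (x - 0.5) ** 2, axis=1)

rng = np.random.default_rng(0)
d, n = 4, 2 ** 14
mc_est = f(rng.random((n, d))).mean()   # plain Monte Carlo: error O(n^{-1/2})
qmc_est = f(halton(n, d)).mean()        # quasi-Monte Carlo: near O(n^{-1}) for smooth f
```

For smooth integrands the low-discrepancy points typically approach the O(n^{-1}) rate, versus O(n^{-1/2}) for random sampling; the multilevel strategies mentioned above combine such samplers with hierarchies of PDE discretizations.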

Another focus will be statistical modelling of large-scale (spatially or temporally) heterogeneous data for use as inputs of random PDEs. Regression and least-squares based methodologies from high-dimensional statistics will be analyzed in the particular case of noisy responses of PDE outputs, and one workshop will be dedicated to kernel and machine learning based approximations of input-output maps for PDEs with high-dimensional inputs, as well as to new directions at the intersection of UQ and machine learning in general. While engineering models such as diffusion, acoustic, elastic and electromagnetic wave propagation and viscous flow will be foundational applications, extensions to kinetic and more general integro-differential equations with random input data will be considered.

Application areas will include computational directions in life sciences, medicine, geosciences, quantum chemistry, nanotechnology, computational mechanics and aerospace engineering.

Week #19 May 4 - 8, 2020:
Theme: WS1 Multilevel and multifidelity sampling methods in UQ for PDEs
organizers: Karen Willcox, Rob Scheichl, Fabio Nobile, Kody Law

Week #20 May 11 - 15, 2020:
Theme: WS3 PDE-constrained Bayesian inverse UQ
organizers: S. Reich, Ch. Schwab, A.M. Stuart, S. van de Geer

Week #22 May 25 - 29, 2020:
Theme: WS4 Statistical estimation and deep learning in UQ for PDEs
organizers: F. Bach, C. Heitzinger, J. Schmidt-Hieber, S. van de Geer

Week #23 June 2 - 5, 2020 [June 1 public holiday]:
Theme: WS2 Approximation of high-dimensional parametric PDEs in forward UQ
organizers: A. Cohen, F. Nobile, C. Powell, Ch. Schwab, L. Tamellini

Week #24 June 8 - 12, 2020 [June 11 public holiday]:
Theme: WS5 UQ in kinetic and transport equations and in high-frequency wave propagation
organizers: L. Borcea, I.G. Graham, C. Heitzinger, S. Jin, R. Scheichl

Abstract WS1: Multilevel and multifidelity sampling methods in UQ for PDEs

This workshop will cover multilevel and multifidelity sampling methods in uncertainty quantification for PDEs, with a particular focus on moving beyond forward propagation of uncertainty.

Multilevel sampling approaches, such as multilevel Monte Carlo, multilevel quasi-Monte Carlo and multilevel stochastic collocation, to name but a few, are a powerful and attractive way to perform uncertainty propagation and Bayesian inference in random and parametric PDEs. These methods exploit the natural hierarchies of numerical approximations (mesh size, polynomial degree, truncations of expansions, regularisations, model order reduction) to efficiently tackle the arising high- or infinite-dimensional quadrature problems. Their efficiency is based on variance reduction through a systematic use of control variates and importance sampling (in the stochastic setting) and of the sparse grid recombination idea (in the deterministic setting). The variance reduction is intrinsically tied to a priori and a posteriori error control in the numerical approximations, which in turn has spawned a resurgence of fundamental research in the mathematical and numerical analysis of PDEs with spatiotemporally heterogeneous data.
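The telescoping idea behind such estimators can be illustrated with a deliberately simple stand-in for a PDE solve; the midpoint-rule "levels" and the sample allocation below are illustrative assumptions for this sketch, not a scheme from the workshop:

```python
import numpy as np

rng = np.random.default_rng(1)

def level_qoi(z, l):
    """Midpoint rule with 2**l cells for Q(z) = integral of exp(z*x) over [0,1]:
    a stand-in for a PDE solve on mesh level l (cost grows, bias shrinks with l)."""
    m = 2 ** l
    x = (np.arange(m) + 0.5) / m
    return np.exp(np.outer(z, x)).mean(axis=1)

def mlmc(levels, n_samples):
    """Multilevel Monte Carlo via the telescoping sum
    E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with each correction term
    estimated from *coupled* samples (same random input z at both levels)."""
    est = 0.0
    for l, n in zip(levels, n_samples):
        z = rng.uniform(-1.0, 1.0, n)        # random input, e.g. a PDE coefficient
        fine = level_qoi(z, l)
        est += (fine - level_qoi(z, l - 1)).mean() if l > 0 else fine.mean()
    return est

# Many cheap samples on coarse levels, few on the fine (expensive) ones.
estimate = mlmc(levels=[0, 1, 2, 3, 4], n_samples=[40000, 10000, 2500, 600, 150])
```

The small variance of the level corrections P_l - P_{l-1} is exactly the control-variate effect described above: most samples can be taken where they are cheapest.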

A related body of work has focused on combining models of varying fidelity (such as 1D and 3D models, reduced-order models, differing physical assumptions, data-fit surrogate models, and look-up tables of experimental results) in multifidelity uncertainty quantification methods. These methods similarly use control variate formulations (the multifidelity Monte Carlo method) and importance sampling. This multifidelity setting differs from the multilevel setting above because the different models do not need to form a fixed hierarchy. The relationships among the models are typically unknown a priori and are learned adaptively as the computation advances.
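A minimal sketch of the two-model control-variate mechanism follows; the exponential "high-fidelity" model and its Taylor-type surrogate are hypothetical stand-ins chosen purely for illustration, with the model relationship learned from paired samples rather than assumed:

```python
import numpy as np

rng = np.random.default_rng(2)

f_hi = lambda z: np.exp(z)               # "expensive" high-fidelity model
f_lo = lambda z: 1.0 + z + 0.5 * z**2    # cheap surrogate (e.g. a reduced-order model)

n, m = 2000, 200000                      # few paired expensive runs, many cheap ones
z = rng.standard_normal(n)
hi, lo = f_hi(z), f_lo(z)

# Control-variate weight estimated from the paired runs: the relationship
# between the models is not known a priori but learned from the samples.
alpha = np.cov(hi, lo)[0, 1] / lo.var(ddof=1)

# Shift the high-fidelity mean using many additional cheap evaluations.
lo_mean = f_lo(rng.standard_normal(m)).mean()
mf_est = hi.mean() + alpha * (lo_mean - lo.mean())   # estimates E[f_hi] = exp(1/2)
```

The better the cheap model correlates with the expensive one, the larger the variance reduction; no hierarchy between the two models is required.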

All these methodologies have most notably been developed in the context of forward propagation of uncertainty, i.e. quadrature with respect to a known (prior) distribution. They have, however, also been extended to the inverse problems and intractable (posterior) distributions that arise when data are incorporated in a Bayesian inference framework, as well as to optimization under uncertainty.

Abstract WS2: Approximation of high-dimensional parametric PDEs in forward UQ
This workshop focuses on efficient numerical methods for the propagation of uncertainty in partial differential equations (PDEs). Of primary interest is the case where the uncertain inputs belong to separable Banach spaces, a situation which commonly arises when inputs to PDE models are represented as random processes or fields. The task of quantifying numerically the uncertainty in the PDE solution can be recast as an approximation problem for a parametric family of PDE solutions, seen as a map between data and solution (Banach) spaces. One therefore deals with deterministic maps defined on potentially very high-dimensional or infinite-dimensional parameter spaces.

Key mathematical questions to be addressed in this workshop concern regularity and compressibility of random input data (measured, for instance, in terms of summability of expansion coefficients over a basis or a dictionary), compressibility results for the corresponding data-to-solution map (for instance, sparsity estimates of generalized polynomial chaos expansions of the parametric family of responses), bounds on the Kolmogorov n-width of the solution manifold, as well as low rank estimates in tensor-formatted representations, etc.

Theoretical sparsity bounds on solutions to parametric PDEs provide benchmarks for high-dimensional approximation schemes, a wide variety of which have emerged in recent years in computational UQ. Such methodologies include Stochastic Galerkin and Collocation methods, Model Order Reduction (MOR) and Reduced Basis (RB) methods, tensor-formatted numerical linear algebra techniques, compressed sensing and LASSO regularisation, kernel-based approximations and Gaussian Process regression. Recent advances on convergence and complexity results for such approximation schemes in the high-dimensional setting will be presented.
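As a small illustration of regression-based polynomial chaos approximation and of the coefficient decay behind such sparsity results, consider the following sketch; the one-parameter map u(y) = 1/(2 + y) and the polynomial degree are hypothetical choices standing in for a parametric PDE solution:

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(3)

# Stand-in for a parametric solution map: a QoI of a PDE whose diffusion
# coefficient is 2 + y, with parameter y uniformly distributed in [-1, 1].
u = lambda y: 1.0 / (2.0 + y)

deg, n = 12, 400
y = rng.uniform(-1.0, 1.0, n)
V = legvander(y, deg)                           # Legendre design matrix, n x (deg+1)
coef, *_ = np.linalg.lstsq(V, u(y), rcond=None) # least-squares chaos coefficients

# The map y -> u(y) is analytic, so its Legendre (polynomial chaos) coefficients
# decay exponentially -- the compressibility that sparse schemes exploit.
decay = np.abs(coef)
```

For maps with this kind of analyticity, the exponential coefficient decay is precisely what makes compressed sensing and sparse-grid approaches effective in high dimensions.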

Abstract WS3: PDE-constrained Bayesian inverse problems: interplay of spatial statistical models with advanced PDE discretizations
A discipline within the general area of data-driven computational UQ which has gained substantial momentum in recent years is the numerical Bayesian inversion of PDEs and network models with uncertain input data.

Bayesian analysis consists in computing expectations of so-called Quantities of Interest (QoI's for short), constrained by forward PDE models, under a prior probability on uncertain PDE inputs, and taking into account the availability of possibly massive, noisy and/or redundant data. Prominent examples are climate and weather forecasts, subsurface flow, social media, and biomedical and genomic data; variants include UQ in classification problems. Here, standard numerical approaches are Markov chain Monte Carlo (MCMC) methods and their variants. Alternative approaches include deterministic high-dimensional integration methods of quasi-Monte Carlo type. For PDEs, each draw of a sample from the posterior involves a PDE solve and can, therefore, become prohibitively expensive.
In recent years, fundamental advances have been made in understanding the convergence of Markov chains in which each draw requires a discretized PDE solve. Still, running MCMC algorithms on complex forward PDE models with large data, possibly streamed in real time, entails prohibitive computational work. One acceleration of computational MCMC in UQ for Bayesian PDE inversion and data assimilation is based on running MCMC on small-scale PDE surrogate models, obtained, e.g., by MOR or reduced basis methods; this approach raises the issue of how to perform PDE MOR with certification for all states within reach of the sampler. Among the various approaches to reducing burn-in time for MCMC samplers is, in particular, reparametrization of the Bayesian posterior through measure transport. The workshop will address, in particular, the adaptation of measure transport of Bayesian posteriors to data.
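The basic structure of such samplers, one (surrogate) forward solve per proposal, can be sketched as follows; the two-component forward map, the noise level, and the Gaussian prior are illustrative assumptions, not a model from the workshop:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy Bayesian inverse problem: recover theta from noisy observations of a
# cheap surrogate forward map G, standing in for an expensive PDE solve.
G = lambda theta: np.array([np.sin(theta), theta ** 2])
theta_true, sigma = 0.8, 0.05
data = G(theta_true) + sigma * rng.standard_normal(2)

def log_post(theta):
    """Unnormalized log-posterior: Gaussian likelihood and N(0,1) prior."""
    misfit = data - G(theta)
    return -0.5 * (misfit @ misfit) / sigma**2 - 0.5 * theta**2

# Random-walk Metropolis: every accept/reject decision costs one forward solve,
# which is why surrogate acceleration matters for genuine PDE models.
theta, lp, chain = 0.0, log_post(0.0), []
for _ in range(20000):
    prop = theta + 0.2 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta)
post_mean = np.mean(chain[5000:])             # discard burn-in
```

Replacing G here by a certified reduced-order model, or reparametrizing the posterior by a transport map before sampling, are exactly the accelerations discussed above.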

Abstract WS4: Statistical estimation and deep learning in UQ for PDEs
Both uncertainty quantification and machine learning extract information from noisy data. In machine learning the data are usually outcomes of some random mechanism. Random data resulting from forward UQ, on the other hand, are generated as solutions of PDEs with uncertain coefficients, and exhibit additional structure of PDE solution manifolds.

Deep neural networks (DNN) have empirically been shown to perform well in various supervised machine learning tasks. Recently, some important steps towards a theoretical understanding of deep learning algorithms have been taken. For example, insight has been gained concerning their approximation properties in terms of network depth for various function classes.
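One concrete such depth result is Yarotsky's construction, in which x**2 is approximated on [0, 1] by repeatedly composing a fixed tent-shaped ReLU subnetwork: after m compositions the sup-norm error is 4**-(m+1), i.e. accuracy improves exponentially in depth. A self-contained sketch:

```python
import numpy as np

relu = lambda x: np.maximum(x, 0.0)

def hat(x):
    """Tent map on [0,1], realized exactly by one ReLU layer:
    hat(x) = 2*relu(x) - 4*relu(x - 1/2) + 2*relu(x - 1)."""
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

def relu_square(x, m):
    """Depth-m ReLU network for x**2 on [0,1] (Yarotsky's construction):
    x**2 ~ x - sum_{s=1}^{m} g_s(x) / 4**s, g_s the s-fold composition of hat.
    This equals the piecewise-linear interpolant of x**2 on a grid of width
    2**-m, so the sup error is 4**-(m+1)."""
    out, g = x.copy(), x.copy()
    for s in range(1, m + 1):
        g = hat(g)
        out -= g / 4.0 ** s
    return out

x = np.linspace(0.0, 1.0, 1001)
err = np.max(np.abs(relu_square(x, m=6) - x ** 2))   # about 4**-7, i.e. 6.1e-5
```

From squaring one obtains multiplication via the polarization identity, and from there approximations of polynomials and of smooth or parametric maps, which is the route taken by several of the DNN approximation theorems to be surveyed.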

In this workshop, we will survey the state of the art in theoretical approximation results for DNNs, in particular for many-parametric data-to-solution maps of PDEs with uncertain inputs from function spaces, with the aim of advancing statistical methodologies specifically tailored to the corresponding parametric manifolds of PDE responses.

In particular, given a description of the "richness" of a neural network, one may use statistical machinery to evaluate its performance on noisy data. Moreover, regularization methods allow one to deal with the many parameters in the network and to improve generalization. Closely related are Bayesian approaches, where the prior serves as a regularizer. An important question is, for example, whether such methods allow unsupervised learning of the sparsity of a DNN on PDE solution manifolds or, if not, whether semi-supervised learning methods can be designed based on a priori information on the sparsity structure of PDE solution manifolds.

Research in deep learning involves information and approximation theory, in conjunction with scientific computing and statistics. A key aim of this workshop is to foster interaction between experts in computational PDE UQ and leaders in high-dimensional mathematical statistics, exploring the theory and applications of state-of-the-art methods in high-dimensional statistics for data resulting from uncertainty propagation in PDEs.

Abstract WS5: UQ in kinetic and transport equations and in high-frequency wave propagation
This workshop will focus on uncertainty quantification (UQ) in kinetic equations and in high-frequency wave propagation. While there are very active international research communities in analysis and numerical analysis in both of these areas, with a strong presence also in Vienna, both are fairly new application areas for UQ. The two areas are moreover related via mathematical tools such as the Wigner transform, which yields a kinetic radiative transfer equation for the high-frequency wave equation in random media. Bringing together researchers from these two communities will therefore be mutually beneficial.

The goal of the workshop will be to foster interaction between the UQ experts present at the program and domain specialists in the two application areas. The key aims of the workshop will be to study (i) kinetic models such as the Boltzmann transport equation (among others) where uncertainty often arises in the characterization of particle interactions and other model parameters and (ii) wave scattering problems, where uncertainty arises, for example, in parameters describing the medium or in the geometry of the scatterer. 

The talks will present and formulate central questions of uncertainty and stochasticity in those models, as well as existing approaches to handle them analytically and numerically. This will include theoretical questions of existence, uniqueness, regularity, inversion, and hypocoercivity, as well as numerical aspects such as efficient solvers, approximation, and quadrature, especially in high dimensions. Applications will include all areas where kinetic equations and wave equations have proven useful, such as quantum mechanics, waves in random media and imaging, and more generally engineering, biology, and economics.
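A minimal forward-UQ computation in this spirit reduces the transport model to attenuation along a single ray, u' = -sigma*u, with an uncertain coefficient sigma; the uniform law and the Euler discretization below are illustrative assumptions for this sketch:

```python
import numpy as np

rng = np.random.default_rng(5)

def transmitted(sigma, n_steps=200):
    """Explicit Euler for u' = -sigma*u on [0,1] with u(0) = 1; returns u(1),
    the transmitted intensity along a single ray with attenuation sigma."""
    h = 1.0 / n_steps
    u = np.ones_like(sigma)
    for _ in range(n_steps):
        u *= 1.0 - h * sigma   # one Euler step, vectorized over the samples
    return u

# Uncertain attenuation coefficient: sigma ~ Uniform(0.5, 1.5).
sigmas = rng.uniform(0.5, 1.5, 20000)
mean_intensity = transmitted(sigmas).mean()

# For this law, E[exp(-sigma)] is available in closed form as a sanity check.
exact = np.exp(-0.5) - np.exp(-1.5)
```

Realistic kinetic and wave-scattering problems replace the scalar ODE by a Boltzmann or Helmholtz solve per sample, which is where the efficient high-dimensional quadrature and sampling methods of the programme come in.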


  • Pierre Alquier (ENSAE Paris-Tech)
  • Anton Arnold (TU Vienna)
  • Francis Bach (INRIA, Rocquencourt)
  • Markus Bachmayr (University of Mainz)
  • Bamdad Hosseini (Caltech, Pasadena)
  • Gang Bao (Zhejiang U)
  • Andrew Barron (Yale U, New Haven)
  • Andrea Barth (U Stuttgart)
  • Peter Bartlett (UC, Berkeley)
  • Francesca Bonizzoni (U Vienna)
  • Liliana Borcea (U of Michigan, Ann Arbor)
  • Helmut Bölcskei (ETH Zürich)
  • Alina Chertock (North Carolina State U)
  • Andres Christen (CIMAT, Guanajuato)
  • Albert Cohen (Sorbonne U, Paris)
  • Matteo Croci (U Oxford)
  • Masoumeh Dashti (U Sussex)
  • Ronald DeVore (TAMU, College Station)
  • Miguel Del Alamo (U Göttingen)
  • Nicholas Dexter (Simon Fraser U, Vancouver)
  • Josef Dick (UNSW, Sydney)
  • Giacomo Dimarco (U of Ferrara)
  • Tim Dodwell (U of Exeter)
  • Virginie Ehrlacher (ENPC, Paris)
  • Martin Eigel (WIAS, Berlin)
  • Bjorn Engquist (U of Texas, Austin)
  • Oliver Ernst (TU Chemnitz)
  • Ionut-Gabriel Farcas (TU Munich)
  • Michael Feischl (TU Vienna)
  • Xiaobing Feng (U of Tennessee, Knoxville)
  • Josselin Garnier (Ecole Polytechnique, Palaiseau)
  • Omar Ghattas (U of Texas, Austin)
  • Nathan Glatt-Holtz (Tulane U)
  • Susana Gomes (U Warwick)
  • Alex Gorodetsky (U of Michigan, Ann Arbor)
  • Ivan Graham (U Bath)
  • Remi Gribonval (INRIA, Rocquencourt)
  • Diane Guignard (TAMU, College Station)
  • Seung-Yeal Ha (Seoul National U)
  • Abdul-Lateef Haji-Ali (Heriot-Watt U, Edinburgh)
  • Helmut Harbrecht (University of Basel)
  • Clemens Heitzinger (TU Vienna) — Organizer
  • Lukas Herrmann (ETH Zürich)
  • Ralf Hiptmair (ETH Zürich)
  • Hakon Hoel (KAUST, Thuwal)
  • Thorsten Hohage (U Göttingen)
  • Gianluca Iaccarino (Stanford U)
  • Marco Iglesias (U of Nottingham)
  • John Jakeman (Sandia National Laboratories)
  • Ajay Jasra (KAUST, Thuwal)
  • Carlos Jerez Hanckes (U Adolfo Ibanez)
  • Shi Jin (U of Wisconsin-Madison)
  • Barbara Kaltenbacher (AAU, Klagenfurt)
  • Yoshihito Kazashi (EPFL Lausanne)
  • Amirreza Khodadadian (U Hannover)
  • Kristin Kirchner (ETH Zürich)
  • Michael Kohler (TU Darmstadt)
  • Peter Kritzer (JKU, Linz)
  • Adam Krzyzak (Concordia U, Montreal)
  • Karl Kunisch (U Graz)
  • Gitta Kutyniok (TU Berlin)
  • Sophie Langer (TU Darmstadt)
  • Jonas Latz (U Cambridge)
  • Kody Law (U Manchester)
  • Qin Li (U of Wisconsin-Madison)
  • Liu Liu (U of Texas, Austin)
  • Youssef Marzouk (MIT)
  • Hermann Matthies (TU Braunschweig)
  • Giovanni Migliorati (Sorbonne U, Paris)
  • Mohammad Motamed (U of New Mexico, Albuquerque)
  • Olga Mula Hernandez (Dauphine U, Paris)
  • Akil Narayan (U of Utah, Salt Lake City)
  • Richard Nickl (U Cambridge)
  • Fabio Nobile (EPFL Lausanne) — Organizer
  • Anthony Nouy (Centrale Nantes)
  • Nikolas Nuesken (U of Potsdam)
  • Dirk Nuyens (KU Leuven)
  • Houman Owhadi (Caltech, Pasadena)
  • Iason Papaioannou (TU Munich)
  • Lorenzo Pareschi (U of Ferrara)
  • Benjamin Peherstorfer (CIMS, New York)
  • Philipp Petersen (U Oxford)
  • Tomaso Poggio (MIT, Cambridge)
  • Jim Portegies (TU Eindhoven)
  • Catherine Powell (U Manchester)
  • Dirk Praetorius (TU Vienna)
  • Holger Rauhut (RWTH Aachen)
  • Sebastian Reich (U of Potsdam)
  • Christian Rieger (U Bonn)
  • Gianluigi Rozza (SISSA, Trieste)
  • Michele Ruggeri (TU Vienna)
  • Olof Runborg (KTH Stockholm)
  • Laura Scarabosio (TU Munich)
  • Robert Scheichl (U Heidelberg) — Organizer
  • Claudia Schillings (U of Mannheim)
  • Johannes Schmidt-Hieber (Leiden U)
  • Christoph Schwab (ETH Zürich) — Organizer
  • Carola-Bibiane Schönlieb (U Cambridge)
  • Elnaz Esmaeilzadeh Seylabi (U of Nevada, Reno)
  • Ian Sloan (UNSW, Sydney)
  • Euan Spence (U Bath)
  • Björn Sprungk (U Göttingen)
  • Andrew Stuart (Caltech, Pasadena)
  • Taiji Suzuki (U Tokyo)
  • Lorenzo Tamellini (CNR-IMATI, Pavia)
  • Aretha Teckentrup (U Edinburgh)
  • Raul Tempone (KAUST, Thuwal)
  • Elisabeth Ullmann (TU Munich)
  • Andre Uschmajew (MPI Leipzig)
  • Karen Veroy-Grepl (RWTH Aachen)
  • Eva Vidlickova (EPFL Lausanne)
  • Matti Vihola (U of Jyväskylä)
  • Umberto Villa (WUSTL, St. Louis)
  • Li Wang (U of Minnesota)
  • Clayton Webster (ORNL, Oak Ridge)
  • Karen Willcox (U of Texas, Austin) — Organizer
  • Marie-Therese Wolfram (U Warwick)
  • Mattia Zanella (Politecnico, Torino)
  • Jakob Zech (ETH Zürich)
  • Sara van de Geer (ETH Zürich) — Organizer
At a glance
Thematic Programme
May 4, 2020 -- June 26, 2020
ESI Boltzmann Lecture Hall
Clemens Heitzinger (TU Vienna)
Fabio Nobile (EPFL Lausanne)
Robert Scheichl (U Heidelberg)
Christoph Schwab (ETH Zürich)
Karen Willcox (U of Texas, Austin)
Sara van de Geer (ETH Zürich)