Multi-Level Optimization-Based Monte Carlo Samplers for Large-Scale Inverse Problems

Tiangang Cui (Monash U, Melbourne)

May 02, 2022, 11:15 — 12:00

The Markov chain Monte Carlo (MCMC) method is one of the pillars of Bayesian inverse problems. However, this approach typically faces several challenges in large-scale settings: classical MCMC algorithms build a sequential Markov chain, which is hard to parallelise fully; efficient transition kernels are often difficult to derive; and simulating the chain can be computationally costly, since each posterior density evaluation involves an expensive forward model solve. We present an integrated approach that combines the multilevel Monte Carlo (MLMC) method with optimisation-based samplers, e.g., implicit sampling and randomise-then-optimise (RTO), to address these challenges. Optimisation-based samplers allow us to derive efficient and parallelisable MCMC or importance sampling estimators for solving inverse problems. With the help of MLMC, we can further accelerate RTO and reduce the variance of the resulting estimators. We will demonstrate the efficacy of our approach on inverse problems governed by PDEs and ODEs.
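To illustrate the optimisation-based sampling idea mentioned above, here is a minimal randomise-then-optimise (RTO) sketch for a hypothetical linear-Gaussian toy problem (all names and parameter values are illustrative, not from the talk). Each sample is produced by independently perturbing the data and the prior draw and solving a regularised least-squares problem; in the linear case these proposals are exact posterior samples with constant importance weights, and each solve is independent, hence fully parallelisable.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear-Gaussian inverse problem: y = A x + noise,
# noise ~ N(0, sigma^2 I), prior x ~ N(0, I).
d, m, sigma = 3, 5, 0.1
A = rng.standard_normal((m, d))
x_true = rng.standard_normal(d)
y = A @ x_true + sigma * rng.standard_normal(m)

# Posterior precision matrix for this conjugate Gaussian model.
H = A.T @ A / sigma**2 + np.eye(d)

def rto_sample():
    # RTO step: perturb the data and the prior mean, then solve the
    # resulting regularised least-squares (normal-equations) problem.
    eps = sigma * rng.standard_normal(m)
    xi = rng.standard_normal(d)
    rhs = A.T @ (y + eps) / sigma**2 + xi
    return np.linalg.solve(H, rhs)

# Independent optimisation solves -> embarrassingly parallel sampler.
samples = np.stack([rto_sample() for _ in range(5000)])

# In the linear case the RTO sample mean matches the exact posterior mean.
post_mean = np.linalg.solve(H, A.T @ y / sigma**2)
```

For nonlinear forward models the optimisation problem is solved numerically and the proposals carry non-trivial importance weights; the linear case above is only the mechanism in its simplest form.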
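The multilevel Monte Carlo ingredient can likewise be sketched in a few lines. The estimator below uses the standard MLMC telescoping sum over a hierarchy of forward-model discretisations, with coupled samples at adjacent levels; the toy forward model (explicit Euler for x' = -x with random initial condition) is an assumption chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_solve(theta, n_steps, T=1.0):
    # Level-l forward model: explicit Euler for x' = -x, x(0) = theta,
    # with step size h_l = T / 2^l. Coarser levels are cheaper but biased.
    h = T / n_steps
    x = theta
    for _ in range(n_steps):
        x = x + h * (-x)
    return x

def mlmc_estimate(levels, samples_per_level):
    # Telescoping sum: E[f_L] = E[f_0] + sum_{l>0} E[f_l - f_{l-1}].
    # The same random inputs drive the fine and coarse solves at each
    # level, so the correction terms have small variance and need
    # progressively fewer samples.
    est = 0.0
    for l, n in zip(levels, samples_per_level):
        theta = rng.standard_normal(n)          # shared (coupled) inputs
        fine = euler_solve(theta, 2 ** l)
        if l == levels[0]:
            est += fine.mean()
        else:
            coarse = euler_solve(theta, 2 ** (l - 1))
            est += (fine - coarse).mean()
    return est

# E[x(1)] = E[theta] * exp(-1) = 0, so the estimate should be near zero.
est = mlmc_estimate(levels=[2, 3, 4, 5],
                    samples_per_level=[4000, 2000, 1000, 500])
```

In the approach described in the abstract, a sampler such as RTO supplies the (weighted) samples at each discretisation level, and the telescoping structure above reduces the variance of the resulting posterior estimators.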

Further Information
ESI Boltzmann Lecture Hall
Associated Event:
Computational Uncertainty Quantification: Mathematical Foundations, Methodology & Data (Thematic Programme)
Clemens Heitzinger (TU Vienna)
Fabio Nobile (EPFL Lausanne)
Robert Scheichl (U Heidelberg)
Christoph Schwab (ETH Zürich)
Sara van de Geer (ETH Zürich)
Karen Willcox (U of Texas, Austin)