Sampling algorithms for generalized model ensembles in multifidelity uncertainty quantification

Alex Gorodetsky (U of Michigan, Ann Arbor)

May 04, 2020, 16:00–16:40

We consider variance reduction for Monte Carlo sampling algorithms that propagate uncertainty through computational simulation models when additional simulators of varying fidelity are available. Our goal is to estimate, or predict, quantities of interest from a specified high-fidelity model when only a limited number of such simulations is available. To aid in this task, lower-fidelity models can be used to reduce the uncertainty of the high-fidelity predictions. We have developed a framework that unifies several existing variance reduction sampling approaches, such as recursive difference and recursive nested (e.g., multifidelity Monte Carlo) estimators, through the lens of an approximate control variate. The framework enables analysis of the statistical properties, i.e., the variance reduction, of approximate control variate estimators, and we demonstrate that existing sampling approaches are in fact sub-optimal: because they rely on either implicit or explicit model orderings, they cannot obtain the variance reduction that would be achieved by an optimal (non-approximate) linear control variate scheme. We then describe several estimators arising from this framework that do converge to the optimal linear control variate. Finally, directions for deriving data-dependent estimators are described.
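To make the control variate idea concrete, below is a minimal Python sketch of an approximate control variate estimator in which the unknown mean of a low-fidelity model is itself estimated from a larger, cheaper sample set. The models f0 and f1, the sample sizes, and the nested sampling scheme are illustrative assumptions only, not the specific estimators presented in the talk.

import numpy as np

# Hypothetical stand-ins for a high-fidelity model f0 and a cheaper,
# correlated low-fidelity model f1; both are illustrative only.
def f0(x):
    return np.sin(np.pi * x) + 0.1 * x**2

def f1(x):
    return np.sin(np.pi * x)

rng = np.random.default_rng(0)
N = 100        # affordable number of high-fidelity evaluations
M = 10000      # much larger budget of low-fidelity evaluations

# Samples shared by both models, plus extra low-fidelity-only samples.
x_shared = rng.uniform(0.0, 1.0, N)
x_extra = rng.uniform(0.0, 1.0, M - N)

y0 = f0(x_shared)
y1_shared = f1(x_shared)
y1_all = np.concatenate([y1_shared, f1(x_extra)])

# Control variate weight estimated from the shared samples
# (for this form the optimal weight is cov(f0, f1) / var(f1)).
alpha = np.cov(y0, y1_shared)[0, 1] / np.var(y1_shared, ddof=1)

# Approximate control variate estimator: the exact low-fidelity mean is
# replaced by its estimate from the larger low-fidelity sample set.
mu_mc = y0.mean()
mu_acv = mu_mc + alpha * (y1_all.mean() - y1_shared.mean())

print(f"plain MC estimate:        {mu_mc:.4f}")
print(f"approximate CV estimate:  {mu_acv:.4f}")

The variance reduction of such an estimator depends on how strongly the models are correlated and on how the shared and extra samples are allocated; the talk's framework makes that trade-off explicit and shows how sample allocation choices determine whether the optimal linear control variate performance can be recovered.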

Further Information
Venue:
Erwin Schrödinger Institute - virtual
Associated Event:
Multilevel and multifidelity sampling methods in UQ for PDEs (Online Workshop)
Organizer(s):
Kody Law (U Manchester)
Fabio Nobile (EPFL, Lausanne)
Robert Scheichl (U Heidelberg)
Karen Willcox (U of Texas, Austin)