An empirical adaptive Galerkin method for parametric PDEs

Martin Eigel (WIAS, Berlin)

Apr 06, 2022, 13:30 – 14:20

Adaptive stochastic Galerkin FEM (ASGFEM) with residual-based a posteriori error estimation has been shown to exhibit optimal convergence in practice for some standard parametric linear PDEs. However, its implementation is rather involved and requires significant effort when different problems are to be tackled.
Motivated by recent results on empirical low-rank tensor regression in the framework of statistical learning, we propose a non-intrusive reconstruction method that only uses samples of the solution and yields the Galerkin projection with high probability. The a posteriori error control involves all discretisation parameters, determining the deterministic error components as well as the statistical error. Moreover, the empirical ASGFEM can be shown to converge with respect to the sum of error and estimator.
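The non-intrusive idea can be illustrated with a toy sketch (an assumption for illustration, not the speaker's implementation): given only random samples of a parametric solution map, a least-squares regression onto a polynomial basis recovers an empirical projection, which is accurate with high probability when enough samples are used. The scalar solution map `solution(y)` below is a made-up stand-in for a parametric PDE solution.

```python
import numpy as np

rng = np.random.default_rng(0)

def solution(y):
    # Toy parametric "solution map": u(y) solving (2 + y) * u = 1 for y in [-1, 1].
    # Stands in for sampling a PDE solution at a parameter value (assumption).
    return 1.0 / (2.0 + y)

# Draw random parameter samples and evaluate the solution (the only access
# to the model a non-intrusive method requires).
n_samples, degree = 200, 6
y = rng.uniform(-1.0, 1.0, n_samples)
u = solution(y)

# Legendre design matrix; least squares yields the empirical projection
# onto the degree-6 polynomial space.
V = np.polynomial.legendre.legvander(y, degree)
coeffs, *_ = np.linalg.lstsq(V, u, rcond=None)

# Check the surrogate on held-out parameter values.
y_test = rng.uniform(-1.0, 1.0, 50)
err = np.max(np.abs(np.polynomial.legendre.legval(y_test, coeffs) - solution(y_test)))
print(err)
```

In the talk's setting the regression is performed in a low-rank tensor format over many parameters and coupled with finite element discretisation in space; the sketch only shows the sampling-and-projection principle in one parameter dimension.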
To realize the error estimator, a sufficiently accurate tensor representation of the coefficient is required, which easily becomes challenging, for instance, when the coefficient is defined as the exponential of a Gaussian field. We consider this common case and recall that the exponential corresponds to the solution of a differential equation. It can hence be computed by means of a Petrov-Galerkin method, for which error estimators and an application to Bayesian inverse problems are presented.
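The underlying observation can be sketched in a minimal form (a toy verification, not the talk's Petrov-Galerkin scheme): if the coefficient is kappa(x) = exp(gamma(x)), then kappa solves the linear ODE u'(x) = gamma'(x) u(x) with u(0) = exp(gamma(0)), so it can be obtained by solving a differential equation rather than by evaluating the exponential pointwise. The smooth field realisation `gamma` below is an assumption for the demo; a classical RK4 integrator replaces the Petrov-Galerkin solver.

```python
import numpy as np

def gamma(x):
    # Stand-in smooth realisation of a "Gaussian field" (assumption for the demo).
    return 0.5 * np.sin(3.0 * x) + 0.2 * x

def dgamma(x):
    # Derivative of gamma, needed as the ODE coefficient.
    return 1.5 * np.cos(3.0 * x) + 0.2

# Solve u' = gamma'(x) * u on [0, 1] with classical RK4 steps.
n = 1000
h = 1.0 / n
u = np.exp(gamma(0.0))  # initial condition u(0) = exp(gamma(0))
x = 0.0
f = lambda s, v: dgamma(s) * v
for _ in range(n):
    k1 = f(x, u)
    k2 = f(x + h / 2, u + h / 2 * k1)
    k3 = f(x + h / 2, u + h / 2 * k2)
    k4 = f(x + h, u + h * k3)
    u += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    x += h

# The ODE solution at x = 1 should match exp(gamma(1)) up to discretisation error.
err = abs(u - np.exp(gamma(1.0)))
print(err)
```

In the tensor setting this reformulation is what makes a low-rank representation of the lognormal coefficient computable by a Galerkin-type method, with its own error estimators.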

Further Information
Venue:
ESI Boltzmann Lecture Hall
Associated Event:
Adaptivity, High Dimensionality and Randomness (Workshop)
Organizer(s):
Carsten Carstensen (HU Berlin)
Albert Cohen (Sorbonne U, Paris)
Michael Feischl (TU Vienna)
Christoph Schwab (ETH Zurich)