Noisy linear operator learning as an inverse problem

Nicholas Nelsen (Caltech)

May 17, 2022, 16:00–16:45

This talk studies supervised learning of linear operators between infinite-dimensional Hilbert spaces. Learning is framed as a Bayesian inverse problem with the linear operator as the unknown parameter. Assuming that the true operator is diagonalizable in a known basis, this work solves the equivalent inverse problem of estimating the operator's eigenvalues from the data. The analysis establishes posterior contraction rates in the infinite-data limit under Gaussian priors. These convergence rates reveal fundamental principles of operator learning that could help guide practical developments and reduce the amount of data required. Numerical evidence supports the theory in diagonal and non-diagonal settings corresponding to familiar PDE operators.
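To make the diagonal setting concrete, the sketch below (not the speaker's code; all names, the truncation to J modes, the white-noise model, and the per-mode Gaussian priors are illustrative assumptions) shows how, in the known eigenbasis, estimating each eigenvalue from noisy input-output pairs reduces to a conjugate Gaussian update with a closed-form posterior.

```python
# Minimal sketch: conjugate Gaussian posterior for the eigenvalues of a
# diagonal operator observed through noisy input-output pairs.
# Assumptions (not from the talk): truncation to J modes, observation noise
# level gamma, independent N(0, sigma_j^2) priors on each eigenvalue.
import numpy as np

rng = np.random.default_rng(0)
J, N, gamma = 50, 200, 0.1                    # modes, data pairs, noise level
true_eigs = 1.0 / (1 + np.arange(J)) ** 2     # e.g. eigenvalue decay of a smoothing operator
sigma = 1.0 / (1 + np.arange(J))              # prior standard deviation per eigenvalue

# Synthetic data in the (known) eigenbasis: y_nj = l_j * x_nj + gamma * noise
X = rng.standard_normal((N, J))               # input coefficients
Y = X * true_eigs + gamma * rng.standard_normal((N, J))

# Per-mode Gaussian posterior (Gaussian prior and likelihood are conjugate)
precision = 1.0 / sigma**2 + (X**2).sum(axis=0) / gamma**2
post_var = 1.0 / precision
post_mean = post_var * (X * Y).sum(axis=0) / gamma**2

print("relative error of posterior mean:",
      np.linalg.norm(post_mean - true_eigs) / np.linalg.norm(true_eigs))
```

As N grows, each posterior concentrates around the corresponding true eigenvalue, which is the finite-dimensional analogue of the posterior contraction studied in the talk.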

Further Information
Venue:
ESI Boltzmann Lecture Hall
Associated Event:
Computational Uncertainty Quantification: Mathematical Foundations, Methodology & Data (Thematic Programme)
Organizer(s):
Clemens Heitzinger (TU Vienna)
Fabio Nobile (EPFL, Lausanne)
Robert Scheichl (U Heidelberg)
Christoph Schwab (ETH Zurich)
Sara van de Geer (ETH Zurich)
Karen Willcox (U of Texas, Austin)