Noisy linear operator learning as an inverse problem

Nicholas Nelsen (Caltech)

May 17, 2022, 16:00 — 16:45

This talk studies supervised linear operator learning between infinite-dimensional Hilbert spaces. Learning is framed as a Bayesian inverse problem in which the unknown parameter is a linear operator. Assuming that the true operator is diagonalizable in a known basis, this work solves the equivalent inverse problem of estimating the operator's eigenvalues from the data. The analysis establishes posterior contraction rates in the infinite-data limit under Gaussian priors. These convergence rates reveal fundamental principles of operator learning that could help guide practical developments and reduce the required data volume. Numerical evidence supports the theory in both diagonal and non-diagonal settings corresponding to familiar PDE operators.
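To make the setting concrete, here is a minimal numerical sketch of the diagonal case described above. It is not the speaker's code: the truncation level, eigenvalue and prior-variance decay rates, and noise level are all illustrative assumptions. Because the operator is diagonal in a known basis, the inverse problem decouples into independent scalar Gaussian regressions, one per eigenvalue, each with a conjugate Gaussian posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not from the talk): truncate to J modes,
# polynomially decaying true eigenvalues and prior variances.
J = 50                                                  # retained modes
gamma = 0.1                                             # observation noise std
lam_true = np.array([j ** -2.0 for j in range(1, J + 1)])      # true eigenvalues
sigma2_prior = np.array([j ** -1.0 for j in range(1, J + 1)])  # prior variances

def posterior_mean(N):
    """Posterior mean of the eigenvalues from N noisy input-output pairs."""
    # Coefficients of random inputs in the known eigenbasis.
    X = rng.normal(size=(N, J))
    # Noisy outputs: y = (diagonal operator applied to x) + noise, mode by mode.
    Y = X * lam_true + gamma * rng.normal(size=(N, J))
    # Conjugate Gaussian update; the problem decouples across modes.
    num = (X * Y).sum(axis=0) / gamma**2
    den = (X**2).sum(axis=0) / gamma**2 + 1.0 / sigma2_prior
    return num / den

err_small = np.linalg.norm(posterior_mean(10) - lam_true)
err_large = np.linalg.norm(posterior_mean(10_000) - lam_true)
print(err_small, err_large)  # error shrinks as the data volume grows
```

Running this shows the posterior mean concentrating on the true eigenvalues as the sample size grows, a finite-dimensional caricature of the posterior contraction rates the talk establishes in the infinite-data limit.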

Further Information
ESI Boltzmann Lecture Hall
Associated Event:
Computational Uncertainty Quantification: Mathematical Foundations, Methodology & Data (Thematic Programme)
Clemens Heitzinger (TU Vienna)
Fabio Nobile (EPFL, Lausanne)
Robert Scheichl (U Heidelberg)
Christoph Schwab (ETH Zürich)
Sara van de Geer (ETH Zürich)
Karen Willcox (U of Texas, Austin)