Metric entropy limits on recurrent neural network learning of linear dynamical systems

Helmut Bölcskei (ETH Zurich)

May 31, 2022, 09:15–10:00

One of the most influential results in neural network theory is the universal approximation theorem, which states that continuous functions can be approximated to within arbitrary accuracy by single-hidden-layer feedforward neural networks. The purpose of this paper is to establish a result in this spirit for the approximation of general discrete-time linear dynamical systems—including time-varying systems—by recurrent neural networks (RNNs). For the subclass of linear time-invariant (LTI) systems, we devise a quantitative version of this statement. Specifically, measuring the complexity of the considered class of LTI systems through metric entropy in the sense of Zames and Owen (1993), we show that RNNs can optimally learn—or identify in system-theory parlance—stable LTI systems. For LTI systems whose input-output relation is characterized through a difference equation, this means that RNNs can learn the difference equation from input-output traces in a metric-entropy optimal manner.
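As an illustration of the learning task described in the abstract, the sketch below recovers the coefficients of a stable LTI difference equation from input-output traces, using ordinary least squares on the regression form of the equation. This is a deliberately simplified stand-in for the RNN construction discussed in the talk, not the method of the paper; the system order, coefficient values, and trace length are illustrative assumptions.

```python
import numpy as np

# Illustrative stable LTI system (assumed, not taken from the talk):
#   y[t] = a1*y[t-1] + a2*y[t-2] + b0*u[t] + b1*u[t-1]
a_true = np.array([1.2, -0.5])   # autoregressive coefficients (poles inside the unit circle)
b_true = np.array([0.8, 0.3])    # input coefficients

rng = np.random.default_rng(0)
T = 2000
u = rng.standard_normal(T)       # excitation input
y = np.zeros(T)
for t in range(2, T):
    y[t] = (a_true[0] * y[t - 1] + a_true[1] * y[t - 2]
            + b_true[0] * u[t] + b_true[1] * u[t - 1])

# Stack the regression form of the difference equation and solve by least squares.
rows, targets = [], []
for t in range(2, T):
    rows.append([y[t - 1], y[t - 2], u[t], u[t - 1]])
    targets.append(y[t])
theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)

print("estimated [a1, a2, b0, b1]:", np.round(theta, 4))
# With noiseless traces this recovers the true coefficients essentially exactly.
```

A linear RNN with linear state update is itself a state-space realization of such a system, which is why identification of the difference equation from input-output traces is the natural benchmark for the RNN learning result stated above.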

Further Information
Venue: ESI Boltzmann Lecture Hall
Associated Event: Computational Uncertainty Quantification: Mathematical Foundations, Methodology & Data (Thematic Programme)
Organizers:
Clemens Heitzinger (TU Vienna)
Fabio Nobile (EPFL, Lausanne)
Robert Scheichl (U Heidelberg)
Christoph Schwab (ETH Zurich)
Sara van de Geer (ETH Zurich)
Karen Willcox (U of Texas, Austin)