Iterative Regularization of the Deep Inverse Prior via (Inertial) Gradient Flow

Jalal Fadili (CNRS-ENSICAEN)

Jun 04, 2024, 10:00 – 10:30

Neural networks have become a prominent approach to solving inverse problems in recent years. While a plethora of data-driven methods has been developed to solve inverse problems empirically, clear theoretical guarantees for these methods are still lacking. On the other hand, many works have highlighted the role of overparametrization in establishing convergence of neural network training to optimal solutions. In this work, we investigate how to bridge these two worlds and provide deterministic convergence and recovery guarantees for a class of neural networks optimized via inertial gradient flow. In the random setting, we also derive overparametrization bounds under which a two-layer Deep Inverse Prior network with a smooth activation function benefits from our guarantees. This is a first step towards a theoretical understanding of the interplay between optimization dynamics and neural networks in the inverse problem setting.
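As a hedged sketch of the setting (illustrative notation, not taken from the talk itself): in the Deep Inverse Prior approach, the unknown signal is parametrized as the output g(θ, z) of a network with a fixed random input z and trainable parameters θ, and the observations y = Ax + ε are fitted by evolving θ along the (inertial) gradient flow of the data-fidelity loss. Here A, y, z, g, and the damping coefficient γ are assumed symbols:

\[
\mathcal{L}(\theta) = \tfrac{1}{2}\,\big\| A\, g(\theta, z) - y \big\|_2^2,
\qquad
\ddot{\theta}(t) + \gamma\, \dot{\theta}(t) + \nabla_{\theta} \mathcal{L}\big(\theta(t)\big) = 0.
\]

Dropping the second-order term recovers the plain gradient flow \(\dot{\theta}(t) = -\nabla_{\theta}\mathcal{L}(\theta(t))\). "Iterative regularization" then refers to regularizing through the dynamics of the flow (e.g., by early stopping) rather than by adding an explicit penalty to \(\mathcal{L}\).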

Further Information
Venue: ESI Boltzmann Lecture Hall
Recordings: Recording
Associated Event: One World Optimization Seminar in Vienna (Workshop)
Organizer(s): Radu Ioan Bot (U of Vienna), Yurii Malitskyi (U of Vienna)