Rapture of the deep: highs and lows of sparsity in a world of depths

Rémi Gribonval (Inria Lyon)

May 30, 2022, 10:00–10:45

(Joint work with Quoc-Tung Le, Elisa Riccietti, Pierre Stock, Léon Zheng)

Promoting sparsity in deep networks is a natural way to control their complexity, and can be expected to bring further benefits in terms of statistical significance or explainability. Yet, while sparsity-promoting regularizers are well understood in linear inverse problems, much less is known in deeper contexts, linear or not. We show that, in contrast to the linear case, even the simple bilinear setting leads to surprises: ℓ1 regularization does not always lead to sparsity [1], and optimization with a fixed support can be NP-hard [2]. We nevertheless identify families of supports for which this optimization becomes easy [2] and well-posed [3], and exploit this to derive an algorithm able to recover multilayer sparse matrix factorizations with certain prescribed (butterfly) supports at a cost proportional to the size of the approximated matrix [4,5]. Behind much of the observed phenomena are intrinsic scaling ambiguities in the parameterization of deep linear networks, which are also present in ReLU networks. We conclude with a scaling-invariant embedding of such networks [6], which can be used to analyze the identifiability of (the equivalence class of) parameters of ReLU networks from their realization.

[1] A. Benichoux, E. Vincent, R. Gribonval, A fundamental pitfall in blind deconvolution with sparse and shift-invariant priors, Proc. ICASSP 2013.

[2] Q.T. Le, E. Riccietti, R. Gribonval, Spurious Valleys, Spurious Minima and NP-hardness of Sparse Matrix Factorization With Fixed Support, 2021, arXiv:2112.00386.

[3] L. Zheng, E. Riccietti, R. Gribonval, Identifiability in Two-Layer Sparse Matrix Factorization, 2021, arXiv:2110.01235.

[4] Q.T. Le, L. Zheng, E. Riccietti, R. Gribonval, Fast learning of fast transforms, with guarantees, Proc. ICASSP 2022.

[5] L. Zheng, E. Riccietti, R. Gribonval, Efficient Identification of Butterfly Sparse Matrix Factorizations, 2022, arXiv:2110.01230.

[6] P. Stock, R. Gribonval, An Embedding of ReLU Networks and an Analysis of their Identifiability, to appear in Constructive Approximation.


Further Information
ESI Boltzmann Lecture Hall
Associated Event:
Computational Uncertainty Quantification: Mathematical Foundations, Methodology & Data (Thematic Programme)
Clemens Heitzinger (TU Vienna)
Fabio Nobile (EPFL, Lausanne)
Robert Scheichl (U Heidelberg)
Christoph Schwab (ETH Zürich)
Sara van de Geer (ETH Zürich)
Karen Willcox (U of Texas, Austin)