Derivative-Free Thinning (DaFT) with Stochastic Kernel Embeddings

Mikkel Lykkegaard (U of Exeter)

May 5, 2022, 15:45–16:15

We propose a novel heuristic method for thinning MCMC samples that relies on neither the gradient of the posterior distribution nor the direct computation of a (potentially large) kernel matrix. Instead, we compute the Maximum Mean Discrepancy (MMD) using stochastic feature embeddings that asymptotically approximate popular shift-invariant kernels. This allows us to rapidly compute the MMD between candidate sample subsets and the entire set of samples. The MMD is then minimised using a Genetic Algorithm (GA), which targets the (discrete) optimisation space of empirical sample subsets in a natural and straightforward manner. Using a GA as the optimiser has the added benefit of yielding a population of near-optimal sample subsets rather than a single solution. This baseline approach is extended to allow for weighting the thinned samples, so that more samples from the tails of the distribution can be included without compromising the MMD-optimality of the thinned subset. In this talk, we show that our stochastic feature embeddings asymptotically approximate the true MMD, demonstrate the utility of the method on a complex synthetic distribution with multiple modes and a curved ridge, and outline some future research directions.
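The abstract does not specify the embedding, so the following is only a minimal sketch assuming random Fourier features (Rahimi & Recht, 2007) for an RBF kernel, one popular shift-invariant choice. The function names (`rff`, `mmd2_rff`) and defaults (`lengthscale`, `D`) are hypothetical, not the authors' implementation. The squared MMD between two empirical distributions reduces to the squared distance between their mean feature embeddings, which avoids forming any kernel matrix.

```python
import numpy as np

def rff(X, W, b):
    """Random Fourier features: phi(x)^T phi(y) approximates the RBF
    kernel k(x, y) = exp(-||x - y||^2 / (2 s^2)) as D grows."""
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

def mmd2_rff(X_full, X_sub, lengthscale=1.0, D=512, seed=0):
    """Squared MMD between the full chain and a candidate subset,
    estimated as the squared distance between mean feature embeddings.
    Cost is O((n + m) D d) rather than the O(n^2 d) kernel matrix."""
    rng = np.random.default_rng(seed)
    d = X_full.shape[1]
    W = rng.normal(scale=1.0 / lengthscale, size=(D, d))  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)             # random phases
    mu_full = rff(X_full, W, b).mean(axis=0)
    mu_sub = rff(X_sub, W, b).mean(axis=0)
    return float(np.sum((mu_full - mu_sub) ** 2))
```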
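The GA details are likewise not given in the abstract; the toy sketch below, under assumptions of my own (fixed-size index subsets, a union-based crossover, an index-swap mutation, and illustrative hyperparameters `pop_size`, `n_gens`, `p_mut`), shows one plausible way to search the discrete space of sample subsets. Note that the feature map `phi` is drawn once outside the GA and reused, so the fitness is deterministic across evaluations.

```python
import numpy as np

def thin_with_ga(X, m, phi, pop_size=50, n_gens=200, p_mut=0.02, seed=0):
    """Toy GA over fixed-size index subsets of the chain X (n, d).
    `phi` is a feature map drawn once beforehand; fitness is the
    squared MMD between the subset and the full chain."""
    rng = np.random.default_rng(seed)
    n = len(X)
    F = phi(X)                          # (n, D) features, computed once
    mu_full = F.mean(axis=0)

    def fitness(idx):                   # squared MMD via mean embeddings
        return float(np.sum((F[idx].mean(axis=0) - mu_full) ** 2))

    pop = [rng.choice(n, size=m, replace=False) for _ in range(pop_size)]
    for _ in range(n_gens):
        order = np.argsort([fitness(idx) for idx in pop])   # lower is better
        parents = [pop[i] for i in order[: pop_size // 2]]  # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.choice(len(parents), size=2, replace=False)
            # crossover: the child draws its m indices from the parents' union
            child = rng.choice(np.union1d(parents[a], parents[b]),
                               size=m, replace=False)
            # mutation: swap a few indices for fresh ones from the chain
            flip = np.flatnonzero(rng.random(m) < p_mut)
            child[flip] = rng.choice(np.setdiff1d(np.arange(n), child),
                                     size=len(flip), replace=False)
            children.append(child)
        pop = parents + children
    pop.sort(key=fitness)
    return pop                          # pop[0] is the best subset found
```

A feature map for this GA can be built by drawing `W` and `b` once and passing `phi = lambda Z: rff(Z, W, b)`. Returning the whole sorted population directly exposes the near-optimal subsets mentioned above, and a weighted variant could replace the plain mean over `F[idx]` with a weighted one.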

Further Information
Venue:
ESI Boltzmann Lecture Hall
Associated Event:
Computational Uncertainty Quantification: Mathematical Foundations, Methodology & Data (Thematic Programme)
Organizer(s):
Clemens Heitzinger (TU Vienna)
Fabio Nobile (EPFL, Lausanne)
Robert Scheichl (U Heidelberg)
Christoph Schwab (ETH Zurich)
Sara van de Geer (ETH Zurich)
Karen Willcox (U of Texas, Austin)