Zoom: https://chalmers.zoom.us/j/67626671400 with password 293587
Bayesian inference in applied fields of science and engineering can be challenging because, in the best-case scenario, the likelihood is a black box (e.g., mildly-to-very expensive, no gradients), and more often than not it is not available at all, with the researcher only able to simulate data from the model. In this talk, I review a recent sample-efficient framework for approximate Bayesian inference, Variational Bayesian Monte Carlo (VBMC), which uses only a limited number of potentially noisy log-likelihood evaluations. VBMC produces both a nonparametric approximation of the posterior distribution and an approximate lower bound on the model evidence, useful for model selection. VBMC combines well with a technique we (re)introduced, inverse binomial sampling (IBS), which obtains unbiased and normally distributed estimates of the log-likelihood via simulation. VBMC has been tested on many real problems (up to 10 dimensions) from computational and cognitive neuroscience, both with exact and with noisy, simulation-based likelihoods. Our method consistently performed well in reconstructing the ground-truth posterior and model evidence with a limited budget of evaluations, showing promise as a general tool for black-box, sample-efficient approximate inference, with exciting potential extensions to more complex cases.
1. MATLAB toolbox: https://github.com/lacerbi/vbmc
2. VBMC paper: https://arxiv.org/abs/1810.05558
3. VBMC with noisy likelihoods paper: https://arxiv.org/abs/2006.08655
4. Inverse binomial sampling preprint: https://arxiv.org/abs/2001.03985
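The core of IBS is simple enough to sketch in a few lines: keep simulating the model until a simulated response first matches the observed one, and turn the number of draws into an unbiased estimate of the log-probability. Below is a minimal Python illustration under stated assumptions; `simulate_match` is a hypothetical stand-in for one model simulation compared against an observed response, not part of the actual toolbox.

```python
import math
import random


def ibs_logprob(simulate_match):
    """Unbiased estimate of log p via inverse binomial sampling (sketch).

    simulate_match() is assumed to return True with unknown probability p,
    e.g. by simulating the model once and checking whether the simulated
    response matches the observed one.
    """
    k = 1
    while not simulate_match():  # draw until the first match
        k += 1
    # If the first match occurs at draw K, the IBS estimator is
    # L = -sum_{j=1}^{K-1} 1/j, which satisfies E[L] = log p.
    return -sum(1.0 / j for j in range(1, k))


# Toy check: estimate log(0.3) by averaging many IBS estimates.
random.seed(0)
p = 0.3
estimates = [ibs_logprob(lambda: random.random() < p) for _ in range(20000)]
mean_estimate = sum(estimates) / len(estimates)
```

Averaging many such estimates recovers log p (here log 0.3 ≈ -1.204) to within sampling error; in practice the papers above also describe repeated sampling and variance control, which this sketch omits.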
20 October, 2020, 14:15–15:15