Robust, automated, and accurate black-box variational inference

Files
2203.15945.pdf (5.62 MB)
First author draft
Date
2022-03-29
Authors
Welandawe, Manushi
Riis Anderson, Michael
Vehtari, Aki
Huggins, Jonathan
Version
First author draft
Citation
M. Welandawe, M. Riis Anderson, A. Vehtari, J. Huggins. 2022. "Robust, Automated, and Accurate Black-box Variational Inference" https://doi.org/10.48550/arXiv.2203.15945
Abstract
Black-box variational inference (BBVI) now sees widespread use in machine learning and statistics as a fast yet flexible alternative to Markov chain Monte Carlo methods for approximate Bayesian inference. However, stochastic optimization methods for BBVI remain unreliable and require substantial expertise and hand-tuning to apply effectively. In this paper, we propose Robust, Automated, and Accurate BBVI (RAABBVI), a framework for reliable BBVI optimization. RAABBVI is based on rigorously justified automation techniques, includes just a small number of intuitive tuning parameters, and detects inaccurate estimates of the optimal variational approximation. RAABBVI adaptively decreases the learning rate by detecting convergence of the fixed--learning-rate iterates, then estimates the symmetrized Kullback--Leibler (KL) divergence between the current variational approximation and the optimal one. It also employs a novel optimization termination criterion that enables the user to balance desired accuracy against computational cost by comparing (i) the predicted relative decrease in the symmetrized KL divergence if a smaller learning rate were used and (ii) the predicted computation required to converge with the smaller learning rate. We validate the robustness and accuracy of RAABBVI through carefully designed simulation studies and on a diverse set of real-world model and data examples.
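As a concrete illustration of the quantity the abstract says RAABBVI estimates, the following is a minimal sketch (not the authors' implementation) of the symmetrized KL divergence for univariate Gaussian variational approximations, using the standard closed-form Gaussian KL; function names and parameters here are illustrative only.

```python
import math

def kl_gaussian(m1, s1, m2, s2):
    """Closed-form KL(N(m1, s1^2) || N(m2, s2^2)) for univariate Gaussians."""
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def symmetrized_kl(m1, s1, m2, s2):
    """Symmetrized KL divergence: KL(p||q) + KL(q||p).

    Zero iff the two Gaussians are identical, and symmetric in its
    two arguments, unlike the ordinary (asymmetric) KL divergence.
    """
    return kl_gaussian(m1, s1, m2, s2) + kl_gaussian(m2, s2, m1, s1)
```

In RAABBVI this divergence is measured between the current variational approximation and an estimate of the optimal one, rather than between two known distributions as in this toy sketch.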