Variationally Auto-Encoded Deep Gaussian Processes

Zhenwen Dai, Andreas Damianou, Javier Gonzalez, Neil D. Lawrence
Proceedings of the International Conference on Learning Representations, 3, 2016.

Abstract

We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model. Inference is performed in a novel scalable variational framework where the variational posterior distributions are reparametrized through a multilayer perceptron. The key aspect of this reformulation is that it prevents the proliferation of variational parameters, which would otherwise grow linearly with the sample size. We derive a new formulation of the variational lower bound that allows us to distribute most of the computation in a way that enables us to handle datasets of the size of mainstream deep learning tasks. We show the efficacy of the method on a variety of challenges including deep unsupervised learning and deep Bayesian optimization.
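The central idea described in the abstract, amortizing the variational parameters of the latent variables through a recognition network so that their number does not grow with the sample size, can be sketched in a few lines of Python. The snippet below is an illustrative toy in plain NumPy, not the authors' implementation; the function names (init_mlp, recognition_model) and the architecture are assumptions made purely for illustration. It shows an MLP mapping each observation to the mean and variance of its variational posterior, so the parameter count is fixed by the network rather than by the number of data points.

# A minimal sketch (not the paper's code) of amortized variational parameters:
# instead of storing a separate mean/variance per data point (O(N) parameters),
# a small MLP "recognition model" maps each observation y_n to the parameters
# of q(x_n), so the number of variational parameters is constant in N.
import numpy as np

def init_mlp(sizes, rng):
    # Random MLP weights; `sizes` lists layer widths, e.g. [D_y, H, 2*Q].
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def recognition_model(Y, params):
    # Map observations Y (N x D_y) to variational means and variances of X.
    H = Y
    for i, (W, b) in enumerate(params):
        H = H @ W + b
        if i < len(params) - 1:
            H = np.tanh(H)            # hidden-layer nonlinearity
    mu, log_var = np.split(H, 2, axis=1)
    return mu, np.exp(log_var)        # q(x_n) = N(mu_n, diag(var_n))

rng = np.random.default_rng(0)
D_y, Q, N = 5, 2, 10_000              # observed dim, latent dim, sample size
Y = rng.standard_normal((N, D_y))

params = init_mlp([D_y, 20, 2 * Q], rng)
mu, var = recognition_model(Y, params)
print(mu.shape, var.shape)            # (10000, 2) (10000, 2)

# The variational parameter count is set by the MLP, not by N:
n_params = sum(W.size + b.size for W, b in params)
print("recognition-model parameters:", n_params)

Doubling N leaves n_params unchanged, which is the property the paper exploits to scale deep Gaussian process inference to datasets of mainstream deep learning size.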

Cite this Paper


BibTeX
@InProceedings{Dai-variationally16,
  title     = {Variationally Auto-Encoded Deep {G}aussian Processes},
  author    = {Dai, Zhenwen and Damianou, Andreas and Gonzalez, Javier and Lawrence, Neil D.},
  booktitle = {Proceedings of the International Conference on Learning Representations},
  year      = {2016},
  editor    = {Larochelle, Hugo and Kingsbury, Brian and Bengio, Samy},
  volume    = {3},
  address   = {Caribe Hotel, San Juan, PR},
  pdf       = {http://arxiv.org/pdf/1511.06455v2},
  url       = {http://inverseprobability.com/publications/dai-variationally16.html},
  abstract  = {We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model. Inference is performed in a novel scalable variational framework where the variational posterior distributions are reparametrized through a multilayer perceptron. The key aspect of this reformulation is that it prevents the proliferation of variational parameters which otherwise grow linearly in proportion to the sample size. We derive a new formulation of the variational lower bound that allows us to distribute most of the computation in a way that enables to handle datasets of the size of mainstream deep learning tasks. We show the efficacy of the method on a variety of challenges including deep unsupervised learning and deep Bayesian optimization.}
}
Endnote
%0 Conference Paper
%T Variationally Auto-Encoded Deep Gaussian Processes
%A Zhenwen Dai
%A Andreas Damianou
%A Javier Gonzalez
%A Neil D. Lawrence
%B Proceedings of the International Conference on Learning Representations
%D 2016
%E Hugo Larochelle
%E Brian Kingsbury
%E Samy Bengio
%F Dai-variationally16
%U http://inverseprobability.com/publications/dai-variationally16.html
%V 3
%X We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model. Inference is performed in a novel scalable variational framework where the variational posterior distributions are reparametrized through a multilayer perceptron. The key aspect of this reformulation is that it prevents the proliferation of variational parameters which otherwise grow linearly in proportion to the sample size. We derive a new formulation of the variational lower bound that allows us to distribute most of the computation in a way that enables to handle datasets of the size of mainstream deep learning tasks. We show the efficacy of the method on a variety of challenges including deep unsupervised learning and deep Bayesian optimization.
RIS
TY - CPAPER
TI - Variationally Auto-Encoded Deep Gaussian Processes
AU - Zhenwen Dai
AU - Andreas Damianou
AU - Javier Gonzalez
AU - Neil D. Lawrence
BT - Proceedings of the International Conference on Learning Representations
DA - 2016/05/02
ED - Hugo Larochelle
ED - Brian Kingsbury
ED - Samy Bengio
ID - Dai-variationally16
VL - 3
L1 - http://arxiv.org/pdf/1511.06455v2
UR - http://inverseprobability.com/publications/dai-variationally16.html
AB - We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model. Inference is performed in a novel scalable variational framework where the variational posterior distributions are reparametrized through a multilayer perceptron. The key aspect of this reformulation is that it prevents the proliferation of variational parameters which otherwise grow linearly in proportion to the sample size. We derive a new formulation of the variational lower bound that allows us to distribute most of the computation in a way that enables to handle datasets of the size of mainstream deep learning tasks. We show the efficacy of the method on a variety of challenges including deep unsupervised learning and deep Bayesian optimization.
ER -
APA
Dai, Z., Damianou, A., Gonzalez, J. & Lawrence, N.D. (2016). Variationally Auto-Encoded Deep Gaussian Processes. Proceedings of the International Conference on Learning Representations, 3. Available from http://inverseprobability.com/publications/dai-variationally16.html.
