Variationally Auto-Encoded Deep Gaussian Processes

Zhenwen Dai, Andreas Damianou, Javier Gonzalez, Neil D. Lawrence
Proceedings of the International Conference on Learning Representations, 3, 2016.

Abstract

We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model. Inference is performed in a novel scalable variational framework where the variational posterior distributions are reparametrized through a multilayer perceptron. The key aspect of this reformulation is that it prevents the proliferation of variational parameters, which would otherwise grow linearly with the sample size. We derive a new formulation of the variational lower bound that allows us to distribute most of the computation in a way that enables us to handle datasets of the size of mainstream deep learning tasks. We show the efficacy of the method on a variety of challenges including deep unsupervised learning and deep Bayesian optimization.
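The central idea in the abstract, reparametrizing the variational posterior through a multilayer perceptron so that the number of variational parameters stays fixed rather than growing with the number of data points, can be illustrated with a minimal sketch. This is an assumption-laden illustration and not the paper's implementation: the function name mlp_encode, the network sizes, and the synthetic data are all hypothetical.

# Illustrative sketch (not the authors' code): an MLP "recognition model"
# maps each observed data point to the parameters of its variational
# posterior, so the trainable variational parameters are the MLP weights,
# a fixed number independent of the sample size N.
import numpy as np

def mlp_encode(Y, W1, b1, W2, b2):
    """Map observations Y (N x D) to per-point variational means and
    log-variances (N x Q each) through a one-hidden-layer MLP."""
    H = np.tanh(Y @ W1 + b1)      # hidden layer activations, N x H
    params = H @ W2 + b2          # N x 2Q
    Q = params.shape[1] // 2
    mu, log_var = params[:, :Q], params[:, Q:]
    return mu, log_var

# Example: 1000 points in 5 dimensions encoded to a 2-d latent posterior.
rng = np.random.default_rng(0)
Y = rng.normal(size=(1000, 5))
W1 = rng.normal(size=(5, 20)) * 0.1; b1 = np.zeros(20)
W2 = rng.normal(size=(20, 4)) * 0.1; b2 = np.zeros(4)
mu, log_var = mlp_encode(Y, W1, b1, W2, b2)
# Only W1, b1, W2, b2 are optimized, so adding more data points adds no
# new variational parameters; without the recognition model, each point
# would carry its own mean and variance, growing linearly with N.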

Cite this Paper


BibTeX
@InProceedings{Dai:variationally16,
  title     = {Variationally Auto-Encoded Deep Gaussian Processes},
  author    = {Zhenwen Dai and Andreas Damianou and Javier Gonzalez and Neil D. Lawrence},
  booktitle = {Proceedings of the International Conference on Learning Representations},
  year      = {2016},
  editor    = {Hugo Larochelle and Brian Kingsbury and Samy Bengio},
  volume    = {3},
  address   = {Caribe Hotel, San Juan, PR},
  abstract  = {We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model. Inference is performed in a novel scalable variational framework where the variational posterior distributions are reparametrized through a multilayer perceptron. The key aspect of this reformulation is that it prevents the proliferation of variational parameters which otherwise grow linearly in proportion to the sample size. We derive a new formulation of the variational lower bound that allows us to distribute most of the computation in a way that enables to handle datasets of the size of mainstream deep learning tasks. We show the efficacy of the method on a variety of challenges including deep unsupervised learning and deep Bayesian optimization.}
}
Endnote
%0 Conference Paper
%T Variationally Auto-Encoded Deep Gaussian Processes
%A Zhenwen Dai
%A Andreas Damianou
%A Javier Gonzalez
%A Neil D. Lawrence
%B Proceedings of the International Conference on Learning Representations
%D 2016
%E Hugo Larochelle
%E Brian Kingsbury
%E Samy Bengio
%F Dai:variationally16
%V 3
%X We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model. Inference is performed in a novel scalable variational framework where the variational posterior distributions are reparametrized through a multilayer perceptron. The key aspect of this reformulation is that it prevents the proliferation of variational parameters which otherwise grow linearly in proportion to the sample size. We derive a new formulation of the variational lower bound that allows us to distribute most of the computation in a way that enables to handle datasets of the size of mainstream deep learning tasks. We show the efficacy of the method on a variety of challenges including deep unsupervised learning and deep Bayesian optimization.
RIS
TY - CPAPER
TI - Variationally Auto-Encoded Deep Gaussian Processes
AU - Zhenwen Dai
AU - Andreas Damianou
AU - Javier Gonzalez
AU - Neil D. Lawrence
BT - Proceedings of the International Conference on Learning Representations
DA - 2016/01/01
ED - Hugo Larochelle
ED - Brian Kingsbury
ED - Samy Bengio
ID - Dai:variationally16
VL - 3
AB - We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model. Inference is performed in a novel scalable variational framework where the variational posterior distributions are reparametrized through a multilayer perceptron. The key aspect of this reformulation is that it prevents the proliferation of variational parameters which otherwise grow linearly in proportion to the sample size. We derive a new formulation of the variational lower bound that allows us to distribute most of the computation in a way that enables to handle datasets of the size of mainstream deep learning tasks. We show the efficacy of the method on a variety of challenges including deep unsupervised learning and deep Bayesian optimization.
ER -
APA
Dai, Z., Damianou, A., Gonzalez, J. & Lawrence, N.D. (2016). Variationally Auto-Encoded Deep Gaussian Processes. Proceedings of the International Conference on Learning Representations, 3.
