We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model. Inference is performed in a novel scalable variational framework where the variational posterior distributions are reparametrized through a multilayer perceptron. The key aspect of this reformulation is that it prevents the proliferation of variational parameters, which otherwise grow linearly with the sample size. We derive a new formulation of the variational lower bound that allows us to distribute most of the computation in a way that enables us to handle datasets of the size of mainstream deep learning tasks. We show the efficacy of the method on a variety of challenges including deep unsupervised learning and deep Bayesian optimization.
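The central idea described in the abstract is amortized variational inference: rather than keeping a separate set of free variational parameters for every data point, a recognition model (a multilayer perceptron) maps each observation to the parameters of its variational posterior, so the number of parameters is fixed by the network weights and does not grow with the sample size. The following is a minimal NumPy sketch of that idea only; the dimensions, weight initialization, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mlp_recognition(x, W1, b1, W2, b2):
    """Hypothetical recognition model: map observations to the mean and
    variance of their variational posteriors. The parameter count is set by
    the network weights, not by the number of data points."""
    h = np.tanh(x @ W1 + b1)      # hidden layer
    out = h @ W2 + b2             # concatenated [mean, log-variance]
    d = out.shape[-1] // 2
    return out[..., :d], np.exp(out[..., d:])

# Toy sizes (illustrative only): N points, D-dimensional data, Q latent dimensions.
rng = np.random.default_rng(0)
N, D, Q, H = 1000, 5, 2, 16
X = rng.standard_normal((N, D))

# Recognition-network weights: their number is independent of N.
W1, b1 = 0.1 * rng.standard_normal((D, H)), np.zeros(H)
W2, b2 = 0.1 * rng.standard_normal((H, 2 * Q)), np.zeros(2 * Q)

mu, var = mlp_recognition(X, W1, b1, W2, b2)
print(mu.shape, var.shape)  # (1000, 2) (1000, 2): per-point posteriors from O(1) parameters in N
```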
@InProceedings{dai-variationally16,
title = {Variationally Auto-Encoded Deep Gaussian Processes},
author = {Zhenwen Dai and Andreas Damianou and Javier Gonzalez and Neil D. Lawrence},
booktitle = {Proceedings of the International Conference on Learning Representations},
year = {2016},
editor = {Hugo Larochelle and Brian Kingsbury and Samy Bengio},
volume = {3},
address = {Caribe Hilton, San Juan, PR},
url = {http://inverseprobability.com/publications/dai-variationally16.html},
abstract = {We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model. Inference is performed in a novel scalable variational framework where the variational posterior distributions are reparametrized through a multilayer perceptron. The key aspect of this reformulation is that it prevents the proliferation of variational parameters, which otherwise grow linearly with the sample size. We derive a new formulation of the variational lower bound that allows us to distribute most of the computation in a way that enables us to handle datasets of the size of mainstream deep learning tasks. We show the efficacy of the method on a variety of challenges including deep unsupervised learning and deep Bayesian optimization.},
crossref = {Larochelle:iclr16},
key = {Dai:variationally16},
linkpdf = {http://arxiv.org/pdf/1511.06455v2},
}
%T Variationally Auto-Encoded Deep Gaussian Processes
%A Zhenwen Dai and Andreas Damianou and Javier Gonzalez and Neil D. Lawrence
%B
%C Proceedings of the International Conference on Learning Representations
%D 2016
%E Hugo Larochelle and Brian Kingsbury and Samy Bengio
%F dai-variationally16
%P --
%R
%U http://inverseprobability.com/publications/dai-variationally16.html
%V 3
%X We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model. Inference is performed in a novel scalable variational framework where the variational posterior distributions are reparametrized through a multilayer perceptron. The key aspect of this reformulation is that it prevents the proliferation of variational parameters, which otherwise grow linearly with the sample size. We derive a new formulation of the variational lower bound that allows us to distribute most of the computation in a way that enables us to handle datasets of the size of mainstream deep learning tasks. We show the efficacy of the method on a variety of challenges including deep unsupervised learning and deep Bayesian optimization.
TY - CPAPER
TI - Variationally Auto-Encoded Deep Gaussian Processes
AU - Zhenwen Dai
AU - Andreas Damianou
AU - Javier Gonzalez
AU - Neil D. Lawrence
BT - Proceedings of the International Conference on Learning Representations
PY - 2016/01/01
DA - 2016/01/01
ED - Hugo Larochelle
ED - Brian Kingsbury
ED - Samy Bengio
ID - dai-variationally16
SP -
EP -
L1 - http://arxiv.org/pdf/1511.06455v2
UR - http://inverseprobability.com/publications/dai-variationally16.html
AB - We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model. Inference is performed in a novel scalable variational framework where the variational posterior distributions are reparametrized through a multilayer perceptron. The key aspect of this reformulation is that it prevents the proliferation of variational parameters, which otherwise grow linearly with the sample size. We derive a new formulation of the variational lower bound that allows us to distribute most of the computation in a way that enables us to handle datasets of the size of mainstream deep learning tasks. We show the efficacy of the method on a variety of challenges including deep unsupervised learning and deep Bayesian optimization.
ER -
Dai, Z., Damianou, A., Gonzalez, J. & Lawrence, N.D. (2016). Variationally Auto-Encoded Deep Gaussian Processes. Proceedings of the International Conference on Learning Representations 3.