Variational Inference for Uncertainty on the Inputs of Gaussian Process Models

Andreas Damianou, Michalis K. Titsias and Neil D. Lawrence, 2014.

Abstract

The Gaussian process latent variable model (GP-LVM) provides a flexible approach for non-linear dimensionality reduction that has been widely applied. However, the current approach for training GP-LVMs is based on maximum likelihood, where the latent projection variables are maximized over rather than integrated out. In this paper we present a Bayesian method for training GP-LVMs by introducing a non-standard variational inference framework that allows us to approximately integrate out the latent variables and subsequently train a GP-LVM by maximizing an analytic lower bound on the exact marginal likelihood. We apply this method to learning a GP-LVM from i.i.d. observations and to learning non-linear dynamical systems where the observations are temporally correlated. We show that a benefit of the variational Bayesian procedure is its robustness to overfitting and its ability to automatically select the dimensionality of the non-linear latent space. The resulting framework is generic, flexible and easy to extend for other purposes, such as Gaussian process regression with uncertain inputs and semi-supervised Gaussian processes. We demonstrate our method on synthetic data and standard machine learning benchmarks, as well as challenging real-world datasets, including high-resolution video data.
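To make the setup concrete, the following is a minimal sketch of fitting a Bayesian GP-LVM of the kind described in the abstract, using the open-source GPy library's BayesianGPLVM model. The toy data, latent dimensionality, and number of inducing points are illustrative assumptions, not values from the paper.

```python
import numpy as np
import GPy

# Toy data: 100 observations of a 6-dimensional signal generated from a
# 2-dimensional non-linear latent space (placeholder data, not from the paper).
rng = np.random.RandomState(0)
X_true = rng.randn(100, 2)
Y = np.hstack([np.sin(X_true), np.cos(X_true), X_true ** 2]) \
    + 0.05 * rng.randn(100, 6)

# Bayesian GP-LVM: the latent inputs are variationally integrated out
# rather than maximized over. An ARD kernel lets the variational lower
# bound switch off superfluous latent dimensions, giving the automatic
# dimensionality selection discussed above. We deliberately over-specify
# the latent dimensionality (5 > 2) to let the model prune it.
kernel = GPy.kern.RBF(input_dim=5, ARD=True)
model = GPy.models.BayesianGPLVM(Y, input_dim=5, kernel=kernel,
                                 num_inducing=20)

# Maximize the analytic lower bound on the exact marginal likelihood.
model.optimize(messages=False, max_iters=1000)

# Dimensions with very large lengthscales (small inverse lengthscales)
# have been effectively switched off by the ARD prior.
print(model.kern.lengthscale)
```

In practice, roughly two of the five lengthscales should come out small (relevant dimensions) and the rest large (pruned), illustrating the overfitting robustness and automatic dimensionality selection the abstract claims.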

Cite this Paper


BibTeX
@Misc{Damianou-variational14,
  title = {Variational Inference for Uncertainty on the Inputs of {G}aussian Process Models},
  author = {Damianou, Andreas and Titsias, Michalis K. and Lawrence, Neil D.},
  year = {2014},
  pdf = {https://arxiv.org/pdf/1409.2287.pdf},
  url = {http://inverseprobability.com/publications/variational-inference-for-uncertainty-on-the-inputs-of-gaussian-process-models.html},
  abstract = {The Gaussian process latent variable model (GP-LVM) provides a flexible approach for non-linear dimensionality reduction that has been widely applied. However, the current approach for training GP-LVMs is based on maximum likelihood, where the latent projection variables are maximized over rather than integrated out. In this paper we present a Bayesian method for training GP-LVMs by introducing a non-standard variational inference framework that allows us to approximately integrate out the latent variables and subsequently train a GP-LVM by maximizing an analytic lower bound on the exact marginal likelihood. We apply this method to learning a GP-LVM from i.i.d. observations and to learning non-linear dynamical systems where the observations are temporally correlated. We show that a benefit of the variational Bayesian procedure is its robustness to overfitting and its ability to automatically select the dimensionality of the non-linear latent space. The resulting framework is generic, flexible and easy to extend for other purposes, such as Gaussian process regression with uncertain inputs and semi-supervised Gaussian processes. We demonstrate our method on synthetic data and standard machine learning benchmarks, as well as challenging real-world datasets, including high-resolution video data.}
}
Endnote
%0 Generic
%T Variational Inference for Uncertainty on the Inputs of Gaussian Process Models
%A Andreas Damianou
%A Michalis K. Titsias
%A Neil D. Lawrence
%D 2014
%F Damianou-variational14
%U http://inverseprobability.com/publications/variational-inference-for-uncertainty-on-the-inputs-of-gaussian-process-models.html
%X The Gaussian process latent variable model (GP-LVM) provides a flexible approach for non-linear dimensionality reduction that has been widely applied. However, the current approach for training GP-LVMs is based on maximum likelihood, where the latent projection variables are maximized over rather than integrated out. In this paper we present a Bayesian method for training GP-LVMs by introducing a non-standard variational inference framework that allows us to approximately integrate out the latent variables and subsequently train a GP-LVM by maximizing an analytic lower bound on the exact marginal likelihood. We apply this method to learning a GP-LVM from i.i.d. observations and to learning non-linear dynamical systems where the observations are temporally correlated. We show that a benefit of the variational Bayesian procedure is its robustness to overfitting and its ability to automatically select the dimensionality of the non-linear latent space. The resulting framework is generic, flexible and easy to extend for other purposes, such as Gaussian process regression with uncertain inputs and semi-supervised Gaussian processes. We demonstrate our method on synthetic data and standard machine learning benchmarks, as well as challenging real-world datasets, including high-resolution video data.
RIS
TY - GEN
TI - Variational Inference for Uncertainty on the Inputs of Gaussian Process Models
AU - Andreas Damianou
AU - Michalis K. Titsias
AU - Neil D. Lawrence
DA - 2014/09/14
ID - Damianou-variational14
L1 - https://arxiv.org/pdf/1409.2287.pdf
UR - http://inverseprobability.com/publications/variational-inference-for-uncertainty-on-the-inputs-of-gaussian-process-models.html
AB - The Gaussian process latent variable model (GP-LVM) provides a flexible approach for non-linear dimensionality reduction that has been widely applied. However, the current approach for training GP-LVMs is based on maximum likelihood, where the latent projection variables are maximized over rather than integrated out. In this paper we present a Bayesian method for training GP-LVMs by introducing a non-standard variational inference framework that allows us to approximately integrate out the latent variables and subsequently train a GP-LVM by maximizing an analytic lower bound on the exact marginal likelihood. We apply this method to learning a GP-LVM from i.i.d. observations and to learning non-linear dynamical systems where the observations are temporally correlated. We show that a benefit of the variational Bayesian procedure is its robustness to overfitting and its ability to automatically select the dimensionality of the non-linear latent space. The resulting framework is generic, flexible and easy to extend for other purposes, such as Gaussian process regression with uncertain inputs and semi-supervised Gaussian processes. We demonstrate our method on synthetic data and standard machine learning benchmarks, as well as challenging real-world datasets, including high-resolution video data.
ER -
APA
Damianou, A., Titsias, M. K., & Lawrence, N. D. (2014). Variational Inference for Uncertainty on the Inputs of Gaussian Process Models. Available from http://inverseprobability.com/publications/variational-inference-for-uncertainty-on-the-inputs-of-gaussian-process-models.html.
