# Variational Inference for Latent Variables and Uncertain Inputs in Gaussian Processes

Andreas Damianou, University of Sheffield
Michalis K. Titsias, University of Athens
Neil D. Lawrence, University of Sheffield

Journal of Machine Learning Research 17

#### Abstract

The Gaussian process latent variable model (GP-LVM) provides a flexible approach for non-linear dimensionality reduction that has been widely applied. However, the current approach for training GP-LVMs is based on maximum likelihood, where the latent projection variables are maximised over rather than integrated out. In this paper we present a Bayesian method for training GP-LVMs by introducing a non-standard variational inference framework that allows us to approximately integrate out the latent variables and subsequently train a GP-LVM by maximising an analytic lower bound on the exact marginal likelihood. We apply this method for learning a GP-LVM from i.i.d. observations and for learning non-linear dynamical systems where the observations are temporally correlated. We show that a benefit of the variational Bayesian procedure is its robustness to over-fitting and its ability to automatically select the dimensionality of the non-linear latent space. The resulting framework is generic, flexible and easy to extend for other purposes, such as Gaussian process regression with uncertain or partially missing inputs. We demonstrate our method on synthetic data and standard machine learning benchmarks, as well as challenging real-world datasets, including high-resolution video data.
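As a loose illustration of the uncertain-inputs setting mentioned in the abstract (not the paper's analytic variational bound), input uncertainty can be propagated through a standard GP regression posterior by Monte Carlo: sample the uncertain input, predict at each sample, and moment-match the resulting mixture. The kernel choice, data, and input distribution below are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(Xtr, ytr, Xte, noise=0.1):
    # Standard GP regression: posterior mean and marginal variance at Xte.
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xtr, Xte)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(Xte, Xte)) - (v ** 2).sum(0)
    return mean, var

rng = np.random.default_rng(0)
Xtr = np.linspace(0.0, 5.0, 20)[:, None]
ytr = np.sin(Xtr).ravel() + 0.05 * rng.standard_normal(20)

# Uncertain test input: x* ~ N(2.5, 0.3^2). Sample it, predict at each
# sample, then moment-match the mixture of Gaussian predictions.
samples = rng.normal(2.5, 0.3, size=500)[:, None]
mu_s, var_s = gp_posterior(Xtr, ytr, samples)
mean_unc = mu_s.mean()
var_unc = (var_s + mu_s**2).mean() - mean_unc**2
```

The moment-matched variance `var_unc` exceeds the plain predictive variance at the input's mean location, reflecting the extra uncertainty contributed by the input distribution; the paper's framework instead handles such input distributions analytically inside the variational bound.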

  @Article{damianou-variational15,
    title   = {Variational Inference for Latent Variables and Uncertain Inputs in Gaussian Processes},
    author  = {Andreas Damianou and Michalis K. Titsias and Neil D. Lawrence},
    journal = {Journal of Machine Learning Research},
    year    = {2016},
    volume  = {17},
    url     = {http://inverseprobability.com/publications/damianou-variational15.html}
  }
 Damianou, A., Titsias, M. K. & Lawrence, N. D. (2016). Variational Inference for Latent Variables and Uncertain Inputs in Gaussian Processes. Journal of Machine Learning Research 17.