Variational Inference for Latent Variables and Uncertain Inputs in Gaussian Processes
Journal of Machine Learning Research, 17(42):1-62, 2016.
Abstract
The Gaussian process latent variable model (GP-LVM) provides a flexible
approach for non-linear dimensionality reduction that has been widely applied. However,
the current approach for training GP-LVMs is based on maximum likelihood, where
the latent projection variables are maximised over rather than integrated out. In
this paper we present a Bayesian method for training GP-LVMs by introducing a non-standard
variational inference framework that allows us to approximately integrate out the latent
variables and subsequently train a GP-LVM by maximising an analytic lower bound
on the exact marginal likelihood. We apply this method to learn a GP-LVM from
i.i.d. observations and to learn non-linear dynamical systems where the observations
are temporally correlated. We show that a benefit of the variational Bayesian procedure
is its robustness to over-fitting and its ability to automatically select the dimensionality
of the non-linear latent space. The resulting framework is generic, flexible and
easy to extend to other settings, such as Gaussian process regression with uncertain
or partially missing inputs. We demonstrate our method on synthetic data and standard
machine learning benchmarks, as well as challenging real-world datasets, including
high-resolution video data.
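
To make the construction above concrete, here is a minimal sketch of the kind of variational lower bound being described; the notation (observations Y, latent variables X, variational distribution q(X)) is ours rather than quoted from the paper. By Jensen's inequality,

\[
\log p(Y) \;=\; \log \int p(Y \mid X)\, p(X)\, dX \;\ge\; \int q(X) \log \frac{p(Y \mid X)\, p(X)}{q(X)}\, dX \;=\; \mathcal{F}(q),
\]

so maximising \(\mathcal{F}(q)\) jointly over the variational distribution and the kernel hyperparameters trains the model while the latent variables are approximately integrated out; the paper's framework additionally introduces auxiliary inducing variables so that such a bound remains analytic under a Gaussian process likelihood.

As a usage illustration, the following Python sketch assumes the GPy library, whose BayesianGPLVM model implements a variational GP-LVM of this kind; the synthetic data, dimensions and optimiser settings are placeholders, not values from the paper.

    # A minimal sketch: assumes GPy's BayesianGPLVM; the data are synthetic placeholders.
    import numpy as np
    import GPy

    # Synthetic observations Y generated from a 2-D latent space.
    rng = np.random.RandomState(0)
    X_true = rng.randn(100, 2)
    W = rng.randn(2, 12)
    Y = X_true.dot(W) + 0.1 * rng.randn(100, 12)

    # Variational GP-LVM: the latent variables are approximately integrated out
    # and an analytic lower bound on the marginal likelihood is maximised.
    Q = 5                               # deliberately over-specified latent dimension
    kernel = GPy.kern.RBF(Q, ARD=True)  # one ARD lengthscale per latent dimension
    model = GPy.models.BayesianGPLVM(Y, input_dim=Q, kernel=kernel, num_inducing=20)
    model.optimize(messages=False, max_iters=1000)

    # Dimensions with large lengthscales (small inverse lengthscales) are
    # effectively switched off: automatic selection of the latent dimensionality.
    print(1.0 / kernel.lengthscale)

With an ARD kernel, latent dimensions that the data do not support acquire large lengthscales and are effectively switched off, which is the automatic selection of the latent dimensionality highlighted above.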