Gaussian Process Models for Visualisation of High Dimensional Data

Neil D. Lawrence
Advances in Neural Information Processing Systems 16:329–336, MIT Press, 2004.

Abstract

In this paper we introduce a new underlying probabilistic model for principal component analysis (PCA). Our formulation interprets PCA as a particular Gaussian process prior on a mapping from a latent space to the observed data-space. We show that if the prior's covariance function constrains the mappings to be linear the model is equivalent to PCA; we then extend the model by considering less restrictive covariance functions which allow non-linear mappings. This more general Gaussian process latent variable model (GPLVM) is then evaluated as an approach to the visualisation of high dimensional data for three different data-sets. Additionally, our non-linear algorithm can be *further* kernelised, leading to 'twin kernel PCA' in which a *mapping between feature spaces* occurs.
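The model the abstract describes reduces to a short recipe: place a Gaussian process prior over the mapping from latent positions X to the data Y, and maximise the resulting marginal likelihood, D/2 log|K| + 1/2 tr(K^-1 Y Y^T) terms and all, with respect to X. Below is a minimal sketch of that idea, assuming an RBF covariance with fixed hyperparameters (alpha, gamma, sigma2) and a generic optimiser with numerical gradients; the paper itself optimises the kernel parameters jointly with X (using scaled conjugate gradients). The names gplvm_neg_log_lik and fit_gplvm, and all parameter values, are illustrative choices, not the paper's code.

import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def gplvm_neg_log_lik(x_flat, Y, q, alpha=1.0, gamma=1.0, sigma2=0.01):
    """Negative GP marginal log-likelihood of Y given latent positions X:
    D/2 log|K| + 1/2 tr(K^{-1} Y Y^T) + D N/2 log(2 pi)."""
    N, D = Y.shape
    X = x_flat.reshape(N, q)
    # RBF covariance over the latent points, plus observation noise
    K = alpha * np.exp(-0.5 * gamma * cdist(X, X, 'sqeuclidean')) + sigma2 * np.eye(N)
    L = np.linalg.cholesky(K)
    log_det_K = 2.0 * np.sum(np.log(np.diag(L)))
    Kinv_Y = np.linalg.solve(L.T, np.linalg.solve(L, Y))  # K^{-1} Y via Cholesky
    return 0.5 * (D * log_det_K + np.sum(Y * Kinv_Y) + D * N * np.log(2.0 * np.pi))

def fit_gplvm(Y, q=2, iters=200):
    """Optimise the latent positions, starting from the PCA solution."""
    N = Y.shape[0]
    Yc = Y - Y.mean(axis=0)
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    X0 = Yc @ Vt[:q].T  # PCA initialisation, as in the paper
    res = minimize(gplvm_neg_log_lik, X0.ravel(), args=(Y, q),
                   method='L-BFGS-B', options={'maxiter': iters})
    return res.x.reshape(N, q)

# Example: embed 50 ten-dimensional points into a 2-D latent space.
Y = np.random.randn(50, 10)
X_latent = fit_gplvm(Y)

Note the connection the abstract draws: replace the RBF covariance above with the linear kernel k(x_i, x_j) = x_i^T x_j and the maximum of this same objective recovers the PCA solution, which is the paper's equivalence result.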

Cite this Paper


BibTeX
@InProceedings{Lawrence:gplvm03,
  title = {{G}aussian Process Models for Visualisation of High Dimensional Data},
  author = {Lawrence, Neil D.},
  booktitle = {Advances in Neural Information Processing Systems},
  pages = {329--336},
  year = {2004},
  editor = {Thrun, Sebastian and Saul, Lawrence and Schölkopf, Bernhard},
  volume = {16},
  address = {Cambridge, MA},
  publisher = {MIT Press},
  pdf = {https://proceedings.neurips.cc/paper/2003/file/9657c1fffd38824e5ab0472e022e577e-Paper.pdf},
  url = {http://inverseprobability.com/publications/lawrence-gplvm03.html},
  abstract = {In this paper we introduce a new underlying probabilistic model for principal component analysis (PCA). Our formulation interprets PCA as a particular Gaussian process prior on a mapping from a latent space to the observed data-space. We show that if the prior's covariance function constrains the mappings to be linear the model is equivalent to PCA; we then extend the model by considering less restrictive covariance functions which allow non-linear mappings. This more general Gaussian process latent variable model (GPLVM) is then evaluated as an approach to the visualisation of high dimensional data for three different data-sets. Additionally, our non-linear algorithm can be *further* kernelised, leading to 'twin kernel PCA' in which a *mapping between feature spaces* occurs.}
}
Endnote
%0 Conference Paper
%T Gaussian Process Models for Visualisation of High Dimensional Data
%A Neil D. Lawrence
%B Advances in Neural Information Processing Systems
%D 2004
%E Sebastian Thrun
%E Lawrence Saul
%E Bernhard Schölkopf
%F Lawrence:gplvm03
%I MIT Press
%P 329--336
%U http://inverseprobability.com/publications/lawrence-gplvm03.html
%V 16
%X In this paper we introduce a new underlying probabilistic model for principal component analysis (PCA). Our formulation interprets PCA as a particular Gaussian process prior on a mapping from a latent space to the observed data-space. We show that if the prior's covariance function constrains the mappings to be linear the model is equivalent to PCA; we then extend the model by considering less restrictive covariance functions which allow non-linear mappings. This more general Gaussian process latent variable model (GPLVM) is then evaluated as an approach to the visualisation of high dimensional data for three different data-sets. Additionally, our non-linear algorithm can be *further* kernelised, leading to 'twin kernel PCA' in which a *mapping between feature spaces* occurs.
RIS
TY  - CPAPER
TI  - Gaussian Process Models for Visualisation of High Dimensional Data
AU  - Neil D. Lawrence
BT  - Advances in Neural Information Processing Systems
DA  - 2004/01/01
ED  - Sebastian Thrun
ED  - Lawrence Saul
ED  - Bernhard Schölkopf
ID  - Lawrence:gplvm03
PB  - MIT Press
VL  - 16
SP  - 329
EP  - 336
L1  - https://proceedings.neurips.cc/paper/2003/file/9657c1fffd38824e5ab0472e022e577e-Paper.pdf
UR  - http://inverseprobability.com/publications/lawrence-gplvm03.html
AB  - In this paper we introduce a new underlying probabilistic model for principal component analysis (PCA). Our formulation interprets PCA as a particular Gaussian process prior on a mapping from a latent space to the observed data-space. We show that if the prior's covariance function constrains the mappings to be linear the model is equivalent to PCA; we then extend the model by considering less restrictive covariance functions which allow non-linear mappings. This more general Gaussian process latent variable model (GPLVM) is then evaluated as an approach to the visualisation of high dimensional data for three different data-sets. Additionally, our non-linear algorithm can be *further* kernelised, leading to 'twin kernel PCA' in which a *mapping between feature spaces* occurs.
ER  -
APA
Lawrence, N. D. (2004). Gaussian Process Models for Visualisation of High Dimensional Data. Advances in Neural Information Processing Systems, 16, 329–336. Available from http://inverseprobability.com/publications/lawrence-gplvm03.html.
