Page 1791: in eqn (10), a factor of one half is missing from both terms on the right hand side. Thanks to: Andreas Geiger

Page 1789: on the second line, the exponent of $(\lambda_j - \beta^{-1})$ should be 1/2, not -1/2. Thanks to: Mathieu Saltzman

Page 1812: at the end of Appendix A, instead of `... for any symmetric matrix $\mathbf{S}$ ...' the line should read `... for any positive definite symmetric matrix $\mathbf{S}$.'

Page 1812: after eqn (25), in the last line of the paragraph, instead of `... (for kernel PCA) in where ...' the line should read `... (for kernel PCA) in which ...'.

Abstract

Summarising a high dimensional data set with a low dimensional embedding is a standard approach for exploring its structure. In this paper we provide an overview of some existing techniques for discovering such embeddings. We then introduce a novel probabilistic interpretation of principal component analysis (PCA) that we term dual probabilistic PCA (DPPCA). The DPPCA model has the additional advantage that the linear mappings from the embedded space can easily be non-linearised through Gaussian processes. We refer to this model as a Gaussian process latent variable model (GP-LVM). Through analysis of the GP-LVM objective function, we relate the model to popular spectral techniques such as kernel PCA and multidimensional scaling. We then review a practical algorithm for GP-LVMs in the context of large data sets and develop it to also handle discrete valued data and missing attributes. We demonstrate the model on a range of real-world and artificially generated data sets.
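As background for the embedding techniques the abstract surveys, here is a minimal sketch of linear dimensionality reduction with standard PCA (plain NumPy on a hypothetical toy data matrix; this is the classical baseline the paper generalises, not the GP-LVM itself):

```python
import numpy as np

# Toy high-dimensional data set: 100 points in 5 dimensions
# (synthetic, for illustration only).
rng = np.random.default_rng(0)
Y = rng.normal(size=(100, 5))

# Centre the data, then use the SVD to obtain the principal axes:
# the rows of Vt are the eigenvectors of the sample covariance,
# ordered by decreasing singular value.
Yc = Y - Y.mean(axis=0)
U, S, Vt = np.linalg.svd(Yc, full_matrices=False)

# Two-dimensional embedding: project onto the top two principal axes.
X = Yc @ Vt[:2].T
print(X.shape)  # (100, 2)
```

In the paper's dual probabilistic view, these latent coordinates $\mathbf{X}$ become the parameters being optimised, and replacing the linear map with a Gaussian process yields the GP-LVM.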

@Article{lawrence-pnpca05,
title = {Probabilistic Non-linear Principal Component Analysis with Gaussian Process Latent Variable Models},
journal = {Journal of Machine Learning Research},
author = {Neil D. Lawrence},
pages = {1783--1816},
year = {2005},
volume = {6},
month = {11},
edit = {https://github.com/lawrennd//publications/edit/gh-pages/_posts/2005-11-01-lawrence-pnpca05.md},
url = {http://inverseprobability.com/publications/lawrence-pnpca05.html},
abstract = {Summarising a high dimensional data set with a low dimensional embedding is a standard approach for exploring its structure. In this paper we provide an overview of some existing techniques for discovering such embeddings. We then introduce a novel probabilistic interpretation of principal component analysis (PCA) that we term dual probabilistic PCA (DPPCA). The DPPCA model has the additional advantage that the linear mappings from the embedded space can easily be non-linearised through Gaussian processes. We refer to this model as a Gaussian process latent variable model (GP-LVM). Through analysis of the GP-LVM objective function, we relate the model to popular spectral techniques such as kernel PCA and multidimensional scaling. We then review a practical algorithm for GP-LVMs in the context of large data sets and develop it to also handle discrete valued data and missing attributes. We demonstrate the model on a range of real-world and artificially generated data sets.},
key = {Lawrence-pnpca05},
group = {shefml,gplvm,ppca,pca,dimensional reduction}
}

%T Probabilistic Non-linear Principal Component Analysis with Gaussian Process Latent Variable Models
%A Neil D. Lawrence
%C Journal of Machine Learning Research
%D 2005
%F lawrence-pnpca05
%J Journal of Machine Learning Research
%P 1783--1816
%U http://inverseprobability.com/publications/lawrence-pnpca05.html
%V 6
%X Summarising a high dimensional data set with a low dimensional embedding is a standard approach for exploring its structure. In this paper we provide an overview of some existing techniques for discovering such embeddings. We then introduce a novel probabilistic interpretation of principal component analysis (PCA) that we term dual probabilistic PCA (DPPCA). The DPPCA model has the additional advantage that the linear mappings from the embedded space can easily be non-linearised through Gaussian processes. We refer to this model as a Gaussian process latent variable model (GP-LVM). Through analysis of the GP-LVM objective function, we relate the model to popular spectral techniques such as kernel PCA and multidimensional scaling. We then review a practical algorithm for GP-LVMs in the context of large data sets and develop it to also handle discrete valued data and missing attributes. We demonstrate the model on a range of real-world and artificially generated data sets.

TY - JOUR
TI - Probabilistic Non-linear Principal Component Analysis with Gaussian Process Latent Variable Models
AU - Neil D. Lawrence
PY - 2005/11/01
DA - 2005/11/01
ID - lawrence-pnpca05
SP - 1783
EP - 1816
UR - http://inverseprobability.com/publications/lawrence-pnpca05.html
AB - Summarising a high dimensional data set with a low dimensional embedding is a standard approach for exploring its structure. In this paper we provide an overview of some existing techniques for discovering such embeddings. We then introduce a novel probabilistic interpretation of principal component analysis (PCA) that we term dual probabilistic PCA (DPPCA). The DPPCA model has the additional advantage that the linear mappings from the embedded space can easily be non-linearised through Gaussian processes. We refer to this model as a Gaussian process latent variable model (GP-LVM). Through analysis of the GP-LVM objective function, we relate the model to popular spectral techniques such as kernel PCA and multidimensional scaling. We then review a practical algorithm for GP-LVMs in the context of large data sets and develop it to also handle discrete valued data and missing attributes. We demonstrate the model on a range of real-world and artificially generated data sets.
ER -

Lawrence, N.D. (2005). Probabilistic Non-linear Principal Component Analysis with Gaussian Process Latent Variable Models. Journal of Machine Learning Research 6:1783-1816