Extensions of the Informative Vector Machine

Neil D. Lawrence, University of Sheffield
John C. Platt, Google
Michael I. Jordan, UC Berkeley

In Deterministic and Statistical Methods in Machine Learning, Lecture Notes in Artificial Intelligence 3635, pp. 56–87

Abstract

The informative vector machine (IVM) is a practical method for Gaussian process regression and classification. The IVM produces a sparse approximation to a Gaussian process by combining assumed density filtering with a heuristic for choosing points based on minimizing posterior entropy. This paper extends IVM in several ways. First, we propose a novel noise model that allows the IVM to be applied to a mixture of labeled and unlabeled data. Second, we use IVM on a block-diagonal covariance matrix, for “learning to learn” from related tasks. Third, we modify the IVM to incorporate prior knowledge from known invariances. All of these extensions are tested on artificial and real data.
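The extensions above all build on the same core loop: assumed density filtering combined with greedy selection of the point that most reduces posterior entropy. As a concrete illustration, here is a minimal numpy sketch of that loop for the plain Gaussian-noise regression case. The function names (`rbf_kernel`, `ivm_select`), the kernel choice, and the toy data are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the IVM selection loop for GP regression with Gaussian
# noise: ADF site updates plus greedy entropy-based point selection.
import numpy as np

def rbf_kernel(X, Z, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between the rows of X and Z."""
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq / lengthscale ** 2)

def ivm_select(X, y, d, noise_var=0.1):
    """Greedily pick d active points by maximum posterior-entropy reduction.

    Maintains the approximate posterior mean mu and marginal variances zeta
    with rank-one (assumed density filtering) updates, so each inclusion
    costs O(d n) rather than the O(n^3) of the full GP.
    """
    n = X.shape[0]
    K = rbf_kernel(X, X)
    mu = np.zeros(n)            # approximate posterior mean
    zeta = np.diag(K).copy()    # approximate posterior marginal variances
    M = np.zeros((0, n))        # low-rank factor: posterior cov = K - M.T @ M
    active = []
    for _ in range(d):
        nu = 1.0 / (noise_var + zeta)             # site precisions (Gaussian noise)
        delta_H = -0.5 * np.log(1.0 - nu * zeta)  # entropy reduction per point
        delta_H[active] = -np.inf                 # exclude points already chosen
        j = int(np.argmax(delta_H))
        g = nu[j] * (y[j] - mu[j])                # gradient of the site update
        s = K[:, j] - M.T @ M[:, j]               # j-th posterior covariance column
        mu = mu + g * s                           # ADF mean update
        zeta = zeta - nu[j] * s ** 2              # ADF variance update
        M = np.vstack([M, np.sqrt(nu[j]) * s])
        active.append(j)
    return active, mu, zeta

# Toy usage: summarise a noisy sine with 20 of 200 points in the active set.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
active, mu, zeta = ivm_select(X, y, d=20)
```

Note that for Gaussian noise the entropy score is monotone in the marginal variance, so selection reduces to picking the most uncertain point; the non-Gaussian noise models the paper introduces are where the score and the site updates depart from this simple case.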


@InCollection{lawrence-extensions05,
  title = 	 {Extensions of the Informative Vector Machine},
  author = 	 {Neil D. Lawrence and John C. Platt and Michael I. Jordan},
  booktitle = 	 {Deterministic and Statistical Methods in Machine Learning},
  pages = 	 {56--87},
  year = 	 {2005},
  editor = 	 {Joab Winkler and Neil D. Lawrence and Mahesan Niranjan},
  volume = 	 {3635},
  series = 	 {Lecture Notes in Artificial Intelligence},
  address = 	 {Berlin},
  publisher = 	 {Springer-Verlag},
  url =  	 {http://inverseprobability.com/publications/lawrence-extensions05.html},
  abstract = 	 {The informative vector machine (IVM) is a practical method for Gaussian process regression and classification. The IVM produces a sparse approximation to a Gaussian process by combining assumed density filtering with a heuristic for choosing points based on minimizing posterior entropy. This paper extends IVM in several ways. First, we propose a novel noise model that allows the IVM to be applied to a mixture of labeled and unlabeled data. Second, we use IVM on a block-diagonal covariance matrix, for “learning to learn” from related tasks. Third, we modify the IVM to incorporate prior knowledge from known invariances. All of these extensions are tested on artificial and real data.},
  crossref =  {Winkler:smlw04},
  key = 	 {Lawrence:extensions05},
  linkpsgz =  {ftp://ftp.dcs.shef.ac.uk/home/neil/ivmdev.ps.gz},
  linksoftware = {http://inverseprobability.com/ivm/},
  group = 	 {shefml,gp,spgp}
}
%T Extensions of the Informative Vector Machine
%A Neil D. Lawrence and John C. Platt and Michael I. Jordan
%B Deterministic and Statistical Methods in Machine Learning
%C Berlin
%D 2005
%E Joab Winkler and Neil D. Lawrence and Mahesan Niranjan
%F lawrence-extensions05
%I Springer-Verlag
%P 56--87
%U http://inverseprobability.com/publications/lawrence-extensions05.html
%V 3635
%X The informative vector machine (IVM) is a practical method for Gaussian process regression and classification. The IVM produces a sparse approximation to a Gaussian process by combining assumed density filtering with a heuristic for choosing points based on minimizing posterior entropy. This paper extends IVM in several ways. First, we propose a novel noise model that allows the IVM to be applied to a mixture of labeled and unlabeled data. Second, we use IVM on a block-diagonal covariance matrix, for “learning to learn” from related tasks. Third, we modify the IVM to incorporate prior knowledge from known invariances. All of these extensions are tested on artificial and real data.
TY  - CPAPER
TI  - Extensions of the Informative Vector Machine
AU  - Neil D. Lawrence
AU  - John C. Platt
AU  - Michael I. Jordan
BT  - Deterministic and Statistical Methods in Machine Learning
PY  - 2005/01/01
DA  - 2005/01/01
ED  - Joab Winkler
ED  - Neil D. Lawrence
ED  - Mahesan Niranjan
ID  - lawrence-extensions05
PB  - Springer-Verlag
SP  - 56
EP  - 87
UR  - http://inverseprobability.com/publications/lawrence-extensions05.html
AB  - The informative vector machine (IVM) is a practical method for Gaussian process regression and classification. The IVM produces a sparse approximation to a Gaussian process by combining assumed density filtering with a heuristic for choosing points based on minimizing posterior entropy. This paper extends IVM in several ways. First, we propose a novel noise model that allows the IVM to be applied to a mixture of labeled and unlabeled data. Second, we use IVM on a block-diagonal covariance matrix, for “learning to learn” from related tasks. Third, we modify the IVM to incorporate prior knowledge from known invariances. All of these extensions are tested on artificial and real data.
ER  -

Lawrence, N.D., Platt, J.C. & Jordan, M.I. (2005). Extensions of the Informative Vector Machine. Deterministic and Statistical Methods in Machine Learning 3635:56–87.