Extensions of the Informative Vector Machine

Neil D. Lawrence, John C. Platt, Michael I. Jordan
In Deterministic and Statistical Methods in Machine Learning, Lecture Notes in Artificial Intelligence 3635:56–87, Springer-Verlag, Berlin, 2005.

Abstract

The informative vector machine (IVM) is a practical method for Gaussian process regression and classification. The IVM produces a sparse approximation to a Gaussian process by combining assumed density filtering with a heuristic for choosing points based on minimizing posterior entropy. This paper extends IVM in several ways. First, we propose a novel noise model that allows the IVM to be applied to a mixture of labeled and unlabeled data. Second, we use IVM on a block-diagonal covariance matrix, for “learning to learn” from related tasks. Third, we modify the IVM to incorporate prior knowledge from known invariances. All of these extensions are tested on artificial and real data.
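The selection heuristic described above can be made concrete for the Gaussian-noise regression case, where the assumed-density-filtering updates are exact: each candidate point's differential-entropy reduction is 0.5*log(1 + v_i/sigma^2), with v_i the current posterior variance, and including a point triggers a rank-1 downdate of the posterior covariance. The sketch below illustrates that greedy loop only; it is not the authors' code, `rbf_kernel` and `ivm_select` are hypothetical names, and the classification case (which needs moment matching of a non-Gaussian likelihood) is omitted.

```python
import numpy as np

def rbf_kernel(X, Z, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel; a common default, not the paper's only choice.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def ivm_select(K, noise_var, d):
    """Greedy IVM-style active-set selection, Gaussian-noise regression case.

    K         : full n x n prior covariance (kernel) matrix
    noise_var : Gaussian noise variance sigma^2
    d         : number of points to keep in the sparse approximation
    Returns the active-set indices in selection order.
    """
    n = K.shape[0]
    diag_a = np.diag(K).copy()      # current posterior variances
    M = np.zeros((d, n))            # factor with A = K - M^T M
    active = []
    remaining = np.ones(n, dtype=bool)
    for k in range(d):
        # Entropy reduction for each candidate; selected points are masked out.
        delta_h = np.where(remaining,
                           0.5 * np.log1p(diag_a / noise_var),
                           -np.inf)
        i = int(np.argmax(delta_h))
        # Current posterior covariance column for point i.
        a_i = K[:, i] - M[:k].T @ M[:k, i]
        nu = 1.0 / (noise_var + diag_a[i])
        M[k] = np.sqrt(nu) * a_i    # rank-1 downdate of the covariance
        diag_a -= nu * a_i ** 2     # posterior variances shrink everywhere
        active.append(i)
        remaining[i] = False
    return active

if __name__ == "__main__":
    X = np.random.randn(200, 2)
    K = rbf_kernel(X, X)
    print(ivm_select(K, noise_var=0.1, d=20))
```

Because the entropy reduction is monotone in the posterior variance when the noise variance is shared, the argmax here simply picks the most uncertain remaining point; a non-Gaussian noise model (as in the semi-supervised extension) would instead weight each candidate by its site parameters from assumed density filtering.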

Cite this Paper


BibTeX
@InCollection{Lawrence:extensions05,
  title     = {Extensions of the Informative Vector Machine},
  author    = {Lawrence, Neil D. and Platt, John C. and Jordan, Michael I.},
  booktitle = {Deterministic and Statistical Methods in Machine Learning},
  pages     = {56--87},
  year      = {2005},
  editor    = {Winkler, Joab and Lawrence, Neil D. and Niranjan, Mahesan},
  volume    = {3635},
  series    = {Lecture Notes in Artificial Intelligence},
  address   = {Berlin},
  publisher = {Springer-Verlag},
  url       = {http://inverseprobability.com/publications/lawrence-extensions05.html},
  abstract  = {The informative vector machine (IVM) is a practical method for Gaussian process regression and classification. The IVM produces a sparse approximation to a Gaussian process by combining assumed density filtering with a heuristic for choosing points based on minimizing posterior entropy. This paper extends IVM in several ways. First, we propose a novel noise model that allows the IVM to be applied to a mixture of labeled and unlabeled data. Second, we use IVM on a block-diagonal covariance matrix, for “learning to learn” from related tasks. Third, we modify the IVM to incorporate prior knowledge from known invariances. All of these extensions are tested on artificial and real data.}
}
Endnote
%0 Generic
%T Extensions of the Informative Vector Machine
%A Neil D. Lawrence
%A John C. Platt
%A Michael I. Jordan
%B Deterministic and Statistical Methods in Machine Learning
%C Lecture Notes in Artificial Intelligence
%D 2005
%E Joab Winkler
%E Neil D. Lawrence
%E Mahesan Niranjan
%F Lawrence:extensions05
%I Springer-Verlag
%P 56--87
%U http://inverseprobability.com/publications/lawrence-extensions05.html
%V 3635
%X The informative vector machine (IVM) is a practical method for Gaussian process regression and classification. The IVM produces a sparse approximation to a Gaussian process by combining assumed density filtering with a heuristic for choosing points based on minimizing posterior entropy. This paper extends IVM in several ways. First, we propose a novel noise model that allows the IVM to be applied to a mixture of labeled and unlabeled data. Second, we use IVM on a block-diagonal covariance matrix, for “learning to learn” from related tasks. Third, we modify the IVM to incorporate prior knowledge from known invariances. All of these extensions are tested on artificial and real data.
RIS
TY - GEN
TI - Extensions of the Informative Vector Machine
AU - Neil D. Lawrence
AU - John C. Platt
AU - Michael I. Jordan
BT - Deterministic and Statistical Methods in Machine Learning
DA - 2005/01/01
ED - Joab Winkler
ED - Neil D. Lawrence
ED - Mahesan Niranjan
ID - Lawrence:extensions05
PB - Springer-Verlag
DP - Lecture Notes in Artificial Intelligence
VL - 3635
SP - 56
EP - 87
UR - http://inverseprobability.com/publications/lawrence-extensions05.html
AB - The informative vector machine (IVM) is a practical method for Gaussian process regression and classification. The IVM produces a sparse approximation to a Gaussian process by combining assumed density filtering with a heuristic for choosing points based on minimizing posterior entropy. This paper extends IVM in several ways. First, we propose a novel noise model that allows the IVM to be applied to a mixture of labeled and unlabeled data. Second, we use IVM on a block-diagonal covariance matrix, for “learning to learn” from related tasks. Third, we modify the IVM to incorporate prior knowledge from known invariances. All of these extensions are tested on artificial and real data.
ER -
APA
Lawrence, N. D., Platt, J. C., & Jordan, M. I. (2005). Extensions of the Informative Vector Machine. In J. Winkler, N. D. Lawrence, & M. Niranjan (Eds.), Deterministic and Statistical Methods in Machine Learning (Lecture Notes in Artificial Intelligence, Vol. 3635, pp. 56–87). Berlin: Springer-Verlag. Available from http://inverseprobability.com/publications/lawrence-extensions05.html.
