Fast Sparse Gaussian Process Methods: The Informative Vector Machine

Neil D. Lawrence, Matthias Seeger, Ralf Herbrich
Advances in Neural Information Processing Systems 15:625-632, MIT Press, 2003.

Abstract

We present a framework for sparse Gaussian process (GP) methods which uses forward selection with criteria based on information-theoretic principles, previously suggested for active learning. In contrast to most previous work on sparse GPs, our goal is not only to learn sparse predictors (which can be evaluated in $O(d)$ rather than $O(n)$, $d<n$, $n$ the number of training points), but also to perform training under strong restrictions on time and memory requirements. The scaling of our method is at most $O(n \cdot d^2)$, and in large real-world classification experiments we show that it can match the prediction performance of the popular support vector machine (SVM), yet it can be significantly faster in training. In contrast to the SVM, our approximation produces estimates of predictive probabilities ("error bars"), allows for Bayesian model selection and is less complex in implementation.
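The forward-selection idea the abstract describes can be sketched for the regression case: greedily grow an active set of $d$ points, at each step including the point whose inclusion most reduces the differential entropy of the posterior, and updating posterior variances with rank-one downdates. This is a minimal illustrative sketch, not the paper's implementation; the kernel, `lengthscale`, `noise` value and function names are assumptions.

```python
import numpy as np

def rbf_kernel(X, Z, lengthscale=1.0):
    """Squared-exponential kernel matrix between row-sets X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def ivm_select(X, d, noise=0.1, lengthscale=1.0):
    """Greedy forward selection of an active set of size d (regression case).

    The entropy reduction for including point i is 0.5*log(1 + var_i/noise),
    which is monotone in the current posterior variance var_i, so each step
    picks the highest-variance unselected point and then downdates all
    variances with a rank-one update.
    """
    n = X.shape[0]
    K = rbf_kernel(X, X, lengthscale)
    var = np.diag(K).copy()      # current posterior variances at all points
    M = np.zeros((d, n))         # rows accumulate the low-rank representation
    active = []
    for j in range(d):
        scores = 0.5 * np.log1p(var / noise)   # differential-entropy scores
        scores[active] = -np.inf               # never re-select a point
        i = int(np.argmax(scores))
        active.append(i)
        # posterior covariance column for point i given the current active set
        k_i = K[i] - M[:j].T @ M[:j, i]
        m = k_i / np.sqrt(var[i] + noise)
        M[j] = m
        var = var - m**2                       # rank-one variance downdate
    return active
```

The $O(n \cdot d^2)$ scaling quoted in the abstract is visible here: each of the $d$ inclusion steps touches all $n$ points with an $O(d)$ inner product per point.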

Cite this Paper


BibTeX
@InProceedings{Lawrence:ivm02,
  title = {Fast Sparse Gaussian Process Methods: The Informative Vector Machine},
  author = {Lawrence, Neil D. and Seeger, Matthias and Herbrich, Ralf},
  booktitle = {Advances in Neural Information Processing Systems},
  pages = {625--632},
  year = {2003},
  editor = {Becker, Sue and Thrun, Sebastian and Obermayer, Klaus},
  volume = {15},
  address = {Cambridge, MA},
  publisher = {MIT Press},
  url = {http://inverseprobability.com/publications/lawrence-ivm02.html}
}
Endnote
%0 Conference Paper
%T Fast Sparse Gaussian Process Methods: The Informative Vector Machine
%A Neil D. Lawrence
%A Matthias Seeger
%A Ralf Herbrich
%B Advances in Neural Information Processing Systems
%D 2003
%E Sue Becker
%E Sebastian Thrun
%E Klaus Obermayer
%F Lawrence:ivm02
%I MIT Press
%P 625--632
%U http://inverseprobability.com/publications/lawrence-ivm02.html
%V 15
RIS
TY - CPAPER
TI - Fast Sparse Gaussian Process Methods: The Informative Vector Machine
AU - Neil D. Lawrence
AU - Matthias Seeger
AU - Ralf Herbrich
BT - Advances in Neural Information Processing Systems
DA - 2003/01/01
ED - Sue Becker
ED - Sebastian Thrun
ED - Klaus Obermayer
ID - Lawrence:ivm02
PB - MIT Press
VL - 15
SP - 625
EP - 632
UR - http://inverseprobability.com/publications/lawrence-ivm02.html
ER -
APA
Lawrence, N.D., Seeger, M. & Herbrich, R. (2003). Fast Sparse Gaussian Process Methods: The Informative Vector Machine. Advances in Neural Information Processing Systems 15:625-632. Available from http://inverseprobability.com/publications/lawrence-ivm02.html.