Fast Forward Selection to Speed Up Sparse Gaussian Process Regression

Matthias Seeger, Christopher K. I. Williams, Neil D. Lawrence
Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, 2003.

Abstract

We present a method for the sparse greedy approximation of Bayesian Gaussian process regression, featuring a novel heuristic for very fast forward selection. Our method is essentially as fast as an equivalent one which selects the “support” patterns at random, yet it can outperform random selection on hard curve fitting tasks. More importantly, it leads to a sufficiently stable approximation of the log marginal likelihood of the training data, which can be optimised to adjust a large number of hyperparameters automatically. We demonstrate the model selection capabilities of the algorithm in a range of experiments. In line with the development of our method, we present a simple view on sparse approximations for GP models and their underlying assumptions and show relations to other methods.

Cite this Paper


BibTeX
@InProceedings{Seeger:fast03,
  title = {Fast Forward Selection to Speed Up Sparse Gaussian Process Regression},
  author = {Seeger, Matthias and Williams, Christopher K. I. and Lawrence, Neil D.},
  booktitle = {Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics},
  year = {2003},
  editor = {Bishop, Christopher M. and Frey, Brendan J.},
  address = {Key West, FL},
  url = {http://inverseprobability.com/publications/seeger-fast03.html},
  abstract = {We present a method for the sparse greedy approximation of Bayesian Gaussian process regression, featuring a novel heuristic for very fast forward selection. Our method is essentially as fast as an equivalent one which selects the ``support'' patterns at random, yet it can outperform random selection on hard curve fitting tasks. More importantly, it leads to a sufficiently stable approximation of the log marginal likelihood of the training data, which can be optimised to adjust a large number of hyperparameters automatically. We demonstrate the model selection capabilities of the algorithm in a range of experiments. In line with the development of our method, we present a simple view on sparse approximations for GP models and their underlying assumptions and show relations to other methods.}
}
Endnote
%0 Conference Paper
%T Fast Forward Selection to Speed Up Sparse Gaussian Process Regression
%A Matthias Seeger
%A Christopher K. I. Williams
%A Neil D. Lawrence
%B Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics
%D 2003
%E Christopher M. Bishop
%E Brendan J. Frey
%F Seeger:fast03
%U http://inverseprobability.com/publications/seeger-fast03.html
%X We present a method for the sparse greedy approximation of Bayesian Gaussian process regression, featuring a novel heuristic for very fast forward selection. Our method is essentially as fast as an equivalent one which selects the “support” patterns at random, yet it can outperform random selection on hard curve fitting tasks. More importantly, it leads to a sufficiently stable approximation of the log marginal likelihood of the training data, which can be optimised to adjust a large number of hyperparameters automatically. We demonstrate the model selection capabilities of the algorithm in a range of experiments. In line with the development of our method, we present a simple view on sparse approximations for GP models and their underlying assumptions and show relations to other methods.
RIS
TY  - CPAPER
TI  - Fast Forward Selection to Speed Up Sparse Gaussian Process Regression
AU  - Matthias Seeger
AU  - Christopher K. I. Williams
AU  - Neil D. Lawrence
BT  - Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics
DA  - 2003/01/01
ED  - Christopher M. Bishop
ED  - Brendan J. Frey
ID  - Seeger:fast03
UR  - http://inverseprobability.com/publications/seeger-fast03.html
AB  - We present a method for the sparse greedy approximation of Bayesian Gaussian process regression, featuring a novel heuristic for very fast forward selection. Our method is essentially as fast as an equivalent one which selects the “support” patterns at random, yet it can outperform random selection on hard curve fitting tasks. More importantly, it leads to a sufficiently stable approximation of the log marginal likelihood of the training data, which can be optimised to adjust a large number of hyperparameters automatically. We demonstrate the model selection capabilities of the algorithm in a range of experiments. In line with the development of our method, we present a simple view on sparse approximations for GP models and their underlying assumptions and show relations to other methods.
ER  -
APA
Seeger, M., Williams, C. K. I., & Lawrence, N. D. (2003). Fast Forward Selection to Speed Up Sparse Gaussian Process Regression. Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics. Available from http://inverseprobability.com/publications/seeger-fast03.html.
