
Fast Sparse Gaussian Process Methods: The Informative Vector Machine

Neil D. Lawrence, Matthias Seeger, Ralf Herbrich
Advances in Neural Information Processing Systems, MIT Press 15:625-632, 2003.

Abstract

We present a framework for sparse Gaussian process (GP) methods which uses forward selection with criteria based on information-theoretic principles, previously suggested for active learning. In contrast to most previous work on sparse GPs, our goal is not only to learn sparse predictors (which can be evaluated in $O(d)$ rather than $O(n)$, $d \ll n$, $n$ the number of training points), but also to perform training under strong restrictions on time and memory requirements. The scaling of our method is at most $O(nd^2)$, and in large real-world classification experiments we show that it can match the prediction performance of the popular support vector machine (SVM), yet it requires only a fraction of the training time. In contrast to the SVM, our approximation produces estimates of predictive probabilities (‘error bars’), allows for Bayesian model selection and is simpler to implement.
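The core idea of the abstract, greedy forward selection of an active set of $d$ points scored by entropy reduction, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a GP regression setting with homoscedastic Gaussian noise (where maximal entropy reduction coincides with maximal current posterior variance), a squared-exponential kernel, and hypothetical function names `rbf_kernel` and `ivm_select`. The low-rank update keeps the cost at $O(nd^2)$ overall, matching the scaling stated above.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    # Squared-exponential kernel matrix between rows of X and Y.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale**2)

def ivm_select(K, d, noise=0.1):
    """Greedily select d active points from an n x n kernel matrix K.

    Each step includes the point with the largest entropy reduction
    -0.5 * log(1 - var_i / (var_i + noise)); with homoscedastic noise
    this is simply the point of largest current posterior variance.
    """
    n = K.shape[0]
    var = np.diag(K).copy()      # current posterior marginal variances
    M = np.zeros((d, n))         # low-rank factor: var = diag(K) - sum_j M[j]**2
    active = []
    for j in range(d):
        i = int(np.argmax(var))  # max variance == max entropy reduction here
        active.append(i)
        # Rank-one update of all n posterior variances, O(dn) per step:
        # m_k = posterior_cov(k, i) / sqrt(var_i + noise)
        m = (K[i, :] - M[:j].T @ M[:j, i]) / np.sqrt(var[i] + noise)
        M[j] = m
        var = var - m**2
    return active, M
```

Prediction with the resulting model touches only the $d$ active points, which is where the $O(d)$ evaluation cost in the abstract comes from.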
