# Large Scale Learning in Gaussian Processes

**Large-Scale Kernel Learning Workshop @ICML2015** on Jul 11, 2015 [pdf]

#### Abstract

Gaussian process models view the kernel matrix as representing the covariance between data points. In a Gaussian process, the RKHS function is the mean of a posterior distribution over possible functions. Gaussian processes sustain uncertainty around this mean, and this leads to a posterior *covariance* function (or kernel) associated with the process. A complication for large-scale Gaussian process models is the need to sustain the estimate of this covariance function. In this talk we'll review how this can be done probabilistically through a variational approach known as 'variational compression'.
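The abstract does not spell out the mechanics of variational compression, but the family of methods it builds on compresses the N training points into M inducing points while retaining a posterior covariance. As a rough, hedged illustration (not the talk's actual method), here is a minimal NumPy sketch of a Titsias-style sparse variational GP posterior, assuming an RBF kernel and a fixed noise variance; all function and variable names here are illustrative:

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    # Exponentiated quadratic (RBF) kernel between two sets of points.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def sparse_gp_posterior(X, y, Z, Xs, noise=0.01):
    """Sparse variational GP posterior at test points Xs, compressing
    the N training points (X, y) through M inducing points Z."""
    Kuu = rbf(Z, Z) + 1e-8 * np.eye(len(Z))   # M x M, jitter for stability
    Kuf = rbf(Z, X)                           # M x N
    Kus = rbf(Z, Xs)                          # M x N*
    Kss = rbf(Xs, Xs)                         # N* x N*
    # Sigma = Kuu + Kuf Kfu / noise  (the compressed "information" matrix)
    Sigma = Kuu + Kuf @ Kuf.T / noise
    # Posterior mean: Kus^T Sigma^{-1} Kuf y / noise
    mu = Kus.T @ np.linalg.solve(Sigma, Kuf @ y) / noise
    # Posterior covariance retains uncertainty about the latent function.
    cov = (Kss
           - Kus.T @ np.linalg.solve(Kuu, Kus)
           + Kus.T @ np.linalg.solve(Sigma, Kus))
    return mu, cov

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
Z = np.linspace(-3, 3, 10)[:, None]           # 10 inducing points for 50 data
mu, cov = sparse_gp_posterior(X, y, Z, X)
```

The key point the abstract makes is visible here: the posterior is summarized through the M x M matrix `Sigma` rather than the full N x N kernel matrix, and the returned `cov` keeps an explicit estimate of the posterior covariance rather than a single point estimate.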