Gaussian Process Models with Parallelization and GPU acceleration

Zhenwen Dai, Andreas Damianou, James Hensman, Neil D. Lawrence, 2014.

Abstract

In this work, we present an extension of Gaussian process (GP) models with sophisticated parallelization and GPU acceleration. The parallelization scheme arises naturally from the modular computational structure w.r.t. datapoints in the sparse Gaussian process formulation. Additionally, the computational bottleneck is implemented with GPU acceleration for a further speed-up. Combining both techniques makes it possible to apply Gaussian process models to millions of datapoints. The efficiency of our algorithm is demonstrated with a synthetic dataset. Its source code has been integrated into our popular software library GPy.
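The abstract's central point, that the sparse GP formulation is modular w.r.t. datapoints, can be illustrated with a short sketch: the statistics entering the variational lower bound are sums of independent per-datapoint terms, so each worker can process a chunk of data and the partial results are simply added (a map-reduce pattern). The sketch below is not taken from the paper or from GPy's implementation; the RBF kernel, the chunking scheme, and the particular statistics shown are illustrative assumptions for the simpler fixed-input sparse GP case.

```python
# Minimal sketch (assumptions noted above, not the authors' implementation):
# sparse GP statistics are additive over datapoints, so they can be computed
# on separate data chunks (cores, nodes, or GPUs) and summed afterwards.
import numpy as np


def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between rows of A and B."""
    sq_dist = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq_dist / lengthscale ** 2)


def chunk_statistics(X_chunk, y_chunk, Z):
    """Per-chunk sufficient statistics used by the sparse GP bound."""
    Knm = rbf_kernel(X_chunk, Z)                   # cross-covariance for this chunk
    psi0 = np.trace(rbf_kernel(X_chunk, X_chunk))  # sum_n k(x_n, x_n)
    psi1y = Knm.T @ y_chunk                        # sum_n k(Z, x_n) y_n
    psi2 = Knm.T @ Knm                             # sum_n k(Z, x_n) k(x_n, Z)
    return psi0, psi1y, psi2


rng = np.random.default_rng(0)
N, M, D = 1000, 20, 2
X = rng.standard_normal((N, D))
y = rng.standard_normal((N, 1))
Z = rng.standard_normal((M, D))                    # inducing inputs

# "Map" step: each chunk could live on a different core, node or GPU.
chunks = np.array_split(np.arange(N), 4)
partials = [chunk_statistics(X[idx], y[idx], Z) for idx in chunks]

# "Reduce" step: the statistics are additive across chunks.
psi0 = sum(p[0] for p in partials)
psi1y = sum(p[1] for p in partials)
psi2 = sum(p[2] for p in partials)

# Check against the single-machine computation.
psi0_ref, psi1y_ref, psi2_ref = chunk_statistics(X, y, Z)
assert np.allclose(psi0, psi0_ref)
assert np.allclose(psi1y, psi1y_ref)
assert np.allclose(psi2, psi2_ref)
```

Because only these fixed-size statistics (of order M and M×M, independent of N) need to be communicated, the same pattern extends to distributed memory and to computing each chunk's kernel products on a GPU.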

Cite this Paper


BibTeX
@Misc{Dai-gpu14,
  title = {Gaussian Process Models with Parallelization and {GPU} acceleration},
  author = {Dai, Zhenwen and Damianou, Andreas and Hensman, James and Lawrence, Neil D.},
  year = {2014},
  pdf = {https://arxiv.org/pdf/1412.1370.pdf},
  url = {http://inverseprobability.com/publications/dai-gpu14.html},
  abstract = {In this work, we present an extension of Gaussian process (GP) models with sophisticated parallelization and GPU acceleration. The parallelization scheme arises naturally from the modular computational structure w.r.t. datapoints in the sparse Gaussian process formulation. Additionally, the computational bottleneck is implemented with GPU acceleration for further speed up. Combining both techniques allows applying Gaussian process models to millions of datapoints. The efficiency of our algorithm is demonstrated with a synthetic dataset. Its source code has been integrated into our popular software library GPy.}
}
Endnote
%0 Generic
%T Gaussian Process Models with Parallelization and GPU acceleration
%A Zhenwen Dai
%A Andreas Damianou
%A James Hensman
%A Neil D. Lawrence
%D 2014
%F Dai-gpu14
%U http://inverseprobability.com/publications/dai-gpu14.html
%X In this work, we present an extension of Gaussian process (GP) models with sophisticated parallelization and GPU acceleration. The parallelization scheme arises naturally from the modular computational structure w.r.t. datapoints in the sparse Gaussian process formulation. Additionally, the computational bottleneck is implemented with GPU acceleration for further speed up. Combining both techniques allows applying Gaussian process models to millions of datapoints. The efficiency of our algorithm is demonstrated with a synthetic dataset. Its source code has been integrated into our popular software library GPy.
RIS
TY - GEN
TI - Gaussian Process Models with Parallelization and GPU acceleration
AU - Zhenwen Dai
AU - Andreas Damianou
AU - James Hensman
AU - Neil D. Lawrence
DA - 2014/10/18
ID - Dai-gpu14
L1 - https://arxiv.org/pdf/1412.1370.pdf
UR - http://inverseprobability.com/publications/dai-gpu14.html
AB - In this work, we present an extension of Gaussian process (GP) models with sophisticated parallelization and GPU acceleration. The parallelization scheme arises naturally from the modular computational structure w.r.t. datapoints in the sparse Gaussian process formulation. Additionally, the computational bottleneck is implemented with GPU acceleration for further speed up. Combining both techniques allows applying Gaussian process models to millions of datapoints. The efficiency of our algorithm is demonstrated with a synthetic dataset. Its source code has been integrated into our popular software library GPy.
ER -
APA
Dai, Z., Damianou, A., Hensman, J. & Lawrence, N.D. (2014). Gaussian Process Models with Parallelization and GPU acceleration. Available from http://inverseprobability.com/publications/dai-gpu14.html.