Fast Variational Inference for Gaussian Process Models through KL-Correction

Nathaniel J. King, Neil D. Lawrence
In Proceedings of ECML, Berlin, 2006. Lecture Notes in Computer Science, Springer-Verlag, pp. 270-281.

Abstract

Variational inference is a flexible approach to solving problems of intractability in Bayesian models. Unfortunately the convergence of variational methods is often slow. We review a recently suggested variational approach for approximate inference in Gaussian process (GP) models and show how convergence may be dramatically improved through the use of a positive correction term to the standard variational bound. We refer to the modified bound as a KL-corrected bound. The KL-corrected bound is a lower bound on the true likelihood, but an upper bound on the original variational bound. Timing comparisons between optimisation of the two bounds show that optimisation of the new bound consistently improves the speed of convergence.
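As a rough sketch of the ordering the abstract describes (the notation here is assumed for illustration and is not taken verbatim from the paper), write $\mathcal{L}(q)$ for the standard variational lower bound, $q(\mathbf{f})$ for the variational distribution over the latent function values, and $\hat{p}(\mathbf{f}\mid\mathbf{y})$ for the posterior of the approximating model. The KL-corrected bound $\mathcal{L}_{\mathrm{KC}}$ then satisfies

$$\mathcal{L}(q) \;\le\; \mathcal{L}_{\mathrm{KC}} \;=\; \mathcal{L}(q) + \mathrm{KL}\big(q(\mathbf{f})\,\|\,\hat{p}(\mathbf{f}\mid\mathbf{y})\big) \;\le\; \log p(\mathbf{y}),$$

since the correction term is a KL divergence and therefore non-negative. Because $\mathcal{L}_{\mathrm{KC}}$ is tighter than $\mathcal{L}(q)$, optimising it can make faster progress towards $\log p(\mathbf{y})$, which is consistent with the timing comparisons the abstract reports.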

Cite this Paper


BibTeX
@InProceedings{King:klcorrection06,
  title = {Fast Variational Inference for {G}aussian Process Models through {KL}-Correction},
  author = {King, Nathaniel J. and Lawrence, Neil D.},
  booktitle = {ECML, Berlin, 2006},
  pages = {270--281},
  year = {2006},
  series = {Lecture Notes in Computer Science},
  address = {Berlin},
  publisher = {Springer-Verlag},
  pdf = {https://lawrennd.github.io/publications/files/ECMLppa.pdf},
  url = {http://inverseprobability.com/publications/king-klcorrection06.html},
  abstract = {Variational inference is a flexible approach to solving problems of intractability in Bayesian models. Unfortunately the convergence of variational methods is often slow. We review a recently suggested variational approach for approximate inference in Gaussian process (GP) models and show how convergence may be dramatically improved through the use of a positive correction term to the standard variational bound. We refer to the modified bound as a KL-corrected bound. The KL-corrected bound is a lower bound on the true likelihood, but an upper bound on the original variational bound. Timing comparisons between optimisation of the two bounds show that optimisation of the new bound consistently improves the speed of convergence.}
}
Endnote
%0 Conference Paper
%T Fast Variational Inference for Gaussian Process Models through KL-Correction
%A Nathaniel J. King
%A Neil D. Lawrence
%B ECML, Berlin, 2006
%C Lecture Notes in Computer Science
%D 2006
%F King:klcorrection06
%I Springer-Verlag
%P 270--281
%U http://inverseprobability.com/publications/king-klcorrection06.html
%X Variational inference is a flexible approach to solving problems of intractability in Bayesian models. Unfortunately the convergence of variational methods is often slow. We review a recently suggested variational approach for approximate inference in Gaussian process (GP) models and show how convergence may be dramatically improved through the use of a positive correction term to the standard variational bound. We refer to the modified bound as a KL-corrected bound. The KL-corrected bound is a lower bound on the true likelihood, but an upper bound on the original variational bound. Timing comparisons between optimisation of the two bounds show that optimisation of the new bound consistently improves the speed of convergence.
RIS
TY - CPAPER
TI - Fast Variational Inference for Gaussian Process Models through KL-Correction
AU - Nathaniel J. King
AU - Neil D. Lawrence
BT - ECML, Berlin, 2006
DA - 2006/01/01
ID - King:klcorrection06
PB - Springer-Verlag
DP - Lecture Notes in Computer Science
SP - 270
EP - 281
L1 - https://lawrennd.github.io/publications/files/ECMLppa.pdf
UR - http://inverseprobability.com/publications/king-klcorrection06.html
AB - Variational inference is a flexible approach to solving problems of intractability in Bayesian models. Unfortunately the convergence of variational methods is often slow. We review a recently suggested variational approach for approximate inference in Gaussian process (GP) models and show how convergence may be dramatically improved through the use of a positive correction term to the standard variational bound. We refer to the modified bound as a KL-corrected bound. The KL-corrected bound is a lower bound on the true likelihood, but an upper bound on the original variational bound. Timing comparisons between optimisation of the two bounds show that optimisation of the new bound consistently improves the speed of convergence.
ER -
APA
King, N. J., & Lawrence, N. D. (2006). Fast Variational Inference for Gaussian Process Models through KL-Correction. ECML, Berlin, 2006, in Lecture Notes in Computer Science: 270-281. Available from http://inverseprobability.com/publications/king-klcorrection06.html.
