Fast Variational Inference for Gaussian Process Models through KL-Correction


Nathaniel J. King, IBM
Neil D. Lawrence, University of Sheffield

in ECML, Berlin, 2006, pp. 270–281

Errata

  • Page 276: first equation on the page, after 'The KL-corrected bound (9) can be written using (3) as ...'. The left-hand side of this bound should be $\mathcal{L}^{\prime}(\theta)$ rather than $L(\theta)$.
    Thanks to: Raquel Urtasun

Abstract

Variational inference is a flexible approach to solving problems of intractability in Bayesian models. Unfortunately, the convergence of variational methods is often slow. We review a recently suggested variational approach for approximate inference in Gaussian process (GP) models and show how convergence may be dramatically improved through the use of a positive correction term to the standard variational bound. We refer to the modified bound as a KL-corrected bound. The KL-corrected bound is a lower bound on the true likelihood, but an upper bound on the original variational bound. Timing comparisons between optimisation of the two bounds show that optimisation of the new bound consistently improves the speed of convergence.
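
The ordering described above can be sketched notationally (using $\mathcal{L}^{\prime}(\theta)$ for the KL-corrected bound, as in the erratum, and assuming $\mathcal{L}(\theta, q)$ denotes the standard variational bound for a variational distribution $q$; the precise definitions are equations (3) and (9) of the paper):

$$\mathcal{L}(\theta, q) \;\leq\; \mathcal{L}^{\prime}(\theta) \;\leq\; \log p(\mathbf{y} \,|\, \theta),$$

i.e. the KL-corrected bound is always at least as tight a lower bound on the log likelihood as the standard variational bound.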


@InProceedings{king-klcorrection06,
  title = 	 {Fast Variational Inference for Gaussian Process Models through KL-Correction},
  author = 	 {Nathaniel J. King and Neil D. Lawrence},
  booktitle = 	 {ECML, Berlin, 2006},
  pages = 	 {270--281},
  year = 	 {2006},
  series = 	 {Lecture Notes in Computer Science},
  address = 	 {Berlin},
  publisher = 	 {Springer-Verlag},
  edit = 	 {https://github.com/lawrennd//publications/edit/gh-pages/_posts/2006-01-01-king-klcorrection06.md},
  url =  	 {http://inverseprobability.com/publications/king-klcorrection06.html},
  abstract = 	 {Variational inference is a flexible approach to solving problems of intractability in Bayesian models. Unfortunately, the convergence of variational methods is often slow. We review a recently suggested variational approach for approximate inference in Gaussian process (GP) models and show how convergence may be dramatically improved through the use of a positive correction term to the standard variational bound. We refer to the modified bound as a KL-corrected bound. The KL-corrected bound is a lower bound on the true likelihood, but an upper bound on the original variational bound. Timing comparisons between optimisation of the two bounds show that optimisation of the new bound consistently improves the speed of convergence.},
  crossref =  {Scheffer:ecml06},
  key = 	 {King:klcorrection06},
  linkpdf = 	 {ftp://ftp.dcs.shef.ac.uk/home/neil/ECMLppa.pdf},
  linksoftware = {http://inverseprobability.com/ppa/},
  group = 	 {gp,variational,shefml}
}
%T Fast Variational Inference for Gaussian Process Models through KL-Correction
%A Nathaniel J. King and Neil D. Lawrence
%B ECML, Berlin, 2006
%C Berlin
%D 2006
%F king-klcorrection06
%I Springer-Verlag	
%P 270--281
%R 
%U http://inverseprobability.com/publications/king-klcorrection06.html
%X Variational inference is a flexible approach to solving problems of intractability in Bayesian models. Unfortunately, the convergence of variational methods is often slow. We review a recently suggested variational approach for approximate inference in Gaussian process (GP) models and show how convergence may be dramatically improved through the use of a positive correction term to the standard variational bound. We refer to the modified bound as a KL-corrected bound. The KL-corrected bound is a lower bound on the true likelihood, but an upper bound on the original variational bound. Timing comparisons between optimisation of the two bounds show that optimisation of the new bound consistently improves the speed of convergence.
TY  - CPAPER
TI  - Fast Variational Inference for Gaussian Process Models through KL-Correction
AU  - Nathaniel J. King
AU  - Neil D. Lawrence
BT  - ECML, Berlin, 2006
PY  - 2006/01/01
DA  - 2006/01/01	
ID  - king-klcorrection06
PB  - Springer-Verlag	
SP  - 270
EP  - 281
L1  - ftp://ftp.dcs.shef.ac.uk/home/neil/ECMLppa.pdf
UR  - http://inverseprobability.com/publications/king-klcorrection06.html
AB  - Variational inference is a flexible approach to solving problems of intractability in Bayesian models. Unfortunately, the convergence of variational methods is often slow. We review a recently suggested variational approach for approximate inference in Gaussian process (GP) models and show how convergence may be dramatically improved through the use of a positive correction term to the standard variational bound. We refer to the modified bound as a KL-corrected bound. The KL-corrected bound is a lower bound on the true likelihood, but an upper bound on the original variational bound. Timing comparisons between optimisation of the two bounds show that optimisation of the new bound consistently improves the speed of convergence.
ER  -

King, N.J. & Lawrence, N.D. (2006). Fast Variational Inference for Gaussian Process Models through KL-Correction. ECML, Berlin, 2006, pp. 270–281.