Approximate Inference in Deep GPs

at Gatsby Computational Neuroscience Unit, University College London, U.K. on Oct 23, 2014 [pdf]
Neil D. Lawrence, University of Sheffield

Abstract

In this talk we will review deep Gaussian process models and relate them to neural network models. We will then consider the details of how variational inference may be performed in these models. The approach centres on “variational compression”, a form of variational inference that compresses information into an augmented variable space. The aim of the deep Gaussian process framework is to enable probabilistic learning of multi-modal data. We will therefore end by highlighting directions for future research and discussing applications of these models in domains such as personalised health.
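As a rough sketch of the idea behind variational compression (my notation, not taken from the talk itself): in a sparse Gaussian process the latent function values \(\mathbf{f}\) are augmented with inducing variables \(\mathbf{u}\), so the joint model for data \(\mathbf{y}\) becomes

\[
p(\mathbf{y}, \mathbf{f}, \mathbf{u}) = p(\mathbf{y} \mid \mathbf{f})\, p(\mathbf{f} \mid \mathbf{u})\, p(\mathbf{u}).
\]

Choosing a variational posterior of the form \(q(\mathbf{f}, \mathbf{u}) = p(\mathbf{f} \mid \mathbf{u})\, q(\mathbf{u})\) gives the lower bound

\[
\log p(\mathbf{y}) \;\ge\; \mathbb{E}_{q(\mathbf{f}, \mathbf{u})}\!\left[\log p(\mathbf{y} \mid \mathbf{f})\right] \;-\; \mathrm{KL}\!\left(q(\mathbf{u}) \,\|\, p(\mathbf{u})\right),
\]

in which the data's influence on \(\mathbf{f}\) is compressed into the distribution over the augmented (inducing) variables \(\mathbf{u}\). In a deep Gaussian process a bound of this form is applied at each layer of the hierarchy.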

Links