In this talk we will review deep Gaussian process models and relate them to neural network models. We will then consider the details of how variational inference may be performed in these models. The method is centred on “variational compression”, a variational-inference technique that compresses information into an augmented variable space. The aim of the deep Gaussian process framework is to enable probabilistic learning of multi-modal data. We will end by highlighting directions for future research and discussing applications of these models in domains such as personalised health.
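As a rough illustration of the composition underlying deep Gaussian processes (the kernel choices and lengthscales here are illustrative assumptions, not taken from the talk), the sketch below draws a sample from a two-layer deep GP prior by feeding one GP function draw into another, mirroring how layers compose in a neural network:

```python
import numpy as np

def rbf_kernel(x, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance evaluated on a 1-D input vector.
    sq_dist = (x[:, None] - x[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dist / lengthscale ** 2)

def sample_gp(x, lengthscale, rng):
    # Draw one function from a zero-mean GP prior at the inputs x.
    # Small jitter on the diagonal keeps the covariance positive definite.
    K = rbf_kernel(x, lengthscale) + 1e-8 * np.eye(len(x))
    return rng.multivariate_normal(np.zeros(len(x)), K)

rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 50)

# Layer 1: hidden function h = f1(x).
h = sample_gp(x, lengthscale=1.0, rng=rng)
# Layer 2: output y = f2(h) -- the hidden draw becomes the next layer's input.
y = sample_gp(h, lengthscale=0.5, rng=rng)
```

Stacking the layers this way produces function samples that can be far more non-stationary than a single GP draw, which is part of what makes inference in these models require the variational machinery discussed in the talk.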