Recurrent Gaussian Processes

César Lincoln C. Mattos, Zhenwen Dai, Andreas Damianou, Jeremy Forth, Guilherme A. Barreto, Neil D. Lawrence
Proceedings of the International Conference on Learning Representations, 3, 2016.

Abstract

We define Recurrent Gaussian Processes (RGP) models, a general family of Bayesian nonparametric models with recurrent GP priors which are able to learn dynamical patterns from sequential data. Similar to Recurrent Neural Networks (RNNs), RGPs can have different formulations for their internal states, use distinct inference methods, and be extended with deep structures. In this context, we propose a novel deep RGP model whose autoregressive states are latent, thereby performing representation and dynamical learning simultaneously. To fully exploit the Bayesian nature of the RGP model we develop the Recurrent Variational Bayes (REVARB) framework, which enables efficient inference and strong regularization through coherent propagation of uncertainty across the RGP layers and states. We also introduce an RGP extension in which the number of variational parameters is greatly reduced by reparametrizing them through RNN-based sequential recognition models. We apply our model to the tasks of nonlinear system identification and human motion modeling. The promising results obtained indicate that our RGP model maintains its high flexibility while avoiding overfitting and remaining applicable even when larger datasets are not available.
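The recurrent construction described in the abstract can be illustrated with a short sketch. The Python code below is not the paper's method: it assumes the GPy library is available, uses a purely illustrative toy sequence and lag order, and fits a single GP layer to lagged, observed NARX-style regressors, then runs a free simulation by feeding point predictions back as future states. The RGP of the paper instead treats these autoregressive states as latent in every hidden layer and propagates their uncertainty through layers and time steps with the REVARB variational bound.

```python
import numpy as np
import GPy  # assumed available; any GP regression library with predictive means would do

np.random.seed(0)

# Toy input/output sequence standing in for a system-identification dataset (illustrative only).
T = 300
u = np.sin(0.1 * np.arange(T))
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.8 * y[t - 1] - 0.2 * y[t - 2] + np.tanh(u[t - 1]) + 0.05 * np.random.randn()

# NARX-style regressors: lagged outputs and inputs play the role of the autoregressive state.
# In the RGP these states are latent and inferred variationally (REVARB); here they are observed.
L = 2
X = np.array([np.concatenate((y[t - L:t][::-1], u[t - L:t][::-1])) for t in range(L, T)])
Y = y[L:, None]

# A single GP "layer" mapping lagged states to the next output.
kernel = GPy.kern.RBF(input_dim=2 * L, ARD=True)
model = GPy.models.GPRegression(X, Y, kernel)
model.optimize(messages=False)

# Free-run simulation: feed predicted means back as future states. REVARB would instead
# propagate the full predictive uncertainty rather than point predictions.
state = list(y[:L])
free_run = []
for t in range(L, T):
    x_star = np.concatenate((np.array(state[-L:])[::-1], u[t - L:t][::-1]))[None, :]
    mu, var = model.predict(x_star)
    free_run.append(mu[0, 0])
    state.append(mu[0, 0])

print("free-run RMSE:", np.sqrt(np.mean((np.array(free_run) - y[L:]) ** 2)))
```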

Cite this Paper


BibTeX
@InProceedings{Mattos-recurrent16,
  title = {Recurrent {G}aussian Processes},
  author = {Mattos, César Lincoln C. and Dai, Zhenwen and Damianou, Andreas and Forth, Jeremy and Barreto, Guilherme A. and Lawrence, Neil D.},
  booktitle = {Proceedings of the International Conference on Learning Representations},
  year = {2016},
  editor = {Larochelle, Hugo and Kingsbury, Brian and Bengio, Samy},
  volume = {3},
  address = {Caribe Hotel, San Juan, PR},
  pdf = {http://arxiv.org/abs/1511.06644},
  url = {http://inverseprobability.com/publications/mattos-recurrent16.html},
  abstract = {We define Recurrent Gaussian Processes (RGP) models, a general family of Bayesian nonparametric models with recurrent GP priors which are able to learn dynamical patterns from sequential data. Similar to Recurrent Neural Networks (RNNs), RGPs can have different formulations for their internal states, use distinct inference methods, and be extended with deep structures. In this context, we propose a novel deep RGP model whose autoregressive states are latent, thereby performing representation and dynamical learning simultaneously. To fully exploit the Bayesian nature of the RGP model we develop the Recurrent Variational Bayes (REVARB) framework, which enables efficient inference and strong regularization through coherent propagation of uncertainty across the RGP layers and states. We also introduce an RGP extension in which the number of variational parameters is greatly reduced by reparametrizing them through RNN-based sequential recognition models. We apply our model to the tasks of nonlinear system identification and human motion modeling. The promising results obtained indicate that our RGP model maintains its high flexibility while avoiding overfitting and remaining applicable even when larger datasets are not available.}
}
Endnote
%0 Conference Paper
%T Recurrent Gaussian Processes
%A César Lincoln C. Mattos
%A Zhenwen Dai
%A Andreas Damianou
%A Jeremy Forth
%A Guilherme A. Barreto
%A Neil D. Lawrence
%B Proceedings of the International Conference on Learning Representations
%D 2016
%E Hugo Larochelle
%E Brian Kingsbury
%E Samy Bengio
%F Mattos-recurrent16
%U http://inverseprobability.com/publications/mattos-recurrent16.html
%V 3
%X We define Recurrent Gaussian Processes (RGP) models, a general family of Bayesian nonparametric models with recurrent GP priors which are able to learn dynamical patterns from sequential data. Similar to Recurrent Neural Networks (RNNs), RGPs can have different formulations for their internal states, use distinct inference methods, and be extended with deep structures. In this context, we propose a novel deep RGP model whose autoregressive states are latent, thereby performing representation and dynamical learning simultaneously. To fully exploit the Bayesian nature of the RGP model we develop the Recurrent Variational Bayes (REVARB) framework, which enables efficient inference and strong regularization through coherent propagation of uncertainty across the RGP layers and states. We also introduce an RGP extension in which the number of variational parameters is greatly reduced by reparametrizing them through RNN-based sequential recognition models. We apply our model to the tasks of nonlinear system identification and human motion modeling. The promising results obtained indicate that our RGP model maintains its high flexibility while avoiding overfitting and remaining applicable even when larger datasets are not available.
RIS
TY - CPAPER
TI - Recurrent Gaussian Processes
AU - César Lincoln C. Mattos
AU - Zhenwen Dai
AU - Andreas Damianou
AU - Jeremy Forth
AU - Guilherme A. Barreto
AU - Neil D. Lawrence
BT - Proceedings of the International Conference on Learning Representations
DA - 2016/05/02
ED - Hugo Larochelle
ED - Brian Kingsbury
ED - Samy Bengio
ID - Mattos-recurrent16
VL - 3
L1 - http://arxiv.org/abs/1511.06644
UR - http://inverseprobability.com/publications/mattos-recurrent16.html
AB - We define Recurrent Gaussian Processes (RGP) models, a general family of Bayesian nonparametric models with recurrent GP priors which are able to learn dynamical patterns from sequential data. Similar to Recurrent Neural Networks (RNNs), RGPs can have different formulations for their internal states, use distinct inference methods, and be extended with deep structures. In this context, we propose a novel deep RGP model whose autoregressive states are latent, thereby performing representation and dynamical learning simultaneously. To fully exploit the Bayesian nature of the RGP model we develop the Recurrent Variational Bayes (REVARB) framework, which enables efficient inference and strong regularization through coherent propagation of uncertainty across the RGP layers and states. We also introduce an RGP extension in which the number of variational parameters is greatly reduced by reparametrizing them through RNN-based sequential recognition models. We apply our model to the tasks of nonlinear system identification and human motion modeling. The promising results obtained indicate that our RGP model maintains its high flexibility while avoiding overfitting and remaining applicable even when larger datasets are not available.
ER -
APA
Mattos, C.L.C., Dai, Z., Damianou, A., Forth, J., Barreto, G.A. & Lawrence, N.D. (2016). Recurrent Gaussian Processes. Proceedings of the International Conference on Learning Representations, 3. Available from http://inverseprobability.com/publications/mattos-recurrent16.html.
