Recurrent Gaussian Processes

César Lincoln C. Mattos, Federal University of Ceará
Zhenwen Dai, Inferentia Ltd
Andreas Damianou, University of Sheffield
Jeremy Forth
Guilherme A. Barreto, Federal University of Ceará
Neil D. Lawrence, University of Sheffield

in Proceedings of the International Conference on Learning Representations 3, 2016

Abstract

We define Recurrent Gaussian Process (RGP) models, a general family of Bayesian nonparametric models with recurrent GP priors which are able to learn dynamical patterns from sequential data. Similarly to Recurrent Neural Networks (RNNs), RGPs admit different formulations for their internal states and distinct inference methods, and can be extended with deep structures. In this context, we propose a novel deep RGP model whose autoregressive states are latent, thereby performing representation and dynamical learning simultaneously. To fully exploit the Bayesian nature of the RGP model we develop the Recurrent Variational Bayes (REVARB) framework, which enables efficient inference and strong regularization through coherent propagation of uncertainty across the RGP layers and states. We also introduce an RGP extension where the number of variational parameters is greatly reduced by reparametrizing them through RNN-based sequential recognition models. We apply our model to the tasks of nonlinear system identification and human motion modeling. The promising results obtained indicate that our RGP model maintains its high flexibility while avoiding overfitting and remaining applicable even when larger datasets are not available.
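
The recurrent construction described in the abstract can be summarized as a small state-space formulation. The following is an illustrative sketch only, assuming $H$ hidden layers, an autoregressive lag of $L$, and Gaussian noise; the notation is ours and may differ from the paper's exact parametrization:

\begin{align*}
x_t^{(h)} &= f^{(h)}\big(x_{t-1}^{(h)}, \ldots, x_{t-L}^{(h)},\; x_t^{(h-1)}\big) + \epsilon_t^{(h)},
  & f^{(h)} &\sim \mathcal{GP}\big(0,\, k^{(h)}(\cdot,\cdot)\big), \quad h = 1, \ldots, H, \\
y_t &= f^{(H+1)}\big(x_t^{(H)}\big) + \epsilon_t^{(y)},
  & f^{(H+1)} &\sim \mathcal{GP}\big(0,\, k^{(H+1)}(\cdot,\cdot)\big),
\end{align*}

where each layer's latent state depends on its own lagged values and on the layer below, and $x_t^{(0)}$ stands for any observed exogenous input (e.g., the control signal in system identification). Because the states $x_t^{(h)}$ are latent, REVARB places a variational posterior over both the states and the GP mappings, which is what allows uncertainty to propagate coherently across layers and through time.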


@InProceedings{mattos-recurrent16,
  title = 	 {Recurrent {G}aussian Processes},
  author = 	 {César Lincoln C. Mattos and Zhenwen Dai and Andreas Damianou and Jeremy Forth and Guilherme A. Barreto and Neil D. Lawrence},
  booktitle = 	 {Proceedings of the International Conference on Learning Representations},
  year = 	 {2016},
  editor = 	 {Hugo Larochelle and Brian Kingsbury and Samy Bengio},
  volume = 	 {3},
  address = 	 {Caribe Hotel, San Juan, PR},
  month = 	 {May},
  url =  	 {http://inverseprobability.com/publications/mattos-recurrent16.html},
  abstract = 	 {We define Recurrent Gaussian Process (RGP) models, a general family of Bayesian nonparametric models with recurrent GP priors which are able to learn dynamical patterns from sequential data. Similarly to Recurrent Neural Networks (RNNs), RGPs admit different formulations for their internal states and distinct inference methods, and can be extended with deep structures. In this context, we propose a novel deep RGP model whose autoregressive states are latent, thereby performing representation and dynamical learning simultaneously. To fully exploit the Bayesian nature of the RGP model we develop the Recurrent Variational Bayes (REVARB) framework, which enables efficient inference and strong regularization through coherent propagation of uncertainty across the RGP layers and states. We also introduce an RGP extension where the number of variational parameters is greatly reduced by reparametrizing them through RNN-based sequential recognition models. We apply our model to the tasks of nonlinear system identification and human motion modeling. The promising results obtained indicate that our RGP model maintains its high flexibility while avoiding overfitting and remaining applicable even when larger datasets are not available.},
  crossref =  {Larochelle:iclr16},
  key = 	 {Mattos:recurrent16},
  linkpdf = 	 {http://arxiv.org/abs/1511.06644},
}
%T Recurrent Gaussian Processes
%A César Lincoln C. Mattos and Zhenwen Dai and Andreas Damianou and Jeremy Forth and Guilherme A. Barreto and Neil D. Lawrence
%B 
%C Proceedings of the International Conference on Learning Representations
%D 2016
%E Hugo Larochelle and Brian Kingsbury and Samy Bengio
%F mattos-recurrent16	
%P --
%R 
%U http://inverseprobability.com/publications/mattos-recurrent16.html
%V 3
%X We define Recurrent Gaussian Process (RGP) models, a general family of Bayesian nonparametric models with recurrent GP priors which are able to learn dynamical patterns from sequential data. Similarly to Recurrent Neural Networks (RNNs), RGPs admit different formulations for their internal states and distinct inference methods, and can be extended with deep structures. In this context, we propose a novel deep RGP model whose autoregressive states are latent, thereby performing representation and dynamical learning simultaneously. To fully exploit the Bayesian nature of the RGP model we develop the Recurrent Variational Bayes (REVARB) framework, which enables efficient inference and strong regularization through coherent propagation of uncertainty across the RGP layers and states. We also introduce an RGP extension where the number of variational parameters is greatly reduced by reparametrizing them through RNN-based sequential recognition models. We apply our model to the tasks of nonlinear system identification and human motion modeling. The promising results obtained indicate that our RGP model maintains its high flexibility while avoiding overfitting and remaining applicable even when larger datasets are not available.
TY  - CPAPER
TI  - Recurrent Gaussian Processes
AU  - César Lincoln C. Mattos
AU  - Zhenwen Dai
AU  - Andreas Damianou
AU  - Jeremy Forth
AU  - Guilherme A. Barreto
AU  - Neil D. Lawrence
BT  - Proceedings of the International Conference on Learning Representations
PY  - 2016/05/02
DA  - 2016/05/02
ED  - Hugo Larochelle
ED  - Brian Kingsbury
ED  - Samy Bengio	
ID  - mattos-recurrent16	
SP  - 
EP  - 
L1  - http://arxiv.org/abs/1511.06644
UR  - http://inverseprobability.com/publications/mattos-recurrent16.html
AB  - We define Recurrent Gaussian Process (RGP) models, a general family of Bayesian nonparametric models with recurrent GP priors which are able to learn dynamical patterns from sequential data. Similarly to Recurrent Neural Networks (RNNs), RGPs admit different formulations for their internal states and distinct inference methods, and can be extended with deep structures. In this context, we propose a novel deep RGP model whose autoregressive states are latent, thereby performing representation and dynamical learning simultaneously. To fully exploit the Bayesian nature of the RGP model we develop the Recurrent Variational Bayes (REVARB) framework, which enables efficient inference and strong regularization through coherent propagation of uncertainty across the RGP layers and states. We also introduce an RGP extension where the number of variational parameters is greatly reduced by reparametrizing them through RNN-based sequential recognition models. We apply our model to the tasks of nonlinear system identification and human motion modeling. The promising results obtained indicate that our RGP model maintains its high flexibility while avoiding overfitting and remaining applicable even when larger datasets are not available.
ER  -

Mattos, C.L.C., Dai, Z., Damianou, A., Forth, J., Barreto, G.A. & Lawrence, N.D. (2016). Recurrent Gaussian Processes. Proceedings of the International Conference on Learning Representations 3.