
NIPS 2011, Paper 301: Variational Gaussian Process Dynamical Systems


Source: pdf

Author: Andreas Damianou, Michalis K. Titsias, Neil D. Lawrence

Abstract: High dimensional time series are endemic in applications of machine learning such as robotics (sensor data), computational biology (gene expression data), vision (video sequences) and graphics (motion capture data). Practical nonlinear probabilistic approaches to this data are required. In this paper we introduce the variational Gaussian process dynamical system. Our work builds on recent variational approximations for Gaussian process latent variable models to allow for nonlinear dimensionality reduction simultaneously with learning a dynamical prior in the latent space. The approach also allows for the appropriate dimensionality of the latent space to be automatically determined. We demonstrate the model on a human motion capture data set and a series of high resolution video sequences.
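The abstract describes a two-layer generative structure: a Gaussian process prior over time produces smooth latent trajectories, and a second Gaussian process maps those latent points to high-dimensional observations. As a rough illustration only (not the authors' code), the NumPy sketch below samples from such a construction; the RBF kernels, the sizes N, Q, D, and the noise level are illustrative assumptions, not settings from the paper.

# Minimal NumPy sketch of the generative structure described in the
# abstract. All kernel choices and dimensions below are assumptions.
import numpy as np

def rbf(A, B, variance=1.0, lengthscale=1.0):
    # Squared-exponential kernel between the rows of A and B.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

rng = np.random.default_rng(0)
N, Q, D = 100, 2, 30                 # time points, latent dims, observed dims
t = np.linspace(0, 10, N)[:, None]   # input times

# Dynamical prior: each latent dimension x_q(t) ~ GP(0, k_x(t, t')).
Kx = rbf(t, t, lengthscale=2.0) + 1e-6 * np.eye(N)
X = rng.multivariate_normal(np.zeros(N), Kx, size=Q).T   # N x Q latent paths

# Mapping: each observed dimension f_d(x) ~ GP(0, k_f(x, x')).
Kf = rbf(X, X) + 1e-6 * np.eye(N)
F = rng.multivariate_normal(np.zeros(N), Kf, size=D).T   # N x D latent outputs

# Observations: y_nd = f_d(x_n) + Gaussian noise.
Y = F + 0.1 * rng.standard_normal((N, D))

In the paper itself the latent trajectories X are of course inferred variationally from observed Y rather than sampled; the sketch only shows the forward (generative) direction.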


References

[1] N. D. Lawrence, “Probabilistic non-linear principal component analysis with Gaussian process latent variable models,” Journal of Machine Learning Research, vol. 6, pp. 1783–1816, 2005.

[2] N. D. Lawrence, “Gaussian process latent variable models for visualisation of high dimensional data,” in Advances in Neural Information Processing Systems, pp. 329–336, MIT Press, 2004.

[3] J. M. Wang, D. J. Fleet, and A. Hertzmann, “Gaussian process dynamical models,” in Advances in Neural Information Processing Systems, pp. 1441–1448, MIT Press, 2006.

[4] J. M. Wang, D. J. Fleet, and A. Hertzmann, “Gaussian process dynamical models for human motion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, pp. 283–298, Feb. 2008.

[5] N. D. Lawrence, “Hierarchical Gaussian process latent variable models,” in Proceedings of the International Conference on Machine Learning, pp. 481–488, Omnipress, 2007.

[6] J. Ko and D. Fox, “GP-BayesFilters: Bayesian filtering using Gaussian process prediction and observation models,” Autonomous Robots, vol. 27, pp. 75–90, July 2009.

[7] M. K. Titsias, “Variational learning of inducing variables in sparse Gaussian processes,” in Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, vol. 5, pp. 567–574, JMLR W&CP, 2009.

[8] M. K. Titsias and N. D. Lawrence, “Bayesian Gaussian process latent variable model,” in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 844–851, JMLR W&CP 9, 2010.

[9] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning. MIT Press, 2006.

[10] D. J. C. MacKay, “Introduction to Gaussian processes,” in Neural Networks and Machine Learning (C. M. Bishop, ed.), NATO ASI Series, pp. 133–166, Kluwer Academic Press, 1998.

[11] C. M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, 2006; corrected 2nd printing, Oct. 2007.

[12] M. Opper and C. Archambeau, “The variational Gaussian approximation revisited,” Neural Computation, vol. 21, no. 3, pp. 786–792, 2009.

[13] A. Girard, C. E. Rasmussen, J. Quiñonero-Candela, and R. Murray-Smith, “Gaussian process priors with uncertain inputs - application to multiple-step ahead time series forecasting,” in Advances in Neural Information Processing Systems, 2003.

[14] G. W. Taylor, G. E. Hinton, and S. Roweis, “Modeling human motion using binary latent variables,” in Advances in Neural Information Processing Systems, vol. 19, MIT Press, 2007.

[15] N. D. Lawrence, “Learning for larger datasets with the Gaussian process latent variable model,” in Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, pp. 243–250, Omnipress, 2007.