
NIPS 2010, Paper 262: Switched Latent Force Models for Movement Segmentation


Source: pdf

Author: Mauricio Alvarez, Jan R. Peters, Neil D. Lawrence, Bernhard Schölkopf

Abstract: Latent force models encode the interaction between multiple related dynamical systems in the form of a kernel or covariance function. Each variable to be modeled is represented as the output of a differential equation, and each differential equation is driven by a weighted sum of latent functions with uncertainty given by a Gaussian process prior. In this paper we consider employing the latent force model framework for the problem of determining robot motor primitives. To deal with discontinuities in the dynamical systems or the latent driving force, we introduce an extension of the basic latent force model that switches between different latent functions and potentially different dynamical systems. This creates a versatile representation for robot movements that can capture discrete changes and non-linearities in the dynamics. We give illustrative examples on both synthetic data and for striking movements recorded using a Barrett WAM robot as a haptic input device. Our inspiration is robot motor primitives, but we expect our model to have wide application for dynamical systems, including models for human motion capture data and systems biology.
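The abstract describes outputs as solutions of differential equations driven by latent Gaussian process functions, with switching between latent forces at discontinuities. A minimal sketch of that idea, assuming a first-order ODE dx/dt = -D*x + S*u(t), a single switch point, and forward-Euler integration (all illustrative choices, not the paper's actual construction, which works with the induced covariance function in closed form):

```python
import numpy as np

rng = np.random.default_rng(0)

# Time grid for simulation.
t = np.linspace(0.0, 10.0, 500)
dt = t[1] - t[0]

def rbf_kernel(ta, tb, lengthscale=1.0):
    """Squared-exponential covariance between two sets of times."""
    d = ta[:, None] - tb[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_gp(times, lengthscale=1.0):
    """Draw one sample path from a zero-mean GP with an RBF kernel."""
    K = rbf_kernel(times, times, lengthscale) + 1e-8 * np.eye(len(times))
    return np.linalg.cholesky(K) @ rng.standard_normal(len(times))

# Two latent forces with different smoothness; the model switches
# from u1 to u2 at t_switch, creating a discontinuity in the drive.
u1 = sample_gp(t, lengthscale=1.5)
u2 = sample_gp(t, lengthscale=0.3)
t_switch = 5.0
u = np.where(t < t_switch, u1, u2)

# First-order latent force model: dx/dt = -D * x + S * u(t),
# integrated with forward Euler. D (decay) and S (sensitivity)
# are illustrative parameter values.
D, S = 0.8, 1.0
x = np.zeros_like(t)
for i in range(1, len(t)):
    x[i] = x[i - 1] + dt * (-D * x[i - 1] + S * u[i - 1])
```

In the paper the latent functions are never sampled explicitly: marginalizing the GP prior over u through the linear ODE yields a covariance function for the outputs, and inference proceeds as in standard GP regression. The simulation above only illustrates the generative view of a switched latent force.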


Reference text


[1] Mauricio Álvarez, David Luengo, and Neil D. Lawrence. Latent force models. In David van Dyk and Max Welling, editors, Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, pages 9–16, Clearwater Beach, Florida, 16–18 April 2009. JMLR W&CP 5.

[2] Mauricio A. Álvarez, David Luengo, Michalis K. Titsias, and Neil D. Lawrence. Efficient multioutput Gaussian processes through variational inducing kernels. In JMLR W&CP 9, pages 25–32, 2010.

[3] Roman Garnett, Michael A. Osborne, Steven Reece, Alex Rogers, and Stephen J. Roberts. Sequential Bayesian prediction in the presence of changepoints and faults. The Computer Journal, 2010. Advance Access published February 1, 2010.

[4] Roman Garnett, Michael A. Osborne, and Stephen J. Roberts. Sequential Bayesian prediction in the presence of changepoints. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 345–352, 2009.

[5] Antti Honkela, Charles Girardot, E. Hilary Gustafson, Ya-Hsin Liu, Eileen E. M. Furlong, Neil D. Lawrence, and Magnus Rattray. Model-based method for transcription factor target identification with limited data. PNAS, 107(17):7793–7798, 2010.

[6] A. Ijspeert, J. Nakanishi, and S. Schaal. Learning attractor landscapes for learning motor primitives. In Advances in Neural Information Processing Systems 15, 2003.

[7] T. Oyama, Y. Uno, and S. Hosoe. Analysis of variability of human reaching movements based on the similarity preservation of arm trajectories. In International Conference on Neural Information Processing (ICONIP), pages 923–932, 2007.

[8] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, 2006.

[9] Yunus Saatçi, Ryan Turner, and Carl Edward Rasmussen. Gaussian process change point models. In Proceedings of the 27th Annual International Conference on Machine Learning, pages 927–934, 2010.

[10] E. Solak, R. Murray-Smith, W. E. Leithead, D. J. Leith, and C. E. Rasmussen. Derivative observations in Gaussian process models of dynamic systems. In Sue Becker, Sebastian Thrun, and Klaus Obermayer, editors, NIPS, volume 15, pages 1033–1040, Cambridge, MA, 2003. MIT Press.

[11] Michalis K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In JMLR W&CP 5, pages 567–574, 2009.