
111 nips-2006-Learning Motion Style Synthesis from Perceptual Observations


Source: pdf

Author: Lorenzo Torresani, Peggy Hackney, Christoph Bregler

Abstract: This paper presents an algorithm for synthesis of human motion in specified styles. We use a theory of movement observation (Laban Movement Analysis) to describe movement styles as points in a multi-dimensional perceptual space. We cast the task of learning to synthesize desired movement styles as a regression problem: sequences generated via space-time interpolation of motion capture data are used to learn a nonlinear mapping between animation parameters and movement styles in perceptual space. We demonstrate that the learned model can apply a variety of motion styles to pre-recorded motion sequences and can extrapolate to styles not originally included in the training data.
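
The regression formulation described in the abstract can be illustrated with a minimal sketch: a nonlinear regressor maps animation parameters (here, blend weights over interpolated motion clips) to coordinates in a perceptual style space, and the mapping is then inverted by searching for parameters that produce a desired style. This is an assumption-laden illustration, not the paper's implementation: it uses scikit-learn's SVR as the nonlinear regressor and synthetic placeholder data in place of the paper's motion-capture interpolations and Laban Movement Analysis ratings.

```python
import numpy as np
from sklearn.svm import SVR
from scipy.optimize import minimize

# Hypothetical training data (placeholders, not from the paper):
# X holds convex blend weights over example motion clips,
# Y holds the corresponding perceptual style coordinates
# (e.g. Laban Effort ratings) assigned by trained observers.
rng = np.random.default_rng(0)
n_samples, n_clips, n_style_dims = 200, 4, 2
X = rng.dirichlet(np.ones(n_clips), size=n_samples)   # animation parameters
Y = rng.normal(size=(n_samples, n_style_dims))        # placeholder style ratings

# One support vector regressor per perceptual style dimension.
models = [SVR(kernel="rbf", C=1.0).fit(X, Y[:, d]) for d in range(n_style_dims)]

def predicted_style(weights):
    """Map blend weights to a predicted point in the perceptual style space."""
    w = np.asarray(weights).reshape(1, -1)
    return np.array([m.predict(w)[0] for m in models])

def weights_for_style(target_style):
    """Search for blend weights whose predicted style matches the target."""
    def loss(w):
        return np.sum((predicted_style(w) - target_style) ** 2)
    w0 = np.full(n_clips, 1.0 / n_clips)  # start from a uniform blend
    return minimize(loss, w0, method="Nelder-Mead").x

# Example: request a (hypothetical) style target and recover blend weights.
print(weights_for_style(np.array([1.0, -0.5])))
```

In this sketch the forward model (parameters to style) is learned by regression, and style synthesis reduces to optimizing the input parameters against that model; the actual parameterization, perceptual dimensions, and regression machinery used in the paper are described in the full text.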


reference text

[1] O. Arikan and D. A. Forsyth. Synthesizing constrained motions from examples. ACM Transactions on Graphics, 21(3):483–490, July 2002.

[2] M. Brand and A. Hertzmann. Style machines. In Proceedings of ACM SIGGRAPH 2000, Computer Graphics Proceedings, Annual Conference Series, pages 183–192, July 2000.

[3] D. Chi, M. Costa, L. Zhao, and N. Badler. The EMOTE model for effort and shape. In Proceedings of ACM SIGGRAPH 2000, Computer Graphics Proceedings, Annual Conference Series, July 2000.

[4] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines (and other kernel-based learning methods). Cambridge University Press, 2000.

[5] H. Drucker, C. J. C. Burges, L. Kaufman, A. Smola, and V. Vapnik. Support vector regression machines. In Advances in Neural Information Processing Systems 9, 1997.

[6] M. A. Giese and T. Poggio. Morphable models for the analysis and synthesis of complex motion patterns. International Journal of Computer Vision, 38(1):59–73, 2000.

[7] P. Hackney. Making Connections: Total Body Integration Through Bartenieff Fundamentals. Routledge, 2000.

[8] M. T. Heath. Scientific Computing: An Introductory Survey, Second edition. McGraw Hill, 2002.

[9] E. Hsu, K. Pulli, and J. Popovic. Style translation for human motion. ACM Transactions on Graphics, 24(3):1082–1089, 2005.

[10] L. Kovar and M. Gleicher. Automated extraction and parameterization of motions in large data sets. ACM Transactions on Graphics, 23(3):559–568, Aug. 2004.

[11] J. Lee, J. Chai, P. S. A. Reitsma, J. K. Hodgins, and N. S. Pollard. Interactive control of avatars animated with human motion data. ACM Transactions on Graphics, 21(3):491–500, July 2002.

[12] Y. Li, T. Wang, and H.-Y. Shum. Motion texture: A two-level statistical model for character motion synthesis. ACM Transactions on Graphics, 21(3):465–472, July 2002.

[13] R. Murray, Z. Li, and S. Sastry. A Mathematical Introduction to Robotic Manipulation. CRC Press, 1994.

[14] H. Ney. The use of a one-stage dynamic programming algorithm for connected word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing, 32(3):263–271, 1984.

[15] K. Pullen and C. Bregler. Motion capture assisted animation: Texturing and synthesis. ACM Transactions on Graphics, 21(3):501–508, July 2002.

[16] C. Rose, M. Cohen, and B. Bodenheimer. Verbs and adverbs: multidimensional motion interpolation. IEEE Computer Graphics and Applications, 18(5):32–40, 1998.

[17] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.

[18] D. J. Wiley and J. K. Hahn. Interpolation synthesis of articulated figure motion. IEEE Computer Graphics and Applications, 17(6):39–45, 1997.