
NIPS 2003, Paper 37: Automatic Annotation of Everyday Movements


Source: pdf

Author: Deva Ramanan, David A. Forsyth

Abstract: This paper describes a system that can annotate a video sequence with: a description of the appearance of each actor; when the actor is in view; and a representation of the actor’s activity while in view. The system does not require a fixed background, and is automatic. The system works by (1) tracking people in 2D and then, using an annotated motion capture dataset, (2) synthesizing an annotated 3D motion sequence matching the 2D tracks. The 3D motion capture data is manually annotated off-line using a class structure that describes everyday motions and allows motion annotations to be composed — one may jump while running, for example. Descriptions computed from video of real motions show that the method is accurate.
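A minimal sketch of the composable-annotation idea described in the abstract, not the authors' implementation: the MocapFrame class, the label vocabulary, the annotate_track function, and the per-frame nearest-neighbour matching below are all illustrative assumptions. The paper itself synthesizes an annotated 3D motion sequence that matches the 2D tracks rather than doing a simple per-frame lookup; the point here is only that each motion capture frame carries a set of labels, so compositions such as "jump while running" fall out naturally.

# Sketch under stated assumptions: composable per-frame annotations,
# transferred from annotated motion capture to tracked video frames by
# a hypothetical nearest-neighbour match on pose descriptors.
from dataclasses import dataclass, field

import numpy as np

# Illustrative vocabulary of everyday motions; not the paper's class structure.
VOCAB = {"stand", "walk", "run", "jump", "wave", "crouch"}


@dataclass
class MocapFrame:
    pose: np.ndarray                           # pose descriptor, e.g. flattened joint positions
    labels: set = field(default_factory=set)   # composable annotations, a subset of VOCAB


def annotate_track(track_poses, mocap_frames):
    """Transfer the label set of the closest mocap frame to each video frame."""
    annotations = []
    for pose in track_poses:
        dists = [np.linalg.norm(pose - f.pose) for f in mocap_frames]
        best = mocap_frames[int(np.argmin(dists))]
        annotations.append(set(best.labels))   # copy the composed label set
    return annotations


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mocap = [
        MocapFrame(pose=rng.normal(size=8), labels={"run"}),
        MocapFrame(pose=rng.normal(size=8), labels={"run", "jump"}),
        MocapFrame(pose=rng.normal(size=8), labels={"stand", "wave"}),
    ]
    # Fake tracked poses: mocap poses with a little noise.
    video_track = [m.pose + 0.01 * rng.normal(size=8) for m in mocap]
    for t, labels in enumerate(annotate_track(video_track, mocap)):
        print(f"frame {t}: {sorted(labels)}")

Running the sketch prints, for each video frame, the composed label set transferred from its closest mocap frame, e.g. ['jump', 'run'] for a frame matched to running-while-jumping motion.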


Reference text

[1] J. K. Aggarwal and Q. Cai. Human motion analysis: A review. Computer Vision and Image Understanding: CVIU, 73(3):428–440, 1999.

[2] O. Arikan and D. Forsyth. Interactive motion generation from examples. In Proc. ACM SIGGRAPH, 2002.

[3] O. Arikan, D. Forsyth, and J. O’Brien. Motion synthesis from annotations. In Proc. ACM SIGGRAPH, 2003.

[4] A. Bobick. Movement, activity, and action: The role of knowledge in the perception of motion. Philosophical Transactions of the Royal Society of London, B-352:1257–1265, 1997.

[5] A. F. Bobick and J. Davis. The recognition of human movement using temporal templates. IEEE Trans. Pattern Analysis and Machine Intelligence, 23(3):257–267, 2001.

[6] L. W. Campbell and A. F. Bobick. Recognition of human body motion using phase space constraints. In ICCV, pages 624–630, 1995.

[7] C. C. Chang and C. J. Lin. LIBSVM: Introduction and benchmarks. Technical report, Department of Computer Science and Information Engineering, National Taiwan University, 2000.

[8] P. Felzenszwalb and D. Huttenlocher. Efficient matching of pictorial structures. In Proc. CVPR, 2000.

[9] D. M. Gavrila. The visual analysis of human movement: A survey. Computer Vision and Image Understanding: CVIU, 73(1):82–98, 1999.

[10] J. K. Hodgins and N. S. Pollard. Adapting simulated behaviors for new characters. In Proc. ACM SIGGRAPH, 1997.

[11] M. I. Jordan, editor. Learning in Graphical Models. MIT Press, Cambridge, MA, 1999.

[12] M. Leventon and W. Freeman. Bayesian estimation of 3D human motion from an image sequence. Technical Report TR-98-06, MERL, 1998.

[13] D. Ramanan and D. A. Forsyth. Automatic annotation of everyday movements. Technical Report UCB//CSD-03-1262, UC Berkeley, CA, 2003.

[14] D. Ramanan and D. A. Forsyth. Finding and tracking people from the bottom up. In Proc. CVPR, 2003.