
31 nips-2006-Analysis of Contour Motions


Source: pdf

Author: Ce Liu, William T. Freeman, Edward H. Adelson

Abstract: A reliable motion estimation algorithm must function under a wide range of conditions. One regime, which we consider here, is the case of moving objects with contours but no visible texture. Tracking distinctive features such as corners can disambiguate the motion of contours, but spurious features such as T-junctions can be badly misleading. It is difficult to determine the reliability of motion from local measurements, since a full-rank covariance matrix can result from both real and spurious features. We propose a novel approach that avoids these points altogether, and derives global motion estimates by utilizing information from three levels of contour analysis: edgelets, boundary fragments and contours. Boundary fragments are chains of oriented edgelets, for which we derive motion estimates from local evidence. The uncertainties of the local estimates are disambiguated after the boundary fragments are properly grouped into contours. The grouping is done by constructing a graphical model and marginalizing it using importance sampling. We propose two equivalent representations in this graphical model, reversible switch variables attached to the ends of fragments and fragment chains, to capture both local and global statistics of boundaries. Our system is successfully applied to both synthetic and real video sequences containing high-contrast boundaries and textureless regions. The system produces good motion estimates along with properly grouped and completed contours.
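The grouping step described in the abstract lends itself to a small illustration. Below is a minimal Python sketch, not the authors' implementation, of the switch-variable idea: each fragment end carries a switch that either stays open or binds to the end of another fragment, and sampled groupings are weighted by a static continuity cue and a motion compatibility cue. All names and parameter values here (Fragment, continuity_score, the 0.7 binding prior, the 0.5 open-end penalty, the Gaussian cue widths) are illustrative assumptions; the paper marginalizes a full graphical model with importance sampling over learned boundary statistics.

```python
# Hypothetical sketch of switch-variable contour grouping (not the authors' code).
import random
import math

class Fragment:
    """A boundary fragment: an ordered chain of edgelet positions with a
    locally estimated (possibly ambiguous) motion vector."""
    def __init__(self, points, motion):
        self.points = points          # list of (x, y) edgelet positions
        self.motion = motion          # locally estimated (dx, dy)

    def end(self, which):
        return self.points[0] if which == 0 else self.points[-1]

def continuity_score(frag_a, end_a, frag_b, end_b, sigma_d=10.0):
    """Static cue: nearby ends are more likely to belong to one contour
    (a stand-in for the paper's boundary-saliency terms)."""
    ax, ay = frag_a.end(end_a)
    bx, by = frag_b.end(end_b)
    d2 = (ax - bx) ** 2 + (ay - by) ** 2
    return math.exp(-d2 / (2 * sigma_d ** 2))

def motion_score(frag_a, frag_b, sigma_v=1.0):
    """Motion cue: fragments on the same contour should move consistently."""
    dv2 = sum((a - b) ** 2 for a, b in zip(frag_a.motion, frag_b.motion))
    return math.exp(-dv2 / (2 * sigma_v ** 2))

def sample_grouping(fragments, rng):
    """Sample switch variables: each free end either stays open or binds to
    a free end of a different fragment (a reversible pairing)."""
    ends = [(i, e) for i in range(len(fragments)) for e in (0, 1)]
    rng.shuffle(ends)
    pairs, used = [], set()
    for i, (fi, ei) in enumerate(ends):
        if (fi, ei) in used:
            continue
        candidates = [(fj, ej) for fj, ej in ends[i + 1:]
                      if fj != fi and (fj, ej) not in used]
        if candidates and rng.random() < 0.7:  # 0.7: arbitrary binding prior
            fj, ej = rng.choice(candidates)
            pairs.append(((fi, ei), (fj, ej)))
            used.update({(fi, ei), (fj, ej)})
        else:
            used.add((fi, ei))                 # this end stays open
    return pairs

def grouping_weight(fragments, pairs, open_penalty=0.5):
    """Importance weight: cue product over bonds, with a mild penalty per
    open end so plausible groupings beat the trivial all-open one."""
    w = open_penalty ** (2 * len(fragments) - 2 * len(pairs))
    for (fi, ei), (fj, ej) in pairs:
        w *= continuity_score(fragments[fi], ei, fragments[fj], ej)
        w *= motion_score(fragments[fi], fragments[fj])
    return w

if __name__ == "__main__":
    rng = random.Random(0)
    # Two collinear fragments moving together, plus an unrelated one.
    frags = [
        Fragment([(0, 0), (10, 0)], (1.0, 0.0)),
        Fragment([(12, 0), (22, 0)], (1.0, 0.0)),
        Fragment([(50, 50), (60, 50)], (-1.0, 0.5)),
    ]
    # Crude sampling: score many sampled groupings and keep the
    # highest-weight one as the estimated contour grouping.
    best = max((sample_grouping(frags, rng) for _ in range(2000)),
               key=lambda p: grouping_weight(frags, p))
    print("best bonds:", best)
```

On this toy input, the highest-weight grouping binds the adjacent ends of the two collinear, co-moving fragments and leaves the distant fragment separate, which is the qualitative behavior the abstract describes: motion ambiguity at the fragment level is resolved once fragments are grouped into contours.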


References

[1] M. J. Black and D. J. Fleet. Probabilistic detection and tracking of motion boundaries. International Journal of Computer Vision, 38(3):231–245, 2000.

[2] J. Canny. A computational approach to edge detection. IEEE Trans. Pat. Anal. Mach. Intel., 8(6):679–698, Nov 1986.

[3] W. T. Freeman and E. H. Adelson. The design and use of steerable filters. IEEE Trans. Pat. Anal. Mach. Intel., 13(9):891–906, Sep 1991.

[4] B. K. P. Horn and B. G. Schunck. Determining optical flow. Artificial Intelligence, 17:185–203, 1981.

[5] B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 674–679, 1981.

[6] D. J. C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.

[7] S. Mahamud, L. Williams, K. Thornber, and K. Xu. Segmentation of multiple salient closed contours from real images. IEEE Trans. Pat. Anal. Mach. Intel., 25(4):433–444, 2003.

[8] D. Martin, C. Fowlkes, and J. Malik. Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Trans. Pat. Anal. Mach. Intel., 26(5):530–549, May 2004.

[9] J. McDermott. Psychophysics with junctions in real images. Perception, 33:1101–1127, 2004.

[10] J. McDermott and E. H. Adelson. The geometry of the occluding contour and its effect on motion interpretation. Journal of Vision, 4(10):944–954, 2004.

[11] J. McDermott and E. H. Adelson. Junctions and cost functions in motion interpretation. Journal of Vision, 4(7):552–563, 2004.

[Figure 6 caption, displaced into the reference list by extraction: Experimental results for some synthetic and real examples; the same parameter settings were used for all examples. Column (a), extracted boundaries: boundary fragments extracted by the boundary tracker, with edgelets shown as red dots and fragment ends as green dots. Column (b), estimated flow: boundary fragments grouped into contours, each contour shown in its own color, with the estimated flow vectors. Columns (c) and (d), frames 1 and 2: the illusory boundaries generated for the first and second frames; the gaps between fragments belonging to the same contour are linked by exploiting both static and motion cues in Eq. (5).]

[12] S. J. Nowlan and T. J. Sejnowski. A selection model for motion processing in area MT of primates. The Journal of Neuroscience, 15(2):1195–1214, 1995.

[13] R. Raskar, K.-H. Tan, R. Feris, J. Yu, and M. Turk. Non-photorealistic camera: depth edge detection and stylized rendering using multi-flash imaging. ACM Trans. Graph. (SIGGRAPH), 23(3):679–688, 2004.

[14] X. Ren, C. Fowlkes, and J. Malik. Scale-invariant contour completion using conditional random fields. In Proceedings of International Conference on Computer Vision, pages 1214–1221, 2005.

[15] E. Saund. Logic and MRF circuitry for labeling occluding and thinline visual contours. In Advances in Neural Information Processing Systems 18, pages 1153–1160, 2006.

[16] A. Sha'ashua and S. Ullman. Structural saliency: the detection of globally salient structures using a locally connected network. In Proceedings of International Conference on Computer Vision, pages 321–327, 1988.

[17] J. Shi and C. Tomasi. Good features to track. In IEEE Conference on Computer Vision and Pattern Recognition, pages 593–600, 1994.

[18] Y. Weiss and E. H. Adelson. Perceptually organized EM: A framework for motion segmentation that combines information about form and motion. Technical Report 315, MIT Media Lab, 1995.