
Motion-Aware KNN Laplacian for Video Matting (ICCV 2013, paper 275)


Source: pdf

Author: Dingzeyu Li, Qifeng Chen, Chi-Keung Tang

Abstract: This paper demonstrates how the nonlocal principle benefits video matting via the KNN Laplacian, which comes with a straightforward implementation using motion-aware K nearest neighbors. In hindsight, the fundamental problem to solve in video matting is to produce spatiotemporally coherent clusters of moving foreground pixels. The motion-aware KNN Laplacian is effective in addressing this fundamental problem, as demonstrated with sparse user markups, typically on only one frame, in a variety of challenging examples featuring ambiguous foreground and background colors, changing topologies with disocclusion, significant illumination changes, fast motion, and motion blur. Our Laplacian can be plugged into existing Laplacian-based systems, immediately benefiting them with improved clustering of moving foreground pixels.
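To make the construction concrete, below is a minimal sketch of how such a motion-aware KNN Laplacian could be assembled. It is an illustration, not the authors' code: the Python/NumPy/SciPy/scikit-learn stack, the function name motion_aware_knn_laplacian, the 7-dimensional per-pixel feature (color, normalized position, optical flow) and its weights, and the 1 - d/d_max affinity kernel (borrowed from KNN matting [7]) are all assumptions; the paper's exact feature design and nearest-neighbor machinery may differ (VLFeat [26] is among its references).

```python
import numpy as np
from scipy.sparse import coo_matrix, diags
from sklearn.neighbors import NearestNeighbors

def motion_aware_knn_laplacian(frames, flows, k=10,
                               w_color=1.0, w_pos=1.0, w_motion=1.0):
    """Sparse KNN Laplacian over all pixels of a short clip.

    frames: list of (H, W, 3) float RGB arrays in [0, 1].
    flows:  list of (H, W, 2) optical-flow fields, one per frame
            (e.g. from an estimator such as [22]); zeros for the last frame.
    """
    H, W, _ = frames[0].shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    pos = np.stack([xs / W, ys / H], axis=-1).reshape(-1, 2)

    # Per-pixel feature (r, g, b, x, y, u, v), with position and
    # motion channels scaled by user-chosen weights.
    feats = [np.hstack([w_color * img.reshape(-1, 3),
                        w_pos * pos,
                        w_motion * flow.reshape(-1, 2)])
             for img, flow in zip(frames, flows)]
    X = np.vstack(feats)                     # (T*H*W, 7)
    n = X.shape[0]

    # K nearest neighbors searched across *all* frames at once, so
    # spatially distant pixels with similar color and motion can link up.
    dist, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    dist, idx = dist[:, 1:], idx[:, 1:]      # drop self matches

    # Affinity 1 - d/d_max in [0, 1], in the spirit of KNN matting [7].
    aff = 1.0 - dist / (dist.max() + 1e-12)
    rows = np.repeat(np.arange(n), k)
    A = coo_matrix((aff.ravel(), (rows, idx.ravel())), shape=(n, n))
    A = 0.5 * (A + A.T)                      # symmetrize
    return (diags(np.asarray(A.sum(axis=1)).ravel()) - A).tocsr()  # L = D - A
```

Given user strokes or a trimap on a single frame, the alpha matte for the whole clip can then be obtained by the standard closed-form solve that minimizes the quadratic form of this Laplacian subject to the user constraints, as in [7, 16].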


Reference text

[1] N. Apostoloff and A. Fitzgibbon. Bayesian video matting using learnt image priors. In CVPR, pages I:407–414, 2004.

[2] X. Bai and G. Sapiro. A geodesic framework for fast interactive image and video segmentation and matting. In ICCV, 2007.

[3] X. Bai, J. Wang, and D. Simons. Towards temporally-coherent video matting. In MIRAGE, volume 6930, pages 63–74. Springer, 2011.

Figure 11 (panels: Geodesic α, KNN α, Geodesic αI zoom-in, KNN αI zoom-in). Comparison with geodesic matting [2] on talk using sparse strokes. Only strokes on the first frame are given, and all the αs are computed using our closed-form solution. While the α results look similar, the αI composites show that our method extracts a better foreground.

Figure 12 (panels: Input & Trimap; Frames 15, 20, 31, 45, 51). Comparison with video snapcut [4] on walk. Our results (bottom) are robust to stark illumination changes given only a single input trimap (Frame 12); the shading on the walking man changes constantly. In video snapcut (top), the user must supply a number of additional strokes to achieve a comparable segmentation, for example carefully drawn control points on Frames 15 and 51 as well as blue strokes on the intermediate frames.

Figure 13 (panels: Frames 7, 29, 37; Frames 15, 18). The motion-aware KNN Laplacian degrades gracefully under fast and complex motion in front of a background with ambiguous colors (left, jurassic), and in the presence of motion blur (right, waving).

[4] X. Bai, J. Wang, D. Simons, and G. Sapiro. Video snapcut: robust video object cutout using localized classifiers. ACM Trans. Graph., 28(3), 2009.

[5] C. Barnes, E. Shechtman, D. B. Goldman, and A. Finkelstein. The generalized PatchMatch correspondence algorithm. In ECCV, pages 29–43, 2010.

[6] A. Buades, B. Coll, and J.-M. Morel. Nonlocal image and movie denoising. IJCV, 76(2):123–139, 2008.

[7] Q. Chen, D. Li, and C.-K. Tang. KNN matting. In CVPR, pages 869–876, 2012.

[8] I. Choi, M. Lee, and Y.-W. Tai. Video matting using multi-frame nonlocal matting Laplacian. In ECCV, pages 540–553, 2012.

[9] Y. Chuang, B. Curless, D. H. Salesin, and R. Szeliski. A Bayesian approach to digital matting. In CVPR, pages II:264–271, 2001.

[10] Y.-Y. Chuang, A. Agarwala, B. Curless, D. H. Salesin, and R. Szeliski. Video matting of complex scenes. ACM Trans. Graph., 21(3):243–248, July 2002.

[11] D. Corrigan, S. Robinson, and A. Kokaram. Video matting using motion extended grabcut. In 5th European Conference on Visual Media Production (CVMP 2008), pages 1–9, Nov. 2008.

[12] M. Eisemann, J. Wolf, and M. Magnor. Spectral video matting. In Vision, Modeling and Visualization, 2009.

[13] N. Joshi, W. Matusik, and S. Avidan. Natural video matting using camera arrays. ACM Trans. Graph., 25(3):779–786, July 2006.

[14] P. Lee and Y. Wu. Nonlocal matting. In CVPR, pages 2193–2200, 2011.

[15] S.-Y. Lee, J.-C. Yoon, and I.-K. Lee. Temporally coherent video matting. Graphical Models, 72(3):25–33, 2010.

[16] A. Levin, D. Lischinski, and Y. Weiss. A closed-form solution to natural image matting. IEEE TPAMI, 30(2):228–242, 2008.

[17] A. Levin, A. Rav-Acha, and D. Lischinski. Spectral matting. IEEE TPAMI, 30:1699–1712, October 2008.

[18] Y. Li, J. Sun, and H.-Y. Shum. Video object cut and paste. ACM Trans. Graph., 24(3):595–600, July 2005.

[19] M. McGuire, W. Matusik, H. Pfister, J. F. Hughes, and F. Durand. Defocus video matting. ACM Trans. Graph., 24(3):567–576, 2005.

[20] P. Ochs and T. Brox. Higher order motion models and spectral clustering. In CVPR, pages 614–621, 2012.

[21] B. L. Price, B. S. Morse, and S. Cohen. Livecut: Learning-based interactive video segmentation by evaluation of multiple propagated cues. In ICCV, pages 779–786, 2009.

[22] D. Sun, S. Roth, and M. J. Black. Secrets of optical flow estimation and their principles. In CVPR, pages 2432–2439, 2010.

[23] J. Sun, J. Jia, C.-K. Tang, and H.-Y. Shum. Poisson matting. ACM Trans. Graph., 23(3):315–321, August 2004.

[24] C. Tomasi and R. Manduchi. Bilateral filtering for gray and color images. In ICCV, pages 839–846, 1998.

[25] A. Treisman. Perceptual grouping and attention in visual search for features and for objects. Journal of Experimental Psychology: Human Perception and Performance, 8(2):194–214, Apr. 1982.

[26] A. Vedaldi and B. Fulkerson. VLFeat: An open and portable library of computer vision algorithms. http://www.vlfeat.org/, 2008.

[27] J. Wang, P. Bhat, R. A. Colburn, M. Agrawala, and M. F. Cohen. Interactive video cutout. ACM Trans. Graph., 24(3):585–594, 2005.

[28] J. Wang and M. F. Cohen. Optimized color sampling for robust matting. In CVPR, 2007.

[29] J. Wang and M. F. Cohen. Image and Video Matting. Now Publishers Inc., Hanover, MA, USA, 2008.

[30] S. K. Yeung, C.-K. Tang, M. S. Brown, and S. B. Kang. Matting and compositing of transparent and refractive objects. ACM Trans. Graph., 30(1):2, 2011.