
55 cvpr-2013-Background Modeling Based on Bidirectional Analysis


Source: pdf

Author: Atsushi Shimada, Hajime Nagahara, Rin-ichiro Taniguchi

Abstract: Background modeling and subtraction is an essential task in video surveillance applications. Most traditional studies use information observed in past frames to create and update a background model. To adapt to background changes, the background model has been enhanced by introducing various forms of information, including spatial consistency and temporal tendency. In this paper, we propose a new framework that leverages information from a future period. Our proposed approach realizes a low-cost and highly accurate background model. The proposed framework is called bidirectional background modeling, and performs background subtraction based on bidirectional analysis; i.e., analysis from past to present and analysis from future to present. Although a result will be output with some delay because information is taken from a future period, our proposed approach improves the accuracy by about 30% if only a 33-millisecond delay is acceptable. Furthermore, the memory cost can be reduced by about 65% relative to typical background modeling.
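The core idea described in the abstract, classifying a pixel as foreground only when it disagrees with both a past-to-present model and a future-to-present model built over a short lookahead buffer, can be illustrated with a minimal sketch. This is not the authors' actual algorithm; it uses a simple running-average past model and a frame-mean future model, with hypothetical parameter names (`delay_frames`, `alpha`, `threshold`), purely to convey the bidirectional decision rule:

```python
import numpy as np
from collections import deque

class BidirectionalBackgroundModel:
    """Illustrative sketch of bidirectional analysis: a past-to-present
    background model combined with a future-to-present model computed
    over a short lookahead buffer (the source of the output delay)."""

    def __init__(self, delay_frames=1, alpha=0.05, threshold=30.0):
        self.delay = delay_frames    # lookahead; 1 frame ~ 33 ms at 30 fps
        self.alpha = alpha           # learning rate of the running average
        self.threshold = threshold   # per-pixel foreground threshold
        self.past_bg = None          # background estimated from past frames
        self.buffer = deque()        # frames awaiting classification

    def process(self, frame):
        """Feed one frame; returns (delayed_frame, foreground_mask) once
        the lookahead buffer is full, otherwise None."""
        frame = frame.astype(np.float64)
        self.buffer.append(frame)
        if self.past_bg is None:
            self.past_bg = frame.copy()
        if len(self.buffer) <= self.delay:
            return None              # still filling the lookahead window

        current = self.buffer.popleft()
        # "Future" background: mean of the frames observed after `current`.
        future_bg = np.mean(np.stack(list(self.buffer)), axis=0)
        # Bidirectional rule: a pixel is foreground only if it disagrees
        # with BOTH the past model and the future model.
        diff_past = np.abs(current - self.past_bg)
        diff_future = np.abs(current - future_bg)
        mask = (diff_past > self.threshold) & (diff_future > self.threshold)
        # Update the past model with the frame just classified.
        self.past_bg = (1 - self.alpha) * self.past_bg + self.alpha * current
        return current, mask
```

With `delay_frames=1`, each frame is classified one frame late: a transient change that appears in the current frame but matches neither the accumulated past nor the upcoming frames is flagged as foreground, while a change that persists into the future is absorbed as background.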


reference text

[1] O. Barnich and M. Van Droogenbroeck. ViBe: A powerful random technique to estimate the background in video sequences. In IEEE Int. Conf. on Acoustics, Speech and Signal Processing, pages 945–948, 2009.

[2] Y. Boykov and V. Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in computer vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26:1124–1137, 2004.

[3] S. Brutzer, B. Hoferlin, and G. Heidemann. Evaluation of background subtraction techniques for video surveillance. In Computer Vision and Pattern Recognition (CVPR), pages 1937–1944. IEEE, 2011.

[4] A. Elgammal, R. Duraiswami, D. Harwood, and L. Davis. Background and Foreground Modeling using Non-parametric Kernel Density Estimation for Visual Surveillance. Proceedings of the IEEE, 90:1151–1163, 2002.

[5] K. Kim, T. Chalidabhongse, D. Harwood, and L. Davis. Real-time foreground-background segmentation using codebook model. Real-Time Imaging, 11(3): 172–185, 2005.

[6] L. Li, W. Huang, I. Gu, and Q. Tian. Foreground object detection from videos containing complex background. In Int. Conf. on Multimedia, pages 2–10, 2003.

[7] L. Maddalena and A. Petrosino. A self-organizing approach to background subtraction for visual surveillance applications. IEEE Transactions on Image Processing, 17(7): 1168– 1177, 2008.

[8] H. Marko and P. Matti. A Texture-Based Method for Modeling the Background and Detecting Moving Objects. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(4):657–662, 2006.

[9] N. McFarlane and C. Schofield. Segmentation and tracking of piglets in images. Machine Vision and Applications, 8(3): 187–193, 1995.

[10] S. J. McKenna, S. Jabri, Z. Duric, A. Rosenfeld, and H. Wechsler. Tracking groups of people. Computer Vision and Image Understanding, 80(1):42–56, 2000.

[Figure 8: Precision and recall for the SABS dataset, comparing McFarlane, Stauffer, Oliver, McKenna, Li, Kim, Zivkovic, Maddalena, Barnich, and the proposed method on the scenarios (a) Basic, (b) Dynamic Background, (c) Bootstrapping, (d) Darkening, (e) Light Switch, and (f) Noisy Night. Table 4: Maximal F-measures for the SABS dataset per scenario and on average.]

[11] N. Oliver, B. Rosario, and A. Pentland. A bayesian computer vision system for modeling human interactions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):831, 2000.

[12] A. Shimada, D. Arita, and R. Taniguchi. Dynamic Control of Adaptive Mixture-of-Gaussians Background Model. In Proceedings of the IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2006.

[13] A. Shimada and R. Taniguchi. Hybrid Background Model using Spatial-Temporal LBP. In IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2009.

[14] C. Stauffer and W. Grimson. Adaptive background mixture models for real-time tracking. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2:246–252, 1999.

[15] T. Tanaka, A. Shimada, R. Taniguchi, T. Yamashita, and D. Arita. Towards robust object detection: integrated background modeling based on spatio-temporal features. In Asian Conference on Computer Vision (ACCV), 2009.

[16] K. Toyama, J. Krumm, B. Brumitt, and B. Meyers. Wallflower: Principles and Practice of Background Maintenance. In International Conference on Computer Vision (ICCV), pages 255–261, 1999.

[17] S. Yoshinaga, A. Shimada, H. Nagahara, and R. Taniguchi. Object Detection Using Local Difference Patterns. In Asian Conference on Computer Vision (ACCV), 2010.

[18] S. Zhang, H. Yao, and S. Liu. Dynamic background modeling and subtraction using spatio-temporal local binary patterns. In 15th IEEE International Conference on Image Processing, pages 1556 –1559, 2008.

[19] Z. Zivkovic and F. van der Heijden. Efficient adaptive density estimation per image pixel for the task of background subtraction. Pattern Recognition Letters, 27:773–780, 2006.