CVPR 2013, Paper 464
Author: Ran Margolin, Ayellet Tal, Lihi Zelnik-Manor
Abstract: What makes an object salient? Most previous work asserts that distinctness is the dominant factor; the various algorithms differ in how they compute it. Some focus on patterns, others on colors, and several add high-level cues and priors. We propose a simple yet powerful algorithm that integrates these three factors. Our key contribution is a novel and fast approach to computing pattern distinctness: we rely on the inner statistics of the patches in the image to identify unique patterns. We provide an extensive evaluation and show that our approach outperforms all state-of-the-art methods on the five most commonly used datasets.
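The pattern-distinctness idea in the abstract, using the inner statistics of an image's own patches to flag unusual ones, can be illustrated with a toy PCA-based score. This is a hedged sketch, not the authors' exact algorithm; the function name `pattern_distinctness` and the synthetic patch data are illustrative assumptions:

```python
import numpy as np

def pattern_distinctness(patches):
    """Toy sketch: score each patch by how far it lies from the
    average patch, measured in the patch distribution's own PCA
    coordinates. `patches` is an (n_patches, patch_dim) array."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    # Principal components of the patch distribution (its "inner statistics").
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    coords = centered @ vt.T          # patches expressed in PCA coordinates
    # L1 distance from the average patch: rare patterns score high.
    return np.abs(coords).sum(axis=1)

rng = np.random.default_rng(0)
common = rng.normal(0.0, 0.1, size=(99, 25))  # many near-identical patches
odd = np.full((1, 25), 3.0)                   # one unusual patch
scores = pattern_distinctness(np.vstack([common, odd]))
print(scores.argmax())  # index of the most distinct patch
```

On this synthetic input the single outlier patch (index 99) receives by far the highest score, since it is far from the mean along every principal direction, while the near-identical patches all score low.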
[1] R. Achanta, S. Hemami, F. Estrada, and S. Süsstrunk. Frequency-tuned salient region detection. In CVPR, pages 1597–1604, 2009. 2, 4, 5
[2] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk. SLIC superpixels. Technical Report 149300, EPFL, June 2010. 4, 5
[3] S. Alpert, M. Galun, R. Basri, and A. Brandt. Image segmentation by probabilistic bottom-up aggregation and cue integration. In CVPR, pages 1–8, June 2007. 4, 5, 7
[4] A. Borji, D. Sihite, and L. Itti. Salient object detection: A benchmark. In ECCV, pages 414–429, 2012. 1, 2, 5, 6, 8
[5] K. Chang, T. Liu, H. Chen, and S. Lai. Fusing generic objectness and visual saliency for salient object detection. In ICCV, pages 914–921, 2011. 1, 2, 3, 5, 6, 7, 8
[6] M.-M. Cheng, G.-X. Zhang, N. J. Mitra, X. Huang, and S.-M. Hu. Global contrast based salient region detection. In CVPR, pages 409–416, 2011. 1, 4, 5, 6, 7, 8
[7] T. Judd, F. Durand, and A. Torralba. A benchmark of computational models of saliency to predict human fixations. Technical report, MIT, 2012. 5
[8] S. Goferman, A. Tal, and L. Zelnik-Manor. Puzzle-like collage. Computer Graphics Forum, 29:459–468, 2010. 1
[9] S. Goferman, L. Zelnik-Manor, and A. Tal. Context-aware saliency detection. In CVPR, pages 2376–2383, 2010. 1, 2, 3, 5, 7, 8
[10] L. Itti. Automatic foveation for video compression using a neurobiological model of visual attention. IEEE Transactions on Image Processing, 13(10):1304–1318, 2004. 1
[11] H. Jiang, J. Wang, Z. Yuan, T. Liu, N. Zheng, and S. Li. Automatic salient object segmentation based on context and shape prior. In BMVC, page 7, 2012. 1, 4, 5, 7, 8
[12] T. Judd, K. Ehinger, F. Durand, and A. Torralba. Learning to predict where humans look. In ICCV, pages 2106–2113, 2009. 5, 6
[13] C. Kanan and G. Cottrell. Robust classification of objects, faces, and flowers using natural image statistics. In CVPR, pages 2472–2479, 2010. 1
[14] T. Liu, S. Slotnick, J. Serences, and S. Yantis. Cortical mechanisms of feature-based attentional control. Cerebral Cortex, 13(12):1334–1343, 2003. 5
[15] T. Liu, Z. Yuan, J. Sun, J. Wang, N. Zheng, X. Tang, and H. Shum. Learning to detect a salient object. PAMI, pages 353–367, 2010. 4, 5
[16] Y. Ma, X. Hua, L. Lu, and H. Zhang. A generic framework of user attention model and its application in video summarization. IEEE Transactions on Multimedia, 7(5):907–919, 2005. 1
[17] V. Movahedi and J. Elder. Design and perceptual validation of performance measures for salient object segmentation. In CVPRW, pages 49–56, 2010. 5
[18] M. Muja and D. G. Lowe. Fast approximate nearest neighbors with automatic algorithm configuration. In VISSAPP, pages 331–340. INSTICC Press, 2009. 4
[19] H. Seo and P. Milanfar. Static and space-time visual saliency detection by self-resemblance. Journal of Vision, 9(12), 2009. 2, 3
[20] P. Soille. Morphological Image Analysis: Principles and Applications. Springer-Verlag New York, Inc., 2003. 3

Figure 12. Qualitative comparison: salient object detection results on ten example images, two from each dataset (ASD, MSRA, SED1, SED2, SOD) in the benchmark of [4]. Columns show (a) Input, (b) SVO [5], (c) RC [6], (d) CNTX [9], (e) CBS [11], (f) Ours. Our results are consistently more accurate than those of the other methods. (Image panels not reproduced.)