cvpr cvpr2013 cvpr2013-376 knowledge-graph by maker-knowledge-mining

376 cvpr-2013-Salient Object Detection: A Discriminative Regional Feature Integration Approach


Source: pdf

Author: Huaizu Jiang, Jingdong Wang, Zejian Yuan, Yang Wu, Nanning Zheng, Shipeng Li

Abstract: Salient object detection has been attracting a lot of interest, and recently various heuristic computational models have been designed. In this paper, we regard saliency map computation as a regression problem. Our method, which is based on multi-level image segmentation, uses the supervised learning approach to map the regional feature vector to a saliency score, and finally fuses the saliency scores across multiple levels, yielding the saliency map. The contributions are two-fold. One is that we show our approach, which integrates the regional contrast, regional property and regional backgroundness descriptors together to form the master saliency map, is able to produce superior saliency maps to existing algorithms, most of which combine saliency maps heuristically computed from different types of features. The other is that we introduce a new regional feature vector, backgroundness, to characterize the background, which can be regarded as a counterpart of the objectness descriptor [2]. The performance evaluation on several popular benchmark data sets validates that our approach outperforms existing state-of-the-art methods.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 In this paper, we regard saliency map computation as a regression problem. [sent-4, score-0.777]

2 Our method, which is based on multi-level image segmentation, uses the supervised learning approach to map the regional feature vector to a saliency score, and finally fuses the saliency scores across multiple levels, yielding the saliency map. [sent-5, score-2.65]

3 The other is that we introduce a new regional feature vector, backgroundness, to characterize the background, which can be regarded as a counterpart of the objectness descriptor [2]. [sent-8, score-0.621]

4 Introduction: Visual saliency has long been a fundamental problem in neuroscience, psychology, neural systems, and computer vision. [sent-11, score-0.718]

5 It was originally the task of predicting eye fixations on images, and has recently been extended to identifying a region containing the salient object, which is the focus of this paper. [sent-12, score-0.374]

6 There are various applications for salient object detection, including object detection and recognition [25, 46], image compression [21], image cropping [35], photo collage [17, 47], and dominant color detection [51, 52]. [sent-13, score-0.437]

7 The study of human visual systems suggests that saliency is related to the uniqueness, rarity, and surprise of a scene, characterized by primitive features like color, texture, and shape. [sent-14, score-0.762]

8 Recently, a lot of effort has been made to design various heuristic algorithms to compute saliency [1, 6, 11, 15, 18, 27, 31, 34, 38]. [sent-15, score-0.745]

9 In this paper, we regard saliency estimation as a regression problem, and learn a regressor that directly maps the regional feature vector to a saliency score. [sent-16, score-2.083]

10 Second, we conduct a region saliency computation step with a random forest regressor that maps the regional features to a saliency score. [sent-19, score-2.219]
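The region saliency computation step can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a trained scikit-learn regressor f, uses Felzenszwalb graph-based segmentation from scikit-image as a stand-in for the paper's multi-level segmenter, and relies on a hypothetical regional_features helper that would concatenate the contrast, property, and backgroundness descriptors described in Section 3.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def level_saliency(image, f, scale):
    """Compute one level's saliency map: segment the image, regress a
    saliency score for each region, and write it into the region's pixels."""
    labels = felzenszwalb(image, scale=scale)     # one segmentation level
    A = np.zeros(labels.shape, dtype=float)
    for r in np.unique(labels):
        mask = labels == r
        x = regional_features(image, mask)        # hypothetical descriptor helper
        A[mask] = f.predict(x.reshape(1, -1))[0]  # region score -> its pixels
    return A
```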

11 Last, a saliency map is computed by fusing the saliency maps across multiple levels of segmentations. [sent-20, score-1.533]

12 The key contributions lie in the second step, region saliency computation. [sent-21, score-0.796]

13 This is a principled approach in image classification [19], but one rarely studied in salient object detection. [sent-23, score-0.323]

14 Related work: The following gives a review of salient object detection (segmentation) algorithms related to our approach. [sent-28, score-0.354]

15 A comprehensive survey of salient object detection can be found in [9]. [sent-29, score-0.372]

16 The review on visual attention modeling [7] also includes some analysis on salient object detection. [sent-30, score-0.381]

17 The basis of most saliency detection algorithms dates back to the feature integration theory [43], which posits that different kinds of attention are responsible for binding various features into consciously experienced wholes. [sent-31, score-0.882]

18 It represents the input image in terms of color, intensity, and orientation channels, and computes three conspicuity (saliency) maps using center-surround differences, which are combined to form the final master saliency map. [sent-33, score-0.802]

19 Recently, considerable research effort has been devoted to designing saliency features that characterize salient objects or regions. [sent-34, score-1.065]

20 The center-surround difference framework is also investigated to compute the saliency from region-based image representation. [sent-39, score-0.718]

21 In [23], the difference between the color histogram of a region and those of its immediately neighboring regions is used to evaluate the saliency score. [sent-40, score-0.884]

22 The global-contrast-based approach [11] computes the saliency map by comparing each region with all others, aiming to directly capture global uniqueness. [sent-41, score-0.852]

23 Based on the regional contrast, element color uniqueness and spatial distribution are introduced to evaluate the saliency scores of regions [38]. [sent-42, score-1.25]

24 The saliency map is generated by propagating the saliency scores of regions to the pixels. [sent-43, score-1.514]

25 Many other models are also proposed for saliency computation. [sent-44, score-0.718]

26 The center prior, i.e., that the salient object usually lies in the center of an image, is investigated in [23, 50]. [sent-47, score-0.323]

27 Object priors, such as the connectivity prior [45], concavity context [34], auto-context cue [48], and the background prior [53], are also studied for saliency computation. [sent-48, score-0.771]

28 Example-based approaches, which search for images similar to the input, have been developed for salient object detection [35, 49]. [sent-49, score-0.354]

29 A graphical model is proposed to fuse generic objectness and visual saliency together to detect objects [10]. [sent-50, score-0.84]

30 A low rank matrix recovery scheme is proposed for salient object detection [41]. [sent-51, score-0.377]

31 Stereopsis is leveraged for saliency analysis [37]. [sent-53, score-0.743]

32 Besides, spectral analysis in the frequency domain is used to detect salient regions [1, 20]. Additionally, several works directly check whether an image window contains an object. [sent-54, score-0.372]

33 A random forest regression approach is adopted to directly regress the object rectangle from the saliency map [50]. [sent-58, score-0.828]

34 Eye fixation prediction, another visual saliency research direction, also attracts a lot of interest [7, 24]. [sent-59, score-0.799]

35 Context-aware saliency detection [18] aims to detect the image regions that represent the scene. [sent-63, score-0.788]

36 In terms of saliency features, we compute a contrast vector for each region instead of the single contrast value used in existing algorithms. [sent-65, score-0.792]

37 In contrast to existing learning algorithms that perform saliency integration by combining saliency maps computed from different types of features, e.g. [sent-68, score-1.561]

38 [2, 10, 31], our approach learns to directly integrate feature vectors to compute the saliency map. [sent-70, score-0.744]

39 The closely related approach [26], which also learns to integrate saliency features, is pixel-based, while our approach is region-based, performing multi-level estimation and capturing non-local contrast. [sent-71, score-0.742]

40 Moreover, we introduce a novel regional feature vector to characterize the background. [sent-72, score-0.481]

41 Another work [36] touches on discriminative feature integration only lightly, without a deep investigation, and considers only the regional property descriptor. [sent-73, score-0.587]

42 The recent learning approach [33] aims to predict eye fixation, while our approach is for salient object detection and moreover, we solve the problem by introducing and exploring multi-level regional descriptors. [sent-74, score-0.81]

43 The framework of our proposed discriminative regional feature integration (DRFI) approach. [sent-77, score-0.526]

44 Based on multi-level segmentation, it integrates three types of regional features in a discriminative strategy for saliency regression. [sent-78, score-1.252]

45 Our algorithm computes the saliency score for each region. [sent-91, score-0.736]

46 However, our algorithm essentially takes such relations into consideration because we conduct region saliency computation over multi-level segmentations. [sent-93, score-0.814]

47 The spatial consistency of saliency scores for neighboring regions is imposed since the neighboring regions in the finer-level segmentation may form a single region in the coarser level. [sent-94, score-0.976]

48 Our approach represents each region using three types of features: regional contrast, regional property, and regional backgroundness, which will be described in Section 3. [sent-95, score-1.371]

49 Then the feature x is passed into a random forest regressor f, yielding a saliency score. [sent-97, score-0.933]

50 The random forest regressor is learnt from the regions of the training images, and integrates the features together in a discriminative strategy. [sent-98, score-0.332]

51 After conducting region saliency computation, each region R_n^m ∈ S^m has a saliency value a_n^m. [sent-101, score-1.592]

52 For each level, we assign the saliency value of each region to its contained pixels. [sent-102, score-0.796]

53 As a result, we generate M saliency maps {A^1, A^2, · · · , A^M} and then fuse them, A = g(A^1, · · · , A^M), to get the final saliency map A, where g is a combinator function introduced in Section 4. [sent-103, score-1.575]
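A minimal sketch of this fusion step is given below; a (possibly weighted) average stands in for the combinator g, whose actual form is introduced in Section 4 and is not reproduced here.

```python
import numpy as np

def fuse_levels(level_maps, weights=None):
    """Fuse per-level saliency maps A^1..A^M into A = g(A^1, ..., A^M);
    a weighted average is used as a stand-in for the combinator g."""
    A = np.average(np.stack(level_maps), axis=0, weights=weights)
    # Normalize to [0, 1] so the fused map can be thresholded consistently.
    return (A - A.min()) / (A.max() - A.min() + 1e-12)
```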

54 Regional contrast descriptor: A region is likely to be thought salient if it is different from its surrounding regions. [sent-107, score-0.474]

55 Instead of computing the differences of region features like color and texture and then directly combining them into a saliency score, our approach computes a contrast descriptor, which is fed into a regressor to automatically calculate the saliency score. [sent-110, score-1.78]

56 The regional contrast descriptor of R is computed as the differences diff(v_R, v_N) between its features v_R and the neighborhood features v_N. [sent-114, score-0.576]

57 The details of the regional contrast descriptor are given in Table 1. [sent-117, score-0.531]

58 Color and texture features describe the visual characteristics of a region and are used to compute the regional descriptors, with the element-wise difference defined as d(x1, x2) = (|x11 − x21|, · · · , |x1d − x2d|), where d is the number of elements in the vectors x1 and x2. [sent-119, score-0.573]
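The element-wise difference defined above maps directly to code; a minimal numpy sketch is given below, with the computation of the feature vectors themselves (v_R, v_N) omitted.

```python
import numpy as np

def diff(x1, x2):
    """Element-wise absolute difference d(x1, x2) = (|x11-x21|, ..., |x1d-x2d|)."""
    return np.abs(np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float))

# Regional contrast descriptor: compare a region's features v_R against its
# neighborhood features v_N, i.e. contrast = diff(v_R, v_N).
```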

59 The last two columns denote the symbols for regional contrast and backgroundness descriptors. [sent-122, score-0.692]

60 (In the definition column, S corresponds to N for the regional contrast descriptor and B for the regional backgroundness descriptor, respectively.) [sent-123, score-1.186]

61 Regional property descriptor: In addition to regional contrast, we consider the generic properties of a region, including appearance and geometric features. [sent-127, score-0.524]

62 The appearance features attempt to describe the distribution of colors and textures in a region, which can characterize the common properties of the salient object and the background. [sent-129, score-0.39]

63 The geometric features include the size and position of a region, which may be useful for describing the spatial distribution of the salient object and the background. [sent-131, score-0.425]

64 For instance, the salient object tends to be placed near the center of the image while the background usually scatters over the entire image. [sent-132, score-0.354]

65 In summary, we obtain a 34-dimensional regional property descriptor. [sent-133, score-0.461]

66 Image regions with similar appearances might belong to the background in one image but to the salient object in another. [sent-144, score-0.393]

67 It is not enough to merely use the property features to check whether a region belongs to the background or the salient object. [sent-145, score-0.459]

68 Therefore, we extract the pseudo-background region and compute the backgroundness descriptor for each region with the pseudo-background region as a reference. [sent-146, score-0.521]

69 The backgroundness feature of the region R is then computed as the differences diff(v_R, v_B) between its features v_R and the features v_B of the pseudo-background region, resulting in a 26-dimensional feature vector. [sent-149, score-0.423]
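A rough sketch of the backgroundness computation is given below. It assumes, as is common in border-prior methods, that the pseudo-background is a band of pixels along the image border and that features are averaged over that band; the paper's exact pseudo-background extraction and feature set may differ.

```python
import numpy as np

def pseudo_background_features(feature_map, border=15):
    """Average per-pixel features over a border band to get the
    pseudo-background descriptor v_B; feature_map has shape H x W x d."""
    h, w, _ = feature_map.shape
    band = np.zeros((h, w), dtype=bool)
    band[:border, :] = True   # top band
    band[-border:, :] = True  # bottom band
    band[:, :border] = True   # left band
    band[:, -border:] = True  # right band
    return feature_map[band].mean(axis=0)

def backgroundness(v_R, v_B):
    """Backgroundness descriptor as element-wise differences diff(v_R, v_B)."""
    return np.abs(v_R - v_B)
```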

70 We aim to learn the regional saliency estimator from a set of training examples. [sent-153, score-1.149]

71 The training examples include a set of confident regions R = {R1, R2, · · · , RQ} and the corresponding saliency scores A = {a1, a2, · · · , aQ}, which are collected from the multi-level segmentation over a set of images with ground-truth annotation of the salient objects. [sent-154, score-1.155]

72 A region is considered to be confident if the number of pixels belonging to the salient object or the background exceeds 80% of the number of pixels in the region, and its saliency score is set to 1 or 0 accordingly. [sent-155, score-1.193]
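The 80% rule for collecting confident training regions can be expressed as a small filter; region_mask and the ground-truth mask gt are assumed to be aligned boolean arrays of the same shape.

```python
import numpy as np

def confident_label(region_mask, gt, thresh=0.8):
    """Return saliency label 1.0/0.0 if at least `thresh` of the region's
    pixels lie on the salient object / background, else None (discarded)."""
    n = region_mask.sum()
    salient_frac = gt[region_mask].sum() / max(n, 1)
    if salient_frac >= thresh:
        return 1.0
    if salient_frac <= 1.0 - thresh:
        return 0.0
    return None  # ambiguous region: not used as a training example
```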

73 As aforementioned, each region is described by a feature vector x, composed of the regional contrast, regional property, and regional backgroundness descriptors. [sent-157, score-1.621]

74 We learn a random forest regressor f from the training data X = {x1, x2, · · · , xQ} and the saliency scores A = {a1, a2, · · · , aQ}. [sent-158, score-0.927]
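Fitting f is then a standard random forest regression; a scikit-learn sketch with placeholder data is given below, where the tree count, leaf size, and descriptor dimensionality are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholders for X (Q x d regional descriptors) and A (Q saliency scores
# in {0, 1}) collected from confident regions as described above.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 86))                  # d = 86 is an assumption
A = rng.integers(0, 2, size=1000).astype(float)

f = RandomForestRegressor(n_estimators=200, min_samples_leaf=4, n_jobs=-1)
f.fit(X, A)

# Feature importances yield the kind of ranking discussed around Figure 3.
top20 = np.argsort(f.feature_importances_)[::-1][:20]
```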

75 Learning a saliency regressor can automatically combine the features and discover the most discriminative ones. [sent-159, score-0.908]

76 Given the multi-level saliency maps {A^1, A^2, · · · , A^M} for an image, the final saliency map is obtained by the fusion described above. [sent-162, score-0.761]

77 This data set [3] contains two subsets: SED1 that has 100 images containing only one salient object and SED2 that has 100 images containing two salient objects. [sent-184, score-0.619]

78 This data set is a collection of salient object boundaries based on the Berkeley segmentation data set. [sent-188, score-0.357]

79 Precision corresponds to the percentage of detected salient pixels that are correctly assigned, and recall is the fraction of ground-truth salient pixels that are detected. [sent-215, score-0.915]

80 The PR curve is created by varying the saliency threshold that determines whether a pixel belongs to the salient object. [sent-216, score-1.034]
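The PR curve computation reduces to a threshold sweep; a minimal per-image version against a binary ground-truth mask is sketched below (benchmark protocols typically average these curves over a whole data set).

```python
import numpy as np

def pr_curve(saliency, gt, n_thresholds=256):
    """Precision/recall of a saliency map in [0, 1] against a boolean
    ground-truth mask, swept over n_thresholds saliency thresholds."""
    precisions, recalls = [], []
    for t in np.linspace(0.0, 1.0, n_thresholds):
        detected = saliency >= t
        tp = np.logical_and(detected, gt).sum()
        precisions.append(tp / max(detected.sum(), 1))  # correct among detected
        recalls.append(tp / max(gt.sum(), 1))           # detected among ground truth
    return np.array(precisions), np.array(recalls)
```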

81 One can see in Figure 2(a) that the AUC score of the saliency maps increases when more levels of segmentations are adopted. [sent-223, score-0.844]

82 As shown in Figure 2(b), the performance of our approach improves as more trees are used in the random forest saliency regressor. [sent-241, score-0.931]

83 Figure 3 shows the rank of the 20 most important regional features. [sent-251, score-0.454]

84 The feature rank indicates that the backgroundness descriptor is the most critical one in our feature set (occupying 10 of the top 20 features). [sent-252, score-0.362]

85 The reason might be that the regional contrast descriptor is local and thus less important compared with the regional backgroundness descriptor, which is in some sense non-local. [sent-254, score-0.818]

86 In the property descriptor, the geometric features are ranked higher, as salient objects tend to lie near the center in most images. [sent-255, score-0.35]

87 For example, in the first two rows, other approaches may be distracted by the textures on the background while our method almost successfully highlights the whole salient object. [sent-289, score-0.346]

88 Quantitative comparison of saliency maps produced by different approaches on different data sets. [sent-345, score-0.761]

89 Conclusions: In this paper, we address the salient object detection problem using a discriminative regional feature integration approach. [sent-352, score-0.88]

90 One is that we learn to integrate many regional descriptors to compute the saliency scores, rather than heuristically computing saliency maps from different types of features and combining them to get the saliency map. [sent-354, score-2.755]

91 Boosting bottom-up and top-down visual features for saliency estimation. [sent-393, score-0.762]

92 Exploiting local and global patch rarities for saliency detection. [sent-398, score-0.749]

93 Fusing generic objectness and visual saliency for salient object detection. [sent-432, score-1.121]

94 Automatic salient object segmentation based on context and shape prior. [sent-524, score-0.357]

95 A benchmark of computational models of saliency to predict human fixations. [sent-530, score-0.718]

96 Center-surround divergence of feature statistics for salient object detection. [sent-548, score-0.371]

97 A framework for visual saliency detection with applications to image thumbnailing. [sent-612, score-0.769]

98 Saliency filters: Contrast based filtering for salient region detection. [sent-631, score-0.374]

99 A unified approach to salient object detection via low rank matrix recovery. [sent-648, score-0.377]

100 Top-down visual saliency via joint CRF and dictionary learning. [sent-740, score-0.738]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('saliency', 0.718), ('regional', 0.431), ('salient', 0.296), ('backgroundness', 0.224), ('regressor', 0.125), ('drfi', 0.086), ('region', 0.078), ('pages', 0.069), ('forest', 0.064), ('descriptor', 0.063), ('objectness', 0.06), ('heuristically', 0.059), ('sm', 0.055), ('combinator', 0.052), ('borji', 0.049), ('segmentations', 0.046), ('integration', 0.045), ('auc', 0.045), ('maps', 0.043), ('icoseg', 0.04), ('roc', 0.039), ('regions', 0.039), ('attention', 0.038), ('contrast', 0.037), ('cbsal', 0.034), ('wmam', 0.034), ('fixation', 0.034), ('segmentation', 0.034), ('vr', 0.032), ('background', 0.031), ('detection', 0.031), ('lrk', 0.031), ('curvedness', 0.031), ('isocentric', 0.031), ('lang', 0.031), ('rarities', 0.031), ('touches', 0.031), ('property', 0.03), ('adaptability', 0.028), ('zheng', 0.028), ('object', 0.027), ('pr', 0.027), ('lot', 0.027), ('aq', 0.027), ('svo', 0.027), ('diff', 0.027), ('feature', 0.026), ('stereopsis', 0.025), ('sihite', 0.025), ('eye', 0.025), ('border', 0.025), ('confident', 0.025), ('color', 0.025), ('fuse', 0.025), ('neighboring', 0.024), ('discriminative', 0.024), ('features', 0.024), ('trees', 0.024), ('characterize', 0.024), ('wang', 0.024), ('sod', 0.024), ('master', 0.024), ('yuan', 0.023), ('annotation', 0.023), ('fusion', 0.023), ('goferman', 0.023), ('rank', 0.023), ('concavity', 0.022), ('multitask', 0.022), ('lu', 0.022), ('divergence', 0.022), ('regard', 0.022), ('discriminant', 0.021), ('vb', 0.021), ('differences', 0.021), ('integrates', 0.021), ('texture', 0.02), ('scores', 0.02), ('visual', 0.02), ('curve', 0.02), ('textures', 0.019), ('map', 0.019), ('window', 0.019), ('levels', 0.019), ('zeng', 0.019), ('spectral', 0.018), ('quantitative', 0.018), ('psychology', 0.018), ('score', 0.018), ('computation', 0.018), ('survey', 0.018), ('learnt', 0.018), ('vn', 0.018), ('uniqueness', 0.017), ('decomposes', 0.017), ('together', 0.017), ('combine', 0.017), ('counterpart', 0.017), ('gao', 0.016), ('fusing', 0.016)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999982 376 cvpr-2013-Salient Object Detection: A Discriminative Regional Feature Integration Approach

Author: Huaizu Jiang, Jingdong Wang, Zejian Yuan, Yang Wu, Nanning Zheng, Shipeng Li

Abstract: Salient object detection has been attracting a lot of interest, and recently various heuristic computational models have been designed. In this paper, we regard saliency map computation as a regression problem. Our method, which is based on multi-level image segmentation, uses the supervised learning approach to map the regional feature vector to a saliency score, and finally fuses the saliency scores across multiple levels, yielding the saliency map. The contributions are two-fold. One is that we show our approach, which integrates the regional contrast, regional property and regional backgroundness descriptors together to form the master saliency map, is able to produce superior saliency maps to existing algorithms, most of which combine saliency maps heuristically computed from different types of features. The other is that we introduce a new regional feature vector, backgroundness, to characterize the background, which can be regarded as a counterpart of the objectness descriptor [2]. The performance evaluation on several popular benchmark data sets validates that our approach outperforms existing state-of-the-art methods.

2 0.66030276 375 cvpr-2013-Saliency Detection via Graph-Based Manifold Ranking

Author: Chuan Yang, Lihe Zhang, Huchuan Lu, Xiang Ruan, Ming-Hsuan Yang

Abstract: Most existing bottom-up methods measure the foreground saliency of a pixel or region based on its contrast within a local context or the entire image, whereas a few methods focus on segmenting out background regions and thereby salient objects. Instead of considering the contrast between the salient objects and their surrounding regions, we consider both foreground and background cues in a different way. We rank the similarity of the image elements (pixels or regions) with foreground cues or background cues via graph-based manifold ranking. The saliency of the image elements is defined based on their relevances to the given seeds or queries. We represent the image as a close-loop graph with superpixels as nodes. These nodes are ranked based on their similarity to background and foreground queries, using affinity matrices. Saliency detection is carried out in a two-stage scheme to extract background regions and foreground salient objects efficiently. Experimental results on two large benchmark databases demonstrate that the proposed method performs well against state-of-the-art methods in terms of accuracy and speed. We also create a more difficult benchmark database containing 5,172 images to test the proposed saliency model and make this database publicly available with this paper for further studies in the saliency field.

3 0.63533294 202 cvpr-2013-Hierarchical Saliency Detection

Author: Qiong Yan, Li Xu, Jianping Shi, Jiaya Jia

Abstract: When dealing with objects with complex structures, saliency detection confronts a critical problem, namely that detection accuracy could be adversely affected if salient foreground or background in an image contains small-scale high-contrast patterns. This issue is common in natural images and forms a fundamental challenge for prior methods. We tackle it from a scale point of view and propose a multi-layer approach to analyze saliency cues. The final saliency map is produced in a hierarchical model. Different from varying patch sizes or downsizing images, our scale-based region handling finds saliency values optimally in a tree model. Our approach improves saliency detection on many images that cannot be handled well traditionally. A new dataset is also constructed.

4 0.61635298 322 cvpr-2013-PISA: Pixelwise Image Saliency by Aggregating Complementary Appearance Contrast Measures with Spatial Priors

Author: Keyang Shi, Keze Wang, Jiangbo Lu, Liang Lin

Abstract: Driven by recent vision and graphics applications such as image segmentation and object recognition, assigning pixel-accurate saliency values to uniformly highlight foreground objects becomes increasingly critical. Often, such fine-grained saliency detection is also desired to have a fast runtime. Motivated by these, we propose a generic and fast computational framework called PISA: Pixelwise Image Saliency Aggregating complementary saliency cues based on color and structure contrasts with spatial priors holistically. Overcoming the limitations of previous methods, which often use homogeneous superpixel-based and color-contrast-only treatment, our PISA approach directly performs saliency modeling for each individual pixel and makes use of densely overlapping, feature-adaptive observations for saliency measure computation. We further impose a spatial prior term on each of the two contrast measures, which constrains pixels rendered salient to be compact and also centered in the image domain. By fusing complementary contrast measures in such a pixelwise adaptive manner, the detection effectiveness is significantly boosted. Without requiring reliable region segmentation or post-relaxation, PISA exploits an efficient edge-aware image representation and filtering technique and produces spatially coherent yet detail-preserving saliency maps. Extensive experiments on three public datasets demonstrate PISA's superior detection accuracy and competitive runtime speed over state-of-the-art approaches.

5 0.58572358 374 cvpr-2013-Saliency Aggregation: A Data-Driven Approach

Author: Long Mai, Yuzhen Niu, Feng Liu

Abstract: A variety of methods have been developed for visual saliency analysis. These methods often complement each other. This paper addresses the problem of aggregating various saliency analysis methods such that the aggregation result outperforms each individual one. We have two major observations. First, different methods perform differently in saliency analysis. Second, the performance of a saliency analysis method varies with individual images. Our idea is to use data-driven approaches to saliency aggregation that appropriately consider the performance gaps among individual methods and the performance dependence of each method on individual images. This paper discusses various data-driven approaches and finds that the image-dependent aggregation method works best. Specifically, our method uses a Conditional Random Field (CRF) framework for saliency aggregation that models not only the contribution from each individual saliency map but also the interaction between neighboring pixels. To account for the dependence of aggregation on an individual image, our approach selects a subset of images similar to the input image from a training data set and trains the CRF aggregation model only using this subset instead of the whole training set. Our experiments on public saliency benchmarks show that our aggregation method outperforms each individual saliency method and is robust with the selection of aggregated methods.

6 0.56502622 273 cvpr-2013-Looking Beyond the Image: Unsupervised Learning for Object Saliency and Detection

7 0.44701529 258 cvpr-2013-Learning Video Saliency from Human Gaze Using Candidate Selection

8 0.3335003 411 cvpr-2013-Statistical Textural Distinctiveness for Salient Region Detection in Natural Images

9 0.33091545 418 cvpr-2013-Submodular Salient Region Detection

10 0.29185152 450 cvpr-2013-Unsupervised Joint Object Discovery and Segmentation in Internet Images

11 0.24234237 205 cvpr-2013-Hollywood 3D: Recognizing Actions in 3D Natural Scenes

12 0.21583088 325 cvpr-2013-Part Discovery from Partial Correspondence

13 0.17166823 200 cvpr-2013-Harvesting Mid-level Visual Concepts from Large-Scale Internet Images

14 0.15808538 464 cvpr-2013-What Makes a Patch Distinct?

15 0.14581348 263 cvpr-2013-Learning the Change for Automatic Image Cropping

16 0.10908637 321 cvpr-2013-PDM-ENLOR: Learning Ensemble of Local PDM-Based Regressions

17 0.090930246 171 cvpr-2013-Fast Trust Region for Segmentation

18 0.083874196 378 cvpr-2013-Sampling Strategies for Real-Time Action Recognition

19 0.079683334 371 cvpr-2013-SCaLE: Supervised and Cascaded Laplacian Eigenmaps for Visual Object Recognition Based on Nearest Neighbors

20 0.073463544 468 cvpr-2013-Winding Number for Region-Boundary Consistent Salient Contour Extraction


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.19), (1, -0.249), (2, 0.637), (3, 0.341), (4, -0.151), (5, -0.038), (6, -0.005), (7, -0.075), (8, 0.076), (9, 0.026), (10, -0.021), (11, 0.042), (12, -0.038), (13, -0.003), (14, -0.023), (15, 0.027), (16, 0.004), (17, 0.008), (18, -0.023), (19, -0.012), (20, -0.021), (21, -0.014), (22, -0.03), (23, 0.025), (24, -0.024), (25, 0.036), (26, -0.025), (27, 0.008), (28, 0.008), (29, 0.02), (30, 0.005), (31, 0.011), (32, -0.017), (33, -0.015), (34, -0.011), (35, -0.013), (36, 0.011), (37, -0.015), (38, 0.015), (39, 0.014), (40, -0.0), (41, 0.012), (42, -0.005), (43, -0.015), (44, 0.022), (45, 0.003), (46, 0.016), (47, 0.028), (48, 0.008), (49, 0.014)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.98617864 374 cvpr-2013-Saliency Aggregation: A Data-Driven Approach

Author: Long Mai, Yuzhen Niu, Feng Liu

Abstract: A variety of methods have been developed for visual saliency analysis. These methods often complement each other. This paper addresses the problem of aggregating various saliency analysis methods such that the aggregation result outperforms each individual one. We have two major observations. First, different methods perform differently in saliency analysis. Second, the performance of a saliency analysis method varies with individual images. Our idea is to use data-driven approaches to saliency aggregation that appropriately consider the performance gaps among individual methods and the performance dependence of each method on individual images. This paper discusses various data-driven approaches and finds that the image-dependent aggregation method works best. Specifically, our method uses a Conditional Random Field (CRF) framework for saliency aggregation that models not only the contribution from each individual saliency map but also the interaction between neighboring pixels. To account for the dependence of aggregation on an individual image, our approach selects a subset of images similar to the input image from a training data set and trains the CRF aggregation model only using this subset instead of the whole training set. Our experiments on public saliency benchmarks show that our aggregation method outperforms each individual saliency method and is robust with the selection of aggregated methods.

2 0.95877635 322 cvpr-2013-PISA: Pixelwise Image Saliency by Aggregating Complementary Appearance Contrast Measures with Spatial Priors

Author: Keyang Shi, Keze Wang, Jiangbo Lu, Liang Lin

Abstract: Driven by recent vision and graphics applications such as image segmentation and object recognition, assigning pixel-accurate saliency values to uniformly highlight foreground objects becomes increasingly critical. Often, such fine-grained saliency detection is also desired to have a fast runtime. Motivated by these, we propose a generic and fast computational framework called PISA: Pixelwise Image Saliency Aggregating complementary saliency cues based on color and structure contrasts with spatial priors holistically. Overcoming the limitations of previous methods, which often use homogeneous superpixel-based and color-contrast-only treatment, our PISA approach directly performs saliency modeling for each individual pixel and makes use of densely overlapping, feature-adaptive observations for saliency measure computation. We further impose a spatial prior term on each of the two contrast measures, which constrains pixels rendered salient to be compact and also centered in the image domain. By fusing complementary contrast measures in such a pixelwise adaptive manner, the detection effectiveness is significantly boosted. Without requiring reliable region segmentation or post-relaxation, PISA exploits an efficient edge-aware image representation and filtering technique and produces spatially coherent yet detail-preserving saliency maps. Extensive experiments on three public datasets demonstrate PISA's superior detection accuracy and competitive runtime speed over state-of-the-art approaches.

same-paper 3 0.95436776 376 cvpr-2013-Salient Object Detection: A Discriminative Regional Feature Integration Approach

Author: Huaizu Jiang, Jingdong Wang, Zejian Yuan, Yang Wu, Nanning Zheng, Shipeng Li

Abstract: Salient object detection has been attracting a lot of interest, and recently various heuristic computational models have been designed. In this paper, we regard saliency map computation as a regression problem. Our method, which is based on multi-level image segmentation, uses the supervised learning approach to map the regional feature vector to a saliency score, and finally fuses the saliency scores across multiple levels, yielding the saliency map. The contributions are two-fold. One is that we show our approach, which integrates the regional contrast, regional property and regional backgroundness descriptors together to form the master saliency map, is able to produce superior saliency maps to existing algorithms, most of which combine saliency maps heuristically computed from different types of features. The other is that we introduce a new regional feature vector, backgroundness, to characterize the background, which can be regarded as a counterpart of the objectness descriptor [2]. The performance evaluation on several popular benchmark data sets validates that our approach outperforms existing state-of-the-art methods.

4 0.91976833 375 cvpr-2013-Saliency Detection via Graph-Based Manifold Ranking

Author: Chuan Yang, Lihe Zhang, Huchuan Lu, Xiang Ruan, Ming-Hsuan Yang

Abstract: Most existing bottom-up methods measure the foreground saliency of a pixel or region based on its contrast within a local context or the entire image, whereas a few methods focus on segmenting out background regions and thereby salient objects. Instead of considering the contrast between the salient objects and their surrounding regions, we consider both foreground and background cues in a different way. We rank the similarity of the image elements (pixels or regions) with foreground cues or background cues via graph-based manifold ranking. The saliency of the image elements is defined based on their relevances to the given seeds or queries. We represent the image as a close-loop graph with superpixels as nodes. These nodes are ranked based on their similarity to background and foreground queries, using affinity matrices. Saliency detection is carried out in a two-stage scheme to extract background regions and foreground salient objects efficiently. Experimental results on two large benchmark databases demonstrate that the proposed method performs well against state-of-the-art methods in terms of accuracy and speed. We also create a more difficult benchmark database containing 5,172 images to test the proposed saliency model and make this database publicly available with this paper for further studies in the saliency field.

5 0.90939742 202 cvpr-2013-Hierarchical Saliency Detection

Author: Qiong Yan, Li Xu, Jianping Shi, Jiaya Jia

Abstract: When dealing with objects with complex structures, saliency detection confronts a critical problem, namely that detection accuracy could be adversely affected if salient foreground or background in an image contains small-scale high-contrast patterns. This issue is common in natural images and forms a fundamental challenge for prior methods. We tackle it from a scale point of view and propose a multi-layer approach to analyze saliency cues. The final saliency map is produced in a hierarchical model. Different from varying patch sizes or downsizing images, our scale-based region handling finds saliency values optimally in a tree model. Our approach improves saliency detection on many images that cannot be handled well traditionally. A new dataset is also constructed.

6 0.89260507 411 cvpr-2013-Statistical Textural Distinctiveness for Salient Region Detection in Natural Images

7 0.80618119 418 cvpr-2013-Submodular Salient Region Detection

8 0.79398507 273 cvpr-2013-Looking Beyond the Image: Unsupervised Learning for Object Saliency and Detection

9 0.78518742 258 cvpr-2013-Learning Video Saliency from Human Gaze Using Candidate Selection

10 0.5883134 263 cvpr-2013-Learning the Change for Automatic Image Cropping

11 0.51813513 464 cvpr-2013-What Makes a Patch Distinct?

12 0.49699691 450 cvpr-2013-Unsupervised Joint Object Discovery and Segmentation in Internet Images

13 0.36235589 200 cvpr-2013-Harvesting Mid-level Visual Concepts from Large-Scale Internet Images

14 0.36025196 205 cvpr-2013-Hollywood 3D: Recognizing Actions in 3D Natural Scenes

15 0.33883321 157 cvpr-2013-Exploring Implicit Image Statistics for Visual Representativeness Modeling

16 0.31397727 325 cvpr-2013-Part Discovery from Partial Correspondence

17 0.25728685 291 cvpr-2013-Motionlets: Mid-level 3D Parts for Human Motion Recognition

18 0.24524681 321 cvpr-2013-PDM-ENLOR: Learning Ensemble of Local PDM-Based Regressions

19 0.22119392 416 cvpr-2013-Studying Relationships between Human Gaze, Description, and Computer Vision

20 0.22000393 468 cvpr-2013-Winding Number for Region-Boundary Consistent Salient Contour Extraction


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.108), (16, 0.025), (26, 0.042), (33, 0.248), (67, 0.175), (69, 0.047), (80, 0.012), (87, 0.055), (89, 0.013), (93, 0.17)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.88270187 120 cvpr-2013-Detecting and Naming Actors in Movies Using Generative Appearance Models

Author: Vineet Gandhi, Remi Ronfard

Abstract: We introduce a generative model for learning person and costume specific detectors from labeled examples. We demonstrate the model on the task of localizing and naming actors in long video sequences. More specifically, the actor’s head and shoulders are each represented as a constellation of optional color regions. Detection can proceed despite changes in view-point and partial occlusions. We explain how to learn the models from a small number of labeled keyframes or video tracks, and how to detect novel appearances of the actors in a maximum likelihood framework. We present results on a challenging movie example, with 81% recall in actor detection (coverage) and 89% precision in actor identification (naming).

2 0.8779788 2 cvpr-2013-3D Pictorial Structures for Multiple View Articulated Pose Estimation

Author: Magnus Burenius, Josephine Sullivan, Stefan Carlsson

Abstract: We consider the problem of automatically estimating the 3D pose of humans from images, taken from multiple calibrated views. We show that it is possible and tractable to extend the pictorial structures framework, popular for 2D pose estimation, to 3D. We discuss how to use this framework to impose view, skeleton, joint angle and intersection constraints in 3D. The 3D pictorial structures are evaluated on multiple view data from a professional football game. The evaluation is focused on computational tractability, but we also demonstrate how a simple 2D part detector can be plugged into the framework.

3 0.87585312 160 cvpr-2013-Face Recognition in Movie Trailers via Mean Sequence Sparse Representation-Based Classification

Author: Enrique G. Ortiz, Alan Wright, Mubarak Shah

Abstract: This paper presents an end-to-end video face recognition system, addressing the difficult problem of identifying a video face track using a large dictionary of still face images of a few hundred people, while rejecting unknown individuals. A straightforward application of the popular ?1minimization for face recognition on a frame-by-frame basis is prohibitively expensive, so we propose a novel algorithm Mean Sequence SRC (MSSRC) that performs video face recognition using a joint optimization leveraging all of the available video data and the knowledge that the face track frames belong to the same individual. By adding a strict temporal constraint to the ?1-minimization that forces individual frames in a face track to all reconstruct a single identity, we show the optimization reduces to a single minimization over the mean of the face track. We also introduce a new Movie Trailer Face Dataset collected from 101 movie trailers on YouTube. Finally, we show that our methodmatches or outperforms the state-of-the-art on three existing datasets (YouTube Celebrities, YouTube Faces, and Buffy) and our unconstrained Movie Trailer Face Dataset. More importantly, our method excels at rejecting unknown identities by at least 8% in average precision.

4 0.87581134 45 cvpr-2013-Articulated Pose Estimation Using Discriminative Armlet Classifiers

Author: Georgia Gkioxari, Pablo Arbeláez, Lubomir Bourdev, Jitendra Malik

Abstract: We propose a novel approach for human pose estimation in real-world cluttered scenes, and focus on the challenging problem of predicting the pose of both arms for each person in the image. For this purpose, we build on the notion of poselets [4] and train highly discriminative classifiers to differentiate among arm configurations, which we call armlets. We propose a rich representation which, in addition to standardHOGfeatures, integrates the information of strong contours, skin color and contextual cues in a principled manner. Unlike existing methods, we evaluate our approach on a large subset of images from the PASCAL VOC detection dataset, where critical visual phenomena, such as occlusion, truncation, multiple instances and clutter are the norm. Our approach outperforms Yang and Ramanan [26], the state-of-the-art technique, with an improvement from 29.0% to 37.5% PCP accuracy on the arm keypoint prediction task, on this new pose estimation dataset.

5 0.87421972 398 cvpr-2013-Single-Pedestrian Detection Aided by Multi-pedestrian Detection

Author: Wanli Ouyang, Xiaogang Wang

Abstract: In this paper, we address the challenging problem of detecting pedestrians who appear in groups and have interaction. A new approach is proposed for single-pedestrian detection aided by multi-pedestrian detection. A mixture model of multi-pedestrian detectors is designed to capture the unique visual cues which are formed by nearby multiple pedestrians but cannot be captured by single-pedestrian detectors. A probabilistic framework is proposed to model the relationship between the configurations estimated by single- and multi-pedestrian detectors, and to refine the single-pedestrian detection result with multi-pedestrian detection. It can integrate with any single-pedestrian detector without significantly increasing the computation load. 15 state-of-the-art single-pedestrian detection approaches are investigated on three widely used public datasets: Caltech, TUD-Brussels andETH. Experimental results show that our framework significantly improves all these approaches. The average improvement is 9% on the Caltech-Test dataset, 11% on the TUD-Brussels dataset and 17% on the ETH dataset in terms of average miss rate. The lowest average miss rate is reduced from 48% to 43% on the Caltech-Test dataset, from 55% to 50% on the TUD-Brussels dataset and from 51% to 41% on the ETH dataset.

6 0.87285393 103 cvpr-2013-Decoding Children's Social Behavior

7 0.87017995 275 cvpr-2013-Lp-Norm IDF for Large Scale Image Search

8 0.8682673 208 cvpr-2013-Hyperbolic Harmonic Mapping for Constrained Brain Surface Registration

9 0.8671428 89 cvpr-2013-Computationally Efficient Regression on a Dependency Graph for Human Pose Estimation

10 0.86610413 345 cvpr-2013-Real-Time Model-Based Rigid Object Pose Estimation and Tracking Combining Dense and Sparse Visual Cues

11 0.86351871 339 cvpr-2013-Probabilistic Graphlet Cut: Exploiting Spatial Structure Cue for Weakly Supervised Image Segmentation

12 0.86294335 375 cvpr-2013-Saliency Detection via Graph-Based Manifold Ranking

13 0.86134791 246 cvpr-2013-Learning Binary Codes for High-Dimensional Data Using Bilinear Projections

14 0.85910195 254 cvpr-2013-Learning SURF Cascade for Fast and Accurate Object Detection

15 0.8568998 288 cvpr-2013-Modeling Mutual Visibility Relationship in Pedestrian Detection

16 0.85567963 119 cvpr-2013-Detecting and Aligning Faces by Image Retrieval

17 0.85542095 142 cvpr-2013-Efficient Detector Adaptation for Object Detection in a Video

same-paper 18 0.85235018 376 cvpr-2013-Salient Object Detection: A Discriminative Regional Feature Integration Approach

19 0.85117698 122 cvpr-2013-Detection Evolution with Multi-order Contextual Co-occurrence

20 0.8425796 60 cvpr-2013-Beyond Physical Connections: Tree Models in Human Pose Estimation