iccv iccv2013 iccv2013-71 knowledge-graph by maker-knowledge-mining

71 iccv-2013-Category-Independent Object-Level Saliency Detection


Source: pdf

Author: Yangqing Jia, Mei Han

Abstract: It is known that purely low-level saliency cues such as frequency do not lead to a good salient object detection result, requiring high-level knowledge to be adopted for successful discovery of task-independent salient objects. In this paper, we propose an efficient way to combine such high-level saliency priors and low-level appearance models. We obtain the high-level saliency prior with the objectness algorithm to find potential object candidates without the need for category information, and then enforce consistency among the salient regions using a Gaussian MRF with the weights scaled by diverse density, which emphasizes the influence of potential foreground pixels. Our model obtains saliency maps that assign high scores to the whole salient object, and achieves state-of-the-art performance on benchmark datasets covering various foreground statistics.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 It is known that purely low-level saliency cues such as frequency do not lead to a good salient object detection result, requiring high-level knowledge to be adopted for successful discovery of task-independent salient objects. [sent-3, score-1.417]

2 In this paper, we propose an efficient way to combine such high-level saliency priors and low-level appearance models. [sent-4, score-0.763]

3 Our model obtains saliency maps that assign high scores to the whole salient object, and achieves state-of-the-art performance on benchmark datasets covering various foreground statistics. [sent-6, score-1.266]

4 Beyond cognitively understanding the way humans perceive images and scenes, finding salient regions and objects in images helps various tasks such as speeding up object detection [27, 23] and content-aware image editing [4]. [sent-9, score-0.399]

5 There is a line of saliency detection work centered around visual attention models [13, 15, 12] that focuses on finding locations in images that capture early-stage human fixations before more complex object recognition or scene parsing takes place. [sent-10, score-0.843]

6 While this bears much importance in understanding human visual systems, we focus on the problem of finding salient objects, aiming to find consistent foreground objects, which is often of interest in many further applications such as object detection. [sent-11, score-0.453]

7 An illustration of our approach from images to the final saliency map: (a) input image, (b) objectness detections, (c) saliency prior from objectness, (d) diverse density scores for pixels, (e) the final saliency map, and (f) the segmented object. [sent-14, score-2.809]

8 In this paper, we propose a novel approach that fuses top-down object-level information and bottom-up pixel appearances to obtain a final saliency map that identifies the most interesting regions in the image. [sent-16, score-0.981]

9 Specifically, we adopt the recent objectness framework [3] that finds potential object candidates in the image. [sent-18, score-0.447]

10 Such objectness information is then passed onto the pixel level as a prior of the per-pixel saliency. [sent-19, score-0.437]

11 We will start by reviewing related work in the saliency detection field, and then formally describe our algorithm in Sections 3 and 4, including the employment of the objectness cue in our model and the Markov random field that fuses high-level object information and low-level appearance. [sent-22, score-1.216]

12 Related Work Pre-attentive bottom-up saliency algorithms have been extensively studied from biological and computational perspectives. [sent-25, score-0.727]

13 We note that such information is not neglected in our framework, but is rather incorporated into the objectness detection component. [sent-27, score-0.381]

14 [5] was the first to adopt high-level object information as a saliency prior. [sent-29, score-0.771]

15 However, the prior is combined with pixel-wise scores from another low-level saliency model, which creates an arbitrary bias towards that specific algorithm's behaviors, such as favoring high-frequency areas, and may in some cases hurt the final performance. [sent-30, score-0.887]

16 Other work, especially in segmentation [26, 16], adopts parameterized models such as the Gaussian Mixture Model (GMM) to model the foreground and to cut out the foreground region with coarse supervised information. [sent-31, score-0.338]

17 While such tasks (such as cosegmentation) explicitly need to identify the mixture components of the foreground, this may not be necessary for finding salient regions, and the multiple parameters to be tuned in these models may hurt performance. [sent-32, score-0.753]

18 We empirically tested a parameterized mixture model for foreground modeling, and found the MRF approach in our paper to better fit the saliency problem. [sent-33, score-0.882]

19 To obtain a consistent salient object detection, an important structural choice is to use a fully-connected graph rather than the locally connected graph of many previous approaches [5], as a locally connected graph may lead to overly smoothed saliency maps. [sent-34, score-1.233]

20 What we will show in this paper is that, despite its simplicity, a purely top-down prior and a fully-connected graph built on simple color features can achieve state-of-the-art performance, without the need for an additional bottom-up saliency prior or additional handcrafted features. [sent-41, score-0.909]

21 Saliency Detection with Object-level Information In this section, we formally describe the proposed algorithm for saliency detection based on high-level object information. [sent-43, score-0.81]

22 Object Detection Our method starts with finding an informative prior that captures the potential salient regions from images. [sent-46, score-0.403]

23 While detectors for specific objects such as faces and vehicles have been adopted to help find good prior knowledge of salient objects [15, 28], we focus on algorithms that are able to handle general object appearances without category-specific information. [sent-47, score-0.506]

24 To this end, we adopted the objectness algorithm as proposed in [3] to find a set of object candidates in input images. [sent-48, score-0.458]

25 Specifically, the objectness algorithm finds a set of object candidates represented by bounding boxes, together with their confidence scores, for each input image. [sent-49, score-0.459]

26 It adopts four different low-level cues to learn whether a given bounding box contains an object; for completeness, we briefly explain the cues adopted in the method as follows. [sent-50, score-0.215]

27 Color Contrast: this cue computes the dissimilarity of the color distribution of a candidate bounding box with that of its surrounding area. [sent-54, score-0.193]

28 Edge Density: this cue computes the density of the edges (computed by Canny edge detection) near the borders of the candidate bounding box. [sent-57, score-0.251]

29 Superpixel Straddling: this cue computes the agreement between the candidate bounding box and the superpixels obtained by [8]. [sent-60, score-0.273]

30 Since pixels in the same superpixel often belong to the same semantic group (either the object or the background), for a good object candidate each superpixel should lie mostly either inside or outside the bounding box, and should not cross the boundary; a sketch of this cue follows. [sent-61, score-0.327]
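
As a concrete illustration, below is a minimal sketch of a straddling-style score in the spirit of this cue from the objectness work [3]; the label-map input, the box convention, and the clipping to zero are assumptions of this sketch, not guaranteed details of the original implementation.

```python
import numpy as np

def superpixel_straddling(labels, box):
    """Straddling-style objectness cue: superpixels that straddle the box
    boundary reduce the score. `labels` is an HxW integer superpixel map;
    `box` is (x0, y0, x1, y1) in integer pixel coordinates (assumed)."""
    x0, y0, x1, y1 = box
    inside = np.zeros(labels.shape, dtype=bool)
    inside[y0:y1, x0:x1] = True
    box_area = float(inside.sum())
    score = 1.0
    for s in np.unique(labels):
        sp = (labels == s)
        overlap = np.logical_and(sp, inside).sum()
        # Each superpixel penalizes by the smaller of its parts in/out of the box.
        score -= min(sp.sum() - overlap, overlap) / box_area
    return max(score, 0.0)
```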

31 The top row shows sample images with the 5 most confident bounding boxes; the bottom row shows the corresponding saliency map priors obtained from objectness. [sent-63, score-0.872]

32 The per-superpixel image saliency obtained directly from the objectness detection. [sent-65, score-1.069]

33 We note that unlike previous works such as [5], we keep the low-level, frequency-based saliency component (MS) in the objectness pipeline. [sent-67, score-1.094]

34 This allows the objectness method to more accurately identify possible objects in the image; as we will discuss in the next section, low-level saliency may have a negative impact on the final saliency measure when used alone. [sent-68, score-1.818]

35 We trained the objectness parameters on a randomly selected subset of ImageNet images that are separate from our testing data. [sent-69, score-0.342]

36 For each input image I, we then performed objectness detection to obtain the K top object candidates, denoted by {(B1, b1), (B2, b2), · · · , (BK, bK)}, where Bk is the bounding box and bk is the corresponding confidence score. [sent-70, score-0.626]

37 In most cases, the algorithm is able to capture the correct location of the salient object, although it fails in rare cases such as the last-column example, where the large number of vertical lines in the background building biases the objectness towards it. [sent-72, score-0.596]

38 Pixel-level Objectness Scores As our goal is to obtain a saliency map for the whole image, we transfer the objectness scores from the bounding boxes to the pixel level. [sent-75, score-1.33]

39 Unlike the average pooling often used in classification, we compute each pixel p's objectness score (denoted by $s_p$) as the square root of the summed squares (RSS) of the scores from all the bounding boxes that cover it, weighted by a Gaussian function for smoothness: $s_p = \sqrt{\sum_{k=1}^{K} \mathbf{1}[p \in B_k] \, b_k^2 \, \exp\left(-\|p - c_k\|^2 / (2\sigma^2)\right)}$ (1), where $c_k$ is the center of $B_k$. [sent-77, score-0.582]

40 The exponent term provides a discounting factor so that pixels far from the bounding box center receive less contribution from the bounding box than pixels near the center do. [sent-81, score-0.429]
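
A minimal sketch of this pixel-level prior, assuming boxes as (x0, y0, x1, y1) tuples with confidence scores, an illustrative Gaussian bandwidth, and the per-image re-normalization described shortly:

```python
import numpy as np

def pixel_objectness_prior(boxes, scores, height, width, sigma=30.0):
    """Root of summed squares (RSS) of box confidences at each pixel,
    discounted by a Gaussian on the distance to each box center."""
    ys, xs = np.mgrid[0:height, 0:width]
    accum = np.zeros((height, width))
    for (x0, y0, x1, y1), b in zip(boxes, scores):
        covered = (xs >= x0) & (xs < x1) & (ys >= y0) & (ys < y1)
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        gauss = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        accum += covered * (b ** 2) * gauss
    s = np.sqrt(accum)
    # Per-image re-normalization so the scores span [0, 1].
    return (s - s.min()) / (s.max() - s.min() + 1e-12)
```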

41 To reduce the computation cost of subsequent steps, we adopted the idea of superpixels and averaged the saliency values of pixels inside each superpixel. [sent-82, score-0.895]

42 Further, since the scale of the objectness scores from Eqn. [sent-84, score-0.396]

43 1 may vary due to different objectness detections, we re-normalize the per-pixel scores for each image so that the maximum score is 1 and the minimum is 0 over the whole image. [sent-85, score-0.418]

44 Figure 3 shows the resulting per-pixel objectness score with summed pooling and two baseline choices: average pooling (as often used in classification) and no smoothing. [sent-86, score-0.415]

45 It can be observed that although the saliency map is still coarse, it provides a reasonable initialization for the final saliency map, as it correctly identifies the salient object location. [sent-87, score-1.832]

46 More importantly, such a saliency prior is not biased towards specific low-level appearances such as high-frequency regions, which often miss the inside regions of salient objects. [sent-88, score-1.085]

47 Saliency Computation with Graph-based Foreground Agreement The pixel-level prior gives us a reasonably informative result on the salient regions of the images. [sent-90, score-0.416]

48 However, because objectness bounding boxes are often overcomplete, the saliency map is often very coarse, and one would expect low-level appearance-based information to be helpful in refining the saliency maps. [sent-91, score-1.941]

49 Such statistics will bias our further inference algorithm towards small, highly textured areas, a negative effect for saliency detection. [sent-93, score-0.727]

50 We therefore enforce agreement between salient regions in the image, based on the similarities between pixel-level features. [sent-96, score-0.374]

51 The idea is that if a pixel has a high saliency prior, then pixels that appear similar in the image should also receive high saliency scores even if they lie in a region with low contrast, thus ensuring a consistent saliency score assignment over the whole region of the salient object. [sent-97, score-1.645]

52 Thus, we use a fully connected MRF in which any two superpixels are connected, with the corresponding edge weight computed as $W_{ij} = \exp\left(-\|f_i - f_j\|^2 / (2\sigma^2)\right)$, where $f_i$ is the color feature of superpixel $i$. [sent-103, score-0.231]
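
A minimal sketch of this fully connected affinity, assuming each superpixel is represented by a small feature vector (e.g., its mean color) and an illustrative bandwidth:

```python
import numpy as np

def affinity_matrix(features, sigma=0.2):
    """Fully connected Gaussian affinity between superpixel features:
    W_ij = exp(-||f_i - f_j||^2 / (2 * sigma^2))."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)  # no self-loops
    return W
```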

53 A potential issue with the direct computation of the weight is that with images of small foreground regions, the large number of background superpixels will dominate the spectral characteristics of W, making it relatively hard to identify the foreground object. [sent-114, score-0.455]

54 The diverse density of a superpixel models how near other salient regions are to it, and how far other non-salient regions are from it, with the saliency approximated by the prior information $s_j$. [sent-120, score-1.293]

55 For normalization purposes, we then normalize all diverse density values by $DD_i \leftarrow \left( DD_i / \max_j DD_j \right)^{\gamma}$, (4)

56 where $\gamma$ is a scaling factor that controls the peakedness of the diverse density measure. [sent-123, score-0.177]
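
The extracted text only sketches how the diverse density itself is defined, so the following is one plausible instantiation under that description, not the paper's exact formula: a superpixel scores high when similar superpixels have high prior saliency and dissimilar ones have low prior saliency, followed by the power normalization of Eqn. (4).

```python
import numpy as np

def diverse_density(features, prior, sigma=0.2, gamma=2.0):
    """Plausible diverse-density weights from superpixel features and the
    prior saliency s_j; the similarity kernel and geometric-mean pooling
    are illustrative assumptions."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * sigma ** 2))            # similarity in [0, 1]
    agree = 1.0 - np.abs(prior[None, :] - k)      # agreement with prior s_j
    # Geometric mean over j, computed stably in log space.
    dd = np.exp(np.log(np.clip(agree, 1e-12, 1.0)).mean(axis=1))
    return (dd / dd.max()) ** gamma               # Eqn. (4)-style scaling
```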

57 Figure 4 shows an example of the diverse density scores obtained. [sent-125, score-0.231]

58 Then, we solve for the improved saliency value for each pixel by viewing the graph as a Gaussian MRF, which leads to an efficient closed-form computation of the final saliency values through a single linear solve.
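
The exact closed form is lost in this extraction, so the sketch below shows one standard Gaussian-MRF solution consistent with the description: smooth the prior over the graph while staying close to it. The trade-off parameter and the use of a diverse-density-scaled W (e.g., W_ij ← DD_i · DD_j · W_ij) are assumptions of this sketch.

```python
import numpy as np

def gmrf_saliency(W, prior, lam=1.0):
    """Closed-form Gaussian MRF inference: minimize
    s^T L s + lam * ||s - prior||^2, with graph Laplacian L = D - W,
    whose minimizer is s = lam * (L + lam * I)^{-1} prior."""
    L = np.diag(W.sum(axis=1)) - W
    s = lam * np.linalg.solve(L + lam * np.eye(W.shape[0]), prior)
    # Re-normalize to [0, 1] as elsewhere in the pipeline.
    return (s - s.min()) / (s.max() - s.min() + 1e-12)
```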

59 Examples of the final saliency map can be seen in Figure 7. [sent-135, score-0.753]

60 Analysis of Performance Contributions With the multiple stages of many current saliency detection algorithms, it would be interesting to observe how much each component contributes to the final performance. [sent-138, score-0.766]

61 To this end, we compare the objectness prior against alternative priors: a uniform prior over the whole image, a Gaussian prior that favors the center of the image, and a more sophisticated prior from [28] that combines multiple cues, such as location, semantics and color, to learn a final informed prior saliency. [sent-143, score-0.187]

62 We then use them as the initialization of our graph, and perform GMRF inference to get the final saliency measure. [sent-144, score-0.752]

63 A prior that captures coarse locations of the foreground object does bear importance, as the uninformed priors do a very poor job of identifying the salient region. [sent-146, score-0.572]

64 Both are still worse than the proposed objectness prior, which gives a further 4% increase in average precision, suggesting that the general objectness measure serves as a good heuristic in saliency detection. [sent-148, score-1.452]

65 We start from a normal MRF construction, where only spatially connected superpixels are connected in the graph. [sent-150, score-0.226]

66 The diverse density (DD) term is then imposed when computing the edge weights of the graph. [sent-151, score-0.199]

67 Both baselines are then compared against our method that uses both a diverse density term and a fully connected graph. [sent-152, score-0.293]

68 The results show that diverse density provides a significant precision gain in the low-recall area, possibly because it prevents background superpixels from having too strong an influence on neighboring superpixels. [sent-153, score-0.361]

69 Experiments We evaluated our method on the MSRA saliency dataset, which contains 1000 images together with the salient object annotated by human participants as the ground-truth saliency map, and compared the performance against state-of-the-art algorithms. [sent-158, score-1.752]

70 Our saliency maps on the MSRA dataset are publicly available at http://www.

71 Evaluation Criteria We mainly adopted the criteria introduced in [2] to evaluate the performance of various saliency algorithms using precision-recall (PR) curves. [sent-165, score-0.836]

72 FC means fully connected and DD means diverse density weighted. [sent-173, score-0.268]

73 The precision-recall curves for our saliency detection algorithm and the baseline algorithms. [sent-175, score-0.806]

74 Results on the MSRA dataset with the saliency maps of the best 8 methods, ordered from left to right, where GT is the ground truth and OB is our approach. [sent-181, score-0.75]

75 The second criterion examines saliency orders within each image: it uses the threshold to generate per-image PR curves, and then computes the average precision values at fixed recall (between 0 and 1 with a stepsize of 0. [sent-183, score-0.821]

76 Intuitively, the first method represents the robustness of the algorithm in a cross-image fashion when an uninformative threshold is used, and the second focuses on checking the correct saliency order for pixels in a single image. [sent-185, score-0.767]
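
A minimal sketch of the per-image evaluation, assuming binary ground-truth masks and a fixed-recall grid (the 0.05 stepsize is inferred from the truncated text above):

```python
import numpy as np

def precision_at_fixed_recall(saliency, gt_mask,
                              recalls=np.linspace(0.0, 1.0, 21)):
    """Per-image PR curve by sweeping a threshold over the saliency map,
    then precision sampled at fixed recall levels."""
    order = np.argsort(saliency.ravel())[::-1]   # most salient pixels first
    hits = gt_mask.ravel()[order].astype(float)
    if hits.sum() == 0:
        return np.zeros_like(recalls)
    tp = np.cumsum(hits)
    precision = tp / np.arange(1, hits.size + 1)
    recall = tp / hits.sum()
    pos = np.nonzero(hits)[0]  # recall strictly increases only at hits
    return np.interp(recalls, recall[pos], precision[pos])
```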

77 We also report the performance when we binarize the saliency map with an adaptive thresholding method. [sent-186, score-0.753]

78 For binarization, we computed the mean m and the standard deviation σ of the saliency map, and then set all pixels whose saliency value is larger than m + σ to be foreground and the rest to be background. [sent-187, score-1.649]
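
The adaptive binarization just described is straightforward; a minimal sketch, assuming a single-channel floating-point saliency map:

```python
import numpy as np

def binarize_saliency(saliency_map):
    """Adaptive thresholding at mean + one standard deviation: pixels
    above m + sigma become foreground, the rest background."""
    m, sd = saliency_map.mean(), saliency_map.std()
    return saliency_map > (m + sd)
```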

79 (AC) [1], context-aware saliency (CA) [9], graph-based visual saliency (GB) [10], frequency-tuned saliency (IG) [2], Itti et al. [sent-194, score-2.181]

80 (IT) [13], contrast-based attention model (MZ) [21], spectral residual approach (SR) [12], and saliency filters (SF) [24]. [sent-195, score-0.76]

81 (JD) [15], global contrast-based saliency (RC) [6], saliency by low-rank recovery (LR) [28], Chang et al. [sent-197, score-1.454]

82 Such methods utilize high-level object or global image information to create an informative prior for the saliency map. [sent-199, score-0.853]

83 For all baseline methods, we used either the published implementations with their recommended parameters or the author-provided saliency maps. [sent-200, score-0.727]

84 In general, methods that utilize high-level information to obtain more informative saliency priors perform better than purely low-level approaches, and our method achieves the highest average precision on both PR curves over all baselines. [sent-201, score-0.907]

85 Figure 7 shows exemplar images and their corresponding saliency maps from our method and various algorithms; full results on the dataset can be found at the project page given above. [sent-208, score-0.75]

86 This partially results from the fact that it correctly identifies foreground regions that have a consistent appearance, without being biased towards, e.g., high-frequency regions. [sent-220, score-0.223]

87 We also compare with the approach based on discriminative regional features (DR) [14], in which a pixel-wise saliency prediction model is trained on ground-truth saliency maps. [sent-224, score-1.483]

88 It is interesting to note that a major contribution is also due to the introduction of object-level information, further justifying the use of such approaches in saliency detection. [sent-225, score-0.727]

89 Performance on the Weizmann Dataset The MSRA saliency dataset mainly contains a single salient object of medium size per image, which is the assumption made by several saliency detection algorithms, especially those with a high-level object appearance model. [sent-228, score-1.813]

90 To evaluate the performance of our approach under more varied conditions such as multiple foreground objects, we used the Weizmann Dataset, which contains two subsets of images with a single foreground object and two foreground objects, respectively. [sent-229, score-0.531]

91 However, the presence of multiple foreground objects hurts some high-level model baselines (Figure 10(b)), leading to an even slightly worse performance than good low-level models (RC). [sent-232, score-0.202]

92 This is possibly due to the fact that these models explicitly model one single foreground (LC) or favor a connected foreground (SV). [sent-233, score-0.378]

93 Our model does not make such assumptions, and is able to naturally cope with multiple foreground blobs, as we model the appearance of the foreground with a graph, implicitly allowing mixtures of foreground appearances. [sent-234, score-0.465]

94 Conclusion In this paper we proposed a novel image saliency algorithm that utilizes object-level information to obtain better discovery of salient objects. [sent-238, score-0.981]

95 Our model obtains saliency maps that assign high scores to the whole salient object. (Footnote: We report the performance on the MSRA dataset in the supplementary material, as their result is on a test subset and is not directly comparable.) [sent-240, score-1.111]

96 The precision-recall curves for our saliency detection algorithm and the baseline algorithms on the Weizmann dataset. [sent-243, score-0.806]

97 Fusing generic objectness and visual saliency for salient object detection. [sent-269, score-1.367]

98 Center-surround divergence of feature statistics for salient object detection. [sent-317, score-0.298]

99 Saliency filters: Contrast based filtering for salient region detection. [sent-338, score-0.254]

100 A unified approach to salient object detection via low rank matrix recovery. [sent-354, score-0.337]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('saliency', 0.727), ('objectness', 0.342), ('salient', 0.254), ('foreground', 0.155), ('msra', 0.15), ('weizmann', 0.128), ('diverse', 0.091), ('superpixels', 0.09), ('density', 0.086), ('bounding', 0.083), ('mrf', 0.075), ('connected', 0.068), ('pr', 0.067), ('christof', 0.065), ('prior', 0.055), ('scores', 0.054), ('ddi', 0.05), ('box', 0.05), ('appearances', 0.049), ('rc', 0.046), ('object', 0.044), ('radhakrishna', 0.043), ('sabine', 0.043), ('turbopixel', 0.043), ('bk', 0.043), ('binarization', 0.042), ('precision', 0.041), ('pixel', 0.04), ('curves', 0.04), ('agreement', 0.04), ('pixels', 0.04), ('regions', 0.04), ('sv', 0.039), ('lr', 0.039), ('detection', 0.039), ('dr', 0.038), ('pietro', 0.038), ('adopted', 0.038), ('bi', 0.038), ('gs', 0.037), ('cue', 0.037), ('graph', 0.036), ('boxes', 0.036), ('purely', 0.036), ('priors', 0.036), ('laurent', 0.035), ('philipp', 0.035), ('achanta', 0.034), ('candidates', 0.034), ('wij', 0.034), ('koch', 0.033), ('attention', 0.033), ('choices', 0.033), ('francisco', 0.032), ('scaled', 0.032), ('obtains', 0.031), ('ig', 0.031), ('foregrounds', 0.031), ('recall', 0.03), ('regional', 0.029), ('rss', 0.029), ('estrada', 0.029), ('harel', 0.029), ('weight', 0.028), ('identifies', 0.028), ('coarse', 0.028), ('rke', 0.028), ('potential', 0.027), ('fuses', 0.027), ('summed', 0.027), ('informative', 0.027), ('hurt', 0.026), ('judd', 0.026), ('superpixel', 0.026), ('map', 0.026), ('baselines', 0.025), ('dd', 0.025), ('sp', 0.025), ('jian', 0.025), ('graphbased', 0.025), ('frequency', 0.025), ('category', 0.025), ('ence', 0.025), ('connects', 0.024), ('itti', 0.024), ('cosegmentation', 0.024), ('fully', 0.023), ('emphasizes', 0.023), ('iand', 0.023), ('influence', 0.023), ('maps', 0.023), ('computes', 0.023), ('pooling', 0.023), ('ms', 0.023), ('objects', 0.022), ('markov', 0.022), ('edge', 0.022), ('tpami', 0.022), ('consistency', 0.022), ('whole', 0.022)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 71 iccv-2013-Category-Independent Object-Level Saliency Detection

Author: Yangqing Jia, Mei Han

Abstract: It is known that purely low-level saliency cues such as frequency do not lead to a good salient object detection result, requiring high-level knowledge to be adopted for successful discovery of task-independent salient objects. In this paper, we propose an efficient way to combine such high-level saliency priors and low-level appearance models. We obtain the high-level saliency prior with the objectness algorithm to find potential object candidates without the need for category information, and then enforce consistency among the salient regions using a Gaussian MRF with the weights scaled by diverse density, which emphasizes the influence of potential foreground pixels. Our model obtains saliency maps that assign high scores to the whole salient object, and achieves state-of-the-art performance on benchmark datasets covering various foreground statistics.

2 0.65755612 372 iccv-2013-Saliency Detection via Dense and Sparse Reconstruction

Author: Xiaohui Li, Huchuan Lu, Lihe Zhang, Xiang Ruan, Ming-Hsuan Yang

Abstract: In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction errors. The image boundaries are first extracted via superpixels as likely cues for background templates, from which dense and sparse appearance models are constructed. For each image region, we first compute dense and sparse reconstruction errors. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, pixel-level saliency is computed by an integration of multi-scale reconstruction errors and refined by an object-biased Gaussian model. We apply the Bayes formula to integrate saliency measures based on dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against seventeen state-of-the-art methods in terms of precision and recall. In addition, the proposed algorithm is demonstrated to be more effective in highlighting salient objects uniformly and robust to background noise.

3 0.56836528 91 iccv-2013-Contextual Hypergraph Modeling for Salient Object Detection

Author: Xi Li, Yao Li, Chunhua Shen, Anthony Dick, Anton Van_Den_Hengel

Abstract: Salient object detection aims to locate objects that capture human attention within images. Previous approaches often pose this as a problem of image contrast analysis. In this work, we model an image as a hypergraph that utilizes a set of hyperedges to capture the contextual properties of image pixels or regions. As a result, the problem of salient object detection becomes one of finding salient vertices and hyperedges in the hypergraph. The main advantage of hypergraph modeling is that it takes into account each pixel’s (or region ’s) affinity with its neighborhood as well as its separation from image background. Furthermore, we propose an alternative approach based on centerversus-surround contextual contrast analysis, which performs salient object detection by optimizing a cost-sensitive support vector machine (SVM) objective function. Experimental results on four challenging datasets demonstrate the effectiveness of the proposed approaches against the stateof-the-art approaches to salient object detection.

4 0.4699735 50 iccv-2013-Analysis of Scores, Datasets, and Models in Visual Saliency Prediction

Author: Ali Borji, Hamed R. Tavakoli, Dicky N. Sihite, Laurent Itti

Abstract: Significant recent progress has been made in developing high-quality saliency models. However, less effort has been undertaken on fair assessment of these models, over large standardized datasets and correctly addressing confounding factors. In this study, we pursue a critical and quantitative look at challenges (e.g., center-bias, map smoothing) in saliency modeling and the way they affect model accuracy. We quantitatively compare 32 state-of-the-art models (using the shuffled AUC score to discount center-bias) on 4 benchmark eye movement datasets, for prediction of human fixation locations and scanpath sequence. We also account for the role of map smoothing. We find that, although model rankings vary, some (e.g., AWS, LG, AIM, and HouNIPS) consistently outperform other models over all datasets. Some models work well for prediction of both fixation locations and scanpath sequence (e.g., Judd, GBVS). Our results show low prediction accuracy for models over emotional stimuli from the NUSEF dataset. Our last benchmark, for the first time, gauges the ability of models to decode the stimulus category from statistics of fixations, saccades, and model saliency values at fixated locations. In this test, ITTI and AIM models win over other models. Our benchmark provides a comprehensive high-level picture of the strengths and weaknesses of many popular models, and suggests future research directions in saliency modeling.

5 0.46455175 374 iccv-2013-Salient Region Detection by UFO: Uniqueness, Focusness and Objectness

Author: Peng Jiang, Haibin Ling, Jingyi Yu, Jingliang Peng

Abstract: The goal of saliency detection is to locate important pixels or regions in an image which attract humans' visual attention the most. This is a fundamental task whose output may serve as the basis for further computer vision tasks like segmentation, resizing, tracking and so forth. In this paper we propose a novel salient region detection algorithm by integrating three important visual cues namely uniqueness, focusness and objectness (UFO). In particular, uniqueness captures the appearance-derived visual contrast; focusness reflects the fact that salient regions are often photographed in focus; and objectness helps keep completeness of detected salient regions. While uniqueness has been used for saliency detection for long, it is new to integrate focusness and objectness for this purpose. In fact, focusness and objectness both provide important saliency information complementary of uniqueness. In our experiments using public benchmark datasets, we show that, even with a simple pixel level combination of the three components, the proposed approach yields significant improvement compared with previously reported methods.

6 0.4423629 373 iccv-2013-Saliency and Human Fixations: State-of-the-Art and Study of Comparison Metrics

7 0.43327251 396 iccv-2013-Space-Time Robust Representation for Action Recognition

8 0.38807967 371 iccv-2013-Saliency Detection via Absorbing Markov Chain

9 0.3728056 137 iccv-2013-Efficient Salient Region Detection with Soft Image Abstraction

10 0.35438567 370 iccv-2013-Saliency Detection in Large Point Sets

11 0.33323145 217 iccv-2013-Initialization-Insensitive Visual Tracking through Voting with Salient Local Features

12 0.31693408 369 iccv-2013-Saliency Detection: A Boolean Map Approach

13 0.25796276 299 iccv-2013-Online Video SEEDS for Temporal Window Objectness

14 0.22052106 381 iccv-2013-Semantically-Based Human Scanpath Estimation with HMMs

15 0.18745546 411 iccv-2013-Symbiotic Segmentation and Part Localization for Fine-Grained Categorization

16 0.12334774 282 iccv-2013-Multi-view Object Segmentation in Space and Time

17 0.1205373 74 iccv-2013-Co-segmentation by Composition

18 0.11862689 326 iccv-2013-Predicting Sufficient Annotation Strength for Interactive Foreground Segmentation

19 0.11415014 325 iccv-2013-Predicting Primary Gaze Behavior Using Social Saliency Fields

20 0.1140459 379 iccv-2013-Semantic Segmentation without Annotating Segments


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.215), (1, -0.084), (2, 0.679), (3, -0.317), (4, -0.163), (5, 0.005), (6, 0.012), (7, -0.039), (8, 0.023), (9, -0.021), (10, -0.03), (11, 0.063), (12, 0.019), (13, -0.022), (14, 0.006), (15, -0.076), (16, 0.116), (17, -0.047), (18, -0.083), (19, 0.093), (20, 0.006), (21, -0.028), (22, 0.005), (23, 0.009), (24, 0.036), (25, -0.023), (26, -0.0), (27, -0.023), (28, -0.005), (29, -0.016), (30, -0.006), (31, 0.02), (32, 0.009), (33, 0.008), (34, 0.02), (35, -0.022), (36, 0.006), (37, 0.022), (38, -0.02), (39, -0.002), (40, 0.008), (41, -0.023), (42, 0.003), (43, 0.021), (44, -0.008), (45, -0.001), (46, 0.036), (47, 0.007), (48, 0.048), (49, -0.012)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.97224641 91 iccv-2013-Contextual Hypergraph Modeling for Salient Object Detection

Author: Xi Li, Yao Li, Chunhua Shen, Anthony Dick, Anton Van_Den_Hengel

Abstract: Salient object detection aims to locate objects that capture human attention within images. Previous approaches often pose this as a problem of image contrast analysis. In this work, we model an image as a hypergraph that utilizes a set of hyperedges to capture the contextual properties of image pixels or regions. As a result, the problem of salient object detection becomes one of finding salient vertices and hyperedges in the hypergraph. The main advantage of hypergraph modeling is that it takes into account each pixel’s (or region ’s) affinity with its neighborhood as well as its separation from image background. Furthermore, we propose an alternative approach based on centerversus-surround contextual contrast analysis, which performs salient object detection by optimizing a cost-sensitive support vector machine (SVM) objective function. Experimental results on four challenging datasets demonstrate the effectiveness of the proposed approaches against the stateof-the-art approaches to salient object detection.

same-paper 2 0.9661842 71 iccv-2013-Category-Independent Object-Level Saliency Detection

Author: Yangqing Jia, Mei Han

Abstract: It is known that purely low-level saliency cues such as frequency do not lead to a good salient object detection result, requiring high-level knowledge to be adopted for successful discovery of task-independent salient objects. In this paper, we propose an efficient way to combine such high-level saliency priors and low-level appearance models. We obtain the high-level saliency prior with the objectness algorithm to find potential object candidates without the need for category information, and then enforce consistency among the salient regions using a Gaussian MRF with the weights scaled by diverse density, which emphasizes the influence of potential foreground pixels. Our model obtains saliency maps that assign high scores to the whole salient object, and achieves state-of-the-art performance on benchmark datasets covering various foreground statistics.

3 0.94601977 374 iccv-2013-Salient Region Detection by UFO: Uniqueness, Focusness and Objectness

Author: Peng Jiang, Haibin Ling, Jingyi Yu, Jingliang Peng

Abstract: The goal of saliency detection is to locate important pixels or regions in an image which attract humans' visual attention the most. This is a fundamental task whose output may serve as the basis for further computer vision tasks like segmentation, resizing, tracking and so forth. In this paper we propose a novel salient region detection algorithm by integrating three important visual cues namely uniqueness, focusness and objectness (UFO). In particular, uniqueness captures the appearance-derived visual contrast; focusness reflects the fact that salient regions are often photographed in focus; and objectness helps keep completeness of detected salient regions. While uniqueness has been used for saliency detection for long, it is new to integrate focusness and objectness for this purpose. In fact, focusness and objectness both provide important saliency information complementary of uniqueness. In our experiments using public benchmark datasets, we show that, even with a simple pixel level combination of the three components, the proposed approach yields significant improvement compared with previously reported methods.

4 0.92030281 369 iccv-2013-Saliency Detection: A Boolean Map Approach

Author: Jianming Zhang, Stan Sclaroff

Abstract: A novel Boolean Map based Saliency (BMS) model is proposed. An image is characterized by a set of binary images, which are generated by randomly thresholding the image ’s color channels. Based on a Gestalt principle of figure-ground segregation, BMS computes saliency maps by analyzing the topological structure of Boolean maps. BMS is simple to implement and efficient to run. Despite its simplicity, BMS consistently achieves state-of-the-art performance compared with ten leading methods on five eye tracking datasets. Furthermore, BMS is also shown to be advantageous in salient object detection.

5 0.91165549 372 iccv-2013-Saliency Detection via Dense and Sparse Reconstruction

Author: Xiaohui Li, Huchuan Lu, Lihe Zhang, Xiang Ruan, Ming-Hsuan Yang

Abstract: In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction errors. The image boundaries are first extracted via superpixels as likely cues for background templates, from which dense and sparse appearance models are constructed. For each image region, we first compute dense and sparse reconstruction errors. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, pixel-level saliency is computed by an integration of multi-scale reconstruction errors and refined by an object-biased Gaussian model. We apply the Bayes formula to integrate saliency measures based on dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against seventeen state-of-the-art methods in terms of precision and recall. In addition, the proposed algorithm is demonstrated to be more effective in highlighting salient objects uniformly and robust to background noise.

6 0.89519697 50 iccv-2013-Analysis of Scores, Datasets, and Models in Visual Saliency Prediction

7 0.89464509 373 iccv-2013-Saliency and Human Fixations: State-of-the-Art and Study of Comparison Metrics

8 0.8927803 371 iccv-2013-Saliency Detection via Absorbing Markov Chain

9 0.84720767 137 iccv-2013-Efficient Salient Region Detection with Soft Image Abstraction

10 0.84720618 370 iccv-2013-Saliency Detection in Large Point Sets

11 0.75403297 396 iccv-2013-Space-Time Robust Representation for Action Recognition

12 0.63618159 217 iccv-2013-Initialization-Insensitive Visual Tracking through Voting with Salient Local Features

13 0.48114911 381 iccv-2013-Semantically-Based Human Scanpath Estimation with HMMs

14 0.36774483 74 iccv-2013-Co-segmentation by Composition

15 0.34952116 411 iccv-2013-Symbiotic Segmentation and Part Localization for Fine-Grained Categorization

16 0.32247162 325 iccv-2013-Predicting Primary Gaze Behavior Using Social Saliency Fields

17 0.31429383 299 iccv-2013-Online Video SEEDS for Temporal Window Objectness

18 0.29609001 186 iccv-2013-GrabCut in One Cut

19 0.29400745 110 iccv-2013-Detecting Curved Symmetric Parts Using a Deformable Disc Model

20 0.28148338 326 iccv-2013-Predicting Sufficient Annotation Strength for Interactive Foreground Segmentation


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.117), (4, 0.156), (7, 0.019), (26, 0.105), (31, 0.036), (40, 0.01), (42, 0.085), (64, 0.037), (73, 0.039), (89, 0.168), (97, 0.098), (98, 0.017)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.86797112 71 iccv-2013-Category-Independent Object-Level Saliency Detection

Author: Yangqing Jia, Mei Han

Abstract: It is known that purely low-level saliency cues such as frequency do not lead to a good salient object detection result, requiring high-level knowledge to be adopted for successful discovery of task-independent salient objects. In this paper, we propose an efficient way to combine such high-level saliency priors and low-level appearance models. We obtain the high-level saliency prior with the objectness algorithm to find potential object candidates without the need for category information, and then enforce consistency among the salient regions using a Gaussian MRF with the weights scaled by diverse density, which emphasizes the influence of potential foreground pixels. Our model obtains saliency maps that assign high scores to the whole salient object, and achieves state-of-the-art performance on benchmark datasets covering various foreground statistics.

2 0.84794778 163 iccv-2013-Feature Weighting via Optimal Thresholding for Video Analysis

Author: Zhongwen Xu, Yi Yang, Ivor Tsang, Nicu Sebe, Alexander G. Hauptmann

Abstract: Fusion of multiple features can boost the performance of large-scale visual classification and detection tasks like TRECVID Multimedia Event Detection (MED) competition [1]. In this paper, we propose a novel feature fusion approach, namely Feature Weighting via Optimal Thresholding (FWOT) to effectively fuse various features. FWOT learns the weights, thresholding and smoothing parameters in a joint framework to combine the decision values obtained from all the individual features and the early fusion. To the best of our knowledge, this is the first work to consider the weight and threshold factors of fusion problem simultaneously. Compared to state-of-the-art fusion algorithms, our approach achieves promising improvements on HMDB [8] action recognition dataset and CCV [5] video classification dataset. In addition, experiments on two TRECVID MED 2011 collections show that our approach outperforms the state-of-the-art fusion methods for complex event detection.

3 0.84757042 158 iccv-2013-Fast High Dimensional Vector Multiplication Face Recognition

Author: Oren Barkan, Jonathan Weill, Lior Wolf, Hagai Aronowitz

Abstract: This paper advances descriptor-based face recognition by suggesting a novel usage of descriptors to form an over-complete representation, and by proposing a new metric learning pipeline within the same/not-same framework. First, the Over-Complete Local Binary Patterns (OCLBP) face representation scheme is introduced as a multi-scale modified version of the Local Binary Patterns (LBP) scheme. Second, we propose an efficient matrix-vector multiplication-based recognition system. The system is based on Linear Discriminant Analysis (LDA) coupled with Within Class Covariance Normalization (WCCN). This is further extended to the unsupervised case by proposing an unsupervised variant of WCCN. Lastly, we introduce Diffusion Maps (DM) for non-linear dimensionality reduction as an alternative to the Whitened Principal Component Analysis (WPCA) method which is often used in face recognition. We evaluate the proposed framework on the LFW face recognition dataset under the restricted, unrestricted and unsupervised protocols. In all three cases we achieve very competitive results.

4 0.83950388 236 iccv-2013-Learning Discriminative Part Detectors for Image Classification and Cosegmentation

Author: Jian Sun, Jean Ponce

Abstract: In this paper, we address the problem of learning discriminative part detectors from image sets with category labels. We propose a novel latent SVM model regularized by group sparsity to learn these part detectors. Starting from a large set of initial parts, the group sparsity regularizer forces the model to jointly select and optimize a set of discriminative part detectors in a max-margin framework. We propose a stochastic version of a proximal algorithm to solve the corresponding optimization problem. We apply the proposed method to image classification and cosegmentation, and quantitative experiments with standard benchmarks show that it matches or improves upon the state of the art.

5 0.83135432 20 iccv-2013-A Max-Margin Perspective on Sparse Representation-Based Classification

Author: Zhaowen Wang, Jianchao Yang, Nasser Nasrabadi, Thomas Huang

Abstract: Sparse Representation-based Classification (SRC) is a powerful tool in distinguishing signal categories which lie on different subspaces. Despite its wide application to visual recognition tasks, current understanding of SRC is solely based on a reconstructive perspective, which neither offers any guarantee on its classification performance nor provides any insight on how to design a discriminative dictionary for SRC. In this paper, we present a novel perspective towards SRC and interpret it as a margin classifier. The decision boundary and margin of SRC are analyzed in local regions where the support of sparse code is stable. Based on the derived margin, we propose a hinge loss function as the gauge for the classification performance of SRC. A stochastic gradient descent algorithm is implemented to maximize the margin of SRC and obtain more discriminative dictionaries. Experiments validate the effectiveness of the proposed approach in predicting classification performance and improving dictionary quality over reconstructive ones. Classification results competitive with other state-ofthe-art sparse coding methods are reported on several data sets.

6 0.83078778 195 iccv-2013-Hidden Factor Analysis for Age Invariant Face Recognition

7 0.82594067 347 iccv-2013-Recursive Estimation of the Stein Center of SPD Matrices and Its Applications

8 0.82530844 227 iccv-2013-Large-Scale Image Annotation by Efficient and Robust Kernel Metric Learning

9 0.82472706 412 iccv-2013-Synergistic Clustering of Image and Segment Descriptors for Unsupervised Scene Understanding

10 0.81886446 373 iccv-2013-Saliency and Human Fixations: State-of-the-Art and Study of Comparison Metrics

11 0.80830139 372 iccv-2013-Saliency Detection via Dense and Sparse Reconstruction

12 0.80791521 107 iccv-2013-Deformable Part Descriptors for Fine-Grained Recognition and Attribute Prediction

13 0.80722982 91 iccv-2013-Contextual Hypergraph Modeling for Salient Object Detection

14 0.80683875 73 iccv-2013-Class-Specific Simplex-Latent Dirichlet Allocation for Image Classification

15 0.80449617 95 iccv-2013-Cosegmentation and Cosketch by Unsupervised Learning

16 0.80428761 426 iccv-2013-Training Deformable Part Models with Decorrelated Features

17 0.8036316 371 iccv-2013-Saliency Detection via Absorbing Markov Chain

18 0.79779369 425 iccv-2013-Tracking via Robust Multi-task Multi-view Joint Sparse Representation

19 0.79586041 369 iccv-2013-Saliency Detection: A Boolean Map Approach

20 0.79520708 50 iccv-2013-Analysis of Scores, Datasets, and Models in Visual Saliency Prediction