iccv iccv2013 iccv2013-372 knowledge-graph by maker-knowledge-mining

372 iccv-2013-Saliency Detection via Dense and Sparse Reconstruction


Source: pdf

Author: Xiaohui Li, Huchuan Lu, Lihe Zhang, Xiang Ruan, Ming-Hsuan Yang

Abstract: In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction errors. The image boundaries are first extracted via superpixels as likely cues for background templates, from which dense and sparse appearance models are constructed. For each image region, we first compute dense and sparse reconstruction errors. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, pixel-level saliency is computed by an integration of multi-scale reconstruction errors and refined by an object-biased Gaussian model. We apply the Bayes formula to integrate saliency measures based on dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against seventeen state-of-the-art methods in terms of precision and recall. In addition, the proposed algorithm is demonstrated to be more effective in highlighting salient objects uniformly and robust to background noise.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 For each image region, we first compute dense and sparse reconstruction errors. [sent-3, score-0.592]

2 Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. [sent-4, score-0.575]

3 Third, pixel-level saliency is computed by an integration of multi-scale reconstruction errors and refined by an object-biased Gaussian model. [sent-5, score-1.356]

4 We apply the Bayes formula to integrate saliency measures based on dense and sparse reconstruction errors. [sent-6, score-1.352]

5 Introduction Visual saliency is concerned with the distinct perceptual quality of biological systems which makes certain regions of a scene stand out from their neighbors and catch immediate attention. [sent-10, score-0.788]

6 Efficient saliency detection plays an important preprocessing role in many computer vision tasks, including segmentation, detection, recognition and compression, to name a few. [sent-13, score-0.78]

7 [13] define visual attention as the local center-surround difference and propose a saliency model based on multi-scale image features. [sent-15, score-0.754]

8 [18] propose a saliency detection algorithm by measuring the center-surround contrast of a sliding window over the entire image. [sent-17, score-0.801]

9 Recent methods [7, 8] measure global contrast-based saliency based on spatially weighted feature dissimilarities. [sent-22, score-0.733]

10 [17] formulate saliency estimation using two Gaussian filters by which color and position are respectively exploited to measure region uniqueness and distribution. [sent-24, score-0.795]

11 In [4], global saliency is computed in inverse proportion to the probability of a patch appearing in the entire scene. [sent-25, score-0.733]

12 When a foreground region is globally compared with the remaining portion of the scene (which inevitably includes the other foreground regions unless the object boundary is known), its contrast with the background is less distinct and the salient object is unlikely to be uniformly highlighted. [sent-27, score-0.578]

13 While dense or sparse representations have been separately applied to saliency detection recently [8, 4], these methods are developed for describing generic scenes. [sent-35, score-1.028]

14 In addition, each image patch is represented by bases learned from a set of natural image patches rather than bases drawn directly from the scene itself, which means that the most relevant visual information is not fully extracted for saliency detection. [sent-36, score-0.765]

15 In this work, the saliency of each image region is measured by the reconstruction errors using background templates. [sent-41, score-1.317]

16 We exploit a context-based propagation mechanism to obtain more uniform reconstruction errors over the image. [sent-42, score-0.556]

17 The saliency of each pixel is then assigned by an integration of multi-scale reconstruction errors followed by an object-biased Gaussian refinement process. [sent-43, score-1.396]

18 In addition, we present a Bayesian integration method to combine saliency maps constructed from dense and sparse reconstruction. [sent-44, score-1.183]

19 We propose an algorithm to detect salient objects by dense and sparse reconstruction using the background templates for each individual image, which computes more effective bottom-up contrast-based saliency. [sent-47, score-1.041]

20 A context-based propagation mechanism is proposed for region-based saliency detection, which uniformly highlights the salient objects and smooths the region saliency. [sent-49, score-1.127]

21 We present a Bayesian integration method to combine saliency maps, which achieves more favorable results. [sent-51, score-0.911]

22 As shown in [4], the use of both Lab and RGB color spaces leads to saliency maps with higher accuracy. [sent-56, score-0.804]
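
To make the feature construction concrete, the sketch below builds one D-dimensional descriptor per superpixel from mean Lab, mean RGB, and normalized coordinates. It is an illustrative reading, not the authors' code: the scikit-image SLIC call, the segment count, and the inclusion of coordinates are assumptions.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.segmentation import slic

def region_features(img_rgb, n_segments=200):
    """Illustrative sketch: one feature vector per superpixel, stacking
    mean Lab, mean RGB, and normalized (x, y) coordinates."""
    labels = slic(img_rgb, n_segments=n_segments, compactness=10)
    lab = rgb2lab(img_rgb)
    h, w = labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Per-pixel channels: L, a, b, R, G, B, x/w, y/h.
    chans = np.dstack([lab, img_rgb.astype(float) / 255.0,
                       xs[..., None] / w, ys[..., None] / h])
    feats = np.array([chans[labels == i].mean(axis=0)
                      for i in np.unique(labels)])
    return labels, feats
```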

23 Saliency Measure via Reconstruction Error We use both dense and sparse reconstruction errors to measure the saliency of each region which is represented by a D-dimensional feature. [sent-71, score-1.49]

24 For cluttered scenes, dense appearance models may be less effective in measuring salient objects via reconstruction errors. [sent-74, score-0.768]

25 Sparse representations can be unstable (i.e., similar regions may have different sparse coefficients), which may lead to discontinuous saliency detection results. [sent-79, score-0.919]

26 In this work, we use both representations to model regions and measure saliency based on reconstruction errors. [sent-80, score-1.112]

27 The saliency measures via dense and sparse reconstruction errors are computed as shown in Figure 1(b). [sent-81, score-1.476]

28 First, we reconstruct all the image regions based on the background templates and normalize the reconstruction errors to the range of [0, 1] . [sent-82, score-0.718]

29 Third, pixel-level saliency is computed by taking multi-scale reconstruction errors followed by an object-biased Gaussian refinement process. [sent-84, score-1.277]

30 For each region, we compute two reconstruction errors by dense and sparse representation, respectively. [sent-88, score-0.692]

31 1 Dense Reconstruction Error A segment with larger reconstruction error based on the background templates is more likely to be the foreground. [sent-91, score-0.724]

32 Based on this concern, the reconstruction error of each region is computed based on the dense appearance model generated from the background templates B = [b1, b2 , . [sent-92, score-0.855]

33 The reconstruction coefficient of segment i is β_i = U_B^T (x_i − x̄), and its dense reconstruction error is ε_i^d = ‖x_i − (U_B β_i + x̄)‖₂². [sent-104, score-0.629]
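
A minimal sketch of this dense (PCA-style) reconstruction, assuming the background template features are stacked row-wise in B; the SVD route to the eigenbasis, the optional truncation, and the closing min-max normalization to [0, 1] are illustrative choices rather than the exact implementation.

```python
import numpy as np

def dense_reconstruction_error(X, B, n_components=None):
    """Sketch: reconstruct each segment feature x_i from a PCA basis of
    the background templates B and return the normalized squared error."""
    x_bar = B.mean(axis=0)
    Bc = B - x_bar
    # Eigenvectors of the template covariance (columns of U).
    U, _, _ = np.linalg.svd(Bc.T @ Bc)
    if n_components is not None:
        U = U[:, :n_components]
    beta = (X - x_bar) @ U          # coefficients beta_i = U_B^T (x_i - x_bar)
    recon = beta @ U.T + x_bar      # dense reconstruction of each x_i
    err = np.sum((X - recon) ** 2, axis=1)
    return (err - err.min()) / (err.max() - err.min() + 1e-12)
```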

34 The saliency measure is proportional to the normalized reconstruction error (within the range of [0, 1]). [sent-107, score-1.139]

35 Figure 2(b) shows some saliency detection results via dense reconstruction. [sent-108, score-0.948]

36 The middle row of Figure 2 shows an example where some background regions have large reconstruction errors (i.e., they are mistakenly detected as salient). [sent-110, score-0.578]

37 Since all the background templates are regarded as basis functions, the sparse reconstruction error (Eq. 4) can better suppress the background compared with the dense reconstruction error, especially in cluttered images, as shown in the middle row of Figure 2. [sent-122, score-1.398]
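
One off-the-shelf way to realize the sparse counterpart: encode each segment over the background templates with an L1 penalty and score it by the residual. The SparseCoder call from scikit-learn and the regularization weight lam are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.decomposition import SparseCoder

def sparse_reconstruction_error(X, B, lam=0.01):
    """Sketch: L1-regularized codes over the background templates B
    (rows are atoms), scored by the squared residual (cf. Eq. 4)."""
    coder = SparseCoder(dictionary=B, transform_algorithm='lasso_lars',
                        transform_alpha=lam)
    alpha = coder.transform(X)               # one sparse code per segment
    err = np.sum((X - alpha @ B) ** 2, axis=1)
    return (err - err.min()) / (err.max() - err.min() + 1e-12)
```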

38 Nevertheless, there are some drawbacks in measuring saliency with sparse reconstruction errors. [sent-123, score-1.202]

39 When object segments are mistakenly included in the background templates (e.g., when objects appear at the image boundaries), their saliency measures are close to 0 due to low sparse reconstruction errors. [sent-126, score-1.233]

40 In addition, the saliency measures for the other regions are less accurate due to the inaccurate inclusion of foreground segments in the sparse basis functions. [sent-127, score-1.069]

41 Saliency maps based on dense and sparse reconstruction errors. [sent-130, score-0.642]

42 We note that the sparse reconstruction error is more robust in dealing with complicated backgrounds, while the dense reconstruction error handles object segments at image boundaries more accurately. [sent-140, score-0.802]

43 Therefore, dense and sparse reconstruction errors are complementary in measuring saliency. [sent-141, score-0.713]

44 Context-Based Error Propagation We propose a context-based error propagation method to smooth the reconstruction errors generated by dense and sparse appearance models. [sent-144, score-0.825]

45 Both dense and sparse reconstruction errors of segment i (denoted generically by ε_i below) are propagated in the same manner. [sent-145, score-0.771]

46 We first apply the K-means algorithm to cluster N image segments into K clusters via their D-dimensional features and initialize the propagated reconstruction error of segment i as ε̃_i = ε_i. [sent-148, score-0.707]

47 All the segments are sorted in descending order by their reconstruction errors and considered as multiple hypotheses. [sent-149, score-0.53]

48 The propagated reconstruction error of segment i belonging to cluster k (k = 1, 2, ..., K) is then computed by Eq. 5. [sent-151, score-0.597]

49 (c) and (d) are original and propagated dense reconstruction errors. [sent-168, score-0.568]

50 (e) and (f) are original and propagated sparse reconstruction errors. [sent-169, score-0.528]

51 The first term in Eq. 5 is the weighted average reconstruction error of the other segments in the same cluster, and the second term is the initial dense or sparse reconstruction error. [sent-171, score-1.084]
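
A rough sketch of the propagation, under stated assumptions: scikit-learn's K-means, a Gaussian weighting on within-cluster feature distance, a descending-error visiting order, and illustrative values for K, tau, and sigma2 (features are assumed to be on comparable scales); the paper's exact weights and parameters are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def propagate_errors(feats, err, K=8, tau=0.5, sigma2=0.05):
    """Sketch of context-based propagation: blend each segment's error
    with the weighted average error of its K-means cluster mates."""
    labels = KMeans(n_clusters=K, n_init=10).fit_predict(feats)
    err_prop = err.astype(float).copy()
    # Visit segments in descending order of initial error ("hypotheses").
    for i in np.argsort(-err):
        mates = np.where((labels == labels[i])
                         & (np.arange(len(err)) != i))[0]
        if mates.size == 0:
            continue
        d2 = np.sum((feats[mates] - feats[i]) ** 2, axis=1)
        w = np.exp(-d2 / (2 * sigma2))
        w /= w.sum() + 1e-12
        # First term: weighted cluster context; second: initial error.
        err_prop[i] = tau * np.dot(w, err_prop[mates]) + (1 - tau) * err[i]
    return err_prop
```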

52 Figure 3 shows three examples where the context-based propagation mechanism smooths the reconstruction errors in a cluster, thereby uniformly highlighting the image objects. [sent-177, score-0.629]

53 Nevertheless, the reconstruction errors of these segments are modified by taking the contributions of their contexts into consideration using Eq. [sent-181, score-0.581]

54 Pixel-Level Saliency For a full-resolution saliency map, we assign saliency to each pixel by integrating results from multi-scale reconstruction errors, followed by refinement with an objectbiased Gaussian model. [sent-185, score-1.938]

55 We compute and propagate both dense and sparse reconstruction errors for each scale. [sent-189, score-0.692]

56 We integrate the multi-scale reconstruction errors and compute the pixel-level reconstruction error as E(z) = Σ_{s=1}^{N_s} w_{zs} ε̃_{n_s(z)} / Σ_{s=1}^{N_s} w_{zs}, where n_s(z) denotes the segment containing pixel z at scale s and w_{zs} weights each scale by the similarity between the pixel feature f_z and its segment's feature. [sent-190, score-0.85]
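
The sketch below realizes one plausible reading of this step: every pixel averages the propagated errors of its enclosing segments across scales, each scale weighted inversely by the pixel-to-segment feature distance. The inverse-distance weight is an assumption consistent with the description, not the paper's exact formula; label maps are assumed to hold 0-based segment indices.

```python
import numpy as np

def pixel_level_error(label_maps, err_per_scale, feat_per_scale, pix_feat):
    """Sketch of multi-scale integration: a per-pixel weighted average of
    propagated segment errors over N_s scales (lists indexed by scale)."""
    h, w = label_maps[0].shape
    num = np.zeros((h, w))
    den = np.zeros((h, w))
    for labels, err, feats in zip(label_maps, err_per_scale, feat_per_scale):
        seg_feat = feats[labels]             # (h, w, D) segment feature per pixel
        d = np.linalg.norm(pix_feat - seg_feat, axis=2)
        wgt = 1.0 / (d + 1e-6)               # closer to its segment -> larger weight
        num += wgt * err[labels]
        den += wgt
    return num / den
```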

57 Saliency maps with the multi-scale integration of propagated reconstruction errors. [sent-197, score-0.626]

58 (c) and (d) are propagated dense reconstruction errors without and with integration. [sent-199, score-0.668]

59 (e) and (f) are propagated sparse reconstruction errors without and with integration. [sent-200, score-0.628]

60 Figure 4 shows some examples where objects are more precisely identified by the reconstruction errors with multi-scale integration, which suggests the effectiveness of the multi-scale integration mechanism in measuring saliency. [sent-201, score-0.692]

61 Prior studies show that there is a center bias in some saliency detection datasets [5]. [sent-205, score-0.805]

62 With the object-biased Gaussian model, the saliency of pixel z is computed by S(z) = G_o(z) ∗ E(z). [sent-219, score-0.761]
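
A minimal sketch of the refinement: center a 2-D Gaussian at the saliency-weighted centroid of E (the "object bias", instead of the image center) and multiply it into the map. The bandwidth fraction is an illustrative parameter.

```python
import numpy as np

def object_biased_refinement(E, sigma_frac=0.25):
    """Sketch: S(z) = Go(z) * E(z) with Go centered at the weighted
    centroid (xo, yo) of the error map E."""
    h, w = E.shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = E.sum() + 1e-12
    xo, yo = (E * xs).sum() / total, (E * ys).sum() / total
    sx, sy = sigma_frac * w, sigma_frac * h
    Go = np.exp(-((xs - xo) ** 2 / (2 * sx ** 2)
                  + (ys - yo) ** 2 / (2 * sy ** 2)))
    return Go * E
```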

63 Comparing the two refined saliency maps (via dense and sparse reconstruction) in the bottom row, the proposed object-biased Gaussian model renders a more accurate object center and therefore better refines the saliency detection results. [sent-221, score-2.249]

64 As noted above, the saliency measures based on dense and sparse reconstruction errors are complementary to each other. [sent-224, score-1.452]

65 To integrate both the saliency measures, we propose an integration method by Bayesian inference. [sent-225, score-0.885]

66 E_d and E_s are the multi-scale integrated dense and sparse reconstruction error maps, respectively. [sent-227, score-0.715]

67 Recently, the Bayes formula has been used to measure saliency by the posterior probability in [18, 20, 22]: p(F|H(z)) = p(F) p(H(z)|F) / [p(F) p(H(z)|F) + (1 − p(F)) p(H(z)|B)] (10), where the prior probability p(F) is uniform [18] or a saliency map [20, 22] and H(z) is a feature vector of pixel z. [sent-228, score-1.619]

68 In this work, we take one saliency map as the prior and use the other one instead of Lab color information to compute the likelihoods, which integrates more diverse information from different saliency maps. [sent-232, score-1.55]

69 First, we threshold the map Si by its mean saliency value and obtain its foreground and background regions described by Fi and Bi, respectively. [sent-236, score-0.983]
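
A sketch of this construction, assuming both maps are normalized to [0, 1]: threshold the prior map S_i at its mean, histogram the other map S_j inside the resulting foreground and background masks as likelihoods, and apply Eq. 10 with S_i itself as the pixel-wise prior. The 256-bin histogram is an illustrative choice.

```python
import numpy as np

def posterior_with_prior(Si, Sj, nbins=256):
    """Sketch of Eq. 10 with a saliency map as prior: likelihoods are
    normalized histograms of Sj over Si's foreground/background."""
    fg = Si >= Si.mean()
    bins = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.clip(np.digitize(Sj, bins) - 1, 0, nbins - 1)
    pF = np.histogram(Sj[fg], bins=bins)[0] / (fg.sum() + 1e-12)
    pB = np.histogram(Sj[~fg], bins=bins)[0] / ((~fg).sum() + 1e-12)
    lF, lB = pF[idx], pB[idx]
    return Si * lF / (Si * lF + (1 - Si) * lB + 1e-12)
```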

70 The two saliency measures via dense and sparse reconstruction are respectively denoted by S1 and S2. [sent-265, score-1.376]

71 Similarly, the posterior saliency with Sj as the prior is computed. [sent-266, score-0.806]

72 We use these two posterior probabilities to compute an integrated saliency map S_B(S_1(z), S_2(z)) based on Bayesian integration: S_B(S_1(z), S_2(z)) = p(F_1|S_2(z)) + p(F_2|S_1(z)) (14). [sent-267, score-0.836]
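
With the posterior_with_prior sketch above, Eq. 14 reduces to a symmetric sum of the two posteriors; the closing renormalization is only for display and is not part of the formula.

```python
def bayesian_integration(S1, S2):
    """Sketch of Eq. 14: SB = p(F1|S2(z)) + p(F2|S1(z))."""
    SB = posterior_with_prior(S1, S2) + posterior_with_prior(S2, S1)
    return SB / (SB.max() + 1e-12)   # illustrative rescaling to [0, 1]
```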

73 The proposed Bayesian integration of saliency maps is illustrated in Figure 6. [sent-268, score-0.935]

74 DE: dense reconstruction error; DEP: propagated DE; MDEP: multi-scale integrated DEP; MDEPG: Gaussian refined MDEP. [sent-288, score-0.656]

75 SE: sparse reconstruction error; SEP: propagated SE; MSEP: multi-scale integrated SEP; MSEPG: Gaussian refined MSEP. [sent-289, score-0.616]

76 (c) F-measure curves of the proposed Bayesian integrated saliency S_B and four other integrations of MDEPG and MSEPG. [sent-293, score-1.588]

77 We vary both parameters in the experiments and observe that the saliency results are insensitive to either parameter. [sent-306, score-0.733]

78 We evaluate all saliency detection algorithms in terms of precision-recall curve and F-measure. [sent-313, score-0.78]

79 For each method, a binary map is obtained by segmenting each saliency map with a given threshold T ∈ [0, 255] and then compared with the ground truth mask to compute the precision and recall for an image. [sent-314, score-0.846]
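
A small sketch of the fixed-threshold protocol described here; it assumes a grayscale saliency map (rescaled to 0-255) and a binary ground-truth mask.

```python
import numpy as np

def pr_curve(sal, gt):
    """Sketch: precision/recall of the binarized map at every T in [0, 255]."""
    sal = (255 * (sal - sal.min())
           / (sal.max() - sal.min() + 1e-12)).astype(np.uint8)
    gt = gt.astype(bool)
    prec, rec = [], []
    for T in range(256):
        pred = sal >= T
        tp = np.logical_and(pred, gt).sum()
        prec.append(tp / (pred.sum() + 1e-12))
        rec.append(tp / (gt.sum() + 1e-12))
    return np.array(prec), np.array(rec)
```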

80 We first use the mean-shift algorithm to segment the original image and extract the mean saliency of each segment. [sent-317, score-0.812]

81 We then obtain the binary map by thresholding the segments using twice the mean saliency value. [sent-318, score-0.851]
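
A sketch of the adaptive protocol, taking a precomputed segmentation (e.g., from mean-shift) as input; beta2 = 0.3 is the F-measure weighting customary in this literature and an assumption here, since the summary does not state it.

```python
import numpy as np

def adaptive_fmeasure(sal, segments, gt, beta2=0.3):
    """Sketch: per-segment mean saliency, binarized at twice the map's
    mean saliency, scored by the weighted F-measure."""
    seg_sal = np.zeros_like(sal, dtype=float)
    for i in np.unique(segments):
        m = segments == i
        seg_sal[m] = sal[m].mean()
    pred = seg_sal >= 2.0 * sal.mean()
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    p = tp / (pred.sum() + 1e-12)
    r = tp / (gt.sum() + 1e-12)
    return (1 + beta2) * p * r / (beta2 * p + r + 1e-12)
```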

82 For each image in the MSRA database which is labeled with a bounding box (rather than precise object contour), we fit a rectangle to the thresholded saliency map for evaluation, similar to [5]. [sent-322, score-0.765]

83 We evaluate the contributions of the context-based propagation, multi-scale reconstruction error integration, and object-biased Gaussian refinement, respectively, in Figure 7. [sent-327, score-0.622]

84 Figure 7(a) shows that the sparse reconstruction error based on background templates achieves better accuracy in detecting salient objects than RC11 [7], while the dense one is comparable with it. [sent-329, score-1.083]

85 We exploit segment contexts through K-means clustering to smooth the reconstruction errors and minimize the detection mistakes introduced by object segments in the background templates, with improved performance (Figure 7(a)). [sent-345, score-0.946]

86 The reconstruction error of a pixel is assigned by integrating the multi-scale reconstruction errors, which helps generate more accurate and uniform saliency maps. [sent-346, score-1.556]

87 In Section 4, we discuss how, given the same prior in the Bayes formula, the posterior probability can be more accurate when the likelihood is computed from a saliency map rather than from the CIELab color space. [sent-350, score-0.892]

88 Figure 8(a) shows that with the saliency via dense reconstruction as the prior, the result with the likelihood based on sparse reconstruction (Dense-Sparse) is more accurate than that with the CIELab color space (Dense-Lab). [sent-352, score-1.747]

89 When using the saliency map based on sparse reconstruction as the prior, the result with the likelihood based on dense reconstruction (Sparse-Dense) is comparable to that with the CIELab color space (Sparse-Lab), as shown in Figure 8(b). [sent-353, score-1.755]

90 We evaluate the performance of the Bayesian integrated saliency map S_B by comparing it with the integration strategies formulated in [5], which combine the individual saliency maps via a normalized summation S_c. [sent-356, score-0.978]

91 Figure 8(c) shows that the F-measure of the proposed Bayesian integrated saliency map is higher than that of the other methods at most thresholds, which demonstrates the effectiveness of Bayesian integration. [sent-363, score-0.826]

92 We present the evaluation results of the proposed method compared with the state-of-the-art saliency detection methods on the ASD database in Figure 9, and the MSRA and SOD databases in Figure 10. [sent-365, score-0.801]

93 Comparisons of saliency maps: SR [11], FT [2], CA [9], RA [18], DW [8], CB [14], RC [7], SVO [6], LR [19], DSR, DSR cut, and GT. [sent-368, score-0.755]

94 DSR cut: the cut (segmentation) map obtained using the generated saliency map. [sent-371, score-0.787]

95 Figure 11 shows that our model generates more accurate saliency maps with uniformly highlighted foreground and well suppressed background. [sent-374, score-0.908]

96 Conclusions In this paper, we present a saliency detection algorithm via dense and sparse reconstruction based on the background templates. [sent-384, score-1.495]

97 The pixel-level saliency is then computed by an integration of multi-scale reconstruction errors followed by an object-biased Gaussian refinement. [sent-386, score-1.329]

98 To combine the two saliency maps via dense and sparse reconstruction, we introduce a Bayesian integration method which performs better than the conventional integration strategy. [sent-387, score-1.359]

99 Exploiting local and global patch rarities for saliency detection. [sent-424, score-0.733]

100 Fusing generic objectness and visual saliency for salient object detection. [sent-442, score-0.898]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('saliency', 0.733), ('reconstruction', 0.344), ('salient', 0.165), ('integration', 0.152), ('dense', 0.144), ('templates', 0.14), ('msra', 0.126), ('asd', 0.105), ('sparse', 0.104), ('seventeen', 0.102), ('errors', 0.1), ('background', 0.099), ('segments', 0.086), ('foreground', 0.084), ('bayesian', 0.082), ('propagated', 0.08), ('segment', 0.079), ('sod', 0.072), ('sj', 0.072), ('dsr', 0.067), ('error', 0.062), ('integrated', 0.061), ('objectbiased', 0.061), ('contexts', 0.051), ('ub', 0.05), ('maps', 0.05), ('cielab', 0.047), ('detection', 0.047), ('gaussian', 0.047), ('propagation', 0.046), ('mechanism', 0.046), ('posterior', 0.042), ('uniformly', 0.041), ('dep', 0.041), ('mdepg', 0.041), ('region', 0.041), ('pages', 0.04), ('refinement', 0.039), ('borji', 0.038), ('sb', 0.038), ('sep', 0.036), ('regions', 0.035), ('itti', 0.034), ('likelihood', 0.033), ('sc', 0.033), ('achanta', 0.032), ('bases', 0.032), ('cluster', 0.032), ('map', 0.032), ('recalls', 0.032), ('si', 0.031), ('superpixels', 0.031), ('prior', 0.031), ('smooths', 0.03), ('bayes', 0.029), ('mistakenly', 0.029), ('fz', 0.029), ('perazzi', 0.029), ('boundary', 0.029), ('recall', 0.028), ('pixel', 0.028), ('estrada', 0.028), ('measures', 0.027), ('bm', 0.027), ('refined', 0.027), ('zs', 0.026), ('favorable', 0.026), ('lu', 0.026), ('xo', 0.025), ('center', 0.025), ('objects', 0.025), ('bf', 0.025), ('multiscale', 0.025), ('appearance', 0.025), ('via', 0.024), ('precisions', 0.024), ('rahtu', 0.024), ('neuroscience', 0.024), ('yo', 0.023), ('slic', 0.023), ('bar', 0.023), ('refines', 0.022), ('sn', 0.022), ('cut', 0.022), ('highlighting', 0.022), ('bi', 0.021), ('boundaries', 0.021), ('precision', 0.021), ('databases', 0.021), ('measuring', 0.021), ('koch', 0.021), ('renders', 0.021), ('color', 0.021), ('attention', 0.021), ('effective', 0.02), ('stand', 0.02), ('nsfc', 0.02), ('likelihoods', 0.02), ('uniform', 0.02), ('torso', 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999976 372 iccv-2013-Saliency Detection via Dense and Sparse Reconstruction

Author: Xiaohui Li, Huchuan Lu, Lihe Zhang, Xiang Ruan, Ming-Hsuan Yang

Abstract: In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction errors. The image boundaries are first extracted via superpixels as likely cues for background templates, from which dense and sparse appearance models are constructed. For each image region, we first compute dense and sparse reconstruction errors. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, pixel-level saliency is computed by an integration of multi-scale reconstruction errors and refined by an object-biased Gaussian model. We apply the Bayes formula to integrate saliency measures based on dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against seventeen state-of-the-art methods in terms of precision and recall. In addition, the proposed algorithm is demonstrated to be more effective in highlighting salient objects uniformly and robust to background noise.

2 0.65755612 71 iccv-2013-Category-Independent Object-Level Saliency Detection

Author: Yangqing Jia, Mei Han

Abstract: It is known that purely low-level saliency cues such as frequency does not lead to a good salient object detection result, requiring high-level knowledge to be adopted for successful discovery of task-independent salient objects. In this paper, we propose an efficient way to combine such high-level saliency priors and low-level appearance models. We obtain the high-level saliency prior with the objectness algorithm to find potential object candidates without the need of category information, and then enforce the consistency among the salient regions using a Gaussian MRF with the weights scaled by diverse density that emphasizes the influence of potential foreground pixels. Our model obtains saliency maps that assign high scores for the whole salient object, and achieves state-of-the-art performance on benchmark datasets covering various foreground statistics.

3 0.53416485 91 iccv-2013-Contextual Hypergraph Modeling for Salient Object Detection

Author: Xi Li, Yao Li, Chunhua Shen, Anthony Dick, Anton Van_Den_Hengel

Abstract: Salient object detection aims to locate objects that capture human attention within images. Previous approaches often pose this as a problem of image contrast analysis. In this work, we model an image as a hypergraph that utilizes a set of hyperedges to capture the contextual properties of image pixels or regions. As a result, the problem of salient object detection becomes one of finding salient vertices and hyperedges in the hypergraph. The main advantage of hypergraph modeling is that it takes into account each pixel’s (or region ’s) affinity with its neighborhood as well as its separation from image background. Furthermore, we propose an alternative approach based on centerversus-surround contextual contrast analysis, which performs salient object detection by optimizing a cost-sensitive support vector machine (SVM) objective function. Experimental results on four challenging datasets demonstrate the effectiveness of the proposed approaches against the stateof-the-art approaches to salient object detection.

4 0.46327072 50 iccv-2013-Analysis of Scores, Datasets, and Models in Visual Saliency Prediction

Author: Ali Borji, Hamed R. Tavakoli, Dicky N. Sihite, Laurent Itti

Abstract: Significant recent progress has been made in developing high-quality saliency models. However, less effort has been undertaken on fair assessment of these models, over large standardized datasets and correctly addressing confounding factors. In this study, we pursue a critical and quantitative look at challenges (e.g., center-bias, map smoothing) in saliency modeling and the way they affect model accuracy. We quantitatively compare 32 state-of-the-art models (using the shuffled AUC score to discount center-bias) on 4 benchmark eye movement datasets, for prediction of human fixation locations and scanpath sequence. We also account for the role of map smoothing. We find that, although model rankings vary, some (e.g., AWS, LG, AIM, and HouNIPS) consistently outperform other models over all datasets. Some models work well for prediction of both fixation locations and scanpath sequence (e.g., Judd, GBVS). Our results show low prediction accuracy for models over emotional stimuli from the NUSEF dataset. Our last benchmark, for the first time, gauges the ability of models to decode the stimulus category from statistics of fixations, saccades, and model saliency values at fixated locations. In this test, ITTI and AIM models win over other models. Our benchmark provides a comprehensive high-level picture of the strengths and weaknesses of many popular models, and suggests future research directions in saliency modeling.

5 0.44142917 373 iccv-2013-Saliency and Human Fixations: State-of-the-Art and Study of Comparison Metrics

Author: Nicolas Riche, Matthieu Duvinage, Matei Mancas, Bernard Gosselin, Thierry Dutoit

Abstract: Visual saliency has been an increasingly active research area in the last ten years with dozens of saliency models recently published. Nowadays, one of the big challenges in the field is to find a way to fairly evaluate all of these models. In this paper, on human eye fixations ,we compare the ranking of 12 state-of-the art saliency models using 12 similarity metrics. The comparison is done on Jian Li ’s database containing several hundreds of natural images. Based on Kendall concordance coefficient, it is shown that some of the metrics are strongly correlated leading to a redundancy in the performance metrics reported in the available benchmarks. On the other hand, other metrics provide a more diverse picture of models ’ overall performance. As a recommendation, three similarity metrics should be used to obtain a complete point of view of saliency model performance.

6 0.42393997 396 iccv-2013-Space-Time Robust Representation for Action Recognition

7 0.37581056 374 iccv-2013-Salient Region Detection by UFO: Uniqueness, Focusness and Objectness

8 0.37488616 371 iccv-2013-Saliency Detection via Absorbing Markov Chain

9 0.34641117 137 iccv-2013-Efficient Salient Region Detection with Soft Image Abstraction

10 0.34072423 370 iccv-2013-Saliency Detection in Large Point Sets

11 0.32730779 217 iccv-2013-Initialization-Insensitive Visual Tracking through Voting with Salient Local Features

12 0.30644807 369 iccv-2013-Saliency Detection: A Boolean Map Approach

13 0.22031458 381 iccv-2013-Semantically-Based Human Scanpath Estimation with HMMs

14 0.15963137 411 iccv-2013-Symbiotic Segmentation and Part Localization for Fine-Grained Categorization

15 0.12808712 74 iccv-2013-Co-segmentation by Composition

16 0.11867493 366 iccv-2013-STAR3D: Simultaneous Tracking and Reconstruction of 3D Objects Using RGB-D Data

17 0.11399191 442 iccv-2013-Video Segmentation by Tracking Many Figure-Ground Segments

18 0.11198549 325 iccv-2013-Predicting Primary Gaze Behavior Using Social Saliency Fields

19 0.10025109 121 iccv-2013-Discriminatively Trained Templates for 3D Object Detection: A Real Time Scalable Approach

20 0.095029734 37 iccv-2013-Action Recognition and Localization by Hierarchical Space-Time Segments


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.224), (1, -0.121), (2, 0.628), (3, -0.276), (4, -0.227), (5, -0.024), (6, 0.027), (7, -0.072), (8, 0.013), (9, 0.028), (10, -0.026), (11, 0.051), (12, 0.001), (13, -0.002), (14, -0.011), (15, -0.088), (16, 0.121), (17, -0.018), (18, -0.057), (19, 0.082), (20, 0.011), (21, -0.011), (22, 0.022), (23, 0.006), (24, 0.012), (25, -0.019), (26, 0.025), (27, -0.001), (28, -0.003), (29, -0.004), (30, -0.012), (31, 0.016), (32, 0.019), (33, -0.046), (34, -0.062), (35, 0.029), (36, -0.042), (37, 0.045), (38, 0.016), (39, -0.045), (40, -0.023), (41, 0.001), (42, -0.012), (43, 0.008), (44, -0.008), (45, 0.01), (46, -0.036), (47, 0.005), (48, -0.011), (49, -0.007)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96674609 372 iccv-2013-Saliency Detection via Dense and Sparse Reconstruction

Author: Xiaohui Li, Huchuan Lu, Lihe Zhang, Xiang Ruan, Ming-Hsuan Yang

Abstract: In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction errors. The image boundaries are first extracted via superpixels as likely cues for background templates, from which dense and sparse appearance models are constructed. For each image region, we first compute dense and sparse reconstruction errors. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, pixel-level saliency is computed by an integration of multi-scale reconstruction errors and refined by an object-biased Gaussian model. We apply the Bayes formula to integrate saliency measures based on dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against seventeen state-of-the-art methods in terms of precision and recall. In addition, the proposed algorithm is demonstrated to be more effective in highlighting salient objects uniformly and robust to background noise.

2 0.94644797 91 iccv-2013-Contextual Hypergraph Modeling for Salient Object Detection

Author: Xi Li, Yao Li, Chunhua Shen, Anthony Dick, Anton Van_Den_Hengel

Abstract: Salient object detection aims to locate objects that capture human attention within images. Previous approaches often pose this as a problem of image contrast analysis. In this work, we model an image as a hypergraph that utilizes a set of hyperedges to capture the contextual properties of image pixels or regions. As a result, the problem of salient object detection becomes one of finding salient vertices and hyperedges in the hypergraph. The main advantage of hypergraph modeling is that it takes into account each pixel’s (or region ’s) affinity with its neighborhood as well as its separation from image background. Furthermore, we propose an alternative approach based on centerversus-surround contextual contrast analysis, which performs salient object detection by optimizing a cost-sensitive support vector machine (SVM) objective function. Experimental results on four challenging datasets demonstrate the effectiveness of the proposed approaches against the stateof-the-art approaches to salient object detection.

3 0.93034661 71 iccv-2013-Category-Independent Object-Level Saliency Detection

Author: Yangqing Jia, Mei Han

Abstract: It is known that purely low-level saliency cues such as frequency does not lead to a good salient object detection result, requiring high-level knowledge to be adopted for successful discovery of task-independent salient objects. In this paper, we propose an efficient way to combine such high-level saliency priors and low-level appearance models. We obtain the high-level saliency prior with the objectness algorithm to find potential object candidates without the need of category information, and then enforce the consistency among the salient regions using a Gaussian MRF with the weights scaled by diverse density that emphasizes the influence of potential foreground pixels. Our model obtains saliency maps that assign high scores for the whole salient object, and achieves state-of-the-art performance on benchmark datasets covering various foreground statistics.

4 0.92266357 369 iccv-2013-Saliency Detection: A Boolean Map Approach

Author: Jianming Zhang, Stan Sclaroff

Abstract: A novel Boolean Map based Saliency (BMS) model is proposed. An image is characterized by a set of binary images, which are generated by randomly thresholding the image ’s color channels. Based on a Gestalt principle of figure-ground segregation, BMS computes saliency maps by analyzing the topological structure of Boolean maps. BMS is simple to implement and efficient to run. Despite its simplicity, BMS consistently achieves state-of-the-art performance compared with ten leading methods on five eye tracking datasets. Furthermore, BMS is also shown to be advantageous in salient object detection.

5 0.9176808 374 iccv-2013-Salient Region Detection by UFO: Uniqueness, Focusness and Objectness

Author: Peng Jiang, Haibin Ling, Jingyi Yu, Jingliang Peng

Abstract: The goal of saliency detection is to locate important pixels or regions in an image which attract humans ’ visual attention the most. This is a fundamental task whose output may serve as the basis for further computer vision tasks like segmentation, resizing, tracking and so forth. In this paper we propose a novel salient region detection algorithm by integrating three important visual cues namely uniqueness, focusness and objectness (UFO). In particular, uniqueness captures the appearance-derived visual contrast; focusness reflects the fact that salient regions are often photographed in focus; and objectness helps keep completeness of detected salient regions. While uniqueness has been used for saliency detection for long, it is new to integrate focusness and objectness for this purpose. In fact, focusness and objectness both provide important saliency information complementary of uniqueness. In our experiments using public benchmark datasets, we show that, even with a simple pixel level combination of the three components, the proposed approach yields significant improve- ment compared with previously reported methods.

6 0.89709151 50 iccv-2013-Analysis of Scores, Datasets, and Models in Visual Saliency Prediction

7 0.88796449 373 iccv-2013-Saliency and Human Fixations: State-of-the-Art and Study of Comparison Metrics

8 0.87152576 370 iccv-2013-Saliency Detection in Large Point Sets

9 0.85834688 371 iccv-2013-Saliency Detection via Absorbing Markov Chain

10 0.84753561 137 iccv-2013-Efficient Salient Region Detection with Soft Image Abstraction

11 0.7623418 396 iccv-2013-Space-Time Robust Representation for Action Recognition

12 0.65382808 217 iccv-2013-Initialization-Insensitive Visual Tracking through Voting with Salient Local Features

13 0.4958114 381 iccv-2013-Semantically-Based Human Scanpath Estimation with HMMs

14 0.38653922 74 iccv-2013-Co-segmentation by Composition

15 0.34719634 325 iccv-2013-Predicting Primary Gaze Behavior Using Social Saliency Fields

16 0.32573023 411 iccv-2013-Symbiotic Segmentation and Part Localization for Fine-Grained Categorization

17 0.3044799 56 iccv-2013-Automatic Registration of RGB-D Scans via Salient Directions

18 0.26526451 186 iccv-2013-GrabCut in One Cut

19 0.25494149 326 iccv-2013-Predicting Sufficient Annotation Strength for Interactive Foreground Segmentation

20 0.23844109 151 iccv-2013-Exploiting Reflection Change for Automatic Reflection Removal


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.067), (7, 0.03), (26, 0.095), (31, 0.035), (42, 0.092), (48, 0.014), (64, 0.069), (73, 0.046), (89, 0.197), (97, 0.233)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.93781888 373 iccv-2013-Saliency and Human Fixations: State-of-the-Art and Study of Comparison Metrics

Author: Nicolas Riche, Matthieu Duvinage, Matei Mancas, Bernard Gosselin, Thierry Dutoit

Abstract: Visual saliency has been an increasingly active research area in the last ten years with dozens of saliency models recently published. Nowadays, one of the big challenges in the field is to find a way to fairly evaluate all of these models. In this paper, on human eye fixations ,we compare the ranking of 12 state-of-the art saliency models using 12 similarity metrics. The comparison is done on Jian Li ’s database containing several hundreds of natural images. Based on Kendall concordance coefficient, it is shown that some of the metrics are strongly correlated leading to a redundancy in the performance metrics reported in the available benchmarks. On the other hand, other metrics provide a more diverse picture of models ’ overall performance. As a recommendation, three similarity metrics should be used to obtain a complete point of view of saliency model performance.

2 0.89345258 347 iccv-2013-Recursive Estimation of the Stein Center of SPD Matrices and Its Applications

Author: Hesamoddin Salehian, Guang Cheng, Baba C. Vemuri, Jeffrey Ho

Abstract: Symmetric positive-definite (SPD) matrices are ubiquitous in Computer Vision, Machine Learning and Medical Image Analysis. Finding the center/average of a population of such matrices is a common theme in many algorithms such as clustering, segmentation, principal geodesic analysis, etc. The center of a population of such matrices can be defined using a variety of distance/divergence measures as the minimizer of the sum of squared distances/divergences from the unknown center to the members of the population. It is well known that the computation of the Karcher mean for the space of SPD matrices which is a negativelycurved Riemannian manifold is computationally expensive. Recently, the LogDet divergence-based center was shown to be a computationally attractive alternative. However, the LogDet-based mean of more than two matrices can not be computed in closed form, which makes it computationally less attractive for large populations. In this paper we present a novel recursive estimator for center based on the Stein distance which is the square root of the LogDet di– vergence that is significantly faster than the batch mode computation of this center. The key theoretical contribution is a closed-form solution for the weighted Stein center of two SPD matrices, which is used in the recursive computation of the Stein center for a population of SPD matrices. Additionally, we show experimental evidence of the convergence of our recursive Stein center estimator to the batch mode Stein center. We present applications of our recursive estimator to K-means clustering and image indexing depicting significant time gains over corresponding algorithms that use the batch mode computations. For the latter application, we develop novel hashing functions using the Stein distance and apply it to publicly available data sets, and experimental results have shown favorable com– ∗This research was funded in part by the NIH grant NS066340 to BCV. †Corresponding author parisons to other competing methods.

3 0.86118323 412 iccv-2013-Synergistic Clustering of Image and Segment Descriptors for Unsupervised Scene Understanding

Author: Daniel M. Steinberg, Oscar Pizarro, Stefan B. Williams

Abstract: With the advent of cheap, high fidelity, digital imaging systems, the quantity and rate of generation of visual data can dramatically outpace a humans ability to label or annotate it. In these situations there is scope for the use of unsupervised approaches that can model these datasets and automatically summarise their content. To this end, we present a totally unsupervised, and annotation-less, model for scene understanding. This model can simultaneously cluster whole-image and segment descriptors, therebyforming an unsupervised model of scenes and objects. We show that this model outperforms other unsupervised models that can only cluster one source of information (image or segment) at once. We are able to compare unsupervised and supervised techniques using standard measures derived from confusion matrices and contingency tables. This shows that our unsupervised model is competitive with current supervised and weakly-supervised models for scene understanding on standard datasets. We also demonstrate our model operating on a dataset with more than 100,000 images col- lected by an autonomous underwater vehicle.

same-paper 4 0.84993625 372 iccv-2013-Saliency Detection via Dense and Sparse Reconstruction

Author: Xiaohui Li, Huchuan Lu, Lihe Zhang, Xiang Ruan, Ming-Hsuan Yang

Abstract: In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction errors. The image boundaries are first extracted via superpixels as likely cues for background templates, from which dense and sparse appearance models are constructed. For each image region, we first compute dense and sparse reconstruction errors. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, pixel-level saliency is computed by an integration of multi-scale reconstruction errors and refined by an object-biased Gaussian model. We apply the Bayes formula to integrate saliency measures based on dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against seventeen state-of-the-art methods in terms of precision and recall. In addition, the proposed algorithm is demonstrated to be more effective in highlighting salient objects uniformly and robust to background noise.

5 0.84247535 227 iccv-2013-Large-Scale Image Annotation by Efficient and Robust Kernel Metric Learning

Author: Zheyun Feng, Rong Jin, Anil Jain

Abstract: One of the key challenges in search-based image annotation models is to define an appropriate similarity measure between images. Many kernel distance metric learning (KML) algorithms have been developed in order to capture the nonlinear relationships between visual features and semantics ofthe images. Onefundamental limitation in applying KML to image annotation is that it requires converting image annotations into binary constraints, leading to a significant information loss. In addition, most KML algorithms suffer from high computational cost due to the requirement that the learned matrix has to be positive semi-definitive (PSD). In this paper, we propose a robust kernel metric learning (RKML) algorithm based on the regression technique that is able to directly utilize image annotations. The proposed method is also computationally more efficient because PSD property is automatically ensured by regression. We provide the theoretical guarantee for the proposed algorithm, and verify its efficiency and effectiveness for image annotation by comparing it to state-of-the-art approaches for both distance metric learning and image annotation. ,

6 0.83700585 425 iccv-2013-Tracking via Robust Multi-task Multi-view Joint Sparse Representation

7 0.82875001 20 iccv-2013-A Max-Margin Perspective on Sparse Representation-Based Classification

8 0.82657826 50 iccv-2013-Analysis of Scores, Datasets, and Models in Visual Saliency Prediction

9 0.81114233 369 iccv-2013-Saliency Detection: A Boolean Map Approach

10 0.79440182 371 iccv-2013-Saliency Detection via Absorbing Markov Chain

11 0.7901175 71 iccv-2013-Category-Independent Object-Level Saliency Detection

12 0.78867877 91 iccv-2013-Contextual Hypergraph Modeling for Salient Object Detection

13 0.77407706 396 iccv-2013-Space-Time Robust Representation for Action Recognition

14 0.76508576 359 iccv-2013-Robust Object Tracking with Online Multi-lifespan Dictionary Learning

15 0.75564587 338 iccv-2013-Randomized Ensemble Tracking

16 0.75087094 137 iccv-2013-Efficient Salient Region Detection with Soft Image Abstraction

17 0.75067472 265 iccv-2013-Mining Motion Atoms and Phrases for Complex Action Recognition

18 0.74458015 217 iccv-2013-Initialization-Insensitive Visual Tracking through Voting with Salient Local Features

19 0.74375606 180 iccv-2013-From Where and How to What We See

20 0.74355 382 iccv-2013-Semi-dense Visual Odometry for a Monocular Camera