iccv iccv2013 iccv2013-370 knowledge-graph by maker-knowledge-mining

370 iccv-2013-Saliency Detection in Large Point Sets


Source: pdf

Author: Elizabeth Shtrom, George Leifman, Ayellet Tal

Abstract: While saliency in images has been extensively studied in recent years, there is very little work on saliency of point sets. This is despite the fact that point sets and range data are becoming ever more widespread and have myriad applications. In this paper we present an algorithm for detecting the salient points in unorganized 3D point sets. Our algorithm is designed to cope with extremely large sets, which may contain tens of millions of points. Such data is typical of urban scenes, which have recently become commonly available on the web. No previous work has handled such data. For general data sets, we show that our results are competitive with those of saliency detection of surfaces, although we do not have any connectivity information. We demonstrate the utility of our algorithm in two applications: producing a set of the most informative viewpoints and suggesting an informative city tour given a city scan.

Reference: text


Summary: the most important sentences generated by the tf-idf model

sentIndex sentText sentNum sentScore

1 Detecting the salient features in a point set of an urban scene. [sent-11, score-0.334]

2 The most salient points, such as the rosette, are colored in yellow and red. [sent-13, score-0.222]

3 The least salient points, belonging to the floor and the feature-less walls, are colored accordingly; (b) finding the most informative viewpoint, displaying the most interesting buildings of the city: St. Peter's Cathedral and Bremen's town hall. [sent-14, score-0.663]

4 While saliency in images has been extensively studied in recent years, there is very little work on saliency of point sets. [sent-18, score-0.891]

5 In this paper we present an algorithm for detecting the salient points in unorganized 3D point sets. [sent-20, score-0.383]

6 For general data sets, we show that our results are competitive with those of saliency detection of surfaces, although we do not have any connectivity information. [sent-24, score-0.484]

7 We demonstrate the utility of our algorithm in two applications: producing a set of the most informative viewpoints and suggesting an informative city tour given a city scan. [sent-25, score-0.745]

8 Less work addresses saliency of 3D surfaces [5, 7, 17] and only a few papers handle point sets [1, 16]. [sent-30, score-0.573]

9 Extending the existing techniques of saliency detection for 3D surfaces to operate directly on large point sets is not trivial. [sent-35, score-0.605]

10 Then, association is applied, grouping salient points and emphasizing the dragon’s facial features. [sent-39, score-0.288]

11 Next, the high-level distinctness procedure detects larger regions, such as the tail and the mouth. [sent-40, score-0.587]

12 Finally, the maps are integrated to produce the final saliency map. [sent-41, score-0.403]

13 Similarly to previous saliency detection algorithms, which operate on other types of data, our saliency detection algorithm is based on distinctness. [sent-44, score-0.87]

14 The challenge here is to look for a distinctness definition that suits point sets and is computationally efficient. [sent-45, score-0.69]

15 Therefore, points that are close to the foci of attention are more salient than faraway points. [sent-50, score-0.366]

16 We propose a novel algorithm that detects salient points in a 3D point set (Figure 1), by realizing the considerations mentioned above. [sent-51, score-0.356]

17 Additionally, to take the distance to foci into account, we adjust the point distinctness according to this distance. [sent-54, score-0.679]

18 Our algorithm is general and competes favorably with state-of-the-art techniques for saliency detection of general objects, which typically consist of less than a million points. [sent-55, score-0.498]

19 However, it also copes with point sets of urban scans, containing tens of millions of noisy points. [sent-56, score-0.253]

20 We demonstrate the utility of our saliency maps in two applications. [sent-57, score-0.442]

21 The first application produces a set of the most informative viewpoints for a given point set, maximizing the accumulative viewed saliency. [sent-58, score-0.388]

22 Second, for urban scenes, we construct an informative tour in the city, which maximizes the interesting area viewed by the tourist. [sent-59, score-0.394]

23 First, we propose a novel algorithm for detecting the salient points in large point sets (Sections 3-5). [sent-61, score-0.381]

24 General Approach Given a point set, our goal is to efficiently compute its saliency map. [sent-64, score-0.488]

25 A point is considered distinct if its descriptor is dissimilar to all other point descriptors of the set. [sent-67, score-0.329]

26 Taking into account the fact that object recognition is performed hierarchically, from local representations to abstract ones, our saliency detection algorithm analyzes a scene hierarchically. [sent-69, score-0.435]

27 In particular, distinctness should be computed in a multi-level manner. [sent-70, score-0.539]

28 In the low level, delicate unique features are highlighted, while in the high level, the distinctness of entire semantic parts is detected. [sent-72, score-0.627]

29 Finally, we wish to look for salient regions, rather than for isolated points [29]. [sent-73, score-0.223]

30 Therefore, we apply point association, which regards the regions near the foci of attention as more interesting than faraway regions. [sent-75, score-0.228]

31 Then, we apply association, Alow, which increases the saliency in the neighborhood of the most distinct points. [sent-78, score-0.497]

32 Finally, the above three components are integrated into the final saliency map, S, defined for a point pi as follows: S(pi) = 1/2 (Dlow(pi) + Alow(pi)) + 1/2 Dhigh(pi). [sent-80, score-0.624]
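The closing formula above is reconstructed from a truncated original, so treat the exact weighting as an assumption. A minimal NumPy sketch of the integration step, with the three components supplied as per-point arrays (the helper name final_saliency is ours):

```python
import numpy as np

def final_saliency(d_low, a_low, d_high):
    """Integrate the three per-point maps into the final saliency S.

    Assumes the reconstructed weighting above: the two low-level terms
    are averaged and combined half-and-half with the high-level term.
    """
    d_low, a_low, d_high = (np.asarray(x, dtype=float)
                            for x in (d_low, a_low, d_high))
    return 0.5 * (d_low + a_low) + 0.5 * d_high
```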

33 Finally, using the dissimilarities of a point to other points in the set, the distinctness is computed. [sent-89, score-0.699]

34 Below we discuss our point descriptor and our dissimilarity measure. [sent-90, score-0.268]

35 Comparison to other point descriptors: The low-level distinctness produced using our descriptor (Equation 6) outperforms others. [sent-106, score-0.686]

36 It detects the fine features, such as the rosette, the crosses on the top of the towers and the sculptures in the windows. [sent-107, score-0.243]

37 Figure 4 shows the low-level distinctness produced using the three descriptors. [sent-112, score-0.539]

38 It can be noticed that FPFH competes favorably with the other descriptors, detecting the fine distinctive features, such as the rosette, the crosses on the top of the towers and the sculptures in the windows. [sent-113, score-0.299]

39 Formally, given two points, pi and pj, and their FPFH descriptors, the χ2 dissimilarity measure between them is: Dχ2(pi, pj) = Σk=1..n (Fi(k) − Fj(k))² / (Fi(k) + Fj(k)), where Fi denotes the n-bin FPFH histogram of pi. [sent-125, score-0.257]
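A direct NumPy transcription of this standard χ² histogram distance; the eps guard for empty bins and the 33-bin size (PCL's FPFH dimensionality) are implementation choices, not from the paper:

```python
import numpy as np

def chi2_dissimilarity(f_i, f_j, eps=1e-12):
    """Chi-squared distance between two FPFH histograms F_i and F_j."""
    f_i = np.asarray(f_i, dtype=float)
    f_j = np.asarray(f_j, dtype=float)
    # eps guards bins that are empty in both histograms
    return float(np.sum((f_i - f_j) ** 2 / (f_i + f_j + eps)))

# Toy usage with two random 33-bin histograms
rng = np.random.default_rng(0)
h1, h2 = rng.random(33), rng.random(33)
print(chi2_dissimilarity(h1, h2))
```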

40 Hierarchical Saliency Computation Our goal is to compute saliency based on the dissimilarity between the descriptors, discussed in the previous section. [sent-130, score-0.524]

41 First, we identify the low-level distinctness that highlights the fine details. [sent-132, score-0.584]

42 We use a small neighborhood for the low-level distinctness and a large neighborhood for the high-level dis- tinctness. [sent-134, score-0.619]

43 Low-level distinctness: A point p is distinct when it differs from the other points in its appearance. [sent-136, score-0.214]

44 This is usually realized by looking for point descriptors whose dissimilarity to other descriptors is high. [sent-137, score-0.292]

45 Inspired by [9], a point is distinct when the points similar to it are nearby and less distinct when the resembling points are far away. [sent-140, score-0.343]

46 In practice, computing this dissimilarity between all the points of the point set is too expensive. [sent-142, score-0.281]

47 Finally, a point pi is distinct when dL(pi, pj) is high ∀pj ∈ P. [sent-146, score-0.275]

48 Thus, the low-level distinctness value of point pi is defined by aggregating dL(pi, pj) over all pj ∈ P (Equation 6). [sent-147, score-0.76]
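The exact form of Equation 6 is not preserved here; the sketch below assumes the 1 − exp(−mean) aggregation used by [9] for images, restricts each point to its K most similar descriptors (since comparing against all points is too expensive, as noted above), and attenuates dissimilarity by spatial distance so that resembling points far away reduce distinctness, per sentence 15. The constants and the Euclidean nearest-descriptor approximation are our choices:

```python
import numpy as np
from scipy.spatial import cKDTree

def low_level_distinctness(points, descriptors, k=50):
    """Approximate D_low per point (assumed 1 - exp(-mean d_L) form).

    points: (n, 3) array; descriptors: (n, d) array of FPFH histograms.
    """
    points = np.asarray(points, dtype=float)
    descriptors = np.asarray(descriptors, dtype=float)
    desc_tree = cKDTree(descriptors)      # neighbors in descriptor space
    n = len(points)                       # (Euclidean proxy for chi^2 similarity)
    d_low = np.empty(n)
    for i in range(n):
        _, idx = desc_tree.query(descriptors[i], k=min(k + 1, n))
        idx = idx[idx != i]               # drop the query point itself
        chi2 = np.sum((descriptors[i] - descriptors[idx]) ** 2
                      / (descriptors[i] + descriptors[idx] + 1e-12), axis=1)
        dist = np.linalg.norm(points[idx] - points[i], axis=1)
        d_l = chi2 / (1.0 + dist)         # resembling-and-far points yield low d_L
        d_low[i] = 1.0 - np.exp(-d_l.mean())
    return d_low
```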

49 Point association: Detecting low-level distinctness usually results in isolated points. [sent-152, score-0.539]

50 Let pfi be the closest focus point to pi and Dfoci(pi) be the low-level distinctness of pfi. [sent-155, score-0.76]

51 The association of point pi is then defined as a function of its distance to pfi, with a bandwidth parameter σ (Equation 7). [sent-156, score-0.221]
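Equation 7 itself is truncated above; the sketch below assumes a Gaussian falloff (the σ suggests one), and the number of foci and the bandwidth are illustrative parameters, not values from the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def point_association(points, d_low, n_foci=100, sigma=0.05):
    """Assumed realization of Equation 7: each point inherits D_foci of its
    closest focus point, attenuated by a Gaussian of the spatial distance."""
    points = np.asarray(points, dtype=float)
    d_low = np.asarray(d_low, dtype=float)
    foci_idx = np.argsort(d_low)[-n_foci:]   # foci = the most distinct points
    foci_tree = cKDTree(points[foci_idx])
    dist, nearest = foci_tree.query(points)  # closest focus p_fi per point
    d_foci = d_low[foci_idx][nearest]        # D_foci(p_i)
    return d_foci * np.exp(-dist ** 2 / (2.0 * sigma ** 2))
```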

52 High-level distinctness: To evaluate the distinctness of entire regions, we compute our descriptors on large neighborhoods. [sent-160, score-0.582]

53 Therefore, we would like to decrease the contribution of nearby points and consider a point distinct when it is dissimilar to far points. [sent-165, score-0.214]

54 In particular, we define the high-level dissimilarity measure between pi and pj as: dH(pi, pj) = Dχ2(pi, pj) · log(1 + ||pi − pj||) (8). [sent-167, score-0.657]

55 Finally, high-level distinctness is defined as: Dhigh(pi) = 1 − exp(−(1/|P|) Σpj∈P dH(pi, pj)) (9). [sent-168, score-0.539]
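Only the "1 − exp" prefix of Equation 9 survives in the extraction, so the mean aggregation below is an assumption mirroring the low level. Full pairwise matrices are used for clarity only; as the next sentence notes, the paper restricts this stage to the 10% most distinct points:

```python
import numpy as np

def high_level_distinctness(chi2, dist):
    """chi2, dist: (n, n) matrices of descriptor dissimilarity and spatial
    distance over the retained points. Applies Equation 8, then an assumed
    1 - exp(-mean) aggregation for Equation 9."""
    d_h = chi2 * np.log1p(dist)   # log(1 + ||pi - pj||) emphasizes far points
    return 1.0 - np.exp(-d_h.mean(axis=1))
```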

56 Since the high-level distinctness depends on the low-level distinctness, the descriptor of each point is computed by considering only 10% of the points with the highest low-level distinctness. [sent-172, score-0.831]

57 We are not aware of any related work that handles saliency of such huge data. [sent-176, score-0.403]

58 Moreover, to assess the quality of our results, we compare them to those produced by surface saliency detection algorithms. [sent-178, score-0.435]

59 The saliency for the Jacobs University campus (15M points). [sent-181, score-0.43]

60 The buildings are salient and therefore are colored in orange. [sent-182, score-0.268]

61 The trees, of which there are many, are less salient and are colored in green. [sent-183, score-0.203]

62 Saliency in urban scenes: Urban point sets usually consist of millions of noisy points, which are generated by merging multiple range scans. [sent-185, score-0.253]

63 We ran our algorithm on two such point sets, the city center of Bremen and the Jacobs University campus (Figures 1, 5), which were scanned by a Riegl VZ-400 laser scanner [3]. [sent-186, score-0.243]

64 Figure 1 shows our saliency map for the city center of Bremen. [sent-187, score-0.534]

65 Our high-level distinctness identifies the entire facades of the most interesting buildings: St. Peter's Cathedral and Bremen's town hall. [sent-188, score-0.539]

66 The low-level distinctness highlights the fine details of the buildings, such as the rosette on the Cathedral, the crosses on the towers, and the small statues on the roof. [sent-190, score-0.703]

67 Figure 5 shows our saliency for the Jacobs University campus. [sent-191, score-0.403]

68 The buildings are found salient and therefore are colored in orange. [sent-192, score-0.268]

69 Figure 6 demonstrates that our algorithm detects the “expected” salient regions, such as the fork of Neptune and the fish next to his feet, and the facial features of Max Planck and the dinosaur. [sent-197, score-0.266]

70 Qualitative evaluation: In order to assess the quality of our algorithm, we compare our results to those of saliency detection of surfaces. [sent-198, score-0.435]

71 Our method produces better saliency maps, detecting fine features, such as the delicate relief features on the bowl and the fins of the fish. [sent-208, score-0.729]

72 Complexity analysis: The complexity of the distinctness computation depends on that of the FPFH and on that of finding the K-nearest neighbors. [sent-216, score-0.539]
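As one illustration of the neighbor-search term in this cost (our choice of data structure, not necessarily the paper's), a k-d tree brings each K-nearest-neighbor query down to roughly logarithmic time:

```python
import numpy as np
from scipy.spatial import cKDTree

# A k-d tree makes the K-nearest-neighbor term tractable: O(n log n) to
# build, ~O(log n) per query on typical scans.
pts = np.random.default_rng(1).random((1_000_000, 3))  # stand-in for a scan
tree = cKDTree(pts)
dists, idx = tree.query(pts[:10], k=16)  # 16 neighbors of 10 sample points
```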

73 Our saliency is computed based on the vertices, ignoring connectivity information, which is used by [18]. [sent-223, score-0.452]

74 However, for larger models, our method produces better saliency maps, detecting fine features, such as delicate relief features on the bowl, and the fins of the fish. [sent-225, score-0.655]

75 Applications We demonstrate the utility of our saliency in two applications. [sent-228, score-0.442]

76 First, we propose a technique for producing a set of the most informative viewpoints of the data. [sent-229, score-0.241]

77 Second, given urban data, we construct an informative tour of the city, which maximizes the saliency viewed along the path. [sent-230, score-0.797]

78 The idea is to maximize the accumulative saliency viewed by the set of viewpoints. [sent-232, score-0.493]

79 For each candidate viewpoint and its associated set Vi, we calculate the amount of saliency it views by: S̄(Vi) = Σpj∈Vi wi(pj) · S(pj) (10). [sent-241, score-0.491]

80 Here, S(pj) is the saliency of pj, computed by Equation (1). [sent-243, score-0.403]

81 The weight wi(pj) = (1 + cos βij) / ||Li − pj|| (11), where Li is the camera location and βij is the angle between the normal at pj and the viewing direction Li − pj. [sent-245, score-0.4]

82 The first viewpoint selected is the one having the maximal saliency (Equation 10). [sent-246, score-0.52]

83 We define the added visible saliency contributed by the viewpoint Vi as: δ(Vi) = Σpj∈Vi max(wi(pj) − wmax(pj), 0) · S(pj).

84 Here, wmax(pj) is the maximal weight assigned to pj by any of the viewpoints selected so far. [sent-253, score-0.387]

85 We keep adding viewpoints until the accumulated viewed saliency is at least 30% of the saliency viewed by all the viewpoints. [sent-255, score-1.039]
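Putting Equations 10 and 11 together with the greedy rule above; the δ formula is reconstructed and the every-candidate-sees-all-points simplification is ours (real use needs a per-candidate visibility test, e.g. a depth buffer):

```python
import numpy as np

def viewpoint_weights(cam, points, normals):
    """w_i(p_j) per the reconstructed Equation 11; normals assumed unit length."""
    view = cam - points                                 # L_i - p_j
    dist = np.linalg.norm(view, axis=1)
    cos_b = np.einsum('ij,ij->i', normals, view) / np.maximum(dist, 1e-12)
    return (1.0 + cos_b) / np.maximum(dist, 1e-12)

def select_viewpoints(cands, points, normals, saliency, frac=0.30):
    """Greedily add viewpoints until frac of the total viewed saliency is
    covered; visibility tests are omitted for brevity."""
    W = np.stack([viewpoint_weights(c, points, normals) for c in cands])
    total = (W.max(axis=0) * saliency).sum()  # saliency seen by all candidates
    w_max = np.zeros(len(points))             # best weight per point so far
    chosen, seen = [], 0.0
    while seen < frac * total:
        gains = (np.maximum(W - w_max, 0.0) * saliency).sum(axis=1)
        best = int(np.argmax(gains))          # first pick maximizes Equation 10
        chosen.append(best)
        seen += gains[best]
        w_max = np.maximum(w_max, W[best])
    return chosen
```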

86 We are not aware of any previous work that generates informative viewpoints for urban scans. [sent-262, score-0.314]

87 For example, for the head of Igea, both algorithms choose a side-view, but our view presents the side with the salient scar near the mouth. [sent-266, score-0.215]

88 Producing the most informative tour: Given a point set of an urban scene and its saliency map, our aim is to suggest an informative tour (Figure 8). [sent-268, score-0.685]

89 The most informative viewpoints generated by our algorithm indeed capture the most interesting buildings of Bremen from various angles. [sent-270, score-0.278]

90 The idea is to maximize the area of the viewed salient regions along a path. [sent-272, score-0.206]

91 First, we compute a set of candidate locations and pick a subset, Ls, of the most salient locations, similarly to viewpoint selection. [sent-274, score-0.236]

92 We stop when at least 75% of the total saliency is viewed by the candidates. [sent-275, score-0.461]

93 This can be explained by the fact that we weigh our saliency according to the viewing angle. [sent-283, score-0.403]

94 Consequently, when approaching an obstacle, the value of the cosine in Equation 11 decreases, thus reducing the saliency of viewed points. [sent-284, score-0.461]

95 Conclusion This paper has studied saliency detection for 3D point sets. [sent-288, score-0.52]

96 Our saliency detection algorithm is based on finding the distinct points, using a multi-level approach. [sent-289, score-0.489]

97 Finally, we demonstrate the utility of our saliency in two applications: selecting a set of informative viewpoints and producing an informative tour in an urban environment. [sent-293, score-0.987]

98 Computing saliency map from spatial information in point cloud data. [sent-310, score-0.488]

99 Sparse points matching by combining 3d mesh saliency with statistical descriptors. [sent-332, score-0.478]

100 Learning video saliency from human gaze using candidate selection. [sent-442, score-0.403]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('distinctness', 0.539), ('saliency', 0.403), ('fpfh', 0.294), ('pj', 0.2), ('bremen', 0.187), ('salient', 0.148), ('pi', 0.136), ('city', 0.131), ('dissimilarity', 0.121), ('viewpoints', 0.117), ('tour', 0.107), ('urban', 0.101), ('informative', 0.096), ('delicate', 0.088), ('viewpoint', 0.088), ('point', 0.085), ('cathedral', 0.085), ('alow', 0.083), ('dlow', 0.083), ('spfh', 0.083), ('points', 0.075), ('bowl', 0.074), ('rosette', 0.074), ('spin', 0.068), ('buildings', 0.065), ('towers', 0.064), ('igea', 0.062), ('descriptor', 0.062), ('scans', 0.06), ('viewed', 0.058), ('colored', 0.055), ('foci', 0.055), ('distinct', 0.054), ('surfaces', 0.053), ('faraway', 0.051), ('town', 0.051), ('connectivity', 0.049), ('technion', 0.048), ('detects', 0.048), ('crosses', 0.045), ('fine', 0.045), ('descriptors', 0.043), ('vi', 0.043), ('jacobs', 0.042), ('detecting', 0.041), ('darboux', 0.041), ('dfoci', 0.041), ('dhigh', 0.041), ('ffppffhhnn', 0.041), ('fins', 0.041), ('ldistnct', 0.041), ('leifman', 0.041), ('neptune', 0.041), ('scar', 0.041), ('sculptures', 0.041), ('shtrom', 0.041), ('wmax', 0.041), ('obstacle', 0.041), ('neighborhood', 0.04), ('utility', 0.039), ('peter', 0.037), ('uvw', 0.037), ('rusu', 0.037), ('spikes', 0.037), ('relief', 0.037), ('cgf', 0.037), ('echni', 0.037), ('fork', 0.037), ('attention', 0.037), ('normals', 0.035), ('shot', 0.035), ('millions', 0.035), ('lowlevel', 0.035), ('pcl', 0.034), ('competes', 0.034), ('suits', 0.034), ('unorganized', 0.034), ('ls', 0.033), ('facial', 0.033), ('sets', 0.032), ('accumulative', 0.032), ('frog', 0.032), ('detection', 0.032), ('association', 0.032), ('maximizes', 0.032), ('path', 0.031), ('esults', 0.031), ('sphere', 0.03), ('floor', 0.029), ('favorably', 0.029), ('dl', 0.029), ('maximal', 0.029), ('myriad', 0.028), ('producing', 0.028), ('bins', 0.027), ('feet', 0.027), ('campus', 0.027), ('neighbors', 0.026), ('head', 0.026), ('possess', 0.026)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999964 370 iccv-2013-Saliency Detection in Large Point Sets

Author: Elizabeth Shtrom, George Leifman, Ayellet Tal

Abstract: While saliency in images has been extensively studied in recent years, there is very little work on saliency of point sets. This is despite the fact that point sets and range data are becoming ever more widespread and have myriad applications. In this paper we present an algorithm for detecting the salient points in unorganized 3D point sets. Our algorithm is designed to cope with extremely large sets, which may contain tens of millions of points. Such data is typical of urban scenes, which have recently become commonly available on the web. No previous work has handled such data. For general data sets, we show that our results are competitive with those of saliency detection of surfaces, although we do not have any connectivity information. We demonstrate the utility of our algorithm in two applications: producing a set of the most informative viewpoints and suggesting an informative city tour given a city scan.

2 0.35438567 71 iccv-2013-Category-Independent Object-Level Saliency Detection

Author: Yangqing Jia, Mei Han

Abstract: It is known that purely low-level saliency cues such as frequency does not lead to a good salient object detection result, requiring high-level knowledge to be adopted for successful discovery of task-independent salient objects. In this paper, we propose an efficient way to combine such high-level saliency priors and low-level appearance models. We obtain the high-level saliency prior with the objectness algorithm to find potential object candidates without the need of category information, and then enforce the consistency among the salient regions using a Gaussian MRF with the weights scaled by diverse density that emphasizes the influence of potential foreground pixels. Our model obtains saliency maps that assign high scores for the whole salient object, and achieves state-of-the-art performance on benchmark datasets covering various foreground statistics.

3 0.34072423 372 iccv-2013-Saliency Detection via Dense and Sparse Reconstruction

Author: Xiaohui Li, Huchuan Lu, Lihe Zhang, Xiang Ruan, Ming-Hsuan Yang

Abstract: In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction errors. The image boundaries are first extracted via superpixels as likely cues for background templates, from which dense and sparse appearance models are constructed. For each image region, we first compute dense and sparse reconstruction errors. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, pixel-level saliency is computed by an integration of multi-scale reconstruction errors and refined by an object-biased Gaussian model. We apply the Bayes formula to integrate saliency measures based on dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against seventeen state-of-the-art methods in terms of precision and recall. In addition, the proposed algorithm is demonstrated to be more effective in highlighting salient objects uniformly and robust to background noise.

4 0.30922401 91 iccv-2013-Contextual Hypergraph Modeling for Salient Object Detection

Author: Xi Li, Yao Li, Chunhua Shen, Anthony Dick, Anton Van_Den_Hengel

Abstract: Salient object detection aims to locate objects that capture human attention within images. Previous approaches often pose this as a problem of image contrast analysis. In this work, we model an image as a hypergraph that utilizes a set of hyperedges to capture the contextual properties of image pixels or regions. As a result, the problem of salient object detection becomes one of finding salient vertices and hyperedges in the hypergraph. The main advantage of hypergraph modeling is that it takes into account each pixel’s (or region ’s) affinity with its neighborhood as well as its separation from image background. Furthermore, we propose an alternative approach based on centerversus-surround contextual contrast analysis, which performs salient object detection by optimizing a cost-sensitive support vector machine (SVM) objective function. Experimental results on four challenging datasets demonstrate the effectiveness of the proposed approaches against the stateof-the-art approaches to salient object detection.

5 0.26265422 50 iccv-2013-Analysis of Scores, Datasets, and Models in Visual Saliency Prediction

Author: Ali Borji, Hamed R. Tavakoli, Dicky N. Sihite, Laurent Itti

Abstract: Significant recent progress has been made in developing high-quality saliency models. However, less effort has been undertaken on fair assessment of these models, over large standardized datasets and correctly addressing confounding factors. In this study, we pursue a critical and quantitative look at challenges (e.g., center-bias, map smoothing) in saliency modeling and the way they affect model accuracy. We quantitatively compare 32 state-of-the-art models (using the shuffled AUC score to discount center-bias) on 4 benchmark eye movement datasets, for prediction of human fixation locations and scanpath sequence. We also account for the role of map smoothing. We find that, although model rankings vary, some (e.g., AWS, LG, AIM, and HouNIPS) consistently outperform other models over all datasets. Some models work well for prediction of both fixation locations and scanpath sequence (e.g., Judd, GBVS). Our results show low prediction accuracy for models over emotional stimuli from the NUSEF dataset. Our last benchmark, for the first time, gauges the ability of models to decode the stimulus category from statistics of fixations, saccades, and model saliency values at fixated locations. In this test, ITTI and AIM models win over other models. Our benchmark provides a comprehensive high-level picture of the strengths and weaknesses of many popular models, and suggests future research directions in saliency modeling.

6 0.256271 373 iccv-2013-Saliency and Human Fixations: State-of-the-Art and Study of Comparison Metrics

7 0.23850431 396 iccv-2013-Space-Time Robust Representation for Action Recognition

8 0.20814528 374 iccv-2013-Salient Region Detection by UFO: Uniqueness, Focusness and Objectness

9 0.20348679 137 iccv-2013-Efficient Salient Region Detection with Soft Image Abstraction

10 0.19749604 371 iccv-2013-Saliency Detection via Absorbing Markov Chain

11 0.19144297 217 iccv-2013-Initialization-Insensitive Visual Tracking through Voting with Salient Local Features

12 0.17489593 369 iccv-2013-Saliency Detection: A Boolean Map Approach

13 0.16593283 319 iccv-2013-Point-Based 3D Reconstruction of Thin Objects

14 0.14835124 56 iccv-2013-Automatic Registration of RGB-D Scans via Salient Directions

15 0.13340642 381 iccv-2013-Semantically-Based Human Scanpath Estimation with HMMs

16 0.11030713 332 iccv-2013-Quadruplet-Wise Image Similarity Learning

17 0.087222278 411 iccv-2013-Symbiotic Segmentation and Part Localization for Fine-Grained Categorization

18 0.081067272 325 iccv-2013-Predicting Primary Gaze Behavior Using Social Saliency Fields

19 0.078985259 1 iccv-2013-3DNN: Viewpoint Invariant 3D Geometry Matching for Scene Understanding

20 0.07223127 74 iccv-2013-Co-segmentation by Composition


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.164), (1, -0.09), (2, 0.369), (3, -0.196), (4, -0.121), (5, 0.016), (6, 0.069), (7, -0.097), (8, 0.001), (9, 0.008), (10, -0.019), (11, 0.048), (12, -0.02), (13, 0.021), (14, 0.047), (15, -0.043), (16, 0.099), (17, -0.001), (18, -0.005), (19, 0.046), (20, -0.024), (21, -0.009), (22, 0.069), (23, -0.004), (24, 0.003), (25, -0.051), (26, 0.016), (27, 0.006), (28, 0.033), (29, -0.023), (30, -0.004), (31, -0.008), (32, 0.007), (33, -0.021), (34, 0.011), (35, -0.005), (36, 0.025), (37, 0.0), (38, 0.047), (39, -0.041), (40, 0.019), (41, 0.005), (42, -0.008), (43, -0.016), (44, 0.029), (45, -0.019), (46, -0.023), (47, -0.023), (48, -0.006), (49, 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95431024 370 iccv-2013-Saliency Detection in Large Point Sets

Author: Elizabeth Shtrom, George Leifman, Ayellet Tal

Abstract: While saliency in images has been extensively studied in recent years, there is very little work on saliency of point sets. This is despite the fact that point sets and range data are becoming ever more widespread and have myriad applications. In this paper we present an algorithm for detecting the salient points in unorganized 3D point sets. Our algorithm is designed to cope with extremely large sets, which may contain tens of millions of points. Such data is typical of urban scenes, which have recently become commonly available on the web. No previous work has handled such data. For general data sets, we show that our results are competitive with those of saliency detection of surfaces, although we do not have any connectivity information. We demonstrate the utility of our algorithm in two applications: producing a set of the most informative viewpoints and suggesting an informative city tour given a city scan.

2 0.92266792 91 iccv-2013-Contextual Hypergraph Modeling for Salient Object Detection

Author: Xi Li, Yao Li, Chunhua Shen, Anthony Dick, Anton Van_Den_Hengel

Abstract: Salient object detection aims to locate objects that capture human attention within images. Previous approaches often pose this as a problem of image contrast analysis. In this work, we model an image as a hypergraph that utilizes a set of hyperedges to capture the contextual properties of image pixels or regions. As a result, the problem of salient object detection becomes one of finding salient vertices and hyperedges in the hypergraph. The main advantage of hypergraph modeling is that it takes into account each pixel’s (or region ’s) affinity with its neighborhood as well as its separation from image background. Furthermore, we propose an alternative approach based on centerversus-surround contextual contrast analysis, which performs salient object detection by optimizing a cost-sensitive support vector machine (SVM) objective function. Experimental results on four challenging datasets demonstrate the effectiveness of the proposed approaches against the stateof-the-art approaches to salient object detection.

3 0.89715695 372 iccv-2013-Saliency Detection via Dense and Sparse Reconstruction

Author: Xiaohui Li, Huchuan Lu, Lihe Zhang, Xiang Ruan, Ming-Hsuan Yang

Abstract: In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction errors. The image boundaries are first extracted via superpixels as likely cues for background templates, from which dense and sparse appearance models are constructed. For each image region, we first compute dense and sparse reconstruction errors. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, pixel-level saliency is computed by an integration of multi-scale reconstruction errors and refined by an object-biased Gaussian model. We apply the Bayes formula to integrate saliency measures based on dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against seventeen state-of-the-art methods in terms of precision and recall. In addition, the proposed algorithm is demonstrated to be more effective in highlighting salient objects uniformly and robust to background noise.

4 0.8938421 373 iccv-2013-Saliency and Human Fixations: State-of-the-Art and Study of Comparison Metrics

Author: Nicolas Riche, Matthieu Duvinage, Matei Mancas, Bernard Gosselin, Thierry Dutoit

Abstract: Visual saliency has been an increasingly active research area in the last ten years with dozens of saliency models recently published. Nowadays, one of the big challenges in the field is to find a way to fairly evaluate all of these models. In this paper, on human eye fixations ,we compare the ranking of 12 state-of-the art saliency models using 12 similarity metrics. The comparison is done on Jian Li ’s database containing several hundreds of natural images. Based on Kendall concordance coefficient, it is shown that some of the metrics are strongly correlated leading to a redundancy in the performance metrics reported in the available benchmarks. On the other hand, other metrics provide a more diverse picture of models ’ overall performance. As a recommendation, three similarity metrics should be used to obtain a complete point of view of saliency model performance.

5 0.89101279 369 iccv-2013-Saliency Detection: A Boolean Map Approach

Author: Jianming Zhang, Stan Sclaroff

Abstract: A novel Boolean Map based Saliency (BMS) model is proposed. An image is characterized by a set of binary images, which are generated by randomly thresholding the image ’s color channels. Based on a Gestalt principle of figure-ground segregation, BMS computes saliency maps by analyzing the topological structure of Boolean maps. BMS is simple to implement and efficient to run. Despite its simplicity, BMS consistently achieves state-of-the-art performance compared with ten leading methods on five eye tracking datasets. Furthermore, BMS is also shown to be advantageous in salient object detection.

6 0.89030826 50 iccv-2013-Analysis of Scores, Datasets, and Models in Visual Saliency Prediction

7 0.88152361 71 iccv-2013-Category-Independent Object-Level Saliency Detection

8 0.86943138 374 iccv-2013-Salient Region Detection by UFO: Uniqueness, Focusness and Objectness

9 0.82992131 371 iccv-2013-Saliency Detection via Absorbing Markov Chain

10 0.82363874 137 iccv-2013-Efficient Salient Region Detection with Soft Image Abstraction

11 0.72974193 396 iccv-2013-Space-Time Robust Representation for Action Recognition

12 0.59961605 217 iccv-2013-Initialization-Insensitive Visual Tracking through Voting with Salient Local Features

13 0.4801921 381 iccv-2013-Semantically-Based Human Scanpath Estimation with HMMs

14 0.45657092 56 iccv-2013-Automatic Registration of RGB-D Scans via Salient Directions

15 0.36068141 325 iccv-2013-Predicting Primary Gaze Behavior Using Social Saliency Fields

16 0.32443014 74 iccv-2013-Co-segmentation by Composition

17 0.32324049 319 iccv-2013-Point-Based 3D Reconstruction of Thin Objects

18 0.29852968 281 iccv-2013-Multi-view Normal Field Integration for 3D Reconstruction of Mirroring Objects

19 0.29740241 102 iccv-2013-Data-Driven 3D Primitives for Single Image Understanding

20 0.29486713 139 iccv-2013-Elastic Fragments for Dense Scene Reconstruction


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.081), (6, 0.011), (7, 0.01), (11, 0.228), (12, 0.01), (13, 0.011), (26, 0.06), (31, 0.038), (35, 0.014), (42, 0.081), (64, 0.024), (73, 0.026), (89, 0.232), (95, 0.015), (97, 0.048), (98, 0.013)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.8351211 370 iccv-2013-Saliency Detection in Large Point Sets

Author: Elizabeth Shtrom, George Leifman, Ayellet Tal

Abstract: While saliency in images has been extensively studied in recent years, there is very little work on saliency of point sets. This is despite the fact that point sets and range data are becoming ever more widespread and have myriad applications. In this paper we present an algorithm for detecting the salient points in unorganized 3D point sets. Our algorithm is designed to cope with extremely large sets, which may contain tens of millions of points. Such data is typical of urban scenes, which have recently become commonly available on the web. No previous work has handled such data. For general data sets, we show that our results are competitive with those of saliency detection of surfaces, although we do not have any connectivity information. We demonstrate the utility of our algorithm in two applications: producing a set of the most informative viewpoints and suggesting an informative city tour given a city scan.

2 0.80536747 220 iccv-2013-Joint Deep Learning for Pedestrian Detection

Author: Wanli Ouyang, Xiaogang Wang

Abstract: Feature extraction, deformation handling, occlusion handling, and classi?cation are four important components in pedestrian detection. Existing methods learn or design these components either individually or sequentially. The interaction among these components is not yet well explored. This paper proposes that they should be jointly learned in order to maximize their strengths through cooperation. We formulate these four components into a joint deep learning framework and propose a new deep network architecture1. By establishing automatic, mutual interaction among components, the deep model achieves a 9% reduction in the average miss rate compared with the current best-performing pedestrian detection approaches on the largest Caltech benchmark dataset.

3 0.79758453 362 iccv-2013-Robust Tucker Tensor Decomposition for Effective Image Representation

Author: Miao Zhang, Chris Ding

Abstract: Many tensor based algorithms have been proposed for the study of high dimensional data in a large variety ofcomputer vision and machine learning applications. However, most of the existing tensor analysis approaches are based on Frobenius norm, which makes them sensitive to outliers, because they minimize the sum of squared errors and enlarge the influence of both outliers and large feature noises. In this paper, we propose a robust Tucker tensor decomposition model (RTD) to suppress the influence of outliers, which uses L1-norm loss function. Yet, the optimization on L1-norm based tensor analysis is much harder than standard tensor decomposition. In this paper, we propose a simple and efficient algorithm to solve our RTD model. Moreover, tensor factorization-based image storage needs much less space than PCA based methods. We carry out extensive experiments to evaluate the proposed algorithm, and verify the robustness against image occlusions. Both numerical and visual results show that our RTD model is consistently better against the existence of outliers than previous tensor and PCA methods.

4 0.76980811 359 iccv-2013-Robust Object Tracking with Online Multi-lifespan Dictionary Learning

Author: Junliang Xing, Jin Gao, Bing Li, Weiming Hu, Shuicheng Yan

Abstract: Recently, sparse representation has been introduced for robust object tracking. By representing the object sparsely, i.e., using only a few templates via ?1-norm minimization, these so-called ?1-trackers exhibit promising tracking results. In this work, we address the object template building and updating problem in these ?1-tracking approaches, which has not been fully studied. We propose to perform template updating, in a new perspective, as an online incremental dictionary learning problem, which is efficiently solved through an online optimization procedure. To guarantee the robustness and adaptability of the tracking algorithm, we also propose to build a multi-lifespan dictionary model. By building target dictionaries of different lifespans, effective object observations can be obtained to deal with the well-known drifting problem in tracking and thus improve the tracking accuracy. We derive effective observa- tion models both generatively and discriminatively based on the online multi-lifespan dictionary learning model and deploy them to the Bayesian sequential estimation framework to perform tracking. The proposed approach has been extensively evaluated on ten challenging video sequences. Experimental results demonstrate the effectiveness of the online learned templates, as well as the state-of-the-art tracking performance of the proposed approach.

5 0.7571882 412 iccv-2013-Synergistic Clustering of Image and Segment Descriptors for Unsupervised Scene Understanding

Author: Daniel M. Steinberg, Oscar Pizarro, Stefan B. Williams

Abstract: With the advent of cheap, high fidelity, digital imaging systems, the quantity and rate of generation of visual data can dramatically outpace a humans ability to label or annotate it. In these situations there is scope for the use of unsupervised approaches that can model these datasets and automatically summarise their content. To this end, we present a totally unsupervised, and annotation-less, model for scene understanding. This model can simultaneously cluster whole-image and segment descriptors, therebyforming an unsupervised model of scenes and objects. We show that this model outperforms other unsupervised models that can only cluster one source of information (image or segment) at once. We are able to compare unsupervised and supervised techniques using standard measures derived from confusion matrices and contingency tables. This shows that our unsupervised model is competitive with current supervised and weakly-supervised models for scene understanding on standard datasets. We also demonstrate our model operating on a dataset with more than 100,000 images col- lected by an autonomous underwater vehicle.

6 0.75670338 372 iccv-2013-Saliency Detection via Dense and Sparse Reconstruction

7 0.75489771 50 iccv-2013-Analysis of Scores, Datasets, and Models in Visual Saliency Prediction

8 0.75363207 382 iccv-2013-Semi-dense Visual Odometry for a Monocular Camera

9 0.75358051 71 iccv-2013-Category-Independent Object-Level Saliency Detection

10 0.75344878 238 iccv-2013-Learning Graphs to Match

11 0.75170588 369 iccv-2013-Saliency Detection: A Boolean Map Approach

12 0.75037026 24 iccv-2013-A Non-parametric Bayesian Network Prior of Human Pose

13 0.74995995 256 iccv-2013-Locally Affine Sparse-to-Dense Matching for Motion and Occlusion Estimation

14 0.74969739 111 iccv-2013-Detecting Dynamic Objects with Multi-view Background Subtraction

15 0.74943203 396 iccv-2013-Space-Time Robust Representation for Action Recognition

16 0.7491979 426 iccv-2013-Training Deformable Part Models with Decorrelated Features

17 0.7490465 297 iccv-2013-Online Motion Segmentation Using Dynamic Label Propagation

18 0.74820364 91 iccv-2013-Contextual Hypergraph Modeling for Salient Object Detection

19 0.74786669 336 iccv-2013-Random Forests of Local Experts for Pedestrian Detection

20 0.74741894 147 iccv-2013-Event Recognition in Photo Collections with a Stopwatch HMM