cvpr cvpr2013 cvpr2013-366 knowledge-graph by maker-knowledge-mining

366 cvpr-2013-Robust Region Grouping via Internal Patch Statistics


Source: pdf

Author: Xiaobai Liu, Liang Lin, Alan L. Yuille

Abstract: In this work, we present an efficient multi-scale low-rank representation for image segmentation. Our method begins with partitioning the input images into a set of superpixels, followed by seeking the optimal superpixel-pair affinity matrix, both of which are performed at multiple scales of the input images. Since low-level superpixel features are usually corrupted by image noise, we propose to infer the low-rank refined affinity matrix. The inference is guided by two observations on natural images. First, looking into a single image, local small-size image patterns tend to recur frequently within the same semantic region, but may not appear in semantically different regions. We call this internal image statistic the replication prior, and quantitatively justify it on real image databases. Second, the affinity matrices at different scales should be consistently solved, which leads to the cross-scale consistency constraint. We formulate these two objectives in one unified formulation and develop an efficient optimization procedure. Our experiments demonstrate that the presented method can substantially improve segmentation accuracy.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Our method begins with partitioning the input images into a set of superpixels, followed by seeking the optimal superpixel-pair affinity matrix, both of which are performed at multiple scales of the input images. [sent-7, score-0.255]

2 Since low-level superpixel features are usually corrupted by image noise, we propose to infer the low-rank refined affinity matrix. [sent-8, score-0.566]

3 First, looking into a single image, local small-size image patterns tend to recur frequently within the same semantic region, but may not appear in semantically different regions. [sent-10, score-0.187]

4 We call this internal image statistic the replication prior, and quantitatively justify it on real image databases. [sent-11, score-0.742]

5 Second, the affinity matrices at different scales should be consistently solved, which leads to the cross-scale consistency constraint. [sent-12, score-0.341]

6 Introduction. Image segmentation aims to partition an input image into several semantically consistent regions. [sent-16, score-0.124]

7 Therefore, the quality of segmentation heavily depends on how well the extra images match the given image [17]. [sent-22, score-0.119]

8 (a) Two input images overlaid with superpixel over-segmentation results; (b) repeatedly occurring patches (identified with the same color); (c) segmentation results by our algorithm (unsupervised). [sent-77, score-0.378]

9 In this work, we introduce a simple yet efficient internal image statistic for image segmentation, and integrate it within a unified low-rank image representation. [sent-79, score-0.258]

10 For each scale of the input image, we partition it into a set of non-overlapping superpixels [16], and construct an affinity graph by taking superpixels as graph vertices. [sent-81, score-0.546]
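
As a concrete, purely illustrative sketch of this set-up, the snippet below over-segments an image and allocates an empty superpixel-pair affinity matrix; SLIC from scikit-image is used only as a stand-in for the over-segmentation method of [16], and all parameter values are assumptions rather than the authors' settings.

```python
import numpy as np
from skimage import data
from skimage.segmentation import slic

# Hypothetical set-up: SLIC stands in for the over-segmentation method of [16].
image = data.astronaut()                      # any RGB test image
labels = slic(image, n_segments=120, compactness=10, start_label=0)
n_sp = int(labels.max()) + 1                  # graph vertices = superpixels at this scale
W = np.zeros((n_sp, n_sp))                    # superpixel-pair affinity matrix, filled in later
```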

11 As is conventional, each superpixel is represented by one appropriate appearance feature, e.g. [sent-82, score-0.247]

12 Given these superpixel features, we assume that intra-class superpixels, namely the superpixels belonging to the same semantic region, are drawn from one identical low-rank feature subspace. [sent-85, score-0.519]

13 Our goal is to seek a low-rank refined superpixel affinity matrix such that the intra-class superpixel affinities are dense, whereas the inter-class superpixel affinities are sparse or zero. [sent-86, score-1.205]

14 The inference of low-rank refined affinity matrices is guided by two more purposes. [sent-88, score-0.315]

15 We can intuitively find that a small-size patch usually has multiple copies (identified with the same color) in the same semantic region (shown in the right column). [sent-95, score-0.21]

16 The replication prior can be used to measure how likely it is that two subregions¹ are semantically similar. [sent-96, score-0.584]

17 Generally, if every patch within one subregion has many copies in another one, namely these two subregions have high replication prior, they will belong to the same semantic region with high probability, and vice versa. [sent-97, score-0.845]

18 In later sections, we will further quantitatively justify that the above observation on fairly small-size patches is partially true in real image databases. [sent-98, score-0.136]

19 The second purpose lies in the fact that the desired superpixel-pair affinities at different scales should be consistently solved, which leads to the so-called cross-scale consistency constraint. [sent-103, score-0.218]

20 We formulate the pursuit of the low-rank refined affinity matrices together with the above two objectives as a unified constrained nuclear-norm minimization problem. [sent-104, score-0.408]

21 Taking the solved low-rank refined superpixel affinities, one can apply the Normalized Cut method [6] to address the unsupervised segmentation problem. [sent-107, score-0.455]
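
A minimal sketch of this grouping step is given below; scikit-learn's spectral clustering is used as a readily available relative of Normalized Cut [6] rather than the exact multiclass NCut, and `Z_star` / `n_regions` are assumed inputs.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def group_superpixels(Z_star, n_regions):
    """Cluster superpixels given a solved affinity matrix Z* (NCut stand-in)."""
    W = 0.5 * (np.abs(Z_star) + np.abs(Z_star).T)   # symmetric, non-negative affinities
    sc = SpectralClustering(n_clusters=n_regions, affinity="precomputed",
                            assign_labels="discretize", random_state=0)
    return sc.fit_predict(W)
```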

22 First, we develop a multi-scale low-rank representation to seek the affinity matrix at each scale in parallel, while preserving the cross-scale consistency. [sent-109, score-0.206]

23 Second, we study a simple yet efficient internal image statistic, and present a practical method for image segmentation. [sent-110, score-0.147]

24 The advantages of our approach are demonstrated by extensive experiments with comparisons to popular segmentation algorithms on two public datasets, MSRC [19] and BSD500 [15]. [sent-111, score-0.133]

25 All these three algorithms directly utilize the low-rank representation. ¹A subregion indicates a part of the whole image, either a semantics-less superpixel or a semantic region, e.g. [sent-116, score-0.483]

26 As aforementioned, extra knowledge or statistics have been studied to address the ill-posed nature of image segmentation [13] [17]. [sent-122, score-0.193]

27 In contrast, the proposed replication prior is a kind of internal image statistic, which bears the obvious benefit of low computational demand. [sent-124, score-0.631]

28 Moreover, we will experimentally show that, to achieve segmentation quality equal to that of the low-rank refined replication prior, the external statistics [13] require hundreds of images. [sent-125, score-0.735]

29 In the computer vision literature, internal image statistics have been used in various low-level image tasks, e.g. [sent-126, score-0.194]

30 [4] assume that one semantic region can be well explained by repeatable compositions and utilize this assumption for interactive image segmentation, which requires user input. [sent-130, score-0.176]

31 Recently, Zontak and Irani [23] further quantitatively evaluate the strength of internal statistics, and demonstrate its advantages in enhancing the quality of image denoising and super-resolution. [sent-131, score-0.12]

32 Our work extends these methods, and introduces a practical method to apply the internal image statistic for image segmentation. [sent-132, score-0.12]

33 Each superpixel comprises an ensemble of pixels that are spatially coherent and perceptually similar with respect to certain appearance features, e.g. [sent-145, score-0.247]

34 We construct an affinity graph by taking the superpixels in Is as graph vertices. [sent-151, score-0.376]

35 Thus, the overall quality of segmentation depends on the pairwise superpixel affinity matrix. [sent-153, score-0.531]

36 Let xis ∈ Rd denote the d-dimensional feature descriptor extracted for the i-th superpixel in Is. [sent-154, score-0.329]

37 One common method to compute the superpixel-pair affinity [6] is exp(−‖xis − xjs‖²/(2σ²)). [sent-156, score-0.206]

38 where ‖ · ‖ indicates the Frobenius norm. Nevertheless, because of various image noises and clutters, the obtained affinity matrix is always corrupted and not discriminative enough to produce high-quality segmentations. [sent-164, score-0.291]
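
A minimal sketch of this baseline affinity, assuming the descriptors are stacked column-wise (one d-dimensional descriptor per superpixel), is shown below; the low-rank refinement described next is meant to replace exactly this noise-sensitive matrix.

```python
import numpy as np

def gaussian_affinity(X, sigma=1.0):
    """Baseline superpixel-pair affinity exp(-||x_i - x_j||^2 / (2 sigma^2)).
    X is d x n, one descriptor per column (per superpixel)."""
    sq = np.sum(X ** 2, axis=0)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X.T @ X)
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))
```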

39 We will introduce in the rest of this section a novel formulation to infer more powerful superpixel-pair affinities for the input image, and further discuss how to apply this representation to segmentation problems in the next section. [sent-165, score-0.174]

40 Objective-I: Low-rank Image Representation. The major step of our method is to pursue the low-rank refined affinity matrix from the low-level superpixel features. [sent-168, score-0.519]

41 Herein, we assume that superpixels belonging to the same semantic region are all drawn from the same low-rank feature subspace, and that all superpixels in the same image (at the same scale) lie on a union of multiple subspaces [14]. [sent-169, score-0.494]

42 We aim to represent each superpixel descriptor as a linear combination of the other superpixel descriptors, and to seek the lowest-rank representation of all superpixels in a joint fashion. [sent-170, score-0.693]

43 Let ns denote the number of superpixels in Is, and let Xs = [x1s, x2s, ..., xnss] ∈ Rd×ns collect their descriptors as columns. [sent-171, score-0.17]

44 Each vector xis can be represented as a linear combination of the column vectors of Xs: xis = Xs zis, (1) where zis ∈ Rns is the coefficient vector. [sent-175, score-0.25]

45 A large zisj generally indicates that xis and xjs have similar projections in the feature subspaces spanned by the column vectors of Xs, and vice versa. [sent-176, score-0.181]

46 Thus, we can use zisj to measure the affinity between superpixels i and j. [sent-177, score-0.42]

47 The lowest-rank representation of the superpixel affinity matrix Zs can be obtained by solving the following nuclear-norm minimization program. [sent-185, score-0.482]
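
The paper's full program additionally carries the replication-prior and cross-scale terms, which are not reproduced in the extracted text; the sketch below therefore solves only the standard per-scale low-rank representation problem min ||Z||_* + λ||E||_{2,1} s.t. X = XZ + E with an inexact ALM loop, as an illustrative stand-in for the authors' optimizer (all parameter values are assumptions).

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def l21_shrink(M, tau):
    """Column-wise shrinkage: proximal operator of tau * l2,1 norm."""
    norms = np.maximum(np.linalg.norm(M, axis=0), 1e-12)
    return M * np.maximum(1.0 - tau / norms, 0.0)

def lrr_alm(X, lam=0.1, mu=1e-2, rho=1.2, mu_max=1e6, n_iter=500, tol=1e-6):
    """Inexact ALM for  min ||Z||_* + lam * ||E||_{2,1}  s.t.  X = X Z + E."""
    d, n = X.shape
    Z = np.zeros((n, n)); J = np.zeros((n, n)); E = np.zeros((d, n))
    Y1 = np.zeros((d, n)); Y2 = np.zeros((n, n))
    XtX = X.T @ X
    for _ in range(n_iter):
        J = svt(Z + Y2 / mu, 1.0 / mu)                       # nuclear-norm prox
        Z = np.linalg.solve(np.eye(n) + XtX,                 # quadratic subproblem in Z
                            X.T @ (X - E) + J + (X.T @ Y1 - Y2) / mu)
        E = l21_shrink(X - X @ Z + Y1 / mu, lam / mu)        # l2,1-norm prox
        R1, R2 = X - X @ Z - E, Z - J                        # primal residuals
        Y1 += mu * R1
        Y2 += mu * R2
        mu = min(rho * mu, mu_max)
        if max(np.abs(R1).max(), np.abs(R2).max()) < tol:
            break
    return Z, E
```

A common post-processing choice, used here only as an assumption, is to symmetrize |Z| (e.g., (|Z| + |Z|ᵀ)/2) before treating it as the refined superpixel-pair affinity matrix.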

48 Objective-II: Replication Prior. The discovered replication prior comes from a statistical observation on natural images: local small-size patches (e.g. [sent-209, score-0.598]

49 6 × 6 pixels) tend to recur frequently within the same semantic region, yet less frequently within semantically different regions. [sent-211, score-0.121]

50 The size of image patches is fairly small so one superpixel may contain multiple patches. [sent-215, score-0.332]

51 The replication prior can be used to measure the semantic consistency of two superpixels. [sent-216, score-0.153]

52 We use patch recurrence density to quantify the replication prior. [sent-217, score-0.596]

53 Let Λ denote a subregion and q index the patches in Λ. [sent-219, score-0.13]

54 We perform an experiment on the Berkeley Segmentation Database [15] to justify the replication prior. [sent-227, score-0.517]

55 The intra-class density of patch p is calculated using Eq. (4), [sent-232, score-0.13]

56 where Λ is set to be the semantic region in the ground truth that contains the patch p. [sent-233, score-0.182]

57 Correspondingly, the inter-class density of patch p is estimated with respect to a semantic region that does not contain patch p. [sent-234, score-0.312]
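
The exact density definition of Eq. (4) is not recoverable from the extracted text; the sketch below therefore shows one plausible proxy, in which a patch counts as "recurring" in a region when its nearest neighbour there is closer than a threshold ζ. The patch-extraction stride, the distance measure and the threshold value are assumptions. Calling recurrence_density with the second region set to the ground-truth region containing p, or to a region not containing p, gives proxies for the intra-class and inter-class densities compared in Figure 2.

```python
import numpy as np

def extract_patches(gray, mask, size=6, stride=3):
    """Collect flattened size x size grey patches whose centres lie inside `mask`."""
    H, W = gray.shape
    half = size // 2
    out = []
    for y in range(half, H - half, stride):
        for x in range(half, W - half, stride):
            if mask[y, x]:
                out.append(gray[y - half:y + half, x - half:x + half].ravel())
    return np.array(out)

def recurrence_density(patches_a, patches_b, zeta=0.1):
    """Fraction of patches from region A that have a close copy in region B
    (assumes grey values normalised to [0, 1]; use a KD-tree for large regions)."""
    if len(patches_a) == 0 or len(patches_b) == 0:
        return 0.0
    d2 = ((patches_a[:, None, :] - patches_b[None, :, :]) ** 2).mean(axis=2)
    return float((d2.min(axis=1) < zeta).mean())
```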

58 We compare the mean intra-class patch densities and the mean inter-class patch densities under different patch sizes, including 6 × 6, 10 × 10, 12 × 12 and 14 × 14 pixels. [sent-492, score-0.049]

59 If there is more than one semantically different region for patch p, we select the highest one (or the most ambiguous one) as the inter-class density of patch p. [sent-495, score-0.26]

60 Figure 2 plots the density comparisons for different patch sizes, namely 6 × 6, 10 × 10, 12 × 12 and 14 × 14 pixels. [sent-497, score-0.129]

61 This experiment shows that the discovered replication prior is partially true when using fairly small patches. [sent-502, score-0.577]

62 We utilize the replication prior to measure how likely it is that two superpixels belong to the same semantic region. [sent-503, score-0.777]

63 Let Λis denote the subregion in Is covered by superpixel i, and let Qs denote a matrix whose entries Qisj are defined by Eq. (5). [sent-504, score-0.324]

64 A large Qisj indicates that the associated superpixel pair has a low replication prior, namely that superpixels i and j belong to different semantic regions with high probability, and vice versa. [sent-508, score-0.796]
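
A small illustrative conversion consistent with this description (high recurrence density → small penalty Qisj) is shown below; since Eq. (5) itself is not reproduced in the extracted text, the exponential form and the symmetrization are assumptions.

```python
import numpy as np

def replication_penalty(density):
    """Map a pairwise recurrence-density matrix to the penalty matrix Qs:
    high density -> small penalty, low density -> large penalty (assumed form)."""
    D = 0.5 * (np.asarray(density) + np.asarray(density).T)   # symmetrize the densities
    Q = np.exp(-D)
    np.fill_diagonal(Q, 0.0)                                   # no penalty on self-pairs
    return Q
```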

65 In this way, the replication prior is used as a kind of soft constraint to regularize the inference of the lowest-rank affinity matrices from the low-level visual features. [sent-519, score-0.789]

66 Eq. (6) aims to compute the optimal superpixel affinity matrix for each scale separately. [sent-526, score-0.453]

67 We can achieve this by projecting the superpixels at the coarse level to the fine level and introducing a cross-scale consistency constraint. [sent-530, score-0.213]

68 Formally, let Iis indicate the superpixel at the scale s indexed by i. [sent-531, score-0.278]

69 We impose a cross-scale constraint for every two neighboring scales (namely the coarse scale s + 1 and the fine scale s): for every two superpixels at the coarse level, their affinity should be the local average of the affinities between their respective children at the fine level. [sent-533, score-0.521]

70 Minimizing this term (Eq. (10)) enforces that the desired superpixel affinity matrices at different scales are consistent. [sent-564, score-0.575]
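
A deliberately naive sketch of this consistency term as a quadratic penalty is given below; `children[i]`, the list of fine-level superpixels spatially covered by coarse superpixel i, is an assumed mapping, and the exact weighting used in Eq. (10) may differ.

```python
import numpy as np

def cross_scale_penalty(Z_fine, Z_coarse, children):
    """Sum of squared differences between each coarse affinity Z_coarse[i, j]
    and the mean affinity of the corresponding fine-level children."""
    nc = Z_coarse.shape[0]
    total = 0.0
    for i in range(nc):
        for j in range(nc):
            block = Z_fine[np.ix_(children[i], children[j])]
            total += (Z_coarse[i, j] - block.mean()) ** 2
    return total
```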

71 Suppose the optimal superpixel affinity matrices are solved from Eq. [sent-650, score-0.496]

72 We first project Zs∗ to the pixel-level as follows: if two pixels in Is belong to the same superpixel, we set their affinity to be 1; otherwise, we set their affinity to be the corresponding superpixel-pair affinity. [sent-654, score-0.412]

73 We define the affinity between neighboring pixels at scale s as a linear combination of a set of Gaussian kernels. [sent-655, score-0.206]
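
A rough per-pixel sketch consistent with this two-step description is given below; the actual kernel set and weights are not spelled out in the extracted text, so the single intensity kernel and σ are assumptions.

```python
import numpy as np

def pixel_affinity(Z_star, sp_labels, gray, p, q, sigma=0.1):
    """Affinity between neighbouring pixels p and q (row, col tuples) at one scale:
    1 if they share a superpixel, otherwise the solved superpixel-pair affinity,
    here modulated by a single Gaussian kernel on the grey-level difference."""
    si, sj = sp_labels[p], sp_labels[q]
    base = 1.0 if si == sj else float(Z_star[si, sj])
    return base * np.exp(-((gray[p] - gray[q]) ** 2) / (2.0 * sigma ** 2))
```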

74 Experiments. In this section, we apply the discovered replication prior and the proposed multi-scale low-rank representation (MsLRR) to image segmentation and evaluate them on publicly available image databases. [sent-674, score-0.623]

75 5, and over-segmented into superpixels using the method in [16]. [sent-685, score-0.17]

76 The number of superpixels is set to be 80, 120 and 150, respectively. [sent-686, score-0.17]

77 We extract three types of feature descriptors for each superpixel: a 12-dimensional color histogram for each RGB channel, a 59-dimensional Local Binary Pattern histogram, and a 31-dimensional Histogram of Oriented Gradients descriptor [19]. [sent-687, score-0.247]
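
An illustrative per-superpixel descriptor along these lines (per-channel colour histograms plus a 59-bin uniform LBP histogram; the 31-dimensional HOG pooling is omitted for brevity) could be assembled as follows; bin counts, normalisation and the LBP neighbourhood are assumptions, not the authors' exact settings.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern

def superpixel_descriptor(image_rgb, sp_mask, bins=12):
    """Concatenate per-channel colour histograms (3 x 12 bins) with a 59-bin
    'nri_uniform' LBP histogram over the pixels selected by sp_mask."""
    img = image_rgb.astype(np.float64) / 255.0
    color = [np.histogram(img[..., c][sp_mask], bins=bins, range=(0, 1),
                          density=True)[0] for c in range(3)]
    lbp = local_binary_pattern(rgb2gray(img), P=8, R=1, method="nri_uniform")
    texture = np.histogram(lbp[sp_mask], bins=59, range=(0, 59), density=True)[0]
    return np.concatenate(color + [texture])
```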

78 We use two segmentation metrics: Covering Rate (CR), namely the percentage of per-pixel agreement between the obtained results w.r.t. the ground truth, and Variation of Information (VI). [sent-692, score-0.115]
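
The exact CR formula is not spelled out in the extracted text; the sketch below implements the common segmentation-covering score (best Jaccard overlap per ground-truth region, weighted by region size), which is one standard reading of this metric and should be treated as an assumption.

```python
import numpy as np

def covering_rate(seg, gt):
    """Segmentation covering: for each ground-truth region take its best Jaccard
    overlap with a proposed region, weighted by the region's pixel count."""
    score = 0.0
    for g in np.unique(gt):
        g_mask = (gt == g)
        best = 0.0
        for s in np.unique(seg[g_mask]):              # only overlapping candidates
            s_mask = (seg == s)
            inter = np.logical_and(g_mask, s_mask).sum()
            union = np.logical_or(g_mask, s_mask).sum()
            best = max(best, inter / union)
        score += g_mask.sum() * best
    return score / gt.size
```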

79 It degenerates to seeking the low-rank refined affinity matrices at multiple scales separately. [sent-701, score-0.364]

80 3) MsLRR-III, which sets γ = 0 and only uses the regularization term of the replication prior. [sent-703, score-0.511]

81 Moreover, we compute the superpixel-pair affinities based on the feature descriptors, namely letting Δij = exp(−‖xi − xj‖²/(2σ²)) in Eq. [sent-706, score-0.133]

82 The threshold ζ for patch density estimation is fixed, and the patch size is fixed to 6 × 6 pixels. [sent-731, score-0.13]

83 Table 1-(a) reports the performance comparisons of various unsupervised segmentation algorithms on MSRC [19] and BSD [15] databases. [sent-744, score-0.166]

84 These comparisons well justify the effectiveness of MsLRR and the internal replication prior. [sent-749, score-0.692]

85 Table 1: performance comparisons of unsupervised segmentation algorithms on MSRC [19] and BSD500 [15]; (a) segmenting a single image; metrics are CR (%) and VI on both databases.

86 It takes about 20 seconds to process one image given superpixel features extracted offline. [sent-772, score-0.247]

87 We show several exemplar comparisons of segmentation results on BSD [15] in Figure 3, where each row shows the original image in column 1 and the corresponding segmentation results obtained by MsLRR-IV, MeanShift [7] and MNCut [6] in columns 2, 3 and 4, respectively. [sent-773, score-0.211]

88 Exp-II: Internal Replication Prior and External Image Statistics. We further evaluate the effectiveness of the discovered replication prior by comparing it with external image statistics. [sent-777, score-0.596]

89 As aforementioned, there are several external-statistics-based methods, and we choose to implement the most recent one, by Liu et al. [sent-778, score-0.125]

90 They proposed to extract superpixel co-occurrence frequencies from an extra unlabeled image corpus and utilize this knowledge for image segmentation. [sent-780, score-0.434]

91 Our formulation in Eq. (11) can integrate this knowledge by re-defining the matrix Qs as Qisj = exp{−cij}, where cij indicates the co-occurrence frequency of superpixels i and j. [sent-781, score-0.263]
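
Since the re-definition is given explicitly, it amounts to the following one-liner; `cooccurrence` (the matrix of frequencies cij estimated from the unlabeled corpus as in [13]) is an assumed input.

```python
import numpy as np

def external_penalty(cooccurrence):
    """Re-define Qs from external statistics: Q_ij = exp(-c_ij), with c_ij the
    co-occurrence frequency of superpixels i and j estimated as in [13]."""
    return np.exp(-np.asarray(cooccurrence, dtype=float))
```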

92 For the details of calculating superpixel co-occurrence from unlabeled images, we refer the reader to [13]. [sent-782, score-0.32]

93 The training subsets are used as the unlabeled image corpus for extracting the superpixel co-occurrence, as in [13]. [sent-785, score-0.362]

94 All images are oversegmented into superpixels [16], and we use the same superpixel descriptors as in Exp-I. [sent-788, score-0.417]

95 From the comparisons in Table 1, one can observe that MsLRR-IV is able to achieve accuracies comparable to MsLRR-V, which uses extra unlabeled images. [sent-790, score-0.169]

96 Furthermore, in BSD500 database, MsLRR-V using 300 unlabeled images (in Table 1-b) is inferior to the same algorithm using 2300 unlabeled images (in Table 1-c). [sent-792, score-0.146]

97 These comparisons partially demonstrate that external image statistics require a fairly large-scale unlabeled image corpus to achieve robustness, which is consistent with the claims in previous works [13][23]. [sent-794, score-0.327]

98 In contrast, the proposed replication prior is extracted from the image itself, and thus has fewer limitations in real applications. [sent-795, score-0.511]

99 Summary. In this work, we studied a simple yet efficient internal image statistic and presented a practical method, the multi-scale low-rank representation (MsLRR), for image segmentation. [sent-797, score-0.221]

100 MsLRR aims to infer the low-rank refined superpixel affinity matrices at different scales of the input image in parallel, while imposing the cross-scale constraint to make the desired affinity matrices consistent. [sent-798, score-0.89]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('replication', 0.466), ('zs', 0.407), ('superpixel', 0.247), ('js', 0.239), ('affinity', 0.206), ('xs', 0.179), ('superpixels', 0.17), ('mslrr', 0.153), ('internal', 0.12), ('es', 0.12), ('mncut', 0.109), ('xsps', 0.109), ('bsd', 0.097), ('affinities', 0.096), ('msrc', 0.093), ('xszs', 0.087), ('ns', 0.086), ('patch', 0.084), ('xis', 0.082), ('segmentation', 0.078), ('subregion', 0.077), ('hs', 0.077), ('statistics', 0.074), ('unlabeled', 0.073), ('ps', 0.071), ('qisj', 0.066), ('zms', 0.066), ('zstqs', 0.066), ('refined', 0.066), ('semantic', 0.065), ('cz', 0.062), ('isb', 0.058), ('ijs', 0.058), ('qs', 0.058), ('ias', 0.056), ('comparisons', 0.055), ('ncut', 0.054), ('nes', 0.054), ('irj', 0.054), ('patches', 0.053), ('external', 0.051), ('justify', 0.051), ('lrr', 0.051), ('scales', 0.049), ('densities', 0.049), ('recur', 0.048), ('ys', 0.048), ('corrupted', 0.047), ('alm', 0.046), ('ilj', 0.046), ('semantically', 0.046), ('density', 0.046), ('prior', 0.045), ('pstqs', 0.044), ('tbes', 0.044), ('wisj', 0.044), ('zisj', 0.044), ('bagon', 0.043), ('matrices', 0.043), ('consistency', 0.043), ('corpus', 0.042), ('extra', 0.041), ('vs', 0.041), ('iaj', 0.039), ('cs', 0.038), ('noises', 0.038), ('unified', 0.037), ('namely', 0.037), ('ucm', 0.036), ('eofx', 0.036), ('subspace', 0.035), ('discovered', 0.034), ('unsupervised', 0.033), ('region', 0.033), ('tr', 0.032), ('fairly', 0.032), ('tpami', 0.031), ('wright', 0.031), ('utilize', 0.031), ('exp', 0.031), ('indexed', 0.031), ('call', 0.031), ('lagrange', 0.03), ('indicates', 0.03), ('desired', 0.03), ('rank', 0.029), ('guangdong', 0.028), ('nonoverlapping', 0.028), ('copies', 0.028), ('malik', 0.028), ('vice', 0.028), ('segment', 0.028), ('looking', 0.028), ('subspaces', 0.027), ('yet', 0.027), ('fowlkes', 0.027), ('nuclear', 0.027), ('cij', 0.027), ('subregions', 0.027), ('sastry', 0.025)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 366 cvpr-2013-Robust Region Grouping via Internal Patch Statistics

Author: Xiaobai Liu, Liang Lin, Alan L. Yuille

Abstract: In this work, we present an efficient multi-scale low-rank representation for image segmentation. Our method begins with partitioning the input images into a set of superpixels, followed by seeking the optimal superpixel-pair affinity matrix, both of which are performed at multiple scales of the input images. Since low-level superpixel features are usually corrupted by image noise, we propose to infer the low-rank refined affinity matrix. The inference is guided by two observations on natural images. First, looking into a single image, local small-size image patterns tend to recur frequently within the same semantic region, but may not appear in semantically different regions. We call this internal image statistic the replication prior, and quantitatively justify it on real image databases. Second, the affinity matrices at different scales should be consistently solved, which leads to the cross-scale consistency constraint. We formulate these two objectives in one unified formulation and develop an efficient optimization procedure. Our experiments demonstrate that the presented method can substantially improve segmentation accuracy.

2 0.16462983 29 cvpr-2013-A Video Representation Using Temporal Superpixels

Author: Jason Chang, Donglai Wei, John W. Fisher_III

Abstract: We develop a generative probabilistic model for temporally consistent superpixels in video sequences. In contrast to supervoxel methods, object parts in different frames are tracked by the same temporal superpixel. We explicitly model flow between frames with a bilateral Gaussian process and use this information to propagate superpixels in an online fashion. We consider four novel metrics to quantify performance of a temporal superpixel representation and demonstrate superior performance when compared to supervoxel methods.

3 0.15143287 242 cvpr-2013-Label Propagation from ImageNet to 3D Point Clouds

Author: Yan Wang, Rongrong Ji, Shih-Fu Chang

Abstract: Recent years have witnessed a growing interest in understanding the semantics of point clouds in a wide variety of applications. However, point cloud labeling remains an open problem, due to the difficulty in acquiring sufficient 3D point labels towards training effective classifiers. In this paper, we overcome this challenge by utilizing the existing massive 2D semantic labeled datasets from decadelong community efforts, such as ImageNet and LabelMe, and a novel “cross-domain ” label propagation approach. Our proposed method consists of two major novel components, Exemplar SVM based label propagation, which effectively addresses the cross-domain issue, and a graphical model based contextual refinement incorporating 3D constraints. Most importantly, the entire process does not require any training data from the target scenes, also with good scalability towards large scale applications. We evaluate our approach on the well-known Cornell Point Cloud Dataset, achieving much greater efficiency and comparable accuracy even without any 3D training data. Our approach shows further major gains in accuracy when the training data from the target scenes is used, outperforming state-ofthe-art approaches with far better efficiency.

4 0.15014881 460 cvpr-2013-Weakly-Supervised Dual Clustering for Image Semantic Segmentation

Author: Yang Liu, Jing Liu, Zechao Li, Jinhui Tang, Hanqing Lu

Abstract: In this paper, we propose a novel Weakly-Supervised Dual Clustering (WSDC) approach for image semantic segmentation with image-level labels, i.e., collaboratively performing image segmentation and tag alignment with those regions. The proposed approach is motivated from the observation that superpixels belonging to an object class usually exist across multiple images and hence can be gathered via the idea of clustering. In WSDC, spectral clustering is adopted to cluster the superpixels obtained from a set of over-segmented images. At the same time, a linear transformation between features and labels as a kind of discriminative clustering is learned to select the discriminative features among different classes. The both clustering outputs should be consistent as much as possible. Besides, weakly-supervised constraints from image-level labels are imposed to restrict the labeling of superpixels. Finally, the non-convex and non-smooth objective function are efficiently optimized using an iterative CCCP procedure. Extensive experiments conducted on MSRC andLabelMe datasets demonstrate the encouraging performance of our method in comparison with some state-of-the-arts.

5 0.14517443 309 cvpr-2013-Nonparametric Scene Parsing with Adaptive Feature Relevance and Semantic Context

Author: Gautam Singh, Jana Kosecka

Abstract: This paper presents a nonparametric approach to semantic parsing using small patches and simple gradient, color and location features. We learn the relevance of individual feature channels at test time using a locally adaptive distance metric. To further improve the accuracy of the nonparametric approach, we examine the importance of the retrieval set used to compute the nearest neighbours using a novel semantic descriptor to retrieve better candidates. The approach is validated by experiments on several datasets used for semantic parsing demonstrating the superiority of the method compared to the state of art approaches.

6 0.14180563 217 cvpr-2013-Improving an Object Detector and Extracting Regions Using Superpixels

7 0.13195392 370 cvpr-2013-SCALPEL: Segmentation Cascades with Localized Priors and Efficient Learning

8 0.11978364 329 cvpr-2013-Perceptual Organization and Recognition of Indoor Scenes from RGB-D Images

9 0.1163488 86 cvpr-2013-Composite Statistical Inference for Semantic Segmentation

10 0.10278228 458 cvpr-2013-Voxel Cloud Connectivity Segmentation - Supervoxels for Point Clouds

11 0.09846177 357 cvpr-2013-Revisiting Depth Layers from Occlusions

12 0.097223274 212 cvpr-2013-Image Segmentation by Cascaded Region Agglomeration

13 0.086610414 339 cvpr-2013-Probabilistic Graphlet Cut: Exploiting Spatial Structure Cue for Weakly Supervised Image Segmentation

14 0.079940714 166 cvpr-2013-Fast Image Super-Resolution Based on In-Place Example Regression

15 0.077885553 437 cvpr-2013-Towards Fast and Accurate Segmentation

16 0.07697574 393 cvpr-2013-Separating Signal from Noise Using Patch Recurrence across Scales

17 0.076252997 50 cvpr-2013-Augmenting CRFs with Boltzmann Machine Shape Priors for Image Labeling

18 0.074058935 43 cvpr-2013-Analyzing Semantic Segmentation Using Hybrid Human-Machine CRFs

19 0.073659085 222 cvpr-2013-Incorporating User Interaction and Topological Constraints within Contour Completion via Discrete Calculus

20 0.073648803 294 cvpr-2013-Multi-class Video Co-segmentation with a Generative Multi-video Model


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.169), (1, -0.012), (2, 0.026), (3, 0.025), (4, 0.094), (5, 0.021), (6, 0.018), (7, 0.035), (8, -0.117), (9, -0.007), (10, 0.153), (11, -0.065), (12, -0.013), (13, 0.051), (14, -0.014), (15, -0.005), (16, 0.011), (17, -0.105), (18, -0.072), (19, 0.116), (20, 0.107), (21, 0.0), (22, -0.1), (23, -0.055), (24, -0.039), (25, 0.009), (26, -0.106), (27, -0.13), (28, 0.038), (29, -0.036), (30, 0.015), (31, -0.037), (32, 0.027), (33, -0.088), (34, 0.014), (35, 0.033), (36, -0.07), (37, 0.032), (38, 0.052), (39, -0.072), (40, 0.045), (41, -0.016), (42, 0.016), (43, 0.011), (44, -0.008), (45, -0.051), (46, 0.022), (47, 0.007), (48, -0.02), (49, -0.014)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.92944199 366 cvpr-2013-Robust Region Grouping via Internal Patch Statistics

Author: Xiaobai Liu, Liang Lin, Alan L. Yuille

Abstract: In this work, we present an efficient multi-scale low-rank representation for image segmentation. Our method begins with partitioning the input images into a set of superpixels, followed by seeking the optimal superpixel-pair affinity matrix, both of which are performed at multiple scales of the input images. Since low-level superpixel features are usually corrupted by image noises, we propose to infer the low-rank refined affinity matrix. The inference is guided by two observations on natural images. First, looking into a single image, local small-size image patterns tend to recur frequently within the same semantic region, but may not appear in semantically different regions. We call this internal image statistics as replication prior, and quantitatively justify it on real image databases. Second, the affinity matrices at different scales should be consistently solved, which leads to the cross-scale consistency constraint. We formulate these two purposes with one unified formulation and develop an efficient optimization procedure. Our experiments demonstrate the presented method can substantially improve segmentation accuracy.

2 0.78121537 460 cvpr-2013-Weakly-Supervised Dual Clustering for Image Semantic Segmentation

Author: Yang Liu, Jing Liu, Zechao Li, Jinhui Tang, Hanqing Lu

Abstract: In this paper, we propose a novel Weakly-Supervised Dual Clustering (WSDC) approach for image semantic segmentation with image-level labels, i.e., collaboratively performing image segmentation and tag alignment with those regions. The proposed approach is motivated from the observation that superpixels belonging to an object class usually exist across multiple images and hence can be gathered via the idea of clustering. In WSDC, spectral clustering is adopted to cluster the superpixels obtained from a set of over-segmented images. At the same time, a linear transformation between features and labels as a kind of discriminative clustering is learned to select the discriminative features among different classes. The both clustering outputs should be consistent as much as possible. Besides, weakly-supervised constraints from image-level labels are imposed to restrict the labeling of superpixels. Finally, the non-convex and non-smooth objective function are efficiently optimized using an iterative CCCP procedure. Extensive experiments conducted on MSRC andLabelMe datasets demonstrate the encouraging performance of our method in comparison with some state-of-the-arts.

3 0.77761024 29 cvpr-2013-A Video Representation Using Temporal Superpixels

Author: Jason Chang, Donglai Wei, John W. Fisher_III

Abstract: We develop a generative probabilistic model for temporally consistent superpixels in video sequences. In contrast to supervoxel methods, object parts in different frames are tracked by the same temporal superpixel. We explicitly model flow between frames with a bilateral Gaussian process and use this information to propagate superpixels in an online fashion. We consider four novel metrics to quantify performance of a temporal superpixel representation and demonstrate superior performance when compared to supervoxel methods.

4 0.76486593 458 cvpr-2013-Voxel Cloud Connectivity Segmentation - Supervoxels for Point Clouds

Author: Jeremie Papon, Alexey Abramov, Markus Schoeler, Florentin Wörgötter

Abstract: Unsupervised over-segmentation of an image into regions of perceptually similar pixels, known as superpixels, is a widely used preprocessing step in segmentation algorithms. Superpixel methods reduce the number of regions that must be considered later by more computationally expensive algorithms, with a minimal loss of information. Nevertheless, as some information is inevitably lost, it is vital that superpixels not cross object boundaries, as such errors will propagate through later steps. Existing methods make use of projected color or depth information, but do not consider three dimensional geometric relationships between observed data points which can be used to prevent superpixels from crossing regions of empty space. We propose a novel over-segmentation algorithm which uses voxel relationships to produce over-segmentations which are fully consistent with the spatial geometry of the scene in three dimensional, rather than projective, space. Enforcing the constraint that segmented regions must have spatial connectivity prevents label flow across semantic object boundaries which might otherwise be violated. Additionally, as the algorithm works directly in 3D space, observations from several calibrated RGB+D cameras can be segmented jointly. Experiments on a large data set of human annotated RGB+D images demonstrate a significant reduction in occurrence of clusters crossing object boundaries, while maintaining speeds comparable to state-of-the-art 2D methods.

5 0.74546427 339 cvpr-2013-Probabilistic Graphlet Cut: Exploiting Spatial Structure Cue for Weakly Supervised Image Segmentation

Author: Luming Zhang, Mingli Song, Zicheng Liu, Xiao Liu, Jiajun Bu, Chun Chen

Abstract: Weakly supervised image segmentation is a challenging problem in computer vision field. In this paper, we present a new weakly supervised image segmentation algorithm by learning the distribution of spatially structured superpixel sets from image-level labels. Specifically, we first extract graphlets from each image where a graphlet is a smallsized graph consisting of superpixels as its nodes and it encapsulates the spatial structure of those superpixels. Then, a manifold embedding algorithm is proposed to transform graphlets of different sizes into equal-length feature vectors. Thereafter, we use GMM to learn the distribution of the post-embedding graphlets. Finally, we propose a novel image segmentation algorithm, called graphlet cut, that leverages the learned graphlet distribution in measuring the homogeneity of a set of spatially structured superpixels. Experimental results show that the proposed approach outperforms state-of-the-art weakly supervised image segmentation methods, and its performance is comparable to those of the fully supervised segmentation models.

6 0.71835589 26 cvpr-2013-A Statistical Model for Recreational Trails in Aerial Images

7 0.71388406 242 cvpr-2013-Label Propagation from ImageNet to 3D Point Clouds

8 0.68232739 212 cvpr-2013-Image Segmentation by Cascaded Region Agglomeration

9 0.60773295 280 cvpr-2013-Maximum Cohesive Grid of Superpixels for Fast Object Localization

10 0.60279995 86 cvpr-2013-Composite Statistical Inference for Semantic Segmentation

11 0.59916806 370 cvpr-2013-SCALPEL: Segmentation Cascades with Localized Priors and Efficient Learning

12 0.55331159 309 cvpr-2013-Nonparametric Scene Parsing with Adaptive Feature Relevance and Semantic Context

13 0.52328825 329 cvpr-2013-Perceptual Organization and Recognition of Indoor Scenes from RGB-D Images

14 0.51524013 217 cvpr-2013-Improving an Object Detector and Extracting Regions Using Superpixels

15 0.49020439 13 cvpr-2013-A Higher-Order CRF Model for Road Network Extraction

16 0.48942938 406 cvpr-2013-Spatial Inference Machines

17 0.48858956 9 cvpr-2013-A Fast Semidefinite Approach to Solving Binary Quadratic Problems

18 0.4707267 93 cvpr-2013-Constraints as Features

19 0.4651407 437 cvpr-2013-Towards Fast and Accurate Segmentation

20 0.45801595 425 cvpr-2013-Tensor-Based High-Order Semantic Relation Transfer for Semantic Scene Segmentation


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.128), (16, 0.014), (26, 0.053), (33, 0.276), (35, 0.222), (39, 0.013), (67, 0.061), (69, 0.037), (77, 0.022), (80, 0.015), (87, 0.079)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.88906819 404 cvpr-2013-Sparse Quantization for Patch Description

Author: Xavier Boix, Michael Gygli, Gemma Roig, Luc Van_Gool

Abstract: The representation of local image patches is crucial for the good performance and efficiency of many vision tasks. Patch descriptors have been designed to generalize towards diverse variations, depending on the application, as well as the desired compromise between accuracy and efficiency. We present a novel formulation of patch description, that serves such issues well. Sparse quantization lies at its heart. This allows for efficient encodings, leading to powerful, novel binary descriptors, yet also to the generalization of existing descriptors like SIFTorBRIEF. We demonstrate the capabilities of our formulation for both keypoint matching and image classification. Our binary descriptors achieve state-of-the-art results for two keypoint matching benchmarks, namely those by Brown [6] and Mikolajczyk [18]. For image classification, we propose new descriptors that perform similar to SIFT on Caltech101 [10] and PASCAL VOC07 [9].

same-paper 2 0.85750401 366 cvpr-2013-Robust Region Grouping via Internal Patch Statistics

Author: Xiaobai Liu, Liang Lin, Alan L. Yuille

Abstract: In this work, we present an efficient multi-scale low-rank representation for image segmentation. Our method begins with partitioning the input images into a set of superpixels, followed by seeking the optimal superpixel-pair affinity matrix, both of which are performed at multiple scales of the input images. Since low-level superpixel features are usually corrupted by image noise, we propose to infer the low-rank refined affinity matrix. The inference is guided by two observations on natural images. First, looking into a single image, local small-size image patterns tend to recur frequently within the same semantic region, but may not appear in semantically different regions. We call this internal image statistic the replication prior, and quantitatively justify it on real image databases. Second, the affinity matrices at different scales should be consistently solved, which leads to the cross-scale consistency constraint. We formulate these two objectives in one unified formulation and develop an efficient optimization procedure. Our experiments demonstrate that the presented method can substantially improve segmentation accuracy.

3 0.82942754 53 cvpr-2013-BFO Meets HOG: Feature Extraction Based on Histograms of Oriented p.d.f. Gradients for Image Classification

Author: Takumi Kobayashi

Abstract: Image classification methods have been significantly developed in the last decade. Most methods stem from bagof-features (BoF) approach and it is recently extended to a vector aggregation model, such as using Fisher kernels. In this paper, we propose a novel feature extraction method for image classification. Following the BoF approach, a plenty of local descriptors are first extracted in an image and the proposed method is built upon the probability density function (p.d.f) formed by those descriptors. Since the p.d.f essentially represents the image, we extract the features from the p.d.f by means of the gradients on the p.d.f. The gradients, especially their orientations, effectively characterize the shape of the p.d.f from the geometrical viewpoint. We construct the features by the histogram of the oriented p.d.f gradients via orientation coding followed by aggregation of the orientation codes. The proposed image features, imposing no specific assumption on the targets, are so general as to be applicable to any kinds of tasks regarding image classifications. In the experiments on object recog- nition and scene classification using various datasets, the proposed method exhibits superior performances compared to the other existing methods.

4 0.82127208 248 cvpr-2013-Learning Collections of Part Models for Object Recognition

Author: Ian Endres, Kevin J. Shih, Johnston Jiaa, Derek Hoiem

Abstract: We propose a method to learn a diverse collection of discriminative parts from object bounding box annotations. Part detectors can be trained and applied individually, which simplifies learning and extension to new features or categories. We apply the parts to object category detection, pooling part detections within bottom-up proposed regions and using a boosted classifier with proposed sigmoid weak learners for scoring. On PASCAL VOC 2010, we evaluate the part detectors ’ ability to discriminate and localize annotated keypoints. Our detection system is competitive with the best-existing systems, outperforming other HOG-based detectors on the more deformable categories.

5 0.81923497 225 cvpr-2013-Integrating Grammar and Segmentation for Human Pose Estimation

Author: Brandon Rothrock, Seyoung Park, Song-Chun Zhu

Abstract: In this paper we present a compositional and-or graph grammar model for human pose estimation. Our model has three distinguishing features: (i) large appearance differences between people are handled compositionally by allowingparts or collections ofparts to be substituted with alternative variants, (ii) each variant is a sub-model that can define its own articulated geometry and context-sensitive compatibility with neighboring part variants, and (iii) background region segmentation is incorporated into the part appearance models to better estimate the contrast of a part region from its surroundings, and improve resilience to background clutter. The resulting integrated framework is trained discriminatively in a max-margin framework using an efficient and exact inference algorithm. We present experimental evaluation of our model on two popular datasets, and show performance improvements over the state-of-art on both benchmarks.

6 0.81779939 414 cvpr-2013-Structure Preserving Object Tracking

7 0.8176142 365 cvpr-2013-Robust Real-Time Tracking of Multiple Objects by Volumetric Mass Densities

8 0.81708223 408 cvpr-2013-Spatiotemporal Deformable Part Models for Action Detection

9 0.81675684 446 cvpr-2013-Understanding Indoor Scenes Using 3D Geometric Phrases

10 0.81612903 325 cvpr-2013-Part Discovery from Partial Correspondence

11 0.81598586 14 cvpr-2013-A Joint Model for 2D and 3D Pose Estimation from a Single Image

12 0.8158586 242 cvpr-2013-Label Propagation from ImageNet to 3D Point Clouds

13 0.81575537 285 cvpr-2013-Minimum Uncertainty Gap for Robust Visual Tracking

14 0.81559932 30 cvpr-2013-Accurate Localization of 3D Objects from RGB-D Data Using Segmentation Hypotheses

15 0.81534815 98 cvpr-2013-Cross-View Action Recognition via a Continuous Virtual Path

16 0.81452692 143 cvpr-2013-Efficient Large-Scale Structured Learning

17 0.8144722 104 cvpr-2013-Deep Convolutional Network Cascade for Facial Point Detection

18 0.81405544 206 cvpr-2013-Human Pose Estimation Using Body Parts Dependent Joint Regressors

19 0.8136307 121 cvpr-2013-Detection- and Trajectory-Level Exclusion in Multiple Object Tracking

20 0.81326139 256 cvpr-2013-Learning Structured Hough Voting for Joint Object Detection and Occlusion Reasoning