iccv iccv2013 iccv2013-21 knowledge-graph by maker-knowledge-mining

21 iccv-2013-A Method of Perceptual-Based Shape Decomposition


Source: pdf

Author: Chang Ma, Zhongqian Dong, Tingting Jiang, Yizhou Wang, Wen Gao

Abstract: In this paper, we propose a novel perception-based shape decomposition method which aims to decompose a shape into semantically meaningful parts. In addition to three popular perception rules (the Minima rule, the Short-cut rule and the Convexity rule) in shape decomposition, we propose a new rule, named the part-similarity rule, to encourage consistent partition of similar parts. The problem is formulated as a quadratically constrained quadratic program (QCQP) and is solved by a trust-region method. Experimental results on the MPEG-7 dataset show that our shape decomposition is more consistent with human perception than that of other state-of-the-art methods, both qualitatively and quantitatively. Finally, we show the advantage of semantic parts over non-meaningful parts in object detection on the ETHZ dataset.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 In addition to three popular perception rules (the Minima rule, the Short-cut rule and the Convexity rule) in shape decomposition, we propose a new rule named part-similarity rule to encourage consistent partition of similar parts. [sent-2, score-1.183]

2 Experimental results on the MPEG-7 dataset show that our shape decomposition is more consistent with human perception than that of other state-of-the-art methods, both qualitatively and quantitatively. [sent-4, score-0.721]

3 Finally, we show the advantage of semantic parts over non-meaningful parts in object detection on the ETHZ dataset. [sent-5, score-0.528]

4 Introduction: Many psychological studies have shown the important role that parts play in object perception and recognition (e. [sent-7, score-0.353]

5 We argue that perceptually meaningful parts have an advantage in many vision tasks, such as object detection, because such parts are usually more stable across different environments. [sent-17, score-0.431]

6 In addition, semantic parts are useful in judging the affordance of objects, and they are the key to transferring knowledge between different objects via shared parts [1]. [sent-22, score-0.494]

7 In order to obtain semantic parts, perception-rule-based methods are often adopted to decompose a shape into a number of parts (e. [sent-23, score-0.583]

8 , the Minima rule [8], the Short-cut rule [20] and the Convexity rule [11, 21]. [sent-29, score-0.717]

9 The Minima rule suggests the shape should be divided at loci of negative minima of curvature along the contour. [sent-30, score-0.487]

10 The Short-cut rule suggests decomposing shapes into parts using the shortest possible cuts. [sent-32, score-0.582]

11 [7] proposed methods for approximate shape decomposition based on the Convexity rule. [sent-34, score-0.442]

12 [14] formulated convex shape decomposition as a linear programming problem and considered the Short-cut rule. [sent-36, score-0.478]

13 [9] proposed methods for perception-based shape decomposition which incorporate the Minima rule. [sent-39, score-0.681]

14 In this paper, we propose a new method to acquire semantic parts via perception-based shape decomposition. [sent-40, score-0.451]

15 Therefore, besides the existing rules adopted by [9, 14, 17], we add a new rule to encourage the consistent decomposition of similar parts. [sent-46, score-0.741]

16 In addition, most of the works based on the Convexity rule tend to generate redundant parts in order to satisfy the convexity constraint. [sent-47, score-0.71]

17 To prove the advantage of semantic parts in vision tasks, we conduct an object detection experiment on the ETHZ dataset [6]. [sent-57, score-0.395]

18 We use the semantic parts obtained by the proposed decomposition method to detect objects in natural scenes and compare the detection rate with the same detection method using a set of non-semantic random parts. [sent-58, score-0.649]

19 The result shows that the detection rate using the semantic parts is much better than that using the random parts, especially for objects with articulation. [sent-59, score-0.329]

20 Perception Rules The three perception rules usually adopted by humans to decompose a shape include the Minima rule [8], the Short- cut rule [20] and the Convexity rule [11, 21]. [sent-67, score-1.455]

21 The Minima rule suggests that the endpoints of a cut are usually located at places where the curvature is a local minimum. [sent-68, score-0.475]

22 The Shortcut rule prefers to minimize the total cut length, and the Convexity rule requires parts to be convex. [sent-69, score-0.867]
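
The three rules above can be made concrete in code. Below is a minimal sketch of one way the Minima rule could be operationalized: estimate the signed curvature along a closed polygonal contour with finite differences and keep the negative local minima as candidate cut endpoints. The function name, the curvature estimator and the neighbourhood window are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def negative_curvature_minima(contour, window=2):
    """Return indices of a closed contour where the signed curvature has a
    negative local minimum; one plausible reading of the Minima rule.

    contour: (N, 2) array of (x, y) points ordered counter-clockwise.
    """
    contour = np.asarray(contour, dtype=float)
    x, y = contour[:, 0], contour[:, 1]
    # Central differences on a closed (periodic) curve.
    dx = (np.roll(x, -1) - np.roll(x, 1)) / 2.0
    dy = (np.roll(y, -1) - np.roll(y, 1)) / 2.0
    ddx = np.roll(x, -1) - 2.0 * x + np.roll(x, 1)
    ddy = np.roll(y, -1) - 2.0 * y + np.roll(y, 1)
    # Signed curvature of a planar curve.
    kappa = (dx * ddy - dy * ddx) / np.maximum((dx ** 2 + dy ** 2) ** 1.5, 1e-12)

    n = len(kappa)
    minima = []
    for i in range(n):
        if kappa[i] >= 0:
            continue  # only concave points (negative curvature) qualify
        neighbours = [kappa[(i + k) % n]
                      for k in range(-window, window + 1) if k != 0]
        if kappa[i] < min(neighbours):  # strict local minimum of curvature
            minima.append(i)
    return minima
```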

23 Formulation: Perceptual-based shape decomposition aims to decompose a shape into non-overlapping parts consistent with human perception. [sent-78, score-0.945]

24 In our problem, parts are generated by a set of cuts on a shape. [sent-79, score-0.483]

25 So the problem can be formulated as selecting an optimal subset of candidate cuts that yields perceptually meaningful parts and whose cuts do not intersect each other. [sent-81, score-0.592]

26 We define Cp as a set of candidate cuts and C∗ as the selected optimal cuts. [sent-82, score-0.366]

27 According to the Short-cut rule [20], L is the cut length vector s. [sent-88, score-0.429]

28 I() is a function that measures the improvement of shape convexity by a cut. [sent-94, score-0.369]

29 Convexity Constraint As strict convex shape decomposition can generate many spurious parts due to the noise (e. [sent-101, score-0.677]

30 , given a threshold ε, we want to ensure the concavity of a decomposed part is less than ε. [sent-107, score-0.377]

31 In the following, we first introduce the concavity measurement of two points and a part as proposed in [14], then present the method of generating cuts to ensure the convexity of parts, followed by the formulation of the convexity constraint in Eq. [sent-109, score-1.111]

32 1), the concavity of two points within a shape w. [sent-116, score-0.486]

33 Any pair of points whose concavity is more than ε is defined as a mutex pair [14]. [sent-129, score-0.886]

34 (a) Vertices p2 and p3 form a mutex pair while p1 and p2 do not under the threshold ε. [sent-140, score-0.483]

35 (b) Red lines SS1 and SS2 are two candidate cuts generated by S; the orange lines are the skeleton of the shape. [sent-143, score-0.494]

36 If we consider all the directions, then the concavity of a pair of points is defined as Concavity(p1, p2) = max_f Concavity_f(p1, p2), (3) and for a shape part P, its concavity is defined as Concavity(P) = max_{p1 ∈ P, p2 ∈ P} Concavity(p1, p2), (4) where p1 and p2 are two arbitrary points in P. [sent-144, score-0.928]

37 If the concavity of every pair of points in a part is less than ε, then the concavity of the part is less than ε. [sent-145, score-0.775]

38 To ensure all the decomposed parts are ε-convex, we shall separate all the mutex pairs of a shape by cuts, although some of these cuts are spurious. [sent-147, score-1.151]

39 Our goal is to find the optimal set of cuts for shape decomposition. [sent-148, score-0.44]

40 In order to extend the concept of mutex pair from point to point set, two concavity measures of two point sets R1 and R2 are defined as w(R1, R2) = min_{p1 ∈ R1, p2 ∈ R2} Concavity(p1, p2), (5) and W(R1, R2) = max_{p1 ∈ R1, p2 ∈ R2} Concavity(p1, p2). (6) [sent-149, score-0.777]

41 If w(R1, R2) ≥ ε, every pair of points from R1 and R2 forms a mutex pair. [sent-150, score-0.519]
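
For reference, here is a small sketch of the aggregation in Eqs. (3)-(6). It treats the directional point-pair concavity of [14] as a given black-box callable (`pointwise_concavity`, a hypothetical name); only the max/min aggregation over parts and point sets, and the resulting mutex test, are shown.

```python
import itertools

def part_concavity(points, pointwise_concavity):
    """Eq. (4): the concavity of a part is the maximum pairwise concavity of
    its points; pointwise_concavity(p1, p2) implements Eq. (3) (max over
    directions) and is assumed to be supplied by the caller."""
    return max(pointwise_concavity(p, q)
               for p, q in itertools.combinations(points, 2))

def w_measure(R1, R2, pointwise_concavity):
    """Eq. (5): w(R1, R2), the minimum cross-set concavity."""
    return min(pointwise_concavity(p, q) for p in R1 for q in R2)

def W_measure(R1, R2, pointwise_concavity):
    """Eq. (6): W(R1, R2), the maximum cross-set concavity."""
    return max(pointwise_concavity(p, q) for p in R1 for q in R2)

def is_mutex_region_pair(R1, R2, pointwise_concavity, eps):
    """If w(R1, R2) >= eps, every pair of points from R1 and R2 is a mutex
    pair, so R1 and R2 must end up in different parts."""
    return w_measure(R1, R2, pointwise_concavity) >= eps
```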

42 We use the method in [4, 14] to find mutex pairs of regions in our implementation and select a subset of candidate cuts to satisfy them in order to get an ε-convex decomposition. [sent-156, score-0.801]

43 2 Generating candidate cuts to separate the mutex pairs: By considering the Minima rule introduced above, we generate candidate cuts from all the saddle points of a shape. [sent-159, score-1.534]

44 The two points are the contact points between the shape contour and its maximal disk centered at the skeleton point [18]. [sent-167, score-0.406]

45 Hence, to get a candidate cut, we first compute the skeleton of a shape using the method in [18], and then find the symmetric points of the saddle point S based on its skeleton, i. [sent-171, score-0.486]

46 Then the cuts SS1 and SS2 are two candidate cuts generated by S. [sent-174, score-0.65]
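
As a rough illustration of this construction, the sketch below computes a skeleton of a binary shape mask and, for a chosen skeleton point S, recovers the contour points touching the maximal inscribed disc centred at S as the endpoints of a candidate cut. It substitutes off-the-shelf scikit-image/SciPy routines for the skeleton method of [18] and does not compute the Morse-function saddle points, so it is an assumption-laden stand-in rather than the paper's procedure.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.measure import find_contours
from skimage.morphology import skeletonize

def skeleton_points(mask):
    """(row, col) coordinates of skeleton pixels of a binary shape mask."""
    return np.argwhere(skeletonize(mask.astype(bool)))

def candidate_cut_from_skeleton_point(mask, s_rc, tol=1.5):
    """Endpoints of a candidate cut generated by skeleton point s_rc = (row, col):
    the contour points lying (approximately) on the maximal inscribed disc
    centred at s_rc, i.e. the contact points S1, S2 described above."""
    dist = distance_transform_edt(mask)                  # maximal-disc radius map
    contour = np.vstack(find_contours(mask.astype(float), 0.5))
    radius = dist[tuple(s_rc)]
    d = np.linalg.norm(contour - np.asarray(s_rc, dtype=float), axis=1)
    return contour[np.abs(d - radius) < tol]             # usually two clusters: S1, S2
```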

47 3 Formulating the convexity constraint: The candidate cuts generated by the above method are usually more than necessary. [sent-177, score-0.611]

48 We shall select an “optimal” set of cuts that is able to separate all the mutex pairs, and hence generates near-convex parts. [sent-178, score-0.752]

49 To achieve this, a binary matrix A is defined, which signifies the separation relationship between the mutex pairs (MP = {mp1, mp2, . [sent-181, score-0.41]

50 If a mutex pair mpi can be separated by a cut Cj, then A(i, j) = 1; otherwise A(i, j) = 0. [sent-187, score-0.379]

51 So if we constrain A(i, :)x ≥ 1, then mutex pair i is separated at least once by the optimal set of cuts. [sent-188, score-0.531]
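
A minimal sketch of the linear part of this constraint: build the m-by-n matrix A from a geometric predicate `cut_separates` (a hypothetical helper, since the actual separation test depends on the cut and region geometry), and check A x ≥ 1 elementwise for a 0/1 selection vector x.

```python
import numpy as np

def build_separation_matrix(mutex_pairs, cuts, cut_separates):
    """A(i, j) = 1 if candidate cut j separates mutex pair i, else 0.
    cut_separates(mutex_pair, cut) is a caller-supplied geometric predicate."""
    A = np.zeros((len(mutex_pairs), len(cuts)), dtype=int)
    for i, mp in enumerate(mutex_pairs):
        for j, cut in enumerate(cuts):
            A[i, j] = int(cut_separates(mp, cut))
    return A

def satisfies_linear_constraint(A, x):
    """Check A(i, :) x >= 1 for every mutex pair i, i.e. each pair is
    separated at least once by the selected cuts x (a 0/1 vector)."""
    return bool(np.all(A @ np.asarray(x) >= 1))
```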

52 The mutex pair R1 and R2 can be separated by the combination of cuts C1 and C2, so there is no need for C3. [sent-190, score-0.815]

53 Pruning redundant cuts: Although we can satisfy all the mutex pairs using the above constraint, it can produce redundant cuts. [sent-191, score-0.812]

54 This is due to the double counting of mutex pairs. [sent-192, score-0.41]

55 However, the combination of two lower level cuts C1 and C2 is also able to separate R1 and R2; in addition, they can separate the two rear legs of the camel as well. [sent-199, score-0.453]

56 We design a series of matrices A2_i, i ∈ {1, 2, ···, m}, where A2_i is a binary matrix signifying the separation of mutex pair i by all candidate cut pairs. [sent-201, score-0.755]

57 If the mutex pair mpi can be separated by the combination of a pair of cuts, the corresponding entry of the n × n matrix A2_i is set to 1. [sent-203, score-0.575]

58 We extend the above convexity constraint to A(i, :)x + (1/2) x^T A2_i x ≥ 1 to enforce that the i-th mutex pair must be separated either by a single cut or by the combination of two cuts. [sent-205, score-0.776]
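
The quadratic extension can be sketched as follows: for each mutex pair i, an n-by-n symmetric 0/1 matrix A2_i marks the cut pairs whose combination separates it, and the constraint A(i,:)x + (1/2) x^T A2_i x ≥ 1 is then checked. `pair_separates` is a hypothetical predicate standing in for the geometric test.

```python
import numpy as np

def build_pair_separation_matrices(mutex_pairs, cuts, pair_separates):
    """For each mutex pair i, A2[i][j, k] = 1 if the combination of cuts j and
    k separates pair i; pair_separates(mp, cut_j, cut_k) is caller-supplied."""
    n = len(cuts)
    A2 = []
    for mp in mutex_pairs:
        M = np.zeros((n, n), dtype=int)
        for j in range(n):
            for k in range(j + 1, n):
                if pair_separates(mp, cuts[j], cuts[k]):
                    M[j, k] = M[k, j] = 1
        A2.append(M)
    return A2

def satisfies_extended_constraint(A, A2, x):
    """Check A(i,:) x + 0.5 * x^T A2_i x >= 1 for every i: each mutex pair is
    separated either by one selected cut or by a selected pair of cuts."""
    x = np.asarray(x, dtype=float)
    return all(A[i] @ x + 0.5 * (x @ A2[i] @ x) >= 1.0 for i in range(len(A2)))
```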

59 Hsim() – the part-similarity term: We aim to encourage a consistent decomposition of similar parts. [sent-209, score-0.371]

60 3, each candidate cut can separate the shape into two portions and we choose the smaller one as its corresponding part. [sent-211, score-0.461]

61 1, we define Hsim(i, j) = φ(Ti, Tj) to account for the similarity of a pair of contours derived from cuts Ci and Cj. [sent-225, score-0.508]
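
The exact similarity φ is left to a contour-matching measure; the sketch below is one hedged stand-in: resample each cut's corresponding contour by arc length, normalize for translation and scale, and turn the mean point-wise distance into a similarity in (0, 1]. It ignores rotation and reflection (relevant for mirrored parts such as left/right legs), so treat it only as an illustration of how Hsim could be filled.

```python
import numpy as np

def resample_contour(contour, n=64):
    """Resample an open polyline (K, 2) to n points, uniformly by arc length."""
    c = np.asarray(contour, dtype=float)
    seg = np.linalg.norm(np.diff(c, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    ts = np.linspace(0.0, t[-1], n)
    return np.column_stack([np.interp(ts, t, c[:, 0]), np.interp(ts, t, c[:, 1])])

def contour_similarity(Ti, Tj, n=64):
    """A simple stand-in for phi(Ti, Tj): translation/scale-normalised contours
    compared point-wise, mapped to (0, 1] with exp(-distance)."""
    def normalise(c):
        c = resample_contour(c, n)
        c = c - c.mean(axis=0)
        return c / max(np.linalg.norm(c), 1e-12)
    return float(np.exp(-np.linalg.norm(normalise(Ti) - normalise(Tj), axis=1).mean()))

def part_similarity_matrix(part_contours):
    """Hsim(i, j) = phi(T_i, T_j) for the contours derived from cuts C_i, C_j."""
    m = len(part_contours)
    H = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            H[i, j] = H[j, i] = contour_similarity(part_contours[i],
                                                   part_contours[j])
    return H
```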

62 I() – the cut income term: In order to improve the convexity of the decomposed parts, we employ the cut income term proposed in [9]. [sent-229, score-0.891]

63 The income of a cut is defined as the concavity of the mutex pair of regions separated by the cut. [sent-230, score-1.142]

64 1 (b), the concavity of mutex pair (blue regions) is fS − fp based on Eq. [sent-232, score-0.777]
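As a sketch, the income vector could be assembled as below, using the set-level concavity W from Eq. (6) as a proxy for the fS − fP gap in Fig. 1(b); `regions_separated_by` and `region_concavity` are hypothetical helpers, not names from the paper.

```python
def cut_income_vector(cuts, regions_separated_by, region_concavity):
    """Income of each candidate cut, following the idea of [9]: the income is
    the concavity of the mutex pair of regions the cut separates.

    regions_separated_by(cut) -> (R1, R2) and region_concavity(R1, R2)
    (e.g. W from Eq. (6)) are assumed to be supplied by the caller; the paper
    instead uses Morse-function values (fS - fP), which are not reproduced here."""
    incomes = []
    for cut in cuts:
        R1, R2 = regions_separated_by(cut)
        incomes.append(region_concavity(R1, R2))
    return incomes
```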

65 We sample the corresponding contour of each cut and compute the similarity between every two contours to get the part-similarity matrix Hsim. [sent-263, score-0.345]

66 In the first experiment, we compare our results with human decomposition on the MPEG-7 dataset. [sent-275, score-0.333]

67 Both the quantitative and qualitative evaluations show improved consistency with human decomposition compared with other shape decomposition methods. [sent-276, score-0.53]

68 To justify the motivation of this work, we show that object detection on the ETHZ dataset using semantically meaningful parts greatly improves the detection rate compared with using non-meaningful parts, especially in the case of object articulation. [sent-277, score-0.329]

69 These parts greatly affect the convexity of the object shape. [sent-286, score-0.412]

70 In order to satisfy the convexity constraint, curved parts will be cut into pieces as shown in Fig. [sent-287, score-0.679]

71 4 (c)) and map the cuts back to the original shape (Fig. [sent-292, score-0.44]

72 In the straightening process, the skeleton is straightened first, and then the points on the contour are shifted accordingly. [sent-294, score-0.349]

73 (a) The decomposition result on the original shape without straightening. [sent-298, score-0.442]

74 2 Experiment method: To verify the consistency of the proposed method's decomposition with human decomposition, we choose 20 representative categories which are suitable for decomposition from the MPEG-7 shape dataset, as shown in Fig. [sent-303, score-0.846]

75 In each shape category, for each instance i, we define G(i, 1) to measure the decomposition similarity between the proposed method and the humans; we define G(i, 2) to measure the decomposition consistency among humans. [sent-310, score-0.769]

76 gi(j, k) is the matching score between the j-th decomposition and the k-th decomposition of the i-th shape instance. [sent-316, score-0.728]

77 In Eq. (15), Aijq is the area of part Pijq, which denotes the q-th part of the j-th decomposition for shape i. [sent-321, score-0.544]

78 If two parts have little intersection, the F1 score is close to 0, and we define F1 = 0 if the two parts do not overlap. [sent-331, score-0.398]
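
A small sketch of this evaluation, assuming each part is given as a binary mask: the part-level F1 is computed from the intersection area (0 when disjoint), and a greedy, area-weighted matching approximates the score gi(j, k) between two decompositions; the exact matching used in Eq. (15) is not reproduced here.

```python
import numpy as np

def part_f1(mask_a, mask_b):
    """F1 overlap between two binary part masks; defined as 0 when they do not overlap."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    if inter == 0:
        return 0.0
    precision = inter / b.sum()
    recall = inter / a.sum()
    return 2.0 * precision * recall / (precision + recall)

def decomposition_match_score(parts_j, parts_k):
    """Greedy, area-weighted matching between two decompositions of the same
    shape, a rough stand-in for gi(j, k); parts are lists of binary masks."""
    total_area = float(sum(p.sum() for p in parts_j))
    return sum((p.sum() / total_area) * max(part_f1(p, q) for q in parts_k)
               for p in parts_j)
```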

79 To further show that the proposed method is more statistically consistent with human decomposition than the other methods, we conduct a pairwise t-test experiment. [sent-340, score-0.441]

80 If the decomposition consistency between the proposed method and humans is significantly less than the consistency among humans, the test result is 1; otherwise 0. [sent-342, score-0.486]
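
The test described here can be run per category with SciPy's paired t-test, shown below as an assumption about the exact variant (one-sided, paired over shape instances; requires SciPy >= 1.6 for the `alternative` argument). G1 and G2 hold the per-instance scores G(i, 1) and G(i, 2).

```python
from scipy.stats import ttest_rel

def not_significantly_worse_than_human(G1, G2, alpha=0.05):
    """One-sided paired t-test: is the method-vs-human consistency G(:, 1)
    significantly less than the human-vs-human consistency G(:, 2)?
    Returns True when it is NOT significantly less (the outcome reported
    for 17 of the 20 categories)."""
    _, p = ttest_rel(G1, G2, alternative='less')
    return p >= alpha
```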

81 From Table 2 we can see that on 17 object classes the decomposition consistency of the proposed method is not significantly less than that of humans, whereas the other methods achieve only 7, 9, 11 and 14 classes, respectively. [sent-343, score-0.352]

82 4 Qualitative visual comparison: Some visual comparisons between the proposed method and MNCD [17], PSD [9] and human decomposition are shown in Fig. [sent-347, score-0.333]

83 The results of our method are statistically consistent with human decomposition in 17 out of 20 categories. [sent-376, score-0.439]

84 This makes the decomposed parts more consistent with human perception. [sent-378, score-0.334]

85 So when ε increases, the decomposition ignores small concave parts caused by local distortions and yields a relatively robust result. [sent-410, score-0.51]

86 A larger b encourages more cuts that yield similar parts, and hence a more semantic decomposition. [sent-413, score-0.405]

87 We decompose the shapes to generate semantic parts (Fig. [sent-419, score-0.439]

88 In comparison, we simply replace the semantic parts with a set of random parts (not semantically meaningful as shown in Fig. [sent-422, score-0.556]

89 We can see that the semantic parts can boost the performance on all categories, which demonstrates the representative power of the proposed semantic parts. [sent-436, score-0.391]

90 The semantic parts capture the anatomical structure and remain rigid under articulation; in contrast, the random parts may change drastically. [sent-439, score-0.494]

91 9 shows some decomposition results of shapes with holes. [sent-444, score-0.329]

92 Because the cup handle is a curved branch and cannot be straightened by the preprocessing method, redundant cuts are generated. [sent-445, score-0.518]

93 10 shows a failure example of the proposed method: although our method generates shorter cuts than humans and more similar parts, the decomposition is not consistent with human perception. [sent-447, score-0.614]

94 It shows the limitation of shape decomposition based only on generic perception rules. [sent-449, score-0.567]

95 Conclusion In this paper, we propose a method to decompose a shape into semantic parts. [sent-456, score-0.353]

96 Apart from three existing perception rules, we propose a part-similarity rule to encourage consistent cuts for similar parts. [sent-457, score-0.733]

97 By jointly considering these perception rules, we formulate the shape decomposition problem as a quadratically constrained quadratic program (QCQP) and solve it by a trust-region method. [sent-458, score-0.641]
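
To make the formulation concrete, here is a hedged sketch of a continuous relaxation of the cut-selection problem solved with SciPy's trust-region constrained optimizer. The objective form L^T x − a·I^T x − b·x^T Hsim x is inferred from the cut-length, cut-income and part-similarity terms described above; the weights a and b, the relaxation x ∈ [0, 1]^n, the omission of the cut non-intersection constraint, and the final rounding step are all assumptions, not the paper's exact solver.

```python
import numpy as np
from scipy.optimize import Bounds, NonlinearConstraint, minimize

def relaxed_cut_selection(L, I, Hsim, A, A2, a=1.0, b=1.0):
    """Continuous relaxation (x in [0, 1]^n) of the QCQP-style cut selection.

    Minimises L^T x - a * I^T x - b * x^T Hsim x subject to the quadratic
    separation constraints A(i,:) x + 0.5 x^T A2_i x >= 1. The cut
    non-intersection constraints and the rounding of x back to a binary
    selection are omitted here."""
    L, I, Hsim = map(np.asarray, (L, I, Hsim))
    n = len(L)

    def objective(x):
        return L @ x - a * (I @ x) - b * (x @ Hsim @ x)

    def separation(x):
        return np.array([A[i] @ x + 0.5 * (x @ A2[i] @ x) for i in range(len(A2))])

    constraint = NonlinearConstraint(separation, lb=1.0, ub=np.inf)
    result = minimize(objective, x0=np.full(n, 0.5), method='trust-constr',
                      constraints=[constraint],
                      bounds=Bounds(np.zeros(n), np.ones(n)))
    return result.x  # threshold/round afterwards to obtain the selected cuts
```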

98 An object detection experiment is also conducted on the ETHZ dataset to demonstrate the advantage of the semantic parts over the non-meaningful parts for shape representation. [sent-460, score-0.718]

99 Convexity rule for shape decomposition based on discrete contour evolution. [sent-550, score-0.756]

100 From partial shape matching through local deformation to robust global shape similarity for object detection. [sent-580, score-0.346]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('mutex', 0.41), ('concavity', 0.294), ('decomposition', 0.286), ('cuts', 0.284), ('rule', 0.239), ('convexity', 0.213), ('parts', 0.199), ('cut', 0.19), ('shape', 0.156), ('mncd', 0.152), ('hsim', 0.127), ('income', 0.127), ('perception', 0.125), ('skeleton', 0.103), ('pijq', 0.101), ('decompose', 0.101), ('rules', 0.1), ('straightened', 0.098), ('psd', 0.097), ('semantic', 0.096), ('minima', 0.092), ('saddle', 0.084), ('candidate', 0.082), ('curved', 0.077), ('concavityf', 0.076), ('pikq', 0.076), ('ethz', 0.075), ('contour', 0.075), ('pair', 0.073), ('camel', 0.067), ('redundant', 0.059), ('morse', 0.056), ('contours', 0.055), ('sf', 0.055), ('aitx', 0.051), ('beings', 0.051), ('bxthsimx', 0.051), ('ltx', 0.051), ('xthx', 0.051), ('quadratically', 0.049), ('fs', 0.049), ('separated', 0.048), ('human', 0.047), ('endpoints', 0.046), ('cj', 0.045), ('qcqp', 0.045), ('straighten', 0.045), ('decomposed', 0.044), ('consistent', 0.044), ('mpi', 0.044), ('shapes', 0.043), ('csd', 0.042), ('consistency', 0.041), ('encourage', 0.041), ('lien', 0.039), ('part', 0.039), ('qualitatively', 0.038), ('latecki', 0.037), ('pku', 0.037), ('straightening', 0.037), ('legs', 0.036), ('convex', 0.036), ('points', 0.036), ('fppi', 0.036), ('humans', 0.035), ('detection', 0.034), ('deformation', 0.034), ('experiment', 0.034), ('pruning', 0.034), ('separate', 0.033), ('pages', 0.033), ('meaningful', 0.033), ('constraint', 0.032), ('feet', 0.032), ('statistically', 0.032), ('conduct', 0.032), ('adopted', 0.031), ('gopalan', 0.031), ('categories', 0.03), ('decompositions', 0.029), ('bending', 0.029), ('semantically', 0.029), ('pi', 0.029), ('psychological', 0.029), ('direction', 0.027), ('templates', 0.027), ('intersect', 0.027), ('articulation', 0.027), ('jiang', 0.027), ('compositional', 0.026), ('orange', 0.025), ('shall', 0.025), ('springer', 0.025), ('md', 0.025), ('quadratic', 0.025), ('get', 0.025), ('area', 0.024), ('otherwise', 0.024), ('interior', 0.024), ('category', 0.024)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 21 iccv-2013-A Method of Perceptual-Based Shape Decomposition

Author: Chang Ma, Zhongqian Dong, Tingting Jiang, Yizhou Wang, Wen Gao

Abstract: In this paper, we propose a novel perception-based shape decomposition method which aims to decompose a shape into semantically meaningful parts. In addition to three popular perception rules (the Minima rule, the Short-cut rule and the Convexity rule) in shape decomposition, we propose a new rule, named the part-similarity rule, to encourage consistent partition of similar parts. The problem is formulated as a quadratically constrained quadratic program (QCQP) and is solved by a trust-region method. Experimental results on the MPEG-7 dataset show that our shape decomposition is more consistent with human perception than that of other state-of-the-art methods, both qualitatively and quantitatively. Finally, we show the advantage of semantic parts over non-meaningful parts in object detection on the ETHZ dataset.

2 0.13247861 11 iccv-2013-A Fully Hierarchical Approach for Finding Correspondences in Non-rigid Shapes

Author: Ivan Sipiran, Benjamin Bustos

Abstract: This paper presents a hierarchical method for finding correspondences in non-rigid shapes. We propose a new representation for 3D meshes: the decomposition tree. This structure characterizes the recursive decomposition process of a mesh into regions of interest and keypoints. The internal nodes contain regions of interest (which may be recursively decomposed) and the leaf nodes contain the keypoints to be matched. We also propose a hierarchical matching algorithm that performs in a level-wise manner. The matching process is guided by the similarity between regions in high levels of the tree, until reaching the keypoints stored in the leaves. This allows us to reduce the search space of correspondences, making also the matching process efficient. We evaluate the effectiveness of our approach using the SHREC’2010 robust correspondence benchmark. In addition, we show that our results outperform the state of the art.

3 0.096751742 104 iccv-2013-Decomposing Bag of Words Histograms

Author: Ankit Gandhi, Karteek Alahari, C.V. Jawahar

Abstract: We aim to decompose a global histogram representation of an image into histograms of its associated objects and regions. This task is formulated as an optimization problem, given a set of linear classifiers, which can effectively discriminate the object categories present in the image. Our decomposition bypasses harder problems associated with accurately localizing and segmenting objects. We evaluate our method on a wide variety of composite histograms, and also compare it with MRF-based solutions. In addition to merely measuring the accuracy of decomposition, we also show the utility of the estimated object and background histograms for the task of image classification on the PASCAL VOC 2007 dataset.

4 0.096384235 204 iccv-2013-Human Attribute Recognition by Rich Appearance Dictionary

Author: Jungseock Joo, Shuo Wang, Song-Chun Zhu

Abstract: We present a part-based approach to the problem of human attribute recognition from a single image of a human body. To recognize the attributes of human from the body parts, it is important to reliably detect the parts. This is a challenging task due to the geometric variation such as articulation and view-point changes as well as the appearance variation of the parts arisen from versatile clothing types. The prior works have primarily focused on handling geometric variation by relying on pre-trained part detectors or pose estimators, which require manual part annotation, but the appearance variation has been relatively neglected in these works. This paper explores the importance of the appearance variation, which is directly related to the main task, attribute recognition. To this end, we propose to learn a rich appearance part dictionary of human with significantly less supervision by decomposing image lattice into overlapping windows at multiscale and iteratively refining local appearance templates. We also present quantitative results in which our proposed method outperforms the existing approaches.

5 0.090943582 70 iccv-2013-Cascaded Shape Space Pruning for Robust Facial Landmark Detection

Author: Xiaowei Zhao, Shiguang Shan, Xiujuan Chai, Xilin Chen

Abstract: In this paper, we propose a novel cascaded face shape space pruning algorithm for robust facial landmark detection. Through progressively excluding the incorrect candidate shapes, our algorithm can accurately and efficiently achieve the globally optimal shape configuration. Specifically, individual landmark detectors are firstly applied to eliminate wrong candidates for each landmark. Then, the candidate shape space is further pruned by jointly removing incorrect shape configurations. To achieve this purpose, a discriminative structure classifier is designed to assess the candidate shape configurations. Based on the learned discriminative structure classifier, an efficient shape space pruning strategy is proposed to quickly reject most incorrect candidate shapes while preserve the true shape. The proposed algorithm is carefully evaluated on a large set of real world face images. In addition, comparison results on the publicly available BioID and LFW face databases demonstrate that our algorithm outperforms some state-of-the-art algorithms.

6 0.087414287 150 iccv-2013-Exemplar Cut

7 0.081560202 107 iccv-2013-Deformable Part Descriptors for Fine-Grained Recognition and Attribute Prediction

8 0.078295566 186 iccv-2013-GrabCut in One Cut

9 0.077782959 110 iccv-2013-Detecting Curved Symmetric Parts Using a Deformable Disc Model

10 0.074637204 177 iccv-2013-From Point to Set: Extend the Learning of Distance Metrics

11 0.072919369 326 iccv-2013-Predicting Sufficient Annotation Strength for Interactive Foreground Segmentation

12 0.072281063 379 iccv-2013-Semantic Segmentation without Annotating Segments

13 0.068717234 62 iccv-2013-Bird Part Localization Using Exemplar-Based Models with Enforced Pose and Subcategory Consistency

14 0.06733916 140 iccv-2013-Elastic Net Constraints for Shape Matching

15 0.064251103 273 iccv-2013-Monocular Image 3D Human Pose Estimation under Self-Occlusion

16 0.063532032 61 iccv-2013-Beyond Hard Negative Mining: Efficient Detector Learning via Block-Circulant Decomposition

17 0.062427554 30 iccv-2013-A Simple Model for Intrinsic Image Decomposition with Depth Cues

18 0.062252499 236 iccv-2013-Learning Discriminative Part Detectors for Image Classification and Cosegmentation

19 0.062096894 10 iccv-2013-A Framework for Shape Analysis via Hilbert Space Embedding

20 0.061948348 320 iccv-2013-Pose-Configurable Generic Tracking of Elongated Objects


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.157), (1, -0.014), (2, -0.002), (3, -0.035), (4, 0.032), (5, -0.011), (6, -0.014), (7, 0.022), (8, -0.012), (9, -0.059), (10, 0.009), (11, 0.077), (12, -0.059), (13, -0.017), (14, -0.001), (15, 0.059), (16, 0.059), (17, 0.038), (18, 0.002), (19, -0.046), (20, 0.072), (21, 0.042), (22, -0.008), (23, -0.014), (24, -0.018), (25, 0.04), (26, 0.039), (27, -0.012), (28, -0.0), (29, 0.005), (30, -0.018), (31, -0.034), (32, -0.045), (33, -0.006), (34, 0.041), (35, 0.038), (36, -0.016), (37, -0.018), (38, -0.05), (39, -0.065), (40, 0.007), (41, 0.028), (42, 0.064), (43, 0.085), (44, 0.035), (45, 0.056), (46, 0.046), (47, 0.019), (48, -0.083), (49, -0.01)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95800114 21 iccv-2013-A Method of Perceptual-Based Shape Decomposition

Author: Chang Ma, Zhongqian Dong, Tingting Jiang, Yizhou Wang, Wen Gao

Abstract: In this paper, we propose a novel perception-based shape decomposition method which aims to decompose a shape into semantically meaningful parts. In addition to three popular perception rules (the Minima rule, the Short-cut rule and the Convexity rule) in shape decomposition, we propose a new rule, named the part-similarity rule, to encourage consistent partition of similar parts. The problem is formulated as a quadratically constrained quadratic program (QCQP) and is solved by a trust-region method. Experimental results on the MPEG-7 dataset show that our shape decomposition is more consistent with human perception than that of other state-of-the-art methods, both qualitatively and quantitatively. Finally, we show the advantage of semantic parts over non-meaningful parts in object detection on the ETHZ dataset.

2 0.70164371 110 iccv-2013-Detecting Curved Symmetric Parts Using a Deformable Disc Model

Author: Tom Sie Ho Lee, Sanja Fidler, Sven Dickinson

Abstract: Symmetry is a powerful shape regularity that’s been exploited by perceptual grouping researchers in both human and computer vision to recover part structure from an image without a priori knowledge of scene content. Drawing on the concept of a medial axis, defined as the locus of centers of maximal inscribed discs that sweep out a symmetric part, we model part recovery as the search for a sequence of deformable maximal inscribed disc hypotheses generated from a multiscale superpixel segmentation, a framework proposed by [13]. However, we learn affinities between adjacent superpixels in a space that’s invariant to bending and tapering along the symmetry axis, enabling us to capture a wider class of symmetric parts. Moreover, we introduce a global cost that perceptually integrates the hypothesis space by combining a pairwise and a higher-level smoothing term, which we minimize globally using dynamic programming. The new framework is demonstrated on two datasets, and is shown to significantly outperform the baseline [13].

3 0.66410226 330 iccv-2013-Proportion Priors for Image Sequence Segmentation

Author: Claudia Nieuwenhuis, Evgeny Strekalovskiy, Daniel Cremers

Abstract: We propose a convex multilabel framework for image sequence segmentation which allows to impose proportion priors on object parts in order to preserve their size ratios across multiple images. The key idea is that for strongly deformable objects such as a gymnast the size ratio of respective regions (head versus torso, legs versus full body, etc.) is typically preserved. We propose different ways to impose such priors in a Bayesian framework for image segmentation. We show that near-optimal solutions can be computed using convex relaxation techniques. Extensive qualitative and quantitative evaluations demonstrate that the proportion priors allow for highly accurate segmentations, avoiding seeping-out of regions and preserving semantically relevant small-scale structures such as hands or feet. They naturally apply to multiple object instances such as players in sports scenes, and they can relate different objects instead of object parts, e.g. organs in medical imaging. The algorithm is efficient and easily parallelized leading to proportion-consistent segmentations at runtimes around one second.

4 0.61471111 379 iccv-2013-Semantic Segmentation without Annotating Segments

Author: Wei Xia, Csaba Domokos, Jian Dong, Loong-Fah Cheong, Shuicheng Yan

Abstract: Numerous existing object segmentation frameworks commonly utilize the object bounding box as a prior. In this paper, we address semantic segmentation assuming that object bounding boxes are provided by object detectors, but no training data with annotated segments are available. Based on a set of segment hypotheses, we introduce a simple voting scheme to estimate shape guidance for each bounding box. The derived shape guidance is used in the subsequent graph-cut-based figure-ground segmentation. The final segmentation result is obtained by merging the segmentation results in the bounding boxes. We conduct an extensive analysis of the effect of object bounding box accuracy. Comprehensive experiments on both the challenging PASCAL VOC object segmentation dataset and GrabCut50 image segmentation dataset show that the proposed approach achieves competitive results compared to previous detection or bounding box prior based methods, as well as other state-of-the-art semantic segmentation methods.

5 0.60876626 390 iccv-2013-Shufflets: Shared Mid-level Parts for Fast Object Detection

Author: Iasonas Kokkinos

Abstract: We present a method to identify and exploit structures that are shared across different object categories, by using sparse coding to learn a shared basis for the ‘part’ and ‘root’ templates of Deformable Part Models (DPMs). Our first contribution consists in using Shift-Invariant Sparse Coding (SISC) to learn mid-level elements that can translate during coding. This results in systematically better approximations than those attained using standard sparse coding. To emphasize that the learned mid-level structures are shiftable we call them shufflets. Our second contribution consists in using the resulting score to construct probabilistic upper bounds to the exact template scores, instead of taking them ‘at face value ’ as is common in current works. We integrate shufflets in DualTree Branch-and-Bound and cascade-DPMs and demonstrate that we can achieve a substantial acceleration, with practically no loss in performance.

6 0.60406232 186 iccv-2013-GrabCut in One Cut

7 0.60386568 364 iccv-2013-SGTD: Structure Gradient and Texture Decorrelating Regularization for Image Decomposition

8 0.6022386 307 iccv-2013-Parallel Transport of Deformations in Shape Space of Elastic Surfaces

9 0.59877652 8 iccv-2013-A Deformable Mixture Parsing Model with Parselets

10 0.59296441 422 iccv-2013-Toward Guaranteed Illumination Models for Non-convex Objects

11 0.57963681 411 iccv-2013-Symbiotic Segmentation and Part Localization for Fine-Grained Categorization

12 0.57359189 104 iccv-2013-Decomposing Bag of Words Histograms

13 0.57142133 169 iccv-2013-Fine-Grained Categorization by Alignments

14 0.56746691 107 iccv-2013-Deformable Part Descriptors for Fine-Grained Recognition and Attribute Prediction

15 0.56577939 11 iccv-2013-A Fully Hierarchical Approach for Finding Correspondences in Non-rigid Shapes

16 0.56361878 320 iccv-2013-Pose-Configurable Generic Tracking of Elongated Objects

17 0.55485326 278 iccv-2013-Multi-scale Topological Features for Hand Posture Representation and Analysis

18 0.55216604 288 iccv-2013-Nested Shape Descriptors

19 0.55105609 388 iccv-2013-Shape Index Descriptors Applied to Texture-Based Galaxy Analysis

20 0.54923034 270 iccv-2013-Modeling Self-Occlusions in Dynamic Shape and Appearance Tracking


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.096), (2, 0.069), (7, 0.013), (12, 0.017), (13, 0.01), (16, 0.018), (26, 0.104), (31, 0.028), (35, 0.049), (37, 0.011), (40, 0.019), (42, 0.127), (48, 0.01), (51, 0.102), (64, 0.027), (73, 0.033), (89, 0.134), (95, 0.011), (98, 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.86393517 21 iccv-2013-A Method of Perceptual-Based Shape Decomposition

Author: Chang Ma, Zhongqian Dong, Tingting Jiang, Yizhou Wang, Wen Gao

Abstract: In this paper, we propose a novel perception-based shape decomposition method which aims to decompose a shape into semantically meaningful parts. In addition to three popular perception rules (the Minima rule, the Short-cut rule and the Convexity rule) in shape decomposition, we propose a new rule, named the part-similarity rule, to encourage consistent partition of similar parts. The problem is formulated as a quadratically constrained quadratic program (QCQP) and is solved by a trust-region method. Experimental results on the MPEG-7 dataset show that our shape decomposition is more consistent with human perception than that of other state-of-the-art methods, both qualitatively and quantitatively. Finally, we show the advantage of semantic parts over non-meaningful parts in object detection on the ETHZ dataset.

2 0.82147121 119 iccv-2013-Discriminant Tracking Using Tensor Representation with Semi-supervised Improvement

Author: Jin Gao, Junliang Xing, Weiming Hu, Steve Maybank

Abstract: Visual tracking has witnessed growing methods in object representation, which is crucial to robust tracking. The dominant mechanism in object representation is using image features encoded in a vector as observations to perform tracking, without considering that an image is intrinsically a matrix, or a 2nd-order tensor. Thus approaches following this mechanism inevitably lose a lot of useful information, and therefore cannot fully exploit the spatial correlations within the 2D image ensembles. In this paper, we address an image as a 2nd-order tensor in its original form, and find a discriminative linear embedding space approximation to the original nonlinear submanifold embedded in the tensor space based on the graph embedding framework. We specially design two graphs for characterizing the intrinsic local geometrical structure of the tensor space, so as to retain more discriminant information when reducing the dimension along certain tensor dimensions. However, spatial correlations within a tensor are not limited to the elements along these dimensions. This means that some part of the discriminant information may not be encoded in the embedding space. We introduce a novel technique called semi-supervised improvement to iteratively adjust the embedding space to compensate for the loss of discriminant information, hence improving the performance of our tracker. Experimental results on challenging videos demonstrate the effectiveness and robustness of the proposed tracker.

3 0.81808716 161 iccv-2013-Fast Sparsity-Based Orthogonal Dictionary Learning for Image Restoration

Author: Chenglong Bao, Jian-Feng Cai, Hui Ji

Abstract: In recent years, how to learn a dictionary from input images for sparse modelling has been one very active topic in image processing and recognition. Most existing dictionary learning methods consider an over-complete dictionary, e.g. the K-SVD method. Often they require solving some minimization problem that is very challenging in terms of computational feasibility and efficiency. However, if the correlations among dictionary atoms are not well constrained, the redundancy of the dictionary does not necessarily improve the performance of sparse coding. This paper proposed a fast orthogonal dictionary learning method for sparse image representation. With comparable performance on several image restoration tasks, the proposed method is much more computationally efficient than the over-complete dictionary based learning methods.

4 0.8172493 326 iccv-2013-Predicting Sufficient Annotation Strength for Interactive Foreground Segmentation

Author: Suyog Dutt Jain, Kristen Grauman

Abstract: The mode of manual annotation used in an interactive segmentation algorithm affects both its accuracy and easeof-use. For example, bounding boxes are fast to supply, yet may be too coarse to get good results on difficult images; freehand outlines are slower to supply and more specific, yet they may be overkill for simple images. Whereas existing methods assume a fixed form of input no matter the image, we propose to predict the tradeoff between accuracy and effort. Our approach learns whether a graph cuts segmentation will succeed if initialized with a given annotation mode, based on the image ’s visual separability and foreground uncertainty. Using these predictions, we optimize the mode of input requested on new images a user wants segmented. Whether given a single image that should be segmented as quickly as possible, or a batch of images that must be segmented within a specified time budget, we show how to select the easiest modality that will be sufficiently strong to yield high quality segmentations. Extensive results with real users and three datasets demonstrate the impact.

5 0.81402636 54 iccv-2013-Attribute Pivots for Guiding Relevance Feedback in Image Search

Author: Adriana Kovashka, Kristen Grauman

Abstract: In interactive image search, a user iteratively refines his results by giving feedback on exemplar images. Active selection methods aim to elicit useful feedback, but traditional approaches suffer from expensive selection criteria and cannot predict informativeness reliably due to the imprecision of relevance feedback. To address these drawbacks, we propose to actively select “pivot” exemplars for which feedback in the form of a visual comparison will most reduce the system’s uncertainty. For example, the system might ask, “Is your target image more or less crowded than this image? ” Our approach relies on a series of binary search trees in relative attribute space, together with a selection function that predicts the information gain were the user to compare his envisioned target to the next node deeper in a given attribute ’s tree. It makes interactive search more efficient than existing strategies—both in terms of the system ’s selection time as well as the user’s feedback effort.

6 0.81152797 327 iccv-2013-Predicting an Object Location Using a Global Image Representation

7 0.81137919 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

8 0.81124932 330 iccv-2013-Proportion Priors for Image Sequence Segmentation

9 0.81076014 150 iccv-2013-Exemplar Cut

10 0.81062973 44 iccv-2013-Adapting Classification Cascades to New Domains

11 0.80790067 383 iccv-2013-Semi-supervised Learning for Large Scale Image Cosegmentation

12 0.80779475 80 iccv-2013-Collaborative Active Learning of a Kernel Machine Ensemble for Recognition

13 0.80697876 156 iccv-2013-Fast Direct Super-Resolution by Simple Functions

14 0.80665338 259 iccv-2013-Manifold Based Face Synthesis from Sparse Samples

15 0.80601275 245 iccv-2013-Learning a Dictionary of Shape Epitomes with Applications to Image Labeling

16 0.80581391 398 iccv-2013-Sparse Variation Dictionary Learning for Face Recognition with a Single Training Sample per Person

17 0.80350006 384 iccv-2013-Semi-supervised Robust Dictionary Learning via Efficient l-Norms Minimization

18 0.80346501 241 iccv-2013-Learning Near-Optimal Cost-Sensitive Decision Policy for Object Detection

19 0.80270535 180 iccv-2013-From Where and How to What We See

20 0.80206567 20 iccv-2013-A Max-Margin Perspective on Sparse Representation-Based Classification