iccv iccv2013 iccv2013-368 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Xiaochun Cao, Hua Zhang, Si Liu, Xiaojie Guo, Liang Lin
Abstract: Recently, studies on sketch, such as sketch retrieval and sketch classification, have received more attention in the computer vision community. One of its most fundamental and essential problems is how to more effectively describe a sketch image. Many existing descriptors, such as shape context, have achieved great success. In this paper, we propose a new descriptor, namely Symmetric-aware Flip Invariant Sketch Histogram (SYM-FISH) to refine the shape context feature. Its extraction process includes three steps. First the Flip Invariant Sketch Histogram (FISH) descriptor is extracted on the input image, which is a flip-invariant version of the shape context feature. Then we explore the symmetry character of the image by calculating the kurtosis coefficient. Finally, the SYM-FISH is generated by constructing a symmetry table. The new SYM-FISH descriptor supplements the original shape context by encoding the symmetric information, which is a pervasive characteristic of natural scene and objects. We evaluate the efficacy of the novel descriptor in two applications, i.e., sketch retrieval and sketch classification. Extensive experiments on three datasets well demonstrate the effectiveness and robustness of the proposed SYM-FISH descriptor.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract Recently, studies on sketch, such as sketch retrieval and sketch classification, have received more attention in the computer vision community. [sent-11, score-1.344]
2 One of its most fundamental and essential problems is how to more effectively describe a sketch image. [sent-12, score-0.615]
3 Then we explore the symmetry character of the image by calculating the kurtosis coefficient. [sent-17, score-0.738]
4 Finally, the SYM-FISH is generated by constructing a symmetry table. [sent-18, score-0.555]
5 The new SYM-FISH descriptor supplements the original shape context by encoding the symmetric information, which is a pervasive characteristic of natural scene and objects. [sent-19, score-0.305]
6 With touch devices such as the iPad and Microsoft Surface, sketch-related studies have become unprecedentedly popular nowadays. [sent-27, score-0.615]
7 The sketches drawn by users are used as queries fed into the sketch retrieval system. [sent-29, score-0.817]
8 The first query is non-symmetric, the second query is bilaterally symmetric, while the last one is rotationally symmetric. [sent-33, score-0.354]
9 For each query, the retrieval results of three kinds of shape descriptors: shape context, FISH and SYM-FISH are shown sequentially in different rows. [sent-34, score-0.258]
10 The first column is the query sketch images, and the remaining columns are returned real life images. [sent-35, score-0.78]
11 Actually, besides sketch retrieval [5], many other edge-related tasks, such as sketch detection [23] and sketch recognition [2], are also extensively studied. [sent-39, score-1.959]
12 Moreover, symmetry is invariant to scale as well as to translation. [sent-44, score-0.596]
13 Although there is a long history of symmetry study [9, 13, 21], it has rarely been integrated into descriptors in a unified framework. [sent-45, score-0.603]
14 First, the image is represented by a flip invariant descriptor. [sent-48, score-0.254]
15 Then we minimize the energy measurement to determine the symmetry directions in each image patch. [sent-56, score-0.555]
16 Finally, we incorporate the symmetry character into image representation. [sent-57, score-0.6]
17 We construct a graph named the symmetry table to describe the symmetry character and generate the SYMmetry-aware Flip Invariant Sketch Histogram (SYM-FISH) (Section 4). [sent-58, score-1.155]
18 Please note that, based on the graph, we can handle both cases, with and without the symmetry property in the image. [sent-60, score-0.586]
19 To validate the effectiveness of our proposed approach, we apply it to two applications: sketch retrieval [5] and sketch classification [15]. [sent-61, score-1.405]
20 The sketch retrieval task is quite challenging because of the huge gap between the sketch query and real life repository images. [sent-62, score-1.557]
21 Some of the representative works on sketch retrieval are MindFinder [4] [5] and Sketch2photo [6]. [sent-63, score-0.729]
22 However, little attention has been paid to studying the symmetry character of the images. [sent-65, score-0.6]
23 Furthermore, we conduct experiments on two benchmark datasets: the ETH shape dataset [8] and a large-scale sketch retrieval dataset [17]. [sent-68, score-0.887]
24 Sketch images from the same category may share a certain common preference for symmetry. [sent-73, score-0.635]
25 We conduct extensive experiments on the sketch dataset [15]. [sent-75, score-0.63]
26 Experimental results show that the proposed SYM-FISH descriptor is more discriminating than standard descriptors, such as shape context, and can significantly improve sketch classification performance. [sent-76, score-0.791]
27 Related work There is limited related work on sketch classification. [sent-78, score-0.615]
28 However, none of the aforementioned descriptors are enforced to be flip invariant, while our proposed SYM-FISH can handle flip cases well. [sent-84, score-0.555]
29 Therefore, we propose a novel sketch descriptor which can handle various transformations, e.g. [sent-90, score-0.678]
30 Thus, in our proposed method, we introduce the symmetry structure of the image to compensate for this shortcoming. [sent-95, score-0.555]
31 A recent related work is [10], which proposed a symmetry-score approach to find symmetric feature points and afterwards constructed a symmetry descriptor for building matching. [sent-97, score-1.804]
32 Our proposed symmetry visual-word phase is robust to distortions and noise. [sent-106, score-0.622]
33 FISH descriptor construction process: sampling feature points, mapping the sampled points into the log-polar coordinate system, and developing the descriptor representation. [sent-109, score-0.244]
34 symmetry character of the image by analyzing matching scores, and 3) constructing a symmetry table combined with the FISH descriptor to finally generate the SYM-FISH. [sent-112, score-0.71]
35 Although shape context has achieved great success, it cannot handle the flip case. [sent-121, score-0.382]
36 The sketches in Fig. 2(a) are quite similar (differing only by a flip). [sent-124, score-0.236]
37 To handle the flip variations, we propose a FISH descriptor, which can be viewed as a post-processing procedure after shape context feature is extracted. [sent-125, score-0.388]
38 More specifically, we re-order all the bins in the shape context in two steps: determining the reference bin and then the rotation orientation. [sent-126, score-0.284]
39 To sum up, we can roughly align the FISH features by re-ordering the bins of shape context according to the inferred reference bin and the rotation orientation. [sent-140, score-0.262]
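As a concrete illustration, the bin re-ordering idea can be sketched on a 1-D angular histogram. This is a simplified, hypothetical version (NumPy, a single angular ring; the paper's descriptor also has radial bins and its own reference-bin and orientation rules):

```python
import numpy as np

def align_bins(hist):
    """Canonicalize an angular histogram so that a rotated or flipped
    version of the shape yields the same bin ordering (a simplified
    stand-in for the FISH re-ordering of shape-context bins)."""
    hist = np.asarray(hist, dtype=float)
    ref = int(np.argmax(hist))             # step 1: pick the reference bin
    rolled = np.roll(hist, -ref)           # rotate it to the front
    # step 2: pick the traversal direction by comparing the two
    # neighbors of the reference bin; reading toward the heavier
    # neighbor cancels a mirror flip of the shape
    if rolled[-1] > rolled[1]:
        rolled = np.roll(rolled[::-1], 1)  # reverse, keep reference first
    return rolled

h = np.array([1., 5., 3., 2., 0., 1.])
print(align_bins(h))                # canonical ordering of the original
print(align_bins(np.roll(h, 2)))    # rotated input, same ordering
print(align_bins(h[::-1]))          # flipped input, same ordering
```

All three calls return the same vector, which is the sense in which the re-ordered histogram is rotation and flip invariant.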
40 As there is no benchmark sketch dataset specifically for matching, we collected a database of sketch pairs ourselves. [sent-160, score-1.301]
41 Each pair consists of an original image and its rotated, flipped, and scaled versions. [sent-162, score-0.239]
42 In the sketch pair database, the orientation angle is randomly selected from (0,360). [sent-163, score-0.652]
43 For the flip situation, we flip the whole original image. [sent-165, score-0.444]
44 The setting of the matching experiment is as follows: 300 feature points are sampled on two sketch images; then an adjacency matrix is constructed by computing the similarity among their FISH descriptors. [sent-166, score-0.735]
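The adjacency-matrix step might look like the following sketch; the paper does not specify which similarity it uses, so the cosine similarity on L2-normalized descriptors here is an assumption:

```python
import numpy as np

def fish_adjacency(F1, F2):
    """Adjacency (similarity) matrix between two descriptor sets.
    F1: (m, d) and F2: (n, d) arrays of L2-normalized FISH vectors;
    entry (i, j) is the cosine similarity of descriptor i and j."""
    return F1 @ F2.T

rng = np.random.default_rng(0)
F1 = rng.random((5, 8))
F1 /= np.linalg.norm(F1, axis=1, keepdims=True)   # normalize rows
F2 = rng.random((6, 8))
F2 /= np.linalg.norm(F2, axis=1, keepdims=True)
A = fish_adjacency(F1, F2)
print(A.shape)   # one similarity per point pair: (5, 6)
```

In the experiment each image would contribute 300 sampled points rather than the toy 5 and 6 used here.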
45 The red line indicates the detected symmetry axis of the original image. [sent-179, score-0.592]
46 For the bilateral symmetry image (a), the kurtosis value is large and the distribution has a peak (d). [sent-180, score-0.767]
47 In contrast, the rotation symmetry image (b) has a small kurtosis value and a flat distribution (e). [sent-181, score-0.762]
48 Specifically, the two separated parts can be mapped to each other by reflection about the symmetry axis. [sent-195, score-0.57]
49 2n-fold rotation symmetry: There exist more than two symmetry lines in a sketch, and these lines intersect at one point. [sent-198, score-0.624]
50 In our problem, if a sketch includes two symmetry lines which are not orthogonal, we also define it as rotation symmetry. [sent-199, score-1.239]
51 Thus, discovering the local symmetry of a region can be converted into searching for the symmetry axis of the sketch. [sent-202, score-0.702]
52 To detect the symmetry axis on the input sketch, we propose a compact energy minimization method. [sent-203, score-0.575]
53 The whole strategy is shown in Algorithm 1. (Footnote: curved symmetry may be considered as another symmetry category, [sent-204, score-1.125]
54 but we think curved symmetry is a kind of piecewise bilateral symmetry.) [sent-205, score-0.658]
55 [Algorithm 1, fragment: 14: end if (image Ii is not symmetric); 15: end if; 16: Output: its symmetry type and the symmetric points.] [sent-221, score-0.834]
56 O represents the symmetry directions, which is fixed; the score measures the Euclidean distance. [sent-235, score-0.555]
57 Then, we select the minimum score of each orientation o as the potential symmetry orientation and accumulate all the scores to generate a 36-dim vector, as shown in line 6 of Algorithm 1. [sent-237, score-0.594]
58 We observe that for a bilateral symmetry sketch the matching score Score_Ii(O) has a unimodal distribution, while for rotation symmetry Score_Ii(O) has a multimodal distribution. [sent-238, score-1.359]
59 Kurt(Ii) = (1/σ^4) * Σ_λ (λ − μ_λ)^4 * Score_Ii(λ) (3), where σ is the standard deviation and λ represents the angle of the symmetry axis. [sent-241, score-0.555]
60 Usually, bilateral symmetry produces a much higher Kurt score compared with rotation-symmetric and non-symmetric cases, as shown in Fig. [sent-244, score-0.81]
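The kurtosis test of Eq. (3) can be sketched as follows; the 36-bin score vectors and the decision threshold here are illustrative assumptions, not the paper's values:

```python
import numpy as np

def kurt_coefficient(score):
    """Kurtosis coefficient of an orientation-score distribution,
    a normalized form of Eq. (3); score[k] is the matching score of
    the k-th candidate symmetry orientation."""
    angles = np.arange(len(score), dtype=float)
    w = np.asarray(score, dtype=float)
    w = w / w.sum()                                  # treat scores as weights
    mu = np.sum(angles * w)
    var = np.sum((angles - mu) ** 2 * w)
    return np.sum((angles - mu) ** 4 * w) / var ** 2

def symmetry_type(score, peak_kurt=2.5):
    """Peaked (unimodal) scores give high kurtosis: bilateral symmetry;
    flat scores give low kurtosis: rotation symmetry. Threshold is
    illustrative."""
    return "bilateral" if kurt_coefficient(score) > peak_kurt else "rotation"

angles = np.arange(36)                               # 36 candidate orientations
peaked = np.exp(-0.5 * ((angles - 18) / 2.0) ** 2)   # one clear symmetry axis
flat = np.ones(36)                                   # no preferred axis
print(symmetry_type(peaked), symmetry_type(flat))    # -> bilateral rotation
```

A uniform distribution over 36 orientations has a kurtosis near 1.8, while a Gaussian-shaped peak is near 3, so any threshold between the two separates the cases.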
61 Evaluation of symmetry discovering: We test the effectiveness of Algorithm 1 on a subset of the sketch database [17]. [sent-257, score-1.203]
62 The validation database is composed of 31 human-drawn sketches, which contain bilateral symmetry sketches, rotation symmetry sketches and non-symmetry sketches. [sent-258, score-1.884]
63 We would like to know whether the symmetry type can be correctly classified. [sent-259, score-0.555]
64 Symmetry-aware Flip Invariant Sketch Histogram: In image retrieval and classification, local descriptors, such as SIFT [14], shape context [2] and FISH, are not directly used for the image representation. [sent-265, score-0.264]
65 Usually, we summarize all the local descriptors in a sketch image with the Bag-of-Words (BoWs) representation. [sent-266, score-0.678]
66 Thus, in this section, we will illustrate how to fuse the symmetry property among feature points into the visual word representation. [sent-267, score-0.669]
67 In this paper we only focus on symmetry and do not exploit other spatial structure of the sketch images. [sent-270, score-1.17]
68 We propose to use a symmetry table to capture the symmetry relations among visual words. [sent-271, score-1.124]
69 Thirdly, we map the symmetry of feature points to symmetry of visual words. [sent-278, score-1.175]
70 Finally, we construct a symmetry table Y ∈ {0, 1}^(N×N), where N is the number of visual words. [sent-280, score-0.555]
71 Y is an index matrix whose element Yi,j indicates whether the visual words Vi and Vj are symmetric in the sketch images. [sent-281, score-0.755]
72 With the symmetry table, the symmetry relationship is transferred from the feature-point level to the visual-word level. [sent-283, score-1.19]
73 To sum up, besides the original BoWs feature, for each sketch image we have a new structural feature called the SYMmetry-aware Flip Invariant Sketch Histogram (SYM-FISH). [sent-284, score-0.709]
74 It is the combination of original FISH feature and a symmetry table. [sent-285, score-0.577]
75 Thus the distance between two symmetry tables is just the Hamming distance. [sent-287, score-0.575]
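A minimal sketch of the symmetry table and its Hamming-distance comparison (the vocabulary size and the symmetric word pairs below are made up for illustration):

```python
import numpy as np

def symmetry_table(n_words, symmetric_pairs):
    """Build the N x N binary table Y, where Y[i, j] = 1 iff visual
    words i and j occur as a symmetric pair in the sketch (the pairs
    come from mapping symmetric feature points to their visual words)."""
    Y = np.zeros((n_words, n_words), dtype=np.uint8)
    for i, j in symmetric_pairs:
        Y[i, j] = Y[j, i] = 1    # the relation is symmetric by construction
    return Y

def table_distance(Y1, Y2):
    """Hamming distance between two symmetry tables."""
    return int(np.sum(Y1 != Y2))

Y_a = symmetry_table(4, [(0, 1), (2, 3)])   # toy 4-word vocabulary
Y_b = symmetry_table(4, [(0, 1)])
print(table_distance(Y_a, Y_b))             # cells (2,3) and (3,2) differ -> 2
```

Because the tables are binary, this comparison is cheap even for large vocabularies.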
76 SYM-FISH descriptor in sketch retrieval: Searching real-life images using a sketch query is not an easy task. [sent-291, score-1.574]
77 For example, sketch images convey information mostly by edges while real life images always have rich texture. [sent-293, score-0.703]
78 The SYM-FISH is used in the sketch retrieval task by re-ranking the original ranking list. [sent-294, score-0.742]
79 For the SYM-FISH reranking, we first extract the symmetry table for all the images in the repository. [sent-296, score-0.555]
80 Then the original list is reordered by the distances between symmetry tables. [sent-297, score-0.555]
81 SYM-FISH descriptor in sketch classification: In the traditional classification approach [15], the chi-square distance is usually used to compute the similarity between different images, while the symmetry character of the sketches is not considered. [sent-320, score-1.448]
82 In our approach, we combine the distance between visual-word representations and the similarity of symmetry tables. [sent-322, score-0.622]
83 Formally, we have: D(Ii, Ij) = χ2(Ii, Ij) + λ ∗ ST(Ii, Ij) (6), where χ2(Ii, Ij) computes the chi-square distance between different images, and ST(Ii, Ij) is the similarity of symmetry tables between different images. [sent-323, score-0.575]
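Eq. (6) might be implemented along these lines. The paper only states that ST is a similarity of symmetry tables; here we use the normalized Hamming disagreement of the tables (a distance, so both terms pull the same way) and an illustrative λ, both of which are assumptions:

```python
import numpy as np

def chi2(h1, h2, eps=1e-10):
    """Chi-square distance between two BoW histograms."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def combined_distance(bow_i, bow_j, Y_i, Y_j, lam=0.1):
    """Eq. (6)-style distance: chi-square term on the BoW histograms
    plus a weighted symmetry-table term (normalized Hamming
    disagreement, an assumed stand-in for ST)."""
    st = float(np.mean(Y_i != Y_j))   # fraction of disagreeing cells
    return chi2(bow_i, bow_j) + lam * st

bow = np.array([3., 1., 0., 2.])
Y = np.eye(4, dtype=np.uint8)
print(combined_distance(bow, bow, Y, Y))    # identical inputs -> 0.0
```

In the classification setting this D(Ii, Ij) would replace the plain chi-square distance when comparing a test sketch against the training examples.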
84 Experiments We evaluate our proposed descriptors FISH and SYM-FISH on two applications: sketch retrieval and sketch classification. [sent-329, score-1.427]
85 In the sketch retrieval experiment, we first check the performance of FISH and SYM-FISH on ETH shape database [8]. [sent-330, score-0.817]
86 The further validation of these two descriptors is on a large scale sketch retrieval dataset [17]. [sent-331, score-0.809]
87 In the sketch classification experiment, we test the performance of proposed descriptors on sketch classification dataset [15]. [sent-332, score-1.335]
88 The ETH dataset provides one representative sketch image for each class, which is used as the input query in our experiment. [sent-341, score-0.689]
89 A possible explanation is that the proposed SYM-FISH descriptor becomes more robust because of the novel encoding approach and can better handle the flip situation. [sent-348, score-0.32]
90 Moreover, the introduced symmetry property makes the representation more discriminative. [sent-349, score-0.571]
91 The benchmark dataset contains 31 benchmark sketches as well as 40 corresponding images for each sketch, while the distractor image dataset contains 100,000 Creative Commons images. [sent-361, score-0.787]
92 In this section, we test the effectiveness of the proposed SYM-FISH on the sketch classification task. [sent-380, score-0.653]
93 Each sketch class contains about 80 images with different styles. [sent-382, score-0.615]
94 To train the sketch model, we randomly divide the dataset into two subsets: 58 images from each category are randomly selected as the training set and the remaining images are used as the testing set. [sent-383, score-0.65]
95 The first column is the query sketch image, while the remaining columns correspond to the retrieved real life images. [sent-390, score-0.762]
96 The reasons can be summarized as follows: firstly, there exist many flip situations in the dataset (Table 3). [sent-407, score-0.259]
97 Secondly, the symmetry table can better preserve the symmetry properties of the sketches, which both decreases the intra-category distances and increases the inter-category distances. [sent-411, score-1.198]
98 Conclusion and Future Work: In this paper, we propose a novel shape descriptor, SYM-FISH, which can handle flip changes and encode the image's symmetry property. [sent-413, score-0.485]
99 We thoroughly analyze its characteristics on two applications: sketch retrieval and classification. [sent-415, score-0.729]
100 Although we only validate the effectiveness of the descriptor on sketch retrieval and classification tasks in this paper, we believe that it can also be used in other tasks, such as sketch detection. [sent-417, score-1.488]
simIndex simValue paperId paperTitle
same-paper 1 1.0000007 368 iccv-2013-SYM-FISH: A Symmetry-Aware Flip Invariant Sketch Histogram Shape Descriptor
Author: Xiaochun Cao, Hua Zhang, Si Liu, Xiaojie Guo, Liang Lin
2 0.37130237 3 iccv-2013-3D Sub-query Expansion for Improving Sketch-Based Multi-view Image Retrieval
Author: Yen-Liang Lin, Cheng-Yu Huang, Hao-Jeng Wang, Winston Hsu
Abstract: We propose a 3D sub-query expansion approach for boosting sketch-based multi-view image retrieval. The core idea of our method is to automatically convert two (guided) 2D sketches into an approximated 3D sketch model, and then generate multi-view sketches as expanded sub-queries to improve the retrieval performance. To learn the weights among synthesized views (sub-queries), we present a new multi-query feature to model the similarity between subqueries and dataset images, and formulate it into a convex optimization problem. Our approach shows superior performance compared with the state-of-the-art approach on a public multi-view image dataset. Moreover, we also conduct sensitivity tests to analyze the parameters of our approach based on the gathered user sketches.
3 0.2008058 110 iccv-2013-Detecting Curved Symmetric Parts Using a Deformable Disc Model
Author: Tom Sie Ho Lee, Sanja Fidler, Sven Dickinson
Abstract: Symmetry is a powerful shape regularity that’s been exploited by perceptual grouping researchers in both human and computer vision to recover part structure from an image without a priori knowledge of scene content. Drawing on the concept of a medial axis, defined as the locus of centers of maximal inscribed discs that sweep out a symmetric part, we model part recovery as the search for a sequence of deformable maximal inscribed disc hypotheses generated from a multiscale superpixel segmentation, a framework proposed by [13]. However, we learn affinities between adjacent superpixels in a space that’s invariant to bending and tapering along the symmetry axis, enabling us to capture a wider class of symmetric parts. Moreover, we introduce a global cost that perceptually integrates the hypothesis space by combining a pairwise and a higher-level smoothing term, which we minimize globally using dynamic programming. The new framework is demonstrated on two datasets, and is shown to significantly outperform the baseline [13].
4 0.14990091 95 iccv-2013-Cosegmentation and Cosketch by Unsupervised Learning
Author: Jifeng Dai, Ying Nian Wu, Jie Zhou, Song-Chun Zhu
Abstract: Cosegmentation refers to theproblem ofsegmenting multiple images simultaneously by exploiting the similarities between the foreground and background regions in these images. The key issue in cosegmentation is to align common objects between these images. To address this issue, we propose an unsupervised learning framework for cosegmentation, by coupling cosegmentation with what we call “cosketch ”. The goal of cosketch is to automatically discover a codebook of deformable shape templates shared by the input images. These shape templates capture distinct image patterns and each template is matched to similar image patches in different images. Thus the cosketch of the images helps to align foreground objects, thereby providing crucial information for cosegmentation. We present a statistical model whose energy function couples cosketch and cosegmentation. We then present an unsupervised learning algorithm that performs cosketch and cosegmentation by energy minimization. Experiments show that our method outperforms state of the art methods for cosegmentation on the challenging MSRC and iCoseg datasets. We also illustrate our method on a new dataset called Coseg-Rep where cosegmentation can be performed within a single image with repetitive patterns.
5 0.095560551 353 iccv-2013-Revisiting the PnP Problem: A Fast, General and Optimal Solution
Author: Yinqiang Zheng, Yubin Kuang, Shigeki Sugimoto, Kalle Åström, Masatoshi Okutomi
Abstract: In this paper, we revisit the classical perspective-n-point (PnP) problem, and propose the first non-iterative O(n) solution that is fast, generally applicable and globally optimal. Our basic idea is to formulate the PnP problem into a functional minimization problem and retrieve all its stationary points by using the Gr¨ obner basis technique. The novelty lies in a non-unit quaternion representation to parameterize the rotation and a simple but elegant formulation of the PnP problem into an unconstrained optimization problem. Interestingly, the polynomial system arising from its first-order optimality condition assumes two-fold symmetry, a nice property that can be utilized to improve speed and numerical stability of a Gr¨ obner basis solver. Experiment results have demonstrated that, in terms of accuracy, our proposed solution is definitely better than the state-ofthe-art O(n) methods, and even comparable with the reprojection error minimization method.
6 0.081340723 210 iccv-2013-Image Retrieval Using Textual Cues
7 0.074691482 266 iccv-2013-Mining Multiple Queries for Image Retrieval: On-the-Fly Learning of an Object-Specific Mid-level Representation
8 0.069033064 378 iccv-2013-Semantic-Aware Co-indexing for Image Retrieval
9 0.06606207 205 iccv-2013-Human Re-identification by Matching Compositional Template with Cluster Sampling
10 0.065981559 404 iccv-2013-Structured Forests for Fast Edge Detection
11 0.058988098 10 iccv-2013-A Framework for Shape Analysis via Hilbert Space Embedding
12 0.058441352 294 iccv-2013-Offline Mobile Instance Retrieval with a Small Memory Footprint
13 0.058249161 191 iccv-2013-Handling Uncertain Tags in Visual Recognition
14 0.056828454 90 iccv-2013-Content-Aware Rotation
15 0.055337302 337 iccv-2013-Random Grids: Fast Approximate Nearest Neighbors and Range Searching for Image Search
16 0.051617742 400 iccv-2013-Stable Hyper-pooling and Query Expansion for Event Detection
17 0.050734673 334 iccv-2013-Query-Adaptive Asymmetrical Dissimilarities for Visual Object Retrieval
18 0.049415793 444 iccv-2013-Viewing Real-World Faces in 3D
19 0.048447493 333 iccv-2013-Quantize and Conquer: A Dimensionality-Recursive Solution to Clustering, Vector Quantization, and Image Retrieval
20 0.04841071 345 iccv-2013-Recognizing Text with Perspective Distortion in Natural Scenes
simIndex simValue paperId paperTitle
same-paper 1 0.93051809 368 iccv-2013-SYM-FISH: A Symmetry-Aware Flip Invariant Sketch Histogram Shape Descriptor
Author: Xiaochun Cao, Hua Zhang, Si Liu, Xiaojie Guo, Liang Lin
2 0.81789792 3 iccv-2013-3D Sub-query Expansion for Improving Sketch-Based Multi-view Image Retrieval
Author: Yen-Liang Lin, Cheng-Yu Huang, Hao-Jeng Wang, Winston Hsu
Abstract: We propose a 3D sub-query expansion approach for boosting sketch-based multi-view image retrieval. The core idea of our method is to automatically convert two (guided) 2D sketches into an approximated 3D sketch model, and then generate multi-view sketches as expanded sub-queries to improve the retrieval performance. To learn the weights among synthesized views (sub-queries), we present a new multi-query feature to model the similarity between subqueries and dataset images, and formulate it into a convex optimization problem. Our approach shows superior performance compared with the state-of-the-art approach on a public multi-view image dataset. Moreover, we also conduct sensitivity tests to analyze the parameters of our approach based on the gathered user sketches.
3 0.53529167 446 iccv-2013-Visual Semantic Complex Network for Web Images
Author: Shi Qiu, Xiaogang Wang, Xiaoou Tang
Abstract: This paper proposes modeling the complex web image collections with an automatically generated graph structure called visual semantic complex network (VSCN). The nodes on this complex network are clusters of images with both visual and semantic consistency, called semantic concepts. These nodes are connected based on the visual and semantic correlations. Our VSCN with 33, 240 concepts is generated from a collection of 10 million web images. 1 A great deal of valuable information on the structures of the web image collections can be revealed by exploring the VSCN, such as the small-world behavior, concept community, indegree distribution, hubs, and isolated concepts. It not only helps us better understand the web image collections at a macroscopic level, but also has many important practical applications. This paper presents two application examples: content-based image retrieval and image browsing. Experimental results show that the VSCN leads to significant improvement on both the precision of image retrieval (over 200%) and user experience for image browsing.
4 0.52297658 306 iccv-2013-Paper Doll Parsing: Retrieving Similar Styles to Parse Clothing Items
Author: Kota Yamaguchi, M. Hadi Kiapour, Tamara L. Berg
Abstract: Clothing recognition is an extremely challenging problem due to wide variation in clothing item appearance, layering, and style. In this paper, we tackle the clothing parsing problem using a retrieval based approach. For a query image, we find similar styles from a large database of tagged fashion images and use these examples to parse the query. Our approach combines parsing from: pre-trained global clothing models, local clothing models learned on theflyfrom retrieved examples, and transferredparse masks (paper doll item transfer) from retrieved examples. Experimental evaluation shows that our approach significantly outperforms state of the art in parsing accuracy.
5 0.52081466 90 iccv-2013-Content-Aware Rotation
Author: Kaiming He, Huiwen Chang, Jian Sun
Abstract: We present an image editing tool called Content-Aware Rotation. Casually shot photos can appear tilted, and are often corrected by rotation and cropping. This trivial solution may remove desired content and hurt image integrity. Instead of doing rigid rotation, we propose a warping method that creates the perception of rotation and avoids cropping. Human vision studies suggest that the perception of rotation is mainly due to horizontal/vertical lines. We design an optimization-based method that preserves the rotation of horizontal/vertical lines, maintains the completeness of the image content, and reduces the warping distortion. An efficient algorithm is developed to address the challenging optimization. We demonstrate our content-aware rotation method on a variety of practical cases.
7 0.49469993 334 iccv-2013-Query-Adaptive Asymmetrical Dissimilarities for Visual Object Retrieval
8 0.47886038 95 iccv-2013-Cosegmentation and Cosketch by Unsupervised Learning
9 0.45735407 419 iccv-2013-To Aggregate or Not to aggregate: Selective Match Kernels for Image Search
10 0.45709121 110 iccv-2013-Detecting Curved Symmetric Parts Using a Deformable Disc Model
11 0.44860181 337 iccv-2013-Random Grids: Fast Approximate Nearest Neighbors and Range Searching for Image Search
12 0.44301239 445 iccv-2013-Visual Reranking through Weakly Supervised Multi-graph Learning
13 0.43696412 148 iccv-2013-Example-Based Facade Texture Synthesis
14 0.43056694 10 iccv-2013-A Framework for Shape Analysis via Hilbert Space Embedding
15 0.42770839 235 iccv-2013-Learning Coupled Feature Spaces for Cross-Modal Matching
16 0.42431647 378 iccv-2013-Semantic-Aware Co-indexing for Image Retrieval
17 0.41931626 302 iccv-2013-Optimization Problems for Fast AAM Fitting in-the-Wild
18 0.40754795 294 iccv-2013-Offline Mobile Instance Retrieval with a Small Memory Footprint
19 0.40721747 278 iccv-2013-Multi-scale Topological Features for Hand Posture Representation and Analysis
20 0.40531847 210 iccv-2013-Image Retrieval Using Textual Cues
topicId topicWeight
[(2, 0.15), (4, 0.01), (5, 0.196), (7, 0.023), (8, 0.018), (12, 0.016), (26, 0.057), (31, 0.035), (34, 0.016), (42, 0.103), (48, 0.01), (58, 0.013), (64, 0.029), (73, 0.024), (78, 0.019), (89, 0.166)]
simIndex simValue paperId paperTitle
same-paper 1 0.82279491 368 iccv-2013-SYM-FISH: A Symmetry-Aware Flip Invariant Sketch Histogram Shape Descriptor
Author: Xiaochun Cao, Hua Zhang, Si Liu, Xiaojie Guo, Liang Lin
Abstract: Recently, studies on sketch, such as sketch retrieval and sketch classification, have received more attention in the computer vision community. One of its most fundamental and essential problems is how to more effectively describe a sketch image. Many existing descriptors, such as shape context, have achieved great success. In this paper, we propose a new descriptor, namely Symmetry-aware Flip Invariant Sketch Histogram (SYM-FISH) to refine the shape context feature. Its extraction process includes three steps. First the Flip Invariant Sketch Histogram (FISH) descriptor is extracted on the input image, which is a flip-invariant version of the shape context feature. Then we explore the symmetry character of the image by calculating the kurtosis coefficient. Finally, the SYM-FISH is generated by constructing a symmetry table. The new SYM-FISH descriptor supplements the original shape context by encoding the symmetric information, which is a pervasive characteristic of natural scene and objects. We evaluate the efficacy of the novel descriptor in two applications, i.e., sketch retrieval and sketch classification. Extensive experiments on three datasets well demonstrate the effectiveness and robustness of the proposed SYM-FISH descriptor.
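The kurtosis coefficient used in the symmetry-analysis step is the standard fourth standardized moment; a minimal sketch on made-up numbers (what quantity SYM-FISH actually feeds into it is not shown here):

```python
def kurtosis(xs):
    """Sample kurtosis (fourth standardized moment, Pearson definition)."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / (var ** 2)

# A symmetric two-point distribution attains the minimum kurtosis of 1.
print(kurtosis([-1, -1, 1, 1]))  # 1.0
```

Low kurtosis indicates a flat, balanced distribution, which is why the statistic is a natural probe for symmetric structure.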
2 0.77899325 374 iccv-2013-Salient Region Detection by UFO: Uniqueness, Focusness and Objectness
Author: Peng Jiang, Haibin Ling, Jingyi Yu, Jingliang Peng
Abstract: The goal of saliency detection is to locate important pixels or regions in an image which attract humans' visual attention the most. This is a fundamental task whose output may serve as the basis for further computer vision tasks like segmentation, resizing, tracking and so forth. In this paper we propose a novel salient region detection algorithm by integrating three important visual cues namely uniqueness, focusness and objectness (UFO). In particular, uniqueness captures the appearance-derived visual contrast; focusness reflects the fact that salient regions are often photographed in focus; and objectness helps keep completeness of detected salient regions. While uniqueness has been used for saliency detection for long, it is new to integrate focusness and objectness for this purpose. In fact, focusness and objectness both provide important saliency information complementary of uniqueness. In our experiments using public benchmark datasets, we show that, even with a simple pixel level combination of the three components, the proposed approach yields significant improvement compared with previously reported methods.
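The "simple pixel level combination" of the three cues can be sketched as element-wise fusion of normalized maps; the product rule and the toy 2x2 maps below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def combine_cues(uniqueness, focusness, objectness):
    """Pixel-wise product of min-max normalized cue maps."""
    def norm(m):
        m = m.astype(float)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else m
    return norm(uniqueness) * norm(focusness) * norm(objectness)

# Toy 2x2 cue maps; the top-right pixel scores high on all three cues.
u = np.array([[0.2, 0.9], [0.1, 0.8]])
f = np.array([[0.3, 0.8], [0.2, 0.9]])
o = np.array([[0.1, 1.0], [0.0, 0.9]])
saliency = combine_cues(u, f, o)
print(int(saliency.argmax()))  # 1  (flat index of the top-right pixel)
```

A multiplicative fusion enforces agreement: a pixel is salient only if it is unique, in focus, and object-like.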
3 0.77863628 191 iccv-2013-Handling Uncertain Tags in Visual Recognition
Author: Arash Vahdat, Greg Mori
Abstract: Gathering accurate training data for recognizing a set of attributes or tags on images or videos is a challenge. Obtaining labels via manual effort or from weakly-supervised data typically results in noisy training labels. We develop the FlipSVM, a novel algorithm for handling these noisy, structured labels. The FlipSVM models label noise by “flipping ” labels on training examples. We show empirically that the FlipSVM is effective on images-and-attributes and video tagging datasets.
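The label-flipping noise process that FlipSVM models can be simulated directly; this sketch only generates noisy binary labels and does not implement the FlipSVM learner itself:

```python
import random

def flip_labels(labels, flip_prob, seed=0):
    """Flip each label in {+1, -1} independently with probability flip_prob.

    Simulates the noise model; flip_prob and the labels are hypothetical.
    """
    rng = random.Random(seed)
    return [-y if rng.random() < flip_prob else y for y in labels]

clean = [1, 1, -1, -1, 1, -1]
noisy = flip_labels(clean, 0.3)
print(sum(a != b for a, b in zip(clean, noisy)))
```

Learning which training labels to flip, jointly with the classifier, is the core of the FlipSVM formulation.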
4 0.77827668 239 iccv-2013-Learning Hash Codes with Listwise Supervision
Author: Jun Wang, Wei Liu, Andy X. Sun, Yu-Gang Jiang
Abstract: Hashing techniques have been intensively investigated in the design of highly efficient search engines for large-scale computer vision applications. Compared with prior approximate nearest neighbor search approaches like tree-based indexing, hashing-based search schemes have prominent advantages in terms of both storage and computational efficiencies. Moreover, the procedure of devising hash functions can be easily incorporated into sophisticated machine learning tools, leading to data-dependent and task-specific compact hash codes. Therefore, a number of learning paradigms, ranging from unsupervised to supervised, have been applied to compose appropriate hash functions. However, most of the existing hash function learning methods either treat hash function design as a classification problem or generate binary codes to satisfy pairwise supervision, and have not yet directly optimized the search accuracy. In this paper, we propose to leverage listwise supervision into a principled hash function learning framework. In particular, the ranking information is represented by a set of rank triplets that can be used to assess the quality of ranking. Simple linear projection-based hash functions are solved efficiently through maximizing the ranking quality over the training data. We carry out experiments on large image datasets with size up to one million and compare with the state-of-the-art hashing techniques. The extensive results corroborate that our learned hash codes via listwise supervision can provide superior search accuracy without incurring heavy computational overhead.
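The rank triplets used to assess ranking quality can be sketched as follows; the scoring function, query and item names, and values are all hypothetical:

```python
def triplet_accuracy(scores, triplets):
    """Fraction of rank triplets (q, a, b) satisfied by a score table.

    A triplet (q, a, b) encodes that item a should rank above item b
    for query q; scores[q][i] is the similarity of item i to query q.
    """
    satisfied = sum(scores[q][a] > scores[q][b] for q, a, b in triplets)
    return satisfied / len(triplets)

scores = {"q1": {"a": 0.9, "b": 0.4, "c": 0.7}}
triplets = [("q1", "a", "b"), ("q1", "c", "b"), ("q1", "b", "c")]
print(triplet_accuracy(scores, triplets))  # 2 of 3 satisfied
```

In the paper's framework, the scores would come from Hamming distances between learned linear-projection hash codes, and this ranking quality is what the training objective maximizes.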
5 0.77692127 446 iccv-2013-Visual Semantic Complex Network for Web Images
Author: Shi Qiu, Xiaogang Wang, Xiaoou Tang
Abstract: This paper proposes modeling the complex web image collections with an automatically generated graph structure called visual semantic complex network (VSCN). The nodes on this complex network are clusters of images with both visual and semantic consistency, called semantic concepts. These nodes are connected based on the visual and semantic correlations. Our VSCN with 33,240 concepts is generated from a collection of 10 million web images. A great deal of valuable information on the structures of the web image collections can be revealed by exploring the VSCN, such as the small-world behavior, concept community, indegree distribution, hubs, and isolated concepts. It not only helps us better understand the web image collections at a macroscopic level, but also has many important practical applications. This paper presents two application examples: content-based image retrieval and image browsing. Experimental results show that the VSCN leads to significant improvement on both the precision of image retrieval (over 200%) and user experience for image browsing.
6 0.77416348 244 iccv-2013-Learning View-Invariant Sparse Representations for Cross-View Action Recognition
7 0.77309668 229 iccv-2013-Large-Scale Video Hashing via Structure Learning
8 0.77138448 214 iccv-2013-Improving Graph Matching via Density Maximization
9 0.77034509 83 iccv-2013-Complementary Projection Hashing
10 0.76873654 322 iccv-2013-Pose Estimation and Segmentation of People in 3D Movies
11 0.76847684 248 iccv-2013-Learning to Rank Using Privileged Information
12 0.76832104 352 iccv-2013-Revisiting Example Dependent Cost-Sensitive Learning with Decision Trees
13 0.76759851 153 iccv-2013-Face Recognition Using Face Patch Networks
14 0.76516557 409 iccv-2013-Supervised Binary Hash Code Learning with Jensen Shannon Divergence
15 0.76247227 294 iccv-2013-Offline Mobile Instance Retrieval with a Small Memory Footprint
16 0.76093763 313 iccv-2013-Person Re-identification by Salience Matching
17 0.76056147 197 iccv-2013-Hierarchical Joint Max-Margin Learning of Mid and Top Level Representations for Visual Recognition
18 0.75951588 384 iccv-2013-Semi-supervised Robust Dictionary Learning via Efficient l-Norms Minimization
19 0.75941312 448 iccv-2013-Weakly Supervised Learning of Image Partitioning Using Decision Trees with Structured Split Criteria
20 0.75886846 443 iccv-2013-Video Synopsis by Heterogeneous Multi-source Correlation