iccv iccv2013 iccv2013-378 knowledge-graph by maker-knowledge-mining

378 iccv-2013-Semantic-Aware Co-indexing for Image Retrieval


Source: pdf

Author: Shiliang Zhang, Ming Yang, Xiaoyu Wang, Yuanqing Lin, Qi Tian

Abstract: Inverted indexes in image retrieval not only allow fast access to database images but also summarize all knowledge about the database, so that their discriminative capacity largely determines the retrieval performance. In this paper, for vocabulary tree based image retrieval, we propose a semantic-aware co-indexing algorithm to jointly embed two strong cues into the inverted indexes: 1) local invariant features that are robust to delineate low-level image contents, and 2) semantic attributes from large-scale object recognition that may reveal image semantic meanings. For an initial set of inverted indexes of local features, we utilize 1000 semantic attributes to filter out isolated images and insert semantically similar images into the initial set. Encoding these two distinct cues together effectively enhances the discriminative capability of inverted indexes. Such co-indexing operations are totally off-line and introduce small computation overhead to online query, because only local features but no semantic attributes are used for query. Experiments and comparisons with recent retrieval methods on 3 datasets, i.e., UKbench, Holidays, and Oxford5K, with 1.3 million images from Flickr as distractors, manifest the competitive performance of our method.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract Inverted indexes in image retrieval not only allow fast access to database images but also summarize all knowledge about the database, so that their discriminative capacity largely determines the retrieval performance. [sent-6, score-0.975]

2 We jointly embed two strong cues into the inverted indexes: 1) local invariant features that are robust to delineate low-level image contents, and 2) semantic attributes from large-scale object recognition that may reveal image semantic meanings. [sent-11, score-1.522]

3 For an initial set of inverted indexes of local features, we utilize 1000 semantic attributes to filter out isolated images and insert semantically similar images into the initial set. [sent-12, score-1.641]

4 Encoding these two distinct cues together effectively enhances the discriminative capability of inverted indexes. [sent-13, score-0.402]

5 Such co-indexing operations are totally off-line and introduce small computation overhead to online query, because only local features but no semantic attributes are used for query. [sent-14, score-0.963]

6 This leads to one fundamental challenge for content-based image retrieval: retrieval algorithms have no clue which subset of the “thousand words” in a query a user is searching for. [sent-21, score-0.505]

7 A sample query from the Holidays dataset: retrieval using a vocabulary tree of local features (first row); retrieval using 1000 semantic attributes (second row); retrieval based on co-indexing of both local features and semantic attributes (third row). [sent-30, score-2.607]

8 Retrieval may mean finding the same or near-duplicate images [14] by identifying similar local features [13]; finding similar images [20] by comparing hashing codes [21] of global features like GIST [15]; or retrieving objects of the same category by classifying images into multiple classes or attributes [5, 22, 2, 25]. [sent-31, score-0.562]

9 I.e., returning near-duplicates to a query if present in the database, or otherwise similar images related to relevant semantic concepts. [sent-34, score-0.556]

10 Different lines of retrieval methods tightly couple their search criteria with dramatically different image representations and indexing strategies. [sent-35, score-0.434]

11 Compact hashing codes for global features [21] or semantic attributes from object recognition [2, 25] are efficient for similar image search. [sent-39, score-0.63]

12 On the other hand, fusing retrieval results [4] requires online extraction of multiple image feature sets and storage for their respective indexes, which is costly in practice. [sent-41, score-0.389]

13 These challenges leave the use of multiple search criteria in one retrieval method rarely explored in the literature. [sent-42, score-0.349]

14 I.e., image similarities based on local features and semantic attributes are embedded into the inverted indexes. [sent-45, score-0.769]

15 Then the retrieval not only searches for candidate images sharing similar local features but also encourages consensus in their semantic similarities as shown in Fig. [sent-46, score-0.735]

16 Towards these ends, we present a semantic-aware co-indexing algorithm which leverages semantic attributes from advanced object recognition to update the inverted indexes of local features quantized by a large vocabulary tree. [sent-48, score-1.45]

17 During online retrieval, we conduct conventional vocabulary tree based retrieval using only the local features in a query and do NOT compute the semantic attributes. [sent-52, score-1.120]

18 Nevertheless, the retrieval implicitly promotes candidates that potentially have attributes similar to the query's, because the updated indexes are semantic-aware. [sent-53, score-1.051]

19 In this paper, we discover that editing the inverted index of a single local feature with multi-class classification scores effectively enhances its discriminative ability. [sent-54, score-0.517]

20 This is because the co-indexing jointly considers strong cues to low-level image contents and their semantic meanings, respectively. [sent-55, score-0.319]

21 The online query remains as efficient as before since only local features are extracted. [sent-56, score-0.319]

22 Meanwhile, we keep the memory cost of the deletion and insertion of images on the indexes acceptable. [sent-57, score-0.514]

23 Large-scale object recognition and near-duplicate image search largely remain independent efforts due to different focuses on recognition accuracy and retrieval scalability. [sent-62, score-0.336]

24 Existing retrieval methods using multiple cues all extract multiple features online for a query. [sent-64, score-0.386]

25 Related Work This work focuses on improving near-duplicate image retrieval by co-indexing object recognition outcomes, which is closely related to vocabulary tree based image retrieval, learning semantic attributes, and how to incorporate two cues in retrieval. [sent-67, score-0.769]

26 The outcomes of these multi-class classifiers, often referred to as semantic attributes [6, 5, 22], present a strong cue for finding similar videos [8], faces [11], or images by hierarchical indexing [2] or mid-level weak attributes [25]. [sent-72, score-1.013]

27 Consequently, it is unaffordable to take advantage of the semantic attributes directly in near-duplicate retrieval, either in early fusion of the features [7], or late fusion of the retrieval results [4]. [sent-74, score-0.881]

28 Thus, near-duplicate image retrieval [14, 17, 9, 26, 28, 27, 18] using local features and similar image retrieval with attributes [20, 11, 2, 25] remain largely two separate lines of research. [sent-75, score-0.947]

29 We co-index semantic attributes into the inverted indexes of local features. [sent-79, score-1.279]

30 The semantic attributes are computed off-line for database images but not for online query images. [sent-80, score-0.935]

31 Furthermore, we learn the recognition models on totally independent datasets and do not assume query or database images are relevant to any of the object categories. [sent-81, score-0.32]

32 Proposed Approach: Image retrieval using vocabulary trees and object recognition are two cornerstones of the proposed semantic-aware co-indexing, which are described in Sec. [sent-84, score-0.454]

33 Then we present how to off-line co-index semantic attributes among database images (Sec. [sent-89, score-0.684]

34 Image retrieval with vocabulary trees: We employ the vocabulary tree based approach [14] as the baseline. [sent-98, score-0.669]

35 Denote by q a query image and by d a database image; q is represented by a bag Sq of local descriptors {xi}i∈Sq, where xi ∈ R^D denotes a SIFT descriptor [13] of dimension D = 128, and likewise {xj}j∈Sd for d. [sent-99, score-0.344]

36 Note that the IDs of the images in which a visual word v appears, along with the TFs of v in them, are stored in the inverted index of v for fast access, denoted by I(v). [sent-118, score-0.519]
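
To make the index structure concrete, here is a minimal sketch of building such inverted indexes. The `quantize` function is a hypothetical stand-in for traversing the vocabulary tree from a descriptor down to a leaf visual word; the (image id, TF) posting format follows the description above.

```python
from collections import Counter, defaultdict

def build_inverted_index(image_descriptors, quantize):
    """image_descriptors: {image_id: [descriptor, ...]};
    quantize(x) maps a local descriptor to a leaf visual word."""
    index = defaultdict(list)  # visual word v -> I(v) = [(image_id, tf), ...]
    for image_id, descriptors in image_descriptors.items():
        tf = Counter(quantize(x) for x in descriptors)  # TF of each word in this image
        for v, count in tf.items():
            index[v].append((image_id, count))
    return index
```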

37 Semantic attributes from object recognition: We follow the Bag-of-Words (BoW) paradigm to learn C = 1000 object category classifiers from the training images in the LSVRC [1], a subset of the ImageNet dataset. [sent-120, score-0.358]

38 In fact, our testing query and database images are independent of the ImageNet dataset, hence it is likely that an image is relevant to none or to multiple of these 1000 attribute categories. [sent-127, score-0.332]
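
For concreteness, a hypothetical scoring step: with C one-vs-all linear classifiers over BoW histograms (the classifier form is an assumption, not stated in these excerpts), the raw attribute responses of an image reduce to a single matrix-vector product.

```python
import numpy as np

def attribute_scores(bow_hist, W, b):
    """W: (C, V) classifier weights, b: (C,) biases, bow_hist: (V,) BoW
    histogram. Returns the C = 1000 raw responses that are later turned
    into a partial probability vector."""
    return W @ np.asarray(bow_hist, dtype=float) + np.asarray(b, dtype=float)
```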

39 Semantic-aware off-line co-indexing: The inverted indexes from visual words to images and their TFs summarize all knowledge about the database in the vocabulary tree based method [14]. [sent-132, score-1.032]

40 Though hundreds of local descriptors are capable of finding the near-duplicates to a query via the inverted indexes, the discriminative capacity of a single local descriptor is limited due to two reasons. [sent-133, score-0.676]

41 These motivate us to explore how to embed extra discriminative clues into the individual indexes of local descriptors. [sent-139, score-0.388]

42 We propose semantic-aware co-indexing to address the two issues by off-line updating the inverted indexes with the image similarities induced from the semantic attributes. [sent-140, score-0.975]

43 The attributes obtained by multi-class recognition may reveal an image's rough high-level semantic contents, which are often complementary to the low-level descriptors. [sent-141, score-0.568]

44 Distance of semantic attributes: Let us first define how to measure the distance between the semantic attributes given in Sec. [sent-147, score-1.136]

45 2 before proceeding to the specific schemes to alter the inverted indexes. [sent-149, score-0.402]

46 The 1000 categories may not cover one image's semantic contents, or one image may be related to multiple categories, so we regard it as a partial probability distribution. [sent-156, score-0.357]
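
As an illustration only (these excerpts do not spell out the exact normalization), one plausible way to turn the C = 1000 classifier responses into such a partial probability vector is to keep only the strongest responses and L1-normalize them; `top_t` is a hypothetical truncation parameter.

```python
import numpy as np

def partial_probability(scores, top_t=50):
    """Zero all but the top_t responses, then L1-normalize, so the vector
    acts as a probability distribution over the subset of categories the
    image is plausibly related to."""
    p = np.maximum(np.asarray(scores, dtype=float), 0.0)
    if top_t < p.size:
        p[np.argsort(p)[:-top_t]] = 0.0  # keep only the top_t responses
    s = p.sum()
    return p / s if s > 0 else p
```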

47 For two images dm and dn, we employ the Total Variance Distance (TVD) to measure the semantic distance between their partial probability vectors: TVD(dm, dn) = (1/2) Σc |pm(c) − pn(c)|. [sent-157, score-0.344]
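
The reconstructed formula above is the standard total variation form; in code it is a one-liner:

```python
import numpy as np

def tvd(p_m, p_n):
    """TVD(dm, dn) = 0.5 * sum_c |pm(c) - pn(c)|; it lies in [0, 1] for
    (partial) probability vectors."""
    return 0.5 * float(np.abs(np.asarray(p_m) - np.asarray(p_n)).sum())
```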

48 Semantic isolated image deletion: Encoding nondiscriminative information [23] into the inverted indexes may not help the retrieval, since it makes the right candidate hard to stand out. [sent-171, score-1.079]

49 An isolated image on an inverted index, whose appearance is quite different from any other image on the same index, contributes less to image retrieval, since it is less likely to help find similar images. [sent-172, score-0.595]

50 Hence, we utilize a semantic isolated image deletion procedure to filter out isolated images from the perspective of semantic attributes, so as to obtain more consistent inverted indexes. [sent-173, score-1.557]

51 The semantic isolated image deletion can effectively reduce the index size without impacting the retrieval precision in our experiments. [sent-183, score-1.011]
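
A minimal sketch of this deletion step, reusing the `tvd` helper above and assuming an image counts as isolated on an entry when its closest attribute distance to any other image on the same entry exceeds a threshold `rho` (the excerpts describe the intent, not the precise criterion):

```python
def delete_isolated(index, attributes, rho):
    """index: {v: [(image_id, tf), ...]}; attributes: {image_id: vector}.
    Drops, per entry, images whose nearest TVD on that entry exceeds rho."""
    for v, postings in index.items():
        if len(postings) < 2:
            continue
        kept = []
        for img, tf in postings:
            nearest = min(tvd(attributes[img], attributes[other])
                          for other, _ in postings if other != img)
            if nearest <= rho:  # semantically consistent with this entry
                kept.append((img, tf))
        index[v] = kept
    return index
```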

52 Semantic nearest image insertion: After the deletion of isolated images, we take advantage of the attribute feature to insert semantically similar images into the inverted indexes. [sent-186, score-1.155]

53 We compare the attributes of all database images using Eq. [sent-187, score-0.41]

54 If some dk ∈ NK(d) does not appear on the same inverted index, we insert it into the entry of d's inverted index, whose data structure is illustrated in Fig. [sent-191, score-0.844]

55 Namely, dk is inserted into I(v) if dk ∈ NK(d), d ∈ I(v), and dk ∉ I(v). [sent-193, score-0.363]
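
The insertion rule maps directly to code. In this sketch, `knn[d]` is assumed to hold NK(d) ordered by increasing TVD; inserted images go into a separate flagged list per entry, together with the TF of the image d that caused the insertion (used later for the ω-weighted voting).

```python
def insert_neighbors(index, knn):
    """index: {v: [(image_id, tf), ...]}. Returns a co-index
    {v: (postings, inserted)}, where inserted holds (dk, tf_of_d) pairs
    satisfying dk ∈ NK(d), d ∈ I(v), dk ∉ I(v)."""
    co_index = {}
    for v, postings in index.items():
        present = {img for img, _ in postings}
        inserted = []
        for d, d_tf in postings:
            for dk in knn.get(d, ()):
                if dk not in present:  # dk ∉ I(v): insert it once
                    present.add(dk)
                    inserted.append((dk, d_tf))
        co_index[v] = (postings, inserted)
    return co_index
```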

56 Kd = arg max_{k=1:K} ∂²TVD(d, dk)/∂k², (6) where K is the maximum number of semantic nearest neighbor images to check. [sent-199, score-0.419]
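
A discrete reading of Eq. (6): cut the ranked neighbor list where the TVD curve bends the most, approximated here by the largest central second difference (the paper's exact discretization is not recoverable from these excerpts).

```python
def adaptive_kd(tvd_curve):
    """tvd_curve[i] = TVD(d, dk) of the (i+1)-th ranked neighbor, for up
    to K neighbors. Returns Kd, the per-image cut-off count."""
    best_kd, best_curv = 1, float("-inf")
    for i in range(1, len(tvd_curve) - 1):
        curv = tvd_curve[i + 1] - 2.0 * tvd_curve[i] + tvd_curve[i - 1]
        if curv > best_curv:
            best_curv, best_kd = curv, i + 1
    return best_kd
```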

57 Note, it is very likely that dk ∈ NK(d) already appears on the inverted index that includes d, since semantically similar images shall share some similar local features. [sent-200, score-0.765]

58 In Fig. 3, the two images connected by the dashed black arrow are semantic nearest neighbors that are already on the same index. [sent-203, score-0.500]

59 Thus, semantic nearest image insertion does not increase the size of indexes by K times. [sent-204, score-0.792]

60 We will discuss how to further reduce the memory cost of the indexes in Sec. [sent-205, score-0.39]

61 The set of dk inserted into the indexes due to d is referred to as GKNN(d), not including those already in the indexes. [sent-208, score-0.427]

62 Semantic-aware online query: The online query process is almost identical to the conventional vocabulary tree based retrieval, except that we implicitly consider the joint similarity based on the local features and the semantic attributes. [sent-214, score-1.059]

63 sim∗(q, d) = sim(q, d) + Σ_{dg: d ∈ GKNN(dg)} ω · sim(q, dg), (7) where ω is a weighting parameter of the contribution from semantic attributes, and the second term covers the set of images dg such that d is within their K semantic nearest neighbors (note d is dg's neighbor, while dk is d's neighbor). [sent-219, score-0.949]

64 Ideally, the weighting parameter ω shall be determined by the TVD between d and dg or by its rank in NK(dg), but pair-dependent ω values would require extra storage in the inverted indexes. [sent-225, score-0.588]

65 Instead, we only need to scan the image lists attached to the visual words found in the query, as well as the semantic nearest images inserted into the indexes. [sent-230, score-0.492]

66 Algorithm 1: Similarity calculation between a query q and all database images. [sent-234, score-0.435]

67 Input: the inverted indexes I(v) stored as in Fig. [sent-235, score-0.676]
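
A sketch of this calculation over the co-indexed structure from the earlier snippets. The TF-IDF voting weights are one plausible reading of Eq. (7) (the exact weighting is not fully recoverable from these excerpts); note that each inserted image costs exactly one extra multiply-add per matched word, matching the complexity discussion below, and that no attributes are ever computed for the query.

```python
from collections import defaultdict

def score_query(query_tf, co_index, idf, omega):
    """query_tf: {v: TF of v in q}; co_index: {v: (postings, inserted)}
    as built above; idf: {v: inverse document frequency of v}."""
    sim = defaultdict(float)
    for v, q_tf in query_tf.items():
        if v not in co_index:
            continue
        postings, inserted = co_index[v]
        for d, d_tf in postings:              # conventional TF-IDF voting
            sim[d] += q_tf * d_tf * idf[v] ** 2
        for dk, d_tf in inserted:             # omega-weighted votes, Eq. (7)
            sim[dk] += omega * q_tf * d_tf * idf[v] ** 2
    return sim
```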

68 As our semantic-aware co-indexing focuses mainly on off-line indexing, we discuss the memory cost first and then its impact on online query and off-line indexing efficiency. [sent-255, score-0.452]

69 Memory consumption: In the vocabulary tree based retrieval, the total memory cost of the inverted indexes is proportional to the total number of index entries over all database images. [sent-258, score-1.007]

70 In fact, the memory cost shall be O(KM) if we maintain a separate table for the K semantic nearest neighbors of the database images, which is quite marginal. [sent-266, score-0.638]

71 However, the online query demands efficiency as high as possible, therefore we choose to consume the inverted indexes to obtain the database images' [sent-274, score-0.968]

72 semantic nearest neighbors in a streaming manner, rather than jumping to a separate table, which minimizes CPU cache misses. [sent-275, score-0.434]
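
Schematically, the trade-off between the two layouts looks like this (all identifiers are illustrative only):

```python
# (a) Separate neighbor table: O(KM) extra memory, but each match on d17
#     triggers a lookup into another structure (likely CPU cache misses).
neighbor_table = {"d17": ["d42", "d77"], "d23": ["d91"]}
postings_a = [("d17", 3), ("d23", 1)]

# (b) Streaming layout chosen here: inserted neighbors are flagged inline,
#     so one sequential pass over I(v) touches everything it needs.
postings_b = [("d17", 3), ("d42", "ins"), ("d77", "ins"),
              ("d23", 1), ("d91", "ins")]
```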

73 As shown in Fig. 3, the semantic nearest image insertion avoids adding redundant images if they are already present in the inverted index. [sent-281, score-0.957]

74 Thus, our method does not increase the index size if the indexes of local features largely agree with the semantic features. [sent-282, score-0.721]

75 Moreover, the isolated image deletion removes a certain number of images and their associated TFs from the indexes and saves some storage. [sent-283, score-0.704]

76 Computational complexity: The computation of an online query using a vocabulary tree is composed of two parts: 1) the local feature extraction and quantization, and 2) the voting of TF-IDFs along the inverted indexes. [sent-288, score-0.903]

77 For the latter, the co-indexing method roughly requires one additional multiplication and one addition for each inserted semantic nearest image. [sent-290, score-0.382]

78 The search for semantic nearest neighbors in the database also allows for parallel processing. [sent-296, score-0.544]

79 The last three columns show the retrieval performance of directly using the semantic attributes (SA), GIST, and color, respectively. [sent-324, score-0.848]

80 Note, the setting in the Oxford5K does not favor the semantic attributes because all its database images roughly belong to one category, i.e. [sent-326, score-0.684]

81 We employ the L2 distance for the GIST and color features in determining the isolated and nearest images. [sent-345, score-0.334]

82 I.e., directly using the 1000-D attributes to retrieve and rank the candidate images. [sent-354, score-0.320]

83 Performance: We first compare the isolated image deletion and the nearest image insertion against the baselines, respectively, including the sensitivity study of the key parameters. [sent-357, score-0.621]

84 During the deletion of isolated images, we tune the threshold ρ for the three types of features to remove the same ratio Δr of inverted indexes. [sent-360, score-0.812]
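
One simple way to realize this tuning (the excerpts do not describe the exact procedure) is to collect, for every indexed image, its nearest within-entry distance and set ρ at the matching quantile, so each feature type deletes the same fraction Δr:

```python
import numpy as np

def rho_for_ratio(nearest_dists, delta_r):
    """nearest_dists: each indexed image's distance to its closest
    neighbor on the same entry. Images above rho are deleted, so the
    (1 - delta_r) quantile removes roughly a fraction delta_r."""
    return float(np.quantile(np.asarray(nearest_dists), 1.0 - delta_r))
```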

85 As shown in Fig. 5, the retrieval precision remains almost unchanged or even improves slightly when using SA to delete 10%-20% of the inverted indexes. [sent-362, score-0.682]

86 This validates that enforcing the semantic consensus among images on one index effectively saves storage without hurting the retrieval precision. [sent-363, score-0.757]

87 We first test inserting a fixed K = 1 to 4 nearest images into the inverted indexes according to SA, GIST, and Color; the retrieval performance is presented in Fig. [sent-367, score-1.141]

88 Fine-grained attributes particularly for buildings may be more appropriate for landmark search. [sent-382, score-0.32]

89 To present the results concisely, we fix the ratio of isolated image deletion to 20% for T165 + SA and 5% for T107 + SA. [sent-408, score-0.377]

90 The computational overhead of the co-indexing is hardly noticeable on these small-scale datasets, only about 1-3 ms, since the quantization of local features dominates the online computation. [sent-419, score-0.327]

91 On the UKbench, we employ 1/4 of the images of the dataset as the queries and the rest 3/4 in the database for indexing. (Residue of Table 4 omitted: comparison of the proposed method with [17], [9], [27], [24], [18], and [3] on UKbench (N-S) and Holidays (mAP %).) [sent-421, score-0.509]

92 The comparison with recent retrieval methods (without re-ranking and query expansion) is shown in Table 4, which demonstrates that the performance of the proposed co-indexing is very competitive. [sent-450, score-0.458]

93 Attributes are also utilized in [3] for image search, but, differently, it extracts both the attributes and visual features online from the queries. [sent-451, score-0.400]

94 Discussions: The local feature based near-duplicate image retrieval essentially relies on finding a small set of matched local descriptors to retrieve the candidates. [sent-470, score-0.376]

95 I.e., the semantic attributes are co-indexed to enhance the discriminability of each individual local feature's inverted index, resulting in a prominent improvement in the overall discriminative ability of the inverted indexes. [sent-473, score-1.113]

96 It is not a must to have semantic attributes and their K-nearest neighbors available for all database images in co-indexing. [sent-474, score-0.736]

97 Investigation of selectively co-indexing a portion of database images with reliable attributes and approximate nearest neighbor search will be our future work. [sent-475, score-0.549]

98 Conclusions: In this paper, we present a new approach to jointly indexing both local features and semantic attributes for image retrieval. [sent-477, score-0.721]

99 By updating the indexes of local features guided by the semantic features, the proposed retrieval [sent-478, score-0.896]

100 algorithm effectively applies two search criteria to enhance the overall discriminative capability of the inverted indexes, leading to more satisfactory retrieval results to users. [sent-561, score-0.751]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('inverted', 0.402), ('attributes', 0.294), ('retrieval', 0.28), ('semantic', 0.274), ('indexes', 0.274), ('isolated', 0.193), ('deletion', 0.184), ('ukbench', 0.184), ('query', 0.178), ('holidays', 0.167), ('sa', 0.142), ('vocabulary', 0.138), ('insertion', 0.136), ('tvd', 0.127), ('nearest', 0.108), ('dk', 0.105), ('coindexing', 0.102), ('dg', 0.099), ('memory', 0.09), ('indexing', 0.085), ('gist', 0.085), ('index', 0.08), ('database', 0.079), ('tree', 0.077), ('million', 0.076), ('online', 0.073), ('gknn', 0.061), ('sq', 0.056), ('semantically', 0.055), ('kd', 0.055), ('lsvrc', 0.054), ('mmv', 0.054), ('imagenet', 0.052), ('neighbors', 0.052), ('shall', 0.051), ('overhead', 0.05), ('deleted', 0.05), ('inserted', 0.048), ('clue', 0.047), ('sim', 0.047), ('contents', 0.045), ('consume', 0.041), ('syas', 0.041), ('tfq', 0.041), ('insert', 0.04), ('inserting', 0.04), ('criteria', 0.038), ('categories', 0.038), ('nk', 0.038), ('idf', 0.037), ('images', 0.037), ('storage', 0.036), ('bow', 0.036), ('tfd', 0.036), ('ida', 0.036), ('deleting', 0.036), ('tfs', 0.036), ('trees', 0.036), ('local', 0.035), ('im', 0.034), ('millions', 0.034), ('leaf', 0.034), ('quantization', 0.034), ('features', 0.033), ('dm', 0.033), ('conduct', 0.032), ('embed', 0.032), ('deep', 0.032), ('search', 0.031), ('pc', 0.03), ('distractors', 0.029), ('outcomes', 0.029), ('dash', 0.029), ('hashing', 0.029), ('removes', 0.028), ('laboratories', 0.028), ('jumps', 0.028), ('distractor', 0.028), ('cc', 0.027), ('category', 0.027), ('sdm', 0.027), ('antonio', 0.027), ('flickr', 0.026), ('candidate', 0.026), ('descriptors', 0.026), ('nec', 0.026), ('totally', 0.026), ('landmark', 0.026), ('cost', 0.026), ('largely', 0.025), ('candidates', 0.025), ('returning', 0.025), ('scalability', 0.025), ('consensus', 0.025), ('similarities', 0.025), ('shallow', 0.025), ('saves', 0.025), ('words', 0.025), ('dn', 0.025), ('queries', 0.024)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999982 378 iccv-2013-Semantic-Aware Co-indexing for Image Retrieval

Author: Shiliang Zhang, Ming Yang, Xiaoyu Wang, Yuanqing Lin, Qi Tian

Abstract: Inverted indexes in image retrieval not only allow fast access to database images but also summarize all knowledge about the database, so that their discriminative capacity largely determines the retrieval performance. In this paper, for vocabulary tree based image retrieval, we propose a semantic-aware co-indexing algorithm to jointly embed two strong cues into the inverted indexes: 1) local invariant features that are robust to delineate low-level image contents, and 2) semantic attributes from large-scale object recognition that may reveal image semantic meanings. For an initial set of inverted indexes of local features, we utilize 1000 semantic attributes to filter out isolated images and insert semantically similar images into the initial set. Encoding these two distinct cues together effectively enhances the discriminative capability of inverted indexes. Such co-indexing operations are totally off-line and introduce small computation overhead to online query, because only local features but no semantic attributes are used for query. Experiments and comparisons with recent retrieval methods on 3 datasets, i.e., UKbench, Holidays, and Oxford5K, with 1.3 million images from Flickr as distractors, manifest the competitive performance of our method.

2 0.25111279 294 iccv-2013-Offline Mobile Instance Retrieval with a Small Memory Footprint

Author: Jayaguru Panda, Michael S. Brown, C.V. Jawahar

Abstract: Existing mobile image instance retrieval applications assume a network-based usage where image features are sent to a server to query an online visual database. In this scenario, there are no restrictions on the size of the visual database. This paper, however, examines how to perform this same task offline, where the entire visual index must reside on the mobile device itself within a small memory footprint. Such solutions have applications in location recognition and product recognition. Mobile instance retrieval requires a significant reduction in the visual index size. To achieve this, we describe a set of strategies that can reduce the visual index by 60-80× compared to a standard instance retrieval implementation found on desktops or servers. While our proposed reduction steps affect the overall mean Average Precision (mAP), they are able to maintain a good Precision for the top K results (PK). We argue that for such an offline application, maintaining a good PK is sufficient. The effectiveness of this approach is demonstrated on several standard databases. A working application designed for a remote historical site is also presented. This application is able to reduce a 50,000 image index structure to 25 MBs while providing a precision of 97% for P10 and 100% for P1.

3 0.24023029 159 iccv-2013-Fast Neighborhood Graph Search Using Cartesian Concatenation

Author: Jing Wang, Jingdong Wang, Gang Zeng, Rui Gan, Shipeng Li, Baining Guo

Abstract: In this paper, we propose a new data structure for approximate nearest neighbor search. This structure augments the neighborhood graph with a bridge graph. We propose to exploit Cartesian concatenation to produce a large set of vectors, called bridge vectors, from several small sets of subvectors. Each bridge vector is connected with a few reference vectors near to it, forming a bridge graph. Our approach finds nearest neighbors by simultaneously traversing the neighborhood graph and the bridge graph in the best-first strategy. The success of our approach stems from two factors: the exact nearest neighbor search over a large number of bridge vectors can be done quickly, and the reference vectors connected to a bridge (reference) vector near the query are also likely to be near the query. Experimental results on searching over large scale datasets (SIFT, GIST and HOG) show that our approach outperforms state-of-the-art ANN search algorithms in terms of efficiency and accuracy. The combination of our approach with the IVFADC system [18] also shows superior performance over the BIGANN dataset of 1 billion SIFT features compared with the best previously published result.

4 0.23511782 266 iccv-2013-Mining Multiple Queries for Image Retrieval: On-the-Fly Learning of an Object-Specific Mid-level Representation

Author: Basura Fernando, Tinne Tuytelaars

Abstract: In this paper we present a new method for object retrieval starting from multiple query images. The use of multiple queries allows for a more expressive formulation of the query object including, e.g., different viewpoints and/or viewing conditions. This, in turn, leads to more diverse and more accurate retrieval results. When no query images are available to the user, they can easily be retrieved from the internet using a standard image search engine. In particular, we propose a new method based on pattern mining. Using the minimal description length principle, we derive the most suitable set of patterns to describe the query object, with patterns corresponding to local feature configurations. This results in a powerful object-specific mid-level image representation. The archive can then be searched efficiently for similar images based on this representation, using a combination of two inverted file systems. Since the patterns already encode local spatial information, good results on several standard image retrieval datasets are obtained even without costly re-ranking based on geometric verification.

5 0.17935194 210 iccv-2013-Image Retrieval Using Textual Cues

Author: Anand Mishra, Karteek Alahari, C.V. Jawahar

Abstract: We present an approach for the text-to-image retrieval problem based on textual content present in images. Given the recent developments in understanding text in images, an appealing approach to address this problem is to localize and recognize the text, and then query the database, as in a text retrieval problem. We show that such an approach, despite being based on state-of-the-artmethods, is insufficient, and propose a method, where we do not rely on an exact localization and recognition pipeline. We take a query-driven search approach, where we find approximate locations of characters in the text query, and then impose spatial constraints to generate a ranked list of images in the database. The retrieval performance is evaluated on public scene text datasets as well as three large datasets, namely IIIT scene text retrieval, Sports-10K and TV series-1M, we introduce.

6 0.17697626 52 iccv-2013-Attribute Adaptation for Personalized Image Search

7 0.1734778 334 iccv-2013-Query-Adaptive Asymmetrical Dissimilarities for Visual Object Retrieval

8 0.17312066 445 iccv-2013-Visual Reranking through Weakly Supervised Multi-graph Learning

9 0.16844629 333 iccv-2013-Quantize and Conquer: A Dimensionality-Recursive Solution to Clustering, Vector Quantization, and Image Retrieval

10 0.16003498 337 iccv-2013-Random Grids: Fast Approximate Nearest Neighbors and Range Searching for Image Search

11 0.15683553 53 iccv-2013-Attribute Dominance: What Pops Out?

12 0.15227161 31 iccv-2013-A Unified Probabilistic Approach Modeling Relationships between Attributes and Objects

13 0.14554463 221 iccv-2013-Joint Inverted Indexing

14 0.14393097 162 iccv-2013-Fast Subspace Search via Grassmannian Based Hashing

15 0.1369299 446 iccv-2013-Visual Semantic Complex Network for Web Images

16 0.13347907 450 iccv-2013-What is the Most EfficientWay to Select Nearest Neighbor Candidates for Fast Approximate Nearest Neighbor Search?

17 0.13214689 380 iccv-2013-Semantic Transform: Weakly Supervised Semantic Inference for Relating Visual Attributes

18 0.13159598 400 iccv-2013-Stable Hyper-pooling and Query Expansion for Event Detection

19 0.12159805 3 iccv-2013-3D Sub-query Expansion for Improving Sketch-Based Multi-view Image Retrieval

20 0.12096238 107 iccv-2013-Deformable Part Descriptors for Fine-Grained Recognition and Attribute Prediction


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.197), (1, 0.139), (2, -0.075), (3, -0.192), (4, 0.112), (5, 0.183), (6, -0.032), (7, -0.127), (8, -0.032), (9, 0.121), (10, 0.105), (11, 0.042), (12, 0.091), (13, 0.094), (14, -0.025), (15, 0.021), (16, 0.119), (17, -0.15), (18, 0.115), (19, -0.086), (20, -0.067), (21, -0.038), (22, -0.003), (23, 0.023), (24, -0.072), (25, -0.001), (26, 0.062), (27, -0.009), (28, -0.001), (29, 0.056), (30, 0.015), (31, 0.008), (32, -0.008), (33, -0.0), (34, 0.021), (35, 0.019), (36, -0.046), (37, -0.03), (38, -0.007), (39, 0.046), (40, -0.014), (41, -0.02), (42, -0.085), (43, 0.04), (44, -0.008), (45, 0.07), (46, -0.01), (47, -0.049), (48, 0.096), (49, 0.016)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97067434 378 iccv-2013-Semantic-Aware Co-indexing for Image Retrieval

Author: Shiliang Zhang, Ming Yang, Xiaoyu Wang, Yuanqing Lin, Qi Tian

Abstract: Inverted indexes in image retrieval not only allow fast access to database images but also summarize all knowledge about the database, so that their discriminative capacity largely determines the retrieval performance. In this paper, for vocabulary tree based image retrieval, we propose a semantic-aware co-indexing algorithm to jointly embed two strong cues into the inverted indexes: 1) local invariant features that are robust to delineate low-level image contents, and 2) semantic attributes from large-scale object recognition that may reveal image semantic meanings. For an initial set of inverted indexes of local features, we utilize 1000 semantic attributes to filter out isolated images and insert semantically similar images into the initial set. Encoding these two distinct cues together effectively enhances the discriminative capability of inverted indexes. Such co-indexing operations are totally off-line and introduce small computation overhead to online query, because only local features but no semantic attributes are used for query. Experiments and comparisons with recent retrieval methods on 3 datasets, i.e., UKbench, Holidays, and Oxford5K, with 1.3 million images from Flickr as distractors, manifest the competitive performance of our method.

2 0.84820664 294 iccv-2013-Offline Mobile Instance Retrieval with a Small Memory Footprint

Author: Jayaguru Panda, Michael S. Brown, C.V. Jawahar

Abstract: Existing mobile image instance retrieval applications assume a network-based usage where image features are sent to a server to query an online visual database. In this scenario, there are no restrictions on the size of the visual database. This paper, however, examines how to perform this same task offline, where the entire visual index must reside on the mobile device itself within a small memory footprint. Such solutions have applications in location recognition and product recognition. Mobile instance retrieval requires a significant reduction in the visual index size. To achieve this, we describe a set of strategies that can reduce the visual index by 60-80× compared to a standard instance retrieval implementation found on desktops or servers. While our proposed reduction steps affect the overall mean Average Precision (mAP), they are able to maintain a good Precision for the top K results (PK). We argue that for such an offline application, maintaining a good PK is sufficient. The effectiveness of this approach is demonstrated on several standard databases. A working application designed for a remote historical site is also presented. This application is able to reduce a 50,000 image index structure to 25 MBs while providing a precision of 97% for P10 and 100% for P1.

3 0.82814693 334 iccv-2013-Query-Adaptive Asymmetrical Dissimilarities for Visual Object Retrieval

Author: Cai-Zhi Zhu, Hervé Jégou, Shin'Ichi Satoh

Abstract: Visual object retrieval aims at retrieving, from a collection of images, all those in which a given query object appears. It is inherently asymmetric: the query object is mostly included in the database image, while the converse is not necessarily true. However, existing approaches mostly compare the images with symmetrical measures, without considering the different roles of query and database. This paper first measure the extent of asymmetry on large-scale public datasets reflecting this task. Considering the standard bag-of-words representation, we then propose new asymmetrical dissimilarities accounting for the different inlier ratios associated with query and database images. These asymmetrical measures depend on the query, yet they are compatible with an inverted file structure, without noticeably impacting search efficiency. Our experiments show the benefit of our approach, and show that the visual object retrieval task is better treated asymmetrically, in the spirit of state-of-the-art text retrieval.

4 0.80167699 266 iccv-2013-Mining Multiple Queries for Image Retrieval: On-the-Fly Learning of an Object-Specific Mid-level Representation

Author: Basura Fernando, Tinne Tuytelaars

Abstract: In this paper we present a new method for object retrieval starting from multiple query images. The use of multiple queries allows for a more expressive formulation of the query object including, e.g., different viewpoints and/or viewing conditions. This, in turn, leads to more diverse and more accurate retrieval results. When no query images are available to the user, they can easily be retrieved from the internet using a standard image search engine. In particular, we propose a new method based on pattern mining. Using the minimal description length principle, we derive the most suitable set of patterns to describe the query object, with patterns corresponding to local feature configurations. This results in a powerful object-specific mid-level image representation. The archive can then be searched efficiently for similar images based on this representation, using a combination of two inverted file systems. Since the patterns already encode local spatial information, good results on several standard image retrieval datasets are obtained even without costly re-ranking based on geometric verification.

5 0.78436631 446 iccv-2013-Visual Semantic Complex Network for Web Images

Author: Shi Qiu, Xiaogang Wang, Xiaoou Tang

Abstract: This paper proposes modeling the complex web image collections with an automatically generated graph structure called visual semantic complex network (VSCN). The nodes on this complex network are clusters of images with both visual and semantic consistency, called semantic concepts. These nodes are connected based on the visual and semantic correlations. Our VSCN with 33,240 concepts is generated from a collection of 10 million web images. A great deal of valuable information on the structures of the web image collections can be revealed by exploring the VSCN, such as the small-world behavior, concept community, in-degree distribution, hubs, and isolated concepts. It not only helps us better understand the web image collections at a macroscopic level, but also has many important practical applications. This paper presents two application examples: content-based image retrieval and image browsing. Experimental results show that the VSCN leads to significant improvement on both the precision of image retrieval (over 200%) and user experience for image browsing.

6 0.77103591 419 iccv-2013-To Aggregate or Not to aggregate: Selective Match Kernels for Image Search

7 0.76209098 221 iccv-2013-Joint Inverted Indexing

8 0.75690132 159 iccv-2013-Fast Neighborhood Graph Search Using Cartesian Concatenation

9 0.74405313 337 iccv-2013-Random Grids: Fast Approximate Nearest Neighbors and Range Searching for Image Search

10 0.72659314 445 iccv-2013-Visual Reranking through Weakly Supervised Multi-graph Learning

11 0.71111941 333 iccv-2013-Quantize and Conquer: A Dimensionality-Recursive Solution to Clustering, Vector Quantization, and Image Retrieval

12 0.69701517 400 iccv-2013-Stable Hyper-pooling and Query Expansion for Event Detection

13 0.67131376 192 iccv-2013-Handwritten Word Spotting with Corrected Attributes

14 0.66399497 162 iccv-2013-Fast Subspace Search via Grassmannian Based Hashing

15 0.64237332 3 iccv-2013-3D Sub-query Expansion for Improving Sketch-Based Multi-view Image Retrieval

16 0.59962988 450 iccv-2013-What is the Most EfficientWay to Select Nearest Neighbor Candidates for Fast Approximate Nearest Neighbor Search?

17 0.54107231 77 iccv-2013-Codemaps - Segment, Classify and Search Objects Locally

18 0.53464681 210 iccv-2013-Image Retrieval Using Textual Cues

19 0.52318937 306 iccv-2013-Paper Doll Parsing: Retrieving Similar Styles to Parse Clothing Items

20 0.52049565 53 iccv-2013-Attribute Dominance: What Pops Out?


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.152), (7, 0.013), (12, 0.015), (13, 0.015), (26, 0.079), (27, 0.265), (31, 0.034), (40, 0.012), (42, 0.087), (64, 0.04), (73, 0.018), (89, 0.169)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.81417429 292 iccv-2013-Non-convex P-Norm Projection for Robust Sparsity

Author: Mithun Das Gupta, Sanjeev Kumar

Abstract: In this paper, we investigate the properties of the Lp norm (p ≤ 1) within a projection framework. We start with the KKT equations of the non-linear optimization problem and then use its key properties to arrive at an algorithm for Lp norm projection on the non-negative simplex. We compare with L1 projection, which needs prior knowledge of the true norm, as well as hard thresholding based sparsification proposed in recent compressed sensing literature. We show performance improvements compared to these techniques across different vision applications.

same-paper 2 0.8049258 378 iccv-2013-Semantic-Aware Co-indexing for Image Retrieval

Author: Shiliang Zhang, Ming Yang, Xiaoyu Wang, Yuanqing Lin, Qi Tian

Abstract: Inverted indexes in image retrieval not only allow fast access to database images but also summarize all knowledge about the database, so that their discriminative capacity largely determines the retrieval performance. In this paper, for vocabulary tree based image retrieval, we propose a semantic-aware co-indexing algorithm to jointly embed two strong cues into the inverted indexes: 1) local invariant features that are robust to delineate low-level image contents, and 2) semantic attributes from large-scale object recognition that may reveal image semantic meanings. For an initial set of inverted indexes of local features, we utilize 1000 semantic attributes to filter out isolated images and insert semantically similar images into the initial set. Encoding these two distinct cues together effectively enhances the discriminative capability of inverted indexes. Such co-indexing operations are totally off-line and introduce small computation overhead to online query, because only local features but no semantic attributes are used for query. Experiments and comparisons with recent retrieval methods on 3 datasets, i.e., UKbench, Holidays, and Oxford5K, with 1.3 million images from Flickr as distractors, manifest the competitive performance of our method.

3 0.80246055 342 iccv-2013-Real-Time Solution to the Absolute Pose Problem with Unknown Radial Distortion and Focal Length

Author: Zuzana Kukelova, Martin Bujnak, Tomas Pajdla

Abstract: The problem of determining the absolute position and orientation of a camera from a set of 2D-to-3D point correspondences is one of the most important problems in computer vision with a broad range of applications. In this paper we present a new solution to the absolute pose problem for a camera with unknown radial distortion and unknown focal length from five 2D-to-3D point correspondences. Our new solver is numerically more stable, more accurate, and significantly faster than the existing state-of-the-art minimal four-point absolute pose solvers for this problem. Moreover, our solver results in fewer solutions and can handle larger radial distortions. The new solver is straightforward and uses only simple concepts from linear algebra. Therefore it is simpler than the state-of-the-art Gröbner basis solvers. We compare our new solver with the existing state-of-the-art solvers and show its usefulness on synthetic and real datasets.

4 0.74986517 310 iccv-2013-Partial Sum Minimization of Singular Values in RPCA for Low-Level Vision

Author: Tae-Hyun Oh, Hyeongwoo Kim, Yu-Wing Tai, Jean-Charles Bazin, In So Kweon

Abstract: Robust Principal Component Analysis (RPCA) via rank minimization is a powerful tool for recovering underlying low-rank structure of clean data corrupted with sparse noise/outliers. In many low-level vision problems, not only is it known that the underlying structure of clean data is low-rank, but the exact rank of clean data is also known. Yet, when applying conventional rank minimization for those problems, the objective function is formulated in a way that does not fully utilize a priori target rank information about the problems. This observation motivates us to investigate whether there is a better alternative solution when using rank minimization. In this paper, instead of minimizing the nuclear norm, we propose to minimize the partial sum of singular values. The proposed objective function implicitly encourages the target rank constraint in rank minimization. Our experimental analyses show that our approach performs better than conventional rank minimization when the number of samples is deficient, while the solutions obtained by the two approaches are almost identical when the number of samples is more than sufficient. We apply our approach to various low-level vision problems, e.g. high dynamic range imaging, photometric stereo and image alignment, and show that our results outperform those obtained by the conventional nuclear norm rank minimization method.

5 0.74954689 48 iccv-2013-An Adaptive Descriptor Design for Object Recognition in the Wild

Author: Zhenyu Guo, Z. Jane Wang

Abstract: Digital images nowadays show large appearance variabilities in picture styles, in terms of color tone, contrast, vignetting, etc. These ‘picture styles’ are directly related to the scene radiance, the image pipeline of the camera, and post-processing functions (e.g., photography effect filters). Due to the complexity and nonlinearity of these factors, popular gradient-based image descriptors generally are not invariant to different picture styles, which could degrade the performance of object recognition. Given that images shared online or created by individual users are taken with a wide range of devices and may be processed by various post-processing functions, finding a robust object recognition system is useful and challenging. In this paper, we investigate the influence of picture styles on object recognition by making a connection between image descriptors and a pixel mapping function g, and accordingly propose an adaptive approach based on a g-incorporated kernel descriptor and multiple kernel learning, without estimating or specifying the image styles used in training and testing. We conduct experiments on the Domain Adaptation data set, the Oxford Flower data set, and several variants of the Flower data set by introducing popular photography effects through post-processing. The results demonstrate that the proposed method consistently yields recognition improvements over standard descriptors in all studied cases.

6 0.72952294 427 iccv-2013-Transfer Feature Learning with Joint Distribution Adaptation

7 0.694278 374 iccv-2013-Salient Region Detection by UFO: Uniqueness, Focusness and Objectness

8 0.69345164 126 iccv-2013-Dynamic Label Propagation for Semi-supervised Multi-class Multi-label Classification

9 0.6931982 448 iccv-2013-Weakly Supervised Learning of Image Partitioning Using Decision Trees with Structured Split Criteria

10 0.68856454 191 iccv-2013-Handling Uncertain Tags in Visual Recognition

11 0.6874783 322 iccv-2013-Pose Estimation and Segmentation of People in 3D Movies

12 0.685287 239 iccv-2013-Learning Hash Codes with Listwise Supervision

13 0.68449831 244 iccv-2013-Learning View-Invariant Sparse Representations for Cross-View Action Recognition

14 0.68191063 446 iccv-2013-Visual Semantic Complex Network for Web Images

15 0.68003666 352 iccv-2013-Revisiting Example Dependent Cost-Sensitive Learning with Decision Trees

16 0.67981476 214 iccv-2013-Improving Graph Matching via Density Maximization

17 0.67978883 229 iccv-2013-Large-Scale Video Hashing via Structure Learning

18 0.67895937 153 iccv-2013-Face Recognition Using Face Patch Networks

19 0.67606449 83 iccv-2013-Complementary Projection Hashing

20 0.67601562 159 iccv-2013-Fast Neighborhood Graph Search Using Cartesian Concatenation