iccv iccv2013 iccv2013-159 knowledge-graph by maker-knowledge-mining

159 iccv-2013-Fast Neighborhood Graph Search Using Cartesian Concatenation


Source: pdf

Author: Jing Wang, Jingdong Wang, Gang Zeng, Rui Gan, Shipeng Li, Baining Guo

Abstract: In this paper, we propose a new data structure for approximate nearest neighbor search. This structure augments the neighborhood graph with a bridge graph. We propose to exploit Cartesian concatenation to produce a large set of vectors, called bridge vectors, from several small sets of subvectors. Each bridge vector is connected with a few reference vectors near to it, forming a bridge graph. Our approach finds nearest neighbors by simultaneously traversing the neighborhood graph and the bridge graph in the best-first strategy. The success of our approach stems from two factors: the exact nearest neighbor search over a large number of bridge vectors can be done quickly, and the reference vectors connected to a bridge (reference) vector near the query are also likely to be near the query. Experimental results on searching over large scale datasets (SIFT, GIST and HOG) show that our approach outperforms state-of-the-art ANN search algorithms in terms of efficiency and accuracy. The combination of our approach with the IVFADC system [18] also shows superior performance over the BIGANN dataset of 1 billion SIFT features compared with the best previously published result.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 This structure augments the neighborhood graph with a bridge graph. [sent-2, score-0.645]

2 We propose to exploit Cartesian concatenation to produce a large set of vectors, called bridge vectors, from several small sets of subvectors. [sent-3, score-0.663]

3 Each bridge vector is connected with a few reference vectors near to it, forming a bridge graph. [sent-4, score-1.697]

4 Our approach finds nearest neighbors by simultaneously traversing the neighborhood graph and the bridge graph in the best-first strategy. [sent-5, score-1.275]

5 The success of our approach stems from two factors: the exact nearest neighbor search over a large number of bridge vectors can be done quickly, and the reference vectors connected to a bridge (reference) vector near the query are also likely to be near the query. [sent-6, score-2.361]

6 The simplest solution to NN search is linear scan, comparing each reference vector to the query vector. [sent-13, score-0.467]

7 The search complexity is linear with respect to both the number of reference vectors and the data dimensionality. [sent-14, score-0.497]

8 Recently neighborhood graph search has been attracting a lot of interest [1, 2, 5, 15, 31, 32, 38] because of its simplicity and good search performance. [sent-30, score-0.56]

9 The basic procedure of neighborhood graph search starts from one or several seeding vectors, and puts them into a priority queue with the distance to the query being the key. [sent-32, score-0.779]

10 It then repeatedly pops the top vector, i.e., the nearest one to the query, and expands its neighborhood vectors (from the neighborhood graph), among which the vectors that have not been visited are pushed into the priority queue. [sent-35, score-1.009]
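
To make the procedure just described concrete, here is a minimal sketch of best-first neighborhood graph search. The names (`data`, `graph`, `seeds`, `max_visits`) and the fixed expansion budget are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of best-first search over a neighborhood graph.
# Assumptions (not from the paper's code): `data` is an (N, d) numpy
# array, `graph[v]` lists the neighbor ids of v, and the search stops
# after a fixed budget of expansions.
import heapq
import numpy as np

def best_first_search(query, data, graph, seeds, max_visits=1000, k=1):
    visited = set(seeds)
    # Min-heap keyed by (squared) distance to the query.
    queue = [(float(np.sum((data[s] - query) ** 2)), s) for s in seeds]
    heapq.heapify(queue)
    results = []  # max-heap of (-dist, id); holds the best k seen so far
    for _ in range(max_visits):
        if not queue:
            break
        d, v = heapq.heappop(queue)      # expand the current best vector
        if len(results) < k:
            heapq.heappush(results, (-d, v))
        elif d < -results[0][0]:
            heapq.heapreplace(results, (-d, v))
        for u in graph[v]:               # push unvisited neighbors
            if u not in visited:
                visited.add(u)
                heapq.heappush(queue,
                               (float(np.sum((data[u] - query) ** 2)), u))
    return sorted((-nd, i) for nd, i in results)
```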

11 Using neighborhood vectors of a vector as candidates has two advantages. [sent-37, score-0.456]

12 The other is that if one vector is close to the query, its neighborhood vectors are also likely to be close to the query. [sent-39, score-0.401]

13 The other is to design efficient and effective ways to guide the search in the neighborhood graph, including presetting the seeds created via clustering [31, 32], picking the candidates from KD trees [2], and iteratively searching between KD trees and the neighborhood graph [38]. [sent-42, score-0.821]

14 In this paper, we present a more effective way, combining the neighborhood graph with a bridge graph, to search for approximate nearest neighbors. [sent-43, score-1.265]

15 The inverted index algorithms are widely used for very large datasets of vectors (hundreds of millions to billions) due to their small memory cost. [sent-44, score-0.55]

16 An inverted index is typically built as in, e.g., [4, 18, 28, 34, 39], and is composed of a set of inverted lists, each of which corresponds to a cluster of reference vectors. [sent-48, score-0.517]

17 Other inverted indices include hash tables [10], tree codebooks [6] and complementary tree codebooks [37]. [sent-49, score-0.435]

18 This structure augments the neighborhood graph with a bridge graph that is able to boost approximate nearest neighbor search performance. [sent-53, score-1.492]

19 Inspired by the product quantization technique [4, 18], we adopt Cartesian concatenation (or the Cartesian product) to generate a large set of vectors, which we call bridge vectors, from several small sets of subvectors to approximate the reference vectors. [sent-54, score-1.154]

20 Each bridge vector is then connected to a few reference vectors that are near enough to it, forming a bridge graph. [sent-55, score-1.697]

21 Combining the bridge graph with the neighborhood graph built over reference data vectors yields an augmented neighborhood graph. [sent-56, score-1.648]

22 The ANN search procedure starts by finding the nearest bridge vector to the query vector, and discovers the first set of reference vectors connected to such a bridge vector. [sent-57, score-2.072]

23 Then the search simultaneously traverses the bridge graph and the neighborhood graph in the best-first manner using a shared priority queue. [sent-58, score-1.235]

24 The advantages of adopting the bridge graph are twofold. [sent-59, score-0.746]

25 First, computing the distances from bridge vectors to the query is very efficient: for instance, the computation for 1,000,000 bridge vectors formed by 3 sets of 100 subvectors takes almost the same time as that for 100 vectors. [sent-60, score-1.936]
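
The efficiency claim holds because a squared Euclidean distance to any bridge vector decomposes over the subvector sets, so only m·n subvector distances are ever computed. A hedged sketch, with `centers` as a hypothetical name for the m subvector sets:

```python
# Why distances to n**m bridge vectors cost only m*n subvector
# distances: the squared distance to a concatenated vector is the sum
# of per-subspace squared distances. Assumes d is divisible by m;
# `centers[i]` is an (n, d/m) array holding the i-th subvector set.
import numpy as np

def subspace_tables(query, centers):
    m = len(centers)
    sub = np.split(query, m)
    # tables[i][j] = squared distance from the i-th query subvector
    # to the j-th subvector of the i-th set.
    return [np.sum((centers[i] - sub[i]) ** 2, axis=1) for i in range(m)]

def bridge_distance(tables, index):
    # Distance to the bridge vector selected by index = (j_1, ..., j_m).
    return sum(t[j] for t, j in zip(tables, index))
```

With m = 3 and n = 100, 300 subvector distances suffice to score all 100³ = 10⁶ bridge vectors, matching the example above.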

26 Second, the best bridge vector is most likely to be very close to true NNs, allowing the ANN search to quickly reach true NNs through bridge vectors. [sent-61, score-1.424]

27 Note that, different from applications to fast distance computation [18] and codebook construction [4], the goal of the Cartesian product in this paper is to build a bridge to connect the query and the reference vectors through bridge vectors. [sent-62, score-1.824]

28 Our goal is to build an index structure over X = {x_1, · · · , x_N}, x_i ∈ R^d, using the bridge graph such that, given a query vector q, its nearest neighbors can be quickly discovered. [sent-68, score-0.489]

29 Data structure. Our index structure consists of two components: a bridge graph that connects bridge vectors and their nearest reference vectors, and a neighborhood graph that connects each reference vector to its nearest reference vectors. [sent-72, score-2.853]

30 Inspired by this property, we use the elements of the Cartesian concatenation Y, called bridge vectors, as bridges to connect the query vector with the reference vectors. [sent-83, score-0.737]

31 We propose to use product quantization [18], which aims to minimize the distance of each vector to the nearest concatenated center derived from subquantizers, to compute bridge vectors. [sent-85, score-0.959]

32 This ensures that the reference vectors discovered through one bridge vector are not far away from the query and hence the probability that those reference vectors are true NNs is high. [sent-86, score-1.557]

33 To this end, we generate a large number of bridge vectors. [sent-88, score-0.622]

34 The augmented neighborhood graph is a combination of the neighborhood graph G¯ over the reference database X and the bridge graph B between the bridge vectors Y and the reference database X. [sent-91, score-2.214]

35 The bridge graph B is constructed by connecting each bridge vector yj in Y to its nearest reference vectors Adj[yj] in X. [sent-96, score-1.741]

36 We build the bridge graph approximately by finding the top t (typically 100 in our experiments) nearest bridge vectors for each reference vector and then keeping the top b (typically 5 in our experiments) nearest reference vectors for each bridge vector, which is efficient and takes O(Nt(log t + b)) time. [sent-98, score-2.49]
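
A hedged sketch of this construction step, assuming a routine `nearest_bridges(x, t)` that yields the t nearest bridge vectors of x in increasing distance order (for instance, built from the multi-sequence routine sketched further below); all names are illustrative:

```python
# Approximate bridge graph construction as described: collect the top-t
# nearest bridge vectors per reference vector, then keep only the b
# closest reference vectors per bridge vector.
import heapq
from collections import defaultdict

def build_bridge_graph(data, nearest_bridges, t=100, b=5):
    cand = defaultdict(list)           # bridge id -> [(dist, ref id), ...]
    for ref_id, x in enumerate(data):
        for d, bridge_id in nearest_bridges(x, t):
            cand[bridge_id].append((d, ref_id))
    # Keep the b closest reference vectors for each bridge vector.
    return {bid: [r for _, r in heapq.nsmallest(b, lst)]
            for bid, lst in cand.items()}
```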

37 The bridge graph is different from the inverted multi-index [4]. [sent-99, score-1.119]

38 In the inverted multi-index, each bridge vector y contains a list of vectors that are closer to y than to all other bridge vectors, while in our approach each bridge vector y is associated with a list of the reference vectors that are closer to y than all other reference data points are. [sent-100, score-2.844]

39 We give a brief overview of the ANN search procedure over a neighborhood graph before describing how to make use of bridge vectors. [sent-109, score-1.057]

40 Y → X: the bridge graph, and X → X: the neighborhood graph. [sent-114, score-0.808]

41 Magenta denotes the vectors in the main queue, green represents the vector being popped out from the main queue, and black indicates the vectors whose neighborhoods have already been expanded. [sent-116, score-0.477]

42 Then each neighborhood vector in Adj[p∗] is inserted into the queue if it has not been visited, and at the same time it is added to the result set (maintained by a max-priority queue with a fixed length depending on how many nearest neighbors are expected). [sent-120, score-0.787]

43 To exploit the bridge vectors, we present an extraction-on-demand strategy, instead of fetching all the bridge vectors into the main queue, which would incur an expensive cost in sorting them and maintaining the main queue. [sent-121, score-1.337]

44 Our strategy is to maintain the main queue such that it consists of only one bridge vector if available. [sent-122, score-0.857]

45 To be specific, if the top vector p∗ in the main queue is a reference vector, the algorithm proceeds as usual, the same as the above procedure without using bridge vectors. [sent-123, score-1.071]

46 If the top vector is a bridge vector, we first insert its neighbors Adj[p∗] into the main queue and the result set, and in addition we find the next nearest bridge vector (to the query q) and insert it into the main queue. [sent-124, score-1.891]

47 The extraction-on-demand strategy needs to visit the bridge vectors one by one in the order of increasing distance from q. [sent-132, score-0.657]

48 /* Extract the next nearest bridge vector if p is a bridge vector */

49 return R; — the index pairs (ik, jl) are generated so that the corresponding bridge vectors are visited in the order of increasing distances to the query q. [sent-165, score-1.014]
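
Putting the pieces together, the following is a hedged, schematic reading of the extraction-on-demand search described above, not the authors' code. It assumes `bridge_stream` is a generator yielding (distance, bridge id) pairs in increasing order (see the multi-sequence sketch below, where bridge ids are index tuples) and `bridge_graph` maps a bridge id to its reference neighbors.

```python
# Shared-queue traversal of the augmented neighborhood graph with
# extraction-on-demand: the main queue holds at most one bridge vector;
# when it is popped, its reference neighbors are inserted and the next
# nearest bridge vector is pulled from the stream.
import heapq
import numpy as np

def _push_result(results, d, v, k):
    if len(results) < k:
        heapq.heappush(results, (-d, v))
    elif d < -results[0][0]:
        heapq.heapreplace(results, (-d, v))

def augmented_search(query, data, graph, bridge_graph, bridge_stream,
                     max_visits=1000, k=1):
    d0, b0 = next(bridge_stream)             # seed: nearest bridge vector
    queue = [(d0, 0, b0, True)]              # (dist, tiebreak, id, is_bridge)
    visited, results, tick, visits = set(), [], 1, 0
    while queue and visits < max_visits:
        d, _, v, is_bridge = heapq.heappop(queue)
        neighbors = bridge_graph.get(v, ()) if is_bridge else graph[v]
        if is_bridge:                        # keep one bridge vector queued
            nxt = next(bridge_stream, None)
            if nxt is not None:
                heapq.heappush(queue, (nxt[0], tick, nxt[1], True))
                tick += 1
        else:
            visits += 1
        for u in neighbors:
            if u not in visited:
                visited.add(u)
                du = float(np.sum((data[u] - query) ** 2))
                heapq.heappush(queue, (du, tick, u, False))
                tick += 1
                _push_result(results, du, u, k)
    return sorted((-nd, i) for nd, i in results)
```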

50 The algorithm is very efficient, and producing the t-th bridge vector takes only O(log t) time. [sent-166, score-0.684]

51 Slightly different from extracting a fixed number of nearest bridge vectors at once [4], our algorithm automatically determines when to extract the next one, that is, when there is no bridge vector in the main queue. [sent-167, score-1.642]
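
For completeness, a sketch of the multi-sequence algorithm of [4] in the generator form assumed above. Each per-subspace distance table must be sorted ascending (keep the argsort permutation to map indices back to original subvector ids); each pop pushes at most m successors, giving the O(log t) per-extraction heap cost quoted above.

```python
# Multi-sequence extraction: stream index tuples of bridge vectors in
# nondecreasing total-distance order using a heap over index tuples.
# `tables`: the per-subspace distance tables, each sorted ascending.
import heapq

def multi_sequence(tables):
    m = len(tables)
    start = (0,) * m
    heap = [(sum(t[0] for t in tables), start)]
    seen = {start}
    while heap:
        d, idx = heapq.heappop(heap)
        yield d, idx                       # idx = (j_1, ..., j_m)
        for i in range(m):                 # push the m successors of idx
            if idx[i] + 1 < len(tables[i]):
                nxt = idx[:i] + (idx[i] + 1,) + idx[i + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    nd = d - tables[i][idx[i]] + tables[i][idx[i] + 1]
                    heapq.heappush(heap, (nd, nxt))
```

In practice one would feed it the sorted subspace tables from the earlier sketch, e.g. `multi_sequence([np.sort(t) for t in subspace_tables(query, centers)])`, keeping each `np.argsort(t)` to recover original subvector ids.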

52 #reference means the number of reference vectors associated with the bridge vectors, and α means the average number of unique reference vectors associated with each bridge vector. [sent-179, score-1.988]

53 The BIGANN dataset [19] consists of 1B 128-dimensional byte-valued vectors as the reference set and 10K vectors as the queries. [sent-184, score-0.552]

54 The true nearest neighbors are computed by comparing each query with all the reference vectors in the data set. [sent-187, score-0.682]

55 We compare different algorithms by calculating the search accuracy given the same search time, where the search time is recorded by varying the number of accessed vectors. [sent-188, score-0.396]

56 The index structure construction in our approach includes partitioning the vectors into m subvectors and grouping the vectors of each partition into n clusters. [sent-194, score-0.426]
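
A hedged sketch of this construction step, using scikit-learn's KMeans as a stand-in clustering routine (an assumption; the paper does not name an implementation):

```python
# Split vectors into m subvectors and cluster each partition into n
# centers; the Cartesian concatenation of the m center sets forms the
# n**m bridge vectors.
import numpy as np
from sklearn.cluster import KMeans

def build_subvector_sets(data, m=4, n=50, seed=0):
    parts = np.split(data, m, axis=1)        # assumes dim divisible by m
    return [KMeans(n_clusters=n, random_state=seed).fit(p).cluster_centers_
            for p in parts]
```

With m = 4 and n = 50, as in the best-performing setting reported below, the Cartesian concatenation yields 50⁴ = 6.25M bridge vectors.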

57 This is because more clusters produce more bridge vectors, and thus more reference vectors are associated with bridge vectors and the associated distances are much smaller. [sent-198, score-2.03]

58 The result with 4 partitions and 50 clusters per partition achieves the best performance, as in this case the properties desired for bridge vectors described in Section 2 are best satisfied. [sent-199, score-0.86]

59 We compare our approach with state-of-the-art algorithms, including iterative neighborhood graph search [38], the original neighborhood graph search (AryaM93) [2], trinary projection (TP) trees [20], vantage point (VP) trees [46], spill trees [24], FLANN [27], and the inverted multi-index [4]. [sent-202, score-1.392]

60 The neighborhood graphs of the different algorithms are the same, and each vector is connected with its 20 nearest vectors. [sent-205, score-0.402]

61 All the neighborhood graph search algorithms outperform the other algorithms, which shows that the neighborhood graph structure is effective for indexing vectors. [sent-227, score-0.79]

62 The superiority of our approach over previous neighborhood graph algorithms stems from the fact that it exploits the bridge graph to help the search. [sent-228, score-1.085]

63 It is shown in [4] that the inverted multi-index works best when using a second-order multi-index and a large codebook, but this results in a high quantization cost. [sent-230, score-0.395]

64 In contrast, our approach benefits from the neighborhood graph structure, so that we can use a high-order product quantizer to reduce the quantization cost. [sent-231, score-0.444]

65 The parameters M (the number of inverted lists visited) and L (the number of candidates for reranking) are given beside each IVFADC marker. [sent-242, score-0.452]

66 We cluster the data points into K inverted lists and use a 64-bit code to represent each vector, as done in [18]. [sent-246, score-0.432]

67 Given a query, we first find its M nearest inverted lists, then compute the approximate distance from the query to each of the candidates in the retrieved inverted lists. [sent-247, score-1.028]

68 Finally we re-rank the top L candidates using the Euclidean distance and compute the 1-recall [18] of the nearest neighbor (the same as the definition of the search accuracy for 1-NN). [sent-248, score-0.418]
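
The evaluation protocol just described can be summarized in a short schematic sketch (not the authors' IVFADC code); `approx` stands in for the PQ-code distance of [18] and the other names are assumptions:

```python
# Retrieve-and-rerank: visit the M nearest inverted lists, rank the
# retrieved candidates by a cheap approximate distance, then rerank
# the top L with exact Euclidean distance. `coarse` holds the K list
# centers and `lists[c]` the vector ids assigned to center c.
import numpy as np

def search_ivf(query, coarse, lists, approx, data, M=8, L=100, k=1):
    d2c = np.sum((coarse - query) ** 2, axis=1)          # coarse distances
    cands = [v for c in np.argsort(d2c)[:M] for v in lists[c]]
    cands.sort(key=lambda v: approx(query, v))           # approximate rank
    top = sorted(cands[:L],                              # exact rerank
                 key=lambda v: float(np.sum((data[v] - query) ** 2)))
    return top[:k]
```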

69 The IVFADC system organizes the data using inverted indices built via a coarse quantizer and represents each vector by a short code produced by product quantization. [sent-254, score-0.454]

70 During the search stage, the system visits the inverted lists in ascending order of the distances to the query and re-ranks the candidates according to the short codes. [sent-255, score-0.724]

71 The original implementation uses only a small number of inverted lists to avoid the expensive time cost of finding the exact nearest inverted lists. [sent-256, score-0.944]

72 The inverted multi-index [4] is used to replace the inverted indices in the IVFADC system, which is shown to be better than the original IVFADC implementation [18]. [sent-257, score-0.68]

73 We propose to replace the nearest inverted list identification with our approach. [sent-258, score-0.516]

74 The good search quality of our approach in terms of both accuracy and efficiency makes it feasible to handle a large number of inverted lists. [sent-259, score-0.45]

75 Then we use our approach to assign each vector to the inverted list corresponding to the nearest center, producing the inverted indices. [sent-261, score-0.876]

76 The recall@T score is equivalent to the accuracy for the nearest neighbor if a short list of T vectors is verified using exact Euclidean distances [19]. [sent-281, score-0.483]

77 The superiority over IVFADC stems from the fact that our approach significantly increases the number of inverted indices and produces space partitions with smaller (coarse) quantization errors, and that our system accesses only a few coarse centers while guaranteeing relatively accurate inverted lists. [sent-284, score-0.815]

78 For the inverted multi-index approach, although the total number of centers is quite large, the data vectors are not evenly divided into the inverted lists. [sent-285, score-0.83]

79 As reported in the supplementary material of [4], 61% of the inverted lists are empty. [sent-286, score-0.397]

80 In addition to the neighborhood graph and the reference vectors, the index structure of our approach includes a bridge graph and the bridge vectors. [sent-291, score-1.915]

81 The number of bridge vectors in our implementation is O(N), with N being the number of reference vectors. [sent-292, score-0.994]

82 The storage cost of the bridge vectors is then O(N^{1/m}), and the cost of the bridge graph is also O(N). [sent-293, score-1.67]

83 In the case of 1M 384-dimensional byte-valued GIST features, without optimization, the storage cost (125M bytes) of the bridge graph is smaller than that of the reference vectors (384M bytes) and the neighborhood graph (160M bytes). [sent-294, score-1.464]
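
A hedged back-of-the-envelope check of two of the quoted sizes, assuming 4-byte neighbor ids and that each of the 20 neighborhood edges per vector also stores a 4-byte distance (assumptions chosen to match the quoted numbers; the paper does not state the layout):

```latex
\underbrace{10^{6} \times 384 \text{ bytes}}_{\text{reference vectors}} = 384\text{M bytes},
\qquad
\underbrace{10^{6} \times 20 \times (4 + 4) \text{ bytes}}_{\text{neighborhood graph}} = 160\text{M bytes}.
```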

84 In comparison to source coding [18, 19] and hashing without using the original features, and inverted indices (Table 2). [sent-297, score-0.465]

85 IVFADC uses inverted lists with K = 1024, Multi-D-ADC uses the second-order multi-index with K = 2^14, and our approach uses inverted lists with K = 6M. [sent-299, score-0.794]

86 Recent research [42] shows that an approximate neighborhood graph can be built in O(N log N) time, which is comparable to the cost of constructing the bridge graph. [sent-313, score-1.025]

87 Because the number of data vectors is very large (1B), the most time-consuming stage is to assign each vector to the inverted lists, which takes about 2 days in both cases. [sent-318, score-0.612]

88 The search procedure of our approach consists of the distance computation over the subvectors and the traversal over the bridge graph and the neighborhood graph. [sent-323, score-1.057]

89 The distance computation over the subvectors is very cheap and takes a small constant time (about the cost of computing distances to 100 vectors in our experiments). [sent-324, score-0.436]

90 Compared with the number of reference vectors that are required to reach an acceptable accuracy (e. [sent-325, score-0.396]

91 Besides the computation of the distances between the query vector and the visited reference vectors, the additional time overhead comes from maintaining the priority queue and querying the bridge vectors using the multi-sequence algorithm. [sent-328, score-1.547]

92 Given that T reference vectors have been discovered, it can easily be shown that the main queue is no longer than T. [sent-329, score-0.572]

93 Considering the worst case in which all the T reference vectors come from the bridge graph, where each bridge vector is associated with α unique reference vectors on average (the statistics for α in our experiments are presented in Table 1), T/α bridge vectors are visited. [sent-330, score-2.825]

94 Extracting T/α bridge vectors using the multi-sequence algorithm [4] takes O((T/α) log(T/α)) time. [sent-332, score-0.829]
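
Under this reading (α unique reference vectors per visited bridge vector, and a main queue of length at most T), the extra overhead for discovering T reference vectors can be summarized as:

```latex
\underbrace{O\!\left(\tfrac{T}{\alpha}\log\tfrac{T}{\alpha}\right)}_{\text{multi-sequence extraction}}
\;+\;
\underbrace{O\!\left(T\log T\right)}_{\text{main-queue maintenance}} .
```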

95 Figure 5 shows the time cost of visiting 10K reference vectors in different algorithms on two datasets. [sent-334, score-0.457]

96 Linear scan represents the time cost of computing the distances between a query and all reference vectors. [sent-335, score-0.431]

97 We can see that the inverted multi-index incurs the minimum overhead and our approach the second smallest. [sent-337, score-0.408]

98 The other property is that the exact nearest vectors to a query vector from such a large set of concatenated vectors can be quickly found using the multi-sequence algorithm. [sent-342, score-0.747]

99 The application to the inverted multi-index [4] makes use of the second property to quickly retrieve concatenated quantizers. [sent-344, score-0.454]

100 Query-driven iterated neighborhood graph search for large scale indexing. [sent-638, score-0.435]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('bridge', 0.622), ('inverted', 0.325), ('ivfadc', 0.242), ('reference', 0.192), ('neighborhood', 0.186), ('vectors', 0.18), ('queue', 0.175), ('nearest', 0.158), ('subvectors', 0.137), ('search', 0.125), ('graph', 0.124), ('query', 0.115), ('bigann', 0.114), ('cartesian', 0.103), ('gist', 0.101), ('hashing', 0.09), ('trees', 0.085), ('neighbor', 0.08), ('bytes', 0.072), ('lists', 0.072), ('quantization', 0.07), ('sift', 0.07), ('kd', 0.069), ('visited', 0.065), ('ann', 0.063), ('pages', 0.059), ('adj', 0.057), ('overhead', 0.056), ('candidates', 0.055), ('priority', 0.054), ('approximate', 0.05), ('zeng', 0.049), ('extractnextnearestbridgevector', 0.048), ('logt', 0.048), ('multiindex', 0.048), ('nns', 0.047), ('index', 0.045), ('cost', 0.043), ('product', 0.042), ('discovered', 0.041), ('concatenation', 0.041), ('neighbors', 0.037), ('lsh', 0.037), ('partitions', 0.036), ('storage', 0.036), ('quad', 0.036), ('hog', 0.035), ('vector', 0.035), ('augmented', 0.034), ('jing', 0.034), ('list', 0.033), ('popped', 0.032), ('tress', 0.032), ('distances', 0.032), ('concatenated', 0.032), ('reranking', 0.031), ('tiny', 0.031), ('indices', 0.03), ('construction', 0.029), ('stems', 0.029), ('searching', 0.028), ('cheap', 0.028), ('scan', 0.028), ('property', 0.027), ('tree', 0.027), ('takes', 0.027), ('soda', 0.026), ('hash', 0.026), ('scalable', 0.025), ('arya', 0.025), ('wang', 0.025), ('main', 0.025), ('multimedia', 0.024), ('acceptable', 0.024), ('nn', 0.024), ('traversing', 0.024), ('connected', 0.023), ('hz', 0.023), ('hour', 0.023), ('gang', 0.023), ('oftwo', 0.023), ('augments', 0.023), ('flann', 0.023), ('forming', 0.023), ('acm', 0.022), ('gan', 0.022), ('milliseconds', 0.022), ('quantizer', 0.022), ('clusters', 0.022), ('proceeds', 0.022), ('fast', 0.022), ('insert', 0.021), ('visiting', 0.021), ('retrieval', 0.021), ('time', 0.021), ('rui', 0.021), ('pop', 0.021), ('billion', 0.021), ('quickly', 0.02), ('coding', 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999958 159 iccv-2013-Fast Neighborhood Graph Search Using Cartesian Concatenation

Author: Jing Wang, Jingdong Wang, Gang Zeng, Rui Gan, Shipeng Li, Baining Guo

Abstract: In this paper, we propose a new data structure for approximate nearest neighbor search. This structure augments the neighborhood graph with a bridge graph. We propose to exploit Cartesian concatenation to produce a large set of vectors, called bridge vectors, from several small sets of subvectors. Each bridge vector is connected with a few reference vectors near to it, forming a bridge graph. Our approach finds nearest neighbors by simultaneously traversing the neighborhood graph and the bridge graph in the best-first strategy. The success of our approach stems from two factors: the exact nearest neighbor search over a large number of bridge vectors can be done quickly, and the reference vectors connected to a bridge (reference) vector near the query are also likely to be near the query. Experimental results on searching over large scale datasets (SIFT, GIST and HOG) show that our approach outperforms state-of-the-art ANN search algorithms in terms of efficiency and accuracy. The combination of our approach with the IVFADC system [18] also shows superior performance over the BIGANN dataset of 1 billion SIFT features compared with the best previously published result.

2 0.24110834 450 iccv-2013-What is the Most Efficient Way to Select Nearest Neighbor Candidates for Fast Approximate Nearest Neighbor Search?

Author: Masakazu Iwamura, Tomokazu Sato, Koichi Kise

Abstract: Approximate nearest neighbor search (ANNS) is a basic and important technique used in many tasks such as object recognition. It involves two processes: selecting nearest neighbor candidates and performing a brute-force search of these candidates. Only the former though has scope for improvement. In most existing methods, it approximates the space by quantization. It then calculates all the distances between the query and all the quantized values (e.g., clusters or bit sequences), and selects a fixed number of candidates close to the query. The performance of the method is evaluated based on accuracy as a function of the number of candidates. This evaluation seems rational but poses a serious problem; it ignores the computational cost of the process of selection. In this paper, we propose a new ANNS method that takes into account costs in the selection process. Whereas existing methods employ computationally expensive techniques such as comparative sort and heap, the proposed method does not. This realizes a significantly more efficient search. We have succeeded in reducing computation times by one-third compared with the state-of-the-art on an experiment using 100 million SIFT features.

3 0.24023029 378 iccv-2013-Semantic-Aware Co-indexing for Image Retrieval

Author: Shiliang Zhang, Ming Yang, Xiaoyu Wang, Yuanqing Lin, Qi Tian

Abstract: Inverted indexes in image retrieval not only allow fast access to database images but also summarize all knowledge about the database, so that their discriminative capacity largely determines the retrieval performance. In this paper, for vocabulary tree based image retrieval, we propose a semantic-aware co-indexing algorithm to jointly embed two strong cues into the inverted indexes: 1) local invariant features that are robust to delineate low-level image contents, and 2) semantic attributes from large-scale object recognition that may reveal image semantic meanings. For an initial set of inverted indexes of local features, we utilize 1000 semantic attributes to filter out isolated images and insert semantically similar images to the initial set. Encoding these two distinct cues together effectively enhances the discriminative capability of inverted indexes. Such co-indexing operations are totally off-line and introduce small computation overhead to online query because only local features but no semantic attributes are used for the query. Experiments and comparisons with recent retrieval methods on 3 datasets, i.e., UKbench, Holidays, Oxford5K, and 1.3 million images from Flickr as distractors, manifest the competitive performance of our method.

4 0.18140498 333 iccv-2013-Quantize and Conquer: A Dimensionality-Recursive Solution to Clustering, Vector Quantization, and Image Retrieval

Author: Yannis Avrithis

Abstract: Inspired by the close relation between nearest neighbor search and clustering in high-dimensional spaces as well as the success of one helping to solve the other, we introduce a new paradigm where both problems are solved simultaneously. Our solution is recursive, not in the size of input data but in the number of dimensions. One result is a clustering algorithm that is tuned to small codebooks but does not need all data in memory at the same time and is practically constant in the data size. As a by-product, a tree structure performs either exact or approximate quantization on trained centroids, the latter being not very precise but extremely fast. A lesser contribution is a new indexing scheme for image retrieval that exploits multiple small codebooks to provide an arbitrarily fine partition of the descriptor space. Large scale experiments on public datasets exhibit state of the art performance and remarkable generalization.

5 0.18058835 337 iccv-2013-Random Grids: Fast Approximate Nearest Neighbors and Range Searching for Image Search

Author: Dror Aiger, Efi Kokiopoulou, Ehud Rivlin

Abstract: We propose two solutions for both nearest neighbors and range search problems. For the nearest neighbors problem, we propose a c-approximate solution for the restricted version of the decision problem with bounded radius which is then reduced to the nearest neighbors by a known reduction. For range searching we propose a scheme that learns the parameters in a learning stage, adapting them to the case of a set of points with low intrinsic dimension that are embedded in high dimensional space (a common scenario for image point descriptors). We compare our algorithms to the best known methods for these problems, i.e. LSH, ANN and FLANN. We show analytically and experimentally that we can do better for moderate approximation factor. Our algorithms are trivial to parallelize. In the experiments conducted, running on a couple of million images, our algorithms show meaningful speed-ups when compared with the above mentioned methods.

6 0.14810109 221 iccv-2013-Joint Inverted Indexing

7 0.13347681 444 iccv-2013-Viewing Real-World Faces in 3D

8 0.13164577 162 iccv-2013-Fast Subspace Search via Grassmannian Based Hashing

9 0.12759382 266 iccv-2013-Mining Multiple Queries for Image Retrieval: On-the-Fly Learning of an Object-Specific Mid-level Representation

10 0.11375709 294 iccv-2013-Offline Mobile Instance Retrieval with a Small Memory Footprint

11 0.11162212 83 iccv-2013-Complementary Projection Hashing

12 0.10628697 238 iccv-2013-Learning Graphs to Match

13 0.10354045 400 iccv-2013-Stable Hyper-pooling and Query Expansion for Event Detection

14 0.10150602 229 iccv-2013-Large-Scale Video Hashing via Structure Learning

15 0.090971105 334 iccv-2013-Query-Adaptive Asymmetrical Dissimilarities for Visual Object Retrieval

16 0.086765781 445 iccv-2013-Visual Reranking through Weakly Supervised Multi-graph Learning

17 0.079349756 13 iccv-2013-A General Two-Step Approach to Learning-Based Hashing

18 0.077026352 239 iccv-2013-Learning Hash Codes with Listwise Supervision

19 0.073046528 210 iccv-2013-Image Retrieval Using Textual Cues

20 0.071525835 419 iccv-2013-To Aggregate or Not to aggregate: Selective Match Kernels for Image Search


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.146), (1, 0.039), (2, -0.071), (3, -0.101), (4, 0.004), (5, 0.227), (6, 0.008), (7, -0.041), (8, -0.121), (9, 0.067), (10, 0.024), (11, 0.039), (12, 0.069), (13, 0.07), (14, 0.035), (15, 0.059), (16, 0.1), (17, -0.134), (18, 0.097), (19, -0.044), (20, -0.035), (21, -0.032), (22, 0.017), (23, 0.016), (24, 0.043), (25, -0.011), (26, -0.018), (27, -0.019), (28, 0.01), (29, -0.007), (30, 0.023), (31, 0.045), (32, 0.013), (33, 0.001), (34, 0.048), (35, 0.039), (36, 0.035), (37, -0.048), (38, 0.019), (39, 0.016), (40, -0.013), (41, 0.013), (42, -0.017), (43, -0.013), (44, -0.017), (45, -0.018), (46, -0.054), (47, -0.047), (48, 0.059), (49, 0.05)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96735525 159 iccv-2013-Fast Neighborhood Graph Search Using Cartesian Concatenation

Author: Jing Wang, Jingdong Wang, Gang Zeng, Rui Gan, Shipeng Li, Baining Guo

Abstract: In this paper, we propose a new data structure for approximate nearest neighbor search. This structure augments the neighborhood graph with a bridge graph. We propose to exploit Cartesian concatenation to produce a large set of vectors, called bridge vectors, from several small sets of subvectors. Each bridge vector is connected with a few reference vectors near to it, forming a bridge graph. Our approach finds nearest neighbors by simultaneously traversing the neighborhood graph and the bridge graph in the best-first strategy. The success of our approach stems from two factors: the exact nearest neighbor search over a large number of bridge vectors can be done quickly, and the reference vectors connected to a bridge (reference) vector near the query are also likely to be near the query. Experimental results on searching over large scale datasets (SIFT, GIST and HOG) show that our approach outperforms state-of-the-art ANN search algorithms in terms of efficiency and accuracy. The combination of our approach with the IVFADC system [18] also shows superior performance over the BIGANN dataset of 1 billion SIFT features compared with the best previously published result.

2 0.8655358 221 iccv-2013-Joint Inverted Indexing

Author: Yan Xia, Kaiming He, Fang Wen, Jian Sun

Abstract: Inverted indexing is a popular non-exhaustive solution to large scale search. An inverted file is built by a quantizer such as k-means or a tree structure. It has been found that multiple inverted files, obtained by multiple independent random quantizers, are able to achieve practically good recall and speed. Instead of computing the multiple quantizers independently, we present a method that creates them jointly. Our method jointly optimizes all codewords in all quantizers. Then it assigns these codewords to the quantizers. In experiments this method shows significant improvement over various existing methods that use multiple independent quantizers. On the one-billion set of SIFT vectors, our method is faster and more accurate than a recent state-of-the-art inverted indexing method.

3 0.85228729 337 iccv-2013-Random Grids: Fast Approximate Nearest Neighbors and Range Searching for Image Search

Author: Dror Aiger, Efi Kokiopoulou, Ehud Rivlin

Abstract: We propose two solutions for both nearest neighbors and range search problems. For the nearest neighbors problem, we propose a c-approximate solution for the restricted version of the decision problem with bounded radius which is then reduced to the nearest neighbors by a known reduction. For range searching we propose a scheme that learns the parameters in a learning stage, adapting them to the case of a set of points with low intrinsic dimension that are embedded in high dimensional space (a common scenario for image point descriptors). We compare our algorithms to the best known methods for these problems, i.e. LSH, ANN and FLANN. We show analytically and experimentally that we can do better for moderate approximation factor. Our algorithms are trivial to parallelize. In the experiments conducted, running on a couple of million images, our algorithms show meaningful speed-ups when compared with the above mentioned methods.

4 0.83375758 333 iccv-2013-Quantize and Conquer: A Dimensionality-Recursive Solution to Clustering, Vector Quantization, and Image Retrieval

Author: Yannis Avrithis

Abstract: Inspired by the close relation between nearest neighbor search and clustering in high-dimensional spaces as well as the success of one helping to solve the other, we introduce a new paradigm where both problems are solved simultaneously. Our solution is recursive, not in the size of input data but in the number of dimensions. One result is a clustering algorithm that is tuned to small codebooks but does not need all data in memory at the same time and is practically constant in the data size. As a by-product, a tree structure performs either exact or approximate quantization on trained centroids, the latter being not very precise but extremely fast. A lesser contribution is a new indexing scheme for image retrieval that exploits multiple small codebooks to provide an arbitrarily fine partition of the descriptor space. Large scale experiments on public datasets exhibit state of the art performance and remarkable generalization.

5 0.81154013 450 iccv-2013-What is the Most Efficient Way to Select Nearest Neighbor Candidates for Fast Approximate Nearest Neighbor Search?

Author: Masakazu Iwamura, Tomokazu Sato, Koichi Kise

Abstract: Approximate nearest neighbor search (ANNS) is a basic and important technique used in many tasks such as object recognition. It involves two processes: selecting nearest neighbor candidates and performing a brute-force search of these candidates. Only the former though has scope for improvement. In most existing methods, it approximates the space by quantization. It then calculates all the distances between the query and all the quantized values (e.g., clusters or bit sequences), and selects a fixed number of candidates close to the query. The performance of the method is evaluated based on accuracy as a function of the number of candidates. This evaluation seems rational but poses a serious problem; it ignores the computational cost of the process of selection. In this paper, we propose a new ANNS method that takes into account costs in the selection process. Whereas existing methods employ computationally expensive techniques such as comparative sort and heap, the proposed method does not. This realizes a significantly more efficient search. We have succeeded in reducing computation times by one-third compared with the state-of-the-art on an experiment using 100 million SIFT features.

6 0.79143107 419 iccv-2013-To Aggregate or Not to aggregate: Selective Match Kernels for Image Search

7 0.78897244 334 iccv-2013-Query-Adaptive Asymmetrical Dissimilarities for Visual Object Retrieval

8 0.7860198 294 iccv-2013-Offline Mobile Instance Retrieval with a Small Memory Footprint

9 0.77300763 266 iccv-2013-Mining Multiple Queries for Image Retrieval: On-the-Fly Learning of an Object-Specific Mid-level Representation

10 0.74887007 378 iccv-2013-Semantic-Aware Co-indexing for Image Retrieval

11 0.74200618 400 iccv-2013-Stable Hyper-pooling and Query Expansion for Event Detection

12 0.73045069 162 iccv-2013-Fast Subspace Search via Grassmannian Based Hashing

13 0.67101228 446 iccv-2013-Visual Semantic Complex Network for Web Images

14 0.61993557 287 iccv-2013-Neighbor-to-Neighbor Search for Fast Coding of Feature Vectors

15 0.61087102 131 iccv-2013-EVSAC: Accelerating Hypotheses Generation by Modeling Matching Scores with Extreme Value Theory

16 0.60105997 3 iccv-2013-3D Sub-query Expansion for Improving Sketch-Based Multi-view Image Retrieval

17 0.59848124 445 iccv-2013-Visual Reranking through Weakly Supervised Multi-graph Learning

18 0.56946987 365 iccv-2013-SIFTpack: A Compact Representation for Efficient SIFT Matching

19 0.53384656 117 iccv-2013-Discovering Details and Scene Structure with Hierarchical Iconoid Shift

20 0.47789353 210 iccv-2013-Image Retrieval Using Textual Cues


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.119), (4, 0.011), (7, 0.021), (13, 0.026), (26, 0.068), (27, 0.05), (31, 0.033), (34, 0.014), (40, 0.012), (42, 0.078), (64, 0.021), (73, 0.016), (74, 0.01), (89, 0.223), (90, 0.177), (95, 0.011), (98, 0.013)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.88987118 15 iccv-2013-A Generalized Low-Rank Appearance Model for Spatio-temporally Correlated Rain Streaks

Author: Yi-Lei Chen, Chiou-Ting Hsu

Abstract: In this paper, we propose a novel low-rank appearance model for removing rain streaks. Different from previous work, our method needs neither rain pixel detection nor time-consuming dictionary learning stage. Instead, as rain streaks usually reveal similar and repeated patterns on imaging scene, we propose and generalize a low-rank model from matrix to tensor structure in order to capture the spatio-temporally correlated rain streaks. With the appearance model, we thus remove rain streaks from image/video (and also other high-order image structure) in a unified way. Our experimental results demonstrate competitive (or even better) visual quality and efficient run-time in comparison with state of the art.

same-paper 2 0.87079984 159 iccv-2013-Fast Neighborhood Graph Search Using Cartesian Concatenation

Author: Jing Wang, Jingdong Wang, Gang Zeng, Rui Gan, Shipeng Li, Baining Guo

Abstract: In this paper, we propose a new data structure for approximate nearest neighbor search. This structure augments the neighborhood graph with a bridge graph. We propose to exploit Cartesian concatenation to produce a large set of vectors, called bridge vectors, from several small sets of subvectors. Each bridge vector is connected with a few reference vectors near to it, forming a bridge graph. Our approach finds nearest neighbors by simultaneously traversing the neighborhood graph and the bridge graph in the best-first strategy. The success of our approach stems from two factors: the exact nearest neighbor search over a large number of bridge vectors can be done quickly, and the reference vectors connected to a bridge (reference) vector near the query are also likely to be near the query. Experimental results on searching over large scale datasets (SIFT, GIST and HOG) show that our approach outperforms state-of-the-art ANN search algorithms in terms of efficiency and accuracy. The combination of our approach with the IVFADC system [18] also shows superior performance over the BIGANN dataset of 1 billion SIFT features compared with the best previously published result.

3 0.84688127 140 iccv-2013-Elastic Net Constraints for Shape Matching

Author: Emanuele Rodolà, Andrea Torsello, Tatsuya Harada, Yasuo Kuniyoshi, Daniel Cremers

Abstract: We consider a parametrized relaxation of the widely adopted quadratic assignment problem (QAP) formulation for minimum distortion correspondence between deformable shapes. In order to control the accuracy/sparsity trade-off we introduce a weighting parameter on the combination of two existing relaxations, namely spectral and game-theoretic. This leads to the introduction of the elastic net penalty function into shape matching problems. In combination with an efficient algorithm to project onto the elastic net ball, we obtain an approach for deformable shape matching with controllable sparsity. Experiments on a standard benchmark confirm the effectiveness of the approach.

4 0.83552265 341 iccv-2013-Real-Time Body Tracking with One Depth Camera and Inertial Sensors

Author: Thomas Helten, Meinard Müller, Hans-Peter Seidel, Christian Theobalt

Abstract: In recent years, the availability of inexpensive depth cameras, such as the Microsoft Kinect, has boosted the research in monocular full body skeletal pose tracking. Unfortunately, existing trackers often fail to capture poses where a single camera provides insufficient data, such as non-frontal poses, and all other poses with body part occlusions. In this paper, we present a novel sensor fusion approach for real-time full body tracking that succeeds in such difficult situations. It takes inspiration from previous tracking solutions, and combines a generative tracker and a discriminative tracker retrieving closest poses in a database. In contrast to previous work, both trackers employ data from a low number of inexpensive body-worn inertial sensors. These sensors provide reliable and complementary information when the monocular depth information alone is not sufficient. We also contribute by new algorithmic solutions to best fuse depth and inertial data in both trackers. One is a new visibility model to determine global body pose, occlusions and usable depth correspondences and to decide what data modality to use for discriminative tracking. We also contribute with a new inertial-based pose retrieval, and an adapted late fusion step to calculate the final body pose.

5 0.83214688 384 iccv-2013-Semi-supervised Robust Dictionary Learning via Efficient l-Norms Minimization

Author: Hua Wang, Feiping Nie, Weidong Cai, Heng Huang

Abstract: Representing the raw input of a data set by a set of relevant codes is crucial to many computer vision applications. Due to the intrinsic sparse property of real-world data, dictionary learning, in which the linear decomposition of a data point uses a set of learned dictionary bases, i.e., codes, has demonstrated state-of-the-art performance. However, traditional dictionary learning methods suffer from three weaknesses: sensitivity to noisy and outlier samples, difficulty to determine the optimal dictionary size, and incapability to incorporate supervision information. In this paper, we address these weaknesses by learning a Semi-Supervised Robust Dictionary (SSR-D). Specifically, we use the ℓ2,0+ norm as the loss function to improve the robustness against outliers, and develop a new structured sparse regularization to incorporate the supervision information in dictionary learning, without incurring additional parameters. Moreover, the optimal dictionary size is automatically learned from the input data. Minimizing the derived objective function is challenging because it involves many non-smooth ℓ2,0+ -norm terms. We present an efficient algorithm to solve the problem with a rigorous proof of the convergence of the algorithm. Extensive experiments are presented to show the superior performance of the proposed method.

6 0.82589966 378 iccv-2013-Semantic-Aware Co-indexing for Image Retrieval

7 0.82519644 360 iccv-2013-Robust Subspace Clustering via Half-Quadratic Minimization

8 0.82253766 448 iccv-2013-Weakly Supervised Learning of Image Partitioning Using Decision Trees with Structured Split Criteria

9 0.82012117 322 iccv-2013-Pose Estimation and Segmentation of People in 3D Movies

10 0.81919634 426 iccv-2013-Training Deformable Part Models with Decorrelated Features

11 0.81914318 229 iccv-2013-Large-Scale Video Hashing via Structure Learning

12 0.81788784 404 iccv-2013-Structured Forests for Fast Edge Detection

13 0.81724453 238 iccv-2013-Learning Graphs to Match

14 0.81474918 419 iccv-2013-To Aggregate or Not to aggregate: Selective Match Kernels for Image Search

15 0.8144958 437 iccv-2013-Unsupervised Random Forest Manifold Alignment for Lipreading

16 0.81346303 48 iccv-2013-An Adaptive Descriptor Design for Object Recognition in the Wild

17 0.81203562 255 iccv-2013-Local Signal Equalization for Correspondence Matching

18 0.81186104 71 iccv-2013-Category-Independent Object-Level Saliency Detection

19 0.81161308 147 iccv-2013-Event Recognition in Photo Collections with a Stopwatch HMM

20 0.81016803 4 iccv-2013-ACTIVE: Activity Concept Transitions in Video Event Classification