cvpr cvpr2013 cvpr2013-223 knowledge-graph by maker-knowledge-mining

223 cvpr-2013-Inductive Hashing on Manifolds


Source: pdf

Author: Fumin Shen, Chunhua Shen, Qinfeng Shi, Anton van_den_Hengel, Zhenmin Tang

Abstract: Learning based hashing methods have attracted considerable attention due to their ability to greatly increase the scale at which existing algorithms may operate. Most of these methods are designed to generate binary codes that preserve the Euclidean distance in the original space. Manifold learning techniques, in contrast, are better able to model the intrinsic structure embedded in the original high-dimensional data. The complexity of these models, and the problems with out-of-sample data, have previously rendered them unsuitable for application to large-scale embedding, however. In this work, we consider how to learn compact binary embeddings on their intrinsic manifolds. In order to address the above-mentioned difficulties, we describe an efficient, inductive solution to the out-of-sample data problem, and a process by which non-parametric manifold learning may be used as the basis of a hashing method. Our proposed approach thus allows the development of a range of new hashing techniques exploiting the flexibility of the wide variety of manifold learning approaches available. We particularly show that hashing on the basis of t-SNE [29] outperforms state-of-the-art hashing methods on large-scale benchmark datasets, and is very effective for image classification with very short code lengths.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Most of these methods are designed to generate binary codes that preserve the Euclidean distance in the original space. [sent-2, score-0.198]

2 In this work, we consider how to learn compact binary embeddings on their intrinsic manifolds. [sent-5, score-0.147]

3 In order to address the above-mentioned difficulties, we describe an efficient, inductive solution to the out-of-sample data problem, and a process by which non-parametric manifold learning may be used as the basis of a hashing method. [sent-6, score-0.826]

4 Our proposed approach thus allows the development of a range of new hashing techniques exploiting the flexibility of the wide variety of manifold learning approaches available. [sent-7, score-0.679]

5 We particularly show that hashing on the basis of t-SNE [29] outperforms state-of-the-art hashing methods on large-scale benchmark datasets, and is very effective for image classification with very short code lengths. [sent-8, score-1.042]

6 Various hashing techniques have attracted considerable attention in computer vision, information retrieval and machine learning [8, 9, 19, 31, 33], and seem to offer great promise towards this goal. [sent-11, score-0.511]

7 Locality sensitive hashing (LSH) [8] is one of the most well-known data-independent hashing methods, and generates hash codes based on random projections. [sent-17, score-1.457]

8 With the success of LSH, random hash functions have been extended to several similarity measures, including p-norm distances [6], the Mahalanobis metric [17], and kernel similarity [16, 24]. [sent-18, score-0.425]

9 However, the methods belonging to the LSH family normally require relatively long hash codes and several hash tables to achieve both high precision and recall. [sent-19, score-0.898]

10 Data-dependent or learning-based hashing methods have been developed with the goal of learning more compact hash codes. [sent-21, score-0.878]

11 Directly learning binary embeddings typically results in an optimization problem which is very difficult to solve, however. [sent-22, score-0.178]

12 For example, PCAH [31], SSH [31], and ITQ [9] generate linear hash functions through simple PCA projections, while LDAhash [3] is based on LDA. [sent-27, score-0.367]

13 Extending this idea, there are also methods which learn hash functions in a kernel space, such as binary reconstructive embeddings (BRE) [15], random maximum margin hashing (RMMH) [14] and kernel-based supervised hashing (KSH) [20]. [sent-28, score-1.452]

14 In a departure from such methods, however, spectral hashing (SH) [33], one of the most popular learning-based methods, generates hash codes by solving a relaxed mathematical program similar to the one in Laplacian eigenmaps [1]. [sent-29, score-1.081]

15 Embedding the original data into a low dimensional space while simultaneously preserving the inherent neighborhood structure is critical for learning compact, effective hash codes. [sent-30, score-0.504]

16 In general, nonlinear manifold learning methods are more powerful than linear dimensionality reduction techniques, as they are able to more effectively preserve the local structure of the input data without assuming global linearity [26]. [sent-31, score-0.331]

17 The geodesic distance on a manifold has been shown to outperform the Euclidean distance in the high-dimensional space for image retrieval [10]. [sent-32, score-0.168]

18 Search is conducted in the original feature space (b, c) and the embedding space produced by t-SNE [29] (d, e), using Euclidean distance (b, d) and Hamming distance (c, e). [sent-38, score-0.285]

19 However, the only widely used nonlinear embedding method for hashing is Laplacian eigenmaps (LE).

20 Other nonlinear methods (e.g., LLE [25], elastic embedding [4] or t-SNE [29]) have rarely been explored for hashing. [sent-46, score-0.209]

21 One problem hindering the use of manifold learning for hashing is that these methods do not directly scale to large datasets. [sent-47, score-0.679]

22 This fundamentally limits their application to hashing, as generating codes for new samples is an essential part of the problem. [sent-50, score-0.13]

23 As mentioned in [33], however, this is impractical for large-scale hashing since the Nyström extension is as expensive as doing exhaustive nearest neighbor search (O(n)). [sent-54, score-0.515]

24 A more significant problem, however, is the fact that the Nyström extension cannot be directly applied to non-spectral manifold learning methods such as t-SNE. [sent-55, score-0.234]

25 This method allows rapid assignment of new codes to previously unseen data in a manner which preserves the underlying structure of the manifold. [sent-57, score-0.157]

26 Having solved the out-of-sample extension problem, we develop a method by which a learned manifold may be used as the basis for a binary encoding. [sent-58, score-0.272]

27 On this basis we develop a range of new embedding approaches based on a variety of manifold learning methods. [sent-60, score-0.4]

28 This process leads to an embedding method we label Inductive Manifold-Hashing (IMH) which we show to outperform state-of-the-art methods on several large scale datasets both quantitatively and qualitatively. [sent-63, score-0.177]

29 [33] formulated the spectral hashing (SH) problem as: min_Y Σ_ij W_ij ‖y_i − y_j‖², s.t. Y ∈ {−1, 1}^{n×r}, 1^⊤Y = 0, Y^⊤Y = nI (1). [sent-65, score-0.53]

30 Here yi ∈ {−1, 1}^r, the ith row in Y, is the hash code we want to learn for xi ∈ R^d, which is one of the n data points in the training data set X. [sent-73, score-0.505]

31 The last two constraints force the learned hash bits to be uncorrelated and balanced, respectively. [sent-81, score-0.367]

32 However, this strong assumption does not hold in practice, and the manifold structure of the original data is thus destroyed [21]. [sent-87, score-0.168]

33 Then the desired hash functions may be efficiently identified by binarizing the Nyström eigenfunctions [2] with the approximated affinity matrix Ŵ. [sent-89, score-0.472]

34 AGH is thus efficient, in that it has linear training time and constant search time, but as is the case for SH [33], the generalized eigenfunction is derived only for the Laplacian eigenmaps embedding. [sent-90, score-0.14]

35 Self-Taught Hashing Self-taught hashing (STH) [35] addressed the out-of-sample problem in a novel way: hash functions are obtained by training an SVM classifier for each bit, using the pre-learned binary codes as class labels. [sent-91, score-1.071]

36 The binary codes were learned by directly solving (1) with a cosine similarity function. [sent-92, score-0.204]

37 Inductive learning for hashing Assuming that we have the manifold-based embedding Y := {y1, y2, · · · , yn} for the entire training data X := {x1, x2, · · · , xn}. [sent-97, score-0.713]

38 Given a new data point xq, we aim to generate an embedding yq which preserves the local neighborhood relationships among its neighbors Nk(xq) in X. [sent-98, score-0.408]

39 Minimizing (2) naturally uncovers an embedding for the new point on the basis of its nearest neighbors on the low-dimensional manifold initially learned on the base set. [sent-109, score-0.493]

40 A simple inductive formulation for the embedding: produce the embedding for a new data point by a (sparse) linear combination of the base embeddings. [sent-124, score-0.424]

41 Our aim here is completely different: We try to scale up the manifold learning process for hashing in an unsupervised manner. [sent-127, score-0.679]

42 The resulting solution (4) is consistent with the basic smoothness assumption in manifold learning, that close-by data points lie on or close to a locally linear manifold [1, 25, 27]. [sent-128, score-0.398]
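The paper's equations (2) and (4) are not reproduced in this summary. As a reading aid, the following LaTeX block is a hedged reconstruction inferred from the surrounding sentences: a weighted least-squares objective over the k nearest base neighbors of the new point xq, and its closed-form weighted-average minimizer. The exact weighting function w(xq, xi) used in the paper may differ.

```latex
% Hedged reconstruction of the likely form of (2) and its minimiser (4);
% w(x_q, x_i) denotes a similarity weight (e.g. a Gaussian kernel) and
% N_k(x_q) the k nearest neighbours of the new point x_q among the base points.
\begin{align}
  C(\mathbf{y}_q) &= \sum_{\mathbf{x}_i \in N_k(\mathbf{x}_q)}
      w(\mathbf{x}_q, \mathbf{x}_i)\,\lVert \mathbf{y}_q - \mathbf{y}_i \rVert^2
      \qquad \text{(cf. Eq.~(2))} \\
  \mathbf{y}_q &= \frac{\sum_{\mathbf{x}_i \in N_k(\mathbf{x}_q)}
      w(\mathbf{x}_q, \mathbf{x}_i)\,\mathbf{y}_i}
      {\sum_{\mathbf{x}_i \in N_k(\mathbf{x}_q)} w(\mathbf{x}_q, \mathbf{x}_i)}
      \qquad \text{(cf. Eq.~(4))}
\end{align}
```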

43 In this paper, we propose to apply this assumption to hash function learning. [sent-130, score-0.367]

44 Next, we show that the following prototype algorithm is able to approximate yq well using only a small base set. [sent-134, score-0.352]

45 As is common (e.g., [21, 33]), we obtain our general inductive hash function by binarizing the low-dimensional embedding: h(x) = sgn(w̄_xB YB) (7). [sent-223, score-0.693]

46 Here sgn(·) is the sign function and YB := {y1, y2, · · · , ym} is the embedding for the base set B := {c1, c2, · · · , cm}, which consists of the cluster centers obtained by K-means.

47 Here, we assume that the embeddings yi are centered. [sent-228, score-0.133]

48 The inductive hash function provides a natural means for generalization to new data, and takes constant O(dm + rk) time. [sent-231, score-0.49]

49 With this, the embedding for the training data becomes Y = W̄_XB YB (8). [sent-232, score-0.202]
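Equations (7) and (8) lend themselves to a very small implementation. The sketch below is our own illustration, not the authors' code: it assumes a Gaussian kernel for the weights w(x, c), k-nearest-neighbor sparsification of the weights, and centered base embeddings; the function name `inductive_hash` and the parameter `sigma` are hypothetical.

```python
import numpy as np

def inductive_hash(X, base_centers, Y_B, k=5, sigma=1.0):
    """Minimal sketch of the inductive hash function h(x) = sgn(w_xB Y_B).

    X            : (n, d) query/training points
    base_centers : (m, d) base set B (e.g. K-means centers)
    Y_B          : (m, r) low-dimensional embeddings of the base set (centered)
    Returns an (n, r) array of binary codes in {-1, +1}.

    The Gaussian kernel width `sigma` and the helper names are illustrative
    choices, not taken from the paper.
    """
    # Squared Euclidean distances from every point to every base center: (n, m).
    d2 = ((X[:, None, :] - base_centers[None, :, :]) ** 2).sum(-1)

    # Gaussian similarity weights, kept only for the k nearest base centers.
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    keep = np.argsort(d2, axis=1)[:, :k]
    mask = np.zeros_like(W, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=1)
    W *= mask

    # Row-normalize the weights, embed inductively, then binarize by sign.
    W_bar = W / W.sum(axis=1, keepdims=True)
    Y = W_bar @ Y_B                         # cf. Eq. (8)
    return np.where(Y >= 0, 1, -1)          # cf. Eq. (7)
```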

50 The embeddings YB can be learned by any appropriate manifold learning method which preserves the similarity of interest in the low dimensional space. [sent-235, score-0.397]

51 We empirically evaluate several other embedding methods in Section 2. [sent-236, score-0.177]

52 Actually, as we show, some manifold learning methods (e.g., t-SNE) are better suited to this task than LE.

53 Considering that m (normally a few hundred) is much less than n, and is a function of manifold complexity rather than the volume of data, the total training time is linear in the size of the training set. [sent-245, score-0.218]

54 If the embedding method is LE, for example, then using IMH to compute YB requires constructing the small affinity matrix WB and solving for r eigenvectors of the m × m Laplacian matrix LB, which is O(dm² + rm). [sent-246, score-0.216]

55 Input: training data X := {x1, x2, . . . , xn}, code length r, base set size m, neighborhood size k; Output: binary codes Y := {y1, y2, . . . , yn}. [sent-252, score-0.351]

56 2) Embed B into the low-dimensional space by (9), (12) or any other appropriate manifold learning method. [sent-258, score-0.208]

57 3) Obtain the low dimensional embedding Y for the whole dataset inductively by Equation (8). [sent-259, score-0.25]
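The three steps above map directly onto off-the-shelf tools. The sketch below is a hedged illustration of that pipeline, not the authors' implementation: scikit-learn's KMeans stands in for the base-set generation, its exact-method TSNE stands in for the base embedding of step 2, a dense Gaussian weighting replaces the paper's sparse k-NN weights for brevity, and all parameter values are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

def imh_train(X, r=32, m=400, sigma=1.0, seed=0):
    """Hedged sketch of the Inductive Manifold-Hashing training pipeline.

    1) Generate the base set B as m K-means cluster centers.
    2) Embed the (small) base set into r dimensions with a manifold learning
       method (t-SNE here; LE, LLE or elastic embedding could be substituted).
    3) Embed the whole dataset inductively and binarize by sign,
       in the spirit of Eqs. (8) and (7).
    """
    # Step 1: base set from K-means cluster centers.
    B = KMeans(n_clusters=m, n_init=10, random_state=seed).fit(X).cluster_centers_

    # Step 2: low-dimensional embedding of the base set only.
    Y_B = TSNE(n_components=r, init="random", method="exact",
               random_state=seed).fit_transform(B)
    Y_B -= Y_B.mean(axis=0)          # center the base embeddings

    # Step 3: inductive embedding of all n points, then sign binarization.
    d2 = ((X[:, None, :] - B[None, :, :]) ** 2).sum(-1)       # (n, m) distances
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    W_bar = W / W.sum(axis=1, keepdims=True)
    codes = np.where(W_bar @ Y_B >= 0, 1, -1)
    return codes, B, Y_B
```

Because only the m base points are embedded non-parametrically, the expensive manifold learning step stays small while every remaining point (and any future query) is encoded by the cheap weighted combination in step 3.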

58 In AGH [21], the formulated hash function was proved to be the corresponding Nyström eigenfunction with the approximate low-rank affinity matrix K̃. [sent-266, score-0.467]

59 Note, however, that our method differs in that it is not restricted to spectral methods such as LE, and that we aim to learn binary hash functions for similarity-based search rather than dimensionality reduction. [sent-269, score-0.509]

60 LELVM [5] cannot be applied to embedding methods other than LE. [sent-270, score-0.177]

61 Stochastic neighborhood preserving hashing In order to demonstrate our approach we now derive a hashing method based on t-SNE [29], which is a nonspectral embedding method. [sent-273, score-1.203]

62 t-SNE is a modification of stochastic neighborhood embedding (SNE) [13] which aims to overcome the tendency of that method to crowd points together in one location. [sent-274, score-0.24]

63 The objective (9) measures the mismatch between two distributions: here pij is the symmetrized conditional probability in the high-dimensional space, and qij is the joint probability defined using the t-distribution in the low-dimensional embedding space. [sent-285, score-0.257]
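For readers unfamiliar with t-SNE, the standard definitions of these quantities from SNE/t-SNE [13, 29] are reproduced below for convenience (these are the textbook formulas, not copied from the paper itself):

```latex
% Standard SNE/t-SNE quantities: Gaussian conditional similarities in the input
% space, their symmetrized joint form p_ij, the Student-t joint q_ij in the
% low-dimensional embedding, and the KL-divergence objective.
\begin{align}
  p_{j\mid i} &= \frac{\exp\!\left(-\lVert \mathbf{x}_i-\mathbf{x}_j\rVert^2 / 2\sigma_i^2\right)}
                     {\sum_{k\neq i}\exp\!\left(-\lVert \mathbf{x}_i-\mathbf{x}_k\rVert^2 / 2\sigma_i^2\right)},
  \qquad
  p_{ij} = \frac{p_{j\mid i}+p_{i\mid j}}{2n}, \\
  q_{ij} &= \frac{\left(1+\lVert \mathbf{y}_i-\mathbf{y}_j\rVert^2\right)^{-1}}
                 {\sum_{k\neq l}\left(1+\lVert \mathbf{y}_k-\mathbf{y}_l\rVert^2\right)^{-1}},
  \qquad
  C = \mathrm{KL}(P\,\Vert\,Q) = \sum_{i\neq j} p_{ij}\log\frac{p_{ij}}{q_{ij}}.
\end{align}
```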

64 After we get the embeddings YB of the samples xi ∈ B, the hash codes for the entire dataset can be easily computed using (7). [sent-287, score-0.599]

65 CBB enforces smoothness of the learned embeddings within B while CBX ensures the smoothness between B and X. [sent-302, score-0.178]

66 Figure 2: Comparison among different manifold learning methods within our IMH hashing framework on CIFAR-10. [sent-320, score-0.679]

67 Manifold learning methods for hashing In this section, we compare different manifold learning methods for hashing within our IMH framework. [sent-327, score-1.19]

68 Figure 2 shows that LE (IMH-LEB in the figure), the most widely used embedding method in hashing, does not perform as well as a variety of other methods (including t-SNE), and in fact performs worse than PCA, which is a linear technique. [sent-335, score-0.177]

69 Based on the above observations, we argue that manifold learning methods (e.g., t-SNE) that better preserve local neighborhood structure are more suitable for hashing. [sent-338, score-0.199]

70 Figure 3: MAP results versus varying base set size m (left, fixing k = 5) and number of nearest base points k (right, fixing m = 400) for the proposed methods and AGH. [sent-354, score-0.272]

71 The comparison is performed on the CIFAR-10 dataset with code lengths from 32 to 96 and base set size 400. [sent-379, score-0.223]

72 We compare nine hashing algorithms, including the proposed IMH-tSNE and IMH-LE and seven other unsupervised state-of-the-art methods: PCAH [31], SH [33], AGH [21], STH [35], BRE [15], ITQ [9], and Spherical Hashing (SpH) [11]. [sent-383, score-0.48]

73 We measure performance by mean average precision (MAP) or precision and recall curves for Hamming ranking using 16 to 128 hash bits. [sent-387, score-0.611]

74 We also report the results for hash lookup within a Hamming radius of 2 by F1 score [22]: F1 = 2(precision · recall)/(precision + recall). [sent-388, score-0.446]

75 Base selection In this section, we take the CIFAR-10 dataset as an example to compare different base generation methods and different base sizes for the proposed methods. [sent-390, score-0.248]

76 Table 1 compares two methods for generating base point sets: random sampling and K-means on the training data. [sent-392, score-0.149]

77 From Figure 3, we see that the performance of the proposed methods and AGH does not change significantly with either the base set size m or the number of nearest base points k. [sent-401, score-0.272]

78 Also it is clear that IMH-LEB, which only enforces smoothness in the base set, does not perform as well as IMH-LE, which also enforces smoothness between the base set and training set. [sent-403, score-0.349]

79 Results on CIFAR-10 dataset We report the comparative results based on MAP for Hamming ranking with code lengths from 16 to 128 bits in Figure 4. [sent-405, score-0.257]

80 SH and PCAH perform worst in this case, because SH relies upon its uniform data assumption while PCAH simply generates the hash hyperplanes by PCA directions, which does not explicitly capture the similarity information. [sent-410, score-0.43]

81 We also report the F1 results for hash lookup with Hamming radius 2. It can be seen that IMH-LE and IMH-tSNE also outperform all other methods by large margins. [sent-412, score-0.446]

82 Figure 5 shows the precision and recall curves of Hamming ranking for the compared methods. [sent-414, score-0.21]

83 Figure 6: The query image (a) and the query results returned by various methods (IMH-tSNE, SH, AGH, STH) with 32 hash bits. [sent-421, score-0.427]

84 This further demonstrates the advantage of t-SNE as a tool for hashing by embedding high dimensional data into a low dimensional space. [sent-432, score-0.737]

85 The dimensionality reduction procedure not only preserves the local neighborhood structure, but also reveals important global structure (such as clusters) [29]. [sent-433, score-0.143]

86 ITQ and BRE obtain high MAPs with longer bit lengths, but they still perform less well for the hash lookup F1. [sent-435, score-0.391]

87 Figure 8: Comparative results on SIFT1M for F1 (left) and recall (right) with Hamming radius 2. [sent-498, score-0.189]

88 In terms of test time, both IMH algorithms are comparable to other methods, except STH which takes much more time to predict the binary codes by SVM on this non-sparse dataset. [sent-503, score-0.175]

89 Again, IMH consistently achieves superior results in terms of both F1 score and recall with Hamming radius 2. [sent-510, score-0.189]

90 We see that the performance of most of these methods decreases dramatically with increasing code length, as the Hamming space becomes more sparse, which makes hash lookup fail more often. [sent-511, score-0.575]

91 Figure 9: Comparative results on GIST1M by F1 (left) and recall (right) with Hamming radius 2. [sent-518, score-0.189]

92 MNIST Figure 10: Classification accuracy (%) on MNIST with binary codes of various hashing methods by linear SVM. [sent-520, score-0.655]

93 Classification on binary codes In order to demonstrate classification performance, we have trained a linear SVM on the binary codes generated by IMH for the MNIST data set. [sent-522, score-0.35]

94 In order to learn codes with higher bit lengths for IMH and AGH, we set the size of the base set to 1,000. [sent-523, score-0.319]

95 1% applying the linear SVM to the uncompressed 784D features, which occupy several hundred times as much space as the learned hash codes. [sent-530, score-0.4]
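A minimal sketch of this evaluation protocol follows, assuming scikit-learn and binary codes in {−1, +1}; the function name `classify_on_codes` and the regularisation value are illustrative choices, not the paper's settings.

```python
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

def classify_on_codes(codes_train, y_train, codes_test, y_test, C=1.0):
    """Train a linear SVM on compact binary hash codes and report test accuracy.

    codes_train / codes_test : (n, r) codes in {-1, +1}, e.g. from a hashing
    method such as the IMH sketch above; y_train / y_test : digit labels.
    """
    clf = LinearSVC(C=C)                          # linear SVM on the codes
    clf.fit(codes_train.astype(float), y_train)
    pred = clf.predict(codes_test.astype(float))
    return accuracy_score(y_test, pred)
```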

96 Conclusion We have proposed a simple yet effective hashing framework which provides a practical connection between manifold learning methods (typically nonparametric and with high computational cost) and hash function learning (requiring high efficiency). [sent-531, score-1.077]

97 By preserving the underlying manifold structure with several nonparametric dimensionality reduction methods, the proposed hashing methods outperform state-of-the-art methods in terms of both hash lookup and Hamming ranking on several large-scale retrieval datasets. [sent-532, score-1.293]

98 The proposed inductive formulation of the hash function means that the proposed methods require only linear time (O(n)) for indexing all of the training data and constant search time for a novel query. [sent-533, score-0.515]

99 The learned hash codes were also shown to have promising results on a classification problem even with very short code lengths. [sent-534, score-0.555]

100 Laplacian eigenmaps and spectral techniques for embedding and clustering. [sent-539, score-0.281]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('hashing', 0.48), ('hash', 0.367), ('imh', 0.337), ('agh', 0.248), ('embedding', 0.177), ('manifold', 0.168), ('yq', 0.165), ('mnist', 0.163), ('yb', 0.148), ('pcah', 0.144), ('codes', 0.13), ('base', 0.124), ('nystr', 0.124), ('inductive', 0.123), ('hamming', 0.108), ('embeddings', 0.102), ('itq', 0.096), ('bre', 0.089), ('xq', 0.078), ('le', 0.074), ('sh', 0.072), ('sth', 0.067), ('prototype', 0.063), ('eigenfunction', 0.061), ('om', 0.061), ('lsh', 0.06), ('code', 0.058), ('cifar', 0.056), ('dbx', 0.056), ('eigenmaps', 0.054), ('spectral', 0.05), ('delalleau', 0.05), ('dimensionality', 0.047), ('binary', 0.045), ('recall', 0.044), ('wb', 0.043), ('lookup', 0.042), ('lengths', 0.041), ('laplacian', 0.04), ('dimensional', 0.04), ('eigenfunctions', 0.04), ('sph', 0.04), ('affinity', 0.039), ('neighborhood', 0.039), ('cj', 0.038), ('smoothness', 0.038), ('cbb', 0.037), ('cbx', 0.037), ('cxx', 0.037), ('lelvm', 0.037), ('xwq', 0.037), ('radius', 0.037), ('extension', 0.035), ('ij', 0.034), ('precision', 0.034), ('hyperplanes', 0.034), ('inductively', 0.033), ('xwi', 0.033), ('uncompressed', 0.033), ('neural', 0.033), ('nonlinear', 0.032), ('lle', 0.032), ('kulis', 0.032), ('elastic', 0.032), ('shen', 0.032), ('percent', 0.031), ('million', 0.031), ('yi', 0.031), ('euclidean', 0.031), ('learning', 0.031), ('yj', 0.031), ('query', 0.03), ('jm', 0.03), ('reduction', 0.03), ('similarity', 0.029), ('ldahash', 0.029), ('bengio', 0.028), ('cluster', 0.028), ('hengel', 0.028), ('preserving', 0.027), ('preserves', 0.027), ('pca', 0.027), ('encodings', 0.026), ('indyk', 0.026), ('db', 0.026), ('manifolds', 0.026), ('comparative', 0.026), ('gist', 0.026), ('binarizing', 0.026), ('training', 0.025), ('diag', 0.025), ('intractable', 0.025), ('bit', 0.024), ('ranking', 0.024), ('points', 0.024), ('basis', 0.024), ('reconstructive', 0.023), ('preserve', 0.023), ('desktop', 0.022), ('digits', 0.022)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.000001 223 cvpr-2013-Inductive Hashing on Manifolds

Author: Fumin Shen, Chunhua Shen, Qinfeng Shi, Anton van_den_Hengel, Zhenmin Tang

Abstract: Learning based hashing methods have attracted considerable attention due to their ability to greatly increase the scale at which existing algorithms may operate. Most of these methods are designed to generate binary codes that preserve the Euclidean distance in the original space. Manifold learning techniques, in contrast, are better able to model the intrinsic structure embedded in the original high-dimensional data. The complexity of these models, and the problems with out-of-sample data, have previously rendered them unsuitable for application to large-scale embedding, however. In this work, we consider how to learn compact binary embeddings on their intrinsic manifolds. In order to address the above-mentioned difficulties, we describe an efficient, inductive solution to the out-of-sample data problem, and a process by which non-parametric manifold learning may be used as the basis of a hashing method. Our proposed approach thus allows the development of a range of new hashing techniques exploiting the flexibility of the wide variety of manifold learning approaches available. We particularly show that hashing on the basis of t-SNE [29] outperforms state-of-the-art hashing methods on large-scale benchmark datasets, and is very effective for image classification with very short code lengths.

2 0.4939959 87 cvpr-2013-Compressed Hashing

Author: Yue Lin, Rong Jin, Deng Cai, Shuicheng Yan, Xuelong Li

Abstract: Recent studies have shown that hashing methods are effective for high dimensional nearest neighbor search. A common problem shared by many existing hashing methods is that in order to achieve a satisfied performance, a large number of hash tables (i.e., long codewords) are required. To address this challenge, in this paper we propose a novel approach called Compressed Hashing by exploring the techniques of sparse coding and compressed sensing. In particular, we introduce a sparse coding scheme, based on the approximation theory of integral operator, that generate sparse representation for high dimensional vectors. We then project sparse codes into a low dimensional space by effectively exploring the Restricted Isometry Property (RIP), a key property in compressed sensing theory. Both of the theoretical analysis and the empirical studies on two large data sets show that the proposed approach is more effective than the state-of-the-art hashing algorithms.

3 0.48925164 63 cvpr-2013-Binary Code Ranking with Weighted Hamming Distance

Author: Lei Zhang, Yongdong Zhang, Jinhu Tang, Ke Lu, Qi Tian

Abstract: Binary hashing has been widely used for efficient similarity search due to its query and storage efficiency. In most existing binary hashing methods, the high-dimensional data are embedded into Hamming space and the distance or similarity of two points are approximated by the Hamming distance between their binary codes. The Hamming distance calculation is efficient, however, in practice, there are often lots of results sharing the same Hamming distance to a query, which makes this distance measure ambiguous and poses a critical issue for similarity search where ranking is important. In this paper, we propose a weighted Hamming distance ranking algorithm (WhRank) to rank the binary codes of hashing methods. By assigning different bit-level weights to different hash bits, the returned binary codes are ranked at a finer-grained binary code level. We give an algorithm to learn the data-adaptive and query-sensitive weight for each hash bit. Evaluations on two large-scale image data sets demonstrate the efficacy of our weighted Hamming distance for binary code ranking.

4 0.44080302 236 cvpr-2013-K-Means Hashing: An Affinity-Preserving Quantization Method for Learning Binary Compact Codes

Author: Kaiming He, Fang Wen, Jian Sun

Abstract: In computer vision there has been increasing interest in learning hashing codes whose Hamming distance approximates the data similarity. The hashing functions play roles in both quantizing the vector space and generating similarity-preserving codes. Most existing hashing methods use hyper-planes (or kernelized hyper-planes) to quantize and encode. In this paper, we present a hashing method adopting the k-means quantization. We propose a novel Affinity-Preserving K-means algorithm which simultaneously performs k-means clustering and learns the binary indices of the quantized cells. The distance between the cells is approximated by the Hamming distance of the cell indices. We further generalize our algorithm to a product space for learning longer codes. Experiments show our method, named as K-means Hashing (KMH), outperforms various state-of-the-art hashing encoding methods.

5 0.34617952 163 cvpr-2013-Fast, Accurate Detection of 100,000 Object Classes on a Single Machine

Author: Thomas Dean, Mark A. Ruzon, Mark Segal, Jonathon Shlens, Sudheendra Vijayanarasimhan, Jay Yagnik

Abstract: Many object detection systems are constrained by the time required to convolve a target image with a bank of filters that code for different aspects of an object’s appearance, such as the presence of component parts. We exploit locality-sensitive hashing to replace the dot-product kernel operator in the convolution with a fixed number of hash-table probes that effectively sample all of the filter responses in time independent of the size of the filter bank. To show the effectiveness of the technique, we apply it to evaluate 100,000 deformable-part models requiring over a million (part) filters on multiple scales of a target image in less than 20 seconds using a single multi-core processor with 20GB of RAM. This represents a speed-up of approximately 20,000 times— four orders of magnitude— when compared withperforming the convolutions explicitly on the same hardware. While mean average precision over the full set of 100,000 object classes is around 0.16 due in large part to the challenges in gathering training data and collecting ground truth for so many classes, we achieve a mAP of at least 0.20 on a third of the classes and 0.30 or better on about 20% of the classes.

6 0.28725407 249 cvpr-2013-Learning Compact Binary Codes for Visual Tracking

7 0.15377355 319 cvpr-2013-Optimized Product Quantization for Approximate Nearest Neighbor Search

8 0.15295656 246 cvpr-2013-Learning Binary Codes for High-Dimensional Data Using Bilinear Projections

9 0.13893415 69 cvpr-2013-Boosting Binary Keypoint Descriptors

10 0.13533524 79 cvpr-2013-Cartesian K-Means

11 0.11696918 306 cvpr-2013-Non-rigid Structure from Motion with Diffusion Maps Prior

12 0.11096758 237 cvpr-2013-Kernel Learning for Extrinsic Classification of Manifold Features

13 0.10956959 259 cvpr-2013-Learning a Manifold as an Atlas

14 0.10111265 241 cvpr-2013-Label-Embedding for Attribute-Based Classification

15 0.10018197 405 cvpr-2013-Sparse Subspace Denoising for Image Manifolds

16 0.093467608 191 cvpr-2013-Graph-Laplacian PCA: Closed-Form Solution and Robustness

17 0.086382322 178 cvpr-2013-From Local Similarity to Global Coding: An Application to Image Classification

18 0.084928222 433 cvpr-2013-Top-Down Segmentation of Non-rigid Visual Objects Using Derivative-Based Search on Sparse Manifolds

19 0.079010613 91 cvpr-2013-Consensus of k-NNs for Robust Neighborhood Selection on Graph-Based Manifolds

20 0.077426538 276 cvpr-2013-MKPLS: Manifold Kernel Partial Least Squares for Lipreading and Speaker Identification


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.164), (1, -0.046), (2, -0.053), (3, 0.078), (4, 0.118), (5, 0.015), (6, -0.092), (7, -0.296), (8, -0.32), (9, -0.108), (10, -0.305), (11, -0.005), (12, -0.003), (13, 0.215), (14, 0.014), (15, 0.177), (16, -0.013), (17, 0.055), (18, -0.15), (19, 0.117), (20, -0.206), (21, 0.061), (22, -0.034), (23, 0.025), (24, 0.005), (25, -0.102), (26, 0.036), (27, -0.001), (28, -0.022), (29, 0.015), (30, -0.091), (31, -0.101), (32, -0.008), (33, 0.017), (34, 0.112), (35, 0.011), (36, 0.001), (37, 0.049), (38, -0.004), (39, -0.037), (40, -0.013), (41, -0.063), (42, -0.062), (43, -0.031), (44, -0.035), (45, -0.012), (46, 0.039), (47, 0.064), (48, 0.015), (49, -0.011)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.92375702 63 cvpr-2013-Binary Code Ranking with Weighted Hamming Distance

Author: Lei Zhang, Yongdong Zhang, Jinhu Tang, Ke Lu, Qi Tian

Abstract: Binary hashing has been widely used for efficient similarity search due to its query and storage efficiency. In most existing binary hashing methods, the high-dimensional data are embedded into Hamming space and the distance or similarity of two points are approximated by the Hamming distance between their binary codes. The Hamming distance calculation is efficient, however, in practice, there are often lots of results sharing the same Hamming distance to a query, which makes this distance measure ambiguous and poses a critical issue for similarity search where ranking is important. In this paper, we propose a weighted Hamming distance ranking algorithm (WhRank) to rank the binary codes of hashing methods. By assigning different bit-level weights to different hash bits, the returned binary codes are ranked at a finer-grained binary code level. We give an algorithm to learn the data-adaptive and query-sensitive weight for each hash bit. Evaluations on two large-scale image data sets demonstrate the efficacy of our weighted Hamming distance for binary code ranking.

same-paper 2 0.92221165 223 cvpr-2013-Inductive Hashing on Manifolds

Author: Fumin Shen, Chunhua Shen, Qinfeng Shi, Anton van_den_Hengel, Zhenmin Tang

Abstract: Learning based hashing methods have attracted considerable attention due to their ability to greatly increase the scale at which existing algorithms may operate. Most of these methods are designed to generate binary codes that preserve the Euclidean distance in the original space. Manifold learning techniques, in contrast, are better able to model the intrinsic structure embedded in the original highdimensional data. The complexity of these models, and the problems with out-of-sample data, havepreviously rendered them unsuitable for application to large-scale embedding, however. In this work, we consider how to learn compact binary embeddings on their intrinsic manifolds. In order to address the above-mentioned difficulties, we describe an efficient, inductive solution to the out-of-sample data problem, and a process by which non-parametric manifold learning may be used as the basis of a hashing method. Our proposed approach thus allows the development of a range of new hashing techniques exploiting the flexibility of the wide variety of manifold learning approaches available. We particularly show that hashing on the basis of t-SNE [29] outperforms state-of-the-art hashing methods on large-scale benchmark datasets, and is very effective for image classification with very short code lengths.

3 0.90035093 87 cvpr-2013-Compressed Hashing

Author: Yue Lin, Rong Jin, Deng Cai, Shuicheng Yan, Xuelong Li

Abstract: Recent studies have shown that hashing methods are effective for high dimensional nearest neighbor search. A common problem shared by many existing hashing methods is that in order to achieve a satisfied performance, a large number of hash tables (i.e., long codewords) are required. To address this challenge, in this paper we propose a novel approach called Compressed Hashing by exploring the techniques of sparse coding and compressed sensing. In particular, we introduce a sparse coding scheme, based on the approximation theory of integral operator, that generate sparse representation for high dimensional vectors. We then project sparse codes into a low dimensional space by effectively exploring the Restricted Isometry Property (RIP), a key property in compressed sensing theory. Both of the theoretical analysis and the empirical studies on two large data sets show that the proposed approach is more effective than the state-of-the-art hashing algorithms.

4 0.84128988 236 cvpr-2013-K-Means Hashing: An Affinity-Preserving Quantization Method for Learning Binary Compact Codes

Author: Kaiming He, Fang Wen, Jian Sun

Abstract: In computer vision there has been increasing interest in learning hashing codes whose Hamming distance approximates the data similarity. The hashing functions play roles in both quantizing the vector space and generating similarity-preserving codes. Most existing hashing methods use hyper-planes (or kernelized hyper-planes) to quantize and encode. In this paper, we present a hashing method adopting the k-means quantization. We propose a novel Affinity-Preserving K-means algorithm which simultaneously performs k-means clustering and learns the binary indices of the quantized cells. The distance between the cells is approximated by the Hamming distance of the cell indices. We further generalize our algorithm to a product space for learning longer codes. Experiments show our method, named as K-means Hashing (KMH), outperforms various state-of-the-art hashing encoding methods.

5 0.51354176 249 cvpr-2013-Learning Compact Binary Codes for Visual Tracking

Author: Xi Li, Chunhua Shen, Anthony Dick, Anton van_den_Hengel

Abstract: A key problem in visual tracking is to represent the appearance of an object in a way that is robust to visual changes. To attain this robustness, increasingly complex models are used to capture appearance variations. However, such models can be difficult to maintain accurately and efficiently. In this paper, we propose a visual tracker in which objects are represented by compact and discriminative binary codes. This representation can be processed very efficiently, and is capable of effectively fusing information from multiple cues. An incremental discriminative learner is then used to construct an appearance model that optimally separates the object from its surrounds. Furthermore, we design a hypergraph propagation method to capture the contextual information on samples, which further improves the tracking accuracy. Experimental results on challenging videos demonstrate the effectiveness and robustness of the proposed tracker.

6 0.50531977 319 cvpr-2013-Optimized Product Quantization for Approximate Nearest Neighbor Search

7 0.50122607 163 cvpr-2013-Fast, Accurate Detection of 100,000 Object Classes on a Single Machine

8 0.49202371 69 cvpr-2013-Boosting Binary Keypoint Descriptors

9 0.47794899 79 cvpr-2013-Cartesian K-Means

10 0.4394471 246 cvpr-2013-Learning Binary Codes for High-Dimensional Data Using Bilinear Projections

11 0.38478923 403 cvpr-2013-Sparse Output Coding for Large-Scale Visual Recognition

12 0.34537923 259 cvpr-2013-Learning a Manifold as an Atlas

13 0.3314563 191 cvpr-2013-Graph-Laplacian PCA: Closed-Form Solution and Robustness

14 0.3183724 276 cvpr-2013-MKPLS: Manifold Kernel Partial Least Squares for Lipreading and Speaker Identification

15 0.28743628 433 cvpr-2013-Top-Down Segmentation of Non-rigid Visual Objects Using Derivative-Based Search on Sparse Manifolds

16 0.27984428 238 cvpr-2013-Kernel Methods on the Riemannian Manifold of Symmetric Positive Definite Matrices

17 0.27356425 237 cvpr-2013-Kernel Learning for Extrinsic Classification of Manifold Features

18 0.26666427 367 cvpr-2013-Rolling Riemannian Manifolds to Solve the Multi-class Classification Problem

19 0.26491717 126 cvpr-2013-Diffusion Processes for Retrieval Revisited

20 0.26289442 343 cvpr-2013-Query Adaptive Similarity for Large Scale Object Retrieval


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.096), (16, 0.029), (26, 0.027), (28, 0.029), (33, 0.255), (36, 0.068), (59, 0.012), (67, 0.083), (69, 0.041), (70, 0.017), (73, 0.188), (87, 0.069)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.85524929 223 cvpr-2013-Inductive Hashing on Manifolds

Author: Fumin Shen, Chunhua Shen, Qinfeng Shi, Anton van_den_Hengel, Zhenmin Tang

Abstract: Learning based hashing methods have attracted considerable attention due to their ability to greatly increase the scale at which existing algorithms may operate. Most of these methods are designed to generate binary codes that preserve the Euclidean distance in the original space. Manifold learning techniques, in contrast, are better able to model the intrinsic structure embedded in the original highdimensional data. The complexity of these models, and the problems with out-of-sample data, havepreviously rendered them unsuitable for application to large-scale embedding, however. In this work, we consider how to learn compact binary embeddings on their intrinsic manifolds. In order to address the above-mentioned difficulties, we describe an efficient, inductive solution to the out-of-sample data problem, and a process by which non-parametric manifold learning may be used as the basis of a hashing method. Our proposed approach thus allows the development of a range of new hashing techniques exploiting the flexibility of the wide variety of manifold learning approaches available. We particularly show that hashing on the basis of t-SNE [29] outperforms state-of-the-art hashing methods on large-scale benchmark datasets, and is very effective for image classification with very short code lengths.

2 0.83391577 362 cvpr-2013-Robust Monocular Epipolar Flow Estimation

Author: Koichiro Yamaguchi, David McAllester, Raquel Urtasun

Abstract: We consider the problem of computing optical flow in monocular video taken from a moving vehicle. In this setting, the vast majority of image flow is due to the vehicle ’s ego-motion. We propose to take advantage of this fact and estimate flow along the epipolar lines of the egomotion. Towards this goal, we derive a slanted-plane MRF model which explicitly reasons about the ordering of planes and their physical validity at junctions. Furthermore, we present a bottom-up grouping algorithm which produces over-segmentations that respect flow boundaries. We demonstrate the effectiveness of our approach in the challenging KITTI flow benchmark [11] achieving half the error of the best competing general flow algorithm and one third of the error of the best epipolar flow algorithm.

3 0.83223045 236 cvpr-2013-K-Means Hashing: An Affinity-Preserving Quantization Method for Learning Binary Compact Codes

Author: Kaiming He, Fang Wen, Jian Sun

Abstract: In computer vision there has been increasing interest in learning hashing codes whose Hamming distance approximates the data similarity. The hashing functions play roles in both quantizing the vector space and generating similarity-preserving codes. Most existing hashing methods use hyper-planes (or kernelized hyper-planes) to quantize and encode. In this paper, we present a hashing method adopting the k-means quantization. We propose a novel Affinity-Preserving K-means algorithm which simultaneously performs k-means clustering and learns the binary indices of the quantized cells. The distance between the cells is approximated by the Hamming distance of the cell indices. We further generalize our algorithm to a product space for learning longer codes. Experiments show our method, named as K-means Hashing (KMH), outperforms various state-of-the-art hashing encoding methods.

4 0.83001441 319 cvpr-2013-Optimized Product Quantization for Approximate Nearest Neighbor Search

Author: Tiezheng Ge, Kaiming He, Qifa Ke, Jian Sun

Abstract: Product quantization is an effective vector quantization approach to compactly encode high-dimensional vectors for fast approximate nearest neighbor (ANN) search. The essence of product quantization is to decompose the original high-dimensional space into the Cartesian product of a finite number of low-dimensional subspaces that are then quantized separately. Optimal space decomposition is important for the performance of ANN search, but still remains unaddressed. In this paper, we optimize product quantization by minimizing quantization distortions w.r.t. the space decomposition and the quantization codebooks. We present two novel methods for optimization: a nonparametric method that alternatively solves two smaller sub-problems, and a parametric method that is guaranteed to achieve the optimal solution if the input data follows some Gaussian distribution. We show by experiments that our optimized approach substantially improves the accuracy of product quantization for ANN search.

5 0.82832861 63 cvpr-2013-Binary Code Ranking with Weighted Hamming Distance

Author: Lei Zhang, Yongdong Zhang, Jinhu Tang, Ke Lu, Qi Tian

Abstract: Binary hashing has been widely used for efficient similarity search due to its query and storage efficiency. In most existing binary hashing methods, the high-dimensional data are embedded into Hamming space and the distance or similarity of two points are approximated by the Hamming distance between their binary codes. The Hamming distance calculation is efficient, however, in practice, there are often lots of results sharing the same Hamming distance to a query, which makes this distance measure ambiguous and poses a critical issue for similarity search where ranking is important. In this paper, we propose a weighted Hamming distance ranking algorithm (WhRank) to rank the binary codes of hashing methods. By assigning different bit-level weights to different hash bits, the returned binary codes are ranked at a finer-grained binary code level. We give an algorithm to learn the data-adaptive and query-sensitive weight for each hash bit. Evaluations on two large-scale image data sets demonstrate the efficacy of our weighted Hamming distance for binary code ranking.

6 0.82706308 268 cvpr-2013-Leveraging Structure from Motion to Learn Discriminative Codebooks for Scalable Landmark Classification

7 0.82410771 79 cvpr-2013-Cartesian K-Means

8 0.82401568 179 cvpr-2013-From N to N+1: Multiclass Transfer Incremental Learning

9 0.82399118 110 cvpr-2013-Dense Object Reconstruction with Semantic Priors

10 0.81945533 284 cvpr-2013-Mesh Based Semantic Modelling for Indoor and Outdoor Scenes

11 0.81940413 4 cvpr-2013-3D Visual Proxemics: Recognizing Human Interactions in 3D from a Single Image

12 0.81912589 267 cvpr-2013-Least Soft-Threshold Squares Tracking

13 0.81879735 119 cvpr-2013-Detecting and Aligning Faces by Image Retrieval

14 0.81831461 322 cvpr-2013-PISA: Pixelwise Image Saliency by Aggregating Complementary Appearance Contrast Measures with Spatial Priors

15 0.8177259 225 cvpr-2013-Integrating Grammar and Segmentation for Human Pose Estimation

16 0.81770188 248 cvpr-2013-Learning Collections of Part Models for Object Recognition

17 0.81733954 94 cvpr-2013-Context-Aware Modeling and Recognition of Activities in Video

18 0.81718427 206 cvpr-2013-Human Pose Estimation Using Body Parts Dependent Joint Regressors

19 0.81681079 339 cvpr-2013-Probabilistic Graphlet Cut: Exploiting Spatial Structure Cue for Weakly Supervised Image Segmentation

20 0.81660932 446 cvpr-2013-Understanding Indoor Scenes Using 3D Geometric Phrases