cvpr cvpr2013 cvpr2013-246 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Yunchao Gong, Sanjiv Kumar, Henry A. Rowley, Svetlana Lazebnik
Abstract: Recent advances in visual recognition indicate that to achieve good retrieval and classification accuracy on large-scale datasets like ImageNet, extremely high-dimensional visual descriptors, e.g., Fisher Vectors, are needed. We present a novel method for converting such descriptors to compact similarity-preserving binary codes that exploits their natural matrix structure to reduce their dimensionality using compact bilinear projections instead of a single large projection matrix. This method achieves comparable retrieval and classification accuracy to the original descriptors and to the state-of-the-art Product Quantization approach while having orders of magnitude faster code generation time and smaller memory footprint.
Reference: text
sentIndex sentText sentNum sentScore
1 Recent advances in visual recognition indicate that to achieve good retrieval and classification accuracy on large-scale datasets like ImageNet, extremely high-dimensional visual descriptors, e.g., Fisher Vectors, are needed. [sent-4, score-0.2]
2 We present a novel method for converting such descriptors to compact similarity-preserving binary codes that exploits their natural matrix structure to reduce their dimensionality using compact bilinear projections instead of a single large projection matrix. [sent-7, score-1.024]
3 This method achieves comparable retrieval and classification accuracy to the original descriptors and to the state-of-the-art Product Quantization approach while having orders of magnitude faster code generation time and smaller memory footprint. [sent-8, score-0.455]
4 To achieve high retrieval and classification accuracy on such datasets, it is necessary to use extremely high-dimensional descriptors such as Fisher Vectors (FV) [23, 24, 28], Vectors of Locally Aggregated Descriptors (VLAD) [14], or Locality Constrained Linear Codes (LLC) [32]. [sent-12, score-0.289]
5 Our goal is to convert such descriptors to binary codes in order to improve the efficiency of retrieval and classification without sacrificing accuracy. [sent-13, score-0.595]
6 There is a lot of recent work on learning similarity-preserving binary codes [6, 9, 10, 11, 15, 18, 19, 20, 21, 27, 30, 31, 33], but most of it is focused on relatively low-dimensional descriptors such as GIST [22], which are not sufficient for state-of-the-art applications. [sent-14, score-0.423]
7 [24, 28] have found that to preserve discriminative power, binary codes for such descriptors must have a very large number of bits. [sent-17, score-0.423]
8 Indeed, Figure 1(a) shows that compressing a 64K-dimensional descriptor to 100-1,000 bits (typical code sizes in most similarity-preserving coding papers) results in an unacceptable loss of retrieval accuracy compared to longer codes. [sent-18, score-0.555]
9 Comparison of our proposed binary coding method with the state-of-the-art ITQ method [6] for retrieval on 50K Flickr images (1K queries) with 64,000-dimensional VLAD descriptors. [sent-22, score-0.262]
10 (b) Storage requirements for projection matrices needed for coding (note the logarithmic scale of the vertical axis). [sent-24, score-0.17]
11 While ITQ has slightly higher accuracy than our method for shorter code sizes, larger code sizes are needed to get the highest absolute accuracy, and ITQ cannot scale to them due to memory limitations. [sent-26, score-0.291]
12 By contrast, our method can be used to efficiently compute very highdimensional codes that achieve comparable accuracy to the original descriptor. [sent-27, score-0.312]
13 Thus, our goal is to convert very high-dimensional real vectors to long binary strings. [sent-29, score-0.153]
14 A common step of many binary coding methods is linear projection of the data (e.g., a random or learned rotation). [sent-31, score-0.202]
15 When the dimensionality of both the input feature vector and the resulting binary string are sufficiently high, the storage and computation requirements for even a random projection matrix become extreme (Figure 1(b)). [sent-34, score-0.428]
16 There are a few works on compressing high-dimensional descriptors such as FV and VLAD. [sent-38, score-0.133]
17 However, descriptors like FV and VLAD require a random rotation to balance their variance prior to PQ [14], and as explained above, rotation becomes quite expensive for high dimensions. [sent-42, score-0.403]
18 In this paper, we propose a bilinear method that takes advantage of the natural two-dimensional structure of descriptors such as FV, VLAD, and LLC and uses two smaller matrices to implicitly represent one big rotation or projection matrix. [sent-43, score-0.549]
19 This method is inspired by bilinear models used for other applications [26, 29, 34]. [sent-44, score-0.291]
20 We begin with a method based on random bilinear projections and then show how to efficiently learn the projections from data. [sent-45, score-0.445]
21 We demonstrate the promise of our method, dubbed bilinear projection-based binary codes (BPBC), through experiments on two large-scale datasets and two descriptors, VLAD and LLC. [sent-46, score-0.625]
22 For most scenarios we consider, BPBC produces little or no degradation in performance compared to the original continuous descriptors; furthermore, it matches the accuracy of PQ codes while having much lower running time and storage requirements for code generation. [sent-47, score-0.479]
23 We will first introduce a randomized method to obtain d-bit bilinear codes in Section 2.1. [sent-57, score-0.522]
24 We then explain how to learn data-dependent codes in Section 2.2. [sent-58, score-0.231]
25 Learning of reduced-dimension codes will be discussed in Section 2.3. [sent-60, score-0.231]
26 We have also tried to randomly reorganize descriptors into matrices, but found this to produce inferior performance. [sent-62, score-0.158]
27 Random Bilinear Binary Codes. To convert a descriptor x ∈ R^d to a d-dimensional binary string, we first consider the framework of [1, 6] that applies a random rotation R ∈ R^{d×d} to x: H(x) = sgn(R^T x). [sent-65, score-0.355]
28 It is easy to show that applying a bilinear rotation to X ∈ R^{d1×d2} is equivalent to applying a d1d2 × d1d2 rotation to vec(X). [sent-69, score-0.428]
29 This rotation is given by R̂ = R2 ⊗ R1, where ⊗ denotes the Kronecker product: vec(R1^T X R2) = (R2^T ⊗ R1^T) vec(X) = R̂^T vec(X), which follows from the properties of the Kronecker product [16]. [sent-70, score-0.172]
30 Thus, a bilinear rotation is simply a special case of a full rotation, such that the full rotation matrix R̂ can be reconstructed from two smaller matrices R1 and R2. [sent-72, score-0.428]
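As a sanity check of the Kronecker-product identity above, here is a small NumPy sketch (our own illustration with toy dimensions, not the authors' code) that builds random orthogonal R1 and R2 and verifies that the bilinear code sgn(vec(R1^T X R2)) matches the code produced by the full rotation R̂ = R2 ⊗ R1.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2 = 6, 4                               # toy sizes; the paper uses e.g. 500 x 128

# Random orthogonal R1 (d1 x d1) and R2 (d2 x d2) via QR of Gaussian matrices.
R1, _ = np.linalg.qr(rng.standard_normal((d1, d1)))
R2, _ = np.linalg.qr(rng.standard_normal((d2, d2)))

X = rng.standard_normal((d1, d2))           # descriptor reorganized as a matrix

# Bilinear rotation followed by the sign function -> a (d1*d2)-bit code.
code_bilinear = np.sign((R1.T @ X @ R2).flatten(order="F"))

# Equivalent full rotation: R_hat = R2 (Kronecker) R1, applied to vec(X).
R_hat = np.kron(R2, R1)                     # (d1*d2) x (d1*d2)
code_full = np.sign(R_hat.T @ X.flatten(order="F"))

assert np.allclose(code_bilinear, code_full)
```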
31 While the degrees of freedom of our bilinear rotation are more restricted than those of a full rotation, the projection matrices are much smaller, and the projection speed is much faster. [sent-73, score-0.608]
32 In terms of time complexity, performing a full rotation on x takes O((d1 d2)^2) time, while our approach takes O(d1^2 d2 + d1 d2^2). [sent-74, score-0.163]
33 In terms of space for projections, a full rotation takes O((d1 d2)^2), while our approach only takes O(d1^2 + d2^2). [sent-75, score-0.163]
34 For a 64K-dimensional vector, a full rotation will take roughly 16GB of RAM, while the bilinear rotations only take 1MB of RAM. [sent-77, score-0.508]
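These storage numbers can be checked with simple arithmetic (assuming 4-byte floats and the 500 × 128 reshaping of the 64K-dimensional descriptor used later in the experiments):

```python
# Back-of-the-envelope check of the storage figures quoted above.
d1, d2 = 500, 128
full_rotation_bytes = (d1 * d2) ** 2 * 4          # one (d1*d2) x (d1*d2) matrix
bilinear_bytes = (d1 ** 2 + d2 ** 2) * 4          # R1 and R2 only
print(full_rotation_bytes / 2 ** 30, "GB")        # ~15.3 GiB, i.e. roughly 16GB
print(bilinear_bytes / 2 ** 20, "MB")             # ~1.0 MB
```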
35 The projection time for a full rotation is more than a second, vs. [sent-78, score-0.224]
36 Following [6], we want to find a rotation such that the angle θi between a rotated feature vector vec(R1^T Xi R2) and its binary encoding sgn(vec(R1^T Xi R2)) (geometrically, the nearest vertex of the binary hypercube) is minimized. [sent-83, score-0.377]
37 Figure 2: (a) Original VLAD descriptor; (b) Original binary code; (c) Bilinearly rotated VLAD; (d) Bilinearly rotated binary code. [sent-90, score-0.512]
38 Visualization of the VLAD descriptor and resulting binary code (given by the sign function) before and after learned bilinear rotation. [sent-91, score-0.596]
39 After the rotation, the descriptor and the binary code look more whitened. [sent-94, score-0.248]
40 However, after reformulation into bilinear form (5), the expression only involves the two small matrices R1 and R2. [sent-108, score-0.323]
41 Figure 2 visualizes the structure of a VLAD descriptor and the corresponding binary code before and after a learned bilinear rotation. [sent-161, score-0.566]
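The learning step can be sketched as an ITQ-style alternating procedure: fix the rotations and take signs to obtain binary codes, then fix the codes and update each small rotation by solving an orthogonal Procrustes problem with an SVD. The snippet below is only a schematic reconstruction under that assumption (the function names, initialization, and exact update order are ours, not the authors' implementation).

```python
import numpy as np

def procrustes_rotation(M):
    """Orthogonal matrix R maximizing trace(R^T M)."""
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt

def learn_bilinear_rotation(Xs, n_iters=20, seed=0):
    """Xs: list of d1 x d2 descriptor matrices (assumed zero-centered)."""
    rng = np.random.default_rng(seed)
    d1, d2 = Xs[0].shape
    # Random orthogonal initialization of the two small rotations.
    R1, _ = np.linalg.qr(rng.standard_normal((d1, d1)))
    R2, _ = np.linalg.qr(rng.standard_normal((d2, d2)))
    for _ in range(n_iters):
        # Code assignment: sign of the bilinearly rotated descriptors.
        Bs = [np.sign(R1.T @ X @ R2) for X in Xs]
        # Procrustes updates for R1 and R2 given the fixed codes.
        R1 = procrustes_rotation(sum(X @ R2 @ B.T for X, B in zip(Xs, Bs)))
        R2 = procrustes_rotation(sum(X.T @ R1 @ B for X, B in zip(Xs, Bs)))
    return R1, R2
```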
42 The learning method of Section 2.2 is used to learn d-dimensional binary codes starting from d-dimensional descriptors. [sent-165, score-0.334]
43 Now, to produce a code of size c = c1 × c2, where c1 < d1 and c2 < d2, we need projection matrices R1 ∈ R^{d1×c1} and R2 ∈ R^{d2×c2} such that R1^T R1 = I and R2^T R2 = I. [sent-166, score-0.154]
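One simple way to obtain column-orthonormal projections of this shape (for example, as a random baseline or an initialization; the paper learns them from data) is a QR decomposition of Gaussian matrices. The sizes below match the 64K-dimensional VLAD setup described later and are otherwise illustrative assumptions.

```python
import numpy as np

def random_orthonormal(d, c, rng):
    # QR of a d x c Gaussian matrix gives Q with Q^T Q = I_c.
    Q, _ = np.linalg.qr(rng.standard_normal((d, c)))
    return Q

rng = np.random.default_rng(0)
d1, d2, c1, c2 = 500, 128, 400, 80
R1 = random_orthonormal(d1, c1, rng)        # R1 in R^{d1 x c1}
R2 = random_orthonormal(d2, c2, rng)        # R2 in R^{d2 x c2}

# X is a d1 x d2 descriptor matrix; the reduced code has c1*c2 bits.
encode = lambda X: np.sign(R1.T @ X @ R2).flatten()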
44 Distance Computation for Binary Codes. At retrieval time, given a query descriptor, we need to compute its distance to every binary code in the database. [sent-193, score-0.449]
45 The most popular measure of distance for binary codes is the Hamming distance. [sent-194, score-0.358]
46 For improved accuracy, we use asymmetric distance, in which the database points are quantized but the query is not [3, 8, 13]. [sent-196, score-0.205]
47 In this work, we adopt the lowerbounded asymmetric distance proposed in [3] to measure the distance between the query and binary codes. [sent-197, score-0.356]
48 For a query x ∈ R^c and database points b ∈ {−1, +1}^c, the lower-bounded asymmetric distance is simply the L2 distance between x and b: da(x, b) = ||x − b||_2. [sent-198, score-0.229]
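In code, the two distances discussed here might look as follows (a minimal sketch assuming codes stored as ±1 vectors and a real-valued rotated query; this is illustrative, not the authors' implementation):

```python
import numpy as np

def hamming_distance(b1, b2):
    # Symmetric distance between two binary codes in {-1,+1}^c.
    return int(np.sum(b1 != b2))

def asymmetric_l2(x_rot, b):
    # Asymmetric distance: L2 between the real-valued rotated query x_rot
    # and a quantized database code b in {-1,+1}^c.
    return float(np.linalg.norm(x_rot - b))
```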
49 Datasets and Features. We test our proposed approach, bilinear projection-based binary codes (BPBC), on two large-scale image datasets and two feature types. [sent-211, score-0.625]
50 Experimental Protocols To learn binary codes using the methods of Sections 2. [sent-230, score-0.334]
51 Ground-truth neighbors are defined as the top ten Euclidean nearest neighbors for each query based on the original descriptors. [sent-238, score-0.141]
52 Our performance measure for this task is the recall of 10NN for different numbers of retrieved points [9, 13]. [sent-239, score-0.125]
53 At query time, PQ uses asymmetric distance to compare an uncompressed query point to quantized database points. [sent-270, score-0.374]
54 Namely, the distances between the code centers and corresponding dimensions of the query are first computed and stored in a lookup table. [sent-271, score-0.262]
55 Then the distances between the query and database points are computed by table lookup and summation. [sent-272, score-0.143]
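A schematic NumPy version of this lookup-table (asymmetric) PQ distance is given below; the array shapes (m sub-quantizers with h centers each) are assumptions made for illustration, not the exact data layout used in the paper.

```python
import numpy as np

def pq_adc_distances(query, codebooks, db_codes):
    """query: (d,); codebooks: (m, h, d//m); db_codes: (n, m) integer indices."""
    m, h, ds = codebooks.shape
    sub_queries = query.reshape(m, ds)
    # Lookup table: squared distance from each sub-query to each of the h centers.
    lut = ((codebooks - sub_queries[:, None, :]) ** 2).sum(axis=2)   # (m, h)
    # Distance to each database point = sum of m table lookups.
    return lut[np.arange(m)[None, :], db_codes].sum(axis=1)          # (n,)
```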
56 Whenever the descriptor dimensionality in our experiments is too high for us to perform the full random rotation, we perform a bilinear rotation instead (BR+PQ). [sent-276, score-0.641]
57 As simpler baselines, we also consider LSH based on random projection [1] and the α = 0 binarization scheme proposed in [24], which simply takes the sign of each dimension. [sent-277, score-0.175]
58 We have also considered other binary coding methods, e.g., [6, 20, 30, 33], but on our data, they produce poor results for small code sizes and do not scale to larger code sizes (recall Figure 1). [sent-280, score-0.262]
59 Computation and Storage Requirements. First, we evaluate the scalability of our method compared to LSH, PQ, and RR+PQ for converting d-dimensional vectors to d-bit binary strings. [sent-283, score-0.155]
60 Table 1 reports code generation time for different VLAD sizes and Table 2 reports the memory requirements for storing the projection matrix. [sent-287, score-0.304]
61 It is clear that our bilinear formulation is orders of magnitude more efficient. The original PQ paper [13] states that a random rotation is not needed prior to PQ. [sent-288, score-0.501]
62 As another efficient alternative to random rotation prior to PQ, we have also considered a random permutation, but found that on our data it has no effect. [sent-290, score-0.217]
63 asymmetric distance (ASD) computation for two code sizes on ILSVRC2010. [sent-309, score-0.252]
64 However, note that for binary codes with ASD, one can first use SD to find a short list and then do re-ranking with ASD, which will be much faster than exhaustive ASD computation. [sent-313, score-0.334]
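A possible implementation of this two-stage strategy (Hamming short list, then asymmetric re-ranking) is sketched below; the shapes, short-list size, and use of unpacked ±1 codes are illustrative assumptions. In practice, codes would be bit-packed and Hamming distances computed with XOR and popcount.

```python
import numpy as np

def search(query_rot, db_codes, shortlist_size=1000, topk=10):
    """query_rot: (c,) real-valued rotated query; db_codes: (n, c) in {-1,+1}.
    Assumes shortlist_size < n."""
    # Stage 1: symmetric (Hamming) distance to all database codes.
    hamming = (db_codes != np.sign(query_rot)[None, :]).sum(axis=1)
    shortlist = np.argpartition(hamming, shortlist_size)[:shortlist_size]
    # Stage 2: asymmetric (L2) distance only on the short list.
    asd = np.linalg.norm(db_codes[shortlist] - query_rot[None, :], axis=1)
    return shortlist[np.argsort(asd)[:topk]]
```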
65 For 64,000-dimensional features, evaluating RR+PQ is prohibitively expensive, so instead we try to combine the bilinear rotation with PQ (denoted as BR+PQ). [sent-319, score-0.428]
66 For BPBC with dimensionality reduction (Section 2.3), we use bilinear projections R1 ∈ R^{500×400} and R2 ∈ R^{128×80}. [sent-321, score-0.348]
67 Figure 3(a) shows the recall of 10NN from the original feature space for different numbers of retrieved images. [sent-323, score-0.158]
68 PQ without rotation fails on this dataset; BR+PQ is slightly better, but is still disappointing. [sent-324, score-0.137]
69 Bilinear rotation appears to be insufficient to fully balance the variance in this case, and performing the full random rotation is too expensive. [sent-328, score-0.34]
70 For a code size of 32,000 (dimensionality reduction by a factor of two), learned rotation works much better than random. [sent-330, score-0.166]
71 By contrast with approaches that aim at very compact codes (e.g., Jégou et al. [14]), our goal is to produce higher-dimensional codes that do not lose discriminative power. [sent-333, score-0.231]
72 Indeed, by raising the dimensionality of the code, we are able to improve the retrieval accuracy in absolute terms: the mAP for our BPBC setup (Figure 3(b)) is about 0. [sent-334, score-0.24]
73 “1/2” refers to reducing the code dimensionality by half with the method of Section 2.3. [sent-353, score-0.213]
74 The learned rotation works much better than the random one, while for the full-dimensional BPBC, learned and random rotations perform similarly. [sent-356, score-0.162]
75 Next, Figure 3(b) reports instance-level image retrieval accuracy measured by mean average precision (mAP), or the area under the recall-precision curve. [sent-358, score-0.217]
76 PQ without rotation works poorly, and BR+PQ is more reasonable, but still worse than our method. [sent-360, score-0.137]
77 As noted above, our VLAD descriptors for the ILSVRC2010 dataset have dimensionality 25,600. [sent-366, score-0.184]
78 Random rotation for this descriptor size is still feasible, so we also compare against RR+PQ. (Figure 4(a): Recall of 10NN.) [sent-367, score-0.189]
79 For BPBC with dimensionality reduction, we construct bilinear projections R1 ∈ R^{200×160} and R2 ∈ R^{128×80}, which halves the dimensionality. [sent-375, score-0.538]
80 Figure 4(a) compares the recall of 10NN from the original feature space with increasing number of retrieved points. [sent-376, score-0.158]
81 Overall, RR+PQ is the strongest of all the baseline methods, and full-dimensional BPBC with asymmetric distance is able to match its performance while having a lower memory footprint and faster code generation time (recall Tables 1 and 2). [sent-381, score-0.257]
82 Next, we evaluate the semantic retrieval accuracy on this dataset. [sent-383, score-0.145]
83 We can observe that RR+PQ and most versions of BPBC have comparable precision to the original uncompressed features. [sent-385, score-0.165]
84 Retrieval on ILSVRC2010 with LLC. To demonstrate that the BPBC method is applicable to other high-dimensional descriptors besides VLAD, we also report retrieval results on the ILSVRC2010 dataset with LLC features. [sent-391, score-0.21]
85 Learned rotation works significantly better than the random one. [sent-399, score-0.177]
86 As in Table 4(b), PQ with rotation and different versions of BPBC work similarly. [sent-401, score-0.166]
87 Image Classification. Finally, we demonstrate the effectiveness of our proposed codes for SVM classification on ILSVRC2010. [sent-405, score-0.259]
88 For PQ-compressed data, we perform decoding in order to train the SVM (decoding consists of looking up and concatenating the codewords corresponding to each subset of dimensions) and test on original descriptors (rotated if necessary). [sent-407, score-0.159]
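Decoding in this sense might be implemented as follows (a sketch with assumed array layouts, not the authors' pipeline): each stored code indexes one codeword per sub-block, and decoding simply concatenates those codewords.

```python
import numpy as np

def pq_decode(db_codes, codebooks):
    """db_codes: (n, m) sub-quantizer indices; codebooks: (m, h, ds).
    Returns (n, m*ds) approximately reconstructed descriptors."""
    n, m = db_codes.shape
    parts = [codebooks[j][db_codes[:, j]] for j in range(m)]   # each (n, ds)
    return np.concatenate(parts, axis=1)
```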
89 Note that unlike [28], we use batch training, and decoding all the training descriptors ahead of time does not actually save us on storage. [sent-408, score-0.126]
90 (b) Categorical retrieval precision at 10 and 50 retrieved images. [sent-418, score-0.223]
91 The full-dimensional BPBC incurs very little loss of accuracy over the original features, and the learned and random rotations work comparably. [sent-425, score-0.219]
92 Interestingly, PQ without RR still produces reasonable classification results, while its performance on retrieval tasks is severely degraded. [sent-429, score-0.149]
93 An interesting question is as follows: instead of starting with d-dimensional data and reducing the dimensionality to d/2 through binary coding, can we obtain the same accuracy if we start with d/2-dimensional features and maintain this dimensionality in the coding step? [sent-439, score-0.355]
94 This is comparable to the accuracy of our 12,800-dimensional binary descriptor learned from the 25,600-dimensional VLAD. [sent-442, score-0.23]
95 On the other hand, classifying 12,800-dimensional codes computed from 12,800-dimensional VLAD gives an accuracy of 41. [sent-443, score-0.255]
96 Thus, starting with the highest possible dimensionality of the original features appears to be important for learning binary codes with the most discriminative power. [sent-445, score-0.462]
97 Discussion and Future Work. This paper has presented a novel bilinear rotation formulation for learning binary codes for high-dimensional feature vectors that exploits the natural two-dimensional structure of many existing descriptors. [sent-447, score-0.789]
98 Our approach matches the accuracy of Product Quantization for retrieval and classification tasks while being much more efficient in terms of memory and computation. [sent-448, score-0.216]
99 Angular quantization based binary codes for fast similarity search. [sent-492, score-0.403]
100 Iterative quantization: A Procrustean approach to learning binary codes for large-scale image retrieval. [sent-504, score-0.334]
wordName wordTfidf (topN-words)
[('pq', 0.403), ('vlad', 0.383), ('bpbc', 0.353), ('bilinear', 0.291), ('codes', 0.231), ('holiday', 0.184), ('asd', 0.162), ('rotation', 0.137), ('retrieval', 0.121), ('rr', 0.12), ('vec', 0.117), ('sgn', 0.116), ('query', 0.108), ('binary', 0.103), ('llc', 0.103), ('asymmetric', 0.097), ('dimensionality', 0.095), ('hashing', 0.093), ('code', 0.093), ('descriptors', 0.089), ('itq', 0.08), ('bi', 0.072), ('quantization', 0.069), ('fv', 0.068), ('recall', 0.065), ('hamming', 0.065), ('txi', 0.064), ('br', 0.064), ('perronnin', 0.061), ('projection', 0.061), ('retrieved', 0.06), ('storage', 0.059), ('projections', 0.057), ('rotations', 0.054), ('lsh', 0.053), ('sd', 0.053), ('bits', 0.052), ('descriptor', 0.052), ('flickr', 0.052), ('fisher', 0.051), ('tvec', 0.048), ('anchez', 0.047), ('egou', 0.046), ('binarization', 0.044), ('compressing', 0.044), ('memory', 0.043), ('precision', 0.042), ('bilinearly', 0.041), ('fulldimensional', 0.041), ('random', 0.04), ('requirements', 0.039), ('gong', 0.039), ('coding', 0.038), ('sizes', 0.038), ('decoding', 0.037), ('ilk', 0.037), ('popcount', 0.037), ('yunchao', 0.037), ('uncompressed', 0.037), ('reorganize', 0.037), ('kronecker', 0.036), ('compact', 0.036), ('tr', 0.035), ('product', 0.035), ('lookup', 0.035), ('rotated', 0.034), ('douze', 0.034), ('original', 0.033), ('gordo', 0.032), ('ram', 0.032), ('matrices', 0.032), ('orthogonal', 0.031), ('string', 0.031), ('imagenet', 0.031), ('sign', 0.03), ('reports', 0.03), ('norouzi', 0.029), ('svd', 0.029), ('versions', 0.029), ('xor', 0.028), ('google', 0.028), ('locality', 0.028), ('classification', 0.028), ('kumar', 0.028), ('learned', 0.027), ('vectors', 0.027), ('extremely', 0.027), ('caption', 0.027), ('full', 0.026), ('dimensions', 0.026), ('procrustean', 0.025), ('converting', 0.025), ('refers', 0.025), ('svm', 0.025), ('accuracy', 0.024), ('comparable', 0.024), ('analogously', 0.024), ('kulis', 0.024), ('distance', 0.024), ('convert', 0.023)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000001 246 cvpr-2013-Learning Binary Codes for High-Dimensional Data Using Bilinear Projections
Author: Yunchao Gong, Sanjiv Kumar, Henry A. Rowley, Svetlana Lazebnik
Abstract: Recent advances in visual recognition indicate that to achieve good retrieval and classification accuracy on large-scale datasets like ImageNet, extremely high-dimensional visual descriptors, e.g., Fisher Vectors, are needed. We present a novel method for converting such descriptors to compact similarity-preserving binary codes that exploits their natural matrix structure to reduce their dimensionality using compact bilinear projections instead of a single large projection matrix. This method achieves comparable retrieval and classification accuracy to the original descriptors and to the state-of-the-art Product Quantization approach while having orders of magnitude faster code generation time and smaller memory footprint.
2 0.37347078 38 cvpr-2013-All About VLAD
Author: unknown-author
Abstract: The objective of this paper is large scale object instance retrieval, given a query image. A starting point of such systems is feature detection and description, for example using SIFT. The focus of this paper, however, is towards very large scale retrieval where, due to storage requirements, very compact image descriptors are required and no information about the original SIFT descriptors can be accessed directly at run time. We start from VLAD, the state-of-the-art compact descriptor introduced by Jégou et al. [8] for this purpose, and make three novel contributions: first, we show that a simple change to the normalization method significantly improves retrieval performance; second, we show that vocabulary adaptation can substantially alleviate problems caused when images are added to the dataset after initial vocabulary learning. These two methods set a new state-of-the-art over all benchmarks investigated here for both mid-dimensional (20k-D to 30k-D) and small (128-D) descriptors. Our third contribution is a multiple spatial VLAD representation, MultiVLAD, that allows the retrieval and localization of objects that only extend over a small part of an image (again without requiring use of the original image SIFT descriptors).
3 0.30380455 79 cvpr-2013-Cartesian K-Means
Author: Mohammad Norouzi, David J. Fleet
Abstract: A fundamental limitation of quantization techniques like the k-means clustering algorithm is the storage and runtime cost associated with the large numbers of clusters required to keep quantization errors small and model fidelity high. We develop new models with a compositional parameterization of cluster centers, so representational capacity increases super-linearly in the number of parameters. This allows one to effectively quantize data using billions or trillions of centers. We formulate two such models, Orthogonal k-means and Cartesian k-means. They are closely related to one another, to k-means, to methods for binary hash function optimization like ITQ [5], and to Product Quantization for vector quantization [7]. The models are tested on largescale ANN retrieval tasks (1M GIST, 1B SIFT features), and on codebook learning for object recognition (CIFAR-10). 1. Introduction and background Techniques for vector quantization, like the well-known k-means algorithm, are used widely in vision and learning. Common applications include codebook learning for object recognition [16], and approximate nearest neighbor (ANN) search for information retrieval [12, 14]. In general terms, such techniques involve partitioning an input vector space into multiple regions {Si}ik=1, mapping points x in each region uinlttiop region-specific representatives {ci}ik=1, known as cioennte irnst.o A resg siuonch-,s a quantizer, qse(nxt)a,t can b {ec expressed as k q(x) = ?1(x∈Si)ci, (1) i?= ?1 where 1(·) is the usual indicator function. eTrhee 1 quality oef u a quantizer oisr expressed in terms of expected distortion, a common measure of which is squared error ?x − q(x) ?22. In this case, given centers {ci}, the region t ?ox xw −hic qh(x a point nis t assigned gwivitehn m ceinnitemrasl {dcis}to,r tthioen r eisobtained by Euclidean nearest neighbor (NN) search. The k-means algorithm can be used to learn centers from data. To reduce expected distortion, crucial for many applications, one can shrink region volumes by increasing k, the number of regions. In practice, however, increasing k results in prohibitive storage and run-time costs. Even if one resorts to ANN search with approximate k-means [14] or hierarchical k-means [12], it is hard to scale to large k (e.g., k = 264), as storing the centers is untenable. This paper concerns methods for scalable quantization with tractable storage and run-time costs. Inspired by Product Quantization (PQ), a state-of-the-art algorithm for ANN search with high-dimensional data (e.g., [7]), compositionality is one key. By expressing data in terms of recurring, reusable parts, the representational capacity of compositional models grows exponentially in the number ofparameters. Compression techniques like JPEG accomplish this by encoding images as disjoint rectangular patches. PQ divides the feature space into disjoint subspaces that are quantized independently. Other examples include part-based recognition models (e.g., [18]), and tensor-based models for stylecontent separation (e.g., [17]). Here, with a compositional parameterization of region centers, we find a family of models that reduce the enc√oding cost of k centers down from k to between log2 k and √k. A model parameter controls the trade-off between model fidelity and compactness. We formulate two related algorithms, Orthogonal kmeans (ok-means) and Cartesian k-means (ck-means). They are natural extensions of k-means, and are closely related to other hashing and quantization methods. 
The okmeans algorithm is a generalization of the Iterative Quantization (ITQ) algorithm for finding locality-sensitive binary codes [5]. The ck-means model is an extension of okmeans, and can be viewed as a generalization of PQ.1 We then report empirical results on large-scale ANN search tasks, and on codebook learning for recognition. For ANN search we use datasets of 1M GIST and 1B SIFT features, with both symmetric and asymmetric distance measures on the latent codes. We consider codebook learning for a generic form of recognition on CIFAR-10. 2. k-means Given a dataset of n p-dim points, D ≡ {xj }jn=1, the kmeans algorithm partitions ithme n points in ≡to { kx c}lusters, and 1A very similar generalization of PQ, was developed concurrently by Ge, He, Ke and Sun, and also appears in CVPR 2013 [4]. 333000111755 represents each cluster by a center point. Let C ∈ Rp×k breep a mseantrtsix e wachhos celu sctoelurm byns a comprise tihnet .k Lceltus Cter ∈ centers, i.e., C = [c1, c2 , · · · , ck] . The k-means objective is to minimize the within,-c··lu·s ,tecr squared distances: ?k-means(C) = = x?∈Dmiin?x − ci?22 x?∈Db∈mHin1/k?x − Cb?22 (2) where H1/k ≡ {b | b ∈ {0, 1}k and ?b? = 1}, i.e., b is a binary vee Hctor comprising a ,1-1o}f-ka encoding. Lloyd’s b k- imse aa bni-s algorithm [11] finds a local minimum of (2) by iterative, alternating optimization with respect to C and the b’s. The k-means model is simple and intuitive, using NN search to assign points to centers. The assignment of points to centers can be represented with a log k-bit index per data point. The cost of storing the centers grows linearly with k. 3. Orthogonal k-means with 2m centers With a compositional model one can represent cluster centers more efficiently. One such approach is to re- construct each input with an additive combination of the columns of C. To this end, instead of the 1-of-k encoding in (2), we let b be a general m-bit vector, b ∈ Hm ≡ {0, 1}m, (a2n)d, wCe e∈ l Rt bp× bme. aA gse nsuerchal, meac-bhi tc vluesctteorr c, ebnt ∈er H His th≡e sum o}f a asundbs Cet o∈f Rthe columns of C. There are 2m possible subsets, and therefore k = 2m centers. While the number of parameters in the quantizer is linear in m, the number of centers increases exponentially. While efficient in representing cluster centers, the approach is problematic, because solving bˆ = abrg∈Hmmin?x − Cb?22,(3) is intractable; i.e., the discrete optimization is not submodular. Obviously, for small 2m one could generate all possible centers and then perform NN search to find the optimal solution, but this would not scale well to large values of m. One key observation is that if the columns of C are orthogonal, then optimization (3) becomes tractable. To explain this, without loss of generality, assume the bits belong to {−1, 1} instead of {0, 1}, i.e., b? ∈ Hm? ≡ {−1, 1}m. tToh e{−n, ?x − Cb??22 = xTx + b?TCTCb? − 2xTCb? .(4) For diagonal CTC, the middle quadratic term on the RHS becomes trace(CTC), independent of b?. As a consequence, when C has orthogonal columns, abr?g∈mH?min?x − Cb??22 = sgn(CTx) ,(5) where sgn(·) is the element-wise sign function. Cluster centers are depicted by dots, and cluster boundaries by dashed lines. (Left) Clusters formed by a 2-cube with no rotation, scaling or translation; centers = {b? |b? ∈ H2? }. (Right) Rotation, scaling and translation are used to reduce distances between data points and cluster centers; centers = {μ + RDb? | b? ∈ H2? }. 
To reduce quantization error further we can also introduce an offset, denoted μ, to translate x. Taken together with (5), this leads to the loss function for the model we call orthogonal k-means (ok-means):2 ?ok-means(C,μ) = x?∈Db?m∈iHn?m?x − μ − Cb??22. (6) Clearly, with a change of variables, b? = 2b −1, we can define new versions of μ and C, with ide=ntic 2abl −lo1s,s, w wfoer c wanh dicehthe unknowns are binary, but in {0, 1}m. T uhnek nook-wmnesa anrse quantizer uetn icnod {e0,s 1ea}ch data point as a vertex of a transformed m-dimensional unit hypercube. The transform, via C and μ, maps the hypercube vertices onto the input feature space, ideally as close as possible to the data points. The matrix C has orthogonal columns and can therefore be expressed in terms of rotation and scaling; i.e., C ≡ RD, where R ∈ Rp×m has orthonormal cinoglu;m i.ne.s, ( CRT ≡R = R DIm,) w, ahnerde eD R Ris ∈ diagonal and positive definite. The goal of learning is to find the parameters, R, D, and μ, which minimize quantization error. Fig. 1 depicts a small set of 2D data points (red x’s) and two possible quantizations. Fig. 1 (left) depicts the vertices of a 2-cube with C = I2 and zero translation. The cluster regions are simply the four quadrants of the 2D space. The distances between data points and cluster centers, i.e., the quantization errors, are relatively large. By comparison, Fig. 1 (right) shows how a transformed 2-cube, the full model, can greatly reduce quantization errors. 3.1. Learning ok-means To derive the learning algorithm for ok-means we first rewrite the objective in matrix terms. Given n data points, let X = [x1, x2, · · · , xn] ∈ Rp×n. Let B? ∈ {−1, 1}m×n denote the corresponding ∈clu Rster assignment {co−ef1f,ic1i}ents. Our 2While closely related to ITQ, we use the the relationship to k-means. term ok-means to emphasize 333000111 686 goal is to find the assignment coefficients B? and the transformation parameters, namely, the rotation R, scaling D, and translation μ, that minimize ?ok-means(B?, R, D,μ) = ?X − μ1T − RDB??f2 (7) = ?X? − RDB??f2 (8) where ?·?f denotes the Frobenius norm, X? ≡ X − μ1T, R iws hceornes ?tr·a?ined to have orthonormal columns≡ ≡(R XTR − μ=1 Im), and D is a positive diagonal matrix. Like k-means, coordinate descent is effective for optimizing (8). We first initialize μ and R, and then iteratively optimize ?ok-means with respect to B?, D, R, and μ: • Optimize B? and D: With straightforward algebraic manipulation, one can show that ?ok-means = ?RTX?−DB??f2 + ?R⊥TX??f2 , (9) where columns of R⊥ span the orthogonal complement of the column-space of R (i.e., the block matrix [R R⊥] is orthogonal). It follows that, given X? and R, we can optimize the first term in (9) to solve for B? and D. Here, DB? is the leastsquares approximation to RTX?, where RTX? and DB? × are m n. Further, the ith row of DB? can only contain earleem men×tsn .fr Fomur t{h−erd,i t,h +e dii} where di = Dii. Wherever tehleem corresponding delement} }o fw RheTrXe? d is positive (negative) we should put a positive (negative) value in DB?. The optimal di is determined by the mean absolute value of the elements in the ith row of RTX?: • • B? ← sgn ?RTX?? (10) D ← Diag?(mroewan?(abs(RTX?))) (11) Optimize R: Using the original objective (8), find R that minimizes ?X? −RA?f2 , subject to RTR = Im, and Am n≡i mDizBes?. ?TXhis i−s equivalent to an Orthogonal Procrustes problem [15], and can be solved exactly using SVD. 
In particular, by adding p − m rows of zeros to the bottom poaf rDtic, uDlaBr, bb eyc aodmdeins p p× − n. mT rhoewn sR o ifs z square a tnhde orthogoonf aDl ,a DndB can boem eessti pm ×at end. Twhiethn RSV isD s. qBuuatr es ainncde oDrtBho gisdegenerate we are only interested in the first m columns of R; the remaining columns are unique only up to rotation of the null-space.) Optimize μ: Given R, B? and D, the optimal μ is given by the column average of X −RDB?: μ ← mcoeluamn(X−RDB?) (12) 3.2. Approximate nearest neighbor search One application of scalable quantization is ANN search. Before introducing more advanced quantization techniques, we describe some experimental results with ok-means on Euclidean ANN search. The benefits of ok-means for ANN search are two-fold. Storage costs for the centers is reduced to O(log k), from O(k) with k-means. Second, substantial speedups are possible by exploiting fast methods for NN search on binary codes in Hamming space (e.g., [13]). Generally, in terms of a generic quantizer q(·), there are twoG neanteurraalll ways etrom messti omfa ate g ethneer dicis qtaunanceti z beetrw qe(·e)n, ttwheor vectors, v and u [7]. Using the Symmetric quantizer distance (SQD) ?v−u? is approximated by ?q(v)−q(u) ? . Using the Asymmetric quantizer doixsimtanacteed (A byQ ?Dq()v, only one o Uf sthineg tw thoe vectors is quantized, and ?v−u? is estimated as ?v−q(u) ? . vWechtiloer sS iQs qDu might b, aen slightly f?as itse ers t iom compute, vA−QqD(u i)?n-. curs lower quantization errors, and hence is more accurate. For ANN search, in a pre-processing stage, each database entry, u, is encoded into a binary vector corresponding to the cluster center index to which u is assigned. At test time, the queries may or may not be encoded into indices, depending on whether one uses SQD or AQD. In the ok-means model, the quantization of an input x is straightforwardly shown to be qok(x) = μ + RD sgn(RT(x − μ)) . (13) The corresponding m-bit cluster index is sgn(RT(x − μ)). Given two indices, namely b?1 , b?2 ∈ {−1, +1}m,( txhe − symmetric ok-means quantizer distance∈ ∈is { SQDok(b?1, b?2) = ?μ + RDb?1 − μ − RDb?2 ?22 = ?D(b?1 − b?2)?22 .(14) In effect, SQDok is a weighted Hamming distance. It is the sum of the squared diagonal entries of D corresponding to bits where b?1 and b?2 differ. Interestingly, in our experiments with ok-means, Hamming and weighted Hamming distances yield similar results. Thus, in ok-means experiments we simply report results for Hamming distance, to facilitate comparisons with other techniques. When the scaling in ok-means is constrained to be isotropic (i.e., D = αIm for α ∈ R+), then SQDok becomes a constant multiple off othre α αu ∈sua Rl Hamming distance. As discussed in Sec. 5, this isotropic ok-means is closely related to ITQ [5]. The ok-means AQD between a feature vector x and a cluster index b?, is defined as AQDok(x, b?) = ?x −μ − RDb??22 = ?RTx? − Db??22 + ?R⊥Tx??22 , (15) where x? = x−μ. For ANN search, in comparing distances from query x t−o a .d Faotars AetN oNf binary i,n idni ccoesm, ptharei snegc odnisdta tnecrems on the RHS of (15) is irrelevant, since it does not depend on b?. Without this term, AQDok becomes a form of asymmetric Hamming (AH) distance between RTx? and b?. While previous work [6] discussed various ad hoc AH distance measures for binary hashing, interestingly, the optimal AH distance for ok-means is derived directly in our model. 333000111977 1M SIFT, 64−bit encoding (k = 264) lRc@aeR0 . 
206148 10 RIoPT1Qk K−Q m( AHeDa)Hn )s (AH)10K Figure 2. Euclidean ANN retrieval results for different methods and distance functions on the 1M SIFT dataset. 3.3. Experiments with ok-means Following [7], we report ANN search results on 1M SIFT, a corpus of 128D SIFT descriptors with disjoint sets of 100K training, 1M base, and 10K test vectors. The training set is used to train models. The base set is the database, and the test points are queries. The number of bits m is typically less than p, but no pre-processing is needed for dimensionality reduction. Rather, to initialize learning, R is a random rotation of the first m principal directions of the training data, and μ is the mean of the data. For each query we find R nearest neighbors, and compute Recall@R, the fraction of queries for which the ground-truth Euclidean NN is found in the R retrieved items. Fig. 2 shows the Recall@R plots for ok-means with Hamming (H) ≈ SQDok and asymmetric Hamming (AH) H≡a mAmQiDngok ( Hd)is t≈an SceQ, vs. ITQ [5] and PQ [7]. The PQ ≡met AhoQdD exploits a more complex asymmetric distance func- tion denoted AD ≡ AQDck (defined in Sec. 6. 1). Note first tthioant od ke-nmoeteadns A improves upon ITQ, with both Hamming and asymmetric Hamming distances. This is due to the scaling parameters (i.e., D) in ok-means. If one is interested in Hamming distance efficient retrieval, ok-means is prefered over ITQ. But better results are obtained with the asymmetric distance function. Fig. 2 also shows that PQ achieves superior recall rates. This stems from its joint encoding of multiple feature dimensions. In ok-means, each bit represents a partition ofthe feature space into two clusters, separated by a hyperplane. The intersection of m orthogonal hyperplanes yields 2m regions. Hence we obtain just two clusters per dimension, and each dimension is encoded independently. In PQ, by comparison, multiple dimensions are encoded jointly, with arbitrary numbers of clusters. PQ thereby captures statistical dependencies among different dimensions. We next extend ok-means to jointly encode multiple dimensions. 4. Cartesian k-means In the Cartesian k-means (ck-means) model, each region center is expressed parametrically as an additive combination of multiple subcenters. Let there be m sets of subcen- Fidg2ure31.Ddep4icton5fCartde?1si2nqdu?4ati5z?3aton 4qDck(da)t= ,?wd i?5134t?h the first (last) two dimensions sub-quantized on the left (right). Cartesian k-means quantizer denoted qck, combines the subquantizations in subspaces, and produces a 4D reconstruction. ters, each with h elements.3 Let C(i) be a matrix whose columns comprise the elements of the ith subcenter set; C(i) ∈ Rp×h. Finally, assume that each cluster center, c, is the∈ sum of exactly one element from each subcenter set: = ?C(i)b(i) m c i?= ?1 , (16) where b(i) ∈ H1/h is a 1-of-h encoding. As a conc∈re Hte example (see Fig. 3), suppose we are given 4D inputs, x ∈ R4, and we split each datum into m = 2 parts: z(1) = ?I2 0? x , and Then, suppose w?e z(2) = ?0 I2? x .(17) qu?antize each part, z(?1) and? z(2) , sepa- × rately. As depicted in Fig. 3 (left and middle), we could use h = 5 subcenters for each one. Placing the corresponding subcenters in the columns of 4 5 matrices C(1) and C(2) , C(1)=?d1d20d2×35d4d5?, C(2)=?d?1d?20d2×?35d?4d?5?, we obtain a model (16) that provides 52 possible centers with which to quantize the data. More generally, the total number of model centers is k = hm. 
Each center is a member of the Cartesian product of the subcenter sets, hence the name Cartesian k-means. Importantly, while the number of centers is hm, the number of subcenters is only mh. The model provides a super-linear number of centers with a linear number of parameters. The learning objective for Cartesian k-means is ?ck-means(C) =x?∈D{b(mi)}inim=1???x −i?=m1C(i)b(i)??22 where b(i) ∈ H1/h, and C ≡ [C(1), ··· , (18) C(m)] ∈ Rp×mh. [b(1)T, ··· ,b(m)T] If we let bT ≡ then the second sum in (18) can be expressed succinctly as Cb. 3While here we assume a fixed cardinality for all subcenter sets, the model is easily extended to allow sets with different cardinalities. 333000112088 The key problem with this formulation is that the min- imization of (18) with respect to the b(i) ’s is intractable. Nevertheless, motivated by orthogonal k-means (Sec. 3), encoding can be shown to be both efficient and exact if we impose orthogonality constraints on the sets of subcenters. To that end, assume that all subcenters in different sets are pairwise orthogonal; i.e., ∀i,j | i = j C(i)TC(j) = 0h×h .(19) Each subcenter matrix C(i) spans a linear subspace of Rp, and the linear subspaces for different subcenter sets do not intersect. Hence, (19) implies that only the subcenters in C(i) can explain the projection of x onto the C(i) subspace. In the example depicted in Fig. 3, the input features are simply partitioned (17), and the subspaces clearly satisfy the orthogonality constraints. It is also clear that C ≡ [ C(1) C(2)] is block diagonal, Iwtit ihs 2 a ×lso o5 c bleloacrks t,h adte Cnote ≡d D(1) and D(]2 i)s . bTlohec quantization error t×he5re bfolorcek bse,c doemnoeste ?x − Cb?22 = ???zz((12))?−?D0(1) D0(2)? ?b ( 21) ? ?2 = ?????z(1)−D(1)b(1)??2+???z(2)−D(2??)b(2)??2. In words, the squa??zred quantization?? erro??r zis the sum of t??he squared errors on the subspaces. One can therefore solve for the binary coefficients of the subcenter sets independently. In the general case, assuming (19) is satisfied, C can be expressed as a product R D, where R has orthonormal columns, and D is block diagonal; i.e., C = R D where Ra=nd[hRe(n1c),e·C(,i)R=(mR)]i,Dan(di).DW=i⎢t⎡⎣⎢hDs0i.(1≡)Dra0(n2)k. C.(Di)0(.m,i)t⎦⎥ ⎤fo,l(2w0)s that D(i) ∈ Rsi×h and R(i) ∈ Rp×≡sira. Clearly, ? si ≤ p, because of∈ ∈th Re orthogonality ∈con Rstraints. Replacing C(i) with R(i)D(i) in the RHS of (18?), we find ?x−Cb?22 = ?m?z(i)−D(i)b(i)?22+?R⊥Tx?22, (21) i?= ?1 where z(i)≡R(i)Tx, and R⊥is the orthogonal complement of R. This≡ ≡shRows that, with orthogonality constraints (19), the ck-means encoding problem can be split into m independent sub-encoding problems, one for each subcenter set. To find the b(i) that minimizes ??z(i) we perform NN search with z(i) again??st h si-dim vec??tors in D(i) . This entails a cost of O(hsi).? Fortunately, all? the elements of b can be found very efficiently, in O(hs), where s ≡ ? si. If we also include the cost of rotating x to −D(i)b(i)?22, Taocbkml-emt1he.aoAnds um#ceh2nkmrtyeofskm-#lobgeiatknsh,cOo-rm( pOec(2oamkn+spt),hpan)dkO- m(pOecosa(n+mst(hipns)khtesro)m of number of centers, number of bits needed for indices (i.e., log #centers), and the storage cost of representation, which is the same as the encoding cost to convert inputs to indices. The last column shows the costs given an assumption that C has a rank of s ≥ m. obtain each z(i) , the total encoding cost is O(ps + hs), i.e., O(p2+hp). Alternatively, one could perform NN search in p-dim C(i) ’s, to find the b(i) ’s, which costs O(mhp). 
Table 1 summerizes the models in terms of their number of centers, index size, and cost of storage and encoding. 4.1. Learning ck-means We can re-write the ck-means objective (18) in matrix form with the Frobenius norm; i.e., ?ck-means(R, D, B) = ? X − RDB ?f2 (22) where the columns of X and B comprise the data points and the subcenter assignment coefficients. The input to the learning algorithm is the training data X, the number of subcenter sets m, the cardinality of the subcenter sets h, and an upper bound on the rank of C, i.e., s. In practice, we also let each rotation matrix R(i) have the same number of columns, i.e., si = s/m. The outputs are the matrices {R(i) } and {D(i) } that provide a local minima of (22). Learning begins hwaitth p trohev idneit aia lloizcaatlio mni noimf Ra oanf d(2 D2)., followed by iterative coordinate descent in B, D, and R: • Optimize B and D: With R fixed, the objective is given by (21) where ?R⊥TX?f2 R(i)TX, is constant. Given data pro- jections Z(i) ≡ to optimize for B and D we perform one step oRf k-means for each subcenter set: – Assignment: Perform NN searches for each subcenter set to find the assignment coefficients, B(i) , B(i)← arBg(mi)in?Z(i)− D(i)B(i)?f2 – • Update: D(i)← arDg(mi)in?Z(i)− D(i)B(i)?f2 Optimize R: Placing the D(i) ’s along the diagonal of D, as in (20), and concatenating B(i) ’s as rows of B, [B(1)T, ...,B(m)T], i.e., BT = the optimization of R reduces to the orthogonal Procrustes problem: R ← argRmin?X − RDB?f2. In experiments below, R ∈ Rp×p, and rank(C) ≤ p is unIcnon esxtpraeirniemde. tFso rb high-dimensional adnadta r awnhke(rCe ) ra ≤nk( pX is) ?np, fnostrr efficiency ri th may dbime eusnesfiuoln atol dcoantast wrahiner er anr akn(kC(X). 333000112199 5. Relations and related work As mentioned above, there are close mathematical relationships between ok-means, ck-means, ITQ for binary hashing [5], and PQ for vector quantization [7]. It is instructive to specify these relationships in some detail. Iterative Quantization vs. Orthogonal k-means. ITQ [5] is a variant of locality-sensitive hashing, mapping data to binary codes for fast retrieval. To extract m-bit codes, ITQ first zero-centers the data matrix to obtain X?. PCA is then used for dimensionality reduction, fromp down to m dimensions, after which the subspace representation is randomly rotated. The composition of PCA and the random rotation can be expressed as WTX? where W ∈ Rp×m. ITQ then solves for the m×m rotation mawthriex,r eR W, t ∈hatR minimizes ?ITQ(B?, R) = ?RTWTX?− B??f2 , s.t. RTR = Im×m, (23) where B? ∈ {−1, +1}n×p. ITQ ro∈tate {s− t1h,e+ subspace representation of the data to match the binary codes, and then minimizes quantization error within the subspace. By contrast, ok-means maps the binary codes into the original input space, and then considers both the quantization error within the subspace and the out-of-subspace projection error. A key difference is the inclusion of ?R⊥X? ?f2 in the ok-means objective (9). This is important s ?inRce one can often reduce quantization errors by projecting out significant portions of the feature space. Another key difference between ITQ and ok-means is the inclusion of non-uniform scaling in ok-means. This is important when the data are not isotropic, and contributes to the marked improvement in recall rates in Fig. 2. Orthogonal k-means vs. Cartesian k-means. We next show that ok-means is a special case of ck-means with h = 2, where each subcenter set has two elements. 
To this end, let C(i) = and let b(i) = be the ith subcenter matrix selection vector. Since b(i) is a 1-of-2 encoding (?10? or ?01?), it follows that: [c(1i) c2(i)], b1(i)c(1i)+ b(2i)c2(i) = [b(1i) b2(i)]T c(1i)+2 c2(i)+ bi?c1(i)−2 c2(i), (24) b1(i) − b2(i) where bi? ≡ ∈ {−1, +1}. With the following setting of t≡he bok-m−e abns parameters, μ=?i=m1c1(i)+2c(2i),andC=?c1(1)−2c2(1),...,c1(m)−2c2(m)?, it should be clear that ?i C(i)b(i) = μ + Cb?, where b? ∈ {−1, +1}m, and b??i is the ith bit of b?, used in (24). Similarly, one can also map ok-means parameters onto corresponding subcenters for ck-means. Thus, there is a 1-to-1 mapping between the parameterizations of cluster centers in ok-means and ck-means for h = 2. The benefits of ok-means are its small number of parameters, and its intuitive formulation. The benefit of the ck-means generalization is its joint encoding of multiple dimensions with an arbitrary number of centers, allowing it to capture intrinsic dependence among data dimensions. Product Quantization vs. Cartesian k-means. PQ first splits the input vectors into m disjoint sub-vectors, and then quantizes each sub-vector separately using a subquantizer. Thus PQ is a special case of ck-means where the rotation R is not optimized; rather, R is fixed in both learning and retrieval. This is important because the sub-quantizers act independently, thereby ignoring intrasubspace statistical dependence. Thus the selection of subspaces is critical for PQ [7, 8]. J e´gou et al. [8] suggest using PCA, followed by random rotation, before applying PQ to high-dimensional vectors. Finding the rotation to minimize quantization error is mentioned in [8], but it was considered too difficult to estimate. Here we show that one can find a rotation to minimize the quantization error. The ck-means learning algorithm optimizes sub-quantizers in the inner loop, but they interact in an outer loop that optimizes the rotation (Sec. 4.1). 6. Experiments 6.1. Euclidean ANN search Euclidean ANN is a useful task for comparing quantization techniques. Given a database of ck-means indices, and a query, we use Asymmetric and Symmetric ck-means quantizer distance, denoted AQDck and SQDck. The AQDck between a query x and a binary index b ≡ ?b(1)T, ...,b(m)T?T, derived in (21), is AQDck(x,b) ?m?z(i)−D(i)b(i)?22+ ?R⊥Tx?22.(25) = i?= ?1 ?z(i)−D(i)b(i)?22is the distance between the ithpro- Here, jection?? of x, i.e., z(i) , ?a?nd a subcenter projection from D(i) selecte?d by b(i) . Give?n a query, these distances for each i, and all h possible values of b(i) can be pre-computed and stored in a query-specific h×m lookup table. Once created, tshtoer lookup qtaubelery i-ss pusecedif itoc compare akullp pda tatbablea.se O points etaot tehde, query. So computing AQDck entails m lookups and additions, somewhat more costly than Hamming distance. The last term on the RHS of (25) is irrelevant for NN search. Since PQ is a special case of ck-means with pre-defined subspaces, the same distances are used for PQ (cf. [7]). The SQDck between binary codes b1 and b2 is given by SQDck(b1,b2) = ?m?D(i)b1(i)− D(i)b2(i)?22.(26) i?= ?1 Since b1(i) and b(2i) are 1-of-?h encodings, an m×h?×h lookup table can be createadre t 1o- sft-ohre e nacllo pairwise msu×bh×-dhis ltaonockeusp. 
333000222200 1M SIFT, 64−bit encoding (k = 264) 1M GIST, 64−bit encoding (k = 264) 1B SIFT, 64−bit encoding (k = 264) Re@acl0 .4186021RcIPoTQk0−Q m( SAeDaH)n s (SADH)1K0 .1642081 0RIocP1TkQ −K m (SAeDHa)n s (ASHD1)0K .186420 1 R0IoPc1TkQ −KQm( ASe DaHn) s (SADH1)0K Figure 4. Euclidean NN recall@R (number of items retrieved) based on different quantizers and corresponding distance functions on the 1M SIFT, 1M GIST, and 1B SIFT datasets. The dashed curves use symmetric distance. (AH ≡ AQDok, SD ≡ SQDck, AD ≡ AQDck) While the cost of computing SQDck is the same as AQDck, SQDck could also be used to estimate the distance between the indexed database entries, for diversifying the retrieval results, or to detect near duplicates, for example. Datasets. We use the 1M SIFT dataset, as in Sec. 3.3, along with two others, namely, 1M GIST (960D features) and 1B SIFT, both comprising disjoint sets of training, base and test vectors. 1M GIST has 500K training, 1M base, and 1K query vectors. 1B SIFT has 100M training, 1B base, and 10K query points. In each case, recall rates are averaged over queries in test set for a database populated from the base set. For expedience, we use only the first 1M training points for the 1B SIFT experiments. Parameters. In ANN experiments below, for both ckmeans and PQ, we use m = 8 and h = 256. Hence the number of clusters is k = 2568 = 264, so 64-bits are used as database indices. Using h = 256 is particularly attractive because the resulting lookup tables are small, encoding is fast, and each subcenter index fits into one byte. As h increases we expect retrieval results to improve, but encoding and indexing of a query to become slower. Initialization. To initialize the Di’s for learning, as in kmeans, we simply begin with random samples from the set of Zi’s (see Sec. 4.1). To initialize R we consider the different methods that J e´gou et al. [7] proposed for splitting the feature dimensions into m sub-vectors for PQ: (1) natural: sub-vectors comprise consecutive dimensions, (2) structured: dimensions with the same index modulo 8 are grouped, and (3) random: random permutations are used. For PQ in the experiments below, we use the orderings that produced the best results in [7], namely, the structured ordering for 960D GIST, and the natural ordering for 128D SIFT. For learning ck-means, R is initialized to the identity with SIFT corpora. For 1M GIST, where the PQ ordering is significant, we consider all three orderings to initialize R. Results. Fig. 4 shows Recall@R plots for ck-means and PQ [7] with symmetric and asymmetric distances (SD ≡ PSQQD [7c]k wanitdh A syDm m≡e rAicQ aDncdk) a on tmhee t3r cda dtaissteatns.c Tsh (eS Dhor ≡izontal axainsd represents tQheD number of retrieved items, R, on a log-scale. The results consistently favor ck-means. On 1M GIST, 64−bit encoding (k = 264) lae@Rc0 .261084 1R0Pc Qk −1m(K 321e )a (A nD s )(132) (A D )10K Figure 5. PQ and ck-means results using natural (1), structured (2), and random (3) ordering to define the (initial) subspaces. the high-dimensional GIST data, ck-means with AD significantly outperforms other methods; even ck-means with SD performs on par with PQ with AD. On 1M SIFT, the Recall@ 10 numbers for PQ and ck-means, both using AD, × are 59.9% and 63.7%. On 1B SIFT, Recall@ 100 numbers are 56.5% and 64.9%. As expected, with increasing dataset size, the difference between methods is more significant. In 1B SIFT, each feature vector is 128 bytes, hence a total of 119 GB. Using any method in Fig. 
4 (including ckmeans) to index the database into 64 bits, this storage cost reduces to only 7.5 GB. This allows one to work with much larger datasets. In the experiments we use linear scan to find the nearest items according to quantizer distances. For NN search using 10K SIFT queries on 1B SIFT this takes about 8 hours for AD and AH and 4 hours for Hamming distance on a 2 4-core computer. Search can be sped up significantly; using a coarse tienri.tia Sl quantization sapnedd an pin svigenritefidfile structure for AD and AH, as suggested by [7, 1], and using the multi-index hashing method of [13] for Hamming distance. In this paper we did not implement these efficiencies as we focus primarily on the quality of quantization. Fig. 5 compares ck-means to PQ when R in ck-means is initialized using the 3 orderings of [7]. It shows that ckmeans is superior in all cases. Simiarly interesting, it also shows that despite the non-convexity ofthe optimization objective, ck-means learning tends to find similarly good encodings under different initial conditions. Finally, Fig. 6 compares the methods under different code dimensionality. 333000222311 1M GIST, encoding with 64, 96, and 128 bits Rl@ace0 .814206 1 R0cP kQ − m1 69e4216a−8 Knb −itsb 1t69248− bit10K Figure 6. PQ and ck-means results using different number of bits for encoding. In all cases asymmetric distance is used. Table2.Rcokg-nmiteoamPnQCesac(ondksue=r(bkaoc416y=0ko26)n40t2h[eC]IFAR7c598-u1.72096r%a tcesyuingdfferent codebook learning algorithms. 6.2. Learning visual codebooks While the ANN seach tasks above require too many clusters for k-means, it is interesing to compare k-means and ck-means on a task with a moderate number of clusters. To this end we consider codebook learning for bag-ofword√s models [3, 10]. We use ck-means with m = 2 and h = √k, and hence k centers. The main advantage of ck- × here is that finding the closest cluster center is done in O(√k) time, much faster than standard NN search with k-means in O(k). Alternatives for k-means, to improve efficiency, include approximate k-means [14], and hierarchical k-means [12]. Here we only compare to exact k-means. CIFAR-10 [9] comprises 50K training and 10K test images (32 32 pixels). Each image is one of 10 classes (airplane, 3b2i×rd,3 car, cat, )d.e Eera,c dog, frog, sh oonrsee o, ship, laansds etrsu (caki)r.We use publicly available code from Coates et al [2], with changes to the codebook learning and cluster assignment modules. Codebooks are built on 6×6 whitened color image patches. .O Cnoed histogram i sb ucirleta otend 6 per image quadrant, aagnde a linear SVM is applied to 4k-dim feature vectors. Recognition accuracy rates on the test set for different models and k are given in Table 2. Despite having fewer parameters, ck-means performs on par or better than kmeans. This is consistent for different initializations of the algorithms. Although k-means has higher fedility than ckmeans, with fewer parameters, ck-means may be less susceptible to overfitting. Table 2, also compares with the approach of [19], where PQ without learning a rotation is used for clustering features. As expected, learning the rotation has a significant impact on recognition rates, outperforming all three initializations of PQ. mean√s 7. Conclusions We present the Cartesian k-means algorithm, a generalization of k-means with a parameterization of the clus- ter centers such that number of centers is super-linear in the number of parameters. 
7. Conclusions

We present the Cartesian k-means algorithm, a generalization of k-means with a parameterization of the cluster centers such that the number of centers is super-linear in the number of parameters. The method is also shown to be a generalization of the ITQ algorithm and Product Quantization. In experiments on large-scale retrieval and codebook learning for recognition the results are impressive, outperforming product quantization by a significant margin. An implementation of the method is available at https://github.com/norouzi/ckmeans.

References
[1] A. Babenko and V. Lempitsky. The inverted multi-index. CVPR, 2012.
[2] A. Coates, H. Lee, and A. Ng. An analysis of single-layer networks in unsupervised feature learning. AISTATS, 2011.
[3] G. Csurka, C. Dance, L. Fan, J. Willamowski, and C. Bray. Visual categorization with bags of keypoints. ECCV Workshop on Statistical Learning in Computer Vision, 2004.
[4] T. Ge, K. He, Q. Ke, and J. Sun. Optimized product quantization for approximate nearest neighbor search. CVPR, 2013.
[5] Y. Gong and S. Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. CVPR, 2011.
[6] A. Gordo and F. Perronnin. Asymmetric distances for binary embeddings. CVPR, 2011.
[7] H. Jégou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. IEEE Trans. PAMI, 2011.
[8] H. Jégou, M. Douze, C. Schmid, and P. Pérez. Aggregating local descriptors into a compact image representation. CVPR, 2010.
[9] A. Krizhevsky. Learning multiple layers of features from tiny images. MSc thesis, Univ. of Toronto, 2009.
[10] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. CVPR, 2006.
[11] S. P. Lloyd. Least squares quantization in PCM. IEEE Trans. IT, 28(2):129–137, 1982.
[12] D. Nister and H. Stewenius. Scalable recognition with a vocabulary tree. CVPR, 2006.
[13] M. Norouzi, A. Punjani, and D. J. Fleet. Fast search in Hamming space with multi-index hashing. CVPR, 2012.
[14] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman. Object retrieval with large vocabularies and fast spatial matching. CVPR, 2007.
[15] P. Schönemann. A generalized solution of the orthogonal Procrustes problem. Psychometrika, 31, 1966.
[16] J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos. ICCV, 2003.
[17] J. Tenenbaum and W. Freeman. Separating style and content with bilinear models. Neural Comp., 12:1247–1283, 2000.
[18] A. Torralba, K. Murphy, and W. Freeman. Sharing visual features for multiclass and multiview object detection. IEEE T. PAMI, 29(5):854–869, 2007.
[19] S. Wei, X. Wu, and D. Xu. Partitioned k-means clustering for fast construction of unbiased visual vocabulary. The Era of Interactive Media, 2012.
4 0.25725842 319 cvpr-2013-Optimized Product Quantization for Approximate Nearest Neighbor Search
Author: Tiezheng Ge, Kaiming He, Qifa Ke, Jian Sun
Abstract: Product quantization is an effective vector quantization approach to compactly encode high-dimensional vectors for fast approximate nearest neighbor (ANN) search. The essence of product quantization is to decompose the original high-dimensional space into the Cartesian product of a finite number of low-dimensional subspaces that are then quantized separately. Optimal space decomposition is important for the performance of ANN search, but still remains unaddressed. In this paper, we optimize product quantization by minimizing quantization distortions w.r.t. the space decomposition and the quantization codebooks. We present two novel methods for optimization: a nonparametric method that alternatively solves two smaller sub-problems, and a parametric method that is guaranteed to achieve the optimal solution if the input data follows some Gaussian distribution. We show by experiments that our optimized approach substantially improves the accuracy of product quantization for ANN search.
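The alternating optimization sketched in this abstract (update the sub-codebooks with the space decomposition fixed, then update the decomposition with the codebooks fixed) can be illustrated roughly as below. This is a hedged sketch of the non-parametric flavor only, using scikit-learn k-means as the subquantizer and an orthogonal Procrustes step (SVD) for the rotation update; it is not the authors' code, and the function name opq_np and all parameters are illustrative.

import numpy as np
from sklearn.cluster import KMeans

def opq_np(X, m=4, h=32, iters=5, seed=0):
    # X: (n, d). Returns a rotation R (d, d) and m sub-codebooks of shape (h, d/m).
    n, d = X.shape
    rng = np.random.default_rng(seed)
    R = np.linalg.qr(rng.standard_normal((d, d)))[0]  # random orthogonal init
    for _ in range(iters):
        Xr = X @ R
        blocks = np.split(Xr, m, axis=1)
        recon, codebooks = [], []
        for Z in blocks:
            km = KMeans(n_clusters=h, n_init=1, max_iter=10, random_state=seed).fit(Z)
            codebooks.append(km.cluster_centers_)
            recon.append(km.cluster_centers_[km.labels_])  # quantized block
        Y = np.hstack(recon)                               # reconstruction of X @ R
        # Procrustes update: R = argmin ||X R - Y||_F  s.t.  R^T R = I
        U, _, Vt = np.linalg.svd(X.T @ Y)
        R = U @ Vt
    return R, codebooks

rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 64))
R, codebooks = opq_np(X, m=4, h=32, iters=5)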
5 0.21623328 236 cvpr-2013-K-Means Hashing: An Affinity-Preserving Quantization Method for Learning Binary Compact Codes
Author: Kaiming He, Fang Wen, Jian Sun
Abstract: In computer vision there has been increasing interest in learning hashing codes whose Hamming distance approximates the data similarity. The hashing functions play roles in both quantizing the vector space and generating similarity-preserving codes. Most existing hashing methods use hyper-planes (or kernelized hyper-planes) to quantize and encode. In this paper, we present a hashing method adopting the k-means quantization. We propose a novel Affinity-Preserving K-means algorithm which simultaneously performs k-means clustering and learns the binary indices of the quantized cells. The distance between the cells is approximated by the Hamming distance of the cell indices. We further generalize our algorithm to a product space for learning longer codes. Experiments show our method, named as K-means Hashing (KMH), outperforms various state-of-the-art hashing encoding methods.
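The core idea stated in the abstract — approximating the distance between quantization cells by the Hamming distance of their binary indices — can be seen in a tiny sketch. The code below shows only the lookup-free symmetric distance on top of plain k-means indices (which, unlike KMH, are not trained to preserve affinities); it illustrates the problem setting rather than the Affinity-Preserving K-means algorithm itself, and all names are illustrative.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 32))

b = 4                                  # bits per code => k = 2^b cells
km = KMeans(n_clusters=2 ** b, n_init=4, random_state=0).fit(X)
codes = km.labels_.astype(np.uint8)    # each point stored as a b-bit cell index

def hamming(a, c):
    # Hamming distance between two b-bit indices (XOR + popcount).
    return bin(int(a) ^ int(c)).count("1")

# KMH trains the codes so that d_H(i, j) tracks ||center_i - center_j||;
# plain k-means indices (as here) generally do not, which is the gap it closes.
i, j = codes[0], codes[1]
true = np.linalg.norm(km.cluster_centers_[i] - km.cluster_centers_[j])
print(hamming(i, j), true)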
6 0.18215664 59 cvpr-2013-Better Exploiting Motion for Better Action Recognition
7 0.17691842 87 cvpr-2013-Compressed Hashing
8 0.17468117 268 cvpr-2013-Leveraging Structure from Motion to Learn Discriminative Codebooks for Scalable Landmark Classification
9 0.15789159 63 cvpr-2013-Binary Code Ranking with Weighted Hamming Distance
10 0.15295656 223 cvpr-2013-Inductive Hashing on Manifolds
11 0.12899666 343 cvpr-2013-Query Adaptive Similarity for Large Scale Object Retrieval
12 0.11904296 69 cvpr-2013-Boosting Binary Keypoint Descriptors
14 0.10793575 151 cvpr-2013-Event Retrieval in Large Video Collections with Circulant Temporal Encoding
15 0.10478119 249 cvpr-2013-Learning Compact Binary Codes for Visual Tracking
16 0.10315013 62 cvpr-2013-Bilinear Programming for Human Activity Recognition with Unknown MRF Graphs
17 0.096354462 293 cvpr-2013-Multi-attribute Queries: To Merge or Not to Merge?
18 0.094560049 260 cvpr-2013-Learning and Calibrating Per-Location Classifiers for Visual Place Recognition
19 0.091980711 189 cvpr-2013-Graph-Based Discriminative Learning for Location Recognition
20 0.089314044 425 cvpr-2013-Tensor-Based High-Order Semantic Relation Transfer for Semantic Scene Segmentation
topicId topicWeight
[(0, 0.165), (1, -0.052), (2, -0.051), (3, 0.052), (4, 0.07), (5, 0.005), (6, -0.098), (7, -0.194), (8, -0.231), (9, -0.075), (10, -0.183), (11, 0.043), (12, 0.101), (13, 0.123), (14, 0.074), (15, 0.023), (16, 0.092), (17, 0.008), (18, -0.003), (19, -0.061), (20, 0.091), (21, -0.108), (22, -0.012), (23, 0.06), (24, 0.008), (25, 0.144), (26, 0.021), (27, 0.0), (28, -0.024), (29, -0.061), (30, 0.104), (31, 0.073), (32, -0.044), (33, 0.027), (34, -0.082), (35, -0.115), (36, 0.019), (37, -0.015), (38, 0.077), (39, 0.032), (40, 0.049), (41, 0.197), (42, 0.126), (43, 0.018), (44, 0.034), (45, -0.014), (46, -0.097), (47, -0.003), (48, -0.04), (49, -0.114)]
simIndex simValue paperId paperTitle
same-paper 1 0.92764121 246 cvpr-2013-Learning Binary Codes for High-Dimensional Data Using Bilinear Projections
Author: Yunchao Gong, Sanjiv Kumar, Henry A. Rowley, Svetlana Lazebnik
Abstract: Recent advances in visual recognition indicate that to achieve good retrieval and classification accuracy on largescale datasets like ImageNet, extremely high-dimensional visual descriptors, e.g., Fisher Vectors, are needed. We present a novel method for converting such descriptors to compact similarity-preserving binary codes that exploits their natural matrix structure to reduce their dimensionality using compact bilinear projections instead of a single large projection matrix. This method achieves comparable retrieval and classification accuracy to the original descriptors and to the state-of-the-art Product Quantization approach while having orders ofmagnitudefaster code generation time and smaller memory footprint.
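The bilinear projection idea described in the abstract above — two small rotations applied to a matrix-shaped descriptor instead of one enormous projection — can be sketched as follows. This is a rough illustration under assumed dimensions (a 64,000-D descriptor viewed as a 128 × 500 matrix), with random orthogonal matrices standing in for the learned projections; it is not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
d1, d2 = 128, 500                  # descriptor viewed as a d1 x d2 matrix (d = 64,000)
x = rng.standard_normal(d1 * d2)   # e.g. a VLAD/Fisher vector (illustrative)

# Two small random orthogonal matrices instead of one (d1*d2) x (d1*d2) projection.
R1 = np.linalg.qr(rng.standard_normal((d1, d1)))[0]
R2 = np.linalg.qr(rng.standard_normal((d2, d2)))[0]

X = x.reshape(d1, d2)
code = np.sign(R1.T @ X @ R2).reshape(-1) > 0    # 64,000-bit binary code

# Storage: R1 and R2 hold d1^2 + d2^2 = 266,384 floats, versus
# (d1*d2)^2 ~= 4.1e9 floats for a single full projection matrix.
print(code.shape, code.dtype)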
2 0.84713435 79 cvpr-2013-Cartesian K-Means
Author: Mohammad Norouzi, David J. Fleet
Abstract: A fundamental limitation of quantization techniques like the k-means clustering algorithm is the storage and runtime cost associated with the large numbers of clusters required to keep quantization errors small and model fidelity high. We develop new models with a compositional parameterization of cluster centers, so representational capacity increases super-linearly in the number of parameters. This allows one to effectively quantize data using billions or trillions of centers. We formulate two such models, Orthogonal k-means and Cartesian k-means. They are closely related to one another, to k-means, to methods for binary hash function optimization like ITQ [5], and to Product Quantization for vector quantization [7]. The models are tested on largescale ANN retrieval tasks (1M GIST, 1B SIFT features), and on codebook learning for object recognition (CIFAR-10). 1. Introduction and background Techniques for vector quantization, like the well-known k-means algorithm, are used widely in vision and learning. Common applications include codebook learning for object recognition [16], and approximate nearest neighbor (ANN) search for information retrieval [12, 14]. In general terms, such techniques involve partitioning an input vector space into multiple regions {Si}ik=1, mapping points x in each region uinlttiop region-specific representatives {ci}ik=1, known as cioennte irnst.o A resg siuonch-,s a quantizer, qse(nxt)a,t can b {ec expressed as k q(x) = ?1(x∈Si)ci, (1) i?= ?1 where 1(·) is the usual indicator function. eTrhee 1 quality oef u a quantizer oisr expressed in terms of expected distortion, a common measure of which is squared error ?x − q(x) ?22. In this case, given centers {ci}, the region t ?ox xw −hic qh(x a point nis t assigned gwivitehn m ceinnitemrasl {dcis}to,r tthioen r eisobtained by Euclidean nearest neighbor (NN) search. The k-means algorithm can be used to learn centers from data. To reduce expected distortion, crucial for many applications, one can shrink region volumes by increasing k, the number of regions. In practice, however, increasing k results in prohibitive storage and run-time costs. Even if one resorts to ANN search with approximate k-means [14] or hierarchical k-means [12], it is hard to scale to large k (e.g., k = 264), as storing the centers is untenable. This paper concerns methods for scalable quantization with tractable storage and run-time costs. Inspired by Product Quantization (PQ), a state-of-the-art algorithm for ANN search with high-dimensional data (e.g., [7]), compositionality is one key. By expressing data in terms of recurring, reusable parts, the representational capacity of compositional models grows exponentially in the number ofparameters. Compression techniques like JPEG accomplish this by encoding images as disjoint rectangular patches. PQ divides the feature space into disjoint subspaces that are quantized independently. Other examples include part-based recognition models (e.g., [18]), and tensor-based models for stylecontent separation (e.g., [17]). Here, with a compositional parameterization of region centers, we find a family of models that reduce the enc√oding cost of k centers down from k to between log2 k and √k. A model parameter controls the trade-off between model fidelity and compactness. We formulate two related algorithms, Orthogonal kmeans (ok-means) and Cartesian k-means (ck-means). They are natural extensions of k-means, and are closely related to other hashing and quantization methods. 
The okmeans algorithm is a generalization of the Iterative Quantization (ITQ) algorithm for finding locality-sensitive binary codes [5]. The ck-means model is an extension of okmeans, and can be viewed as a generalization of PQ.1 We then report empirical results on large-scale ANN search tasks, and on codebook learning for recognition. For ANN search we use datasets of 1M GIST and 1B SIFT features, with both symmetric and asymmetric distance measures on the latent codes. We consider codebook learning for a generic form of recognition on CIFAR-10. 2. k-means Given a dataset of n p-dim points, D ≡ {xj }jn=1, the kmeans algorithm partitions ithme n points in ≡to { kx c}lusters, and 1A very similar generalization of PQ, was developed concurrently by Ge, He, Ke and Sun, and also appears in CVPR 2013 [4]. 333000111755 represents each cluster by a center point. Let C ∈ Rp×k breep a mseantrtsix e wachhos celu sctoelurm byns a comprise tihnet .k Lceltus Cter ∈ centers, i.e., C = [c1, c2 , · · · , ck] . The k-means objective is to minimize the within,-c··lu·s ,tecr squared distances: ?k-means(C) = = x?∈Dmiin?x − ci?22 x?∈Db∈mHin1/k?x − Cb?22 (2) where H1/k ≡ {b | b ∈ {0, 1}k and ?b? = 1}, i.e., b is a binary vee Hctor comprising a ,1-1o}f-ka encoding. Lloyd’s b k- imse aa bni-s algorithm [11] finds a local minimum of (2) by iterative, alternating optimization with respect to C and the b’s. The k-means model is simple and intuitive, using NN search to assign points to centers. The assignment of points to centers can be represented with a log k-bit index per data point. The cost of storing the centers grows linearly with k. 3. Orthogonal k-means with 2m centers With a compositional model one can represent cluster centers more efficiently. One such approach is to re- construct each input with an additive combination of the columns of C. To this end, instead of the 1-of-k encoding in (2), we let b be a general m-bit vector, b ∈ Hm ≡ {0, 1}m, (a2n)d, wCe e∈ l Rt bp× bme. aA gse nsuerchal, meac-bhi tc vluesctteorr c, ebnt ∈er H His th≡e sum o}f a asundbs Cet o∈f Rthe columns of C. There are 2m possible subsets, and therefore k = 2m centers. While the number of parameters in the quantizer is linear in m, the number of centers increases exponentially. While efficient in representing cluster centers, the approach is problematic, because solving bˆ = abrg∈Hmmin?x − Cb?22,(3) is intractable; i.e., the discrete optimization is not submodular. Obviously, for small 2m one could generate all possible centers and then perform NN search to find the optimal solution, but this would not scale well to large values of m. One key observation is that if the columns of C are orthogonal, then optimization (3) becomes tractable. To explain this, without loss of generality, assume the bits belong to {−1, 1} instead of {0, 1}, i.e., b? ∈ Hm? ≡ {−1, 1}m. tToh e{−n, ?x − Cb??22 = xTx + b?TCTCb? − 2xTCb? .(4) For diagonal CTC, the middle quadratic term on the RHS becomes trace(CTC), independent of b?. As a consequence, when C has orthogonal columns, abr?g∈mH?min?x − Cb??22 = sgn(CTx) ,(5) where sgn(·) is the element-wise sign function. Cluster centers are depicted by dots, and cluster boundaries by dashed lines. (Left) Clusters formed by a 2-cube with no rotation, scaling or translation; centers = {b? |b? ∈ H2? }. (Right) Rotation, scaling and translation are used to reduce distances between data points and cluster centers; centers = {μ + RDb? | b? ∈ H2? }. 
To reduce quantization error further we can also introduce an offset, denoted μ, to translate x. Taken together with (5), this leads to the loss function for the model we call orthogonal k-means (ok-means):2 ?ok-means(C,μ) = x?∈Db?m∈iHn?m?x − μ − Cb??22. (6) Clearly, with a change of variables, b? = 2b −1, we can define new versions of μ and C, with ide=ntic 2abl −lo1s,s, w wfoer c wanh dicehthe unknowns are binary, but in {0, 1}m. T uhnek nook-wmnesa anrse quantizer uetn icnod {e0,s 1ea}ch data point as a vertex of a transformed m-dimensional unit hypercube. The transform, via C and μ, maps the hypercube vertices onto the input feature space, ideally as close as possible to the data points. The matrix C has orthogonal columns and can therefore be expressed in terms of rotation and scaling; i.e., C ≡ RD, where R ∈ Rp×m has orthonormal cinoglu;m i.ne.s, ( CRT ≡R = R DIm,) w, ahnerde eD R Ris ∈ diagonal and positive definite. The goal of learning is to find the parameters, R, D, and μ, which minimize quantization error. Fig. 1 depicts a small set of 2D data points (red x’s) and two possible quantizations. Fig. 1 (left) depicts the vertices of a 2-cube with C = I2 and zero translation. The cluster regions are simply the four quadrants of the 2D space. The distances between data points and cluster centers, i.e., the quantization errors, are relatively large. By comparison, Fig. 1 (right) shows how a transformed 2-cube, the full model, can greatly reduce quantization errors. 3.1. Learning ok-means To derive the learning algorithm for ok-means we first rewrite the objective in matrix terms. Given n data points, let X = [x1, x2, · · · , xn] ∈ Rp×n. Let B? ∈ {−1, 1}m×n denote the corresponding ∈clu Rster assignment {co−ef1f,ic1i}ents. Our 2While closely related to ITQ, we use the the relationship to k-means. term ok-means to emphasize 333000111 686 goal is to find the assignment coefficients B? and the transformation parameters, namely, the rotation R, scaling D, and translation μ, that minimize ?ok-means(B?, R, D,μ) = ?X − μ1T − RDB??f2 (7) = ?X? − RDB??f2 (8) where ?·?f denotes the Frobenius norm, X? ≡ X − μ1T, R iws hceornes ?tr·a?ined to have orthonormal columns≡ ≡(R XTR − μ=1 Im), and D is a positive diagonal matrix. Like k-means, coordinate descent is effective for optimizing (8). We first initialize μ and R, and then iteratively optimize ?ok-means with respect to B?, D, R, and μ: • Optimize B? and D: With straightforward algebraic manipulation, one can show that ?ok-means = ?RTX?−DB??f2 + ?R⊥TX??f2 , (9) where columns of R⊥ span the orthogonal complement of the column-space of R (i.e., the block matrix [R R⊥] is orthogonal). It follows that, given X? and R, we can optimize the first term in (9) to solve for B? and D. Here, DB? is the leastsquares approximation to RTX?, where RTX? and DB? × are m n. Further, the ith row of DB? can only contain earleem men×tsn .fr Fomur t{h−erd,i t,h +e dii} where di = Dii. Wherever tehleem corresponding delement} }o fw RheTrXe? d is positive (negative) we should put a positive (negative) value in DB?. The optimal di is determined by the mean absolute value of the elements in the ith row of RTX?: • • B? ← sgn ?RTX?? (10) D ← Diag?(mroewan?(abs(RTX?))) (11) Optimize R: Using the original objective (8), find R that minimizes ?X? −RA?f2 , subject to RTR = Im, and Am n≡i mDizBes?. ?TXhis i−s equivalent to an Orthogonal Procrustes problem [15], and can be solved exactly using SVD. 
In particular, by adding p − m rows of zeros to the bottom poaf rDtic, uDlaBr, bb eyc aodmdeins p p× − n. mT rhoewn sR o ifs z square a tnhde orthogoonf aDl ,a DndB can boem eessti pm ×at end. Twhiethn RSV isD s. qBuuatr es ainncde oDrtBho gisdegenerate we are only interested in the first m columns of R; the remaining columns are unique only up to rotation of the null-space.) Optimize μ: Given R, B? and D, the optimal μ is given by the column average of X −RDB?: μ ← mcoeluamn(X−RDB?) (12) 3.2. Approximate nearest neighbor search One application of scalable quantization is ANN search. Before introducing more advanced quantization techniques, we describe some experimental results with ok-means on Euclidean ANN search. The benefits of ok-means for ANN search are two-fold. Storage costs for the centers is reduced to O(log k), from O(k) with k-means. Second, substantial speedups are possible by exploiting fast methods for NN search on binary codes in Hamming space (e.g., [13]). Generally, in terms of a generic quantizer q(·), there are twoG neanteurraalll ways etrom messti omfa ate g ethneer dicis qtaunanceti z beetrw qe(·e)n, ttwheor vectors, v and u [7]. Using the Symmetric quantizer distance (SQD) ?v−u? is approximated by ?q(v)−q(u) ? . Using the Asymmetric quantizer doixsimtanacteed (A byQ ?Dq()v, only one o Uf sthineg tw thoe vectors is quantized, and ?v−u? is estimated as ?v−q(u) ? . vWechtiloer sS iQs qDu might b, aen slightly f?as itse ers t iom compute, vA−QqD(u i)?n-. curs lower quantization errors, and hence is more accurate. For ANN search, in a pre-processing stage, each database entry, u, is encoded into a binary vector corresponding to the cluster center index to which u is assigned. At test time, the queries may or may not be encoded into indices, depending on whether one uses SQD or AQD. In the ok-means model, the quantization of an input x is straightforwardly shown to be qok(x) = μ + RD sgn(RT(x − μ)) . (13) The corresponding m-bit cluster index is sgn(RT(x − μ)). Given two indices, namely b?1 , b?2 ∈ {−1, +1}m,( txhe − symmetric ok-means quantizer distance∈ ∈is { SQDok(b?1, b?2) = ?μ + RDb?1 − μ − RDb?2 ?22 = ?D(b?1 − b?2)?22 .(14) In effect, SQDok is a weighted Hamming distance. It is the sum of the squared diagonal entries of D corresponding to bits where b?1 and b?2 differ. Interestingly, in our experiments with ok-means, Hamming and weighted Hamming distances yield similar results. Thus, in ok-means experiments we simply report results for Hamming distance, to facilitate comparisons with other techniques. When the scaling in ok-means is constrained to be isotropic (i.e., D = αIm for α ∈ R+), then SQDok becomes a constant multiple off othre α αu ∈sua Rl Hamming distance. As discussed in Sec. 5, this isotropic ok-means is closely related to ITQ [5]. The ok-means AQD between a feature vector x and a cluster index b?, is defined as AQDok(x, b?) = ?x −μ − RDb??22 = ?RTx? − Db??22 + ?R⊥Tx??22 , (15) where x? = x−μ. For ANN search, in comparing distances from query x t−o a .d Faotars AetN oNf binary i,n idni ccoesm, ptharei snegc odnisdta tnecrems on the RHS of (15) is irrelevant, since it does not depend on b?. Without this term, AQDok becomes a form of asymmetric Hamming (AH) distance between RTx? and b?. While previous work [6] discussed various ad hoc AH distance measures for binary hashing, interestingly, the optimal AH distance for ok-means is derived directly in our model. 333000111977 1M SIFT, 64−bit encoding (k = 264) lRc@aeR0 . 
206148 10 RIoPT1Qk K−Q m( AHeDa)Hn )s (AH)10K Figure 2. Euclidean ANN retrieval results for different methods and distance functions on the 1M SIFT dataset. 3.3. Experiments with ok-means Following [7], we report ANN search results on 1M SIFT, a corpus of 128D SIFT descriptors with disjoint sets of 100K training, 1M base, and 10K test vectors. The training set is used to train models. The base set is the database, and the test points are queries. The number of bits m is typically less than p, but no pre-processing is needed for dimensionality reduction. Rather, to initialize learning, R is a random rotation of the first m principal directions of the training data, and μ is the mean of the data. For each query we find R nearest neighbors, and compute Recall@R, the fraction of queries for which the ground-truth Euclidean NN is found in the R retrieved items. Fig. 2 shows the Recall@R plots for ok-means with Hamming (H) ≈ SQDok and asymmetric Hamming (AH) H≡a mAmQiDngok ( Hd)is t≈an SceQ, vs. ITQ [5] and PQ [7]. The PQ ≡met AhoQdD exploits a more complex asymmetric distance func- tion denoted AD ≡ AQDck (defined in Sec. 6. 1). Note first tthioant od ke-nmoeteadns A improves upon ITQ, with both Hamming and asymmetric Hamming distances. This is due to the scaling parameters (i.e., D) in ok-means. If one is interested in Hamming distance efficient retrieval, ok-means is prefered over ITQ. But better results are obtained with the asymmetric distance function. Fig. 2 also shows that PQ achieves superior recall rates. This stems from its joint encoding of multiple feature dimensions. In ok-means, each bit represents a partition ofthe feature space into two clusters, separated by a hyperplane. The intersection of m orthogonal hyperplanes yields 2m regions. Hence we obtain just two clusters per dimension, and each dimension is encoded independently. In PQ, by comparison, multiple dimensions are encoded jointly, with arbitrary numbers of clusters. PQ thereby captures statistical dependencies among different dimensions. We next extend ok-means to jointly encode multiple dimensions. 4. Cartesian k-means In the Cartesian k-means (ck-means) model, each region center is expressed parametrically as an additive combination of multiple subcenters. Let there be m sets of subcen- Fidg2ure31.Ddep4icton5fCartde?1si2nqdu?4ati5z?3aton 4qDck(da)t= ,?wd i?5134t?h the first (last) two dimensions sub-quantized on the left (right). Cartesian k-means quantizer denoted qck, combines the subquantizations in subspaces, and produces a 4D reconstruction. ters, each with h elements.3 Let C(i) be a matrix whose columns comprise the elements of the ith subcenter set; C(i) ∈ Rp×h. Finally, assume that each cluster center, c, is the∈ sum of exactly one element from each subcenter set: = ?C(i)b(i) m c i?= ?1 , (16) where b(i) ∈ H1/h is a 1-of-h encoding. As a conc∈re Hte example (see Fig. 3), suppose we are given 4D inputs, x ∈ R4, and we split each datum into m = 2 parts: z(1) = ?I2 0? x , and Then, suppose w?e z(2) = ?0 I2? x .(17) qu?antize each part, z(?1) and? z(2) , sepa- × rately. As depicted in Fig. 3 (left and middle), we could use h = 5 subcenters for each one. Placing the corresponding subcenters in the columns of 4 5 matrices C(1) and C(2) , C(1)=?d1d20d2×35d4d5?, C(2)=?d?1d?20d2×?35d?4d?5?, we obtain a model (16) that provides 52 possible centers with which to quantize the data. More generally, the total number of model centers is k = hm. 
Each center is a member of the Cartesian product of the subcenter sets, hence the name Cartesian k-means. Importantly, while the number of centers is hm, the number of subcenters is only mh. The model provides a super-linear number of centers with a linear number of parameters. The learning objective for Cartesian k-means is ?ck-means(C) =x?∈D{b(mi)}inim=1???x −i?=m1C(i)b(i)??22 where b(i) ∈ H1/h, and C ≡ [C(1), ··· , (18) C(m)] ∈ Rp×mh. [b(1)T, ··· ,b(m)T] If we let bT ≡ then the second sum in (18) can be expressed succinctly as Cb. 3While here we assume a fixed cardinality for all subcenter sets, the model is easily extended to allow sets with different cardinalities. 333000112088 The key problem with this formulation is that the min- imization of (18) with respect to the b(i) ’s is intractable. Nevertheless, motivated by orthogonal k-means (Sec. 3), encoding can be shown to be both efficient and exact if we impose orthogonality constraints on the sets of subcenters. To that end, assume that all subcenters in different sets are pairwise orthogonal; i.e., ∀i,j | i = j C(i)TC(j) = 0h×h .(19) Each subcenter matrix C(i) spans a linear subspace of Rp, and the linear subspaces for different subcenter sets do not intersect. Hence, (19) implies that only the subcenters in C(i) can explain the projection of x onto the C(i) subspace. In the example depicted in Fig. 3, the input features are simply partitioned (17), and the subspaces clearly satisfy the orthogonality constraints. It is also clear that C ≡ [ C(1) C(2)] is block diagonal, Iwtit ihs 2 a ×lso o5 c bleloacrks t,h adte Cnote ≡d D(1) and D(]2 i)s . bTlohec quantization error t×he5re bfolorcek bse,c doemnoeste ?x − Cb?22 = ???zz((12))?−?D0(1) D0(2)? ?b ( 21) ? ?2 = ?????z(1)−D(1)b(1)??2+???z(2)−D(2??)b(2)??2. In words, the squa??zred quantization?? erro??r zis the sum of t??he squared errors on the subspaces. One can therefore solve for the binary coefficients of the subcenter sets independently. In the general case, assuming (19) is satisfied, C can be expressed as a product R D, where R has orthonormal columns, and D is block diagonal; i.e., C = R D where Ra=nd[hRe(n1c),e·C(,i)R=(mR)]i,Dan(di).DW=i⎢t⎡⎣⎢hDs0i.(1≡)Dra0(n2)k. C.(Di)0(.m,i)t⎦⎥ ⎤fo,l(2w0)s that D(i) ∈ Rsi×h and R(i) ∈ Rp×≡sira. Clearly, ? si ≤ p, because of∈ ∈th Re orthogonality ∈con Rstraints. Replacing C(i) with R(i)D(i) in the RHS of (18?), we find ?x−Cb?22 = ?m?z(i)−D(i)b(i)?22+?R⊥Tx?22, (21) i?= ?1 where z(i)≡R(i)Tx, and R⊥is the orthogonal complement of R. This≡ ≡shRows that, with orthogonality constraints (19), the ck-means encoding problem can be split into m independent sub-encoding problems, one for each subcenter set. To find the b(i) that minimizes ??z(i) we perform NN search with z(i) again??st h si-dim vec??tors in D(i) . This entails a cost of O(hsi).? Fortunately, all? the elements of b can be found very efficiently, in O(hs), where s ≡ ? si. If we also include the cost of rotating x to −D(i)b(i)?22, Taocbkml-emt1he.aoAnds um#ceh2nkmrtyeofskm-#lobgeiatknsh,cOo-rm( pOec(2oamkn+spt),hpan)dkO- m(pOecosa(n+mst(hipns)khtesro)m of number of centers, number of bits needed for indices (i.e., log #centers), and the storage cost of representation, which is the same as the encoding cost to convert inputs to indices. The last column shows the costs given an assumption that C has a rank of s ≥ m. obtain each z(i) , the total encoding cost is O(ps + hs), i.e., O(p2+hp). Alternatively, one could perform NN search in p-dim C(i) ’s, to find the b(i) ’s, which costs O(mhp). 
Table 1 summerizes the models in terms of their number of centers, index size, and cost of storage and encoding. 4.1. Learning ck-means We can re-write the ck-means objective (18) in matrix form with the Frobenius norm; i.e., ?ck-means(R, D, B) = ? X − RDB ?f2 (22) where the columns of X and B comprise the data points and the subcenter assignment coefficients. The input to the learning algorithm is the training data X, the number of subcenter sets m, the cardinality of the subcenter sets h, and an upper bound on the rank of C, i.e., s. In practice, we also let each rotation matrix R(i) have the same number of columns, i.e., si = s/m. The outputs are the matrices {R(i) } and {D(i) } that provide a local minima of (22). Learning begins hwaitth p trohev idneit aia lloizcaatlio mni noimf Ra oanf d(2 D2)., followed by iterative coordinate descent in B, D, and R: • Optimize B and D: With R fixed, the objective is given by (21) where ?R⊥TX?f2 R(i)TX, is constant. Given data pro- jections Z(i) ≡ to optimize for B and D we perform one step oRf k-means for each subcenter set: – Assignment: Perform NN searches for each subcenter set to find the assignment coefficients, B(i) , B(i)← arBg(mi)in?Z(i)− D(i)B(i)?f2 – • Update: D(i)← arDg(mi)in?Z(i)− D(i)B(i)?f2 Optimize R: Placing the D(i) ’s along the diagonal of D, as in (20), and concatenating B(i) ’s as rows of B, [B(1)T, ...,B(m)T], i.e., BT = the optimization of R reduces to the orthogonal Procrustes problem: R ← argRmin?X − RDB?f2. In experiments below, R ∈ Rp×p, and rank(C) ≤ p is unIcnon esxtpraeirniemde. tFso rb high-dimensional adnadta r awnhke(rCe ) ra ≤nk( pX is) ?np, fnostrr efficiency ri th may dbime eusnesfiuoln atol dcoantast wrahiner er anr akn(kC(X). 333000112199 5. Relations and related work As mentioned above, there are close mathematical relationships between ok-means, ck-means, ITQ for binary hashing [5], and PQ for vector quantization [7]. It is instructive to specify these relationships in some detail. Iterative Quantization vs. Orthogonal k-means. ITQ [5] is a variant of locality-sensitive hashing, mapping data to binary codes for fast retrieval. To extract m-bit codes, ITQ first zero-centers the data matrix to obtain X?. PCA is then used for dimensionality reduction, fromp down to m dimensions, after which the subspace representation is randomly rotated. The composition of PCA and the random rotation can be expressed as WTX? where W ∈ Rp×m. ITQ then solves for the m×m rotation mawthriex,r eR W, t ∈hatR minimizes ?ITQ(B?, R) = ?RTWTX?− B??f2 , s.t. RTR = Im×m, (23) where B? ∈ {−1, +1}n×p. ITQ ro∈tate {s− t1h,e+ subspace representation of the data to match the binary codes, and then minimizes quantization error within the subspace. By contrast, ok-means maps the binary codes into the original input space, and then considers both the quantization error within the subspace and the out-of-subspace projection error. A key difference is the inclusion of ?R⊥X? ?f2 in the ok-means objective (9). This is important s ?inRce one can often reduce quantization errors by projecting out significant portions of the feature space. Another key difference between ITQ and ok-means is the inclusion of non-uniform scaling in ok-means. This is important when the data are not isotropic, and contributes to the marked improvement in recall rates in Fig. 2. Orthogonal k-means vs. Cartesian k-means. We next show that ok-means is a special case of ck-means with h = 2, where each subcenter set has two elements. 
To this end, let C(i) = and let b(i) = be the ith subcenter matrix selection vector. Since b(i) is a 1-of-2 encoding (?10? or ?01?), it follows that: [c(1i) c2(i)], b1(i)c(1i)+ b(2i)c2(i) = [b(1i) b2(i)]T c(1i)+2 c2(i)+ bi?c1(i)−2 c2(i), (24) b1(i) − b2(i) where bi? ≡ ∈ {−1, +1}. With the following setting of t≡he bok-m−e abns parameters, μ=?i=m1c1(i)+2c(2i),andC=?c1(1)−2c2(1),...,c1(m)−2c2(m)?, it should be clear that ?i C(i)b(i) = μ + Cb?, where b? ∈ {−1, +1}m, and b??i is the ith bit of b?, used in (24). Similarly, one can also map ok-means parameters onto corresponding subcenters for ck-means. Thus, there is a 1-to-1 mapping between the parameterizations of cluster centers in ok-means and ck-means for h = 2. The benefits of ok-means are its small number of parameters, and its intuitive formulation. The benefit of the ck-means generalization is its joint encoding of multiple dimensions with an arbitrary number of centers, allowing it to capture intrinsic dependence among data dimensions. Product Quantization vs. Cartesian k-means. PQ first splits the input vectors into m disjoint sub-vectors, and then quantizes each sub-vector separately using a subquantizer. Thus PQ is a special case of ck-means where the rotation R is not optimized; rather, R is fixed in both learning and retrieval. This is important because the sub-quantizers act independently, thereby ignoring intrasubspace statistical dependence. Thus the selection of subspaces is critical for PQ [7, 8]. J e´gou et al. [8] suggest using PCA, followed by random rotation, before applying PQ to high-dimensional vectors. Finding the rotation to minimize quantization error is mentioned in [8], but it was considered too difficult to estimate. Here we show that one can find a rotation to minimize the quantization error. The ck-means learning algorithm optimizes sub-quantizers in the inner loop, but they interact in an outer loop that optimizes the rotation (Sec. 4.1). 6. Experiments 6.1. Euclidean ANN search Euclidean ANN is a useful task for comparing quantization techniques. Given a database of ck-means indices, and a query, we use Asymmetric and Symmetric ck-means quantizer distance, denoted AQDck and SQDck. The AQDck between a query x and a binary index b ≡ ?b(1)T, ...,b(m)T?T, derived in (21), is AQDck(x,b) ?m?z(i)−D(i)b(i)?22+ ?R⊥Tx?22.(25) = i?= ?1 ?z(i)−D(i)b(i)?22is the distance between the ithpro- Here, jection?? of x, i.e., z(i) , ?a?nd a subcenter projection from D(i) selecte?d by b(i) . Give?n a query, these distances for each i, and all h possible values of b(i) can be pre-computed and stored in a query-specific h×m lookup table. Once created, tshtoer lookup qtaubelery i-ss pusecedif itoc compare akullp pda tatbablea.se O points etaot tehde, query. So computing AQDck entails m lookups and additions, somewhat more costly than Hamming distance. The last term on the RHS of (25) is irrelevant for NN search. Since PQ is a special case of ck-means with pre-defined subspaces, the same distances are used for PQ (cf. [7]). The SQDck between binary codes b1 and b2 is given by SQDck(b1,b2) = ?m?D(i)b1(i)− D(i)b2(i)?22.(26) i?= ?1 Since b1(i) and b(2i) are 1-of-?h encodings, an m×h?×h lookup table can be createadre t 1o- sft-ohre e nacllo pairwise msu×bh×-dhis ltaonockeusp. 
3 0.80947083 38 cvpr-2013-All About VLAD
Author: unkown-author
Abstract: The objective of this paper is large scale object instance retrieval, given a query image. A starting point of such systems is feature detection and description, for example using SIFT. The focus of this paper, however, is towards very large scale retrieval where, due to storage requirements, very compact image descriptors are required and no information about the original SIFT descriptors can be accessed directly at run time. We start from VLAD, the state-of-the-art compact descriptor introduced by Jégou et al. [8] for this purpose, and make three novel contributions: first, we show that a simple change to the normalization method significantly improves retrieval performance; second, we show that vocabulary adaptation can substantially alleviate problems caused when images are added to the dataset after initial vocabulary learning. These two methods set a new state-of-the-art over all benchmarks investigated here for both mid-dimensional (20k-D to 30k-D) and small (128-D) descriptors. Our third contribution is a multiple spatial VLAD representation, MultiVLAD, that allows the retrieval and localization of objects that only extend over a small part of an image (again without requiring use of the original image SIFT descriptors).
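For reference, a minimal sketch of the VLAD construction the abstract starts from: local descriptors are assigned to their nearest vocabulary centers and the residuals are accumulated per center. The per-block normalization shown here (before the global L2 normalization) is included as an assumption about the kind of normalization change the abstract alludes to; names and sizes are illustrative.

import numpy as np

def vlad(descriptors, centers, intra_norm=True):
    # descriptors: (n, d) local features from one image; centers: (k, d) vocabulary.
    k, d = centers.shape
    nn = ((descriptors[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
    v = np.zeros((k, d))
    for i in range(k):
        if np.any(nn == i):
            v[i] = (descriptors[nn == i] - centers[i]).sum(0)  # residual accumulation
    if intra_norm:
        # normalize each center's block separately before the global L2 norm
        norms = np.linalg.norm(v, axis=1, keepdims=True)
        v = np.where(norms > 0, v / np.maximum(norms, 1e-12), v)
    v = v.reshape(-1)
    return v / max(np.linalg.norm(v), 1e-12)

rng = np.random.default_rng(0)
centers = rng.standard_normal((16, 128))       # k = 16 visual words, SIFT-sized
image_descs = rng.standard_normal((300, 128))
print(vlad(image_descs, centers).shape)        # (2048,) = k * d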
4 0.77165991 319 cvpr-2013-Optimized Product Quantization for Approximate Nearest Neighbor Search
Author: Tiezheng Ge, Kaiming He, Qifa Ke, Jian Sun
Abstract: Product quantization is an effective vector quantization approach to compactly encode high-dimensional vectors for fast approximate nearest neighbor (ANN) search. The essence of product quantization is to decompose the original high-dimensional space into the Cartesian product of a finite number of low-dimensional subspaces that are then quantized separately. Optimal space decomposition is important for the performance of ANN search, but still remains unaddressed. In this paper, we optimize product quantization by minimizing quantization distortions w.r.t. the space decomposition and the quantization codebooks. We present two novel methods for optimization: a nonparametric method that alternatively solves two smaller sub-problems, and a parametric method that is guaranteed to achieve the optimal solution if the input data follows some Gaussian distribution. We show by experiments that our optimized approach substantially improves the accuracy of product quantization for ANN search.
5 0.68763125 236 cvpr-2013-K-Means Hashing: An Affinity-Preserving Quantization Method for Learning Binary Compact Codes
Author: Kaiming He, Fang Wen, Jian Sun
Abstract: In computer vision there has been increasing interest in learning hashing codes whose Hamming distance approximates the data similarity. The hashing functions play roles in both quantizing the vector space and generating similarity-preserving codes. Most existing hashing methods use hyper-planes (or kernelized hyper-planes) to quantize and encode. In this paper, we present a hashing method adopting the k-means quantization. We propose a novel Affinity-Preserving K-means algorithm which simultaneously performs k-means clustering and learns the binary indices of the quantized cells. The distance between the cells is approximated by the Hamming distance of the cell indices. We further generalize our algorithm to a product space for learning longer codes. Experiments show our method, named as K-means Hashing (KMH), outperforms various state-of-the-art hashing encoding methods.
6 0.64776033 69 cvpr-2013-Boosting Binary Keypoint Descriptors
7 0.63169253 268 cvpr-2013-Leveraging Structure from Motion to Learn Discriminative Codebooks for Scalable Landmark Classification
8 0.61070561 404 cvpr-2013-Sparse Quantization for Patch Description
9 0.51839978 343 cvpr-2013-Query Adaptive Similarity for Large Scale Object Retrieval
10 0.50742477 63 cvpr-2013-Binary Code Ranking with Weighted Hamming Distance
11 0.48651809 87 cvpr-2013-Compressed Hashing
12 0.43588734 275 cvpr-2013-Lp-Norm IDF for Large Scale Image Search
13 0.43014443 223 cvpr-2013-Inductive Hashing on Manifolds
14 0.41423669 53 cvpr-2013-BFO Meets HOG: Feature Extraction Based on Histograms of Oriented p.d.f. Gradients for Image Classification
15 0.41016293 456 cvpr-2013-Visual Place Recognition with Repetitive Structures
16 0.40180176 403 cvpr-2013-Sparse Output Coding for Large-Scale Visual Recognition
17 0.37560707 112 cvpr-2013-Dense Segmentation-Aware Descriptors
18 0.37159345 151 cvpr-2013-Event Retrieval in Large Video Collections with Circulant Temporal Encoding
19 0.36796811 178 cvpr-2013-From Local Similarity to Global Coding: An Application to Image Classification
20 0.35271996 421 cvpr-2013-Supervised Kernel Descriptors for Visual Recognition
topicId topicWeight
[(10, 0.081), (16, 0.015), (26, 0.029), (33, 0.27), (36, 0.031), (67, 0.402), (69, 0.036), (87, 0.042)]
simIndex simValue paperId paperTitle
1 0.96975702 142 cvpr-2013-Efficient Detector Adaptation for Object Detection in a Video
Author: Pramod Sharma, Ram Nevatia
Abstract: In this work, we present a novel and efficient detector adaptation method which improves the performance of an offline trained classifier (baseline classifier) by adapting it to new test datasets. We address two critical aspects of adaptation methods: generalizability and computational efficiency. We propose an adaptation method which can be applied to various baseline classifiers and is also computationally efficient. For a given test video, we collect online samples in an unsupervised manner and train a random-fern adaptive classifier. The adaptive classifier improves the precision of the baseline classifier by validating the obtained detection responses from the baseline classifier as correct detections or false alarms. Experiments demonstrate the generalizability, computational efficiency and effectiveness of our method, as we compare our method with state-of-the-art approaches for the problem of human detection and show good performance with high computational efficiency on two different baseline classifiers.
2 0.964477 103 cvpr-2013-Decoding Children's Social Behavior
Author: James M. Rehg, Gregory D. Abowd, Agata Rozga, Mario Romero, Mark A. Clements, Stan Sclaroff, Irfan Essa, Opal Y. Ousley, Yin Li, Chanho Kim, Hrishikesh Rao, Jonathan C. Kim, Liliana Lo Presti, Jianming Zhang, Denis Lantsman, Jonathan Bidwell, Zhefan Ye
Abstract: We introduce a new problem domain for activity recognition: the analysis of children ’s social and communicative behaviors based on video and audio data. We specifically target interactions between children aged 1–2 years and an adult. Such interactions arise naturally in the diagnosis and treatment of developmental disorders such as autism. We introduce a new publicly-available dataset containing over 160 sessions of a 3–5 minute child-adult interaction. In each session, the adult examiner followed a semistructured play interaction protocol which was designed to elicit a broad range of social behaviors. We identify the key technical challenges in analyzing these behaviors, and describe methods for decoding the interactions. We present experimental results that demonstrate the potential of the dataset to drive interesting research questions, and show preliminary results for multi-modal activity recognition.
3 0.94367975 398 cvpr-2013-Single-Pedestrian Detection Aided by Multi-pedestrian Detection
Author: Wanli Ouyang, Xiaogang Wang
Abstract: In this paper, we address the challenging problem of detecting pedestrians who appear in groups and have interaction. A new approach is proposed for single-pedestrian detection aided by multi-pedestrian detection. A mixture model of multi-pedestrian detectors is designed to capture the unique visual cues which are formed by nearby multiple pedestrians but cannot be captured by single-pedestrian detectors. A probabilistic framework is proposed to model the relationship between the configurations estimated by single- and multi-pedestrian detectors, and to refine the single-pedestrian detection result with multi-pedestrian detection. It can integrate with any single-pedestrian detector without significantly increasing the computation load. 15 state-of-the-art single-pedestrian detection approaches are investigated on three widely used public datasets: Caltech, TUD-Brussels and ETH. Experimental results show that our framework significantly improves all these approaches. The average improvement is 9% on the Caltech-Test dataset, 11% on the TUD-Brussels dataset and 17% on the ETH dataset in terms of average miss rate. The lowest average miss rate is reduced from 48% to 43% on the Caltech-Test dataset, from 55% to 50% on the TUD-Brussels dataset and from 51% to 41% on the ETH dataset.
Author: Enrique G. Ortiz, Alan Wright, Mubarak Shah
Abstract: This paper presents an end-to-end video face recognition system, addressing the difficult problem of identifying a video face track using a large dictionary of still face images of a few hundred people, while rejecting unknown individuals. A straightforward application of the popular ℓ1-minimization for face recognition on a frame-by-frame basis is prohibitively expensive, so we propose a novel algorithm, Mean Sequence SRC (MSSRC), that performs video face recognition using a joint optimization leveraging all of the available video data and the knowledge that the face track frames belong to the same individual. By adding a strict temporal constraint to the ℓ1-minimization that forces individual frames in a face track to all reconstruct a single identity, we show the optimization reduces to a single minimization over the mean of the face track. We also introduce a new Movie Trailer Face Dataset collected from 101 movie trailers on YouTube. Finally, we show that our method matches or outperforms the state-of-the-art on three existing datasets (YouTube Celebrities, YouTube Faces, and Buffy) and our unconstrained Movie Trailer Face Dataset. More importantly, our method excels at rejecting unknown identities by at least 8% in average precision.
5 0.8981787 45 cvpr-2013-Articulated Pose Estimation Using Discriminative Armlet Classifiers
Author: Georgia Gkioxari, Pablo Arbeláez, Lubomir Bourdev, Jitendra Malik
Abstract: We propose a novel approach for human pose estimation in real-world cluttered scenes, and focus on the challenging problem of predicting the pose of both arms for each person in the image. For this purpose, we build on the notion of poselets [4] and train highly discriminative classifiers to differentiate among arm configurations, which we call armlets. We propose a rich representation which, in addition to standard HOG features, integrates the information of strong contours, skin color and contextual cues in a principled manner. Unlike existing methods, we evaluate our approach on a large subset of images from the PASCAL VOC detection dataset, where critical visual phenomena, such as occlusion, truncation, multiple instances and clutter are the norm. Our approach outperforms Yang and Ramanan [26], the state-of-the-art technique, with an improvement from 29.0% to 37.5% PCP accuracy on the arm keypoint prediction task, on this new pose estimation dataset.
6 0.89287317 275 cvpr-2013-Lp-Norm IDF for Large Scale Image Search
same-paper 7 0.88409501 246 cvpr-2013-Learning Binary Codes for High-Dimensional Data Using Bilinear Projections
8 0.88191593 2 cvpr-2013-3D Pictorial Structures for Multiple View Articulated Pose Estimation
9 0.87104982 375 cvpr-2013-Saliency Detection via Graph-Based Manifold Ranking
10 0.85570335 345 cvpr-2013-Real-Time Model-Based Rigid Object Pose Estimation and Tracking Combining Dense and Sparse Visual Cues
11 0.82787454 339 cvpr-2013-Probabilistic Graphlet Cut: Exploiting Spatial Structure Cue for Weakly Supervised Image Segmentation
12 0.81831264 254 cvpr-2013-Learning SURF Cascade for Fast and Accurate Object Detection
13 0.81700402 288 cvpr-2013-Modeling Mutual Visibility Relationship in Pedestrian Detection
14 0.80324817 119 cvpr-2013-Detecting and Aligning Faces by Image Retrieval
15 0.79529995 363 cvpr-2013-Robust Multi-resolution Pedestrian Detection in Traffic Scenes
16 0.79448164 122 cvpr-2013-Detection Evolution with Multi-order Contextual Co-occurrence
17 0.77494806 438 cvpr-2013-Towards Pose Robust Face Recognition
18 0.77250808 63 cvpr-2013-Binary Code Ranking with Weighted Hamming Distance
19 0.77044946 60 cvpr-2013-Beyond Physical Connections: Tree Models in Human Pose Estimation
20 0.77014291 338 cvpr-2013-Probabilistic Elastic Matching for Pose Variant Face Verification