cvpr cvpr2013 cvpr2013-38 knowledge-graph by maker-knowledge-mining

38 cvpr-2013-All About VLAD


Source: pdf

Author: unknown-author

Abstract: The objective of this paper is large scale object instance retrieval, given a query image. A starting point of such systems is feature detection and description, for example using SIFT. The focus of this paper, however, is towards very large scale retrieval where, due to storage requirements, very compact image descriptors are required and no information about the original SIFT descriptors can be accessed directly at run time. We start from VLAD, the state-of-the-art compact descriptor introduced by Jégou et al. [8] for this purpose, and make three novel contributions: first, we show that a simple change to the normalization method significantly improves retrieval performance; second, we show that vocabulary adaptation can substantially alleviate problems caused when images are added to the dataset after initial vocabulary learning. These two methods set a new state-of-the-art over all benchmarks investigated here for both mid-dimensional (20k-D to 30k-D) and small (128-D) descriptors. Our third contribution is a multiple spatial VLAD representation, MultiVLAD, that allows the retrieval and localization of objects that only extend over a small part of an image (again without requiring use of the original image SIFT descriptors).

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 The focus of this paper, however, is towards very large scale retrieval where, due to storage requirements, very compact image descriptors are required and no information about the original SIFT descriptors can be accessed directly at run time. [sent-6, score-0.379]

2 Introduction The area of large scale particular object retrieval has seen a steady train of improvements in performance over the last decade. [sent-12, score-0.111]

3 Since the original introduction of the bag of visual words (BoW) formulation [16], there have been many notable contributions that have enhanced the descriptors [1, 15, 19], reduced quantization loss [6, 14, 18], and improved recall [1, 3, 4]. [sent-13, score-0.123]

4 16 bytes per image) so that all the descriptors for very large image datasets (e. [sent-18, score-0.121]

5 Its introduction has opened up a new research theme on the trade-off between memory footprint of an image descriptor and retrieval performance, e. [sent-21, score-0.143]

6 It differs from the BoW image descriptor by recording the difference from the cluster center, rather than the number of SIFTs assigned to the cluster. [sent-26, score-0.141]

7 VLAD is similar in spirit to the earlier Fisher vectors [11], as both record aspects of the distribution of SIFTs assigned to a cluster center. [sent-30, score-0.125]

8 The new normalization is simple, and always improves retrieval performance. [sent-35, score-0.132]

9 Multi-VLAD: We study the benefits of recording multiple VLADs for an image and show that retrieval performance is improved for small objects (those that cover only a small part of the image, or where there is a significant scale change from the query image). [sent-37, score-0.15]

10 Vocabulary adaptation: We investigate the problem of vocabulary sensitivity, where a vocabulary trained on one dataset, A, is used to represent another dataset B, and the performance is inferior to using a vocabulary trained on B. [sent-40, score-0.42]

11 We propose an efficient, simple, method for improving VLAD descriptors via vocabulary adaptation, without the need to store or recompute any local descriptors in the image database. [sent-41, score-0.392]

12 The first improves retrieval in general, and the second partially overcomes an important deficiency of VLAD, namely that it has inferior invariance to changes in scale (compared to a BoW approach). [sent-43, score-0.111]

13 The third contribution addresses a problem that arises in real-world applications where, for example, image databases grow with time and the original vocabulary is incapable of representing the additional images well. [sent-44, score-0.152]

14 The methods are combined and compared to the state of the art for larger scale retrieval (Oxford 105k and Flickr1M) in section 6. [sent-46, score-0.111]

15 Each descriptor is then assigned to the closest cluster of a vocabulary of size k (where k is typically 64 or 256, so that clusters are quite coarse). [sent-52, score-0.302]

16 For each of the k clusters, the residuals (vector differences between descriptors and cluster centers) are accumulated, and the k 128-D sums of residuals are concatenated into a single k × 128 dimensional descriptor; we refer to it as the unnormalized VLAD. [sent-53, score-0.472]
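
To make the aggregation concrete, here is a minimal NumPy sketch of the unnormalized VLAD construction described above; the function name and array shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def unnormalized_vlad(descriptors, centers):
    """Aggregate per-cluster residual sums into a single k*128-D vector.

    descriptors: (n, 128) local SIFT descriptors of one image.
    centers:     (k, 128) coarse k-means cluster centers (the vocabulary).
    """
    k, d = centers.shape
    vlad = np.zeros((k, d))
    # Assign each descriptor to its nearest cluster center.
    dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    assignments = dists.argmin(axis=1)
    # Accumulate residuals (descriptor minus its cluster center) per cluster.
    for i, a in enumerate(assignments):
        vlad[a] += descriptors[i] - centers[a]
    return vlad.reshape(-1)
```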

17 Note, VLAD is similar to other descriptors that record residuals such as Fisher vectors [11] and super-vector coding [20]. [sent-54, score-0.247]

18 Subsequently, a signed square rooting (SSR) normalization was introduced [5, 9], following its use by Perronnin et al. [sent-57, score-0.139]
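
As a point of reference for the normalizations compared later, a sketch of SSR (signed square rooting) followed by global L2 normalization might look as follows; the epsilon guard is an implementation detail, not from the paper.

```python
import numpy as np

def ssr_normalize(vlad, eps=1e-12):
    # Signed square rooting: keep the sign, take the square root of the magnitude.
    v = np.sign(vlad) * np.sqrt(np.abs(vlad))
    # Global L2 normalization so descriptors can be compared by scalar product.
    return v / (np.linalg.norm(v) + eps)
```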

19 [2] propose a different normalization scheme for the residuals and also investigate omitting SIFT descriptors that lie close to cluster boundaries. [sent-65, score-0.364]

20 Both give a substantial retrieval performance improvement for negligible additional computational cost, and we employ them here. [sent-67, score-0.11]

21 Benchmark datasets and evaluation procedure The performance is measured on two standard and publicly available image retrieval benchmarks, Oxford buildings and Holidays. [sent-70, score-0.117]

22 We follow the standard experimental scenario of [5] for all benchmarks: for Oxford 5k and 105k the detector and SIFT descriptor are computed as in [10]; while for Holidays(+Flickr1M) the publicly available SIFT descriptors are used. [sent-77, score-0.138]

23 Three different datasets are used for vocabulary building (i. [sent-79, score-0.135]

24 7 million SIFT descriptors, this vocabulary can be considered independent from all datasets. [sent-83, score-0.162]

25 Red crosses and blue circles correspond to local descriptors extracted from two different images, while the red and blue arrows correspond to the sum of their residuals (differences between descriptors and the cluster center). [sent-87, score-0.43]

26 However, by performing cluster center adaptation the residuals are better estimated (c), thus inducing a better estimate of the image similarity which is now consistent with the one induced by the clustering in (a). [sent-92, score-0.418]

27 1), VLAD is constructed by aggregating differences between local descriptors and coarse cluster centers, followed by L2 normalization. [sent-95, score-0.268]

28 For the dataset used to learn the clusters (by k-means) the centers are consistent in that the mean of all vectors assigned to a cluster over the entire dataset is the cluster center. [sent-96, score-0.309]

29 The similarity between VLAD descriptors is measured as the scalar product between them, and this decomposes as the sum of scalar products of aggregated residuals for each coarse cluster independently. [sent-101, score-0.45]

30 Consider a contribution to the similarity for one particular coarse cluster k. [sent-102, score-0.167]

31 We denote with X_k(1) and X_k(2) the sets of all descriptors in images 1 and 2, respectively, which get assigned to the same coarse cluster k. [sent-103, score-0.247]
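
Written out (as a reconstruction from the description above, not necessarily the paper's exact equation (1)), the similarity is the scalar product of the two VLADs, which decomposes into per-cluster scalar products of aggregated residuals, divided by the two L2 normalization constants C(1) and C(2):

$$
\mathrm{sim}(1,2) \;=\; \frac{1}{C^{(1)}\,C^{(2)}}\sum_{k=1}^{K}\Big(\sum_{x \in X_k^{(1)}} (x - \mu_k)\Big)^{\!\top}\Big(\sum_{x \in X_k^{(2)}} (x - \mu_k)\Big)
$$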

32 Thus, the similarity measure induced by the VLAD descriptors is increased if the scalar product between the residuals is positive, and decreased otherwise. [sent-107, score-0.299]

33 For example, the sets of descriptors illustrated in figure 1a are deemed to be very different (they are on opposite sides of the cluster center) thus giving a negative contribution to the similarity of the two images. [sent-108, score-0.259]

34 It is clear that the VLAD similarity measure is strongly affected by the cluster center. [sent-109, score-0.153]

35 We now introduce cluster center adaptation to improve residual estimates for an inconsistent vocabulary, namely, using new adapted cluster centers μ̂k that are consistent when computing residuals (equation (1)), instead of the original cluster centers μk. [sent-112, score-0.674]

36 The algorithm consists of two steps: (i) compute the adapted cluster centers μ̂k as the mean of all local descriptors in the dataset which are assigned to the same cluster k; (ii) recompute all VLAD descriptors by aggregating differences between local descriptors and the adapted centers μ̂k. [sent-113, score-0.587]

37 Note that step (ii) can be performed without actually storing or recomputing all local descriptors as their assignment to clusters remains unchanged and thus it is sufficient only to store the descriptor sums for every image and each cluster. [sent-114, score-0.2]
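
The sketch below shows how both steps can be carried out from stored per-image, per-cluster statistics alone, never revisiting the local descriptors; the per-cluster descriptor counts are an assumption (the text only mentions storing the descriptor sums), and all names are illustrative.

```python
import numpy as np

def adapt_centers(sums, counts):
    """Adapted center mu_hat_k = mean of all dataset descriptors assigned to cluster k.

    sums:   (n_images, k, 128) per-image, per-cluster descriptor sums (stored).
    counts: (n_images, k)      per-image, per-cluster descriptor counts (assumed stored).
    """
    total_sum = sums.sum(axis=0)                  # (k, 128)
    total_count = counts.sum(axis=0)[:, None]     # (k, 1)
    return total_sum / np.maximum(total_count, 1)

def readapt_vlads(sums, counts, mu_hat):
    """Recompute unnormalized VLADs without touching any local descriptor:
    since assignments are unchanged, each block is sum_k(x) - n_k * mu_hat_k."""
    blocks = sums - counts[:, :, None] * mu_hat[None, :, :]
    return blocks.reshape(len(sums), -1)
```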

38 Figure 1c illustrates the improvement achieved with center adaptation, as now residuals, and thus similarity scores, are similar to the ones obtained using the original clustering in figure 1a. [sent-115, score-0.134]

39 Note that for an adapted clustering the cluster center is indeed equal to the mean of all the descriptors assigned to it from the dataset. [sent-116, score-0.293]

40 Thus, our cluster adaptation scheme has no effect on VLADs obtained using consistent clusters, as desired. [sent-117, score-0.194]

41 To illustrate the power of the adaptation, a simple test is performed where the Flickr60k vocabulary is used for the Oxford 5k dataset, and the difference between the original vocabulary and the adapted one is measured. [sent-118, score-0.308]

42 The mean magnitude of the displacements between the k = 256 adapted and original cluster centers is 0. [sent-119, score-0.166]

43 For comparison, when the Paris vocabulary is used, the mean magnitude of the difference is only 0. [sent-121, score-0.135]

44 Figure 2 shows the improvement in retrieval performance obtained when using cluster center adaptation (adapt) compared to the standard VLAD under various settings. [sent-124, score-0.346]

45 Six methods are compared, namely: (i) baseline: the standard VLAD, (ii) intra-normalization, innorm (section 4), (iii) center adaptation, adapt (section 3), (iv) adapt followed by innorm, (v) baseline: signed square rooting SSR, (vi) aided baseline: adapt followed by SSR. [sent-127, score-0.42]

46 The results were generated using RootSIFT [1] descriptors and vocabularies of size k = 256. [sent-129, score-0.165]

47 Center adaptation improves results in all cases, especially when the vocabulary was computed on a vastly different image database or not computed at all. [sent-131, score-0.236]

48 For example, on Holidays with Paris vocabulary the mAP increases by 9. [sent-132, score-0.135]

49 The improvement is smaller when the Flickr60k vocabulary is used since the distribution of descriptors is more similar to the ones from the Holidays dataset, but it still exists: 3. [sent-138, score-0.259]

50 In this scenario, one is forced to use a fixed precomputed vocabulary since it is impractical (due to storage and processing requirements) to recompute it too frequently as the database grows, and to reassign all descriptors to the newly obtained clusters. [sent-146, score-0.288]

51 Using cluster center adaptation fits this scenario perfectly as it provides a way of computing better similarity estimates without the need to recompute or store all local descriptors, as descriptor assignment to clusters does not change. [sent-148, score-0.381]

52 Intra-normalization: In this section, it is shown that current methods for normalizing VLAD descriptors, namely simple L2 normalization [8] and signed square rooting [12], are prone to putting too much weight on bursty visual features, resulting in a suboptimal measure of image similarity. [sent-150, score-0.154]

53 We propose here a new normalization, termed intra-normalization, where the sum of residuals is L2 normalized within each VLAD block (i.e., per coarse cluster). [sent-156, score-0.125]
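
A minimal sketch of intra-normalization is shown below; the final global L2 normalization is an implementation choice made here so that descriptors can still be compared by scalar product.

```python
import numpy as np

def intra_normalize(vlad, k, eps=1e-12):
    """L2-normalize the sum of residuals independently within each of the k blocks."""
    blocks = vlad.reshape(k, -1)
    blocks = blocks / np.maximum(np.linalg.norm(blocks, axis=1, keepdims=True), eps)
    v = blocks.reshape(-1)
    return v / (np.linalg.norm(v) + eps)   # optional global L2 normalization
```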

54 This way, regardless of the amount of bursty image features their effect on VLAD similarity is localized to their coarse cluster, and is of similar magnitude to all other contributions from other clusters. [sent-160, score-0.113]

55 The geometric interpretation of intra-normalization is that the similarity of two VLAD vectors depends on the angles between the residuals in corresponding clusters. [sent-164, score-0.205]

56 This follows from the scalar product of equation (1): since the residuals are now L2 normalized the scalar product depends only on the cosine of the differences in angles of the residuals, not on their magnitudes. [sent-165, score-0.177]

57 [2] have also proposed an alternative normalization where the per-cluster mean of residuals is computed instead of the sum. [sent-167, score-0.165]

58 Note that all the arguments made in favor of cluster center adaptation (section 3) are unaffected by intra-normalization. [sent-169, score-0.139]

59 Specifically, only the values of C(1) and C(2) change in equation (1), and not the dependence of the VLAD similarity measure on the quality of coarse clustering which is addressed by cluster center adaptation. [sent-170, score-0.224]

60 The relative improvement in the retrieval performance (mAP) is 7. [sent-182, score-0.11]

61 All three experiments were performed on Holidays with a vocabulary of size k = 64 (small so that the components are visible) learnt on Paris with cluster center adaptation. [sent-185, score-0.27]

62 As shown in figure 2, intra-normalization (innorm) combined with center adaptation (adapt) always improves retrieval performance, and consistently outperforms other VLAD normalization schemes, namely the original VLAD with L2 normalization and SSR. [sent-187, score-0.348]

63 Center adaptation with intra-normalization (adapt+innorm) significantly outperforms the next best method (which is adapt+SSR); the average relative improvement on Oxford 5k and Holidays is 4. [sent-188, score-0.119]

64 Compared to SSR without center adaptation our improvements are even more evident: 35. [sent-191, score-0.143]

65 As before, our constraints are the memory footprint and that any performance gain should not involve returning to the original SIFT descriptors for the image. [sent-196, score-0.142]

66 14 VLAD descriptors are extracted: nine (3 × 3) at the finest scale, four (2 × 2) at the medium scale (each tile is formed by 2 × 2 tiles from the finest scale), and one covering the entire image. [sent-199, score-0.182]

67 An image in the database is assigned a score equal to the maximum similarity between any of its VLAD descriptors and the query. [sent-201, score-0.164]
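
A sketch of this max-pooled scoring rule, assuming all tile VLADs and the query VLAD are already L2-normalized so that the dot product gives the similarity:

```python
import numpy as np

def multivlad_score(query_vlad, tile_vlads):
    """Image score = maximum similarity between the query VLAD and any tile VLAD.

    query_vlad: (D,) descriptor of the query ROI.
    tile_vlads: (14, D) the image's MultiVLAD descriptors (9 + 4 + 1 tiles).
    """
    return float(np.max(tile_vlads @ query_vlad))
```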

68 As will be shown below, computing VLAD descriptors at fine scales enables retrieval of small objects, but at the cost of increased storage (memory) requirements. [sent-202, score-0.248]

69 To assess the retrieval performance, additional ROI annotation is provided for the Oxford 5k dataset, as the original only specifies ROIs for the query images. [sent-204, score-0.148]

70 Note that residuals in (d) and (e) are very small, thus VLAD similarities are very good linear estimators of region overlap. [sent-230, score-0.125]

71 Fine object localization: Given similarity scores between a query ROI and all the VLADs contained in the MultiVLAD of a result image, we show here how to obtain an estimate of the corresponding location within the result image. [sent-236, score-0.116]

72 The best ROI, r_best, is determined by minimizing residuals as r_best = argmin_r min_λ ||λ v(r) − s|| (2), where any negative similarities are clipped to zero. [sent-244, score-0.147]
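
A sketch of how equation (2) can be evaluated: for a fixed candidate ROI r, the inner minimization over λ is a one-dimensional least-squares problem with a closed-form solution. The construction of v(r) (taken here as the overlaps of r with the 14 tiles, following the surrounding description) and all names are assumptions.

```python
import numpy as np

def best_roi(candidates, tile_overlaps, similarities, eps=1e-12):
    """Pick the r minimizing min_lambda ||lambda * v(r) - s|| over candidate ROIs.

    candidates:    list of candidate rectangles r.
    tile_overlaps: dict mapping r -> v(r), overlaps of r with the 14 tiles.
    similarities:  s, the 14 VLAD similarity scores for this image.
    """
    s = np.maximum(np.asarray(similarities), 0.0)    # clip negative similarities to zero
    best_r, best_res = None, np.inf
    for r in candidates:
        v = np.asarray(tile_overlaps[r])
        lam = float(v @ s) / max(float(v @ v), eps)  # closed-form optimal lambda
        res = float(np.linalg.norm(lam * v - s))
        if res < best_res:
            best_r, best_res = r, res
    return best_r
```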

73 Regressed overlap scores mimic the similarity scores very well, as shown by small residuals in figure 4d and 4e. [sent-245, score-0.223]

74 In an analogous manner to computing mean average precision (mAP) scores for retrieval performance evaluation, for the purpose of localization evaluation the average overlap score is computed for each query, and averaged across queries to obtain the mean average overlap score. [sent-259, score-0.175]

75 For the region descriptors we use MultiVLAD descriptors with center adaptation and intra-normalization, with multiple vocabularies trained on Paris and projected down to 128-D. [sent-260, score-0.414]

76 For the greedy baseline, the MultiVLAD retrieval system returns the most similar tile to the query in terms of similarity of their VLAD descriptors. [sent-265, score-0.227]

77 Image descriptors of medium-dimensionality (20k-D to 32k-D) are compared in terms of retrieval performance (mAP) on the Oxford 5k and Holidays benchmarks. [sent-270, score-0.198]

78 The mean results are averaged over four different runs (corresponding to different random initializations of k-means for vocabulary building), and the single best result is from the vocabulary with the highest mAP. [sent-274, score-0.27]

79 Results and discussion: In the following sections we compare our two improvements of the VLAD descriptor, namely cluster center adaptation and intra-normalization, with the state-of-the-art. [sent-280, score-0.252]

80 First, the retrieval performance of the full size VLAD descriptors is evaluated, followed by tests on more compact descriptors obtained using dimensionality reduction, and then the variation in performance using vocabularies trained on different datasets is evaluated. [sent-281, score-0.4]

81 For all these tests we used RootSIFT descriptors clustered into k = 256 coarse clusters, and the vocabularies were trained on Paris and Flickr60k for Oxford 5k(+100k) and Holidays(+Flickr1M), respectively. [sent-283, score-0.197]

82 Cluster center adaptation followed by intra-normalization outperforms all previous methods. [sent-286, score-0.161]

83 128-D image descriptors are compared in terms of retrieval performance (mAP) on the Oxford 5k and Holidays benchmarks. [sent-293, score-0.198]

84 [9], apart from the recent multiple vocabulary (Multivoc) method [5]. [sent-295, score-0.135]

85 We employ the state-of-the-art method of [5] (Multivoc), which uses multiple vocabularies to obtain multiple VLAD (with SSR) descriptions of one image, and then performs dimensionality reduction using PCA and whitening to produce very small image descriptors (128-D). [sent-298, score-0.181]
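
A sketch of a PCA-plus-whitening projection to 128-D of the kind used in this pipeline; this is a generic implementation under stated assumptions, not the authors' or Multivoc's exact code.

```python
import numpy as np

def learn_pca_whitening(train_X, dim=128, eps=1e-9):
    """Learn a whitened PCA projection from training descriptors (n, D),
    e.g. concatenated multi-vocabulary VLADs."""
    mean = train_X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions.
    _, S, Vt = np.linalg.svd(train_X - mean, full_matrices=False)
    var = S[:dim] ** 2 / (len(train_X) - 1)
    P = Vt[:dim].T / np.sqrt(var + eps)          # (D, dim) whitened projection
    return mean, P

def project(X, mean, P):
    """Project descriptors to 128-D and re-normalize to unit length for dot-product search."""
    Y = (X - mean) @ P
    return Y / np.linalg.norm(Y, axis=1, keepdims=True)
```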

86 We mimic the experimental setup of [5], and learn the vocabulary and PCA on Paris 6k for the Oxford 5k tests. [sent-299, score-0.135]

87 In order to assess how the retrieval performance varies when using different vocabularies, we measure the proportion of the ideal mAP (i. [sent-307, score-0.133]

88 when the vocabulary is built on the benchmark dataset itself) achieved for each of the methods. [sent-309, score-0.15]

89 The baselines (VLAD and VLAD+SSR) perform very badly when an inappropriate (Flickr60k) vocabulary is used, achieving only 68% of the ideal performance for the best baseline (VLAD+SSR). [sent-311, score-0.179]

90 Using the Flickr60k vocabulary with adapt+innorm for Oxford 5k achieves 59% of the ideal performance, which is much worse than the 86% obtained. [sent-316, score-0.158]

91 Column one is the ideal case where retrieval is assessed on the same dataset as used to build the vocabulary. [sent-320, score-0.13]

92 when the vocabulary is built on Oxford 5k itself) is given in brackets. [sent-325, score-0.135]

93 With datasets of up to 1 million images and compact image descriptors (128-D) it is still possible to perform exhaustive nearest neighbor search. [sent-330, score-0.152]

94 There are no previously reported results on compact image descriptors for this dataset to compare to. [sent-337, score-0.14]

95 Conclusions and recommendations: We have presented three methods which improve standard VLAD descriptors in various respects, namely cluster center adaptation, intra-normalization and MultiVLAD. [sent-344, score-0.257]

96 Cluster center adaptation is a useful method for large scale retrieval tasks where image databases grow with time as content gets added. [sent-345, score-0.254]

97 However, we recommend intra-normalization always be used in conjunction with a good visual vocabulary or with center adaptation (as intra-normalization is sometimes outperformed by SSR when inconsistent clusters are used and no center adaptation is performed). [sent-349, score-0.466]

98 Aggregating local image descriptors into compact codes. [sent-424, score-0.125]

99 Object retrieval with large vocabularies and fast spatial matching. [sent-453, score-0.151]

100 Lost in quantization: Improving particular object retrieval in large scale image databases. [sent-462, score-0.111]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('vlad', 0.828), ('ssr', 0.218), ('holidays', 0.177), ('oxford', 0.165), ('vocabulary', 0.135), ('vlads', 0.131), ('residuals', 0.125), ('innorm', 0.12), ('descriptors', 0.106), ('roi', 0.104), ('adaptation', 0.101), ('cluster', 0.093), ('retrieval', 0.092), ('multivlad', 0.077), ('multivoc', 0.076), ('vocabularies', 0.059), ('rooting', 0.054), ('gou', 0.046), ('similarity', 0.042), ('center', 0.042), ('adapt', 0.041), ('normalization', 0.04), ('query', 0.039), ('bursty', 0.039), ('bow', 0.038), ('paris', 0.035), ('centers', 0.035), ('tile', 0.034), ('flickr', 0.033), ('coarse', 0.032), ('chum', 0.032), ('descriptor', 0.032), ('fisher', 0.031), ('egou', 0.03), ('fine', 0.03), ('million', 0.027), ('recompute', 0.027), ('rootsift', 0.027), ('signed', 0.027), ('clusters', 0.026), ('scalar', 0.026), ('burstiness', 0.025), ('sifts', 0.025), ('buildings', 0.025), ('overlap', 0.024), ('alleviates', 0.024), ('sift', 0.024), ('douze', 0.023), ('tiles', 0.023), ('unnormalized', 0.023), ('ideal', 0.023), ('intranormalization', 0.022), ('rbest', 0.022), ('rdtm', 0.022), ('roap', 0.022), ('pca', 0.022), ('perronnin', 0.021), ('adapted', 0.021), ('baselines', 0.021), ('storage', 0.02), ('greedy', 0.02), ('perd', 0.019), ('reimplementation', 0.019), ('rorm', 0.019), ('scale', 0.019), ('rectangles', 0.019), ('memory', 0.019), ('philbin', 0.019), ('compact', 0.019), ('aggregating', 0.019), ('inconsistent', 0.019), ('localization', 0.019), ('sivic', 0.019), ('store', 0.018), ('strongly', 0.018), ('square', 0.018), ('deemed', 0.018), ('improvement', 0.018), ('followed', 0.018), ('nla', 0.018), ('normalizations', 0.018), ('och', 0.018), ('recomputing', 0.018), ('proportion', 0.018), ('map', 0.017), ('benchmarks', 0.017), ('isard', 0.017), ('original', 0.017), ('scores', 0.016), ('arandjelovi', 0.016), ('rois', 0.016), ('whitening', 0.016), ('assigned', 0.016), ('vectors', 0.016), ('namely', 0.016), ('bytes', 0.015), ('clusterings', 0.015), ('occupying', 0.015), ('clustering', 0.015), ('dataset', 0.015)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999994 38 cvpr-2013-All About VLAD

Author: unknown-author

Abstract: The objective of this paper is large scale object instance retrieval, given a query image. A starting point of such systems is feature detection and description, for example using SIFT. The focus of this paper, however, is towards very large scale retrieval where, due to storage requirements, very compact image descriptors are required and no information about the original SIFT descriptors can be accessed directly at run time. We start from VLAD, the state-of-the-art compact descriptor introduced by Jégou et al. [8] for this purpose, and make three novel contributions: first, we show that a simple change to the normalization method significantly improves retrieval performance; second, we show that vocabulary adaptation can substantially alleviate problems caused when images are added to the dataset after initial vocabulary learning. These two methods set a new state-of-the-art over all benchmarks investigated here for both mid-dimensional (20k-D to 30k-D) and small (128-D) descriptors. Our third contribution is a multiple spatial VLAD representation, MultiVLAD, that allows the retrieval and localization of objects that only extend over a small part of an image (again without requiring use of the original image SIFT descriptors).

2 0.37347078 246 cvpr-2013-Learning Binary Codes for High-Dimensional Data Using Bilinear Projections

Author: Yunchao Gong, Sanjiv Kumar, Henry A. Rowley, Svetlana Lazebnik

Abstract: Recent advances in visual recognition indicate that to achieve good retrieval and classification accuracy on large-scale datasets like ImageNet, extremely high-dimensional visual descriptors, e.g., Fisher Vectors, are needed. We present a novel method for converting such descriptors to compact similarity-preserving binary codes that exploits their natural matrix structure to reduce their dimensionality using compact bilinear projections instead of a single large projection matrix. This method achieves comparable retrieval and classification accuracy to the original descriptors and to the state-of-the-art Product Quantization approach while having orders of magnitude faster code generation time and smaller memory footprint.

3 0.33011037 59 cvpr-2013-Better Exploiting Motion for Better Action Recognition

Author: Mihir Jain, Hervé Jégou, Patrick Bouthemy

Abstract: Several recent works on action recognition have attested the importance of explicitly integrating motion characteristics in the video description. This paper establishes that adequately decomposing visual motion into dominant and residual motions, both in the extraction of the space-time trajectories and for the computation of descriptors, significantly improves action recognition algorithms. Then, we design a new motion descriptor, the DCS descriptor, based on differential motion scalar quantities, divergence, curl and shear features. It captures additional information on the local motion patterns enhancing results. Finally, applying the recent VLAD coding technique proposed in image retrieval provides a substantial improvement for action recognition. Our three contributions are complementary and lead to outperform all reported results by a significant margin on three challenging datasets, namely Hollywood 2, HMDB51 and Olympic Sports. 1. Introduction and related work Human actions often convey the essential meaningful content in videos. Yet, recognizing human actions in un- constrained videos is a challenging problem in Computer Vision which receives a sustained attention due to the potential applications. In particular, there is a large interest in designing video-surveillance systems, providing some automatic annotation of video archives as well as improving human-computer interaction. The solutions proposed to address this problem inherit, to a large extent, from the techniques first designed for the goal of image search and classification. The successful local features developed to describe image patches [15, 23] have been translated in the 2D+t domain as spatio-temporal local descriptors [13, 30] and now include motion clues [29]. These descriptors are often extracted from spatial-temporal interest points [12, 3 1]. More recent techniques assume some underlying temporal motion model involving trajectories [2, 6, 7, 17, 18, 25, 29, 32]. Most of these approaches produce large set of local descriptors which are in turn aggregated to produce a single vector representing the video, in order to enable the use of powerful discriminative classifiers such as support vector machines (SVMs). This is usually done with the bag- Figure 1. Optical flow field vectors (green vectors with red end points) before and after dominant motion compensation. Most of the flow vectors due to camera motion are suppressed after compensation. One of the contributions of this paper is to show that compensating for the dominant motion is beneficial for most of the existing descriptors used for action recognition. of-words technique [24], which quantizes the local features using a k-means codebook. Thanks to the successful combination of this encoding technique with the aforementioned local descriptors, the state of the art in action recognition is able to go beyond the toy problems ofclassifying simple human actions in controlled environment and considers the detection of actions in real movies or video clips [11, 16]. Despite these progresses, the existing descriptors suffer from an uncompleted handling of motion in the video sequence. Motion is arguably the most reliable source of information for action recognition, as often related to the actions of interest. However, it inevitably involves the background or camera motion when dealing with uncontrolled and re- alistic situations. 
Although some attempts have been made to compensate camera motion in several ways [10, 21, 26, 29, 32], how to separate action motion from that caused by the camera, and how to reflect it in the video description remains an open issue. The motion compensation mechanism employed in [10] is tailor-made to the Motion Interchange Pattern encoding technique. The Motion Boundary Histogram (MBH) [29] is a recent appealing approach to 222555555533 suppress the constant motion by considering the flow gradient. It is robust to some extent to the presence of camera motion, yet it does not explicitly handle the camera motion. Another approach [26] uses a sophisticated and robust (RANSAC) estimation of camera motion. It first segments the color image into regions corresponding to planar parts in the scene and estimates the (three) dominant homographies to update the motion associated with local features. A rather different view is adopted in [32] where the motion decomposition is performed at the trajectory level. All these works support the potential of motion compensation. As the first contribution of this paper, we address the problem in a way that departs from these works by considering the compensation of the dominant motion in both the tracking stages and encoding stages involved in the computation of action recognition descriptors. We rely on the pioneering works on motion compensation such as the technique proposed in [20], that considers 2D polynomial affine motion models for estimating the dominant image motion. We consider this particular model for its robustness and its low computational cost. It was already used in [21] to separate the dominant motion (assumed to be due to the camera motion) and the residual motion (corresponding to the independent scene motions) for dynamic event recognition in videos. However, the statistical modeling of both motion components was global (over the entire image) and only the normal flow was computed for the latter. Figure 1 shows the vectors of optical flow before and after applying the proposed motion compensation. Our method successfully suppresses most of the background motion and reinforces the focus towards the action of interest. We exploit this compensated motion both for descriptor computation and for extracting trajectories. However, we also show that the camera motion should not be thrown as it contains complementary information that is worth using to recognize certain action categories. Then, we introduce the Divergence-Curl-Shear (DCS) descriptor, which encodes scalar first-order motion features, namely the motion divergence, curl and shear. It captures physical properties of the flow pattern that are not involved in the best existing descriptors for action recognition, except in the work of [1] which exploits divergence and vorticity among a set of eleven kinematic features computed from the optical flow. Our DCS descriptor provides a good performance recognition performance on its own. Most importantly, it conveys some information which is not captured by existing descriptors and further improves the recognition performance when combined with the other descriptors. As a last contribution, we bring an encoding technique known as VLAD (vector oflocal aggregated descriptors) [8] to the field of action recognition. This technique is shown to be better than the bag-of-words representation for combining all the local video descriptors we have considered. The organization of the paper is as follows. 
Section 2 introduces the motion properties that we will consider through this paper. Section 3 presents the datasets and classification scheme used in our different evaluations. Section 4 details how we revisit several popular descriptors of the literature by the means of dominant motion compensation. Our DCS descriptor based on kinematic properties is introduced in Section 5 and improved by the VLAD encoding technique, which is introduced and bench-marked in Section 6 for several video descriptors. Section 7 provides a comparison showing the large improvement achieved over the state of the art. Finally, Section 8 concludes the paper. 2. Motion Separation and Kinematic Features In this section, we describe the motion clues we incorporate in our action recognition framework. We separate the dominant motion and the residual motion. In most cases, this will account to distinguishing the impact of camera movement and independent actions. Note that we do not aim at recovering the 3D camera motion: The 2D parametric motion model describes the global (or dominant) motion between successive frames. We first explain how we estimate the dominant motion and employ it to separate the dominant flow from the optical flow. Then, we will introduce kinematic features, namely divergence, curl and shear for a more comprehensive description of the visual motion. 2.1. Affine motion for compensating camera motion Among polynomial motion models, we consider the 2D affine motion model. Simplest motion models such as the 4parameter model formed by the combination of 2D translation, 2D rotation and scaling, or more complex ones such as the 8-parameter quadratic model (equivalent to a homography), could be selected as well. The affine model is a good trade-off between accuracy and efficiency which is of primary importance when processing a huge video database. It does have limitations since strictly speaking it implies a single plane assumption for the static background. However, this is not that penalizing (especially for outdoor scenes) if differences in depth remain moderated with respect to the distance to the camera. The affine flow vector at point p = (x, y) and at time t, is defined as waff(pt) =?cc12((t ) ?+?aa31((t ) aa42((t ) ? ?xytt?. (1) = + + = + uaff(pt) c1(t) a1(t)xt a2(t)yt and vaff(pt) c2(t) a3 (t)xt + a4(t)yt are horizontal and vertical components of waff(pt) respectively. Let us denote the optical flow vector at point p at time t as w(pt) = (u(pt) , v(pt)). We introduce the flow vector ω(pt) obtained by removing the affine flow vector from the optical flow vector ω(pt) = w(pt) − waff(pt) . (2) 222555555644 The dominant motion (estimated as waff(pt)) is usually due to the camera motion. In this case, Equation 2 amounts to canceling (or compensating) the camera motion. Note that this is not always true. For example in case of close-up on a moving actor, the dominant motion will be the affine estimation of the apparent actor motion. The interpretation of the motion compensation output will not be that straightforward in this case, however the resulting ω-field will still exhibit different patterns for the foreground action part and the background part. In the remainder, we will refer to the “compensated” flow as ω-flow. Figure 1 displays the computed optical flow and the ωflow. We compute the affine flow with the publicly available Motion2D software1 [20] which implements a realtime robust multiresolution incremental estimation framework. 
The affine motion model has correctly accounted for the motion induced by the camera movement which corresponds to the dominant motion in the image pair. Indeed, we observe that the compensated flow vectors in the background are close to null and the compensated flow in the foreground, i.e., corresponding to the actors, is conversely inflated. The experiments presented along this paper will show that effective separation of dominant motion from the residual motions is beneficial for action recognition. As explained in Section 4, we will compute local motion descriptors, such as HOF, on both the optical flow and the compensated flow (ω-flow), which allows us to explicitly and directly characterize the scene motion. 2.2. Local kinematic features By kinematic features, we mean local first-order differential scalar quantities computed on the flow field. We consider the divergence, the curl (or vorticity) and the hyperbolic terms. They inform on the physical pattern of the flow so that they convey useful information on actions in videos. They can be computed from the first-order derivatives of the flow at every point p at every frame t as ⎨⎪ ⎪ ⎪ ⎧hcdyuipvr1l2(p t) = −∂ u ∂ (yxp(xtp) +−∂ v ∂ v(px ypxt ) The diverg⎪⎩ence is related to axial motion, expansion scaling effects, the curl to rotation in the image plane. hyperbolic terms express the shear of the visual flow responding to more complex configuration. We take account the shear quantity only: shear(pt) = ?hyp12(pt) + hyp22(pt). (3) and The corinto (4) 1http://www.irisa.fr/vista/Motion2D/ In Section 5, we propose the DCS descriptor that is based on the kinematic features (divergence, curl and shear) of the visual motion discussed in this subsection. It is computed on either the optical or the compensated flow, ω-flow. 3. Datasets and evaluation This section first introduces the datasets used for the evaluation. Then, we briefly present the bag-of-feature model and the classification scheme used to encode the descriptors which will be introduced in Section 4. Hollywood2. The Hollywood2 dataset [16] contains 1,707 video clips from 69 movies representing 12 action classes. It is divided into train set and test set of 823 and 884 samples respectively. Following the standard evaluation protocol of this benchmark, we use average precision (AP) for each class and the mean of APs (mAP) for evaluation. HMDB51. The HMDB51 dataset [11] is a large dataset containing 6,766 video clips extracted from various sources, ranging from movies to YouTube. It consists of 51 action classes, each having at least 101 samples. We follow the evaluation protocol of [11] and use three train/test splits, each with 70 training and 30 testing samples per class. The average classification accuracy is computed over all classes. Out of the two released sets, we use the original set as it is more challenging and used by most of the works reporting results in action recognition. Olympic Sports. The third dataset we use is Olympic Sports [19], which again is obtained from YouTube. This dataset contains 783 samples with 16 sports action classes. We use the provided2 train/test split, there are 17 to 56 training samples and 4 to 11test samples per class. Mean AP is used for the evaluation, which is the standard choice. Bag of features and classification setup. We first adopt the standard BOF [24] approach to encode all kinds of descriptors. It produces a vector that serves as the video representation. 
The codebook is constructed for each type of descriptor separately by the k-means algorithm. Following a common practice in the literature [27, 29, 30], the codebook size is set to k=4,000 elements. Note that Section 6 will consider encoding technique for descriptors. For the classification, we use a non-linear SVM with χ2kernel. When combining different descriptors, we simply add the kernel matrices, as done in [27]: K(xi,xj) = exp?−?cγ1cD(xic,xjc)?, 2http://vision.stanford.edu/Datasets/OlympicSports/ 222555555755 (5) where D(xic, xjc) is χ2 distance between video xic and xjc with respect to c-th channel, corresponding to c-th descriptor. The quantity γc is the mean value of χ2 distances between the training samples for the c-th channel. The multiclass classification problem that we consider is addressed by applying a one-against-rest approach. 4. Compensated descriptors This section describes how the compensation ofthe dominant motion is exploited to improve the quality of descriptors encoding the motion and the appearance around spatio-temporal positions, hence the term “compensated descriptors”. First, we briefly review the local descriptors [5, 13, 16, 29, 30] used here along with dense trajectories [29]. Second, we analyze the impact of motion flow compensation when used in two different stages of the descriptor computation, namely in the tracking and the description part. 4.1. Dense trajectories and local descriptors Employing dense trajectories to compute local descriptors is one of the state-of-the-art approaches for action recognition. It has been shown [29] that when local descriptors are computed over dense trajectories the performance improves considerably compared to when computed over spatio temporal features [30]. Dense Trajectories [29]: The trajectories are obtained by densely tracking sampled points using optical flow fields. First, feature points are sampled from a dense grid, with step size of 5 pixels and over 8 scales. Each feature point pt = (xt, yt) at frame t is then tracked to the next frame by median filtering in a dense optical flow field F = (ut, vt) as follows: pt+1 = (xt+1 , yt+1) = (xt, yt) + (M ∗ F) | (x ¯t,y ¯t) , (6) where M is the kernel of median filtering and ( x¯ t, y¯ t) is the rounded position of (xt, yt). The tracking is limited to L (=15) frames to avoid any drifting effect. Excessively short trajectories and trajectories exhibiting sudden large displacements are removed as they induce some artifacts. Trajectories must be understood here as tracks in the spacetime volume of the video. Local descriptors: The descriptors are computed within a space-time volume centered around each trajectory. Four types of descriptors are computed to encode the shape of the trajectory, local motion pattern and appearance, namely Trajectory [29], HOF (histograms of optical flow) [13], MBH [4] and HOG (histograms of oriented gradients) [3]. All these descriptors depend on the flow field used for the tracking and as input of the descriptor computation: 1. The Trajectory descriptor encodes the shape of the trajectory represented by the normalized relative coor- × dinates of the successive points forming the trajectory. It directly depends on the dense flow used for tracking points. 2. HOF is computed using the orientations and magnitudes of the flow field. 3. MBH is designed to capture the gradient of horizontal and vertical components of the flow. The motion boundaries encode the relative pixel motion and therefore suppress camera motion, but only to some extent. 4. 
HOG encodes the appearance by using the intensity gradient orientations and magnitudes. It is formally not a motion descriptor. Yet the position where the descriptor is computed depends on the trajectory shape. As in [29], volume around a feature point is divided into a 2 2 3 space-time grid. The orientations are quantized ian 2to × ×8 b2i ×ns 3fo srp HacOe-Gti amned g g9r ibdi.ns T fhoer o oHriOenFt (awtioitnhs one a qdudainttiiozneadl zero bin). The horizontal and vertical components of MBH are separately quantized into 8 bins each. 4.2. Impact of motion compensation The optical flow is simply referred to as flow in the following, while the compensated flow (see subsection 2. 1) is denoted by ω-flow. Both of them are considered in the tracking and descriptor computation stages. The trajectories obtained by tracking with the ω-flow are called ω-trajectories. Figure 2 comparatively illustrates the ωtrajectories and the trajectories obtained using the flow. The input video shows a man moving away from the car. In this video excerpt, the camera is following the man walking to the right, thus inducing a global motion to the left in the video. When using the flow, the computed trajectories reflect the combination of these two motion components (camera and scene motion) as depicted by Subfigure 2(b), which hampers the characterization of the current action. In contrast, the ω-trajectories plotted in Subfigure 2(c) are more active on the actor moving on the foreground, while those localized in the background are now parallel to the time axis enhancing static parts of the scene. The ω-trajectories are therefore more relevant for action recognition, since they are more regularly and more exclusively following the actor’s motion. Impact on Trajectory and HOG descriptors. Table 1reports the impact of ω-trajectories on Trajectory and HOG descriptors, which are both significantly improved by 3%4% of mAP on the two datasets. When improved by ωflow, these descriptors will be respectively referred to as ω-Trajdesc and ω-HOG in the rest of the paper. Although the better performance of ω-Trajdesc versus the original Trajectory descriptor was expected, the one 222555555866 2. Trajectories obtained from optical and compensated flows. The green tail is the trajectory the current frame. The trajectories are sub-sampled for the sake of clarity. The frames are extracted Figure over every 15 frames with red dot indicating 5 frames in this example. DescriptorHollywood2HMDB51 BaseTrliaωnje- Tc(rtoarejrdpyreos[c2d9u]ced)54 7 1. 7 4% %2382.–89% BaseliHnωOe- (GHreOp [2rG9od]uced)4 451 . 658%%%2296.– 13%% Table 1. ω-Trajdesc and ω-HOG: Impact of compensating flow on Trajectory descriptor and HOG descriptors. achieved by ω-HOG might be surprising. Our interpretation is that HOG captures more context with the modified trajectories. More precisely, the original HOG descriptor is computed from a 2D+t sub-volume aligned with the corresponding trajectory and hence represents the appearance along the trajectory shape. When using ω-flow, we do not align the video sequence. As a result, the ω-HOG descriptor is no more computed around the very same tracked physical point in the space-time volume but around points lying in a patch of the initial feature point, whose size depends on the affine flow magnitude. ω-HOG can be viewed as a “patchbased” computation capturing more information about the appearance of the background or of the moving foreground. 
As for ω-trajectories, they are closer to the real trajectories of the moving actors as they usually cancel the camera movement, and so, more easier to train and recognize. Impact on HOF. The ω-flow impacts computation used as an input to HOF computation itself. Therefore, HOF can both types of trajectories (ω-trajectories both the trajectory and the descriptor be computed along or those extracted MethodHollywood2HMDB51 Table(ω2rHf.-alocO IwomkF)inpHgacOtFobf[2u9ωhsb]i:f-nlo ωgwot-hwωHOflFown5H 02 34O. 58291F% %descripto3 r706s38.:–1076% m%APfor Hollywood2 and average accuracy for HMDB5 1. The ω-HOF is used in subsequent evaluations. from flow) and can encode both kinds of flows (ω-flow or flow). For the sake of completeness, we evaluate all the variants as well as the combination of both flows in the descriptor computation stage. The results are presented in Table 2 and demonstrate the significant improvement obtained by computing the HOF descriptor with the ω-flow instead of the optical flow. Note that the type of trajectories which is used, either “Tracking flow” or “Tracking ω-flow”, has a limited impact in this case. From now on, we only consider the “Tracking ω-flow” case where HOF is computed along ω-trajectories. Interestingly, combining the HOF computed from the flow and the ω-flow further improves the results. This suggests that the two flow fields are complementary and the affine flow that was subtracted from ω-flow brings in additional information. For the sake of brevity, the combination of the two kinds of HOF, i.e., computed from the flow and the ω-flow using ω-trajectories, is referred to as the ω-HOF 222555555977 MethodHollywood2HMDB51 Tab(lerT3a.cIkmM inpBgMacHgωtBf-loH w [u2)s9in]gω f-lo wo MBH5 d42 e.052s7c% riptos:m34A90P.–3769f% orHllywood2 and average accuracy for HMDB5 1. DTerHasMjcBrOeblitpHGeFor4.ySumTωraw- frcilykto hw ionfgtheduωpCs-fcaolrtωmeiwNp-df/tl+Aωoutrw-finlogwthdesωcr- isTpc-fHtrMloaiOjrBpdswtGeHFosrc descriptor in the rest of this paper. Compared to the HOF baseline, the ω-HOF descriptor achieves a gain of +3.1% of mAP on Hollywood 2 and of +7.8% on HMDB51. Impact on MBH. Since MBH is computed from gradient of flow and cancel the constant motion, there is practically no benefit in using the ω-flow to compute the MBH descriptors, as shown in Table 3. However, by tracking ω-flow, the performance improves by around 1.3% for HMDB5 1 dataset and drops by around 1.5% for Hollywood2. This relative performance depends on the encoding technique. We will come back on this descriptor when considering another encoding scheme for local descriptors in Section 6. 4.3. Summary of compensated descriptors Table 4 summarizes the refined versions of the descriptors obtained by exploiting the ω-flow, and both ω-flow and the optical flow in the case of HOF. The revisited descriptors considerably improve the results compared to the orig- inal ones, with the noticeable exception of ω-MBH which gives mixed performance with a bag-of-features encoding scheme. But we already mention as this point that this incongruous behavior of ω-MBH is stabilized with the VLAD encoding scheme considered in Section 6. Another advantage of tracking the compensated flow is that fewer trajectories are produced. For instance, the total number of trajectories decreases by about 9. 16% and 22.81% on the Hollywood2 and HMDB51 datasets, respectively. 
Note that exploiting both the flow and the ω-flow do not induce much computational overhead, as the latter is obtained from the flow and the affine flow which is computed in real-time and already used to get the ω-trajectories. The only additional computational cost that we introduce by using the descriptors summarized in Table 4 is the computation of a second HOF descriptor, but this stage is relatively efficient and not the bottleneck of the extraction procedure. 5. Divergence-Curl-Shear descriptor This section introduces a new descriptor encoding the kinematic properties of motion discussed in Section 2.2. It is denoted by DCS in the rest of this paper. Combining kinematic features. The spatial derivatives are computed for the horizontal and vertical components of the flow field, which are used in turn to compute the divergence, curl and shear scalar values, see Equation 3. We consider all possible pairs of kinematic features, namely (div, curl), (div, shear) and (curl, shear). At each × ×× pixel, we compute the orientation and magnitude of the 2-D vector corresponding to each of these pairs. The orientation is quantized into histograms and the magnitude is used for weighting, similar to SIFT. Our motivation for encoding pairs is that the joint distribution of kinematic features conveys more information than exploiting them independently. Implementation details. The descriptor computation and parameters are similar to HOG and other popular descriptors such as MBH, HOF. We obtain 8-bin histograms for each of the three feature pairs or components of DCS. The range of possible angles is 2π for the (div,curl) pair and π for the other pairs, because the shear is always positive. The DCS descriptor is computed for a space-time volume aligned with a trajectory, as done with the four descriptors mentioned in the previous section. In order to capture the spatio-temporal structure of kinematic features, the volume (32 32 pixels and L = 15 frames) is subdivided into a spatio-temporal grid nofd s Lize = nx 5× f ny m×e nt, sw situhb nx =de ny =to 2a and nt = 3. These parameters ×hnave× × bneen fixed for the sake of consistency with the other descriptors. For each pair of kinematic features, each cell in the grid is represented by a histogram. The resulting local descriptors have a dimensionality equal to 288 = nx ny nt 8 3. At the video level, these descriptors are nenc×od end i×nto 8 a single vector representation using either BOF or the VLAD encoding scheme introduced in the next section. 6. VLAD in actions VLAD [8] is a descriptor encoding technique that aggregates the descriptors based on a locality criterion in the feature space. To our knowledge, this technique has never been considered for action recognition. Below, we briefly introduce this approach and give the performance achieved for all the descriptors introduced along the previous sections. VLAD in brief. Similar to BOF, VLAD relies on a codebook C = {c1, c2 , ...ck} of k centroids learned by k-means. bTohoek representation is ob}t oaifn ked c by summing, efodr b yea kch-m mveiasunasl. word ci, the differences x − ci of the vectors x assigned to ci, thereby producing a sv exct −or c representation oflength d×k, 222555556088 DMeBscHriptorV5 LH.A1o%Dlywo5Bo4d.O2 %F4V3L.3HA%MD B35B91.O7%F Taωbl-eDHM5rOCBa.FSjGPdHe+rsωfco-mMHaBOnFeofV54L2936A.51D% with5431ω208-.5T96% rajde3s42c97158,.ω3% -HOG342,58019ω.6-% HOF descriptors and their combination. where d is the dimension ofthe local descriptors. We use the codebook size, k = 256. 
Despite this large dimensionality, VLAD is efficient because it is effectively compared with a linear kernel. VLAD is post-processed using a componentwise power normalization, which dramatically improves its performance [8]. While cross validating the parameter α involved in this power normalization, we consistently observe, for all the descriptors, a value between 0.15 and 0.3. Therefore, this parameter is set to α = 0.2 in all our experiments. For classification, we use a linear SVM and oneagainst-rest approach everywhere, unless stated otherwise. Impact on existing descriptors. We employ VLAD because it is less sensitive to quantization parameters and appears to provide better performance with descriptors having a large dimensionality. These properties are interesting in our case, because the quantization parameters involved in the DCS and MBH descriptors have been used unchanged in Section 4 for the sake of direct comparison. They might be suboptimal when using the ω-flow instead of the optical flow on which they have initially been optimized [29]. Results for MBH and ω-MBH in Table 5 supports this argument. When using VLAD instead of BOF, the scores are stable in both the cases and there is no mixed inference as that observed in Table 3. VLAD also has significant positive influence on accuracy of ω-DCS descriptor. We also observe that ω-DCS is complementary to ω-MBH and adds to the performance. Still DCS is probably not best utilized in the current setting of parameters. In case of ω-Trajdesc and ω-HOG, the scores are better with BOF on both the datasets. ω-HOF with VLAD improves on HMDB5 1, but remains equivalent for Hollywood2. Although BOF leads to better scores for the descriptors considered individually, their combination with VLAD outperforms the BOF. 7. Comparison with the state of the art This section reports our results with all descriptors combined and compares our method with the state of the art. TrajectorCy+omHbOiGna+tHioOnF+ MDBCHSHol5 l98y.w76%o%od2H4M489.D02%B%51 All ω-descriptors all five compensated descriptors using combined62.5%52.1% Table 6. Combination of VLAD representation. WU*JVliaOnughreM tHaeolth. [yo2w9d87o] 256 0985. 37% SKa*duOeJinhrau tnegdMteHatlMa.h [ol1Dd.0B[91]25 24 609.8172% Table 7. Comparison with the state of the art on Hollywood2 and HMDB5 1 datasets. *Vig et al. [28] gets 61.9% by using external eye movements data. *Jiang et al. [9] used one-vs-one multi class SVM while our and other methods use one-vs-rest SVMs. With one-against-one multi class SVM we obtain 45. 1% for HMDB51. Descriptor combination. Table 6 reports the results obtained when the descriptors are combined. Since we use VLAD, our baseline is updated that is combination of Trajectory, HOG, HOF and MBH with VLAD representation. When DCS is added to the baseline there is an improvement of 0.9% and 1.2%. With combination of all five compensated descriptors we obtain 62.5% and 52.1% on the two datasets. This is a large improvement even over the updated baseline, which shows that the proposed motion compensation and the way we exploit it are significantly important for action recognition. The comparison with the state of the art is shown in Table 7. Our method outperforms all the previously reported results in the literature. In particular, on the HMDB51 dataset, the improvement over the best reported results to date is more than 11% in average accuracy. Jiang el al. [9] used a one-against-one multi-class SVM, which might have resulted in inferior scores. 
With a similar multi-class SVM approach, our method obtains 45. 1%, which remains significantly better than their result. All others results were reported with one-against-rest approach. On Olympic Sports dataset we obtain mAP of 83.2% with ‘All ω-descriptors combined’ and the improvement is mostly because of VLAD and ω-flow. The best reported mAPs on this dataset are Liu et al. [14] (74.4%) and Jiang et al. [9] (80.6%), which we exceed convincingly. Gaidon et al. [6] reports the best average accuracy of 82.7%. 8. Conclusions This paper first demonstrates the interest of canceling the dominant motion (predominantly camera motion) to make the visual motion truly related to actions, for both the trajectory extraction and descriptor computation stages. It pro222555556199 duces significantly better versions (called compensated descriptors) of several state-of-the-art local descriptors for action recognition. The simplicity, efficiency and effectiveness of this motion compensation approach make it applicable to any action recognition framework based on motion descriptors and trajectories. The second contribution is the new DCS descriptor derived from the first-order scalar motion quantities specifying the local motion patterns. It captures additional information which is proved complementary to the other descriptors. Finally, we show that VLAD encoding technique instead of bag-of-words boosts several action descriptors, and overall exhibits a significantly better performance when combining different types of descriptors. Our contributions are all complementary and significantly outperform the state of the art when combined, as demon- strated by our extensive experiments on the Hollywood 2, HMDB51 and Olympic Sports datasets. Acknowledgments This work was supported by the Quaero project, funded by Oseo, French agency for innovation. We acknowledge Heng Wang’s help for reproducing some of their results. References [1] S. Ali and M. Shah. Human action recognition in videos using kinematic features and multiple instance learning. IEEE T-PAMI, 32(2):288–303, Feb. 2010. [2] T. Brox and J. Malik. Object segmentation by long term analysis of point trajectories. In ECCV, Sep. 2010. [3] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, Jun. 2005. [4] N. Dalal, B. Triggs, and C. Schmid. Human detection using oriented histograms of flow and appearance. In ECCV, May 2006. [5] P. Dollar, V. Rabaud, G. Cottrell, and S. Belongie. Behavior recognition via sparse spatio-temporal features. In VS-PETS, Oct. 2005. [6] A. Gaidon, Z. Harchaoui, and C. Schmid. Recognizing activities with cluster-trees of tracklets. In BMVC, Sep. 2012. [7] A. Hervieu, P. Bouthemy, and J.-P. Le Cadre. A statistical video content recognition method using invariant features on object trajectories. IEEE T-CSVT, 18(1 1): 1533–1543, 2008. [8] H. J ´egou, F. Perronnin, M. Douze, J. S ´anchez, P. P ´erez, and C. Schmid. Aggregating local descriptors into compact [9] [10] [11] [12] [13] [14] [15] [16] [17] [18] [19] codes. IEEE T-PAMI, 34(9):1704–1716, 2012. Y.-G. Jiang, Q. Dai, X. Xue, W. Liu, and C.-W. Ngo. Trajectory-based modeling of human actions with motion reference points. In ECCV, Oct. 2012. O. Kliper-Gross, Y. Gurovich, T. Hassner, and L. Wolf. Motion interchange patterns for action recognition in unconstrained videos. In ECCV, Oct. 2012. H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. Hmdb: A large video database for human motion recognition. In ICCV, Nov. 2011. I. Laptev and T. 
[13] I. Laptev, M. Marzalek, C. Schmid, and B. Rozenfeld. Learning realistic human actions from movies. In CVPR, Jun. 2008.
[14] J. Liu, B. Kuipers, and S. Savarese. Recognizing human actions by attributes. In CVPR, Jun. 2011.
[15] D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91–110, Nov. 2004.
[16] M. Marzalek, I. Laptev, and C. Schmid. Actions in context. In CVPR, Jun. 2009.
[17] P. Matikainen, M. Hebert, and R. Sukthankar. Trajectons: Action recognition through the motion analysis of tracked features. In Workshop on Video-Oriented Object and Event Classification, ICCV, Sep. 2009.
[18] R. Messing, C. J. Pal, and H. A. Kautz. Activity recognition using the velocity histories of tracked keypoints. In ICCV, Sep. 2009.
[19] J. C. Niebles, C.-W. Chen, and F.-F. Li. Modeling temporal structure of decomposable motion segments for activity classification. In ECCV, Sep. 2010.
[20] J.-M. Odobez and P. Bouthemy. Robust multiresolution estimation of parametric motion models. Journal of Visual Communication and Image Representation, 6(4):348–365, Dec. 1995.
[21] G. Piriou, P. Bouthemy, and J.-F. Yao. Recognition of dynamic video contents with global probabilistic models of visual motion. IEEE T-IP, 15(11):3417–3430, 2006.
[22] S. Sadanand and J. J. Corso. Action bank: A high-level representation of activity in video. In CVPR, Jun. 2012.
[23] C. Schmid and R. Mohr. Local grayvalue invariants for image retrieval. IEEE T-PAMI, 19(5):530–534, May 1997.
[24] J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos. In ICCV, pages 1470–1477, Oct. 2003.
[25] J. Sun, X. Wu, S. Yan, L. F. Cheong, T.-S. Chua, and J. Li. Hierarchical spatio-temporal context modeling for action recognition. In CVPR, Jun. 2009.
[26] H. Uemura, S. Ishikawa, and K. Mikolajczyk. Feature tracking and motion compensation for action recognition. In BMVC, Sep. 2008.
[27] M. M. Ullah, S. N. Parizi, and I. Laptev. Improving bag-of-features action recognition with non-local cues. In BMVC, Sep. 2010.
[28] E. Vig, M. Dorr, and D. Cox. Saliency-based space-variant descriptor sampling for action recognition. In ECCV, Oct. 2012.
[29] H. Wang, A. Kläser, C. Schmid, and C.-L. Liu. Action recognition by dense trajectories. In CVPR, Jun. 2011.
[30] H. Wang, M. M. Ullah, A. Kläser, I. Laptev, and C. Schmid. Evaluation of local spatio-temporal features for action recognition. In BMVC, Sep. 2009.
[31] G. Willems, T. Tuytelaars, and L. J. V. Gool. An efficient dense and scale-invariant spatio-temporal interest point detector. In ECCV, Oct. 2008.
[32] S. Wu, O. Oreifej, and M. Shah. Action recognition in videos acquired by a moving camera using motion decomposition of Lagrangian particle trajectories. In ICCV, Nov. 2011.
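As a companion to the descriptor-combination experiments reported above (Table 6) and the one-against-rest classification setup, here is a minimal sketch of late fusion of several per-video channel encodings with a one-vs-rest linear SVM. It assumes NumPy and scikit-learn; the concatenation-based fusion, the regularization constant, and all names are illustrative assumptions, not the authors' actual combination scheme.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier

def combine_channels(channel_encodings):
    """Late fusion by concatenation: L2-normalize each channel encoding
    (e.g. the VLAD vectors of Trajectory, HOG, HOF, MBH, DCS) and stack
    them so a single linear classifier sees all channels at a comparable scale.

    channel_encodings: list of (n_videos, dim_c) arrays, one per channel.
    """
    normalized = []
    for X in channel_encodings:
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        normalized.append(X / np.maximum(norms, 1e-12))
    return np.hstack(normalized)

def train_one_vs_rest(X_train, y_train, C=100.0):
    """One-against-rest linear SVM on the fused representation.
    The value of C is a placeholder and would normally be cross-validated."""
    clf = OneVsRestClassifier(LinearSVC(C=C))
    return clf.fit(X_train, y_train)
```

Concatenating L2-normalized channels is one simple way to let a single linear classifier weight the descriptor types; per-channel kernel averaging is a common alternative.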

4 0.26671329 268 cvpr-2013-Leveraging Structure from Motion to Learn Discriminative Codebooks for Scalable Landmark Classification

Author: Alessandro Bergamo, Sudipta N. Sinha, Lorenzo Torresani

Abstract: In this paper we propose a new technique for learning a discriminative codebook for local feature descriptors, specifically designed for scalable landmark classification. The key contribution lies in exploiting the knowledge of correspondences within sets of feature descriptors during codebook learning. Feature correspondences are obtained using structure from motion (SfM) computation on Internet photo collections which serve as the training data. Our codebook is defined by a random forest that is trained to map corresponding feature descriptors into identical codes. Unlike prior forest-based codebook learning methods, we utilize fine-grained descriptor labels and address the challenge of training a forest with an extremely large number of labels. Our codebook is used with various existing feature encoding schemes and also a variant we propose for importance-weighted aggregation of local features. We evaluate our approach on a public dataset of 25 landmarks and our new dataset of 620 landmarks (614K images). Our approach significantly outperforms the state of the art in landmark classification. Furthermore, our method is memory efficient and scalable.

5 0.11059418 343 cvpr-2013-Query Adaptive Similarity for Large Scale Object Retrieval

Author: Danfeng Qin, Christian Wengert, Luc Van_Gool

Abstract: Many recent object retrieval systems rely on local features for describing an image. The similarity between a pair of images is measured by aggregating the similarity between their corresponding local features. In this paper we present a probabilistic framework for modeling the feature to feature similarity measure. We then derive a query adaptive distance which is appropriate for global similarity evaluation. Furthermore, we propose a function to score the individual contributions into an image to image similarity within the probabilistic framework. Experimental results show that our method improves the retrieval accuracy significantly and consistently. Moreover, our result compares favorably to the state-of-the-art.

6 0.094846681 456 cvpr-2013-Visual Place Recognition with Repetitive Structures

7 0.075610735 151 cvpr-2013-Event Retrieval in Large Video Collections with Circulant Temporal Encoding

8 0.074730866 189 cvpr-2013-Graph-Based Discriminative Learning for Location Recognition

9 0.067139432 387 cvpr-2013-Semi-supervised Domain Adaptation with Instance Constraints

10 0.063254878 260 cvpr-2013-Learning and Calibrating Per-Location Classifiers for Visual Place Recognition

11 0.054738302 134 cvpr-2013-Discriminative Sub-categorization

12 0.05243811 67 cvpr-2013-Blocks That Shout: Distinctive Parts for Scene Classification

13 0.051940944 130 cvpr-2013-Discriminative Color Descriptors

14 0.05143737 275 cvpr-2013-Lp-Norm IDF for Large Scale Image Search

15 0.04850762 145 cvpr-2013-Efficient Object Detection and Segmentation for Fine-Grained Recognition

16 0.048280843 309 cvpr-2013-Nonparametric Scene Parsing with Adaptive Feature Relevance and Semantic Context

17 0.047483001 79 cvpr-2013-Cartesian K-Means

18 0.047436256 82 cvpr-2013-Class Generative Models Based on Feature Regression for Pose Estimation of Object Categories

19 0.047244426 100 cvpr-2013-Crossing the Line: Crowd Counting by Integer Programming with Local Features

20 0.044129789 53 cvpr-2013-BFO Meets HOG: Feature Extraction Based on Histograms of Oriented p.d.f. Gradients for Image Classification


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.1), (1, -0.031), (2, -0.014), (3, -0.007), (4, 0.007), (5, 0.007), (6, -0.048), (7, -0.087), (8, -0.095), (9, -0.032), (10, -0.056), (11, 0.028), (12, 0.107), (13, 0.033), (14, 0.065), (15, -0.078), (16, 0.048), (17, -0.058), (18, 0.04), (19, -0.104), (20, 0.159), (21, -0.134), (22, -0.002), (23, 0.081), (24, -0.013), (25, 0.187), (26, 0.044), (27, -0.003), (28, -0.064), (29, -0.085), (30, 0.111), (31, 0.062), (32, -0.034), (33, 0.073), (34, -0.058), (35, -0.108), (36, 0.056), (37, -0.03), (38, 0.08), (39, 0.034), (40, 0.025), (41, 0.213), (42, 0.095), (43, -0.003), (44, 0.02), (45, -0.009), (46, -0.075), (47, 0.09), (48, 0.025), (49, -0.156)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.92837757 38 cvpr-2013-All About VLAD

Author: unkown-author

Abstract: The objective of this paper is large scale object instance retrieval, given a query image. A starting point of such systems is feature detection and description, for example using SIFT. The focus of this paper, however, is towards very large scale retrieval where, due to storage requirements, very compact image descriptors are required and no information about the original SIFT descriptors can be accessed directly at run time. We start from VLAD, the state-of-the art compact descriptor introduced by J´ egou et al. [8] for this purpose, and make three novel contributions: first, we show that a simple change to the normalization method significantly improves retrieval performance; second, we show that vocabulary adaptation can substantially alleviate problems caused when images are added to the dataset after initial vocabulary learning. These two methods set a new stateof-the-art over all benchmarks investigated here for both mid-dimensional (20k-D to 30k-D) and small (128-D) descriptors. Our third contribution is a multiple spatial VLAD representation, MultiVLAD, that allows the retrieval and local- ization of objects that only extend over a small part of an image (again without requiring use of the original image SIFT descriptors).

2 0.73768789 246 cvpr-2013-Learning Binary Codes for High-Dimensional Data Using Bilinear Projections

Author: Yunchao Gong, Sanjiv Kumar, Henry A. Rowley, Svetlana Lazebnik

Abstract: Recent advances in visual recognition indicate that to achieve good retrieval and classification accuracy on large-scale datasets like ImageNet, extremely high-dimensional visual descriptors, e.g., Fisher Vectors, are needed. We present a novel method for converting such descriptors to compact similarity-preserving binary codes that exploits their natural matrix structure to reduce their dimensionality using compact bilinear projections instead of a single large projection matrix. This method achieves comparable retrieval and classification accuracy to the original descriptors and to the state-of-the-art Product Quantization approach while having orders of magnitude faster code generation time and smaller memory footprint.

3 0.66764313 268 cvpr-2013-Leveraging Structure from Motion to Learn Discriminative Codebooks for Scalable Landmark Classification

Author: Alessandro Bergamo, Sudipta N. Sinha, Lorenzo Torresani

Abstract: In this paper we propose a new technique for learning a discriminative codebook for local feature descriptors, specifically designed for scalable landmark classification. The key contribution lies in exploiting the knowledge of correspondences within sets of feature descriptors during codebook learning. Feature correspondences are obtained using structure from motion (SfM) computation on Internet photo collections which serve as the training data. Our codebook is defined by a random forest that is trained to map corresponding feature descriptors into identical codes. Unlike prior forest-based codebook learning methods, we utilize fine-grained descriptor labels and address the challenge of training a forest with an extremely large number of labels. Our codebook is used with various existing feature encoding schemes and also a variant we propose for importance-weighted aggregation of local features. We evaluate our approach on a public dataset of 25 landmarks and our new dataset of 620 landmarks (614K images). Our approach significantly outperforms the state of the art in landmark classification. Furthermore, our method is memory efficient and scalable.

4 0.55488229 79 cvpr-2013-Cartesian K-Means

Author: Mohammad Norouzi, David J. Fleet

Abstract: A fundamental limitation of quantization techniques like the k-means clustering algorithm is the storage and runtime cost associated with the large numbers of clusters required to keep quantization errors small and model fidelity high. We develop new models with a compositional parameterization of cluster centers, so representational capacity increases super-linearly in the number of parameters. This allows one to effectively quantize data using billions or trillions of centers. We formulate two such models, Orthogonal k-means and Cartesian k-means. They are closely related to one another, to k-means, to methods for binary hash function optimization like ITQ [5], and to Product Quantization for vector quantization [7]. The models are tested on large-scale ANN retrieval tasks (1M GIST, 1B SIFT features), and on codebook learning for object recognition (CIFAR-10).

5 0.5357067 404 cvpr-2013-Sparse Quantization for Patch Description

Author: Xavier Boix, Michael Gygli, Gemma Roig, Luc Van_Gool

Abstract: The representation of local image patches is crucial for the good performance and efficiency of many vision tasks. Patch descriptors have been designed to generalize towards diverse variations, depending on the application, as well as the desired compromise between accuracy and efficiency. We present a novel formulation of patch description that serves such issues well. Sparse quantization lies at its heart. This allows for efficient encodings, leading to powerful, novel binary descriptors, yet also to the generalization of existing descriptors like SIFT or BRIEF. We demonstrate the capabilities of our formulation for both keypoint matching and image classification. Our binary descriptors achieve state-of-the-art results for two keypoint matching benchmarks, namely those by Brown [6] and Mikolajczyk [18]. For image classification, we propose new descriptors that perform similarly to SIFT on Caltech101 [10] and PASCAL VOC07 [9].

6 0.47662255 275 cvpr-2013-Lp-Norm IDF for Large Scale Image Search

7 0.45426002 343 cvpr-2013-Query Adaptive Similarity for Large Scale Object Retrieval

8 0.45280862 69 cvpr-2013-Boosting Binary Keypoint Descriptors

9 0.4255484 319 cvpr-2013-Optimized Product Quantization for Approximate Nearest Neighbor Search

10 0.41650972 456 cvpr-2013-Visual Place Recognition with Repetitive Structures

11 0.41436848 59 cvpr-2013-Better Exploiting Motion for Better Action Recognition

12 0.40328157 112 cvpr-2013-Dense Segmentation-Aware Descriptors

13 0.35321081 151 cvpr-2013-Event Retrieval in Large Video Collections with Circulant Temporal Encoding

14 0.35300568 149 cvpr-2013-Evaluation of Color STIPs for Human Action Recognition

15 0.34650597 53 cvpr-2013-BFO Meets HOG: Feature Extraction Based on Histograms of Oriented p.d.f. Gradients for Image Classification

16 0.32185081 130 cvpr-2013-Discriminative Color Descriptors

17 0.30619392 5 cvpr-2013-A Bayesian Approach to Multimodal Visual Dictionary Learning

18 0.28610808 421 cvpr-2013-Supervised Kernel Descriptors for Visual Recognition

19 0.27717459 234 cvpr-2013-Joint Spectral Correspondence for Disparate Image Matching

20 0.27652645 309 cvpr-2013-Nonparametric Scene Parsing with Adaptive Feature Relevance and Semantic Context


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.082), (15, 0.19), (16, 0.019), (26, 0.058), (28, 0.012), (33, 0.309), (59, 0.018), (67, 0.103), (69, 0.027), (87, 0.061)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.90438366 41 cvpr-2013-An Iterated L1 Algorithm for Non-smooth Non-convex Optimization in Computer Vision

Author: Peter Ochs, Alexey Dosovitskiy, Thomas Brox, Thomas Pock

Abstract: Natural image statistics indicate that we should use non-convex norms for most regularization tasks in image processing and computer vision. Still, they are rarely used in practice due to the challenge of optimizing them. Recently, iteratively reweighted ℓ1 minimization has been proposed as a way to tackle a class of non-convex functions by solving a sequence of convex ℓ2-ℓ1 problems. Here we extend the problem class to linearly constrained optimization of a Lipschitz continuous function, which is the sum of a convex function and a function being concave and increasing on the non-negative orthant (possibly non-convex and non-concave on the whole space). This allows us to apply the algorithm to many computer vision tasks. We show the effect of non-convex regularizers on image denoising, deconvolution, optical flow, and depth map fusion. Non-convexity is particularly interesting in combination with total generalized variation and learned image priors. Efficient optimization is made possible by some important properties that are shown to hold.

2 0.90400547 21 cvpr-2013-A New Perspective on Uncalibrated Photometric Stereo

Author: Thoma Papadhimitri, Paolo Favaro

Abstract: We investigate the problem of reconstructing normals, albedo and lights of Lambertian surfaces in uncalibrated photometric stereo under the perspective projection model. Our analysis is based on establishing the integrability constraint. In the orthographic projection case, it is well-known that when such a constraint is imposed, a solution can be identified only up to 3 parameters, the so-called generalized bas-relief (GBR) ambiguity. We show that in the perspective projection case the solution is unique. We also propose a closed-form solution which is simple, efficient and robust. We test our algorithm on synthetic data and publicly available real data. Our quantitative tests show that our method outperforms all prior work on uncalibrated photometric stereo under orthographic projection.

3 0.89935696 438 cvpr-2013-Towards Pose Robust Face Recognition

Author: Dong Yi, Zhen Lei, Stan Z. Li

Abstract: Most existing pose-robust methods are too computationally complex for practical applications, and their performance under unconstrained environments is rarely evaluated. In this paper, we propose a novel method for pose-robust face recognition aimed at practical applications, which is fast, pose robust and works well under unconstrained environments. Firstly, a 3D deformable model is built and a fast 3D model fitting algorithm is proposed to estimate the pose of a face image. Secondly, a group of Gabor filters are transformed according to the pose and shape of the face image for feature extraction. Finally, PCA is applied to the pose-adaptive Gabor features to remove the redundancies, and the Cosine metric is used to evaluate the similarity. The proposed method has three advantages: (1) The pose correction is applied in the filter space rather than the image space, which makes our method less affected by the precision of the 3D model; (2) By combining the holistic pose transformation and local Gabor filtering, the final feature is robust to pose and other negative factors in face recognition; (3) The 3D structure and facial symmetry are successfully used to deal with self-occlusion. Extensive experiments on FERET and PIE show the proposed method outperforms state-of-the-art methods significantly; meanwhile, the method works well on LFW.

same-paper 4 0.8933301 38 cvpr-2013-All About VLAD

Author: unkown-author

Abstract: The objective of this paper is large scale object instance retrieval, given a query image. A starting point of such systems is feature detection and description, for example using SIFT. The focus of this paper, however, is towards very large scale retrieval where, due to storage requirements, very compact image descriptors are required and no information about the original SIFT descriptors can be accessed directly at run time. We start from VLAD, the state-of-the art compact descriptor introduced by J´ egou et al. [8] for this purpose, and make three novel contributions: first, we show that a simple change to the normalization method significantly improves retrieval performance; second, we show that vocabulary adaptation can substantially alleviate problems caused when images are added to the dataset after initial vocabulary learning. These two methods set a new stateof-the-art over all benchmarks investigated here for both mid-dimensional (20k-D to 30k-D) and small (128-D) descriptors. Our third contribution is a multiple spatial VLAD representation, MultiVLAD, that allows the retrieval and local- ization of objects that only extend over a small part of an image (again without requiring use of the original image SIFT descriptors).

5 0.89089268 241 cvpr-2013-Label-Embedding for Attribute-Based Classification

Author: Zeynep Akata, Florent Perronnin, Zaid Harchaoui, Cordelia Schmid

Abstract: Attributes are an intermediate representation, which enables parameter sharing between classes, a must when training data is scarce. We propose to view attribute-based image classification as a label-embedding problem: each class is embedded in the space of attribute vectors. We introduce a function which measures the compatibility between an image and a label embedding. The parameters of this function are learned on a training set of labeled samples to ensure that, given an image, the correct classes rank higher than the incorrect ones. Results on the Animals With Attributes and Caltech-UCSD-Birds datasets show that the proposed framework outperforms the standard Direct Attribute Prediction baseline in a zero-shot learning scenario. The label embedding framework offers other advantages such as the ability to leverage alternative sources of information in addition to attributes (e.g. class hierarchies) or to transition smoothly from zero-shot learning to learning with large quantities of data.

6 0.87500083 254 cvpr-2013-Learning SURF Cascade for Fast and Accurate Object Detection

7 0.87401158 119 cvpr-2013-Detecting and Aligning Faces by Image Retrieval

8 0.87233067 322 cvpr-2013-PISA: Pixelwise Image Saliency by Aggregating Complementary Appearance Contrast Measures with Spatial Priors

9 0.87038672 167 cvpr-2013-Fast Multiple-Part Based Object Detection Using KD-Ferns

10 0.86949956 338 cvpr-2013-Probabilistic Elastic Matching for Pose Variant Face Verification

11 0.86940998 204 cvpr-2013-Histograms of Sparse Codes for Object Detection

12 0.86873817 202 cvpr-2013-Hierarchical Saliency Detection

13 0.86820912 339 cvpr-2013-Probabilistic Graphlet Cut: Exploiting Spatial Structure Cue for Weakly Supervised Image Segmentation

14 0.86783981 94 cvpr-2013-Context-Aware Modeling and Recognition of Activities in Video

15 0.86774921 122 cvpr-2013-Detection Evolution with Multi-order Contextual Co-occurrence

16 0.86695415 43 cvpr-2013-Analyzing Semantic Segmentation Using Hybrid Human-Machine CRFs

17 0.86685079 207 cvpr-2013-Human Pose Estimation Using a Joint Pixel-wise and Part-wise Formulation

18 0.86681056 168 cvpr-2013-Fast Object Detection with Entropy-Driven Evaluation

19 0.8666988 89 cvpr-2013-Computationally Efficient Regression on a Dependency Graph for Human Pose Estimation

20 0.86663878 173 cvpr-2013-Finding Things: Image Parsing with Regions and Per-Exemplar Detectors