iccv iccv2013 iccv2013-239 knowledge-graph by maker-knowledge-mining

239 iccv-2013-Learning Hash Codes with Listwise Supervision


Source: pdf

Author: Jun Wang, Wei Liu, Andy X. Sun, Yu-Gang Jiang

Abstract: Hashing techniques have been intensively investigated in the design of highly efficient search engines for large-scale computer vision applications. Compared with prior approximate nearest neighbor search approaches like tree-based indexing, hashing-based search schemes have prominent advantages in terms of both storage and computational efficiencies. Moreover, the procedure of devising hash functions can be easily incorporated into sophisticated machine learning tools, leading to data-dependent and task-specific compact hash codes. Therefore, a number of learning paradigms, ranging from unsupervised to supervised, have been applied to compose appropriate hash functions. However, most of the existing hash function learning methods either treat hash function design as a classification problem or generate binary codes to satisfy pairwise supervision, and have not yet directly optimized the search accuracy. In this paper, we propose to leverage listwise supervision into a principled hash function learning framework. In particular, the ranking information is represented by a set of rank triplets that can be used to assess the quality of ranking. Simple linear projection-based hash functions are solved efficiently through maximizing the ranking quality over the training data. We carry out experiments on large image datasets with size up to one million and compare with the state-of-the-art hashing techniques. The extensive results corroborate that our learned hash codes via listwise supervision can provide superior search accuracy without incurring heavy computational overhead.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Moreover, the procedure of devising hash functions can be easily incorporated into sophisticated machine learning tools, leading to data-dependent and task-specific compact hash codes. [sent-11, score-1.178]

2 Therefore, a number of learning paradigms, ranging from unsupervised to supervised, have been applied to compose appropriate hash functions. [sent-12, score-0.573]

3 However, most of the existing hash function learning methods either treat hash function design as a classification problem or generate binary codes to satisfy pairwise supervision, and have not yet directly optimized the search accuracy. [sent-13, score-1.275]

4 In this paper, we propose to leverage listwise supervision into a principled hash function learning framework. [sent-14, score-0.835]

5 In particular, the ranking information is represented by a set of rank triplets that can be used to assess the quality of ranking. [sent-15, score-0.441]

6 Simple linear projection-based hash functions are solved efficiently through maximizing the ranking quality over the training data. [sent-16, score-0.899]

7 The extensive results corroborate that our learned hash codes via listwise supervision can provide superior search accuracy without incurring heavy computational overhead. [sent-18, score-0.893]

8 Briefly speaking, the objective of hashing is to map an original D-dimensional data space R^D to a binary Hamming space B^K, where each data point is represented by a binary hash code (i.e., a K-bit hash key). [sent-28, score-0.911]

9 The entire data set is mapped to a table with hash keys as entries, namely a hash table. [sent-30, score-1.662]
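
To make the hash-table idea above concrete, the following is a minimal sketch (all names and the toy data are hypothetical, not from the paper) of bucketing K-bit codes so that items sharing a hash key land in the same table entry:

```python
import numpy as np
from collections import defaultdict

def build_hash_table(codes):
    """Group database items by their K-bit hash key (one bucket per distinct code)."""
    table = defaultdict(list)
    for idx, code in enumerate(codes):
        table[code.tobytes()].append(idx)   # the binary code itself is the table key
    return table

# Toy usage: 6 items hashed to K = 4 bits.
rng = np.random.default_rng(0)
codes = rng.integers(0, 2, size=(6, 4), dtype=np.uint8)
table = build_hash_table(codes)
print(table[codes[0].tobytes()])            # indices sharing item 0's bucket
```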

10 Most of the early hashing techniques, including locality sensitive hashing [5] and MinHash, are data-independent random approaches, which do not perform well in applications like image retrieval and search [21]. [sent-31, score-0.604]

11 Recently, semi-supervised/supervised learning algorithms have been employed to design more effective hash functions, and many new hashing methods have been proposed [6, 9, 12, 14, 17, 18, 21, 22]. [sent-32, score-0.881]

12 These learning to hash methods are either pointwise or pairwise; in other words, they leverage instance-level labels or pairwise relations between instances in the learning procedure. [sent-33, score-0.65]

13 Their hashing objectives are to preserve the pointwise or pairwise label information in the learned Hamming space. [sent-34, score-0.351]

14 In this paper, we propose a novel framework for learning hash functions that preserve ground-truth orders of ranking lists. [sent-36, score-0.947]

15 Our learning procedure takes into account the ranking orders, not just instance-level or pairwise label information that was widely used in the prior works. [sent-37, score-0.341]

16 The left component demonstrates the procedure of deriving the ground-truth ranking list r using the semantic relevance or feature similarity/distance, and then converting it to a triplet matrix S(q) for a given query q. [sent-84, score-0.711]

17 The right component describes the estimation of a relaxed ranking triplet matrix S̃(q) from the binary hash codes. [sent-85, score-1.035]

18 The central component shows the objective of minimizing the inconsistency between the two ranking triplet matrices. [sent-86, score-0.509]

19 Ground-truth ranking lists for individual queries are converted to a triplet representation, where each triplet S(q; xi, xj) indicates the order of the instance pair (xi, xj) given a certain query q. [sent-87, score-0.815]

20 Second, given the hash code H(q) for a query q and H(xi), H(xj) for an instance pair xi, xj, we apply inner products to compute the similarities among hash codes and then derive the rank triplet S̃(H(q); H(xi), H(xj)) in the Hamming space. [sent-88, score-1.497]

21 The final step is to minimize the inconsistency between the ground-truth rank triplets and the ones derived from the corresponding hash codes, which implicitly preserves the ground-truth ranking orders in the Hamming space. [sent-89, score-1.059]

22 Section 2 gives a brief overview of related works in hash function design. [sent-93, score-0.554]

23 Section 3 describes the conversion from ranking lists to the triplet representation, and then defines a loss function measuring the inconsistency between the ground-truth ranking list and the ranking list derived in the Hamming space. [sent-94, score-1.312]

24 Specifically, we focus on two categories of methods, which are more related to this paper: label-dependent hashing and ranking order statistics based hashing. [sent-100, score-0.566]

25 Label-Dependent Hashing Realizing that semantic relevance or similarity among data cannot be fully defined by a single distance metric, several researchers explored supervision information to design task-specific hash functions. [sent-103, score-0.76]

26 The first category can be termed pointwise approaches, since each hash function is essentially treated as a binary classifier and the provided label information is exploited to guide the hash function design. [sent-105, score-1.157]

27 For instance, in [21], the authors proposed a semi-supervised hashing (SSH) technique, which aims to maximize the empirical fitness over the labeled sample pairs while also maximizing the information gain from each hash bit to avoid overfitting. [sent-108, score-0.846]

28 Besides SSH, many other recent methods also belong to this category, such as binary reconstructive embedding [9], minimal loss hashing [14], complementary hashing [26], and distance metric learning based hashing [10]. [sent-110, score-0.959]

29 In summary, the existing supervised and semi-supervised hashing methods often use the pairwise label information to pursue the optimal hash functions. [sent-111, score-0.899]

30 The general objective is to encode similar point pairs into the same hash bucket while maximizing the difference between the hash codes of dissimilar point pairs. [sent-112, score-1.173]

31 However, these approaches do not directly optimize on ranking lists, which is critical to assess the search quality in practical applications. [sent-114, score-0.342]

32 CMH exploits the theory of concomitant order statistics to develop locality sensitive hash functions, which can improve the performance under the Cosine similarity measure. [sent-119, score-0.646]

33 More recently, winner-takes-all hashing (WTAH) explores partial order statistics and encodes relative orders of feature dimensions to develop randomized data-dependent hash functions [27]. [sent-120, score-0.894]

34 WTAH produces a sparse embedding and the resulting Hamming distance closely correlates with the ranking similarity measure. [sent-121, score-0.344]

35 The authors of [15] developed a Hamming metric learning framework to minimize a piecewise-smooth upper bound on a triplet ranking loss. [sent-126, score-0.709]

36 Then we propose a triplet representation to formulate our objective of preserving ranking orders. [sent-129, score-0.478]

37 For any specific query point qm, we can derive a ranking list over X, which can be written as a vector r(qm, X) = (r_1^m, · · · , r_n^m, · · · , r_N^m), (1) where each element r_n^m falls into the integer range [1, N] and no two elements share the same value. [sent-136, score-0.499]

38 If r_i^m < r_j^m (i, j = 1, · · · , N), it indicates that sample xi has a higher rank than xj, which means xi is more relevant or similar to qm than xj. [sent-137, score-0.747]

39 If given a similarity function sim(·), we can easily derive the ranking list (r_1^m, · · · , r_N^m) associated with a query sample qm, where the order is estimated as r_i^m < r_j^m if sim(xi, qm) > sim(xj, qm). [sent-138, score-0.539]

40 Though there could be multiple database points with the same similarity measure to the query point, in this paper we ignore the equal case sim(xi, qm) = sim(xj, qm) and assume each database point has an exact ranking order. [sent-140, score-0.36]

41 However, if given the semantic label information, it is also fairly straightforward to convert semantic labels to ranking lists by counting the labels shared between the query point and the database points. [sent-143, score-0.479]
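
As a rough illustration of this conversion (a sketch under the assumption that relevance is simply the count of shared labels; the function names and toy tags are hypothetical, not from the paper):

```python
import numpy as np

def ranking_from_labels(query_labels, db_labels):
    """Rank database points by the number of labels shared with the query.

    query_labels: set of semantic labels of the query.
    db_labels: list of label sets, one per database point.
    Returns ranks (1 = most relevant); ties are broken by index order.
    """
    relevance = np.array([len(query_labels & labels) for labels in db_labels])
    order = np.argsort(-relevance, kind="stable")   # most shared labels first
    ranks = np.empty(len(db_labels), dtype=int)
    ranks[order] = np.arange(1, len(db_labels) + 1)
    return ranks, relevance

# Toy usage with hypothetical tags.
q = {"cat", "indoor"}
db = [{"cat", "indoor"}, {"dog"}, {"cat", "outdoor"}, {"indoor"}]
ranks, rel = ranking_from_labels(q, db)
print(ranks)  # [1 4 2 3]
```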

42 In our formulation for learning hash functions, we will leverage the above supervision information in the form of a ranking list to design ranking-preserving binary hash codes. [sent-144, score-1.593]

43 Conversion from a Ranking List to Triplets Compared to pairwise-based hash learning approaches, using listwise supervision has a clear advantage, since optimizing the ranking list can directly improve the quality of nearest neighbor search. [sent-147, score-1.216]

44 In the previous research on learning to rank [11], frameworks that use a ranked list to train a ranking function, namely listwise approaches, have been well studied. [sent-149, score-0.58]

45 Typically, the objective is to minimize an expected loss function, which intrinsically measures the difference between the ground-truth ranking and the ranking derived from the rank function [25]. [sent-150, score-0.721]

46 One fundamental difference in learning hash functions lies in that a hash function only generates binary codes, instead of explicit ranking scores. [sent-151, score-1.44]

47 Here we present an efficient way to translate the ground-truth ranking list into a set of rank triplets, which can be easily fed into the hash function learning paradigm. [sent-152, score-1.005]

48 Recall we have the ground-truth ranking list for a query qm represented as a vector (r_1^m, · · · , r_n^m, · · · , r_N^m). [sent-153, score-0.868]

49 Therefore, we use a rank triplet S(qm; xi, xj) ∈ R to represent the listwise supervision, and its value is defined as S(qm; xi, xj) = 1 if g(xi, qm) > g(xj, qm), −1 if g(xi, qm) < g(xj, qm), and 0 otherwise, where g(·) is the scoring function that induces the ranking (equivalently, S(qm; xi, xj) = 1 when r_i^m < r_j^m and −1 when r_i^m > r_j^m). [sent-154, score-0.514]
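
A minimal sketch of this conversion, using the sign convention reconstructed above (S(q)[i, j] = +1 whenever x_i is ranked above x_j; the names and toy ranks are hypothetical):

```python
import numpy as np

def rank_triplets(ranks):
    """Build the triplet matrix S(q) from a ground-truth ranking list.

    ranks: (N,) array, ranks[i] is the rank of x_i for query q (1 = best).
    S[i, j] = +1 if x_i is ranked above x_j, -1 if below, 0 on ties.
    """
    r = np.asarray(ranks)
    return np.sign(r[None, :] - r[:, None]).astype(np.int8)

ranks = np.array([2, 1, 3])          # x_1 best, then x_0, then x_2
S = rank_triplets(ranks)
print(S)
# [[ 0 -1  1]
#  [ 1  0  1]
#  [-1 -1  0]]
```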

50 Hence, we have the following loss function measuring the quality of the ranking list derived from the scoring function g(·): L(g, X, Y) = − Σ_{m,i,j} S̃(qm)(i, j) · S(qm)(i, j), [sent-155, score-0.439]

51 where S̃(qm)(i, j) is the ranking triplet computed from the similarity scores g(xi, qm), g(xj, qm) and S(qm)(i, j) is the ground-truth. [sent-157, score-0.486]
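
Assuming the loss is the negated element-wise agreement between the two triplet matrices as written above, a small sketch (toy matrices, hypothetical names) looks like:

```python
import numpy as np

def listwise_loss(S_true, S_est):
    """Inconsistency between ground-truth triplets S and estimated triplets S~.

    Both are (N, N) matrices for one query; agreement in sign lowers the loss,
    disagreement raises it.
    """
    return -np.sum(S_true * S_est)

# Toy usage: S_true from ranks, S_est derived from some scoring function g.
S_true = np.array([[0, -1, 1], [1, 0, 1], [-1, -1, 0]])
S_est  = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])  # disagrees on the (x_1, x_2) pair
print(listwise_loss(S_true, S_true))  # -6 : perfect agreement
print(listwise_loss(S_true, S_est))   # -2 : larger (worse) loss
```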

52 Recall the definition of the ranking triplet in Eq. [sent-160, score-0.459]

53 Ranking-Based Supervised Hashing This section first introduces the form of hash function that is used in our approach. [sent-171, score-0.554]

54 Then a critical step of converting binary hash codes to rank triplets is described. [sent-172, score-0.858]

55 Finally the loss function using listwise supervision is defined, followed by an efficient solution using the augmented Lagrange multipliers (ALM) method. [sent-173, score-0.339]

56 For simplicity, in this paper we use a linear form for the proposed hash functions. [sent-174, score-0.554]

57 However, our approach can be easily extended to learn nonlinear hash functions. [sent-175, score-0.572]

58 Linear Hash Functions For a data point x ∈ R^D, a hash function h(·) is used to generate a binary code, h : R^D → {−1, 1}. [sent-178, score-0.595]

59 Linear hash functions, such as the well-known locality sensitive hashing [5], are computationally very efficient for large-scale applications. [sent-179, score-0.853]

60 Many recent learning based hashing techniques such as the semi-supervised hashing approach [21] applied this type of linear formulation. [sent-180, score-0.569]

61 Then we follow the general idea of linear functions to define our hash functions in the form h_k(x) = sgn(a_k^T x). [sent-182, score-0.644]

62 Let H = [h1, · · · , hk , · · · , hK] be a sequence of hash functions. [sent-188, score-0.596]

63 Here A = [a_1, · · · , a_k, · · · , a_K] ∈ R^(D×K) is the coefficient matrix, where a_k is the coefficient vector of the k-th hash function. [sent-192, score-0.554]
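
A short sketch of these linear hash functions, stacking the K projections into H(x) = sgn(A^T x); the random A below only stands in for a learned coefficient matrix, and all names are hypothetical:

```python
import numpy as np

def hash_codes(X, A):
    """K-bit linear hash codes H(x) = sgn(A^T x) with values in {-1, +1}.

    X: (N, D) data matrix; A: (D, K) coefficient matrix (columns a_k).
    sgn(0) is mapped to +1 so every bit is in {-1, +1}.
    """
    proj = X @ A                       # (N, K) projections a_k^T x
    return np.where(proj >= 0, 1, -1).astype(np.int8)

rng = np.random.default_rng(0)
D, K, N = 32, 8, 5
A = rng.standard_normal((D, K))        # random stand-in; learning replaces this
X = rng.standard_normal((N, D))
H = hash_codes(X, A)
print(H.shape, H[0])                   # (5, 8) and one 8-bit code in {-1, +1}
```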

64 Deriving Rank Triplet from Hamming Embedding Given a query qm and the dataset X = {x1, · · · , xN}, the corresponding hash codes can be computed as H(qm) and {H(xn)}. (Here the hash bits take values in {−1, 1}, which can easily be converted to {0, 1}-valued hash codes.) [sent-194, score-2.274]

65 Then the Hamming distance can be used to rank the samples {xn} ∈ X associated with the query sample qm, resulting in a ranking list rH(H(qm), H(X)). [sent-196, score-0.483]

66 To represent such a ranking list by rank triplets, we first define the following similarity measure gH(H(qm), H(xi)) between hash codes using the normalized inner product, i. [sent-197, score-1.042]

67 e., gH(H(qm), H(xi)) = (1/K) Σ_k h_k(qm) · h_k(xi). Following the computation from similarity measure to approximated ranking triplet in Eq. [sent-206, score-0.486]

68 The relaxed ranking triplet is computed as S̃(qm; xi, xj) = gH(H(qm), H(xi)) − gH(H(qm), H(xj)) (9). It is easy to see that Hamming distance based ranking in ascending order is equivalent to the ranking generated from the cosine similarity of hash codes in descending order. [sent-209, score-1.245]

69 Therefore, the corresponding ranking triplets S˜(qm) (i, j) are the same. [sent-210, score-0.371]
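
The following sketch computes the normalized inner-product similarity g_H, forms the relaxed triplets as differences of similarities, and checks the equivalence between descending-similarity and ascending-Hamming-distance orderings (toy codes, hypothetical names):

```python
import numpy as np

def gH(Hq, Hx):
    """Normalized inner product between a query code Hq (K,) and codes Hx (N, K)."""
    K = Hq.shape[0]
    return (Hx @ Hq) / K

def relaxed_triplets(sims):
    """Relaxed ranking triplets S~(q)[i, j] = gH(q, x_i) - gH(q, x_j)."""
    return sims[:, None] - sims[None, :]

rng = np.random.default_rng(1)
K, N = 16, 6
Hq = np.where(rng.standard_normal(K) >= 0, 1, -1)
Hx = np.where(rng.standard_normal((N, K)) >= 0, 1, -1)

sims = gH(Hq, Hx)
S_rel = relaxed_triplets(sims)         # would be matched against the ground-truth S
hamming = np.sum(Hx != Hq, axis=1)
# Descending similarity order equals ascending Hamming-distance order.
print(np.array_equal(np.argsort(-sims, kind="stable"),
                     np.argsort(hamming, kind="stable")))  # True
```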

70 Finally, we explicitly encourage the learned hash codes to carry the least redundant information by enforcing orthogonality constraints. [sent-249, score-0.619]

71 Following the work in [24, 21], we relax such hard constraints on the hash codes and instead make the projection directions orthogonal, leading to the constrained problem A* = argmin_A L_H subject to A^T A = I. [sent-250, score-0.619]
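
The paper solves this constrained problem with an augmented Lagrange multipliers (ALM) method; the sketch below is not that solver. It only illustrates the relaxed orthogonality term ||A^T A − I||_F^2 and its gradient, which would be combined with the ranking loss L_H in the full objective (all names are hypothetical):

```python
import numpy as np

def orthogonality_penalty(A):
    """Penalty ||A^T A - I||_F^2 encouraging orthogonal projection directions."""
    K = A.shape[1]
    G = A.T @ A - np.eye(K)
    return np.sum(G * G)

def orthogonality_penalty_grad(A):
    """Gradient of the penalty above: 4 * A (A^T A - I)."""
    K = A.shape[1]
    return 4.0 * A @ (A.T @ A - np.eye(K))

# Illustrative gradient steps on the penalty alone (the ranking loss term
# would be added here in the real objective).
rng = np.random.default_rng(0)
A = rng.standard_normal((32, 8))
for _ in range(2000):
    A -= 1e-3 * orthogonality_penalty_grad(A)
print(f"{orthogonality_penalty(A):.6f}")   # ~0: columns are now nearly orthonormal
```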

72 Complexity Analysis The computational cost for learning the ranking-based supervised hash function consists of two parts: the offline computation of the ground-truth ranking triplets and the online training of the hash functions. [sent-273, score-0.612]

73 Performance evaluation on the CIFAR dataset using different numbers of hash bits. [sent-276, score-0.554]

74 The online training for deriving the optimal hash functions is fairly fast. [sent-282, score-0.636]

75 Performance evaluation on the NUSWIDE dataset using different numbers of hash bits. [sent-306, score-0.554]

76 Accordingly, we can derive ground-truth weak ranking lists with three relevance levels. [sent-309, score-0.42]

77 Compared to the CIFAR dataset, NUSWIDE has many more relevance levels in the ranking lists. [sent-314, score-0.362]

78 Finally we can obtain ground-truth ranking lists with three relevance levels. [sent-324, score-0.401]

79 Experimental Setup We compare the proposed RSH method with four representative techniques, including spectral hashing (SH) [24], semi-supervised hashing (SSH) [21], and two order statistics based methods, i.e., [sent-327, score-0.55]

80 concomitant min hashing (CMH) [4] and winner-takes-all hashing (WTAH) [27]. [sent-329, score-0.591]

81 Similarly, for RSH, we randomly sample 100 query samples and 1000 data points to compute the ground-truth ranking lists, which finally gives us a triplet tensor S of size 100 × 1000 × 1000. [sent-332, score-0.564]

82 Performance evaluation on the One-Million tiny image dataset using different numbers of hash bits. [sent-335, score-0.587]

83 Evaluation Metrics Following the evaluation method suggested in [23], we measure the search quality of a single hash table using both Hamming ranking and hash lookup. [sent-343, score-1.632]

84 We use normalized discounted cumulative gain (NDCG) to evaluate the ranking quality in Hamming space for each individual query [7], which is a very popular measure in the IR community. [sent-345, score-0.446]

85 Here p indicates the truncation position in a ranking list and rel_i indicates the relevance level of the i-th returned sample. [sent-348, score-0.467]

86 Accordingly, if the ideal ranking gives the DCG value IDCG_p, NDCG is calculated as NDCG_p = DCG_p / IDCG_p. For samples falling at the same Hamming distance to the query points, the expectation of NDCG is computed. [sent-350, score-0.378]
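
A small sketch of NDCG under the common DCG formulation from [7] (rel_1 + Σ_{i≥2} rel_i / log2 i); the exact DCG discount used in the paper is not recoverable from this summary, so treat that choice as an assumption (names and toy relevance levels are hypothetical):

```python
import numpy as np

def dcg_at_p(rels, p):
    """Discounted cumulative gain at cutoff p (Jarvelin & Kekalainen style)."""
    rels = np.asarray(rels, dtype=float)[:p]
    discounts = np.ones_like(rels)
    discounts[1:] = np.log2(np.arange(2, rels.size + 1))
    return np.sum(rels / discounts)

def ndcg_at_p(rels, p):
    """NDCG_p = DCG_p / IDCG_p, where IDCG_p uses the ideal (sorted) ranking."""
    ideal = np.sort(rels)[::-1]
    idcg = dcg_at_p(ideal, p)
    return dcg_at_p(rels, p) / idcg if idcg > 0 else 0.0

# Graded relevance of a returned list; the ideal order would be [2, 2, 1, 1, 0].
print(round(ndcg_at_p([2, 1, 2, 0, 1], p=5), 4))
```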

87 On the other hand, hash lookup returns the samples within a certain Hamming radius r (set as r ≤ 3 in the experiments). [sent-351, score-0.621]

88 Since hash lookup does not provide a ranking for returned points with equal Hamming distance to the queries, we use average cumulative gain (ACG), i.e., the mean relevance value of the returned samples, to measure the quality of these returned samples [7]. [sent-352, score-1.049]
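
A sketch of hash lookup within Hamming radius r and the ACG of the returned samples, taking ACG as the mean graded relevance of what is returned (toy data, hypothetical names; codes here are {0, 1}-valued):

```python
import numpy as np

def hash_lookup(Hq, Hx, r=3):
    """Indices of database codes within Hamming radius r of the query code."""
    dists = np.sum(Hx != Hq, axis=1)
    return np.flatnonzero(dists <= r)

def acg(returned, relevance):
    """Average cumulative gain: mean relevance of the returned samples."""
    return float(np.mean(relevance[returned])) if returned.size else 0.0

rng = np.random.default_rng(2)
K, N = 8, 100
Hq = rng.integers(0, 2, K)
Hx = rng.integers(0, 2, (N, K))
relevance = rng.integers(0, 3, N)          # hypothetical graded relevance levels
hits = hash_lookup(Hq, Hx, r=3)
print(len(hits), round(acg(hits, relevance), 3))
```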

89 Results The performances using various numbers of hash bits are presented in Figures 2, 3, 4, for the CIFAR, NUSWIDE and One-Million datasets, respectively. [sent-363, score-0.58]

90 For the comparison of hash lookup performance, SSH achieves similarly high performance, and even outperforms RSH in a few tested cases on the CIFAR dataset and the One-Million dataset. [sent-367, score-0.578]

91 Note that the CIFAR dataset and the One-Million dataset only have three relevance levels, which encode very weak ranking orders. [sent-368, score-0.362]

92 When using longer bits, the Hamming embedding becomes increasingly sparse and many queries have empty returns (treated as zero cumulative gain), which prevents a single hash table from achieving higher performance. [sent-369, score-0.633]

93 The time cost for training the hash functions can be ranked as: RSH > SSH > SH > WTAH ≈ CMH. [sent-373, score-0.587]

94 Figure 5 shows the time cost for generating hash codes using different approaches on CIFAR and NUSWIDE. [sent-377, score-0.619]

95 WTAH and CMH take the longest time to compute the hash codes since both of them need a sorting process over the feature space; WTAH is fairly slow in generating long hash codes. [sent-380, score-0.641]

96 Conclusions In this paper, we have introduced a learning to hash framework through leveraging listwise supervision to train efficient hash functions. [sent-383, score-1.371]

97 First, realizing the difficulty of directly optimizing over discrete ranking orders, we introduced a triplet representation for listwise supervision and proved that this representation is equivalent to rank orders. [sent-385, score-0.772]

98 Then, we proposed to match Hamming ranking to the given ranking in the semantic space through minimizing the inconsistency between two triplet representations. [sent-386, score-0.81]

99 Important future work includes the extension to nonlinear cases by applying kernels and the design of multiple ranking-based supervised hash tables to further boost image search quality. [sent-391, score-0.623]

100 Locality sensitive hash functions based on concomitant rank order statistics. [sent-423, score-0.677]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('hash', 0.554), ('qm', 0.452), ('ranking', 0.291), ('hashing', 0.275), ('rsh', 0.192), ('hamming', 0.182), ('triplet', 0.168), ('listwise', 0.165), ('cifar', 0.123), ('wtah', 0.109), ('ssh', 0.101), ('nuswide', 0.096), ('cmh', 0.085), ('xj', 0.081), ('triplets', 0.08), ('supervision', 0.079), ('ndcg', 0.074), ('relevance', 0.071), ('query', 0.069), ('acg', 0.069), ('codes', 0.065), ('rnm', 0.064), ('list', 0.056), ('smij', 0.055), ('xi', 0.053), ('rank', 0.049), ('returned', 0.049), ('loss', 0.049), ('hk', 0.042), ('concomitant', 0.041), ('multiplier', 0.039), ('supervised', 0.039), ('alm', 0.039), ('lists', 0.039), ('flops', 0.036), ('norouzi', 0.036), ('gh', 0.033), ('functions', 0.033), ('tiny', 0.033), ('orders', 0.032), ('rim', 0.032), ('sim', 0.032), ('pairwise', 0.031), ('inconsistency', 0.031), ('search', 0.03), ('lagrange', 0.03), ('aa', 0.029), ('semantic', 0.029), ('bellevue', 0.027), ('dcg', 0.027), ('fudan', 0.027), ('minhash', 0.027), ('rjm', 0.027), ('superclass', 0.027), ('pointwise', 0.027), ('queries', 0.027), ('similarity', 0.027), ('pages', 0.027), ('deriving', 0.027), ('ibm', 0.026), ('bits', 0.026), ('embedding', 0.026), ('cumulative', 0.026), ('radius', 0.025), ('lagrangian', 0.025), ('augmented', 0.025), ('percentile', 0.025), ('ak', 0.025), ('pm', 0.025), ('sgn', 0.024), ('lookup', 0.024), ('graded', 0.024), ('locality', 0.024), ('andy', 0.022), ('discounted', 0.022), ('derived', 0.022), ('fairly', 0.022), ('binary', 0.022), ('multipliers', 0.021), ('sh', 0.021), ('quality', 0.021), ('analytics', 0.02), ('realizing', 0.02), ('xn', 0.019), ('code', 0.019), ('derive', 0.019), ('learning', 0.019), ('watson', 0.019), ('objective', 0.019), ('descent', 0.019), ('groundtruth', 0.018), ('samples', 0.018), ('tensor', 0.018), ('kumar', 0.018), ('easily', 0.018), ('leverage', 0.018), ('reconstructive', 0.018), ('preserve', 0.018), ('gain', 0.017), ('cosine', 0.017)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 239 iccv-2013-Learning Hash Codes with Listwise Supervision

Author: Jun Wang, Wei Liu, Andy X. Sun, Yu-Gang Jiang

Abstract: Hashing techniques have been intensively investigated in the design of highly efficient search engines for large-scale computer vision applications. Compared with prior approximate nearest neighbor search approaches like tree-based indexing, hashing-based search schemes have prominent advantages in terms of both storage and computational efficiencies. Moreover, the procedure of devising hash functions can be easily incorporated into sophisticated machine learning tools, leading to data-dependent and task-specific compact hash codes. Therefore, a number of learning paradigms, ranging from unsupervised to supervised, have been applied to compose appropriate hash functions. However, most of the existing hash function learning methods either treat hash function design as a classification problem or generate binary codes to satisfy pairwise supervision, and have not yet directly optimized the search accuracy. In this paper, we propose to leverage listwise supervision into a principled hash function learning framework. In particular, the ranking information is represented by a set of rank triplets that can be used to assess the quality of ranking. Simple linear projection-based hash functions are solved efficiently through maximizing the ranking quality over the training data. We carry out experiments on large image datasets with size up to one million and compare with the state-of-the-art hashing techniques. The extensive results corroborate that our learned hash codes via listwise supervision can provide superior search accuracy without incurring heavy computational overhead.

2 0.54854745 229 iccv-2013-Large-Scale Video Hashing via Structure Learning

Author: Guangnan Ye, Dong Liu, Jun Wang, Shih-Fu Chang

Abstract: Recently, learning based hashing methods have become popular for indexing large-scale media data. Hashing methods map high-dimensional features to compact binary codes that are efficient to match and robust in preserving original similarity. However, most of the existing hashing methods treat videos as a simple aggregation of independent frames and index each video through combining the indexes of frames. The structure information of videos, e.g., discriminative local visual commonality and temporal consistency, is often neglected in the design of hash functions. In this paper, we propose a supervised method that explores the structure learning techniques to design efficient hash functions. The proposed video hashing method formulates a minimization problem over a structure-regularized empirical loss. In particular, the structure regularization exploits the common local visual patterns occurring in video frames that are associated with the same semantic class, and simultaneously preserves the temporal consistency over successive frames from the same video. We show that the minimization objective can be efficiently solved by an Accelerated Proximal Gradient (APG) method. Extensive experiments on two large video benchmark datasets (up to around 150K video clips with over 12 million frames) show that the proposed method significantly outperforms the state-of-the-art hashing methods.

3 0.5420593 13 iccv-2013-A General Two-Step Approach to Learning-Based Hashing

Author: Guosheng Lin, Chunhua Shen, David Suter, Anton van_den_Hengel

Abstract: Most existing approaches to hashing apply a single form of hash function, and an optimization process which is typically deeply coupled to this specific form. This tight coupling restricts the flexibility of the method to respond to the data, and can result in complex optimization problems that are difficult to solve. Here we propose a flexible yet simple framework that is able to accommodate different types of loss functions and hash functions. This framework allows a number of existing approaches to hashing to be placed in context, and simplifies the development of new problemspecific hashing methods. Our framework decomposes the hashing learning problem into two steps: hash bit learning and hash function learning based on the learned bits. The first step can typically be formulated as binary quadratic problems, and the second step can be accomplished by training standard binary classifiers. Both problems have been extensively studied in the literature. Our extensive experiments demonstrate that the proposed framework is effective, flexible and outperforms the state-of-the-art.

4 0.42876771 83 iccv-2013-Complementary Projection Hashing

Author: Zhongming Jin, Yao Hu, Yue Lin, Debing Zhang, Shiding Lin, Deng Cai, Xuelong Li

Abstract: Recently, hashing techniques have been widely applied to solve the approximate nearest neighbors search problem in many vision applications. Generally, these hashing approaches generate 2^c buckets, where c is the length of the hash code. A good hashing method should satisfy the following two requirements: 1) mapping the nearby data points into the same bucket or nearby (measured by the Hamming distance) buckets. 2) all the data points are evenly distributed among all the buckets. In this paper, we propose a novel algorithm named Complementary Projection Hashing (CPH) to find the optimal hashing functions which explicitly considers the above two requirements. Specifically, CPH aims at sequentially finding a series of hyperplanes (hashing functions) which cross the sparse region of the data. At the same time, the data points are evenly distributed in the hypercubes generated by these hyperplanes. The experiments comparing with the state-of-the-art hashing methods demonstrate the effectiveness of the proposed method.

5 0.42067966 409 iccv-2013-Supervised Binary Hash Code Learning with Jensen Shannon Divergence

Author: Lixin Fan

Abstract: This paper proposes to learn binary hash codes within a statistical learning framework, in which an upper bound of the probability of Bayes decision errors is derived for different forms of hash functions and a rigorous proof of the convergence of the upper bound is presented. Consequently, minimizing such an upper bound leads to consistent performance improvements of existing hash code learning algorithms, regardless of whether original algorithms are unsupervised or supervised. This paper also illustrates a fast hash coding method that exploits simple binary tests to achieve orders of magnitude improvement in coding speed as compared to projection based methods.

6 0.14259945 337 iccv-2013-Random Grids: Fast Approximate Nearest Neighbors and Range Searching for Image Search

7 0.12298372 450 iccv-2013-What is the Most EfficientWay to Select Nearest Neighbor Candidates for Fast Approximate Nearest Neighbor Search?

8 0.12230952 400 iccv-2013-Stable Hyper-pooling and Query Expansion for Event Detection

9 0.11915601 162 iccv-2013-Fast Subspace Search via Grassmannian Based Hashing

10 0.10318388 29 iccv-2013-A Scalable Unsupervised Feature Merging Approach to Efficient Dimensionality Reduction of High-Dimensional Visual Data

11 0.078458056 168 iccv-2013-Finding the Best from the Second Bests - Inhibiting Subjective Bias in Evaluation of Visual Tracking Algorithms

12 0.077026352 159 iccv-2013-Fast Neighborhood Graph Search Using Cartesian Concatenation

13 0.075386919 445 iccv-2013-Visual Reranking through Weakly Supervised Multi-graph Learning

14 0.067739576 305 iccv-2013-POP: Person Re-identification Post-rank Optimisation

15 0.063919857 380 iccv-2013-Semantic Transform: Weakly Supervised Semantic Inference for Relating Visual Attributes

16 0.062380094 221 iccv-2013-Joint Inverted Indexing

17 0.061746716 266 iccv-2013-Mining Multiple Queries for Image Retrieval: On-the-Fly Learning of an Object-Specific Mid-level Representation

18 0.061743848 431 iccv-2013-Unbiased Metric Learning: On the Utilization of Multiple Datasets and Web Images for Softening Bias

19 0.058919553 334 iccv-2013-Query-Adaptive Asymmetrical Dissimilarities for Visual Object Retrieval

20 0.057618376 446 iccv-2013-Visual Semantic Complex Network for Web Images


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.124), (1, 0.082), (2, -0.081), (3, -0.123), (4, -0.048), (5, 0.394), (6, 0.011), (7, -0.013), (8, -0.296), (9, 0.213), (10, -0.266), (11, 0.141), (12, -0.02), (13, -0.171), (14, -0.009), (15, 0.041), (16, -0.137), (17, 0.175), (18, -0.166), (19, 0.066), (20, -0.006), (21, 0.041), (22, 0.001), (23, -0.011), (24, 0.046), (25, 0.002), (26, -0.012), (27, -0.004), (28, -0.025), (29, -0.007), (30, 0.012), (31, -0.029), (32, -0.003), (33, 0.005), (34, -0.024), (35, -0.006), (36, -0.015), (37, 0.047), (38, -0.032), (39, 0.001), (40, -0.012), (41, -0.006), (42, 0.007), (43, -0.024), (44, 0.01), (45, 0.014), (46, -0.008), (47, 0.023), (48, -0.001), (49, -0.002)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9593454 239 iccv-2013-Learning Hash Codes with Listwise Supervision

Author: Jun Wang, Wei Liu, Andy X. Sun, Yu-Gang Jiang

Abstract: Hashing techniques have been intensively investigated in the design of highly efficient search engines for largescale computer vision applications. Compared with prior approximate nearest neighbor search approaches like treebased indexing, hashing-based search schemes have prominent advantages in terms of both storage and computational efficiencies. Moreover, the procedure of devising hash functions can be easily incorporated into sophisticated machine learning tools, leading to data-dependent and task-specific compact hash codes. Therefore, a number of learning paradigms, ranging from unsupervised to supervised, have been applied to compose appropriate hash functions. How- ever, most of the existing hash function learning methods either treat hash function design as a classification problem or generate binary codes to satisfy pairwise supervision, and have not yet directly optimized the search accuracy. In this paper, we propose to leverage listwise supervision into a principled hash function learning framework. In particular, the ranking information is represented by a set of rank triplets that can be used to assess the quality of ranking. Simple linear projection-based hash functions are solved efficiently through maximizing the ranking quality over the training data. We carry out experiments on large image datasets with size up to one million and compare with the state-of-the-art hashing techniques. The extensive results corroborate that our learned hash codes via listwise supervision can provide superior search accuracy without incurring heavy computational overhead.

2 0.95255929 83 iccv-2013-Complementary Projection Hashing

Author: Zhongming Jin, Yao Hu, Yue Lin, Debing Zhang, Shiding Lin, Deng Cai, Xuelong Li

Abstract: Recently, hashing techniques have been widely applied to solve the approximate nearest neighbors search problem in many vision applications. Generally, these hashing approaches generate 2^c buckets, where c is the length of the hash code. A good hashing method should satisfy the following two requirements: 1) mapping the nearby data points into the same bucket or nearby (measured by the Hamming distance) buckets. 2) all the data points are evenly distributed among all the buckets. In this paper, we propose a novel algorithm named Complementary Projection Hashing (CPH) to find the optimal hashing functions which explicitly considers the above two requirements. Specifically, CPH aims at sequentially finding a series of hyperplanes (hashing functions) which cross the sparse region of the data. At the same time, the data points are evenly distributed in the hypercubes generated by these hyperplanes. The experiments comparing with the state-of-the-art hashing methods demonstrate the effectiveness of the proposed method.

3 0.94768226 13 iccv-2013-A General Two-Step Approach to Learning-Based Hashing

Author: Guosheng Lin, Chunhua Shen, David Suter, Anton van_den_Hengel

Abstract: Most existing approaches to hashing apply a single form of hash function, and an optimization process which is typically deeply coupled to this specific form. This tight coupling restricts the flexibility of the method to respond to the data, and can result in complex optimization problems that are difficult to solve. Here we propose a flexible yet simple framework that is able to accommodate different types of loss functions and hash functions. This framework allows a number of existing approaches to hashing to be placed in context, and simplifies the development of new problemspecific hashing methods. Our framework decomposes the hashing learning problem into two steps: hash bit learning and hash function learning based on the learned bits. The first step can typically be formulated as binary quadratic problems, and the second step can be accomplished by training standard binary classifiers. Both problems have been extensively studied in the literature. Our extensive experiments demonstrate that the proposed framework is effective, flexible and outperforms the state-of-the-art.

4 0.89170122 409 iccv-2013-Supervised Binary Hash Code Learning with Jensen Shannon Divergence

Author: Lixin Fan

Abstract: This paper proposes to learn binary hash codes within a statistical learning framework, in which an upper bound of the probability of Bayes decision errors is derived for different forms of hash functions and a rigorous proof of the convergence of the upper bound is presented. Consequently, minimizing such an upper bound leads to consistent performance improvements of existing hash code learning algorithms, regardless of whether original algorithms are unsupervised or supervised. This paper also illustrates a fast hash coding method that exploits simple binary tests to achieve orders of magnitude improvement in coding speed as compared to projection based methods.

5 0.88518548 229 iccv-2013-Large-Scale Video Hashing via Structure Learning

Author: Guangnan Ye, Dong Liu, Jun Wang, Shih-Fu Chang

Abstract: Recently, learning based hashing methods have become popular for indexing large-scale media data. Hashing methods map high-dimensional features to compact binary codes that are efficient to match and robust in preserving original similarity. However, most of the existing hashing methods treat videos as a simple aggregation of independent frames and index each video through combining the indexes of frames. The structure information of videos, e.g., discriminative local visual commonality and temporal consistency, is often neglected in the design of hash functions. In this paper, we propose a supervised method that explores the structure learning techniques to design efficient hash functions. The proposed video hashing method formulates a minimization problem over a structure-regularized empirical loss. In particular, the structure regularization exploits the common local visual patterns occurring in video frames that are associated with the same semantic class, and simultaneously preserves the temporal consistency over successive frames from the same video. We show that the minimization objective can be efficiently solved by an Accelerated Proximal Gradient (APG) method. Extensive experiments on two large video benchmark datasets (up to around 150K video clips with over 12 million frames) show that the proposed method significantly outperforms the state-of-the-art hashing methods.

6 0.42269066 29 iccv-2013-A Scalable Unsupervised Feature Merging Approach to Efficient Dimensionality Reduction of High-Dimensional Visual Data

7 0.41848007 450 iccv-2013-What is the Most EfficientWay to Select Nearest Neighbor Candidates for Fast Approximate Nearest Neighbor Search?

8 0.37664387 221 iccv-2013-Joint Inverted Indexing

9 0.33790481 400 iccv-2013-Stable Hyper-pooling and Query Expansion for Event Detection

10 0.31056759 337 iccv-2013-Random Grids: Fast Approximate Nearest Neighbors and Range Searching for Image Search

11 0.28032669 159 iccv-2013-Fast Neighborhood Graph Search Using Cartesian Concatenation

12 0.23928964 162 iccv-2013-Fast Subspace Search via Grassmannian Based Hashing

13 0.23348483 446 iccv-2013-Visual Semantic Complex Network for Web Images

14 0.22793326 332 iccv-2013-Quadruplet-Wise Image Similarity Learning

15 0.22528501 347 iccv-2013-Recursive Estimation of the Stein Center of SPD Matrices and Its Applications

16 0.22442973 287 iccv-2013-Neighbor-to-Neighbor Search for Fast Coding of Feature Vectors

17 0.22256006 171 iccv-2013-Fix Structured Learning of 2013 ICCV paper k2opt.pdf

18 0.21936251 248 iccv-2013-Learning to Rank Using Privileged Information

19 0.19806167 227 iccv-2013-Large-Scale Image Annotation by Efficient and Robust Kernel Metric Learning

20 0.19768739 445 iccv-2013-Visual Reranking through Weakly Supervised Multi-graph Learning


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.184), (7, 0.041), (26, 0.056), (31, 0.04), (42, 0.113), (64, 0.031), (73, 0.019), (74, 0.23), (89, 0.123), (98, 0.024)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.82396442 239 iccv-2013-Learning Hash Codes with Listwise Supervision

Author: Jun Wang, Wei Liu, Andy X. Sun, Yu-Gang Jiang

Abstract: Hashing techniques have been intensively investigated in the design of highly efficient search engines for large-scale computer vision applications. Compared with prior approximate nearest neighbor search approaches like tree-based indexing, hashing-based search schemes have prominent advantages in terms of both storage and computational efficiencies. Moreover, the procedure of devising hash functions can be easily incorporated into sophisticated machine learning tools, leading to data-dependent and task-specific compact hash codes. Therefore, a number of learning paradigms, ranging from unsupervised to supervised, have been applied to compose appropriate hash functions. However, most of the existing hash function learning methods either treat hash function design as a classification problem or generate binary codes to satisfy pairwise supervision, and have not yet directly optimized the search accuracy. In this paper, we propose to leverage listwise supervision into a principled hash function learning framework. In particular, the ranking information is represented by a set of rank triplets that can be used to assess the quality of ranking. Simple linear projection-based hash functions are solved efficiently through maximizing the ranking quality over the training data. We carry out experiments on large image datasets with size up to one million and compare with the state-of-the-art hashing techniques. The extensive results corroborate that our learned hash codes via listwise supervision can provide superior search accuracy without incurring heavy computational overhead.

2 0.73157275 446 iccv-2013-Visual Semantic Complex Network for Web Images

Author: Shi Qiu, Xiaogang Wang, Xiaoou Tang

Abstract: This paper proposes modeling the complex web image collections with an automatically generated graph structure called visual semantic complex network (VSCN). The nodes on this complex network are clusters of images with both visual and semantic consistency, called semantic concepts. These nodes are connected based on the visual and semantic correlations. Our VSCN with 33,240 concepts is generated from a collection of 10 million web images. A great deal of valuable information on the structures of the web image collections can be revealed by exploring the VSCN, such as the small-world behavior, concept community, indegree distribution, hubs, and isolated concepts. It not only helps us better understand the web image collections at a macroscopic level, but also has many important practical applications. This paper presents two application examples: content-based image retrieval and image browsing. Experimental results show that the VSCN leads to significant improvement on both the precision of image retrieval (over 200%) and user experience for image browsing.

3 0.72811723 450 iccv-2013-What is the Most EfficientWay to Select Nearest Neighbor Candidates for Fast Approximate Nearest Neighbor Search?

Author: Masakazu Iwamura, Tomokazu Sato, Koichi Kise

Abstract: Approximate nearest neighbor search (ANNS) is a basic and important technique used in many tasks such as object recognition. It involves two processes: selecting nearest neighbor candidates and performing a brute-force search of these candidates. Only the former though has scope for improvement. In most existing methods, it approximates the space by quantization. It then calculates all the distances between the query and all the quantized values (e.g., clusters or bit sequences), and selects a fixed number of candidates close to the query. The performance of the method is evaluated based on accuracy as a function of the number of candidates. This evaluation seems rational but poses a serious problem; it ignores the computational cost of the process of selection. In this paper, we propose a new ANNS method that takes into account costs in the selection process. Whereas existing methods employ computationally expensive techniques such as comparative sort and heap, the proposed method does not. This realizes a significantly more efficient search. We have succeeded in reducing computation times by one-third compared with the state-of-the- art on an experiment using 100 million SIFT features.

4 0.72467983 13 iccv-2013-A General Two-Step Approach to Learning-Based Hashing

Author: Guosheng Lin, Chunhua Shen, David Suter, Anton van_den_Hengel

Abstract: Most existing approaches to hashing apply a single form of hash function, and an optimization process which is typically deeply coupled to this specific form. This tight coupling restricts the flexibility of the method to respond to the data, and can result in complex optimization problems that are difficult to solve. Here we propose a flexible yet simple framework that is able to accommodate different types of loss functions and hash functions. This framework allows a number of existing approaches to hashing to be placed in context, and simplifies the development of new problemspecific hashing methods. Our framework decomposes the hashing learning problem into two steps: hash bit learning and hash function learning based on the learned bits. The first step can typically be formulated as binary quadratic problems, and the second step can be accomplished by training standard binary classifiers. Both problems have been extensively studied in the literature. Our extensive experiments demonstrate that the proposed framework is effective, flexible and outperforms the state-of-the-art.

5 0.7243607 294 iccv-2013-Offline Mobile Instance Retrieval with a Small Memory Footprint

Author: Jayaguru Panda, Michael S. Brown, C.V. Jawahar

Abstract: Existing mobile image instance retrieval applications assume a network-based usage where image features are sent to a server to query an online visual database. In this scenario, there are no restrictions on the size of the visual database. This paper, however, examines how to perform this same task offline, where the entire visual index must reside on the mobile device itself within a small memory footprint. Such solutions have applications on location recognition and product recognition. Mobile instance retrieval requires a significant reduction in the visual index size. To achieve this, we describe a set of strategies that can reduce the visual index up to 60-80× compared to a standard instance retrieval implementation found on desktops or servers. While our proposed reduction steps affect the overall mean Average Precision (mAP), they are able to maintain a good Precision for the top K results (PK). We argue that for such an offline application, maintaining a good PK is sufficient. The effectiveness of this approach is demonstrated on several standard databases. A working application designed for a remote historical site is also presented. This application is able to reduce a 50,000 image index structure to 25 MBs while providing a precision of 97% for P10 and 100% for P1.

6 0.72120011 244 iccv-2013-Learning View-Invariant Sparse Representations for Cross-View Action Recognition

7 0.71996242 191 iccv-2013-Handling Uncertain Tags in Visual Recognition

8 0.71089888 352 iccv-2013-Revisiting Example Dependent Cost-Sensitive Learning with Decision Trees

9 0.71067023 374 iccv-2013-Salient Region Detection by UFO: Uniqueness, Focusness and Objectness

10 0.70949441 426 iccv-2013-Training Deformable Part Models with Decorrelated Features

11 0.70948577 153 iccv-2013-Face Recognition Using Face Patch Networks

12 0.70749235 214 iccv-2013-Improving Graph Matching via Density Maximization

13 0.70128107 409 iccv-2013-Supervised Binary Hash Code Learning with Jensen Shannon Divergence

14 0.69654369 83 iccv-2013-Complementary Projection Hashing

15 0.69546461 122 iccv-2013-Distributed Low-Rank Subspace Segmentation

16 0.68994641 229 iccv-2013-Large-Scale Video Hashing via Structure Learning

17 0.67960918 322 iccv-2013-Pose Estimation and Segmentation of People in 3D Movies

18 0.67808187 248 iccv-2013-Learning to Rank Using Privileged Information

19 0.67261916 368 iccv-2013-SYM-FISH: A Symmetry-Aware Flip Invariant Sketch Histogram Shape Descriptor

20 0.67225283 383 iccv-2013-Semi-supervised Learning for Large Scale Image Cosegmentation