nips nips2012 nips2012-307 knowledge-graph by maker-knowledge-mining

307 nips-2012-Semi-Crowdsourced Clustering: Generalizing Crowd Labeling by Robust Distance Metric Learning


Source: pdf

Author: Jinfeng Yi, Rong Jin, Shaili Jain, Tianbao Yang, Anil K. Jain

Abstract: One of the main challenges in data clustering is to define an appropriate similarity measure between two objects. Crowdclustering addresses this challenge by defining the pairwise similarity based on the manual annotations obtained through crowdsourcing. Despite its encouraging results, a key limitation of crowdclustering is that it can only cluster objects when their manual annotations are available. To address this limitation, we propose a new approach for clustering, called semi-crowdsourced clustering that effectively combines the low-level features of objects with the manual annotations of a subset of the objects obtained via crowdsourcing. The key idea is to learn an appropriate similarity measure, based on the low-level features of objects and from the manual annotations of only a small portion of the data to be clustered. One difficulty in learning the pairwise similarity measure is that there is a significant amount of noise and inter-worker variations in the manual annotations obtained via crowdsourcing. We address this difficulty by developing a metric learning algorithm based on the matrix completion method. Our empirical study with two real-world image data sets shows that the proposed algorithm outperforms state-of-the-art distance metric learning algorithms in both clustering accuracy and computational efficiency. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 com Abstract One of the main challenges in data clustering is to define an appropriate similarity measure between two objects. [sent-7, score-0.531]

2 Crowdclustering addresses this challenge by defining the pairwise similarity based on the manual annotations obtained through crowdsourcing. [sent-8, score-0.956]

3 Despite its encouraging results, a key limitation of crowdclustering is that it can only cluster objects when their manual annotations are available. [sent-9, score-1.047]

4 To address this limitation, we propose a new approach for clustering, called semi-crowdsourced clustering that effectively combines the low-level features of objects with the manual annotations of a subset of the objects obtained via crowdsourcing. [sent-10, score-1.062]

5 The key idea is to learn an appropriate similarity measure, based on the low-level features of objects and from the manual annotations of only a small portion of the data to be clustered. [sent-11, score-0.936]

6 One difficulty in learning the pairwise similarity measure is that there is a significant amount of noise and inter-worker variations in the manual annotations obtained via crowdsourcing. [sent-12, score-0.943]

7 We address this difficulty by developing a metric learning algorithm based on the matrix completion method. [sent-13, score-0.638]

8 Our empirical study with two real-world image data sets shows that the proposed algorithm outperforms state-of-the-art distance metric learning algorithms in both clustering accuracy and computational efficiency. [sent-14, score-0.876]

9 The key idea is to first obtain manual annotations of objects through crowdsourcing. [sent-24, score-0.606]

10 The annotations can either be in the form of grouping objects based on their perceived similarities [10] or the keyword assignments to individual objects (e. [sent-25, score-0.874]

11 A pairwise similarity matrix is then computed from the acquired annotations, and is used to cluster objects. [sent-28, score-0.734]

12 Unlike the conventional clustering techniques where the similarity measure is defined based on the features of objects, in crowdclustering, the pairwise similarities are derived from the manual annotations, which better capture the underlying inter-object similarity. [sent-29, score-1.1]

13 Studies [10] have shown that crowdclustering performs significantly better than the conventional clustering methods, given a sufficiently large number of manual annotations for all the objects to be clustered. [sent-30, score-1.062]

14 Despite the encouraging results obtained via crowdclustering, a main shortcoming of crowdclustering is that it can only cluster objects for which manual annotations are available, significantly limiting its application to large scale clustering problems. [sent-36, score-1.209]

15 For instance, when clustering hundreds of thousands of objects, it is not feasible to have each object manually annotated by multiple workers. [sent-37, score-0.389]

16 To address this limitation, we study the problem of semi-crowdsourced clustering, where given the annotations obtained through crowdsourcing for a small subset of the objects, the objective is to cluster the entire collection of objects. [sent-38, score-0.546]

17 Given a set of N objects to be clustered, the objective is to learn a pairwise similarity measure from the crowdsourced labels of n objects (n ≪ N) and the object feature vector x. [sent-40, score-1.048]

18 Note that the available crowdclustering algorithms [10, 25] expect that all N objects be labeled by crowdsourcing. [sent-41, score-0.448]

19 The key to semi-crowdsourced clustering is to define an appropriate similarity measure for the subset of objects that do not have manual annotations (i. [sent-42, score-1.177]

20 given two objects O_i and O_j and their feature representations x_i and x_j, respectively, their similarity sim(O_i, O_j) is given by sim(O_i, O_j) = x_i^T M x_j, where M ⪰ 0 is the learned distance metric. [sent-48, score-1.101]
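The bilinear form above is easy to state concretely. The sketch below (all variable names hypothetical, not from the paper) shows how such a similarity is evaluated; building M = L Lᵀ is one standard way to guarantee positive semi-definiteness, not the paper's learning procedure.

```python
import numpy as np

# Hypothetical illustration of the bilinear similarity sim(O_i, O_j) = x_i^T M x_j.
rng = np.random.default_rng(0)
d = 4
L = rng.standard_normal((d, d))
M = L @ L.T                      # M = L L^T is PSD by construction

def sim(xi, xj, M):
    """Bilinear similarity under a learned metric M."""
    return float(xi @ M @ xj)

xi, xj = rng.standard_normal(d), rng.standard_normal(d)
# A PSD M guarantees sim(x, x) = ||L^T x||^2 >= 0 for every x.
self_sim = sim(xi, xi, M)
```

A PSD constraint on M is exactly what makes the learned similarity behave like an inner product in a transformed space.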

21 Learning a linear similarity function from given pairwise similarities (sometimes referred to as pairwise constraints when similarities are binary) is known as distance metric learning, which has been studied extensively in the literature [24]. [sent-49, score-1.658]

22 The key challenge of distance metric learning in semicrowdsourced clustering arises due to the noise in the pairwise similarities obtained from manual annotations. [sent-50, score-1.413]

23 According to [25], large disagreements are often observed among human workers in specifying pairwise similarities. [sent-51, score-0.455]

24 As a result, pairwise similarities based on the majority vote among human workers often disagree with the true cluster assignments of objects. [sent-52, score-0.802]

25 As an example, the authors in [25] show that for the Scenes data set [8], more than 80% of the pairwise labels obtained from human workers are inconsistent with the true cluster assignment. [sent-53, score-0.598]

26 This large noise in the pairwise similarities due to crowdsourcing could seriously misguide the distance metric learning and lead to a poor prediction performance, as already demonstrated in [12] as well as in our empirical study. [sent-54, score-1.069]

27 We propose a metric learning algorithm that explicitly addresses the presence of noise in pairwise similarities obtained via crowdsourcing. [sent-55, score-0.729]

28 More specifically, the proposed algorithm for clustering N objects consists of three components: (i) filtering noisy pairwise similarities for n objects by keeping only object pairs whose pairwise similarities are agreed upon by many workers (not just a majority of the workers). [sent-57, score-1.672]

29 similarity of 1 when two objects are in the same cluster and 0, otherwise) for arbitrarily large n. [sent-62, score-0.641]

30 We finally note that in addition to distance metric learning, both kernel learning [16] and constrained clustering [2] can be applied to generalize the information in the manual annotations acquired by crowdsourcing. [sent-63, score-1.272]

31 In this work, we focus on distance metric learning. [sent-64, score-0.601]

32 The related work, as well as the discussion on exploring kernel learning and constrained clustering techniques for semi-crowdsourced clustering can be found in Section 4. [sent-65, score-0.439]

33 We then describe the proposed algorithm for learning distance metric from a small set of noisy pairwise similarities that are derived from manual annotations. [sent-67, score-1.238]

34 …, O_n}, and obtain their manual annotations by crowdsourcing. [sent-79, score-0.413]

35 Given the manual annotations collected from the k-th HIT, we define a similarity matrix A^k ∈ R^{n×n} such that A^k_{i,j} = 1 if objects O_i and O_j share common annotations (i. [sent-81, score-1.195]

36 share common annotated keywords or are assigned to the same cluster by the worker), zero if they don’t, and −1 if either of the two objects is not annotated in the k-th HIT (i. [sent-83, score-0.64]
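As a concrete illustration of this definition, the sketch below builds a per-HIT matrix A^k from keyword annotations. The dictionary-based input format and the function name are assumptions made for illustration, not the authors' data format.

```python
import numpy as np

def hit_similarity(annotations, n):
    """Build the per-HIT similarity matrix A^k.

    `annotations` maps object index -> set of keywords one worker assigned
    in this HIT; objects absent from the dict were not shown to the worker.
    A^k[i, j] = 1 if O_i and O_j share a keyword, 0 if they share none,
    and -1 if either object was not annotated in this HIT.
    """
    A = -np.ones((n, n))
    seen = list(annotations)
    for i in seen:
        for j in seen:
            A[i, j] = 1.0 if annotations[i] & annotations[j] else 0.0
    return A

# Toy HIT: a worker labels 3 of 4 objects with free-form keywords.
A1 = hit_similarity({0: {"beach", "sky"}, 1: {"sky"}, 3: {"forest"}}, n=4)
```

Objects 0 and 1 share the keyword "sky" (entry 1), objects 0 and 3 share nothing (entry 0), and object 2 was never shown, so its whole row stays −1.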

37 Note that we only consider a binary similarity measure in this study because our goal is to perfectly reconstruct the ideal pairwise similarities based on the true cluster assignments (i. [sent-86, score-0.927]

38 The objective of semi-crowdsourced clustering is to cluster all the N objects in D based on the features in X and the n × n similarity matrices {A^k}_{k=1}^m for the objects in D̂. [sent-89, score-1.035]

39 To generalize the pairwise similarities from the subset D̂ to the entire collection of objects D, we propose to first learn a distance metric from the similarity matrices {A^k}_{k=1}^m, and then compute the pairwise similarity for all the N objects in D using the learned distance metric. [sent-94, score-2.538]

40 The challenge is how to learn an appropriate distance metric from a set of similarity matrices {A^k}_{k=1}^m. [sent-95, score-0.973]

41 A straightforward approach is to combine multiple similarity matrices into a single similarity matrix by computing their average. [sent-96, score-0.668]

42 The main problem with this simple strategy is that due to the large disagreements among workers in determining the pairwise similarities, the average similarities do not correlate well with the true cluster assignments. [sent-102, score-0.738]

43 In the next subsection, we develop an efficient and robust algorithm that learns a distance metric from a set of noisy similarity matrices. [sent-103, score-0.962]

44 filtering step, matrix completion step and distance metric learning step. [sent-107, score-0.866]

45 To filter out the uncertain object pairs, we introduce two thresholds d0 and d1 (1 ≥ d1 > d0 ≥ 0) into the average similarity matrix Ã. [sent-110, score-0.448]

46 Since any similarity measure smaller than d0 indicates that most workers put the corresponding object pair into different clusters, we simply set it to 0. [sent-111, score-0.571]

47 Object pairs with a similarity measure in the range between d0 and d1 are treated as uncertain object pairs and are discarded (i. [sent-113, score-0.506]

48 The resulting partially observed similarity matrix A is given by

A_{i,j} = { 1 if Ã_{i,j} ∈ [d1, 1]; 0 if Ã_{i,j} ∈ [0, d0]; unobserved otherwise }   (1)

We also define ∆ as the set of observed entries in A:

∆ = {(i, j) ∈ [n] × [n] : Ã_{i,j} ≥ 0, Ã_{i,j} ∉ (d0, d1)}

Matrix completion step. [sent-116, score-0.615]
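A minimal sketch of this filtering step, assuming the per-HIT matrices use −1 for pairs not annotated in a HIT, as defined above. The NaN encoding of unobserved entries and the default thresholds are illustrative choices, not the paper's.

```python
import numpy as np

def filter_similarities(A_list, d0=0.2, d1=0.8):
    """Average per-HIT matrices over the entries observed in each HIT
    (entries != -1), then keep only confident pairs: averages >= d1 become 1,
    averages <= d0 become 0, and averages in (d0, d1) are marked unobserved
    (NaN here), which defines the observed set Delta."""
    stack = np.stack(A_list)                       # shape (m, n, n)
    observed = stack >= 0                          # -1 marks "not in this HIT"
    counts = observed.sum(axis=0)
    with np.errstate(invalid="ignore"):            # pairs never co-annotated -> NaN
        A_bar = np.where(observed, stack, 0.0).sum(axis=0) / counts
    A = np.full_like(A_bar, np.nan)
    A[A_bar >= d1] = 1.0
    A[A_bar <= d0] = 0.0
    delta = ~np.isnan(A)                           # the observed entries
    return A, delta

# Toy example: two HITs agree on the diagonal but split 50/50 off-diagonal,
# so the off-diagonal pairs are discarded as uncertain.
A_hit1 = np.array([[1.0, 1.0], [1.0, 1.0]])
A_hit2 = np.array([[1.0, 0.0], [0.0, 1.0]])
A_filt, delta = filter_similarities([A_hit1, A_hit2])
```

Entries with average similarity 0.5 fall in the uncertain band (0.2, 0.8) and are left out of ∆, which is exactly the behavior the thresholds d0 and d1 are introduced for.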

49 Since A is constructed from the partial clustering results generated by different workers, we expect some of the binary similarity measures in A to be incorrect. [sent-117, score-0.502]

50 If A∗ is the perfect similarity matrix, we have P_∆(A∗ + E) = P_∆(A), where P_∆ outputs a matrix with [P_∆(B)]_{i,j} = B_{i,j} if (i, j) ∈ ∆ and zero otherwise. [sent-119, score-0.404]

51 To reconstruct the perfect similarity matrix A∗ from A, following matrix completion theory [3], we solve the optimization problem min_{A∗, E} ‖A∗‖_* + C‖E‖_1 s.t. P_∆(A∗ + E) = P_∆(A). [sent-121, score-0.669]
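One standard way to solve a nuclear-norm plus ℓ1 program of this kind is an inexact augmented-Lagrangian iteration in the style of robust-PCA solvers. The sketch below is such a generic solver, not the authors' implementation; the step-size schedule, iteration count, and the NaN-free mask encoding are illustrative assumptions.

```python
import numpy as np

def robust_complete(A_obs, mask, C=0.3, mu=0.5, rho=1.2, n_iter=150):
    """Approximately solve  min ||A||_* + C ||E||_1
                            s.t. P_Delta(A + E) = P_Delta(A_obs)
    with an inexact augmented-Lagrangian scheme (illustrative step sizes).
    Unobserved entries carry no constraint: E passes them through penalty-free."""
    D = np.where(mask, A_obs, 0.0)
    A = np.zeros_like(D)
    E = np.zeros_like(D)
    Y = np.zeros_like(D)
    for _ in range(n_iter):
        # A-step: singular value thresholding at level 1/mu
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # E-step: soft-threshold observed residuals; pass unobserved through
        R = D - A + Y / mu
        E = np.where(mask, np.sign(R) * np.maximum(np.abs(R) - C / mu, 0.0), R)
        # dual update enforces the constraint on the observed entries
        Y = Y + mu * (D - A - E)
        mu *= rho
    return A

# Toy demo: a clean, fully observed two-block similarity matrix is already
# low-rank, so the completion should return it essentially unchanged.
A_true = np.zeros((10, 10)); A_true[:5, :5] = 1.0; A_true[5:, 5:] = 1.0
A_hat = robust_complete(A_true, np.ones((10, 10), dtype=bool))
```

With missing or corrupted entries the same routine fills in the low-rank block structure while the sparse term E absorbs inconsistent worker votes.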

52 This step learns a distance metric from the completed similarity matrix A. [sent-126, score-1.019]

53 A common problem shared by most distance metric learning algorithms is their high computational cost due to the constraint that a distance metric has to be positive semi-definite. [sent-127, score-1.202]

54 In this study, we develop an efficient algorithm for distance metric learning that does not have to deal with the positive semi-definite constraint. [sent-128, score-0.601]

55 Our algorithm is based on the key observation that with a high probability, the completed similarity matrix A is positive semi-definite. [sent-129, score-0.397]

56 This property guarantees the resulting distance metric to be positive semi-definite. [sent-131, score-0.601]

57 The proposed distance metric learning algorithm is based on a standard regression algorithm [15]. [sent-132, score-0.655]

58 Given the similarity matrix A, the optimal distance metric M is given by the regression problem

min_{M ∈ R^{d×d}} L(M) = Σ_{i,j=1}^{n} (x_i^T M x_j − A_{i,j})^2 = ‖X^T M X − A‖_F^2   (3)

where x_i is the feature vector for the sampled object O_i and X = (x_1, . . . , x_n). [sent-133, score-1.048]
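Because (3) is an unconstrained least-squares problem, it admits a closed-form solution via the pseudoinverse, M = (XXᵀ)⁺ X A Xᵀ (XXᵀ)⁺. This is a standard least-squares identity, sketched here as an assumption rather than the paper's exact derivation.

```python
import numpy as np

def rdml_fit(X, A):
    """Least-squares fit of min_M ||X^T M X - A||_F^2.

    X is d x n (columns are feature vectors) and A is the n x n completed
    similarity matrix. The pseudoinverse-based closed form below follows
    from the normal equations  X X^T M X X^T = X A X^T  (illustrative,
    not the paper's exact code)."""
    G_inv = np.linalg.pinv(X @ X.T)        # (X X^T)^+, a d x d matrix
    return G_inv @ X @ A @ X.T @ G_inv

rng = np.random.default_rng(0)
d, n = 6, 4
X = rng.standard_normal((d, n))            # rank n almost surely since d >= n
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric target
M = rdml_fit(X, A)
# When rank(X) = n, the fit is exact: X^T M X reproduces A.
```

Avoiding an explicit PSD constraint is what makes this regression view cheap; as the text notes, PSD-ness of the result is instead inherited from the completed matrix A.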

59 Let A(Oi , Oj ) be the perfect similarity that outputs 1 when Oi and Oj belong to the same cluster and zero, otherwise. [sent-146, score-0.485]

60 To learn an ideal distance metric from the perfect similarity measure A(O_i, O_j), we generalize the regression problem in (3) as follows:

min_{M ∈ R^{d×d}} L(M) = E_{x_i, x_j}[(x_i^T M x_j − A(O_i, O_j))^2]   (6)

The solution to (6) is given by M = C_X^{−1} B B^T C_X^{−1}, where C_X = E_{x_i}[x_i x_i^T] and B = E_{x_i}[x_i y_i]. [sent-148, score-1.045]

61 Let Ms be the smoothed version of the ideal distance metric M , i. [sent-149, score-0.624]

62 Given the learned distance metric M_s, we construct a similarity matrix S = X^T M_s X and then apply a spectral clustering algorithm [18] to compute the final data partition for N objects. [sent-158, score-1.196]
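This last stage can be sketched concretely: form S = XᵀM_sX and feed it to a spectral clustering routine. The two-cluster version below splits by the sign of the Fiedler vector of the normalized Laplacian, a deliberate simplification of the algorithm cited as [18], and the toy matrix S stands in for XᵀM_sX.

```python
import numpy as np

def spectral_two_way(S):
    """Minimal two-cluster spectral partition of a similarity matrix S:
    split by the sign of the second-smallest eigenvector of the normalized
    Laplacian (a simplification of the full k-way algorithm)."""
    S = np.maximum((S + S.T) / 2, 0.0)              # symmetrize, clip negatives
    deg = S.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    L = np.eye(len(S)) - d_inv_sqrt[:, None] * S * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(L)                     # eigenvalues in ascending order
    return (vecs[:, 1] >= 0).astype(int)            # sign of the Fiedler vector

# Toy similarity with two clearly separated blocks (stand-in for X^T M_s X).
S = np.block([[np.full((3, 3), 0.9), np.full((3, 2), 0.05)],
              [np.full((2, 3), 0.05), np.full((2, 2), 0.9)]])
labels = spectral_two_way(S)
```

The point of the pipeline is that S is defined for all N objects through their features, so objects never seen by any worker still get clustered.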

63 To perform crowdlabeling, we follow [25], and ask human workers to annotate images with keywords of their choice in each HIT. [sent-168, score-0.428]

64 For every HIT, the pairwise similarity between two images (i. [sent-171, score-0.605]

65 The comparison to the Base method allows us to examine the effect of distance metric learning in semi-crowdsourced clustering, and the comparison to the Raw method reveals the effect of filtering and matrix completion steps in distance metric learning. [sent-177, score-1.467]

66 Some of the other state-of-the-art distance metric learning algorithms (e. [sent-179, score-0.601]

67 For a fair comparison, all distance metric learning algorithms are applied to the pairwise constraints derived from A, the n × n pairwise similarity matrix reconstructed by the matrix completion algorithm. [sent-183, score-1.633]

68 We refer to the proposed distance metric learning algorithm as Regression based Distance Metric Learning, or RDML for short, and the proposed semi-crowdsourced clustering algorithm as Semi-Crowd. [sent-184, score-0.86]

69 First, d0 (d1 ) should be small (large) enough to ensure that most of the retained pairwise similarities are consistent with the cluster assignments. [sent-187, score-0.525]

70 Figure 4: Sample image pairs that are incorrectly clustered by the Base method but correctly clustered by the proposed method (the similarity of our method is based on the normalized distance metric M_s). [sent-220, score-1.162]

71 2 Experimental Results First, we examine the effect of distance metric learning algorithm on semi-crowdsourced clustering. [sent-222, score-0.601]

72 Figure 3 compares the clustering performance of six different metric learning algorithms with that of the Base method that does not learn a distance metric. [sent-223, score-0.831]

73 We observed that four of the distance metric learning algorithms (i. [sent-224, score-0.601]

74 In fact, RCA and DCA can yield better performance than the Base method if all the pairwise similarities are consistent with the cluster assignments. [sent-228, score-0.525]

75 Compared to all the baseline distance metric learning algorithms, RDML, the proposed distance metric learning algorithm, yields the best clustering results for both the data sets and for all values of n (i. [sent-229, score-1.458]

76 This is consistent with our theoretical analysis in Theorem 1, and implies that only a modest number of annotated images is needed by the proposed algorithm to learn an appropriate distance metric. [sent-233, score-0.52]

77 Figure 4 shows some example image pairs for which the Base method fails to make correct cluster assignments, but the proposed RDML method successfully corrects these mistakes with the learned distance metric. [sent-235, score-0.518]

78 In Figure 3, we compare the clustering results of the proposed algorithm for semi-crowdsourced clustering (i. [sent-237, score-0.431]

79 Filtering+Matrix-Completion+RDML) to the Raw method that runs the proposed distance metric algorithm RDML without the filtering and matrix completion steps. [sent-239, score-0.895]

80 Finally, it is interesting to observe that the Raw method still outperforms all the baseline methods, which further verifies the effectiveness of the proposed algorithm for distance metric learning. [sent-242, score-0.656]

81 Finally, we evaluate the computational efficiency of the proposed distance metric learning algorithm. [sent-243, score-0.63]

82 Table 1 shows that the proposed distance metric learning algorithm is significantly more efficient than the baseline approaches evaluated here. [sent-244, score-0.656]

83 Since all the distance metric learning algorithms are applied to the similarity matrix recovered by the matrix completion algorithm, the computational cost of matrix completion is shared by all distance metric learning algorithms used in our evaluation. [sent-275, score-2.099]

84 In each HIT, a small subset of images are randomly sampled from the collection, and a worker is asked to cluster the subset of images into multiple groups. [sent-279, score-0.49]

85 In [25], the authors extend the definition of HITs for crowdclustering by asking workers to annotate images by keywords and then derive pairwise similarities between images based on the commonality of annotated keywords. [sent-281, score-1.231]

86 Although the matrix completion technique was first proposed for crowdclustering in [25], it had a different goal from this work. [sent-283, score-0.549]

87 In [25], matrix completion was used to estimate the similarity matrix, while the proposed approach uses matrix completion to estimate a distance metric, so that crowdsourced labels can be generalized to cluster those images which were not annotated during crowdsourcing. [sent-284, score-1.517]

88 Our work is closely related to distance metric learning that learns a distance metric consistent with a given subset of pairwise similarities/constraints [24]. [sent-285, score-1.463]

89 Although many studies on distance metric learning have been reported, only a few address the challenge of learning a reliable distance metric from noisy pairwise constraints [12, 22]. [sent-286, score-1.526]

90 In contrast, in semi-crowdsourced clustering, we expect that a significantly larger percentage of pairwise similarities are inconsistent with the true cluster assignments (as many as 80% [25]). [sent-288, score-0.597]

91 One limitation of distance metric learning is that it is restricted to a linear similarity function. [sent-289, score-0.941]

92 Kernel learning generalizes distance metric learning to a nonlinear similarity function by mapping each data point to a high dimensional space through a kernel function [16]. [sent-290, score-0.902]

93 We plan to learn a kernel based similarity function from a subset of manually annotated objects. [sent-291, score-0.503]

94 Besides distance metric learning, an alternative approach to incorporate the manual annotations into the clustering process is constrained clustering (or semi-supervised clustering) [2]. [sent-292, score-1.453]

95 Compared to distance metric learning, constrained clustering can be computationally more expensive. [sent-293, score-0.839]

96 Unlike distance metric learning that learns a distance metric from pairwise constraints only once and applies the learned distance metric to cluster any set of objects, a constrained clustering algorithm has to be rerun whenever a new set of objects needs to be clustered. [sent-294, score-2.629]

97 To exploit the strength of constrained clustering algorithms, we plan to explore hybrid approaches that effectively combine distance metric learning with constrained clustering approaches for more accurate and efficient semi-crowdsourced clustering. [sent-295, score-1.077]

98 Distance metric learning from uncertain side information for automated photo tagging. [sent-427, score-0.377]

99 Distance metric learning, with application to clustering with side-information. [sent-437, score-0.552]

100 Crowdclustering with sparse pairwise labels: A matrix completion approach. [sent-444, score-0.465]


similar papers computed by tfidf model


similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999946 307 nips-2012-Semi-Crowdsourced Clustering: Generalizing Crowd Labeling by Robust Distance Metric Learning


2 0.19153363 265 nips-2012-Parametric Local Metric Learning for Nearest Neighbor Classification

Author: Jun Wang, Alexandros Kalousis, Adam Woznica

Abstract: We study the problem of learning local metrics for nearest neighbor classification. Most previous works on local metric learning learn a number of local unrelated metrics. While this ”independence” approach delivers an increased flexibility its downside is the considerable risk of overfitting. We present a new parametric local metric learning method in which we learn a smooth metric matrix function over the data manifold. Using an approximation error bound of the metric matrix function we learn local metrics as linear combinations of basis metrics defined on anchor points over different regions of the instance space. We constrain the metric matrix function by imposing on the linear combinations manifold regularization which makes the learned metric matrix function vary smoothly along the geodesics of the data manifold. Our metric learning method has excellent performance both in terms of predictive power and scalability. We experimented with several largescale classification problems, tens of thousands of instances, and compared it with several state of the art metric learning methods, both global and local, as well as to SVM with automatic kernel selection, all of which it outperforms in a significant manner. 1

3 0.17625082 242 nips-2012-Non-linear Metric Learning

Author: Dor Kedem, Stephen Tyree, Fei Sha, Gert R. Lanckriet, Kilian Q. Weinberger

Abstract: In this paper, we introduce two novel metric learning algorithms, χ2 -LMNN and GB-LMNN, which are explicitly designed to be non-linear and easy-to-use. The two approaches achieve this goal in fundamentally different ways: χ2 -LMNN inherits the computational benefits of a linear mapping from linear metric learning, but uses a non-linear χ2 -distance to explicitly capture similarities within histogram data sets; GB-LMNN applies gradient-boosting to learn non-linear mappings directly in function space and takes advantage of this approach’s robustness, speed, parallelizability and insensitivity towards the single additional hyperparameter. On various benchmark data sets, we demonstrate these methods not only match the current state-of-the-art in terms of kNN classification error, but in the case of χ2 -LMNN, obtain best results in 19 out of 20 learning settings. 1

4 0.16128856 70 nips-2012-Clustering by Nonnegative Matrix Factorization Using Graph Random Walk

Author: Zhirong Yang, Tele Hao, Onur Dikmen, Xi Chen, Erkki Oja

Abstract: Nonnegative Matrix Factorization (NMF) is a promising relaxation technique for clustering analysis. However, conventional NMF methods that directly approximate the pairwise similarities using the least square error often yield mediocre performance for data in curved manifolds because they can capture only the immediate similarities between data samples. Here we propose a new NMF clustering method which replaces the approximated matrix with its smoothed version using random walk. Our method can thus accommodate farther relationships between data samples. Furthermore, we introduce a novel regularization in the proposed objective function in order to improve over spectral clustering. The new learning objective is optimized by a multiplicative Majorization-Minimization algorithm with a scalable implementation for learning the factorizing matrix. Extensive experimental results on real-world datasets show that our method has strong performance in terms of cluster purity. 1

5 0.1606546 9 nips-2012-A Geometric take on Metric Learning

Author: Søren Hauberg, Oren Freifeld, Michael J. Black

Abstract: Multi-metric learning techniques learn local metric tensors in different parts of a feature space. With such an approach, even simple classifiers can be competitive with the state-of-the-art because the distance measure locally adapts to the structure of the data. The learned distance measure is, however, non-metric, which has prevented multi-metric learning from generalizing to tasks such as dimensionality reduction and regression in a principled way. We prove that, with appropriate changes, multi-metric learning corresponds to learning the structure of a Riemannian manifold. We then show that this structure gives us a principled way to perform dimensionality reduction and regression according to the learned metrics. Algorithmically, we provide the first practical algorithm for computing geodesics according to the learned metrics, as well as algorithms for computing exponential and logarithmic maps on the Riemannian manifold. Together, these tools let many Euclidean algorithms take advantage of multi-metric learning. We illustrate the approach on regression and dimensionality reduction tasks that involve predicting measurements of the human body from shape data. 1 Learning and Computing Distances Statistics relies on measuring distances. When the Euclidean metric is insufficient, as is the case in many real problems, standard methods break down. This is a key motivation behind metric learning, which strives to learn good distance measures from data. In the most simple scenarios a single metric tensor is learned, but in recent years, several methods have proposed learning multiple metric tensors, such that different distance measures are applied in different parts of the feature space. This has proven to be a very powerful approach for classification tasks [1, 2], but the approach has not generalized to other tasks. Here we consider the generalization of Principal Component Analysis (PCA) and linear regression; see Fig. 
1 for an illustration of our approach. The main problem with generalizing multi-metric learning is that it is based on assumptions that make the feature space both non-smooth and non-metric. Specifically, it is often assumed that straight lines form geodesic curves and that the metric tensor stays constant along these lines. These assumptions are made because it is believed that computing the actual geodesics is intractable, requiring a discretization of the entire feature space [3]. We solve these problems by smoothing the transitions between different metric tensors, which ensures a metric space where geodesics can be computed. In this paper, we consider the scenario where the metric tensor at a given point in feature space is defined as the weighted average of a set of learned metric tensors. In this model, we prove that the feature space becomes a chart for a Riemannian manifold. This ensures a metric feature space, i.e. dist(x, y) = 0 ⇔ x = y , dist(x, y) = dist(y, x) (symmetry), (1) dist(x, z) ≤ dist(x, y) + dist(y, z) (triangle inequality). To compute statistics according to the learned metric, we need to be able to compute distances, which implies that we need to compute geodesics. Based on the observation that geodesics are 1 (a) Local Metrics & Geodesics (b) Tangent Space Representation (c) First Principal Geodesic Figure 1: Illustration of Principal Geodesic Analysis. (a) Geodesics are computed between the mean and each data point. (b) Data is mapped to the Euclidean tangent space and the first principal component is computed. (c) The principal component is mapped back to the feature space. smooth curves in Riemannian spaces, we derive an algorithm for computing geodesics that only requires a discretization of the geodesic rather than the entire feature space. Furthermore, we show how to compute the exponential and logarithmic maps of the manifold. With this we can map any point back and forth between a Euclidean tangent space and the manifold. 
This gives us a general strategy for incorporating the learned metric tensors in many Euclidean algorithms: map the data to the tangent of the manifold, perform the Euclidean analysis and map the results back to the manifold. Before deriving the algorithms (Sec. 3) we set the scene by an analysis of the shortcomings of current state-of-the-art methods (Sec. 2), which motivates our final model. The model is general and can be used for many problems. Here we illustrate it with several challenging problems in 3D body shape modeling and analysis (Sec. 4). All proofs can be found in the supplementary material along with algorithmic details and further experimental results.

2 Background and Related Work

Single-metric learning learns a metric tensor, M, such that distances are measured as

    dist^2(x_i, x_j) = ||x_i − x_j||^2_M ≡ (x_i − x_j)^T M (x_i − x_j),        (2)

where M is a symmetric and positive definite D × D matrix. Classic approaches for finding such a metric tensor include PCA, where the metric is given by the inverse covariance matrix of the training data; and linear discriminant analysis (LDA), where the metric tensor is M = S_W^{-1} S_B S_W^{-1}, with S_W and S_B being the within-class scatter and the between-class scatter respectively [9]. A more recent approach tries to learn a metric tensor from triplets of data points (x_i, x_j, x_k), where the metric should obey the constraint that dist(x_i, x_j) < dist(x_i, x_k). Here the constraints are often chosen such that x_i and x_j belong to the same class, while x_i and x_k do not. Various relaxed versions of this idea have been suggested such that the metric can be learned by solving a semi-definite or a quadratic program [1, 2, 4–8]. Among the most popular approaches is the Large Margin Nearest Neighbor (LMNN) classifier [5], which finds a linear transformation that satisfies local distance constraints, making the approach suitable for multi-modal classes.
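As a concrete sketch of the single-metric setting, the distance (2) and the LDA metric tensor M = S_W^{-1} S_B S_W^{-1} can be written in a few lines of NumPy. This is an illustrative implementation, not the paper's (which is in Matlab); the function names and the small ridge term added for numerical stability are our own:

```python
import numpy as np

def mahalanobis_dist(x, y, M):
    """Distance under a metric tensor M: sqrt((x-y)^T M (x-y)), cf. Eq. (2)."""
    d = x - y
    return float(np.sqrt(d @ M @ d))

def lda_metric(X, labels):
    """LDA-style metric tensor M = S_W^{-1} S_B S_W^{-1} from class scatters."""
    mu = X.mean(axis=0)
    D = X.shape[1]
    S_W = np.zeros((D, D))
    S_B = np.zeros((D, D))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        S_W += (Xc - mc).T @ (Xc - mc)                 # within-class scatter
        S_B += len(Xc) * np.outer(mc - mu, mc - mu)    # between-class scatter
    S_W_inv = np.linalg.inv(S_W + 1e-8 * np.eye(D))    # small ridge for stability
    return S_W_inv @ S_B @ S_W_inv
```

With M equal to the identity the distance reduces to the ordinary Euclidean one, which is a convenient sanity check.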
For many problems, a single global metric tensor is not enough, which motivates learning several local metric tensors. The classic work by Hastie and Tibshirani [9] advocates locally learning metric tensors according to LDA and using these as part of a kNN classifier. In a somewhat similar fashion, Weinberger and Saul [5] cluster the training data and learn a separate metric tensor for each cluster using LMNN. A more extreme point of view was taken by Frome et al. [1, 2], who learn a diagonal metric tensor for every point in the training set, such that distance rankings are preserved. Similarly, Malisiewicz and Efros [6] find a diagonal metric tensor for each training point such that the distance to a subset of the training data from the same class is kept small. Once a set of metric tensors {M_1, . . . , M_R} has been learned, the distance dist(a, b) is measured according to (2) where "the nearest" metric tensor is used, i.e.

    M(x) = Σ_{r=1}^R (w̃_r(x) / Σ_j w̃_j(x)) M_r,
    where w̃_r(x) = 1 if ||x − x_r||^2_{M_r} ≤ ||x − x_j||^2_{M_j} ∀j, and 0 otherwise,        (3)

where x is either a or b depending on the algorithm. Note that this gives a non-metric distance function as it is not symmetric. To derive this equation, it is necessary to assume that 1) geodesics form straight lines, and 2) the metric tensor stays constant along these lines [3].

[Figure 2: (a)–(b) An illustrative example where straight lines do not form geodesics and where the metric tensor does not stay constant along lines; see text for details. The background color is proportional to the trace of the metric tensor, such that light grey corresponds to regions where paths are short (M_1), and dark grey corresponds to regions where they are long (M_2). (c) The suggested geometric model along with the Riemannian geodesics. Again, background colour is proportional to the trace of the metric tensor; the colour scale is the same as used in (a) and (b). (d) An illustration of the exponential and logarithmic maps.]

Both assumptions are problematic, which we illustrate with a simple example in Fig. 2a–c. Assume we are given two metric tensors M_1 = 2I and M_2 = I positioned at x_1 = (2, 2)^T and x_2 = (4, 4)^T respectively. This gives rise to two regions in feature space in which x_1 is nearest in the first and x_2 is nearest in the second, according to (3). This is illustrated in Fig. 2a. In the same figure, we also show the assumed straight-line geodesics between selected points in space. As can be seen, two of the lines go through both regions, such that the assumption of constant metric tensors along the line is violated. Hence, it would seem natural to measure the length of the line by adding the lengths of the line segments which pass through the different regions of feature space. This was suggested by Ramanan and Baker [3], who also proposed a polynomial time algorithm for measuring these line lengths. This gives a symmetric distance function. Properly computing line lengths according to the local metrics is, however, not enough to ensure that the distance function is metric. As can be seen in Fig. 2a the straight line does not form a geodesic, as a shorter path can be found by circumventing the region with the "expensive" metric tensor M_1, as illustrated in Fig. 2b. This issue makes it trivial to construct cases where the triangle inequality is violated, which again makes the line-length measure non-metric. In summary, if we want a metric feature space, we can neither assume that geodesics are straight lines nor that the metric tensor stays constant along such lines. In practice, good results have been reported using (3) [1,3,5], so it seems obvious to ask: is metricity required?
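The asymmetry of (3) is easy to verify numerically. The following Python sketch (all names are ours) reproduces the two-metric example above, M_1 = 2I at x_1 = (2, 2)^T and M_2 = I at x_2 = (4, 4)^T, and shows that the distance depends on which endpoint selects the metric:

```python
import numpy as np

x1, x2 = np.array([2., 2.]), np.array([4., 4.])
M1, M2 = 2 * np.eye(2), np.eye(2)

def nearest_metric(x):
    """Binary weights of Eq. (3): pick the metric whose anchor is nearest,
    each anchor measured under its own metric tensor."""
    d1 = (x - x1) @ M1 @ (x - x1)
    d2 = (x - x2) @ M2 @ (x - x2)
    return M1 if d1 <= d2 else M2

def dist3(a, b):
    """Distance per Eq. (3), evaluating the metric at the first argument."""
    M = nearest_metric(a)
    d = a - b
    return float(np.sqrt(d @ M @ d))
```

Evaluating the metric at x_1 gives dist3(x1, x2) = 4, while evaluating it at x_2 gives dist3(x2, x1) = √8 ≈ 2.83, so symmetry fails.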
For kNN classifiers this does not appear to be the case, with many successes based on dissimilarities rather than distances [10]. We, however, want to generalize PCA and linear regression, which both seek to minimize the reconstruction error of points projected onto a subspace. As the notion of projection is hard to define sensibly in non-metric spaces, we consider metricity essential. In order to build a model with a metric feature space, we change the weights in (3) to be smooth functions. This imposes a well-behaved geometric structure on the feature space, which we take advantage of in order to perform statistical analysis according to the learned metrics. However, first we review the basics of Riemannian geometry as this provides the theoretical foundation of our work.

2.1 Geodesics and Riemannian Geometry

We start by defining Riemannian manifolds, which intuitively are smoothly curved spaces equipped with an inner product. Formally, they are smooth manifolds endowed with a Riemannian metric [11]:

Definition. A Riemannian metric M on a manifold M is a smoothly varying inner product <a, b>_x = a^T M(x) b in the tangent space T_x M of each point x ∈ M.

Often Riemannian manifolds are represented by a chart, i.e. a parameter space for the curved surface. An example chart is the spherical coordinate system often used to represent spheres. While such charts are often flat spaces, the curvature of the manifold arises from the smooth changes in the metric. On a Riemannian manifold M, the length of a smooth curve c : [0, 1] → M is defined as the integral of the norm of the tangent vector (interpreted as speed) along the curve:

    Length(c) = ∫_0^1 ||c'(λ)||_{M(c(λ))} dλ = ∫_0^1 sqrt( c'(λ)^T M(c(λ)) c'(λ) ) dλ,        (4)

where c' denotes the derivative of c and M(c(λ)) is the metric tensor at c(λ). A geodesic curve is then a length-minimizing curve connecting two given points x and y, i.e.

    c_geo = arg min_c Length(c)  with  c(0) = x and c(1) = y.        (5)
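Eq. (4) suggests a direct numerical approximation: discretize the curve, evaluate the metric locally (here at segment midpoints), and sum the segment lengths. A minimal NumPy sketch, with names of our choosing:

```python
import numpy as np

def curve_length(c, M, n=1000):
    """Approximate Eq. (4): sum sqrt(d^T M d) over small segments d of the
    curve c : [0, 1] -> R^D, evaluating the metric M at segment midpoints."""
    ts = np.linspace(0.0, 1.0, n + 1)
    pts = np.array([c(t) for t in ts])
    length = 0.0
    for p, q in zip(pts[:-1], pts[1:]):
        mid = 0.5 * (p + q)   # metric evaluated at the segment midpoint
        d = q - p
        length += np.sqrt(d @ M(mid) @ d)
    return length
```

As a check, a straight line from the origin to (3, 4) has length 5 under the identity metric and length 10 under M = 4I, since scaling the metric by a factor scales lengths by its square root.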
The distance between x and y is defined as the length of the geodesic. Given a tangent vector v ∈ T_x M, there exists a unique geodesic c_v(t) with initial velocity v at x. The Riemannian exponential map, Exp_x, maps v to a point on the manifold along the geodesic c_v at t = 1. This mapping preserves distances such that dist(c_v(0), c_v(1)) = ||v||. The inverse of the exponential map is the Riemannian logarithmic map, denoted Log_x. Informally, the exponential and logarithmic maps move points back and forth between the manifold and the tangent space while preserving distances (see Fig. 2d for an illustration). This provides a general strategy for generalizing many Euclidean techniques to Riemannian domains: data points are mapped to the tangent space, where ordinary Euclidean techniques are applied, and the results are mapped back to the manifold.

3 A Metric Feature Space

With the preliminaries settled we define the new model. Let C = R^D denote the feature space. We endow C with a metric tensor at every point x, which we define akin to (3),

    M(x) = Σ_{r=1}^R w_r(x) M_r,  where  w_r(x) = w̃_r(x) / Σ_{j=1}^R w̃_j(x),        (6)

with w̃_r > 0. The only difference from (3) is that we shall not restrict ourselves to binary weight functions w̃_r. We assume the metric tensors M_r have already been learned; Sec. 4 contains examples where they have been learned using LMNN [5] and LDA [9]. From the definition of a Riemannian metric, we trivially have the following result:

Lemma 1. The space C = R^D endowed with the metric tensor from (6) is a chart of a Riemannian manifold iff the weights w_r(x) change smoothly with x.

Hence, by only considering smooth weight functions w̃_r we get a well-studied geometric structure on the feature space, which ensures us that it is metric. To illustrate the implications we return to the example in Fig. 2. We change the weight functions from binary to squared exponentials, which gives the feature space shown in Fig. 2c.
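The smooth weighting of (6) with squared-exponential weights can be sketched as follows. This is a hedged illustration in Python; the function name and the `rho` parameter are our own, and each weight measures the distance to its anchor under that anchor's own metric:

```python
import numpy as np

def smooth_metric(x, anchors, metrics, rho=1.0):
    """Eq. (6) with squared-exponential weights
    w~_r(x) = exp(-rho/2 * ||x - x_r||^2_{M_r}), normalized to sum to one."""
    w = np.array([np.exp(-0.5 * rho * (x - xr) @ Mr @ (x - xr))
                  for xr, Mr in zip(anchors, metrics)])
    w = w / w.sum()                                   # normalized weights w_r(x)
    return sum(wr * Mr for wr, Mr in zip(w, metrics))  # convex combination
```

At an anchor far from all others the weight of that anchor dominates, so M(x) approaches the local tensor there; in between, M(x) is a convex combination of the learned tensors and therefore varies smoothly.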
As can be seen, the metric tensor now changes smoothly, which also makes the geodesics smooth curves (a property we will use when computing the geodesics). It is worth noting that Ramanan and Baker [3] also consider the idea of smoothly averaging the metric tensor. They, however, only evaluate the metric tensor at the test point of their classifier and then assume straight-line geodesics with a constant metric tensor. Such assumptions violate the premise of a smoothly changing metric tensor and, again, the distance measure becomes non-metric. Lemma 1 shows that metric learning can be viewed as manifold learning. The main difference between our approach and techniques such as Isomap [12] is that, while Isomap learns an embedding of the data points, we learn the actual manifold structure. This gives us the benefit that we can compute geodesics as well as the exponential and logarithmic maps. These provide us with mappings back and forth between the manifold and the Euclidean representation of the data, which preserve distances as well as possible. The availability of such mappings is in stark contrast to e.g. Isomap. In the next section we will derive a system of ordinary differential equations (ODEs) that geodesics in C have to satisfy, which provides us with algorithms for computing geodesics as well as exponential and logarithmic maps. With these we can generalize many Euclidean techniques.

3.1 Computing Geodesics, Maps and Statistics

At minima of (4) we know that the Euler-Lagrange equation must hold [11], i.e.

    ∂L/∂c = d/dλ (∂L/∂c'),  where  L(λ, c, c') = c'(λ)^T M(c(λ)) c'(λ).        (7)

As we have an explicit expression for the metric tensor we can compute (7) in closed form:

Theorem 2. Geodesic curves in C satisfy the following system of 2nd order ODEs

    M(c(λ)) c''(λ) = −(1/2) [ ∂vec[M(c(λ))] / ∂c(λ) ]^T ( c'(λ) ⊗ c'(λ) ),        (8)

where ⊗ denotes the Kronecker product and vec[·] stacks the columns of a matrix into a vector [13].

Proof. See supplementary material.
This result holds for any smooth weight functions w̃_r. We, however, still need to compute ∂vec[M]/∂c, which depends on the specific choice of w̃_r. Any smooth weighting scheme is applicable, but we restrict ourselves to the obvious smooth generalization of (3) and use squared exponentials. From this assumption, we get the following result:

Theorem 3. For w̃_r(x) = exp( −(ρ/2) ||x − x_r||^2_{M_r} ), the derivative of the metric tensor from (6) is

    ∂vec[M(c)]/∂c = ρ / (Σ_{j=1}^R w̃_j)^2 · Σ_{r=1}^R w̃_r vec[M_r] [ Σ_{j=1}^R w̃_j ( (c − x_j)^T M_j − (c − x_r)^T M_r ) ].        (9)

Proof. See supplementary material.

Computing Geodesics. Any geodesic curve must be a solution to (8). Hence, to compute a geodesic between x and y, we can solve (8) subject to the constraints

    c(0) = x and c(1) = y.        (10)

This is a boundary value problem, which has a smooth solution. This allows us to solve the problem numerically using a standard three-stage Lobatto IIIa formula, which provides a fourth-order accurate C^1-continuous solution [14]. Ramanan and Baker [3] discuss the possibility of computing geodesics, but arrive at the conclusion that this is intractable based on the assumption that it requires discretizing the entire feature space. Our solution avoids discretizing the feature space by discretizing the geodesic curve instead. As this is always one-dimensional the approach remains tractable in high-dimensional feature spaces.

Computing Logarithmic Maps. Once a geodesic c is found, it follows from the definition of the logarithmic map, Log_x(y), that it can be computed as

    v = Log_x(y) = ( c'(0) / ||c'(0)|| ) · Length(c).        (11)

In practice, we solve (8) by rewriting it as a system of first order ODEs, such that we compute both c and c' simultaneously (see supplementary material for details).

Computing Exponential Maps. Given a starting point x on the manifold and a vector v in the tangent space, the exponential map, Exp_x(v), finds the unique geodesic starting at x with initial velocity v.
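To make the boundary value problem concrete, here is a hedged Python sketch using SciPy's `solve_bvp` (whose collocation method is a fourth-order three-stage Lobatto IIIa formula, matching [14]). For simplicity it differentiates vec[M] by central finite differences rather than the closed form of Theorem 3, and the two-metric setup mirrors the example of Fig. 2; all names are ours:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Two fixed metric tensors combined with squared-exponential weights (Eq. (6)).
anchors = [np.array([2., 2.]), np.array([4., 4.])]
metrics = [2 * np.eye(2), np.eye(2)]

def M(x, rho=1.0):
    w = np.array([np.exp(-0.5 * rho * (x - a) @ Mr @ (x - a))
                  for a, Mr in zip(anchors, metrics)])
    w /= w.sum()
    return sum(wr * Mr for wr, Mr in zip(w, metrics))

def dvecM_dc(x, eps=1e-6):
    """Finite-difference Jacobian of vec[M(x)] w.r.t. x (a D^2 x D matrix)."""
    D = x.size
    J = np.zeros((D * D, D))
    for i in range(D):
        e = np.zeros(D); e[i] = eps
        J[:, i] = (M(x + e) - M(x - e)).ravel(order='F') / (2 * eps)
    return J

def ode(t, y):
    """First-order form of Eq. (8): y = [c; c']."""
    D = y.shape[0] // 2
    dy = np.zeros_like(y)
    for k in range(y.shape[1]):
        c, cp = y[:D, k], y[D:, k]
        rhs = -0.5 * dvecM_dc(c).T @ np.kron(cp, cp)
        dy[:D, k] = cp
        dy[D:, k] = np.linalg.solve(M(c), rhs)
    return dy

def geodesic(x, y_end, n=20):
    """Solve the BVP (8) with boundary conditions (10), initialized with the
    straight line between the endpoints."""
    ts = np.linspace(0, 1, n)
    y0 = np.vstack([np.outer(x, 1 - ts) + np.outer(y_end, ts),
                    np.tile((y_end - x)[:, None], (1, n))])
    bc = lambda ya, yb: np.hstack([ya[:x.size] - x, yb[:x.size] - y_end])
    return solve_bvp(ode, bc, ts, y0)
```

Because the BVP only discretizes the one-dimensional curve, the cost grows with the number of mesh points rather than with a grid over the feature space, which is the key point made in the text.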
As the geodesic must fulfill (8), we can compute the exponential map by solving this system of ODEs with the initial conditions

    c(0) = x and c'(0) = v.        (12)

This initial value problem has a unique solution, which we find numerically using a standard Runge-Kutta scheme [15].

3.1.1 Generalizing PCA and Regression

At this stage, we know that the feature space is Riemannian and we know how to compute geodesics and exponential and logarithmic maps. We now seek to generalize PCA and linear regression, which becomes straightforward since solutions are available in Riemannian spaces [16, 17]. These generalizations can be summarized as mapping the data to the tangent space at the mean, performing standard Euclidean analysis in the tangent and mapping the results back. The first step is to compute the mean value on the manifold, which is defined as the point that minimizes the sum-of-squares distances to the data points. Pennec [18] provides an efficient gradient descent approach for computing this point, which we also summarize in the supplementary material. The empirical covariance of a set of points is defined as the ordinary Euclidean covariance in the tangent space at the mean value [18]. With this in mind, it is not surprising that the principal components of a dataset have been generalized as the geodesics starting at the mean with initial velocity corresponding to the eigenvectors of the covariance [16],

    γ_{v_d}(t) = Exp_μ(t v_d),        (13)

where v_d denotes the d-th eigenvector of the covariance. This approach is called Principal Geodesic Analysis (PGA), and the geodesic curve γ_{v_d} is called the principal geodesic. An illustration of the approach can be seen in Fig. 1 and more algorithmic details are in the supplementary material. Linear regression has been generalized in a similar way [17] by performing regression in the tangent of the mean and mapping the resulting line back to the manifold using the exponential map.
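The exponential map of (12) is an initial value problem, which can be sketched with SciPy's `solve_ivp` (a Runge-Kutta method by default, matching [15]). The generic `M_fun` interface and the finite-difference Jacobian below are our simplifications, not the paper's implementation. A useful sanity check: with a constant metric the geodesic is a straight line, so Exp_x(v) = x + v:

```python
import numpy as np
from scipy.integrate import solve_ivp

def expmap(M_fun, x, v, eps=1e-6):
    """Exponential map Exp_x(v): integrate the geodesic ODE (8) as an initial
    value problem with c(0) = x, c'(0) = v (Eq. (12)) and return c(1)."""
    D = x.size

    def dvecM(c):
        # Finite-difference Jacobian of vec[M(c)] w.r.t. c.
        J = np.zeros((D * D, D))
        for i in range(D):
            e = np.zeros(D); e[i] = eps
            J[:, i] = (M_fun(c + e) - M_fun(c - e)).ravel(order='F') / (2 * eps)
        return J

    def ode(t, y):
        c, cp = y[:D], y[D:]
        acc = np.linalg.solve(M_fun(c), -0.5 * dvecM(c).T @ np.kron(cp, cp))
        return np.concatenate([cp, acc])

    sol = solve_ivp(ode, (0.0, 1.0), np.concatenate([x, v]), rtol=1e-8)
    return sol.y[:D, -1]
```

With `expmap` and a corresponding logarithmic map, the PGA recipe above amounts to: map all points into the tangent space at the mean, run ordinary PCA there, and push the principal directions back through `expmap` as in (13).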
The idea of working in the tangent space is both efficient and convenient, but comes with an element of approximation, as the logarithmic map is only guaranteed to preserve distances to the origin of the tangent space and not between all pairs of data points. Practical experience, however, indicates that this is a good tradeoff; see [19] for a more in-depth discussion of when the approximation is suitable.

4 Experiments

To illustrate the framework¹ we consider an example in human body analysis, and then we analyze the scalability of the approach. But first, to build intuition, Fig. 3a shows synthetically generated data samples from two classes. We sample random points x_r and learn a local LDA metric [9] by considering all data points within a radius; this locally pushes the two classes apart. We combine the local metrics using (6) and Fig. 3b shows the data in the tangent space of the resulting manifold. As can be seen, the two classes are now globally further apart, which shows the effect of local metrics.

4.1 Human Body Shape

We consider a regression example concerning human body shape analysis. We study 986 female body laser scans from the CAESAR [20] data set; each shape is represented using the leading 35 principal components of the data learned using a SCAPE-like model [21, 22]. Each shape is associated with anthropometric measurements such as body height, shoe size, etc. We show results for shoulder-to-wrist distance and shoulder breadth, but results for more measurements are in the supplementary material. To predict the measurements from shape coefficients, we learn local metrics and perform linear regression according to these. As a further experiment, we use PGA to reduce the dimensionality of the shape coefficients according to the local metrics, and measure the quality of the reduction by performing linear regression to predict the measurements. As a baseline we use the corresponding Euclidean techniques. To learn the local metric we do the following.
First we whiten the data such that the variance captured by PGA will only be due to the change of metric; this allows easy visualization of the impact of the learned metrics. We then cluster the body shapes into equal-sized clusters according to the measurement and learn an LMNN metric for each cluster [5], which we associate with the mean of each cluster. These push the clusters apart, which introduces variance along the directions where the measurement changes. From this we construct a Riemannian manifold according to (6), compute the mean value on the manifold, map the data to the tangent space at the mean and perform linear regression in the tangent space.

(¹ Our software implementation for computing geodesics and performing manifold statistics is available at http://ps.is.tue.mpg.de/project/Smooth Metric Learning)

[Figure 3: Left panels: Synthetic data. (a) Samples from two classes along with illustratively sampled metric tensors from (6). (b) The data represented in the tangent of a manifold constructed from local LDA metrics learned at random positions. Right panels: Real data. (c) Average error of linearly predicted body measurements (mm). (d) Running time (sec) of the geodesic computation as a function of dimensionality.]

[Figure 4: Left: body shape data in the first two principal components according to the Euclidean metric. Point color indicates cluster membership. Center: as on the left, but according to the Riemannian model. Right: regression error as a function of the dimensionality of the shape space; again the Euclidean metric and the Riemannian metric are compared.]

As a first visualization we plot the data expressed in the leading two dimensions of PGA in Fig. 4; as can be seen, the learned metrics provide principal geodesics that are more strongly related to the measurements than the Euclidean model. In order to predict the measurements from the body shape, we perform linear regression, both directly in the shape space according to the Euclidean metric and in the tangent space of the manifold corresponding to the learned metrics (using the logarithmic map from (11)). We measure the prediction error using leave-one-out cross-validation. To further illustrate the power of the PGA model, we repeat this experiment for different dimensionalities of the data. The results are plotted in Fig. 4, showing that regression according to the learned metrics outperforms the Euclidean model. To verify that the learned metrics improve accuracy, we average the prediction errors over all millimeter measurements. The result in Fig. 3c shows that much can be gained in lower dimensions by using the local metrics. To provide visual insights into the behavior of the learned metrics, we uniformly sample body shapes along the first principal geodesic (in the range ±7 times the standard deviation) according to the different metrics. The results are available as a movie in the supplementary material, but are also shown in Fig. 5. As can be seen, the learned metrics pick up intuitive relationships between body shape and the measurements, e.g. shoulder-to-wrist distance is related to overall body size, while shoulder breadth is related to body weight.
[Figure 5: Shapes corresponding to the mean (center) and ±7 times the standard deviations along the principal geodesics (left and right), shown for shoulder-to-wrist distance and shoulder breadth. Movies are available in the supplementary material.]

4.2 Scalability

The human body data set is small enough (986 samples in 35 dimensions) that computing a geodesic only takes a few seconds. To show that the current unoptimized Matlab implementation can handle somewhat larger datasets, we briefly consider a dimensionality reduction task on the classic MNIST handwritten digit data set. We use the preprocessed data available with [3], where the original 28×28 gray-scale images were deskewed and projected onto their leading 164 Euclidean principal components (which capture 95% of the variance in the original data). We learn one diagonal LMNN metric per class, which we associate with the mean of the class. From this we construct a Riemannian manifold from (6), compute the mean value on the manifold and compute geodesics between the mean and each data point; this is the computationally expensive part of performing PGA. Fig. 3d plots the average running time (sec) for the computation of geodesics as a function of the dimensionality of the training data. A geodesic can be computed in 100 dimensions in approximately 5 sec., whereas in 150 dimensions it takes about 30 sec. In this experiment, we train a PGA model on 60,000 data points, and test a nearest neighbor classifier in the tangent space as we decrease the dimensionality of the model. Compared to a Euclidean model, this gives a modest improvement in classification accuracy of 2.3 percent, when averaged across different dimensionalities. Plots of the results can be found in the supplementary material.

5 Discussion

This work shows that multi-metric learning techniques are indeed applicable outside the realm of kNN classifiers.
The idea of defining the metric tensor at any given point as the weighted average of a finite set of learned metrics is quite natural from a modeling point of view, which is also validated by the Riemannian structure of the resulting space. This opens both a theoretical and a practical toolbox for analyzing and developing algorithms that use local metric tensors. Specifically, we show how to use local metric tensors for both regression and dimensionality reduction tasks. Others have attempted to solve non-classification problems using local metrics, but we feel that our approach is the first to have a solid theoretical backing. For example, Hastie and Tibshirani [9] use local LDA metrics for dimensionality reduction by averaging the local metrics and using the resulting metric as part of a Euclidean PCA, which essentially is a linear approach. Another approach was suggested by Hong et al. [23], who simply compute the principal components according to each metric separately, such that one low-dimensional model is learned per metric. The suggested approach is, however, not difficulty-free in its current implementation. Currently, we are using off-the-shelf numerical solvers for computing geodesics, which can be computationally demanding. While we managed to analyze medium-sized datasets, we believe that the run-time can be drastically improved by developing specialized numerical solvers. In the experiments, we learned local metrics using techniques specialized for classification tasks as this is all the current literature provides. We expect improvements by learning the metrics specifically for regression and dimensionality reduction, but doing so is currently an open problem.

Acknowledgments: Søren Hauberg is supported in part by the Villum Foundation, and Oren Freifeld is supported in part by NIH-NINDS EUREKA (R01-NS066311).

References

[1] Andrea Frome, Yoram Singer, and Jitendra Malik. Image retrieval and classification using local distance functions. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19 (NIPS), pages 417–424, Cambridge, MA, 2007. MIT Press.
[2] Andrea Frome, Fei Sha, Yoram Singer, and Jitendra Malik. Learning globally-consistent local distance functions for shape-based image retrieval and classification. In International Conference on Computer Vision (ICCV), pages 1–8, 2007.
[3] Deva Ramanan and Simon Baker. Local distance functions: A taxonomy, new algorithms, and an evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(4):794–806, 2011.
[4] Shai Shalev-Shwartz, Yoram Singer, and Andrew Y. Ng. Online and batch learning of pseudo-metrics. In Proceedings of the Twenty-First International Conference on Machine Learning, ICML '04, pages 94–101. ACM, 2004.
[5] Kilian Q. Weinberger and Lawrence K. Saul. Distance metric learning for large margin nearest neighbor classification. The Journal of Machine Learning Research, 10:207–244, 2009.
[6] Tomasz Malisiewicz and Alexei A. Efros. Recognition by association via learning per-exemplar distances. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–8, 2008.
[7] Yiming Ying and Peng Li. Distance metric learning with eigenvalue optimization. The Journal of Machine Learning Research, 13:1–26, 2012.
[8] Matthew Schultz and Thorsten Joachims. Learning a distance metric from relative comparisons. In Advances in Neural Information Processing Systems 16 (NIPS), 2004.
[9] Trevor Hastie and Robert Tibshirani. Discriminant adaptive nearest neighbor classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(6):607–616, June 1996.
[10] Elzbieta Pekalska, Pavel Paclik, and Robert P. W. Duin. A generalized kernel approach to dissimilarity-based classification. Journal of Machine Learning Research, 2:175–211, 2002.
[11] Manfredo Perdigão do Carmo. Riemannian Geometry. Birkhäuser Boston, January 1992.
[12] Joshua B. Tenenbaum, Vin de Silva, and John C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[13] Jan R. Magnus and Heinz Neudecker. Matrix Differential Calculus with Applications in Statistics and Econometrics. John Wiley & Sons, 2007.
[14] Jacek Kierzenka and Lawrence F. Shampine. A BVP solver based on residual control and the Matlab PSE. ACM Transactions on Mathematical Software, 27(3):299–316, 2001.
[15] John R. Dormand and P. J. Prince. A family of embedded Runge-Kutta formulae. Journal of Computational and Applied Mathematics, 6:19–26, 1980.
[16] P. Thomas Fletcher, Conglin Lu, Stephen M. Pizer, and Sarang Joshi. Principal geodesic analysis for the study of nonlinear statistics of shape. IEEE Transactions on Medical Imaging, 23(8):995–1005, 2004.
[17] Peter E. Jupp and John T. Kent. Fitting smooth paths to spherical data. Applied Statistics, 36(1):34–46, 1987.
[18] Xavier Pennec. Probabilities and statistics on Riemannian manifolds: Basic tools for geometric measurements. In Proceedings of Nonlinear Signal and Image Processing, pages 194–198, 1999.
[19] Stefan Sommer, François Lauze, Søren Hauberg, and Mads Nielsen. Manifold valued statistics, exact principal geodesic analysis and the effect of linear approximations. In European Conference on Computer Vision (ECCV), pages 43–56, 2010.
[20] Kathleen M. Robinette, Hein Daanen, and Eric Paquet. The CAESAR project: a 3-D surface anthropometry survey. In 3-D Digital Imaging and Modeling, pages 380–386, 1999.
[21] Dragomir Anguelov, Praveen Srinivasan, Daphne Koller, Sebastian Thrun, Jim Rodgers, and James Davis. SCAPE: shape completion and animation of people. ACM Transactions on Graphics, 24(3):408–416, 2005.
[22] Oren Freifeld and Michael J. Black. Lie bodies: A manifold representation of 3D human shape. In A. Fitzgibbon et al., editors, European Conference on Computer Vision (ECCV), Part I, LNCS 7572, pages 1–14. Springer-Verlag, October 2012.
[23] Yi Hong, Quannan Li, Jiayan Jiang, and Zhuowen Tu. Learning a mixture of sparse distance metrics for classification and dimensionality reduction. In International Conference on Computer Vision (ICCV), pages 906–913, 2011.

6 0.15525882 68 nips-2012-Clustering Aggregation as Maximum-Weight Independent Set

7 0.14590345 69 nips-2012-Clustering Sparse Graphs

8 0.14268306 148 nips-2012-Hamming Distance Metric Learning

9 0.11995809 316 nips-2012-Small-Variance Asymptotics for Exponential Family Dirichlet Process Mixture Models

10 0.11399014 330 nips-2012-Supervised Learning with Similarity Functions

11 0.10824078 99 nips-2012-Dip-means: an incremental clustering method for estimating the number of clusters

12 0.10596167 135 nips-2012-Forging The Graphs: A Low Rank and Positive Semidefinite Graph Learning Approach

13 0.10241821 126 nips-2012-FastEx: Hash Clustering with Exponential Families

14 0.099839412 185 nips-2012-Learning about Canonical Views from Internet Image Collections

15 0.091481917 291 nips-2012-Reducing statistical time-series problems to binary classification

16 0.091445714 25 nips-2012-A new metric on the manifold of kernel matrices with application to matrix geometric means

17 0.088137455 318 nips-2012-Sparse Approximate Manifolds for Differential Geometric MCMC

18 0.086241998 344 nips-2012-Timely Object Recognition

19 0.085796878 338 nips-2012-The Perturbed Variation

20 0.0851807 157 nips-2012-Identification of Recurrent Patterns in the Activation of Brain Networks


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.202), (1, 0.08), (2, -0.097), (3, -0.115), (4, 0.065), (5, -0.107), (6, -0.021), (7, 0.037), (8, 0.121), (9, 0.085), (10, 0.09), (11, -0.305), (12, -0.005), (13, -0.062), (14, 0.049), (15, 0.071), (16, -0.07), (17, 0.158), (18, 0.096), (19, 0.013), (20, -0.063), (21, -0.182), (22, -0.001), (23, 0.08), (24, 0.11), (25, 0.006), (26, 0.053), (27, 0.037), (28, 0.071), (29, -0.018), (30, 0.006), (31, -0.058), (32, 0.037), (33, -0.009), (34, 0.013), (35, 0.095), (36, 0.049), (37, 0.026), (38, 0.007), (39, 0.051), (40, 0.003), (41, 0.013), (42, 0.048), (43, 0.08), (44, 0.018), (45, 0.091), (46, 0.053), (47, 0.129), (48, 0.022), (49, 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.98355716 307 nips-2012-Semi-Crowdsourced Clustering: Generalizing Crowd Labeling by Robust Distance Metric Learning

Author: Jinfeng Yi, Rong Jin, Shaili Jain, Tianbao Yang, Anil K. Jain

Abstract: One of the main challenges in data clustering is to define an appropriate similarity measure between two objects. Crowdclustering addresses this challenge by defining the pairwise similarity based on the manual annotations obtained through crowdsourcing. Despite its encouraging results, a key limitation of crowdclustering is that it can only cluster objects when their manual annotations are available. To address this limitation, we propose a new approach for clustering, called semi-crowdsourced clustering that effectively combines the low-level features of objects with the manual annotations of a subset of the objects obtained via crowdsourcing. The key idea is to learn an appropriate similarity measure, based on the low-level features of objects and from the manual annotations of only a small portion of the data to be clustered. One difficulty in learning the pairwise similarity measure is that there is a significant amount of noise and inter-worker variations in the manual annotations obtained via crowdsourcing. We address this difficulty by developing a metric learning algorithm based on the matrix completion method. Our empirical study with two real-world image data sets shows that the proposed algorithm outperforms state-of-the-art distance metric learning algorithms in both clustering accuracy and computational efficiency. 1

2 0.73963881 265 nips-2012-Parametric Local Metric Learning for Nearest Neighbor Classification

Author: Jun Wang, Alexandros Kalousis, Adam Woznica

Abstract: We study the problem of learning local metrics for nearest neighbor classification. Most previous work on local metric learning learns a number of unrelated local metrics. While this "independence" approach delivers increased flexibility, its downside is a considerable risk of overfitting. We present a new parametric local metric learning method in which we learn a smooth metric matrix function over the data manifold. Using an approximation error bound of the metric matrix function, we learn local metrics as linear combinations of basis metrics defined on anchor points over different regions of the instance space. We constrain the metric matrix function by imposing manifold regularization on the linear combinations, which makes the learned metric matrix function vary smoothly along the geodesics of the data manifold. Our metric learning method has excellent performance both in terms of predictive power and scalability. We experimented with several large-scale classification problems (tens of thousands of instances) and compared it with several state-of-the-art metric learning methods, both global and local, as well as to SVM with automatic kernel selection, all of which it outperforms significantly. 1
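The core construction in this abstract — a metric at each point formed as a combination of basis metrics attached to anchor points — can be sketched as follows. The Gaussian weighting is an illustrative assumption; the paper learns the combination weights with manifold regularization rather than fixing them by distance to the anchors.

```python
import numpy as np

def local_metric(x, anchors, basis_metrics, tau=1.0):
    """Metric at point x as a convex combination of basis metrics attached to
    anchor points; weights decay smoothly with distance to each anchor.
    (A simplified stand-in for the paper's learned, manifold-regularized weights.)"""
    d2 = np.sum((anchors - x) ** 2, axis=1)   # squared distance to each anchor
    w = np.exp(-d2 / tau)
    w /= w.sum()                              # normalize to a convex combination
    return sum(wk * Mk for wk, Mk in zip(w, basis_metrics))
```

Near an anchor the local metric coincides with that anchor's basis metric, and it interpolates smoothly between anchors, which is the smoothness property the paper enforces.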

3 0.68068588 99 nips-2012-Dip-means: an incremental clustering method for estimating the number of clusters

Author: Argyris Kalogeratos, Aristidis Likas

Abstract: Learning the number of clusters is a key problem in data clustering. We present dip-means, a novel robust incremental method to learn the number of data clusters that can be used as a wrapper around any iterative clustering algorithm of the k-means family. In contrast to many popular methods, which make assumptions about the underlying cluster distributions, dip-means assumes only a fundamental cluster property: each cluster should admit a unimodal distribution. The proposed algorithm considers each cluster member as an individual ‘viewer’ and applies a univariate statistical hypothesis test for unimodality (dip-test) to the distribution of distances between the viewer and the cluster members. Important advantages are: i) the unimodality test is applied to univariate distance vectors, and ii) it can be directly applied with kernel-based methods, since only the pairwise distances are involved in the computations. Experimental results on artificial and real datasets indicate the effectiveness of our method and its superiority over analogous approaches.
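The viewer-based split criterion can be sketched in a few lines. Note the hedge: the paper uses Hartigan's dip test, which is more involved; Sarle's bimodality coefficient below is a cheap stand-in used only to make the sketch self-contained, and the 0.555 cutoff is its conventional rule-of-thumb threshold.

```python
import numpy as np

def bimodality_coefficient(x):
    """Sarle's bimodality coefficient: a cheap stand-in for Hartigan's dip test
    (values above ~0.555 suggest the sample is not unimodal)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    z = (x - x.mean()) / x.std()
    skew = np.mean(z ** 3)
    ex_kurt = np.mean(z ** 4) - 3.0
    return (skew ** 2 + 1) / (ex_kurt + 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))

def split_score(points, threshold=0.555):
    """Dip-means-style check: each member acts as a 'viewer' and the distances
    from it to all members are tested for unimodality. Returns the fraction
    of viewers that reject unimodality; a large fraction suggests splitting."""
    P = np.asarray(points, dtype=float)
    flags = [bimodality_coefficient(np.linalg.norm(P - v, axis=1)) > threshold
             for v in P]
    return float(np.mean(flags))
```

On a single Gaussian cloud almost no viewer rejects, while on two well-separated clouds nearly every viewer sees a bimodal distance profile, triggering a split.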

4 0.67904907 70 nips-2012-Clustering by Nonnegative Matrix Factorization Using Graph Random Walk

Author: Zhirong Yang, Tele Hao, Onur Dikmen, Xi Chen, Erkki Oja

Abstract: Nonnegative Matrix Factorization (NMF) is a promising relaxation technique for clustering analysis. However, conventional NMF methods that directly approximate the pairwise similarities using the least square error often yield mediocre performance for data in curved manifolds because they can capture only the immediate similarities between data samples. Here we propose a new NMF clustering method which replaces the approximated matrix with its smoothed version using random walk. Our method can thus accommodate farther relationships between data samples. Furthermore, we introduce a novel regularization in the proposed objective function in order to improve over spectral clustering. The new learning objective is optimized by a multiplicative Majorization-Minimization algorithm with a scalable implementation for learning the factorizing matrix. Extensive experimental results on real-world datasets show that our method has strong performance in terms of cluster purity. 1
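The two ingredients of this abstract — random-walk smoothing of the similarity matrix, then a symmetric NMF of the smoothed matrix — can be sketched as follows. The geometric-series smoothing kernel and the damped multiplicative update are standard illustrative choices, not the paper's regularized objective or its Majorization-Minimization algorithm.

```python
import numpy as np

def random_walk_smooth(A, alpha=0.8):
    """Replace pairwise similarities with a random-walk smoothed version that
    accumulates multi-step transitions: (1-alpha) * (I - alpha*P)^{-1},
    where P is the row-stochastic transition matrix of A."""
    P = A / A.sum(axis=1, keepdims=True)
    n = A.shape[0]
    return (1 - alpha) * np.linalg.inv(np.eye(n) - alpha * P)

def symnmf(S, k, iters=200, seed=0):
    """Symmetric NMF S ~ H H^T via damped multiplicative updates
    (a standard scheme, used here only for illustration)."""
    rng = np.random.default_rng(seed)
    H = rng.random((S.shape[0], k)) + 0.1
    for _ in range(iters):
        H *= 0.5 + 0.5 * (S @ H) / np.maximum(H @ (H.T @ H), 1e-12)
    return H
```

On a similarity matrix with two dense blocks, the smoothed matrix keeps the block structure and the factor H reconstructs higher within-block than cross-block similarity.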

5 0.65331769 242 nips-2012-Non-linear Metric Learning

Author: Dor Kedem, Stephen Tyree, Fei Sha, Gert R. Lanckriet, Kilian Q. Weinberger

Abstract: In this paper, we introduce two novel metric learning algorithms, χ2 -LMNN and GB-LMNN, which are explicitly designed to be non-linear and easy-to-use. The two approaches achieve this goal in fundamentally different ways: χ2 -LMNN inherits the computational benefits of a linear mapping from linear metric learning, but uses a non-linear χ2 -distance to explicitly capture similarities within histogram data sets; GB-LMNN applies gradient-boosting to learn non-linear mappings directly in function space and takes advantage of this approach’s robustness, speed, parallelizability and insensitivity towards the single additional hyperparameter. On various benchmark data sets, we demonstrate these methods not only match the current state-of-the-art in terms of kNN classification error, but in the case of χ2 -LMNN, obtain best results in 19 out of 20 learning settings. 1
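The χ²-distance that χ²-LMNN substitutes for the usual Mahalanobis distance is a standard histogram distance and is easy to state concretely (the epsilon guard against empty bins is an implementation assumption):

```python
def chi2_distance(p, q, eps=1e-12):
    """Chi-squared distance between two histograms: 0.5 * sum (p_i - q_i)^2 / (p_i + q_i).
    The eps term guards against division by zero for empty bins."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(p, q))
```

It is symmetric, zero only for identical histograms, and downweights differences in heavily populated bins, which is why it suits histogram features better than a plain Euclidean distance.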

6 0.6429143 9 nips-2012-A Geometric take on Metric Learning

7 0.60991067 338 nips-2012-The Perturbed Variation

8 0.60246599 157 nips-2012-Identification of Recurrent Patterns in the Activation of Brain Networks

9 0.58666205 68 nips-2012-Clustering Aggregation as Maximum-Weight Independent Set

10 0.57294697 135 nips-2012-Forging The Graphs: A Low Rank and Positive Semidefinite Graph Learning Approach

11 0.56734705 133 nips-2012-Finding Exemplars from Pairwise Dissimilarities via Simultaneous Sparse Recovery

12 0.51919454 69 nips-2012-Clustering Sparse Graphs

13 0.51319075 148 nips-2012-Hamming Distance Metric Learning

14 0.48256594 171 nips-2012-Latent Coincidence Analysis: A Hidden Variable Model for Distance Metric Learning

15 0.4817957 185 nips-2012-Learning about Canonical Views from Internet Image Collections

16 0.48073611 146 nips-2012-Graphical Gaussian Vector for Image Categorization

17 0.48026365 25 nips-2012-A new metric on the manifold of kernel matrices with application to matrix geometric means

18 0.47488192 196 nips-2012-Learning with Partially Absorbing Random Walks

19 0.47115952 155 nips-2012-Human memory search as a random walk in a semantic network

20 0.45654166 316 nips-2012-Small-Variance Asymptotics for Exponential Family Dirichlet Process Mixture Models


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.052), (21, 0.012), (38, 0.1), (39, 0.024), (42, 0.171), (53, 0.041), (54, 0.012), (55, 0.035), (74, 0.057), (76, 0.322), (80, 0.056), (92, 0.022)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.95209897 219 nips-2012-Modelling Reciprocating Relationships with Hawkes Processes

Author: Charles Blundell, Jeff Beck, Katherine A. Heller

Abstract: We present a Bayesian nonparametric model that discovers implicit social structure from interaction time-series data. Social groups are often formed implicitly, through actions among members of groups. Yet many models of social networks use explicitly declared relationships to infer social structure. We consider a particular class of Hawkes processes, a doubly stochastic point process, that is able to model reciprocity between groups of individuals. We then extend the Infinite Relational Model by using these reciprocating Hawkes processes to parameterise its edges, making events associated with edges co-dependent through time. Our model outperforms general, unstructured Hawkes processes as well as structured Poisson process-based models at predicting verbal and email turn-taking, and military conflicts among nations. 1
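The self-excitation mechanism that lets Hawkes processes model reciprocity — each event temporarily raises the rate of future events — is captured by the conditional intensity with an exponential kernel. The parameter values below are illustrative only, not taken from the paper:

```python
import math

def hawkes_intensity(t, events, mu=0.2, alpha=0.8, beta=1.0):
    """Conditional intensity of a univariate Hawkes process with exponential
    kernel: baseline rate mu plus excitation alpha*exp(-beta*(t - t_i))
    contributed by each past event t_i < t."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events if ti < t)
```

Immediately after an event the intensity jumps by alpha and then decays back toward the baseline mu at rate beta; in the paper's multivariate setting, events on one directed edge excite the reciprocal edge.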

same-paper 2 0.94692892 307 nips-2012-Semi-Crowdsourced Clustering: Generalizing Crowd Labeling by Robust Distance Metric Learning

Author: Jinfeng Yi, Rong Jin, Shaili Jain, Tianbao Yang, Anil K. Jain

Abstract: One of the main challenges in data clustering is to define an appropriate similarity measure between two objects. Crowdclustering addresses this challenge by defining the pairwise similarity based on the manual annotations obtained through crowdsourcing. Despite its encouraging results, a key limitation of crowdclustering is that it can only cluster objects when their manual annotations are available. To address this limitation, we propose a new approach for clustering, called semi-crowdsourced clustering, that effectively combines the low-level features of objects with the manual annotations of a subset of the objects obtained via crowdsourcing. The key idea is to learn an appropriate similarity measure, based on the low-level features of objects and from the manual annotations of only a small portion of the data to be clustered. One difficulty in learning the pairwise similarity measure is that there is a significant amount of noise and inter-worker variation in the manual annotations obtained via crowdsourcing. We address this difficulty by developing a metric learning algorithm based on the matrix completion method. Our empirical study with two real-world image data sets shows that the proposed algorithm outperforms state-of-the-art distance metric learning algorithms in both clustering accuracy and computational efficiency. 1

3 0.93143392 242 nips-2012-Non-linear Metric Learning

Author: Dor Kedem, Stephen Tyree, Fei Sha, Gert R. Lanckriet, Kilian Q. Weinberger

Abstract: In this paper, we introduce two novel metric learning algorithms, χ2 -LMNN and GB-LMNN, which are explicitly designed to be non-linear and easy-to-use. The two approaches achieve this goal in fundamentally different ways: χ2 -LMNN inherits the computational benefits of a linear mapping from linear metric learning, but uses a non-linear χ2 -distance to explicitly capture similarities within histogram data sets; GB-LMNN applies gradient-boosting to learn non-linear mappings directly in function space and takes advantage of this approach’s robustness, speed, parallelizability and insensitivity towards the single additional hyperparameter. On various benchmark data sets, we demonstrate these methods not only match the current state-of-the-art in terms of kNN classification error, but in the case of χ2 -LMNN, obtain best results in 19 out of 20 learning settings. 1

4 0.92012888 289 nips-2012-Recognizing Activities by Attribute Dynamics

Author: Weixin Li, Nuno Vasconcelos

Abstract: In this work, we consider the problem of modeling the dynamic structure of human activities in the attributes space. A video sequence is first represented in a semantic feature space, where each feature encodes the probability of occurrence of an activity attribute at a given time. A generative model, denoted the binary dynamic system (BDS), is proposed to learn both the distribution and dynamics of different activities in this space. The BDS is a non-linear dynamic system, which extends both the binary principal component analysis (PCA) and classical linear dynamic systems (LDS), by combining binary observation variables with a hidden Gauss-Markov state process. In this way, it integrates the representation power of semantic modeling with the ability of dynamic systems to capture the temporal structure of time-varying processes. An algorithm for learning BDS parameters, inspired by a popular LDS learning method from dynamic textures, is proposed. A similarity measure between BDSs, which generalizes the Binet-Cauchy kernel for LDS, is then introduced and used to design activity classifiers. The proposed method is shown to outperform similar classifiers derived from the kernel dynamic system (KDS) and state-of-the-art approaches for dynamics-based or attribute-based action recognition. 1
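The generative side of the BDS — a Gauss-Markov hidden state observed through binary attribute variables — can be sketched as a simulator. All shapes and parameters here are illustrative assumptions; the paper concerns learning these parameters from data, not simulation.

```python
import numpy as np

def simulate_bds(A, W, x0, T, seed=0):
    """Generate binary attribute sequences from a toy binary dynamic system:
    hidden Gauss-Markov state x_t = A @ x_{t-1} + noise, observed through
    Bernoulli attributes with per-attribute probability sigmoid(W @ x_t)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    obs = []
    for _ in range(T):
        x = A @ x + 0.1 * rng.standard_normal(x.shape)   # state transition
        p = 1.0 / (1.0 + np.exp(-(W @ x)))               # attribute activation probabilities
        obs.append((rng.random(p.shape) < p).astype(int))  # binary observations
    return np.array(obs)
```

The output is a T-by-d binary matrix of attribute activations whose temporal correlations come entirely from the hidden state, which is the structure the BDS learning algorithm would recover.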

5 0.90282816 28 nips-2012-A systematic approach to extracting semantic information from functional MRI data

Author: Francisco Pereira, Matthew Botvinick

Abstract: This paper introduces a novel classification method for functional magnetic resonance imaging datasets with tens of classes. The method is designed to make predictions using information from as many brain locations as possible, instead of resorting to feature selection, and does this by decomposing the pattern of brain activation into differently informative sub-regions. We provide results over a complex semantic processing dataset that show that the method is competitive with state-of-the-art feature selection and also suggest how the method may be used to perform group or exploratory analyses of complex class structure. 1

6 0.89937496 311 nips-2012-Shifting Weights: Adapting Object Detectors from Image to Video

7 0.8919788 205 nips-2012-MCMC for continuous-time discrete-state systems

8 0.89072651 175 nips-2012-Learning High-Density Regions for a Generalized Kolmogorov-Smirnov Test in High-Dimensional Data

9 0.89009809 286 nips-2012-Random Utility Theory for Social Choice

10 0.88855904 169 nips-2012-Label Ranking with Partial Abstention based on Thresholded Probabilistic Models

11 0.88637769 33 nips-2012-Active Learning of Model Evidence Using Bayesian Quadrature

12 0.88565785 338 nips-2012-The Perturbed Variation

13 0.88504648 247 nips-2012-Nonparametric Reduced Rank Regression

14 0.88064939 68 nips-2012-Clustering Aggregation as Maximum-Weight Independent Set

15 0.87800258 90 nips-2012-Deep Learning of Invariant Features via Simulated Fixations in Video

16 0.87712693 317 nips-2012-Smooth-projected Neighborhood Pursuit for High-dimensional Nonparanormal Graph Estimation

17 0.87633765 142 nips-2012-Generalization Bounds for Domain Adaptation

18 0.87627918 99 nips-2012-Dip-means: an incremental clustering method for estimating the number of clusters

19 0.87608445 164 nips-2012-Iterative Thresholding Algorithm for Sparse Inverse Covariance Estimation

20 0.87529498 163 nips-2012-Isotropic Hashing