nips nips2011 nips2011-236 knowledge-graph by maker-knowledge-mining

236 nips-2011-Regularized Laplacian Estimation and Fast Eigenvector Approximation


Source: pdf

Author: Patrick O. Perry, Michael W. Mahoney

Abstract: Recently, Mahoney and Orecchia demonstrated that popular diffusion-based procedures to compute a quick approximation to the first nontrivial eigenvector of a data graph Laplacian exactly solve certain regularized Semi-Definite Programs (SDPs). In this paper, we extend that result by providing a statistical interpretation of their approximation procedure. Our interpretation will be analogous to the manner in which ℓ2-regularized or ℓ1-regularized ℓ2-regression (often called Ridge regression and Lasso regression, respectively) can be interpreted in terms of a Gaussian prior or a Laplace prior, respectively, on the coefficient vector of the regression problem. Our framework will imply that the solutions to the Mahoney-Orecchia regularized SDP can be interpreted as regularized estimates of the pseudoinverse of the graph Laplacian. Conversely, it will imply that the solution to this regularized estimation problem can be computed very quickly by running, e.g., the fast diffusion-based PageRank procedure for computing an approximation to the first nontrivial eigenvector of the graph Laplacian. Empirical results are also provided to illustrate the manner in which approximate eigenvector computation implicitly performs statistical regularization, relative to running the corresponding exact algorithm. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Recently, Mahoney and Orecchia demonstrated that popular diffusion-based procedures to compute a quick approximation to the first nontrivial eigenvector of a data graph Laplacian exactly solve certain regularized Semi-Definite Programs (SDPs). [sent-7, score-0.376]

2 Our framework will imply that the solutions to the Mahoney-Orecchia regularized SDP can be interpreted as regularized estimates of the pseudoinverse of the graph Laplacian. [sent-10, score-0.339]

3 Conversely, it will imply that the solution to this regularized estimation problem can be computed very quickly by running, e.g., [sent-11, score-0.115]

4 the fast diffusion-based PageRank procedure for computing an approximation to the first nontrivial eigenvector of the graph Laplacian. [sent-13, score-0.226]

5 Empirical results are also provided to illustrate the manner in which approximate eigenvector computation implicitly performs statistical regularization, relative to running the corresponding exact algorithm. [sent-14, score-0.141]

6 Recently, Mahoney and Orecchia formalized these ideas in the context of computing the first nontrivial eigenvector of a graph Laplacian [1]. [sent-17, score-0.226]

7 Recall that, given a graph G on n nodes or equivalently its n × n Laplacian matrix L, the top nontrivial eigenvector of the Laplacian exactly optimizes the Rayleigh quotient, subject to the usual constraints. [sent-18, score-0.26]
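
For concreteness, the following sketch (ours, not the authors'; the 4-node graph W is an arbitrary example) builds a normalized Laplacian and extracts its first nontrivial eigenvector by a dense eigendecomposition; this is the exact computation that the diffusion-based procedures discussed below approximate.

```python
import numpy as np

# Arbitrary 4-node weighted adjacency matrix (symmetric, nonnegative, no isolated nodes).
W = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(len(d)) - D_inv_sqrt @ W @ D_inv_sqrt      # normalized Laplacian

evals, evecs = np.linalg.eigh(L)                       # eigenvalues in ascending order
# evecs[:, 0] (eigenvalue ~ 0) is, up to sign and scale, the trivial vector D^{1/2} 1;
# evecs[:, 1] is the first nontrivial eigenvector, minimizing the Rayleigh quotient
# over vectors orthogonal to D^{1/2} 1.
print(evals[:2])
print(evecs[:, 1])
```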

8 This first nontrivial vector is, of course, of widespread interest in applications due to its usefulness for graph partitioning, image segmentation, data clustering, semi-supervised learning, etc. [sent-20, score-0.146]

9 That is, these three diffusion-based procedures exactly optimize a regularized SDP with objective function F (X) + η G(X), for some regularization function G(·) to be described below, subject to the usual constraints. [sent-25, score-0.174]

10 We will show that the maximum a posteriori probability (MAP) estimate of the inverse of the population Laplacian leads to a regularized SDP, where the objective function F (X) = Tr(LX) and where the role of the penalty function G(·) is to encode prior assumptions about the population Laplacian. [sent-29, score-0.346]

11 In addition, we will show that when G(·) is the log-determinant function then the MAP estimate leads to the Mahoney-Orecchia regularized SDP corresponding to running the PageRank heuristic. [sent-30, score-0.107]

12 Said another way, the solutions to the Mahoney-Orecchia regularized SDP can be interpreted as regularized estimates of the pseudoinverse of the graph Laplacian. [sent-31, score-0.339]

13 Section 3 then describes a statistical framework for graph estimation; and Section 4 describes prior assumptions that can be made on the population Laplacian. [sent-34, score-0.229]

14 These two sections will shed light on the computational implications associated with these prior assumptions; but more importantly they will shed light on the implicit prior assumptions associated with making certain decisions to speed up computations. [sent-35, score-0.183]

15 2 Background on Laplacians and diffusion-based procedures A weighted symmetric graph G is defined by a vertex set V = {1, . [sent-38, score-0.109]

16 Note that the all-ones vector, often denoted 1, is an eigenvector of L0 with eigenvalue zero, i. [sent-46, score-0.092]

17 For this reason, 1 is often called the trivial eigenvector of L0 . [sent-49, score-0.092]

18 , to perform spectral graph partitioning, one is interested in computing the first nontrivial eigenvector of a Laplacian. [sent-54, score-0.263]

19 These random walk-based or diffusion-based methods assign positive and negative “charge” to the nodes, and then they let the distribution of charge evolve according to dynamics derived from the graph structure. [sent-56, score-0.152]

20 Here, the charge evolves according to the heat equation ∂Ht/∂t = −L Ht. [sent-58, score-0.124]

21 Thus, the vector of charges evolves as Ht = exp(−tL) = Σ_{k=0}^{∞} ((−t)^k / k!) L^k, where t ≥ 0 is a time parameter, times an input seed distribution vector. [sent-60, score-0.113]
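
A minimal sketch (ours, on a toy 3-node path graph with the combinatorial Laplacian) of the Heat Kernel dynamics just described; scipy's matrix exponential plays the role of exp(−tL).

```python
import numpy as np
from scipy.linalg import expm

# Toy 3-node path-graph (combinatorial) Laplacian, used only to make the sketch runnable.
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
seed = np.array([1.0, 0.0, 0.0])          # all charge starts on node 0

def heat_kernel_diffusion(L, seed, t):
    """Heat Kernel dynamics: the charge vector evolves as exp(-t L) @ seed."""
    return expm(-t * L) @ seed

print(heat_kernel_diffusion(L, seed, t=0.5))   # partially diffused
print(heat_kernel_diffusion(L, seed, t=50.0))  # close to the uniform equilibrium
```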

22 Here, the charge at a node evolves by either moving to a neighbor of the current node or teleporting to a random node. [sent-62, score-0.077]

23 More formally, the vector of charges evolves as Rγ = γ (I − (1 − γ) M )−1 , (1) where M is the natural random walk transition matrix associated with the graph and where γ ∈ (0, 1) is the so-called teleportation parameter, times an input seed vector. [sent-63, score-0.261]
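
A minimal sketch (ours) of the PageRank dynamics of Eqn. (1): the charge vector is Rγ applied to a seed, with Rγ = γ(I − (1 − γ)M)⁻¹. The 3-node path graph is a toy example, and whether M acts on distributions from the left or the right is a convention choice; depending on that convention one may want M.T below.

```python
import numpy as np

W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
M = W / W.sum(axis=1, keepdims=True)     # row-stochastic random-walk matrix D^{-1} W
gamma = 0.15                             # teleportation parameter, gamma in (0, 1)
seed = np.array([1., 0., 0.])            # all charge seeded on node 0

n = W.shape[0]
# Solve (I - (1 - gamma) M) x = seed instead of forming the matrix inverse explicitly.
charges = gamma * np.linalg.solve(np.eye(n) - (1 - gamma) * M, seed)
print(charges)
```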

24 Thus, if M is the natural random walk transition matrix associated with the graph, then the vector of charges evolves as some power of Wα = αI + (1 − α)M , where α ∈ (0, 1) represents the “holding probability,” times an input seed vector. [sent-66, score-0.145]

25 In each of these cases, there is a parameter (t, γ, and the number of steps of the Lazy Random Walk) that controls the “aggressiveness” of the dynamics and thus how quickly the diffusive process equilibrates; and there is an input “seed” distribution vector. [sent-67, score-0.07]

26 See Appendix A of [8] for a brief discussion of local and global spectral partitioning in this context. [sent-71, score-0.076]

27 Conversely, solutions to the regularized SDP of (2) for appropriate values of η can be computed exactly by running one of the above three diffusion-based procedures. [sent-73, score-0.121]

28 Notably, when G = 0, the solution to the SDP of (2) is uu^T, where u is the smallest nontrivial eigenvector of L. [sent-74, score-0.143]

29 More generally and in this precise sense, the Heat Kernel, PageRank, and Lazy Random Walk dynamics can be seen as “regularized” versions of spectral clustering and Laplacian eigenvector computation. [sent-75, score-0.154]

30 Intuitively, the function G(·) is acting as a penalty function, in a manner analogous to the ℓ2 or ℓ1 penalty in Ridge regression or Lasso regression, and by running one of these three dynamics one is implicitly making assumptions about the form of G(·). [sent-76, score-0.198]

31 3 A statistical framework for regularized graph estimation Here, we will lay out a simple Bayesian framework for estimating a graph Laplacian. [sent-78, score-0.268]

32 Importantly, this framework will allow for regularization by incorporating prior information. [sent-79, score-0.1]

33 1 Analogy with regularized linear regression It will be helpful to keep in mind the Bayesian interpretation of regularized linear regression. [sent-81, score-0.228]

34 , λ‖β‖₂² and λ‖β‖₁) are called penalty functions; and adding a penalty function to the optimization criterion can often be interpreted as incorporating prior information about β. [sent-92, score-0.126]

35 This induces a conditional density for the vector y = (y1 , . [sent-98, score-0.07]

36 Without loss of generality, the prior density can be assumed to take the form p(β) ∝ exp{−U (β)}. [sent-104, score-0.101]

37 Thus, Ridge regression can be interpreted as imposing a Gaussian prior on β, and Lasso regression can be interpreted as imposing a double-exponential prior on β. [sent-109, score-0.226]
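
As a concrete reminder of this analogy, the following sketch (synthetic data, our own example) computes the Ridge solution in closed form; under a Gaussian likelihood it coincides with the MAP estimate under a Gaussian prior on β.

```python
import numpy as np

# Synthetic data, purely for illustration; lam plays the role of the Ridge penalty weight.
rng = np.random.default_rng(0)
n, p, lam = 50, 5, 2.0
X = rng.normal(size=(n, p))
beta_true = rng.normal(size=p)
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Ridge solution beta = argmin ||y - X b||^2 + lam ||b||^2 = (X^T X + lam I)^{-1} X^T y,
# which is also the posterior mode under y | b ~ N(X b, sigma^2 I) and b ~ N(0, (sigma^2/lam) I).
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
print(beta_ridge)
```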

38 2 Bayesian inference for the population Laplacian For our problem, suppose that we have a connected graph with n nodes; or, equivalently, that we have L, the normalized Laplacian of that graph. [sent-111, score-0.197]

39 We will view this observed graph Laplacian, L, as a “sample” Laplacian, i. [sent-112, score-0.083]

40 As with the linear regression example, this induces a conditional density for L, to be denoted p(L | L). [sent-115, score-0.091]

41 Next, we can assume prior information about the population Laplacian in the form of a prior density, p(L); and, given the observed Laplacian, we can estimate the population Laplacian by maximizing its posterior density, p(L | L). [sent-116, score-0.276]

42 A graph Laplacian is not just a single observation—it is a positive semidefinite matrix with a very specific structure. [sent-119, score-0.083]

43 Thus, we will take the observed Laplacian L to be a random object whose expectation is the population Laplacian, itself another normalized graph Laplacian. [sent-120, score-0.11]

44 Although, in general, the population Laplacian can be distinct from the observed Laplacian L, we will require that the nodes in the population and sample graphs have the same degrees. [sent-121, score-0.112]

45 , d(n) , then we can define X = {X : X ⪰ 0, XD1/2 1 = 0, rank(X) = n − 1}, (6) in which case the population Laplacian and the sample Laplacian will both be members of X . [sent-128, score-0.088]

46 The constant of proportionality depends only on L, d, m, and n; and we emphasize that the density is supported on X . [sent-136, score-0.072]

47 (3) in the linear regression context, with 1/m, the inverse of the sample size parameter, playing the role of the variance parameter σ 2 . [sent-139, score-0.073]

48 Next, suppose we know that L is a random object drawn from a prior density p(L). [sent-140, score-0.113]

49 The resulting optimization problem (10) includes a − log |X| penalty; note the similarity with the Mahoney-Orecchia regularized SDP of (2). [sent-150, score-0.103]

50 1 Prior density The prior we will present will be based on neutrality and invariance conditions; and it will be supported on X , i. [sent-155, score-0.101]

51 Note that because the prior depends on the data (via the orthogonality constraint induced by D), this is not a prior in the fully Bayesian sense; instead, the prior can be considered as part of an empirical or pseudo-Bayes estimation procedure. [sent-160, score-0.178]

52 The prior we will specify depends only on the eigenvalues of the normalized Laplacian, or equivalently on the eigenvalues of the pseudoinverse of the Laplacian. [sent-161, score-0.227]

53 Let L+ = τ O Λ O^T be the spectral decomposition of the pseudoinverse of the normalized Laplacian L, where τ ≥ 0 is a scale factor, O ∈ R^{n×(n−1)} is an orthogonal matrix, and Λ = diag(λ(1), . [sent-162, score-0.117]
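
A sketch (ours, on a toy 3-node complete graph) of the quantities entering the prior: the pseudoinverse of the normalized Laplacian, its trace τ = Tr(L+), and the normalized nonzero eigenvalues, which sum to one and are the natural argument of the Dirichlet-type prior on the spectrum discussed below.

```python
import numpy as np

W = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(3) - D_inv_sqrt @ W @ D_inv_sqrt

L_pinv = np.linalg.pinv(L)
tau = np.trace(L_pinv)                       # scale factor tau = Tr(L^+)
evals = np.linalg.eigvalsh(L_pinv)
nonzero = evals[evals > 1e-10]               # drop the trivial zero eigenvalue
Lambda = np.sort(nonzero / tau)[::-1]        # normalized spectrum; sums to 1
print(tau, Lambda, Lambda.sum())
```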

54 The parameter α, to which we refer as the “shape” parameter, must satisfy α > 0 for the density to be defined. [sent-177, score-0.06]

55 2 Posterior estimation and connection to PageRank To analyze the MAP estimate associated with the prior of Eqn. [sent-182, score-0.068]

56 Suppose the conditional likelihood for L given L is as defined in (7) and the prior density for L is as defined in (11). [sent-186, score-0.113]

57 Then, [Tr(L̂+)]−1 L̂+ solves the Mahoney-Orecchia regularized SDP (2), with G(X) = − log |X| and η as given in Eqn. [sent-188, score-0.107]

58 Using (9) and the relation |L+| = τ^{n−1} |Θ|, the posterior density for L given L is p(L | L) ∝ exp{ −(mτ/2) Tr(LΘ) + ((m + 2(α − 1))/2) log |Θ| + g(τ) }, where g(τ) = m(n−1) log τ + log p(τ). [sent-195, score-0.116]

59 η = mτ̂ /(m + 2(α − 1)). (12) Thus Θ̂ solves the regularized SDP (2) with G(X) = − log |X|. [sent-199, score-0.103]
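
The mapping from the prior and sample-size parameters to the SDP regularization level, as reconstructed in Eqn. (12) above, can be sketched as a one-line helper; eta_from_prior is our name, and the exact constants should be checked against the original paper.

```python
def eta_from_prior(m, tau_hat, alpha):
    """eta = m * tau_hat / (m + 2 * (alpha - 1)).
    With alpha = 1 this reduces to eta = tau_hat; for alpha > 1 the prior is more
    concentrated and eta shrinks, i.e. the SDP penalty 1/eta grows (stronger regularization)."""
    return m * tau_hat / (m + 2.0 * (alpha - 1.0))

print(eta_from_prior(m=200, tau_hat=1.0, alpha=1.0))
print(eta_from_prior(m=200, tau_hat=1.0, alpha=50.0))
```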

60 (1) can be viewed as a degree-scaled regularized estimate of the pseudoinverse of the Laplacian. [sent-205, score-0.142]

61 Moreover, prior assumptions about the spectrum of the graph Laplacian have direct implications on the optimal teleportation parameter. [sent-206, score-0.208]

62 (12) shows how the optimal η is related to prior assumptions about the Laplacian. [sent-208, score-0.071]

63 5 Empirical evaluation In this section, we provide an empirical evaluation of the performance of the regularized Laplacian estimator, compared with the unregularized estimator. [sent-209, score-0.133]

64 To do this, we need a ground truth population Laplacian L and a noisily-observed sample Laplacian L. [sent-210, score-0.115]

65 (11) captures some of the qualitative features of both of these types of graphs (as the shape parameter is varied). [sent-214, score-0.062]

66 2, we describe a sampling procedure for L which, superficially, has no relation to the scaled Wishart conditional density of Eqn. [sent-216, score-0.071]

67 Despite this model misspecification, the regularized estimator L̂η outperforms L for many choices of the regularization parameter η. [sent-218, score-0.148]

68 1 Ground truth generation and prior evaluation The ground truth graphs we generate are motivated by the Watts-Strogatz “small-world” model [14]. [sent-220, score-0.133]

69 To generate a ground truth population Laplacian, L—equivalently, a population graph—we start with a two-dimensional lattice of width w and height h, and thus n = wh nodes. [sent-221, score-0.189]

70 For example, if we sample edges i1 ∼ j1 and i2 ∼ j2 , then we replace these edges with i1 ∼ j2 and i2 ∼ j1 . [sent-224, score-0.077]

71 Thus, when s = 0, the graph is the original discretization of a low-dimensional space; and as s increases to infinity, the graph becomes more and more like a uniformly chosen 4-regular graph (which is an expander [15] and which bears similarities with an Erdős–Rényi random graph [16]). [sent-225, score-0.349]

72 Indeed, each edge swap is a step of the Metropolis algorithm toward a uniformly chosen random graph with a fixed degree sequence. [sent-226, score-0.104]
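
A hedged sketch of the ground-truth generation just described: the lattice is taken with wrap-around (torus) boundaries so that every node has degree 4, which is our assumption; the paper's exact boundary handling may differ. torus_lattice_edges and edge_swaps are hypothetical helper names.

```python
import numpy as np

rng = np.random.default_rng(0)

def torus_lattice_edges(w, h):
    """Edges of a w x h lattice with wrap-around boundaries; every node has degree 4."""
    edges = set()
    for x in range(w):
        for y in range(h):
            u = x * h + y
            edges.add(tuple(sorted((u, ((x + 1) % w) * h + y))))  # horizontal neighbor
            edges.add(tuple(sorted((u, x * h + (y + 1) % h))))    # vertical neighbor
    return [list(e) for e in edges]

def edge_swaps(edges, s):
    """Apply s random degree-preserving swaps: (i1~j1, i2~j2) -> (i1~j2, i2~j1)."""
    existing = {tuple(e) for e in edges}
    done = 0
    while done < s:
        a, b = rng.choice(len(edges), size=2, replace=False)
        i1, j1 = edges[a]
        i2, j2 = edges[b]
        new1, new2 = tuple(sorted((i1, j2))), tuple(sorted((i2, j1)))
        # skip swaps that would create self-loops or duplicate edges
        if i1 == j2 or i2 == j1 or new1 == new2 or new1 in existing or new2 in existing:
            continue
        existing.discard(tuple(edges[a])); existing.discard(tuple(edges[b]))
        existing.update([new1, new2])
        edges[a], edges[b] = list(new1), list(new2)
        done += 1
    return edges

edges = edge_swaps(torus_lattice_edges(10, 10), s=50)
print(len(edges))  # the edge count (2 * w * h for the torus) and all degrees are preserved
```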

73 Interestingly, the behavior of the spectrum of the small-world model as the edge-swaps increase is qualitatively quite similar to the behavior of the Dirichlet prior order statistics as the shape parameter α increases. [sent-237, score-0.114]

76 2 Sampling procedure, estimation performance, and optimal regularization behavior Finally, we evaluate the estimation performance of a regularized estimator of the graph Laplacian and compare it with an unregularized estimate. [sent-263, score-0.261]

77 To do so, we construct the population graph G and its Laplacian L, for a given value of s, as described in Section 5. [sent-264, score-0.158]

78 The sampling procedure used to generate the observed graph G and its Laplacian L is parameterized by the sample size m. [sent-267, score-0.096]

79 ) We randomly choose m edges with replacement from G; and we define sample graph G and corresponding Laplacian L by setting the weight of i ∼ j equal to the number of times we sampled that edge. [sent-270, score-0.128]

80 Note that the sample graph G over-counts some edges in G and misses others. [sent-271, score-0.128]
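
A sketch of this sampling procedure under the stated description; sample_laplacian is our helper name, the toy 10-cycle population graph is an example, and the handling of nodes that receive no sampled edges is our own convention.

```python
import numpy as np

def sample_laplacian(edges, n, m, rng):
    """Draw m edges with replacement (uniformly) from the population edge list, use the
    counts as sample edge weights, and return the normalized sample Laplacian."""
    picks = rng.integers(0, len(edges), size=m)          # m draws with replacement
    W = np.zeros((n, n))
    for k in picks:
        i, j = edges[k]
        W[i, j] += 1
        W[j, i] += 1
    d = W.sum(axis=1)
    d_safe = np.where(d > 0, d, 1.0)                     # avoid dividing by zero degrees
    D_inv_sqrt = np.diag(np.where(d > 0, 1.0 / np.sqrt(d_safe), 0.0))
    return np.eye(n) - D_inv_sqrt @ W @ D_inv_sqrt

rng = np.random.default_rng(1)
cycle_edges = [(i, (i + 1) % 10) for i in range(10)]     # toy population graph: a 10-cycle
L_sample = sample_laplacian(cycle_edges, n=10, m=40, rng=rng)
print(np.round(L_sample, 2))
```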

81 We then compute the regularized estimate L̂η , up to a constant of proportionality, by solving (implicitly! [sent-272, score-0.089]

82 ) the Mahoney-Orecchia regularized SDP (2) with G(X) = − log |X|. [sent-273, score-0.103]

83 Given a population Laplacian L, we define τ = τ (L) = Tr(L+ ) and Θ = Θ(L) = τ −1 L+ . [sent-275, score-0.075]

84 The implicit regularization clearly improves the performance of the estimator for a large range of η values. [sent-281, score-0.064]
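
A sketch of one natural way to score an estimate against the ground truth, consistent with the definitions of τ and Θ above and with the Frobenius-error curves reported in the experiments; the exact normalization used in the paper may differ, and the function and variable names below are ours.

```python
import numpy as np

def theta(L):
    """Scale-normalized pseudoinverse Theta(L) = L^+ / Tr(L^+)."""
    L_pinv = np.linalg.pinv(L)
    return L_pinv / np.trace(L_pinv)

def frobenius_error(L_population, L_estimate):
    """Frobenius-norm distance between the normalized pseudoinverses of the two Laplacians."""
    return np.linalg.norm(theta(L_population) - theta(L_estimate), ord='fro')

# Usage with matrices from the earlier sketches (names are illustrative):
# err = frobenius_error(L_population, L_sample)
```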

85 (Note that the regularization parameter in the regularized SDP (2) is 1/η, and thus smaller values along the X-axis correspond to stronger regularization.) [sent-282, score-0.148]

91 Recall that the regularization parameter in the regularized SDP (2) is 1/η, and thus smaller values along the X-axis correspond to stronger regularization. [sent-384, score-0.148]

92 2(f) plots the optimal regularization parameter η∗/τ̄ as a function of sampling proportion for different fractions of edge swaps. [sent-385, score-0.085]

93 As when regularization is implemented explicitly, in all these cases, we observe a “sweet spot” where there is an optimal value for the implicit regularization parameter. [sent-386, score-0.109]

94 Figure 2(f) illustrates how the optimal choice of η depends on parameters defining the population Laplacians and sample Laplacians. [sent-387, score-0.088]

95 In particular, it illustrates how η∗, the optimal value of η (normalized by τ̄), depends on the sampling proportion m/µ and the swaps per edge s/µ. [sent-388, score-0.062]

96 6 Conclusion We have provided a statistical interpretation for the observation that popular diffusion-based procedures to compute a quick approximation to the first nontrivial eigenvector of a data graph Laplacian exactly solve a certain regularized version of the problem. [sent-392, score-0.393]

97 Instead, our results should be viewed as making explicit the implicit prior assumptions associated with making certain decisions (that are already made in practice) to speed up computations. [sent-394, score-0.09]

98 Spectral partitioning works: Planar graphs and finite element meshes. [sent-410, score-0.063]

99 Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. [sent-457, score-0.166]

100 A spectral algorithm for improving graph partitions with applications to exploring data graphs locally. [sent-479, score-0.144]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('qq', 0.886), ('laplacian', 0.215), ('pagerank', 0.125), ('sdp', 0.122), ('orecchia', 0.098), ('qqq', 0.098), ('eigenvector', 0.092), ('regularized', 0.089), ('mahoney', 0.086), ('graph', 0.083), ('population', 0.075), ('tr', 0.06), ('prior', 0.055), ('pseudoinverse', 0.053), ('nontrivial', 0.051), ('heat', 0.047), ('density', 0.046), ('regularization', 0.045), ('charge', 0.044), ('qqqqqq', 0.044), ('wishart', 0.042), ('seed', 0.042), ('lx', 0.041), ('partitioning', 0.039), ('spectral', 0.037), ('eigenvalues', 0.036), ('lazy', 0.036), ('ridge', 0.035), ('frobenius', 0.034), ('regression', 0.033), ('qqqqqqq', 0.033), ('qqqqqqqq', 0.033), ('teleportation', 0.033), ('evolves', 0.033), ('edges', 0.032), ('walk', 0.032), ('dirichlet', 0.031), ('analogous', 0.029), ('normalized', 0.027), ('charges', 0.026), ('proportionality', 0.026), ('procedures', 0.026), ('dynamics', 0.025), ('interpreted', 0.025), ('replicates', 0.025), ('shape', 0.024), ('graphs', 0.024), ('penalty', 0.023), ('lasso', 0.023), ('aggressiveness', 0.022), ('bulk', 0.022), ('spectrum', 0.021), ('quick', 0.021), ('swap', 0.021), ('appendix', 0.02), ('equivalently', 0.02), ('map', 0.02), ('perry', 0.019), ('qqqq', 0.019), ('quotient', 0.019), ('implicit', 0.019), ('running', 0.018), ('implicitly', 0.018), ('unregularized', 0.018), ('rayleigh', 0.018), ('diffusive', 0.018), ('downstream', 0.018), ('interpretation', 0.017), ('expander', 0.017), ('swaps', 0.017), ('posterior', 0.016), ('assumptions', 0.016), ('spielman', 0.016), ('sdps', 0.016), ('calling', 0.016), ('laplacians', 0.016), ('conversely', 0.015), ('exactly', 0.014), ('log', 0.014), ('parameter', 0.014), ('semide', 0.014), ('truth', 0.014), ('proposition', 0.014), ('scaled', 0.013), ('inverse', 0.013), ('ground', 0.013), ('proportion', 0.013), ('shed', 0.013), ('sample', 0.013), ('estimation', 0.013), ('manner', 0.013), ('quickly', 0.013), ('evaluation', 0.013), ('solver', 0.012), ('conditional', 0.012), ('exp', 0.012), ('lattice', 0.012), ('suppose', 0.012), ('importantly', 0.012), ('vector', 0.012)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 236 nips-2011-Regularized Laplacian Estimation and Fast Eigenvector Approximation

Author: Patrick O. Perry, Michael W. Mahoney

Abstract: Recently, Mahoney and Orecchia demonstrated that popular diffusion-based procedures to compute a quick approximation to the first nontrivial eigenvector of a data graph Laplacian exactly solve certain regularized Semi-Definite Programs (SDPs). In this paper, we extend that result by providing a statistical interpretation of their approximation procedure. Our interpretation will be analogous to the manner in which ℓ2-regularized or ℓ1-regularized ℓ2-regression (often called Ridge regression and Lasso regression, respectively) can be interpreted in terms of a Gaussian prior or a Laplace prior, respectively, on the coefficient vector of the regression problem. Our framework will imply that the solutions to the Mahoney-Orecchia regularized SDP can be interpreted as regularized estimates of the pseudoinverse of the graph Laplacian. Conversely, it will imply that the solution to this regularized estimation problem can be computed very quickly by running, e.g., the fast diffusion-based PageRank procedure for computing an approximation to the first nontrivial eigenvector of the graph Laplacian. Empirical results are also provided to illustrate the manner in which approximate eigenvector computation implicitly performs statistical regularization, relative to running the corresponding exact algorithm. 1

2 0.08323624 186 nips-2011-Noise Thresholds for Spectral Clustering

Author: Sivaraman Balakrishnan, Min Xu, Akshay Krishnamurthy, Aarti Singh

Abstract: Although spectral clustering has enjoyed considerable empirical success in machine learning, its theoretical properties are not yet fully developed. We analyze the performance of a spectral algorithm for hierarchical clustering and show that on a class of hierarchically structured similarity matrices, this algorithm can tolerate noise that grows with the number of data points while still perfectly recovering the hierarchical clusters with high probability. We additionally improve upon previous results for k-way spectral clustering to derive conditions under which spectral clustering makes no mistakes. Further, using minimax analysis, we derive tight upper and lower bounds for the clustering problem and compare the performance of spectral clustering to these information theoretic limits. We also present experiments on simulated and real world data illustrating our results. 1

3 0.072316848 248 nips-2011-Semi-supervised Regression via Parallel Field Regularization

Author: Binbin Lin, Chiyuan Zhang, Xiaofei He

Abstract: This paper studies the problem of semi-supervised learning from the vector field perspective. Many of the existing work use the graph Laplacian to ensure the smoothness of the prediction function on the data manifold. However, beyond smoothness, it is suggested by recent theoretical work that we should ensure second order smoothness for achieving faster rates of convergence for semisupervised regression problems. To achieve this goal, we show that the second order smoothness measures the linearity of the function, and the gradient field of a linear function has to be a parallel vector field. Consequently, we propose to find a function which minimizes the empirical error, and simultaneously requires its gradient field to be as parallel as possible. We give a continuous objective function on the manifold and discuss how to discretize it by using random points. The discretized optimization problem turns out to be a sparse linear system which can be solved very efficiently. The experimental results have demonstrated the effectiveness of our proposed approach. 1

4 0.072100602 213 nips-2011-Phase transition in the family of p-resistances

Author: Morteza Alamgir, Ulrike V. Luxburg

Abstract: We study the family of p-resistances on graphs for p ≥ 1. This family generalizes the standard resistance distance. We prove that for any fixed graph, for p = 1 the p-resistance coincides with the shortest path distance, for p = 2 it coincides with the standard resistance distance, and for p → ∞ it converges to the inverse of the minimal s-t-cut in the graph. Secondly, we consider the special case of random geometric graphs (such as k-nearest neighbor graphs) when the number n of vertices in the graph tends to infinity. We prove that an interesting phase transition takes place. There exist two critical thresholds p∗ and p∗∗ such that if p < p∗, then the p-resistance depends on meaningful global properties of the graph, whereas if p > p∗∗, it only depends on trivial local quantities and does not convey any useful information. We can explicitly compute the critical values: p∗ = 1 + 1/(d − 1) and p∗∗ = 1 + 1/(d − 2) where d is the dimension of the underlying space (we believe that the fact that there is a small gap between p∗ and p∗∗ is an artifact of our proofs). We also relate our findings to Laplacian regularization and suggest to use q-Laplacians as regularizers, where q satisfies 1/p∗ + 1/q = 1. 1

5 0.070347257 147 nips-2011-Learning Patient-Specific Cancer Survival Distributions as a Sequence of Dependent Regressors

Author: Hsiu-chin Lin, Vickie Baracos, Russell Greiner, Chun-nam J. Yu

Abstract: An accurate model of patient survival time can help in the treatment and care of cancer patients. The common practice of providing survival time estimates based only on population averages for the site and stage of cancer ignores many important individual differences among patients. In this paper, we propose a local regression method for learning patient-specific survival time distribution based on patient attributes such as blood tests and clinical assessments. When tested on a cohort of more than 2000 cancer patients, our method gives survival time predictions that are much more accurate than popular survival analysis models such as the Cox and Aalen regression models. Our results also show that using patient-specific attributes can reduce the prediction error on survival time by as much as 20% when compared to using cancer site and stage only. 1

6 0.065930195 54 nips-2011-Co-regularized Multi-view Spectral Clustering

7 0.05384184 43 nips-2011-Bayesian Partitioning of Large-Scale Distance Data

8 0.053071741 71 nips-2011-Directed Graph Embedding: an Algorithm based on Continuous Limits of Laplacian-type Operators

9 0.05255013 89 nips-2011-Estimating time-varying input signals and ion channel states from a single voltage trace of a neuron

10 0.043790035 39 nips-2011-Approximating Semidefinite Programs in Sublinear Time

11 0.0403791 104 nips-2011-Generalized Beta Mixtures of Gaussians

12 0.038190596 117 nips-2011-High-Dimensional Graphical Model Selection: Tractable Graph Families and Necessary Conditions

13 0.038187057 118 nips-2011-High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity

14 0.037217915 289 nips-2011-Trace Lasso: a trace norm regularization for correlated designs

15 0.03608264 144 nips-2011-Learning Auto-regressive Models from Sequence and Non-sequence Data

16 0.035926148 146 nips-2011-Learning Higher-Order Graph Structure with Features by Structure Penalty

17 0.035879005 258 nips-2011-Sparse Bayesian Multi-Task Learning

18 0.035595939 195 nips-2011-On Learning Discrete Graphical Models using Greedy Methods

19 0.034747221 167 nips-2011-Maximum Covariance Unfolding : Manifold Learning for Bimodal Data

20 0.033792924 262 nips-2011-Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.1), (1, 0.015), (2, -0.009), (3, -0.051), (4, -0.025), (5, -0.02), (6, -0.033), (7, 0.015), (8, 0.065), (9, -0.012), (10, 0.008), (11, 0.063), (12, 0.014), (13, -0.043), (14, -0.006), (15, 0.047), (16, 0.011), (17, 0.006), (18, -0.002), (19, -0.031), (20, -0.059), (21, 0.041), (22, 0.0), (23, 0.099), (24, 0.067), (25, -0.016), (26, -0.008), (27, 0.003), (28, -0.011), (29, -0.055), (30, 0.003), (31, 0.068), (32, -0.018), (33, 0.074), (34, -0.054), (35, -0.063), (36, 0.027), (37, 0.054), (38, -0.063), (39, -0.014), (40, 0.012), (41, -0.03), (42, -0.032), (43, -0.016), (44, 0.02), (45, -0.008), (46, 0.055), (47, -0.006), (48, 0.111), (49, -0.022)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.91197622 236 nips-2011-Regularized Laplacian Estimation and Fast Eigenvector Approximation

Author: Patrick O. Perry, Michael W. Mahoney

Abstract: Recently, Mahoney and Orecchia demonstrated that popular diffusion-based procedures to compute a quick approximation to the first nontrivial eigenvector of a data graph Laplacian exactly solve certain regularized Semi-Definite Programs (SDPs). In this paper, we extend that result by providing a statistical interpretation of their approximation procedure. Our interpretation will be analogous to the manner in which ℓ2-regularized or ℓ1-regularized ℓ2-regression (often called Ridge regression and Lasso regression, respectively) can be interpreted in terms of a Gaussian prior or a Laplace prior, respectively, on the coefficient vector of the regression problem. Our framework will imply that the solutions to the Mahoney-Orecchia regularized SDP can be interpreted as regularized estimates of the pseudoinverse of the graph Laplacian. Conversely, it will imply that the solution to this regularized estimation problem can be computed very quickly by running, e.g., the fast diffusion-based PageRank procedure for computing an approximation to the first nontrivial eigenvector of the graph Laplacian. Empirical results are also provided to illustrate the manner in which approximate eigenvector computation implicitly performs statistical regularization, relative to running the corresponding exact algorithm. 1

2 0.62968606 213 nips-2011-Phase transition in the family of p-resistances

Author: Morteza Alamgir, Ulrike V. Luxburg

Abstract: We study the family of p-resistances on graphs for p ≥ 1. This family generalizes the standard resistance distance. We prove that for any fixed graph, for p = 1 the p-resistance coincides with the shortest path distance, for p = 2 it coincides with the standard resistance distance, and for p → ∞ it converges to the inverse of the minimal s-t-cut in the graph. Secondly, we consider the special case of random geometric graphs (such as k-nearest neighbor graphs) when the number n of vertices in the graph tends to infinity. We prove that an interesting phase transition takes place. There exist two critical thresholds p∗ and p∗∗ such that if p < p∗, then the p-resistance depends on meaningful global properties of the graph, whereas if p > p∗∗, it only depends on trivial local quantities and does not convey any useful information. We can explicitly compute the critical values: p∗ = 1 + 1/(d − 1) and p∗∗ = 1 + 1/(d − 2) where d is the dimension of the underlying space (we believe that the fact that there is a small gap between p∗ and p∗∗ is an artifact of our proofs). We also relate our findings to Laplacian regularization and suggest to use q-Laplacians as regularizers, where q satisfies 1/p∗ + 1/q = 1. 1

3 0.5626896 9 nips-2011-A More Powerful Two-Sample Test in High Dimensions using Random Projection

Author: Miles Lopes, Laurent Jacob, Martin J. Wainwright

Abstract: We consider the hypothesis testing problem of detecting a shift between the means of two multivariate normal distributions in the high-dimensional setting, allowing for the data dimension p to exceed the sample size n. Our contribution is a new test statistic for the two-sample test of means that integrates a random projection with the classical Hotelling T 2 statistic. Working within a high-dimensional framework that allows (p, n) → ∞, we first derive an asymptotic power function for our test, and then provide sufficient conditions for it to achieve greater power than other state-of-the-art tests. Using ROC curves generated from simulated data, we demonstrate superior performance against competing tests in the parameter regimes anticipated by our theoretical results. Lastly, we illustrate an advantage of our procedure with comparisons on a high-dimensional gene expression dataset involving the discrimination of different types of cancer. 1

4 0.5427075 67 nips-2011-Data Skeletonization via Reeb Graphs

Author: Xiaoyin Ge, Issam I. Safa, Mikhail Belkin, Yusu Wang

Abstract: Recovering hidden structure from complex and noisy non-linear data is one of the most fundamental problems in machine learning and statistical inference. While such data is often high-dimensional, it is of interest to approximate it with a lowdimensional or even one-dimensional space, since many important aspects of data are often intrinsically low-dimensional. Furthermore, there are many scenarios where the underlying structure is graph-like, e.g, river/road networks or various trajectories. In this paper, we develop a framework to extract, as well as to simplify, a one-dimensional ”skeleton” from unorganized data using the Reeb graph. Our algorithm is very simple, does not require complex optimizations and can be easily applied to unorganized high-dimensional data such as point clouds or proximity graphs. It can also represent arbitrary graph structures in the data. We also give theoretical results to justify our method. We provide a number of experiments to demonstrate the effectiveness and generality of our algorithm, including comparisons to existing methods, such as principal curves. We believe that the simplicity and practicality of our algorithm will help to promote skeleton graphs as a data analysis tool for a broad range of applications.

5 0.53515768 186 nips-2011-Noise Thresholds for Spectral Clustering

Author: Sivaraman Balakrishnan, Min Xu, Akshay Krishnamurthy, Aarti Singh

Abstract: Although spectral clustering has enjoyed considerable empirical success in machine learning, its theoretical properties are not yet fully developed. We analyze the performance of a spectral algorithm for hierarchical clustering and show that on a class of hierarchically structured similarity matrices, this algorithm can tolerate noise that grows with the number of data points while still perfectly recovering the hierarchical clusters with high probability. We additionally improve upon previous results for k-way spectral clustering to derive conditions under which spectral clustering makes no mistakes. Further, using minimax analysis, we derive tight upper and lower bounds for the clustering problem and compare the performance of spectral clustering to these information theoretic limits. We also present experiments on simulated and real world data illustrating our results. 1

6 0.50815833 71 nips-2011-Directed Graph Embedding: an Algorithm based on Continuous Limits of Laplacian-type Operators

7 0.49921888 117 nips-2011-High-Dimensional Graphical Model Selection: Tractable Graph Families and Necessary Conditions

8 0.46821922 43 nips-2011-Bayesian Partitioning of Large-Scale Distance Data

9 0.46529201 81 nips-2011-Efficient anomaly detection using bipartite k-NN graphs

10 0.46106824 118 nips-2011-High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity

11 0.45966139 172 nips-2011-Minimax Localization of Structural Information in Large Noisy Matrices

12 0.44028479 5 nips-2011-A Denoising View of Matrix Completion

13 0.42985129 54 nips-2011-Co-regularized Multi-view Spectral Clustering

14 0.42284986 69 nips-2011-Differentially Private M-Estimators

15 0.42166692 147 nips-2011-Learning Patient-Specific Cancer Survival Distributions as a Sequence of Dependent Regressors

16 0.42009172 146 nips-2011-Learning Higher-Order Graph Structure with Features by Structure Penalty

17 0.41794768 51 nips-2011-Clustered Multi-Task Learning Via Alternating Structure Optimization

18 0.41550693 263 nips-2011-Sparse Manifold Clustering and Embedding

19 0.40610719 83 nips-2011-Efficient inference in matrix-variate Gaussian models with \iid observation noise

20 0.39610642 241 nips-2011-Scalable Training of Mixture Models via Coresets


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.018), (4, 0.072), (20, 0.032), (26, 0.024), (31, 0.061), (33, 0.025), (43, 0.077), (45, 0.074), (48, 0.269), (57, 0.04), (74, 0.059), (83, 0.049), (99, 0.071)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.87023491 232 nips-2011-Ranking annotators for crowdsourced labeling tasks

Author: Vikas C. Raykar, Shipeng Yu

Abstract: With the advent of crowdsourcing services it has become quite cheap and reasonably effective to get a dataset labeled by multiple annotators in a short amount of time. Various methods have been proposed to estimate the consensus labels by correcting for the bias of annotators with different kinds of expertise. Often we have low quality annotators or spammers–annotators who assign labels randomly (e.g., without actually looking at the instance). Spammers can make the cost of acquiring labels very expensive and can potentially degrade the quality of the consensus labels. In this paper we formalize the notion of a spammer and define a score which can be used to rank the annotators—with the spammers having a score close to zero and the good annotators having a high score close to one. 1
An attempt to quantify the quality of the workers based on the confusion matrix was recently made by [4] where they transformed the observed labels into posterior soft labels based on the estimated confusion 1 matrix. While we obtain somewhat similar annotator rankings, we differ from this work in that our score is directly defined in terms of the annotator parameters (see § 5 for more details). The rest of the paper is organized as follows. For ease of exposition we start with binary labels (§ 2) and later extend it to categorical (§ 3) and ordinal labels (§ 4). We first specify the annotator model used, formalize the notion of a spammer, and propose an appropriate score in terms of the annotator model parameters. We do not dwell too much on the estimation of the annotator model parameters. These parameters can either be estimated directly using known gold standard 1 or the iterative algorithms that estimate the annotator model parameters without actually knowing the gold standard [3, 9, 2, 6, 7]. In the experimental section (§ 6) we obtain rankings for the annotators using the proposed spammer scores on some publicly available data from different domains. 2 Spammer score for crowdsourced binary labels j Annotator model Let yi ∈ {0, 1} be the label assigned to the ith instance by the j th annotator, and let yi ∈ {0, 1} be the actual (unobserved) binary label. We model the accuracy of the annotator separately on the positive and the negative examples. If the true label is one, the sensitivity (true positive rate) αj for the j th annotator is defined as the probability that the annotator labels it as one. j αj := Pr[yi = 1|yi = 1]. On the other hand, if the true label is zero, the specificity (1−false positive rate) β j is defined as the probability that annotator labels it as zero. j β j := Pr[yi = 0|yi = 0]. Extensions of this basic model have been proposed to include item level difficulty [2, 13] and also to model the annotator performance based on the feature vector [14]. For simplicity we use the basic model proposed in [7] in our formulation. Based on many instances labeled by multiple annotators the maximum likelihood estimator for the annotator parameters (αj , β j ) and also the consensus ground truth (yi ) can be estimated iteratively [3, 7] via the Expectation Maximization (EM) algorithm. The EM algorithm iteratively establishes a particular gold standard (initialized via majority voting), measures the performance of the annotators given that gold standard (M-step), and refines the gold standard based on the performance measures (E-step). Who is a spammer? Intuitively, a spammer assigns labels randomly—maybe because the annotator does not understand the labeling criteria, does not look at the instances when labeling, or maybe a bot pretending to be a human annotator. More precisely an annotator is a spammer if the probability j of observed label yi being one given the true label yi is independent of the true label, i.e., j j Pr[yi = 1|yi ] = Pr[yi = 1]. This means that the annotator is assigning labels randomly by flipping a coin with bias without actually looking at the data. Equivalently (1) can be written as j j Pr[yi = 1|yi = 1] = Pr[yi = 1|yi = 0] which implies αj = 1 − β j . (1) j Pr[yi = 1] (2) Hence in the context of the annotator model defined earlier a perfect spammer is an annotator for whom αj + β j − 1 = 0. This corresponds to the diagonal line on the Receiver Operating Characteristic (ROC) plot (see Figure 1(a)) 2 . 
If αj + β j − 1 < 0 then the annotators lies below the diagonal line and is a malicious annotator who flips the labels. Note that a malicious annotator has discriminatory power if we can detect them and flip their labels. In fact the methods proposed in [3, 7] can automatically flip the labels for the malicious annotators. Hence we define the spammer score for an annotator as S j = (αj + β j − 1)2 (3) An annotator is a spammer if S j is close to zero. Good annotators have S j > 0 while a perfect annotator has S j = 1. 1 One of the commonly used strategy to filter out spammers is to inject some items into the annotations with known labels. This is the strategy used by CrowdFlower (http://crowdflower.com/docs/gold). 2 Also note that (αj + β j )/2 is equal to the area shown in the plot and can be considered as a non-parametric approximation to the area under the ROC curve (AUC) based on one observed point. It is also equal to the Balanced Classification Rate (BCR). So a spammer can also be defined as having BCR or AUC equal to 0.5. 2 Equal accuracy contours (prevalence=0.5) 0.9 1 Good Annotators Biased Annotators 0.8 0.9 Sensitivity Sensitivity ( αj ) Spammers 0.5 0.4 j 0.3 j 0.6 0.5 0.4 4 0. 5 0. 3 0. 7 0. 6 0. 4 0. 5 0. 2 0. 3 0. Malicious Annotators 0.2 0.4 0.6 1−Specificity ( βj ) 0.8 0.1 1 0. 0. 2 0. 3 0. 1 0. 0.6 0.5 1 0. 2 0. 2 0. 1 0. 0.4 3 1 0. 0. 2 0. 4 0. 0.2 4 0. 5 0. 0 0 1 3 0. 0.3 0.2 Biased Annotators 4 0.7 6 0. 4 0. 0.8 7 0.3 Area = (α +β )/2 0.2 0 0 0.9 0. 8 0. 8 0. 0.7 6 0. .5 0 5 0. 0.7 [ 1−βj, αj ] 0.6 0.1 6 0. 0.8 0.7 Equal spammer score contours 1 7 0. 8 0. 9 0. Sensitivity 1 (a) Binary annotator model 0.1 1 0. 2 0. 3 0. 0.2 0.4 0.6 1−Specificity 0.8 1 1 0. 0 0 (b) Accuracy 0.2 3 0. 4 0. 0.4 0.6 1−Specificity 5 0. .6 7 0 0. 8 0. 0.8 1 (c) Spammer score Figure 1: (a) For binary labels an annotator is modeled by his/her sensitivity and specificity. A perfect spammer lies on the diagonal line on the ROC plot. (b) Contours of equal accuracy (4) and (c) equal spammer score (3). Accuracy This notion of a spammer is quite different for that of the accuracy of an annotator. An annotator with high accuracy is a good annotator but one with low accuracy is not necessarily a spammer. The accuracy is computed as 1 j Accuracyj = Pr[yi = yi ] = j Pr[yi = 1|yi = k]Pr[yi = k] = αj p + β j (1 − p), (4) k=0 where p := Pr[yi = 1] is the prevalence of the positive class. Note that accuracy depends on prevalence. Our proposed spammer score does not depend on prevalence and essentially quantifies the annotator’s inherent discriminatory power. Figure 1(b) shows the contours of equal accuracy on the ROC plot. Note that annotators below the diagonal line (malicious annotators) have low accuracy. The malicious annotators are good annotators but they flip their labels and as such are not spammers if we can detect them and then correct for the flipping. In fact the EM algorithms [3, 7] can correctly flip the labels for the malicious annotators and hence they should not be treated as spammers. Figure 1(c) also shows the contours of equal score for our proposed score and it can be seen that the malicious annotators have a high score and only annotators along the diagonal have a low score (spammers). Log-odds Another interpretation of a spammer can be seen from the log odds. Using Bayes’ rule the posterior log-odds can be written as log j Pr[yi = 1|yi ] Pr[yi = j 0|yi ] = log j Pr[yi |yi = 1] j Pr[yi |yi = 0] + log p . 
1−p Pr[y =1|y j ] p If an annotator is a spammer (i.e., (2) holds) then log Pr[yi =0|yi ] = log 1−p . Essentially the annotator j i i provides no information in updating the posterior log-odds and hence does not contribute to the estimation of the actual true label. 3 Spammer score for categorical labels Annotator model Suppose there are K ≥ 2 categories. We introduce a multinomial parameter αj = (αj , . . . , αj ) for each annotator, where c c1 cK K j αj := Pr[yi = k|yi = c] ck αj = 1. ck and k=1 αj ck The term denotes the probability that annotator j assigns class k to an instance given that the true class is c. When K = 2, αj and αj are sensitivity and specificity, respectively. 11 00 Who is a spammer? As earlier a spammer assigns labels randomly, i.e., j j Pr[yi = k|yi ] = Pr[yi = k], ∀k. 3 j j This is equivalent to Pr[yi = k|yi = c] = Pr[yi = k|yi = c ], ∀c, c , k = 1, . . . , K— which means knowing the true class label being c or c does not change the probability of the annotator’s assigned label. This indicates that the annotator j is a spammer if αj = αj k , ∀c, c , k = 1, . . . , K. ck c (5) Let Aj be the K × K confusion rate matrix with entries [Aj ]ck = αck —a spammer would have 0.50 0.50 0.50 all the rows of Aj equal, for example, Aj = 0.25 0.25 0.25 0.25 0.25 0.25 , for a three class categorical annotation problem. Essentially Aj is a rank one matrix of the form Aj = evj , for some column vector vj ∈ RK that satisfies vj e = 1, where e is column vector of ones. In the binary case we had this natural notion of spammer as an annotator for whom αj + β j − 1 was close to zero. One natural way to summarize (5) would be in terms of the distance (Frobenius norm) of the confusion matrix to the closest rank one approximation, i.e, S j := Aj − eˆj v 2 F, (6) where ˆj solves v ˆj = arg min Aj − evj v vj 2 F s.t. vj e = 1. (7) Solving (7) yields ˆj = (1/K)Aj e, which is the mean of the rows of Aj . Then from (6) we have v Sj = I− 1 ee K 2 Aj = F 1 K (αj − αj k )2 . ck c c < . . . < K. Annotator model It is conceptually easier to think of the true label to be binary, that is, yi ∈ {0, 1}. For example in mammography a lesion is either malignant (1) or benign (0) (which can be confirmed by biopsy) and the BIRADS ordinal scale is a means for the radiologist to quantify the uncertainty based on the digital mammogram. The radiologist assigns a higher value of the label if he/she thinks the true label is closer to one. As earlier we characterize each annotator by the sensitivity and the specificity, but the main difference is that we now define the sensitivity and specificity for j each ordinal label (or threshold) k ∈ {1, . . . , K}. Let αj and βk be the sensitivity and specificity k th respectively of the j annotator corresponding to the threshold k, that is, j j j αj = Pr[yi ≥ k | yi = 1] and βk = Pr[yi < k | yi = 0]. k j j Note that αj = 1, β1 = 0 and αj 1 K+1 = 0, βK+1 = 1 from this definition. Hence each annotator j j is parameterized by a set of 2(K − 1) parameters [αj , β2 , . . . , αj , βK ]. This corresponds to an 2 K empirical ROC curve for the annotator (Figure 2). 4 Who is a spammer? As earlier we define an an1 j k=1 notator j to be a spammer if Pr[yi = k|yi = 1] = 0.9 j k=2 0.8 Pr[yi = k|yi = 0] ∀k = 1, . . . , K. Note that from j 0.7 k=3 [ 1−β , α ] the annotation model we have 3 Pr[yi = k | yi = 0.6 j j j 1] = αk − αk+1 and Pr[yi = k | yi = 0] = 0.5 k=4 j j 0.4 βk+1 − βk . This implies that annotator j is a spam0.3 j j mer if αj − αj k k+1 = βk+1 − βk , ∀k = 1, . . . 
, K, 0.2 j j 0.1 which leads to αj + βk = αj + β1 = 1, ∀k. This 1 k j 0 0 0.2 0.4 0.6 0.8 1 means that for every k, the point (1 − βk , αj ) lies on k 1−Specificity ( β ) the diagonal line in the ROC plot shown in Figure 2. The area under the empirical ROC curve can be comFigure 2: Ordinal labels: An annotator is modK 1 puted as (see Figure 2) AUCj = 2 k=1 (αj + eled by sensitivity/specificity for each threshold. k+1 j j αj )(βk+1 − βk ), and can be used to define the folk lowing spammer score as (2AUCj − 1)2 to rank the different annotators. 3 Sensitivity ( αj ) 3 j 2 K (αj k+1 k=1 j S = + j αj )(βk+1 k − j βk ) −1 (9) With two levels this expression defaults to the binary case. An annotator is a spammer if S j is close to zero. Good annotators have S j > 0 while a perfect annotator has S j = 1. 5 Previous work Recently Ipeirotis et.al. [4] proposed a score for categorical labels based on the expected cost of the posterior label. In this section we briefly describe their approach and compare it with our proposed score. For each instance labeled by the annotator they first compute the posterior (soft) label j j Pr[yi = c|yi ] for c = 1, . . . , K, where yi is the label assigned to the ith instance by the j th annotator and yi is the true unknown label. The posterior label is computed via Bayes’ rule as j j j Pr[yi = c|yi ] ∝ Pr[yi |yi = c]Pr[yi = c] = (αj )δ(yi ,k) pc , where pc = Pr[yi = c] is the prevack lence of class c. The score for a spammer is based on the intuition that the posterior label vector j j (Pr[yi = 1|yi ], . . . , Pr[yi = K|yi ]) for a good annotator will have all the probability mass concentrated on single class. For example for a three class problem (with equal prevalence), a posterior label vector of (1, 0, 0) (certain that the class is one) comes from a good annotator while a (1/3, 1/3, 1/3) (complete uncertainty about the class label) comes from spammer. Based on this they define the following score for each annotator 1 Score = N N K K j j costck Pr[yi = k|yi ]Pr[yi = c|yi ] j i=1 . (10) c=1 k=1 where costck is the misclassification cost when an instance of class c is classified as k. Essentially this is capturing some sort of uncertainty of the posterior label averaged over all the instances. Perfect workers have a score Scorej = 0 while spammers will have high score. An entropic version of this score based on similar ideas has also been recently proposed in [5]. Our proposed spammer score differs from this approach in the following aspects: (1) Implicit in the score defined above (10) j is the assumption that an annotator is a spammer when Pr[yi = c|yi ] = Pr[yi = c], i.e., the estimated posterior labels are simply based on the prevalence and do not depend on the observed labels. By j j Bayes’ rule this is equivalent to Pr[yi |yi = c] = Pr[yi ] which is what we have used to define our spammer score. (2) While both notions of a spammer are equivalent, the approach of [4] first computes the posterior labels based on the observed data, the class prevalence and the annotator j j j j This can be seen as follows: Pr[yi = k | yi = 1] = Pr[(yi ≥ k) AND (yi < k + 1) | yi = 1] = Pr[yi ≥ j j j j k | yi = 1] + Pr[yi < k + 1 | yi = 1] − Pr[(yi ≥ k) OR (yi < k + 1) | yi = 1] = Pr[yi ≥ k | yi = j j j 1] − Pr[yi ≥ k + 1 | yi = 1] = αj − αj . Here we used the fact that Pr[(yi ≥ k) OR (yi < k + 1)] = 1. 
k k+1 3 5 simulated | 500 instances | 30 annotators simulated | 500 instances | 30 annotators 1 12 0.8 Spammer Score 18 0.6 0.5 22 24 23 25 0.3 29 20 0.2 0.4 0.2 30 16 14 0.1 26 21 27 28 19 0 0 13 0 0.2 0.4 0.6 1−Specificity 0.8 1 500 500 500 500 500 500 500 500 500 500 0.4 0.6 500 500 500 1 0.7 500 500 500 500 500 500 500 500 500 500 500 500 500 500 3 1 500 500 500 2 8 510 7 17 4 9 27 8 30 6 3 28 7 10 2 23 22 26 24 5 1 21 29 25 14 12 17 11 18 20 19 15 16 13 4 0.8 Sensitivity 6 9 0.9 15 11 Annotator (a) Simulation setup (b) Annotator ranking Annotator rank (median) via accuracy simulated | 500 instances | 30 annotators Annotator rank (median) via Ipeirotis et.al.[4] simulated | 500 instances | 30 annotators 27 30 30 28 23 22 26 25 24 21 29 25 20 14 17 18 15 20 16 15 11 13 12 19 1 10 5 10 2 7 6 3 5 8 9 4 0 0 5 10 15 20 25 Annotator rank (median) via spammer score 30 (c) Comparison with accuracy 30 18 20 16 15 19 13 12 25 11 14 17 25 20 29 21 51 15 26 22 23 24 10 2 7 10 28 6 3 8 30 5 27 9 4 0 0 5 10 15 20 25 Annotator rank (median) via spammer score 30 (d) Comparison with Ipeirotis et. al. [4] Figure 3: (a) The simulation setup consisting of 10 good annotators (annotators 1 to 10), 10 spammers (11 to 20), and 10 malicious annotators (21 to 30). (b) The ranking of annotators obtained using the proposed spammer score. The spammer score ranges from 0 to 1, the lower the score, the more spammy the annotator. The mean spammer score and the 95% confidence intervals (CI) are shown—obtained from 100 bootstrap replications. The annotators are ranked based on the lower limit of the 95% CI. The number at the top of the CI bar shows the number of instances annotated by that annotator. (c) and (d) Comparison of the median rank obtained via the spammer score with the rank obtained using (c) accuracy and (d) the method proposed by Ipeirotis et. al. [4]. parameters and then computes the expected cost. Our proposed spammer score does not depend on the prevalence of the class. Our score is also directly defined only in terms of the annotator confusion matrix and does not need the observed labels. (3) For the score defined in (10) while perfect annotators have a score of 0 it is not clear what should be a good baseline for a spammer. The authors suggest to compute the baseline by assuming that a worker assigns as label the class with maximum prevalence. Our proposed score has a natural scale with a perfect annotator having a score of 1 and a spammer having a score of 0. (4) However one advantage of the approach in [4] is that they can directly incorporate varied misclassification costs. 6 Experiments Ranking annotators based on the confidence interval As mentioned earlier the annotator model parameters can be estimated using the iterative EM algorithms [3, 7] and these estimated annotator parameters can then be used to compute the spammer score. The spammer score can then be used to rank the annotators. However one commonly observed phenomenon when working with crowdsourced data is that we have a lot of annotators who label only a very few instances. As a result the annotator parameters cannot be reliably estimated for these annotators. In order to factor this uncertainty in the estimation of the model parameters we compute the spammer score for 100 bootstrap replications. Based on this we compute the 95% confidence intervals (CI) for the spammer score for each annotator. We rank the annotators based on the lower limit of the 95% CI. The CIs are wider 6 Table 1: Datasets N is the number of instances. 
Figure 3: (a) The simulation setup consisting of 10 good annotators (annotators 1 to 10), 10 spammers (11 to 20), and 10 malicious annotators (21 to 30). (b) The ranking of annotators obtained using the proposed spammer score. The spammer score ranges from 0 to 1; the lower the score, the more spammy the annotator. The mean spammer score and the 95% confidence intervals (CI) are shown (obtained from 100 bootstrap replications). The annotators are ranked based on the lower limit of the 95% CI. The number at the top of the CI bar shows the number of instances annotated by that annotator. (c) and (d) Comparison of the median rank obtained via the spammer score with the rank obtained using (c) accuracy and (d) the method proposed by Ipeirotis et al. [4].

6 Experiments

Ranking annotators based on the confidence interval: As mentioned earlier, the annotator model parameters can be estimated using the iterative EM algorithms [3, 7], and these estimated annotator parameters can then be used to compute the spammer score. The spammer score can then be used to rank the annotators. However, one commonly observed phenomenon when working with crowdsourced data is that many annotators label only a very few instances, so the annotator parameters cannot be reliably estimated for them. In order to factor this uncertainty in the estimation of the model parameters into the ranking, we compute the spammer score over 100 bootstrap replications. Based on this we compute the 95% confidence interval (CI) of the spammer score for each annotator, and we rank the annotators based on the lower limit of the 95% CI. The CIs are wider when the annotator labels only a few instances. For a crowdsourced labeling task the annotator has to be good and also label a reasonable number of instances in order to be reliably identified.
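A minimal sketch of this bootstrap-and-rank procedure is given below. The helper functions dawid_skene_em and spammer_score are hypothetical placeholders standing in for the EM algorithm of [3, 7] and the score in Eq. (9), and resampling instances with replacement is one plausible choice; the text does not spell out the exact resampling scheme.

```python
import numpy as np

def rank_annotators(labels, n_boot=100, seed=0):
    """Rank annotators by the lower limit of a bootstrap 95% CI on the spammer score.

    labels: dict mapping annotator id -> list of (instance id, label) pairs.
    Assumes dawid_skene_em(labels) returns per-annotator parameters and
    spammer_score(params) maps them to a score in [0, 1]; both are
    hypothetical placeholders.
    """
    rng = np.random.default_rng(seed)
    instances = sorted({i for anns in labels.values() for (i, _) in anns})
    boot_scores = {j: [] for j in labels}
    for _ in range(n_boot):
        # Resample instances with replacement and keep only their labels.
        sample = set(rng.choice(instances, size=len(instances), replace=True))
        resampled = {j: [(i, y) for (i, y) in anns if i in sample]
                     for j, anns in labels.items()}
        params = dawid_skene_em(resampled)  # placeholder EM fit
        for j in labels:
            boot_scores[j].append(spammer_score(params[j]))
    # The lower limit of the 95% CI (2.5th percentile) drives the ranking.
    lower = {j: np.percentile(s, 2.5) for j, s in boot_scores.items()}
    return sorted(lower, key=lower.get, reverse=True)  # least spammy annotators first
```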
Table 1: Datasets. N is the number of instances. M is the number of annotators. M* is the mean/median number of annotators per instance. N* is the mean/median number of instances labeled by each annotator.

Dataset    Type                N     M    M*      N*       Brief Description
bluebird   binary              108   39   39/39   108/108  bird identification [12]
temp       binary              462   76   10/10   61/16    event annotation [10]
wsd        categorical/3       177   34   10/10   52/20    word sense disambiguation [10]
sentiment  categorical/3       1660  33   6/6     291/175  irish economic sentiment analysis [1]
wosi       ordinal/[0 10]      30    10   10/10   30/30    word similarity [10]
valence    ordinal/[-100 100]  100   38   10/10   26/20    affect recognition [10]

bluebird: the annotator had to identify whether there was an Indigo Bunting or Blue Grosbeak in the image. temp: given a dialogue and a pair of verbs, annotators need to label whether the event described by the first verb occurs before or after the second. wsd: the labeler is given a paragraph of text containing the word "president" and asked to label one of the three appropriate senses. sentiment: articles from three Irish online news sources were annotated by volunteer users as positive, negative, or irrelevant. wosi: numeric judgements of word similarity. valence: each annotator is presented with a short headline and asked to rate it on a scale [-100, 100] to denote the overall positive or negative valence.

Figure 4: Annotator rankings obtained for the datasets in Table 1. The spammer score ranges from 0 to 1; the lower the score, the more spammy the annotator. The mean spammer score and the 95% confidence intervals (CI) are shown (obtained from 100 bootstrap replications). The annotators are ranked based on the lower limit of the 95% CI. The number at the top of the CI bar shows the number of instances annotated by that annotator. Note that the CIs are wider when the annotator labels only a few instances.

Simulated data: We first illustrate our proposed spammer score on simulated binary data (with equal prevalence for both classes) consisting of 500 instances labeled by 30 annotators of varying sensitivity and specificity (see Figure 3(a) for the simulation setup). Of the 30 annotators we have 10 good annotators (annotators 1 to 10, who lie above the diagonal in Figure 3(a)), 10 spammers (annotators 11 to 20, who lie around the diagonal), and 10 malicious annotators (annotators 21 to 30, who lie below the diagonal). Figure 3(b) plots the ranking of annotators obtained using the proposed spammer score, with the annotator model parameters estimated via the EM algorithm [3, 7]. The spammer score ranges from 0 to 1; the lower the score, the more spammy the annotator. The mean spammer score and the 95% confidence interval (CI) obtained via bootstrapping are shown, and the annotators are ranked based on the lower limit of the 95% CI. As can be seen, all the spammers (annotators 11 to 20) have a low spammer score and appear at the bottom of the list. The malicious annotators have a higher score than the spammers since we can correct for their flipping: they are good annotators who flip their labels, and as such are not spammers once we detect that they are malicious. Figure 3(c) compares the (median) rank obtained via the spammer score with the (median) rank obtained using accuracy as the score to rank the annotators. While the good annotators are ranked high by both methods, the accuracy score gives a low rank to the malicious annotators; accuracy does not capture the notion of a spammer. Figure 3(d) compares the ranking with the method proposed by Ipeirotis et al. [4], which gives rankings very similar to our proposed score.
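A sketch of a simulation in this spirit is given below; the exact sensitivity/specificity ranges for the three annotator groups are not stated in the text, so the values used here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_instances, n_per_group = 500, 10
truth = rng.integers(0, 2, size=n_instances)  # binary ground truth, equal prevalence

def draw_labels(sensitivity, specificity):
    """Labels from a binary annotator with the given sensitivity/specificity."""
    flip = np.where(truth == 1,
                    rng.random(n_instances) > sensitivity,
                    rng.random(n_instances) > specificity)
    return np.where(flip, 1 - truth, truth)

annotators = []
# 10 good annotators: above the ROC diagonal (alpha + beta > 1).
annotators += [draw_labels(rng.uniform(0.7, 0.95), rng.uniform(0.7, 0.95)) for _ in range(n_per_group)]
# 10 spammers: on the diagonal (alpha + beta = 1), i.e. labels independent of the truth.
annotators += [draw_labels(a, 1 - a) for a in rng.uniform(0.2, 0.8, n_per_group)]
# 10 malicious annotators: below the diagonal (alpha + beta < 1), i.e. label flippers.
annotators += [draw_labels(rng.uniform(0.05, 0.3), rng.uniform(0.05, 0.3)) for _ in range(n_per_group)]
```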
Figure 5: Comparison of the rank obtained via the spammer score with the rank obtained using (a) accuracy and (b) the method proposed by Ipeirotis et al. [4] for the bluebird binary dataset. (c) The annotator model parameters as estimated by the EM algorithm [3, 7].

Figure 6: Comparison of the median rank obtained via the spammer score with the rank obtained using accuracy and the method proposed by Ipeirotis et al. [4] for the two categorical datasets in Table 1.

Mechanical Turk data: We report results on some publicly available linguistic and image annotation data collected using Amazon Mechanical Turk (AMT) and other sources. Table 1 summarizes the datasets. Figure 4 plots the spammer scores and the rankings obtained; the mean and the 95% CI obtained via bootstrapping are also shown, and the number at the top of the CI bar shows the number of instances annotated by that annotator. The rankings are based on the lower limit of the 95% CI, which factors the number of instances labeled by the annotator into the ranking. An annotator who labels only a few instances will have a very wide CI; some annotators who label only a few instances may have a high mean spammer score, but the CI will be wide and hence they are ranked lower. Ideally we would like annotators who have a high score and at the same time label a lot of instances, so that we can reliably identify them. The authors of [1] shared with us some qualitative observations regarding the annotators of the sentiment dataset, and these largely agree with our rankings. For example, the authors made the following comment about Annotator 7: "Quirky annotator - had a lot of debate about what was the meaning of the annotation question. I'd say he changed his labeling strategy at least once during the process". Our proposed score gave a low rank to this annotator.

Comparison with other approaches: Figures 5 and 6 compare the proposed ranking with the rank obtained using accuracy and with the method proposed by Ipeirotis et al. [4] for some binary and categorical datasets in Table 1. Our proposed ranking is fairly similar to that obtained by Ipeirotis et al. [4], but accuracy does not quite capture the notion of a spammer. For example, for the bluebird dataset, annotator 21 is ranked at the bottom of the list by accuracy (see Figure 5(a)) while the proposed score puts it in the middle of the list. From the estimated model parameters it can be seen that annotator 21 actually flips the labels (it lies below the diagonal in Figure 5(c)) but is otherwise a good annotator.
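As a point of reference, the accuracy baseline used in these comparisons can be sketched as follows; how the reference labels are obtained (e.g., EM posteriors or majority vote) is an assumption here, and the data layout is illustrative.

```python
import numpy as np

def accuracy_rank(labels_by_annotator, reference):
    """Rank annotators by raw accuracy against reference labels (baseline).

    labels_by_annotator: dict j -> dict {instance id: label}.
    reference: dict {instance id: reference label}, e.g. estimated via EM or majority vote.
    """
    acc = {j: np.mean([lab == reference[i] for i, lab in anns.items()])
           for j, anns in labels_by_annotator.items()}
    return sorted(acc, key=acc.get, reverse=True)

# A malicious annotator who flips every label gets accuracy near 0 and lands at the
# bottom of this ranking, whereas the spammer score ranks such an annotator above the
# spammers because the flipping can be corrected for.
```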
7 Conclusions

We proposed a score to rank annotators for crowdsourced binary, categorical, and ordinal labeling tasks. The obtained rankings and scores can be used to allocate monetary bonuses to the different annotators and also to eliminate spammers from further labeling tasks. A mechanism to rank annotators should be a desirable feature of any crowdsourcing service. The proposed score should also be useful for specifying the prior in Bayesian approaches to consolidating annotations.

References

[1] A. Brew, D. Greene, and P. Cunningham. Using crowdsourcing and active learning to track sentiment in online media. In Proceedings of the 6th Conference on Prestigious Applications of Intelligent Systems (PAIS'10), 2010.
[2] B. Carpenter. Multilevel Bayesian models of categorical data annotation. Technical report, available at http://lingpipe-blog.com/lingpipe-white-papers/, 2008.
[3] A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm. Applied Statistics, 28(1):20–28, 1979.
[4] P. G. Ipeirotis, F. Provost, and J. Wang. Quality management on Amazon Mechanical Turk. In Proceedings of the ACM SIGKDD Workshop on Human Computation (HCOMP'10), pages 64–67, 2010.
[5] V. C. Raykar and S. Yu. An entropic score to rank annotators for crowdsourced labelling tasks. In Proceedings of the Third National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), 2011.
[6] V. C. Raykar, S. Yu, L. H. Zhao, A. Jerebko, C. Florin, G. H. Valadez, L. Bogoni, and L. Moy. Supervised learning from multiple experts: Whom to trust when everyone lies a bit. In Proceedings of the 26th International Conference on Machine Learning (ICML 2009), pages 889–896, 2009.
[7] V. C. Raykar, S. Yu, L. H. Zhao, G. H. Valadez, C. Florin, L. Bogoni, and L. Moy. Learning from crowds. Journal of Machine Learning Research, 11:1297–1322, April 2010.
[8] V. S. Sheng, F. Provost, and P. G. Ipeirotis. Get another label? Improving data quality and data mining using multiple, noisy labelers. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 614–622, 2008.
[9] P. Smyth, U. Fayyad, M. Burl, P. Perona, and P. Baldi. Inferring ground truth from subjective labelling of Venus images. In Advances in Neural Information Processing Systems 7, pages 1085–1092, 1995.
[10] R. Snow, B. O'Connor, D. Jurafsky, and A. Y. Ng. Cheap and Fast—but is it good? Evaluating non-expert annotations for natural language tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP '08), pages 254–263, 2008.
[11] A. Sorokin and D. Forsyth. Utility data annotation with Amazon Mechanical Turk. In Proceedings of the First IEEE Workshop on Internet Vision at CVPR 08, pages 1–8, 2008.
[12] P. Welinder, S. Branson, S. Belongie, and P. Perona. The multidimensional wisdom of crowds. In Advances in Neural Information Processing Systems 23, pages 2424–2432, 2010.
[13] J. Whitehill, P. Ruvolo, T. Wu, J. Bergsma, and J. Movellan. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. In Advances in Neural Information Processing Systems 22, pages 2035–2043, 2009.
[14] Y. Yan, R. Rosales, G. Fung, M. Schmidt, G. Hermosillo, L. Bogoni, L. Moy, and J. Dy. Modeling annotator expertise: Learning when everybody knows a bit of something. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010), pages 932–939, 2010.

same-paper 2 0.76625168 236 nips-2011-Regularized Laplacian Estimation and Fast Eigenvector Approximation

Author: Patrick O. Perry, Michael W. Mahoney

Abstract: Recently, Mahoney and Orecchia demonstrated that popular diffusion-based procedures to compute a quick approximation to the first nontrivial eigenvector of a data graph Laplacian exactly solve certain regularized Semi-Definite Programs (SDPs). In this paper, we extend that result by providing a statistical interpretation of their approximation procedure. Our interpretation will be analogous to the manner in which 2 -regularized or 1 -regularized 2 -regression (often called Ridge regression and Lasso regression, respectively) can be interpreted in terms of a Gaussian prior or a Laplace prior, respectively, on the coefficient vector of the regression problem. Our framework will imply that the solutions to the MahoneyOrecchia regularized SDP can be interpreted as regularized estimates of the pseudoinverse of the graph Laplacian. Conversely, it will imply that the solution to this regularized estimation problem can be computed very quickly by running, e.g., the fast diffusion-based PageRank procedure for computing an approximation to the first nontrivial eigenvector of the graph Laplacian. Empirical results are also provided to illustrate the manner in which approximate eigenvector computation implicitly performs statistical regularization, relative to running the corresponding exact algorithm. 1

3 0.70345664 204 nips-2011-Online Learning: Stochastic, Constrained, and Smoothed Adversaries

Author: Alexander Rakhlin, Karthik Sridharan, Ambuj Tewari

Abstract: Learning theory has largely focused on two main learning scenarios: the classical statistical setting where instances are drawn i.i.d. from a fixed distribution, and the adversarial scenario wherein, at every time step, an adversarially chosen instance is revealed to the player. It can be argued that in the real world neither of these assumptions is reasonable. We define the minimax value of a game where the adversary is restricted in his moves, capturing stochastic and non-stochastic assumptions on data. Building on the sequential symmetrization approach, we define a notion of distribution-dependent Rademacher complexity for the spectrum of problems ranging from i.i.d. to worst-case. The bounds let us immediately deduce variation-type bounds. We study a smoothed online learning scenario and show that exponentially small amount of noise can make function classes with infinite Littlestone dimension learnable. 1

4 0.69327611 198 nips-2011-On U-processes and clustering performance

Author: Stéphan J. Clémençon

Abstract: Many clustering techniques aim at optimizing empirical criteria that are of the form of a U-statistic of degree two. Given a measure of dissimilarity between pairs of observations, the goal is to minimize the within cluster point scatter over a class of partitions of the feature space. It is the purpose of this paper to define a general statistical framework, relying on the theory of U-processes, for studying the performance of such clustering methods. In this setup, under adequate assumptions on the complexity of the subsets forming the partition candidates, the excess of clustering risk is proved to be of the order O_P(1/√n). Based on recent results related to the tail behavior of degenerate U-processes, it is also shown how to establish tighter rate bounds. Model selection issues, related to the number of clusters forming the data partition in particular, are also considered. 1

5 0.6877858 114 nips-2011-Hierarchical Multitask Structured Output Learning for Large-scale Sequence Segmentation

Author: Nico Goernitz, Christian Widmer, Georg Zeller, Andre Kahles, Gunnar Rätsch, Sören Sonnenburg

Abstract: We present a novel regularization-based Multitask Learning (MTL) formulation for Structured Output (SO) prediction for the case of hierarchical task relations. Structured output prediction often leads to difficult inference problems and hence requires large amounts of training data to obtain accurate models. We propose to use MTL to exploit additional information from related learning tasks by means of hierarchical regularization. Training SO models on the combined set of examples from multiple tasks can easily become infeasible for real world applications. To be able to solve the optimization problems underlying multitask structured output learning, we propose an efficient algorithm based on bundle-methods. We demonstrate the performance of our approach in applications from the domain of computational biology addressing the key problem of gene finding. We show that 1) our proposed solver achieves much faster convergence than previous methods and 2) that the Hierarchical SO-MTL approach outperforms considered non-MTL methods. 1

6 0.55259079 85 nips-2011-Emergence of Multiplication in a Biophysical Model of a Wide-Field Visual Neuron for Computing Object Approaches: Dynamics, Peaks, & Fits

7 0.53818882 80 nips-2011-Efficient Online Learning via Randomized Rounding

8 0.52860779 106 nips-2011-Generalizing from Several Related Classification Tasks to a New Unlabeled Sample

9 0.52794033 186 nips-2011-Noise Thresholds for Spectral Clustering

10 0.52771848 102 nips-2011-Generalised Coupled Tensor Factorisation

11 0.52333516 139 nips-2011-Kernel Bayes' Rule

12 0.52257717 258 nips-2011-Sparse Bayesian Multi-Task Learning

13 0.51906985 127 nips-2011-Image Parsing with Stochastic Scene Grammar

14 0.51873547 71 nips-2011-Directed Graph Embedding: an Algorithm based on Continuous Limits of Laplacian-type Operators

15 0.51572478 29 nips-2011-Algorithms and hardness results for parallel large margin learning

16 0.51567584 276 nips-2011-Structured sparse coding via lateral inhibition

17 0.51466036 135 nips-2011-Information Rates and Optimal Decoding in Large Neural Populations

18 0.51454455 144 nips-2011-Learning Auto-regressive Models from Sequence and Non-sequence Data

19 0.51305717 66 nips-2011-Crowdclustering

20 0.51161546 180 nips-2011-Multiple Instance Filtering