jmlr jmlr2012 jmlr2012-82 knowledge-graph by maker-knowledge-mining

82 jmlr-2012-On the Necessity of Irrelevant Variables


Source: pdf

Author: David P. Helmbold, Philip M. Long

Abstract: This work explores the effects of relevant and irrelevant boolean variables on the accuracy of classifiers. The analysis uses the assumption that the variables are conditionally independent given the class, and focuses on a natural family of learning algorithms for such sources when the relevant variables have a small advantage over random guessing. The main result is that algorithms relying predominately on irrelevant variables have error probabilities that quickly go to 0 in situations where algorithms that limit the use of irrelevant variables have errors bounded below by a positive constant. We also show that accurate learning is possible even when there are so few examples that one cannot determine with high confidence whether or not any individual variable is relevant. Keywords: feature selection, generalization, learning theory

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Wolfe Rd, SW3-350, Cupertino, CA 95014, USA. Editor: Gábor Lugosi. Abstract: This work explores the effects of relevant and irrelevant boolean variables on the accuracy of classifiers. [sent-8, score-0.81]

2 The analysis uses the assumption that the variables are conditionally independent given the class, and focuses on a natural family of learning algorithms for such sources when the relevant variables have a small advantage over random guessing. [sent-9, score-0.73]

3 The main result is that algorithms relying predominately on irrelevant variables have error probabilities that quickly go to 0 in situations where algorithms that limit the use of irrelevant variables have errors bounded below by a positive constant. [sent-10, score-1.16]

4 Introduction: When creating a classifier, a natural inclination is to use only variables that are obviously relevant, since irrelevant variables typically decrease the accuracy of a classifier. [sent-13, score-0.975]

5 On the other hand, this paper shows that the harm from irrelevant variables can be much less than the benefit from relevant variables and therefore it is possible to learn very accurate classifiers even when almost all of the variables are irrelevant. [sent-14, score-1.215]

6 We provide an illustrative analysis that isolates the effects of relevant and irrelevant variables on a classifier’s accuracy. [sent-17, score-0.766]

7 We focus on the situation where relatively few of the many variables are relevant, and the relevant variables are only weakly predictive. [sent-19, score-0.678]

8 We prove upper bounds on the error rate of a very simple learning algorithm that may include many irrelevant variables in its hypothesis. [sent-21, score-0.655]

9 The combination of these results shows that the simple algorithm’s error rate approaches zero in situations where every algorithm that predicts with mostly relevant variables has an error rate greater than a positive constant. [sent-29, score-0.536]

10 However, we do know which variables are relevant and irrelevant in synthetic data (and can generate as many test examples as desired). [sent-44, score-0.735]

11 Each of the two classes is equally likely, and there are 1000 relevant boolean variables: 500 that agree with the class label with probability 1/2 + 1/10, and 500 that disagree with the class label with probability 1/2 + 1/10. [sent-46, score-0.858]
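For concreteness, here is a small Python sketch of the synthetic source just described. The 1000 relevant variables follow the sentence above; the number of extra irrelevant variables (n_irrelevant) is our own illustrative choice, since this excerpt does not specify it.

```python
import numpy as np

def generate_data(m, n_agree=500, n_disagree=500, n_irrelevant=1000,
                  gamma=0.1, rng=None):
    """Sample m labeled examples from the synthetic source described above."""
    rng = np.random.default_rng(rng)
    y = rng.choice([-1, 1], size=m)                          # two equally likely classes
    p = np.concatenate([np.full(n_agree, 0.5 + gamma),       # agree w.p. 1/2 + 1/10
                        np.full(n_disagree, 0.5 - gamma),    # disagree w.p. 1/2 + 1/10
                        np.full(n_irrelevant, 0.5)])         # irrelevant: fair coin flips
    agrees = rng.random((m, p.size)) < p                     # does variable j match label i?
    X = np.where(agrees, y[:, None], -y[:, None])            # boolean variables coded as +/-1
    return X, y
```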

12 The algorithm is equally simple: it has a parameter β, and outputs the majority vote over those features (variables or their negations) that agree with the class label on at least a 1/2 + β fraction of the training examples. [sent-48, score-0.597]
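A minimal sketch of this β-threshold voting rule, continuing the generator above (our reading of the sentence, not the authors’ code): a feature enters the vote when its empirical agreement rate with the label is at least 1/2 + β.

```python
def train_vote(X, y, beta):
    """Signed inclusion weights for the beta-threshold majority vote."""
    agree_rate = (X == y[:, None]).mean(axis=0)      # per-variable agreement with the label
    pos = agree_rate >= 0.5 + beta                   # include the variable itself
    neg = (1.0 - agree_rate) >= 0.5 + beta           # include its negation
    return pos.astype(float) - neg.astype(float)     # if both qualify, they cancel in the vote

def predict(X, signs):
    return np.where(X @ signs >= 0, 1, -1)           # majority vote, ties broken toward +1

X, y = generate_data(m=100, rng=0)
signs = train_vote(X, y, beta=0.05)
X_test, y_test = generate_data(m=20000, rng=1)
print((predict(X_test, signs) == y_test).mean())     # test accuracy of the learned vote
```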

13 Both the accuracy of the classifier and the fraction of relevant variables are plotted against the number of variables used in the model, for various values of β. [sent-50, score-0.762]

14 Each time, the best accuracy is achieved when an overwhelming majority of the variables used in the model are irrelevant, and those models with few (< 25%) irrelevant variables perform far worse. [sent-51, score-0.817]

15 Let k of the variables agree with the class label with probability 1/2 + γ, and the remaining n − k variables agree with the label with probability 1/2. [sent-61, score-1.19]
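For this setup, the Hoeffding argument behind the bound the text calls Equation (1) is short; the following display is our reconstruction from the surrounding definitions:

```latex
% Majority vote over n variables, k of which agree with the label with
% probability 1/2 + gamma and n - k with probability 1/2. Let F be the
% number of votes agreeing with the label, so E[F] >= n/2 + gamma k.
% The vote errs only if F <= n/2, i.e., F falls gamma k below its mean:
\Pr[\text{error}] \le \Pr\big[F \le \mathbb{E}[F] - \gamma k\big]
                  \le \exp\!\left(-\frac{2\gamma^2 k^2}{n}\right)
```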

16 Whereas Equation (1) bounded the error as a function of the number of variables n and relevant variables k in the model, we now use capital N and capital K for the total number of variables and number of relevant variables in the data. [sent-67, score-1.467]

17 The N − K irrelevant variables are independent of the label, agreeing with it with probability 1/2. [sent-68, score-0.68]

18 Figure 2: Left: Test error and fraction of irrelevant variables as a function of the number of features. [sent-86, score-1.207]

19 Right: Scatter plot of test error rates (vertical) against fraction of irrelevant variables (horizontal). [sent-87, score-0.67]

20 We analyze an algorithm that chooses a value β ≥ 0 and outputs a majority vote over all features that agree with the class label on at least 1/2 + β of the training examples (as before, each feature is either a variable or its negation). [sent-89, score-0.619]

21 Thus the edge of the relevant features and the fraction of features that are relevant both approach zero while the total number of relevant features increases. [sent-92, score-1.016]

22 With only Θ(1/γ²) examples, an algorithm cannot even tell with high confidence whether a relevant variable is positively or negatively associated with the class label, much less solve the more difficult problem of determining whether or not a variable is relevant. [sent-97, score-0.474]

23 To be precise, the algorithm includes each variable or its negation when β = 0 and m is odd, and includes both the variable and its negation when m is even and the variable agrees with the class label exactly half the time. [sent-101, score-0.579]

24 Our upper bounds illustrate the potential rewards for algorithms that are “inclusive”, using many of the available variables in their classifiers, even when this means that most variables in the model are irrelevant. [sent-104, score-0.549]

25 We say that an algorithm is λ-exclusive if the expectation of the fraction of the variables used in its model that are relevant is at least λ. [sent-106, score-0.562]

26 Several papers (Abramovich et al., 2010; Donoho and Jin, 2004, 2006; Meinshausen and Rice, 2006) performed analyses and simulations using sources with elements in common with the model studied here, including conditionally independent variables and a weak association between the variables and the class labels. [sent-111, score-0.598]

27 Donoho and Jin also pointed out that their algorithm can produce accurate hypotheses while using many more irrelevant features than relevant ones. [sent-112, score-0.631]

28 The main theoretical results proved in their papers describe conditions that imply that, if the relevant variables are too small a fraction of all the variables, and the number of examples is too small, then learning is impossible. [sent-113, score-0.522]

29 A limit on the redundancy is needed for results like ours since, for example, a collection of Θ(k) perfectly correlated irrelevant variables would swamp the votes of the k relevant variables. [sent-128, score-0.822]

30 Another obvious direction for generalization is to relax the strict categorization of variables into irrelevant and (1/2+ γ)-relevant classes. [sent-136, score-0.537]

31 For example, our proof techniques easily give similar theorems when each relevant variable has a probability between 1/2 + γ/2 and 1/2 + 2γ of agreeing with the class label (as discussed in Section 6.2). [sent-138, score-0.608]

32 Section 4 gives bounds on the expected error of hypotheses learned from training data while Section 5 shows that, in certain situations, any exclusive algorithm must have high error while the error of some inclusive algorithms goes to 0. [sent-142, score-0.503]

33 For any η > 0: P[U > (1 + η)E(U)] < exp(−(1 + η)E(U) ln((1 + η)/e)). [sent-156, score-0.547]
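A quick numerical sanity check of this tail bound, taking U to be a binomial sum of independent Bernoulli variables (our illustrative choice; the inequality as stated is slightly weaker than the standard Chernoff bound, so the margin should be comfortable):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, eta = 200, 0.05, 2.0
mu = n * p                                    # E(U) = 10
U = rng.binomial(n, p, size=200_000)          # U as a sum of independent Bernoullis
empirical = (U > (1 + eta) * mu).mean()
bound = np.exp(-(1 + eta) * mu * np.log((1 + eta) / np.e))
print(empirical, "<", bound)                  # empirical tail (often 0.0 here) < ~0.052
```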

34 Which variables are relevant and whether each one is positively or negatively correlated with the class designations are chosen arbitrarily ahead of time. [sent-172, score-0.579]

35 The 2(N − K) irrelevant features come from the irrelevant variables, the K relevant features agree with the class labels with probability 1/2 + γ, and the K misleading features agree with the class labels with probability 1/2 − γ. [sent-174, score-1.654]

36 We use n for the total number of features in model M , k for the number of relevant features, and ℓ for the number of misleading features (leaving n − k − ℓ irrelevant features). [sent-176, score-0.767]

37 The next corollary shows that even models where most of the features are irrelevant can be highly accurate. [sent-188, score-0.525]

38 Corollary 2 If γ is a constant, k − ℓ = ω(√n), and k = o(n), then the accuracy of the model approaches 100% while its fraction of irrelevant variables approaches 1 (as n → ∞). [sent-189, score-0.621]
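A small numeric illustration of Corollary 2, assuming the Theorem 1 style error bound exp(−2γ²(k − ℓ)²/n) (our reconstruction; see the analog stated near the end of this summary) and taking ℓ = 0 and k = n^(2/3), which is ω(√n) and o(n):

```python
import math

gamma = 0.1
for n in [10**4, 10**5, 10**6, 10**7, 10**8]:
    k = round(n ** (2 / 3))            # k - l = n^(2/3) with l = 0
    bound = math.exp(-2 * gamma**2 * k**2 / n)
    # relevant fraction k/n -> 0 (irrelevant fraction -> 1) while the bound -> 0
    print(f"n={n:>9}  relevant fraction={k / n:.4f}  error bound={bound:.3g}")
```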

39 To keep the analysis as clean as possible, our results in this section apply to algorithms that choose β as a function of the number of features N, the number of relevant features K, the edge of the relevant features γ, and the training set size m, and then predict with Mβ. [sent-200, score-0.761]

40 This Bayes optimal predictor for our generative model is a majority vote of the K relevant features, and has an error rate bounded by exp(−2γ²K) (a bound as tight as the Hoeffding bound). [sent-205, score-0.49]

41 Proof: For a particular misleading feature to be included in Mβ, Algorithm A must overestimate the probability that the misleading feature equals the class label by at least β + γ. [sent-212, score-0.571]

42 Lemma 5 With probability at least 1 − 2δ, the number of irrelevant features in Mβ is at most 8N exp(−2β²m) + 6 ln(1/δ). [sent-215, score-0.499]

43 Proof: For a particular positive irrelevant feature to be included in Mβ , Algorithm A must overestimate the probability that the positive irrelevant feature equals the class label by β. [sent-216, score-0.877]

44 Applying (3), this happens with probability at most exp(−2β²m), so the expected number of irrelevant positive features in Mβ is at most (N − K) exp(−2β²m). [sent-217, score-0.459]

45 So the events that various irrelevant variables are included in Mβ are independent. [sent-219, score-0.578]

46 Applying (5) with E(U) = (N − K) exp(−2β²m) gives that, with probability at least 1 − δ, the number of irrelevant positive features in Mβ is at most 4(N − K) exp(−2β²m) + 3 ln(1/δ). [sent-220, score-0.499]

47 A symmetric analysis establishes the same bound on the number of negative irrelevant features in Mβ . [sent-221, score-0.458]

48 Proof: For a particular relevant feature to be excluded from Mβ , Algorithm A must underestimate the probability that the relevant feature equals the class label by at least γ − β. [sent-224, score-0.678]

49 Applying (3), this happens with probability at most exp(−2(γ − β)²m), so the expected number of relevant variables excluded from Mβ is at most K exp(−2(γ − β)²m). [sent-225, score-0.526]

50 Proof: Combining Lemmas 4 and 6 with the upper bound of N on the number of features in Mβ, as in Lemma 7’s proof, gives the following error bound on Mβ for any δ > 0: exp(−2γ²(K − 8K exp(−2(γ − β)²m) − 6 ln(1/δ))²/N) + 2δ. [sent-241, score-0.467]
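Plugging sample values into the bound just displayed (the display is our reconstruction of a garbled formula, so treat the constants as approximate; the parameter values below are made up and simply show the regime where the 2δ term dominates):

```python
import math

def error_bound(N, K, gamma, beta, m, delta):
    # Effective number of relevant features surviving the beta threshold.
    k_eff = K - 8 * K * math.exp(-2 * (gamma - beta)**2 * m) - 6 * math.log(1 / delta)
    if k_eff <= 0:
        return 1.0                     # the bound is vacuous in this regime
    return math.exp(-2 * gamma**2 * k_eff**2 / N) + 2 * delta

print(error_bound(N=10**5, K=10**4, gamma=0.1, beta=0.05, m=2000, delta=0.01))  # ~0.02
```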

51 When m ≤ ln(32)/(2(1 − c)²γ²) (12), this term is at least 3/4 − o(1), and thus its square is at least 1/2 for small enough γ, completing the proof of the first part of the theorem. [sent-245, score-0.551]

52 Lemma 9 The expected number of irrelevant variables in Mβ is at least (N − K) exp(−16β²m). [sent-248, score-0.577]

53 Thus, in this situation, “inclusive” algorithms relying on many irrelevant variables have error rates going to zero while every “exclusive” algorithm has an error rate bounded below by a constant. [sent-258, score-0.635]

54 The proofs in this section assume that all relevant variables are positively correlated with the class designation, so each relevant variable agrees with the class designation with probability 1/2 + γ. [sent-259, score-1.118]

55 Definition 12 We say that an algorithm A is λ-exclusive if, for every positive N, K, γ, and m, the expected fraction of the variables included in its hypothesis that are relevant is at least λ, that is, E[|V(A(S)) ∩ R| / |V(A(S))|] ≥ λ. [sent-282, score-0.603]
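Definition 12 in code form, as a hypothetical helper (the names are ours): the quantity whose expectation the definition constrains is

```python
def relevant_fraction(model_vars, relevant_vars):
    """|V(A(S)) ∩ R| / |V(A(S))| for one training sample S."""
    model_vars, relevant_vars = set(model_vars), set(relevant_vars)
    return len(model_vars & relevant_vars) / max(len(model_vars), 1)

# An algorithm A is lambda-exclusive when E[relevant_fraction(...)] >= lambda
# over the random draw of the training sample S.
```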

56 The assumption that each relevant variable agrees with the class label with probability 1/2 + γ gives a special case of the generative model described in Section 4, so the lower bounds proven here also apply to that more general setting. [sent-285, score-0.695]

57 On the other hand, (14) implies that if the algorithm restricts itself to variables with empirical edges greater than β∗ then it does not include enough relevant variables to be accurate. [sent-298, score-0.713]

58 The proof must show that arbitrary algorithms frequently include either too many irrelevant variables to be λ-exclusive or too few relevant ones to be accurate. [sent-299, score-0.782]

59 Following Ehrenfeucht et al. (1989), we will assume that the K relevant variables are randomly selected from the N variables, and lower bound the error with respect to this random choice, along with the training and test data. [sent-302, score-0.634]

60 This will then imply that, for each algorithm, there will be a choice of the K relevant variables giving the same lower bound with respect only to the random choice of the training and test data. [sent-303, score-0.585]

61 We will always use relevant variables that are positively associated with the class label, agreeing with it with probability 1/2 + γ. [sent-304, score-0.692]

62 Before attacking the two parts of the proof alluded to above, we need a subsection providing some basic results about relevant variables and optimal algorithms. [sent-317, score-0.485]

63 Lemma 15 If γ ∈ [0, 1/5] then any classifier using k relevant variables has an error probability at least (1/4) exp(−5γ²k). [sent-321, score-0.615]

64 Our next lemma shows that, given a sample, the probability that a variable is relevant (positively correlated with the class label) is monotonically increasing in its empirical edge. [sent-325, score-0.51]

65 So, let us fix the values of the class labels, and evaluate probabilities only with respect to the random choice of the relevant variables R and the values of the variables. [sent-329, score-0.476]

66 Because, in this lower bound proof, relevant variables are always positively associated with the class label, we will use a variant of Mβ which only considers positive features. [sent-345, score-0.669]

67 Definition 17 Let Vβ be a vote over the variables with empirical edge at least β. [sent-346, score-0.517]

68 We now establish lower bounds on the probability of variables being included in Vβ (here β can be a function of γ, but does not depend on the particular sample S). [sent-348, score-0.471]

69 Lemma 18 If γ ≤ 1/8 and β ≥ 0 then the probability that a given variable has empirical edge at least β is at least (1/5) exp(−16β²m). [sent-349, score-0.508]

70 If in addition m ≥ 1/β², then the probability that a given variable has empirical edge at least β is at least (1/(7β√m)) exp(−2β²m) − 1/√m. [sent-350, score-0.508]

71 Proof: Since relevant variables agree with the class label with probability 1/2 + γ, the probability that a relevant variable has empirical edge at least β is lower bounded by the probability that an irrelevant variable has empirical edge at least β. [sent-351, score-2.03]

72 An irrelevant variable has empirical edge at least β only when it agrees with the class on at least a 1/2 + β fraction of the sample. [sent-352, score-0.649]

73 We now upper bound the probability of a relevant variable being included in Vβ , again for β that does not depend on S. [sent-355, score-0.48]

74 Lemma 19 If β ≥ γ, the probability that a given relevant variable has empirical edge at least β is at most exp(−2(β − γ)²m). [sent-356, score-0.543]

75 Proof: Use (3) to bound the probability that a relevant feature agrees with the class label β − γ more often than its expected fraction of times. [sent-357, score-0.668]

76 Bounding λ-Exclusiveness: Recall that n(S) is the number of variables used by A(S), and β(S) is the edge of the variable whose rank, when the variables are ordered by their empirical edges, is n(S). [sent-359, score-0.697]

77 Suppose, given the training set S, the variables are sorted in decreasing order of empirical edge (breaking ties arbitrarily, say using the variable index). [sent-362, score-0.484]

78 Since for each sample S and each variable xi, the probability P[xi relevant | S] decreases as the empirical edge of xi decreases (Lemma 16), the expectation E[|VS,k ∩ R| / |VS,k| given S] is non-increasing with k. [sent-364, score-0.668]

79 The number of relevant variables in Vβ∗ has a binomial distribution with parameters K and p where p < prel . [sent-372, score-0.73]

80 Since σ < √(Kp) < √(K prel) by (16), we have σ√(K prel) < K prel (18). [sent-375, score-0.717]

81 Substituting the values of K and prel into the square root yields √(K prel) > σ/√γ for small enough γ. [sent-376, score-0.724]

82 Lemma 18 shows that, for each variable, the probability of the variable having empirical edge at least β* is at least (1/(7β*√m)) exp(−2β*²m) − 1/√m, which for sufficiently small γ is greater than (1/(2 ln(1/γ)^(1/4))) exp(−(2√b/(25b)) ln(1/γ)^(1/2)). [sent-379, score-0.714]

83 Since the empirical edges of different variables are independent, the probability that at least n variables have empirical edge at least β* is lower bounded by the probability of at least n successes from the binomial distribution with parameters N and pirrel, where pirrel = (1/(2 ln(1/γ)^(1/4))) exp(−(2√b/(25b)) ln(1/γ)^(1/2)). [sent-380, score-1.392]

84 Therefore applying the Chebyshev bound (17) with a = 1/√γ gives (for sufficiently small γ) P[|Vβ*| < N pirrel − σ/√γ] < γ. [sent-382, score-0.529]

85 Large Error: Call a variable good if it is relevant and its empirical edge is at least β* in the sample. [sent-388, score-0.455]

86 By Chebyshev’s inequality, we have P[# good vars ≥ Kp + a√(Kp)] ≤ P[# good vars ≥ Kp + a√(Kp(1 − p))] ≤ 1/a², and setting a = √(Kp), this gives P[# good vars ≥ 2Kp] ≤ 1/(Kp). By Lemma 19, Kp ≤ K exp(−2(β* − γ)²m) = K exp(−2b ln(1/γ)^(1/2)), so ln(Kp) ≤ ln K − 2b ln(1/γ)^(1/2). [sent-392, score-0.565]

87 Assume that k of the variables agree with the label with probability 1/2 + γ, and the remaining n − k agree with the label with probability 1/2. [sent-407, score-0.912]

88 Then, applying a Chernoff-Hoeffding bound for such sets of random variables due to Pemmaraju (2001), if r ≤ n/2, one gets a bound of c(r + 1) exp(−2γ²k²/n) on the probability of error. [sent-409, score-0.625]

89 Variables with Different Strengths (Section 6.2): We have previously assumed that all relevant variables are equally strongly associated with the class label. [sent-411, score-0.476]

90 Thus relevant variables agree with the class label with probability at least 1/2 + γmin and misleading variables agree with the class label with probability at least 1/2 − γmax . [sent-413, score-1.63]

91 Using the 1/2 + γmin and 1/2 − γmax underestimates on the probability that relevant variables and misleading variables agree with the class label leads to an analog of Theorem 1. [sent-415, score-1.207]

92 This analog says that models voting n variables, k of which are relevant and ℓ of which are misleading, have error probabilities bounded by exp(−2[γmin k − γmax ℓ]²/n). [sent-416, score-0.461]

93 N ECESSITY OF I RRELEVANT VARIABLES We can also use the upper and lower bounds on association to get high-confidence bounds (like those of Lemmas 4 and 6) on the numbers of relevant and misleading features in models Mβ . [sent-417, score-0.595]

94 A more sophisticated analysis keeping better track of the degree of association between relevant variables and the class label may produce better bounds. [sent-422, score-0.62]

95 In addition, if the variables have varying strengths then it makes sense to consider classifiers that assign different voting weights to the variables based on their estimated strength of association with the class label. [sent-423, score-0.637]

96 Conclusions: We analyzed learning when there are few examples, a small fraction of the variables are relevant, and the relevant variables are only weakly correlated with the class label. [sent-426, score-0.83]

97 In this situation, algorithms that produce hypotheses consisting predominately of irrelevant variables can be highly accurate (with error rates going to 0). [sent-427, score-0.685]

98 Furthermore, this inclusion of many irrelevant variables is essential. [sent-428, score-0.537]

99 Any algorithm limiting the expected fraction of irrelevant variables in its hypotheses has an error rate bounded below by a constant. [sent-429, score-0.732]

100 Proof of (5): Using (4) with η = 3 + 3 ln(1/δ)/E(U) gives P[U > 4E(U) + 3 ln(1/δ)] < exp(−(4E(U) + 3 ln(1/δ)) ln((4 + 3 ln(1/δ)/E(U))/e)) ≤ exp(−3 ln(1/δ) ln(4/e)) < exp(−ln(1/δ)) = δ, using the fact that ln(4/e) ≈ 0.39 > 1/3. [sent-442, score-1.641]
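The key numerical fact in this chain is that 3 ln(4/e) > 1. Below is a one-line check, plus a crude simulation of (5) itself with U an assumed Bernoulli sum (our illustrative choice of distribution):

```python
import math
import numpy as np

print(3 * math.log(4 / math.e))                   # ~1.16 > 1, so the last step holds

rng = np.random.default_rng(0)
delta, mu = 0.05, 5.0
U = rng.binomial(100, mu / 100, size=100_000)     # E(U) = mu
tail = (U > 4 * mu + 3 * math.log(1 / delta)).mean()
print(tail, "<", delta)                           # (5) holds with lots of room
```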


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('ln', 0.424), ('irrelevant', 0.297), ('variables', 0.24), ('elmbold', 0.239), ('prel', 0.239), ('ecessity', 0.221), ('pirrel', 0.221), ('rrelevant', 0.221), ('relevant', 0.198), ('agree', 0.132), ('misleading', 0.124), ('exp', 0.123), ('edge', 0.116), ('label', 0.116), ('ong', 0.095), ('donoho', 0.092), ('designation', 0.092), ('probability', 0.088), ('bound', 0.087), ('vote', 0.086), ('negation', 0.085), ('fraction', 0.084), ('features', 0.074), ('positively', 0.073), ('goes', 0.071), ('bounds', 0.069), ('variable', 0.066), ('inclusive', 0.065), ('ke', 0.065), ('exclusive', 0.062), ('hypotheses', 0.062), ('helmbold', 0.061), ('si', 0.06), ('voting', 0.06), ('agrees', 0.057), ('votes', 0.057), ('ui', 0.055), ('agreeing', 0.055), ('eln', 0.055), ('motwani', 0.055), ('slud', 0.055), ('xi', 0.055), ('lemma', 0.055), ('binomial', 0.053), ('conditionally', 0.052), ('jin', 0.05), ('error', 0.049), ('pnas', 0.047), ('vars', 0.047), ('proof', 0.047), ('theorem', 0.045), ('naive', 0.044), ('classi', 0.044), ('boolean', 0.044), ('chebyshev', 0.043), ('er', 0.042), ('tail', 0.041), ('included', 0.041), ('annals', 0.041), ('boosting', 0.04), ('least', 0.04), ('bayes', 0.04), ('majority', 0.04), ('class', 0.038), ('abramovich', 0.037), ('matou', 0.037), ('nec', 0.037), ('predominately', 0.037), ('voters', 0.037), ('def', 0.037), ('bagging', 0.037), ('empirical', 0.035), ('conditioning', 0.034), ('tell', 0.033), ('lower', 0.033), ('ers', 0.032), ('lemmas', 0.032), ('capital', 0.031), ('cruz', 0.031), ('kp', 0.031), ('shrunken', 0.031), ('analogs', 0.031), ('analog', 0.031), ('strengths', 0.031), ('haussler', 0.031), ('corollary', 0.031), ('completes', 0.031), ('independence', 0.031), ('effects', 0.031), ('correlated', 0.03), ('generative', 0.03), ('anthony', 0.03), ('duda', 0.028), ('odd', 0.028), ('vacuous', 0.028), ('association', 0.028), ('mi', 0.028), ('continuing', 0.028), ('incorrect', 0.027), ('training', 0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999905 82 jmlr-2012-On the Necessity of Irrelevant Variables

Author: David P. Helmbold, Philip M. Long

Abstract: This work explores the effects of relevant and irrelevant boolean variables on the accuracy of classifiers. The analysis uses the assumption that the variables are conditionally independent given the class, and focuses on a natural family of learning algorithms for such sources when the relevant variables have a small advantage over random guessing. The main result is that algorithms relying predominately on irrelevant variables have error probabilities that quickly go to 0 in situations where algorithms that limit the use of irrelevant variables have errors bounded below by a positive constant. We also show that accurate learning is possible even when there are so few examples that one cannot determine with high confidence whether or not any individual variable is relevant. Keywords: feature selection, generalization, learning theory

2 0.15311874 80 jmlr-2012-On Ranking and Generalization Bounds

Author: Wojciech Rejchel

Abstract: The problem of ranking is to predict or to guess the ordering between objects on the basis of their observed features. In this paper we consider ranking estimators that minimize the empirical convex risk. We prove generalization bounds for the excess risk of such estimators with rates that are faster than 1/√n. We apply our results to commonly used ranking algorithms, for instance boosting or support vector machines. Moreover, we study the performance of considered estimators on real data sets. Keywords: convex risk minimization, excess risk, support vector machine, empirical process, U-process

3 0.143802 97 jmlr-2012-Regularization Techniques for Learning with Matrices

Author: Sham M. Kakade, Shai Shalev-Shwartz, Ambuj Tewari

Abstract: There is growing body of learning problems for which it is natural to organize the parameters into a matrix. As a result, it becomes easy to impose sophisticated prior knowledge by appropriately regularizing the parameters under some matrix norm. This work describes and analyzes a systematic method for constructing such matrix-based regularization techniques. In particular, we focus on how the underlying statistical properties of a given problem can help us decide which regularization function is appropriate. Our methodology is based on a known duality phenomenon: a function is strongly convex with respect to some norm if and only if its conjugate function is strongly smooth with respect to the dual norm. This result has already been found to be a key component in deriving and analyzing several learning algorithms. We demonstrate the potential of this framework by deriving novel generalization and regret bounds for multi-task learning, multi-class learning, and multiple kernel learning. Keywords: regularization, strong convexity, regret bounds, generalization bounds, multi-task learning, multi-class learning, multiple kernel learning

4 0.11460425 13 jmlr-2012-Active Learning via Perfect Selective Classification

Author: Ran El-Yaniv, Yair Wiener

Abstract: We discover a strong relation between two known learning models: stream-based active learning and perfect selective classification (an extreme case of ‘classification with a reject option’). For these models, restricted to the realizable case, we show a reduction of active learning to selective classification that preserves fast rates. Applying this reduction to recent results for selective classification, we derive exponential target-independent label complexity speedup for actively learning general (non-homogeneous) linear classifiers when the data distribution is an arbitrary high dimensional mixture of Gaussians. Finally, we study the relation between the proposed technique and existing label complexity measures, including teaching dimension and disagreement coefficient. Keywords: classification with a reject option, perfect classification, selective classification, active learning, selective sampling, disagreement coefficient, teaching dimension, exploration vs. exploitation
Among these studies, the earlier theoretical results (Seung et al., 1992; Freund et al., 1993, 1997; Fine et al., 2002; Gilad-Bachrach, 2007) considered Bayesian settings and studied the speedup obtained by the Query by Committee (QBC) algorithm. The more recent results provided PAC style analyses (Dasgupta et al., 2009; Hanneke, 2007a, 2009). Lack of positive results for other non-toy problems, as well as various additional negative results that were discovered, led some researchers to believe that active learning is not necessarily advantageous in general. Among the striking negative results is Dasgupta’s negative example for actively learning general (non-homogeneous) linear classifiers (even in two dimensions) under the uniform distribution over the sphere (Dasgupta, 2005). A number of recent innovative papers proposed alternative models for active learning. Balcan et al. (2008) introduced a subtle modification of the traditional label complexity definition, which opened up avenues for new positive results. According to their new definition of “non-verifiable” label complexity, the active learner is not required to know when to stop the learning process with a guaranteed ε-good classifier. Their main result, under this definition, is that active learning is asymptotically better than passive learning in the sense that only o(1/ε) labels are required for actively learning an ε-good classifier from a concept class that has a finite VC-dimension. Another result they accomplished is an exponential label complexity speedup for (non-verifiable) active learning of non-homogeneous linear classifiers under the uniform distribution over the the unit sphere. Based on Hanneke’s characterization of active learning in terms of the “disagreement coefficient” (Hanneke, 2007a), Friedman (2009) recently extended the Balcan et al. results and proved that a target-dependent exponential speedup can be asymptotically achieved for a wide range of “smooth” learning problems (in particular, the hypothesis class, the instance space and the distribution should all be expressible by smooth functions). He proved that under such smoothness conditions, for any target hypothesis h∗ , Hanneke’s disagreement coefficient is bounded above in terms of a constant c(h∗ ) that depends on the unknown target hypothesis h∗ (and is independent of δ and ε). The resulting label complexity is O (c(h∗ ) d polylog(d/ε)) (Hanneke, 2011b). This is a very general result but the target-dependent constant involved in this bound is only guaranteed to be finite. With this impressive progress in the case of target-dependent bounds for active learning, the current state of affairs in the target-independent bounds for active learning arena leaves much to be desired. To date the most advanced result in this model, which was already essentially established by Seung et al. and Freund et al. more than fifteen years ago (Seung et al., 1992; Freund et al., 1993, 1997), is still a target-independent exponential speed up bound for homogeneous linear classifiers under the uniform distribution over the sphere. The other learning model we contemplate that will be shown to have strong ties to active learning, is selective classification, which is mainly known in the literature as ‘classification with a reject option.’ This old-timer model, that was already introduced more than fifty years ago (Chow, 1957, 1970), extends standard supervised learning by allowing the classifier to opt out from predictions in cases where it is not confident. 
The incentive is to increase classification reliability over instances that are not rejected by the classifier. Thus, using selective classification one can potentially achieve 256 ACTIVE L EARNING VIA P ERFECT S ELECTIVE C LASSIFICATION a lower error rate using the same labeling “budget.” The main quantities that characterize a selective classifier are its (true) error and coverage rate (or its complement, the rejection rate). There is already substantial volume of research publications on selective classification, that kept emerging through the years. The main theme in many of these publications is the implementation of certain reject mechanisms for specific learning algorithms like support vector machines and neural networks. Among the few theoretical studies on selective classification, there are various excess risk bounds for ERM learning (Herbei and Wegkamp, 2006; Bartlett and Wegkamp, 2008; Wegkamp, 2007), and certain coverage/risk guarantees for selective ensemble methods (Freund et al., 2004). In a recent work (El-Yaniv and Wiener, 2010) the trade-off between error and coverage was examined and in particular, a new extreme case of selective learning was introduced. In this extreme case, termed here “perfect selective classification,” the classifier is given m labeled examples and is required to instantly output a classifier whose true error is perfectly zero with certainty. This is of course potentially doable only if the classifier rejects a sufficient portion of the instance space. A non-trivial result for perfect selective classification is a high probability lower bound on the classifier coverage (or equivalently, an upper bound on its rejection rate). Such bounds have recently been presented in El-Yaniv and Wiener (2010). In Section 3 we present a reduction of active learning to perfect selective classification that preserves “fast rates.” This reduction enables the luxury of analyzing dynamic active learning problems as static problems. Relying on a recent result on perfect selective classification from El-Yaniv and Wiener (2010), in Section 4 we then apply our reduction and conclude that general (non-homogeneous) linear classifiers are actively learnable at exponential (in 1/ε) label complexity rate when the data distribution is an arbitrary unknown finite mixture of high dimensional Gaussians. While we obtain exponential label complexity speedup in 1/ε, we incur exponential slowdown in d 2 , where d is the problem dimension. Nevertheless, in Section 5 we prove a lower bound of Ω((log m)(d−1)/2 (1 + o(1)) on the label complexity, when considering the class of unrestricted linear classifiers under a Gaussian distribution. Thus, an exponential slowdown in d is unavoidable in such settings. Finally, in Section 6 we relate the proposed technique to other complexity measures for active learning. Proving and using a relation to the teaching dimension (Goldman and Kearns, 1995) we show, by relying on a known bound for the teaching dimension, that perfect selective classification with meaningful coverage can be achieved for the case of axis-aligned rectangles under a product distribution. We then focus on Hanneke’s disagreement coefficient and show that the coverage of perfect selective classification can be bounded below using the disagreement coefficient. Conversely, we show that the disagreement coefficient can be bounded above using any coverage bound for perfect selective classification. 
Consequently, the results here imply that the disagreement coefficient can be sufficiently bounded to ensure fast active learning for the case of linear classifiers under a mixture of Gaussians. 2. Active Learning and Perfect Selective Classification In binary classification the goal is to learn an accurate binary classifier, h : X → {±1}, from a finite labeled training sample. Here X is some instance space and the standard assumption is that the training sample, Sm = {(xi , yi )}m , containing m labeled examples, is drawn i.i.d. from some i=1 unknown distribution P(X,Y ) defined over X × {±1}. The classifier h is chosen from some hypothesis class H . In this paper we focus on the realizable setting whereby labels are defined by 257 E L -YANIV AND W IENER some unknown target hypothesis h∗ ∈ H . Thus, the underlying distribution reduces to P(X). The performance of a classifier h is quantified by its true zero-one error, R(h) Pr{h(X) = h∗ (X)}. A positive result for a classification problem (H , P) is a learning algorithm that given an error target ε and a confidence parameter δ can output, based on Sm , an hypothesis h whose error R(h) ≤ ε, with probability of at least 1 − δ. A bound B(ε, δ) on the size m of labeled training sample sufficient for achieving this is called the sample complexity of the learning algorithm. A classical result is that any consistent learning algorithm has sample complexity of O( 1 (d log( 1 ) + log( 1 ))), where d is ε ε δ the VC-dimension of H (see, e.g., Anthony and Bartlett, 1999). 2.1 Active Learning We consider the following standard active learning model. In this model the learner sequentially observes unlabeled instances, x1 , x2 , . . ., that are sampled i.i.d. from P(X). After receiving each xi , the learning algorithm decides whether or not to request its label h∗ (xi ), where h∗ ∈ H is an unknown target hypothesis. Before the start of the game the algorithm is provided with some desired error rate ε and confidence level δ. We say that the learning algorithm actively learned the problem instance (H , P) if at some point it can terminate this process, after observing m instances and requesting k labels, and output an hypothesis h ∈ H whose error R(h) ≤ ε, with probability of at least 1 − δ. The quality of the algorithm is quantified by the number k of requested labels, which is called the label complexity. A positive result for a learning problem (H , P) is a learning algorithm that can actively learn this problem for any given ε and δ, and for every h∗ , with label complexity bounded above by L(ε, δ, h∗ ). If there is a label complexity bound that is O(polylog(1/ε)) we say that the problem is actively learnable at exponential rate. 2.2 Selective Classification Following the formulation in El-Yaniv and Wiener (2010) the goal in selective classification is to learn a pair of functions (h, g) from a labeled training sample Sm (as defined above for passive learning). The pair (h, g), which is called a selective classifier, consists of a binary classifier h ∈ H , and a selection function, g : X → {0, 1}, which qualifies the classifier h as follows. For any sample x ∈ X , the output of the selective classifier is (h, g)(x) h(x) iff g(x) = 1, and (h, g)(x) abstain iff g(x) = 0. Thus, the function g is a filter that determines a sub-domain of X over which the selective classifier will abstain from classifications. 
A selective classifier is thus characterized by its coverage, Φ(h, g) EP {g(x)}, which is the P-weighted volume of the sub-domain of X that is not filtered out, and its error, R(h, g) = E{I(h(X) = h∗ (X)) · g(X)}/Φ(h, g), which is the zero-one loss restricted to the covered sub-domain. Note that this is a “smooth” generalization of passive learning and, in particular, R(h, g) reduces to R(h) (standard classification) if g(x) ≡ 1. We expect to see a trade-off between R(h, g) and Φ(h, g) in the sense that smaller error should be obtained by compromising the coverage. A major issue in selective classification is how to optimally control this trade-off. In this paper we are concerned with an extreme case of this trade-off whereby (h, g) is required to achieve a perfect score of zero error with certainty. This extreme learning objective is termed perfect learning in El-Yaniv and Wiener (2010). Thus, for a perfect selective classifier (h, g) we always have R(h, g) = 0, and its quality is determined by its guaranteed coverage. A positive result for (perfect) selective classification problem (H , P) is a learning algorithm that uses a labeled training sample Sm (as in passive learning) to output a perfect selective classifier (h, g) for which Φ(h, g) ≥ BΦ (H , δ, m) with probability of at least 1 − δ, for any given δ. The bound 258 ACTIVE L EARNING VIA P ERFECT S ELECTIVE C LASSIFICATION BΦ = BΦ (H , δ, m) is called a coverage bound (or coverage rate) and its complement, 1 − BΦ , is called a rejection bound (or rate). A coverage rate BΦ = 1 − O( polylog(m) ) (and the corresponding m 1 − BΦ rejection rate) are qualified as fast. 2.3 The CAL Algorithm and the Consistent Selective Strategy (CSS) The major players in active learning and in perfect selective classification are the CAL algorithm and the consistent selective strategy (CSS), respectively. To define them we need the following definitions. Definition 1 (Version space, Mitchell, 1977) Given an hypothesis class H and a training sample Sm , the version space V SH ,Sm is the set of all hypotheses in H that classify Sm correctly. Definition 2 (Disagreement set, Hanneke, 2007a; El-Yaniv and Wiener, 2010) Let G ⊂ H . The disagreement set w.r.t. G is defined as DIS(G ) {x ∈ X : ∃h1 , h2 ∈ G The agreement set w.r.t. G is AGR(G ) s.t. h1 (x) = h2 (x)} . X \ DIS(G ). The main strategy for active learning in the realizable setting (Cohn et al., 1994) is to request labels only for instances belonging to the disagreement set and output any (consistent) hypothesis belonging to the version space. This strategy is often called the CAL algorithm. A related strategy for perfect selective classification was proposed in El-Yaniv and Wiener (2010) and termed consistent selective strategy (CSS). Given a training set Sm , CSS takes the classifier h to be any hypothesis in V SH ,Sm (i.e., a consistent learner), and takes a selection function g that equals one for all points in the agreement set with respect to V SH ,Sm , and zero otherwise. 3. From Coverage Bound to Label Complexity Bound In this section we present a reduction from stream-based active learning to perfect selective classification. Particularly, we show that if there exists for H a perfect selective classifier with a fast rejection rate of O(polylog(m)/m), then the CAL algorithm will actively learn H with exponential label complexity rate of O(polylog(1/ε)). Lemma 3 Let Sm = {(x1 , y1 ), . . . , (xm , ym )} be a sequence of m labeled samples drawn i.i.d. 
from an unknown distribution P(X) and let Si = {(x1 , y1 ), . . . , (xi , yi )} be the i-prefix of Sm . Then, with probability of at least 1 − δ over random choices of Sm , the following bound holds simultaneously for all i = 1, . . . , m − 1, Pr xi+1 ∈ DIS(V SH ,Si )|Si ≤ 1 − BΦ H , δ , 2⌊log2 (i)⌋ , log2 (m) where BΦ (H , δ, m) is a coverage bound for perfect selective classification with respect to hypothesis class H , confidence δ and sample size m . 259 E L -YANIV AND W IENER Proof For j = 1, . . . , m, abbreviate DIS j DIS(V SH ,S j ) and AGR j AGR(V SH ,S j ). By definition, DIS j = X \ AGR j . By the definitions of a coverage bound and agreement/disagreement sets, with probability of at least 1 − δ over random choices of S j BΦ (H , δ, j) ≤ Pr{x ∈ AGR j |S j } = Pr{x ∈ DIS j |S j } = 1 − Pr{x ∈ DIS j |S j }. Applying the union bound we conclude that the following inequality holds simultaneously with high probability for t = 0, . . . , ⌊log2 (m)⌋ − 1, Pr{x2t +1 ∈ DIS2t |S2t } ≤ 1 − BΦ H , δ , 2t . log2 (m) (1) For all j ≤ i, S j ⊆ Si , so DISi ⊆ DIS j . Therefore, since the samples in Sm are all drawn i.i.d., for any j ≤ i, Pr {xi+1 ∈ DISi |Si } ≤ Pr xi+1 ∈ DIS j |S j = Pr x j+1 ∈ DIS j |S j . The proof is complete by setting j = 2⌊log2 (i)⌋ ≤ i, and applying inequality (1). Lemma 4 (Bernstein’s inequality Hoeffding, 1963) Let X1 , . . . , Xn be independent zero-mean random variables. Suppose that |Xi | ≤ M almost surely, for all i. Then, for all positive t,   n 2 /2 t . Pr ∑ Xi > t ≤ exp − 2 + Mt/3 i=1 ∑E Xj Lemma 5 Let Zi , i = 1, . . . , m, be independent Bernoulli random variables with success probabilities pi . Then, for any 0 < δ < 1, with probability of at least 1 − δ, m ∑ (Zi − E{Zi }) ≤ 2 ln i=1 Proof Define Wi 1 2 1 ∑ pi + 3 ln δ . δ Zi − E{Zi } = Zi − pi . Clearly, E{Wi } = 0, |Wi | ≤ 1, E{Wi2 } = pi (1 − pi ). Applying Bernstein’s inequality (Lemma 4) on the Wi ,   n t 2 /2 t 2 /2  = exp − Pr ∑ Wi > t ≤ exp − ∑ pi (1 − pi ) + t/3 i=1 ∑ E W j2 + t/3 ≤ exp − t 2 /2 . ∑ pi + t/3 Equating the right-hand side to δ and solving for t, we have t 2 /2 1 = ln δ ∑ pi + t/3 ⇐⇒ 2 1 1 t 2 − t · ln − 2 ln ∑ pi = 0, 3 δ δ 260 ACTIVE L EARNING VIA P ERFECT S ELECTIVE C LASSIFICATION and the positive solution of this quadratic equation is t= 1 1 ln + 3 δ 1 21 1 2 1 ln + 2 ln ∑ pi < ln + 9 δ δ 3 δ 2 ln 1 pi . δ∑ Lemma 6 Let Z1 , Z2 , . . . , Zm be a high order Markov sequence of dependent binary random variables defined in the same probability space. Let X1 , X2 , . . . , Xm be a sequence of independent random variables such that, Pr {Zi = 1|Zi−1 , . . . , Z1 , Xi−1 , . . . , X1 } = Pr {Zi = 1|Xi−1 , . . . , X1 } . Define P1 Pr {Z1 = 1}, and for i = 2, . . . , m, Pi Pr {Zi = 1|Xi−1 , . . . , X1 } . Let b1 , b2 . . . bm be given constants independent of X1 , X2 , . . . , Xm .1 Assume that Pi ≤ bi simultaneously for all i with probability of at least 1 − δ/2, δ ∈ (0, 1). Then, with probability of at least 1 − δ, m m 2 2 2 ∑ Zi ≤ ∑ bi + 2 ln δ ∑ bi + 3 ln δ . i=1 i=1 We proceed with a direct proof of Lemma 6. An alternative proof of this lemma, using supermartingales, appears in Appendix B. Proof For i = 1, . . . , m, let Wi be binary random variables satisfying bi + I(Pi ≤ bi ) · (Pi − bi ) , Pi bi − Pi ,0 , Pr{Wi = 1|Zi = 0, Xi−1 , . . . , X1 } max 1 − Pi Pr{Wi = 1|Wi−1 , . . . ,W1 , Xi−1 , . . . , X1 } = Pr{Wi = 1|Xi−1 , . . . , X1 }. Pr{Wi = 1|Zi = 1, Xi−1 , . . . , X1 } We notice that Pr{Wi = 1|Xi−1 , . . . , X1 } = Pr{Wi = 1, Zi = 1|Xi−1 , . . . 
, X1 } + Pr{Wi = 1, Zi = 0|Xi−1 , . . . , X1 } = Pr{Wi = 1|Zi = 1, Xi−1 , . . . , X1 } Pr{Zi = 1|Xi−1 , . . . , X1 } + Pr{Wi = 1|Zi = 0, Xi−1 , . . . , X1 } Pr{Zi = 0|Xi−1 , . . . , X1 } = Pi + bi −Pii (1 − Pi ) = bi , Pi ≤ bi ; 1−P bi · Pi + 0 = bi , else. Pi Hence the distribution of each Wi is independent of Xi−1 , . . . , X1 , and the Wi are independent Bernoulli random variables with success probabilities bi . By construction if Pi ≤ bi then Pr{Wi = 1|Zi = 1} = X Pr{Wi = 1|Zi = 1, Xi−1 , . . . , X1 } = 1. 1. Precisely we require that each of the bi were selected before Xi are chosen 261 E L -YANIV AND W IENER By assumption Pi ≤ bi for all i simultaneously with probability of at least 1−δ/2. Therefore, Zi ≤ Wi simultaneously with probability of at least 1 − δ/2. We now apply Lemma 5 on the Wi . The proof is then completed using the union bound. Theorem 7 Let Sm be a sequence of m unlabeled samples drawn i.i.d. from an unknown distribution P. Then with probability of at least 1 − δ over choices of Sm , the number of label requests k by the CAL algorithm is bounded by k ≤ Ψ(H , δ, m) + where Ψ(H , δ, m) 2 2 2 2 ln Ψ(H , δ, m) + ln , δ 3 δ m ∑ i=1 1 − BΦ H , δ , 2⌊log2 (i)⌋ 2 log2 (m) and BΦ (H , δ, m) is a coverage bound for perfect selective classification with respect to hypothesis class H , confidence δ and sample size m . Proof According to CAL, the label of sample xi will be requested iff xi ∈ DIS(V SH ,Si−1 ). For i = 1, . . . , m, let Zi be binary random variables such that Zi 1 iff CAL requests a label for sample xi . Applying Lemma 3 we get that for all i = 2, . . . , m, with probability of at least 1 − δ/2 Pr{Zi = 1|Si−1 } = Pr xi ∈ DIS(V SH ,Si−1 )|Si−1 ≤ 1 − BΦ H , δ , 2⌊log2 (i−1)⌋ . 2 log2 (m) For i = 1, BΦ (H , δ, 1) = 0 and the above inequality trivially holds. An application of Lemma 6 on the variables Zi completes the proof. Theorem 7 states an upper bound on the label complexity expressed in terms of m, the size of the sample provided to CAL. This upper bound is very convenient for directly analyzing the active learning speedup relative to supervised learning. A standard label complexity upper bound, which depends on 1/ε, can be extracted using the following simple observation. Lemma 8 (Hanneke, 2009; Anthony and Bartlett, 1999) Let Sm be a sequence of m unlabeled samples drawn i.i.d. from an unknown distribution P. Let H be a hypothesis class whose finite VC dimension is d, and let ε and δ be given. If m≥ 4 2 12 d ln + ln , ε ε δ then, with probability of at least 1 − δ, CAL will output a classifier whose true error is at most ε. Proof Hanneke (2009) observed that since CAL requests a label whenever there is a disagreement in the version space, it is guaranteed that after processing m examples, CAL will output a classifier that is consistent with all the m examples introduced to it. Therefore, CAL is a consistent learner. A classical result (Anthony and Bartlett, 1999, Theorem 4.8) is that any consistent learner will achieve, with probability of at least 1 − δ, a true error not exceeding ε after observing at most 12 2 4 ε d ln ε + ln δ labeled examples. 262 ACTIVE L EARNING VIA P ERFECT S ELECTIVE C LASSIFICATION Theorem 9 Let H be a hypothesis class whose finite VC dimension is d. If the rejection rate of CSS polylog( m ) δ (see definition in Section 2.3) is O , then (H , P) is actively learnable with exponential m label complexity speedup. 
Proof Plugging this rejection rate into Ψ (defined in Theorem 7) we have,  m m polylog δ Ψ(H , δ, m) ∑ 1 − BΦ (H , , 2⌊log2 (i)⌋ ) = ∑ O  log2 (m) i i=1 i=1 Applying Lemma 41 we get Ψ(H , δ, m) = O polylog By Theorem 7, k = O polylog m δ m log(m) δ i log(m) δ  . . , and an application of Lemma 8 concludes the proof. 4. Label Complexity Bounding Technique and Its Applications In this section we present a novel technique for deriving target-independent label complexity bounds for active learning. The technique combines the reduction of Theorem 7 and a general datadependent coverage bound for selective classification from El-Yaniv and Wiener (2010). For some learning problems it is a straightforward technical exercise, involving VC-dimension calculations, to arrive with exponential label complexity bounds. We show a few applications of this technique resulting in both reproductions of known label complexity exponential rates as well as a new one. The following definitions (El-Yaniv and Wiener, 2010) are required for introducing the technique. Definition 10 (Version space compression set) For any hypothesis class H , let Sm be a labeled sample of m points inducing a version space V SH ,Sm . The version space compression set, S′ ⊆ Sm , ˆ ˆ is a smallest subset of Sm satisfying V SH ,Sm = V SH ,S′ . The (unique) number n = n(H , Sm ) = |S′ | is called the version space compression set size. Remark 11 Our ”version space compression set” is precisely Hanneke’s ”minimum specifying set” (Hanneke, 2007b) for f on U with respect to V , where, f = h∗ , U = Sm , V = H [Sm ] (see Definition 23). Definition 12 (Characterizing hypothesis) For any subset of hypotheses G ⊆ H , the characterizing hypothesis of G , denoted fG (x), is a binary hypothesis over X (not restricted to H ) obtaining positive values over the agreement set AGR(G ) (Definition 2), and zero otherwise. Definition 13 (Order-n characterizing set) For each n, let Σn be the set of all possible labeled samples of size n (all n-subsets, each with all 2n possible labelings). The order-n characterizing set of H , denoted Fn , is the set of all characterizing hypotheses fG (x), where G ⊆ H is a version space induced by some member of Σn . 263 E L -YANIV AND W IENER Definition 14 (Characterizing set complexity) Let Fn be the order-n characterizing set of H . The order-n characterizing set complexity of H , denoted γ (H , n), is the VC-dimension of Fn . The following theorem, credited to (El-Yaniv and Wiener, 2010, Theorem 21), is a powerful data-dependent coverage bound for perfect selective learning, expressed in terms of the version space compression set size and the characterizing set complexity. Theorem 15 (Data-dependent coverage guarantee) For any m, let a1 , a2 , . . . , am ∈ R be given, such that ai ≥ 0 and ∑m ai ≤ 1. Let (h, g) be perfect selective classifier (CSS, see Section 2.3). i=1 Then, R(h, g) = 0, and for any 0 ≤ δ ≤ 1, with probability of at least 1 − δ, Φ(h, g) ≥ 1 − 2 γ (H , n) ln+ ˆ m 2em 2 , + ln an δ γ (H , n) ˆ ˆ where n is the size of the version space compression set and γ (H , n) is the order-n characterizing ˆ ˆ ˆ set complexity of H . Given an hypothesis class H , our recipe to deriving active learning label complexity bounds for H is: (i) calculate both n and γ (H , n); (ii) apply Theorem 15, obtaining a bound BΦ for the ˆ ˆ coverage; (iii) plug BΦ in Theorem 7 to get a label complexity bound expressed as a summation; (iv) Apply Lemma 41 to obtain a label complexity bound in a closed form. 
4.1 Examples In the following example we derive a label complexity bound for the concept class of thresholds (linear separators in R). Although this is a toy example (for which an exponential rate is well known) it does exemplify the technique, and in many other cases the application of the technique is not much harder. Let H be the class of thresholds. We first show that the corresponding version space compression set size n ≤ 2. Assume w.l.o.g. that h∗ (x) I(x > w) for some w ∈ (0, 1). Let ˆ x− max{xi ∈ Sm |yi = −1} and x+ min(xi ∈ Sm |yi = +1). At least one of x− or x+ exist. Let ′ ′ Sm = {(x− , −1), (x+ , +1)}. Then V SH ,Sm = V SH ,Sm , and n = |Sm | ≤ 2. Now, γ (H , 2) = 2, because ˆ ′ the order-2 characterizing set of H is the class of intervals in R whose VC-dimension is 2. Plugging these numbers in Theorem 15, and using the assignment a1 = a2 = 1/2, BΦ (H , δ, m) = 1 − 2 4 ln (m/δ) 2 ln (em) + ln = 1−O . m δ m Next we plug BΦ in Theorem 7 obtaining a raw label complexity m Ψ(H , δ, m) = ∑ 1 − BΦ H , i=1 δ , 2⌊log2 (i)⌋ 2 log2 (m) m = ∑O i=1 ln (log2 (m) · i/δ) . i Finally, by applying Lemma 41, with a = 1 and b = log2 m/δ, we conclude that Ψ(H , δ, m) = O ln2 m δ . Thus, H is actively learnable with exponential speedup, and this result applies to any distribution. In Table 1 we summarize the n and γ (H , n) values we calculated for four other hypothesis classes. The ˆ ˆ 264 ACTIVE L EARNING VIA P ERFECT S ELECTIVE C LASSIFICATION Hypothesis class Distribution n ˆ γ (H , n) ˆ Linear separators in R Intervals in R Linear separators in R2 any any (target-dependent)2 any distribution on the unit circle (target-dependent)2 2 4 4 2 4 4 Linear separators in Rd Balanced axis-aligned rectangles in Rd mixture of Gaussians product distribution O (log m)d−1 /δ O (log (dm/δ)) O nd/2+1 ˆ O (d n log n) ˆ ˆ Table 1: The n and γ of various hypothesis spaces achieving exponential rates. ˆ last two cases are fully analyzed in Sections 4.2 and 6.1, respectively. For the other classes, where γ and n are constants, it is clear (Theorem 15) that exponential rates are obtained. We emphasize that ˆ the bounds for these two classes are target-dependent as they require that Sm include at least one sample from each class. 4.2 Linear Separators in Rd Under Mixture of Gaussians In this section we state and prove our main example, an exponential label complexity bound for linear classifiers in Rd . Theorem 16 Let H be the class of all linear binary classifiers in Rd , and let the underlying distribution be any mixture of a fixed number of Gaussians in Rd . Then, with probability of at least 1 − δ over choices of Sm , the number of label requests k by CAL is bounded by 2 k=O (log m)d +1 δ(d+3)/2 . Therefore by Lemma 8 we get k = O (poly(1/δ) · polylog(1/ε)) . Proof The following is a coverage bound for linear classifiers in d dimensions that holds in our setting with probability of at least 1 − δ (El-Yaniv and Wiener, 2010, Corollary 33),3 2 Φ(h, g) ≥ 1 − O 1 (log m)d · (d+3)/2 m δ . (2) 2. Target-dependent with at least one sample in each class. 3. This bound uses the fact that for linear classifiers in d dimensions n = O (log m)d−1 /δ (El-Yaniv and Wiener, 2010, ˆ Lemma 32), and that γ (H , n) = O nd/2+1 (El-Yaniv and Wiener, 2010, Lemma 27). ˆ ˆ 265 E L -YANIV AND W IENER Plugging this bound in Theorem 7 we obtain, Ψ(H , δ, m) = m ∑ i=1 1 − BΦ H , m = ∑O i=1 = O δ , 2⌊log2 (i)⌋ 2 log2 (m) 2 log2 (m) (log i)d · i δ log2 (m) δ d+3 2 d+3 2 m (log(i))d ·∑ i i=1 2 . 
5. Lower Bound on Label Complexity

In the previous section we derived an upper bound on the label complexity of CAL for various classifiers and distributions. In the case of linear classifiers in $\mathbb{R}^d$ we have shown an exponential speedup in terms of $1/\varepsilon$, but also an exponential slowdown in terms of the dimension $d$. In passive learning the dependency on the dimension is linear, while in our case (active learning using CAL) it is exponential. Is this an artifact of our bounding technique, or a fundamental phenomenon? To answer this question we derive an asymptotic lower bound on the label complexity. We show that the exponential dependency on $d$ is unavoidable (at least asymptotically) for every bounding technique when considering linear classifiers, even under a single isotropic Gaussian distribution. The argument rests on the observation that CAL must request a label for every point on the convex hull of the sample $S_m$. The bound is obtained using known results from probabilistic geometry that bound the first two moments of the number of vertices of a random polytope under the Gaussian distribution.

Definition 17 (Gaussian polytope) Let $X_1,\ldots,X_m$ be i.i.d. random points in $\mathbb{R}^d$ with common standard normal distribution (with zero mean and covariance matrix $\frac{1}{2}I_d$). A Gaussian polytope $P_m$ is the convex hull of these random points.

Denote by $f_k(P_m)$ the number of $k$-faces of the Gaussian polytope $P_m$. Note that $f_0(P_m)$ is the number of vertices of $P_m$. The following two theorems asymptotically bound the mean and variance of $f_k(P_m)$.

Theorem 18 (Hug et al., 2004, Theorem 1.1) Let $X_1,\ldots,X_m$ be i.i.d. random points in $\mathbb{R}^d$ with common standard normal distribution. Then

$$\mathbb{E}\, f_k(P_m) = c_{(k,d)}\,(\log m)^{\frac{d-1}{2}}\cdot(1+o(1)) \quad \text{as } m\to\infty,$$

where $c_{(k,d)}$ is a constant depending only on $k$ and $d$.

Theorem 19 (Hug and Reitzner, 2005, Theorem 1.1) Let $X_1,\ldots,X_m$ be i.i.d. random points in $\mathbb{R}^d$ with common standard normal distribution. Then there exists a positive constant $c_d$, depending only on the dimension, such that

$$\operatorname{Var}\left(f_k(P_m)\right) \le c_d\,(\log m)^{\frac{d-1}{2}}$$

for all $k \in \{0,\ldots,d-1\}$.

We can now use Chebyshev's inequality to lower bound the number of vertices $f_0(P_m)$ of $P_m$ with high probability.

Theorem 20 Let $X_1,\ldots,X_m$ be i.i.d. random points in $\mathbb{R}^d$ with common standard normal distribution, and let $\delta > 0$ be given. Then with probability of at least $1-\delta$,

$$f_0(P_m) \;\ge\; \left(c_d\,(\log m)^{\frac{d-1}{2}} - \frac{\tilde{c}_d}{\sqrt{\delta}}\,(\log m)^{\frac{d-1}{4}}\right)\cdot(1+o(1)) \quad \text{as } m\to\infty,$$

where $c_d$ and $\tilde{c}_d$ are constants depending only on $d$.

Proof Using Chebyshev's inequality (in the second inequality below), together with Theorem 19, we get

$$\Pr\left(f_0(P_m) > \mathbb{E} f_0(P_m) - t\right) = 1 - \Pr\left(f_0(P_m) \le \mathbb{E} f_0(P_m) - t\right) \ge 1 - \Pr\left(|f_0(P_m) - \mathbb{E} f_0(P_m)| \ge t\right) \ge 1 - \frac{\operatorname{Var}(f_0(P_m))}{t^2} \ge 1 - \frac{c_d}{t^2}(\log m)^{\frac{d-1}{2}}.$$

Equating the right-hand side to $1-\delta$ and solving for $t$ we get

$$t = \sqrt{\frac{c_d\,(\log m)^{\frac{d-1}{2}}}{\delta}}.$$

Applying Theorem 18 completes the proof.

Theorem 21 (Lower bound) Let $\mathcal{H}$ be the class of linear binary classifiers in $\mathbb{R}^d$, and let the underlying distribution be the standard normal distribution in $\mathbb{R}^d$. Then there exists a target hypothesis such that, with probability of at least $1-\delta$ over choices of $S_m$, the number of label requests $k$ by CAL satisfies

$$k \;\ge\; \frac{c_d}{2}\,(\log m)^{\frac{d-1}{2}}\cdot(1+o(1)) \quad \text{as } m\to\infty,$$

where $c_d$ is a constant depending only on $d$.

Proof Consider the Gaussian polytope $P_m$ induced by the random sample $S_m$. As long as all labels requested by CAL have the same value (the case of a minuscule minority class), every vertex of $P_m$ falls in the region of disagreement with respect to any subset of $S_m$ that does not include that specific vertex. Therefore, CAL must request a label for each vertex of $P_m$. For sufficiently large $m$, in particular whenever

$$(\log m)^{\frac{d-1}{4}} \;\ge\; \frac{2\tilde{c}_d}{c_d\sqrt{\delta}},$$

we conclude the proof by applying Theorem 20.
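The growth rate in Theorem 18 is easy to observe empirically. The following sketch (our own check, not part of the paper) counts convex-hull vertices of Gaussian samples with `scipy.spatial.ConvexHull` and prints them next to $(\log m)^{(d-1)/2}$; by Theorem 21, every CAL run must pay at least roughly this many label requests for exploration.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
d = 3
for m in (10**2, 10**3, 10**4):
    # average number of vertices f_0(P_m) over a few Gaussian samples
    counts = [len(ConvexHull(rng.standard_normal((m, d))).vertices)
              for _ in range(20)]
    print(m, round(float(np.mean(counts)), 1), round(np.log(m) ** ((d - 1) / 2), 1))
```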
6. Relation to Existing Label Complexity Measures

A number of complexity measures quantifying the achievable speedup in active learning have been proposed. In this section we show interesting relations between our techniques and two well-known measures, namely the teaching dimension (Goldman and Kearns, 1995) and the disagreement coefficient (Hanneke, 2009). Considering first the teaching dimension, we prove in Lemma 26 that the version space compression set size is bounded above, with high probability, by the extended teaching dimension growth function (introduced by Hanneke, 2007b). Consequently, it follows that perfect selective classification with meaningful coverage can be achieved for axis-aligned rectangles under a product distribution. We then focus on Hanneke's disagreement coefficient and show in Theorem 34 that the coverage of CSS can be bounded below using the disagreement coefficient. Conversely, in Corollary 39 we show that the disagreement coefficient can be bounded above using any coverage bound for CSS. Consequently, the results here imply that the disagreement coefficient $\theta(\varepsilon)$ grows slowly with $1/\varepsilon$ for linear classifiers under a mixture of Gaussians.

6.1 Teaching Dimension

The teaching dimension is a label complexity measure proposed by Goldman and Kearns (1995). The dimension of a hypothesis class $\mathcal{H}$ is the minimum number of examples that must be presented to any consistent learner in order to uniquely identify any hypothesis in the class. We now define the following variation of the extended teaching dimension (Hegedüs, 1995), due to Hanneke. Throughout we use the notation $h_1(S) = h_2(S)$ to denote the fact that the two hypotheses agree on the classification of all instances in $S$.

Definition 22 (Extended teaching dimension; Hegedüs, 1995; Hanneke, 2007b) Let $V \subseteq \mathcal{H}$, $m \ge 0$, $U \in \mathcal{X}^m$. For all $f \in \mathcal{H}$,

$$XTD(f,V,U) = \inf\left\{\, t \;\middle|\; \exists R \subseteq U : \left|\{h \in V : h(R) = f(R)\}\right| \le 1 \;\wedge\; |R| \le t \,\right\}.$$

Definition 23 (Hanneke, 2007b) For $V \subseteq \mathcal{H}$, $V[S_m]$ denotes any subset of $V$ such that

$$\forall h \in V, \quad \left|\{h' \in V[S_m] : h'(S_m) = h(S_m)\}\right| = 1.$$

Claim 24 Let $S_m$ be a sample of size $m$, $\mathcal{H}$ a hypothesis class, and $\hat{n} = \hat{n}(\mathcal{H},S_m)$ the version space compression set size. Then $XTD(h^*,\mathcal{H}[S_m],S_m) = \hat{n}$.

Proof Let $S_{\hat{n}} \subseteq S_m$ be a version space compression set. Assume, by contradiction, that there exist two hypotheses $h_1,h_2 \in \mathcal{H}[S_m]$, each of which agrees with the given classifications of all examples in $S_{\hat{n}}$. Then $h_1,h_2 \in VS_{\mathcal{H},S_{\hat{n}}}$, and by the definition of the version space compression set we get $h_1,h_2 \in VS_{\mathcal{H},S_m}$. Hence $\left|\{h \in \mathcal{H}[S_m] : h(S_m) = h^*(S_m)\}\right| \ge 2$, which contradicts Definition 23. Therefore,

$$\left|\{h \in \mathcal{H}[S_m] : h(S_{\hat{n}}) = h^*(S_{\hat{n}})\}\right| \le 1,$$

and $XTD(h^*,\mathcal{H}[S_m],S_m) \le |S_{\hat{n}}| = \hat{n}$.

Now let $R \subset S_m$ be any subset of size $|R| < \hat{n}$. Consequently, $VS_{\mathcal{H},S_m} \subset VS_{\mathcal{H},R}$, and there exists a hypothesis $h' \in VS_{\mathcal{H},R}$ that agrees with all labeled examples in $R$ but disagrees with at least one example in $S_m$. Thus $h'(S_m) \ne h^*(S_m)$, and according to Definition 23 there exist hypotheses $h_1,h_2 \in \mathcal{H}[S_m]$ such that $h_1(S_m) = h'(S_m) \ne h^*(S_m) = h_2(S_m)$. But $h_1(R) = h_2(R) = h^*(R)$, so

$$\left|\{h \in \mathcal{H}[S_m] : h(R) = h^*(R)\}\right| \ge 2.$$

It follows that $XTD(h^*,\mathcal{H}[S_m],S_m) \ge \hat{n}$.
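Claim 24 can be verified by brute force on tiny instances. The sketch below (our own toy setup, with hypothetical helper names) discretizes the thresholds class, computes the version space compression set size $\hat{n}$ by searching subsets of increasing size, computes $XTD(h^*,\mathcal{H}[S_m],S_m)$ the same way, and checks that the two coincide.

```python
from itertools import combinations

# Tiny discretized thresholds class: h_t(x) = 1 iff x > t, t on a grid.
thresholds = [i / 10 for i in range(11)]
def h(t, x): return 1 if x > t else 0

xs = [0.08, 0.23, 0.31, 0.55, 0.62, 0.91]
t_star = 0.45                          # target h*
ys = [h(t_star, x) for x in xs]
S = list(zip(xs, ys))

def version_space(points):
    return [t for t in thresholds if all(h(t, x) == y for x, y in points)]

VS_full = version_space(S)

# n_hat: smallest S' subset of S with VS(H, S') = VS(H, S).
n_hat = next(k for k in range(len(S) + 1)
             if any(version_space(list(sub)) == VS_full
                    for sub in combinations(S, k)))

# XTD(h*, H[S], S): smallest R leaving at most one labeling pattern of H
# on S that agrees with h* on R (patterns stand in for H[S] representatives).
patterns = {tuple(h(t, x) for x in xs) for t in thresholds}
def xtd():
    for k in range(len(S) + 1):
        for idx in combinations(range(len(S)), k):
            agree = [p for p in patterns if all(p[i] == ys[i] for i in idx)]
            if len(agree) <= 1:
                return k

print(n_hat, xtd())   # Claim 24 predicts equal values (here: 2 2)
```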
Definition 25 (XTD growth function; Hanneke, 2007b) For $m \ge 0$, $V \subseteq \mathcal{H}$, $\delta \in [0,1]$,

$$XTD(V,P,m,\delta) = \inf\left\{\, t \;\middle|\; \forall h \in \mathcal{H},\ \Pr\left\{XTD(h,V[S_m],S_m) > t\right\} \le \delta \,\right\}.$$

Lemma 26 Let $\mathcal{H}$ be a hypothesis class, $P$ an unknown distribution, and $\delta > 0$. Then, with probability of at least $1-\delta$,

$$\hat{n} \le XTD(\mathcal{H},P,m,\delta).$$

Proof According to Definition 25, with probability of at least $1-\delta$,

$$XTD(h^*,\mathcal{H}[S_m],S_m) \le XTD(\mathcal{H},P,m,\delta).$$

Applying Claim 24 completes the proof.

Lemma 27 (Balanced axis-aligned rectangles; Hanneke, 2007b, Lemma 4) If $P$ is a product distribution on $\mathbb{R}^d$ with continuous CDF, and $\mathcal{H}$ is the set of axis-aligned rectangles such that $\forall h \in \mathcal{H}$, $\Pr_{X\sim P}\{h(X) = +1\} \ge \lambda$, then

$$XTD(\mathcal{H},P,m,\delta) \le O\!\left(\frac{d^2}{\lambda}\log\frac{dm}{\delta}\right).$$

Lemma 28 (Blumer et al., 1989, Lemma 3.2.3) Let $F$ be a binary hypothesis class of finite VC-dimension $d \ge 1$. For all $k \ge 1$, define the $k$-fold union

$$F_k^{\cup} \triangleq \left\{\, \textstyle\bigcup_{i=1}^{k} f_i : f_i \in F,\ 1 \le i \le k \,\right\}.$$

Then, for all $k \ge 1$,

$$VC\!\left(F_k^{\cup}\right) \le 2dk\log_2(3k).$$

Lemma 29 (Order-$n$ characterizing set complexity) Let $\mathcal{H}$ be the class of axis-aligned rectangles in $\mathbb{R}^d$. Then

$$\gamma(\mathcal{H},n) \le O(dn\log n).$$

Proof Let $S_n = S_k^- \cup S_{n-k}^+$ be a sample of size $n$ composed of $k$ negative examples, $\{x_1,x_2,\ldots,x_k\}$, and $n-k$ positive ones. Let $\mathcal{H}$ be the class of axis-aligned rectangles. We define, for all $1 \le i \le k$,

$$R_i \triangleq S_{n-k}^+ \cup \{(x_i,-1)\}.$$

Notice that $VS_{\mathcal{H},R_i}$ includes all axis-aligned rectangles that classify all samples in $S^+$ as positive and $x_i$ as negative. Therefore, the agreement region of $VS_{\mathcal{H},R_i}$ is composed of two components, as depicted in Figure 1. The first component is the smallest rectangle that bounds the positive samples, and the second is an unbounded convex polytope defined by up to $d$ hyperplanes intersecting at $x_i$.

Let $AGR_i$ be the agreement region of $VS_{\mathcal{H},R_i}$ and $AGR$ the agreement region of $VS_{\mathcal{H},S_n}$. Clearly, $R_i \subseteq S_n$, so $VS_{\mathcal{H},S_n} \subseteq VS_{\mathcal{H},R_i}$ and $AGR_i \subseteq AGR$; it follows that $\bigcup_{i=1}^{k} AGR_i \subseteq AGR$. Assume, by contradiction, that $x \in AGR$ but $x \notin \bigcup_{i=1}^{k} AGR_i$. Then for every $1 \le i \le k$ there exist two hypotheses $h_1^{(i)},h_2^{(i)} \in VS_{\mathcal{H},R_i}$ such that $h_1^{(i)}(x) \ne h_2^{(i)}(x)$. Assume, without loss of generality, that $h_1^{(i)}(x) = 1$. We define

$$h_1 \triangleq \bigcap_{i=1}^{k} h_1^{(i)} \quad \text{and} \quad h_2 \triangleq \bigcap_{i=1}^{k} h_2^{(i)},$$

meaning that $h_1$ classifies a sample as positive if and only if all hypotheses $h_1^{(i)}$ classify it as positive. Noting that the intersection of axis-aligned rectangles is itself an axis-aligned rectangle, we know that $h_1,h_2 \in \mathcal{H}$. Moreover, for every $x_i$ we have $h_1^{(i)}(x_i) = h_2^{(i)}(x_i) = -1$, so also $h_1(x_i) = h_2(x_i) = -1$, and $h_1,h_2 \in VS_{\mathcal{H},S_n}$. But $h_1(x) \ne h_2(x)$. Contradiction. Therefore,

$$AGR = \bigcup_{i=1}^{k} AGR_i.$$

It is well known that the VC-dimension of hyper-rectangles in $\mathbb{R}^d$ is $2d$. The VC-dimension of $AGR_i$ is bounded by the VC-dimension of the union of two hyper-rectangles in $\mathbb{R}^d$, and the VC-dimension of $AGR$ is bounded by the VC-dimension of the union of all the $AGR_i$. Applying Lemma 28 twice we get

$$\operatorname{VCdim}\{AGR\} \le 42\,dk\log_2(3k) \le 42\,dn\log_2(3n).$$

If $k = 0$, the entire sample is positive and the region of agreement is a hyper-rectangle; therefore $\operatorname{VCdim}\{AGR\} = 2d$. If $k = n$, the entire sample is negative and the region of agreement consists of the sample points themselves; hence $\operatorname{VCdim}\{AGR\} = n$. Overall, in all cases,

$$\operatorname{VCdim}\{AGR\} \le 42\,dn\log_2(3n) = O(dn\log n).$$

[Figure 1: Agreement region of $VS_{\mathcal{H},R_i}$.]
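For axis-aligned rectangles the agreement region admits an exact membership test, which makes the decomposition $AGR = \bigcup_i AGR_i$ from the proof easy to check numerically. In the sketch below (our own illustration; the bounding-box test is specific to this class), a point is in the agreement region iff it lies in the minimal bounding box of the positives (all consistent rectangles label it positive) or the minimal box around the positives plus that point would swallow a negative example (no consistent rectangle can label it positive).

```python
# 2-D check that AGR equals the union of the AGR_i from Lemma 29's proof.
pos = [(0.40, 0.40), (0.55, 0.65), (0.70, 0.45)]   # S^+
neg = [(0.20, 0.80), (0.90, 0.20), (0.50, 0.05)]   # negative examples x_i

def bbox(points):
    xs, ys = zip(*points)
    return min(xs), max(xs), min(ys), max(ys)

def inside(p, box):
    x1, x2, y1, y2 = box
    return x1 <= p[0] <= x2 and y1 <= p[1] <= y2

def in_agr(p, negatives):
    if inside(p, bbox(pos)):                    # all consistent rectangles say '+'
        return True
    # no consistent rectangle can say '+': the minimal box around pos + {p}
    # would have to contain some negative example
    return any(inside(n, bbox(pos + [p])) for n in negatives)

grid = [(i / 50, j / 50) for i in range(51) for j in range(51)]
lhs = [in_agr(p, neg) for p in grid]                     # AGR
rhs = [any(in_agr(p, [n]) for n in neg) for p in grid]   # union of AGR_i
print(all(a == b for a, b in zip(lhs, rhs)))             # expected: True
```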
Corollary 30 (Balanced axis-aligned rectangles) Under the same conditions as Lemma 27, the class of balanced axis-aligned rectangles in $\mathbb{R}^d$ can be perfectly selectively learned with fast coverage rate.

Proof Applying Lemmas 26 and 27, we get that with probability of at least $1-\delta$,

$$\hat{n} \le O\!\left(\frac{d^2}{\lambda}\log\frac{dm}{\delta}\right).$$

Any balanced axis-aligned rectangle belongs to the class of all axis-aligned rectangles. Therefore, the coverage of CSS for the class of balanced axis-aligned rectangles is bounded below by its coverage for the class of all axis-aligned rectangles. Applying Lemma 29, and assuming $m \ge d$, we obtain

$$\gamma(\mathcal{H},\hat{n}) \le O\!\left(d\cdot\frac{d^2}{\lambda}\log\frac{dm}{\delta}\cdot\log\!\left(\frac{d^2}{\lambda}\log\frac{dm}{\delta}\right)\right) \le O\!\left(\frac{d^3}{\lambda}\log^2\frac{dm}{\lambda\delta}\right).$$

Applying Theorem 15 completes the proof.

6.2 Disagreement Coefficient

In this section we show interesting relations between the disagreement coefficient and coverage bounds in perfect selective classification. We begin by defining, for a hypothesis $h \in \mathcal{H}$, the set of all hypotheses that are $r$-close to $h$.

Definition 31 (Hanneke, 2011b, p. 337) For any hypothesis $h \in \mathcal{H}$, distribution $P$ over $\mathcal{X}$, and $r > 0$, define the set $B(h,r)$ of all hypotheses that reside in a ball of radius $r$ around $h$:

$$B(h,r) \triangleq \left\{ h' \in \mathcal{H} : \Pr_{X\sim P}\{h'(X) \ne h(X)\} \le r \right\}.$$

Theorem 32 (Vapnik and Chervonenkis, 1971; Anthony and Bartlett, 1999, p. 53) Let $\mathcal{H}$ be a hypothesis class with VC-dimension $d$. For any probability distribution $P$ on $\mathcal{X}\times\{\pm 1\}$, with probability of at least $1-\delta$ over the choice of $S_m$, any hypothesis $h \in \mathcal{H}$ consistent with $S_m$ satisfies

$$R(h) \le \eta(d,m,\delta) \triangleq \frac{2}{m}\left(d\ln\frac{2em}{d} + \ln\frac{2}{\delta}\right).$$

For any $G \subseteq \mathcal{H}$ and distribution $P$, we denote by $\Delta G$ the volume of the disagreement region of $G$:

$$\Delta G \triangleq \Pr\{DIS(G)\}.$$

Definition 33 (Disagreement coefficient, Hanneke, 2009) Let $\varepsilon \ge 0$. The disagreement coefficient of the hypothesis class $\mathcal{H}$ with respect to the target distribution $P$ is

$$\theta(\varepsilon) \triangleq \theta_{h^*}(\varepsilon) = \sup_{r > \varepsilon} \frac{\Delta B(h^*,r)}{r}.$$

The following theorem formulates an intimate relation between active learning (via the disagreement coefficient) and selective classification.

Theorem 34 Let $\mathcal{H}$ be a hypothesis class with VC-dimension $d$, $P$ an unknown distribution, $\varepsilon \ge 0$, and $\theta(\varepsilon)$ the corresponding disagreement coefficient. Let $(h,g)$ be a perfect selective classifier (CSS, see Section 2.3). Then $R(h,g) = 0$, and for any $0 \le \delta \le 1$, with probability of at least $1-\delta$,

$$\Phi(h,g) \ge 1 - \theta(\varepsilon)\cdot\max\{\eta(d,m,\delta),\varepsilon\}.$$

Proof Clearly $R(h,g) = 0$, and it remains to prove the coverage bound. By Theorem 32, with probability of at least $1-\delta$,

$$\forall h \in VS_{\mathcal{H},S_m}, \quad R(h) \le \eta(d,m,\delta) \le \max\{\eta(d,m,\delta),\varepsilon\}.$$

Therefore,

$$VS_{\mathcal{H},S_m} \subseteq B\!\left(h^*,\max\{\eta(d,m,\delta),\varepsilon\}\right) \quad \text{and} \quad \Delta VS_{\mathcal{H},S_m} \le \Delta B\!\left(h^*,\max\{\eta(d,m,\delta),\varepsilon\}\right).$$

By Definition 33, for any $r' > \varepsilon$, $\Delta B(h^*,r') \le \theta(\varepsilon)\,r'$. The proof is completed by recalling that $\Phi(h,g) = 1 - \Delta VS_{\mathcal{H},S_m}$.

Theorem 34 tells us that whenever our learning problem (specified by the pair $(\mathcal{H},P)$) has a disagreement coefficient that grows slowly with respect to $1/\varepsilon$, it can be (perfectly) selectively learned with a "fast" coverage bound. Consequently, through Theorem 9 we also know that whenever the disagreement coefficient grows slowly with respect to $1/\varepsilon$, active learning with a fast rate can be deduced directly through a reduction from perfect selective classification.
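Definition 33 is easy to instantiate by simulation. The following sketch (our own illustration) estimates $\Delta B(h^*,r)/r$ for the thresholds class under the uniform distribution on $[0,1]$, where $B(h^*,r)$ consists of thresholds within distance $r$ of the target, its disagreement region is the interval $(w-r,\,w+r)$, and the ratio, hence $\theta(\varepsilon)$, equals $2$.

```python
import random

random.seed(0)
w = 0.37   # target threshold, h*(x) = I(x > w)

def delta_B(r, n=200_000):
    # Monte-Carlo mass of the disagreement region of B(h*, r): for thresholds
    # under Uniform[0,1], a point x is in DIS(B(h*, r)) iff |x - w| < r.
    return sum(abs(random.random() - w) < r for _ in range(n)) / n

for r in (0.3, 0.1, 0.03, 0.01):
    print(r, round(delta_B(r) / r, 2))   # ratios hover around theta = 2
```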
It follows that, as far as fast rates in active learning are concerned, whatever can be accomplished by bounding the disagreement coefficient can also be accomplished using perfect selective classification. This result is summarized in the following corollary.

Corollary 35 Let $\mathcal{H}$ be a hypothesis class with VC-dimension $d$, $P$ an unknown distribution, and $\theta(\varepsilon)$ the corresponding disagreement coefficient. If $\theta(\varepsilon) = O(\operatorname{polylog}(1/\varepsilon))$, then there exists a coverage bound such that an application of Theorem 7 ensures that $(\mathcal{H},P)$ is actively learnable with exponential label complexity speedup.

Proof The proof follows by straightforward applications of Theorem 34, with $\varepsilon = 1/m$, and Theorem 9.

The following result, due to Hanneke (2011a), implies a coverage upper bound for CSS.

Lemma 36 (Hanneke, 2011a, proof of Lemma 47) Let $\mathcal{H}$ be a hypothesis class, $P$ an unknown distribution, and $r \in (0,1)$. Then

$$\mathbb{E}_P\,\Delta D_m \ge (1-r)^m\,\Delta B(h^*,r), \quad \text{where} \quad D_m \triangleq VS_{\mathcal{H},S_m} \cap B(h^*,r). \qquad (3)$$

Theorem 37 (Coverage upper bound) Let $\mathcal{H}$ be a hypothesis class, $P$ an unknown distribution, and $\delta \in (0,1)$. Then, for any $r \in (0,1)$ and $1 > \alpha > \delta$,

$$B_\Phi(\mathcal{H},\delta,m) \le 1 - \frac{(1-r)^m - \alpha}{1-\alpha}\,\Delta B(h^*,r),$$

where $B_\Phi(\mathcal{H},\delta,m)$ is any coverage bound.

Proof Recalling the definition of $D_m$ (3), clearly $D_m \subseteq VS_{\mathcal{H},S_m}$ and $D_m \subseteq B(h^*,r)$. These inclusions imply (respectively), by the definition of the disagreement set,

$$\Delta D_m \le \Delta VS_{\mathcal{H},S_m} \quad \text{and} \quad \Delta D_m \le \Delta B(h^*,r). \qquad (4)$$

Using Markov's inequality (in inequality (5) of the following derivation) and applying (4) (in equality (6)), we thus have

$$\Pr\left\{\Delta VS_{\mathcal{H},S_m} \le \frac{(1-r)^m-\alpha}{1-\alpha}\Delta B(h^*,r)\right\} \le \Pr\left\{\Delta D_m \le \frac{(1-r)^m-\alpha}{1-\alpha}\Delta B(h^*,r)\right\}$$
$$= \Pr\left\{\Delta B(h^*,r) - \Delta D_m \ge \frac{1-(1-r)^m}{1-\alpha}\Delta B(h^*,r)\right\} \le \Pr\left\{\left|\Delta B(h^*,r) - \Delta D_m\right| \ge \frac{1-(1-r)^m}{1-\alpha}\Delta B(h^*,r)\right\}$$
$$\le \frac{(1-\alpha)\,\mathbb{E}\left\{\left|\Delta B(h^*,r) - \Delta D_m\right|\right\}}{\left(1-(1-r)^m\right)\Delta B(h^*,r)} \qquad (5)$$
$$= \frac{(1-\alpha)\left(\Delta B(h^*,r) - \mathbb{E}\,\Delta D_m\right)}{\left(1-(1-r)^m\right)\Delta B(h^*,r)}. \qquad (6)$$

Applying Lemma 36 we therefore obtain

$$\le \frac{(1-\alpha)\left(\Delta B(h^*,r) - (1-r)^m\,\Delta B(h^*,r)\right)}{\left(1-(1-r)^m\right)\Delta B(h^*,r)} = 1-\alpha < 1-\delta.$$

Observing that any coverage bound satisfies $\Pr\left\{\Delta VS_{\mathcal{H},S_m} \le 1 - B_\Phi(\mathcal{H},\delta,m)\right\} \ge 1-\delta$ completes the proof.

Corollary 38 Let $\mathcal{H}$ be a hypothesis class, $P$ an unknown distribution, and $\delta \in (0,1/8)$. Then for any $m \ge 2$,

$$B_\Phi(\mathcal{H},\delta,m) \le 1 - \frac{1}{7}\,\Delta B\!\left(h^*,\frac{1}{m}\right),$$

where $B_\Phi(\mathcal{H},\delta,m)$ is any coverage bound.

Proof The proof is a straightforward application of Theorem 37 with $\alpha = 1/8$ and $r = 1/m$.

With Corollary 38 we can bound the disagreement coefficient in settings whose coverage bound is known.

Corollary 39 Let $\mathcal{H}$ be a hypothesis class, $P$ an unknown distribution, and $B_\Phi(\mathcal{H},\delta,m)$ a coverage bound. Then the disagreement coefficient is bounded by

$$\theta(\varepsilon) \le \max\left\{\sup_{r\in(\varepsilon,1/2)} \frac{7\left(1 - B_\Phi(\mathcal{H},1/9,\lfloor 1/r\rfloor)\right)}{r},\; 2\right\}.$$

Proof Applying Corollary 38, we get that for any $r \in (0,1/2)$,

$$\frac{\Delta B(h^*,r)}{r} \le \frac{\Delta B\!\left(h^*,1/\lfloor 1/r\rfloor\right)}{r} \le \frac{7\left(1 - B_\Phi(\mathcal{H},1/9,\lfloor 1/r\rfloor)\right)}{r}.$$

Therefore,

$$\theta(\varepsilon) = \sup_{r>\varepsilon}\frac{\Delta B(h^*,r)}{r} \le \max\left\{\sup_{r\in(\varepsilon,1/2)} \frac{7\left(1 - B_\Phi(\mathcal{H},1/9,\lfloor 1/r\rfloor)\right)}{r},\; 2\right\}.$$

Corollary 40 Let $\mathcal{H}$ be the class of all linear binary classifiers in $\mathbb{R}^d$, and let the underlying distribution be any mixture of a fixed number of Gaussians in $\mathbb{R}^d$. Then

$$\theta(\varepsilon) \le O\!\left(\operatorname{polylog}\!\left(\frac{1}{\varepsilon}\right)\right).$$

Proof Applying Corollary 39 together with inequality (2), we get

$$\theta(\varepsilon) \le \max\left\{\sup_{r\in(\varepsilon,1/2)} \frac{7\left(1 - B_\Phi(\mathcal{H},1/9,\lfloor 1/r\rfloor)\right)}{r},\; 2\right\} \le \max\left\{\sup_{r\in(\varepsilon,1/2)} \frac{7}{r}\cdot O\!\left(\frac{(\log\lfloor 1/r\rfloor)^{d^2}\cdot 9^{(d+3)/2}}{\lfloor 1/r\rfloor}\right),\; 2\right\} \le O\!\left(\left(\log\frac{1}{\varepsilon}\right)^{d^2}\right).$$
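Corollary 39 is entirely constructive: given any computable coverage bound, it yields a numeric bound on $\theta(\varepsilon)$. The sketch below (our own instantiation; the constants come from the thresholds-style coverage bound of Section 4.1) evaluates the corollary's supremum on a grid of $r$ values and shows that the resulting bound grows only polylogarithmically in $1/\varepsilon$, as required by Corollary 35.

```python
import math

def B_phi(delta, m):
    # Thresholds-style coverage bound from Section 4.1 (our constant choices):
    # B_Phi(H, delta, m) = 1 - (2/m)(2 ln(em) + ln(4/delta))
    return 1.0 - (2.0 / m) * (2 * math.log(math.e * m) + math.log(4 / delta))

def theta_upper(eps):
    # sup over r in (eps, 1/2) of 7 (1 - B_phi(1/9, floor(1/r))) / r, versus 2
    sup, r = 2.0, eps
    while r < 0.5:
        sup = max(sup, 7 * (1 - B_phi(1 / 9, int(1 / r))) / r)
        r *= 1.1
    return sup

for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    print(eps, round(theta_upper(eps), 1), round(math.log(1 / eps), 1))
```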
7. Concluding Remarks

For quite a few years since its inception, the theory of target-independent bounds for noise-free active learning managed to handle only relatively simple settings, mostly revolving around homogeneous linear classifiers under the uniform distribution over the sphere. It is likely that this distributional uniformity assumption was often adopted to simplify analyses. However, it was shown by Dasgupta (2005) that under this distribution exponential speedup cannot be achieved when considering general (non-homogeneous) linear classifiers. The reason for this behavior is related to the two tasks that a good active learner should successfully accomplish: exploration and exploitation. Intuitively (and oversimplifying things), exploration is the task of obtaining at least one sample from each class, and exploitation is the process of refining the decision boundary by requesting labels of points around the boundary. Dasgupta showed that exploration cannot be achieved fast enough under the uniform distribution on the sphere. The source of this difficulty is the fact that under this distribution all training points reside on their convex hull. In general, the speed of exploration (using linear classifiers) depends on the size (number of vertices) of the convex hull of the training set. When using homogeneous linear classifiers, exploration is trivially achieved (under the uniform distribution) and exploitation can achieve exponential speedup.

So why, in the non-verifiable model (Balcan et al., 2008), is it possible to achieve exponential speedup even when using non-homogeneous linear classifiers under the uniform distribution? The answer is that in the non-verifiable model, the label complexity attributed to exploration is encapsulated in a target-dependent "constant." Specifically, in Balcan et al. (2008) this constant is explicitly defined to be the probability mass of the minority class. Indeed, in certain noise-free settings using linear classifiers, where the minority class is large enough, exploration is a non-issue. In general, however, exploration is a major bottleneck in practical active learning (Baram et al., 2004; Begleiter et al., 2008). The present results show how exponential speedup can be achieved, exploration included, under different (and perhaps more natural) distributions.

With this good news, a somewhat pessimistic picture arises from the lower bound we obtained for the exponential dependency on the dimension $d$. This negative result is not restricted to stream-based active learning and readily applies to the pool-based model as well. While the bound is only asymptotic, we conjecture that it also holds for finite samples. Moreover, we believe that within the stream- or pool-based settings a similar statement should hold true for any active learning method (and not only CAL-based querying strategies). This result indicates that when performing noise-free active learning of linear classifiers, aggressive feature selection is beneficial for exploration speedup. We note, however, that it remains open whether a slowdown exponent of $d$ (rather than $d^2$) is achievable.

We have exposed interesting relations between the present technique and well-known complexity measures for active learning, namely the teaching dimension and the disagreement coefficient. These developments were facilitated by observations made by Hanneke on the teaching dimension and the disagreement coefficient.
These relations gave rise to further observations on active learning, which are discussed in Section 6 and include exponential speedup for balanced axis-aligned rectangles. Finally, we note that the intimate relation between selective classification and the disagreement coefficient was recently exposed in another result for selective classification, where the disagreement coefficient emerged as a dominating factor in a coverage bound for agnostic selective classification (El-Yaniv and Wiener, 2011).

Acknowledgments

We thank the anonymous reviewers for their good comments. This paper particularly benefited from insightful observations made by one of the reviewers, which are summarized in Section 6, including the proof of Theorem 37 and the link between our $\hat{n}$ and the extended teaching dimension (Lemmas 26 and 27).

Appendix A.

Lemma 41 For any $m \ge 3$, $a \ge 1$, $b \ge 1$,

$$\sum_{i=1}^{m}\frac{\ln^a(bi)}{i} < \frac{4}{a}\ln^{a+1}\!\left(b(m+1)\right).$$

Proof Setting $f(x) \triangleq \ln^a(bx)/x$, we have

$$\frac{df}{dx} = \left(a - \ln(bx)\right)\cdot\frac{\ln^{a-1}(bx)}{x^2}.$$

Therefore, $f$ is monotonically increasing for $x < e^a/b$, monotonically decreasing for $x \ge e^a/b$, and attains its maximum at $x = e^a/b$. Consequently, for $i < e^a/b - 1$ or $i \ge e^a/b + 1$,

$$f(i) \le \int_{x=i-1}^{i+1} f(x)\,dx.$$

For $e^a/b - 1 \le i < e^a/b + 1$,

$$f(i) \le f(e^a/b) = \frac{b}{e^a}\,a^a \le a^a. \qquad (7)$$

Therefore, if $m < e^a/b - 1$ we have

$$\sum_{i=1}^{m} f(i) = \ln^a(b) + \sum_{i=2}^{m} f(i) < 2\int_{x=1}^{m+1} f(x)\,dx \le \frac{2}{a+1}\ln^{a+1}\!\left(b(m+1)\right).$$

Otherwise, we overcome the change of slope by adding twice the (upper bound on the) maximal value (7):

$$\sum_{i=1}^{m} f(i) < \frac{2}{a+1}\ln^{a+1}\!\left(b(m+1)\right) + 2a^a = \frac{2}{a+1}\ln^{a+1}\!\left(b(m+1)\right) + \frac{2}{a}\,a^{a+1} \le \frac{2}{a+1}\ln^{a+1}\!\left(b(m+1)\right) + \frac{2}{a}\ln^{a+1}(bm) \le \frac{4}{a}\ln^{a+1}\!\left(b(m+1)\right).$$

Appendix B. Alternative Proof of Lemma 6 Using Super-Martingales

Define $W_k \triangleq \sum_{i=1}^{k}(Z_i - b_i)$. We assume that with probability of at least $1-\delta/2$,

$$\Pr\{Z_i \mid Z_1,\ldots,Z_{i-1}\} \le b_i,$$

simultaneously for all $i$. Since $Z_i$ is a binary random variable, it is easy to see that (w.h.p.)

$$\mathbb{E}_{Z_i}\{W_i \mid Z_1,\ldots,Z_{i-1}\} = \Pr\{Z_i \mid Z_1,\ldots,Z_{i-1}\} - b_i + W_{i-1} \le W_{i-1},$$

and the sequence $W_1^m \triangleq W_1,\ldots,W_m$ is a super-martingale with high probability. We apply the following theorem by McDiarmid, which refers to martingales (but can be shown to apply to super-martingales by following its original proof).

Theorem 42 (McDiarmid, 1998, Theorem 3.12) Let $Y_1,\ldots,Y_n$ be a martingale difference sequence with $-a_k \le Y_k \le 1-a_k$ for each $k$; let $A = \frac{1}{n}\sum a_k$. Then, for any $\varepsilon > 0$,

$$\Pr\left\{\sum Y_k \ge An\varepsilon\right\} \le \exp\left(-\left[(1+\varepsilon)\ln(1+\varepsilon) - \varepsilon\right]An\right) \le \exp\left(-\frac{An\varepsilon^2}{2(1+\varepsilon/3)}\right).$$

In our case, $Y_k = W_k - W_{k-1} = Z_k - b_k \le 1 - b_k$, and we apply the (revised) theorem with $a_k \triangleq b_k$ and $An \triangleq \sum b_k \triangleq B$. We thus obtain, for any $0 < \varepsilon < 1$,

$$\Pr\left\{\sum Z_k \ge B + B\varepsilon\right\} \le \exp\left(-\frac{B\varepsilon^2}{2(1+\varepsilon/3)}\right).$$

Equating the right-hand side to $\delta/2$ and solving for $\varepsilon$ (taking the positive root), we obtain

$$\varepsilon = \frac{\frac{2}{3}\ln\frac{2}{\delta} + \sqrt{\frac{4}{9}\ln^2\frac{2}{\delta} + 8B\ln\frac{2}{\delta}}}{2B} = \frac{\frac{1}{3}\ln\frac{2}{\delta} + \sqrt{\frac{1}{9}\ln^2\frac{2}{\delta} + 2B\ln\frac{2}{\delta}}}{B} \le \frac{2}{3}\cdot\frac{\ln(2/\delta)}{B} + \sqrt{\frac{2\ln(2/\delta)}{B}}.$$

Applying the union bound completes the proof.
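Lemma 41 is stated with explicit constants, so it can be sanity-checked numerically. The following check is our own addition (not part of the proofs):

```python
import math

def lhs(m, a, b):
    # sum_{i=1}^m ln^a(b i) / i
    return sum(math.log(b * i) ** a / i for i in range(1, m + 1))

def rhs(m, a, b):
    # (4 / a) ln^{a+1}(b (m + 1))
    return 4.0 / a * math.log(b * (m + 1)) ** (a + 1)

for a, b in ((1, 1), (2, 5), (4, 20)):
    for m in (10, 1000, 100000):
        assert lhs(m, a, b) < rhs(m, a, b)
print("Lemma 41 bound holds for all tested (a, b, m)")
```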
References

M. Anthony and P.L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.

L. Atlas, D. Cohn, R. Ladner, A.M. El-Sharkawi, and R.J. Marks. Training connectionist networks with queries and selective sampling. In Neural Information Processing Systems (NIPS), pages 566–573, 1990.

M.F. Balcan, S. Hanneke, and J. Wortman. The true sample complexity of active learning. In 21st Annual Conference on Learning Theory (COLT), pages 45–56, 2008.

Y. Baram, R. El-Yaniv, and K. Luz. Online choice of active learning algorithms. Journal of Machine Learning Research, 5:255–291, 2004.

P.L. Bartlett and M.H. Wegkamp. Classification with a reject option using a hinge loss. Journal of Machine Learning Research, 9:1823–1840, 2008.

R. Begleiter, R. El-Yaniv, and D. Pechyony. Repairing self-confident active-transductive learners using systematic exploration. Pattern Recognition Letters, 29(9):1245–1251, 2008.

A. Blumer, A. Ehrenfeucht, D. Haussler, and M.K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM, 36, 1989.

C.K. Chow. An optimum character recognition system using decision functions. IEEE Transactions on Computers, 6(4):247–254, 1957.

C.K. Chow. On optimum recognition error and reject trade-off. IEEE Transactions on Information Theory, 16:41–46, 1970.

D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning, 15(2):201–221, 1994.

S. Dasgupta. Coarse sample complexity bounds for active learning. In Advances in Neural Information Processing Systems 18, pages 235–242, 2005.

S. Dasgupta, A. Tauman Kalai, and C. Monteleoni. Analysis of perceptron-based active learning. Journal of Machine Learning Research, 10:281–299, 2009.

R. El-Yaniv and Y. Wiener. On the foundations of noise-free selective classification. Journal of Machine Learning Research, 11:1605–1641, 2010.

R. El-Yaniv and Y. Wiener. Agnostic selective classification. In Neural Information Processing Systems (NIPS), 2011.

S. Fine, R. Gilad-Bachrach, and E. Shamir. Query by committee, linear separation and random walks. Theoretical Computer Science, 284(1):25–51, 2002.

Y. Freund, H.S. Seung, E. Shamir, and N. Tishby. Information, prediction, and Query by Committee. In Advances in Neural Information Processing Systems (NIPS) 5, pages 483–490, 1993.

Y. Freund, H.S. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by committee algorithm. Machine Learning, 28:133–168, 1997.

Y. Freund, Y. Mansour, and R.E. Schapire. Generalization bounds for averaged classifiers. Annals of Statistics, 32(4):1698–1722, 2004.

E. Friedman. Active learning for smooth problems. In Proceedings of the 22nd Annual Conference on Learning Theory (COLT), 2009.

R. Gilad-Bachrach. To PAC and Beyond. PhD thesis, The Hebrew University of Jerusalem, 2007.

S. Goldman and M. Kearns. On the complexity of teaching. Journal of Computer and System Sciences, 50, 1995.

S. Hanneke. A bound on the label complexity of agnostic active learning. In ICML '07: Proceedings of the 24th International Conference on Machine Learning, pages 353–360, 2007a.

S. Hanneke. Teaching dimension and the complexity of active learning. In Proceedings of the 20th Annual Conference on Learning Theory (COLT), volume 4539 of Lecture Notes in Artificial Intelligence, pages 66–81, 2007b.

S. Hanneke. Theoretical Foundations of Active Learning. PhD thesis, Carnegie Mellon University, 2009.

S. Hanneke. Activized learning: Transforming passive to active with improved label complexity. CoRR, abs/1108.1766, 2011a. URL http://arxiv.org/abs/1108.1766.

S. Hanneke. Rates of convergence in active learning. Annals of Statistics, 37(1):333–361, 2011b.

T. Hegedüs. Generalized teaching dimensions and the query complexity of learning. In COLT: Proceedings of the Workshop on Computational Learning Theory. Morgan Kaufmann Publishers, 1995.

R. Herbei and M.H. Wegkamp. Classification with reject option. The Canadian Journal of Statistics, 34(4):709–721, 2006.
W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13–30, March 1963.

D. Hug and M. Reitzner. Gaussian polytopes: variances and limit theorems, June 2005.

D. Hug, G.O. Munsonius, and M. Reitzner. Asymptotic mean values of Gaussian polytopes. Beiträge Algebra Geom., 45:531–548, 2004.

C. McDiarmid. Concentration. In M. Habib, C. McDiarmid, J. Ramirez-Alfonsin, and B. Reed, editors, Probabilistic Methods for Algorithmic Discrete Mathematics, volume 16, pages 195–248. Springer-Verlag, 1998.

T. Mitchell. Version spaces: a candidate elimination approach to rule learning. In IJCAI '77: Proceedings of the 5th International Joint Conference on Artificial Intelligence, pages 305–310, 1977.

H.S. Seung, M. Opper, and H. Sompolinsky. Query by committee. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory (COLT), pages 287–294, 1992.

V. Vapnik and A. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16:264–280, 1971.

M.H. Wegkamp. Lasso type classifiers with a reject option. Electronic Journal of Statistics, 1:155–168, 2007.

5 0.10917637 73 jmlr-2012-Multi-task Regression using Minimal Penalties

Author: Matthieu Solnon, Sylvain Arlot, Francis Bach

Abstract: In this paper we study the kernel multiple ridge regression framework, which we refer to as multitask regression, using penalization techniques. The theoretical analysis of this problem shows that the key element appearing for an optimal calibration is the covariance matrix of the noise between the different tasks. We present a new algorithm to estimate this covariance matrix, based on the concept of minimal penalty, which was previously used in the single-task regression framework to estimate the variance of the noise. We show, in a non-asymptotic setting and under mild assumptions on the target function, that this estimator converges towards the covariance matrix. Then plugging this estimator into the corresponding ideal penalty leads to an oracle inequality. We illustrate the behavior of our algorithm on synthetic examples. Keywords: multi-task, oracle inequality, learning theory

6 0.10360144 14 jmlr-2012-Activized Learning: Transforming Passive to Active with Improved Label Complexity

7 0.099244691 87 jmlr-2012-PAC-Bayes Bounds with Data Dependent Priors

8 0.095551126 71 jmlr-2012-Multi-Instance Learning with Any Hypothesis Class

9 0.09198723 76 jmlr-2012-Noise-Contrastive Estimation of Unnormalized Statistical Models, with Applications to Natural Image Statistics

10 0.089220352 111 jmlr-2012-Structured Sparsity and Generalization

11 0.088831365 43 jmlr-2012-Fast Approximation of Matrix Coherence and Statistical Leverage

12 0.0872209 8 jmlr-2012-A Primal-Dual Convergence Analysis of Boosting

13 0.079749934 117 jmlr-2012-Variable Selection in High-dimensional Varying-coefficient Models with Global Optimality

14 0.077304088 67 jmlr-2012-Minimax-Optimal Rates For Sparse Additive Models Over Kernel Classes Via Convex Programming

15 0.069194451 26 jmlr-2012-Coherence Functions with Applications in Large-Margin Classification Methods

16 0.064528838 21 jmlr-2012-Bayesian Mixed-Effects Inference on Classification Performance in Hierarchical Data Sets

17 0.056102011 59 jmlr-2012-Linear Regression With Random Projections

18 0.053179864 33 jmlr-2012-Distance Metric Learning with Eigenvalue Optimization

19 0.051912796 118 jmlr-2012-Variational Multinomial Logit Gaussian Process

20 0.051772244 28 jmlr-2012-Confidence-Weighted Linear Classification for Text Categorization


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.285), (1, 0.112), (2, -0.169), (3, 0.01), (4, -0.058), (5, -0.112), (6, 0.144), (7, 0.194), (8, -0.016), (9, 0.022), (10, -0.016), (11, -0.03), (12, 0.1), (13, -0.053), (14, -0.048), (15, -0.211), (16, 0.208), (17, 0.111), (18, 0.078), (19, 0.016), (20, 0.094), (21, -0.024), (22, -0.01), (23, -0.094), (24, -0.022), (25, -0.006), (26, 0.08), (27, -0.031), (28, 0.005), (29, 0.035), (30, 0.111), (31, -0.067), (32, -0.206), (33, -0.041), (34, 0.046), (35, 0.011), (36, -0.052), (37, -0.046), (38, -0.088), (39, -0.046), (40, 0.052), (41, 0.005), (42, -0.037), (43, 0.083), (44, -0.048), (45, 0.04), (46, 0.043), (47, 0.027), (48, -0.003), (49, 0.047)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96608561 82 jmlr-2012-On the Necessity of Irrelevant Variables

Author: David P. Helmbold, Philip M. Long

Abstract: This work explores the effects of relevant and irrelevant boolean variables on the accuracy of classifiers. The analysis uses the assumption that the variables are conditionally independent given the class, and focuses on a natural family of learning algorithms for such sources when the relevant variables have a small advantage over random guessing. The main result is that algorithms relying predominately on irrelevant variables have error probabilities that quickly go to 0 in situations where algorithms that limit the use of irrelevant variables have errors bounded below by a positive constant. We also show that accurate learning is possible even when there are so few examples that one cannot determine with high confidence whether or not any individual variable is relevant. Keywords: feature selection, generalization, learning theory

2 0.72521049 71 jmlr-2012-Multi-Instance Learning with Any Hypothesis Class

Author: Sivan Sabato, Naftali Tishby

Abstract: In the supervised learning setting termed Multiple-Instance Learning (MIL), the examples are bags of instances, and the bag label is a function of the labels of its instances. Typically, this function is the Boolean OR. The learner observes a sample of bags and the bag labels, but not the instance labels that determine the bag labels. The learner is then required to emit a classification rule for bags based on the sample. MIL has numerous applications, and many heuristic algorithms have been used successfully on this problem, each adapted to specific settings or applications. In this work we provide a unified theoretical analysis for MIL, which holds for any underlying hypothesis class, regardless of a specific application or problem domain. We show that the sample complexity of MIL is only poly-logarithmically dependent on the size of the bag, for any underlying hypothesis class. In addition, we introduce a new PAC-learning algorithm for MIL, which uses a regular supervised learning algorithm as an oracle. We prove that efficient PAC-learning for MIL can be generated from any efficient non-MIL supervised learning algorithm that handles one-sided error. The computational complexity of the resulting algorithm is only polynomially dependent on the bag size. Keywords: multiple-instance learning, learning theory, sample complexity, PAC learning, supervised classification

3 0.65246844 80 jmlr-2012-On Ranking and Generalization Bounds

Author: Wojciech Rejchel

Abstract: The problem of ranking is to predict or to guess the ordering between objects on the basis of their observed features. In this paper we consider ranking estimators that minimize the empirical convex risk. We prove generalization bounds for the excess risk of such estimators with rates that are faster than 1/√n. We apply our results to commonly used ranking algorithms, for instance boosting or support vector machines. Moreover, we study the performance of considered estimators on real data sets. Keywords: convex risk minimization, excess risk, support vector machine, empirical process, U-process

4 0.58110881 87 jmlr-2012-PAC-Bayes Bounds with Data Dependent Priors

Author: Emilio Parrado-Hernández, Amiran Ambroladze, John Shawe-Taylor, Shiliang Sun

Abstract: This paper presents the prior PAC-Bayes bound and explores its capabilities as a tool to provide tight predictions of SVMs’ generalization. The computation of the bound involves estimating a prior of the distribution of classifiers from the available data, and then manipulating this prior in the usual PAC-Bayes generalization bound. We explore two alternatives: to learn the prior from a separate data set, or to consider an expectation prior that does not need this separate data set. The prior PAC-Bayes bound motivates two SVM-like classification algorithms, prior SVM and η-prior SVM, whose regularization term pushes towards the minimization of the prior PAC-Bayes bound. The experimental work illustrates that the new bounds can be significantly tighter than the original PAC-Bayes bound when applied to SVMs, and among them the combination of the prior PAC-Bayes bound and the prior SVM algorithm gives the tightest bound. Keywords: PAC-Bayes bound, support vector machine, generalization capability prediction, classification

5 0.57640988 97 jmlr-2012-Regularization Techniques for Learning with Matrices

Author: Sham M. Kakade, Shai Shalev-Shwartz, Ambuj Tewari

Abstract: There is growing body of learning problems for which it is natural to organize the parameters into a matrix. As a result, it becomes easy to impose sophisticated prior knowledge by appropriately regularizing the parameters under some matrix norm. This work describes and analyzes a systematic method for constructing such matrix-based regularization techniques. In particular, we focus on how the underlying statistical properties of a given problem can help us decide which regularization function is appropriate. Our methodology is based on a known duality phenomenon: a function is strongly convex with respect to some norm if and only if its conjugate function is strongly smooth with respect to the dual norm. This result has already been found to be a key component in deriving and analyzing several learning algorithms. We demonstrate the potential of this framework by deriving novel generalization and regret bounds for multi-task learning, multi-class learning, and multiple kernel learning. Keywords: regularization, strong convexity, regret bounds, generalization bounds, multi-task learning, multi-class learning, multiple kernel learning

6 0.5477376 8 jmlr-2012-A Primal-Dual Convergence Analysis of Boosting

7 0.5323711 111 jmlr-2012-Structured Sparsity and Generalization

8 0.51213574 14 jmlr-2012-Activized Learning: Transforming Passive to Active with Improved Label Complexity

9 0.5076974 13 jmlr-2012-Active Learning via Perfect Selective Classification

10 0.50610149 73 jmlr-2012-Multi-task Regression using Minimal Penalties

11 0.46623507 76 jmlr-2012-Noise-Contrastive Estimation of Unnormalized Statistical Models, with Applications to Natural Image Statistics

12 0.43833166 43 jmlr-2012-Fast Approximation of Matrix Coherence and Statistical Leverage

13 0.41719943 117 jmlr-2012-Variable Selection in High-dimensional Varying-coefficient Models with Global Optimality

14 0.36098376 21 jmlr-2012-Bayesian Mixed-Effects Inference on Classification Performance in Hierarchical Data Sets

15 0.31590021 114 jmlr-2012-Towards Integrative Causal Analysis of Heterogeneous Data Sets and Studies

16 0.31402877 26 jmlr-2012-Coherence Functions with Applications in Large-Margin Classification Methods

17 0.30856967 118 jmlr-2012-Variational Multinomial Logit Gaussian Process

18 0.30472532 70 jmlr-2012-Multi-Assignment Clustering for Boolean Data

19 0.30287069 67 jmlr-2012-Minimax-Optimal Rates For Sparse Additive Models Over Kernel Classes Via Convex Programming

20 0.28593218 99 jmlr-2012-Restricted Strong Convexity and Weighted Matrix Completion: Optimal Bounds with Noise


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(7, 0.01), (21, 0.048), (26, 0.058), (29, 0.052), (35, 0.017), (44, 0.32), (49, 0.02), (56, 0.019), (57, 0.019), (75, 0.073), (77, 0.039), (79, 0.02), (92, 0.129), (96, 0.089)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.76946682 23 jmlr-2012-Breaking the Curse of Kernelization: Budgeted Stochastic Gradient Descent for Large-Scale SVM Training

Author: Zhuang Wang, Koby Crammer, Slobodan Vucetic

Abstract: Online algorithms that process one example at a time are advantageous when dealing with very large data or with data streams. Stochastic Gradient Descent (SGD) is such an algorithm and it is an attractive choice for online Support Vector Machine (SVM) training due to its simplicity and effectiveness. When equipped with kernel functions, similarly to other SVM learning algorithms, SGD is susceptible to the curse of kernelization that causes unbounded linear growth in model size and update time with data size. This may render SGD inapplicable to large data sets. We address this issue by presenting a class of Budgeted SGD (BSGD) algorithms for large-scale kernel SVM training which have constant space and constant time complexity per update. Specifically, BSGD keeps the number of support vectors bounded during training through several budget maintenance strategies. We treat the budget maintenance as a source of the gradient error, and show that the gap between the BSGD and the optimal SVM solutions depends on the model degradation due to budget maintenance. To minimize the gap, we study greedy budget maintenance methods based on removal, projection, and merging of support vectors. We propose budgeted versions of several popular online SVM algorithms that belong to the SGD family. We further derive BSGD algorithms for multi-class SVM training. Comprehensive empirical results show that BSGD achieves higher accuracy than the state-of-the-art budgeted online algorithms and comparable to non-budget algorithms, while achieving impressive computational efficiency both in time and space during training and prediction. Keywords: SVM, large-scale learning, online learning, stochastic gradient descent, kernel methods

same-paper 2 0.73810613 82 jmlr-2012-On the Necessity of Irrelevant Variables

Author: David P. Helmbold, Philip M. Long

Abstract: This work explores the effects of relevant and irrelevant boolean variables on the accuracy of classifiers. The analysis uses the assumption that the variables are conditionally independent given the class, and focuses on a natural family of learning algorithms for such sources when the relevant variables have a small advantage over random guessing. The main result is that algorithms relying predominately on irrelevant variables have error probabilities that quickly go to 0 in situations where algorithms that limit the use of irrelevant variables have errors bounded below by a positive constant. We also show that accurate learning is possible even when there are so few examples that one cannot determine with high confidence whether or not any individual variable is relevant. Keywords: feature selection, generalization, learning theory

3 0.50144464 8 jmlr-2012-A Primal-Dual Convergence Analysis of Boosting

Author: Matus Telgarsky

Abstract: Boosting combines weak learners into a predictor with low empirical risk. Its dual constructs a high entropy distribution upon which weak learners and training labels are uncorrelated. This manuscript studies this primal-dual relationship under a broad family of losses, including the exponential loss of AdaBoost and the logistic loss, revealing: • Weak learnability aids the whole loss family: for any ε > 0, O (ln(1/ε)) iterations suffice to produce a predictor with empirical risk ε-close to the infimum; • The circumstances granting the existence of an empirical risk minimizer may be characterized in terms of the primal and dual problems, yielding a new proof of the known rate O (ln(1/ε)); • Arbitrary instances may be decomposed into the above two, granting rate O (1/ε), with a matching lower bound provided for the logistic loss. Keywords: boosting, convex analysis, weak learnability, coordinate descent, maximum entropy

4 0.50134313 85 jmlr-2012-Optimal Distributed Online Prediction Using Mini-Batches

Author: Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao

Abstract: Online prediction methods are typically presented as serial algorithms running on a single processor. However, in the age of web-scale prediction problems, it is increasingly common to encounter situations where a single processor cannot keep up with the high rate at which inputs arrive. In this work, we present the distributed mini-batch algorithm, a method of converting many serial gradient-based online prediction algorithms into distributed algorithms. We prove a regret bound for this method that is asymptotically optimal for smooth convex loss functions and stochastic inputs. Moreover, our analysis explicitly takes into account communication latencies between nodes in the distributed environment. We show how our method can be used to solve the closely-related distributed stochastic optimization problem, achieving an asymptotically linear speed-up over multiple processors. Finally, we demonstrate the merits of our approach on a web-scale online prediction problem. Keywords: distributed computing, online learning, stochastic optimization, regret bounds, convex optimization

5 0.4986667 64 jmlr-2012-Manifold Identification in Dual Averaging for Regularized Stochastic Online Learning

Author: Sangkyun Lee, Stephen J. Wright

Abstract: Iterative methods that calculate their steps from approximate subgradient directions have proved to be useful for stochastic learning problems over large and streaming data sets. When the objective consists of a loss function plus a nonsmooth regularization term, the solution often lies on a lowdimensional manifold of parameter space along which the regularizer is smooth. (When an ℓ1 regularizer is used to induce sparsity in the solution, for example, this manifold is defined by the set of nonzero components of the parameter vector.) This paper shows that a regularized dual averaging algorithm can identify this manifold, with high probability, before reaching the solution. This observation motivates an algorithmic strategy in which, once an iterate is suspected of lying on an optimal or near-optimal manifold, we switch to a “local phase” that searches in this manifold, thus converging rapidly to a near-optimal point. Computational results are presented to verify the identification property and to illustrate the effectiveness of this approach. Keywords: regularization, dual averaging, partly smooth manifold, manifold identification

6 0.49707431 26 jmlr-2012-Coherence Functions with Applications in Large-Margin Classification Methods

7 0.49601337 111 jmlr-2012-Structured Sparsity and Generalization

8 0.49534869 117 jmlr-2012-Variable Selection in High-dimensional Varying-coefficient Models with Global Optimality

9 0.49073303 2 jmlr-2012-A Comparison of the Lasso and Marginal Regression

10 0.4907313 7 jmlr-2012-A Multi-Stage Framework for Dantzig Selector and LASSO

11 0.48979571 71 jmlr-2012-Multi-Instance Learning with Any Hypothesis Class

12 0.48968893 67 jmlr-2012-Minimax-Optimal Rates For Sparse Additive Models Over Kernel Classes Via Convex Programming

13 0.48955417 80 jmlr-2012-On Ranking and Generalization Bounds

14 0.48844349 73 jmlr-2012-Multi-task Regression using Minimal Penalties

15 0.48803228 13 jmlr-2012-Active Learning via Perfect Selective Classification

16 0.4866603 29 jmlr-2012-Consistent Model Selection Criteria on High Dimensions

17 0.48542434 115 jmlr-2012-Trading Regret for Efficiency: Online Convex Optimization with Long Term Constraints

18 0.48494714 11 jmlr-2012-A Unifying Probabilistic Perspective for Spectral Dimensionality Reduction: Insights and New Models

19 0.48434854 105 jmlr-2012-Selective Sampling and Active Learning from Single and Multiple Teachers

20 0.48409626 34 jmlr-2012-Dynamic Policy Programming