jmlr jmlr2006 jmlr2006-58 knowledge-graph by maker-knowledge-mining

58 jmlr-2006-Lower Bounds and Aggregation in Density Estimation


Source: pdf

Author: Guillaume Lecué

Abstract: In this paper we prove the optimality of an aggregation procedure. We prove lower bounds for aggregation of model selection type of M density estimators for the Kullback-Leibler divergence (KL), the Hellinger’s distance and the L1 -distance. The lower bound, with respect to the KL distance, can be achieved by the on-line type estimate suggested, among others, by Yang (2000a). Combining these results, we state that log M/n is an optimal rate of aggregation in the sense of Tsybakov (2003), where n is the sample size. Keywords: aggregation, optimal rates, Kullback-Leibler divergence

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 FR Laboratoire de Probabilités et Modèles Aléatoires, Université Paris 6, 4 place Jussieu, BP 188, 75252 Paris, France. Editor: Gábor Lugosi. Abstract In this paper we prove the optimality of an aggregation procedure. [sent-3, score-0.706]

2 We prove lower bounds for aggregation of model selection type of M density estimators for the Kullback-Leibler divergence (KL), the Hellinger’s distance and the L1 -distance. [sent-4, score-1.022]

3 Combining these results, we state that log M/n is an optimal rate of aggregation in the sense of Tsybakov (2003), where n is the sample size. [sent-6, score-0.779]

4 Introduction Let (X , A ) be a measurable space and ν be a σ-finite measure on (X , A ). [sent-8, score-0.073]

5 observations drawn from an unknown probability measure of density f on X with respect to ν. [sent-15, score-0.121]

6 Suppose that we have M ≥ 2 different estimators fˆ1 , . [sent-17, score-0.1]

7 Namely, the aggregation is based on splitting the sample into two independent subsamples D1 and D2 of sizes m and l respectively, where m ≫ l and m + l = n. [sent-27, score-0.64]

8 The size of the first subsample has to be greater than that of the second because it is used for the true estimation, that is, for the construction of the M estimators fˆ1 , . [sent-28, score-0.215]

9 The second subsample is used for the adaptation step of the procedure, that is for the construction of an aggregate f˜n , which has to mimic, in a certain sense, the behavior of the best among the estimators fˆi . [sent-32, score-0.289]

10 the whole sample Dn unlike the first estimators fˆ1 , . [sent-36, score-0.1]

11 These papers give a broader picture of the general topic of aggregation procedures, and Yang (2004) complemented their results. [sent-41, score-0.645]

12 Tsybakov (2003) improved these results and formulated the three types of aggregation problems (cf. [sent-42, score-0.623]

13 One can suggest different aggregation procedures and the question is how to look for an optimal one. [sent-44, score-0.657]

14 A way to define optimality in aggregation in a minimax sense for a regression problem is suggested in Tsybakov (2003). [sent-45, score-0.727]

15 Based on the same principle we can define optimality for density estimation. [sent-46, score-0.16]

16 Thus, the first subsample is fixed and instead of estimators fˆ1 , . [sent-51, score-0.174]

17 Rather than working with a part of the initial sample we will use, for notational simplicity, the whole sample Dn of size n instead of a subsample D2 . [sent-58, score-0.074]

18 l The aim of this paper is to prove the optimality, in the sense of Tsybakov (2003), of the aggregation method proposed by Yang, for the estimation of a density on (Rd , λ) where λ is the Lebesgue measure on Rd . [sent-59, score-0.791]

19 This procedure is a convex aggregation with weights which can be seen in two different ways. [sent-60, score-0.675]

20 Yang’s point of view is to express these weights as functions of the likelihood of the model, namely $\tilde f_n(x) = \sum_{j=1}^M \tilde w_j^{(n)} f_j(x)$ for all $x \in \mathcal{X}$ (1), where the weights are $\tilde w_j^{(n)} = (n+1)^{-1} \sum_{k=0}^n w_j^{(k)}$ and $w_j^{(k)} = f_j(X_1)\cdots f_j(X_k) \big/ \sum_{l=1}^M f_l(X_1)\cdots f_l(X_k)$. [sent-61, score-0.179]

21 Define the empirical Kullback loss $K_n(f) = -(1/n) \sum_{i=1}^n \log f(X_i)$ (keeping only the term independent of the underlying density to estimate) for all densities f. [sent-74, score-0.372]

22 We can rewrite these weights as exponential weights: $w_j^{(k)} = \frac{\exp(-k K_k(f_j))}{\sum_{l=1}^M \exp(-k K_k(f_l))}$, $\forall k = 0, \ldots, n$. [sent-75, score-0.114]
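To make the weighting scheme concrete, the following is a minimal numerical sketch of the progressive-mixture aggregate (1) with the exponential weights above; the clipping of the log-likelihoods and the function names are illustrative assumptions, not part of the paper.

```python
import numpy as np

def yang_aggregate(candidates, sample):
    """Sketch of the aggregate (1): f~_n(x) = sum_j w~_j f_j(x), where
    w~_j = (n+1)^{-1} sum_{k=0}^n w_j^{(k)} and w_j^{(k)} is proportional
    to f_j(X_1) ... f_j(X_k), i.e. to exp(-k K_k(f_j))."""
    # log f_j(X_i); clipped away from 0 to avoid -inf (illustrative choice)
    logf = np.log(np.clip([[f(x) for x in sample] for f in candidates], 1e-300, None))
    # cumulative log-likelihoods for k = 0, ..., n (k = 0 gives uniform weights)
    cum = np.hstack([np.zeros((len(candidates), 1)), np.cumsum(logf, axis=1)])
    # w_j^{(k)}: softmax over the candidates j, separately for every k
    w = np.exp(cum - cum.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    w_tilde = w.mean(axis=1)            # average over k = 0, ..., n
    return lambda x: sum(wj * fj(x) for wj, fj in zip(w_tilde, candidates))
```

The max subtraction before exponentiating is only a numerical-stability device; it leaves the weights of Equation (1) unchanged.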

23 Most of the results on convergence properties of aggregation methods are obtained for the regression and the Gaussian white noise models. [sent-79, score-0.646]

24 Nevertheless, Catoni (1997, 2004), Devroye and Lugosi (2001), Yang (2000a), Zhang (2003) and Rigollet and Tsybakov (2004) have explored the performances of aggregation procedures in the density estimation framework. [sent-80, score-0.807]

25 Most of them have established upper bounds for some procedure and do not deal with the problem of optimality of their procedures. [sent-81, score-0.097]

26 Nemirovski (2000), Juditsky and Nemirovski (2000) and Yang (2004) state lower bounds for aggregation procedure in the regression setup. [sent-82, score-0.73]

27 To our knowledge, lower bounds for the performance of aggregation methods in density estimation are available only in Rigollet and Tsybakov (2004). [sent-83, score-0.835]

28 One aim of this paper is to prove optimality of one of these procedures w. [sent-89, score-0.073]

29 the Hellinger’s distance and L1 -distance (stated in Section 3) and some results of Birgé (2004) and Devroye and Lugosi (2001) (recalled in Section 4) suggest that the rates of convergence obtained in Theorems 2 and 4 are optimal in the sense given in Definition 1. [sent-96, score-0.09]

30 In Section 2 we give a definition of optimality, for a rate of aggregation and for an aggregation procedure, and state our main results. [sent-101, score-1.273]

31 In Section 4, we recall a result of Yang (2000a) about an exact oracle inequality satisfied by the aggregation procedure introduced in (1). [sent-103, score-0.667]

32 Main Definition and Main Results To evaluate the accuracy of a density estimator we use the Kullback-Leibler (KL) divergence, the Hellinger’s distance and the L1 -distance as loss functions. [sent-105, score-0.229]

33 The KL divergence is defined for all densities f , g w. [sent-106, score-0.217]

34 a σ-finite measure ν on a space X , by $K(f|g) = \int_{\mathcal X} f \log\frac{f}{g}\, d\nu$ if $P_f \ll P_g$; $+\infty$ otherwise, where Pf (respectively Pg ) denotes the probability distribution of density f (respectively g) w. [sent-109, score-0.232]

35 Hellinger’s distance is defined for all non-negative measurable functions f and g by $H(f, g) = \big\|\sqrt{f} - \sqrt{g}\big\|_2$, for all functions f ∈ L2 (X , ν). [sent-113, score-0.123]

36 where the L2 -norm is defined by $\|f\|_2 = \big(\int_{\mathcal X} f^2(x)\, d\nu(x)\big)^{1/2}$; the L1 -distance is defined for all measurable functions f and g by $v(f, g) = \int_{\mathcal X} |f - g|\, d\nu$. [sent-114, score-0.073]
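As a quick illustration of the three loss functions, here is a small grid-based sketch; the discretization step and the clipping are assumptions made only so the example runs, not part of the paper's definitions.

```python
import numpy as np

def kl_divergence(f, g, dx):
    """K(f|g) = integral of f log(f/g) on a grid of mesh dx; +inf if f charges a set where g vanishes."""
    f, g = np.asarray(f, float), np.asarray(g, float)
    support = f > 0
    if np.any(support & (g <= 0)):
        return np.inf
    return float(np.sum(f[support] * np.log(f[support] / g[support])) * dx)

def hellinger(f, g, dx):
    """H(f, g) = || sqrt(f) - sqrt(g) ||_2."""
    return float(np.sqrt(np.sum((np.sqrt(f) - np.sqrt(g)) ** 2) * dx))

def l1_distance(f, g, dx):
    """v(f, g) = integral of |f - g|."""
    return float(np.sum(np.abs(np.asarray(f) - np.asarray(g))) * dx)

# Example: uniform on [0, 1] versus uniform on [0, 2]
x = np.linspace(0.0, 2.0, 2001)
dx = x[1] - x[0]
f = np.where(x <= 1.0, 1.0, 0.0)
g = np.full_like(x, 0.5)
print(kl_divergence(f, g, dx), hellinger(f, g, dx), l1_distance(f, g, dx))
```

On this example the printed values are roughly 0.69, 0.77 and 1.0, which is consistent with the inequality K(f|g) ≥ H²(f, g) recalled later in the proofs.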

37 The main goal of this paper is to find optimal rate of aggregation in the sense of the definition given below. [sent-115, score-0.668]

38 This definition is an analog, for the density estimation problem, of the one in Tsybakov (2003) for the regression problem. [sent-116, score-0.173]

39 Definition 1 Take M ≥ 2 an integer, F a set of densities on (X , A , ν) and F 0 a set of functions on X with values in R such that F ⊆ F 0 . [sent-117, score-0.169]

40 A sequence of positive numbers (ψn (M))n∈N∗ is called optimal rate of aggregation of M functions in (F 0 , F ) w. [sent-119, score-0.65]

41 , fM in F 0 there exists an estimator f˜n (aggregate) of f such that $\sup_{f \in \mathcal F} \mathbb E_f\big[d(f, \tilde f_n) - \min_{i=1,\ldots,M} d(f, f_i)\big] \le C \psi_n(M)$, [sent-125, score-0.279]

42 , fM in F 0 and c > 0 a constant independent of M such that for all estimators fˆn of f , $\sup_{f \in \mathcal F} \mathbb E_f\big[d(f, \hat f_n) - \min_{i=1,\ldots,M} d(f, f_i)\big] \ge c \psi_n(M)$. [sent-132, score-0.34]

43 (4) Moreover, when the inequalities (3) and (4) are satisfied, we say that the procedure f˜n , appearing in (3), is an optimal aggregation procedure w. [sent-136, score-0.686]

44 In this paper we are interested in the estimation of densities lying in F (A) = {densities bounded by A} and, depending on the loss function used, we aggregate functions in F 0 which can be: (5) 1. [sent-141, score-0.3]

45 F H (A) = {non-negative measurable functions bounded by A} for Hellinger’s distance, 3. [sent-143, score-0.091]

46 Let M and n be two integers such that $\log M \le 16(\min(1, A - 1))^2 n$. [sent-147, score-0.149]

47 The sequence $\psi_n(M) = \frac{\log M}{n}$ is an optimal rate of aggregation of M functions in (F K (A), F (A)) (introduced in (5)) w. [sent-148, score-0.761]

48 Moreover, the aggregation procedure with exponential weights, defined in (1), achieves this rate. [sent-152, score-0.645]

49 So, this procedure is an optimal aggregation procedure w. [sent-153, score-0.667]

50 ,M d( f , fi )”, in the upper bound and the lower bound of Definition 1, to be multiplied by a constant greater than one, then the rate (ψn (M))n∈N∗ is said to be a ”near optimal rate of aggregation”. [sent-160, score-0.165]

51 Observing Theorem 6 and the result of Devroye and Lugosi (2001) (recalled at the end of Section 4), the rates obtained in Theorems 2 and 4, namely $\left(\frac{\log M}{n}\right)^{q/2}$, are near optimal rates of aggregation for the Hellinger’s distance and the L1 -distance to the power q, where q > 0. [sent-161, score-0.849]

52 Lower Bounds To prove lower bounds of type (4) we use the following lemma on minimax lower bounds which can be obtained by combining Theorems 2. [sent-163, score-0.165]

53 Lemma 1 Let d be a semi-distance on the set of all densities on (X , A , ν) and w be a non-decreasing function defined on R+ which is not identically 0. [sent-167, score-0.169]

54 Then, $\inf_{\hat f_n} \sup_{f \in \mathcal C} \mathbb E_f\big[w(\psi_n^{-1} d(\hat f_n, f))\big] \ge c_1$, where $\inf_{\hat f_n}$ denotes the infimum over all estimators based on a sample of size n from an unknown distribution with density f and c1 > 0 is an absolute constant. [sent-170, score-0.559]

55 Lower bounds are given in the problem of estimation of a density on Rd , namely we have X = Rd and ν is the Lebesgue measure on Rd . [sent-172, score-0.205]

56 We have, for all integers n such that $\log M \le 16(\min(1, A - 1))^2 n$, [sent-174, score-0.522]

57 $\sup_{f_1,\ldots,f_M \in \mathcal F_H(A)}\, \inf_{\hat f_n}\, \sup_{f \in \mathcal F(A)} \Big( \mathbb E_f\big[H(\hat f_n, f)^q\big] - \min_{j=1,\ldots,M} H(f_j, f)^q \Big) \ge c \left(\frac{\log M}{n}\right)^{q/2}. [sent-177, score-0.153]

58 The sets F (A) and F H (A) are defined in (5) when X = Rd and the infimum is taken over all the estimators based on a sample of size n. [sent-181, score-0.1]

59 $\sup_{f_1,\ldots,f_M \in \mathcal F_H(A)}\, \inf_{\hat f_n}\, \sup_{f \in \mathcal F(A)} \Big( \mathbb E_f\big[H(\hat f_n, f)^q\big] - \min_{j} H(f_j, f)^q \Big) \ge \inf_{\hat f_n}\, \sup_{f \in \{f_1, \ldots$ [sent-188, score-0.279]

60 Thus, to prove Theorem 1, it suffices to find M appropriate densities bounded by A and to apply Lemma 1 with a suitable rate. [sent-195, score-0.187]

61 We take L such that L ≤ D min(1, A − 1); thus, for all δ ∈ ∆, fδ is a density bounded by A. [sent-205, score-0.139]
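For intuition, here is a small sketch of such a family of densities, assuming the form $f_\delta(x) = 1 + \sum_j \delta_j h_j(x)$ on [0, 1] that appears in the Kullback computation below; the particular bump h (a ± step of height L/D with zero integral) is an assumption of this illustration, not the paper's exact choice.

```python
import numpy as np
from itertools import product

def make_f_delta(delta, L, D):
    """Density f_delta(x) = 1 + sum_j delta_j h_j(x) on [0, 1] (a sketch).

    h_j is the bump h translated to the j-th block [(j-1)/D, j/D); here h is
    a +/- step of height L/D, so each block integrates to 0, each f_delta is a
    density, and it is bounded by 1 + L/D <= A whenever L <= D * min(1, A - 1)."""
    delta = np.asarray(delta)

    def f(x):
        x = np.asarray(x, float)
        j = np.minimum((x * D).astype(int), D - 1)      # block containing x
        local = x * D - j                               # position inside the block
        h = np.where(local < 0.5, L / D, -L / D)        # the assumed bump shape
        return 1.0 + delta[j] * h

    return f

# The 2^D vertices delta of the hypercube {0, 1}^D give 2^D candidate densities.
D, L = 4, 0.5
family = [make_f_delta(delta, L, D) for delta in product([0, 1], repeat=D)]
x = np.linspace(0.0, 1.0, 1001)
print(len(family), family[-1](x).max())                 # 16 densities, bounded by 1 + L/D
```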

62 Since $\int h\, d\lambda = 0$, we have $H^2(f_{\delta^1}, f_{\delta^2}) = \sum_{j=1}^D \int_{(j-1)/D}^{j/D} I(\delta^1_j \neq \delta^2_j) \big(1 - \sqrt{1 + h_j(x)}\big)^2\, dx = 2 \rho(\delta^1, \delta^2) \int_0^{1/D} \big(1 - \sqrt{1 + h(x)}\big)\, dx$, for all $\delta^1 = (\delta^1_1, \ldots$ [sent-216, score-0.129]

63 On the other hand the function $\varphi(x) = 1 - \alpha x^2 - \sqrt{1 + x}$, where $\alpha = 8^{-3/2}$, is convex on [−1, 1] and we have |h(x)| ≤ L/D ≤ 1, so, according to Jensen, $\int_0^1 \varphi(h(x))\, dx \ge \varphi\big(\int_0^1 h(x)\, dx\big) = 0$, hence $\int_0^{1/D} \big(1 - \sqrt{1 + h(x)}\big)\, dx \ge \alpha \int_0^{1/D} h^2(x)\, dx$. [sent-223, score-0.129]

64 89) or Ibragimov and Hasminskii (1980), there exists a D/8-separated set, called ND/8 , on ∆ for the Hamming distance, such that its cardinality is at least $2^{D/8}$ and (0, . [sent-227, score-0.068]

65 We have $K(P_\delta^{\otimes n} | P_0^{\otimes n}) = n \int_{[0,1]^d} \log(f_\delta(x))\, f_\delta(x)\, dx = n \sum_{j=1}^D \int_{(j-1)/D}^{j/D} \log(1 + \delta_j h_j(x)) (1 + \delta_j h_j(x))\, dx = n \Big(\sum_{j=1}^D \delta_j\Big) \int_0^{1/D} \log(1 + h(x)) (1 + h(x))\, dx$, for all $\delta = (\delta_1, \ldots$ [sent-244, score-0.351]

66 Since $\log M \le 16(\min(1, A - 1))^2 n$, we can take L such that $n L^2 / D^2 = \log(M)/16$ while still having L ≤ D min(1, A − 1). [sent-249, score-0.111]

67 Applying Lemma 1 when d is H, the Hellinger’s distance, with M densities f1 , . [sent-251, score-0.169]

68 Remark 1 The construction of the family of densities { fδ : δ ∈ ND/8 } is in the same spirit as the lower bounds of Tsybakov (2003) and Rigollet and Tsybakov (2004). [sent-255, score-0.213]

69 We have, for any integer n such that $\log M \le 16(\min(1, A - 1))^2 n$, [sent-261, score-0.512]

70 ,M $\cdots \ge c \left(\frac{\log M}{n}\right)^{q}$, (6), and sup inf sup f1 ,. [sent-267, score-0.484]

71 ,M q $\frac{\log M}{n}$ , (7) where c is a positive constant which depends only on A. [sent-273, score-0.111]

72 Since we have for all densities f and g, K( f |g) ≥ H 2 ( f , g), (a proof is given in Tsybakov, 2004, p. [sent-276, score-0.169]

73 , fM are densities bounded by A then, E f (K( f | fˆn ))q − min (K( f | fi ))q sup inf sup sup E f (K( f | fˆn ))q f1 ,. [sent-280, score-0.8]

74 We have, for any integer n such that $\log M \le 16(\min(1, A - 1))^2 n$, [sent-295, score-0.522]

75 $\sup_{f_1,\ldots,f_M \in \mathcal F_v(A)}\, \inf_{\hat f_n}\, \sup_{f \in \mathcal F(A)} \Big( \mathbb E_f\big[v(f, \hat f_n)^q\big] - \min_{j=1,\ldots,M} v(f, f_j)^q \Big) \ge c \left(\frac{\log M}{n}\right)^{q/2}. [sent-298, score-0.215]

76 Thus, for $L = (D/4)\sqrt{\log(M)/n}$ and ND/8 , the D/8-separated set of ∆ introduced in the proof of Theorem 2, we have $v(f_{\delta^1}, f_{\delta^2}) \ge \frac{1}{32} \sqrt{\frac{\log M}{n}}$ for all $\delta^1, \delta^2 \in N_{D/8}$, and $K(P_\delta^{\otimes n} | P_0^{\otimes n}) \le \frac{1}{16} \log M$. Therefore, by applying Lemma 1 to the L1 -distance with M densities f1 , . [sent-305, score-0.169]

77 Upper Bounds In this section we use an argument in Yang (2000a) (see also Catoni, 2004) to show that the rate of the lower bound of Theorem 3 is an optimal rate of aggregation with respect to the KL loss. [sent-311, score-0.703]

78 We use an aggregate constructed by Yang (defined in (1)) to attain this rate. [sent-312, score-0.065]

79 Remark that Theorem 5 holds in a general framework of a measurable space (X , A ) endowed with a σ-finite measure ν. [sent-314, score-0.073]

80 , Xn be n observations of a probability measure on (X , A ) of density f with respect to ν. [sent-318, score-0.121]

81 The aggregate f˜n , introduced in (1), satisfies, for any underlying density f , $\mathbb E_f\big[K(f | \tilde f_n)\big] \le \min_{j=1,\ldots,M} K(f | f_j) + \frac{\log M}{n+1}$. [sent-323, score-0.228]
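A small Monte Carlo sanity check of this oracle inequality is sketched below; the Gaussian candidates, the true density, the sample size and the number of repetitions are all arbitrary illustrative choices, and the aggregate is recomputed inline rather than taken from the paper.

```python
import numpy as np
rng = np.random.default_rng(0)

def gauss(mu, s):
    return lambda x: np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

candidates = [gauss(0.0, 1.0), gauss(0.5, 1.0), gauss(0.0, 2.0)]
f_true = gauss(0.1, 1.0)                       # not one of the candidates
n, M = 200, len(candidates)
grid = np.linspace(-8, 8, 4001)
dx = grid[1] - grid[0]

def kl(f, g):                                  # K(f|g) approximated on the grid
    fx, gx = f(grid), np.clip(g(grid), 1e-300, None)
    return np.sum(fx * np.log(np.clip(fx, 1e-300, None) / gx)) * dx

def aggregate(sample):                         # the progressive mixture (1)
    logf = np.log(np.clip([[fj(x) for x in sample] for fj in candidates], 1e-300, None))
    cum = np.hstack([np.zeros((M, 1)), np.cumsum(logf, axis=1)])
    w = np.exp(cum - cum.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    w_bar = w.mean(axis=1)
    return lambda x: sum(wj * fj(x) for wj, fj in zip(w_bar, candidates))

risks = [kl(f_true, aggregate(rng.normal(0.1, 1.0, size=n))) for _ in range(50)]
oracle = min(kl(f_true, fj) for fj in candidates)
print(np.mean(risks), "vs bound", oracle + np.log(M) / (n + 1))   # Theorem 5, up to Monte Carlo error
```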

82 , xk ∈ X ) and $\hat f_0(x; X^{(0)}) = (1/M) \sum_{j=1}^M f_j(x)$ for all x ∈ X . [sent-337, score-0.094]

83 We have $\sum_{k=0}^n \mathbb E_f\big[K(f | \hat f_k)\big] = \sum_{k=0}^n \int_{\mathcal X^{k+1}} \log \frac{f(x_{k+1})}{\hat f_k(x_{k+1}; x^{(k)})} \prod_{i=1}^{k+1} f(x_i)\, d\nu^{\otimes(k+1)}(x_1, \ldots, x_{k+1}) = \int_{\mathcal X^{n+1}} \sum_{k=0}^n \log \frac{f(x_{k+1})}{\hat f_k(x_{k+1}; x^{(k)})} \prod_{i=1}^{n+1} f(x_i)\, d\nu^{\otimes(n+1)}(x_1, \ldots$ [sent-340, score-0.333]

84 , xn+1 ∈ X ; thus, $\sum_{k=0}^n \mathbb E_f\big[K(f | \hat f_k)\big] = \int_{\mathcal X^{n+1}} \log \frac{f(x_1) \cdots f(x_{n+1})}{\prod_{k=0}^{n} \hat f_k(x_{k+1}; x^{(k)})} \prod_{i=1}^{n+1} f(x_i)\, d\nu^{\otimes(n+1)}(x_1, \ldots$ [sent-358, score-0.08]

85 , xn+1 ) i=1 , L OWER B OUNDS AND AGGREGATION IN D ENSITY E STIMATION finally we have, n ∑ Ef k=0 K( f | fˆk ) ≤ log M + (n + 1) inf j=1,. [sent-391, score-0.212]

86 (9) On the other hand we have, E f K( f | f˜n ) = Z X n+1 log f (xn+1 ) n 1 (k) ˆ n+1 ∑k=0 f k (xn+1 ; x ) n+1 ∏ f (xi )dν⊗(n+1)(x1 , . [sent-395, score-0.111]

87 Birgé constructs estimators, called T-estimators (the ”T” is for ”test”), which are adaptive for model selection aggregation of M estimators, with a residual proportional to $(\log M / n)^{q/2}$ when the Hellinger and L1 -distances are used to evaluate the quality of estimation (cf. [sent-400, score-0.789]

88 ,M d q ( f , fi ) where d is the Hellinger distance or the L1 distance. [sent-405, score-0.112]

89 Nevertheless, observing the proofs of Theorems 2 and 4, we can obtain [sent-406, score-0.373]

90 $\sup_{f_1,\ldots,f_M \in \mathcal F(A)}\, \inf_{\hat f_n}\, \sup_{f \in \mathcal F(A)} \Big( \mathbb E_f\big[d(f, \hat f_n)^q\big] - C(q) \min_{i=1,\ldots,M} d(f, f_i)^q \Big) \ge c \left(\frac{\log M}{n}\right)^{q/2}. [sent-409, score-0.215]

91 The same residual appears in this lower bound and in the upper bounds of Theorem 6, so we can say that $\left(\frac{\log M}{n}\right)^{q/2}$ is a near optimal rate of aggregation w. [sent-414, score-0.863]

92 the Hellinger distance or the L1 -distance to the power q, in the sense given at the end of Section 2. [sent-417, score-0.068]

93 Theorem 6 (Birgé) If we have n observations of a probability measure of density f w. [sent-419, score-0.121]

94 , fM densities on (X , A , ν), then there exists an estimator f˜n (T-estimator) such that for any underlying density f and q > 0, we have $\mathbb E_f\big[H(f, \tilde f_n)^q\big] \le C(q) \Big( \min_{j=1,\ldots,M} H(f, f_j)^q + \big(\tfrac{\log M}{n}\big)^{q/2} \Big)$, [sent-425, score-0.371]

95 and for the L1 -distance we can construct an estimator f˜n which satisfies $\mathbb E_f\big[v(f, \tilde f_n)^q\big] \le C(q) \Big( \min_{j=1,\ldots,M} v(f, f_j) + \sqrt{\tfrac{\log M}{n}} \Big)^{q}$. [sent-428, score-0.303]

96 59)) achieves the same aggregation rate as in Theorem 6 for the L1 -distance with q = 1. [sent-434, score-0.65]

97 , fM ∈ F (A), $\mathbb E_f\big[v(f, \breve f_n)\big] \le 3 \min_{j=1,\ldots,M} v(f, f_j) + \sqrt{\tfrac{\log M}{n}}$, [sent-438, score-0.153]

98 where f˘n is the estimator of Yatracos, defined by $\breve f_n = \arg\min_{f \in \{f_1, \ldots\}} \sup \cdots$ [sent-441, score-0.217]
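For completeness, here is a rough sketch of a Yatracos-type minimum distance selection in the spirit of Devroye and Lugosi (2001); the grid discretization and the restriction of the supremum to the sets {f_i > f_j} (the Yatracos class) are assumptions of this illustration, not the paper's exact formulation.

```python
import numpy as np

def yatracos_select(candidates, sample, grid):
    """Return the candidate density minimizing
    sup_A | integral_A f - mu_n(A) | over the Yatracos sets A = {x : f_i(x) > f_j(x)}."""
    dx = grid[1] - grid[0]
    on_grid = np.array([f(grid) for f in candidates])                 # densities on the grid
    on_data = np.array([[f(x) for x in sample] for f in candidates])  # densities at the data
    best_k, best_crit = 0, np.inf
    for k, fv in enumerate(on_grid):
        crit = 0.0
        for i in range(len(candidates)):
            for j in range(len(candidates)):
                if i == j:
                    continue
                A_grid = on_grid[i] > on_grid[j]        # Yatracos set {f_i > f_j} on the grid
                A_data = on_data[i] > on_data[j]        # the same set evaluated at the sample
                crit = max(crit, abs(np.sum(fv[A_grid]) * dx - A_data.mean()))
        if crit < best_crit:
            best_k, best_crit = k, crit
    return candidates[best_k]
```

The candidates are assumed to be vectorized callables, and the empirical measure of a set A is taken as the fraction of sample points falling in A.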

99 Online prediction algorithms for aggregation of arbitrary estimators of a conditional mean. [sent-472, score-0.723]

100 From epsilon-entropy to KL-complexity: analysis of minimum information complexity density estimation. [sent-591, score-0.121]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('aggregation', 0.623), ('tsybakov', 0.268), ('fm', 0.262), ('hellinger', 0.222), ('densities', 0.169), ('yang', 0.153), ('juditsky', 0.153), ('nemirovski', 0.148), ('sup', 0.136), ('rigollet', 0.131), ('dx', 0.129), ('density', 0.121), ('kl', 0.116), ('log', 0.111), ('birg', 0.111), ('catoni', 0.111), ('ecu', 0.109), ('ensity', 0.109), ('inf', 0.101), ('estimators', 0.1), ('xk', 0.094), ('lecu', 0.087), ('stimation', 0.083), ('ower', 0.083), ('xn', 0.08), ('ounds', 0.076), ('subsample', 0.074), ('measurable', 0.073), ('probabilit', 0.065), ('aggregate', 0.065), ('devroye', 0.065), ('lugosi', 0.062), ('fi', 0.062), ('rd', 0.059), ('paris', 0.05), ('distance', 0.05), ('pg', 0.05), ('wj', 0.05), ('divergence', 0.048), ('divergences', 0.046), ('dn', 0.045), ('atoires', 0.044), ('bunea', 0.044), ('guillaume', 0.044), ('ibragimov', 0.044), ('jussieu', 0.044), ('yatracos', 0.044), ('lebesgue', 0.043), ('min', 0.042), ('optimality', 0.039), ('theorem', 0.039), ('estimator', 0.039), ('integers', 0.038), ('laboratoire', 0.037), ('kkk', 0.037), ('augustin', 0.037), ('uq', 0.037), ('bounds', 0.036), ('fl', 0.034), ('procedures', 0.034), ('recalled', 0.033), ('mini', 0.033), ('adaptation', 0.032), ('hamming', 0.03), ('pf', 0.03), ('weights', 0.03), ('estimation', 0.029), ('integer', 0.028), ('mod', 0.028), ('al', 0.028), ('preprint', 0.028), ('rate', 0.027), ('lower', 0.026), ('separated', 0.024), ('minimax', 0.024), ('mum', 0.024), ('regression', 0.023), ('greater', 0.023), ('procedure', 0.022), ('rates', 0.022), ('mixing', 0.022), ('oracle', 0.022), ('near', 0.021), ('appearing', 0.019), ('universit', 0.019), ('residual', 0.019), ('loss', 0.019), ('namely', 0.019), ('hd', 0.018), ('buckland', 0.018), ('cardinal', 0.018), ('nobel', 0.018), ('selection', 0.018), ('sense', 0.018), ('bounded', 0.018), ('springer', 0.018), ('construction', 0.018), ('lemma', 0.017), ('splitting', 0.017), ('ee', 0.017)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 58 jmlr-2006-Lower Bounds and Aggregation in Density Estimation

Author: Guillaume Lecué

Abstract: In this paper we prove the optimality of an aggregation procedure. We prove lower bounds for aggregation of model selection type of M density estimators for the Kullback-Leibler divergence (KL), the Hellinger’s distance and the L1 -distance. The lower bound, with respect to the KL distance, can be achieved by the on-line type estimate suggested, among others, by Yang (2000a). Combining these results, we state that log M/n is an optimal rate of aggregation in the sense of Tsybakov (2003), where n is the sample size. Keywords: aggregation, optimal rates, Kullback-Leibler divergence

2 0.27502608 82 jmlr-2006-Some Theory for Generalized Boosting Algorithms

Author: Peter J. Bickel, Ya'acov Ritov, Alon Zakai

Abstract: We give a review of various aspects of boosting, clarifying the issues through a few simple results, and relate our work and that of others to the minimax paradigm of statistics. We consider the population version of the boosting algorithm and prove its convergence to the Bayes classifier as a corollary of a general result about Gauss-Southwell optimization in Hilbert space. We then investigate the algorithmic convergence of the sample version, and give bounds to the time until perfect separation of the sample. We conclude by some results on the statistical optimality of the L2 boosting. Keywords: classification, Gauss-Southwell algorithm, AdaBoost, cross-validation, non-parametric convergence rate

3 0.096086137 23 jmlr-2006-Consistency and Convergence Rates of One-Class SVMs and Related Algorithms

Author: Régis Vert, Jean-Philippe Vert

Abstract: We determine the asymptotic behaviour of the function computed by support vector machines (SVM) and related algorithms that minimize a regularized empirical convex loss function in the reproducing kernel Hilbert space of the Gaussian RBF kernel, in the situation where the number of examples tends to infinity, the bandwidth of the Gaussian kernel tends to 0, and the regularization parameter is held fixed. Non-asymptotic convergence bounds to this limit in the L2 sense are provided, together with upper bounds on the classification error that is shown to converge to the Bayes risk, therefore proving the Bayes-consistency of a variety of methods although the regularization term does not vanish. These results are particularly relevant to the one-class SVM, for which the regularization can not vanish by construction, and which is shown for the first time to be a consistent density level set estimator. Keywords: regularization, Gaussian kernel RKHS, one-class SVM, convex loss functions, kernel density estimation

4 0.071558617 84 jmlr-2006-Stability Properties of Empirical Risk Minimization over Donsker Classes

Author: Andrea Caponnetto, Alexander Rakhlin

Abstract: We study some stability properties of algorithms which minimize (or almost-minimize) empirical error over Donsker classes of functions. We show that, as the number n of samples grows, the L 2 1 diameter of the set of almost-minimizers of empirical error with tolerance ξ(n) = o(n − 2 ) converges to zero in probability. Hence, even in the case of multiple minimizers of expected error, as n increases it becomes less and less likely that adding a sample (or a number of samples) to the training set will result in a large jump to a new hypothesis. Moreover, under some assumptions on the entropy of the class, along with an assumption of Komlos-Major-Tusnady type, we derive a power rate of decay for the diameter of almost-minimizers. This rate, through an application of a uniform ratio limit inequality, is shown to govern the closeness of the expected errors of the almost-minimizers. In fact, under the above assumptions, the expected errors of almost-minimizers become closer with a rate strictly faster than n−1/2 . Keywords: empirical risk minimization, empirical processes, stability, Donsker classes

5 0.06033808 24 jmlr-2006-Consistency of Multiclass Empirical Risk Minimization Methods Based on Convex Loss

Author: Di-Rong Chen, Tao Sun

Abstract: The consistency of classification algorithm plays a central role in statistical learning theory. A consistent algorithm guarantees us that taking more samples essentially suffices to roughly reconstruct the unknown distribution. We consider the consistency of ERM scheme over classes of combinations of very simple rules (base classifiers) in multiclass classification. Our approach is, under some mild conditions, to establish a quantitative relationship between classification errors and convex risks. In comparison with the related previous work, the feature of our result is that the conditions are mainly expressed in terms of the differences between some values of the convex function. Keywords: multiclass classification, classifier, consistency, empirical risk minimization, constrained comparison method, Tsybakov noise condition

6 0.055850964 90 jmlr-2006-Superior Guarantees for Sequential Prediction and Lossless Compression via Alphabet Decomposition

7 0.055832852 17 jmlr-2006-Bounds for the Loss in Probability of Correct Classification Under Model Based Approximation

8 0.051204488 67 jmlr-2006-On Representing and Generating Kernels by Fuzzy Equivalence Relations

9 0.050006259 48 jmlr-2006-Learning Minimum Volume Sets

10 0.048185337 9 jmlr-2006-Accurate Error Bounds for the Eigenvalues of the Kernel Matrix

11 0.040378217 11 jmlr-2006-Active Learning in Approximately Linear Regression Based on Conditional Expectation of Generalization Error

12 0.038032282 73 jmlr-2006-Pattern Recognition for Conditionally Independent Data

13 0.036342319 93 jmlr-2006-Universal Kernels

14 0.035940308 40 jmlr-2006-Infinite-σ Limits For Tikhonov Regularization

15 0.035459321 46 jmlr-2006-Learning Factor Graphs in Polynomial Time and Sample Complexity

16 0.035426438 56 jmlr-2006-Linear Programs for Hypotheses Selection in Probabilistic Inference Models     (Special Topic on Machine Learning and Optimization)

17 0.034148954 15 jmlr-2006-Bayesian Network Learning with Parameter Constraints     (Special Topic on Machine Learning and Optimization)

18 0.030690614 6 jmlr-2006-A Scoring Function for Learning Bayesian Networks based on Mutual Information and Conditional Independence Tests

19 0.030158972 36 jmlr-2006-In Search of Non-Gaussian Components of a High-Dimensional Distribution

20 0.028939607 87 jmlr-2006-Stochastic Complexities of Gaussian Mixtures in Variational Bayesian Approximation


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.156), (1, -0.045), (2, -0.06), (3, -0.308), (4, -0.143), (5, 0.357), (6, -0.183), (7, 0.086), (8, 0.081), (9, -0.107), (10, -0.179), (11, 0.159), (12, -0.114), (13, 0.205), (14, 0.082), (15, 0.148), (16, 0.093), (17, 0.037), (18, 0.013), (19, 0.001), (20, 0.004), (21, 0.047), (22, 0.058), (23, 0.112), (24, -0.009), (25, 0.158), (26, -0.073), (27, 0.048), (28, 0.206), (29, -0.084), (30, 0.04), (31, -0.066), (32, -0.064), (33, -0.011), (34, -0.048), (35, -0.021), (36, 0.029), (37, 0.042), (38, 0.026), (39, -0.015), (40, 0.022), (41, 0.002), (42, 0.053), (43, -0.005), (44, -0.043), (45, 0.009), (46, 0.014), (47, 0.067), (48, 0.094), (49, 0.016)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95344245 58 jmlr-2006-Lower Bounds and Aggregation in Density Estimation

Author: Guillaume Lecué

Abstract: In this paper we prove the optimality of an aggregation procedure. We prove lower bounds for aggregation of model selection type of M density estimators for the Kullback-Leibler divergence (KL), the Hellinger’s distance and the L1 -distance. The lower bound, with respect to the KL distance, can be achieved by the on-line type estimate suggested, among others, by Yang (2000a). Combining these results, we state that log M/n is an optimal rate of aggregation in the sense of Tsybakov (2003), where n is the sample size. Keywords: aggregation, optimal rates, Kullback-Leibler divergence

2 0.87825418 82 jmlr-2006-Some Theory for Generalized Boosting Algorithms

Author: Peter J. Bickel, Ya'acov Ritov, Alon Zakai

Abstract: We give a review of various aspects of boosting, clarifying the issues through a few simple results, and relate our work and that of others to the minimax paradigm of statistics. We consider the population version of the boosting algorithm and prove its convergence to the Bayes classifier as a corollary of a general result about Gauss-Southwell optimization in Hilbert space. We then investigate the algorithmic convergence of the sample version, and give bounds to the time until perfect separation of the sample. We conclude by some results on the statistical optimality of the L2 boosting. Keywords: classification, Gauss-Southwell algorithm, AdaBoost, cross-validation, non-parametric convergence rate

3 0.32846093 23 jmlr-2006-Consistency and Convergence Rates of One-Class SVMs and Related Algorithms

Author: Régis Vert, Jean-Philippe Vert

Abstract: We determine the asymptotic behaviour of the function computed by support vector machines (SVM) and related algorithms that minimize a regularized empirical convex loss function in the reproducing kernel Hilbert space of the Gaussian RBF kernel, in the situation where the number of examples tends to infinity, the bandwidth of the Gaussian kernel tends to 0, and the regularization parameter is held fixed. Non-asymptotic convergence bounds to this limit in the L2 sense are provided, together with upper bounds on the classification error that is shown to converge to the Bayes risk, therefore proving the Bayes-consistency of a variety of methods although the regularization term does not vanish. These results are particularly relevant to the one-class SVM, for which the regularization can not vanish by construction, and which is shown for the first time to be a consistent density level set estimator. Keywords: regularization, Gaussian kernel RKHS, one-class SVM, convex loss functions, kernel density estimation

4 0.28048944 84 jmlr-2006-Stability Properties of Empirical Risk Minimization over Donsker Classes

Author: Andrea Caponnetto, Alexander Rakhlin

Abstract: We study some stability properties of algorithms which minimize (or almost-minimize) empirical error over Donsker classes of functions. We show that, as the number n of samples grows, the L 2 1 diameter of the set of almost-minimizers of empirical error with tolerance ξ(n) = o(n − 2 ) converges to zero in probability. Hence, even in the case of multiple minimizers of expected error, as n increases it becomes less and less likely that adding a sample (or a number of samples) to the training set will result in a large jump to a new hypothesis. Moreover, under some assumptions on the entropy of the class, along with an assumption of Komlos-Major-Tusnady type, we derive a power rate of decay for the diameter of almost-minimizers. This rate, through an application of a uniform ratio limit inequality, is shown to govern the closeness of the expected errors of the almost-minimizers. In fact, under the above assumptions, the expected errors of almost-minimizers become closer with a rate strictly faster than n−1/2 . Keywords: empirical risk minimization, empirical processes, stability, Donsker classes

5 0.27543908 90 jmlr-2006-Superior Guarantees for Sequential Prediction and Lossless Compression via Alphabet Decomposition

Author: Ron Begleiter, Ran El-Yaniv

Abstract: We present worst case bounds for the learning rate of a known prediction method that is based on hierarchical applications of binary context tree weighting (CTW) predictors. A heuristic application of this approach that relies on Huffman’s alphabet decomposition is known to achieve state-ofthe-art performance in prediction and lossless compression benchmarks. We show that our new bound for this heuristic is tighter than the best known performance guarantees for prediction and lossless compression algorithms in various settings. This result substantiates the efficiency of this hierarchical method and provides a compelling explanation for its practical success. In addition, we present the results of a few experiments that examine other possibilities for improving the multialphabet prediction performance of CTW-based algorithms. Keywords: sequential prediction, the context tree weighting method, variable order Markov models, error bounds

6 0.2283643 17 jmlr-2006-Bounds for the Loss in Probability of Correct Classification Under Model Based Approximation

7 0.21530624 48 jmlr-2006-Learning Minimum Volume Sets

8 0.21345244 24 jmlr-2006-Consistency of Multiclass Empirical Risk Minimization Methods Based on Convex Loss

9 0.20685385 46 jmlr-2006-Learning Factor Graphs in Polynomial Time and Sample Complexity

10 0.1977694 9 jmlr-2006-Accurate Error Bounds for the Eigenvalues of the Kernel Matrix

11 0.1851712 67 jmlr-2006-On Representing and Generating Kernels by Fuzzy Equivalence Relations

12 0.17499332 11 jmlr-2006-Active Learning in Approximately Linear Regression Based on Conditional Expectation of Generalization Error

13 0.16579618 36 jmlr-2006-In Search of Non-Gaussian Components of a High-Dimensional Distribution

14 0.15971334 56 jmlr-2006-Linear Programs for Hypotheses Selection in Probabilistic Inference Models     (Special Topic on Machine Learning and Optimization)

15 0.15328471 73 jmlr-2006-Pattern Recognition for Conditionally Independent Data

16 0.15252003 83 jmlr-2006-Sparse Boosting

17 0.14893533 53 jmlr-2006-Learning a Hidden Hypergraph

18 0.14626347 10 jmlr-2006-Action Elimination and Stopping Conditions for the Multi-Armed Bandit and Reinforcement Learning Problems

19 0.13659656 93 jmlr-2006-Universal Kernels

20 0.13602167 76 jmlr-2006-QP Algorithms with Guaranteed Accuracy and Run Time for Support Vector Machines


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(8, 0.012), (36, 0.039), (45, 0.029), (50, 0.153), (57, 0.442), (63, 0.017), (76, 0.019), (78, 0.017), (81, 0.063), (84, 0.012), (90, 0.028), (91, 0.017), (96, 0.043)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.7125206 58 jmlr-2006-Lower Bounds and Aggregation in Density Estimation

Author: Guillaume Lecué

Abstract: In this paper we prove the optimality of an aggregation procedure. We prove lower bounds for aggregation of model selection type of M density estimators for the Kullback-Leibler divergence (KL), the Hellinger’s distance and the L1 -distance. The lower bound, with respect to the KL distance, can be achieved by the on-line type estimate suggested, among others, by Yang (2000a). Combining these results, we state that log M/n is an optimal rate of aggregation in the sense of Tsybakov (2003), where n is the sample size. Keywords: aggregation, optimal rates, Kullback-Leibler divergence

2 0.36494219 23 jmlr-2006-Consistency and Convergence Rates of One-Class SVMs and Related Algorithms

Author: Régis Vert, Jean-Philippe Vert

Abstract: We determine the asymptotic behaviour of the function computed by support vector machines (SVM) and related algorithms that minimize a regularized empirical convex loss function in the reproducing kernel Hilbert space of the Gaussian RBF kernel, in the situation where the number of examples tends to infinity, the bandwidth of the Gaussian kernel tends to 0, and the regularization parameter is held fixed. Non-asymptotic convergence bounds to this limit in the L2 sense are provided, together with upper bounds on the classification error that is shown to converge to the Bayes risk, therefore proving the Bayes-consistency of a variety of methods although the regularization term does not vanish. These results are particularly relevant to the one-class SVM, for which the regularization can not vanish by construction, and which is shown for the first time to be a consistent density level set estimator. Keywords: regularization, Gaussian kernel RKHS, one-class SVM, convex loss functions, kernel density estimation

3 0.36003914 6 jmlr-2006-A Scoring Function for Learning Bayesian Networks based on Mutual Information and Conditional Independence Tests

Author: Luis M. de Campos

Abstract: We propose a new scoring function for learning Bayesian networks from data using score+search algorithms. This is based on the concept of mutual information and exploits some well-known properties of this measure in a novel way. Essentially, a statistical independence test based on the chi-square distribution, associated with the mutual information measure, together with a property of additive decomposition of this measure, are combined in order to measure the degree of interaction between each variable and its parent variables in the network. The result is a non-Bayesian scoring function called MIT (mutual information tests) which belongs to the family of scores based on information theory. The MIT score also represents a penalization of the Kullback-Leibler divergence between the joint probability distributions associated with a candidate network and with the available data set. Detailed results of a complete experimental evaluation of the proposed scoring function and its comparison with the well-known K2, BDeu and BIC/MDL scores are also presented. Keywords: Bayesian networks, scoring functions, learning, mutual information, conditional independence tests

4 0.35710242 67 jmlr-2006-On Representing and Generating Kernels by Fuzzy Equivalence Relations

Author: Bernhard Moser

Abstract: Kernels are two-placed functions that can be interpreted as inner products in some Hilbert space. It is this property which makes kernels predestinated to carry linear models of learning, optimization or classification strategies over to non-linear variants. Following this idea, various kernel-based methods like support vector machines or kernel principal component analysis have been conceived which prove to be successful for machine learning, data mining and computer vision applications. When applying a kernel-based method a central question is the choice and the design of the kernel function. This paper provides a novel view on kernels based on fuzzy-logical concepts which allows to incorporate prior knowledge in the design process. It is demonstrated that kernels mapping to the unit interval with constant one in its diagonal can be represented by a commonly used fuzzylogical formula for representing fuzzy rule bases. This means that a great class of kernels can be represented by fuzzy-logical concepts. Apart from this result, which only guarantees the existence of such a representation, constructive examples are presented and the relation to unlabeled learning is pointed out. Keywords: kernel, triangular norm, T -transitivity, fuzzy relation, residuum 1. Motivation Positive-definiteness plays a prominent role especially in optimization and machine learning due to the fact that two-place functions with this property, so-called kernels, can be represented as inner products in some Hilbert space. Thereby, optimization techniques conceived on the basis of linear models can be extended to non-linear algorithms. For a survey of applications see, for example, ¨ Jolliffe (1986), Sch¨ lkopf and Smola (2002) and Scholkopf et al. (1998). o Recently in Moser (2006) it was shown that kernels with values from the unit interval can be interpreted as fuzzy equivalence relations motivated by the idea that kernels express a kind of similarity. This means that the concept of fuzzy equivalence relations, or synonymously fuzzy similarity relations, is more general than that of kernels, provided only values in the unit interval are considered. Fuzzy equivalence relations distinguish from Boolean equivalence relations by a many-valued extension of transitivity which can be interpreted as many-valued logical model of the statement “IF x is similar to y AND y is similar to z THEN x is similar to z”. In contrast to the Boolean case, in many-valued logics the set of truth values is extended such that also assertions, for example, whether two elements x and y are similar, can be treated as a matter of degree. The standard model for the set of (quasi) truth values of fuzzy logic and other many-valued logical systems is the unit interval. If E(x, y) represents the (quasi) truth value of the statement that x is c 2006 Bernhard Moser. M OSER similar to y, then the many-valued version of transitivity is modeled by T (E(x, y), E(y, z)) ≤ E(x, z) where T is a so-called triangular norm which is an extension of the Boolean conjunction. This many-valued concept for transitivity is called T -transitivity. For a survey on triangular norms see, for example, Dubois and Prade (1985), Gottwald (1986), Gottwald (1993) and Klement et al. (2000), ¨ and for fuzzy equivalence relations and T -transitivity see, for example, Bodenhofer (2003), H ohle (1993), H¨ hle (1999), Klement et al. (2000), and Zadeh (1971). 
o Based on the semantics of fuzzy logic, this approach allows to incorporate knowledge-based models for the design of kernels. From this perspective, the most interesting mathematical question is how positive-semidefinite fuzzy equivalence relations can be characterized or at least constructed under some circumstances. At least for some special cases, proofs are provided in Section 4, which motivate further research aiming at establishing a more general theory on the positive-definiteness of fuzzy equivalence relations. These cases are based on the most prominent representatives of triangular norms, that is the Minimum, the Product and the Łukasiewicz t-norm. The paper is structured as follows. First of all, in Section 2, some basic prerequisites concerning kernels and fuzzy relations are outlined. In Section 3, a result about the T -transitivity of kernels from Moser (2006) is cited and interpreted as existence statement that guarantees a representation of kernels mapping to the unit interval with constant 1 in its diagonal by a certain, commonly used, fuzzy-logical construction of a fuzzy equivalence relation. Finally, in contrast to the pure existence theorem of Section 3, in Section 4 constructive examples of fuzzy equivalence relations are provided which are proven to be kernels. In a concluding remark, the relationship to the problem of labeled and unlabeled learning is pointed out. 2. Prerequisites This section summarizes definitions and facts from the theory of kernels as well as from fuzzy set theory which are needed later on. 2.1 Kernels and Positive-Semidefiniteness Preserving Functions There is an extensive literature concerning kernels and kernel-based methods like support vector machines or kernel principal component analysis especially in the machine learning, data mining ¨ and computer vision communities. For an overview and introduction, see, for example, Sch olkopf and Smola (2002). Here we present only what is needed later on. For completeness let us recall the basic definition for kernels and positive-semidefiniteness. Definition 1 Let X be a non-empty set. A real-valued function k : X × X → R is said to be a kernel iff it is symmetric, that is, k(x, y) = k(y, x) for all x, y ∈ X , and positive-semidefinite, that is, ∑n j=1 ci c j k(xi , x j ) ≥ 0 for any n ∈ N, any choice of x1 , . . . , xn ∈ X and any choice of c1 , . . . , cn ∈ R. i, One way to generate new kernels from known kernels is to apply operations which preserve the positive-semidefiniteness property. A characterization of such operations is provided by C. H. FitzGerald (1995). Theorem 2 (Closeness Properties of Kernels) Let f : Rn → R, n ∈ N, then k : X × X → R given by k(x, y) := f (k1 (x, y), . . . , kn (x, y)) 2604 G ENERATING K ERNELS BY F UZZY R ELATIONS is a kernel for any choice of kernels k1 , . . . , kn on X × X iff f is the real restriction of an entire function on Cn of the form f (x1 , . . . , xn ) = ∑ r1 ≥0,...,rn ≥0 r r cr1 ,...,rn x11 · · · xnn (1) where cr1 ,...,rn ≥ 0 for all nonnegative indices r1 , . . . , rn . 2.2 Triangular Norms Triangular norms have been originally studied within the framework of probabilistic metric spaces, see Schweizer and Sklar (1961) and Schweizer and Sklar (1983). In this context, t-norms proved to be an appropriate concept when dealing with triangle inequalities. 
Later on, t-norms and their dual version, t-conorms, have been used to model conjunction and disjunction for many-valued logic, see Dubois and Prade (1985), Gottwald (1986), Gottwald (1993) and Klement et al. (2000). Definition 3 A function T : [0, 1]2 → [0, 1] is called t-norm (triangular norm), if it satisfies the following conditions: (i) (ii) (iii) (iv) ∀x, y ∈ [0, 1] : ∀x, y, z ∈ [0, 1] : ∀x, y, z ∈ [0, 1] : ∀x, y ∈ [0, 1] : T (x, y) = T (y, x) T (x, T (y, z)) = T (T (x, y), z) y ≤ z =⇒ T (x, y) ≤ T (x, z) T (x, 1) = x ∧ T (1, y) = y (commutativity) (associativity) (monotonicity) (boundary condition) Further, a t-norm is called Archimedean if it is continuous and satisfies x ∈ (0, 1) ⇒ T (x, x) < x. Due to its associativity, many-placed extensions Tn : [0, 1]n → [0, 1], n ∈ N, of a t-norm T are uniquely determined by Tn (x1 , . . . , xn ) = T (x1 , Tn−1 (x2 , . . . , xn )). Archimedean t-norms are characterized by the following representation theorem due to Ling (1965): Theorem 4 Let T : [0, 1]2 → [0, 1] be a t-norm. Then T is Archimedean if, and only if, there is a continuous, strictly decreasing function f : [0, 1] → [0, ∞] with f (1) = 0 such that for x, y ∈ [0, 1], T (x, y) = f −1 (min( f (x) + f (y), f (0))). By setting g(x) = exp (− f (x)), Ling’s characterization yields an alternative representation with a multiplicative generator function T (x, y) = g−1 (max(g(x) g(y), g(0))). For g(x) = x we get the product TP (x, y) = x y. The setting f (x) = 1 − x yields the so-called Łukasiewcz t-norm TL (x, y) = max(x + y − 1, 0). Due to Ling’s theorem 4 an Archimedean t-norm T is isomorphic either to TL or TP , depending on whether the additive generator takes a finite value at 0 or not. In the former case, the Archimedean t-norm is called non-strict, in the latter it is called strict. 2605 M OSER A many-valued model of an implication is provided by the so-called residuum given by → T (a, b) = sup{c ∈ [0, 1]|T (a, c) ≤ b} (2) where T is a left-continuous t-norm. Equation (2) is uniquely determined by the so-called adjunction property → ∀a, b, c ∈ [0, 1] : T (a, b) ≤ c ⇔ a ≤ T (b, c). Consequently, the operator ↔ → → T (a, b) = min T (a, b), T (b, a) (3) (4) models a biimplication. For details, for example, see Gottwald (1986) and Klement et al. (2000). → Tables 1 and 2 list examples of t-norms with their induced residuum T . For further examples see, for example, Klement et al. (2000). √ √ Tcos (a, b) = max(ab − 1 − a2 1 − b2 , 0) TL (a, b) = max(a + b − 1, 0) TP (a, b) = ab TM (a, b) = min(a, b) Table 1: Examples of t-norms → T cos (a, b) = → T L (a, b) = → = T P (a, b) → T M (a, b) = cos(arccos(b) − arccos(a)) if a > b, 1 else min(b − a + 1, 1) b if a > b, a 1 else b if a > b, 1 else Table 2: Examples of residuums 2.3 T -Equivalences If we want to classify based on a notion of similarity or indistinguishability, we face the problem of transitivity. For instance, let us consider two real numbers to be indistinguishable if and only if they differ by at most a certain bound ε > 0, this is modeled by the relation ∼ ε given by x ∼ε y :⇔ |x−y| < ε, ε > 0, x, y ∈ R. Note that the relation ∼ε is not transitive and, therefore, not an equivalence relation. The transitivity requirement turns out to be too strong for this example. The problem of identification and transitivity in the context of similarity of physical objects was early pointed out and discussed philosophically by Poincar´ (1902) and Poincar´ (1904). 
In the framework of fuzzy e e logic, the way to overcome this problem is to model similarity by fuzzy relations based on a many¨ valued concept of transitivity, see Bodenhofer (2003), H ohle (1993), H¨ hle (1999), Klement et al. o (2000) and Zadeh (1971). 2606 G ENERATING K ERNELS BY F UZZY R ELATIONS Definition 5 A function E : X 2 −→ [0, 1] is called a fuzzy equivalence relation, or synonymously, T -equivalence with respect to the t-norm T if it satisfies the following conditions: (i) ∀x ∈ X : E(x, x) = 1 (reflexivity) (ii) ∀x, y ∈ X : E(x, y) = E(y, x) (symmetry) (iii) ∀x, y, z ∈ X : T (E(x, y), E(y, z)) ≤ E(x, z) (T-transitivity). The value E(x, y) can be also looked at as the (quasi) truth value of the statement “x is equal to y”. Following this semantics, T-transitivity can be seen as a many-valued model of the proposition, “If x is equal to y and y is equal to z, then x is equal to z”. T -equivalences for Archimedean t-norms are closely related to metrics and pseudo-metrics as shown by Klement et al. (2000) and Moser (1995). Theorem 6 Let T be an Archimedean t-norm given by ∀a, b ∈ [0, 1] : T (a, b) = f −1 (min( f (a) + f (b), f (0))), where f : [0, 1] → [0, ∞] is a strictly decreasing, continuous function with f (1) = 0. (i) If d : X 2 → [0, ∞[ is a pseudo-metric, then the function Ed : X 2 → [0, 1] defined by Ed (x, y) = f −1 (min(d(x, y), f (0))) is a T -equivalence with respect to the t-norm T . (ii) If E : X 2 → [0, 1] is a T -equivalence relation, then the function dE : X 2 → [0, ∞] defined by dE (x, y) = f (E(x, y)) is a pseudo-metric. → Another way to construct T -equivalences is to employ T -operators. The proof of the following assertion can be found in Trillas and Valverde (1984), Kruse et al. (1993) and Kruse et al. (1994). ↔ Theorem 7 Let T be a left-continuous t-norm, T its induced biimplication, µi : X → [0, 1], i ∈ I, I non-empty; then E : X × X → [0, 1] given by ↔ E(x, y) = inf T (µi (x), µi (y)) i∈I (5) is a T -equivalence relation. ¨ For further details on T -equivalences see also Boixader and Jacas (1999), H oppner et al. (2002), Jacas (1988), Trillas et al. (1999) and Valverde (1985). 3. Representing Kernels by T -Equivalences It is interesting that the concept of kernels, which is motivated by geometric reasoning in terms of inner products and mappings to Hilbert spaces and which is inherently formulated by algebraic terms, is closely related to the concept of fuzzy equivalence relations as demonstrated and discussed in more detail in Moser (2006). In this section, we start with the result that any kernel k : X × X → [0, 1] with k(x, x) = 1 for all x ∈ X is T -transitive and, therefore, a fuzzy equivalence relation. The proof can be found in Moser (2006), see also Appendix A.1. 2607 M OSER Theorem 8 Any kernel k : X × X → [0, 1] with k(x, x) = 1 is (at least) Tcos -transitive, where 1 − a2 Tcos (a, b) = max{a b − 1 − b2 , 0}. (6) The nomenclature is motivated by the fact that the triangular norm defined by Equation (6) is an Archimedean t-norm which is generated by the arcosine function as its additive generator. From this result, the following existence theorem can be derived, which guarantees that any kernel under consideration can be represented by the fuzzy-logical formula given by (5). In fuzzy systems, this formula is commonly used for modeling rule bases (see, for example, Kruse et al., 1993, 1994). 
Theorem 9 Let X be a non-empty universe of discourse, k : X × X → [0, 1] a kernel in the sense of Definition 1 and k(x, x) = 1 for all x ∈ X ; then there is a family of membership functions µ i : X → [0, 1], i ∈ I, I non-empty and a t-norm T , such that ↔ ∀x, y ∈ X : k(x, y) = inf T (µi (x), µi (y)). i∈I (7) Proof. Let us set I := X , µx0 (x) = k(x, x0 ) and let us choose Tcos as t-norm. For convenience let us denote ↔ h(x, y) = inf T cos (µx0 (x), µx0 (y)), x0 ∈X which is equivalent to ↔ h(x, y) = inf T cos (k(x0 , x), k(x0 , y)). x0 ∈X According to Theorem 8, k is Tcos -transitive, that is, ↔ ∀x0 , x, y ∈ X : T cos (k(x0 , x), k(x0 , y)) ≤ k(x, y). This implies that h(x, y) ≤ k(x, y) for all x, y ∈ X . Now let us consider the other inequality. Due to the adjunction property (3), we obtain → Tcos (k(x, y), k(x0 , y)) ≤ k(x, x0 ) ⇔ k(x, y) ≤ T cos (k(x0 , y), k(x, x0 )) and → Tcos (k(x, y), k(x0 , x)) ≤ k(y, x0 ) ⇔ k(x, y) ≤ T cos (k(x0 , x), k(y, x0 )), from which it follows that → → ∀x, y, x0 ∈ X : k(x, y) ≤ min{ T cos (k(x0 , y), k(x, x0 )), T cos (k(x0 , x), k(y, x0 ))}. Hence by Definition 4, ∀x, y ∈ X : k(x, y) ≤ h(x, y) which ends the proof. For an arbitrary choice of fuzzy membership functions, there is no necessity that the resulting relation (7) implies positive-semidefiniteness and, therefore, a kernel. For an example of a Tcos equivalence which is not a kernel see Appendix A.4. Theorem 9 guarantees only the existence of a representation of the form (5) but it does not tell us how to construct the membership functions µ i . In the following section, we provide examples of fuzzy equivalence relations which yield kernels for any choice of membership functions. 2608 G ENERATING K ERNELS BY F UZZY R ELATIONS 4. Constructing Kernels by Fuzzy Equivalence Relations In the Boolean case, positive-definiteness and equivalence are synonymous, that is, a Boolean relation R : X × X → {0, 1} is positive-definite if and only if R is the indicator function of an equivalence relation ∼ that is, R(x, y) = 1 if x ∼ y and R(x, y) = 0 if x ∼ y. For a proof, see Appendix A.2. This = = =, relationship can be used to obtain an extension to fuzzy relations as given by the next theorem whose proof can be found in the Appendix A.3. Theorem 10 Let X be a non-empty universe of discourse, µ i : X → [0, 1], i ∈ I, I non-empty; then the fuzzy equivalence relation EM : X × X → [0, 1] given by ↔ EM (x, y) = inf T M (µi (x), µi (y)) i∈I is positive-semidefinite. In the following, the most prominent representatives of Archimedean t-norms, the Product TP and the Łukasiewicz t-norm TL , are used to construct positive-semidefinite fuzzy similarity relations. Though the first part can also be derived from a result due to Yaglom (1957) that characterizes isotropic stationary kernels by its spectral representation, here we prefer to present a direct, elementary proof. Compare also Bochner (1955) and Genton (2001). Theorem 11 Let X be a non-empty universe of discourse, ν : X → [0, 1] and let h : [0, 1] → [0, 1] be an isomorphism of the unit interval that can be expanded in the manner of Equation (1), that is h(x) = ∑k ck xk with ck ≥ 0; then the fuzzy equivalence relations EL,h , EP,h : X × X → [0, 1] given by ↔ EL,h (x, y) = h T L h−1 (ν(x)) , h−1 (ν(y)) and ↔ EP,h (x, y) = h T P h−1 (ν(x)) , h−1 (ν(y)) (8) (9) are positive-semidefinite. Proof. 
To prove the positive-definiteness of the two-placed functions E L,h and EP,h given by equations (8) and (9) respectively, we have to show that n n ∑ i, j=1 EL,h (xi , xi ) ci c j ≥ 0, ∑ i, j=1 EP,h (xi , x j ) ci c j ≥ 0 for any n ∈ N and any choice of x1 , . . . , xn ∈ X , respectively. According to an elementary result from Linear Algebra this is equivalent to the assertion that the determinants (1 ≤ m ≤ n) Dm = det (E(xi , x j ))i, j∈{1,...,m} of the minors of the matrix (E(xi , x j ))i, j satisfy ∀m ∈ {1, . . . , n} : Dm ≥ 0, where E denotes either EL,h or EP,h . Recall that the determinant of a matrix is invariant with respect to renaming the indices, that is, if σ : {1, . . . , n} → {1, . . . , n} is a permutation then det [(ai j )i, j ] = det (aσ(i)σ( j) )i, j . 2609 M OSER For convenience, let denote µi = h−1 (ν(xi )). Then, without loss of generality, we may assume that the values µi are ordered monotonically decreasing, that is, µi ≥ µ j for i < j. ↔ → (10) → Case TL : Note that T L (a, b) = min{ T L (a, b), T L (b, a)} = 1 − |a − b|. Then we have to show that for all dimensions n ∈ N, the determinant of E (n) = (1 − |µi − µ j |)i, j∈{1,...,n} is non-negative, that is Due to the assumption (10), we have det[E (n) ] ≥ 0. 1 − |µi − µ j | = 1 − (µi − µ j ) if i ≤ j, 1 − (µ j − µi ) else which yields   . . . 1 − (µ1 − µn−1 ) 1 − (µ1 − µn )  . . . 1 − (µ2 − µn−1 ) 1 − (µ2 − µn )     . . . 1 − (µ3 − µn−1 ) 1 − (µ3 − µn )    (n) E = . . . .. . .   . . .   1 − (µ1 − µn−1 ) 1 − (µ2 − µn−1 ) . . . 1 1 − (µn−1 − µn ) 1 − (µ1 − µn ) 1 − (µ2 − µn ) . . . 1 − (µn−1 − µn ) 1 1 − (µ1 − µ2 ) 1 1 − (µ2 − µ3 ) . . . 1 1 − (µ1 − µ2 ) 1 − (µ1 − µ3 ) . . . Now let us apply determinant-invariant elementary column operations to simplify this matrix by subtracting the column with index i − 1 from the column with index i, i ≥ 2. This yields   1 µ2 − µ1 ... µn−1 − µn−2 µn − µn−1  1 − (µ1 − µ2 ) −(µ2 − µ1 ) . . . µn−1 − µn−2 µn − µn−1     1 − (µ1 − µ3 ) −(µ2 − µ1 ) . . . µn−1 − µn−2 µn − µn−1    ˜ E (n) =  . . . . . .. . . . .   . . . . .   1 − (µ1 − µn−1 ) −(µ2 − µ1 ) . . . −(µn−2 − µn−1 ) µn − µn−1  1 − (µ1 − µn ) −(µ2 − µ1 ) . . . −(µn−2 − µn−1 ) −(µn−1 − µn ) Therefore, α = n ∏(µi−1 − µi ) ≥ 0 (11) i=2 ˜ ˆ det[E (n) ] = det[E (n) ] = α det[En ], where   . . . −1 −1  . . . −1 −1    . . . −1 −1   (n) ˆ E = . . . .. . .   . . .   1 − (µ1 − µn−1 ) +1 . . . +1 −1 1 − (µ1 − µn ) +1 . . . +1 +1 1 1 − (µ1 − µ2 ) 1 − (µ1 − µ3 ) . . . 2610 −1 +1 +1 . . . (12) G ENERATING K ERNELS BY F UZZY R ELATIONS Let us apply Laplacian determinant expansion by minors to the first column of matrix (12), that is n det[A] = ∑ (−1)i+ j ai j det[Ai j ] i=1 where A = (ai j ) is an n × n-matrix, j arbitrarily chosen from {1, . . . , n} and Ai j is the matrix corresponding to the cofactor ai j obtained by canceling out the i-th row and the j-th column from A (see, ˆ for example, Muir, 1960). For n = 1, we get the trivial case det[ E (1) ] = 1. Note that the first and (n) ˆ the last rows of the matrices Ei,1 for 1 < i < n only differ by their signum, consequently the minors ˆ (n) det[Ei,1 ] for 1 < i < n, n ≥ 2, are vanishing, that is, det[Ai,1 ] = 0, for 1 < i < n. Therefore, according to the Laplacian expansion, we get (n) (n) ˆ ˆ ˆ det[E (n) ] = 1 · det[E1,1 ] + (−1)n (1 − (µ1 − µn )) · det[E1,n ]. (13) Observe that (n) ˆ det[E1,1 ] = 2n−2 (n) ˆ det[E1,n ] = (−1)n−1 2n−2 . 
Case T_P: First of all, let us compute $\overleftrightarrow{T}_P(a, b) = \min\{\overrightarrow{T}_P(a, b), \overrightarrow{T}_P(b, a)\}$. Hence,

$$\overleftrightarrow{T}_P(a, b) = \begin{cases} \min\{\tfrac{b}{a}, \tfrac{a}{b}\} & \text{if } a, b > 0, \\ 0 & \text{if } a = 0 \text{ and } b > 0, \\ 0 & \text{if } b = 0 \text{ and } a > 0, \\ 1 & \text{if } a = 0 \text{ and } b = 0. \end{cases}$$

Again, without loss of generality, let us suppose that the values µ_i, i ∈ {1, ..., n}, are ordered monotonically decreasing, that is, µ_1 ≥ µ_2 ≥ ... ≥ µ_n. Before checking the general case, let us consider the special case of vanishing µ-values. For this, let us assume for the moment that

$$\mu_i \begin{cases} > 0 & \text{if } i < i_0, \\ = 0 & \text{else,} \end{cases}$$

which implies that $\overleftrightarrow{T}_P(\mu_i, \mu_j) = 0$ for i < i_0 and j ≥ i_0, and $\overleftrightarrow{T}_P(\mu_i, \mu_j) = 1$ for i ≥ i_0 and j ≥ i_0. This leads to a decomposition of the matrix $E^{(n)} = (\overleftrightarrow{T}_P(\mu_i, \mu_j))_{ij}$ such that

$$\det[E^{(n)}] = \det[E^{(i_0-1)}] \cdot \det[I_{n-i_0-1}],$$

where I_k denotes the k × k matrix with constant entries 1; hence det[I_{n−i_0−1}] ∈ {0, 1}. Therefore, we may assume that µ_1 ≥ µ_2 ≥ ... ≥ µ_n > 0. Then we have to show that for all dimensions n ∈ N, the determinant of

$$E^{(n)} = \left(\min\left\{\frac{\mu_i}{\mu_j}, \frac{\mu_j}{\mu_i}\right\}\right)_{i,j \in \{1,\ldots,n\}}$$

is non-negative, that is, det[E^{(n)}] ≥ 0. Consider

$$E^{(n)} = \begin{pmatrix}
1 & \frac{\mu_2}{\mu_1} & \frac{\mu_3}{\mu_1} & \cdots & \frac{\mu_{n-1}}{\mu_1} & \frac{\mu_n}{\mu_1} \\
\frac{\mu_2}{\mu_1} & 1 & \frac{\mu_3}{\mu_2} & \cdots & \frac{\mu_{n-1}}{\mu_2} & \frac{\mu_n}{\mu_2} \\
\frac{\mu_3}{\mu_1} & \frac{\mu_3}{\mu_2} & 1 & \cdots & \frac{\mu_{n-1}}{\mu_3} & \frac{\mu_n}{\mu_3} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
\frac{\mu_{n-1}}{\mu_1} & \frac{\mu_{n-1}}{\mu_2} & \frac{\mu_{n-1}}{\mu_3} & \cdots & 1 & \frac{\mu_n}{\mu_{n-1}} \\
\frac{\mu_n}{\mu_1} & \frac{\mu_n}{\mu_2} & \frac{\mu_n}{\mu_3} & \cdots & \frac{\mu_n}{\mu_{n-1}} & 1
\end{pmatrix}. \tag{14}$$

Now, multiply the i-th column by −µ_{i+1}/µ_i and add it to the (i + 1)-th column of matrix (14), 1 ≤ i < n; then we get

$$\tilde{E}^{(n)} = \begin{pmatrix}
1 & 0 & 0 & \cdots & 0 & 0 \\
* & 1-\left(\frac{\mu_2}{\mu_1}\right)^{2} & 0 & \cdots & 0 & 0 \\
* & * & 1-\left(\frac{\mu_3}{\mu_2}\right)^{2} & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
* & * & * & \cdots & 1-\left(\frac{\mu_{n-1}}{\mu_{n-2}}\right)^{2} & 0 \\
* & * & * & \cdots & * & 1-\left(\frac{\mu_n}{\mu_{n-1}}\right)^{2}
\end{pmatrix}, \tag{15}$$

where ∗ is a placeholder for any real value. By this, the determinant of the matrix in Equation (15) readily turns out to be

$$\det[E^{(n)}] = \det[\tilde{E}^{(n)}] = \prod_{i=1}^{n-1}\left(1 - \left(\frac{\mu_{i+1}}{\mu_i}\right)^{2}\right) \ge 0,$$

which together with Theorem 2 ends the proof.

Note that relations (8) and (9) are T-transitive with respect to the corresponding isomorphic Archimedean t-norms, T_{L,h}(x, y) = h(T_L(h^{-1}(x), h^{-1}(y))) and T_{P,h}(x, y) = h(T_P(h^{-1}(x), h^{-1}(y))), respectively.

Corollary 12 Let X be a non-empty universe of discourse, µ_i : X → [0, 1] and λ_i ∈ ]0, 1] with ∑_i λ_i = 1, where i ∈ {1, ..., n}, n ∈ N; then the fuzzy equivalence relations Ẽ_L, Ẽ_P : X × X → [0, 1] given by

$$\tilde{E}_L(x, y) = \sum_{i=1}^{n} \lambda_i\, \overleftrightarrow{T}_L(\mu_i(x), \mu_i(y)) \tag{16}$$

and

$$\tilde{E}_P(x, y) = \prod_{i=1}^{n} \overleftrightarrow{T}_P(\mu_i(x), \mu_i(y))^{\lambda_i} \tag{17}$$

are T_L- and T_P-equivalences, respectively, and kernels.

Proof. First of all, let us check the T_L-transitivity of formula (16). This can readily be shown by means of the definition of T_L and the T_L-transitivity of $\overleftrightarrow{T}_L$ due to the following inequalities:

$$
\begin{aligned}
T_L\bigl(\tilde{E}_L(x, y),\, \tilde{E}_L(y, z)\bigr)
&= \max\Bigl\{\sum_{i=1}^{n} \lambda_i \overleftrightarrow{T}_L(\mu_i(x), \mu_i(y)) + \sum_{i=1}^{n} \lambda_i \overleftrightarrow{T}_L(\mu_i(y), \mu_i(z)) - 1,\; 0\Bigr\} \\
&\le \sum_{i=1}^{n} \lambda_i \max\Bigl\{\overleftrightarrow{T}_L(\mu_i(x), \mu_i(y)) + \overleftrightarrow{T}_L(\mu_i(y), \mu_i(z)) - 1,\; 0\Bigr\} \\
&= \sum_{i=1}^{n} \lambda_i\, T_L\bigl(\overleftrightarrow{T}_L(\mu_i(x), \mu_i(y)),\, \overleftrightarrow{T}_L(\mu_i(y), \mu_i(z))\bigr) \\
&\le \sum_{i=1}^{n} \lambda_i\, \overleftrightarrow{T}_L(\mu_i(x), \mu_i(z)) = \tilde{E}_L(x, z).
\end{aligned}
$$

This, together with the T_P-transitivity of $\overleftrightarrow{T}_P$, proves that the formulas given by (16) and (17) are T_L- and T_P-equivalences, respectively. Expanding the factors of formula (17) yields

$$\overleftrightarrow{T}_P(\mu_i(x), \mu_i(y))^{\lambda_i} = \begin{cases} 1 & \text{if } \mu_i(x) = \mu_i(y) = 0, \\[4pt] \dfrac{\min\bigl(\mu_i^{\lambda_i}(x),\, \mu_i^{\lambda_i}(y)\bigr)}{\max\bigl(\mu_i^{\lambda_i}(x),\, \mu_i^{\lambda_i}(y)\bigr)} & \text{else,} \end{cases} \tag{18}$$

which, by comparison with the case T_P in the proof of Theorem 11, shows that the left-hand side of Equation (18) is positive-semidefinite. As the convex combination and the product are special cases of positive-semidefiniteness preserving functions according to Theorem 1, the functions defined by Equations (16) and (17) prove to be again positive-semidefinite and, therefore, kernels.
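As a quick sanity check of Corollary 12 (a sketch, not from the paper; the membership functions and weights below are illustrative assumptions), one can build the Gram matrices of Ẽ_L and Ẽ_P on a grid of points and inspect their spectra:

```python
import numpy as np

def bires_L(a, b):
    # Lukasiewicz biresiduum: 1 - |a - b|.
    return 1.0 - abs(a - b)

def bires_P(a, b):
    # Product biresiduum, including the zero cases stated in the proof of Theorem 11.
    if a == 0.0 and b == 0.0:
        return 1.0
    if a == 0.0 or b == 0.0:
        return 0.0
    return min(a, b) / max(a, b)

def E_L(x, y, mus, lambdas):
    # Convex combination of biresiduums, formula (16).
    return sum(l * bires_L(mu(x), mu(y)) for mu, l in zip(mus, lambdas))

def E_P(x, y, mus, lambdas):
    # Weighted product of biresiduums, formula (17).
    return float(np.prod([bires_P(mu(x), mu(y)) ** l for mu, l in zip(mus, lambdas)]))

mus = [lambda x: 1.0 / (1.0 + x * x),
       lambda x: float(np.exp(-abs(x))),
       lambda x: 0.5 + 0.5 * float(np.tanh(x))]
lambdas = [0.5, 0.3, 0.2]                     # positive weights summing to 1

pts = np.linspace(-2.0, 2.0, 30)
for name, E in (("E_L (16)", E_L), ("E_P (17)", E_P)):
    K = np.array([[E(x, y, mus, lambdas) for y in pts] for x in pts])
    print(name, "min eigenvalue:", np.linalg.eigvalsh(K).min())
```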
It is interesting to observe that both formulas (16) and (17) can be expressed in the form f(‖τ(x) − τ(y)‖₁), where f : I → [0, 1], I some interval, is a strictly decreasing function, τ : X → I^n, τ(x) = (τ_1(x), ..., τ_n(x)) and ‖τ(x)‖₁ = ∑_{i=1}^{n} |τ_i(x)|. Indeed, for Equation (16) let us define

$$f_L : [0, 1] \to [0, 1], \quad f_L(a) = 1 - a, \qquad \tau_L : X \to [0, 1]^n, \quad \tau_L(x) = (\lambda_1 \mu_1(x), \ldots, \lambda_n \mu_n(x)),$$

and for Equation (17) and positive membership functions µ_i, µ_i(x) > 0 for all x ∈ X, let us define

$$f_P : [0, \infty[\; \to [0, 1], \quad f_P(a) = e^{-a}, \qquad \tau_P : X \to\; ]-\infty, 1]^n, \quad \tau_P(x) = (\lambda_1 \ln(\mu_1(x)), \ldots, \lambda_n \ln(\mu_n(x))).$$

Therefore, we get

$$\tilde{E}_L(x, y) = 1 - \|\tau_L(x) - \tau_L(y)\|_1 \tag{19}$$

and

$$\tilde{E}_P(x, y) = e^{-\|\tau_P(x) - \tau_P(y)\|_1}. \tag{20}$$

While formulas (19) and (20) provide a geometrical interpretation by means of the norm ‖·‖₁, the corresponding formulas (16) and (17) yield a semantical model of the assertion "IF x is equal to y with respect to feature µ_1 AND ... AND x is equal to y with respect to feature µ_n THEN x is equal to y" as an aggregation of biimplications in terms of fuzzy logic. While in the former case the aggregation has some compensatory effect, the latter is just a conjunction in terms of the product triangular norm. For details on aggregation operators see, for example, Saminger et al. (2002) and Calvo et al. (2002).

The formulas (16) and (17) coincide in the following special case. If the membership functions µ_i are indicator functions of sets A_i ⊆ X which form a partition of X, then the kernels (16) and (17) reduce to the indicator function characterizing the Boolean equivalence relation induced by this partition {A_1, ..., A_n}. The formulas (16) and (17) for general membership functions therefore provide kernels which can be interpreted to be induced by a family of fuzzy sets and, in particular, by fuzzy partitions, that is, families of fuzzy sets fulfilling some criteria which extend the axioms for a Boolean partition in a many-valued logical sense. For definitions and further details on fuzzy partitions see, for example, De Baets and Mesiar (1998), Demirci (2003) and Höppner and Klawonn (2003).

It is a frequently used paradigm that the decision boundaries for a classification problem lie between clusters rather than intersecting them. Due to this cluster hypothesis, the problem of designing kernels based on fuzzy partitions is closely related to the problem of learning kernels from unlabeled data. For further details on semi-supervised learning see, for example, Seeger (2002), Chapelle et al. (2003) and T. M. Huang (2006). It is left to future research to explore this relationship to the problem of learning from labeled and unlabeled data and related concepts like covariance kernels.
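Returning to the reformulation above, the following short snippet (illustrative membership functions and weights; not part of the paper) confirms numerically that formula (16) agrees with its ℓ1-norm form (19) and, for strictly positive memberships, formula (17) agrees with (20):

```python
import numpy as np

mus = [lambda x: 1.0 / (1.0 + x * x), lambda x: float(np.exp(-abs(x)))]
lambdas = np.array([0.6, 0.4])

def tau_L(x):
    # Feature map of formula (19).
    return lambdas * np.array([mu(x) for mu in mus])

def tau_P(x):
    # Feature map of formula (20); requires mu_i(x) > 0.
    return lambdas * np.log(np.array([mu(x) for mu in mus]))

def E_L(x, y):
    # Formula (16) with the Lukasiewicz biresiduum 1 - |a - b|.
    return float(np.sum(lambdas * (1.0 - np.abs([mu(x) - mu(y) for mu in mus]))))

def E_P(x, y):
    # Formula (17) with the product biresiduum min/max for positive arguments.
    vx = np.array([mu(x) for mu in mus])
    vy = np.array([mu(y) for mu in mus])
    return float(np.prod((np.minimum(vx, vy) / np.maximum(vx, vy)) ** lambdas))

x, y = 0.3, -1.2
assert np.isclose(E_L(x, y), 1.0 - np.abs(tau_L(x) - tau_L(y)).sum())       # matches (19)
assert np.isclose(E_P(x, y), np.exp(-np.abs(tau_P(x) - tau_P(y)).sum()))    # matches (20)
print("formulas (16)/(17) coincide with their l1-norm forms (19)/(20)")
```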
5. Conclusion

In this paper, we have presented a novel, fuzzy-logic-based view on kernels. In particular, the similarity-measure aspect of a kernel is addressed and investigated by means of the so-called T-transitivity, which is characteristic of fuzzy equivalence relations. As a consequence, we derived that a large class of kernels can be represented in a way that is commonly used for representing fuzzy rule bases. In addition to this proof of the existence of such a representation, constructive examples are presented. The idea of this research is to combine knowledge-based strategies with kernel-based methods in order to facilitate a more flexible kernel design process that also allows prior knowledge to be incorporated. Further research aims at analyzing the behavior of kernels constructed in this way when applied in various kernel methods such as support vector machines, kernel principal component analysis and others. In particular, it is intended to focus on the problem of learning kernels from unlabeled data, where the fuzzy partitions are induced by appropriate clustering principles.

Acknowledgments

Bernhard Moser gratefully acknowledges partial support by the Austrian Government, the State of Upper Austria, and the Johannes Kepler University Linz in the framework of the Kplus Competence Center Program. Furthermore, special thanks go to the anonymous reviewers who gave helpful suggestions and to Felix Kossak for careful proof-reading.

Appendix A.

For the sake of completeness, the following sections provide proofs regarding Theorem 8, the characterization of kernels in the Boolean case and the construction of kernels by means of the minimum t-norm T_M. Furthermore, in Section A.4 an example of a non-positive-semidefinite T_cos-equivalence is given.

A.1 Proof of Theorem 8

Let us start with the analysis of 3-dimensional matrices.

Lemma 13 Let M = (m_{ij})_{ij} ∈ [0, 1]^{3×3} be a 3 × 3 symmetric matrix with m_{ii} = 1, i = 1, 2, 3; then M is positive-semidefinite iff for all i, j, k ∈ {1, 2, 3} there holds

$$m_{ij}\, m_{jk} - \sqrt{1 - m_{ij}^{2}}\, \sqrt{1 - m_{jk}^{2}} \le m_{ik}.$$

Proof. For simplicity, let a = m_{1,2}, b = m_{1,3} and c = m_{2,3}. Then the determinant of M is a function of the variables a, b, c given by

$$D(a, b, c) = 1 + 2abc - a^2 - b^2 - c^2.$$

For any choice of a, b, the quadratic equation D(a, b, c) = 0 can be solved for c, yielding two solutions c_1 = c_1(a, b) and c_2 = c_2(a, b) as functions of a and b,

$$c_1(a, b) = ab - \sqrt{1 - a^2}\,\sqrt{1 - b^2}, \qquad c_2(a, b) = ab + \sqrt{1 - a^2}\,\sqrt{1 - b^2}.$$

Obviously, for all |a| ≤ 1 and |b| ≤ 1, the values c_1(a, b) and c_2(a, b) are real. By substituting a = cos(α) and b = cos(β) with α, β ∈ [0, π/2], it becomes readily clear that

$$c_1(a, b) = c_1(\cos(\alpha), \cos(\beta)) = \cos(\alpha)\cos(\beta) - \sin(\alpha)\sin(\beta) = \cos(\alpha + \beta) \in [-1, 1]$$

and, analogously,

$$c_2(a, b) = c_2(\cos(\alpha), \cos(\beta)) = \cos(\alpha)\cos(\beta) + \sin(\alpha)\sin(\beta) = \cos(\alpha - \beta) \in [-1, 1].$$

As for all a, b ∈ [−1, 1] the determinant function D_{a,b}(c) := D(a, b, c) is quadratic in c with negative coefficient for c², there is a uniquely determined maximum at c_0(a, b) = ab. Note that for all a, b ∈ [−1, 1], we have c_1(a, b) ≤ c_0(a, b) ≤ c_2(a, b) and

$$D(a, b, c_0(a, b)) = 1 + 2ab(ab) - a^2 - b^2 - (ab)^2 = (1 - a^2)(1 - b^2) \ge 0.$$

Therefore, D(a, b, c) ≥ 0 if and only if c ∈ [c_1(a, b), c_2(a, b)]. Recall from linear algebra that the determinant does not change by renaming the indices. Therefore, without loss of generality, we may assume that a ≥ b ≥ c. For convenience, let Q = {(x, y, z) ∈ [0, 1]³ | x ≥ y ≥ z}. Then, obviously, for any choice of a, b ∈ [0, 1] there holds (a, b, c_1(a, b)) ∈ Q. Elementary algebra shows that (a, b, c_2(a, b)) ∈ Q is only the case for a = b = 1. As for a = b = 1 the two solutions c_1, c_2 coincide, that is, c_1(1, 1) = c_2(1, 1) = 1, it follows that for any choice of (a, b, c) ∈ Q, there holds

$$D(a, b, c) \ge 0 \quad \text{if and only if} \quad c_1(a, b) = ab - \sqrt{1 - a^2}\,\sqrt{1 - b^2} \le c. \tag{21}$$

If (a, b, c) ∉ Q, then the inequality (21) is trivially satisfied, which together with (21) proves the lemma.

Now Theorem 8 immediately follows from Definition 1, Lemma 13 and the characterizing inequality (21).
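A small randomized check of Lemma 13 (not from the paper; the sample size and numerical tolerances are arbitrary choices) compares the characterizing inequality with a direct eigenvalue test:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

def lemma13_condition(M, tol=1e-12):
    # m_ij * m_jk - sqrt(1 - m_ij^2) * sqrt(1 - m_jk^2) <= m_ik for all i, j, k.
    n = M.shape[0]
    for i, j, k in itertools.product(range(n), repeat=3):
        lhs = M[i, j] * M[j, k] - np.sqrt(1.0 - M[i, j] ** 2) * np.sqrt(1.0 - M[j, k] ** 2)
        if lhs > M[i, k] + tol:
            return False
    return True

for _ in range(5000):
    a, b, c = rng.uniform(0.0, 1.0, size=3)
    M = np.array([[1.0, a, b], [a, 1.0, c], [b, c, 1.0]])
    psd = np.linalg.eigvalsh(M).min() >= -1e-12
    assert psd == lemma13_condition(M)

print("Lemma 13 criterion agrees with the eigenvalue test on 5000 random matrices")
```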
A.2 Characterization of Kernels in the Boolean Case

The following lemma and proposition can also be found as an exercise in Schölkopf and Smola (2002).

Lemma 14 Let ∼ be an equivalence relation on X and let k : X × X → {0, 1} be induced by ∼ via k(x, y) = 1 if and only if x ∼ y; then k is a kernel.

Proof. By the definition of positive-definiteness, let us consider an arbitrary sequence of elements x_1, ..., x_n. Then there are at most n equivalence classes Q_1, ..., Q_m on the set of indices {1, ..., n}, m ≤ n, where $\bigcup_{i=1,\ldots,m} Q_i = \{1, \ldots, n\}$ and Q_i ∩ Q_j = ∅ for i ≠ j. Note that k(x_i, x_j) = 0 if the indices i, j belong to different equivalence classes. Then, for any choice of reals c_1, ..., c_n, we obtain

$$\sum_{i,j} c_i c_j\, k(x_i, x_j) = \sum_{p=1}^{m} \sum_{i,j \in Q_p} c_i c_j\, k(x_i, x_j) = \sum_{p=1}^{m} \sum_{i,j \in Q_p} c_i c_j \cdot 1 = \sum_{p=1}^{m} \Bigl(\sum_{i \in Q_p} c_i\Bigr)^{2} \ge 0.$$

Proposition 15 k : X × X → {0, 1} with k(x, x) = 1 for all x ∈ X is a kernel if and only if it is induced by an equivalence relation.

Proof. It only remains to be shown that if k is a kernel, then it is the indicator function of an equivalence relation, that is, it is induced by an equivalence relation. If k is a kernel, then according to Lemma 13, for all x, y, z ∈ X, it has to satisfy T_cos(k(x, y), k(y, z)) ≤ k(x, z), which implies

$$k(x, y) = 1,\; k(y, z) = 1 \;\Longrightarrow\; k(x, z) = 1.$$

Obviously, we have k(x, x) = 1 and k(x, y) = k(y, x) due to the reflexivity and symmetry assumption of k, respectively.

A.3 Constructing Kernels by T_M

For convenience, let us recall the basic notion of an α-cut from fuzzy set theory:

Definition 16 Let X be a non-empty set and µ : X → [0, 1]; then [µ]_α = {x ∈ X | µ(x) ≥ α} is called the α-cut of the membership function µ.

Lemma 17 k : X × X → [0, 1] is a T_M-equivalence if and only if all α-cuts of k are Boolean equivalence relations.

Proof. (i) Let us assume that k is a T_M-equivalence. Let α ∈ [0, 1]; then by definition, [k]_α = {(x, y) ∈ X × X | k(x, y) ≥ α}. In order to show that [k]_α is a Boolean equivalence, the axioms of reflexivity, symmetry and transitivity have to be shown. Reflexivity and symmetry are trivially satisfied, as for all x, y ∈ X there holds by assumption that k(x, x) = 1 and k(x, y) = k(y, x). In order to show transitivity, let us consider (x, y), (y, z) ∈ [k]_α, that is, k(x, y) ≥ α and k(y, z) ≥ α; then by the T_M-transitivity assumption it follows that α ≤ min(k(x, y), k(y, z)) ≤ k(x, z), hence (x, z) ∈ [k]_α.

(ii) Suppose now that all α-cuts of k are Boolean equivalence relations. Then, in particular, [k]_α with α = 1 is reflexive, hence k(x, x) = 1 for all x ∈ X. The symmetry of k follows from the fact that for all α ∈ [0, 1] and pairs (x, y) ∈ [k]_α, by assumption, we have (y, x) ∈ [k]_α. In order to show the T_M-transitivity property, let us consider arbitrarily chosen elements x, y, z ∈ X. Let α = min(k(x, y), k(y, z)); then by the transitivity assumption of [k]_α, it follows that (x, z) ∈ [k]_α, and consequently k(x, z) ≥ α = min(k(x, y), k(y, z)).
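Before stating the final proposition, here is a toy illustration of Lemma 14 (the class labels and random coefficients below are made up for illustration): the {0, 1}-valued kernel induced by "carries the same class label" produces a positive-semidefinite Gram matrix, and its quadratic form decomposes exactly as in the proof.

```python
import numpy as np

labels = np.array([0, 0, 1, 2, 1, 0, 2, 2])     # equivalence classes of eight points

def k_equiv(i, j):
    # k(x_i, x_j) = 1 iff x_i and x_j are equivalent, here: same label.
    return 1.0 if labels[i] == labels[j] else 0.0

n = len(labels)
K = np.array([[k_equiv(i, j) for j in range(n)] for i in range(n)])
print("min eigenvalue:", np.linalg.eigvalsh(K).min())    # non-negative, as Lemma 14 asserts

# The quadratic form equals the sum over classes of the squared within-class sums,
# exactly as computed in the proof of Lemma 14.
c = np.random.default_rng(2).normal(size=n)
lhs = c @ K @ c
rhs = sum(c[labels == q].sum() ** 2 for q in np.unique(labels))
assert np.isclose(lhs, rhs)
```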
Proposition 18 If k : X × X → [0, 1] is a T_M-equivalence, then it is positive-semidefinite.

Proof. Choose arbitrary elements x_1, ..., x_n ∈ X, consider the set of values taken by all combinations k(x_i, x_j), i, j ∈ {1, ..., n}, and order them increasingly, that is,

$$\{k(x_i, x_j) \mid i, j \in \{1, \ldots, n\}\} = \{\alpha_1, \ldots, \alpha_m\}, \qquad 0 \le \alpha_1 \le \cdots \le \alpha_m \le 1.$$

Observe that for all pairs (x_i, x_j), i, j ∈ {1, ..., n}, there holds

$$k(x_i, x_j) = \sum_{v=2}^{m} (\alpha_v - \alpha_{v-1})\, \mathbf{1}_{[k]_{\alpha_v}}(x_i, x_j) + \alpha_1\, \mathbf{1}_{[k]_{\alpha_1}}(x_i, x_j),$$

showing that on the set {x_1, ..., x_n} × {x_1, ..., x_n}, the function k is a linear combination of indicator functions of Boolean equivalences (which are positive-semidefinite by Proposition 15) with non-negative coefficients and, consequently, it has to be positive-semidefinite.

A.4 Example of a Non-Positive-Semidefinite T_cos-Equivalence

For dimensions n > 3, the T_cos-transitivity is no longer sufficient to guarantee positive-semidefiniteness. Consider, for example, A_n = (a^{(n)}_{ij})_{ij} where

$$a^{(n)}_{ij} = \begin{cases} \lambda & \text{if } \min(i, j) = 1,\ \max(i, j) > 1, \\ 1 & \text{if } i = j, \\ 0 & \text{else}. \end{cases} \tag{22}$$

Choose λ = 1/√2; then T_cos(λ, λ) = 0, hence we have T_cos(a^{(n)}_{ij}, a^{(n)}_{jk}) ≤ a^{(n)}_{ik} for all indices i, j, k ∈ {1, ..., n}. As det(A_n) < 0 for n > 3, the matrix A_n cannot be positive-semidefinite even though the T_cos-transitivity conditions are satisfied.

References

S. Bochner. Harmonic Analysis and the Theory of Probability. University of California Press, Los Angeles, California, 1955.

U. Bodenhofer. A note on approximate equality versus the Poincaré paradox. Fuzzy Sets and Systems, 133(2):155–160, 2003.

D. Boixader and J. Jacas. T-indistinguishability operators and approximate reasoning via CRI. In D. Dubois, E. P. Klement, and H. Prade, editors, Fuzzy Sets, Logics and Reasoning about Knowledge, volume 15 of Applied Logic Series, pages 255–268. Kluwer Academic Publishers, Dordrecht, 1999.

C. H. FitzGerald, C. A. Micchelli, and A. Pinkus. Functions that preserve families of positive semidefinite matrices. Linear Alg. and Appl., 221:83–102, 1995.

T. Calvo, G. Mayor, and R. Mesiar, editors. Aggregation Operators, volume 97 of Studies in Fuzziness and Soft Computing. Physica-Verlag, Heidelberg, 2002.

O. Chapelle, J. Weston, and B. Schölkopf. Cluster kernels for semi-supervised learning. Volume 15 of NIPS, 2003.

B. De Baets and R. Mesiar. T-partitions. Fuzzy Sets and Systems, 97:211–223, 1998.

M. Demirci. On many-valued partitions and many-valued equivalence relations. Internat. J. Uncertain. Fuzziness Knowledge-Based Systems, 11(2):235–253, 2003.

D. Dubois and H. Prade. A review of fuzzy set aggregation connectives. Inform. Sci., 36:85–121, 1985.

M. G. Genton. Classes of kernels for machine learning: A statistics perspective. Journal of Machine Learning Research, 2:299–312, 2001.

S. Gottwald. Fuzzy set theory with t-norms and Φ-operators. In A. Di Nola and A. G. S. Ventre, editors, The Mathematics of Fuzzy Systems, volume 88 of Interdisciplinary Systems Research, pages 143–195. Verlag TÜV Rheinland, Köln, 1986.

S. Gottwald. Fuzzy Sets and Fuzzy Logic. Vieweg, Braunschweig, 1993.

U. Höhle. Fuzzy equalities and indistinguishability. In Proc. 1st European Congress on Fuzzy and Intelligent Technologies, volume 1, pages 358–363, Aachen, 1993.
U. Höhle. The Poincaré paradox and non-classical logics. In D. Dubois, E. P. Klement, and H. Prade, editors, Fuzzy Sets, Logics and Reasoning about Knowledge, volume 15 of Applied Logic Series, pages 7–16. Kluwer Academic Publishers, Dordrecht, 1999.

F. Höppner and F. Klawonn. Improved fuzzy partitions for fuzzy regression models. Internat. J. Approx. Reason., 32:85–102, 2003.

F. Höppner, F. Klawonn, and P. Eklund. Learning indistinguishability from data. Soft Computing, 6(1):6–13, 2002.

J. Jacas. On the generators of T-indistinguishability operators. Stochastica, 12:49–63, 1988.

I. T. Jolliffe. Principal Component Analysis. Springer Verlag, New York, 1986.

E. P. Klement, R. Mesiar, and E. Pap. Triangular Norms, volume 8 of Trends in Logic. Kluwer Academic Publishers, Dordrecht, 2000.

R. Kruse, J. Gebhardt, and F. Klawonn. Fuzzy-Systeme. B. G. Teubner, Stuttgart, 1993.

R. Kruse, J. Gebhardt, and F. Klawonn. Foundations of Fuzzy Systems. John Wiley & Sons, New York, 1994.

C. H. Ling. Representation of associative functions. Publ. Math. Debrecen, 12:189–212, 1965.

B. Moser. On the t-transitivity of kernels. Fuzzy Sets and Systems, 157:1787–1796, 2006.

B. Moser. A New Approach for Representing Control Surfaces by Fuzzy Rule Bases. PhD thesis, Johannes Kepler Universität Linz, October 1995.

T. Muir. A Treatise on the Theory of Determinants. Dover, New York, 1960.

H. Poincaré. La Science et l'Hypothèse. Flammarion, Paris, 1902.

H. Poincaré. La Valeur de la Science. Flammarion, Paris, 1904.

S. Saminger, R. Mesiar, and U. Bodenhofer. Domination of aggregation operators and preservation of transitivity. Internat. J. Uncertain. Fuzziness Knowledge-Based Systems, 10(Suppl.):11–35, 2002.

B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, 2002.

B. Schölkopf, A. J. Smola, and K. R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299–1319, 1998.

B. Schweizer and A. Sklar. Associative functions and statistical triangle inequalities. Publ. Math. Debrecen, 8:169–186, 1961.

B. Schweizer and A. Sklar. Probabilistic Metric Spaces. North-Holland, Amsterdam, 1983.

M. Seeger. Covariance kernels from Bayesian generative models. Neural Information Processing Systems, 14:905–912, 2002.

T. M. Huang, V. Kecman, and I. Kopriva. Kernel Based Algorithms for Mining Huge Data Sets: Supervised, Semi-supervised, and Unsupervised Learning. Springer-Verlag, Berlin, 2006.

E. Trillas and L. Valverde. An inquiry into indistinguishability operators. In H. J. Skala, S. Termini, and E. Trillas, editors, Aspects of Vagueness, pages 231–256. Reidel, Dordrecht, 1984.

E. Trillas, S. Cubillo, and E. Castiñeira. Menger and Ovchinnikov on indistinguishabilities revisited. Internat. J. Uncertain. Fuzziness Knowledge-Based Systems, 7(3):213–218, 1999.

L. Valverde. On the structure of F-indistinguishability operators. Fuzzy Sets and Systems, 17(3):313–328, 1985.

A. M. Yaglom. Some classes of random fields in n-dimensional space, related to stationary random processes. Theory of Probability and its Applications, 2:273–320, 1957.

L. A. Zadeh. Similarity relations and fuzzy orderings. Inform. Sci., 3:177–200, 1971.

5 0.3352235 24 jmlr-2006-Consistency of Multiclass Empirical Risk Minimization Methods Based on Convex Loss

Author: Di-Rong Chen, Tao Sun

Abstract: The consistency of a classification algorithm plays a central role in statistical learning theory. A consistent algorithm guarantees that taking more samples essentially suffices to roughly reconstruct the unknown distribution. We consider the consistency of the ERM scheme over classes of combinations of very simple rules (base classifiers) in multiclass classification. Our approach is, under some mild conditions, to establish a quantitative relationship between classification errors and convex risks. In comparison with related previous work, the feature of our result is that the conditions are mainly expressed in terms of the differences between some values of the convex function. Keywords: multiclass classification, classifier, consistency, empirical risk minimization, constrained comparison method, Tsybakov noise condition

6 0.32824349 9 jmlr-2006-Accurate Error Bounds for the Eigenvalues of the Kernel Matrix

7 0.32274669 46 jmlr-2006-Learning Factor Graphs in Polynomial Time and Sample Complexity

8 0.32260266 87 jmlr-2006-Stochastic Complexities of Gaussian Mixtures in Variational Bayesian Approximation

9 0.31865472 66 jmlr-2006-On Model Selection Consistency of Lasso

10 0.31648582 29 jmlr-2006-Estimation of Gradients and Coordinate Covariation in Classification

11 0.31111833 82 jmlr-2006-Some Theory for Generalized Boosting Algorithms

12 0.30859631 17 jmlr-2006-Bounds for the Loss in Probability of Correct Classification Under Model Based Approximation

13 0.30018431 95 jmlr-2006-Walk-Sums and Belief Propagation in Gaussian Graphical Models

14 0.29737753 48 jmlr-2006-Learning Minimum Volume Sets

15 0.29299903 73 jmlr-2006-Pattern Recognition for Conditionally Independent Data

16 0.29053131 28 jmlr-2006-Estimating the "Wrong" Graphical Model: Benefits in the Computation-Limited Setting

17 0.28765449 16 jmlr-2006-Bounds for Linear Multi-Task Learning

18 0.28747618 2 jmlr-2006-A Graphical Representation of Equivalence Classes of AMP Chain Graphs

19 0.2871455 84 jmlr-2006-Stability Properties of Empirical Risk Minimization over Donsker Classes

20 0.2860007 40 jmlr-2006-Infinite-σ Limits For Tikhonov Regularization