jmlr jmlr2007 jmlr2007-27 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Nikolaj Tatti
Abstract: The concepts of similarity and distance are crucial in data mining. We consider the problem of defining the distance between two data sets by comparing summary statistics computed from the data sets. The initial definition of our distance is based on geometrical notions of certain sets of distributions. We show that this distance can be computed in cubic time and that it has several intuitive properties. We also show that this distance is the unique Mahalanobis distance satisfying certain assumptions. We also demonstrate that if we are dealing with binary data sets, then the distance can be represented naturally by certain parity functions, and that it can be evaluated in linear time. Our empirical tests with real world data show that the distance works well. Keywords: data mining theory, complex data, binary data, itemsets
Reference: text
sentIndex sentText sentNum sentScore
1 HIIT Basic Research Unit, Laboratory of Computer and Information Science, Helsinki University of Technology, Finland. Editor: Dale Schuurmans. Abstract: The concepts of similarity and distance are crucial in data mining. [sent-3, score-0.238]
2 We consider the problem of defining the distance between two data sets by comparing summary statistics computed from the data sets. [sent-4, score-0.252]
3 The initial definition of our distance is based on geometrical notions of certain sets of distributions. [sent-5, score-0.315]
4 We show that this distance can be computed in cubic time and that it has several intuitive properties. [sent-6, score-0.176]
5 We also show that this distance is the unique Mahalanobis distance satisfying certain assumptions. [sent-7, score-0.352]
6 We also demonstrate that if we are dealing with binary data sets, then the distance can be represented naturally by certain parity functions, and that it can be evaluated in linear time. [sent-8, score-0.36]
7 Our empirical tests with real world data show that the distance works well. [sent-9, score-0.214]
8 Keywords: data mining theory, complex data, binary data, itemsets [sent-10, score-0.341]
9 1. Introduction. In this paper we will consider the following problem: Given two data sets D1 and D2 of dimension K, define a distance between D1 and D2. [sent-11, score-0.214]
10 To be more precise, we consider the problem of defining the distance between two multisets of transactions, each set sampled from its own unknown distribution. [sent-12, score-0.2]
11 If one is able to retrieve a distance matrix from a set of objects, then one is able to analyse the data by using, for example, clustering or visualisation techniques. [sent-15, score-0.323]
12 For example, if a data collection consists of movies from different eras, then we may divide the movies into subcollections based on their release years. [sent-17, score-0.139]
13 A distance between these data (sub)sets would provide a means to analyse them as single objects. [sent-18, score-0.254]
14 Let us continue by considering the properties the CM distance should have. [sent-20, score-0.176]
15 Secondly, in our scenario the data sets have a statistical nature and the CM distance should take this into account. [sent-23, score-0.214]
16 For example, if both data sets are generated from the same distribution, then the CM distance should give small values and approach 0 as the number of data points in the data sets increases. [sent-24, score-0.29]
17 The third requirement is that we should be able to evaluate the CM distance quickly. [sent-25, score-0.215]
18 The CM distance will be based on summary statistics, called features. [sent-28, score-0.176]
19 Then we can suggest the distance between D1 and D2 to be |3/4 − 1/4| = 1/2. [sent-30, score-0.176]
20 The CM distance is based on this idea; however, there is a subtle difficulty: if we calculate several features, should we take the correlation of these features into account? [sent-31, score-0.176]
21 In Section 2 we give the definition of the CM distance by using some geometrical interpretations. [sent-34, score-0.315]
22 We also study the properties of the distance and provide an alternative characterisation. [sent-35, score-0.176]
23 In Section 3 we study the CM distance and binary data sets. [sent-36, score-0.249]
24 In Section 4 we discuss how the CM distance can be used with event sequences, and in Section 5 we comment on feature selection. [sent-37, score-0.206]
25 2. The Constrained Minimum Distance. In the following subsection we will define our distance using geometrical intuition and show that it can be evaluated efficiently. [sent-41, score-0.547]
26 As we said in the introduction, our goal is not to define a distance directly on data sets but rather through some statistics evaluated from the data sets. [sent-50, score-0.252]
27 A frequency θ ∈ R^N of S taken with respect to a data set D is the average of the values of S over the data set, that is, θ = (1/|D|) ∑_{ω∈D} S(ω). [sent-55, score-0.141]
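As a concrete illustration, the following minimal sketch computes such a frequency vector; the binary transactions and the identity feature map below are made-up examples, not data from the paper.

```python
import numpy as np

def frequency(D, S):
    """Frequency theta of a feature function S with respect to a data
    set D: the component-wise average of S over the transactions."""
    return np.mean([S(omega) for omega in D], axis=0)

# Hypothetical example: binary transactions; the features are the
# attribute values themselves (an assumed, illustrative feature map).
S = lambda omega: np.asarray(omega, dtype=float)
D = [(1, 0), (1, 1), (0, 0), (1, 0)]
print(frequency(D, S))  # [0.75 0.25]
```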
28 In other words, we seek a distance whose evaluation time does not depend on the size of Ω but rather on N. [sent-59, score-0.176]
29 Given a feature function S and a frequency θ (calculated from some data set), we say that a distribution p ∈ P satisfies the frequency θ if E_p[S] = θ. [sent-61, score-0.198]
30 We also define a constrained set of distributions C+(S, θ) = {p ∈ P | E_p[S] = θ} to be the set of distributions satisfying θ. [sent-62, score-0.137]
31 We interpret the sets P and C+ (S, θ) as geometrical objects. [sent-64, score-0.139]
32 In order to do so, we define a constrained space C(S, θ) = { u ∈ R^{|Ω|} | ∑_{i∈Ω} S(i)u_i = θ, ∑_{i∈Ω} u_i = 1 }, that is, we drop the last (non-negativity) condition required of a distribution. [sent-74, score-0.129]
33 This implies that C(S, θ) is an affine space and that, given two different frequencies θ1 and θ2, the spaces C(S, θ1) and C(S, θ2) are parallel. [sent-78, score-0.211]
34 The idea of interpreting distributions as geometrical objects is not new. [sent-97, score-0.2]
35 For example, a well-known Boolean query problem is solved by applying linear programming to the constrained sets of distributions (Hailperin, 1965; Calders, 2003). [sent-98, score-0.135]
36 There is a natural way of measuring the distance between these two spaces. [sent-100, score-0.176]
37 This is done by taking the length of the shortest segment going from a point in A1 to a point in A2 (see, for example, the illustration in Figure 1). [sent-101, score-0.206]
38 We know that the segment has the shortest length if and only if it is orthogonal to the affine spaces. [sent-102, score-0.206]
39 We also know that if we select a point a1 ∈ A1 having the shortest norm, and if we similarly select a2 ∈ A2, then the segment going from a1 to a2 has the shortest length. [sent-103, score-0.263]
40 The preceding discussion and the fact that the constrained spaces are affine motivates us to give the following definition: Assume that we are given two data sets, namely D1 and D2, and a feature function S. [sent-104, score-0.18]
41 Figure 1: A geometrical interpretation of the distribution sets for |Ω| = 3; the panels show the simplex P with corners (1, 0, 0), (0, 1, 0) and (0, 0, 1), together with the constrained set C+(S, 0.25). [sent-108, score-0.139]
42 This segment has the shortest length among the segments connecting the constrained spaces. [sent-116, score-0.322]
43 We pick a vector from each constrained space having the shortest norm, u_i = argmin_{u ∈ C(S, Di)} ‖u‖2, i = 1, 2. [sent-119, score-0.228]
44 We define the distance between D1 and D2 to be dCM(D1, D2 | S) = √|Ω| ‖u1 − u2‖2. [sent-120, score-0.176]
45 We will refer to this distance as the Constrained Minimum (CM) distance. [sent-122, score-0.176]
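To make the geometry concrete, here is a minimal sketch that finds the minimum-norm point of each constrained space by solving the underdetermined linear constraints; numpy's lstsq returns exactly the minimum-norm solution. The three-element sample space and the single indicator feature with frequencies 3/4 and 1/4 are assumptions chosen to match Figure 1 and Example 1, not code from the paper.

```python
import numpy as np

def cm_distance_geometric(S, theta1, theta2):
    """dCM via the definition: sqrt(|Omega|) * ||u1 - u2||_2, where u_i is
    the minimum-norm point of C(S, theta_i). S is an N x |Omega| matrix
    whose columns are the feature values S(omega)."""
    n_omega = S.shape[1]
    A = np.vstack([S, np.ones((1, n_omega))])  # constraints: S u = theta, sum(u) = 1

    def min_norm(theta):
        b = np.append(np.atleast_1d(theta), 1.0)
        u, *_ = np.linalg.lstsq(A, b, rcond=None)  # minimum-norm solution
        return u

    u1, u2 = min_norm(theta1), min_norm(theta2)
    return np.sqrt(n_omega) * np.linalg.norm(u1 - u2)

# Assumed setup: |Omega| = 3, S the indicator of one sample point.
S = np.array([[1.0, 0.0, 0.0]])
print(cm_distance_geometric(S, 0.75, 0.25))  # 3/sqrt(8) ~ 1.0607
```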
46 Thus the CM distance is not a distance between two distributions; it is rather a distance based on the frequencies of a given feature function and is motivated by the geometrical interpretation of the distribution sets. [sent-124, score-0.861]
47 The main reason why we define the CM distance using the constrained spaces C(S, Di) and not the distribution sets C+(S, Di) is that we can evaluate the CM distance efficiently. [sent-125, score-0.505]
48 We discussed earlier that Ω may be very large, so it is crucial that the evaluation time of a distance does not depend on |Ω|. [sent-126, score-0.2]
49 The following theorem says that the CM distance can be represented using the frequencies and a covariance matrix Cov[S] = (1/|Ω|) ∑_{ω∈Ω} S(ω)S(ω)^T − ((1/|Ω|) ∑_{ω∈Ω} S(ω)) ((1/|Ω|) ∑_{ω∈Ω} S(ω))^T. [sent-127, score-0.555]
50 Theorem 1: For the CM distance between two data sets D1 and D2 we have dCM(D1, D2 | S)² = (θ1 − θ2)^T Cov^{-1}[S] (θ1 − θ2), where θi = S(Di). [sent-129, score-0.214]
51 The preceding theorem shows that we can evaluate the distance using the covariance matrix and frequencies. [sent-131, score-0.393]
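A sketch of this closed form, reusing the setup assumed above; for illustration the covariance is computed by brute force over Ω (the text instead assumes a closed form, as discussed in Section 3), and the result matches the geometric computation.

```python
import numpy as np

def cm_distance(S, theta1, theta2):
    """dCM via Theorem 1: sqrt((t1 - t2)^T Cov^{-1}[S] (t1 - t2)), with
    Cov[S] taken under the uniform distribution on Omega."""
    mean = S.mean(axis=1, keepdims=True)
    cov = (S @ S.T) / S.shape[1] - mean @ mean.T
    diff = np.atleast_1d(theta1) - np.atleast_1d(theta2)
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

S = np.array([[1.0, 0.0, 0.0]])    # same assumed feature as before
print(cm_distance(S, 0.75, 0.25))  # 3/sqrt(8) ~ 1.0607, as in the geometric version
```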
52 If we assume that evaluating a single component of the feature function S is a unit operation, then the frequencies can be calculated in O(N|D1| + N|D2|) time. [sent-132, score-0.221]
53 The evaluation time of the covariance matrix is O(|Ω|N²), but we assume that S is such that we know a closed form for the covariance matrix (such cases will be discussed in Section 3), that is, we assume that we can evaluate the covariance matrix in O(N²) time. [sent-133, score-0.366]
54 Inverting the matrix takes O(N³) time and evaluating the distance itself is an O(N²) operation. [sent-134, score-0.221]
55 Note that calculating the frequencies and inverting the covariance matrix needs to be done only once: for example, if we have k data sets, then calculating the distances between every pair of data sets can be done in O(N ∑_{i=1}^{k} |Di| + N³ + k²N²) time. [sent-135, score-0.442]
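A sketch of that reuse: invert the covariance matrix once, after which each of the k(k−1)/2 pairwise distances is an O(N²) operation. The frequency vectors and covariance below are made-up values.

```python
import numpy as np
from itertools import combinations

def pairwise_cm(thetas, cov):
    """All pairwise CM distances between k frequency vectors, inverting
    the covariance matrix only once (O(N^3)); each pair then costs O(N^2)."""
    cov_inv = np.linalg.inv(cov)
    k = len(thetas)
    dist = np.zeros((k, k))
    for i, j in combinations(range(k), 2):
        d = thetas[i] - thetas[j]
        dist[i, j] = dist[j, i] = np.sqrt(d @ cov_inv @ d)
    return dist

# Hypothetical frequencies for k = 3 data sets over N = 2 features.
thetas = [np.array([0.75, 0.25]), np.array([0.25, 0.25]), np.array([0.5, 0.5])]
cov = 0.25 * np.eye(2)
print(pairwise_cm(thetas, cov))
```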
56 Example 2 Let us evaluate the distance between the data sets given in Example 1 using both the definition of the CM distance and Theorem 1. [sent-136, score-0.429]
57 The minimum-norm vectors are u1 = (6/8, 1/8, 1/8) and u2 = (2/8, 3/8, 3/8), so the CM distance is equal to dCM(D1, D2 | S) = √3 ‖u1 − u2‖2 = √3 ((4/8)² + (2/8)² + (2/8)²)^{1/2} = 3/√8; Theorem 1 gives the same value, since (1/2)² / (2/9) = 9/8 = (3/√8)². [sent-141, score-0.176]
58 Note that S(Di) = S′(Di) and that Cov[S] = Cov[S′], so Theorem 1 says that the CM distance has not changed during this transformation. [sent-148, score-0.267]
59 The following theorem says that adding an external data set to the original data sets makes the distance smaller, which is a very reasonable property. [sent-157, score-0.358]
60 Corollary 6 Let A be an invertible N × N matrix and b a vector of length N. [sent-165, score-0.178]
61 This means that if we know the frequencies S(D), then we can infer the frequencies T(D) without a new data scan. [sent-170, score-0.366]
62 Corollary 6 says that the CM distance is equal for any such representation. [sent-173, score-0.241]
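A quick numeric check of this invariance, with made-up values: applying an invertible affine map T = A·S + b to the feature function transforms the frequencies the same way and leaves the CM distance unchanged, because Cov[T] = A Cov[S] A^T.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.random((2, 5))                  # assumed feature matrix: N = 2, |Omega| = 5
A = np.array([[2.0, 1.0], [0.0, 3.0]])  # an invertible 2 x 2 matrix
b = np.array([1.0, -1.0])
T = A @ S + b[:, None]                  # new representation T = A S + b

def cov(F):
    m = F.mean(axis=1, keepdims=True)
    return (F @ F.T) / F.shape[1] - m @ m.T

t1, t2 = rng.random(2), rng.random(2)   # two made-up frequency vectors for S
d_S = np.sqrt((t1 - t2) @ np.linalg.solve(cov(S), t1 - t2))
u1, u2 = A @ t1 + b, A @ t2 + b         # frequencies transform as T(D) = A S(D) + b
d_T = np.sqrt((u1 - u2) @ np.linalg.solve(cov(T), u1 - u2))
print(np.isclose(d_S, d_T))             # True
```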
63 2.3 Alternative Characterisation of the CM Distance. We derived our distance using a geometrical interpretation of the distribution sets. [sent-175, score-0.315]
64 Namely, we will show that if some distance is of Mahalanobis type and satisfies some mild assumptions, then this distance is proportional to the CM distance. [sent-177, score-0.352]
65 We say that a distance d is of Mahalanobis type if d(D1, D2 | S)² = (θ1 − θ2)^T C(S)^{-1} (θ1 − θ2), where θ1 = S(D1) and θ2 = S(D2), and C(S) maps a feature function S to a symmetric N × N matrix. [sent-179, score-0.206]
66 Note that if C(S) = Cov [S], then the distance d is the CM distance. [sent-180, score-0.176]
67 One reason is that a distance belonging to M is guaranteed to be a metric. [sent-183, score-0.2]
68 The most important reason, however, is the fact that we can evaluate a distance belonging to M efficiently (assuming, of course, that we can evaluate C(S)). [sent-184, score-0.278]
69 This is to say that the distance is independent of the representation of the frequency information. [sent-194, score-0.241]
70 We can construct a distance that would satisfy Assumption 1 in the invertible case but fail in the general case. [sent-197, score-0.267]
71 To justify Assumption 2, note that the frequencies have not changed, that is, U(σ(D)) = S(D). [sent-199, score-0.19]
72 Our argument is that the distance should be based on the frequencies and not on the values of the data points. [sent-201, score-0.378]
73 3. The CM Distance and Binary Data Sets. In this section we will concentrate on the distances between binary data sets. [sent-205, score-0.305]
74 We will consider the CM distance based on itemset frequencies, a very popular statistic in the literature on binary data mining. [sent-206, score-0.453]
75 In the first subsection we will show that a more natural way of representing the CM distance is to use parity frequencies. [sent-207, score-0.343]
76 We also show that we can evaluate the distance in linear time. [sent-208, score-0.215]
77 In the second subsection we will provide more theoretical evidence that the CM distance is a good distance between binary data sets. [sent-209, score-0.481]
78 A Boolean formula S : Ω → {0, 1} is a feature function mapping a binary vector to a binary value. [sent-226, score-0.133]
79 We are interested in two particular Boolean formulae: Assume that we are given an itemset B = {ai1, ..., aiK}. [sent-227, score-0.206]
80 We define a conjunction function SB to be SB(ω) = ωi1 ∧ ωi2 ∧ ··· ∧ ωiK, that is, SB equals 1 if and only if all the variables corresponding to the itemset B are on. [sent-231, score-0.227]
81 Given a data set D, the frequency SB(D) is called the frequency of the itemset B. [sent-232, score-0.341]
82 Conjunction functions are popular and there are many studies in the literature concerning the discovery of itemsets that have large frequency (see, e.g., Agrawal et al.). [sent-233, score-0.326]
83 We also define a parity function TB to be TB (ω) = ωi1 ⊕ ωi2 ⊕ · · · ⊕ ωiK , where ⊕ is the binary operator XOR. [sent-238, score-0.146]
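A minimal sketch of these two feature functions for binary transactions; here an itemset B is simply a set of attribute indices, and the example values are made up.

```python
def conjunction(omega, B):
    """S_B(omega): 1 iff every attribute of the itemset B is on in omega."""
    return int(all(omega[i] for i in B))

def parity(omega, B):
    """T_B(omega): 1 iff an odd number of the attributes of B are on (XOR)."""
    return sum(omega[i] for i in B) % 2

omega = (1, 0, 1, 1)
B = {0, 2}
print(conjunction(omega, B), parity(omega, B))  # 1 0
```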
84 A collection F of itemsets is said to be antimonotonic or downwardly closed if each non-empty subset of an itemset included in F is also included in F . [sent-240, score-0.552]
85 We can show that there is an invertible matrix A such that TF = ASF . [sent-253, score-0.136]
86 In other words, we can get the parity function TF from the conjunction function SF by an invertible linear transformation. [sent-254, score-0.256]
87 The following lemma shows that the covariance matrix Cov [TF ] of the parity function is very simple. [sent-256, score-0.254]
88 Lemma 8 Let TF be a parity function for a family of itemsets F; then Cov[TF] = 0.25·I, that is, the covariance matrix is a diagonal matrix having 0.25 at the diagonal. [sent-257, score-0.341]
89 Combined with Theorem 1, this gives the identity dCM(D1, D2 | SF) = 2 ‖θ1 − θ2‖2 (Eq. 4), where θi = TF(Di) are the parity frequencies. [sent-258, score-0.154]
90 This identity says that the CM distance can be calculated in O(N) time (assuming that we know the frequencies θ1 and θ2 ). [sent-262, score-0.432]
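Since Cov[TF] is 0.25·I, its inverse is 4·I and the distance collapses to a scaled Euclidean norm between parity-frequency vectors. A sketch, with made-up parity frequencies:

```python
import numpy as np

def cm_distance_parity(eta1, eta2):
    """dCM from parity frequencies: Cov[T_F] = 0.25 I, so
    dCM^2 = 4 * ||eta1 - eta2||^2, computable in O(N)."""
    return 2.0 * np.linalg.norm(np.asarray(eta1) - np.asarray(eta2))

# Hypothetical parity frequencies for three itemsets.
print(cm_distance_parity([0.5, 0.25, 0.75], [0.5, 0.5, 0.5]))  # 2*sqrt(0.125) ~ 0.7071
```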
91 Eq. 4 implies that dCM(D1, D2 | SI) = 2 ‖θ1 − θ2‖2, where θ1 and θ2 consist of the marginal frequencies of each a_j calculated from D1 and D2, respectively. [sent-270, score-0.191]
92 In this case the CM distance is simply the L2 distance between the marginal frequencies of the individual attributes. [sent-271, score-0.516]
93 The frequencies θ1 and θ2 resemble term frequencies (TF) used in text mining (see, e.g., Baldi et al.). [sent-272, score-0.366]
94 For example, consider data sets with two items having the same individual frequencies but different correlations. [sent-278, score-0.244]
95 In this case the data sets may look very different but according to our distance they are equal. [sent-279, score-0.214]
96 Let F = { a_j, a_j a_k | j, k = 1, ..., K, j < k } be a family of itemsets such that each set contains at most two items. [sent-283, score-0.23]
97 The corresponding frequencies contain the individual means and the pairwise correlations for all items. [sent-284, score-0.164]
98 Let S_{a_j a_k} be the conjunction function for the itemset a_j a_k. [sent-285, score-0.425]
99 Let γ_{jk} = S_{a_j a_k}(D1) − S_{a_j a_k}(D2) be the difference between the correlation frequencies. [sent-286, score-0.233]
100 Since T_{a_j a_k} = S_{a_j} + S_{a_k} − 2 S_{a_j a_k}, it follows from Eq. 4 that the pairwise correlations enter the CM distance through the parity differences γ_j + γ_k − 2γ_{jk}, where γ_j = S_{a_j}(D1) − S_{a_j}(D2). [sent-288, score-0.198]
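Putting the pieces together for this singleton-plus-pair family: parity frequencies follow from the itemset frequencies via the identity above, after which the distance is a scaled Euclidean norm. A sketch with made-up frequencies (function and variable names are mine); note that, unlike the singleton-only distance, it picks up a pure correlation difference.

```python
import numpy as np

def parity_freqs(singles, pairs):
    """Parity frequencies for the family of all singletons a_j and pairs
    {a_j, a_k}: eta(a_j) = theta(a_j), and
    eta(a_j a_k) = theta(a_j) + theta(a_k) - 2 * theta(a_j a_k)."""
    eta = dict(singles)
    for (j, k), theta_jk in pairs.items():
        eta[(j, k)] = singles[j] + singles[k] - 2.0 * theta_jk
    return eta

def cm_distance_pairs(singles1, pairs1, singles2, pairs2):
    e1 = parity_freqs(singles1, pairs1)
    e2 = parity_freqs(singles2, pairs2)
    diff = np.array([e1[key] - e2[key] for key in e1])
    return 2.0 * np.linalg.norm(diff)  # Cov[T_F] = 0.25 I

# Made-up frequencies over attributes {0, 1}: identical marginals,
# different pair frequency (i.e., different correlation).
s1, p1 = {0: 0.6, 1: 0.5}, {(0, 1): 0.4}
s2, p2 = {0: 0.6, 1: 0.5}, {(0, 1): 0.2}
print(cm_distance_pairs(s1, p1, s2, p2))  # 0.8: the correlation difference is detected
```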
simIndex simValue paperId paperTitle
same-paper 1 0.99999994 27 jmlr-2007-Distances between Data Sets Based on Summary Statistics
Author: Nikolaj Tatti
Abstract: The concepts of similarity and distance are crucial in data mining. We consider the problem of defining the distance between two data sets by comparing summary statistics computed from the data sets. The initial definition of our distance is based on geometrical notions of certain sets of distributions. We show that this distance can be computed in cubic time and that it has several intuitive properties. We also show that this distance is the unique Mahalanobis distance satisfying certain assumptions. We also demonstrate that if we are dealing with binary data sets, then the distance can be represented naturally by certain parity functions, and that it can be evaluated in linear time. Our empirical tests with real world data show that the distance works well. Keywords: data mining theory, complex data, binary data, itemsets
Author: Vitaly Feldman
Abstract: We consider the problems of attribute-efficient PAC learning of two well-studied concept classes: parity functions and DNF expressions over {0, 1}n . We show that attribute-efficient learning of parities with respect to the uniform distribution is equivalent to decoding high-rate random linear codes from low number of errors, a long-standing open problem in coding theory. This is the first evidence that attribute-efficient learning of a natural PAC learnable concept class can be computationally hard. An algorithm is said to use membership queries (MQs) non-adaptively if the points at which the algorithm asks MQs do not depend on the target concept. Using a simple non-adaptive parity learning algorithm and a modification of Levin’s algorithm for locating a weakly-correlated parity due to Bshouty et al. (1999), we give the first non-adaptive and attribute-efficient algorithm for ˜ learning DNF with respect to the uniform distribution. Our algorithm runs in time O(ns4 /ε) and ˜ uses O(s4 · log2 n/ε) non-adaptive MQs, where s is the number of terms in the shortest DNF representation of the target concept. The algorithm improves on the best previous algorithm for learning DNF (of Bshouty et al., 1999) and can also be easily modified to tolerate random persistent classification noise in MQs. Keywords: attribute-efficient, non-adaptive, membership query, DNF, parity function, random linear code
3 0.058504712 2 jmlr-2007-A Complete Characterization of a Family of Solutions to a Generalized Fisher Criterion
Author: Marco Loog
Abstract: Recently, Ye (2005) suggested yet another optimization criterion for discriminant analysis and proposed a characterization of the family of solutions to this objective. The characterization, however, merely describes a part of the full solution set, that is, it is not complete and therefore not at all a characterization. This correspondence first gives the correct characterization and afterwards compares it to Ye’s. Keywords: linear discriminant analysis, Fisher criterion, small sample, characterization 1. Classical and New Criteria Given N feature vectors of dimensionality n, a linear reduction of dimensionality, based on classical Fisher LDA, determines an n × d transformation matrix L that, for a given d < K, K the number of classes, maximizes the so-called Fisher criterion: F(A) = tr((A t SW A)−1 (At SB A)) or, equivalently, F0 (A) = tr((At ST A)−1 (At SB A)). Here, SB := ∑K pi (mi − m)(mi − m)t , SW := ∑K pi Si , and i=1 i=1 ST = SB + SW . The matrices SB , SW , and ST are the so-called between-class, pooled within-class, and total covariance matrices. In addition, mi is the mean vector of class i, pi is the prior of class i, and the overall mean m equals ∑k pi mi . Finally, Si is the covariance matrix of class i. i=1 A solution to these optimization problems can be obtained by means of a generalized eigenvalue decomposition, which Fukunaga (1990) relates to a simultaneous diagonalization of the two matrices involved (see also Campbell and Atchley, 1981). More common is it to apply a standard −1 eigenvalue decomposition to S−1 SB (or SW SB ), resulting in an equivalent set of eigenvectors. The d T columns of the optimal solution L are simply taken to equal the d eigenvectors corresponding to the d largest eigenvalues. It is known that this solution is not unique and the full class can be obtained by multiplying L to the right with nonsingular d × d matrices (see Fukunaga, 1990). Clearly, if the total covariance ST is singular, neither the generalized nor the standard eigenvalue decomposition can be readily employed. Directly or indirectly, the problem is that the matrix inverse S−1 does not exist, which is the typical situation when dealing with small samples. In an attempt to T overcome this problem, Ye (2005) introduced a different criterion that is defined as F1 (A) = tr((At ST A)+ (At SB A)) , ∗. Also at Nordic Bioscience Imaging, Hovegade 207, DK-2730 Herlev, Denmark. c 2007 Marco Loog. (1) L OOG where + denotes taking the Moore-Penrose generalized inverse of a matrix. Like for F0 , an optimal transform L is one that maximizes the objective F1 . Again, this solution is not unique. 2. Correct Characterization For the full characterization of the set of solutions to Equation (1), initially the problem is looked at from a geometrical point of view (cf., Campbell and Atchley, 1981). It is assumed that the number of samples N is smaller than or equal to the feature dimensionality n. In the undersampled case, it is clear that all data variation occurs in an N − 1-dimensional subspace of the original space. To start with, a PCA of the data is carried out and the first N − 1 principal components are rotated to the first N − 1 axes of the n-dimensional space by means of a rotation matrix R. This matrix consists of all (normalized) eigenvectors of ST taken as its columns. After this rotation, new total and between-class covariance matrices, ST = Rt ST R and SB = Rt SB R, are obtained. 
These 0 0 matrices can be partitioned as follows: ST = Σ0T 0 and SB = ΣB 0 , where ΣT and ΣB are N − 1 × 0 N − 1 covariance matrices and ΣT is nonsingular and diagonal by construction. The between-class variation is obviously restricted to the N − 1-dimensional subspace in which the total data variation takes place, therefore a same partitioning of SB is possible. However, the covariance submatrix ΣB is not necessarily diagonal, neither does it have to be nonsingular. Basically, the PCA-based rotation R converts the initial problem into a more convenient one, splitting up the original space in an N − 1-dimensional one in which “everything interesting” takes place and a remaining n − N + 1dimensional subspace in which “nothing happens at all”. Now consider F1 in this transformed space and take a general n × d transformation matrix A, which is partitioned in a way similar to the covariance matrices, that is, X . Y A= (2) Here, X is an N − 1 × d-matrix and Y is of size n − N + 1 × d. Taking this definition, the following holds (cf., Ye, 2005): t + t F1 (A) = tr((A ST A) (A SB A)) = tr =tr X t ΣT X 0 0 0 + X Y X t ΣB X 0 0 0 t ΣT 0 = tr 0 0 X Y + (Xt ΣT X)−1 0 0 0 X Y t ΣB 0 0 0 X Y X t ΣB X 0 0 0 = tr((Xt ΣT X)−1 (Xt ΣB X)) = F0 (X) . From this it is immediate that a matrix A maximizes F1 if and only if the submatrix X maximizes the original Fisher criterion in the lower-dimensional subspace. Moreover, if L is such a matrix maximizing F1 in the PCA-transformed space, it is easy to check that R−1 L = Rt L provides a solution to the original, general problem that has not been preprocessed by means of a PCA (see also Fukunaga, 1990). A characterization of the complete family of solutions can now be given. Let Λ ∈ RN−1×d be an optimal solution of F0 (X) = tr((Xt ΣT X)−1 (Xt ΣB X)). As already noted in Section 1, the full set of solutions is given by F = {ΛZ ∈ RN−1×d | Z ∈ GLd (R)}, where GLd (R) denotes the general linear group of d × d invertible matrices. The previous paragraph essentially demonstrates that if X ∈ F , A in Equation (2) maximizes F1 . The matrix Y can be chosen ad 2122 C OMPLETE C HARACTERIZATION OF A FAMILY OF S OLUTIONS libitum. Now, the latter provides the solution in the PCA-transformed space and to solve the general problem we need to take the rotation back to the original space into account. All in all, this leads to the following complete family of solutions L , maximizing F1 in the original space: L = Rt ΛZ ∈ Rn×d Z ∈ GLd (R), Y ∈ Rn−N+1×d Y , (3) where Λ = argmaxX tr((Xt ΣT X)−1 (Xt ΣB X)) and Rt takes care of the rotation back. 3. Original Characterization Though not noted by Ye (2005), his attempt to characterize the full set of solutions of Equation (1) is based on a simultaneous diagonalization of the three covariance matrices S B , SW , and ST that is similar to the ideas described by Campbell and Atchley (1981) and Fukunaga (1990). Moreover, Golub and Van Loan (Theorem 8.7.1. 1996) can be readily applied to demonstrate that such simultaneous diagonalization is possible in the small sample setting. After the diagonalization step, partitioned between-class, pooled within-class, and total covariance matrices are considered. This partitioning is similar to the one employed in the previous section, which does not enforce matrices to be diagonal however. In the subsequent optimization step, the classical Fisher criterion is maximized basically in the appropriate subspace, comparable to the approach described above, but in a mildly more involved and concealed way. 
For this, matrices of the form Rt X are considered, consider Equations (2) and Y (3). However, Y is simply the null matrix and the family of solutions L provided is limited to L = Rt ΛZ ∈ Rn×d Z ∈ GLd (R) . 0 Obviously, this is far from a complete characterization, especially when N − 1 n which is, for instance, typically the case for the data sets considered by Ye (2005). Generally, the utility of a dimensionality reduction criterion, without additional constrains, depends on the efficiency over the full set of solutions. As Ye (2005) only considers two very specific instances from the large class of possibilities, it is unclear to what extent the new criterion really provides an efficient way of performing a reduction of dimensionality. References N. A. Campbell and W. R. Atchley. The geometry of canonical variate analysis. Systematic Zoology, 30(3):268–280, 1981. K. Fukunaga. Introduction to Statistical Pattern Recognition. Academic Press, New York, 1990. G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press, third edition, 1996. J. Ye. Characterization of a family of algorithms for generalized discriminant analysis on undersampled problems. Journal of Machine Learning Research, 6:483–502, 2005. 2123
4 0.044053197 3 jmlr-2007-A Generalized Maximum Entropy Approach to Bregman Co-clustering and Matrix Approximation
Author: Arindam Banerjee, Inderjit Dhillon, Joydeep Ghosh, Srujana Merugu, Dharmendra S. Modha
Abstract: Co-clustering, or simultaneous clustering of rows and columns of a two-dimensional data matrix, is rapidly becoming a powerful data analysis technique. Co-clustering has enjoyed wide success in varied application domains such as text clustering, gene-microarray analysis, natural language processing and image, speech and video analysis. In this paper, we introduce a partitional co-clustering formulation that is driven by the search for a good matrix approximation—every co-clustering is associated with an approximation of the original data matrix and the quality of co-clustering is determined by the approximation error. We allow the approximation error to be measured using a large class of loss functions called Bregman divergences that include squared Euclidean distance and KL-divergence as special cases. In addition, we permit multiple structurally different co-clustering schemes that preserve various linear statistics of the original data matrix. To accomplish the above tasks, we introduce a new minimum Bregman information (MBI) principle that simultaneously generalizes the maximum entropy and standard least squares principles, and leads to a matrix approximation that is optimal among all generalized additive models in a certain natural parameter space. Analysis based on this principle yields an elegant meta algorithm, special cases of which include most previously known alternate minimization based clustering algorithms such as kmeans and co-clustering algorithms such as information theoretic (Dhillon et al., 2003b) and minimum sum-squared residue co-clustering (Cho et al., 2004). To demonstrate the generality and flexibility of our co-clustering framework, we provide examples and empirical evidence on a varic 2007 Arindam Banerjee, Inderjit S. Dhillon, Joydeep Ghosh, Srujana Merugu and Dharmendra Modha. BANERJEE , D HILLON , G HOSH , M ERUGU AND M ODHA ety of problem domains and also describe novel co-clustering applications such as missing value prediction and
5 0.040574227 56 jmlr-2007-Multi-Task Learning for Classification with Dirichlet Process Priors
Author: Ya Xue, Xuejun Liao, Lawrence Carin, Balaji Krishnapuram
Abstract: Consider the problem of learning logistic-regression models for multiple classification tasks, where the training data set for each task is not drawn from the same statistical distribution. In such a multi-task learning (MTL) scenario, it is necessary to identify groups of similar tasks that should be learned jointly. Relying on a Dirichlet process (DP) based statistical model to learn the extent of similarity between classification tasks, we develop computationally efficient algorithms for two different forms of the MTL problem. First, we consider a symmetric multi-task learning (SMTL) situation in which classifiers for multiple tasks are learned jointly using a variational Bayesian (VB) algorithm. Second, we consider an asymmetric multi-task learning (AMTL) formulation in which the posterior density function from the SMTL model parameters (from previous tasks) is used as a prior for a new task: this approach has the significant advantage of not requiring storage and use of all previous data from prior tasks. The AMTL formulation is solved with a simple Markov Chain Monte Carlo (MCMC) construction. Experimental results on two real life MTL problems indicate that the proposed algorithms: (a) automatically identify subgroups of related tasks whose training data appear to be drawn from similar distributions; and (b) are more accurate than simpler approaches such as single-task learning, pooling of data across all tasks, and simplified approximations to DP. Keywords: classification, hierarchical Bayesian models, Dirichlet process
6 0.038944889 6 jmlr-2007-A Probabilistic Analysis of EM for Mixtures of Separated, Spherical Gaussians
7 0.035773043 32 jmlr-2007-Euclidean Embedding of Co-occurrence Data
9 0.029575458 7 jmlr-2007-A Stochastic Algorithm for Feature Selection in Pattern Recognition
10 0.029040296 83 jmlr-2007-The On-Line Shortest Path Problem Under Partial Monitoring
11 0.028788805 24 jmlr-2007-Consistent Feature Selection for Pattern Recognition in Polynomial Time
12 0.026725473 76 jmlr-2007-Spherical-Homoscedastic Distributions: The Equivalency of Spherical and Normal Distributions in Classification
13 0.025548862 66 jmlr-2007-Penalized Model-Based Clustering with Application to Variable Selection
14 0.024191758 5 jmlr-2007-A Nonparametric Statistical Approach to Clustering via Mode Identification
15 0.023844725 81 jmlr-2007-The Locally Weighted Bag of Words Framework for Document Representation
16 0.023699293 47 jmlr-2007-Learning Horn Expressions with LOGAN-H
17 0.023536339 40 jmlr-2007-Harnessing the Expertise of 70,000 Human Editors: Knowledge-Based Feature Generation for Text Categorization
18 0.022999538 84 jmlr-2007-The Pyramid Match Kernel: Efficient Learning with Sets of Features
19 0.022817107 8 jmlr-2007-A Unified Continuous Optimization Framework for Center-Based Clustering Methods
20 0.022720054 64 jmlr-2007-Online Learning of Multiple Tasks with a Shared Loss
topicId topicWeight
[(0, 0.148), (1, 0.021), (2, -0.04), (3, 0.012), (4, -0.093), (5, -0.071), (6, -0.02), (7, 0.029), (8, -0.02), (9, -0.071), (10, -0.117), (11, 0.093), (12, 0.003), (13, 0.032), (14, -0.024), (15, 0.035), (16, 0.077), (17, 0.041), (18, 0.095), (19, 0.014), (20, 0.171), (21, -0.003), (22, 0.045), (23, 0.007), (24, 0.006), (25, 0.01), (26, 0.146), (27, -0.143), (28, 0.122), (29, -0.05), (30, 0.082), (31, 0.331), (32, 0.142), (33, -0.396), (34, 0.118), (35, 0.141), (36, -0.023), (37, -0.108), (38, -0.044), (39, 0.044), (40, 0.111), (41, 0.394), (42, -0.161), (43, 0.192), (44, 0.285), (45, 0.112), (46, 0.124), (47, 0.047), (48, -0.025), (49, -0.128)]
simIndex simValue paperId paperTitle
same-paper 1 0.97724795 27 jmlr-2007-Distances between Data Sets Based on Summary Statistics
Author: Nikolaj Tatti
Abstract: The concepts of similarity and distance are crucial in data mining. We consider the problem of defining the distance between two data sets by comparing summary statistics computed from the data sets. The initial definition of our distance is based on geometrical notions of certain sets of distributions. We show that this distance can be computed in cubic time and that it has several intuitive properties. We also show that this distance is the unique Mahalanobis distance satisfying certain assumptions. We also demonstrate that if we are dealing with binary data sets, then the distance can be represented naturally by certain parity functions, and that it can be evaluated in linear time. Our empirical tests with real world data show that the distance works well. Keywords: data mining theory, complex data, binary data, itemsets
2 0.24059235 3 jmlr-2007-A Generalized Maximum Entropy Approach to Bregman Co-clustering and Matrix Approximation
Author: Arindam Banerjee, Inderjit Dhillon, Joydeep Ghosh, Srujana Merugu, Dharmendra S. Modha
Abstract: Co-clustering, or simultaneous clustering of rows and columns of a two-dimensional data matrix, is rapidly becoming a powerful data analysis technique. Co-clustering has enjoyed wide success in varied application domains such as text clustering, gene-microarray analysis, natural language processing and image, speech and video analysis. In this paper, we introduce a partitional co-clustering formulation that is driven by the search for a good matrix approximation—every co-clustering is associated with an approximation of the original data matrix and the quality of co-clustering is determined by the approximation error. We allow the approximation error to be measured using a large class of loss functions called Bregman divergences that include squared Euclidean distance and KL-divergence as special cases. In addition, we permit multiple structurally different co-clustering schemes that preserve various linear statistics of the original data matrix. To accomplish the above tasks, we introduce a new minimum Bregman information (MBI) principle that simultaneously generalizes the maximum entropy and standard least squares principles, and leads to a matrix approximation that is optimal among all generalized additive models in a certain natural parameter space. Analysis based on this principle yields an elegant meta algorithm, special cases of which include most previously known alternate minimization based clustering algorithms such as kmeans and co-clustering algorithms such as information theoretic (Dhillon et al., 2003b) and minimum sum-squared residue co-clustering (Cho et al., 2004). To demonstrate the generality and flexibility of our co-clustering framework, we provide examples and empirical evidence on a varic 2007 Arindam Banerjee, Inderjit S. Dhillon, Joydeep Ghosh, Srujana Merugu and Dharmendra Modha. BANERJEE , D HILLON , G HOSH , M ERUGU AND M ODHA ety of problem domains and also describe novel co-clustering applications such as missing value prediction and
Author: Vitaly Feldman
Abstract: We consider the problems of attribute-efficient PAC learning of two well-studied concept classes: parity functions and DNF expressions over {0, 1}n . We show that attribute-efficient learning of parities with respect to the uniform distribution is equivalent to decoding high-rate random linear codes from low number of errors, a long-standing open problem in coding theory. This is the first evidence that attribute-efficient learning of a natural PAC learnable concept class can be computationally hard. An algorithm is said to use membership queries (MQs) non-adaptively if the points at which the algorithm asks MQs do not depend on the target concept. Using a simple non-adaptive parity learning algorithm and a modification of Levin’s algorithm for locating a weakly-correlated parity due to Bshouty et al. (1999), we give the first non-adaptive and attribute-efficient algorithm for ˜ learning DNF with respect to the uniform distribution. Our algorithm runs in time O(ns4 /ε) and ˜ uses O(s4 · log2 n/ε) non-adaptive MQs, where s is the number of terms in the shortest DNF representation of the target concept. The algorithm improves on the best previous algorithm for learning DNF (of Bshouty et al., 1999) and can also be easily modified to tolerate random persistent classification noise in MQs. Keywords: attribute-efficient, non-adaptive, membership query, DNF, parity function, random linear code
4 0.22287017 2 jmlr-2007-A Complete Characterization of a Family of Solutions to a Generalized Fisher Criterion
Author: Marco Loog
Abstract: Recently, Ye (2005) suggested yet another optimization criterion for discriminant analysis and proposed a characterization of the family of solutions to this objective. The characterization, however, merely describes a part of the full solution set, that is, it is not complete and therefore not at all a characterization. This correspondence first gives the correct characterization and afterwards compares it to Ye’s. Keywords: linear discriminant analysis, Fisher criterion, small sample, characterization 1. Classical and New Criteria Given N feature vectors of dimensionality n, a linear reduction of dimensionality, based on classical Fisher LDA, determines an n × d transformation matrix L that, for a given d < K, K the number of classes, maximizes the so-called Fisher criterion: F(A) = tr((A t SW A)−1 (At SB A)) or, equivalently, F0 (A) = tr((At ST A)−1 (At SB A)). Here, SB := ∑K pi (mi − m)(mi − m)t , SW := ∑K pi Si , and i=1 i=1 ST = SB + SW . The matrices SB , SW , and ST are the so-called between-class, pooled within-class, and total covariance matrices. In addition, mi is the mean vector of class i, pi is the prior of class i, and the overall mean m equals ∑k pi mi . Finally, Si is the covariance matrix of class i. i=1 A solution to these optimization problems can be obtained by means of a generalized eigenvalue decomposition, which Fukunaga (1990) relates to a simultaneous diagonalization of the two matrices involved (see also Campbell and Atchley, 1981). More common is it to apply a standard −1 eigenvalue decomposition to S−1 SB (or SW SB ), resulting in an equivalent set of eigenvectors. The d T columns of the optimal solution L are simply taken to equal the d eigenvectors corresponding to the d largest eigenvalues. It is known that this solution is not unique and the full class can be obtained by multiplying L to the right with nonsingular d × d matrices (see Fukunaga, 1990). Clearly, if the total covariance ST is singular, neither the generalized nor the standard eigenvalue decomposition can be readily employed. Directly or indirectly, the problem is that the matrix inverse S−1 does not exist, which is the typical situation when dealing with small samples. In an attempt to T overcome this problem, Ye (2005) introduced a different criterion that is defined as F1 (A) = tr((At ST A)+ (At SB A)) , ∗. Also at Nordic Bioscience Imaging, Hovegade 207, DK-2730 Herlev, Denmark. c 2007 Marco Loog. (1) L OOG where + denotes taking the Moore-Penrose generalized inverse of a matrix. Like for F0 , an optimal transform L is one that maximizes the objective F1 . Again, this solution is not unique. 2. Correct Characterization For the full characterization of the set of solutions to Equation (1), initially the problem is looked at from a geometrical point of view (cf., Campbell and Atchley, 1981). It is assumed that the number of samples N is smaller than or equal to the feature dimensionality n. In the undersampled case, it is clear that all data variation occurs in an N − 1-dimensional subspace of the original space. To start with, a PCA of the data is carried out and the first N − 1 principal components are rotated to the first N − 1 axes of the n-dimensional space by means of a rotation matrix R. This matrix consists of all (normalized) eigenvectors of ST taken as its columns. After this rotation, new total and between-class covariance matrices, ST = Rt ST R and SB = Rt SB R, are obtained. 
These 0 0 matrices can be partitioned as follows: ST = Σ0T 0 and SB = ΣB 0 , where ΣT and ΣB are N − 1 × 0 N − 1 covariance matrices and ΣT is nonsingular and diagonal by construction. The between-class variation is obviously restricted to the N − 1-dimensional subspace in which the total data variation takes place, therefore a same partitioning of SB is possible. However, the covariance submatrix ΣB is not necessarily diagonal, neither does it have to be nonsingular. Basically, the PCA-based rotation R converts the initial problem into a more convenient one, splitting up the original space in an N − 1-dimensional one in which “everything interesting” takes place and a remaining n − N + 1dimensional subspace in which “nothing happens at all”. Now consider F1 in this transformed space and take a general n × d transformation matrix A, which is partitioned in a way similar to the covariance matrices, that is, X . Y A= (2) Here, X is an N − 1 × d-matrix and Y is of size n − N + 1 × d. Taking this definition, the following holds (cf., Ye, 2005): t + t F1 (A) = tr((A ST A) (A SB A)) = tr =tr X t ΣT X 0 0 0 + X Y X t ΣB X 0 0 0 t ΣT 0 = tr 0 0 X Y + (Xt ΣT X)−1 0 0 0 X Y t ΣB 0 0 0 X Y X t ΣB X 0 0 0 = tr((Xt ΣT X)−1 (Xt ΣB X)) = F0 (X) . From this it is immediate that a matrix A maximizes F1 if and only if the submatrix X maximizes the original Fisher criterion in the lower-dimensional subspace. Moreover, if L is such a matrix maximizing F1 in the PCA-transformed space, it is easy to check that R−1 L = Rt L provides a solution to the original, general problem that has not been preprocessed by means of a PCA (see also Fukunaga, 1990). A characterization of the complete family of solutions can now be given. Let Λ ∈ RN−1×d be an optimal solution of F0 (X) = tr((Xt ΣT X)−1 (Xt ΣB X)). As already noted in Section 1, the full set of solutions is given by F = {ΛZ ∈ RN−1×d | Z ∈ GLd (R)}, where GLd (R) denotes the general linear group of d × d invertible matrices. The previous paragraph essentially demonstrates that if X ∈ F , A in Equation (2) maximizes F1 . The matrix Y can be chosen ad 2122 C OMPLETE C HARACTERIZATION OF A FAMILY OF S OLUTIONS libitum. Now, the latter provides the solution in the PCA-transformed space and to solve the general problem we need to take the rotation back to the original space into account. All in all, this leads to the following complete family of solutions L , maximizing F1 in the original space: L = Rt ΛZ ∈ Rn×d Z ∈ GLd (R), Y ∈ Rn−N+1×d Y , (3) where Λ = argmaxX tr((Xt ΣT X)−1 (Xt ΣB X)) and Rt takes care of the rotation back. 3. Original Characterization Though not noted by Ye (2005), his attempt to characterize the full set of solutions of Equation (1) is based on a simultaneous diagonalization of the three covariance matrices S B , SW , and ST that is similar to the ideas described by Campbell and Atchley (1981) and Fukunaga (1990). Moreover, Golub and Van Loan (Theorem 8.7.1. 1996) can be readily applied to demonstrate that such simultaneous diagonalization is possible in the small sample setting. After the diagonalization step, partitioned between-class, pooled within-class, and total covariance matrices are considered. This partitioning is similar to the one employed in the previous section, which does not enforce matrices to be diagonal however. In the subsequent optimization step, the classical Fisher criterion is maximized basically in the appropriate subspace, comparable to the approach described above, but in a mildly more involved and concealed way. 
For this, matrices of the form Rt X are considered, consider Equations (2) and Y (3). However, Y is simply the null matrix and the family of solutions L provided is limited to L = Rt ΛZ ∈ Rn×d Z ∈ GLd (R) . 0 Obviously, this is far from a complete characterization, especially when N − 1 n which is, for instance, typically the case for the data sets considered by Ye (2005). Generally, the utility of a dimensionality reduction criterion, without additional constrains, depends on the efficiency over the full set of solutions. As Ye (2005) only considers two very specific instances from the large class of possibilities, it is unclear to what extent the new criterion really provides an efficient way of performing a reduction of dimensionality. References N. A. Campbell and W. R. Atchley. The geometry of canonical variate analysis. Systematic Zoology, 30(3):268–280, 1981. K. Fukunaga. Introduction to Statistical Pattern Recognition. Academic Press, New York, 1990. G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press, third edition, 1996. J. Ye. Characterization of a family of algorithms for generalized discriminant analysis on undersampled problems. Journal of Machine Learning Research, 6:483–502, 2005. 2123
Author: Carine Hue, Marc Boullé
Abstract: In this paper, we consider the supervised learning task which consists in predicting the normalized rank of a numerical variable. We introduce a novel probabilistic approach to estimate the posterior distribution of the target rank conditionally to the predictors. We turn this learning task into a model selection problem. For that, we define a 2D partitioning family obtained by discretizing numerical variables and grouping categorical ones and we derive an analytical criterion to select the partition with the highest posterior probability. We show how these partitions can be used to build univariate predictors and multivariate ones under a naive Bayes assumption. We also propose a new evaluation criterion for probabilistic rank estimators. Based on the logarithmic score, we show that such criterion presents the advantage to be minored, which is not the case of the logarithmic score computed for probabilistic value estimator. A first set of experimentations on synthetic data shows the good properties of the proposed criterion and of our partitioning approach. A second set of experimentations on real data shows competitive performance of the univariate and selective naive Bayes rank estimators projected on the value range compared to methods submitted to a recent challenge on probabilistic metric regression tasks. Our approach is applicable for all regression problems with categorical or numerical predictors. It is particularly interesting for those with a high number of predictors as it automatically detects the variables which contain predictive information. It builds pertinent predictors of the normalized rank of the numerical target from one or several predictors. As the criteria selection is regularized by the presence of a prior and a posterior term, it does not suffer from overfitting. Keywords: rank regression, probabilistic approach, 2D partitioning, non parametric estimation, Bayesian model selection
6 0.16955851 32 jmlr-2007-Euclidean Embedding of Co-occurrence Data
7 0.16843551 56 jmlr-2007-Multi-Task Learning for Classification with Dirichlet Process Priors
8 0.16764086 24 jmlr-2007-Consistent Feature Selection for Pattern Recognition in Polynomial Time
9 0.15017645 6 jmlr-2007-A Probabilistic Analysis of EM for Mixtures of Separated, Spherical Gaussians
10 0.13314056 50 jmlr-2007-Local Discriminant Wavelet Packet Coordinates for Face Recognition
11 0.1242217 84 jmlr-2007-The Pyramid Match Kernel: Efficient Learning with Sets of Features
12 0.11747286 7 jmlr-2007-A Stochastic Algorithm for Feature Selection in Pattern Recognition
13 0.11559275 78 jmlr-2007-Statistical Consistency of Kernel Canonical Correlation Analysis
14 0.11140499 83 jmlr-2007-The On-Line Shortest Path Problem Under Partial Monitoring
15 0.10856963 47 jmlr-2007-Learning Horn Expressions with LOGAN-H
17 0.099997111 34 jmlr-2007-From External to Internal Regret (Special Topic on the Conference on Learning Theory 2005)
18 0.090735555 40 jmlr-2007-Harnessing the Expertise of 70,000 Human Editors: Knowledge-Based Feature Generation for Text Categorization
19 0.090527475 62 jmlr-2007-On the Effectiveness of Laplacian Normalization for Graph Semi-supervised Learning
20 0.089983411 11 jmlr-2007-Anytime Learning of Decision Trees
topicId topicWeight
[(4, 0.028), (8, 0.023), (10, 0.025), (12, 0.019), (15, 0.021), (28, 0.04), (40, 0.024), (45, 0.014), (48, 0.028), (60, 0.038), (77, 0.511), (80, 0.017), (85, 0.054), (98, 0.08)]
simIndex simValue paperId paperTitle
same-paper 1 0.73008251 27 jmlr-2007-Distances between Data Sets Based on Summary Statistics
Author: Nikolaj Tatti
Abstract: The concepts of similarity and distance are crucial in data mining. We consider the problem of defining the distance between two data sets by comparing summary statistics computed from the data sets. The initial definition of our distance is based on geometrical notions of certain sets of distributions. We show that this distance can be computed in cubic time and that it has several intuitive properties. We also show that this distance is the unique Mahalanobis distance satisfying certain assumptions. We also demonstrate that if we are dealing with binary data sets, then the distance can be represented naturally by certain parity functions, and that it can be evaluated in linear time. Our empirical tests with real world data show that the distance works well. Keywords: data mining theory, complex data, binary data, itemsets
Author: Onur C. Hamsici, Aleix M. Martinez
Abstract: Many feature representations, as in genomics, describe directional data where all feature vectors share a common norm. In other cases, as in computer vision, a norm or variance normalization step, where all feature vectors are normalized to a common length, is generally used. These representations and pre-processing step map the original data from R p to the surface of a hypersphere S p−1 . Such representations should then be modeled using spherical distributions. However, the difficulty associated with such spherical representations has prompted researchers to model their spherical data using Gaussian distributions instead—as if the data were represented in R p rather than S p−1 . This opens the question to whether the classification results calculated with the Gaussian approximation are the same as those obtained when using the original spherical distributions. In this paper, we show that in some particular cases (which we named spherical-homoscedastic) the answer to this question is positive. In the more general case however, the answer is negative. For this reason, we further investigate the additional error added by the Gaussian modeling. We conclude that the more the data deviates from spherical-homoscedastic, the less advisable it is to employ the Gaussian approximation. We then show how our derivations can be used to define optimal classifiers for spherical-homoscedastic distributions. By using a kernel which maps the original space into one where the data adapts to the spherical-homoscedastic model, we can derive non-linear classifiers with potential applications in a large number of problems. We conclude this paper by demonstrating the uses of spherical-homoscedasticity in the classification of images of objects, gene expression sequences, and text data. Keywords: directional data, spherical distributions, normal distributions, norm normalization, linear and non-linear classifiers, computer vision
3 0.24866942 24 jmlr-2007-Consistent Feature Selection for Pattern Recognition in Polynomial Time
Author: Roland Nilsson, José M. Peña, Johan Björkegren, Jesper Tegnér
Abstract: We analyze two different feature selection problems: finding a minimal feature set optimal for classification (MINIMAL - OPTIMAL) vs. finding all features relevant to the target variable (ALL RELEVANT ). The latter problem is motivated by recent applications within bioinformatics, particularly gene expression analysis. For both problems, we identify classes of data distributions for which there exist consistent, polynomial-time algorithms. We also prove that ALL - RELEVANT is much harder than MINIMAL - OPTIMAL and propose two consistent, polynomial-time algorithms. We argue that the distribution classes considered are reasonable in many practical cases, so that our results simplify feature selection in a wide range of machine learning tasks. Keywords: learning theory, relevance, classification, Markov blanket, bioinformatics
4 0.24163982 55 jmlr-2007-Minimax Regret Classifier for Imprecise Class Distributions
Author: Rocío Alaiz-Rodríguez, Alicia Guerrero-Curieses, Jesús Cid-Sueiro
Abstract: The design of a minimum risk classifier based on data usually stems from the stationarity assumption that the conditions during training and test are the same: the misclassification costs assumed during training must be in agreement with real costs, and the same statistical process must have generated both training and test data. Unfortunately, in real world applications, these assumptions may not hold. This paper deals with the problem of training a classifier when prior probabilities cannot be reliably induced from training data. Some strategies based on optimizing the worst possible case (conventional minimax) have been proposed previously in the literature, but they may achieve a robust classification at the expense of a severe performance degradation. In this paper we propose a minimax regret (minimax deviation) approach, that seeks to minimize the maximum deviation from the performance of the optimal risk classifier. A neural-based minimax regret classifier for general multi-class decision problems is presented. Experimental results show its robustness and the advantages in relation to other approaches. Keywords: classification, imprecise class distribution, minimax regret, minimax deviation, neural networks 1. Introduction - Problem Motivation In the general framework of learning from examples and specifically when dealing with uncertainty, the robustness of the decision machine becomes a key issue. Most machine learning algorithms are based on the assumption that the classifier will use data drawn from the same distribution as the training data set. Unfortunately, for most practical applications (such as remote sensing, direct marketing, fraud detection, information filtering, medical diagnosis or intrusion detection) the target class distribution may not be accurately known during learning: for example, because the cost of labelling data may be class-dependent or the prior probabilities are non-stationary. Therefore, the data used to design the classifier (within the Bayesian context (see VanTrees, 1968), the c 2007 Roc´o Alaiz-Rodr´guez, Alicia Guerrero-Curieses and Jesus Cid-Sueiro. ´ ı ı A LAIZ -RODR´GUEZ , G UERRERO -C URIESES , AND C ID -S UEIRO I prior probabilities and the misclassification costs) may be non representative of the underlying real distributions. If the ratio of training data corresponding to each class is not in agreement with real class distributions, designing Bayes decision rules based on prior probabilities estimated from these data will be suboptimal and can seriously affect the reliability and performance of the classifier. A similar problem may arise if real misclassification costs are unknown during training. However, they are usually known by the end user, who can adapt the classifier decision rules to cost changes without re-training the classifier. For this reason, our attention in this paper is mainly focused on the problem of uncertainty in prior probabilities. Furthermore, being aware that class distribution is seldom known (at least totally) in real world applications, a robust approach (as opposite to adaptive) that prevents severe performance degradation appears to be convenient for these situations. Besides other adaptive and robust approaches that address this problem (discussed in more detail in Section 2.2) it is important to highlight those that handle the problem of uncertainty in priors by following a robust minimax principle: minimize the maximum possible risk. 
Analytic foundations of minimax classification are widely covered in the literature (see VanTrees, 1968; Moon and Stirling, 2000; Duda et al., 2001, for instance), and a few algorithms to carry out minimax decisions have been proposed, ranging from computationally expensive ones, such as estimating probability density functions (Takimoto and Warmuth, 2000; Kim, 1996) or using methods from optimization (Polak, 1997), to simpler ones like neural network training algorithms (Guerrero-Curieses et al., 2004; Alaiz-Rodriguez et al., 2005). Minimax classifiers may, however, be seen as over-conservative, since their goal is to optimize performance under the least favorable conditions. Consider, for instance, a direct marketing campaign carried out in order to maximize profits. Since optimal decisions rely on the proportion of potential buyers, which is usually unknown in advance, our classification system should take this uncertainty into account. Nevertheless, following a pure minimax strategy can lead to solutions where minimizing the maximum loss implies assuming there are no potential clients. If this is the case, the minimax approach does not seem suitable for this kind of situation.

In this imprecise class distribution scenario, the classifier performance may deviate greatly from the optimal, that is, from that of the classifier knowing the actual priors. Minimizing this gap (that is, the maximum possible deviation with respect to the optimal classifier) is the focus of this paper. We seek a system as robust as the conventional minimax approach but, at the same time, less pessimistic. We will refer to it as a minimax deviation (or minimax regret) classifier. In contrast to other robust and adaptive approaches, it can be used in general multiclass problems. Furthermore, as shown in Guerrero-Curieses et al. (2004), minimax approaches can be used in combination with the adaptive proposal by Saerens et al. (2002) to exploit its advantages. The minimax regret approach has recently been applied in the context of parameter estimation (Eldar et al., 2004; Eldar and Merhav, 2004), and a similar competitive strategy has been used in the context of hypothesis testing (Feder and Merhav, 2002). Under prior uncertainty, our solution provides an upper bound on the performance divergence from the optimal classifier. We propose a simple learning rate scaling algorithm to train a neural-based minimax deviation classifier. Although training can be based on minimizing any objective function, we have chosen objective functions that provide estimates of the posterior probabilities (see Cid-Sueiro and Figueiras-Vidal, 2001, for more details).

This paper is organized as follows: the next section provides an overview of the problem as well as some previous approaches to cope with it. Section 3 states the fundamentals of minimax classification together with a deeper analysis of the minimax regret approach proposed in this paper. Section 4 presents a neural training algorithm to obtain a neural-based minimax regret classifier under complete uncertainty. Moreover, practical situations with partial uncertainty in priors are also discussed; a learning algorithm to solve them is provided in Section 5. In Section 6, experimental results show that minimax regret classifiers outperform (in terms of maximum risk deviation) classifiers trained on re-balanced data sets and those trained with the originally assumed priors.
Finally, the main conclusions are summarized in Section 7.

2. Problem Overview

Traditionally, supervised learning relies on the assumption that training data and real data come from the same (although unknown) statistical model. In order to analyze to what extent classifier performance depends on conditions such as class distribution or decision costs, learning and decision theory principles are briefly revisited. Next, some previous approaches to deal with environment imprecision are reviewed.

2.1 Learning and Making Optimal Decisions

Let S = {(x_k, d_k), k = 1, ..., K} denote a set of labelled samples, where x_k ∈ R^N is an observation feature vector and d_k ∈ U_L = {u_0, ..., u_{L−1}} is the label vector. The class-i label u_i is a unit L-dimensional vector with components u_{i,j} = δ_{ij}: every component is equal to 0 except the i-th, which is equal to 1.

We assume a learning process that estimates the parameters w of a non-linear mapping f_w : R^N → P from the input space into the probability space P = {p ∈ [0,1]^L | Σ_{i=0}^{L−1} p_i = 1}. The soft decision is given by y_k = f_w(x_k) ∈ P, and the hard output of the classifier is denoted by d̂. Note that d and d̂ will be used to distinguish the actual class from the predicted one, respectively.

Several costs (or benefits) associated with each possible decision are also defined: c_ij denotes the cost of deciding in favor of class i when the true class is j. Negative values represent benefits (for instance, c_ii, the cost of correctly classifying a sample from class i, could be negative in some practical cases). In general cost-sensitive classification problems, either the misclassification costs c_ij or the correct-decision costs c_ii can take different values for each class. Thus, there are many applications where classification errors lead to very different consequences (medical diagnosis, fault detection, credit risk analysis), which implies misclassification costs c_ij that may vary largely between classes. In the same way, there are also many domains where the correct decision costs (or benefits) c_ii do not take the same value. For instance, in targeted marketing applications (Zadrozny and Elkan, 2001), correctly identifying a buyer implies some benefit while correctly classifying a non-buyer means no income. The same applies to medical diagnosis domains such as the gastric carcinoma problem studied in Güvenir et al. (2004); in this case, the benefit of correct classification also depends on the class: the benefit of correctly classifying an early stage tumor is higher than that of a later stage.

The expected risk (or loss) R is given by

$$R = \sum_{j=0}^{L-1} \sum_{i=0}^{L-1} c_{ij}\, P\{\hat{d} = u_i \mid d = u_j\}\, P_j ,\qquad (1)$$

where P{d̂ = u_i | d = u_j} with i ≠ j are the conditional error probabilities and P_j = P{d = u_j} is the prior probability of class u_j. Defining the conditional risk of misclassifying samples from class u_j as

$$R_j = \sum_{i=0}^{L-1} c_{ij}\, P\{\hat{d} = u_i \mid d = u_j\} ,$$

we can express risk (1) as

$$R = \sum_{j=0}^{L-1} R_j P_j .\qquad (2)$$

It is well known that the Bayes decision rule for the minimum risk is given by

$$\hat{d} = \arg\min_{u_i} \left\{ \sum_{j=0}^{L-1} c_{ij}\, P\{d = u_j \mid \mathbf{x}\} \right\} ,\qquad (3)$$

where P{d = u_j | x} is the a posteriori probability of class j given sample x. The optimal decision rule thus depends on the posterior probabilities and, therefore, on the prior probabilities and the likelihood.
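To make the rule in Eq. (3) concrete, the following is a minimal sketch in Python; the cost matrix and posterior values are illustrative placeholders, not values taken from the paper.

```python
import numpy as np

def bayes_decision(posteriors, C):
    """Minimum-risk decision of Eq. (3).

    posteriors: array of shape (L,) with entries P{d = u_j | x}.
    C: cost matrix of shape (L, L) with C[i, j] = c_ij.
    """
    expected_costs = C @ posteriors        # entry i: sum_j c_ij * P{d = u_j | x}
    return int(np.argmin(expected_costs))

# Hypothetical asymmetric costs: missing class 1 (c_01 = 5) is expensive.
C = np.array([[0.0, 5.0],
              [1.0, 0.0]])
print(bayes_decision(np.array([0.7, 0.3]), C))   # -> 1
```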
In theory, as long as the posterior probabilities (or the likelihood and prior probabilities) are known, the optimal decision in Eq. (3) can be expressed, after a trivial manipulation, as a function of the cost differences (c_ij − c_jj) (Duda et al., 2001). This is the reason why c_jj is usually assumed to be zero and the value of the cost difference is directly assigned to c_ij. When dealing with practical applications, however, some authors (Zadrozny and Elkan, 2001; Güvenir et al., 2004) have urged the use of meaningful decision costs measured over a common baseline (and not necessarily taking c_jj = 0) in order to avoid mistakes that could otherwise be overlooked. For this reason and, more importantly, because of the class distribution uncertainty problem addressed in this paper, we consider decision costs measured over a common baseline. Furthermore, the absolute values of the decision costs are relevant to the design of classifiers under the minimax regret approach.

2.2 Related Work: Dealing with Cost and Prior Uncertainty

Most proposals to address uncertainty in priors fall into the categories of adaptive and robust solutions. While the aim of a robust solution is to avoid a classifier with very poor performance under any conditions, an adaptive system seeks to fit the classifier parameters using more incoming data or more precise information. With an adaptive-oriented principle, Provost (2000) states that, once the classifier is trained under specific class distribution and cost assumptions (not necessarily the operating conditions), the selection of the optimal classifier for specific conditions is carried out by a correct placement of the decision thresholds. In the same way, the approaches in Kelly et al. (1999) and Kubat et al. (1998) consider that tuning the classifier parameters should be left to the end user, expecting that class distributions and misclassification costs will be precisely known then.

Some graphical methods based on the ROC curve have been proposed in Adams and Hand (1998) and Provost and Fawcett (2001) in order to compare classifier performance under imprecise class distributions and/or misclassification costs. The ROC convex hull method presented in Provost and Fawcett (2001) (or the alternative representation proposed in Drummond and Holte (2000)) allows the user to select potentially optimal classifiers, providing a flexible way to select them when precise information about priors or costs is available. Under imprecision, some classifiers can be discarded, but this does not necessarily provide a method to select the optimal classifier among the possible ones and fit its parameters. Furthermore, due to their graphical character, these methods are limited to binary classification problems.

Changes in prior probabilities have also been discussed by Saerens et al. (2002), who propose a method based on re-estimating the prior probabilities of real data in an unsupervised way and subsequently adjusting the outputs of the classifier according to the new a priori probabilities. Obviously, the method requires enough unlabelled data to be available for re-estimation. As an alternative to adaptive schemes, several robust solutions have been proposed, such as resampling methods, especially in domains with imbalanced classes (Kubat and Matwin, 1997; Lawrence et al., 1998; Chawla et al., 2002; Barandela et al., 2003).
Either by undersampling or oversampling, the common purpose is to artificially balance the training data set in order to get a uniform class distribution, which is supposed to be the least biased towards any class and, thus, the most robust against changes in class distributions. The same approach is followed in cost-sensitive domains, but with some subtle differences in practice. It is well known that class priors and decision costs are intrinsically related; for instance, different decision costs can be simulated by altering the priors and vice versa (see Ting, 2002, for instance). Thus, when a uniform distribution is desired in a cost-sensitive domain, but working with cost-insensitive decision machines, class priors are altered according to decision costs, which is commonly referred to as rebalancing. The manipulation of the training data distribution has been applied to cost-sensitive learning in two-class problems (Breiman et al., 1984) in the following way: basically, the class with the higher misclassification cost (say, n times the lowest misclassification cost) is represented with n times more examples than the other class. Besides random sampling strategies, other sampling-based rebalancing schemes have been proposed to accomplish this task, like those considering closeness to the boundaries between classes (Japkowicz and Stephen, 2002; Zhou and Liu, 2006) or the cost-proportionate rejection sampling presented in Zadrozny et al. (2003). Extending this type of procedure to general multiclass problems with multiple (and possibly asymmetric) inter-class misclassification costs appears to be a nontrivial task (Zadrozny et al., 2003; Zhou and Liu, 2006), although some progress has been made recently on this point (Abe et al., 2004). Note, also, that many (although not all) of these rebalancing strategies are usually implemented by oversampling and/or subsampling, that is, by replicating examples (without adding any extra information) and/or deleting them (which implies a loss of information).

3. Robust Classifiers Under Prior Uncertainty: Minimax Classifiers

Prior probability uncertainty can be addressed from a robust point of view by following a minimax-derived strategy. The minimax regret criterion is discussed in this section after presenting the conventional minimax criterion. Although our approach extends to general multi-class problems and the discussion is carried out in that way, we first illustrate a binary situation for the sake of clarity and simplicity.

3.1 Minimax Classifiers

As Eq. (3) shows, the minimum risk decisions depend on the misclassification costs c_ij and the posterior class probabilities and, thus, on the prior probabilities P_i. Different prior distributions (frequency of each class) give rise to different Bayes classifiers. Fig. 1 shows the Bayes risk curve R_B(P_1) versus the class-1 prior probability for a binary classification problem.

[Figure 1: Risk vs. P_1. Minimum risk curve and performance under prior changes for the standard, minimax and minimax deviation classifiers. R_B(P_1) stands for the optimal Bayes risk against P_1. R_F(Q_1, P_1) denotes the risk of a standard classifier (fixed decision rule optimized for the prior probability Q_1 estimated in the training phase) against P_1. R_F(Q_1mM, P_1) denotes the risk of a minimax classifier (fixed decision rule optimized for the minimax probability Q_1mM) against P_1. R_F(Q_1mMd, P_1) denotes the risk of a minimax deviation classifier (fixed decision rule optimized for the minimax deviation probability Q_1mMd) against P_1.]
If the prior probability distribution is unknown when the classifier is designed, or if this distribution changes with time or from one environment to another, the mismatch between training and test conditions can significantly degrade the classifier performance. For instance, assume that Q = (Q_0, Q_1) is the vector with the class-0 and class-1 prior probabilities estimated in the training phase, and let R_B(Q_1) represent the minimum (Bayes) risk attainable by any decision rule for these priors. Note that, according to Eq. (2), for a given classifier the risk is a linear function of the priors. Thus, the risk R_F(Q_1, P_1) associated with the (fixed) classifier optimized for Q changes linearly with the actual prior probabilities P_1 and P_0 = 1 − P_1, going from (0, R_0) to (1, R_1) (the continuous line in Fig. 1), where R_0 and R_1 are the class-conditional risks for classes 0 and 1, respectively. Fig. 1 shows the impact of this change in priors and how performance deviates from the optimal. Also, it can be shown (see VanTrees, 1968, for instance) that the minimum risk curve obtained for each prior is concave and that the risk function of a given classifier satisfies R_F(Q_1, P_1) ≥ R_B(P_1), with a tangent point at P_1 = Q_1.

The dashed line in Fig. 1 shows the performance of the minimax classifier, which minimizes the maximum possible risk under the least favorable priors, thus providing the most robust solution in the sense that performance becomes independent of the priors. From Fig. 1 it becomes clear that the minimax classifier is optimal for the prior probabilities P = Q_mM = (Q_0mM, Q_1mM) maximizing R_B; this strategy is therefore equivalent to maximizing the minimum risk (Moon and Stirling, 2000; Duda et al., 2001). We will refer to these as the minimax probabilities. Fig. 1 also makes clear that, although a minimax classifier is a robust solution to the imprecision in priors, it may become a somewhat pessimistic approach.

3.2 Minimax Deviation Classifiers

We propose an alternative classifier that, instead of minimizing the maximum risk, minimizes the maximum deviation (regret) from the optimal Bayes classifier. In the following, we will refer to it as the minimax deviation or minimax regret classifier. A comparison between the minimax and minimax deviation approaches is also shown in Fig. 1. This latter case corresponds to a classifier trained on prior probabilities P = Q_mMd whose performance as a function of the priors is given by a line (a plane or hyperplane for three or more classes) parallel to what we name, in the following, the basis risk, R_basis = c_00(1 − P_1) + c_11 P_1. Note that the maximum deviation (with respect to the priors) of the classifier optimized for Q is given by

$$D(Q) = \max_{P_1} \{ R_F(Q_1, P_1) - R_B(P_1) \} = \max \{ R_0 - c_{00},\; R_1 - c_{11} \} .$$

Inspection of Fig. 1 shows that the minimum of D (with respect to Q) is achieved when R_0 − c_00 = R_1 − c_11, which means that the line R_F(Q_1, P_1) is parallel to the line labelled R_basis in the figure and tangent to R_B at Q_1mMd.
Therefore, the minimax regret classifier is also the Bayes solution with respect to the least favorable priors (Q_0mMd, Q_1mMd) (see Berger, 1985, for instance), which will be called the minimax deviation probabilities. We now extend the formulation to a general L-class problem.

Definition 1 Consider an L-class decision problem with costs c_ij, 0 ≤ i, j < L and c_jj ≤ c_ij, and let R_w(P) be the risk of a decision machine with parameter vector w when the prior class probabilities are given by P = (P_0, ..., P_{L−1}). The deviation function is defined as

$$D_w(P) = R_w(P) - R_B(P)$$

and the minimax deviation is defined as

$$D_{mMd} = \inf_w \max_P \{ D_w(P) \} .\qquad (4)$$

Note that the above definition assumes that the maximum exists. This is actually the case, since D_w(P) is a continuous (convex) function over the compact set P. Note, also, that our definition includes the natural assumption that c_jj is never higher than c_ij, meaning that a decision error is never less costly than the correct decision. This assumption is used in part of our theoretical analysis. The algorithms proposed in this paper are based on the fact that the minimax deviation can be computed without knowing R_B.

Theorem 2 The minimax deviation is given by

$$D_{mMd} = \inf_w \max_P \{ \bar{D}_w(P) \} ,$$

where

$$\bar{D}_w(P) = R_w(P) - R_{basis}(P)\qquad (5)$$

and

$$R_{basis}(P) = \sum_{j=0}^{L-1} c_{jj} P_j .\qquad (6)$$

Proof Note that, according to Eqs. (1) and (2), for any decision machine and any u_j ∈ U_L,

$$R(u_j) = R_j = \sum_{i=0}^{L-1} c_{ij}\, P\{\hat{d} = u_i \mid d = u_j\} \ge c_{jj} .$$

Since the bound is attained by the classifier deciding d̂ = u_j for every observation x, we have R_B(u_j) = c_jj. Therefore, using Eq. (6), we find that, for any u ∈ U_L, R_B(u) = R_basis(u) and, thus,

$$D_w(u) = \bar{D}_w(u) .$$

Since the Bayes minimum risk R_B(P) is a concave function of the priors and R_w(P) is linear, D_w(P) is convex and, thus, attains its maximum at some vertex of P (i.e., at some P = u ∈ U_L):

$$\max_P \{ D_w(P) \} = \max_{u \in U_L} \{ D_w(u) \} .\qquad (7)$$

Since the maximum difference between two hyperplanes defined over P is always attained at some vertex, we can conclude that

$$\max_P \{ \bar{D}_w(P) \} = \max_{u \in U_L} \{ \bar{D}_w(u) \} = \max_{u \in U_L} \{ D_w(u) \} .\qquad (8)$$

Combining Eqs. (4), (7) and (8), we get D_mMd = inf_w max_P { D̄_w(P) }.

Note that R_basis represents the risk baseline of the ideal classifier with zero errors. Theorem 2 shows that the minimax regret can be computed as the minimax deviation from this ideal classifier. Note, also, that if the costs c_ii do not depend on i, Eq. (5) becomes equivalent (up to a constant) to the risk, and the minimax regret classifier becomes equivalent to the minimax classifier.
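One practical consequence of Theorem 2 is that, for a fixed classifier, max_P D̄_w(P) is simply the largest class-conditional risk deviation max_j (R_j − c_jj). The following minimal sketch evaluates it using the empirical risk estimate that appears later in Eq. (22); the confusion-count input is an assumed interface, not part of the paper's notation.

```python
import numpy as np

def max_normalized_deviation(C, conf_counts):
    """Largest vertex deviation max_j (R_j - c_jj) of a fixed classifier.

    C[i, j] = c_ij; conf_counts[i, j] = number of class-j samples decided as i.
    """
    N_j = conf_counts.sum(axis=0)                # samples per true class
    R = (C * conf_counts).sum(axis=0) / N_j      # Eq. (22): R_j = (1/N_j) sum_i c_ij N_ij
    return float(np.max(R - np.diag(C)))         # maximum over the vertices of P
```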
Another result that is important for the algorithms proposed in this paper is that, under some conditions on the minimum risk, the minimum and maximum operators can be permuted. Although general results on the permutability of minimum and maximum operators can be found in the literature (see Polak, 1997, for instance), we provide here the proof for the specific case of interest in this paper.

Theorem 3 Consider the minimum deviation function given by

$$D_{min}(P) = \inf_w \{ \bar{D}_w(P) \} ,\qquad (9)$$

where D̄_w(P) is the normalized deviation function given by Eq. (5), and let P* be the prior probability vector providing the maximum deviation,

$$P^* = \arg\max_P D_{min}(P) .\qquad (10)$$

If D_min(P) is continuously differentiable at P = P*, then the minimax deviation D_mMd defined by Eq. (4) is

$$D_{mMd} = D_{min}(P^*) = \max_P \inf_w \bar{D}_w(P) .\qquad (11)$$

Proof For any classifier with parameter vector w we can write

$$\max_P \bar{D}_w(P) \ge \bar{D}_w(P^*) \ge D_{min}(P^*)$$

and, thus,

$$\inf_w \max_P \bar{D}_w(P) \ge D_{min}(P^*) .\qquad (12)$$

Therefore, D_min(P*) is a lower bound of the minimax regret. Now we prove that D_min(P*) is also an upper bound. According to Eq. (9), for any ε > 0 there exists a parameter vector w_ε such that

$$\bar{D}_{w_\varepsilon}(P^*) \le D_{min}(P^*) + \varepsilon .\qquad (13)$$

By definition, D_min(P) ≤ D̄_{w_ε}(P) for any P. Therefore, using Eq. (13), we can write

$$\bar{D}_{w_\varepsilon}(P^*) - \bar{D}_{w_\varepsilon}(P) \le D_{min}(P^*) - D_{min}(P) + \varepsilon .\qquad (14)$$

Since D_min(P) is continuously differentiable and (according to Eq. (10)) maximum at P*, for any ε′ > 0 there exists δ > 0 such that, for any P ∈ P with ||P* − P|| ≤ δ,

$$D_{min}(P^*) - D_{min}(P) \le \varepsilon' \|P^* - P\| \le \varepsilon' \delta .\qquad (15)$$

Let P_δ be a prior such that ||P* − P_δ|| = δ. Taking ε = ε′δ and combining Eqs. (14) and (15), we can write

$$\bar{D}_{w_\varepsilon}(P^*) - \bar{D}_{w_\varepsilon}(P_\delta) \le 2 \varepsilon' \delta .$$

Since the above condition holds for any ε′ > 0 and any prior P_δ at distance δ from P*, and taking into account that D̄_{w_ε}(P) is a linear function of P, we conclude that the maximum slope of D̄_{w_ε}(P) is bounded by 2ε′ and, thus, for any P ∈ P,

$$\bar{D}_{w_\varepsilon}(P) - \bar{D}_{w_\varepsilon}(P^*) \le 2 \varepsilon' \|P - P^*\| \le 2\sqrt{2}\, \varepsilon'$$

(where we have used the fact that the maximum distance between two probability vectors is √2). Therefore, we can write

$$\max_P \bar{D}_{w_\varepsilon}(P) \le \bar{D}_{w_\varepsilon}(P^*) + 2\sqrt{2}\, \varepsilon'$$

and, thus,

$$\inf_w \max_P \bar{D}_w(P) \le \bar{D}_{w_\varepsilon}(P^*) + 2\sqrt{2}\, \varepsilon' .$$

Finally, using Eq. (13) and taking into account that ε = ε′δ ≤ √2 ε′, we get

$$\inf_w \max_P \bar{D}_w(P) \le D_{min}(P^*) + 3\sqrt{2}\, \varepsilon' .\qquad (16)$$

Since the above holds for any ε′ > 0, we conclude that D_min(P*) is also an upper bound. Combining Eqs. (12) and (16), we conclude that inf_w max_P D̄_w(P) = D_min(P*), which completes the proof.

Note that the deviation function need be neither differentiable nor continuous in the parameters w. If the minimum deviation function is not continuously differentiable at the minimax deviation probability P*, the theorem cannot be applied. The reason is that, although at least one classifier provides the minimum deviation at P = P*, it may not provide a deviation that is constant with respect to the prior probability. The situation can be illustrated with an example. Let x ∈ R be distributed according to p(x|d = 0) = 0.8 N(x, σ) + 0.2 N(x − 2, σ) and p(x|d = 1) = 0.2 N(x − 1, σ) + 0.8 N(x − 3, σ), where σ = 0.5 and N(x, σ) = (2πσ²)^{−1/2} exp(−x²/(2σ²)), and consider the set Φ_λ of classifiers given by a single threshold over x, with decision

$$\hat{d} = \begin{cases} 1 & \text{if } x \ge \lambda \\ 0 & \text{if } x < \lambda . \end{cases}$$

Fig. 2 shows the distribution of both classes over x, and Fig. 3 shows, as a function of the priors, the minimum error probability (continuous line) that can be obtained using classifiers in Φ_λ. Decision costs c_00 = c_11 = 0 and c_01 = c_10 = 1 have been used for this illustrative problem. An abrupt slope change is observed at the minimax deviation probability, P{d = 1} = 1/2. For this prior, there are two single-threshold classifiers providing the minimum error probability, given by the thresholds λ_1 and λ_2 in Fig. 2. However, as shown in Fig. 3, neither of them provides a risk that is constant in the prior. The minimax deviation classifier in Φ_λ, which has threshold λ_0, does not attain the minimum risk at the minimax deviation probability and, thus, cannot be obtained by using Eq. (11). For this example, the desired robust classifier should have the deviation function given by the horizontal dotted line in Fig. 3.
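The example can be reproduced numerically. The sketch below evaluates the conditional error rates and the risk of a single-threshold classifier for the stated mixtures; the threshold grid is our own choice, not the paper's. Scanning the risk at P{d = 1} = 1/2 exposes the two local minima corresponding to λ_1 and λ_2 in Fig. 2.

```python
import numpy as np
from scipy.stats import norm

SIGMA = 0.5

def conditional_errors(lam):
    # P{x >= lam | d = 0}: a class-0 sample is decided as class 1
    e0 = 0.8 * norm.sf(lam, 0.0, SIGMA) + 0.2 * norm.sf(lam, 2.0, SIGMA)
    # P{x < lam | d = 1}: a class-1 sample is decided as class 0
    e1 = 0.2 * norm.cdf(lam, 1.0, SIGMA) + 0.8 * norm.cdf(lam, 3.0, SIGMA)
    return e0, e1

def risk(lam, p1):
    e0, e1 = conditional_errors(lam)   # with c00 = c11 = 0 and c01 = c10 = 1
    return (1.0 - p1) * e0 + p1 * e1

lams = np.linspace(-1.0, 4.0, 2001)
risks = np.array([risk(l, 0.5) for l in lams])
print(lams[np.argmin(risks)], risks.min())   # one of the two optimal thresholds
```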
Fortunately, such a classifier can be obtained by combining the outputs of several classifiers. For instance, let d̂_1 and d̂_2 be the decisions of the classifiers given by thresholds λ_1 and λ_2, respectively. It is not difficult to see that the classifier selecting between d̂_1 and d̂_2 at random (for each input sample x) provides a robust classifier.

[Figure 2: The conditional data distributions for the one-dimensional example discussed in the text. λ_1 and λ_2 are the thresholds providing the minimum risk at the minimax deviation probability; λ_0 provides the minimax deviation classifier.]

This procedure can be extended to the multiclass case: consider a set of L classifiers with parameters w_k, k = 0, ..., L−1, and the classifier that, for any input sample x, makes a decision equal to d̂_k (i.e., the decision of the classifier with parameters w_k) with probability q_k. It is not difficult to show that the deviation function of this classifier is given by

$$D(P) = \sum_{j=0}^{L-1} P_j \sum_{k=0}^{L-1} q_k D_j(\mathbf{w}_k) ,$$

where D_j(w_k) = R_j(w_k) − c_jj. In order to get a constant deviation function, the probabilities q_k should be chosen in such a way that

$$\sum_{k=0}^{L-1} q_k D_j(\mathbf{w}_k) = D ,$$

where D is a constant. Solving these linear equations for q_k, k = 0, ..., L−1 (with the constraint Σ_k q_k = 1), the required probabilities can be found (a small sketch follows at the end of this subsection). Note that, in order to build the non-deterministic classifier providing a constant deviation, a set of L independent classifiers that are optimal at the minimax deviation prior should be found. However, we do not investigate this special case further, for two main reasons:

- The situation does not seem to be common in practice. In our simulations, the maximum of the minimum risk deviation always provided a response that is approximately parallel to R_basis.
- In general, an abrupt change in the derivative may be a symptom that the classifier structure is not optimal for the data distribution. Instead of building a non-deterministic classifier, increasing the classifier complexity is likely to be more efficient.

[Figure 3: Error probabilities as a function of the prior probability of class 1 for the example in Fig. 2. Thresholds λ_1 and λ_2 do not provide the minimax deviation classifier, which is obtained for threshold λ_0. However, the random combination of the classifiers with thresholds λ_1 and λ_2 (dotted line) provides a robust classifier with a deviation lower than that of λ_0.]

Although the least favorable prior providing the minimax deviation can be computed in closed form for some simple distributions, in general it must be computed numerically. Moreover, we assume here that the data distribution is not known and must be learned from examples. Thus, we must incorporate the estimation of the least favorable prior into the learning process. Next, we propose a training algorithm to obtain a minimax regret classifier based on neural networks.
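For the randomized combination announced above, the mixing probabilities q_k solve a small linear system: L equations forcing a common deviation D, plus the normalization constraint. A sketch, assuming the per-class deviations D_j(w_k) have already been estimated:

```python
import numpy as np

def mixing_probabilities(Dmat):
    """Solve sum_k q_k D_j(w_k) = D for all j, with sum_k q_k = 1.

    Dmat[j, k] = D_j(w_k) = R_j(w_k) - c_jj. Returns (q, D).
    """
    L = Dmat.shape[0]
    A = np.zeros((L + 1, L + 1))   # unknowns: q_0, ..., q_{L-1}, D
    A[:L, :L] = Dmat
    A[:L, L] = -1.0                # rows j: sum_k q_k D_j(w_k) - D = 0
    A[L, :L] = 1.0                 # last row: sum_k q_k = 1
    b = np.zeros(L + 1)
    b[L] = 1.0
    sol = np.linalg.solve(A, b)
    return sol[:L], sol[L]

# Toy two-class check: symmetric deviations yield q = (0.5, 0.5) and D = 0.2.
q, D = mixing_probabilities(np.array([[0.1, 0.3], [0.3, 0.1]]))
print(q, D)
```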
4. Neural Robust Classifiers Under Complete Uncertainty

Note that, if Q_mMd is the probability vector providing the maximum in Eq. (11), that is,

$$Q_{mMd} = \arg\max_P \inf_w \{ \bar{D}_w(P) \} ,$$

then we can write

$$D_{mMd} = \inf_w \{ \bar{D}_w(Q_{mMd}) \} .$$

Therefore, the minimax deviation classifier can be estimated by training a classifier using the prior Q_mMd. For this reason, Q_mMd will be called the minimax deviation prior (or least favorable prior). Our proposed algorithms are based on an iterative process of estimating the parameters w from an estimate of the minimax deviation prior, and re-estimating the prior from an estimate of the network weights. This is shown in the following.

4.1 Updating Network Weights

Learning is based on minimizing some empirical estimate of the overall error function

$$E\{C(\mathbf{y}, \mathbf{d})\} = \sum_{i=0}^{L-1} P\{d = u_i\}\, E\{C(\mathbf{y}, \mathbf{d}) \mid d = u_i\} = \sum_{i=0}^{L-1} P_i C_i ,$$

where C(y, d) may be any error function and C_i is the expected conditional error for class i. Selecting an appropriate error function (see Cid-Sueiro and Figueiras-Vidal, 2001, for instance), learning rules can be designed that provide a posteriori probability estimates (y_i ≈ P{d = u_i | x}, where y_i is the soft decision) and, thus, according to Eq. (3), the hard decision minimizing the risk can be approximated by

$$\hat{d} = \arg\min_i \left\{ \sum_{j=0}^{L-1} c_{ij}\, y_j \right\} .$$

The overall empirical error (cost) function used in learning for priors P = (P_0, ..., P_{L−1}) may be written as

$$\bar{C} = \sum_{i=0}^{L-1} P_i C_i = \sum_{i=0}^{L-1} P_i \frac{1}{K_i} \sum_{k=1}^{K} d_i^k\, C(\mathbf{y}^k, \mathbf{d}^k) = \frac{1}{K} \sum_{k=1}^{K} \sum_{i=0}^{L-1} \frac{P_i}{P_i^{(0)}}\, d_i^k\, C(\mathbf{y}^k, \mathbf{d}^k) ,\qquad (17)$$

where P_i^{(0)} = K_i/K is an initial estimate of the class-i prior based on the class frequencies in the training set and P_i is the current prior estimate. Minimizing the error function (17) by means of a stochastic gradient descent learning rule leads to the network weight update at the k-th iteration

$$\mathbf{w}^{(k+1)} = \mathbf{w}^{(k)} - \mu \sum_{i=0}^{L-1} \frac{P_i^{(n)}}{P_i^{(0)}}\, d_i^k\, \nabla_w C(\mathbf{y}^k, \mathbf{d}^k) = \mathbf{w}^{(k)} - \sum_{i=0}^{L-1} \mu_i^{(n)}\, d_i^k\, \nabla_w C(\mathbf{y}^k, \mathbf{d}^k) ,\qquad (18)$$

where

$$\mu_i^{(n)} = \mu\, \frac{P_i^{(n)}}{P_i^{(0)}}\qquad (19)$$

is a learning step scaled by the prior ratio. Note that d_i^k selects the appropriate µ_i according to the class membership of the pattern. The classifier is trained without altering the original class distribution P_i^{(0)} of the training data set and, therefore, without deleting or duplicating information.

4.2 Updating Prior Probabilities

Eq. (11) shows that the learning process should maximize (5) with respect to the prior probabilities. The estimate of (5) can be computed as

$$\bar{D}_w(P) = R_w(P) - R_{basis}(P) ,\qquad (20)$$

where

$$R_w(P) = \sum_{j=0}^{L-1} R_j P_j\qquad (21)$$

is the overall risk estimate and

$$R_j = \frac{1}{N_j} \sum_{i=0}^{L-1} c_{ij} N_{ij}\qquad (22)$$

is the class-j conditional risk estimate, where N_j is the number of class-u_j patterns in the training phase and N_ij is the number of samples from class u_j assigned to u_i.

In order to derive a learning rule that finds an estimate P_i satisfying the constraints Σ_{i=0}^{L−1} P_i = 1 and 0 ≤ P_i ≤ 1, we use auxiliary variables B_i such that

$$P_i = \frac{\exp(B_i)}{\sum_{j=0}^{L-1} \exp(B_j)} .\qquad (23)$$

We maximize D̄_w with respect to B_i. Applying the chain rule, and using Eqs. (20), (21) and (23),

$$\frac{\partial \bar{D}_w}{\partial B_i} = \sum_{j=0}^{L-1} \frac{\partial \bar{D}_w}{\partial P_j}\, \frac{\partial P_j}{\partial B_i} = \sum_{j=0}^{L-1} (R_j - c_{jj})\, P_i\, (\delta_{ij} - P_j) = P_i \left( R_i - c_{ii} - \sum_{j=0}^{L-1} R_j P_j + \sum_{j=0}^{L-1} c_{jj} P_j \right) = P_i\, Rd_i ,$$

where Rd_i = (R_i − c_ii) − (R_w − R_basis). The learning rule for the auxiliary variable B_i is

$$B_i^{(n+1)} = B_i^{(n)} + \rho\, \frac{\partial \bar{D}_w}{\partial B_i} = B_i^{(n)} + \rho\, P_i^{(n)} Rd_i^{(n)} ,\qquad (24)$$

where the parameter ρ > 0 controls the rate of convergence.
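The prior re-estimation of Eqs. (23)-(24) amounts to gradient ascent on the auxiliary variables B_i, with the priors recovered through a softmax. A minimal sketch, assuming the conditional risk estimates are given:

```python
import numpy as np

def update_auxiliary(B, R_cond, C, rho=0.1):
    """One ascent step on the B_i of Eqs. (23)-(24).

    B: auxiliary variables; R_cond[j]: estimated conditional risk R_j (Eq. (22));
    C: cost matrix with the c_jj on its diagonal; rho: convergence rate.
    """
    P = np.exp(B - B.max())
    P /= P.sum()                          # Eq. (23), numerically stabilized
    R_w = R_cond @ P                      # Eq. (21)
    R_basis = np.diag(C) @ P              # Eq. (6)
    Rd = (R_cond - np.diag(C)) - (R_w - R_basis)
    return B + rho * P * Rd               # Eq. (24)
```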
Using Eqs. (23) and (24), the updated learning rule for P_i is

$$P_i^{(n+1)} = \frac{\exp(B_i^{(n)})\, \exp(\rho P_i^{(n)} Rd_i^{(n)})}{\sum_{j=0}^{L-1} \exp(B_j^{(n)})\, \exp(\rho P_j^{(n)} Rd_j^{(n)})} = \frac{P_i^{(n)} \exp(\rho P_i^{(n)} Rd_i^{(n)})}{\sum_{j=0}^{L-1} P_j^{(n)} \exp(\rho P_j^{(n)} Rd_j^{(n)})} .\qquad (25)$$

4.3 Training Algorithm for a Minimax Deviation Classifier

In the previous sections, both the network weight update rule (18) and the prior probability update rule (25) have been derived. The algorithm resulting from their combination is as follows:

for n = 0 to N_iterations − 1 do
    for k = 1 to K do
        w^(k+1) = w^(k) − Σ_{i=0}^{L−1} µ_i^(n) d_i^k ∇_w C(y^k, d^k)
    end for
    Estimate R^(n) and R_i^(n), i = 0, ..., L−1, according to (21) and (22)
    Update the minimax probability P_i^(n+1), i = 0, ..., L−1, according to (25) and compute µ_i^(n+1) with (19)
end for
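A compact sketch of the whole Section 4.3 loop follows. The classifier interface (grad_C, predict) and the data layout are assumptions made for the sketch; any gradient-trained model providing posterior estimates would fit.

```python
import numpy as np

def train_minimax_deviation(w, X, D_onehot, C, grad_C, predict,
                            mu=0.3, rho=0.1, n_iter=80):
    """Alternate the weight update (18) with the prior update (25)."""
    K, L = D_onehot.shape
    P0 = D_onehot.mean(axis=0)            # P_i^(0): training-set class frequencies
    P = P0.copy()
    for _ in range(n_iter):
        mu_i = mu * P / P0                # Eq. (19): class-scaled learning steps
        for x, d in zip(X, D_onehot):
            w = w - mu_i[int(np.argmax(d))] * grad_C(w, x, d)    # Eq. (18)
        conf = np.zeros((L, L))           # N_ij: class-j samples decided as class i
        for x, d in zip(X, D_onehot):
            conf[predict(w, x), int(np.argmax(d))] += 1
        R_cond = (C * conf).sum(axis=0) / conf.sum(axis=0)       # Eq. (22)
        Rd = (R_cond - np.diag(C)) - (R_cond @ P - np.diag(C) @ P)
        P = P * np.exp(rho * P * Rd)      # Eq. (25), multiplicative form
        P /= P.sum()
    return w, P
```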
5. Robust Classifiers Under Partial Uncertainty

Although in many practical situations the prior probabilities may not be specified with precision, they can be partially known. In this section we discuss how partial information about the priors can be used to improve the classifier performance relative to the complete uncertainty situation. From now on, we consider that lower (or upper) bounds on the priors are known from previous experience. We denote the lower and upper bounds of the class-i prior probability by P_i^l and P_i^u, respectively.

To illustrate this situation, consider a binary classification problem where the lower bounds P_0^l and P_1^l are known; that is, P_1 ∈ [P_1^l, 1 − P_0^l], where this interval represents the uncertainty region. Let Γ = {P : 0 ≤ P_i ≤ 1, Σ_{i=0}^{L−1} P_i = 1, P_i ≥ P_i^l} denote the probability region satisfying the imposed constraints. In the following, we will refer to Γ as the uncertainty region. Now, the aim is to design a classifier that minimizes the maximum regret from the minimum risk only inside the uncertainty region. This is depicted in Fig. 4(a), which shows that reducing the uncertainty in the priors reduces the deviation from the optimal classifier. This minimax regret approach for the uncertainty region Γ is often called Γ-minimax regret. As discussed before, the minimax deviation solution gives a Bayes solution with respect to some priors, denoted in the partial uncertainty case by Q_mMd^Γ in Fig. 4(a), which is the least favorable distribution according to the regret criterion.

[Figure 4: Minimax deviation classifier under partial uncertainty of prior probabilities: (a) Γ-minMaxDev classifier; (b) modified cost function defined as R_B(P) + ψ(P).]

In contrast to the minimax regret criterion, note that a classical minimax classifier for the considered uncertainty region would minimize the worst-case risk. It would be a Bayes solution for the prior where the minimum risk reaches its maximum, which could be denoted by Q_mM^Γ. Notice, also, that these solutions coincide if the risks at the vertices of Γ take the same value (c_ii^Γ = k).

5.1 Neural Robust Classifiers Under Partial Uncertainty

The minimax search can be formulated as maximizing (with respect to the priors) the minimum (with respect to the network parameters) of the deviation function (5), as described in the previous section, but subject to some constraints:

$$\arg\max_P \inf_w \{ \bar{D}_w^\Gamma(P) \} \quad \text{s.t.} \quad P_i \ge P_i^l, \; i = 0, \ldots, L-1 ,$$

where D̄_w^Γ = R_w^Γ − R_basis^Γ. When uncertainty is global, the basis hyperplane is defined by the risks in the L extreme cases P_i = δ_ik, that is, by the corresponding c_ii. With partial knowledge of the prior probabilities, however, this hyperplane becomes defined by the risks at the L vertices given by the restrictions, with associated risks denoted by c_jj^Γ.

Defining

$$l(P_i) = \frac{1}{1 + \exp(-\tau (P_i - P_i^l))} ,\qquad (26)$$

where τ controls the hardness of the restriction, the minimax problem can be re-formulated as

$$\arg\max_P \inf_w \{ \bar{D}_w^\Gamma(P) \} \quad \text{s.t.} \quad l(P_i) \ge 1/2, \; i = 0, \ldots, L-1 .$$

This constrained optimization problem can be solved as an unconstrained problem by considering an auxiliary function that incorporates the restriction as a barrier function,

$$\arg\max_P \inf_w \{ \bar{D}_w^\Gamma(P) + A\, \psi(P) \} ,$$

where ψ(P_i) = log(l(P_i)) and the constant A determines the contribution of the barrier function. Fig. 4(b) shows the new risk function corresponding to the binary case previously depicted in Fig. 4(a); it is the sum of the original R_B(P) and the barrier function ψ(P).

As in Section 4.1, in order to derive the learning rule we need to compute

$$\frac{\partial \psi}{\partial B_i} = \sum_{j=0}^{L-1} \frac{\partial \psi}{\partial P_j}\, \frac{\partial P_j}{\partial B_i} = \tau\, P_i \sum_{k=0}^{L-1} (1 - l(P_k))\, (\delta_{ik} - P_k) = \tau\, P_i\, \psi d_i ,$$

where ψd_i = Σ_{k=0}^{L−1} (1 − l(P_k))(δ_ik − P_k). As τ increases, the constraints become harder around the specified bound. The update rule for the auxiliary variable B_i at cycle n is

$$B_i^{(n+1)} = B_i^{(n)} + \rho\, P_i^{(n)} Rd_i^{\Gamma(n)} + \rho A \tau\, P_i^{(n)} \psi d_i^{(n)}$$

and, therefore, using (23), the update rule for P_i is

$$P_i^{(n+1)} = \frac{P_i^{(n)} \exp(\rho P_i^{(n)} Rd_i^{\Gamma(n)})\, \exp(\rho A \tau P_i^{(n)} \psi d_i^{(n)})}{\sum_{j=0}^{L-1} P_j^{(n)} \exp(\rho P_j^{(n)} Rd_j^{\Gamma(n)})\, \exp(\rho A \tau P_j^{(n)} \psi d_j^{(n)})} .$$

Note that, if an upper bound is known instead of a lower bound, l(P_i) defined by (26) should be replaced by u(P_i) = (1 + exp(τ(P_i − P_i^u)))^{−1} in the previous formulation.

The minimax constrained optimization problem has thus been tackled by considering a new objective function defined as the sum of the original cost function and a barrier function. Studying the concavity of this new function is important because a stationary point of a concave function is a global maximum. Since the minimum risk curve R_B(P) is a concave function of the priors (see VanTrees, 1968, for details), it suffices to verify the concavity of the barrier function to conclude that the sum of the two is concave as well. The Hessian of the barrier function is a diagonal matrix D_r = diag(r) with all-negative diagonal entries r_i = −Aτ² l(P_i)(1 − l(P_i)). Since l(P_i) ∈ [0, 1] and, therefore, r_i ≤ 0, it is straightforward to see that, for any vector v,

$$v^T H_R v = v^T D_r v = \sum_{i=0}^{L-1} v_i^2\, r_i \le 0 .$$

Since the barrier function is concave, the new objective function (the sum of two concave functions) is also concave.
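For the partially constrained case, the only change to the prior update is the extra barrier term ρAτ P_i ψd_i. A sketch, with the lower bounds P_i^l and the constants A and τ as inputs (the default values are ours, not the paper's):

```python
import numpy as np

def update_auxiliary_with_bounds(B, Rd_gamma, P_low, rho=0.1, A=1.0, tau=50.0):
    """Ascent step on B_i with the barrier of Eq. (26) enforcing P_i >= P_i^l."""
    P = np.exp(B - B.max())
    P /= P.sum()
    l = 1.0 / (1.0 + np.exp(-tau * (P - P_low)))     # Eq. (26)
    # psi_d_i = sum_k (1 - l(P_k)) (delta_ik - P_k)
    psi_d = (1.0 - l) - np.dot(1.0 - l, P)
    return B + rho * P * Rd_gamma + rho * A * tau * P * psi_d
```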
5.2 Extension to Other Learning Algorithms

The learning algorithm proposed in this paper is intended to train a minimax deviation classifier based on neural networks with a feedforward architecture. In fact, the proposed learning algorithm is a feasible solution for any learning process based on minimizing some empirical estimate of an overall cost (error) function. More generally, it is applicable to any classifier provided it is trained (in an iterative process) with the estimated minimax deviation probabilities and the assumed decision costs. Specifically, in this paper, scaling the learning rate makes it possible to simulate different class distributions, and the hard decisions are made based on posterior probability estimates and decision costs. Furthermore, the neural learning carried out in one iteration can be re-used in the next one, which reduces the computational cost with respect to a complete optimization process at each iteration.

Apart from the general approach of completely training a classifier at each iteration, and in order to reduce its computational cost, specific solutions may be studied for different learning machines. It does not seem feasible, however, to readily achieve this improvement for classifiers like SVMs, where the support vectors of one solution may have nothing in common with those obtained in the next iteration, making it necessary to re-train the classifier at each iteration.

Another possible solution for any classifier that provides a posteriori probability estimates, or any score that can be converted into probabilities (for details on calibration methods see Wei et al., 1999; Zadrozny and Elkan, 2002; Niculescu-Mizil and Caruana, 2005), is outlined here. In this case, an iterative procedure can be studied that estimates the minimax deviation probabilities and accordingly adjusts (without re-training) the outputs of the classifier. The general idea is as follows: first, the new minimax deviation prior probabilities are estimated according to (25); then, the posterior probabilities provided by the model are adjusted as follows (see Saerens et al., 2002, for more details):

$$P^{(k)}\{d = u_i \mid \mathbf{x}\} = \frac{ \dfrac{P_i^{(k)}}{P_i^{(k-1)}}\, P^{(k-1)}\{d = u_i \mid \mathbf{x}\} }{ \sum_{j=0}^{L-1} \dfrac{P_j^{(k)}}{P_j^{(k-1)}}\, P^{(k-1)}\{d = u_j \mid \mathbf{x}\} } .\qquad (27)$$

The algorithm's main structure is summarized as:

for k = 1 to K do
    Estimate R^(k) and R_i^(k), i = 0, ..., L−1, according to (21), (22) and the decision costs c_ij
    Update the minimax probability P_i^(k+1) according to (25)
    Adjust the classifier outputs according to (27)
end for

The effectiveness of this method relies on the accuracy of the initial a posteriori probability estimates. Studying this approach in depth and comparing different minimax deviation classifiers (decision trees, SVMs, RBF networks, feedforward networks and committee machines) together with different probability calibration methods appears to be a challenging issue to be explored in future work.
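The adjustment step of Eq. (27) can be written in a few lines: fixed posterior estimates are re-weighted by the ratio of new to old priors and renormalized. A minimal sketch (the array shapes are our convention):

```python
import numpy as np

def adjust_posteriors(post_old, P_old, P_new):
    """Eq. (27): re-weight posteriors by the prior ratio, then renormalize.

    post_old: array (n_samples, L) with P^(k-1){d = u_i | x} in each row.
    """
    w = post_old * (P_new / P_old)
    return w / w.sum(axis=1, keepdims=True)
```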
6. Experimental Results

In this section we first present the neural network architecture used in the experiments and illustrate the proposed minimax deviation strategy on an artificial data set. Then we apply it to several real-world classification problems. Moreover, a comparison with other proposals, such as the traditional minimax and the common re-balancing approach, is carried out.

6.1 Softmax-based Network

Although our algorithms can be applied to any classifier architecture, we have chosen a neural network based on the softmax non-linearity, with soft decisions given by

$$y_i = \sum_{j=1}^{M_i} y_{ij} , \quad \text{with} \quad y_{ij} = \frac{\exp(\mathbf{w}_{ij}^T \mathbf{x} + w_{ij0})}{\sum_{k=0}^{L-1} \sum_{l=1}^{M_k} \exp(\mathbf{w}_{kl}^T \mathbf{x} + w_{kl0})} ,$$

where L stands for the number of classes, M_j is the number of softmax outputs used to compute y_j, and the w_ij are weight vectors. We will refer to this network as a Generalized Softmax Perceptron (GSP); note that the GSP is similar to a two-layer MLP with a single layer of weights and with a coupled saturation function (softmax) instead of sigmoidal units. A simple network with M_j = 2 is used in the experiments.

[Figure 5: GSP (Generalized Softmax Perceptron) network. The feature vector x feeds L groups of softmax outputs y_{i,1}, ..., y_{i,M_i}; the class outputs y_1, ..., y_L are followed by a hard decision.]

Fig. 5 shows the neural network architecture used to classify the samples represented by the feature vector x. Learning consists of estimating the network parameters w by means of the stochastic gradient minimization of a suitable objective function. In the experiments, we have considered the cross-entropy objective function given by

$$CE(\mathbf{y}, \mathbf{d}) = -\sum_{i=1}^{L} d_i \log y_i .$$

The stochastic gradient learning rule for the GSP network is given by Eq. (18). The learning step decreases according to µ^(k) = µ^(0)/(1 + k/η), where k is the iteration number, µ^(0) the initial learning rate and η a decay factor.

The reason for illustrating this approach with a feedforward architecture is that, as mentioned in Section 5.2, it allows the partially optimized solution of the current iteration to be exploited in the next one. On the other hand, posterior probability estimation makes it possible to apply the adaptive strategy based on prior re-estimation proposed by Saerens et al. (2002) to the minimax deviation classifier, as long as a data set representative of the operating conditions is available. Finally, the fact that the intermediate outputs y_ij of the GSP can be interpreted as subclass probabilities may provide quite a natural way to cope with the unexplored problem of uncertainty in subclass distributions, as pointed out by Webb and Ting (2005). Nonetheless, both the architecture and the cost function are not the goal of this paper, but merely illustrative tools.
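A sketch of the GSP forward pass with M_j subclass outputs per class, together with the cross-entropy cost; the flat weight layout is an assumption of the sketch, not the paper's notation.

```python
import numpy as np

def gsp_forward(x, W, b, M):
    """Soft decisions y_i = sum_j y_ij of a GSP.

    W: (sum(M), n_features) subclass weight vectors; b: (sum(M),) biases;
    M: list with the number of subclass outputs M_j per class.
    """
    z = W @ x + b
    e = np.exp(z - z.max())
    y_sub = e / e.sum()                     # softmax over all subclass outputs
    splits = np.cumsum(M)[:-1]
    return np.array([s.sum() for s in np.split(y_sub, splits)])

def cross_entropy(y, d_onehot):
    return -np.sum(d_onehot * np.log(y))
```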
6.2 Artificial Data Set

To illustrate the minimax regret approach proposed in this paper, both under complete and partial uncertainty, an artificial data set with two classes (u_0 and u_1) has been created. Data examples are drawn from the normal distributions p(x | d = u_i) = N(m_i, σ_i²) with mean m_i and standard deviation σ_i. The means were set to m_0 = 0 and m_1 = 2, and the standard deviations to σ_0 = σ_1 = √2. A total of 4000 instances were generated with class prior probabilities P{d = u_0} = 0.93 and P{d = u_1} = 0.07. The cost-benefit matrix is

$$\begin{pmatrix} c_{00} & c_{01} \\ c_{10} & c_{11} \end{pmatrix} = \begin{pmatrix} 2 & 5 \\ 4 & 0 \end{pmatrix} .$$

The initial learning rate was set to µ^(0) = 0.3, the decay factor to η = 2000, and training was ended after 80 cycles. Classifier assessment was carried out by 10-fold cross-validation.

Two classifiers were trained, which we call the standard classifier and the minMaxDev classifier. The former is built by assuming that the estimated class prior information is precise and stationary; the latter is the approach proposed in this paper to cope with uncertainty in the priors. For the standard classifier, performance may deviate from the optimal risk by 3.39 when the priors change from training to test conditions; the minimax deviation classifier reduces this worst-case difference from the optimal classifier to 0.77.

Now suppose that some information about the priors is available (partial uncertainty). For instance, assume the lower bounds for the prior probabilities P_0 and P_1 are known and set to P_0^l = 0.55 and P_1^l = 0.05, respectively, so that the uncertainty region is Γ = {(P_0, P_1) | P_0 ∈ [0.55, 0.95], P_1 ∈ [0.05, 0.45]}. A minimax deviation classifier can be derived for Γ (we call it the Γ-minMaxDev classifier). The narrower Γ is, the closer the performance of the minimax deviation classifier is to the optimal. For this particular case, under partially imprecise priors, the standard classifier may differ from the optimal (in Γ) by 0.83, while the simple minMaxDev classifier designed under total prior uncertainty attains a maximum deviation of 0.53. The Γ-minMaxDev classifier, however, only differs from the optimal by 0.24. These results are reported in Table 1, where both theoretical and experimental values are shown.

Table 1: Comparison between the standard classifier (built under stationary prior assumptions), the minimax deviation classifier (minMaxDev) and the minimax deviation classifier under partial uncertainty (Γ-minMaxDev) for the artificial data set. Values are theoretical/experimental.

                                                          Standard    minMaxDev   Γ-minMaxDev
Maximum deviation from optimal (complete uncertainty)     3.41/3.39   0.72/0.77   –
Maximum deviation from optimal in Γ (partial uncertainty) 0.85/0.83   0.50/0.53   0.19/0.24

6.3 Real Databases

In this section we report experimental results obtained with several publicly available data sets: from the UCI repository (Blake and Merz, 1998), the German Credits, Australian Credits, Insurance Company, DNA splice-junction, Page-blocks, Dermatology and Pen-digits benchmarks. Another public data set used is Munich Credits, from the Department of Statistics at the University of Munich (data sets available at http://www.stat.uni-muenchen.de/service/datenarchiv/welcome_e.html). The data sets are summarized in Table 2, and the cost-benefit matrices are shown in Table 3. We have used the cost values that appear in Ikizler (2002) for the data sets in common; otherwise, for lack of an expert analyst, the cost values have been chosen by hand.

Table 2: Experimental data sets.

Database                    #Classes  Class distribution                               #Attributes  #Instances
German Credits (GCRE)       2         [0.70 0.30]                                      8            1000
Australian Credits (AUS)    2         [0.32 0.68]                                      14           690
Munich Credits (MCRE)       2         [0.30 0.70]                                      20           1000
Insurance Company (COIL)    2         [0.94 0.06]                                      85           9822
DNA Splice-junction (DNA)   3         [0.24 0.24 0.52]                                 180          3186
Page-blocks (PAG)           5         [0.90 0.06 0.01 0.01 0.02]                       10           5473
Dermatology (DER)           6         [0.31 0.16 0.20 0.13 0.14 0.06]                  34           366
Pen-digits (PEN)            10        [0.104 0.104 0.104 0.096 0.104 0.096 0.096 0.104 0.096 0.096]  16  10992

Table 3: Cost-benefit matrices for the experimental data sets (c_ij is the cost of deciding class i when the true class is j; negative values represent benefits).
German, Australian, Munich Credits (2×2): first row (−1, 0).
Insurance Company (2×2): rows (0, 1) and (0, −17).
Page-blocks (5×5): first row (−1, 2, 2, 2, 2); the remaining rows have 0 on the diagonal and 1 elsewhere.
DNA (3×3): rows (−1, 2, 2), (2, −1, 2) and (3, 3, 0).
Dermatology (6×6): diagonal (−4, −3, −8, −10, −6, −10), with off-diagonal costs between 1 and 5.
Pen-digits: c_ij = 0 if i = j and 1 otherwise.

Table 4: Classifier performance evaluated as maximum risk deviation from the optimal classifier for several real-world applications. The class-conditional risk deviations (R_i − c_ii) are reported for the minMaxDev classifier.

Database  Standard  Re-balanced  minMaxDev                                                        minMax
GCRE      0.70      0.80         (0.55 0.60)                                                      0.99
ACRE      1.00      1.00         (0.76 0.86)                                                      1.00
MCRE      0.91      0.77         (0.54 0.59)                                                      0.99
COIL      2.78      0.99         (0.87 0.92)                                                      16.32
DNA       0.34      0.53         (0.30 0.27 0.25)                                                 1.14
PAG       0.62      0.26         (0.13 0.13 0.20 0.16 0.16)                                       0.86
DER       1.03      1.28         (0.67 0.78 0.51 0.48 0.54 0.60)                                  7.62
PEN       0.061     0.059        (0.024 0.028 0.025 0.019 0.023 0.021 0.026 0.022 0.023 0.029)    0.029
Experimental results for these data sets are shown in the following sections. The robustness of different decision machines under complete uncertainty of the prior probabilities is analyzed in Section 6.3.1. If uncertainty is only partial, a similar study, together with a comparison with the complete uncertainty approach, is carried out in Section 6.3.2.

6.3.1 Classifier Robustness Under Complete Uncertainty

We now study how different neural-based classifiers cope with worst-case situations in the prior probabilities. The maximum deviation from the optimal classifier (see Table 4) is reported for the proposed minMaxDev strategy as well as for two alternative approaches: the one based on the assumption of stationary priors (standard) and the common alternative of deriving the classifier from an equally distributed data set (re-balanced). A comparison with the traditional minimax strategy is also provided. Together with the maximum deviation (regret), the deviations for the L class-conditional extreme cases (R_i − c_ii) are also reported for the minMaxDev classifier in Table 4. The results verify that this solution is fairly close to the optimal one, where the deviation does not depend on the priors and, thus, the class-conditional deviations take the same value.

Although the balanced class distribution used to train the re-balanced classifier could be obtained by undersampling and/or oversampling, it is simulated by altering the learning rate used in the training phase according to (19) as µ_i = µ (1/L)/P_i^(0), where 1/L represents the simulated probability, equal for all classes.

The results show that the assumption of stationary priors may lead to significant deviations from the optimal decision rule under "unexpected", but rather realistic, prior changes. This deviation may reach up to three times that of the robust minimax deviation strategy. Thus, for classification problems like Page-blocks, the maximum deviation from the optimal classifier is 0.62 for the standard classifier, while it reduces to 0.20 for the minMaxDev one. Likewise, for the Insurance Company (COIL) application, the maximum deviation of the standard classifier is 2.78, compared with 0.92 for the minMaxDev model. The remaining databases show the same behavior, as presented in Table 4.

On the other hand, the use of a classifier inferred from a re-balanced data set does not necessarily involve a decrease in the maximum deviation with respect to the standard classifier. In the same way, the traditional minimax classifier does not protect against prior changes in terms of maximum relative deviation from the minimum risk classifier. However, if our criterion is more conservative and the aim is to minimize the maximum possible risk (not the maximum deviation), the traditional minimax classifier represents the best option. This is shown in Table 5, where the maximum risk of the different classifiers is reported. Positive values in this table indicate a cost, while negative values represent a benefit.

Table 5: Classifier performance measured as maximum risk for several real-world applications.

Database  Standard  Re-balanced  minMaxDev  minMax
GCRE      0.70      0.15         0.60       0.00
ACRE      0.01      0.02         0.86       -0.00
MCRE      0.05      0.20         0.59       0.00
COIL      0.76      0.99         0.86       0.02
DNA       0.34      0.53         0.25       0.13
PAG       0.62      0.26         0.20       0.10
DER       -2.10     -1.68        -2.21      -2.38
PEN       0.061     0.059        0.029      0.029
For instance, for the Page-blocks application, the minimax classifier assures a maximum risk of 0.10, while the standard, re-balanced and minMaxDev classifiers reach values of 0.62, 0.26 and 0.20, respectively. It can be noticed that for the Pen-digits data set the minimax deviation and minimax approaches attain the same results; the reason is that, for this problem, the R_basis plane takes the same value (in this case, zero) over the whole probability space.

6.3.2 Classifier Robustness Under Partial Uncertainty

Unlike the previous section, we now consider that partial information about the class priors is available. The aim is to find a classifier that behaves well over a delimited and realistic range of priors, which helps reduce the maximum deviation from the optimal classifier. This situation can be treated as a constrained minimax regret strategy, where the constraints represent any extra information about the prior probability values. Experimental results for several situations of partial prior uncertainty are presented in this section. We consider that lower bounds for the prior probabilities are available (see Table 6). In order to obtain the Γ-minMaxDev classifier, the risks at the vertices of the uncertainty domain need to be calculated; from them, the basis risk R_basis^Γ over which deviations are measured is derived.

Table 6: Lower bounds for the prior probabilities defining the uncertainty region Γ for the experimental data sets.

Data Set  P0l   P1l   P2l   P3l   P4l   P5l   P6l   P7l   P8l   P9l
GCRE      0.40  0.25
ACRE      0.20  0.25
MCRE      0.20  0.25
COIL      0.15  0.03
DNA       0.10  0.10  0.25
PAG       0.22  0.02  0.00  0.01  0.02
DER       0.20  0.10  0.10  0.10  0.10  0.02
PEN       0.10  0.10  0.10  0.06  0.10  0.06  0.06  0.06  0.05  0.05

The maximum deviation from the optimal in Γ is reported for the Γ-minMaxDev classifier together with the standard and minMaxDev ones. For instance, the standard classifier for the Page-blocks data set deviates from the optimal classifier, in the defined uncertainty region, by up to 0.37, while when complete uncertainty is assumed the maximum deviation is 0.62. In the same way, reducing the uncertainty also reduces the maximum deviation of the minMaxDev classifier (trained without considering this partial knowledge): for Γ, this classifier assures a deviation bound of 0.15. However, taking this partial information into account to train a Γ-minMaxDev classifier reduces the deviation under worst-case conditions to 0.10. The same behavior can be seen for the other databases in Table 7.

Table 7: Classifier performance under partial knowledge of the prior probabilities, measured as maximum risk deviation in the uncertainty region. The class-conditional risk deviations (R_i^Γ − c_ii^Γ) are reported for the Γ-minMaxDev classifier.

Database  Standard  minMaxDev  Γ-minMaxDev
GCRE      0.24      0.19       (0.10 0.09)
ACRE      0.03      0.64       (0.03 0.03)
MCRE      0.22      0.38       (0.13 0.10)
COIL      2.33      0.77       (0.17 0.11)
DNA       0.14      0.08       (0.07 0.07 0.06)
PAG       0.37      0.15       (0.10 0.08 0.08 0.05 0.04)
DER       0.08      0.05       (0.03 0.03 0.04 0.02 0.05 0.05)
PEN       0.013     0.007      (0.003 0.001 0.003 0.000 0.001 0.001 0.000 0.003 0.001 0.001)

7. Conclusions

This work concerns the design of robust neural-based classifiers when the prior probabilities of the classes are partially or completely unknown, even by the end user.
This problem of uncertainty in the class priors is often ignored in supervised classification, even though it is a widespread situation in real world applications. As a result, the reliability of the induced classifier can be greatly affected, as shown by the experiments. To tackle this problem, we have proposed a novel minimax deviation strategy whose goal is to minimize the maximum deviation with respect to the optimal classifier. A neural network training algorithm based on learning rate scaling has been developed. The experimental results show that this minimax deviation (minMaxDev) classifier protects against prior changes, while other approaches, such as ignoring this uncertainty or using a balanced learning data set, may result in large differences in performance with respect to the minimum risk classifier. It has also been shown that the conventional minimax classifier reduces the maximum possible risk, following a conservative attitude, but at the expense of large worst-case differences from the optimal classifier.

Furthermore, a constrained minimax deviation approach (Γ-minMaxDev) has been derived for those situations where uncertainty is only partial. This may be seen as a general approach with two particular cases: (a) precise knowledge of the prior probabilities and (b) complete uncertainty about the priors. In (a) the region of uncertainty collapses to a point and we recover the Bayes rule of minimum risk; in (b) the pure minimax deviation strategy comes up. While the first may be criticized as quite unrealistic, the other may be seen as rather pessimistic. The experimental results for the proposed intermediate situation show that the Γ-minMaxDev classifier reduces the maximum deviation from the optimal classifier and performs well over a range of prior probabilities.

Acknowledgments

The authors thank the four referees and the associate editor for their helpful comments. This work was partially supported by project TEC2005-06766-C03-02 of the Spanish Ministry of Education and Science.

References

N. Abe, B. Zadrozny, and J. Langford. An iterative method for multi-class cost-sensitive learning. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 3-11, 2004.

N. M. Adams and D. J. Hand. Comparing classifiers when the misallocation costs are uncertain. Pattern Recognition, 32(7):1139-1147, March 1998.

R. Alaiz-Rodriguez, A. Guerrero-Curieses, and J. Cid-Sueiro. Minimax classifiers based on neural networks. Pattern Recognition, 38(1):29-39, January 2005.

R. Barandela, J. S. Sanchez, V. García, and E. Rangel. Strategies for learning in class imbalance problems. Pattern Recognition, 36(3):849-851, March 2003.

J. O. Berger. Statistical Decision Theory and Bayesian Analysis. Springer, second edition, 1985.

C. L. Blake and C. J. Merz. UCI repository of machine learning databases, 1998. URL http://www.ics.uci.edu/~mlearn/MLRepository.html.

L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Chapman & Hall, NY, 1984.

N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer. SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16:321-357, 2002.

J. Cid-Sueiro and A. R. Figueiras-Vidal. On the structure of strict sense Bayesian cost functions and its applications. IEEE Transactions on Neural Networks, 12(3):445-455, May 2001.
5 0.23637085 78 jmlr-2007-Statistical Consistency of Kernel Canonical Correlation Analysis
Author: Kenji Fukumizu, Francis R. Bach, Arthur Gretton
Abstract: While kernel canonical correlation analysis (CCA) has been applied in many contexts, the convergence of finite sample estimates of the associated functions to their population counterparts has not yet been established. This paper gives a mathematical proof of the statistical convergence of kernel CCA, providing a theoretical justification for the method. The proof uses covariance operators defined on reproducing kernel Hilbert spaces, and analyzes the convergence of their empirical estimates of finite rank to their population counterparts, which can have infinite rank. The result also gives a sufficient condition for convergence on the regularization coefficient involved in kernel CCA: this should decrease as n^{-1/3}, where n is the number of data. Keywords: canonical correlation analysis, kernel, consistency, regularization, Hilbert space
6 0.22527751 81 jmlr-2007-The Locally Weighted Bag of Words Framework for Document Representation
7 0.22429937 5 jmlr-2007-A Nonparametric Statistical Approach to Clustering via Mode Identification
8 0.22339359 32 jmlr-2007-Euclidean Embedding of Co-occurrence Data
9 0.2222119 62 jmlr-2007-On the Effectiveness of Laplacian Normalization for Graph Semi-supervised Learning
10 0.22082502 7 jmlr-2007-A Stochastic Algorithm for Feature Selection in Pattern Recognition
11 0.22062215 66 jmlr-2007-Penalized Model-Based Clustering with Application to Variable Selection
12 0.21950626 49 jmlr-2007-Learning to Classify Ordinal Data: The Data Replication Method
13 0.21925643 33 jmlr-2007-Fast Iterative Kernel Principal Component Analysis
14 0.217878 17 jmlr-2007-Building Blocks for Variational Bayesian Learning of Latent Variable Models
16 0.21731982 46 jmlr-2007-Learning Equivariant Functions with Matrix Valued Kernels
17 0.21726531 37 jmlr-2007-GiniSupport Vector Machine: Quadratic Entropy Based Robust Multi-Class Probability Regression
20 0.21554312 87 jmlr-2007-Undercomplete Blind Subspace Deconvolution