jmlr jmlr2007 jmlr2007-24 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Roland Nilsson, José M. Peña, Johan Björkegren, Jesper Tegnér
Abstract: We analyze two different feature selection problems: finding a minimal feature set optimal for classification (MINIMAL - OPTIMAL) vs. finding all features relevant to the target variable (ALL RELEVANT ). The latter problem is motivated by recent applications within bioinformatics, particularly gene expression analysis. For both problems, we identify classes of data distributions for which there exist consistent, polynomial-time algorithms. We also prove that ALL - RELEVANT is much harder than MINIMAL - OPTIMAL and propose two consistent, polynomial-time algorithms. We argue that the distribution classes considered are reasonable in many practical cases, so that our results simplify feature selection in a wide range of machine learning tasks. Keywords: learning theory, relevance, classification, Markov blanket, bioinformatics
Reference: text
sentIndex sentText sentNum sentScore
1 finding all features relevant to the target variable (ALL RELEVANT ). [sent-11, score-0.217]
2 In this paper, we take a different approach to solving MINIMAL - OPTIMAL: we restrict the problem to the class of strictly positive data distributions, and prove that within this class, the problem is in fact polynomial in the number of features. [sent-25, score-0.174]
3 We then demonstrate that due to measurement noise, most data distributions encountered in practical applications are strictly positive, so that our result is widely applicable. [sent-27, score-0.186]
4 We prove that this problem is much harder than MINIMAL - OPTIMAL; it is asymptotically intractable even for strictly positive distributions. [sent-34, score-0.238]
5 Definition 1 The class of strictly positive data distributions consists of the f (x, y) that satisfy (i) f (x) > 0 almost everywhere (in the Lebesgue measure) and (ii) P(p(y|X) = 1/2) = 0. [sent-55, score-0.186]
6 P, strictly positive; PC, strictly positive satisfying the composition property; PCWT, strictly positive satisfying composition and weak transitivity; PD, strictly positive and DAG-faithful. [sent-62, score-0.745]
7 Since the noise component ε is strictly positive over the domain of X, we immediately obtain f (x) > 0. [sent-65, score-0.143]
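A quick numerical illustration of this point (a minimal sketch; the mixture parameters below are invented, not taken from the paper): a discrete source observed through additive Gaussian noise has a Gaussian-mixture density, which is strictly positive everywhere.

    import numpy as np
    from scipy.stats import norm

    # Hypothetical discrete source taking values {0, 2}, observed as X = X0 + eps
    # with eps ~ N(0, 0.25); the resulting marginal density is a Gaussian mixture.
    def f(x, centers=(0.0, 2.0), sigma=0.5):
        return np.mean([norm.pdf(x, loc=c, scale=sigma) for c in centers], axis=0)

    grid = np.linspace(-10, 10, 2001)
    print(f(grid).min() > 0)  # True: measurement noise makes f(x) > 0 everywhere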
8 In general, the strictly positive restriction is considered reasonable whenever there is uncertainty about the data (Pearl, 1988). [sent-67, score-0.143]
9 In Section 4, we will consider the narrower classes of strictly positive distributions (P) that also satisfy the composition property (PC), and those in PC that additionally satisfy the weak transitivity property (PCWT). [sent-71, score-0.383]
10 Also, the strictly positive and DAG-faithful distributions (PD) are contained in PCWT (Theorem 24, Appendix B). [sent-75, score-0.186]
11 However, we hold that the PCWT class is more realistic than PD, since PCWT distributions will remain PCWT when a subset of the features are marginalized out (that is, are not observed), while PD distributions may not (Chickering and Meek, 2002). [sent-76, score-0.174]
12 A similar proof for the weak transitivity property can be found in Peña et al. [sent-84, score-0.137]
13 For strictly positive distributions, g ∗ (x) is also unique, except on zero-measure subsets of X (above, the arbitrary choice of g ∗ (x) = +1 at the decision boundary p(y|x) = 1/2 is such a zero-measure set). [sent-93, score-0.196]
14 3 Feature Relevance Measures. Much of the theory of feature selection is centered around various definitions of feature relevance. [sent-97, score-0.138]
15 Definition 4 (strong and weak relevance) A feature Xi is strongly relevant to Y iff Y ⊥̸ Xi | Ri . [sent-114, score-0.406]
16 A feature Xi is weakly relevant to Y iff it is not strongly relevant, but satisfies Y ⊥̸ Xi | S for some set S ⊂ Ri . [sent-115, score-0.509]
17 Informally, a strongly relevant feature carries information about Y that cannot be obtained from any other feature. [sent-116, score-0.281]
18 A weakly relevant feature also carries information about Y , but this information is "redundant": it can also be obtained from other features. [sent-117, score-0.338]
19 Using these definitions, relevance and irrelevance of a feature to a target variable Y are defined as follows. Definition 5 (relevance) A feature Xi is relevant to Y iff it is strongly relevant or weakly relevant to Y. [sent-118, score-0.892]
20 A feature Xi is irrelevant to Y iff it is not relevant to Y. [sent-119, score-0.254]
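Definitions 4 and 5 can be checked exhaustively on a small discrete example. The sketch below is invented for illustration (it deliberately contains an exact duplicate feature, so it is not strictly positive); it labels each feature by testing the conditional mutual information I(Y; Xi | S) over every conditioning set S ⊆ Ri:

    from collections import defaultdict
    import itertools, math

    # Toy joint over (X1, X2, X3, Y): X2 duplicates X1 exactly, Y is a noisy copy
    # of X1, and X3 is an independent coin flip (all probabilities made up).
    joint = defaultdict(float)
    for x1, x3, y in itertools.product((0, 1), repeat=3):
        joint[(x1, x1, x3, y)] += 0.25 * (0.9 if y == x1 else 0.1)

    def cond_mi(i, S):
        """Exact I(Y; X_i | X_S); y is the last component of each outcome."""
        p = defaultdict(float)
        for xs, pr in joint.items():
            p[(tuple(xs[j] for j in S), xs[i], xs[-1])] += pr
        ps, psx, psy = defaultdict(float), defaultdict(float), defaultdict(float)
        for (s, xi, y), pr in p.items():
            ps[s] += pr; psx[(s, xi)] += pr; psy[(s, y)] += pr
        return sum(pr * math.log(pr * ps[s] / (psx[(s, xi)] * psy[(s, y)]))
                   for (s, xi, y), pr in p.items() if pr > 0)

    for i in range(3):
        rest = [j for j in range(3) if j != i]
        dep_full = cond_mi(i, rest) > 1e-12                  # strong relevance
        dep_some = any(cond_mi(i, S) > 1e-12                 # weak relevance
                       for r in range(3) for S in itertools.combinations(rest, r))
        print(f"X{i+1}:", "strong" if dep_full else "weak" if dep_some else "irrelevant")
    # prints: X1 weak, X2 weak, X3 irrelevant -- exact copies are only weakly relevant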
21 Definition 6 A feature Xi is relevant to a classifier g iff P(g(Xi , Ri ) ≠ g(Xi′ , Ri )) > 0, where Xi , Xi′ are independent and identically distributed. [sent-121, score-0.254]
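Definition 6 itself is easy to probe by simulation: redraw Xi independently while holding Ri fixed, and estimate how often the decision flips. A sketch on synthetic Gaussian data with a hypothetical classifier that ignores its second input:

    import numpy as np

    rng = np.random.default_rng(0)
    g = lambda x: np.sign(x[:, 0] - 0.5)      # hypothetical classifier; ignores X2

    X = rng.normal(size=(100_000, 2))         # i.i.d. draws of (X1, X2)

    def relevance(i):
        Xp = X.copy()
        Xp[:, i] = rng.normal(size=len(X))    # independent redraw of X_i, R_i fixed
        return np.mean(g(X) != g(Xp))         # estimates P(g(Xi, Ri) != g(Xi', Ri))

    print(relevance(0), relevance(1))         # > 0 for X1, exactly 0 for ignored X2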
22 We say that an inducer is consistent if the induced classifier I(Dl ) converges in probability to g∗ as the sample size l tends to infinity, I(Dl ) →P g∗ . [sent-128, score-0.143]
23 Provided that the inducer used is consistent, we can address the feature selection problem asymptotically by studying the Bayes classifier. [sent-131, score-0.185]
24 Definition 7 The MINIMAL - OPTIMAL feature set S∗ is defined as the set of features relevant to the Bayes classifier g∗ (in the sense of Definition 6). [sent-133, score-0.27]
25 Since g∗ is unique for strictly positive distributions (Lemma 19), it follows directly from Definition 6 that S∗ is then also unique. [sent-135, score-0.186]
26 Theorem 8 For any strictly positive distribution f (x, y), the MINIMAL - OPTIMAL set S∗ contains only strongly relevant features. [sent-137, score-0.371]
27 (Figure 2, panel A residue: axis labels "X2 (weakly relevant)" and "Risk"; legend "all features" vs. "strongly relevant features".) [sent-138, score-0.158]
28 Here, X1 is strongly relevant and X2 weakly relevant. [sent-142, score-0.384]
29 B: The risk functional R(g) for a linear SVM trained on all relevant features (filled boxes) vs. [sent-144, score-0.258]
30 on strongly relevant features only (open boxes), for the 10-dimensional distribution (4). [sent-145, score-0.316]
31 Proof Since f is strictly positive, g∗ is unique by Lemma 19 (Appendix A). [sent-149, score-0.143]
32 Consider any feature Xi relevant to g∗ , so that P(g∗ (Xi , Ri ) ≠ g∗ (Xi′ , Ri )) > 0 by Definition 6. [sent-150, score-0.182]
33 From the form (2) of the Bayes classifier, we find that g∗ (xi , ri ) ≠ g∗ (xi′ , ri ) ⇒ p(y|xi , ri ) ≠ p(y|xi′ , ri ) (3) everywhere except possibly on the decision surface {x : p(y|x) = 1/2}. [sent-151, score-0.84]
34 Theorem 8 is important because it asserts that we may safely ignore weakly relevant features, conditioned on the assumption f (x) > 0. [sent-157, score-0.285]
35 Note that, although the shape of the distribution in Figure 2 suggests that both features are relevant to Y , it is easy to verify directly from (4) that X2 , X4 , . . . [sent-167, score-0.217]
36 , X10 are weakly relevant: considering for example the pair (X1 , X2 ), we have p(y|x1 , x2 ) = f (x1 , x2 , y)/ f (x1 , x2 ) = (1 + exp(−(9/8)((x1 + y)² − (x1 − y)²)))⁻¹ = (1 + exp(−9x1 y/2))⁻¹, which depends only on x1 , so X2 is weakly relevant. [sent-170, score-0.312]
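The algebraic step can be machine-checked; a one-line sympy verification (not from the paper):

    import sympy as sp

    x1, y = sp.symbols('x1 y')
    print(sp.expand((x1 + y)**2 - (x1 - y)**2))  # -> 4*x1*y
    # hence (9/8)*((x1+y)^2 - (x1-y)^2) = (9/2)*x1*y, and p(y|x1,x2) is free of x2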
37 For any consistent inducer I, Theorem 8 can be treated as an approximation for finite (but sufficiently large) samples. [sent-172, score-0.143]
38 If this approximation is fair, we expect that adding weakly relevant features will degrade the performance of I, since the Bayes risk must be constant while the design cost must increase (Jain and Waller, 1978). [sent-173, score-0.414]
39 Clearly, adding the weakly relevant features increases risk in this example. [sent-185, score-0.414]
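The effect is easy to reproduce on an invented analogue of distribution (4), whose exact parameters are not restated here: one strongly relevant feature plus nine weakly relevant noisy copies of it (a sketch, not the paper's experiment):

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(1)
    n = 200
    y = rng.choice([-1, 1], size=n)
    x1 = y + rng.normal(size=n)                                       # strongly relevant
    X = np.column_stack([x1, x1[:, None] + rng.normal(size=(n, 9))])  # weak noisy copies

    for cols, name in [(slice(0, 1), "strong only"), (slice(0, 10), "all relevant")]:
        risk = 1 - cross_val_score(LinearSVC(), X[:, cols], y, cv=5).mean()
        print(name, round(risk, 3))  # the extra weak features tend to raise the risk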
40 As the following example illustrates, the converse of Theorem 8 is false: there exist strictly positive distributions where even strongly relevant features are not relevant to the Bayes classifier. [sent-186, score-0.679]
41 Clearly, this situation occurs whenever a strongly relevant feature Xi affects the value of the posterior p(y|x) but not the Bayes classifier g∗ (because the change in p(y|x) is not large enough to alter the decision of g∗ (x)). [sent-191, score-0.281]
42 weak relevance was treated in the pioneering study by Kohavi and John (1997), who concluded from motivating examples that "(i) all strongly relevant features and (ii) some of the weakly relevant ones are needed by the Bayes classifier". [sent-196, score-0.726]
43 Part (ii) is true in general, but Theorem 8 shows that this is not the case for the class of strictly positive f , and therefore it is rarely true in practice. [sent-198, score-0.143]
44 However, Yu & Liu consider arbitrary distributions; for strictly positive distributions, it is easy to see that all weakly relevant features are also "redundant" in their terminology, so that their distinction is not useful in this case. [sent-200, score-0.559]
45 By Definition 6, the features relevant to the optimal soft classifier p(y|x) satisfy P(p(Y |Xi , Ri ) ≠ p(Y |Xi′ , Ri )) > 0, which is equivalent to P(p(Y |Xi , Ri ) ≠ p(Y |Ri )) > 0 by Lemma 20. [sent-205, score-0.248]
46 Thus, the features relevant to p(y|x) are exactly the strongly relevant ones, so the situation in Example 1 does not occur here. [sent-206, score-0.445]
47 When learning soft classifiers from data, a feature set commonly encountered is the Markov boundary of the class variable, defined as the minimal feature set required to predict the posterior. [sent-207, score-0.273]
48 … (Pearl, 1988, p. 97) shows that this set is well-defined for strictly positive distributions. [sent-210, score-0.143]
49 Theorem 9 (Markov boundary) For any strictly positive distribution f (x, y), there exists a unique minimal set M ⊆ X satisfying Y ⊥ X \ M|M. [sent-211, score-0.226]
50 Tsamardinos & Aliferis recently proved that for the PD distribution class (see Figure 1), the Markov boundary coincides with the set of strongly relevant features (Tsamardinos and Aliferis, 2003). [sent-213, score-0.369]
51 Theorem 10 For any strictly positive distribution f (x, y), a feature Xi is strongly relevant if and only if it is in the Markov boundary M = M(Y |X) of Y . [sent-217, score-0.477]
52 The feature set relations established at this point for strictly positive distributions are summarized in Figure 3. [sent-226, score-0.239]
53 3 Consistent Polynomial Algorithms. By limiting the class of distributions, we have simplified the problem to the extent that weakly relevant features can be safely ignored. [sent-235, score-0.373]
54 Next, we propose a polynomial-time FS algorithm and show that it is consistent for any strictly positive f . [sent-242, score-0.219]
55 Theorem 11 Take any strictly positive distribution f (x, y) and let ĉS (Dl ) be a real-valued criterion function such that, for every feature subset S, ĉS (Dl ) →P c(S) (5), where c(S) depends only on the distribution f (x, y) and satisfies c(S) < c(S′ ) ⇐⇒ R(g∗S ) < R(g∗S′ ) (6). [sent-244, score-0.196]
56 Proof Since f is strictly positive, S∗ is unique by Lemma 19. [sent-246, score-0.143]
57 Similarly, other consistent inducers and consistent risk estimators could be used, for example support vector machines (Steinwart, 2002) and the cross-validation error estimate (Devroye et al., 1996). [sent-283, score-0.193]
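Theorem 11 justifies a brute-force wrapper for small n: pair a consistent inducer (here a k-nearest-neighbor classifier, as a stand-in) with a consistent risk estimate (cross-validation error) and minimize over all subsets. A sketch on made-up data; the search is exponential in n, so it is only viable for a handful of features:

    import itertools
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 4))
    y = (X[:, 0] + 0.5 * rng.normal(size=300) > 0).astype(int)  # only X1 matters

    def cv_error(S):
        return 1 - cross_val_score(KNeighborsClassifier(), X[:, S], y, cv=5).mean()

    subsets = [list(S) for r in range(1, 5)
               for S in itertools.combinations(range(4), r)]
    print(min(subsets, key=cv_error))  # expected: [0], i.e., S* in this toy problem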
58 In contrast, forward-search algorithms are not consistent even for strictly positive f . [sent-290, score-0.219]
59 Starting with a feature set S, forward-search would move to the feature set S′ = S ∪ {Xi } (that is, add feature Xi ) iff ĉS′ (Dl ) < ĉS (Dl ). [sent-291, score-0.231]
60 1116) is an example of a strictly positive distribution with this property. [sent-295, score-0.143]
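The inconsistency of forward search is easy to reproduce on an XOR-style distribution (a sketch; the data generator below is invented): no single feature improves on chance, so a greedy search never takes the first step, while the exhaustive wrapper above finds the pair.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(3)
    X = rng.normal(size=(400, 3))                     # X3 is pure noise
    y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)   # XOR of the signs of X1, X2

    acc = lambda S: cross_val_score(KNeighborsClassifier(), X[:, S], y, cv=5).mean()
    print([round(acc([i]), 2) for i in range(3)])     # singletons: all near 0.5
    print(round(acc([0, 1]), 2))                      # the pair: highly predictive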
61 As sample size increases, we find that the fraction of strongly relevant features selected approaches 1, confirming that Φ(Dl ) →P S∗ . [sent-301, score-0.316]
62 Nevertheless, … [sent-304, score-0.158]
63 (Figure 4: A feature selection example on a 10-dimensional density with 5 strongly and 5 weakly relevant features (Equation 4). Residue: legend "strong features" / "selected features"; x-axis "# samples" with ticks 20-200.) [sent-308, score-0.557]
64 , 1998) to demonstrate empirically that weakly relevant features do not contribute to classifier accuracy. [sent-323, score-0.373]
65 and Rendell, 1992) and the FCBF algorithm by Yu and Liu (2004), both of which are based on the conjecture that weakly relevant features may be needed. [sent-329, score-0.373]
66 Typically, feature selection is performed to optimize classification performance (that is, one attempts to solve MINIMAL - OPTIMAL), and the features chosen are then examined for biological significance (Golub et al. [sent-351, score-0.21]
67 The features in S ∗ are typically those with good signal-to-noise ratio (that is, those very predictive of the class), but these need not be more biologically significant than other features dependent on Y . [sent-355, score-0.176]
68 Therefore, it is desirable to identify all genes relevant to the target variable, rather than the set S∗ , which may be more determined by technical factors than by biological significance. [sent-358, score-0.208]
69 Definition 12 For a given data distribution, the ALL - RELEVANT feature set S A is the set of features relevant to Y in the sense of Definition 5. [sent-360, score-0.27]
70 We will demonstrate that because SA includes weakly relevant features (Figure 3), the ALL - RELEVANT problem is much harder than MINIMAL - OPTIMAL . [sent-362, score-0.404]
71 In fact, the problem of determining whether a single feature Xi is weakly relevant requires exhaustive search over all 2ⁿ subsets of X, even if we restrict ourselves to strictly positive distributions. [sent-363, score-0.481]
72 Theorem 13 For a given feature Xi and for every S ⊆ Ri , there exists a strictly positive f (x, y) satisfying Y ⊥̸ Xi | S ∧ ∀S′ ≠ S : Y ⊥ Xi | S′ . [sent-364, score-0.196]
73 Next, let Xi+1 = Xi + ε for k < i < n, where ε is some strictly positive noise distribution. [sent-373, score-0.143]
74 Taken together, this is equivalent to (7), and f is strictly positive. [sent-376, score-0.143]
75 Therefore, no search method other than exhaustively examining all sets S can possibly determine whether Xi is weakly relevant. [sent-378, score-0.156]
76 This fact is illustrative in comparison with Theorem 8; MINIMAL - OPTIMAL is tractable for strictly positive distributions precisely because S ∗ does not include weakly relevant features. [sent-382, score-0.499]
77 Since the restriction to strictly positive distributions is not sufficient to render ALL - RELEVANT tractable, we must look for additional constraints. [sent-383, score-0.186]
78 Algorithm 1: Recursive independence test (RIT). Input: target node Y , features X. Let S = ∅; foreach Xi ∈ X do: if Xi ⊥̸ Y | ∅ then S = S ∪ {Xi }; end; foreach Xi ∈ S do: S = S ∪ RIT(Xi , X \ S); end; return S. [sent-385, score-0.213]
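A direct transcription of Algorithm 1 into Python (a sketch: the marginal independence test is a placeholder based on a correlation p-value, which only detects linear dependence; any consistent test can be substituted):

    from scipy.stats import pearsonr

    def indep(x, y, alpha=0.01):
        """Placeholder marginal test of X _|_ Y; returns True if independence
        is not rejected. Swap in any consistent independence test."""
        return pearsonr(x, y)[1] > alpha

    def rit(y, features):
        """Recursive independence test: features is a list of 1-D sample arrays."""
        S = [x for x in features if not indep(x, y)]
        for x in list(S):
            remaining = [f for f in features if all(f is not g for g in S)]
            S = S + rit(x, remaining)
        return S

    # usage sketch: rit(Y, [X[:, i] for i in range(X.shape[1])])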
79 Next, we prove that the RMB algorithm is also consistent for any PCWT distribution, assuming that we are using a consistent estimator of Markov boundaries (see below). [sent-440, score-0.152]
80 Definition 17 An independence map (I-map) over a set of features X = {X1 , . [sent-442, score-0.149]
81 Note that we only consider the minimal I-map over the features X (not over X ∪ {Y }), since in this case, the minimal I-map is unique for any strictly positive distribution (Pearl, 1988). [sent-447, score-0.397]
82 Theorem 18 For any PCWT distribution such that a given estimator M(Y |S) of the Markov boundary of Y with respect to a feature set S is consistent for every S ⊆ X, the RMB algorithm is consistent. [sent-448, score-0.182]
83 Proof For every S ⊆ X, the marginal distribution over S, Y is strictly positive, and therefore every Markov boundary M(Y |S) is unique by Corollary 26 (Appendix B). [sent-449, score-0.196]
84 Let G be the minimal I-map over the features X, and let M1 = M(Y |X). [sent-450, score-0.171]
85 This estimator is consistent in the PC class, so it follows that RMB is consistent in PCWT in this case (see Figure 1). [sent-464, score-0.152]
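For the Markov boundary estimator that Theorem 18 assumes, a standard grow/shrink procedure in the spirit of IAMB (Tsamardinos & Aliferis) can serve. The sketch below is a generic version over feature indices, with the conditional-independence test left as a plug-in; the test and its threshold are assumptions here, not something this paper prescribes:

    def markov_boundary(y_idx, feat_idx, ci_test):
        """Grow/shrink estimate of M(Y|S) over feature indices.
        ci_test(i, j, Z) -> True if X_i is judged independent of X_j given X_Z."""
        M, changed = [], True
        while changed:                                   # grow phase
            changed = False
            for i in feat_idx:
                if i not in M and not ci_test(i, y_idx, M):
                    M.append(i); changed = True
        for i in list(M):                                # shrink false positives
            if ci_test(i, y_idx, [j for j in M if j != i]):
                M.remove(i)
        return M

RMB can then be read as invoking such an estimator recursively on the members of each boundary, analogously to RIT above; the exact recursion is given in the paper's RMB pseudocode, not reproduced here.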
86 Clearly, both g∗ (x) and p(y|x) are constant with respect to an xi iff (µT Σ−1 )i = 0. [sent-471, score-0.18]
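In the Gaussian case this criterion is directly computable; a small sketch with invented parameters:

    import numpy as np

    # Hypothetical two-class problem: x | y ~ N(y*mu, Sigma), y in {-1, +1}.
    mu = np.array([1.0, 0.0, 2.0])
    Sigma = np.array([[1.0, 0.5, 0.0],
                      [0.5, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])
    w = np.linalg.solve(Sigma, mu)   # the vector mu^T Sigma^{-1}
    print(w)  # feature i is relevant iff w[i] != 0; note w[1] != 0 even though
              # mu[1] = 0: correlation with X1 makes X2 strongly relevant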
87 On the other hand, several methods have been proposed for solving MINIMAL - OPTIMAL that in fact attempt to find all relevant features, since they do not assume f > 0 and therefore cannot rule out the weakly relevant ones. [sent-485, score-0.414]
88 Conclusion. In this paper, we have explored an alternative approach to the feature selection (FS) problem: instead of designing suboptimal methods for the intractable full problem, we propose consistent and efficient (polynomial-time) methods for a restricted data distribution class. [sent-490, score-0.192]
89 We find that a very mild restriction to strictly positive distributions is sufficient for the MINIMAL - OPTIMAL problem to be tractable (Figure 1). [sent-491, score-0.214]
90 We have also identified a different feature selection problem, that of discovering all relevant features (ALL - RELEVANT). [sent-493, score-0.302]
91 With the advent of major new applications in the bioinformatics field, where identifying features per se is often a more important goal than building accurate predictors, we anticipate that ALL - RELEVANT will become a very important research problem in the future. [sent-495, score-0.153]
92 We have herein provided a first analysis, proved that the problem is intractable even for strictly positive distributions, and proposed two consistent, polynomial-time algorithms for more restricted classes (Figure 1). [sent-496, score-0.174]
93 Lemmas. For completeness, we here give a proof of the uniqueness of the Bayes classifier for strictly positive distributions. [sent-504, score-0.143]
94 Lemma 19 For any strictly positive distribution f (x, y), the Bayes classifier g∗ is unique in the sense that, for every classifier g, it holds that R(g) = R(g∗ ) iff the Lebesgue measure of {x : g(x) ≠ g∗ (x)} is zero. [sent-505, score-0.215]
95 Lemma 22 For any PCWT distribution f (x, y), a feature Xk is relevant iff there exists a path Z1m = {Z1 , . [sent-539, score-0.286]
96 Any strictly positive distribution f (x, y) satisfies the intersection property Y ⊥ R|(S ∪ T ) ∧ Y ⊥ T |(S ∪ R) ⇒ Y ⊥ (R ∪ T )|S. [sent-570, score-0.173]
97 Corollary 26 For any strictly positive distribution f (x, y), the Markov boundary M(Y |X) is unique. [sent-580, score-0.196]
98 Selection of relevant features and examples in machine learning. [sent-596, score-0.217]
99 Identifying the relevant nodes before learning the structure. [sent-719, score-0.158]
100 Efficient feature selection via analysis of relevance and redundancy. [sent-744, score-0.157]
wordName wordTfidf (topN-words)
[('dl', 0.394), ('zm', 0.264), ('pcwt', 0.236), ('rmb', 0.236), ('zi', 0.232), ('ri', 0.21), ('egn', 0.167), ('ilsson', 0.167), ('orkegren', 0.167), ('weakly', 0.156), ('strictly', 0.143), ('olynomial', 0.129), ('relevant', 0.129), ('fs', 0.118), ('xk', 0.116), ('onsistent', 0.116), ('pd', 0.115), ('rit', 0.111), ('xi', 0.108), ('markov', 0.106), ('ime', 0.106), ('strongly', 0.099), ('eature', 0.098), ('features', 0.088), ('election', 0.087), ('transitivity', 0.084), ('minimal', 0.083), ('zii', 0.083), ('na', 0.078), ('consistent', 0.076), ('relevance', 0.072), ('iff', 0.072), ('zk', 0.071), ('aliferis', 0.071), ('elief', 0.069), ('ds', 0.067), ('inducer', 0.067), ('pearl', 0.063), ('tsamardinos', 0.063), ('independence', 0.061), ('composition', 0.06), ('devroye', 0.06), ('sa', 0.059), ('jesper', 0.056), ('rkegren', 0.056), ('tegn', 0.056), ('er', 0.053), ('boundary', 0.053), ('weak', 0.053), ('johan', 0.053), ('feature', 0.053), ('blanket', 0.05), ('converse', 0.048), ('theorem', 0.048), ('pc', 0.047), ('fe', 0.047), ('ping', 0.047), ('gene', 0.046), ('guyon', 0.044), ('distributions', 0.043), ('bayes', 0.043), ('xor', 0.042), ('genes', 0.042), ('chickering', 0.042), ('pe', 0.042), ('ifm', 0.042), ('nilsson', 0.042), ('oping', 0.042), ('practise', 0.042), ('lemma', 0.042), ('risk', 0.041), ('link', 0.039), ('bj', 0.038), ('contraction', 0.038), ('biological', 0.037), ('liu', 0.036), ('fcbf', 0.035), ('roland', 0.035), ('bioinformatics', 0.035), ('cancer', 0.035), ('recursive', 0.034), ('asymptotically', 0.033), ('selection', 0.032), ('path', 0.032), ('tu', 0.032), ('foreach', 0.032), ('sweden', 0.032), ('jos', 0.032), ('chemistry', 0.032), ('polynomial', 0.031), ('harder', 0.031), ('intractable', 0.031), ('false', 0.031), ('soft', 0.031), ('intersection', 0.03), ('se', 0.03), ('classi', 0.029), ('nodes', 0.029), ('golub', 0.028), ('tractable', 0.028)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000001 24 jmlr-2007-Consistent Feature Selection for Pattern Recognition in Polynomial Time
Author: Roland Nilsson, José M. Peña, Johan Björkegren, Jesper Tegnér
Abstract: We analyze two different feature selection problems: finding a minimal feature set optimal for classification (MINIMAL - OPTIMAL) vs. finding all features relevant to the target variable (ALL RELEVANT ). The latter problem is motivated by recent applications within bioinformatics, particularly gene expression analysis. For both problems, we identify classes of data distributions for which there exist consistent, polynomial-time algorithms. We also prove that ALL - RELEVANT is much harder than MINIMAL - OPTIMAL and propose two consistent, polynomial-time algorithms. We argue that the distribution classes considered are reasonable in many practical cases, so that our results simplify feature selection in a wide range of machine learning tasks. Keywords: learning theory, relevance, classification, Markov blanket, bioinformatics
2 0.091908447 70 jmlr-2007-Ranking the Best Instances
Author: Stéphan Clémençon, Nicolas Vayatis
Abstract: We formulate a local form of the bipartite ranking problem where the goal is to focus on the best instances. We propose a methodology based on the construction of real-valued scoring functions. We study empirical risk minimization of dedicated statistics which involve empirical quantiles of the scores. We first state the problem of finding the best instances which can be cast as a classification problem with mass constraint. Next, we develop special performance measures for the local ranking problem which extend the Area Under an ROC Curve (AUC) criterion and describe the optimal elements of these new criteria. We also highlight the fact that the goal of ranking the best instances cannot be achieved in a stage-wise manner where first, the best instances would be tentatively identified and then a standard AUC criterion could be applied. Eventually, we state preliminary statistical results for the local ranking problem. Keywords: ranking, ROC curve and AUC, empirical risk minimization, fast rates
3 0.089851916 89 jmlr-2007-VC Theory of Large Margin Multi-Category Classifiers (Special Topic on Model Selection)
Author: Yann Guermeur
Abstract: In the context of discriminant analysis, Vapnik's statistical learning theory has mainly been developed in three directions: the computation of dichotomies with binary-valued functions, the computation of dichotomies with real-valued functions, and the computation of polytomies with functions taking their values in finite sets, typically the set of categories itself. The case of classes of vector-valued functions used to compute polytomies has seldom been considered independently, which is unsatisfactory, for three main reasons. First, this case encompasses the other ones. Second, it cannot be treated appropriately through a naïve extension of the results devoted to the computation of dichotomies. Third, most of the classification problems met in practice involve multiple categories. In this paper, a VC theory of large margin multi-category classifiers is introduced. Central in this theory are generalized VC dimensions called the γ-Ψ-dimensions. First, a uniform convergence bound on the risk of the classifiers of interest is derived. The capacity measure involved in this bound is a covering number. This covering number can be upper bounded in terms of the γ-Ψ-dimensions thanks to generalizations of Sauer's lemma, as is illustrated in the specific case of the scale-sensitive Natarajan dimension. A bound on this latter dimension is then computed for the class of functions on which multi-class SVMs are based. This makes it possible to apply the structural risk minimization inductive principle to those machines. Keywords: multi-class discriminant analysis, large margin classifiers, uniform strong laws of large numbers, generalized VC dimensions, multi-class SVMs, structural risk minimization inductive principle, model selection
4 0.081535555 7 jmlr-2007-A Stochastic Algorithm for Feature Selection in Pattern Recognition
Author: Sébastien Gadat, Laurent Younes
Abstract: We introduce a new model addressing feature selection from a large dictionary of variables that can be computed from a signal or an image. Features are extracted according to an efficiency criterion, on the basis of specified classification or recognition tasks. This is done by estimating a probability distribution P on the complete dictionary, which distributes its mass over the more efficient, or informative, components. We implement a stochastic gradient descent algorithm, using the probability as a state variable and optimizing a multi-task goodness of fit criterion for classifiers based on variable randomly chosen according to P. We then generate classifiers from the optimal distribution of weights learned on the training set. The method is first tested on several pattern recognition problems including face detection, handwritten digit recognition, spam classification and micro-array analysis. We then compare our approach with other step-wise algorithms like random forests or recursive feature elimination. Keywords: stochastic learning algorithms, Robbins-Monro application, pattern recognition, classification algorithm, feature selection
Author: Matthias Hein, Jean-Yves Audibert, Ulrike von Luxburg
Abstract: Given a sample from a probability measure with support on a submanifold in Euclidean space one can construct a neighborhood graph which can be seen as an approximation of the submanifold. The graph Laplacian of such a graph is used in several machine learning methods like semi-supervised learning, dimensionality reduction and clustering. In this paper we determine the pointwise limit of three different graph Laplacians used in the literature as the sample size increases and the neighborhood size approaches zero. We show that for a uniform measure on the submanifold all graph Laplacians have the same limit up to constants. However in the case of a non-uniform measure on the submanifold only the so called random walk graph Laplacian converges to the weighted Laplace-Beltrami operator. Keywords: graphs, graph Laplacians, semi-supervised learning, spectral clustering, dimensionality reduction
6 0.059208237 31 jmlr-2007-Estimating High-Dimensional Directed Acyclic Graphs with the PC-Algorithm
7 0.049667366 9 jmlr-2007-AdaBoost is Consistent
8 0.048724685 20 jmlr-2007-Combining PAC-Bayesian and Generic Chaining Bounds
9 0.048535097 8 jmlr-2007-A Unified Continuous Optimization Framework for Center-Based Clustering Methods
10 0.047170181 66 jmlr-2007-Penalized Model-Based Clustering with Application to Variable Selection
11 0.046154745 45 jmlr-2007-Learnability of Gaussians with Flexible Variances
12 0.045383979 61 jmlr-2007-On the Consistency of Multiclass Classification Methods (Special Topic on the Conference on Learning Theory 2005)
13 0.044045281 5 jmlr-2007-A Nonparametric Statistical Approach to Clustering via Mode Identification
14 0.043459285 35 jmlr-2007-General Polynomial Time Decomposition Algorithms (Special Topic on the Conference on Learning Theory 2005)
15 0.04270748 52 jmlr-2007-Margin Trees for High-dimensional Classification
16 0.042099237 49 jmlr-2007-Learning to Classify Ordinal Data: The Data Replication Method
17 0.040895347 40 jmlr-2007-Harnessing the Expertise of 70,000 Human Editors: Knowledge-Based Feature Generation for Text Categorization
19 0.039328866 10 jmlr-2007-An Interior-Point Method for Large-Scalel1-Regularized Logistic Regression
20 0.038665365 19 jmlr-2007-Classification in Networked Data: A Toolkit and a Univariate Case Study
topicId topicWeight
[(0, 0.243), (1, -0.008), (2, 0.022), (3, 0.036), (4, 0.091), (5, 0.027), (6, -0.013), (7, 0.016), (8, 0.192), (9, 0.032), (10, -0.029), (11, 0.094), (12, 0.119), (13, 0.005), (14, -0.07), (15, -0.019), (16, -0.163), (17, 0.058), (18, 0.025), (19, -0.131), (20, 0.191), (21, -0.021), (22, -0.014), (23, -0.007), (24, 0.106), (25, 0.363), (26, -0.047), (27, 0.201), (28, -0.03), (29, 0.09), (30, 0.038), (31, -0.059), (32, 0.018), (33, 0.091), (34, -0.037), (35, 0.174), (36, 0.134), (37, 0.038), (38, -0.121), (39, -0.014), (40, -0.067), (41, 0.131), (42, -0.056), (43, 0.072), (44, 0.041), (45, 0.012), (46, 0.144), (47, -0.012), (48, 0.152), (49, -0.146)]
simIndex simValue paperId paperTitle
same-paper 1 0.95621604 24 jmlr-2007-Consistent Feature Selection for Pattern Recognition in Polynomial Time
Author: Roland Nilsson, José M. Peña, Johan Björkegren, Jesper Tegnér
Abstract: We analyze two different feature selection problems: finding a minimal feature set optimal for classification (MINIMAL - OPTIMAL) vs. finding all features relevant to the target variable (ALL RELEVANT ). The latter problem is motivated by recent applications within bioinformatics, particularly gene expression analysis. For both problems, we identify classes of data distributions for which there exist consistent, polynomial-time algorithms. We also prove that ALL - RELEVANT is much harder than MINIMAL - OPTIMAL and propose two consistent, polynomial-time algorithms. We argue that the distribution classes considered are reasonable in many practical cases, so that our results simplify feature selection in a wide range of machine learning tasks. Keywords: learning theory, relevance, classification, Markov blanket, bioinformatics
2 0.50595677 89 jmlr-2007-VC Theory of Large Margin Multi-Category Classifiers (Special Topic on Model Selection)
Author: Yann Guermeur
Abstract: In the context of discriminant analysis, Vapnik's statistical learning theory has mainly been developed in three directions: the computation of dichotomies with binary-valued functions, the computation of dichotomies with real-valued functions, and the computation of polytomies with functions taking their values in finite sets, typically the set of categories itself. The case of classes of vector-valued functions used to compute polytomies has seldom been considered independently, which is unsatisfactory, for three main reasons. First, this case encompasses the other ones. Second, it cannot be treated appropriately through a naïve extension of the results devoted to the computation of dichotomies. Third, most of the classification problems met in practice involve multiple categories. In this paper, a VC theory of large margin multi-category classifiers is introduced. Central in this theory are generalized VC dimensions called the γ-Ψ-dimensions. First, a uniform convergence bound on the risk of the classifiers of interest is derived. The capacity measure involved in this bound is a covering number. This covering number can be upper bounded in terms of the γ-Ψ-dimensions thanks to generalizations of Sauer's lemma, as is illustrated in the specific case of the scale-sensitive Natarajan dimension. A bound on this latter dimension is then computed for the class of functions on which multi-class SVMs are based. This makes it possible to apply the structural risk minimization inductive principle to those machines. Keywords: multi-class discriminant analysis, large margin classifiers, uniform strong laws of large numbers, generalized VC dimensions, multi-class SVMs, structural risk minimization inductive principle, model selection
3 0.48343509 70 jmlr-2007-Ranking the Best Instances
Author: Stéphan Clémençon, Nicolas Vayatis
Abstract: We formulate a local form of the bipartite ranking problem where the goal is to focus on the best instances. We propose a methodology based on the construction of real-valued scoring functions. We study empirical risk minimization of dedicated statistics which involve empirical quantiles of the scores. We first state the problem of finding the best instances which can be cast as a classification problem with mass constraint. Next, we develop special performance measures for the local ranking problem which extend the Area Under an ROC Curve (AUC) criterion and describe the optimal elements of these new criteria. We also highlight the fact that the goal of ranking the best instances cannot be achieved in a stage-wise manner where first, the best instances would be tentatively identified and then a standard AUC criterion could be applied. Eventually, we state preliminary statistical results for the local ranking problem. Keywords: ranking, ROC curve and AUC, empirical risk minimization, fast rates
4 0.32774305 7 jmlr-2007-A Stochastic Algorithm for Feature Selection in Pattern Recognition
Author: Sébastien Gadat, Laurent Younes
Abstract: We introduce a new model addressing feature selection from a large dictionary of variables that can be computed from a signal or an image. Features are extracted according to an efficiency criterion, on the basis of specified classification or recognition tasks. This is done by estimating a probability distribution P on the complete dictionary, which distributes its mass over the more efficient, or informative, components. We implement a stochastic gradient descent algorithm, using the probability as a state variable and optimizing a multi-task goodness of fit criterion for classifiers based on variable randomly chosen according to P. We then generate classifiers from the optimal distribution of weights learned on the training set. The method is first tested on several pattern recognition problems including face detection, handwritten digit recognition, spam classification and micro-array analysis. We then compare our approach with other step-wise algorithms like random forests or recursive feature elimination. Keywords: stochastic learning algorithms, Robbins-Monro application, pattern recognition, classification algorithm, feature selection
5 0.32694513 31 jmlr-2007-Estimating High-Dimensional Directed Acyclic Graphs with the PC-Algorithm
Author: Markus Kalisch, Peter Bühlmann
Abstract: We consider the PC-algorithm (Spirtes et al., 2000) for estimating the skeleton and equivalence class of a very high-dimensional directed acyclic graph (DAG) with corresponding Gaussian distribution. The PC-algorithm is computationally feasible and often very fast for sparse problems with many nodes (variables), and it has the attractive property to automatically achieve high computational efficiency as a function of sparseness of the true underlying DAG. We prove uniform consistency of the algorithm for very high-dimensional, sparse DAGs where the number of nodes is allowed to quickly grow with sample size n, as fast as O(na ) for any 0 < a < ∞. The sparseness assumption is rather minimal requiring only that the neighborhoods in the DAG are of lower order than sample size n. We also demonstrate the PC-algorithm for simulated data. Keywords: asymptotic consistency, DAG, graphical model, PC-algorithm, skeleton
7 0.28909293 63 jmlr-2007-On the Representer Theorem and Equivalent Degrees of Freedom of SVR
9 0.25855115 45 jmlr-2007-Learnability of Gaussians with Flexible Variances
10 0.25138906 76 jmlr-2007-Spherical-Homoscedastic Distributions: The Equivalency of Spherical and Normal Distributions in Classification
12 0.22270596 40 jmlr-2007-Harnessing the Expertise of 70,000 Human Editors: Knowledge-Based Feature Generation for Text Categorization
13 0.21819973 9 jmlr-2007-AdaBoost is Consistent
14 0.21176274 22 jmlr-2007-Compression-Based Averaging of Selective Naive Bayes Classifiers (Special Topic on Model Selection)
15 0.21072827 52 jmlr-2007-Margin Trees for High-dimensional Classification
16 0.20701851 50 jmlr-2007-Local Discriminant Wavelet Packet Coordinates for Face Recognition
17 0.20479995 10 jmlr-2007-An Interior-Point Method for Large-Scale l1-Regularized Logistic Regression
18 0.20310214 49 jmlr-2007-Learning to Classify Ordinal Data: The Data Replication Method
19 0.20300245 43 jmlr-2007-Integrating Naïve Bayes and FOIL
20 0.19857059 8 jmlr-2007-A Unified Continuous Optimization Framework for Center-Based Clustering Methods
topicId topicWeight
[(4, 0.011), (8, 0.025), (10, 0.019), (12, 0.034), (14, 0.077), (15, 0.041), (22, 0.012), (28, 0.052), (34, 0.023), (40, 0.053), (45, 0.029), (48, 0.03), (51, 0.257), (60, 0.088), (77, 0.011), (80, 0.027), (85, 0.036), (98, 0.11)]
simIndex simValue paperId paperTitle
1 0.7960279 55 jmlr-2007-Minimax Regret Classifier for Imprecise Class Distributions
Author: Rocío Alaiz-Rodríguez, Alicia Guerrero-Curieses, Jesús Cid-Sueiro
Abstract: The design of a minimum risk classifier based on data usually stems from the stationarity assumption that the conditions during training and test are the same: the misclassification costs assumed during training must be in agreement with real costs, and the same statistical process must have generated both training and test data. Unfortunately, in real world applications, these assumptions may not hold. This paper deals with the problem of training a classifier when prior probabilities cannot be reliably induced from training data. Some strategies based on optimizing the worst possible case (conventional minimax) have been proposed previously in the literature, but they may achieve a robust classification at the expense of a severe performance degradation. In this paper we propose a minimax regret (minimax deviation) approach, that seeks to minimize the maximum deviation from the performance of the optimal risk classifier. A neural-based minimax regret classifier for general multi-class decision problems is presented. Experimental results show its robustness and the advantages in relation to other approaches. Keywords: classification, imprecise class distribution, minimax regret, minimax deviation, neural networks 1. Introduction - Problem Motivation In the general framework of learning from examples and specifically when dealing with uncertainty, the robustness of the decision machine becomes a key issue. Most machine learning algorithms are based on the assumption that the classifier will use data drawn from the same distribution as the training data set. Unfortunately, for most practical applications (such as remote sensing, direct marketing, fraud detection, information filtering, medical diagnosis or intrusion detection) the target class distribution may not be accurately known during learning: for example, because the cost of labelling data may be class-dependent or the prior probabilities are non-stationary. Therefore, the data used to design the classifier (within the Bayesian context (see VanTrees, 1968), the c 2007 Roc´o Alaiz-Rodr´guez, Alicia Guerrero-Curieses and Jesus Cid-Sueiro. ´ ı ı A LAIZ -RODR´GUEZ , G UERRERO -C URIESES , AND C ID -S UEIRO I prior probabilities and the misclassification costs) may be non representative of the underlying real distributions. If the ratio of training data corresponding to each class is not in agreement with real class distributions, designing Bayes decision rules based on prior probabilities estimated from these data will be suboptimal and can seriously affect the reliability and performance of the classifier. A similar problem may arise if real misclassification costs are unknown during training. However, they are usually known by the end user, who can adapt the classifier decision rules to cost changes without re-training the classifier. For this reason, our attention in this paper is mainly focused on the problem of uncertainty in prior probabilities. Furthermore, being aware that class distribution is seldom known (at least totally) in real world applications, a robust approach (as opposite to adaptive) that prevents severe performance degradation appears to be convenient for these situations. Besides other adaptive and robust approaches that address this problem (discussed in more detail in Section 2.2) it is important to highlight those that handle the problem of uncertainty in priors by following a robust minimax principle: minimize the maximum possible risk. 
Analytic foundations of minimax classification are widely considered in the literature (see VanTrees, 1968; Moon and Stirling, 2000; Duda et al., 2001, for instance) and a few algorithms to carry out minimax decisions have been proposed. From computationally expensive ones such as estimating probability density functions (Takimoto and Warmuth, 2000; Kim, 1996) or using methods from optimization (Polak, 1997) to simpler ones like neural network training algorithms (Guerrero-Curieses et al., 2004; AlaizRodriguez et al., 2005). Minimax classifiers may, however, be seen as over-conservative since its goal is to optimize the performance under the least favorable conditions. Consider, for instance, a direct marketing campaign application carried out in order to maximize profits. Since optimal decisions rely on the proportion of potential buyers and it is usually unknown in advance, our classification system should take into account this uncertainty. Nevertheless, following a pure minimax strategy can lead to solutions where minimizing the maximum loss implies considering there are no potential clients. If it is the case, this minimax approach does not seem to be suitable for this kind of situation. In this imprecise class distribution scenario, it can be noticed that the classifier performance may be highly deviated from the optimal, that is, that of the classifier knowing actual priors. Minimizing this gap (that is, the maximum possible deviation with respect to the optimal classifier) is the focus of this paper. We seek for a system as robust as the conventional minimax approach but less pessimistic at the same time. We will refer to it as a minimax deviation (or minimax regret) classifier. In contrast to other robust and adaptive approaches, it can be used in general multiclass problems. Furthermore, as shown in Guerrero-Curieses et al. (2004), minimax approaches can be used in combination with the adaptive proposal by Saerens et al. (2002) to exploit its advantages. This minimax regret approach has recently been applied in the context of parameter estimation (Eldar et al., 2004; Eldar and Merhav, 2004) and a similar competitive strategy has been used in the context of hypothesis testing (Feder and Merhav, 2002). Under prior uncertainty, our solution provides an upper bound of the performance divergence from the optimal classifier. We propose a simple learning rate scaling algorithm in order to train a neural-based minimax deviation classifier. Although training can be based on minimizing any objective function, we have chosen objective functions that provide estimates of the posterior probabilities (see Cid-Sueiro and Figueiras-Vidal, 2001, for more details). 104 M INIMAX R EGRET C LASSIFIER This paper is organized as follows: the next section provides an overview of the problem as well as some previous approaches to cope with it. Next, Section 3 states the fundamentals of minimax classification together with a deeper analysis of the minimax regret approach proposed in this paper. Section 4 presents a neural training algorithm to get a neural-based minimax regret classifier under complete uncertainty. Moreover, practical situations with partial uncertainty in priors are also discussed. A learning algorithm to solve them is provided in Section 5. In Section 6, some experimental results show that minimax regret classifiers outperform (in terms of maximum risk deviation) classifiers trained on re-balanced data sets and those with the originally assumed priors. 
Finally, the main conclusions are summarized in Section 7. 2. Problem Overview Traditionally, supervised learning lies in the fact that training data and real data come from the same (although unknown) statistical model. In order to carefully analyze to what extend classifier performance depends on conditions such as class distribution or decision costs, learning and decision theory principles are briefly revisited. Next, some previous approaches to deal with environment imprecision are reviewed. 2.1 Learning and Making Optimal Decisions Let S = {(xk , dk ), k = 1, . . . , K} denote a set of labelled samples where xk ∈ RN is an observation feature vector and dk ∈ UL = {u0 , . . . , uL−1 } is the label vector. Class-i label ui is a unit L-dimensional vector with components ui, j = δi j , with every component equal to 0, except the i-th component which is equal to 1. We assume a learning process that estimates parameters w of a non-linear mapping f w : RN → P from the input space into probability space P = {p ∈ [0, 1]L | ∑L−1 pi = 1}. The soft decision is given i=0 by yk = fw (xk ) ∈ P and the hard output of the classifier is denoted by d. Note that d and d will be used to distinguish the actual class from the predicted one, respectively. Several costs (or benefits) associated with each possible decision are also defined: c i j denotes the cost of deciding in favor of class i when the true class is j. Negative values represent benefits (for instance, cii , which is the cost of correctly classifying a sample from class i could be negative in some practical cases). In general cost-sensitive classification problems, either misclassification costs c i j or cii costs can take different values for each class. Thus, there are many applications where classification errors lead to very different consequences (medical diagnosis, fault detection, credit risk analysis), what implies misclassification costs ci j that may largely vary between them. In the same way, there are also many domains where correct decision costs (or benefits) c ii do not take the same value. For instance, in targeted marketing applications (Zadrozny and Elkan, 2001), correctly identifying a buyer implies some benefit while correctly classifying a non buyer means no income. The same ¨ applies to medical diagnosis domains such as the gastric carcinoma problem studied in G uvenir et al. (2004). In this case, the benefit of correct classification also depends on the class: the benefit of correctly classifying an early stage tumor is higher than that of a later stage. The expected risk (or loss) R is given by L−1 L−1 R = ∑ ∑ ci j P{d = ui |d = u j }Pj j=0 i=0 105 , (1) A LAIZ -RODR´GUEZ , G UERRERO -C URIESES , AND C ID -S UEIRO I where P{d = ui |d = u j } with i = j represent conditional error probabilities, and P j = P{d = u j } is the prior probability of class u j . Defining the conditional risk of misclassifying samples from class u j as L−1 Rj = ∑ ci j P{d = ui |d = u j } , i=0 we can express risk (1) as L−1 R= ∑ Ri Pi . (2) i=0 It is well-known that the Bayes decision rule for the minimum risk is given by L−1 d = arg min { ∑ ci j P{d = u j |x}} , ui (3) j=0 where P{d = ui |x} is the a posteriori probability of class i given sample x. The optimal decision rule depends on posterior probabilities and therefore, on the prior probabilities and the likelihood. In theory, as long as posterior probabilities (or likelihood and prior probabilities) are known, the optimal decision in Eq. 
(3) can be expressed after a trivial manipulation as a function of the cost differences between the costs (ci j − c j j ) (Duda et al., 2001). This is the reason why c j j is usually assumed to be zero and the value of the cost difference is directly assigned to c i j . When dealing ¨ with practical applications, however, some authors (Zadrozny and Elkan, 2001; G uvenir et al., 2004) have urged to use meaningful decision costs measured over a common baseline (and not necessarily taking c j j = 0) in order to avoid mistakes that otherwise could be overlooked. For this reason and, what is more important, the uncertainty class distribution problem addressed in this paper, decision costs measured over a common baseline are considered. Furthermore, absolute values of decision costs are relevant to the design of classifiers under the minimax regret approach. 2.2 Related Work: Dealing with Cost and Prior Uncertainty Most proposals to address uncertainty in priors fall into the categories of adaptive and robust solutions. While the aim of a robust solution is to avoid a classifier with very poor performance under any conditions, an adaptive system pursues to fit the classifier parameters using more incoming data or more precise information. With an adaptive-oriented principle, Provost (2000) states that, once the classifier is trained under specific class distributions and cost assumptions (not necessarily the operating conditions), the selection of the optimal classifier for specific conditions is carried out by a correct placement of the decision thresholds. In the same way, the approaches in Kelly et al. (1999) and Kubat et al. (1998) consider that tuning the classifier parameters should be left to the end user, expecting that class distributions and misclassification costs will be precisely known then. Some graphical methods based on the ROC curve have been proposed in Adams and Hand (1998) and Provost and Fawcett (2001) in order to compare the classifier performance under imprecise class distributions and/or misclassification costs. The ROC convex hull method presented in Provost and Fawcett (2001) (or the alternative representation proposed in Drummond and Holte (2000)) allows the user to select potentially optimal classifiers, providing a flexible way to select 106 M INIMAX R EGRET C LASSIFIER them when precise information about priors or costs is available. Under imprecision, some classifiers can be discarded but this does not necessarily provide a method to select the optimal classifier between the possible ones and fit its parameters. Furthermore, due to its graphical character, these methods are limited to binary classification problems. Changes in prior probabilities have also been discussed by Saerens et al. (2002), who proposes a method based on re-estimating the prior probabilities of real data in an unsupervised way and subsequently adjusting the outputs of the classifier according to the new a priori probabilities. Obviously, the method requires enough unlabelled data being available for re-estimation. As an alternative to adaptive schemes, several robust solutions have been proposed, as the resampling methods, especially in domains where imbalanced classes come out (Kubat and Matwin, 1997; Lawrence et al., 1998; Chawla et al., 2002; Barandela et al., 2003). 
Either by undersampling or oversampling, the common purpose is to balance artificially the training data set in order to get a uniform class distribution, which is supposed to be the least biased towards any class and, thus, the most robust against changes in class distributions. The same approach is followed in cost sensitive domains, but with some subtle differences in practice. It is well known that class priors and decision costs are intrinsically related. For instance, different decision costs can be simulated by altering the priors and vice versa (see Ting, 2002, for instance). Thus, when a uniform distribution is desired in a cost sensitive domain, but working with cost insensitive decision machines, class priors are altered according to decision costs, what is commonly referred as rebalancing. The manipulation of the training data distribution has been applied to cost-sensitive learning in two-class problems (Breiman et al., 1984) in the following way: basically, the class with higher misclassification cost (suppose n times the lowest misclassification cost) is represented with n times more examples than the other class. Besides random sampling strategies, other sampling-based rebalancing schemes have been proposed to accomplish this task, like those considering closeness to the boundaries between classes (Japkowicz and Stephen, 2002; Zhou and LiuJ, 2006) or the costproportionate rejection sampling presented in Zadrozny et al. (2003). Extending the formulation of this type of procedures to general multiclass problems with multiple (and possibly asymmetric) inter-class misclassification costs appears to be a nontrivial task (Zadrozny et al., 2003; Zhou and LiuJ, 2006), but some progress has been made recently with regard to this latter point (Abe et al., 2004). Note, also, that many (although not all) of these rebalancing strategies are usually implemented by oversampling and/or subsampling, that is, replicating examples (without adding any extra information) and/or deleting them (which implies information loss). 3. Robust Classifiers Under Prior Uncertainty: Minimax Classifiers Prior probability uncertainty can be coped from a robust point of view following a minimax derived strategy. Minimax regret criterion is discussed in this section after presenting the conventional minimax criterion. Although our approach extends to general multi-class problems and the discussion is carried out in that way, we will first illustrate, for the sake of clarity and simplicity, a binary situation. 3.1 Minimax Classifiers As Eq. (3) shows, the minimum risk decisions depend on the misclassification costs, c i j , and the posterior class probabilities and, thus, they depend on the prior probabilities, Pi . Different prior 107 A LAIZ -RODR´GUEZ , G UERRERO -C URIESES , AND C ID -S UEIRO I PSfrag replacements distributions (frequency for each class) give rise to different Bayes classifiers. Fig. 1 shows the Bayes risk curve, RB (P1 ) versus class-1 prior probability for a binary classification problem. Standard RF (Q1 , P1 ) R0 Minimax RF (Q1mM , P1 ) Risk c00 Minimax Deviation RF (Q1mMd , P1 ) Rbasis R1 RB (P1 ) c11 0 Q1 Q1mM 1 Q1mMd P1 Figure 1: Risk vs. P1 . Minimum risk curve and performance under prior changes for the standard, minimax and minimax deviation classifier. RB (P1 ) stands for the optimal Bayes Risk against P1 . RF (Q1 , P1 ) denotes the Risk of a standard classifier (Fixed decision rule optimized for prior probabilities Q1 estimated in the training phase) against P1 . 
RF (Q1mM , P1 ) denotes the Risk of a minimax classifier (Fixed decision rule optimized for the minimax probabilities Q1mM ) against P1 . RF (Q1mMd , P1 ) denotes the Risk of a minimax deviation classifier (Fixed decision rule optimized for the minimax deviation probabilities Q 1mMd ) against P1 . If the prior probability distribution is unknown when the classifier is designed, or this distribution changes with time or from one environment to other, the mismatch between training and test conditions can degrade significantly the classifier performance. For instance, assume that Q = (Q0 , Q1 ) is the vector with class-0 and class-1 prior probabilities estimated in the training phase, respectively, and let RB (Q1 ) represent the minimum (Bayes) risk attainable by any decision rule for these priors. Note, that, according to Eq. (2), for a given classifier, the risk is a linear function of priors. Thus, risk RF (Q1 , P1 ) associated to the (fixed) classifier optimized for Q changes linearly with actual prior probabilities P1 and P0 = 1 − P1 , going from (0, R0 ) to (1, R1 ) (the continuous line in Fig. 1), where R0 and R1 refer to the class conditional risks for classes 0 and 1, respectively. Fig. 1 shows the impact of this change in priors and how performance deviates from optimal. Also, it can be shown (see VanTrees, 1968, for instance) that the minimum risk curve obtained for each prior is convex and the risk function of a given classifier verifies R F (Q1 , P1 ) ≥ RB (P1 ) with a tangent point at P1 = Q1 . 108 M INIMAX R EGRET C LASSIFIER The dashed line in Fig. 1 shows the performance of the minimax classifier, which minimizes the maximum possible risk under the least favorable priors, thus providing the most robust solution, in the sense that performance becomes independent from priors. From Fig. 1, it becomes clear that the minimax classifier is optimal for prior probabilities P = QmM = (Q0mM , Q1mM ) maximizing RB . Thus, this strategy is equivalent to maximizing the minimum risk (Moon and Stirling, 2000; Duda et al., 2001). We will refer to them as the minimax probabilities. Fig. 1 also makes clear that although a minimax classifier is a robust solution to address the imprecision in priors, it may become a somewhat pessimistic approach. 3.2 Minimax Deviation Classifiers We propose an alternative classifier that, instead of minimizing the maximum risk, minimizes the maximum deviation (regret) from the optimal Bayes classifier. In the following, we will refer to it as the minimax deviation or minimax regret classifier. A comparison between minimax and minimax deviation approaches is also shown in Fig. 1. This latter case corresponds to a classifier trained on prior probabilities P = Q mMd with performance as a function of priors given by a line (a plane or hyperplane for three or more classes, respectively) parallel to what we name, in the following, basis risk (Rbasis = c00 (1 − P1 ) + c11 P1 ). Note that the maximum deviation (with respect to priors) of the classifier optimized for Q is given by D(Q) = max {RF (Q1 , P1 ) − RB (P1 )} = max {R0 − c00 , R1 − c11 } . P1 The inspection of Fig. 1 shows that the minimum of D (with respect to Q) is achieved when R0 − c00 = R1 − c11 , which means that line RF (Q1 , P1 ) is parallel to arc named Rbasis in the figure and tangent to RB at Q1mMd . 
The minimax regret classifier is therefore also the Bayes solution with respect to the least favorable priors (Q0mMd, Q1mMd) (see Berger, 1985, for instance), which will be called the minimax deviation probabilities.

Now, we extend the formulation to a general L-class problem.

Definition 1 Consider an L-class decision problem with costs cij, 0 ≤ i, j < L and cjj ≤ cij, and let Rw(P) be the risk of a decision machine with parameter vector w when the prior class probabilities are given by P = (P0, ..., PL−1). The deviation function is defined as

    Dw(P) = Rw(P) − RB(P)

and the minimax deviation is defined as

    DmMd = inf_w max_P {Dw(P)} .   (4)

Note that the above definition assumes that the maximum exists. This is actually the case, since Dw(P) is a continuous function over the compact set P. Note, also, that our definition includes the natural assumption that cjj is never higher than cij, meaning that making the correct decision is never more costly than making a decision error. This assumption is used in part of our theoretical analysis. The algorithms proposed in this paper are based on the fact that the minimax deviation can be computed without knowing RB.

Theorem 2 The minimax deviation is given by

    DmMd = inf_w max_P {D̄w(P)} ,

where

    D̄w(P) = Rw(P) − Rbasis(P)   (5)

and

    Rbasis(P) = ∑_{j=0}^{L−1} cjj Pj .   (6)

Proof Note that, according to Eqs. (1) and (2), for any decision machine and any uj ∈ UL,

    R(uj) = Rj = ∑_{i=0}^{L−1} cij P{d̂ = ui | d = uj} ≥ cjj .

Since the bound is attained by the classifier deciding d̂ = uj for every observation x, we have RB(uj) = cjj. Therefore, using Eq. (6), we find that, for any u ∈ UL, RB(u) = Rbasis(u) and, thus, Dw(u) = D̄w(u). Since the Bayes minimum risk RB(P) is a concave function of the priors and Rw(P) is linear, Dw(P) is convex and, thus, it attains its maximum at some vertex of P (i.e., at some P = u ∈ UL):

    max_P {Dw(P)} = max_{u ∈ UL} {Dw(u)} .   (7)

Since the maximum difference between two hyperplanes defined over P is always attained at some vertex, we can conclude that

    max_P {D̄w(P)} = max_{u ∈ UL} {D̄w(u)} = max_{u ∈ UL} {Dw(u)} .   (8)

Combining Eqs. (4), (7) and (8), we get

    DmMd = inf_w max_P {D̄w(P)} .

Note that Rbasis represents the risk baseline of an ideal classifier making no errors. Theorem 2 shows that the minimax regret can be computed as the minimax deviation from this ideal classifier. Note, also, that if the costs cii do not depend on i, Eq. (5) becomes equivalent (up to a constant) to the risk, and the minimax regret classifier becomes equivalent to the minimax classifier.

Another important result for the algorithms proposed in this paper is that, under some conditions on the minimum risk, the minimum and maximum operators can be permuted. Although general results on the permutability of minimum and maximum operators can be found in the literature (see Polak, 1997, for instance), we provide here the proof for the specific case of interest to this paper.

Theorem 3 Consider the minimum deviation function given by

    D̄min(P) = inf_w {D̄w(P)} ,   (9)

where D̄w(P) is the normalized deviation function given by Eq. (5), and let P* be the prior probability vector providing the maximum deviation,

    P* = arg max_P D̄min(P) .   (10)

If D̄min(P) is continuously differentiable at P = P*, then the minimax deviation, DmMd, defined by Eq. (4), is

    DmMd = D̄min(P*) = max_P inf_w D̄w(P) .   (11)
Proof For any classifier with parameter vector w, we can write

    max_P D̄w(P) ≥ D̄w(P*) ≥ D̄min(P*)

and, thus,

    inf_w max_P D̄w(P) ≥ D̄min(P*) .   (12)

Therefore, D̄min(P*) is a lower bound of the minimax regret. Now we prove that D̄min(P*) is also an upper bound. According to Eq. (9), for any ε > 0 there exists a parameter vector wε such that

    D̄wε(P*) ≤ D̄min(P*) + ε .   (13)

By definition, for any P, D̄min(P) ≤ D̄wε(P). Therefore, using Eq. (13), we can write

    D̄wε(P*) − D̄wε(P) ≤ D̄min(P*) − D̄min(P) + ε .   (14)

Since D̄min(P) is continuously differentiable and (according to Eq. (10)) maximal at P*, for any ε′ > 0 there exists δ > 0 such that, for any P ∈ P with ∥P* − P∥ ≤ δ, we have

    D̄min(P*) − D̄min(P) ≤ ε′ ∥P* − P∥ ≤ ε′ δ .   (15)

Let Pδ be a prior such that ∥P* − Pδ∥ = δ. Taking ε = ε′δ and combining Eqs. (14) and (15), we can write

    D̄wε(P*) − D̄wε(Pδ) ≤ 2 ε′ δ .

Since the above condition is verified for any prior Pδ at distance δ from P*, and taking into account that D̄wε(P) is a linear function of P, we conclude that the maximum slope of D̄wε(P) is bounded by 2ε′ and, thus, for any P ∈ P, we have

    D̄wε(P) − D̄wε(P*) ≤ 2 ε′ ∥P − P*∥ ≤ 2√2 ε′

(where we have used the fact that the maximum distance between two probability vectors is √2). Therefore, we can write

    max_P D̄wε(P) ≤ D̄wε(P*) + 2√2 ε′

and, thus,

    inf_w max_P D̄w(P) ≤ D̄wε(P*) + 2√2 ε′ .

Finally, using Eq. (13) and taking into account that ε = ε′δ ≤ √2 ε′, we get

    inf_w max_P D̄w(P) ≤ D̄min(P*) + 3√2 ε′ .   (16)

Since the above is true for any ε′ > 0, we conclude that D̄min(P*) is also an upper bound of the minimax deviation. Therefore, combining Eqs. (12) and (16), we conclude that

    inf_w max_P D̄w(P) = D̄min(P*) ,

which completes the proof.

Note that the deviation function needs to be neither differentiable nor a continuous function of the parameters w. If the minimum deviation function is not continuously differentiable at the minimax deviation probability P*, the theorem cannot be applied. The reason is that, although there must exist at least one classifier providing the minimum deviation at P = P*, such classifiers may not provide a deviation that is constant with respect to the prior probabilities. The situation can be illustrated with an example. Let x ∈ R be distributed according to

    p(x | d = 0) = 0.8 N(x, σ) + 0.2 N(x − 2, σ),
    p(x | d = 1) = 0.2 N(x − 1, σ) + 0.8 N(x − 3, σ),

where σ = 0.5 and N(x, σ) = (2πσ²)^{−1/2} exp(−x²/(2σ²)), and consider the set Φλ of classifiers given by a single threshold over x, with decision d̂ = 1 if x ≥ λ and d̂ = 0 if x < λ. Fig. 2 shows the distribution of both classes over x, and Fig. 3 shows, as a function of the priors, the minimum error probability (continuous line) that can be obtained using classifiers in Φλ. Note that decision costs c00 = c11 = 0 and c01 = c10 = 1 have been considered for this illustrative problem. An abrupt slope change is observed at the minimax deviation probability, P{d = 1} = 1/2. For this prior, there are two single-threshold classifiers providing the minimum error probability, given by the thresholds λ1 and λ2 in Fig. 2. However, as shown in Fig. 3, neither of them provides a risk that is constant in the prior. The minimax deviation classifier in Φλ, which has threshold λ0, does not attain the minimum risk at the minimax deviation probability and, thus, cannot be obtained by using Eq. (11). For this example, the desired robust classifier should have the deviation function given by the horizontal dotted line in Fig. 3.
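A short sketch (written by us for this example, with the stated mixtures and 0/1 costs) reproduces the behavior behind Fig. 3: near P{d = 1} = 1/2 the optimal threshold jumps between two distinct values, which produces the kink in the minimum-error curve:

    import numpy as np
    from math import erf, sqrt

    s = 0.5
    def Phi(z): return 0.5 * (1 + erf(z / sqrt(2)))
    def F0(x): return 0.8 * Phi(x / s) + 0.2 * Phi((x - 2) / s)       # CDF of p(x|d=0)
    def F1(x): return 0.2 * Phi((x - 1) / s) + 0.8 * Phi((x - 3) / s)  # CDF of p(x|d=1)

    lambdas = np.linspace(-2, 5, 1401)
    F0v = np.array([F0(l) for l in lambdas])
    F1v = np.array([F1(l) for l in lambdas])

    for p1 in (0.45, 0.50, 0.55):
        # error of the threshold classifier: P0*P(x>=lambda|0) + P1*P(x<lambda|1)
        err = (1 - p1) * (1 - F0v) + p1 * F1v
        print("P(d=1)=%.2f: min error %.4f at threshold %.2f"
              % (p1, err.min(), lambdas[err.argmin()]))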
Fortunately, such a robust classifier can be obtained by combining the outputs of several classifiers. For instance, let d̂1 and d̂2 be the decisions of the classifiers given by thresholds λ1 and λ2, respectively. It is not difficult to see that the classifier selecting between d̂1 and d̂2 at random (for each input sample x) provides a robust classifier. This procedure can be extended to the multiclass case: consider a set of L classifiers with parameters wk, k = 0, ..., L − 1, and consider the classifier that, for any input sample x, makes a decision equal to d̂k (i.e., the decision of the classifier with parameters wk) with probability qk. It is not difficult to show that the deviation function of this classifier is given by

    D(P) = ∑_{j=0}^{L−1} Pj ∑_{k=0}^{L−1} qk Dj(wk) ,

where Dj(wk) = Rj(wk) − cjj. In order to get a constant deviation function, the probabilities qk should be chosen in such a way that

    ∑_{k=0}^{L−1} qk Dj(wk) = D  for all j,

where D is a constant. Solving these linear equations for qk, k = 0, ..., L − 1 (with the constraint ∑k qk = 1), the required probabilities can be found. Note that, in order to build the non-deterministic classifier providing a constant deviation, a set of L independent classifiers that are optimal at the minimax deviation prior has to be found. However, we do not investigate this special case further, for two main reasons:

• The situation does not seem to be common in practice. In our simulations, maximizing the minimum risk deviation always provided a classifier whose response is approximately parallel to Rbasis.

• In general, the abrupt change in the derivative may be a symptom that the classifier structure is not optimal for the data distribution. Instead of building a non-deterministic classifier, increasing the classifier complexity should be more efficient.

Figure 2: The conditional data distributions for the one-dimensional example discussed in the text. λ1 and λ2 are the thresholds providing the minimum risk at the minimax deviation probability. λ0 provides the minimax deviation classifier.

Figure 3: Error probabilities as a function of the prior probability of class 1 for the example in Fig. 2. Thresholds λ1 and λ2 do not provide the minimax deviation classifier, which is obtained for threshold λ0. However, the random combination of the classifiers with thresholds λ1 and λ2 (dotted line) provides a robust classifier with a deviation lower than that of λ0.

Although the least favorable prior providing the minimax deviation can be computed in closed form for some simple distributions, in general it must be computed numerically. Moreover, we assume here that the data distribution is not known and must be learned from examples. Thus, we must incorporate the estimation of the least favorable prior into the learning process. Next, we propose a training algorithm to obtain a minimax regret classifier based on neural networks.

4. Neural Robust Classifiers Under Complete Uncertainty

Note that, if QmMd is the probability vector providing the maximum in Eq. (11), that is,

    QmMd = arg max_P inf_w {D̄w(P)} ,

then we can write

    DmMd = inf_w {D̄w(QmMd)} .

Therefore, the minimax deviation classifier can be estimated by training a classifier using the priors QmMd. For this reason, QmMd will be called the minimax deviation prior (or least favorable prior).
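When the distribution is known, QmMd can be found numerically by a brute-force version of the definition above. The following sketch does this for the single-threshold family of the previous example (0/1 costs, so Rbasis = 0 and the deviation equals the error probability); as discussed before, the deterministic threshold trained at this prior does not attain a constant deviation, so this value is a max-min, not the achieved min-max:

    import numpy as np
    from math import erf, sqrt

    s = 0.5
    def Phi(z): return 0.5 * (1 + erf(z / sqrt(2)))
    def F0(x): return 0.8 * Phi(x / s) + 0.2 * Phi((x - 2) / s)
    def F1(x): return 0.2 * Phi((x - 1) / s) + 0.8 * Phi((x - 3) / s)

    lambdas = np.linspace(-2, 5, 1401)
    F0v = np.array([F0(l) for l in lambdas])
    F1v = np.array([F1(l) for l in lambdas])

    best_p1, best_dev = 0.0, -1.0
    for p1 in np.linspace(0.01, 0.99, 197):       # grid over the prior simplex
        dmin = ((1 - p1) * (1 - F0v) + p1 * F1v).min()   # inf over thresholds
        if dmin > best_dev:
            best_p1, best_dev = p1, dmin
    print("QmMd ~ (%.2f, %.2f), max-min deviation ~ %.4f"
          % (1 - best_p1, best_p1, best_dev))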
Our proposed algorithms are based on an iterative process that alternates between estimating the parameters w given an estimate of the minimax deviation prior and re-estimating the prior given the current network weights. This is shown in the following.

4.1 Updating Network Weights

Learning is based on minimizing some empirical estimate of the overall error function

    E{C(y, d)} = ∑_{i=0}^{L−1} P{d = ui} E{C(y, d) | d = ui} = ∑_{i=0}^{L−1} Pi Ci ,

where C(y, d) may be any error function and Ci is the expected conditional error for class i. Selecting an appropriate error function (see Cid-Sueiro and Figueiras-Vidal, 2001, for instance), learning rules can be designed that provide a posteriori probability estimates (yi ≈ P{d = ui | x}, where yi is the soft decision) and, thus, according to Eq. (3), the hard decision minimizing the risk can be approximated by

    d̂ = arg min_i { ∑_{j=0}^{L−1} cij yj } .

The overall empirical error function (cost function) used in learning for priors P = (P0, ..., PL−1) may be written as

    C̄ = ∑_{i=0}^{L−1} Pi Ci
      = ∑_{i=0}^{L−1} Pi (1/Ki) ∑_{k=1}^{K} d_i^k C(y^k, d^k)
      = (1/K) ∑_{i=0}^{L−1} (Pi / (Ki/K)) ∑_{k=1}^{K} d_i^k C(y^k, d^k)
      = (1/K) ∑_{k=1}^{K} ∑_{i=0}^{L−1} (Pi / Pi(0)) d_i^k C(y^k, d^k) ,   (17)

where Pi(0) = Ki/K is an initial estimate of the class-i prior based on the class frequencies in the training set and Pi is the current prior estimate. Minimizing the error function (17) by means of a stochastic gradient descent learning rule leads to updating the network weights at the k-th iteration as

    w(k+1) = w(k) − µ ∑_{i=0}^{L−1} (Pi(n) / Pi(0)) d_i^k ∇w C(y^k, d^k)
           = w(k) − ∑_{i=0}^{L−1} µi(n) d_i^k ∇w C(y^k, d^k) ,   (18)

where

    µi(n) = µ Pi(n) / Pi(0)   (19)

is a learning step scaled by the prior ratio. Note that d_i^k selects the appropriate µi(n) according to the class membership of the pattern. The classifier is thus trained without altering the class distribution Pi(0) of the original training data set and, therefore, without discarding or duplicating information.
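The following sketch implements the prior-scaled update (18)–(19) for a plain linear softmax classifier with the cross-entropy cost (a simplification of the GSP network used later; the data and all parameter values are made up for illustration):

    import numpy as np

    def sgd_epoch(w, X, labels, P_cur, P0, mu=0.3):
        """One pass of the prior-scaled update (Eqs. 18-19): a sketch with a
        linear softmax model, not this paper's exact GSP implementation."""
        L, dim = w.shape
        mu_i = mu * P_cur / P0                      # per-class learning rates, Eq. (19)
        for x, c in zip(X, labels):
            z = w @ x
            y = np.exp(z - z.max()); y /= y.sum()   # softmax posterior estimates
            d = np.zeros(L); d[c] = 1.0
            grad = np.outer(y - d, x)               # gradient of cross-entropy wrt w
            w -= mu_i[c] * grad                     # d_i^k selects the class-c step size
        return w

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (90, 2)), rng.normal(2, 1, (10, 2))])
    labels = np.array([0] * 90 + [1] * 10)
    P0 = np.array([0.9, 0.1])                       # training-set priors
    P_cur = np.array([0.5, 0.5])                    # current prior estimate
    w = sgd_epoch(np.zeros((2, 2)), X, labels, P_cur, P0)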
4.2 Updating Prior Probabilities

Eq. (11) shows that the learning process should maximize (5) with respect to the prior probabilities. An estimate of (5) can be computed as

    D̄w(P) = Rw(P) − Rbasis(P) ,   (20)

where

    Rw(P) = ∑_{j=0}^{L−1} Rj Pj   (21)

is the overall risk estimate and

    Rj = (1/Nj) ∑_{i=0}^{L−1} cij Nij   (22)

is the class-j conditional risk estimate, where Nj is the number of class-uj patterns in the training phase and Nij is the number of samples from class uj assigned to ui.

In order to derive a learning rule to find an estimate Pi satisfying the constraints ∑_{i=0}^{L−1} Pi = 1 and 0 ≤ Pi ≤ 1, we will use auxiliary variables Bi such that

    Pi = exp(Bi) / ∑_{j=0}^{L−1} exp(Bj) .   (23)

We maximize D̄w with respect to Bi. Applying the chain rule,

    ∂D̄w/∂Bi = ∑_{j=0}^{L−1} (∂D̄w/∂Pj)(∂Pj/∂Bi) ,

and using Eqs. (20), (21) and (23), we get

    ∂D̄w/∂Bi = ∑_{j=0}^{L−1} (Rj − cjj) Pi (δij − Pj)
             = Pi ( Ri − cii − ∑_{j=0}^{L−1} Rj Pj + ∑_{j=0}^{L−1} cjj Pj )
             = Pi ( Ri − cii − (Rw − Rbasis) )
             = Pi Rdi ,

where Rdi = (Ri − cii) − (Rw − Rbasis). The learning rule for the auxiliary variable Bi is

    Bi(n+1) = Bi(n) + ρ ∂D̄w/∂Bi = Bi(n) + ρ Pi(n) Rdi(n) ,   (24)

where the parameter ρ > 0 controls the rate of convergence. Using Eqs. (23) and (24), the updated learning rule for Pi is

    Pi(n+1) = exp(Bi(n)) exp(ρ Pi(n) Rdi(n)) / ∑_{j=0}^{L−1} exp(Bj(n)) exp(ρ Pj(n) Rdj(n))
            = Pi(n) exp(ρ Pi(n) Rdi(n)) / ∑_{j=0}^{L−1} Pj(n) exp(ρ Pj(n) Rdj(n)) .   (25)

4.3 Training Algorithm for a Minimax Deviation Classifier

In the previous sections, both the network weight update rule (18) and the prior probability update rule (25) have been derived. The algorithm resulting from their combination is as follows:

    for n = 0 to Niterations − 1 do
        for k = 1 to K do
            w(k+1) = w(k) − ∑_{i=0}^{L−1} µi(n) d_i^k ∇w C(y^k, d^k)
        end for
        Estimate R(n) and Ri(n), i = 0, ..., L − 1, according to (21) and (22)
        Update the minimax probabilities Pi(n+1), i = 0, ..., L − 1, according to (25),
        and compute µi(n+1) with (19)
    end for
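An end-to-end sketch of this loop, again with a linear softmax classifier as a minimal stand-in for the GSP (data, cost matrix and hyperparameters below are illustrative, not this paper's experimental setup):

    import numpy as np

    def posteriors(w, X):
        Z = X @ w.T
        Y = np.exp(Z - Z.max(axis=1, keepdims=True))
        return Y / Y.sum(axis=1, keepdims=True)

    def train_minmax_dev(X, labels, C, n_iter=40, mu=0.1, rho=0.5):
        """Sketch of the Section 4.3 loop: prior-scaled SGD (Eqs. 18-19)
        alternated with the prior update (Eq. 25)."""
        L = C.shape[0]
        P0 = np.bincount(labels, minlength=L) / len(labels)  # P^(0): training priors
        P, w = P0.copy(), np.zeros((L, X.shape[1]))
        for n in range(n_iter):
            for x, c in zip(X, labels):                      # weight updates
                z = w @ x; y = np.exp(z - z.max()); y /= y.sum()
                d = np.zeros(L); d[c] = 1.0
                w -= mu * (P[c] / P0[c]) * np.outer(y - d, x)
            dec = np.argmin(posteriors(w, X) @ C.T, axis=1)  # d = argmin_i sum_j cij yj
            R = np.array([C[dec[labels == j], j].mean() for j in range(L)])  # Eq. (22)
            Rw, Rb = R @ P, np.diag(C) @ P                   # Eqs. (21) and (6)
            Rd = (R - np.diag(C)) - (Rw - Rb)
            P = P * np.exp(rho * P * Rd); P /= P.sum()       # Eq. (25)
        return w, P

    rng = np.random.default_rng(0)
    y = rng.choice(2, size=1000, p=[0.93, 0.07])
    X = np.column_stack([rng.normal(2.0 * y, np.sqrt(2.0)), np.ones(1000)])  # feature + bias
    C = np.array([[2.0, 5.0], [4.0, 0.0]])
    w, P = train_minmax_dev(X, y, C)
    print("estimated least favorable prior:", P)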
5. Robust Classifiers Under Partial Uncertainty

Although in many practical situations the prior probabilities may not be specified with precision, they can be partially known. In this section we discuss how partial information about the priors can be used to improve the classifier performance relative to the complete-uncertainty situation. From now on, let us consider that lower (or upper) bounds on the priors are known from previous experience. We will denote the lower and upper bounds of the class-i prior probability by Pil and Piu, respectively. In order to illustrate this situation, consider a binary classification problem where the lower bounds P0l and P1l are known; that is, P1 ∈ [P1l, 1 − P0l], where this interval represents the uncertainty region. Let us denote by Γ = {P : 0 ≤ Pi ≤ 1, ∑_{i=0}^{L−1} Pi = 1, Pi ≥ Pil} the probability region satisfying the imposed constraints. In the following, we will refer to Γ as the uncertainty region.

Now, the aim is to design a classifier that minimizes the maximum regret from the minimum risk only inside the uncertainty region. This is depicted in Fig. 4(a), which shows that reducing the uncertainty in the priors allows the deviation from the optimal classifier to be reduced. This minimax regret approach for the uncertainty region Γ is often called Γ-minimax regret. As discussed before, the minimax deviation solution gives a Bayes solution with respect to some priors, denoted QΓmMd in Fig. 4(a) for the partial-uncertainty case, which is the least favorable distribution according to the regret criterion.

Figure 4: Minimax deviation classifier under partial uncertainty of the prior probabilities: (a) Γ-minMaxDev classifier. (b) Modified cost function defined as RB(P) + ψ(P).

In contrast to the minimax regret criterion, note that a classical minimax classifier for the considered uncertainty region would minimize the worst-case risk. It would be a Bayes solution for the prior where the minimum risk reaches its maximum, and it could be denoted QΓmM. Notice, also, that these solutions will coincide if the risks at the vertices of Γ take the same value (cΓii = k).

5.1 Neural Robust Classifiers Under Partial Uncertainty

The minimax search can be formulated as maximizing (with respect to the priors) the minimum (with respect to the network parameters) of the deviation function (5), as described in the previous section, but subject to some constraints:

    arg max_P inf_w {D̄Γw(P)}   s.t.  Pi ≥ Pil, i = 0, ..., L − 1,

where D̄Γw = Rw − RΓbasis. When uncertainty is global, the basis-risk hyperplane is defined by the risk in the L extreme cases Pi = δik, that is, by the corresponding cii. With partial knowledge of the prior probabilities, however, this hyperplane is defined by the risk at L points, namely the vertices given by the restrictions, with associated risks denoted cΓjj.

Defining

    l(Pi) = 1 / (1 + exp(−τ(Pi − Pil))) ,   (26)

where τ controls the hardness of the restriction, the minimax problem can be reformulated as

    arg max_P inf_w {D̄Γw(P)}   s.t.  l(Pi) ≥ 1/2, i = 0, ..., L − 1.

Thus, this constrained optimization problem can be solved as an unconstrained problem by considering an auxiliary function that incorporates the restriction as a barrier function,

    arg max_P inf_w {D̄Γw(P) + A ψ(P)} ,

where ψ(Pi) = log(l(Pi)) and the constant A determines the contribution of the barrier function. Fig. 4(b) shows the new risk function corresponding to the binary case depicted in Fig. 4(a); note that it is the sum of the original RB(P) and the barrier function ψ(P). As in Section 4.2, in order to derive the prior update rule, we need to compute

    ∂ψ/∂Bi = ∑_{j=0}^{L−1} (∂ψ/∂Pj)(∂Pj/∂Bi)
           = τ Pi ∑_{k=0}^{L−1} (1 − l(Pk)) (δik − Pk)
           = τ Pi ψdi ,

where ψdi = ∑_{k=0}^{L−1} (1 − l(Pk))(δik − Pk). As τ increases, the constraint becomes harder around the specified bound. The update rule for the auxiliary variable Bi at cycle n is

    Bi(n+1) = Bi(n) + ρ Pi(n) RΓdi(n) + ρ A τ Pi(n) ψdi(n) ,

and therefore, using (23), the update rule for Pi is

    Pi(n+1) = Pi(n) exp(ρ Pi(n) RΓdi(n)) exp(ρ A τ Pi(n) ψdi(n))
              / ∑_{j=0}^{L−1} Pj(n) exp(ρ Pj(n) RΓdj(n)) exp(ρ A τ Pj(n) ψdj(n)) .

Note that if an upper bound is known instead of a lower bound, l(Pi) defined by (26) should be replaced by u(Pi) = (1 + exp(τ(Pi − Piu)))^{−1} in the previous formulation.

The minimax constrained optimization problem has thus been tackled by considering a new objective function defined as the sum of the original cost function and a barrier function. Studying the concavity of this new function is important because a stationary point of a concave function is a global maximum. Since the minimum risk curve RB(P) is a concave function of the priors (see VanTrees, 1968, for details), if we verify the concavity of the barrier function, we can conclude that their sum is also concave. The barrier function is concave in P if its Hessian matrix HR verifies P^T HR P ≤ 0. The Hessian of the barrier function equals a diagonal matrix Dr = diag(r) with all-negative diagonal entries ri = −A τ² l(Pi)(1 − l(Pi)). Since l(Pi) ∈ [0, 1] and, therefore, ri ≤ 0, it is straightforward to see that

    P^T HR P = P^T Dr P = ∑_{i=0}^{L−1} Pi² ri ≤ 0 .

Since the barrier function is concave, the new objective function (the sum of two concave functions) is also concave.
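A minimal sketch of one constrained prior update (the exponentiated step above, with the barrier term added; Rd values, bounds and hyperparameters are hypothetical):

    import numpy as np

    def gamma_prior_step(P, Rd, Pl, rho=0.5, A=1.0, tau=50.0):
        """One prior update under lower bounds P_i >= P_i^l (Section 5.1)."""
        l = 1.0 / (1.0 + np.exp(-tau * (P - Pl)))        # Eq. (26)
        # psi_di = sum_k (1 - l(P_k)) (delta_ik - P_k)
        psi_d = (1.0 - l) - P @ (1.0 - l)
        P_new = P * np.exp(rho * P * Rd) * np.exp(rho * A * tau * P * psi_d)
        return P_new / P_new.sum()

    P  = np.array([0.6, 0.3, 0.1])
    Rd = np.array([0.2, -0.1, 0.3])                      # hypothetical risk deviations
    Pl = np.array([0.2, 0.1, 0.05])                      # lower bounds defining Gamma
    print(gamma_prior_step(P, Rd, Pl))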
5.2 Extension to Other Learning Algorithms

The learning algorithm proposed in this paper is intended to train a minimax deviation classifier based on neural networks with a feedforward architecture. In fact, the proposed algorithm is a feasible solution for any learning process based on minimizing an empirical estimate of an overall cost (error) function, and it is applicable to any classifier that can be trained (in an iterative process) for the estimated minimax deviation probabilities and the assumed decision costs. Specifically, in this paper, scaling the learning rate allows different class distributions to be simulated, and the hard decisions are made based on posterior probability estimates and decision costs. Furthermore, the neural learning carried out in one iteration can be re-used in the next one, which reduces the computational cost with respect to a complete optimization process in each iteration.

Apart from the general approach of completely training a classifier in each iteration, and in order to reduce its computational cost, specific solutions may be studied for different learning machines. Nonetheless, it does not seem feasible to readily achieve this improvement for classifiers like SVMs, where the support vectors of one solution may have nothing in common with those obtained in the next iteration, making it necessary to re-train the classifier in each iteration.

Another possible solution, for any classifier that provides a posteriori probability estimates or any score that can be converted into probabilities (for details on calibration methods see Wei et al., 1999; Zadrozny and Elkan, 2002; Niculescu-Mizil and Caruana, 2005), is outlined here. In this case, an iterative procedure can be studied that estimates the minimax deviation probabilities and consequently adjusts (without re-training) the outputs of the classifier. The general idea is as follows: first, the new minimax deviation prior probabilities are estimated according to (25); then, the posterior probabilities provided by the model are adjusted as follows (see Saerens et al., 2002, for more details):

    P(k){d = ui | x} = P(k−1){d = ui | x} (Pi(k) / Pi(k−1))
                       / ∑_{j=0}^{L−1} P(k−1){d = uj | x} (Pj(k) / Pj(k−1)) .   (27)

The algorithm's main structure is summarized as

    for k = 1 to K do
        Estimate R(k) and Ri(k), i = 0, ..., L − 1, according to (21), (22) and the decision costs cij
        Update the minimax probabilities Pi(k+1) according to (25)
        Adjust the classifier outputs according to (27)
    end for

The effectiveness of this method relies on the accuracy of the initial a posteriori probability estimates. Studying this approach in depth and comparing different minimax deviation classifiers (decision trees, SVMs, RBF networks, feedforward networks and committee machines) together with different probability calibration methods appears as a challenging issue to be explored in future work.
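A sketch of the re-weighting step (27), with hypothetical posteriors and priors chosen only to show the mechanics:

    import numpy as np

    def adjust_posteriors(Y0, P_old, P_new):
        """Eq. (27): re-weight posteriors Y0 (estimated under priors P_old) so
        that they correspond to priors P_new, without retraining (following
        Saerens et al., 2002)."""
        W = Y0 * (P_new / P_old)               # elementwise per-class re-weighting
        return W / W.sum(axis=1, keepdims=True)

    Y0 = np.array([[0.9, 0.1], [0.6, 0.4], [0.3, 0.7], [0.2, 0.8]])
    P_old, P_new = np.array([0.8, 0.2]), np.array([0.5, 0.5])
    print(adjust_posteriors(Y0, P_old, P_new))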
6. Experimental Results

In this section, we first present the neural network architecture used in the experiments and illustrate the proposed minimax deviation strategy on an artificial data set. Then, we apply it to several real-world classification problems. Moreover, a comparison with other proposals, such as the traditional minimax approach and the common re-balancing approach, is carried out.

6.1 Softmax-based Network

Although our algorithms can be applied to any classifier architecture, we have chosen a neural network based on the softmax non-linearity, with soft decisions given by

    yi = ∑_{j=1}^{Mi} yij ,  with  yij = exp(wij^T x + wij0) / ∑_{k=0}^{L−1} ∑_{l=1}^{Mk} exp(wkl^T x + wkl0) ,

where L stands for the number of classes, Mj is the number of softmax outputs used to compute yj, and the wij are weight vectors. We will refer to this network as a Generalized Softmax Perceptron (GSP).1 A simple network with Mj = 2 is used in the experiments.

Figure 5: The GSP (Generalized Softmax Perceptron) network.

Fig. 5 shows the neural network architecture used to classify the samples represented by the feature vector x. Learning consists of estimating the network parameters w by means of the stochastic gradient minimization of a certain objective function. In the experiments, we have considered the cross-entropy objective function, given by

    CE(y, d) = − ∑_{i=1}^{L} di log yi .

The stochastic gradient learning rule for the GSP network is given by Eq. (18). The learning step decreases according to µ(k) = µ(0)/(1 + k/η), where k is the iteration number, µ(0) the initial learning rate and η a decay factor.

1. Note that the GSP is similar to a two-layer MLP with a single layer of weights and with a coupled saturation function (softmax) instead of sigmoidal units.

The reason for illustrating this approach with a feedforward architecture is that, as mentioned in Section 5.2, it allows the partially optimized solution of the current iteration to be exploited in the next one. On the other hand, posterior probability estimation makes it possible to apply the adaptive strategy based on prior re-estimation proposed by Saerens et al. (2002) to the minimax deviation classifier, as long as a data set representative of the operating conditions is available. Finally, the fact that the intermediate outputs yij of the GSP can be interpreted as subclass probabilities may provide quite a natural way to cope with the unexplored problem of uncertainty in subclass distributions, as pointed out by Webb and Ting (2005). Nonetheless, both the architecture and the cost function are not the goal of this paper, but merely illustrative tools.

6.2 Artificial Data Set

To illustrate the minimax regret approach proposed in this paper under both complete and partial uncertainty, an artificial data set with two classes (class u0 and class u1) has been created. Data examples are drawn from the normal distributions p(x | d = ui) = N(mi, σi²), with mean mi and standard deviation σi. The mean values were set to m0 = 0 and m1 = 2, and the standard deviations to σ0 = σ1 = √2. A total of 4000 instances were generated, with prior probabilities of class membership P{d = u0} = 0.93 and P{d = u1} = 0.07. The cost-benefit matrix is given by

    ( c00  c01 )   ( 2  5 )
    ( c10  c11 ) = ( 4  0 ) .

The initial learning rate was set to µ(0) = 0.3, the decay factor to η = 2000, and training was ended after 80 cycles. Classifier assessment was carried out by 10-fold cross-validation. Two classifiers were trained, called the standard classifier and the minMaxDev classifier: the former is built by assuming that the estimated class prior information is precise and stationary, while the latter follows the approach proposed in this paper to cope with uncertainty in the priors. For the standard classifier, performance may deviate from the optimal risk by up to 3.39 when the priors change from training to test conditions, whereas the minimax deviation classifier reduces this worst-case difference from the optimal classifier to 0.77.

Now, suppose that some information about the priors is available (partial uncertainty). For instance, consider that lower bounds for the prior probabilities P0 and P1 are known and set to P0l = 0.55 and P1l = 0.05, respectively, so that the uncertainty region is Γ = {(P0, P1) | P0 ∈ [0.55, 0.95], P1 ∈ [0.05, 0.45]}.
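For reference, a short sketch that draws this artificial data set with the stated priors and dispersion (only the random seed is our own, arbitrary choice):

    import numpy as np

    rng = np.random.default_rng(0)
    K = 4000
    P = np.array([0.93, 0.07])           # priors of Section 6.2
    m = np.array([0.0, 2.0])             # class means m0, m1
    sigma = np.sqrt(2.0)                 # sigma0 = sigma1 = sqrt(2)
    C = np.array([[2.0, 5.0],            # cost-benefit matrix of Section 6.2
                  [4.0, 0.0]])

    labels = rng.choice(2, size=K, p=P)
    x = rng.normal(m[labels], sigma)     # one-dimensional samples
    print("empirical training priors:", np.bincount(labels) / K)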
A minimax deviation classifier can be derived for Γ (we will call it the Γ-minMaxDev classifier). The narrower Γ is, the closer the performance of the minimax deviation classifier is to the optimum. For this particular case, under partially imprecise priors, the standard classifier may differ from the optimum (in Γ) by 0.83, while the simple minMaxDev classifier, designed under total prior uncertainty, attains a maximum deviation of 0.53. However, the Γ-minMaxDev classifier differs from the optimum by only 0.24. These figures are reported in Table 1, where both theoretical and experimental results are shown.

Table 1: Comparison between the standard classifier (built under stationary prior assumptions), the minimax deviation classifier (minMaxDev) and the minimax deviation classifier under partial uncertainty (Γ-minMaxDev) for the artificial data set (theoretical/experimental values).

                                                   Standard     minMaxDev    Γ-minMaxDev
    Maximum deviation from optimal
      (complete uncertainty)                       3.41/3.39    0.72/0.77        –
    Maximum deviation from optimal in Γ
      (partial uncertainty)                        0.85/0.83    0.50/0.53    0.19/0.24

6.3 Real Databases

In this section we report experimental results obtained with several publicly available data sets. From the UCI repository (Blake and Merz, 1998) we use the following benchmarks: German Credits, Australian Credits, Insurance Company, DNA splice-junction, Page-blocks, Dermatology and Pen-digits. Another public data set, Munich Credits, comes from the Department of Statistics at the University of Munich.2 The data sets are summarized in Table 2, and the cost-benefit matrices are shown in Table 3. We have used the cost values that appear in Ikizler (2002) for the data sets in common; otherwise, for lack of an expert analyst, the cost values have been chosen by hand.

2. Data sets available at http://www.stat.uni-muenchen.de/service/datenarchiv/welcome e.html.

Table 2: Experimental data sets.

    Database                    # Classes   Class distribution                  # Attributes   # Instances
    German Credits (GCRE)           2       [0.70 0.30]                               8            1000
    Australian Credits (ACRE)       2       [0.32 0.68]                              14             690
    Munich Credits (MCRE)           2       [0.30 0.70]                              20            1000
    Insurance Company (COIL)        2       [0.94 0.06]                              85            9822
    DNA Splice-junction (DNA)       3       [0.24 0.24 0.52]                        180            3186
    Page-blocks (PAG)               5       [0.90 0.06 0.01 0.01 0.02]               10            5473
    Dermatology (DER)               6       [0.31 0.16 0.20 0.13 0.14 0.06]          34             366
    Pen-digits (PEN)               10       [0.104 0.104 0.104 0.096 0.104
                                             0.096 0.096 0.104 0.096 0.096]          16           10992

Table 3: Cost-benefit matrices for the experimental data sets. Fixed cost-benefit matrices, with negative diagonal entries representing benefits for correct decisions, are used for the Credits data sets (German, Australian and Munich), Insurance Company, DNA, Page-blocks and Dermatology; for Pen-digits, cij = 0 if i = j and cij = 1 otherwise.

Table 4: Classifier performance evaluated as the maximum risk deviation from the optimal classifier for several real-world applications. Class-conditional risk deviations (Ri − cii) are reported for the minMaxDev classifier.

    Database   Standard   Re-balanced   minMaxDev                              minMax
    GCRE         0.70        0.80       (0.55 0.60)                              0.99
    ACRE         1.00        1.00       (0.76 0.86)                              1.00
    MCRE         0.91        0.77       (0.54 0.59)                              0.99
    COIL         2.78        0.99       (0.87 0.92)                             16.32
    DNA          0.34        0.53       (0.30 0.27 0.25)                         1.14
    PAG          0.62        0.26       (0.13 0.13 0.20 0.16 0.16)               0.60
    DER          1.03        1.28       (0.67 0.78 0.51 0.48 0.54 0.86)          7.62
    PEN          0.061       0.059      (0.024 0.028 0.025 0.019 0.023
                                         0.021 0.026 0.022 0.023 0.029)          0.029
Experimental results for these data sets are given in the following sections. The robustness of the different decision machines under complete uncertainty about the prior probabilities is analyzed in Section 6.3.1; the case of partial uncertainty, together with a comparison against the complete-uncertainty approach, is covered in Section 6.3.2.

6.3.1 Classifier Robustness Under Complete Uncertainty

We now study how different neural-based classifiers cope with worst-case situations in the prior probabilities. The maximum deviation from the optimal classifier (see Table 4) is reported for the proposed minMaxDev strategy as well as for alternative approaches: the one based on the assumption of stationary priors (standard) and the common alternative of deriving the classifier from an equally distributed data set (re-balanced). A comparison with the traditional minimax strategy is also provided. Together with the maximum deviation (regret), the deviations for the L class-conditional extreme cases (Ri − cii) are also reported for the minMaxDev classifier in Table 4. The results confirm that this solution is fairly close to the ideal one, in which the deviation does not depend on the priors and the class-conditional deviations all take the same value.

Although the balanced class distribution used to train the re-balanced classifier could be obtained by means of undersampling and/or oversampling, it is simulated by altering the learning rate used in the training phase according to (19), as µi = µ (1/L)/Pi(0), where 1/L represents the simulated probability, equal for all classes.

The results show that the assumption of stationary priors may lead to significant deviations from the optimal decision rule under "unexpected", but rather realistic, prior changes. This deviation may be up to three times larger than that of the robust minimax deviation strategy. Thus, for classification problems like Page-blocks, the maximum deviation from the optimal classifier is 0.62 for the standard classifier, while it reduces to 0.20 for the minMaxDev one. Likewise, for the Insurance Company (COIL) application, the maximum deviation for the standard classifier is 2.78, compared with 0.92 for the minMaxDev model. The remaining databases show the same behavior, as presented in Table 4.

On the other hand, the use of a classifier inferred from a re-balanced data set does not necessarily involve a decrease in the maximum deviation with respect to the standard classifier. In the same way, the traditional minimax classifier does not protect against prior changes in terms of the maximum relative deviation from the minimum risk classifier. However, if a more conservative criterion is adopted and the aim is the minimization of the maximum possible risk (not the minimization of the deviation), the traditional minimax classifier represents the best option. This is shown in Table 5, which reports the maximum risk for the different classifiers. Positive values in this table indicate a cost, while negative values represent a benefit.

Table 5: Classifier performance measured as the maximum risk for several real-world applications.

    Database   Standard   Re-balanced   minMaxDev   minMax
    GCRE         0.70        0.15         0.60       0.00
    ACRE         0.01        0.02         0.86      -0.00
    MCRE         0.05        0.20         0.59       0.00
    COIL         0.76        0.99         0.86       0.02
    DNA          0.34        0.53         0.25       0.13
    PAG          0.62        0.26         0.20       0.10
    DER         -2.10       -1.68        -2.21      -2.38
    PEN          0.061       0.059        0.029      0.029
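Since the risk of a fixed classifier is linear in the priors, both worst-case figures reported in Tables 4 and 5 follow from the estimated class-conditional risks alone; a minimal sketch (the R values below are hypothetical):

    import numpy as np

    def table_metrics(R, c_diag):
        """Given the class-conditional risks R_j of a fixed classifier and the
        diagonal costs c_jj, compute the worst-case regret max_j (R_j - c_jj)
        (Table 4) and the worst-case risk max_j R_j (Table 5); both maxima are
        attained at vertices of the simplex, cf. Theorem 2."""
        return np.max(R - c_diag), np.max(R)

    R = np.array([0.25, 0.40, 0.10])     # hypothetical conditional risk estimates
    c = np.array([0.0, 0.0, 0.0])
    print(table_metrics(R, c))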
For instance, for the Page-blocks application, the minimax classifier assures a maximum risk of 0.10, while the standard, re-balanced and minMaxDev classifiers reach values of 0.62, 0.26 and 0.20, respectively. It can also be noticed that, for the Pen-digits data set, the minimax deviation and minimax approaches attain the same results. The reason is that, for this problem, the Rbasis plane takes the same value (in this case, zero) over the whole probability space.

6.3.2 Classifier Robustness Under Partial Uncertainty

Unlike the previous section, we now consider that partial information about the class priors is available. The aim is to find a classifier that behaves well over a delimited, realistic range of priors, which helps to reduce the maximum deviation from the optimal classifier. This situation can be treated as a constrained minimax regret strategy, where the constraints represent any extra information about the prior probabilities. Experimental results for several situations of partial prior uncertainty are presented in this section. We consider that lower bounds for the prior probabilities are available (see Table 6). In order to obtain the Γ-minMaxDev classifier, the risks at the vertices of the uncertainty domain need to be calculated; from them, the basis risk RΓbasis, over which deviations are measured, is derived.

Table 6: Lower bounds Pil for the prior probabilities defining the uncertainty region Γ for the experimental data sets (per-class lower bounds are likewise specified for PAG, DER and PEN).

    Data Set    P0l    P1l    P2l
    GCRE        0.40   0.25
    ACRE        0.20   0.25
    MCRE        0.20   0.25
    COIL        0.15   0.03
    DNA         0.10   0.10   0.22

Table 7: Classifier performance under partial knowledge of the prior probabilities, measured as the maximum risk deviation for several real-world applications. Class-conditional risk deviations (RΓi − cΓii) are reported for the Γ-minMaxDev classifier.

    Database   Standard   minMaxDev   Γ-minMaxDev
    GCRE         0.24        0.19     (0.10 0.09)
    ACRE         0.03        0.64     (0.03 0.03)
    MCRE         0.22        0.38     (0.13 0.10)
    COIL         2.33        0.77     (0.17 0.11)
    DNA          0.14        0.08     (0.07 0.07 0.06)
    PAG          0.37        0.15     (0.10 0.08 0.08 0.05 0.04)
    DER          0.08        0.05     (0.03 0.03 0.04 0.02 0.05 0.05)
    PEN          0.013       0.007    (0.003 0.001 0.003 0.000 0.001
                                       0.001 0.000 0.003 0.001 0.001)

The maximum deviation from the optimum in Γ is reported for the Γ-minMaxDev classifier together with the standard and minMaxDev classifiers. For instance, the standard classifier for the Page-blocks data set deviates from the optimal classifier, within the defined uncertainty region, by up to 0.37, whereas under complete uncertainty the maximum deviation is 0.62. In the same way, reducing the uncertainty also reduces the maximum deviation of the minMaxDev classifier (trained without considering this partial knowledge): for Γ, this classifier assures a deviation bound of 0.15. However, taking this partial information into account to train a Γ-minMaxDev classifier reduces the worst-case deviation to 0.10. The same behavior can be observed for the other databases in Table 7.

7. Conclusions

This work concerns the design of robust neural-based classifiers when the prior probabilities of the classes are partially or completely unknown, even by the end user.
This problem of uncertainty in the class priors is often ignored in supervised classification, even though it is a widespread situation in real-world applications. As a result, the reliability of the induced classifier can be greatly affected, as shown by the experiments. To tackle this problem, we have proposed a novel minimax deviation strategy whose goal is to minimize the maximum deviation with respect to the optimal classifier. A neural network training algorithm based on learning rate scaling has been developed. The experimental results show that this minimax deviation (minMaxDev) classifier protects against prior changes, while other approaches, such as ignoring the uncertainty or using a balanced training data set, may result in large performance differences with respect to the minimum risk classifier. It has also been shown that the conventional minimax classifier reduces the maximum possible risk, following a conservative strategy, but at the expense of large worst-case differences from the optimal classifier.

Furthermore, a constrained minimax deviation approach (Γ-minMaxDev) has been derived for those situations where the uncertainty is only partial. This may be seen as a general approach with two particular cases: (a) precise knowledge of the prior probabilities and (b) complete uncertainty about the priors. In case (a) the region of uncertainty collapses to a point and we recover the Bayes rule of minimum risk, while in case (b) the pure minimax deviation strategy emerges. While the former may be criticized as quite unrealistic, the latter may be seen as rather pessimistic. The experimental results for the proposed intermediate situation show that the Γ-minMaxDev classifier reduces the maximum deviation from the optimal classifier and performs well over a range of prior probabilities.

Acknowledgments

The authors thank the four referees and the associate editor for their helpful comments. This work was partially supported by project TEC2005-06766-C03-02 of the Spanish Ministry of Education and Science.

References

N. Abe, B. Zadrozny, and J. Langford. An iterative method for multi-class cost-sensitive learning. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 3–11, 2004.

N. M. Adams and D. J. Hand. Comparing classifiers when the misallocation costs are uncertain. Pattern Recognition, 32(7):1139–1147, March 1998.

R. Alaiz-Rodriguez, A. Guerrero-Curieses, and J. Cid-Sueiro. Minimax classifiers based on neural networks. Pattern Recognition, 38(1):29–39, January 2005.

R. Barandela, J. S. Sanchez, V. García, and E. Rangel. Strategies for learning in class imbalance problems. Pattern Recognition, 36(3):849–851, March 2003.

J. O. Berger. Statistical Decision Theory and Bayesian Analysis. Springer, second edition, 1985.

C. L. Blake and C. J. Merz. UCI repository of machine learning databases, 1998. URL http://www.ics.uci.edu/~mlearn/MLRepository.html.

L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Chapman & Hall, NY, 1984.

N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer. SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16:321–357, 2002.

J. Cid-Sueiro and A. R. Figueiras-Vidal. On the structure of strict sense Bayesian cost functions and its applications. IEEE Transactions on Neural Networks, 12(3):445–455, May 2001.

C. Drummond and R. C. Holte.
Explicitly representing expected cost: An alternative to ROC representation. In Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 198–207. ACM Press, 2000.

R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley and Sons, 2001.

Y. C. Eldar and N. Merhav. Minimax approach to robust estimation of random parameters. IEEE Transactions on Signal Processing, 52(7):1931–1946, July 2004.

Y. C. Eldar, A. Ben-Tal, and A. Nemirovski. Linear minimax regret estimation of deterministic parameters with bounded data uncertainties. IEEE Transactions on Signal Processing, 52(8):2177–2188, August 2004.

M. Feder and N. Merhav. Universal composite hypothesis testing: A competitive minimax approach. IEEE Transactions on Information Theory, 48(6):1504–1517, June 2002.

A. Guerrero-Curieses, R. Alaiz-Rodriguez, and J. Cid-Sueiro. A fixed-point algorithm to minimax learning with neural networks. IEEE Transactions on Systems, Man and Cybernetics, Part C, 34(4):383–392, November 2004.

H. A. Güvenir, N. Emeksiz, N. Ikizler, and N. Örmeci. Diagnosis of gastric carcinoma by classification on feature projections. Artificial Intelligence in Medicine, 31(3), 2004.

N. Ikizler. Benefit maximizing classification using feature intervals. Technical Report BU-CE-0208, Bilkent University, Ankara, Turkey, 2002.

N. Japkowicz and S. Stephen. The class imbalance problem: A systematic study. Intelligent Data Analysis Journal, 6(5):429–450, November 2002.

M. G. Kelly, D. J. Hand, and N. M. Adams. The impact of changing populations on classifier performance. In Proceedings of the Fifth International Conference on Knowledge Discovery and Data Mining (SIGKDD), pages 367–371, San Diego, CA, 1999.

H. J. Kim. On a constrained optimal rule for classification with unknown prior individual group membership. Journal of Multivariate Analysis, 59(2):166–186, November 1996.

M. Kubat and S. Matwin. Addressing the curse of imbalanced training sets: One-sided selection. In Proceedings of the 14th International Conference on Machine Learning, pages 179–186. Morgan Kaufmann, 1997.

M. Kubat, R. Holte, and S. Matwin. Machine learning for the detection of oil spills in satellite radar images. Machine Learning, 30(2/3):195–215, 1998.

S. Lawrence, I. Burns, A. D. Back, A. C. Tsoi, and C. L. Giles. Neural network classification and unequal prior class probabilities. In G. Orr, K.-R. Müller, and R. Caruana, editors, Tricks of the Trade, Lecture Notes in Computer Science State-of-the-Art Surveys, pages 299–314. Springer Verlag, 1998.

T. K. Moon and W. C. Stirling. Mathematical Methods and Algorithms for Signal Processing. Prentice Hall, 2000.

A. Niculescu-Mizil and R. Caruana. Predicting good probabilities with supervised learning. In ICML '05: Proceedings of the 22nd International Conference on Machine Learning, pages 625–632, New York, NY, USA, 2005. ACM Press. ISBN 1-59593-180-5.

E. Polak. Optimization: Algorithms and Consistent Approximations. Springer, 1997.

F. Provost. Learning with imbalanced data sets 101. Invited paper for the AAAI 2000 Workshop on Imbalanced Data Sets. AAAI Press, Technical Report WS-00-05, 2000.

F. Provost and T. Fawcett. Robust classification systems for imprecise environments. Machine Learning, 42(3):203–231, March 2001.

M. Saerens, P. Latinne, and C. Decaestecker. Adjusting a classifier for new a priori probabilities: A simple procedure. Neural Computation, 14:21–41, January 2002.

E. Takimoto and M. Warmuth.
The minimax strategy for Gaussian density estimation. In Proceedings of the 13th Annual Conference on Computational Learning Theory, pages 100–106. Morgan Kaufmann, San Francisco, 2000.

K. M. Ting. A study of the effect of class distribution using cost-sensitive learning. In Proceedings of the Fifth International Conference on Discovery Science, pages 98–112. Springer-Verlag, Berlin, 2002.

H. L. VanTrees. Detection, Estimation and Modulation Theory. John Wiley and Sons, 1968.

G. I. Webb and K. M. Ting. On the application of ROC analysis to predict classification performance under varying class distributions. Machine Learning, 58(1):25–32, 2005.

W. Wei, T. K. Leen, and E. Barnard. A fast histogram-based postprocessor that improves posterior probability estimates. Neural Computation, 11(5):1235–1248, July 1999.

B. Zadrozny and C. Elkan. Learning and making decisions when costs and probabilities are both unknown. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 204–213. ACM Press, 2001.

B. Zadrozny and C. Elkan. Transforming classifier scores into accurate multiclass probability estimates. In Eighth International Conference on Knowledge Discovery and Data Mining, 2002.

B. Zadrozny, J. Langford, and N. Abe. Cost-sensitive learning by cost-proportionate example weighting. In Proceedings of the Third IEEE International Conference on Data Mining, pages 435–442, 2003.

Z. H. Zhou and X. Y. Liu. Training cost-sensitive neural networks with methods addressing the class imbalance problem. IEEE Transactions on Knowledge and Data Engineering, 18(1):63–77, January 2006.