jmlr jmlr2010 jmlr2010-7 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Yael Ben-Haim, Elad Tom-Tov
Abstract: We propose a new algorithm for building decision tree classifiers. The algorithm is executed in a distributed environment and is especially designed for classifying large data sets and streaming data. It is empirically shown to be as accurate as a standard decision tree classifier, while being scalable for processing of streaming data on multiple processors. These findings are supported by a rigorous analysis of the algorithm’s accuracy. The essence of the algorithm is to quickly construct histograms at the processors, which compress the data to a fixed amount of memory. A master processor uses this information to find near-optimal split points to terminal tree nodes. Our analysis shows that guarantees on the local accuracy of split points imply guarantees on the overall tree accuracy. Keywords: decision tree classifiers, distributed computing, streaming data, scalability
Reference: text
sentIndex sentText sentNum sentScore
1 We propose a new algorithm for building decision tree classifiers. [sent-5, score-0.326]
2 It is empirically shown to be as accurate as a standard decision tree classifier, while being scalable for processing of streaming data on multiple processors. [sent-7, score-0.465]
3 A master processor uses this information to find near-optimal split points to terminal tree nodes. [sent-10, score-0.459]
4 Our analysis shows that guarantees on the local accuracy of split points imply guarantees on the overall tree accuracy. [sent-11, score-0.306]
5 Keywords: decision tree classifiers, distributed computing, streaming data, scalability [sent-12, score-0.465]
6 1. Introduction. We propose a new algorithm for building decision tree classifiers for classifying both large data sets and streaming data. [sent-13, score-0.511]
7 , 1998), or replacing sorting with approximate representations of the data such as sampling and/or histogram building, for example, BOAT (Gehrke et al. [sent-24, score-0.263]
8 Faced with the challenge of handling large data, a large body of work has been dedicated to parallel decision tree algorithms (Shafer et al. [sent-28, score-0.376]
9 Task parallelism distributes the tree nodes among the processors. [sent-39, score-0.299]
10 Finally, hybrid parallelism combines horizontal or vertical parallelism in the first stages of tree construction with task parallelism towards the end. [sent-40, score-0.413]
11 Like their serial counterparts, parallel decision trees overcome the sorting obstacle by applying pre-sorting, distributed sorting, and approximations. [sent-41, score-0.294]
12 Our proposed algorithm builds the decision tree in a breadth-first mode, using horizontal parallelism. [sent-43, score-0.28]
13 The core of our algorithm is an on-line method for building histograms from streaming data at the processors. [sent-44, score-0.465]
14 The histograms are essentially compressed representations of the data, so that each processor can transmit an approximate description of the data that it sees to a master processor, with low communication complexity. [sent-45, score-0.422]
15 The master processor integrates the information received from all the processors and determines which terminal nodes to split and how. [sent-46, score-0.381]
16 In Section 2 we introduce the SPDT algorithm and the underlying histogram building algorithm. [sent-48, score-0.309]
17 Given training examples with labels in {1, . . . , c}, our goal is to construct a decision tree that will accurately classify test examples. [sent-62, score-0.28]
18 We first present our histogram data structure and the methods related to it. [sent-71, score-0.263]
19 Algorithm 1 (Update Procedure). Input: a histogram h = {(p1, m1), . . . , (pB, mB)} and a point p. [sent-75, score-0.263]
20 Output: a histogram with B bins that represents the set S ∪ {p}, where S is the set represented by h. [sent-79, score-0.433]
21 1: if p = pi for some i then 2: mi = mi + 1 3: else 4: Add the bin (p, 1) to the histogram, resulting in a histogram of B + 1 bins h ∪ {(p, 1)}. [sent-80, score-0.958]
22 5: Sort the bins; denote the sorted points by q1 < · · · < qB+1 and set ki = mπ(i), so that the histogram h ∪ {(p, 1)} is equivalent to (q1, k1), . . . , (qB+1, kB+1). [sent-96, score-0.359]
23 6: Find a point qi that minimizes qi+1 − qi. 7: Replace the bins (qi, ki), (qi+1, ki+1) by the single bin ((qi ki + qi+1 ki+1)/(ki + ki+1), ki + ki+1). [sent-104, score-0.578]
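To make the update procedure concrete, the following Python sketch implements Algorithm 1 for a fixed number of bins. It is our own minimal illustration rather than the authors' implementation; the class and method names are invented here, and ties between equally close bins are broken by taking the left-most pair.

```python
import bisect

class StreamingHistogram:
    """Fixed-size histogram of (point, count) bins; a sketch of Algorithm 1."""

    def __init__(self, max_bins):
        self.max_bins = max_bins
        self.bins = []                      # sorted list of [point, count] pairs

    def update(self, p):
        # Steps 1-2: if p coincides with an existing bin centre, just count it.
        keys = [b[0] for b in self.bins]
        i = bisect.bisect_left(keys, p)
        if i < len(self.bins) and self.bins[i][0] == p:
            self.bins[i][1] += 1
            return
        # Step 4: otherwise insert a new unit bin, temporarily giving B + 1 bins.
        self.bins.insert(i, [p, 1])
        self._compress()

    def _compress(self):
        # Steps 5-7: repeatedly replace the two closest bins by their weighted
        # average until at most max_bins bins remain.
        while len(self.bins) > self.max_bins:
            gaps = [self.bins[j + 1][0] - self.bins[j][0]
                    for j in range(len(self.bins) - 1)]
            j = gaps.index(min(gaps))       # left-most pair with the smallest gap
            (q1, k1), (q2, k2) = self.bins[j], self.bins[j + 1]
            self.bins[j:j + 2] = [[(q1 * k1 + q2 * k2) / (k1 + k2), k1 + k2]]
```

In the streaming setting each processor keeps one such histogram per (leaf, attribute, class) triple and feeds every incoming value through update.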
24 2.1 On-line Histogram Building. A histogram is a set of B pairs (called bins) of real numbers {(p1, m1), . . . , (pB, mB)}. [sent-106, score-0.263]
25 The histogram is a compressed and approximate representation of a set S of real numbers. [sent-110, score-0.263]
26 The histogram data structure supports four procedures, named update, merge, sum, and uniform. [sent-112, score-0.263]
27 The merge procedure (Algorithm 2) creates a histogram that represents the union S1 ∪ S2 of the sets S1 , S2 , whose representing histograms are given. [sent-117, score-0.555]
28 The algorithm is similar to the update algorithm; in the first step, the two histograms form a single histogram with many bins. [sent-118, score-0.528]
29 The process repeats until the histogram has B bins. [sent-120, score-0.263]
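The merge step can be sketched in the same spirit. The function below is an illustrative reimplementation, not the paper's code; it operates on plain lists of (point, count) pairs and reuses the same closest-bins compression as the update sketch above.

```python
def merge_histograms(bins1, bins2, max_bins):
    """Merge two histograms, each a list of (point, count) pairs; Algorithm 2 sketch."""
    # Step 1: pool all bins of h1 and h2 into one sorted histogram.
    bins = sorted([list(b) for b in bins1] + [list(b) for b in bins2])
    # Remaining steps: the same closest-pair compression as in the update
    # procedure, repeated until only max_bins bins remain.
    while len(bins) > max_bins:
        gaps = [bins[j + 1][0] - bins[j][0] for j in range(len(bins) - 1)]
        j = gaps.index(min(gaps))
        (q1, k1), (q2, k2) = bins[j], bins[j + 1]
        bins[j:j + 2] = [[(q1 * k1 + q2 * k2) / (k1 + k2), k1 + k2]]
    return bins
```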
30 The sum procedure estimates the number of points in a given interval [a, b] that belong to a set whose histogram is given. [sent-121, score-0.345]
31 Consequently, the number of points in the interval [pi , pi+1 ] is equal to (mi + mi+1 )/2, which is the area of the trapezoid (pi , 0), (pi , mi ), (pi+1 , mi+1 ), (pi+1 , 0), divided by (pi+1 − pi ). [sent-124, score-0.484]
32 To estimate the number of points in the interval [pi , b], for pi < b < pi+1 , we draw a straight line from (pi , mi ) to (pi+1 , mi+1 ). [sent-125, score-0.41]
33 We set mb = mi + ((mi+1 − mi)/(pi+1 − pi)) (b − pi), so that (b, mb) is on this line. [sent-126, score-0.488]
34 The estimated number of points in the interval [pi , b] is then the area of the trapezoid (pi , 0), (pi , mi ), (b, mb ), (b, 0), divided again by (pi+1 − pi ). [sent-127, score-0.564]
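Combining this interpolation with the bin counts accumulated by the sum procedure described next, the estimate of the number of points to the left of b can be written compactly as follows; this merely restates the calculation described in the text, in LaTeX form.

```latex
\hat{s}(b) \;=\; \sum_{j<i} m_j \;+\; \frac{m_i}{2}
          \;+\; \frac{m_i + m_b}{2}\cdot\frac{b - p_i}{p_{i+1} - p_i},
\qquad
m_b \;=\; m_i + \frac{m_{i+1} - m_i}{p_{i+1} - p_i}\,(b - p_i),
\qquad p_i \le b < p_{i+1}.
```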
35 Output: a histogram with B bins that represents the set S1 ∪ S2, where S1 and S2 are the sets represented by h1 and h2, respectively. [sent-136, score-0.433]
36 1: Combine the bins of h1 and h2 into one sorted sequence; denote the sorted points by q1 < · · · < qB1+B2 and set ki = mπ(i), so that the histogram h1 ∪ h2 is equivalent to (q1, k1), . . . , (qB1+B2, kB1+B2). [sent-159, score-0.359]
37 Find a point qi that minimizes qi+1 − qi; 5: Replace the bins (qi, ki), (qi+1, ki+1) by the single bin ((qi ki + qi+1 ki+1)/(ki + ki+1), ki + ki+1). [sent-167, score-0.578]
38 6: until the histogram has B bins. Algorithm 3 (Sum Procedure). Input: a histogram {(p1, m1), . . . , (pB, mB)} and a point b with p1 ≤ b < pB; the output is an estimate of the number of points in the interval [−∞, b]. [sent-168, score-0.792]
39 1: Find i such that pi ≤ b < pi+1. 2: Set s = ((mi + mb)/2) · ((b − pi)/(pi+1 − pi)), where mb = mi + ((mi+1 − mi)/(pi+1 − pi)) (b − pi); 3-5: for all j < i, set s = s + mj; 6: set s = s + mi/2. [sent-174, score-1.281]
40 Here it is assumed that all or almost all the points in S lie in the interval [p0, pB+1] (p0 and pB+1 can be determined on the fly during the histogram's construction). [sent-175, score-0.273]
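A direct Python transcription of the sum procedure is given below. It is a sketch under the simplifying assumption that queries fall inside [p1, pB]; the paper handles the boundary bins via the auxiliary points p0 and pB+1, which are omitted here.

```python
def histogram_sum(bins, b):
    """Estimated number of points <= b for a histogram given as a sorted list of
    (point, count) pairs; a sketch of Algorithm 3 without the p0/pB+1 boundary bins."""
    points = [p for p, _ in bins]
    counts = [m for _, m in bins]
    if b < points[0]:
        return 0.0
    if b >= points[-1]:
        return float(sum(counts))
    # Step 1: locate the interval [p_i, p_{i+1}) containing b.
    i = max(j for j in range(len(points)) if points[j] <= b)
    p_i, p_next = points[i], points[i + 1]
    m_i, m_next = counts[i], counts[i + 1]
    # Step 2: trapezoid estimate of the mass in [p_i, b].
    m_b = m_i + (m_next - m_i) / (p_next - p_i) * (b - p_i)
    s = (m_i + m_b) / 2.0 * (b - p_i) / (p_next - p_i)
    # Steps 3-6: bins strictly left of p_i count fully, bin i contributes half.
    return s + sum(counts[:i]) + m_i / 2.0
```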
41 The uniform procedure (Algorithm 4) receives as input a histogram {(p1, m1), . . . , (pB, mB)} and an integer B̃, and outputs B̃ − 1 points u1 < · · · < uB̃−1 that split the set represented by the histogram into B̃ parts with an (estimated) equal number of points in each. [sent-176, score-0.263]
42 This is very similar to the calculations performed in the sum procedure. Algorithm 4 (Uniform Procedure). Input: a histogram {(p1, m1), . . . , (pB, mB)} and an integer B̃. [sent-184, score-0.263]
43 1: for j = 1, . . . , B̃ − 1 do 2: Set s = (j/B̃) ∑i=1..B mi. 3: Find i such that sum([−∞, pi]) < s < sum([−∞, pi+1]). [sent-195, score-0.328]
44 4: Set d = s − sum([−∞, pi]). 5: Search for u j such that d = ((mi + mu j)/2) · ((u j − pi)/(pi+1 − pi)), where mu j = mi + ((mi+1 − mi)/(pi+1 − pi)) (u j − pi). [sent-197, score-0.984]
45 Substituting z = (u j − pi)/(pi+1 − pi), we obtain a quadratic equation az2 + bz + c = 0 with a = mi+1 − mi, b = 2mi, and c = −2d. [sent-198, score-0.71]
46 Hence set u j = pi + (pi+1 − pi) z, where z = (−b + √(b2 − 4ac)) / (2a). [sent-199, score-0.382]
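The uniform procedure then reduces to repeated use of sum plus the quadratic solve above. The sketch below is again our own illustration; it assumes histogram_sum from the previous snippet is in scope, and boundary bins are handled only approximately.

```python
import math

def histogram_uniform(bins, b_tilde):
    """Return B~ - 1 points that split the histogram's mass into B~ roughly equal
    parts; a sketch of Algorithm 4 (assumes histogram_sum is in scope)."""
    points = [p for p, _ in bins]
    counts = [m for _, m in bins]
    total = float(sum(counts))
    us = []
    for j in range(1, b_tilde):
        s = j / b_tilde * total                           # step 2: target mass
        # Step 3: find i with sum([-inf, p_i]) < s <= sum([-inf, p_{i+1}]).
        below = [k for k in range(len(points) - 1)
                 if histogram_sum(bins, points[k]) < s]
        i = below[-1] if below else 0
        # Step 4: mass still needed beyond p_i (clamped, since the p0/pB+1
        # boundary bins of the paper are not modelled here).
        d = s - histogram_sum(bins, points[i])
        d = min(max(d, 0.0), (counts[i] + counts[i + 1]) / 2.0)
        # Step 5: solve a z^2 + b z + c = 0 for z = (u_j - p_i) / (p_{i+1} - p_i).
        a = counts[i + 1] - counts[i]
        b = 2.0 * counts[i]
        c = -2.0 * d
        z = -c / b if a == 0 else (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
        us.append(points[i] + (points[i + 1] - points[i]) * z)
    return us
```

The returned points are later used as candidate split locations for the tree.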
47 2.2 Tree Growing Algorithm. We construct a decision tree based on a set of training examples {(x1, y1), . . . , (xn, yn)}. [sent-202, score-0.28]
48 Every internal node in the tree possesses two ordered child nodes and a decision rule of the form x(i) < a, where x(i) is the ith attribute and a is a real number. [sent-215, score-0.45]
49 4: for all unlabeled leaves v in T do 5: if v satisfies the stopping criterion or there are no samples reaching v then 6: Label v with the most frequent label among the samples reaching v 7: else 8: Choose candidate splits for v and estimate ∆ for each of them. [sent-231, score-0.326]
50 Each processor can observe 1/W of the data, but has a view of the complete classification tree built so far. [sent-251, score-0.274]
51 2: for all samples (xk, yk) do 3: if the sample is directed to an unlabeled leaf v then 4: for all attributes i do 5: Update the histogram h(v, i, yk) with the point xk(i), using the update procedure. [sent-258, score-0.439]
52 The processors build histograms describing the data they observed and send them to a master processor. [sent-260, score-0.428]
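The worker side of the tree-growing phase therefore needs little more than routing samples to leaves and updating one histogram per (leaf, attribute, class) triple. The sketch below assumes the StreamingHistogram class from the first snippet; tree.leaf_for, tree.is_unlabeled, and send_to_master are hypothetical placeholders for the tree-navigation and communication layer, which the paper leaves unspecified.

```python
from collections import defaultdict

def worker_pass(samples, tree, num_bins, send_to_master):
    """One pass of a single processor over its 1/W share of the data (sketch of
    the worker side; the helper names are placeholders, not from the paper)."""
    # One compressed histogram per (leaf, attribute index, class label) triple.
    hists = defaultdict(lambda: StreamingHistogram(num_bins))
    for x, y in samples:                    # x: feature vector, y: class label
        v = tree.leaf_for(x)                # route the sample down the current tree
        if tree.is_unlabeled(v):            # only still-growing leaves are updated
            for i, value in enumerate(x):
                hists[(v, i, y)].update(value)
    # The fixed-size histograms are the only statistics sent to the master.
    send_to_master(hists)
```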
53 The number of bins in the histograms is specified through a trade-off between accuracy and computational load: A large number of bins allows a more accurate description, whereas small histograms are beneficial for avoiding time, memory, and communications overloads. [sent-262, score-0.808]
54 For every unlabeled leaf v, attribute i, and class j, the master processor merges the W histograms h(v, i, j) received from the processors. [sent-263, score-0.575]
55 The master node now has exact knowledge of the frequency of each label in each tree node, and hence the ability to calculate the impurity of all unlabeled leaves. [sent-264, score-0.444]
56 Finally, ∆ for each candidate split is estimated using the sum procedure and the histograms h(v, i, j). [sent-279, score-0.35]
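On the master side, the merged class histograms are sufficient to score any candidate split: the sum procedure estimates how many samples of each class fall to the left of a threshold, from which a gain ∆ for an impurity measure such as Gini can be computed. The function below is our own sketch of this step for a single leaf and attribute, reusing merge_histograms, histogram_sum, and histogram_uniform from the earlier snippets; it is not the paper's pseudocode.

```python
def gini(class_counts):
    total = sum(class_counts)
    if total == 0:
        return 0.0
    return 1.0 - sum((c / total) ** 2 for c in class_counts)

def best_split_for_attribute(per_worker_hists, num_bins, num_candidates):
    """Score candidate thresholds for one leaf and one attribute (sketch).
    per_worker_hists: one dict per processor mapping class label -> histogram bins."""
    # Merge the W per-class histograms received from the processors.
    merged = {}
    for worker in per_worker_hists:
        for label, bins in worker.items():
            merged[label] = (merge_histograms(merged[label], bins, num_bins)
                             if label in merged else [list(b) for b in bins])
    # Candidate thresholds: uniform re-binning of the pooled (all-class) histogram.
    pooled = []
    for bins in merged.values():
        pooled = merge_histograms(pooled, bins, num_bins)
    candidates = histogram_uniform(pooled, num_candidates)
    totals = {label: sum(m for _, m in bins) for label, bins in merged.items()}
    n = sum(totals.values())
    if n == 0:
        return 0.0, None
    parent_impurity = gini(list(totals.values()))
    best_gain, best_threshold = 0.0, None
    for a in candidates:
        # The sum procedure estimates the per-class counts left of the threshold.
        left = {label: histogram_sum(bins, a) for label, bins in merged.items()}
        n_left = sum(left.values())
        n_right = n - n_left
        if n_left <= 0 or n_right <= 0:
            continue
        right = [totals[label] - left[label] for label in merged]
        gain = (parent_impurity
                - (n_left / n) * gini(list(left.values()))
                - (n_right / n) * gini(right))
        if gain > best_gain:
            best_gain, best_threshold = gain, a
    return best_gain, best_threshold
```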
57 The only memory allocation is for the histograms being constructed. [sent-294, score-0.271]
58 The number of bins in the histograms is constant; hence, operations on histograms take a constant amount of time. [sent-295, score-0.638]
59 Every processor performs at most N/W histogram updates, where N is the size of the data batch and W is the number of processors. [sent-296, score-0.346]
60 The histograms are communicated to the master processor, which merges them and applies the sum and uniform procedures. [sent-299, score-0.304]
61 2.4 Related Work. In this section we discuss previous work on histogram and quantile approximations, as well as procedures for building decision trees on parallel platforms. [sent-305, score-0.559]
62 Our histogram algorithms tackle two related problems: data compression and quantile approximations. [sent-309, score-0.263]
63 As we show, when the data distribution is highly skewed, the accuracy of the on-line histogram decays. [sent-330, score-0.263]
64 , 1999) algorithms build decision trees for streaming data and work in a distributed environment. [sent-337, score-0.339]
65 The first difference is in the histogram building algorithm. [sent-340, score-0.309]
66 The purpose of the second pass is to locate exactly the best split location for every node, and hence eventually to construct the same tree as the standard algorithm. [sent-346, score-0.289]
67 SS is more similar to SPDT, since both algorithms build histograms with an equal number of points in each bin and take the boundaries of the histograms to be the candidate splits. [sent-347, score-0.619]
68 Since only a constant number of split locations is checked, it is possible that a suboptimal split is chosen, which may cause the entire tree to be different from the one constructed by the standard algorithm. [sent-348, score-0.331]
69 The third difference between our work and previous works is our ability to analytically show that the error rate of the parallel tree approaches the error rate of the serial tree, even though the trees are not identical. [sent-349, score-0.396]
70 3.1 Background. Let n be the number of training samples used to train a decision tree T. [sent-355, score-0.28]
71 For a tree node v, denote by nv the number of training samples that reach v, and by qv, j the probability that a sample reaching v has label j, for j = 1, . . . , c. [sent-356, score-0.471]
72 Equation (2) defines GT = ∑v leaf in T (nv/n) G({qv, j}). For our analysis, we rewrite Algorithm 5 such that only one new leaf is added to the tree in each iteration (see Algorithm 7). [sent-366, score-0.365]
73 The resulting full-grown tree is identical to the tree constructed by Algorithm 5. [sent-367, score-0.382]
74 A tree T is said to perform locally well if every internal node v in it performs locally well. [sent-373, score-0.316]
75 Finally, a decision tree building algorithm performs locally well if for every training set, the output tree performs locally well. [sent-374, score-0.579]
76 By (2), and since the number of leaves in Tt−1 is t, there exists a leaf v in Tt−1 for which (nv/n) G({qv, j}) ≥ GTt−1/t, hence (nv/n) f({qv, j}) ≥ α GTt−1/t. [sent-378, score-0.452]
77 We have G̃Tt−1 − G̃Tt = (nv/n) ∆̃v ≥ (nv/n) ∆v ≥ (nv/n) f({qv, j}) ≥ (α/t) G̃Tt−1. [sent-381, score-0.474]
78 Recall that B is the number of bins in the histograms constructed by the processors, and B̃ is the size of the output of uniform. [sent-390, score-0.404]
79 Let v be a leaf in a decision tree which is under construction, and let x(i) < a be the best split for v according to the standard algorithm. [sent-401, score-0.437]
80 Corollary 3 Assume that the standard decision tree algorithm performs locally well with respect to a function f ({q j }), and that the functions operating on histograms return exact answers. [sent-415, score-0.545]
81 Then for every positive function δ({q j }), the SPDT algorithm performs locally well with respect to f ({q j })− δ({q j }), in the sense that for every training set there exists B such that the tree constructed by the SPDT algorithm with B bins performs locally well. [sent-416, score-0.423]
82 8: end if 9: end for 10: Split an unlabeled leaf v such that nv ∆ is maximal among all unlabeled leaves and all possible candidate splits, where nv is the number of samples reaching v. [sent-422, score-0.673]
83 We first show the accuracy of the histogram building and merging procedures, and later compare the accuracy of SPDT with that of a standard decision tree algorithm. [sent-431, score-0.629]
84 4.1 Histogram Algorithms. We evaluated the accuracy of the histogram building and information extraction algorithms. [sent-433, score-0.309]
85 For each part Sk we built a histogram hk with B = 100 bins, using the update procedure. [sent-436, score-0.294]
86 We repeat the same experiment on the histograms h1,2 , h3,4 , obtained after merging h1 with h2 and h3 with h4 . [sent-447, score-0.274]
87 The figure was obtained by calculating the histograms h1,2,3,4 and points u1, . . . [sent-521, score-0.279]
88 Table 8: Percent error, areas under ROC curves, and tree sizes (number of tree nodes) before and after pruning, with eight processors. Tree sizes before pruning: 5731, 403, 1069, 210, 62, 87, 384, 3690, 4539, 173, 253, 625; tree sizes after pruning: 359, 281, 433, 194, 29, 77, 95, 258, 93, 52, 169, 447. [sent-752, score-0.522]
89 Tables 5 and 6 display the error rates and areas under the ROC curves of the standard decision tree and the SPDT algorithm with 1, 2, 4, and 8 processors. [sent-754, score-0.28]
90 It is also interesting to study the effect of pruning on the error rate and tree size. [sent-758, score-0.261]
91 We also provide a way to analytically compare the error rate of trees constructed by serial and parallel algorithms without comparing similarities between the trees themselves. [sent-787, score-0.27]
92 We demonstrate how the histogram algorithms run on the following input sequence: 23, 19, 10, 16, 36, 2, 9, 32, 30, 45. (3) [sent-790, score-0.263]
93 Suppose that we wish to build a histogram with five bins for the first seven elements. [sent-791, score-0.263]
94 After reading the first five elements, we obtain the histogram (23, 1), (19, 1), (10, 1), (16, 1), (36, 1). [sent-793, score-0.263]
95 The resulting histogram is given in Figure 4(c): (2, 1), (9.5, 2), (17.5, 2), (23, 1), (36, 1). [sent-801, score-0.263]
96 Let us now merge the last histogram with the following one: (32, 1), (30, 1), (45, 1). [sent-804, score-0.321]
97 Figure 5 follows the changes in the histogram during the three iterations of the merge procedure. [sent-805, score-0.321]
98 The final histogram is given in Figure 5(d): (2, 1), (9.5, 2), (19.33, 3), (32.67, 3), (45, 1). [sent-807, score-0.263]
99 The true answer, obtained by looking at the set represented by the histogram (see Equation (3)), is three points: 2, 9, and 10. [sent-836, score-0.263]
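For reference, the sequence of Equation (3) can be pushed through the sketches above (assuming the StreamingHistogram, merge_histograms, and histogram_sum sketches are in scope); with B = 5 the intermediate histogram should match the partial values quoted in the text, although exact numbers depend on how ties between equal gaps are broken.

```python
h = StreamingHistogram(max_bins=5)
for p in [23, 19, 10, 16, 36, 2, 9]:
    h.update(p)
print(h.bins)          # per the worked example: [[2, 1], [9.5, 2], [17.5, 2], [23, 1], [36, 1]]

merged = merge_histograms(h.bins, [[30, 1], [32, 1], [45, 1]], max_bins=5)
print(merged)          # the five merged bins of Figure 5(d)
print(histogram_sum(merged, 15))   # estimate of how many of the ten points are <= 15
```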
100 On the boosting ability of top-down decision tree learning algorithms. [sent-929, score-0.28]
wordName wordTfidf (topN-words)
[('spdt', 0.481), ('histogram', 0.263), ('histograms', 0.234), ('pi', 0.191), ('tree', 0.191), ('streaming', 0.185), ('bins', 0.17), ('gtt', 0.169), ('nv', 0.158), ('yom', 0.148), ('pb', 0.137), ('mi', 0.137), ('treaming', 0.136), ('ecision', 0.127), ('ql', 0.127), ('processors', 0.124), ('ov', 0.114), ('qr', 0.098), ('ki', 0.096), ('parallel', 0.096), ('lgorithm', 0.09), ('decision', 0.089), ('leaf', 0.087), ('workers', 0.086), ('qv', 0.086), ('ocr', 0.084), ('processor', 0.083), ('mb', 0.08), ('parallelism', 0.074), ('trapezoid', 0.074), ('agrawal', 0.074), ('ree', 0.073), ('pruning', 0.07), ('split', 0.07), ('master', 0.07), ('en', 0.066), ('trees', 0.065), ('node', 0.063), ('impurity', 0.062), ('isolet', 0.062), ('bin', 0.06), ('qi', 0.06), ('reaching', 0.059), ('merge', 0.058), ('unlabeled', 0.058), ('pascal', 0.057), ('splits', 0.055), ('ui', 0.055), ('magic', 0.053), ('alsabti', 0.049), ('guha', 0.049), ('spies', 0.049), ('leaves', 0.049), ('building', 0.046), ('candidate', 0.046), ('points', 0.045), ('serial', 0.044), ('attribute', 0.043), ('nursery', 0.042), ('ibm', 0.041), ('merging', 0.04), ('memory', 0.037), ('interval', 0.037), ('clouds', 0.037), ('ett', 0.037), ('lognormal', 0.037), ('manku', 0.037), ('mehta', 0.037), ('pclouds', 0.037), ('rakesh', 0.037), ('shafer', 0.037), ('sreenivas', 0.037), ('tth', 0.037), ('percent', 0.037), ('spam', 0.037), ('splitting', 0.036), ('communication', 0.035), ('pen', 0.035), ('beta', 0.035), ('ub', 0.035), ('uk', 0.034), ('nodes', 0.034), ('uci', 0.032), ('executed', 0.032), ('gini', 0.032), ('sanjay', 0.032), ('sigmod', 0.032), ('srivastava', 0.032), ('aim', 0.031), ('locally', 0.031), ('update', 0.031), ('tt', 0.031), ('face', 0.03), ('child', 0.03), ('haifa', 0.029), ('mansour', 0.029), ('adult', 0.028), ('pass', 0.028), ('detection', 0.027), ('abalone', 0.026)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000004 7 jmlr-2010-A Streaming Parallel Decision Tree Algorithm
2 0.12592031 15 jmlr-2010-Approximate Tree Kernels
Author: Konrad Rieck, Tammo Krueger, Ulf Brefeld, Klaus-Robert Müller
Abstract: Convolution kernels for trees provide simple means for learning with tree-structured data. The computation time of tree kernels is quadratic in the size of the trees, since all pairs of nodes need to be compared. Thus, large parse trees, obtained from HTML documents or structured network data, render convolution kernels inapplicable. In this article, we propose an effective approximation technique for parse tree kernels. The approximate tree kernels (ATKs) limit kernel computation to a sparse subset of relevant subtrees and discard redundant structures, such that training and testing of kernel-based learning methods are significantly accelerated. We devise linear programming approaches for identifying such subsets for supervised and unsupervised learning tasks, respectively. Empirically, the approximate tree kernels attain run-time improvements up to three orders of magnitude while preserving the predictive accuracy of regular tree kernels. For unsupervised tasks, the approximate tree kernels even lead to more accurate predictions by identifying relevant dimensions in feature space. Keywords: tree kernels, approximation, kernel methods, convolution kernels
3 0.10872168 69 jmlr-2010-Lp-Nested Symmetric Distributions
Author: Fabian Sinz, Matthias Bethge
Abstract: In this paper, we introduce a new family of probability densities called L p -nested symmetric distributions. The common property, shared by all members of the new class, is the same functional form ˜ x x ρ(x ) = ρ( f (x )), where f is a nested cascade of L p -norms x p = (∑ |xi | p )1/p . L p -nested symmetric distributions thereby are a special case of ν-spherical distributions for which f is only required to be positively homogeneous of degree one. While both, ν-spherical and L p -nested symmetric distributions, contain many widely used families of probability models such as the Gaussian, spherically and elliptically symmetric distributions, L p -spherically symmetric distributions, and certain types of independent component analysis (ICA) and independent subspace analysis (ISA) models, ν-spherical distributions are usually computationally intractable. Here we demonstrate that L p nested symmetric distributions are still computationally feasible by deriving an analytic expression for its normalization constant, gradients for maximum likelihood estimation, analytic expressions for certain types of marginals, as well as an exact and efficient sampling algorithm. We discuss the tight links of L p -nested symmetric distributions to well known machine learning methods such as ICA, ISA and mixed norm regularizers, and introduce the nested radial factorization algorithm (NRF), which is a form of non-linear ICA that transforms any linearly mixed, non-factorial L p nested symmetric source into statistically independent signals. As a corollary, we also introduce the uniform distribution on the L p -nested unit sphere. Keywords: parametric density model, symmetric distribution, ν-spherical distributions, non-linear independent component analysis, independent subspace analysis, robust Bayesian inference, mixed norm density model, uniform distributions on mixed norm spheres, nested radial factorization
4 0.065236963 74 jmlr-2010-Maximum Relative Margin and Data-Dependent Regularization
Author: Pannagadatta K. Shivaswamy, Tony Jebara
Abstract: Leading classification methods such as support vector machines (SVMs) and their counterparts achieve strong generalization performance by maximizing the margin of separation between data classes. While the maximum margin approach has achieved promising performance, this article identifies its sensitivity to affine transformations of the data and to directions with large data spread. Maximum margin solutions may be misled by the spread of data and preferentially separate classes along large spread directions. This article corrects these weaknesses by measuring margin not in the absolute sense but rather only relative to the spread of data in any projection direction. Maximum relative margin corresponds to a data-dependent regularization on the classification function while maximum absolute margin corresponds to an ℓ2 norm constraint on the classification function. Interestingly, the proposed improvements only require simple extensions to existing maximum margin formulations and preserve the computational efficiency of SVMs. Through the maximization of relative margin, surprising performance gains are achieved on real-world problems such as digit, text classification and on several other benchmark data sets. In addition, risk bounds are derived for the new formulation based on Rademacher averages. Keywords: support vector machines, kernel methods, large margin, Rademacher complexity
5 0.061234072 63 jmlr-2010-Learning Instance-Specific Predictive Models
Author: Shyam Visweswaran, Gregory F. Cooper
Abstract: This paper introduces a Bayesian algorithm for constructing predictive models from data that are optimized to predict a target variable well for a particular instance. This algorithm learns Markov blanket models, carries out Bayesian model averaging over a set of models to predict a target variable of the instance at hand, and employs an instance-specific heuristic to locate a set of suitable models to average over. We call this method the instance-specific Markov blanket (ISMB) algorithm. The ISMB algorithm was evaluated on 21 UCI data sets using five different performance measures and its performance was compared to that of several commonly used predictive algorithms, including nave Bayes, C4.5 decision tree, logistic regression, neural networks, k-Nearest Neighbor, Lazy Bayesian Rules, and AdaBoost. Over all the data sets, the ISMB algorithm performed better on average on all performance measures against all the comparison algorithms. Keywords: instance-specific, Bayesian network, Markov blanket, Bayesian model averaging
6 0.055826019 80 jmlr-2010-On-Line Sequential Bin Packing
7 0.054222211 90 jmlr-2010-Permutation Tests for Studying Classifier Performance
8 0.052766744 11 jmlr-2010-An Investigation of Missing Data Methods for Classification Trees Applied to Binary Response Data
9 0.0498905 53 jmlr-2010-Inducing Tree-Substitution Grammars
10 0.047283482 42 jmlr-2010-Generalized Expectation Criteria for Semi-Supervised Learning with Weakly Labeled Data
11 0.043938294 22 jmlr-2010-Classification Using Geometric Level Sets
12 0.043229274 59 jmlr-2010-Large Scale Online Learning of Image Similarity Through Ranking
13 0.042161733 89 jmlr-2010-PAC-Bayesian Analysis of Co-clustering and Beyond
14 0.040464804 103 jmlr-2010-Sparse Semi-supervised Learning Using Conjugate Functions
15 0.040029597 113 jmlr-2010-Tree Decomposition for Large-Scale SVM Problems
16 0.039226517 33 jmlr-2010-Efficient Heuristics for Discriminative Structure Learning of Bayesian Network Classifiers
17 0.038797922 40 jmlr-2010-Fast and Scalable Local Kernel Machines
18 0.037842289 6 jmlr-2010-A Rotation Test to Verify Latent Structure
19 0.037165672 52 jmlr-2010-Incremental Sigmoid Belief Networks for Grammar Learning
20 0.036098961 75 jmlr-2010-Mean Field Variational Approximation for Continuous-Time Bayesian Networks
topicId topicWeight
[(0, -0.18), (1, 0.04), (2, -0.08), (3, 0.083), (4, 0.024), (5, 0.031), (6, -0.067), (7, 0.012), (8, -0.027), (9, 0.054), (10, 0.101), (11, 0.139), (12, 0.118), (13, 0.042), (14, -0.178), (15, 0.083), (16, -0.022), (17, 0.021), (18, -0.163), (19, -0.124), (20, 0.096), (21, 0.01), (22, 0.087), (23, -0.172), (24, 0.239), (25, 0.072), (26, 0.217), (27, 0.006), (28, -0.177), (29, -0.045), (30, -0.108), (31, -0.14), (32, -0.079), (33, -0.071), (34, -0.197), (35, -0.084), (36, -0.042), (37, 0.017), (38, -0.114), (39, -0.079), (40, 0.036), (41, -0.1), (42, -0.093), (43, 0.09), (44, -0.084), (45, 0.056), (46, -0.077), (47, 0.054), (48, -0.018), (49, -0.109)]
simIndex simValue paperId paperTitle
same-paper 1 0.94438022 7 jmlr-2010-A Streaming Parallel Decision Tree Algorithm
2 0.66667527 69 jmlr-2010-Lp-Nested Symmetric Distributions
3 0.53435212 15 jmlr-2010-Approximate Tree Kernels
4 0.38132977 63 jmlr-2010-Learning Instance-Specific Predictive Models
5 0.28497881 59 jmlr-2010-Large Scale Online Learning of Image Similarity Through Ranking
Author: Gal Chechik, Varun Sharma, Uri Shalit, Samy Bengio
Abstract: Learning a measure of similarity between pairs of objects is an important generic problem in machine learning. It is particularly useful in large scale applications like searching for an image that is similar to a given image or finding videos that are relevant to a given video. In these tasks, users look for objects that are not only visually similar but also semantically related to a given object. Unfortunately, the approaches that exist today for learning such semantic similarity do not scale to large data sets. This is both because typically their CPU and storage requirements grow quadratically with the sample size, and because many methods impose complex positivity constraints on the space of learned similarity functions. The current paper presents OASIS, an Online Algorithm for Scalable Image Similarity learning that learns a bilinear similarity measure over sparse representations. OASIS is an online dual approach using the passive-aggressive family of learning algorithms with a large margin criterion and an efficient hinge loss cost. Our experiments show that OASIS is both fast and accurate at a wide range of scales: for a data set with thousands of images, it achieves better results than existing state-of-the-art methods, while being an order of magnitude faster. For large, web scale, data sets, OASIS can be trained on more than two million images from 150K text queries within 3 days on a single CPU. On this large scale data set, human evaluations showed that 35% of the ten nearest neighbors of a given test image, as found by OASIS, were semantically relevant to that image. This suggests that query independent similarity could be accurately learned even for large scale data sets that could not be handled before. Keywords: large scale, metric learning, image similarity, online learning ∗. Varun Sharma and Uri Shalit contributed equally to this work. †. Also at ICNC, The Hebrew University of Jerusalem, 91904, Israel. c 2010 Gal Chechik, Varun Sharma, Uri Shalit
6 0.28416815 11 jmlr-2010-An Investigation of Missing Data Methods for Classification Trees Applied to Binary Response Data
7 0.28232315 113 jmlr-2010-Tree Decomposition for Large-Scale SVM Problems
8 0.26992673 74 jmlr-2010-Maximum Relative Margin and Data-Dependent Regularization
9 0.26686266 33 jmlr-2010-Efficient Heuristics for Discriminative Structure Learning of Bayesian Network Classifiers
10 0.25154743 53 jmlr-2010-Inducing Tree-Substitution Grammars
11 0.22469422 90 jmlr-2010-Permutation Tests for Studying Classifier Performance
12 0.21626036 80 jmlr-2010-On-Line Sequential Bin Packing
13 0.20546558 22 jmlr-2010-Classification Using Geometric Level Sets
14 0.19850293 54 jmlr-2010-Information Retrieval Perspective to Nonlinear Dimensionality Reduction for Data Visualization
15 0.19825549 89 jmlr-2010-PAC-Bayesian Analysis of Co-clustering and Beyond
16 0.19514222 3 jmlr-2010-A Fast Hybrid Algorithm for Large-Scalel1-Regularized Logistic Regression
17 0.19151568 52 jmlr-2010-Incremental Sigmoid Belief Networks for Grammar Learning
18 0.18079662 6 jmlr-2010-A Rotation Test to Verify Latent Structure
19 0.17917001 103 jmlr-2010-Sparse Semi-supervised Learning Using Conjugate Functions
20 0.17611383 77 jmlr-2010-Model-based Boosting 2.0
topicId topicWeight
[(3, 0.01), (4, 0.538), (8, 0.018), (21, 0.014), (32, 0.056), (33, 0.014), (36, 0.027), (37, 0.039), (75, 0.106), (85, 0.059)]
simIndex simValue paperId paperTitle
same-paper 1 0.79165065 7 jmlr-2010-A Streaming Parallel Decision Tree Algorithm
2 0.72758025 62 jmlr-2010-Learning Gradients: Predictive Models that Infer Geometry and Statistical Dependence
Author: Qiang Wu, Justin Guinney, Mauro Maggioni, Sayan Mukherjee
Abstract: The problems of dimension reduction and inference of statistical dependence are addressed by the modeling framework of learning gradients. The models we propose hold for Euclidean spaces as well as the manifold setting. The central quantity in this approach is an estimate of the gradient of the regression or classification function. Two quadratic forms are constructed from gradient estimates: the gradient outer product and gradient based diffusion maps. The first quantity can be used for supervised dimension reduction on manifolds as well as inference of a graphical model encoding dependencies that are predictive of a response variable. The second quantity can be used for nonlinear projections that incorporate both the geometric structure of the manifold as well as variation of the response variable on the manifold. We relate the gradient outer product to standard statistical quantities such as covariances and provide a simple and precise comparison of a variety of supervised dimensionality reduction methods. We provide rates of convergence for both inference of informative directions as well as inference of a graphical model of variable dependencies. Keywords: gradient estimates, manifold learning, graphical models, inverse regression, dimension reduction, gradient diffusion maps
3 0.3694905 15 jmlr-2010-Approximate Tree Kernels
4 0.36892352 69 jmlr-2010-Lp-Nested Symmetric Distributions
5 0.32942361 11 jmlr-2010-An Investigation of Missing Data Methods for Classification Trees Applied to Binary Response Data
Author: Yufeng Ding, Jeffrey S. Simonoff
Abstract: There are many different methods used by classification tree algorithms when missing data occur in the predictors, but few studies have been done comparing their appropriateness and performance. This paper provides both analytic and Monte Carlo evidence regarding the effectiveness of six popular missing data methods for classification trees applied to binary response data. We show that in the context of classification trees, the relationship between the missingness and the dependent variable, as well as the existence or non-existence of missing values in the testing data, are the most helpful criteria to distinguish different missing data methods. In particular, separate class is clearly the best method to use when the testing set has missing values and the missingness is related to the response variable. A real data set related to modeling bankruptcy of a firm is then analyzed. The paper concludes with discussion of adaptation of these results to logistic regression, and other potential generalizations. Keywords: classification tree, missing data, separate class, RPART, C4.5, CART 1. Classification Trees and the Problem of Missing Data Classification trees are a supervised learning method appropriate for data where the response variable is categorical. The simple methodology behind classification trees is to recursively split data based upon the predictors that best distinguish the response variable classes. There are, of course, many subtleties, such as the choice of criterion function used to pick the best split variable, stopping rules, pruning rules, and so on. In this study, we mostly rely on the built-in features of the tree algorithms C 4.5 and RPART to implement tree methods. Details about classification trees can be found in various references, for example, Breiman, Friedman, Olshen, and Stone (1998) and Quinlan (1993). Classification trees are computationally efficient, can handle mixed variables (continuous and discrete) easily and the rules generated by them are relatively easy to interpret and understand. Classification trees are highly flexible, and naturally uncover interaction effects among the independent variables. Classification trees are also popular because they can easily be incorporated into learning ensembles or larger learning systems as base learners. c 2010 Yufeng Ding and Jeffrey S. Simonoff. D ING AND S IMONOFF Like most statistics or machine learning methods, “base form” classification trees are designed assuming that data are complete. That is, all of the values in the data matrix, with the rows being the observations (instances) and the columns being the variables (attributes), are observed. However, missing data (meaning that some of the values in the data matrix are not observed) is a very common problem, and for this reason classification trees have to, and do, have ways of dealing with missing data in the predictors. (In supervised learning, an observation with missing response value has no information about the underlying relationship, and must be omitted. There is, however, research in the field of semi-supervised learning methods that tries to handle the situation where the response value is missing, for example, Wang and Shen 2007.) Although there are many different ways of dealing with missing data in classification trees, there are relatively few studies in the literature about the appropriateness and performance of these missing data methods. 
Moreover, most of these studies limited their coverage to the simplest missing data scenario, namely, missing completely at random (MCAR), while our study shows that the missing data generating process is one of the two crucial criteria in determining the best missing data method. The other crucial criterion is whether or not the testing set is complete. The following two subsections describe in more detail these two criteria. 1.1 Different Types of Missing Data Generating Process Data originate according to the data generating process (DGP) under which the data matrix is “generated” according to the probabilistic relationships between the variables. We can think of the missingness itself as a random variable, realized as the matrix of the missingness indicator Im . Im is generated according to the missingness generating process (MGP), which governs the relationship between Im and the variables in the data matrix. Im has the same dimension as the original data matrix, with each entry equal to 0 if the corresponding original data value is observed and 1 if the corresponding original data value is not observed (missing). Note that an Im value not only can be related to its corresponding original data value, but can also be related to other variables of the same observation. Depending on the relationship between Im and the original data, Rubin (1976) and Little and Rubin (2002) categorize the missingness into three different types. If Im is dependent upon the missing values (the unobserved original data values), then the missingness pattern is called “not missing at random” (NMAR). Otherwise, the missingness pattern is called “missing at random” (MAR). As a special case of MAR, when the missingness is also not dependent on the observed values (that is, is independent of all data values), the missingness pattern is called “missing completely at random” (MCAR). The definition of MCAR is rather restrictive, which makes MCAR unlikely in reality. For example, in the bankruptcy data discussed later in the paper, there is evidence that after the Enron scandal in 2001, when both government and the public became more wary about financial reporting misconduct, missingness of values in financial statement data was related to the well-being of the company, and thus other values in the data. This makes intuitive sense because when scrutinized, a company is more likely to have trouble reporting their financial data if there were problems. Thus, focusing on the MCAR case is a major limitation that will be avoided in this paper. In fact, this paper shows that the categorization of MCAR, MAR and NMAR itself is not appropriate for the missing data problem in classification trees, as well as in another supervised learning context (at least with respect to prediction), although it has been shown to be helpful with likelihood-based or Bayesian analysis. 132 A N I NVESTIGATION OF M ISSING DATA M ETHODS FOR C LASSIFICATION T REES 1 2 3 4 5 6 7 8 Missingness is related to Missing Observed Response values Predictors Variable No No No No Yes No Yes No No Yes Yes No No No Yes No Yes Yes Yes No Yes Yes Yes Yes LR MCAR MAR NMAR NMAR MAR MAR NMAR NMAR Three-Letter −−− −X− M−− M X− −−Y −X Y M−Y MXY Table 1: Eight missingness patterns investigated in this study and their correspondence to the categorization MCAR, MAR and NMAR defined by Rubin (1976) and Little and Rubin (2002) (the LR column). The column Three-Letter shows the notation that is used in this paper. 
In this paper, we investigate eight different missingness patterns, depending on the relationship between the missingness and three types of variables, the observed predictors, the unobserved predictors (the missing values) and the response variable. The relationship is conditional upon other factors, for example, missingness is not dependent upon the missing values means that the missingness is conditionally independent of the missing values given the observed predictors and/or the response variable. Table 1 shows their correspondence with the MCAR/MAR/NMAR categorization as well as the three-letter notation we use in this paper. The three letters indicate if the missingness is conditionally dependent on the missing values (M), on other predictors (X) and on the response variable (Y), respectively. As will be shown, the dependence of the missingness on the response variable (the letter Y) is the one that affects the choice of best missingness data method. Later in the paper, some derived notations are also used. For example, ∗X∗ means the union of −X−, −XY, MX− and MXY, that is, the missingness is dependent upon the observed predictors, and it may or may not be related to the missing values and/or the response variable. 1.2 Scenarios Where the Testing Data May or May Not Be Complete There are essentially two stages of applying classification trees, the training phase where the historical data (training set) are used to construct the tree, and the testing phase where the tree is put into use and applied to testing data. Similar to most other studies, this study deals with the scenario where missing data occur in the training set, but the testing set may or may not have missing values. One basic assumption is, of course, that the DGP (as well as MGP if the testing set also contains missing values) is the same for both the training set and the testing set. While it would probably typically be the case that the testing data would also have missing values (generated by the same process that generated them in the training set), it should be noted that in certain circumstances a testing set without missing values could be expected. For example, consider a problem involving prediction of bankruptcy from various financial ratios. If the training set comes from a publicly available database, there could be missing values corresponding to information that was not supplied by various companies. If the goal is to use these publicly available data to try 133 D ING AND S IMONOFF to predict bankruptcy from ratios from one’s own company, it would be expected that all of the necessary information for prediction would be available, and thus the test set would be complete. This study shows that when the missingness is dependent upon the response variable and the test set has missing values, separate class is the best missing data method to use. In other situations, the choice is not as clear, but some insights on effective choices are provided. The rest of paper provides detailed theoretical and empirical analysis and is organized as follows. Section 2 gives a brief introduction to the previous research on this topic. This is followed by discussion of the design of this study and findings in Section 3. The generality of the results are then tested on real data sets in Section 4. A brief extension of the results to logistic regression is presented in Section 5. We conclude with discussion of these results and future work in Section 6. 2. 
Previous Research There have been several studies of missing data and classification trees in the literature. Liu, White, Thompson, and Bramer (1997) gave a general description of the problem, but did not discuss solutions. Saar-Tsechansky and Provost (2007) discussed various missing data methods in classification trees and proposed a cost-sensitive approach to the missing data problem for the scenario when missing data occur only at the testing phase, which is different from the problem studied here (where missing values occur in the training phase). Kim and Yates (2003) conducted a simulation study of seven popular missing value methods but did not find any dominant method. Feelders (1999) compared the performance of surrogate split and imputation and found the imputation methods to work better. (These methods, and the methods described below, are described more fully in the next section.) Batista and Monard (2003) compared four different missing data methods, and found that 10 nearest neighbor imputation outperformed other methods in most cases. In the context of cost sensitive classification trees, Zhang, Qin, Ling, and Sheng (2005) studied four different missing data methods based on their performances on five data sets with artificially generated random missing values. They concluded that the internal node method (the decision rules for the observations with the next split variable missing will be made at the (internal) node) is better than the other three methods examined. Fujikawa and Ho (2002) compared several imputation methods based on preliminary clustering algorithms to probabilistic split on simulations based on several real data sets and found comparable performance. A weakness of all of the above studies is that they focused only on the restrictive MCAR situation. Other studies examined both MAR and NMAR missingness. Kalousis and Hilario (2000) used simulations from real data sets to examine the properties of seven algorithms: two rule inducers, a nearest neighbor method, two decision tree inducers, a naive Bayes inducer, and linear discriminant analysis. They found that the naive Bayes method was by far most resilient to missing data, in the sense that its properties changed the least when the missing rate was increased (note that this resilience is related to, but not the same as, its overall predictive performance). They also found that the deleterious effects of missing data are more serious if a given amount of missing values are spread over several variables, rather than concentrated in a few. Twala (2009) used computer simulations based on real data sets to compare the properties of different missing value methods, including using complete cases, single imputation of missing values, likelihood-based multiple imputation (where missing values are imputed several times, and the results of fitting trees to the different generated data sets are combined), probabilistic split, and surrogate split. He studied MAR, MCAR, and NMAR missingness generating processes, although 134 A N I NVESTIGATION OF M ISSING DATA M ETHODS FOR C LASSIFICATION T REES dependence of missingness on the response variable was not examined. Multiple imputation was found to be most effective, with probabilistic split also performing reasonably well, although little difference was found between methods when the proportion of missing values was low. 
As would be expected, MCAR missingness caused the least problems for methods, while NMAR missingness caused the most, and as was also found by Kalousis and Hilario (2000), missingness spread over several predictors is more serious than if it is concentrated in only one. Twala, Jones, and Hand (2008) proposed a method closely related to creating a separate class for missing values, and found that its performance was competitive with that of likelihood-based multiple imputation. The study described in the next section extends these previous studies in several ways. First, theoretical analyses are provided for simple situations that help explain observed empirical performance. We then extend these analyses to more complex situations and data sets (including large ones) using Monte Carlo simulations based on generated and real data sets. The importance of whether missing is dependent on the response variable, which has been ignored in previous studies on classification trees yet turns out to be of crucial importance, is a fundamental aspect of these results. The generality of the conclusions is finally tested using real data sets and application to logistic regression. 3. The Effectiveness of Missing Data Methods The recursive nature of classification trees makes them almost impossible to analyze analytically in the general case beyond 2×2 tables (where there is only one binary predictor and a binary response variable). On the other hand, trees built on 2×2 tables, which can be thought of as “stumps” with a binary split, can be considered as degenerate classification trees, with a classification tree being built (recursively) as a hierarchy of these degenerate trees. Therefore, analyzing 2×2 tables can result in important insights for more general cases. We then build on the 2×2 analyses using Monte Carlo simulation, where factors that might have impact on performance are incrementally added, in order to see the effect of each factor. The factors include variation in both the data generating process (DGP) and the missing data generating process (MGP), the number and type of predictors in the data, the number of predictors that contain missing values, and the number of observations with missing data. This study examines six different missing data methods: probabilistic split, complete case method, grand mode/mean imputation, separate class, surrogate split, and complete variable method. Probabilistic split is the default method of C 4.5 (Quinlan, 1993). In the training phase, observations with values observed on the split variable are split first. The ones with missing values are then put into each of the child nodes with a weight given as the proportion of non-missing instances in the child. In the testing phase, an observation with a missing value on a split variable will be associated with all of the children using probabilities, which are the weights recorded in the training phase. The complete case method deletes all observations that contain missing values in any of the predictors in the training phase. If the testing set also contains missing values, the complete case method is not applicable and thus some other method has to be used. In the simulations, we use C 4.5 to realize the complete case method. In the training phase, we manually delete all of the observations with missing values and then run C 4.5 on the pre-processed remaining complete data. In the testing phase, the default missing data method, probabilistic split, is used. 
Grand mode imputation imputes the missing value with the grand mode of that variable if it is categorical; grand mean imputation is used if the variable is continuous.

The separate class method treats the missing values as a new class (category) of the predictor. This is trivial to apply when the original variable is categorical, where we can create a new category called "missing". To apply the separate class method to a numerical variable, we give all of the missing values a single extremely large value that is clearly outside the original data range. This creates the needed separation between the non-missing values and the missing values, implying that any split involving the variable with missing values will put all of the missing observations into the same branch of the tree.

Surrogate split is the default method of CART (realized using RPART in this study; Breiman et al., 1998; Therneau and Atkinson, 1997). It finds and uses a surrogate variable (or several surrogates in order) within a node if the variable for the next split contains missing values. In the testing phase, if a split variable contains missing values, the surrogate variables found in the training phase are used instead.

The complete variable method simply deletes all variables that contain missing values.

Before presenting results, we define a performance measure that is appropriate for measuring the impact of missing data. Accuracy, calculated as the percentage of correctly classified observations, is often used to measure the performance of classification trees. Since it is affected both by the data structure (some data are intrinsically easier to classify than others) and by the missing data, it is not necessarily a good summary of the impact of missing data. In this study, we define a measure called relative accuracy (RelAcc), calculated as

    RelAcc = (Accuracy with missing data) / (Accuracy with original full data).

This can be thought of as a standardized accuracy, as RelAcc measures the accuracy achievable with missing values relative to that achievable with the original full data.

3.1 Analytical Results

In the following consistency theorems, the data are assumed to reflect the DGP exactly, and therefore the training set and the testing set are exactly the same. Several of the theorems are for 2×2 tables, and in those cases stopping and pruning rules are not relevant, since the only question is whether or not the one possible split is made. The proofs thus depend on the underlying parameters of the DGP and MGP, rather than on data randomly generated from them. It is important to recognize that these results are only designed to be illustrative of the results found in the much more realistic simulation analyses to follow. Proofs of all of the results are given in the appendix.

Before presenting the theorems, we define some terms to avoid possible confusion. First, a partition of the data refers to the grouping of the observations defined by the classification tree's splitting rules. Note that it is possible for two different trees on the same data set to define the same partition. For example, suppose that there are only two binary explanatory variables, X1 and X2, and one tree splits on X1 then X2 while another tree splits on X2 then X1. In this case, the two trees have different structures, but they can lead to the same partition of the data.
Secondly, the set of rules defined by a classification tree consists of the rules defined by the tree leaves on each of the groups (the partition) of the data.

3.1.1 When the Test Set is Fully Observed with No Missing Values

We start with Theorems 1 to 3, which apply to the complete case method. Theorems 4 and 5 apply to probabilistic split and mode imputation, respectively.

Theorem 1 (Complete Case Method) If the MGP is conditionally independent of Y given X, then the tree built on the data containing missing values using the complete case method gives the same set of rules as the tree built on the original full data set.

Theorem 2 (Complete Case Method) If the partition of the data defined by the tree built on the incomplete data is unchanged from the one defined by the tree built on the original full data, the loss in accuracy when the testing set is complete is bounded above by P_M, where P_M is the missing rate, defined as the percentage of observations that contain missing values.

Theorem 3 (Complete Case Method) If the partition of the data defined by the tree built on the incomplete data is unchanged from the one defined by the tree built on the original full data, the relative accuracy when the testing set is complete is bounded below by

    RelAcc_min = (1 − P_M) / (1 + P_M),

where P_M is the missing rate.

Notice that the tree structure itself could change as long as it gives the same final partition of the data. Results similar to Theorem 1 exist for regression analyses: when the missingness is independent of the response variable, using only the complete observations yields unbiased parameter estimators (Allison, 2001). This implies that, in theory, when the missingness is independent of the response variable, using complete cases only is not a bad approach on average. In practice, however, as will be seen later, deleting observations with missing values can cause a severe loss of information, and the method therefore generally performs poorly.

Theorem 4 (Probabilistic Split) In a 2×2 data table, if the MGP is independent of either Y or X, given the other variable, then the following results hold for probabilistic split.

1. If X is not informative in terms of classification, that is, the majority classes of Y for the different X values are the same, then probabilistic split will give the same rule as the one that would be obtained from the original full data.
2. If probabilistic split shows that X is informative in terms of classification, that is, the majority classes of Y for the different X values are different, then it finds the same rule as the one that would be obtained from the original full data.
3. The absolute accuracy when the testing set is complete is bounded below by 0.5. Since the original full data accuracy is at most 1, the relative accuracy is also bounded below by 0.5.

Theorem 5 (Mode Imputation) If the MGP is independent of Y, given X, then the same results hold for mode imputation as for probabilistic split under the conditions of Theorem 4.

Theorems 1, 2 and 3 (for the complete case method) hold for general data sets. Theorems 4 and 5 are for 2×2 tables only, but they imply that probabilistic split and mode imputation have advantages over the complete case method, which can have very poor performance (as will be shown in Figure 1).
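As a brief numerical illustration of the Theorem 3 bound (assuming, as in the theorem, an unchanged partition and a complete testing set):

    RelAcc_min = (1 − 0.1) / (1 + 0.1) ≈ 0.82   for a missing rate of P_M = 0.1,
    RelAcc_min = (1 − 0.4) / (1 + 0.4) ≈ 0.43   for a missing rate of P_M = 0.4.

Even a moderate missing rate therefore allows a substantial worst-case loss in relative accuracy, and the bound approaches zero as P_M approaches one.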
Moreover, with 2×2 tables the complete variable method always achieves accuracy of at least 0.5, since by ignoring the only predictor we classify all of the data to the overall majority class; this gives at least 0.5 accuracy, and thus at least 0.5 relative accuracy. Together with Theorems 4 and 5, as well as the evidence to be shown in Figure 1, this is an indication that classification trees tend not to be hurt much by missing values, since trees built on 2×2 tables can be considered as degenerate classification trees and more complex trees are composites of these degenerate trees. The performance of a classification tree is the average (weighted by the number of observations at each leaf) over the degenerate trees at the leaf level, and, as will be seen later in the simulations, can often be quite good.

Surrogate split is not applicable to 2×2 tables because there are no other predictors. For 2×2 table problems with a complete testing set, separate class is essentially the same as the complete case method: as long as the data are split according to the predictor (and it is very likely that this will be so), the separate class method builds separate rules for the observations with missing values, and when the testing set is complete, the rules used in the testing phase are exactly the ones built on the complete observations. When there is more than one predictor, however, the creation of the "separate class" saves the observations with missing values from being deleted and affects the tree building process. It will very likely lead to a change in the tree structure. This, as will be seen, tends to have a favorable impact on predictive accuracy.

Figure 1 illustrates the lower bound calculated in Theorem 3. The illustration is achieved by Monte Carlo simulation of 2×2 tables. A 2×2 table with missing values has only eight cells, that is, eight different value combinations of the binary variables X, Y and M, where M is the missingness indicator such that M = 0 if X is observed and M = 1 if X is missing. There is one constraint, that the sum of the eight cell probabilities must equal one; therefore, the table is determined by seven parameters. In the simulation, for each 2×2 table, the following seven parameters (probabilities) are randomly and independently generated from a uniform distribution on (0, 1): (1) P(X = 1), (2) P(Y = 1 | X = 0), (3) P(Y = 1 | X = 1), (4) P(M = 1 | X = 0, Y = 0), (5) P(M = 1 | X = 0, Y = 1), (6) P(M = 1 | X = 1, Y = 0), and (7) P(M = 1 | X = 1, Y = 1). Here we assume the data tables reflect the true underlying DGP and MGP without random variation, and thus the expected performance of the classification trees can be derived from the parameters. In this simulation, sets of the seven parameters are repeatedly generated (but no data sets are generated using these parameters), and the relative accuracy of each missing data method on each parameter set is determined. One million sets of parameters are generated for each missingness pattern.

In Figure 1, the plot on the left is a scatter plot of relative accuracy versus missing rate for each Monte Carlo replication for the complete case method when the MGP depends on the response variable. The lower bound is clearly shown. We can see that when the missing rate is high, the lower bound can drop to almost zero (implying that not only relative accuracy, but accuracy itself, can approach zero). This perhaps somewhat counterintuitive result can occur in the following way.
Imagine the extreme case where almost all cases are positive and (virtually) all of the positive cases have a missing predictor value in the training phase; in this situation the resulting rule will be to classify everything as negative. When this rule is applied to a complete testing set with almost all positive cases, the accuracy will be almost zero.

The graph on the right of Figure 1 is the quantile version of the scatter plot on the left. The lines shown in the quantile plot are the theoretical lower bound and the 10th, 20th, 30th, 40th and 50th percentile lines, from lowest to highest. Higher percentile lines coincide with the 50th percentile (median) line, which is already the horizontal line at RelAcc = 1. The percentile lines are constructed by connecting the corresponding percentiles in a moving window of data from left to right.

Figure 1: Scatter plot and the corresponding quantile plot of the complete testing set RelAcc versus the missing rate for the complete case method when the MGP is dependent on the response variable. Recall that "∗∗Y" means the MGP is conditionally dependent on the response variable but no restriction is assumed on the relationship between the MGP and the other variables, missing or observed. Each point in the scatter plot represents the result on one of the simulated data tables.

Due to space limitations, we do not show quantile plots of the other missing data methods and/or under different scenarios, but in all of the other plots the quantile lines are all higher (that is, the quantile plot in Figure 1 shows the worst case scenario). The plots show that the missing data problem, when the missing rate is not too high, may not be as serious as we might have thought. For example, when 40% of the observations contain missing data, 80% of the time the expected relative accuracy is higher than 90%, and 90% of the time the expected relative accuracy is higher than 80%.

3.1.2 When the Test Set Has Missing Values

Theorem 6 (Separate Class) In 2×2 data tables, if missing values occur in both the training set and the testing set, then the separate class method achieves the best possible performance.

In the Monte Carlo simulation of the 2×2 tables, the head-to-head comparison between the separate class method and the other missing data methods confirmed the uniform dominance of separate class when the test set also contains missing values, regardless of whether the MGP is dependent on the response variable or not. However, as shown in Figure 2, when the MGP is independent of the response variable, separate class never performs better than the performance on the original full data, indicated by relative accuracies no greater than one. This means that separate class is not gaining from the missingness. On the other hand, when the MGP is dependent on the response variable, a fairly large percentage of the time the relative accuracy of the separate class method is larger than one (the quantiles shown are from the 10th to the 90th percentile, in increments of 10 percent). This means that trees based on the separate class method can improve on predictive performance compared to the situation where there are no missing data. Our simulations show that other methods can also gain from the missingness when the MGP is dependent on the response variable, but not as frequently as the separate class method, and the gains are in general not as large.
We follow up on this behavior in more detail in the next section, but the simple explanation is that since the missingness depends on the response variable, the tree algorithm can use the presence of missing data in an observation to improve prediction of the response for that observation. Duda, Hart, and Stork (2001) and Hand (1997) briefly mentioned this possibility in the classification context, but did not give any supporting evidence. Theorem 6 makes a fairly strong statement in the simple situation, and it will be seen to be strongly indicative of the results in more general cases.

Figure 2: Scatter plot of the separate class method with incomplete testing set. Each point in the scatter plot represents the result on one of the simulated data tables.

3.2 Monte Carlo Simulations of General Data Sets

In this section, extensions of the simulations in the previous section are summarized.

3.2.1 An Overview of the Simulation

The following simulations are carried out.

1. 2×2 tables, missing values occur in the only predictor.
2. Up to seven binary predictors, missing values occur in only one predictor.
3. Eight binary predictors, missing values occur in two of them.
4. Twelve binary predictors, missing values occur in six of them.
5. Eight continuous predictors, missing values occur in two of them.
6. Twelve continuous predictors, missing values occur in six of them.

Two different scenarios of each of the last four simulations listed above were performed. In the first scenario, the six complete predictors are all independent of the missing ones, while in the second scenario three of the six complete predictors are related to the missing ones. Therefore, ten simulations were done in total.

In each of the simulations, 5000 sets of DGPs are simulated in order to cover a wide range of differently structured data sets, so that a generalizable inference from the simulation is possible. For each DGP, eight different MGPs are simulated to cover different types of missingness patterns. For each data set, the variables are generated sequentially, in the order of the predictors, the response, and the missingness. The probabilities associated with the binary response variable and the binary missingness variable are generated using conditional logit functions. The predictors may or may not be correlated with each other. Details about the simulation implementation can be found in Ding and Simonoff (2008). For each DGP/MGP pair, several different sample sizes are simulated to detect any possible learning curve effect, since it was shown by Perlich, Provost, and Simonoff (2003) that sample size is an important factor in the effectiveness of classification trees.
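To make the generation process concrete, the following is a minimal sketch of one DGP/MGP of the kind described above, with hypothetical coefficient values; it only illustrates the conditional-logit construction and is not the exact generating process used in the study (see Ding and Simonoff, 2008, for those details).

    import numpy as np

    rng = np.random.default_rng(0)

    def logistic(z):
        return 1.0 / (1.0 + np.exp(-z))

    def simulate(n, beta_y, beta_m, gamma_m=0.0):
        """Generate two continuous predictors, a binary response via a conditional
        logit, and a missingness indicator for x1 whose logit depends on x2 and,
        optionally (through gamma_m), on the response y. Coefficients are
        hypothetical and chosen only for illustration."""
        x = rng.normal(size=(n, 2))                  # predictors
        y = rng.binomial(1, logistic(x @ beta_y))    # response depends on the predictors
        m = rng.binomial(1, logistic(beta_m * x[:, 1] + gamma_m * y))  # missingness in x1
        x_obs = x.copy()
        x_obs[m == 1, 0] = np.nan                    # mask x1 where it is "missing"
        return x_obs, y, m

    # MGP independent of the response (given the predictors):
    _, y_a, m_a = simulate(10000, beta_y=np.array([1.0, -1.0]), beta_m=1.0, gamma_m=0.0)
    # MGP dependent on the response:
    _, y_b, m_b = simulate(10000, beta_y=np.array([1.0, -1.0]), beta_m=1.0, gamma_m=2.0)
    print(m_a.mean(), m_b.mean())  # value-wise missing rates for x1 under the two MGPs

In the second scenario the presence of a missing value in x1 carries information about y, which is precisely the information that the separate class method can exploit.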
Figure 3: A summary of the tree performance on the simulated original full data (density plots of in-sample and out-of-sample accuracy and AUC).

Figure 3 shows the distribution of the tree performance on the simulated original full data, as measured by accuracy and area under the ROC curve (AUC). As we can see, there is broad coverage of the entire range of strength of the underlying relationship. Also, as expected, the out-of-sample performance (on the test set) is generally worse than the in-sample performance (on the training set). When the in-sample AUC is close to 0.5, a tree is likely not to split, and as a result no missing data method is actually applied, resulting in equivalent performance for all of them. To make the comparisons more meaningful, we exclude the cases where the in-sample AUC is below 0.7. Lower thresholds for exclusion (0.55 and 0.6) yield very similar results.

Of the six missing data methods covered by this study, five, namely the complete case method, probabilistic split, separate class, imputation, and the complete variable method, are realized using C4.5. These methods are therefore always comparable. However, surrogate split is carried out using RPART, which makes it less comparable to the other methods because of differences between RPART and C4.5 other than the missing data methods. To remedy this problem, we tuned the RPART parameters (primarily the parameter "cp") so that it gives balanced results compared to C4.5 when applied to the original full data (that is, each has a similar probability of outperforming the other), and special attention is given when comparing RPART with the other methods. The out-of-sample performances of each pair of missing data methods were compared using both t-tests and nonparametric tests; each difference discussed in the following sections was strongly statistically significant.

Figure 4: A summary of the order of the six missing data methods when tested on a new complete testing set, with one panel per missingness pattern and sample sizes from 100 to 10000 on the horizontal axis. The Y axis is the percentage of times each method is the best (including being tied with other methods; therefore the percentages do not sum to one).

3.2.2 The Two Factors that Determine the Performance of Different Missing Data Methods

The simulations make clear that the dependence relationship between the missingness and the response variable is the most informative factor in differentiating among the missing data methods, and thus is most helpful in determining the appropriateness of a method. This can be clearly seen in Figures 4 and 5 (these figures refer to the case with twelve continuous predictors, six of which are subject to missing values, but results for other situations were broadly similar).
The left column in these figures shows the results when the missingness is independent of the response variable, and the right column shows the results when the missingness is dependent on the response variable. We can see that there are clear differences between the two columns, but within each column there is essentially no difference. This also indicates that the categorization of MCAR/MAR/NMAR (which is based on the dependence relationship between the missingness and the missing values, and does not distinguish the dependence of the missingness on other Xs and on Y) is not helpful in this context.

Figure 5: A summary of the order of the six missing data methods when tested on a new incomplete testing set, with one panel per missingness pattern and sample sizes from 100 to 10000 on the horizontal axis. The Y axis is the percentage of times each method is the best (including being tied with other methods).

Figure 6: Plot of the case-wise missing rate MR2 versus the value-wise missing rate MR1 in the simulations using the 36 real data sets.

Comparison of the right columns of Figures 4 and 5 shows that whether or not there are missing values in the testing set is the second most important criterion in differentiating between the methods. The separate class method is strongly dominant when the testing set contains missing values and the missingness is related to the response variable. The reason is that when missing data exist in both the training phase and the testing phase, they become part of the data, and the MGP becomes an essential part of the DGP. This, of course, requires the assumption that the MGP (as well as the DGP) is the same in both the training phase and the testing phase. Under this scenario, if the missingness is related to the response variable, then there is information about the response variable in the missingness, which should be helpful when making predictions. Separate class, by taking the missingness directly as an "observed" variable, uses the information in the missingness about the response variable most effectively and thus is the best method to use. In fact, as can be seen in the bottom rows of Figures 7 and 8 (which give average relative accuracies separated by missing rate), the average relative accuracy of separate class in this situation is larger than one, indicating, on average, better performance than with the original full data.
On the other hand, when the missing data occur only in the training phase and the testing set does not have missing values, or when the missingness is not related to and carries no information about the response variable, the existence of missing values is a nuisance. Its only effect is to obscure the underlying DGP, and it is therefore most likely to reduce a tree's performance. In this case, the simulations show probabilistic split to be clearly the best method. However, we do not see this dominance later in the results based on real data sets. More discussion of this point will follow in Section 4.

3.2.3 Missing Rate Effect

There are two ways of defining the missing rate: the percentage of predictor values that are missing from the data set (the value-wise missing rate, termed here MR1), and the percentage of observations that contain missing values (the case-wise missing rate, termed here MR2). If there is only one predictor, as is the case with 2×2 tables, then the two definitions coincide. We have seen earlier in the theoretical analyses that the missing rate has a clear impact on the performance of the missing data methods. In the simulations, there is also evidence of a relationship between relative performance and missing rate, whichever definition of the missing rate is used.

Figure 7: Winning percentage and mean relative accuracy of each missing data method under the MXY missingness pattern, separated by value-wise missing rate band (MR1 < 0.15, 0.2 < MR1 < 0.3, MR1 > 0.35) and plotted against sample size.
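As a small illustration of the two missing rate definitions above, here is a minimal sketch using a made-up predictor matrix, with NaN marking a missing entry:

    import numpy as np

    # Toy predictor matrix: 4 observations, 3 predictors; NaN marks a missing value.
    X = np.array([
        [1.0,    2.0,    np.nan],
        [np.nan, 0.5,    3.0],
        [2.0,    1.0,    4.0],
        [0.0,    7.0,    1.0],
    ])

    missing = np.isnan(X)
    mr1 = missing.mean()              # value-wise rate MR1: fraction of predictor cells missing
    mr2 = missing.any(axis=1).mean()  # case-wise rate MR2: fraction of rows with any missing value
    print(mr1, mr2)                   # 2/12 ≈ 0.17 and 2/4 = 0.5

With a single predictor, every missing cell corresponds to exactly one affected observation, so MR1 and MR2 coincide, as noted above for the 2×2 tables.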