jmlr jmlr2013 jmlr2013-42 knowledge-graph by maker-knowledge-mining

42 jmlr-2013-Fast Generalized Subset Scan for Anomalous Pattern Detection


Source: pdf

Author: Edward McFowland III, Skyler Speakman, Daniel B. Neill

Abstract: We propose Fast Generalized Subset Scan (FGSS), a new method for detecting anomalous patterns in general categorical data sets. We frame the pattern detection problem as a search over subsets of data records and attributes, maximizing a nonparametric scan statistic over all such subsets. We prove that the nonparametric scan statistics possess a novel property that allows for efficient optimization over the exponentially many subsets of the data without an exhaustive search, enabling FGSS to scale to massive and high-dimensional data sets. We evaluate the performance of FGSS in three real-world application domains (customs monitoring, disease surveillance, and network intrusion detection), and demonstrate that FGSS can successfully detect and characterize relevant patterns in each domain. As compared to three other recently proposed detection algorithms, FGSS substantially decreased run time and improved detection power for massive multivariate data sets. Keywords: pattern detection, anomaly detection, knowledge discovery, Bayesian networks, scan statistics

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 We frame the pattern detection problem as a search over subsets of data records and attributes, maximizing a nonparametric scan statistic over all such subsets. [sent-9, score-0.89]

2 Most existing anomaly detection methods focus on the discovery of single anomalous data records, for example, detecting a fraudulent transaction in financial data. [sent-17, score-0.717]

3 By searching for groups of these similar and slightly anomalous shipments, we can detect the presence of the subtle underlying pattern of smuggling. [sent-24, score-0.66]

4 Our approach can identify subsets of data records (ED visits) for which any subset of attributes is anomalous, enabling early and accurate outbreak detection. [sent-33, score-0.775]

5 Here we focus on the problem of anomalous pattern detection in general data, that is, data sets where data records are described by an arbitrary set of attributes. [sent-35, score-1.097]

6 We describe the anomalous pattern detection problem as detecting groups of anomalous records and characterizing their anomalous features, with the intention of understanding the anomalous process that generated these groups. [sent-36, score-2.712]

7 The anomalous pattern detection task begins with the assumption that there is a set of processes generating records in a data set. [sent-37, score-1.097]

8 The “background” process generates records that are typical and expected; these records are assumed to constitute the majority of the data set. [sent-38, score-0.852]

9 If these anomalies are generated by a process which is very different from the background process, it may be sufficient to evaluate each individual record in isolation because many of the records’ attributes will be atypical, or individual attributes may take on extremely surprising values. [sent-40, score-0.749]

10 However, a subtle anomalous process will generate records that may each be only slightly anomalous and therefore extremely challenging to detect. [sent-41, score-1.475]

11 Conversely, general methods for anomalous pattern detection are most useful for identifying interesting and non-obvious patterns occurring in the data, when there is little knowledge of what patterns to look for. [sent-48, score-0.776]

12 A general anomalous pattern detection method must learn M0 while making few assumptions, as it must maintain the ability to detect previously unknown and relevant patterns across diverse data without prior knowledge of the domain or the true distribution from which the data records are drawn. [sent-50, score-1.189]

13 We can reduce the challenge of anomalous pattern detection in general data to a sequence of tasks: learning a null model, defining the search space (i.e., which subsets of records and attributes to consider), and efficiently maximizing a score function over this space. [sent-51, score-0.728]

14 Therefore, we briefly summarize several previously proposed methods for anomalous pattern detection in general categorical data sets, based on the techniques they use, and their limitations, in addressing these tasks; more detailed descriptions can be found in §3. [sent-54, score-0.708]

15 Each method discussed here, including our proposed Fast Generalized Subset Scan approach, learns the structure and parameters of a Bayesian network from training data to represent M0 , and then searches for subsets of records in test data that are collectively anomalous given M0 . [sent-55, score-1.07]

16 Das and Schneider (2007) present a simple solution to the problem of individual record anomaly detection by computing each record’s likelihood given M0 , and assuming that the lowest-likelihood records are most anomalous; we refer to this approach as the Bayesian Network Anomaly Detector (BN). [sent-58, score-0.741]

17 Although BN is able to locate highly individually anomalous records very quickly, it will lose power to detect anomalous groups produced by a subtle anomalous process where each record, when considered individually, is only slightly anomalous. [sent-59, score-2.119]

18 Furthermore, BN ignores the group structure of anomalies and thus fails to provide specific details (groups of records, or subsets of attributes for which these records are anomalous) useful for understanding the underlying anomalous processes. [sent-60, score-1.42]

19 Anomaly Pattern Detection (APD) first computes each record’s likelihood given M0 , assuming that the lowest-likelihood records are individually anomalous, and then finds rules (conjunctions of attribute values) with higher than expected numbers of individually anomalous records (Das et al. [sent-61, score-1.59]

20 Also, APD permits records within a group to each be anomalous for different reasons, thereby compromising its ability to differentiate between true examples of anomalous patterns and noise, and making it difficult to characterize why a given subset is anomalous. [sent-65, score-1.585]

21 Therefore, the current state of the literature requires anomalies to be found either in isolation or, when searching for groups, through a reduction of the search space that could remove the anomalous groups of interest from consideration. [sent-71, score-0.698]

22 In §2 we propose an algorithm, Fast Generalized Subset Scan (FGSS), that can efficiently maximize a scoring function over all possible subsets of data records and attributes, allowing us to find the most anomalous subset. [sent-72, score-1.04]

23 Furthermore, our algorithm does not depend on the anomalousness of an entire record, but only some subset of its attributes, and is therefore able to provide useful information about the underlying anomalous process by identifying the subset of attributes for which a group is anomalous. [sent-73, score-0.941]

24 Fast Generalized Subset Scan (FGSS) is a novel method for anomalous pattern detection in general categorical data. [sent-75, score-0.708]

25 Unlike previous methods, we frame the pattern detection problem as a search over subsets of data records and subsets of attributes; we therefore search for self-similar groups of records for which some subset of attributes is anomalous. [sent-76, score-1.523]

26 We wish to find the most anomalous subset S∗ = R∗ × A∗ = arg maxS F(S), where the score function F(S) defines the anomalousness of a subset of records and attributes. [sent-93, score-1.12]

27 As in the previously proposed BN, APD, and AGD methods, this Bayesian network is typically learned from a separate “clean” data set of training data assumed to contain no anomalous patterns, but can also be learned from the test data if the proportion of anomalies is assumed to be very small. [sent-102, score-0.689]

28 The less subtle the anomalous process, that is, the more individually anomalous the records it generates are expected to be, the lower αmax can be set. [sent-158, score-1.518]

29 We note that maximizing F(S) over a range of α values, rather than for a single arbitrarily chosen value of α, enables the nonparametric scan statistic to detect a small number of highly anomalous p-values, a larger number of subtly anomalous p-values, or anything in between. [sent-159, score-1.31]

30 Our empirical results below demonstrate that the BJ statistic (6) outperforms HC for some real-world anomalous pattern detection tasks, and our use of empirical p-value ranges guarantees unbiased scores even when ties in likelihood are present. [sent-170, score-0.798]
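
As a concrete illustration of how such a nonparametric scan statistic can be computed, here is a minimal sketch that evaluates a Berk-Jones-style score over candidate significance thresholds. It assumes the standard form F_BJ(α) = N · KL(Nα/N, α) from the scan statistics literature rather than reproducing the paper's equation (6), and the function names are ours.

```python
import numpy as np

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q), in nats."""
    p, q = np.clip(p, 1e-12, 1 - 1e-12), np.clip(q, 1e-12, 1 - 1e-12)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def bj_statistic(pvalues, alpha_max=0.1):
    """Berk-Jones-style scan statistic maximized over all candidate
    thresholds alpha <= alpha_max (here, the distinct observed p-values)."""
    p = np.asarray(pvalues, dtype=float)
    n = len(p)
    best = 0.0
    for a in np.unique(p[p <= alpha_max]):
        n_alpha = np.sum(p <= a)   # observed count of "significant" p-values
        if n_alpha > n * a:        # score only an excess of low p-values
            best = max(best, n * kl_bernoulli(n_alpha / n, a))
    return best
```

Maximizing over all thresholds at or below αmax is what lets the statistic adapt to either a few highly anomalous p-values or many subtly anomalous ones, as described above.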

31 Furthermore, we present a novel approach for efficient optimization of any nonparametric scan statistic (satisfying the monotonicity and convexity properties (A1)-(A3) assumed above) over subsets of records and attributes, as described below. [sent-171, score-0.704]

32 For a pair of functions F(S) and G(Ri ), which represent the “score” of a given subset S and the “priority” of data record Ri respectively, the LTSS property guarantees that the only subsets with the potential to be optimal are those consisting of the top-k highest priority records {R(1) , . . . , R(k) }. [sent-176, score-0.71]

33 We demonstrate that the nonparametric scan statistics satisfy the necessary conditions for the linear-time subset scanning property to hold, allowing efficient maximization over subsets of data records (for a given subset of attributes) or over subsets of attributes (for a given subset of records). [sent-181, score-1.086]

34 For each α ∈ U, we employ the same logic as described in Corollary 2 to optimize Fα (S): compute the priority Gα (Ri ) for each record as in (7), sort the records from highest to lowest priority, and evaluate the subsets S = {R(1) , . . . , R(k) } × A for k = 1, . . . , N. [sent-247, score-0.677]
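
A minimal sketch of that inner loop follows, assuming a hypothetical `score_of_prefix` callback that evaluates Fα on the top-k records (the priority function Gα of equation (7) is not reproduced here):

```python
import numpy as np

def ltss_scan(priorities, score_of_prefix):
    """Linear-time subset scan for one alpha: sort records by priority,
    descending; by the LTSS property only the top-k prefixes (k = 1..N)
    can be optimal, so N evaluations replace a search over 2^N subsets."""
    order = np.argsort(-np.asarray(priorities))   # highest priority first
    best_score, best_k = -np.inf, 1
    for k in range(1, len(order) + 1):
        s = score_of_prefix(order[:k])            # score of {R_(1), ..., R_(k)}
        if s > best_score:
            best_score, best_k = s, k
    return best_score, order[:best_k]
```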

35 This approach is computationally efficient when the number of attributes is small, and is guaranteed to find the globally optimal subset of records and attributes. [sent-277, score-0.707]

36 We first maximize F(S) over all subsets of records for the current subset of attributes A, and set the current set of records as follows: R = arg maxR⊆{R1 ,...,RN } F(R × A). (11) [sent-285, score-1.216]

37 We then maximize F(S) over all subsets of attributes for the current subset of records R, and set the current set of attributes as follows: A = arg maxA⊆{A1 ,...,AM } F(R × A). [sent-289, score-1.038]
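
A compact sketch of this alternating ascent is shown below; `scan_records` and `scan_attrs` are hypothetical callbacks that would each run a linear-time subset scan as in the previous sketch, and `score` evaluates F(R × A).

```python
import numpy as np

def alternating_ascent(score, init_attrs, scan_records, scan_attrs, max_iter=100):
    """Iterate the two updates above: the best record subset for the current
    attributes, then the best attribute subset for the current records,
    stopping when the score no longer improves (a conditional maximum:
    R optimal given A and vice versa)."""
    A, prev = init_attrs, -np.inf
    for _ in range(max_iter):
        R = scan_records(A)   # update (11): best record subset for fixed A
        A = scan_attrs(R)     # symmetric update: best attribute subset for fixed R
        cur = score(R, A)
        if cur <= prev:       # coordinate ascent never decreases the score,
            break             # so no improvement signals convergence
        prev = cur
    return cur, R, A

# In practice this loop is wrapped in multiple random restarts over the
# initial attribute subset, keeping the best (F*, R*, A*) found.
```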

38 Moreover, if N and M are both large, this iterative search is much faster than an exhaustive search approach, making it computationally feasible to detect anomalous subsets of records and attributes in data sets that are both large and high-dimensional. [sent-302, score-1.402]

39 In this expression, the O(NM) term results from aggregating over records and attributes, while the O(N log N) and O(M log M) terms result from sorting the records and attributes by priority respectively. [sent-304, score-1.154]

40 Since each step optimizes over all subsets of records (given the current subset of attributes) and all subsets of attributes (given the current subset of records), convergence is extremely fast, with average values of Z (the number of iterations) less than 3. [sent-314, score-0.876]

41 The search approaches described above exploit the linear-time subset scanning property to efficiently identify the unconstrained subset of records and attributes that maximizes the score function F(S). [sent-317, score-0.858]

42 However, the unconstrained optimal subset may contain unrelated records, while records generated by the same anomalous process are expected to be similar to each other. [sent-318, score-0.992]

43 More precisely, we create each replica data set, containing the same number of records as the original test data set, by sampling uniformly at random from the training data or by generating random records according to our Bayesian network representing H0 . [sent-335, score-0.929]
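
A sketch of this randomization testing step, assuming the training data is a NumPy array of categorical codes; the +1 terms in the p-value are the usual conservative convention and an assumption here:

```python
import numpy as np

def make_replica(train_data, n_records, rng):
    """One replica under H0: the same number of records as the test set,
    drawn uniformly at random (with replacement) from the "clean" training
    data; the alternative in the text samples from the Bayesian network."""
    return train_data[rng.integers(0, len(train_data), size=n_records)]

def empirical_pvalue(f_observed, replica_best_scores):
    """Fraction of replicas whose maximum score ties or beats the observed
    F*; a detection is significant when this fraction is small."""
    r = np.asarray(replica_best_scores, dtype=float)
    return (1 + np.sum(r >= f_observed)) / (1 + len(r))
```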

44 Record the maximum value F ∗ of F(S), and the corresponding subsets of records R∗ and attributes A∗ over all such iterations: (a) Initialize A ← random subset of attributes. [sent-357, score-0.775]

45 Maximize F(S) = maxα≤αmax Fα (R × A) over subsets of records R ⊆ Si in the local neighborhood, for the current subset of attributes A, and set R ← arg maxR⊆Si F(R × A). [sent-359, score-0.775]

46 Maximize F(S) = maxα≤αmax Fα (R × A) over all subsets of attributes A, for the current subset of records R, and set A ← arg maxA⊆{A1 ,...,AM } F(R × A). [sent-361, score-0.775]

47 Anomaly Pattern Detection (APD) (Das et al., 2008) attempts to solve the problem of finding anomalous records in a categorical data set through a two-step approach. [sent-374, score-0.98]

48 Each rule is scored by comparing the observed and expected numbers of individually anomalous records with the given attribute values, using Fisher’s Exact Test. [sent-382, score-1.101]

49 When the number of individually anomalous records is significantly higher than expected, that rule is considered anomalous. [sent-383, score-0.986]
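
As a rough illustration of this rule-scoring step, the sketch below builds a 2×2 contingency table of individually anomalous counts inside versus outside a rule and applies Fisher's Exact Test via SciPy; the exact table APD constructs is our assumption.

```python
from scipy.stats import fisher_exact

def apd_rule_pvalue(anom_in_rule, n_in_rule, anom_total, n_total):
    """Smaller p-values mean the rule covers significantly more
    individually anomalous records than expected by chance."""
    table = [
        [anom_in_rule, n_in_rule - anom_in_rule],
        [anom_total - anom_in_rule,
         (n_total - n_in_rule) - (anom_total - anom_in_rule)],
    ]
    _, p = fisher_exact(table, alternative="greater")
    return p
```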

50 All records that share these attribute values will be evaluated together, but it is conceivable that the true anomalies only make up a small fraction of the records that satisfy a given rule. [sent-385, score-1.08]

51 Also, APD bases the score of a rule on the number of (perceived) individually anomalous records that satisfy it. [sent-389, score-1.035]

52 Thus we do not expect it to perform well in cases where each individual record is not highly anomalous, and the anomalous pattern is only visible when the records are considered as a group. [sent-390, score-1.11]

53 Finally, APD, unlike FGSS, lacks the ability to provide accurate insight into the subset of attributes or relationships for which a given group of records is anomalous. [sent-391, score-0.755]

54 Anomalous Group Detection (AGD) (Das et al., 2008; Das, 2009) is a method designed to find the most anomalous groups of records in a categorical data set. [sent-396, score-1.016]

55 AGD attempts to solve this problem in a loosely constrained manner, improving on a limitation of APD, such that any arbitrary group of anomalous records can be detected and reported. [sent-397, score-0.991]

56 A subset that has a large F(S) is one whose records are mutually very likely given the local Bayesian network (self-similar) but are dissimilar to the records outside of subset S. [sent-401, score-0.959]

57 We extend LTSS to detect self-similar, anomalous subsets of records and attributes in general multivariate data, where many of the traditional parametric assumptions found in space-time detection fail to hold. [sent-410, score-1.417]

58 We consider data sets from three distinct application domains (customs monitoring, disease surveillance, and network intrusion detection) in order to evaluate each method’s ability to detect anomalous patterns. [sent-417, score-0.701]

59 Each test data set is composed of records that represent typical system behavior as well as anomalous groups, while each normal data set has the same number of records as the test data sets but does not contain any anomalous groups. [sent-430, score-1.938]

60 For the BN method, the score of a record Ri is the negative log-likelihood of that record given the Bayesian network learned from training data, and the score of a data set is the average negative loglikelihood of the individual records it contains. [sent-439, score-0.794]
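
A minimal sketch of that per-record score, assuming a hypothetical `bn` structure that maps each attribute to its parent list and conditional probability table:

```python
import numpy as np

def record_nll(record, bn):
    """Negative log-likelihood of one record under a Bayesian network:
    the sum over attributes of -log P(value | parent values). `record`
    maps attribute -> value; `bn` maps attribute -> (parents, cpt), where
    cpt[parent_values][value] is a conditional probability."""
    nll = 0.0
    for attr, (parents, cpt) in bn.items():
        parent_vals = tuple(record[p] for p in parents)
        nll -= np.log(cpt[parent_vals][record[attr]])
    return nll
```

A data set's BN score would then be the mean of these per-record values, per the sentence above.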

61 For the APD method, all records that belong to a significant pattern are ranked above all records that do not belong to a significant pattern; within each of these subsets of records, the individual records are ranked using the individual anomaly detector (BN method). [sent-443, score-1.486]

62 Similarly, a data set’s score is the score of the most individually anomalous record it contains, with all data sets containing significant patterns ranked above all data sets which do not contain significant patterns (Das et al. [sent-444, score-0.852]

63 The score of a data set is defined as the average group score of all grouped records, ∑i Fi Ni / ∑i Ni , where Fi is the score of group i and Ni is the number of records in group i. [sent-452, score-0.717]

64 A subset of attributes Ainj is then chosen at random; each of the identical records in the group is then modified by randomly redrawing its values for this subset of attributes. [sent-604, score-0.788]

65 The records within the injected group are self-similar, as each pair of records differs by at most minj = |Ainj | attributes. [sent-606, score-0.941]
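
A sketch of this injection procedure, assuming the test data is a 2-D NumPy array of categorical values and `redraw_value(attr)` is a hypothetical sampler for replacement values:

```python
import numpy as np

def inject_group(test_data, s_inj, m_inj, redraw_value, rng):
    """Copy one background record s_inj times, pick m_inj attributes at
    random, and independently redraw those values for every copy, so any
    two injected records differ in at most m_inj attributes."""
    seed = test_data[rng.integers(len(test_data))].copy()
    group = np.tile(seed, (s_inj, 1))
    attrs = rng.choice(test_data.shape[1], size=m_inj, replace=False)
    for row in group:
        for a in attrs:
            row[a] = redraw_value(a)
    return np.vstack([test_data, group]), attrs
```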

66 One possible real-world scenario where such an anomalous group might occur is when a smuggler attempts to ship contraband using methods which have proved successful in the past, thus creating a group of similar, subtly anomalous container shipments. [sent-608, score-1.184]

67 We performed ten different experiments which differed in four parameters: the number of records N in the test data sets, the number of injected groups kinj , the number of records per injected group sinj , and the number of randomly altered attributes minj . [sent-609, score-1.302]

68 A separate training data set containing 100,000 records was generated for each experiment; the training and normal data sets are assumed to contain only “normal” shipping patterns with no anomalous patterns of interest. [sent-611, score-1.047]

69 Table 2 compares each method’s average area under the PR curve across the various PIERS scenarios, thus evaluating the methods’ ability to distinguish between anomalous and normal records in the test data sets. [sent-613, score-1.027]
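
To reproduce such a comparison, one would label each record as injected or not and summarize a method's per-record scores with average precision, a standard surrogate for the area under the PR curve; the toy arrays below are purely illustrative.

```python
import numpy as np
from sklearn.metrics import average_precision_score

y_true = np.array([0, 0, 1, 1, 0, 1])                 # 1 = injected (anomalous) record
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.05, 0.6])   # a method's anomalousness scores

print(average_precision_score(y_true, scores))        # approximates area under PR curve
```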

70 All methods tended to have improved performance when the proportion of anomalies kinj sinj /N was larger, when the group size sinj was larger, and when the records were more individually anomalous (corresponding to a larger number of randomly changed attributes minj ). [sent-616, score-1.413]

71 Table 3 compares each method’s average area under the ROC curve across the various PIERS scenarios, thus evaluating the methods’ ability to distinguish between the test data sets (which contain anomalous patterns) and the equally-sized normal data sets (in which no anomalies are present). [sent-622, score-0.714]

72 We observe that AGD demonstrates the best performance for identifying which records are anomalous (as measured by area under the PR curve). [sent-664, score-0.984]

73 The anomalies from a given intrusion type are likely to be both self-similar and different from normal activity, as they are generated by the same underlying anomalous process. [sent-688, score-0.701]

74 These facts should make it possible to detect intrusions by identifying anomalous patterns of network activity, without requiring labeled training examples of each intrusion type. [sent-689, score-0.722]

75 Das (2009) notes that using all 41 features makes the anomalies very individually anomalous, such that any individual record anomaly detection method could easily distinguish these records from normal network activity. [sent-690, score-0.934]

76 These results are consistent with what we understand about the data and the various methods, since records generated by most of the attack types are individually highly anomalous, and FGSS-HC tends to detect smaller subsets of more individually anomalous records. [sent-697, score-1.184]

77 For our subset of 22 attributes, all of the records injected by the Mailbomb attack are identical to each other; after the discretization of continuous attributes, the records injected by the Snmpguess attack are also identical to each other and to many normal records. [sent-703, score-1.061]

78 This atypical case, where all the records of interest are identical, rewards the AGD method, which requires large groups of similar records in order to achieve high detection power. [sent-704, score-0.998]

79 Our search procedure can be used to efficiently maximize the score function over subsets of records while exhaustively searching over subsets of attributes (Exhaustive FGSS) or to efficiently optimize over both subsets of records and subsets of attributes using an iterative search procedure (FGSS). [sent-716, score-1.748]
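
A sketch of the exhaustive variant, assuming a hypothetical `scan_records_given_attrs` callback that runs the linear-time record scan for a fixed attribute subset and returns (score, record subset):

```python
from itertools import combinations

def exhaustive_fgss(scan_records_given_attrs, attributes):
    """Enumerate every non-empty attribute subset (feasible only when the
    number of attributes M is small) and scan over records for each;
    this is guaranteed to find the globally optimal subset pair."""
    best = (float("-inf"), None, None)
    for k in range(1, len(attributes) + 1):
        for A in combinations(attributes, k):
            s, R = scan_records_given_attrs(A)
            if s > best[0]:
                best = (s, R, A)
    return best
```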

80 Also, we can enforce similarity constraints on the anomalous groups returned, or perform an unconstrained search over all subsets of records and attributes. [sent-717, score-1.095]

81 Figure 3 compares the run times of FGSS and AGD for varying numbers of records N and attributes M. [sent-718, score-0.674]

82 When the data set contains anomalies, unlike the “normal” data used in this scalability experiment, AGD will find more anomalous records to group together and thus forms larger groups, substantially increasing its run time. [sent-740, score-0.991]

83 Anomalous pattern detection can be described as a form of knowledge discovery, where the knowledge of interest includes not only the subset of data records affected by an anomalous process, but also which subset of attributes is anomalous for these records. [sent-742, score-1.946]

84 In addition to identifying the subset of records affected by an anomalous process, our FGSS method also identifies the subset of attributes for which these records are anomalous. [sent-744, score-1.718]

85 The previously proposed APD method also characterizes anomalous patterns by identifying “rules” which correspond to a higher than expected number of individually anomalous records. [sent-745, score-1.138]

86 Recall that to generate an anomalous group in the PIERS data, we selected a subset of attributes at random, and regenerated these attribute values for each affected record. [sent-748, score-0.979]

87 The affected subset of attributes for each record is used as the ground truth to which we can compare the subset of attributes identified as anomalous by a given method. [sent-749, score-1.203]

88 When an attribute has an anomalous value, either its corresponding likelihood or the likelihoods of its children given the Bayesian network structure will be low. [sent-753, score-0.719]

89 Given the set of predicted attributes A and the set of true (affected) attributes B, if there exists an attribute Ai ∈ A with a parent A p ∈ B, then A = A ∪ {A p }, that is, the parent attribute is counted as correctly predicted. [sent-756, score-0.726]
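
A sketch of this parent-credit rule, paired with a Jaccard-style overlap between the credited predictions and the true affected set; the overlap measure itself is our assumption, as the paper's exact metric is not quoted here.

```python
def characterization_overlap(predicted, true_affected, parents):
    """If a predicted attribute Ai has a parent Ap (in the Bayesian
    network) that was truly affected, count Ap as correctly predicted
    too, then measure set overlap against the ground truth."""
    credited, truth = set(predicted), set(true_affected)
    for a in predicted:
        for p in parents.get(a, ()):   # parents: attr -> iterable of parents
            if p in truth:
                credited.add(p)
    return len(credited & truth) / len(credited | truth)
```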

90 FGSS places all ungrouped records in a group together, and thus we also use the p-value characterization method to identify a subset of attributes for each ungrouped record. [sent-761, score-0.755]

91 These results support our hypothesis that the grouping of records that are self-similar and anomalous for some subset of attributes in our FGSS framework results in substantially improved pattern characterization ability. [sent-764, score-1.287]

92 We formalize the pattern detection problem as a search over subsets of data records and attributes, and present the Fast Generalized Subset Scan (FGSS) algorithm, which efficiently detects anomalous patterns in general categorical data sets. [sent-768, score-1.278]

93 The algorithm then uses the distribution of these empirical p-value ranges under the null hypothesis in order to find subsets of data records and attributes that as a group significantly deviate from their expectation as measured by a nonparametric scan statistic. [sent-770, score-1.043]

94 This property allows us to search efficiently and exactly over all subsets of data records or attributes while evaluating only a linear number of subsets. [sent-772, score-0.774]

95 Additionally, similarity constraints can be easily incorporated into our FGSS framework, allowing for the detection of self-similar subsets of records which have anomalous values for some subset of attributes. [sent-774, score-1.154]

96 FGSS excels in scenarios when there is a strong self-similarity among the records generated by an anomalous process, with each individual record only emitting a subtle anomalous signal. [sent-777, score-1.598]

97 Furthermore, we empirically demonstrate that, even as FGSS scales to high-dimensional data, it finds the globally optimal subset of records and attributes with high probability. [sent-779, score-0.707]

98 When the number of attributes is large, FGSS converges to a conditional maximum (for which the subset of records is optimal given the subset of attributes and vice-versa), and multiple restarts are used to approach the joint optimum. [sent-783, score-1.003]

99 Finally, FGSS not only achieves high detection power but is also able to accurately characterize the subset of attributes for which each identified subset of records is anomalous. [sent-784, score-0.85]

100 We will detect subsets of records and attributes that are unlikely given each known pattern model as well as M0 , thus enabling FGSS to discover previously unknown pattern types given the current set of known patterns. [sent-795, score-0.878]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('anomalous', 0.517), ('records', 0.426), ('fgss', 0.423), ('attributes', 0.248), ('agd', 0.2), ('apd', 0.186), ('scan', 0.133), ('attribute', 0.115), ('neill', 0.114), ('anomalies', 0.113), ('detection', 0.11), ('record', 0.106), ('pmax', 0.1), ('pi', 0.075), ('das', 0.073), ('ubset', 0.068), ('subsets', 0.068), ('eill', 0.064), ('ltss', 0.064), ('nomalous', 0.064), ('owland', 0.064), ('peakman', 0.064), ('anomaly', 0.062), ('pmin', 0.056), ('intrusion', 0.055), ('priority', 0.054), ('eneralized', 0.054), ('statistic', 0.054), ('ranges', 0.053), ('etection', 0.049), ('score', 0.049), ('group', 0.048), ('detect', 0.048), ('anomalousness', 0.045), ('piers', 0.045), ('pattern', 0.044), ('patterns', 0.044), ('individually', 0.043), ('hc', 0.042), ('ri', 0.042), ('network', 0.041), ('injected', 0.041), ('disease', 0.04), ('roc', 0.04), ('bn', 0.04), ('attack', 0.039), ('categorical', 0.037), ('container', 0.036), ('groups', 0.036), ('subset', 0.033), ('bj', 0.032), ('search', 0.032), ('anthrax', 0.032), ('shipments', 0.032), ('surveillance', 0.032), ('exhaustive', 0.031), ('bayesian', 0.03), ('detecting', 0.028), ('customs', 0.027), ('emergency', 0.027), ('hospital', 0.027), ('jmk', 0.027), ('curve', 0.026), ('likelihoods', 0.026), ('null', 0.025), ('li', 0.025), ('area', 0.024), ('pr', 0.024), ('spatial', 0.024), ('nonparametric', 0.023), ('nbeat', 0.023), ('highest', 0.023), ('cance', 0.021), ('scanning', 0.021), ('iii', 0.021), ('likelihood', 0.02), ('hypothesis', 0.019), ('affected', 0.018), ('test', 0.018), ('int', 0.018), ('cials', 0.018), ('kin', 0.018), ('mcfowland', 0.018), ('ntie', 0.018), ('subtly', 0.018), ('replica', 0.018), ('identifying', 0.017), ('max', 0.017), ('individual', 0.017), ('unconstrained', 0.016), ('ntrain', 0.016), ('normal', 0.016), ('maximize', 0.015), ('activity', 0.015), ('subtle', 0.015), ('restarts', 0.015), ('visits', 0.015), ('vi', 0.015), ('scoring', 0.014), ('monitoring', 0.014)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 42 jmlr-2013-Fast Generalized Subset Scan for Anomalous Pattern Detection


2 0.12692004 89 jmlr-2013-QuantMiner for Mining Quantitative Association Rules

Author: Ansaf Salleb-Aouissi, Christel Vrain, Cyril Nortet, Xiangrong Kong, Vivek Rathod, Daniel Cassard

Abstract: In this paper, we propose QuantMiner, a mining quantitative association rules system. This system is based on a genetic algorithm that dynamically discovers “good” intervals in association rules by optimizing both the support and the confidence. The experiments on real and artificial databases have shown the usefulness of QuantMiner as an interactive, exploratory data mining tool. Keywords: association rules, numerical and categorical attributes, unsupervised discretization, genetic algorithm, simulated annealing

3 0.053165179 12 jmlr-2013-Alleviating Naive Bayes Attribute Independence Assumption by Attribute Weighting

Author: Nayyar A. Zaidi, Jesús Cerquides, Mark J. Carman, Geoffrey I. Webb

Abstract: Despite the simplicity of the Naive Bayes classifier, it has continued to perform well against more sophisticated newcomers and has remained, therefore, of great interest to the machine learning community. Of numerous approaches to refining the naive Bayes classifier, attribute weighting has received less attention than it warrants. Most approaches, perhaps influenced by attribute weighting in other machine learning algorithms, use weighting to place more emphasis on highly predictive attributes than those that are less predictive. In this paper, we argue that for naive Bayes attribute weighting should instead be used to alleviate the conditional independence assumption. Based on this premise, we propose a weighted naive Bayes algorithm, called WANBIA, that selects weights to minimize either the negative conditional log likelihood or the mean squared error objective functions. We perform extensive evaluations and find that WANBIA is a competitive alternative to state of the art classifiers like Random Forest, Logistic Regression and A1DE. Keywords: classification, naive Bayes, attribute independence assumption, weighted naive Bayes classification

4 0.048651982 6 jmlr-2013-A Plug-in Approach to Neyman-Pearson Classification

Author: Xin Tong

Abstract: The Neyman-Pearson (NP) paradigm in binary classification treats type I and type II errors with different priorities. It seeks classifiers that minimize type II error, subject to a type I error constraint under a user specified level α. In this paper, plug-in classifiers are developed under the NP paradigm. Based on the fundamental Neyman-Pearson Lemma, we propose two related plug-in classifiers which amount to thresholding respectively the class conditional density ratio and the regression function. These two classifiers handle different sampling schemes. This work focuses on theoretical properties of the proposed classifiers; in particular, we derive oracle inequalities that can be viewed as finite sample versions of risk bounds. NP classification can be used to address anomaly detection problems, where asymmetry in errors is an intrinsic property. As opposed to a common practice in anomaly detection that consists of thresholding normal class density, our approach does not assume a specific form for anomaly distributions. Such consideration is particularly necessary when the anomaly class density is far from uniformly distributed. Keywords: plug-in approach, Neyman-Pearson paradigm, nonparametric statistics, oracle inequality, anomaly detection

5 0.0397329 48 jmlr-2013-Generalized Spike-and-Slab Priors for Bayesian Group Feature Selection Using Expectation Propagation

Author: Daniel Hernández-Lobato, José Miguel Hernández-Lobato, Pierre Dupont

Abstract: We describe a Bayesian method for group feature selection in linear regression problems. The method is based on a generalized version of the standard spike-and-slab prior distribution which is often used for individual feature selection. Exact Bayesian inference under the prior considered is infeasible for typical regression problems. However, approximate inference can be carried out efficiently using Expectation Propagation (EP). A detailed analysis of the generalized spike-and-slab prior shows that it is well suited for regression problems that are sparse at the group level. Furthermore, this prior can be used to introduce prior knowledge about specific groups of features that are a priori believed to be more relevant. An experimental evaluation compares the performance of the proposed method with those of group LASSO, Bayesian group LASSO, automatic relevance determination and additional variants used for group feature selection. The results of these experiments show that a model based on the generalized spike-and-slab prior and the EP algorithm has state-of-the-art prediction performance in the problems analyzed. Furthermore, this model is also very useful to carry out sequential experimental design (also known as active learning), where the data instances that are most informative are iteratively included in the training set, reducing the number of instances needed to obtain a particular level of prediction accuracy. Keywords: group feature selection, generalized spike-and-slab priors, expectation propagation, sparse linear model, approximate inference, sequential experimental design, signal reconstruction

6 0.036322482 95 jmlr-2013-Ranking Forests

7 0.024457043 58 jmlr-2013-Language-Motivated Approaches to Action Recognition

8 0.023159845 111 jmlr-2013-Supervised Feature Selection in Graphs with Path Coding Penalties and Network Flows

9 0.022304304 44 jmlr-2013-Finding Optimal Bayesian Networks Using Precedence Constraints

10 0.021965314 101 jmlr-2013-Sparse Activity and Sparse Connectivity in Supervised Learning

11 0.021086568 90 jmlr-2013-Quasi-Newton Method: A New Direction

12 0.018949674 119 jmlr-2013-Variable Selection in High-Dimension with Random Designs and Orthogonal Matching Pursuit

13 0.018917823 98 jmlr-2013-Segregating Event Streams and Noise with a Markov Renewal Process Model

14 0.017281873 80 jmlr-2013-One-shot Learning Gesture Recognition from RGB-D Data Using Bag of Features

15 0.017008675 110 jmlr-2013-Sub-Local Constraint-Based Learning of Bayesian Networks Using A Joint Dependence Criterion

16 0.016845902 94 jmlr-2013-Ranked Bandits in Metric Spaces: Learning Diverse Rankings over Large Document Collections

17 0.016528532 39 jmlr-2013-Efficient Active Learning of Halfspaces: An Aggressive Approach

18 0.016387584 96 jmlr-2013-Regularization-Free Principal Curve Estimation

19 0.015858583 8 jmlr-2013-A Theory of Multiclass Boosting

20 0.015660901 56 jmlr-2013-Keep It Simple And Sparse: Real-Time Action Recognition


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.089), (1, -0.003), (2, -0.035), (3, 0.016), (4, 0.034), (5, 0.018), (6, 0.002), (7, 0.015), (8, -0.031), (9, 0.013), (10, -0.012), (11, -0.175), (12, -0.052), (13, -0.264), (14, 0.011), (15, -0.022), (16, -0.347), (17, 0.089), (18, -0.039), (19, -0.266), (20, -0.164), (21, 0.026), (22, -0.046), (23, -0.067), (24, -0.02), (25, -0.001), (26, -0.051), (27, 0.012), (28, 0.079), (29, -0.028), (30, -0.062), (31, 0.015), (32, 0.104), (33, -0.034), (34, -0.013), (35, -0.016), (36, -0.029), (37, 0.183), (38, -0.006), (39, -0.045), (40, -0.002), (41, -0.101), (42, -0.034), (43, -0.01), (44, 0.175), (45, -0.199), (46, 0.045), (47, -0.006), (48, -0.099), (49, -0.046)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96614426 42 jmlr-2013-Fast Generalized Subset Scan for Anomalous Pattern Detection


2 0.79453468 89 jmlr-2013-QuantMiner for Mining Quantitative Association Rules


3 0.39234087 12 jmlr-2013-Alleviating Naive Bayes Attribute Independence Assumption by Attribute Weighting


4 0.23480672 6 jmlr-2013-A Plug-in Approach to Neyman-Pearson Classification


5 0.23102711 95 jmlr-2013-Ranking Forests

Author: Stéphan Clémençon, Marine Depecker, Nicolas Vayatis

Abstract: The present paper examines how the aggregation and feature randomization principles underlying the algorithm RANDOM FOREST (Breiman, 2001) can be adapted to bipartite ranking. The approach taken here is based on nonparametric scoring and ROC curve optimization in the sense of the AUC criterion. In this problem, aggregation is used to increase the performance of scoring rules produced by ranking trees, as those developed in Clémençon and Vayatis (2009c). The present work describes the principles for building median scoring rules based on concepts from rank aggregation. Consistency results are derived for these aggregated scoring rules and an algorithm called RANKING FOREST is presented. Furthermore, various strategies for feature randomization are explored through a series of numerical experiments on artificial data sets. Keywords: bipartite ranking, nonparametric scoring, classification data, ROC optimization, AUC criterion, tree-based ranking rules, bootstrap, bagging, rank aggregation, median ranking, feature randomization

6 0.22928788 118 jmlr-2013-Using Symmetry and Evolutionary Search to Minimize Sorting Networks

7 0.15530074 48 jmlr-2013-Generalized Spike-and-Slab Priors for Bayesian Group Feature Selection Using Expectation Propagation

8 0.14912504 110 jmlr-2013-Sub-Local Constraint-Based Learning of Bayesian Networks Using A Joint Dependence Criterion

9 0.13625073 94 jmlr-2013-Ranked Bandits in Metric Spaces: Learning Diverse Rankings over Large Document Collections

10 0.1314909 2 jmlr-2013-A Binary-Classification-Based Metric between Time-Series Distributions and Its Use in Statistical and Learning Problems

11 0.12886268 33 jmlr-2013-Dimension Independent Similarity Computation

12 0.12421717 43 jmlr-2013-Fast MCMC Sampling for Markov Jump Processes and Extensions

13 0.12166719 14 jmlr-2013-Asymptotic Results on Adaptive False Discovery Rate Controlling Procedures Based on Kernel Estimators

14 0.1177625 90 jmlr-2013-Quasi-Newton Method: A New Direction

15 0.11308575 103 jmlr-2013-Sparse Robust Estimation and Kalman Smoothing with Nonsmooth Log-Concave Densities: Modeling, Computation, and Theory

16 0.11194693 40 jmlr-2013-Efficient Program Synthesis Using Constraint Satisfaction in Inductive Logic Programming

17 0.1101279 106 jmlr-2013-Stationary-Sparse Causality Network Learning

18 0.10722697 101 jmlr-2013-Sparse Activity and Sparse Connectivity in Supervised Learning

19 0.10369171 83 jmlr-2013-Orange: Data Mining Toolbox in Python

20 0.10326897 58 jmlr-2013-Language-Motivated Approaches to Action Recognition


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.028), (5, 0.083), (6, 0.031), (10, 0.064), (20, 0.067), (23, 0.026), (38, 0.42), (68, 0.036), (70, 0.024), (71, 0.013), (75, 0.029), (85, 0.038), (87, 0.013)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.6973322 42 jmlr-2013-Fast Generalized Subset Scan for Anomalous Pattern Detection


2 0.59256452 107 jmlr-2013-Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization

Author: Shai Shalev-Shwartz, Tong Zhang

Abstract: Stochastic Gradient Descent (SGD) has become popular for solving large scale supervised machine learning optimization problems such as SVM, due to their strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoy strong theoretical guarantees that are comparable or better than SGD. This analysis justifies the effectiveness of SDCA for practical applications. Keywords: stochastic dual coordinate ascent, optimization, computational complexity, regularized loss minimization, support vector machines, ridge regression, logistic regression

3 0.31507766 51 jmlr-2013-Greedy Sparsity-Constrained Optimization

Author: Sohail Bahmani, Bhiksha Raj, Petros T. Boufounos

Abstract: Sparsity-constrained optimization has wide applicability in machine learning, statistics, and signal processing problems such as feature selection and Compressed Sensing. A vast body of work has studied the sparsity-constrained optimization from theoretical, algorithmic, and application aspects in the context of sparse estimation in linear models where the fidelity of the estimate is measured by the squared error. In contrast, relatively less effort has been made in the study of sparsity-constrained optimization in cases where nonlinear models are involved or the cost function is not quadratic. In this paper we propose a greedy algorithm, Gradient Support Pursuit (GraSP), to approximate sparse minima of cost functions of arbitrary form. Should a cost function have a Stable Restricted Hessian (SRH) or a Stable Restricted Linearization (SRL), both of which are introduced in this paper, our algorithm is guaranteed to produce a sparse vector within a bounded distance from the true sparse optimum. Our approach generalizes known results for quadratic cost functions that arise in sparse linear regression and Compressed Sensing. We also evaluate the performance of GraSP through numerical simulations on synthetic and real data, where the algorithm is employed for sparse logistic regression with and without ℓ2-regularization. Keywords: sparsity, optimization, compressed sensing, greedy algorithm

4 0.30106738 58 jmlr-2013-Language-Motivated Approaches to Action Recognition

Author: Manavender R. Malgireddy, Ifeoma Nwogu, Venu Govindaraju

Abstract: We present language-motivated approaches to detecting, localizing and classifying activities and gestures in videos. In order to obtain statistical insight into the underlying patterns of motions in activities, we develop a dynamic, hierarchical Bayesian model which connects low-level visual features in videos with poses, motion patterns and classes of activities. This process is somewhat analogous to the method of detecting topics or categories from documents based on the word content of the documents, except that our documents are dynamic. The proposed generative model harnesses both the temporal ordering power of dynamic Bayesian networks such as hidden Markov models (HMMs) and the automatic clustering power of hierarchical Bayesian models such as the latent Dirichlet allocation (LDA) model. We also introduce a probabilistic framework for detecting and localizing pre-specified activities (or gestures) in a video sequence, analogous to the use of filler models for keyword detection in speech processing. We demonstrate the robustness of our classification model and our spotting framework by recognizing activities in unconstrained real-life video sequences and by spotting gestures via a one-shot-learning approach. Keywords: dynamic hierarchical Bayesian networks, topic models, activity recognition, gesture spotting, generative models

5 0.2860744 25 jmlr-2013-Communication-Efficient Algorithms for Statistical Optimization

Author: Yuchen Zhang, John C. Duchi, Martin J. Wainwright

Abstract: We analyze two communication-efficient algorithms for distributed optimization in statistical settings involving large-scale data sets. The first algorithm is a standard averaging method that distributes the N data samples evenly to m machines, performs separate minimization on each subset, and then averages the estimates. We provide a sharp analysis of this average mixture algorithm, showing that under a reasonable set of conditions, the combined parameter achieves mean-squared error (MSE) that decays as O(N^-1 + (N/m)^-2). Whenever m ≤ √N, this guarantee matches the best possible rate achievable by a centralized algorithm having access to all N samples. The second algorithm is a novel method, based on an appropriate form of bootstrap subsampling. Requiring only a single round of communication, it has mean-squared error that decays as O(N^-1 + (N/m)^-3), and so is more robust to the amount of parallelization. In addition, we show that a stochastic gradient-based method attains mean-squared error decaying as O(N^-1 + (N/m)^-3/2), easing computation at the expense of a potentially slower MSE rate. We also provide an experimental evaluation of our methods, investigating their performance both on simulated data and on a large-scale regression problem from the internet search domain. In particular, we show that our methods can be used to efficiently solve an advertisement prediction problem from the Chinese SoSo Search Engine, which involves logistic regression with N ≈ 2.4 × 10^8 samples and d ≈ 740,000 covariates. Keywords: distributed learning, stochastic optimization, averaging, subsampling

6 0.28367889 2 jmlr-2013-A Binary-Classification-Based Metric between Time-Series Distributions and Its Use in Statistical and Learning Problems

7 0.28262475 28 jmlr-2013-Construction of Approximation Spaces for Reinforcement Learning

8 0.2820605 5 jmlr-2013-A Near-Optimal Algorithm for Differentially-Private Principal Components

9 0.28176183 77 jmlr-2013-On the Convergence of Maximum Variance Unfolding

10 0.28029084 52 jmlr-2013-How to Solve Classification and Regression Problems on High-Dimensional Data with a Supervised Extension of Slow Feature Analysis

11 0.27958307 48 jmlr-2013-Generalized Spike-and-Slab Priors for Bayesian Group Feature Selection Using Expectation Propagation

12 0.27936921 26 jmlr-2013-Conjugate Relation between Loss Functions and Uncertainty Sets in Classification Problems

13 0.27910835 99 jmlr-2013-Semi-Supervised Learning Using Greedy Max-Cut

14 0.27748713 72 jmlr-2013-Multi-Stage Multi-Task Feature Learning

15 0.27747208 105 jmlr-2013-Sparsity Regret Bounds for Individual Sequences in Online Linear Regression

16 0.2774471 29 jmlr-2013-Convex and Scalable Weakly Labeled SVMs

17 0.27683902 22 jmlr-2013-Classifying With Confidence From Incomplete Information

18 0.27674764 110 jmlr-2013-Sub-Local Constraint-Based Learning of Bayesian Networks Using A Joint Dependence Criterion

19 0.27656502 50 jmlr-2013-Greedy Feature Selection for Subspace Clustering

20 0.27598882 100 jmlr-2013-Similarity-based Clustering by Left-Stochastic Matrix Factorization