jmlr jmlr2013 jmlr2013-91 knowledge-graph by maker-knowledge-mining

91 jmlr-2013-Query Induction with Schema-Guided Pruning Strategies


Source: pdf

Author: Joachim Niehren, Jérôme Champavère, Aurélien Lemay, Rémi Gilleron

Abstract: Inference algorithms for tree automata that define node selecting queries in unranked trees rely on tree pruning strategies. These impose additional assumptions on node selection that are needed to compensate for small numbers of annotated examples. Pruning-based heuristics in query learning algorithms for Web information extraction often boost the learning quality and speed up the learning process. We will distinguish the class of regular queries that are stable under a given schema-guided pruning strategy, and show that this class is learnable with polynomial time and data. Our learning algorithm is obtained by adding pruning heuristics to the traditional learning algorithm for tree automata from positive and negative examples. While justified by a formal learning model, our learning algorithm for stable queries also performs very well in the practice of XML information extraction. Keywords: XML information extraction, XML schemas, interactive learning, tree automata, grammatical inference

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Inference algorithms for tree automata that define node selecting queries in unranked trees rely on tree pruning strategies. [sent-11, score-1.517]

2 The query induction algorithm then learns the query from the XML trees with some annotated elements. [sent-49, score-0.909]

3 Schema-less pruning strategies were essential for good learning quality from few examples in query induction algorithms based on tree automata inference, for example by Raeymaekers et al. [sent-63, score-1.041]

4 For an invalid tree, the state q_invalid will appear along the run of the automaton and the tree will be rejected, while a valid tree will be processed and annotated by the correct states. [sent-90, score-1.005]

5 For instance the unranked tree u will be annotated as q_country(q_name, q_city, q_region(q_name, q_population, q_city), q_region(q_name, q_city, q_city)), and will be accepted because the root state is q_country. [sent-91, score-0.8]
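
To make such runs concrete, here is a minimal Python sketch (not the authors' implementation) of bottom-up evaluation of an unranked tree by a deterministic automaton. The transition table, keyed on exact tuples of child states, is a simplification of the regular horizontal languages used for unranked tree automata, and all states and rules below are invented for illustration.

```python
# Hedged sketch: bottom-up evaluation of an unranked tree by a toy
# schema automaton.  Trees are (label, [children]) pairs; states are
# strings.  An unknown (label, child-states) combination falls through
# to "q_invalid", matching the rejection behaviour described above.

def run(tree, rules):
    """Return the state assigned to the root of `tree`."""
    label, children = tree
    states = tuple(run(c, rules) for c in children)
    return rules.get((label, states), "q_invalid")

rules = {  # invented rules mimicking the geography example
    ("name", ()): "q_name",
    ("city", ()): "q_city",
    ("population", ()): "q_population",
    ("region", ("q_name", "q_population", "q_city")): "q_region",
    ("region", ("q_name", "q_city", "q_city")): "q_region",
    ("country", ("q_name", "q_city", "q_region", "q_region")): "q_country",
}

u = ("country", [
    ("name", []), ("city", []),
    ("region", [("name", []), ("population", []), ("city", [])]),
    ("region", [("name", []), ("city", []), ("city", [])]),
])

assert run(u, rules) == "q_country"   # the valid tree is accepted
```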

6 It will turn out that more complex pruning strategies will be needed for ranked trees, in order to deal with the above pruning strategies for unranked trees via binary encoding. [sent-96, score-1.424]

7 We call a query stable for a pruning strategy if the result of applying the strategy to a tree at some selected node always justifies the node’s selection. [sent-97, score-1.023]

8 [Figure 6] An annotated tree in which the first region name has been annotated positively (it must be selected) and the second region name has been annotated negatively (it must not be selected). [sent-104, score-1.339]

9 Pruning strategies can be lifted to pruning functions on positively annotated trees. [sent-111, score-0.824]

10 Given a pruning strategy σ, the next question is whether σ-stable queries can be learned from σ-pruned samples of annotated examples. [sent-113, score-0.922]

11 Depending on the pruning strategy, this yields a hierarchy of query classes that is essential for understanding the difficulty of query learning in practice. [sent-116, score-0.884]

12 We define schema-guided pruning strategies for schemas defined by deterministic bottom-up tree automata, both for ranked and unranked trees. [sent-121, score-1.093]

13 We lift pruning strategies to pruning functions that can be applied to positively annotated trees, in order to produce pruned samples for our learning algorithm. [sent-126, score-1.417]

14 We present an algorithm that decides in polynomial time whether a regular query is stable for a regular pruning function. [sent-130, score-0.856]

15 Section 3 introduces the notion of stable queries for schema-guided pruning strategies and shows that less aggressive pruning strategies give rise to larger classes of stable queries. [sent-144, score-1.344]

16 In Section 4 we show how to lift pruning strategies to pruning functions, by which to prune examples with positive annotations only. [sent-145, score-1.032]

17 We then show how to characterize stable queries for pruning functions by regular languages of pruned annotated examples in a unique manner. [sent-146, score-1.19]

18 Section 5 discusses how to define regular stable queries and pruning functions by deterministic tree automata. [sent-148, score-0.996]

19 A tree language L ⊆ T_Σ is regular if L = L(D) for some tree automaton D with signature Σ. [sent-228, score-0.904]

20 Given another tree automaton D′ over the same signature, a tree automaton D ∩ D′ with L(D ∩ D′) = L(D) ∩ L(D′) can be computed in time O(|D| · |D′|). [sent-238, score-1.022]
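
The O(|D| · |D′|) bound comes from the usual product construction. A hedged sketch, assuming both automata are given as dictionaries mapping (symbol, tuple of child states) to a state; this representation is an assumption made for illustration.

```python
# Hedged sketch of the product construction for D ∩ D′: states of the
# intersection automaton are pairs of states, so its size is O(|D| |D′|).

def product(delta1, final1, delta2, final2):
    """delta maps (symbol, child-state tuple) -> state; final is a set."""
    delta = {}
    for (f, qs1), q1 in delta1.items():
        for (g, qs2), q2 in delta2.items():
            if f == g and len(qs1) == len(qs2):
                delta[(f, tuple(zip(qs1, qs2)))] = (q1, q2)
    final = {(q1, q2) for q1 in final1 for q2 in final2}
    return delta, final

# Tiny usage example on the signature {a/0, f/1}:
d1 = {("a", ()): "p0", ("f", ("p0",)): "p1", ("f", ("p1",)): "p0"}
d2 = {("a", ()): "r", ("f", ("r",)): "r"}
delta, final = product(d1, {"p0"}, d2, {"r"})
assert delta[("a", ())] == ("p0", "r")
```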

21 Pruned Trees. We fix a ranked signature Σ and a schema D, which is a deterministic tree automaton with signature Σ and state set X. [sent-251, score-1.01]

22 Definition 2 A pruning strategy for D is a function σ that maps any tree t ∈ L(D) and node ν ∈ nodes(t) to a pruned tree σ(t, ν) ∈ T_Σ(X) of which t is a D-instance, such that ν is preserved with its label. [sent-270, score-1.044]

23 The pruning strategy path-only_D is more restrictive in that it can only be applied to trees satisfying schema D. [sent-276, score-0.827]
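
For illustration, here is a hedged sketch of a path-only style strategy on ranked trees: every subtree hanging off the path from the root to the selected node ν is replaced by its schema state. Trees are (label, children) pairs as above, states are bare strings at the leaves, and eval_state is a hypothetical helper standing in for bottom-up evaluation of the schema automaton D.

```python
# Hedged sketch of a path-only pruning strategy.  `nu` is a node
# address given as a tuple of child indices; `eval_state(subtree)`
# is assumed to return the unique state D assigns to the subtree.

def path_only(tree, nu, eval_state):
    label, children = tree
    if not nu:  # nu is the selected node: keep it, prune its subtrees
        return (label, [eval_state(c) for c in children])
    i, rest = nu[0], nu[1:]
    return (label, [path_only(c, rest, eval_state) if j == i
                    else eval_state(c)          # off-path: state only
                    for j, c in enumerate(children)])
```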

24 Query Q1 is stable for both pruning strategies, path-only and thus path-only_Lib, while query Q2 is stable only for path-only_Lib but not for path-only. [sent-306, score-0.827]

25 We now define the pruning strategy σ′ with schema D such that it replaces, for all trees t ∈ L(D), the same subtrees t_0 as σ, but by the unique state in eval_D(t_0) rather than by the unique state in eval_{D′}(t_0) as chosen by σ. [sent-315, score-0.903]

26 In order to do so, we will lift pruning strategies to pruning functions that can be applied to example trees with positive annotations. [sent-325, score-1.106]

27 The main idea of the learning algorithm for regular stable queries in Section 7 will be to identify the characteristic languages of the target query from annotated examples. [sent-328, score-0.888]

28 Different methods for lifting pruning strategies to pruning functions will give rise to different target languages and thus to different learning algorithms. [sent-329, score-0.994]

29 Annotated Trees. Intuitively, annotated examples for a target query are trees in which some selected nodes are annotated by the Boolean 1 (“true”) and some rejected nodes by the Boolean 0 (“false”). [sent-331, score-1.112]

30 We next formalize the notion of annotated trees (independently of any target query or pruning strategy). [sent-335, score-1.118]

31 An annotated tree is a tree with ranked signature Σ ∪ (Σ × B) ∪ X, where all q ∈ X have arity 0 while Boolean annotations preserve the arity. [sent-338, score-0.891]

32 The Σ-projection of a language L of annotated trees is the language Π_Σ(L) of pruned trees t ∈ T_Σ(X) where every t is the Σ-projection of some tree t ∗ β ∈ L. [sent-354, score-1.032]
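
A small hedged sketch of the Σ-projection on a single annotated tree (the linear-time automaton construction Π_Σ(A) itself is not reproduced here). The representation of annotations as (symbol, Boolean) pairs and of states as bare strings is an assumption carried over from the sketches above.

```python
# Hedged sketch: forget the Boolean component of every annotated label.

def sigma_projection(tree):
    if isinstance(tree, str):        # a schema state: kept unchanged
        return tree
    label, children = tree
    if isinstance(label, tuple):     # (symbol, Boolean annotation)
        label = label[0]
    return (label, [sigma_projection(c) for c in children])

t = (("region", 1), [("name", []), (("city", 0), [])])
assert sigma_projection(t) == ("region", [("name", []), ("city", [])])
```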

33 For every tree automaton A with signature Σ ∪ (Σ × B) ∪ X, one can compute in linear time a nondeterministic automaton Π_Σ(A) over Σ × X such that L(Π_Σ(A)) = Π_Σ(L(A)). [sent-355, score-0.888]

34 From Pruning Strategies to Pruning Functions. We next show how to lift pruning strategies in order to prune positively annotated trees. [sent-357, score-0.854]

35 Given a pruning strategy σ, our first objective is to define a pruning function p_σ that can be applied to all positively annotated trees t ∗ β with t ∈ L(D), while preserving the critical regions σ(t, ν) of all positively annotated nodes ν with their annotations: p_σ(t ∗ β) = ⊔_{ν ∈ dom(β)} σ(t, ν) ∗ β. [sent-359, score-1.756]
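
The join ⊔ can be read as superimposing pruned trees: a node survives if at least one σ(t, ν) keeps it, and is collapsed to its schema state only if every argument pruned it. A hedged sketch under the same representation as above; re-attaching the annotations β after the join is omitted for brevity.

```python
# Hedged sketch of the join of pruned trees over the same original tree
# and of lifting a strategy sigma to a pruning function p_sigma.

from functools import reduce

def join(s1, s2):
    if isinstance(s1, str):          # s1 pruned here: follow s2
        return s2
    if isinstance(s2, str):          # s2 pruned here: follow s1
        return s1
    (lab1, ch1), (lab2, ch2) = s1, s2
    assert lab1 == lab2              # both stem from the same tree t
    return (lab1, [join(a, b) for a, b in zip(ch1, ch2)])

def lift(sigma, t, beta):
    """p_sigma(t * beta): join of sigma(t, nu) over the positively
    annotated nodes nu; assumes beta contains at least one 1."""
    return reduce(join, [sigma(t, nu) for nu, b in beta.items() if b == 1])
```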

36 In our experiments, we will exclusively work with pruning functions p_σ, even though the alternative pruning function p_can is highly promising for learning n-ary queries in particular. [sent-374, score-1.069]

37 Lemma 7 For any pruning strategy σ, both p_σ and p_can are pruning functions. [sent-378, score-0.913]

38 It should also be noticed that pruning functions need to be adapted to unranked trees, before they become suitable for our experiments on XML query induction. [sent-380, score-0.858]

39 Any pruning function on unranked trees can be compiled back to a (more involved) pruning function on ranked trees via a binary encoding of unranked trees (see Section 8). [sent-381, score-1.798]

40 Let t ∗ β be an annotated example for query Q with schema D and p a pruning function with the same schema. [sent-384, score-1.147]

41 Definition 8 Let D be a deterministic tree automaton, p be a pruning function and Q be a query both with schema D. [sent-389, score-1.126]

42 Proposition 9 Let σ be a pruning strategy and Q a query with the same schema D; then Q is σ-stable ⇔ Q is p_σ-stable. [sent-391, score-0.902]

43 Let D be a deterministic automaton and p be a pruning function with schema D. [sent-406, score-1.015]

44 The following proposition shows that stable queries can be uniquely identified by their language of pruned positively annotated trees, under the assumption that the schema is known. [sent-416, score-0.977]

45 Theorem 11 Let D be a deterministic tree automaton, p a pruning function and Q a query both with schema D. [sent-433, score-1.126]

46 In our learning algorithm we will represent regular p-stable queries by tree automata that recognize the language p(L_Q) of pruned positively annotated examples for Q. [sent-452, score-1.078]

47 The objective of this section is to show that every regular p-stable query can be represented in this manner under the assumption that the pruning function p is regular too, a notion that we will introduce. [sent-453, score-0.804]

48 Regular Stable Queries. We recall a definition of regular queries and show how to represent stable regular queries by regular languages of pruned annotated examples. [sent-455, score-1.097]

49 Definition 12 A query Q is regular if the set L_Q of unpruned annotated examples for Q is a regular tree language. [sent-456, score-0.966]

50 As before, let D be a fixed deterministic tree automaton with signature Σ and state set X, so that pruned annotated trees have the signature Σ ∪ (Σ × B) ∪ X. [sent-457, score-1.324]

51 Given a tree automaton A that recognizes pruned annotated trees, we define the query Q_A with schema D by Q_{L(A)}. [sent-458, score-1.391]

52 Its pruning according to the pruning function p_path-only_Lib is shown on the right. [sent-461, score-0.856]

53 Lemma 13 A query Q with schema D is regular if and only if Q = Q_L for some regular language L of D-pruned annotated trees. [sent-467, score-0.926]

54 Again, we will use tree automata for this purpose, leading to the notion of regular pruning functions. [sent-482, score-0.796]

55 All maximal subtrees in which no node is annotated by 1 must be pruned and all nodes above nodes annotated by 1 must be preserved. [sent-484, score-0.893]
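
This characterization can be implemented directly on annotated trees. A hedged sketch with the same conventions as before; eval_state is again a hypothetical helper returning the schema state of a subtree.

```python
# Hedged sketch of a path-only pruning function on annotated trees:
# collapse every maximal 1-free subtree to its schema state, and keep
# every node that lies above (or at) a node annotated by 1.

def contains_positive(tree):
    label, children = tree
    here = isinstance(label, tuple) and label[1] == 1
    return here or any(contains_positive(c) for c in children)

def prune_path_only(tree, eval_state):
    if not contains_positive(tree):
        return eval_state(tree)      # maximal 1-free subtree: prune
    label, children = tree
    return (label, [prune_path_only(c, eval_state) for c in children])
```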

56 Then, it is easy to define a tree automaton which checks whether the definition of a “path-only” pruning function is satisfied. [sent-488, score-0.939]

57 Formally, let us consider a pruning function p and an annotated tree t ∗ β in its domain. [sent-490, score-0.901]

58 We say that a tree automaton P with signature (Σ ∪ (Σ × B)) × {y, n} defines a pruning function p with schema D if L_p = {s ∈ L(P) | Π_Σ(s) ∈ L(D)}. [sent-495, score-1.217]

59 Definition 15 A pruning function p with schema D is called regular if it is equal to ℘_{P,D} for some tree automaton P. [sent-498, score-1.228]

60 Furthermore, if we avoid such a conversion, we can indeed evaluate the application of a pruning function to an annotated tree efficiently, as we show next. [sent-524, score-0.901]

61 If L = L(A) is a language of unpruned annotated trees and p = ℘_{P,D} a pruning function, then p(L) can be recognized by a deterministic tree automaton of size O(|D| 2|P| |A|). [sent-540, score-1.602]

62 This shows that every regular p-stable query Q can be represented by a (minimal deterministic) tree automaton that recognizes the language p(L_Q) of p-pruned positively annotated examples for Q. [sent-553, score-1.227]

63 Deciding Stability. For our experimental validation, it will be necessary to decide whether a regular query is stable for a regular pruning function. [sent-556, score-0.856]

64 This can be done in polynomial time: Theorem 19 Let D and P be deterministic automata defining a regular pruning function p = ℘_{P,D} and Q a query with domain D. [sent-557, score-0.885]

65 Since domains of pruning functions contain positively annotated trees, there must exist a p-pruned example t′_1 ∗ β′_1 ∈ p(L_Q) for Q and an unpruned example t_2 ∗ β_2 ∈ L_Q for Q such that t′_1 and t_2 coincide at some node ν with β′_1(ν) = 1 and β_2(ν) = 0. [sent-574, score-0.874]

66 Theorem 22 Let D be a deterministic tree automaton with signature Σ and state set X, and A be a tree automaton with signature Σ ∪ (Σ × B) ∪ X. [sent-584, score-1.227]

67 p-Consistency. In our learning algorithms, we will need to check whether a language of annotated trees contains only p-pruned trees for a given regular pruning function p. [sent-602, score-0.858]

68 Proposition 24 Let p = ℘_{P,D} be a regular pruning function, and let L = L(A) be a regular set of pruned annotated trees defined by a deterministic automaton with signature Σ ∪ (Σ × B) ∪ X. [sent-604, score-1.575]

69 For instance, “path-only” pruning functions can be defined by a 2-state automaton (indeed the same automaton for all schemas D), and “path-extended” pruning functions by a 3-state automaton. [sent-614, score-1.546]
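
The two states can be read as tracking, bottom-up, whether a subtree contains a node annotated by 1. A hedged sketch of the corresponding check on a single tree; it tests only the “must be pruned” half of the path-only condition, since the “must be preserved” half also needs the original unpruned tree.

```python
# Hedged sketch of the 2-state "path-only" check: state "yes" means the
# subtree contains a node annotated 1, state "no" means it does not.
# In a correctly path-only-pruned tree, every kept (non-state) node
# must evaluate to "yes".

def contains_one(tree):
    if isinstance(tree, str):        # a schema state leaf
        return "no"
    label, children = tree
    here = isinstance(label, tuple) and label[1] == 1
    below = any(contains_one(c) == "yes" for c in children)
    return "yes" if here or below else "no"

def well_pruned(tree):
    if isinstance(tree, str):
        return True
    _, children = tree
    return contains_one(tree) == "yes" and all(map(well_pruned, children))
```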

70 We suppose that the schema is fixed by a deterministic automaton D, and that the pruning function p = ℘_{P,D} is fixed by a deterministic tree automaton P. [sent-621, score-1.584]

71 The idea is therefore to identify the minimal deterministic tree automaton for the language L = p(L_Q) associated with the p-stable target query Q. [sent-623, score-0.889]

72 Definition 25 A p-pruned sample is a pair (S+ , S− ) where S+ is a p-consistent finite set of positively annotated trees and S− a finite set of negatively annotated trees such that S+ ∪ S− is D-functional. [sent-625, score-0.911]

73 Theorem 26 Let schema D be a deterministic tree automaton and p be a fixed regular pruning function with schema D. [sent-628, score-1.501]

74 Let A be the class of deterministic tree automata that recognize languages p(L_Q) of some regular p-stable query Q, and S the class of p-pruned samples of annotated examples. [sent-629, score-0.968]

75 Let A be a deterministic tree automaton recognizing the language L = p(L_Q) of the p-stable target query Q. [sent-640, score-0.93]

76 It is parameterized by a deterministic tree automaton D and a regular pruning function p with schema D. [sent-657, score-1.286]

77 As shown above, the automaton p-stable-RPNI_D(S+, S−) can be computed in polynomial time in the size of the input sample (S+, S−), for a fixed regular pruning function p = ℘_{P,D}. [sent-688, score-0.816]
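
p-stable-RPNI_D follows the usual RPNI state-merging scheme. Below is a heavily simplified, hedged skeleton, with the construction of the initial automaton, the merging step, and the consistency tests (rejection of S− plus, here, p-consistency) abstracted into caller-supplied functions; none of these helpers are the paper's actual procedures.

```python
# Hedged RPNI-style skeleton: start from an initial automaton built
# from the positive sample, then greedily merge states in a canonical
# order, keeping a merge only if the result stays consistent.

def rpni(initial_automaton, canonical_order, merge, consistent):
    """merge(aut, q, r) returns aut with q merged into r (and made
    deterministic), or None if the merge fails; consistent(aut) must
    check rejection of S- and, for p-stable-RPNI, p-consistency."""
    aut = initial_automaton
    order = list(canonical_order)
    for i, q in enumerate(order):
        for r in order[:i]:                  # earlier states only
            cand = merge(aut, q, r)
            if cand is not None and consistent(cand):
                aut = cand                   # keep the first good merge
                break
    return aut
```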

78 For instance, consider the pruning strategy σ = path-only with the universal schema D = U, and the query Q which selects all a’s whose left-sibling is labeled by b. [sent-710, score-0.902]

79 For instance, any deterministic DTD for unranked trees can be transformed in polynomial time into a bottom-up deterministic tree automaton for curried binary encodings. [sent-738, score-0.982]
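
The curried binary encoding is the standard one: an unranked node with children t1, …, tn becomes a left comb of binary @ nodes. A minimal sketch under the (label, children) representation used above:

```python
# Hedged sketch of the curried binary encoding of unranked trees:
# f(t1, ..., tn) becomes @(...@(@(f, enc(t1)), enc(t2))..., enc(tn)),
# where "@" is a fresh binary symbol and every label becomes a constant.

def curry(tree):
    label, children = tree
    enc = (label, [])                  # the label as a constant
    for c in children:
        enc = ("@", [enc, curry(c)])   # extend the left comb
    return enc

u = ("lib", [("b", [("a", [])]), ("b", [])])
assert curry(u) == ("@", [("@", [("lib", []),
                                 ("@", [("b", []), ("a", [])])]),
                          ("b", [])])
```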

80 Node selection queries on unranked trees therefore correspond to leaf selection queries on ranked trees. [sent-750, score-0.808]

81 We next introduce stable regular queries and pruning strategies for unranked trees, and show how to translate them to ranked trees via a binary encoding. [sent-751, score-1.242]

82 A schema will be defined as a deterministic tree automaton D with ranked signature Σ@ and state set X. [sent-754, score-0.947]

83 A pruned unranked tree is an unranked tree with label set Σ ∪ X. [sent-756, score-0.933]

84 We define pruning strategies path-only_D and path-ext_D on unranked trees as before, such that they preserve the path to the input node and the extended path, respectively. [sent-761, score-0.906]

85 The next example shows that the corresponding pruning strategies on ranked trees are quite different from what one obtains when applying the ranked path-only_D and path-ext_D pruning strategies to binary encodings of unranked trees. [sent-762, score-1.503]

86 The unranked “path-only” pruning strategy without schema restrictions applied to the first author of the second book yields: path-ext(u, 2 · 2) = lib(⊤, b(a, ⊤), ⊤). [sent-765, score-0.898]

87 It follows that a node selection query on unranked trees is stable for an unranked pruning strategy if and only if the corresponding leaf selection query on ranked trees is stable for the corresponding ranked pruning strategy. [sent-770, score-2.371]

88 It follows, as before, that query stability carries over to less aggressive pruning strategies. An annotated unranked tree is a tree in U_{Σ∪(Σ×B)}(X). [sent-772, score-1.658]

89 It should be noticed that annotated ranked trees correspond to annotated unranked trees in which only leaves are annotated, and vice versa. [sent-773, score-1.139]

90 Ranked pruning functions then correspond precisely to unranked pruning functions, whose domains are restricted to leaf-only-annotated trees. [sent-775, score-1.058]

91 Furthermore, we can lift pruning strategies to pruning functions in the unranked case in the same manner as in the ranked case. [sent-776, score-1.234]

92 The former is based on learning local tree automata for unranked trees, while the latter is based on learning unrestricted deterministic tree automata for ranked trees. [sent-829, score-0.927]

93 [Table 2] Stability of the XML queries used in our experiments and presented in Figure 13 w.r.t. the pruning strategies path-only_XMark, path-ext, and path-ext_XMark; the flattened yes/no table cells are not recoverable here. [sent-983, score-0.955]

94 The results show that the best pruning strategy for learning a query is always the most aggressive one for which the target query is stable. [sent-990, score-1.011]

95 Conclusion and Future Work We distinguished classes of stable queries for schema-guided pruning strategies, and proposed new learning algorithms for regular stable queries. [sent-994, score-0.793]

96 Also, in the future, query induction with schema-guided pruning strategies should be extended to n-ary regular queries (Lemay et al. [sent-1125, score-1.008]

97 The new framework for pruning strategies provided in this paper should be sufficiently general that one can define appropriate pruning strategies in the n-ary case (in contrast to previous settings). [sent-1127, score-0.99]

98 In particular, one of them spotted an error in a previous version of Proposition 5, which led us to the important distinction between pruning strategies and pruning functions. [sent-1133, score-0.923]

99 Remaining Proofs. Lemma 13 (Open Part) For any tree automaton A recognizing D-pruned annotated trees, the language of unpruned trees L_{Q_A} of the query Q_A with schema D is regular. [sent-1141, score-1.6]

100 Theorem 19 Let D be a deterministic tree automaton with signature Σ, P a deterministic tree automaton with signature (Σ ∪ (Σ × B)) × {y, n}, p = ℘_{P,D} a pruning function, and Q a query with domain D. [sent-1196, score-1.92]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('pruning', 0.428), ('automaton', 0.314), ('annotated', 0.276), ('xml', 0.231), ('query', 0.228), ('schema', 0.215), ('cons', 0.214), ('unranked', 0.202), ('tree', 0.197), ('queries', 0.187), ('trees', 0.153), ('pruned', 0.135), ('unpruned', 0.117), ('lq', 0.116), ('lib', 0.107), ('automata', 0.097), ('xpath', 0.094), ('emay', 0.081), ('hampav', 0.081), ('iehren', 0.081), ('illeron', 0.081), ('nil', 0.081), ('annotations', 0.079), ('ranked', 0.079), ('chema', 0.077), ('nduction', 0.077), ('trategies', 0.077), ('uery', 0.077), ('uided', 0.077), ('regular', 0.074), ('char', 0.073), ('compld', 0.069), ('strategies', 0.067), ('runing', 0.066), ('ql', 0.066), ('signature', 0.063), ('aggressive', 0.063), ('schemas', 0.062), ('language', 0.059), ('nodes', 0.058), ('deterministic', 0.058), ('node', 0.056), ('carme', 0.056), ('dtd', 0.056), ('rpni', 0.056), ('annotation', 0.054), ('positively', 0.053), ('stable', 0.052), ('qas', 0.051), ('extraction', 0.051), ('yes', 0.048), ('champav', 0.047), ('gilleron', 0.047), ('lemay', 0.047), ('documents', 0.046), ('evald', 0.043), ('recognizing', 0.041), ('dom', 0.041), ('curry', 0.039), ('joachim', 0.039), ('qbs', 0.039), ('raeymaekers', 0.039), ('xmark', 0.039), ('city', 0.038), ('languages', 0.038), ('subtrees', 0.034), ('target', 0.033), ('document', 0.033), ('strategy', 0.031), ('lift', 0.03), ('html', 0.03), ('rejected', 0.03), ('geographical', 0.029), ('web', 0.028), ('interactive', 0.028), ('qa', 0.026), ('states', 0.026), ('gottlob', 0.026), ('pcan', 0.026), ('qcity', 0.026), ('recognizes', 0.026), ('aur', 0.026), ('lien', 0.026), ('inclusion', 0.025), ('induction', 0.024), ('country', 0.023), ('name', 0.022), ('user', 0.022), ('georg', 0.022), ('author', 0.022), ('database', 0.022), ('geo', 0.021), ('icgi', 0.021), ('monadic', 0.021), ('niehren', 0.021), ('qname', 0.021), ('grammatical', 0.021), ('state', 0.021), ('population', 0.02), ('lille', 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999994 91 jmlr-2013-Query Induction with Schema-Guided Pruning Strategies

Author: Joachim Niehren, Jérôme Champavère, Aurélien Lemay, Rémi Gilleron

Abstract: Inference algorithms for tree automata that define node selecting queries in unranked trees rely on tree pruning strategies. These impose additional assumptions on node selection that are needed to compensate for small numbers of annotated examples. Pruning-based heuristics in query learning algorithms for Web information extraction often boost the learning quality and speed up the learning process. We will distinguish the class of regular queries that are stable under a given schema-guided pruning strategy, and show that this class is learnable with polynomial time and data. Our learning algorithm is obtained by adding pruning heuristics to the traditional learning algorithm for tree automata from positive and negative examples. While justified by a formal learning model, our learning algorithm for stable queries also performs very well in the practice of XML information extraction. Keywords: XML information extraction, XML schemas, interactive learning, tree automata, grammatical inference

2 0.070350282 78 jmlr-2013-On the Learnability of Shuffle Ideals

Author: Dana Angluin, James Aspnes, Sarah Eisenstat, Aryeh Kontorovich

Abstract: PAC learning of unrestricted regular languages is long known to be a difficult problem. The class of shuffle ideals is a very restricted subclass of regular languages, where the shuffle ideal generated by a string u is the collection of all strings containing u as a subsequence. This fundamental language family is of theoretical interest in its own right and provides the building blocks for other important language families. Despite its apparent simplicity, the class of shuffle ideals appears quite difficult to learn. In particular, just as for unrestricted regular languages, the class is not properly PAC learnable in polynomial time if RP = NP, and PAC learning the class improperly in polynomial time would imply polynomial time algorithms for certain fundamental problems in cryptography. In the positive direction, we give an efficient algorithm for properly learning shuffle ideals in the statistical query (and therefore also PAC) model under the uniform distribution. Keywords: PAC learning, statistical queries, regular languages, deterministic finite automata, shuffle ideals, subsequences

3 0.066832632 94 jmlr-2013-Ranked Bandits in Metric Spaces: Learning Diverse Rankings over Large Document Collections

Author: Aleksandrs Slivkins, Filip Radlinski, Sreenivas Gollapudi

Abstract: Most learning to rank research has assumed that the utility of different documents is independent, which results in learned ranking functions that return redundant results. The few approaches that avoid this have rather unsatisfyingly lacked theoretical foundations, or do not scale. We present a learning-to-rank formulation that optimizes the fraction of satisfied users, with several scalable algorithms that explicitly takes document similarity and ranking context into account. Our formulation is a non-trivial common generalization of two multi-armed bandit models from the literature: ranked bandits (Radlinski et al., 2008) and Lipschitz bandits (Kleinberg et al., 2008b). We present theoretical justifications for this approach, as well as a near-optimal algorithm. Our evaluation adds optimizations that improve empirical performance, and shows that our algorithms learn orders of magnitude more quickly than previous approaches. Keywords: online learning, clickthrough data, diversity, multi-armed bandits, contextual bandits, regret, metric spaces

4 0.06650281 39 jmlr-2013-Efficient Active Learning of Halfspaces: An Aggressive Approach

Author: Alon Gonen, Sivan Sabato, Shai Shalev-Shwartz

Abstract: We study pool-based active learning of half-spaces. We revisit the aggressive approach for active learning in the realizable case, and show that it can be made efficient and practical, while also having theoretical guarantees under reasonable assumptions. We further show, both theoretically and experimentally, that it can be preferable to mellow approaches. Our efficient aggressive active learner of half-spaces has formal approximation guarantees that hold when the pool is separable with a margin. While our analysis is focused on the realizable setting, we show that a simple heuristic allows using the same algorithm successfully for pools with low error as well. We further compare the aggressive approach to the mellow approach, and prove that there are cases in which the aggressive approach results in significantly better label complexity compared to the mellow approach. We demonstrate experimentally that substantial improvements in label complexity can be achieved using the aggressive approach, for both realizable and low-error settings. Keywords: active learning, linear classifiers, margin, adaptive sub-modularity

5 0.05976351 92 jmlr-2013-Random Spanning Trees and the Prediction of Weighted Graphs

Author: Nicolò Cesa-Bianchi, Claudio Gentile, Fabio Vitale, Giovanni Zappella

Abstract: We investigate the problem of sequentially predicting the binary labels on the nodes of an arbitrary weighted graph. We show that, under a suitable parametrization of the problem, the optimal number of prediction mistakes can be characterized (up to logarithmic factors) by the cutsize of a random spanning tree of the graph. The cutsize is induced by the unknown adversarial labeling of the graph nodes. In deriving our characterization, we obtain a simple randomized algorithm achieving in expectation the optimal mistake bound on any polynomially connected weighted graph. Our algorithm draws a random spanning tree of the original graph and then predicts the nodes of this tree in constant expected amortized time and linear space. Experiments on real-world data sets show that our method compares well to both global (Perceptron) and local (label propagation) methods, while being generally faster in practice. Keywords: online learning, learning on graphs, graph prediction, random spanning trees

6 0.057287991 71 jmlr-2013-Message-Passing Algorithms for Quadratic Minimization

7 0.044167973 60 jmlr-2013-Learning Bilinear Model for Matching Queries and Documents

8 0.036705703 44 jmlr-2013-Finding Optimal Bayesian Networks Using Precedence Constraints

9 0.035556275 120 jmlr-2013-Variational Algorithms for Marginal MAP

10 0.03316509 63 jmlr-2013-Learning Trees from Strings: A Strong Learning Algorithm for some Context-Free Grammars

11 0.031099176 40 jmlr-2013-Efficient Program Synthesis Using Constraint Satisfaction in Inductive Logic Programming

12 0.025163297 19 jmlr-2013-BudgetedSVM: A Toolbox for Scalable SVM Approximations

13 0.024709122 13 jmlr-2013-Approximating the Permanent with Fractional Belief Propagation

14 0.023336748 33 jmlr-2013-Dimension Independent Similarity Computation

15 0.022267822 82 jmlr-2013-Optimally Fuzzy Temporal Memory

16 0.020782439 38 jmlr-2013-Dynamic Affine-Invariant Shape-Appearance Handshape Features and Classification in Sign Language Videos

17 0.020497702 30 jmlr-2013-Counterfactual Reasoning and Learning Systems: The Example of Computational Advertising

18 0.018513365 66 jmlr-2013-MAGIC Summoning: Towards Automatic Suggesting and Testing of Gestures With Low Probability of False Positives During Use

19 0.018193293 10 jmlr-2013-Algorithms and Hardness Results for Parallel Large Margin Learning

20 0.018123752 8 jmlr-2013-A Theory of Multiclass Boosting


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.098), (1, 0.016), (2, -0.028), (3, 0.039), (4, 0.004), (5, 0.104), (6, 0.06), (7, 0.014), (8, -0.191), (9, 0.089), (10, 0.005), (11, -0.167), (12, 0.013), (13, 0.182), (14, -0.053), (15, 0.059), (16, -0.072), (17, 0.053), (18, 0.104), (19, 0.039), (20, 0.014), (21, 0.058), (22, 0.122), (23, -0.026), (24, 0.223), (25, -0.139), (26, -0.023), (27, -0.084), (28, 0.112), (29, 0.082), (30, 0.235), (31, 0.017), (32, -0.111), (33, 0.164), (34, 0.003), (35, -0.191), (36, 0.003), (37, -0.115), (38, 0.004), (39, 0.031), (40, -0.033), (41, -0.079), (42, -0.031), (43, -0.17), (44, 0.007), (45, -0.039), (46, 0.099), (47, -0.064), (48, -0.092), (49, -0.065)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.98112375 91 jmlr-2013-Query Induction with Schema-Guided Pruning Strategies

Author: Joachim Niehren, Jérôme Champavère, Aurélien Lemay, Rémi Gilleron

Abstract: Inference algorithms for tree automata that define node selecting queries in unranked trees rely on tree pruning strategies. These impose additional assumptions on node selection that are needed to compensate for small numbers of annotated examples. Pruning-based heuristics in query learning algorithms for Web information extraction often boost the learning quality and speed up the learning process. We will distinguish the class of regular queries that are stable under a given schema-guided pruning strategy, and show that this class is learnable with polynomial time and data. Our learning algorithm is obtained by adding pruning heuristics to the traditional learning algorithm for tree automata from positive and negative examples. While justified by a formal learning model, our learning algorithm for stable queries also performs very well in the practice of XML information extraction. Keywords: XML information extraction, XML schemas, interactive learning, tree automata, grammatical inference

2 0.4605225 60 jmlr-2013-Learning Bilinear Model for Matching Queries and Documents

Author: Wei Wu, Zhengdong Lu, Hang Li

Abstract: The task of matching data from two heterogeneous domains naturally arises in various areas such as web search, collaborative filtering, and drug design. In web search, existing work has designed relevance models to match queries and documents by exploiting either user clicks or content of queries and documents. To the best of our knowledge, however, there has been little work on principled approaches to leveraging both clicks and content to learn a matching model for search. In this paper, we propose a framework for learning to match heterogeneous objects. The framework learns two linear mappings for two objects respectively, and matches them via the dot product of their images after mapping. Moreover, when different regularizations are enforced, the framework renders a rich family of matching models. With orthonormal constraints on mapping functions, the framework subsumes Partial Least Squares (PLS) as a special case. Alternatively, with a ℓ1 +ℓ2 regularization, we obtain a new model called Regularized Mapping to Latent Structures (RMLS). RMLS enjoys many advantages over PLS, including lower time complexity and easy parallelization. To further understand the matching framework, we conduct generalization analysis and apply the result to both PLS and RMLS. We apply the framework to web search and implement both PLS and RMLS using a click-through bipartite with metadata representing features of queries and documents. We test the efficacy and scalability of RMLS and PLS on large scale web search problems. The results show that both PLS and RMLS can significantly outperform baseline methods, while RMLS substantially speeds up the learning process. Keywords: web search, partial least squares, regularized mapping to latent structures, generalization analysis

3 0.45513955 94 jmlr-2013-Ranked Bandits in Metric Spaces: Learning Diverse Rankings over Large Document Collections

Author: Aleksandrs Slivkins, Filip Radlinski, Sreenivas Gollapudi

Abstract: Most learning to rank research has assumed that the utility of different documents is independent, which results in learned ranking functions that return redundant results. The few approaches that avoid this have rather unsatisfyingly lacked theoretical foundations, or do not scale. We present a learning-to-rank formulation that optimizes the fraction of satisfied users, with several scalable algorithms that explicitly takes document similarity and ranking context into account. Our formulation is a non-trivial common generalization of two multi-armed bandit models from the literature: ranked bandits (Radlinski et al., 2008) and Lipschitz bandits (Kleinberg et al., 2008b). We present theoretical justifications for this approach, as well as a near-optimal algorithm. Our evaluation adds optimizations that improve empirical performance, and shows that our algorithms learn orders of magnitude more quickly than previous approaches. Keywords: online learning, clickthrough data, diversity, multi-armed bandits, contextual bandits, regret, metric spaces

4 0.39813986 92 jmlr-2013-Random Spanning Trees and the Prediction of Weighted Graphs

Author: Nicolò Cesa-Bianchi, Claudio Gentile, Fabio Vitale, Giovanni Zappella

Abstract: We investigate the problem of sequentially predicting the binary labels on the nodes of an arbitrary weighted graph. We show that, under a suitable parametrization of the problem, the optimal number of prediction mistakes can be characterized (up to logarithmic factors) by the cutsize of a random spanning tree of the graph. The cutsize is induced by the unknown adversarial labeling of the graph nodes. In deriving our characterization, we obtain a simple randomized algorithm achieving in expectation the optimal mistake bound on any polynomially connected weighted graph. Our algorithm draws a random spanning tree of the original graph and then predicts the nodes of this tree in constant expected amortized time and linear space. Experiments on real-world data sets show that our method compares well to both global (Perceptron) and local (label propagation) methods, while being generally faster in practice. Keywords: online learning, learning on graphs, graph prediction, random spanning trees

5 0.38786706 39 jmlr-2013-Efficient Active Learning of Halfspaces: An Aggressive Approach

Author: Alon Gonen, Sivan Sabato, Shai Shalev-Shwartz

Abstract: We study pool-based active learning of half-spaces. We revisit the aggressive approach for active learning in the realizable case, and show that it can be made efficient and practical, while also having theoretical guarantees under reasonable assumptions. We further show, both theoretically and experimentally, that it can be preferable to mellow approaches. Our efficient aggressive active learner of half-spaces has formal approximation guarantees that hold when the pool is separable with a margin. While our analysis is focused on the realizable setting, we show that a simple heuristic allows using the same algorithm successfully for pools with low error as well. We further compare the aggressive approach to the mellow approach, and prove that there are cases in which the aggressive approach results in significantly better label complexity compared to the mellow approach. We demonstrate experimentally that substantial improvements in label complexity can be achieved using the aggressive approach, for both realizable and low-error settings. Keywords: active learning, linear classifiers, margin, adaptive sub-modularity

6 0.35196713 40 jmlr-2013-Efficient Program Synthesis Using Constraint Satisfaction in Inductive Logic Programming

7 0.32845557 71 jmlr-2013-Message-Passing Algorithms for Quadratic Minimization

8 0.24623156 82 jmlr-2013-Optimally Fuzzy Temporal Memory

9 0.24333507 78 jmlr-2013-On the Learnability of Shuffle Ideals

10 0.22610712 38 jmlr-2013-Dynamic Affine-Invariant Shape-Appearance Handshape Features and Classification in Sign Language Videos

11 0.22416252 63 jmlr-2013-Learning Trees from Strings: A Strong Learning Algorithm for some Context-Free Grammars

12 0.19940363 67 jmlr-2013-MLPACK: A Scalable C++ Machine Learning Library

13 0.1896054 120 jmlr-2013-Variational Algorithms for Marginal MAP

14 0.1609562 1 jmlr-2013-AC++Template-Based Reinforcement Learning Library: Fitting the Code to the Mathematics

15 0.1586854 19 jmlr-2013-BudgetedSVM: A Toolbox for Scalable SVM Approximations

16 0.1484725 44 jmlr-2013-Finding Optimal Bayesian Networks Using Precedence Constraints

17 0.14402059 66 jmlr-2013-MAGIC Summoning: Towards Automatic Suggesting and Testing of Gestures With Low Probability of False Positives During Use

18 0.13662481 99 jmlr-2013-Semi-Supervised Learning Using Greedy Max-Cut

19 0.1264821 74 jmlr-2013-Multivariate Convex Regression with Adaptive Partitioning

20 0.11269489 83 jmlr-2013-Orange: Data Mining Toolbox in Python


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.015), (5, 0.068), (6, 0.06), (9, 0.016), (10, 0.035), (20, 0.026), (23, 0.016), (41, 0.567), (68, 0.02), (70, 0.016), (75, 0.03), (87, 0.013), (89, 0.015)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.88201201 83 jmlr-2013-Orange: Data Mining Toolbox in Python

Author: Janez Demšar, Tomaž Curk, Aleš Erjavec, Črt Gorup, Tomaž Hočevar, Mitar Milutinovič, Martin Možina, Matija Polajnar, Marko Toplak, Anže Starič, Miha Štajdohar, Lan Umek, Lan Žagar, Jure Žbontar, Marinka Žitnik, Blaž Zupan

Abstract: Orange is a machine learning and data mining suite for data analysis through Python scripting and visual programming. Here we report on the scripting part, which features interactive data analysis and component-based assembly of data mining procedures. In the selection and design of components, we focus on the flexibility of their reuse: our principal intention is to let the user write simple and clear scripts in Python, which build upon C++ implementations of computationallyintensive tasks. Orange is intended both for experienced users and programmers, as well as for students of data mining. Keywords: Python, data mining, machine learning, toolbox, scripting

same-paper 2 0.83207369 91 jmlr-2013-Query Induction with Schema-Guided Pruning Strategies

Author: Joachim Niehren, Jérôme Champavère, Aurélien Lemay, Rémi Gilleron

Abstract: Inference algorithms for tree automata that define node selecting queries in unranked trees rely on tree pruning strategies. These impose additional assumptions on node selection that are needed to compensate for small numbers of annotated examples. Pruning-based heuristics in query learning algorithms for Web information extraction often boost the learning quality and speed up the learning process. We will distinguish the class of regular queries that are stable under a given schemaguided pruning strategy, and show that this class is learnable with polynomial time and data. Our learning algorithm is obtained by adding pruning heuristics to the traditional learning algorithm for tree automata from positive and negative examples. While justified by a formal learning model, our learning algorithm for stable queries also performs very well in practice of XML information extraction. Keywords: XML information extraction, XML schemas, interactive learning, tree automata, grammatical inference

3 0.60119063 103 jmlr-2013-Sparse Robust Estimation and Kalman Smoothing with Nonsmooth Log-Concave Densities: Modeling, Computation, and Theory

Author: Aleksandr Y. Aravkin, James V. Burke, Gianluigi Pillonetto

Abstract: We introduce a new class of quadratic support (QS) functions, many of which already play a crucial role in a variety of applications, including machine learning, robust statistical inference, sparsity promotion, and inverse problems such as Kalman smoothing. Well known examples of QS penalties include the ℓ2 , Huber, ℓ1 and Vapnik losses. We build on a dual representation for QS functions, using it to characterize conditions necessary to interpret these functions as negative logs of true probability densities. This interpretation establishes the foundation for statistical modeling with both known and new QS loss functions, and enables construction of non-smooth multivariate distributions with specified means and variances from simple scalar building blocks. The main contribution of this paper is a flexible statistical modeling framework for a variety of learning applications, together with a toolbox of efficient numerical methods for estimation. In particular, a broad subclass of QS loss functions known as piecewise linear quadratic (PLQ) penalties has a dual representation that can be exploited to design interior point (IP) methods. IP methods solve nonsmooth optimization problems by working directly with smooth systems of equations characterizing their optimality. We provide several numerical examples, along with a code that can be used to solve general PLQ problems. The efficiency of the IP approach depends on the structure of particular applications. We consider the class of dynamic inverse problems using Kalman smoothing. This class comprises a wide variety of applications, where the aim is to reconstruct the state of a dynamical system with known process and measurement models starting from noisy output samples. In the classical case, Gaus∗. The authors would like to thank Bradley M. Bell for insightful discussions and helpful suggestions. The research leading to these results has received funding from the European Union Seventh Framework Programme [FP7/2007-2013

4 0.19701274 60 jmlr-2013-Learning Bilinear Model for Matching Queries and Documents

Author: Wei Wu, Zhengdong Lu, Hang Li

Abstract: The task of matching data from two heterogeneous domains naturally arises in various areas such as web search, collaborative filtering, and drug design. In web search, existing work has designed relevance models to match queries and documents by exploiting either user clicks or content of queries and documents. To the best of our knowledge, however, there has been little work on principled approaches to leveraging both clicks and content to learn a matching model for search. In this paper, we propose a framework for learning to match heterogeneous objects. The framework learns two linear mappings for two objects respectively, and matches them via the dot product of their images after mapping. Moreover, when different regularizations are enforced, the framework renders a rich family of matching models. With orthonormal constraints on mapping functions, the framework subsumes Partial Least Squares (PLS) as a special case. Alternatively, with a ℓ1 +ℓ2 regularization, we obtain a new model called Regularized Mapping to Latent Structures (RMLS). RMLS enjoys many advantages over PLS, including lower time complexity and easy parallelization. To further understand the matching framework, we conduct generalization analysis and apply the result to both PLS and RMLS. We apply the framework to web search and implement both PLS and RMLS using a click-through bipartite with metadata representing features of queries and documents. We test the efficacy and scalability of RMLS and PLS on large scale web search problems. The results show that both PLS and RMLS can significantly outperform baseline methods, while RMLS substantially speeds up the learning process. Keywords: web search, partial least squares, regularized mapping to latent structures, generalization analysis

5 0.18923372 78 jmlr-2013-On the Learnability of Shuffle Ideals

Author: Dana Angluin, James Aspnes, Sarah Eisenstat, Aryeh Kontorovich

Abstract: PAC learning of unrestricted regular languages is long known to be a difficult problem. The class of shuffle ideals is a very restricted subclass of regular languages, where the shuffle ideal generated by a string u is the collection of all strings containing u as a subsequence. This fundamental language family is of theoretical interest in its own right and provides the building blocks for other important language families. Despite its apparent simplicity, the class of shuffle ideals appears quite difficult to learn. In particular, just as for unrestricted regular languages, the class is not properly PAC learnable in polynomial time if RP = NP, and PAC learning the class improperly in polynomial time would imply polynomial time algorithms for certain fundamental problems in cryptography. In the positive direction, we give an efficient algorithm for properly learning shuffle ideals in the statistical query (and therefore also PAC) model under the uniform distribution. Keywords: PAC learning, statistical queries, regular languages, deterministic finite automata, shuffle ideals, subsequences

6 0.18361799 39 jmlr-2013-Efficient Active Learning of Halfspaces: An Aggressive Approach

7 0.18095632 106 jmlr-2013-Stationary-Sparse Causality Network Learning

8 0.17833216 73 jmlr-2013-Multicategory Large-Margin Unified Machines

9 0.17611416 98 jmlr-2013-Segregating Event Streams and Noise with a Markov Renewal Process Model

10 0.17209955 120 jmlr-2013-Variational Algorithms for Marginal MAP

11 0.17007405 51 jmlr-2013-Greedy Sparsity-Constrained Optimization

12 0.16889219 17 jmlr-2013-Belief Propagation for Continuous State Spaces: Stochastic Message-Passing with Quantitative Guarantees

13 0.16880774 82 jmlr-2013-Optimally Fuzzy Temporal Memory

14 0.16717516 52 jmlr-2013-How to Solve Classification and Regression Problems on High-Dimensional Data with a Supervised Extension of Slow Feature Analysis

15 0.16684505 99 jmlr-2013-Semi-Supervised Learning Using Greedy Max-Cut

16 0.16642673 25 jmlr-2013-Communication-Efficient Algorithms for Statistical Optimization

17 0.16594224 2 jmlr-2013-A Binary-Classification-Based Metric between Time-Series Distributions and Its Use in Statistical and Learning Problems

18 0.16510628 48 jmlr-2013-Generalized Spike-and-Slab Priors for Bayesian Group Feature Selection Using Expectation Propagation

19 0.16486317 71 jmlr-2013-Message-Passing Algorithms for Quadratic Minimization

20 0.16389966 28 jmlr-2013-Construction of Approximation Spaces for Reinforcement Learning