emnlp emnlp2012 emnlp2012-37 knowledge-graph by maker-knowledge-mining

37 emnlp-2012-Dynamic Programming for Higher Order Parsing of Gap-Minding Trees


Source: pdf

Author: Emily Pitler ; Sampath Kannan ; Mitchell Marcus

Abstract: We introduce gap inheritance, a new structural property on trees, which provides a way to quantify the degree to which intervals of descendants can be nested. Based on this property, two new classes of trees are derived that provide a closer approximation to the set of plausible natural language dependency trees than some alternative classes of trees: unlike projective trees, a word can have descendants in more than one interval; unlike spanning trees, these intervals cannot be nested in arbitrary ways. The 1-Inherit class of trees has exactly the same empirical coverage of natural language sentences as the class of mildly nonprojective trees, yet the optimal scoring tree can be found in an order of magnitude less time. Gap-minding trees (the second class) have the property that all edges into an interval of descendants come from the same node, and thus an algorithm which uses only single in- tervals can produce trees in which a node has descendants in multiple intervals.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 edu Abstract We introduce gap inheritance, a new structural property on trees, which provides a way to quantify the degree to which intervals of descendants can be nested. [sent-3, score-0.81]

2 The 1-Inherit class of trees has exactly the same empirical coverage of natural language sentences as the class of mildly nonprojective trees, yet the optimal scoring tree can be found in an order of magnitude less time. [sent-5, score-0.689]

3 Gap-minding trees (the second class) have the property that all edges into an interval of descendants come from the same node, and thus an algorithm which uses only single in- tervals can produce trees in which a node has descendants in multiple intervals. [sent-6, score-1.271]

4 One commonly used space is the set of projective trees, in which every node’s descendants form a contiguous interval in the input sentence. [sent-8, score-0.628]

5 Finding the optimal tree in the set of projective trees can be done efficiently (Eisner, 2000), even when the score of a tree depends on higher order factors (McDonald and Pereira, 2006; Carreras, 2007; Koo and Collins, 2010). [sent-9, score-0.797]

6 The maximum scoring directed spanning tree can be found efficiently when the score of a tree depends only on edge-based factors (McDonald et al. [sent-13, score-0.595]

7 We propose two new classes of trees between projective trees and the set of all spanning trees. [sent-20, score-0.631]

8 These two classes provide a closer approximation to the set of plausible natural language dependency trees: unlike projective trees, a word can have descendants in more than one interval; unlike spanning trees, these intervals cannot be nested in arbitrary ways. [sent-21, score-0.705]

9 We introduce gap inheritance, a new structural property on trees, which provides a way to quantify the degree to which these intervals can be nested. [sent-22, score-0.604]

10 Different levels of gap inheritance define each of these two classes (Section 3). [sent-23, score-0.618]

11 The 1-Inherit class of trees (Section 4) has exactly the same empirical coverage (Table 1) of natural language sentences as the class of mildly non-projective trees (Bodirsky et al. [sent-24, score-0.561]

12 Gap-minding trees have the property that all edges into an interval of descendants come from the same node. [sent-29, score-0.565]

13 Non-contiguous intervals are therefore decoupled given this single node, and thus an algorithm which uses only single intervals (as in projective parsing) can produce trees in which a node has descendants in multiple intervals (as in mildly non-projective parsing (Gómez-Rodríguez et al. [sent-30, score-1.453]

14 An extension which finds the optimal scoring gap-minding tree with scores over pairs of adjacent edges (grand- parent scoring) is given in Section 6. [sent-34, score-0.431]

15 A dependency tree is a rooted, directed spanning tree that represents a set of dependencies between words in a sentence. [sent-39, score-0.481]

16 The tree has one artificial root node and vertices that correspond to the words in an input sentence w1, w2, . . . , wn. [sent-40, score-0.37]

17 The projection of a node is the set of words in the subtree rooted at it (including itself). [sent-46, score-0.628]

18 A tree is projective if, for every node in the tree, that node’s projection forms a contiguous interval in the input sentence order. [sent-47, score-0.863]

19 A gap of a node v is a non-empty, maximal interval that does not contain any words in the projection of v but lies between words that are in the projection of v. [sent-57, score-0.943]

20 The gap degree of a node is the number of gaps it has. [sent-58, score-0.578]

21 The gap degree of a tree is the maximum of the gap degrees of its vertices. [sent-59, score-0.843]

22 Note that a projective tree will have gap degree 0. [sent-61, score-0.742]

23 A mildly non-projective tree has gap degree at most one and is well-nested. [sent-67, score-0.748]
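The definitions above (projection, gap, gap degree) can be checked mechanically. A minimal sketch, assuming words are numbered 1..n and a head array where `heads[m-1]` is the head of word m and 0 is the artificial root; the function names are mine, not the paper's:

```python
from collections import defaultdict

def projections(heads):
    """heads[m-1] is the head of word m (0 = artificial root).
    Returns proj[v]: the sorted list of words in the subtree rooted at v."""
    children = defaultdict(list)
    for m, h in enumerate(heads, start=1):
        children[h].append(m)
    proj = {}
    def collect(v):
        words = [] if v == 0 else [v]
        for c in children[v]:
            words += collect(c)
        proj[v] = sorted(words)
        return proj[v]
    collect(0)
    return proj

def gap_degree(heads):
    """A node's gaps are the breaks in its sorted projection; the tree's
    gap degree is the maximum gap count over all words."""
    proj = projections(heads)
    def gaps(v):
        p = proj[v]
        return sum(1 for a, b in zip(p, p[1:]) if b > a + 1)
    return max(gaps(v) for v in proj if v != 0)
```

For example, the projective tree `heads = [0, 1, 1]` has gap degree 0, while `heads = [3, 0, 2, 2]` (word 1 attached to word 3 across word 2) has gap degree 1.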

24 Empirically, the dependency trees of natural language sentences seem to be mostly mildly non-projective, but mildly non-projective trees are quite expensive to parse (O(n^7)) (Gómez-Rodríguez et al. [sent-76, score-0.62]

25 The two new classes of structures, Mild+0-Inherit and Mild+1-Inherit, have more coverage of empirical data than projective structures, yet can be parsed faster than mildly non-projective structures. [sent-86, score-0.399]

26 Let us first consider an example of a tree which both has gap degree at most one and satisfies well-nestedness, yet appears to be an unrealistic structure for a natural language syntactic tree. [sent-91, score-0.592]

27 Consider a tree which is rooted at node xn+2, which has one child, node xn+1, whose projection is [x1, xn+1] ∪ [xn+3, x2n+2], with n children (x1, . . . , xn). [sent-92, score-0.951]

28 This tree is well-nested and has gap degree 1, but all n of xn+1's children have edges into the other projection interval. [sent-96, score-0.916]

29 A child is gap inheriting if its parent has gap degree 1 and it has descendants on both sides of its parent’s gap. [sent-99, score-1.24]

30 The inheritance degree of a node is the number of its children which inherit its gap. [sent-100, score-0.827]

31 The inheritance degree of a tree is the maximum inheritance degree over all its nodes. [sent-101, score-1.062]
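A hedged sketch of the inheritance-degree computation, restricted to trees whose nodes have gap degree at most one (the head-array encoding, where `heads[m-1]` is the head of word m and 0 is the artificial root, and the names are my own, not the paper's):

```python
from collections import defaultdict

def inheritance_degree(heads):
    """heads[m-1] is the head of word m (0 = artificial root).
    Assumes every node has gap degree at most 1."""
    children = defaultdict(list)
    for m, h in enumerate(heads, start=1):
        children[h].append(m)
    proj = {}
    def collect(v):
        words = [] if v == 0 else [v]
        for c in children[v]:
            words += collect(c)
        proj[v] = sorted(words)
        return proj[v]
    collect(0)

    def gap(v):
        p = proj[v]
        for a, b in zip(p, p[1:]):
            if b > a + 1:
                return a + 1, b - 1    # the (single) gap interval
        return None

    best = 0
    for v in proj:
        if v == 0:
            continue
        g = gap(v)
        if g is None:
            continue
        lo, hi = g
        # children of v with descendants on both sides of v's gap
        inheritors = sum(1 for c in children[v]
                         if min(proj[c]) < lo and max(proj[c]) > hi)
        best = max(best, inheritors)
    return best
```

On the constructed example above with n = 2 (root x4, child x3 with projection {1, 2, 3} ∪ {5, 6}), the encoding `heads = [3, 3, 4, 0, 1, 2]` gives inheritance degree 2, while any projective tree gives 0.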

32 Figure 1 gives examples of trees with varying degrees of gap inheritance. [sent-102, score-0.469]

33 Each projection of a node with a gap is shown with two matching rectangles. [sent-103, score-0.555]

34 If a child has a projection rectangle nested inside each of the parent’s projection rectangles, then that child inherits the parent’s gap. [sent-104, score-0.715]

35 Figure 1(a) shows a mildly non-projective tree (with inheritance degree 2), with both node 2 and node 11 inheriting their parent (node 3)'s gap (note that the dashed and dotted rectangles each show up inside both of the solid rectangles). [sent-105, score-1.896]

36 Figure 1(b) shows a tree with inheritance degree 1: 480 there is now only one pair of rectangles (the dotted ones) which show up in both of the solid ones. [sent-106, score-0.797]

37 Figure 1(c) shows a tree with inheritance degree 0: while there are gaps, each set of matching rectangles is contained within a single rectangle (projection interval) of its parent, i. [sent-107, score-0.782]

38 , the two dashed rectangles of node 2’s projection are contained within the left interval of node 3; the two dotted rectangles of node 12’s projection are contained within the right interval of node 3, etc. [sent-109, score-1.617]

39 How often does gap inheritance occur in the parses of natural language sentences found in treebanks? [sent-111, score-0.618]

40 Furthermore, how often are there multiple gap inheriting children of the same node (inheritance degree at least two)? [sent-113, score-0.697]

41 Table 1 shows what proportion of mildly nonprojective trees have the added property of gap inheritance degree 0 (Mild+0-Inherit) or have gap inheritance degree 1 (Mild+1-Inherit). [sent-114, score-1.933]

42 Over all six languages, there are no examples of multiple gap inheritance; Mild+1-Inherit has exactly the same empirical coverage as the unrestricted set of mildly non-projective trees. [sent-115, score-0.805]

43 (c) Mild+0-Inherit: Even though node 3 has children with gaps (node 2 and node 12), neither of them inherits node 3's gap. [sent-119, score-0.565]

44 There are several nodes with gaps, but every node with a gap is properly contained within just one of its parent’s intervals. [sent-120, score-0.435]

45 In all three trees, node 3’s two projection intervals are shown in the two solid blue rectangles. [sent-122, score-0.483]

46 The number of children which inherit its gap varies, however: in 1(a), two children have descendants on both sides; in 1(b), only one child has descendants on both sides; in 1(c), none of its children do. [sent-123, score-1.356]

47 One of the fundamental assumptions of syntactic theory is that movement is upward in the phrase structure tree. Consider one movement operation and its effect on the gap degree of all other nodes in the tree: (a) it [sent-125, score-0.491]

48 should have no effect on the gap degree of the nodes in the subtree itself, (b) it can create a gap for an ancestor node if it moves out of its projection interval, and (c) it can create a gap for a non-ancestor node if it moves into its projection interval. [sent-126, score-1.635]

49 Now consider which cases can lead to gap inheritance: in case (b), there is a single path from the ancestor to the root of the subtree, so the parent of the subtree will have no gap inheritance and any higher ancestors will have a single child inherit the gap created by this movement. [sent-127, score-1.71]

50 In case (c), it is possible for there to be multiple children that inherit this newly created gap if multiple children had descendants on both sides. [sent-128, score-0.668]

51 However, the assumption of upward movement in the phrase structure tree should rule out movement into the projection interval of a non-ancestor. [sent-129, score-0.654]

52 Informally, c-commanded means that the first node is descended from the lowest ancestor of the other that has more than one child. [sent-132, score-0.369]

53 Finding the optimal Mild+1-Inherit tree can be done by constructing, bottom-up, the tree for each node and its descendants. [sent-134, score-0.462]

54 This subtree can be constructed by first starting with the child spanning the gap, updating its root index to be the parent, and then expanding the interval indices to the left and right to include the other children. [sent-138, score-0.564]

55 The subtree corresponds to the completed item [1, 5] ∪ [8, 13] rooted at 3. [sent-141, score-0.355]

56 (2011)'s O(n^7) algorithm for parsing mildly non-projective structures if the most expensive step (Combine Shrinking Gap Centre) is dropped; this step would only ever be needed if a parent node has more than one child inheriting its gap. [sent-143, score-0.692]

57 While more efficient than parsing in the space of mildly non-projective trees, this is still probably not practically implementable. [sent-147, score-0.427]

58 Part of the difficulty lies in the fact that gap inheritance causes the two non-contiguous projection intervals to be coupled. [sent-148, score-0.934]

59 A tree is called gap-minding if it has gap degree at most one, is well-nested, and has gap inheritance degree 0. [sent-150, score-1.29]

60 We now turn to the parsing of gap-minding trees and show how a few consequences of its definition allow us to use items ranging over only one interval. [sent-154, score-0.385]

61 This is not unique to this example; all projection intervals in a gap-minding tree have incoming edges from exactly one node outside the interval. [sent-156, score-0.757]

62 Within a gap-minding tree, consider any node n with a gap (i.e., gap degree 1). [sent-158, score-0.408]

63 For each of the intervals of n's projection, where p is n's parent: (a) If the interval contains n, the only edge incoming to that interval is from p to n. [sent-163, score-0.792]

64 (b) If the interval does not contain n, all edges incoming to that interval come from n. [sent-166, score-0.629]

65 For the gap interval ([xj+1, xk−1]): (a) If the interval contains p, then the only edge incoming is from p's parent to p; (b) if the interval does not contain p, then all edges incoming to that interval come from p. [sent-168, score-1.664]
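This single-external-head property can be spot-checked on small trees: for every maximal interval of every node's projection, collect the heads that lie outside the interval but have a child inside it; in a gap-minding tree each such set should contain exactly one node. A sketch assuming a head array where `heads[m-1]` is the head of word m and 0 is the artificial root (the function names are mine):

```python
from collections import defaultdict

def external_heads(heads):
    """For each maximal interval of each node's projection, return the set
    of heads lying outside the interval that have a child inside it."""
    children = defaultdict(list)
    for m, h in enumerate(heads, start=1):
        children[h].append(m)
    proj = {}
    def collect(v):
        words = [] if v == 0 else [v]
        for c in children[v]:
            words += collect(c)
        proj[v] = sorted(words)
        return proj[v]
    collect(0)

    result = []
    for v, p in proj.items():
        if v == 0:
            continue
        runs, start = [], p[0]
        for a, b in zip(p, p[1:]):      # split the projection into intervals
            if b > a + 1:
                runs.append((start, a))
                start = b
        runs.append((start, p[-1]))
        for lo, hi in runs:
            ext = {heads[w - 1] for w in range(lo, hi + 1)
                   if not lo <= heads[w - 1] <= hi}
            result.append(((v, lo, hi), ext))
    return result
```

For the gap-minding tree `heads = [3, 0, 2, 2]`, every interval has exactly one external head; for instance, interval [1, 1] of node 3's projection is entered only from node 3.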

66 As a consequence of the above, [xi, xj] ∪ {n} forms a gap-minding tree rooted at n, [xk, xl] ∪ {n} also forms a gap-minding tree rooted at n, and [xj+1, xk−1] ∪ {p} forms a gap-minding tree rooted at p. [sent-169, score-1.26]

67 (Part 1): Assume there was a directed edge (x, y) such that y is inside a projection interval of n and x is not inside the same interval, with x, y ≠ n. [sent-171, score-0.555]

68 Since x and y are in different intervals, then whichever child of n that x and y are descended from would have inherited n’s gap, leading to a contradiction. [sent-174, score-0.358]

69 (Part 2): First, suppose there existed a set of nodes in n’s gap which were not descended from p. [sent-175, score-0.496]

70 (p clearly has descendants on each side of the gap, because all descendants of n are also descendants of p). [sent-177, score-0.656]

71 n, p’s child, would then have descendants on both sides of p’s gap, which would violate the property of no gap inheritance. [sent-178, score-0.569]

72 It is also not possible for there to be edges incoming from other descendants of p outside the gap, as that would imply another child of p being ill-nested with respect to n. [sent-179, score-0.497]

73 C[i, j, p]: The maximum score of any gap-minding tree, rooted at p, with vertices [i, j] ∪ {p} (p may or may not be within [i, j]). [sent-184, score-0.499]

74 By part 2, we can concatenate one interval of a child with its gap, knowing that the gap is entirely descended from the child’s parent, and forget the concatenation split point between the parent’s other descendants and this side of the child. [sent-187, score-1.164]

75 For example, in Figure 1(c), we could first merge [6, 7] rooted at 6 with [8, 13] rooted at 3 to create an interval [6, 13] and say that it is descended from 6, with the rightmost side descended from its child 3. [sent-189, score-1.355]

76 The following step would merge this concatenated interval ([6, 13] rooted at 6 and 3) with [1, 5] rooted at 3. [sent-191, score-0.745]

77 Consider an optimum scoring gap-minding tree T rooted at p with vertices V = [i, j] ∪ {p} and edges E, where E ≠ ∅. [sent-195, score-0.664]

78 Let T be the optimum scoring gap-minding tree rooted at p with vertices V = [i, j] ∪ {p}. [sent-201, score-0.695]

79 Then T and its score are derived from one of the following: S/C If p has a single child x in T, then if p ∈ (i, j) (I), T's score is Score(p, x) + C[i, p−1, x] + C[p+1, j, x]; if p ∉ [i, j] (E), T's score is Score(p, x) + C[i, j, x]. [sent-202, score-0.336]

80 M/D If p has multiple children in T and i and j are descended from different children in T, then there is a split point k such that T's score is C[i, k, p] + C[k+1, j, p]. [sent-204, score-0.398]

81 Case S/C: If p has exactly one child x, then the tree can be decomposed into the edge from p to x and the subtree rooted at x. [sent-207, score-0.737]

82 If p is inside, then x has a gap across p, and so using Claim 1, the maximum scoring tree rooted at p with a single child x has score Score(p, x) + C[i, p−1, x] + C[p+1, j, x]. [sent-209, score-0.967]

83 If the endpoints are descended from the same child x, then the child x has to have gap degree 1. [sent-211, score-0.954]

84 The case in which x is on the right side of its gap is symmetric. [sent-215, score-0.32]

85 Let x be the child of p that i is descended from, and let xl and xr be x's leftmost and rightmost descendants, respectively. [sent-218, score-0.46]

86 Therefore we can split up the interval at xr to have two gap-minding trees, both rooted at p. [sent-222, score-0.587]

87 The score of T is then the sum of the scores of the best subtree rooted at p over [i, k] (C [i, k, p]) and the score of the best subtree rooted at p over [k+1, j] (C[k + 1, j,p]). [sent-223, score-0.838]

88 The maximum score of any gap-minding tree is then found in C[1, n, 0], and the tree itself can be found using backpointers. [sent-233, score-0.507]
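The quantity that C[1, n, 0] computes can be cross-checked against brute force on tiny sentences. The sketch below is not the paper's dynamic program; it simply enumerates every head assignment, keeps those that form a tree of gap degree at most one that is well-nested with gap inheritance degree 0, and maximizes an edge-factored score (the head-array encoding and all names are mine):

```python
import itertools
from collections import defaultdict

def tree_projections(heads):
    """Return proj[v] (sorted subtree words) if heads encodes a tree
    rooted at the artificial node 0, else None."""
    for m in range(1, len(heads) + 1):   # every word must reach the root
        seen, v = set(), m
        while v != 0:
            if v in seen:
                return None              # cycle
            seen.add(v)
            v = heads[v - 1]
    children = defaultdict(list)
    for m, h in enumerate(heads, start=1):
        children[h].append(m)
    proj = {}
    def collect(v):
        words = [] if v == 0 else [v]
        for c in children[v]:
            words += collect(c)
        proj[v] = sorted(words)
        return proj[v]
    collect(0)
    return proj

def runs(p):
    """Maximal contiguous intervals of a sorted word list."""
    out, start = [], p[0]
    for a, b in zip(p, p[1:]):
        if b > a + 1:
            out.append((start, a))
            start = b
    out.append((start, p[-1]))
    return out

def is_gap_minding(heads):
    proj = tree_projections(heads)
    if proj is None:
        return False
    nodes = [v for v in proj if v != 0]
    if any(len(runs(proj[v])) > 2 for v in nodes):
        return False                     # gap degree > 1
    for u, v in itertools.combinations(nodes, 2):
        p, q = set(proj[u]), set(proj[v])
        if p & q:
            continue                     # nested subtrees overlap freely
        if any(a < c < b < d or c < a < d < b
               for a in p for b in p for c in q for d in q):
            return False                 # ill-nested (crossing subtrees)
    for v in nodes:
        r = runs(proj[v])
        if len(r) == 2:                  # v has a gap; no child may straddle it
            lo, hi = r[0][1] + 1, r[1][0] - 1
            for c in nodes:
                if (heads[c - 1] == v and
                        min(proj[c]) < lo and max(proj[c]) > hi):
                    return False
    return True

def best_gap_minding_tree(n, score):
    """Brute force: maximize an edge-factored score over all gap-minding
    trees of an n-word sentence; score(h, m) weights the edge h -> m."""
    best, best_heads = float("-inf"), None
    for heads in itertools.product(range(n + 1), repeat=n):
        if any(h == m for m, h in enumerate(heads, start=1)):
            continue                     # no self-loops
        if is_gap_minding(list(heads)):
            s = sum(score(h, m) for m, h in enumerate(heads, start=1))
            if s > best:
                best, best_heads = s, list(heads)
    return best, best_heads
```

This is exponential and only usable for n up to 5 or so, but it makes a handy oracle for testing a C[i, j, p] implementation.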

89 In this section, we assume a grandparent-factored model, where the score of a tree is now the sum over scores of (g, p, c) triples, where (g, p) and (p, c) are both directed edges in the tree. [sent-247, score-0.341]

90 We now show how to extend the above algorithm to find the maximum scoring gap-minding tree with grandparent scoring. [sent-249, score-0.368]

91 Let T be the optimum scoring gap-minding tree rooted at p with vertices V = [i, j] ∪ {p}, where p ∈ (i, j) (I), with a grandparent index g (g ∉ V). [sent-254, score-1.006]

92 Then T and its score are derived from one of the following: S/C If p has a single child x in T, then if p ∈ (i, j) (I), T's score is Score(g, p, x) + C[i, p−1, x, p] + C[p+1, j, x, p]; if p ∉ [i, j] (E), T's score is Score(g, p, x) + C[i, j, x, p]. [sent-255, score-0.336]

93 M/D If p has multiple children in T and i and j are descended from different children in T, then there is a split point k such that T's score is C[i, k, p, g] + C[k+1, j, p, g]. [sent-257, score-0.398]
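As a concrete reading of the grandparent factorization, a small sketch that scores a fixed tree under a (g, p, c)-factored model. The choice of 0 as the grandparent of the artificial root's child is an assumption of this sketch, not something the paper specifies:

```python
def grandparent_factored_score(heads, score):
    """heads[m-1] is the head of word m (0 = artificial root).
    Sums score(g, p, c) over every edge p -> c, with g the head of p;
    for the root's child we take g = 0 (an assumption of this sketch)."""
    total = 0.0
    for c, p in enumerate(heads, start=1):
        g = 0 if p == 0 else heads[p - 1]
        total += score(g, p, c)
    return total
```

For `heads = [2, 0, 2]` this sums the triples (0, 2, 1), (0, 0, 2), and (0, 2, 3), matching the edge pairs (g, p) and (p, c) of the tree.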

94 7 Experiments The space of projective trees is strictly contained within the space of gap-minding trees which is strictly contained within spanning trees. [sent-263, score-0.685]

95 The gap-minding decoder does better than the projective decoder on Czech, Danish, and Dutch, the three languages with the most non-projectivity, even though it was at a competitive disadvantage in terms of both pruning and (on languages with very long sentences) training data. [sent-301, score-0.352]

96 The gap-minding decoder with grandparent features is better than the projective decoder with sibling features on all six of the languages. [sent-302, score-0.456]

97 We have shown that, for the mildly non-projective trees present in natural language treebanks, at most one child inherits each parent's gap. [sent-307, score-0.628]

98 We also showed that the assumption of 1 gap inheritance removes a factor of n from parsing time, and the further assumption of 0 gap inheritance removes yet another factor of n. [sent-308, score-1.326]

99 The space of gap-minding trees provides a closer fit to naturally occurring linguistic structures than the space of projective trees, and unlike spanning trees, the inclusion of higher order factors does not substantially increase the difficulty of finding the maximum scoring tree in that space. [sent-309, score-0.698]

100 A dependency perspective on the adequacy of tree local multicomponent tree adjoining grammar. [sent-344, score-0.388]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('inheritance', 0.336), ('gap', 0.282), ('rooted', 0.252), ('interval', 0.241), ('descended', 0.214), ('descendants', 0.206), ('trees', 0.187), ('mildly', 0.187), ('projective', 0.181), ('intervals', 0.169), ('tree', 0.168), ('projection', 0.147), ('child', 0.144), ('grandparent', 0.143), ('children', 0.132), ('parent', 0.13), ('node', 0.126), ('inherit', 0.122), ('degree', 0.111), ('mild', 0.11), ('rectangles', 0.107), ('gapminding', 0.107), ('subtree', 0.103), ('bodirsky', 0.076), ('endpointsdiff', 0.076), ('endsfromleftchild', 0.076), ('endsfromrightchild', 0.076), ('maxk', 0.076), ('spanning', 0.076), ('vertices', 0.076), ('edges', 0.076), ('incoming', 0.071), ('edge', 0.07), ('decoder', 0.066), ('claim', 0.065), ('score', 0.064), ('parents', 0.062), ('endpoints', 0.059), ('subproblems', 0.059), ('gaps', 0.059), ('nonprojective', 0.059), ('parsing', 0.059), ('scoring', 0.057), ('xr', 0.055), ('mst', 0.053), ('adjoining', 0.052), ('satta', 0.051), ('mcdonald', 0.051), ('dutch', 0.051), ('movement', 0.049), ('swedish', 0.049), ('guez', 0.047), ('xl', 0.047), ('koo', 0.047), ('inheriting', 0.046), ('maxkc', 0.046), ('singlechild', 0.046), ('property', 0.042), ('xk', 0.041), ('solid', 0.041), ('split', 0.039), ('kuhlmann', 0.039), ('pruning', 0.039), ('sides', 0.039), ('xn', 0.038), ('side', 0.038), ('nested', 0.037), ('dependency', 0.036), ('interior', 0.036), ('grandparents', 0.036), ('projectivity', 0.036), ('subproblem', 0.036), ('czech', 0.036), ('optimum', 0.035), ('dashed', 0.035), ('running', 0.034), ('danish', 0.034), ('dotted', 0.034), ('directed', 0.033), ('rectangle', 0.033), ('eisner', 0.033), ('carreras', 0.033), ('portuguese', 0.033), ('definition', 0.032), ('inside', 0.032), ('subtrees', 0.032), ('yet', 0.031), ('descendedfrom', 0.031), ('exterior', 0.031), ('inherits', 0.031), ('maxgapmindingtree', 0.031), ('maxx', 0.031), ('ops', 0.031), ('ponleft', 0.031), ('poschild', 0.031), ('tjh', 0.031), ('xj', 0.03), ('ancestor', 0.029), ('factors', 
0.029), ('contained', 0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999899 37 emnlp-2012-Dynamic Programming for Higher Order Parsing of Gap-Minding Trees

Author: Emily Pitler ; Sampath Kannan ; Mitchell Marcus

Abstract: We introduce gap inheritance, a new structural property on trees, which provides a way to quantify the degree to which intervals of descendants can be nested. Based on this property, two new classes of trees are derived that provide a closer approximation to the set of plausible natural language dependency trees than some alternative classes of trees: unlike projective trees, a word can have descendants in more than one interval; unlike spanning trees, these intervals cannot be nested in arbitrary ways. The 1-Inherit class of trees has exactly the same empirical coverage of natural language sentences as the class of mildly nonprojective trees, yet the optimal scoring tree can be found in an order of magnitude less time. Gap-minding trees (the second class) have the property that all edges into an interval of descendants come from the same node, and thus an algorithm which uses only single in- tervals can produce trees in which a node has descendants in multiple intervals.

2 0.14537638 104 emnlp-2012-Parse, Price and Cut-Delayed Column and Row Generation for Graph Based Parsers

Author: Sebastian Riedel ; David Smith ; Andrew McCallum

Abstract: Graph-based dependency parsers suffer from the sheer number of higher order edges they need to (a) score and (b) consider during optimization. Here we show that when working with LP relaxations, large fractions of these edges can be pruned before they are fully scored—without any loss of optimality guarantees and, hence, accuracy. This is achieved by iteratively parsing with a subset of higherorder edges, adding higher-order edges that may improve the score of the current solution, and adding higher-order edges that are implied by the current best first order edges. This amounts to delayed column and row generation in the LP relaxation and is guaranteed to provide the optimal LP solution. For second order grandparent models, our method considers, or scores, no more than 6–13% of the second order edges of the full model. This yields up to an eightfold parsing speedup, while pro- viding the same empirical accuracy and certificates of optimality as working with the full LP relaxation. We also provide a tighter LP formulation for grandparent models that leads to a smaller integrality gap and higher speed.

3 0.12840876 59 emnlp-2012-Generating Non-Projective Word Order in Statistical Linearization

Author: Bernd Bohnet ; Anders Bjorkelund ; Jonas Kuhn ; Wolfgang Seeker ; Sina Zarriess

Abstract: We propose a technique to generate nonprojective word orders in an efficient statistical linearization system. Our approach predicts liftings of edges in an unordered syntactic tree by means of a classifier, and uses a projective algorithm for tree linearization. We obtain statistically significant improvements on six typologically different languages: English, German, Dutch, Danish, Hungarian, and Czech.

4 0.1204559 57 emnlp-2012-Generalized Higher-Order Dependency Parsing with Cube Pruning

Author: Hao Zhang ; Ryan McDonald

Abstract: State-of-the-art graph-based parsers use features over higher-order dependencies that rely on decoding algorithms that are slow and difficult to generalize. On the other hand, transition-based dependency parsers can easily utilize such features without increasing the linear complexity of the shift-reduce system beyond a constant. In this paper, we attempt to address this imbalance for graph-based parsing by generalizing the Eisner (1996) algorithm to handle arbitrary features over higherorder dependencies. The generalization is at the cost of asymptotic efficiency. To account for this, cube pruning for decoding is utilized (Chiang, 2007). For the first time, label tuple and structural features such as valencies can be scored efficiently with third-order features in a graph-based parser. Our parser achieves the state-of-art unlabeled accuracy of 93.06% and labeled accuracy of 91.86% on the standard test set for English, at a faster speed than a reimplementation ofthe third-ordermodel of Koo et al. (2010).

5 0.09632124 123 emnlp-2012-Syntactic Transfer Using a Bilingual Lexicon

Author: Greg Durrett ; Adam Pauls ; Dan Klein

Abstract: We consider the problem of using a bilingual dictionary to transfer lexico-syntactic information from a resource-rich source language to a resource-poor target language. In contrast to past work that used bitexts to transfer analyses of specific sentences at the token level, we instead use features to transfer the behavior of words at a type level. In a discriminative dependency parsing framework, our approach produces gains across a range of target languages, using two different lowresource training methodologies (one weakly supervised and one indirectly supervised) and two different dictionary sources (one manually constructed and one automatically constructed).

6 0.094744712 46 emnlp-2012-Exploiting Reducibility in Unsupervised Dependency Parsing

7 0.091553375 72 emnlp-2012-Joint Inference for Event Timeline Construction

8 0.090284362 119 emnlp-2012-Spectral Dependency Parsing with Latent Variables

9 0.085451216 12 emnlp-2012-A Transition-Based System for Joint Part-of-Speech Tagging and Labeled Non-Projective Dependency Parsing

10 0.083239213 66 emnlp-2012-Improving Transition-Based Dependency Parsing with Buffer Transitions

11 0.081203744 55 emnlp-2012-Forest Reranking through Subtree Ranking

12 0.079272591 124 emnlp-2012-Three Dependency-and-Boundary Models for Grammar Induction

13 0.07909096 96 emnlp-2012-Name Phylogeny: A Generative Model of String Variation

14 0.077827781 64 emnlp-2012-Improved Parsing and POS Tagging Using Inter-Sentence Consistency Constraints

15 0.074894011 82 emnlp-2012-Left-to-Right Tree-to-String Decoding with Prediction

16 0.070530407 105 emnlp-2012-Parser Showdown at the Wall Street Corral: An Empirical Investigation of Error Types in Parser Output

17 0.068564661 127 emnlp-2012-Transforming Trees to Improve Syntactic Convergence

18 0.058802929 126 emnlp-2012-Training Factored PCFGs with Expectation Propagation

19 0.058087241 94 emnlp-2012-Multiple Aspect Summarization Using Integer Linear Programming

20 0.058013529 109 emnlp-2012-Re-training Monolingual Parser Bilingually for Syntactic SMT


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.195), (1, -0.131), (2, 0.096), (3, -0.071), (4, 0.021), (5, 0.01), (6, -0.047), (7, 0.04), (8, 0.031), (9, 0.272), (10, 0.102), (11, 0.051), (12, 0.111), (13, 0.036), (14, 0.061), (15, -0.066), (16, 0.095), (17, -0.05), (18, 0.162), (19, -0.03), (20, -0.14), (21, -0.056), (22, 0.059), (23, -0.047), (24, 0.125), (25, 0.079), (26, -0.006), (27, 0.1), (28, -0.098), (29, -0.092), (30, -0.096), (31, -0.072), (32, -0.11), (33, 0.096), (34, -0.105), (35, -0.038), (36, 0.196), (37, 0.028), (38, -0.134), (39, 0.006), (40, -0.038), (41, 0.067), (42, 0.117), (43, -0.207), (44, 0.068), (45, -0.112), (46, 0.064), (47, -0.024), (48, -0.069), (49, 0.061)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97181463 37 emnlp-2012-Dynamic Programming for Higher Order Parsing of Gap-Minding Trees

Author: Emily Pitler ; Sampath Kannan ; Mitchell Marcus

Abstract: We introduce gap inheritance, a new structural property on trees, which provides a way to quantify the degree to which intervals of descendants can be nested. Based on this property, two new classes of trees are derived that provide a closer approximation to the set of plausible natural language dependency trees than some alternative classes of trees: unlike projective trees, a word can have descendants in more than one interval; unlike spanning trees, these intervals cannot be nested in arbitrary ways. The 1-Inherit class of trees has exactly the same empirical coverage of natural language sentences as the class of mildly nonprojective trees, yet the optimal scoring tree can be found in an order of magnitude less time. Gap-minding trees (the second class) have the property that all edges into an interval of descendants come from the same node, and thus an algorithm which uses only single in- tervals can produce trees in which a node has descendants in multiple intervals.

2 0.64911437 59 emnlp-2012-Generating Non-Projective Word Order in Statistical Linearization

Author: Bernd Bohnet ; Anders Bjorkelund ; Jonas Kuhn ; Wolfgang Seeker ; Sina Zarriess

Abstract: We propose a technique to generate nonprojective word orders in an efficient statistical linearization system. Our approach predicts liftings of edges in an unordered syntactic tree by means of a classifier, and uses a projective algorithm for tree linearization. We obtain statistically significant improvements on six typologically different languages: English, German, Dutch, Danish, Hungarian, and Czech.

3 0.57940441 104 emnlp-2012-Parse, Price and Cut-Delayed Column and Row Generation for Graph Based Parsers

Author: Sebastian Riedel ; David Smith ; Andrew McCallum

Abstract: Graph-based dependency parsers suffer from the sheer number of higher order edges they need to (a) score and (b) consider during optimization. Here we show that when working with LP relaxations, large fractions of these edges can be pruned before they are fully scored—without any loss of optimality guarantees and, hence, accuracy. This is achieved by iteratively parsing with a subset of higherorder edges, adding higher-order edges that may improve the score of the current solution, and adding higher-order edges that are implied by the current best first order edges. This amounts to delayed column and row generation in the LP relaxation and is guaranteed to provide the optimal LP solution. For second order grandparent models, our method considers, or scores, no more than 6–13% of the second order edges of the full model. This yields up to an eightfold parsing speedup, while pro- viding the same empirical accuracy and certificates of optimality as working with the full LP relaxation. We also provide a tighter LP formulation for grandparent models that leads to a smaller integrality gap and higher speed.

4 0.49154878 119 emnlp-2012-Spectral Dependency Parsing with Latent Variables

Author: Paramveer Dhillon ; Jordan Rodu ; Michael Collins ; Dean Foster ; Lyle Ungar

Abstract: Recently there has been substantial interest in using spectral methods to learn generative sequence models like HMMs. Spectral methods are attractive as they provide globally consistent estimates of the model parameters and are very fast and scalable, unlike EM methods, which can get stuck in local minima. In this paper, we present a novel extension of this class of spectral methods to learn dependency tree structures. We propose a simple yet powerful latent variable generative model for dependency parsing, and a spectral learning method to efficiently estimate it. As a pilot experimental evaluation, we use the spectral tree probabilities estimated by our model to re-rank the outputs of a near-state-of-the-art parser. Our approach gives us a moderate reduction in error of up to 4.6% over the baseline re-ranker.

5 0.44081846 46 emnlp-2012-Exploiting Reducibility in Unsupervised Dependency Parsing

Author: David Mareček ; Zdeněk Žabokrtský

Abstract: The possibility of deleting a word from a sentence without violating its syntactic correctness belongs to traditionally known manifestations of syntactic dependency. We introduce a novel unsupervised parsing approach that is based on a new n-gram reducibility measure. We perform experiments across 18 languages available in CoNLL data and we show that our approach achieves better accuracy for the majority of the languages than previously reported results.

6 0.42564723 96 emnlp-2012-Name Phylogeny: A Generative Model of String Variation

7 0.3356415 27 emnlp-2012-Characterizing Stylistic Elements in Syntactic Structure

8 0.33535588 57 emnlp-2012-Generalized Higher-Order Dependency Parsing with Cube Pruning

9 0.30591199 123 emnlp-2012-Syntactic Transfer Using a Bilingual Lexicon

10 0.2955887 82 emnlp-2012-Left-to-Right Tree-to-String Decoding with Prediction

11 0.27674562 124 emnlp-2012-Three Dependency-and-Boundary Models for Grammar Induction

12 0.2692939 12 emnlp-2012-A Transition-Based System for Joint Part-of-Speech Tagging and Labeled Non-Projective Dependency Parsing

13 0.26863787 64 emnlp-2012-Improved Parsing and POS Tagging Using Inter-Sentence Consistency Constraints

14 0.26543564 127 emnlp-2012-Transforming Trees to Improve Syntactic Convergence

15 0.2630671 55 emnlp-2012-Forest Reranking through Subtree Ranking

16 0.26241273 105 emnlp-2012-Parser Showdown at the Wall Street Corral: An Empirical Investigation of Error Types in Parser Output

17 0.21646309 72 emnlp-2012-Joint Inference for Event Timeline Construction

18 0.21402211 66 emnlp-2012-Improving Transition-Based Dependency Parsing with Buffer Transitions

19 0.21184468 133 emnlp-2012-Unsupervised PCFG Induction for Grounded Language Learning with Highly Ambiguous Supervision

20 0.20628157 109 emnlp-2012-Re-training Monolingual Parser Bilingually for Syntactic SMT


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.013), (11, 0.432), (16, 0.067), (25, 0.019), (34, 0.09), (60, 0.047), (63, 0.039), (65, 0.02), (70, 0.023), (73, 0.011), (74, 0.028), (76, 0.055), (79, 0.013), (80, 0.016), (81, 0.015), (86, 0.025)]
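The simValue rankings that follow are presumably produced by comparing per-paper topic-weight vectors like the one above. A minimal sketch, assuming cosine similarity over sparse topicId → topicWeight dictionaries; the corpus entries and their vectors here are illustrative stand-ins, not the model's actual output:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two sparse topic-weight vectors,
    each given as a dict mapping topicId -> topicWeight."""
    shared = set(u) & set(v)
    dot = sum(u[t] * v[t] for t in shared)
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0
    return dot / (norm_u * norm_v)

def rank_similar(query, corpus):
    """Rank papers in `corpus` (paperId -> topic vector) by
    descending similarity to the query paper's topic vector."""
    scores = [(pid, cosine_similarity(query, vec)) for pid, vec in corpus.items()]
    return sorted(scores, key=lambda x: -x[1])

# A few of paper 37's dominant topic weights, taken from the vector above.
paper_37 = {11: 0.432, 16: 0.067, 34: 0.090, 60: 0.047}

# Hypothetical topic vectors for two candidate papers (illustrative values).
corpus = {
    "emnlp-2012-127": {11: 0.40, 34: 0.10, 76: 0.05},
    "emnlp-2012-82":  {16: 0.30, 60: 0.25, 80: 0.10},
}

ranked = rank_similar(paper_37, corpus)
```

With these toy vectors, the paper sharing the dominant topic (topicId 11) ranks first, mirroring how the list below orders candidates by simValue.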

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.86111051 37 emnlp-2012-Dynamic Programming for Higher Order Parsing of Gap-Minding Trees

Author: Emily Pitler ; Sampath Kannan ; Mitchell Marcus

Abstract: We introduce gap inheritance, a new structural property on trees, which provides a way to quantify the degree to which intervals of descendants can be nested. Based on this property, two new classes of trees are derived that provide a closer approximation to the set of plausible natural language dependency trees than some alternative classes of trees: unlike projective trees, a word can have descendants in more than one interval; unlike spanning trees, these intervals cannot be nested in arbitrary ways. The 1-Inherit class of trees has exactly the same empirical coverage of natural language sentences as the class of mildly nonprojective trees, yet the optimal scoring tree can be found in an order of magnitude less time. Gap-minding trees (the second class) have the property that all edges into an interval of descendants come from the same node, and thus an algorithm which uses only single intervals can produce trees in which a node has descendants in multiple intervals.

2 0.62200439 127 emnlp-2012-Transforming Trees to Improve Syntactic Convergence

Author: David Burkett ; Dan Klein

Abstract: We describe a transformation-based learning method for learning a sequence of monolingual tree transformations that improve the agreement between constituent trees and word alignments in bilingual corpora. Using the manually annotated English Chinese Translation Treebank, we show how our method automatically discovers transformations that accommodate differences in English and Chinese syntax. Furthermore, when transformations are learned on automatically generated trees and alignments from the same domain as the training data for a syntactic MT system, the transformed trees achieve a 0.9 BLEU improvement over baseline trees.

3 0.34143743 82 emnlp-2012-Left-to-Right Tree-to-String Decoding with Prediction

Author: Yang Feng ; Yang Liu ; Qun Liu ; Trevor Cohn

Abstract: Decoding algorithms for syntax-based machine translation suffer from high computational complexity, a consequence of intersecting a language model with a context-free grammar. Left-to-right decoding, which generates the target string in order, can improve decoding efficiency by simplifying the language model evaluation. This paper presents a novel left-to-right decoding algorithm for tree-to-string translation, using a bottom-up parsing strategy and dynamic future cost estimation for each partial translation. Our method outperforms previously published tree-to-string decoders, including a competing left-to-right method.

4 0.3369396 66 emnlp-2012-Improving Transition-Based Dependency Parsing with Buffer Transitions

Author: Daniel Fernandez-Gonzalez ; Carlos Gomez-Rodriguez

Abstract: In this paper, we show that significant improvements in the accuracy of well-known transition-based parsers can be obtained, without sacrificing efficiency, by enriching the parsers with simple transitions that act on buffer nodes. First, we show how adding a specific transition to create either a left or right arc of length one between the first two buffer nodes produces improvements in the accuracy of Nivre’s arc-eager projective parser on a number of datasets from the CoNLL-X shared task. Then, we show that accuracy can also be improved by adding transitions involving the topmost stack node and the second buffer node (allowing a limited form of non-projectivity). None of these transitions has a negative impact on the computational complexity of the algorithm. Although the experiments in this paper use the arc-eager parser, the approach is generic enough to be applicable to any stackbased dependency parser.

5 0.33113381 59 emnlp-2012-Generating Non-Projective Word Order in Statistical Linearization

Author: Bernd Bohnet ; Anders Bjorkelund ; Jonas Kuhn ; Wolfgang Seeker ; Sina Zarriess

Abstract: We propose a technique to generate nonprojective word orders in an efficient statistical linearization system. Our approach predicts liftings of edges in an unordered syntactic tree by means of a classifier, and uses a projective algorithm for tree linearization. We obtain statistically significant improvements on six typologically different languages: English, German, Dutch, Danish, Hungarian, and Czech.

6 0.32010311 124 emnlp-2012-Three Dependency-and-Boundary Models for Grammar Induction

7 0.31018165 123 emnlp-2012-Syntactic Transfer Using a Bilingual Lexicon

8 0.30950049 46 emnlp-2012-Exploiting Reducibility in Unsupervised Dependency Parsing

9 0.30413932 67 emnlp-2012-Inducing a Discriminative Parser to Optimize Machine Translation Reordering

10 0.29951006 133 emnlp-2012-Unsupervised PCFG Induction for Grounded Language Learning with Highly Ambiguous Supervision

11 0.28939983 35 emnlp-2012-Document-Wide Decoding for Phrase-Based Statistical Machine Translation

12 0.28855544 118 emnlp-2012-Source Language Adaptation for Resource-Poor Machine Translation

13 0.28828624 12 emnlp-2012-A Transition-Based System for Joint Part-of-Speech Tagging and Labeled Non-Projective Dependency Parsing

14 0.2857644 27 emnlp-2012-Characterizing Stylistic Elements in Syntactic Structure

15 0.28299358 64 emnlp-2012-Improved Parsing and POS Tagging Using Inter-Sentence Consistency Constraints

16 0.27943388 5 emnlp-2012-A Discriminative Model for Query Spelling Correction with Latent Structural SVM

17 0.27828115 109 emnlp-2012-Re-training Monolingual Parser Bilingually for Syntactic SMT

18 0.27762884 89 emnlp-2012-Mixed Membership Markov Models for Unsupervised Conversation Modeling

19 0.27715215 45 emnlp-2012-Exploiting Chunk-level Features to Improve Phrase Chunking

20 0.27559817 136 emnlp-2012-Weakly Supervised Training of Semantic Parsers