acl acl2010 acl2010-67 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Alexander Koller ; Stefan Thater
Abstract: We present an efficient algorithm for computing the weakest readings of semantically ambiguous sentences. A corpus-based evaluation with a large-scale grammar shows that our algorithm reduces over 80% of sentences to one or two readings, in negligible runtime, and thus makes it possible to work with semantic representations derived by deep large-scale grammars.
Reference: text
Computing weakest readings
Alexander Koller, Cluster of Excellence, Saarland University (koller@mmci.de)

Abstract. We present an efficient algorithm for computing the weakest readings of semantically ambiguous sentences.
As an alternative, Bos (2008) proposes to compute the weakest reading of each sentence and then use it instead of the "true" reading of the sentence. However, when a sentence has millions of readings, finding the weakest reading is a hard problem. In this paper, we propose a new, efficient approach to the problem of computing weakest readings.
We follow an underspecification approach to managing ambiguity: rather than deriving all semantic representations from the syntactic analysis, we work with a single, compact underspecified semantic representation, from which the semantic representations can then be extracted by need. We then approximate entailment with a rewrite system that rewrites readings into logically weaker readings; the weakest readings are exactly those readings that cannot be rewritten into some other reading any more (the relative normal forms).
We present an algorithm that computes the relative normal forms, and evaluate it on the underspecified descriptions that the ERG derives on a 624-sentence subcorpus of the Rondane Treebank. While the mean number of scope readings in the subcorpus is in the millions, our system computes on average 4.5 weakest readings for each sentence, in less than twenty milliseconds; over 80% of all sentences are reduced to at most two weakest readings.
Technically, we use underspecified descriptions that are regular tree grammars derived from dominance graphs (Althaus et al., 2003). We compute the weakest readings by intersecting these grammars with other grammars representing the rewrite rules. This approach can be used much more generally than just for the computation of weakest readings; we illustrate this with a more general version of the redundancy elimination algorithm by Koller et al. (2008). Thus our system can serve as a general framework for removing unintended readings from an underspecified representation.
We recall dominance graphs, regular tree grammars, and the basic ideas of underspecification in Section 3, before we show how to compute weakest readings (Section 4) and logical equivalences (Section 5). In Section 6, we define a weakening rewrite system for the ERG and evaluate it on the Rondane Treebank.
2 Related work

The idea of deriving a single approximative semantic representation for ambiguous sentences goes back to Hobbs (1983); however, Hobbs only works his algorithm out for a restricted class of quantifiers, and his representations can be weaker than our weakest readings. Rules that weaken one reading into another were popular in the 1990s underspecification literature (Reyle, 1995; Monz and de Rijke, 2001; van Deemter, 1996) because they simplify logical reasoning with underspecified representations. From a linguistic perspective, Kempson and Cormack (1981) even go so far as to claim that the weakest reading should be taken as the "basic" reading of a sentence, and the other readings only seen as pragmatically licensed special cases.
The work presented here is related to other approaches that reduce the set of readings of an underspecified semantic representation (USR). Koller and Niehren (2000) showed how to strengthen a dominance constraint using information about anaphoric accessibility; later, Koller et al. (2008) presented and evaluated an algorithm for redundancy elimination, which removes readings from a USR based on logical equivalence.
This paper builds closely upon Koller and Thater (2010), which lays the formal groundwork for the present approach.

[Figure 1: A dominance graph describing the five readings of the sentence "it is not the case that every representative of a company saw a sample."]

Here we go beyond that paper by applying a concrete implementation of our RTG construction for weakest readings to a real-world grammar, evaluating the system on practical inputs, and combining weakest readings with redundancy elimination.
3 Underspecification

This section briefly reviews two formalisms for specifying sets of trees: dominance graphs and regular tree grammars. We take a (finite constructor) tree t to be a finite tree in which each node is labelled with a symbol of Σ, and in which the number of children of each node is exactly the arity of its symbol.
3.1 Dominance graphs

A (labelled) dominance graph D (Althaus et al., 2003) is a directed graph that consists of a collection of trees called fragments, plus dominance edges relating nodes in different fragments. A dominance graph represents the "semantic material" common to all readings as fragments, plus dominance relations between these fragments. The graph in Fig. 1 represents the five readings of the sentence "it is not the case that every representative of a company saw a sample."
3.2 Regular tree grammars

Regular tree grammars (RTGs) are a general grammar formalism for describing languages of trees (Comon et al.). The RTG for our example uses sets of root names from D as nonterminal symbols, and generates exactly the five configurations of the graph in Fig. 1. Regular tree grammars generate the regular tree languages (RTLs), and they are equivalent to finite tree automata, which are defined essentially like the well-known finite string automata, except that they assign states to the nodes in a tree rather than to the positions in a string.
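The string/tree contrast can be made concrete with a toy bottom-up sketch (illustrative Python, not from the paper): a tree automaton's transition table is keyed on a node's label together with the states of its children, rather than on a single previous state.

```python
# Minimal sketch (not the paper's machinery) of a bottom-up finite tree
# automaton: it assigns a state to every node of a tree, computed from the
# states of the node's children. Trees are (label, children) pairs and
# `delta` is the transition table; all names here are illustrative.

def run(tree, delta):
    """Return the state the automaton assigns to the root of `tree`."""
    label, children = tree
    child_states = tuple(run(child, delta) for child in children)
    return delta[(label, child_states)]

# Toy automaton over boolean trees: the state of a node is the truth value
# of the subtree below it, so acceptance means "evaluates to True".
delta = {
    ("1", ()): True,
    ("0", ()): False,
    ("and", (True, True)): True,
    ("and", (True, False)): False,
    ("and", (False, True)): False,
    ("and", (False, False)): False,
}

tree = ("and", (("1", ()), ("and", (("1", ()), ("0", ())))))
state = run(tree, delta)
```

Because the state of a node depends on a tuple of child states, the same mechanism scales to symbols of any arity, which is exactly what distinguishes tree automata from string automata.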
3.3 Dominance graphs as RTGs

An important class of dominance graphs are the hypernormally connected (hnc) dominance graphs (Koller et al., 2008). The precise definition of hnc graphs is not important here, but note that virtually all underspecified descriptions that are produced by current grammars are hypernormally connected (Flickinger et al.). Every hypernormally connected dominance graph D can be automatically translated into an equivalent RTG GD that generates exactly the same configurations (Koller et al., 2008).
4 Computing weakest readings

Now we are ready to talk about computing the weakest readings of a hypernormally connected dominance graph. We will first explain how we approximate logical weakening with rewrite systems. We will then discuss how weakest readings can be computed efficiently as the relative normal forms of these rewrite systems.
The formula represented by (d) logically entails (c); we say that (c) is a weaker reading than (d) because it is satisfied by more models. We can define the weakest readings of the dominance graph as the minimal elements of the entailment order; in the example, these are (b) and (c).
A naive algorithm for computing weakest readings would explicitly compute the entailment order, by running a theorem prover on each pair of configurations, and then pick out the minimal elements.
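A hedged sketch of this naive baseline (illustrative Python; the model-set representation stands in for a theorem-prover call, and the quadratic number of such calls is what makes the approach infeasible for millions of readings):

```python
# Naive weakest-readings computation, as described above: check entailment
# for each pair of readings and keep the minimal elements of the entailment
# order. Representing a reading by its set of models (a frozenset) is a
# purely illustrative stand-in for calling a theorem prover.

def entails(t, t2):
    """t entails t2 iff every model of t is a model of t2 (so t2 is weaker)."""
    return t <= t2

def weakest_readings_naive(readings):
    """Minimal elements of the entailment order: readings t such that no
    other reading t2 is strictly weaker (t entails t2 but not vice versa)."""
    return [t for t in readings
            if not any(entails(t, t2) and not entails(t2, t)
                       for t2 in readings if t2 != t)]

# Three readings with nested model sets: the one satisfied by the most
# models is the unique weakest reading.
readings = [frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})]
weakest = weakest_readings_naive(readings)
```

The double loop makes the cost quadratic in the number of readings, with a prover call per pair, which motivates the rewriting-based approximation developed next.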
The fundamental insight we exploit is that entailment among the configurations of a dominance graph can be approximated with rewriting rules (Baader and Nipkow, 1999).
The annotation [−] specifies that we must only apply the rule to subformulas in negative logical polarity: if the quantifiers in (d) were not in the scope of a negation, then applying the rule would actually make the formula stronger. We say that the rule (1) is logically sound because applying it to a subformula with the correct polarity of some configuration t always makes the result t′ logically weaker than t.
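As a hedged illustration of the polarity idea (a sketch, not the paper's three-valued annotator: here only negation flips polarity, whereas the paper's annotator also handles quantifier restrictions), one can compute the polarity of every subformula occurrence top-down:

```python
# Toy polarity annotator (illustrative, not the paper's `ann`): formulas are
# (label, children) tuples; "not" flips the polarity handed down to its
# child, every other connective passes it through unchanged. A rule marked
# [-] may only fire at positions whose computed polarity is "-".

def polarities(formula, polarity="+", path=()):
    """Map each subformula position (a path of child indices) to + or -."""
    label, children = formula
    result = {path: polarity}
    child_pol = {"+": "-", "-": "+"}[polarity] if label == "not" else polarity
    for i, child in enumerate(children):
        result.update(polarities(child, child_pol, path + (i,)))
    return result

# not(forall_x(exists_y(P))): both quantifiers sit under one negation, so
# they have negative polarity, and a [-]-annotated rule may apply there.
f = ("not", (("forall_x", (("exists_y", (("P", ()),)),)),))
pols = polarities(f)
```

Under an odd number of negations the polarity is negative, so swapping ∀ over ∃ there weakens the formula, matching the discussion of rule (1) above.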
In our example, the annotator ann models logical polarity by mapping, for instance, ann(+, ∃z, 1) = ann(+, ∃z, 2) = ann(+, ∃y, 2) = +, and ann(−, ∃z, 1) = ann(−, ∃z, 2) = ann(+, ∀x, 1) = −, etc.
Now we can define an annotated rewrite system R to be a finite set of pairs (a, r), where a is an annotation and r is an ordinary rewrite rule. The rule (1) above is an example of an annotated rewrite rule with a = −. A rewrite rule (a, r) can be applied at a node u of a tree t if ann assigns the annotation a to u and r is applicable at u as usual. In other words, annotated rewrite systems are rewrite systems where rule applications are restricted to subtrees with specific annotations. The rewrite system R is called linear if every variable that occurs on the left-hand side of a rule occurs on its right-hand side exactly once.
4.2 Relative normal forms

The rewrite steps of a sound weakening rewrite system are related to the entailment order: because every rewrite step transforms a reading into a weaker reading, an actual weakest reading must be such that there is no other configuration into which it can be rewritten. The converse does not hold: there can be non-rewritable configurations that are not weakest readings, but we will see in Section 6 that this approximation is good enough for practical use.
So one way to solve the problem of computing weakest readings is to find readings that cannot be rewritten further. One class of configurations that "cannot be rewritten" with a rewrite system R is the set of normal forms of R, i.e., those configurations to which no rule in R can be applied. In our example, (b) and (c) are indeed normal forms with respect to a rewrite system that consists only of the rule (1).
Consider a rewrite system that also contains the following annotated rewrite rule, which is also sound for logical entailment:

[+] ¬(∃z(P, Q)) → ∃z(P, ¬(Q))   (2)

This rule would allow us to rewrite the configuration (c) into the tree ∃z(comp_z, ¬(∃y(sample_y, ∀x(repr-of_{x,z}, see_{x,y})))). If we were to equate weakest readings with normal forms, we would erroneously classify (c) as not being a weakest reading. The correct concept for characterizing weakest readings in terms of rewriting is that of a relative normal form.
We define a configuration t of a dominance graph D to be an R-relative normal form of (the configurations of) D iff there is no other configuration t′ of D such that t →R t′. These are the configurations that can't be weakened further without obtaining a tree that is no longer a configuration of D. In other words, if R approximates entailment, then the R-relative normal forms approximate the weakest readings.
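The definition can be made concrete with a small brute-force sketch (illustrative Python; the paper instead computes relative normal forms on regular tree grammars, without ever enumerating readings):

```python
# Relative normal forms by brute force (illustrative only): a reading is a
# relative normal form iff no weakening rewrite step maps it to ANOTHER
# configuration of the same graph. Rewriting out of the configuration set,
# as rule (2) does with (c), does not disqualify a reading.

def relative_normal_forms(configurations, rewrites):
    """`rewrites[t]` lists all trees reachable from t in one rewrite step."""
    confs = set(configurations)
    return {t for t in confs
            if not any(t2 in confs for t2 in rewrites.get(t, ()))}

# Five toy readings (a)-(e): (d), (a) and (e) can each be weakened into
# another configuration; (c) can only be rewritten into a tree that is NOT
# a configuration, so it survives, matching the discussion of rule (2).
rewrites = {"d": ["c"], "a": ["b"], "e": ["b"], "c": ["outside"]}
weakest = relative_normal_forms(["a", "b", "c", "d", "e"], rewrites)
```

The toy outcome, readings (b) and (c), mirrors the example in the text, where (b) and (c) are the weakest readings of the five-reading graph.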
4.3 Computing relative normal forms

We now show how the relative normal forms of a dominance graph can be computed efficiently. The key idea of the construction is to represent the relation →R in terms of a context tree transducer M, and to characterize the relative normal forms of a tree language L in terms of the pre-image of L under M. But while ordinary transducers read the input tree symbol by symbol, a context tree transducer can read multiple symbols at once. For each linear annotated rewrite system R, we can now build a context tree transducer MR such that t →R t′ iff (t, t′) ∈ τ_MR.
MR can nondeterministically choose to either copy the current symbol to the output tree unchanged, or to apply a rewrite rule from R. The rules are built in such a way that in each run, exactly one rewrite rule must be applied. If MR reads a node u in state q_a, this means that the annotator assigns annotation a to u and MR will rewrite a subtree at or below u.
We know that for each hnc dominance graph D, there is a regular tree grammar GD such that L(GD) is the set of configurations of D. Furthermore, the preimage τ_M^{-1}(L) = {t | there exists t′ ∈ L with (t, t′) ∈ τ_M} of a regular tree language L is also regular (Koller and Thater, 2010) if M is linear, and regular tree languages are closed under intersection and complement (Comon et al.).
L(G′) consists of the members of L(GD) which cannot be rewritten by MR into members of L(GD); that is, L(G′) is exactly the set of R-relative normal forms of D.
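At the level of finite sets, this construction reads as L(GD) minus the preimage of L(GD) under the rewrite relation (a sketch; in the paper both languages are regular tree languages and the intersection/complement operations are performed on grammars and automata, not on enumerated sets):

```python
# Set-level analogue (illustrative) of the closure construction: the
# relative normal forms are the configurations that the rewrite relation
# cannot map back into the configuration set, i.e. the intersection of the
# language with the complement of its preimage.

def preimage(language, relation):
    """All t such that (t, t2) is in `relation` for some t2 in `language`."""
    return {t for (t, t2) in relation if t2 in language}

def relative_normal_forms(language, relation):
    # language ∩ complement(preimage(language)) on finite sets
    return set(language) - preimage(language, relation)

L = {"a", "b", "c", "d"}
tau = {("d", "c"), ("a", "b"), ("c", "x")}   # ("c", "x") leaves the language
rnf = relative_normal_forms(L, tau)
```

Note that the pair ("c", "x") does not put "c" into the preimage, because "x" is outside the language; this is the set-level counterpart of a reading that can only be rewritten into a non-configuration.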
In other words, we have shown how to compute the weakest readings of a hypernormally connected dominance graph D, as approximated by a weakening rewrite system R, in time linear in the size of GD and linear in the size of R. The grammar G′ for the relative normal forms is shown in Fig.
Redundancy elimination is the problem of computing, from a dominance graph D, another dominance graph D′ such that conf(D′) ⊆ conf(D) and every formula in conf(D) is logically equivalent to some formula in conf(D′). Following the approach of Section 4, we can solve the redundancy elimination problem by transforming the equation system into a rewrite system R such that t →R t′ implies that t and t′ are equivalent. To this end, we assume an arbitrary linear order < on Σ, and orient all equations into rewrite rules that respect this order.
For each dominance graph D that we obtain by converting an MRS description, we take GD as a grammar over the signature Σ = {f_u | u ∈ W_D, f = L_D(u)}. We then specify an annotator over Σ that assigns polarities for the weakening rewrite system. We distinguish three polarities: + for positive occurrences, − for negative occurrences (as in predicate logic), and ⊥ for contexts in which a weakening rule neither weakens nor strengthens the entire formula.
We use the following weakening rewrite system for our experiment, where i ∈ {1, 2}. The symbols C and C′ stand for classes of labels in Σ, and a rule schema [a](C/i, C′/k) is to be read as shorthand for a set of rewrite rules which rearrange a tree where the i-th child of a symbol from C is a symbol from C′ into a tree where the symbol from C becomes the k-th child of the symbol from C′.
Rule schema 1 states, for instance, that the specific (wide-scope) reading of the indefinite in "the president of a company" is logically stronger than the reading in which "a company" is within the restriction of the definite determiner. The schema is intuitively plausible, and it can also be proved to be logically sound if we make the standard assumption that the definite determiner the means "exactly one" (Montague, 1974). Notice that it is not, strictly speaking, logically sound; however, because strong determiners like all or every carry a presupposition that their restrictions have a non-empty denotation (Lasersohn, 1993), the schema becomes sound for all instances that can be expressed in natural language.
These rule schemata state that permuting existential determiners with each other is an equivalence transformation, and so is permuting definite determiners with existential and definite determiners if one determiner is the second argument (in the scope) of a definite.
We orient these equalities into rewrite rules by ordering symbols in P before symbols that are not in P.

[Table: evaluation results for All, RE, and RE+WR; the numeric columns were garbled in extraction.]
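The orientation step can be sketched as follows (illustrative Python; the set P and the alphabetical tie-breaker are toy assumptions, not the paper's actual signature or precedence):

```python
# Orienting equivalences into terminating rewrite rules (sketch): given an
# unordered equation {s, t} between trees, direct the rule so that rewriting
# always moves toward the term whose root symbol comes earlier in a fixed
# linear order -- here, "symbols in P before symbols not in P".

def orient(s, t, P):
    """Return (lhs, rhs) so that the rule rewrites toward the smaller term."""
    def rank(tree):
        label = tree[0]
        # P-symbols come first; ties broken alphabetically (toy choice)
        return (0 if label in P else 1, label)
    return (t, s) if rank(s) <= rank(t) else (s, t)

P = {"exists_y", "exists_z"}
eq = (("exists_y", ()), ("forall_x", ()))   # toy permutation equation
lhs, rhs = orient(eq[0], eq[1], P)
```

Because every rewrite strictly decreases the root symbol in the fixed order, the oriented system cannot loop, which is the point of imposing a linear order on Σ.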
We used these rewrite systems to compute, for each USR in RON10, the number of all configurations, the number of configurations that remain after redundancy elimination, and the number of weakest readings (i.e., the relative normal forms of the combined equivalence and weakening rewrite systems).
By computing weakest readings (WR), we reduce the ambiguity of over 80% of all sentences to one or two readings; this is a clear improvement even over the results of the redundancy elimination (RE). Computing weakest readings reduces the mean number of readings from several million to 4.5. Finally, computing the weakest readings takes only a tiny amount of extra runtime compared to the RE elimination or even the computation of the RTGs (reported as the runtime for "All").
7 Conclusion

In this paper, we have shown how to compute the weakest readings of a dominance graph, characterized by an annotated rewrite system. Our algorithm can be applied to other problems in which an underspecified representation is to be disambiguated, as long as the remaining readings can be characterized as the relative normal forms of a linear annotated rewrite system.
On the other hand, it may be possible to reduce the set of readings even further. We retain more readings than necessary in many treebank sentences because the combined weakening and equivalence rewrite system is not confluent, and therefore may not recognize a logical relation between two configurations. The rewrite system could be made more powerful by running the Knuth-Bendix completion algorithm (Knuth and Bendix, 1970). Exploring the practical tradeoff between the further reduction in the number of remaining configurations and the increase in complexity of the rewrite system and the RTG would be worthwhile.
References (partial, as extracted):
- Bridging the gap between underspecification formalisms: Hole semantics as dominance constraints.
- Regular tree grammars as a formalism for scope underspecification.
simIndex simValue paperId paperTitle
same-paper 1 0.9999994 67 acl-2010-Computing Weakest Readings
2 0.088570505 182 acl-2010-On the Computational Complexity of Dominance Links in Grammatical Formalisms
Author: Sylvain Schmitz
Abstract: Dominance links were introduced in grammars to model long distance scrambling phenomena, motivating the definition of multiset-valued linear indexed grammars (MLIGs) by Rambow (1994b), and inspiring quite a few recent formalisms. It turns out that MLIGs have since been rediscovered and reused in a variety of contexts, and that the complexity of their emptiness problem has become the key to several open questions in computer science. We survey complexity results and open issues on MLIGs and related formalisms, and provide new complexity bounds for some linguistically motivated restrictions.
3 0.084044024 21 acl-2010-A Tree Transducer Model for Synchronous Tree-Adjoining Grammars
Author: Andreas Maletti
Abstract: A characterization of the expressive power of synchronous tree-adjoining grammars (STAGs) in terms of tree transducers (or equivalently, synchronous tree substitution grammars) is developed. Essentially, a STAG corresponds to an extended tree transducer that uses explicit substitution in both the input and output. This characterization allows the easy integration of STAG into toolkits for extended tree transducers. Moreover, the applicability of the characterization to several representational and algorithmic problems is demonstrated.
4 0.079622306 211 acl-2010-Simple, Accurate Parsing with an All-Fragments Grammar
Author: Mohit Bansal ; Dan Klein
Abstract: We present a simple but accurate parser which exploits both large tree fragments and symbol refinement. We parse with all fragments of the training set, in contrast to much recent work on tree selection in data-oriented parsing and treesubstitution grammar learning. We require only simple, deterministic grammar symbol refinement, in contrast to recent work on latent symbol refinement. Moreover, our parser requires no explicit lexicon machinery, instead parsing input sentences as character streams. Despite its simplicity, our parser achieves accuracies of over 88% F1 on the standard English WSJ task, which is competitive with substantially more complicated state-of-theart lexicalized and latent-variable parsers. Additional specific contributions center on making implicit all-fragments parsing efficient, including a coarse-to-fine inference scheme and a new graph encoding.
5 0.078985639 127 acl-2010-Global Learning of Focused Entailment Graphs
Author: Jonathan Berant ; Ido Dagan ; Jacob Goldberger
Abstract: We propose a global algorithm for learning entailment relations between predicates. We define a graph structure over predicates that represents entailment relations as directed edges, and use a global transitivity constraint on the graph to learn the optimal set of edges, by formulating the optimization problem as an Integer Linear Program. We motivate this graph with an application that provides a hierarchical summary for a set of propositions that focus on a target concept, and show that our global algorithm improves performance by more than 10% over baseline algorithms.
6 0.06832242 33 acl-2010-Assessing the Role of Discourse References in Entailment Inference
7 0.063989319 169 acl-2010-Learning to Translate with Source and Target Syntax
8 0.060997367 128 acl-2010-Grammar Prototyping and Testing with the LinGO Grammar Matrix Customization System
9 0.060584724 94 acl-2010-Edit Tree Distance Alignments for Semantic Role Labelling
10 0.059695959 95 acl-2010-Efficient Inference through Cascades of Weighted Tree Transducers
11 0.059130929 118 acl-2010-Fine-Grained Tree-to-String Translation Rule Extraction
12 0.058879275 162 acl-2010-Learning Common Grammar from Multilingual Corpus
13 0.058008522 121 acl-2010-Generating Entailment Rules from FrameNet
14 0.05771663 53 acl-2010-Blocked Inference in Bayesian Tree Substitution Grammars
15 0.056862283 46 acl-2010-Bayesian Synchronous Tree-Substitution Grammar Induction and Its Application to Sentence Compression
16 0.055681214 30 acl-2010-An Open-Source Package for Recognizing Textual Entailment
17 0.055287912 75 acl-2010-Correcting Errors in a Treebank Based on Synchronous Tree Substitution Grammar
18 0.054671925 69 acl-2010-Constituency to Dependency Translation with Forests
19 0.05458593 23 acl-2010-Accurate Context-Free Parsing with Combinatory Categorial Grammar
20 0.051772222 198 acl-2010-Predicate Argument Structure Analysis Using Transformation Based Learning
topicId topicWeight
[(0, -0.138), (1, -0.006), (2, 0.048), (3, -0.026), (4, -0.059), (5, -0.041), (6, 0.117), (7, 0.037), (8, -0.061), (9, -0.111), (10, -0.019), (11, 0.038), (12, 0.009), (13, 0.001), (14, -0.039), (15, -0.022), (16, 0.012), (17, 0.002), (18, -0.013), (19, 0.054), (20, -0.043), (21, -0.034), (22, -0.019), (23, -0.043), (24, 0.071), (25, -0.023), (26, -0.069), (27, 0.014), (28, 0.03), (29, 0.038), (30, -0.032), (31, -0.038), (32, -0.056), (33, 0.068), (34, -0.015), (35, 0.07), (36, -0.025), (37, 0.011), (38, 0.081), (39, -0.018), (40, 0.002), (41, 0.006), (42, 0.033), (43, -0.051), (44, -0.001), (45, 0.027), (46, -0.006), (47, -0.044), (48, 0.103), (49, -0.164)]
simIndex simValue paperId paperTitle
same-paper 1 0.92486936 67 acl-2010-Computing Weakest Readings
Author: Alexander Koller ; Stefan Thater
Abstract: We present an efficient algorithm for computing the weakest readings of semantically ambiguous sentences. A corpus-based evaluation with a large-scale grammar shows that our algorithm reduces over 80% of sentences to one or two readings, in negligible runtime, and thus makes it possible to work with semantic representations derived by deep large-scale grammars.
2 0.8162787 182 acl-2010-On the Computational Complexity of Dominance Links in Grammatical Formalisms
Author: Sylvain Schmitz
Abstract: Dominance links were introduced in grammars to model long distance scrambling phenomena, motivating the definition of multiset-valued linear indexed grammars (MLIGs) by Rambow (1994b), and inspiring quite a few recent formalisms. It turns out that MLIGs have since been rediscovered and reused in a variety of contexts, and that the complexity of their emptiness problem has become the key to several open questions in computer science. We survey complexity results and open issues on MLIGs and related formalisms, and provide new complexity bounds for some linguistically motivated restrictions.
3 0.61319435 95 acl-2010-Efficient Inference through Cascades of Weighted Tree Transducers
Author: Jonathan May ; Kevin Knight ; Heiko Vogler
Abstract: Weighted tree transducers have been proposed as useful formal models for representing syntactic natural language processing applications, but there has been little description of inference algorithms for these automata beyond formal foundations. We give a detailed description of algorithms for application of cascades of weighted tree transducers to weighted tree acceptors, connecting formal theory with actual practice. Additionally, we present novel on-the-fly variants of these algorithms, and compare their performance on a syntax machine translation cascade based on (Yamada and Knight, 2001). 1 Motivation Weighted finite-state transducers have found recent favor as models of natural language (Mohri, 1997). In order to make actual use of systems built with these formalisms we must first calculate the set of possible weighted outputs allowed by the transducer given some input, which we call forward application, or the set of possible weighted inputs given some output, which we call backward application. After application we can do some inference on this result, such as determining its k highest weighted elements. We may also want to divide up our problems into manageable chunks, each represented by a transducer. As noted by Woods (1980), it is easier for designers to write several small transducers where each performs a simple transformation, rather than painstakingly construct a single complicated device. We would like to know, then, the result of transformation of input or output by a cascade of transducers, one operating after the other. As we will see, there are various strategies for approaching this problem. We will consider offline composition, bucket brigade applica- tion, and on-the-fly application. Application of cascades of weighted string transducers (WSTs) has been well-studied (Mohri, Heiko Vogler Technische Universit a¨t Dresden Institut f u¨r Theoretische Informatik 01062 Dresden, Germany he iko .vogle r@ tu-dre s den .de 1997). 
Less well-studied but of more recent interest is application of cascades of weighted tree transducers (WTTs). We tackle application of WTT cascades in this work, presenting: • • • explicit algorithms for application of WTT casceaxpdelisc novel algorithms for on-the-fly application of nWoTvTe lca alscgoardieths,m mansd f experiments comparing the performance of tehxepseer iamlgeonrtisthm cos.m 2 Strategies for the string case Before we discuss application of WTTs, it is helpful to recall the solution to this problem in the WST domain. We recall previous formal presentations of WSTs (Mohri, 1997) and note informally that they may be represented as directed graphs with designated start and end states and edges labeled with input symbols, output symbols, and weights.1 Fortunately, the solution for WSTs is practically trivial—we achieve application through a series of embedding, composition, and projection operations. Embedding is simply the act of representing a string or regular string language as an identity WST. Composition of WSTs, that is, generating a single WST that captures the transformations of two input WSTs used in sequence, is not at all trivial, but has been well covered in, e.g., (Mohri, 2009), where directly implementable algorithms can be found. Finally, projection is another trivial operation—the domain or range language can be obtained from a WST by ignoring the output or input symbols, respectively, on its arcs, and summing weights on otherwise identical arcs. 
By embedding an input, composing the result with the given WST, and projecting the result, forward application is accomplished.2 We are then left with a weighted string acceptor (WSA), essentially a weighted, labeled graph, which can be traversed by well-known algorithms to efficiently find the k-best paths.

1We assume throughout this paper that weights are in R+ ∪ {+∞}, that the weight of a path is calculated as the product of the weights of its edges, and that the weight of a (not necessarily finite) set T of paths is calculated as the sum of the weights of the paths of T.
2For backward applications, the roles of input and output are simply exchanged.

Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1058–1066, Uppsala, Sweden, 11–16 July 2010. ©2010 Association for Computational Linguistics

[Figure 1: Three different approaches to application through cascades of WSTs. Panels: (a) input string "a a" embedded in an identity WST; (b) first WST in cascade; (c) second WST in cascade; (d) offline composition approach; (e) bucket brigade approach; (f) result of bucket application; (g) initial on-the-fly stand-in; (h) on-the-fly stand-in after exploring outgoing edges of state A; (i) on-the-fly stand-in after best path has been found.]

Because WSTs can be freely composed, extending application to operate on a cascade of WSTs is fairly trivial. The only question is one of composition order: whether to initially compose the cascade into a single transducer (an approach we call offline composition) or to compose the initial embedding with the first transducer, trim useless states, compose the result with the second, and so on (an approach we call bucket brigade). The appropriate strategy generally depends on the structure of the individual transducers.

A third approach builds the result incrementally, as dictated by some algorithm that requests information about it. Such an approach, which we call on-the-fly, was described in (Pereira and Riley, 1997; Mohri, 2009; Mohri et al., 2000). If we can efficiently calculate the outgoing edges of a state of the result WSA on demand, without calculating all edges in the entire machine, we can maintain a stand-in for the result structure, a machine consisting at first of only the start state of the true result. As a calling algorithm (e.g., an implementation of Dijkstra's algorithm) requests information about the result graph, such as the set of outgoing edges from a state, we replace the current stand-in with a richer version by adding the result of the request. The on-the-fly approach has a distinct advantage over the other two methods in that the entire result graph need not be built. A graphical representation of all three methods is presented in Figure 1.

3 Application of tree transducers

Now let us revisit these strategies in the setting of trees and tree transducers. Imagine we have a tree or set of trees as input that can be represented as a weighted regular tree grammar3 (WRTG) and a WTT that can transform that input with some weight.
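The stand-in idea for the string case can be sketched as a best-path search over a lazily expanded WSA. The following is a minimal illustration of our own (not the paper's code), assuming weights have been converted to additive costs (e.g., negative log weights): Dijkstra's algorithm asks for outgoing edges only when it reaches a state, and the cache of expanded states plays the role of the stand-in.

```python
import heapq

def best_path_on_the_fly(start, is_final, expand):
    """Dijkstra's algorithm over a WSA whose edges are computed on
    demand.  `expand(state)` returns the outgoing (cost, next_state)
    edges of a state; it is called only for states the search actually
    visits, so the full result machine is never built.  Costs are
    additive (lower is better).  Returns (cost, final_state) or None."""
    cache = {}            # the stand-in: states whose edges are built
    seen = set()
    queue = [(0.0, start)]
    while queue:
        cost, q = heapq.heappop(queue)
        if q in seen:
            continue
        seen.add(q)
        if is_final(q):
            return cost, q
        if q not in cache:            # enrich the stand-in on request
            cache[q] = expand(q)
        for w, q2 in cache[q]:
            if q2 not in seen:
                heapq.heappush(queue, (cost + w, q2))
    return None
```

Because the final state is popped before it is expanded, states beyond the best path are never materialized, which is exactly the advantage claimed for the on-the-fly approach.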
We would like to know the k-best trees the WTT can produce as output for that input, along with their weights. We already know of several methods for acquiring k-best trees from a WRTG (Huang and Chiang, 2005; Pauls and Klein, 2009), so we then must ask if, analogously to the string case, WTTs preserve recognizability4 and we can form an application WRTG. Before we begin, however, we must define WTTs and WRTGs.

3.1 Preliminaries5

A ranked alphabet is a finite set Σ such that every member σ ∈ Σ has a rank rk(σ) ∈ N. We call Σ(k) ⊆ Σ, k ∈ N, the set of those σ ∈ Σ such that rk(σ) = k. The set of variables is denoted X = {x1, x2, ...} and is assumed to be disjoint from any ranked alphabet used in this paper. We use ⊥ to denote a symbol of rank 0 that is not in any ranked alphabet used in this paper. A tree t ∈ TΣ is denoted σ(t1, ..., tk) where k ≥ 0, σ ∈ Σ(k), and t1, ..., tk ∈ TΣ. For σ ∈ Σ(0) we write σ ∈ TΣ as shorthand for σ(). For every set S disjoint from Σ, let TΣ(S) = TΣ∪S, where, for all s ∈ S, rk(s) = 0.

3This generates the same class of weighted tree languages as weighted tree automata, the direct analogue of WSAs, and is more useful for our purposes.
4A weighted tree language is recognizable iff it can be represented by a WRTG.
5The following formal definitions and notations are needed for understanding and reimplementation of the presented algorithms, but can be safely skipped on first reading and consulted when encountering an unfamiliar term.

We define the positions of a tree t = σ(t1, ..., tk), for k ≥ 0, σ ∈ Σ(k), t1, ..., tk ∈ TΣ, as a set pos(t) ⊂ N∗ such that pos(t) = {ε} ∪ {iv | 1 ≤ i ≤ k, v ∈ pos(ti)}. The set of leaf positions lv(t) ⊆ pos(t) are those positions v ∈ pos(t) such that for no i ∈ N, vi ∈ pos(t). We presume standard lexicographic orderings < and ≤ on positions. The label of t at position v, denoted by t(v), the subtree of t at v, denoted by t|v, and the replacement at v by s, denoted by t[s]v, are defined as follows:

1. For every σ ∈ Σ(0), σ(ε) = σ, σ|ε = σ, and σ[s]ε = s.
2. For every t = σ(t1, ..., tk) such that k = rk(σ) and k ≥ 1, t(ε) = σ, t|ε = t, and t[s]ε = s. For every 1 ≤ i ≤ k and v ∈ pos(ti), t(iv) = ti(v), t|iv = ti|v, and t[s]iv = σ(t1, ..., ti−1, ti[s]v, ti+1, ..., tk).

The size of a tree t, size(t), is |pos(t)|, the cardinality of its position set. The yield set of a tree is the set of labels of its leaves: for a tree t, yd(t) = {t(v) | v ∈ lv(t)}.

Let A and B be sets. Let ϕ : A → TΣ(B) be a mapping. We extend ϕ to the mapping ϕ̄ : TΣ(A) → TΣ(B) such that for a ∈ A, ϕ̄(a) = ϕ(a) and for k ≥ 0, σ ∈ Σ(k), and t1, ..., tk ∈ TΣ(A), ϕ̄(σ(t1, ..., tk)) = σ(ϕ̄(t1), ..., ϕ̄(tk)). We indicate such extensions by describing ϕ as a substitution mapping and then using ϕ without further comment. We use R+ to denote the set {w ∈ R | w ≥ 0} and R+∞ to denote R+ ∪ {+∞}.

Definition 3.1 (cf. (Alexandrakis and Bozapalidis, 1987)) A weighted regular tree grammar (WRTG) is a 4-tuple G = (N, Σ, P, n0) where:

1. N is a finite set of nonterminals, with n0 ∈ N the start nonterminal.
2. Σ is a ranked alphabet of input symbols, where Σ ∩ N = ∅.
3. P is a tuple (P′, π), where P′ is a finite set of productions, each production p of the form n → u, n ∈ N, u ∈ TΣ(N), and π : P′ → R+ is a weight function of the productions. We will refer to P as a finite set of weighted productions, each production p of the form n →π(p) u.

A production p is a chain production if it is of the form ni →w nj, where ni, nj ∈ N.6

6In (Alexandrakis and Bozapalidis, 1987), chain productions are forbidden in order to avoid infinite summations. We explicitly allow such summations.

A WRTG G is in normal form if each production is either a chain production or is of the form n →w σ(n1, ..., nk) where σ ∈ Σ(k) and n1, ..., nk ∈ N.

For WRTG G = (N, Σ, P, n0), s, t, u ∈ TΣ(N), n ∈ N, and p ∈ P of the form n →w u, we obtain a derivation step from s to t by replacing some leaf nonterminal in s labeled n with u. Formally, s ⇒p_G t if there exists some v ∈ lv(s) such that s(v) = n and s[u]v = t. We say this derivation step is leftmost if, for all v′ ∈ lv(s) where v′ < v, s(v′) ∈ Σ. We henceforth assume all derivation steps are leftmost. If, for some m ∈ N, pi ∈ P, and ti ∈ TΣ(N) for all 1 ≤ i ≤ m, n0 ⇒p1 t1 ... ⇒pm tm, we say the sequence d = (p1, ..., pm) is a derivation of tm in G and that n0 ⇒∗ tm; the weight of d is wt(d) = π(p1) · ... · π(pm). The weighted tree language recognized by G is the mapping LG : TΣ → R+∞ such that for every t ∈ TΣ, LG(t) is the sum of the weights of all (possibly infinitely many) derivations of t in G. A weighted tree language f : TΣ → R+∞ is recognizable if there is a WRTG G such that f = LG.

We define a partial ordering ⪯ on WRTGs such that for WRTGs G1 = (N1, Σ, P1, n0) and G2 = (N2, Σ, P2, n0), we say G1 ⪯ G2 iff N1 ⊆ N2 and P1 ⊆ P2, where the weights are preserved.
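The recognized weighted tree language LG can be computed directly for a single tree by recursing over productions. The following sketch uses our own illustrative encoding (not from the paper): trees are nested `(label, child, ...)` tuples, and the grammar is assumed to be in normal form with no chain productions, so every derivation step consumes one tree node.

```python
def tree_weight(prods, n0, tree):
    """L_G(t): the total weight of all derivations of `tree` from the
    start nonterminal n0, for a normal-form, chain-production-free
    WRTG.  Trees are (label, child, ...) tuples; `prods` maps each
    nonterminal to a list of (weight, label, child_nonterminals)
    productions n -w-> label(n1, ..., nk)."""
    def score(n, t):
        total = 0.0
        for w, sym, rhs in prods.get(n, []):
            # Production applies if its symbol and arity match the node.
            if sym == t[0] and len(rhs) == len(t) - 1:
                p = w
                for ni, ti in zip(rhs, t[1:]):
                    p *= score(ni, ti)     # product along the derivation
                total += p                 # sum over derivations
        return total
    return score(n0, tree)
```

With productions g0 → σ(g0, g1) (weight 0.5), g0 → α (0.4), g1 → α (1.0), the tree σ(α, α) has a single derivation of weight 0.5 · 0.4 · 1.0 = 0.2.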
Definition 3.2 (cf. Def. 1 of (Maletti, 2008)) A weighted extended top-down tree transducer (WXTT) is a 5-tuple M = (Q, Σ, ∆, R, q0) where:

1. Q is a finite set of states.
2. Σ and ∆ are the ranked alphabets of input and output symbols, respectively, where (Σ ∪ ∆) ∩ Q = ∅.
3. R is a tuple (R′, π), where R′ is a finite set of rules, each rule r of the form q.y → u for q ∈ Q, y ∈ TΣ(X), and u ∈ T∆(Q × X). We further require that no variable x ∈ X appears more than once in y, and that each variable appearing in u is also in y. Moreover, π : R′ → R+∞ is a weight function of the rules. As for WRTGs, we refer to R as a finite set of weighted rules, each rule r of the form q.y →π(r) u.

A WXTT is linear (respectively, nondeleting) if, for each rule r of the form q.y →w u, each x ∈ yd(y) ∩ X appears at most once (respectively, at least once) in u. We denote the class of all WXTTs as wxT and add the letters L and N to signify the subclasses of linear and nondeleting WXTT, respectively. Additionally, if y is of the form σ(x1, ..., xk), we remove the letter "x" to signify the transducer is not extended (i.e., it is a "traditional" WTT (Fülöp and Vogler, 2009)).

For WXTT M = (Q, Σ, ∆, R, q0), s, t ∈ T∆(Q × TΣ), and r ∈ R of the form q.y →w u, we obtain a derivation step from s to t by replacing some leaf of s labeled with q and a tree matching y by a transformation of u, where each instance of a variable has been replaced by a corresponding subtree of the y-matching tree. Formally, s ⇒r_M t if there is a position v ∈ pos(s), a substitution mapping ϕ : X → TΣ, and a rule q.y →w u ∈ R such that s(v) = (q, ϕ(y)) and t = s[ϕ′(u)]v, where ϕ′ is a substitution mapping Q × X → T∆(Q × TΣ) defined such that ϕ′(q′, x) = (q′, ϕ(x)) for all q′ ∈ Q and x ∈ X. We say this derivation step is leftmost if, for all v′ ∈ lv(s) where v′ < v, s(v′) ∈ ∆. We henceforth assume all derivation steps are leftmost. If, for some s ∈ TΣ, m ∈ N, ri ∈ R, and ti ∈ T∆(Q × TΣ) for all 1 ≤ i ≤ m, (q0, s) ⇒r1 t1 ... ⇒rm tm, we say the sequence d = (r1, ..., rm) is a derivation of (s, tm) in M; the weight of d is wt(d) = π(r1) · ... · π(rm). The weighted tree transformation recognized by M is the mapping τM : TΣ × T∆ → R+∞, such that for every s ∈ TΣ and t ∈ T∆, τM(s, t) is the sum of the weights of all (possibly infinitely many) derivations of (s, t) in M.

The composition of two weighted tree transformations τ : TΣ × T∆ → R+∞ and µ : T∆ × TΓ → R+∞ is the weighted tree transformation (τ ; µ) : TΣ × TΓ → R+∞ where for every s ∈ TΣ and u ∈ TΓ, (τ ; µ)(s, u) = Σ_{t ∈ T∆} τ(s, t) · µ(t, u).

3.2 Applicable classes

We now consider transducer classes where recognizability is preserved under application. Table 1 presents known results for the top-down tree transducer classes described in Section 3.1. Unlike the string case, preservation of recognizability is not universal or symmetric. This is important for us, because we can only construct an application WRTG, i.e., a WRTG representing the result of application, if we can ensure that the language generated by application is in fact recognizable.
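The transformation τM can be made concrete for the simplest case covered by these definitions. The sketch below is our own illustration (not the paper's code) for a non-extended, linear, nondeleting WTT: trees are `(label, child, ...)` tuples, and each rule q.σ(x1, ..., xk) →w u is encoded as a template output tree whose leaves `('$', q2, i)` stand for the right-side occurrences q2.x(i+1). It enumerates the finitely many output trees for one input, summing weights over derivations.

```python
def transduce(rules, q, tree):
    """All (output_tree, weight) pairs that a non-extended, linear,
    nondeleting WTT assigns to `tree` starting in state `q`.
    Trees are (label, child, ...) tuples.  `rules` maps (state, label)
    to a list of (weight, template) pairs; a template is an output tree
    whose leaves ('$', q2, i) mean "run state q2 on the i-th input
    subtree" (i.e., on the subtree bound to variable x(i+1))."""
    kids = tree[1:]
    out = {}
    for w, templ in rules.get((q, tree[0]), []):
        for t, tw in _expand(rules, templ, kids):
            out[t] = out.get(t, 0.0) + w * tw   # sum over derivations
    return out

def _expand(rules, templ, kids):
    if templ[0] == '$':                          # state call on a subtree
        _, q2, i = templ
        return list(transduce(rules, q2, kids[i]).items())
    combos = [((templ[0],), 1.0)]                # output symbol node
    for sub in templ[1:]:
        combos = [(t + (s,), tw * sw)
                  for t, tw in combos
                  for s, sw in _expand(rules, sub, kids)]
    return combos
```

With rules loosely modeled on MA of Figure 2b (weights made up for illustration), the input σ(α, α) is mapped both to σ(α, α) and to ψ(ρ, α), each with the product of its rule weights.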
Of the types under consideration, only wxLNT and wLNT preserve forward recognizability. The two classes marked as open questions and the other classes, which are superclasses of wNT, do not or are presumed not to. All subclasses of wxLT preserve backward recognizability.7 We do not consider cases where recognizability is not preserved in the remainder of this paper. If a transducer M of a class that preserves forward recognizability is applied to a WRTG G, we can call the forward application WRTG M(G)▷, and if M preserves backward recognizability, we can call the backward application WRTG M(G)◁.

7Note that the introduction of weights limits recognizability preservation considerably. For example, (unweighted) xT preserves backward recognizability.

Now that we have explained the application problem in the context of weighted tree transducers and determined the classes for which application is possible, let us consider how to build forward and backward application WRTGs. Our basic approach mimics that taken for WSTs by using an embed-compose-project strategy. As in the string world, if we can embed the input in a transducer, compose with the given transducer, and project the result, we can obtain the application WRTG. Embedding a WRTG in a wLNT is a trivial operation: if the WRTG is in normal form and chain production-free,8 for every production of the form n →w σ(n1, ..., nk), create a rule of the form n.σ(x1, ..., xk) →w σ(n1.x1, ..., nk.xk). Range projection of a wxLNT is also trivial: for every q ∈ Q and u ∈ T∆(Q × X) create a production of the form q →w u′, where u′ is formed from u by replacing all leaves of the form q.x with the leaf q, i.e., removing references to variables, and w is the sum of the weights of all rules of the form q.y → u in R.9 Domain projection for wxLT is best explained by way of example.
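The embedding step is mechanical enough to show directly. The sketch below uses our own illustrative encoding (not the paper's code): a production is a `(n, w, sigma, (n1, ..., nk))` tuple, and the resulting identity-transducer rules map `(state, sigma)` to `(weight, template)` pairs, where a template leaf `('$', state, i)` stands for state.x(i+1).

```python
def embed_wrtg(prods):
    """Embed a normal-form, chain-production-free WRTG as an identity
    wLNT: each production n -w-> sigma(n1, ..., nk) becomes the rule
    n.sigma(x1, ..., xk) -w-> sigma(n1.x1, ..., nk.xk).
    Encoding (illustrative): a production is (n, w, sigma, rhs);
    a rule maps (state, sigma) to (weight, template), where template
    leaves ('$', state, i) denote state calls on the i-th subtree."""
    rules = {}
    for n, w, sigma, rhs in prods:
        templ = (sigma,) + tuple(('$', ni, i) for i, ni in enumerate(rhs))
        rules.setdefault((n, sigma), []).append((w, templ))
    return rules
```

The output transducer copies its input unchanged, state by state, which is exactly what the embed-compose-project strategy needs on the embedded side.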
The left side of a rule is preserved, with variable leaves replaced by their associated states from the right side. So, the rule q1.σ(γ(x1), x2) →w δ(q2.x2, β(α, q3.x1)) would yield the production q1 →w σ(γ(q3), q2) in the domain projection. However, a deleting rule such as q1.σ(x1, x2) →w γ(q2.x2) necessitates the introduction of a new nonterminal ⊥ that can generate all of TΣ with weight 1.

The only missing piece in our embed-compose-project strategy is composition. Algorithm 1, which is based on the declarative construction of Maletti (2006), generates the syntactic composition of a wxLT and a wLNT, a generalization of the basic composition construction of Baker (1979). It calls Algorithm 2, which determines the sequences of rules in the second transducer that match the right side of a single rule in the first transducer. Since the embedded WRTG is of type wLNT, it may be either the first or second argument provided to Algorithm 1, depending on whether the application is forward or backward. We can thus use the embed-compose-project strategy for forward application of wLNT and backward application of wxLT and wxLNT. Note that we cannot use this strategy for forward application of wxLNT, even though that class preserves recognizability.

8Without loss of generality we assume this is so, since standard algorithms exist to remove chain productions (Kuich, 1998; Ésik and Kuich, 2003; Mohri, 2009) and convert into normal form (Alexandrakis and Bozapalidis, 1987).
9Finitely many such productions may be formed.

Algorithm 1 COMPOSE
1: inputs
2:   wxLT M1 = (Q1, Σ, ∆, R1, q1_0)
3:   wLNT M2 = (Q2, ∆, Γ, R2, q2_0)
4: outputs
5:   wxLT M3 = ((Q1 × Q2), Σ, Γ, R3, (q1_0, q2_0)) such that M3 = (τM1 ; τM2).
6: complexity
7:   O(|R1| max(|R2|^size(ũ), |Q2|)), where ũ is the largest right side tree in any rule in R1
8: Let R3 be of the form (R3′, π)
9: R3 ← (∅, ∅)
10: Ξ ← {(q1_0, q2_0)} {seen states}
11: Ψ ← {(q1_0, q2_0)} {pending states}
12: while Ψ ≠ ∅ do
13:   (q1, q2) ← any element of Ψ
14:   Ψ ← Ψ \ {(q1, q2)}
15:   for all (q1.y →w1 u) ∈ R1 do
16:     for all (z, w2) ∈ COVER(u, M2, q2) do
17:       for all (q, x) ∈ yd(z) ∩ ((Q1 × Q2) × X) do
18:         if q ∉ Ξ then
19:           Ξ ← Ξ ∪ {q}
20:           Ψ ← Ψ ∪ {q}
21:       r ← ((q1, q2).y → z)
22:       R3′ ← R3′ ∪ {r}
23:       π(r) ← π(r) + (w1 · w2)
24: return M3

4 Application of tree transducer cascades

What about the case of an input WRTG and a cascade of tree transducers? We will revisit the three strategies for accomplishing application discussed above for the string case. In order for offline composition to be a viable strategy, the transducers in the cascade must be closed under composition. Unfortunately, of the classes that preserve recognizability, only wLNT is closed under composition (Gécseg and Steinby, 1984; Baker, 1979; Maletti et al., 2009; Fülöp and Vogler, 2009).

However, the general lack of composability of tree transducers does not preclude us from conducting forward application of a cascade. We revisit the bucket brigade approach, which in Section 2 appeared to be little more than a choice of composition order. As discussed previously, application of a single transducer involves an embedding, a composition, and a projection. The embedded WRTG is in the class wLNT, and the projection forms another WRTG. As long as every transducer in the cascade can be composed with a wLNT to its left or right, depending on the application type, application of a cascade is possible. Note that this embed-compose-project process is somewhat more burdensome than in the string case.
For strings, application is obtained by a single embedding, a series of compositions, and a single projection, whereas application for trees is obtained by a series of (embed, compose, project) operations.

Algorithm 2 COVER
1: inputs
2:   u ∈ T∆(Q1 × X)
3:   wT M2 = (Q2, ∆, Γ, R2, q2_0)
4:   state q2 ∈ Q2
5: outputs
6:   set of pairs (z, w) with z ∈ TΓ((Q1 × Q2) × X) formed by one or more successful runs on u by rules in R2, starting from q2, and w ∈ R+∞ the sum of the weights of all such runs.
7: complexity
8:   O(|R2|^size(u))
9: if u(ε) is of the form (q1, x) ∈ Q1 × X then
10:   z_init ← ((q1, q2), x)
11: else
12:   z_init ← ⊥
13: Π_last ← {(z_init, {((ε, ε), q2)}, 1)}
14: for all v ∈ pos(u) such that u(v) ∈ ∆(k) for some k ≥ 0, in prefix order do
15:   Π_v ← ∅
16:   for all (z, θ, w) ∈ Π_last do
17:     for all v′ ∈ lv(z) such that z(v′) = ⊥ do
18:       for all (θ(v, v′).u(v)(x1, ..., xk) →w′ h) ∈ R2 do
19:         θ′ ← θ
20:         Form substitution mapping ϕ : (Q2 × X) → TΓ(((Q1 × Q2) × X) ∪ {⊥}).
21:         for i = 1 to k do
22:           for all v″ ∈ pos(h) such that h(v″) = (q2′, xi) for some q2′ ∈ Q2 do
23:             θ′(vi, v′v″) ← q2′
24:             if u(vi) is of the form (q1, x) ∈ Q1 × X then
25:               ϕ(q2′, xi) ← ((q1, q2′), x)
26:             else
27:               ϕ(q2′, xi) ← ⊥
28:         Π_v ← Π_v ∪ {(z[ϕ(h)]v′, θ′, w · w′)}
29:   Π_last ← Π_v
30: Z ← {z | (z, θ, w) ∈ Π_last}
31: return {(z, Σ_{(z,θ,w) ∈ Π_last} w) | z ∈ Z}

4.1 On-the-fly algorithms

We next consider on-the-fly algorithms for application. Similar to the string case, an on-the-fly approach is driven by a calling algorithm that periodically needs to know the productions in a WRTG with a common left side nonterminal. The embed-compose-project approach produces an entire application WRTG before any inference algorithm is run. In order to admit an on-the-fly approach we describe algorithms that only generate those productions in a WRTG that have a given left nonterminal.
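The driving pattern just described, generating only the productions with a requested left nonterminal and caching them, can be sketched as a tiny wrapper. This is our own illustration of the interface (not the paper's data structure); the `produce` callback stands in for whatever per-nonterminal construction (such as Algorithm 3 below) the application uses.

```python
class LazyWRTG:
    """Stand-in for an application WRTG: productions with a given left
    nonterminal are generated on first request and cached, so only the
    part of the application WRTG that a calling algorithm actually
    touches is ever built."""

    def __init__(self, n0, produce):
        self.n0 = n0
        self._produce = produce   # nonterminal -> [(weight, label, rhs)]
        self._prods = {}          # cache of already-built productions

    def productions(self, n):
        if n not in self._prods:          # enrich the stand-in
            self._prods[n] = self._produce(n)
        return self._prods[n]
```

A k-best or reachability algorithm can then treat the stand-in exactly like a fully built WRTG, while `produce` is invoked at most once per nonterminal.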
In this section we extend Definition 3.1 as follows: a WRTG is a 6-tuple G = (N, Σ, P, n0, M, G) where N, Σ, P, and n0 are defined as in Definition 3.1, and either M = G = ∅,10 or M is a wxLNT and G is a normal-form, chain production-free WRTG such that G ⪯ M(G)▷.

10In which case the definition is functionally unchanged from before.

Table 1: Preservation of forward and backward recognizability for various classes of top-down tree transducers. Here and elsewhere, the following abbreviations apply: w = weighted, x = extended LHS, L = linear, N = nondeleting, OQ = open question. Square brackets include a superposition of classes; for example, w[x]T signifies both wxT and wT.
(a) Preservation of forward recognizability:
  w[x]T: no    w[x]LT: OQ    w[x]NT: no    w[x]LNT: yes
(b) Preservation of backward recognizability:
  w[x]T: no    w[x]LT: yes    w[x]NT: no    w[x]LNT: yes

Algorithm 3 PRODUCE
1: inputs
2:   WRTG Gin = (Nin, ∆, Pin, n0, M, G) such that M = (Q, Σ, ∆, R, q0) is a wxLNT and G = (N, Σ, P, n0′, M′, G′) is a WRTG in normal form with no chain productions
3:   nin ∈ Nin
4: outputs
5:   WRTG Gout = (Nout, ∆, Pout, n0, M, G), such that Gin ⪯ Gout and (nin →w u) ∈ Pout ⇔ (nin →w u) ∈ M(G)▷
6: complexity
7:   O(|R||P|^size(ỹ)), where ỹ is the largest left side tree in any rule in R
8: if Pin contains productions of the form nin →w u then
9:   return Gin
10: Nout ← Nin
11: Pout ← Pin
12: Let nin be of the form (n, q), where n ∈ N and q ∈ Q
13: for all (q.y →w1 u) ∈ R do
14:   for all (θ, w2) ∈ REPLACE(y, G, n) do
15:     Form substitution mapping ϕ : Q × X → T∆(N × Q) such that, for all v ∈ yd(y) and q′ ∈ Q, if there exist n′ ∈ N and x ∈ X such that θ(v) = n′ and y(v) = x, then ϕ(q′, x) = (n′, q′).
16:     p′ ← ((n, q) →w1·w2 ϕ(u))
17:     for all p ∈ NORM(p′, Nout) do
18:       Let p be of the form n′ →w δ(n1, ..., nk) for δ ∈ ∆(k)
19:       Nout ← Nout ∪ {n′, ..., nk}
20:       Pout ← Pout ∪ {p}
21: return CHAIN-REM(Gout)

In the latter case, G is a stand-in for M(G)▷, analogous to the stand-ins for WSAs and WSTs described in Section 2. Algorithm 3, PRODUCE, takes as input a WRTG Gin = (Nin, ∆, Pin, n0, M, G) and a desired nonterminal nin and returns another WRTG, Gout, that is different from Gin in that it has more productions, specifically those beginning with nin that are in M(G)▷. Algorithms using stand-ins should call PRODUCE to ensure the stand-in they are using has the desired productions beginning with the specific nonterminal. Note, then, that PRODUCE obtains the effect of forward application in an on-the-fly manner.11 It makes calls to REPLACE, which is presented in Algorithm 4, as well as to a NORM algorithm that ensures normal form by replacing a single production not in normal form with several normal-form productions that can be combined together (Alexandrakis and Bozapalidis, 1987) and a CHAIN-REM algorithm that replaces a WRTG containing chain productions with an equivalent WRTG that does not (Mohri, 2009).

Algorithm 4 REPLACE
1: inputs
2:   y ∈ TΣ(X)
3:   WRTG G = (N, Σ, P, n0, M, G) in normal form, with no chain productions
4:   n ∈ N
5: outputs
6:   set Π of pairs (θ, w) where θ is a mapping pos(y) → N and w ∈ R+∞, each pair indicating a successful run on y by productions in G, starting from n, and w is the weight of the run.
7: complexity
8:   O(|P|^size(y))
9: Π_last ← {({(ε, n)}, 1)}
10: for all v ∈ pos(y) such that y(v) ∉ X, in prefix order do
11:   Π_v ← ∅
12:   for all (θ, w) ∈ Π_last do
13:     if M ≠ ∅ and G ≠ ∅ then
14:       G ← PRODUCE(G, θ(v))
15:     for all (θ(v) →w′ y(v)(n1, ..., nk)) ∈ P do
16:       Π_v ← Π_v ∪ {(θ ∪ {(vi, ni) | 1 ≤ i ≤ k}, w · w′)}
17:   Π_last ← Π_v
18: return Π_last

Algorithm 5 MAKE-EXPLICIT
1: inputs
2:   WRTG G = (N, Σ, P, n0, M, G) in normal form
3: outputs
4:   WRTG G′ = (N′, Σ, P′, n0, M, G), in normal form, such that if M ≠ ∅ and G ≠ ∅, LG′ = LM(G)▷, and otherwise G′ = G.
5: complexity
6:   O(|P′|)
7: G′ ← G
8: Ξ ← {n0} {seen nonterminals}
9: Ψ ← {n0} {pending nonterminals}
10: while Ψ ≠ ∅ do
11:   n ← any element of Ψ
12:   Ψ ← Ψ \ {n}
13:   if M ≠ ∅ and G ≠ ∅ then
14:     G′ ← PRODUCE(G′, n)
15:   for all (n →w σ(n1, ..., nk)) ∈ P′ do
16:     for i = 1 to k do
17:       if ni ∉ Ξ then
18:         Ξ ← Ξ ∪ {ni}
19:         Ψ ← Ψ ∪ {ni}
20: return G′

Figure 2: Forward application through a cascade of tree transducers using an on-the-fly method.
(a) Input WRTG G:  g0 →w1 σ(g0, g1);  g0 →w2 α;  g1 →w3 α
(b) First transducer MA in the cascade:  a0.σ(x1, x2) →w4 σ(a0.x1, a1.x2);  a0.σ(x1, x2) →w5 ψ(a2.x1, a1.x2);  a0.α →w6 α;  a1.α →w7 α;  a2.α →w8 ρ
(c) Second transducer MB in the cascade:  b0.σ(x1, x2) →w9 σ(b0.x1, b0.x2);  b0.α →w10 α
(d) Productions of MA(G)▷ built as a consequence of building the complete MB(MA(G)▷)▷:  g0a0 →w1·w4 σ(g0a0, g1a1);  g0a0 →w1·w5 ψ(g0a2, g1a1);  g0a0 →w2·w6 α;  g1a1 →w3·w7 α
(e) Complete MB(MA(G)▷)▷:  g0a0b0 →w1·w4·w9 σ(g0a0b0, g1a1b0);  g0a0b0 →w2·w6·w10 α;  g1a1b0 →w3·w7·w10 α
As an example of stand-in construction, consider the invocation PRODUCE(G1, g0a0), where G1 = ({g0a0}, {σ, ψ, α, ρ}, ∅, g0a0, MA, G),12 MA is in Figure 2b, and G is in Figure 2a. The stand-in WRTG that is output contains the first three of the four productions in Figure 2d.

To demonstrate the use of on-the-fly application in a cascade, we next show the effect of PRODUCE when used with the cascade G ◦ MA ◦ MB, where MB is in Figure 2c. Our driving algorithm in this case is Algorithm 5, MAKE-EXPLICIT, which simply generates the full application WRTG using calls to PRODUCE. The input to MAKE-EXPLICIT is G2 = ({g0a0b0}, {σ, α}, ∅, g0a0b0, MB, G1).13 MAKE-EXPLICIT calls PRODUCE(G2, g0a0b0). PRODUCE then seeks to cover b0.σ(x1, x2) →w9 σ(b0.x1, b0.x2) with productions from G1, which is a stand-in for MA(G)▷. At line 14 of REPLACE, G1 is improved so that it has the appropriate productions. The productions of MA(G)▷ that must be built to form the complete MB(MA(G)▷)▷ are shown in Figure 2d. The complete MB(MA(G)▷)▷ is shown in Figure 2e.

11Note further that it allows forward application of class wxLNT, something the embed-compose-project approach did not allow.
12By convention the initial nonterminal and state are listed first in graphical depictions of WRTGs and WXTTs.

Figure 3: Example rules from transducers used in the decoding experiment. j1 and j2 are Japanese words.
(a) Rotation rules:
  rJJ.JJ(x1, x2, x3) → JJ(rDT.x1, rJJ.x2, rVB.x3)
  rVB.VB(x1, x2, x3) → VB(rNNPS.x1, rNN.x3, rVB.x2)
  t."gentle" → "gentle"
(b) Insertion rules:
  iVB.NN(x1, x2) → NN(INS, iNN.x1, iNN.x2)
  iVB.NN(x1, x2) → NN(iNN.x1, iNN.x2)
  iVB.NN(x1, x2) → NN(iNN.x1, iNN.x2, INS)
(c) Translation rules:
  t.VB(x1, x2, x3) → X(t.x1, t.x2, t.x3)
  t."gentleman" → j1
  t."gentleman" → EPS
  t.INS → j1
  t.INS → j2
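The Figure 2 example can be reproduced with a much simplified, eager version of forward application for the non-extended linear nondeleting case. The code below is our own sketch, not the paper's PRODUCE: it builds all application productions reachable from the start pair (n0, q0), pairing each grammar production with each matching transducer rule and multiplying weights. The encoding is illustrative (productions as `(weight, label, rhs)` lists, rules as templates with `('$', state, i)` leaves), and the weights w1–w10 are instantiated with made-up values.

```python
def apply_wlnt(prods, rules, n0, q0):
    """Eager forward application of a non-extended wLNT to a
    normal-form, chain-production-free WRTG.  Nonterminals of the
    result are (n, q) pairs; only pairs reachable from (n0, q0) are
    explored.  prods: n -> [(w, sigma, (n1, ..., nk))];
    rules: (q, sigma) -> [(w, template)], with template leaves
    ('$', q2, i) meaning "state q2 on the i-th subtree"."""
    out = {}
    agenda = [(n0, q0)]
    while agenda:
        n, q = agenda.pop()
        if (n, q) in out:
            continue
        out[(n, q)] = []
        for w1, sigma, rhs in prods.get(n, []):
            for w2, templ in rules.get((q, sigma), []):
                t, leaves = _inst(templ, rhs)
                out[(n, q)].append((w1 * w2, t))  # weights multiply
                agenda.extend(leaves)             # explore new pairs
    return out

def _inst(templ, rhs):
    """Instantiate a rule template against production right side `rhs`,
    returning the result tree and its (nonterminal, state) leaves."""
    if templ[0] == '$':
        _, q2, i = templ
        pair = (rhs[i], q2)
        return pair, [pair]
    t, leaves = (templ[0],), []
    for sub in templ[1:]:
        st, sl = _inst(sub, rhs)
        t += (st,)
        leaves += sl
    return t, leaves
```

On a grammar and transducer shaped like G and MA of Figures 2a and 2b, this yields the application productions of Figure 2d, including g0a2 → ρ with weight w2 · w8; note that this eager construction builds that production, whereas the on-the-fly cascade in the text avoids it because MB never requests ψ.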
Note that because we used this on-the-fly approach, we were able to avoid building all the productions in MA(G)▷; in particular, we did not build g0a2 →w2·w8 ρ, while a bucket brigade approach would have built this production. We have also designed an analogous on-the-fly PRODUCE algorithm for backward application on linear WTTs.

We have now defined several on-the-fly and bucket brigade algorithms, and also discussed the possibility of embed-compose-project and offline composition strategies for application of cascades of tree transducers. Tables 2a and 2b summarize the available methods of forward and backward application of cascades for recognizability-preserving tree transducer classes.

13Note that G2 is the initial stand-in for MB(MA(G)▷)▷, since G1 is the initial stand-in for MA(G)▷.

Table 2: Transducer types and available methods of forward and backward application of a cascade. oc = offline composition, bb = bucket brigade, otf = on the fly.
(a) Forward application:
  WST: oc ✓, bb ✓, otf ✓
  wxLNT: oc ✗, bb ✗, otf ✓
  wLNT: oc ✓, bb ✓, otf ✓
(b) Backward application:
  WST: oc ✓, bb ✓, otf ✓
  wxLT: oc ✗, bb ✓, otf ✗
  wLT: oc ✗, bb ✓, otf ✓
  wxLNT: oc ✗, bb ✓, otf ✗
  wLNT: oc ✓, bb ✓, otf ✓

5 Decoding Experiments

The main purpose of this paper has been to present novel algorithms for performing application. However, it is important to demonstrate these algorithms on real data. We thus demonstrate bucket-brigade and on-the-fly backward application on a typical NLP task cast as a cascade of wLNTs. We adapt the Japanese-to-English translation model of Yamada and Knight (2001) by transforming it from an English-tree-to-Japanese-string model to an English-tree-to-Japanese-tree model. The Japanese trees are unlabeled, meaning they have syntactic structure but all nodes are labeled "X". We then cast this modified model as a cascade of LNT tree transducers. Space does not permit a detailed description, but some example rules are in Figure 3. The rotation transducer R, a sample of which is in Figure 3a, has 6,453 rules, the insertion transducer I, Figure 3b, has 8,122 rules, and the translation transducer, T, Figure 3c, has 37,311 rules.

We add an English syntax language model L to the cascade of transducers just described to better simulate an actual machine translation decoding task. The language model is cast as an identity WTT and thus fits naturally into the experimental framework. In our experiments we try several different language models to demonstrate varying performance of the application algorithms. The most realistic language model is a PCFG. Each rule captures the probability of a particular sequence of child labels given a parent label. This model has 7,765 rules. To demonstrate more extreme cases of the usefulness of the on-the-fly approach, we build a language model that recognizes exactly the 2,087 trees in the training corpus, each with equal weight. It has 39,455 rules. Finally, to be ultra-specific, we include a form of the "specific" language model just described, but only allow the English counterpart of the particular Japanese sentence being decoded in the language.

The goal in our experiments is to apply a single tree t backward through the cascade L ◦ R ◦ I ◦ T ◦ t and find the 1-best path in the application WRTG. We evaluate the speed of each approach: bucket brigade and on-the-fly. The algorithm we use to obtain the 1-best path is a modification of the k-best algorithm of Pauls and Klein (2009). Our algorithm finds the 1-best path in a WRTG and admits an on-the-fly approach. The results of the experiments are shown in Table 3.
As can be seen, on-the-fly application is generally faster than the bucket brigade, about double the speed per sentence in the experiment that uses an English PCFG language model.

[Table 3: Timing results to obtain the 1-best tree from application through a weighted tree transducer cascade, using on-the-fly vs. bucket brigade backward application techniques. pcfg = model recognizes any tree licensed by a PCFG built from observed data, exact = model recognizes each of 2,000+ trees with equal weight, 1-sent = model recognizes exactly one tree. The per-configuration times were not recoverable from the extraction.]

The results for the other two language models demonstrate more keenly the potential advantage that an on-the-fly approach provides: the simultaneous incorporation of information from all models allows application to be done more effectively than if each information source is considered in sequence. In the "exact" case, where a very large language model that simply recognizes each of the 2,087 trees in the training corpus is used, the final application is so large that it overwhelms the resources of a 4 GB MacBook Pro, while the on-the-fly approach does not suffer from this problem. The "1-sent" case is presented to demonstrate the ripple effect caused by using on-the-fly inference. In the other two cases, a very large language model generally overwhelms the timing statistics, regardless of the method being used. But a language model that represents exactly one sentence is very small, and thus the effects of simultaneous inference are readily apparent: the time to retrieve the 1-best sentence is reduced by two orders of magnitude in this experiment.

6 Conclusion

We have presented algorithms for forward and backward application of weighted tree transducer cascades, including on-the-fly variants, and demonstrated the benefit of an on-the-fly approach to application.
We note that a more formal approach to application of WTTs is being developed, independently from these efforts, by Fülöp et al. (2010).

Acknowledgments

We are grateful for extensive discussions with Andreas Maletti. We also appreciate the insights and advice of David Chiang, Steve DeNeefe, and others at ISI in the preparation of this work. Jonathan May and Kevin Knight were supported by NSF grants IIS-0428020 and IIS-0904684. Heiko Vogler was supported by DFG VO 1011/5-1.

References

Athanasios Alexandrakis and Symeon Bozapalidis. 1987. Weighted grammars and Kleene's theorem. Information Processing Letters, 24(1):1–4.

Brenda S. Baker. 1979. Composition of top-down and bottom-up tree transductions. Information and Control, 41(2):186–213.

Zoltán Ésik and Werner Kuich. 2003. Formal tree series. Journal of Automata, Languages and Combinatorics, 8(2):219–285.

Zoltán Fülöp and Heiko Vogler. 2009. Weighted tree automata and tree transducers. In Manfred Droste, Werner Kuich, and Heiko Vogler, editors, Handbook of Weighted Automata, chapter 9, pages 313–404. Springer-Verlag.

Zoltán Fülöp, Andreas Maletti, and Heiko Vogler. 2010. Backward and forward application of weighted extended tree transducers. Unpublished manuscript.

Ferenc Gécseg and Magnus Steinby. 1984. Tree Automata. Akadémiai Kiadó, Budapest.

Liang Huang and David Chiang. 2005. Better k-best parsing. In Harry Bunt, Robert Malouf, and Alon Lavie, editors, Proceedings of the Ninth International Workshop on Parsing Technologies (IWPT), pages 53–64, Vancouver, October. Association for Computational Linguistics.

Werner Kuich. 1998. Formal power series over trees. In Symeon Bozapalidis, editor, Proceedings of the 3rd International Conference on Developments in Language Theory (DLT), pages 61–101, Thessaloniki, Greece. Aristotle University of Thessaloniki.

Werner Kuich. 1999. Tree transducers and formal tree series. Acta Cybernetica, 14:135–149.
Andreas Maletti, Jonathan Graehl, Mark Hopkins, and Kevin Knight. 2009. The power of extended top-down tree transducers. SIAM Journal on Computing, 39(2):410–430.

Andreas Maletti. 2006. Compositions of tree series transformations. Theoretical Computer Science, 366:248–271.

Andreas Maletti. 2008. Compositions of extended top-down tree transducers. Information and Computation, 206(9–10):1187–1196.

Andreas Maletti. 2009. Personal communication.

Mehryar Mohri, Fernando C. N. Pereira, and Michael Riley. 2000. The design principles of a weighted finite-state transducer library. Theoretical Computer Science, 231:17–32.

Mehryar Mohri. 1997. Finite-state transducers in language and speech processing. Computational Linguistics, 23(2):269–312.

Mehryar Mohri. 2009. Weighted automata algorithms. In Manfred Droste, Werner Kuich, and Heiko Vogler, editors, Handbook of Weighted Automata, chapter 6, pages 213–254. Springer-Verlag.

Adam Pauls and Dan Klein. 2009. K-best A* parsing. In Keh-Yih Su, Jian Su, Janyce Wiebe, and Haizhou Li, editors, Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 958–966, Suntec, Singapore, August. Association for Computational Linguistics.

Fernando Pereira and Michael Riley. 1997. Speech recognition by composition of weighted finite automata. In Emmanuel Roche and Yves Schabes, editors, Finite-State Language Processing, chapter 15, pages 431–453. MIT Press, Cambridge, MA.

William A. Woods. 1980. Cascaded ATN grammars. American Journal of Computational Linguistics, 6(1):1–12.

Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, pages 523–530, Toulouse, France, July. Association for Computational Linguistics.
4 0.58954495 234 acl-2010-The Use of Formal Language Models in the Typology of the Morphology of Amerindian Languages
Author: Andres Osvaldo Porta
Abstract: The aim of this work is to present some preliminary results of an ongoing investigation into the typology of the morphology of the native South American languages from the point of view of formal language theory. To this end, we give two contrasting descriptions of the finite verb form morphology of two Aboriginal languages: Argentinean Quechua (quichua santiagueño) and Toba. The description of the morphology of the finite verb forms of Argentinean Quechua uses finite automata and finite transducers. In this case the construction is straightforward using two-level morphology, and it describes the Argentinean Quechua morphology in a very natural way as a regular language. On the contrary, the Toba verb morphology, with a system that simultaneously uses prefixes and suffixes, does not have a natural description as a regular language. Toba has a complex system of causative suffixes, whose successive applications determine the use of prefixes belonging to different person-marking prefix sets. We adopt the solution of Creider et al. (1995) to deal naturally with this and other similar morphological processes that involve interactions between prefixes and suffixes, and we then describe the Toba morphology using linear context-free languages.
5 0.56767696 222 acl-2010-SystemT: An Algebraic Approach to Declarative Information Extraction
Author: Laura Chiticariu ; Rajasekar Krishnamurthy ; Yunyao Li ; Sriram Raghavan ; Frederick Reiss ; Shivakumar Vaithyanathan
Abstract: As information extraction (IE) becomes more central to enterprise applications, rule-based IE engines have become increasingly important. In this paper, we describe SystemT, a rule-based IE system whose basic design removes the expressivity and performance limitations of current systems based on cascading grammars. SystemT uses a declarative rule language, AQL, and an optimizer that generates high-performance algebraic execution plans for AQL rules. We compare SystemT’s approach against cascading grammars, both theoretically and with a thorough experimental evaluation. Our results show that SystemT can deliver result quality comparable to the state-of-the-art and an order of magnitude higher annotation throughput.
6 0.54135436 186 acl-2010-Optimal Rank Reduction for Linear Context-Free Rewriting Systems with Fan-Out Two
7 0.5363543 128 acl-2010-Grammar Prototyping and Testing with the LinGO Grammar Matrix Customization System
8 0.49900448 21 acl-2010-A Tree Transducer Model for Synchronous Tree-Adjoining Grammars
9 0.46042335 92 acl-2010-Don't 'Have a Clue'? Unsupervised Co-Learning of Downward-Entailing Operators.
10 0.45224106 260 acl-2010-Wide-Coverage NLP with Linguistically Expressive Grammars
11 0.44105983 235 acl-2010-Tools for Multilingual Grammar-Based Translation on the Web
12 0.42061558 162 acl-2010-Learning Common Grammar from Multilingual Corpus
13 0.4183189 255 acl-2010-Viterbi Training for PCFGs: Hardness Results and Competitiveness of Uniform Initialization
14 0.41830039 217 acl-2010-String Extension Learning
15 0.4161689 40 acl-2010-Automatic Sanskrit Segmentizer Using Finite State Transducers
16 0.40995336 66 acl-2010-Compositional Matrix-Space Models of Language
17 0.40806592 53 acl-2010-Blocked Inference in Bayesian Tree Substitution Grammars
18 0.40779206 118 acl-2010-Fine-Grained Tree-to-String Translation Rule Extraction
19 0.4061878 211 acl-2010-Simple, Accurate Parsing with an All-Fragments Grammar
20 0.40268144 127 acl-2010-Global Learning of Focused Entailment Graphs
topicId topicWeight
[(4, 0.014), (14, 0.032), (25, 0.071), (33, 0.017), (39, 0.02), (42, 0.018), (43, 0.298), (44, 0.015), (59, 0.066), (72, 0.016), (73, 0.042), (76, 0.026), (78, 0.078), (80, 0.012), (83, 0.056), (84, 0.045), (98, 0.085)]
simIndex simValue paperId paperTitle
same-paper 1 0.79334426 67 acl-2010-Computing Weakest Readings
Author: Alexander Koller ; Stefan Thater
Abstract: We present an efficient algorithm for computing the weakest readings of semantically ambiguous sentences. A corpus-based evaluation with a large-scale grammar shows that our algorithm reduces over 80% of sentences to one or two readings, in negligible runtime, and thus makes it possible to work with semantic representations derived by deep large-scale grammars.
2 0.51785362 51 acl-2010-Bilingual Sense Similarity for Statistical Machine Translation
Author: Boxing Chen ; George Foster ; Roland Kuhn
Abstract: This paper proposes new algorithms to compute the sense similarity between two units (words, phrases, rules, etc.) from parallel corpora. The sense similarity scores are computed by using the vector space model. We then apply the algorithms to statistical machine translation by computing the sense similarity between the source and target side of translation rule pairs. Similarity scores are used as additional features of the translation model to improve translation performance. Significant improvements are obtained over a state-of-the-art hierarchical phrase-based machine translation system.
3 0.5024457 52 acl-2010-Bitext Dependency Parsing with Bilingual Subtree Constraints
Author: Wenliang Chen ; Jun'ichi Kazama ; Kentaro Torisawa
Abstract: This paper proposes a dependency parsing method that uses bilingual constraints to improve the accuracy of parsing bilingual texts (bitexts). In our method, a target-side tree fragment that corresponds to a source-side tree fragment is identified via word alignment and mapping rules that are automatically learned. Then it is verified by checking the subtree list that is collected from large-scale automatically parsed data on the target side. Our method, thus, requires gold standard trees only on the source side of a bilingual corpus in the training phase, unlike the joint parsing model, which requires gold standard trees on both sides. Compared to the reordering constraint model, which requires the same training data as ours, our method achieved higher accuracy because of richer bilingual constraints. Experiments on the translated portion of the Chinese Treebank show that our system outperforms monolingual parsers by 2.93 points for Chinese and 1.64 points for English.
4 0.47511443 70 acl-2010-Contextualizing Semantic Representations Using Syntactically Enriched Vector Models
Author: Stefan Thater ; Hagen Furstenau ; Manfred Pinkal
Abstract: We present a syntactically enriched vector model that supports the computation of contextualized semantic representations in a quasi compositional fashion. It employs a systematic combination of first- and second-order context vectors. We apply our model to two different tasks and show that (i) it substantially outperforms previous work on a paraphrase ranking task, and (ii) achieves promising results on a wordsense similarity task; to our knowledge, it is the first time that an unsupervised method has been applied to this task.
5 0.47289339 17 acl-2010-A Structured Model for Joint Learning of Argument Roles and Predicate Senses
Author: Yotaro Watanabe ; Masayuki Asahara ; Yuji Matsumoto
Abstract: In predicate-argument structure analysis, it is important to capture non-local dependencies among arguments and interdependencies between the sense of a predicate and the semantic roles of its arguments. However, no existing approach explicitly handles both non-local dependencies and semantic dependencies between predicates and arguments. In this paper we propose a structured model that overcomes the limitation of existing approaches; the model captures both types of dependencies simultaneously by introducing four types of factors including a global factor type capturing non-local dependencies among arguments and a pairwise factor type capturing local dependencies between a predicate and an argument. In experiments the proposed model achieved competitive results compared to the stateof-the-art systems without applying any feature selection procedure.
6 0.46809185 158 acl-2010-Latent Variable Models of Selectional Preference
7 0.46791676 23 acl-2010-Accurate Context-Free Parsing with Combinatory Categorial Grammar
8 0.46705568 10 acl-2010-A Latent Dirichlet Allocation Method for Selectional Preferences
9 0.4667021 71 acl-2010-Convolution Kernel over Packed Parse Forest
10 0.46597642 211 acl-2010-Simple, Accurate Parsing with an All-Fragments Grammar
11 0.46354097 130 acl-2010-Hard Constraints for Grammatical Function Labelling
12 0.461799 120 acl-2010-Fully Unsupervised Core-Adjunct Argument Classification
13 0.46011257 248 acl-2010-Unsupervised Ontology Induction from Text
14 0.45760384 53 acl-2010-Blocked Inference in Bayesian Tree Substitution Grammars
15 0.45558429 21 acl-2010-A Tree Transducer Model for Synchronous Tree-Adjoining Grammars
16 0.45405895 169 acl-2010-Learning to Translate with Source and Target Syntax
17 0.45279592 153 acl-2010-Joint Syntactic and Semantic Parsing of Chinese
18 0.45228708 162 acl-2010-Learning Common Grammar from Multilingual Corpus
19 0.45140105 128 acl-2010-Grammar Prototyping and Testing with the LinGO Grammar Matrix Customization System
20 0.4504917 65 acl-2010-Complexity Metrics in an Incremental Right-Corner Parser