acl acl2010 acl2010-95 knowledge-graph by maker-knowledge-mining

95 acl-2010-Efficient Inference through Cascades of Weighted Tree Transducers


Source: pdf

Author: Jonathan May ; Kevin Knight ; Heiko Vogler

Abstract: Weighted tree transducers have been proposed as useful formal models for representing syntactic natural language processing applications, but there has been little description of inference algorithms for these automata beyond formal foundations. We give a detailed description of algorithms for application of cascades of weighted tree transducers to weighted tree acceptors, connecting formal theory with actual practice. Additionally, we present novel on-the-fly variants of these algorithms, and compare their performance on a syntax machine translation cascade based on (Yamada and Knight, 2001).

1 Motivation

Weighted finite-state transducers have found recent favor as models of natural language (Mohri, 1997). In order to make actual use of systems built with these formalisms we must first calculate the set of possible weighted outputs allowed by the transducer given some input, which we call forward application, or the set of possible weighted inputs given some output, which we call backward application. After application we can do some inference on this result, such as determining its k highest weighted elements. We may also want to divide up our problems into manageable chunks, each represented by a transducer. As noted by Woods (1980), it is easier for designers to write several small transducers where each performs a simple transformation, rather than painstakingly construct a single complicated device. We would like to know, then, the result of transformation of input or output by a cascade of transducers, one operating after the other. As we will see, there are various strategies for approaching this problem. We will consider offline composition, bucket brigade application, and on-the-fly application. Application of cascades of weighted string transducers (WSTs) has been well-studied (Mohri, 1997). Less well-studied but of more recent interest is application of cascades of weighted tree transducers (WTTs). We tackle application of WTT cascades in this work, presenting:

• explicit algorithms for application of WTT cascades,
• novel algorithms for on-the-fly application of WTT cascades, and
• experiments comparing the performance of these algorithms.

2 Strategies for the string case

Before we discuss application of WTTs, it is helpful to recall the solution to this problem in the WST domain. We recall previous formal presentations of WSTs (Mohri, 1997) and note informally that they may be represented as directed graphs with designated start and end states and edges labeled with input symbols, output symbols, and weights.1 Fortunately, the solution for WSTs is practically trivial—we achieve application through a series of embedding, composition, and projection operations. Embedding is simply the act of representing a string or regular string language as an identity WST. Composition of WSTs, that is, generating a single WST that captures the transformations of two input WSTs used in sequence, is not at all trivial, but has been well covered in, e.g., (Mohri, 2009), where directly implementable algorithms can be found. Finally, projection is another trivial operation—the domain or range language can be obtained from a WST by ignoring the output or input symbols, respectively, on its arcs, and summing weights on otherwise identical arcs.
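The two "trivial" WST operations just described, embedding and projection, are simple enough to sketch directly. The following is a minimal sketch under an assumed arc-list encoding of WSTs; the representation and the helper names (embed_string, project) are illustrative assumptions, not the authors' implementation, and composition is deliberately omitted because the text defers it to (Mohri, 2009).

```python
# Sketch of embedding a string as an identity WST and projecting a WST to a WSA.
# Arcs are (src, dst, input_symbol, output_symbol, weight); all names are illustrative.
from collections import defaultdict

def embed_string(symbols):
    """Represent a string as an identity WST: one arc per symbol, weight 1."""
    arcs = [(i, i + 1, s, s, 1.0) for i, s in enumerate(symbols)]
    return {"start": 0, "final": len(symbols), "arcs": arcs}

def project(wst, side="output"):
    """Keep one tape of the WST and sum weights on otherwise identical arcs,
    as in the text's description of range/domain projection."""
    summed = defaultdict(float)
    for src, dst, inp, out, w in wst["arcs"]:
        label = out if side == "output" else inp
        summed[(src, dst, label)] += w
    wsa_arcs = [(src, dst, label, w) for (src, dst, label), w in summed.items()]
    return {"start": wst["start"], "final": wst["final"], "arcs": wsa_arcs}

# Example: embed "a a"; range projection of the identity embedding recovers the same string.
wst = embed_string(["a", "a"])
print(project(wst))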
By embedding an input, composing the result with the given WST, and projecting the result, forward application is accomplished.2 We are then left with a weighted string acceptor (WSA), essentially a weighted, labeled graph, which can be traversed by well-known algorithms to efficiently find the k-best paths. Because WSTs can be freely composed, extending application to operate on a cascade of WSTs is fairly trivial. The only question is one of composition order: whether to initially compose the cascade into a single transducer (an approach we call offline composition) or to compose the initial embedding with the first transducer, trim useless states, compose the result with the second, and so on (an approach we call bucket brigade). The appropriate strategy generally depends on the structure of the individual transducers. A third approach builds the result incrementally, as dictated by some algorithm that requests information about it. Such an approach, which we call on-the-fly, was described in (Pereira and Riley, 1997; Mohri, 2009; Mohri et al., 2000). If we can efficiently calculate the outgoing edges of a state of the result WSA on demand, without calculating all edges in the entire machine, we can maintain a stand-in for the result structure, a machine consisting at first of only the start state of the true result. As a calling algorithm (e.g., an implementation of Dijkstra's algorithm) requests information about the result graph, such as the set of outgoing edges from a state, we replace the current stand-in with a richer version by adding the result of the request. The on-the-fly approach has a distinct advantage over the other two methods in that the entire result graph need not be built. A graphical representation of all three methods is presented in Figure 1.

1 We assume throughout this paper that weights are in R+ ∪ {+∞}, that the weight of a path is calculated as the product of the weights of its edges, and that the weight of a (not necessarily finite) set T of paths is calculated as the sum of the weights of the paths of T.
2 For backward applications, the roles of input and output are simply exchanged.

Figure 1: Three different approaches to application through cascades of WSTs. (a) Input string "a a" embedded in an identity WST; (b) first WST in cascade; (c) second WST in cascade; (d) offline composition approach; (e) bucket brigade approach; (f) result of offline or bucket application; (g) initial on-the-fly stand-in; (h) on-the-fly stand-in after exploring outgoing edges of the start state; (i) on-the-fly stand-in after the best path has been found.
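The stand-in idea lends itself to a small sketch: represent the result WSA only by a callback that produces a state's outgoing edges on demand, and let a best-path search expand just the states it actually visits. The class and callback names below are illustrative assumptions, not the paper's code, and weights are assumed to multiply along paths as in footnote 1.

```python
# Minimal sketch of an on-the-fly WSA stand-in plus a Dijkstra-style best-path search.
import heapq
from math import log

class LazyWSA:
    def __init__(self, start, finals, edge_fn):
        self.start, self.finals = start, finals
        self.edge_fn = edge_fn          # state -> list of (next_state, label, weight)
        self._cache = {}                # the growing stand-in for the true result

    def outgoing(self, state):
        if state not in self._cache:    # expand only when a caller asks
            self._cache[state] = self.edge_fn(state)
        return self._cache[state]

def best_path(wsa):
    """Best path under -log(weight) cost; only requested states are ever built."""
    agenda = [(0.0, wsa.start, [])]
    seen = set()
    while agenda:
        cost, state, labels = heapq.heappop(agenda)
        if state in seen:
            continue
        seen.add(state)
        if state in wsa.finals:
            return labels, cost
        for nxt, label, w in wsa.outgoing(state):
            heapq.heappush(agenda, (cost - log(w), nxt, labels + [label]))
    return None

# Toy result graph: two steps, two equally weighted labels per step.
toy = LazyWSA(0, {2}, lambda s: [(s + 1, "a", 0.5), (s + 1, "b", 0.5)] if s < 2 else [])
print(best_path(toy))
```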
3 Application of tree transducers

Now let us revisit these strategies in the setting of trees and tree transducers. Imagine we have a tree or set of trees as input that can be represented as a weighted regular tree grammar3 (WRTG) and a WTT that can transform that input with some weight. We would like to know the k-best trees the WTT can produce as output for that input, along with their weights. We already know of several methods for acquiring k-best trees from a WRTG (Huang and Chiang, 2005; Pauls and Klein, 2009), so we then must ask if, analogously to the string case, WTTs preserve recognizability4 and we can form an application WRTG. Before we begin, however, we must define WTTs and WRTGs.

3 This generates the same class of weighted tree languages as weighted tree automata, the direct analogue of WSAs, and is more useful for our purposes.
4 A weighted tree language is recognizable iff it can be represented by a WRTG.
5 The following formal definitions and notations are needed for understanding and reimplementation of the presented algorithms, but can be safely skipped on first reading and consulted when encountering an unfamiliar term.

3.1 Preliminaries5

A ranked alphabet is a finite set Σ such that every member σ ∈ Σ has a rank rk(σ) ∈ N. We call Σ(k) ⊆ Σ, k ∈ N, the set of those σ ∈ Σ such that rk(σ) = k. The set of variables is denoted X = {x1, x2, . . .} and is assumed to be disjoint from any ranked alphabet used in this paper. We use ⊥ to denote a symbol of rank 0 that is not in any ranked alphabet used in this paper. A tree t ∈ TΣ is denoted σ(t1, . . . , tk) where k ≥ 0, σ ∈ Σ(k), and t1, . . . , tk ∈ TΣ. For σ ∈ Σ(0) we write σ ∈ TΣ as shorthand for σ(). For every set S disjoint from Σ, let TΣ(S) = TΣ∪S, where, for all s ∈ S, rk(s) = 0.

We define the positions of a tree t = σ(t1, . . . , tk), for k ≥ 0, σ ∈ Σ(k), t1, . . . , tk ∈ TΣ, as a set pos(t) ⊂ N∗ such that pos(t) = {ε} ∪ {iv | 1 ≤ i ≤ k, v ∈ pos(ti)}. The set of leaf positions lv(t) ⊆ pos(t) are those positions v ∈ pos(t) such that for no i ∈ N, vi ∈ pos(t). We presume standard lexicographic orderings < and ≤ on positions. The label of t at position v, denoted by t(v), the subtree of t at v, denoted by t|v, and the replacement at v by s, denoted by t[s]v, are defined as follows:

1. For every σ ∈ Σ(0), σ(ε) = σ, σ|ε = σ, and σ[s]ε = s.
2. For every t = σ(t1, . . . , tk) such that k = rk(σ) and k ≥ 1, t(ε) = σ, t|ε = t, and t[s]ε = s. For every 1 ≤ i ≤ k and v ∈ pos(ti), t(iv) = ti(v), t|iv = ti|v, and t[s]iv = σ(t1, . . . , ti−1, ti[s]v, ti+1, . . . , tk).

The size of a tree t, size(t), is |pos(t)|, the cardinality of its position set. The yield set of a tree is the set of labels of its leaves: for a tree t, yd(t) = {t(v) | v ∈ lv(t)}.
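The position, label, subtree, and replacement notions just defined map directly onto a small data structure. The following is a minimal sketch under assumed conventions (positions as tuples of child indices, with ε as the empty tuple); the class and method names are illustrative, not part of the paper.

```python
# Sketch of pos(t), t(v), t|v, t[s]_v, size(t), and yd(t) for explicitly built trees.
class Tree:
    def __init__(self, label, children=()):
        self.label, self.children = label, tuple(children)

    def pos(self):
        yield ()
        for i, c in enumerate(self.children, start=1):
            for v in c.pos():
                yield (i,) + v

    def at(self, v):                      # subtree t|v
        return self if not v else self.children[v[0] - 1].at(v[1:])

    def label_at(self, v):                # label t(v)
        return self.at(v).label

    def replace(self, v, s):              # replacement t[s]_v
        if not v:
            return s
        kids = list(self.children)
        kids[v[0] - 1] = kids[v[0] - 1].replace(v[1:], s)
        return Tree(self.label, kids)

    def size(self):
        return sum(1 for _ in self.pos())

    def yd(self):                         # yield set: labels of leaf positions
        return {self.label_at(v) for v in self.pos() if not self.at(v).children}

# Example: t = sigma(alpha, beta); replace the leaf at position 2 by gamma.
t = Tree("sigma", [Tree("alpha"), Tree("beta")])
print(t.size(), t.yd(), t.replace((2,), Tree("gamma")).label_at((2,)))
```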
Let A and B be sets, and let ϕ : A → TΣ(B) be a mapping. We extend ϕ to the mapping ϕ : TΣ(A) → TΣ(B) such that for a ∈ A, ϕ(a) = ϕ(a) and for k ≥ 0, σ ∈ Σ(k), and t1, . . . , tk ∈ TΣ(A), ϕ(σ(t1, . . . , tk)) = σ(ϕ(t1), . . . , ϕ(tk)). We indicate such extensions by describing ϕ as a substitution mapping and then using ϕ without further comment. We use R+ to denote the set {w ∈ R | w ≥ 0} and R+∞ to denote the set R+ ∪ {+∞}.

Definition 3.1 (cf. (Alexandrakis and Bozapalidis, 1987)) A weighted regular tree grammar (WRTG) is a 4-tuple G = (N, Σ, P, n0) where:
1. N is a finite set of nonterminals, with n0 ∈ N the start nonterminal.
2. Σ is a ranked alphabet of input symbols, where Σ ∩ N = ∅.
3. P is a tuple (P0, π), where P0 is a finite set of productions, each production p of the form n → u, n ∈ N, u ∈ TΣ(N), and π : P0 → R+ is a weight function of the productions. We will refer to P as a finite set of weighted productions, each production p of the form n →π(p) u. A production p is a chain production if it is of the form ni →w nj, where ni, nj ∈ N.6

6 In (Alexandrakis and Bozapalidis, 1987), chain productions are forbidden in order to avoid infinite summations. We explicitly allow such summations.

A WRTG G is in normal form if each production is either a chain production or is of the form n →w σ(n1, . . . , nk) where σ ∈ Σ(k) and n1, . . . , nk ∈ N. For WRTG G = (N, Σ, P, n0), s, t, u ∈ TΣ(N), n ∈ N, and p ∈ P of the form n →w u, we obtain a derivation step from s to t by replacing some leaf nonterminal in s labeled n with u. Formally, s ⇒p_G t if there exists some v ∈ lv(s) such that s(v) = n and s[u]v = t. We say this derivation step is leftmost if, for all v′ ∈ lv(s) where v′ < v, s(v′) ∈ Σ. We henceforth assume all derivation steps are leftmost. If, for some m ∈ N, pi ∈ P, and ti ∈ TΣ(N) for all 1 ≤ i ≤ m, n0 ⇒p1 t1 ⇒ · · · ⇒pm tm, we say the sequence d = (p1, . . . , pm) is a derivation of tm in G and that n0 ⇒∗ tm; the weight of d is wt(d) = π(p1) · . . . · π(pm). The weighted tree language recognized by G is the mapping LG : TΣ → R+∞ such that for every t ∈ TΣ, LG(t) is the sum of the weights of all (possibly infinitely many) derivations of t in G. A weighted tree language f : TΣ → R+∞ is recognizable if there is a WRTG G such that f = LG. We define a partial ordering ⪯ on WRTGs such that for WRTGs G1 = (N1, Σ, P1, n0) and G2 = (N2, Σ, P2, n0), we say G1 ⪯ G2 iff N1 ⊆ N2 and P1 ⊆ P2, where the weights are preserved.

Definition 3.2 (cf. Def. 1 of (Maletti, 2008)) A weighted extended top-down tree transducer (WXTT) is a 5-tuple M = (Q, Σ, ∆, R, q0) where:
1. Q is a finite set of states.
2. Σ and ∆ are the ranked alphabets of input and output symbols, respectively, where (Σ ∪ ∆) ∩ Q = ∅.
3. R is a tuple (R0, π), where R0 is a finite set of rules, each rule r of the form q.y → u for q ∈ Q, y ∈ TΣ(X), and u ∈ T∆(Q × X). We further require that no variable x ∈ X appears more than once in y, and that each variable appearing in u is also in y. Moreover, π : R0 → R+∞ is a weight function of the rules. As for WRTGs, we refer to R as a finite set of weighted rules, each rule r of the form q.y →π(r) u.
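As a concrete illustration of Definition 3.1 and of leftmost derivations, the following is a minimal sketch using nested tuples ("label", child1, ..., childk) for trees in TΣ(N). The grammar encoding, the example weights, and the helper names are illustrative assumptions, not the paper's format.

```python
# Sketch of a WRTG, leftmost derivation steps, and wt(d) as a product of weights.
from math import prod

N = {"n0", "n1"}                                   # nonterminals
P = [("n0", ("sigma", ("n0",), ("n1",)), 0.4),     # n0 ->0.4 sigma(n0, n1)
     ("n0", ("alpha",), 0.6),                      # n0 ->0.6 alpha
     ("n1", ("alpha",), 1.0)]                      # n1 ->1.0 alpha

def leftmost_nt(t, v=()):
    """Position of the leftmost nonterminal leaf of t, or None if t is in T_Sigma."""
    if len(t) == 1:
        return v if t[0] in N else None
    for i, child in enumerate(t[1:], start=1):
        hit = leftmost_nt(child, v + (i,))
        if hit is not None:
            return hit
    return None

def label_at(t, v):
    return t[0] if not v else label_at(t[v[0]], v[1:])

def replace(t, v, s):
    """The replacement t[s]_v for nested-tuple trees."""
    if not v:
        return s
    kids = list(t[1:])
    kids[v[0] - 1] = replace(kids[v[0] - 1], v[1:], s)
    return (t[0],) + tuple(kids)

def run_derivation(start, derivation):
    """Apply productions as leftmost derivation steps; return the tree and wt(d)."""
    t = (start,)
    for lhs, rhs, _w in derivation:
        v = leftmost_nt(t)
        assert v is not None and label_at(t, v) == lhs, "production does not apply here"
        t = replace(t, v, rhs)
    return t, prod(w for _, _, w in derivation)

# n0 => sigma(n0, n1) => sigma(alpha, n1) => sigma(alpha, alpha), weight 0.4*0.6*1.0
print(run_derivation("n0", [P[0], P[1], P[2]]))
```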
A WXTT is linear (respectively, nondeleting) if, for each rule r of the form q.y →w u, each x ∈ yd(y) ∩ X appears at most once (respectively, at least once) in u. We denote the class of all WXTTs as wxT and add the letters L and N to signify the subclasses of linear and nondeleting WTT, respectively. Additionally, if y is of the form σ(x1, . . . , xk), we remove the letter "x" to signify the transducer is not extended (i.e., it is a "traditional" WTT (Fülöp and Vogler, 2009)).

For WXTT M = (Q, Σ, ∆, R, q0), s, t ∈ T∆(Q × TΣ), and r ∈ R of the form q.y →w u, we obtain a derivation step from s to t by replacing some leaf of s labeled with q and a tree matching y by a transformation of u, where each instance of a variable has been replaced by a corresponding subtree of the y-matching tree. Formally, s ⇒r_M t if there is a position v ∈ pos(s), a substitution mapping ϕ : X → TΣ, and a rule q.y →w u ∈ R such that s(v) = (q, ϕ(y)) and t = s[ϕ′(u)]v, where ϕ′ is a substitution mapping Q × X → T∆(Q × TΣ) defined such that ϕ′(q′, x) = (q′, ϕ(x)) for all q′ ∈ Q and x ∈ X. We say this derivation step is leftmost if, for all v′ ∈ lv(s) where v′ < v, s(v′) ∈ ∆. We henceforth assume all derivation steps are leftmost. If, for some s ∈ TΣ, m ∈ N, ri ∈ R, and ti ∈ T∆(Q × TΣ) for all 1 ≤ i ≤ m, (q0, s) ⇒r1 t1 ⇒ · · · ⇒rm tm, we say the sequence d = (r1, . . . , rm) is a derivation of (s, tm) in M; the weight of d is wt(d) = π(r1) · . . . · π(rm). The weighted tree transformation recognized by M is the mapping τM : TΣ × T∆ → R+∞, such that for every s ∈ TΣ and t ∈ T∆, τM(s, t) is the sum of the weights of all (possibly infinitely many) derivations of (s, t) in M. The composition of two weighted tree transformations τ : TΣ × T∆ → R+∞ and µ : T∆ × TΓ → R+∞ is the weighted tree transformation (τ ; µ) : TΣ × TΓ → R+∞ where for every s ∈ TΣ and u ∈ TΓ, (τ ; µ)(s, u) = Σ_{t ∈ T∆} τ(s, t) · µ(t, u).

3.2 Applicable classes

We now consider transducer classes where recognizability is preserved under application. Table 1 presents known results for the top-down tree transducer classes described in Section 3.1. Unlike the string case, preservation of recognizability is not universal or symmetric. This is important for us, because we can only construct an application WRTG, i.e., a WRTG representing the result of application, if we can ensure that the language generated by application is in fact recognizable. Of the types under consideration, only wxLNT and wLNT preserve forward recognizability. The two classes marked as open questions and the other classes, which are superclasses of wNT, do not or are presumed not to. All subclasses of wxLT preserve backward recognizability.7 We do not consider cases where recognizability is not preserved in the remainder of this paper. If a transducer M of a class that preserves forward recognizability is applied to a WRTG G, we can call the result the forward application WRTG M(G)▷.

7 Note that the introduction of weights limits recognizability preservation considerably. For example, (unweighted) xT preserves backward recognizability.
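For weighted tree transformations given with finite support, the composition formula above is a direct sum over the shared intermediate tree. A minimal sketch, assuming a dictionary keyed by (input tree, output tree) pairs as a purely illustrative encoding:

```python
# Sketch of (tau ; mu)(s, u) = sum_t tau(s, t) * mu(t, u) for finite-support transformations.
from collections import defaultdict

def compose(tau, mu):
    out = defaultdict(float)
    for (s, t), w1 in tau.items():
        for (t2, u), w2 in mu.items():
            if t == t2:                  # sum over the shared intermediate tree t
                out[(s, u)] += w1 * w2
    return dict(out)

tau = {("s1", "t1"): 0.5, ("s1", "t2"): 0.5}
mu = {("t1", "u1"): 0.8, ("t2", "u1"): 0.2}
print(compose(tau, mu))                  # {('s1', 'u1'): 0.5*0.8 + 0.5*0.2 = 0.5}
```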
If M preserves backward recognizability, we can call the result the backward application WRTG M(G)◁. Now that we have explained the application problem in the context of weighted tree transducers and determined the classes for which application is possible, let us consider how to build forward and backward application WRTGs. Our basic approach mimics that taken for WSTs by using an embed-compose-project strategy. As in the string world, if we can embed the input in a transducer, compose with the given transducer, and project the result, we can obtain the application WRTG. Embedding a WRTG in a wLNT is a trivial operation—if the WRTG is in normal form and chain production-free,8 for every production of the form n →w σ(n1, . . . , nk), create a rule of the form n.σ(x1, . . . , xk) →w σ(n1.x1, . . . , nk.xk). Range projection of a wxLNT is also trivial—for every q ∈ Q and u ∈ T∆(Q × X) create a production of the form q →w u′, where u′ is formed from u by replacing all leaves of the form q.x with the leaf q, i.e., removing references to variables, and w is the sum of the weights of all rules of the form q.y → u in R.9 Domain projection for wxLT is best explained by way of example. The left side of a rule is preserved, with variable leaves replaced by their associated states from the right side. So, the rule q1.σ(γ(x1), x2) →w δ(q2.x2, β(α, q3.x1)) would yield the production q1 →w σ(γ(q3), q2) in the domain projection. However, a deleting rule such as q1.σ(x1, x2) →w γ(q2.x2) necessitates the introduction of a new nonterminal ⊥ that can generate all of TΣ with weight 1.

The only missing piece in our embed-compose-project strategy is composition. Algorithm 1, which is based on the declarative construction of Maletti (2006), generates the syntactic composition of a wxLT and a wLNT, a generalization of the basic composition construction of Baker (1979). It calls Algorithm 2, which determines the sequences of rules in the second transducer that match the right side of a single rule in the first transducer. Since the embedded WRTG is of type wLNT, it may be either the first or second argument provided to Algorithm 1, depending on whether the application is forward or backward. We can thus use the embed-compose-project strategy for forward application of wLNT and backward application of wxLT and wxLNT. Note that we cannot use this strategy for forward application of wxLNT, even though that class preserves recognizability.

8 Without loss of generality we assume this is so, since standard algorithms exist to remove chain productions (Kuich, 1998; Ésik and Kuich, 2003; Mohri, 2009) and convert into normal form (Alexandrakis and Bozapalidis, 1987).
9 Finitely many such productions may be formed.
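The embedding and range-projection constructions described above are small enough to sketch directly: each normal-form production becomes an identity-like rule, and projection strips the variable references and sums weights. The tuple encodings below are illustrative assumptions; the symbolic rules are not executable transducers, only data.

```python
# Sketch of embedding a normal-form, chain-production-free WRTG as a wLNT and of
# range projection back to productions. Productions are (n, sigma, [n1..nk], w).
from collections import defaultdict

def embed(productions):
    """n ->w sigma(n1..nk) becomes the rule n.sigma(x1..xk) ->w sigma(n1.x1..nk.xk)."""
    rules = []
    for n, sigma, kids, w in productions:
        xs = [f"x{i}" for i in range(1, len(kids) + 1)]
        lhs = (n, sigma, xs)
        rhs = (sigma, [(kid, x) for kid, x in zip(kids, xs)])   # ni.xi leaves
        rules.append((lhs, rhs, w))
    return rules

def range_project(rules):
    """For every rule q.y ->w u, emit q ->w u' with each leaf q'.x replaced by q',
    summing the weights of rules that share the same left state and right side."""
    summed = defaultdict(float)
    for (q, _sigma, _xs), (out_sym, leaves), w in rules:
        summed[(q, out_sym, tuple(state for state, _x in leaves))] += w
    return [(q, sym, list(kids), w) for (q, sym, kids), w in summed.items()]

G = [("n0", "sigma", ["n0", "n1"], 0.4), ("n0", "alpha", [], 0.6), ("n1", "alpha", [], 1.0)]
M = embed(G)
print(M[0])              # the embedded identity rule for the first production
print(range_project(M))  # recovers productions shaped like the original G
```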
Algorithm 1 COMPOSE
1: inputs
2: wxLT M1 = (Q1, Σ, ∆, R1, q10)
3: wLNT M2 = (Q2, ∆, Γ, R2, q20)
4: outputs
5: wxLT M3 = ((Q1 × Q2), Σ, Γ, R3, (q10, q20)) such that τM3 = τM1 ; τM2
6: complexity
7: O(|R1| max(|R2|^size(ũ), |Q2|)), where ũ is the largest right side tree in any rule in R1
8: Let R3 be of the form (R30, π)
9: R3 ← (∅, ∅)
10: Ξ ← {(q10, q20)} {seen states}
11: Ψ ← {(q10, q20)} {pending states}
12: while Ψ ≠ ∅ do
13: (q1, q2) ← any element of Ψ
14: Ψ ← Ψ \ {(q1, q2)}
15: for all (q1.y →w1 u) ∈ R1 do
16: for all (z, w2) ∈ COVER(u, M2, q2) do
17: for all (q, x) ∈ yd(z) ∩ ((Q1 × Q2) × X) do
18: if q ∉ Ξ then
19: Ξ ← Ξ ∪ {q}
20: Ψ ← Ψ ∪ {q}
21: r ← ((q1, q2).y → z)
22: R30 ← R30 ∪ {r}
23: π(r) ← π(r) + (w1 · w2)
24: return M3

4 Application of tree transducer cascades

What about the case of an input WRTG and a cascade of tree transducers? We will revisit the three strategies for accomplishing application discussed above for the string case. In order for offline composition to be a viable strategy, the transducers in the cascade must be closed under composition. Unfortunately, of the classes that preserve recognizability, only wLNT is closed under composition (Gécseg and Steinby, 1984; Baker, 1979; Maletti et al., 2009; Fülöp and Vogler, 2009). However, the general lack of composability of tree transducers does not preclude us from conducting forward application of a cascade. We revisit the bucket brigade approach, which in Section 2 appeared to be little more than a choice of composition order. As discussed previously, application of a single transducer involves an embedding, a composition, and a projection. The embedded WRTG is in the class wLNT, and the projection forms another WRTG. As long as every transducer in the cascade can be composed with a wLNT to its left or right, depending on the application type, application of a cascade is possible. Note that this embed-compose-project process is somewhat more burdensome than in the string case. For strings, application is obtained by a single embedding, a series of compositions, and a single projection, whereas application for trees is obtained by a series of (embed, compose, project) operations; a sketch of this per-transducer loop follows Algorithm 2 below.

Algorithm 2 COVER
1: inputs
2: u ∈ T∆(Q1 × X)
3: wT M2 = (Q2, ∆, Γ, R2, q20)
4: state q2 ∈ Q2
5: outputs
6: set of pairs (z, w) with z ∈ TΓ((Q1 × Q2) × X) formed by one or more successful runs on u by rules in R2, starting from q2, and w ∈ R+∞ the sum of the weights of all such runs
7: complexity
8: O(|R2|^size(u))
9: if u(ε) is of the form (q1, x) ∈ Q1 × X then
10: zinit ← ((q1, q2), x)
11: else
12: zinit ← ⊥
13: Πlast ← {(zinit, {((ε, ε), q2)}, 1)}
14: for all v ∈ pos(u) such that u(v) ∈ ∆(k) for some k ≥ 0, in prefix order do
15: Πv ← ∅
16: for all (z, θ, w) ∈ Πlast do
17: for all v′ ∈ lv(z) such that z(v′) = ⊥ do
18: for all (θ(v, v′).u(v)(x1, . . . , xk) →w′ h) ∈ R2 do
19: θ′ ← θ
20: Form substitution mapping ϕ : (Q2 × X) → TΓ((Q1 × Q2 × X) ∪ {⊥})
21: for i = 1 to k do
22: for all v′′ ∈ pos(h) such that h(v′′) = (q2′, xi) for some q2′ ∈ Q2 do
23: θ′(vi, v′v′′) ← q2′
24: if u(vi) is of the form (q1, x) ∈ Q1 × X then
25: ϕ(q2′, xi) ← ((q1, q2′), x)
26: else
27: ϕ(q2′, xi) ← ⊥
28: Πv ← Πv ∪ {(z[ϕ(h)]v′, θ′, w · w′)}
29: Πlast ← Πv
30: Z ← {z | (z, θ, w) ∈ Πlast}
31: return {(z, Σ_{(z,θ,w)∈Πlast} w) | z ∈ Z}
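The difference between offline composition and the bucket brigade described in Section 4 is purely one of control flow. The following sketch shows only that looping; the helpers embed, compose, project, and trim are hypothetical placeholders standing in for the constructions described above (embedding, Algorithm 1, projection, and removal of useless states), not real functions from the paper.

```python
# Control-flow sketch of the two composition-order strategies for cascades.
def offline_composition(wrtg, cascade, embed, compose, project):
    """Compose the whole cascade first, then apply once. Only valid when the
    transducer classes involved are closed under composition (e.g., wLNT)."""
    combined = cascade[0]
    for M in cascade[1:]:
        combined = compose(combined, M)
    return project(compose(embed(wrtg), combined))

def bucket_brigade(wrtg, cascade, embed, compose, project, trim):
    """Embed, compose with one transducer, trim, project, and repeat, so a fresh
    application WRTG is materialized after every step of the cascade."""
    for M in cascade:
        wrtg = project(trim(compose(embed(wrtg), M)))
    return wrtg
```

Either driver materializes a full application WRTG; the on-the-fly machinery of Section 4.1 below is what avoids building that result completely.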
4.1 On-the-fly algorithms

We next consider on-the-fly algorithms for application. Similar to the string case, an on-the-fly approach is driven by a calling algorithm that periodically needs to know the productions in a WRTG with a common left side nonterminal. The embed-compose-project approach produces an entire application WRTG before any inference algorithm is run. In order to admit an on-the-fly approach we describe algorithms that only generate those productions in a WRTG that have a given left nonterminal. In this section we extend Definition 3.1 as follows: a WRTG is a 6-tuple G = (N, Σ, P, n0, M, G) where N, Σ, P, and n0 are defined as in Definition 3.1, and either M = G = ∅,10 or M is a wxLNT and G is a normal-form, chain production-free WRTG such that G ⪯ M(G)▷. In the latter case, G is a stand-in for M(G)▷, analogous to the stand-ins for WSAs and WSTs described in Section 2.

10 In which case the definition is functionally unchanged from before.

Table 1: Preservation of forward and backward recognizability for various classes of top-down tree transducers. Here and elsewhere, the following abbreviations apply: w = weighted, x = extended LHS, L = linear, N = nondeleting, OQ = open question. Square brackets include a superposition of classes. For example, w[x]T signifies both wxT and wT.

Algorithm 3 PRODUCE
1: inputs
2: WRTG Gin = (Nin, ∆, Pin, n0, M, G) such that M = (Q, Σ, ∆, R, q0) is a wxLNT and G = (N, Σ, P, n0′, M′, G′) is a WRTG in normal form with no chain productions
3: nin ∈ Nin
4: outputs
5: WRTG Gout = (Nout, ∆, Pout, n0, M, G), such that Gin ⪯ Gout and (nin →w u) ∈ Pout ⇔ (nin →w u) ∈ M(G)▷
6: complexity
7: O(|R||P|^size(ỹ)), where ỹ is the largest left side tree in any rule in R
8: if Pin contains productions of the form nin →w u then
9: return Gin
10: Nout ← Nin
11: Pout ← Pin
12: Let nin be of the form (n, q), where n ∈ N and q ∈ Q
13: for all (q.y →w1 u) ∈ R do
14: for all (θ, w2) ∈ REPLACE(y, G, n) do
15: Form substitution mapping ϕ : Q × X → T∆(N × Q) such that, for all v ∈ yd(y) and q′ ∈ Q, if there exist n′ ∈ N and x ∈ X such that θ(v) = n′ and y(v) = x, then ϕ(q′, x) = (n′, q′)
16: p′ ← ((n, q) →w1·w2 ϕ(u))
17: for all p ∈ NORM(p′, Nout) do
18: Let p be of the form n′ →w δ(n1, . . . , nk) for δ ∈ ∆(k)
19: Nout ← Nout ∪ {n′, n1, . . . , nk}
20: Pout ← Pout ∪ {p}
21: return CHAIN-REM(Gout)

Algorithm 3, PRODUCE, takes as input a WRTG Gin = (Nin, ∆, Pin, n0, M, G) and a desired nonterminal nin and returns another WRTG, Gout, that is different from Gin in that it has more productions, specifically those beginning with nin that are in M(G)▷. Algorithms using stand-ins should call PRODUCE to ensure the stand-in they are using has the desired productions beginning with the specific nonterminal. Note, then, that PRODUCE obtains the effect of forward application in an on-the-fly manner.11

11 Note further that it allows forward application of class wxLNT, something the embed-compose-project approach did not allow.
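The stand-in discipline around PRODUCE, build productions for a left nonterminal only when an inference algorithm first asks for them, and keep everything already built, can be sketched as a lazily populated grammar driven by a callback. The class and the produce_fn callback below are illustrative assumptions and are not Algorithm 3 itself.

```python
# Minimal sketch of a stand-in WRTG whose productions are generated on demand.
class StandInWRTG:
    def __init__(self, start, produce_fn):
        self.start = start
        self.produce_fn = produce_fn     # nonterminal -> list of (rhs, weight)
        self.productions = {}            # grows monotonically, like Gin improving to Gout

    def productions_for(self, nonterminal):
        if nonterminal not in self.productions:
            self.productions[nonterminal] = self.produce_fn(nonterminal)
        return self.productions[nonterminal]

# Toy callback standing in for forward application through a one-state transducer.
def toy_produce(nt):
    return [(("sigma", nt + ".l", nt + ".r"), 0.3), (("alpha",), 0.7)]

G = StandInWRTG("n0q0", toy_produce)
print(G.productions_for("n0q0"))     # built on first request
print(G.productions_for("n0q0"))     # cached; the callback is not invoked again
```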
PRODUCE makes calls to REPLACE, which is presented in Algorithm 4, as well as to a NORM algorithm that ensures normal form by replacing a single production not in normal form with several normal-form productions that can be combined together (Alexandrakis and Bozapalidis, 1987) and a CHAIN-REM algorithm that replaces a WRTG containing chain productions with an equivalent WRTG that does not (Mohri, 2009). As an example of stand-in construction, consider the invocation PRODUCE(G1, g0a0), where G1 = ({g0a0}, {σ, ψ, α, ρ}, ∅, g0a0, MA, G),12 G is in Figure 2a, and MA is in Figure 2b. The stand-in WRTG that is output contains the first three of the four productions in Figure 2d. To demonstrate the use of on-the-fly application in a cascade, we next show the effect of PRODUCE when used with the cascade G ◦ MA ◦ MB, where MB is in Figure 2c. Our driving algorithm in this case is Algorithm 5, MAKE-EXPLICIT, which simply generates the full application WRTG using calls to PRODUCE.

12 By convention the initial nonterminal and state are listed first in graphical depictions of WRTGs and WXTTs.

Algorithm 4 REPLACE
1: inputs
2: y ∈ TΣ(X)
3: WRTG G = (N, Σ, P, n0, M, G) in normal form, with no chain productions
4: n ∈ N
5: outputs
6: set Π of pairs (θ, w) where θ is a mapping pos(y) → N and w ∈ R+∞, each pair indicating a successful run on y by productions in G, starting from n, and w is the weight of the run
7: complexity
8: O(|P|^size(y))
9: Πlast ← {({(ε, n)}, 1)}
10: for all v ∈ pos(y) such that y(v) ∉ X, in prefix order do
11: Πv ← ∅
12: for all (θ, w) ∈ Πlast do
13: if M ≠ ∅ and G ≠ ∅ then
14: G ← PRODUCE(G, θ(v))
15: for all (θ(v) →w′ y(v)(n1, . . . , nk)) ∈ P do
16: Πv ← Πv ∪ {(θ ∪ {(vi, ni) | 1 ≤ i ≤ k}, w · w′)}
17: Πlast ← Πv
18: return Πlast

Algorithm 5 MAKE-EXPLICIT
1: inputs
2: WRTG G = (N, Σ, P, n0, M, G) in normal form
3: outputs
4: WRTG G′ = (N′, Σ, P′, n0, M, G), in normal form, such that if M ≠ ∅ and G ≠ ∅, LG′ = LM(G)▷, and otherwise G′ = G
5: complexity
6: O(|P′|)
7: G′ ← G
8: Ξ ← {n0} {seen nonterminals}
9: Ψ ← {n0} {pending nonterminals}
10: while Ψ ≠ ∅ do
11: n ← any element of Ψ
12: Ψ ← Ψ \ {n}
13: if M ≠ ∅ and G ≠ ∅ then
14: G′ ← PRODUCE(G′, n)
15: for all (n →w σ(n1, . . . , nk)) ∈ P′ do
16: for i = 1 to k do
17: if ni ∉ Ξ then
18: Ξ ← Ξ ∪ {ni}
19: Ψ ← Ψ ∪ {ni}
20: return G′

g0 →w1 σ(g0, g1)    g0 →w2 α    g1 →w3 α
(a) Input WRTG G
a0.σ(x1, x2) →w4 σ(a0.x1, a1.x2)    a0.σ(x1, x2) →w5 ψ(a2.x1, a1.x2)
a0.α →w6 α    a1.α →w7 α    a2.α →w8 ρ
(b) First transducer MA in the cascade
b0.σ(x1, x2) →w9 σ(b0.x1, b0.x2)    b0.α →w10 α
(c) Second transducer MB in the cascade
g0a0 →w1·w4 σ(g0a0, g1a1)    g0a0 →w1·w5 ψ(g0a2, g1a1)
g0a0 →w2·w6 α    g1a1 →w3·w7 α
(d) Productions of MA(G)▷ built as a consequence of building the complete MB(MA(G)▷)▷
g0a0b0 →w1·w4·w9 σ(g0a0b0, g1a1b0)    g0a0b0 →w2·w6·w10 α    g1a1b0 →w3·w7·w10 α
(e) Complete MB(MA(G)▷)▷
Figure 2: Forward application through a cascade of tree transducers using an on-the-fly method.
The input to MAKE-EXPLICIT is G2 = ({g0a0b0}, {σ, α}, ∅, g0a0b0, MB, G1).13 MAKE-EXPLICIT calls PRODUCE(G2, g0a0b0). PRODUCE then seeks to cover b0.σ(x1, x2) →w9 σ(b0.x1, b0.x2) with productions from G1, which is a stand-in for MA(G)▷. At line 14 of REPLACE, G1 is improved so that it has the appropriate productions. The productions of MA(G)▷ that must be built to form the complete MB(MA(G)▷)▷ are shown in Figure 2d. The complete MB(MA(G)▷)▷ is shown in Figure 2e. Note that because we used this on-the-fly approach, we were able to avoid building all the productions in MA(G)▷; in particular we did not build g0a2 →w2·w8 ρ, while a bucket brigade approach would have built this production. We have also designed an analogous on-the-fly PRODUCE algorithm for backward application on linear WTT. We have now defined several on-the-fly and bucket brigade algorithms, and also discussed the possibility of embed-compose-project and offline composition strategies for application of cascades of tree transducers. Tables 2a and 2b summarize the available methods of forward and backward application of cascades for recognizability-preserving tree transducer classes.

13 Note that G2 is the initial stand-in for MB(MA(G)▷)▷, since G1 is the initial stand-in for MA(G)▷.

Table 2: Transducer types and available methods of forward and backward application of a cascade. oc = offline composition, bb = bucket brigade, otf = on the fly. (a) Forward application covers the classes WST, wxLNT, and wLNT; (b) backward application covers WST, wxLT, wLT, wxLNT, and wLNT.

5 Decoding Experiments

The main purpose of this paper has been to present novel algorithms for performing application. However, it is important to demonstrate these algorithms on real data. We thus demonstrate bucket-brigade and on-the-fly backward application on a typical NLP task cast as a cascade of wLNT. We adapt the Japanese-to-English translation model of Yamada and Knight (2001) by transforming it from an English-tree-to-Japanese-string model to an English-tree-to-Japanese-tree model. The Japanese trees are unlabeled, meaning they have syntactic structure but all nodes are labeled "X". We then cast this modified model as a cascade of LNT tree transducers. Space does not permit a detailed description, but some example rules are in Figure 3. The rotation transducer R, a sample of which is in Figure 3a, has 6,453 rules, the insertion transducer I, Figure 3b, has 8,122 rules, and the translation transducer T, Figure 3c, has 37,311 rules.

rJJ.JJ(x1, x2, x3) → JJ(rDT.x1, rJJ.x2, rVB.x3)
rVB.VB(x1, x2, x3) → VB(rNNPS.x1, rNN.x3, rVB.x2)
t."gentle" → "gentle"
(a) Rotation rules
iVB.NN(x1, x2) → NN(INS, iNN.x1, iNN.x2)
iVB.NN(x1, x2) → NN(iNN.x1, iNN.x2)
iVB.NN(x1, x2) → NN(iNN.x1, iNN.x2, INS)
(b) Insertion rules
t.VB(x1, x2, x3) → X(t.x1, t.x2, t.x3)
t."gentleman" → j1
t."gentleman" → EPS
t.INS → j1
t.INS → j2
(c) Translation rules
Figure 3: Example rules from transducers used in decoding experiment. j1 and j2 are Japanese words.

We add an English syntax language model L to the cascade of transducers just described to better simulate an actual machine translation decoding task. The language model is cast as an identity WTT and thus fits naturally into the experimental framework. In our experiments we try several different language models to demonstrate varying performance of the application algorithms. The most realistic language model is a PCFG. Each rule captures the probability of a particular sequence of child labels given a parent label. This model has 7,765 rules.
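The "language model as identity WTT" idea above can be sketched mechanically: each PCFG rule becomes an identity tree-transducer rule that preserves the node label and passes the children to states keyed by their labels. The example rules, probabilities, and naming scheme below are made up for illustration and are not the experimental grammar.

```python
# Sketch of casting PCFG rules as identity tree transducer rules:
# parent ->p children  becomes  q_parent.parent(x1..xk) ->p parent(q_c1.x1, ..., q_ck.xk).
pcfg = [
    ("S", ["NP", "VP"], 0.9),     # hypothetical rules and probabilities
    ("S", ["VP"], 0.1),
    ("NP", ["DT", "NN"], 0.6),
]

def identity_wtt_rules(rules):
    out = []
    for parent, children, p in rules:
        xs = [f"x{i}" for i in range(1, len(children) + 1)]
        lhs = (f"q_{parent}", parent, xs)                       # q_parent.parent(x1..xk)
        rhs = (parent, [(f"q_{c}", x) for c, x in zip(children, xs)])
        out.append((lhs, rhs, p))
    return out

for rule in identity_wtt_rules(pcfg):
    print(rule)
```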
To demonstrate more extreme cases of the usefulness of the on-the-fly approach, we build a language model that recognizes exactly the 2,087 trees in the training corpus, each with equal weight. It has 39,455 rules. Finally, to be ultra-specific, we include a form of the "specific" language model just described, but only allow the English counterpart of the particular Japanese sentence being decoded in the language. The goal in our experiments is to apply a single tree t backward through the cascade L ◦ R ◦ I ◦ T ◦ t and find the 1-best path in the application WRTG. We evaluate the speed of each approach: bucket brigade and on-the-fly. The algorithm we use to obtain the 1-best path is a modification of the k-best algorithm of Pauls and Klein (2009). Our algorithm finds the 1-best path in a WRTG and admits an on-the-fly approach. The results of the experiments are shown in Table 3. As can be seen, on-the-fly application is generally faster than the bucket brigade, about double the speed per sentence in the traditional experiment that uses an English PCFG language model. The results for the other two language models demonstrate more keenly the potential advantage that an on-the-fly approach provides—the simultaneous incorporation of information from all models allows application to be done more effectively than if each information source is considered in sequence. In the "exact" case, where a very large language model that simply recognizes each of the 2,087 trees in the training corpus is used, the final application is so large that it overwhelms the resources of a 4gb MacBook Pro, while the on-the-fly approach does not suffer from this problem. The "1-sent" case is presented to demonstrate the ripple effect caused by using on-the-fly. In the other two cases, a very large language model generally overwhelms the timing statistics, regardless of the method being used. But a language model that represents exactly one sentence is very small, and thus the effects of simultaneous inference are readily apparent—the time to retrieve the 1-best sentence is reduced by two orders of magnitude in this experiment.

Table 3: Timing results to obtain 1-best from application through a weighted tree transducer cascade, using on-the-fly vs. bucket brigade backward application techniques. pcfg = model recognizes any tree licensed by a pcfg built from observed data, exact = model recognizes each of 2,000+ trees with equal weight, 1-sent = model recognizes exactly one tree.

6 Conclusion

We have presented algorithms for forward and backward application of weighted tree transducer cascades, including on-the-fly variants, and demonstrated the benefit of an on-the-fly approach to application. We note that a more formal approach to application of WTTs is being developed, independent from these efforts, by Fülöp et al. (2010).

Acknowledgments

We are grateful for extensive discussions with Andreas Maletti. We also appreciate the insights and advice of David Chiang, Steve DeNeefe, and others at ISI in the preparation of this work. Jonathan May and Kevin Knight were supported by NSF grants IIS-0428020 and IIS-0904684. Heiko Vogler was supported by DFG VO 1011/5-1.

References

Athanasios Alexandrakis and Symeon Bozapalidis. 1987. Weighted grammars and Kleene's theorem. Information Processing Letters, 24(1):1–4.

Brenda S. Baker. 1979. Composition of top-down and bottom-up tree transductions. Information and Control, 41(2):186–213.
Zoltán Ésik and Werner Kuich. 2003. Formal tree series. Journal of Automata, Languages and Combinatorics, 8(2):219–285.

Zoltán Fülöp and Heiko Vogler. 2009. Weighted tree automata and tree transducers. In Manfred Droste, Werner Kuich, and Heiko Vogler, editors, Handbook of Weighted Automata, chapter 9, pages 313–404. Springer-Verlag.

Zoltán Fülöp, Andreas Maletti, and Heiko Vogler. 2010. Backward and forward application of weighted extended tree transducers. Unpublished manuscript.

Ferenc Gécseg and Magnus Steinby. 1984. Tree Automata. Akadémiai Kiadó, Budapest.

Liang Huang and David Chiang. 2005. Better k-best parsing. In Harry Bunt, Robert Malouf, and Alon Lavie, editors, Proceedings of the Ninth International Workshop on Parsing Technologies (IWPT), pages 53–64, Vancouver, October. Association for Computational Linguistics.

Werner Kuich. 1998. Formal power series over trees. In Symeon Bozapalidis, editor, Proceedings of the 3rd International Conference on Developments in Language Theory (DLT), pages 61–101, Thessaloniki, Greece. Aristotle University of Thessaloniki.

Werner Kuich. 1999. Tree transducers and formal tree series. Acta Cybernetica, 14:135–149.

Andreas Maletti, Jonathan Graehl, Mark Hopkins, and Kevin Knight. 2009. The power of extended top-down tree transducers. SIAM Journal on Computing, 39(2):410–430.

Andreas Maletti. 2006. Compositions of tree series transformations. Theoretical Computer Science, 366:248–271.

Andreas Maletti. 2008. Compositions of extended top-down tree transducers. Information and Computation, 206(9–10):1187–1196.

Andreas Maletti. 2009. Personal Communication.

Mehryar Mohri, Fernando C. N. Pereira, and Michael Riley. 2000. The design principles of a weighted finite-state transducer library. Theoretical Computer Science, 231:17–32.

Mehryar Mohri. 1997. Finite-state transducers in language and speech processing. Computational Linguistics, 23(2):269–312.

Mehryar Mohri. 2009. Weighted automata algorithms. In Manfred Droste, Werner Kuich, and Heiko Vogler, editors, Handbook of Weighted Automata, chapter 6, pages 213–254. Springer-Verlag.

Adam Pauls and Dan Klein. 2009. K-best A* parsing. In Keh-Yih Su, Jian Su, Janyce Wiebe, and Haizhou Li, editors, Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 958–966, Suntec, Singapore, August. Association for Computational Linguistics.

Fernando Pereira and Michael Riley. 1997. Speech recognition by composition of weighted finite automata. In Emmanuel Roche and Yves Schabes, editors, Finite-State Language Processing, chapter 15, pages 431–453. MIT Press, Cambridge, MA.

William A. Woods. 1980. Cascaded ATN grammars. American Journal of Computational Linguistics, 6(1):1–12.

Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In Proceedings of 39th Annual Meeting of the Association for Computational Linguistics, pages 523–530, Toulouse, France, July. Association for Computational Linguistics.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: Weighted tree transducers have been proposed as useful formal models for representing syntactic natural language processing applications, but there has been little description of inference algorithms for these automata beyond formal foundations. [sent-2, score-0.443]

2 We give a detailed description of algorithms for application of cascades of weighted tree transducers to weighted tree acceptors, connecting formal theory with actual practice. [sent-3, score-0.967]

3 Additionally, we present novel on-the-fly variants of these algorithms, and compare their performance on a syntax machine translation cascade based on (Yamada and Knight, 2001). [sent-4, score-0.236]

4 1 Motivation Weighted finite-state transducers have found recent favor as models of natural language (Mohri, 1997). [sent-5, score-0.145]

5 After application we can do some inference on this result, such as determining its k highest weighted elements. [sent-7, score-0.249]

6 As noted by Woods (1980), it is easier for designers to write several small transducers where each performs a simple transformation, rather than painstakingly construct a single complicated device. [sent-9, score-0.145]

7 We would like to know, then, the result of transformation of input or output by a cascade of transducers, one operating after the other. [sent-10, score-0.236]

8 We will consider offline composition, bucket brigade applica- tion, and on-the-fly application. [sent-12, score-0.31]

9 Application of cascades of weighted string transducers (WSTs) has been well-studied (Mohri, 1997). [sent-13, score-0.435]

10 Less well-studied but of more recent interest is application of cascades of weighted tree transducers (WTTs). [sent-16, score-0.647]

11 We tackle application of WTT cascades in this work, presenting: explicit algorithms for application of WTT cascades, novel algorithms for on-the-fly application of WTT cascades, and experiments comparing the performance of these algorithms. [sent-17, score-0.617]

12 2 Strategies for the string case: Before we discuss application of WTTs, it is helpful to recall the solution to this problem in the WST domain. [sent-18, score-0.176]

13 1 Fortunately, the solution for WSTs is practically trivial—we achieve application through a series of embedding, composition, and projection operations. [sent-20, score-0.166]

14 By embedding an input, composing the result with the given WST, and projecting the result, forward application is accomplished. [sent-26, score-0.313]

15 2For backward applications, the roles of input and output are simply exchanged. [sent-28, score-0.147]

16 Because WSTs can be freely composed, extending application to operate on a cascade of WSTs is fairly trivial. [sent-117, score-0.366]

17 Now let us revisit these strategies in the setting of trees and tree transducers. [sent-133, score-0.191]

18 Imagine we have a tree or set of trees as input that can be represented as a weighted regular tree grammar3 (WRTG) and a WTT that can transform that input with some weight. [sent-134, score-0.404]

19 We already know of several methods for acquiring k-best trees from a WRTG (Huang and Chiang, 2005; Pauls and Klein, 2009), so we then must ask if, analogously to the string case, WTTs preserve recognizability4 and we can form an application WRTG. [sent-136, score-0.277]

20 This generates the same class of weighted tree languages as weighted tree automata, the direct analogue of WSAs, and is more useful for our purposes. [sent-156, score-0.494]

21 4A weighted tree language is recognizable iff it can be represented by a wrtg. [sent-157, score-0.247]

22 The size of a tree t, size(t), is |pos(t)|, the cardinality of its position set. [sent-187, score-0.128]

23 The yield set of a tree is the set of labels of its leaves: for a tree t, yd(t) = {t(v) | v ∈ lv(t)}. [sent-188, score-0.128]

24 (Alexandrakis and Bozapalidis, 1987)) A weighted regular tree grammar (WRTG) is a 4-tuple G = (N, Σ, P, n0) where: 1. [sent-206, score-0.247]

25 (P0, π), where P0 is a finite set of productions, each production p of the form n → u, n ∈ N, u ∈ TΣ(N), and π : P0 → R+ is a weight function of the productions. [sent-212, score-0.155]

26 We will refer to P as a finite set of weighted productions, each production p of the form n →π(p) u. [sent-213, score-0.274]

27 A production p is a chain production if it is of the form ni nj, where ni, nj ∈ N. [sent-214, score-0.252]

28 In (Alexandrakis and Bozapalidis, 1987), chain productions are forbidden in order to avoid infinite summations. [sent-215, score-0.207]

29 A WRTG G is in normal form if each production is either a chain production or is of the form n σ(n1, . [sent-217, score-0.305]

30 A weighted tree language f : TΣ → R+∞ is recognizable if there is a WRTG G such t→hat R f = LG. [sent-242, score-0.247]

31 1of (Maletti, 2008)) A weighted extended top-down tree transducer (WXTT) is a 5-tuple M = (Q, Σ, ∆, R, q0) where: 1. [sent-254, score-0.449]

32 π), where R0 is a finite set of rules, each rule r of the form q. [sent-259, score-0.117]

33 As for WRTGs, we refer to R as a finite set of weighted rules, each rule r of the form q.y →π(r) u. [sent-264, score-0.236]

34 , xk), we remove the letter “x” to signify the transducer is not extended (i. [sent-273, score-0.202]

35 We obtain a derivation step from s to t by replacing some leaf of s labeled with q and a tree matching y by a transformation of u, where each instance of a variable has been replaced by a corresponding subtree of the y-matching tree. [sent-278, score-0.128]

36 The composition of two weighted tree transformations τ : TΣ × T∆ → R+∞ and µ : T∆ × TΓ → R+∞ is the weighted tree transformation (τ ; µ) : TΣ × TΓ → R+∞ where for every s ∈ TΣ and u ∈ TΓ, (τ ; µ)(s, u) = Σ_{t ∈ T∆} τ(s, t) · µ(t, u). [sent-304, score-0.36]

37 2 Applicable classes We now consider transducer classes where recognizability is preserved under application. [sent-306, score-0.329]

38 Table 1 presents known results for the top-down tree transducer classes described in Section 3. [sent-307, score-0.33]

39 Unlike the string case, preservation of recognizability is not universal or symmetric. [sent-309, score-0.205]

40 This is important for us, because we can only construct an application WRTG, i. [sent-310, score-0.13]

41 , a WRTG representing the result of application, if we can ensure that the language generated by application is in fact recognizable. [sent-312, score-0.13]

42 Of the types under consideration, only wxLNT and wLNT preserve forward recognizability. [sent-313, score-0.157]

43 7 We do not consider cases where recognizability is not preserved tshuamt in the remainder of this paper. [sent-316, score-0.127]

44 If a transducer M of a class that preserves forward recognizability is applied to a WRTG G, we can call the forward ap7Note that the introduction of weights limits recognizability preservation considerably. [sent-317, score-0.779]

45 and if M preserves backward recognizability, we can call the backward application WRTG M(G)/. [sent-320, score-0.469]

46 Now that we have explained the application problem in the context of weighted tree transducers and determined the classes for which application is possible, let us consider how to build forward and backward application WRTGs. [sent-321, score-1.052]

47 As in string world, if we can embed the input in a transducer, compose with the given transducer, and project the result, we can obtain the application WRTG. [sent-323, score-0.226]

48 Embedding a WRTG in a wLNT is a trivial operation—if the WRTG is in normal form and chain production-free,8 for every production of the form n − →w σ(n1 , . [sent-324, score-0.273]

49 Range × projection of a w− x→LN σT(n is also trivial—for every q ∈ Q and u ∈ T∆ (Q X) create a production of the form q ∈−→w T u(0 where )u 0c is formed from u by replacing al−l → →lea uves of the form q. [sent-337, score-0.179]

50 Algorithm 1, which is based on the declarative construction of Maletti (2006), generates the syntactic composition of a wxLT and a wLNT, a generalization of the basic composition construction of Baker (1979). [sent-354, score-0.226]

51 It calls Algorithm 2, which determines the sequences of rules in the second transducer that match the right side of a single rule in the × first transducer. [sent-355, score-0.259]

52 Since the embedded WRTG is of type wLNT, it may be either the first or second argument provided to Algorithm 1, depending on whether the application is forward or backward. [sent-356, score-0.253]

53 We can thus use the embed-compose-project strategy for forward application of wLNT and backward application of wxLT and wxLNT. [sent-357, score-0.53]

54 Note that we cannot use this strategy for forward application of wxLNT. Without loss of generality we assume this is so, since standard algorithms exist to remove chain productions (Kuich, 1998; Ésik and Kuich, 2003; Mohri, 2009) and convert into normal form (Alexandrakis and Bozapalidis, 1987). [sent-358, score-0.45]

55 4 Application of tree transducer cascades: What about the case of an input WRTG and a cascade of tree transducers? [sent-365, score-0.852]

56 We will revisit the three strategies for accomplishing application discussed above for the string case. [sent-366, score-0.21]

57 In order for offline composition to be a viable strategy, the transducers in the cascade must be closed under composition. [sent-367, score-0.546]

58 Unfortunately, of the classes that preserve recognizability, only wLNT is closed under composition (Gécseg and Steinby, 1984; Baker, 1979; Maletti et al. [sent-368, score-0.175]

59 However, the general lack of composability of tree transducers does not preclude us from conducting forward application of a cascade. [sent-370, score-0.526]

60 We revisit the bucket brigade approach, which in Section 2 appeared to be little more than a choice of composition order. [sent-371, score-0.405]

61 As discussed previously, application of a single transducer involves an embedding, a composition, and a projection. [sent-372, score-0.332]

62 As long as every transducer in the cascade can be composed with a wLNT to its left or right, depending on the application type, application of a cascade is possible. [sent-374, score-0.934]

63 Similar to the string case, an on-thefly approach is driven by a calling algorithm that periodically needs to know the productions in a WRTG with a common left side nonterminal. [sent-385, score-0.205]

64 The embed-compose-project approach produces an entire application WRTG before any inference algorithm is run. [sent-386, score-0.13]

65 In order to admit an on-the-fly approach we describe algorithms that only generate those productions in a WRTG that have a given left nonterminal. [sent-387, score-0.194]

66 Table 1: Preservation of forward and backward recognizability for various classes of top-down tree transducers. [sent-395, score-0.525]

67 Algorithm 3 PRODUCE 1: inputs 2: WRTG Gin = (Nin, ∆, Pin, n0, M, G) such that M = (Q, Σ, ∆, R, q0) is a wxLNT and G = (N, Σ, P, n00, M0, G0) is a WRTG in normal form with no chain productions 3: nin ∈ Nin 4: outputs∈ 5: WRTG Gout = (Nout, ∆, Pout, n0, M, G), such that Gin ? [sent-399, score-0.45]

68 6: complexity 7: O(|R||P|^size(ỹ)), where ỹ is the largest left side tree in any rule in R 8: if Pin contains productions of the form nin →w u then 9: return Gin 10: Nout ← Nin 11: Pout ← Pin 12: Let nin be of the form (n, q), where n ∈ N and q ∈ Q. [sent-402, score-0.396]

69 α w − →6 (w −a → →7 w − →8 −−→ ρ (b) First transducer MA in the cascade b0 b0. [sent-439, score-0.438]

70 x2) (c) Second transducer MB in the cascade g0a0 g0a0 −w −1 −· −w →4 σ(g0a0, g1a1) −−w −− 1− − ·w − − → →5 ψ(g0a2, g1a1) − − −·w − → α g1a1 − − −·w − → α w −− 2 −− − · w−− → →6 g0a0 w − 3 − −· w− → →7 (d) Productions of MA (G). [sent-443, score-0.438]

71 Figure 2: Forward application through a cascade of tree transducers using an on-the-fly method. [sent-448, score-0.639]

72 The stand-in WRTG that is output contains the first three of the four productions in Figure 2d. [sent-452, score-0.159]

73 To demonstrate the use of on-the-fly application in a cascade, we next show the effect of PRODUCE when used with the cascade G ◦ MA ◦ MB, where MB is in Figure 2c. [sent-453, score-0.236]

74 Our driving algorithm in this case is Algorithm 5, MAKE-EXPLICIT; note further that the on-the-fly approach allows forward application of class wxLNT, something the embed-compose-project approach did not allow. [sent-454, score-0.253]

75 (c) Translation rules. Figure 3: Example rules from transducers used in decoding experiment. [sent-482, score-0.145]

76 EXPLICIT, which simply generates the full application WRTG using calls to PRODUCE. [sent-484, score-0.158]

77 PRODUCE then seeks to cover b0.σ(x1, x2) →w9 σ(b0.x1, b0.x2) with productions from G1, which is a stand-in for MA(G)▷. [sent-490, score-0.159]

78 Note that because we used this on-the-fly approach, we were able to avoid building all the productions in MA(G). [sent-500, score-0.159]

79 ; in particular we did not build g0a2 →w2·w8 ρ, while a bucket brigade approach would have built this production. [sent-501, score-0.258]

80 We have also designed an analogous onthe-fly PRODUCE algorithm for backward application on linear WTT. [sent-502, score-0.277]

81 We have now defined several on-the-fly and bucket brigade algorithms, and also discussed the possibility of embed-compose-project and offline composition strategies to application of cascades of tree transducers. [sent-503, score-0.806]

82 Tables 2a and 2b summarize the available methods of forward and backward application of cascades for recognizabilitypreserving tree transducer classes. [sent-504, score-0.855]

83 We thus demonstrate bucket-brigade and on-the-fly backward application on a typical NLP task cast as a cascade of wLNT. [sent-507, score-0.513]

84 (a) Forward application, (b) Backward application. Table 2: Transducer types and available methods of forward and backward application of a cascade. [sent-512, score-0.66]

85 oc = offline composition, bb = bucket brigade, otf = on the fly. [sent-513, score-0.167]

86 We then cast this modified model as a cascade of LNT tree transducers. [sent-516, score-0.364]

87 The rotation transducer R, a sample of which is in Figure 3a, has 6,453 rules. [sent-518, score-0.202]

88 The insertion transducer I, Figure 3b, has 8,122 rules, and the translation transducer T, Figure 3c, has 37,311 rules. [sent-519, score-0.202]

89 In our experiments we try several different language models to demonstrate varying performance of the application algorithms. [sent-522, score-0.13]

90 The goal in our experiments is to apply a single tree t backward through the cascade L ◦ R ◦ I ◦ T and find the 1-best path in the application WRTG. [sent-529, score-0.511]
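A simplified sketch of how the 1-best search can itself drive the lazy expansion: a memoized, Viterbi-style recursion that only calls produce() on nonterminals it actually reaches. This is our illustration rather than the paper's modified k-best algorithm of Pauls and Klein (2009), and it assumes the reachable part of the application WRTG is acyclic, every reached nonterminal has at least one production, and larger weights are better.

# 1-best derivation over a lazily expanded WRTG (hypothetical produce() interface).
from functools import lru_cache

def one_best(grammar, rhs_nonterminals):
    """Return (score, derivation) for the best derivation from grammar.start."""
    @lru_cache(maxsize=None)
    def best(nt):
        candidates = []
        for rhs, weight in grammar.produce(nt):       # lazy expansion happens here
            score, children = weight, []
            for child in rhs_nonterminals(rhs):
                child_score, child_derivation = best(child)
                score *= child_score
                children.append(child_derivation)
            candidates.append((score, (rhs, children)))
        return max(candidates, key=lambda c: c[0])    # assumes at least one production
    return best(grammar.start)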

91 We evaluate the speed of each approach: bucket brigade and on-the-fly. [sent-530, score-0.258]

92 As can be seen, on-the-fly application is generally faster than the bucket brigade, about double the speed per sentence in the traditional experiment that uses an English PCFG language model. [sent-534, score-0.245]

93 Table 3: Timing results to obtain 1-best from application through a weighted tree transducer cascade, using on-the-fly vs. bucket brigade backward application techniques. [sent-536, score-0.579]

94 pcfg = model recognizes any tree licensed by a pcfg built from observed data, exact = model recognizes each of 2,000+ trees with equal weight, 1-sent = model recognizes exactly one tree. [sent-538, score-0.301]
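As a small illustration of the "pcfg" language model row, here is one way (ours, with a made-up input format) to turn PCFG rules into productions of a grammar that recognizes exactly the trees the PCFG licenses, weighting each tree by its rule probabilities.

# Build WRTG-style productions from PCFG rules.  Each PCFG rule
# parent -> child_1 ... child_k with probability p becomes a production that
# rewrites the nonterminal for `parent` into parent(n_child_1, ..., n_child_k),
# reusing the PCFG symbols themselves as nonterminals.  Input format is illustrative.
def pcfg_to_wrtg(pcfg_rules, start_symbol="TOP"):
    productions = []
    for parent, children, probability in pcfg_rules:
        productions.append({"lhs": parent,            # nonterminal being rewritten
                            "label": parent,          # output tree symbol
                            "children": tuple(children),
                            "weight": probability})
    return {"start": start_symbol, "productions": productions}

# e.g. pcfg_to_wrtg([("S", ("NP", "VP"), 0.9), ("NP", ("DT", "NN"), 0.4)])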

95 The results for the other two language models demonstrate more keenly the potential advantage that an on-the-fly approach provides—the simultaneous incorporation of information from all models allows application to be done more effectively than if each information source is considered in sequence. [sent-540, score-0.13]

96 In the “exact” case, where a very large language model that simply recognizes each of the 2,087 trees in the training corpus is used, the final application is so large that it overwhelms the resources of a 4gb MacBook Pro, while the on-the-fly approach does not suffer from this problem. [sent-541, score-0.239]

97 6 Conclusion We have presented algorithms for forward and backward application of weighted tree transducer cascades, including on-the-fly variants, and demonstrated the benefit of an on-the-fly approach to application. [sent-545, score-0.884]

98 We note that a more formal approach to application of WTTs is being developed, independent from these efforts, by Fülöp et al. (2010). [sent-546, score-0.168]

99 Backward and forward application of weighted extended tree transducers. [sent-572, score-0.5]

100 The design principles of a weighted finite-state transducer library. [sent-611, score-0.321]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('wrtg', 0.555), ('cascade', 0.236), ('transducer', 0.202), ('productions', 0.159), ('backward', 0.147), ('transducers', 0.145), ('brigade', 0.143), ('wlnt', 0.143), ('wsts', 0.143), ('application', 0.13), ('tree', 0.128), ('recognizability', 0.127), ('cascades', 0.125), ('forward', 0.123), ('weighted', 0.119), ('bucket', 0.115), ('nin', 0.114), ('composition', 0.113), ('wtt', 0.111), ('mohri', 0.102), ('mb', 0.095), ('wxlt', 0.095), ('heiko', 0.083), ('vogler', 0.083), ('alexandrakis', 0.079), ('bozapalidis', 0.079), ('werner', 0.079), ('wst', 0.079), ('wtts', 0.079), ('wxlnt', 0.079), ('maletti', 0.069), ('nk', 0.067), ('production', 0.067), ('lv', 0.064), ('gout', 0.063), ('kuich', 0.063), ('nout', 0.063), ('pout', 0.063), ('tk', 0.062), ('ma', 0.06), ('embedding', 0.06), ('gin', 0.06), ('automata', 0.059), ('ti', 0.057), ('offline', 0.052), ('compose', 0.05), ('tm', 0.05), ('finite', 0.05), ('recognizes', 0.048), ('chain', 0.048), ('wxtt', 0.048), ('zolt', 0.048), ('normal', 0.047), ('string', 0.046), ('op', 0.045), ('preserves', 0.045), ('inputs', 0.044), ('ul', 0.043), ('nondeleting', 0.042), ('formal', 0.038), ('andreas', 0.038), ('form', 0.038), ('ru', 0.036), ('projection', 0.036), ('trivial', 0.035), ('algorithms', 0.035), ('preserve', 0.034), ('revisit', 0.034), ('compositions', 0.034), ('rk', 0.034), ('fa', 0.033), ('return', 0.033), ('ni', 0.032), ('preservation', 0.032), ('pauls', 0.032), ('tt', 0.032), ('dresden', 0.032), ('droste', 0.032), ('gentle', 0.032), ('lca', 0.032), ('nni', 0.032), ('ntwl', 0.032), ('overwhelms', 0.032), ('sik', 0.032), ('symeon', 0.032), ('trom', 0.032), ('wrtgs', 0.032), ('wsa', 0.032), ('wsas', 0.032), ('wxt', 0.032), ('zinit', 0.032), ('mehryar', 0.031), ('ste', 0.03), ('rule', 0.029), ('trees', 0.029), ('derivation', 0.029), ('symbols', 0.029), ('knight', 0.029), ('calls', 0.028), ('ecseg', 0.028)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999982 95 acl-2010-Efficient Inference through Cascades of Weighted Tree Transducers

Author: Jonathan May ; Kevin Knight ; Heiko Vogler

Abstract: Weighted tree transducers have been proposed as useful formal models for representing syntactic natural language processing applications, but there has been little description of inference algorithms for these automata beyond formal foundations. We give a detailed description of algorithms for application of cascades of weighted tree transducers to weighted tree acceptors, connecting formal theory with actual practice. Additionally, we present novel on-the-fly variants of these algorithms, and compare their performance on a syntax machine translation cascade based on (Yamada and Knight, 2001). 1 Motivation Weighted finite-state transducers have found recent favor as models of natural language (Mohri, 1997). In order to make actual use of systems built with these formalisms we must first calculate the set of possible weighted outputs allowed by the transducer given some input, which we call forward application, or the set of possible weighted inputs given some output, which we call backward application. After application we can do some inference on this result, such as determining its k highest weighted elements. We may also want to divide up our problems into manageable chunks, each represented by a transducer. As noted by Woods (1980), it is easier for designers to write several small transducers where each performs a simple transformation, rather than painstakingly construct a single complicated device. We would like to know, then, the result of transformation of input or output by a cascade of transducers, one operating after the other. As we will see, there are various strategies for approaching this problem. We will consider offline composition, bucket brigade applica- tion, and on-the-fly application. Application of cascades of weighted string transducers (WSTs) has been well-studied (Mohri, Heiko Vogler Technische Universit a¨t Dresden Institut f u¨r Theoretische Informatik 01062 Dresden, Germany he iko .vogle r@ tu-dre s den .de 1997). Less well-studied but of more recent interest is application of cascades of weighted tree transducers (WTTs). We tackle application of WTT cascades in this work, presenting: • • • explicit algorithms for application of WTT casceaxpdelisc novel algorithms for on-the-fly application of nWoTvTe lca alscgoardieths,m mansd f experiments comparing the performance of tehxepseer iamlgeonrtisthm cos.m 2 Strategies for the string case Before we discuss application of WTTs, it is helpful to recall the solution to this problem in the WST domain. We recall previous formal presentations of WSTs (Mohri, 1997) and note informally that they may be represented as directed graphs with designated start and end states and edges labeled with input symbols, output symbols, and weights.1 Fortunately, the solution for WSTs is practically trivial—we achieve application through a series of embedding, composition, and projection operations. Embedding is simply the act of representing a string or regular string language as an identity WST. Composition of WSTs, that is, generating a single WST that captures the transformations of two input WSTs used in sequence, is not at all trivial, but has been well covered in, e.g., (Mohri, 2009), where directly implementable algorithms can be found. Finally, projection is another trivial operation—the domain or range language can be obtained from a WST by ignoring the output or input symbols, respectively, on its arcs, and summing weights on otherwise identical arcs. 
By embedding an input, composing the result with the given WST, and projecting the result, forward application is accomplished.2 We are then left with a weighted string acceptor (WSA), essentially a weighted, labeled graph, which can be traversed R+1∪W {e+ as∞su}m,te ha thtro thuegh woeuitgh t hi osf p aa ppaetrh t ihsa cta wlceuilgahtetds a asre th ien prod∪uct { +of∞ ∞th}e, wtheaitgh thtes wofe i gtsh etd ogfes a, panatdh t ihsat c athlceu lwateeigdh ats so tfh ae (not necessarily finite) set T of paths is calculated as the sum of the weights of the paths of T. 2For backward applications, the roles of input and output are simply exchanged. 1058 ProceedingUsp opfs thaela 4, 8Stwhe Adnennu,a 1l1- M16ee Jtiunlgy o 2f0 t1h0e. A ?c ss2o0c1ia0ti Aosnso focria Ctioonm fpourta Ctoiomnpault Laitniognuaislt Licisn,g puaigsetisc 1s058–1066, a:a/1a:a/1 b:b/a.:5b/.1aa::ba//. 49Ea:ba/:a.6/.5 (a) InpAut strain:ag/ “a aB” emab:ae/d1dedC in an D(b) first Wa:b:aS/T./4. in cascadeE :c/.6Fa:d/.4 (c//).. second WFST in cascbaad::dde// identityA WSaT: (d)b:Oda/c:f.d3Al./6ai5n.:0c3e/.0ca:o7m/1pao :sBcdi/t.32o584n6a :p/1roa:cCa/h:db.3:/c6d.2/46.35b:a(/be.a)5 :BbA:/u.D1c/keDtbrigBadEDae:ba/p.49proach:ECDa:b/ a.:6b/5.(f)Rdc/Ae./0sD.u35b7aFl4t:c o/f.76ofcBld/iE.nD53e4F6orFbcud/.c0k7e3tapbCpd:cDli/cF.a12t38ion (g)dcI/n.A3i5tD46dcaF/lD.0oF7n3-thea f:lcBdy/E .b(12h:d)8c/O.3A5nD-4dtchF/.e0C -7f3EDlyFas:t /n.B9d-EDinF afterc /x.d3cp/6.l1o28ring C(i)ECDOcFE/.nA5-tD4hdcF/e.0- fl37ysdcta/n.35dB64-iEDnF afteBrc/Eb.3eFs6dtc/ pd./1a2.t83h46 asbeC nEDF found Cobm:bdap:c/:o/d.3/.sa6.5e05D:c tF.h0e7 tranaaaa:sd:c: cdd// /. u12..53c2846ers stacnd//d..A53-i46nDdc f.o.00r73 (f) Appaal:A:ybaD W..19ST (b)BB tEoD WST (a) aftercd p/..0/0..rDo5373Fj46ectionc o.12u28tcdg//o.A.35iDn64dgcF// e0C0dC73gEDeFFs of sBtEaDFteF ADdFc// FiBgBuEDFreF 1: Tc/ .h2.34dcr/6/e. 1e2 8 d/ iA. f53fD64edcF/ r. 0 eCC37nEDtF appBroBEaDFcFhesdc ./t2o.3d4c a..12p28plicatioCCnED tF/ h. A53r6D4ocd/uF/. g0073h cascBBaEdDFeFs ocdf/ .3W264dcS//.1.T282s. bydc w//..53e46ll-known aElFgorithdc/m./2.34s6 to e//.f.5f3i46cieCnEtFly finBdE the kd-c/ bedst/ p6aths. Because WSTs can be freely composed, extending application to operate on a cascade of WSTs is fairly trivial. The only question is one of composition order: whether to initially compose the cascade into a single transducer (an approach we call offline composition) or to compose the initial embedding with the first transducer, trim useless states, compose the result with the second, and so on (an approach we call bucket brigade). The appropriate strategy generally depends on the structure of the individual transducers. A third approach builds the result incrementally, as dictated by some algorithm that requests information about it. Such an approach, which we call on-the-fly, was described in (Pereira and Riley, 1997; Mohri, 2009; Mohri et al., 2000). If we can efficiently calculate the outgoing edges of a state of the result WSA on demand, without calculating all edges in the entire machine, we can maintain a stand-in for the result structure, a machine consisting at first of only the start state of the true result. As a calling algorithm (e.g., an implementation of Dijkstra’s algorithm) requests information about the result graph, such as the set of outgoing edges from a state, we replace the current stand-in with a richer version by adding the result of the request. The on-the-fly approach has a distinct advantage over the other two methods in that the entire result graph need not be built. 
A graphical representation of all three methods is presented in Figure 1. 3 AppCliEcdcF//a..53ti64on of treeB tFranscd/d./3.u264cers Now let us revisit these strategies in the setting of trees and tree transducers. Imagine we have a tree or set of trees as input that can be represented as a weighted regular tree grammar3 (WRTG) and a WTT that can transform that input with some weight. We would like to know the k-best trees the WTT can produce as output for that input, along with their weights. We already know of several methods for acquiring k-best trees from a WRTG (Huang and Chiang, 2005; Pauls and Klein, 2009), so we then must ask if, analogously to the string case, WTTs preserve recognizability4 and we can form an application WRTG. Before we begin, however, we must define WTTs and WRTGs. 3.1 Preliminaries5 A ranked alphabet is a finite set Σ such that every member σ ∈ Σ has a rank rk(σ) ∈ N. We cerayll ⊆ Σ, ∈k ∈ aNs t ahe r set rokf tσho)s ∈e σ ∈ Σe such that r⊆k(σ Σ), k= ∈k. NTh teh ese ste otf o vfa trhioasbele σs σi s∈ d eΣnoted X = {x1, x2, . . .} and is assumed to be disjnooitnetd df Xrom = any rank,e.d. a.}lp ahnadb iest aussseudm iend dth tios paper. We use to denote a symbol of rank 0 that is not iWn any e ra ⊥nk toed d eanlpohtaeb aet s yumsebdo lin o fth riasn paper. tA is tr neoet t ∈ TΣ is denoted σ(t1 , . . . , tk) where k ≥ 0, σ ∈ and t1, . . . , tk ∈ TΣ. F)o wr σ ∈ we mΣe(km) ⊥ Σ T(k), σ ∈ Σk(0 ≥) Σ 3This generates the same class of weighted tree languages as weighted tree automata, the direct analogue of WSAs, and is more useful for our purposes. 4A weighted tree language is recognizable iff it can be represented by a wrtg. 5The following formal definitions and notations are needed for understanding and reimplementation of the presented algorithms, but can be safely skipped on first reading and consulted when encountering an unfamiliar term. 1059 write σ ∈ TΣ as shorthand for σ() . For every set Sw rditiesjσo in ∈t f Trom Σ, let TΣ (S) = TΣ∪S, where, for all s ∈ S, rk(s) = 0. lW se ∈ d,e rfkin(es) th 0e. positions of a tree t = σ(t1, . . . , tk), for k 0, σ ∈ t1, . . . , tk ∈ TΣ, as a set pos(≥t) ⊂ N∗ s∈uch that {∈ε} T ∪ 1e t≤ p ois (≤t) k ⊂, ⊂v ∈ pTohse( tse)t =of {lεea}f ∪ po {siivtio |ns 1 l ≤v(t i) ≤⊆ k p,ovs(t ∈) apores t(hto)s}e. pTohseit sieotns o fv l a∈f p poossit(ito)n ssu lvch(t )th ⊆at pfoors tn)o ir ∈ th Nse, pvio ∈it ponoss(t v). ∈ We p presume hsta tnhadatr dfo lrex nioco igr ∈aph Nic, ovrid ∈eri pnogss( <∈ nTΣd ≤an odn v p o∈s pos(t). The label of t at Lpoestit ti,osn v, Tdenaontedd v v by ∈ t( pvo)s,( tt)he. sTuhbetr leaeb eolf ot fa tt v, denoted by t|v, and the replacement at v by s, vde,n doetneodt e bdy tb[ys] tv|, are defined as follows: ≥ pos(t) = ,{ aivs a| Σ(k), pos(ti)}. 1. For every σ ∈ Σ(0) , σ(ε) = σ, σ|ε = σ, and σF[osr]ε e v=e sy. 2. For every t = σ(t1 , . . . , tk) such that k = rk(σ) and k 1, t(ε) = σ, t|ε = t, aknd = t[ rsk]ε( =) ns.d kFo ≥r every 1) ≤= iσ ≤ t| k and v ∈ pos(ti), t(ivF) =r vtie (rvy) ,1 1t| ≤iv =i ≤ti |v k, aanndd tv[s] ∈iv p=o sσ(t(t1 , . . . , ti−1 , ti[(sv])v, , tt|i+1 , . . . , t|k). The size of a tree t, size (t) is |pos(t) |, the cardinTahliety s iozef i otsf apo tsrieteio tn, sseizt.e (Tt)he is s y |ipelods (ste)t| ,o tfh ae tcraereis the set of labels of its leaves: for a tree t, yd (t) = {t(v) | v ∈ lv(t)}. {Lt(etv )A | avn ∈d lBv( tb)e} sets. Let ϕ : A → TΣ (B) be Lae mt Aapp ainndg. 
B W bee seexttes.nd L ϕ t oϕ th :e A Am →appi Tng ϕ : TΣ (A) → TΣ (B) such that for a ∈ A, ϕ(a) = ϕ(a) and( Afo)r →k 0, σ ∈ Σch(k th) , atn fdo t1, . . . , tk ∈ TΣ (A), ϕan(dσ( fto1r, . . . ,t0k,) σ) =∈ σ Σ(ϕ(t1), . . . ,ϕ(tk)). ∈ W Te indicate such extensions by describing ϕ as a substitution mapping and then using ϕ without further comment. We use R+ to denote the set {w ∈ R | w 0} and R+∞ to dentoote d Ren+o ∪e {th+e∞ set}. { ≥ ≥ ≥ Definition 3.1 (cf. (Alexandrakis and Bozapalidis, 1987)) A weighted regular tree grammar (WRTG) is a 4-tuple G = (N, Σ, P, n0) where: 1. N is a finite set of nonterminals, with n0 ∈ N the start nonterminal. 2. Σ is a ranked alphabet of input symbols, where Σ ∩ N = ∅. 3. PΣ ∩is Na =tup ∅le. (P0, π), where P0 is a finite set of productions, each production p of the form n → u, n ∈ N, u ∈ TΣ(N), and π : P0 → R+ ins a→ →w uei,g nht ∈ ∈fu Nnc,ti uo n∈ o Tf the productions. W→e w Rill refer to P as a finite set of weighted productions, each production p of the form n −π −(p →) u. A production p is a chain production if it is of the form ni nj, where ni, nj ∈ N.6 − →w 6In (Alexandrakis and Bozapalidis, 1987), chain productions are forbidden in order to avoid infinite summations. We explicitly allow such summations. A WRTG G is in normal form if each production is either a chain production or is of the form n σ(n1, . . . , nk) where σ ∈ Σ(k) and n1, . . . , nk →∈ σ N(n. For WRTG∈ G N =. (N, Σ, P, n0), s, t, u ∈ TΣ(N), n ∈ N, and p ∈ P of the form n −→ ∈w T u, we nobt ∈ain N Na ,d aenridva ptio ∈n s Ptep o ffr tohme fso rtom mt n by− →repl ua,ci wneg some leaf nonterminal in s labeled n with u. For- − →w mally, s ⇒pG t if there exists some v ∈ lv(s) smuaclhly t,ha st s⇒(v) =t i fn t haenrde s e[xui]svt = so tm. e W ve say t(hsis) derivation step is leftmost if, for all v0 ∈ lv(s) where v0 < v, s(v0) ∈ Σ. We hencef∈orth lv a(ss-) sume all derivation )ste ∈ps a.re leftmost. If, for some m ∈ N, pi ∈ P, and ti ∈ TΣ (N) for all s1o m≤e i m m≤ ∈ m N, n0 ⇒∈ pP1 t a1n ⇒∈pm T tm, we say t1he ≤ sequence ,d n = (p1, . . . ,p.m ⇒) is a derivation of tm in G and that n0 ⇒∗ tm; the weight of d is wt(d) = π(p1) · . . . ⇒· π(pm). The weighted tree language rec)og ·n .i.z.ed · π by(p G is the mapping LG : TΣ → R+∞ such that for every t ∈ TΣ, LG(t) is the sum→ →of R the swuecihgth htsa to ffo arl el v(eproyss ti b∈ly T infinitely many) derivations of t in G. A weighted tree language f : TΣ → R+∞ is recognizable if there is a WRTG G such t→hat R f = LG. We define a partial ordering ? on WRTGs sucWh eth date finore W aR TpGarst aGl1 r=d r(iNng1 , Σ?, P o1n , n0) and G2 (N2, Σ, P2, n0), we say G1 ? G2 iff N1 ⊆ N2 and P1 ⊆ P2, where the w?eigh Gts are pres⊆erve Nd. ... = Definition 3.2 (cf. Def. 1of (Maletti, 2008)) A weighted extended top-down tree transducer (WXTT) is a 5-tuple M = (Q, Σ, ∆, R, q0) where: 1. Q is a finite set of states. 2. Σ and ∆ are the ranked alphabets of input and output symbols, respectively, where (Σ ∪ ∆) ∩ Q = 3. (RΣ i ∪s a∆ )tu ∩ple Q ( =R 0∅, .π), where R0 is a finite set of rules, each rule r of the form q.y → u for q ∈ ru lQes, y c∈h T ruΣle(X r), o fa tnhde u fo r∈m T q∆.y(Q − → →× u uX fo)r. Wqe ∈ ∈fu Qrt,hye r ∈req Tuire(X Xth)a,t annod v uari ∈abl Te x( Q∈ ×X appears rmthoerre rtehqauni roen tchea itn n y, aanrida bthleat x xe ∈ach X Xva arpi-able appearing in u is also in y. Moreover, π : R0 → R+∞ is a weight function of the rules. As →for RWRTGs, we refer to R as a finite set of weighted rules, each rule r of the form ∅. q.y −π −(r →) u. 
A WXTT is linear (respectively, nondeleting) if, for each rule r of the form q.y u, each x ∈ yd (y) ∩ X appears at most on− →ce ( ur,es epaecchxtive ∈ly, dat( lye)a ∩st Xonc aep) iena us. tW meo dsten oontcee th (ree scpleascsof all WXTTs as wxT and add the letters L and N to signify the subclasses of linear and nondeleting WTT, respectively. Additionally, if y is of the form σ(x1 , . . . , xk), we remove the letter “x” to signify − →w 1060 × ×× × the transducer is not extended (i.e., it is a “traditional” WTT (F¨ ul¨ op and Vogler, 2009)). For WXTT M = (Q, Σ, ∆, R, q0), s, t ∈ T∆(Q TΣ), and r ∈ R of the form q.y −w →), u, we obtain a× d Ter)iv,a atniodn r s ∈te pR ofrfom the s f trom mt b.yy r→epl ua,c wineg sbotamine leaf of s labeled with q and a tree matching y by a transformation of u, where each instance of a variable has been replaced by a corresponding subtree of the y-matching tree. Formally, s ⇒rM t if there oisf tah peo ysi-tmioantc vh n∈g tp roese(.s F)o, am saulblys,ti stu ⇒tion mapping ϕ : X → TΣ, and a rule q.y −u→w bs u ∈ R such that ϕs(v :) X X= → →(q, T ϕ(y)) and t = s[ϕ− →0(u u)] ∈v, wRh seurech hϕ t0h aist a substitution mapping Q X → T∆ (Q TΣ) dae sfiunbesdti usuticohn t mhaatp ϕpin0(qg0, Q Qx) × = X ( →q0, Tϕ(x()Q) f×or T all q0 ∈ Q and x ∈ X. We say this derivation step is l∈eft Qmo asnt dif, x f o∈r Xall. v W0 e∈ s lyv( tsh)i w deherirvea tvio0 n< s v, s(v0) ∈ ∆. We hencefor∈th lavs(sus)m we haellr ede vrivation steps) a ∈re ∆le.ftm Woes ht.e nIcf,e ffoorr sho amsesu sm ∈e aTllΣ d, emriv a∈t oNn, ri p∈s R ar, ea lnedf ttmi o∈s tT.∆ I f(,Q f ×r sToΣm) efo sr ∈all T T1 ≤, m mi ≤ ∈ m, (q0∈, s R) ,⇒ anrd1 tt1 . . . ⇒(rQm ×tm T, w)e f say lth 1e sequence d =, ()r1 ⇒ , . . . , rm..) .i s⇒ ⇒a derivation of (s, tm) in M; the weight of d is wt(d) = π(r1) · . . . · π(rm). The weighted tree transformation )r ·ec .o..gn ·i πze(rd by M is the mapping τM : TΣ T∆ → R+∞, such that for every s ∈ TΣ and t ∈× T T∆, τM→(s R, t) is the × µ× foofrth eve ewryeig sh ∈ts Tof aalln (dpo ts ∈sib Tly infinitely many) derivations of (s, t) in M. The composition of two weighted tree transformations τ : TΣ T∆ → R+∞ and : T∆ TΓ → R+∞ is the weight×edT tree→ →tra Rnsformation (τ×; Tµ) :→ →TΣ R TΓ → R+∞ wPhere for every s ∈ TΣ and u ∈ TΓ, (τ×; Tµ) (→s, uR) = Pt∈T∆ τ(s, t) · µ(t,u). 3.2 Applicable classes We now consider transducer classes where recognizability is preserved under application. Table 1 presents known results for the top-down tree transducer classes described in Section 3. 1. Unlike the string case, preservation of recognizability is not universal or symmetric. This is important for us, because we can only construct an application WRTG, i.e., a WRTG representing the result of application, if we can ensure that the language generated by application is in fact recognizable. Of the types under consideration, only wxLNT and wLNT preserve forward recognizability. The two classes marked as open questions and the other classes, which are superclasses of wNT, do not or are presumed not to. All subclasses of wxLT preserve backward recognizability.7 We do not consider cases where recognizability is not preserved tshuamt in the remainder of this paper. If a transducer M of a class that preserves forward recognizability is applied to a WRTG G, we can call the forward ap7Note that the introduction of weights limits recognizability preservation considerably. For example, (unweighted) xT preserves backward recognizability. plication WRTG M(G). 
and if M preserves backward recognizability, we can call the backward application WRTG M(G)/. Now that we have explained the application problem in the context of weighted tree transducers and determined the classes for which application is possible, let us consider how to build forward and backward application WRTGs. Our basic approach mimics that taken for WSTs by using an embed-compose-project strategy. As in string world, if we can embed the input in a transducer, compose with the given transducer, and project the result, we can obtain the application WRTG. Embedding a WRTG in a wLNT is a trivial operation—if the WRTG is in normal form and chain production-free,8 for every production of the form n − →w σ(n1 , . . . , nk), create a rule ofthe form n.σ(x1 , . . . , xk) − →w σ(n1 .x1, . . . , nk.xk). Range × projection of a w− x→LN σT(n is also trivial—for every q ∈ Q and u ∈ T∆ (Q X) create a production of the form q ∈−→w T u(0 where )u 0c is formed from u by replacing al−l → →lea uves of the form q.x with the leaf q, i.e., removing references to variables, and w is the sum of the weights of all rules of the form q.y → u in R.9 Domain projection for wxLT is bq.eyst →exp ulai inne dR b.y way of example. The left side of a rule is preserved, with variables leaves replaced by their associated states from the right side. So, the rule q1.σ(γ(x1) , x2) − →w δ(q2.x2, β(α, q3.x1)) would yield the production q1 q− →w σ(γ(q3) , q2) in the domain projection. Howev− →er, aσ dγe(lqeting rule such as q1.σ(x1 , x2) − →w γ(q2.x2) necessitates the introduction of a new →non γte(rqminal ⊥ that can genienrtartoed aullc toiof nT Σo fw ai nthe wwe niognhtte r1m . The only missing piece in our embed-composeproject strategy is composition. Algorithm 1, which is based on the declarative construction of Maletti (2006), generates the syntactic composition of a wxLT and a wLNT, a generalization of the basic composition construction of Baker (1979). It calls Algorithm 2, which determines the sequences of rules in the second transducer that match the right side of a single rule in the × first transducer. Since the embedded WRTG is of type wLNT, it may be either the first or second argument provided to Algorithm 1, depending on whether the application is forward or backward. We can thus use the embed-compose-project strategy for forward application of wLNT and backward application of wxLT and wxLNT. Note that we cannot use this strategy for forward applica8Without loss of generality we assume this is so, since standard algorithms exist to remove chain productions (Kuich, 1998; E´sik and Kuich, 2003; Mohri, 2009) and convert into normal form (Alexandrakis and Bozapalidis, 1987). 9Finitely many such productions may be formed. 1061 tion of wxLNT, even though that class preserves recognizability. Algorithm 1COMPOSE 1: inputs 2: wxLT M1 = (Q1, Σ, ∆, R1, q10 ) 3: wLNT M2 = (Q2, ∆, Γ, R2, q20 ) 4: outputs 5: wxLT M3 = ((Q1 Q2), Σ, Γ, R3, (q10 , q20 )) such that M3 = (τM1 ; τM2 Q). 
6: complexity 7: O(|R1 | max(|R2|size( ˜u), |Q2|)), where ˜u is the × lOar(g|eRst |rimgahtx s(|idRe t|ree in a,n|yQ ru|l))e in R1 8: Let R3be of the form (R30,π) 9: R3 ← (∅, ∅) 10: Ξ ←← ←{ ((q∅,10∅ , q20 )} {seen states} 11 10 : ΨΞ ←← {{((qq10 , q20 ))}} {{speeennd sintagt essta}tes} 1112:: Ψwh ←ile {Ψ( ∅ do) 1123:: (ilqe1 , Ψq26 =) ← ∅ daony element of 14: ← Ψ) \← {a(nqy1 , ql2em)}e 15: for all (q1.y q− −w →1 u) ∈ R1 do 16: for all (z, −w − →2) u∈) )C ∈O RVER(u, M2, q2) do 17: for all (q, x) )∈ ∈∈ C yOdV V(Ez)R ∩(u u(,(QM1 Q2) X) do 18: i fa lql (∈q ,Ξx )th ∈en y 19: qΞ6 ∈ ← Ξ tΞh e∪n {q} 20: ΞΨ ←← ΞΨ ∪∪ {{qq}} 21: r ← ((Ψq1 ← , q 2Ψ) .y {→q }z) 22: rR30 ← ←← (( qR03 ∪ {).ry} 23: π(r)← ←← R π(∪r) { +r} (w1 · w2) 24: return M3 = Ψ 4 Ψ Application of tree transducer cascades What about the case of an input WRTG and a cascade of tree transducers? We will revisit the three strategies for accomplishing application discussed above for the string case. In order for offline composition to be a viable strategy, the transducers in the cascade must be closed under composition. Unfortunately, of the classes that preserve recognizability, only wLNT × is closed under composition (G´ ecseg and Steinby, 1984; Baker, 1979; Maletti et al., 2009; F ¨ul ¨op and Vogler, 2009). However, the general lack of composability of tree transducers does not preclude us from conducting forward application of a cascade. We revisit the bucket brigade approach, which in Section 2 appeared to be little more than a choice of composition order. As discussed previously, application of a single transducer involves an embedding, a composition, and a projection. The embedded WRTG is in the class wLNT, and the projection forms another WRTG. As long as every transducer in the cascade can be composed with a wLNT to its left or right, depending on the application type, application of a cascade is possible. Note that this embed-compose-project process is somewhat more burdensome than in the string case. For strings, application is obtained by a single embedding, a series of compositions, and a single projecAlgorithm 2 COVER 1: inputs 2: u ∈ T∆ (Q1 X) 3: wuT ∈ M T2 = (Q×2, X X∆), Γ, R2, q20 ) 4: state q2 ∈ Q2 ×× × 5: outputs 6: set of pairs (z, w) with z ∈ TΓ ((Q1 Q2) X) fsoetrm ofed p ab yir so (nze, ,o wr m) worieth hsu zcc ∈es Tsful runs× ×on Q Qu )b y × ×ru Xles) in R2, starting from q2, and w ∈ R+∞ the sum of the weights of all such runs,. 7: complexity 8: O(|R2|size(u)) 9: 10: 11: 12: 13: 14: if u(ε) is of the form (q1,x) ∈ Q1× X then zinit ← ((q1 q2), x) else zinit ← ⊥ Πlast ←← ←{(z ⊥init, {((ε, ε), q2)}, 1)} for all← v ∈ pos(u) εsu,εch), tqha)t} u(v) ∈ ∆(k) for some fko ≥r 0ll li nv p ∈ref ipxo osr(ude)r sduoc 15: ≥Π v0 i←n p ∅r 16: for ←all ∅(z, θ, w) ∈ Πlast do 17: rf aorll a(zll, vθ0, ∈w )lv ∈(z Π) such that z(v0) = ⊥ do 18: 19: 20: 21: 22: 23: 24: 25: 26: 27: 28: , dfoor all(θ(v,v0).u(v)(x1,...,xk) −w →0h)∈R2 θ0 ← θ For←m sθubstitution mapping ϕ : (Q2 X) → TΓ((Q1 Q2 X) {⊥}). f→or Ti = 1to× ×k dQo for all v00 ∈ pos(h) such that h(v00) = (q02 , xi∈) for some q20 ∈ Q2 do θ0(vi, v0v00) ← q20 if u(vi) ←is q of the form (q1, x) ∈ Q1 X then ∪ ,ϕ(x)q20 ∈, x Qi) ←× X((q t1h, eqn20), x) else ϕ(q20, xi) ← ⊥ Πv ← Πv {(z[ϕ)( ←h)] ⊥v0 , θ0, w · w0)} ∪ 29: Πlast ← Πv 30: Z ← {z |← ←(z Π, θ, Xw) 31: return {(z, X ∈ (z ,θ ,wX) X∈Πl Πlast} w) | z ∈ Z} ast X tion, whereas application for trees is obtained by a series of (embed, compose, project) operations. 4.1 On-the-fly algorithms We next consider on-the-fly algorithms for application. 
Similar to the string case, an on-thefly approach is driven by a calling algorithm that periodically needs to know the productions in a WRTG with a common left side nonterminal. The embed-compose-project approach produces an entire application WRTG before any inference algorithm is run. In order to admit an on-the-fly approach we describe algorithms that only generate those productions in a WRTG that have a given left nonterminal. In this section we extend Definition 3. 1 as follows: a WRTG is a 6tuple G = (N, P, n0,M,G) where N, P, and n0 are defined as in Definition 3. 1, and either M = G = ∅,10 or M is a wxLNT and G is a normMal = =fo Grm =, c ∅h,ain production-free WRTG such that Σ, 10In which case Σ, the definition is functionally unchanged from before. 1062 w t[xLypN] LeT (a)pPresOYNer Qovsateiodn?f(Grwe´ca(Fsd¨uMrg(lKeS¨coa punelgsid cowtihuSza,[rtxlbce1.2i]9,l0Nn2tybT091), 984w [txy](pbL Ne)T PrespvatiosYnNero svfebda?ckw(rFdu¨MlSeo¨c apesgl onwtiuza[r,xlcb.2]ei,NlL0t2yT 91)0 Table 1: Preservation of forward and backward recognizability for various classes of top-down tree transducers. Here and elsewhere, the following abbreviations apply: w = weighted, x = extended LHS, L = linear, N = nondeleting, OQ = open question. Square brackets include a superposition of classes. For example, w[x]T signifies both wxT and wT. Algorithm 3 PRODUCE 1: inputs 2: WRTG Gin = (Nin, ∆, Pin, n0, M, G) such that M = (Q, Σ, ∆, R, q0) is a wxLNT and G = (N, Σ, P, n00, M0, G0) is a WRTG in normal form with no chain productions 3: nin ∈ Nin 4: outputs∈ 5: WRTG Gout = (Nout, ∆, Pout, n0, M, G), such that Gin ? Gout and (nin ?−→w G u) ∈ Pout ⇔ (nin − →w u) ∈ M(G). 6: complex−i →ty 7: O(|R| ), where ˜y is the largest left side tree iOn (a|Rny| | rPul|e in R |P|size( y˜) 8: if Pincontains productions of the form nin− →w u then 9: return Gin 10: Nout ← Nin 11: Pout ←← P Nin 12: Let ni←n b Pe of the form (n, q), where n ∈ N and q ∈ Q. × × 13: for all (q.y −f −wt → 1he u) ∈ R do 14: for all (θ, w2) ∈ ∈RE RPL doACE(y,G, n) do 15: Form subs)ti ∈tu RtiEonP LmAaCppEi(nyg, Gϕ, n: Qo X → T∆ (N Q) such that, for all v ∈ ydQ Q(y) × ×and X Xq0 → →∈ Q, (ifN Nth ×ereQ e)x sisutc nh0 h∈a tN, f aonrd a lxl v∈ ∈X y sdu(cyh) th anatd θ q(v∈) = n0 and y(v) = x, t∈he Nn aϕn(dq0 x , x ∈) X= ( snu0c,h hq t0)ha. 16: p0 ((n, q) −w −1 −· −w →2 ϕ(u)) 17: for← ←all ( p ∈, qN)O− −R −M − →(p0 ϕ, N(uo)u)t) do ← 18: Let p b(ke) o.f the form n0− →w δ(n1,...,nk) for 19: δN ∈out ∆ ← Nout ∪ {n0 , . . . , nk } 20: Pout ←← P Nout ∪∪ { {pn} 21: return CHAIN-REM(Gout) M(G).. In the latter case, G is a stand-in for MG ?(G M).,( analogous to the stand-ins for WSAs and G ? WSTs described in Section 2. Algorithm 3, PRODUCE, takes as input a WRTG Gin = (Nin, ∆, Pin, n0, and a desired nonterminal nin and returns another WRTG, Gout that is different from Gin in that it has more productions, specifically those beginning with nin that are in Algorithms using stand-ins should call PRODUCE to ensure the stand-in they are using has the desired productions beginning with the specific nonterminal. Note, then, that M, G) M(G).. PRODUCE obtains the effect of forward applica- Algorithm 4 REPLACE 1: 2: 3: 4: 5: 6: 7: 8: inputs y ∈ TΣ(X) WRTG G = (N, Σ, P, n0, M, G) in normal form, with no chain productions n∈ N outnpu ∈ts N set Π of pairs (θ, w) where θ is a mapping pos(y) → N and w ∈ R+∞ , each pair indicating pa ossu(cyc)ess →ful Nrun a nodn wy b ∈y p Rroductions in G, starting from n, and w is the weight of the run. 
complexity O(|P|size(y)) 9: Πlast← {({(ε,n)},1)} 10: for all← ←v {∈( { po(εs,(ny)) s,u1c)h} that y(v) ∈ X in prefix order fdoor 11: Πv ← ∅ 12: for ←all ∅(θ, w) ∈ Πlast do 13: ri fa Mll ( w∅) )a ∈nd Π G ∅ then 14: MG ←= ∅PR anOdD GUC6 =E ∅(G th, eθn(v)) = = −w →0 15: for all (θ(v) y(v) (n1, . . . , nk)) ∈ P do 16: Πv ← Πv∪− →{(θ y∪({v ()(vni, ni) , 1≤ )i) ≤ ∈ k P}, d dwo·w0) } 17: Πlast ← Π←v 18: return Πlast Algorithm 5 MAKE-EXPLICIT 1: inputs 2: WRTG G = (N, Σ, P, n0, M, G) in normal form 3: outputs 4: WRTG G0 = (N0, Σ, P0, n0, M, G), in normal form, such that if M ∅ and G ∅, LG0 = LM(G)., and otherwise Gf M0 = G. = = 56:: comOp(|lePx0it|y) 7: G0← 8: Ξ ←← { nG0} {seen nonterminals} 89:: ΞΨ ←← {{nn0}} {{speeenndi nnogn tneornmteinramlsi}nals} 190:: wΨh ←ile {Ψn =} ∅{ pdeon 11 10 : inl e← Ψa6n =y ∅el deoment of 12: nΨ ←←a nΨy \ e l{emn}e 13: iΨf M ← ∅\ a{nnd} G ∅ then 14: MG0 =← ∅ P aRnOdD GU 6=CE ∅(G the0,n nn) 15: for all (n P−→w RO σ(n1 , . . . , nk)) ∈ P0 do 16: for i= 1→ →to σ (kn ndo 17: if ni ∈ Ξ then 18: Ξ ←∈ Ξ ΞΞ t h∪e {nni} 19: ΞΨ ←← ΞΨ ∪∪ {{nni}} 20: return G0 Ψ = = 1063 g0 g0 −w − →1 −−w →→2 g0 − − → σ(g0, g1) α w − →3 g1 − − → α (a) Input WRTG G G a0 a0.σ(x1, x2) −w − →4 − w − → →5 σ(a0.x1, a1.x2) a0.σ(x1, x2) ψ(a2.x1, a1.x2) a0 .α − − → α a 1.α − − → α a2 .α w − →6 (w −a → →7 w − →8 −−→ ρ (b) First transducer MA in the cascade b0 b0.σ(x1, x2) b0.α −w −1 →0 α −w − →9 σ(b0.x1, b0.x2) (c) Second transducer MB in the cascade g0a0 g0a0 −w −1 −· −w →4 σ(g0a0, g1a1) −−w −− 1− − ·w − − → →5 ψ(g0a2, g1a1) − − −·w − → α g1a1 − − −·w − → α w −− 2 −− − · w−− → →6 g0a0 w − 3 − −· w− → →7 (d) Productions of MA (G). built as a consequence of building the complete MB(MA(G).). g0a0b0 g0a0b0 −w −1 −· −w4 −·w − →9 σ(g0a0b0, g1a1b0) g0a0b0 −−w − − −2 −· −w −6 − −·−w − → −1 →0 σ α g1a1b0 −w − −3· w−7 −· −w −1 →0 α (e) Complete MB (MA (G).). Figure 2: Forward application through a cascade of tree transducers using an on-the-fly method. tion in an on-the-fly manner.11 It makes calls to REPLACE, which is presented in Algorithm 4, as well as to a NORM algorithm that ensures normal form by replacing a single production not in normal form with several normal-form productions that can be combined together (Alexandrakis and Bozapalidis, 1987) and a CHAIN-REM algorithm that replaces a WRTG containing chain productions with an equivalent WRTG that does not (Mohri, 2009). As an example of stand-in construction, consider the invocation PRODUCE(G1, g0a0), where iGs1 in= F (i{g u0rae0 2}a, 1 {2σa,nψd,α M,ρA},is ∅ i,n g 20ab0., T MheA s,ta Gn)d,-i Gn WRTG that is output contains the first three of the four productions in Figure 2d. To demonstrate the use of on-the-fly application in a cascade, we next show the effect of PRODUCE when used with the cascade G ◦ MA ◦ MB, wDhUeCreE MwhBe i uss eind wFitighu three c2acs. Oe uGr dMrivin◦gM algorithm in this case is Algorithm 5, MAKE11Note further that it allows forward application of class wxLNT, something the embed-compose-project approach did not allow. 12By convention the initial nonterminal and state are listed first in graphical depictions of WRTGs and WXTTs. rJJ.JJ(x1, x2, x3) → JJ(rDT.x1, rJJ.x2, rVB.x3) rVB.VB(x1, x2, )x− 3→) → JJ VrB(rNNPS.x1, rNN.x3, rVB.x2) t.”gentle” − → ”gentle”(a) Rotation rules iVB.NN(x1, x2) iVB.NN(x1, x2)) iVB.NN(x1, x2)) → →→ →→ NN(INS iNN.x1, iNN.x2) NNNN((iINNNS.x i1, iNN.x2) NNNN((iiNN.x1, iNN.x2, INS) (b) Insertion rules t.VB(x1 , x2, x3) → X(t.x1 , t.x2, t.x3) t.”gentleman” →) → j →1 t . ””ggeennttl eemmaann”” →→ jE1PS t . 
”INgSen →tle m j 1a t . I NNSS →→ j 21 (c) Translation rules Figure 3: Example rules from transducers used in decoding experiment. j 1 and j2 are Japanese words. EXPLICIT, which simply generates the full application WRTG using calls to PRODUCE. The input to MAKE-EXPLICIT is G2 = ({g0a0b0}, {σ, α}, ∅, g0a0b0, MB, G1).13 MAKE=-E ({XgPLICI}T, c{aσl,lsα }P,R ∅O, gDUCE(G2, g0a0b0). PRODUCE then seeks to cover b0.σ(x1, x2) σ(b0.x1, b0.x2) with productions from G1, wh−i →ch i σs (ab stand-in for −w →9 MA(G).. At line 14 of REPLACE, G1 is improved so that it has the appropriate productions. The productions of MA(G). that must be built to form the complete MB (MA(G).). are shown in Figure 2d. The complete MB (MA(G).). is shown in Figure 2e. Note that because we used this on-the-fly approach, we were able to avoid building all the productions in MA(G).; in particular we did not build g0a2 − −w2 −· −w →8 ρ, while a bucket brigade approach would −h −a −v −e → →bui ρlt, ,t whish production. We have also designed an analogous onthe-fly PRODUCE algorithm for backward application on linear WTT. We have now defined several on-the-fly and bucket brigade algorithms, and also discussed the possibility of embed-compose-project and offline composition strategies to application of cascades of tree transducers. Tables 2a and 2b summarize the available methods of forward and backward application of cascades for recognizabilitypreserving tree transducer classes. 5 Decoding Experiments The main purpose of this paper has been to present novel algorithms for performing applica- tion. However, it is important to demonstrate these algorithms on real data. We thus demonstrate bucket-brigade and on-the-fly backward application on a typical NLP task cast as a cascade of wLNT. We adapt the Japanese-to-English transla13Note that G2 is the initial stand-in for MB (MA (G).)., since G1 is the initial stand-in for MA (G).. 1064 obomcbtfethodW√ √STwx√L× NTwL√ √NTo mbctbfethodW√ √STw×x√ LTw√ ×LTwxL√ ×NTwL√ √NT (a) Forward application (b) Backward application Table 2: Transducer types and available methods of forward and backward application of a cascade. oc = offline composition, bb = bucket brigade, otf = on the fly. tion model of Yamada and Knight (2001) by transforming it from an English-tree-to-Japanese-string model to an English-tree-to-Japanese-tree model. The Japanese trees are unlabeled, meaning they have syntactic structure but all nodes are labeled “X”. We then cast this modified model as a cascade of LNT tree transducers. Space does not permit a detailed description, but some example rules are in Figure 3. The rotation transducer R, a samparlee ionf Fwighuicreh 3is. Tinh Fei rgoutareti o3na, t rhaanss d6u,4c5e3r R ru,l eas s, tmheinsertion transducer I,Figure 3b, has 8,122 rules, iannsde rtthieon ntr trananssladtuiocne rtr Ia,n Fsidguucreer, 3 bT, , Fasig 8u,r1e2 32c r,u lheass, 3a7nd,31 th h1e ertu rlaenss. We add an English syntax language model L to theW ceas acdadde a no Ef ntrgalinsshd uscyentras x ju lastn gdueascgrei mbeodd etol L be ttoter simulate an actual machine translation decoding task. The language model is cast as an identity WTT and thus fits naturally into the experimental framework. In our experiments we try several different language models to demonstrate varying performance of the application algorithms. The most realistic language model is a PCFG. Each rule captures the probability of a particular sequence of child labels given a parent label. This model has 7,765 rules. 
To demonstrate more extreme cases of the usefulness of the on-the-fly approach, we build a language model that recognizes exactly the 2,087 trees in the training corpus, each with equal weight. It has 39,455 rules. Finally, to be ultraspecific, we include a form of the “specific” language model just described, but only allow the English counterpart of the particular Japanese sentence being decoded in the language. The goal in our experiments is to apply a single tree t backward through the cascade L◦R◦I◦T ◦t tarnede tfi bndac kthwe 1rd-b tehsrto pugathh hine tchaes caapdpeli Lca◦tiRon◦ IW◦RTTG ◦t. We evaluate the speed of each approach: bucket brigade and on-the-fly. The algorithm we use to obtain the 1-best path is a modification of the kbest algorithm of Pauls and Klein (2009). Our algorithm finds the 1-best path in a WRTG and admits an on-the-fly approach. The results of the experiments are shown in Table 3. As can be seen, on-the-fly application is generally faster than the bucket brigade, about double the speed per sentence in the traditional L1eMp-xcsafe tgcyn tpemb u eo ct hkfoe tdime>/.21s 0.e78465nms tenc Table 3: Timing results to obtain 1-best from application through a weighted tree transducer cascade, using on-the-fly vs. bucket brigade backward application techniques. pcfg = model recognizes any tree licensed by a pcfg built from observed data, exact = model recognizes each of 2,000+ trees with equal weight, 1-sent = model recognizes exactly one tree. experiment that uses an English PCFG language model. The results for the other two language models demonstrate more keenly the potential advantage that an on-the-fly approach provides—the simultaneous incorporation of information from all models allows application to be done more effectively than if each information source is considered in sequence. In the “exact” case, where a very large language model that simply recognizes each of the 2,087 trees in the training corpus is used, the final application is so large that it overwhelms the resources of a 4gb MacBook Pro, while the on-the-fly approach does not suffer from this problem. The “1-sent” case is presented to demonstrate the ripple effect caused by using on-the fly. In the other two cases, a very large language model generally overwhelms the timing statistics, regardless of the method being used. But a language model that represents exactly one sentence is very small, and thus the effects of simultaneous inference are readily apparent—the time to retrieve the 1-best sentence is reduced by two orders of magnitude in this experiment. 6 Conclusion We have presented algorithms for forward and backward application of weighted tree transducer cascades, including on-the-fly variants, and demonstrated the benefit of an on-the-fly approach to application. We note that a more formal approach to application of WTTs is being developed, 1065 independent from these efforts, by F ¨ul ¨op (2010). et al. Acknowledgments We are grateful for extensive discussions with Andreas Maletti. We also appreciate the insights and advice of David Chiang, Steve DeNeefe, and others at ISI in the preparation of this work. Jonathan May and Kevin Knight were supported by NSF grants IIS-0428020 and IIS0904684. Heiko Vogler was supported by DFG VO 1011/5-1. References Athanasios Alexandrakis and Symeon Bozapalidis. 1987. Weighted grammars and Kleene’s theorem. Information Processing Letters, 24(1): 1–4. Brenda S. Baker. 1979. Composition of top-down and bottom-up tree transductions. 
Information and Control, 41(2): 186–213. Zolt a´n E´sik and Werner Kuich. 2003. Formal tree series. Journal of Automata, Languages and Combinatorics, 8(2):219–285. Zolt a´n F ¨ul ¨op and Heiko Vogler. 2009. Weighted tree automata and tree transducers. In Manfred Droste, Werner Kuich, and Heiko Vogler, editors, Handbook of Weighted Automata, chapter 9, pages 3 13–404. Springer-Verlag. Zolt a´n F ¨ul ¨op, Andreas Maletti, and Heiko Vogler. 2010. Backward and forward application of weighted extended tree transducers. Unpublished manuscript. Ferenc G ´ecseg and Magnus Steinby. 1984. Tree Automata. Akad e´miai Kiad o´, Budapest. Liang Huang and David Chiang. 2005. Better k-best parsing. In Harry Bunt, Robert Malouf, and Alon Lavie, editors, Proceedings of the Ninth International Workshop on Parsing Technologies (IWPT), pages 53–64, Vancouver, October. Association for Computational Linguistics. Werner Kuich. 1998. Formal power series over trees. In Symeon Bozapalidis, editor, Proceedings of the 3rd International Conference on Developments in Language Theory (DLT), pages 61–101, Thessaloniki, Greece. Aristotle University of Thessaloniki. Werner Kuich. 1999. Tree transducers and formal tree series. Acta Cybernetica, 14: 135–149. Andreas Maletti, Jonathan Graehl, Mark Hopkins, and Kevin Knight. 2009. The power of extended topdown tree transducers. SIAM Journal on Computing, 39(2):410–430. Andreas Maletti. 2006. Compositions of tree series transformations. Theoretical Computer Science, 366:248–271. Andreas Maletti. 2008. Compositions of extended topdown tree transducers. Information and Computation, 206(9–10): 1187–1 196. Andreas Maletti. 2009. Personal Communication. Mehryar Mohri, Fernando C. N. Pereira, and Michael Riley. 2000. The design principles of a weighted finite-state transducer library. Theoretical Computer Science, 231: 17–32. Mehryar Mohri. 1997. Finite-state transducers in language and speech processing. Computational Lin- guistics, 23(2):269–312. Mehryar Mohri. 2009. Weighted automata algorithms. In Manfred Droste, Werner Kuich, and Heiko Vogler, editors, Handbook of Weighted Automata, chapter 6, pages 213–254. Springer-Verlag. Adam Pauls and Dan Klein. 2009. K-best A* parsing. In Keh-Yih Su, Jian Su, Janyce Wiebe, and Haizhou Li, editors, Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 958–966, Suntec, Singapore, August. Association for Computational Linguistics. Fernando Pereira and Michael Riley. 1997. Speech recognition by composition of weighted finite automata. In Emmanuel Roche and Yves Schabes, editors, Finite-State Language Processing, chapter 15, pages 431–453. MIT Press, Cambridge, MA. William A. Woods. 1980. Cascaded ATN grammars. American Journal of Computational Linguistics, 6(1): 1–12. Kenji Yamada and Kevin Knight. 2001. A syntaxbased statistical translation model. In Proceedings of 39th Annual Meeting of the Association for Computational Linguistics, pages 523–530, Toulouse, France, July. Association for Computational Linguistics. 1066

2 0.19154491 21 acl-2010-A Tree Transducer Model for Synchronous Tree-Adjoining Grammars

Author: Andreas Maletti

Abstract: A characterization of the expressive power of synchronous tree-adjoining grammars (STAGs) in terms of tree transducers (or equivalently, synchronous tree substitution grammars) is developed. Essentially, a STAG corresponds to an extended tree transducer that uses explicit substitution in both the input and output. This characterization allows the easy integration of STAG into toolkits for extended tree transducers. Moreover, the applicability of the characterization to several representational and algorithmic problems is demonstrated.

3 0.12727413 97 acl-2010-Efficient Path Counting Transducers for Minimum Bayes-Risk Decoding of Statistical Machine Translation Lattices

Author: Graeme Blackwood ; Adria de Gispert ; William Byrne

Abstract: This paper presents an efficient implementation of linearised lattice minimum Bayes-risk decoding using weighted finite state transducers. We introduce transducers to efficiently count lattice paths containing n-grams and use these to gather the required statistics. We show that these procedures can be implemented exactly through simple transformations of word sequences to sequences of n-grams. This yields a novel implementation of lattice minimum Bayes-risk decoding which is fast and exact even for very large lattices.

4 0.088689886 116 acl-2010-Finding Cognate Groups Using Phylogenies

Author: David Hall ; Dan Klein

Abstract: A central problem in historical linguistics is the identification of historically related cognate words. We present a generative phylogenetic model for automatically inducing cognate group structure from unaligned word lists. Our model represents the process of transformation and transmission from ancestor word to daughter word, as well as the alignment between the words lists of the observed languages. We also present a novel method for simplifying complex weighted automata created during inference to counteract the otherwise exponential growth of message sizes. On the task of identifying cognates in a dataset of Romance words, our model significantly outperforms a baseline ap- proach, increasing accuracy by as much as 80%. Finally, we demonstrate that our automatically induced groups can be used to successfully reconstruct ancestral words.

5 0.069114879 186 acl-2010-Optimal Rank Reduction for Linear Context-Free Rewriting Systems with Fan-Out Two

Author: Benoit Sagot ; Giorgio Satta

Abstract: Linear Context-Free Rewriting Systems (LCFRSs) are a grammar formalism capable of modeling discontinuous phrases. Many parsing applications use LCFRSs where the fan-out (a measure of the discontinuity of phrases) does not exceed 2. We present an efficient algorithm for optimal reduction of the length of production right-hand side in LCFRSs with fan-out at most 2. This results in asymptotical running time improvement for known parsing algorithms for this class.

6 0.06739857 53 acl-2010-Blocked Inference in Bayesian Tree Substitution Grammars

7 0.066303231 169 acl-2010-Learning to Translate with Source and Target Syntax

8 0.059695959 67 acl-2010-Computing Weakest Readings

9 0.058615178 217 acl-2010-String Extension Learning

10 0.055009685 46 acl-2010-Bayesian Synchronous Tree-Substitution Grammar Induction and Its Application to Sentence Compression

11 0.053999308 228 acl-2010-The Importance of Rule Restrictions in CCG

12 0.053527348 56 acl-2010-Bridging SMT and TM with Translation Recommendation

13 0.051673405 255 acl-2010-Viterbi Training for PCFGs: Hardness Results and Competitiveness of Uniform Initialization

14 0.051164843 118 acl-2010-Fine-Grained Tree-to-String Translation Rule Extraction

15 0.050752532 69 acl-2010-Constituency to Dependency Translation with Forests

16 0.050233174 75 acl-2010-Correcting Errors in a Treebank Based on Synchronous Tree Substitution Grammar

17 0.050213844 66 acl-2010-Compositional Matrix-Space Models of Language

18 0.049091451 211 acl-2010-Simple, Accurate Parsing with an All-Fragments Grammar

19 0.048924971 155 acl-2010-Kernel Based Discourse Relation Recognition with Temporal Ordering Information

20 0.048219997 94 acl-2010-Edit Tree Distance Alignments for Semantic Role Labelling


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.131), (1, -0.05), (2, 0.035), (3, -0.032), (4, -0.07), (5, -0.06), (6, 0.101), (7, 0.018), (8, 0.0), (9, -0.085), (10, -0.048), (11, -0.02), (12, 0.069), (13, -0.081), (14, -0.067), (15, 0.001), (16, -0.017), (17, 0.005), (18, -0.003), (19, 0.076), (20, -0.013), (21, -0.008), (22, -0.064), (23, -0.064), (24, 0.128), (25, -0.106), (26, -0.064), (27, -0.057), (28, -0.048), (29, 0.112), (30, -0.002), (31, 0.047), (32, 0.016), (33, 0.074), (34, -0.091), (35, 0.037), (36, 0.042), (37, -0.071), (38, 0.095), (39, 0.018), (40, 0.06), (41, 0.056), (42, 0.047), (43, -0.014), (44, -0.04), (45, 0.102), (46, 0.027), (47, -0.217), (48, -0.146), (49, -0.117)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.94060898 95 acl-2010-Efficient Inference through Cascades of Weighted Tree Transducers

Author: Jonathan May ; Kevin Knight ; Heiko Vogler

Abstract: Weighted tree transducers have been proposed as useful formal models for representing syntactic natural language processing applications, but there has been little description of inference algorithms for these automata beyond formal foundations. We give a detailed description of algorithms for application of cascades of weighted tree transducers to weighted tree acceptors, connecting formal theory with actual practice. Additionally, we present novel on-the-fly variants of these algorithms, and compare their performance on a syntax machine translation cascade based on (Yamada and Knight, 2001). 1 Motivation Weighted finite-state transducers have found recent favor as models of natural language (Mohri, 1997). In order to make actual use of systems built with these formalisms we must first calculate the set of possible weighted outputs allowed by the transducer given some input, which we call forward application, or the set of possible weighted inputs given some output, which we call backward application. After application we can do some inference on this result, such as determining its k highest weighted elements. We may also want to divide up our problems into manageable chunks, each represented by a transducer. As noted by Woods (1980), it is easier for designers to write several small transducers where each performs a simple transformation, rather than painstakingly construct a single complicated device. We would like to know, then, the result of transformation of input or output by a cascade of transducers, one operating after the other. As we will see, there are various strategies for approaching this problem. We will consider offline composition, bucket brigade applica- tion, and on-the-fly application. Application of cascades of weighted string transducers (WSTs) has been well-studied (Mohri, Heiko Vogler Technische Universit a¨t Dresden Institut f u¨r Theoretische Informatik 01062 Dresden, Germany he iko .vogle r@ tu-dre s den .de 1997). Less well-studied but of more recent interest is application of cascades of weighted tree transducers (WTTs). We tackle application of WTT cascades in this work, presenting: • • • explicit algorithms for application of WTT casceaxpdelisc novel algorithms for on-the-fly application of nWoTvTe lca alscgoardieths,m mansd f experiments comparing the performance of tehxepseer iamlgeonrtisthm cos.m 2 Strategies for the string case Before we discuss application of WTTs, it is helpful to recall the solution to this problem in the WST domain. We recall previous formal presentations of WSTs (Mohri, 1997) and note informally that they may be represented as directed graphs with designated start and end states and edges labeled with input symbols, output symbols, and weights.1 Fortunately, the solution for WSTs is practically trivial—we achieve application through a series of embedding, composition, and projection operations. Embedding is simply the act of representing a string or regular string language as an identity WST. Composition of WSTs, that is, generating a single WST that captures the transformations of two input WSTs used in sequence, is not at all trivial, but has been well covered in, e.g., (Mohri, 2009), where directly implementable algorithms can be found. Finally, projection is another trivial operation—the domain or range language can be obtained from a WST by ignoring the output or input symbols, respectively, on its arcs, and summing weights on otherwise identical arcs. 
By embedding an input, composing the result with the given WST, and projecting the result, forward application is accomplished.2 We are then left with a weighted string acceptor (WSA), essentially a weighted, labeled graph, which can be traversed R+1∪W {e+ as∞su}m,te ha thtro thuegh woeuitgh t hi osf p aa ppaetrh t ihsa cta wlceuilgahtetds a asre th ien prod∪uct { +of∞ ∞th}e, wtheaitgh thtes wofe i gtsh etd ogfes a, panatdh t ihsat c athlceu lwateeigdh ats so tfh ae (not necessarily finite) set T of paths is calculated as the sum of the weights of the paths of T. 2For backward applications, the roles of input and output are simply exchanged. 1058 ProceedingUsp opfs thaela 4, 8Stwhe Adnennu,a 1l1- M16ee Jtiunlgy o 2f0 t1h0e. A ?c ss2o0c1ia0ti Aosnso focria Ctioonm fpourta Ctoiomnpault Laitniognuaislt Licisn,g puaigsetisc 1s058–1066, a:a/1a:a/1 b:b/a.:5b/.1aa::ba//. 49Ea:ba/:a.6/.5 (a) InpAut strain:ag/ “a aB” emab:ae/d1dedC in an D(b) first Wa:b:aS/T./4. in cascadeE :c/.6Fa:d/.4 (c//).. second WFST in cascbaad::dde// identityA WSaT: (d)b:Oda/c:f.d3Al./6ai5n.:0c3e/.0ca:o7m/1pao :sBcdi/t.32o584n6a :p/1roa:cCa/h:db.3:/c6d.2/46.35b:a(/be.a)5 :BbA:/u.D1c/keDtbrigBadEDae:ba/p.49proach:ECDa:b/ a.:6b/5.(f)Rdc/Ae./0sD.u35b7aFl4t:c o/f.76ofcBld/iE.nD53e4F6orFbcud/.c0k7e3tapbCpd:cDli/cF.a12t38ion (g)dcI/n.A3i5tD46dcaF/lD.0oF7n3-thea f:lcBdy/E .b(12h:d)8c/O.3A5nD-4dtchF/.e0C -7f3EDlyFas:t /n.B9d-EDinF afterc /x.d3cp/6.l1o28ring C(i)ECDOcFE/.nA5-tD4hdcF/e.0- fl37ysdcta/n.35dB64-iEDnF afteBrc/Eb.3eFs6dtc/ pd./1a2.t83h46 asbeC nEDF found Cobm:bdap:c/:o/d.3/.sa6.5e05D:c tF.h0e7 tranaaaa:sd:c: cdd// /. u12..53c2846ers stacnd//d..A53-i46nDdc f.o.00r73 (f) Appaal:A:ybaD W..19ST (b)BB tEoD WST (a) aftercd p/..0/0..rDo5373Fj46ectionc o.12u28tcdg//o.A.35iDn64dgcF// e0C0dC73gEDeFFs of sBtEaDFteF ADdFc// FiBgBuEDFreF 1: Tc/ .h2.34dcr/6/e. 1e2 8 d/ iA. f53fD64edcF/ r. 0 eCC37nEDtF appBroBEaDFcFhesdc ./t2o.3d4c a..12p28plicatioCCnED tF/ h. A53r6D4ocd/uF/. g0073h cascBBaEdDFeFs ocdf/ .3W264dcS//.1.T282s. bydc w//..53e46ll-known aElFgorithdc/m./2.34s6 to e//.f.5f3i46cieCnEtFly finBdE the kd-c/ bedst/ p6aths. Because WSTs can be freely composed, extending application to operate on a cascade of WSTs is fairly trivial. The only question is one of composition order: whether to initially compose the cascade into a single transducer (an approach we call offline composition) or to compose the initial embedding with the first transducer, trim useless states, compose the result with the second, and so on (an approach we call bucket brigade). The appropriate strategy generally depends on the structure of the individual transducers. A third approach builds the result incrementally, as dictated by some algorithm that requests information about it. Such an approach, which we call on-the-fly, was described in (Pereira and Riley, 1997; Mohri, 2009; Mohri et al., 2000). If we can efficiently calculate the outgoing edges of a state of the result WSA on demand, without calculating all edges in the entire machine, we can maintain a stand-in for the result structure, a machine consisting at first of only the start state of the true result. As a calling algorithm (e.g., an implementation of Dijkstra’s algorithm) requests information about the result graph, such as the set of outgoing edges from a state, we replace the current stand-in with a richer version by adding the result of the request. The on-the-fly approach has a distinct advantage over the other two methods in that the entire result graph need not be built. 
A graphical representation of all three methods is presented in Figure 1.

3 Application of tree transducers

Now let us revisit these strategies in the setting of trees and tree transducers. Imagine we have a tree or set of trees as input that can be represented as a weighted regular tree grammar3 (WRTG) and a WTT that can transform that input with some weight. We would like to know the k-best trees the WTT can produce as output for that input, along with their weights. We already know of several methods for acquiring k-best trees from a WRTG (Huang and Chiang, 2005; Pauls and Klein, 2009), so we then must ask if, analogously to the string case, WTTs preserve recognizability4 and we can form an application WRTG. Before we begin, however, we must define WTTs and WRTGs.

3This generates the same class of weighted tree languages as weighted tree automata, the direct analogue of WSAs, and is more useful for our purposes.
4A weighted tree language is recognizable iff it can be represented by a WRTG.
5The following formal definitions and notations are needed for understanding and reimplementation of the presented algorithms, but can be safely skipped on first reading and consulted when encountering an unfamiliar term.

3.1 Preliminaries5

A ranked alphabet is a finite set Σ such that every member σ ∈ Σ has a rank rk(σ) ∈ N. We call Σ(k) ⊆ Σ, k ∈ N, the set of those σ ∈ Σ such that rk(σ) = k. The set of variables is denoted X = {x1, x2, . . .} and is assumed to be disjoint from any ranked alphabet used in this paper. We use ⊥ to denote a symbol of rank 0 that is not in any ranked alphabet used in this paper. A tree t ∈ TΣ is denoted σ(t1, . . . , tk) where k ≥ 0, σ ∈ Σ(k), and t1, . . . , tk ∈ TΣ. For σ ∈ Σ(0) we write σ ∈ TΣ as shorthand for σ(). For every set S disjoint from Σ, let TΣ(S) = TΣ∪S, where, for all s ∈ S, rk(s) = 0.

We define the positions of a tree t = σ(t1, . . . , tk), for k ≥ 0, σ ∈ Σ(k), t1, . . . , tk ∈ TΣ, as a set pos(t) ⊂ N* such that pos(t) = {ε} ∪ {iv | 1 ≤ i ≤ k, v ∈ pos(ti)}. The set of leaf positions lv(t) ⊆ pos(t) are those positions v ∈ pos(t) such that for no i ∈ N, vi ∈ pos(t). We presume standard lexicographic orderings < and ≤ on pos(t). The label of t at position v, denoted by t(v), the subtree of t at v, denoted by t|v, and the replacement at v by s, denoted by t[s]v, are defined as follows:

1. For every σ ∈ Σ(0), σ(ε) = σ, σ|ε = σ, and σ[s]ε = s.
2. For every t = σ(t1, . . . , tk) such that k = rk(σ) and k ≥ 1, t(ε) = σ, t|ε = t, and t[s]ε = s. For every 1 ≤ i ≤ k and v ∈ pos(ti), t(iv) = ti(v), t|iv = ti|v, and t[s]iv = σ(t1, . . . , ti−1, ti[s]v, ti+1, . . . , tk).

The size of a tree t, size(t), is |pos(t)|, the cardinality of its position set. The yield set of a tree is the set of labels of its leaves: for a tree t, yd(t) = {t(v) | v ∈ lv(t)}.
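The position, label, subtree, and replacement operations above can be illustrated with a small Python sketch; the Tree class and helper names are ours, for illustration only.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Tree:
    label: str
    children: Tuple["Tree", ...] = ()

def positions(t):
    """pos(t): ε is the empty tuple, child positions are 1-based index sequences."""
    yield ()
    for i, c in enumerate(t.children, start=1):
        for v in positions(c):
            yield (i,) + v

def label_at(t, v):          # t(v)
    return t.label if not v else label_at(t.children[v[0] - 1], v[1:])

def subtree_at(t, v):        # t|v
    return t if not v else subtree_at(t.children[v[0] - 1], v[1:])

def replace_at(t, v, s):     # t[s]v
    if not v:
        return s
    kids = list(t.children)
    kids[v[0] - 1] = replace_at(kids[v[0] - 1], v[1:], s)
    return Tree(t.label, tuple(kids))

# size(t) = |pos(t)|; leaves are the positions with nothing below them
t = Tree("σ", (Tree("α"), Tree("σ", (Tree("α"), Tree("α")))))
assert label_at(t, (2, 1)) == "α"
assert len(list(positions(t))) == 5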
Let A and B be sets. Let ϕ : A → TΣ(B) be a mapping. We extend ϕ to the mapping ϕ̄ : TΣ(A) → TΣ(B) such that for a ∈ A, ϕ̄(a) = ϕ(a), and for k ≥ 0, σ ∈ Σ(k), and t1, . . . , tk ∈ TΣ(A), ϕ̄(σ(t1, . . . , tk)) = σ(ϕ̄(t1), . . . , ϕ̄(tk)). We indicate such extensions by describing ϕ as a substitution mapping and then using ϕ without further comment. We use R+ to denote the set {w ∈ R | w ≥ 0} and R+∞ to denote the set R+ ∪ {+∞}.

Definition 3.1 (cf. (Alexandrakis and Bozapalidis, 1987)) A weighted regular tree grammar (WRTG) is a 4-tuple G = (N, Σ, P, n0) where:
1. N is a finite set of nonterminals, with n0 ∈ N the start nonterminal.
2. Σ is a ranked alphabet of input symbols, where Σ ∩ N = ∅.
3. P is a tuple (P0, π), where P0 is a finite set of productions, each production p of the form n → u, n ∈ N, u ∈ TΣ(N), and π : P0 → R+ is a weight function of the productions. We will refer to P as a finite set of weighted productions, each production p of the form n −π(p)→ u.

A production p is a chain production if it is of the form ni −w→ nj, where ni, nj ∈ N.6 A WRTG G is in normal form if each production is either a chain production or is of the form n −w→ σ(n1, . . . , nk) where σ ∈ Σ(k) and n1, . . . , nk ∈ N.

6In (Alexandrakis and Bozapalidis, 1987), chain productions are forbidden in order to avoid infinite summations. We explicitly allow such summations.

For WRTG G = (N, Σ, P, n0), s, t, u ∈ TΣ(N), n ∈ N, and p ∈ P of the form n −w→ u, we obtain a derivation step from s to t by replacing some leaf nonterminal in s labeled n with u. Formally, s ⇒p_G t if there exists some v ∈ lv(s) such that s(v) = n and s[u]v = t. We say this derivation step is leftmost if, for all v′ ∈ lv(s) where v′ < v, s(v′) ∈ Σ. We henceforth assume all derivation steps are leftmost. If, for some m ∈ N, pi ∈ P, and ti ∈ TΣ(N) for all 1 ≤ i ≤ m, n0 ⇒p1 t1 ⇒ · · · ⇒pm tm, we say the sequence d = (p1, . . . , pm) is a derivation of tm in G and that n0 ⇒* tm; the weight of d is wt(d) = π(p1) · . . . · π(pm). The weighted tree language recognized by G is the mapping LG : TΣ → R+∞ such that for every t ∈ TΣ, LG(t) is the sum of the weights of all (possibly infinitely many) derivations of t in G. A weighted tree language f : TΣ → R+∞ is recognizable if there is a WRTG G such that f = LG. We define a partial ordering ⪯ on WRTGs such that for WRTGs G1 = (N1, Σ, P1, n0) and G2 = (N2, Σ, P2, n0), we say G1 ⪯ G2 iff N1 ⊆ N2 and P1 ⊆ P2, where the weights are preserved.
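As a concrete illustration of Definition 3.1 and the derivation semantics, the following Python sketch sums the weights of all derivations of a given tree in a small normal-form, chain-production-free WRTG; the grammar and the names below are illustrative and not taken from the paper.

# Productions map a nonterminal to a list of (symbol, child_nonterminals, weight).
productions = {
    "n0": [("σ", ("n0", "n1"), 0.4), ("α", (), 0.6)],
    "n1": [("α", (), 1.0)],
}

def tree_weight(nonterminal, tree):
    """Sum over all derivations from `nonterminal` of the product of
    production weights, i.e., L_G(t) when called with the start nonterminal."""
    label, children = tree
    total = 0.0
    for symbol, kids, w in productions.get(nonterminal, []):
        if symbol == label and len(kids) == len(children):
            prod = w
            for kid_nt, child in zip(kids, children):
                prod *= tree_weight(kid_nt, child)
            total += prod
    return total

t = ("σ", (("α", ()), ("α", ())))
print(tree_weight("n0", t))   # 0.4 * 0.6 * 1.0 = 0.24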
Definition 3.2 (cf. Def. 1 of (Maletti, 2008)) A weighted extended top-down tree transducer (WXTT) is a 5-tuple M = (Q, Σ, ∆, R, q0) where:
1. Q is a finite set of states.
2. Σ and ∆ are the ranked alphabets of input and output symbols, respectively, where (Σ ∪ ∆) ∩ Q = ∅.
3. R is a tuple (R0, π), where R0 is a finite set of rules, each rule r of the form q.y → u for q ∈ Q, y ∈ TΣ(X), and u ∈ T∆(Q × X). We further require that no variable x ∈ X appears more than once in y, and that each variable appearing in u is also in y. Moreover, π : R0 → R+∞ is a weight function of the rules. As for WRTGs, we refer to R as a finite set of weighted rules, each rule r of the form q.y −π(r)→ u.

A WXTT is linear (respectively, nondeleting) if, for each rule r of the form q.y −w→ u, each x ∈ yd(y) ∩ X appears at most once (respectively, at least once) in u. We denote the class of all WXTTs as wxT and add the letters L and N to signify the subclasses of linear and nondeleting WXTT, respectively. Additionally, if y is of the form σ(x1, . . . , xk), we remove the letter "x" to signify the transducer is not extended (i.e., it is a "traditional" WTT (Fülöp and Vogler, 2009)).

For WXTT M = (Q, Σ, ∆, R, q0), s, t ∈ T∆(Q × TΣ), and r ∈ R of the form q.y −w→ u, we obtain a derivation step from s to t by replacing some leaf of s labeled with q and a tree matching y by a transformation of u, where each instance of a variable has been replaced by a corresponding subtree of the y-matching tree. Formally, s ⇒r_M t if there is a position v ∈ pos(s), a substitution mapping ϕ : X → TΣ, and a rule q.y −w→ u ∈ R such that s(v) = (q, ϕ(y)) and t = s[ϕ′(u)]v, where ϕ′ is a substitution mapping Q × X → T∆(Q × TΣ) defined such that ϕ′(q′, x) = (q′, ϕ(x)) for all q′ ∈ Q and x ∈ X. We say this derivation step is leftmost if, for all v′ ∈ lv(s) where v′ < v, s(v′) ∈ ∆. We henceforth assume all derivation steps are leftmost. If, for some s ∈ TΣ, m ∈ N, ri ∈ R, and ti ∈ T∆(Q × TΣ) for all 1 ≤ i ≤ m, (q0, s) ⇒r1 t1 ⇒ · · · ⇒rm tm, we say the sequence d = (r1, . . . , rm) is a derivation of (s, tm) in M; the weight of d is wt(d) = π(r1) · . . . · π(rm). The weighted tree transformation recognized by M is the mapping τM : TΣ × T∆ → R+∞ such that for every s ∈ TΣ and t ∈ T∆, τM(s, t) is the sum of the weights of all (possibly infinitely many) derivations of (s, t) in M. The composition of two weighted tree transformations τ : TΣ × T∆ → R+∞ and µ : T∆ × TΓ → R+∞ is the weighted tree transformation (τ ; µ) : TΣ × TΓ → R+∞ where for every s ∈ TΣ and u ∈ TΓ, (τ ; µ)(s, u) = Σt∈T∆ τ(s, t) · µ(t, u).

3.2 Applicable classes

We now consider transducer classes where recognizability is preserved under application. Table 1 presents known results for the top-down tree transducer classes described in Section 3.1. Unlike the string case, preservation of recognizability is not universal or symmetric. This is important for us, because we can only construct an application WRTG, i.e., a WRTG representing the result of application, if we can ensure that the language generated by application is in fact recognizable. Of the types under consideration, only wxLNT and wLNT preserve forward recognizability. The two classes marked as open questions and the other classes, which are superclasses of wNT, do not or are presumed not to. All subclasses of wxLT preserve backward recognizability.7 We do not consider cases where recognizability is not preserved in the remainder of this paper.

7Note that the introduction of weights limits recognizability preservation considerably. For example, (unweighted) xT preserves backward recognizability.
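The composition of weighted tree transformations defined above can be illustrated with a short Python sketch for the special case of transformations with finite support; the example transformations are invented for illustration only.

from collections import defaultdict

# Finite weighted tree transformations as dictionaries mapping
# (input_tree, output_tree) pairs to weights; the entries are placeholders.
tau = {("s1", "t1"): 0.5, ("s1", "t2"): 0.5}
mu  = {("t1", "u1"): 0.9, ("t2", "u1"): 0.1, ("t2", "u2"): 0.4}

def compose(tau, mu):
    """(tau ; mu)(s, u) = sum over intermediate t of tau(s, t) * mu(t, u)."""
    result = defaultdict(float)
    for (s, t), w1 in tau.items():
        for (t2, u), w2 in mu.items():
            if t == t2:
                result[(s, u)] += w1 * w2
    return dict(result)

print(compose(tau, mu))
# {('s1', 'u1'): 0.5*0.9 + 0.5*0.1 = 0.5, ('s1', 'u2'): 0.5*0.4 = 0.2}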
If a transducer M of a class that preserves forward recognizability is applied to a WRTG G, we can call the forward application WRTG M(G)▷, and if M preserves backward recognizability, we can call the backward application WRTG M(G)◁.

Now that we have explained the application problem in the context of weighted tree transducers and determined the classes for which application is possible, let us consider how to build forward and backward application WRTGs. Our basic approach mimics that taken for WSTs by using an embed-compose-project strategy. As in the string world, if we can embed the input in a transducer, compose with the given transducer, and project the result, we can obtain the application WRTG. Embedding a WRTG in a wLNT is a trivial operation—if the WRTG is in normal form and chain production-free,8 for every production of the form n −w→ σ(n1, . . . , nk), create a rule of the form n.σ(x1, . . . , xk) −w→ σ(n1.x1, . . . , nk.xk). Range projection of a wxLNT is also trivial—for every q ∈ Q and u ∈ T∆(Q × X) create a production of the form q −w→ u′ where u′ is formed from u by replacing all leaves of the form q.x with the leaf q, i.e., removing references to variables, and w is the sum of the weights of all rules of the form q.y → u in R.9 Domain projection for wxLT is best explained by way of example. The left side of a rule is preserved, with variable leaves replaced by their associated states from the right side. So, the rule q1.σ(γ(x1), x2) −w→ δ(q2.x2, β(α, q3.x1)) would yield the production q1 −w→ σ(γ(q3), q2) in the domain projection. However, a deleting rule such as q1.σ(x1, x2) −w→ γ(q2.x2) necessitates the introduction of a new nonterminal ⊥ that can generate all of TΣ with weight 1.

The only missing piece in our embed-compose-project strategy is composition. Algorithm 1, which is based on the declarative construction of Maletti (2006), generates the syntactic composition of a wxLT and a wLNT, a generalization of the basic composition construction of Baker (1979). It calls Algorithm 2, which determines the sequences of rules in the second transducer that match the right side of a single rule in the first transducer. Since the embedded WRTG is of type wLNT, it may be either the first or second argument provided to Algorithm 1, depending on whether the application is forward or backward. We can thus use the embed-compose-project strategy for forward application of wLNT and backward application of wxLT and wxLNT. Note that we cannot use this strategy for forward application of wxLNT, even though that class preserves recognizability.

8Without loss of generality we assume this is so, since standard algorithms exist to remove chain productions (Kuich, 1998; Ésik and Kuich, 2003; Mohri, 2009) and convert into normal form (Alexandrakis and Bozapalidis, 1987).
9Finitely many such productions may be formed.
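A minimal Python sketch of the embedding step just described, assuming an illustrative tuple representation for productions and rules (none of these names come from the paper). Each normal-form, chain-production-free production n −w→ σ(n1, . . . , nk) becomes the identity rule n.σ(x1, . . . , xk) −w→ σ(n1.x1, . . . , nk.xk).

def embed(wrtg_productions):
    """wrtg_productions: list of (n, symbol, (n1, ..., nk), weight)."""
    rules = []
    for n, symbol, children, weight in wrtg_productions:
        variables = tuple(f"x{i}" for i in range(1, len(children) + 1))
        lhs = (n, symbol, variables)                       # state n, pattern σ(x1..xk)
        rhs = (symbol, tuple(zip(children, variables)))    # σ(n1.x1, ..., nk.xk)
        rules.append((lhs, rhs, weight))
    return rules

g = [("g0", "σ", ("g0", "g1"), 0.4), ("g0", "α", (), 0.6), ("g1", "α", (), 1.0)]
for rule in embed(g):
    print(rule)

Range projection would then go the other way, dropping the variable references on rule right sides and summing the weights of rules that collapse to the same production.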
Algorithm 1 COMPOSE
1: inputs
2:   wxLT M1 = (Q1, Σ, ∆, R1, q10)
3:   wLNT M2 = (Q2, ∆, Γ, R2, q20)
4: outputs
5:   wxLT M3 = ((Q1 × Q2), Σ, Γ, R3, (q10, q20)) such that τM3 = (τM1 ; τM2)
6: complexity
7:   O(|R1| · max(|R2|^size(ũ), |Q2|)), where ũ is the largest right side tree in any rule in R1
8: Let R3 be of the form (R3,0, π)
9: R3 ← (∅, ∅)
10: Ξ ← {(q10, q20)} {seen states}
11: Ψ ← {(q10, q20)} {pending states}
12: while Ψ ≠ ∅ do
13:   (q1, q2) ← any element of Ψ
14:   Ψ ← Ψ \ {(q1, q2)}
15:   for all (q1.y −w1→ u) ∈ R1 do
16:     for all (z, w2) ∈ COVER(u, M2, q2) do
17:       for all (q, x) ∈ yd(z) ∩ ((Q1 × Q2) × X) do
18:         if q ∉ Ξ then
19:           Ξ ← Ξ ∪ {q}
20:           Ψ ← Ψ ∪ {q}
21:       r ← ((q1, q2).y → z)
22:       R3,0 ← R3,0 ∪ {r}
23:       π(r) ← π(r) + (w1 · w2)
24: return M3

Algorithm 2 COVER
1: inputs
2:   u ∈ T∆(Q1 × X)
3:   wT M2 = (Q2, ∆, Γ, R2, q20)
4:   state q2 ∈ Q2
5: outputs
6:   set of pairs (z, w) with z ∈ TΓ((Q1 × Q2) × X) formed by one or more successful runs on u by rules in R2, starting from q2, and w ∈ R+∞ the sum of the weights of all such runs
7: complexity
8:   O(|R2|^size(u))
9: if u(ε) is of the form (q1, x) ∈ Q1 × X then
10:   zinit ← ((q1, q2), x)
11: else
12:   zinit ← ⊥
13: Πlast ← {(zinit, {((ε, ε), q2)}, 1)}
14: for all v ∈ pos(u) such that u(v) ∈ ∆(k) for some k ≥ 0, in prefix order, do
15:   Πv ← ∅
16:   for all (z, θ, w) ∈ Πlast do
17:     for all v′ ∈ lv(z) such that z(v′) = ⊥ do
18:       for all (θ(v, v′).u(v)(x1, . . . , xk) −w′→ h) ∈ R2 do
19:         θ′ ← θ
20:         Form substitution mapping ϕ : (Q2 × X) → TΓ(((Q1 × Q2) × X) ∪ {⊥}).
21:         for i = 1 to k do
22:           for all v′′ ∈ pos(h) such that h(v′′) = (q2′, xi) for some q2′ ∈ Q2 do
23:             θ′(vi, v′v′′) ← q2′
24:             if u(vi) is of the form (q1, x) ∈ Q1 × X then
25:               ϕ(q2′, xi) ← ((q1, q2′), x)
26:             else
27:               ϕ(q2′, xi) ← ⊥
28:         Πv ← Πv ∪ {(z[ϕ(h)]v′, θ′, w · w′)}
29:   Πlast ← Πv
30: Z ← {z | (z, θ, w) ∈ Πlast}
31: return {(z, Σ(z,θ,w)∈Πlast w) | z ∈ Z}

4 Application of tree transducer cascades

What about the case of an input WRTG and a cascade of tree transducers? We will revisit the three strategies for accomplishing application discussed above for the string case. In order for offline composition to be a viable strategy, the transducers in the cascade must be closed under composition. Unfortunately, of the classes that preserve recognizability, only wLNT is closed under composition (Gécseg and Steinby, 1984; Baker, 1979; Maletti et al., 2009; Fülöp and Vogler, 2009).

However, the general lack of composability of tree transducers does not preclude us from conducting forward application of a cascade. We revisit the bucket brigade approach, which in Section 2 appeared to be little more than a choice of composition order. As discussed previously, application of a single transducer involves an embedding, a composition, and a projection. The embedded WRTG is in the class wLNT, and the projection forms another WRTG. As long as every transducer in the cascade can be composed with a wLNT to its left or right, depending on the application type, application of a cascade is possible. Note that this embed-compose-project process is somewhat more burdensome than in the string case. For strings, application is obtained by a single embedding, a series of compositions, and a single projection, whereas application for trees is obtained by a series of (embed, compose, project) operations.
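The bucket brigade for trees can be summarized as a schematic Python driver that repeats the embed-compose-project cycle once per transducer in the cascade. The helper functions it receives (embed, compose, range_projection, trim) are assumed rather than implemented here, and their names are illustrative.

def forward_apply_cascade(input_wrtg, cascade, embed, compose, range_projection, trim):
    """Apply each transducer in turn: embed the current WRTG as an identity
    wLNT, compose it with the next transducer, and project back to a WRTG."""
    current = input_wrtg
    for transducer in cascade:
        embedded = embed(current)                   # WRTG -> identity wLNT
        composed = compose(embedded, transducer)    # e.g., an Algorithm 1-style COMPOSE
        current = trim(range_projection(composed))  # drop useless states and productions
    return current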
4.1 On-the-fly algorithms

We next consider on-the-fly algorithms for application. Similar to the string case, an on-the-fly approach is driven by a calling algorithm that periodically needs to know the productions in a WRTG with a common left side nonterminal. The embed-compose-project approach produces an entire application WRTG before any inference algorithm is run. In order to admit an on-the-fly approach we describe algorithms that only generate those productions in a WRTG that have a given left nonterminal. In this section we extend Definition 3.1 as follows: a WRTG is a 6-tuple G = (N, Σ, P, n0, M, G) where N, Σ, P, and n0 are defined as in Definition 3.1, and either M = G = ∅,10 or M is a wxLNT and G is a normal form, chain production-free WRTG such that G ⪯ M(G)▷. In the latter case, G is a stand-in for M(G)▷, analogous to the stand-ins for WSAs and WSTs described in Section 2.

10In which case the definition is functionally unchanged from before.

[Table 1: Preservation of forward and backward recognizability for various classes of top-down tree transducers. Here and elsewhere, the following abbreviations apply: w = weighted, x = extended LHS, L = linear, N = nondeleting, OQ = open question. Square brackets include a superposition of classes. For example, w[x]T signifies both wxT and wT.]

Algorithm 3 PRODUCE
1: inputs
2:   WRTG Gin = (Nin, ∆, Pin, n0, M, G) such that M = (Q, Σ, ∆, R, q0) is a wxLNT and G = (N, Σ, P, n0′, M′, G′) is a WRTG in normal form with no chain productions
3:   nin ∈ Nin
4: outputs
5:   WRTG Gout = (Nout, ∆, Pout, n0, M, G), such that Gin ⪯ Gout and (nin −w→ u) ∈ Pout ⇔ (nin −w→ u) ∈ M(G)▷
6: complexity
7:   O(|R| |P|^size(ỹ)), where ỹ is the largest left side tree in any rule in R
8: if Pin contains productions of the form nin −w→ u then
9:   return Gin
10: Nout ← Nin
11: Pout ← Pin
12: Let nin be of the form (n, q), where n ∈ N and q ∈ Q.
13: for all (q.y −w1→ u) ∈ R do
14:   for all (θ, w2) ∈ REPLACE(y, G, n) do
15:     Form substitution mapping ϕ : Q × X → T∆(N × Q) such that, for all v ∈ yd(y) and q′ ∈ Q, if there exist n′ ∈ N and x ∈ X such that θ(v) = n′ and y(v) = x, then ϕ(q′, x) = (n′, q′).
16:     p′ ← ((n, q) −w1·w2→ ϕ(u))
17:     for all p ∈ NORM(p′, Nout) do
18:       Let p be of the form n0 −w→ δ(n1, . . . , nk) for δ ∈ ∆(k)
19:       Nout ← Nout ∪ {n0, . . . , nk}
20:       Pout ← Pout ∪ {p}
21: return CHAIN-REM(Gout)

Algorithm 3, PRODUCE, takes as input a WRTG Gin = (Nin, ∆, Pin, n0, M, G) and a desired nonterminal nin and returns another WRTG, Gout, that is different from Gin in that it has more productions, specifically those beginning with nin that are in M(G)▷. Algorithms using stand-ins should call PRODUCE to ensure the stand-in they are using has the desired productions beginning with the specific nonterminal. Note, then, that PRODUCE obtains the effect of forward application in an on-the-fly manner.11 It makes calls to REPLACE, which is presented in Algorithm 4, as well as to a NORM algorithm that ensures normal form by replacing a single production not in normal form with several normal-form productions that can be combined together (Alexandrakis and Bozapalidis, 1987) and a CHAIN-REM algorithm that replaces a WRTG containing chain productions with an equivalent WRTG that does not (Mohri, 2009).

11Note further that it allows forward application of class wxLNT, something the embed-compose-project approach did not allow.
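A minimal Python sketch of the stand-in behavior that PRODUCE provides: productions of the application WRTG are generated only for the nonterminals a caller asks about, and are memoized. The build_productions callback stands in for the real work PRODUCE does (matching rule left sides against the argument WRTG via REPLACE); its name and signature are illustrative, not from the paper.

class StandInWRTG:
    def __init__(self, start, build_productions):
        self.start = start
        self.build_productions = build_productions   # nonterminal -> [(weight, rhs)]
        self._productions = {}                       # memoized per-nonterminal results

    def productions_for(self, nonterminal):
        if nonterminal not in self._productions:
            self._productions[nonterminal] = self.build_productions(nonterminal)
        return self._productions[nonterminal]

A 1-best (or k-best) search over the application WRTG would simply call productions_for(n) whenever it needs the productions with left side n, so only the part of the result it actually explores is ever built.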
Algorithm 4 REPLACE
1: inputs
2:   y ∈ TΣ(X)
3:   WRTG G = (N, Σ, P, n0, M, G) in normal form, with no chain productions
4:   n ∈ N
5: outputs
6:   set Π of pairs (θ, w) where θ is a mapping pos(y) → N and w ∈ R+∞, each pair indicating a successful run on y by productions in G, starting from n, and w is the weight of the run
7: complexity
8:   O(|P|^size(y))
9: Πlast ← {({(ε, n)}, 1)}
10: for all v ∈ pos(y) such that y(v) ∉ X, in prefix order, do
11:   Πv ← ∅
12:   for all (θ, w) ∈ Πlast do
13:     if M ≠ ∅ and G ≠ ∅ then
14:       G ← PRODUCE(G, θ(v))
15:     for all (θ(v) −w′→ y(v)(n1, . . . , nk)) ∈ P do
16:       Πv ← Πv ∪ {(θ ∪ {(vi, ni) | 1 ≤ i ≤ k}, w · w′)}
17:   Πlast ← Πv
18: return Πlast

Algorithm 5 MAKE-EXPLICIT
1: inputs
2:   WRTG G = (N, Σ, P, n0, M, G) in normal form
3: outputs
4:   WRTG G′ = (N′, Σ, P′, n0, M, G), in normal form, such that if M ≠ ∅ and G ≠ ∅, LG′ = LM(G)▷, and otherwise G′ = G
5: complexity
6:   O(|P′|)
7: G′ ← G
8: Ξ ← {n0} {seen nonterminals}
9: Ψ ← {n0} {pending nonterminals}
10: while Ψ ≠ ∅ do
11:   n ← any element of Ψ
12:   Ψ ← Ψ \ {n}
13:   if M ≠ ∅ and G ≠ ∅ then
14:     G′ ← PRODUCE(G′, n)
15:   for all (n −w→ σ(n1, . . . , nk)) ∈ P′ do
16:     for i = 1 to k do
17:       if ni ∉ Ξ then
18:         Ξ ← Ξ ∪ {ni}
19:         Ψ ← Ψ ∪ {ni}
20: return G′

Figure 2: Forward application through a cascade of tree transducers using an on-the-fly method.
(a) Input WRTG G: g0 −w1→ σ(g0, g1); g0 −w2→ α; g1 −w3→ α
(b) First transducer MA in the cascade: a0.σ(x1, x2) −w4→ σ(a0.x1, a1.x2); a0.σ(x1, x2) −w5→ ψ(a2.x1, a1.x2); a0.α −w6→ α; a1.α −w7→ α; a2.α −w8→ ρ
(c) Second transducer MB in the cascade: b0.σ(x1, x2) −w9→ σ(b0.x1, b0.x2); b0.α −w10→ α
(d) Productions of MA(G)▷ built as a consequence of building the complete MB(MA(G)▷)▷: g0a0 −w1·w4→ σ(g0a0, g1a1); g0a0 −w1·w5→ ψ(g0a2, g1a1); g0a0 −w2·w6→ α; g1a1 −w3·w7→ α
(e) Complete MB(MA(G)▷)▷: g0a0b0 −w1·w4·w9→ σ(g0a0b0, g1a1b0); g0a0b0 −w2·w6·w10→ α; g1a1b0 −w3·w7·w10→ α

As an example of stand-in construction, consider the invocation PRODUCE(G1, g0a0), where G1 = ({g0a0}, {σ, ψ, α, ρ}, ∅, g0a0, MA, G), G is in Figure 2a,12 and MA is in Figure 2b. The stand-in WRTG that is output contains the first three of the four productions in Figure 2d. To demonstrate the use of on-the-fly application in a cascade, we next show the effect of PRODUCE when used with the cascade G ◦ MA ◦ MB, where MB is in Figure 2c. Our driving algorithm in this case is Algorithm 5, MAKE-EXPLICIT, which simply generates the full application WRTG using calls to PRODUCE. The input to MAKE-EXPLICIT is G2 = ({g0a0b0}, {σ, α}, ∅, g0a0b0, MB, G1).13 MAKE-EXPLICIT calls PRODUCE(G2, g0a0b0). PRODUCE then seeks to cover b0.σ(x1, x2) −w9→ σ(b0.x1, b0.x2) with productions from G1, which is a stand-in for MA(G)▷. At line 14 of REPLACE, G1 is improved so that it has the appropriate productions. The productions of MA(G)▷ that must be built to form the complete MB(MA(G)▷)▷ are shown in Figure 2d. The complete MB(MA(G)▷)▷ is shown in Figure 2e. Note that because we used this on-the-fly approach, we were able to avoid building all the productions in MA(G)▷; in particular we did not build g0a2 −w2·w8→ ρ, while a bucket brigade approach would have built this production. We have also designed an analogous on-the-fly PRODUCE algorithm for backward application on linear WTT.

12By convention the initial nonterminal and state are listed first in graphical depictions of WRTGs and WXTTs.
13Note that G2 is the initial stand-in for MB(MA(G)▷)▷, since G1 is the initial stand-in for MA(G)▷.
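A schematic Python analogue of MAKE-EXPLICIT over a lazily generated stand-in like the one sketched above: starting from the initial nonterminal, it requests productions on demand and follows the nonterminals they mention, so only the reachable part of the application WRTG is ever built. The rhs_nonterminals helper, which lists the nonterminals occurring in a right-hand side, is assumed.

def make_explicit(stand_in, rhs_nonterminals):
    """Expand a stand-in WRTG into an explicit mapping nonterminal -> productions."""
    seen = {stand_in.start}
    pending = [stand_in.start]
    explicit = {}
    while pending:
        n = pending.pop()
        explicit[n] = stand_in.productions_for(n)     # on-demand generation
        for weight, rhs in explicit[n]:
            for m in rhs_nonterminals(rhs):            # follow reachable nonterminals
                if m not in seen:
                    seen.add(m)
                    pending.append(m)
    return explicit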
We have now defined several on-the-fly and bucket brigade algorithms, and also discussed the possibility of embed-compose-project and offline composition strategies to application of cascades of tree transducers. Tables 2a and 2b summarize the available methods of forward and backward application of cascades for recognizability-preserving tree transducer classes.

[Table 2: Transducer types and available methods of forward and backward application of a cascade. oc = offline composition, bb = bucket brigade, otf = on the fly. (a) Forward application; (b) Backward application.]

5 Decoding Experiments

The main purpose of this paper has been to present novel algorithms for performing application. However, it is important to demonstrate these algorithms on real data. We thus demonstrate bucket-brigade and on-the-fly backward application on a typical NLP task cast as a cascade of wLNT. We adapt the Japanese-to-English translation model of Yamada and Knight (2001) by transforming it from an English-tree-to-Japanese-string model to an English-tree-to-Japanese-tree model. The Japanese trees are unlabeled, meaning they have syntactic structure but all nodes are labeled "X". We then cast this modified model as a cascade of LNT tree transducers. Space does not permit a detailed description, but some example rules are in Figure 3. The rotation transducer R, a sample of which is in Figure 3a, has 6,453 rules, the insertion transducer I, Figure 3b, has 8,122 rules, and the translation transducer T, Figure 3c, has 37,311 rules.

Figure 3: Example rules from transducers used in the decoding experiment. j1 and j2 are Japanese words.
(a) Rotation rules: rJJ.JJ(x1, x2, x3) → JJ(rDT.x1, rJJ.x2, rVB.x3); rVB.VB(x1, x2, x3) → VB(rNNPS.x1, rNN.x3, rVB.x2)
(b) Insertion rules: iVB.NN(x1, x2) → NN(INS, iNN.x1, iNN.x2); iVB.NN(x1, x2) → NN(iNN.x1, iNN.x2); iVB.NN(x1, x2) → NN(iNN.x1, iNN.x2, INS)
(c) Translation rules: t.VB(x1, x2, x3) → X(t.x1, t.x2, t.x3); t."gentle" → "gentle"; t."gentleman" → j1; t."gentleman" → EPS; t.INS → j1; t.INS → j2

We add an English syntax language model L to the cascade of transducers just described to better simulate an actual machine translation decoding task. The language model is cast as an identity WTT and thus fits naturally into the experimental framework. In our experiments we try several different language models to demonstrate varying performance of the application algorithms. The most realistic language model is a PCFG. Each rule captures the probability of a particular sequence of child labels given a parent label. This model has 7,765 rules.
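As an illustration of casting a PCFG language model as an identity transducer, the Python sketch below turns each PCFG rule into an identity rule weighted by the rule probability. The exact construction used in the experiments is not spelled out in the paper, and the rule representation and names here are illustrative only.

def pcfg_to_identity_rules(pcfg_rules):
    """pcfg_rules: list of (parent, (child1, ..., childk), probability).
    Each rule becomes qParent.Parent(x1..xk) -p-> Parent(qChild1.x1, ..., qChildk.xk)."""
    rules = []
    for parent, children, p in pcfg_rules:
        variables = tuple(f"x{i}" for i in range(1, len(children) + 1))
        lhs = (f"q{parent}", parent, variables)
        rhs = (parent, tuple((f"q{c}", v) for c, v in zip(children, variables)))
        rules.append((lhs, rhs, p))
    return rules

print(pcfg_to_identity_rules([("S", ("NP", "VP"), 0.9)]))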
To demonstrate more extreme cases of the usefulness of the on-the-fly approach, we build a language model that recognizes exactly the 2,087 trees in the training corpus, each with equal weight. It has 39,455 rules. Finally, to be ultra-specific, we include a form of the "specific" language model just described, but only allow the English counterpart of the particular Japanese sentence being decoded in the language.

The goal in our experiments is to apply a single tree t backward through the cascade L ◦ R ◦ I ◦ T ◦ t and find the 1-best path in the application WRTG. We evaluate the speed of each approach: bucket brigade and on-the-fly. The algorithm we use to obtain the 1-best path is a modification of the k-best algorithm of Pauls and Klein (2009). Our algorithm finds the 1-best path in a WRTG and admits an on-the-fly approach.

The results of the experiments are shown in Table 3. As can be seen, on-the-fly application is generally faster than the bucket brigade, about double the speed per sentence in the traditional experiment that uses an English PCFG language model. The results for the other two language models demonstrate more keenly the potential advantage that an on-the-fly approach provides—the simultaneous incorporation of information from all models allows application to be done more effectively than if each information source is considered in sequence. In the "exact" case, where a very large language model that simply recognizes each of the 2,087 trees in the training corpus is used, the final application is so large that it overwhelms the resources of a 4 GB MacBook Pro, while the on-the-fly approach does not suffer from this problem. The "1-sent" case is presented to demonstrate the ripple effect caused by using on-the-fly. In the other two cases, a very large language model generally overwhelms the timing statistics, regardless of the method being used. But a language model that represents exactly one sentence is very small, and thus the effects of simultaneous inference are readily apparent—the time to retrieve the 1-best sentence is reduced by two orders of magnitude in this experiment.

[Table 3: Timing results to obtain 1-best from application through a weighted tree transducer cascade, using on-the-fly vs. bucket brigade backward application techniques. pcfg = model recognizes any tree licensed by a pcfg built from observed data, exact = model recognizes each of 2,000+ trees with equal weight, 1-sent = model recognizes exactly one tree.]

6 Conclusion

We have presented algorithms for forward and backward application of weighted tree transducer cascades, including on-the-fly variants, and demonstrated the benefit of an on-the-fly approach to application. We note that a more formal approach to application of WTTs is being developed, independent from these efforts, by Fülöp et al. (2010).

Acknowledgments

We are grateful for extensive discussions with Andreas Maletti. We also appreciate the insights and advice of David Chiang, Steve DeNeefe, and others at ISI in the preparation of this work. Jonathan May and Kevin Knight were supported by NSF grants IIS-0428020 and IIS-0904684. Heiko Vogler was supported by DFG VO 1011/5-1.

References

Athanasios Alexandrakis and Symeon Bozapalidis. 1987. Weighted grammars and Kleene's theorem. Information Processing Letters, 24(1):1–4.
Brenda S. Baker. 1979. Composition of top-down and bottom-up tree transductions. Information and Control, 41(2):186–213.
Zoltán Ésik and Werner Kuich. 2003. Formal tree series. Journal of Automata, Languages and Combinatorics, 8(2):219–285.
Zoltán Fülöp and Heiko Vogler. 2009. Weighted tree automata and tree transducers. In Manfred Droste, Werner Kuich, and Heiko Vogler, editors, Handbook of Weighted Automata, chapter 9, pages 313–404. Springer-Verlag.
Zoltán Fülöp, Andreas Maletti, and Heiko Vogler. 2010. Backward and forward application of weighted extended tree transducers. Unpublished manuscript.
Ferenc Gécseg and Magnus Steinby. 1984. Tree Automata. Akadémiai Kiadó, Budapest.
Liang Huang and David Chiang. 2005. Better k-best parsing. In Harry Bunt, Robert Malouf, and Alon Lavie, editors, Proceedings of the Ninth International Workshop on Parsing Technologies (IWPT), pages 53–64, Vancouver, October. Association for Computational Linguistics.
Werner Kuich. 1998. Formal power series over trees. In Symeon Bozapalidis, editor, Proceedings of the 3rd International Conference on Developments in Language Theory (DLT), pages 61–101, Thessaloniki, Greece. Aristotle University of Thessaloniki.
Werner Kuich. 1999. Tree transducers and formal tree series. Acta Cybernetica, 14:135–149.
Andreas Maletti, Jonathan Graehl, Mark Hopkins, and Kevin Knight. 2009. The power of extended top-down tree transducers. SIAM Journal on Computing, 39(2):410–430.
Andreas Maletti. 2006. Compositions of tree series transformations. Theoretical Computer Science, 366:248–271.
Andreas Maletti. 2008. Compositions of extended top-down tree transducers. Information and Computation, 206(9–10):1187–1196.
Andreas Maletti. 2009. Personal communication.
Mehryar Mohri, Fernando C. N. Pereira, and Michael Riley. 2000. The design principles of a weighted finite-state transducer library. Theoretical Computer Science, 231:17–32.
Mehryar Mohri. 1997. Finite-state transducers in language and speech processing. Computational Linguistics, 23(2):269–312.
Mehryar Mohri. 2009. Weighted automata algorithms. In Manfred Droste, Werner Kuich, and Heiko Vogler, editors, Handbook of Weighted Automata, chapter 6, pages 213–254. Springer-Verlag.
Adam Pauls and Dan Klein. 2009. K-best A* parsing. In Keh-Yih Su, Jian Su, Janyce Wiebe, and Haizhou Li, editors, Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 958–966, Suntec, Singapore, August. Association for Computational Linguistics.
Fernando Pereira and Michael Riley. 1997. Speech recognition by composition of weighted finite automata. In Emmanuel Roche and Yves Schabes, editors, Finite-State Language Processing, chapter 15, pages 431–453. MIT Press, Cambridge, MA.
William A. Woods. 1980. Cascaded ATN grammars. American Journal of Computational Linguistics, 6(1):1–12.
Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, pages 523–530, Toulouse, France, July. Association for Computational Linguistics.

2 0.70782912 21 acl-2010-A Tree Transducer Model for Synchronous Tree-Adjoining Grammars

Author: Andreas Maletti

Abstract: A characterization of the expressive power of synchronous tree-adjoining grammars (STAGs) in terms of tree transducers (or equivalently, synchronous tree substitution grammars) is developed. Essentially, a STAG corresponds to an extended tree transducer that uses explicit substitution in both the input and output. This characterization allows the easy integration of STAG into toolkits for extended tree transducers. Moreover, the applicability of the characterization to several representational and algorithmic problems is demonstrated.

3 0.61467218 97 acl-2010-Efficient Path Counting Transducers for Minimum Bayes-Risk Decoding of Statistical Machine Translation Lattices

Author: Graeme Blackwood ; Adria de Gispert ; William Byrne

Abstract: This paper presents an efficient implementation of linearised lattice minimum Bayes-risk decoding using weighted finite state transducers. We introduce transducers to efficiently count lattice paths containing n-grams and use these to gather the required statistics. We show that these procedures can be implemented exactly through simple transformations of word sequences to sequences of n-grams. This yields a novel implementation of lattice minimum Bayes-risk decoding which is fast and exact even for very large lattices.

4 0.57924902 186 acl-2010-Optimal Rank Reduction for Linear Context-Free Rewriting Systems with Fan-Out Two

Author: Benoit Sagot ; Giorgio Satta

Abstract: Linear Context-Free Rewriting Systems (LCFRSs) are a grammar formalism capable of modeling discontinuous phrases. Many parsing applications use LCFRSs where the fan-out (a measure of the discontinuity of phrases) does not exceed 2. We present an efficient algorithm for optimal reduction of the length of production right-hand side in LCFRSs with fan-out at most 2. This results in asymptotical running time improvement for known parsing algorithms for this class.

5 0.56398469 67 acl-2010-Computing Weakest Readings

Author: Alexander Koller ; Stefan Thater

Abstract: We present an efficient algorithm for computing the weakest readings of semantically ambiguous sentences. A corpus-based evaluation with a large-scale grammar shows that our algorithm reduces over 80% of sentences to one or two readings, in negligible runtime, and thus makes it possible to work with semantic representations derived by deep large-scale grammars.

6 0.56383073 116 acl-2010-Finding Cognate Groups Using Phylogenies

7 0.39225423 66 acl-2010-Compositional Matrix-Space Models of Language

8 0.38257107 234 acl-2010-The Use of Formal Language Models in the Typology of the Morphology of Amerindian Languages

9 0.37783161 53 acl-2010-Blocked Inference in Bayesian Tree Substitution Grammars

10 0.3678897 182 acl-2010-On the Computational Complexity of Dominance Links in Grammatical Formalisms

11 0.36614633 117 acl-2010-Fine-Grained Genre Classification Using Structural Learning Algorithms

12 0.34491035 75 acl-2010-Correcting Errors in a Treebank Based on Synchronous Tree Substitution Grammar

13 0.34204286 40 acl-2010-Automatic Sanskrit Segmentizer Using Finite State Transducers

14 0.32442531 255 acl-2010-Viterbi Training for PCFGs: Hardness Results and Competitiveness of Uniform Initialization

15 0.32276583 217 acl-2010-String Extension Learning

16 0.32241994 46 acl-2010-Bayesian Synchronous Tree-Substitution Grammar Induction and Its Application to Sentence Compression

17 0.31753573 29 acl-2010-An Exact A* Method for Deciphering Letter-Substitution Ciphers

18 0.30315211 222 acl-2010-SystemT: An Algebraic Approach to Declarative Information Extraction

19 0.29746476 228 acl-2010-The Importance of Rule Restrictions in CCG

20 0.28965172 103 acl-2010-Estimating Strictly Piecewise Distributions


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(4, 0.024), (7, 0.013), (14, 0.04), (25, 0.043), (33, 0.029), (39, 0.031), (41, 0.341), (42, 0.018), (44, 0.013), (59, 0.071), (73, 0.039), (76, 0.011), (78, 0.056), (83, 0.047), (84, 0.039), (98, 0.089)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.84151387 190 acl-2010-P10-5005 k2opt.pdf

Author: empty-author

Abstract: unkown-abstract

same-paper 2 0.76088881 95 acl-2010-Efficient Inference through Cascades of Weighted Tree Transducers

Author: Jonathan May ; Kevin Knight ; Heiko Vogler

Abstract: Weighted tree transducers have been proposed as useful formal models for representing syntactic natural language processing applications, but there has been little description of inference algorithms for these automata beyond formal foundations. We give a detailed description of algorithms for application of cascades of weighted tree transducers to weighted tree acceptors, connecting formal theory with actual practice. Additionally, we present novel on-the-fly variants of these algorithms, and compare their performance on a syntax machine translation cascade based on (Yamada and Knight, 2001). 1 Motivation Weighted finite-state transducers have found recent favor as models of natural language (Mohri, 1997). In order to make actual use of systems built with these formalisms we must first calculate the set of possible weighted outputs allowed by the transducer given some input, which we call forward application, or the set of possible weighted inputs given some output, which we call backward application. After application we can do some inference on this result, such as determining its k highest weighted elements. We may also want to divide up our problems into manageable chunks, each represented by a transducer. As noted by Woods (1980), it is easier for designers to write several small transducers where each performs a simple transformation, rather than painstakingly construct a single complicated device. We would like to know, then, the result of transformation of input or output by a cascade of transducers, one operating after the other. As we will see, there are various strategies for approaching this problem. We will consider offline composition, bucket brigade applica- tion, and on-the-fly application. Application of cascades of weighted string transducers (WSTs) has been well-studied (Mohri, Heiko Vogler Technische Universit a¨t Dresden Institut f u¨r Theoretische Informatik 01062 Dresden, Germany he iko .vogle r@ tu-dre s den .de 1997). Less well-studied but of more recent interest is application of cascades of weighted tree transducers (WTTs). We tackle application of WTT cascades in this work, presenting: • • • explicit algorithms for application of WTT casceaxpdelisc novel algorithms for on-the-fly application of nWoTvTe lca alscgoardieths,m mansd f experiments comparing the performance of tehxepseer iamlgeonrtisthm cos.m 2 Strategies for the string case Before we discuss application of WTTs, it is helpful to recall the solution to this problem in the WST domain. We recall previous formal presentations of WSTs (Mohri, 1997) and note informally that they may be represented as directed graphs with designated start and end states and edges labeled with input symbols, output symbols, and weights.1 Fortunately, the solution for WSTs is practically trivial—we achieve application through a series of embedding, composition, and projection operations. Embedding is simply the act of representing a string or regular string language as an identity WST. Composition of WSTs, that is, generating a single WST that captures the transformations of two input WSTs used in sequence, is not at all trivial, but has been well covered in, e.g., (Mohri, 2009), where directly implementable algorithms can be found. Finally, projection is another trivial operation—the domain or range language can be obtained from a WST by ignoring the output or input symbols, respectively, on its arcs, and summing weights on otherwise identical arcs. 
By embedding an input, composing the result with the given WST, and projecting the result, forward application is accomplished.2 We are then left with a weighted string acceptor (WSA), essentially a weighted, labeled graph, which can be traversed R+1∪W {e+ as∞su}m,te ha thtro thuegh woeuitgh t hi osf p aa ppaetrh t ihsa cta wlceuilgahtetds a asre th ien prod∪uct { +of∞ ∞th}e, wtheaitgh thtes wofe i gtsh etd ogfes a, panatdh t ihsat c athlceu lwateeigdh ats so tfh ae (not necessarily finite) set T of paths is calculated as the sum of the weights of the paths of T. 2For backward applications, the roles of input and output are simply exchanged. 1058 ProceedingUsp opfs thaela 4, 8Stwhe Adnennu,a 1l1- M16ee Jtiunlgy o 2f0 t1h0e. A ?c ss2o0c1ia0ti Aosnso focria Ctioonm fpourta Ctoiomnpault Laitniognuaislt Licisn,g puaigsetisc 1s058–1066, a:a/1a:a/1 b:b/a.:5b/.1aa::ba//. 49Ea:ba/:a.6/.5 (a) InpAut strain:ag/ “a aB” emab:ae/d1dedC in an D(b) first Wa:b:aS/T./4. in cascadeE :c/.6Fa:d/.4 (c//).. second WFST in cascbaad::dde// identityA WSaT: (d)b:Oda/c:f.d3Al./6ai5n.:0c3e/.0ca:o7m/1pao :sBcdi/t.32o584n6a :p/1roa:cCa/h:db.3:/c6d.2/46.35b:a(/be.a)5 :BbA:/u.D1c/keDtbrigBadEDae:ba/p.49proach:ECDa:b/ a.:6b/5.(f)Rdc/Ae./0sD.u35b7aFl4t:c o/f.76ofcBld/iE.nD53e4F6orFbcud/.c0k7e3tapbCpd:cDli/cF.a12t38ion (g)dcI/n.A3i5tD46dcaF/lD.0oF7n3-thea f:lcBdy/E .b(12h:d)8c/O.3A5nD-4dtchF/.e0C -7f3EDlyFas:t /n.B9d-EDinF afterc /x.d3cp/6.l1o28ring C(i)ECDOcFE/.nA5-tD4hdcF/e.0- fl37ysdcta/n.35dB64-iEDnF afteBrc/Eb.3eFs6dtc/ pd./1a2.t83h46 asbeC nEDF found Cobm:bdap:c/:o/d.3/.sa6.5e05D:c tF.h0e7 tranaaaa:sd:c: cdd// /. u12..53c2846ers stacnd//d..A53-i46nDdc f.o.00r73 (f) Appaal:A:ybaD W..19ST (b)BB tEoD WST (a) aftercd p/..0/0..rDo5373Fj46ectionc o.12u28tcdg//o.A.35iDn64dgcF// e0C0dC73gEDeFFs of sBtEaDFteF ADdFc// FiBgBuEDFreF 1: Tc/ .h2.34dcr/6/e. 1e2 8 d/ iA. f53fD64edcF/ r. 0 eCC37nEDtF appBroBEaDFcFhesdc ./t2o.3d4c a..12p28plicatioCCnED tF/ h. A53r6D4ocd/uF/. g0073h cascBBaEdDFeFs ocdf/ .3W264dcS//.1.T282s. bydc w//..53e46ll-known aElFgorithdc/m./2.34s6 to e//.f.5f3i46cieCnEtFly finBdE the kd-c/ bedst/ p6aths. Because WSTs can be freely composed, extending application to operate on a cascade of WSTs is fairly trivial. The only question is one of composition order: whether to initially compose the cascade into a single transducer (an approach we call offline composition) or to compose the initial embedding with the first transducer, trim useless states, compose the result with the second, and so on (an approach we call bucket brigade). The appropriate strategy generally depends on the structure of the individual transducers. A third approach builds the result incrementally, as dictated by some algorithm that requests information about it. Such an approach, which we call on-the-fly, was described in (Pereira and Riley, 1997; Mohri, 2009; Mohri et al., 2000). If we can efficiently calculate the outgoing edges of a state of the result WSA on demand, without calculating all edges in the entire machine, we can maintain a stand-in for the result structure, a machine consisting at first of only the start state of the true result. As a calling algorithm (e.g., an implementation of Dijkstra’s algorithm) requests information about the result graph, such as the set of outgoing edges from a state, we replace the current stand-in with a richer version by adding the result of the request. The on-the-fly approach has a distinct advantage over the other two methods in that the entire result graph need not be built. 
A graphical representation of all three methods is presented in Figure 1. 3 AppCliEcdcF//a..53ti64on of treeB tFranscd/d./3.u264cers Now let us revisit these strategies in the setting of trees and tree transducers. Imagine we have a tree or set of trees as input that can be represented as a weighted regular tree grammar3 (WRTG) and a WTT that can transform that input with some weight. We would like to know the k-best trees the WTT can produce as output for that input, along with their weights. We already know of several methods for acquiring k-best trees from a WRTG (Huang and Chiang, 2005; Pauls and Klein, 2009), so we then must ask if, analogously to the string case, WTTs preserve recognizability4 and we can form an application WRTG. Before we begin, however, we must define WTTs and WRTGs. 3.1 Preliminaries5 A ranked alphabet is a finite set Σ such that every member σ ∈ Σ has a rank rk(σ) ∈ N. We cerayll ⊆ Σ, ∈k ∈ aNs t ahe r set rokf tσho)s ∈e σ ∈ Σe such that r⊆k(σ Σ), k= ∈k. NTh teh ese ste otf o vfa trhioasbele σs σi s∈ d eΣnoted X = {x1, x2, . . .} and is assumed to be disjnooitnetd df Xrom = any rank,e.d. a.}lp ahnadb iest aussseudm iend dth tios paper. We use to denote a symbol of rank 0 that is not iWn any e ra ⊥nk toed d eanlpohtaeb aet s yumsebdo lin o fth riasn paper. tA is tr neoet t ∈ TΣ is denoted σ(t1 , . . . , tk) where k ≥ 0, σ ∈ and t1, . . . , tk ∈ TΣ. F)o wr σ ∈ we mΣe(km) ⊥ Σ T(k), σ ∈ Σk(0 ≥) Σ 3This generates the same class of weighted tree languages as weighted tree automata, the direct analogue of WSAs, and is more useful for our purposes. 4A weighted tree language is recognizable iff it can be represented by a wrtg. 5The following formal definitions and notations are needed for understanding and reimplementation of the presented algorithms, but can be safely skipped on first reading and consulted when encountering an unfamiliar term. 1059 write σ ∈ TΣ as shorthand for σ() . For every set Sw rditiesjσo in ∈t f Trom Σ, let TΣ (S) = TΣ∪S, where, for all s ∈ S, rk(s) = 0. lW se ∈ d,e rfkin(es) th 0e. positions of a tree t = σ(t1, . . . , tk), for k 0, σ ∈ t1, . . . , tk ∈ TΣ, as a set pos(≥t) ⊂ N∗ s∈uch that {∈ε} T ∪ 1e t≤ p ois (≤t) k ⊂, ⊂v ∈ pTohse( tse)t =of {lεea}f ∪ po {siivtio |ns 1 l ≤v(t i) ≤⊆ k p,ovs(t ∈) apores t(hto)s}e. pTohseit sieotns o fv l a∈f p poossit(ito)n ssu lvch(t )th ⊆at pfoors tn)o ir ∈ th Nse, pvio ∈it ponoss(t v). ∈ We p presume hsta tnhadatr dfo lrex nioco igr ∈aph Nic, ovrid ∈eri pnogss( <∈ nTΣd ≤an odn v p o∈s pos(t). The label of t at Lpoestit ti,osn v, Tdenaontedd v v by ∈ t( pvo)s,( tt)he. sTuhbetr leaeb eolf ot fa tt v, denoted by t|v, and the replacement at v by s, vde,n doetneodt e bdy tb[ys] tv|, are defined as follows: ≥ pos(t) = ,{ aivs a| Σ(k), pos(ti)}. 1. For every σ ∈ Σ(0) , σ(ε) = σ, σ|ε = σ, and σF[osr]ε e v=e sy. 2. For every t = σ(t1 , . . . , tk) such that k = rk(σ) and k 1, t(ε) = σ, t|ε = t, aknd = t[ rsk]ε( =) ns.d kFo ≥r every 1) ≤= iσ ≤ t| k and v ∈ pos(ti), t(ivF) =r vtie (rvy) ,1 1t| ≤iv =i ≤ti |v k, aanndd tv[s] ∈iv p=o sσ(t(t1 , . . . , ti−1 , ti[(sv])v, , tt|i+1 , . . . , t|k). The size of a tree t, size (t) is |pos(t) |, the cardinTahliety s iozef i otsf apo tsrieteio tn, sseizt.e (Tt)he is s y |ipelods (ste)t| ,o tfh ae tcraereis the set of labels of its leaves: for a tree t, yd (t) = {t(v) | v ∈ lv(t)}. {Lt(etv )A | avn ∈d lBv( tb)e} sets. Let ϕ : A → TΣ (B) be Lae mt Aapp ainndg. 
B W bee seexttes.nd L ϕ t oϕ th :e A Am →appi Tng ϕ : TΣ (A) → TΣ (B) such that for a ∈ A, ϕ(a) = ϕ(a) and( Afo)r →k 0, σ ∈ Σch(k th) , atn fdo t1, . . . , tk ∈ TΣ (A), ϕan(dσ( fto1r, . . . ,t0k,) σ) =∈ σ Σ(ϕ(t1), . . . ,ϕ(tk)). ∈ W Te indicate such extensions by describing ϕ as a substitution mapping and then using ϕ without further comment. We use R+ to denote the set {w ∈ R | w 0} and R+∞ to dentoote d Ren+o ∪e {th+e∞ set}. { ≥ ≥ ≥ Definition 3.1 (cf. (Alexandrakis and Bozapalidis, 1987)) A weighted regular tree grammar (WRTG) is a 4-tuple G = (N, Σ, P, n0) where: 1. N is a finite set of nonterminals, with n0 ∈ N the start nonterminal. 2. Σ is a ranked alphabet of input symbols, where Σ ∩ N = ∅. 3. PΣ ∩is Na =tup ∅le. (P0, π), where P0 is a finite set of productions, each production p of the form n → u, n ∈ N, u ∈ TΣ(N), and π : P0 → R+ ins a→ →w uei,g nht ∈ ∈fu Nnc,ti uo n∈ o Tf the productions. W→e w Rill refer to P as a finite set of weighted productions, each production p of the form n −π −(p →) u. A production p is a chain production if it is of the form ni nj, where ni, nj ∈ N.6 − →w 6In (Alexandrakis and Bozapalidis, 1987), chain productions are forbidden in order to avoid infinite summations. We explicitly allow such summations. A WRTG G is in normal form if each production is either a chain production or is of the form n σ(n1, . . . , nk) where σ ∈ Σ(k) and n1, . . . , nk →∈ σ N(n. For WRTG∈ G N =. (N, Σ, P, n0), s, t, u ∈ TΣ(N), n ∈ N, and p ∈ P of the form n −→ ∈w T u, we nobt ∈ain N Na ,d aenridva ptio ∈n s Ptep o ffr tohme fso rtom mt n by− →repl ua,ci wneg some leaf nonterminal in s labeled n with u. For- − →w mally, s ⇒pG t if there exists some v ∈ lv(s) smuaclhly t,ha st s⇒(v) =t i fn t haenrde s e[xui]svt = so tm. e W ve say t(hsis) derivation step is leftmost if, for all v0 ∈ lv(s) where v0 < v, s(v0) ∈ Σ. We hencef∈orth lv a(ss-) sume all derivation )ste ∈ps a.re leftmost. If, for some m ∈ N, pi ∈ P, and ti ∈ TΣ (N) for all s1o m≤e i m m≤ ∈ m N, n0 ⇒∈ pP1 t a1n ⇒∈pm T tm, we say t1he ≤ sequence ,d n = (p1, . . . ,p.m ⇒) is a derivation of tm in G and that n0 ⇒∗ tm; the weight of d is wt(d) = π(p1) · . . . ⇒· π(pm). The weighted tree language rec)og ·n .i.z.ed · π by(p G is the mapping LG : TΣ → R+∞ such that for every t ∈ TΣ, LG(t) is the sum→ →of R the swuecihgth htsa to ffo arl el v(eproyss ti b∈ly T infinitely many) derivations of t in G. A weighted tree language f : TΣ → R+∞ is recognizable if there is a WRTG G such t→hat R f = LG. We define a partial ordering ? on WRTGs sucWh eth date finore W aR TpGarst aGl1 r=d r(iNng1 , Σ?, P o1n , n0) and G2 (N2, Σ, P2, n0), we say G1 ? G2 iff N1 ⊆ N2 and P1 ⊆ P2, where the w?eigh Gts are pres⊆erve Nd. ... = Definition 3.2 (cf. Def. 1of (Maletti, 2008)) A weighted extended top-down tree transducer (WXTT) is a 5-tuple M = (Q, Σ, ∆, R, q0) where: 1. Q is a finite set of states. 2. Σ and ∆ are the ranked alphabets of input and output symbols, respectively, where (Σ ∪ ∆) ∩ Q = 3. (RΣ i ∪s a∆ )tu ∩ple Q ( =R 0∅, .π), where R0 is a finite set of rules, each rule r of the form q.y → u for q ∈ ru lQes, y c∈h T ruΣle(X r), o fa tnhde u fo r∈m T q∆.y(Q − → →× u uX fo)r. Wqe ∈ ∈fu Qrt,hye r ∈req Tuire(X Xth)a,t annod v uari ∈abl Te x( Q∈ ×X appears rmthoerre rtehqauni roen tchea itn n y, aanrida bthleat x xe ∈ach X Xva arpi-able appearing in u is also in y. Moreover, π : R0 → R+∞ is a weight function of the rules. As →for RWRTGs, we refer to R as a finite set of weighted rules, each rule r of the form ∅. q.y −π −(r →) u. 
A WXTT is linear (respectively, nondeleting) if, for each rule r of the form q.y u, each x ∈ yd (y) ∩ X appears at most on− →ce ( ur,es epaecchxtive ∈ly, dat( lye)a ∩st Xonc aep) iena us. tW meo dsten oontcee th (ree scpleascsof all WXTTs as wxT and add the letters L and N to signify the subclasses of linear and nondeleting WTT, respectively. Additionally, if y is of the form σ(x1 , . . . , xk), we remove the letter “x” to signify − →w 1060 × ×× × the transducer is not extended (i.e., it is a “traditional” WTT (F¨ ul¨ op and Vogler, 2009)). For WXTT M = (Q, Σ, ∆, R, q0), s, t ∈ T∆(Q TΣ), and r ∈ R of the form q.y −w →), u, we obtain a× d Ter)iv,a atniodn r s ∈te pR ofrfom the s f trom mt b.yy r→epl ua,c wineg sbotamine leaf of s labeled with q and a tree matching y by a transformation of u, where each instance of a variable has been replaced by a corresponding subtree of the y-matching tree. Formally, s ⇒rM t if there oisf tah peo ysi-tmioantc vh n∈g tp roese(.s F)o, am saulblys,ti stu ⇒tion mapping ϕ : X → TΣ, and a rule q.y −u→w bs u ∈ R such that ϕs(v :) X X= → →(q, T ϕ(y)) and t = s[ϕ− →0(u u)] ∈v, wRh seurech hϕ t0h aist a substitution mapping Q X → T∆ (Q TΣ) dae sfiunbesdti usuticohn t mhaatp ϕpin0(qg0, Q Qx) × = X ( →q0, Tϕ(x()Q) f×or T all q0 ∈ Q and x ∈ X. We say this derivation step is l∈eft Qmo asnt dif, x f o∈r Xall. v W0 e∈ s lyv( tsh)i w deherirvea tvio0 n< s v, s(v0) ∈ ∆. We hencefor∈th lavs(sus)m we haellr ede vrivation steps) a ∈re ∆le.ftm Woes ht.e nIcf,e ffoorr sho amsesu sm ∈e aTllΣ d, emriv a∈t oNn, ri p∈s R ar, ea lnedf ttmi o∈s tT.∆ I f(,Q f ×r sToΣm) efo sr ∈all T T1 ≤, m mi ≤ ∈ m, (q0∈, s R) ,⇒ anrd1 tt1 . . . ⇒(rQm ×tm T, w)e f say lth 1e sequence d =, ()r1 ⇒ , . . . , rm..) .i s⇒ ⇒a derivation of (s, tm) in M; the weight of d is wt(d) = π(r1) · . . . · π(rm). The weighted tree transformation )r ·ec .o..gn ·i πze(rd by M is the mapping τM : TΣ T∆ → R+∞, such that for every s ∈ TΣ and t ∈× T T∆, τM→(s R, t) is the × µ× foofrth eve ewryeig sh ∈ts Tof aalln (dpo ts ∈sib Tly infinitely many) derivations of (s, t) in M. The composition of two weighted tree transformations τ : TΣ T∆ → R+∞ and : T∆ TΓ → R+∞ is the weight×edT tree→ →tra Rnsformation (τ×; Tµ) :→ →TΣ R TΓ → R+∞ wPhere for every s ∈ TΣ and u ∈ TΓ, (τ×; Tµ) (→s, uR) = Pt∈T∆ τ(s, t) · µ(t,u). 3.2 Applicable classes We now consider transducer classes where recognizability is preserved under application. Table 1 presents known results for the top-down tree transducer classes described in Section 3. 1. Unlike the string case, preservation of recognizability is not universal or symmetric. This is important for us, because we can only construct an application WRTG, i.e., a WRTG representing the result of application, if we can ensure that the language generated by application is in fact recognizable. Of the types under consideration, only wxLNT and wLNT preserve forward recognizability. The two classes marked as open questions and the other classes, which are superclasses of wNT, do not or are presumed not to. All subclasses of wxLT preserve backward recognizability.7 We do not consider cases where recognizability is not preserved tshuamt in the remainder of this paper. If a transducer M of a class that preserves forward recognizability is applied to a WRTG G, we can call the forward ap7Note that the introduction of weights limits recognizability preservation considerably. For example, (unweighted) xT preserves backward recognizability. plication WRTG M(G). 
and if M preserves backward recognizability, we can call the backward application WRTG M(G)/. Now that we have explained the application problem in the context of weighted tree transducers and determined the classes for which application is possible, let us consider how to build forward and backward application WRTGs. Our basic approach mimics that taken for WSTs by using an embed-compose-project strategy. As in string world, if we can embed the input in a transducer, compose with the given transducer, and project the result, we can obtain the application WRTG. Embedding a WRTG in a wLNT is a trivial operation—if the WRTG is in normal form and chain production-free,8 for every production of the form n − →w σ(n1 , . . . , nk), create a rule ofthe form n.σ(x1 , . . . , xk) − →w σ(n1 .x1, . . . , nk.xk). Range × projection of a w− x→LN σT(n is also trivial—for every q ∈ Q and u ∈ T∆ (Q X) create a production of the form q ∈−→w T u(0 where )u 0c is formed from u by replacing al−l → →lea uves of the form q.x with the leaf q, i.e., removing references to variables, and w is the sum of the weights of all rules of the form q.y → u in R.9 Domain projection for wxLT is bq.eyst →exp ulai inne dR b.y way of example. The left side of a rule is preserved, with variables leaves replaced by their associated states from the right side. So, the rule q1.σ(γ(x1) , x2) − →w δ(q2.x2, β(α, q3.x1)) would yield the production q1 q− →w σ(γ(q3) , q2) in the domain projection. Howev− →er, aσ dγe(lqeting rule such as q1.σ(x1 , x2) − →w γ(q2.x2) necessitates the introduction of a new →non γte(rqminal ⊥ that can genienrtartoed aullc toiof nT Σo fw ai nthe wwe niognhtte r1m . The only missing piece in our embed-composeproject strategy is composition. Algorithm 1, which is based on the declarative construction of Maletti (2006), generates the syntactic composition of a wxLT and a wLNT, a generalization of the basic composition construction of Baker (1979). It calls Algorithm 2, which determines the sequences of rules in the second transducer that match the right side of a single rule in the × first transducer. Since the embedded WRTG is of type wLNT, it may be either the first or second argument provided to Algorithm 1, depending on whether the application is forward or backward. We can thus use the embed-compose-project strategy for forward application of wLNT and backward application of wxLT and wxLNT. Note that we cannot use this strategy for forward applica8Without loss of generality we assume this is so, since standard algorithms exist to remove chain productions (Kuich, 1998; E´sik and Kuich, 2003; Mohri, 2009) and convert into normal form (Alexandrakis and Bozapalidis, 1987). 9Finitely many such productions may be formed. 1061 tion of wxLNT, even though that class preserves recognizability. Algorithm 1COMPOSE 1: inputs 2: wxLT M1 = (Q1, Σ, ∆, R1, q10 ) 3: wLNT M2 = (Q2, ∆, Γ, R2, q20 ) 4: outputs 5: wxLT M3 = ((Q1 Q2), Σ, Γ, R3, (q10 , q20 )) such that M3 = (τM1 ; τM2 Q). 
Algorithm 1 COMPOSE
1: inputs
2: wxLT M1 = (Q1, Σ, ∆, R1, q10)
3: wLNT M2 = (Q2, ∆, Γ, R2, q20)
4: outputs
5: wxLT M3 = ((Q1 × Q2), Σ, Γ, R3, (q10, q20)) such that τM3 = (τM1 ; τM2)
6: complexity
7: O(|R1| · max(|R2|^size(ũ), |Q2|)), where ũ is the largest right side tree in any rule in R1
8: Let R3 be of the form (R3′, π)
9: R3 ← (∅, ∅)
10: Ξ ← {(q10, q20)} {seen states}
11: Ψ ← {(q10, q20)} {pending states}
12: while Ψ ≠ ∅ do
13:   (q1, q2) ← any element of Ψ
14:   Ψ ← Ψ \ {(q1, q2)}
15:   for all (q1.y →w1 u) ∈ R1 do
16:     for all (z, w2) ∈ COVER(u, M2, q2) do
17:       for all (q, x) ∈ yd(z) ∩ ((Q1 × Q2) × X) do
18:         if q ∉ Ξ then
19:           Ξ ← Ξ ∪ {q}
20:           Ψ ← Ψ ∪ {q}
21:       r ← ((q1, q2).y → z)
22:       R3′ ← R3′ ∪ {r}
23:       π(r) ← π(r) + (w1 · w2)
24: return M3

Algorithm 2 COVER
1: inputs
2: u ∈ T∆(Q1 × X)
3: wLNT M2 = (Q2, ∆, Γ, R2, q20)
4: state q2 ∈ Q2
5: outputs
6: set of pairs (z, w) with z ∈ TΓ((Q1 × Q2) × X) formed by one or more successful runs on u by rules in R2, starting from q2, and w ∈ R+∞ the sum of the weights of all such runs
7: complexity
8: O(|R2|^size(u))
9: if u(ε) is of the form (q1, x) ∈ Q1 × X then
10:   zinit ← ((q1, q2), x)
11: else
12:   zinit ← ⊥
13: Πlast ← {(zinit, {((ε, ε), q2)}, 1)}
14: for all v ∈ pos(u) such that u(v) ∈ ∆(k) for some k ≥ 0, in prefix order do
15:   Πv ← ∅
16:   for all (z, θ, w) ∈ Πlast do
17:     for all v′ ∈ lv(z) such that z(v′) = ⊥ do
18:       for all (θ(v, v′).u(v)(x1, . . . , xk) →w′ h) ∈ R2 do
19:         θ′ ← θ
20:         Form substitution mapping ϕ : (Q2 × X) → TΓ(((Q1 × Q2) × X) ∪ {⊥}).
21:         for i = 1 to k do
22:           for all v′′ ∈ pos(h) such that h(v′′) = (q2′, xi) for some q2′ ∈ Q2 do
23:             θ′(vi, v′v′′) ← q2′
24:             if u(vi) is of the form (q1, x) ∈ Q1 × X then
25:               ϕ(q2′, xi) ← ((q1, q2′), x)
26:             else
27:               ϕ(q2′, xi) ← ⊥
28:         Πv ← Πv ∪ {(z[ϕ(h)]v′, θ′, w · w′)}
29:   Πlast ← Πv
30: Z ← {z | (z, θ, w) ∈ Πlast}
31: return {(z, Σ(z,θ,w)∈Πlast w) | z ∈ Z}

4 Application of tree transducer cascades

What about the case of an input WRTG and a cascade of tree transducers? We will revisit the three strategies for accomplishing application discussed above for the string case. In order for offline composition to be a viable strategy, the transducers in the cascade must be closed under composition. Unfortunately, of the classes that preserve recognizability, only wLNT is closed under composition (Gécseg and Steinby, 1984; Baker, 1979; Maletti et al., 2009; Fülöp and Vogler, 2009). However, the general lack of composability of tree transducers does not preclude us from conducting forward application of a cascade. We revisit the bucket brigade approach, which in Section 2 appeared to be little more than a choice of composition order. As discussed previously, application of a single transducer involves an embedding, a composition, and a projection. The embedded WRTG is in the class wLNT, and the projection forms another WRTG. As long as every transducer in the cascade can be composed with a wLNT to its left or right, depending on the application type, application of a cascade is possible. Note that this embed-compose-project process is somewhat more burdensome than in the string case. For strings, application is obtained by a single embedding, a series of compositions, and a single projection, whereas application for trees is obtained by a series of (embed, compose, project) operations.
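The structural difference between the two regimes can be shown schematically; in this sketch (not from the paper) embed, compose, and project are placeholder functions passed in purely to make the control flow explicit.

```python
def apply_cascade_strings(language, transducers, embed, compose, project):
    """String case: one embedding, a series of compositions, one projection."""
    device = embed(language)
    for t in transducers:
        device = compose(device, t)
    return project(device)

def apply_cascade_trees_bucket_brigade(wrtg, transducers, embed, compose, project):
    """Tree case: WTTs are in general not closed under composition, so each
    transducer in the cascade gets its own embed-compose-project round."""
    for t in transducers:
        wrtg = project(compose(embed(wrtg), t))
    return wrtg

# Trivial demonstration with identity placeholders, only to show the call pattern:
identity = lambda *args: args[0]
print(apply_cascade_trees_bucket_brigade("G", ["M_A", "M_B"], identity, identity, identity))
```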
4.1 On-the-fly algorithms

We next consider on-the-fly algorithms for application. Similar to the string case, an on-the-fly approach is driven by a calling algorithm that periodically needs to know the productions in a WRTG with a common left side nonterminal. The embed-compose-project approach produces an entire application WRTG before any inference algorithm is run. In order to admit an on-the-fly approach we describe algorithms that only generate those productions in a WRTG that have a given left nonterminal. In this section we extend Definition 3.1 as follows: a WRTG is a 6-tuple G = (N, Σ, P, n0, M, G) where N, Σ, P, and n0 are defined as in Definition 3.1, and either M = G = ∅,10 or M is a wxLNT and G is a normal-form, chain production-free WRTG with terminal alphabet Σ. In the latter case, G is a stand-in for M(G)▷, analogous to the stand-ins for WSAs and WSTs described in Section 2.

10 In which case the definition is functionally unchanged from before.

Table 1: Preservation of forward and backward recognizability for various classes of top-down tree transducers. Here and elsewhere, the following abbreviations apply: w = weighted, x = extended LHS, L = linear, N = nondeleting, OQ = open question. Square brackets include a superposition of classes. For example, w[x]T signifies both wxT and wT. (As noted in Section 3.2, w[x]LNT preserves forward recognizability, forward preservation for w[x]LT is an open question, the remaining classes do not or are presumed not to preserve it, and all subclasses of wxLT preserve backward recognizability.)

Algorithm 3 PRODUCE
1: inputs
2: WRTG Gin = (Nin, ∆, Pin, n0, M, G) such that M = (Q, Σ, ∆, R, q0) is a wxLNT and G = (N, Σ, P, n0′, M′, G′) is a WRTG in normal form with no chain productions
3: nin ∈ Nin
4: outputs
5: WRTG Gout = (Nout, ∆, Pout, n0, M, G) such that Gin ⊆ Gout and (nin →w u) ∈ Pout ⇔ (nin →w u) ∈ M(G)▷
6: complexity
7: O(|R| · |P|^size(ỹ)), where ỹ is the largest left side tree in any rule in R
8: if Pin contains productions of the form nin →w u then
9:   return Gin
10: Nout ← Nin
11: Pout ← Pin
12: Let nin be of the form (n, q), where n ∈ N and q ∈ Q.
13: for all (q.y →w1 u) ∈ R do
14:   for all (θ, w2) ∈ REPLACE(y, G, n) do
15:     Form substitution mapping ϕ : Q × X → T∆(N × Q) such that, for all v ∈ yd(y) and q′ ∈ Q, if there exist n′ ∈ N and x ∈ X such that θ(v) = n′ and y(v) = x, then ϕ(q′, x) = (n′, q′).
16:     p′ ← ((n, q) →w1·w2 ϕ(u))
17:     for all p ∈ NORM(p′, Nout) do
18:       Let p be of the form n0 →w δ(n1, . . . , nk) for δ ∈ ∆(k)
19:       Nout ← Nout ∪ {n0, . . . , nk}
20:       Pout ← Pout ∪ {p}
21: return CHAIN-REM(Gout)

Algorithm 3, PRODUCE, takes as input a WRTG Gin = (Nin, ∆, Pin, n0, M, G) and a desired nonterminal nin and returns another WRTG, Gout, that is different from Gin in that it has more productions, specifically those beginning with nin that are in M(G)▷. Algorithms using stand-ins should call PRODUCE to ensure the stand-in they are using has the desired productions beginning with the specific nonterminal. Note, then, that PRODUCE obtains the effect of forward application in an on-the-fly manner.11 It makes calls to REPLACE, which is presented in Algorithm 4, as well as to a NORM algorithm that ensures normal form by replacing a single production not in normal form with several normal-form productions that can be combined together (Alexandrakis and Bozapalidis, 1987) and a CHAIN-REM algorithm that replaces a WRTG containing chain productions with an equivalent WRTG that does not (Mohri, 2009).

11 Note further that it allows forward application of class wxLNT, something the embed-compose-project approach did not allow.
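The stand-in mechanism is, in effect, a lazily materialized grammar: productions with a given left nonterminal are computed only when some inference algorithm first asks for them, and then cached. The following minimal sketch shows that pattern in isolation; it is not the paper's PRODUCE algorithm, and the generator function and its toy rule table (with placeholder weights) are assumptions made for illustration.

```python
class LazyWRTG:
    """Stand-in-style grammar: productions with a given left nonterminal are
    generated only when first requested, then cached.  `generate` is a
    caller-supplied function (an assumption of this sketch) mapping a
    nonterminal to a list of (nonterminal, right_side, weight) productions."""
    def __init__(self, start, generate):
        self.start = start
        self._generate = generate
        self._cache = {}

    def productions(self, nonterminal):
        if nonterminal not in self._cache:          # materialize on first use only
            self._cache[nonterminal] = list(self._generate(nonterminal))
        return self._cache[nonterminal]

# Toy generator standing in for forward application of a transducer to a grammar:
def toy_generate(nt):
    table = {
        "g0a0": [("g0a0", ("sigma", "g0a0", "g1a1"), 0.5), ("g0a0", ("alpha",), 0.5)],
        "g1a1": [("g1a1", ("alpha",), 1.0)],
    }
    return table.get(nt, [])

G = LazyWRTG("g0a0", toy_generate)
print(G.productions("g0a0"))   # built now; "g1a1" is left untouched until asked for
```

A driver such as MAKE-EXPLICIT or a best-path search simply calls productions(...) whenever it reaches a nonterminal, so only the reachable part of the application WRTG is ever built.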
Algorithm 4 REPLACE
1: inputs
2: y ∈ TΣ(X)
3: WRTG G = (N, Σ, P, n0, M, G) in normal form, with no chain productions
4: n ∈ N
5: outputs
6: set Π of pairs (θ, w) where θ is a mapping pos(y) → N and w ∈ R+∞, each pair indicating a successful run on y by productions in G, starting from n, and w is the weight of the run
7: complexity
8: O(|P|^size(y))
9: Πlast ← {({(ε, n)}, 1)}
10: for all v ∈ pos(y) such that y(v) ∉ X, in prefix order do
11:   Πv ← ∅
12:   for all (θ, w) ∈ Πlast do
13:     if M ≠ ∅ and G ≠ ∅ then
14:       G ← PRODUCE(G, θ(v))
15:     for all (θ(v) →w′ y(v)(n1, . . . , nk)) ∈ P do
16:       Πv ← Πv ∪ {(θ ∪ {(vi, ni) | 1 ≤ i ≤ k}, w · w′)}
17:   Πlast ← Πv
18: return Πlast

Algorithm 5 MAKE-EXPLICIT
1: inputs
2: WRTG G = (N, Σ, P, n0, M, G) in normal form
3: outputs
4: WRTG G′ = (N′, Σ, P′, n0, M, G), in normal form, such that if M ≠ ∅ and G ≠ ∅, LG′ = LM(G)▷, and otherwise G′ = G
5: complexity
6: O(|P′|)
7: G′ ← G
8: Ξ ← {n0} {seen nonterminals}
9: Ψ ← {n0} {pending nonterminals}
10: while Ψ ≠ ∅ do
11:   n ← any element of Ψ
12:   Ψ ← Ψ \ {n}
13:   if M ≠ ∅ and G ≠ ∅ then
14:     G′ ← PRODUCE(G′, n)
15:   for all (n →w σ(n1, . . . , nk)) ∈ P′ do
16:     for i = 1 to k do
17:       if ni ∉ Ξ then
18:         Ξ ← Ξ ∪ {ni}
19:         Ψ ← Ψ ∪ {ni}
20: return G′

Figure 2: Forward application through a cascade of tree transducers using an on-the-fly method.
(a) Input WRTG G: g0 →w1 σ(g0, g1); g0 →w2 α; g1 →w3 α
(b) First transducer MA in the cascade: a0.σ(x1, x2) →w4 σ(a0.x1, a1.x2); a0.σ(x1, x2) →w5 ψ(a2.x1, a1.x2); a0.α →w6 α; a1.α →w7 α; a2.α →w8 ρ
(c) Second transducer MB in the cascade: b0.σ(x1, x2) →w9 σ(b0.x1, b0.x2); b0.α →w10 α
(d) Productions of MA(G)▷ built as a consequence of building the complete MB(MA(G)▷)▷: g0a0 →w1·w4 σ(g0a0, g1a1); g0a0 →w1·w5 ψ(g0a2, g1a1); g0a0 →w2·w6 α; g1a1 →w3·w7 α
(e) Complete MB(MA(G)▷)▷: g0a0b0 →w1·w4·w9 σ(g0a0b0, g1a1b0); g0a0b0 →w2·w6·w10 α; g1a1b0 →w3·w7·w10 α

As an example of stand-in construction, consider the invocation PRODUCE(G1, g0a0), where G1 = ({g0a0}, {σ, ψ, α, ρ}, ∅, g0a0, MA, G), G is in Figure 2a,12 and MA is in Figure 2b. The stand-in WRTG that is output contains the first three of the four productions in Figure 2d. To demonstrate the use of on-the-fly application in a cascade, we next show the effect of PRODUCE when used with the cascade G ◦ MA ◦ MB, where MB is in Figure 2c. Our driving algorithm in this case is Algorithm 5, MAKE-EXPLICIT, which simply generates the full application WRTG using calls to PRODUCE.

12 By convention the initial nonterminal and state are listed first in graphical depictions of WRTGs and WXTTs.
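To make Figure 2 concrete, the following sketch eagerly enumerates the productions of MA(G)▷ for the restricted shape of rules used in the example (non-extended wLNT rules of depth one). It is a simplification for illustration, not the paper's PRODUCE algorithm, and the numeric weights are placeholders standing in for the symbolic w1–w8.

```python
def forward_apply(start_nt, prods, start_state, rules):
    """Eagerly build the productions of the forward application M(G)> for the
    restricted case where every rule has the shallow form
    q.sigma(x1,...,xk) -> delta(q1.x_i1, ..., qk.x_ik).
    prods: list of (nonterminal, sigma, child_nonterminals, weight)
    rules: list of (state, sigma, delta, out_children, weight), where
           out_children is a tuple of (state, variable_index) pairs."""
    out_prods, seen = [], set()
    stack = [(start_nt, start_state)]
    while stack:
        pair = stack.pop()
        if pair in seen:
            continue
        seen.add(pair)
        n, q = pair
        for pn, sigma, kids, w1 in prods:
            if pn != n:
                continue
            for rq, rsigma, delta, out, w2 in rules:
                if rq != q or rsigma != sigma or len(out) != len(kids):
                    continue
                # pair each grammar child nonterminal with the state that reads it
                children = tuple((kids[i - 1], st) for st, i in out)
                out_prods.append((pair, delta, children, w1 * w2))
                stack.extend(children)
    return out_prods

# WRTG G and transducer MA from Figure 2; the weights w1..w8 are placeholders.
G_prods = [("g0", "sig", ("g0", "g1"), 0.5),   # g0 -w1-> sig(g0, g1)
           ("g0", "alpha", (), 0.5),           # g0 -w2-> alpha
           ("g1", "alpha", (), 1.0)]           # g1 -w3-> alpha
MA_rules = [("a0", "sig", "sig", (("a0", 1), ("a1", 2)), 0.6),   # w4
            ("a0", "sig", "psi", (("a2", 1), ("a1", 2)), 0.4),   # w5
            ("a0", "alpha", "alpha", (), 1.0),                   # w6
            ("a1", "alpha", "alpha", (), 1.0),                   # w7
            ("a2", "alpha", "rho", (), 1.0)]                     # w8
for p in forward_apply("g0", G_prods, "a0", MA_rules):
    print(p)
```

Run on the Figure 2 data, this produces the four productions of Figure 2d plus the production for g0a2, which an eager construction builds but which, as the text explains next, the on-the-fly run avoids.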
The input to MAKE-EXPLICIT is G2 = ({g0a0b0}, {σ, α}, ∅, g0a0b0, MB, G1).13 MAKE-EXPLICIT calls PRODUCE(G2, g0a0b0). PRODUCE then seeks to cover b0.σ(x1, x2) →w9 σ(b0.x1, b0.x2) with productions from G1, which is a stand-in for MA(G)▷. At line 14 of REPLACE, G1 is improved so that it has the appropriate productions. The productions of MA(G)▷ that must be built to form the complete MB(MA(G)▷)▷ are shown in Figure 2d. The complete MB(MA(G)▷)▷ is shown in Figure 2e. Note that because we used this on-the-fly approach, we were able to avoid building all the productions in MA(G)▷; in particular we did not build g0a2 →w2·w8 ρ, while a bucket brigade approach would have built this production. We have also designed an analogous on-the-fly PRODUCE algorithm for backward application on linear WTT.

13 Note that G2 is the initial stand-in for MB(MA(G)▷)▷, since G1 is the initial stand-in for MA(G)▷.

We have now defined several on-the-fly and bucket brigade algorithms, and also discussed the possibility of embed-compose-project and offline composition strategies for application of cascades of tree transducers. Tables 2a and 2b summarize the available methods of forward and backward application of cascades for recognizability-preserving tree transducer classes.

Table 2: Transducer types and available methods of forward and backward application of a cascade; (a) forward application, (b) backward application. oc = offline composition, bb = bucket brigade, otf = on the fly.

5 Decoding Experiments

The main purpose of this paper has been to present novel algorithms for performing application. However, it is important to demonstrate these algorithms on real data. We thus demonstrate bucket-brigade and on-the-fly backward application on a typical NLP task cast as a cascade of wLNT. We adapt the Japanese-to-English translation model of Yamada and Knight (2001) by transforming it from an English-tree-to-Japanese-string model to an English-tree-to-Japanese-tree model. The Japanese trees are unlabeled, meaning they have syntactic structure but all nodes are labeled "X". We then cast this modified model as a cascade of LNT tree transducers. Space does not permit a detailed description, but some example rules are in Figure 3. The rotation transducer R, a sample of which is in Figure 3a, has 6,453 rules, the insertion transducer I, Figure 3b, has 8,122 rules, and the translation transducer T, Figure 3c, has 37,311 rules.

Figure 3: Example rules from transducers used in the decoding experiment; j1 and j2 are Japanese words. (a) Rotation rules, e.g., rJJ.JJ(x1, x2, x3) → JJ(rDT.x1, rJJ.x2, rVB.x3) and rVB.VB(x1, x2, x3) → VB(rNNPS.x1, rNN.x3, rVB.x2). (b) Insertion rules, e.g., iVB.NN(x1, x2) → NN(INS, iNN.x1, iNN.x2) and iVB.NN(x1, x2) → NN(iNN.x1, iNN.x2, INS). (c) Translation rules, e.g., t.VB(x1, x2, x3) → X(t.x1, t.x2, t.x3) and t."gentleman" → j1.

We add an English syntax language model L to the cascade of transducers just described to better simulate an actual machine translation decoding task. The language model is cast as an identity WTT and thus fits naturally into the experimental framework. In our experiments we try several different language models to demonstrate varying performance of the application algorithms. The most realistic language model is a PCFG. Each rule captures the probability of a particular sequence of child labels given a parent label. This model has 7,765 rules.
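As an illustration of how a PCFG language model can be cast as an identity tree transducer, here is one natural encoding in a short sketch; it is not necessarily the exact construction used in the experiments, and the rule, state names, and probability shown are placeholders.

```python
def pcfg_rule_to_identity_wtt(parent, children, prob):
    """Encode a PCFG rule parent -> children (with probability prob) as an
    identity tree-transducer rule
    q_parent.parent(x1,...,xk) -prob-> parent(q_c1.x1, ..., q_ck.xk).
    One natural encoding of 'language model as identity WTT'; a sketch only."""
    k = len(children)
    lhs = (f"q_{parent}", parent, tuple(f"x{i+1}" for i in range(k)))
    rhs = (parent, tuple((f"q_{c}", f"x{i+1}") for i, c in enumerate(children)))
    return lhs, rhs, prob

# Placeholder PCFG rule NP -> DT NN with probability 0.35:
print(pcfg_rule_to_identity_wtt("NP", ["DT", "NN"], 0.35))
```

Because such a transducer maps every tree to itself, adding it to the cascade reweights trees by the language model without changing them, which is why it fits naturally into the application framework.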
To demonstrate more extreme cases of the usefulness of the on-the-fly approach, we build a language model that recognizes exactly the 2,087 trees in the training corpus, each with equal weight. It has 39,455 rules. Finally, to be ultra-specific, we include a form of the "specific" language model just described, but only allow the English counterpart of the particular Japanese sentence being decoded in the language. The goal in our experiments is to apply a single tree t backward through the cascade L ◦ R ◦ I ◦ T ◦ t and find the 1-best path in the application WRTG. We evaluate the speed of each approach: bucket brigade and on-the-fly. The algorithm we use to obtain the 1-best path is a modification of the k-best algorithm of Pauls and Klein (2009). Our algorithm finds the 1-best path in a WRTG and admits an on-the-fly approach. The results of the experiments are shown in Table 3.

Table 3: Timing results to obtain the 1-best tree from application through a weighted tree transducer cascade, using on-the-fly vs. bucket brigade backward application techniques. pcfg = model recognizes any tree licensed by a PCFG built from observed data, exact = model recognizes each of 2,000+ trees with equal weight, 1-sent = model recognizes exactly one tree.

As can be seen, on-the-fly application is generally faster than the bucket brigade, about double the speed per sentence in the traditional experiment that uses an English PCFG language model. The results for the other two language models demonstrate more keenly the potential advantage that an on-the-fly approach provides—the simultaneous incorporation of information from all models allows application to be done more effectively than if each information source is considered in sequence. In the "exact" case, where a very large language model that simply recognizes each of the 2,087 trees in the training corpus is used, the final application is so large that it overwhelms the resources of a 4 GB MacBook Pro, while the on-the-fly approach does not suffer from this problem. The "1-sent" case is presented to demonstrate the ripple effect caused by using the on-the-fly approach. In the other two cases, a very large language model generally overwhelms the timing statistics, regardless of the method being used. But a language model that represents exactly one sentence is very small, and thus the effects of simultaneous inference are readily apparent—the time to retrieve the 1-best sentence is reduced by two orders of magnitude in this experiment.

6 Conclusion

We have presented algorithms for forward and backward application of weighted tree transducer cascades, including on-the-fly variants, and demonstrated the benefit of an on-the-fly approach to application. We note that a more formal approach to application of WTTs is being developed, independent from these efforts, by Fülöp et al. (2010).

Acknowledgments

We are grateful for extensive discussions with Andreas Maletti. We also appreciate the insights and advice of David Chiang, Steve DeNeefe, and others at ISI in the preparation of this work. Jonathan May and Kevin Knight were supported by NSF grants IIS-0428020 and IIS-0904684. Heiko Vogler was supported by DFG VO 1011/5-1.

References

Athanasios Alexandrakis and Symeon Bozapalidis. 1987. Weighted grammars and Kleene's theorem. Information Processing Letters, 24(1):1–4.
Brenda S. Baker. 1979. Composition of top-down and bottom-up tree transductions. Information and Control, 41(2):186–213.
Zoltán Ésik and Werner Kuich. 2003. Formal tree series. Journal of Automata, Languages and Combinatorics, 8(2):219–285.
Zoltán Fülöp and Heiko Vogler. 2009. Weighted tree automata and tree transducers. In Manfred Droste, Werner Kuich, and Heiko Vogler, editors, Handbook of Weighted Automata, chapter 9, pages 313–404. Springer-Verlag.
Zoltán Fülöp, Andreas Maletti, and Heiko Vogler. 2010. Backward and forward application of weighted extended tree transducers. Unpublished manuscript.
Ferenc Gécseg and Magnus Steinby. 1984. Tree Automata. Akadémiai Kiadó, Budapest.
Liang Huang and David Chiang. 2005. Better k-best parsing. In Harry Bunt, Robert Malouf, and Alon Lavie, editors, Proceedings of the Ninth International Workshop on Parsing Technologies (IWPT), pages 53–64, Vancouver, October. Association for Computational Linguistics.
Werner Kuich. 1998. Formal power series over trees. In Symeon Bozapalidis, editor, Proceedings of the 3rd International Conference on Developments in Language Theory (DLT), pages 61–101, Thessaloniki, Greece. Aristotle University of Thessaloniki.
Werner Kuich. 1999. Tree transducers and formal tree series. Acta Cybernetica, 14:135–149.
Andreas Maletti, Jonathan Graehl, Mark Hopkins, and Kevin Knight. 2009. The power of extended top-down tree transducers. SIAM Journal on Computing, 39(2):410–430.
Andreas Maletti. 2006. Compositions of tree series transformations. Theoretical Computer Science, 366:248–271.
Andreas Maletti. 2008. Compositions of extended top-down tree transducers. Information and Computation, 206(9–10):1187–1196.
Andreas Maletti. 2009. Personal communication.
Mehryar Mohri, Fernando C. N. Pereira, and Michael Riley. 2000. The design principles of a weighted finite-state transducer library. Theoretical Computer Science, 231:17–32.
Mehryar Mohri. 1997. Finite-state transducers in language and speech processing. Computational Linguistics, 23(2):269–312.
Mehryar Mohri. 2009. Weighted automata algorithms. In Manfred Droste, Werner Kuich, and Heiko Vogler, editors, Handbook of Weighted Automata, chapter 6, pages 213–254. Springer-Verlag.
Adam Pauls and Dan Klein. 2009. K-best A* parsing. In Keh-Yih Su, Jian Su, Janyce Wiebe, and Haizhou Li, editors, Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 958–966, Suntec, Singapore, August. Association for Computational Linguistics.
Fernando Pereira and Michael Riley. 1997. Speech recognition by composition of weighted finite automata. In Emmanuel Roche and Yves Schabes, editors, Finite-State Language Processing, chapter 15, pages 431–453. MIT Press, Cambridge, MA.
William A. Woods. 1980. Cascaded ATN grammars. American Journal of Computational Linguistics, 6(1):1–12.
Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, pages 523–530, Toulouse, France, July. Association for Computational Linguistics.
