acl acl2012 acl2012-139 knowledge-graph by maker-knowledge-mining

139 acl-2012-MIX Is Not a Tree-Adjoining Language


Source: pdf

Author: Makoto Kanazawa ; Sylvain Salvati

Abstract: The language MIX consists of all strings over the three-letter alphabet {a, b, c} that contain an equal number of occurrences of each letter. We prove Joshi’s (1985) conjecture that MIX is not a tree-adjoining language.
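For concreteness, membership in MIX is trivially decidable by counting letters. The following is an illustrative sketch (not code from the paper; the function name is ours):

```python
from collections import Counter

def in_mix(w: str) -> bool:
    """w is in MIX iff it uses only a, b, c and each of the three
    letters occurs equally often (the empty string qualifies)."""
    counts = Counter(w)
    return set(w) <= set("abc") and counts["a"] == counts["b"] == counts["c"]

assert in_mix("abccba") and in_mix("") and not in_mix("aabc")
```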

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract The language MIX consists of all strings over the three-letter alphabet {a, b, c} that contain an equal number of occurrences of each letter. [sent-3, score-0.257]

2 Gazdar (1988) adopts a similar position regarding the relation between MIX… (Footnote: if w is a string and d is a symbol, we write |w|d to mean the number of occurrences of d in w.) [sent-9, score-0.285]

3 It therefore seems natural to assume that languages such as MIX should be excluded from any class of formal languages that purports to be a tight formal characterization of the possible natural languages. [sent-19, score-0.236]

4 (1991) suggested that MIX should not be in the class of so-called mildly context-sensitive languages: “[mildly context-sensitive grammars] capture only certain kinds of dependencies, e.g.… [sent-21, score-0.125]

5 The first condition is only vaguely formulated, but the other two conditions are clearly satisfied by tree-adjoining grammars. [sent-28, score-0.196]

6 (It is known that the indexed languages properly include the tree-adjoining languages.) [sent-35, score-0.187]

7 , 1987; Weir, 1988), which has been customarily regarded as a formal counterpart of the informal notion of a mildly context-sensitive language. [sent-46, score-0.221]

8 It means that either we have to abandon the identification of multiple context-free languages with mildly context-sensitive languages, or we should revise our conception of limited cross-serial dependencies and stop regarding MIX-like languages as violations of this condition. [sent-47, score-0.237]

9 Our proof is cast in terms of the formalism of head grammar (Pollard, 1984; Roach, 1987), which is known to be equivalent to TAG (Vijay-Shanker and Weir, 1994). [sent-50, score-0.355]

10 The key to our proof is the notion of an n-decomposition of a string over {a, b, c}, which is similar to the notion of a derivation in head grammars, but independent of any particular grammar. [sent-51, score-0.31]

11 We first… (Footnote: the relation of MIX with indexed languages is also of interest in combinatorial group theory.) [sent-53, score-0.158]

12 Groenink (1997) wrote “The class of mildly context-sensitive languages seems to be most adequately approached by LCFRS.” [sent-62, score-0.181]

13 …show that if MIX is generated by some head grammar, then there is an n such that every string in MIX has an n-decomposition. [sent-63, score-0.32]

14 We then prove that if every string in MIX has an n-decomposition, then every string in MIX must have a 2-decomposition. [sent-64, score-0.484]

15 Finally, we exhibit a particular string in MIX that has no 2-decomposition. [sent-65, score-0.165]

16 The length of this string is 87, and the fact that it has no 2-decomposition was first verified by a computer program accompanying this paper. [sent-66, score-0.196]

17 2 Head Grammars A head grammar is a quadruple G = (N, Σ, P, S), where N is a finite set of nonterminals, Σ is a finite set of terminal symbols (alphabet), S is a distinguished element of N, and P is a finite set of rules. [sent-68, score-0.405]

18 Each nonterminal is interpreted as a binary predicate on strings in Σ∗ . [sent-69, score-0.218]

19 This definition of a head grammar actually corresponds to a normal form for head grammars that appears in section 3. [sent-72, score-0.621]

20 The rules of head grammars are interpreted as implications from right to left, where variables can be instantiated to any terminal strings. [sent-74, score-0.297]

21 The notation we use to express rules of head grammars is borrowed from elementary formal systems (Smullyan, 1961 ; Arikawa et al. [sent-77, score-0.317]

22 , 1992), also known as literal movement grammars (Groenink, 1997; Kracht, 2003), which are logic programs over strings. [sent-78, score-0.157]

23 The operation involved in the third rule is known as wrapping; the operations involved in the first two rules we call left concatenation and right concatenation, respectively. [sent-80, score-0.254]
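As a sketch, the three operations act on pairs of strings as below. The exact equations are an assumption, chosen to match the Pollard-style convention and the uses quoted later (sentence 72: left concatenation of (u1, u2) and (v1, v2) yields (u1u2v1, v2); sentence 74: wrapping of (u1u2, ε) and (v1, v2) also yields (u1u2v1, v2)):

```python
Pair = tuple[str, str]  # a nonterminal's argument: a string with a split point

def left_concat(x: Pair, y: Pair) -> Pair:
    # (x1, x2), (y1, y2) -> (x1 x2 y1, y2)
    return (x[0] + x[1] + y[0], y[1])

def right_concat(x: Pair, y: Pair) -> Pair:
    # (x1, x2), (y1, y2) -> (x1, x2 y1 y2)
    return (x[0], x[1] + y[0] + y[1])

def wrap(x: Pair, y: Pair) -> Pair:
    # (x1, x2), (y1, y2) -> (x1 y1, y2 x2): y is wrapped inside x
    return (x[0] + y[0], y[1] + x[1])
```

Under this convention a pair (w1, w2) stands for the string w1w2 with a distinguished split point, which is what lets wrapping insert one string inside another.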

24 If G = (N, Σ, P, S) is a head grammar, A ∈ N, and w1, w2 ∈ Σ∗, then we say that a fact A(w1, w2) is derivable and write ⊢G A(w1, w2). [sent-81, score-0.187]

25 All binary rules of this grammar (over the alphabet {a, ā}) are wrapping rules. [sent-97, score-0.247]

26 If ⊢G A(w1, w2), a derivation tree for A(w1, w2) is a finite… [sent-99, score-0.22]

27 …binary tree whose nodes are labeled by facts that are derived during the derivation of A(w1, w2). [sent-100, score-0.308]

28 A derivation tree for A(w1, w2) represents a “proof” of ⊢G A(w1, w2)… [sent-101, score-0.22]

29 …and is formally defined as follows: if A(w1, w2) ← is a terminating rule, then a tree with a single node labeled by A(w1, w2) is a derivation tree for A(w1, w2). [sent-102, score-0.345]

30 Figure 1: An example of a derivation tree of a head grammar. [sent-106, score-0.196]

31 …if A(w1, w2) is derived from ⊢G B(u1, u2) and ⊢G C(v1, v2) by some binary rule, then a binary tree whose root is labeled by A(w1, w2) and whose immediate left (right) subtree is a derivation tree for B(u1, u2) (for C(v1, v2), respectively) is a derivation tree for A(w1, w2). [sent-111, score-0.746]

32 If w ∈ L(G), a derivation tree for w is a derivation tree for some S(w1, w2) such that w1w2 = w. [sent-112, score-0.44]

33 Figure 1 shows a derivation tree for aaāāaā#āaāāaa. [sent-114, score-0.344]

34 The following lemma should be intuitively clear from the definition of a derivation tree: Lemma 1. [sent-115, score-0.327]

35 Let G = (N, Σ, P, S) be a head grammar and A be a nonterminal in N. [sent-116, score-0.306]

36 We can prove by straightforward induction on the height of derivation trees that whenever A(v1, v2) appears on a node in a derivation tree for B(w1, w2), then there exist z0, z1, z2, z3 that satisfy one of the following conditions: (a) w1 = z0v1z1v2z2, w2 = z3, and… [sent-122, score-0.578]

37 We call a nonterminal A of a head grammar G useless if A does not appear in any derivation trees for strings in L(G). [sent-133, score-0.694]

38 Clearly, useless nonterminals can be eliminated from any head grammar without affecting the language of the grammar. [sent-134, score-0.393]

39 In other words, ψ is a homomorphism from the free monoid Σ∗ to Z × Z with addition as the monoid operation and (0, 0) as identity. [sent-140, score-0.19]
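A sketch of ψ in code. The letter images a ↦ (1, 0), b ↦ (0, 1), c ↦ (−1, −1) are our assumption; they give ψ(w) = (|w|a − |w|c, |w|b − |w|c), which is consistent with the bounds quoted in sentence 59 below and makes w ∈ MIX exactly when ψ(w) = (0, 0):

```python
PSI = {"a": (1, 0), "b": (0, 1), "c": (-1, -1)}  # assumed letter images

def psi(w: str) -> tuple[int, int]:
    # Homomorphism from the free monoid {a, b, c}* into Z x Z:
    # the image of w is the componentwise sum of its letters' images.
    x = y = 0
    for d in w:
        x, y = x + PSI[d][0], y + PSI[d][1]
    return (x, y)
```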

40 Suppose that G = (N, Σ, P, S) is a head grammar without useless nonterminals such that L(G) ⊆ MIX. [sent-142, score-0.393]

41 Since G has no useless nonterminals, for each nonterminal A of G, there is a derivation tree for some string in L(G) in which A appears in a node label. [sent-146, score-0.67]

42 • each leaf node is labeled by some (s1, s2) such that s1s2 ∈ {b, c}∗ ∪ {a, c}∗ ∪ {a, b}∗. [sent-152, score-0.237]

43 Thus, the label of an internal node in a decomposition is obtained from the labels of its children by left concatenation, right concatenation, or wrapping. [sent-153, score-0.351]

44 It is easy to see that if G is a head grammar over the alphabet Σ, any derivation for w ∈ L(G) induces a decomposition of w. [sent-154, score-0.547]

45 Note that unlike with derivation trees, we have placed no bound on the length of a string that may appear on a leaf node of a decomposition. [sent-156, score-0.503]

46 If MIX = L(G) for some head grammar G = (N, Σ, P, S), then there exists an n such that each w ∈ MIX has an n-decomposition. [sent-162, score-0.249]

47 We may suppose without loss of generality that G has no useless nonterminal. [sent-164, score-0.144]

48 If w = d1 · · · dm ∈ Σ∗, then for 0 ≤ i ≤ j ≤ m, we write w[i, j] to refer to the substring di+1 · · · dj. [sent-173, score-0.192]

49 The following is a key lemma in our proof: Lemma 4. [sent-178, score-0.179]

50 If a node is assigned a tuple (i, j, k, l) and has two children labeled by (u1, u2) and (v1, v2), respectively, then the 4-tuples assigned to the children are determined according to how (u1, u2) and (v1, v2) are combined at the parent node. [sent-195, score-0.179]

51 …ψ(w[f(k), f(l)]) must be an integral multiple of n, it follows that ψ(w… [sent-246, score-0.144]

52 …is labeled by a pair of strings of the form (γn(u), γn(v)) such that ψ(γn(u)γn(v)) ∈ {−2n, −n, 0, n, 2n} × {−2n, −n, 0, n, 2n}. [sent-251, score-0.167]

53 Now it is easy to see that inverting the homomorphism γn at each node of D′… [sent-252, score-0.223]

54 A String in MIX That Has No 2-Decomposition: By Lemmas 3 and 4, in order to prove that there is no head grammar for MIX, it suffices to exhibit a string in MIX that has no 2-decomposition. [sent-255, score-0.355]

55 In this section, we prove that the string z has no 2-decomposition. [sent-257, score-0.225]

56 If w is a string in MIX, by plotting the coordinates of ψ(v) for each prefix v of w, we can represent w by a closed curve C together with a map t : [0, |w|] → C. [sent-259, score-0.165]

57 The representation of the string z is given in Figure 2. [sent-260, score-0.165]
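A sketch of this curve representation, reusing the assumed PSI table from above: the partial sums of ψ over the prefixes of w trace the curve, which is closed precisely because ψ(w) = (0, 0) for w ∈ MIX. The witness string z is reconstructed here from the factorization display quoted in sentence 82 and the Figure 2 caption.

```python
import matplotlib.pyplot as plt

def plot_mix_curve(w: str) -> None:
    # Trace psi over all prefixes of w; for w in MIX the curve is closed.
    xs, ys = [0], [0]
    for d in w:
        dx, dy = PSI[d]  # PSI as sketched after sentence 39
        xs.append(xs[-1] + dx)
        ys.append(ys[-1] + dy)
    plt.plot(xs, ys)
    plt.gca().set_aspect("equal")
    plt.show()

# The length-87 witness string z = a5 b14 a19 c29 b15 a5:
z = "a" * 5 + "b" * 14 + "a" * 19 + "c" * 29 + "b" * 15 + "a" * 5
plot_mix_curve(z)
```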

58 Let us call a string w ∈ {a, b, c}∗ such that ψ(w) ∈ [−2, 2] × [−2, 2] long if w contains all three letters, and short otherwise. [sent-261, score-0.243]

59 It is easy to see that a short string w always satisfies |w|a ≤ 4, |w|b ≤ 4, |w|c ≤ 2. [sent-265, score-0.269]

60 (For example, a4c2 and b4c2 are short strings of length 6.) [sent-267, score-0.188]

61 We also call a pair of strings (v1, v2) long (or short) if v1v2 is long (or short, respectively). [sent-268, score-0.234]
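In code, the classification of sentences 58–61 reads as follows (again using the assumed psi from above; strings whose image falls outside the box are neither long nor short):

```python
BOX = range(-2, 3)  # the interval [-2, 2]

def is_long(w: str) -> bool:
    # Long: psi(w) lies in the box and w uses all three letters.
    x, y = psi(w)
    return x in BOX and y in BOX and set("abc") <= set(w)

def is_short(w: str) -> bool:
    # Short: psi(w) lies in the box but some letter is missing.
    x, y = psi(w)
    return x in BOX and y in BOX and not set("abc") <= set(w)

def pair_is_short(v1: str, v2: str) -> bool:
    # A pair is long/short according to the concatenation of its parts.
    return is_short(v1 + v2)
```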

62 According to the definition of an n-decomposition, a leaf node in a 2-decomposition… (Footnote: this fact was first verified by the computer program accompanying this paper.) [sent-269, score-0.221]

63 The program, written in C, implements a generic, memoized top-down recognizer for the language { w ∈ MIX | w has a 2-decomposition }, and does not rely on any special properties of the string z. [sent-270, score-0.165]
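The authors’ C program is not reproduced here, but its top-down structure can be sketched. The sketch below assumes a reconstruction of 2-decompositions from the fragments quoted above: every node label (w[i:j], w[k:l]) has ψ of its concatenation inside [−2, 2] × [−2, 2], leaves are exactly the short pairs, and internal labels arise from two children by left concatenation, right concatenation, or wrapping. It is a naive recognizer, practical only for short inputs; verifying the length-87 string z requires the paper’s much faster C implementation.

```python
from functools import lru_cache

def has_2_decomposition(w: str) -> bool:
    n = len(w)

    @lru_cache(maxsize=None)
    def ok(i: int, j: int, k: int, l: int) -> bool:
        # Can the pair (w[i:j], w[k:l]), with j <= k, label a node?
        v = w[i:j] + w[k:l]
        x, y = psi(v)  # psi as sketched after sentence 39
        if not (-2 <= x <= 2 and -2 <= y <= 2):
            return False
        if not set("abc") <= set(v):
            return True  # a short pair: a legal leaf
        # Left concatenation (u1 u2 v1, v2): u1 = w[i:p], u2 = w[p:q], v1 = w[q:j].
        for p in range(i, j + 1):
            for q in range(p, j + 1):
                if q > i and (j - q) + (l - k) > 0:  # both children nonempty
                    if ok(i, p, p, q) and ok(q, j, k, l):
                        return True
        # Right concatenation (u1, u2 v1 v2): u2 = w[k:p], v1 = w[p:q], v2 = w[q:l].
        for p in range(k, l + 1):
            for q in range(p, l + 1):
                if p < l and (j - i) + (p - k) > 0:
                    if ok(i, j, k, p) and ok(p, q, q, l):
                        return True
        # Wrapping (u1 v1, v2 u2): u1 = w[i:p], v1 = w[p:j], v2 = w[k:q], u2 = w[q:l].
        for p in range(i, j + 1):
            for q in range(k, l + 1):
                if (p - i) + (l - q) > 0 and (j - p) + (q - k) > 0:
                    if ok(i, p, q, l) and ok(p, j, k, q):
                        return True
        return False

    # w has a 2-decomposition iff some split w = w[0:m] + w[m:n] labels a root.
    return any(ok(0, m, m, n) for m in range(n + 1))
```

On tiny inputs it behaves as expected; for example, has_2_decomposition("abc") is True via the leaf pairs ("a", "b") and ("c", "").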

64 Figure 2: Graphical representation of the string z = a5b14a19c29b15a5. Note that every point (i, j) on the diagonal segment has i > 7 or j < −2. [sent-271, score-0.165]

65 …must be labeled by a short pair of strings. [sent-273, score-0.209]

66 We call a 2-decomposition normal if the label of every internal node is long. [sent-274, score-0.249]

67 Clearly, any 2-decomposition can be turned into a normal 2-decomposition by deleting all nodes that are descendants of nodes with short labels. [sent-275, score-0.155]

68 One important property of the string z is the following: Lemma 5. [sent-276, score-0.165]

69 If a substring v of z has ψ(v) ∈ [−2, 2] × [−2, 2], then the subcurve corresponding to v must have initial and final coordinates whose difference lies in [−2, 2] × [−2, 2]. [sent-280, score-0.332]

70 …contains as a substring at least one of ba19c, ac29b, and cb15a. [sent-282, score-0.16]

71 Let us call a decomposition of a string concatenation-free if each of its non-leaf labels is the wrapping of the labels of the children. [sent-287, score-0.395]

72 We only consider the case of left concatenation since the case of right concatenation is entirely analogous; so we suppose that the node μ is labeled by (u1u2v1 ,v2). [sent-297, score-0.503]

73 If u1u2 is short, then the left child of μ is a leaf because D is normal. [sent-299, score-0.189]

74 We can replace its label by (u1u2, ε); the label (u1u2v1, v2) of μ will now be the wrapping (as well as left concatenation) of the two child labels, (u1u2, ε) and (v1, v2). [sent-300, score-0.231]

75 If x1x2 is short, then we can combine by wrapping a single node labeled by (x1, x2) with the subtree of D rooted at the left child of μ, to obtain a new 2-decomposition of z. [sent-301, score-0.398]

76 Another useful property of the string z is the following: Lemma 7. [sent-305, score-0.165]

77 Suppose that the following conditions hold: (i) z = x1u1v1yv2u2x2, (ii) x1yx2 is a short string, and (iii) both ψ(u1u2) and ψ(v1v2) are in [−2, 2] × [−2, 2]. [sent-306, score-0.127]

78 Since (u1, u2) and (v1, v2) must both contain c, either u1 ends in c and v1 starts in c, or else v2 ends in c and u2 starts in c. [sent-310, score-0.296]

79 Since (v1, v2) must contain at least one occurrence of a, the string v1yv2 must contain cb15a as a substring. [sent-313, score-0.472]

80 But v1yv2 is a substring of c28b15a5, so |v1v2|a ≤ 5. [sent-316, score-0.16]

81 This means that cb15a is a substring of u2, and hence |x2| ≤ 4. [sent-324, score-0.16]

82 (Display: z = a5b14a19c29b15a5 aligned with the factors u1, v1yv2, u2, x2.) On the other hand, since (v1, v2) must contain at least one occurrence of b, the string v1yv2 must contain ba19c as a substring. [sent-326, score-0.472]

83 The i-th node on the path will be denoted by μi, counting the root as the 0-th node. [sent-336, score-0.221]

84 If μi is not a leaf node, let (ui,1,ui,2) and (vi,1,vi,2) be the labels of the left and right children of μi, respectively. [sent-341, score-0.267]

85 If the left child is not a leaf node, we let μi+1 be the left child, in which case we have (wi+1,1, wi+1,2) = (ui,1, ui,2), xi+1,1 = xi,1, xi+1,2 = xi,2, and yi+1 = vi,1yivi,2. [sent-342, score-0.285]

86 As long as xi,1yixi,2 is short, (wi,1 ,wi,2) must be long and μi has two children labeled by (ui,1,ui,2) and (vi,1,vi,2). [sent-350, score-0.272]

87 Since the length of z is 87 and the length of a short string is at most 6, exactly one of (ui,1 ,ui,2) and (vi,1,vi,2) must be long. [sent-352, score-0.327]

88 Note that at μm, we have ψ(xm,1ymxm,2) = ψ(xm−1,1ym−1xm−1,2) + ψ(v) for some short string v. [sent-355, score-0.233]

89 If u and v are short strings and ψ(uv) ∈ [−2, 2] × [−2, 2], then |uv|d ≤ 4 for each d ∈ {a, b, c}. [sent-358, score-0.188]
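Because short strings have length at most 6 (sentences 59–60), this lemma is finitely checkable. A brute-force sketch, using the assumed psi and is_short from the earlier blocks:

```python
from itertools import product

# Enumerate all short strings over {a, b, c} (length at most 6).
shorts = [
    "".join(t)
    for n in range(7)
    for t in product("abc", repeat=n)
    if is_short("".join(t))
]

# Check the lemma exhaustively over all pairs of short strings.
for u in shorts:
    for v in shorts:
        x, y = psi(u + v)
        if -2 <= x <= 2 and -2 <= y <= 2:
            assert all((u + v).count(d) <= 4 for d in "abc")
```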

90 …the following two conditions must hold: (i) |u|a ≥ 1, |u|b = 0 and |v|a = 0, |v|b ≥ 1… [sent-376, score-0.153]

91 This means that the string z must have been split… [sent-383, score-0.259]

92 …into two strings (w0,1, w0,2) at the root of D somewhere in the vicinity of position 67 (see Figure 2). [sent-384, score-0.171]

93 It immediately follows that for all i ≥ m, wi,1 is a substring of a5b14a19c28 and wi,2 is a substring of b14a5. [sent-385, score-0.32]

94 We show by induction that for all i ≥ m, the following condition holds: (†) ba19c17 is a substring of wi,1. [sent-386, score-0.193]

95 Since ui,2 is a substring of b14a5 , it cannot contain any occurrences of c. [sent-395, score-0.259]

96 Since ψ1(ui,1ui,2) ∈ [−2, 2], it follows that ui,1 must contain at least 17 occurrences of c; hence ba19c17 is a substring of ui,1. [sent-396, score-0.353]

97 Note that vi,1 must contain at least 17 occurrences of c, but vi,2 is a substring of b14a5 and hence cannot contain more than 14 occurrences of b. [sent-402, score-0.452]

98 Since ψ2(vi,1vi,2) ∈ [−2, 2] , it follows that vi,1 must contain at least one occurrence of b. [sent-403, score-0.171]

99 Therefore, ba19c17 must be a substring of vi,1 = wi+1,1, which shows that (†) holds with i + 1 in place of i. [sent-404, score-0.292]

100 There is a string in MIX that has no 2-decomposition. [sent-413, score-0.165]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('mix', 0.521), ('uv', 0.335), ('lemma', 0.179), ('string', 0.165), ('substring', 0.16), ('head', 0.155), ('derivation', 0.148), ('bach', 0.134), ('mildly', 0.125), ('aa', 0.124), ('node', 0.12), ('strings', 0.12), ('wrapping', 0.112), ('joshi', 0.103), ('indexed', 0.102), ('grammars', 0.1), ('must', 0.094), ('grammar', 0.094), ('implies', 0.09), ('salvati', 0.089), ('concatenation', 0.087), ('normal', 0.087), ('weir', 0.083), ('useless', 0.078), ('emmon', 0.078), ('proof', 0.077), ('decomposition', 0.076), ('tree', 0.072), ('leaf', 0.07), ('short', 0.068), ('homomorphism', 0.067), ('nonterminals', 0.066), ('suppose', 0.066), ('child', 0.065), ('conjecture', 0.063), ('formal', 0.062), ('prove', 0.06), ('children', 0.059), ('conditions', 0.059), ('sylvain', 0.058), ('occurrences', 0.057), ('nonterminal', 0.057), ('wi', 0.056), ('languages', 0.056), ('mn', 0.055), ('left', 0.054), ('terminating', 0.053), ('finite', 0.052), ('clearly', 0.051), ('root', 0.051), ('path', 0.05), ('integral', 0.05), ('labeled', 0.047), ('condition', 0.047), ('categorial', 0.047), ('mathematics', 0.047), ('arikawa', 0.045), ('gazdar', 0.045), ('groenink', 0.045), ('kracht', 0.045), ('labri', 0.045), ('marsh', 0.045), ('monoid', 0.045), ('ndecomposition', 0.045), ('reidel', 0.045), ('subcurve', 0.045), ('weaech', 0.045), ('call', 0.042), ('right', 0.042), ('contain', 0.042), ('let', 0.042), ('binary', 0.041), ('ends', 0.04), ('starts', 0.04), ('wnd', 0.039), ('aha', 0.039), ('kanazawa', 0.039), ('treeadjoining', 0.039), ('alphabet', 0.038), ('holds', 0.038), ('long', 0.036), ('easy', 0.036), ('princeton', 0.036), ('seki', 0.036), ('occurrence', 0.035), ('notion', 0.034), ('mild', 0.033), ('free', 0.033), ('ec', 0.033), ('aravind', 0.033), ('write', 0.032), ('accompanying', 0.031), ('odf', 0.031), ('appears', 0.03), ('la', 0.03), ('letters', 0.029), ('known', 0.029), ('nin', 0.029), ('editors', 0.028), ('logic', 0.028)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999791 139 acl-2012-MIX Is Not a Tree-Adjoining Language

Author: Makoto Kanazawa ; Sylvain Salvati

Abstract: The language MIX consists of all strings over the three-letter alphabet {a, b, c} that contain an equal number of occurrences of each letter. We prove Joshi’s (1985) conjecture that MIX is not a tree-adjoining language.

2 0.12043514 185 acl-2012-Strong Lexicalization of Tree Adjoining Grammars

Author: Andreas Maletti ; Joost Engelfriet

Abstract: Recently, it was shown (KUHLMANN, SATTA: Tree-adjoining grammars are not closed under strong lexicalization. Comput. Linguist., 2012) that finitely ambiguous tree adjoining grammars cannot be transformed into a normal form (preserving the generated tree language), in which each production contains a lexical symbol. A more powerful model, the simple context-free tree grammar, admits such a normal form. It can be effectively constructed and the maximal rank of the nonterminals only increases by 1. Thus, simple context-free tree grammars strongly lexicalize tree adjoining grammars and themselves.

3 0.11629303 181 acl-2012-Spectral Learning of Latent-Variable PCFGs

Author: Shay B. Cohen ; Karl Stratos ; Michael Collins ; Dean P. Foster ; Lyle Ungar

Abstract: We introduce a spectral learning algorithm for latent-variable PCFGs (Petrov et al., 2006). Under a separability (singular value) condition, we prove that the method provides consistent parameter estimates.

4 0.093865916 108 acl-2012-Hierarchical Chunk-to-String Translation

Author: Yang Feng ; Dongdong Zhang ; Mu Li ; Qun Liu

Abstract: We present a hierarchical chunk-to-string translation model, which can be seen as a compromise between the hierarchical phrasebased model and the tree-to-string model, to combine the merits of the two models. With the help of shallow parsing, our model learns rules consisting of words and chunks and meanwhile introduce syntax cohesion. Under the weighed synchronous context-free grammar defined by these rules, our model searches for the best translation derivation and yields target translation simultaneously. Our experiments show that our model significantly outperforms the hierarchical phrasebased model and the tree-to-string model on English-Chinese Translation tasks.

5 0.089385852 106 acl-2012-Head-driven Transition-based Parsing with Top-down Prediction

Author: Katsuhiko Hayashi ; Taro Watanabe ; Masayuki Asahara ; Yuji Matsumoto

Abstract: This paper presents a novel top-down headdriven parsing algorithm for data-driven projective dependency analysis. This algorithm handles global structures, such as clause and coordination, better than shift-reduce or other bottom-up algorithms. Experiments on the English Penn Treebank data and the Chinese CoNLL-06 data show that the proposed algorithm achieves comparable results with other data-driven dependency parsing algorithms.

6 0.085879415 127 acl-2012-Large-Scale Syntactic Language Modeling with Treelets

7 0.083994433 184 acl-2012-String Re-writing Kernel

8 0.082495414 174 acl-2012-Semantic Parsing with Bayesian Tree Transducers

9 0.069559202 137 acl-2012-Lemmatisation as a Tagging Task

10 0.069321193 38 acl-2012-Bayesian Symbol-Refined Tree Substitution Grammars for Syntactic Parsing

11 0.06834843 140 acl-2012-Machine Translation without Words through Substring Alignment

12 0.06741289 4 acl-2012-A Comparative Study of Target Dependency Structures for Statistical Machine Translation

13 0.065846913 19 acl-2012-A Ranking-based Approach to Word Reordering for Statistical Machine Translation

14 0.065047324 90 acl-2012-Extracting Narrative Timelines as Temporal Dependency Structures

15 0.063775539 57 acl-2012-Concept-to-text Generation via Discriminative Reranking

16 0.057259195 109 acl-2012-Higher-order Constituent Parsing and Parser Combination

17 0.056042522 172 acl-2012-Selective Sharing for Multilingual Dependency Parsing

18 0.055404864 34 acl-2012-Automatically Learning Measures of Child Language Development

19 0.054643005 83 acl-2012-Error Mining on Dependency Trees

20 0.05453993 78 acl-2012-Efficient Search for Transformation-based Inference


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.155), (1, -0.004), (2, -0.094), (3, -0.059), (4, -0.076), (5, -0.01), (6, -0.008), (7, 0.119), (8, 0.029), (9, 0.004), (10, -0.052), (11, -0.046), (12, -0.044), (13, -0.002), (14, 0.05), (15, -0.107), (16, 0.023), (17, -0.015), (18, 0.008), (19, 0.027), (20, 0.028), (21, -0.028), (22, -0.048), (23, -0.029), (24, -0.118), (25, 0.052), (26, 0.008), (27, -0.041), (28, 0.015), (29, 0.065), (30, -0.019), (31, 0.168), (32, -0.157), (33, 0.014), (34, 0.043), (35, 0.049), (36, -0.11), (37, -0.026), (38, -0.215), (39, -0.171), (40, 0.163), (41, -0.028), (42, 0.056), (43, 0.106), (44, -0.018), (45, 0.01), (46, 0.016), (47, 0.139), (48, 0.084), (49, -0.029)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9811902 139 acl-2012-MIX Is Not a Tree-Adjoining Language

Author: Makoto Kanazawa ; Sylvain Salvati

Abstract: The language MIX consists of all strings over the three-letter alphabet {a, b, c} that contain an equal number of occurrences of each letter. We prove Joshi’s (1985) conjecture that MIX is not a tree-adjoining language.

2 0.87613922 185 acl-2012-Strong Lexicalization of Tree Adjoining Grammars

Author: Andreas Maletti ; Joost Engelfriet

Abstract: Recently, it was shown (KUHLMANN, SATTA: Tree-adjoining grammars are not closed under strong lexicalization. Comput. Linguist., 2012) that finitely ambiguous tree adjoining grammars cannot be transformed into a normal form (preserving the generated tree language), in which each production contains a lexical symbol. A more powerful model, the simple context-free tree grammar, admits such a normal form. It can be effectively constructed and the maximal rank of the nonterminals only increases by 1. Thus, simple context-free tree grammars strongly lexicalize tree adjoining grammars and themselves.

3 0.7257002 181 acl-2012-Spectral Learning of Latent-Variable PCFGs

Author: Shay B. Cohen ; Karl Stratos ; Michael Collins ; Dean P. Foster ; Lyle Ungar

Abstract: We introduce a spectral learning algorithm for latent-variable PCFGs (Petrov et al., 2006). Under a separability (singular value) condition, we prove that the method provides consistent parameter estimates.

4 0.62299955 174 acl-2012-Semantic Parsing with Bayesian Tree Transducers

Author: Bevan Jones ; Mark Johnson ; Sharon Goldwater

Abstract: Many semantic parsing models use tree transformations to map between natural language and meaning representation. However, while tree transformations are central to several state-of-the-art approaches, little use has been made of the rich literature on tree automata. This paper makes the connection concrete with a tree transducer based semantic parsing model and suggests that other models can be interpreted in a similar framework, increasing the generality of their contributions. In particular, this paper further introduces a variational Bayesian inference algorithm that is applicable to a wide class of tree transducers, producing state-of-the-art semantic parsing results while remaining applicable to any domain employing probabilistic tree transducers.

5 0.52850986 107 acl-2012-Heuristic Cube Pruning in Linear Time

Author: Andrea Gesmundo ; Giorgio Satta ; James Henderson

Abstract: We propose a novel heuristic algorithm for Cube Pruning running in linear time in the beam size. Empirically, we show a gain in running time of a standard machine translation system, at a small loss in accuracy.

6 0.50663012 83 acl-2012-Error Mining on Dependency Trees

7 0.43666846 196 acl-2012-The OpenGrm open-source finite-state grammar software libraries

8 0.42003518 137 acl-2012-Lemmatisation as a Tagging Task

9 0.40598199 184 acl-2012-String Re-writing Kernel

10 0.37773278 108 acl-2012-Hierarchical Chunk-to-String Translation

11 0.36202097 34 acl-2012-Automatically Learning Measures of Child Language Development

12 0.32000077 38 acl-2012-Bayesian Symbol-Refined Tree Substitution Grammars for Syntactic Parsing

13 0.31394586 194 acl-2012-Text Segmentation by Language Using Minimum Description Length

14 0.3135269 106 acl-2012-Head-driven Transition-based Parsing with Top-down Prediction

15 0.3098051 127 acl-2012-Large-Scale Syntactic Language Modeling with Treelets

16 0.30925301 122 acl-2012-Joint Evaluation of Morphological Segmentation and Syntactic Parsing

17 0.27832389 186 acl-2012-Structuring E-Commerce Inventory

18 0.2750158 172 acl-2012-Selective Sharing for Multilingual Dependency Parsing

19 0.27081105 163 acl-2012-Prediction of Learning Curves in Machine Translation

20 0.27063376 197 acl-2012-Tokenization: Returning to a Long Solved Problem A Survey, Contrastive Experiment, Recommendations, and Toolkit


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(13, 0.016), (25, 0.014), (26, 0.043), (28, 0.062), (30, 0.049), (35, 0.272), (37, 0.019), (39, 0.057), (59, 0.012), (74, 0.028), (82, 0.037), (84, 0.036), (85, 0.028), (90, 0.076), (92, 0.074), (94, 0.027), (99, 0.066)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.80260265 139 acl-2012-MIX Is Not a Tree-Adjoining Language

Author: Makoto Kanazawa ; Sylvain Salvati

Abstract: The language MIX consists of all strings over the three-letter alphabet {a, b, c} that contain an equal number of occurrences of each letter. We prove Joshi’s (1985) conjecture that MIX is not a tree-adjoining language.

2 0.48328382 84 acl-2012-Estimating Compact Yet Rich Tree Insertion Grammars

Author: Elif Yamangil ; Stuart Shieber

Abstract: We present a Bayesian nonparametric model for estimating tree insertion grammars (TIG), building upon recent work in Bayesian inference of tree substitution grammars (TSG) via Dirichlet processes. Under our general variant of TIG, grammars are estimated via the Metropolis-Hastings algorithm that uses a context free grammar transformation as a proposal, which allows for cubic-time string parsing as well as tree-wide joint sampling of derivations in the spirit of Cohn and Blunsom (2010). We use the Penn treebank for our experiments and find that our proposal Bayesian TIG model not only has competitive parsing performance but also finds compact yet linguistically rich TIG representations of the data.

3 0.48167869 174 acl-2012-Semantic Parsing with Bayesian Tree Transducers

Author: Bevan Jones ; Mark Johnson ; Sharon Goldwater

Abstract: Many semantic parsing models use tree transformations to map between natural language and meaning representation. However, while tree transformations are central to several state-of-the-art approaches, little use has been made of the rich literature on tree automata. This paper makes the connection concrete with a tree transducer based semantic parsing model and suggests that other models can be interpreted in a similar framework, increasing the generality of their contributions. In particular, this paper further introduces a variational Bayesian inference algorithm that is applicable to a wide class of tree transducers, producing state-of-the-art semantic parsing results while remaining applicable to any domain employing probabilistic tree transducers.

4 0.47897333 80 acl-2012-Efficient Tree-based Approximation for Entailment Graph Learning

Author: Jonathan Berant ; Ido Dagan ; Meni Adler ; Jacob Goldberger

Abstract: Learning entailment rules is fundamental in many semantic-inference applications and has been an active field of research in recent years. In this paper we address the problem of learning transitive graphs that describe entailment rules between predicates (termed entailment graphs). We first identify that entailment graphs exhibit a “tree-like” property and are very similar to a novel type of graph termed forest-reducible graph. We utilize this property to develop an iterative efficient approximation algorithm for learning the graph edges, where each iteration takes linear time. We compare our approximation algorithm to a recently-proposed state-of-the-art exact algorithm and show that it is more efficient and scalable both theoretically and empirically, while its output quality is close to that given by the optimal solution of the exact algorithm.

5 0.47451687 206 acl-2012-UWN: A Large Multilingual Lexical Knowledge Base

Author: Gerard de Melo ; Gerhard Weikum

Abstract: We present UWN, a large multilingual lexical knowledge base that describes the meanings and relationships of words in over 200 languages. This paper explains how link prediction, information integration and taxonomy induction methods have been used to build UWN based on WordNet and extend it with millions of named entities from Wikipedia. We additionally introduce extensions to cover lexical relationships, frame-semantic knowledge, and language data. An online interface provides human access to the data, while a software API enables applications to look up over 16 million words and names.

6 0.47415373 218 acl-2012-You Had Me at Hello: How Phrasing Affects Memorability

7 0.47160944 31 acl-2012-Authorship Attribution with Author-aware Topic Models

8 0.46995789 191 acl-2012-Temporally Anchored Relation Extraction

9 0.46580148 36 acl-2012-BIUTEE: A Modular Open-Source System for Recognizing Textual Entailment

10 0.46491307 187 acl-2012-Subgroup Detection in Ideological Discussions

11 0.46481687 29 acl-2012-Assessing the Effect of Inconsistent Assessors on Summarization Evaluation

12 0.46314386 21 acl-2012-A System for Real-time Twitter Sentiment Analysis of 2012 U.S. Presidential Election Cycle

13 0.45972916 38 acl-2012-Bayesian Symbol-Refined Tree Substitution Grammars for Syntactic Parsing

14 0.45957109 102 acl-2012-Genre Independent Subgroup Detection in Online Discussion Threads: A Study of Implicit Attitude using Textual Latent Semantics

15 0.45856822 28 acl-2012-Aspect Extraction through Semi-Supervised Modeling

16 0.45825624 63 acl-2012-Cross-lingual Parse Disambiguation based on Semantic Correspondence

17 0.45745188 10 acl-2012-A Discriminative Hierarchical Model for Fast Coreference at Large Scale

18 0.45645493 167 acl-2012-QuickView: NLP-based Tweet Search

19 0.45497781 40 acl-2012-Big Data versus the Crowd: Looking for Relationships in All the Right Places

20 0.4540506 156 acl-2012-Online Plagiarized Detection Through Exploiting Lexical, Syntax, and Semantic Information