acl acl2010 acl2010-228 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Marco Kuhlmann ; Alexander Koller ; Giorgio Satta
Abstract: Combinatory Categorial Grammar (CCG) is generally construed as a fully lexicalized formalism, where all grammars use one and the same universal set of rules, and crosslinguistic variation is isolated in the lexicon. In this paper, we show that the weak generative capacity of this ‘pure’ form of CCG is strictly smaller than that of CCG with grammar-specific rules, and of other mildly context-sensitive grammar formalisms, including Tree Adjoining Grammar (TAG). Our result also carries over to a multi-modal extension of CCG.
Reference: text
sentIndex sentText sentNum sentScore
1 In this paper, we show that the weak generative capacity of this ‘pure’ form of CCG is strictly smaller than that of CCG with grammar-specific rules, and of other mildly context-sensitive grammar formalisms, including Tree Adjoining Grammar (TAG). [sent-4, score-0.524]
2 Our result also carries over to a multi-modal extension of CCG. [sent-5, score-0.085]
3–4 1 Introduction Combinatory Categorial Grammar (CCG) (Steedman, 2001; Steedman and Baldridge, 2010) is an expressive grammar formalism with formal roots in combinatory logic (Curry et al., 1958) and links to the type-logical tradition of categorial grammar (Moortgat, 1997). [sent-6, score-0.493; sent-7, score-0.237]
5–6 It has been successfully used for a wide range of practical tasks, such as data-driven parsing (Hockenmaier and Steedman, 2002; Clark and Curran, 2007), wide-coverage semantic construction (Bos et al., 2004), and the modelling of syntactic priming (Reitter et al.). [sent-8, score-0.027; sent-9, score-0.035]
7 It is well-known that CCG can generate languages that are not context-free (which is necessary to capture natural languages), but can still be parsed in polynomial time. [sent-11, score-0.071]
8 Specifically, Vijay-Shanker and Weir (1994) identified a version of CCG that is weakly equivalent to Tree Adjoining Grammar (TAG) (Joshi and Schabes, 1997) and other mildly context-sensitive grammar formalisms, and can generate non-context-free languages such as anbncn. [sent-12, score-0.369]
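As an aside, membership in a^n b^n c^n is trivially decidable in linear time even though no context-free grammar generates the language; a minimal sketch (the function name is ours, not from the paper):

```python
def in_anbncn(w: str) -> bool:
    # A string belongs to a^n b^n c^n (n >= 1) iff it consists of three
    # equal-length blocks of a's, b's, and c's, in that order.
    n = len(w) // 3
    return n >= 1 and len(w) == 3 * n and w == "a" * n + "b" * n + "c" * n

assert in_anbncn("aabbcc")
assert not in_anbncn("abcabc")
assert not in_anbncn("aabbc")
```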
9 The generative capacity of CCG is commonly attributed to its flexible composition rules, which allow it to model more complex word orders than context-free grammar can. [sent-13, score-0.486]
10 The discussion of the (weak and strong) generative capacity of CCG and TAG has recently been revived (Hockenmaier and Young, 2008; Koller and Kuhlmann, 2009). [sent-14, score-0.246]
11–13 In particular, Koller and Kuhlmann (2009) have shown that CCGs that are pure (i.e., they can only use generalized composition rules, and there is no way to restrict the instances of these rules that may be used) and first-order (i.e., all argument categories are atomic) cannot generate anbncn. [sent-15, score-0.314; sent-17, score-0.243; sent-19, score-0.265]
14 This shows that the generative capacity of at least first-order CCG crucially relies on its ability to restrict rule instantiations, and is at odds with the general conception of CCG as a fully lexicalized formalism, in which all grammars use one and the same set of universal rules. [sent-20, score-0.629]
15 A question then is whether the result carries over to pure CCG with higher-order categories. [sent-21, score-0.37]
16–17 Our technical result is that every language L that can be generated by a pure CCG has a context-free sublanguage L0 ⊆ L such that every string in L is a permutation of some string in L0. [sent-23, score-0.314; sent-24, score-0.129]
18 This means that anbncn, for instance, cannot be generated by pure CCG, as it does not have any (non-trivial) permutation-equivalent sublanguages. [sent-26, score-0.47]
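The permutation relation invoked here is just equality of symbol multisets, which makes the observation easy to check mechanically; a small illustrative sketch (function names are ours, not the paper's):

```python
from collections import Counter

def permutation_equivalent(s: str, t: str) -> bool:
    # Two strings are permutations of each other iff they contain
    # the same symbols with the same multiplicities.
    return Counter(s) == Counter(t)

def anbncn(n: int) -> str:
    # The classic non-context-free language a^n b^n c^n.
    return "a" * n + "b" * n + "c" * n

# Distinct strings of a^n b^n c^n have distinct symbol counts, so no
# two of them are permutation-equivalent; any permutation-equivalent
# sublanguage of anbncn is trivial.
assert permutation_equivalent("aabbcc", "abcabc")
assert not permutation_equivalent(anbncn(2), anbncn(3))
```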
19 Conversely, we show that there are still languages that can be generated by pure CCG but not by context-free grammar. [sent-27, score-0.385]
20 We then show that our permutation language lemma also holds for pure multi-modal CCG as defined by Baldridge and Kruijff (2003), in which the use of rules can be controlled through the lexicon entries by assigning types to slashes. [sent-28, score-0.491]
21 Since this extension was intended to do away with the need for grammar-specific rule restrictions, it comes as quite a surprise that pure multi-modal [sent-29, score-0.422]
[page footer: Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 534–543, Uppsala, Sweden, 11–16 July 2010. ©2010 Association for Computational Linguistics]
22 CCG in the style of Baldridge and Kruijff (2003) is still less expressive than the CCG formalism used by Vijay-Shanker and Weir (1994). [sent-31, score-0.238]
23 This means that word order in CCG cannot be fully lexicalized with the current formal tools; some ordering constraints must be specified via language-specific combination rules and not in lexicon entries. [sent-32, score-0.201]
24 On the other hand, as pure multi-modal CCG has been successfully applied to model the syntax of a variety of natural languages, another way to read our results is as contributions to a discussion about the exact expressiveness needed to model natural language. [sent-33, score-0.407]
25 The remainder of this paper is structured as follows. [sent-34, score-0.043]
26 In Section 2, we introduce the formalism of pure CCG that we consider in this paper, and illustrate the relevance of rule restrictions. [sent-35, score-0.559]
27 We then study the generative capacity of pure CCG in Section 3; this section also presents our main result. [sent-36, score-0.564]
28 In Section 4, we show that this result still holds for multi-modal CCG. [sent-37, score-0.064]
29 Section 5 concludes the paper with a discussion of the relevance of our findings. [sent-38, score-0.089]
30 2 Combinatory Categorial Grammar We start by providing formal definitions for categories, syntactic rules, and grammars, and then discuss the relevance of rule restrictions for CCG. [sent-39, score-0.215]
31 2.1 Categories Given a finite set A of atomic categories, the set of categories over A is the smallest set C such that A ⊆ C, and x/y ∈ C and x\y ∈ C whenever x ∈ C and y ∈ C.
A category x/y represents a function that seeks a string with category y to the right (indicated by the forward slash) and returns a new string with category x; a category x\y instead seeks its argument to the left (indicated by the backward slash). [sent-45, score-0.782]
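The category language described above can be encoded directly as a small datatype; the following is our own illustrative sketch, not code from the paper:

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Slash:
    result: "Category"    # the x in x/y or x\y
    direction: str        # "/" seeks its argument to the right, "\" to the left
    argument: "Category"  # the y in x/y or x\y

Category = Union[Atom, Slash]

def show(c: Category) -> str:
    # Slashes are left-associative, so the result side needs no
    # parentheses, but a complex argument does:
    # (x/y)\z prints as x/y\z, while x/(y\z) keeps its parentheses.
    if isinstance(c, Atom):
        return c.name
    arg = show(c.argument)
    if isinstance(c.argument, Slash):
        arg = "(" + arg + ")"
    return show(c.result) + c.direction + arg
```

For example, `show(Slash(Slash(Atom("x"), "/", Atom("y")), "\\", Atom("z")))` yields `x/y\z`.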
In the remainder of this paper, we use lowercase sans-serif letters such as x, y, z as variables for categories, and the vertical bar | as a variable for slashes. [sent-46, score-0.141]
In order to save some parentheses, we understand slashes as left-associative operators, and write a category such as (x/y)\z as x/y\z. [sent-47, score-0.161]
The list of arguments of a category c is defined recursively as follows: if c is atomic, then it has no arguments. [sent-48, score-0.186]
If c = x|y for some categories x and y, then the arguments of c are the slashed category |y, plus the arguments of x. [sent-49, score-0.327]
37 We number the arguments of a category from outermost to innermost. [sent-50, score-0.231]
38 The arity of a category is the number of its arguments. [sent-51, score-0.166]
The target of a category c is the atomic category obtained by stripping off all of its arguments; for example, the target of x/y, x\y, and x/y\z is x. [sent-52, score-0.569]
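The argument, arity, and target definitions translate into short recursive functions; a sketch under our own tuple encoding of categories (names and encoding are ours, not the paper's):

```python
# A category is either an atomic name (a string) or a triple
# (result, slash, argument), where slash is "/" or "\".

def arguments(c):
    # The arguments of x|y are |y followed by the arguments of x,
    # i.e. listed from outermost to innermost as in the text.
    if isinstance(c, str):
        return []
    result, slash, arg = c
    return [(slash, arg)] + arguments(result)

def arity(c):
    # The arity of a category is the number of its arguments.
    return len(arguments(c))

def target(c):
    # Strip off arguments until an atomic category remains.
    while not isinstance(c, str):
        c = c[0]
    return c

# (s\np)/np, e.g. a transitive verb: its arguments are /np, then \np.
tv = (("s", "\\", "np"), "/", "np")
assert arguments(tv) == [("/", "np"), ("\\", "np")]
assert arity(tv) == 2
assert target(tv) == "s"
```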
x/y y ⇒ x forward application >
y x\y ⇒ x backward application <
x/y y/z ⇒ x/z forward harmonic composition >B
y\z x\y ⇒ x\z backward harmonic composition <B [sent-54, score-1.094]
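These four schemata can be written down directly as partial functions on categories; an illustrative sketch using a tuple encoding (our own formalization, not the paper's code):

```python
# Categories are atomic strings or (result, slash, argument) triples.
# Each rule returns the derived category, or None if it does not apply.

def forward_application(left, right):
    # x/y  y  =>  x
    if isinstance(left, tuple) and left[1] == "/" and left[2] == right:
        return left[0]
    return None

def backward_application(left, right):
    # y  x\y  =>  x
    if isinstance(right, tuple) and right[1] == "\\" and right[2] == left:
        return right[0]
    return None

def forward_composition(left, right):
    # x/y  y/z  =>  x/z   (forward harmonic composition, >B)
    if (isinstance(left, tuple) and left[1] == "/"
            and isinstance(right, tuple) and right[1] == "/"
            and left[2] == right[0]):
        return (left[0], "/", right[2])
    return None

def backward_composition(left, right):
    # y\z  x\y  =>  x\z   (backward harmonic composition, <B)
    if (isinstance(left, tuple) and left[1] == "\\"
            and isinstance(right, tuple) and right[1] == "\\"
            and right[2] == left[0]):
        return (right[0], "\\", left[2])
    return None

assert forward_application(("s", "/", "np"), "np") == "s"
assert backward_application("np", ("s", "\\", "np")) == "s"
assert forward_composition(("s", "/", "vp"), ("vp", "/", "np")) == ("s", "/", "np")
assert backward_composition(("vp", "\\", "np"), ("s", "\\", "vp")) == ("s", "\\", "np")
```

Note that nothing in these schemata restricts which category instances they apply to; that unrestricted applicability is exactly what makes a CCG "pure" in the sense of this paper.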
wordName wordTfidf (topN-words)
[('ccg', 0.663), ('pure', 0.314), ('xny', 0.208), ('capacity', 0.178), ('anbncn', 0.156), ('composition', 0.139), ('formalism', 0.137), ('category', 0.127), ('kuhlmann', 0.117), ('combinatory', 0.115), ('atomic', 0.107), ('categorial', 0.105), ('grammar', 0.097), ('weir', 0.094), ('baldridge', 0.092), ('padua', 0.091), ('slash', 0.091), ('kruijff', 0.091), ('categories', 0.082), ('koller', 0.082), ('steedman', 0.08), ('backward', 0.08), ('mildly', 0.074), ('permutation', 0.074), ('expressive', 0.072), ('generative', 0.072), ('rules', 0.068), ('restrictions', 0.065), ('forward', 0.064), ('weak', 0.062), ('hockenmaier', 0.061), ('adjoining', 0.061), ('relevance', 0.06), ('uppsala', 0.059), ('arguments', 0.059), ('lexicalized', 0.057), ('harmonic', 0.056), ('carries', 0.056), ('string', 0.055), ('formalisms', 0.051), ('universal', 0.05), ('rule', 0.048), ('philology', 0.045), ('ccgs', 0.045), ('moortgat', 0.045), ('outermost', 0.045), ('nint', 0.045), ('stripping', 0.045), ('grammars', 0.043), ('remainder', 0.043), ('languages', 0.042), ('formal', 0.042), ('fat', 0.042), ('jy', 0.042), ('curry', 0.042), ('conception', 0.042), ('odds', 0.042), ('strictly', 0.041), ('ative', 0.039), ('vijayshanker', 0.039), ('arity', 0.039), ('xle', 0.039), ('tag', 0.038), ('crossed', 0.037), ('expressiveness', 0.037), ('restrict', 0.036), ('construed', 0.035), ('crosslinguistic', 0.035), ('tradition', 0.035), ('priming', 0.035), ('holds', 0.035), ('indicated', 0.035), ('fully', 0.034), ('save', 0.034), ('italy', 0.034), ('bar', 0.034), ('lowercase', 0.033), ('giorgio', 0.033), ('schabes', 0.033), ('vertical', 0.031), ('surprise', 0.031), ('seeks', 0.031), ('satta', 0.031), ('sweden', 0.031), ('isolated', 0.03), ('roots', 0.03), ('discussion', 0.029), ('operators', 0.029), ('bos', 0.029), ('instantiations', 0.029), ('excellence', 0.029), ('joshi', 0.029), ('still', 0.029), ('extension', 0.029), ('vice', 0.028), ('parentheses', 0.028), ('argument', 0.027), ('successfully', 0.027), 
('crucially', 0.027), ('saarbr', 0.027)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999994 228 acl-2010-The Importance of Rule Restrictions in CCG
Author: Marco Kuhlmann ; Alexander Koller ; Giorgio Satta
Abstract: Combinatory Categorial Grammar (CCG) is generally construed as a fully lexicalized formalism, where all grammars use one and the same universal set of rules, and crosslinguistic variation is isolated in the lexicon. In this paper, we show that the weak generative capacity of this ‘pure’ form of CCG is strictly smaller than that of CCG with grammar-specific rules, and of other mildly context-sensitive grammar formalisms, including Tree Adjoining Grammar (TAG). Our result also carries over to a multi-modal extension of CCG.
2 0.46147788 23 acl-2010-Accurate Context-Free Parsing with Combinatory Categorial Grammar
Author: Timothy A. D. Fowler ; Gerald Penn
Abstract: The definition of combinatory categorial grammar (CCG) in the literature varies quite a bit from author to author. However, the differences between the definitions are important in terms of the language classes of each CCG. We prove that a wide range of CCGs are strongly context-free, including the CCG of CCGbank and of the parser of Clark and Curran (2007). In light of these new results, we train the PCFG parser of Petrov and Klein (2007) on CCGbank and achieve state of the art results in supertagging accuracy, PARSEVAL measures and dependency accuracy.
3 0.23250388 260 acl-2010-Wide-Coverage NLP with Linguistically Expressive Grammars
Author: Julia Hockenmaier ; Yusuke Miyao ; Josef van Genabith
Abstract: unkown-abstract
4 0.2077097 203 acl-2010-Rebanking CCGbank for Improved NP Interpretation
Author: Matthew Honnibal ; James R. Curran ; Johan Bos
Abstract: Once released, treebanks tend to remain unchanged despite any shortcomings in their depth of linguistic analysis or coverage of specific phenomena. Instead, separate resources are created to address such problems. In this paper we show how to improve the quality of a treebank, by integrating resources and implementing improved analyses for specific constructions. We demonstrate this rebanking process by creating an updated version of CCGbank that includes the predicate-argument structure of both verbs and nouns, baseNP brackets, verb-particle constructions, and restrictive and non-restrictive nominal modifiers; and evaluate the impact of these changes on a statistical parser.
5 0.19560002 172 acl-2010-Minimized Models and Grammar-Informed Initialization for Supertagging with Highly Ambiguous Lexicons
Author: Sujith Ravi ; Jason Baldridge ; Kevin Knight
Abstract: We combine two complementary ideas for learning supertaggers from highly ambiguous lexicons: grammar-informed tag transitions and models minimized via integer programming. Each strategy on its own greatly improves performance over basic expectation-maximization training with a bitag Hidden Markov Model, which we show on the CCGbank and CCG-TUT corpora. The strategies provide further error reductions when combined. We describe a new two-stage integer programming strategy that efficiently deals with the high degree of ambiguity on these datasets while obtaining the full effect of model minimization.
6 0.14494537 114 acl-2010-Faster Parsing by Supertagger Adaptation
7 0.085516214 119 acl-2010-Fixed Length Word Suffix for Factored Statistical Machine Translation
8 0.070404865 12 acl-2010-A Probabilistic Generative Model for an Intermediate Constituency-Dependency Representation
9 0.060509022 66 acl-2010-Compositional Matrix-Space Models of Language
10 0.059546523 169 acl-2010-Learning to Translate with Source and Target Syntax
11 0.05760666 217 acl-2010-String Extension Learning
12 0.053999308 95 acl-2010-Efficient Inference through Cascades of Weighted Tree Transducers
13 0.053451963 120 acl-2010-Fully Unsupervised Core-Adjunct Argument Classification
14 0.053115539 118 acl-2010-Fine-Grained Tree-to-String Translation Rule Extraction
15 0.051275611 46 acl-2010-Bayesian Synchronous Tree-Substitution Grammar Induction and Its Application to Sentence Compression
16 0.048731428 128 acl-2010-Grammar Prototyping and Testing with the LinGO Grammar Matrix Customization System
17 0.047124237 130 acl-2010-Hard Constraints for Grammatical Function Labelling
18 0.045210499 162 acl-2010-Learning Common Grammar from Multilingual Corpus
19 0.044622663 84 acl-2010-Detecting Errors in Automatically-Parsed Dependency Relations
20 0.042477138 67 acl-2010-Computing Weakest Readings
topicId topicWeight
[(0, -0.118), (1, -0.013), (2, 0.108), (3, -0.027), (4, -0.124), (5, -0.137), (6, 0.224), (7, 0.065), (8, 0.216), (9, -0.062), (10, 0.316), (11, 0.119), (12, -0.276), (13, 0.113), (14, -0.044), (15, 0.034), (16, 0.026), (17, -0.072), (18, -0.117), (19, -0.055), (20, 0.103), (21, -0.141), (22, -0.103), (23, 0.02), (24, -0.001), (25, -0.029), (26, -0.064), (27, -0.038), (28, 0.04), (29, 0.023), (30, -0.025), (31, 0.101), (32, 0.025), (33, 0.017), (34, -0.076), (35, 0.098), (36, 0.054), (37, 0.021), (38, 0.046), (39, 0.009), (40, -0.01), (41, -0.013), (42, -0.025), (43, 0.035), (44, 0.054), (45, 0.067), (46, 0.078), (47, -0.082), (48, -0.039), (49, -0.063)]
simIndex simValue paperId paperTitle
same-paper 1 0.98117369 228 acl-2010-The Importance of Rule Restrictions in CCG
Author: Marco Kuhlmann ; Alexander Koller ; Giorgio Satta
Abstract: Combinatory Categorial Grammar (CCG) is generally construed as a fully lexicalized formalism, where all grammars use one and the same universal set of rules, and crosslinguistic variation is isolated in the lexicon. In this paper, we show that the weak generative capacity of this ‘pure’ form of CCG is strictly smaller than that of CCG with grammar-specific rules, and of other mildly context-sensitive grammar formalisms, including Tree Adjoining Grammar (TAG). Our result also carries over to a multi-modal extension of CCG.
2 0.90204936 23 acl-2010-Accurate Context-Free Parsing with Combinatory Categorial Grammar
Author: Timothy A. D. Fowler ; Gerald Penn
Abstract: The definition of combinatory categorial grammar (CCG) in the literature varies quite a bit from author to author. However, the differences between the definitions are important in terms of the language classes of each CCG. We prove that a wide range of CCGs are strongly context-free, including the CCG of CCGbank and of the parser of Clark and Curran (2007). In light of these new results, we train the PCFG parser of Petrov and Klein (2007) on CCGbank and achieve state of the art results in supertagging accuracy, PARSEVAL measures and dependency accuracy.
3 0.65293139 260 acl-2010-Wide-Coverage NLP with Linguistically Expressive Grammars
Author: Julia Hockenmaier ; Yusuke Miyao ; Josef van Genabith
Abstract: unkown-abstract
4 0.6430319 203 acl-2010-Rebanking CCGbank for Improved NP Interpretation
Author: Matthew Honnibal ; James R. Curran ; Johan Bos
Abstract: Once released, treebanks tend to remain unchanged despite any shortcomings in their depth of linguistic analysis or coverage of specific phenomena. Instead, separate resources are created to address such problems. In this paper we show how to improve the quality of a treebank, by integrating resources and implementing improved analyses for specific constructions. We demonstrate this rebanking process by creating an updated version of CCGbank that includes the predicate-argument structure of both verbs and nouns, baseNP brackets, verb-particle constructions, and restrictive and non-restrictive nominal modifiers; and evaluate the impact of these changes on a statistical parser.
5 0.55747223 172 acl-2010-Minimized Models and Grammar-Informed Initialization for Supertagging with Highly Ambiguous Lexicons
Author: Sujith Ravi ; Jason Baldridge ; Kevin Knight
Abstract: We combine two complementary ideas for learning supertaggers from highly ambiguous lexicons: grammar-informed tag transitions and models minimized via integer programming. Each strategy on its own greatly improves performance over basic expectation-maximization training with a bitag Hidden Markov Model, which we show on the CCGbank and CCG-TUT corpora. The strategies provide further error reductions when combined. We describe a new two-stage integer programming strategy that efficiently deals with the high degree of ambiguity on these datasets while obtaining the full effect of model minimization.
6 0.52190453 114 acl-2010-Faster Parsing by Supertagger Adaptation
7 0.39216644 182 acl-2010-On the Computational Complexity of Dominance Links in Grammatical Formalisms
8 0.3322987 12 acl-2010-A Probabilistic Generative Model for an Intermediate Constituency-Dependency Representation
9 0.29009673 222 acl-2010-SystemT: An Algebraic Approach to Declarative Information Extraction
10 0.2838659 67 acl-2010-Computing Weakest Readings
11 0.27158639 128 acl-2010-Grammar Prototyping and Testing with the LinGO Grammar Matrix Customization System
12 0.24845114 95 acl-2010-Efficient Inference through Cascades of Weighted Tree Transducers
13 0.22362694 186 acl-2010-Optimal Rank Reduction for Linear Context-Free Rewriting Systems with Fan-Out Two
14 0.2120655 217 acl-2010-String Extension Learning
15 0.20872447 66 acl-2010-Compositional Matrix-Space Models of Language
16 0.20206298 234 acl-2010-The Use of Formal Language Models in the Typology of the Morphology of Amerindian Languages
17 0.19724207 162 acl-2010-Learning Common Grammar from Multilingual Corpus
18 0.19602039 84 acl-2010-Detecting Errors in Automatically-Parsed Dependency Relations
19 0.19476433 19 acl-2010-A Taxonomy, Dataset, and Classifier for Automatic Noun Compound Interpretation
20 0.18335347 169 acl-2010-Learning to Translate with Source and Target Syntax
topicId topicWeight
[(25, 0.046), (33, 0.016), (39, 0.012), (59, 0.044), (73, 0.015), (78, 0.653), (83, 0.032), (84, 0.017), (98, 0.062)]
simIndex simValue paperId paperTitle
same-paper 1 0.9533276 228 acl-2010-The Importance of Rule Restrictions in CCG
Author: Marco Kuhlmann ; Alexander Koller ; Giorgio Satta
Abstract: Combinatory Categorial Grammar (CCG) is generally construed as a fully lexicalized formalism, where all grammars use one and the same universal set of rules, and crosslinguistic variation is isolated in the lexicon. In this paper, we show that the weak generative capacity of this ‘pure’ form of CCG is strictly smaller than that of CCG with grammar-specific rules, and of other mildly context-sensitive grammar formalisms, including Tree Adjoining Grammar (TAG). Our result also carries over to a multi-modal extension of CCG.
2 0.82683718 94 acl-2010-Edit Tree Distance Alignments for Semantic Role Labelling
Author: Hector-Hugo Franco-Penya
Abstract: ―Tree SRL system‖ is a Semantic Role Labelling supervised system based on a tree-distance algorithm and a simple k-NN implementation. The novelty of the system lies in comparing the sentences as tree structures with multiple relations instead of extracting vectors of features for each relation and classifying them. The system was tested with the English CoNLL-2009 shared task data set where 79% accuracy was obtained. 1
3 0.72190857 229 acl-2010-The Influence of Discourse on Syntax: A Psycholinguistic Model of Sentence Processing
Author: Amit Dubey
Abstract: Probabilistic models of sentence comprehension are increasingly relevant to questions concerning human language processing. However, such models are often limited to syntactic factors. This paper introduces a novel sentence processing model that consists of a parser augmented with a probabilistic logic-based model of coreference resolution, which allows us to simulate how context interacts with syntax in a reading task. Our simulations show that a Weakly Interactive cognitive architecture can explain data which had been provided as evidence for the Strongly Interactive hypothesis.
4 0.64732838 10 acl-2010-A Latent Dirichlet Allocation Method for Selectional Preferences
Author: Alan Ritter ; Mausam Mausam ; Oren Etzioni
Abstract: The computation of selectional preferences, the admissible argument values for a relation, is a well-known NLP task with broad applicability. We present LDA-SP, which utilizes LinkLDA (Erosheva et al., 2004) to model selectional preferences. By simultaneously inferring latent topics and topic distributions over relations, LDA-SP combines the benefits of previous approaches: like traditional classbased approaches, it produces humaninterpretable classes describing each relation’s preferences, but it is competitive with non-class-based methods in predictive power. We compare LDA-SP to several state-ofthe-art methods achieving an 85% increase in recall at 0.9 precision over mutual information (Erk, 2007). We also evaluate LDA-SP’s effectiveness at filtering improper applications of inference rules, where we show substantial improvement over Pantel et al. ’s system (Pantel et al., 2007).
5 0.56090075 70 acl-2010-Contextualizing Semantic Representations Using Syntactically Enriched Vector Models
Author: Stefan Thater ; Hagen Furstenau ; Manfred Pinkal
Abstract: We present a syntactically enriched vector model that supports the computation of contextualized semantic representations in a quasi compositional fashion. It employs a systematic combination of first- and second-order context vectors. We apply our model to two different tasks and show that (i) it substantially outperforms previous work on a paraphrase ranking task, and (ii) achieves promising results on a wordsense similarity task; to our knowledge, it is the first time that an unsupervised method has been applied to this task.
6 0.46894598 17 acl-2010-A Structured Model for Joint Learning of Argument Roles and Predicate Senses
7 0.46101993 21 acl-2010-A Tree Transducer Model for Synchronous Tree-Adjoining Grammars
8 0.45592389 158 acl-2010-Latent Variable Models of Selectional Preference
9 0.45097834 23 acl-2010-Accurate Context-Free Parsing with Combinatory Categorial Grammar
10 0.44563195 49 acl-2010-Beyond NomBank: A Study of Implicit Arguments for Nominal Predicates
11 0.40255138 67 acl-2010-Computing Weakest Readings
12 0.39949989 115 acl-2010-Filtering Syntactic Constraints for Statistical Machine Translation
13 0.39139292 120 acl-2010-Fully Unsupervised Core-Adjunct Argument Classification
14 0.38735244 130 acl-2010-Hard Constraints for Grammatical Function Labelling
15 0.37378332 107 acl-2010-Exemplar-Based Models for Word Meaning in Context
16 0.37280768 153 acl-2010-Joint Syntactic and Semantic Parsing of Chinese
17 0.36746764 160 acl-2010-Learning Arguments and Supertypes of Semantic Relations Using Recursive Patterns
18 0.36713493 198 acl-2010-Predicate Argument Structure Analysis Using Transformation Based Learning
19 0.36516353 95 acl-2010-Efficient Inference through Cascades of Weighted Tree Transducers
20 0.36503598 71 acl-2010-Convolution Kernel over Packed Parse Forest