emnlp emnlp2011 emnlp2011-121 knowledge-graph by maker-knowledge-mining

121 emnlp-2011-Semi-supervised CCG Lexicon Extension


Source: pdf

Author: Emily Thomforde ; Mark Steedman

Abstract: This paper introduces Chart Inference (CI), an algorithm for deriving a CCG category for an unknown word from a partial parse chart. It is shown to be faster and more precise than a baseline brute-force method, and to achieve wider coverage than a rule-based system. In addition, we show the application of CI to a domain adaptation task for question words, which are largely missing in the Penn Treebank. When used in combination with self-training, CI increases the precision of the baseline StatCCG parser over subjectextraction questions by 50%. An error analysis shows that CI contributes to the increase by expanding the number of category types available to the parser, while self-training adjusts the counts.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract This paper introduces Chart Inference (CI), an algorithm for deriving a CCG category for an unknown word from a partial parse chart. [sent-6, score-0.535]

2 In addition, we show the application of CI to a domain adaptation task for question words, which are largely missing in the Penn Treebank. [sent-8, score-0.141]

3 When used in combination with self-training, CI increases the precision of the baseline StatCCG parser over subjectextraction questions by 50%. [sent-9, score-0.122]

4 An error analysis shows that CI contributes to the increase by expanding the number of category types available to the parser, while self-training adjusts the counts. [sent-10, score-0.424]

5 1 Introduction Unseen lexical items are a major cause of error in strongly lexicalised parsers such as those based on CCG (Clark and Curran, 2003; Hockenmaier, 2003). [sent-11, score-0.169]

6 The problem is especially acute for less privileged languages, but even in the case of English, we are aware of many category types entirely missing from the Penn Treebank (Clark et al. [sent-12, score-0.566]

7 In the case of totally unseen words, the standard method used by StatCCG (Hockenmaier, 2003) and many other treebank parsers is part-of-speech backoff, which is quite effective, affording an F-score of 93% over dependencies in §00 in the optimal configuration. [sent-14, score-0.057]

8 It is difficult to say how backing off affects dependency errors, but when we examine category match accuracy of the CCGBank-trained parser, we find that POS backoff has been used on 19. [sent-15, score-0.573]

9 Of the 3320 items the parser labelled incorrectly, 675 (20. [sent-20, score-0.255]

10 3%) are words that are missing from the lexicon entirely. [sent-21, score-0.315]

11 In the best case, if we were able to learn lexical entries for those 675, we could transfer them to lexical treatment, which is 93. [sent-22, score-0.086]

12 Under these conditions, we predict a further 631 word/category pairs to be tagged correctly by the parser, reducing the error rate from 7. [sent-25, score-0.053]

13 Further learning of words from parsing unlabelled data would result in the recovery of interesting and important category types that are missing from our standard lexical resources. [sent-28, score-0.787]

14 This paper introduces Chart Inference (CI) as a strategy for deducing a ranked set of possible categories for an unknown word using the partial chart formed from the known words that surround it. [sent-29, score-0.644]

15 CCG (Steedman, 2000) is particularly suited to this problem, because category types can be inferred from the types of the surrounding constituents. [sent-30, score-0.378]
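
As a concrete illustration of how surrounding constituents constrain an unknown category, here is a minimal sketch (not the paper's implementation; the helper name and string-encoded categories are invented for this example) using only CCG's two application rules:

```python
# If a constituent "known X" (or "X known") must combine into `result`,
# forward application (X/Y Y => X) and backward application (Y X\Y => X)
# each pin down one candidate category for the unknown word X.

def candidates_for_unknown(known, result, unknown_on_right=True):
    """Return candidate CCG categories for an unknown word that must
    combine with a neighbour of category `known` to yield `result`."""
    if unknown_on_right:
        # known X => result: X takes `known` as an argument on its left
        return [f"({result})\\({known})"]
    # X known => result: X takes `known` as an argument on its right
    return [f"({result})/({known})"]

# e.g. "cats X" with cats:NP and the whole string a sentence S:
print(candidates_for_unknown("NP", "S", unknown_on_right=True))  # ['(S)\\(NP)']
```

With richer combinators (composition, type-raising), as used by CI proper, the candidate set grows beyond these two application-only solutions.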

16 CI is designed to take advantage of this property of a generative CCGBank-trained parser, and of access to the full inventory of CCG combinators and non-combinatory unary rules from the trained model. [sent-31, score-0.345]

17 It is capable of learning category types that are completely missing from the lexicon, and is superior to existing learning systems in both precision and efficiency. [sent-32, score-0.511]

18 The first compares three word-learning methods for their ability to converge to a toy target lexicon. [sent-34, score-0.19]

19 The final experiment shows how Chart Inference can be effectively used in a domain adaptation task where a small number of category types are known to be missing from the lexicon. [sent-39, score-0.529]

20 Since the learning portion of the algorithm is unsupervised, it has access to an essentially unlimited amount of unlabelled data, and it can afford to skip any sentence that does not conform to the one-unseen-word restriction. [sent-41, score-0.369]

21 Attempting two or more OOL words at a time from one sentence would compound the search space and the error rate. [sent-42, score-0.053]

22 We do not address the much harder problem of hypothesising missing categories for known words, which should presumably be handled by quite other methods, such as prior offline generalization of the lexicon. [sent-43, score-0.268]
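
The one-unseen-word restriction described above can be sketched as a simple filter (a hypothetical helper, not from the paper; the learner skips any sentence that does not contain exactly one out-of-lexicon word):

```python
def trainable(sentence, lexicon):
    """Return the single OOL word if the sentence qualifies, else None."""
    ool = [w for w in sentence.split() if w not in lexicon]
    return ool[0] if len(ool) == 1 else None

lexicon = {"the", "cat", "sat"}
assert trainable("the cat sat", lexicon) is None           # no unseen word
assert trainable("the cat miaowed", lexicon) == "miaowed"  # exactly one
assert trainable("a dog barked", lexicon) is None          # too many
```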

23 1 A Brute-force System One of the early lexical acquisition systems using Categorial Grammar was that of Watkinson and Manandhar (1999; 2000; 2001a; 2001b). [sent-45, score-0.043]

24 This system attempted to simultaneously learn a CG lexicon and annotate unlabelled text with parse derivations. [sent-46, score-0.517]

25 Using a stripped-down parser that only utilised the forward- and backward-application rules, they iteratively learned the lexicon from the feedback from online parsing. [sent-47, score-0.296]

26 The system decided which parse was best based on the lexicon, and then decided which additions to the lexicon to make based on principles of compression. [sent-48, score-0.385]

27 After each change, the system reexamined the parses for previous sentences and updated them to reflect the new lexicon. [sent-49, score-0.07]

28 They report fully convergent results on two toy corpora, but the parsing accuracy of the system trained on natural language data was far below the state of the art. [sent-50, score-0.308]

29 However, they do show categorial grammar to be a promising basis for artificial language acquisition, because CCG makes learning the lexicon and learning the grammar the same task (Watkinson and Manandhar, 1999). [sent-51, score-0.397]

30 They also showed that seeding the lexicon with examples of lexical items (closed-class words in their case), rather than just a list of possible category types, increased its chances of converging. [sent-52, score-0.66]

31 This approach of automating the learning process differs from the previous language learning methods described, in that it doesn’t require the specification of any particular patterns, only knowledge of the grammar formalism. [sent-53, score-0.118]

32 For this paper, as a baseline, we implement a generalised version of Watkinson and Manandhar’s mechanism for determining the category γ of a single OOL word in a sentence where the rest of the words C1. [sent-54, score-0.449]

33 This is equivalent to backing off to the set of all known category types; the learner returns the category that maximises the probability of the completed parse tree. [sent-61, score-0.806]
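
A minimal sketch of this brute-force baseline, assuming a placeholder `parse_prob` scorer in place of the real parser (names and the toy scorer are inventions of this sketch):

```python
# Substitute every known category type for the OOL word and keep the
# one whose best completed parse scores highest.

def brute_force_category(words, ool_index, category_types, parse_prob):
    best = None
    for cat in category_types:
        p = parse_prob(words, ool_index, cat)   # prob. of best completed parse
        if p is not None and (best is None or p > best[1]):
            best = (cat, p)
    return best[0] if best else None

# Toy scorer: pretend only S\NP yields a complete parse here.
toy = lambda w, i, c: 0.9 if c == "S\\NP" else None
print(brute_force_category(["cats", "miaow"], 1, ["NP", "S\\NP", "N"], toy))
# prints S\NP
```

Because every category type is tried for every sentence, this baseline's cost grows with the size of the category inventory, which is what makes CI's targeted deduction faster.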

34 We ignore the optimisation and compression steps of the original system. [sent-62, score-0.046]

35 Yao et al. (2009a; 2009b) developed a learning system based on handwritten translation rules for deducing the category (X) of a single unknown word in a sentence consisting of a sequence of partially-parsed constituents (A. [sent-65, score-0.662]

36 Their system was based on a small inventory of inference rules that eliminated ambiguity in the ordering of arguments. [sent-68, score-0.24]

37 For example, one of the Level 3 inference rules specifies the order of the arguments in the deduced category: A X B C → D ⇒ X = ((D\A)/C)/B. Without this inductive bias the learner would have to deal with the ambiguity of the options ((D/C)/B)\A and ((D/C)\A)/B at minimum. [sent-69, score-0.171]
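
The argument-ordering bias in the Level 3 rule quoted above can be expressed mechanically; a sketch under the assumption of string-encoded categories (the function name is invented, and the output carries extra outer parentheses):

```python
# For a pattern  A X B C -> D, arguments to the left of X are consumed
# with backslashes and arguments to the right with forward slashes,
# farthest right argument innermost, so the nearest neighbour B ends
# up as the outermost argument: X = ((D\A)/C)/B.

def deduce(left_args, right_args, result):
    cat = result
    for a in left_args:               # left arguments, innermost first
        cat = f"({cat}\\{a})"
    for r in reversed(right_args):    # right args, farthest-right first
        cat = f"({cat}/{r})"
    return cat

print(deduce(["A"], ["B", "C"], "D"))  # prints (((D\A)/C)/B), i.e. ((D\A)/C)/B
```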

38 In addition, they limited their learner to CG-compatible parse structures and their constituent strings to length 4. [sent-70, score-0.048]

39 Their argument is that only this minimal bias is needed to learn syntactic structures, including the fronting of polar interrogative auxiliaries and auxiliary word order (should > have > been), from a training set that did not explicitly contain full evidence for them. [sent-71, score-0.137]

40 (2009b) used the full set of CCG combinators to generate learned categories, they employed a post-processing step to filter spurious categories by checking whether the category DERIVE([C1 . [sent-73, score-0.567]

41 Figure 1: Generalised recursive rule-based algorithm, where [C1. [sent-79, score-0.049]

42 Cn] is a sequence of categories, one of which is X, β is a result category, and γ is the (initially empty) category set. [sent-82, score-0.278]

43 participated in a CG-only derivation (using application rules only). [sent-83, score-0.13]

44 This is effective in limiting spurious derivations, but at the expense of reduced recall on those sentences for whose analysis CCG rules of composition etc. [sent-84, score-0.156]

45 Their rules were effective for their toy-scale datasets, but for the purposes of this paper we have implemented a generalised version of the recursive algorithm for use in wide-coverage parsing. [sent-86, score-0.304]

46 It takes a sequence of categorial constituents, all known except one (X), and builds a candidate set of categories (γ) for the unknown word by recursively applying Yao’s Level 0 and Level 1inference rules. [sent-88, score-0.305]
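
A toy rendering of this recursive candidate construction, restricted to the two application rules (the real systems use more of CCG); the tuple encoding and all names are inventions of this sketch:

```python
# Categories are atomic strings or (result, slash, argument) triples.
# Exactly one element of `seq` is None, standing for the unknown word X.

def combine(l, r):
    """Forward/backward application over tuple-encoded categories."""
    out = []
    if isinstance(l, tuple) and l[1] == '/' and l[2] == r:
        out.append(l[0])      # X/Y  Y   => X
    if isinstance(r, tuple) and r[1] == '\\' and r[2] == l:
        out.append(r[0])      # Y    X\Y => X
    return out

def candidates(seq, goal, g=lambda c: c):
    """g maps the category of the constituent containing X back to X's own."""
    if len(seq) == 1:
        return {g(goal)} if seq[0] is None else set()
    out = set()
    for i in range(len(seq) - 1):
        l, r = seq[i], seq[i + 1]
        shrunk = seq[:i] + [None] + seq[i + 2:]
        if l is None:                     # X absorbs its right neighbour: X = Y/r
            out |= candidates(shrunk, goal, lambda Y, g=g, r=r: g((Y, '/', r)))
        elif r is None:                   # X absorbs its left neighbour: X = Y\l
            out |= candidates(shrunk, goal, lambda Y, g=g, l=l: g((Y, '\\', l)))
        else:                             # combine two known categories
            for c in combine(l, r):
                out |= candidates(seq[:i] + [c] + seq[i + 2:], goal, g)
    return out

# "cats X" as a sentence: X must be S\NP
print(candidates(["NP", None], "S"))  # {('S', '\\', 'NP')}
# transitive-verb position "NP X NP": both bracketings survive -- exactly
# the (S\NP)/NP vs (S/NP)\NP ambiguity that the inference rules prune
print(candidates(["NP", None, "NP"], "S"))
```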

47 3 Chart Inference Both Watkinson’s and Yao’s experiments were fully convergent over toy datasets, but did not scale to realistic corpora. [sent-90, score-0.308]

48 Watkinson attempted to learn from the LLL corpus (Kazakov et al. [sent-91, score-0.058]

49 , 1998), but attributed the failure to the small amount of training data relative to the corpus, and the naive initial category set. [sent-92, score-0.363]

50 Yao’s method was only ever designed as a proof-of-concept to show how much of the language can be learned from partial evidence, and was not meant to be run in earnest in a real-world learning setting. [sent-93, score-0.121]

51 For one, the rules do not cover the full set of partial parse conditions, and further to that, they do not allow for partial parses to be reanalysed within the learning framework. [sent-94, score-0.408]

52 To that end, we have developed a learning algorithm that is capable of operating within the one-unknown-word-per-sentence learning setting established by the two baseline systems, that is able to invent new category types, and that is able to take advantage of the full generality of CCG. [sent-95, score-0.465]

53 This section shows that it performs as well as the previous two systems on a toy corpus, and the next section proves that it more readily scales to natural language domains. [sent-96, score-0.19]

54 Mellish (1989) established a two-stage bidirectional chart parser for diagnosing errors in input text. [sent-97, score-0.409]

55 His method relied heavily on heuristic rules, and the only evaluation he did was on number of cycles needed for each type of error, and number of solutions produced. [sent-98, score-0.118]

56 His method was designed for use in producing parses where the original parser failed, dealing with omissions, insertions, and misspelled/unknown words. [sent-99, score-0.234]

57 The only method used to rank the possible solutions was heuristic scores. [sent-100, score-0.042]

58 Kato (1994) implemented a revised system that used a generalised top-down parser, rather than a chart, and was able to get the number of cycles to decrease. [sent-101, score-0.287]

59 In both cases the evaluation was only on a toy corpus, and they did not evaluate on whether the systems diagnosed the errors correctly, or whether the solution they offered was accurate. [sent-102, score-0.279]

60 They also had to deal with cases where the error was ambiguous, for example, where an inserted word could be interpreted as a misspelling or vice-versa. [sent-103, score-0.096]

61 Where Mellish uses the two-stage parsing process to complete malformed parses, we use it to diagnose unknown lexical items. [sent-104, score-0.183]

62 In addition, we work on the scale of a full grammar and wide-coverage parser, using modern lexical corpora. [sent-105, score-0.158]

63 Our method is a wrapper for a naive generative CCG parser, StatOpenCCG (Christodoulopoulos, 2008), a statistical extension to OpenCCG (White and Baldridge, 2003). [sent-106, score-0.164]

64 In the general case, the parser is trained on all the labelled data available in a particular learning setting, then the learner discovers new lexical items from unlabelled text. [sent-107, score-0.575]
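
That general setting can be summarised as a loop; everything below is a hypothetical sketch, with `train`, `one_ool_word`, and `chart_inference` standing in for StatOpenCCG components that the text does not specify:

```python
def extend_lexicon(labelled, unlabelled, lexicon,
                   train, one_ool_word, chart_inference):
    """Train on labelled data, then harvest entries from unlabelled text."""
    parser = train(labelled)                  # supervised stage
    for sentence in unlabelled:               # unsupervised stage
        ool = one_ool_word(sentence, lexicon)
        if ool is None:                       # skip non-conforming sentences
            continue
        for category, score in chart_inference(parser, sentence, ool):
            lexicon.setdefault(ool, []).append((category, score))
    return lexicon

# Toy stand-ins, just to exercise the control flow:
def one_ool(sentence, lexicon):
    unseen = [w for w in sentence.split() if w not in lexicon]
    return unseen[0] if len(unseen) == 1 else None

lex = extend_lexicon(
    labelled=[],
    unlabelled=["the cat miaowed", "two new words"],   # 2nd sentence skipped
    lexicon={"the": ["NP/N"], "cat": ["N"]},
    train=lambda data: "dummy-parser",
    one_ool_word=one_ool,
    chart_inference=lambda parser, s, w: [("S\\NP", 0.8)],
)
print(lex["miaowed"])   # [('S\\NP', 0.8)]
```

Because the loop only consults the lexicon, not gold labels, the unlabelled stream can be arbitrarily large, which is the property the text highlights for the self-training experiments.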


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('category', 0.278), ('ccg', 0.271), ('watkinson', 0.255), ('unlabelled', 0.237), ('yao', 0.201), ('toy', 0.19), ('chart', 0.19), ('lexicon', 0.174), ('generalised', 0.171), ('manandhar', 0.153), ('ci', 0.142), ('missing', 0.141), ('parser', 0.122), ('convergent', 0.118), ('deducing', 0.118), ('statccg', 0.118), ('thomforde', 0.102), ('combinator', 0.102), ('combinators', 0.102), ('backing', 0.102), ('mellish', 0.102), ('ool', 0.102), ('backoff', 0.096), ('steedman', 0.092), ('categorial', 0.089), ('unknown', 0.089), ('rules', 0.084), ('partial', 0.079), ('cycles', 0.076), ('hockenmaier', 0.076), ('items', 0.073), ('spurious', 0.072), ('parses', 0.07), ('inventory', 0.069), ('grammar', 0.067), ('categories', 0.067), ('ax', 0.066), ('known', 0.06), ('labelled', 0.06), ('attempted', 0.058), ('iv', 0.058), ('unseen', 0.057), ('decided', 0.056), ('error', 0.053), ('xx', 0.051), ('diagnosing', 0.051), ('diagnose', 0.051), ('diagnosed', 0.051), ('etos', 0.051), ('automating', 0.051), ('bou', 0.051), ('privileged', 0.051), ('additions', 0.051), ('invent', 0.051), ('irt', 0.051), ('vulnerable', 0.051), ('types', 0.05), ('constituents', 0.05), ('recursive', 0.049), ('full', 0.048), ('parse', 0.048), ('clark', 0.047), ('edinburgh', 0.046), ('established', 0.046), ('participated', 0.046), ('lll', 0.046), ('seeding', 0.046), ('win', 0.046), ('polar', 0.046), ('optimisation', 0.046), ('outlined', 0.046), ('chances', 0.046), ('cg', 0.046), ('conform', 0.046), ('acute', 0.046), ('inference', 0.044), ('lexical', 0.043), ('christodoulopoulos', 0.043), ('handwritten', 0.043), ('deduced', 0.043), ('failure', 0.043), ('misspelling', 0.043), ('interrogative', 0.043), ('afford', 0.043), ('adjusts', 0.043), ('unlimited', 0.043), ('eliminated', 0.043), ('capable', 0.042), ('naive', 0.042), ('solutions', 0.042), ('designed', 0.042), ('introduces', 0.041), ('learner', 0.04), ('brute', 0.04), ('revised', 0.04), ('attempting', 0.04), ('sms', 0.038), ('recovery', 0.038), ('bi', 0.038), 
('offered', 0.038)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999994 121 emnlp-2011-Semi-supervised CCG Lexicon Extension

Author: Emily Thomforde ; Mark Steedman

Abstract: This paper introduces Chart Inference (CI), an algorithm for deriving a CCG category for an unknown word from a partial parse chart. It is shown to be faster and more precise than a baseline brute-force method, and to achieve wider coverage than a rule-based system. In addition, we show the application of CI to a domain adaptation task for question words, which are largely missing in the Penn Treebank. When used in combination with self-training, CI increases the precision of the baseline StatCCG parser over subjectextraction questions by 50%. An error analysis shows that CI contributes to the increase by expanding the number of category types available to the parser, while self-training adjusts the counts.

2 0.18695882 132 emnlp-2011-Syntax-Based Grammaticality Improvement using CCG and Guided Search

Author: Yue Zhang ; Stephen Clark

Abstract: Machine-produced text often lacks grammaticality and fluency. This paper studies grammaticality improvement using a syntax-based algorithm based on CCG. The goal of the search problem is to find an optimal parse tree among all that can be constructed through selection and ordering of the input words. The search problem, which is significantly harder than parsing, is solved by guided learning for best-first search. In a standard word ordering task, our system gives a BLEU score of 40. 1, higher than the previous result of 33.7 achieved by a dependency-based system.

3 0.17111504 87 emnlp-2011-Lexical Generalization in CCG Grammar Induction for Semantic Parsing

Author: Tom Kwiatkowski ; Luke Zettlemoyer ; Sharon Goldwater ; Mark Steedman

Abstract: We consider the problem of learning factored probabilistic CCG grammars for semantic parsing from data containing sentences paired with logical-form meaning representations. Traditional CCG lexicons list lexical items that pair words and phrases with syntactic and semantic content. Such lexicons can be inefficient when words appear repeatedly with closely related lexical content. In this paper, we introduce factored lexicons, which include both lexemes to model word meaning and templates to model systematic variation in word usage. We also present an algorithm for learning factored CCG lexicons, along with a probabilistic parse-selection model. Evaluations on benchmark datasets demonstrate that the approach learns highly accurate parsers, whose generalization performance greatly from the lexical factoring. benefits

4 0.10341901 20 emnlp-2011-Augmenting String-to-Tree Translation Models with Fuzzy Use of Source-side Syntax

Author: Jiajun Zhang ; Feifei Zhai ; Chengqing Zong

Abstract: Due to its explicit modeling of the grammaticality of the output via target-side syntax, the string-to-tree model has been shown to be one of the most successful syntax-based translation models. However, a major limitation of this model is that it does not utilize any useful syntactic information on the source side. In this paper, we analyze the difficulties of incorporating source syntax in a string-totree model. We then propose a new way to use the source syntax in a fuzzy manner, both in source syntactic annotation and in rule matching. We further explore three algorithms in rule matching: 0-1 matching, likelihood matching, and deep similarity matching. Our method not only guarantees grammatical output with an explicit target tree, but also enables the system to choose the proper translation rules via fuzzy use of the source syntax. Our extensive experiments have shown significant improvements over the state-of-the-art string-to-tree system. 1

5 0.086105689 103 emnlp-2011-Parser Evaluation over Local and Non-Local Deep Dependencies in a Large Corpus

Author: Emily M. Bender ; Dan Flickinger ; Stephan Oepen ; Yi Zhang

Abstract: In order to obtain a fine-grained evaluation of parser accuracy over naturally occurring text, we study 100 examples each of ten reasonably frequent linguistic phenomena, randomly selected from a parsed version of the English Wikipedia. We construct a corresponding set of gold-standard target dependencies for these 1000 sentences, operationalize mappings to these targets from seven state-of-theart parsers, and evaluate the parsers against this data to measure their level of success in identifying these dependencies.

6 0.085000463 95 emnlp-2011-Multi-Source Transfer of Delexicalized Dependency Parsers

7 0.078076959 24 emnlp-2011-Bootstrapping Semantic Parsers from Conversations

8 0.067813158 10 emnlp-2011-A Probabilistic Forest-to-String Model for Language Generation from Typed Lambda Calculus Expressions

9 0.063923135 4 emnlp-2011-A Fast, Accurate, Non-Projective, Semantically-Enriched Parser

10 0.063721217 137 emnlp-2011-Training dependency parsers by jointly optimizing multiple objectives

11 0.063511483 141 emnlp-2011-Unsupervised Dependency Parsing without Gold Part-of-Speech Tags

12 0.060493797 65 emnlp-2011-Heuristic Search for Non-Bottom-Up Tree Structure Prediction

13 0.059738733 136 emnlp-2011-Training a Parser for Machine Translation Reordering

14 0.058519378 57 emnlp-2011-Extreme Extraction - Machine Reading in a Week

15 0.057568815 54 emnlp-2011-Exploiting Parse Structures for Native Language Identification

16 0.054158352 111 emnlp-2011-Reducing Grounded Learning Tasks To Grammatical Inference

17 0.052487157 108 emnlp-2011-Quasi-Synchronous Phrase Dependency Grammars for Machine Translation

18 0.052186195 53 emnlp-2011-Experimental Support for a Categorical Compositional Distributional Model of Meaning

19 0.051848359 33 emnlp-2011-Cooooooooooooooollllllllllllll!!!!!!!!!!!!!! Using Word Lengthening to Detect Sentiment in Microblogs

20 0.050647829 1 emnlp-2011-A Bayesian Mixture Model for PoS Induction Using Multiple Features


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.194), (1, 0.031), (2, -0.031), (3, 0.075), (4, 0.027), (5, -0.036), (6, -0.129), (7, 0.067), (8, 0.092), (9, 0.022), (10, -0.148), (11, 0.097), (12, -0.208), (13, -0.13), (14, 0.032), (15, 0.004), (16, -0.005), (17, -0.152), (18, 0.026), (19, -0.081), (20, -0.074), (21, -0.061), (22, 0.108), (23, -0.094), (24, -0.175), (25, -0.097), (26, 0.005), (27, -0.071), (28, -0.219), (29, 0.064), (30, 0.056), (31, 0.015), (32, 0.098), (33, 0.259), (34, -0.049), (35, -0.087), (36, -0.044), (37, 0.022), (38, 0.145), (39, -0.032), (40, -0.176), (41, 0.091), (42, 0.062), (43, -0.018), (44, 0.069), (45, 0.099), (46, -0.164), (47, -0.072), (48, 0.013), (49, 0.076)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96735686 121 emnlp-2011-Semi-supervised CCG Lexicon Extension

2 0.81212926 132 emnlp-2011-Syntax-Based Grammaticality Improvement using CCG and Guided Search

3 0.58393496 87 emnlp-2011-Lexical Generalization in CCG Grammar Induction for Semantic Parsing

4 0.36891398 103 emnlp-2011-Parser Evaluation over Local and Non-Local Deep Dependencies in a Large Corpus

5 0.30834094 20 emnlp-2011-Augmenting String-to-Tree Translation Models with Fuzzy Use of Source-side Syntax

6 0.30474305 24 emnlp-2011-Bootstrapping Semantic Parsers from Conversations

7 0.2923395 54 emnlp-2011-Exploiting Parse Structures for Native Language Identification

8 0.28852925 65 emnlp-2011-Heuristic Search for Non-Bottom-Up Tree Structure Prediction

9 0.27007505 95 emnlp-2011-Multi-Source Transfer of Delexicalized Dependency Parsers

10 0.24843718 137 emnlp-2011-Training dependency parsers by jointly optimizing multiple objectives

11 0.2425179 19 emnlp-2011-Approximate Scalable Bounded Space Sketch for Large Data NLP

12 0.24146739 57 emnlp-2011-Extreme Extraction - Machine Reading in a Week

13 0.23155928 115 emnlp-2011-Relaxed Cross-lingual Projection of Constituent Syntax

14 0.22815579 111 emnlp-2011-Reducing Grounded Learning Tasks To Grammatical Inference

15 0.22404277 141 emnlp-2011-Unsupervised Dependency Parsing without Gold Part-of-Speech Tags

16 0.21823092 53 emnlp-2011-Experimental Support for a Categorical Compositional Distributional Model of Meaning

17 0.21613197 33 emnlp-2011-Cooooooooooooooollllllllllllll!!!!!!!!!!!!!! Using Word Lengthening to Detect Sentiment in Microblogs

18 0.20475504 10 emnlp-2011-A Probabilistic Forest-to-String Model for Language Generation from Typed Lambda Calculus Expressions

19 0.20391451 122 emnlp-2011-Simple Effective Decipherment via Combinatorial Optimization

20 0.20342225 85 emnlp-2011-Learning to Simplify Sentences with Quasi-Synchronous Grammar and Integer Programming


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(23, 0.05), (37, 0.013), (45, 0.032), (53, 0.011), (57, 0.017), (66, 0.02), (79, 0.721), (96, 0.019)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96823299 121 emnlp-2011-Semi-supervised CCG Lexicon Extension

2 0.88094264 115 emnlp-2011-Relaxed Cross-lingual Projection of Constituent Syntax

Author: Wenbin Jiang ; Qun Liu ; Yajuan Lv

Abstract: We propose a relaxed correspondence assumption for cross-lingual projection of constituent syntax, which allows a supposed constituent of the target sentence to correspond to an unrestricted treelet in the source parse. Such a relaxed assumption fundamentally tolerates the syntactic non-isomorphism between languages, and enables us to learn the target-language-specific syntactic idiosyncrasy rather than a strained grammar directly projected from the source language syntax. Based on this assumption, a novel constituency projection method is also proposed in order to induce a projected constituent treebank from the source-parsed bilingual corpus. Experiments show that, the parser trained on the projected treebank dramatically outperforms previous projected and unsupervised parsers.

3 0.83247364 36 emnlp-2011-Corroborating Text Evaluation Results with Heterogeneous Measures

Author: Enrique Amigo ; Julio Gonzalo ; Jesus Gimenez ; Felisa Verdejo

Abstract: Automatically produced texts (e.g. translations or summaries) are usually evaluated with n-gram based measures such as BLEU or ROUGE, while the wide set of more sophisticated measures that have been proposed in the last years remains largely ignored for practical purposes. In this paper we first present an indepth analysis of the state of the art in order to clarify this issue. After this, we formalize and verify empirically a set of properties that every text evaluation measure based on similarity to human-produced references satisfies. These properties imply that corroborating system improvements with additional measures always increases the overall reliability of the evaluation process. In addition, the greater the heterogeneity of the measures (which is measurable) the higher their combined reliability. These results support the use of heterogeneous measures in order to consolidate text evaluation results.

4 0.81750607 34 emnlp-2011-Corpus-Guided Sentence Generation of Natural Images

Author: Yezhou Yang ; Ching Teo ; Hal Daume III ; Yiannis Aloimonos

Abstract: We propose a sentence generation strategy that describes images by predicting the most likely nouns, verbs, scenes and prepositions that make up the core sentence structure. The input are initial noisy estimates of the objects and scenes detected in the image using state of the art trained detectors. As predicting actions from still images directly is unreliable, we use a language model trained from the English Gigaword corpus to obtain their estimates; together with probabilities of co-located nouns, scenes and prepositions. We use these estimates as parameters on a HMM that models the sentence generation process, with hidden nodes as sentence components and image detections as the emissions. Experimental results show that our strategy of combining vision and language produces readable and de- , scriptive sentences compared to naive strategies that use vision alone.

5 0.47190279 87 emnlp-2011-Lexical Generalization in CCG Grammar Induction for Semantic Parsing

6 0.42864287 8 emnlp-2011-A Model of Discourse Predictions in Human Sentence Processing

7 0.41251579 111 emnlp-2011-Reducing Grounded Learning Tasks To Grammatical Inference

8 0.40244168 57 emnlp-2011-Extreme Extraction - Machine Reading in a Week

9 0.39715096 35 emnlp-2011-Correcting Semantic Collocation Errors with L1-induced Paraphrases

10 0.3955434 132 emnlp-2011-Syntax-Based Grammaticality Improvement using CCG and Guided Search

11 0.38983542 22 emnlp-2011-Better Evaluation Metrics Lead to Better Machine Translation

12 0.3778671 54 emnlp-2011-Exploiting Parse Structures for Native Language Identification

13 0.37603709 31 emnlp-2011-Computation of Infix Probabilities for Probabilistic Context-Free Grammars

14 0.36465988 20 emnlp-2011-Augmenting String-to-Tree Translation Models with Fuzzy Use of Source-side Syntax

15 0.36164144 147 emnlp-2011-Using Syntactic and Semantic Structural Kernels for Classifying Definition Questions in Jeopardy!

16 0.36139494 70 emnlp-2011-Identifying Relations for Open Information Extraction

17 0.36085644 83 emnlp-2011-Learning Sentential Paraphrases from Bilingual Parallel Corpora for Text-to-Text Generation

18 0.35918066 85 emnlp-2011-Learning to Simplify Sentences with Quasi-Synchronous Grammar and Integer Programming

19 0.35629261 40 emnlp-2011-Discovering Relations between Noun Categories

20 0.35432819 38 emnlp-2011-Data-Driven Response Generation in Social Media