acl acl2010 acl2010-139 knowledge-graph by maker-knowledge-mining

139 acl-2010-Identifying Generic Noun Phrases


Source: pdf

Author: Nils Reiter ; Anette Frank

Abstract: This paper presents a supervised approach for identifying generic noun phrases in context. Generic statements express rulelike knowledge about kinds or events. Therefore, their identification is important for the automatic construction of knowledge bases. In particular, the distinction between generic and non-generic statements is crucial for the correct encoding of generic and instance-level information. Generic expressions have been studied extensively in formal semantics. Building on this work, we explore a corpus-based learning approach for identifying generic NPs, using selections of linguistically motivated features. Our results perform well above the baseline and existing prior work.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract This paper presents a supervised approach for identifying generic noun phrases in context. [sent-2, score-0.692]

2 In particular, the distinction between generic and non-generic statements is crucial for the correct encoding of generic and instance-level information. [sent-5, score-1.297]

3 Building on this work, we explore a corpus-based learning approach for identifying generic NPs, using selections of linguistically motivated features. [sent-7, score-0.599]

4 1 Introduction Generic expressions come in two basic forms: generic noun phrases and generic sentences. [sent-9, score-1.383]

5 A generic noun phrase is a noun phrase that does not refer to a specific (set of) individual(s), but rather to a kind or class of individuals. [sent-11, score-0.786]

6 Generic sentences are characterising sentences that quantify over situations or events, expressing rule-like knowledge about habitual actions or situations. [sent-27, score-0.323]

7 The genericity of an expression may arise from the generic (kind-referring, class-denoting) interpretation of the NP or the characterising interpretation of the sentence predicate. [sent-38, score-1.091]

8 Both sources may concur in a single sentence, as illustrated in Table 1, where we have cross-classified the examples above according to the genericity of the NP and the sentence. [sent-39, score-0.396]

9 This classification is extremely difficult, because (i) the criteria for generic interpretation are far from being clear-cut and (ii) both sources of genericity may freely interact. [sent-40, score-1.035]

10 The above classification of generic expressions is well established in traditional formal semantics. [sent-43, score-0.835]

11 With appropriate semantic analysis of generic statements, we can not only formally capture and exploit generic knowledge [...]. (Footnote: the literature draws some finer distinctions, including aspects like specificity, which we will ignore in this work.) [sent-47, score-1.245]

12 In this paper, we build on insights from formal semantics to establish a corpus-based machine learning approach for the automatic classification of generic expressions. [sent-53, score-0.782]

13 In principle our approach is applicable to the detection of both generic NPs and generic sentences, and in fact it would be highly desirable and possibly advantageous to cover both types of genericity simultaneously. [sent-54, score-1.594]

14 Our current work is confined to generic NPs, as there are no corpora available at present that contain annotations for genericity at the sentence level. [sent-55, score-0.995]

15 Section 2 introduces generic expressions and motivates their relevance for knowledge acquisition and semantic processing tasks in computational linguistics. [sent-57, score-0.811]

16 In section 4 we motivate the choice of feature sets for the automatic identification of generic NPs in context. [sent-59, score-0.736]

17 Interpretation of generic expressions (generic NPs): There are two contrasting views on how to formally interpret generic NPs. [sent-63, score-1.29]

18 According to the first one, a generic NP involves a special form of quantification. [sent-64, score-0.599]

19 Quine (1960), for example, proposes a universally quantified reading for generic NPs. [sent-65, score-0.635]

20 According to the second view, generic noun phrases denote kinds. [sent-75, score-0.692]

21 On this view, the generic NP cannot be analysed as a quantifier over individuals pertaining to the kind. [sent-77, score-0.769]

22 The dyadic operator relates two semantic constituents, the restrictor and the matrix: Q[x1, ..., xi; y1, ..., yj] (Restrictor[x1, ..., xi]; Matrix[{x1}, ..., {xi}, y1, ..., yj]). [sent-82, score-0.2]

23 By choosing GEN as a generic dyadic operator, it is possible to represent the two readings (a) and (b) of the characterising sentence (4) by variation in the specification of restrictor and matrix (Krifka et al.). [sent-96, score-0.805]

24 For reading (a), we must allow the generic operator to quantify over situations or events, in this case, "normal" situations which were such that Erdős took amphetamines. [sent-102, score-0.722]
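For readers who want the notation spelled out, the following schematic LaTeX rendering is our own reconstruction (after the standard Krifka et al. presentation); the instantiated examples and predicate names are illustrative placeholders, not the paper's actual formulas.

```latex
% General form of the dyadic generic operator: GEN relates a restrictor
% to a matrix over the variables it binds.
\[
  \mathrm{GEN}[x_1,\dots,x_i;\; y_1,\dots,y_j]\;
  \bigl(\mathrm{Restrictor}[x_1,\dots,x_i];\;
        \mathrm{Matrix}[\{x_1\},\dots,\{x_i\},\,y_1,\dots,y_j]\bigr)
\]

% Illustrative kind/NP-level instantiation ("Lions have manes";
% predicate names are placeholders):
\[
  \mathrm{GEN}[x;\,]\,\bigl(\mathrm{lion}(x);\; \mathrm{has\_mane}(x)\bigr)
\]

% Illustrative habitual instantiation, quantifying over "normal"
% situations s, as described above:
\[
  \mathrm{GEN}[s;\,]\,\bigl(\mathrm{normal}(s) \wedge \mathrm{involve}(s,\mathrm{erdos});\;
      \mathrm{take\_amphetamines}(\mathrm{erdos}, s)\bigr)
\]
```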

25 Relevance for computational linguistics (knowledge acquisition): The automatic acquisition of formal knowledge for computational applications is a major endeavour in current research. (Footnote: most rats are not even noticed by people.) [sent-104, score-0.174]

26 There are manually built formal ontologies such as SUMO (Niles and Pease, 2001) or Cyc (Lenat, 1995) and linguistic ontologies like WordNet (Fellbaum, 1998) that capture linguistic and world knowledge to a certain extent. [sent-109, score-0.17]

27 Attempts to automatically induce knowledge bases from text or encyclopaedic sources are currently not concerned with the distinction between generic and non-generic expressions, concentrating mainly on factual knowledge. [sent-112, score-0.731]

28 However, rulelike knowledge can be found in textual sources in the form of generic expressions. [sent-113, score-0.68]

29 In view of the properties of generic expressions discussed above, this lack of attention bears two types of risks. [sent-114, score-0.772]

30 The distinction between classes and instances is a serious challenge even for the simplest methods in automatic ontology construction. [sent-117, score-0.185]

31 We are not aware of any detailed assessment of the proportion of generic noun phrases in educational text genres or encyclopaedic resources like Wikipedia. [sent-129, score-0.731]

32 Concerning generic sentences, Mathew and Katz (2009) report that 19. [sent-130, score-0.599]

33 Modelling exceptions is a cumbersome but necessary problem to be handled in ontology building, be it manually or by automatic means, and whether or not the genericity of knowledge is formalised explicitly. [sent-140, score-0.523]

34 Hence, the recognition of generic expressions is an important precondition for the adequate representation and processing of generic knowledge. [sent-147, score-1.29]

35 Prior work: Suh (2006) applied a rule-based approach to automatically identify generic noun phrases. [sent-148, score-0.657]

36 Suh reaches a precision of 28.9% for generic entities, measured against an annotated corpus, the ACE 2005 (Ferro et al.). [sent-150, score-0.599]

37 To our knowledge, this is the only prior work on the task of identifying generic NPs. [sent-153, score-0.599]

38 Besides the ACE corpus (described in more detail below), Herbelot and Copestake (2008) offer a study on annotating genericity in a corpus. [sent-154, score-0.396]

39 Two annotators annotated 48 noun phrases from the British National Corpus for their genericity (and specificity) properties, obtaining a kappa value of 0. [sent-155, score-0.489]

40 Herbelot and Copestake (2008) leave supervised learning for the identification of generic expressions as future work. [sent-157, score-0.733]

41 Recent work by Mathew and Katz (2009) presents automatic classification of generic and non-generic sentences, yet restricted to habitual interpretations of generic sentences. [sent-158, score-1.348]

42 Using a selection of syntactic and semantic features operating mainly on the sentence level, they achieved precision between 81. [sent-160, score-0.182]

43 Properties of generic expressions: Generic NPs come in various syntactic forms. [sent-166, score-0.729]

44 These include definite and indefinite singular count nouns, bare plural count nouns, and singular and plural mass nouns, as in (5). [sent-167, score-0.184]

45 As Carlson (1977) observed, the generic reading of "well-established" kinds seems to be more prominent. [sent-171, score-0.664]

46 Similarly, generic sentences come in a range of syntactic forms (6). [sent-193, score-0.637]

47 Generic NPs and generic sentences can be combined freely (cf. Table 1). [sent-201, score-1.198]

48 Present tense, for instance, may be indicative of genericity, but with appropriate temporal modification, generic sentences may occur in past or future tense (6). [sent-207, score-0.634]

49 Stative predicates, as in (5.a,b,d), may indicate a generic NP reading, but again we find generic NPs with event verbs, as in (5). [sent-209, score-1.198]

50 Lexical factors, as in (5.g), may favour a generic reading, but such factors are difficult to capture in a rule-based setting. [sent-214, score-0.628]

51 In our view, these observations call for a corpus-based machine learning approach that is able to capture a variety of factors indicating genericity in combination and in context. [sent-215, score-0.425]

52 Feature set and feature classes: In Table 2 we give basic information about the individual features we investigate for identifying generic NPs. [sent-217, score-0.821]

53 In the following, we will structure this feature space along two dimensions, distinguishing NP- and sentence-level factors as well as syntactic and semantic (including lexical semantic) factors. [sent-218, score-0.209]

54 Semantic features include features abstracted from syntax, such as tense and aspect or type of modification, as well as lexical semantic features such as word sense classes, sense granularity, or verbal predicates. [sent-225, score-0.297]

55 Our aim is to determine indicators for genericity from combinations of these feature classes. [sent-226, score-0.491]
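To make the two-dimensional structuring of the feature space concrete, here is a minimal sketch in Python; the feature names, values, and toy accessors are our own placeholders and do not reproduce the paper's Table 2.

```python
# Illustrative sketch only: the feature names, values, and toy accessors
# below are placeholders, not the feature set of Table 2.

def extract_features(np_info, sentence_info):
    """Group candidate genericity features along the two dimensions
    discussed above: NP- vs. sentence-level, syntactic vs. semantic."""
    return {
        # NP-level, syntactic
        "np_syntactic": {
            "determiner": np_info.get("det", "bare"),   # bare, definite, indefinite
            "number": np_info.get("number", "sg"),
            "person": np_info.get("person", "3"),
        },
        # NP-level, (lexical) semantic
        "np_semantic": {
            "head_sense_class": np_info.get("sense", "unknown"),
        },
        # Sentence-level, syntactic
        "s_syntactic": {
            "clause_type": sentence_info.get("clause", "main"),
        },
        # Sentence-level, semantic
        "s_semantic": {
            "tense": sentence_info.get("tense", "present"),
            "aspect": sentence_info.get("aspect", "simple"),
            "predicate": sentence_info.get("pred", "unknown"),
        },
    }

if __name__ == "__main__":
    np_info = {"det": "bare", "number": "pl", "person": "3", "sense": "animal"}
    sent_info = {"tense": "present", "aspect": "simple", "pred": "have"}
    print(extract_features(np_info, sent_info))
```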

56 Annotation guidelines: The ACE-2 annotation guidelines describe generic NPs as referring to an arbitrary member of the set in question, rather than to a particular individual. [sent-241, score-0.719]

57 Thus, a property attributed to a generic NP is in principle applicable to arbitrary members of the set (although not to all of them). [sent-242, score-0.635]

58 The guidelines list several tests that are either local syntactic tests involving determiners or tests that cannot be operationalised as they involve world knowledge and context information. [sent-243, score-0.222]

59 The guidelines give a number of criteria to identify generic NPs referring to specific properties. [sent-244, score-0.659]

60 The general description of generic NPs as denoting arbitrary members of sets obviously does not capture kind-referring readings. [sent-250, score-0.599]

61 In fact, all of the examples for generic noun phrases presented in this paper would also be classified as generic according to the ACE-2 guidelines. [sent-256, score-1.291]

62 We also find annotated examples of generic NPs that are not discussed in the formal semantics literature (8. [sent-257, score-0.703]

63 Data analysis: A first investigation of the corpus shows that generic NPs are much less common than non-generic ones, at least in the newspaper genre at hand. [sent-267, score-0.599]

64 In order to control for bias effects in our classifier, we will experiment with two different training sets, a balanced and an unbalanced one. [sent-270, score-0.313]

65 In fact, a number of feature selection tests uncovered feature dependencies (see below). [sent-298, score-0.253]

66 To control for bias effects, we created balanced data sets by oversampling generic entities and simultaneously undersampling non-generic entities. [sent-300, score-0.82]

67 All experiments are performed on balanced and unbalanced data sets using 10-fold cross-validation, where balancing (if applied) is performed for each training fold separately. [sent-303, score-0.283]
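A minimal sketch of per-fold balancing inside cross-validation follows, assuming binary 0/1 labels and scikit-learn's KFold; the oversample/undersample heuristic shown is a generic one, not necessarily the authors' exact procedure, and train_and_eval is a caller-supplied placeholder.

```python
# Sketch: balance each training fold by oversampling the minority (generic)
# class and undersampling the majority (non-generic) class before training.
# Assumes X is a list of feature dicts and y a list of 0/1 labels (1 = generic).
import random
from sklearn.model_selection import KFold

def balance(indices, y, rng):
    pos = [i for i in indices if y[i] == 1]               # generic instances
    neg = [i for i in indices if y[i] == 0]               # non-generic instances
    target = (len(pos) + len(neg)) // 2
    pos_bal = [rng.choice(pos) for _ in range(target)]    # oversample with replacement
    neg_bal = rng.sample(neg, min(target, len(neg)))      # undersample without replacement
    mixed = pos_bal + neg_bal
    rng.shuffle(mixed)
    return mixed

def cross_validate(X, y, train_and_eval, n_splits=10, seed=0):
    rng = random.Random(seed)
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in kf.split(X):
        balanced_train = balance(list(train_idx), y, rng)  # balance the training fold only
        scores.append(train_and_eval(balanced_train, list(test_idx)))
    return scores
```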

68 Feature classes: We performed evaluation runs for different combinations of feature sets: NP- vs. [sent-304, score-0.166]

69 S-level features (with further distinction between syntactic and semantic NP-/S-level features), as well as overall syntactic vs. semantic features. [sent-305, score-0.235]

70 This was done in order to determine the effect of different types of linguistic factors for the detection of genericity. [sent-307, score-0.425]

71 In ablation testing, each feature in turn is temporarily omitted from the feature set. [sent-311, score-0.235]

72 We select the feature set that yields the best balanced performance, at 45. [sent-319, score-0.214]

73 As ablation testing does not uncover feature dependencies, we also experimented with single features as well as pairs and triples of features to determine features that perform well in combination. [sent-323, score-0.291]
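Both selection strategies are easy to sketch; in the snippet below, the evaluate callback is a placeholder of ours standing in for a full train-and-score cycle over a feature subset.

```python
# Sketch of the two feature-selection strategies described above.
# `evaluate(feature_subset)` is a placeholder that would train and score a
# classifier restricted to the given features (e.g., F-measure on the
# generic class).
from itertools import combinations

def ablation_scores(all_features, evaluate):
    """Leave each feature out in turn and record the resulting score."""
    return {feat: evaluate([f for f in all_features if f != feat])
            for feat in all_features}

def best_small_combinations(all_features, evaluate, max_size=3, top_n=10):
    """Exhaustively score single features, pairs, and triples."""
    scored = [(evaluate(list(combo)), combo)
              for size in range(1, max_size + 1)
              for combo in combinations(all_features, size)]
    return sorted(scored, reverse=True)[:top_n]
```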

74 As a second baseline we chose the performance of the feature Person, as this feature gave the best performance in precision among those that are similarly easy to extract. [sent-330, score-0.231]

75 Comparison to baselines: Given the bias for non-generic NPs in the unbalanced data, the majority baseline achieves high performance overall (F: 80. [sent-335, score-0.194]

76 Suh (2006) reported only the precision of the generic class, so we can only compare against this value (28.9%). [sent-341, score-0.64]

77 Most of the features and feature sets yield precision values above the results of Suh. [sent-343, score-0.192]
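For concreteness, class-specific precision, recall, and F-measure against a majority baseline can be computed as in the following sketch; the labels are invented toy data, and the real figures are those reported in the paper's tables.

```python
# Toy illustration of evaluating the generic class (label 1) against a
# majority-class baseline; all labels below are invented.
from sklearn.metrics import precision_recall_fscore_support

gold     = [0, 0, 0, 0, 1, 0, 1, 0, 0, 1]      # 1 = generic NP
majority = [0] * len(gold)                     # baseline: always non-generic
system   = [0, 0, 1, 0, 1, 0, 1, 0, 0, 0]      # hypothetical classifier output

for name, pred in [("majority", majority), ("system", system)]:
    p, r, f, _ = precision_recall_fscore_support(
        gold, pred, labels=[1], average=None, zero_division=0)
    print(f"{name}: P={p[0]:.2f} R={r[0]:.2f} F={f[0]:.2f}")
```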

78 Feature classes, unbalanced data For the identification of generic NPs, syntactic features achieve the highest precision and recall (P: 40. [sent-344, score-0.972]

79 Using syntactic features on the NP- or sentence level only, however, leads to a drop in precision as well as recall. [sent-347, score-0.166]

80 The recall achieved by syntactic features can be improved at the cost of precision by adding semantic features (R: 66. [sent-348, score-0.27]

81 All feature classes perform lower than on the unbalanced data set, yielding an increase in recall and a drop in precision. [sent-357, score-0.393]

82 The overall performance differences between the balanced and unbalanced data for the best achieved values for the generic class are -4. [sent-358, score-0.918]

83 We observe that generally, the recall for the generic class improves for the balanced data. [sent-363, score-0.786]

84 We also observe that the margin between syntactic and semantic features reduces in the balanced dataset, and that both NP- and S-level features contribute to classification performance, with NP-features generally outperforming the S-level features. [sent-369, score-0.356]

85 This confirms our hypothesis that all feature classes contribute important information. [sent-370, score-0.166]

86 Feature selection: While the above figures were obtained for the entire feature space, we now discuss the effects of feature selection both on performance and the distribution over feature classes. [sent-371, score-0.285]

87 The mixed feature classes behave like the homogeneous classes, in that balanced training data increases recall at the cost of precision. [sent-375, score-0.222]

88 With respect to overall f-measure, the best single features are strong on the unbalanced data. [sent-376, score-0.22]

89 They even yield a relatively high precision for the generic NPs (49. [sent-377, score-0.64]

90 The best-performing feature set in terms of f-measure on both balanced and unbalanced data is Set 5, with Set 4 as a close runner-up. [sent-380, score-0.378]

91 The classifier learned to classify singular proper names as non-generic, while the genericity of singular nouns depends on their predicate. [sent-393, score-0.48]

92 We presented a data-driven machine learning approach for identifying generic NPs in context that in turn can be used to improve tasks such as knowledge acquisition and organisation. [sent-399, score-0.672]

93 The classification of generic NPs has proven difficult even for humans. [sent-400, score-0.639]

94 Therefore, a machine learning approach seemed promising, both for the identification of relevant features and for capturing contextual factors. (Figure 1: a decision tree trained on feature Set 5.) [sent-401, score-0.193]
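A decision tree over categorical features of this kind could be trained roughly as follows; scikit-learn is assumed, and the feature names and toy instances are invented for illustration rather than taken from feature Set 5.

```python
# Sketch: a decision tree over one-hot-encoded categorical features,
# analogous in spirit to the tree of Figure 1; the feature names, values,
# and training instances are invented.
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier, export_text

train_feats = [
    {"pos": "proper_noun", "number": "sg", "pred": "win"},
    {"pos": "common_noun", "number": "pl", "pred": "have"},
    {"pos": "common_noun", "number": "sg", "pred": "have"},
    {"pos": "proper_noun", "number": "sg", "pred": "have"},
]
labels = ["non-generic", "generic", "generic", "non-generic"]

vec = DictVectorizer(sparse=False)
X = vec.fit_transform(train_feats)
tree = DecisionTreeClassifier(random_state=0).fit(X, labels)

# Inspect the learned splits, e.g. whether proper names end up non-generic.
print(export_text(tree, feature_names=list(vec.get_feature_names_out())))
```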

95 We explored a range of features using homogeneous and mixed classes gained by alternative methods of feature selection. [sent-402, score-0.293]

96 In terms of f-measure on the generic class, all feature sets performed above the baseline(s). [sent-403, score-0.694]

97 The final feature set that we established characterises generic NPs as a phenomenon that exhibits both syntactic and semantic as well as sentence- and NP-level properties. [sent-405, score-0.779]

98 As a next step, we will apply our approach to the classification of generic sentences. [sent-408, score-0.639]

99 The classification of generic expressions is only a first step towards a full treatment of the challenges involved in their semantic processing. [sent-410, score-0.778]

100 As discussed, this requires a contextually appropriate selection of the quantifier restriction, as well as determining inheritance of properties from classes to individuals and the formalisation of defaults. [sent-411, score-0.277]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('generic', 0.599), ('genericity', 0.396), ('nps', 0.234), ('unbalanced', 0.164), ('balanced', 0.119), ('habitual', 0.11), ('characterising', 0.096), ('lions', 0.096), ('feature', 0.095), ('expressions', 0.092), ('krifka', 0.088), ('lion', 0.078), ('suh', 0.077), ('carlson', 0.075), ('entities', 0.072), ('classes', 0.071), ('quantifier', 0.071), ('qr', 0.071), ('homogeneous', 0.071), ('dyadic', 0.066), ('erd', 0.066), ('herbelot', 0.066), ('mammals', 0.066), ('typhoons', 0.066), ('formal', 0.065), ('guidelines', 0.06), ('individuals', 0.059), ('noun', 0.058), ('ontology', 0.058), ('mathew', 0.058), ('np', 0.057), ('features', 0.056), ('distinction', 0.056), ('plural', 0.05), ('asp', 0.05), ('xle', 0.05), ('semantic', 0.047), ('answer', 0.047), ('properties', 0.046), ('ablation', 0.045), ('ferro', 0.045), ('jeffry', 0.044), ('lifschitz', 0.044), ('niles', 0.044), ('pappas', 0.044), ('pelletier', 0.044), ('restrictor', 0.044), ('rulelike', 0.044), ('typhoon', 0.044), ('zirn', 0.044), ('statements', 0.043), ('operator', 0.043), ('gen', 0.043), ('singular', 0.042), ('identification', 0.042), ('ponzetto', 0.041), ('precision', 0.041), ('predicates', 0.04), ('classification', 0.04), ('pertaining', 0.04), ('situations', 0.04), ('insights', 0.039), ('semantics', 0.039), ('bottle', 0.039), ('encyclopaedic', 0.039), ('episodic', 0.039), ('norman', 0.039), ('syntactic', 0.038), ('knowledge', 0.037), ('widespread', 0.037), ('class', 0.036), ('reading', 0.036), ('attributed', 0.036), ('acquisition', 0.036), ('cyc', 0.035), ('kind', 0.035), ('view', 0.035), ('gregory', 0.035), ('tense', 0.035), ('phrases', 0.035), ('dependencies', 0.034), ('ontologies', 0.034), ('recognised', 0.033), ('confronted', 0.033), ('wikipedia', 0.032), ('recall', 0.032), ('exceptions', 0.032), ('drop', 0.031), ('indiana', 0.031), ('ace', 0.031), ('contextually', 0.03), ('katz', 0.03), ('francis', 0.03), ('tracy', 0.03), ('bias', 0.03), ('kinds', 0.029), ('tests', 0.029), ('factors', 0.029), ('copestake', 0.029), ('butt', 0.029)]
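The (word, weight) pairs above and the similarity scores below are presumably produced by a standard tf-idf vector-space model; the sketch below shows how such top-weighted words and pairwise cosine similarities could be reproduced, with placeholder documents standing in for the actual paper texts.

```python
# Sketch: tf-idf paper vectors, top-weighted words, and pairwise cosine
# similarities, roughly the kind of computation behind the lists on this
# page; the two documents below are placeholder strings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = {
    "acl2010-139": "identifying generic noun phrases generic NPs genericity corpus features",
    "acl2010-233": "same head heuristic coreference noun phrases unsupervised model",
}
ids = list(papers)
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([papers[i] for i in ids])

# Top-weighted words for the first paper (cf. the (word, weight) list above).
vocab = vectorizer.get_feature_names_out()
weights = matrix[0].toarray().ravel()
print(sorted(zip(vocab, weights), key=lambda t: -t[1])[:10])

# Pairwise cosine similarities (cf. the simValue column below).
print(cosine_similarity(matrix))
```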

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000005 139 acl-2010-Identifying Generic Noun Phrases

Author: Nils Reiter ; Anette Frank

Abstract: This paper presents a supervised approach for identifying generic noun phrases in context. Generic statements express rulelike knowledge about kinds or events. Therefore, their identification is important for the automatic construction of knowledge bases. In particular, the distinction between generic and non-generic statements is crucial for the correct encoding of generic and instance-level information. Generic expressions have been studied extensively in formal semantics. Building on this work, we explore a corpus-based learning approach for identifying generic NPs, using selections of linguistically motivated features. Our results perform well above the baseline and existing prior work.

2 0.16668795 233 acl-2010-The Same-Head Heuristic for Coreference

Author: Micha Elsner ; Eugene Charniak

Abstract: We investigate coreference relationships between NPs with the same head noun. It is relatively common in unsupervised work to assume that such pairs are coreferent, but this is not always true, especially if realistic mention detection is used. We describe the distribution of non-coreferent same-head pairs in news text, and present an unsupervised generative model which learns not to link some same-head NPs using syntactic features, improving precision.

3 0.11227199 219 acl-2010-Supervised Noun Phrase Coreference Research: The First Fifteen Years

Author: Vincent Ng

Abstract: The research focus of computational coreference resolution has exhibited a shift from heuristic approaches to machine learning approaches in the past decade. This paper surveys the major milestones in supervised coreference research since its inception fifteen years ago.

4 0.078550085 203 acl-2010-Rebanking CCGbank for Improved NP Interpretation

Author: Matthew Honnibal ; James R. Curran ; Johan Bos

Abstract: Once released, treebanks tend to remain unchanged despite any shortcomings in their depth of linguistic analysis or coverage of specific phenomena. Instead, separate resources are created to address such problems. In this paper we show how to improve the quality of a treebank, by integrating resources and implementing improved analyses for specific constructions. We demonstrate this rebanking process by creating an updated version of CCGbank that includes the predicate-argument structure of both verbs and nouns, baseNP brackets, verb-particle constructions, and restrictive and non-restrictive nominal modifiers; and evaluate the impact of these changes on a statistical parser.

5 0.07820414 25 acl-2010-Adapting Self-Training for Semantic Role Labeling

Author: Rasoul Samad Zadeh Kaljahi

Abstract: Supervised semantic role labeling (SRL) systems trained on hand-crafted annotated corpora have recently achieved state-of-the-art performance. However, creating such corpora is tedious and costly, with the resulting corpora not sufficiently representative of the language. This paper describes a part of an ongoing work on applying bootstrapping methods to SRL to deal with this problem. Previous work shows that, due to the complexity of SRL, this task is not straightforward. One major difficulty is the propagation of classification noise into the successive iterations. We address this problem by employing balancing and preselection methods for self-training, as a bootstrapping algorithm. The proposed methods could achieve improvement over the baseline, which does not use these methods.

6 0.072047114 150 acl-2010-Inducing Domain-Specific Semantic Class Taggers from (Almost) Nothing

7 0.066688068 4 acl-2010-A Cognitive Cost Model of Annotations Based on Eye-Tracking Data

8 0.066294797 258 acl-2010-Weakly Supervised Learning of Presupposition Relations between Verbs

9 0.065750793 64 acl-2010-Complexity Assumptions in Ontology Verbalisation

10 0.063712776 231 acl-2010-The Prevalence of Descriptive Referring Expressions in News and Narrative

11 0.063190825 72 acl-2010-Coreference Resolution across Corpora: Languages, Coding Schemes, and Preprocessing Information

12 0.061031874 38 acl-2010-Automatic Evaluation of Linguistic Quality in Multi-Document Summarization

13 0.060506493 31 acl-2010-Annotation

14 0.058999367 200 acl-2010-Profiting from Mark-Up: Hyper-Text Annotations for Guided Parsing

15 0.058258958 247 acl-2010-Unsupervised Event Coreference Resolution with Rich Linguistic Features

16 0.058014024 220 acl-2010-Syntactic and Semantic Factors in Processing Difficulty: An Integrated Measure

17 0.057589564 59 acl-2010-Cognitively Plausible Models of Human Language Processing

18 0.057308957 85 acl-2010-Detecting Experiences from Weblogs

19 0.056615558 108 acl-2010-Expanding Verb Coverage in Cyc with VerbNet

20 0.056438152 156 acl-2010-Knowledge-Rich Word Sense Disambiguation Rivaling Supervised Systems


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.195), (1, 0.096), (2, 0.013), (3, -0.084), (4, -0.017), (5, 0.06), (6, 0.028), (7, 0.01), (8, 0.034), (9, 0.047), (10, 0.046), (11, -0.015), (12, -0.038), (13, -0.032), (14, 0.015), (15, 0.059), (16, 0.021), (17, 0.048), (18, 0.029), (19, 0.048), (20, -0.007), (21, 0.049), (22, 0.024), (23, -0.019), (24, -0.007), (25, -0.047), (26, -0.018), (27, 0.082), (28, 0.045), (29, -0.054), (30, -0.082), (31, 0.003), (32, 0.001), (33, 0.01), (34, 0.013), (35, 0.026), (36, 0.013), (37, -0.016), (38, -0.028), (39, -0.002), (40, 0.034), (41, 0.048), (42, 0.08), (43, -0.031), (44, 0.052), (45, 0.044), (46, -0.108), (47, 0.082), (48, 0.048), (49, -0.05)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93355078 139 acl-2010-Identifying Generic Noun Phrases

Author: Nils Reiter ; Anette Frank

Abstract: This paper presents a supervised approach for identifying generic noun phrases in context. Generic statements express rulelike knowledge about kinds or events. Therefore, their identification is important for the automatic construction of knowledge bases. In particular, the distinction between generic and non-generic statements is crucial for the correct encoding of generic and instance-level information. Generic expressions have been studied extensively in formal semantics. Building on this work, we explore a corpus-based learning approach for identifying generic NPs, using selections of linguistically motivated features. Our results perform well above the baseline and existing prior work.

2 0.68721837 19 acl-2010-A Taxonomy, Dataset, and Classifier for Automatic Noun Compound Interpretation

Author: Stephen Tratz ; Eduard Hovy

Abstract: The automatic interpretation of noun-noun compounds is an important subproblem within many natural language processing applications and is an area of increasing interest. The problem is difficult, with disagreement regarding the number and nature of the relations, low inter-annotator agreement, and limited annotated data. In this paper, we present a novel taxonomy of relations that integrates previous relations, the largest publicly-available annotated dataset, and a supervised classification method for automatic noun compound interpretation.

3 0.63419718 196 acl-2010-Plot Induction and Evolutionary Search for Story Generation

Author: Neil McIntyre ; Mirella Lapata

Abstract: In this paper we develop a story generator that leverages knowledge inherent in corpora without requiring extensive manual involvement. A key feature in our approach is the reliance on a story planner which we acquire automatically by recording events, their participants, and their precedence relationships in a training corpus. Contrary to previous work our system does not follow a generate-and-rank architecture. Instead, we employ evolutionary search techniques to explore the space of possible stories which we argue are well suited to the story generation task. Experiments on generating simple children’s stories show that our system outperforms pre- vious data-driven approaches.

4 0.61036903 64 acl-2010-Complexity Assumptions in Ontology Verbalisation

Author: Richard Power

Abstract: We describe the strategy currently pursued for verbalising OWL ontologies by sentences in Controlled Natural Language (i.e., combining generic rules for realising logical patterns with ontology-specific lexicons for realising atomic terms for individuals, classes, and properties) and argue that its success depends on assumptions about the complexity of terms and axioms in the ontology. We then show, through analysis of a corpus of ontologies, that although these assumptions could in principle be violated, they are overwhelmingly respected in practice by ontology developers.

5 0.59057426 43 acl-2010-Automatically Generating Term Frequency Induced Taxonomies

Author: Karin Murthy ; Tanveer A Faruquie ; L Venkata Subramaniam ; Hima Prasad K ; Mukesh Mohania

Abstract: We propose a novel method to automatically acquire a term-frequency-based taxonomy from a corpus using an unsupervised method. A term-frequency-based taxonomy is useful for application domains where the frequency with which terms occur on their own and in combination with other terms imposes a natural term hierarchy. We highlight an application for our approach and demonstrate its effectiveness and robustness in extracting knowledge from real-world data.

6 0.57789904 58 acl-2010-Classification of Feedback Expressions in Multimodal Data

7 0.56945199 230 acl-2010-The Manually Annotated Sub-Corpus: A Community Resource for and by the People

8 0.56614554 233 acl-2010-The Same-Head Heuristic for Coreference

9 0.56528598 76 acl-2010-Creating Robust Supervised Classifiers via Web-Scale N-Gram Data

10 0.55631971 166 acl-2010-Learning Word-Class Lattices for Definition and Hypernym Extraction

11 0.54593009 219 acl-2010-Supervised Noun Phrase Coreference Research: The First Fifteen Years

12 0.54582548 72 acl-2010-Coreference Resolution across Corpora: Languages, Coding Schemes, and Preprocessing Information

13 0.5411461 85 acl-2010-Detecting Experiences from Weblogs

14 0.53467453 7 acl-2010-A Generalized-Zero-Preserving Method for Compact Encoding of Concept Lattices

15 0.53334647 149 acl-2010-Incorporating Extra-Linguistic Information into Reference Resolution in Collaborative Task Dialogue

16 0.5304653 200 acl-2010-Profiting from Mark-Up: Hyper-Text Annotations for Guided Parsing

17 0.52947408 111 acl-2010-Extracting Sequences from the Web

18 0.52895212 226 acl-2010-The Human Language Project: Building a Universal Corpus of the World's Languages

19 0.52649093 258 acl-2010-Weakly Supervised Learning of Presupposition Relations between Verbs

20 0.51900864 140 acl-2010-Identifying Non-Explicit Citing Sentences for Citation-Based Summarization.


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(14, 0.015), (25, 0.078), (39, 0.015), (42, 0.032), (44, 0.01), (59, 0.101), (73, 0.045), (76, 0.335), (78, 0.03), (80, 0.016), (83, 0.106), (84, 0.035), (98, 0.081)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.80803478 139 acl-2010-Identifying Generic Noun Phrases

Author: Nils Reiter ; Anette Frank

Abstract: This paper presents a supervised approach for identifying generic noun phrases in context. Generic statements express rulelike knowledge about kinds or events. Therefore, their identification is important for the automatic construction of knowledge bases. In particular, the distinction between generic and non-generic statements is crucial for the correct encoding of generic and instance-level information. Generic expressions have been studied extensively in formal semantics. Building on this work, we explore a corpus-based learning approach for identifying generic NPs, using selections of linguistically motivated features. Our results perform well above the baseline and existing prior work.

2 0.76806831 241 acl-2010-Transition-Based Parsing with Confidence-Weighted Classification

Author: Martin Haulrich

Abstract: We show that using confidence-weighted classification in transition-based parsing gives results comparable to using SVMs with faster training and parsing time. We also compare with other online learning algorithms and investigate the effect of pruning features when using confidenceweighted classification.

3 0.70584428 125 acl-2010-Generating Templates of Entity Summaries with an Entity-Aspect Model and Pattern Mining

Author: Peng Li ; Jing Jiang ; Yinglin Wang

Abstract: In this paper, we propose a novel approach to automatic generation of summary templates from given collections of summary articles. This kind of summary templates can be useful in various applications. We first develop an entity-aspect LDA model to simultaneously cluster both sentences and words into aspects. We then apply frequent subtree pattern mining on the dependency parse trees of the clustered and labeled sentences to discover sentence patterns that well represent the aspects. Key features of our method include automatic grouping of semantically related sentence patterns and automatic identification of template slots that need to be filled in. We apply our method on five Wikipedia entity categories and compare our method with two baseline methods. Both quantitative evaluation based on human judgment and qualitative comparison demonstrate the effectiveness and advantages of our method.

4 0.65047538 211 acl-2010-Simple, Accurate Parsing with an All-Fragments Grammar

Author: Mohit Bansal ; Dan Klein

Abstract: We present a simple but accurate parser which exploits both large tree fragments and symbol refinement. We parse with all fragments of the training set, in contrast to much recent work on tree selection in data-oriented parsing and treesubstitution grammar learning. We require only simple, deterministic grammar symbol refinement, in contrast to recent work on latent symbol refinement. Moreover, our parser requires no explicit lexicon machinery, instead parsing input sentences as character streams. Despite its simplicity, our parser achieves accuracies of over 88% F1 on the standard English WSJ task, which is competitive with substantially more complicated state-of-theart lexicalized and latent-variable parsers. Additional specific contributions center on making implicit all-fragments parsing efficient, including a coarse-to-fine inference scheme and a new graph encoding.

5 0.52542287 84 acl-2010-Detecting Errors in Automatically-Parsed Dependency Relations

Author: Markus Dickinson

Abstract: We outline different methods to detect errors in automatically-parsed dependency corpora, by comparing so-called dependency rules to their representation in the training data and flagging anomalous ones. By comparing each new rule to every relevant rule from training, we can identify parts of parse trees which are likely erroneous. Even the relatively simple methods of comparison we propose show promise for speeding up the annotation process. 1 Introduction and Motivation Given the need for high-quality dependency parses in applications such as statistical machine translation (Xu et al., 2009), natural language generation (Wan et al., 2009), and text summarization evaluation (Owczarzak, 2009), there is a corresponding need for high-quality dependency annotation, for the training and evaluation of dependency parsers (Buchholz and Marsi, 2006). Furthermore, parsing accuracy degrades unless sufficient amounts of labeled training data from the same domain are available (e.g., Gildea, 2001 ; Sekine, 1997), and thus we need larger and more varied annotated treebanks, covering a wide range of domains. However, there is a bottleneck in obtaining annotation, due to the need for manual intervention in annotating a treebank. One approach is to develop automatically-parsed corpora (van Noord and Bouma, 2009), but a natural disadvantage with such data is that it contains parsing errors. Identifying the most problematic parses for human post-processing could combine the benefits of automatic and manual annotation, by allowing a human annotator to efficiently correct automatic errors. We thus set out in this paper to detect errors in automatically-parsed data. If annotated corpora are to grow in scale and retain a high quality, annotation errors which arise from automatic processing must be minimized, as errors have a negative impact on training and eval- uation of NLP technology (see discussion and references in Boyd et al., 2008, sec. 1). There is work on detecting errors in dependency corpus annotation (Boyd et al., 2008), but this is based on finding inconsistencies in annotation for identical recurring strings. This emphasis on identical strings can result in high precision, but many strings do not recur, negatively impacting the recall of error detection. Furthermore, since the same strings often receive the same automatic parse, the types of inconsistencies detected are likely to have resulted from manual annotation. While we can build from the insight that simple methods can provide reliable annotation checks, we need an approach which relies on more general properties of the dependency structures, in order to develop techniques which work for automatically-parsed corpora. Developing techniques to detect errors in parses in a way which is independent of corpus and parser has fairly broad implications. By using only the information available in a training corpus, the methods we explore are applicable to annotation error detection for either hand-annotated or automatically-parsed corpora and can also provide insights for parse reranking (e.g., Hall and Nov a´k, 2005) or parse revision (Attardi and Ciaramita, 2007). Although we focus only on detecting errors in automatically-parsed data, similar techniques have been applied for hand-annotated data (Dickinson, 2008; Dickinson and Foster, 2009). Our general approach is based on extracting a grammar from an annotated corpus and comparing dependency rules in a new (automaticallyannotated) corpus to the grammar. 
Roughly speaking, if a dependency rule—which represents all the dependents of a head together (see section 3. 1)— does not fit well with the grammar, it is flagged as potentially erroneous. The methods do not have to be retrained for a given parser’s output (e.g., 729 Proce dinUgsp osfa tlhae, 4S8wthed Aen n,u 1a1l-1 M6e Jeutilnyg 2 o0f1 t0h.e ?c As2s0o1c0ia Atisosnoc foiart Cionom fopru Ctaotmiopnuatla Lti on gaulis Lti cnsg,u piasgtiecs 729–738, Campbell and Johnson, 2002), but work by comparing any tree to what is in the training grammar (cf. also approaches stacking hand-written rules on top of other parsers (Bick, 2007)). We propose to flag erroneous parse rules, using information which reflects different grammatical properties: POS lookup, bigram information, and full rule comparisons. We build on a method to detect so-called ad hoc rules, as described in section 2, and then turn to the main approaches in section 3. After a discussion of a simple way to flag POS anomalies in section 4, we evaluate the different methods in section 5, using the outputs from two different parsers. The methodology proposed in this paper is easy to implement and independent of corpus, language, or parser. 2 Approach We take as a starting point two methods for detecting ad hoc rules in constituency annotation (Dickinson, 2008). Ad hoc rules are CFG productions extracted from a treebank which are “used for specific constructions and unlikely to be used again,” indicating annotation errors and rules for ungrammaticalities (see also Dickinson and Foster, 2009). Each method compares a given CFG rule to all the rules in a treebank grammar. Based on the number of similar rules, a score is assigned, and rules with the lowest scores are flagged as potentially ad hoc. This procedure is applicable whether the rules in question are from a new data set—as in this paper, where parses are compared to a training data grammar—or drawn from the treebank grammar itself (i.e., an internal consistency check). The two methods differ in how the comparisons are done. First, the bigram method abstracts a rule to its bigrams. Thus, a rule such as NP → rJJu NeN to provides support fso,r aN rPu → uDcTh aJJs J NJ NN, iJnJ NthNat pitr vshidareess tuhpep oJrJt NfoNr sequence. By contrast, in the other method, which we call the whole rule method,1 a rule is compared in its totality to the grammar rules, using Levenshtein distance. There is no abstraction, meaning all elements are present—e.g., NP → DT JJ JJ NN is very similar to eNsePn → eD.gT. ,J NJ PN N→ b DeTcau JsJe J Jth Ne sequences mdiiflfearr by only one category. While previously used for constituencies, what is at issue is simply the valency of a rule, where by valency we refer to a head and its entire set 1This is referred to whole daughters in Dickinson (2008), but the meaning of “daughters” is less clear for dependencies. of arguments and adjuncts (cf. Przepi´ orkowski, 2006)—that is, a head and all its dependents. The methods work because we expect there to be regularities in valency structure in a treebank grammar; non-conformity to such regularities indicates a potential problem. 3 Ad hoc rule detection 3.1 An appropriate representation To capture valency, consider the dependency tree from the Talbanken05 corpus (Nilsson and Hall, 2005) in figure 1, for the Swedish sentence in (1), which has four dependency pairs.2 (1) Det g a˚r bara inte ihop . 
it goes just not together ‘It just doesn’t add up.’ SS MA NA PL Det g a˚r bara inte ihop PO VV AB AB AB Figure 1: Dependency graph example On a par with constituency rules, we define a grammar rule as a dependency relation rewriting as a head with its sequence of POS/dependent pairs (cf. Kuhlmann and Satta, 2009), as in figure 2. This representation supports the detection of idiosyncracies in valency.3 1. 12.. 23.. 34.. TOP → root ROOT:VV TROOPOT → → SoSt R:POOO VT:VV MVA:AB NA:AB PL:AB RSSO → P →O :5A. BN AN → ABB P SMSA → → AOB 56.. NPLA → A ABB Figure 2: Rule representation for (1) For example, for the ROOT category, the head is a verb (VV), and it has 4 dependents. The extent to which this rule is odd depends upon whether comparable rules—i.e., other ROOT rules or other VV rules (see section 3.2)—have a similar set of dependents. While many of the other rules seem rather spare, they provide useful information, showing categories which have no dependents. With a TOP rule, we have a rule for every 2Category definitions are in appendix A. 3Valency is difficult to define for coordination and is specific to an annotation scheme. We leave this for the future. 730 head, including the virtual root. Thus, we can find anomalous rules such as TOP → root ROOT:AV ROOT:NN, wulheesre su multiple categories hROavOe T b:AeeVn parsed as ROOT. 3.2 Making appropriate comparisons In comparing rules, we are trying to find evidence that a particular (parsed) rule is valid by examining the evidence from the (training) grammar. Units of comparison To determine similarity, one can compare dependency relations, POS tags, or both. Valency refers to both properties, e.g., verbs which allow verbal (POS) subjects (dependency). Thus, we use the pairs of dependency relations and POS tags as the units of comparison. Flagging individual elements Previous work scored only entire rules, but some dependencies are problematic and others are not. Thus, our methods score individual elements of a rule. Comparable rules We do not want to compare a rule to all grammar rules, only to those which should have the same valents. Comparability could be defined in terms of a rule’s dependency relation (LHS) or in terms of its head. Consider the four different object (OO) rules in (2). These vary a great deal, and much of the variability comes from the fact that they are headed by different POS categories, which tend to have different selectional properties. The head POS thus seems to be predictive of a rule’s valency. (2) a. OO → PO b. OO → DT:EN AT:AJ NN ET:VV c. OO → SS:PO QV VG:VV d. OO → DT:PO AT:AJ VN But we might lose information by ignoring rules with the same left-hand side (LHS). Our approach is thus to take the greater value of scores when comparing to rules either with the same depen- dency relation or with the same head. A rule has multiple chances to prove its value, and low scores will only be for rules without any type of support. Taking these points together, for a given rule of interest r, we assign a score (S) to each element ei in r, where r = e1...em by taking the maximum of scores for rules with the same head (h) or same LHS (lhs), as in (3). For the first element in (2b), for example, S(DT:EN) = max{s(DT:EN, NN), s(DT:EN, OO)}. TTh:eE question ixs now Tho:EwN we dNe)-, fsin(De s(ei, c) fOor)} t.he T comparable sele nmowen hto c. 
(3) S(ei) = max{s(ei, h) , s(ei, lhs)} 3.3 Whole rule anomalies 3.3.1 Motivation The whole rule method compares a list of a rule’s dependents to rules in a database, and then flags rule elements without much support. By using all dependents as a basis for comparison, this method detects improper dependencies (e.g., an adverb modifying a noun), dependencies in the wrong overall location of a rule (e.g., an adverb before an object), and rules with unnecessarily long ar- gument structures. For example, in (4), we have an improper relation between skall (‘shall’) and sambeskattas (‘be taxed together’), as in figure 3. It is parsed as an adverb (AA), whereas it should be a verb group (VG). The rule for this part of the tree is +F → ++:++ SV AA:VV, and the AA:VV position wF i→ll b +e low-scoring b:VecVau,s aen dth teh ++:++ VSVV context does not support it. (4) Makars o¨vriga inkomster a¨r B-inkomster spouses’ other incomes are B-incomes och skall som tidigare sambeskattas . and shall as previously be taxed togeher . ‘The other incomes of spouses are B-incomes and shall, as previously, be taxed together.’ ++ +F UK KA och skall som tidigare ++ SV UK AJ VG sambeskattas VV ++ +F UK SS och skall som tidigare ++ SV UK AJ AA sambeskattas VV Figure 3: Wrong label (top=gold, bottom=parsed) 3.3.2 Implementation The method we use to determine similarity arises from considering what a rule is like without a problematic element. Consider +F → ++:++ SV pArAob:VleVm afrtiocm e figure 3, Cwohnesried eArA + Fsh →ould + +b:e+ a d SifVferent category (VG). The rule without this error, +F → ++:++ SV, starts several rules in the 731 training data, including some with VG:VV as the next item. The subrule ++:++ SV seems to be reliable, whereas the subrules containing AA:VV (++:++ AA:VV and SV AA:VV) are less reliable. We thus determine reliability by seeing how often each subsequence occurs in the training rule set. Throughout this paper, we use the term subrule to refer to a rule subsequence which is exactly one element shorter than the rule it is a component of. We examine subrules, counting their frequency as subrules, not as complete rules. For example, TOP rules with more than one dependent are problematic, e.g., TOP → root ROOT:AV ROOT:NN. Correspondingly, Pth →ere r are no rOulTe:sA wVith R OthOrTee: NeNle-. ments containing the subrule root ROOT:AV. We formalize this by setting the score s(ei, c) equal to the summation of the frequencies of all comparable subrules containing ei from the training data, as in (5), where B is the set of subrules of r with length one less. (5) s(ei, c) = Psub∈B:ei∈sub C(sub, c) For example, Pwith c = +F, the frequency of +F → ++:++ SV as a subrule is added to the scores f→or ++:++ aVnd a sS aV. s Ibnr tlheis i case, d+ tFo → ++:++ SfoVr VG:BV, +dF S → ++:++ S cVas VG:AV, a +nd+ ++F+ → ++:++ VSV, +VFG →:VV + a:l+l +ad SdV support Vfo,r a n+dF → ++:++ +SV+ being a legitimate dsdub sruuplep.o Thus, ++:++ and SV are less likely to be the sources of any problems. Since +F → SV AA:VV and +F → ++:++ mAsA.:V SVin hcaev +e very l SittVle support i ann tdhe + trFai →ning data, AA:VV receives a low score. Note that the subrule count C(sub, c) is different than counting the number of rules containing a subrule, as can be seen with identical elements. For example, for SS → VN ET:PR ET:PR, C(VN ET:PR, SS) = 2, SinS keeping wE Tith:P tRhe E fTac:Pt Rth,a Ct t(hVerNe are 2 pieces of evidence for its legitimacy. 
3.4 Bigram anomalies 3.4.1 Motivation The bigram method examines relationships between adjacent sisters, complementing the whole rule method by focusing on local properties. For (6), for example, we find the gold and parsed trees in figure 4. For the long parsed rule TA → PR HinD f:igIDur HeD 4.:ID F IoRr t:hIeR lAonNg:R pOar JR:IR, ea lTl Aele →men PtRs get low whole rule scores, i.e., are flagged as potentially erroneous. But only the final elements have anomalous bigrams: HD:ID IR:IR, IR:IR AN:RO, and AN:RO JR:IR all never occur. (6) N a¨r det g ¨aller inkomst a˚ret 1971 ( when it concerns the income year 1971 ( taxerings a˚ret 1972 ) skall barnet ... assessment year 1972 ) shall the child . . . ‘Concerning the income year of 1971 (assessment year 1972), the child . . . ’ 3.4.2 Implementation To obtain a bigram score for an element, we simply add together the bigrams which contain the element in question, as in (7). (7) s(ei, c) = C(ei−1ei, c) + C(eiei+1 , c) Consider the rule from figure 4. With c = TA, the bigram HD:ID IR:IR never occurs, so both HD:ID and IR:IR get 0 added to their score. HD:ID HD:ID, however, is a frequent bigram, so it adds weight to HD:ID, i.e., positive evidence comes from the bigram on the left. If we look at IR:IR, on the other hand, IR:IR AN:RO occurs 0 times, and so IR:IR gets a total score of 0. Both scoring methods treat each element independently. Every single element could be given a low score, even though once one is corrected, another would have a higher score. Future work can examine factoring in all elements at once. 4 Additional information The methods presented so far have limited definitions of comparability. As using complementary information has been useful in, e.g., POS error detection (Loftsson, 2009), we explore other simple comparable properties of a dependency grammar. Namely, we include: a) frequency information of an overall dependency rule and b) information on how likely each dependent is to be in a relation with its head, described next. 4.1 Including POS information Consider PA → SS:NN XX:XX HV OO:VN, as iCl ounsstirdaeterd P iAn figure :5N foNr XthXe :sXeXnte HncVe OinO (8). NT,h aiss rule is entirely correct, yet the XX:XX position has low whole rule and bigram scores. (8) Uppgift om vilka orter som information of which neighborhood who har utk o¨rning finner Ni has delivery find ocks a˚ i . . . you also in . . . ‘You can also find information about which neighborhoods have delivery services in . . . ’ 732 AA HD HD DT PA IR DT AN JR ... N a¨r det g ¨aller inkomst a˚ret 1971 ( taxerings a˚ret 1972 ) ... PR ID ID RO IR NN NN RO TAHDHDPAETIRDTANJR. N a¨r det g ¨aller PR ID inkomst a˚ret ID NN 1971 ( RO IR taxerings a˚ret NN 1972 RO IR ... ) ... IR ... Figure 4: A rule with extra dependents (top=gold, bottom=parsed) ET Uppgift NN DT om vilka PR PO SS orter NN XX PA som har XX OO utk o¨rning HV VN Figure 5: Overflagging (gold=parsed) One method which does not have this problem of overflagging uses a “lexicon” of POS tag pairs, examining relations between POS, irrespective of position. We extract POS pairs, note their dependency relation, and add a L/R to the label to indicate which is the head (Boyd et al., 2008). Additionally, we note how often two POS categories occur as a non-depenency, using the label NIL, to help determine whether there should be any attachment. We generate NILs by enumerating all POS pairs in a sentence. For example, from figure 5, the parsed POS pairs include NN PR → ETL, eN 5N, t hPeO p → NIL, eStc. 
p We convert the frequencies to probabilities. For example, of 4 total occurrences of XX HV in the training data, 2 are XX-R (cf. figure 5). A probability of 0.5 is quite high, given that NILs are often the most frequent label for POS pairs. 5 Evaluation In evaluating the methods, our main question is: how accurate are the dependencies, in terms of both attachment and labeling? We therefore currently examine the scores for elements functioning as dependents in a rule. In figure 5, for example, for har (‘has’), we look at its score within ET → PfoRr hPAar:H (‘Vha asn’)d, not wloohken a itt iftusn scctoiornes w as a head, as in PA → SS:NN XX:XX HV OO:VN. Relatedly, for each method, we are interested in whether elements with scores below a threshold have worse attachment accuracy than scores above, as we predict they do. We can measure this by scoring each testing data position below the threshold as a 1 if it has the correct head and dependency relation and a 0 otherwise. These are simply labeled attachment scores (LAS). Scoring separately for positions above and below a threshold views the task as one of sorting parser output into two bins, those more or less likely to be correctly parsed. For development, we also report unlabeled attachement scores (UAS). Since the goal is to speed up the post-editing of corpus data by flagging erroneous rules, we also report the precision and recall for error detection. We count either attachment or labeling errors as an error, and precision and recall are measured with respect to how many errors are found below the threshold. For development, we use two Fscores to provide a measure of the settings to ex- amine across language, corpus, and parser conditions: the balanced F1 measure and the F0.5 measure, weighing precision twice as much. Precision is likely more important in this context, so as to prevent annotators from sorting through too many false positives. In practice, one way to use these methods is to start with the lowest thresholds and work upwards until there are too many non-errors. To establish a basis for comparison, we compare 733 method performance to a parser on its own.4 By examining the parser output without any automatic assistance, how often does a correction need to be made? 5.1 The data All our data comes from the CoNLL-X Shared Task (Buchholz and Marsi, 2006), specifically the 4 data sets freely available online. We use the Swedish Talbanken data (Nilsson and Hall, 2005) and the transition-based dependency parser MaltParser (Nivre et al., 2007), with the default set- tings, for developing the method. To test across languages and corpora, we use MaltParser on the other 3 corpora: the Danish DDT (Kromann, 2003), Dutch Alpino (van der Beek et al., 2002), and Portuguese Bosque data (Afonso et al., 2002). Then, we present results using the graph-based parser MSTParser (McDonald and Pereira, 2006), again with default settings, to test the methods across parsers. We use the gold standard POS tags for all experiments. 5.2 Development data In the first line of table 1, we report the baseline MaltParser accuracies on the Swedish test data, including baseline error detection precision (=1LASb), recall, and (the best) F-scores. In the rest of table 1, we report the best-performing results for each of the methods,5 providing the number of rules below and above a particular threshold, along with corresponding UAS and LAS values. 
To get the raw number of identified rules, multiply the number of corpus position below a threshold (b) times the error detection precision (P). For ex- × ample, the bigram method with a threshold of 39 leads to finding 283 errors (455 .622). Dependency e 2le8m3e enrrtos rws (it4h5 frequency below the lowest threshold have lower attachment scores (66.6% vs. 90. 1% LAS), showing that simply using a complete rule helps sort dependencies. However, frequency thresholds have fairly low precision, i.e., 33.4% at their best. The whole rule and bigram methods reveal greater precision in identifying problematic dependencies, isolating elements with lower UAS and LAS scores than with frequency, along with corresponding greater pre4One may also use parser confidence or parser revision methods as a basis of comparison, but we are aware of no systematic evaluation of these approaches for detecting errors. 5Freq=rule frequency, WR=whole rule, Bi=bigram, POS=POS-based (POS scores multiplied by 10,000) cision and F-scores. The bigram method is more fine-grained, identifying small numbers of rule elements at each threshold, resulting in high error detection precision. With a threshold of 39, for example, we find over a quarter of the parser errors with 62% precision, from this one piece of information. For POS information, we flag 23.6% of the cases with over 60% precision (at 81.6). Taking all these results together, we can begin to sort more reliable from less reliable dependency tree elements, using very simple information. Additionally, these methods naturally group cases together by linguistic properties (e.g., adverbialverb dependencies within a particualr context), allowing a human to uncover the principle behind parse failure and ajudicate similar cases at the same time (cf. Wallis, 2003). 5.3 Discussion Examining some of the output from the Talbanken test data by hand, we find that a prominent cause of false positives, i.e., correctly-parsed cases with low scores, stems from low-frequency dependency-POS label pairs. If the dependency rarely occurs in the training data with the particular POS, then it receives a low score, regardless of its context. For example, the parsed rule TA → IG:IG RO has a correct dependency relation (IG) G be:tIwGee RnO Oth hea aPsO aS c tags IcGt d daenpde nitsd e hnecayd RO, yet is assigned a whole rule score of 2 and a bigram score of 20. It turns out that IG:IG only occurs 144 times in the training data, and in 11 of those cases (7.6%) it appears immediately before RO. One might consider normalizing the scores based on overall frequency or adjusting the scores to account for other dependency rules in the sentence: in this case, there may be no better attachment. Other false positives are correctly-parsed elements that are a part of erroneous rules. For instance, in AA → UK:UK SS:PO TA:AJ AV SP:AJ sOtaAn:PceR, +nF A:HAV → +F:HV, Kth SeS fi:rPsOt + TFA:H:AVJ AisV correct, yet given a low score (0 whole rule, 1 bigram). The following and erroneous +F:HV is similarly given a low score. As above, such cases might be handled by looking for attachments in other rules (cf. Attardi and Ciaramita, 2007), but these cases should be relatively unproblematic for handcorrection, given the neighboring error. We also examined false negatives, i.e., errors with high scores. There are many examples of PR PA:NN rules, for instance, with the NN improp734 erly attached, but there are also many correct instances of PR PA:NN. 
5.4 Other corpora

We now turn to the parsed data from three other corpora. The Alpino and Bosque corpora are approximately the same size as Talbanken, so we use the same thresholds for them. The DDT data is approximately half the size; to adjust, we simply halve the scores. In tables 2, 3, and 4, we present the results, using the best F0.5 and F1 settings from development. At a glance, we observe that the best method differs for each corpus, depending on whether precision or recall is emphasized, with the bigram method generally having high precision.

For Alpino, error detection is better with frequency than, for example, bigram scores. This is likely due to the fact that Alpino has the smallest label set of any of the corpora, with only 24 dependency labels and 12 POS tags (cf. 64 and 41 in Talbanken, respectively). With a smaller label set, there are fewer possible bigrams that could be anomalous, but more reliable statistics about a whole rule. Likewise, with fewer possible POS tag pairs, Alpino has lower precision for the low-threshold POS scores than the other corpora. For the whole rule scores, the DDT data is worse (compare its 46.1% precision with Bosque's 45.6%, with vastly different recall values), which could be due to the smaller training data. One might also consider the qualitative differences in the dependency inventory of DDT compared to the others, e.g., appositions, distinctions in names, and more types of modifiers.

5.5 MSTParser

Turning to the results of running the methods on the output of MSTParser, we find similar but slightly worse values for the whole rule and bigram methods, as shown in tables 5–8. What is most striking are the differences in the POS-based method for Bosque and DDT (tables 7 and 8), where a large percentage of the test corpus is underneath the threshold. MSTParser is apparently positing fewer distinct head-dependent pairs, as most of them fall under the given thresholds. With the exception of the POS-based method for DDT (where LASb is actually higher than LASa), the different methods seem to be accurate enough to be used as part of corpus post-editing.

6 Summary and Outlook

We have proposed different methods for flagging the errors in automatically-parsed corpora, by treating the problem as one of looking for anomalous rules with respect to a treebank grammar. The different methods incorporate differing types and amounts of information, notably comparisons among dependency rules and bigrams within such rules. Using these methods, we demonstrated success in sorting well-formed output from erroneous output across languages, corpora, and parsers.

Given that the rule representations and comparison methods use both POS and dependency information, a next step in evaluating and improving the methods is to examine automatically POS-tagged data. Our methods should be able to find POS errors in addition to dependency errors. Furthermore, although we have indicated that differences in accuracy can be linked to differences in the granularity and particular distinctions of the annotation scheme, it is still an open question as to which methods work best for which schemes and for which constructions (e.g., coordination).
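Two small pieces of machinery recur in these cross-corpus experiments: the POS-based method converts counts of dependency labels seen with a given POS pair into probabilities (as in the XX HV example at the start of section 5), and scores are rescaled when a corpus differs in size from the development data (halved for the half-size DDT). The sketch below illustrates both steps under assumptions of mine, namely that a POS pair is a (dependent POS, head POS) tuple and that the rescaling generalizes linearly with corpus size; neither detail is spelled out explicitly in the text above.

```python
from collections import Counter

# Illustrative sketch (assumed representations, not the paper's exact definitions):
# (1) turn counts of dependency labels observed for a (dependent POS, head POS)
#     pair into probabilities, as in the XX HV -> XX-R example;
# (2) rescale scores linearly by relative corpus size (the paper halves scores
#     for the half-size DDT; the linear generalization is an assumption).

def pos_pair_probabilities(observations):
    """observations: iterable of ((dep_pos, head_pos), dependency_label)."""
    pair_totals = Counter()
    pair_label_counts = Counter()
    for pair, label in observations:
        pair_totals[pair] += 1
        pair_label_counts[(pair, label)] += 1
    return {
        (pair, label): count / pair_totals[pair]
        for (pair, label), count in pair_label_counts.items()
    }

def rescale_score(score, dev_corpus_size, target_corpus_size):
    """Linear rescaling by relative corpus size."""
    return score * target_corpus_size / dev_corpus_size

# Toy data mirroring the example above: 4 XX HV occurrences, 2 labeled XX-R,
# the rest NIL (the NIL labels here are hypothetical filler).
observations = [(("XX", "HV"), "XX-R")] * 2 + [(("XX", "HV"), "NIL")] * 2
probs = pos_pair_probabilities(observations)
print(probs[(("XX", "HV"), "XX-R")])      # 0.5
print(rescale_score(40, 300000, 150000))  # 20.0, i.e., halved for a half-size corpus
```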
Acknowledgments

Thanks to Sandra Kübler and Amber Smith for comments on an earlier draft; Yvonne Samuelsson for help with the Swedish translations; the IU Computational Linguistics discussion group for feedback; and Julia Hockenmaier, Chris Brew, and Rebecca Hwa for discussion on the general topic.

A Some Talbanken05 categories

[Table of Talbanken05 dependency categories not preserved in the extraction.]

References

Afonso, Susana, Eckhard Bick, Renato Haber and Diana Santos (2002). Floresta Sintá(c)tica: a treebank for Portuguese. In Proceedings of LREC 2002. Las Palmas, pp. 1698–1703.

Attardi, Giuseppe and Massimiliano Ciaramita (2007). Tree Revision Learning for Dependency Parsing. In Proceedings of NAACL-HLT-07. Rochester, NY, pp. 388–395.

Bick, Eckhard (2007). Hybrid Ways to Improve Domain Independence in an ML Dependency Parser. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007. Prague, Czech Republic, pp. 1119–1123.

Boyd, Adriane, Markus Dickinson and Detmar Meurers (2008). On Detecting Errors in Dependency Treebanks. Research on Language and Computation 6(2), 113–137.

Buchholz, Sabine and Erwin Marsi (2006). CoNLL-X Shared Task on Multilingual Dependency Parsing. In Proceedings of CoNLL-X. New York City, pp. 149–164.

Campbell, David and Stephen Johnson (2002). A transformational-based learner for dependency grammars in discharge summaries. In Proceedings of the ACL-02 Workshop on Natural Language Processing in the Biomedical Domain. Philadelphia, pp. 37–44.

Dickinson, Markus (2008). Ad Hoc Treebank Structures. In Proceedings of ACL-08. Columbus, OH.

Dickinson, Markus and Jennifer Foster (2009). Similarity Rules! Exploring Methods for Ad-Hoc Rule Detection. In Proceedings of TLT-7. Groningen, The Netherlands.

Gildea, Daniel (2001). Corpus Variation and Parser Performance. In Proceedings of EMNLP-01. Pittsburgh, PA.

Hall, Keith and Václav Novák (2005). Corrective Modeling for Non-Projective Dependency Parsing. In Proceedings of IWPT-05. Vancouver, pp. 42–52.

Kromann, Matthias Trautner (2003). The Danish Dependency Treebank and the underlying linguistic theory. In Proceedings of TLT-03.

Kuhlmann, Marco and Giorgio Satta (2009). Treebank Grammar Techniques for Non-Projective Dependency Parsing. In Proceedings of EACL-09. Athens, Greece, pp. 478–486.

Loftsson, Hrafn (2009). Correcting a POS-Tagged Corpus Using Three Complementary Methods. In Proceedings of EACL-09. Athens, Greece, pp. 523–531.

McDonald, Ryan and Fernando Pereira (2006). Online learning of approximate dependency parsing algorithms. In Proceedings of EACL-06. Trento.

Nilsson, Jens and Johan Hall (2005). Reconstruction of the Swedish Treebank Talbanken. MSI report 05067, Växjö University: School of Mathematics and Systems Engineering.

Nivre, Joakim, Johan Hall, Jens Nilsson, Atanas Chanev, Gülşen Eryiğit, Sandra Kübler, Svetoslav Marinov and Erwin Marsi (2007). MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering 13(2), 95–135.

Owczarzak, Karolina (2009). DEPEVAL(summ): Dependency-based Evaluation for Automatic Summaries. In Proceedings of ACL-AFNLP-09. Suntec, Singapore, pp. 190–198.

Przepiórkowski, Adam (2006). What to acquire from corpora in automatic valence acquisition. In Violetta Koseska-Toszewa and Roman Roszko (eds.), Semantyka a konfrontacja jezykowa, tom 3, Warsaw: Slawistyczny Ośrodek Wydawniczy PAN, pp. 25–41.

Sekine, Satoshi (1997). The Domain Dependence of Parsing. In Proceedings of ANLP-96. Washington, DC.
van der Beek, Leonoor, Gosse Bouma, Robert Malouf and Gertjan van Noord (2002). The Alpino Dependency Treebank. In Proceedings of CLIN 2001. Rodopi.

van Noord, Gertjan and Gosse Bouma (2009). Parsed Corpora for Linguistics. In Proceedings of the EACL 2009 Workshop on the Interaction between Linguistics and Computational Linguistics: Virtuous, Vicious or Vacuous?. Athens, pp. 33–39.

Wallis, Sean (2003). Completing Parsed Corpora. In Anne Abeillé (ed.), Treebanks: Building and using syntactically annotated corpora, Dordrecht: Kluwer Academic Publishers, pp. 61–71.

Wan, Stephen, Mark Dras, Robert Dale and Cécile Paris (2009). Improving Grammaticality in Statistical Sentence Generation: Introducing a Dependency Spanning Tree Algorithm with an Argument Satisfaction Model. In Proceedings of EACL-09. Athens, Greece, pp. 852–860.

Xu, Peng, Jaeho Kang, Michael Ringgaard and Franz Och (2009). Using a Dependency Parser to Improve SMT for Subject-Object-Verb Languages. In Proceedings of NAACL-HLT-09. Boulder, Colorado, pp. 245–253.

6 0.52389389 130 acl-2010-Hard Constraints for Grammatical Function Labelling

7 0.51509243 59 acl-2010-Cognitively Plausible Models of Human Language Processing

8 0.51342672 60 acl-2010-Collocation Extraction beyond the Independence Assumption

9 0.51313508 128 acl-2010-Grammar Prototyping and Testing with the LinGO Grammar Matrix Customization System

10 0.51148516 17 acl-2010-A Structured Model for Joint Learning of Argument Roles and Predicate Senses

11 0.51077884 252 acl-2010-Using Parse Features for Preposition Selection and Error Detection

12 0.50894666 108 acl-2010-Expanding Verb Coverage in Cyc with VerbNet

13 0.50815064 195 acl-2010-Phylogenetic Grammar Induction

14 0.50548291 1 acl-2010-"Ask Not What Textual Entailment Can Do for You..."

15 0.50409931 85 acl-2010-Detecting Experiences from Weblogs

16 0.50380731 162 acl-2010-Learning Common Grammar from Multilingual Corpus

17 0.50243288 181 acl-2010-On Learning Subtypes of the Part-Whole Relation: Do Not Mix Your Seeds

18 0.4992277 251 acl-2010-Using Anaphora Resolution to Improve Opinion Target Identification in Movie Reviews

19 0.49796471 218 acl-2010-Structural Semantic Relatedness: A Knowledge-Based Method to Named Entity Disambiguation

20 0.49761632 93 acl-2010-Dynamic Programming for Linear-Time Incremental Parsing