acl acl2010 acl2010-84 knowledge-graph by maker-knowledge-mining: Detecting Errors in Automatically-Parsed Dependency Relations
Source: pdf
Author: Markus Dickinson
Abstract: We outline different methods to detect errors in automatically-parsed dependency corpora, by comparing so-called dependency rules to their representation in the training data and flagging anomalous ones. By comparing each new rule to every relevant rule from training, we can identify parts of parse trees which are likely erroneous. Even the relatively simple methods of comparison we propose show promise for speeding up the annotation process.

1 Introduction and Motivation

Given the need for high-quality dependency parses in applications such as statistical machine translation (Xu et al., 2009), natural language generation (Wan et al., 2009), and text summarization evaluation (Owczarzak, 2009), there is a corresponding need for high-quality dependency annotation, for the training and evaluation of dependency parsers (Buchholz and Marsi, 2006). Furthermore, parsing accuracy degrades unless sufficient amounts of labeled training data from the same domain are available (e.g., Gildea, 2001; Sekine, 1997), and thus we need larger and more varied annotated treebanks, covering a wide range of domains. However, there is a bottleneck in obtaining annotation, due to the need for manual intervention in annotating a treebank. One approach is to develop automatically-parsed corpora (van Noord and Bouma, 2009), but a natural disadvantage with such data is that it contains parsing errors. Identifying the most problematic parses for human post-processing could combine the benefits of automatic and manual annotation, by allowing a human annotator to efficiently correct automatic errors. We thus set out in this paper to detect errors in automatically-parsed data.

If annotated corpora are to grow in scale and retain a high quality, annotation errors which arise from automatic processing must be minimized, as errors have a negative impact on training and evaluation of NLP technology (see discussion and references in Boyd et al., 2008, sec. 1). There is work on detecting errors in dependency corpus annotation (Boyd et al., 2008), but this is based on finding inconsistencies in annotation for identical recurring strings. This emphasis on identical strings can result in high precision, but many strings do not recur, negatively impacting the recall of error detection. Furthermore, since the same strings often receive the same automatic parse, the types of inconsistencies detected are likely to have resulted from manual annotation. While we can build from the insight that simple methods can provide reliable annotation checks, we need an approach which relies on more general properties of the dependency structures, in order to develop techniques which work for automatically-parsed corpora.

Developing techniques to detect errors in parses in a way which is independent of corpus and parser has fairly broad implications. By using only the information available in a training corpus, the methods we explore are applicable to annotation error detection for either hand-annotated or automatically-parsed corpora and can also provide insights for parse reranking (e.g., Hall and Novák, 2005) or parse revision (Attardi and Ciaramita, 2007). Although we focus only on detecting errors in automatically-parsed data, similar techniques have been applied for hand-annotated data (Dickinson, 2008; Dickinson and Foster, 2009).

Our general approach is based on extracting a grammar from an annotated corpus and comparing dependency rules in a new (automatically-annotated) corpus to the grammar.
Roughly speaking, if a dependency rule, which represents all the dependents of a head together (see section 3.1), does not fit well with the grammar, it is flagged as potentially erroneous. The methods do not have to be retrained for a given parser's output (e.g., Campbell and Johnson, 2002), but work by comparing any tree to what is in the training grammar (cf. also approaches stacking hand-written rules on top of other parsers (Bick, 2007)). We propose to flag erroneous parse rules, using information which reflects different grammatical properties: POS lookup, bigram information, and full rule comparisons. We build on a method to detect so-called ad hoc rules, as described in section 2, and then turn to the main approaches in section 3. After a discussion of a simple way to flag POS anomalies in section 4, we evaluate the different methods in section 5, using the outputs from two different parsers. The methodology proposed in this paper is easy to implement and independent of corpus, language, or parser.

2 Approach

We take as a starting point two methods for detecting ad hoc rules in constituency annotation (Dickinson, 2008). Ad hoc rules are CFG productions extracted from a treebank which are "used for specific constructions and unlikely to be used again," indicating annotation errors and rules for ungrammaticalities (see also Dickinson and Foster, 2009). Each method compares a given CFG rule to all the rules in a treebank grammar. Based on the number of similar rules, a score is assigned, and rules with the lowest scores are flagged as potentially ad hoc. This procedure is applicable whether the rules in question are from a new data set (as in this paper, where parses are compared to a training data grammar) or drawn from the treebank grammar itself (i.e., an internal consistency check).

The two methods differ in how the comparisons are done. First, the bigram method abstracts a rule to its bigrams. Thus, a rule such as NP → JJ NN provides support for NP → DT JJ NN, in that it shares the JJ NN sequence. By contrast, in the other method, which we call the whole rule method (referred to as whole daughters in Dickinson (2008), though the meaning of "daughters" is less clear for dependencies), a rule is compared in its totality to the grammar rules, using Levenshtein distance. There is no abstraction, meaning all elements are present; e.g., NP → DT JJ JJ NN is very similar to NP → DT JJ NN, because the sequences differ by only one category.

While previously used for constituencies, what is at issue is simply the valency of a rule, where by valency we refer to a head and its entire set of arguments and adjuncts (cf. Przepiórkowski, 2006), that is, a head and all its dependents. The methods work because we expect there to be regularities in valency structure in a treebank grammar; non-conformity to such regularities indicates a potential problem.
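To make the two comparison strategies concrete, here is a minimal illustrative sketch (our own code, not the paper's implementation) of how a CFG rule's right-hand side can be abstracted to bigrams or compared wholesale with Levenshtein distance; the function names and representation as tuples of category labels are assumptions.

```python
# Illustrative sketch of the two rule-comparison ideas from Dickinson (2008),
# applied to right-hand sides represented as tuples of category labels.

from itertools import tee

def rhs_bigrams(rhs):
    """Abstract a rule's right-hand side to its set of adjacent category pairs."""
    a, b = tee(rhs)
    next(b, None)
    return set(zip(a, b))

def bigram_overlap(rhs1, rhs2):
    """Support that rhs2 lends to rhs1: number of shared bigrams."""
    return len(rhs_bigrams(rhs1) & rhs_bigrams(rhs2))

def levenshtein(rhs1, rhs2):
    """Edit distance between two category sequences (whole rule comparison)."""
    prev = list(range(len(rhs2) + 1))
    for i, c1 in enumerate(rhs1, 1):
        curr = [i]
        for j, c2 in enumerate(rhs2, 1):
            cost = 0 if c1 == c2 else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

# NP -> JJ NN shares one bigram with NP -> DT JJ NN;
# NP -> DT JJ JJ NN differs from NP -> DT JJ NN by one category.
print(bigram_overlap(("JJ", "NN"), ("DT", "JJ", "NN")))               # 1
print(levenshtein(("DT", "JJ", "JJ", "NN"), ("DT", "JJ", "NN")))      # 1
```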
3 Ad hoc rule detection

3.1 An appropriate representation

To capture valency, consider the dependency tree from the Talbanken05 corpus (Nilsson and Hall, 2005) in figure 1, for the Swedish sentence in (1), which has four dependency pairs (category definitions are in appendix A).

(1) Det går bara inte ihop .
    it goes just not together
    'It just doesn't add up.'

[Figure 1: Dependency graph example. The words Det går bara inte ihop carry the POS tags PO VV AB AB AB; går is the root, and its dependents are attached via the arcs SS, MA, NA and PL.]

On a par with constituency rules, we define a grammar rule as a dependency relation rewriting as a head with its sequence of POS/dependent pairs (cf. Kuhlmann and Satta, 2009), as in figure 2. This representation supports the detection of idiosyncrasies in valency. (Valency is difficult to define for coordination and is specific to an annotation scheme; we leave this for the future.)

1. TOP → root ROOT:VV
2. ROOT → SS:PO VV MA:AB NA:AB PL:AB
3. SS → PO
4. MA → AB
5. NA → AB
6. PL → AB

Figure 2: Rule representation for (1)

For example, for the ROOT category, the head is a verb (VV), and it has 4 dependents. The extent to which this rule is odd depends upon whether comparable rules, i.e., other ROOT rules or other VV rules (see section 3.2), have a similar set of dependents. While many of the other rules seem rather spare, they provide useful information, showing categories which have no dependents. With a TOP rule, we have a rule for every head, including the virtual root. Thus, we can find anomalous rules such as TOP → root ROOT:AV ROOT:NN, where multiple categories have been parsed as ROOT.
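As a concrete illustration of this representation, the following minimal sketch (our own code, not the authors') extracts figure-2 style rules from a sentence given as (word, POS, head index, dependency relation) tuples, with head index 0 for the virtual root; the input format is an assumption.

```python
# Illustrative sketch: extract "deprel -> dep1:POS1 ... HEADPOS ... depN:POSN"
# rules from one dependency tree. Tokens are (form, pos, head, deprel),
# 1-indexed, with head 0 meaning the virtual root. Not the paper's code.

from collections import defaultdict

def extract_rules(tokens):
    deps = defaultdict(list)            # head index -> list of dependent indices
    for i, (_, _, head, _) in enumerate(tokens, 1):
        deps[head].append(i)

    rules = []
    # Rule for the virtual root (TOP).
    root_items = ["root"] + [f"{tokens[d-1][3]}:{tokens[d-1][1]}" for d in deps[0]]
    rules.append(("TOP", tuple(root_items)))

    # One rule per token: its own relation rewrites as its dependents
    # (as deprel:POS pairs) with the head's POS in surface order.
    for i, (_, pos, _, deprel) in enumerate(tokens, 1):
        items = []
        for j in sorted(deps[i] + [i]):
            items.append(pos if j == i else f"{tokens[j-1][3]}:{tokens[j-1][1]}")
        rules.append((deprel, tuple(items)))
    return rules

# Sentence (1): Det går bara inte ihop
sent = [("Det", "PO", 2, "SS"), ("går", "VV", 0, "ROOT"),
        ("bara", "AB", 2, "MA"), ("inte", "AB", 2, "NA"), ("ihop", "AB", 2, "PL")]
for lhs, rhs in extract_rules(sent):
    print(lhs, "->", " ".join(rhs))
```

Run on (1), this reproduces the six rules of figure 2, from TOP → root ROOT:VV down to PL → AB.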
3.2 Making appropriate comparisons

In comparing rules, we are trying to find evidence that a particular (parsed) rule is valid by examining the evidence from the (training) grammar.

Units of comparison. To determine similarity, one can compare dependency relations, POS tags, or both. Valency refers to both properties, e.g., verbs which allow verbal (POS) subjects (dependency). Thus, we use the pairs of dependency relations and POS tags as the units of comparison.

Flagging individual elements. Previous work scored only entire rules, but some dependencies are problematic and others are not. Thus, our methods score individual elements of a rule.

Comparable rules. We do not want to compare a rule to all grammar rules, only to those which should have the same valents. Comparability could be defined in terms of a rule's dependency relation (LHS) or in terms of its head. Consider the four different object (OO) rules in (2). These vary a great deal, and much of the variability comes from the fact that they are headed by different POS categories, which tend to have different selectional properties. The head POS thus seems to be predictive of a rule's valency.

(2) a. OO → PO
    b. OO → DT:EN AT:AJ NN ET:VV
    c. OO → SS:PO QV VG:VV
    d. OO → DT:PO AT:AJ VN

But we might lose information by ignoring rules with the same left-hand side (LHS). Our approach is thus to take the greater value of scores when comparing to rules either with the same dependency relation or with the same head. A rule has multiple chances to prove its value, and low scores will only be for rules without any type of support.

Taking these points together, for a given rule of interest r, we assign a score (S) to each element ei in r, where r = e1...em, by taking the maximum of scores for rules with the same head (h) or same LHS (lhs), as in (3). For the first element in (2b), for example, S(DT:EN) = max{s(DT:EN, NN), s(DT:EN, OO)}. The question is now how we define s(ei, c) for the comparable element c.

(3) S(ei) = max{s(ei, h), s(ei, lhs)}

3.3 Whole rule anomalies

3.3.1 Motivation

The whole rule method compares a list of a rule's dependents to rules in a database, and then flags rule elements without much support. By using all dependents as a basis for comparison, this method detects improper dependencies (e.g., an adverb modifying a noun), dependencies in the wrong overall location of a rule (e.g., an adverb before an object), and rules with unnecessarily long argument structures. For example, in (4), we have an improper relation between skall ('shall') and sambeskattas ('be taxed together'), as in figure 3. It is parsed as an adverb (AA), whereas it should be a verb group (VG). The rule for this part of the tree is +F → ++:++ SV AA:VV, and the AA:VV position will be low-scoring because the ++:++ SV context does not support it.

(4) Makars övriga inkomster är B-inkomster och skall som tidigare sambeskattas .
    spouses' other incomes are B-incomes and shall as previously be taxed together .
    'The other incomes of spouses are B-incomes and shall, as previously, be taxed together.'

[Figure 3: Wrong label (top=gold, bottom=parsed). In the gold tree sambeskattas (VV) is attached to skall (SV) as VG; in the parsed tree it is labeled AA.]

3.3.2 Implementation

The method we use to determine similarity arises from considering what a rule is like without a problematic element. Consider +F → ++:++ SV AA:VV from figure 3, where AA should be a different category (VG). The rule without this error, +F → ++:++ SV, starts several rules in the training data, including some with VG:VV as the next item. The subrule ++:++ SV seems to be reliable, whereas the subrules containing AA:VV (++:++ AA:VV and SV AA:VV) are less reliable. We thus determine reliability by seeing how often each subsequence occurs in the training rule set. Throughout this paper, we use the term subrule to refer to a rule subsequence which is exactly one element shorter than the rule it is a component of. We examine subrules, counting their frequency as subrules, not as complete rules. For example, TOP rules with more than one dependent are problematic, e.g., TOP → root ROOT:AV ROOT:NN. Correspondingly, there are no rules with three elements containing the subrule root ROOT:AV.

We formalize this by setting the score s(ei, c) equal to the summation of the frequencies of all comparable subrules containing ei from the training data, as in (5), where B is the set of subrules of r with length one less.

(5) s(ei, c) = Σ_{sub ∈ B : ei ∈ sub} C(sub, c)

For example, with c = +F, the frequency of +F → ++:++ SV as a subrule is added to the scores for ++:++ and SV. In this case, +F → ++:++ SV VG:BV, +F → ++:++ SV VG:AV, and +F → ++:++ SV VG:VV all add support for +F → ++:++ SV being a legitimate subrule. Thus, ++:++ and SV are less likely to be the sources of any problems. Since +F → SV AA:VV and +F → ++:++ AA:VV have very little support in the training data, AA:VV receives a low score.

Note that the subrule count C(sub, c) is different than counting the number of rules containing a subrule, as can be seen with identical elements. For example, for SS → VN ET:PR ET:PR, C(VN ET:PR, SS) = 2, in keeping with the fact that there are 2 pieces of evidence for its legitimacy.
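The following is a minimal sketch of the whole rule scoring in (3) and (5); it is our own illustrative code under the assumption that rules are stored as (LHS, head POS, tuple of elements) triples, not the authors' implementation.

```python
# Illustrative sketch of the whole rule method: score each element of a
# parsed rule by how often the subrules (subsequences one element shorter)
# that contain it occur as subrules of training rules with the same LHS or
# the same head POS. Assumed data structures, not the paper's code.

from collections import Counter
from itertools import combinations

def build_subrule_counts(training_rules):
    """training_rules: iterable of (lhs, head_pos, elements) triples, where
    elements is the ordered tuple of deprel:POS items plus the head POS."""
    counts = Counter()
    for lhs, head_pos, elements in training_rules:
        n = len(elements)
        if n < 2:
            continue
        for idx in combinations(range(n), n - 1):   # drop one element at a time
            sub = tuple(elements[i] for i in idx)
            counts[(lhs, sub)] += 1                 # comparable by dependency relation
            counts[(head_pos, sub)] += 1            # comparable by head POS
    return counts

def score_elements(rule, counts):
    """Return [(element, S(element))] for a parsed rule, following (3) and (5)."""
    lhs, head_pos, elements = rule
    n = len(elements)
    scores = []
    for i, e in enumerate(elements):
        per_context = {lhs: 0, head_pos: 0}
        for idx in combinations(range(n), n - 1):
            if i not in idx:                        # keep only subrules containing ei
                continue
            sub = tuple(elements[j] for j in idx)
            for c in (lhs, head_pos):
                per_context[c] += counts[(c, sub)]
        scores.append((e, max(per_context.values())))
    return scores

# Elements with low scores (such as AA:VV in the example above) are the ones
# that would be flagged for human review.
```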
3.4 Bigram anomalies

3.4.1 Motivation

The bigram method examines relationships between adjacent sisters, complementing the whole rule method by focusing on local properties. For (6), for example, we find the gold and parsed trees in figure 4. For the long parsed rule TA → PR HD:ID HD:ID IR:IR AN:RO JR:IR, all elements get low whole rule scores, i.e., are flagged as potentially erroneous. But only the final elements have anomalous bigrams: HD:ID IR:IR, IR:IR AN:RO, and AN:RO JR:IR all never occur.

(6) När det gäller inkomståret 1971 ( taxeringsåret 1972 ) skall barnet ...
    when it concerns the income year 1971 ( assessment year 1972 ) shall the child ...
    'Concerning the income year of 1971 (assessment year 1972), the child ...'

[Figure 4: A rule with extra dependents (top=gold, bottom=parsed), for the fragment När det gäller inkomståret 1971 ( taxeringsåret 1972 ) in (6).]

3.4.2 Implementation

To obtain a bigram score for an element, we simply add together the bigrams which contain the element in question, as in (7).

(7) s(ei, c) = C(ei−1 ei, c) + C(ei ei+1, c)

Consider the rule from figure 4. With c = TA, the bigram HD:ID IR:IR never occurs, so both HD:ID and IR:IR get 0 added to their score. HD:ID HD:ID, however, is a frequent bigram, so it adds weight to HD:ID, i.e., positive evidence comes from the bigram on the left. If we look at IR:IR, on the other hand, IR:IR AN:RO occurs 0 times, and so IR:IR gets a total score of 0.

Both scoring methods treat each element independently. Every single element could be given a low score, even though once one is corrected, another would have a higher score. Future work can examine factoring in all elements at once.

4 Additional information

The methods presented so far have limited definitions of comparability. As using complementary information has been useful in, e.g., POS error detection (Loftsson, 2009), we explore other simple comparable properties of a dependency grammar. Namely, we include: a) frequency information of an overall dependency rule and b) information on how likely each dependent is to be in a relation with its head, described next.

4.1 Including POS information

Consider PA → SS:NN XX:XX HV OO:VN, as illustrated in figure 5 for the sentence in (8). This rule is entirely correct, yet the XX:XX position has low whole rule and bigram scores.

(8) Uppgift om vilka orter som har utkörning finner Ni också i ...
    information of which neighborhood who has delivery find you also in ...
    'You can also find information about which neighborhoods have delivery services in ...'

[Figure 5: Overflagging (gold=parsed), showing the tree for Uppgift om vilka orter som har utkörning in (8).]

One method which does not have this problem of overflagging uses a "lexicon" of POS tag pairs, examining relations between POS, irrespective of position. We extract POS pairs, note their dependency relation, and add an L/R to the label to indicate which is the head (Boyd et al., 2008). Additionally, we note how often two POS categories occur as a non-dependency, using the label NIL, to help determine whether there should be any attachment. We generate NILs by enumerating all POS pairs in a sentence. For example, from figure 5, the parsed POS pairs include NN PR → ET-L, NN PO → NIL, etc.
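Both of these local checks reduce to simple counting. The sketch below is our own illustrative code, not the authors'; it shows the bigram score from (7) and the construction of the POS-pair lexicon with L/R-marked labels and NIL entries, and the data structures are assumptions.

```python
# Illustrative sketch: the bigram score in (7) plus the POS-pair "lexicon"
# with L/R-marked labels and NIL for non-dependencies. Assumed data
# structures, not the paper's implementation.

from collections import Counter
from itertools import combinations

def bigram_score(elements, i, c, bigram_counts):
    """s(ei, c) = C(e_{i-1} e_i, c) + C(e_i e_{i+1}, c)."""
    score = 0
    if i > 0:
        score += bigram_counts[(c, elements[i - 1], elements[i])]
    if i < len(elements) - 1:
        score += bigram_counts[(c, elements[i], elements[i + 1])]
    return score

def build_pos_pair_lexicon(sentences):
    """sentences: lists of (pos, head_index, deprel) tuples, 1-indexed heads.
    Counts labels for ordered POS pairs: 'REL-L'/'REL-R' marks which side is
    the head; NIL records co-occurrence without any dependency."""
    lex = Counter()
    for sent in sentences:
        attached = set()
        for i, (pos, head, rel) in enumerate(sent, 1):
            if head == 0:
                continue
            hpos = sent[head - 1][0]
            left, right = (hpos, pos) if head < i else (pos, hpos)
            side = "L" if head < i else "R"        # which member is the head
            lex[(left, right, f"{rel}-{side}")] += 1
            attached.add(frozenset((i, head)))
        for i, j in combinations(range(1, len(sent) + 1), 2):
            if frozenset((i, j)) not in attached:
                lex[(sent[i - 1][0], sent[j - 1][0], "NIL")] += 1
    return lex
```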
We convert the frequencies to probabilities. For example, of 4 total occurrences of XX HV in the training data, 2 are XX-R (cf. figure 5). A probability of 0.5 is quite high, given that NILs are often the most frequent label for POS pairs.

5 Evaluation

In evaluating the methods, our main question is: how accurate are the dependencies, in terms of both attachment and labeling? We therefore currently examine the scores for elements functioning as dependents in a rule. In figure 5, for example, for har ('has'), we look at its score within ET → PR PA:HV, and not when it functions as a head, as in PA → SS:NN XX:XX HV OO:VN.

Relatedly, for each method, we are interested in whether elements with scores below a threshold have worse attachment accuracy than scores above, as we predict they do. We can measure this by scoring each testing data position below the threshold as a 1 if it has the correct head and dependency relation and a 0 otherwise. These are simply labeled attachment scores (LAS). Scoring separately for positions above and below a threshold views the task as one of sorting parser output into two bins, those more or less likely to be correctly parsed. For development, we also report unlabeled attachment scores (UAS).

Since the goal is to speed up the post-editing of corpus data by flagging erroneous rules, we also report the precision and recall for error detection. We count either attachment or labeling errors as an error, and precision and recall are measured with respect to how many errors are found below the threshold. For development, we use two F-scores to provide a measure of the settings to examine across language, corpus, and parser conditions: the balanced F1 measure and the F0.5 measure, weighing precision twice as much. Precision is likely more important in this context, so as to prevent annotators from sorting through too many false positives. In practice, one way to use these methods is to start with the lowest thresholds and work upwards until there are too many non-errors.

To establish a basis for comparison, we compare method performance to a parser on its own. (One may also use parser confidence or parser revision methods as a basis of comparison, but we are aware of no systematic evaluation of these approaches for detecting errors.) By examining the parser output without any automatic assistance, how often does a correction need to be made?

5.1 The data

All our data comes from the CoNLL-X Shared Task (Buchholz and Marsi, 2006), specifically the 4 data sets freely available online. We use the Swedish Talbanken data (Nilsson and Hall, 2005) and the transition-based dependency parser MaltParser (Nivre et al., 2007), with the default settings, for developing the method. To test across languages and corpora, we use MaltParser on the other 3 corpora: the Danish DDT (Kromann, 2003), Dutch Alpino (van der Beek et al., 2002), and Portuguese Bosque data (Afonso et al., 2002). Then, we present results using the graph-based parser MSTParser (McDonald and Pereira, 2006), again with default settings, to test the methods across parsers. We use the gold standard POS tags for all experiments.

5.2 Development data

In the first line of table 1, we report the baseline MaltParser accuracies on the Swedish test data, including baseline error detection precision (= 1 − LASb), recall, and (the best) F-scores. In the rest of table 1, we report the best-performing results for each of the methods (Freq = rule frequency, WR = whole rule, Bi = bigram, POS = POS-based; POS scores are multiplied by 10,000), providing the number of rules below and above a particular threshold, along with corresponding UAS and LAS values.
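The evaluation protocol just described, sorting test positions into two bins by score and reporting LAS per bin plus error detection precision, recall, F1 and F0.5 for the low-scoring bin, can be sketched as follows; this is our own illustrative code, not the paper's evaluation scripts.

```python
# Illustrative sketch of the evaluation: split positions by a score threshold,
# compute LAS per bin, and precision/recall/F-scores for the "below" bin.

def evaluate(positions, threshold):
    """positions: list of (score, correct) pairs, where correct is True iff
    the position has the right head and dependency label."""
    below = [c for s, c in positions if s <= threshold]
    above = [c for s, c in positions if s > threshold]
    las_b = sum(below) / len(below) if below else 0.0
    las_a = sum(above) / len(above) if above else 0.0

    errors_below = len(below) - sum(below)
    errors_total = len(positions) - sum(c for _, c in positions)
    precision = errors_below / len(below) if below else 0.0
    recall = errors_below / errors_total if errors_total else 0.0

    def f_beta(p, r, beta):
        return (1 + beta**2) * p * r / (beta**2 * p + r) if p + r else 0.0

    return {"LASb": las_b, "LASa": las_a, "P": precision, "R": recall,
            "F1": f_beta(precision, recall, 1.0),
            "F0.5": f_beta(precision, recall, 0.5)}
```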
To get the raw number of identified rules, multiply the number of corpus positions below a threshold (b) times the error detection precision (P). For example, the bigram method with a threshold of 39 leads to finding 283 errors (455 × .622). Dependency elements with frequency below the lowest threshold have lower attachment scores (66.6% vs. 90.1% LAS), showing that simply using a complete rule helps sort dependencies. However, frequency thresholds have fairly low precision, i.e., 33.4% at their best. The whole rule and bigram methods reveal greater precision in identifying problematic dependencies, isolating elements with lower UAS and LAS scores than with frequency, along with correspondingly greater precision and F-scores. The bigram method is more fine-grained, identifying small numbers of rule elements at each threshold, resulting in high error detection precision. With a threshold of 39, for example, we find over a quarter of the parser errors with 62% precision, from this one piece of information. For POS information, we flag 23.6% of the cases with over 60% precision (at 81.6). Taking all these results together, we can begin to sort more reliable from less reliable dependency tree elements, using very simple information. Additionally, these methods naturally group cases together by linguistic properties (e.g., adverbial-verb dependencies within a particular context), allowing a human to uncover the principle behind parse failure and adjudicate similar cases at the same time (cf. Wallis, 2003).

5.3 Discussion

Examining some of the output from the Talbanken test data by hand, we find that a prominent cause of false positives, i.e., correctly-parsed cases with low scores, stems from low-frequency dependency-POS label pairs. If the dependency rarely occurs in the training data with the particular POS, then it receives a low score, regardless of its context. For example, the parsed rule TA → IG:IG RO has a correct dependency relation (IG) between the POS tag IG and its head RO, yet is assigned a whole rule score of 2 and a bigram score of 20. It turns out that IG:IG only occurs 144 times in the training data, and in 11 of those cases (7.6%) it appears immediately before RO. One might consider normalizing the scores based on overall frequency or adjusting the scores to account for other dependency rules in the sentence: in this case, there may be no better attachment.

Other false positives are correctly-parsed elements that are a part of erroneous rules. For instance, in AA → UK:UK SS:PO TA:AJ AV SP:AJ OA:PR +F:HV +F:HV, the first +F:HV is correct, yet given a low score (0 whole rule, 1 bigram). The following and erroneous +F:HV is similarly given a low score. As above, such cases might be handled by looking for attachments in other rules (cf. Attardi and Ciaramita, 2007), but these cases should be relatively unproblematic for hand-correction, given the neighboring error.

We also examined false negatives, i.e., errors with high scores. There are many examples of PR PA:NN rules, for instance, with the NN improperly attached, but there are also many correct instances of PR PA:NN.
To sort out the errors, one needs to look at lexical knowledge and/or other dependencies in the tree. With so little context, frequent rules with only one dependent are not prime candidates for our methods of error detection.

5.4 Other corpora

We now turn to the parsed data from three other corpora. The Alpino and Bosque corpora are approximately the same size as Talbanken, so we use the same thresholds for them. The DDT data is approximately half the size; to adjust, we simply halve the scores. In tables 2, 3, and 4, we present the results, using the best F0.5 and F1 settings from development. At a glance, we observe that the best method differs for each corpus and depending on an emphasis of precision or recall, with the bigram method generally having high precision.

For Alpino, error detection is better with frequency than, for example, bigram scores. This is likely due to the fact that Alpino has the smallest label set of any of the corpora, with only 24 dependency labels and 12 POS tags (cf. 64 and 41 in Talbanken, respectively). With a smaller label set, there are fewer possible bigrams that could be anomalous, but more reliable statistics about a whole rule. Likewise, with fewer possible POS tag pairs, Alpino has lower precision for the low-threshold POS scores than the other corpora. For the whole rule scores, the DDT data is worse (compare its 46.1% precision with Bosque's 45.6%, with vastly different recall values), which could be due to the smaller training data. One might also consider the qualitative differences in the dependency inventory of DDT compared to the others, e.g., appositions, distinctions in names, and more types of modifiers.

5.5 MSTParser

Turning to the results of running the methods on the output of MSTParser, we find similar but slightly worse values for the whole rule and bigram methods, as shown in tables 5-8. What is most striking are the differences in the POS-based method for Bosque and DDT (tables 7 and 8), where a large percentage of the test corpus is underneath the threshold. MSTParser is apparently positing fewer distinct head-dependent pairs, as most of them fall under the given thresholds. With the exception of the POS-based method for DDT (where LASb is actually higher than LASa), the different methods seem to be accurate enough to be used as part of corpus post-editing.

6 Summary and Outlook

We have proposed different methods for flagging the errors in automatically-parsed corpora, by treating the problem as one of looking for anomalous rules with respect to a treebank grammar. The different methods incorporate differing types and amounts of information, notably comparisons among dependency rules and bigrams within such rules. Using these methods, we demonstrated success in sorting well-formed output from erroneous output across language, corpora, and parsers.

Given that the rule representations and comparison methods use both POS and dependency information, a next step in evaluating and improving the methods is to examine automatically POS-tagged data. Our methods should be able to find POS errors in addition to dependency errors. Furthermore, although we have indicated that differences in accuracy can be linked to differences in the granularity and particular distinctions of the annotation scheme, it is still an open question as to which methods work best for which schemes and for which constructions (e.g., coordination).
Acknowledgments

Thanks to Sandra Kübler and Amber Smith for comments on an earlier draft; Yvonne Samuelsson for help with the Swedish translations; the IU Computational Linguistics discussion group for feedback; and Julia Hockenmaier, Chris Brew, and Rebecca Hwa for discussion on the general topic.

A Some Talbanken05 categories

[Table of Talbanken05 dependency and POS category definitions not recovered from the PDF extraction.]

References

Afonso, Susana, Eckhard Bick, Renato Haber and Diana Santos (2002). Floresta Sintá(c)tica: a treebank for Portuguese. In Proceedings of LREC 2002. Las Palmas, pp. 1698–1703.

Attardi, Giuseppe and Massimiliano Ciaramita (2007). Tree Revision Learning for Dependency Parsing. In Proceedings of NAACL-HLT-07. Rochester, NY, pp. 388–395.

Bick, Eckhard (2007). Hybrid Ways to Improve Domain Independence in an ML Dependency Parser. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007. Prague, Czech Republic, pp. 1119–1123.

Boyd, Adriane, Markus Dickinson and Detmar Meurers (2008). On Detecting Errors in Dependency Treebanks. Research on Language and Computation 6(2), 113–137.

Buchholz, Sabine and Erwin Marsi (2006). CoNLL-X Shared Task on Multilingual Dependency Parsing. In Proceedings of CoNLL-X. New York City, pp. 149–164.

Campbell, David and Stephen Johnson (2002). A transformational-based learner for dependency grammars in discharge summaries. In Proceedings of the ACL-02 Workshop on Natural Language Processing in the Biomedical Domain. Philadelphia, pp. 37–44.

Dickinson, Markus (2008). Ad Hoc Treebank Structures. In Proceedings of ACL-08. Columbus, OH.

Dickinson, Markus and Jennifer Foster (2009). Similarity Rules! Exploring Methods for Ad-Hoc Rule Detection. In Proceedings of TLT-7. Groningen, The Netherlands.

Gildea, Daniel (2001). Corpus Variation and Parser Performance. In Proceedings of EMNLP-01. Pittsburgh, PA.

Hall, Keith and Václav Novák (2005). Corrective Modeling for Non-Projective Dependency Parsing. In Proceedings of IWPT-05. Vancouver, pp. 42–52.

Kromann, Matthias Trautner (2003). The Danish Dependency Treebank and the underlying linguistic theory. In Proceedings of TLT-03.

Kuhlmann, Marco and Giorgio Satta (2009). Treebank Grammar Techniques for Non-Projective Dependency Parsing. In Proceedings of EACL-09. Athens, Greece, pp. 478–486.

Loftsson, Hrafn (2009). Correcting a POS-Tagged Corpus Using Three Complementary Methods. In Proceedings of EACL-09. Athens, Greece, pp. 523–531.

McDonald, Ryan and Fernando Pereira (2006). Online learning of approximate dependency parsing algorithms. In Proceedings of EACL-06. Trento.

Nilsson, Jens and Johan Hall (2005). Reconstruction of the Swedish Treebank Talbanken. MSI report 05067, Växjö University: School of Mathematics and Systems Engineering.

Nivre, Joakim, Johan Hall, Jens Nilsson, Atanas Chanev, Gulsen Eryigit, Sandra Kübler, Svetoslav Marinov and Erwin Marsi (2007). MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering 13(2), 95–135.

Owczarzak, Karolina (2009). DEPEVAL(summ): Dependency-based Evaluation for Automatic Summaries. In Proceedings of ACL-AFNLP-09. Suntec, Singapore, pp. 190–198.

Przepiórkowski, Adam (2006). What to acquire from corpora in automatic valence acquisition. In Violetta Koseska-Toszewa and Roman Roszko (eds.), Semantyka a konfrontacja jezykowa, tom 3, Warsaw: Slawistyczny Ośrodek Wydawniczy PAN, pp. 25–41.

Sekine, Satoshi (1997). The Domain Dependence of Parsing. In Proceedings of ANLP-96. Washington, DC.
van der Beek, Leonoor, Gosse Bouma, Robert Malouf and Gertjan van Noord (2002). The Alpino Dependency Treebank. In Proceedings of CLIN 2001. Rodopi.

van Noord, Gertjan and Gosse Bouma (2009). Parsed Corpora for Linguistics. In Proceedings of the EACL 2009 Workshop on the Interaction between Linguistics and Computational Linguistics: Virtuous, Vicious or Vacuous?. Athens, pp. 33–39.

Wallis, Sean (2003). Completing Parsed Corpora. In Anne Abeillé (ed.), Treebanks: Building and using syntactically annotated corpora, Dordrecht: Kluwer Academic Publishers, pp. 61–71.

Wan, Stephen, Mark Dras, Robert Dale and Cécile Paris (2009). Improving Grammaticality in Statistical Sentence Generation: Introducing a Dependency Spanning Tree Algorithm with an Argument Satisfaction Model. In Proceedings of EACL-09. Athens, Greece, pp. 852–860.

Xu, Peng, Jaeho Kang, Michael Ringgaard and Franz Och (2009). Using a Dependency Parser to Improve SMT for Subject-Object-Verb Languages. In Proceedings of NAACL-HLT-09. Boulder, Colorado, pp. 245–253.
Reference: text
sentIndex sentText sentNum sentScore
1 edu Abstract We outline different methods to detect errors in automatically-parsed dependency corpora, by comparing so-called dependency rules to their representation in the training data and flagging anomalous ones. [sent-2, score-0.719]
2 By comparing each new rule to every relevant rule from training, we can identify parts of parse trees which are likely erroneous. [sent-3, score-0.438]
3 1 Introduction and Motivation Given the need for high-quality dependency parses in applications such as statistical machine translation (Xu et al. [sent-5, score-0.171]
4 , 2009), and text summarization evaluation (Owczarzak, 2009), there is a corresponding need for high-quality dependency annotation, for the training and evaluation of dependency parsers (Buchholz and Marsi, 2006). [sent-7, score-0.342]
5 If annotated corpora are to grow in scale and retain a high quality, annotation errors which arise from automatic processing must be minimized, as errors have a negative impact on training and eval- uation of NLP technology (see discussion and references in Boyd et al. [sent-15, score-0.156]
6 There is work on detecting errors in dependency corpus annotation (Boyd et al. [sent-18, score-0.296]
7 While we can build from the insight that simple methods can provide reliable annotation checks, we need an approach which relies on more general properties of the dependency structures, in order to develop techniques which work for automatically-parsed corpora. [sent-22, score-0.171]
8 Our general approach is based on extracting a grammar from an annotated corpus and comparing dependency rules in a new (automaticallyannotated) corpus to the grammar. [sent-28, score-0.312]
9 Roughly speaking, if a dependency rule—which represents all the dependents of a head together (see section 3. [sent-29, score-0.295]
10 also approaches stacking hand-written rules on top of other parsers (Bick, 2007)). [sent-36, score-0.141]
11 We propose to flag erroneous parse rules, using information which reflects different grammatical properties: POS lookup, bigram information, and full rule comparisons. [sent-37, score-0.466]
12 We build on a method to detect so-called ad hoc rules, as described in section 2, and then turn to the main approaches in section 3. [sent-38, score-0.158]
13 2 Approach We take as a starting point two methods for detecting ad hoc rules in constituency annotation (Dickinson, 2008). [sent-41, score-0.346]
14 Ad hoc rules are CFG productions extracted from a treebank which are “used for specific constructions and unlikely to be used again,” indicating annotation errors and rules for ungrammaticalities (see also Dickinson and Foster, 2009). [sent-42, score-0.506]
15 Each method compares a given CFG rule to all the rules in a treebank grammar. [sent-43, score-0.416]
16 Based on the number of similar rules, a score is assigned, and rules with the lowest scores are flagged as potentially ad hoc. [sent-44, score-0.324]
17 This procedure is applicable whether the rules in question are from a new data set—as in this paper, where parses are compared to a training data grammar—or drawn from the treebank grammar itself (i. [sent-45, score-0.197]
18 First, the bigram method abstracts a rule to its bigrams. [sent-49, score-0.356]
19 Thus, a rule such as NP → rJJu NeN to provides support fso,r aN rPu → uDcTh aJJs J NJ NN, iJnJ NthNat pitr vshidareess tuhpep oJrJt NfoNr sequence. [sent-50, score-0.219]
20 By contrast, in the other method, which we call the whole rule method,1 a rule is compared in its totality to the grammar rules, using Levenshtein distance. [sent-51, score-0.501]
21 While previously used for constituencies, what is at issue is simply the valency of a rule, where by valency we refer to a head and its entire set 1This is referred to whole daughters in Dickinson (2008), but the meaning of “daughters” is less clear for dependencies. [sent-57, score-0.265]
22 1 An appropriate representation To capture valency, consider the dependency tree from the Talbanken05 corpus (Nilsson and Hall, 2005) in figure 1, for the Swedish sentence in (1), which has four dependency pairs. [sent-62, score-0.342]
23 ’ SS MA NA PL Det g a˚r bara inte ihop PO VV AB AB AB Figure 1: Dependency graph example On a par with constituency rules, we define a grammar rule as a dependency relation rewriting as a head with its sequence of POS/dependent pairs (cf. [sent-65, score-0.542]
24 The extent to which this rule is odd depends upon whether comparable rules—i. [sent-79, score-0.219]
25 , other ROOT rules or other VV rules (see section 3. [sent-81, score-0.282]
26 While many of the other rules seem rather spare, they provide useful information, showing categories which have no dependents. [sent-83, score-0.141]
27 With a TOP rule, we have a rule for every 2Category definitions are in appendix A. [sent-84, score-0.219]
28 Thus, we can find anomalous rules such as TOP → root ROOT:AV ROOT:NN, wulheesre su multiple categories hROavOe T b:AeeVn parsed as ROOT. [sent-88, score-0.389]
29 2 Making appropriate comparisons In comparing rules, we are trying to find evidence that a particular (parsed) rule is valid by examining the evidence from the (training) grammar. [sent-90, score-0.219]
30 Units of comparison To determine similarity, one can compare dependency relations, POS tags, or both. [sent-91, score-0.171]
31 Thus, we use the pairs of dependency relations and POS tags as the units of comparison. [sent-95, score-0.171]
32 Flagging individual elements Previous work scored only entire rules, but some dependencies are problematic and others are not. [sent-96, score-0.179]
33 Comparable rules We do not want to compare a rule to all grammar rules, only to those which should have the same valents. [sent-98, score-0.36]
34 Comparability could be defined in terms of a rule’s dependency relation (LHS) or in terms of its head. [sent-99, score-0.171]
35 Consider the four different object (OO) rules in (2). [sent-100, score-0.141]
36 OO → DT:PO AT:AJ VN But we might lose information by ignoring rules with the same left-hand side (LHS). [sent-107, score-0.141]
37 Our approach is thus to take the greater value of scores when comparing to rules either with the same depen- dency relation or with the same head. [sent-108, score-0.205]
38 A rule has multiple chances to prove its value, and low scores will only be for rules without any type of support. [sent-109, score-0.424]
39 Taking these points together, for a given rule of interest r, we assign a score (S) to each element ei in r, where r = e1. [sent-110, score-0.361]
40 em by taking the maximum of scores for rules with the same head (h) or same LHS (lhs), as in (3). [sent-113, score-0.267]
41 1 Motivation The whole rule method compares a list of a rule’s dependents to rules in a database, and then flags rule elements without much support. [sent-120, score-0.781]
42 , an adverb modifying a noun), dependencies in the wrong overall location of a rule (e. [sent-123, score-0.266]
43 , an adverb before an object), and rules with unnecessarily long ar- gument structures. [sent-125, score-0.141]
44 For example, in (4), we have an improper relation between skall (‘shall’) and sambeskattas (‘be taxed together’), as in figure 3. [sent-126, score-0.269]
45 The rule for this part of the tree is +F → ++:++ SV AA:VV, and the AA:VV position wF i→ll b +e low-scoring b:VecVau,s aen dth teh ++:++ VSVV context does not support it. [sent-128, score-0.219]
46 (4) Makars o¨vriga inkomster a¨r B-inkomster spouses’ other incomes are B-incomes och skall som tidigare sambeskattas . [sent-129, score-0.426]
47 ’ ++ +F UK KA och skall som tidigare ++ SV UK AJ VG sambeskattas VV ++ +F UK SS och skall som tidigare ++ SV UK AJ AA sambeskattas VV Figure 3: Wrong label (top=gold, bottom=parsed) 3. [sent-132, score-0.762]
48 2 Implementation The method we use to determine similarity arises from considering what a rule is like without a problematic element. [sent-134, score-0.274]
49 The rule without this error, +F → ++:++ SV, starts several rules in the 731 training data, including some with VG:VV as the next item. [sent-136, score-0.36]
50 The subrule ++:++ SV seems to be reliable, whereas the subrules containing AA:VV (++:++ AA:VV and SV AA:VV) are less reliable. [sent-137, score-0.247]
51 We thus determine reliability by seeing how often each subsequence occurs in the training rule set. [sent-138, score-0.219]
52 Throughout this paper, we use the term subrule to refer to a rule subsequence which is exactly one element shorter than the rule it is a component of. [sent-139, score-0.625]
53 For example, TOP rules with more than one dependent are problematic, e. [sent-141, score-0.141]
54 We formalize this by setting the score s(ei, c) equal to the summation of the frequencies of all comparable subrules containing ei from the training data, as in (5), where B is the set of subrules of r with length one less. [sent-146, score-0.314]
55 (5) s(ei, c) = Psub∈B:ei∈sub C(sub, c) For example, Pwith c = +F, the frequency of +F → ++:++ SV as a subrule is added to the scores f→or ++:++ aVnd a sS aV. [sent-147, score-0.199]
56 Note that the subrule count C(sub, c) is different than counting the number of rules containing a subrule, as can be seen with identical elements. [sent-152, score-0.276]
57 1 Motivation The bigram method examines relationships between adjacent sisters, complementing the whole rule method by focusing on local properties. [sent-157, score-0.419]
58 For the long parsed rule TA → PR HinD f:igIDur HeD 4. [sent-159, score-0.306]
59 :ID F IoRr t:hIeR lAonNg:R pOar JR:IR, ea lTl Aele →men PtRs get low whole rule scores, i. [sent-160, score-0.282]
60 But only the final elements have anomalous bigrams: HD:ID IR:IR, IR:IR AN:RO, and AN:RO JR:IR all never occur. [sent-163, score-0.156]
61 (6) N a¨r det g ¨aller inkomst a˚ret 1971 ( when it concerns the income year 1971 ( taxerings a˚ret 1972 ) skall barnet . [sent-164, score-0.364]
62 2 Implementation To obtain a bigram score for an element, we simply add together the bigrams which contain the element in question, as in (7). [sent-175, score-0.189]
63 (7) s(ei, c) = C(ei−1ei, c) + C(eiei+1 , c) Consider the rule from figure 4. [sent-176, score-0.219]
64 , POS error detection (Loftsson, 2009), we explore other simple comparable properties of a dependency grammar. [sent-188, score-0.219]
65 Namely, we include: a) frequency information of an overall dependency rule and b) information on how likely each dependent is to be in a relation with its head, described next. [sent-189, score-0.39]
66 NT,h aiss rule is entirely correct, yet the XX:XX position has low whole rule and bigram scores. [sent-192, score-0.638]
67 (8) Uppgift om vilka orter som information of which neighborhood who har utk o¨rning finner Ni has delivery find ocks a˚ i . [sent-193, score-0.179]
68 N a¨r det g ¨aller inkomst a˚ret 1971 ( taxerings a˚ret 1972 ) . [sent-205, score-0.207]
69 N a¨r det g ¨aller PR ID inkomst a˚ret ID NN 1971 ( RO IR taxerings a˚ret NN 1972 RO IR . [sent-209, score-0.207]
70 We extract POS pairs, note their dependency relation, and add a L/R to the label to indicate which is the head (Boyd et al. [sent-219, score-0.233]
71 We therefore currently examine the scores for elements functioning as dependents in a rule. [sent-230, score-0.203]
72 Relatedly, for each method, we are interested in whether elements with scores below a threshold have worse attachment accuracy than scores above, as we predict they do. [sent-232, score-0.307]
73 We can measure this by scoring each testing data position below the threshold as a 1 if it has the correct head and dependency relation and a 0 otherwise. [sent-233, score-0.281]
74 Scoring separately for positions above and below a threshold views the task as one of sorting parser output into two bins, those more or less likely to be correctly parsed. [sent-235, score-0.142]
75 Since the goal is to speed up the post-editing of corpus data by flagging erroneous rules, we also report the precision and recall for error detection. [sent-237, score-0.191]
76 We count either attachment or labeling errors as an error, and precision and recall are measured with respect to how many errors are found below the threshold. [sent-238, score-0.258]
77 We use the Swedish Talbanken data (Nilsson and Hall, 2005) and the transition-based dependency parser MaltParser (Nivre et al. [sent-247, score-0.219]
78 In the rest of table 1, we report the best-performing results for each of the methods,5 providing the number of rules below and above a particular threshold, along with corresponding UAS and LAS values. [sent-256, score-0.141]
79 To get the raw number of identified rules, multiply the number of corpus position below a threshold (b) times the error detection precision (P). [sent-257, score-0.144]
80 For ex- × ample, the bigram method with a threshold of 39 leads to finding 283 errors (455 . [sent-258, score-0.263]
81 Dependency e 2le8m3e enrrtos rws (it4h5 frequency below the lowest threshold have lower attachment scores (66. [sent-260, score-0.166]
82 1% LAS), showing that simply using a complete rule helps sort dependencies. [sent-263, score-0.219]
83 The bigram method is more fine-grained, identifying small numbers of rule elements at each threshold, resulting in high error detection precision. [sent-270, score-0.481]
84 With a threshold of 39, for example, we find over a quarter of the parser errors with 62% precision, from this one piece of information. [sent-271, score-0.174]
85 Taking all these results together, we can begin to sort more reliable from less reliable dependency tree elements, using very simple information. [sent-275, score-0.171]
86 If the dependency rarely occurs in the training data with the particular POS, then it receives a low score, regardless of its context. [sent-284, score-0.171]
87 For example, the parsed rule TA → IG:IG RO has a correct dependency relation (IG) G be:tIwGee RnO Oth hea aPsO aS c tags IcGt d daenpde nitsd e hnecayd RO, yet is assigned a whole rule score of 2 and a bigram score of 20. [sent-285, score-0.896]
88 One might consider normalizing the scores based on overall frequency or adjusting the scores to account for other dependency rules in the sentence: in this case, there may be no better attachment. [sent-288, score-0.44]
89 Other false positives are correctly-parsed elements that are a part of erroneous rules. [sent-289, score-0.141]
90 At a glance, we observe that the best method differs for each corpus and depending on an emphasis of precision or recall, with the bigram method generally having high precision. [sent-306, score-0.185]
91 For Alpino, error detection is better with frequency than, for example, bigram scores. [sent-307, score-0.185]
92 This is likely due to the fact that Alpino has the smallest label set of any of the corpora, with only 24 dependency labels and 12 POS tags (cf. [sent-308, score-0.171]
93 For the whole rule scores, the DDT data is worse (compare its 46. [sent-312, score-0.282]
94 One might also consider the qualitative differences in the dependency inventory of DDT compared to the others—e. [sent-315, score-0.171]
95 5 MSTParser Turning to the results of running the methods on the output of MSTParser, we find similar but slightly worse values for the whole rule and bigram methods, as shown in tables 5-8. [sent-319, score-0.419]
96 6 Summary and Outlook We have proposed different methods for flagging the errors in automatically-parsed corpora, by treating the problem as one of looking for anoma- lous rules with respect to a treebank grammar. [sent-323, score-0.354]
97 The different methods incorporate differing types and amounts of information, notably comparisons among dependency rules and bigrams within such rules. [sent-324, score-0.312]
98 Given that the rule representations and comparison methods use both POS and dependency information, a next step in evaluating and improving the methods is to examine automatically POStagged data. [sent-326, score-0.39]
99 Our methods should be able to find POS errors in addition to dependency errors. [sent-327, score-0.249]
100 A transformational-based learner for dependency grammars in discharge summaries. [sent-356, score-0.171]
wordName wordTfidf (topN-words)
[('vv', 0.237), ('rule', 0.219), ('ir', 0.178), ('dickinson', 0.177), ('dependency', 0.171), ('rules', 0.141), ('sv', 0.141), ('bigram', 0.137), ('nn', 0.136), ('ddt', 0.135), ('subrule', 0.135), ('oo', 0.127), ('vg', 0.126), ('ret', 0.118), ('pos', 0.113), ('skall', 0.112), ('som', 0.112), ('subrules', 0.112), ('aa', 0.11), ('hv', 0.108), ('alpino', 0.108), ('xx', 0.106), ('hd', 0.102), ('ss', 0.098), ('ro', 0.095), ('id', 0.095), ('hoc', 0.09), ('bosque', 0.09), ('mstparser', 0.09), ('sambeskattas', 0.09), ('talbanken', 0.09), ('ei', 0.09), ('parsed', 0.087), ('lhs', 0.084), ('dt', 0.084), ('root', 0.082), ('pr', 0.081), ('vn', 0.08), ('boyd', 0.079), ('flagging', 0.079), ('anomalous', 0.079), ('errors', 0.078), ('aj', 0.077), ('elements', 0.077), ('det', 0.073), ('ig', 0.07), ('valency', 0.07), ('ad', 0.068), ('anomalies', 0.067), ('har', 0.067), ('inkomst', 0.067), ('taxed', 0.067), ('taxerings', 0.067), ('tidigare', 0.067), ('ab', 0.065), ('po', 0.065), ('scores', 0.064), ('erroneous', 0.064), ('las', 0.064), ('av', 0.064), ('maltparser', 0.064), ('whole', 0.063), ('dependents', 0.062), ('swedish', 0.062), ('head', 0.062), ('marsi', 0.061), ('hall', 0.059), ('aller', 0.059), ('attardi', 0.059), ('bick', 0.059), ('markus', 0.056), ('treebank', 0.056), ('problematic', 0.055), ('attachment', 0.054), ('revision', 0.054), ('shall', 0.053), ('element', 0.052), ('nilsson', 0.051), ('jr', 0.051), ('flagged', 0.051), ('noord', 0.051), ('precision', 0.048), ('parser', 0.048), ('threshold', 0.048), ('uas', 0.048), ('detection', 0.048), ('detecting', 0.047), ('uk', 0.047), ('dependencies', 0.047), ('athens', 0.046), ('flag', 0.046), ('sorting', 0.046), ('abb', 0.045), ('bara', 0.045), ('eckhard', 0.045), ('ihop', 0.045), ('income', 0.045), ('incomes', 0.045), ('loftsson', 0.045), ('nil', 0.045)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999905 84 acl-2010-Detecting Errors in Automatically-Parsed Dependency Relations
Author: Markus Dickinson
Abstract: We outline different methods to detect errors in automatically-parsed dependency corpora, by comparing so-called dependency rules to their representation in the training data and flagging anomalous ones. By comparing each new rule to every relevant rule from training, we can identify parts of parse trees which are likely erroneous. Even the relatively simple methods of comparison we propose show promise for speeding up the annotation process. 1 Introduction and Motivation Given the need for high-quality dependency parses in applications such as statistical machine translation (Xu et al., 2009), natural language generation (Wan et al., 2009), and text summarization evaluation (Owczarzak, 2009), there is a corresponding need for high-quality dependency annotation, for the training and evaluation of dependency parsers (Buchholz and Marsi, 2006). Furthermore, parsing accuracy degrades unless sufficient amounts of labeled training data from the same domain are available (e.g., Gildea, 2001 ; Sekine, 1997), and thus we need larger and more varied annotated treebanks, covering a wide range of domains. However, there is a bottleneck in obtaining annotation, due to the need for manual intervention in annotating a treebank. One approach is to develop automatically-parsed corpora (van Noord and Bouma, 2009), but a natural disadvantage with such data is that it contains parsing errors. Identifying the most problematic parses for human post-processing could combine the benefits of automatic and manual annotation, by allowing a human annotator to efficiently correct automatic errors. We thus set out in this paper to detect errors in automatically-parsed data. If annotated corpora are to grow in scale and retain a high quality, annotation errors which arise from automatic processing must be minimized, as errors have a negative impact on training and eval- uation of NLP technology (see discussion and references in Boyd et al., 2008, sec. 1). There is work on detecting errors in dependency corpus annotation (Boyd et al., 2008), but this is based on finding inconsistencies in annotation for identical recurring strings. This emphasis on identical strings can result in high precision, but many strings do not recur, negatively impacting the recall of error detection. Furthermore, since the same strings often receive the same automatic parse, the types of inconsistencies detected are likely to have resulted from manual annotation. While we can build from the insight that simple methods can provide reliable annotation checks, we need an approach which relies on more general properties of the dependency structures, in order to develop techniques which work for automatically-parsed corpora. Developing techniques to detect errors in parses in a way which is independent of corpus and parser has fairly broad implications. By using only the information available in a training corpus, the methods we explore are applicable to annotation error detection for either hand-annotated or automatically-parsed corpora and can also provide insights for parse reranking (e.g., Hall and Nov a´k, 2005) or parse revision (Attardi and Ciaramita, 2007). Although we focus only on detecting errors in automatically-parsed data, similar techniques have been applied for hand-annotated data (Dickinson, 2008; Dickinson and Foster, 2009). Our general approach is based on extracting a grammar from an annotated corpus and comparing dependency rules in a new (automaticallyannotated) corpus to the grammar. 
Roughly speaking, if a dependency rule—which represents all the dependents of a head together (see section 3. 1)— does not fit well with the grammar, it is flagged as potentially erroneous. The methods do not have to be retrained for a given parser’s output (e.g., 729 Proce dinUgsp osfa tlhae, 4S8wthed Aen n,u 1a1l-1 M6e Jeutilnyg 2 o0f1 t0h.e ?c As2s0o1c0ia Atisosnoc foiart Cionom fopru Ctaotmiopnuatla Lti on gaulis Lti cnsg,u piasgtiecs 729–738, Campbell and Johnson, 2002), but work by comparing any tree to what is in the training grammar (cf. also approaches stacking hand-written rules on top of other parsers (Bick, 2007)). We propose to flag erroneous parse rules, using information which reflects different grammatical properties: POS lookup, bigram information, and full rule comparisons. We build on a method to detect so-called ad hoc rules, as described in section 2, and then turn to the main approaches in section 3. After a discussion of a simple way to flag POS anomalies in section 4, we evaluate the different methods in section 5, using the outputs from two different parsers. The methodology proposed in this paper is easy to implement and independent of corpus, language, or parser. 2 Approach We take as a starting point two methods for detecting ad hoc rules in constituency annotation (Dickinson, 2008). Ad hoc rules are CFG productions extracted from a treebank which are “used for specific constructions and unlikely to be used again,” indicating annotation errors and rules for ungrammaticalities (see also Dickinson and Foster, 2009). Each method compares a given CFG rule to all the rules in a treebank grammar. Based on the number of similar rules, a score is assigned, and rules with the lowest scores are flagged as potentially ad hoc. This procedure is applicable whether the rules in question are from a new data set—as in this paper, where parses are compared to a training data grammar—or drawn from the treebank grammar itself (i.e., an internal consistency check). The two methods differ in how the comparisons are done. First, the bigram method abstracts a rule to its bigrams. Thus, a rule such as NP → rJJu NeN to provides support fso,r aN rPu → uDcTh aJJs J NJ NN, iJnJ NthNat pitr vshidareess tuhpep oJrJt NfoNr sequence. By contrast, in the other method, which we call the whole rule method,1 a rule is compared in its totality to the grammar rules, using Levenshtein distance. There is no abstraction, meaning all elements are present—e.g., NP → DT JJ JJ NN is very similar to eNsePn → eD.gT. ,J NJ PN N→ b DeTcau JsJe J Jth Ne sequences mdiiflfearr by only one category. While previously used for constituencies, what is at issue is simply the valency of a rule, where by valency we refer to a head and its entire set 1This is referred to whole daughters in Dickinson (2008), but the meaning of “daughters” is less clear for dependencies. of arguments and adjuncts (cf. Przepi´ orkowski, 2006)—that is, a head and all its dependents. The methods work because we expect there to be regularities in valency structure in a treebank grammar; non-conformity to such regularities indicates a potential problem. 3 Ad hoc rule detection 3.1 An appropriate representation To capture valency, consider the dependency tree from the Talbanken05 corpus (Nilsson and Hall, 2005) in figure 1, for the Swedish sentence in (1), which has four dependency pairs.2 (1) Det g a˚r bara inte ihop . 
3 Ad hoc rule detection

3.1 An appropriate representation

To capture valency, consider the dependency tree from the Talbanken05 corpus (Nilsson and Hall, 2005) in figure 1, for the Swedish sentence in (1), which has four dependency pairs. (Category definitions are in appendix A.)

(1) Det går bara inte ihop .
    it goes just not together
    'It just doesn't add up.'

Figure 1: Dependency graph example — Det/PO går/VV bara/AB inte/AB ihop/AB, where Det, bara, inte, and ihop depend on går via the relations SS, MA, NA, and PL.

On a par with constituency rules, we define a grammar rule as a dependency relation rewriting as a head with its sequence of POS/dependent pairs (cf. Kuhlmann and Satta, 2009), as in figure 2. This representation supports the detection of idiosyncrasies in valency. (Valency is difficult to define for coordination and is specific to an annotation scheme; we leave this for the future.)

1. TOP → root ROOT:VV
2. ROOT → SS:PO VV MA:AB NA:AB PL:AB
3. SS → PO
4. MA → AB
5. NA → AB
6. PL → AB

Figure 2: Rule representation for (1)

For example, for the ROOT category, the head is a verb (VV), and it has 4 dependents. The extent to which this rule is odd depends upon whether comparable rules—i.e., other ROOT rules or other VV rules (see section 3.2)—have a similar set of dependents. While many of the other rules seem rather spare, they provide useful information, showing categories which have no dependents. With a TOP rule, we have a rule for every head, including the virtual root. Thus, we can find anomalous rules such as TOP → root ROOT:AV ROOT:NN, where multiple categories have been parsed as ROOT.
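A minimal sketch of how such rules could be read off a parsed sentence. The token encoding as (id, POS, head id, dependency relation) tuples and the handling of the virtual root are assumptions made for illustration; actual corpora use the CoNLL-X column format.

```python
def extract_rules(sentence):
    """One rule per head: the head's dependency relation rewrites as its POS
    plus the ordered deprel:POS pairs of its dependents (surface order).
    Each token is (id, pos, head_id, deprel); head_id 0 is the virtual root."""
    pos = {tid: p for tid, p, _, _ in sentence}
    deprel = {tid: d for tid, _, _, d in sentence}
    deps = {}
    for tid, _, head, _ in sentence:
        deps.setdefault(head, []).append(tid)

    rules = []
    # Rule for the virtual root, e.g. TOP -> root ROOT:VV.
    root_deps = sorted(deps.get(0, []))
    rules.append(("TOP", ["root"] + [f"{deprel[d]}:{pos[d]}" for d in root_deps]))
    # One rule per word, headed by its own bare POS.
    for tid, p, _, rel in sentence:
        rhs = [p if d == tid else f"{deprel[d]}:{pos[d]}"
               for d in sorted(deps.get(tid, []) + [tid])]
        rules.append((rel, rhs))
    return rules

# Sentence (1): Det/PO går/VV bara/AB inte/AB ihop/AB
sent = [(1, "PO", 2, "SS"), (2, "VV", 0, "ROOT"),
        (3, "AB", 2, "MA"), (4, "AB", 2, "NA"), (5, "AB", 2, "PL")]
for lhs, rhs in extract_rules(sent):
    print(lhs, "->", " ".join(rhs))
# TOP -> root ROOT:VV
# SS -> PO
# ROOT -> SS:PO VV MA:AB NA:AB PL:AB
# MA -> AB
# NA -> AB
# PL -> AB
```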
3.2 Making appropriate comparisons

In comparing rules, we are trying to find evidence that a particular (parsed) rule is valid by examining the evidence from the (training) grammar.

Units of comparison To determine similarity, one can compare dependency relations, POS tags, or both. Valency refers to both properties, e.g., verbs which allow verbal (POS) subjects (dependency). Thus, we use the pairs of dependency relations and POS tags as the units of comparison.

Flagging individual elements Previous work scored only entire rules, but some dependencies are problematic and others are not. Thus, our methods score individual elements of a rule.

Comparable rules We do not want to compare a rule to all grammar rules, only to those which should have the same valents. Comparability could be defined in terms of a rule's dependency relation (LHS) or in terms of its head. Consider the four different object (OO) rules in (2). These vary a great deal, and much of the variability comes from the fact that they are headed by different POS categories, which tend to have different selectional properties. The head POS thus seems to be predictive of a rule's valency.

(2) a. OO → PO
    b. OO → DT:EN AT:AJ NN ET:VV
    c. OO → SS:PO QV VG:VV
    d. OO → DT:PO AT:AJ VN

But we might lose information by ignoring rules with the same left-hand side (LHS). Our approach is thus to take the greater value of scores when comparing to rules either with the same dependency relation or with the same head. A rule has multiple chances to prove its value, and low scores will only be for rules without any type of support.

Taking these points together, for a given rule of interest r, we assign a score (S) to each element ei in r, where r = e1...em, by taking the maximum of scores for rules with the same head (h) or same LHS (lhs), as in (3). For the first element in (2b), for example, S(DT:EN) = max{s(DT:EN, NN), s(DT:EN, OO)}. The question is now how we define s(ei, c) for the comparable element c.

(3) S(ei) = max{s(ei, h), s(ei, lhs)}

3.3 Whole rule anomalies

3.3.1 Motivation

The whole rule method compares a list of a rule's dependents to rules in a database, and then flags rule elements without much support. By using all dependents as a basis for comparison, this method detects improper dependencies (e.g., an adverb modifying a noun), dependencies in the wrong overall location of a rule (e.g., an adverb before an object), and rules with unnecessarily long argument structures. For example, in (4), we have an improper relation between skall ('shall') and sambeskattas ('be taxed together'), as in figure 3. It is parsed as an adverb (AA), whereas it should be a verb group (VG). The rule for this part of the tree is +F → ++:++ SV AA:VV, and the AA:VV position will be low-scoring because the ++:++ SV context does not support it.

(4) Makars övriga inkomster är B-inkomster och skall som tidigare sambeskattas .
    spouses' other incomes are B-incomes and shall as previously be taxed together .
    'The other incomes of spouses are B-incomes and shall, as previously, be taxed together.'

Figure 3: Wrong label (top=gold, bottom=parsed) — och/++ skall/SV som/UK tidigare/AJ sambeskattas/VV; sambeskattas is labeled VG in the gold tree but AA in the parse.

3.3.2 Implementation

The method we use to determine similarity arises from considering what a rule is like without a problematic element. Consider +F → ++:++ SV AA:VV from figure 3, where AA:VV should be a different category (VG). The rule without this error, +F → ++:++ SV, starts several rules in the training data, including some with VG:VV as the next item. The subrule ++:++ SV seems to be reliable, whereas the subrules containing AA:VV (++:++ AA:VV and SV AA:VV) are less reliable. We thus determine reliability by seeing how often each subsequence occurs in the training rule set.

Throughout this paper, we use the term subrule to refer to a rule subsequence which is exactly one element shorter than the rule it is a component of. We examine subrules, counting their frequency as subrules, not as complete rules. For example, TOP rules with more than one dependent are problematic, e.g., TOP → root ROOT:AV ROOT:NN. Correspondingly, there are no rules with three elements containing the subrule root ROOT:AV.

We formalize this by setting the score s(ei, c) equal to the summation of the frequencies of all comparable subrules containing ei from the training data, as in (5), where B is the set of subrules of r with length one less.

(5) s(ei, c) = Σ_{sub ∈ B : ei ∈ sub} C(sub, c)

For example, with c = +F, the frequency of +F → ++:++ SV as a subrule is added to the scores for ++:++ and SV. In this case, +F → ++:++ SV VG:BV, +F → ++:++ SV VG:AV, and +F → ++:++ SV VG:VV all add support for +F → ++:++ SV being a legitimate subrule. Thus, ++:++ and SV are less likely to be the sources of any problems. Since +F → SV AA:VV and +F → ++:++ AA:VV have very little support in the training data, AA:VV receives a low score.

Note that the subrule count C(sub, c) is different than counting the number of rules containing a subrule, as can be seen with identical elements. For example, for SS → VN ET:PR ET:PR, C(VN ET:PR, SS) = 2, in keeping with the fact that there are 2 pieces of evidence for its legitimacy.
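A small sketch of the element scores in (3) and (5), assuming the training grammar has been indexed as counts of (comparison category, right-hand side) pairs, keyed once by LHS and once by head POS. The data structures and the toy counts are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter

# Training grammar indexed two ways; keys are (comparison category, rhs tuple).
train_by_lhs = Counter()
train_by_head = Counter()

def add_training_rule(lhs, head_pos, rhs, freq=1):
    train_by_lhs[(lhs, tuple(rhs))] += freq
    train_by_head[(head_pos, tuple(rhs))] += freq

def subrules(rhs):
    """All subsequences exactly one element shorter than rhs (drop one position)."""
    return [tuple(rhs[:i] + rhs[i + 1:]) for i in range(len(rhs))]

def subrule_count(sub, c, table):
    """C(sub, c): how often sub occurs as a subrule of training rules for c.
    Identical elements count separately, so SS -> VN ET:PR ET:PR
    contributes 2 to C(VN ET:PR, SS)."""
    total = 0
    for (cat, rhs), freq in table.items():
        if cat == c:
            total += freq * subrules(list(rhs)).count(sub)
    return total

def s(rhs, i, c, table):
    """Score (5): summed frequency of the comparable subrules containing e_i,
    i.e., every one-shorter subsequence except the one that drops position i."""
    return sum(subrule_count(sub, c, table)
               for j, sub in enumerate(subrules(rhs)) if j != i)

def S(rhs, i, lhs, head_pos):
    """Score (3): the better of the same-head and the same-LHS comparison."""
    return max(s(rhs, i, head_pos, train_by_head),
               s(rhs, i, lhs, train_by_lhs))

# Toy illustration with the +F example from figure 3 (made-up frequency).
add_training_rule("+F", "SV", ["++:++", "SV", "VG:VV"], freq=3)
parsed = ["++:++", "SV", "AA:VV"]
print([S(parsed, i, "+F", "SV") for i in range(len(parsed))])
# [3, 3, 0]: only AA:VV is left without support
```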
3.4 Bigram anomalies

3.4.1 Motivation

The bigram method examines relationships between adjacent sisters, complementing the whole rule method by focusing on local properties. For (6), for example, we find the gold and parsed trees in figure 4. For the long parsed rule TA → PR HD:ID HD:ID IR:IR AN:RO JR:IR, all elements get low whole rule scores, i.e., are flagged as potentially erroneous. But only the final elements have anomalous bigrams: HD:ID IR:IR, IR:IR AN:RO, and AN:RO JR:IR all never occur.

(6) När det gäller inkomståret 1971 ( taxeringsåret 1972 ) skall barnet ...
    when it concerns the income year 1971 ( assessment year 1972 ) shall the child ...
    'Concerning the income year of 1971 (assessment year 1972), the child ...'

3.4.2 Implementation

To obtain a bigram score for an element, we simply add together the bigrams which contain the element in question, as in (7).

(7) s(ei, c) = C(ei−1 ei, c) + C(ei ei+1, c)

Consider the rule from figure 4. With c = TA, the bigram HD:ID IR:IR never occurs, so both HD:ID and IR:IR get 0 added to their score. HD:ID HD:ID, however, is a frequent bigram, so it adds weight to HD:ID, i.e., positive evidence comes from the bigram on the left. If we look at IR:IR, on the other hand, IR:IR AN:RO occurs 0 times, and so IR:IR gets a total score of 0.

Both scoring methods treat each element independently. Every single element could be given a low score, even though once one is corrected, another would have a higher score. Future work can examine factoring in all elements at once.

4 Additional information

The methods presented so far have limited definitions of comparability. As using complementary information has been useful in, e.g., POS error detection (Loftsson, 2009), we explore other simple comparable properties of a dependency grammar. Namely, we include: a) frequency information of an overall dependency rule and b) information on how likely each dependent is to be in a relation with its head, described next.

4.1 Including POS information

Consider PA → SS:NN XX:XX HV OO:VN, as illustrated in figure 5 for the sentence in (8). This rule is entirely correct, yet the XX:XX position has low whole rule and bigram scores.

(8) Uppgift om vilka orter som har utkörning finner Ni också i ...
    information of which neighborhood who has delivery find you also in ...
    'You can also find information about which neighborhoods have delivery services in ...'

Figure 4: A rule with extra dependents (top=gold, bottom=parsed) — När/PR det/ID gäller/ID inkomståret/NN 1971/RO (/IR taxeringsåret/NN 1972/RO )/IR, with gold relations AA HD HD DT PA IR DT AN JR and parsed relations TA HD HD PA ET IR DT AN JR.

Figure 5: Overflagging (gold=parsed) — Uppgift/NN om/PR vilka/PO orter/NN som/XX har/HV utkörning/VN, with relations ET, DT, SS, PA, XX, OO.

One method which does not have this problem of overflagging uses a "lexicon" of POS tag pairs, examining relations between POS, irrespective of position. We extract POS pairs, note their dependency relation, and add a L/R to the label to indicate which is the head (Boyd et al., 2008). Additionally, we note how often two POS categories occur as a non-dependency, using the label NIL, to help determine whether there should be any attachment. We generate NILs by enumerating all POS pairs in a sentence. For example, from figure 5, the parsed POS pairs include NN PR → ET-L, NN PO → NIL, etc.
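Two brief sketches of the pieces just described: the bigram score in (7) and the POS-pair "lexicon" with L/R-marked relation labels and NIL for unattached pairs. The (id, POS, head id, relation) token encoding is the same illustrative assumption as in the earlier rule-extraction sketch, and the toy bigram count is made up.

```python
from collections import Counter
from itertools import combinations

def bigram_score(rhs, i, bigram_counts, c):
    """Score (7): s(e_i, c) = C(e_{i-1} e_i, c) + C(e_i e_{i+1}, c)."""
    score = 0
    if i > 0:
        score += bigram_counts[(c, rhs[i - 1], rhs[i])]
    if i < len(rhs) - 1:
        score += bigram_counts[(c, rhs[i], rhs[i + 1])]
    return score

def pos_pair_lexicon(sentences):
    """Count labels for POS pairs (left word first): REL-L/REL-R marks whether
    the head is the left or right member; unattached pairs are counted as NIL."""
    lex = Counter()
    for sent in sentences:
        pos = {tid: p for tid, p, _, _ in sent}
        attached = set()
        for tid, _, head, rel in sent:
            if head in pos:                      # skip the virtual root
                left, right = min(tid, head), max(tid, head)
                attached.add((left, right))
                side = "L" if head == left else "R"
                lex[(pos[left], pos[right], f"{rel}-{side}")] += 1
        for a, b in combinations(sorted(pos), 2):
            if (a, b) not in attached:
                lex[(pos[a], pos[b], "NIL")] += 1
    return lex

# Toy usage of the bigram score, with a single made-up training count.
bigram_counts = Counter({("TA", "HD:ID", "HD:ID"): 25})
rule = ["HD:ID", "HD:ID", "IR:IR", "AN:RO", "JR:IR"]
print([bigram_score(rule, i, bigram_counts, "TA") for i in range(len(rule))])
# [25, 25, 0, 0, 0]: only the first two elements get any bigram support
```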
We convert the frequencies to probabilities. For example, of 4 total occurrences of XX HV in the training data, 2 are XX-R (cf. figure 5). A probability of 0.5 is quite high, given that NILs are often the most frequent label for POS pairs.

5 Evaluation

In evaluating the methods, our main question is: how accurate are the dependencies, in terms of both attachment and labeling? We therefore currently examine the scores for elements functioning as dependents in a rule. In figure 5, for example, for har ('has'), we look at its score within ET → PR PA:HV, and not when it functions as a head, as in PA → SS:NN XX:XX HV OO:VN.

Relatedly, for each method, we are interested in whether elements with scores below a threshold have worse attachment accuracy than scores above, as we predict they do. We can measure this by scoring each testing data position below the threshold as a 1 if it has the correct head and dependency relation and a 0 otherwise. These are simply labeled attachment scores (LAS). Scoring separately for positions above and below a threshold views the task as one of sorting parser output into two bins, those more or less likely to be correctly parsed. For development, we also report unlabeled attachment scores (UAS).

Since the goal is to speed up the post-editing of corpus data by flagging erroneous rules, we also report the precision and recall for error detection. We count either attachment or labeling errors as an error, and precision and recall are measured with respect to how many errors are found below the threshold. For development, we use two F-scores to provide a measure of the settings to examine across language, corpus, and parser conditions: the balanced F1 measure and the F0.5 measure, weighing precision twice as much. Precision is likely more important in this context, so as to prevent annotators from sorting through too many false positives. In practice, one way to use these methods is to start with the lowest thresholds and work upwards until there are too many non-errors.

To establish a basis for comparison, we compare method performance to a parser on its own. (One may also use parser confidence or parser revision methods as a basis of comparison, but we are aware of no systematic evaluation of these approaches for detecting errors.) By examining the parser output without any automatic assistance, how often does a correction need to be made?

5.1 The data

All our data comes from the CoNLL-X Shared Task (Buchholz and Marsi, 2006), specifically the 4 data sets freely available online. We use the Swedish Talbanken data (Nilsson and Hall, 2005) and the transition-based dependency parser MaltParser (Nivre et al., 2007), with the default settings, for developing the method. To test across languages and corpora, we use MaltParser on the other 3 corpora: the Danish DDT (Kromann, 2003), Dutch Alpino (van der Beek et al., 2002), and Portuguese Bosque data (Afonso et al., 2002). Then, we present results using the graph-based parser MSTParser (McDonald and Pereira, 2006), again with default settings, to test the methods across parsers. We use the gold standard POS tags for all experiments.

5.2 Development data

In the first line of table 1, we report the baseline MaltParser accuracies on the Swedish test data, including baseline error detection precision (= 1 − LASb), recall, and (the best) F-scores. In the rest of table 1, we report the best-performing results for each of the methods (Freq = rule frequency, WR = whole rule, Bi = bigram, POS = POS-based, with POS scores multiplied by 10,000), providing the number of rules below and above a particular threshold, along with corresponding UAS and LAS values.
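A sketch of the threshold-based evaluation just described, assuming each test position carries the method's score and a flag for whether its head and label are both correct; the F-measure with beta = 0.5 weights precision twice as much as recall.

```python
def evaluate_threshold(positions, threshold):
    """positions: list of (score, correct) pairs, one per dependent in the test data,
    where correct means both head and dependency label match the gold standard.
    Returns LAS below/above the threshold plus error-detection precision/recall/F."""
    below = [corr for score, corr in positions if score <= threshold]
    above = [corr for score, corr in positions if score > threshold]
    las_b = sum(below) / len(below) if below else 0.0
    las_a = sum(above) / len(above) if above else 0.0

    errors_found = sum(1 for corr in below if not corr)
    total_errors = sum(1 for _, corr in positions if not corr)
    precision = errors_found / len(below) if below else 0.0
    recall = errors_found / total_errors if total_errors else 0.0

    def f_beta(p, r, beta):
        return (1 + beta ** 2) * p * r / (beta ** 2 * p + r) if p + r else 0.0

    return {"LAS_b": las_b, "LAS_a": las_a, "P": precision, "R": recall,
            "F1": f_beta(precision, recall, 1.0),
            "F0.5": f_beta(precision, recall, 0.5)}
```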
To get the raw number of identified rules, multiply the number of corpus positions below a threshold (b) times the error detection precision (P). For example, the bigram method with a threshold of 39 leads to finding 283 errors (455 × .622). Dependency elements with frequency below the lowest threshold have lower attachment scores (66.6% vs. 90.1% LAS), showing that simply using a complete rule helps sort dependencies. However, frequency thresholds have fairly low precision, i.e., 33.4% at their best. The whole rule and bigram methods reveal greater precision in identifying problematic dependencies, isolating elements with lower UAS and LAS scores than with frequency, along with corresponding greater precision and F-scores. The bigram method is more fine-grained, identifying small numbers of rule elements at each threshold, resulting in high error detection precision. With a threshold of 39, for example, we find over a quarter of the parser errors with 62% precision, from this one piece of information. For POS information, we flag 23.6% of the cases with over 60% precision (at 81.6).

Taking all these results together, we can begin to sort more reliable from less reliable dependency tree elements, using very simple information. Additionally, these methods naturally group cases together by linguistic properties (e.g., adverbial-verb dependencies within a particular context), allowing a human to uncover the principle behind parse failure and adjudicate similar cases at the same time (cf. Wallis, 2003).

5.3 Discussion

Examining some of the output from the Talbanken test data by hand, we find that a prominent cause of false positives, i.e., correctly-parsed cases with low scores, stems from low-frequency dependency-POS label pairs. If the dependency rarely occurs in the training data with the particular POS, then it receives a low score, regardless of its context. For example, the parsed rule TA → IG:IG RO has a correct dependency relation (IG) between the POS tag IG and its head RO, yet is assigned a whole rule score of 2 and a bigram score of 20. It turns out that IG:IG only occurs 144 times in the training data, and in 11 of those cases (7.6%) it appears immediately before RO. One might consider normalizing the scores based on overall frequency or adjusting the scores to account for other dependency rules in the sentence: in this case, there may be no better attachment.

Other false positives are correctly-parsed elements that are a part of erroneous rules. For instance, in AA → UK:UK SS:PO TA:AJ AV SP:AJ OA:PR +F:HV +F:HV, the first +F:HV is correct, yet given a low score (0 whole rule, 1 bigram). The following and erroneous +F:HV is similarly given a low score. As above, such cases might be handled by looking for attachments in other rules (cf. Attardi and Ciaramita, 2007), but these cases should be relatively unproblematic for hand-correction, given the neighboring error.

We also examined false negatives, i.e., errors with high scores. There are many examples of PR PA:NN rules, for instance, with the NN improperly attached, but there are also many correct instances of PR PA:NN.
To sort out the errors, one needs to look at lexical knowledge and/or other dependencies in the tree. With so little context, frequent rules with only one dependent are not prime candidates for our methods of error detection.

5.4 Other corpora

We now turn to the parsed data from three other corpora. The Alpino and Bosque corpora are approximately the same size as Talbanken, so we use the same thresholds for them. The DDT data is approximately half the size; to adjust, we simply halve the scores. In tables 2, 3, and 4, we present the results, using the best F0.5 and F1 settings from development.

At a glance, we observe that the best method differs for each corpus and depending on an emphasis of precision or recall, with the bigram method generally having high precision. For Alpino, error detection is better with frequency than, for example, bigram scores. This is likely due to the fact that Alpino has the smallest label set of any of the corpora, with only 24 dependency labels and 12 POS tags (cf. 64 and 41 in Talbanken, respectively). With a smaller label set, there are fewer possible bigrams that could be anomalous, but more reliable statistics about a whole rule. Likewise, with fewer possible POS tag pairs, Alpino has lower precision for the low-threshold POS scores than the other corpora.

For the whole rule scores, the DDT data is worse (compare its 46.1% precision with Bosque's 45.6%, with vastly different recall values), which could be due to the smaller training data. One might also consider the qualitative differences in the dependency inventory of DDT compared to the others—e.g., appositions, distinctions in names, and more types of modifiers.

5.5 MSTParser

Turning to the results of running the methods on the output of MSTParser, we find similar but slightly worse values for the whole rule and bigram methods, as shown in tables 5-8. What is most striking are the differences in the POS-based method for Bosque and DDT (tables 7 and 8), where a large percentage of the test corpus is underneath the threshold. MSTParser is apparently positing fewer distinct head-dependent pairs, as most of them fall under the given thresholds. With the exception of the POS-based method for DDT (where LASb is actually higher than LASa), the different methods seem to be accurate enough to be used as part of corpus post-editing.

6 Summary and Outlook

We have proposed different methods for flagging the errors in automatically-parsed corpora, by treating the problem as one of looking for anomalous rules with respect to a treebank grammar. The different methods incorporate differing types and amounts of information, notably comparisons among dependency rules and bigrams within such rules. Using these methods, we demonstrated success in sorting well-formed output from erroneous output across language, corpora, and parsers.

Given that the rule representations and comparison methods use both POS and dependency information, a next step in evaluating and improving the methods is to examine automatically POS-tagged data. Our methods should be able to find POS errors in addition to dependency errors. Furthermore, although we have indicated that differences in accuracy can be linked to differences in the granularity and particular distinctions of the annotation scheme, it is still an open question as to which methods work best for which schemes and for which constructions (e.g., coordination).
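Putting the pieces together, a hypothetical post-editing loop might look like the following sketch, reusing the extract_rules, add_training_rule, and S helpers from the earlier sketches. The head-detection heuristic and the single score function are simplifications of the setup described above, and the threshold is a placeholder rather than a tuned setting.

```python
def flag_for_annotation(train_sents, parsed_sents, threshold):
    """Score every dependent in the parsed data against the training grammar
    and return the positions an annotator should inspect first."""
    # 1. Extract and index the training grammar.
    for sent in train_sents:
        for lhs, rhs in extract_rules(sent):
            head_pos = next(e for e in rhs if ":" not in e)  # the bare element is the head
            add_training_rule(lhs, head_pos, rhs)

    # 2. Score each dependent element of each parsed rule.
    flagged = []
    for sent in parsed_sents:
        for lhs, rhs in extract_rules(sent):
            head_pos = next(e for e in rhs if ":" not in e)
            for i, elem in enumerate(rhs):
                if ":" not in elem:
                    continue                                  # skip the head / root marker
                score = S(rhs, i, lhs, head_pos)
                if score <= threshold:
                    flagged.append((score, lhs, elem, tuple(rhs)))

    # 3. Lowest scores first: work upwards until too many flags turn out to be non-errors.
    return sorted(flagged)
```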
Acknowledgments

Thanks to Sandra Kübler and Amber Smith for comments on an earlier draft; Yvonne Samuelsson for help with the Swedish translations; the IU Computational Linguistics discussion group for feedback; and Julia Hockenmaier, Chris Brew, and Rebecca Hwa for discussion on the general topic.

A Some Talbanken05 categories

(Tables of the dependency relation and POS category definitions used in the examples.)

References

Afonso, Susana, Eckhard Bick, Renato Haber and Diana Santos (2002). Floresta Sintá(c)tica: a treebank for Portuguese. In Proceedings of LREC 2002. Las Palmas, pp. 1698–1703.

Attardi, Giuseppe and Massimiliano Ciaramita (2007). Tree Revision Learning for Dependency Parsing. In Proceedings of NAACL-HLT-07. Rochester, NY, pp. 388–395.

Bick, Eckhard (2007). Hybrid Ways to Improve Domain Independence in an ML Dependency Parser. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007. Prague, Czech Republic, pp. 1119–1123.

Boyd, Adriane, Markus Dickinson and Detmar Meurers (2008). On Detecting Errors in Dependency Treebanks. Research on Language and Computation 6(2), 113–137.

Buchholz, Sabine and Erwin Marsi (2006). CoNLL-X Shared Task on Multilingual Dependency Parsing. In Proceedings of CoNLL-X. New York City, pp. 149–164.

Campbell, David and Stephen Johnson (2002). A transformational-based learner for dependency grammars in discharge summaries. In Proceedings of the ACL-02 Workshop on Natural Language Processing in the Biomedical Domain. Philadelphia, pp. 37–44.

Dickinson, Markus (2008). Ad Hoc Treebank Structures. In Proceedings of ACL-08. Columbus, OH.

Dickinson, Markus and Jennifer Foster (2009). Similarity Rules! Exploring Methods for Ad-Hoc Rule Detection. In Proceedings of TLT-7. Groningen, The Netherlands.

Gildea, Daniel (2001). Corpus Variation and Parser Performance. In Proceedings of EMNLP-01. Pittsburgh, PA.

Hall, Keith and Václav Novák (2005). Corrective Modeling for Non-Projective Dependency Parsing. In Proceedings of IWPT-05. Vancouver, pp. 42–52.

Kromann, Matthias Trautner (2003). The Danish Dependency Treebank and the underlying linguistic theory. In Proceedings of TLT-03.

Kuhlmann, Marco and Giorgio Satta (2009). Treebank Grammar Techniques for Non-Projective Dependency Parsing. In Proceedings of EACL-09. Athens, Greece, pp. 478–486.

Loftsson, Hrafn (2009). Correcting a POS-Tagged Corpus Using Three Complementary Methods. In Proceedings of EACL-09. Athens, Greece, pp. 523–531.

McDonald, Ryan and Fernando Pereira (2006). Online learning of approximate dependency parsing algorithms. In Proceedings of EACL-06. Trento.

Nilsson, Jens and Johan Hall (2005). Reconstruction of the Swedish Treebank Talbanken. MSI report 05067, Växjö University: School of Mathematics and Systems Engineering.

Nivre, Joakim, Johan Hall, Jens Nilsson, Atanas Chanev, Gulsen Eryigit, Sandra Kübler, Svetoslav Marinov and Erwin Marsi (2007). MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering 13(2), 95–135.

Owczarzak, Karolina (2009). DEPEVAL(summ): Dependency-based Evaluation for Automatic Summaries. In Proceedings of ACL-AFNLP-09. Suntec, Singapore, pp. 190–198.

Przepiórkowski, Adam (2006). What to acquire from corpora in automatic valence acquisition. In Violetta Koseska-Toszewa and Roman Roszko (eds.), Semantyka a konfrontacja jezykowa, tom 3, Warsaw: Slawistyczny Ośrodek Wydawniczy PAN, pp. 25–41.

Sekine, Satoshi (1997). The Domain Dependence of Parsing. In Proceedings of ANLP-96. Washington, DC.
van der Beek, Leonoor, Gosse Bouma, Robert Malouf and Gertjan van Noord (2002). The Alpino Dependency Treebank. In Proceedings of CLIN 2001. Rodopi.

van Noord, Gertjan and Gosse Bouma (2009). Parsed Corpora for Linguistics. In Proceedings of the EACL 2009 Workshop on the Interaction between Linguistics and Computational Linguistics: Virtuous, Vicious or Vacuous?. Athens, pp. 33–39.

Wallis, Sean (2003). Completing Parsed Corpora. In Anne Abeillé (ed.), Treebanks: Building and using syntactically annotated corpora, Dordrecht: Kluwer Academic Publishers, pp. 61–71.

Wan, Stephen, Mark Dras, Robert Dale and Cécile Paris (2009). Improving Grammaticality in Statistical Sentence Generation: Introducing a Dependency Spanning Tree Algorithm with an Argument Satisfaction Model. In Proceedings of EACL-09. Athens, Greece, pp. 852–860.

Xu, Peng, Jaeho Kang, Michael Ringgaard and Franz Och (2009). Using a Dependency Parser to Improve SMT for Subject-Object-Verb Languages. In Proceedings of NAACL-HLT-09. Boulder, Colorado, pp. 245–253.
2 0.18695854 75 acl-2010-Correcting Errors in a Treebank Based on Synchronous Tree Substitution Grammar
Author: Yoshihide Kato ; Shigeki Matsubara
Abstract: This paper proposes a method of correcting annotation errors in a treebank. By using a synchronous grammar, the method transforms parse trees containing annotation errors into the ones whose errors are corrected. The synchronous grammar is automatically induced from the treebank. We report an experimental result of applying our method to the Penn Treebank. The result demonstrates that our method corrects syntactic annotation errors with high precision.
3 0.15898733 69 acl-2010-Constituency to Dependency Translation with Forests
Author: Haitao Mi ; Qun Liu
Abstract: Tree-to-string systems (and their forest-based extensions) have gained steady popularity thanks to their simplicity and efficiency, but there is a major limitation: they are unable to guarantee the grammaticality of the output, which is explicitly modeled in string-to-tree systems via target-side syntax. We thus propose to combine the advantages of both, and present a novel constituency-to-dependency translation model, which uses constituency forests on the source side to direct the translation, and dependency trees on the target side (as a language model) to ensure grammaticality. Medium-scale experiments show an absolute and statistically significant improvement of +0.7 BLEU points over a state-of-the-art forest-based tree-to-string system even with fewer rules. This is also the first time that a tree-to-tree model can surpass tree-to-string counterparts.
4 0.14065842 48 acl-2010-Better Filtration and Augmentation for Hierarchical Phrase-Based Translation Rules
Author: Zhiyang Wang ; Yajuan Lv ; Qun Liu ; Young-Sook Hwang
Abstract: This paper presents a novel filtration criterion to restrict the rule extraction for the hierarchical phrase-based translation model, where a bilingual but relaxed well-formed dependency restriction is used to filter out bad rules. Furthermore, a new feature which describes the regularity that the source/target dependency edge triggers the target/source word is also proposed. Experimental results show that the new criterion weeds out about 40% of the rules while improving translation performance, and the new feature brings another improvement to the baseline system, especially on larger corpora.
5 0.1372142 214 acl-2010-Sparsity in Dependency Grammar Induction
Author: Jennifer Gillenwater ; Kuzman Ganchev ; Joao Graca ; Fernando Pereira ; Ben Taskar
Abstract: A strong inductive bias is essential in unsupervised grammar induction. We explore a particular sparsity bias in dependency grammars that encourages a small number of unique dependency types. Specifically, we investigate sparsity-inducing penalties on the posterior distributions of parent-child POS tag pairs in the posterior regularization (PR) framework of Graça et al. (2007). In experiments with 12 languages, we achieve substantial gains over the standard expectation maximization (EM) baseline, with average improvement in attachment accuracy of 6.3%. Further, our method outperforms models based on a standard Bayesian sparsity-inducing prior by an average of 4.9%. On English in particular, we show that our approach improves on several other state-of-the-art techniques.
6 0.12986565 169 acl-2010-Learning to Translate with Source and Target Syntax
7 0.11907735 83 acl-2010-Dependency Parsing and Projection Based on Word-Pair Classification
8 0.11888769 241 acl-2010-Transition-Based Parsing with Confidence-Weighted Classification
9 0.11072469 9 acl-2010-A Joint Rule Selection Model for Hierarchical Phrase-Based Translation
10 0.10269238 99 acl-2010-Efficient Third-Order Dependency Parsers
11 0.093687855 20 acl-2010-A Transition-Based Parser for 2-Planar Dependency Structures
12 0.093602106 143 acl-2010-Importance of Linguistic Constraints in Statistical Dependency Parsing
13 0.087872751 200 acl-2010-Profiting from Mark-Up: Hyper-Text Annotations for Guided Parsing
14 0.085504986 198 acl-2010-Predicate Argument Structure Analysis Using Transformation Based Learning
15 0.08289738 57 acl-2010-Bucking the Trend: Large-Scale Cost-Focused Active Learning for Statistical Machine Translation
16 0.081546873 118 acl-2010-Fine-Grained Tree-to-String Translation Rule Extraction
17 0.075891316 195 acl-2010-Phylogenetic Grammar Induction
18 0.074188717 46 acl-2010-Bayesian Synchronous Tree-Substitution Grammar Induction and Its Application to Sentence Compression
19 0.074183077 102 acl-2010-Error Detection for Statistical Machine Translation Using Linguistic Features
20 0.073940106 130 acl-2010-Hard Constraints for Grammatical Function Labelling
topicId topicWeight
[(0, -0.229), (1, -0.066), (2, 0.077), (3, 0.016), (4, -0.1), (5, -0.08), (6, 0.152), (7, -0.001), (8, -0.051), (9, 0.086), (10, -0.031), (11, 0.033), (12, -0.006), (13, 0.016), (14, 0.142), (15, -0.072), (16, -0.011), (17, 0.004), (18, 0.042), (19, 0.022), (20, -0.074), (21, 0.083), (22, 0.111), (23, -0.004), (24, 0.044), (25, -0.076), (26, 0.056), (27, -0.04), (28, 0.063), (29, -0.002), (30, -0.032), (31, -0.009), (32, 0.018), (33, -0.057), (34, 0.062), (35, -0.043), (36, -0.001), (37, 0.063), (38, 0.004), (39, -0.047), (40, -0.007), (41, -0.017), (42, -0.163), (43, 0.101), (44, 0.123), (45, 0.025), (46, 0.038), (47, -0.246), (48, -0.02), (49, -0.024)]
simIndex simValue paperId paperTitle
same-paper 1 0.96555287 84 acl-2010-Detecting Errors in Automatically-Parsed Dependency Relations
Author: Markus Dickinson
Abstract: We outline different methods to detect errors in automatically-parsed dependency corpora, by comparing so-called dependency rules to their representation in the training data and flagging anomalous ones. By comparing each new rule to every relevant rule from training, we can identify parts of parse trees which are likely erroneous. Even the relatively simple methods of comparison we propose show promise for speeding up the annotation process. 1 Introduction and Motivation Given the need for high-quality dependency parses in applications such as statistical machine translation (Xu et al., 2009), natural language generation (Wan et al., 2009), and text summarization evaluation (Owczarzak, 2009), there is a corresponding need for high-quality dependency annotation, for the training and evaluation of dependency parsers (Buchholz and Marsi, 2006). Furthermore, parsing accuracy degrades unless sufficient amounts of labeled training data from the same domain are available (e.g., Gildea, 2001 ; Sekine, 1997), and thus we need larger and more varied annotated treebanks, covering a wide range of domains. However, there is a bottleneck in obtaining annotation, due to the need for manual intervention in annotating a treebank. One approach is to develop automatically-parsed corpora (van Noord and Bouma, 2009), but a natural disadvantage with such data is that it contains parsing errors. Identifying the most problematic parses for human post-processing could combine the benefits of automatic and manual annotation, by allowing a human annotator to efficiently correct automatic errors. We thus set out in this paper to detect errors in automatically-parsed data. If annotated corpora are to grow in scale and retain a high quality, annotation errors which arise from automatic processing must be minimized, as errors have a negative impact on training and eval- uation of NLP technology (see discussion and references in Boyd et al., 2008, sec. 1). There is work on detecting errors in dependency corpus annotation (Boyd et al., 2008), but this is based on finding inconsistencies in annotation for identical recurring strings. This emphasis on identical strings can result in high precision, but many strings do not recur, negatively impacting the recall of error detection. Furthermore, since the same strings often receive the same automatic parse, the types of inconsistencies detected are likely to have resulted from manual annotation. While we can build from the insight that simple methods can provide reliable annotation checks, we need an approach which relies on more general properties of the dependency structures, in order to develop techniques which work for automatically-parsed corpora. Developing techniques to detect errors in parses in a way which is independent of corpus and parser has fairly broad implications. By using only the information available in a training corpus, the methods we explore are applicable to annotation error detection for either hand-annotated or automatically-parsed corpora and can also provide insights for parse reranking (e.g., Hall and Nov a´k, 2005) or parse revision (Attardi and Ciaramita, 2007). Although we focus only on detecting errors in automatically-parsed data, similar techniques have been applied for hand-annotated data (Dickinson, 2008; Dickinson and Foster, 2009). Our general approach is based on extracting a grammar from an annotated corpus and comparing dependency rules in a new (automaticallyannotated) corpus to the grammar. 
Roughly speaking, if a dependency rule—which represents all the dependents of a head together (see section 3. 1)— does not fit well with the grammar, it is flagged as potentially erroneous. The methods do not have to be retrained for a given parser’s output (e.g., 729 Proce dinUgsp osfa tlhae, 4S8wthed Aen n,u 1a1l-1 M6e Jeutilnyg 2 o0f1 t0h.e ?c As2s0o1c0ia Atisosnoc foiart Cionom fopru Ctaotmiopnuatla Lti on gaulis Lti cnsg,u piasgtiecs 729–738, Campbell and Johnson, 2002), but work by comparing any tree to what is in the training grammar (cf. also approaches stacking hand-written rules on top of other parsers (Bick, 2007)). We propose to flag erroneous parse rules, using information which reflects different grammatical properties: POS lookup, bigram information, and full rule comparisons. We build on a method to detect so-called ad hoc rules, as described in section 2, and then turn to the main approaches in section 3. After a discussion of a simple way to flag POS anomalies in section 4, we evaluate the different methods in section 5, using the outputs from two different parsers. The methodology proposed in this paper is easy to implement and independent of corpus, language, or parser. 2 Approach We take as a starting point two methods for detecting ad hoc rules in constituency annotation (Dickinson, 2008). Ad hoc rules are CFG productions extracted from a treebank which are “used for specific constructions and unlikely to be used again,” indicating annotation errors and rules for ungrammaticalities (see also Dickinson and Foster, 2009). Each method compares a given CFG rule to all the rules in a treebank grammar. Based on the number of similar rules, a score is assigned, and rules with the lowest scores are flagged as potentially ad hoc. This procedure is applicable whether the rules in question are from a new data set—as in this paper, where parses are compared to a training data grammar—or drawn from the treebank grammar itself (i.e., an internal consistency check). The two methods differ in how the comparisons are done. First, the bigram method abstracts a rule to its bigrams. Thus, a rule such as NP → rJJu NeN to provides support fso,r aN rPu → uDcTh aJJs J NJ NN, iJnJ NthNat pitr vshidareess tuhpep oJrJt NfoNr sequence. By contrast, in the other method, which we call the whole rule method,1 a rule is compared in its totality to the grammar rules, using Levenshtein distance. There is no abstraction, meaning all elements are present—e.g., NP → DT JJ JJ NN is very similar to eNsePn → eD.gT. ,J NJ PN N→ b DeTcau JsJe J Jth Ne sequences mdiiflfearr by only one category. While previously used for constituencies, what is at issue is simply the valency of a rule, where by valency we refer to a head and its entire set 1This is referred to whole daughters in Dickinson (2008), but the meaning of “daughters” is less clear for dependencies. of arguments and adjuncts (cf. Przepi´ orkowski, 2006)—that is, a head and all its dependents. The methods work because we expect there to be regularities in valency structure in a treebank grammar; non-conformity to such regularities indicates a potential problem. 3 Ad hoc rule detection 3.1 An appropriate representation To capture valency, consider the dependency tree from the Talbanken05 corpus (Nilsson and Hall, 2005) in figure 1, for the Swedish sentence in (1), which has four dependency pairs.2 (1) Det g a˚r bara inte ihop . 
it goes just not together ‘It just doesn’t add up.’ SS MA NA PL Det g a˚r bara inte ihop PO VV AB AB AB Figure 1: Dependency graph example On a par with constituency rules, we define a grammar rule as a dependency relation rewriting as a head with its sequence of POS/dependent pairs (cf. Kuhlmann and Satta, 2009), as in figure 2. This representation supports the detection of idiosyncracies in valency.3 1. 12.. 23.. 34.. TOP → root ROOT:VV TROOPOT → → SoSt R:POOO VT:VV MVA:AB NA:AB PL:AB RSSO → P →O :5A. BN AN → ABB P SMSA → → AOB 56.. NPLA → A ABB Figure 2: Rule representation for (1) For example, for the ROOT category, the head is a verb (VV), and it has 4 dependents. The extent to which this rule is odd depends upon whether comparable rules—i.e., other ROOT rules or other VV rules (see section 3.2)—have a similar set of dependents. While many of the other rules seem rather spare, they provide useful information, showing categories which have no dependents. With a TOP rule, we have a rule for every 2Category definitions are in appendix A. 3Valency is difficult to define for coordination and is specific to an annotation scheme. We leave this for the future. 730 head, including the virtual root. Thus, we can find anomalous rules such as TOP → root ROOT:AV ROOT:NN, wulheesre su multiple categories hROavOe T b:AeeVn parsed as ROOT. 3.2 Making appropriate comparisons In comparing rules, we are trying to find evidence that a particular (parsed) rule is valid by examining the evidence from the (training) grammar. Units of comparison To determine similarity, one can compare dependency relations, POS tags, or both. Valency refers to both properties, e.g., verbs which allow verbal (POS) subjects (dependency). Thus, we use the pairs of dependency relations and POS tags as the units of comparison. Flagging individual elements Previous work scored only entire rules, but some dependencies are problematic and others are not. Thus, our methods score individual elements of a rule. Comparable rules We do not want to compare a rule to all grammar rules, only to those which should have the same valents. Comparability could be defined in terms of a rule’s dependency relation (LHS) or in terms of its head. Consider the four different object (OO) rules in (2). These vary a great deal, and much of the variability comes from the fact that they are headed by different POS categories, which tend to have different selectional properties. The head POS thus seems to be predictive of a rule’s valency. (2) a. OO → PO b. OO → DT:EN AT:AJ NN ET:VV c. OO → SS:PO QV VG:VV d. OO → DT:PO AT:AJ VN But we might lose information by ignoring rules with the same left-hand side (LHS). Our approach is thus to take the greater value of scores when comparing to rules either with the same depen- dency relation or with the same head. A rule has multiple chances to prove its value, and low scores will only be for rules without any type of support. Taking these points together, for a given rule of interest r, we assign a score (S) to each element ei in r, where r = e1...em by taking the maximum of scores for rules with the same head (h) or same LHS (lhs), as in (3). For the first element in (2b), for example, S(DT:EN) = max{s(DT:EN, NN), s(DT:EN, OO)}. TTh:eE question ixs now Tho:EwN we dNe)-, fsin(De s(ei, c) fOor)} t.he T comparable sele nmowen hto c. 
(3) S(ei) = max{s(ei, h) , s(ei, lhs)} 3.3 Whole rule anomalies 3.3.1 Motivation The whole rule method compares a list of a rule’s dependents to rules in a database, and then flags rule elements without much support. By using all dependents as a basis for comparison, this method detects improper dependencies (e.g., an adverb modifying a noun), dependencies in the wrong overall location of a rule (e.g., an adverb before an object), and rules with unnecessarily long ar- gument structures. For example, in (4), we have an improper relation between skall (‘shall’) and sambeskattas (‘be taxed together’), as in figure 3. It is parsed as an adverb (AA), whereas it should be a verb group (VG). The rule for this part of the tree is +F → ++:++ SV AA:VV, and the AA:VV position wF i→ll b +e low-scoring b:VecVau,s aen dth teh ++:++ VSVV context does not support it. (4) Makars o¨vriga inkomster a¨r B-inkomster spouses’ other incomes are B-incomes och skall som tidigare sambeskattas . and shall as previously be taxed togeher . ‘The other incomes of spouses are B-incomes and shall, as previously, be taxed together.’ ++ +F UK KA och skall som tidigare ++ SV UK AJ VG sambeskattas VV ++ +F UK SS och skall som tidigare ++ SV UK AJ AA sambeskattas VV Figure 3: Wrong label (top=gold, bottom=parsed) 3.3.2 Implementation The method we use to determine similarity arises from considering what a rule is like without a problematic element. Consider +F → ++:++ SV pArAob:VleVm afrtiocm e figure 3, Cwohnesried eArA + Fsh →ould + +b:e+ a d SifVferent category (VG). The rule without this error, +F → ++:++ SV, starts several rules in the 731 training data, including some with VG:VV as the next item. The subrule ++:++ SV seems to be reliable, whereas the subrules containing AA:VV (++:++ AA:VV and SV AA:VV) are less reliable. We thus determine reliability by seeing how often each subsequence occurs in the training rule set. Throughout this paper, we use the term subrule to refer to a rule subsequence which is exactly one element shorter than the rule it is a component of. We examine subrules, counting their frequency as subrules, not as complete rules. For example, TOP rules with more than one dependent are problematic, e.g., TOP → root ROOT:AV ROOT:NN. Correspondingly, Pth →ere r are no rOulTe:sA wVith R OthOrTee: NeNle-. ments containing the subrule root ROOT:AV. We formalize this by setting the score s(ei, c) equal to the summation of the frequencies of all comparable subrules containing ei from the training data, as in (5), where B is the set of subrules of r with length one less. (5) s(ei, c) = Psub∈B:ei∈sub C(sub, c) For example, Pwith c = +F, the frequency of +F → ++:++ SV as a subrule is added to the scores f→or ++:++ aVnd a sS aV. s Ibnr tlheis i case, d+ tFo → ++:++ SfoVr VG:BV, +dF S → ++:++ S cVas VG:AV, a +nd+ ++F+ → ++:++ VSV, +VFG →:VV + a:l+l +ad SdV support Vfo,r a n+dF → ++:++ +SV+ being a legitimate dsdub sruuplep.o Thus, ++:++ and SV are less likely to be the sources of any problems. Since +F → SV AA:VV and +F → ++:++ mAsA.:V SVin hcaev +e very l SittVle support i ann tdhe + trFai →ning data, AA:VV receives a low score. Note that the subrule count C(sub, c) is different than counting the number of rules containing a subrule, as can be seen with identical elements. For example, for SS → VN ET:PR ET:PR, C(VN ET:PR, SS) = 2, SinS keeping wE Tith:P tRhe E fTac:Pt Rth,a Ct t(hVerNe are 2 pieces of evidence for its legitimacy. 
3.4 Bigram anomalies 3.4.1 Motivation The bigram method examines relationships between adjacent sisters, complementing the whole rule method by focusing on local properties. For (6), for example, we find the gold and parsed trees in figure 4. For the long parsed rule TA → PR HinD f:igIDur HeD 4.:ID F IoRr t:hIeR lAonNg:R pOar JR:IR, ea lTl Aele →men PtRs get low whole rule scores, i.e., are flagged as potentially erroneous. But only the final elements have anomalous bigrams: HD:ID IR:IR, IR:IR AN:RO, and AN:RO JR:IR all never occur. (6) N a¨r det g ¨aller inkomst a˚ret 1971 ( when it concerns the income year 1971 ( taxerings a˚ret 1972 ) skall barnet ... assessment year 1972 ) shall the child . . . ‘Concerning the income year of 1971 (assessment year 1972), the child . . . ’ 3.4.2 Implementation To obtain a bigram score for an element, we simply add together the bigrams which contain the element in question, as in (7). (7) s(ei, c) = C(ei−1ei, c) + C(eiei+1 , c) Consider the rule from figure 4. With c = TA, the bigram HD:ID IR:IR never occurs, so both HD:ID and IR:IR get 0 added to their score. HD:ID HD:ID, however, is a frequent bigram, so it adds weight to HD:ID, i.e., positive evidence comes from the bigram on the left. If we look at IR:IR, on the other hand, IR:IR AN:RO occurs 0 times, and so IR:IR gets a total score of 0. Both scoring methods treat each element independently. Every single element could be given a low score, even though once one is corrected, another would have a higher score. Future work can examine factoring in all elements at once. 4 Additional information The methods presented so far have limited definitions of comparability. As using complementary information has been useful in, e.g., POS error detection (Loftsson, 2009), we explore other simple comparable properties of a dependency grammar. Namely, we include: a) frequency information of an overall dependency rule and b) information on how likely each dependent is to be in a relation with its head, described next. 4.1 Including POS information Consider PA → SS:NN XX:XX HV OO:VN, as iCl ounsstirdaeterd P iAn figure :5N foNr XthXe :sXeXnte HncVe OinO (8). NT,h aiss rule is entirely correct, yet the XX:XX position has low whole rule and bigram scores. (8) Uppgift om vilka orter som information of which neighborhood who har utk o¨rning finner Ni has delivery find ocks a˚ i . . . you also in . . . ‘You can also find information about which neighborhoods have delivery services in . . . ’ 732 AA HD HD DT PA IR DT AN JR ... N a¨r det g ¨aller inkomst a˚ret 1971 ( taxerings a˚ret 1972 ) ... PR ID ID RO IR NN NN RO TAHDHDPAETIRDTANJR. N a¨r det g ¨aller PR ID inkomst a˚ret ID NN 1971 ( RO IR taxerings a˚ret NN 1972 RO IR ... ) ... IR ... Figure 4: A rule with extra dependents (top=gold, bottom=parsed) ET Uppgift NN DT om vilka PR PO SS orter NN XX PA som har XX OO utk o¨rning HV VN Figure 5: Overflagging (gold=parsed) One method which does not have this problem of overflagging uses a “lexicon” of POS tag pairs, examining relations between POS, irrespective of position. We extract POS pairs, note their dependency relation, and add a L/R to the label to indicate which is the head (Boyd et al., 2008). Additionally, we note how often two POS categories occur as a non-depenency, using the label NIL, to help determine whether there should be any attachment. We generate NILs by enumerating all POS pairs in a sentence. For example, from figure 5, the parsed POS pairs include NN PR → ETL, eN 5N, t hPeO p → NIL, eStc. 
p We convert the frequencies to probabilities. For example, of 4 total occurrences of XX HV in the training data, 2 are XX-R (cf. figure 5). A probability of 0.5 is quite high, given that NILs are often the most frequent label for POS pairs. 5 Evaluation In evaluating the methods, our main question is: how accurate are the dependencies, in terms of both attachment and labeling? We therefore currently examine the scores for elements functioning as dependents in a rule. In figure 5, for example, for har (‘has’), we look at its score within ET → PfoRr hPAar:H (‘Vha asn’)d, not wloohken a itt iftusn scctoiornes w as a head, as in PA → SS:NN XX:XX HV OO:VN. Relatedly, for each method, we are interested in whether elements with scores below a threshold have worse attachment accuracy than scores above, as we predict they do. We can measure this by scoring each testing data position below the threshold as a 1 if it has the correct head and dependency relation and a 0 otherwise. These are simply labeled attachment scores (LAS). Scoring separately for positions above and below a threshold views the task as one of sorting parser output into two bins, those more or less likely to be correctly parsed. For development, we also report unlabeled attachement scores (UAS). Since the goal is to speed up the post-editing of corpus data by flagging erroneous rules, we also report the precision and recall for error detection. We count either attachment or labeling errors as an error, and precision and recall are measured with respect to how many errors are found below the threshold. For development, we use two Fscores to provide a measure of the settings to ex- amine across language, corpus, and parser conditions: the balanced F1 measure and the F0.5 measure, weighing precision twice as much. Precision is likely more important in this context, so as to prevent annotators from sorting through too many false positives. In practice, one way to use these methods is to start with the lowest thresholds and work upwards until there are too many non-errors. To establish a basis for comparison, we compare 733 method performance to a parser on its own.4 By examining the parser output without any automatic assistance, how often does a correction need to be made? 5.1 The data All our data comes from the CoNLL-X Shared Task (Buchholz and Marsi, 2006), specifically the 4 data sets freely available online. We use the Swedish Talbanken data (Nilsson and Hall, 2005) and the transition-based dependency parser MaltParser (Nivre et al., 2007), with the default set- tings, for developing the method. To test across languages and corpora, we use MaltParser on the other 3 corpora: the Danish DDT (Kromann, 2003), Dutch Alpino (van der Beek et al., 2002), and Portuguese Bosque data (Afonso et al., 2002). Then, we present results using the graph-based parser MSTParser (McDonald and Pereira, 2006), again with default settings, to test the methods across parsers. We use the gold standard POS tags for all experiments. 5.2 Development data In the first line of table 1, we report the baseline MaltParser accuracies on the Swedish test data, including baseline error detection precision (=1LASb), recall, and (the best) F-scores. In the rest of table 1, we report the best-performing results for each of the methods,5 providing the number of rules below and above a particular threshold, along with corresponding UAS and LAS values. 
To get the raw number of identified rules, multiply the number of corpus position below a threshold (b) times the error detection precision (P). For ex- × ample, the bigram method with a threshold of 39 leads to finding 283 errors (455 .622). Dependency e 2le8m3e enrrtos rws (it4h5 frequency below the lowest threshold have lower attachment scores (66.6% vs. 90. 1% LAS), showing that simply using a complete rule helps sort dependencies. However, frequency thresholds have fairly low precision, i.e., 33.4% at their best. The whole rule and bigram methods reveal greater precision in identifying problematic dependencies, isolating elements with lower UAS and LAS scores than with frequency, along with corresponding greater pre4One may also use parser confidence or parser revision methods as a basis of comparison, but we are aware of no systematic evaluation of these approaches for detecting errors. 5Freq=rule frequency, WR=whole rule, Bi=bigram, POS=POS-based (POS scores multiplied by 10,000) cision and F-scores. The bigram method is more fine-grained, identifying small numbers of rule elements at each threshold, resulting in high error detection precision. With a threshold of 39, for example, we find over a quarter of the parser errors with 62% precision, from this one piece of information. For POS information, we flag 23.6% of the cases with over 60% precision (at 81.6). Taking all these results together, we can begin to sort more reliable from less reliable dependency tree elements, using very simple information. Additionally, these methods naturally group cases together by linguistic properties (e.g., adverbialverb dependencies within a particualr context), allowing a human to uncover the principle behind parse failure and ajudicate similar cases at the same time (cf. Wallis, 2003). 5.3 Discussion Examining some of the output from the Talbanken test data by hand, we find that a prominent cause of false positives, i.e., correctly-parsed cases with low scores, stems from low-frequency dependency-POS label pairs. If the dependency rarely occurs in the training data with the particular POS, then it receives a low score, regardless of its context. For example, the parsed rule TA → IG:IG RO has a correct dependency relation (IG) G be:tIwGee RnO Oth hea aPsO aS c tags IcGt d daenpde nitsd e hnecayd RO, yet is assigned a whole rule score of 2 and a bigram score of 20. It turns out that IG:IG only occurs 144 times in the training data, and in 11 of those cases (7.6%) it appears immediately before RO. One might consider normalizing the scores based on overall frequency or adjusting the scores to account for other dependency rules in the sentence: in this case, there may be no better attachment. Other false positives are correctly-parsed elements that are a part of erroneous rules. For instance, in AA → UK:UK SS:PO TA:AJ AV SP:AJ sOtaAn:PceR, +nF A:HAV → +F:HV, Kth SeS fi:rPsOt + TFA:H:AVJ AisV correct, yet given a low score (0 whole rule, 1 bigram). The following and erroneous +F:HV is similarly given a low score. As above, such cases might be handled by looking for attachments in other rules (cf. Attardi and Ciaramita, 2007), but these cases should be relatively unproblematic for handcorrection, given the neighboring error. We also examined false negatives, i.e., errors with high scores. There are many examples of PR PA:NN rules, for instance, with the NN improp734 erly attached, but there are also many correct instances of PR PA:NN. 
To sort out the errors, one needs to look at lexical knowledge and/or other dependencies in the tree. With so little context, frequent rules with only one dependent are not prime candidates for our methods of error detection. 5.4 Other corpora We now turn to the parsed data from three other corpora. The Alpino and Bosque corpora are approximately the same size as Talbanken, so we use the same thresholds for them. The DDT data is approximately half the size; to adjust, we simply halve the scores. In tables 2, 3, and 4, we present the results, using the best F0.5 and F1 settings from development. At a glance, we observe that the best method differs for each corpus and depending on an emphasis of precision or recall, with the bigram method generally having high precision. For Alpino, error detection is better with frequency than, for example, bigram scores. This is likely due to the fact that Alpino has the smallest label set of any of the corpora, with only 24 dependency labels and 12 POS tags (cf. 64 and 41 in Talbanken, respectively). With a smaller label set, there are less possible bigrams that could be anomalous, but more reliable statistics about a whole rule. Likewise, with fewer possible POS tag pairs, Alpino has lower precision for the lowthreshold POS scores than the other corpora. For the whole rule scores, the DDT data is worse (compare its 46. 1% precision with Bosque’s 45.6%, with vastly different recall values), which could be due to the smaller training data. One might also consider the qualitative differences in the dependency inventory of DDT compared to the others—e.g., appositions, distinctions in names, and more types of modifiers. 5.5 MSTParser Turning to the results of running the methods on the output of MSTParser, we find similar but slightly worse values for the whole rule and bigram methods, as shown in tables 5-8. What is 735 most striking are the differences in the POS-based method for Bosque and DDT (tables 7 and 8), where a large percentage of the test corpus is underneath the threshold. MSTParser is apparently positing fewer distinct head-dependent pairs, as most of them fall under the given thresholds. With the exception of the POS-based method for DDT (where LASb is actually higher than LASa) the different methods seem to be accurate enough to be used as part of corpus post-editing. 6 Summary and Outlook We have proposed different methods for flagging the errors in automatically-parsed corpora, by treating the problem as one of looking for anoma- lous rules with respect to a treebank grammar. The different methods incorporate differing types and amounts of information, notably comparisons among dependency rules and bigrams within such rules. Using these methods, we demonstrated success in sorting well-formed output from erroneous output across language, corpora, and parsers. Given that the rule representations and comparison methods use both POS and dependency information, a next step in evaluating and improving the methods is to examine automatically POStagged data. Our methods should be able to find POS errors in addition to dependency errors. Furthermore, although we have indicated that differences in accuracy can be linked to differences in the granularity and particular distinctions of the annotation scheme, it is still an open question as to which methods work best for which schemes and for which constructions (e.g., coordination). 
Acknowledgments

Thanks to Sandra Kübler and Amber Smith for comments on an earlier draft; Yvonne Samuelsson for help with the Swedish translations; the IU Computational Linguistics discussion group for feedback; and Julia Hockenmaier, Chris Brew, and Rebecca Hwa for discussion on the general topic.

A Some Talbanken05 categories
Dependencies

References

Afonso, Susana, Eckhard Bick, Renato Haber and Diana Santos (2002). Floresta Sintá(c)tica: a treebank for Portuguese. In Proceedings of LREC 2002. Las Palmas, pp. 1698–1703.
Attardi, Giuseppe and Massimiliano Ciaramita (2007). Tree Revision Learning for Dependency Parsing. In Proceedings of NAACL-HLT-07. Rochester, NY, pp. 388–395.
Bick, Eckhard (2007). Hybrid Ways to Improve Domain Independence in an ML Dependency Parser. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007. Prague, Czech Republic, pp. 1119–1123.
Boyd, Adriane, Markus Dickinson and Detmar Meurers (2008). On Detecting Errors in Dependency Treebanks. Research on Language and Computation 6(2), 113–137.
Buchholz, Sabine and Erwin Marsi (2006). CoNLL-X Shared Task on Multilingual Dependency Parsing. In Proceedings of CoNLL-X. New York City, pp. 149–164.
Campbell, David and Stephen Johnson (2002). A transformational-based learner for dependency grammars in discharge summaries. In Proceedings of the ACL-02 Workshop on Natural Language Processing in the Biomedical Domain. Philadelphia, pp. 37–44.
Dickinson, Markus (2008). Ad Hoc Treebank Structures. In Proceedings of ACL-08. Columbus, OH.
Dickinson, Markus and Jennifer Foster (2009). Similarity Rules! Exploring Methods for Ad-Hoc Rule Detection. In Proceedings of TLT-7. Groningen, The Netherlands.
Gildea, Daniel (2001). Corpus Variation and Parser Performance. In Proceedings of EMNLP-01. Pittsburgh, PA.
Hall, Keith and Václav Novák (2005). Corrective Modeling for Non-Projective Dependency Parsing. In Proceedings of IWPT-05. Vancouver, pp. 42–52.
Kromann, Matthias Trautner (2003). The Danish Dependency Treebank and the underlying linguistic theory. In Proceedings of TLT-03.
Kuhlmann, Marco and Giorgio Satta (2009). Treebank Grammar Techniques for Non-Projective Dependency Parsing. In Proceedings of EACL-09. Athens, Greece, pp. 478–486.
Loftsson, Hrafn (2009). Correcting a POS-Tagged Corpus Using Three Complementary Methods. In Proceedings of EACL-09. Athens, Greece, pp. 523–531.
McDonald, Ryan and Fernando Pereira (2006). Online learning of approximate dependency parsing algorithms. In Proceedings of EACL-06. Trento.
Nilsson, Jens and Johan Hall (2005). Reconstruction of the Swedish Treebank Talbanken. MSI report 05067, Växjö University: School of Mathematics and Systems Engineering.
Nivre, Joakim, Johan Hall, Jens Nilsson, Atanas Chanev, Gülşen Eryiğit, Sandra Kübler, Svetoslav Marinov and Erwin Marsi (2007). MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering 13(2), 95–135.
Owczarzak, Karolina (2009). DEPEVAL(summ): Dependency-based Evaluation for Automatic Summaries. In Proceedings of ACL-AFNLP-09. Suntec, Singapore, pp. 190–198.
Przepiórkowski, Adam (2006). What to acquire from corpora in automatic valence acquisition. In Violetta Koseska-Toszewa and Roman Roszko (eds.), Semantyka a konfrontacja językowa, tom 3, Warsaw: Sławistyczny Ośrodek Wydawniczy PAN, pp. 25–41.
Sekine, Satoshi (1997). The Domain Dependence of Parsing. In Proceedings of ANLP-96. Washington, DC.
van der Beek, Leonoor, Gosse Bouma, Robert Malouf and Gertjan van Noord (2002). The Alpino Dependency Treebank. In Proceedings of CLIN 2001. Rodopi.
van Noord, Gertjan and Gosse Bouma (2009). Parsed Corpora for Linguistics. In Proceedings of the EACL 2009 Workshop on the Interaction between Linguistics and Computational Linguistics: Virtuous, Vicious or Vacuous? Athens, pp. 33–39.
Wallis, Sean (2003). Completing Parsed Corpora. In Anne Abeillé (ed.), Treebanks: Building and using syntactically annotated corpora, Dordrecht: Kluwer Academic Publishers, pp. 61–71.
Wan, Stephen, Mark Dras, Robert Dale and Cécile Paris (2009). Improving Grammaticality in Statistical Sentence Generation: Introducing a Dependency Spanning Tree Algorithm with an Argument Satisfaction Model. In Proceedings of EACL-09. Athens, Greece, pp. 852–860.
Xu, Peng, Jaeho Kang, Michael Ringgaard and Franz Och (2009). Using a Dependency Parser to Improve SMT for Subject-Object-Verb Languages. In Proceedings of NAACL-HLT-09. Boulder, Colorado, pp. 245–253.
2 0.67364007 48 acl-2010-Better Filtration and Augmentation for Hierarchical Phrase-Based Translation Rules
Author: Zhiyang Wang ; Yajuan Lv ; Qun Liu ; Young-Sook Hwang
Abstract: This paper presents a novel filtration criterion to restrict rule extraction for the hierarchical phrase-based translation model, where a bilingual but relaxed well-formed dependency restriction is used to filter out bad rules. Furthermore, a new feature which describes the regularity that the source/target dependency edge triggers the target/source word is also proposed. Experimental results show that the new criterion weeds out about 40% of the rules while improving translation performance, and the new feature brings another improvement to the baseline system, especially on larger corpora.
3 0.65777385 143 acl-2010-Importance of Linguistic Constraints in Statistical Dependency Parsing
Author: Bharat Ram Ambati
Abstract: Statistical systems with high accuracy are very useful in real-world applications. If these systems can capture basic linguistic information, then the usefulness of these statistical systems improves a lot. This paper is an attempt at incorporating linguistic constraints in statistical dependency parsing. We consider a simple linguistic constraint that a verb should not have multiple subjects/objects as its children in the dependency tree. We first describe the importance of this constraint considering Machine Translation systems which use dependency parser output, as an example application. We then show how the current state-of-the-art dependency parsers violate this constraint. We present two new methods to handle this constraint. We evaluate our methods on the state-of-the-art dependency parsers for Hindi and Czech.
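As a rough illustration of the constraint this abstract describes (a verb should not have multiple subjects or objects among its children), the short sketch below checks a dependency tree for such violations. The tree encoding, the label names ("subj", "obj"), and the function are hypothetical and are not taken from the cited paper.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# A dependency tree as a list of (dependent_index, head_index, label) triples,
# with hypothetical label names for subject and object relations.
Arc = Tuple[int, int, str]

def constraint_violations(arcs: List[Arc],
                          restricted=("subj", "obj")) -> Dict[Tuple[int, str], int]:
    """Return (head, label) pairs where a head has more than one child with the same restricted label."""
    counts: Dict[Tuple[int, str], int] = defaultdict(int)
    for dependent, head, label in arcs:
        if label in restricted:
            counts[(head, label)] += 1
    return {key: n for key, n in counts.items() if n > 1}

# Hypothetical parse in which head 2 wrongly receives two subjects:
arcs = [(1, 2, "subj"), (3, 2, "subj"), (4, 2, "obj")]
print(constraint_violations(arcs))   # {(2, 'subj'): 2}
```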
4 0.62165278 214 acl-2010-Sparsity in Dependency Grammar Induction
Author: Jennifer Gillenwater ; Kuzman Ganchev ; Joao Graca ; Fernando Pereira ; Ben Taskar
Abstract: A strong inductive bias is essential in unsupervised grammar induction. We explore a particular sparsity bias in dependency grammars that encourages a small number of unique dependency types. Specifically, we investigate sparsity-inducing penalties on the posterior distributions of parent-child POS tag pairs in the posterior regularization (PR) framework of Graça et al. (2007). In experiments with 12 languages, we achieve substantial gains over the standard expectation maximization (EM) baseline, with average improvement in attachment accuracy of 6.3%. Further, our method outperforms models based on a standard Bayesian sparsity-inducing prior by an average of 4.9%. On English in particular, we show that our approach improves on several other state-of-the-art techniques.
5 0.59830183 83 acl-2010-Dependency Parsing and Projection Based on Word-Pair Classification
Author: Wenbin Jiang ; Qun Liu
Abstract: In this paper we describe an intuitionistic method for dependency parsing, where a classifier is used to determine whether a pair of words forms a dependency edge. We also propose an effective strategy for dependency projection, where the dependency relationships of the word pairs in the source language are projected to the word pairs of the target language, leading to a set of classification instances rather than a complete tree. Experiments show that the classifier trained on the projected classification instances significantly outperforms previous projected dependency parsers. More importantly, when this classifier is integrated into a maximum spanning tree (MST) dependency parser, obvious improvement is obtained over the MST baseline.
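To give a flavor of the word-pair classification idea summarized here (a binary classifier decides whether two words form a dependency edge), the following is a minimal, hypothetical sketch with made-up features and toy data; it is not the cited paper's actual model, feature set, or training procedure.

```python
from typing import Dict, List, Tuple
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def pair_features(sentence: List[Tuple[str, str]], i: int, j: int) -> Dict[str, float]:
    """Toy features for a candidate head (i) / dependent (j) pair: POS tags and distance."""
    return {
        f"head_pos={sentence[i][1]}": 1.0,
        f"dep_pos={sentence[j][1]}": 1.0,
        f"pos_pair={sentence[i][1]}_{sentence[j][1]}": 1.0,
        "distance": float(abs(i - j)),
    }

# Hypothetical training pairs: (features, 1 if the pair is a true dependency edge, else 0).
sent = [("the", "DT"), ("dog", "NN"), ("barks", "VBZ")]
X_dicts = [pair_features(sent, 2, 1), pair_features(sent, 1, 0), pair_features(sent, 0, 2)]
y = [1, 1, 0]

vec = DictVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(X_dicts), y)
# Probability that "barks" (index 2) heads "the" (index 0) under this toy model:
print(clf.predict_proba(vec.transform([pair_features(sent, 2, 0)]))[0, 1])
```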
6 0.57581776 69 acl-2010-Constituency to Dependency Translation with Forests
7 0.57460678 9 acl-2010-A Joint Rule Selection Model for Hierarchical Phrase-Based Translation
8 0.56513888 75 acl-2010-Correcting Errors in a Treebank Based on Synchronous Tree Substitution Grammar
9 0.55002213 12 acl-2010-A Probabilistic Generative Model for an Intermediate Constituency-Dependency Representation
10 0.52595806 195 acl-2010-Phylogenetic Grammar Induction
11 0.52278268 169 acl-2010-Learning to Translate with Source and Target Syntax
12 0.47581536 99 acl-2010-Efficient Third-Order Dependency Parsers
13 0.46106005 130 acl-2010-Hard Constraints for Grammatical Function Labelling
14 0.45049885 198 acl-2010-Predicate Argument Structure Analysis Using Transformation Based Learning
15 0.43694907 200 acl-2010-Profiting from Mark-Up: Hyper-Text Annotations for Guided Parsing
16 0.43532977 46 acl-2010-Bayesian Synchronous Tree-Substitution Grammar Induction and Its Application to Sentence Compression
17 0.42672837 124 acl-2010-Generating Image Descriptions Using Dependency Relational Patterns
18 0.42049813 162 acl-2010-Learning Common Grammar from Multilingual Corpus
19 0.40934497 67 acl-2010-Computing Weakest Readings
20 0.40772909 52 acl-2010-Bitext Dependency Parsing with Bilingual Subtree Constraints
topicId topicWeight
[(14, 0.027), (18, 0.308), (25, 0.081), (33, 0.034), (39, 0.024), (42, 0.018), (44, 0.018), (59, 0.059), (73, 0.042), (76, 0.046), (78, 0.043), (83, 0.09), (84, 0.026), (98, 0.115)]
simIndex simValue paperId paperTitle
same-paper 1 0.7714777 84 acl-2010-Detecting Errors in Automatically-Parsed Dependency Relations
Author: Markus Dickinson
2 0.69815683 115 acl-2010-Filtering Syntactic Constraints for Statistical Machine Translation
Author: Hailong Cao ; Eiichiro Sumita
Abstract: Source language parse trees offer very useful but imperfect reordering constraints for statistical machine translation. A lot of effort has been made for soft applications of syntactic constraints. We alternatively propose the selective use of syntactic constraints. A classifier is built automatically to decide whether a node in the parse trees should be used as a reordering constraint or not. Using this information yields a 0.8 BLEU point improvement over a full constraint-based system.
3 0.52418584 211 acl-2010-Simple, Accurate Parsing with an All-Fragments Grammar
Author: Mohit Bansal ; Dan Klein
Abstract: We present a simple but accurate parser which exploits both large tree fragments and symbol refinement. We parse with all fragments of the training set, in contrast to much recent work on tree selection in data-oriented parsing and tree-substitution grammar learning. We require only simple, deterministic grammar symbol refinement, in contrast to recent work on latent symbol refinement. Moreover, our parser requires no explicit lexicon machinery, instead parsing input sentences as character streams. Despite its simplicity, our parser achieves accuracies of over 88% F1 on the standard English WSJ task, which is competitive with substantially more complicated state-of-the-art lexicalized and latent-variable parsers. Additional specific contributions center on making implicit all-fragments parsing efficient, including a coarse-to-fine inference scheme and a new graph encoding.
4 0.5187937 71 acl-2010-Convolution Kernel over Packed Parse Forest
Author: Min Zhang ; Hui Zhang ; Haizhou Li
Abstract: This paper proposes a convolution forest kernel to effectively explore rich structured features embedded in a packed parse forest. As opposed to the convolution tree kernel, the proposed forest kernel does not have to commit to a single best parse tree, and is thus able to explore very large object spaces and much more structured features embedded in a forest. This makes the proposed kernel more robust against parsing errors and data sparseness issues than the convolution tree kernel. The paper presents the formal definition of the convolution forest kernel and also illustrates the algorithm for computing the proposed convolution forest kernel efficiently. Experimental results on two NLP applications, relation extraction and semantic role labeling, show that the proposed forest kernel significantly outperforms the baseline of the convolution tree kernel.
Author: Marine Carpuat ; Yuval Marton ; Nizar Habash
Abstract: We study the challenges raised by Arabic verb and subject detection and reordering in Statistical Machine Translation (SMT). We show that post-verbal subject (VS) constructions are hard to translate because they have highly ambiguous reordering patterns when translated to English. In addition, implementing reordering is difficult because the boundaries of VS constructions are hard to detect accurately, even with a state-of-the-art Arabic dependency parser. We therefore propose to reorder VS constructions into SV order for SMT word alignment only. This strategy significantly improves BLEU and TER scores, even on a strong large-scale baseline and despite noisy parses.
6 0.50141299 185 acl-2010-Open Information Extraction Using Wikipedia
7 0.50137591 17 acl-2010-A Structured Model for Joint Learning of Argument Roles and Predicate Senses
8 0.50110376 153 acl-2010-Joint Syntactic and Semantic Parsing of Chinese
9 0.50079608 93 acl-2010-Dynamic Programming for Linear-Time Incremental Parsing
10 0.49974683 101 acl-2010-Entity-Based Local Coherence Modelling Using Topological Fields
11 0.49927321 128 acl-2010-Grammar Prototyping and Testing with the LinGO Grammar Matrix Customization System
12 0.49901271 75 acl-2010-Correcting Errors in a Treebank Based on Synchronous Tree Substitution Grammar
13 0.49847287 109 acl-2010-Experiments in Graph-Based Semi-Supervised Learning Methods for Class-Instance Acquisition
14 0.49712858 130 acl-2010-Hard Constraints for Grammatical Function Labelling
15 0.4967162 139 acl-2010-Identifying Generic Noun Phrases
16 0.49639842 169 acl-2010-Learning to Translate with Source and Target Syntax
17 0.49560249 218 acl-2010-Structural Semantic Relatedness: A Knowledge-Based Method to Named Entity Disambiguation
18 0.49419677 162 acl-2010-Learning Common Grammar from Multilingual Corpus
19 0.49393064 248 acl-2010-Unsupervised Ontology Induction from Text
20 0.49327558 125 acl-2010-Generating Templates of Entity Summaries with an Entity-Aspect Model and Pattern Mining