Further Meta-Evaluation of Broad-Coverage Surface Realization
Authors: Dominic Espinosa, Rajakrishnan Rajkumar, Michael White, Shoshana Berleant
Abstract: We present the first evaluation of the utility of automatic evaluation metrics on surface realizations of Penn Treebank data. Using outputs of the OpenCCG and XLE realizers, along with ranked WordNet synonym substitutions, we collected a corpus of generated surface realizations. These outputs were then rated and post-edited by human annotators. We evaluated the realizations using seven automatic metrics, and analyzed correlations obtained between the human judgments and the automatic scores. In contrast to previous NLG meta-evaluations, we find that several of the metrics correlate moderately well with human judgments of both adequacy and fluency, with the TER family performing best overall. We also find that all of the metrics correctly predict more than half of the significant system-level differences, though none are correct in all cases. We conclude with a discussion of the implications for the utility of such metrics in evaluating generation in the presence of variation. A further result of our research is a corpus of post-edited realizations, which will be made available to the research community.

1 Introduction and Background

In building surface-realization systems for natural language generation, there is a need for reliable automated metrics to evaluate the output. Unlike in parsing, where there is usually a single gold-standard parse for a sentence, in surface realization there are usually many grammatically acceptable ways to express the same concept. This parallels the task of evaluating machine-translation (MT) systems: for a given segment in the source language, there are usually several acceptable translations into the target language. As human evaluation of translation quality is time-consuming and expensive, a number of automated metrics have been developed to evaluate the quality of MT outputs. In this study, we investigate whether the metrics developed for MT evaluation tasks can be used to reliably evaluate the outputs of surface realizers, and which of these metrics are best suited to this task.

A number of surface realizers have been developed using the Penn Treebank (PTB), and BLEU scores are often reported in the evaluations of these systems. But how useful is BLEU in this context? The original BLEU study (Papineni et al., 2001) scored MT outputs, which are of generally lower quality than grammar-based surface realizations. Furthermore, even for MT systems, the usefulness of BLEU has been called into question (Callison-Burch et al., 2006). BLEU is designed to work with multiple reference sentences, but in treebank realization, there is only a single reference sentence available for comparison.

A few other studies have investigated the use of such metrics in evaluating the output of NLG systems, notably (Reiter and Belz, 2009) and (Stent et al., 2005). The former examined the performance of BLEU and ROUGE with computer-generated weather reports, finding a moderate correlation with human fluency judgments. The latter study applied several MT metrics to paraphrase data from Barzilay and Lee's corpus-based system (Barzilay and Lee, 2003), and found moderate correlations with human adequacy judgments, but little correlation with fluency judgments.
Cahill (2009) examined the performance of six MT metrics (including BLEU) in evaluating the output of an LFG-based surface realizer for German, also finding only weak correlations with the human judgments.

To study the usefulness of evaluation metrics such as BLEU on the output of grammar-based surface realizers used with the PTB, we assembled a corpus of surface realizations from three different realizers operating on Section 00 of the PTB. Two human judges evaluated the adequacy and fluency of each of the realizations with respect to the reference sentence. The realizations were then scored with a number of automated evaluation metrics developed for machine translation. In order to investigate the correlation of targeted metrics with human evaluations, and gather other acceptable realizations for future evaluations, the judges manually repaired each unacceptable realization during the rating task.

In contrast to previous NLG meta-evaluations, we found that several of the metrics correlate moderately well with human judgments of both adequacy and fluency, with the TER family performing best. However, when looking at statistically significant system-level differences in human judgments, we found that some of the metrics get some of the rankings correct, but none get them all correct, with different metrics making different ranking errors. This suggests that multiple metrics should be routinely consulted when comparing realizer systems.

Overall, our methodology is similar to that of previous MT meta-evaluations, in that we collected human judgments of system outputs, and compared these scores with those assigned by automatic metrics. A recent alternative approach to paraphrase evaluation is ParaMetric (Callison-Burch et al., 2008); however, it requires a corpus of annotated (aligned) paraphrases (which does not yet exist for PTB data), and is arguably focused more on paraphrase analysis than paraphrase generation.

The plan of the paper is as follows: Section 2 discusses the preparation of the corpus of surface realizations. Section 3 describes the human evaluation task and the automated metrics applied. Sections 4 and 5 present and discuss the results of these evaluations. We conclude with some general observations about automatic evaluation of surface realizers, and some directions for further research.

2 Data Preparation

We collected realizations of the sentences in Section 00 of the WSJ corpus from the following three sources:
1. OpenCCG, a CCG-based chart realizer (White, 2006)
2. The XLE Generator, an LFG-based system developed by Xerox PARC (Crouch et al., 2008)
3. WordNet synonym substitutions, to investigate how differences in lexical choice compare to grammar-based variation.[1]

Although all three systems used Section 00 of the PTB, they were applied with various parameters (e.g., language models, multiple-output versus single-output) and on different input structures. Accordingly, our study does not compare OpenCCG to XLE, or either of these to the WordNet system.

2.1 OpenCCG realizations

OpenCCG is an open-source parsing/realization library with multimodal extensions to CCG (Baldridge, 2002). The OpenCCG chart realizer takes logical forms as input and produces strings by combining signs for lexical items. Alternative realizations are scored using integrated n-gram and perceptron models.
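To make this ranking step concrete, the following is a minimal, self-contained sketch of choosing among alternative realizations of the same input by n-gram language model score. It is only an illustration under toy assumptions: the add-one-smoothed trigram model and the tiny training corpus stand in for OpenCCG's actual trigram, factored, and perceptron ranking models.

```python
import math
from collections import defaultdict

def train_trigram(sentences):
    """Collect trigram and bigram-context counts from padded token sequences."""
    tri, bi = defaultdict(int), defaultdict(int)
    for s in sentences:
        toks = ["<s>", "<s>"] + s.split() + ["</s>"]
        for i in range(2, len(toks)):
            tri[tuple(toks[i - 2:i + 1])] += 1
            bi[tuple(toks[i - 2:i])] += 1
    return tri, bi

def logprob(sentence, tri, bi, vocab_size=10000):
    """Add-one-smoothed trigram log-probability (real realizers use far better smoothing)."""
    toks = ["<s>", "<s>"] + sentence.split() + ["</s>"]
    return sum(
        math.log((tri[tuple(toks[i - 2:i + 1])] + 1) / (bi[tuple(toks[i - 2:i])] + vocab_size))
        for i in range(2, len(toks))
    )

# Toy training text for the language model.
lm = train_trigram([
    "the executives gave the chefs a standing ovation",
    "the critics gave the play a warm reception",
])

# Alternative realizations of the same input, ranked by LM score.
candidates = [
    "the executives gave the chefs a standing ovation",
    "the executives gave a standing ovation to the chefs",
]
print(max(candidates, key=lambda c: logprob(c, *lm)))
```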
For robustness, fragments are greedily assembled when necessary. Realizations were generated from 1,895 gold-standard logical forms, created by constrained parsing of development-section derivations. The following OpenCCG models (which differ essentially in the way the output is ranked) were used:
1. Baseline 1: output ranked by a trigram word model
2. Baseline 2: output ranked using three language models (a 3-gram word model + a 3-gram model over words with named entity class replacement + a factored language model of words, POS tags and CCG supertags)
3. Baseline 3: perceptron with syntax features and the three LMs mentioned above
4. Perceptron full-model: n-best realizations ranked using a perceptron with syntax features and the three n-gram models, as well as discriminative n-grams

[1] Not strictly surface realizations, since they do not involve an abstract input specification, but for simplicity we refer to them as realizations throughout.

The perceptron model was trained on sections 02–21 of the CCGbank, while a grammar extracted from sections 00–21 was used for realization. In addition, oracle supertags were inserted into the chart during realization. The purpose of such a non-blind testing strategy was to evaluate the quality of the output produced by the statistical ranking models in isolation, rather than focusing on grammar coverage, and to avoid the problems associated with lexical smoothing, i.e., lexical categories in the development section not being present in the training sections. To enrich the variation in the generated realizations, dative alternation was enforced during realization by ensuring alternate lexical categories of the verb in question, as in the following example:
(1) the executives gave [the chefs] [a standing ovation]
(2) the executives gave [a standing ovation] [to the chefs]

2.2 XLE realizations

The corpus of realizations generated by the XLE system contained 42,527 surface realizations of approximately 1,421 Section 00 sentences (an average of 30 per sentence), initially unranked. The LFG f-structures used as input to the XLE generator were derived from automatic parses, as described in (Riezler et al., 2002). The realizations were first tokenized using Penn Treebank conventions, then ranked using perplexities calculated from the same trigram word model used with OpenCCG. For each sentence, the top 4 realizations were selected. The XLE generator provides an interesting point of comparison to OpenCCG, as it uses a manually developed grammar with inputs that are less abstract but potentially noisier, as they are derived from automatic parses rather than gold-standard ones.

2.3 WordNet synonymizer

To produce an additional source of variation, the nouns and verbs of the sentences in Section 00 of the PTB were replaced with all of their WordNet synonyms. Verb forms were generated using verb stems, part-of-speech tags, and the morphg tool.[2] These substituted outputs were then filtered using the n-gram data which Google Inc. has made available.[3] Those without any 5-gram matches centered on the substituted word (or 3-gram matches, in the case of short sentences) were eliminated.

3 Evaluation

From the data sources described in the previous section, a corpus of realizations to be evaluated by the human judges was constructed by randomly choosing 305 sentences from Section 00, then selecting surface realizations of these sentences using the following algorithm:
1. Add OpenCCG's best-scored realization.
2. Add other OpenCCG realizations until all four models are represented, to a maximum of 4.
3. Add up to 4 realizations from either the XLE system or the WordNet pool, chosen randomly.

The intent was to give reasonable coverage of all realizer systems discussed in Section 2 without overloading the human judges. “System” here means any instantiation that emits surface realizations, including various configurations of OpenCCG (using different language models or ranking systems); these can be multiple-output, such as an n-best list, or single-output (best-only, worst-only, etc.). Accordingly, more realizations were selected from the OpenCCG realizer because 5 different systems were being represented. Realizations were chosen randomly, rather than according to sentence types or other criteria, in order to produce a representative sample of the corpus. In total, 2,114 realizations were selected for evaluation.

[2] http://www.informatics.sussex.ac.uk/research/groups/nlp/carroll/morph.html
[3] http://www.ldc.upenn.edu/Catalog/docs/LDC2006T13/readme.txt

3.1 Human judgments

Two human judges evaluated each surface realization on two criteria: adequacy, which represents the extent to which the output conveys all and only the meaning of the reference sentence; and fluency, the extent to which it is grammatically acceptable. The realizations were presented to the judges in sets containing a reference sentence and the 1–8 outputs selected for that sentence. To aid in the evaluation of adequacy, one sentence each of leading and trailing context were displayed. Judges used the guidelines given in Figure 1, based on the scales developed by the NIST Machine Translation Evaluation Workshop.

In addition to rating each realization on the two five-point scales, each judge also repaired each output which he or she did not judge to be fully adequate and fluent. An example is shown in Figure 2. These repairs resulted in new reference sentences for a substantial number of sentences. These repaired realizations were later used to calculate targeted versions of the evaluation metrics, i.e., using the repaired sentence as the reference sentence. Although targeted metrics are not fully automatic, they are of interest because they allow the evaluation algorithm to focus on what is actually wrong with the input, rather than all textual differences. Notably, targeted TER (HTER) has been shown to be more consistent with human judgments than human annotators are with one another (Snover et al., 2006).
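To illustrate how the repaired outputs serve as targeted references, here is a minimal sketch that scores the realization from Figure 2 (shown below) against both the original reference and the human repair, using a plain word-level edit rate as a simplified stand-in for TER (TER proper additionally handles block shifts and other refinements). Against the repair, only the genuine error contributes to the score.

```python
def edit_rate(hyp, ref):
    """Word-level Levenshtein distance divided by reference length
    (a simplified stand-in for TER, which additionally handles block shifts)."""
    h, r = hyp.split(), ref.split()
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i
    for j in range(len(r) + 1):
        d[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(h)][len(r)] / len(r)

realiz = "It weren't clear how NL and Mr. Simmons would respond if Georgia Gulf again spurns them"
ref    = "It wasn't clear how NL and Mr. Simmons would respond if Georgia Gulf spurns them again"
repair = "It wasn't clear how NL and Mr. Simmons would respond if Georgia Gulf again spurns them"

print(edit_rate(realiz, ref))     # penalizes the agreement error and the (acceptable) adverb shift
print(edit_rate(realiz, repair))  # only the residual agreement error remains
```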
Figure 1: Rating scale and guidelines
Score | Adequacy | Fluency
5 | All the meaning of the reference | Perfectly grammatical
4 | Most of the meaning | Awkward or non-native; punctuation errors
3 | Much of the meaning | Agreement errors or minor syntactic problems
2 | Meaning substantially different | Major syntactic problems, such as missing words
1 | Meaning completely different | Completely ungrammatical

Figure 2: Example of repair
Ref.: It wasn't clear how NL and Mr. Simmons would respond if Georgia Gulf spurns them again
Realiz.: It weren't clear how NL and Mr. Simmons would respond if Georgia Gulf again spurns them
Repair: It wasn't clear how NL and Mr. Simmons would respond if Georgia Gulf again spurns them

3.2 Automatic evaluation

The realizations were also evaluated using seven automatic metrics:
• IBM's BLEU, which scores a hypothesis by counting n-gram matches with the reference sentence (Papineni et al., 2001), with smoothing as described in (Lin and Och, 2004)
• The NIST n-gram evaluation metric, similar to BLEU, but rewarding rarer n-gram matches, and using a different length penalty
• METEOR, which measures the harmonic mean of unigram precision and recall, with a higher weight for recall (Banerjee and Lavie, 2005)
• TER (Translation Edit Rate), a measure of the number of edits required to transform a hypothesis sentence into the reference sentence (Snover et al., 2006)
• TERP, an augmented version of TER which performs phrasal substitutions, stemming, and checks for synonyms, among other improvements (Snover et al., 2009)
• TERPA, an instantiation of TERP with edit weights optimized for correlation with adequacy in MT evaluations
• GTM (General Text Matcher), a generalization of the F-measure that rewards contiguous matching spans (Turian et al., 2003)

Additionally, targeted versions of BLEU, METEOR, TER, and GTM were computed by using the human-repaired outputs as the reference set. The human repair was different from the reference sentence in 193 cases (about 9% of the total), and we expected this to result in better scores and correlations with the human judgments overall.

4 Results

4.1 Human judgments

Table 1 summarizes the dataset, as well as the mean adequacy and fluency scores garnered from the human evaluation. Overall adequacy and fluency judgments were high (4.16, 3.63) for the realizer systems on average, and the best-rated realizer systems achieved mean fluency scores above 4.

4.2 Inter-annotator agreement

Inter-annotator agreement was measured using the κ-coefficient, which is commonly used to measure the extent to which annotators agree in category judgment tasks. κ is defined as κ = (P(A) − P(E)) / (1 − P(E)), where P(A) is the observed agreement between annotators and P(E) is the probability of agreement due to chance (Carletta, 1996). Chance agreement for this data is calculated by the method discussed in Carletta's squib. However, in previous work on MT meta-evaluation, Callison-Burch et al. (2007) assume the less strict criterion of uniform chance agreement, i.e., 1/5 for a five-point scale. They also introduce the notion of “relative” κ, which measures how often two or more judges agreed that A > B, A = B, or A < B for two outputs A and B, irrespective of the specific values given on the five-point scale; here, uniform chance agreement is taken to be 1/3. We report both absolute and relative κ in Table 2, using actual chance agreement rather than uniform chance agreement.
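To make the agreement computation concrete, the sketch below implements κ with chance agreement estimated from the annotators' rating distributions, alongside the uniform-chance variant (1/5 for the five-point scale) assumed by Callison-Burch et al. (2007). The ratings are hypothetical, and the marginal-based P(E) here follows Cohen's per-annotator formulation, a close relative of the pooled-marginal estimate described in Carletta's squib.

```python
from collections import Counter

def kappa(r1, r2, uniform_categories=None):
    """kappa = (P(A) - P(E)) / (1 - P(E)); P(E) comes from the annotator marginals,
    or is 1/k when uniform_categories=k is given."""
    n = len(r1)
    p_a = sum(a == b for a, b in zip(r1, r2)) / n
    if uniform_categories:
        p_e = 1.0 / uniform_categories
    else:
        c1, c2 = Counter(r1), Counter(r2)
        p_e = sum((c1[c] / n) * (c2[c] / n) for c in set(r1) | set(r2))
    return (p_a - p_e) / (1 - p_e)

# Hypothetical fluency ratings from two judges on the five-point scale of Figure 1.
judge1 = [5, 4, 5, 3, 2, 5, 4, 4]
judge2 = [5, 4, 4, 3, 2, 5, 4, 3]
print(kappa(judge1, judge2))                        # actual chance agreement, as in Table 2
print(kappa(judge1, judge2, uniform_categories=5))  # uniform chance agreement, as in prior MT work
```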
The κ scores of 0.60 for adequacy and 0.63 for fluency across the entire dataset represent “substantial” agreement, according to the guidelines discussed in (Landis and Koch, 1977), better than is typically reported for machine translation evaluation tasks; for example, Callison-Burch et al. (2007) reported “fair” agreement, with κ = 0.281 for fluency and κ = 0.307 for adequacy (relative). Assuming the uniform chance agreement that the previously cited work adopts, our inter-annotator agreements (both absolute and relative) are still higher. This is likely due to the generally high quality of the realizations evaluated, leading to easier judgments.

4.3 Correlation with automatic evaluation

To determine how well the automatic evaluation methods described in Section 3 correlate with the human judgments, we averaged the human judgments for adequacy and fluency, respectively, for each of the rated realizations, and then computed both Pearson's correlation coefficient and Spearman's rank correlation coefficient between these scores and each of the metrics. Spearman's correlation makes fewer assumptions about the distribution of the data, but may not reflect a linear relationship that is actually present. Both are frequently reported in the literature. Due to space constraints, we show only Spearman's correlation, although the TER family scored slightly better on Pearson's coefficient, relatively. The results for Spearman's correlation are given in Table 3. Additionally, the average scores for adequacy and fluency were themselves averaged into a single score, following (Snover et al., 2009), and the Spearman's correlations of each of the automatic metrics with these scores are given in Table 4. All reported correlations are significant at p < 0.001.

4.4 Bootstrap sampling of correlations

For each of the sub-corpora shown in Table 1, we computed confidence intervals for the correlations between adequacy and fluency human scores with selected automatic metrics (BLEU, HBLEU, TER, TERP, and HTER) as described in (Koehn, 2004). We sampled each sub-corpus 1000 times with replacement, and calculated correlations between the rankings induced by the human scores and those induced by the metrics for each reference sentence. We then used these coefficients to estimate the confidence interval, after excluding the top 25 and bottom 25 coefficients, following (Lin and Och, 2004). The results of this for the BLEU metric are shown in Table 5. We determined which correlations lay within the 95% confidence interval of the best performing metric in each row of Table 3; these figures are italicized.

5 Discussion

5.1 Human judgments of systems

The results for the four OpenCCG perceptron models mostly confirm those reported in (White and Rajkumar, 2009), with one exception: the B-3 model was below B-2, though the P-B (perceptron-best) model still scored highest. This may have been due to differences in the testing scenario. None of the differences in adequacy scores among the individual systems are significant, with the exception of the WordNet system. In this case, the lack of word-sense disambiguation for the substituted words results in a poor overall adequacy score (e.g., wage floor → wage story). Conversely, it scores highest for fluency, as substituting a noun or verb with a synonym does not usually introduce ungrammaticality.
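The wage floor → wage story example follows directly from expanding every synset of the target noun without sense disambiguation. Below is a minimal sketch of that substitution-candidate step from Section 2.3, using NLTK's WordNet interface as an assumed stand-in (the paper does not specify its WordNet access method, and the morphg verb handling and Google n-gram filtering are omitted here).

```python
from nltk.corpus import wordnet as wn  # requires the NLTK WordNet data to be installed

def noun_synonyms(word):
    """All lemma names across every noun synset of the word, with no sense disambiguation,
    mirroring the unconstrained substitution of Section 2.3."""
    syns = set()
    for synset in wn.synsets(word, pos=wn.NOUN):
        for lemma in synset.lemma_names():
            if lemma.lower() != word:
                syns.add(lemma.replace("_", " "))
    return sorted(syns)

# 'floor' picks up the building-storey sense, licensing substitutions like
# 'wage floor' -> 'wage story', which wrecks adequacy while remaining fluent.
print(noun_synonyms("floor"))
```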
5.2 Correlations of human judgments with MT metrics

Of the non-human-targeted metrics evaluated, BLEU and TER/TERP demonstrate the highest correlations with the human judgments of fluency (r = 0.62, 0.64). The TER family of evaluation metrics has been observed to perform very well in MT-evaluation tasks, and although the data evaluated here differs from typical MT data in some important ways, the correlation of TERP with the human judgments is substantial. In contrast with previous MT evaluations where TERP performs considerably better than TER, these scored close to equal on our data, possibly because TERP's stem, synonym, and paraphrase matching are less useful when most of the variation is syntactic. The correlations with BLEU and METEOR are lower than those reported in (Callison-Burch et al., 2007); in that study, BLEU achieved adequacy and fluency correlations of 0.690 and 0.722, respectively, and METEOR achieved 0.701 and 0.719. The correlations for these metrics might be expected to be lower for our data, since overall quality is higher, making the metrics' task more difficult as the outputs involve subtler differences between acceptable and unacceptable variation.

The human-targeted metrics (indicated by the prefix H in the data tables) correlated even more strongly with the human judgments, compared to the non-targeted versions. HTER demonstrated the best correlation with realizer fluency (r = 0.75). For several kinds of acceptable variation involving the rearrangement of constituents (such as dative shift), TERP gives a more reasonable score than BLEU, due to its ability to directly evaluate phrasal shifts. The following realization was rated 4.5 for fluency, and was more correctly ranked by TERP than BLEU:
(3) Ref.: The deal also gave Mitsui access to a high-tech medical product.
(4) Realiz.: The deal also gave access to a high-tech medical product to Mitsui.

For each reference sentence, we compared the ranking of its realizations induced from the human scores to the ranking induced from the TERP score, and counted the rank errors by the latter, informally categorizing them by error type (see Table 7). In the 50 sentences with the highest numbers of rank errors, 17 were affected by punctuation differences, typically involving variation in comma placement. Human fluency judgments of outputs with only punctuation problems were generally high, and many realizations with commas inserted or removed were rated fully fluent by the annotators. However, TERP penalizes such insertions or deletions. Agreement errors are another frequent source of ranking errors for TERP. The human judges tended to harshly penalize sentences with number-agreement or tense errors, whereas TERP applies only a single substitution penalty for each such error. We expect that with suitable optimization of edit weights to avoid over-penalizing punctuation shifts and under-penalizing agreement errors, TERP would exhibit an even stronger correlation with human fluency judgments.

None of the evaluation metrics can distinguish an acceptable movement of a word or constituent from an unacceptable movement, with only one reference sentence. A substantial source of error for both TERP and BLEU is variation in adverbial placement, as shown in (7). Similar errors are seen with prepositional phrases and some commonly-occurring temporal adverbs, which typically admit a number of variations in placement.
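To see why n-gram overlap under-rewards such acceptable shifts, the sketch below scores the realization in (4) against the reference in (3) with NLTK's smoothed sentence-level BLEU. This is an assumed stand-in for the paper's BLEU implementation (IBM's BLEU with the smoothing of Lin and Och (2004)); the point is simply that the moved constituent breaks several higher-order n-gram matches even though the output is fluent and adequate.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

ref = "The deal also gave Mitsui access to a high-tech medical product .".split()
hyp = "The deal also gave access to a high-tech medical product to Mitsui .".split()

# Single-reference, smoothed sentence-level BLEU, as in the treebank setting.
smooth = SmoothingFunction().method1
print(sentence_bleu([ref], hyp, smoothing_function=smooth))
# The shifted "to Mitsui" breaks 3- and 4-gram matches, lowering BLEU despite full fluency;
# TERP instead charges a single phrasal shift.
```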
(7) Ref.: We need to clarify what exactly is wrong with it.
Realiz. | Flu. | TERP | BLEU
We need to clarify exactly what is wrong with it. | 5 | 0.1 | 0.5555
We need to clarify exactly what 's wrong with it. | 5 | 0.2 | 0.4046
We need to clarify what , exactly , is wrong with it. | 5 | 0.2 | 0.5452
We need to clarify what is wrong with it exactly. | 4.5 | 0.1 | 0.6756
We need to clarify what exactly , is wrong with it. | 4 | 0.1 | 0.7017
We need to clarify what , exactly is wrong with it. | 4 | 0.1 | 0.7017
We needs to clarify exactly what is wrong with it. | 3 | 0.103 | 0.346

Another important example of acceptable variation which these metrics do not generally rank correctly is dative alternation:
(5) Ref.: When test booklets were passed out 48 hours ahead of time, she says she copied questions in the social studies section and gave the answers to students.
(6) Realiz.: When test booklets were passed out 48 hours ahead of time , she says she copied questions in the social studies section and gave students the answers.

The correlations of each of the metrics with the human judgments of fluency for the realizer systems indicate at least a moderate relationship, in contrast with the results reported in (Stent et al., 2005) for paraphrase data, which found an inverse correlation for fluency, and (Cahill, 2009) for the output of a surface realizer for German, which found only a weak correlation. However, the former study employed a corpus-based paraphrase generation system rather than grammar-driven surface realizers, and the resulting paraphrases exhibited much broader variation. In Cahill's study, the outputs of the realizer were almost always grammatically correct, and the automated evaluation metrics were ranking markedness instead of grammatical acceptability.

5.3 System-level comparisons

In order to investigate the efficacy of the metrics in ranking different realizer systems, or competing realizations from the same system generated using different ranking models, we considered seven different “systems” from the whole dataset of realizations. These consisted of five OpenCCG-based realizations (the best realization from three baseline models, and the best and the worst realization from the full perceptron model), and two XLE-based systems (the best and the worst realization, after ranking the outputs of the XLE realizer with an n-gram model). The mean of the combined adequacy and fluency scores of each of these seven systems was compared with that of every other system, resulting in 21 pairwise comparisons. Then Tukey's HSD test was performed to determine the systems which differed significantly in terms of the average adequacy and fluency rating they received.[4] The test revealed five pairwise comparisons where the scores were significantly different. Subsequently, for each of these systems, an overall system-level score for each of the MT metrics was calculated. For the five pairwise comparisons where the adequacy-fluency group means differed significantly, we checked whether the metric ranked the systems correctly. Table 8 shows the results of a pairwise comparison between the ranking induced by each evaluation metric, and the ranking induced by the human judgments. Five of the seven non-targeted metrics correctly rank more than half of the systems. NIST, METEOR, and GTM get the most comparisons right, but neither NIST nor GTM correctly rank the OpenCCG baseline model 1 with respect to the XLE-best model.
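A minimal sketch of the significance-testing step described above is given below, using statsmodels' Tukey HSD implementation on hypothetical per-realization combined adequacy-fluency scores for three of the systems in Table 8's legend; the library choice and the scores are assumptions, as the paper does not specify the software used.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical combined adequacy-fluency scores for three systems (legend of Table 8).
systems = {"PB": 4.3, "XB": 4.0, "XW": 3.4}
scores, labels = [], []
for name, mean in systems.items():
    scores.extend(np.clip(rng.normal(mean, 0.5, size=100), 1, 5))
    labels.extend([name] * 100)

# All pairwise comparisons, corrected for multiple post-hoc tests (cf. footnote 4).
print(pairwise_tukeyhsd(endog=np.array(scores), groups=np.array(labels), alpha=0.05))
```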
TER and TERP get two of the five comparisons correct, and they incorrectly rank two of the five OpenCCG model comparisons, as well as the comparison between the XLE-worst and OpenCCG-best systems. For the targeted metrics, HNIST is correct for all five comparisons, while neither HBLEU nor HMETEOR correctly rank all the OpenCCG models. On the other hand, HTER and HGTM incorrectly rank the XLE-best system versus the OpenCCG-based models. In summary, some of the metrics get some of the rankings correct, but none of the non-targeted metrics get all of them correct. Moreover, different metrics make different ranking errors. This argues for the use of multiple metrics in comparing realizer systems.

[4] This particular test was chosen since it corrects for multiple post-hoc analyses conducted on the same data set.

6 Conclusion

Our study suggests that although the task of evaluating the output from realizer systems differs from the task of evaluating machine translations, the automatic metrics used to evaluate MT outputs deliver moderate correlations with combined human fluency and adequacy scores when used on surface realizations. We also found that the MT-evaluation metrics are useful in evaluating different versions of the same realizer system (e.g., the various OpenCCG realization ranking models), and finding cases where a system is performing poorly. As in MT-evaluation tasks, human-targeted metrics have the highest correlations with human judgments overall. These results suggest that the MT-evaluation metrics are useful for developing surface realizers. However, the correlations are lower than those reported for MT data, suggesting that they should be used with caution, especially for cross-system evaluation, where consulting multiple metrics may yield more reliable comparisons.

In our study, the targeted version of TERP correlated most strongly with human judgments of fluency. In future work, the performance of the TER family of metrics on this data might be improved by optimizing the edit weights used in computing its scores, so as to avoid over-penalizing punctuation movements or under-penalizing agreement errors, both of which were significant sources of ranking errors. Multiple reference sentences may also help mitigate these problems, and the corpus of human-repaired realizations that has resulted from our study is a step in this direction, as it provides multiple references for some cases. We expect the corpus to also prove useful for feature engineering and error analysis in developing better realization models.[5]

[5] The corpus can be downloaded from http://www.ling.ohio-state.edu/~mwhite/data/emnlp10/.

Acknowledgements

We thank Aoife Cahill and Tracy King for providing us with the output of the XLE generator. We also thank Chris Callison-Burch and the anonymous reviewers for their helpful comments and suggestions. This material is based upon work supported by the National Science Foundation under Grant No. 0812297.

References

Jason Baldridge. 2002. Lexically Specified Derivational Control in Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh.
S. Banerjee and A. Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72.
R. Barzilay and L. Lee. 2003. Learning to paraphrase: An unsupervised approach using multiple-sequence alignment.
In Proceedings of HLT-NAACL 2003, pages 16–23.
Aoife Cahill. 2009. Correlating human and automatic evaluation of a German surface realiser. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 97–100, Suntec, Singapore, August. Association for Computational Linguistics.
C. Callison-Burch, M. Osborne, and P. Koehn. 2006. Re-evaluating the role of BLEU in machine translation research. In Proceedings of EACL 2006, pages 249–256.
Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (Meta-) evaluation of machine translation. In StatMT '07: Proceedings of the Second Workshop on Statistical Machine Translation, pages 136–158, Morristown, NJ, USA. Association for Computational Linguistics.
C. Callison-Burch, T. Cohn, and M. Lapata. 2008. ParaMetric: An automatic evaluation metric for paraphrasing. In Proceedings of the 22nd International Conference on Computational Linguistics, Volume 1, pages 97–104. Association for Computational Linguistics.
J. Carletta. 1996. Assessing agreement on classification tasks: the kappa statistic. Computational Linguistics, 22(2):249–254.
Dick Crouch, Mary Dalrymple, Ron Kaplan, Tracy King, John Maxwell, and Paula Newman. 2008. XLE documentation. Technical report, Palo Alto Research Center.
Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing.
J.R. Landis and G.G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, 33(1):159–174.
Chin-Yew Lin and Franz Josef Och. 2004. ORANGE: a method for evaluating automatic evaluation metrics for machine translation. In COLING '04: Proceedings of the 20th International Conference on Computational Linguistics, page 501, Morristown, NJ, USA. Association for Computational Linguistics.
K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2001. BLEU: a method for automatic evaluation of machine translation. Technical report, IBM Research.
E. Reiter and A. Belz. 2009. An investigation into the validity of some metrics for automatically evaluating natural language generation systems. Computational Linguistics, 35(4):529–558.
Stefan Riezler, Tracy H. King, Ronald M. Kaplan, Richard Crouch, John T. Maxwell III, and Mark Johnson. 2002. Parsing the Wall Street Journal using a lexical-functional grammar and discriminative estimation techniques. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 271–278, Philadelphia, Pennsylvania, USA, July. Association for Computational Linguistics.
Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the Association for Machine Translation in the Americas, pages 223–231.
M. Snover, N. Madnani, B.J. Dorr, and R. Schwartz. 2009. Fluency, adequacy, or HTER?: exploring different human judgments with a tunable MT metric. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 259–268. Association for Computational Linguistics.
Amanda Stent, Matthew Marge, and Mohit Singhai. 2005. Evaluating evaluation methods for generation in the presence of variation. In Proceedings of CICLing.
J.P. Turian, L. Shen, and I.D. Melamed. 2003. Evaluation of machine translation and its evaluation. In Proceedings of MT Summit IX.
Michael White and Rajakrishnan Rajkumar. 2009. Perceptron reranking for CCG realization.
In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 410–419, Singapore, August. Association for Computational Linguistics.
Michael White. 2006. Efficient Realization of Coordinate Structures in Combinatory Categorial Grammar. Research on Language and Computation, 4(1):39–75.

Table 1: Descriptive statistics

Table 2: Corpora-wise inter-annotator agreement (absolute and relative κ values shown)

Table 3: Spearman's correlations among NIST (N), BLEU (B), METEOR (M), GTM (G), TERp (TP), TERpa (TA), TER (T), human variants (HN, HB, HM, HT, HG) and human judgments (-Adq: adequacy and -Flu: fluency); scores which fall within the 95% CI of the best are italicized

Table 4: Spearman's correlations among NIST (N), BLEU (B), METEOR (M), GTM (G), TERp (TP), TERpa (TA), TER (T), human variants (HN, HB, HM, HT, HG) and human judgments (combined adequacy and fluency scores)

Table 5: Spearman's correlation analysis (bootstrap sampling) of the BLEU scores of various systems with human adequacy and fluency scores

Table 6: Spearman's correlations of NIST (N), BLEU (B), METEOR (M), GTM (G), TERp (TP), TERpa (TA), human variants (HT, HN, HB, HM, HG), and individual human judgments (combined adq. and flu. scores)

Table 7: Factors influencing TERP ranking errors for 50 worst-ranked realization groups
Factor | Count
Punctuation | 17
Adverbial shift | 16
Agreement | 14
Other shifts | 8
Conjunct rearrangement | 8
Complementizer ins/del | 5
PP shift | 4

Table 8: Metric-wise ranking performance in terms of agreement with a ranking induced by combined adequacy and fluency scores; each metric gets a score out of 5 (i.e., the number of system-level comparisons that emerged significant as per the Tukey's HSD test). Legend: Perceptron Best (PB); Perceptron Worst (PW); XLE Best (XB); XLE Worst (XW); OpenCCG baseline models 1 to 3 (C1 ... C3)
For robustness, fragments are greedily assembled when necessary. Realizations were generated from 1,895 gold standard logical forms, created by constrained parsing of development-section derivations. The following OpenCCG models (which differ essentially in the way the output is ranked) were used:

1. Baseline 1: Output ranked by a trigram word model
2. Baseline 2: Output ranked using three language models (3-gram words + 3-gram words with named entity class replacement + factored language model of words, POS tags and CCG supertags)
3. Baseline 3: Perceptron with syntax features and the three LMs mentioned above
4. Perceptron full-model: n-best realizations ranked using perceptron with syntax features and the three n-gram models, as well as discriminative n-grams

Footnote 1: Not strictly surface realizations, since they do not involve an abstract input specification, but for simplicity we refer to them as realizations throughout.

The perceptron model was trained on sections 02-21 of the CCGbank, while a grammar extracted from sections 00-21 was used for realization. In addition, oracle supertags were inserted into the chart during realization. The purpose of such a non-blind testing strategy was to evaluate the quality of the output produced by the statistical ranking models in isolation, rather than focusing on grammar coverage, and avoid the problems associated with lexical smoothing, i.e. lexical categories in the development section not being present in the training section. To enrich the variation in the generated realizations, dative alternation was enforced during realization by ensuring alternate lexical categories of the verb in question, as in the following example:

(1) the executives gave [the chefs] [a standing ovation]
(2) the executives gave [a standing ovation] [to the chefs]

2.2 XLE realizations

The corpus of realizations generated by the XLE system contained 42,527 surface realizations of approximately 1,421 section 00 sentences (an average of 30 per sentence), initially unranked. The LFG f-structures used as input to the XLE generator were derived from automatic parses, as described in (Riezler et al., 2002). The realizations were first tokenized using Penn Treebank conventions, then ranked using perplexities calculated from the same trigram word model used with OpenCCG. For each sentence, the top 4 realizations were selected. The XLE generator provides an interesting point of comparison to OpenCCG as it uses a manually-developed grammar with inputs that are less abstract but potentially noisier, as they are derived from automatic parses rather than gold-standard ones.

2.3 WordNet synonymizer

To produce an additional source of variation, the nouns and verbs of the sentences in section 00 of the PTB were replaced with all of their WordNet synonyms. Verb forms were generated using verb stems, part-of-speech tags, and the morphg tool.2 These substituted outputs were then filtered using the n-gram data which Google Inc. has made available.3 Those without any 5-gram matches centered on the substituted word (or 3-gram matches, in the case of short sentences) were eliminated.

3 Evaluation

From the data sources described in the previous section, a corpus of realizations to be evaluated by the human judges was constructed by randomly choosing 305 sentences from section 00, then selecting surface realizations of these sentences using the following algorithm:

1. Add OpenCCG’s best-scored realization.
2. Add other OpenCCG realizations until all four models are represented, to a maximum of 4.
3. Add up to 4 realizations from either the XLE system or the WordNet pool, chosen randomly.

The intent was to give reasonable coverage of all realizer systems discussed in Section 2 without overloading the human judges. “System” here means any instantiation that emits surface realizations, including various configurations of OpenCCG (using different language models or ranking systems), and these can be multiple-output, such as an n-best list, or single-output (best-only, worst-only, etc.). Accordingly, more realizations were selected from the OpenCCG realizer because 5 different systems were being represented. Realizations were chosen randomly, rather than according to sentence types or other criteria, in order to produce a representative sample of the corpus. In total, 2,114 realizations were selected for evaluation.

Footnote 2: http://www.informatics.sussex.ac.uk/research/groups/nlp/carroll/morph.html
Footnote 3: http://www.ldc.upenn.edu/Catalog/docs/LDC2006T13/readme.txt

3.1 Human judgments

Two human judges evaluated each surface realization on two criteria: adequacy, which represents the extent to which the output conveys all and only the meaning of the reference sentence; and fluency, the extent to which it is grammatically acceptable. The realizations were presented to the judges in sets containing a reference sentence and the 1-8 outputs selected for that sentence. To aid in the evaluation of adequacy, one sentence each of leading and trailing context were displayed. Judges used the guidelines given in Figure 1, based on the scales developed by the NIST Machine Translation Evaluation Workshop. In addition to rating each realization on the two five-point scales, each judge also repaired each output which he or she did not judge to be fully adequate and fluent. An example is shown in Figure 2. These repairs resulted in new reference sentences for a substantial number of sentences. These repaired realizations were later used to calculate targeted versions of the evaluation metrics, i.e., using the repaired sentence as the reference sentence. Although targeted metrics are not fully automatic, they are of interest because they allow the evaluation algorithm to focus on what is actually wrong with the input, rather than all textual differences. Notably, targeted TER (HTER) has been shown to be more consistent with human judgments than human annotators are with one another (Snover et al., 2006).
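To make the idea of targeted metrics concrete, the sketch below scores an output against its human repair when one exists, using a plain word-level edit rate. This is only an approximation for illustration: real TER/HTER also allows block shifts and has its own tokenization and implementation, and none of the names here come from the paper.

```python
def word_edit_rate(hyp, ref):
    """Word-level Levenshtein distance divided by reference length.

    A simplified stand-in for TER: insertions, deletions and substitutions
    only, with no shift operation.
    """
    h, r = hyp.split(), ref.split()
    # dp[i][j] = minimum edits to turn h[:i] into r[:j]
    dp = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        dp[i][0] = i
    for j in range(len(r) + 1):
        dp[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(h)][len(r)] / max(len(r), 1)

def targeted_edit_rate(hyp, reference, repair=None):
    """HTER-style scoring: use the judge's repaired sentence as the
    reference when a repair was made, otherwise the original reference."""
    return word_edit_rate(hyp, repair if repair else reference)
```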
Figure 1: Rating scale and guidelines
Score | Adequacy | Fluency
5 | All the meaning of the reference | Perfectly grammatical
4 | Most of the meaning | Awkward or non-native; punctuation errors
3 | Much of the meaning | Agreement errors or minor syntactic problems
2 | Meaning substantially different | Major syntactic problems, such as missing words
1 | Meaning completely different | Completely ungrammatical

Figure 2: Example of repair
Ref.: It wasn’t clear how NL and Mr. Simmons would respond if Georgia Gulf spurns them again
Realiz.: It weren’t clear how NL and Mr. Simmons would respond if Georgia Gulf again spurns them
Repair: It wasn’t clear how NL and Mr. Simmons would respond if Georgia Gulf again spurns them

3.2 Automatic evaluation

The realizations were also evaluated using seven automatic metrics:

• IBM’s BLEU, which scores a hypothesis by counting n-gram matches with the reference sentence (Papineni et al., 2001), with smoothing as described in (Lin and Och, 2004)
• The NIST n-gram evaluation metric, similar to BLEU, but rewarding rarer n-gram matches, and using a different length penalty
• METEOR, which measures the harmonic mean of unigram precision and recall, with a higher weight for recall (Banerjee and Lavie, 2005)
• TER (Translation Edit Rate), a measure of the number of edits required to transform a hypothesis sentence into the reference sentence (Snover et al., 2006)
• TERP, an augmented version of TER which performs phrasal substitutions, stemming, and checks for synonyms, among other improvements (Snover et al., 2009)
• TERPA, an instantiation of TERP with edit weights optimized for correlation with adequacy in MT evaluations
• GTM (General Text Matcher), a generalization of the F-measure that rewards contiguous matching spans (Turian et al., 2003)

Additionally, targeted versions of BLEU, METEOR, TER, and GTM were computed by using the human-repaired outputs as the reference set. The human repair was different from the reference sentence in 193 cases (about 9% of the total), and we expected this to result in better scores and correlations with the human judgments overall.

4 Results

4.1 Human judgments

Table 1 summarizes the dataset, as well as the mean adequacy and fluency scores garnered from the human evaluation. Overall adequacy and fluency judgments were high (4.16, 3.63) for the realizer systems on average, and the best-rated realizer systems achieved mean fluency scores above 4.

4.2 Inter-annotator agreement

Inter-annotator agreement was measured using the κ-coefficient, which is commonly used to measure the extent to which annotators agree in category judgment tasks. κ is defined as (P(A) − P(E)) / (1 − P(E)), where P(A) is the observed agreement between annotators and P(E) is the probability of agreement due to chance (Carletta, 1996). Chance agreement for this data is calculated by the method discussed in Carletta’s squib. However, in previous work in MT meta-evaluation, Callison-Burch et al. (2007) assume the less strict criterion of uniform chance agreement, i.e. 1/5 for a five-point scale. They also introduce the notion of “relative” κ, which measures how often two or more judges agreed that A > B, A = B, or A < B for two outputs A and B, irrespective of the specific values given on the five-point scale; here, uniform chance agreement is taken to be 1/3. We report both absolute and relative κ in Table 2, using actual chance agreement rather than uniform chance agreement.
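A minimal sketch of the κ computation just defined, estimating actual chance agreement from each judge’s marginal distribution over the five-point scale rather than assuming the uniform 1/5; the function name and data layout are illustrative, not taken from the paper.

```python
from collections import Counter

def kappa(ratings1, ratings2):
    """Cohen's kappa: (P(A) - P(E)) / (1 - P(E)).

    `ratings1` and `ratings2` are parallel lists of category labels
    (e.g. 1-5 adequacy scores) from the two judges.  P(E) is estimated
    from the judges' marginal label frequencies, i.e. actual rather
    than uniform chance agreement.
    """
    n = len(ratings1)
    p_a = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    m1, m2 = Counter(ratings1), Counter(ratings2)
    p_e = sum((m1[c] / n) * (m2[c] / n) for c in set(m1) | set(m2))
    return (p_a - p_e) / (1 - p_e)
```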
The κ scores of 0.60 for adequacy and 0.63 for fluency across the entire dataset represent “substantial” agreement, according to the guidelines discussed in (Landis and Koch, 1977), better than is typically reported for machine translation evaluation tasks; for example, Callison-Burch et al. (2007) reported “fair” agreement, with κ = 0.281 for fluency and κ = 0.307 for adequacy (relative). Assuming the uniform chance agreement that the previously cited work adopts, our inter-annotator agreements (both absolute and relative) are still higher. This is likely due to the generally high quality of the realizations evaluated, leading to easier judgments.

4.3 Correlation with automatic evaluation

To determine how well the automatic evaluation methods described in Section 3 correlate with the human judgments, we averaged the human judgments for adequacy and fluency, respectively, for each of the rated realizations, and then computed both Pearson’s correlation coefficient and Spearman’s rank correlation coefficient between these scores and each of the metrics. Spearman’s correlation makes fewer assumptions about the distribution of the data, but may not reflect a linear relationship that is actually present. Both are frequently reported in the literature. Due to space constraints, we show only Spearman’s correlation, although the TER family scored slightly better on Pearson’s coefficient, relatively. The results for Spearman’s correlation are given in Table 3. Additionally, the average scores for adequacy and fluency were themselves averaged into a single score, following (Snover et al., 2009), and the Spearman’s correlations of each of the automatic metrics with these scores are given in Table 4. All reported correlations are significant at p < 0.001.

4.4 Bootstrap sampling of correlations

For each of the sub-corpora shown in Table 1, we computed confidence intervals for the correlations between adequacy and fluency human scores with selected automatic metrics (BLEU, HBLEU, TER, TERP, and HTER) as described in (Koehn, 2004). We sampled each sub-corpus 1000 times with replacement, and calculated correlations between the rankings induced by the human scores and those induced by the metrics for each reference sentence. We then used these coefficients to estimate the confidence interval, after excluding the top 25 and bottom 25 coefficients, following (Lin and Och, 2004). The results of this for the BLEU metric are shown in Table 5. We determined which correlations lay within the 95% confidence interval of the best performing metric in each row of Table 3; these figures are italicized.

5 Discussion

5.1 Human judgments of systems

The results for the four OpenCCG perceptron models mostly confirm those reported in (White and Rajkumar, 2009), with one exception: the B-3 model was below B-2, though the P-B (perceptron-best) model still scored highest. This may have been due to differences in the testing scenario. None of the differences in adequacy scores among the individual systems are significant, with the exception of the WordNet system. In this case, the lack of word-sense disambiguation for the substituted words results in a poor overall adequacy score (e.g., wage floor → wage story). Conversely, it scores highest for fluency, as substituting a noun or verb with a synonym does not usually introduce ungrammaticality.
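The bootstrap procedure of Section 4.4 can be sketched as follows. This is a simplified, corpus-level version (the paper computes per-reference-sentence rank correlations); the 1,000 resamples and the trimming of the 25 highest and 25 lowest coefficients follow the description above, but everything else is an assumption for illustration.

```python
import random
from scipy.stats import spearmanr

def bootstrap_correlation_ci(metric_scores, human_scores,
                             n_samples=1000, trim=25):
    """Approximate 95% confidence interval for Spearman's rho by
    resampling the data with replacement and trimming the extremes."""
    n = len(metric_scores)
    coeffs = []
    for _ in range(n_samples):
        idx = [random.randrange(n) for _ in range(n)]   # sample with replacement
        rho, _ = spearmanr([metric_scores[i] for i in idx],
                           [human_scores[i] for i in idx])
        coeffs.append(rho)
    coeffs.sort()
    trimmed = coeffs[trim:-trim]   # drop the top 25 and bottom 25 coefficients
    return trimmed[0], trimmed[-1]
```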
5.2 Correlations of human judgments with MT metrics

Of the non-human-targeted metrics evaluated, BLEU and TER/TERP demonstrate the highest correlations with the human judgments of fluency (r = 0.62, 0.64). The TER family of evaluation metrics have been observed to perform very well in MT evaluation tasks, and although the data evaluated here differs from typical MT data in some important ways, the correlation of TERP with the human judgments is substantial. In contrast with previous MT evaluations where TERP performs considerably better than TER, these scored close to equal on our data, possibly because TERP’s stem, synonym, and paraphrase matching are less useful when most of the variation is syntactic. The correlations with BLEU and METEOR are lower than those reported in (Callison-Burch et al., 2007); in that study, BLEU achieved adequacy and fluency correlations of 0.690 and 0.722, respectively, and METEOR achieved 0.701 and 0.719. The correlations for these metrics might be expected to be lower for our data, since overall quality is higher, making the metrics’ task more difficult as the outputs involve subtler differences between acceptable and unacceptable variation.

The human-targeted metrics (represented by the prefixed H in the data tables) correlated even more strongly with the human judgments, compared to the non-targeted versions. HTER demonstrated the best correlation with realizer fluency (r = 0.75). For several kinds of acceptable variation involving the rearrangement of constituents (such as dative shift), TERP gives a more reasonable score than BLEU, due to its ability to directly evaluate phrasal shifts. The following realization was rated 4.5 for fluency, and was more correctly ranked by TERP than BLEU:

(3) Ref: The deal also gave Mitsui access to a high-tech medical product.
(4) Realiz.: The deal also gave access to a high-tech medical product to Mitsui.

For each reference sentence, we compared the ranking of its realizations induced from the human scores to the ranking induced from the TERP score, and counted the rank errors by the latter, informally categorizing them by error type (see Table 7). In the 50 sentences with the highest numbers of rank errors, 17 were affected by punctuation differences, typically involving variation in comma placement. Human fluency judgments of outputs with only punctuation problems were generally high, and many realizations with commas inserted or removed were rated fully fluent by the annotators. However, TERP penalizes such insertions or deletions. Agreement errors are another frequent source of ranking errors for TERP. The human judges tended to harshly penalize sentences with number-agreement or tense errors, whereas TERP applies only a single substitution penalty for each such error. We expect that with suitable optimization of edit weights to avoid over-penalizing punctuation shifts and under-penalizing agreement errors, TERP would exhibit an even stronger correlation with human fluency judgments. None of the evaluation metrics can distinguish an acceptable movement of a word or constituent from an unacceptable movement, with only one reference sentence. A substantial source of error for both TERP and BLEU is variation in adverbial placement, as shown in (7). Similar errors are seen with prepositional phrases and some commonly-occurring temporal adverbs, which typically admit a number of variations in placement.
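The rank-error analysis above compares, for each reference sentence, the ordering of its realizations under the human scores with the ordering under TERP. A small sketch of that pairwise comparison follows; the data layout is assumed, and the sign flip accounts for TER-family scores being better when lower.

```python
from itertools import combinations

def count_rank_errors(human_scores, metric_scores, lower_is_better=True):
    """Count pairwise ordering disagreements between human scores and a
    metric's scores for the realizations of one reference sentence."""
    sign = -1.0 if lower_is_better else 1.0
    errors = 0
    for i, j in combinations(range(len(human_scores)), 2):
        h_diff = human_scores[i] - human_scores[j]
        m_diff = sign * (metric_scores[i] - metric_scores[j])
        if h_diff * m_diff < 0:   # strict disagreement on the ordering
            errors += 1
    return errors
```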
Another important example of acceptable variation which these metrics do not generally rank correctly is dative alternation:

(5) Ref. When test booklets were passed out 48 hours ahead of time, she says she copied questions in the social studies section and gave the answers to students.
(6) Realiz. When test booklets were passed out 48 hours ahead of time, she says she copied questions in the social studies section and gave students the answers.

(7) Ref. We need to clarify what exactly is wrong with it.
Realiz. | Flu. | TERP | BLEU
We need to clarify exactly what is wrong with it. | 5 | 0.1 | 0.5555
We need to clarify exactly what ’s wrong with it. | 5 | 0.2 | 0.4046
We need to clarify what , exactly , is wrong with it. | 5 | 0.2 | 0.5452
We need to clarify what is wrong with it exactly. | 4.5 | 0.1 | 0.6756
We need to clarify what exactly , is wrong with it. | 4 | 0.1 | 0.7017
We need to clarify what , exactly is wrong with it. | 4 | 0.1 | 0.7017
We needs to clarify exactly what is wrong with it. | 3 | 0.103 | 0.346

The correlations of each of the metrics with the human judgments of fluency for the realizer systems indicate at least a moderate relationship, in contrast with the results reported in (Stent et al., 2005) for paraphrase data, which found an inverse correlation for fluency, and (Cahill, 2009) for the output of a surface realizer for German, which found only a weak correlation. However, the former study employed a corpus-based paraphrase generation system rather than grammar-driven surface realizers, and the resulting paraphrases exhibited much broader variation. In Cahill’s study, the outputs of the realizer were almost always grammatically correct, and the automated evaluation metrics were ranking markedness instead of grammatical acceptability.

5.3 System-level comparisons

In order to investigate the efficacy of the metrics in ranking different realizer systems, or competing realizations from the same system generated using different ranking models, we considered seven different “systems” from the whole dataset of realizations. These consisted of five OpenCCG-based realizations (the best realization from three baseline models, and the best and the worst realization from the full perceptron model), and two XLE-based systems (the best and the worst realization, after ranking the outputs of the XLE realizer with an n-gram model). The mean of the combined adequacy and fluency scores of each of these seven systems was compared with that of every other system, resulting in 21 pairwise comparisons. Then Tukey’s HSD test was performed to determine the systems which differed significantly in terms of the average adequacy and fluency rating they received.4 The test revealed five pairwise comparisons where the scores were significantly different. Subsequently, for each of these systems, an overall system-level score for each of the MT metrics was calculated. For the five pairwise comparisons where the adequacy-fluency group means differed significantly, we checked whether the metric ranked the systems correctly. Table 8 shows the results of a pairwise comparison between the ranking induced by each evaluation metric, and the ranking induced by the human judgments. Five of the seven non-targeted metrics correctly rank more than half of the systems. NIST, METEOR, and GTM get the most comparisons right, but neither NIST nor GTM correctly rank the OpenCCG-baseline model 1 with respect to the XLE-best model.
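A sketch of the system-level check described above: given the system pairs that Tukey’s HSD flagged as significantly different (the significance test itself, e.g. via statsmodels’ pairwise_tukeyhsd, is omitted here), count how many of those pairs a metric orders the same way as the human means. All names and the data layout are assumptions for illustration, not the authors’ implementation.

```python
def metric_agreement(human_means, metric_means, significant_pairs,
                     lower_is_better=False):
    """Count how many significantly different system pairs a metric
    orders the same way as the mean human adequacy+fluency scores.

    `human_means` / `metric_means`: dicts mapping system name -> mean score.
    `significant_pairs`: iterable of (system_a, system_b) pairs found
    significantly different by Tukey's HSD.
    """
    sign = -1.0 if lower_is_better else 1.0
    correct = 0
    for a, b in significant_pairs:
        human_order = human_means[a] - human_means[b]
        metric_order = sign * (metric_means[a] - metric_means[b])
        if human_order * metric_order > 0:   # same ordering
            correct += 1
    return correct
```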
TER and TERP get two of the five comparisons correct, and they incorrectly rank two of the five OpenCCG model comparisons, as well as the comparison between the XLE-worst and OpenCCG-best systems. For the targeted metrics, HNIST is correct for all five comparisons, while neither HBLEU nor HMETEOR correctly rank all the OpenCCG models. On the other hand, HTER and HGTM incorrectly rank the XLE-best system versus OpenCCG-based models. In summary, some of the metrics get some of the rankings correct, but none of the non-targeted metrics get all of them correct. Moreover, different metrics make different ranking errors. This argues for the use of multiple metrics in comparing realizer systems.

Footnote 4: This particular test was chosen since it corrects for multiple post-hoc analyses conducted on the same data-set.

6 Conclusion

Our study suggests that although the task of evaluating the output from realizer systems differs from the task of evaluating machine translations, the automatic metrics used to evaluate MT outputs deliver moderate correlations with combined human fluency and adequacy scores when used on surface realizations. We also found that the MT-evaluation metrics are useful in evaluating different versions of the same realizer system (e.g., the various OpenCCG realization ranking models), and finding cases where a system is performing poorly. As in MT-evaluation tasks, human-targeted metrics have the highest correlations with human judgments overall. These results suggest that the MT-evaluation metrics are useful for developing surface realizers. However, the correlations are lower than those reported for MT data, suggesting that they should be used with caution, especially for cross-system evaluation, where consulting multiple metrics may yield more reliable comparisons. In our study, the targeted version of TERP correlated most strongly with human judgments of fluency. In future work, the performance of the TER family of metrics on this data might be improved by optimizing the edit weights used in computing its scores, so as to avoid over-penalizing punctuation movements or under-penalizing agreement errors, both of which were significant sources of ranking errors. Multiple reference sentences may also help mitigate these problems, and the corpus of human-repaired realizations that has resulted from our study is a step in this direction, as it provides multiple references for some cases. We expect the corpus to also prove useful for feature engineering and error analysis in developing better realization models.5

Footnote 5: The corpus can be downloaded from http://www.ling.ohio-state.edu/~mwhite/data/emnlp10/.

Acknowledgements

We thank Aoife Cahill and Tracy King for providing us with the output of the XLE generator. We also thank Chris Callison-Burch and the anonymous reviewers for their helpful comments and suggestions. This material is based upon work supported by the National Science Foundation under Grant No. 0812297.

References

Jason Baldridge. 2002. Lexically Specified Derivational Control in Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh.
S. Banerjee and A. Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72.
R. Barzilay and L. Lee. 2003. Learning to paraphrase: An unsupervised approach using multiple-sequence alignment.
  In Proceedings of HLT-NAACL 2003, pages 16–23.
Aoife Cahill. 2009. Correlating human and automatic evaluation of a German surface realiser. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 97–100, Suntec, Singapore, August. Association for Computational Linguistics.
C. Callison-Burch, M. Osborne, and P. Koehn. 2006. Re-evaluating the role of BLEU in machine translation research. In Proceedings of EACL, volume 2006, pages 249–256.
Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (Meta-) evaluation of machine translation. In StatMT ’07: Proceedings of the Second Workshop on Statistical Machine Translation, pages 136–158, Morristown, NJ, USA. Association for Computational Linguistics.
C. Callison-Burch, T. Cohn, and M. Lapata. 2008. ParaMetric: An automatic evaluation metric for paraphrasing. In Proceedings of the 22nd International Conference on Computational Linguistics, Volume 1, pages 97–104. Association for Computational Linguistics.
J. Carletta. 1996. Assessing agreement on classification tasks: the kappa statistic. Computational Linguistics, 22(2):249–254.
Dick Crouch, Mary Dalrymple, Ron Kaplan, Tracy King, John Maxwell, and Paula Newman. 2008. XLE documentation. Technical report, Palo Alto Research Center.
Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing.
J.R. Landis and G.G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, 33(1):159–174.
Chin-Yew Lin and Franz Josef Och. 2004. Orange: a method for evaluating automatic evaluation metrics for machine translation. In COLING ’04: Proceedings of the 20th International Conference on Computational Linguistics, page 501, Morristown, NJ, USA. Association for Computational Linguistics.
K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. Technical report, IBM Research.
E. Reiter and A. Belz. 2009. An investigation into the validity of some metrics for automatically evaluating natural language generation systems. Computational Linguistics, 35(4):529–558.
Stefan Riezler, Tracy H. King, Ronald M. Kaplan, Richard Crouch, John T. Maxwell III, and Mark Johnson. 2002. Parsing the Wall Street Journal using a lexical-functional grammar and discriminative estimation techniques. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 271–278, Philadelphia, Pennsylvania, USA, July. Association for Computational Linguistics.
Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the Association for Machine Translation in the Americas, pages 223–231.
M. Snover, N. Madnani, B.J. Dorr, and R. Schwartz. 2009. Fluency, adequacy, or HTER?: exploring different human judgments with a tunable MT metric. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 259–268. Association for Computational Linguistics.
Amanda Stent, Matthew Marge, and Mohit Singhai. 2005. Evaluating evaluation methods for generation in the presence of variation. In Proceedings of CICLing.
J.P. Turian, L. Shen, and I.D. Melamed. 2003. Evaluation of machine translation and its evaluation. In Proceedings of MT Summit IX.
Michael White and Rajakrishnan Rajkumar. 2009. Perceptron reranking for CCG realization.
  In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 410–419, Singapore, August. Association for Computational Linguistics.
Michael White. 2006. Efficient Realization of Coordinate Structures in Combinatory Categorial Grammar. Research on Language and Computation, 4(1):39–75.

Table 1: Descriptive statistics

Table 2: Corpora-wise inter-annotator agreement (absolute and relative κ values shown)

Table 3: Spearman’s correlations among NIST (N), BLEU (B), METEOR (M), GTM (G), TERp (TP), TERpa (TA), TER (T), human variants (HN, HB, HM, HT, HG) and human judgments (-Adq: adequacy and -Flu: Fluency); scores which fall within the 95% CI of the best are italicized.

Table 4: Spearman’s correlations among NIST (N), BLEU (B), METEOR (M), GTM (G), TERp (TP), TERpa (TA), TER (T), human variants (HN, HB, HM, HT, HG) and human judgments (combined adequacy and fluency scores)

Table 5: Spearman’s correlation analysis (bootstrap sampling) of the BLEU scores of various systems with human adequacy and fluency scores

Table 6: Spearman’s correlations of NIST (N), BLEU (B), METEOR (M), GTM (G), TERp (TP), TERpa (TA), human variants (HT, HN, HB, HM, HG), and individual human judgments (combined adq. and flu. scores)

Table 7: Factors influencing TERP ranking errors for 50 worst-ranked realization groups
Factor | Count
Punctuation | 17
Adverbial shift | 16
Agreement | 14
Other shifts | 8
Conjunct rearrangement | 8
Complementizer ins/del | 5
PP shift | 4

Table 8: Metric-wise ranking performance in terms of agreement with a ranking induced by combined adequacy and fluency scores; each metric gets a score out of 5 (i.e. the number of system-level comparisons that emerged significant as per the Tukey’s HSD test). Legend: Perceptron Best (PB); Perceptron Worst (PW); XLE Best (XB); XLE Worst (XW); OpenCCG baseline models 1 to 3 (C1 ... C3)
2 0.24496259 22 emnlp-2010-Automatic Evaluation of Translation Quality for Distant Language Pairs
Author: Hideki Isozaki ; Tsutomu Hirao ; Kevin Duh ; Katsuhito Sudoh ; Hajime Tsukada
Abstract: Automatic evaluation of Machine Translation (MT) quality is essential to developing highquality MT systems. Various evaluation metrics have been proposed, and BLEU is now used as the de facto standard metric. However, when we consider translation between distant language pairs such as Japanese and English, most popular metrics (e.g., BLEU, NIST, PER, and TER) do not work well. It is well known that Japanese and English have completely different word orders, and special care must be paid to word order in translation. Otherwise, translations with wrong word order often lead to misunderstanding and incomprehensibility. For instance, SMT-based Japanese-to-English translators tend to translate ‘A because B’ as ‘B because A.’ Thus, word order is the most important problem for distant language translation. However, conventional evaluation metrics do not significantly penalize such word order mistakes. Therefore, locally optimizing these metrics leads to inadequate translations. In this paper, we propose an automatic evaluation metric based on rank correlation coefficients modified with precision. Our meta-evaluation of the NTCIR-7 PATMT JE task data shows that this metric outperforms conventional metrics.
3 0.235415 63 emnlp-2010-Improving Translation via Targeted Paraphrasing
Author: Philip Resnik ; Olivia Buzek ; Chang Hu ; Yakov Kronrod ; Alex Quinn ; Benjamin B. Bederson
Abstract: Targeted paraphrasing is a new approach to the problem of obtaining cost-effective, reasonable quality translation that makes use of simple and inexpensive human computations by monolingual speakers in combination with machine translation. The key insight behind the process is that it is possible to spot likely translation errors with only monolingual knowledge of the target language, and it is possible to generate alternative ways to say the same thing (i.e. paraphrases) with only monolingual knowledge of the source language. Evaluations demonstrate that this approach can yield substantial improvements in translation quality.
4 0.22758447 89 emnlp-2010-PEM: A Paraphrase Evaluation Metric Exploiting Parallel Texts
Author: Chang Liu ; Daniel Dahlmeier ; Hwee Tou Ng
Abstract: We present PEM, the first fully automatic metric to evaluate the quality of paraphrases, and consequently, that of paraphrase generation systems. Our metric is based on three criteria: adequacy, fluency, and lexical dissimilarity. The key component in our metric is a robust and shallow semantic similarity measure based on pivot language N-grams that allows us to approximate adequacy independently of lexical similarity. Human evaluation shows that PEM achieves high correlation with human judgments.
5 0.075765803 29 emnlp-2010-Combining Unsupervised and Supervised Alignments for MT: An Empirical Study
Author: Jinxi Xu ; Antti-Veikko Rosti
Abstract: Word alignment plays a central role in statistical MT (SMT) since almost all SMT systems extract translation rules from word aligned parallel training data. While most SMT systems use unsupervised algorithms (e.g. GIZA++) for training word alignment, supervised methods, which exploit a small amount of human-aligned data, have become increasingly popular recently. This work empirically studies the performance of these two classes of alignment algorithms and explores strategies to combine them to improve overall system performance. We used two unsupervised aligners, GIZA++ and HMM, and one supervised aligner, ITG, in this study. To avoid language and genre specific conclusions, we ran experiments on test sets consisting of two language pairs (Chinese-to-English and Arabicto-English) and two genres (newswire and weblog). Results show that the two classes of algorithms achieve the same level of MT perfor- mance. Modest improvements were achieved by taking the union of the translation grammars extracted from different alignments. Significant improvements (around 1.0 in BLEU) were achieved by combining outputs of different systems trained with different alignments. The improvements are consistent across languages and genres.
6 0.066820681 47 emnlp-2010-Example-Based Paraphrasing for Improved Phrase-Based Statistical Machine Translation
7 0.066809297 78 emnlp-2010-Minimum Error Rate Training by Sampling the Translation Lattice
8 0.061011154 5 emnlp-2010-A Hybrid Morpheme-Word Representation for Machine Translation of Morphologically Rich Languages
9 0.060558356 18 emnlp-2010-Assessing Phrase-Based Translation Models with Oracle Decoding
10 0.060341295 99 emnlp-2010-Statistical Machine Translation with a Factorized Grammar
11 0.059714727 35 emnlp-2010-Discriminative Sample Selection for Statistical Machine Translation
12 0.05061616 13 emnlp-2010-A Simple Domain-Independent Probabilistic Approach to Generation
13 0.049840067 50 emnlp-2010-Facilitating Translation Using Source Language Paraphrase Lattices
14 0.040192451 105 emnlp-2010-Title Generation with Quasi-Synchronous Grammar
15 0.038051561 7 emnlp-2010-A Mixture Model with Sharing for Lexical Semantics
16 0.0380207 39 emnlp-2010-EMNLP 044
17 0.035636976 114 emnlp-2010-Unsupervised Parse Selection for HPSG
18 0.035534132 74 emnlp-2010-Learning the Relative Usefulness of Questions in Community QA
19 0.035528988 95 emnlp-2010-SRL-Based Verb Selection for ESL
20 0.035297889 65 emnlp-2010-Inducing Probabilistic CCG Grammars from Logical Form with Higher-Order Unification
topicId topicWeight
[(0, 0.175), (1, -0.161), (2, -0.121), (3, 0.055), (4, -0.08), (5, 0.175), (6, -0.059), (7, -0.029), (8, 0.277), (9, -0.133), (10, -0.08), (11, -0.065), (12, 0.119), (13, 0.066), (14, -0.055), (15, 0.004), (16, 0.135), (17, -0.151), (18, -0.086), (19, -0.374), (20, -0.094), (21, -0.197), (22, 0.172), (23, -0.102), (24, 0.107), (25, -0.096), (26, 0.009), (27, 0.188), (28, -0.102), (29, 0.05), (30, -0.029), (31, -0.013), (32, -0.03), (33, 0.076), (34, 0.038), (35, -0.016), (36, -0.038), (37, 0.002), (38, 0.07), (39, -0.003), (40, -0.026), (41, -0.065), (42, -0.045), (43, 0.046), (44, 0.055), (45, 0.006), (46, -0.035), (47, 0.05), (48, 0.036), (49, -0.039)]
simIndex simValue paperId paperTitle
same-paper 1 0.97150892 52 emnlp-2010-Further Meta-Evaluation of Broad-Coverage Surface Realization
Author: Dominic Espinosa ; Rajakrishnan Rajkumar ; Michael White ; Shoshana Berleant
Abstract: We present the first evaluation of the utility of automatic evaluation metrics on surface realizations of Penn Treebank data. Using outputs of the OpenCCG and XLE realizers, along with ranked WordNet synonym substitutions, we collected a corpus of generated surface realizations. These outputs were then rated and post-edited by human annotators. We evaluated the realizations using seven automatic metrics, and analyzed correlations obtained between the human judgments and the automatic scores. In contrast to previous NLG meta-evaluations, we find that several of the metrics correlate moderately well with human judgments of both adequacy and fluency, with the TER family performing best overall. We also find that all of the metrics correctly predict more than half of the significant systemlevel differences, though none are correct in all cases. We conclude with a discussion ofthe implications for the utility of such metrics in evaluating generation in the presence of variation. A further result of our research is a corpus of post-edited realizations, which will be made available to the research community. 1 Introduction and Background In building surface-realization systems for natural language generation, there is a need for reliable automated metrics to evaluate the output. Unlike in parsing, where there is usually a single goldstandard parse for a sentence, in surface realization there are usually many grammatically-acceptable ways to express the same concept. This parallels the task of evaluating machine-translation (MT) systems: for a given segment in the source language, 564 there are usually several acceptable translations into the target language. As human evaluation of translation quality is time-consuming and expensive, a number of automated metrics have been developed to evaluate the quality of MT outputs. In this study, we investigate whether the metrics developed for MT evaluation tasks can be used to reliably evaluate the outputs of surface realizers, and which of these metrics are best suited to this task. A number of surface realizers have been developed using the Penn Treebank (PTB), and BLEU scores are often reported in the evaluations of these systems. But how useful is BLEU in this context? The original BLEU study (Papineni et al., 2001) scored MT outputs, which are of generally lower quality than grammar-based surface realizations. Furthermore, even for MT systems, the usefulness of BLEU has been called into question (Callison-Burch et al., 2006). BLEU is designed to work with multiple reference sentences, but in treebank realization, there is only a single reference sentence available for comparison. A few other studies have investigated the use of such metrics in evaluating the output of NLG systems, notably (Reiter and Belz, 2009) and (Stent et al., 2005). The former examined the performance of BLEU and ROUGE with computer-generated weather reports, finding a moderate correlation with human fluency judgments. The latter study applied several MT metrics to paraphrase data from Barzilay and Lee’s corpus-based system (Barzilay and Lee, 2003), and found moderate correlations with human adequacy judgments, but little correlation with fluency judgments. 
Cahill (2009) examined the performance of six MT metrics (including BLEU) in evaluating the output of a LFG-based surface realizer for ProceMedITin,g Ms oasfs thaceh 2u0se1t0ts C,o UnSfAer,e n9c-e1 on O Ectmobpeir ic 2a0l1 M0.e ?tc ho2d0s10 in A Nsastoucira tlio Lnan fogru Cagoem Ppruotcaetisosninagl, L pinag eusis 5t6ic4s–574, German, also finding only weak correlations with the human judgments. To study the usefulness of evaluation metrics such as BLEU on the output of grammar-based surface realizers used with the PTB, we assembled a corpus of surface realizations from three different realizers operating on Section 00 of the PTB. Two human judges evaluated the adequacy and fluency of each of the realizations with respect to the reference sentence. The realizations were then scored with a number of automated evaluation metrics developed for machine translation. In order to investigate the correlation of targeted metrics with human evaluations, and gather other acceptable realizations for future evaluations, the judges manually repaired each unacceptable realization during the rating task. In contrast to previous NLG meta-evaluations, we found that several of the metrics correlate moderately well with human judgments of both adequacy and fluency, with the TER family performing best. However, when looking at statistically significant system-level differences in human judgments, we found that some of the metrics get some of the rankings correct, but none get them all correct, with different metrics making different ranking errors. This suggests that multiple metrics should be routinely consulted when comparing realizer systems. Overall, our methodology is similar to that of previous MT meta-evaluations, in that we collected human judgments of system outputs, and compared these scores with those assigned by automatic metrics. A recent alternative approach to paraphrase evaluation is ParaMetric (Callison-Burch et al., 2008); however, it requires a corpus of annotated (aligned) paraphrases (which does not yet exist for PTB data), and is arguably focused more on paraphrase analysis than paraphrase generation. The plan of the paper is as follows: Section 2 discusses the preparation of the corpus of surface realizations. Section 3 describes the human evaluation task and the automated metrics applied. Sections 4 and 5 present and discuss the results of these evaluations. We conclude with some general observations about automatic evaluation of surface realizers, and some directions for further research. 565 2 Data Preparation We collected realizations of the sentences in Section 00 of the WSJ corpus from the following three sources: 1. OpenCCG, a CCG-based chart realizer (White, 2006) 2. The XLE Generator, a LFG-based system developed by Xerox PARC (Crouch et al., 2008) 3. WordNet synonym substitutions, to investigate how differences in lexical choice compare to grammar-based variation.1 Although all three systems used Section 00 of the PTB, they were applied with various parameters (e.g., language models, multiple-output versus single-output) and on different input structures. Accordingly, our study does not compare OpenCCG to XLE, or either of these to the WordNet system. 2.1 OpenCCG realizations OpenCCG is an open source parsing/realization library with multimodal extensions to CCG (Baldridge, 2002). The OpenCCG chart realizer takes logical forms as input and produces strings by combining signs for lexical items. Alternative realizations are scored using integrated n-gram and perceptron models. 
For robustness, fragments are greedily assembled when necessary. Realizations were generated from 1,895 gold standard logical forms, created by constrained parsing of development-section derivations. The following OpenCCG models (which differ essentially in the way the output is ranked) were used: 1. Baseline 1: Output ranked by a trigram word model 2. Baseline 2: Output ranked using three language models (3-gram words 3-gram words with named entity class replacement factored language model of words, POS tags and CCG supertags) + + 1Not strictly surface realizations, since they do not involve an abstract input specification, but for simplicity we refer to them as realizations throughout. 3. Baseline 3: Perceptron with syntax features and the three LMs mentioned above 4. Perceptron full-model: n-best realizations ranked using perceptron with syntax features and the three n-gram models, as well as discriminative n-grams The perceptron model was trained on sections 0221 of the CCGbank, while a grammar extracted from section 00-21 was used for realization. In addition, oracle supertags were inserted into the chart during realization. The purpose of such a non-blind testing strategy was to evaluate the quality of the output produced by the statistical ranking models in isolation, rather than focusing on grammar coverage, and avoid the problems associated with lexical smoothing, i.e. lexical categories in the development section not being present in the training section. To enrich the variation in the generated realizations, dative-alternation was enforced during realization by ensuring alternate lexical categories of the verb in question, as in the following example: (1) the executives gave [the chefs] [a standing ovation] (2) the executives gave [a standing ovation] [to the chefs] 2.2 XLE realizations The corpus of realizations generated by the XLE system contained 42,527 surface realizations of approximately 1,421 section 00 sentences (an average of 30 per sentence), initially unranked. The LFG f-structures used as input to the XLE generator were derived from automatic parses, as described in (Riezler et al., 2002). The realizations were first tokenized using Penn Treebank conventions, then ranked using perplexities calculated from the same trigram word model used with OpenCCG. For each sentence, the top 4 realizations were selected. The XLE generator provides an interesting point of comparison to OpenCCG as it uses a manuallydeveloped grammar with inputs that are less abstract but potentially noisier, as they are derived from automatic parses rather than gold-standard ones. 566 2.3 WordNet synonymizer To produce an additional source of variation, the nouns and verbs of the sentences in section 00 of the PTB were replaced with all of their WordNet synonyms. Verb forms were generated using verb stems, part-of-speech tags, and the morphg tool.2 These substituted outputs were then filtered using the n-gram data which Google Inc. has made available.3 Those without any 5-gram matches centered on the substituted word (or 3-gram matches, in the case of short sentences) were eliminated. 3 Evaluation From the data sources described in the previous sec- tion, a corpus of realizations to be evaluated by the human judges was constructed by randomly choosing 305 sentences from section 00, then selecting surface realizations of these sentences using the following algorithm: 1. Add OpenCCG’s best-scored realization. 2. Add other OpenCCG realizations until all four models are represented, to a maximum of 4. 
3. Add up to 4 realizations from either the XLE system or the WordNet pool, chosen randomly. The intent was to give reasonable coverage of all realizer systems discussed in Section 2 without overloading the human judges. “System” here means any instantiation that emits surface realizations, including various configurations of OpenCCG (using different language models or ranking systems), and these can be multiple-output, such as an n-best list, or single-output (best-only, worst-only, etc.). Accordingly, more realizations were selected from the OpenCCG realizer because 5 different systems were being represented. Realizations were chosen randomly, rather than according to sentence types or other criteria, in order to produce a representative sample of the corpus. In total, 2,114 realizations were selected for evaluation. 2http : //www. informatics . sussex. ac .uk/ re search/ groups / nlp / carro l /morph .html l 3http : //www . ldc . upenn .edu/Catalog/docs/ LDC2 0 0 6T 13 / readme .txt 3.1 Human judgments Two human judges evaluated each surface realization on two criteria: adequacy, which represents the extent to which the output conveys all and only the meaning of the reference sentence; and fluency, the extent to which it is grammatically acceptable. The realizations were presented to the judges in sets containing a reference sentence and the 1-8 outputs selected for that sentence. To aid in the evaluation of adequacy, one sentence each of leading and trailing context were displayed. Judges used the guidelines given in Figure 1, based on the scales developed by the NIST Machine Translation Evaluation Workshop. In addition to rating each realization on the two five-point scales, each judge also repaired each output which he or she did not judge to be fully adequate and fluent. An example is shown in Figure 2. These repairs resulted in new reference sentences for a substantial number of sentences. These repaired realizations were later used to calculate targeted versions of the evaluation metrics, i.e., using the repaired sentence as the reference sentence. Although targeted metrics are not fully automatic, they are of interest because they allow the evaluation algorithm to focus on what is actually wrong with the input, rather than all textual differences. Notably, targeted TER (HTER) has been shown to be more consistent with human judgments than human annotators are with one another (Snover et al., 2006). 
3.2 Automatic evaluation The realizations were also evaluated using seven automatic metrics: • IBM’s BLEU, which scores a hypothesis by counting n-gram matches with the reference sentence (Papineni et al., 2001), with smoothing as described in (Lin and Och, 2004) • • • • • • The NIST n-gram evaluation metric, similar to BLEU, but rewarding rarer n-gram matches, and using a different length penalty METEOR, which measures the harmonic mean of unigram precision and recall, with a higher weight for recall (Banerjee and Lavie, 2005) 567 TER (Translation Edit Rate), a measure of the number of edits required to transform a hypothesis sentence into the reference sentence (Snover et al., 2006) TERP, an augmented version of TER which performs phrasal substitutions, stemming, and checks for synonyms, among other improvements (Snover et al., 2009) TERPA, an instantiation of TERP with edit weights optimized for correlation with adequacy in MT evaluations GTM (General Text Matcher), a generaliza- tion of the F-measure that rewards contiguous matching spans (Turian et al., 2003) Additionally, targeted versions of BLEU, METEOR, TER, and GTM were computed by using the human-repaired outputs as the reference set. The human repair was different from the reference sentence in 193 cases (about 9% of the total), and we expected this to result in better scores and correlations with the human judgments overall. 4 Results 4.1 Human judgments Table 1 summarizes the dataset, as well as the mean adequacy and fluency scores garnered from the human evaluation. Overall adequacy and fluency judgments were high (4.16, 3.63) for the realizer systems on average, and the best-rated realizer systems achieved mean fluency scores above 4. 4.2 Inter-annotator agreement Inter-annotator agreement was measured using the κ-coefficient, which is commonly used to measure the extent to which annotators agree in category P(1A−)P−(PE()E), judgment tasks. κ is defined as where P(A) is the observed agreement 1 b−etPw(eEe)n annotators and P(E) is the probability of agreement due to chance (Carletta, 1996). Chance agreement for this data is calculated by the method discussed in Carletta’s squib. However, in previous work in MT meta-evaluation, Callison-Burch et al. (2007), assume the less strict criterion of uniform chance agreement, i.e. for a five-point scale. They also 51 Score Adequacy Fluency 5All the meaning of the referencePerfectly grammatical 4 Most of the meaning Awkward or non-native; punctuation errors 3 Much of the meaning Agreement errors or minor syntactic problems 2 Meaning substantially different Major syntactic problems, such as missing words 1 Meaning completely different Completely ungrammatical Figure Ref. Realiz. Repair 1: Rating scale and guidelines It wasn’t clear how NL and Mr. Simmons would respond if Georgia Gulf spurns them again It weren’t clear how NL and Mr. Simmons would respond if Georgia Gulf again spurns them It wasn’t clear how NL and Mr. Simmons would respond if Georgia Gulf again spurns them Figure 2: Example of repair introduce the notion of “relative” κ, which measures how often two or more judges agreed that A > B, A = B, or A < B for two outputs A and B, irrespective of the specific values given on the five-point scale; here, uniform chance agreement is taken to be We report both absolute and relative κ in Table 2, using actual chance agreement rather than uniform chance agreement. 31. 
The κ scores of 0.60 for adequacy and 0.63 for fluency across the entire dataset represent "substantial" agreement, according to the guidelines discussed in (Landis and Koch, 1977), better than is typically reported for machine translation evaluation tasks; for example, Callison-Burch et al. (2007) reported "fair" agreement, with κ = 0.281 for fluency and κ = 0.307 for adequacy (relative). Assuming the uniform chance agreement that the previously cited work adopts, our inter-annotator agreements (both absolute and relative) are still higher. This is likely due to the generally high quality of the realizations evaluated, leading to easier judgments.

4.3 Correlation with automatic evaluation
To determine how well the automatic evaluation methods described in Section 3 correlate with the human judgments, we averaged the human judgments for adequacy and fluency, respectively, for each of the rated realizations, and then computed both Pearson's correlation coefficient and Spearman's rank correlation coefficient between these scores and each of the metrics. Spearman's correlation makes fewer assumptions about the distribution of the data, but may not reflect a linear relationship that is actually present. Both are frequently reported in the literature. Due to space constraints, we show only Spearman's correlation, although the TER family scored slightly better on Pearson's coefficient in relative terms. The results for Spearman's correlation are given in Table 3. Additionally, the average scores for adequacy and fluency were themselves averaged into a single score, following (Snover et al., 2009), and the Spearman's correlation of each of the automatic metrics with these scores is given in Table 4. All reported correlations are significant at p < 0.001.

4.4 Bootstrap sampling of correlations
For each of the sub-corpora shown in Table 1, we computed confidence intervals for the correlations between the adequacy and fluency human scores and selected automatic metrics (BLEU, HBLEU, TER, TERP, and HTER), as described in (Koehn, 2004). We sampled each sub-corpus 1000 times with replacement, and calculated correlations between the rankings induced by the human scores and those induced by the metrics for each reference sentence. We then used these coefficients to estimate the confidence interval, after excluding the top 25 and bottom 25 coefficients, following (Lin and Och, 2004). The results of this procedure for the BLEU metric are shown in Table 5. We determined which correlations lay within the 95% confidence interval of the best-performing metric in each row of Table 3; these figures are italicized.

5 Discussion

5.1 Human judgments of systems
The results for the four OpenCCG perceptron models mostly confirm those reported in (White and Rajkumar, 2009), with one exception: the B-3 model was below B-2, though the P-B (perceptron-best) model still scored highest. This may have been due to differences in the testing scenario. None of the differences in adequacy scores among the individual systems are significant, with the exception of the WordNet system. In this case, the lack of word-sense disambiguation for the substituted words results in a poor overall adequacy score (e.g., wage floor → wage story). Conversely, it scores highest for fluency, as substituting a noun or verb with a synonym does not usually introduce ungrammaticality.
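The bootstrap procedure of Section 4.4 can be reproduced in outline with standard libraries. The sketch below is an illustrative reimplementation under stated assumptions (scipy's Spearman implementation, item-level resampling with replacement, and a 95% interval obtained by discarding the 25 highest and 25 lowest of 1000 resampled coefficients); it is not the exact script used in the study, and the input lists are placeholders for the real score files.

```python
import random
from scipy.stats import spearmanr

def bootstrap_spearman_ci(human_scores, metric_scores, n_samples=1000, trim=25):
    """Bootstrap a confidence interval for the Spearman correlation between
    human judgments and an automatic metric (given as parallel lists)."""
    assert len(human_scores) == len(metric_scores)
    indices = list(range(len(human_scores)))
    coefficients = []

    for _ in range(n_samples):
        # Resample items with replacement.
        sample = [random.choice(indices) for _ in indices]
        h = [human_scores[i] for i in sample]
        m = [metric_scores[i] for i in sample]
        rho, _ = spearmanr(h, m)
        coefficients.append(rho)

    coefficients.sort()
    # Drop the top `trim` and bottom `trim` coefficients for an approximate 95% CI.
    return coefficients[trim], coefficients[-trim - 1]
```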
5.2 Correlations of human judgments with MT metrics
Of the non-human-targeted metrics evaluated, BLEU and TER/TERP demonstrate the highest correlations with the human judgments of fluency (r = 0.62, 0.64). The TER family of evaluation metrics has been observed to perform very well in MT evaluation tasks, and although the data evaluated here differs from typical MT data in some important ways, the correlation of TERP with the human judgments is substantial. In contrast with previous MT evaluations where TERP performs considerably better than TER, the two scored close to equal on our data, possibly because TERP's stem, synonym, and paraphrase matching are less useful when most of the variation is syntactic.

The correlations with BLEU and METEOR are lower than those reported in (Callison-Burch et al., 2007); in that study, BLEU achieved adequacy and fluency correlations of 0.690 and 0.722, respectively, and METEOR achieved 0.701 and 0.719. The correlations for these metrics might be expected to be lower for our data, since overall quality is higher, making the metrics' task more difficult as the outputs involve subtler differences between acceptable and unacceptable variation. The human-targeted metrics (represented by the H prefix in the data tables) correlated even more strongly with the human judgments than the non-targeted versions. HTER demonstrated the best correlation with realizer fluency (r = 0.75).

For several kinds of acceptable variation involving the rearrangement of constituents (such as dative shift), TERP gives a more reasonable score than BLEU, due to its ability to directly evaluate phrasal shifts. The following realization was rated 4.5 for fluency, and was more correctly ranked by TERP than by BLEU:

(3) Ref: The deal also gave Mitsui access to a high-tech medical product.
(4) Realiz.: The deal also gave access to a high-tech medical product to Mitsui.

For each reference sentence, we compared the ranking of its realizations induced from the human scores to the ranking induced from the TERP score, and counted the rank errors made by the latter, informally categorizing them by error type (see Table 7). In the 50 sentences with the highest numbers of rank errors, 17 were affected by punctuation differences, typically involving variation in comma placement. Human fluency judgments of outputs with only punctuation problems were generally high, and many realizations with commas inserted or removed were rated fully fluent by the annotators. However, TERP penalizes such insertions or deletions. Agreement errors are another frequent source of ranking errors for TERP. The human judges tended to harshly penalize sentences with number-agreement or tense errors, whereas TERP applies only a single substitution penalty for each such error. We expect that with suitable optimization of edit weights to avoid over-penalizing punctuation shifts and under-penalizing agreement errors, TERP would exhibit an even stronger correlation with human fluency judgments.

None of the evaluation metrics can distinguish an acceptable movement of a word or constituent from an unacceptable movement with only one reference sentence. A substantial source of error for both TERP and BLEU is variation in adverbial placement, as shown in (7). Similar errors are seen with prepositional phrases and some commonly-occurring temporal adverbs, which typically admit a number of variations in placement.
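To see why a shift-aware edit metric and an n-gram metric can rank such pairs differently, the sketch below scores the realization in (4) against the reference in (3) using NLTK's smoothed sentence-level BLEU and a simplified, shift-free approximation of TER (word-level substitutions, insertions, and deletions only). It is illustrative only: it does not reproduce the official TER/TERP implementations, their shift handling, or the scores reported in the tables.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def simple_ter(hyp, ref):
    """Word-level edit distance divided by reference length: a shift-free
    approximation of TER, for illustration only."""
    h, r = hyp.split(), ref.split()
    # Standard Levenshtein dynamic program over word tokens.
    dp = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        dp[i][0] = i
    for j in range(len(r) + 1):
        dp[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)  # match/substitution
    return dp[len(h)][len(r)] / len(r)

ref = "The deal also gave Mitsui access to a high-tech medical product ."
hyp = "The deal also gave access to a high-tech medical product to Mitsui ."

smooth = SmoothingFunction().method1
print("Smoothed BLEU:", sentence_bleu([ref.split()], hyp.split(), smoothing_function=smooth))
print("Approx. TER:", simple_ter(hyp, ref))
```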
Another important example of acceptable variation which these metrics do not generally rank correctly is dative alternation:

(5) Ref. When test booklets were passed out 48 hours ahead of time, she says she copied questions in the social studies section and gave the answers to students.
(6) Realiz. When test booklets were passed out 48 hours ahead of time, she says she copied questions in the social studies section and gave students the answers.

The adverbial-placement group in (7), with each realization's fluency, TERP, and BLEU scores, illustrates the problem:

(7) Ref. We need to clarify what exactly is wrong with it.
Realization | Flu. | TERP | BLEU
We need to clarify exactly what is wrong with it. | 5 | 0.1 | 0.5555
We need to clarify exactly what 's wrong with it. | 5 | 0.2 | 0.4046
We need to clarify what , exactly , is wrong with it. | 5 | 0.2 | 0.5452
We need to clarify what is wrong with it exactly. | 4.5 | 0.1 | 0.6756
We need to clarify what exactly , is wrong with it. | 4 | 0.1 | 0.7017
We need to clarify what , exactly is wrong with it. | 4 | 0.1 | 0.7017
We needs to clarify exactly what is wrong with it. | 3 | 0.103 | 0.346

The correlations of each of the metrics with the human judgments of fluency for the realizer systems indicate at least a moderate relationship, in contrast with the results reported in (Stent et al., 2005) for paraphrase data, which found an inverse correlation for fluency, and (Cahill, 2009) for the output of a surface realizer for German, which found only a weak correlation. However, the former study employed a corpus-based paraphrase generation system rather than grammar-driven surface realizers, and the resulting paraphrases exhibited much broader variation. In Cahill's study, the outputs of the realizer were almost always grammatically correct, and the automated evaluation metrics were ranking markedness instead of grammatical acceptability.

5.3 System-level comparisons
In order to investigate the efficacy of the metrics in ranking different realizer systems, or competing realizations from the same system generated using different ranking models, we considered seven different "systems" from the whole dataset of realizations. These consisted of five OpenCCG-based realizations (the best realization from three baseline models, and the best and the worst realization from the full perceptron model), and two XLE-based systems (the best and the worst realization, after ranking the outputs of the XLE realizer with an n-gram model). The mean of the combined adequacy and fluency scores of each of these seven systems was compared with that of every other system, resulting in 21 pairwise comparisons. Tukey's HSD test was then performed to determine the systems which differed significantly in terms of the average adequacy and fluency rating they received; this test was chosen because it corrects for multiple post-hoc comparisons conducted on the same dataset. The test revealed five pairwise comparisons where the scores were significantly different. Subsequently, for each of these systems, an overall system-level score for each of the MT metrics was calculated. For the five pairwise comparisons where the adequacy-fluency group means differed significantly, we checked whether the metric ranked the systems correctly. Table 8 shows the results of a pairwise comparison between the ranking induced by each evaluation metric and the ranking induced by the human judgments. Five of the seven non-targeted metrics correctly rank more than half of the systems. NIST, METEOR, and GTM get the most comparisons right, but neither NIST nor GTM correctly ranks the OpenCCG baseline model 1 with respect to the XLE-best model.
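The system-level significance testing described above can be reproduced in outline with standard statistical libraries. The sketch below assumes the per-realization combined adequacy-fluency scores are available as a flat array with a parallel array of system labels, and uses statsmodels' Tukey HSD implementation; the system names and the randomly generated toy scores are hypothetical and stand in for the actual rating data.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical combined adequacy+fluency scores for a few "systems".
rng = np.random.default_rng(0)
systems = ["PB", "PW", "XB", "XW", "C1"]
scores, labels = [], []
for name, mean in zip(systems, [4.6, 4.0, 4.3, 3.7, 4.2]):
    draws = rng.normal(loc=mean, scale=0.5, size=100).clip(1, 5)
    scores.extend(draws)
    labels.extend([name] * len(draws))

# Tukey's HSD corrects for the multiple pairwise post-hoc comparisons.
result = pairwise_tukeyhsd(endog=np.array(scores), groups=np.array(labels), alpha=0.05)
print(result.summary())  # one row per system pair, with a reject/accept decision
```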
TER and TERP get two of the five comparisons correct, and they incorrectly rank two of the five OpenCCG model comparisons, as well as the comparison between the XLE-worst and OpenCCG-best systems. For the targeted metrics, HNIST is correct for all five comparisons, while neither HBLEU nor HMETEOR correctly ranks all the OpenCCG models. On the other hand, HTER and HGTM incorrectly rank the XLE-best system versus the OpenCCG-based models. In summary, some of the metrics get some of the rankings correct, but none of the non-targeted metrics get all of them correct. Moreover, different metrics make different ranking errors. This argues for the use of multiple metrics in comparing realizer systems.

6 Conclusion
Our study suggests that although the task of evaluating the output from realizer systems differs from the task of evaluating machine translations, the automatic metrics used to evaluate MT outputs deliver moderate correlations with combined human fluency and adequacy scores when used on surface realizations. We also found that the MT-evaluation metrics are useful in evaluating different versions of the same realizer system (e.g., the various OpenCCG realization ranking models), and in finding cases where a system is performing poorly. As in MT-evaluation tasks, human-targeted metrics have the highest correlations with human judgments overall. These results suggest that the MT-evaluation metrics are useful for developing surface realizers. However, the correlations are lower than those reported for MT data, suggesting that they should be used with caution, especially for cross-system evaluation, where consulting multiple metrics may yield more reliable comparisons. In our study, the targeted version of TERP correlated most strongly with human judgments of fluency. In future work, the performance of the TER family of metrics on this data might be improved by optimizing the edit weights used in computing its scores, so as to avoid over-penalizing punctuation movements or under-penalizing agreement errors, both of which were significant sources of ranking errors. Multiple reference sentences may also help mitigate these problems, and the corpus of human-repaired realizations that has resulted from our study is a step in this direction, as it provides multiple references for some cases. We expect the corpus to also prove useful for feature engineering and error analysis in developing better realization models. The corpus can be downloaded from http://www.ling.ohio-state.edu/~mwhite/data/emnlp10/.

Acknowledgements
We thank Aoife Cahill and Tracy King for providing us with the output of the XLE generator. We also thank Chris Callison-Burch and the anonymous reviewers for their helpful comments and suggestions. This material is based upon work supported by the National Science Foundation under Grant No. 0812297.

References
Jason Baldridge. 2002. Lexically Specified Derivational Control in Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh.
S. Banerjee and A. Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72.
R. Barzilay and L. Lee. 2003. Learning to paraphrase: An unsupervised approach using multiple-sequence alignment. In Proceedings of HLT-NAACL 2003, pages 16–23.
Aoife Cahill. 2009. Correlating human and automatic evaluation of a German surface realiser. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 97–100, Suntec, Singapore, August. Association for Computational Linguistics.
C. Callison-Burch, M. Osborne, and P. Koehn. 2006. Re-evaluating the role of BLEU in machine translation research. In Proceedings of EACL 2006, pages 249–256.
Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (Meta-) evaluation of machine translation. In StatMT '07: Proceedings of the Second Workshop on Statistical Machine Translation, pages 136–158, Morristown, NJ, USA. Association for Computational Linguistics.
C. Callison-Burch, T. Cohn, and M. Lapata. 2008. ParaMetric: An automatic evaluation metric for paraphrasing. In Proceedings of the 22nd International Conference on Computational Linguistics, Volume 1, pages 97–104. Association for Computational Linguistics.
J. Carletta. 1996. Assessing agreement on classification tasks: the kappa statistic. Computational Linguistics, 22(2):249–254.
Dick Crouch, Mary Dalrymple, Ron Kaplan, Tracy King, John Maxwell, and Paula Newman. 2008. XLE documentation. Technical report, Palo Alto Research Center.
Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing.
J.R. Landis and G.G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, 33(1):159–174.
Chin-Yew Lin and Franz Josef Och. 2004. ORANGE: a method for evaluating automatic evaluation metrics for machine translation. In COLING '04: Proceedings of the 20th International Conference on Computational Linguistics, page 501, Morristown, NJ, USA. Association for Computational Linguistics.
K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2001. BLEU: a method for automatic evaluation of machine translation. Technical report, IBM Research.
E. Reiter and A. Belz. 2009. An investigation into the validity of some metrics for automatically evaluating natural language generation systems. Computational Linguistics, 35(4):529–558.
Stefan Riezler, Tracy H. King, Ronald M. Kaplan, Richard Crouch, John T. Maxwell III, and Mark Johnson. 2002. Parsing the Wall Street Journal using a lexical-functional grammar and discriminative estimation techniques. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 271–278, Philadelphia, Pennsylvania, USA, July. Association for Computational Linguistics.
Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the Association for Machine Translation in the Americas, pages 223–231.
M. Snover, N. Madnani, B.J. Dorr, and R. Schwartz. 2009. Fluency, adequacy, or HTER?: Exploring different human judgments with a tunable MT metric. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 259–268. Association for Computational Linguistics.
Amanda Stent, Matthew Marge, and Mohit Singhai. 2005. Evaluating evaluation methods for generation in the presence of variation. In Proceedings of CICLing.
J.P. Turian, L. Shen, and I.D. Melamed. 2003. Evaluation of machine translation and its evaluation. In Proceedings of MT Summit IX.
Michael White and Rajakrishnan Rajkumar. 2009. Perceptron reranking for CCG realization. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 410–419, Singapore, August. Association for Computational Linguistics.
Michael White. 2006. Efficient realization of coordinate structures in Combinatory Categorial Grammar. Research on Language and Computation, 4(1):39–75.

Table 1: Descriptive statistics
Table 2: Corpora-wise inter-annotator agreement (absolute and relative κ values shown)
Table 3: Spearman's correlations among NIST (N), BLEU (B), METEOR (M), GTM (G), TERp (TP), TERpa (TA), TER (T), human-targeted variants (HN, HB, HM, HT, HG) and human judgments (-Adq: adequacy and -Flu: fluency); scores which fall within the 95% CI of the best are italicized
Table 4: Spearman's correlations among NIST (N), BLEU (B), METEOR (M), GTM (G), TERp (TP), TERpa (TA), TER (T), human-targeted variants (HN, HB, HM, HT, HG) and human judgments (combined adequacy and fluency scores)
Table 5: Spearman's correlation analysis (bootstrap sampling) of the BLEU scores of various systems with human adequacy and fluency scores
Table 6: Spearman's correlations of NIST (N), BLEU (B), METEOR (M), GTM (G), TERp (TP), TERpa (TA), human-targeted variants (HT, HN, HB, HM, HG), and individual human judgments (combined adequacy and fluency scores)

Table 7: Factors influencing TERP ranking errors for the 50 worst-ranked realization groups
Factor | Count
Punctuation | 17
Adverbial shift | 16
Agreement | 14
Other shifts | 8
Conjunct rearrangement | 8
Complementizer ins/del | 5
PP shift | 4

Table 8: Metric-wise ranking performance in terms of agreement with the ranking induced by combined adequacy and fluency scores; each metric gets a score out of 5 (i.e., the number of system-level comparisons that emerged as significant per Tukey's HSD test). Legend: Perceptron Best (PB); Perceptron Worst (PW); XLE Best (XB); XLE Worst (XW); OpenCCG baseline models 1 to 3 (C1 ... C3)
same-paper 1 0.9677043 52 emnlp-2010-Further Meta-Evaluation of Broad-Coverage Surface Realization
Author: Dominic Espinosa ; Rajakrishnan Rajkumar ; Michael White ; Shoshana Berleant
Abstract: We present the first evaluation of the utility of automatic evaluation metrics on surface realizations of Penn Treebank data. Using outputs of the OpenCCG and XLE realizers, along with ranked WordNet synonym substitutions, we collected a corpus of generated surface realizations. These outputs were then rated and post-edited by human annotators. We evaluated the realizations using seven automatic metrics, and analyzed correlations obtained between the human judgments and the automatic scores. In contrast to previous NLG meta-evaluations, we find that several of the metrics correlate moderately well with human judgments of both adequacy and fluency, with the TER family performing best overall. We also find that all of the metrics correctly predict more than half of the significant systemlevel differences, though none are correct in all cases. We conclude with a discussion ofthe implications for the utility of such metrics in evaluating generation in the presence of variation. A further result of our research is a corpus of post-edited realizations, which will be made available to the research community. 1 Introduction and Background In building surface-realization systems for natural language generation, there is a need for reliable automated metrics to evaluate the output. Unlike in parsing, where there is usually a single goldstandard parse for a sentence, in surface realization there are usually many grammatically-acceptable ways to express the same concept. This parallels the task of evaluating machine-translation (MT) systems: for a given segment in the source language, 564 there are usually several acceptable translations into the target language. As human evaluation of translation quality is time-consuming and expensive, a number of automated metrics have been developed to evaluate the quality of MT outputs. In this study, we investigate whether the metrics developed for MT evaluation tasks can be used to reliably evaluate the outputs of surface realizers, and which of these metrics are best suited to this task. A number of surface realizers have been developed using the Penn Treebank (PTB), and BLEU scores are often reported in the evaluations of these systems. But how useful is BLEU in this context? The original BLEU study (Papineni et al., 2001) scored MT outputs, which are of generally lower quality than grammar-based surface realizations. Furthermore, even for MT systems, the usefulness of BLEU has been called into question (Callison-Burch et al., 2006). BLEU is designed to work with multiple reference sentences, but in treebank realization, there is only a single reference sentence available for comparison. A few other studies have investigated the use of such metrics in evaluating the output of NLG systems, notably (Reiter and Belz, 2009) and (Stent et al., 2005). The former examined the performance of BLEU and ROUGE with computer-generated weather reports, finding a moderate correlation with human fluency judgments. The latter study applied several MT metrics to paraphrase data from Barzilay and Lee’s corpus-based system (Barzilay and Lee, 2003), and found moderate correlations with human adequacy judgments, but little correlation with fluency judgments. 
Cahill (2009) examined the performance of six MT metrics (including BLEU) in evaluating the output of a LFG-based surface realizer for ProceMedITin,g Ms oasfs thaceh 2u0se1t0ts C,o UnSfAer,e n9c-e1 on O Ectmobpeir ic 2a0l1 M0.e ?tc ho2d0s10 in A Nsastoucira tlio Lnan fogru Cagoem Ppruotcaetisosninagl, L pinag eusis 5t6ic4s–574, German, also finding only weak correlations with the human judgments. To study the usefulness of evaluation metrics such as BLEU on the output of grammar-based surface realizers used with the PTB, we assembled a corpus of surface realizations from three different realizers operating on Section 00 of the PTB. Two human judges evaluated the adequacy and fluency of each of the realizations with respect to the reference sentence. The realizations were then scored with a number of automated evaluation metrics developed for machine translation. In order to investigate the correlation of targeted metrics with human evaluations, and gather other acceptable realizations for future evaluations, the judges manually repaired each unacceptable realization during the rating task. In contrast to previous NLG meta-evaluations, we found that several of the metrics correlate moderately well with human judgments of both adequacy and fluency, with the TER family performing best. However, when looking at statistically significant system-level differences in human judgments, we found that some of the metrics get some of the rankings correct, but none get them all correct, with different metrics making different ranking errors. This suggests that multiple metrics should be routinely consulted when comparing realizer systems. Overall, our methodology is similar to that of previous MT meta-evaluations, in that we collected human judgments of system outputs, and compared these scores with those assigned by automatic metrics. A recent alternative approach to paraphrase evaluation is ParaMetric (Callison-Burch et al., 2008); however, it requires a corpus of annotated (aligned) paraphrases (which does not yet exist for PTB data), and is arguably focused more on paraphrase analysis than paraphrase generation. The plan of the paper is as follows: Section 2 discusses the preparation of the corpus of surface realizations. Section 3 describes the human evaluation task and the automated metrics applied. Sections 4 and 5 present and discuss the results of these evaluations. We conclude with some general observations about automatic evaluation of surface realizers, and some directions for further research. 565 2 Data Preparation We collected realizations of the sentences in Section 00 of the WSJ corpus from the following three sources: 1. OpenCCG, a CCG-based chart realizer (White, 2006) 2. The XLE Generator, a LFG-based system developed by Xerox PARC (Crouch et al., 2008) 3. WordNet synonym substitutions, to investigate how differences in lexical choice compare to grammar-based variation.1 Although all three systems used Section 00 of the PTB, they were applied with various parameters (e.g., language models, multiple-output versus single-output) and on different input structures. Accordingly, our study does not compare OpenCCG to XLE, or either of these to the WordNet system. 2.1 OpenCCG realizations OpenCCG is an open source parsing/realization library with multimodal extensions to CCG (Baldridge, 2002). The OpenCCG chart realizer takes logical forms as input and produces strings by combining signs for lexical items. Alternative realizations are scored using integrated n-gram and perceptron models. 
For robustness, fragments are greedily assembled when necessary. Realizations were generated from 1,895 gold standard logical forms, created by constrained parsing of development-section derivations. The following OpenCCG models (which differ essentially in the way the output is ranked) were used: 1. Baseline 1: Output ranked by a trigram word model 2. Baseline 2: Output ranked using three language models (3-gram words 3-gram words with named entity class replacement factored language model of words, POS tags and CCG supertags) + + 1Not strictly surface realizations, since they do not involve an abstract input specification, but for simplicity we refer to them as realizations throughout. 3. Baseline 3: Perceptron with syntax features and the three LMs mentioned above 4. Perceptron full-model: n-best realizations ranked using perceptron with syntax features and the three n-gram models, as well as discriminative n-grams The perceptron model was trained on sections 0221 of the CCGbank, while a grammar extracted from section 00-21 was used for realization. In addition, oracle supertags were inserted into the chart during realization. The purpose of such a non-blind testing strategy was to evaluate the quality of the output produced by the statistical ranking models in isolation, rather than focusing on grammar coverage, and avoid the problems associated with lexical smoothing, i.e. lexical categories in the development section not being present in the training section. To enrich the variation in the generated realizations, dative-alternation was enforced during realization by ensuring alternate lexical categories of the verb in question, as in the following example: (1) the executives gave [the chefs] [a standing ovation] (2) the executives gave [a standing ovation] [to the chefs] 2.2 XLE realizations The corpus of realizations generated by the XLE system contained 42,527 surface realizations of approximately 1,421 section 00 sentences (an average of 30 per sentence), initially unranked. The LFG f-structures used as input to the XLE generator were derived from automatic parses, as described in (Riezler et al., 2002). The realizations were first tokenized using Penn Treebank conventions, then ranked using perplexities calculated from the same trigram word model used with OpenCCG. For each sentence, the top 4 realizations were selected. The XLE generator provides an interesting point of comparison to OpenCCG as it uses a manuallydeveloped grammar with inputs that are less abstract but potentially noisier, as they are derived from automatic parses rather than gold-standard ones. 566 2.3 WordNet synonymizer To produce an additional source of variation, the nouns and verbs of the sentences in section 00 of the PTB were replaced with all of their WordNet synonyms. Verb forms were generated using verb stems, part-of-speech tags, and the morphg tool.2 These substituted outputs were then filtered using the n-gram data which Google Inc. has made available.3 Those without any 5-gram matches centered on the substituted word (or 3-gram matches, in the case of short sentences) were eliminated. 3 Evaluation From the data sources described in the previous sec- tion, a corpus of realizations to be evaluated by the human judges was constructed by randomly choosing 305 sentences from section 00, then selecting surface realizations of these sentences using the following algorithm: 1. Add OpenCCG’s best-scored realization. 2. Add other OpenCCG realizations until all four models are represented, to a maximum of 4. 
3. Add up to 4 realizations from either the XLE system or the WordNet pool, chosen randomly. The intent was to give reasonable coverage of all realizer systems discussed in Section 2 without overloading the human judges. “System” here means any instantiation that emits surface realizations, including various configurations of OpenCCG (using different language models or ranking systems), and these can be multiple-output, such as an n-best list, or single-output (best-only, worst-only, etc.). Accordingly, more realizations were selected from the OpenCCG realizer because 5 different systems were being represented. Realizations were chosen randomly, rather than according to sentence types or other criteria, in order to produce a representative sample of the corpus. In total, 2,114 realizations were selected for evaluation. 2http : //www. informatics . sussex. ac .uk/ re search/ groups / nlp / carro l /morph .html l 3http : //www . ldc . upenn .edu/Catalog/docs/ LDC2 0 0 6T 13 / readme .txt 3.1 Human judgments Two human judges evaluated each surface realization on two criteria: adequacy, which represents the extent to which the output conveys all and only the meaning of the reference sentence; and fluency, the extent to which it is grammatically acceptable. The realizations were presented to the judges in sets containing a reference sentence and the 1-8 outputs selected for that sentence. To aid in the evaluation of adequacy, one sentence each of leading and trailing context were displayed. Judges used the guidelines given in Figure 1, based on the scales developed by the NIST Machine Translation Evaluation Workshop. In addition to rating each realization on the two five-point scales, each judge also repaired each output which he or she did not judge to be fully adequate and fluent. An example is shown in Figure 2. These repairs resulted in new reference sentences for a substantial number of sentences. These repaired realizations were later used to calculate targeted versions of the evaluation metrics, i.e., using the repaired sentence as the reference sentence. Although targeted metrics are not fully automatic, they are of interest because they allow the evaluation algorithm to focus on what is actually wrong with the input, rather than all textual differences. Notably, targeted TER (HTER) has been shown to be more consistent with human judgments than human annotators are with one another (Snover et al., 2006). 
3.2 Automatic evaluation The realizations were also evaluated using seven automatic metrics: • IBM’s BLEU, which scores a hypothesis by counting n-gram matches with the reference sentence (Papineni et al., 2001), with smoothing as described in (Lin and Och, 2004) • • • • • • The NIST n-gram evaluation metric, similar to BLEU, but rewarding rarer n-gram matches, and using a different length penalty METEOR, which measures the harmonic mean of unigram precision and recall, with a higher weight for recall (Banerjee and Lavie, 2005) 567 TER (Translation Edit Rate), a measure of the number of edits required to transform a hypothesis sentence into the reference sentence (Snover et al., 2006) TERP, an augmented version of TER which performs phrasal substitutions, stemming, and checks for synonyms, among other improvements (Snover et al., 2009) TERPA, an instantiation of TERP with edit weights optimized for correlation with adequacy in MT evaluations GTM (General Text Matcher), a generaliza- tion of the F-measure that rewards contiguous matching spans (Turian et al., 2003) Additionally, targeted versions of BLEU, METEOR, TER, and GTM were computed by using the human-repaired outputs as the reference set. The human repair was different from the reference sentence in 193 cases (about 9% of the total), and we expected this to result in better scores and correlations with the human judgments overall. 4 Results 4.1 Human judgments Table 1 summarizes the dataset, as well as the mean adequacy and fluency scores garnered from the human evaluation. Overall adequacy and fluency judgments were high (4.16, 3.63) for the realizer systems on average, and the best-rated realizer systems achieved mean fluency scores above 4. 4.2 Inter-annotator agreement Inter-annotator agreement was measured using the κ-coefficient, which is commonly used to measure the extent to which annotators agree in category P(1A−)P−(PE()E), judgment tasks. κ is defined as where P(A) is the observed agreement 1 b−etPw(eEe)n annotators and P(E) is the probability of agreement due to chance (Carletta, 1996). Chance agreement for this data is calculated by the method discussed in Carletta’s squib. However, in previous work in MT meta-evaluation, Callison-Burch et al. (2007), assume the less strict criterion of uniform chance agreement, i.e. for a five-point scale. They also 51 Score Adequacy Fluency 5All the meaning of the referencePerfectly grammatical 4 Most of the meaning Awkward or non-native; punctuation errors 3 Much of the meaning Agreement errors or minor syntactic problems 2 Meaning substantially different Major syntactic problems, such as missing words 1 Meaning completely different Completely ungrammatical Figure Ref. Realiz. Repair 1: Rating scale and guidelines It wasn’t clear how NL and Mr. Simmons would respond if Georgia Gulf spurns them again It weren’t clear how NL and Mr. Simmons would respond if Georgia Gulf again spurns them It wasn’t clear how NL and Mr. Simmons would respond if Georgia Gulf again spurns them Figure 2: Example of repair introduce the notion of “relative” κ, which measures how often two or more judges agreed that A > B, A = B, or A < B for two outputs A and B, irrespective of the specific values given on the five-point scale; here, uniform chance agreement is taken to be We report both absolute and relative κ in Table 2, using actual chance agreement rather than uniform chance agreement. 31. 
The κ scores of0.60 for adequacy and 0.63 for fluency across the entire dataset represent “substantial” agreement, according to the guidelines discussed in (Landis and Koch, 1977), better than is typically reported for machine translation evaluation tasks; for example, Callison-Burch et al. (2007) reported “fair” agreement, with κ = 0.281 for fluency and κ = 0.307 for adequacy (relative). Assuming the uniform chance agreement that the previously cited work adopts, our inter-annotator agreements (both absolute and relative) are still higher. This is likely due to the generally high quality of the realizations evaluated, leading to easier judgments. 4.3 Correlation with automatic evaluation To determine how well the automatic evaluation methods described in Section 3 correlate with the human judgments, we averaged the human judgments for adequacy and fluency, respectively, for each of the rated realizations, and then computed both Pearson’s correlation coefficient and Spearman’s rank correlation coefficient between these scores and each of the metrics. Spearman’s correlation makes fewer assumptions about the distribu- tion of the data, but may not reflect a linear rela568 tionship that is actually present. Both are frequently reported in the literature. Due to space constraints, we show only Spearman’s correlation, although the TER family scored slightly better on Pearson’s coefficient, relatively. The results for Spearman’s correlation are given in Table 3. Additionally, the average scores for adequacy and fluency were themselves averaged into a single score, following (Snover et al., 2009), and the Spearman’s correlation of each of the automatic metrics with these scores are given in Table 4. All reported correlations are significant at p < 0.001. 4.4 Bootstrap sampling of correlations For each of the sub-corpora shown in Table 1, we computed confidence intervals for the correlations between adequacy and fluency human scores with selected automatic metrics (BLEU, HBLEU, TER, TERP, and HTER) as described in (Koenh, 2004). We sampled each sub-corpus 1000 times with replace- ment, and calculated correlations between the rankings induced by the human scores and those induced by the metrics for each reference sentence. We then used these coefficients to estimate the confidence interval, after excluding the top 25 and bottom 25 coefficients, following (Lin and Och, 2004). The results of this for the BLEU metric are shown in Table 5. We determined which correlations lay within the 95% confidence interval of the best performing metric in each row of Table Table 3; these figures are italicized. 5 Discussion 5.1 Human judgments of systems The results for the four OpenCCG perceptron models mostly confirm those reported in (White and Rajkumar, 2009), with one exception: the B-3 model was below B-2, though the P-B (perceptron-best) model still scored highest. This may have been due to differences in the testing scenario. None of the differences in adequacy scores among the individual systems are significant, with the exception of the WordNet system. In this case, the lack of wordsense disambiguation for the substituted words results in a poor overall adequacy score (e.g., wage floor → wage story). Conversely, it scores highest ffoloro fluency, as substituting a noun or tve srcbo rwesith h a synonym does not usually introduce ungrammaticality. 
5.2 Correlations of human judgments with MT metrics Of the non-human-targeted metrics evaluated, BLEU and TER/TERP demonstrate the highest correlations with the human judgments of fluency (r = 0.62, 0.64). The TER family of evaluation metrics have been observed to perform very well in MTevaluation tasks, and although the data evaluated here differs from typical MT data in some important ways, the correlation of TERP with the human judgments is substantial. In contrast with previous MT evaluations where TERP performs considerably better than TER, these scored close to equal on our data, possibly because TERP’s stem, synonym, and paraphrase matching are less useful when most of the variation is syntactic. The correlations with BLEU and METEOR are lower than those reported in (Callison-Burch et al., 2007); in that study, BLEU achieved adequacy and fluency correlations of 0.690 and 0.722, respectively, and METEOR achieved 0.701 and 0.719. The correlations for these metrics might be expected to be lower for our data, since overall quality is higher, making the metrics’ task more difficult as the outputs involve subtler differences between acceptable and unacceptable variation. The human-targeted metrics (represented by the prefixed H in the data tables) correlated even more strongly with the human judgments, compared to the non-targeted versions. HTER demonstrated the best 569 correlation with realizer fluency (r = 0.75). For several kinds of acceptable variation involving the rearrangement of constituents (such as dative shift), TERP gives a more reasonable score than BLEU, due to its ability to directly evaluate phrasal shifts. The following realization was rated 4.5 for fluency, and was more correctly ranked by TERP than BLEU: (3) Ref: The deal also gave Mitsui access to a high-tech medical product. (4) Realiz.: The deal also gave access to a high-tech medical product to Mitsui. For each reference sentence, we compared the ranking of its realizations induced from the human scores to the ranking induced from the TERP score, and counted the rank errors by the latter, informally categorizing them by error type (see Table 7). In the 50 sentences with the highest numbers of rank errors, 17 were affected by punctuation differences, typically involving variation in comma placement. Human fluency judgments of outputs with only punctuation problems were generally high, and many realizations with commas inserted or removed were rated fully fluent by the annotators. However, TERP penalizes such insertions or deletions. Agreement errors are another frequent source of ranking errors for TERP. The human judges tended to harshly penalize sentences with number-agreement or tense errors, whereas TERP applies only a single substitution penalty for each such error. We expect that with suitable optimization of edit weights to avoid over-penalizing punctuation shifts and underpenalizing agreement errors, TERP would exhibit an even stronger correlation with human fluency judgments. None of the evaluation metrics can distinguish an acceptable movement of a word or constituent from an unacceptable movement, with only one reference sentence. A substantial source of error for both TERP and BLEU is variation in adverbial placement, as shown in (7). Similar errors are seen with prepositional phrases and some commonly-occurring temporal adverbs, which typically admit a number of variations in placement. 
Another important example of acceptable variation which these metrics do not generally rank correctly is dative alternation: Ref. We need to clarify what exactly is wrong with it. Realiz. Flu. TERP BLEU We need to clarify exactly what is wrong with it.50.10.5555 We need to clarify exactly what ’s wrong with it. 5 0.2 0.4046 (7) We need to clarify what , exactly , is wrong with it. 5 0.2 0.5452 We need to clarify what is wrong with it exactly. 4.5 0.1 0.6756 We need to clarify what exactly , is wrong with it. 4 0.1 0.7017 We need to clarify what , exactly is wrong with it. 4 0.1 0.7017 We needs to clarify exactly what is wrong with it. (5) Ref. When test booklets were passed out 48 hours ahead of time, she says she copied questions in the social studies section and gave the answers to students. (6) Realiz. When test booklets were passed out 48 hours ahead of time , she says she copied questions in the social studies section and gave students the answers. The correlations of each of the metrics with the human judgments of fluency for the realizer systems indicate at least a moderate relationship, in contrast with the results reported in (Stent et al., 2005) for paraphrase data, which found an inverse correlation for fluency, and (Cahill, 2009) for the output ofa surface realizer for German, which found only a weak correlation. However, the former study employed a corpus-based paraphrase generation system rather than grammar-driven surface realizers, and the resulting paraphrases exhibited much broader variation. In Cahill’s study, the outputs of the realizer were almost always grammatically correct, and the automated evaluation metrics were ranking markedness instead of grammatical acceptability. 5.3 System-level comparisons In order to investigate the efficacy of the metrics in ranking different realizer systems, or competing realizations from the same system generated using different ranking models, we considered seven different “systems” from the whole dataset of realizations. These consisted of five OpenCCG-based realizations (the best realization from three baseline models, and the best and the worst realization from the full perceptron model), and two XLE-based sys- tems (the best and the worst realization, after ranking the outputs of the XLE realizer with an n-gram model). The mean of the combined adequacy and 570 3 0.103 0.346 fluency scores of each of these seven systems was compared with that of every other system, resulting in 21 pairwise comparisons. Then Tukey’s HSD test was performed to determine the systems which differed significantly in terms of the average adequacy and fluency rating they received.4 The test revealed five pairwise comparisons where the scores were significantly different. Subsequently, for each of these systems, an overall system-level score for each of the MT metrics was calculated. For the five pairwise comparisons where the adequacy-fluency group means differed significantly, we checked whether the metric ranked the systems correctly. Table 8 shows the results of a pairwise comparison between the ranking induced by each evaluation metric, and the ranking induced by the human judgments. Five of the seven non- targeted metrics correctly rank more than half of the systems. NIST, METEOR, and GTM get the most comparisons right, but neither NIST nor GTM correctly rank the OpenCCG-baseline model 1 with respect to the XLE-best model. 
TER and TERP get two of the five comparisons correct, and they incorrectly rank two of the five OpenCCG model comparisons, as well as the comparison between the XLE-worst and OpenCCG-best systems. For the targeted metrics, HNIST is correct for all five comparisons, while neither HBLEU nor HMETEOR correctly rank all the OpenCCG models. On the other hand, HTER and HGTM incorrectly rank the XLE-best system versus OpenCCG-based models. In summary, some of the metrics get some of the rankings correct, but none of the non-targeted metrics get all of them correct. Moreover, different metrics make different ranking errors. This argues for 4This particular test was chosen since it corrects for multiple post-hoc analyses conducted on the same data-set. the use of multiple metrics in comparing realizer systems. 6 Conclusion Our study suggests that although the task of evaluating the output from realizer systems differs from the task of evaluating machine translations, the automatic metrics used to evaluate MT outputs deliver moderate correlations with combined human fluency and adequacy scores when used on surface realizations. We also found that the MT-evaluation metrics are useful in evaluating different versions of the same realizer system (e.g., the various OpenCCG realization ranking models), and finding cases where a system is performing poorly. As in MT-evaluation tasks, human-targeted metrics have the highest correlations with human judgments overall. These results suggest that the MT-evaluation metrics are useful for developing surface realizers. However, the correlations are lower than those reported for MT data, suggesting that they should be used with caution, especially for cross-system evaluation, where consulting multiple metrics may yield more reliable comparisons. In our study, the targeted version of TERP correlated most strongly with human judgments of fluency. In future work, the performance of the TER family of metrics on this data might be improved by opti- mizing the edit weights used in computing its scores, so as to avoid over-penalizing punctuation movements or under-penalizing agreement errors, both of which were significant sources of ranking errors. Multiple reference sentences may also help mitigate these problems, and the corpus of human-repaired realizations that has resulted from our study is a step in this direction, as it provides multiple references for some cases. We expect the corpus to also prove useful for feature engineering and error analysis in developing better realization models.5 Acknowledgements We thank Aoife Cahill and Tracy King for providing us with the output of the XLE generator. We also thank Chris Callison-Burch and the anonymous reviewers for their helpful comments and suggestions. 5The corpus can be downloaded from http : / /www . l ing .ohio-st ate . edu / ˜mwhite / dat a / emnlp 10 / . 571 This material is based upon work supported by the National Science Foundation under Grant No. 0812297. References Jason Baldridge. 2002. Lexically Specified Derivational Control in Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh. S. Banerjee and A. Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72. R. Barzilay and L. Lee. 2003. Learning to paraphrase: An unsupervised approach using multiple-sequence alignment. 
In proceedings of HLT-NAACL, volume 2003, pages 16–23. Aoife Cahill. 2009. Correlating human and automatic evaluation of a german surface realiser. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 97–100, Suntec, Singapore, August. Association for Computational Linguistics. C. Callison-Burch, M. Osborne, and P. Koehn. 2006. Reevaluating the role of BLEU in machine translation research. In Proceedings of EACL, volume 2006, pages 249–256. Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (meta-) evaluation ofmachine translation. In StatMT ’07: Proceedings of the Second Workshop on Statistical Machine Translation, pages 136–158, Morristown, NJ, USA. Association for Computational Linguistics. C. Callison-Burch, T. Cohn, and M. Lapata. 2008. Parametric: An automatic evaluation metric for paraphrasing. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 97–104. Association for Computational Linguistics. J. Carletta. 1996. Assessing agreement on classification tasks: the kappa statistic. Computational linguistics, 22(2):249–254. Dick Crouch, Mary Dalrymple, Ron Kaplan, Tracy King, John Maxwell, and Paula Newman. 2008. Xle documentation. Technical report, Palo Alto Research Center. Philip Koenh. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. J.R. Landis and G.G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, 33(1): 159–174. Lin and Franz Josef Och. 2004. Orange: a method for evaluating automatic evaluation metrics for machine translation. In COLING ’04: Proceedings Chin-Yew of the 20th international conference on Computational 501, Morristown, NJ, USA. Associfor Computational Linguistics. Papineni, S. Roukos, T. Ward, and W. Zhu. 2001. Linguistics, page ation K. Bleu: a method for automatic evaluation of machine translation. E. Technical report, IBM Research. Reiter and A. Belz. 2009. An investigation into the validity of some metrics for automatically evaluating natural language generation systems. Computational Linguistics, 35(4):529–558. Stefan Riezler, Tracy H. King, Ronald M. Kaplan, Richard Crouch, John T. III Maxwell, and Mark Johnson. 2002. Parsing the wall street journal using a lexical-functional grammar and discriminative estimation techniques. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 271–278, Philadelphia, Pennsylvania, USA, July. Association for Computational Linguistics. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In In Proceedings of Association for Machine Translation in the Americas, pages 223–23 1. M. Snover, N. Madnani, B.J. Dorr, and R. Schwartz. 2009. Fluency, adequacy, or HTER?: exploring different human judgments with a tunable MT metric. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 259–268. Association for Computational Linguistics. Amanda Stent, Matthew Marge, and Mohit Singhai. 2005. Evaluating evaluation methods for generation in the presence of variation. In Proceedings of CICLing. J.P. Turian, L. Shen, and I.D. Melamed. 2003. Evaluation of machine translation and its evaluation. recall (C— R), 100:2. Michael White and Rajakrishnan Rajkumar. 2009. Perceptron reranking for CCG realization. 
Acknowledgements

We thank Aoife Cahill and Tracy King for providing us with the output of the XLE generator. We also thank Chris Callison-Burch and the anonymous reviewers for their helpful comments and suggestions. This material is based upon work supported by the National Science Foundation under Grant No. 0812297.

References

Jason Baldridge. 2002. Lexically Specified Derivational Control in Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh.

S. Banerjee and A. Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72.

R. Barzilay and L. Lee. 2003. Learning to paraphrase: An unsupervised approach using multiple-sequence alignment. In Proceedings of HLT-NAACL 2003, pages 16–23.

Aoife Cahill. 2009. Correlating human and automatic evaluation of a German surface realiser. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 97–100, Suntec, Singapore, August. Association for Computational Linguistics.

C. Callison-Burch, M. Osborne, and P. Koehn. 2006. Re-evaluating the role of BLEU in machine translation research. In Proceedings of EACL 2006, pages 249–256.

Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (Meta-) evaluation of machine translation. In StatMT '07: Proceedings of the Second Workshop on Statistical Machine Translation, pages 136–158, Morristown, NJ, USA. Association for Computational Linguistics.

C. Callison-Burch, T. Cohn, and M. Lapata. 2008. ParaMetric: An automatic evaluation metric for paraphrasing. In Proceedings of the 22nd International Conference on Computational Linguistics, Volume 1, pages 97–104. Association for Computational Linguistics.

J. Carletta. 1996. Assessing agreement on classification tasks: The kappa statistic. Computational Linguistics, 22(2):249–254.

Dick Crouch, Mary Dalrymple, Ron Kaplan, Tracy King, John Maxwell, and Paula Newman. 2008. XLE documentation. Technical report, Palo Alto Research Center.

Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing.

J. R. Landis and G. G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, 33(1):159–174.

Chin-Yew Lin and Franz Josef Och. 2004. ORANGE: A method for evaluating automatic evaluation metrics for machine translation. In COLING '04: Proceedings of the 20th International Conference on Computational Linguistics, page 501, Morristown, NJ, USA. Association for Computational Linguistics.

K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2001. BLEU: A method for automatic evaluation of machine translation. Technical report, IBM Research.

E. Reiter and A. Belz. 2009. An investigation into the validity of some metrics for automatically evaluating natural language generation systems. Computational Linguistics, 35(4):529–558.

Stefan Riezler, Tracy H. King, Ronald M. Kaplan, Richard Crouch, John T. Maxwell III, and Mark Johnson. 2002. Parsing the Wall Street Journal using a lexical-functional grammar and discriminative estimation techniques. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 271–278, Philadelphia, Pennsylvania, USA, July. Association for Computational Linguistics.

Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the Association for Machine Translation in the Americas, pages 223–231.

M. Snover, N. Madnani, B. J. Dorr, and R. Schwartz. 2009. Fluency, adequacy, or HTER? Exploring different human judgments with a tunable MT metric. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 259–268. Association for Computational Linguistics.

Amanda Stent, Matthew Marge, and Mohit Singhai. 2005. Evaluating evaluation methods for generation in the presence of variation. In Proceedings of CICLing.

J. P. Turian, L. Shen, and I. D. Melamed. 2003. Evaluation of machine translation and its evaluation. In Proceedings of MT Summit IX.

Michael White and Rajakrishnan Rajkumar. 2009. Perceptron reranking for CCG realization. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 410–419, Singapore, August. Association for Computational Linguistics.

Michael White. 2006. Efficient Realization of Coordinate Structures in Combinatory Categorial Grammar. Research on Language and Computation, 4(1):39–75.

Table 1: Descriptive statistics

Table 2: Corpora-wise inter-annotator agreement (absolute and relative κ values shown)

Table 3: Spearman's correlations among NIST (N), BLEU (B), METEOR (M), GTM (G), TERp (TP), TERpa (TA), TER (T), human variants (HN, HB, HM, HT, HG), and human judgments (-Adq: adequacy and -Flu: fluency); scores which fall within the 95% CI of the best are italicized

Table 4: Spearman's correlations among NIST (N), BLEU (B), METEOR (M), GTM (G), TERp (TP), TERpa (TA), TER (T), human variants (HN, HB, HM, HT, HG), and human judgments (combined adequacy and fluency scores)

Table 5: Spearman's correlation analysis (bootstrap sampling) of the BLEU scores of various systems with human adequacy and fluency scores

Table 6: Spearman's correlations of NIST (N), BLEU (B), METEOR (M), GTM (G), TERp (TP), TERpa (TA), human variants (HT, HN, HB, HM, HG), and individual human judgments (combined adequacy and fluency scores)

Table 7: Factors influencing TERP ranking errors for the 50 worst-ranked realization groups

  Factor                    Count
  Punctuation               17
  Adverbial shift           16
  Agreement                 14
  Other shifts              8
  Conjunct rearrangement    8
  Complementizer ins/del    5
  PP shift                  4

Table 8: Metric-wise ranking performance in terms of agreement with a ranking induced by combined adequacy and fluency scores; each metric gets a score out of 5 (i.e., the number of system-level comparisons that emerged as significant per Tukey's HSD test). Legend: Perceptron Best (PB); Perceptron Worst (PW); XLE Best (XB); XLE Worst (XW); OpenCCG baseline models 1 to 3 (C1 ... C3)
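As a rough illustration of the bootstrap correlation analysis summarized in Table 5, the sketch below estimates a Spearman correlation between an automatic metric's per-realization scores and human judgments, together with a 95% bootstrap confidence interval. The metric scores and human ratings used here are synthetic placeholders, not the study's data.

```python
# Hedged sketch: bootstrap estimate of the Spearman correlation between an
# automatic metric's scores and human judgments, with a 95% confidence interval.
import numpy as np
from scipy.stats import spearmanr

def bootstrap_spearman(metric_scores, human_scores, n_boot=1000, seed=0):
    metric_scores = np.asarray(metric_scores)
    human_scores = np.asarray(human_scores)
    rng = np.random.default_rng(seed)
    rhos = []
    for _ in range(n_boot):
        # Resample realization indices with replacement.
        idx = rng.integers(0, len(metric_scores), size=len(metric_scores))
        rho, _ = spearmanr(metric_scores[idx], human_scores[idx])
        rhos.append(rho)
    lo, hi = np.percentile(rhos, [2.5, 97.5])
    point, _ = spearmanr(metric_scores, human_scores)
    return point, (lo, hi)

# Hypothetical per-realization BLEU scores and combined human ratings:
bleu = np.random.default_rng(1).uniform(0.2, 0.9, size=300)
human = 0.6 * bleu + np.random.default_rng(2).normal(0, 0.1, size=300)
print(bootstrap_spearman(bleu, human))
```

Reporting intervals rather than point estimates makes it easier to see when two metrics' correlations are not reliably different, which is the caution urged above for cross-system comparisons.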