emnlp emnlp2010 emnlp2010-80 knowledge-graph by maker-knowledge-mining

80 emnlp-2010-Modeling Organization in Student Essays


Source: pdf

Author: Isaac Persing ; Alan Davis ; Vincent Ng

Abstract: Automated essay scoring is one of the most important educational applications of natural language processing. Recently, researchers have begun exploring methods of scoring essays with respect to particular dimensions of quality such as coherence, technical errors, and relevance to prompt, but there is relatively little work on modeling organization. We present a new annotated corpus and propose heuristic-based and learning-based approaches to scoring essays along the organization dimension, utilizing techniques that involve sequence alignment, alignment kernels, and string kernels.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Automated essay scoring is one of the most important educational applications of natural language processing. [sent-3, score-0.795]

2 We present a new annotated corpus and propose heuristic-based and learning-based approaches to scoring essays along the organization dimension, utilizing techniques that involve sequence alignment, alignment kernels, and string kernels. [sent-5, score-0.822]

3 Automated essay scoring, the task of employing computer technology to evaluate and score written text, is one of the most important educational applications of natural language processing (NLP) (see Shermis and Burstein (2003) and Shermis et al. [sent-6, score-0.704]

4 Besides its potential commercial value, automated essay scoring brings about a number of relatively less-studied but arguably rather challenging discourse-level problems that involve the computational modeling of different facets of text structure, such as content, coherence, and organization. [sent-9, score-0.797]

5 A major weakness of many existing essay scoring engines such as IntelliMetric (Elliot, 2001) and Intelligent Essay Assessor (Landauer et al., 2003) is that they adopt a holistic scoring scheme, which summarizes the quality of an essay with a single score and thus provides very limited feedback to the writer. [sent-10, score-0.782] [sent-11, score-0.809]

7 In particular, it is not clear which dimension of an essay (e. [sent-12, score-0.646]

8 Recent work addresses this problem by scoring a particular dimension of essay quality such as coherence (Miltsakaki and Kukich, 2004), technical errors, and relevance to prompt (Higgins et al. [sent-15, score-0.9]

9 Automated systems that provide instructional feedback along multiple dimensions of essay quality such as Criterion (Burstein et al. [sent-17, score-0.618]

10 Nevertheless, there is an essay scoring dimension for which few computational models have been developed: organization. [sent-19, score-0.787]

11 A well-organized essay is structured in a way that logically develops an argument. [sent-22, score-0.642]

12 While organization is an important dimension of essay quality, state-of-the-art essay scoring software such as e-rater V.2 (Attali and Burstein, 2006) employs rather simple heuristic-based methods for computing the score of an essay along this particular dimension. [sent-28, score-1.661] [sent-29, score-0.668]

14 We evaluate our organization model on a data set of 1003 essays annotated with organization scores. [sent-41, score-0.773]

15 First, we address a less-studied discourse-level task, predicting the organization score of an essay, by developing a computational model of organization, thus establishing a baseline against which future work on this task can be compared. [sent-43, score-0.924]

16 Second, we annotate a subset of our student essay corpus with organization scores and make this data set publicly available. [sent-44, score-0.954]

17 Similarly, about one quarter of essays contained 24 or fewer sentences and the longest quarter contained 36 or more sentences. We selected a subset consisting of 1003 essays from the ICLE to annotate and use for training and testing of our model of essay organization. [sent-55, score-1.194]

18 For this reason, we selected only argumentative essays rather than narrative pieces, because they contain the discourse structures and kind of organization we are interested in modeling. [sent-58, score-0.571]

19 To develop our essay organization model, human annotators scored 1003 essays using guidelines in an essay annotation rubric. [sent-63, score-1.753]

20 Annotators evaluated the organization of each essay using a numerical score from 1 to 4 at half-point increments. [sent-64, score-0.924]

21 This contrasts with previous work on essay scoring, where the corpus is annotated with a binary decision (i. [sent-65, score-0.618]

22 Analysis of these doubly annotated essays reveals that, though annotators only exactly agree on the organization score of an essay 29% of the time, the scores they apply are within 0. [sent-78, score-1.238]

23 , the conclusion appears before the introduction), the resulting essay will typically be considered poorly organized. [sent-96, score-0.641]

24 Hence, knowing the discourse function label of each paragraph in an essay would be helpful for predicting its organization score. [sent-97, score-1.478]

25 One way is to automatically acquire such labels from a corpus of student essays where each paragraph is annotated with its discourse function label. [sent-100, score-0.882]

26 As a result, we will resort to labeling a paragraph with its function label heuristically. [sent-102, score-0.6]

27 Second, which paragraph function labels would be most useful for scoring the organization of an essay? [sent-103, score-0.939]

28 Based on our linguistic intuition, we identify four potentially useful paragraph function labels: Introduction, Body, Rebuttal, and Conclusion. [sent-104, score-0.506]

29 Setting aside for the moment the problem of exactly how to predict an essay’s organization score given its paragraph sequence, the problem of obtaining paragraph labels to use for this task still remains. [sent-107, score-1.322]

30 As mentioned above, we adopt a heuristic approach to paragraph function labeling. [sent-108, score-0.562]

31 The first of these are positional, dealing with where in the essay a paragraph appears. [sent-111, score-1.097]

32 So for example, the first paragraph in an essay is likely to be an Introduction, while the last is likely to be a Conclusion. [sent-112, score-1.097]

33 To illustrate why these sentence function labels may be useful for paragraph labeling, consider a paragraph containing a Thesis sentence. [sent-119, score-1.049]

34 The presence of a Thesis sentence is a strong indicator that the paragraph containing it is either an Introduction or Conclusion. [sent-120, score-0.53]

35 Because many of the paragraph labeling heuristics depend on the availability of sentence labels, we will describe the sentence labeling heuristics first. [sent-130, score-0.647]

36 Each content word the sentence shares with the essay prompt gives us evidence that the sentence is a restatement of the prompt. [sent-135, score-0.711]
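
A minimal sketch of this overlap heuristic. The function name, the tokenization, and the stopword list are illustrative assumptions, not the authors' exact rules:

    def prompt_overlap_evidence(sentence, prompt,
                                stopwords=frozenset({"the", "a", "an", "of", "to", "is", "in"})):
        # Each content word shared with the essay prompt is one piece of
        # evidence that the sentence restates the prompt.
        sent_words = {w.lower().strip(".,!?") for w in sentence.split()} - stopwords
        prompt_words = {w.lower().strip(".,!?") for w in prompt.split()} - stopwords
        return len(sent_words & prompt_words)

    print(prompt_overlap_evidence("Students should pay their own tuition.",
                                  "Should students pay for their own tuition?"))  # 6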

37 The heuristic rules for paragraph labeling are similar in nature, though they depend heavily on the labels of a paragraph’s component sentences. [sent-137, score-0.6]

38 If a paragraph contains Thesis, Prompt, or Background sentences, the paragraph is likely to be an Introduction. [sent-138, score-0.958]

39 For example, a paragraph that is the first paragraph in an essay is likely to be an Introduction, but a paragraph that is neither the first nor the last is likely to be either a Rebuttal or Body paragraph. [sent-141, score-2.055]

40 After searching a paragraph for all these features, we gather the pieces of evidence in support of each paragraph label and assign the paragraph the label having the most support. [sent-142, score-1.567]
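
A hedged sketch of this evidence-counting scheme. The two rules shown (position and presence of Thesis/Prompt/Background sentences) paraphrase the ones mentioned above; the full heuristic set is not reproduced in this summary, so the votes and tie-breaking here are illustrative:

    from collections import Counter

    def label_paragraph(position, num_paragraphs, sentence_labels):
        # Gather one vote per piece of evidence, then assign the label
        # with the most support.
        votes = Counter()
        if position == 0:
            votes["Introduction"] += 1      # first paragraph
        elif position == num_paragraphs - 1:
            votes["Conclusion"] += 1        # last paragraph
        else:
            votes["Body"] += 1              # middle paragraphs default toward Body
        if any(s in ("Thesis", "Prompt", "Background") for s in sentence_labels):
            votes["Introduction"] += 1
        return votes.most_common(1)[0][0]

    print(label_paragraph(0, 5, ["Prompt", "Thesis"]))  # Introduction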

41 Having applied labels to each paragraph in an essay, how can we use these labels to predict the essay's score? [sent-143, score-0.6]

42 Recall that the importance of each paragraph label stems not from the label itself, but from the sequence of labels it appears in. [sent-144, score-0.683]

43 (e.g., Barzilay and Lee (2002; 2003)), it has not been extensively applied to other areas of language processing, including essay scoring. [sent-148, score-0.618]

44 In this section, we will present two heuristic approaches to organization scoring, one based on aligning paragraph sequences and the other on aligning sentence sequences. [sent-149, score-0.997]

45 As mentioned above, our first approach to heuristic organization scoring involves aligning paragraph sequences. [sent-151, score-0.986]

46 Given an essay e in the test set, we (1) find the k essays in the training set that are most similar to e via paragraph sequence alignment, and then (2) predict the organization score of e by aggregating the scores of its k nearest neighbors obtained in the first step. [sent-153, score-2.001]

47 First, to obtain the k nearest neighbors of e, we employ the Needleman-Wunsch alignment algorithm (Needleman and Wunsch, 1970), which computes a similarity score for any pair of essays by finding an optimal alignment between their paragraph sequences. [sent-155, score-0.743]

48 Another essay, CRRRI, begins with a paragraph stating its Conclusion, follows it with three Rebuttal paragraphs, and ends with a paragraph Introducing the essay's topic. [sent-158, score-1.576]

49 The Needleman-Wunsch alignment algorithm has this effect since the score of the alignment it produces would be hurt by the facts that (1) there is not much overlap in the sets of paragraph labels each contains, and (2) the paragraph labels they do share (I and C) do not occur in the same order. [sent-165, score-1.262]

50 If we now consider a third essay whose paragraph sequence could be represented as IBRBC, a good similarity function should tell us that IBBBC and IBRBC are very similar. [sent-167, score-1.191]

51 The Needleman-Wunsch alignment score between the two paragraph sequences has this property, as the alignment algorithm would discover that the two sequences are identical except for the third paragraph label, which could be mismatched for a small penalty. [sent-168, score-1.348]
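
A compact Needleman-Wunsch implementation over paragraph label strings, shown on the IBBBC/IBRBC example above. The substitution scores and gap penalty are toy placeholders for the similarity function S(i, j) defined below:

    def needleman_wunsch(a, b, sim, gap=-1.0):
        # Global alignment score between two label sequences.
        n, m = len(a), len(b)
        dp = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dp[i][0] = i * gap
        for j in range(1, m + 1):
            dp[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                dp[i][j] = max(dp[i - 1][j - 1] + sim(a[i - 1], b[j - 1]),  # substitute
                               dp[i - 1][j] + gap,                          # gap in b
                               dp[i][j - 1] + gap)                          # gap in a
        return dp[n][m]

    sim = lambda x, y: 1.0 if x == y else -0.5   # toy stand-in for S(i, j)
    print(needleman_wunsch("IBBBC", "IBRBC", sim))  # 3.5: four matches, one mismatch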

52 We would therefore conclude that the IBBBC and IBRBC essays should receive similar organization scores. [sent-169, score-0.517]

53 To fully specify how to find the k nearest neighbors of an essay, we need to define a similarity function between paragraph labels. [sent-170, score-0.732]

54 In sequence alignment, the similarity function S(i, j) tells us how likely it is that symbol i (in our case, a paragraph label) will be substituted with another symbol j. [sent-171, score-0.621]

55 After obtaining the k nearest neighbors of e, the next step is to predict the organization score of e by aggregating the scores of its k nearest neighbors into one number. [sent-175, score-0.802]
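
A sketch of this aggregation step. The paper mentions three aggregators without naming them in this summary; mean and median are shown here as plausible instances, not as a claim about the authors' exact choices:

    import statistics

    def knn_predict(neighbor_scores, how="mean"):
        # Collapse the k nearest neighbors' organization scores into one prediction.
        if how == "mean":
            return statistics.mean(neighbor_scores)
        if how == "median":
            return statistics.median(neighbor_scores)
        raise ValueError(f"unknown aggregator: {how}")

    print(knn_predict([3.0, 3.5, 2.5]))            # 3.0
    print(knn_predict([3.0, 3.5, 2.5], "median"))  # 3.0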

56 An essay's paragraph sequence captures information about its organization at a high level, but ignores much of its lower level structure. [sent-182, score-0.773]

57 The intuition is that at least some portion of an essay’s organization score can be attributed to the organization of the sentence sequences of its component paragraphs. [sent-184, score-0.66]

58 Given a test essay e, we first find for each paragraph in e the k paragraphs in the training set that are most similar to it. [sent-186, score-1.171]

59 Specifically, each paragraph is represented by its sequence of sentence function labels. [sent-187, score-0.572]

60 Given this paragraph representation, we can find the k nearest neighbors of a paragraph by applying the Needleman-Wunsch algorithm described in the previous subsection to align sentence sequences, using the same similarity function we defined above. [sent-188, score-1.239]

61 Next, we score each paragraph pi by aggregating the scores of its k nearest neighbors obtained in the first step, assuming the score of a nearest neighbor paragraph is the same as the organization score of the training set essay containing it. [sent-189, score-2.443]

62 Since we have three ways of aggregating the scores of a paragraph’s nearest neighbors and three ways of aggregating the resulting paragraph scores, this second method Hs for scoring organization has nine variants. [sent-193, score-1.199]
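
A sketch of the two-level aggregation behind these nine variants, reusing knn_predict from the sketch above; the particular mean/median pairing is an illustrative assumption:

    def score_essay_by_paragraphs(neighbor_scores_per_paragraph,
                                  para_how="mean", essay_how="median"):
        # Level 1: aggregate each paragraph's neighbor scores into a paragraph score.
        # Level 2: aggregate the paragraph scores into one essay score.
        para_scores = [knn_predict(s, para_how) for s in neighbor_scores_per_paragraph]
        return knn_predict(para_scores, essay_how)

    print(score_essay_by_paragraphs([[3.0, 3.5], [2.5, 3.0], [4.0, 3.5]]))  # 3.25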

63 In the previous section, we proposed two heuristic approaches to organization scoring, one based on aligning paragraph label sequences and the other based on aligning sentence label sequences. [sent-194, score-1.127]

64 In the process of constructing these two systems, however, we created a lot of information about the essays which might also be useful for organization scoring, but which the heuristic systems are unable to exploit. [sent-195, score-0.573]

65 Owing to the different ways we presented of combining the scores of an essay's nearest neighbors, the paragraph label sequence alignment approach has three variants, and its sentence label sequence alignment counterpart has nine. [sent-199, score-1.063]

66 Second, it is not clear that the k nearest neighbors of an essay will always be similar to it with respect to organization score. [sent-202, score-1.071]

67 While we do expect the alignment scores between good essays with reasonable paragraph sequences to be high, poorly organized essays by their nature have more random paragraph sequences. [sent-203, score-1.698]

68 Hence, we have no intuition about the k nearest neighbors of a poor essay, as it may have as high an alignment score with another poorly organized essay as with a good essay. [sent-204, score-0.979]

69 The second weakness, on the other hand, is addressed by treating the organization score predictions obtained by the nearest neighbor methods as features for an SVM learner rather than as estimates of an essay’s organization score. [sent-207, score-0.772]

70 First, to give our learner more direct access to the information we used to heuristically predict essay scores, we can extract paragraph label subsequences from each essay and use them as features. [sent-211, score-1.861]

71 It is fairly typical to see the first subsequence, I–B, at the beginning of a good essay, so its occurrence should give us a small amount of evidence that the essay it occurs in is well-organized. [sent-213, score-0.618]

72 The presence of the second subsequence, R–I, however, should indicate that its essay’s organization is poor because, in general, a good essay should not give a Rebuttal before an Introduction. [sent-214, score-0.897]

73 Because we can envision subsequences of various lengths being useful, we create a binary presence or absence feature in the linear kernel for each paragraph subsequence of length 1, 2, 3, 4, or 5 appearing in the training set. [sent-215, score-0.76]
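
A minimal sketch of this feature extraction. Treating "subsequence" as a contiguous n-gram is an assumption here, since the footnote defining the term is not reproduced in this summary:

    def subsequence_features(labels, max_len=5):
        # Binary presence features over label n-grams of length 1..max_len.
        feats = set()
        for n in range(1, max_len + 1):
            for i in range(len(labels) - n + 1):
                feats.add("-".join(labels[i:i + n]))
        return feats

    print(sorted(subsequence_features(list("IBRBC"), max_len=2)))
    # ['B', 'B-C', 'B-R', 'C', 'I', 'I-B', 'R', 'R-B']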

74 Recall that when describing our alignment-based nearest neighbor organization score prediction methods, we noted that an essay’s organization score may be partially attributable to how well the sentences within its paragraphs are organized. [sent-217, score-0.87]

75 For example, if one of an essay's paragraphs contains the sentence label subsequence Main Idea–Elaboration–Support–Conclusion, this gives us some evidence that the essay is overall well-organized since one of its component paragraphs contains this reasonably-organized subsequence. [sent-218, score-0.926]

76 An essay with a paragraph containing the subsequence Conclusion–Support–Thesis– Rebuttal, however, is likely to be poorly organized because this is a poorly-organized subsequence. [sent-219, score-1.187]

77 We call the system resulting from the use of these three types of features Rlnps because it uses Regression with linear kernel to predict essay scores, and it uses nearest neighbor, paragraph subsequence, and sentence subsequence features. [sent-224, score-1.481]

78 Our goal here is to explore this rarely-exploited capability of SVMs for the task of essay scoring. [sent-240, score-0.618]

79 We apply string kernels to essay scoring as follows: we represent each essay using its paragraph function label sequence, and employ a string kernel to compute the similarity between two essays based on this representation. [sent-244, score-2.572]

80 For K, since in the flat features we considered all paragraph label sequences of lengths from 1 to 5, we again take the middle value, setting it to 3. [sent-249, score-0.651]

81 We call the system using this kernel Rs because it uses a Regression SVM with a string kernel to predict essay scores. [sent-250, score-0.941]
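
A brute-force gap-weighted subsequence kernel over paragraph label strings, in the spirit of Lodhi et al., with maximum subsequence length 3 as stated above. The decay parameter lam and the normalization are assumptions; the dynamic-programming formulation is far more efficient, but explicit enumeration is transparent and fast enough for five-symbol sequences:

    from itertools import combinations
    from collections import Counter
    import math

    def subseq_weights(s, k=3, lam=0.5):
        # Gap-weighted counts of all (possibly non-contiguous) subsequences
        # of length 1..k; each occurrence is discounted by lam**span.
        w = Counter()
        for n in range(1, k + 1):
            for idx in combinations(range(len(s)), n):
                span = idx[-1] - idx[0] + 1
                w[tuple(s[i] for i in idx)] += lam ** span
        return w

    def string_kernel(s, t, k=3, lam=0.5):
        ws, wt = subseq_weights(s, k, lam), subseq_weights(t, k, lam)
        return sum(ws[u] * wt[u] for u in ws if u in wt)

    def normalized(s, t):
        return string_kernel(s, t) / math.sqrt(string_kernel(s, s) * string_kernel(t, t))

    print(round(normalized("IBBBC", "IBRBC"), 3))  # high similarity, below 1.0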

82 The string kernel we described in the previous subsection is just one way of measuring the similarity of two essays given their paragraph sequences. [sent-253, score-0.937]

83 While this may be the most obvious way to use paragraph sequence information from a machine learning perspective, our earlier use of the Needleman-Wunsch algorithm suggests a more direct way of extracting structured information from paragraph sequences. [sent-254, score-1.02]

84 More specifically, recall that the Needleman-Wunsch algorithm finds an optimal alignment between two paragraph sequences, where an optimal alignment is defined as an alignment having the highest possible alignment score. [sent-255, score-0.843]

85 As such, with some slight modifications, the alignment score between two paragraph sequences can be used as the kernel value for an Alignment Kernel. [sent-257, score-0.823]

86 We call the system using this kernel Ra because it uses a Regression SVM with an alignment kernel to predict essay scores. [sent-258, score-0.997]
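
One plausible reading of the "slight modifications" mentioned above, reusing needleman_wunsch from the earlier sketch; exponentiating the alignment score is an assumption on our part, not the authors' stated construction (raw alignment scores are not guaranteed to yield a positive semidefinite Gram matrix):

    import math

    def alignment_kernel(a, b, sim, gap=-1.0, gamma=0.1):
        # Turn a Needleman-Wunsch alignment score into a kernel value.
        return math.exp(gamma * needleman_wunsch(a, b, sim, gap))

    sim = lambda x, y: 1.0 if x == y else -0.5
    print(alignment_kernel("IBBBC", "IBRBC", sim))  # exp(0.1 * 3.5) ≈ 1.419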

87 These three scores are given by: S1 = (1/N) Σ_{i=1}^{N} 1[A_i ≠ E_i], S2 = (1/N) Σ_{i=1}^{N} |A_i − E_i|, S3 = (1/N) Σ_{i=1}^{N} (A_i − E_i)^2, where A_i and E_i are the annotator-assigned and system-estimated scores respectively for essay i, and N is the number of essays. [sent-277, score-0.707]

88 Since many of the systems we have described assign test essays real-valued organization scores, to obtain E_i for metric S1 we round the outputs of each system to the nearest of the seven scores the human annotators were permitted to assign (1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0). [sent-278, score-0.704]
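
A direct transcription of the three metrics and the rounding step; the seven-point grid follows from the 1-to-4 half-point rubric described earlier:

    def round_to_grid(x, grid=(1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0)):
        # Snap a real-valued system output to the nearest permitted score.
        return min(grid, key=lambda g: abs(g - x))

    def scoring_metrics(A, E):
        # A: annotator scores, E: system scores (rounded for S1).
        N = len(A)
        s1 = sum(a != e for a, e in zip(A, E)) / N        # exact-disagreement rate
        s2 = sum(abs(a - e) for a, e in zip(A, E)) / N    # mean absolute error
        s3 = sum((a - e) ** 2 for a, e in zip(A, E)) / N  # mean squared error
        return s1, s2, s3

    E = [round_to_grid(x) for x in (2.8, 3.1, 3.6)]
    print(scoring_metrics([3.0, 3.0, 3.5], E))  # (0.0, 0.0, 0.0)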

89 To test our system, we performed 5-fold cross validation on our 1003 essay set, micro-averaging our results into three scores corresponding to the three scoring metrics described above. [sent-286, score-0.822]

90 Avg computes the average organization score of essays in the training set and assigns this score to each test set essay. [sent-291, score-0.617]

91 Though simple, this baseline is by no means easy to beat, since 41% of the essays have a score of 3, and 96% of the essays have a score that is within one point of 3. [sent-293, score-0.622]
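
The Avg baseline reduces to a one-liner; the helper name is ours:

    def avg_baseline(train_scores):
        # Predict the mean training-set organization score for every test essay.
        return sum(train_scores) / len(train_scores)

    # With 41% of essays scored exactly 3 and 96% within one point of 3,
    # a near-constant prediction around 3 is already hard to beat under S1-S3.
    print(avg_baseline([3.0, 3.5, 2.5, 3.0]))  # 3.0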

92 In general, however, it appears to be the case that systems based on aligning paragraph label sequences achieve better results than systems that attempt to align sentence label sequences. [sent-301, score-0.761]

93 This suggests that, even though Rs does not perform exceptionally, it is extracting some useful information for organization scoring from the heuristically assigned paragraph label sequences. [sent-309, score-0.974]

94 By contrast, it initially appears that the alignment kernel is not extracting any useful information from these paragraph sequences at all, since its S1, S2, and S3 scores are all much worse than all of the baseline systems. [sent-314, score-0.807]

95 This result suggests that these two different methods of extracting information from paragraph sequences provide us with different kinds of evidence useful for organization scoring, although neither method by itself was exceptionally useful. [sent-321, score-0.826]

96 While space limitations preclude showing the actual numbers, the trend is consistent among all three scoring metrics: the first feature type to remove is paragraph sequences (meaning that they are the least important) and the last to remove is the nearest neighbor features. [sent-340, score-0.901]

97 The fact that flat paragraph sequence features proved to be least useful highlights the importance of the structured methods we presented for using paragraph sequence information. [sent-342, score-1.095]

98 The contributions of our work include the novel application of two techniques from bioinformatics and machine learning (sequence alignment and string kernels), as well as the introduction of alignment kernels to essay scoring. [sent-344, score-0.98]

99 Automated essay evaluation: The Criterion online writing evaluation service. [sent-376, score-0.641]

100 Evaluation of text coherence for electronic essay scoring systems. [sent-426, score-0.835]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('essay', 0.618), ('paragraph', 0.479), ('essays', 0.261), ('organization', 0.256), ('scoring', 0.141), ('nearest', 0.134), ('kernel', 0.133), ('kernels', 0.107), ('rebuttal', 0.107), ('burstein', 0.096), ('alignment', 0.091), ('hp', 0.085), ('coherence', 0.076), ('rlnps', 0.075), ('paragraphs', 0.074), ('sequences', 0.07), ('subsequence', 0.067), ('label', 0.065), ('jill', 0.064), ('neighbors', 0.063), ('subsequences', 0.058), ('heuristic', 0.056), ('aligning', 0.054), ('rsa', 0.053), ('score', 0.05), ('neighbor', 0.05), ('student', 0.046), ('aggregating', 0.046), ('higgins', 0.043), ('shermis', 0.043), ('rs', 0.041), ('barzilay', 0.039), ('automated', 0.038), ('sequence', 0.038), ('body', 0.038), ('flat', 0.037), ('prompt', 0.037), ('estimations', 0.037), ('composite', 0.036), ('labels', 0.036), ('educational', 0.036), ('string', 0.035), ('scores', 0.034), ('heuristically', 0.033), ('icle', 0.033), ('discourse', 0.033), ('ibbbc', 0.032), ('ibrbc', 0.032), ('avg', 0.03), ('metrics', 0.029), ('labeling', 0.029), ('similarity', 0.029), ('sentence', 0.028), ('dimension', 0.028), ('preclude', 0.027), ('heuristics', 0.027), ('quarter', 0.027), ('function', 0.027), ('learner', 0.026), ('svms', 0.025), ('descriptions', 0.024), ('regina', 0.024), ('structured', 0.024), ('symbol', 0.024), ('employ', 0.024), ('presence', 0.023), ('writing', 0.023), ('alan', 0.023), ('weakness', 0.023), ('poorly', 0.023), ('svm', 0.023), ('predict', 0.022), ('argumentative', 0.021), ('attali', 0.021), ('cortes', 0.021), ('derrick', 0.021), ('exceptionally', 0.021), ('indel', 0.021), ('intellimetric', 0.021), ('lodhi', 0.021), ('needleman', 0.021), ('needlemanwunsch', 0.021), ('regressor', 0.021), ('removals', 0.021), ('rlanps', 0.021), ('rlsnps', 0.021), ('versley', 0.021), ('hs', 0.021), ('annotator', 0.021), ('hence', 0.02), ('regression', 0.02), ('ai', 0.019), ('ei', 0.019), ('annotators', 0.019), ('miltsakaki', 0.018), ('granger', 0.018), ('claudia', 0.018), ('mismatched', 0.018), ('median', 0.018), ('lillian', 0.018)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9999994 80 emnlp-2010-Modeling Organization in Student Essays

Author: Isaac Persing ; Alan Davis ; Vincent Ng

Abstract: Automated essay scoring is one of the most important educational applications of natural language processing. Recently, researchers have begun exploring methods of scoring essays with respect to particular dimensions of quality such as coherence, technical errors, and relevance to prompt, but there is relatively little work on modeling organization. We present a new annotated corpus and propose heuristic-based and learning-based approaches to scoring essays along the organization dimension, utilizing techniques that involve sequence alignment, alignment kernels, and string kernels.

2 0.069119863 20 emnlp-2010-Automatic Detection and Classification of Social Events

Author: Apoorv Agarwal ; Owen Rambow

Abstract: In this paper we introduce the new task of social event extraction from text. We distinguish two broad types of social events depending on whether only one or both parties are aware of the social contact. We annotate part of Automatic Content Extraction (ACE) data, and perform experiments using Support Vector Machines with Kernel methods. We use a combination of structures derived from phrase structure trees and dependency trees. A characteristic of our events (which distinguishes them from ACE events) is that the participating entities can be spread far across the parse trees. We use syntactic and semantic insights to devise a new structure derived from dependency trees and show that this plays a role in achieving the best performing system for both social event detection and classification tasks. We also use three data sampling approaches to solve the problem of data skewness. Sampling methods improve the F1-measure for the task of relation detection by over 20% absolute over the baseline.

3 0.068947621 64 emnlp-2010-Incorporating Content Structure into Text Analysis Applications

Author: Christina Sauper ; Aria Haghighi ; Regina Barzilay

Abstract: In this paper, we investigate how modeling content structure can benefit text analysis applications such as extractive summarization and sentiment analysis. This follows the linguistic intuition that rich contextual information should be useful in these tasks. We present a framework which combines a supervised text analysis application with the induction of latent content structure. Both of these elements are learned jointly using the EM algorithm. The induced content structure is learned from a large unannotated corpus and biased by the underlying text analysis task. We demonstrate that exploiting content structure yields significant improvements over approaches that rely only on local context.1

4 0.063381001 36 emnlp-2010-Discriminative Word Alignment with a Function Word Reordering Model

Author: Hendra Setiawan ; Chris Dyer ; Philip Resnik

Abstract: We address the modeling, parameter estimation and search challenges that arise from the introduction of reordering models that capture non-local reordering in alignment modeling. In particular, we introduce several reordering models that utilize (pairs of) function words as contexts for alignment reordering. To address the parameter estimation challenge, we propose to estimate these reordering models from a relatively small amount of manuallyaligned corpora. To address the search challenge, we devise an iterative local search algorithm that stochastically explores reordering possibilities. By capturing non-local reordering phenomena, our proposed alignment model bears a closer resemblance to stateof-the-art translation model. Empirical results show significant improvements in alignment quality as well as in translation performance over baselines in a large-scale ChineseEnglish translation task.

5 0.049508359 57 emnlp-2010-Hierarchical Phrase-Based Translation Grammars Extracted from Alignment Posterior Probabilities

Author: Adria de Gispert ; Juan Pino ; William Byrne

Abstract: We report on investigations into hierarchical phrase-based translation grammars based on rules extracted from posterior distributions over alignments of the parallel text. Rather than restrict rule extraction to a single alignment, such as Viterbi, we instead extract rules based on posterior distributions provided by the HMM word-to-word alignmentmodel. We define translation grammars progressively by adding classes of rules to a basic phrase-based system. We assess these grammars in terms of their expressive power, measured by their ability to align the parallel text from which their rules are extracted, and the quality of the translations they yield. In Chinese-to-English translation, we find that rule extraction from posteriors gives translation improvements. We also find that grammars with rules with only one nonterminal, when extracted from posteri- ors, can outperform more complex grammars extracted from Viterbi alignments. Finally, we show that the best way to exploit source-totarget and target-to-source alignment models is to build two separate systems and combine their output translation lattices.

6 0.046671875 82 emnlp-2010-Multi-Document Summarization Using A* Search and Discriminative Learning

7 0.042747851 114 emnlp-2010-Unsupervised Parse Selection for HPSG

8 0.04254926 67 emnlp-2010-It Depends on the Translation: Unsupervised Dependency Parsing via Word Alignment

9 0.041206233 18 emnlp-2010-Assessing Phrase-Based Translation Models with Oracle Decoding

10 0.040858202 11 emnlp-2010-A Semi-Supervised Approach to Improve Classification of Infrequent Discourse Relations Using Feature Vector Extension

11 0.040794339 87 emnlp-2010-Nouns are Vectors, Adjectives are Matrices: Representing Adjective-Noun Constructions in Semantic Space

12 0.039347548 122 emnlp-2010-WikiWars: A New Corpus for Research on Temporal Expressions

13 0.037743438 63 emnlp-2010-Improving Translation via Targeted Paraphrasing

14 0.037513498 81 emnlp-2010-Modeling Perspective Using Adaptor Grammars

15 0.034015249 61 emnlp-2010-Improving Gender Classification of Blog Authors

16 0.033109147 29 emnlp-2010-Combining Unsupervised and Supervised Alignments for MT: An Empirical Study

17 0.030703563 58 emnlp-2010-Holistic Sentiment Analysis Across Languages: Multilingual Supervised Latent Dirichlet Allocation

18 0.030375581 79 emnlp-2010-Mining Name Translations from Entity Graph Mapping

19 0.03020527 54 emnlp-2010-Generating Confusion Sets for Context-Sensitive Error Correction

20 0.030202709 12 emnlp-2010-A Semi-Supervised Method to Learn and Construct Taxonomies Using the Web


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.127), (1, 0.032), (2, -0.02), (3, 0.016), (4, 0.02), (5, -0.074), (6, 0.006), (7, -0.021), (8, 0.026), (9, -0.005), (10, -0.065), (11, -0.009), (12, 0.035), (13, -0.014), (14, 0.007), (15, 0.044), (16, 0.059), (17, -0.055), (18, -0.079), (19, 0.004), (20, 0.089), (21, 0.005), (22, 0.036), (23, -0.076), (24, -0.021), (25, -0.151), (26, 0.052), (27, -0.059), (28, -0.028), (29, -0.044), (30, -0.163), (31, -0.236), (32, -0.03), (33, -0.105), (34, -0.061), (35, 0.119), (36, 0.195), (37, 0.114), (38, 0.194), (39, 0.085), (40, 0.26), (41, 0.063), (42, -0.003), (43, 0.297), (44, 0.046), (45, -0.232), (46, 0.308), (47, -0.249), (48, 0.264), (49, 0.165)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9742294 80 emnlp-2010-Modeling Organization in Student Essays

Author: Isaac Persing ; Alan Davis ; Vincent Ng

Abstract: Automated essay scoring is one of the most important educational applications of natural language processing. Recently, researchers have begun exploring methods of scoring essays with respect to particular dimensions of quality such as coherence, technical errors, and relevance to prompt, but there is relatively little work on modeling organization. We present a new annotated corpus and propose heuristic-based and learning-based approaches to scoring essays along the organization dimension, utilizing techniques that involve sequence alignment, alignment kernels, and string kernels.

2 0.21903817 20 emnlp-2010-Automatic Detection and Classification of Social Events

Author: Apoorv Agarwal ; Owen Rambow

Abstract: In this paper we introduce the new task of social event extraction from text. We distinguish two broad types of social events depending on whether only one or both parties are aware of the social contact. We annotate part of Automatic Content Extraction (ACE) data, and perform experiments using Support Vector Machines with Kernel methods. We use a combination of structures derived from phrase structure trees and dependency trees. A characteristic of our events (which distinguishes them from ACE events) is that the participating entities can be spread far across the parse trees. We use syntactic and semantic insights to devise a new structure derived from dependency trees and show that this plays a role in achieving the best performing system for both social event detection and classification tasks. We also use three data sampling approaches to solve the problem of data skewness. Sampling methods improve the F1-measure for the task of relation detection by over 20% absolute over the baseline.

3 0.21624263 64 emnlp-2010-Incorporating Content Structure into Text Analysis Applications

Author: Christina Sauper ; Aria Haghighi ; Regina Barzilay

Abstract: In this paper, we investigate how modeling content structure can benefit text analysis applications such as extractive summarization and sentiment analysis. This follows the linguistic intuition that rich contextual information should be useful in these tasks. We present a framework which combines a supervised text analysis application with the induction of latent content structure. Both of these elements are learned jointly using the EM algorithm. The induced content structure is learned from a large unannotated corpus and biased by the underlying text analysis task. We demonstrate that exploiting content structure yields significant improvements over approaches that rely only on local context.1

4 0.20250526 81 emnlp-2010-Modeling Perspective Using Adaptor Grammars

Author: Eric Hardisty ; Jordan Boyd-Graber ; Philip Resnik

Abstract: Strong indications of perspective can often come from collocations of arbitrary length; for example, someone writing get the government out of my X is typically expressing a conservative rather than progressive viewpoint. However, going beyond unigram or bigram features in perspective classification gives rise to problems of data sparsity. We address this problem using nonparametric Bayesian modeling, specifically adaptor grammars (Johnson et al., 2006). We demonstrate that an adaptive na¨ ıve Bayes model captures multiword lexical usages associated with perspective, and establishes a new state-of-the-art for perspective classification results using the Bitter Lemons corpus, a collection of essays about mid-east issues from Israeli and Palestinian points of view.

5 0.19681166 18 emnlp-2010-Assessing Phrase-Based Translation Models with Oracle Decoding

Author: Guillaume Wisniewski ; Alexandre Allauzen ; Francois Yvon

Abstract: Extant Statistical Machine Translation (SMT) systems are very complex softwares, which embed multiple layers of heuristics and embark very large numbers of numerical parameters. As a result, it is difficult to analyze output translations and there is a real need for tools that could help developers to better understand the various causes of errors. In this study, we make a step in that direction and present an attempt to evaluate the quality of the phrase-based translation model. In order to identify those translation errors that stem from deficiencies in the phrase table (PT), we propose to compute the oracle BLEU-4 score, that is the best score that a system based on this PT can achieve on a reference corpus. By casting the computation of the oracle BLEU-1 as an Integer Linear Programming (ILP) problem, we show that it is possible to efficiently compute accurate lower-bounds of this score, and report measures performed on several standard benchmarks. Various other applications of these oracle decoding techniques are also reported and discussed. 1 Phrase-Based Machine Translation 1.1 Principle A Phrase-Based Translation System (PBTS) consists of a ruleset and a scoring function (Lopez, 2009). The ruleset, represented in the phrase table, is a set of phrase1pairs {(f, e) }, each pair expressing that the source phrase f can ,bee) r}e,w earicthten p (atirra enxslparteedss)i inngto t a target phrase e. Trarsaens flation hypotheses are generated by iteratively rewriting portions of the source sentence as prescribed by the ruleset, until each source word has been consumed by exactly one rule. The order of target words in an hypothesis is uniquely determined by the order in which the rewrite operation are performed. The search space ofthe translation model corresponds to the set of all possible sequences of 1Following the usage in statistical machine translation literature, use “phrase” to denote a subsequence of consecutive words. we 933 rules applications. The scoring function aims to rank all possible translation hypotheses in such a way that the best one has the highest score. A PBTS is learned from a parallel corpus in two independent steps. In a first step, the corpus is aligned at the word level, by using alignment tools such as Gi z a++ (Och and Ney, 2003) and some symmetrisation heuristics; phrases are then extracted by other heuristics (Koehn et al., 2003) and assigned numerical weights. In the second step, the parameters of the scoring function are estimated, typically through Minimum Error Rate training (Och, 2003). Translating a sentence amounts to finding the best scoring translation hypothesis in the search space. Because of the combinatorial nature of this problem, translation has to rely on heuristic search techniques such as greedy hill-climbing (Germann, 2003) or variants of best-first search like multi-stack decoding (Koehn, 2004). Moreover, to reduce the overall complexity of decoding, the search space is typically pruned using simple heuristics. For instance, the state-of-the-art phrase-based decoder Moses (Koehn et al., 2007) considers only a restricted number of translations for each source sequence2 and enforces a distortion limit3 over which phrases can be reordered. As a consequence, the best translation hypothesis returned by the decoder is not always the one with the highest score. 
1.2 Typology of PBTS Errors Analyzing the errors of a SMT system is not an easy task, because of the number of models that are combined, the size of these models, and the high complexity of the various decision making processes. For a SMT system, three different kinds of errors can be distinguished (Germann et al., 2004; Auli et al., 2009): search errors, induction errors and model errors. The former corresponds to cases where the hypothesis with the best score is missed by the search procedure, either because of the use of an ap2the 3the option of Moses, defaulting to 20. dl option of Moses, whose default value is 7. tt l ProceMedITin,g Ms oasfs thaceh 2u0se1t0ts C,o UnSfAer,e n9c-e11 on O Ectmobpeir ic 2a0l1 M0.e ?tc ho2d0s10 in A Nsastouciraatlio Lnan fogru Cagoem Ppruotcaetisosninagl, L pinaggeusis 9t3ic3s–943, proximate search method or because of the restrictions of the search space. Induction errors correspond to cases where, given the model, the search space does not contain the reference. Finally, model errors correspond to cases where the hypothesis with the highest score is not the best translation according to the evaluation metric. Model errors encompass several types oferrors that occur during learning (Bottou and Bousquet, 2008)4. Approximation errors are errors caused by the use of a restricted and oversimplistic class of functions (here, finitestate transducers to model the generation of hypotheses and a linear scoring function to discriminate them) to model the translation process. Estimation errors correspond to the use of sub-optimal values for both the phrase pairs weights and the parameters of the scoring function. The reasons behind these errors are twofold: first, training only considers a finite sample of data; second, it relies on error prone alignments. As a result, some “good” phrases are extracted with a small weight, or, in the limit, are not extracted at all; and conversely that some “poor” phrases are inserted into the phrase table, sometimes with a really optimistic score. Sorting out and assessing the impact of these various causes of errors is of primary interest for SMT system developers: for lack of such diagnoses, it is difficult to figure out which components of the system require the most urgent attention. Diagnoses are however, given the tight intertwining among the various component of a system, very difficult to obtain: most evaluations are limited to the computation of global scores and usually do not imply any kind of failure analysis. 1.3 Contribution and organization To systematically assess the impact of the multiple heuristic decisions made during training and decoding, we propose, following (Dreyer et al., 2007; Auli et al., 2009), to work out oracle scores, that is to evaluate the best achievable performances of a PBTS. We aim at both studying the expressive power of PBTS and at providing tools for identifying and quantifying causes of failure. Under standard metrics such as BLEU (Papineni et al., 2002), oracle scores are difficult (if not impossible) to compute, but, by casting the computation of the oracle unigram recall and precision as an Integer Linear Programming (ILP) problem, we show that it is possible to efficiently compute accurate lower-bounds of the oracle BLEU-4 scores and report measurements performed on several standard benchmarks. The main contributions of this paper are twofold. We first introduce an ILP program able to efficiently find the best hypothesis a PBTS can achieve. 
This program can be easily extended to test various improvements to 4We omit here optimization errors. 934 phrase-base systems or to evaluate the impact of different parameter settings. Second, we present a number of complementary results illustrating the usage of our oracle decoder for identifying and analyzing PBTS errors. Our experimental results confirm the main conclusions of (Turchi et al., 2008), showing that extant PBTs have the potential to generate hypotheses having very high BLEU4 score and that their main bottleneck is their scoring function. The rest of this paper is organized as follows: in Section 2, we introduce and formalize the oracle decoding problem, and present a series of ILP problems of increasing complexity designed so as to deliver accurate lowerbounds of oracle score. This section closes with various extensions allowing to model supplementary constraints, most notably reordering constraints (Section 2.5). Our experiments are reported in Section 3, where we first introduce the training and test corpora, along with a description of our system building pipeline (Section 3. 1). We then discuss the baseline oracle BLEU scores (Section 3.2), analyze the non-reachable parts of the reference translations, and comment several complementary results which allow to identify causes of failures. Section 4 discuss our approach and findings with respect to the existing literature on error analysis and oracle decoding. We conclude and discuss further prospects in Section 5. 2 Oracle Decoder 2.1 The Oracle Decoding Problem Definition To get some insights on the errors of phrasebased systems and better understand their limits, we propose to consider the oracle decoding problem defined as follows: given a source sentence, its reference translation5 and a phrase table, what is the “best” translation hypothesis a system can generate? As usual, the quality of an hypothesis is evaluated by the similarity between the reference and the hypothesis. Note that in the oracle decoding problem, we are only assessing the ability of PBT systems to generate good candidate translations, irrespective of their ability to score them properly. We believe that studying this problem is interesting for various reasons. First, as described in Section 3.4, comparing the best hypothesis a system could have generated and the hypothesis it actually generates allows us to carry on both quantitative and qualitative failure analysis. The oracle decoding problem can also be used to assess the expressive power of phrase-based systems (Auli et al., 2009). Other applications include computing acceptable pseudo-references for discriminative training (Tillmann and Zhang, 2006; Liang et al., 2006; Arun and 5The oracle decoding problem can be extended to the case of multiple references. For the sake of simplicity, we only describe the case of a single reference. Koehn, 2007) or combining machine translation systems in a multi-source setting (Li and Khudanpur, 2009). We have also used oracle decoding to identify erroneous or difficult to translate references (Section 3.3). Evaluation Measure To fully define the oracle decoding problem, a measure of the similarity between a translation hypothesis and its reference translation has to be chosen. The most obvious choice is the BLEU-4 score (Papineni et al., 2002) used in most machine translation evaluations. However, using this metric in the oracle decoding problem raises several issues. 
First, BLEU-4 is a metric defined at the corpus level and is hard to interpret at the sentence level. More importantly, BLEU-4 is not decomposable6: as it relies on 4-grams statistics, the contribution of each phrase pair to the global score depends on the translation of the previous and following phrases and can not be evaluated in isolation. Because of its nondecomposability, maximizing BLEU-4 is hard; in particular, the phrase-level decomposability of the evaluation × metric is necessary in our approach. To circumvent this difficulty, we propose to evaluate the similarity between a translation hypothesis and a reference by the number of their common words. This amounts to evaluating translation quality in terms of unigram precision and recall, which are highly correlated with human judgements (Lavie et al., ). This measure is closely related to the BLEU-1 evaluation metric and the Meteor (Banerjee and Lavie, 2005) metric (when it is evaluated without considering near-matches and the distortion penalty). We also believe that hypotheses that maximize the unigram precision and recall at the sentence level yield corpus level BLEU-4 scores close the maximal achievable. Indeed, in the setting we will introduce in the next section, BLEU-1 and BLEU-4 are highly correlated: as all correct words of the hypothesis will be compelled to be at their correct position, any hypothesis with a high 1-gram precision is also bound to have a high 2-gram precision, etc. 2.2 Formalizing the Oracle Decoding Problem The oracle decoding problem has already been considered in the case of word-based models, in which all translation units are bound to contain only one word. The problem can then be solved by a bipartite graph matching algorithm (Leusch et al., 2008): given a n m binary matarligxo describing possible t 2r0an08sl)a:ti goinv elinn aks n b×emtw beeinna source words and target words7, this algorithm finds the subset of links maximizing the number of words of the reference that have been translated, while ensuring that each word 6Neither at the sentence (Chiang et al., 2008), nor at the phrase level. 7The (i, j) entry of the matrix is 1if the ith word of the source can be translated by the jth word of the reference, 0 otherwise. 935 is translated only once. Generalizing this approach to phrase-based systems amounts to solving the following problem: given a set of possible translation links between potential phrases of the source and of the target, find the subset of links so that the unigram precision and recall are the highest possible. The corresponding oracle hypothesis can then be easily generated by selecting the target phrases that are aligned with one source phrase, disregarding the others. In addition, to mimic the way OOVs are usually handled, we match identical OOV tokens appearing both in the source and target sentences. In this approach, the unigram precision is always one (every word generated in the oracle hypothesis matches exactly one word in the reference). As a consequence, to find the oracle hypothesis, we just have to maximize the recall, that is the number of words appearing both in the hypothesis and in the reference. Considering phrases instead of isolated words has a major impact on the computational complexity: in this new setting, the optimal segmentations in phrases of both the source and of the target have to be worked out in addition to links selection. Moreover, constraints have to be taken into account so as to enforce a proper segmentation of the source and target sentences. 
These constraints make it impossible to use the approach of (Leusch et al., 2008) and concur in making the oracle decoding problem for phrase-based models more complex than it is for word-based models: it can be proven, using arguments borrowed from (De Nero and Klein, 2008), that this problem is NP-hard even for the simple unigram precision measure. 2.3 An Integer Program for Oracle Decoding To solve the combinatorial problem introduced in the previous section, we propose to cast it into an Integer Linear Programming (ILP) problem, for which many generic solvers exist. ILP has already been used in SMT to find the optimal translation for word-based (Germann et al., 2001) and to study the complexity of learning phrase alignments (De Nero and Klein, 2008) models. Following the latter reference, we introduce the following variables: fi,j (resp. ek,l) is a binary indicator variable that is true when the phrase contains all spans from betweenword position i to j (resp. k to l) of the source (resp. target) sentence. We also introduce a binary variable, denoted ai,j,k,l, to describe a possible link between source phrase fi,j and target phrase ek,l. These variables are built from the entries of the phrase table according to selection strategies introduced in Section 2.4. In the following, index variables are so that: 0 ≤ i< j ≤ n, in the source sentence and 0 ≤ k < l ≤ m, in the target sentence, where n (resp. m) is the length of the source (resp. target) sentence. Solving the oracle decoding problem then amounts to optimizing the following objective function: mi,j,akx,li,Xj,k,lai,j,k,l· (l − k), (1) under the constraints: X ∀x ∈ J1,mK : ek,l ≤ 1 (2) = (3) 1∀,kn,lK : Xai,j,k,l = fk,l (4) ∀i,j : Xai,j,k,l (5) k,l s.tX. Xk≤x≤l ∀∀xy ∈∈ J11,,mnKK : X i,j s.tX. Xi≤y≤j fi,j 1 Xi,j = ei,j Xk,l The objective function (1) corresponds to the number of target words that are generated. The first set of constraints (2) ensures that each word in the reference e ap- pears in no more than one phrase. Maximizing the objective under these constraints amounts to maximizing the unigram recall. The second set of constraints (3) ensures that each word in the source f is translated exactly once, which guarantees that the search space of the ILP problem is the same as the search space of a phrase-based system. Constraints (4) bind the fk,l and ai,j,k,l variables, ensuring that whenever a link ai,j,k,l is active, the corresponding phrase fk,l is also active. Constraints (5) play a similar role for the reference. The Relaxed Problem Even though it accurately models the search space of a phrase-based decoder, this programs is not really useful as is: due to out-ofvocabulary words or missing entries in the phrase table, the constraint that all source words should be translated yields infeasible problems8. We propose to relax this problem and allow some source words to remain untranslated. This is done by replacing constraints (3) by: ∀y ∈ J1,nK : X i,j s.tX. Xi≤y≤j fi,j ≤ 1 To better ref∀lyec ∈t th J1e, bneKh :avior of phrase-based decoders, which attempt to translate all source words, we also need to modify the objective function as follows: X i,Xj,k,l ai,j,k,l · (l − k) +Xfi,j · (j − i) Xi,j (6) The second term in this new objective ensures that optimal solutions translate as many source words as possible. 8An ILP problem is said to be infeasible when tion violates at least one constraint. 
every possible solu- 936 The Relaxed-Distortion Problem A last caveat with the Relaxed optimization program is caused by frequently occurring source tokens, such as function words or punctuation signs, which can often align with more than one target word. For lack of taking distortion information into account in our objective function, all these alignments are deemed equivalent, even if some of them are clearly more satisfactory than others. This situation is illustrated on Figure 1. le chat et the cat and le the chien dog Figure 1: Equivalent alignments between “le” and “the”. The dashed lines corresponds to a less interpretable solution. To overcome this difficulty, we propose a last change to the objective function: X i,Xj,k,l ai,j,k,l · (l − k) +Xfi,j · (j − i) X ai,j,k,l|k − i| Xi,j −α (7) i Xk ,l X,j, Compared to the objective function of the relaxed problem (6), we introduce here a supplementary penalty factor which favors monotonous alignments. For each phrase pair, the higher the difference between source and target positions, the higher this penalty. If α is small enough, this extra term allows us to select, among all the optimal alignments of the re l axed problem, the one with the lowest distortion. In our experiments, we set α to min {n, m} to ensure that the penalty factor is always smminall{enr, ,tmha}n tthoe e rneswuarred t fhoart aligning atwltyo single iwso ardlwsa. 2.4 Selecting Indicator Variables In the approach introduced in the previous sections, the oracle decoding problem is solved by selecting, among a set of possible translation links, the ones that yield the solution with the highest unigram recall. We propose two strategies to build this set of possible translation links. In the first one, denoted exact match, an indicator ai,j,k,l is created if there is an entry (f, e) so that f spans from word position ito j in the source and e from word position k to l in the target. In this strategy, the ILP program considers exactly the same ruleset as conventional phrase-based decoders. We also consider an alternative strategy, which could help us to identify errors made during the phrase extraction process. In this strategy, denoted inside match, an indicator ai,j,k,l is created when the following three criteria are met: i) f spans from position ito j of the source; ii) a substring of e, denoted e, spans from position k to l of the reference; iii) (f, e¯) is not an entry of the phrase table. The resulting set of indicator variables thus contains, at least, all the variables used in the exact match strategy. In addition, we license here the use of phrases containing words that do not occur in the reference. In fact, using such solutions can yield higher BLEU scores when the reward for additional correct matches exceeds the cost incurred by wrong predictions. These cases are symptoms of situations where the extraction heuristic failed to extract potentially useful subphrases. 2.5 Oracle Decoding with Reordering Constraints The ILP problem introduced in the previous section can be extended in several ways to describe and test various improvements to phrase-based systems or to evaluate the impact of different parameter settings. This flexibility mainly stems from the possibility offered by our framework to express arbitrary constraints over variables. In this section, we illustrate these possibilities by describing how reordering constraints can easily be considered. As a first example, the Moses decoder uses a distortion limit to constrain the set of possible reorderings. 
This constraint “enforces (...) that the last word of a phrase chosen for translation cannot be more than d9 words from the leftmost untranslated word in the source” (Lopez, 2009) and is expressed as: ∀aijkl , ai0j0k0l0 s.t. k > k0, aijkl · ai0j0k0l0 · |j − i0 + 1| ≤ d, The maximum distortion limit strategy (Lopez, 2009) is also easily expressed and take the following form (assuming this constraint is parameterized by d): ∀l < m − 1, ai,j,k,l·ai0,j0,l+1,l0 · |i0 − j − 1| 71is%t e6hs.a distortion greater that Moses default distortion limit. alignment decisions enabled by the use of larger training corpora and phrase table. To evaluate the impact ofthe second heuristic, we computed the number of phrases discarded by Moses (be- cause of the default ttl limit) but used in the oracle hypotheses. In the English to French NEWSCO setting, they account for 34.11% of the total number of phrases used in the oracle hypotheses. When the oracle decoder is constrained to use the same phrase table as Moses, its BLEU-4 score drops to 42.78. This shows that filtering the phrase table prior to decoding discards many useful phrase pairs and is seriously limiting the best achievable performance, a conclusion shared with (Auli et al., 2009). Search Errors Search errors can be identified by comparing the score of the best hypothesis found by Moses and the score of the oracle hypothesis. If the score of the oracle hypothesis is higher, then there has been a search error; on the contrary, there has been an estimation error when the score of the oracle hypothesis is lower than the score of the best hypothesis found by Moses. 940 Based on the comparison of the score of Moses hypotheses and of oracle hypotheses for the English to French NEWSCO setting, our preliminary conclusion is that the number of search errors is quite limited: only about 5% of the hypotheses of our oracle decoder are actually getting a better score than Moses solutions. Again, this shows that the scoring function (model error) is one of the main bottleneck of current PBTS. Comparing these hypotheses is nonetheless quite revealing: while Moses mostly selects phrase pairs with high translation scores and generates monotonous alignments, our ILP decoder uses larger reorderings and less probable phrases to achieve better solutions: on average, the reordering score of oracle solutions is −5.74, compared to −76.78 fscoro rMeo osfe osr outputs. iGonivsen is −the5 weight assigned through MERT training to the distortion score, no wonder that these hypotheses are severely penalized. The Impact of Phrase Length The observed outputs do not only depend on decisions made during the search, but also on decisions made during training. One such decision is the specification of maximal length for the source and target phrases. In our framework, evaluating the impact of this decision is simple: it suffices to change the definition of indicator variables so as to consider only alignments between phrases of a given length. In the English-French NEWSCO setting, the most restrictive choice, when only alignments between single words are authorized, yields an oracle BLEU-4 of 48.68; however, authorizing phrases up to length 2 allows to achieve an oracle value of 66.57, very close to the score achieved when considering all extracted phrases (67.77). This is corroborated with a further analysis of our oracle alignments, which use phrases whose average source length is 1.21 words (respectively 1.31 for target words). 
While many studies have already acknowledged the predominance of "small" phrases in actual translations, our oracle scores suggest that, for this language pair, increasing the phrase length limit beyond 2 or 3 might be a waste of computational resources.

4 Related Work
To the best of our knowledge, only a few works try to study the expressive power of phrase-based machine translation systems or to provide tools for analyzing potential causes of failure. The approach described in (Auli et al., 2009) is very similar to ours: in this study, the authors propose to find and analyze the limits of machine translation systems by studying reference reachability. A reference is reachable for a given system if it can be exactly generated by this system. Reference reachability is assessed using Moses in forced decoding mode: during search, all hypotheses that deviate from the reference are simply discarded. Even though the main goal of this study was to compare the search spaces of phrase-based and hierarchical systems, it also provides some insights into the impact of various search parameters in Moses, delivering conclusions that are consistent with our main results. As described in Section 1.2, these authors also propose a typology of the errors of statistical translation systems, but do not attempt to provide methods for identifying them. The authors of (Turchi et al., 2008) study the learning capabilities of Moses by extensively analyzing learning curves representing translation performance as a function of the number of examples, and by corrupting the model parameters. Even though their focus is more on assessing the scoring function, they reach conclusions similar to ours: the current bottleneck of translation performance lies not in the representational power of PBTS but in their scoring functions. Oracle decoding is useful for computing reachable pseudo-references in the context of discriminative training. This is the main motivation of (Tillmann and Zhang, 2006), where the authors compute high-BLEU hypotheses by running a conventional decoder so as to maximize a per-sentence approximation of BLEU-4, under a simple (local) reordering model. Oracle decoding has also been used to assess the limitations induced by various reordering constraints in (Dreyer et al., 2007). To this end, the authors propose a beam-search-based oracle decoder, which computes lower bounds of the best achievable BLEU-4 using dynamic programming techniques over finite-state (for so-called local and IBM constraints) or hierarchically structured (for ITG constraints) sets of hypotheses. Even though the numbers reported in this study are not directly comparable with ours (the best BLEU-4 oracle they achieve on Europarl German-to-English is approximately 48, but they considered a smaller version of the training corpus and the WMT'06 test set), it seems that our decoder is not only conceptually much simpler, but also achieves much more optimistic lower bounds of the oracle BLEU score. The approach described in (Li and Khudanpur, 2009) employs a similar technique, guiding a heuristic search in a hypergraph representing possible translation hypotheses with n-gram count matches, which amounts to decoding with an n-gram model trained on the sole reference translation. Additional tricks are presented in this article to speed up decoding. Computing oracle BLEU scores is also the subject of (Zens and Ney, 2005; Leusch et al., 2008), yet with a different emphasis.
These studies are concerned with finding the best hypotheses in a word graph or in a consensus network, a problem that has various implications for multi-pass decoding and/or system combination techniques. The former reference describes an exponential approximate algorithm, while the latter proves the NP-completeness of this problem and discusses various heuristic approaches. Our problem is somewhat more complex: using their techniques would require us to build word graphs containing all the translations induced by arbitrary segmentations and permutations of the source sentence.

5 Conclusions
In this paper, we have presented a methodology for analyzing the errors of PBTS, based on the computation of an approximation of the BLEU-4 oracle score. We have shown that this approximation can be computed fairly accurately and efficiently using Integer Linear Programming techniques. Our main result is a confirmation of the fact that extant PBTS are expressive enough to achieve very high translation performance with respect to conventional quality measurements. The main efforts should therefore strive to improve the way phrases and hypotheses are scored during training. This gives further support to attempts aimed at designing context-dependent scoring functions, as in (Stroppa et al., 2007; Gimpel and Smith, 2008), or at performing discriminative training of feature-rich models (Bangalore et al., 2007). We have shown that the examination of difficult-to-translate sentences is an effective way to detect errors or inconsistencies in the reference translations, making our approach a potential aid for controlling the quality or assessing the difficulty of test data. Our experiments have also highlighted the impact of various parameters. Various extensions of the baseline ILP program have been suggested and/or evaluated. In particular, the ILP formalism lends itself well to expressing various constraints that are typically used in conventional PBTS. In our future work, we aim at using this ILP framework to systematically assess various search configurations. We plan to explore how replacing non-reachable references with high-scoring pseudo-references can improve discriminative training of PBTS. We are also interested in determining how tight our approximation of the BLEU-4 score is; to this end, we intend to compute the best BLEU-4 score within the n-best solutions of the oracle decoding problem.

Acknowledgments
Warm thanks to Houda Bouamor for helping us with the annotation tool. This work has been partly financed by OSEO, the French State Agency for Innovation, under the Quaero program.

References
Tobias Achterberg. 2007. Constraint Integer Programming. Ph.D. thesis, Technische Universität Berlin. http://opus.kobv.de/tuberlin/volltexte/2007/1611/.
Abhishek Arun and Philipp Koehn. 2007. Online learning methods for discriminative training of phrase based statistical machine translation. In Proc. of MT Summit XI, Copenhagen, Denmark.
Michael Auli, Adam Lopez, Hieu Hoang, and Philipp Koehn. 2009. A systematic analysis of translation model search spaces. In Proc. of WMT, pages 224–232, Athens, Greece.
Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proc.
of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, Ann Arbor, Michigan.
Srinivas Bangalore, Patrick Haffner, and Stephan Kanthak. 2007. Statistical machine translation through global lexical selection and sentence reconstruction. In Proc. of ACL, pages 152–159, Prague, Czech Republic.
Léon Bottou and Olivier Bousquet. 2008. The tradeoffs of large scale learning. In Proc. of NIPS, pages 161–168, Vancouver, B.C., Canada.
Chris Callison-Burch, Philipp Koehn, Christof Monz, and Josh Schroeder. 2009. Findings of the 2009 Workshop on Statistical Machine Translation. In Proc. of WMT, pages 1–28, Athens, Greece.
David Chiang, Steve DeNeefe, Yee Seng Chan, and Hwee Tou Ng. 2008. Decomposability of translation metrics for improved evaluation and efficient algorithms. In Proc. of EMNLP, pages 610–619, Honolulu, Hawaii.
John DeNero and Dan Klein. 2008. The complexity of phrase alignment problems. In Proc. of ACL: HLT, Short Papers, pages 25–28, Columbus, Ohio.
Markus Dreyer, Keith B. Hall, and Sanjeev P. Khudanpur. 2007. Comparing reordering constraints for SMT using efficient BLEU oracle computation. In NAACL-HLT/AMTA Workshop on Syntax and Structure in Statistical Translation, pages 103–110, Rochester, New York.
Ulrich Germann, Michael Jahr, Kevin Knight, Daniel Marcu, and Kenji Yamada. 2001. Fast decoding and optimal decoding for machine translation. In Proc. of ACL, pages 228–235, Toulouse, France.
Ulrich Germann, Michael Jahr, Kevin Knight, Daniel Marcu, and Kenji Yamada. 2004. Fast and optimal decoding for machine translation. Artificial Intelligence, 154(1-2):127–143.
Ulrich Germann. 2003. Greedy decoding for statistical machine translation in almost linear time. In Proc. of NAACL, pages 1–8, Edmonton, Canada.
Kevin Gimpel and Noah A. Smith. 2008. Rich source-side context for statistical machine translation. In Proc. of WMT, pages 9–17, Columbus, Ohio.
Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proc. of NAACL, pages 48–54, Edmonton, Canada.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. of ACL, demonstration session.
Philipp Koehn. 2004. Pharaoh: A beam search decoder for phrase-based statistical machine translation models. In Proc. of AMTA, pages 115–124, Washington DC.
Shankar Kumar and William Byrne. 2005. Local phrase reordering models for statistical machine translation. In Proc. of HLT, pages 161–168, Vancouver, Canada.
Alon Lavie, Kenji Sagae, and Shyamsundar Jayaraman. 2004. The significance of recall in automatic metrics for MT evaluation. In Proc. of AMTA, pages 134–143, Washington DC.
Gregor Leusch, Evgeny Matusov, and Hermann Ney. 2008. Complexity of finding the BLEU-optimal hypothesis in a confusion network. In Proc. of EMNLP, pages 839–847, Honolulu, Hawaii.
Zhifei Li and Sanjeev Khudanpur. 2009. Efficient extraction of oracle-best translations from hypergraphs. In Proc. of NAACL, pages 9–12, Boulder, Colorado.
Percy Liang, Alexandre Bouchard-Côté, Dan Klein, and Ben Taskar. 2006. An end-to-end discriminative approach to machine translation. In Proc. of ACL, pages 761–768, Sydney, Australia.
Adam Lopez. 2009. Translation as weighted deduction. In Proc. of EACL, pages 532–540, Athens, Greece.
Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51.
Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proc. of ACL, pages 160–167, Sapporo, Japan.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proc. of ACL, pages 311–318, Philadelphia, Pennsylvania.
Dan Roth and Wen-tau Yih. 2005. Integer linear programming inference for conditional random fields. In Proc. of ICML, pages 737–744, Bonn, Germany.
Nicolas Stroppa, Antal van den Bosch, and Andy Way. 2007. Exploiting source similarity for SMT using context-informed features. In Andy Way and Barbara Gawronska, editors, Proc. of TMI, pages 231–240, Skövde, Sweden.
Christoph Tillmann and Tong Zhang. 2006. A discriminative global training algorithm for statistical MT. In Proc. of ACL, pages 721–728, Sydney, Australia.
Marco Turchi, Tijl De Bie, and Nello Cristianini. 2008. Learning performance of a machine translation system: a statistical and computational analysis. In Proc. of WMT, pages 35–43, Columbus, Ohio.
Richard Zens and Hermann Ney. 2005. Word graphs for statistical machine translation. In Proc. of the ACL Workshop on Building and Using Parallel Texts, pages 191–198, Ann Arbor, Michigan.

6 0.19286521 87 emnlp-2010-Nouns are Vectors, Adjectives are Matrices: Representing Adjective-Noun Constructions in Semantic Space

7 0.18782265 122 emnlp-2010-WikiWars: A New Corpus for Research on Temporal Expressions

8 0.18779163 29 emnlp-2010-Combining Unsupervised and Supervised Alignments for MT: An Empirical Study

9 0.17974664 105 emnlp-2010-Title Generation with Quasi-Synchronous Grammar

10 0.17372763 118 emnlp-2010-Utilizing Extra-Sentential Context for Parsing

11 0.17330833 36 emnlp-2010-Discriminative Word Alignment with a Function Word Reordering Model

12 0.16762364 9 emnlp-2010-A New Approach to Lexical Disambiguation of Arabic Text

13 0.16075593 61 emnlp-2010-Improving Gender Classification of Blog Authors

14 0.15982048 82 emnlp-2010-Multi-Document Summarization Using A* Search and Discriminative Learning

15 0.15833311 114 emnlp-2010-Unsupervised Parse Selection for HPSG

16 0.14808598 72 emnlp-2010-Learning First-Order Horn Clauses from Web Text

17 0.13135332 79 emnlp-2010-Mining Name Translations from Entity Graph Mapping

18 0.12663941 57 emnlp-2010-Hierarchical Phrase-Based Translation Grammars Extracted from Alignment Posterior Probabilities

19 0.12659709 11 emnlp-2010-A Semi-Supervised Approach to Improve Classification of Infrequent Discourse Relations Using Feature Vector Extension

20 0.12440387 30 emnlp-2010-Confidence in Structured-Prediction Using Confidence-Weighted Models


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(3, 0.012), (10, 0.013), (12, 0.033), (29, 0.088), (30, 0.022), (32, 0.012), (49, 0.319), (52, 0.041), (56, 0.105), (62, 0.031), (66, 0.092), (72, 0.063), (76, 0.031), (79, 0.011), (87, 0.015), (89, 0.014)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.83728433 46 emnlp-2010-Evaluating the Impact of Alternative Dependency Graph Encodings on Solving Event Extraction Tasks

Author: Ekaterina Buyko ; Udo Hahn

Abstract: In state-of-the-art approaches to information extraction (IE), dependency graphs constitute the fundamental data structure for syntactic structuring and subsequent knowledge elicitation from natural language documents. The top-performing systems in the BioNLP 2009 Shared Task on Event Extraction all shared the idea to use dependency structures generated by a variety of parsers either directly or in some converted manner — and optionally modified their output to fit the special needs of IE. As there are systematic differences between various dependency representations being used in this competition, we scrutinize on different encoding styles for dependency information and their possible impact on solving several IE tasks. After assessing more or less established dependency representations such as the Stanford and CoNLL-X dependen— cies, we will then focus on trimming operations that pave the way to more effective IE. Our evaluation study covers data from a number of constituency- and dependency-based parsers and provides experimental evidence which dependency representations are particularly beneficial for the event extraction task. Based on empirical findings from our study we were able to achieve the performance of 57.2% F-score on the development data set of the BioNLP Shared Task 2009.

same-paper 2 0.75482863 80 emnlp-2010-Modeling Organization in Student Essays

Author: Isaac Persing ; Alan Davis ; Vincent Ng

Abstract: Automated essay scoring is one of the most important educational applications of natural language processing. Recently, researchers have begun exploring methods of scoring essays with respect to particular dimensions of quality such as coherence, technical errors, and relevance to prompt, but there is relatively little work on modeling organization. We present a new annotated corpus and propose heuristic-based and learning-based approaches to scoring essays along the organization dimension, utilizing techniques that involve sequence alignment, alignment kernels, and string kernels.

3 0.48001829 82 emnlp-2010-Multi-Document Summarization Using A* Search and Discriminative Learning

Author: Ahmet Aker ; Trevor Cohn ; Robert Gaizauskas

Abstract: In this paper we address two key challenges for extractive multi-document summarization: the search problem of finding the best scoring summary and the training problem of learning the best model parameters. We propose an A* search algorithm to find the best extractive summary up to a given length, which is both optimal and efficient to run. Further, we propose a discriminative training algorithm which directly maximises the quality ofthe best summary, rather than assuming a sentence-level decomposition as in earlier work. Our approach leads to significantly better results than earlier techniques across a number of evaluation metrics.

4 0.47758314 105 emnlp-2010-Title Generation with Quasi-Synchronous Grammar

Author: Kristian Woodsend ; Yansong Feng ; Mirella Lapata

Abstract: The task of selecting information and rendering it appropriately appears in multiple contexts in summarization. In this paper we present a model that simultaneously optimizes selection and rendering preferences. The model operates over a phrase-based representation of the source document which we obtain by merging PCFG parse trees and dependency graphs. Selection preferences for individual phrases are learned discriminatively, while a quasi-synchronous grammar (Smith and Eisner, 2006) captures rendering preferences such as paraphrases and compressions. Based on an integer linear programming formulation, the model learns to generate summaries that satisfy both types of preferences, while ensuring that length, topic coverage and grammar constraints are met. Experiments on headline and image caption generation show that our method obtains state-of-the-art performance using essentially the same model for both tasks without any major modifications.

5 0.46893832 107 emnlp-2010-Towards Conversation Entailment: An Empirical Investigation

Author: Chen Zhang ; Joyce Chai

Abstract: While a significant amount of research has been devoted to textual entailment, automated entailment from conversational scripts has received less attention. To address this limitation, this paper investigates the problem of conversation entailment: automated inference of hypotheses from conversation scripts. We examine two levels of semantic representations: a basic representation based on syntactic parsing from conversation utterances and an augmented representation taking into consideration of conversation structures. For each of these levels, we further explore two ways of capturing long distance relations between language constituents: implicit modeling based on the length of distance and explicit modeling based on actual patterns of relations. Our empirical findings have shown that the augmented representation with conversation structures is important, which achieves the best performance when combined with explicit modeling of long distance relations.

6 0.46856216 69 emnlp-2010-Joint Training and Decoding Using Virtual Nodes for Cascaded Segmentation and Tagging Tasks

7 0.46836627 18 emnlp-2010-Assessing Phrase-Based Translation Models with Oracle Decoding

8 0.46708822 102 emnlp-2010-Summarizing Contrastive Viewpoints in Opinionated Text

9 0.46451429 65 emnlp-2010-Inducing Probabilistic CCG Grammars from Logical Form with Higher-Order Unification

10 0.46355119 49 emnlp-2010-Extracting Opinion Targets in a Single and Cross-Domain Setting with Conditional Random Fields

11 0.46285117 120 emnlp-2010-What's with the Attitude? Identifying Sentences with Attitude in Online Discussions

12 0.46232247 35 emnlp-2010-Discriminative Sample Selection for Statistical Machine Translation

13 0.46214017 58 emnlp-2010-Holistic Sentiment Analysis Across Languages: Multilingual Supervised Latent Dirichlet Allocation

14 0.46147364 78 emnlp-2010-Minimum Error Rate Training by Sampling the Translation Lattice

15 0.45464194 103 emnlp-2010-Tense Sense Disambiguation: A New Syntactic Polysemy Task

16 0.45455995 25 emnlp-2010-Better Punctuation Prediction with Dynamic Conditional Random Fields

17 0.45418733 32 emnlp-2010-Context Comparison of Bursty Events in Web Search and Online Media

18 0.45390475 86 emnlp-2010-Non-Isomorphic Forest Pair Translation

19 0.45255432 57 emnlp-2010-Hierarchical Phrase-Based Translation Grammars Extracted from Alignment Posterior Probabilities

20 0.45112553 98 emnlp-2010-Soft Syntactic Constraints for Hierarchical Phrase-Based Translation Using Latent Syntactic Distributions