emnlp emnlp2012 emnlp2012-108 knowledge-graph by maker-knowledge-mining

108 emnlp-2012-Probabilistic Finite State Machines for Regression-based MT Evaluation


Source: pdf

Author: Mengqiu Wang ; Christopher D. Manning

Abstract: Accurate and robust metrics for automatic evaluation are key to the development of statistical machine translation (MT) systems. We first introduce a new regression model that uses a probabilistic finite state machine (pFSM) to compute weighted edit distance as predictions of translation quality. We also propose a novel pushdown automaton extension of the pFSM model for modeling word swapping and cross alignments that cannot be captured by standard edit distance models. Our models can easily incorporate a rich set of linguistic features, and automatically learn their weights, eliminating the need for ad-hoc parameter tuning. Our methods achieve state-of-the-art correlation with human judgments on two different prediction tasks across a diverse set of standard evaluations (NIST OpenMT06,08; WMT0608).

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract Accurate and robust metrics for automatic evaluation are key to the development of statistical machine translation (MT) systems. [sent-4, score-0.147]

2 We first introduce a new regression model that uses a probabilistic finite state machine (pFSM) to compute weighted edit distance as predictions of translation quality. [sent-5, score-0.752]

3 We also propose a novel pushdown automaton extension of the pFSM model for modeling word swapping and cross alignments that cannot be captured by standard edit distance models. [sent-6, score-0.684]

4 Our methods achieve state-of-the-art correlation with human judgments on two different prediction tasks across a diverse set of standard evaluations (NIST OpenMT06,08; WMT0608). [sent-8, score-0.119]

5 1 Introduction Research in automatic machine translation (MT) evaluation metrics has been a key driving force behind the recent advances of statistical machine translation (SMT) systems. [sent-9, score-0.23]

6 Later metrics that move beyond n-grams achieve higher accuracy and improved robustness from resources like WordNet synonyms (Miller et al. [sent-18, score-0.139]

7 Recent models use linear or SVM regression and train them against human judgments to automatically learn feature weights, and have shown state-of-the-art correlation with human judgments (Kulesza and Shieber, 2004; Albrecht and Hwa, 2007a; Albrecht and Hwa, 2007b; Sun et al. [sent-22, score-0.314]

8 It is built on the backbone of weighted edit distance models, but learns to weight edit operations in a probabilistic regression framework. [sent-31, score-0.899]

9 2 pFSMs for MT Regression We start off by framing the problem of machine translation evaluation in terms of weighted edit distances calculated using probabilistic finite state machines (pFSMs). [sent-39, score-0.54]

10 Commonly used models such as HMMs, n-gram models, Markov Chains and probabilistic finite state transducers all fall in the broad family of pFSMs (Knight and Al-Onaizan, 1998; Eisner, 2002; Kumar and Byrne, 2003; Vidal et al. [sent-42, score-0.144]

11 Unlike all the other applications of FSMs where tokens in the language are words, in our language tokens are edit operations. [sent-44, score-0.313]

12 A string of tokens that our pFSM accepts is an edit sequence that transforms a reference translation (denoted as ref) into a system translation (sys). [sent-45, score-0.502]

13 Our pFSM has a unique start and stop state, and one state per edit operation (i. [sent-46, score-0.407]

14 In this basic pFSM model, the feature functions are simply identity functions that emit the current state, and the state transition sequence of the previous state and the current state. [sent-50, score-0.195]

15 The feature weights are then automatically learned by training a global regression model where some translational equivalence judgment score (e. [sent-51, score-0.14]

16 , 2006)) for each sys and ref translation pair is the regression target (ŷ). [sent-54, score-0.406]

17 We introduce a new regression variable y ∈ R which is the log-sum of the unnormalized weights (Eqn. [sent-55, score-0.14]

18 (1)) over all edit sequences, formally expressed as: y = log Σ_{e ⊆ e*} Π_{k=1}^{|e|} exp(θ · f(e_{k−1}, e_k, s, r)) (2), where e* denotes a valid edit sequence. [sent-56, score-0.313]

19 Since the “gold” edit sequence are not given at training or prediction time, we treat the edit sequences as hidden variables and sum them out. [sent-57, score-0.626]

20 The sum over an exponential number of edit sequences in e∗ is solved efficiently using a forward-backward style dynamic program. [sent-58, score-0.313]
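
(Illustrative sketch, not the authors' code.) To make this concrete, the following minimal Python sketch computes the log-sum in Eqn. (2) with a forward dynamic program over edit positions, under the simplifying assumption that each edit operation carries a single learned log-weight rather than the full state-transition feature vector; the operation names in the weight dictionary are hypothetical.

    import math

    def logsumexp(xs):
        """Numerically stable log(sum(exp(x) for x in xs))."""
        m = max(xs)
        return m + math.log(sum(math.exp(x - m) for x in xs))

    def log_sum_over_edits(ref, sys, w):
        """Forward DP over edit positions (i in ref, j in sys).
        w maps edit operations to log-weights; here the pFSM's
        state-transition features are collapsed into one weight
        per operation for illustration only.
        alpha[i][j] = log-sum of scores of all edit sequences
        that consume ref[:i] and produce sys[:j]."""
        n, m = len(ref), len(sys)
        NEG_INF = float("-inf")
        alpha = [[NEG_INF] * (m + 1) for _ in range(n + 1)]
        alpha[0][0] = 0.0  # start state, empty edit sequence
        for i in range(n + 1):
            for j in range(m + 1):
                if alpha[i][j] == NEG_INF:
                    continue
                cands = []
                if i < n:                    # Delete a ref token
                    cands.append((i + 1, j, w["Delete"]))
                if j < m:                    # Insert a sys token
                    cands.append((i, j + 1, w["Insert"]))
                if i < n and j < m:          # substitution operations
                    op = "Sword" if ref[i] == sys[j] else "Sub"
                    cands.append((i + 1, j + 1, w[op]))
                for (ni, nj, lw) in cands:
                    new = alpha[i][j] + lw
                    prev = alpha[ni][nj]
                    alpha[ni][nj] = new if prev == NEG_INF else logsumexp([prev, new])
        return alpha[n][m]  # y of Eqn. (2), up to the simplifications above

    # Example:
    # y = log_sum_over_edits("the cat sat".split(), "a cat sat".split(),
    #                        {"Delete": -2.0, "Insert": -2.0, "Sub": -3.0, "Sword": 0.0})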

21 Any edit sequence that does not lead to a complete transformation of the translation pair has a probability of zero in our model. [sent-59, score-0.396]

22 We replaced the standard substitution edit operation with three new operations: Sword for same word substitution, Slemma for same lemma substitution, and Spunc for same punctuation substitution. [sent-65, score-0.461]

23 The start state can transition into any of the edit states with a constant unit cost, and each edit state can transition into any other edit state if and only if the edit operation involved is valid at the current edit position (e. [sent-67, score-1.944]

24 , the model cannot transition into Delete state if it is already at the end of ref; similarly it cannot transition into Slemma unless the lemma of the two words under edit in sys and ref match). [sent-69, score-0.281]

25 Figure 1 (caption): This diagram illustrates an example translation pair in the Chinese-English portion of OpenMT08 (Doc: AFP CMN 20070703). [sent-72, score-0.179]

26 The three rows below are the best state transition sequences generated by the models (pFSM, pPDA, pPDA+f); the corresponding alignments are shown with different styled lines, with later models generating strictly more alignments than earlier ones. [sent-80, score-0.549]

27 When the ends of both sentences are reached, the model transitions into the stop state and ends the edit sequence. [sent-81, score-0.377]
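
(Illustrative sketch under assumptions.) The state inventory and the transition constraints described in the preceding sentences can be summarised as below; the lemma() helper and the exact validity conditions are placeholders, not the paper's implementation.

    # States of the basic pFSM: start, stop, and one state per edit operation.
    STATES = ["START", "STOP", "Insert", "Delete", "Sword", "Slemma", "Spunc"]

    def lemma(tok):
        """Placeholder lemmatiser; a real system would use a morphological analyser."""
        return tok.lower().rstrip("s")

    def valid_ops(ref, sys, i, j):
        """Edit operations valid at position (i in ref, j in sys):
        Delete is invalid at the end of ref, Insert is invalid at the end of sys,
        and the substitution states require the corresponding match."""
        ops = []
        if i < len(ref):
            ops.append("Delete")
        if j < len(sys):
            ops.append("Insert")
        if i < len(ref) and j < len(sys):
            r, s = ref[i], sys[j]
            if r == s:
                ops.append("Sword")
            if lemma(r) == lemma(s):
                ops.append("Slemma")
            if r == s and not r[0].isalnum():
                ops.append("Spunc")
        if i == len(ref) and j == len(sys):
            ops.append("STOP")  # both sentences consumed: end the edit sequence
        return ops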

28 The first row in Figure 1 starting with pFSM shows a state transition sequence for an example sys/ref translation pair. [sent-82, score-0.246]

29 There exists a one-to-one correspondence between substitution edits and word alignments. [sent-83, score-0.117]

30 Therefore this example state transition sequence correctly generates an alignment for the word 43 and people. [sent-84, score-0.131]

31 , 2006), which is based on the idea of word error rate measured in terms of edit distance, to better understand the intuition behind our model. [sent-86, score-0.313]

32 2 Restricted pPDA Extension A shortcoming of edit distance models is that they cannot handle long-distance word swapping a pervasive phenomenon found in most natural languages. [sent-97, score-0.497]

33 Edit operations in standard edit distance models need to obey a strict incremental order in their edit positions, in order to admit efficient dynamic programming solutions. [sent-98, score-0.752]

34 The same limitation is shared by our pFSM model, where the Markov assumption is made based on the incremental order of edit positions. [sent-99, score-0.313]

35 Although there is no known solution to the general problem of computing edit distance where long-distance swapping is permitted (Dombb et al. [sent-100, score-0.529]

36 We present a simple but novel extension of the pFSM model to a restricted probabilistic pushdown automaton (pPDA), to capture non-nested word swapping within a limited distance, which covers the majority of word swapping observed in real data (Wu, 2010). [sent-102, score-0.434]

37 The addition of stacks for each transition state endows the machine with memory, extending its expressiveness beyond that of context-free formalisms. [sent-104, score-0.131]

38 By construction, at any stage in a normal edit sequence, the pPDA model can “jump” forward within a fixed distance (controlled by a max distance parameter) to a new edit position on either side of the sentence pair, and start a new edit subsequence from there. [sent-105, score-1.092]

39 Assuming the jump was made on the sys side, the machine remembers its current edit position in sys as Jstart, and the destination position on sys after the jump as Jlanding. [sent-106, score-1.27]

40 We constrain our model so that the only edit operations that are allowed immediately following a “jump” are from the set of substitution operations (e. [sent-107, score-0.526]

41 And after at least one substitution has been made, the device can now “jump” back to Jstart, remembering the current edit position as Jend. [sent-110, score-0.427]

42 Another constraint here is that after the backward "jump", all edit operations are permitted except for Insert, which cannot take place until at least one [...] (Footnote 2: The edit distance algorithm described in Cormen et al. [sent-111, score-0.784]

43 (2001) can only handle adjacent word swapping (transposition), but not long-distance swapping.) [sent-112, score-0.119]

44 (Footnote 3: Recall that we transform ref into sys, and thus on the sys side, we can only insert but not delete.) [sent-113, score-0.243]

45 The argument applies equally to the case where the jump was made on the other side. [sent-114, score-0.248]

46 When the edit sequence advances to position Jlanding, the only operation allowed at that point is another “jump” forward operation to position Jend, at which point we also clear all memory about jump positions and reset. [sent-116, score-0.667]
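
(Schematic sketch, hypothetical names.) The jump bookkeeping described in the preceding sentences amounts to remembering Jstart, Jlanding and Jend and restricting which operations may follow; a minimal Python sketch of that control logic, not the authors' implementation:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class JumpState:
        """Memory the restricted pPDA keeps while a jump is open (hypothetical names)."""
        j_start: Optional[int] = None    # sys position where the forward jump left a gap
        j_landing: Optional[int] = None  # sys position the machine jumped forward to
        j_end: Optional[int] = None      # sys position reached when jumping back to the gap
        substituted: bool = False        # has a substitution happened since the forward jump?

    MAX_JUMP = 5  # max jump distance parameter

    def allowed_after(js, sys_pos):
        """Which edit operations are allowed next, given the open jump state js.
        'JUMP_BACK' / 'JUMP_TO_END' stand for the backward jump and the final
        forward jump that closes the gap and resets the jump memory."""
        ops = {"Delete", "Insert", "Sword", "Slemma", "Spunc"}
        if js.j_start is not None and not js.substituted:
            # immediately after the forward jump, only substitutions are allowed
            return {"Sword", "Slemma", "Spunc"}
        if js.j_start is not None and js.substituted and js.j_end is None:
            ops.add("JUMP_BACK")            # may now jump back to j_start
        if js.j_end is not None:
            ops.discard("Insert")           # no Insert right after the backward jump
            if sys_pos == js.j_landing:
                return {"JUMP_TO_END"}      # gap filled: must jump to j_end and reset
        return ops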

47 An intuitive explanation is that when pPDA makes the first forward jump, a gap is left in sys that has not been edited yet. [sent-117, score-0.129]

48 It remembers where it left off, and comes back to it after some substitutions have been made to complete the edit sequence. [sent-118, score-0.378]

49 The second row in Figure 1 (starting with pPDA) illustrates an edit sequence in a pPDA model that involves three “jump” operations, which are annotated and indexed by number 1-3 in the example. [sent-119, score-0.345]

50 “Jump 1” creates an un-edited gap between the words 43 and western; after two substitutions, the model makes “jump 2” to go back and edit the gap. [sent-120, score-0.313]

51 The only edit permitted immediately after “jump 2” is deleting the comma in ref, since inserting the word 43 in sys before any substitution is disallowed. [sent-121, score-0.565]

52 Once the gap is completed, the model resumes at position Jend by making “jump 3”, and completes the jump sequence. [sent-122, score-0.271]

53 In a general pPDA model without the limited distance and non-nestedness jump constraints, there could be recursive jump structures, which violates the finite state property that we are looking for. [sent-124, score-0.672]

54 Two variants of the ~0, (Footnote 4: The length of the longest edit sequence with jumps is only increased by [sent-134, score-0.337]

55 0.5 ∗ max(|s|, |r|) in the worst case, and on the whole swapping is rare in comparison to basic edits.) [sent-135, score-0.119]

56 3 Rich Linguistic Features In this section we will add new substitution operations beyond those introduced in Section 2, to capture various linguistic phenomena. [sent-138, score-0.152]

57 These new substitution operations correspond to new transition states in the pPDA. [sent-139, score-0.219]

58 Synonyms have been found to be very useful in METEOR and TERplus, and can be easily built into our model as a new substitution operation Ssyn. [sent-143, score-0.121]

59 We add a substitution operator (Spara) that matches words that are paraphrases. [sent-147, score-0.116]

60 To better take advantage of paraphrase information at the multi-word phrase level, we extended our substitution operations to match longer phrases by adding one-to-many and many-to-many n-gram block substitutions. [sent-148, score-0.275]
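
(Illustrative sketch.) One way to enumerate such one-to-many and many-to-many block substitutions against a paraphrase table at a given edit position is sketched below; the table contents, function names and length cap are assumptions for illustration only.

    def block_substitutions(ref, sys, i, j, para_table, max_len=3):
        """Enumerate (ref_span_len, sys_span_len) block substitutions licensed by a
        paraphrase table at position (i, j). para_table maps a phrase string to the
        set of phrases it paraphrases; contents here are purely illustrative."""
        blocks = []
        for li in range(1, max_len + 1):
            for lj in range(1, max_len + 1):
                if len(ref[i:i + li]) < li or len(sys[j:j + lj]) < lj:
                    continue  # span would run past the end of a sentence
                r_span = " ".join(ref[i:i + li])
                s_span = " ".join(sys[j:j + lj])
                if s_span in para_table.get(r_span, set()):
                    blocks.append((li, lj))
        return blocks

    # Example:
    # para_table = {"passed away": {"died"}}
    # block_substitutions("he passed away".split(), "he died".split(), 1, 1, para_table)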

61 Such translations usually score well with existing metrics but poorly among human evaluators. [sent-161, score-0.13]

62 We then show that modeling word swapping and rich linguistic features further improves our results. [sent-167, score-0.142]

63 Since our models are trained to regress human evaluation scores, to make a direct comparison in the same regression setting, we also train a small linear regression model for each baseline metric in the same way as described in Pado et al. [sent-172, score-0.317]

64 These regression models are strictly more powerful than the baseline metrics and show higher robustness and better correlation with human judgments. [sent-174, score-0.288]

65 We also compare our models with the state-of-the-art linear regression models reported in Pado et al. [sent-175, score-0.114]

66 The regression model learns to combine these fine-grained scores more intelligently, by optimizing their weights to regress human judgments. [sent-179, score-0.192]

67 combine features from multiple MT evaluation metrics (MT), as well as rich linguistic features from a textual entailment system (RTE). [sent-182, score-0.119]

68 Numbers in this table are Spearman’s ρ for correlation between human assessment scores and model predictions; tr stands for training set, and te stands for test set. [sent-220, score-0.116]
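
(Illustrative sketch.) The regression-plus-correlation evaluation setting described here can be reproduced in a few lines: fit a linear regression of per-segment metric scores against human judgments, then report Spearman's ρ on held-out data. The sketch below uses random stand-in data rather than the OpenMT/WMT sets and stands in for the baseline regressions, not for the pFSM/pPDA model itself.

    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.linear_model import LinearRegression

    # X: per-segment scores from several baseline metrics (e.g. BLEU, TER, METEOR)
    # y: human adequacy judgments for the same segments (illustrative random data)
    rng = np.random.default_rng(0)
    X_tr, y_tr = rng.random((500, 4)), rng.random(500)
    X_te, y_te = rng.random((200, 4)), rng.random(200)

    reg = LinearRegression().fit(X_tr, y_tr)   # learn weights for the metric scores
    pred = reg.predict(X_te)

    rho, _ = spearmanr(pred, y_te)             # rank correlation with human judgments
    print(f"Spearman's rho on test set: {rho:.3f}")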

69 The second and third columns under the pFSM label in Table 1 compare our bigram block edit extension for the pFSM model. [sent-224, score-0.395]

70 Although we do not yet see a significant performance gain (or loss) from adding block edits, they will enable longer paraphrase matches in later experiments. [sent-225, score-0.148]

71 However, for Chinese, there is a substantial gain, particularly with jump distances of five or longer. [sent-230, score-0.248]

72 This trend is even more pronounced at the long jump distance of 10, consistent with the observation that Chinese-English translations exhibit much more medium and long distance reordering than languages like Arabic (Birch et al. [sent-231, score-0.448]

73 The first row is the pPDA model with jump distance limit 5, without other additional features. [sent-236, score-0.345]

74 (Metric)R refers to the regression model trained for each baseline metric, the same as in Pado et al. [sent-250, score-0.114]

75 Numbers in this table are Spearman’s rank correlation ρ between human assessment scores and model predictions. [sent-254, score-0.116]

76 The pPDA column describes our pPDA model with jump distance limit 5. [sent-255, score-0.313]

77 The row starting with pPDA+f in Figure 1 shows an example where adding paraphrase features allows pPDA+f to find more correct alignments and make better predictions than pPDA. [sent-266, score-0.176]

78 In combination, the joint feature set of synonym, 990 paraphrase and parse tree features gave modest improvements over the paraphrase feature alone on the Chinese test set. [sent-273, score-0.2]

79 On the Chinese data set, the pPDA extension gives results significantly better than the best baseline metrics for Chinese (2. [sent-282, score-0.11]

80 Both the pFSM and pPDA models also significantly outperform the MTR linear regression model that combines the outputs of all four baselines, on all three source languages. [sent-284, score-0.114]

81 This demonstrates that our regression model is more robust and accurate than a state-ofthe-art system combination linear-regression model. [sent-285, score-0.114]

82 The RTER and MT+RTER linear regression models benefit from the rich linguistic features in the textual entailment system’s output. [sent-292, score-0.169]

83 Predicting Pairwise Preferences: To further test our model’s robustness, we evaluate it on WMT data sets with a different prediction task in which metrics make pairwise preference judgments between translation systems. [sent-308, score-0.259]
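
(Illustrative sketch.) In the pairwise-preference setting, a metric's per-segment scores are converted into preferences between two systems and checked for agreement with human preferences; below is a minimal sketch of that conversion, not the official WMT scoring script.

    def pairwise_preferences(scores_a, scores_b, human_prefs):
        """scores_a / scores_b: metric scores for system A and B on the same segments.
        human_prefs: 'A' or 'B' per segment from human annotators.
        Returns the fraction of segments where the metric's preference agrees."""
        agree = 0
        for sa, sb, h in zip(scores_a, scores_b, human_prefs):
            metric_pref = "A" if sa > sb else "B"
            agree += (metric_pref == h)
        return agree / len(human_prefs)

    # Example (toy numbers):
    # print(pairwise_preferences([0.7, 0.4, 0.9], [0.6, 0.5, 0.2], ["A", "B", "A"]))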

84 This example exhibits a word swapping phenomenon, and our model was able to capture it correctly. [sent-343, score-0.119]

85 TERR clearly suffered from not being able to model word swapping in this case. [sent-344, score-0.119]

86 The idea of using extended edit distance models with block movements was also explored in Leusch et al. [sent-371, score-0.414]

87 Both models adopted a log-linear parameterization for the state transition distribution (footnote 13), but in their case the HMM model and the pFSM arc weights are normalized locally, and the objective is non-convex. [sent-387, score-0.199]

88 6 Conclusion We described a probabilistic finite state machine based on string edits and a novel pushdown automaton extension for the task of machine translation evaluation. [sent-388, score-0.439]

89 The models admit a rich set of linguistic features, and are trained to learn feature weights automatically by optimizing a regression objective. [sent-389, score-0.163]

90 The proposed models achieve state-of-the-art results on a wide range of standard evaluations, and are much more lightweight than previous regression models, making them suitable candidates to be used in MERT training. [sent-390, score-0.114]

91 Meteor: An automatic metric for MT evaluation with improved correlation [...] (Footnote 13: Similar parameterization was also used in much previous work, such as Riezler et al. [sent-428, score-0.117]

92 Extending the METEOR machine translation evaluation metric to the phrase level. [sent-509, score-0.12]

93 Quantitative analysis of probabilistic pushdown automata: Expectations and variances. [sent-539, score-0.102]

94 A weighted finite state transducer implementation of the alignment template model for statistical machine translation. [sent-576, score-0.111]

95 A novel stringto-string distance measure with applications to machine translation evaluation. [sent-583, score-0.148]

96 MEANT: An inexpensive, highaccuracy, semi-automatic metric for evaluating translation utility based on semantic roles. [sent-610, score-0.12]

97 A conditional random field for discriminatively-trained finite-state string edit distance. [sent-626, score-0.336]

98 NIST 2010 metrics for machine translation evaluation (MetricsMaTr10) official release of results. [sent-679, score-0.147]

99 A study of translation edit rate with targeted human annotation. [sent-719, score-0.42]

100 A re-examination on features in regression based approach to automatic MT evaluation. [sent-735, score-0.114]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('ppda', 0.527), ('pfsm', 0.457), ('edit', 0.313), ('jump', 0.248), ('rter', 0.152), ('mt', 0.135), ('sys', 0.129), ('pado', 0.129), ('swapping', 0.119), ('regression', 0.114), ('urdu', 0.095), ('substitution', 0.091), ('paraphrase', 0.087), ('meteorr', 0.083), ('terplus', 0.083), ('translation', 0.083), ('ref', 0.08), ('snover', 0.071), ('pushdown', 0.069), ('transition', 0.067), ('distance', 0.065), ('state', 0.064), ('metrics', 0.064), ('nist', 0.063), ('arabic', 0.062), ('meteor', 0.062), ('operations', 0.061), ('saers', 0.06), ('judgments', 0.057), ('albrecht', 0.055), ('terr', 0.055), ('assessment', 0.054), ('robustness', 0.048), ('automaton', 0.048), ('spearman', 0.047), ('finite', 0.047), ('extension', 0.046), ('wmt', 0.046), ('translations', 0.042), ('parameterization', 0.042), ('fsm', 0.042), ('mtr', 0.042), ('pfsms', 0.042), ('monz', 0.04), ('peterson', 0.04), ('correlation', 0.038), ('substitutions', 0.037), ('metric', 0.037), ('synonym', 0.037), ('sword', 0.036), ('owczarzak', 0.036), ('denkowski', 0.036), ('hter', 0.036), ('block', 0.036), ('insert', 0.034), ('predictions', 0.033), ('probabilistic', 0.033), ('przybocki', 0.032), ('permitted', 0.032), ('entailment', 0.032), ('row', 0.032), ('transduction', 0.032), ('chinese', 0.032), ('operation', 0.03), ('preference', 0.03), ('reordering', 0.028), ('submitted', 0.028), ('amancio', 0.028), ('cormen', 0.028), ('dombb', 0.028), ('esparza', 0.028), ('jend', 0.028), ('jstart', 0.028), ('leusch', 0.028), ('porat', 0.028), ('regress', 0.028), ('remembers', 0.028), ('slemma', 0.028), ('highlighted', 0.028), ('tests', 0.027), ('birch', 0.027), ('lemma', 0.027), ('synonyms', 0.027), ('edits', 0.026), ('modest', 0.026), ('liu', 0.026), ('weights', 0.026), ('findings', 0.026), ('matches', 0.025), ('pairwise', 0.025), ('vidal', 0.024), ('jumps', 0.024), ('western', 0.024), ('human', 0.024), ('alignments', 0.024), ('proceedings', 0.023), ('rich', 0.023), ('koehn', 0.023), ('string', 0.023), ('position', 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 108 emnlp-2012-Probabilistic Finite State Machines for Regression-based MT Evaluation

Author: Mengqiu Wang ; Christopher D. Manning

Abstract: Accurate and robust metrics for automatic evaluation are key to the development of statistical machine translation (MT) systems. We first introduce a new regression model that uses a probabilistic finite state machine (pFSM) to compute weighted edit distance as predictions of translation quality. We also propose a novel pushdown automaton extension of the pFSM model for modeling word swapping and cross alignments that cannot be captured by standard edit distance models. Our models can easily incorporate a rich set of linguistic features, and automatically learn their weights, eliminating the need for ad-hoc parameter tuning. Our methods achieve state-of-the-art correlation with human judgments on two different prediction tasks across a diverse set of standard evaluations (NIST OpenMT06,08; WMT0608).

2 0.11350653 58 emnlp-2012-Generalizing Sub-sentential Paraphrase Acquisition across Original Signal Type of Text Pairs

Author: Aurelien Max ; Houda Bouamor ; Anne Vilnat

Abstract: This paper describes a study on the impact of the original signal (text, speech, visual scene, event) of a text pair on the task of both manual and automatic sub-sentential paraphrase acquisition. A corpus of 2,500 annotated sentences in English and French is described, and performance on this corpus is reported for an efficient system combination exploiting a large set of features for paraphrase recognition. A detailed quantified typology of subsentential paraphrases found in our corpus types is given.

3 0.10642584 50 emnlp-2012-Extending Machine Translation Evaluation Metrics with Lexical Cohesion to Document Level

Author: Billy T. M. Wong ; Chunyu Kit

Abstract: This paper proposes the utilization of lexical cohesion to facilitate evaluation of machine translation at the document level. As a linguistic means to achieve text coherence, lexical cohesion ties sentences together into a meaningfully interwoven structure through words with the same or related meaning. A comparison between machine and human translation is conducted to illustrate one of their critical distinctions that human translators tend to use more cohesion devices than machine. Various ways to apply this feature to evaluate machinetranslated documents are presented, including one without reliance on reference translation. Experimental results show that incorporating this feature into sentence-level evaluation metrics can enhance their correlation with human judgements.

4 0.071823873 67 emnlp-2012-Inducing a Discriminative Parser to Optimize Machine Translation Reordering

Author: Graham Neubig ; Taro Watanabe ; Shinsuke Mori

Abstract: This paper proposes a method for learning a discriminative parser for machine translation reordering using only aligned parallel text. This is done by treating the parser’s derivation tree as a latent variable in a model that is trained to maximize reordering accuracy. We demonstrate that efficient large-margin training is possible by showing that two measures of reordering accuracy can be factored over the parse tree. Using this model in the pre-ordering framework results in significant gains in translation accuracy over standard phrasebased SMT and previously proposed unsupervised syntax induction methods.

5 0.069498949 96 emnlp-2012-Name Phylogeny: A Generative Model of String Variation

Author: Nicholas Andrews ; Jason Eisner ; Mark Dredze

Abstract: Many linguistic and textual processes involve transduction of strings. We show how to learn a stochastic transducer from an unorganized collection of strings (rather than string pairs). The role of the transducer is to organize the collection. Our generative model explains similarities among the strings by supposing that some strings in the collection were not generated ab initio, but were instead derived by transduction from other, “similar” strings in the collection. Our variational EM learning algorithm alternately reestimates this phylogeny and the transducer parameters. The final learned transducer can quickly link any test name into the final phylogeny, thereby locating variants of the test name. We find that our method can effectively find name variants in a corpus of web strings used to referto persons in Wikipedia, improving over standard untrained distances such as Jaro-Winkler and Levenshtein distance.

6 0.06867943 39 emnlp-2012-Enlarging Paraphrase Collections through Generalization and Instantiation

7 0.068632059 86 emnlp-2012-Locally Training the Log-Linear Model for SMT

8 0.068552263 135 emnlp-2012-Using Discourse Information for Paraphrase Extraction

9 0.067599192 35 emnlp-2012-Document-Wide Decoding for Phrase-Based Statistical Machine Translation

10 0.059453383 127 emnlp-2012-Transforming Trees to Improve Syntactic Convergence

11 0.059260715 18 emnlp-2012-An Empirical Investigation of Statistical Significance in NLP

12 0.058055095 54 emnlp-2012-Forced Derivation Tree based Model Training to Statistical Machine Translation

13 0.05498505 31 emnlp-2012-Cross-Lingual Language Modeling with Syntactic Reordering for Low-Resource Speech Recognition

14 0.05282253 109 emnlp-2012-Re-training Monolingual Parser Bilingually for Syntactic SMT

15 0.052219089 16 emnlp-2012-Aligning Predicates across Monolingual Comparable Texts using Graph-based Clustering

16 0.052055098 12 emnlp-2012-A Transition-Based System for Joint Part-of-Speech Tagging and Labeled Non-Projective Dependency Parsing

17 0.051957831 89 emnlp-2012-Mixed Membership Markov Models for Unsupervised Conversation Modeling

18 0.050729074 42 emnlp-2012-Entropy-based Pruning for Phrase-based Machine Translation

19 0.050216351 1 emnlp-2012-A Bayesian Model for Learning SCFGs with Discontiguous Rules

20 0.049873833 94 emnlp-2012-Multiple Aspect Summarization Using Integer Linear Programming


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.173), (1, -0.077), (2, -0.14), (3, -0.005), (4, -0.003), (5, 0.054), (6, -0.008), (7, -0.039), (8, 0.014), (9, -0.016), (10, 0.035), (11, -0.002), (12, -0.032), (13, -0.023), (14, 0.003), (15, 0.037), (16, 0.038), (17, 0.02), (18, 0.09), (19, 0.134), (20, -0.034), (21, -0.059), (22, 0.006), (23, -0.021), (24, -0.032), (25, 0.06), (26, -0.042), (27, -0.069), (28, 0.06), (29, 0.179), (30, 0.261), (31, -0.14), (32, -0.199), (33, 0.018), (34, -0.22), (35, 0.092), (36, -0.14), (37, -0.004), (38, 0.071), (39, -0.133), (40, -0.117), (41, 0.014), (42, 0.085), (43, 0.011), (44, -0.041), (45, -0.038), (46, -0.209), (47, -0.171), (48, 0.009), (49, 0.096)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.92140406 108 emnlp-2012-Probabilistic Finite State Machines for Regression-based MT Evaluation

Author: Mengqiu Wang ; Christopher D. Manning

Abstract: Accurate and robust metrics for automatic evaluation are key to the development of statistical machine translation (MT) systems. We first introduce a new regression model that uses a probabilistic finite state machine (pFSM) to compute weighted edit distance as predictions of translation quality. We also propose a novel pushdown automaton extension of the pFSM model for modeling word swapping and cross alignments that cannot be captured by standard edit distance models. Our models can easily incorporate a rich set of linguistic features, and automatically learn their weights, eliminating the need for ad-hoc parameter tuning. Our methods achieve state-of-the-art correlation with human judgments on two different prediction tasks across a diverse set of standard evaluations (NIST OpenMT06,08; WMT0608).

2 0.77189028 50 emnlp-2012-Extending Machine Translation Evaluation Metrics with Lexical Cohesion to Document Level

Author: Billy T. M. Wong ; Chunyu Kit

Abstract: This paper proposes the utilization of lexical cohesion to facilitate evaluation of machine translation at the document level. As a linguistic means to achieve text coherence, lexical cohesion ties sentences together into a meaningfully interwoven structure through words with the same or related meaning. A comparison between machine and human translation is conducted to illustrate one of their critical distinctions that human translators tend to use more cohesion devices than machine. Various ways to apply this feature to evaluate machinetranslated documents are presented, including one without reliance on reference translation. Experimental results show that incorporating this feature into sentence-level evaluation metrics can enhance their correlation with human judgements.

3 0.38868693 75 emnlp-2012-Large Scale Decipherment for Out-of-Domain Machine Translation

Author: Qing Dou ; Kevin Knight

Abstract: We apply slice sampling to Bayesian decipherment and use our new decipherment framework to improve out-of-domain machine translation. Compared with the state of the art algorithm, our approach is highly scalable and produces better results, which allows us to decipher ciphertext with billions of tokens and hundreds of thousands of word types with high accuracy. We decipher a large amount ofmonolingual data to improve out-of-domain translation and achieve significant gains of up to 3.8 BLEU points.

4 0.32962084 22 emnlp-2012-Automatically Constructing a Normalisation Dictionary for Microblogs

Author: Bo Han ; Paul Cook ; Timothy Baldwin

Abstract: Microblog normalisation methods often utilise complex models and struggle to differentiate between correctly-spelled unknown words and lexical variants of known words. In this paper, we propose a method for constructing a dictionary of lexical variants of known words that facilitates lexical normalisation via simple string substitution (e.g. tomorrow for tmrw). We use context information to generate possible variant and normalisation pairs and then rank these by string similarity. Highly-ranked pairs are selected to populate the dictionary. We show that a dictionary-based approach achieves state-of-the-art performance for both F-score and word error rate on a standard dataset. Compared with other methods, this approach offers a fast, lightweight and easy-to-use solution, and is thus suitable for high-volume microblog pre-processing.
Recent unsupervised approaches have not attempted to distinguish such tokens from other types of OOV tokens (Cook and Stevenson, 2009; Liu et al., 2011a), limiting their applicability to real-world normalisation tasks. Other approaches (Han and Baldwin, 2011; Gouws et al., 2011) have followed a cascaded approach in which lexical variants are first identified, and then normalised. However, such two-step approaches suffer from poor lexical variant identification performance, which is propagated to the normalisation step. Motivated by the observation that most lexical variants have an unambiguous standard form (especially for longer tokens), and that a lexical variant and its standard form typically occur in similar contexts, in this paper we propose methods for automatically constructing a lexical normalisation dictionary a dictionary whose entries consist — of (lexical variant, standard form) pairs that enables type-based normalisation. Despite the simplicity of this dictionary-based normalisation method, we show it to outperform previously-proposed approaches. This very fast, lightweight solution is suitable for real-time processing of the large volume of streaming microblog data available from Twitter, and offers a simple solution to the lexical variant detection problem that hinders other normalisation methods. Furthermore, this dictionary-based method can be easily integrated with other more-complex normalisation approaches (Liu et al., 2011a; Han and Baldwin, 2011; Gouws et al., 2011) to produce hybrid systems. After discussing related work in Section 2, we present an overview of our dictionary-based approach to normalisation in Section 3. In Sections 4 and 5 we experimentally select the optimised context similarity parameters and string similarity reranking method. We present experimental results on the unseen test data in Section 6, and offer some concluding remarks in Section 7. — 2 Related Work Given a token t, lexical normalisation is the task of finding arg max P(s|t) ∝ arg max P(t| s)P(s), wofh efinred s igs tahreg smtaanxdaPrd(s form, i.e., an aIVx Pw(otr|sd). PSt(asn)-, dardly in lexical normalisation, t is assumed to be an 422 OOV token, relative to a fixed dictionary. In practice, not all OOV tokens should be normalised; i.e., only lexical variants (e.g., tmrw “tomorrow”) should be normalised and tokens that are OOV but otherwise not lexical variants (e.g., iPad “iPad”) should be unchanged. Most work in this area focuses only on the normalisation task itself, oftentimes assuming that the task of lexical variant detection has already been completed. Various approaches have been proposed to estimate the error model, P(t|s). For example, in work on spell-checking, eBl,ril Pl (atn|ds) M. Fooorre e (2000) improve on a standard edit-distance approach by considering multi-character edit operations; Toutanova and Moore (2002) build on this by incorporating phonological information. Li et al. (2006) utilise distributional similarity (Lin, 1998) to correct misspelled search queries. In text message normalisation, Choudhury et al. (2007) model the letter transformations and emissions using a hidden Markov model (Rabiner, 1989). Cook and Stevenson (2009) and Xue et al. (201 1) propose multiple simple error models, each of which captures a particular way in which lexical variants are formed, such as phonetic spelling (e.g., epik “epic”) or clipping (e.g., walkin “walking”). Nevertheless, optimally weighting the various error models in these approaches is challenging. 
Without pre-categorising lexical variants into different types, Liu et al. (201 1a) collect Google search snippets from carefully-designed queries from which they then extract noisy lexical variant– standard form pairs. These pairs are used to train a conditional random field (Lafferty et al., 2001) to estimate P(t|s) at the character level. One shortcoming eo fP querying a ese cahracrha engine teol. .o Obtanein strhaoirnt-ing pairs is it tends to be costly in terms of time and bandwidth. Here we exploit microblog data directly to derive (lexical variant, standard form) pairs, instead of relying on external resources. In morerecent work, Liu et al. (2012) endeavour to improve the accuracy of top-n normalisation candidates by integrating human cognitive inference, characterlevel transformations and spell checking in their normalisation model. The encouraging results shift the focus to reranking and promoting the correct normalisation to the top-1 position. However, like much previous work on lexical normalisation, this work assumes perfect lexical variant detection. Aw et al. (2006) and Kaufmann and Kalita (2010) consider normalisation as a machine translation task from lexical variants to standard forms using off-theshelf tools. These methods do not assume that lexical variants have been pre-identified; however, these methods do rely on large quantities of labelled training data, which is not available for microblogs. Recently, Han and Baldwin (201 1) and Gouws et al. (201 1) propose two-step unsupervised approaches to normalisation, in which lexical variants are first identified, and then normalised. They approach lexical variant detection by using a context fitness classifier (Han and Baldwin, 2011) or through dictionary lookup (Gouws et al., 2011). However, the lexical variant detection of both meth- ods is rather unreliable, indicating the challenge of this aspect of normalisation. Both of these approaches incorporate a relatively small normalisation dictionary to capture frequent lexical variants with high precision. In particular, Gouws et al. (201 1) produce a small normalisation lexicon based on distributional similarity and string similarity (Lodhi et al., 2002). Our method adopts a similar strategy using distributional/string similarity, but instead of constructing a small lexicon for preprocessing, we build a much wider-coverage normalisation dictionary and opt for a fully lexiconbased end-to-end normalisation approach. In contrast to the normalisation dictionaries of Han and Baldwin (201 1) and Gouws et al. (201 1) which focus on very frequent lexical variants, we focus on moderate frequency lexical variants of a minimum character length, which tend to have unambiguous standard forms; our intention is to produce normalisation lexicons that are complementary to those currently available. Furthermore, we investigate the impact of a variety of contextual and string similarity measures on the quality of the resulting lexicons. In summary, our dictionary-based normalisation ap- proach is a lightweight end-to-end method which performs both lexical variant detection and normalisation, and thus is suitable for practical online preprocessing, despite its simplicity. 423 3 A Lexical Normalisation Dictionary Before discussing our method for creating a normalisation dictionary, we first discuss the feasibility of such an approach. 3.1 Feasibility Dictionary lookup approaches to normalisation have been shown to have high precision but low recall (Han and Baldwin, 2011; Gouws et al., 2011). 
Frequent (lexical variant, standard form) pairs such as (u, you) are typically included in the dictionaries used by such methods, while less-frequent items such as (g0tta, gotta) are generally omitted. Because of the degree of lexical creativity and large number of non-standard forms observed on Twitter, a wide-coverage normalisation dictionary would be expensive to construct manually. Based on the assumption that lexical variants occur in similar con- texts to their standard forms, however, it should be possible to automatically construct a normalisation dictionary with wider coverage than is currently available. Dictionary lookup is a type-based approach to normalisation, i.e., every token instance of a given type will always be normalised in the same way. However, lexical variants can be ambiguous, e.g., y corresponds to “you” in yeah, y r right! LOL but “why” in AM CONFUSED!!! y you did that? Nevertheless, the relative occurrence of ambiguous lexical variants is small (Liu et al., 2011a), and it has been observed that while shorter variants such as y are often ambiguous, longer variants tend to be unambiguous. For example bthday and 4eva are unlikely to have standard forms other than “birthday” and “forever”, respectively. Therefore, the normalisation lexicons we produce will only contain entries for OOVs with character length greater than a specified threshold, which are likely to have an unambiguous standard form. 3.2 Overview of approach Our method for constructing a normalisation dictio- nary is as follows: Input: Tokenised English tweets 1. Extract (OOV, IV) pairs based on distributional similarity. 2. Re-rank the extracted pairs by string similarity. Output: A list of (OOV, IV) pairs ordered by string similarity; select the top-n pairs for inclusion in the normalisation lexicon. In Step 1, we leverage large volumes of Twitter data to identify the most distributionally-similar IV type for each OOV type. The result of this process is a set of (OOV, IV) pairs, ranked by distributional similarity. The extracted pairs will include (lexical variant, standard form) pairs, such as (tmrw, tomorrow), but will also contain false positives such as (Tusday, Sunday) Tusday is a lexical variant, but its standard form is not “Sunday” and (Youtube, web) Youtube is an OOV named entity, not a lexical variant. Nevertheless, lexical variants are typically formed from their standard forms through regular processes (Thurlow, 2003) e.g., the omission of characters and from this perspective Sunday and web are not plausible standard — — — — — forms for Tusday and Youtube, respectively. In Step 2, we therefore capture this intuition to re-rank the extracted pairs by string similarity. The top-n items in this re-ranked list then form the normalisation lexicon, which is based only on development data. Although computationally-expensive to build, this dictionary can be created offline. Once built, it then offers a very fast approach to normalisation. We can only reliably compute distributional similarity for types that are moderately frequent in a corpus. Nevertheless, many lexical variants are sufficiently frequent to be able to compute distributional similarity, and can potentially make their way into our normalisation lexicon. This approach is not suitable for normalising low-frequency lexical variants, nor is it suitable for shorter lexical variant types which as discussed in Section 3.1 are more likely to have an ambiguous standard form. 
Nevertheless, previously-proposed normalisation methods that can handle such phenomena also rely in part on a normalisation lexicon. The normalisation lexicons we create can therefore be easily integrated with previous approaches to form hybrid normalisation systems. — — 4 Contextually-similar Pair Generation Our objective is to extract contextually-similar (OOV, IV) pairs from a large-scale collection of mi424 croblog data. Fundamentally, the surrounding words define the primary context, but there are different ways of representing context and different similarity measures we can use, which may influence the quality of generated normalisation pairs. In representing the context, we experimentally explore the following factors: (1) context window size (from 1 to 3 tokens on both sides); (2) n-gram order ofthe context tokens (unigram, bigram, trigram); (3) whether context words are indexed for relative position or not; and (4) whether we use all context tokens, or only IV words. Because high-accuracy linguistic processing tools for Twitter are still under exploration (Liu et al., 2011b; Gimpel et al., 2011; Ritter et al., 2011; Foster et al., 2011), we do not consider richer representations of context, for example, incorporating information about part-of-speech tags or syntax. We also experiment with a number of simple but widely-used geometric and information theoretic distance/similarity measures. In particular, we use Kullback–Leibler (KL) divergence (Kullback and Leibler, 195 1), Jensen–Shannon (JS) divergence (Lin, 1991), Euclidean distance and Cosine distance. We use a corpus of 10 million English tweets to do parameter tuning over, and a larger corpus of tweets in the final candidate ranking. All tweets were collected from September 2010 to January 2011 via the Twitter API.1 From the raw data we extract English tweets using a language identification tool (Lui and Baldwin, 2011), and then apply a simplified Twitter tokeniser (adapted from O’Connor et al. (2010)). We use the Aspell dictionary (v6.06)2 to determine whether a word is IV, and only include in our normalisation dictionary OOV tokens with at least 64 occurrences in the corpus and character length ≥ 4, both of which were determined through empirical 4o,b bsoetrhva otifo wnh. Fcohr w weearceh d OetOeVrm winoedrd t type ginh the corpus, we select the most similar IV type to form (OOV, IV) pairs. To further narrow the search space, we only consider IV words which are morphophonemically similar to the OOV type, follow- ing settings in Han and Baldwin (201 1).3 1http s : / / dev .twitter . com/ docs / st reaming-api /methods 2http : / / aspe l .net / l 3We only consider IV words within an edit distance of 2 or a phonemic edit distance of 1from the OOV type, and we further In order to evaluate the generated pairs, we randomly selected 1000 OOV words from the 10 million tweet corpus. We set up an annotation task on Amazon Mechanical Turk,4 presenting five independent annotators with each word type (with no context) and asking for corrections where appropriate. For instance, given tmrw, the annotators would likely identify it as a non-standard variant of “tomorrow”. For correct OOV words like iPad, on the other hand, we would expect them to leave the word unchanged. If 3 or more of the 5 annotators make the same suggestion (in the form of either a canonical spelling or leaving the word unchanged), we include this in our gold standard for evaluation. 
In total, this resulted in 351 lexical variants and 282 correct OOV words, accounting for 63.3% of the 1000 OOV words. These 633 OOV words were used as (OOV, IV) pairs for parameter tuning. The remainder of the 1000 OOV words were ignored on the grounds that there was not sufficient consensus amongst the annotators.5 Contextually-similar pair generation aims to include as many correct normalisation pairs as possible. We evaluate the quality of the normalisation pairs using “Cumulative Gain” (CG): XN0 CG = Xreli0 Xi=1 Suppose there are N0 correct generated pairs (oovi, ivi), each of which is weighted by reli0, the frequency of oovi to indicate its relative importance; for example, (thinkin, thinking) has a higher weight than (g0tta, gotta) because thinkin is more frequent than g0tta in our corpus. In this evaluation we don’t consider the position of normalisation pairs, and nor do we penalise incorrect pairs. Instead, we push distinguishing between correct and incorrect pairs into the downstream re-ranking step in which we incorporate string similarity information. Given the development data and CG, we run an exhaustive search of parameter combinations over only consider the top 30% most-frequent of these IV words. 4https : / /www .mturk .com/mturk/welcome 5Note that the objective of this annotation task is to identify lexical variants that have agreed-upon standard forms irrespective of context, as a special case of the more general task of lexical normalisation (where context may or may not play a significant role in the determination of the normalisation). 425 our development corpus. The five best parameter combinations are shown in Table 1. We notice the CG is almost identical for the top combinations. As a context window size of 3 incurs a heavy processing and memory overhead over a size of 2, we use the 3rd-best parameter combination for subsequent experiments, namely: context window of ±2 tokens, teoxkpeenr bigrams, positional index, nadnodw wK oLf divergence as our distance measure. To better understand the sensitivity of the method to each parameter, we perform a post-hoc parameter analysis relative to a default setting (as underlined in Table 2), altering one parameter at a time. The results in Table 2 show that bigrams outperform other n-gram orders by a large margin (note that the evaluation is based on a log scale), and information-theoretic measures are superior to the geometric measures. Furthermore, it also indicates using the positional indexing better captures context. However, there is little to distinguish context modelling with just IV words or all tokens. Similarly, the context window size has relatively little impact on the overall performance, supporting our earlier observation from Table 1. 5 Pair Re-ranking by String Similarity Once the contextually-similar (OOV, IV) pairs are generated using the selected parameters in Section 4, we further re-rank this set of pairs in an attempt to boost morphophonemically-similar pairs like (bananaz, bananas), and penalise noisy pairs like (paninis, beans). Instead of using the small 10 million tweet corpus, from this step onwards, we use a larger corpus of 80 million English tweets (collected over the same period as the development corpus) to develop a larger-scale normalisation dictionary. This is because once pairs are generated, re-ranking based on string comparison is much faster. 
We only include in the dictionary OOV words with a token frequency > 15 to include more OOV types than in Section 4, and again apply a minimum length cutoff of 4 char- acters. To measure how well our re-ranking method promotes correct pairs and demotes incorrect pairs (including both OOV words that should not be normalised, e.g. (Youtube, web), and incorrect normalRankWindow sizen-gramPositional index?Lex. choiceSim/distance measurelog(CG) 1±32YesAllKL divergence19.571 2 ±±33 2 No All KL divergence 19.562 3 ±±23 2 Yes All KL divergence 19.562 4 ±±32 2 Yes IVs KL divergence 19.561 5 ±±23 2 Yes IVs JS divergence 19.554 ±2 Table 1: The five best parameter combinations in the exhaustive search of parameter combinations Window sizen-gramPositional index?Lexical choiceSimilarity/distance measure ±1 19.3251 19.328Yes 19.328IVs 19.335KL divergence 19.328 ±±21 1199..332275 2 19.571 No 19.263 All 19.328 Euclidean 19.227 ±±32 1199..332287 3 19.324 JS divergence 19.31 1 Cosine 19.170 Table 2: Parameter sensitivity analysis measured as log(CG) for correctly-generated pairs. We tune one parameter at a time, using the default (underlined) setting for other parameters; the non-exhaustive best-performing setting in each case is indicated in bold. isations for lexical variants, e.g. (bcuz, cause)), we modify our evaluation metric from Section 4 to evaluate the ranking at different points, using Discounted Cumulative Gain (DCG@N: Jarvelin and Kekalainen (2002)): DCG@N = rel1+XiN=2logr2el(i ) where reli again represents the frequency of the OOV, but it can be gain (a positive number) or loss (a negative number), depending on whether the ith pair is correct or incorrect. Because we also expect correct pairs to be ranked higher than incorrect pairs, DCG@N takes both factors into account. Given the generated pairs and the evaluation metric, we first consider three baselines: no re-ranking (i.e., the final ranking is that of the contextual similarity scores), and re-rankings of the pairs based on the frequencies of the OOVs in the Twitter corpus, and the IV unigram frequencies in the Google Web 1T corpus (Brants and Franz, 2006) to get less-noisy frequency estimates. We also compared a variety of re-rankings based on a number of string similarity measures that have been previously considered in normalisation work (reviewed in Section 2). We experiment with standard edit distance (Levenshtein, 1966), edit distance over double metaphone codes (phonetic edit distance: (Philips, 2000)), longest common subsequence ratio over the consonant edit distance of the paired words (hereafter, denoted as 426 consonant edit distance: (Contractor et al., 2010)), and a string subsequence kernel (Lodhi et al., 2002). In Figure 1, we present the DCG@N results for each of our ranking methods at different rank cutoffs. Ranking by OOV frequency is motivated by the assumption that lexical variants are frequently used by social media users. This is confirmed by our findings that lexical pairs like (goin, going) and (nite, night) are at the top of the ranking. However, many proper nouns and named entities are also used frequently and ranked at the top, mixed with lexical variants like (Facebook, speech) and (Youtube, web). In ranking by IV word frequency, we assume the lexical variants are usually derived from frequently-used IV equivalents, e.g. (abou, about). However, many less-frequent lexical variant types have high-frequency (IV) normalisations. 
For instance, the highest-frequency IV word the has more than 40 OOV lexical variants, such as tthe and thhe. These less-frequent types occupy the top positions, reducing the cumulative gain. Compared with these two baselines, ranking by default contextual similarity scores delivers promising results. It successfully ranks many more intuitive normalisation pairs at the top, such as (2day, today) and (wknd, weekend), but also ranks some incorrect pairs highly, such as (needa, gotta). The string similarity-based methods perform better than our baselines in general. Through manual analysis, we found that standard edit distance ranking is fairly accurate for lexical variants with low edit distance to their standard forms, but fails to identify heavily-altered variants like (tmrw, tomorrow). Consonant edit distance is similar to standard edit distance, but places many longer words at the top of the ranking. Edit distance over double metaphone codes (phonetic edit distance) performs particularly well for lexical variants that include character repetitions commonly used for emphasis on Twitter because such repetitions do not typically alter the phonetic codes. Compared with the other methods, the string subsequence kernel delivers encouraging results. It measures common character subsequences of length n between (OOV, IV) pairs. Because it is computationally expensive to calculate similarity for larger n, we choose n=2, following Gouws et al. (201 1). As N (the lexicon size cut-off) increases, the performance drops more slowly than the other meth— — ods. Although this method fails to rank heavilyaltered variants such as (4get,forget) highly, it typically works well for longer words. Given that we focus on longer OOVs (specifically those longer than 4 characters), this ultimately isn’t a great handicap. 6 Evaluation Given the re-ranked pairs from Section 5, here we apply them to a token-level normalisation task using the normalisation dataset of Han and Baldwin (201 1). 6.1 Metrics We evaluate using the standard evaluation metrics of precision (P), recall (R) and F-score (F) as detailed below. We also consider the false alarm rate (FA) and word error rate (WER), also as shown below. FA measures the negative effects of applying normalisation; a good approach to normalisation should not (incorrectly) normalise tokens that are already in their standard form and do not require normalisation.6 WER, like F-score, shows the overall benefits of normalisation, but unlike F-score, measures how many token-level edits are required for the output to be the same as the ground truth data. In general, dictionaries with a high F-score/low WER and low FA 6FA + P ≤ 1because some lexical variants might be incorrectly Ano +rm Pa ≤lise 1d b. 427 are preferable. P = R= F = FA = WER = # cor#re nctolrym naolrismedal tioskeden toskens # to ckoernresc rtelyqu niori nmga nloisremda tloiskaetniosn P2P +R R # inco#rr encotrlmya nliosremda tloikseedns tokens # token edits n#ee adlletd o akfetnesr normalisation 6.2 Results We select the three best re-ranking methods, and best cut-off N for each method, based on the highest DCG@N value for a given method over the development data, as presented in Figure 1. Namely, they are string subsequence kernel (S-dict, N=40,000), double metaphone edit distance (DMdict, N=10,000) and default contextual similarity without re-ranking (C-dict, N=10,000).7 We evaluate each of the learned dictionaries in Table 3. 
We also compare each dictionary with the performance of the manually-constructed Internet slang dictionary (HB-dict) used by Han and Baldwin (2011), the small automatically-derived dictionary of Gouws et al. (2011) (GHM-dict), and combinations of the different dictionaries. In addition, the contribution of these dictionaries in hybrid normalisation approaches is also presented, in which we first normalise OOVs using a given dictionary (combined or otherwise), and then apply the normalisation method of Gouws et al. (2011) based on consonant edit distance (GHM-norm), or the approach of Han and Baldwin (2011) based on the summation of many unsupervised approaches (HB-norm), to the remaining OOVs. Results are shown in Table 3, and discussed below.

[7] We also experimented with combining ranks using Mean Reciprocal Rank. However, the combined rank didn't improve performance on the development data. We plan to explore other ranking aggregation methods in future work.

[Figure 1: Re-ranking based on different string similarity methods (DCG@N plotted against the N cut-off).]

6.2.1 Individual Dictionaries
Overall, the individual dictionaries derived by the re-ranking methods (DM-dict, S-dict) perform better than that based on contextual similarity (C-dict) in terms of precision and false alarm rate, indicating the importance of re-ranking. Even though C-dict delivers higher recall, indicating that many lexical variants are correctly normalised, this is offset by its high false alarm rate, which is particularly undesirable in normalisation. Because S-dict has better performance than DM-dict in terms of both F-score and WER, and a much lower false alarm rate than C-dict, subsequent results are presented using S-dict only.

Both HB-dict and GHM-dict achieve better than 90% precision with moderate recall. Compared to these methods, S-dict is not competitive in terms of either precision or recall. This result seems rather discouraging. However, considering that S-dict is an automatically-constructed dictionary targeting lexical variants of varying frequency, it is not surprising that the precision is worse than that of HB-dict, which is manually-constructed, and GHM-dict, which includes entries only for more-frequent OOVs for which distributional similarity is more accurate. Additionally, the recall of S-dict is hampered by the restriction on lexical variant token length of 4 characters.

6.2.2 Combined Dictionaries
Next we look to combining HB-dict, GHM-dict and S-dict. In combining the dictionaries, a given OOV word can be listed with different standard forms in different dictionaries. In such cases we use the following preferences for dictionaries, motivated by our confidence in the normalisation pairs of the dictionaries, to resolve conflicts: HB-dict > GHM-dict > S-dict. When we combine dictionaries in the second section of Table 3, we find that they contain complementary information: in each case the recall and F-score are higher for the combined dictionary than any of the individual dictionaries. The combination of HB-dict+GHM-dict produces only a small improvement in terms of F-score over HB-dict (the better-performing dictionary), suggesting that, as claimed, HB-dict and GHM-dict share many frequent normalisation pairs.
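The precedence-based combination and the resulting dictionary-lookup normalisation can be sketched as follows. The example entries are invented, and only the HB-dict > GHM-dict > S-dict conflict-resolution order is taken from the description above.

```python
def combine_dictionaries(hb_dict, ghm_dict, s_dict):
    """Merge the three lexicons; when an OOV has conflicting entries, the
    higher-precedence dictionary wins (HB-dict > GHM-dict > S-dict)."""
    combined = {}
    for d in (s_dict, ghm_dict, hb_dict):   # lowest precedence first, so later updates win
        combined.update(d)
    return combined

def normalise(tokens, dictionary):
    """Type-based normalisation by direct substitution: replace a token if and only
    if it has a dictionary entry, otherwise leave it untouched."""
    return [dictionary.get(tok, tok) for tok in tokens]

# Invented entries, for illustration only.
hb  = {"u": "you", "2day": "today"}
ghm = {"tmrw": "tomorrow"}
s   = {"wknd": "weekend", "tmrw": "tomorow"}   # conflicting entry, loses to GHM-dict
lexicon = combine_dictionaries(hb, ghm, s)
print(normalise("c u 2day or tmrw on the wknd".split(), lexicon))
# ['c', 'you', 'today', 'or', 'tomorrow', 'on', 'the', 'weekend']
```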
In contrast, HB-dict+S-dict and GHM-dict+S-dict improve substantially over HB-dict and GHM-dict, respectively, indicating that S-dict contains markedly different entries to both HB-dict and GHM-dict. The best F-score and WER are obtained using the combination of all three dictionaries, HB-dict+GHM-dict+S-dict. Furthermore, the difference between the results using HB-dict+GHM-dict+S-dict and HB-dict+GHM-dict is statistically significant (p < 0.01), based on the computationally-intensive Monte Carlo method of Yeh (2000), demonstrating the contribution of S-dict.

Method | Precision | Recall | F-Score | False Alarm | Word Error Rate
C-dict | 0.474 | 0.218 | 0.299 | 0.298 | 0.103
DM-dict | 0.727 | 0.106 | 0.185 | 0.145 | 0.102
S-dict | 0.700 | 0.179 | 0.285 | 0.162 | 0.097
HB-dict | 0.915 | 0.435 | 0.590 | 0.048 | 0.066
GHM-dict | 0.982 | 0.319 | 0.482 | 0.000 | 0.076
HB-dict+S-dict | 0.840 | 0.601 | 0.701 | 0.090 | 0.052
GHM-dict+S-dict | 0.863 | 0.498 | 0.632 | 0.072 | 0.061
HB-dict+GHM-dict | 0.920 | 0.465 | 0.618 | 0.045 | 0.063
HB-dict+GHM-dict+S-dict | 0.847 | 0.630 | 0.723 | 0.086 | 0.049
GHM-dict+GHM-norm | 0.338 | 0.578 | 0.427 | 0.458 | 0.135
HB-dict+GHM-dict+S-dict+GHM-norm | 0.406 | 0.715 | 0.518 | 0.468 | 0.124
HB-dict+HB-norm | 0.515 | 0.771 | 0.618 | 0.332 | 0.081
HB-dict+GHM-dict+S-dict+HB-norm | 0.527 | 0.789 | 0.632 | 0.332 | 0.079
Table 3: Normalisation results using our derived dictionaries (contextual similarity (C-dict); double metaphone rendering (DM-dict); string subsequence kernel scores (S-dict)), the dictionary of Gouws et al. (2011) (GHM-dict), the Internet slang dictionary (HB-dict) from Han and Baldwin (2011), and combinations of these dictionaries. In addition, we combine the dictionaries with the normalisation method of Gouws et al. (2011) (GHM-norm) and the combined unsupervised approach of Han and Baldwin (2011) (HB-norm).

6.2.3 Hybrid Approaches
The methods of Gouws et al. (2011) (i.e. GHM-dict+GHM-norm) and Han and Baldwin (2011) (i.e. HB-dict+HB-norm) have lower precision and higher false alarm rates than the dictionary-based approaches; this is largely caused by lexical variant detection errors.[8] Using all dictionaries in combination with these methods (HB-dict+GHM-dict+S-dict+GHM-norm and HB-dict+GHM-dict+S-dict+HB-norm) gives some improvements, but the false alarm rates remain high. Despite the limitations of a pure dictionary-based approach to normalisation, discussed in Section 3.1, the current best practical approach to normalisation is to use a lexicon, combining hand-built and automatically-learned normalisation dictionaries.

[8] Here we report results that do not assume perfect detection of lexical variants, unlike the original published results in each case.

6.3 Discussion and Error Analysis
We first manually analyse the errors in the combined dictionary (HB-dict+GHM-dict+S-dict) and give examples of each error type in Table 4.

Error type | OOV | Dict. standard form | Gold standard form
(a) plurals | playe | players | player
(b) negation | unlike | like | dislike
(c) possessives | anyones | anyone | anyone's
(d) correct OOVs | iphone | phone | iphone
(e) test data errors | durin | during | durin
(f) ambiguity | siging | signing | singing
Table 4: Error types in the combined dictionary (HB-dict+GHM-dict+S-dict)

The most frequent word errors are caused by slight morphological variations, including plural forms (a), negations (b), possessive cases (c), and OOVs that are correct and do not require normalisation (d). In addition, we also notice some missing annotations where lexical variants are skipped in the human annotations but captured by our method (e).
Ambiguity (f) definitely exists in longer OOVs; however, these cases do not appear to have a strong negative impact on the normalisation performance. An example of a remaining miscellaneous error is bday "birthday", which is mis-normalised as day.

To further study the influence of OOV word length relative to the normalisation performance, we conduct a fine-grained analysis of the performance of the derived dictionary (S-dict) in Table 5, broken down across different OOV word lengths. The results generally support our hypothesis that our method works better for longer OOV words. The derived dictionary is much more reliable for longer tokens (length 5, 6, and 7 characters) in terms of precision and false alarm. Although the recall is relatively modest, in the future we intend to improve recall by mining more normalisation pairs from larger collections of microblog data.

Length cut-off (N) | #Variants | Precision | Recall (≥ N) | Recall (all) | False Alarm
≥ 4 | 556 | 0.700 | 0.381 | 0.179 | 0.162
≥ 5 | 382 | 0.814 | 0.471 | 0.152 | 0.122
≥ 6 | 254 | 0.804 | 0.484 | 0.104 | 0.131
≥ 7 | 138 | 0.793 | 0.471 | 0.055 | 0.122
Table 5: S-dict normalisation results broken down according to OOV token length. Recall is presented both over the subset of instances of length ≥ N in the data ("Recall (≥ N)"), and over the entirety of the dataset ("Recall (all)"); "#Variants" is the number of token instances of the indicated length in the test dataset.

7 Conclusions and Future Work
In this paper, we describe a method for automatically constructing a normalisation dictionary that supports normalisation of microblog text through direct substitution of lexical variants with their standard forms. After investigating the impact of different distributional and string similarity methods on the quality of the dictionary, we present experimental results on a standard dataset showing that our proposed methods acquire high quality (lexical variant, standard form) pairs, with reasonable coverage, and achieve state-of-the-art end-to-end lexical normalisation performance on a real-world token-level task. Furthermore, this dictionary-lookup method combines the detection and normalisation of lexical variants into a simple, lightweight solution which is suitable for processing of high-volume microblog feeds.

In the future, we intend to improve our dictionary by leveraging the constantly-growing volume of microblog data, and considering alternative ways to combine distributional and string similarity. In addition to direct evaluation, we also want to explore the benefits of applying normalisation for downstream social media text processing applications, e.g. event detection.

Acknowledgements
We would like to thank the three anonymous reviewers for their insightful comments, and Stephan Gouws for kindly sharing his data and discussing his work. NICTA is funded by the Australian government as represented by the Department of Broadband, Communication and Digital Economy, and the Australian Research Council through the ICT Centre of Excellence programme.

References
AiTi Aw, Min Zhang, Juan Xiao, and Jian Su. 2006. A phrase-based statistical model for SMS text normalization. In Proceedings of COLING/ACL 2006, pages 33–40, Sydney, Australia.
Edward Benson, Aria Haghighi, and Regina Barzilay. 2011. Event discovery in social media feeds.
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 389–398, Portland, Oregon, USA.
Thorsten Brants and Alex Franz. 2006. Web 1T 5-gram Version 1.
Eric Brill and Robert C. Moore. 2000. An improved error model for noisy channel spelling correction. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 286–293, Hong Kong.
Monojit Choudhury, Rahul Saraf, Vijit Jain, Animesh Mukherjee, Sudeshna Sarkar, and Anupam Basu. 2007. Investigation and modeling of the structure of texting language. International Journal on Document Analysis and Recognition, 10:157–174.
Danish Contractor, Tanveer A. Faruquie, and L. Venkata Subramaniam. 2010. Unsupervised cleansing of noisy text. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010), pages 189–196, Beijing, China.
Paul Cook and Suzanne Stevenson. 2009. An unsupervised model for text message normalization. In CALC '09: Proceedings of the Workshop on Computational Approaches to Linguistic Creativity, pages 71–78, Boulder, USA.
Jennifer Foster, Özlem Çetinoğlu, Joachim Wagner, Joseph L. Roux, Stephen Hogan, Joakim Nivre, Deirdre Hogan, and Josef van Genabith. 2011. #hardtoparse: POS Tagging and Parsing the Twitterverse. In Analyzing Microtext: Papers from the 2011 AAAI Workshop, volume WS-11-05 of AAAI Workshops, pages 20–25, San Francisco, CA, USA.
Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech tagging for Twitter: Annotation, features, and experiments. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 42–47, Portland, Oregon, USA.
Roberto González-Ibáñez, Smaranda Muresan, and Nina Wacholder. 2011. Identifying sarcasm in Twitter: a closer look. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 581–586, Portland, Oregon, USA.
Stephan Gouws, Dirk Hovy, and Donald Metzler. 2011. Unsupervised mining of lexical variants from noisy text. In Proceedings of the First Workshop on Unsupervised Learning in NLP, pages 82–90, Edinburgh, Scotland, UK.
Bo Han and Timothy Baldwin. 2011. Lexical normalisation of short text messages: Makn sens a #twitter. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 368–378, Portland, Oregon, USA.
K. Jarvelin and J. Kekalainen. 2002. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems, 20(4).
Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent Twitter sentiment classification. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 151–160, Portland, Oregon, USA.
Joseph Kaufmann and Jugal Kalita. 2010. Syntactic normalization of Twitter messages. In International Conference on Natural Language Processing, Kharagpur, India.
S. Kullback and R. A. Leibler. 1951. On information and sufficiency. Annals of Mathematical Statistics, 22:49–86.
John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001.
Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 282–289, San Francisco, CA, USA.
Vladimir I. Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10:707–710.
Mu Li, Yang Zhang, Muhua Zhu, and Ming Zhou. 2006. Exploring distributional similarity based models for query spelling correction. In Proceedings of COLING/ACL 2006, pages 1025–1032, Sydney, Australia.
Jianhua Lin. 1991. Divergence measures based on the Shannon entropy. IEEE Transactions on Information Theory, 37(1):145–151.
Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 36th Annual Meeting of the ACL and 17th International Conference on Computational Linguistics (COLING/ACL-98), pages 768–774, Montreal, Quebec, Canada.
Fei Liu, Fuliang Weng, Bingqing Wang, and Yang Liu. 2011a. Insertion, deletion, or substitution? Normalizing text messages without pre-categorization nor supervision. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 71–76, Portland, Oregon, USA.
Xiaohua Liu, Shaodian Zhang, Furu Wei, and Ming Zhou. 2011b. Recognizing named entities in tweets. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 359–367, Portland, Oregon, USA.
Fei Liu, Fuliang Weng, and Xiao Jiang. 2012. A broad-coverage normalization system for social media language. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL 2012), Jeju, Republic of Korea.
Huma Lodhi, Craig Saunders, John Shawe-Taylor, Nello Cristianini, and Chris Watkins. 2002. Text classification using string kernels. Journal of Machine Learning Research, 2:419–444.
Marco Lui and Timothy Baldwin. 2011. Cross-domain feature selection for language identification. In Proceedings of the 5th International Joint Conference on Natural Language Processing (IJCNLP 2011), pages 553–561, Chiang Mai, Thailand.
Brendan O'Connor, Michel Krieger, and David Ahn. 2010. TweetMotif: Exploratory search and topic summarization for Twitter. In Proceedings of the 4th International Conference on Weblogs and Social Media (ICWSM 2010), pages 384–385, Washington, USA.
Lawrence Philips. 2000. The double metaphone search algorithm. C/C++ Users Journal, 18:38–43.
Lawrence R. Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286.
Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of Twitter conversations. In Proceedings of Human Language Technologies: The 11th Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2010), pages 172–180, Los Angeles, USA.
Alan Ritter, Sam Clark, Mausam, and Oren Etzioni. 2011. Named entity recognition in tweets: An experimental study. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP 2011), pages 1524–1534, Edinburgh, Scotland, UK.
Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. 2010. Earthquake shakes Twitter users: real-time event detection by social sensors. In Proceedings of the 19th International Conference on the World Wide Web (WWW 2010), pages 851–860, Raleigh, North Carolina, USA.
Crispin Thurlow. 2003. Generation txt?
The sociolinguistics of young people's text-messaging. Discourse Analysis Online, 1(1).
Kristina Toutanova and Robert C. Moore. 2002. Pronunciation modeling for improved spelling correction. In Proceedings of the 40th Annual Meeting of the ACL and 3rd Annual Meeting of the NAACL (ACL-02), pages 144–151, Philadelphia, USA.
Twitter Official Blog. 2011. 200 million tweets per day. Retrieved August 17th, 2011.
Jianshu Weng and Bu-Sung Lee. 2011. Event detection in Twitter. In Proceedings of the 5th International Conference on Weblogs and Social Media (ICWSM 2011), Barcelona, Spain.
Zhenzhen Xue, Dawei Yin, and Brian D. Davison. 2011. Normalizing microtext. In Proceedings of the AAAI-11 Workshop on Analyzing Microtext, pages 74–79, San Francisco, USA.
Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of the 18th International Conference on Computational Linguistics (COLING 2000), pages 947–953, Saarbrücken, Germany.

5 0.32905486 86 emnlp-2012-Locally Training the Log-Linear Model for SMT

Author: Lemao Liu ; Hailong Cao ; Taro Watanabe ; Tiejun Zhao ; Mo Yu ; Conghui Zhu

Abstract: In statistical machine translation, minimum error rate training (MERT) is a standard method for tuning a single weight with regard to a given development data. However, due to the diversity and uneven distribution of source sentences, there are two problems suffered by this method. First, its performance is highly dependent on the choice of a development set, which may lead to an unstable performance for testing. Second, translations become inconsistent at the sentence level since tuning is performed globally on a document level. In this paper, we propose a novel local training method to address these two problems. Unlike a global training method, such as MERT, in which a single weight is learned and used for all the input sentences, we perform training and testing in one step by learning a sentencewise weight for each input sentence. We propose efficient incremental training methods to put the local training into practice. In NIST Chinese-to-English translation tasks, our local training method significantly outperforms MERT with the maximal improvements up to 2.0 BLEU points, meanwhile its efficiency is comparable to that of the global method.

6 0.28519934 58 emnlp-2012-Generalizing Sub-sentential Paraphrase Acquisition across Original Signal Type of Text Pairs

7 0.28240502 18 emnlp-2012-An Empirical Investigation of Statistical Significance in NLP

8 0.26596752 96 emnlp-2012-Name Phylogeny: A Generative Model of String Variation

9 0.26019317 67 emnlp-2012-Inducing a Discriminative Parser to Optimize Machine Translation Reordering

10 0.25900829 135 emnlp-2012-Using Discourse Information for Paraphrase Extraction

11 0.2585679 127 emnlp-2012-Transforming Trees to Improve Syntactic Convergence

12 0.25206843 118 emnlp-2012-Source Language Adaptation for Resource-Poor Machine Translation

13 0.24791357 54 emnlp-2012-Forced Derivation Tree based Model Training to Statistical Machine Translation

14 0.24238077 31 emnlp-2012-Cross-Lingual Language Modeling with Syntactic Reordering for Low-Resource Speech Recognition

15 0.23465128 66 emnlp-2012-Improving Transition-Based Dependency Parsing with Buffer Transitions

16 0.23443036 74 emnlp-2012-Language Model Rest Costs and Space-Efficient Storage

17 0.22628298 119 emnlp-2012-Spectral Dependency Parsing with Latent Variables

18 0.21946853 35 emnlp-2012-Document-Wide Decoding for Phrase-Based Statistical Machine Translation

19 0.21709405 39 emnlp-2012-Enlarging Paraphrase Collections through Generalization and Instantiation

20 0.21464509 1 emnlp-2012-A Bayesian Model for Learning SCFGs with Discontiguous Rules


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.017), (14, 0.019), (16, 0.038), (20, 0.284), (25, 0.01), (34, 0.101), (45, 0.018), (60, 0.096), (63, 0.059), (64, 0.026), (65, 0.022), (70, 0.014), (74, 0.056), (76, 0.038), (79, 0.017), (80, 0.024), (81, 0.017), (86, 0.017), (95, 0.032)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.75493526 22 emnlp-2012-Automatically Constructing a Normalisation Dictionary for Microblogs

Author: Bo Han ; Paul Cook ; Timothy Baldwin

Abstract: Microblog normalisation methods often utilise complex models and struggle to differentiate between correctly-spelled unknown words and lexical variants of known words. In this paper, we propose a method for constructing a dictionary of lexical variants of known words that facilitates lexical normalisation via simple string substitution (e.g. tomorrow for tmrw). We use context information to generate possible variant and normalisation pairs and then rank these by string similarity. Highly-ranked pairs are selected to populate the dictionary. We show that a dictionary-based approach achieves state-of-the-art performance for both F-score and word error rate on a standard dataset. Compared with other methods, this approach offers a fast, lightweight and easy-to-use solution, and is thus suitable for high-volume microblog pre-processing.
We only include in the dictionary OOV words with a token frequency > 15 to include more OOV types than in Section 4, and again apply a minimum length cutoff of 4 char- acters. To measure how well our re-ranking method promotes correct pairs and demotes incorrect pairs (including both OOV words that should not be normalised, e.g. (Youtube, web), and incorrect normalRankWindow sizen-gramPositional index?Lex. choiceSim/distance measurelog(CG) 1±32YesAllKL divergence19.571 2 ±±33 2 No All KL divergence 19.562 3 ±±23 2 Yes All KL divergence 19.562 4 ±±32 2 Yes IVs KL divergence 19.561 5 ±±23 2 Yes IVs JS divergence 19.554 ±2 Table 1: The five best parameter combinations in the exhaustive search of parameter combinations Window sizen-gramPositional index?Lexical choiceSimilarity/distance measure ±1 19.3251 19.328Yes 19.328IVs 19.335KL divergence 19.328 ±±21 1199..332275 2 19.571 No 19.263 All 19.328 Euclidean 19.227 ±±32 1199..332287 3 19.324 JS divergence 19.31 1 Cosine 19.170 Table 2: Parameter sensitivity analysis measured as log(CG) for correctly-generated pairs. We tune one parameter at a time, using the default (underlined) setting for other parameters; the non-exhaustive best-performing setting in each case is indicated in bold. isations for lexical variants, e.g. (bcuz, cause)), we modify our evaluation metric from Section 4 to evaluate the ranking at different points, using Discounted Cumulative Gain (DCG@N: Jarvelin and Kekalainen (2002)): DCG@N = rel1+XiN=2logr2el(i ) where reli again represents the frequency of the OOV, but it can be gain (a positive number) or loss (a negative number), depending on whether the ith pair is correct or incorrect. Because we also expect correct pairs to be ranked higher than incorrect pairs, DCG@N takes both factors into account. Given the generated pairs and the evaluation metric, we first consider three baselines: no re-ranking (i.e., the final ranking is that of the contextual similarity scores), and re-rankings of the pairs based on the frequencies of the OOVs in the Twitter corpus, and the IV unigram frequencies in the Google Web 1T corpus (Brants and Franz, 2006) to get less-noisy frequency estimates. We also compared a variety of re-rankings based on a number of string similarity measures that have been previously considered in normalisation work (reviewed in Section 2). We experiment with standard edit distance (Levenshtein, 1966), edit distance over double metaphone codes (phonetic edit distance: (Philips, 2000)), longest common subsequence ratio over the consonant edit distance of the paired words (hereafter, denoted as 426 consonant edit distance: (Contractor et al., 2010)), and a string subsequence kernel (Lodhi et al., 2002). In Figure 1, we present the DCG@N results for each of our ranking methods at different rank cutoffs. Ranking by OOV frequency is motivated by the assumption that lexical variants are frequently used by social media users. This is confirmed by our findings that lexical pairs like (goin, going) and (nite, night) are at the top of the ranking. However, many proper nouns and named entities are also used frequently and ranked at the top, mixed with lexical variants like (Facebook, speech) and (Youtube, web). In ranking by IV word frequency, we assume the lexical variants are usually derived from frequently-used IV equivalents, e.g. (abou, about). However, many less-frequent lexical variant types have high-frequency (IV) normalisations. 
For instance, the highest-frequency IV word the has more than 40 OOV lexical variants, such as tthe and thhe. These less-frequent types occupy the top positions, reducing the cumulative gain. Compared with these two baselines, ranking by default contextual similarity scores delivers promising results. It successfully ranks many more intuitive normalisation pairs at the top, such as (2day, today) and (wknd, weekend), but also ranks some incorrect pairs highly, such as (needa, gotta). The string similarity-based methods perform better than our baselines in general. Through manual analysis, we found that standard edit distance ranking is fairly accurate for lexical variants with low edit distance to their standard forms, but fails to identify heavily-altered variants like (tmrw, tomorrow). Consonant edit distance is similar to standard edit distance, but places many longer words at the top of the ranking. Edit distance over double metaphone codes (phonetic edit distance) performs particularly well for lexical variants that include character repetitions commonly used for emphasis on Twitter because such repetitions do not typically alter the phonetic codes. Compared with the other methods, the string subsequence kernel delivers encouraging results. It measures common character subsequences of length n between (OOV, IV) pairs. Because it is computationally expensive to calculate similarity for larger n, we choose n=2, following Gouws et al. (201 1). As N (the lexicon size cut-off) increases, the performance drops more slowly than the other meth— — ods. Although this method fails to rank heavilyaltered variants such as (4get,forget) highly, it typically works well for longer words. Given that we focus on longer OOVs (specifically those longer than 4 characters), this ultimately isn’t a great handicap. 6 Evaluation Given the re-ranked pairs from Section 5, here we apply them to a token-level normalisation task using the normalisation dataset of Han and Baldwin (201 1). 6.1 Metrics We evaluate using the standard evaluation metrics of precision (P), recall (R) and F-score (F) as detailed below. We also consider the false alarm rate (FA) and word error rate (WER), also as shown below. FA measures the negative effects of applying normalisation; a good approach to normalisation should not (incorrectly) normalise tokens that are already in their standard form and do not require normalisation.6 WER, like F-score, shows the overall benefits of normalisation, but unlike F-score, measures how many token-level edits are required for the output to be the same as the ground truth data. In general, dictionaries with a high F-score/low WER and low FA 6FA + P ≤ 1because some lexical variants might be incorrectly Ano +rm Pa ≤lise 1d b. 427 are preferable. P = R= F = FA = WER = # cor#re nctolrym naolrismedal tioskeden toskens # to ckoernresc rtelyqu niori nmga nloisremda tloiskaetniosn P2P +R R # inco#rr encotrlmya nliosremda tloikseedns tokens # token edits n#ee adlletd o akfetnesr normalisation 6.2 Results We select the three best re-ranking methods, and best cut-off N for each method, based on the highest DCG@N value for a given method over the development data, as presented in Figure 1. Namely, they are string subsequence kernel (S-dict, N=40,000), double metaphone edit distance (DMdict, N=10,000) and default contextual similarity without re-ranking (C-dict, N=10,000).7 We evaluate each of the learned dictionaries in Table 3. 
We also compare each dictionary with the performance of the manually-constructed Internet slang dictionary (HB-dict) used by Han and Baldwin (201 1), the small automatically-derived dictionary of Gouws et al. (201 1) (GHM-dict), and combinations of the different dictionaries. In addition, the contribution of these dictionaries in hybrid normalisation approaches is also presented, in which we first normalise OOVs using a given dictionary (combined or otherwise), and then apply the normalisation method of Gouws et al. (201 1) based on consonant edit distance (GHM-norm), or the approach of Han and Baldwin (201 1) based on the summation of many unsupervised approaches (HB-norm), to the remaining OOVs. Results are shown in Table 3, and discussed below. 6.2.1 Individual Dictionaries Overall, the individual dictionaries derived by the re-ranking methods (DM-dict, S-dict) perform bet- 7We also experimented with combining ranks using Mean Reciprocal Rank. However, the combined rank didn’t improve performance on the development data. We plan to explore other ranking aggregation methods in future work. 1 3 5 7 9 11 31 51 71 91 N cut−offs Figure 1: Re-ranking based on different string similarity methods. ter than that based on contextual similarity (C-dict) in terms of precision and false alarm rate, indicating the importance of re-ranking. Even though C-dict delivers higher recall indicating that many lexical variants are correctly normalised this is offset by its high false alarm rate, which is particularly undesirable in normalisation. Because S-dict has better performance than DM-dict in terms of both F-score and WER, and a much lower false alarm rate than C-dict, subsequent results are presented using S-dict only. — — Both HB-dict and GHM-dict achieve better than 90% precision with moderate recall. Compared to these methods, S-dict is not competitive in terms of either precision or recall. This result seems rather discouraging. However, considering that S-dict is an automatically-constructed dictionary targeting lexical variants of varying frequency, it is not surprising that the precision is worse than that of HB-dict which is manually-constructed and GHM-dict which includes entries only for more-frequent OOVs for which distributional similarity is more accurate. Additionally, the recall of S-dict is hampered by the — — — 428 restriction on lexical variant token length of 4 characters. 6.2.2 Combined Dictionaries Next we look to combining HB-dict, GHM-dict and S-dict. In combining the dictionaries, a given OOV word can be listed with different standard forms in different dictionaries. In such cases we use the following preferences for dictionaries motivated by our confidence in the normalisation pairs — of the dictionaries to resolve conflicts: HB-dict > GHM-dict > S-dict. When we combine dictionaries in the second section of Table 3, we find that they contain complementary information: in each case the recall and F-score are higher for the combined dictionary than any of the individual dictionaries. The combination of HB-dict+GHM-dict produces only a small improvement in terms of F-score over HBdict (the better-performing dictionary) suggesting that, as claimed, HB-dict and GHM-dict share many frequent normalisation pairs. 
HB-dict+S-dict and GHM-dict+S-dict, on the other hand, improve sub— MethodPrecisionRecallF-ScoreFalse AlarmWord Error Rate C-dict0.4740.2180.2990.2980.103 DM-dict S-dict HB-dict GHM-dict 0.727 0.700 0.915 0.982 0.106 0.179 0.435 0.319 0.185 0.285 0.590 0.482 0.145 0.162 0.048 0.000 0.102 0.097 0.066 0.076 HB-dict+S-dict0.8400.6010.7010.0900.052 GHM-dict+S-dict HB-dict+GHM-dict HB-dict+GHM-dict+S-dict 0.863 0.920 0.847 0.498 0.465 0.630 0.632 0.618 0.723 0.072 0.045 0.086 0.061 0.063 0.049 GHM-dict+GHM-norm0.3380.5780.4270.4580.135 HB-dict+GHM-dict+S-dict+GHM-norm HB-dict+HB-norm HB-dict+GHM-dict+S-dict+HB-norm 0.406 0.515 0.527 0.715 0.771 0.789 0.518 0.618 0.632 0.468 0.332 0.332 0.124 0.081 0.079 Table 3: Normalisation results using our derived dictionaries (contextual similarity (C-dict); double metaphone rendering (DM-dict); string subsequence kernel scores (S-dict)), the dictionary of Gouws et al. (201 1) (GHM-dict), the Internet slang dictionary (HB-dict) from Han and Baldwin (201 1), and combinations of these dictionaries. In addition, we combine the dictionaries with the normalisation method of Gouws et al. (201 1) (GHM-norm) and the combined unsupervised approach of Han and Baldwin (201 1) (HB-norm). stantially over HB-dict and GHM-dict, respectively, indicating that S-dict contains markedly different entries to both HB-dict and GHM-dict. The best Fscore and WER are obtained using the combination of all three dictionaries, HB-dict+GHM-dict+S-dict. Furthermore, the difference between the results using HB-dict+GHM-dict+S-dict and HB-dict+GHMdict is statistically significant (p < 0.01), based on the computationally-intensive Monte Carlo method of Yeh (2000), demonstrating the contribution of Sdict. 6.2.3 Hybrid Approaches The methods of Gouws et al. (201 1) (i.e. GHM-dict+GHM-norm) and Han and Baldwin (201 1) (i.e. HB-dict+HB-norm) have lower precision and higher false alarm rates than the dictionarybased approaches; this is largely caused by lexical variant detection errors.8 Using all dictionaries in combination with these methods HB-dict+GHM-dict+S-dict+GHM-norm and HBdict+GHM-dict+S-dict+HB-norm gives some improvements, but the false alarm rates remain high. Despite the limitations of a pure dictionary-based approach to normalisation discussed in Section 3.1 the current best practical approach to normal— — — — 8Here we report results that do not assume perfect detection of lexical variants, unlike the original published results in each case. 429 Error typeOOVDSitcat.ndard fGoromld (a) pluralsplayeplayersplayer (b) negation unlike like dislike (c) possessives anyones anyone anyone ’s (d) correct OOVs iphone phone iphone (e) test data errors durin during durin (f) ambiguity siging signing singing Table 4: Error types in the combined dictionary (HBdict+GHM-dict+S-dict) isation is to use a lexicon, combining hand-built and automatically-learned normalisation dictionaries. 6.3 Discussion and Error Analysis We first manually analyse the errors in the combined dictionary (HB-dict+GHM-dict+S-dict) and give examples of each error type in Table 4. The most frequent word errors are caused by slight morphologi- cal variations, including plural forms (a), negations (b), possessive cases (c), and OOVs that are correct and do not require normalisation (d). In addition, we also notice some missing annotations where lexical variants are skipped by human annotations but captured by our method (e). 
Ambiguity (f) definitely exists in longer OOVs, however, these cases do not appear to have a strong negative impact on the normalisation performance. An example of a remainLength cut-off (N)#VariantsPrecisionRecall (≥ N)Recall (all)False Alarm ≥45560.700Rec0al.l3 8(≥1 N)0.1790.162 ≥≥54 382 0.814 0.471 0.152 0.122 ≥≥65 254 0.804 0.484 0.104 0.131 ≥≥76 138 0.793 0.471 0.055 0.122 ≥71380.7930.4710.0550.122 Table 5: S-dict normalisation results broken down according to OOV token length. Recall is presented both over the subset of instances of length ≥ N in the data (“Recall (≥ N)”), and over the entirety of the dataset (“Recall (all)”); “su#bVsaertia onftis n” sitsa tnhcee snu omfb leenrg othf t≥ok Nen iinns tthaenc deast ao f( “tRhee cinadllic (a≥ted N length idn o othveer rt tehset d eanttaisreetty. ing miscellaneous error is bday “birthday”, which is mis-normalised as day. To further study the influence of OOV word length relative to the normalisation performance, we conduct a fine-grained analysis of the performance of the derived dictionary (S-dict) in Table 5, broken down across different OOV word lengths. The results generally support our hypothesis that our method works better for longer OOV words. The derived dictionary is much more reliable for longer tokens (length 5, 6, and 7 characters) in terms of precision and false alarm. Although the recall is relatively modest, in the future we intend to improve recall by mining more normalisation pairs from larger collections of microblog data. 7 Conclusions and Future Work In this paper, we describe a method for automatically constructing a normalisation dictionary that supports normalisation of microblog text through direct substitution of lexical variants with their standard forms. After investigating the impact of different distributional and string similarity methods on the quality of the dictionary, we present experimental results on a standard dataset showing that our proposed methods acquire high quality (lexical variant, standard form) pairs, with reasonable coverage, and achieve state-of-the-art end-toend lexical normalisation performance on a realworld token-level task. Furthermore, this dictionarylookup method combines the detection and normalisation of lexical variants into a simple, lightweight solution which is suitable for processing of highvolume microblog feeds. In the future, we intend to improve our dictionary by leveraging the constantly-growing volume of microblog data, and considering alternative ways to combine distributional and string similarity. In addi430 tion to direct evaluation, we also want to explore the benefits of applying normalisation for downstream social media text processing applications, e.g. event detection. Acknowledgements We would like to thank the three anonymous reviewers for their insightful comments, and Stephan Gouws for kindly sharing his data and discussing his work. NICTA is funded by the Australian government as represented by Department of Broadband, Communication and Digital Economy, and the Australian Research Council through the ICT centre of Excellence programme. References AiTi Aw, Min Zhang, Juan Xiao, and Jian Su. 2006. A phrase-based statistical model for SMS text normalization. In Proceedings of COLING/ACL 2006, pages 33–40, Sydney, Australia. Edward Benson, Aria Haghighi, and Regina Barzilay. 2011. Event discovery in social media feeds. 
Edward Benson, Aria Haghighi, and Regina Barzilay. 2011. Event discovery in social media feeds. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 389–398, Portland, Oregon, USA.

Thorsten Brants and Alex Franz. 2006. Web 1T 5-gram Version 1.

Eric Brill and Robert C. Moore. 2000. An improved error model for noisy channel spelling correction. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 286–293, Hong Kong.

Monojit Choudhury, Rahul Saraf, Vijit Jain, Animesh Mukherjee, Sudeshna Sarkar, and Anupam Basu. 2007. Investigation and modeling of the structure of texting language. International Journal on Document Analysis and Recognition, 10:157–174.

Danish Contractor, Tanveer A. Faruquie, and L. Venkata Subramaniam. 2010. Unsupervised cleansing of noisy text. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010), pages 189–196, Beijing, China.

Paul Cook and Suzanne Stevenson. 2009. An unsupervised model for text message normalization. In CALC '09: Proceedings of the Workshop on Computational Approaches to Linguistic Creativity, pages 71–78, Boulder, USA.

Jennifer Foster, Özlem Çetinoğlu, Joachim Wagner, Joseph Le Roux, Stephen Hogan, Joakim Nivre, Deirdre Hogan, and Josef van Genabith. 2011. #hardtoparse: POS Tagging and Parsing the Twitterverse. In Analyzing Microtext: Papers from the 2011 AAAI Workshop, volume WS-11-05 of AAAI Workshops, pages 20–25, San Francisco, CA, USA.

Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech tagging for Twitter: Annotation, features, and experiments. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 42–47, Portland, Oregon, USA.

Roberto González-Ibáñez, Smaranda Muresan, and Nina Wacholder. 2011. Identifying sarcasm in Twitter: a closer look. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 581–586, Portland, Oregon, USA.

Stephan Gouws, Dirk Hovy, and Donald Metzler. 2011. Unsupervised mining of lexical variants from noisy text. In Proceedings of the First Workshop on Unsupervised Learning in NLP, pages 82–90, Edinburgh, Scotland, UK.

Bo Han and Timothy Baldwin. 2011. Lexical normalisation of short text messages: Makn sens a #twitter. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 368–378, Portland, Oregon, USA.

Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems, 20(4).

Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent Twitter sentiment classification. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 151–160, Portland, Oregon, USA.

Joseph Kaufmann and Jugal Kalita. 2010. Syntactic normalization of Twitter messages. In International Conference on Natural Language Processing, Kharagpur, India.

S. Kullback and R. A. Leibler. 1951. On information and sufficiency. Annals of Mathematical Statistics, 22(1):79–86.
John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 282–289, San Francisco, CA, USA.

Vladimir I. Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10:707–710.

Mu Li, Yang Zhang, Muhua Zhu, and Ming Zhou. 2006. Exploring distributional similarity based models for query spelling correction. In Proceedings of COLING/ACL 2006, pages 1025–1032, Sydney, Australia.

Jianhua Lin. 1991. Divergence measures based on the Shannon entropy. IEEE Transactions on Information Theory, 37(1):145–151.

Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 36th Annual Meeting of the ACL and 17th International Conference on Computational Linguistics (COLING/ACL-98), pages 768–774, Montreal, Quebec, Canada.

Fei Liu, Fuliang Weng, Bingqing Wang, and Yang Liu. 2011a. Insertion, deletion, or substitution? Normalizing text messages without pre-categorization nor supervision. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 71–76, Portland, Oregon, USA.

Xiaohua Liu, Shaodian Zhang, Furu Wei, and Ming Zhou. 2011b. Recognizing named entities in tweets. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 359–367, Portland, Oregon, USA.

Fei Liu, Fuliang Weng, and Xiao Jiang. 2012. A broad-coverage normalization system for social media language. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL 2012), Jeju, Republic of Korea.

Huma Lodhi, Craig Saunders, John Shawe-Taylor, Nello Cristianini, and Chris Watkins. 2002. Text classification using string kernels. Journal of Machine Learning Research, 2:419–444.

Marco Lui and Timothy Baldwin. 2011. Cross-domain feature selection for language identification. In Proceedings of the 5th International Joint Conference on Natural Language Processing (IJCNLP 2011), pages 553–561, Chiang Mai, Thailand.

Brendan O'Connor, Michel Krieger, and David Ahn. 2010. TweetMotif: Exploratory search and topic summarization for Twitter. In Proceedings of the 4th International Conference on Weblogs and Social Media (ICWSM 2010), pages 384–385, Washington, USA.

Lawrence Philips. 2000. The double metaphone search algorithm. C/C++ Users Journal, 18:38–43.

Lawrence R. Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286.

Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of Twitter conversations. In Proceedings of Human Language Technologies: The 11th Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2010), pages 172–180, Los Angeles, USA.

Alan Ritter, Sam Clark, Mausam, and Oren Etzioni. 2011. Named entity recognition in tweets: An experimental study. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP 2011), pages 1524–1534, Edinburgh, Scotland, UK.

Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. 2010. Earthquake shakes Twitter users: real-time event detection by social sensors. In Proceedings of the 19th International Conference on the World Wide Web (WWW 2010), pages 851–860, Raleigh, North Carolina, USA.
Crispin Thurlow. 2003. Generation txt? The sociolinguistics of young people's text-messaging. Discourse Analysis Online, 1(1).

Kristina Toutanova and Robert C. Moore. 2002. Pronunciation modeling for improved spelling correction. In Proceedings of the 40th Annual Meeting of the ACL and 3rd Annual Meeting of the NAACL (ACL-02), pages 144–151, Philadelphia, USA.

Twitter Official Blog. 2011. 200 million tweets per day. Retrieved August 17th, 2011.

Jianshu Weng and Bu-Sung Lee. 2011. Event detection in Twitter. In Proceedings of the 5th International Conference on Weblogs and Social Media (ICWSM 2011), Barcelona, Spain.

Zhenzhen Xue, Dawei Yin, and Brian D. Davison. 2011. Normalizing microtext. In Proceedings of the AAAI-11 Workshop on Analyzing Microtext, pages 74–79, San Francisco, USA.

Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of the 18th International Conference on Computational Linguistics (COLING 2000), pages 947–953, Saarbrücken, Germany.

same-paper 2 0.71854222 108 emnlp-2012-Probabilistic Finite State Machines for Regression-based MT Evaluation

Author: Mengqiu Wang ; Christopher D. Manning

Abstract: Accurate and robust metrics for automatic evaluation are key to the development of statistical machine translation (MT) systems. We first introduce a new regression model that uses a probabilistic finite state machine (pFSM) to compute weighted edit distance as predictions of translation quality. We also propose a novel pushdown automaton extension of the pFSM model for modeling word swapping and cross alignments that cannot be captured by standard edit distance models. Our models can easily incorporate a rich set of linguistic features, and automatically learn their weights, eliminating the need for ad-hoc parameter tuning. Our methods achieve state-of-the-art correlation with human judgments on two different prediction tasks across a diverse set of standard evaluations (NIST OpenMT06,08; WMT0608).
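The quantity this abstract builds on is a weighted edit distance between a reference and a system translation. As a rough point of reference, the sketch below computes such a distance by dynamic programming with fixed per-operation costs; in the model described above those costs would instead come from the learned, feature-based pFSM, so the hand-set weights here are placeholders, not the paper's method.

```python
# Sketch of weighted edit distance over word sequences.  The per-operation
# costs are fixed placeholders; a learned model would supply them instead.

def weighted_edit_distance(ref, sys, cost_sub=1.0, cost_ins=1.0, cost_del=1.0,
                           match_cost=0.0):
    n, m = len(ref), len(sys)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = d[i - 1][0] + cost_del
    for j in range(1, m + 1):
        d[0][j] = d[0][j - 1] + cost_ins
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match_cost if ref[i - 1] == sys[j - 1] else cost_sub
            d[i][j] = min(d[i - 1][i - i - 1 + j] if False else d[i - 1][j - 1] + sub,  # substitute / match
                          d[i - 1][j] + cost_del,   # delete a reference word
                          d[i][j - 1] + cost_ins)   # insert a system word
    return d[n][m]

# e.g. weighted_edit_distance("the cat sat".split(), "the cat sits".split())
```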

3 0.50656593 42 emnlp-2012-Entropy-based Pruning for Phrase-based Machine Translation

Author: Wang Ling ; Joao Graca ; Isabel Trancoso ; Alan Black

Abstract: Phrase-based machine translation models have been shown to yield better translations than word-based models, since phrase pairs encode the contextual information that is needed for a more accurate translation. However, many phrase pairs do not encode any relevant context, which means that the translation event encoded in that phrase pair is led by smaller translation events that are independent from each other, and can be found on smaller phrase pairs, with little or no loss in translation accuracy. In this work, we propose a relative entropy model for translation models that measures how likely a phrase pair encodes a translation event that is derivable using smaller translation events with similar probabilities. This model is then applied to phrase table pruning. Tests show that considerable amounts of phrase pairs can be excluded, without much impact on the translation quality. In fact, we show that better translations can be obtained using our pruned models, due to the compression of the search space during decoding.
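As a rough illustration of the idea in this abstract, the sketch below scores a phrase pair by how closely its translation probability can be reproduced by composing two smaller pairs. The contiguous, monotone split and the particular scoring function are simplifying assumptions for illustration, not the paper's actual model.

```python
# Sketch: flag a phrase pair as redundant if its probability is close to the
# best probability obtainable by composing two smaller pairs.  The composition
# rule and scoring function are illustrative assumptions only.

import math

def redundancy_score(src, tgt, prob, table):
    """src, tgt: tuples of words; prob: p(tgt|src); table: {(src, tgt): prob}."""
    best_composed = 0.0
    for i in range(1, len(src)):
        for j in range(1, len(tgt)):
            left = table.get((src[:i], tgt[:j]))
            right = table.get((src[i:], tgt[j:]))
            if left is not None and right is not None:
                best_composed = max(best_composed, left * right)
    if best_composed == 0.0:
        return float("inf")   # not derivable from smaller pairs: keep it
    # A small value means the pair adds little beyond its decomposition.
    return prob * abs(math.log(prob / best_composed))

# Pruning would then discard pairs whose score falls below a chosen threshold.
```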

4 0.50305289 18 emnlp-2012-An Empirical Investigation of Statistical Significance in NLP

Author: Taylor Berg-Kirkpatrick ; David Burkett ; Dan Klein

Abstract: We investigate two aspects of the empirical behavior of paired significance tests for NLP systems. First, when one system appears to outperform another, how does significance level relate in practice to the magnitude of the gain, to the size of the test set, to the similarity of the systems, and so on? Is it true that for each task there is a gain which roughly implies significance? We explore these issues across a range of NLP tasks using both large collections of past systems’ outputs and variants of single systems. Next, once significance levels are computed, how well does the standard i.i.d. notion of significance hold up in practical settings where future distributions are neither independent nor identically distributed, such as across domains? We explore this question using a range of test set variations for constituency parsing.
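One concrete form of the paired tests discussed in this abstract, and of the computationally-intensive Monte Carlo test cited earlier in this page (Yeh, 2000), is paired approximate randomisation over per-example scores. The sketch below is a minimal version; the test statistic (difference in mean scores) and the number of shuffles are illustrative choices.

```python
# Sketch of a paired approximate-randomisation significance test between two
# systems' per-example scores on the same test set.

import random

def approximate_randomisation(scores_a, scores_b, trials=10000, seed=0):
    """Returns a p-value for the observed difference in mean scores."""
    rng = random.Random(seed)
    n = len(scores_a)
    observed = abs(sum(scores_a) - sum(scores_b)) / n
    at_least_as_large = 0
    for _ in range(trials):
        shuffled_a, shuffled_b = [], []
        for a, b in zip(scores_a, scores_b):
            if rng.random() < 0.5:      # swap the paired scores with prob. 1/2
                a, b = b, a
            shuffled_a.append(a)
            shuffled_b.append(b)
        diff = abs(sum(shuffled_a) - sum(shuffled_b)) / n
        if diff >= observed:
            at_least_as_large += 1
    return (at_least_as_large + 1) / (trials + 1)
```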

5 0.50251812 136 emnlp-2012-Weakly Supervised Training of Semantic Parsers

Author: Jayant Krishnamurthy ; Tom Mitchell

Abstract: We present a method for training a semantic parser using only a knowledge base and an unlabeled text corpus, without any individually annotated sentences. Our key observation is that multiple forms of weak supervision can be combined to train an accurate semantic parser: semantic supervision from a knowledge base, and syntactic supervision from dependency-parsed sentences. We apply our approach to train a semantic parser that uses 77 relations from Freebase in its knowledge representation. This semantic parser extracts instances of binary relations with state-of-the-art accuracy, while simultaneously recovering much richer semantic structures, such as conjunctions of multiple relations with partially shared arguments. We demonstrate recovery of this richer structure by extracting logical forms from natural language queries against Freebase. On this task, the trained semantic parser achieves 80% precision and 56% recall, despite never having seen an annotated logical form.

6 0.50191665 123 emnlp-2012-Syntactic Transfer Using a Bilingual Lexicon

7 0.50056106 109 emnlp-2012-Re-training Monolingual Parser Bilingually for Syntactic SMT

8 0.49961227 14 emnlp-2012-A Weakly Supervised Model for Sentence-Level Semantic Orientation Analysis with Multiple Experts

9 0.49929732 129 emnlp-2012-Type-Supervised Hidden Markov Models for Part-of-Speech Tagging with Incomplete Tag Dictionaries

10 0.49918255 89 emnlp-2012-Mixed Membership Markov Models for Unsupervised Conversation Modeling

11 0.49795455 5 emnlp-2012-A Discriminative Model for Query Spelling Correction with Latent Structural SVM

12 0.49729839 54 emnlp-2012-Forced Derivation Tree based Model Training to Statistical Machine Translation

13 0.49619135 45 emnlp-2012-Exploiting Chunk-level Features to Improve Phrase Chunking

14 0.4954645 70 emnlp-2012-Joint Chinese Word Segmentation, POS Tagging and Parsing

15 0.49303398 95 emnlp-2012-N-gram-based Tense Models for Statistical Machine Translation

16 0.49275863 24 emnlp-2012-Biased Representation Learning for Domain Adaptation

17 0.4915601 35 emnlp-2012-Document-Wide Decoding for Phrase-Based Statistical Machine Translation

18 0.49154145 64 emnlp-2012-Improved Parsing and POS Tagging Using Inter-Sentence Consistency Constraints

19 0.49005991 77 emnlp-2012-Learning Constraints for Consistent Timeline Extraction

20 0.48968828 3 emnlp-2012-A Coherence Model Based on Syntactic Patterns