acl acl2012 acl2012-128 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Jingbo Zhu ; Tong Xiao ; Chunliang Zhang
Abstract: This paper presents an unsupervised approach to learning translation span alignments from parallel data that improves syntactic rule extraction by deleting spurious word alignment links and adding new valuable links based on bilingual translation span correspondences. Experiments on Chinese-English translation demonstrate improvements over standard methods for tree-to-string and tree-to-tree translation.
Reference: text
sentIndex sentText sentNum sentScore
1 Learning Better Rule Extraction with Translation Span Alignment Jingbo Zhu Tong Xiao Chunliang Zhang Natural Language Processing Laboratory Northeastern University, Shenyang, China {zhujingbo, xiaotong, zhangcl}@mail.neu. [sent-1, score-0.067]
2 Abstract This paper presents an unsupervised approach to learning translation span alignments from parallel data that improves syntactic rule extraction by deleting spurious word alignment links and adding new valuable links based on bilingual translation span correspondences. [sent-4, score-2.828]
3 Experiments on Chinese-English translation demonstrate improvements over standard methods for tree-to-string and tree-to-tree translation. [sent-5, score-0.292]
4 1 Introduction Most syntax-based statistical machine translation (SMT) systems typically utilize word alignments and parse trees on the source/target side to learn syntactic transformation rules from parallel data. [sent-6, score-0.824]
5 The approach suffers from a practical problem: even one spurious (word alignment) link can prevent some desirable syntactic translation rules from being extracted, which can in turn affect the quality of translation rules and translation performance (May and Knight 2007; Fossum et al. [sent-7, score-1.612]
6 To address this challenge, a considerable amount of previous research has been done to improve alignment quality by incorporating some statistics and linguistic heuristics or syntactic information into word alignments (Cherry and Lin 2006; DeNero and Klein 2007; May and Knight 2007; Fossum et al. [sent-9, score-0.65]
7 [Figure 1] A Chinese-English sentence pair with word alignment and both-side parse trees. [sent-15, score-0.363]
8 Some useful syntactic rules are blocked due to the spurious link between “了” and “the”. [sent-17, score-0.687]
9 Firstly, the TSAs are constructed in an unsupervised manner and optimized by the translation model during the forced decoding process, without using any statistics, linguistic heuristics, or syntactic constraints. [sent-18, score-0.797]
10 Secondly, our approach is independent of the word alignment-based algorithm used to extract translation rules, and easy to implement. [sent-19, score-0.333]
11 2 Translation Span Alignment Model Unlike word alignment, TSA is the process of identifying span-to-span alignments between parallel sentences. [sent-20, score-0.26]
12 For each translation span pair, its source (or target) span is a sequence of source (or target) words. [sent-21, score-0.634]
13 [Figure 2 fragment: "Extract phrase translation rules R from the parallel ..."; example source sentence c = 进口 大幅度 减少 了] [sent-24, score-0.887]
14 Given a sentence pair (c, e), where c = c1...cn and e = e1...em, and its word alignment A, a translation span pair τ is a pair of a source span (ci...cj) and a target span (ep...eq), written τ = (c_i^j ⇔ e_p^q), where τ indicates that the source span (ci...cj) is the translation of the target span (ep...eq). [sent-31, score-1.471] [sent-37, score-0.393]
16 We do not require that τ be consistent with the associated word alignment A in a TSA model. [sent-44, score-0.282]
17 Figure 2 depicts the TSA generation algorithm, in which a phrase-based forced decoding technique is adopted to produce the TSA of each sentence pair. [sent-45, score-0.385]
18 In this work, we do not apply syntax-based forced decoding (e.g., tree-to-string) because phrase-based models can achieve state-of-the-art translation quality with a large amount of training data, and are not limited by any constituent-boundary-based constraints for decoding. [sent-46, score-0.282] [sent-48, score-0.339]
20 θ indicates parameters of the phrase-based translation model learned from the parallel corpus. [sent-50, score-0.342]
21 The best derivation d* produced by forced decoding can be viewed as a sequence of translation steps (i.e., phrase translation rules), expressed by d* = r1 ⊕ r2 ⊕ ... ⊕ rn, [sent-51, score-0.684] [sent-53, score-0.324]
23 where ri indicates a phrase rule used to form d*. [sent-58, score-0.249]
24 ⊕ is a composition operation that combines the rules {r1, ..., rn}. [sent-59, score-0.069]
25 As mentioned above, the best derivation d* respects the input sentence pair (c, e). [sent-63, score-0.234]
26 It means that for each phrase translation rule ri used by d*, its source (or target) side exactly matches a span of the given source (or target) sentence. [sent-64, score-1.019]
27 The source side src(ri) and the target side tgt(ri) of each phrase translation rule ri in d* form a translation span pair {src(ri)<=>tgt(ri)} of (c,e). [sent-65, score-1.449]
28 In other words, the TSA of (c,e) is a set of translation span pairs generated from phrase translation rules used by the best derivation d*. [sent-66, score-1.166]
29 The forced-decoding-based TSA generation for the example sentence pair in Figure 1 is shown in Table 2. [sent-67, score-0.406]
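To make this construction concrete, the following sketch (not the authors' implementation; the rule and span representation is assumed) collects the translation span pairs induced by the phrase rules of the best forced-decoding derivation, using the spans reported for the Figure 1 example in Table 3.

```python
from typing import List, Tuple

Span = Tuple[int, int]          # inclusive word-index range, e.g. (1, 2)
SpanPair = Tuple[Span, Span]    # (source span, target span)

def tsa_from_derivation(rules: List[SpanPair]) -> List[SpanPair]:
    """The TSA of (c, e) is the set of span pairs {src(ri) <=> tgt(ri)}
    contributed by the phrase rules r1 ... rn of the best derivation d*."""
    return list(dict.fromkeys(rules))   # keep order, drop duplicates

# Hypothetical derivation for the Figure 1 sentence pair
# (source c = 进口 大幅度 减少 了); the resulting span pairs match Table 3:
# [1,1]<=>[1,2], [2,3]<=>[4,5], [4,4]<=>[3,3].
d_star = [((1, 1), (1, 2)), ((2, 3), (4, 5)), ((4, 4), (3, 3))]
print(tsa_from_derivation(d_star))
```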
30 3 Better Rule Extraction with TSAs To better understand the particular task addressed in this section, we first define what it means for a link to be inconsistent with a translation span alignment. [sent-68, score-0.698]
31 Given a sentence pair (c, e) with the word alignment A and the translation span alignment P, we call a link (ci, ej) ∈ A inconsistent with P if ci and ej are covered, respectively, by two different translation span pairs in P, and vice versa. [sent-69, score-2.395]
32 (ci, ej) ∈ A inconsistent with P ⇔ ∃ τ ∈ P : ci ∈ src(τ) ∧ ej ∉ tgt(τ) OR ∃ τ ∈ P : ci ∉ src(τ) ∧ ej ∈ tgt(τ), where src(τ) and tgt(τ) indicate the source and target span of a translation span pair τ. [sent-70, score-1.604]
33 Based on this definition, we say that a link (ci, ej) ∈ A is a spurious link if it is inconsistent with the given TSA. [sent-71, score-0.645]
34 Table 3 shows that an original link (4→1) is covered by two different translation span pairs, ([4,4]<=>[3,3]) and ([1,1]<=>[1,2]), respectively. [sent-72, score-0.871]
35 In such a case, this link (4→1) is considered a spurious link according to this TSA and should be removed before rule extraction. [sent-73, score-0.698]
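The inconsistency test can be written down directly. The sketch below (an illustrative paraphrase of the definition, not code from the paper) flags a link as spurious when the span pair covering its source word differs from the span pair covering its target word, reproducing the (4→1) case.

```python
from typing import List, Optional, Tuple

Span = Tuple[int, int]        # inclusive word-index range
SpanPair = Tuple[Span, Span]  # (source span, target span)

def covering_pair(i: int, tsa: List[SpanPair], side: int) -> Optional[SpanPair]:
    """Return the span pair whose source (side=0) or target (side=1) span covers word i."""
    for pair in tsa:
        lo, hi = pair[side]
        if lo <= i <= hi:
            return pair
    return None

def is_spurious(link: Tuple[int, int], tsa: List[SpanPair]) -> bool:
    """A link (ci, ej) is inconsistent with the TSA P when ci and ej are
    covered by two *different* translation span pairs of P."""
    ci, ej = link
    src_pair = covering_pair(ci, tsa, side=0)
    tgt_pair = covering_pair(ej, tsa, side=1)
    return src_pair is not None and tgt_pair is not None and src_pair != tgt_pair

# TSA from Table 3 and the link (4 -> 1) discussed in the text:
tsa = [((4, 4), (3, 3)), ((1, 1), (1, 2)), ((2, 3), (4, 5))]
print(is_spurious((4, 1), tsa))   # True: source word 4 is in [4,4]<=>[3,3], target word 1 is in [1,1]<=>[1,2]
print(is_spurious((4, 3), tsa))   # False: both ends fall inside [4,4]<=>[3,3]
```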
36 Given a resulting TSA P, there are four different types of translation span pairs, namely one-to-one, one-to-many, many-to-one, and many-to-many. [sent-74, score-0.634]
37 For example, the TSA shown in Table 3 contains a one-to-one span pair ([4,4]<=>[3,3]), a one-to-many span pair ([1,1]<=>[1,2]) and a many-to-many span pair ([2,3]<=>[4,5]). [sent-75, score-1.269]
38 In such a case, we can learn a confident link from a one-to-one translation span pair that is preferred by the translation model in the forced-decoding-based TSA generation approach. [sent-76, score-1.534]
39 If such a confident link does not exist in the original word alignment, we consider it a new valuable link. [sent-77, score-0.35]
40 A natural approach, then, is to use TSAs to directly improve word alignment quality by deleting spurious links and adding new confident links, which in turn improves rule quality and translation quality. [sent-78, score-1.336]
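A minimal sketch of this refinement step, assuming deletion and insertion are applied exactly as described (spurious links removed, links implied by one-to-one span pairs added); the alignment links other than (4,1) are hypothetical.

```python
from typing import List, Set, Tuple

Span = Tuple[int, int]
SpanPair = Tuple[Span, Span]
Link = Tuple[int, int]        # (source word index, target word index)

def covers(i: int, span: Span) -> bool:
    return span[0] <= i <= span[1]

def inconsistent(link: Link, tsa: List[SpanPair]) -> bool:
    ci, ej = link
    src_pair = next((p for p in tsa if covers(ci, p[0])), None)
    tgt_pair = next((p for p in tsa if covers(ej, p[1])), None)
    return src_pair is not None and tgt_pair is not None and src_pair != tgt_pair

def refine_alignment(alignment: Set[Link], tsa: List[SpanPair]) -> Set[Link]:
    """Delete spurious links and add confident links from one-to-one span pairs."""
    # 1) spurious link deletion
    refined = {link for link in alignment if not inconsistent(link, tsa)}
    # 2) new confident link insertion from one-to-one span pairs
    for (cs, ce), (ts, te) in tsa:
        if cs == ce and ts == te:          # one-to-one span pair
            refined.add((cs, ts))
    return refined

# Example following the text: delete the spurious (了, the) link (4, 1) and
# add the confident (了, have) link (4, 3) implied by the pair [4,4]<=>[3,3].
alignment = {(1, 2), (2, 4), (3, 5), (4, 1)}
tsa = [((4, 4), (3, 3)), ((1, 1), (1, 2)), ((2, 3), (4, 5))]
print(sorted(refine_alignment(alignment, tsa)))
# [(1, 2), (2, 4), (3, 5), (4, 3)]
```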
41 In other words, if a desirable translation rule was blocked by spurious links, it can now be extracted and output. [sent-79, score-1.055]
42 The blocked tree-to-string rule r3 can be extracted successfully after deleting the spurious link (了, the), and a new tree-to-string rule r1 can be extracted after adding a new confident link (了, have) that is inferred from the one-to-one translation span pair [4,4]<=>[3,3]. [sent-81, score-1.718]
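The blocking effect itself can be illustrated with the standard phrase-pair consistency condition, used here as a simplified stand-in for the frontier condition of GHKM-style rule extraction; the alignment links other than (了, the) and (了, have) are assumed for illustration.

```python
from typing import Set, Tuple

Link = Tuple[int, int]   # (source word index, target word index)
Span = Tuple[int, int]   # inclusive range

def consistent(src_span: Span, tgt_span: Span, alignment: Set[Link]) -> bool:
    """A span pair can yield a rule only if no alignment link connects a word
    inside one span to a word outside the other span."""
    touching = [(s, t) for (s, t) in alignment
                if src_span[0] <= s <= src_span[1] or tgt_span[0] <= t <= tgt_span[1]]
    return all(src_span[0] <= s <= src_span[1] and tgt_span[0] <= t <= tgt_span[1]
               for (s, t) in touching)

before = {(1, 2), (2, 4), (3, 5), (4, 1)}   # contains the spurious (了, the) link
after  = {(1, 2), (2, 4), (3, 5), (4, 3)}   # spurious link deleted, (了, have) added

print(consistent((4, 4), (3, 3), before))   # False: (4,1) reaches outside the target span
print(consistent((4, 4), (3, 3), after))    # True: r1 can now be extracted
print(consistent((1, 1), (1, 2), before))   # False: (4,1) reaches inside the target span
print(consistent((1, 1), (1, 2), after))    # True
```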
43 We begin with a training parallel corpus of Chinese-English bitexts that consists of 8. [sent-85, score-0.077]
44 For syntactic translation rule extraction, minimal GHKM (Galley et al., 2004) rules are first extracted from the bilingual corpus, whose source and target sides are parsed using the Berkeley parser (Petrov et al. [sent-89, score-0.526] [sent-90, score-0.245]
46 The composed rules are then generated by composing two or three minimal rules. [sent-92, score-0.069]
47 2006), including 14 base features in total, such as a 5-gram language model and bidirectional lexical and phrase-based translation probabilities. [sent-96, score-0.292]
48 All features were log-linearly combined and their weights were optimized by performing minimum error rate training (MERT) (Och 2003). [sent-97, score-0.043]
49 The development data set used for weight training comes from the NIST MT03 evaluation set and consists of 326 sentence pairs whose Chinese sentences each contain fewer than 20 words. [sent-98, score-0.072]
50 The two test sets are the NIST MT04 (1788 sentence pairs) and MT05 (1082 sentence pairs) evaluation sets. [sent-99, score-0.086]
51 The translation quality is evaluated in terms of the case-insensitive IBM-BLEU4 metric. [sent-100, score-0.366]
52 4.2 Effect on Word Alignment To investigate the effect of the TSA method on word alignment, we designed an experiment to evaluate alignment quality against gold-standard annotations. [sent-102, score-0.329]
53 We use 200 randomly chosen and manually aligned Chinese-English sentence pairs to assess word alignment quality. [sent-103, score-0.354]
54 For word alignment evaluation, we calculated precision, recall and F1-score against the gold word alignments. [sent-104, score-0.323]
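For reference, precision, recall and F1 over alignment links are computed in the usual way (a standard formulation, not taken from the paper; sure/possible link distinctions are ignored here).

```python
from typing import Set, Tuple

Link = Tuple[int, int]

def alignment_prf(predicted: Set[Link], gold: Set[Link]) -> Tuple[float, float, float]:
    """Precision, recall and F1-score of a predicted alignment against gold links."""
    correct = len(predicted & gold)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: 3 of 4 predicted links appear in a gold alignment of 5 links.
pred = {(1, 2), (2, 4), (3, 5), (4, 1)}
gold = {(1, 2), (2, 4), (3, 5), (4, 3), (2, 5)}
print(alignment_prf(pred, gold))   # (0.75, 0.6, 0.666...)
```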
55 Table 4 depicts word alignment performance of the baseline and TSA methods. [sent-105, score-0.342]
56 We apply the TSAs to refine the baseline word alignments, performing spurious link deletion and new link insertion operations. [sent-106, score-0.679]
57 4.3 Translation Quality Table 5 compares the baseline and our method (TSA) in tree-to-string (T2S) and tree-to-tree (T2T) translation on the dev set (MT03) and two test sets (MT04 and MT05). [sent-109, score-0.292]
58 Table 5 depicts the effectiveness of our TSA method on translation quality in tree-to-string and tree-to-tree translation tasks. [sent-113, score-0.691]
59 Table 5 shows that our TSA method can improve both syntax-based translation systems. [sent-114, score-0.292]
60 As mentioned before, the resulting TSAs are essentially optimized by the translation model. [sent-115, score-0.335]
61 Based on such TSAs, experiments show that spurious link deletion and new valuable link insertion can improve translation quality for tree-to-string and tree-to-tree systems. [sent-116, score-1.041]
62 5 Related Work Previous studies have made great efforts to incorporate statistics and linguistic heuristics or syntactic information into word alignments (Ittycheriah and Roukos 2005; Taskar et al. [sent-117, score-0.388]
63 Fossum et al. (2008) used a discriminatively trained model to identify and delete incorrect links from original word alignments to improve string-to-tree transformation rule extraction; their model incorporates four types of features, such as lexical and syntactic features. [sent-123, score-0.644]
64 This paper presents an approach to incorporating translation span alignments into word alignments to delete spurious links and add new valuable links. [sent-124, score-1.479]
65 Some previous work directly models the syntactic correspondence in the training data for syntactic rule extraction (Imamura 2001; Groves et al. [sent-125, score-0.407]
66 Some previous methods infer syntactic correspondences between the source and the target languages through word alignments and constituent-boundary-based syntactic constraints. [sent-130, score-0.569]
67 Such a syntactic alignment method is sensitive to word alignment behavior. [sent-131, score-0.64]
68 Pauls et al. (2010) presented an unsupervised ITG alignment model that directly aligns syntactic structures for string-to-tree transformation rule extraction. [sent-133, score-0.555]
69 One major problem with syntactic structure alignment is that syntactic divergence between languages can prevent accurate syntactic alignments between the source and target languages. [sent-134, score-0.886]
70 May and Knight (2007) presented a syntactic realignment model for syntax-based MT that uses syntactic constraints to re-align a parallel corpus with word alignments. [sent-135, score-0.388]
71 First, the approach proposed by May and Knight (2007) utilizes the EM algorithm to obtain Viterbi derivation trees from the derivation forests of each (tree, string) pair, and then produces Viterbi alignments based on the obtained derivation trees. [sent-138, score-0.526]
72 Our forced-decoding-based approach searches for the best derivation to produce translation span alignments that are used to improve the extraction of translation rules. [sent-139, score-1.543]
73 Translation span alignments are optimized by the translation model. [sent-140, score-0.846]
74 Secondly, their models are only applicable to syntax-based systems, while our method can be applied to both phrase-based and syntax-based translation tasks. [sent-141, score-0.292]
75 6 Conclusion This paper presents an unsupervised approach to improving syntactic transformation rule extraction by deleting spurious links and adding new valuable links with the help of bilingual translation span alignments that are built using a phrase-based forced decoding technique. [sent-142, score-2.196]
76 In future work, it is worth studying how to combine the best of our approach and discriminative word alignment models to improve rule extraction for SMT models. [sent-143, score-0.499]
77 Soft syntactic constraints for word alignment through discriminative training. [sent-147, score-0.443]
78 A maximum entropy word aligner for Arabic-English machine translation. [sent-186, score-0.041]
79 SPMT: Statistical machine translation with syntactified target language phrases. [sent-194, score-0.366]
wordName wordTfidf (topN-words)
[('tsa', 0.401), ('span', 0.342), ('translation', 0.292), ('alignment', 0.241), ('spurious', 0.227), ('tgt', 0.2), ('forced', 0.186), ('link', 0.177), ('alignments', 0.169), ('tsas', 0.164), ('fossum', 0.125), ('syntactic', 0.117), ('rule', 0.117), ('links', 0.116), ('derivation', 0.11), ('src', 0.105), ('deleting', 0.105), ('ej', 0.102), ('ri', 0.1), ('blocked', 0.097), ('decoding', 0.096), ('knight', 0.093), ('vp', 0.086), ('pair', 0.081), ('ci', 0.077), ('target', 0.074), ('rules', 0.069), ('confident', 0.068), ('valuable', 0.064), ('inconsistent', 0.064), ('groves', 0.063), ('hearne', 0.063), ('hermjakob', 0.063), ('jingbo', 0.063), ('realignment', 0.063), ('tinsley', 0.063), ('pauls', 0.06), ('depicts', 0.06), ('cherry', 0.058), ('extraction', 0.056), ('imports', 0.055), ('niutrans', 0.055), ('denero', 0.052), ('transformation', 0.052), ('source', 0.051), ('bilingual', 0.051), ('ittycheriah', 0.05), ('parallel', 0.05), ('xiao', 0.049), ('quality', 0.047), ('kevin', 0.047), ('nn', 0.045), ('discriminative', 0.044), ('tong', 0.044), ('vbz', 0.044), ('sentence', 0.043), ('optimized', 0.043), ('chew', 0.042), ('word', 0.041), ('zhu', 0.04), ('andy', 0.037), ('sun', 0.037), ('adding', 0.035), ('heuristics', 0.035), ('side', 0.034), ('taskar', 0.034), ('lim', 0.033), ('np', 0.033), ('phrase', 0.032), ('delete', 0.032), ('moore', 0.031), ('smt', 0.031), ('covered', 0.031), ('deletion', 0.031), ('desirable', 0.03), ('viterbi', 0.029), ('secondly', 0.029), ('pairs', 0.029), ('unsupervised', 0.028), ('harmonized', 0.027), ('bitexts', 0.027), ('ventsislav', 0.027), ('zhechev', 0.027), ('forests', 0.027), ('cij', 0.027), ('tailoring', 0.027), ('imamura', 0.027), ('advp', 0.027), ('chunliang', 0.027), ('caseinsensitive', 0.027), ('ghkm', 0.027), ('ingbo', 0.027), ('sts', 0.027), ('galley', 0.027), ('mary', 0.027), ('derivations', 0.027), ('zhang', 0.027), ('presents', 0.027), ('insertion', 0.026), ('efforts', 0.026)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000002 128 acl-2012-Learning Better Rule Extraction with Translation Span Alignment
Author: Jingbo Zhu ; Tong Xiao ; Chunliang Zhang
Abstract: This paper presents an unsupervised approach to learning translation span alignments from parallel data that improves syntactic rule extraction by deleting spurious word alignment links and adding new valuable links based on bilingual translation span correspondences. Experiments on Chinese-English translation demonstrate improvements over standard methods for tree-to-string and tree-to-tree translation. 1
2 0.24492949 140 acl-2012-Machine Translation without Words through Substring Alignment
Author: Graham Neubig ; Taro Watanabe ; Shinsuke Mori ; Tatsuya Kawahara
Abstract: In this paper, we demonstrate that accurate machine translation is possible without the concept of “words,” treating MT as a problem of transformation between character strings. We achieve this result by applying phrasal inversion transduction grammar alignment techniques to character strings to train a character-based translation model, and using this in the phrase-based MT framework. We also propose a look-ahead parsing algorithm and substring-informed prior probabilities to achieve more effective and efficient alignment. In an evaluation, we demonstrate that character-based translation can achieve results that compare to word-based systems while effectively translating unknown and uncommon words over several language pairs.
3 0.22007738 155 acl-2012-NiuTrans: An Open Source Toolkit for Phrase-based and Syntax-based Machine Translation
Author: Tong Xiao ; Jingbo Zhu ; Hao Zhang ; Qiang Li
Abstract: We present a new open source toolkit for phrase-based and syntax-based machine translation. The toolkit supports several state-of-the-art models developed in statistical machine translation, including the phrase-based model, the hierachical phrase-based model, and various syntaxbased models. The key innovation provided by the toolkit is that the decoder can work with various grammars and offers different choices of decoding algrithms, such as phrase-based decoding, decoding as parsing/tree-parsing and forest-based decoding. Moreover, several useful utilities were distributed with the toolkit, including a discriminative reordering model, a simple and fast language model, and an implementation of minimum error rate training for weight tuning. 1
4 0.21586876 179 acl-2012-Smaller Alignment Models for Better Translations: Unsupervised Word Alignment with the l0-norm
Author: Ashish Vaswani ; Liang Huang ; David Chiang
Abstract: Two decades after their invention, the IBM word-based translation models, widely available in the GIZA++ toolkit, remain the dominant approach to word alignment and an integral part of many statistical translation systems. Although many models have surpassed them in accuracy, none have supplanted them in practice. In this paper, we propose a simple extension to the IBM models: an ‘0 prior to encourage sparsity in the word-to-word translation model. We explain how to implement this extension efficiently for large-scale data (also released as a modification to GIZA++) and demonstrate, in experiments on Czech, Arabic, Chinese, and Urdu to English translation, significant improvements over IBM Model 4 in both word alignment (up to +6.7 F1) and translation quality (up to +1.4 B ).
5 0.19944832 25 acl-2012-An Exploration of Forest-to-String Translation: Does Translation Help or Hurt Parsing?
Author: Hui Zhang ; David Chiang
Abstract: Syntax-based translation models that operate on the output of a source-language parser have been shown to perform better if allowed to choose from a set of possible parses. In this paper, we investigate whether this is because it allows the translation stage to overcome parser errors or to override the syntactic structure itself. We find that it is primarily the latter, but that under the right conditions, the translation stage does correct parser errors, improving parsing accuracy on the Chinese Treebank.
6 0.17756619 131 acl-2012-Learning Translation Consensus with Structured Label Propagation
7 0.17688757 141 acl-2012-Maximum Expected BLEU Training of Phrase and Lexicon Translation Models
8 0.16396207 81 acl-2012-Enhancing Statistical Machine Translation with Character Alignment
9 0.15788887 108 acl-2012-Hierarchical Chunk-to-String Translation
10 0.14941765 204 acl-2012-Translation Model Size Reduction for Hierarchical Phrase-based Statistical Machine Translation
11 0.14936037 109 acl-2012-Higher-order Constituent Parsing and Parser Combination
12 0.14164458 143 acl-2012-Mixing Multiple Translation Models in Statistical Machine Translation
13 0.13655941 3 acl-2012-A Class-Based Agreement Model for Generating Accurately Inflected Translations
14 0.13467696 203 acl-2012-Translation Model Adaptation for Statistical Machine Translation with Monolingual Topic Information
15 0.13175334 19 acl-2012-A Ranking-based Approach to Word Reordering for Statistical Machine Translation
16 0.12786545 147 acl-2012-Modeling the Translation of Predicate-Argument Structure for SMT
17 0.12653835 22 acl-2012-A Topic Similarity Model for Hierarchical Phrase-based Translation
18 0.12297256 116 acl-2012-Improve SMT Quality with Automatically Extracted Paraphrase Rules
19 0.12039019 105 acl-2012-Head-Driven Hierarchical Phrase-based Translation
20 0.11897064 66 acl-2012-DOMCAT: A Bilingual Concordancer for Domain-Specific Computer Assisted Translation
topicId topicWeight
[(0, -0.291), (1, -0.293), (2, 0.089), (3, 0.036), (4, 0.057), (5, -0.11), (6, -0.036), (7, 0.03), (8, 0.013), (9, 0.0), (10, 0.028), (11, -0.08), (12, -0.058), (13, -0.016), (14, 0.053), (15, -0.045), (16, -0.003), (17, 0.055), (18, 0.071), (19, 0.002), (20, -0.108), (21, 0.059), (22, -0.024), (23, 0.057), (24, 0.022), (25, -0.032), (26, -0.06), (27, -0.189), (28, 0.054), (29, -0.081), (30, 0.062), (31, -0.015), (32, -0.044), (33, 0.067), (34, -0.149), (35, 0.151), (36, -0.045), (37, -0.023), (38, -0.105), (39, 0.06), (40, 0.03), (41, -0.07), (42, 0.012), (43, -0.05), (44, -0.011), (45, 0.044), (46, 0.059), (47, -0.051), (48, -0.076), (49, -0.056)]
simIndex simValue paperId paperTitle
same-paper 1 0.96413422 128 acl-2012-Learning Better Rule Extraction with Translation Span Alignment
Author: Jingbo Zhu ; Tong Xiao ; Chunliang Zhang
Abstract: This paper presents an unsupervised approach to learning translation span alignments from parallel data that improves syntactic rule extraction by deleting spurious word alignment links and adding new valuable links based on bilingual translation span correspondences. Experiments on Chinese-English translation demonstrate improvements over standard methods for tree-to-string and tree-to-tree translation. 1
2 0.76527435 179 acl-2012-Smaller Alignment Models for Better Translations: Unsupervised Word Alignment with the l0-norm
Author: Ashish Vaswani ; Liang Huang ; David Chiang
Abstract: Two decades after their invention, the IBM word-based translation models, widely available in the GIZA++ toolkit, remain the dominant approach to word alignment and an integral part of many statistical translation systems. Although many models have surpassed them in accuracy, none have supplanted them in practice. In this paper, we propose a simple extension to the IBM models: an ‘0 prior to encourage sparsity in the word-to-word translation model. We explain how to implement this extension efficiently for large-scale data (also released as a modification to GIZA++) and demonstrate, in experiments on Czech, Arabic, Chinese, and Urdu to English translation, significant improvements over IBM Model 4 in both word alignment (up to +6.7 F1) and translation quality (up to +1.4 B ).
3 0.76489049 140 acl-2012-Machine Translation without Words through Substring Alignment
Author: Graham Neubig ; Taro Watanabe ; Shinsuke Mori ; Tatsuya Kawahara
Abstract: In this paper, we demonstrate that accurate machine translation is possible without the concept of “words,” treating MT as a problem of transformation between character strings. We achieve this result by applying phrasal inversion transduction grammar alignment techniques to character strings to train a character-based translation model, and using this in the phrase-based MT framework. We also propose a look-ahead parsing algorithm and substring-informed prior probabilities to achieve more effective and efficient alignment. In an evaluation, we demonstrate that character-based translation can achieve results that compare to word-based systems while effectively translating unknown and uncommon words over several language pairs.
4 0.75710809 105 acl-2012-Head-Driven Hierarchical Phrase-based Translation
Author: Junhui Li ; Zhaopeng Tu ; Guodong Zhou ; Josef van Genabith
Abstract: This paper presents an extension of Chiang’s hierarchical phrase-based (HPB) model, called Head-Driven HPB (HD-HPB), which incorporates head information in translation rules to better capture syntax-driven information, as well as improved reordering between any two neighboring non-terminals at any stage of a derivation to explore a larger reordering search space. Experiments on Chinese-English translation on four NIST MT test sets show that the HD-HPB model significantly outperforms Chiang’s model with average gains of 1.91 points absolute in BLEU. 1
5 0.7522341 108 acl-2012-Hierarchical Chunk-to-String Translation
Author: Yang Feng ; Dongdong Zhang ; Mu Li ; Qun Liu
Abstract: We present a hierarchical chunk-to-string translation model, which can be seen as a compromise between the hierarchical phrasebased model and the tree-to-string model, to combine the merits of the two models. With the help of shallow parsing, our model learns rules consisting of words and chunks and meanwhile introduce syntax cohesion. Under the weighed synchronous context-free grammar defined by these rules, our model searches for the best translation derivation and yields target translation simultaneously. Our experiments show that our model significantly outperforms the hierarchical phrasebased model and the tree-to-string model on English-Chinese Translation tasks.
6 0.7351861 25 acl-2012-An Exploration of Forest-to-String Translation: Does Translation Help or Hurt Parsing?
7 0.72754681 204 acl-2012-Translation Model Size Reduction for Hierarchical Phrase-based Statistical Machine Translation
8 0.69477874 66 acl-2012-DOMCAT: A Bilingual Concordancer for Domain-Specific Computer Assisted Translation
9 0.69458151 131 acl-2012-Learning Translation Consensus with Structured Label Propagation
10 0.68520659 155 acl-2012-NiuTrans: An Open Source Toolkit for Phrase-based and Syntax-based Machine Translation
11 0.67842978 118 acl-2012-Improving the IBM Alignment Models Using Variational Bayes
12 0.66544008 81 acl-2012-Enhancing Statistical Machine Translation with Character Alignment
13 0.60788524 1 acl-2012-ACCURAT Toolkit for Multi-Level Alignment and Information Extraction from Comparable Corpora
14 0.60738045 143 acl-2012-Mixing Multiple Translation Models in Statistical Machine Translation
15 0.55541044 141 acl-2012-Maximum Expected BLEU Training of Phrase and Lexicon Translation Models
16 0.54577172 67 acl-2012-Deciphering Foreign Language by Combining Language Models and Context Vectors
17 0.54324484 3 acl-2012-A Class-Based Agreement Model for Generating Accurately Inflected Translations
18 0.54161 107 acl-2012-Heuristic Cube Pruning in Linear Time
19 0.53787851 203 acl-2012-Translation Model Adaptation for Statistical Machine Translation with Monolingual Topic Information
20 0.51862323 63 acl-2012-Cross-lingual Parse Disambiguation based on Semantic Correspondence
topicId topicWeight
[(7, 0.025), (26, 0.018), (28, 0.508), (30, 0.014), (37, 0.012), (39, 0.027), (57, 0.047), (74, 0.022), (84, 0.017), (85, 0.031), (90, 0.084), (92, 0.035), (94, 0.039), (99, 0.036)]
simIndex simValue paperId paperTitle
1 0.94910586 26 acl-2012-Applications of GPC Rules and Character Structures in Games for Learning Chinese Characters
Author: Wei-Jie Huang ; Chia-Ru Chou ; Yu-Lin Tzeng ; Chia-Ying Lee ; Chao-Lin Liu
Abstract: We demonstrate applications of psycholinguistic and sublexical information for learning Chinese characters. The knowledge about the grapheme-phoneme conversion (GPC) rules of languages has been shown to be highly correlated to the ability of reading alphabetic languages and Chinese. We build and will demo a game platform for strengthening the association of phonological components in Chinese characters with the pronunciations of the characters. Results of a preliminary evaluation of our games indicated significant improvement in learners’ response times in Chinese naming tasks. In addition, we construct a Webbased open system for teachers to prepare their own games to best meet their teaching goals. Techniques for decomposing Chinese characters and for comparing the similarity between Chinese characters were employed to recommend lists of Chinese characters for authoring the games. Evaluation of the authoring environment with 20 subjects showed that our system made the authoring of games more effective and efficient.
same-paper 2 0.90079683 128 acl-2012-Learning Better Rule Extraction with Translation Span Alignment
Author: Jingbo Zhu ; Tong Xiao ; Chunliang Zhang
Abstract: This paper presents an unsupervised approach to learning translation span alignments from parallel data that improves syntactic rule extraction by deleting spurious word alignment links and adding new valuable links based on bilingual translation span correspondences. Experiments on Chinese-English translation demonstrate improvements over standard methods for tree-to-string and tree-to-tree translation. 1
3 0.89534712 15 acl-2012-A Meta Learning Approach to Grammatical Error Correction
Author: Hongsuck Seo ; Jonghoon Lee ; Seokhwan Kim ; Kyusong Lee ; Sechun Kang ; Gary Geunbae Lee
Abstract: We introduce a novel method for grammatical error correction with a number of small corpora. To make the best use of several corpora with different characteristics, we employ a meta-learning with several base classifiers trained on different corpora. This research focuses on a grammatical error correction task for article errors. A series of experiments is presented to show the effectiveness of the proposed approach on two different grammatical error tagged corpora. 1.
4 0.82184231 199 acl-2012-Topic Models for Dynamic Translation Model Adaptation
Author: Vladimir Eidelman ; Jordan Boyd-Graber ; Philip Resnik
Abstract: We propose an approach that biases machine translation systems toward relevant translations based on topic-specific contexts, where topics are induced in an unsupervised way using topic models; this can be thought of as inducing subcorpora for adaptation without any human annotation. We use these topic distributions to compute topic-dependent lex- ical weighting probabilities and directly incorporate them into our translation model as features. Conditioning lexical probabilities on the topic biases translations toward topicrelevant output, resulting in significant improvements of up to 1 BLEU and 3 TER on Chinese to English translation over a strong baseline.
5 0.80158526 218 acl-2012-You Had Me at Hello: How Phrasing Affects Memorability
Author: Cristian Danescu-Niculescu-Mizil ; Justin Cheng ; Jon Kleinberg ; Lillian Lee
Abstract: Understanding the ways in which information achieves widespread public awareness is a research question of significant interest. We consider whether, and how, the way in which the information is phrased — the choice of words and sentence structure — can affect this process. To this end, we develop an analysis framework and build a corpus of movie quotes, annotated with memorability information, in which we are able to control for both the speaker and the setting of the quotes. We find that there are significant differences between memorable and non-memorable quotes in several key dimensions, even after controlling for situational and contextual factors. One is lexical distinctiveness: in aggregate, memorable quotes use less common word choices, but at the same time are built upon a scaffolding of common syntactic patterns. Another is that memorable quotes tend to be more general in ways that make them easy to apply in new contexts — that is, more portable. We also show how the concept of "memorable language" can be extended across domains.
— — The present work (i): Evaluating language-based memorability Defining what makes an utterance memorable is subtle, and scholars in several domains have written about this question. There is a rough consensus that an appropriate definition involves elements of both recognition people should be able to retain the quote and recognize it when they hear it invoked and production people should be motivated to refer to it in relevant situations [15]. One suggested reason for why some memes succeed is their ability to provoke emotions [16]. Alternatively, memorable quotes can be good for expressing the feelings, mood, or situation of an individual, a group, or a culture (the zeitgeist): “Certain quotes exquisitely capture the mood or feeling we wish to communicate to someone. We hear them ... and store them away for future use” [10]. None of these observations, however, serve as definitions, and indeed, we believe it desirable to — — — not pre-commit to an abstract definition, but rather to adopt an operational formulation based on external human judgments. In designing our study, we focus on a domain in which (i) there is rich use of language, some of which has achieved deep cultural penetration; (ii) there already exist a large number of external human judgments perhaps implicit, but in a form we can extract; and (iii) we can control for the setting in which the text was used. Specifically, we use the complete scripts of roughly 1000 movies, representing diverse genres, eras, and levels of popularity, and consider which lines are the most “memorable”. To acquire memorability labels, for each sentence in each script, we determine whether it has been listed as a “memorable quote” by users of the widely-known IMDb (the Internet Movie Database), and also estimate the number oftimes it appears on the Web. Both ofthese serve as memorability metrics for our purposes. When we evaluate properties of memorable quotes, we comparethemwithquotes thatarenotassessed as memorable, but were spoken by the same character, at approximately the same point in the same movie. This enables us to control in a fairly — fine-grained way for the confounding effects of context discussed above: we can observe differences 893 that persist even after taking into account both the speaker and the setting. In a pilot validation study, we find that human subjects are effective at recognizing the more IMDbmemorable of two quotes, even for movies they have not seen. This motivates a search for features intrinsic to the text of quotes that signal memorability. In fact, comments provided by the human subjects as part of the task suggested two basic forms that such textual signals could take: subjects felt that (i) memorable quotes often involve a distinctive turn of phrase; and (ii) memorable quotes tend to invoke general themes that aren’t tied to the specific setting they came from, and hence can be more easily invoked for future (out of context) uses. We test both of these principles in our analysis of the data. The present work (ii): What distinguishes memorable quotes Under the controlled-comparison setting sketched above, we find that memorable quotes exhibit significant differences from nonmemorable quotes in several fundamental respects, and these differences in the data reinforce the two main principles from the human pilot study. 
First, we show a concrete sense in which memorable quotes are indeed distinctive: with respect to lexical language models trained on the newswire portions of the Brown corpus [21], memorable quotes have significantly lower likelihood than their nonmemorable counterparts. Interestingly, this distinctiveness takes place at the level of words, but not at the level of other syntactic features: the part-ofspeech composition of memorable quotes is in fact more likely with respect to newswire. Thus, we can think of memorable quotes as consisting, in an aggregate sense, of unusual word choices built on a scaffolding of common part-of-speech patterns. We also identify a number of ways in which memorable quotes convey greater generality. In their patterns of verb tenses, personal pronouns, and determiners, memorable quotes are structured so as to be more “free-standing,” containing fewer markers that indicate references to nearby text. Memorable quotes differ in other interesting as- pects as well, such as sound distributions. Our analysis ofmemorable movie quotes suggests a framework by which the memorability of text in a range of different domains could be investigated. We provide evidence that such cross-domain properties may hold, guided by one of our motivating applications in marketing. In particular, we analyze a corpus of advertising slogans, and we show that these slogans have significantly greater likelihood at both the word level and the part-of-speech level with respect to a language model trained on memorable movie quotes, compared to a corresponding language model trained on non-memorable movie quotes. This suggests that some of the principles underlying memorable text have the potential to apply across different areas. Roadmap §2 lays the empirical foundations of our work: the design yasntdh ecerematpioirnic aofl our movie-quotes dataset, which we make publicly available (§2. 1), a pilot study cwhit hw ehu mmakaen subjects validating §I2M.1D),b abased memorability labels (§2.2), and further study bofa incorporating search-engine c2)o,u anntds (§2.3). §3 uddeytoafi lisn our analysis aenardc prediction experiments, using both movie-quotes data and, as an exploration of cross-domain applicability, slogans data. §4 surveys rcerloastse-dd owmoarkin across a variety goafn fsie dladtsa.. §5 briefly sruelmatmedar wizoesrk ka andcr ionsdsic aat veasr some ffuft uierled sd.ire §c5tio bnrsie. 2 I’m ready for my close-up. 2.1 Data To study the properties of memorable movie quotes, we need a source of movie lines and a designation of memorability. Following [8], we constructed a corpus consisting of all lines from roughly 1000 movies, varying in genre, era, and popularity; for each movie, we then extracted the list of quotes from IMDb’s Memorable Quotes page corresponding to the movie.1 A memorable quote in IMDb can appear either as an individual sentence spoken by one character, or as a multi-sentence line, or as a block of dialogue involving multiple characters. In the latter two cases, it can be hard to determine which particular portion is viewed as memorable (some involve a build-up to a punch line; others involve the follow-through after a well-phrased opening sentence), and so we focus in our comparisons on those memorable quotes that 1This extraction involved some edit-distance-based alignment, since the exact form of the line in the script can exhibit minor differences from the version typed into IMDb. 
rmotuqsfebmaNerolbm543281760 0 1234D5ecil678910 894 Figure 1: Location of memorable quotes in each decile of movie scripts (the first 10th, the second 10th, etc.), summed over all movies. The same qualitative results hold if we discard each movie’s very first and last line, which might have privileged status. appear as a single sentence rather than a multi-line block.2 We now formulate a task that we can use to evaluate the features of memorable quotes. Recall that our goal is to identify effects based in the language of the quotes themselves, beyond any factors arising from the speaker or context. Thus, for each (singlesentence) memorable quote M, we identify a nonmemorable quote that is as similar as possible to M in all characteristics but the choice of words. This means we want it to be spoken by the same character in the same movie. It also means that we want it to have the same length: controlling for length is important because we expect that on average, shorter quotes will be easier to remember than long quotes, and that wouldn’t be an interesting textual effect to report. Moreover, we also want to control for the fact that a quote’s position in a movie can affect memorability: certain scenes produce more memorable dialogue, and as Figure 1 demonstrates, in aggregate memorable quotes also occur disproportionately near the beginnings and especially the ends of movies. In summary, then, for each M, we pick a contrasting (single-sentence) quote N from the same movie that is as close in the script as possible to M (either before or after it), subject to the conditions that (i) M and N are uttered by the same speaker, (ii) M and N have the same number of words, and (iii) N does not occur in the IMDb list of memorable 2We also ran experiments relaxing the single-sentence assumption, which allows for stricter scene control and a larger dataset but complicates comparisons involving syntax. The non-syntax results were in line with those reported here. TaJSOMbtrclodekviTn1ra:eBTykhoPrwNenpmlxeasipFIHAeaithrclsfnitkaQeomuifltw’sdaveoitycmsnedoqatbuliocrkeytsl f.woEeimlanchguwspakyirdfsebavot;ilmsdfcoenti’dus.erx-citaINmSnrkeioamct:ohenwmardleytQ.howfeu t’yvrecp,o’gsmrtpuaosnmtyef o rtgnhqieuvrobt.pehasirtdeosfpykuern close together in the movie by the same while the other is not. (Contractions character, have the same length, and one is labeled memorable by the IMDb such as “it’s” count as two words.) quotes for the movie (either as a single line or as part of a larger block). Given such pairs, we formulate a pairwise comparison task: given M and N, determine which is the memorable quote. Psychological research on subjective evaluation [35], as well as initial experiments using ourselves as subjects, indicated that this pairwise set-up easier to work with than simply presenting a single sentence and asking whether it is memorable or not; the latter requires agreement on an “absolute” criterion for memorability that is very hard to impose consistently, whereas the former simply requires a judgment that one quote is more memorable than another. Our main dataset, available at http://www.cs. cornell.edu/∼cristian/memorability.html,3 thus consists of approximately 2200 such (M, N) pairs, separated by a median of 5 same-character lines in the script. The reader can get a sense for the nature of the data from the three examples in Table 1. 
We now discuss two further aspects to the formulation of the experiment: a preliminary pilot study involving human subjects, and the incorporation of search engine counts into the data. 2.2 Pilot study: Human performance As a preliminary consideration, we did a small pilot study to see if humans can distinguish memorable from non-memorable quotes, assuming our IMDBinduced labels as gold standard. Six subjects, all native speakers of English and none an author of this paper, were presented with 11 or 12 pairs of memorable vs. non-memorable quotes; again, we controlled for extra-textual effects by ensuring that in each pair the two quotes come from the same movie, are by the same character, have the same length, and 3Also available there: other examples and factoids. 895 Table 2: Human pilot study: number of matches to IMDb-induced annotation, ordered by decreasing match percentage. For the null hypothesis of random guessing, these results are statistically significant, p < 2−6 ≈ .016. appear as nearly as possible in the same scene.4 The order of quotes within pairs was randomized. Importantly, because we wanted to understand whether the language of the quotes by itself contains signals about memorability, we chose quotes from movies that the subjects said they had not seen. (This means that each subject saw a different set of quotes.) Moreover, the subjects were requested not to consult any external sources of information.5 The reader is welcome to try a demo version of the task at http: //www.cs.cornell.edu/∼cristian/memorability.html. Table 2 shows that all the subjects performed (sometimes much) better than chance, and against the null hypothesis that all subjects are guessing randomly, the results are statistically significant, p < 2−6 ≈ .016. These preliminary findings provide evidenc≈e f.0or1 t6h.e T validity eolifm our traysk fi:n despite trohev apparent difficulty of the job, even humans who haven’t seen the movie in question can recover our IMDb4In this pilot study, we allowed multi-sentence quotes. 5We did not use crowd-sourcing because we saw no way to ensure that this condition would be obeyed by arbitrary subjects. We do note, though, that after our research was completed and as of Apr. 26, 2012, ≈ 11,300 people completed the online test: average accuracy: 27,2 ≈%, 1 1m,3o0d0e npueompbleer c coomrrpelcett:e d9 t/1he2. induced labels with some reliability.6 2.3 Incorporating search engine counts Thus far we have discussed a dataset in which memorability is determined through an explicit labeling drawn from the IMDb. Given the “production” aspect of memorability discussed in § 1, we stihoonu”ld a saplesoc expect tmhaotr mabeimlityora dbislce quotes nw §il1l ,te wnde to appear more extensively on Web pages than nonmemorable quotes; note that incorporating this insight makes it possible to use the (implicit) judgments of a much larger number of people than are represented by the IMDb database. It therefore makes sense to try using search-engine result counts as a second indication of memorability. We experimented with several ways of constructing memorability information from search-engine counts, but this proved challenging. Searching for a quote as a stand-alone phrase runs into the problem that a number of quotes are also sentences that people use without the movie in mind, and so high counts for such quotes do not testify to the phrase’s status as a memorable quote from the movie. 
On the other hand, searching for the quote in a Boolean conjunction with the movie’s title discards most of these uses, but also eliminates a large fraction of the appearances on the Web that we want to find: precisely because memorable quotes tend to have widespread cultural usage, people generally don’t feel the need to include the movie’s title when invoking them. Finally, since we are dealing with roughly 1000 movies, the result counts vary over an enormous range, from recent blockbusters to movies with relatively small fan bases. In the end, we found that it was more effective to use the result counts in conjunction with the IMDb labels, so that the counts played the role of an additional filter rather than a free-standing numerical value. Thus, for each pair (M, N) produced using the IMDb methodology above, we searched for each of M and N as quoted expressions in a Boolean conjunction with the title of the movie. We then kept only those pairs for which M (i) produced more than five results in our (quoted, conjoined) search, and (ii) produced at least twice as many results as the cor6The average accuracy being below 100% reinforces that context is very important, too. 896 responding search for N. We created a version of this filtered dataset using each of Google and Bing, and all the main findings were consistent with the results on the IMDb-only dataset. Thus, in what follows, we will focus on the main IMDb-only dataset, discussing the relationship to the dataset filtered by search engine counts where relevant (in which case we will refer to the +Google dataset). 3 Never send a human to do a machine’s job. We now discuss experiments that investigate the hypotheses discussed in §1. In particular, we devise pmoetthheosdess t dhiastc can assess 1th.e Idnis ptianrcttiicvuelnaer,ss w aend d generality hypotheses and test whether there exists a notion of “memorable language” that operates across domains. In addition, we evaluate and compare the predictive power of these hypotheses. 3.1 Distinctiveness One of the hypotheses we examine is whether the use of language in memorable quotes is to some extent unusual. In order to quantify the level of distinctiveness of a quote, we take a language-model approach: we model “common language” using the newswire sections of the Brown corpus [21]7, and evaluate how distinctive a quote is by evaluating its likelihood with respect to this model the lower the likelihood, the more distinctive. In order to assess different levels of lexical and syntactic distinctiveness, we employ a total of six Laplacesmoothed8 language models: 1-gram, 2-gram, and — 3-gram word LMs and 1-gram, 2-gram and 3-gram LMs. We find strong evidence that from a lexical perspective, memorable quotes are more distinctive than their non-memorable counterparts. As indicated in Table 3, for each of our lexical “common language” models, in about 60% of the quote pairs, the memorable quote is more distinctive. Interestingly, the reverse is true when it comes to part-of-speech9 7Results were qualitatively similar if we used the fiction portions. The age of the Brown corpus makes it less likely to contain modern movie quotes. 8We employ Laplace (additive) smoothing with a smoothing parameter of 0.2. The language models’ vocabulary was that of the entire training corpus. 9Throughout we obtain part-of-speech tags by using the NLTK maximum entropy tagger with default parameters. 
in which the the memorable quote is more distinctive than the non-memorable one according to the respective “common language” model. Significance according to a two-tailed sign test is indicated using *-notation (∗∗∗=“p<.001”). syntax: memorable quotes appear to follow the syntactic patterns of “common language” as closely as or more closely than non-memorable quotes. Together, these results suggest that memorable quotes consist of unusual word sequences built on common syntactic scaffolding. 3.2 Generality Another of our hypotheses is that memorable quotes are easier to use outside the specific context in which they were uttered that is, more “portable” and therefore exhibit fewer terms that refer to those settings. We use the following syntactic properties as proxies for the generality of a quote: • Fewer 3rd-person pronouns, since these commonly r 3efer to a person or object that was introduced earlier in the discourse. Utterances that employ fewer such pronouns are easier to adapt to new contexts, and so will be considered more — — general. • More indefinite articles like a and an, since they are more likely ttioc lreesfer li ktoe general concepts than definite articles. Quotes with more indefinite articles will be considered more general. Fewer past tense verbs and more present tFeenwsee verbs, tseinncsee t vheer bfosrm aenrd are more likely to refer to specific previous events. Therefore utterances that employ fewer past tense verbs (and more present tense verbs) will be considered more general. Table 4 gives the results for each of these four metrics in each case, we show the percentage of • — 897 TalfmGebowsnre4pa:in3srGldet sypfne.msrate.lripnctysoe: purncsetaI56gM47e.326D9o710bf% -qo∗u n∗l tyepa+56iG892rs.o7i364ng% wl∗ eh∗i ch the memorable quote is more general than the non- memorable ones according to the respective metric. Pairs where the metric does not distinguish between the quotes are not considered. quote pairs for which the memorable quote scores better on the generality metric. Note that because the issue of generality is a complex one for which there is no straightforward single metric, our approach here is based on several proxies for generality, considered independently; yet, as the results show, all of these point in a consistent direction. It is an interesting open question to develop richer ways of assessing whether a quote has greater generality, in the sense that people intuitively attribute to memorable quotes. 3.3 “Memorable” language beyond movies One of the motivating questions in our analysis is whether there are general principles underlying “memorable language.” The results thus far suggest potential families of such principles. A further question in this direction is whether the notion of memorability can be extended across different domains, and for this we collected (and distribute on our website) 431 phrases that were explicitly designed to be memorable: advertising slogans (e.g., “Quality never goes out of style.”). The focus on slogans is also in keeping with one of the initial motivations in studying memorability, namely, marketing applications in other words, assessing whether a proposed slogan has features that are consistent with memorable text. The fact that it’s not clear how to construct a collection of “non-memorable” counterparts to slogans appears to pose a technical challenge. 
However, we can still use a language-modeling approach to assess whether the textual properties of the slogans are closer to the memorable movie quotes (as one would conjecture) or to the non-memorable movie quotes. Specifically, we train one language model on memorable quotes and another on non-memorable quotes — guage: percentage of slogans that have higher likelihood under the memorable language model than under the nonmemorable one (for each of the six language models considered). Rightmost column: for reference, the percentage of newswire sentences that have higher likelihood under the memorable language model than under the nonmemorable one. TaG% ble3nipared6stpa:lfeitrnSsyilto.megpareotnsicluaerns mo1s42lto.61g048ae% nseral2w1m.h16e3mn% .comn2p-63ma.0r46e19dm% .to memorable and non-memorable quotes. (%s of 3rd pers. pronouns and indefinite articles are relative to all tokens, %s of past tense are relative to all past and present verbs.) and compare how likely each slogan is to be produced according to these two models. As shown in the middle column of Table 5, we find that slogans are better predicted both lexically and syntactically by the former model. This result thus offers evidence for a concept of “memorable language” that can be applied beyond a single domain. We also note that the higher likelihood of slogans under a “memorable language” model is not simply occurring for the trivial reason that this model predicts all other large bodies of text better. In particular, the newswire section of the Brown corpus is predicted better at the lexical level by the language model trained on non-memorable quotes. Finally, Table 6 shows that slogans employ general language, in the sense that for each of our generality metrics, we see a slogans/memorablequotes/non-memorable quotes spectrum. 3.4 Prediction task We now show how the principles discussed above can provide features for a basic prediction task, corresponding to the task in our human pilot study: 898 given a pair of quotes, identify the memorable one. Our first formulation of the prediction task uses a standard bag-of-words model10. If there were no information in the textual content of a quote to determine whether it were memorable, then an SVM employing bag-of-words features should perform no better than chance. Instead, though, it obtains 59.67% (10-fold cross-validation) accuracy, as shown in Table 7. We then develop models using features based on the measures formulated earlier in this section: generality measures (the four listed in Table 4); distinctiveness measures (likelihood according to 1, 2, and 3-gram “common language” models at the lexical and part-of-speech level for each quote in the pair, their differences, and pairwise comparisons between them); and similarityto-slogans measures (likelihood according to 1, 2, and 3-gram slogan-language models at the lexical and part-of-speech level for each quote in the pair, their differences, and pairwise comparisons between them). Even a relatively small number of distinctiveness features, on their own, improve significantly over the much larger bag-of-words model. When we include additional features based on generality and language-model features measuring similarity to slogans, the performance improves further (last line of Table 7). 
Thus, the main conclusion from these prediction tasks is that abstracting notions such as distinctiveness and generality can produce relatively streamlined models that outperform much heavier-weight bag-of-words models, and can suggest steps toward approaching the performance of human judges, who (very much unlike our system) have the full cultural context in which movies occur at their disposal.

3.5 Other characteristics

We also made some auxiliary observations that may be of interest. Specifically, we find differences in letter and sound distribution (e.g., memorable quotes, after curse-word removal, use significantly more "front sounds" (labials or front vowels such as the one represented by the letter i) and significantly fewer "back sounds" such as the one represented by u),^11 word complexity (e.g., memorable quotes use words with significantly more syllables), and phrase complexity (e.g., memorable quotes use fewer coordinating conjunctions). The latter two are in line with our distinctiveness hypothesis.

^10 We discarded terms appearing fewer than 10 times.
^11 These findings may relate to marketing research on sound symbolism [7, 19, 40].

[Table 7: Prediction accuracy (SVM, 10-fold cross-validation) using the respective feature sets; per-feature-set numbers not recoverable from the extraction. Random baseline accuracy is 50%. Accuracies statistically significantly greater than bag-of-words according to a two-tailed t-test are indicated with *(p<.05) and **(p<.01).]

4 A long time ago, in a galaxy far, far away

How an item's linguistic form affects the reaction it generates has been studied in several contexts, including evaluations of product reviews [9], political speeches [12], on-line posts [13], scientific papers [14], and retweeting of Twitter posts [36]. We use a different set of features, abstracting the notions of distinctiveness and generality, in order to focus on these higher-level aspects of phrasing rather than on particular lower-level features.

Related to our interest in distinctiveness, work in advertising research has studied the effect of syntactic complexity on recognition and recall of slogans [5, 6, 24]. There may also be connections to von Restorff's isolation effect [17], which asserts that when all but one item in a list are similar in some way, memory for the different item is enhanced. Related to our interest in generality, Knapp et al. [20] surveyed subjects regarding memorable messages or pieces of advice they had received, finding that the ability to be applied to multiple concrete situations was an important factor.

Memorability, although distinct from "memorizability", relates to short- and long-term recall. Thorn and Page [34] survey sub-lexical, lexical, and semantic attributes affecting short-term memorability of lexical items. Studies of verbatim recall have also considered the task of distinguishing an exact quote from close paraphrases [3]. Investigations of long-term recall have included studies of culturally significant passages of text [29] and findings regarding the effect of rhetorical devices such as alliteration [4] and "rhythmic, poetic, and thematic constraints" [18, 26]. Finally, there are complex connections between humor and memory [32], which may lead to interactions with computational humor recognition [25].

5 I think this is the beginning of a beautiful friendship.

Motivated by the broad question of what kinds of information achieve widespread public awareness, we studied the effect of phrasing on a quote's memorability.
A challenge is that quotes differ not only in how they are worded, but also in who said them and under what circumstances; to deal with this difficulty, we constructed a controlled corpus of movie quotes in which lines deemed memorable are paired with non-memorable lines spoken by the same character at approximately the same point in the same movie. After controlling for context and situation, memorable quotes were still found to exhibit, on average (there will always be individual exceptions), significant differences from non-memorable quotes in several important respects, including measures capturing distinctiveness and generality. Our experiments with slogans show how the principles we identify can extend to a different domain.

Future work may lead to applications in marketing, advertising and education [4]. Moreover, the subtle nature of memorability, and its connection to research in psychology, suggests a range of further research directions. We believe that the framework developed here can serve as the basis for further computational studies of the process by which information takes hold in the public consciousness, and the role that language effects play in this process.

My mother thanks you. My father thanks you. My sister thanks you. And I thank you: Rebecca Hwa, Evie Kleinberg, Diana Minculescu, Alex Niculescu-Mizil, Jennifer Smith, Benjamin Zimmer, and the anonymous reviewers for helpful discussions and comments; our annotators Steven An, Lars Backstrom, Eric Baumer, Jeff Chadwick, Evie Kleinberg, and Myle Ott; and the makers of Cepacol, Robitussin, and Sudafed, whose products got us through the submission deadline. This paper is based upon work supported in part by NSF grants IIS-0910664, IIS-1016099, Google, and Yahoo!

References

[1] Eytan Adar, Li Zhang, Lada A. Adamic, and Rajan M. Lukose. Implicit structure and the dynamics of blogspace. In Workshop on the Weblogging Ecosystem, 2004.
[2] Lars Backstrom, Dan Huttenlocher, Jon Kleinberg, and Xiangyang Lan. Group formation in large social networks: Membership, growth, and evolution. In Proceedings of KDD, 2006.
[3] Elizabeth Bates, Walter Kintsch, Charles R. Fletcher, and Vittoria Giuliani. The role of pronominalization and ellipsis in texts: Some memory experiments. Journal of Experimental Psychology: Human Learning and Memory, 6(6):676–691, 1980.
[4] Frank Boers and Seth Lindstromberg. Finding ways to make phrase-learning feasible: The mnemonic effect of alliteration. System, 33(2):225–238, 2005.
[5] Samuel D. Bradley and Robert Meeds. Surface-structure transformations and advertising slogans: The case for moderate syntactic complexity. Psychology and Marketing, 19:595–619, 2002.
[6] Robert Chamblee, Robert Gilmore, Gloria Thomas, and Gary Soldow. When copy complexity can help ad readership. Journal of Advertising Research, 33(3):23–23, 1993.
[7] John Colapinto. Famous names. The New Yorker, pages 38–43, 2011.
[8] Cristian Danescu-Niculescu-Mizil and Lillian Lee. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, 2011.
[9] Cristian Danescu-Niculescu-Mizil, Gueorgi Kossinets, Jon Kleinberg, and Lillian Lee. How opinions are received by online communities: A case study on Amazon.com helpfulness votes. In Proceedings of WWW, pages 141–150, 2009.
[10] Stuart Fischoff, Esmeralda Cardenas, Angela Hernandez, Korey Wyatt, Jared Young, and Rachel Gordon.
Popular movie quotes: Reflections of a people and a culture. In Annual Convention of the American Psychological Association, 2000.
[11] Daniel Gruhl, R. Guha, David Liben-Nowell, and Andrew Tomkins. Information diffusion through blogspace. In Proceedings of WWW, pages 491–501, 2004.
[12] Marco Guerini, Carlo Strapparava, and Oliviero Stock. Trusting politicians' words (for persuasive NLP). In Proceedings of CICLing, pages 263–274, 2008.
[13] Marco Guerini, Carlo Strapparava, and Gözde Özbal. Exploring text virality in social networks. In Proceedings of ICWSM (poster), 2011.
[14] Marco Guerini, Alberto Pepe, and Bruno Lepri. Do linguistic style and readability of scientific abstracts affect their virality? In Proceedings of ICWSM, 2012.
[15] Richard Jackson Harris, Abigail J. Werth, Kyle E. Bures, and Chelsea M. Bartel. Social movie quoting: What, why, and how? Ciencias Psicologicas, 2(1):35–45, 2008.
[16] Chip Heath, Chris Bell, and Emily Steinberg. Emotional selection in memes: The case of urban legends. Journal of Personality, 81(6):1028–1041, 2001.
[17] R. Reed Hunt. The subtlety of distinctiveness: What von Restorff really did. Psychonomic Bulletin & Review, 2(1):105–112, 1995.
[18] Ira E. Hyman Jr. and David C. Rubin. Memorabeatlia: A naturalistic study of long-term memory. Memory & Cognition, 18(2):205–214, 1990.
[19] Richard R. Klink. Creating brand names with meaning: The use of sound symbolism. Marketing Letters, 11(1):5–20, 2000.
[20] Mark L. Knapp, Cynthia Stohl, and Kathleen K. Reardon. "Memorable" messages. Journal of Communication, 31(4):27–41, 1981.
[21] Henry Kučera and W. Nelson Francis. Computational analysis of present-day American English. Dartmouth Publishing Group, 1967.
[22] Jure Leskovec, Lada Adamic, and Bernardo Huberman. The dynamics of viral marketing. ACM Transactions on the Web, 1(1), May 2007.
[23] Jure Leskovec, Lars Backstrom, and Jon Kleinberg. Meme-tracking and the dynamics of the news cycle. In Proceedings of KDD, pages 497–506, 2009.
[24] Tina M. Lowrey. The relation between script complexity and commercial memorability. Journal of Advertising, 35(3):7–15, 2006.
[25] Rada Mihalcea and Carlo Strapparava. Learning to laugh (automatically): Computational models for humor recognition. Computational Intelligence, 22(2):126–142, 2006.
[26] Milman Parry and Adam Parry. The making of Homeric verse: The collected papers of Milman Parry. Clarendon Press, Oxford, 1971.
[27] Everett Rogers. Diffusion of Innovations. Free Press, fourth edition, 1995.
[28] Daniel M. Romero, Brendan Meeder, and Jon Kleinberg. Differences in the mechanics of information diffusion across topics: Idioms, political hashtags, and complex contagion on Twitter. In Proceedings of WWW, pages 695–704, 2011.
[29] David C. Rubin. Very long-term memory for prose and verse. Journal of Verbal Learning and Verbal Behavior, 16(5):611–621, 1977.
[30] Nathan Schneider, Rebecca Hwa, Philip Gianfortoni, Dipanjan Das, Michael Heilman, Alan W. Black, Frederick L. Crabbe, and Noah A. Smith. Visualizing topical quotations over time to understand news discourse. Technical Report CMU-LTI-01-103, CMU, 2010.
[31] David Strang and Sarah Soule. Diffusion in organizations and social movements: From hybrid corn to poison pills. Annual Review of Sociology, 24:265–290, 1998.
[32] Hannah Summerfelt, Louis Lippman, and Ira E. Hyman Jr. The effect of humor on memory: Constrained by the pun. The Journal of General Psychology, 137(4), 2010.
[33] Eric Sun, Itamar Rosenn, Cameron Marlow, and Thomas M. Lento. Gesundheit!
Modeling contagion through Facebook News Feed. In Proceedings of ICWSM, 2009.
[34] Annabel Thorn and Mike Page. Interactions Between Short-Term and Long-Term Memory in the Verbal Domain. Psychology Press, 2009.
[35] Louis L. Thurstone. A law of comparative judgment. Psychological Review, 34(4):273–286, 1927.
[36] Oren Tsur and Ari Rappoport. What's in a Hashtag? Content based prediction of the spread of ideas in microblogging communities. In Proceedings of WSDM, 2012.
[37] Fang Wu, Bernardo A. Huberman, Lada A. Adamic, and Joshua R. Tyler. Information flow in social groups. Physica A: Statistical and Theoretical Physics, 337(1-2):327–335, 2004.
[38] Shaomei Wu, Jake M. Hofman, Winter A. Mason, and Duncan J. Watts. Who says what to whom on Twitter. In Proceedings of WWW, 2011.
[39] Jaewon Yang and Jure Leskovec. Patterns of temporal variation in online media. In Proceedings of WSDM, 2011.
[40] Eric Yorkston and Geeta Menon. A sound idea: Phonetic effects of brand names on consumer judgments. Journal of Consumer Research, 31(1):43–51, 2004.