acl acl2012 acl2012-112 knowledge-graph by maker-knowledge-mining

112 acl-2012-Humor as Circuits in Semantic Networks


Source: pdf

Author: Igor Labutov ; Hod Lipson

Abstract: This work presents a first step toward a general implementation of the Semantic-Script Theory of Humor (SSTH). Of the scarce body of research in computational humor, none has focused on humor generation beyond simple puns and punning riddles. We propose an algorithm for mining simple humorous scripts from a semantic network (ConceptNet) by specifically searching for dual scripts that jointly maximize overlap and incongruity metrics in line with Raskin’s Semantic-Script Theory of Humor. Initial results show that a more relaxed constraint of this form is capable of generating humor of deeper semantic content than wordplay riddles. We evaluate these metrics through user-assessed quality of the generated two-liners.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract This work presents a first step toward a general implementation of the Semantic-Script Theory of Humor (SSTH). [sent-2, score-0.043]

2 Of the scarce body of research in computational humor, none has focused on humor generation beyond simple puns and punning riddles. [sent-3, score-0.741]

3 We propose an algorithm for mining simple humorous scripts from a semantic network (ConceptNet) by specifically searching for dual scripts that jointly maximize overlap and incongruity metrics in line with Raskin’s Semantic-Script Theory of Humor. [sent-4, score-0.818]

4 Initial results show that a more relaxed constraint of this form is capable of generating humor of deeper semantic content than wordplay riddles. [sent-5, score-0.713]

5 We evaluate these metrics through user-assessed quality of the generated two-liners. [sent-6, score-0.035]

6 1 Introduction While of significant interest in linguistics and philosophy, humor has received less attention in the computational domain. [sent-7, score-0.591]

7 And of that work, the most recent is predominantly focused on humor recognition. [sent-8, score-0.591]

8 In this paper we focus on the problem of humor generation. [sent-10, score-0.591]

9 , 2003), the application of humor generation is no less significant. [sent-12, score-0.591]

10 First, a good generative model of humor has the potential to outperform current discriminative models for humor recognition. [sent-13, score-1.182]

11 Thus, the ability to generate humor will potentially lead to better humor detection. [sent-14, score-0.086]

12 Figure 1: Semantic circuit. [sent-16, score-1.268]

13 Second, a computational model that conforms to the verbal theory of humor is an accessible avenue for verifying the psycholinguistic theory. [sent-17, score-0.718]

14 In this paper we take the Semantic Script Theory of Humor (SSTH) (Attardo and Raskin, 1991) - a widely accepted theory of verbal humor - and build a generative model that conforms to it. [sent-18, score-0.718]

15 Much of the existing work in humor generation has focused on puns and punning riddles - humor that is centered around wordplay. [sent-19, score-1.396]

16 , 2006) take a knowledge-based approach that is rooted in the linguistic theory (SSTH); the constraint, nevertheless, significantly limits the potential of SSTH. [sent-21, score-0.053]

17 To our knowledge, our work is the first attempt to instantiate the theory at the fundamental level, without imposing constraints on phonological similarity or a restricted set of domain oppositions. [sent-22, score-0.096]

18 1 Semantic Script Theory of Humor The Semantic Script Theory of Humor (SSTH) provides machinery to formalize the structure of most types of verbal humor (Ruch et al. [sent-26, score-0.636]

19 SSTH posits the existence of two underlying scripts, one of which is more obvious than the other. [sent-28, score-0.03]

20 To be humorous, the underlying scripts must satisfy two conditions: overlap and incongruity. [sent-29, score-0.235]

21 In the setup phase of the joke, instances of the two scripts are presented in a way that does not give away the less obvious script (due to their overlap). [sent-30, score-0.536]

22 In the punchline (resolution), a trigger expression forces the audience to switch their interpretation to the alternate (less likely) script. [sent-31, score-0.111]

23 The alternate script must differ significantly in meaning (be incongruent with the first script) for the switch to have a humorous effect. [sent-32, score-0.56]

24 An example below illustrates this idea (S1 is the obvious script, and S2 is the alternate script). [sent-33, score-0.073]

25 ''Is the [doctor]S1 at home?'' the [patient]S1 asked in his [bronchial]S1 [whisper]S1. [sent-35, score-0.086]

26 ''No,'' the [doctor's]S1 [young]S2 wife [whispered]S2, ''Come right in.''S2 [sent-36, score-0.074]

27 , 2007), produced question/answer punning riddles from a general non-humorous lexicon. [sent-40, score-0.139]

28 While humor in the generated puns could be explained by SSTH, the SSTH model itself was not employed in the process of generation. [sent-41, score-0.701]

29 While still focused on generating puns, they do so by explicitly defining and applying script opposition (SO) using ontological semantics. [sent-43, score-0.426]

30 HAHAcronym (Stock and Strapparava, 2002), a system for generating humorous acronyms, for example, utilizes WordNetDomains to select phonologically similar concepts from semantically disparate domains. [sent-45, score-0.337]

31 3 System overview ConceptNet (Liu and Singh, 2004) lends itself as an ideal ontological resource for script generation. [sent-47, score-0.381]

32 As a network that connects everyday concepts and events with a set of causal and spatial relationships, the relational structure of ConceptNet parallels the structure of the fabula model of story generation - namely the General Transition Network (GTN) (Swartjes and Theune, 2006). [sent-48, score-0.268]

33 As such, we hypothesize that there exist paths within the ConceptNet graph that can be represented as feasible scripts in the surface form. [sent-49, score-0.381]

34 Moreover, multiple paths between two given nodes represent overlapping scripts - a necessary condition for verbal humor in SSTH. [sent-50, score-0.908]

35 Given a semantic network hypergraph G = (V, L), where V are Concepts and L are Relations, we hypothesize that it is possible to search for script pairs as semantic circuits that can be converted to a surface form of the Question/Answer format. [sent-51, score-0.351]

36 We define a circuit as two paths from root A that terminate at a common node B. [sent-52, score-0.261]

37 (3) Ranked scripts are converted to surface form by aligning a subset of their concepts to natural language templates of the Question/Answer form. [sent-54, score-0.423]

38 Alignment is performed through a scoring heuristic which greedily optimizes for incongruity of the surface form. [sent-55, score-0.222]

39 1 Script model We model a script as a first-order Markov chain of relations between concepts. [sent-57, score-0.345]

40 Given a seed concept, depth-first search is performed starting from the root concept, considering all directed paths terminating at the same node as candidates for feasible script pairs. [sent-58, score-0.539]
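
The circuit search described above can be sketched as a bounded depth-first traversal of a directed concept graph: every pair of distinct directed paths from the root that terminate at the same node is a candidate circuit (dual-script pair). The sketch below is a minimal illustration under assumed data structures (a plain adjacency dictionary standing in for ConceptNet); the names and the depth bound are illustrative, not the authors' implementation.

    from collections import defaultdict

    def enumerate_circuits(graph, root, max_depth=4):
        """Collect pairs of directed paths from `root` that end at a common
        node -- a toy stand-in for the paper's depth-first circuit search."""
        paths_to = defaultdict(list)  # terminal concept -> list of paths from root

        def dfs(node, path):
            if len(path) > max_depth:
                return
            for neighbor in graph.get(node, []):
                if neighbor in path:      # loop check: no repeated concepts on a path
                    continue
                new_path = path + [neighbor]
                paths_to[neighbor].append(new_path)
                dfs(neighbor, new_path)

        dfs(root, [root])
        circuits = []
        for terminal, paths in paths_to.items():
            for i in range(len(paths)):
                for j in range(i + 1, len(paths)):
                    circuits.append((paths[i], paths[j]))  # two overlapping scripts
        return circuits

    # toy directed concept graph (edges stand for ConceptNet relations)
    toy_graph = {
        "priest": ["church", "kneel"],
        "kneel": ["propose to woman"],
        "church": ["propose to woman"],
    }
    print(enumerate_circuits(toy_graph, "priest"))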

41 Most of the semantic circuits found, however, do not yield a meaningful surface form and need to be pruned. [sent-59, score-0.12]

42 Feasible circuits are learned in a supervised way, where binary labels assign each candidate circuit to one of two classes {feasible, infeasible} (we used 8 seed concepts, with 300 generated circuits for each concept). [sent-60, score-0.319]
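
The paper does not specify which classifier or which circuit features were used for this feasible/infeasible decision; purely as an illustration, the sketch below trains a logistic regression on a few hand-crafted path statistics with scikit-learn. The feature set, function names, and the choice of LogisticRegression are assumptions, not the authors' design.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def circuit_features(circuit):
        """Hypothetical features for a circuit given as (path_a, path_b)."""
        path_a, path_b = circuit
        shared = len(set(path_a) & set(path_b))  # overlap between the two paths
        return [len(path_a), len(path_b), shared, abs(len(path_a) - len(path_b))]

    def train_feasibility_classifier(labeled_circuits):
        """labeled_circuits: list of ((path_a, path_b), label) pairs,
        with label 1 = feasible, 0 = infeasible (human-annotated)."""
        X = np.array([circuit_features(c) for c, label in labeled_circuits])
        y = np.array([label for c, label in labeled_circuits])
        return LogisticRegression().fit(X, y)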

43 Given a chain of concepts S (from here on referred to as a script) c1, c2, ..., [sent-62, score-0.19]

44 cn, we obtain its likelihood Pr(S) = ∏ Pr(rij | rjk), where rij and rjk are the directed relations joining concepts <ci, cj> and <cj, ck> respectively, and the conditionals are computed from the maximum likelihood estimate on the training data. [sent-65, score-0.384]
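
A minimal sketch of this likelihood computation, assuming training scripts are given as sequences of relation labels: the conditional between consecutive relations is estimated by maximum likelihood from bigram counts, and a script's likelihood is the product of those conditionals (accumulated in log space here for numerical stability). The relation names and the smoothing constant are illustrative.

    import math
    from collections import Counter

    def relation_bigram_mle(training_scripts):
        """training_scripts: lists of relation labels, e.g. ["IsA", "AtLocation", ...]."""
        bigrams, unigrams = Counter(), Counter()
        for relations in training_scripts:
            for prev, nxt in zip(relations, relations[1:]):
                bigrams[(prev, nxt)] += 1
                unigrams[prev] += 1
        return bigrams, unigrams

    def script_log_likelihood(relations, bigrams, unigrams, eps=1e-6):
        """First-order Markov log-likelihood of a chain of relations."""
        total = 0.0
        for prev, nxt in zip(relations, relations[1:]):
            p = bigrams[(prev, nxt)] / unigrams[prev] if unigrams[prev] else eps
            total += math.log(max(p, eps))
        return total

    train = [["IsA", "AtLocation", "UsedFor"], ["IsA", "AtLocation", "CapableOf"]]
    bigrams, unigrams = relation_bigram_mle(train)
    print(script_log_likelihood(["IsA", "AtLocation", "UsedFor"], bigrams, unigrams))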

45 2 Semantic overlap and spreading activation While the script model is able to capture semantically meaningful transitions in a single script, it does not capture inter-script measures such as overlap and incongruity. [sent-67, score-0.695]

46 We employ a modified form of spreading activation with fan-out and path constraints to find semantic circuits while maximizing their semantic overlap. [sent-68, score-0.499]

47 Activation starts at the user-specified root concept and radiates along outgoing edges. [sent-69, score-0.26]

48 An additional fan-out constraint penalizes nodes with a large number of outgoing edges (concepts that are too general to be interesting). [sent-71, score-0.096]

49 The weight w(ci) of the current node is given by:

w(ci) = Σ_{cj ∈ fin(ci)} Σ_{ck ∈ fin(cj)} [ Pr(rij | rjk) / |fout(cj)|^γ ] · w(cj)    (1)

The termination condition is satisfied when the activation weights fall below a threshold (loop checking is performed to prevent feedback). [sent-72, score-0.182]
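
A rough sketch of the activation step behind Equation 1: each newly reached node receives weight from its already-activated predecessor, discounted by the predecessor's fan-out raised to γ, and propagation stops when contributions fall below a threshold. The relation-conditional term Pr(rij | rjk) is replaced here by a generic edge weight and the traversal is simplified to a single frontier pass, so this illustrates the mechanism rather than reproducing the authors' code.

    from collections import deque

    def spread_activation(out_edges, root, gamma=1.0, threshold=0.01):
        """out_edges: concept -> list of (neighbor, edge_weight) pairs.
        Returns an activation weight for every concept reached from `root`."""
        weight = {root: 1.0}
        frontier = deque([root])
        while frontier:
            node = frontier.popleft()
            fan_out = max(len(out_edges.get(node, [])), 1)
            for neighbor, edge_w in out_edges.get(node, []):
                # fan-out penalty: overly general concepts spread less activation
                contribution = weight[node] * edge_w / (fan_out ** gamma)
                if contribution < threshold:
                    continue                           # termination condition
                if neighbor in weight:
                    weight[neighbor] += contribution   # loop check: accumulate only
                    continue
                weight[neighbor] = contribution
                frontier.append(neighbor)
        return weight

    edges = {"priest": [("church", 0.8), ("kneel", 0.6)],
             "church": [("god", 0.9)],
             "kneel":  [("propose to woman", 0.5)]}
    print(spread_activation(edges, "priest"))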

50 Upon termination, nodes are ranked by their activation weight, and for each node above a specified rank, a set of paths (scripts) Sk ∈ S is scored according to: [sent-73, score-0.269]

51 φk = |Sk| log γ + Σ_{i=1}^{|Sk|} log Prk(ri+1 | ri)    (2)

where φk is the decay-weighted log-likelihood of script Sk in a given circuit and |Sk| is the length of script Sk (the number of nodes in the kth chain). [sent-74, score-0.942]

52 Figure 2: Question (Q) and Answer (A) concepts within the semantic circuit. Note that the answer (A) concept is chosen from a different cluster than the question concepts. [sent-76, score-0.332]
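
Equation 2 can be computed directly from the relation sequence of each path in a circuit, reusing the bigram MLE sketched in the earlier likelihood example; as before, the names and the decay constant are illustrative.

    import math

    def phi_score(relations, bigrams, unigrams, gamma=0.9, eps=1e-6):
        """Decay-weighted log-likelihood of one script (Equation 2):
        phi_k = |S_k| * log(gamma) + sum_i log Pr(r_{i+1} | r_i)."""
        length = len(relations) + 1   # |S_k|: number of nodes in the chain
        loglik = 0.0
        for prev, nxt in zip(relations, relations[1:]):
            p = bigrams[(prev, nxt)] / unigrams[prev] if unigrams[prev] else eps
            loglik += math.log(max(p, eps))
        return length * math.log(gamma) + loglik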

53 A set of scripts S with the highest scores in the highest-ranking circuits represents scripts that are likely to be feasible and display a significant amount of semantic overlap within the circuit. [sent-77, score-0.468]

54 3 Incongruity and surface realization The task is to select a script pair {Si, Sj : i ≠ j} ∈ S and a set of concepts C ⊆ Si ∪ Sj that will align with some surface template, while maximizing inter-script incongruity. [sent-79, score-0.559]

55 As a measure of concept incongruity, we hierarchically cluster the entire ConceptNet using a Fast Community Detection algorithm (Clauset et al. [sent-80, score-0.135]

56 We observe that clusters are generated for related concepts, such as religion, marriage, computers. [sent-82, score-0.035]
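
The clustering step can be approximated with the Clauset-Newman-Moore greedy modularity algorithm (the fast community detection method the paper cites), which is available in networkx; the toy graph below and the way cluster membership is turned into an incongruity test are assumptions for illustration only.

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # toy undirected projection of a concept graph
    G = nx.Graph()
    G.add_edges_from([("priest", "church"), ("church", "god"), ("priest", "god"),
                      ("computer", "virus"), ("computer", "mit"), ("mit", "virus")])

    communities = list(greedy_modularity_communities(G))
    cluster_of = {node: idx for idx, members in enumerate(communities) for node in members}

    def incongruent(c1, c2):
        """Treat two concepts as incongruent when they fall in different clusters."""
        return cluster_of[c1] != cluster_of[c2]

    print(communities)
    print(incongruent("priest", "computer"))   # True on this toy graph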

57 Each template presents up to two concepts {c1 ∈ Si, c2 ∈ Sj : i ≠ j} in the question sentence (Q in Figure 2), and one concept c3 ∈ Si ∪ Sj in the answer sentence (A in Figure 2). [sent-83, score-0.406]

58 The motivation of this approach is that the two concepts in the question are selected from two different scripts but from the same cluster, while the answer concept is selected from one of the two scripts and from a different cluster. [sent-84, score-0.713]

59 The effect the generated two-liner produces is that of a setup and resolution (punchline), where the question intentionally sets up two parallel and compatible scripts, and the answer triggers the script switch. [sent-85, score-0.398]
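
The concept-selection constraint just described (question concepts drawn from different scripts but the same cluster, answer concept drawn from a different cluster) can be written as a simple filter over candidate triples; the template string and helper names below are hypothetical, not the paper's actual templates.

    from itertools import product

    def select_concepts(script_i, script_j, cluster_of):
        """Yield (c1, c2, c3) triples: c1 and c2 come from different scripts but
        the same cluster; c3 comes from either script but a different cluster."""
        for c1, c2 in product(script_i, script_j):
            if c1 == c2 or cluster_of[c1] != cluster_of[c2]:
                continue
            for c3 in list(script_i) + list(script_j):
                if cluster_of[c3] != cluster_of[c1]:
                    yield c1, c2, c3

    def render_two_liner(c1, c2, c3):
        # one hypothetical Question/Answer template
        return f"Why does the {c1} {c2}? Because of the {c3}."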

60 Below are the top-ranking two-liners as rated by a group of fifteen subjects (testing details in the next section). [sent-86, score-0.037]

61 Each concept is indicated in brackets and labeled with the script from which it originated: Why does the [priest]root [kneel]S1 in [church]S2? [sent-87, score-0.716]

62 Because the [priest]root wants to [propose woman]S1. Why does the [priest]root [drink coffee]S1 and [believe god]S2? [sent-88, score-0.422]

63 Because the [priest]root wants to [wake up]S1. Why is the [computer]root [hot]S1 in [mit]S2? [sent-89, score-0.348]

64 Because [mit]S2 is [hell]S2. Why is the [computer]root in [hospital]S1? [sent-90, score-0.171]

65 Because the [computer]root has [virus]S2. 4 Results We evaluate the generated two-liners by presenting them as human-generated to remove possible bias. [sent-91, score-0.206]

66 Each two-liner was generated from one of the root categories (12 two-liners in each): priest, woman, computer, robot; to normalize against individual humor biases, human-made two-liners in the same categories were mixed in. [sent-93, score-0.717]

67 Two-liners generated by three different algorithms were evaluated by each subject: Script model + Concept clustering (SM+CC) Both script opposition and incongruity are favored through spreading activation and concept clustering. [sent-94, score-0.998]

68 Script model only (SM) No concept clustering is employed. [sent-95, score-0.173]

69 Adherence of scripts to the script model is ensured through spreading activation. [sent-96, score-0.626]

70 Baseline Loops are generated from a user-specified root using depth-first search. [sent-97, score-0.126]

71 We compare the average scores between the two-liners generated using both the script model and concept clustering (SM+CC) (MEAN=1. [sent-99, score-0.437]

72 Figure 3: Human blind evaluation of generated two-liners. We observe that the fraction of non-humorous and nonsensical two-liners generated is still significant. [sent-105, score-0.126]

73 Many non-humorous (but semantically sound) two-liners were formed due to erroneous labels on the concept clusters. [sent-106, score-0.178]

74 While clustering provides a fundamental way to generate incongruity, noise in the ConceptNet often leads to cluster overfitting, assigning related concepts to separate clusters. [sent-107, score-0.204]

75 Because our surface form templates assume a part of speech or a phrase type from the ConceptNet specification, erroneous entries produce nonsensical results. [sent-109, score-0.128]

76 We partially address the problem by pruning from consideration low-scoring concepts (ConceptNet features a SCORE attribute reflecting the number of user votes for the concept) and all terminal nodes (nodes that are not expanded by users often indicate weak relationships). [sent-110, score-0.197]
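
The pruning heuristic described here reduces to two filters over ConceptNet nodes: drop concepts whose user-vote SCORE falls below a cutoff and drop terminal nodes with no outgoing edges. The data layout assumed below (a metadata dict with a 'score' key) is an illustration of the idea, not the actual ConceptNet API.

    def prune_concepts(nodes, out_edges, min_score=1):
        """nodes: concept -> metadata dict with a 'score' entry (user votes).
        out_edges: concept -> list of outgoing relations.
        Keeps concepts that are sufficiently voted on and non-terminal."""
        kept = set()
        for concept, meta in nodes.items():
            if meta.get("score", 0) < min_score:
                continue          # low-scoring entry: likely noisy
            if not out_edges.get(concept):
                continue          # terminal node: often a weak relationship
            kept.add(concept)
        return kept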

77 5 Future Work Through observation of the generated semantic paths, we note that more complex narratives, beyond question/answer forms, can be produced from the ConceptNet. [sent-111, score-0.083]

78 Relaxing the rigid template constraint of the surface realizer will allow for more diverse types of generated humor. [sent-112, score-0.164]

79 To mitigate the fragility of concept clustering, we are augmenting the ConceptNet with additional resources that provide domain knowledge. [sent-113, score-0.178]

80 , 2010b), and WordNet-Domains (Kolte and Bhirud, 2008) are both viable avenues for robust concept clustering and incongruity generation. [sent-115, score-0.323]

81 Huge thanks to Max Kelner - those everyday teas at Mattins and continuous inspiration. [sent-117, score-0.03]

82 Script theory revis(it)ed: Joke similarity and joke representation model. [sent-124, score-0.182]

83 A symbolic description of punning riddles and its computer implementation. [sent-130, score-0.139]

84 Senticnet: A publicly available semantic resource for opinion mining. [sent-163, score-0.048]

85 Retrieving documents by constrained spreading activation on automatically constructed hypertexts. [sent-177, score-0.274]

86 but please make it funny: Computational humor with ontological semantics. [sent-198, score-0.651]

87 Learning to laugh (automatically): Computational models for humor recognition. [sent-221, score-0.591]

88 Toward an empirical verification of the general theory of verbal humor. [sent-266, score-0.098]

89 Bayesian inference networks and spreading activation in hypertext systems. [sent-271, score-0.274]

90 Edge dependent pathway scoring for calculating semantic similarity in conceptnet. [sent-277, score-0.048]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('humor', 0.591), ('script', 0.321), ('conceptnet', 0.187), ('scripts', 0.185), ('humorous', 0.171), ('concepts', 0.166), ('activation', 0.154), ('incongruity', 0.15), ('ssth', 0.15), ('concept', 0.135), ('circuits', 0.129), ('joke', 0.129), ('spreading', 0.12), ('ritchie', 0.107), ('root', 0.091), ('circuit', 0.086), ('prie', 0.086), ('raskin', 0.086), ('sm', 0.077), ('punning', 0.075), ('puns', 0.075), ('surface', 0.072), ('stock', 0.066), ('attardo', 0.064), ('binsted', 0.064), ('cambria', 0.064), ('hempelmann', 0.064), ('nijholt', 0.064), ('riddles', 0.064), ('rij', 0.064), ('rjk', 0.064), ('cj', 0.064), ('sk', 0.062), ('ontological', 0.06), ('strapparava', 0.057), ('comput', 0.056), ('nonsensical', 0.056), ('paths', 0.056), ('theory', 0.053), ('overlap', 0.05), ('semantic', 0.048), ('sj', 0.046), ('opposition', 0.045), ('verbal', 0.045), ('alternate', 0.043), ('phonological', 0.043), ('corne', 0.043), ('doct', 0.043), ('fabula', 0.043), ('friedland', 0.043), ('havasi', 0.043), ('hod', 0.043), ('kolte', 0.043), ('manurung', 0.043), ('punchline', 0.043), ('ruch', 0.043), ('senticnet', 0.043), ('std', 0.043), ('swartjes', 0.043), ('twoliners', 0.043), ('waller', 0.043), ('wordplay', 0.043), ('xfin', 0.043), ('feasible', 0.043), ('answer', 0.042), ('clustering', 0.038), ('fifteen', 0.037), ('hahacronym', 0.037), ('sophistication', 0.037), ('florida', 0.037), ('whi', 0.037), ('ose', 0.037), ('clauset', 0.037), ('ci', 0.035), ('generated', 0.035), ('cc', 0.035), ('outgoing', 0.034), ('woman', 0.034), ('cornell', 0.034), ('pain', 0.034), ('doe', 0.034), ('loops', 0.034), ('ible', 0.032), ('nodes', 0.031), ('constraint', 0.031), ('intelligent', 0.031), ('si', 0.031), ('everyday', 0.03), ('obvious', 0.03), ('network', 0.029), ('conforms', 0.029), ('commonsense', 0.029), ('node', 0.028), ('template', 0.026), ('ck', 0.026), ('pr', 0.026), ('hypothesize', 0.025), ('switch', 0.025), ('er', 0.024), ('chain', 0.024)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000005 112 acl-2012-Humor as Circuits in Semantic Networks

Author: Igor Labutov ; Hod Lipson

Abstract: This work presents a first step toward a general implementation of the Semantic-Script Theory of Humor (SSTH). Of the scarce body of research in computational humor, none has focused on humor generation beyond simple puns and punning riddles. We propose an algorithm for mining simple humorous scripts from a semantic network (ConceptNet) by specifically searching for dual scripts that jointly maximize overlap and incongruity metrics in line with Raskin’s Semantic-Script Theory of Humor. Initial results show that a more relaxed constraint of this form is capable of generating humor of deeper semantic content than wordplay riddles. We evaluate these metrics through user-assessed quality of the generated two-liners.

2 0.11639985 7 acl-2012-A Computational Approach to the Automation of Creative Naming

Author: Gozde Ozbal ; Carlo Strapparava

Abstract: In this paper, we propose a computational approach to generate neologisms consisting of homophonic puns and metaphors based on the category of the service to be named and the properties to be underlined. We describe all the linguistic resources and natural language processing techniques that we have exploited for this task. Then, we analyze the performance of the system that we have developed. The empirical results show that our approach is generally effective and it constitutes a solid starting point for the automation ofthe naming process.

3 0.054554511 149 acl-2012-Movie-DiC: a Movie Dialogue Corpus for Research and Development

Author: Rafael E. Banchs

Abstract: This paper describes Movie-DiC, a Movie Dialogue Corpus recently collected for research and development purposes. The collected dataset comprises 132,229 dialogues containing a total of 764,146 turns that have been extracted from 753 movies. Details on how the data collection has been created and how it is structured are provided along with its main statistics and characteristics.

4 0.044946831 55 acl-2012-Community Answer Summarization for Multi-Sentence Question with Group L1 Regularization

Author: Wen Chan ; Xiangdong Zhou ; Wei Wang ; Tat-Seng Chua

Abstract: We present a novel answer summarization method for community Question Answering services (cQAs) to address the problem of “incomplete answer”, i.e., the “best answer” of a complex multi-sentence question misses valuable information that is contained in other answers. In order to automatically generate a novel and non-redundant community answer summary, we segment the complex original multi-sentence question into several sub questions and then propose a general Conditional Random Field (CRF) based answer summary method with group L1 regularization. Various textual and non-textual QA features are explored. Specifically, we explore four different types of contextual factors, namely, the information novelty and non-redundancy modeling for local and non-local sentence interactions under question segmentation. To further unleash the potential of the abundant cQA features, we introduce the group L1 regularization for feature learning. Experimental results on a Yahoo! Answers dataset show that our proposed method significantly outperforms state-of-the-art methods on cQA summarization task.

5 0.043298282 208 acl-2012-Unsupervised Relation Discovery with Sense Disambiguation

Author: Limin Yao ; Sebastian Riedel ; Andrew McCallum

Abstract: To discover relation types from text, most methods cluster shallow or syntactic patterns of relation mentions, but consider only one possible sense per pattern. In practice this assumption is often violated. In this paper we overcome this issue by inducing clusters of pattern senses from feature representations of patterns. In particular, we employ a topic model to partition entity pairs associated with patterns into sense clusters using local and global features. We merge these sense clusters into semantic relations using hierarchical agglomerative clustering. We compare against several baselines: a generative latent-variable model, a clustering method that does not disambiguate between path senses, and our own approach but with only local features. Experimental results show our proposed approach discovers dramatically more accurate clusters than models without sense disambiguation, and that incorporating global features, such as the document theme, is crucial.

6 0.040735256 117 acl-2012-Improving Word Representations via Global Context and Multiple Word Prototypes

7 0.040583044 218 acl-2012-You Had Me at Hello: How Phrasing Affects Memorability

8 0.039037243 41 acl-2012-Bootstrapping a Unified Model of Lexical and Phonetic Acquisition

9 0.038989272 82 acl-2012-Entailment-based Text Exploration with Application to the Health-care Domain

10 0.037034664 127 acl-2012-Large-Scale Syntactic Language Modeling with Treelets

11 0.036999285 90 acl-2012-Extracting Narrative Timelines as Temporal Dependency Structures

12 0.036290206 57 acl-2012-Concept-to-text Generation via Discriminative Reranking

13 0.035201371 180 acl-2012-Social Event Radar: A Bilingual Context Mining and Sentiment Analysis Summarization System

14 0.03441127 177 acl-2012-Sentence Dependency Tagging in Online Question Answering Forums

15 0.034080844 197 acl-2012-Tokenization: Returning to a Long Solved Problem A Survey, Contrastive Experiment, Recommendations, and Toolkit

16 0.034022104 206 acl-2012-UWN: A Large Multilingual Lexical Knowledge Base

17 0.03377904 62 acl-2012-Cross-Lingual Mixture Model for Sentiment Classification

18 0.03341046 3 acl-2012-A Class-Based Agreement Model for Generating Accurately Inflected Translations

19 0.033168688 152 acl-2012-Multilingual WSD with Just a Few Lines of Code: the BabelNet API

20 0.032453533 106 acl-2012-Head-driven Transition-based Parsing with Top-down Prediction


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.102), (1, 0.047), (2, -0.01), (3, 0.003), (4, 0.009), (5, 0.043), (6, 0.003), (7, 0.035), (8, 0.0), (9, 0.007), (10, -0.006), (11, 0.027), (12, 0.006), (13, 0.023), (14, -0.061), (15, -0.041), (16, -0.006), (17, 0.065), (18, -0.017), (19, 0.001), (20, 0.036), (21, 0.007), (22, -0.029), (23, -0.047), (24, -0.059), (25, 0.014), (26, -0.03), (27, -0.081), (28, 0.075), (29, -0.016), (30, 0.015), (31, -0.02), (32, -0.008), (33, 0.022), (34, 0.062), (35, -0.06), (36, 0.078), (37, -0.02), (38, 0.073), (39, -0.092), (40, -0.005), (41, -0.092), (42, 0.095), (43, 0.055), (44, -0.09), (45, 0.226), (46, -0.15), (47, 0.048), (48, -0.16), (49, -0.126)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93756789 112 acl-2012-Humor as Circuits in Semantic Networks

Author: Igor Labutov ; Hod Lipson

Abstract: This work presents a first step toward a general implementation of the Semantic-Script Theory of Humor (SSTH). Of the scarce body of research in computational humor, none has focused on humor generation beyond simple puns and punning riddles. We propose an algorithm for mining simple humorous scripts from a semantic network (ConceptNet) by specifically searching for dual scripts that jointly maximize overlap and incongruity metrics in line with Raskin’s Semantic-Script Theory of Humor. Initial results show that a more relaxed constraint of this form is capable of generating humor of deeper semantic content than wordplay riddles. We evaluate these metrics through user-assessed quality of the generated two-liners.

2 0.74672371 7 acl-2012-A Computational Approach to the Automation of Creative Naming

Author: Gozde Ozbal ; Carlo Strapparava

Abstract: In this paper, we propose a computational approach to generate neologisms consisting of homophonic puns and metaphors based on the category of the service to be named and the properties to be underlined. We describe all the linguistic resources and natural language processing techniques that we have exploited for this task. Then, we analyze the performance of the system that we have developed. The empirical results show that our approach is generally effective and it constitutes a solid starting point for the automation of the naming process.

3 0.57396084 218 acl-2012-You Had Me at Hello: How Phrasing Affects Memorability

Author: Cristian Danescu-Niculescu-Mizil ; Justin Cheng ; Jon Kleinberg ; Lillian Lee

Abstract: Understanding the ways in which information achieves widespread public awareness is a research question of significant interest. We consider whether, and how, the way in which the information is phrased the choice of words and sentence structure — can affect this process. To this end, we develop an analysis framework and build a corpus of movie quotes, annotated with memorability information, in which we are able to control for both the speaker and the setting of the quotes. We find that there are significant differences between memorable and non-memorable quotes in several key dimensions, even after controlling for situational and contextual factors. One is lexical distinctiveness: in aggregate, memorable quotes use less common word choices, but at the same time are built upon a scaffolding of common syntactic patterns. Another is that memorable quotes tend to be more general in ways that make them easy to apply in new contexts — that is, more portable. — We also show how the concept of “memorable language” can be extended across domains. 1 Hello. My name is Inigo Montoya. Understanding what items will be retained in the public consciousness, and why, is a question of fundamental interest in many domains, including marketing, politics, entertainment, and social media; as we all know, many items barely register, whereas others catch on and take hold in many people’s minds. An active line of recent computational work has employed a variety of perspectives on this question. 892 Building on a foundation in the sociology of diffusion [27, 31], researchers have explored the ways in which network structure affects the way information spreads, with domains of interest including blogs [1, 11], email [37], on-line commerce [22], and social media [2, 28, 33, 38]. There has also been recent research addressing temporal aspects of how different media sources convey information [23, 30, 39] and ways in which people react differently to infor- mation on different topics [28, 36]. Beyond all these factors, however, one’s everyday experience with these domains suggests that the way in which a piece of information is expressed the choice of words, the way it is phrased might also have a fundamental effect on the extent to which it takes hold in people’s minds. Concepts that attain wide reach are often carried in messages such as political slogans, marketing phrases, or aphorisms whose language seems intuitively to be memorable, “catchy,” or otherwise compelling. Our first challenge in exploring this hypothesis is to develop a notion of “successful” language that is precise enough to allow for quantitative evaluation. We also face the challenge of devising an evaluation setting that separates the phrasing of a message from the conditions in which it was delivered highlycited quotes tend to have been delivered under compelling circumstances or fit an existing cultural, political, or social narrative, and potentially what appeals to us about the quote is really just its invocation of these extra-linguistic contexts. Is the form of the language adding an effect beyond or independent of these (obviously very crucial) factors? To — — — investigate the question, one needs a way of controlProce dJienjgus, R ofep thueb 5lic0t hof A Knonruea ,l M 8-e1e4ti Jnugly o f2 t0h1e2 A.s ?c so2c0ia1t2io Ans fso rc Ciatoiomnp fuotart Cio nmaplu Ltiantgiounisatlic Lsi,n pgaugiestsi8c 9s2–901, ling as much as possible for the role that the surrounding context of the language plays. 
— — The present work (i): Evaluating language-based memorability Defining what makes an utterance memorable is subtle, and scholars in several domains have written about this question. There is a rough consensus that an appropriate definition involves elements of both recognition people should be able to retain the quote and recognize it when they hear it invoked and production people should be motivated to refer to it in relevant situations [15]. One suggested reason for why some memes succeed is their ability to provoke emotions [16]. Alternatively, memorable quotes can be good for expressing the feelings, mood, or situation of an individual, a group, or a culture (the zeitgeist): “Certain quotes exquisitely capture the mood or feeling we wish to communicate to someone. We hear them ... and store them away for future use” [10]. None of these observations, however, serve as definitions, and indeed, we believe it desirable to — — — not pre-commit to an abstract definition, but rather to adopt an operational formulation based on external human judgments. In designing our study, we focus on a domain in which (i) there is rich use of language, some of which has achieved deep cultural penetration; (ii) there already exist a large number of external human judgments perhaps implicit, but in a form we can extract; and (iii) we can control for the setting in which the text was used. Specifically, we use the complete scripts of roughly 1000 movies, representing diverse genres, eras, and levels of popularity, and consider which lines are the most “memorable”. To acquire memorability labels, for each sentence in each script, we determine whether it has been listed as a “memorable quote” by users of the widely-known IMDb (the Internet Movie Database), and also estimate the number oftimes it appears on the Web. Both ofthese serve as memorability metrics for our purposes. When we evaluate properties of memorable quotes, we comparethemwithquotes thatarenotassessed as memorable, but were spoken by the same character, at approximately the same point in the same movie. This enables us to control in a fairly — fine-grained way for the confounding effects of context discussed above: we can observe differences 893 that persist even after taking into account both the speaker and the setting. In a pilot validation study, we find that human subjects are effective at recognizing the more IMDbmemorable of two quotes, even for movies they have not seen. This motivates a search for features intrinsic to the text of quotes that signal memorability. In fact, comments provided by the human subjects as part of the task suggested two basic forms that such textual signals could take: subjects felt that (i) memorable quotes often involve a distinctive turn of phrase; and (ii) memorable quotes tend to invoke general themes that aren’t tied to the specific setting they came from, and hence can be more easily invoked for future (out of context) uses. We test both of these principles in our analysis of the data. The present work (ii): What distinguishes memorable quotes Under the controlled-comparison setting sketched above, we find that memorable quotes exhibit significant differences from nonmemorable quotes in several fundamental respects, and these differences in the data reinforce the two main principles from the human pilot study. 
First, we show a concrete sense in which memorable quotes are indeed distinctive: with respect to lexical language models trained on the newswire portions of the Brown corpus [21], memorable quotes have significantly lower likelihood than their nonmemorable counterparts. Interestingly, this distinctiveness takes place at the level of words, but not at the level of other syntactic features: the part-ofspeech composition of memorable quotes is in fact more likely with respect to newswire. Thus, we can think of memorable quotes as consisting, in an aggregate sense, of unusual word choices built on a scaffolding of common part-of-speech patterns. We also identify a number of ways in which memorable quotes convey greater generality. In their patterns of verb tenses, personal pronouns, and determiners, memorable quotes are structured so as to be more “free-standing,” containing fewer markers that indicate references to nearby text. Memorable quotes differ in other interesting as- pects as well, such as sound distributions. Our analysis ofmemorable movie quotes suggests a framework by which the memorability of text in a range of different domains could be investigated. We provide evidence that such cross-domain properties may hold, guided by one of our motivating applications in marketing. In particular, we analyze a corpus of advertising slogans, and we show that these slogans have significantly greater likelihood at both the word level and the part-of-speech level with respect to a language model trained on memorable movie quotes, compared to a corresponding language model trained on non-memorable movie quotes. This suggests that some of the principles underlying memorable text have the potential to apply across different areas. Roadmap §2 lays the empirical foundations of our work: the design yasntdh ecerematpioirnic aofl our movie-quotes dataset, which we make publicly available (§2. 1), a pilot study cwhit hw ehu mmakaen subjects validating §I2M.1D),b abased memorability labels (§2.2), and further study bofa incorporating search-engine c2)o,u anntds (§2.3). §3 uddeytoafi lisn our analysis aenardc prediction experiments, using both movie-quotes data and, as an exploration of cross-domain applicability, slogans data. §4 surveys rcerloastse-dd owmoarkin across a variety goafn fsie dladtsa.. §5 briefly sruelmatmedar wizoesrk ka andcr ionsdsic aat veasr some ffuft uierled sd.ire §c5tio bnrsie. 2 I’m ready for my close-up. 2.1 Data To study the properties of memorable movie quotes, we need a source of movie lines and a designation of memorability. Following [8], we constructed a corpus consisting of all lines from roughly 1000 movies, varying in genre, era, and popularity; for each movie, we then extracted the list of quotes from IMDb’s Memorable Quotes page corresponding to the movie.1 A memorable quote in IMDb can appear either as an individual sentence spoken by one character, or as a multi-sentence line, or as a block of dialogue involving multiple characters. In the latter two cases, it can be hard to determine which particular portion is viewed as memorable (some involve a build-up to a punch line; others involve the follow-through after a well-phrased opening sentence), and so we focus in our comparisons on those memorable quotes that 1This extraction involved some edit-distance-based alignment, since the exact form of the line in the script can exhibit minor differences from the version typed into IMDb. 
rmotuqsfebmaNerolbm543281760 0 1234D5ecil678910 894 Figure 1: Location of memorable quotes in each decile of movie scripts (the first 10th, the second 10th, etc.), summed over all movies. The same qualitative results hold if we discard each movie’s very first and last line, which might have privileged status. appear as a single sentence rather than a multi-line block.2 We now formulate a task that we can use to evaluate the features of memorable quotes. Recall that our goal is to identify effects based in the language of the quotes themselves, beyond any factors arising from the speaker or context. Thus, for each (singlesentence) memorable quote M, we identify a nonmemorable quote that is as similar as possible to M in all characteristics but the choice of words. This means we want it to be spoken by the same character in the same movie. It also means that we want it to have the same length: controlling for length is important because we expect that on average, shorter quotes will be easier to remember than long quotes, and that wouldn’t be an interesting textual effect to report. Moreover, we also want to control for the fact that a quote’s position in a movie can affect memorability: certain scenes produce more memorable dialogue, and as Figure 1 demonstrates, in aggregate memorable quotes also occur disproportionately near the beginnings and especially the ends of movies. In summary, then, for each M, we pick a contrasting (single-sentence) quote N from the same movie that is as close in the script as possible to M (either before or after it), subject to the conditions that (i) M and N are uttered by the same speaker, (ii) M and N have the same number of words, and (iii) N does not occur in the IMDb list of memorable 2We also ran experiments relaxing the single-sentence assumption, which allows for stricter scene control and a larger dataset but complicates comparisons involving syntax. The non-syntax results were in line with those reported here. TaJSOMbtrclodekviTn1ra:eBTykhoPrwNenpmlxeasipFIHAeaithrclsfnitkaQeomuifltw’sdaveoitycmsnedoqatbuliocrkeytsl f.woEeimlanchguwspakyirdfsebavot;ilmsdfcoenti’dus.erx-citaINmSnrkeioamct:ohenwmardleytQ.howfeu t’yvrecp,o’gsmrtpuaosnmtyef o rtgnhqieuvrobt.pehasirtdeosfpykuern close together in the movie by the same while the other is not. (Contractions character, have the same length, and one is labeled memorable by the IMDb such as “it’s” count as two words.) quotes for the movie (either as a single line or as part of a larger block). Given such pairs, we formulate a pairwise comparison task: given M and N, determine which is the memorable quote. Psychological research on subjective evaluation [35], as well as initial experiments using ourselves as subjects, indicated that this pairwise set-up easier to work with than simply presenting a single sentence and asking whether it is memorable or not; the latter requires agreement on an “absolute” criterion for memorability that is very hard to impose consistently, whereas the former simply requires a judgment that one quote is more memorable than another. Our main dataset, available at http://www.cs. cornell.edu/∼cristian/memorability.html,3 thus consists of approximately 2200 such (M, N) pairs, separated by a median of 5 same-character lines in the script. The reader can get a sense for the nature of the data from the three examples in Table 1. 
We now discuss two further aspects to the formulation of the experiment: a preliminary pilot study involving human subjects, and the incorporation of search engine counts into the data. 2.2 Pilot study: Human performance As a preliminary consideration, we did a small pilot study to see if humans can distinguish memorable from non-memorable quotes, assuming our IMDBinduced labels as gold standard. Six subjects, all native speakers of English and none an author of this paper, were presented with 11 or 12 pairs of memorable vs. non-memorable quotes; again, we controlled for extra-textual effects by ensuring that in each pair the two quotes come from the same movie, are by the same character, have the same length, and 3Also available there: other examples and factoids. 895 Table 2: Human pilot study: number of matches to IMDb-induced annotation, ordered by decreasing match percentage. For the null hypothesis of random guessing, these results are statistically significant, p < 2−6 ≈ .016. appear as nearly as possible in the same scene.4 The order of quotes within pairs was randomized. Importantly, because we wanted to understand whether the language of the quotes by itself contains signals about memorability, we chose quotes from movies that the subjects said they had not seen. (This means that each subject saw a different set of quotes.) Moreover, the subjects were requested not to consult any external sources of information.5 The reader is welcome to try a demo version of the task at http: //www.cs.cornell.edu/∼cristian/memorability.html. Table 2 shows that all the subjects performed (sometimes much) better than chance, and against the null hypothesis that all subjects are guessing randomly, the results are statistically significant, p < 2−6 ≈ .016. These preliminary findings provide evidenc≈e f.0or1 t6h.e T validity eolifm our traysk fi:n despite trohev apparent difficulty of the job, even humans who haven’t seen the movie in question can recover our IMDb4In this pilot study, we allowed multi-sentence quotes. 5We did not use crowd-sourcing because we saw no way to ensure that this condition would be obeyed by arbitrary subjects. We do note, though, that after our research was completed and as of Apr. 26, 2012, ≈ 11,300 people completed the online test: average accuracy: 27,2 ≈%, 1 1m,3o0d0e npueompbleer c coomrrpelcett:e d9 t/1he2. induced labels with some reliability.6 2.3 Incorporating search engine counts Thus far we have discussed a dataset in which memorability is determined through an explicit labeling drawn from the IMDb. Given the “production” aspect of memorability discussed in § 1, we stihoonu”ld a saplesoc expect tmhaotr mabeimlityora dbislce quotes nw §il1l ,te wnde to appear more extensively on Web pages than nonmemorable quotes; note that incorporating this insight makes it possible to use the (implicit) judgments of a much larger number of people than are represented by the IMDb database. It therefore makes sense to try using search-engine result counts as a second indication of memorability. We experimented with several ways of constructing memorability information from search-engine counts, but this proved challenging. Searching for a quote as a stand-alone phrase runs into the problem that a number of quotes are also sentences that people use without the movie in mind, and so high counts for such quotes do not testify to the phrase’s status as a memorable quote from the movie. 
On the other hand, searching for the quote in a Boolean conjunction with the movie’s title discards most of these uses, but also eliminates a large fraction of the appearances on the Web that we want to find: precisely because memorable quotes tend to have widespread cultural usage, people generally don’t feel the need to include the movie’s title when invoking them. Finally, since we are dealing with roughly 1000 movies, the result counts vary over an enormous range, from recent blockbusters to movies with relatively small fan bases. In the end, we found that it was more effective to use the result counts in conjunction with the IMDb labels, so that the counts played the role of an additional filter rather than a free-standing numerical value. Thus, for each pair (M, N) produced using the IMDb methodology above, we searched for each of M and N as quoted expressions in a Boolean conjunction with the title of the movie. We then kept only those pairs for which M (i) produced more than five results in our (quoted, conjoined) search, and (ii) produced at least twice as many results as the cor6The average accuracy being below 100% reinforces that context is very important, too. 896 responding search for N. We created a version of this filtered dataset using each of Google and Bing, and all the main findings were consistent with the results on the IMDb-only dataset. Thus, in what follows, we will focus on the main IMDb-only dataset, discussing the relationship to the dataset filtered by search engine counts where relevant (in which case we will refer to the +Google dataset). 3 Never send a human to do a machine’s job. We now discuss experiments that investigate the hypotheses discussed in §1. In particular, we devise pmoetthheosdess t dhiastc can assess 1th.e Idnis ptianrcttiicvuelnaer,ss w aend d generality hypotheses and test whether there exists a notion of “memorable language” that operates across domains. In addition, we evaluate and compare the predictive power of these hypotheses. 3.1 Distinctiveness One of the hypotheses we examine is whether the use of language in memorable quotes is to some extent unusual. In order to quantify the level of distinctiveness of a quote, we take a language-model approach: we model “common language” using the newswire sections of the Brown corpus [21]7, and evaluate how distinctive a quote is by evaluating its likelihood with respect to this model the lower the likelihood, the more distinctive. In order to assess different levels of lexical and syntactic distinctiveness, we employ a total of six Laplacesmoothed8 language models: 1-gram, 2-gram, and — 3-gram word LMs and 1-gram, 2-gram and 3-gram LMs. We find strong evidence that from a lexical perspective, memorable quotes are more distinctive than their non-memorable counterparts. As indicated in Table 3, for each of our lexical “common language” models, in about 60% of the quote pairs, the memorable quote is more distinctive. Interestingly, the reverse is true when it comes to part-of-speech9 7Results were qualitatively similar if we used the fiction portions. The age of the Brown corpus makes it less likely to contain modern movie quotes. 8We employ Laplace (additive) smoothing with a smoothing parameter of 0.2. The language models’ vocabulary was that of the entire training corpus. 9Throughout we obtain part-of-speech tags by using the NLTK maximum entropy tagger with default parameters. 
in which the the memorable quote is more distinctive than the non-memorable one according to the respective “common language” model. Significance according to a two-tailed sign test is indicated using *-notation (∗∗∗=“p<.001”). syntax: memorable quotes appear to follow the syntactic patterns of “common language” as closely as or more closely than non-memorable quotes. Together, these results suggest that memorable quotes consist of unusual word sequences built on common syntactic scaffolding. 3.2 Generality Another of our hypotheses is that memorable quotes are easier to use outside the specific context in which they were uttered that is, more “portable” and therefore exhibit fewer terms that refer to those settings. We use the following syntactic properties as proxies for the generality of a quote: • Fewer 3rd-person pronouns, since these commonly r 3efer to a person or object that was introduced earlier in the discourse. Utterances that employ fewer such pronouns are easier to adapt to new contexts, and so will be considered more — — general. • More indefinite articles like a and an, since they are more likely ttioc lreesfer li ktoe general concepts than definite articles. Quotes with more indefinite articles will be considered more general. Fewer past tense verbs and more present tFeenwsee verbs, tseinncsee t vheer bfosrm aenrd are more likely to refer to specific previous events. Therefore utterances that employ fewer past tense verbs (and more present tense verbs) will be considered more general. Table 4 gives the results for each of these four metrics in each case, we show the percentage of • — 897 TalfmGebowsnre4pa:in3srGldet sypfne.msrate.lripnctysoe: purncsetaI56gM47e.326D9o710bf% -qo∗u n∗l tyepa+56iG892rs.o7i364ng% wl∗ eh∗i ch the memorable quote is more general than the non- memorable ones according to the respective metric. Pairs where the metric does not distinguish between the quotes are not considered. quote pairs for which the memorable quote scores better on the generality metric. Note that because the issue of generality is a complex one for which there is no straightforward single metric, our approach here is based on several proxies for generality, considered independently; yet, as the results show, all of these point in a consistent direction. It is an interesting open question to develop richer ways of assessing whether a quote has greater generality, in the sense that people intuitively attribute to memorable quotes. 3.3 “Memorable” language beyond movies One of the motivating questions in our analysis is whether there are general principles underlying “memorable language.” The results thus far suggest potential families of such principles. A further question in this direction is whether the notion of memorability can be extended across different domains, and for this we collected (and distribute on our website) 431 phrases that were explicitly designed to be memorable: advertising slogans (e.g., “Quality never goes out of style.”). The focus on slogans is also in keeping with one of the initial motivations in studying memorability, namely, marketing applications in other words, assessing whether a proposed slogan has features that are consistent with memorable text. The fact that it’s not clear how to construct a collection of “non-memorable” counterparts to slogans appears to pose a technical challenge. 
However, we can still use a language-modeling approach to assess whether the textual properties of the slogans are closer to the memorable movie quotes (as one would conjecture) or to the non-memorable movie quotes. Specifically, we train one language model on memorable quotes and another on non-memorable quotes — guage: percentage of slogans that have higher likelihood under the memorable language model than under the nonmemorable one (for each of the six language models considered). Rightmost column: for reference, the percentage of newswire sentences that have higher likelihood under the memorable language model than under the nonmemorable one. TaG% ble3nipared6stpa:lfeitrnSsyilto.megpareotnsicluaerns mo1s42lto.61g048ae% nseral2w1m.h16e3mn% .comn2p-63ma.0r46e19dm% .to memorable and non-memorable quotes. (%s of 3rd pers. pronouns and indefinite articles are relative to all tokens, %s of past tense are relative to all past and present verbs.) and compare how likely each slogan is to be produced according to these two models. As shown in the middle column of Table 5, we find that slogans are better predicted both lexically and syntactically by the former model. This result thus offers evidence for a concept of “memorable language” that can be applied beyond a single domain. We also note that the higher likelihood of slogans under a “memorable language” model is not simply occurring for the trivial reason that this model predicts all other large bodies of text better. In particular, the newswire section of the Brown corpus is predicted better at the lexical level by the language model trained on non-memorable quotes. Finally, Table 6 shows that slogans employ general language, in the sense that for each of our generality metrics, we see a slogans/memorablequotes/non-memorable quotes spectrum. 3.4 Prediction task We now show how the principles discussed above can provide features for a basic prediction task, corresponding to the task in our human pilot study: 898 given a pair of quotes, identify the memorable one. Our first formulation of the prediction task uses a standard bag-of-words model10. If there were no information in the textual content of a quote to determine whether it were memorable, then an SVM employing bag-of-words features should perform no better than chance. Instead, though, it obtains 59.67% (10-fold cross-validation) accuracy, as shown in Table 7. We then develop models using features based on the measures formulated earlier in this section: generality measures (the four listed in Table 4); distinctiveness measures (likelihood according to 1, 2, and 3-gram “common language” models at the lexical and part-of-speech level for each quote in the pair, their differences, and pairwise comparisons between them); and similarityto-slogans measures (likelihood according to 1, 2, and 3-gram slogan-language models at the lexical and part-of-speech level for each quote in the pair, their differences, and pairwise comparisons between them). Even a relatively small number of distinctiveness features, on their own, improve significantly over the much larger bag-of-words model. When we include additional features based on generality and language-model features measuring similarity to slogans, the performance improves further (last line of Table 7). 
Thus, the main conclusion from these prediction tasks is that abstracting notions such as distinctiveness and generality can produce relatively streamlined models that outperform much heavier-weight bag-of-words models, and can suggest steps toward approaching the performance of human judges who very much unlike our system have the full cultural context in which movies occur at their disposal. — — 3.5 Other characteristics We also made some auxiliary observations that may be ofinterest. Specifically, we find differences in letter and sound distribution (e.g., memorable quotes after curse-word removal use significantly more “front sounds” (labials or front vowels such as represented by the letter i) and significantly fewer “back sounds” such as the one represented by u),11 — — 10We discarded terms appearing fewer than 10 times. 11These findings may relate to marketing research on sound symbolism [7, 19, 40]. TablesdgF7lieao:sngtPiehnorauefc dtliswevctymeo irnp.des:StoVgeMh10r-fo#ldec9ra265ot42sv5aA6l8942ic.d36720atu57%ri aocn∗yresult using the respective feature sets. Random baseline accuracy is 50%. Accuracies statistically significantly greater than bag-of-words according to a two-tailed t-test are indicated with *(p<.05) and **(p<.01). word complexity (e.g., memorable quotes use words with significantly more syllables) and phrase complexity (e.g., memorable quotes use fewer coordinating conjunctions). The latter two are in line with our distinctiveness hypothesis. 4 A long time ago, in a galaxy far, far away How an item’s linguistic form affects the reaction it generates has been studied in several contexts, including evaluations of product reviews [9], political speeches [12], on-line posts [13], scientific papers [14], and retweeting of Twitter posts [36]. We use a different set of features, abstracting the notions of distinctiveness and generality, in order to focus on these higher-level aspects of phrasing rather than on particular lower-level features. Related to our interest in distinctiveness, work in advertising research has studied the effect of syntactic complexity on recognition and recall of slogans [5, 6, 24]. There may also be connections to Von Restorff’s isolation effect Hunt [17], which asserts that when all but one item in a list are similar in some way, memory for the different item is enhanced. Related to our interest in generality, Knapp et al. [20] surveyed subjects regarding memorable messages or pieces of advice they had received, finding that the ability to be applied to multiple concrete situations was an important factor. Memorability, although distinct from “memorizability”, relates to short- and long-term recall. Thorn and Page [34] survey sub-lexical, lexical, and semantic attributes affecting short-term memorability of lexical items. Studies of verbatim recall have also considered the task of distinguishing an exact quote from close paraphrases [3]. Investigations of longterm recall have included studies ofculturally signif- 899 icant passages of text [29] and findings regarding the effect of rhetorical devices of alliterative [4], “rhythmic, poetic, and thematic constraints” [18, 26]. Finally, there are complex connections between humor and memory [32], which may lead to interactions with computational humor recognition [25]. 5 I think this is the beginning of a beautiful friendship. Motivated by the broad question of what kinds of information achieve widespread public awareness, we studied the the effect of phrasing on a quote’s memorability. 
A challenge is that quotes differ not only in how they are worded, but also in who said them and under what circumstances; to deal with this difficulty, we constructed a controlled corpus of movie quotes in which lines deemed memorable are paired with non-memorable lines spoken by the same character at approximately the same point in the same movie. After controlling for context and situation, memorable quotes were still found to exhibit, on average (there will always be individual exceptions), significant differences from non-memorable quotes in several important respects, including measures capturing distinctiveness and generality. Our experiments with slogans show how the principles we identify can extend to a different domain.

Future work may lead to applications in marketing, advertising and education [4]. Moreover, the subtle nature of memorability, and its connection to research in psychology, suggests a range of further research directions. We believe that the framework developed here can serve as the basis for further computational studies of the process by which information takes hold in the public consciousness, and the role that language effects play in this process.

My mother thanks you. My father thanks you. My sister thanks you. And I thank you: Rebecca Hwa, Evie Kleinberg, Diana Minculescu, Alex Niculescu-Mizil, Jennifer Smith, Benjamin Zimmer, and the anonymous reviewers for helpful discussions and comments; our annotators Steven An, Lars Backstrom, Eric Baumer, Jeff Chadwick, Evie Kleinberg, and Myle Ott; and the makers of Cepacol, Robitussin, and Sudafed, whose products got us through the submission deadline. This paper is based upon work supported in part by NSF grants IIS-0910664, IIS-1016099, Google, and Yahoo!

References

[1] Eytan Adar, Li Zhang, Lada A. Adamic, and Rajan M. Lukose. Implicit structure and the dynamics of blogspace. In Workshop on the Weblogging Ecosystem, 2004.
[2] Lars Backstrom, Dan Huttenlocher, Jon Kleinberg, and Xiangyang Lan. Group formation in large social networks: Membership, growth, and evolution. In Proceedings of KDD, 2006.
[3] Elizabeth Bates, Walter Kintsch, Charles R. Fletcher, and Vittoria Giuliani. The role of pronominalization and ellipsis in texts: Some memory experiments. Journal of Experimental Psychology: Human Learning and Memory, 6(6):676–691, 1980.
[4] Frank Boers and Seth Lindstromberg. Finding ways to make phrase-learning feasible: The mnemonic effect of alliteration. System, 33(2):225–238, 2005.
[5] Samuel D. Bradley and Robert Meeds. Surface-structure transformations and advertising slogans: The case for moderate syntactic complexity. Psychology and Marketing, 19:595–619, 2002.
[6] Robert Chamblee, Robert Gilmore, Gloria Thomas, and Gary Soldow. When copy complexity can help ad readership. Journal of Advertising Research, 33(3):23–23, 1993.
[7] John Colapinto. Famous names. The New Yorker, pages 38–43, 2011.
[8] Cristian Danescu-Niculescu-Mizil and Lillian Lee. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, 2011.
[9] Cristian Danescu-Niculescu-Mizil, Gueorgi Kossinets, Jon Kleinberg, and Lillian Lee. How opinions are received by online communities: A case study on Amazon.com helpfulness votes. In Proceedings of WWW, pages 141–150, 2009.
[10] Stuart Fischoff, Esmeralda Cardenas, Angela Hernandez, Korey Wyatt, Jared Young, and Rachel Gordon.
Popular movie quotes: Reflections of a people and a culture. In Annual Convention of the American Psychological Association, 2000.
[11] Daniel Gruhl, R. Guha, David Liben-Nowell, and Andrew Tomkins. Information diffusion through blogspace. In Proceedings of WWW, pages 491–501, 2004.
[12] Marco Guerini, Carlo Strapparava, and Oliviero Stock. Trusting politicians' words (for persuasive NLP). In Proceedings of CICLing, pages 263–274, 2008.
[13] Marco Guerini, Carlo Strapparava, and Gözde Özbal. Exploring text virality in social networks. In Proceedings of ICWSM (poster), 2011.
[14] Marco Guerini, Alberto Pepe, and Bruno Lepri. Do linguistic style and readability of scientific abstracts affect their virality? In Proceedings of ICWSM, 2012.
[15] Richard Jackson Harris, Abigail J. Werth, Kyle E. Bures, and Chelsea M. Bartel. Social movie quoting: What, why, and how? Ciencias Psicologicas, 2(1):35–45, 2008.
[16] Chip Heath, Chris Bell, and Emily Steinberg. Emotional selection in memes: The case of urban legends. Journal of Personality, 81(6):1028–1041, 2001.
[17] R. Reed Hunt. The subtlety of distinctiveness: What von Restorff really did. Psychonomic Bulletin & Review, 2(1):105–112, 1995.
[18] Ira E. Hyman Jr. and David C. Rubin. Memorabeatlia: A naturalistic study of long-term memory. Memory & Cognition, 18(2):205–214, 1990.
[19] Richard R. Klink. Creating brand names with meaning: The use of sound symbolism. Marketing Letters, 11(1):5–20, 2000.
[20] Mark L. Knapp, Cynthia Stohl, and Kathleen K. Reardon. "Memorable" messages. Journal of Communication, 31(4):27–41, 1981.
[21] Henry Kučera and W. Nelson Francis. Computational analysis of present-day American English. Dartmouth Publishing Group, 1967.
[22] Jure Leskovec, Lada Adamic, and Bernardo Huberman. The dynamics of viral marketing. ACM Transactions on the Web, 1(1), May 2007.
[23] Jure Leskovec, Lars Backstrom, and Jon Kleinberg. Meme-tracking and the dynamics of the news cycle. In Proceedings of KDD, pages 497–506, 2009.
[24] Tina M. Lowrey. The relation between script complexity and commercial memorability. Journal of Advertising, 35(3):7–15, 2006.
[25] Rada Mihalcea and Carlo Strapparava. Learning to laugh (automatically): Computational models for humor recognition. Computational Intelligence, 22(2):126–142, 2006.
[26] Milman Parry and Adam Parry. The making of Homeric verse: The collected papers of Milman Parry. Clarendon Press, Oxford, 1971.
[27] Everett Rogers. Diffusion of Innovations. Free Press, fourth edition, 1995.
[28] Daniel M. Romero, Brendan Meeder, and Jon Kleinberg. Differences in the mechanics of information diffusion across topics: Idioms, political hashtags, and complex contagion on Twitter. In Proceedings of WWW, pages 695–704, 2011.
[29] David C. Rubin. Very long-term memory for prose and verse. Journal of Verbal Learning and Verbal Behavior, 16(5):611–621, 1977.
[30] Nathan Schneider, Rebecca Hwa, Philip Gianfortoni, Dipanjan Das, Michael Heilman, Alan W. Black, Frederick L. Crabbe, and Noah A. Smith. Visualizing topical quotations over time to understand news discourse. Technical Report CMU-LTI-01-103, CMU, 2010.
[31] David Strang and Sarah Soule. Diffusion in organizations and social movements: From hybrid corn to poison pills. Annual Review of Sociology, 24:265–290, 1998.
[32] Hannah Summerfelt, Louis Lippman, and Ira E. Hyman Jr. The effect of humor on memory: Constrained by the pun. The Journal of General Psychology, 137(4), 2010.
[33] Eric Sun, Itamar Rosenn, Cameron Marlow, and Thomas M. Lento.
Gesundheit! Modeling contagion through Facebook News Feed. In Proceedings of ICWSM, 2009.
[34] Annabel Thorn and Mike Page. Interactions Between Short-Term and Long-Term Memory in the Verbal Domain. Psychology Press, 2009.
[35] Louis L. Thurstone. A law of comparative judgment. Psychological Review, 34(4):273–286, 1927.
[36] Oren Tsur and Ari Rappoport. What's in a Hashtag? Content based prediction of the spread of ideas in microblogging communities. In Proceedings of WSDM, 2012.
[37] Fang Wu, Bernardo A. Huberman, Lada A. Adamic, and Joshua R. Tyler. Information flow in social groups. Physica A: Statistical and Theoretical Physics, 337(1-2):327–335, 2004.
[38] Shaomei Wu, Jake M. Hofman, Winter A. Mason, and Duncan J. Watts. Who says what to whom on Twitter. In Proceedings of WWW, 2011.
[39] Jaewon Yang and Jure Leskovec. Patterns of temporal variation in online media. In Proceedings of WSDM, 2011.
[40] Eric Yorkston and Geeta Menon. A sound idea: Phonetic effects of brand names on consumer judgments. Journal of Consumer Research, 31(1):43–51, 2004.

4 0.50995845 186 acl-2012-Structuring E-Commerce Inventory

Author: Karin Mauge ; Khash Rohanimanesh ; Jean-David Ruvini

Abstract: Large e-commerce enterprises feature millions of items entered daily by a large variety of sellers. While some sellers provide rich, structured descriptions of their items, a vast majority of them provide unstructured natural language descriptions. In the paper we present a two-step method for structuring items into descriptive properties. The first step consists of unsupervised property discovery and extraction. The second step involves supervised property synonym discovery using a maximum-entropy-based clustering algorithm. We evaluate our method on a year's worth of e-commerce data and show that it achieves excellent precision with good recall.

5 0.33851758 195 acl-2012-The Creation of a Corpus of English Metalanguage

Author: Shomir Wilson

Abstract: Metalanguage is an essential linguistic mechanism which allows us to communicate explicit information about language itself. However, it has been underexamined in research in language technologies, to the detriment of the performance of systems that could exploit it. This paper describes the creation of the first tagged and delineated corpus of English metalanguage, accompanied by an explicit definition and a rubric for identifying the phenomenon in text. This resource will provide a basis for further studies of metalanguage and enable its utilization in language technologies.

6 0.32866484 120 acl-2012-Information-theoretic Multi-view Domain Adaptation

7 0.3123832 26 acl-2012-Applications of GPC Rules and Character Structures in Games for Learning Chinese Characters

8 0.29370484 161 acl-2012-Polarity Consistency Checking for Sentiment Dictionaries

9 0.28179824 197 acl-2012-Tokenization: Returning to a Long Solved Problem A Survey, Contrastive Experiment, Recommendations, and Toolkit

10 0.28168312 77 acl-2012-Ecological Evaluation of Persuasive Messages Using Google AdWords

11 0.26783499 74 acl-2012-Discriminative Pronunciation Modeling: A Large-Margin, Feature-Rich Approach

12 0.26446754 35 acl-2012-Automatically Mining Question Reformulation Patterns from Search Log Data

13 0.26388741 83 acl-2012-Error Mining on Dependency Trees

14 0.26089215 41 acl-2012-Bootstrapping a Unified Model of Lexical and Phonetic Acquisition

15 0.25408527 42 acl-2012-Bootstrapping via Graph Propagation

16 0.25255492 206 acl-2012-UWN: A Large Multilingual Lexical Knowledge Base

17 0.24903905 49 acl-2012-Coarse Lexical Semantic Annotation with Supersenses: An Arabic Case Study

18 0.24657749 180 acl-2012-Social Event Radar: A Bilingual Context Mining and Sentiment Analysis Summarization System

19 0.24597523 136 acl-2012-Learning to Translate with Multiple Objectives

20 0.23217171 129 acl-2012-Learning High-Level Planning from Text


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(25, 0.011), (26, 0.04), (28, 0.04), (30, 0.032), (34, 0.412), (37, 0.016), (39, 0.057), (74, 0.021), (82, 0.037), (85, 0.045), (90, 0.073), (92, 0.055), (94, 0.018), (99, 0.041)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.82533169 200 acl-2012-Toward Automatically Assembling Hittite-Language Cuneiform Tablet Fragments into Larger Texts

Author: Stephen Tyndall

Abstract: This paper presents the problem within Hittite and Ancient Near Eastern studies of fragmented and damaged cuneiform texts, and proposes to use well-known text classification metrics, in combination with some facts about the structure of Hittite-language cuneiform texts, to help classify a number of fragments of clay cuneiform-script tablets into more complete texts. In particular, I propose using Sumerian and Akkadian ideogrammatic signs within Hittite texts to improve the performance of Naive Bayes and Maximum Entropy classifiers. The performance in some cases is improved, and in some cases very much not, suggesting that the variable frequency of occurrence of these ideograms in individual fragments makes considerable difference in the ideal choice for a classification method. Further, complexities of the writing system and the digital availability of Hittite texts complicate the problem.

same-paper 2 0.80588454 112 acl-2012-Humor as Circuits in Semantic Networks

Author: Igor Labutov ; Hod Lipson

Abstract: This work presents a first step to a general implementation of the Semantic-Script Theory of Humor (SSTH). Of the scarce amount of research in computational humor, no research had focused on humor generation beyond simple puns and punning riddles. We propose an algorithm for mining simple humorous scripts from a semantic network (ConceptNet) by specifically searching for dual scripts that jointly maximize overlap and incongruity metrics in line with Raskin’s Semantic-Script Theory of Humor. Initial results show that a more relaxed constraint of this form is capable of generating humor of deeper semantic content than wordplay riddles. We evaluate the said metrics through a user-assessed quality of the generated two-liners.

3 0.57987845 219 acl-2012-langid.py: An Off-the-shelf Language Identification Tool

Author: Marco Lui ; Timothy Baldwin

Abstract: We present langid.py, an off-the-shelf language identification tool. We discuss the design and implementation of langid.py, and provide an empirical comparison on 5 long-document datasets and 2 datasets from the microblog domain. We find that langid.py maintains consistently high accuracy across all domains, making it ideal for end-users that require language identification without wanting to invest in preparation of in-domain training data.

4 0.43275443 191 acl-2012-Temporally Anchored Relation Extraction

Author: Guillermo Garrido ; Anselmo Penas ; Bernardo Cabaleiro ; Alvaro Rodrigo

Abstract: Although much work on relation extraction has aimed at obtaining static facts, many of the target relations are actually fluents, as their validity is naturally anchored to a certain time period. This paper proposes a methodological approach to temporally anchored relation extraction. Our proposal performs distant supervised learning to extract a set of relations from a natural language corpus, and anchors each of them to an interval of temporal validity, aggregating evidence from documents supporting the relation. We use a rich graph-based document-level representation to generate novel features for this task. Results show that our implementation for temporal anchoring is able to achieve 69% of the upper-bound performance imposed by the relation extraction step. Compared to the state of the art, the overall system achieves the highest precision reported.

5 0.30693182 206 acl-2012-UWN: A Large Multilingual Lexical Knowledge Base

Author: Gerard de Melo ; Gerhard Weikum

Abstract: We present UWN, a large multilingual lexical knowledge base that describes the meanings and relationships of words in over 200 languages. This paper explains how link prediction, information integration and taxonomy induction methods have been used to build UWN based on WordNet and extend it with millions of named entities from Wikipedia. We additionally introduce extensions to cover lexical relationships, frame-semantic knowledge, and language data. An online interface provides human access to the data, while a software API enables applications to look up over 16 million words and names.

6 0.29998946 187 acl-2012-Subgroup Detection in Ideological Discussions

7 0.2973783 174 acl-2012-Semantic Parsing with Bayesian Tree Transducers

8 0.29704589 80 acl-2012-Efficient Tree-based Approximation for Entailment Graph Learning

9 0.29549384 21 acl-2012-A System for Real-time Twitter Sentiment Analysis of 2012 U.S. Presidential Election Cycle

10 0.29544216 31 acl-2012-Authorship Attribution with Author-aware Topic Models

11 0.29505467 84 acl-2012-Estimating Compact Yet Rich Tree Insertion Grammars

12 0.29440138 132 acl-2012-Learning the Latent Semantics of a Concept from its Definition

13 0.2938385 29 acl-2012-Assessing the Effect of Inconsistent Assessors on Summarization Evaluation

14 0.29317176 102 acl-2012-Genre Independent Subgroup Detection in Online Discussion Threads: A Study of Implicit Attitude using Textual Latent Semantics

15 0.29242077 139 acl-2012-MIX Is Not a Tree-Adjoining Language

16 0.29147705 36 acl-2012-BIUTEE: A Modular Open-Source System for Recognizing Textual Entailment

17 0.29094785 156 acl-2012-Online Plagiarized Detection Through Exploiting Lexical, Syntax, and Semantic Information

18 0.29083568 214 acl-2012-Verb Classification using Distributional Similarity in Syntactic and Semantic Structures

19 0.29079467 22 acl-2012-A Topic Similarity Model for Hierarchical Phrase-based Translation

20 0.28915018 175 acl-2012-Semi-supervised Dependency Parsing using Lexical Affinities