acl acl2012 acl2012-186 knowledge-graph by maker-knowledge-mining

186 acl-2012-Structuring E-Commerce Inventory


Source: pdf

Author: Karin Mauge ; Khash Rohanimanesh ; Jean-David Ruvini

Abstract: Large e-commerce enterprises feature millions of items entered daily by a large variety of sellers. While some sellers provide rich, structured descriptions of their items, a vast majority of them provide unstructured natural language descriptions. In this paper we present a two-step method for structuring items into descriptive properties. The first step consists in unsupervised property discovery and extraction. The second step involves supervised property synonym discovery using a maximum entropy based clustering algorithm. We evaluate our method on a year's worth of e-commerce data and show that it achieves excellent precision with good recall.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Structuring E-Commerce Inventory Karin Mauge eBay Research Labs 2145 Hamilton Avenue San Jose, CA 95125 kmauge@ebay. [sent-1, score-0.283]

2 com Khash Rohanimanesh eBay Research Labs 2145 Hamilton Avenue San Jose, CA 95125 krohanimanesh@ebay. [sent-2, score-0.283]

3 com Jean-David Ruvini eBay Research Labs 2145 Hamilton Avenue San Jose, CA 95125 jruvini@ebay. [sent-3, score-0.364]

4 com Abstract Large e-commerce enterprises feature millions of items entered daily by a large variety of sellers. [sent-4, score-0.331]

5 While some sellers provide rich, structured descriptions of their items, a vast majority of them provide unstructured natural language descriptions. [sent-5, score-0.558]

6 In this paper we present a two-step method for structuring items into descriptive properties. [sent-6, score-0.449]

7 The first step consists in unsupervised property discovery and extraction. [sent-7, score-0.522]

8 The second step involves supervised property synonym discovery using a maximum entropy based clustering algorithm. [sent-8, score-0.704]

9 We evaluate our method on a year's worth of e-commerce data and show that it achieves excellent precision with good recall. [sent-9, score-0.118]

10 1 Introduction Online commerce has gained a lot of popularity over the past decade. [sent-10, score-0.142]

11 Large on-line C2C marketplaces like eBay and Amazon feature a very large and long-tail inventory with millions of items (product offers) entered into the marketplace every day by a large variety of sellers. [sent-11, score-0.576]

12 While some sellers (generally large professional ones) provide rich, structured descriptions of their products (using schemas or via a global trade item number), the vast majority only provide unstructured natural language descriptions. [sent-12, score-0.788]

13 For example, this is important for measuring item similarity and complementarity in merchandising, providing faceted navigation and various business intelligence applications. [sent-15, score-0.277]

14 Note that structuring items does not necessarily mean identifying products, as not all e-commerce inventory is manufactured (animals, for example). [sent-16, score-0.631]

15 Structuring inventory in the e-commerce domain raises several challenges. [sent-17, score-0.158]

16 First, one needs to identify and extract the names and the values used by individual sellers from unstructured textual descriptions. [sent-18, score-0.521]

17 Second, different sellers may describe the same product in very different ways, using different terminologies. [sent-19, score-0.439]

18 For example, Figure 1 shows 3 item descriptions of hard drives from 3 different sellers. [sent-20, score-0.311]

19 The left description mentions "rotational speed" in a specification table while the other two descriptions use the synonym "spindle speed" in a bulleted list (top right) or natural language specifications (bottom right). [sent-21, score-0.438]

20 This requires discovering semantically equivalent property names and values across inventories from multiple sellers. [sent-22, score-0.538]

21 Third, the scale at which on-line marketplaces operate makes it impractical to solve any of these problems manually. [sent-23, score-0.122]

22 For instance, eBay reported 99 million active users in 2011, many of whom are sellers, which may translate into thousands or even millions of synonyms to discover across more than 20,000 categories ranging from consumer electronics to collectibles and art. [sent-24, score-0.148]

23 This paper describes a two-step process for structuring items in the e-commerce domain. [sent-25, score-0.429]

24 The first step consists in an unsupervised property extraction technique which allows discovering name-value [sent-26, score-0.545]

25 pairs from unstructured item descriptions. [sent-28, score-0.295]

26 The second step consists in identifying semantically equivalent property names amongst these extracted properties. [sent-29, score-0.537]

27 This is accomplished using supervised maximum entropy based clustering. [sent-30, score-0.099]
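To make the idea concrete, here is a minimal sketch of a supervised maximum-entropy (logistic regression) pairwise classifier over property names followed by a simple clustering step. The features, training pairs, decision threshold, and greedy single-link clustering below are illustrative assumptions, not the paper's actual design.

```python
# Illustrative sketch only: maximum-entropy pairwise classification of
# property-name synonymy, followed by greedy clustering of the names.
from itertools import combinations
from sklearn.linear_model import LogisticRegression

def pair_features(a, b):
    """Simple string-similarity features over two property names (assumed)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    ca, cb = set(a.lower()), set(b.lower())
    return [
        len(ta & tb) / max(len(ta | tb), 1),            # token Jaccard
        len(ca & cb) / max(len(ca | cb), 1),            # character Jaccard
        abs(len(a) - len(b)) / max(len(a), len(b), 1),  # length difference
    ]

# Hypothetical labeled pairs: 1 = synonyms, 0 = not synonyms.
train_pairs = [
    (("rotational speed", "spindle speed"), 1),
    (("hard drive capacity", "capacity"), 1),
    (("rotational speed", "color"), 0),
    (("interface", "spindle speed"), 0),
]
X = [pair_features(a, b) for (a, b), _ in train_pairs]
y = [label for _, label in train_pairs]
maxent = LogisticRegression().fit(X, y)  # maximum-entropy model

def cluster_names(names, threshold=0.5):
    """Greedy single-link clustering over pairwise synonym probabilities."""
    parent = {n: n for n in names}
    def find(n):
        while parent[n] != n:
            n = parent[n]
        return n
    for a, b in combinations(names, 2):
        p = maxent.predict_proba([pair_features(a, b)])[0][1]
        if p >= threshold:
            parent[find(a)] = find(b)  # merge the two clusters
    clusters = {}
    for n in names:
        clusters.setdefault(find(n), []).append(n)
    return list(clusters.values())

print(cluster_names(["rotational speed", "spindle speed", "color"]))
```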

28 Note that, although value synonym discovery is an equally important task for structuring items, this is still an area of ongoing research and is not addressed in this paper. [sent-31, score-0.531]

29 We then describe the two steps of our approach: 1) unsupervised property discovery and extraction and 2) property name synonym discovery. [sent-34, score-1.128]

30 2 Related Work This section reviews related work for the two components of our method, namely unsupervised property extraction and supervised property name synonym discovery. [sent-36, score-1.153]

31 1 Unsupervised Property Extraction A lot of progress has been accomplished in the area of property discovery from product reviews since the pioneering work by (Hu and Liu, 2004). [sent-38, score-0.368]

32 Most of this work is based on the observation, later formalized as double propagation by (Qiu et al. [sent-39, score-0.075]

33 , 2009), that in reviews, opinion words are usually associated with product properties in some ways, and thus product properties can be identified from opinion words and opinion words from properties alternately and iteratively. [sent-40, score-1.167]

34 , 2005) used Part-Of-Speech and supervised rule mining to generate language patterns and identify product properties; (Popescu and Etzioni, 2005) used pointwise mutual information between candidate properties and meronymy discriminators; (Zhuang et al. [sent-42, score-0.532]

35 , 2007) mined property-opinion patterns using statistical and contextual cues; (Wang and Wang, 2008) leveraged property-opinion mutual information and linguistic rules to identify infrequent properties; and (Zhang et al. [sent-45, score-0.077]

36 , 2010) proposed a ranking scheme to improve double propagation precision. [sent-46, score-0.075]

37 In this paper, we are focusing on extracting properties from product descriptions which do not contain opinion words. [sent-47, score-0.531]

38 In a sense, item properties can be viewed as slots of product templates and our work bears similarities with template induction methods. [sent-48, score-0.609]

39 (Chambers and Jurafsky, 2011) proposed a method for inferring event templates based on word clustering according to their proximity in the corpus and syntactic function clustering. [sent-49, score-0.069]

40 Unfortunately, this technique cannot be applied to our problem due to the lack of discourse redundancy within item descriptions. [sent-50, score-0.243]

41 , 2011) also addressed the problem of structuring items in the e-commerce domain. [sent-52, score-0.436]

42 However, these works assume that property names are known in advance and focus on discovering values for these properties from very short product titles. [sent-53, score-0.889]

43 Although we are primarily concerned with unsupervised property discovery, it is worth mentioning (Peng and McCallum, 2004) and (Ghani et al. [sent-54, score-0.446]

44 , 2006) who approached the problem using supervised machine learning techniques and require labeled data. [sent-55, score-0.051]

45 2 Property Name Synonym Discovery Our work is related to the synonym discovery research which aims at identifying groups of words that are semantically identical based on some defined similarity metric. [sent-57, score-0.29]

46 , 2010) propose a constrained semi-supervised learning method using a naive Bayes formulation of EM seeded by a small set of labeled data and a set of soft constraints based on the prior knowledge of the problem. [sent-66, score-0.035]

47 Our work is also related to the existing research on the schema matching problem, where the objective is to identify objects that are semantically related across schemas. [sent-71, score-0.399]

48 Figure 1: Three examples of item descriptions containing a specification table (left image), a bulleted list (top right) and natural language specifications (bottom right). There has been an extensive study on the [sent-72, score-0.492]

49 problem of schema matching (for a comprehensive survey see (Rahm and Bernstein, 2001 ; Bellahsene et al. [sent-73, score-0.311]

50 Palopol and Ursino, 1998) often utilize only the schema information (e. [sent-78, score-0.241]

51 , elements, domain types of schema elements, and schema structure) to define a similarity metric for performing matching among the schema elements in a hard-coded fashion. [sent-80, score-0.831]

52 In contrast, learning-based approaches learn a similarity metric based on both the schema information and the data. [sent-81, score-0.241]

53 , schema meta-data, statistics of the data content, properties of the objects shared between the schemas, etc). [sent-85, score-0.528]

54 These systems do not exploit the complete textual information in the data content and therefore have limited applicability. [sent-86, score-0.042]

55 Most recent systems attempt to incorporate the textual contents of the data sources into the system. [sent-87, score-0.042]

56 (2001) introduce LSD, which is a semi-automatic machine-learning-based matching framework that trains a set of base learners using a set of user-provided semantic mappings over small data sources. [sent-89, score-0.231]

57 Each base learner exploits a different type of information, e. [sent-90, score-0.042]

58 source schema information and information in the data source. [sent-92, score-0.241]

59 Given a new data source, the base learners are used to discover semantic mappings and their prediction is combined using a meta-learner. [sent-93, score-0.211]

60 , 2003) also uses a set of base learners combined into a meta-learner for solving the matching problem between two ontologies. [sent-95, score-0.203]

61 , 2008) where they propose a general framework for jointly performing schema matching, co-reference and canonicalization using a supervised machine learning approach. [sent-97, score-0.292]

62 In this approach the matching problem is treated as a clustering problem in the schema attribute space, where a cluster captures a matched set of attributes. [sent-98, score-0.347]

63 , 2001) is trained using user-provided mappings between example schemas, or ontologies. [sent-100, score-0.062]

64 CRF benefits from first order logic features that capture both schema/ontology information as well as textual features in the related data sources. [sent-101, score-0.042]

65 3 Unsupervised Property Extraction The first step of our solution to structuring e-commerce inventory aims at discovering and extracting relevant properties from items. [sent-102, score-0.843]

66 Our method is unsupervised and requires no prior knowledge of relevant properties or any domain knowledge as it operates the exact same way for all items and categories. [sent-103, score-0.528]

67 It maintains a set of previously discovered properties called known properties with popularity information. [sent-104, score-0.62]

68 value V ) is defined as the number of sellers who are using N (resp. [sent-106, score-0.366]

69 A seller is said to use a name or a value if we are able to extract the property name or value from at least one of its item descriptions. [sent-108, score-0.862]

70 The method is incremental in that it starts with an empty set of known properties, mines individual items independently and incrementally builds and updates the set of known properties. [sent-109, score-0.292]
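As an illustration only (not the paper's implementation), a known-properties store with seller-based popularity counts, updated incrementally as items are mined, could look like the sketch below; the `min_popularity` threshold is an assumed parameter.

```python
# Minimal sketch of the incremental "known properties" store: popularity of a
# name or value is the number of distinct sellers it was extracted from.
from collections import defaultdict

class KnownProperties:
    def __init__(self):
        self.name_sellers = defaultdict(set)   # property name -> seller ids
        self.value_sellers = defaultdict(set)  # property value -> seller ids

    def add_extraction(self, seller_id, name, value):
        """Record that `name: value` was extracted from one of this seller's items."""
        self.name_sellers[name.lower()].add(seller_id)
        self.value_sellers[value.lower()].add(seller_id)

    def name_popularity(self, name):
        return len(self.name_sellers[name.lower()])

    def known_names(self, min_popularity=2):
        """Names treated as 'known' once enough distinct sellers use them (assumed threshold)."""
        return {n for n, s in self.name_sellers.items() if len(s) >= min_popularity}

# Start from an empty set and update incrementally as items are mined.
store = KnownProperties()
store.add_extraction("seller_1", "Rotational Speed", "7200 RPM")
store.add_extraction("seller_2", "rotational speed", "5400 RPM")
print(store.name_popularity("rotational speed"))  # -> 2
```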

71 In other words, popular properties are used by many sellers and some of them write their descriptions in a manner that makes these properties easy to extract. [sent-112, score-0.939]

72 For example, one pattern that some sellers use to describe product properties often starts with a property name followed by a colon and then the property value (we refer to this pattern as the colon pattern). [sent-113, score-1.93]

73 Using this pattern we can mine colon-separated short strings like "size : 20 inches" or "color : light blue", which enables us to discover the most relevant property names. [sent-114, score-0.656]
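A rough sketch of how the colon pattern could be expressed as a regular expression is shown below; the exact patterns are defined in the paper's Table 1, and the regex and the length caps (30-character names, 80-character values, described further down for Pattern 1) are assumptions for illustration.

```python
# Illustrative colon-pattern matcher: a short property name, a colon, then a
# short value; strings without a colon simply do not match.
import re

COLON_PATTERN = re.compile(r"^\s*(?P<name>[^:]{1,30}?)\s*:\s*(?P<value>.{1,80}?)\s*$")

for text in ["size : 20 inches", "color : light blue", "this is not a property"]:
    m = COLON_PATTERN.match(text)
    if m:
        print(m.group("name"), "->", m.group("value"))
```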

74 However, such a pattern extracts properties from a fraction of the inventory only and does not suffice. [sent-115, score-0.533]

75 We are using 4 patterns which are formally defined in Table 1. [sent-116, score-0.042]

76 Pattern 1 skips the HTML markers and scripts and applies only to the content sentences. [sent-118, score-0.045]

77 It ignores any candidate property whose name is longer than 30 characters or whose value is longer than 80 characters. [sent-119, score-0.446]

78 Patterns 2, 3 and 4 search for known property names. [sent-122, score-0.389]

79 Pattern 2 extracts the closest value to the right of the name. [sent-123, score-0.115]
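Below is a hedged sketch of the Pattern 2 idea: scan a description for an already-known property name and take the closest short span to its right as the candidate value. How the paper actually delimits that value is not specified here; splitting on punctuation is an illustrative choice.

```python
# Assumption-laden sketch of "search for a known name, take the closest value
# to its right"; not the paper's actual Pattern 2 definition.
import re

def extract_known_name_value(text, known_names, max_value_len=80):
    text_lower = text.lower()
    for name in known_names:
        idx = text_lower.find(name.lower())
        if idx == -1:
            continue
        rest = text[idx + len(name):]
        # take the closest chunk to the right, stopping at punctuation
        value = re.split(r"[,.;\n]", rest.lstrip(" :=-"))[0].strip()
        if 0 < len(value) <= max_value_len:
            yield name, value

desc = "Brand new drive. Spindle speed 7200 RPM, 3.5 inch form factor."
print(list(extract_known_name_value(desc, {"spindle speed"})))
# -> [('spindle speed', '7200 RPM')]
```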


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('property', 0.351), ('sellers', 0.325), ('ebay', 0.283), ('properties', 0.255), ('schema', 0.241), ('structuring', 0.217), ('item', 0.207), ('items', 0.18), ('inventory', 0.158), ('synonym', 0.153), ('marketplaces', 0.122), ('product', 0.114), ('hamilton', 0.106), ('descriptions', 0.104), ('labs', 0.097), ('colon', 0.097), ('name', 0.095), ('unstructured', 0.088), ('schemas', 0.086), ('pattern', 0.084), ('jose', 0.081), ('bulleted', 0.081), ('doan', 0.081), ('ecommerce', 0.081), ('lsd', 0.081), ('ruvini', 0.081), ('discovery', 0.081), ('avenue', 0.072), ('popularity', 0.072), ('clifton', 0.071), ('matching', 0.07), ('names', 0.066), ('millions', 0.066), ('discovering', 0.065), ('bernstein', 0.065), ('mappings', 0.062), ('san', 0.059), ('opinion', 0.058), ('unsupervised', 0.058), ('specifications', 0.057), ('learners', 0.057), ('semantically', 0.056), ('reviews', 0.055), ('hu', 0.053), ('descriptive', 0.052), ('qiu', 0.052), ('supervised', 0.051), ('discover', 0.05), ('entered', 0.05), ('accomplished', 0.048), ('html', 0.045), ('specification', 0.043), ('patterns', 0.042), ('textual', 0.042), ('base', 0.042), ('vast', 0.041), ('products', 0.041), ('etzioni', 0.041), ('value', 0.041), ('double', 0.04), ('bottom', 0.039), ('separated', 0.039), ('extraction', 0.039), ('addressed', 0.039), ('elements', 0.038), ('right', 0.038), ('known', 0.038), ('ca', 0.037), ('worth', 0.037), ('clustering', 0.036), ('extracts', 0.036), ('starts', 0.036), ('redundancy', 0.036), ('rohanimanesh', 0.035), ('wick', 0.035), ('seeded', 0.035), ('faceted', 0.035), ('animals', 0.035), ('complementarity', 0.035), ('pioneering', 0.035), ('meronymy', 0.035), ('enterprises', 0.035), ('commerce', 0.035), ('ghani', 0.035), ('manufactured', 0.035), ('lot', 0.035), ('mutual', 0.035), ('relevant', 0.035), ('propagation', 0.035), ('solving', 0.034), ('templates', 0.033), ('amongst', 0.032), ('electronics', 0.032), ('cameras', 0.032), ('seller', 0.032), ('karin', 0.032), ('objects', 0.032), ('crf', 0.032), ('step', 0.032), ('cial', 0.03)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 186 acl-2012-Structuring E-Commerce Inventory

Author: Karin Mauge ; Khash Rohanimanesh ; Jean-David Ruvini

Abstract: Large e-commerce enterprises feature millions of items entered daily by a large variety of sellers. While some sellers provide rich, structured descriptions of their items, a vast majority of them provide unstructured natural language descriptions. In this paper we present a two-step method for structuring items into descriptive properties. The first step consists in unsupervised property discovery and extraction. The second step involves supervised property synonym discovery using a maximum entropy based clustering algorithm. We evaluate our method on a year's worth of e-commerce data and show that it achieves excellent precision with good recall.

2 0.095656589 7 acl-2012-A Computational Approach to the Automation of Creative Naming

Author: Gozde Ozbal ; Carlo Strapparava

Abstract: In this paper, we propose a computational approach to generate neologisms consisting of homophonic puns and metaphors based on the category of the service to be named and the properties to be underlined. We describe all the linguistic resources and natural language processing techniques that we have exploited for this task. Then, we analyze the performance of the system that we have developed. The empirical results show that our approach is generally effective and it constitutes a solid starting point for the automation of the naming process.

3 0.088537723 73 acl-2012-Discriminative Learning for Joint Template Filling

Author: Einat Minkov ; Luke Zettlemoyer

Abstract: This paper presents a joint model for template filling, where the goal is to automatically specify the fields of target relations such as seminar announcements or corporate acquisition events. The approach models mention detection, unification and field extraction in a flexible, feature-rich model that allows for joint modeling of interdependencies at all levels and across fields. Such an approach can, for example, learn likely event durations and the fact that start times should come before end times. While the joint inference space is large, we demonstrate effective learning with a Perceptron-style approach that uses simple, greedy beam decoding. Empirical results in two benchmark domains demonstrate consistently strong performance on both mention detection and template filling tasks.

4 0.068441428 33 acl-2012-Automatic Event Extraction with Structured Preference Modeling

Author: Wei Lu ; Dan Roth

Abstract: This paper presents a novel sequence labeling model based on the latent-variable semiMarkov conditional random fields for jointly extracting argument roles of events from texts. The model takes in coarse mention and type information and predicts argument roles for a given event template. This paper addresses the event extraction problem in a primarily unsupervised setting, where no labeled training instances are available. Our key contribution is a novel learning framework called structured preference modeling (PM), that allows arbitrary preference to be assigned to certain structures during the learning procedure. We establish and discuss connections between this framework and other existing works. We show empirically that the structured preferences are crucial to the success of our task. Our model, trained without annotated data and with a small number of structured preferences, yields performance competitive to some baseline supervised approaches.

5 0.066161379 159 acl-2012-Pattern Learning for Relation Extraction with a Hierarchical Topic Model

Author: Enrique Alfonseca ; Katja Filippova ; Jean-Yves Delort ; Guillermo Garrido

Abstract: We describe the use of a hierarchical topic model for automatically identifying syntactic and lexical patterns that explicitly state ontological relations. We leverage distant supervision using relations from the knowledge base FreeBase, but do not require any manual heuristic nor manual seed list selections. Results show that the learned patterns can be used to extract new relations with good precision.

6 0.06061101 206 acl-2012-UWN: A Large Multilingual Lexical Knowledge Base

7 0.060265671 208 acl-2012-Unsupervised Relation Discovery with Sense Disambiguation

8 0.058041584 28 acl-2012-Aspect Extraction through Semi-Supervised Modeling

9 0.055870451 51 acl-2012-Collective Generation of Natural Image Descriptions

10 0.053561352 217 acl-2012-Word Sense Disambiguation Improves Information Retrieval

11 0.052026905 144 acl-2012-Modeling Review Comments

12 0.050633289 180 acl-2012-Social Event Radar: A Bilingual Context Mining and Sentiment Analysis Summarization System

13 0.050180882 155 acl-2012-NiuTrans: An Open Source Toolkit for Phrase-based and Syntax-based Machine Translation

14 0.048773061 213 acl-2012-Utilizing Dependency Language Models for Graph-based Dependency Parsing Models

15 0.048144955 61 acl-2012-Cross-Domain Co-Extraction of Sentiment and Topic Lexicons

16 0.047914896 184 acl-2012-String Re-writing Kernel

17 0.04727345 187 acl-2012-Subgroup Detection in Ideological Discussions

18 0.047133032 195 acl-2012-The Creation of a Corpus of English Metalanguage

19 0.044305161 153 acl-2012-Named Entity Disambiguation in Streaming Data

20 0.044163164 182 acl-2012-Spice it up? Mining Refinements to Online Instructions from User Generated Content


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.146), (1, 0.084), (2, -0.0), (3, 0.03), (4, 0.014), (5, 0.078), (6, -0.022), (7, 0.026), (8, -0.009), (9, 0.007), (10, 0.007), (11, -0.005), (12, -0.046), (13, 0.016), (14, -0.046), (15, -0.042), (16, 0.009), (17, 0.019), (18, -0.035), (19, -0.035), (20, -0.01), (21, -0.021), (22, -0.005), (23, 0.018), (24, -0.072), (25, 0.023), (26, 0.019), (27, -0.068), (28, 0.04), (29, -0.065), (30, 0.0), (31, -0.03), (32, 0.006), (33, 0.082), (34, -0.072), (35, -0.054), (36, -0.006), (37, 0.07), (38, -0.004), (39, -0.086), (40, 0.05), (41, 0.08), (42, 0.121), (43, -0.081), (44, -0.168), (45, 0.193), (46, 0.011), (47, 0.147), (48, -0.044), (49, 0.052)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95290005 186 acl-2012-Structuring E-Commerce Inventory

Author: Karin Mauge ; Khash Rohanimanesh ; Jean-David Ruvini

Abstract: Large e-commerce enterprises feature millions of items entered daily by a large variety of sellers. While some sellers provide rich, structured descriptions of their items, a vast majority of them provide unstructured natural language descriptions. In this paper we present a two-step method for structuring items into descriptive properties. The first step consists in unsupervised property discovery and extraction. The second step involves supervised property synonym discovery using a maximum entropy based clustering algorithm. We evaluate our method on a year's worth of e-commerce data and show that it achieves excellent precision with good recall.

2 0.67605358 7 acl-2012-A Computational Approach to the Automation of Creative Naming

Author: Gozde Ozbal ; Carlo Strapparava

Abstract: In this paper, we propose a computational approach to generate neologisms consisting of homophonic puns and metaphors based on the category of the service to be named and the properties to be underlined. We describe all the linguistic resources and natural language processing techniques that we have exploited for this task. Then, we analyze the performance of the system that we have developed. The empirical results show that our approach is generally effective and it constitutes a solid starting point for the automation of the naming process.

3 0.56907493 218 acl-2012-You Had Me at Hello: How Phrasing Affects Memorability

Author: Cristian Danescu-Niculescu-Mizil ; Justin Cheng ; Jon Kleinberg ; Lillian Lee

Abstract: Understanding the ways in which information achieves widespread public awareness is a research question of significant interest. We consider whether, and how, the way in which the information is phrased — the choice of words and sentence structure — can affect this process. To this end, we develop an analysis framework and build a corpus of movie quotes, annotated with memorability information, in which we are able to control for both the speaker and the setting of the quotes. We find that there are significant differences between memorable and non-memorable quotes in several key dimensions, even after controlling for situational and contextual factors. One is lexical distinctiveness: in aggregate, memorable quotes use less common word choices, but at the same time are built upon a scaffolding of common syntactic patterns. Another is that memorable quotes tend to be more general in ways that make them easy to apply in new contexts — that is, more portable. We also show how the concept of "memorable language" can be extended across domains.

4 0.53820729 112 acl-2012-Humor as Circuits in Semantic Networks

Author: Igor Labutov ; Hod Lipson

Abstract: This work presents a first step to a general implementation of the Semantic-Script Theory of Humor (SSTH). Of the scarce amount of research in computational humor, no research had focused on humor generation beyond simple puns and punning riddles. We propose an algorithm for mining simple humorous scripts from a semantic network (ConceptNet) by specifically searching for dual scripts that jointly maximize overlap and incongruity metrics in line with Raskin’s Semantic-Script Theory of Humor. Initial results show that a more relaxed constraint of this form is capable of generating humor of deeper semantic content than wordplay riddles. We evaluate the said metrics through a user-assessed quality of the generated two-liners.
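The abstract describes the search only at a high level; as a rough, hypothetical illustration of the underlying idea — scoring pairs of scripts by combining an overlap term with an incongruity term — the snippet below runs that computation on a tiny invented graph. The concept network, the domain labels, and the scoring function are all made up for illustration and are not the ConceptNet pipeline described in the paper.

```python
# Hypothetical toy, not the authors' system: score pairs of "scripts"
# (here, just the related-concept sets of seed words in a tiny hand-made
# network) by the product of an overlap term and an incongruity term,
# so that a pair must both share structure and clash in domain.
from itertools import combinations

related = {
    "dentist":  {"tooth", "patient", "drill", "pain", "appointment"},
    "doctor":   {"hospital", "patient", "appointment", "prescription"},
    "torturer": {"victim", "drill", "pain", "dungeon"},
    "vampire":  {"night", "blood", "bite", "coffin"},
}
domain = {"dentist": "medicine", "doctor": "medicine",
          "torturer": "horror", "vampire": "horror"}

def overlap(a, b):
    # Number of related concepts the two scripts share.
    return len(related[a] & related[b])

def incongruity(a, b):
    # Crude stand-in: scripts from different domains count as incongruous.
    return 1.0 if domain[a] != domain[b] else 0.0

def score(a, b):
    # Joint criterion: zero unless the pair both overlaps and clashes.
    return overlap(a, b) * incongruity(a, b)

pairs = sorted(combinations(related, 2), key=lambda p: score(*p), reverse=True)
for a, b in pairs[:3]:
    print(f"{score(a, b):.1f}  {a} / {b}")
# Top pair here is dentist / torturer: shared {drill, pain}, different domains.
```

A real system would need graded incongruity measures and a surface-realization step to turn the winning script pair into a two-liner; the sketch only illustrates the joint overlap/incongruity criterion.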

5 0.53310877 195 acl-2012-The Creation of a Corpus of English Metalanguage

Author: Shomir Wilson

Abstract: Metalanguage is an essential linguistic mechanism which allows us to communicate explicit information about language itself. However, it has been underexamined in research in language technologies, to the detriment of the performance of systems that could exploit it. This paper describes the creation of the first tagged and delineated corpus of English metalanguage, accompanied by an explicit definition and a rubric for identifying the phenomenon in text. This resource will provide a basis for further studies of metalanguage and enable its utilization in language technologies.

6 0.50261515 215 acl-2012-WizIE: A Best Practices Guided Development Environment for Information Extraction

7 0.44965327 197 acl-2012-Tokenization: Returning to a Long Solved Problem A Survey, Contrastive Experiment, Recommendations, and Toolkit

8 0.43800834 28 acl-2012-Aspect Extraction through Semi-Supervised Modeling

9 0.43106142 73 acl-2012-Discriminative Learning for Joint Template Filling

10 0.38410813 77 acl-2012-Ecological Evaluation of Persuasive Messages Using Google AdWords

11 0.37716413 208 acl-2012-Unsupervised Relation Discovery with Sense Disambiguation

12 0.3648093 153 acl-2012-Named Entity Disambiguation in Streaming Data

13 0.35534152 49 acl-2012-Coarse Lexical Semantic Annotation with Supersenses: An Arabic Case Study

14 0.31995907 144 acl-2012-Modeling Review Comments

15 0.31963059 166 acl-2012-Qualitative Modeling of Spatial Prepositions and Motion Expressions

16 0.31866452 14 acl-2012-A Joint Model for Discovery of Aspects in Utterances

17 0.31771499 51 acl-2012-Collective Generation of Natural Image Descriptions

18 0.31706822 130 acl-2012-Learning Syntactic Verb Frames using Graphical Models

19 0.31705457 76 acl-2012-Distributional Semantics in Technicolor

20 0.31674632 182 acl-2012-Spice it up? Mining Refinements to Online Instructions from User Generated Content


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(26, 0.015), (28, 0.019), (30, 0.018), (37, 0.013), (39, 0.598), (74, 0.025), (82, 0.012), (84, 0.01), (85, 0.014), (90, 0.094), (92, 0.041), (94, 0.01), (99, 0.06)]
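The vector above lists (topicId, topicWeight) pairs for this paper, and the simValue column below reports similarity to other papers. How the site combines the two is not documented here, but as a minimal sketch of one standard way to turn such sparse topic vectors into a ranked similar-papers list, the snippet below uses cosine similarity; the two comparison vectors are invented for illustration.

```python
import math

# This paper's sparse topic vector, copied from the listing above.
this_paper = {26: 0.015, 28: 0.019, 30: 0.018, 37: 0.013, 39: 0.598,
              74: 0.025, 82: 0.012, 84: 0.010, 85: 0.014, 90: 0.094,
              92: 0.041, 94: 0.010, 99: 0.060}

# Hypothetical topic vectors for two other papers (invented numbers).
others = {
    "180 acl-2012-Social Event Radar": {39: 0.55, 90: 0.12, 99: 0.05},
    "7 acl-2012-Creative Naming":      {39: 0.40, 74: 0.10, 92: 0.08},
}

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

for title, vec in sorted(others.items(),
                         key=lambda kv: cosine(this_paper, kv[1]),
                         reverse=True):
    print(f"{cosine(this_paper, vec):.8f}  {title}")
```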

similar papers list:

simIndex simValue paperId paperTitle

1 0.96187693 180 acl-2012-Social Event Radar: A Bilingual Context Mining and Sentiment Analysis Summarization System

Author: Wen-Tai Hsieh ; Chen-Ming Wu ; Tsun Ku ; Seng-cho T. Chou

Abstract: Social Event Radar is a new social networking-based service platform, that aim to alert as well as monitor any merchandise flaws, food-safety related issues, unexpected eruption of diseases or campaign issues towards to the Government, enterprises of any kind or election parties, through keyword expansion detection module, using bilingual sentiment opinion analysis tool kit to conclude the specific event social dashboard and deliver the outcome helping authorities to plan “risk control” strategy. With the rapid development of social network, people can now easily publish their opinions on the Internet. On the other hand, people can also obtain various opinions from others in a few seconds even though they do not know each other. A typical approach to obtain required information is to use a search engine with some relevant keywords. We thus take the social media and forum as our major data source and aim at collecting specific issues efficiently and effectively in this work.

same-paper 2 0.91598672 186 acl-2012-Structuring E-Commerce Inventory

Author: Karin Mauge ; Khash Rohanimanesh ; Jean-David Ruvini

Abstract: Large e-commerce enterprises feature millions of items entered daily by a large variety of sellers. While some sellers provide rich, structured descriptions of their items, a vast majority of them provide unstructured natural language descriptions. In the paper we present a 2 steps method for structuring items into descriptive properties. The first step consists in unsupervised property discovery and extraction. The second step involves supervised property synonym discovery using a maximum entropy based clustering algorithm. We evaluate our method on a year worth of ecommerce data and show that it achieves excellent precision with good recall.

3 0.89847946 7 acl-2012-A Computational Approach to the Automation of Creative Naming

Author: Gozde Ozbal ; Carlo Strapparava

Abstract: In this paper, we propose a computational approach to generate neologisms consisting of homophonic puns and metaphors based on the category of the service to be named and the properties to be underlined. We describe all the linguistic resources and natural language processing techniques that we have exploited for this task. Then, we analyze the performance of the system that we have developed. The empirical results show that our approach is generally effective and it constitutes a solid starting point for the automation of the naming process.
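The paper's actual pipeline relies on phonetic and semantic resources; as a much cruder, purely orthographic stand-in for the blending step, the sketch below fuses a category word with a property word at their longest shared letter overlap. The word pairs and the minimum-overlap threshold are invented for illustration.

```python
# Naive grapheme-level blending, far simpler than the phonetic matching the
# paper relies on: fuse two words at the longest suffix/prefix they share.
def blend(a, b, min_overlap=2):
    """Return a+b merged at their longest shared boundary, or None."""
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a[-k:] == b[:k]:
            return a + b[k:]
    return None

# Hypothetical (category, property) pairs for naming imaginary services.
for pair in [("java", "vacation"), ("breakfast", "fastfood"), ("smart", "artistic")]:
    print(pair, "->", blend(*pair))
# -> javacation, breakfastfood, smartistic
```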

4 0.87875587 133 acl-2012-Learning to "Read Between the Lines" using Bayesian Logic Programs

Author: Sindhu Raghavan ; Raymond Mooney ; Hyeonseo Ku

Abstract: Most information extraction (IE) systems identify facts that are explicitly stated in text. However, in natural language, some facts are implicit, and identifying them requires “reading between the lines”. Human readers naturally use common sense knowledge to infer such implicit information from the explicitly stated facts. We propose an approach that uses Bayesian Logic Programs (BLPs), a statistical relational model combining first-order logic and Bayesian networks, to infer additional implicit information from extracted facts. It involves learning uncertain commonsense knowledge (in the form of probabilistic first-order rules) from natural language text by mining a large corpus of automatically extracted facts. These rules are then used to derive additional facts from extracted information using BLP inference. Experimental evaluation on a benchmark data set for machine reading demonstrates the efficacy of our approach.
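The abstract describes inference with uncertain first-order rules over extracted facts; the snippet below is a deliberately simplified toy in that spirit — weighted Horn-style rules whose conclusions are combined with noisy-or — and is not the paper's BLP machinery. The facts, rules, and confidences are all invented.

```python
# Toy illustration only: applying uncertain Horn-style rules to extracted
# facts and combining evidence with noisy-or. This is NOT the paper's
# Bayesian Logic Program implementation; everything below is invented.
facts = {("employs", "acme", "alice"), ("based_in", "acme", "paris")}

# Each rule: (body atoms with variables, head atom, confidence).
rules = [
    ((("employs", "O", "P"), ("based_in", "O", "L")), ("lives_in", "P", "L"), 0.6),
    ((("employs", "O", "P"),), ("works_for", "P", "O"), 0.9),
]

def matches(body, facts):
    """Return all variable bindings under which every body atom is a known fact."""
    bindings = [{}]
    for pred, *vars_ in body:
        extended = []
        for binding in bindings:
            for fact in facts:
                if fact[0] != pred or len(fact) != len(vars_) + 1:
                    continue
                trial, ok = dict(binding), True
                for var, val in zip(vars_, fact[1:]):
                    if trial.get(var, val) != val:
                        ok = False
                        break
                    trial[var] = val
                if ok:
                    extended.append(trial)
        bindings = extended
    return bindings

inferred = {}  # implicit fact -> probability, noisy-or over the rules that fire
for body, head, conf in rules:
    hpred, *hvars = head
    for binding in matches(body, facts):
        fact = (hpred, *(binding[v] for v in hvars))
        prev = inferred.get(fact, 0.0)
        inferred[fact] = 1.0 - (1.0 - prev) * (1.0 - conf)

for fact, prob in sorted(inferred.items(), key=lambda kv: -kv[1]):
    print(f"{prob:.2f}  {fact}")
```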

5 0.87377924 79 acl-2012-Efficient Tree-Based Topic Modeling

Author: Yuening Hu ; Jordan Boyd-Graber

Abstract: Topic modeling with a tree-based prior has been used for a variety of applications because it can encode correlations between words that traditional topic modeling cannot. However, its expressive power comes at the cost of more complicated inference. We extend the SPARSELDA (Yao et al., 2009) inference scheme for latent Dirichlet allocation (LDA) to tree-based topic models. This sampling scheme computes the exact conditional distribution for Gibbs sampling much more quickly than enumerating all possible latent variable assignments. We further improve performance by iteratively refining the sampling distribution only when needed. Experiments show that the proposed techniques dramatically improve the computation time.
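For context, the sketch below shows only the standard collapsed-Gibbs conditional for plain LDA — the O(K) per-token computation that SparseLDA-style schemes reorganize into buckets, and that this paper extends to tree-structured priors. The counts and hyperparameters are fabricated, and a real sampler would first decrement the current token's counts before resampling.

```python
import numpy as np

# Illustrative only: naive full-conditional for one token under plain LDA.
rng = np.random.default_rng(0)
K, V = 4, 10                 # number of topics, vocabulary size
alpha, beta = 0.1, 0.01      # symmetric Dirichlet hyperparameters

n_dk = rng.integers(0, 5, size=K).astype(float)       # topic counts in this document
n_kw = rng.integers(0, 3, size=(K, V)).astype(float)  # topic-word counts over the corpus
n_k = n_kw.sum(axis=1)                                 # per-topic totals

def sample_topic(w):
    """Draw a topic for word id w: p(z=k | rest) is proportional to
    (n_dk + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)."""
    weights = (n_dk + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
    return rng.choice(K, p=weights / weights.sum())

print(sample_topic(3))
```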

6 0.56210071 21 acl-2012-A System for Real-time Twitter Sentiment Analysis of 2012 U.S. Presidential Election Cycle

7 0.51162642 206 acl-2012-UWN: A Large Multilingual Lexical Knowledge Base

8 0.49374342 161 acl-2012-Polarity Consistency Checking for Sentiment Dictionaries

9 0.48647666 151 acl-2012-Multilingual Subjectivity and Sentiment Analysis

10 0.48580596 28 acl-2012-Aspect Extraction through Semi-Supervised Modeling

11 0.48561576 198 acl-2012-Topic Models, Latent Space Models, Sparse Coding, and All That: A Systematic Understanding of Probabilistic Semantic Extraction in Large Corpus

12 0.4823415 102 acl-2012-Genre Independent Subgroup Detection in Online Discussion Threads: A Study of Implicit Attitude using Textual Latent Semantics

13 0.47062469 6 acl-2012-A Comprehensive Gold Standard for the Enron Organizational Hierarchy

14 0.46552834 88 acl-2012-Exploiting Social Information in Grounded Language Learning via Grammatical Reduction

15 0.46479103 138 acl-2012-LetsMT!: Cloud-Based Platform for Do-It-Yourself Machine Translation

16 0.4635413 61 acl-2012-Cross-Domain Co-Extraction of Sentiment and Topic Lexicons

17 0.45681265 187 acl-2012-Subgroup Detection in Ideological Discussions

18 0.45584282 70 acl-2012-Demonstration of IlluMe: Creating Ambient According to Instant Message Logs

19 0.45051226 100 acl-2012-Fine Granular Aspect Analysis using Latent Structural Models

20 0.44544944 159 acl-2012-Pattern Learning for Relation Extraction with a Hierarchical Topic Model