acl acl2012 acl2012-22 knowledge-graph by maker-knowledge-mining

22 acl-2012-A Topic Similarity Model for Hierarchical Phrase-based Translation


Source: pdf

Author: Xinyan Xiao ; Deyi Xiong ; Min Zhang ; Qun Liu ; Shouxun Lin

Abstract: Previous work using topic models for statistical machine translation (SMT) explores topic information at the word level. However, SMT has advanced from the word-based paradigm to the phrase/rule-based paradigm. We therefore propose a topic similarity model to exploit topic information at the synchronous rule level for hierarchical phrase-based translation. We associate each synchronous rule with a topic distribution, and select desirable rules according to the similarity of their topic distributions with given documents. We show that our model significantly improves translation performance over the baseline on NIST Chinese-to-English translation experiments. Our model also achieves better performance and faster speed than previous approaches that work at the word level.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Previous work using topic models for statistical machine translation (SMT) explores topic information at the word level. [sent-5, score-1.636]

2 We therefore propose a topic similarity model to exploit topic information at the synchronous rule level for hierarchical phrase-based translation. [sent-7, score-1.933]

3 We associate each synchronous rule with a topic distribution, and select desirable rules according to the similarity of their topic distributions with given documents. [sent-8, score-2.087]

4 To exploit topic information for statistical machine translation (SMT), researchers have proposed various topic-specific lexicon translation models (Zhao and Xing, 2006; Zhao and Xing, 2007; Tam et al. [sent-13, score-1.157]

5 Since a synchronous rule is rarely factorized into individual words, we believe that it is more reasonable to incorporate the topic model directly at the rule level rather than the word level. [sent-21, score-1.245]

6 Consequently, we propose a topic similarity model for hierarchical phrase-based translation (Chiang, 2007), where each synchronous rule is associated with a topic distribution. [sent-22, score-2.065]

7 In particular, given a document to be translated, we calculate the topic similarity between a rule and the document based on their topic distributions. [sent-23, score-1.721]

8 We augment the hierarchical phrase-based system by integrating the proposed topic similarity model as a new feature (Section 3. [sent-24, score-0.906]

9 As shown in Section 3.2, the similarity between a generic rule and a given source document computed by our topic similarity model is often very low. [sent-27, score-1.057]

10 Therefore, we further propose a topic sensitivity model which rewards generic rules so as to complement the topic similarity model. [sent-29, score-1.961]

11 We estimate the topic distribution for a rule based on both the source-side and the target-side topic models (Section 4. [sent-30, score-1.845]

12 In order to calculate similarities between the target-side topic distributions of rules and the source-side topic distributions of given documents during decoding, we project [sent-32, score-1.896]

13 Figure 1: Four synchronous rules with topic distributions: (a) 作战能力 ⇒ operational capability, (b) 给予 X1 ⇒ grants X1, (c) 给予 X1 ⇒ give X1, (d) X1 举行会谈 X2 ⇒ held talks X1 X2. [sent-46, score-0.929]

14 Each sub-graph shows a rule with its topic distribution, where the X-axis is the topic index and the Y-axis is the topic probability. [sent-47, score-2.37]

15 Notably, rules (b) and (c) share the same source Chinese string, but they have different topic distributions due to the different English translations. [sent-48, score-1.277]

16 the target-side topic distributions of rules into the space of the source-side topic model by one-to-many projection (Section 4. [sent-49, score-1.773]

17 We further show that both the source-side and target-side topic distributions improve translation quality and their improvements are complementary to each other. [sent-55, score-0.98]

18 2 Background: Topic Model A topic model is used for discovering the topics that occur in a collection of documents. [sent-56, score-0.827]

19 LDA is the most common topic model currently in use; we therefore exploit it for mining topics in this paper. [sent-59, score-0.85]

20 Then, for each word in the document, it samples a topic index from the document-topic distribution and samples the word conditioned on the topic index according to the topic-word distribution. [sent-63, score-1.633]
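
For concreteness, the generative story just described can be sketched in a few lines of Python; the topic count, vocabulary size, and Dirichlet priors below are hypothetical toy values rather than settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

K, V = 4, 1000            # toy numbers of topics and vocabulary words (hypothetical)
alpha, beta = 0.5, 0.01   # symmetric Dirichlet priors (hypothetical)

# Topic-word distributions: one distribution over the vocabulary per topic.
phi = rng.dirichlet([beta] * V, size=K)

def generate_document(num_words):
    # Draw the document-topic distribution, then for each word sample a
    # topic index from it and a word conditioned on that topic.
    theta = rng.dirichlet([alpha] * K)
    words = []
    for _ in range(num_words):
        z = rng.choice(K, p=theta)    # topic index from the document-topic distribution
        w = rng.choice(V, p=phi[z])   # word from the topic-word distribution
        words.append((z, w))
    return theta, words

theta, words = generate_document(20)
```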

21 The first one relates to the document-topic distribution, which records the topic distribution of each document. [sent-65, score-0.943]

22 The second one is used for the topic-word distribution, which represents each topic as a distribution over words. [sent-66, score-0.849]

23 In the following sections, we will use these parameters and the topic assignments of words to estimate the parameters in our method. [sent-68, score-0.776]

24 The probability over a topic is high if the rule is highly related to the topic, otherwise the probability will be low. [sent-73, score-1.001]

25 Therefore, we use the topic distribution to describe the relatedness of rules to topics. [sent-74, score-0.982]

26 Figure 1 shows four synchronous rules (Chiang, 2007) with topic distributions, some of which contain nonterminals. [sent-75, score-0.929]

27 We can see that, although the source parts of rules (b) and (c) are identical, their topic distributions are quite different. [sent-76, score-1.069]

28 Rule (b) has the highest probability on the topic about “China-U. [sent-77, score-0.753]

29 We achieve this by calculating the similarity between the topic distributions of a rule and a document to be translated. [sent-84, score-1.243]

30 The k-th component P(z = k|r) is the probability of topic k given the rule r. [sent-87, score-0.753]

31 Analogously, we represent the topic information of a document d to be translated by a document-topic distribution P(z|d), which is also a K-dimensional vector. [sent-89, score-1.03]

32 The k-th dimension P(z = k|d) is the probability of topic k given document d. [sent-90, score-0.972]

33 Consequently, based on these two distributions, we select a rule for a document to be translated according to their topic similarity (Section 3. [sent-92, score-1.154]

34 In order to encourage the application of generic rules which are often penalized by our similarity model, we also propose a topic sensitivity model (Section 3. [sent-94, score-1.273]

35 1 Topic Similarity By comparing their topic distributions, we are able to decide whether a rule is suitable for a given source document. [sent-97, score-1.082]

36 The topic similarity measures the distance between two topic distributions. [sent-98, score-1.551]

37 We calculate the topic similarity with the Hellinger function: Similarity(P(z|d), P(z|r)) = \sum_{k=1}^{K} (\sqrt{P(z = k|d)} - \sqrt{P(z = k|r)})^2 (1). The Hellinger function is used to calculate distribution distance and is popular in topic modeling (Blei and Lafferty, 2007). [sent-99, score-1.786]
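
A minimal sketch of Eq. (1) with NumPy; under this definition smaller values mean more similar distributions, so a flat, generic rule ends up farther from a sharply focused document than a topical rule does (the toy distributions are hypothetical):

```python
import numpy as np

def topic_similarity(p_doc, p_rule):
    # Eq. (1): sum over topics of the squared difference of the
    # square-rooted probabilities (a Hellinger-style distance).
    p_doc, p_rule = np.asarray(p_doc), np.asarray(p_rule)
    return float(np.sum((np.sqrt(p_doc) - np.sqrt(p_rule)) ** 2))

# A document focused on one topic vs. a topical and a generic rule.
doc     = np.array([0.85, 0.05, 0.05, 0.05])
topical = np.array([0.80, 0.10, 0.05, 0.05])
generic = np.array([0.25, 0.25, 0.25, 0.25])
print(topic_similarity(doc, topical))  # small distance: rule fits the document
print(topic_similarity(doc, generic))  # larger distance: generic rule is penalized
```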

38 By topic similarity, we aim to encourage or penalize the application of a rule for a given document according to their topic distributions, which then helps the SMT system make better translation decisions. [sent-100, score-1.932]

39 We can easily find that the topic distribution of rule (c) distributes evenly. [sent-108, score-1.057]

40 A document typically focuses on a few topics, and has a sharp topic distribution. [sent-114, score-0.824]

41 To address this issue of the topic similarity model, we further introduce a topic sensitivity model to describe the topic sensitivity of a rule using entropy as a metric: Sensitivity(P(z|r)) = -\sum_{k=1}^{K} P(z = k|r) \times \log P(z = k|r) (2). According to Eq. [sent-122, score-2.973]
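
Eq. (2) is the Shannon entropy of the rule's topic distribution; a short sketch (the eps guard against log 0 is an implementation detail, not from the paper):

```python
import numpy as np

def topic_sensitivity(p_rule, eps=1e-12):
    # Eq. (2): entropy of the rule-topic distribution. Generic
    # (topic-insensitive) rules have flat distributions and high entropy;
    # topic-sensitive rules have peaked distributions and low entropy.
    p = np.asarray(p_rule)
    return float(-np.sum(p * np.log(p + eps)))

print(topic_sensitivity([0.25, 0.25, 0.25, 0.25]))  # high entropy: generic rule
print(topic_sensitivity([0.85, 0.05, 0.05, 0.05]))  # low entropy: topic-sensitive rule
```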

42 By incorporating the topic sensitivity model with the topic similarity model, we enable our SMT system to balance the selection of these two types of rules. [sent-124, score-1.825]

43 In this paper, we try to exploit the topic information of both the source and target languages. [sent-128, score-0.796]

44 To achieve this goal, we use both source-side and target-side monolingual topic models, and learn the correspondence between the two topic models from a word-aligned bilingual corpus. [sent-129, score-1.686]

45 These two rule-topic distributions are estimated by the corresponding topic models in the same way (Section 4. [sent-131, score-0.825]

46 In order to compute the similarity between the target-side topic distribution of a rule and the source-side topic distribution of a given document, we need to project the target-side topic distribution of a synchronous rule into the space of the source-side topic model (Section 4. [sent-134, score-4.026]

47 A more principled way is to learn a bilingual topic model from a bilingual corpus (Mimno et al. [sent-136, score-0.894]

48 It requires a marginalization to infer the monolingual topic distribution using the bilingual topic model. [sent-139, score-1.765]

49 Previous work on bilingual topic models avoids this problem by making monolingual assumptions. [sent-141, score-0.892]

50 Zhao and Xing (2007) assume that the topic model is generated in a monolingual manner, while Tam et al. [sent-142, score-0.818]

51 (2007) construct their bilingual topic model by enforcing a one-to-one correspondence between two monolingual topic models. [sent-143, score-1.719]

52 We also estimate our rule-topic distribution by two monolingual topic models, but use a different way to project target-side topics onto source-side topics. [sent-144, score-1.086]

53 When a rule r is extracted from a document d with topic distribution P(z|d), we collect an instance (r, P(z|d), c), where c is the fractional count of an instance as described in Chiang (2007). [sent-152, score-1.142]

54 Using these instances, we calculate the topic probability P(z = k|r) as follows: P(z = k|r) = \frac{\sum_{I \in I_r} c \times P(z = k|d)}{\sum_{k'=1}^{K} \sum_{I \in I_r} c \times P(z = k'|d)} (3), where I_r denotes the set of instances of rule r. By using both the source-side and target-side document-topic distributions, we obtain two rule-topic distributions for each rule in total. [sent-154, score-1.161]
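
A sketch of Eq. (3): with the rule r fixed, each instance reduces here to a (P(z|d), c) pair, and the fractional-count-weighted document-topic distributions are summed and renormalized over topics (the toy instances are hypothetical):

```python
import numpy as np

def estimate_rule_topic(instances, num_topics):
    # Eq. (3): weighted average of the document-topic distributions of
    # the documents the rule was extracted from, where each instance
    # carries the fractional count c of that extraction.
    totals = np.zeros(num_topics)
    for p_z_given_d, c in instances:
        totals += c * np.asarray(p_z_given_d)
    return totals / totals.sum()

# Hypothetical instances of one rule extracted from two documents.
instances = [(np.array([0.7, 0.2, 0.1]), 0.5),
             (np.array([0.6, 0.3, 0.1]), 1.0)]
print(estimate_rule_topic(instances, num_topics=3))
```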

55 In order to calculate the similarity between the target-side rule-topic distribution of a rule and the source-side document-topic distribution of a source document, we need to project target-side topics into the source-side topic space. [sent-158, score-1.586]

56 The projection contains two steps: In the first step, we learn the topic-to-topic correspondence probability p(zf|ze) from target-side topic ze to source-side topic zf. [sent-159, score-1.842]

57 In the second step, we project the target-side topic distribution of a rule into the source-side topic space using the correspondence probability. [sent-160, score-1.585]

58 In the first step, we estimate the correspondence probability by the co-occurrence of the source-side and target-side topic assignments in the word-aligned corpus. [sent-161, score-0.932]

59 Thus, we denote each sentence pair by (zf, ze, a), where zf and ze are the topic assignments of the source-side and target-side sentences respectively, and a is a set of links {(i, j)}. [sent-163, score-1.049]

60 Thus, the co-occurrence of a source-side topic with index kf and a target-side [sent-165, score-0.882]

61 topic ke is calculated by: \sum_{(z^f, z^e, a)} \sum_{(i,j) \in a} \delta(z^f_i, k_f) \cdot \delta(z^e_j, k_e) (4), where δ(x, y) is the Kronecker function, which is 1 if x = y and 0 otherwise. [sent-172, score-0.746]

62 Overall, after the first step, we obtain a correspondence matrix M_{Ke×Kf} from target-side topics to source-side topics, where the item M_{i,j} represents the probability P(zf = i|ze = j). [sent-174, score-0.867]

63 Obviously, our projection method allows one target-side topic to align to multiple source-side topics. [sent-176, score-0.805]
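
The two projection steps can be pictured as follows; this is a sketch under the definitions above (the row/column conventions and the toy corpus are assumptions), not the authors' code:

```python
import numpy as np

def correspondence_matrix(sentence_pairs, K_e, K_f):
    # Step 1 (Eq. 4): count co-occurrences of aligned topic assignments,
    # then normalize each row so row ke approximates P(zf = kf | ze = ke).
    M = np.zeros((K_e, K_f))
    for zf, ze, links in sentence_pairs:   # topic assignments plus alignment a
        for i, j in links:                 # link (i, j): source position i, target position j
            M[ze[j], zf[i]] += 1.0
    return M / np.maximum(M.sum(axis=1, keepdims=True), 1e-12)

def project_one_to_many(p_rule_target, M):
    # Step 2: one-to-many projection of a target-side rule-topic
    # distribution into the source-side topic space.
    return np.asarray(p_rule_target) @ M

# Toy word-aligned corpus with K_e = K_f = 2 (hypothetical assignments).
pairs = [(np.array([0, 0, 1]),   # zf: source-side topic assignments
          np.array([0, 1]),      # ze: target-side topic assignments
          [(0, 0), (2, 1)])]     # alignment links (i, j)
M = correspondence_matrix(pairs, K_e=2, K_f=2)
print(project_one_to_many([0.9, 0.1], M))
```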

64 From the training result of the correspondence matrix M_{Ke×Kf}, we find that the topic correspondence between the source and target languages is not necessarily one-to-one. [sent-179, score-0.977]

65 Typically, the probability P(z = kf|z = ke) of a target-side topic is mainly distributed over two or three source-side topics. [sent-180, score-0.921]

66 5 Decoding We incorporate our topic similarity model as a new feature into a traditional hiero system (Chiang, 2007) under the discriminative framework (Och and Ney, 2002). [sent-182, score-0.931]

67 During decoding, we first infer the topic distribution P(zf|d) for a given document in the source language. [sent-185, score-0.995]
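
A sketch of how the resulting scores could enter the decoder as features; the exact feature set shown here (one similarity and one sensitivity feature per side) is an assumption, and each feature would receive a tuned weight in the log-linear model:

```python
import numpy as np

def hellinger(p, q):
    # Eq. (1): distance between two topic distributions.
    return float(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def entropy(p, eps=1e-12):
    # Eq. (2): sensitivity of a rule-topic distribution.
    return float(-np.sum(p * np.log(p + eps)))

def rule_topic_features(p_doc, p_rule_src, p_rule_tgt_projected):
    # Feature values for one rule application, given the source-side
    # document-topic distribution P(zf|d) inferred before decoding.
    p_doc = np.asarray(p_doc)
    return {"sim_src": hellinger(p_doc, np.asarray(p_rule_src)),
            "sim_tgt": hellinger(p_doc, np.asarray(p_rule_tgt_projected)),
            "sens_src": entropy(np.asarray(p_rule_src)),
            "sens_tgt": entropy(np.asarray(p_rule_tgt_projected))}
```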

68 In the topic-specific lexicon translation model, given a source document, it first calculates the topic-specific translation probability by normalizing the entire lexicon translation table, and then adapts the lexical weights of rules correspondingly. [sent-189, score-0.748]

69 Therefore, compared with the previous topic-specific lexicon translation method, our method provides a more efficient way of incorporating the topic model into SMT. [sent-191, score-1.013]

70 Is our topic similarity model able to improve translation quality in terms of BLEU? [sent-193, score-1.026]

71 Table 2: Results of our topic similarity model in terms of BLEU and speed (words per second), compared with the traditional hierarchical system (“Baseline”) and the topic-specific lexicon translation method (“TopicLex”). [sent-212, score-1.236]

72 Is it helpful to introduce the topic sensitivity model to distinguish topic-insensitive and topic-sensitive rules? [sent-218, score-0.965]

73 For the topic model, we used the open source LDA tool GibbsLDA++ for estimation and inference. [sent-239, score-0.802]

74 During decoding, we first infer the topic distribution of given documents before translation according to the topic model trained on the Chinese part of the FBIS corpus. [sent-243, score-1.81]
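
GibbsLDA++'s inference step for unseen documents can be pictured as fold-in Gibbs sampling, resampling topic assignments for the new document while keeping the trained topic-word distributions fixed; a simplified sketch under that assumption (not the tool's actual implementation):

```python
import numpy as np

def infer_document_topics(doc_words, phi, alpha=0.5, iters=50, seed=0):
    # Fold-in Gibbs sampling: estimate P(z|d) for an unseen document,
    # holding the trained topic-word distributions phi (K x V) fixed.
    rng = np.random.default_rng(seed)
    K = phi.shape[0]
    z = rng.integers(K, size=len(doc_words))          # random initial topics
    counts = np.bincount(z, minlength=K).astype(float)
    for _ in range(iters):
        for n, w in enumerate(doc_words):
            counts[z[n]] -= 1
            p = (counts + alpha) * phi[:, w]          # proportional to P(z = k | rest)
            z[n] = rng.choice(K, p=p / p.sum())
            counts[z[n]] += 1
    return (counts + alpha) / (counts.sum() + K * alpha)
```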

75 Table 3: Percentage of topic-sensitive rules for various types of rules according to source-side (“Src”) and target-side (“Tgt”) topic distributions. [sent-263, score-1.14]

76 Given a new document, we need to adapt the lexical translation weights of the rules based on the topic model. [sent-269, score-0.846]

77 This verifies that the topic similarity model can significantly improve translation quality. [sent-276, score-1.058]

78 If the entropy of a rule is smaller than a certain threshold, then the rule is topic-sensitive. [sent-280, score-1.159]

79 Meanwhile, we try to separate the effects of the source-side topic distribution from those of the target-side topic distribution. [sent-303, score-1.608]

80 As discussed in Section 3.2, because the similarity features always penalize topic-insensitive rules, we introduce topic sensitivity features as a complement. [sent-314, score-1.057]

81 A further gain of 0.23 points is obtained when incorporating topic sensitivity features with topic similarity features. [sent-316, score-1.77]

82 As described in Section 4.2, we find that source-side topics and target-side topics may not exactly match; hence we use a one-to-many topic correspondence. [sent-321, score-1.507]

83 Yet another method is to enforce one-to-one topic projection (Tam et al. [sent-322, score-0.805]

84 We achieve one-to-one projection by aligning a target topic to the source topic with the largest correspondence probability as calculated in Section 4. [sent-324, score-1.685]
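
For comparison, the enforced one-to-one projection can be read as a hard (argmax) version of the correspondence matrix M from Section 4; an illustrative sketch, not the authors' code:

```python
import numpy as np

def project_one_to_one(p_rule_target, M):
    # Each target-side topic sends all of its mass to the single
    # source-side topic with the largest correspondence probability.
    hard = np.zeros_like(M)
    hard[np.arange(M.shape[0]), M.argmax(axis=1)] = 1.0
    return np.asarray(p_rule_target) @ hard
```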

85 Table 5: Effect of our topic model on three types of rules. [sent-335, score-0.768]

86 We find that the enforced one-to-one topic method obtains a slight improvement over the baseline system, while one-to-many projection achieves a larger improvement. [sent-338, score-0.831]

87 Our topic similarity method on monotone rules achieves the largest improvement, which is 0. [sent-347, score-1.165]

88 This shows that topic information also helps the selection of rules with nonterminals. [sent-349, score-0.846]

89 7 Related Work In addition to the topic-specific lexicon translation method mentioned in the previous sections, researchers have also explored topic models for machine translation in other ways. [sent-350, score-1.168]

90 (2010) introduce a topic model for filtering topic-mismatched phrase pairs. [sent-357, score-0.77]

91 They first assign a specific topic to the document to be translated. [sent-358, score-0.798]

92 A phrase pair will be discarded if its topic mismatches the document topic. [sent-360, score-0.822]

93 Researchers have also introduced topic models for cross-lingual language model adaptation (Tam et al. [sent-361, score-0.834]

94 They use a bilingual topic model to project latent topic distributions across languages. [sent-363, score-1.74]

95 Based on the bilingual topic model, they apply the source-side topic weights to the target-side topic model, and adapt the n-gram language model of the target side. [sent-364, score-2.246]

96 Our topic similarity model uses the document topic information. [sent-365, score-1.669]

97 8 Conclusion and Future Work We have presented a topic similarity model which incorporates the rule-topic distributions on both the source and target sides into the traditional hierarchical phrase-based system. [sent-370, score-1.088]

98 This verifies the advantage of exploiting the topic model at the rule level over the word level. [sent-372, score-0.986]

99 Further improvement is achieved by distinguishing topic-sensitive and topic-insensitive rules using the topic sensitivity model. [sent-373, score-1.091]

100 In the future, we are interested in finding ways to exploit topic models on bilingual data without document boundaries, so as to enlarge the size of the training data. [sent-374, score-0.928]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('topic', 0.713), ('sensitivity', 0.219), ('rule', 0.208), ('zf', 0.183), ('translation', 0.155), ('kf', 0.146), ('distribution', 0.136), ('rules', 0.133), ('ze', 0.129), ('similarity', 0.125), ('correspondence', 0.114), ('distributions', 0.112), ('tam', 0.102), ('lexicon', 0.089), ('lda', 0.087), ('document', 0.085), ('synchronous', 0.083), ('topics', 0.081), ('bilingual', 0.074), ('documenttopic', 0.073), ('mke', 0.073), ('monolingual', 0.072), ('monotone', 0.07), ('projection', 0.069), ('targetside', 0.064), ('xing', 0.063), ('zhao', 0.063), ('adaptation', 0.055), ('ruletopic', 0.055), ('estimation', 0.053), ('bleu', 0.053), ('df', 0.051), ('fbis', 0.051), ('decoding', 0.046), ('project', 0.045), ('gong', 0.044), ('smt', 0.042), ('probability', 0.04), ('reordering', 0.04), ('estimate', 0.039), ('chiang', 0.039), ('blei', 0.038), ('projected', 0.037), ('simsrc', 0.037), ('topicinsensitive', 0.037), ('source', 0.036), ('hierarchical', 0.035), ('documents', 0.035), ('traditional', 0.034), ('calculate', 0.033), ('model', 0.033), ('och', 0.033), ('penalize', 0.033), ('ke', 0.033), ('marginalization', 0.032), ('hellinger', 0.032), ('ruiz', 0.032), ('verifies', 0.032), ('zhengxian', 0.032), ('entropy', 0.03), ('speed', 0.029), ('sourceside', 0.029), ('gibbslda', 0.029), ('tgt', 0.029), ('topicspecific', 0.029), ('xinyan', 0.029), ('nist', 0.028), ('improvement', 0.026), ('latent', 0.026), ('mimno', 0.026), ('wordaligned', 0.026), ('sharp', 0.026), ('carpuat', 0.026), ('hiero', 0.026), ('encourage', 0.025), ('conditioned', 0.025), ('infer', 0.025), ('generic', 0.025), ('franz', 0.025), ('translates', 0.024), ('phrase', 0.024), ('try', 0.024), ('assignments', 0.024), ('translated', 0.023), ('exploit', 0.023), ('index', 0.023), ('method', 0.023), ('inferred', 0.023), ('dd', 0.022), ('faster', 0.022), ('types', 0.022), ('effects', 0.022), ('mainly', 0.022), ('nonterminals', 0.022), ('notably', 0.022), ('statistical', 0.022), ('lexicalized', 0.021), ('kk', 0.021), ('relates', 0.021), ('effect', 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999934 22 acl-2012-A Topic Similarity Model for Hierarchical Phrase-based Translation

Author: Xinyan Xiao ; Deyi Xiong ; Min Zhang ; Qun Liu ; Shouxun Lin

Abstract: Previous work using topic models for statistical machine translation (SMT) explores topic information at the word level. However, SMT has advanced from the word-based paradigm to the phrase/rule-based paradigm. We therefore propose a topic similarity model to exploit topic information at the synchronous rule level for hierarchical phrase-based translation. We associate each synchronous rule with a topic distribution, and select desirable rules according to the similarity of their topic distributions with given documents. We show that our model significantly improves translation performance over the baseline on NIST Chinese-to-English translation experiments. Our model also achieves better performance and faster speed than previous approaches that work at the word level.

2 0.41266313 199 acl-2012-Topic Models for Dynamic Translation Model Adaptation

Author: Vladimir Eidelman ; Jordan Boyd-Graber ; Philip Resnik

Abstract: We propose an approach that biases machine translation systems toward relevant translations based on topic-specific contexts, where topics are induced in an unsupervised way using topic models; this can be thought of as inducing subcorpora for adaptation without any human annotation. We use these topic distributions to compute topic-dependent lex- ical weighting probabilities and directly incorporate them into our translation model as features. Conditioning lexical probabilities on the topic biases translations toward topicrelevant output, resulting in significant improvements of up to 1 BLEU and 3 TER on Chinese to English translation over a strong baseline.

3 0.37120053 203 acl-2012-Translation Model Adaptation for Statistical Machine Translation with Monolingual Topic Information

Author: Jinsong Su ; Hua Wu ; Haifeng Wang ; Yidong Chen ; Xiaodong Shi ; Huailin Dong ; Qun Liu

Abstract: To adapt a translation model trained from the data in one domain to another, previous works paid more attention to the studies of parallel corpus while ignoring the in-domain monolingual corpora which can be obtained more easily. In this paper, we propose a novel approach for translation model adaptation by utilizing in-domain monolingual topic information instead of the in-domain bilingual corpora, which incorporates the topic information into translation probability estimation. Our method establishes the relationship between the out-of-domain bilingual corpus and the in-domain monolingual corpora via topic mapping and phrase-topic distribution probability estimation from in-domain monolingual corpora. Experimental result on the NIST Chinese-English translation task shows that our approach significantly outperforms the baseline system.

4 0.32213357 171 acl-2012-SITS: A Hierarchical Nonparametric Model using Speaker Identity for Topic Segmentation in Multiparty Conversations

Author: Viet-An Nguyen ; Jordan Boyd-Graber ; Philip Resnik

Abstract: One of the key tasks for analyzing conversational data is segmenting it into coherent topic segments. However, most models of topic segmentation ignore the social aspect of conversations, focusing only on the words used. We introduce a hierarchical Bayesian nonparametric model, Speaker Identity for Topic Segmentation (SITS), that discovers (1) the topics used in a conversation, (2) how these topics are shared across conversations, (3) when these topics shift, and (4) a person-specific tendency to introduce new topics. We evaluate against current unsupervised segmentation models to show that including person-specific information improves segmentation performance on meeting corpora and on political debates. Moreover, we provide evidence that SITS captures an individual’s tendency to introduce new topics in political contexts, via analysis of the 2008 US presidential debates and the television program Crossfire.

5 0.26089311 79 acl-2012-Efficient Tree-Based Topic Modeling

Author: Yuening Hu ; Jordan Boyd-Graber

Abstract: Topic modeling with a tree-based prior has been used for a variety of applications because it can encode correlations between words that traditional topic modeling cannot. However, its expressive power comes at the cost of more complicated inference. We extend the SPARSELDA (Yao et al., 2009) inference scheme for latent Dirichlet allocation (LDA) to tree-based topic models. This sampling scheme computes the exact conditional distribution for Gibbs sampling much more quickly than enumerating all possible latent variable assignments. We further improve performance by iteratively refining the sampling distribution only when needed. Experiments show that the proposed techniques dramatically improve the computation time.

6 0.25432366 98 acl-2012-Finding Bursty Topics from Microblogs

7 0.2460743 61 acl-2012-Cross-Domain Co-Extraction of Sentiment and Topic Lexicons

8 0.24180418 198 acl-2012-Topic Models, Latent Space Models, Sparse Coding, and All That: A Systematic Understanding of Probabilistic Semantic Extraction in Large Corpus

9 0.224534 86 acl-2012-Exploiting Latent Information to Predict Diffusions of Novel Topics on Social Networks

10 0.20743474 31 acl-2012-Authorship Attribution with Author-aware Topic Models

11 0.16544873 88 acl-2012-Exploiting Social Information in Grounded Language Learning via Grammatical Reduction

12 0.153412 141 acl-2012-Maximum Expected BLEU Training of Phrase and Lexicon Translation Models

13 0.14154781 155 acl-2012-NiuTrans: An Open Source Toolkit for Phrase-based and Syntax-based Machine Translation

14 0.12945582 110 acl-2012-Historical Analysis of Legal Opinions with a Sparse Mixed-Effects Latent Variable Model

15 0.12653835 128 acl-2012-Learning Better Rule Extraction with Translation Span Alignment

16 0.12523663 143 acl-2012-Mixing Multiple Translation Models in Statistical Machine Translation

17 0.12047691 25 acl-2012-An Exploration of Forest-to-String Translation: Does Translation Help or Hurt Parsing?

18 0.11808179 204 acl-2012-Translation Model Size Reduction for Hierarchical Phrase-based Statistical Machine Translation

19 0.11150713 159 acl-2012-Pattern Learning for Relation Extraction with a Hierarchical Topic Model

20 0.11017796 105 acl-2012-Head-Driven Hierarchical Phrase-based Translation


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.318), (1, -0.045), (2, 0.418), (3, 0.086), (4, -0.467), (5, -0.051), (6, -0.095), (7, -0.083), (8, 0.055), (9, 0.007), (10, 0.053), (11, 0.051), (12, 0.095), (13, -0.041), (14, 0.065), (15, 0.051), (16, -0.058), (17, -0.027), (18, 0.014), (19, -0.029), (20, -0.074), (21, 0.044), (22, -0.016), (23, 0.02), (24, 0.027), (25, 0.03), (26, 0.017), (27, -0.042), (28, -0.069), (29, 0.022), (30, -0.014), (31, 0.109), (32, -0.054), (33, 0.003), (34, -0.059), (35, -0.019), (36, 0.015), (37, -0.004), (38, 0.077), (39, -0.052), (40, 0.025), (41, -0.048), (42, -0.049), (43, 0.021), (44, -0.039), (45, -0.007), (46, 0.05), (47, 0.019), (48, 0.003), (49, 0.023)]
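The simValue scores in the lists below are presumably derived from vectors like the one above. A minimal sketch of one plausible computation, cosine similarity between LSI topic-weight vectors, follows; the pairing and the second vector are illustrative assumptions, not this page's actual pipeline.

import math

def cosine_similarity(u, v):
    # Cosine similarity between two dense LSI topic-weight vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0
    return dot / (norm_u * norm_v)

# First five topic weights of this paper (from the vector above) against a
# made-up comparison vector for another paper.
this_paper = [-0.318, -0.045, 0.418, 0.086, -0.467]
other_paper = [-0.307, -0.051, 0.401, 0.090, -0.455]
print(round(cosine_similarity(this_paper, other_paper), 8))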

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99327439 22 acl-2012-A Topic Similarity Model for Hierarchical Phrase-based Translation

Author: Xinyan Xiao ; Deyi Xiong ; Min Zhang ; Qun Liu ; Shouxun Lin

Abstract: Previous work using topic models for statistical machine translation (SMT) explores topic information at the word level. However, SMT has advanced from the word-based paradigm to the phrase/rule-based paradigm. We therefore propose a topic similarity model to exploit topic information at the synchronous rule level for hierarchical phrase-based translation. We associate each synchronous rule with a topic distribution, and select desirable rules according to the similarity of their topic distributions with given documents. We show that our model significantly improves the translation performance over the baseline on NIST Chinese-to-English translation experiments. Our model also achieves better performance and a faster speed than previous approaches that work at the word level.

2 0.91465449 199 acl-2012-Topic Models for Dynamic Translation Model Adaptation

Author: Vladimir Eidelman ; Jordan Boyd-Graber ; Philip Resnik

Abstract: We propose an approach that biases machine translation systems toward relevant translations based on topic-specific contexts, where topics are induced in an unsupervised way using topic models; this can be thought of as inducing subcorpora for adaptation without any human annotation. We use these topic distributions to compute topic-dependent lexical weighting probabilities and directly incorporate them into our translation model as features. Conditioning lexical probabilities on the topic biases translations toward topic-relevant output, resulting in significant improvements of up to 1 BLEU and 3 TER on Chinese to English translation over a strong baseline.
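A minimal sketch of the idea this abstract describes: marginalizing topic-conditioned lexical translation probabilities over a document's inferred topic distribution. The function name, the toy probability tables, and the two-topic setup are hypothetical illustrations, not the authors' actual feature computation.

def topic_adapted_lex_prob(e, f, topic_posterior, lex_prob_by_topic):
    # p(e | f, doc) = sum_k p(k | doc) * p(e | f, k)
    # topic_posterior: p(k | doc) for each topic k (sums to 1).
    # lex_prob_by_topic: one dict per topic mapping (f, e) -> p(e | f, k).
    return sum(p_k * lex_prob_by_topic[k].get((f, e), 0.0)
               for k, p_k in enumerate(topic_posterior))

# Hypothetical two-topic example: a source word translating differently by topic.
tables = [{("bank", "riverbank"): 0.7, ("bank", "bank_building"): 0.3},  # nature topic
          {("bank", "riverbank"): 0.1, ("bank", "bank_building"): 0.9}]  # finance topic
doc_topics = [0.2, 0.8]  # document leans toward the finance topic
print(topic_adapted_lex_prob("bank_building", "bank", doc_topics, tables))  # approx. 0.78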

3 0.86772346 31 acl-2012-Authorship Attribution with Author-aware Topic Models

Author: Yanir Seroussi ; Fabian Bohnert ; Ingrid Zukerman

Abstract: Authorship attribution deals with identifying the authors of anonymous texts. Building on our earlier finding that the Latent Dirichlet Allocation (LDA) topic model can be used to improve authorship attribution accuracy, we show that employing a previously-suggested Author-Topic (AT) model outperforms LDA when applied to scenarios with many authors. In addition, we define a model that combines LDA and AT by representing authors and documents over two disjoint topic sets, and show that our model outperforms LDA, AT and support vector machines on datasets with many authors.

4 0.86113322 171 acl-2012-SITS: A Hierarchical Nonparametric Model using Speaker Identity for Topic Segmentation in Multiparty Conversations

Author: Viet-An Nguyen ; Jordan Boyd-Graber ; Philip Resnik

Abstract: One of the key tasks for analyzing conversational data is segmenting it into coherent topic segments. However, most models of topic segmentation ignore the social aspect of conversations, focusing only on the words used. We introduce a hierarchical Bayesian nonparametric model, Speaker Identity for Topic Segmentation (SITS), that discovers (1) the topics used in a conversation, (2) how these topics are shared across conversations, (3) when these topics shift, and (4) a person-specific tendency to introduce new topics. We evaluate against current unsupervised segmentation models to show that including person-specific information improves segmentation performance on meeting corpora and on political debates. Moreover, we provide evidence that SITS captures an individual's tendency to introduce new topics in political contexts, via analysis of the 2008 US presidential debates and the television program Crossfire.

1 Topic Segmentation as a Social Process Conversation, interactive discussion between two or more people, is one of the most essential and common forms of communication. Whether in an informal situation or in more formal settings such as a political debate or business meeting, a conversation is often not about just one thing: topics evolve and are replaced as the conversation unfolds. Discovering this hidden structure in conversations is a key problem for conversational assistants (Tur et al., 2010) and tools that summarize (Murray et al., 2005) and display (Ehlen et al., 2007) conversational data. Topic segmentation also can illuminate individuals' agendas (Boydstun et al., 2011), patterns of agreement and disagreement (Hawes et al., 2009; Abbott et al., 2011), and relationships among conversational participants (Ireland et al., 2011). One of the most natural ways to capture conversational structure is topic segmentation (Reynar, 1998; Purver, 2011). Topic segmentation approaches range from simple heuristic methods based on lexical similarity (Morris and Hirst, 1991; Hearst, 1997) to more intricate generative models and supervised methods (Georgescul et al., 2006; Purver et al., 2006; Gruber et al., 2007; Eisenstein and Barzilay, 2008), which have been shown to outperform the established heuristics. However, previous computational work on conversational structure, particularly in topic discovery and topic segmentation, focuses primarily on content, ignoring the speakers. We argue that, because conversation is a social process, we can understand conversational phenomena better by explicitly modeling behaviors of conversational participants. In Section 2, we incorporate participant identity in a new model we call Speaker Identity for Topic Segmentation (SITS), which discovers topical structure in conversation while jointly incorporating a participant-level social component. Specifically, we explicitly model an individual's tendency to introduce a topic. After outlining inference in Section 3 and introducing data in Section 4, we use SITS to improve state-of-the-art topic segmentation and topic identification models in Section 5. In addition, in Section 6, we also show that the per-speaker model is able to discover individuals who shape and influence the course of a conversation. Finally, we discuss related work and conclude the paper in Section 7.
2 Modeling Multiparty Discussions Data Properties We are interested in turn-taking, multiparty discussion. This is a broad category, including political debates, business meetings, and online chats. More formally, such datasets contain C conversations. A conversation c has Tc turns, each of which is a maximal uninterrupted utterance by one speaker. (Footnote 1: Note the distinction with phonetic utterances, which by definition are bounded by silence.) In each turn t ∈ [1, Tc], a speaker ac,t utters N words {wc,t,n}. Each word is from a vocabulary of size V, and there are M distinct speakers.

Modeling Approaches The key insight of topic segmentation is that segments evince lexical cohesion (Galley et al., 2003; Olney and Cai, 2005). Words within a segment will look more like their neighbors than other words. This insight has been used to tune supervised methods (Hsueh et al., 2006) and inspire unsupervised models of lexical cohesion using bags of words (Purver et al., 2006) and language models (Eisenstein and Barzilay, 2008). We too take the unsupervised statistical approach. It requires few resources and is applicable in many domains without extensive training. Like previous approaches, we consider each turn to be a bag of words generated from an admixture of topics. Topics—after the topic modeling literature (Blei and Lafferty, 2009)—are multinomial distributions over terms. These topics are part of a generative model posited to have produced a corpus. However, topic models alone cannot model the dynamics of a conversation. Topic models typically do not model the temporal dynamics of individual documents, and those that do (Wang et al., 2008; Gerrish and Blei, 2010) are designed for larger documents and are not applicable here because they assume that most topics appear in every time slice. Instead, we endow each turn with a binary latent variable lc,t, called the topic shift. This latent variable signifies whether the speaker changed the topic of the conversation. To capture the topic-controlling behavior of the speakers across different conversations, we further associate each speaker m with a latent topic shift tendency, πm. Informally, this variable is intended to capture the propensity of a speaker to effect a topic shift. Formally, it represents the probability that the speaker m will change the topic (distribution) of a conversation. We take a Bayesian nonparametric approach (Müller and Quintana, 2004). Unlike parametric models, which a priori fix the number of topics, nonparametric models use a flexible number of topics to better represent data. Nonparametric distributions such as the Dirichlet process (Ferguson, 1973) share statistical strength among conversations using a hierarchical model, such as the hierarchical Dirichlet process (HDP) (Teh et al., 2006).

2.1 Generative Process In this section, we develop SITS, a generative model of multiparty discourse that jointly discovers topics and speaker-specific topic shifts from an unannotated corpus (Figure 1a). As in the hierarchical Dirichlet process (Teh et al., 2006), we allow an unbounded number of topics to be shared among the turns of the corpus. Topics are drawn from a base distribution H over multinomial distributions over the vocabulary, a finite Dirichlet with symmetric prior λ.
Unlike the HDP, where every document (here, every turn) draws a new multinomial distribution from a Dirichlet process, the social and temporal dynamics of a conversation, as specified by the binary topic shift indicator lc,t, determine when new draws happen. The full generative process is as follows:

1. For speaker m ∈ [1, M], draw speaker shift probability πm ∼ Beta(γ)
2. Draw global probability measure G0 ∼ DP(α, H)
3. For each conversation c ∈ [1, C]
   (a) Draw conversation distribution Gc ∼ DP(α0, G0)
   (b) For each turn t ∈ [1, Tc] with speaker ac,t
       i. If t = 1, set the topic shift lc,t = 1. Otherwise, draw lc,t ∼ Bernoulli(πac,t).
       ii. If lc,t = 1, draw Gc,t ∼ DP(αc, Gc). Otherwise, set Gc,t ≡ Gc,t−1.
       iii. For each word index n ∈ [1, Nc,t]
            • Draw ψc,t,n ∼ Gc,t
            • Draw wc,t,n ∼ Multinomial(ψc,t,n)

The hierarchy of Dirichlet processes allows statistical strength to be shared across contexts; within a conversation and across conversations. The per-speaker topic shift tendency πm allows speaker identity to influence the evolution of topics. To make notation concrete and aligned with the topic segmentation, we introduce notation for segments in a conversation. A segment s of conversation c is a sequence of turns [τ, τ′] such that lc,τ = lc,τ′+1 = 1 and lc,t = 0, ∀t ∈ (τ, τ′]. When lc,t = 0, Gc,t is the same as Gc,t−1 and all topics (i.e., multinomial distributions over words) {ψc,t,n} that generate words in turn t and the topics {ψc,t−1,n} that generate words in turn t − 1 come from the same distribution. Thus all topics used in a segment s are drawn from a single distribution, Gc,s:

$$G_{c,s} \mid l_{c,1}, l_{c,2}, \ldots, l_{c,T_c}, \alpha_c, G_c \sim \mathrm{DP}(\alpha_c, G_c) \qquad (1)$$

[Figure 1: Graphical model representations of our proposed models: (a) the nonparametric version; (b) the parametric version. Nodes represent random variables (shaded ones are observed), lines are probabilistic dependencies. Plates represent repetition. The innermost plates are turns, grouped in conversations.]

For notational convenience, Sc denotes the number of segments in conversation c, and st denotes the segment index of turn t. We emphasize that all segment-related notations are derived from the posterior over the topic shifts l and not part of the model itself.

Parametric Version SITS is a generalization of a parametric model (Figure 1b) where each turn has a multinomial distribution over K topics. In the parametric case, the number of topics K is fixed. Each topic, as before, is a multinomial distribution φ1 . . . φK. In the parametric case, each turn t in conversation c has an explicit multinomial distribution over K topics θc,t, identical for turns within a segment. A new topic distribution θ is drawn from a Dirichlet distribution parameterized by α when the topic shift indicator l is 1. The parametric version does not share strength within or across conversations, unlike SITS. When applied to a single conversation without speaker identity (all speakers are identical) it is equivalent to (Purver et al., 2006). In our experiments (Section 5), we compare against both.

3 Inference To find the latent variables that best explain observed data, we use Gibbs sampling, a widely used Markov chain Monte Carlo inference technique (Neal, 2000; Resnik and Hardisty, 2010). The state space is latent variables for topic indices assigned to all tokens z = {zc,t,n} and topic shifts assigned to turns l = {lc,t}. We marginalize over all other latent variables.
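Before turning to the sampling equations, the generative story above is compact enough to illustrate directly. The following is a minimal forward sampler for the parametric version (fixed K, Figure 1b); the hyperparameter values, the fixed number of words per turn, and all names are illustrative assumptions, not the authors' implementation.

import numpy as np

def sample_parametric_sits(speakers, K, V, alpha=1.0, lam=0.01, gamma=1.0,
                           n_words=10, seed=0):
    # Forward-sample one conversation from the parametric SITS story.
    # speakers: list of speaker ids, one per turn. Returns (l_t, words) per turn.
    rng = np.random.default_rng(seed)
    M = max(speakers) + 1
    pi = rng.beta(gamma, gamma, size=M)           # per-speaker shift tendency pi_m
    phi = rng.dirichlet([lam] * V, size=K)        # K topic-word multinomials
    theta = None
    conversation = []
    for t, m in enumerate(speakers):
        # The first turn always shifts; later turns shift with probability pi[m].
        l = 1 if t == 0 else int(rng.random() < pi[m])
        if l == 1:
            theta = rng.dirichlet([alpha] * K)    # new distribution over topics
        z = rng.choice(K, size=n_words, p=theta)  # topic index per token
        words = [int(rng.choice(V, p=phi[k])) for k in z]
        conversation.append((l, words))
    return conversation

print(sample_parametric_sits(speakers=[0, 1, 0, 1], K=5, V=20)[0])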
Here, we only present the conditional sampling equations; for more details, see our supplement. (Footnote 2: http://www.cs.umd.edu/∼vietan/topicshift/appendix.pdf)

3.1 Sampling Topic Assignments To sample zc,t,n, the index of the shared topic assigned to token n of turn t in conversation c, we need to sample the path assigning each word token to a segment-specific topic, each segment-specific topic to a conversational topic and each conversational topic to a shared topic. For efficiency, we make use of the minimal path assumption (Wallach, 2008) to generate these assignments. (Footnote 3: We also investigated using the maximal assumption and fully sampling assignments. We found the minimal path assumption worked as well as explicitly sampling seating assignments and that the maximal path assumption worked less well.) Under the minimal path assumption, an observation is assumed to have been generated by using a new distribution if and only if there is no existing distribution with the same value. We use Nc,s,k to denote the number of tokens in segment s in conversation c assigned topic k; Nc,k denotes the total number of segment-specific topics in conversation c assigned topic k and Nk denotes the number of conversational topics assigned topic k. TWk,w denotes the number of times the shared topic k is assigned to word w in the vocabulary. Marginal counts are represented with · and ∗ represents all hyperparameters. The conditional distribution for zc,t,n is

$$P(z_{c,t,n}=k \mid w_{c,t,n}=w, \mathbf{z}^{-c,t,n}, \mathbf{w}^{-c,t,n}, \mathbf{l}, \ast) \propto \frac{N^{-c,t,n}_{c,s_{c,t},k} + \alpha_c \dfrac{N^{-c,t,n}_{c,k} + \alpha_0 \dfrac{N^{-c,t,n}_{k} + \alpha/K}{N^{-c,t,n}_{\cdot} + \alpha}}{N^{-c,t,n}_{c,\cdot} + \alpha_0}}{N^{-c,t,n}_{c,s_{c,t},\cdot} + \alpha_c} \times \begin{cases} \dfrac{TW^{-c,t,n}_{k,w} + \lambda}{TW^{-c,t,n}_{k,\cdot} + V\lambda} & \text{existing } k \\ 1/V & \text{new } k \end{cases} \qquad (2)$$

Here V is the size of the vocabulary, K is the current number of shared topics and the superscript −c,t,n denotes counts without considering wc,t,n. In Equation 2, the first factor is proportional to the probability of sampling a path according to the minimal path assumption; the second factor is proportional to the likelihood of observing w given the sampled topic. Since an uninformed prior is used, when a new topic is sampled, all tokens are equiprobable.

3.2 Sampling Topic Shifts Sampling the topic shift variable lc,t requires us to consider merging or splitting segments. We use kc,t to denote the shared topic indices of all tokens in turn t of conversation c; Sac,t,x to denote the number of times speaker ac,t is assigned the topic shift with value x ∈ {0, 1}; Jx_{c,s} to denote the number of topics in segment s of conversation c if lc,t = x and Nx_{c,s,j} to denote the number of tokens assigned to the segment-specific topic j when lc,t = x. (Footnote 4: Deterministically knowing the path assignments is the primary efficiency motivation for using the minimal path assumption. The alternative is to explicitly sample the path assignments, which is more complicated (for both notation and computation). This option is spelled out in full detail in the supplementary material.) Again, the superscript −c,t is used to denote exclusion of turn t of conversation c in the corresponding counts. Recall that the topic shift is a binary variable. We use 0 to represent the case that the topic distribution is identical to the previous turn. We sample this assignment as

$$P(l_{c,t}=0 \mid \mathbf{l}^{-c,t}, \mathbf{w}, \mathbf{k}, \mathbf{a}, \ast) \propto \frac{S^{-c,t}_{a_{c,t},0} + \gamma}{S^{-c,t}_{a_{c,t},\cdot} + 2\gamma} \times \frac{\alpha_c^{J^{0}_{c,s_t}} \prod_{j=1}^{J^{0}_{c,s_t}} \bigl(N^{0}_{c,s_t,j} - 1\bigr)!}{\prod_{x=1}^{N^{0}_{c,s_t,\cdot}} \bigl(x - 1 + \alpha_c\bigr)} \qquad (3)$$

In Equation 3, the first factor is proportional to the probability of assigning a topic shift of value 0 to speaker ac,t and the second factor is proportional to the joint probability of all topics in segment st of conversation c when lc,t = 0.
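To make the shape of these conditionals concrete, here is a minimal sketch of the unnormalized score in Equation 2 and of the Chinese-restaurant factor shared by Equation 3 and by Equation 4, the split case shown just below. The count dictionaries and the exact back-off form are reconstructions under the minimal path assumption, not the authors' code.

import math

def topic_assignment_score(k, w, counts, alpha, alpha0, alpha_c, lam, V, K):
    # Unnormalized P(z_{c,t,n} = k | ...) following Equation 2. `counts` holds
    # the N and TW statistics with token (c,t,n) excluded:
    #   counts['seg'][k]  : N_{c,s_t,k}     counts['conv'][k] : N_{c,k}
    #   counts['glob'][k] : N_k             counts['tw'][k][w]: TW_{k,w}
    p_glob = (counts['glob'].get(k, 0) + alpha / K) / (counts['glob_tot'] + alpha)
    p_conv = (counts['conv'].get(k, 0) + alpha0 * p_glob) / (counts['conv_tot'] + alpha0)
    prior = counts['seg'].get(k, 0) + alpha_c * p_conv   # minimal-path back-off
    if k in counts['tw']:                                # word likelihood
        like = (counts['tw'][k].get(w, 0) + lam) / (counts['tw_tot'][k] + V * lam)
    else:
        like = 1.0 / V                                   # new topic: uniform
    return prior * like

def log_crp_segment_term(topic_sizes, alpha_c):
    # log of the segment factor in Equations 3 and 4:
    #   alpha_c^J * prod_j (N_j - 1)! / prod_{x=1}^{N} (x - 1 + alpha_c)
    J, N = len(topic_sizes), sum(topic_sizes)
    out = J * math.log(alpha_c)
    out += sum(math.lgamma(n) for n in topic_sizes)      # lgamma(n) = log (n-1)!
    out -= sum(math.log(x - 1 + alpha_c) for x in range(1, N + 1))
    return out

print(log_crp_segment_term([3, 2], alpha_c=0.5))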
The other alternative is for the topic shift to be 1, which represents the introduction of a new distribution over topics inside an existing segment. We sample this as

$$P(l_{c,t}=1 \mid \mathbf{l}^{-c,t}, \mathbf{w}, \mathbf{k}, \mathbf{a}, \ast) \propto \frac{S^{-c,t}_{a_{c,t},1} + \gamma}{S^{-c,t}_{a_{c,t},\cdot} + 2\gamma} \times \left[ \frac{\alpha_c^{J^{1}_{c,s_t-1}} \prod_{j=1}^{J^{1}_{c,s_t-1}} \bigl(N^{1}_{c,s_t-1,j} - 1\bigr)!}{\prod_{x=1}^{N^{1}_{c,s_t-1,\cdot}} \bigl(x - 1 + \alpha_c\bigr)} \cdot \frac{\alpha_c^{J^{1}_{c,s_t}} \prod_{j=1}^{J^{1}_{c,s_t}} \bigl(N^{1}_{c,s_t,j} - 1\bigr)!}{\prod_{x=1}^{N^{1}_{c,s_t,\cdot}} \bigl(x - 1 + \alpha_c\bigr)} \right] \qquad (4)$$

As above, the first factor in Equation 4 is proportional to the probability of assigning a topic shift of value 1 to speaker ac,t; the second factor in the big bracket is proportional to the joint distribution of the topics in segments st − 1 and st. In this case lc,t = 1 means splitting the current segment, which results in two joint probabilities for two segments.

4 Datasets This section introduces the three corpora we use. We preprocess the data to remove stopwords and remove turns containing fewer than five tokens.

The ICSI Meeting Corpus: The ICSI Meeting Corpus (Janin et al., 2003) is 75 transcribed meetings. For evaluation, we used a standard set of reference segmentations (Galley et al., 2003) of 25 meetings. Segmentations are binary, i.e., each point of the document is either a segment boundary or not, and on average each meeting has 8 segment boundaries. After preprocessing, there are 60 unique speakers and the vocabulary contains 3346 non-stopword tokens.

The 2008 Presidential Election Debates Our second dataset contains three annotated presidential debates (Boydstun et al., 2011) between Barack Obama and John McCain and a vice presidential debate between Joe Biden and Sarah Palin. Each turn is one of two types: questions (Q) from the moderator or responses (R) from a candidate. Each clause in a turn is coded with a Question Topic (TQ) and a Response Topic (TR). Thus, a turn has a list of TQ's and TR's both of length equal to the number of clauses in the turn. Topics are from the Policy Agendas Topics
For evaluation metrics that require binary segmentations, we create a binary segmentation by setting a turn as a segment boundary if the computed score is 1. This threshold is chosen to include only true segment boundaries. CNN’s Crossfire Crossfire was a weekly U.S. television “talking heads” program engineered to incite heated arguments (hence the name). Each episode features two recurring hosts, two guests, and clips from the week’s news. Our Crossfire dataset contains 1134 transcribed episodes aired between 2000 and 2004.6 There are 2567 unique speakers. Unlike the previous two datasets, Crossfire does not have explicit topic segmentations, so we use it to explore speaker-specific characteristics (Section 6). 5 Topic Segmentation Experiments In this section, we examine how well SITS can replicate annotations of when new topics are introduced. 5 http://www.policyagendas.org/page/topic-codebook 6 http://www.cs.umd.edu/∼vietan/topicshift/crossfire.zip 82 We discuss metrics for evaluating an algorithm’s segmentation against a gold annotation, describe our experimental setup, and report those results. Evaluation Metrics To evaluate segmentations, we use Pk (Beeferman et al., 1999) and WindowDiff (WD) (Pevzner and Hearst, 2002). Both metrics measure the probability that two points in a document will be incorrectly separated by a segment boundary. Both techniques consider all spans of length k in the document and count whether the two endpoints of the window are (im)properly segmented against the gold segmentation. However, these metrics have drawbacks. First, they require both hypothesized and reference segmentations to be binary. Many algorithms (e.g., probabilistic approaches) give non-binary segmentations where candidate boundaries have real-valued scores (e.g., probability or confidence). Thus, evaluation requires arbitrary thresholding to binarize soft scores. To be fair, thresholds are set so the number of segments are equal to a predefined value (Purver et al., 2006; Galley et al., 2003). To overcome these limitations, we also use Earth Mover’s Distance (EMD) (Rubner et al., 2000), a metric that measures the distance between two distributions. The EMD is the minimal cost to transform one distribution into the other. Each segmentation can be considered a multi-dimensional distribution where each candidate boundary is a dimension. In EMD, a distance function across features allows partial credit for “near miss” segment boundaries. In addition, because EMD operates on distributions, we can compute the distance between non-binary hypothesized segmentations with binary or real-valued reference segmentations. We use the FastEMD implementation (Pele and Werman, 2009). Experimental Methods We applied the following methods to discover topic segmentations in a document: • TextTiling (Hearst, 1997) is one of the earliest generalpurpose topic segmentation algorithms, sliding a fixedwidth window to detect major changes in lexical similarity. • P-NoSpeaker-S: parametric version without speaker identity run on keaerc-hS conversation (Purver et al., 2006) • P-NoSpeaker-M: parametric version without speaker identity run on Mall conversations • P-SITS: the parametric version of SITS with speaker identity run on all conversations • NP-HMM: the HMM-based nonparametric model which a single topic per turn. This model can be considered a Sticky HDP-HMM (Fox et al., 2008) with speaker identity. • NP-SITS: the nonparametric version of SITS with speaker identity run on all conversations. 
Parameter Settings and Implementations In our experiment, all parameters of TextTiling are the same as in (Hearst, 1997). For statistical models, Gibbs sampling with 10 randomly initialized chains is used. Initial hyperparameter values are sampled from U(0, 1) to favor sparsity; statistics are collected after 500 burn-in iterations with a lag of 25 iterations over a total of 5000 iterations; and slice sampling (Neal, 2003) optimizes hyperparameters.

Results and Analysis Table 2 shows the performance of various models on the topic segmentation problem, using the ICSI corpus and the 2008 debates. [Table 2: topic segmentation performance; the window size of the metrics Pk and WindowDiff is chosen to replicate previous results.] Consistent with previous results, probabilistic models outperform TextTiling. In addition, among the probabilistic models, the models that had access to speaker information consistently segment better than those lacking such information, supporting our assertion that there is benefit to modeling conversation as a social process. Furthermore, NP-SITS outperforms NP-HMM in both experiments, suggesting that assigning a distribution over topics to turns is better than using a single topic. This is consistent with parametric results reported in (Purver et al., 2006). The contribution of speaker identity seems more valuable in the debate setting. Debates are characterized by strong rewards for setting the agenda; dodging a question or moving the debate toward an opponent's weakness can be useful strategies (Boydstun et al., 2011). In contrast, meetings (particularly low-stakes ICSI meetings) are characterized by pragmatic rather than strategic topic shifts. Second, agenda-setting roles are clearer in formal debates; a moderator is tasked with setting the agenda and ensuring the conversation does not wander too much. The nonparametric model does best on the smaller debate dataset. We suspect that an evaluation that directly accessed the topic quality, either via prediction (Teh et al., 2006) or interpretability (Chang et al., 2009) would favor the nonparametric model more.

6 Evaluating Topic Shift Tendency In this section, we focus on the ability of SITS to capture speaker-level attributes. Recall that SITS associates with each speaker a topic shift tendency π that represents the probability of asserting a new topic in the conversation. While topic segmentation is a well studied problem, there are no established quantitative measurements of an individual's ability to control a conversation. To evaluate whether the tendency is capturing meaningful characteristics of speakers, we compare our inferred tendencies against insights from political science.

2008 Elections To obtain a posterior estimate of π (Figure 3) we create 10 chains with hyperparameters sampled from the uniform distribution U(0, 1) and averaged π over 10 chains (as described in Section 5). In these debates, Ifill is the moderator of the debate between Biden and Palin; Brokaw, Lehrer and Schieffer are the three moderators of three debates between Obama and McCain. Here "Question" denotes questions from audiences in the "town hall" debate. The role of this "speaker" can be considered equivalent to the debate moderator. The topic shift tendencies of moderators are much higher than for candidates. In the three debates between Obama and McCain, the moderators—Brokaw, Lehrer and Schieffer—have significantly higher scores than both candidates. This is a useful reality check, since in a debate the moderators are the ones asking questions and literally controlling the topical focus.
Interestingly, in the vice-presidential debate, the score of moderator Ifill is only slightly higher than those of Palin and Biden; this is consistent with media commentary characterizing her as a weak moderator. (Footnote 7: http://harpers.org/archive/2008/10/hbc-90003659) Similarly, the "Question" speaker had a relatively high variance, consistent with an amalgamation of many distinct speakers. [Figure 3: topic shift tendencies in the 2008 Presidential Election Debates (larger means greater tendency).] These topic shift tendencies suggest that all candidates manage to succeed at some points in setting and controlling the debate topics. Our model gives Obama a slightly higher score than McCain, consistent with social science claims (Boydstun et al., 2011) that Obama had the lead in setting the agenda over McCain. Table 4 shows examples of SITS-detected topic shifts.

Crossfire Crossfire, unlike the debates, has many speakers. This allows us to examine more closely what we can learn about speakers' topic shift tendency. We verified that SITS can segment topics, and assuming that changing the topic is useful for a speaker, how can we characterize who does so effectively? We examine the relationship between topic shift tendency, social roles, and political ideology. To focus on frequent speakers, we filter out speakers with fewer than 30 turns. Most speakers have relatively small π, with the mode around 0.3. There are, however, speakers with very high topic shift tendencies. Table 5 shows the speakers having the highest values according to SITS. We find that there are three general patterns for who influences the course of a conversation in Crossfire. First, there are structural "speakers" the show uses to frame and propose new topics. These are audience questions, news clips (e.g. many of Gore's and Bush's turns from 2000), and voice overs. That SITS is able to recover these is reassuring. Second, the stable of regular hosts receives high topic shift tendencies, which is reasonable given their experience with the format and ostensible moderation roles (in practice they also stoke lively discussion). The remaining class is more interesting. The remaining non-hosts with high topic shift tendency are relative moderates on the political spectrum:
• John Kasich, one of few Republicans to support the assault weapons ban and now governor of Ohio, a swing state
• Christine Todd Whitman, former Republican governor of New Jersey, a very Democratic state
• John McCain, who before 2008 was known as a "maverick" for working with Democrats (e.g. Russ Feingold)
This suggests that, despite Crossfire's tendency to create highly partisan debates, those who are able to work across the political spectrum may best be able to influence the topic under discussion in highly polarized contexts. Table 4 shows detected topic shifts from these speakers; two of these examples (McCain and Whitman) show disagreement of Republicans with President Bush. In the other, Kasich is defending a Republican plan (school vouchers) popular with traditional Democratic constituencies.
7 Related and Future Work In the realm of statistical models, a number of techniques incorporate social connections and identity to explain content in social networks (Chang and Blei, 2009) and scientific corpora (Rosen-Zvi et al., 2004).

[Table 4: examples of SITS-detected topic shifts from speakers with high topic shift tendency; columns: previous turn, turn detected as shifting topic.]

[Table 5: Top speakers by topic shift tendencies π. Hosts are marked (†) and "speakers" who often (but not always) appeared in clips are marked (‡). Apart from those groups, speakers with the highest tendency were political moderates.]

However, these models ignore the temporal evolution of content, treating documents as static. Models that do investigate the evolution of topics over time typically ignore the identity of the speaker. For example: models having sticky topics over n-grams (Johnson, 2010), sticky HDP-HMM (Fox et al., 2008); models that are an amalgam of sequential models and topic models (Griffiths et al., 2005; Wallach, 2006; Gruber et al., 2007; Ahmed and Xing, 2008; Boyd-Graber and Blei, 2008; Du et al., 2010); or explicit models of time or other relevant features as a distinct latent variable (Wang and McCallum, 2006; Eisenstein et al., 2010). In contrast, SITS jointly models topic and individuals' tendency to control a conversation. Not only does SITS outperform other models using standard computational linguistics baselines, but it also proposes intriguing hypotheses for social scientists. Associating each speaker with a scalar that models their tendency to change the topic does improve performance on standard tasks, but it's inadequate to fully describe an individual. Modeling individuals' perspective (Paul and Girju, 2010), "side" (Thomas et al., 2006), or personal preferences for topics (Grimmer, 2009) would enrich the model and better illuminate the interaction of influence and topic. Statistical analysis of political discourse can help discover patterns that political scientists, who often work via a "close reading," might otherwise miss. We plan to work with social scientists to validate our implicit hypothesis that our topic shift tendency correlates well with intuitive measures of "influence."

Acknowledgements This research was funded in part by the Army Research Laboratory through ARL Cooperative Agreement W911NF-09-2-0072 and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Laboratory. Jordan Boyd-Graber and Philip Resnik are also supported by US National Science Foundation Grant NSF grant #1018625. Any opinions, findings, conclusions, or recommendations expressed are the authors' and do not necessarily reflect those of the sponsors.

References
[Abbott et al., 2011] Abbott, R., Walker, M., Anand, P., Fox Tree, J. E., Bowmani, R., and King, J. (2011). How can you say such things?!?: Recognizing disagreement in informal political argument. In Proceedings of the Workshop on Language in Social Media (LSM 2011), pages 2–11.
[Ahmed and Xing, 2008] Ahmed, A. and Xing, E. P. (2008). Dynamic non-parametric mixture models and the recurrent Chinese restaurant process: with applications to evolutionary clustering. In SDM, pages 219–230.
[Beeferman et al., 1999] Beeferman, D., Berger, A., and Lafferty, J. (1999). Statistical models for text segmentation. Mach. Learn., 34:177–210.
[Blei and Lafferty, 2009] Blei, D. M. and Lafferty, J. (2009). Text Mining: Theory and Applications, chapter Topic Models. Taylor and Francis, London.
[Boyd-Graber and Blei, 2008] Boyd-Graber, J. and Blei, D. M. (2008). Syntactic topic models. In Proceedings of Advances in Neural Information Processing Systems.
[Boydstun et al., 2011] Boydstun, A. E., Phillips, C., and Glazier, R. A. (2011). It's the economy again, stupid: Agenda control in the 2008 presidential debates. Forthcoming.
[Chang and Blei, 2009] Chang, J. and Blei, D. M. (2009). Relational topic models for document networks. In Proceedings of Artificial Intelligence and Statistics.
[Chang et al., 2009] Chang, J., Boyd-Graber, J., Wang, C., Gerrish, S., and Blei, D. M. (2009). Reading tea leaves: How humans interpret topic models. In Neural Information Processing Systems.
[Du et al., 2010] Du, L., Buntine, W., and Jin, H. (2010). Sequential latent Dirichlet allocation: Discover underlying topic structures within a document. In Data Mining (ICDM), 2010 IEEE 10th International Conference on, pages 148–157.
[Ehlen et al., 2007] Ehlen, P., Purver, M., and Niekrasz, J. (2007). A meeting browser that learns. In Proceedings of the AAAI Spring Symposium on Interaction Challenges for Intelligent Assistants.
[Eisenstein and Barzilay, 2008] Eisenstein, J. and Barzilay, R. (2008). Bayesian unsupervised topic segmentation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
[Eisenstein et al., 2010] Eisenstein, J., O'Connor, B., Smith, N. A., and Xing, E. P. (2010). A latent variable model for geographic lexical variation. In EMNLP'10, pages 1277–1287.
[Ferguson, 1973] Ferguson, T. S. (1973). A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209–230.
[Fox et al., 2008] Fox, E. B., Sudderth, E. B., Jordan, M. I., and Willsky, A. S. (2008). An HDP-HMM for systems with state persistence. In Proceedings of International Conference of Machine Learning.
[Galley et al., 2003] Galley, M., McKeown, K., Fosler-Lussier, E., and Jing, H. (2003). Discourse segmentation of multi-party conversation. In Proceedings of the Association for Computational Linguistics.
[Georgescul et al., 2006] Georgescul, M., Clark, A., and Armstrong, S. (2006). Word distributions for thematic segmentation in a support vector machine approach. In Conference on Computational Natural Language Learning.
[Gerrish and Blei, 2010] Gerrish, S. and Blei, D. M. (2010). A language-based approach to measuring scholarly impact. In Proceedings of International Conference of Machine Learning.
[Griffiths et al., 2005] Griffiths, T. L., Steyvers, M., Blei, D. M., and Tenenbaum, J. B. (2005). Integrating topics and syntax. In Proceedings of Advances in Neural Information Processing Systems.
[Grimmer, 2009] Grimmer, J. (2009). A Bayesian Hierarchical Topic Model for Political Texts: Measuring Expressed Agendas in Senate Press Releases. Political Analysis, 18:1–35.
[Gruber et al., 2007] Gruber, A., Rosen-Zvi, M., and Weiss, Y. (2007). Hidden topic Markov models. In Artificial Intelligence and Statistics.
[Hawes et al., 2009] Hawes, T., Lin, J., and Resnik, P. (2009). Elements of a computational model for multiparty discourse: The turn-taking behavior of Supreme Court justices. Journal of the American Society for Information Science and Technology, 60(8):1607–1615.
[Hearst, 1997] Hearst, M. A. (1997). TextTiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1):33–64.
[Hsueh et al., 2006] Hsueh, P.-y., Moore, J. D., and Renals, S. (2006). Automatic segmentation of multiparty dialogue. In Proceedings of the European Chapter of the Association for Computational Linguistics.
[Ireland et al., 2011] Ireland, M. E., Slatcher, R. B., Eastwick, P. W., Scissors, L. E., Finkel, E. J., and Pennebaker, J. W. (2011). Language style matching predicts relationship initiation and stability. Psychological Science, 22(1):39–44.
[Janin et al., 2003] Janin, A., Baron, D., Edwards, J., Ellis, D., Gelbart, D., Morgan, N., Peskin, B., Pfau, T., Shriberg, E., Stolcke, A., and Wooters, C. (2003). The ICSI meeting corpus. In IEEE International Conference on Acoustics, Speech, and Signal Processing.
[Johnson, 2010] Johnson, M. (2010). PCFGs, topic models, adaptor grammars and learning topical collocations and the structure of proper names. In Proceedings of the Association for Computational Linguistics.
[Morris and Hirst, 1991] Morris, J. and Hirst, G. (1991). Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17:21–48.
[Müller and Quintana, 2004] Müller, P. and Quintana, F. A. (2004). Nonparametric Bayesian data analysis. Statistical Science, 19(1):95–110.
[Murray et al., 2005] Murray, G., Renals, S., and Carletta, J. (2005). Extractive summarization of meeting recordings. In European Conference on Speech Communication and Technology.
[Neal, 2000] Neal, R. M. (2000). Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249–265.
[Neal, 2003] Neal, R. M. (2003). Slice sampling. Annals of Statistics, 31:705–767.
[Olney and Cai, 2005] Olney, A. and Cai, Z. (2005). An orthonormal basis for topic segmentation in tutorial dialogue. In Proceedings of the Human Language Technology Conference.
[Paul and Girju, 2010] Paul, M. and Girju, R. (2010). A two-dimensional topic-aspect model for discovering multi-faceted topics. In Association for the Advancement of Artificial Intelligence.
[Pele and Werman, 2009] Pele, O. and Werman, M. (2009). Fast and robust earth mover's distances. In International Conference on Computer Vision.
[Pevzner and Hearst, 2002] Pevzner, L. and Hearst, M. A. (2002). A critique and improvement of an evaluation metric for text segmentation. Computational Linguistics, 28.
[Purver, 2011] Purver, M. (2011). Topic segmentation. In Tur, G. and de Mori, R., editors, Spoken Language Understanding: Systems for Extracting Semantic Information from Speech, pages 291–317. Wiley.
[Purver et al., 2006] Purver, M., Körding, K., Griffiths, T. L., and Tenenbaum, J. (2006). Unsupervised topic modelling for multi-party spoken discourse. In Proceedings of the Association for Computational Linguistics.
[Resnik and Hardisty, 2010] Resnik, P. and Hardisty, E. (2010). Gibbs sampling for the uninitiated. Technical Report UMIACS-TR-2010-04, University of Maryland. http://www.lib.umd.edu/drum/handle/1903/10058.
[Reynar, 1998] Reynar, J. C. (1998). Topic Segmentation: Algorithms and Applications. PhD thesis, University of Pennsylvania.
[Rosen-Zvi et al., 2004] Rosen-Zvi, M., Griffiths, T. L., Steyvers, M., and Smyth, P. (2004). The author-topic model for authors and documents. In Proceedings of Uncertainty in Artificial Intelligence.
[Rubner et al., 2000] Rubner, Y., Tomasi, C., and Guibas, L. J. (2000). The earth mover's distance as a metric for image retrieval. International Journal of Computer Vision, 40:99–121.
[Teh et al., 2006] Teh, Y. W., Jordan, M. I., Beal, M. J., and Blei, D. M. (2006). Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581.
[Thomas et al., 2006] Thomas, M., Pang, B., and Lee, L. (2006). Get out the vote: Determining support or opposition from Congressional floor-debate transcripts. In Proceedings of Empirical Methods in Natural Language Processing.
[Tur et al., 2010] Tur, G., Stolcke, A., Voss, L., Peters, S., Hakkani-Tür, D., Dowding, J., Favre, B., Fernández, R., Frampton, M., Frandsen, M., Frederickson, C., Graciarena, M., Kintzing, D., Leveque, K., Mason, S., Niekrasz, J., Purver, M., Riedhammer, K., Shriberg, E., Tien, J., Vergyri, D., and Yang, F. (2010). The CALO meeting assistant system. Trans. Audio, Speech and Lang. Proc., 18:1601–1611.
[Wallach, 2006] Wallach, H. M. (2006). Topic modeling: Beyond bag-of-words. In Proceedings of International Conference of Machine Learning.
[Wallach, 2008] Wallach, H. M. (2008). Structured Topic Models for Language. PhD thesis, University of Cambridge.
[Wang et al., 2008] Wang, C., Blei, D. M., and Heckerman, D. (2008). Continuous time dynamic topic models. In Proceedings of Uncertainty in Artificial Intelligence.
[Wang and McCallum, 2006] Wang, X. and McCallum, A. (2006). Topics over time: a non-Markov continuous-time model of topical trends. In Knowledge Discovery and Data Mining.

5 0.85701674 79 acl-2012-Efficient Tree-Based Topic Modeling

Author: Yuening Hu ; Jordan Boyd-Graber

Abstract: Topic modeling with a tree-based prior has been used for a variety of applications because it can encode correlations between words that traditional topic modeling cannot. However, its expressive power comes at the cost of more complicated inference. We extend the SPARSELDA (Yao et al., 2009) inference scheme for latent Dirichlet allocation (LDA) to tree-based topic models. This sampling scheme computes the exact conditional distribution for Gibbs sampling much more quickly than enumerating all possible latent variable assignments. We further improve performance by iteratively refining the sampling distribution only when needed. Experiments show that the proposed techniques dramatically improve the computation time.
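For context on what this abstract extends, here is a minimal sketch of the SparseLDA bucket decomposition (Yao et al., 2009) for plain LDA; the notation and data layout are conventional assumptions, since the abstract itself gives no formulas.

def sparse_lda_buckets(doc_topic, word_topic, topic_total, alpha, beta, V):
    # Decompose the LDA Gibbs mass (n_dk + alpha)(n_kw + beta)/(n_k + V*beta)
    # into three buckets: s (smoothing-only, cacheable), r (sparse in the
    # topics the document uses), q (sparse in the topics generating this word).
    s = sum(alpha * beta / (nk + V * beta) for nk in topic_total)
    r = sum(n_dk * beta / (topic_total[k] + V * beta)
            for k, n_dk in doc_topic.items())
    q = sum((doc_topic.get(k, 0) + alpha) * n_kw / (topic_total[k] + V * beta)
            for k, n_kw in word_topic.items())
    return s, r, q  # total mass = s + r + q; sample a bucket, then a topic

# Tiny hypothetical example with K = 3 topics.
print(sparse_lda_buckets(doc_topic={0: 2}, word_topic={0: 5, 2: 1},
                         topic_total=[40, 30, 10], alpha=0.1, beta=0.01, V=1000))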

6 0.79314643 203 acl-2012-Translation Model Adaptation for Statistical Machine Translation with Monolingual Topic Information

7 0.74769008 110 acl-2012-Historical Analysis of Legal Opinions with a Sparse Mixed-Effects Latent Variable Model

8 0.73803014 86 acl-2012-Exploiting Latent Information to Predict Diffusions of Novel Topics on Social Networks

9 0.73005521 198 acl-2012-Topic Models, Latent Space Models, Sparse Coding, and All That: A Systematic Understanding of Probabilistic Semantic Extraction in Large Corpus

10 0.71195418 98 acl-2012-Finding Bursty Topics from Microblogs

11 0.59185672 88 acl-2012-Exploiting Social Information in Grounded Language Learning via Grammatical Reduction

12 0.53432482 61 acl-2012-Cross-Domain Co-Extraction of Sentiment and Topic Lexicons

13 0.48969004 14 acl-2012-A Joint Model for Discovery of Aspects in Utterances

14 0.43368912 105 acl-2012-Head-Driven Hierarchical Phrase-based Translation

15 0.41997531 143 acl-2012-Mixing Multiple Translation Models in Statistical Machine Translation

16 0.39431715 108 acl-2012-Hierarchical Chunk-to-String Translation

17 0.39110529 204 acl-2012-Translation Model Size Reduction for Hierarchical Phrase-based Statistical Machine Translation

18 0.38297713 141 acl-2012-Maximum Expected BLEU Training of Phrase and Lexicon Translation Models

19 0.36470413 131 acl-2012-Learning Translation Consensus with Structured Label Propagation

20 0.36435944 144 acl-2012-Modeling Review Comments


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(25, 0.01), (26, 0.031), (28, 0.108), (30, 0.037), (37, 0.039), (39, 0.06), (57, 0.019), (59, 0.013), (64, 0.137), (74, 0.039), (82, 0.014), (85, 0.057), (90, 0.122), (92, 0.123), (94, 0.052), (99, 0.039)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.85425436 60 acl-2012-Coupling Label Propagation and Constraints for Temporal Fact Extraction

Author: Yafang Wang ; Maximilian Dylla ; Marc Spaniol ; Gerhard Weikum

Abstract: The Web and digitized text sources contain a wealth of information about named entities such as politicians, actors, companies, or cultural landmarks. Extracting this information has enabled the automated construction of large knowledge bases, containing hundreds of millions of binary relationships or attribute values about these named entities. However, in reality most knowledge is transient, i.e. changes over time, requiring a temporal dimension in fact extraction. In this paper we develop a methodology that combines label propagation with constraint reasoning for temporal fact extraction. Label propagation aggressively gathers fact candidates, and an Integer Linear Program is used to clean out false hypotheses that violate temporal constraints. Our method is able to improve on recall while keeping up with precision, which we demonstrate by experiments with biography-style Wikipedia pages and a large corpus of news articles.

same-paper 2 0.84186828 22 acl-2012-A Topic Similarity Model for Hierarchical Phrase-based Translation

Author: Xinyan Xiao ; Deyi Xiong ; Min Zhang ; Qun Liu ; Shouxun Lin

Abstract: Previous work using topic models for statistical machine translation (SMT) explores topic information at the word level. However, SMT has advanced from the word-based paradigm to the phrase/rule-based paradigm. We therefore propose a topic similarity model to exploit topic information at the synchronous rule level for hierarchical phrase-based translation. We associate each synchronous rule with a topic distribution, and select desirable rules according to the similarity of their topic distributions with given documents. We show that our model significantly improves the translation performance over the baseline on NIST Chinese-to-English translation experiments. Our model also achieves better performance and a faster speed than previous approaches that work at the word level.

3 0.77239001 218 acl-2012-You Had Me at Hello: How Phrasing Affects Memorability

Author: Cristian Danescu-Niculescu-Mizil ; Justin Cheng ; Jon Kleinberg ; Lillian Lee

Abstract: Understanding the ways in which information achieves widespread public awareness is a research question of significant interest. We consider whether, and how, the way in which the information is phrased (the choice of words and sentence structure) can affect this process. To this end, we develop an analysis framework and build a corpus of movie quotes, annotated with memorability information, in which we are able to control for both the speaker and the setting of the quotes. We find that there are significant differences between memorable and non-memorable quotes in several key dimensions, even after controlling for situational and contextual factors. One is lexical distinctiveness: in aggregate, memorable quotes use less common word choices, but at the same time are built upon a scaffolding of common syntactic patterns. Another is that memorable quotes tend to be more general in ways that make them easy to apply in new contexts; that is, more portable. We also show how the concept of "memorable language" can be extended across domains.

1 Hello. My name is Inigo Montoya. Understanding what items will be retained in the public consciousness, and why, is a question of fundamental interest in many domains, including marketing, politics, entertainment, and social media; as we all know, many items barely register, whereas others catch on and take hold in many people's minds. An active line of recent computational work has employed a variety of perspectives on this question. Building on a foundation in the sociology of diffusion [27, 31], researchers have explored the ways in which network structure affects the way information spreads, with domains of interest including blogs [1, 11], email [37], on-line commerce [22], and social media [2, 28, 33, 38]. There has also been recent research addressing temporal aspects of how different media sources convey information [23, 30, 39] and ways in which people react differently to information on different topics [28, 36]. Beyond all these factors, however, one's everyday experience with these domains suggests that the way in which a piece of information is expressed (the choice of words, the way it is phrased) might also have a fundamental effect on the extent to which it takes hold in people's minds. Concepts that attain wide reach are often carried in messages such as political slogans, marketing phrases, or aphorisms whose language seems intuitively to be memorable, "catchy," or otherwise compelling. Our first challenge in exploring this hypothesis is to develop a notion of "successful" language that is precise enough to allow for quantitative evaluation. We also face the challenge of devising an evaluation setting that separates the phrasing of a message from the conditions in which it was delivered: highly cited quotes tend to have been delivered under compelling circumstances or fit an existing cultural, political, or social narrative, and potentially what appeals to us about the quote is really just its invocation of these extra-linguistic contexts. Is the form of the language adding an effect beyond or independent of these (obviously very crucial) factors? To investigate the question, one needs a way of controlling as much as possible for the role that the surrounding context of the language plays.
The present work (i): Evaluating language-based memorability. Defining what makes an utterance memorable is subtle, and scholars in several domains have written about this question. There is a rough consensus that an appropriate definition involves elements of both recognition (people should be able to retain the quote and recognize it when they hear it invoked) and production (people should be motivated to refer to it in relevant situations) [15]. One suggested reason for why some memes succeed is their ability to provoke emotions [16]. Alternatively, memorable quotes can be good for expressing the feelings, mood, or situation of an individual, a group, or a culture (the zeitgeist): "Certain quotes exquisitely capture the mood or feeling we wish to communicate to someone. We hear them ... and store them away for future use" [10]. None of these observations, however, serve as definitions, and indeed, we believe it desirable not to pre-commit to an abstract definition, but rather to adopt an operational formulation based on external human judgments.

In designing our study, we focus on a domain in which (i) there is rich use of language, some of which has achieved deep cultural penetration; (ii) there already exist a large number of external human judgments, perhaps implicit, but in a form we can extract; and (iii) we can control for the setting in which the text was used. Specifically, we use the complete scripts of roughly 1000 movies, representing diverse genres, eras, and levels of popularity, and consider which lines are the most "memorable". To acquire memorability labels, for each sentence in each script, we determine whether it has been listed as a "memorable quote" by users of the widely-known IMDb (the Internet Movie Database), and also estimate the number of times it appears on the Web. Both of these serve as memorability metrics for our purposes.

When we evaluate properties of memorable quotes, we compare them with quotes that are not assessed as memorable, but were spoken by the same character, at approximately the same point in the same movie. This enables us to control in a fairly fine-grained way for the confounding effects of context discussed above: we can observe differences that persist even after taking into account both the speaker and the setting.

In a pilot validation study, we find that human subjects are effective at recognizing the more IMDb-memorable of two quotes, even for movies they have not seen. This motivates a search for features intrinsic to the text of quotes that signal memorability. In fact, comments provided by the human subjects as part of the task suggested two basic forms that such textual signals could take: subjects felt that (i) memorable quotes often involve a distinctive turn of phrase; and (ii) memorable quotes tend to invoke general themes that aren't tied to the specific setting they came from, and hence can be more easily invoked for future (out of context) uses. We test both of these principles in our analysis of the data.

The present work (ii): What distinguishes memorable quotes. Under the controlled-comparison setting sketched above, we find that memorable quotes exhibit significant differences from non-memorable quotes in several fundamental respects, and these differences in the data reinforce the two main principles from the human pilot study.
First, we show a concrete sense in which memorable quotes are indeed distinctive: with respect to lexical language models trained on the newswire portions of the Brown corpus [21], memorable quotes have significantly lower likelihood than their non-memorable counterparts. Interestingly, this distinctiveness takes place at the level of words, but not at the level of other syntactic features: the part-of-speech composition of memorable quotes is in fact more likely with respect to newswire. Thus, we can think of memorable quotes as consisting, in an aggregate sense, of unusual word choices built on a scaffolding of common part-of-speech patterns.

We also identify a number of ways in which memorable quotes convey greater generality. In their patterns of verb tenses, personal pronouns, and determiners, memorable quotes are structured so as to be more "free-standing," containing fewer markers that indicate references to nearby text. Memorable quotes differ in other interesting aspects as well, such as sound distributions.

Our analysis of memorable movie quotes suggests a framework by which the memorability of text in a range of different domains could be investigated. We provide evidence that such cross-domain properties may hold, guided by one of our motivating applications in marketing. In particular, we analyze a corpus of advertising slogans, and we show that these slogans have significantly greater likelihood at both the word level and the part-of-speech level with respect to a language model trained on memorable movie quotes, compared to a corresponding language model trained on non-memorable movie quotes. This suggests that some of the principles underlying memorable text have the potential to apply across different areas.

Roadmap. §2 lays the empirical foundations of our work: the design and creation of our movie-quotes dataset, which we make publicly available (§2.1), a pilot study with human subjects validating IMDb-based memorability labels (§2.2), and further study of incorporating search-engine counts (§2.3). §3 details our analysis and prediction experiments, using both movie-quotes data and, as an exploration of cross-domain applicability, slogans data. §4 surveys related work across a variety of fields. §5 briefly summarizes and indicates some future directions.

2 I'm ready for my close-up.

2.1 Data

To study the properties of memorable movie quotes, we need a source of movie lines and a designation of memorability. Following [8], we constructed a corpus consisting of all lines from roughly 1000 movies, varying in genre, era, and popularity; for each movie, we then extracted the list of quotes from IMDb's Memorable Quotes page corresponding to the movie. (Footnote 1: This extraction involved some edit-distance-based alignment, since the exact form of the line in the script can exhibit minor differences from the version typed into IMDb.)

A memorable quote in IMDb can appear either as an individual sentence spoken by one character, or as a multi-sentence line, or as a block of dialogue involving multiple characters. In the latter two cases, it can be hard to determine which particular portion is viewed as memorable (some involve a build-up to a punch line; others involve the follow-through after a well-phrased opening sentence), and so we focus in our comparisons on those memorable quotes that appear as a single sentence rather than a multi-line block. (Footnote 2: We also ran experiments relaxing the single-sentence assumption, which allows for stricter scene control and a larger dataset but complicates comparisons involving syntax. The non-syntax results were in line with those reported here.)
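The edit-distance alignment mentioned in Footnote 1 can be approximated with standard string similarity. A minimal sketch, assuming hypothetical inputs; the function, threshold, and example data are illustrative, not the authors' actual implementation:

```python
import difflib

def align_quote(quote, script_lines, min_ratio=0.9):
    """Return the script line that best matches an IMDb quote, tolerating
    minor wording differences between the script and the IMDb entry."""
    best_line, best_ratio = None, 0.0
    for line in script_lines:
        ratio = difflib.SequenceMatcher(None, quote.lower(), line.lower()).ratio()
        if ratio > best_ratio:
            best_line, best_ratio = line, ratio
    return best_line if best_ratio >= min_ratio else None

# Hypothetical usage:
script = ["My name is Inigo Montoya.", "You killed my father.", "Prepare to die."]
print(align_quote("my name is Inigo Montoya", script))
```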
[Figure 1: Location of memorable quotes in each decile of movie scripts (the first 10th, the second 10th, etc.), summed over all movies. The same qualitative results hold if we discard each movie's very first and last line, which might have privileged status. The plot's axis-tick residue from the extraction is omitted.]

We now formulate a task that we can use to evaluate the features of memorable quotes. Recall that our goal is to identify effects based in the language of the quotes themselves, beyond any factors arising from the speaker or context. Thus, for each (single-sentence) memorable quote M, we identify a non-memorable quote that is as similar as possible to M in all characteristics but the choice of words. This means we want it to be spoken by the same character in the same movie. It also means that we want it to have the same length: controlling for length is important because we expect that on average, shorter quotes will be easier to remember than long quotes, and that wouldn't be an interesting textual effect to report. Moreover, we also want to control for the fact that a quote's position in a movie can affect memorability: certain scenes produce more memorable dialogue, and as Figure 1 demonstrates, in aggregate memorable quotes also occur disproportionately near the beginnings and especially the ends of movies.

In summary, then, for each M, we pick a contrasting (single-sentence) quote N from the same movie that is as close in the script as possible to M (either before or after it), subject to the conditions that (i) M and N are uttered by the same speaker, (ii) M and N have the same number of words, and (iii) N does not occur in the IMDb list of memorable quotes for the movie (either as a single line or as part of a larger block).

[Table 1: Three example pairs. In each pair, the two quotes appear close together in the movie, are spoken by the same character, and have the same length, and one is labeled memorable by the IMDb while the other is not. (Contractions such as "it's" count as two words.) The quote texts themselves are not recoverable from the extraction.]

Given such pairs, we formulate a pairwise comparison task: given M and N, determine which is the memorable quote. Psychological research on subjective evaluation [35], as well as initial experiments using ourselves as subjects, indicated that this pairwise set-up was easier to work with than simply presenting a single sentence and asking whether it is memorable or not; the latter requires agreement on an "absolute" criterion for memorability that is very hard to impose consistently, whereas the former simply requires a judgment that one quote is more memorable than another.

Our main dataset, available at http://www.cs.cornell.edu/~cristian/memorability.html, thus consists of approximately 2200 such (M, N) pairs, separated by a median of 5 same-character lines in the script. (Footnote 3: Also available there: other examples and factoids.) The reader can get a sense for the nature of the data from the three examples in Table 1.
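The contrast-pair selection just described is mechanical enough to sketch directly. The `ScriptLine` class and its fields below are hypothetical stand-ins for whatever representation a script parse produces; this illustrates the stated constraints rather than reproducing the authors' code:

```python
from dataclasses import dataclass

@dataclass
class ScriptLine:
    index: int       # position of the line within the script
    speaker: str
    text: str
    memorable: bool  # True if listed on IMDb's Memorable Quotes page

def n_words(text: str) -> int:
    # Contractions such as "it's" would need to be pre-split into two tokens.
    return len(text.split())

def contrast_quote(m: ScriptLine, script: list) -> ScriptLine:
    """For a memorable single-sentence quote m, pick the closest quote in the
    script by the same speaker, with the same word count, that is not itself
    in the movie's IMDb memorable-quote list."""
    candidates = [
        s for s in script
        if s.index != m.index
        and s.speaker == m.speaker
        and n_words(s.text) == n_words(m.text)
        and not s.memorable
    ]
    return min(candidates, key=lambda s: abs(s.index - m.index), default=None)
```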
We now discuss two further aspects to the formulation of the experiment: a preliminary pilot study involving human subjects, and the incorporation of search engine counts into the data.

2.2 Pilot study: Human performance

As a preliminary consideration, we did a small pilot study to see if humans can distinguish memorable from non-memorable quotes, assuming our IMDb-induced labels as gold standard. Six subjects, all native speakers of English and none an author of this paper, were presented with 11 or 12 pairs of memorable vs. non-memorable quotes; again, we controlled for extra-textual effects by ensuring that in each pair the two quotes come from the same movie, are by the same character, have the same length, and appear as nearly as possible in the same scene. (Footnote 4: In this pilot study, we allowed multi-sentence quotes.) The order of quotes within pairs was randomized. Importantly, because we wanted to understand whether the language of the quotes by itself contains signals about memorability, we chose quotes from movies that the subjects said they had not seen. (This means that each subject saw a different set of quotes.) Moreover, the subjects were requested not to consult any external sources of information. (Footnote 5: We did not use crowd-sourcing because we saw no way to ensure that this condition would be obeyed by arbitrary subjects. We do note, though, that after our research was completed and as of Apr. 26, 2012, approximately 11,300 people had completed the online test; average accuracy: 72%, mode number correct: 9/12.) The reader is welcome to try a demo version of the task at http://www.cs.cornell.edu/~cristian/memorability.html.

[Table 2: Human pilot study: number of matches to IMDb-induced annotation, ordered by decreasing match percentage. For the null hypothesis of random guessing, these results are statistically significant, p < 2^-6 ≈ .016. The per-subject counts are not recoverable from the extraction.]

Table 2 shows that all the subjects performed (sometimes much) better than chance, and against the null hypothesis that all subjects are guessing randomly, the results are statistically significant, p < 2^-6 ≈ .016. These preliminary findings provide evidence for the validity of our task: despite the apparent difficulty of the job, even humans who haven't seen the movie in question can recover our IMDb-induced labels with some reliability. (Footnote 6: The average accuracy being below 100% reinforces that context is very important, too.)
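The significance figure follows from elementary arithmetic: under random guessing, each subject beats chance (scores strictly above 50%) with probability below one half, so the probability that all six do is below 2^-6. A quick check, assuming 12 pairs per subject for illustration:

```python
from math import comb

# Probability that a guessing subject gets strictly more than half
# of 12 pairs right: the binomial tail at p = 1/2.
p_single = sum(comb(12, k) for k in range(7, 13)) / 2 ** 12
print(round(p_single, 3))  # ≈ 0.387, comfortably below 1/2

# So the chance that all six independent subjects beat chance is below:
print(0.5 ** 6)  # 0.015625, i.e. p < 2^-6 ≈ .016
```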
2.3 Incorporating search engine counts

Thus far we have discussed a dataset in which memorability is determined through an explicit labeling drawn from the IMDb. Given the "production" aspect of memorability discussed in §1, we should also expect that memorable quotes will tend to appear more extensively on Web pages than non-memorable quotes; note that incorporating this insight makes it possible to use the (implicit) judgments of a much larger number of people than are represented by the IMDb database. It therefore makes sense to try using search-engine result counts as a second indication of memorability.

We experimented with several ways of constructing memorability information from search-engine counts, but this proved challenging. Searching for a quote as a stand-alone phrase runs into the problem that a number of quotes are also sentences that people use without the movie in mind, and so high counts for such quotes do not testify to the phrase's status as a memorable quote from the movie. On the other hand, searching for the quote in a Boolean conjunction with the movie's title discards most of these uses, but also eliminates a large fraction of the appearances on the Web that we want to find: precisely because memorable quotes tend to have widespread cultural usage, people generally don't feel the need to include the movie's title when invoking them. Finally, since we are dealing with roughly 1000 movies, the result counts vary over an enormous range, from recent blockbusters to movies with relatively small fan bases.

In the end, we found that it was more effective to use the result counts in conjunction with the IMDb labels, so that the counts played the role of an additional filter rather than a free-standing numerical value. Thus, for each pair (M, N) produced using the IMDb methodology above, we searched for each of M and N as quoted expressions in a Boolean conjunction with the title of the movie. We then kept only those pairs for which M (i) produced more than five results in our (quoted, conjoined) search, and (ii) produced at least twice as many results as the corresponding search for N. We created a version of this filtered dataset using each of Google and Bing, and all the main findings were consistent with the results on the IMDb-only dataset. Thus, in what follows, we will focus on the main IMDb-only dataset, discussing the relationship to the dataset filtered by search engine counts where relevant (in which case we will refer to the +Google dataset).

3 Never send a human to do a machine's job.

We now discuss experiments that investigate the hypotheses discussed in §1. In particular, we devise methods that can assess the distinctiveness and generality hypotheses and test whether there exists a notion of "memorable language" that operates across domains. In addition, we evaluate and compare the predictive power of these hypotheses.

3.1 Distinctiveness

One of the hypotheses we examine is whether the use of language in memorable quotes is to some extent unusual. In order to quantify the level of distinctiveness of a quote, we take a language-model approach: we model "common language" using the newswire sections of the Brown corpus [21], and evaluate how distinctive a quote is by evaluating its likelihood with respect to this model: the lower the likelihood, the more distinctive. (Footnote 7: Results were qualitatively similar if we used the fiction portions. The age of the Brown corpus makes it less likely to contain modern movie quotes.) In order to assess different levels of lexical and syntactic distinctiveness, we employ a total of six Laplace-smoothed language models: 1-gram, 2-gram, and 3-gram word LMs and 1-gram, 2-gram, and 3-gram part-of-speech LMs. (Footnote 8: We employ Laplace (additive) smoothing with a smoothing parameter of 0.2. The language models' vocabulary was that of the entire training corpus. Footnote 9: Throughout we obtain part-of-speech tags by using the NLTK maximum entropy tagger with default parameters.)

We find strong evidence that from a lexical perspective, memorable quotes are more distinctive than their non-memorable counterparts. As indicated in Table 3, for each of our lexical "common language" models, in about 60% of the quote pairs the memorable quote is more distinctive.

[Table 3: Percentage of quote pairs in which the memorable quote is more distinctive than the non-memorable one according to the respective "common language" model. Significance according to a two-tailed sign test is indicated using *-notation (***="p<.001"). The numeric cells are not recoverable from the extraction.]

Interestingly, the reverse is true when it comes to part-of-speech syntax: memorable quotes appear to follow the syntactic patterns of "common language" as closely as or more closely than non-memorable quotes. Together, these results suggest that memorable quotes consist of unusual word sequences built on common syntactic scaffolding.
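The distinctiveness score is simply the log-likelihood of a quote under an additively smoothed n-gram model of newswire, with lower likelihood meaning more distinctive. A minimal sketch, using the smoothing parameter of 0.2 reported in Footnote 8; the class design and toy data are our own (the paper trains on the Brown newswire sections, e.g. via NLTK's corpus reader):

```python
from collections import Counter
from math import log

class LaplaceNgramLM:
    """Additively smoothed n-gram language model over token lists."""

    def __init__(self, sentences, n=2, alpha=0.2):
        self.n, self.alpha = n, alpha
        self.vocab = {w for s in sentences for w in s} | {"</s>"}
        self.ngrams, self.contexts = Counter(), Counter()
        for s in sentences:
            padded = ["<s>"] * (n - 1) + list(s) + ["</s>"]
            for i in range(len(padded) - n + 1):
                gram = tuple(padded[i:i + n])
                self.ngrams[gram] += 1
                self.contexts[gram[:-1]] += 1

    def logprob(self, sentence):
        """Smoothed log-likelihood; lower means more distinctive."""
        V = len(self.vocab)
        padded = ["<s>"] * (self.n - 1) + list(sentence) + ["</s>"]
        lp = 0.0
        for i in range(len(padded) - self.n + 1):
            gram = tuple(padded[i:i + self.n])
            lp += log((self.ngrams[gram] + self.alpha)
                      / (self.contexts[gram[:-1]] + self.alpha * V))
        return lp

# Toy usage; a real run would train on Brown newswire sentences.
lm = LaplaceNgramLM([["may", "the", "force", "be", "with", "you"]], n=2)
print(lm.logprob(["the", "force"]))
```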
3.2 Generality

Another of our hypotheses is that memorable quotes are easier to use outside the specific context in which they were uttered (that is, more "portable"), and therefore exhibit fewer terms that refer to those settings. We use the following syntactic properties as proxies for the generality of a quote:

• Fewer 3rd-person pronouns, since these commonly refer to a person or object that was introduced earlier in the discourse. Utterances that employ fewer such pronouns are easier to adapt to new contexts, and so will be considered more general.

• More indefinite articles like a and an, since they are more likely to refer to general concepts than definite articles. Quotes with more indefinite articles will be considered more general.

• Fewer past tense verbs and more present tense verbs, since the former are more likely to refer to specific previous events. Therefore utterances that employ fewer past tense verbs (and more present tense verbs) will be considered more general.

Table 4 gives the results for each of these four metrics; in each case, we show the percentage of quote pairs for which the memorable quote scores better on the generality metric (a code sketch of these proxies appears below).

[Table 4: Generality: percentage of quote pairs (IMDb-only and +Google) in which the memorable quote is more general than the non-memorable one according to the respective metric. Pairs where the metric does not distinguish between the quotes are not considered. The numeric cells are not recoverable from the extraction.]

Note that because the issue of generality is a complex one for which there is no straightforward single metric, our approach here is based on several proxies for generality, considered independently; yet, as the results show, all of these point in a consistent direction. It is an interesting open question to develop richer ways of assessing whether a quote has greater generality, in the sense that people intuitively attribute to memorable quotes.
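Each generality proxy reduces to counting a few part-of-speech categories. A sketch using NLTK's Penn Treebank tags; the tag sets, the third-person word list, and the normalizations are our guesses at reasonable choices, not necessarily the authors' exact ones (requires the punkt and tagger data packages):

```python
import nltk  # assumes nltk's punkt and averaged_perceptron_tagger data are installed

THIRD_PERSON = {"he", "she", "it", "him", "her", "they", "them",
                "his", "hers", "its", "their", "theirs"}

def generality_features(sentence):
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    words = [w.lower() for w, _ in tagged]
    tags = [t for _, t in tagged]
    n = len(words) or 1
    past = sum(t in ("VBD", "VBN") for t in tags)
    pres = sum(t in ("VBP", "VBZ") for t in tags)
    return {
        "third_person_pronouns": sum(w in THIRD_PERSON for w in words) / n,
        "indefinite_articles": sum(w in ("a", "an") for w in words) / n,
        # Past-tense share among past + present verbs, as in Table 6's note.
        "past_tense_share": past / (past + pres) if past + pres else 0.0,
    }

print(generality_features("You're gonna need a bigger boat."))
```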
3.3 "Memorable" language beyond movies

One of the motivating questions in our analysis is whether there are general principles underlying "memorable language." The results thus far suggest potential families of such principles. A further question in this direction is whether the notion of memorability can be extended across different domains, and for this we collected (and distribute on our website) 431 phrases that were explicitly designed to be memorable: advertising slogans (e.g., "Quality never goes out of style."). The focus on slogans is also in keeping with one of the initial motivations in studying memorability, namely, marketing applications; in other words, assessing whether a proposed slogan has features that are consistent with memorable text. One technical challenge is that it is not clear how to construct a collection of "non-memorable" counterparts to slogans.

However, we can still use a language-modeling approach to assess whether the textual properties of the slogans are closer to the memorable movie quotes (as one would conjecture) or to the non-memorable movie quotes. Specifically, we train one language model on memorable quotes and another on non-memorable quotes and compare how likely each slogan is to be produced according to these two models. As shown in the middle column of Table 5, we find that slogans are better predicted both lexically and syntactically by the former model. This result thus offers evidence for a concept of "memorable language" that can be applied beyond a single domain.

[Table 5: Slogans vs. memorable and non-memorable language: percentage of slogans that have higher likelihood under the memorable language model than under the non-memorable one (for each of the six language models considered). Rightmost column: for reference, the percentage of newswire sentences that have higher likelihood under the memorable language model than under the non-memorable one. The numeric cells are not recoverable from the extraction.]

We also note that the higher likelihood of slogans under a "memorable language" model is not simply occurring for the trivial reason that this model predicts all other large bodies of text better. In particular, the newswire section of the Brown corpus is predicted better at the lexical level by the language model trained on non-memorable quotes. Finally, Table 6 shows that slogans employ general language, in the sense that for each of our generality metrics, we see a slogans/memorable-quotes/non-memorable-quotes spectrum.

[Table 6: Slogans are most general when compared to memorable and non-memorable quotes. (%s of 3rd-person pronouns and indefinite articles are relative to all tokens; %s of past tense are relative to all past and present verbs.) The numeric cells are not recoverable from the extraction.]

3.4 Prediction task

We now show how the principles discussed above can provide features for a basic prediction task, corresponding to the task in our human pilot study: given a pair of quotes, identify the memorable one. Our first formulation of the prediction task uses a standard bag-of-words model. (Footnote 10: We discarded terms appearing fewer than 10 times.) If there were no information in the textual content of a quote to determine whether it were memorable, then an SVM employing bag-of-words features should perform no better than chance. Instead, though, it obtains 59.67% (10-fold cross-validation) accuracy, as shown in Table 7. We then develop models using features based on the measures formulated earlier in this section: generality measures (the four listed in Table 4); distinctiveness measures (likelihood according to 1-, 2-, and 3-gram "common language" models at the lexical and part-of-speech level for each quote in the pair, their differences, and pairwise comparisons between them); and similarity-to-slogans measures (likelihood according to 1-, 2-, and 3-gram slogan-language models at the lexical and part-of-speech level for each quote in the pair, their differences, and pairwise comparisons between them). Even a relatively small number of distinctiveness features, on their own, improve significantly over the much larger bag-of-words model. When we include additional features based on generality and language-model features measuring similarity to slogans, the performance improves further (last line of Table 7).
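The cross-domain test and the similarity-to-slogans features both come down to comparing a string's likelihood under the two quote-trained models. A sketch reusing the `LaplaceNgramLM` class from the distinctiveness example above; the three corpora here are hypothetical toy stand-ins for the real datasets:

```python
# Reuses LaplaceNgramLM from the distinctiveness sketch above.
memorable_quotes = [["may", "the", "force", "be", "with", "you"]]
nonmemorable_quotes = [["i", "will", "see", "you", "at", "dinner"]]
slogans = [["quality", "never", "goes", "out", "of", "style"]]

lm_mem = LaplaceNgramLM(memorable_quotes, n=1)
lm_non = LaplaceNgramLM(nonmemorable_quotes, n=1)

wins = sum(lm_mem.logprob(s) > lm_non.logprob(s) for s in slogans)
print(f"{100 * wins / len(slogans):.0f}% of slogans better predicted "
      f"by the memorable-quote model")
```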
Thus, the main conclusion from these prediction tasks is that abstracting notions such as distinctiveness and generality can produce relatively streamlined models that outperform much heavier-weight bag-of-words models, and can suggest steps toward approaching the performance of human judges, who, very much unlike our system, have the full cultural context in which movies occur at their disposal.

[Table 7: Prediction accuracy (SVM, 10-fold cross-validation) for the respective feature sets. Random baseline accuracy is 50%. Accuracies statistically significantly greater than bag-of-words according to a two-tailed t-test are indicated with *(p<.05) and **(p<.01). The row labels and accuracy cells are not recoverable from the extraction.]

3.5 Other characteristics

We also made some auxiliary observations that may be of interest. Specifically, we find differences in letter and sound distribution (e.g., memorable quotes, after curse-word removal, use significantly more "front sounds" (labials or front vowels such as represented by the letter i) and significantly fewer "back sounds" such as the one represented by u; Footnote 11: These findings may relate to marketing research on sound symbolism [7, 19, 40].), word complexity (e.g., memorable quotes use words with significantly more syllables) and phrase complexity (e.g., memorable quotes use fewer coordinating conjunctions). The latter two are in line with our distinctiveness hypothesis.

4 A long time ago, in a galaxy far, far away

How an item's linguistic form affects the reaction it generates has been studied in several contexts, including evaluations of product reviews [9], political speeches [12], on-line posts [13], scientific papers [14], and retweeting of Twitter posts [36]. We use a different set of features, abstracting the notions of distinctiveness and generality, in order to focus on these higher-level aspects of phrasing rather than on particular lower-level features.

Related to our interest in distinctiveness, work in advertising research has studied the effect of syntactic complexity on recognition and recall of slogans [5, 6, 24]. There may also be connections to the von Restorff isolation effect [17], which asserts that when all but one item in a list are similar in some way, memory for the different item is enhanced. Related to our interest in generality, Knapp et al. [20] surveyed subjects regarding memorable messages or pieces of advice they had received, finding that the ability to be applied to multiple concrete situations was an important factor.

Memorability, although distinct from "memorizability", relates to short- and long-term recall. Thorn and Page [34] survey sub-lexical, lexical, and semantic attributes affecting short-term memorability of lexical items. Studies of verbatim recall have also considered the task of distinguishing an exact quote from close paraphrases [3]. Investigations of long-term recall have included studies of culturally significant passages of text [29] and findings regarding the effect of rhetorical devices of alliteration [4] and of "rhythmic, poetic, and thematic constraints" [18, 26]. Finally, there are complex connections between humor and memory [32], which may lead to interactions with computational humor recognition [25].

5 I think this is the beginning of a beautiful friendship.

Motivated by the broad question of what kinds of information achieve widespread public awareness, we studied the effect of phrasing on a quote's memorability.
A challenge is that quotes differ not only in how they are worded, but also in who said them and under what circumstances; to deal with this difficulty, we constructed a controlled corpus of movie quotes in which lines deemed memorable are paired with non-memorable lines spoken by the same character at approximately the same point in the same movie. After controlling for context and situation, memorable quotes were still found to exhibit, on average (there will always be individual exceptions), significant differences from non-memorable quotes in several important respects, including measures capturing distinctiveness and generality. Our experiments with slogans show how the principles we identify can extend to a different domain.

Future work may lead to applications in marketing, advertising and education [4]. Moreover, the subtle nature of memorability, and its connection to research in psychology, suggests a range of further research directions. We believe that the framework developed here can serve as the basis for further computational studies of the process by which information takes hold in the public consciousness, and the role that language effects play in this process.

My mother thanks you. My father thanks you. My sister thanks you. And I thank you: Rebecca Hwa, Evie Kleinberg, Diana Minculescu, Alex Niculescu-Mizil, Jennifer Smith, Benjamin Zimmer, and the anonymous reviewers for helpful discussions and comments; our annotators Steven An, Lars Backstrom, Eric Baumer, Jeff Chadwick, Evie Kleinberg, and Myle Ott; and the makers of Cepacol, Robitussin, and Sudafed, whose products got us through the submission deadline. This paper is based upon work supported in part by NSF grants IIS-0910664, IIS-1016099, Google, and Yahoo!

References

[1] Eytan Adar, Li Zhang, Lada A. Adamic, and Rajan M. Lukose. Implicit structure and the dynamics of blogspace. In Workshop on the Weblogging Ecosystem, 2004.
[2] Lars Backstrom, Dan Huttenlocher, Jon Kleinberg, and Xiangyang Lan. Group formation in large social networks: Membership, growth, and evolution. In Proceedings of KDD, 2006.
[3] Elizabeth Bates, Walter Kintsch, Charles R. Fletcher, and Vittoria Giuliani. The role of pronominalization and ellipsis in texts: Some memory experiments. Journal of Experimental Psychology: Human Learning and Memory, 6(6):676–691, 1980.
[4] Frank Boers and Seth Lindstromberg. Finding ways to make phrase-learning feasible: The mnemonic effect of alliteration. System, 33(2):225–238, 2005.
[5] Samuel D. Bradley and Robert Meeds. Surface-structure transformations and advertising slogans: The case for moderate syntactic complexity. Psychology and Marketing, 19:595–619, 2002.
[6] Robert Chamblee, Robert Gilmore, Gloria Thomas, and Gary Soldow. When copy complexity can help ad readership. Journal of Advertising Research, 33(3):23–23, 1993.
[7] John Colapinto. Famous names. The New Yorker, pages 38–43, 2011.
[8] Cristian Danescu-Niculescu-Mizil and Lillian Lee. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, 2011.
[9] Cristian Danescu-Niculescu-Mizil, Gueorgi Kossinets, Jon Kleinberg, and Lillian Lee. How opinions are received by online communities: A case study on Amazon.com helpfulness votes. In Proceedings of WWW, pages 141–150, 2009.
[10] Stuart Fischoff, Esmeralda Cardenas, Angela Hernandez, Korey Wyatt, Jared Young, and Rachel Gordon. Popular movie quotes: Reflections of a people and a culture. In Annual Convention of the American Psychological Association, 2000.
[11] Daniel Gruhl, R. Guha, David Liben-Nowell, and Andrew Tomkins. Information diffusion through blogspace. In Proceedings of WWW, pages 491–501, 2004.
[12] Marco Guerini, Carlo Strapparava, and Oliviero Stock. Trusting politicians' words (for persuasive NLP). In Proceedings of CICLing, pages 263–274, 2008.
[13] Marco Guerini, Carlo Strapparava, and Gözde Özbal. Exploring text virality in social networks. In Proceedings of ICWSM (poster), 2011.
[14] Marco Guerini, Alberto Pepe, and Bruno Lepri. Do linguistic style and readability of scientific abstracts affect their virality? In Proceedings of ICWSM, 2012.
[15] Richard Jackson Harris, Abigail J. Werth, Kyle E. Bures, and Chelsea M. Bartel. Social movie quoting: What, why, and how? Ciencias Psicologicas, 2(1):35–45, 2008.
[16] Chip Heath, Chris Bell, and Emily Steinberg. Emotional selection in memes: The case of urban legends. Journal of Personality, 81(6):1028–1041, 2001.
[17] R. Reed Hunt. The subtlety of distinctiveness: What von Restorff really did. Psychonomic Bulletin & Review, 2(1):105–112, 1995.
[18] Ira E. Hyman Jr. and David C. Rubin. Memorabeatlia: A naturalistic study of long-term memory. Memory & Cognition, 18(2):205–214, 1990.
[19] Richard R. Klink. Creating brand names with meaning: The use of sound symbolism. Marketing Letters, 11(1):5–20, 2000.
[20] Mark L. Knapp, Cynthia Stohl, and Kathleen K. Reardon. "Memorable" messages. Journal of Communication, 31(4):27–41, 1981.
[21] Henry Kučera and W. Nelson Francis. Computational analysis of present-day American English. Dartmouth Publishing Group, 1967.
[22] Jure Leskovec, Lada Adamic, and Bernardo Huberman. The dynamics of viral marketing. ACM Transactions on the Web, 1(1), May 2007.
[23] Jure Leskovec, Lars Backstrom, and Jon Kleinberg. Meme-tracking and the dynamics of the news cycle. In Proceedings of KDD, pages 497–506, 2009.
[24] Tina M. Lowrey. The relation between script complexity and commercial memorability. Journal of Advertising, 35(3):7–15, 2006.
[25] Rada Mihalcea and Carlo Strapparava. Learning to laugh (automatically): Computational models for humor recognition. Computational Intelligence, 22(2):126–142, 2006.
[26] Milman Parry and Adam Parry. The making of Homeric verse: The collected papers of Milman Parry. Clarendon Press, Oxford, 1971.
[27] Everett Rogers. Diffusion of Innovations. Free Press, fourth edition, 1995.
[28] Daniel M. Romero, Brendan Meeder, and Jon Kleinberg. Differences in the mechanics of information diffusion across topics: Idioms, political hashtags, and complex contagion on Twitter. In Proceedings of WWW, pages 695–704, 2011.
[29] David C. Rubin. Very long-term memory for prose and verse. Journal of Verbal Learning and Verbal Behavior, 16(5):611–621, 1977.
[30] Nathan Schneider, Rebecca Hwa, Philip Gianfortoni, Dipanjan Das, Michael Heilman, Alan W. Black, Frederick L. Crabbe, and Noah A. Smith. Visualizing topical quotations over time to understand news discourse. Technical Report CMU-LTI-01-103, CMU, 2010.
[31] David Strang and Sarah Soule. Diffusion in organizations and social movements: From hybrid corn to poison pills. Annual Review of Sociology, 24:265–290, 1998.
[32] Hannah Summerfelt, Louis Lippman, and Ira E. Hyman Jr. The effect of humor on memory: Constrained by the pun. The Journal of General Psychology, 137(4), 2010.
[33] Eric Sun, Itamar Rosenn, Cameron Marlow, and Thomas M. Lento. Gesundheit! Modeling contagion through Facebook News Feed. In Proceedings of ICWSM, 2009.
[34] Annabel Thorn and Mike Page. Interactions Between Short-Term and Long-Term Memory in the Verbal Domain. Psychology Press, 2009.
[35] Louis L. Thurstone. A law of comparative judgment. Psychological Review, 34(4):273–286, 1927.
[36] Oren Tsur and Ari Rappoport. What's in a Hashtag? Content based prediction of the spread of ideas in microblogging communities. In Proceedings of WSDM, 2012.
[37] Fang Wu, Bernardo A. Huberman, Lada A. Adamic, and Joshua R. Tyler. Information flow in social groups. Physica A: Statistical and Theoretical Physics, 337(1-2):327–335, 2004.
[38] Shaomei Wu, Jake M. Hofman, Winter A. Mason, and Duncan J. Watts. Who says what to whom on Twitter. In Proceedings of WWW, 2011.
[39] Jaewon Yang and Jure Leskovec. Patterns of temporal variation in online media. In Proceedings of WSDM, 2011.
[40] Eric Yorkston and Geeta Menon. A sound idea: Phonetic effects of brand names on consumer judgments. Journal of Consumer Research, 31(1):43–51, 2004.

4 0.76726568 31 acl-2012-Authorship Attribution with Author-aware Topic Models

Author: Yanir Seroussi ; Fabian Bohnert ; Ingrid Zukerman

Abstract: Authorship attribution deals with identifying the authors of anonymous texts. Building on our earlier finding that the Latent Dirichlet Allocation (LDA) topic model can be used to improve authorship attribution accuracy, we show that employing a previously-suggested Author-Topic (AT) model outperforms LDA when applied to scenarios with many authors. In addition, we define a model that combines LDA and AT by representing authors and documents over two disjoint topic sets, and show that our model outperforms LDA, AT and support vector machines on datasets with many authors.

5 0.75660449 174 acl-2012-Semantic Parsing with Bayesian Tree Transducers

Author: Bevan Jones ; Mark Johnson ; Sharon Goldwater

Abstract: Many semantic parsing models use tree transformations to map between natural language and meaning representation. However, while tree transformations are central to several state-of-the-art approaches, little use has been made of the rich literature on tree automata. This paper makes the connection concrete with a tree transducer based semantic parsing model and suggests that other models can be interpreted in a similar framework, increasing the generality of their contributions. In particular, this paper further introduces a variational Bayesian inference algorithm that is applicable to a wide class of tree transducers, producing state-of-the-art semantic parsing results while remaining applicable to any domain employing probabilistic tree transducers.

6 0.7563796 38 acl-2012-Bayesian Symbol-Refined Tree Substitution Grammars for Syntactic Parsing

7 0.74687946 132 acl-2012-Learning the Latent Semantics of a Concept from its Definition

8 0.74635768 36 acl-2012-BIUTEE: A Modular Open-Source System for Recognizing Textual Entailment

9 0.74416393 80 acl-2012-Efficient Tree-based Approximation for Entailment Graph Learning

10 0.74302846 199 acl-2012-Topic Models for Dynamic Translation Model Adaptation

11 0.74001837 205 acl-2012-Tweet Recommendation with Graph Co-Ranking

12 0.73831546 84 acl-2012-Estimating Compact Yet Rich Tree Insertion Grammars

13 0.73407274 140 acl-2012-Machine Translation without Words through Substring Alignment

14 0.73383802 214 acl-2012-Verb Classification using Distributional Similarity in Syntactic and Semantic Structures

15 0.73177254 167 acl-2012-QuickView: NLP-based Tweet Search

16 0.73096848 154 acl-2012-Native Language Detection with Tree Substitution Grammars

17 0.73081499 10 acl-2012-A Discriminative Hierarchical Model for Fast Coreference at Large Scale

18 0.73065561 136 acl-2012-Learning to Translate with Multiple Objectives

19 0.72602522 63 acl-2012-Cross-lingual Parse Disambiguation based on Semantic Correspondence

20 0.72305137 217 acl-2012-Word Sense Disambiguation Improves Information Retrieval