acl acl2012 acl2012-173 knowledge-graph by maker-knowledge-mining

173 acl-2012-Self-Disclosure and Relationship Strength in Twitter Conversations


Source: pdf

Author: JinYeong Bak ; Suin Kim ; Alice Oh

Abstract: In social psychology, it is generally accepted that one discloses more of his/her personal information to someone in a strong relationship. We present a computational framework for automatically analyzing such self-disclosure behavior in Twitter conversations. Our framework uses text mining techniques to discover topics, emotions, sentiments, lexical patterns, as well as personally identifiable information (PII) and personally embarrassing information (PEI). Our preliminary results illustrate that in relationships with high relationship strength, Twitter users show significantly more frequent behaviors of self-disclosure.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract In social psychology, it is generally accepted that one discloses more of his/her personal information to someone in a strong relationship. [sent-4, score-0.265]

2 We present a computational framework for automatically analyzing such self-disclosure behavior in Twitter conversations. [sent-5, score-0.088]

3 Our framework uses text mining techniques to discover topics, emotions, sentiments, lexical patterns, as well as personally identifiable information (PII) and personally embarrassing information (PEI). [sent-6, score-0.106]

4 Our preliminary results illustrate that in relationships with high relationship strength, Twitter users show significantly more frequent behaviors of self-disclosure. [sent-7, score-0.417]

5 1 Introduction We often self-disclose, that is, share our emotions, personal information, and secrets, with our friends, family, coworkers, and even strangers. [sent-8, score-0.045]

6 Social psychologists say that the degree of self-disclosure in a relationship depends on the strength of the relationship, and strategic self-disclosure can strengthen the relationship (Duck, 2007). [sent-9, score-0.682]

7 In this paper, we study whether relationship strength has the same effect on self-disclosure of Twitter users. [sent-10, score-0.42]

8 To do this, we first present a method for computational analysis of self-disclosure in online conversations and show promising results. [sent-11, score-0.158]

9 To accommodate the largely unannotated nature of online conversation data, we take a topic-model based approach (Blei et al., 2003). [sent-12, score-0.158]

10 A similar approach was able to discover sentiments (Jo and Oh, 2011) and emotions (Kim et al.). [sent-14, score-0.194]

11 Previous work on self-disclosure for online social networks has been from communications research (Jiang et al., 2010). [sent-19, score-0.156]

12 That research relies on human judgements for analyzing self-disclosure. [sent-21, score-0.051]

13 The limitation of such research is that the data is small, so our approach of automatic analysis of self-disclosure will be able to show robust results over a much larger data set. [sent-22, score-0.072]

14 Analyzing relationship strength in online social networks has been done for Facebook and Twitter (Gilbert and Karahalios, 2009; Gilbert, 2012) and for enterprise SNS (Wu et al.). [sent-23, score-0.612]

15 In this paper, we estimate relationship strength simply based on the duration and frequency of interaction. [sent-25, score-0.457]

16 We then look at the correlation between self-disclosure and relationship strength and present the preliminary results that show a positive and significant correlation. [sent-26, score-0.464]
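As a rough sketch of how such a correlation could be computed (the per-dyad arrays and their values below are illustrative assumptions, not the authors' data):

```python
# Hypothetical sketch: correlate relationship strength with self-disclosure.
# The arrays are illustrative per-dyad values, not the authors' data.
from scipy.stats import spearmanr

strength = [0.2, 1.5, 3.0, 4.8, 7.1]         # e.g., chains per month for five dyads
disclosure = [0.01, 0.03, 0.05, 0.08, 0.12]  # proportion of self-disclosing tweets

rho, p_value = spearmanr(strength, disclosure)
print(f"Spearman rho={rho:.3f}, p={p_value:.3g}")
```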

17 2 Data and Methodology Twitter is widely used for conversations (Ritter et al., 2010). [sent-27, score-0.133]

18 Prior work has looked at Twitter for different aspects of conversations (Boyd et al.). [sent-28, score-0.161]

19 Ours is the first paper to analyze the degree of self-disclosure in conversational tweets. [sent-32, score-0.122]

20 In this section, we describe the details of our Twitter conversation data and our methodology for analyzing relationship strength and self-disclosure. [sent-33, score-0.631]

21 2.1 Twitter Conversation Data A Twitter conversation is a chain of tweets where two users are consecutively replying to each other’s tweets using the Twitter reply button. [sent-35, score-0.436]
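A minimal sketch of how such chains could be reconstructed from reply metadata (the tweet schema with `id`, `user`, and `in_reply_to` fields is an assumption for illustration):

```python
# Hypothetical sketch: rebuild dyadic conversation chains from reply metadata.
# Assumes each tweet is a dict like {"id": ..., "user": ..., "in_reply_to": ...},
# where "in_reply_to" is the id of the replied-to tweet or None.
def build_chains(tweets):
    by_id = {t["id"]: t for t in tweets}
    replied_to = {t["in_reply_to"] for t in tweets if t["in_reply_to"]}
    chains = []
    for t in tweets:
        if t["id"] in replied_to:   # not the last tweet of a chain
            continue
        chain = [t]
        while chain[-1]["in_reply_to"] in by_id:  # walk back to the root
            chain.append(by_id[chain[-1]["in_reply_to"]])
        chain.reverse()
        if len({x["user"] for x in chain}) == 2:  # keep only dyadic chains
            chains.append(chain)
    return chains
```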

22 We identified dyads of English-tweeting users who had at least three conversations from October 2011 to December 2011 and collected their tweets for that duration. [sent-36, score-0.127]

24 To protect users’ privacy, we anonymized the data to remove all identifying information. [sent-39, score-0.025]

25 This dataset consists of 131,633 users, 2,283,821 chains and 11,196,397 tweets. [sent-40, score-0.049]

26 2.2 Relationship Strength Research in social psychology shows that relationship strength is characterized by interaction frequency and closeness of a relationship between two people (Granovetter, 1973; Levin and Cross, 2004). [sent-42, score-0.96]

27 Hence, we suggest measuring the relationship strength of the conversational dyads via the following two metrics. [sent-43, score-0.567]

28 Chain frequency (CF) measures the number of conversational chains between the dyad averaged per month. [sent-44, score-0.338]

29 Chain length (CL) measures the length of conversational chains between the dyad averaged per month. [sent-45, score-0.301]

30 Intuitively, high CF or CL for a dyad means the relationship is strong. [sent-46, score-0.392]
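Both metrics reduce to simple per-dyad bookkeeping; a sketch under the assumption that each chain is represented as a (dyad, length) pair and that the observation window is the three months of the dataset:

```python
# Hypothetical sketch: chain frequency (CF) and chain length (CL) per dyad.
# Assumes each chain is a (dyad, length) pair, dyad being a frozenset of two users.
from collections import defaultdict

N_MONTHS = 3  # the dataset spans October-December 2011

def cf_cl(chains):
    counts, totals = defaultdict(int), defaultdict(int)
    for dyad, length in chains:
        counts[dyad] += 1
        totals[dyad] += length
    cf = {d: counts[d] / N_MONTHS for d in counts}  # chains per month
    cl = {d: totals[d] / N_MONTHS for d in totals}  # chain tweets per month
    return cf, cl
```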

31 2.3 Self-Disclosure Social psychology literature asserts that self-disclosure consists of personal information and open communication composed of the following five elements (Montgomery, 1982). [sent-48, score-0.214]

32 Negative openness is how much disagreement or negative feeling one expresses about a situation or the communicative partner. [sent-49, score-0.448]

33 In Twitter conversations, we analyze sentiment using the aspect and sentiment unification model (ASUM) (Jo and Oh, 2011), which is based on LDA (Blei et al., 2003). [sent-50, score-0.133]
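ASUM itself is a specialized sentence-level sentiment-topic model; as a crude illustrative stand-in only (not ASUM, and with made-up seed lists), per-tweet sentiment could be approximated by seed-word counting:

```python
# Crude illustrative stand-in, NOT ASUM: label a tweet by sentiment seed words.
POS_SEEDS = {"good", "great", "love", "happy", "thanks"}  # made-up seed list
NEG_SEEDS = {"bad", "hate", "angry", "sad", "wrong"}      # made-up seed list

def crude_sentiment(tweet_text):
    words = set(tweet_text.lower().split())
    pos, neg = len(words & POS_SEEDS), len(words & NEG_SEEDS)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"  # a rough proxy for negative openness
    return "neutral"
```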

34 Nonverbal openness includes facial expressions, vocal tone, bodily postures or movements. [sent-55, score-0.444]

35 Since tweets do not show these, we look at emoticons, ‘lol’ (laughing out loud) and ‘xxx’ (kisses) for these nonverbal elements. [sent-56, score-0.279]
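A sketch of this surface-pattern matching (the emoticon inventory is abbreviated and illustrative):

```python
# Hypothetical sketch: detect nonverbal-openness markers in a tweet.
import re

EMOTICONS = re.compile(r"(:\)|:\(|:D|;\)|:-\)|:-\(|<3)")  # abbreviated list
LOL_XXX = re.compile(r"\b(lo+l|x{3,})\b", re.IGNORECASE)  # 'lol', 'xxx'

def has_nonverbal(tweet_text):
    return bool(EMOTICONS.search(tweet_text) or LOL_XXX.search(tweet_text))
```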

36 According to Derks et al. (2007), emoticons are used as substitutes for facial expressions or vocal tones in socio-emotional contexts. [sent-58, score-0.202]

37 The methodology used for identifying profanity is described in the next section. [sent-60, score-0.292]

38 Emotional openness is how much one discloses his/her feelings and moods. [sent-61, score-0.45]

39 We look for tweets that contain words identified as the most common expressions of feelings in blogs, as found in Harris and Kamvar (2009). [sent-64, score-0.319]
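A similar lookup can flag emotional openness; the word list below is a tiny illustrative stand-in for the feelings lexicon of Harris and Kamvar (2009):

```python
# Hypothetical sketch: flag emotional openness via a feelings lexicon.
FEELING_WORDS = {"happy", "sad", "excited", "lonely", "anxious", "proud"}

def has_emotion(tweet_text):
    return any(w in FEELING_WORDS for w in tweet_text.lower().split())
```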

40 Receptive openness and General-style openness are difficult to get from tweets, and they are not defined precisely in the literature, so we do not consider these here. [sent-65, score-0.72]

41 Automatically identifying these is quite difficult, but there are certain topics that are indicative of PII and PEI, such as family, money, sickness and location, so we can use a widely-used topic model, LDA (Blei et al., 2003), to discover topics and annotate them using MTurk for PII, PEI, and profanity. [sent-68, score-0.11]

43 We asked the Turkers to read the conversation chains representing the topics discovered by LDA and to mark the conversations that contain PII and PEI. [sent-70, score-0.4]

44 From this annotation, we identified five topics for profanity, ten topics for PII, and eight topics for PEI. [sent-71, score-0.279]
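A sketch of this pipeline using gensim's LDA; the annotated topic-index sets would come from the MTurk step, and all names here are illustrative assumptions:

```python
# Hypothetical sketch: LDA topics plus human topic labels flag PII/PEI.
# `pii_topics`, `pei_topics` are topic-index sets from the MTurk annotation step.
from gensim import corpora, models

def flag_conversations(tokenized_convs, pii_topics, pei_topics, num_topics=100):
    dictionary = corpora.Dictionary(tokenized_convs)
    corpus = [dictionary.doc2bow(doc) for doc in tokenized_convs]
    lda = models.LdaModel(corpus, num_topics=num_topics, id2word=dictionary)
    flags = []
    for bow in corpus:
        topics = lda.get_document_topics(bow)
        top = max(topics, key=lambda t: t[1])[0] if topics else None
        flags.append({"pii": top in pii_topics, "pei": top in pei_topics})
    return flags
```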

45 The profanity words identified this way include nigga, lmao, shit, fuck, lmfao, ass, bitch. [sent-76, score-0.264]

46 Figure 1: Degree of self-disclosure depending on various relationship strength metrics: (a) chain frequency and (b) conversation length. [sent-98, score-0.42]

47 The x axis shows relationship strength according to tweeting behavior (chain frequency and chain length), and the y axis shows the proportion of self-disclosure in terms of negative openness, emotional openness, profanity, and PII and PEI. [sent-99, score-0.903]

48 3 Results and Discussions Chain frequency (CF) and chain length (CL) reflect the dyad’s tweeting behaviors. [sent-100, score-0.166]

49 When two users have stronger relationships, they show more negative openness, nonverbal openness, profanity, and PEI. [sent-102, score-0.24]

50 However, weaker relationships tend to show more PII and emotions. [sent-104, score-0.153]

51 A closer look at the data reveals that PII topics are related to cities where they live, time of day, and birthday. [sent-105, score-0.153]

52 This shows that the weaker relationships, usually new acquaintances, use PII to introduce themselves or send trivial greetings for birthdays. [sent-106, score-0.122]

53 Higher emotional openness in weaker relationships looks strange at first, but similar to PII, emotion in weak relationships is usually expressed as greetings, reactions to baby or pet photos, or other shallow expressions. [sent-107, score-0.972]

54 It is interesting to look at outliers: dyads in the very strong and very weak relationship groups. [sent-108, score-0.488]

55 Table 3 summarizes the self-disclosure behaviors of these outliers. [sent-109, score-0.05]

56 There is a clear pattern that stronger relationships show more nonverbal openness and negative openness. (Table 2: the most prominent topics in strong and weak relationships.) [sent-110, score-0.388]

57 In Figure 1, emotional openness does not differ for the strong and weak relationship groups. [sent-112, score-0.856]

58 We can see why this is when we look at the topics for the strong and weak groups. [sent-113, score-0.286]

59 Table 2 shows the topics that are most prominent in the strong relationships, and they include daily greetings, plans, nonverbal emotions such as ‘lol’, ‘omg’, and profanity. [sent-114, score-0.428]

60 In weak relationships, the prominent topics illustrate the prevalence of initial getting-to-know-you conversations on Twitter. [sent-115, score-0.359]

61 They welcome and greet each other, chat about kids and pets, and offer sympathy about feeling bad. [sent-116, score-0.034]

62 One interesting way to use our analysis is in identifying a rare situation that deviates from the general pattern. [sent-117, score-0.116]

63 Table 3: Comparing the top 1% and the bottom 1% of relationships as measured by the combination of CF and CL. [sent-141, score-0.103]

64 From ‘Emotion’ to PEI, all values are average proportions of tweets containing each self-disclosure behavior. [sent-142, score-0.091]

65 Strong relationships show more negative sentiment, profanity, and PEI, and weak relationships show more positive sentiment and PII. [sent-143, score-0.404]
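A sketch of such an extremes comparison (how CF and CL are combined into one strength score is an assumption; any monotone combination would do):

```python
# Hypothetical sketch: compare self-disclosure in the top 1% vs bottom 1% of
# dyads by a combined strength score (the combination of CF and CL is assumed).
import numpy as np

def compare_extremes(strength, disclosure):
    strength, disclosure = np.asarray(strength), np.asarray(disclosure)
    hi, lo = np.quantile(strength, 0.99), np.quantile(strength, 0.01)
    return {"strong_mean": disclosure[strength >= hi].mean(),
            "weak_mean": disclosure[strength <= lo].mean()}
```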

66 One such case is a dyad that is linked weakly but shows a high degree of self-disclosure. [sent-145, score-0.191]

67 In Figure 2, we show an example of a conversation with a high degree of self-disclosure by a dyad who shares only one conversation in our dataset spanning two months. [sent-147, score-0.472]
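Such outliers can be surfaced by filtering for weakly linked dyads whose disclosure proportion exceeds a threshold (both thresholds below are illustrative, not from the paper):

```python
# Hypothetical sketch: surface weakly linked dyads with high self-disclosure.
def find_anomalies(dyads, max_cf=1.0, min_disclosure=0.2):
    """dyads: iterable of dicts with 'cf' (chains/month) and 'disclosure' keys."""
    return [d for d in dyads
            if d["cf"] <= max_cf and d["disclosure"] >= min_disclosure]
```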

68 4 Conclusion and Future Work We looked at relationship strength between Twitter conversational partners and how much they self-disclose to each other. [sent-148, score-0.532]

69 We found that people disclose more to closer friends, confirming the social psychology studies, but they show more positive sentiment in weak relationships than in strong ones. [sent-149, score-0.56]

70 This reflects the social norm toward first-time acquaintances on Twitter. [sent-150, score-0.179]

71 Also, emotional openness does not change significantly with relationship strength. [sent-151, score-0.699]

72 We think this may be due to the inherent difficulty of truly identifying emotions on Twitter. [sent-152, score-0.158]

73 Identifying emotion merely based on keywords captures mostly shallow emotions, and deeper emotional openness either does not occur much on Twitter or is not captured by keyword matching. (Figure 2: Example of a Twitter conversation in a weak relationship that shows a high degree of self-disclosure.) [sent-153, score-1.111]

74 With our automatic analysis, we showed that when Twitter users have conversations, they control self-disclosure depending on the relationship strength. [sent-155, score-0.264]

75 We showed the results of measuring the relationship strength of a Twitter conversational dyad with chain frequency and length. [sent-156, score-0.79]

76 We also showed the results of automatically analyzing self-disclosure behaviors using topic modeling. [sent-157, score-0.101]

77 This is ongoing work, and we are looking to improve methods for analyzing relationship strength and self-disclosure, especially emotions, PII and PEI. [sent-158, score-0.471]

78 For relationship strength, we will consider not only interaction frequency, but also network distance and relationship duration. [sent-159, score-0.481]

79 Emoticons and social interaction on the internet: the importance of social context. [sent-198, score-0.295]

80 From perception to behavior: Disclosure reciprocity and the intensification of intimacy in computer-mediated communication. [sent-243, score-0.021]

81 Aspect and sentiment unification model for online review analysis. [sent-250, score-0.107]

82 The strength of weak ties you can trust: The mediating role of trust in effective knowledge transfer. [sent-273, score-0.348]

83 Verbal immediacy as a behavioral indicator of open communication content. [sent-279, score-0.045]

84 Detecting professional versus personal closeness using an enterprise social network site. [sent-316, score-0.254]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('pii', 0.456), ('openness', 0.36), ('pei', 0.251), ('profanity', 0.24), ('relationship', 0.224), ('twitter', 0.221), ('strength', 0.196), ('dyad', 0.168), ('nonverbal', 0.144), ('conversation', 0.133), ('conversations', 0.133), ('emotions', 0.133), ('social', 0.131), ('emotion', 0.125), ('weak', 0.116), ('emotional', 0.115), ('relationships', 0.103), ('emoticons', 0.096), ('tweets', 0.091), ('topics', 0.085), ('conversational', 0.084), ('chain', 0.081), ('cf', 0.074), ('bak', 0.072), ('greetings', 0.072), ('selfdisclosure', 0.072), ('ritter', 0.067), ('gilbert', 0.063), ('dyads', 0.063), ('lol', 0.057), ('psychology', 0.052), ('analyzing', 0.051), ('sentiment', 0.051), ('behaviors', 0.05), ('weaker', 0.05), ('chains', 0.049), ('oh', 0.048), ('acquaintances', 0.048), ('asum', 0.048), ('boyd', 0.048), ('derks', 0.048), ('discloses', 0.048), ('humphreys', 0.048), ('landis', 0.048), ('suin', 0.048), ('tokuhisa', 0.048), ('tweeting', 0.048), ('vaassen', 0.048), ('xxx', 0.048), ('communication', 0.045), ('personal', 0.045), ('look', 0.044), ('facial', 0.042), ('closeness', 0.042), ('vocal', 0.042), ('privacy', 0.042), ('feelings', 0.042), ('strong', 0.041), ('kim', 0.041), ('cl', 0.041), ('users', 0.04), ('jo', 0.04), ('blei', 0.039), ('degree', 0.038), ('personally', 0.038), ('friends', 0.038), ('behavior', 0.037), ('frequency', 0.037), ('sent', 0.037), ('kai', 0.036), ('trust', 0.036), ('enterprise', 0.036), ('feeling', 0.034), ('levin', 0.034), ('interaction', 0.033), ('harris', 0.032), ('negative', 0.031), ('unification', 0.031), ('sentiments', 0.031), ('axis', 0.031), ('discover', 0.03), ('tweet', 0.029), ('cherry', 0.029), ('lda', 0.028), ('looked', 0.028), ('korea', 0.027), ('feel', 0.027), ('methodology', 0.027), ('family', 0.026), ('prominent', 0.025), ('stronger', 0.025), ('identifying', 0.025), ('online', 0.025), ('tie', 0.024), ('closer', 0.024), ('identified', 0.024), ('situation', 0.023), ('expressions', 0.022), ('people', 0.021), ('intimacy', 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000002 173 acl-2012-Self-Disclosure and Relationship Strength in Twitter Conversations

Author: JinYeong Bak ; Suin Kim ; Alice Oh

Abstract: In social psychology, it is generally accepted that one discloses more of his/her personal information to someone in a strong relationship. We present a computational framework for automatically analyzing such self-disclosure behavior in Twitter conversations. Our framework uses text mining techniques to discover topics, emotions, sentiments, lexical patterns, as well as personally identifiable information (PII) and personally embarrassing information (PEI). Our preliminary results illustrate that in relationships with high relationship strength, Twitter users show significantly more frequent behaviors of self-disclosure.

2 0.18723381 21 acl-2012-A System for Real-time Twitter Sentiment Analysis of 2012 U.S. Presidential Election Cycle

Author: Hao Wang ; Dogan Can ; Abe Kazemzadeh ; Francois Bar ; Shrikanth Narayanan

Abstract: This paper describes a system for real-time analysis of public sentiment toward presidential candidates in the 2012 U.S. election as expressed on Twitter, a microblogging service. Twitter has become a central site where people express their opinions and views on political parties and candidates. Emerging events or news are often followed almost instantly by a burst in Twitter volume, providing a unique opportunity to gauge the relation between expressed public sentiment and electoral events. In addition, sentiment analysis can help explore how these events affect public opinion. While traditional content analysis takes days or weeks to complete, the system demonstrated here analyzes sentiment in the entire Twitter traffic about the election, delivering results instantly and continuously. It offers the public, the media, politicians and scholars a new and timely perspective on the dynamics of the electoral process and public opinion. 1

3 0.12794209 205 acl-2012-Tweet Recommendation with Graph Co-Ranking

Author: Rui Yan ; Mirella Lapata ; Xiaoming Li

Abstract: Twitter enables users to send and read text-based posts of up to 140 characters, known as tweets. As one of the most popular micro-blogging services, Twitter attracts millions of users, producing millions of tweets daily. Shared information through this service spreads faster than would have been possible with traditional sources, however the proliferation of user-generated content poses challenges to browsing and finding valuable information. In this paper we propose a graph-theoretic model for tweet recommendation that presents users with items they may have an interest in. Our model ranks tweets and their authors simultaneously using several networks: the social network connecting the users, the network connecting the tweets, and a third network that ties the two together. Tweet and author entities are ranked following a co-ranking algorithm based on the intuition that there is a mutually reinforcing relationship between tweets and their authors that could be reflected in the rankings. We show that this framework can be parametrized to take into account user preferences, the popularity of tweets and their authors, and diversity. Experimental evaluation on a large dataset shows that our model outperforms competitive approaches by a large margin.

4 0.12263981 70 acl-2012-Demonstration of IlluMe: Creating Ambient According to Instant Message Logs

Author: Lun-Wei Ku ; Cheng-Wei Sun ; Ya-Hsin Hsueh

Abstract: We present IlluMe, a software tool pack which creates a personalized ambient using the music and lighting. IlluMe includes an emotion analysis software, the small space ambient lighting, and a multimedia controller. The software analyzes emotional changes from instant message logs and corresponds the detected emotion to the best sound and light settings. The ambient lighting can sparkle with different forms of light and the smart phone can broadcast music respectively according to different atmosphere. All settings can be modified by the multimedia controller at any time and the new settings will be feedback to the emotion analysis software. The IlluMe system, equipped with the learning function, provides a link between residential situation and personal emotion. It works in a Chinese chatting environment to illustrate the language technology in life.

5 0.11227866 171 acl-2012-SITS: A Hierarchical Nonparametric Model using Speaker Identity for Topic Segmentation in Multiparty Conversations

Author: Viet-An Nguyen ; Jordan Boyd-Graber ; Philip Resnik

Abstract: One of the key tasks for analyzing conversational data is segmenting it into coherent topic segments. However, most models of topic segmentation ignore the social aspect of conversations, focusing only on the words used. We introduce a hierarchical Bayesian nonparametric model, Speaker Identity for Topic Segmentation (SITS), that discovers (1) the topics used in a conversation, (2) how these topics are shared across conversations, (3) when these topics shift, and (4) a person-specific tendency to introduce new topics. We evaluate against current unsupervised segmentation models to show that including person-specific information improves segmentation performance on meeting corpora and on political debates. Moreover, we provide evidence that SITS captures an individual's tendency to introduce new topics in political contexts, via analysis of the 2008 US presidential debates and the television program Crossfire.

6 0.091262303 88 acl-2012-Exploiting Social Information in Grounded Language Learning via Grammatical Reduction

7 0.088663951 167 acl-2012-QuickView: NLP-based Tweet Search

8 0.086474992 180 acl-2012-Social Event Radar: A Bilingual Context Mining and Sentiment Analysis Summarization System

9 0.08539816 91 acl-2012-Extracting and modeling durations for habits and events from Twitter

10 0.069689117 86 acl-2012-Exploiting Latent Information to Predict Diffusions of Novel Topics on Social Networks

11 0.068079576 2 acl-2012-A Broad-Coverage Normalization System for Social Media Language

12 0.061441362 98 acl-2012-Finding Bursty Topics from Microblogs

13 0.053765226 61 acl-2012-Cross-Domain Co-Extraction of Sentiment and Topic Lexicons

14 0.05244913 144 acl-2012-Modeling Review Comments

15 0.051596597 28 acl-2012-Aspect Extraction through Semi-Supervised Modeling

16 0.049719639 124 acl-2012-Joint Inference of Named Entity Recognition and Normalization for Tweets

17 0.049590431 100 acl-2012-Fine Granular Aspect Analysis using Latent Structural Models

18 0.038070351 153 acl-2012-Named Entity Disambiguation in Streaming Data

19 0.037590101 151 acl-2012-Multilingual Subjectivity and Sentiment Analysis

20 0.037323754 177 acl-2012-Sentence Dependency Tagging in Online Question Answering Forums
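The tf-idf scores in the list above rank candidate papers against this one, but the page does not state how they are computed; a standard choice is cosine similarity between tf-idf document vectors. Below is a minimal sketch of that approach, assuming the paper abstracts are available as plain strings — the `abstracts` contents and the ranking it prints are illustrative placeholders, not data from this page.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: this paper's abstract first, then candidate abstracts.
abstracts = [
    "self-disclosure and relationship strength in twitter conversations",
    "exploiting social information in grounded language learning",
    "quickview nlp-based tweet search",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)  # rows: documents, cols: terms

# Cosine similarity of this paper (row 0) against every candidate.
sims = cosine_similarity(X[0:1], X[1:]).ravel()

# Rank candidates by similarity, mirroring the simIndex/simValue listing.
for rank, (score, doc_id) in enumerate(
        sorted(zip(sims, range(1, len(abstracts))), reverse=True), start=1):
    print(rank, round(float(score), 9), doc_id)
```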


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.099), (1, 0.133), (2, 0.114), (3, -0.004), (4, -0.037), (5, -0.043), (6, 0.205), (7, 0.007), (8, 0.013), (9, 0.115), (10, 0.008), (11, 0.011), (12, -0.003), (13, 0.016), (14, 0.036), (15, -0.026), (16, -0.025), (17, -0.047), (18, 0.009), (19, -0.006), (20, -0.007), (21, -0.004), (22, 0.021), (23, -0.013), (24, -0.041), (25, 0.028), (26, 0.014), (27, 0.032), (28, 0.064), (29, -0.074), (30, -0.017), (31, -0.044), (32, 0.075), (33, 0.144), (34, 0.106), (35, 0.045), (36, -0.035), (37, -0.039), (38, -0.026), (39, 0.059), (40, -0.098), (41, -0.094), (42, 0.065), (43, -0.039), (44, 0.185), (45, 0.004), (46, -0.123), (47, -0.095), (48, 0.071), (49, 0.097)]
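The dense 50-dimensional LSI vector above can carry negative weights, so a probability-based distance would be inappropriate; cosine similarity is the natural score, and each simValue in the list that follows could plausibly be computed this way. A minimal sketch under that assumption — `other_vec` is an invented stand-in for a candidate paper's LSI vector, not data from this page.

```python
import numpy as np

# LSI topic weights for this paper, as listed above (indices 0..49).
this_vec = np.array([-0.099, 0.133, 0.114, -0.004, -0.037, -0.043, 0.205,
                     0.007, 0.013, 0.115, 0.008, 0.011, -0.003, 0.016,
                     0.036, -0.026, -0.025, -0.047, 0.009, -0.006, -0.007,
                     -0.004, 0.021, -0.013, -0.041, 0.028, 0.014, 0.032,
                     0.064, -0.074, -0.017, -0.044, 0.075, 0.144, 0.106,
                     0.045, -0.035, -0.039, -0.026, 0.059, -0.098, -0.094,
                     0.065, -0.039, 0.185, 0.004, -0.123, -0.095, 0.071,
                     0.097])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity; well-defined even for signed LSI weights."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up stand-in for another paper's LSI vector in the same latent space.
rng = np.random.default_rng(0)
other_vec = rng.normal(scale=0.05, size=this_vec.shape)
print(cosine(this_vec, other_vec))
```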

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96477658 173 acl-2012-Self-Disclosure and Relationship Strength in Twitter Conversations

Author: JinYeong Bak ; Suin Kim ; Alice Oh

Abstract: In social psychology, it is generally accepted that one discloses more of his/her personal information to someone in a strong relationship. We present a computational framework for automatically analyzing such self-disclosure behavior in Twitter conversations. Our framework uses text mining techniques to discover topics, emotions, sentiments, lexical patterns, as well as personally identifiable information (PII) and personally embarrassing information (PEI). Our preliminary results illustrate that in relationships with high relationship strength, Twitter users show significantly more frequent behaviors of self-disclosure.

2 0.7027601 70 acl-2012-Demonstration of IlluMe: Creating Ambient According to Instant Message Logs

Author: Lun-Wei Ku ; Cheng-Wei Sun ; Ya-Hsin Hsueh

Abstract: We present IlluMe, a software tool pack that creates a personalized ambience using music and lighting. IlluMe includes emotion analysis software, small-space ambient lighting, and a multimedia controller. The software analyzes emotional changes from instant message logs and maps the detected emotion to the best sound and light settings. The ambient lighting can sparkle with different forms of light, and the smartphone can play music matched to the atmosphere. All settings can be modified through the multimedia controller at any time, and the new settings are fed back to the emotion analysis software. The IlluMe system, equipped with a learning function, links the residential situation to personal emotion. It works in a Chinese chat environment, illustrating language technology in everyday life.

3 0.64985132 180 acl-2012-Social Event Radar: A Bilingual Context Mining and Sentiment Analysis Summarization System

Author: Wen-Tai Hsieh ; Chen-Ming Wu ; Tsun Ku ; Seng-cho T. Chou

Abstract: Social Event Radar is a new social-networking-based service platform that aims to alert on and monitor merchandise flaws, food-safety issues, unexpected disease outbreaks, or campaign issues directed at the government, enterprises of any kind, or election parties. Through a keyword expansion detection module and a bilingual sentiment and opinion analysis toolkit, it builds an event-specific social dashboard and delivers results that help authorities plan a “risk control” strategy. With the rapid development of social networks, people can now easily publish their opinions on the Internet, and can obtain opinions from others within seconds even when they do not know each other. A typical approach to obtaining the required information is to use a search engine with relevant keywords. We therefore take social media and forums as our major data sources and aim to collect specific issues efficiently and effectively in this work.

4 0.57512969 21 acl-2012-A System for Real-time Twitter Sentiment Analysis of 2012 U.S. Presidential Election Cycle

Author: Hao Wang ; Dogan Can ; Abe Kazemzadeh ; Francois Bar ; Shrikanth Narayanan

Abstract: This paper describes a system for real-time analysis of public sentiment toward presidential candidates in the 2012 U.S. election as expressed on Twitter, a microblogging service. Twitter has become a central site where people express their opinions and views on political parties and candidates. Emerging events or news are often followed almost instantly by a burst in Twitter volume, providing a unique opportunity to gauge the relation between expressed public sentiment and electoral events. In addition, sentiment analysis can help explore how these events affect public opinion. While traditional content analysis takes days or weeks to complete, the system demonstrated here analyzes sentiment in the entire Twitter traffic about the election, delivering results instantly and continuously. It offers the public, the media, politicians and scholars a new and timely perspective on the dynamics of the electoral process and public opinion.

5 0.56663835 6 acl-2012-A Comprehensive Gold Standard for the Enron Organizational Hierarchy

Author: Apoorv Agarwal ; Adinoyi Omuya ; Aaron Harnly ; Owen Rambow

Abstract: Many researchers have attempted to predict the Enron corporate hierarchy from the data. This work, however, has been hampered by a lack of data. We present a new, large, and freely available gold-standard hierarchy. Using our new gold standard, we show that a simple lower bound for social network-based systems outperforms an upper bound on the approach taken by current NLP systems.

6 0.55582917 88 acl-2012-Exploiting Social Information in Grounded Language Learning via Grammatical Reduction

7 0.50831372 205 acl-2012-Tweet Recommendation with Graph Co-Ranking

8 0.50476146 86 acl-2012-Exploiting Latent Information to Predict Diffusions of Novel Topics on Social Networks

9 0.37080285 2 acl-2012-A Broad-Coverage Normalization System for Social Media Language

10 0.36985263 167 acl-2012-QuickView: NLP-based Tweet Search

11 0.3686389 91 acl-2012-Extracting and modeling durations for habits and events from Twitter

12 0.32416809 39 acl-2012-Beefmoves: Dissemination, Diversity, and Dynamics of English Borrowings in a German Hip Hop Forum

13 0.30746311 171 acl-2012-SITS: A Hierarchical Nonparametric Model using Speaker Identity for Topic Segmentation in Multiparty Conversations

14 0.27109164 160 acl-2012-Personalized Normalization for a Multilingual Chat System

15 0.26355219 110 acl-2012-Historical Analysis of Legal Opinions with a Sparse Mixed-Effects Latent Variable Model

16 0.24385132 53 acl-2012-Combining Textual Entailment and Argumentation Theory for Supporting Online Debates Interactions

17 0.2396785 77 acl-2012-Ecological Evaluation of Persuasive Messages Using Google AdWords

18 0.23775037 124 acl-2012-Joint Inference of Named Entity Recognition and Normalization for Tweets

19 0.2195427 161 acl-2012-Polarity Consistency Checking for Sentiment Dictionaries

20 0.21734856 28 acl-2012-Aspect Extraction through Semi-Supervised Modeling


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(25, 0.033), (26, 0.024), (28, 0.026), (30, 0.024), (37, 0.019), (39, 0.051), (69, 0.042), (71, 0.024), (74, 0.013), (82, 0.035), (84, 0.015), (85, 0.014), (90, 0.108), (92, 0.071), (94, 0.363), (99, 0.042)]
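In contrast to the dense LSI vector, the LDA weights above are stored sparsely as (topicId, topicWeight) pairs, with near-zero topics omitted. A minimal sketch of scoring a candidate paper under this representation, assuming cosine similarity with absent topics treated as zero — `other_topics` is an invented stand-in, not data from this page.

```python
import math

# Sparse LDA topic weights for this paper, as listed above: {topicId: weight}.
this_topics = {25: 0.033, 26: 0.024, 28: 0.026, 30: 0.024, 37: 0.019,
               39: 0.051, 69: 0.042, 71: 0.024, 74: 0.013, 82: 0.035,
               84: 0.015, 85: 0.014, 90: 0.108, 92: 0.071, 94: 0.363,
               99: 0.042}

def sparse_cosine(a: dict, b: dict) -> float:
    """Cosine similarity over sparse topic vectors; absent topics count as 0."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Made-up stand-in for another paper's sparse LDA topic vector.
other_topics = {13: 0.200, 39: 0.120, 90: 0.050, 94: 0.410}
print(round(sparse_cosine(this_topics, other_topics), 9))
```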

similar papers list:

simIndex simValue paperId paperTitle

1 0.95822448 204 acl-2012-Translation Model Size Reduction for Hierarchical Phrase-based Statistical Machine Translation

Author: Seung-Wook Lee ; Dongdong Zhang ; Mu Li ; Ming Zhou ; Hae-Chang Rim

Abstract: In this paper, we propose a novel method for reducing the size of the translation model in hierarchical phrase-based machine translation systems. Previous approaches prune infrequent or unreliable entries based on statistics, but at the cost of reduced translation coverage. In contrast, the proposed method prunes only ineffective entries, based on an estimate of the information redundancy encoded in phrase pairs and hierarchical rules, and thus preserves the search space of SMT decoders as much as possible. Experimental results on Chinese-to-English machine translation tasks show that our method can reduce the translation model to almost half its size with only a tiny degradation in translation performance.

same-paper 2 0.86379427 173 acl-2012-Self-Disclosure and Relationship Strength in Twitter Conversations

Author: JinYeong Bak ; Suin Kim ; Alice Oh

Abstract: In social psychology, it is generally accepted that one discloses more of his/her personal information to someone in a strong relationship. We present a computational framework for automatically analyzing such self-disclosure behavior in Twitter conversations. Our framework uses text mining techniques to discover topics, emotions, sentiments, lexical patterns, as well as personally identifiable information (PII) and personally embarrassing information (PEI). Our preliminary results illustrate that in relationships with high relationship strength, Twitter users show significantly more frequent behaviors of self-disclosure.

3 0.86361182 6 acl-2012-A Comprehensive Gold Standard for the Enron Organizational Hierarchy

Author: Apoorv Agarwal ; Adinoyi Omuya ; Aaron Harnly ; Owen Rambow

Abstract: Many researchers have attempted to predict the Enron corporate hierarchy from the data. This work, however, has been hampered by a lack of data. We present a new, large, and freely available gold-standard hierarchy. Using our new gold standard, we show that a simple lower bound for social network-based systems outperforms an upper bound on the approach taken by current NLP systems.

4 0.86088771 176 acl-2012-Sentence Compression with Semantic Role Constraints

Author: Katsumasa Yoshikawa ; Ryu Iida ; Tsutomu Hirao ; Manabu Okumura

Abstract: For sentence compression, we propose new semantic constraints to directly capture the relations between a predicate and its arguments, whereas the existing approaches have focused on relatively shallow linguistic properties, such as lexical and syntactic information. These constraints are based on semantic roles and superior to the constraints of syntactic dependencies. Our empirical evaluation on the Written News Compression Corpus (Clarke and Lapata, 2008) demonstrates that our system achieves results comparable to other state-of-the-art techniques.

5 0.85925531 179 acl-2012-Smaller Alignment Models for Better Translations: Unsupervised Word Alignment with the l0-norm

Author: Ashish Vaswani ; Liang Huang ; David Chiang

Abstract: Two decades after their invention, the IBM word-based translation models, widely available in the GIZA++ toolkit, remain the dominant approach to word alignment and an integral part of many statistical translation systems. Although many models have surpassed them in accuracy, none have supplanted them in practice. In this paper, we propose a simple extension to the IBM models: an ℓ0 prior to encourage sparsity in the word-to-word translation model. We explain how to implement this extension efficiently for large-scale data (also released as a modification to GIZA++) and demonstrate, in experiments on Czech, Arabic, Chinese, and Urdu to English translation, significant improvements over IBM Model 4 in both word alignment (up to +6.7 F1) and translation quality (up to +1.4 BLEU).

6 0.58407354 118 acl-2012-Improving the IBM Alignment Models Using Variational Bayes

7 0.5185799 80 acl-2012-Efficient Tree-based Approximation for Entailment Graph Learning

8 0.51657385 105 acl-2012-Head-Driven Hierarchical Phrase-based Translation

9 0.49685189 108 acl-2012-Hierarchical Chunk-to-String Translation

10 0.48455885 140 acl-2012-Machine Translation without Words through Substring Alignment

11 0.48411384 136 acl-2012-Learning to Translate with Multiple Objectives

12 0.4779883 123 acl-2012-Joint Feature Selection in Distributed Stochastic Learning for Large-Scale Discriminative Training in SMT

13 0.47653654 10 acl-2012-A Discriminative Hierarchical Model for Fast Coreference at Large Scale

14 0.47445211 25 acl-2012-An Exploration of Forest-to-String Translation: Does Translation Help or Hurt Parsing?

15 0.4657079 38 acl-2012-Bayesian Symbol-Refined Tree Substitution Grammars for Syntactic Parsing

16 0.46517941 22 acl-2012-A Topic Similarity Model for Hierarchical Phrase-based Translation

17 0.46232566 146 acl-2012-Modeling Topic Dependencies in Hierarchical Text Categorization

18 0.45760316 211 acl-2012-Using Rejuvenation to Improve Particle Filtering for Bayesian Word Segmentation

19 0.45585459 111 acl-2012-How Are Spelling Errors Generated and Corrected? A Study of Corrected and Uncorrected Spelling Errors Using Keystroke Logs

20 0.45479491 116 acl-2012-Improve SMT Quality with Automatically Extracted Paraphrase Rules