acl acl2012 acl2012-88 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Mark Johnson ; Katherine Demuth ; Michael Frank
Abstract: This paper uses an unsupervised model of grounded language acquisition to study the role that social cues play in language acquisition. The input to the model consists of (orthographically transcribed) child-directed utterances accompanied by the set of objects present in the non-linguistic context. Each object is annotated by social cues, indicating e.g., whether the caregiver is looking at or touching the object. We show how to model the task of inferring which objects are being talked about (and which words refer to which objects) as standard grammatical inference, and describe PCFG-based unigram models and adaptor grammar-based collocation models for the task. Exploiting social cues improves the performance of all models. Our models learn the relative importance of each social cue jointly with word-object mappings and collocation structure, consistent with the idea that children could discover the importance of particular social information sources during word learning.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract This paper uses an unsupervised model of grounded language acquisition to study the role that social cues play in language acquisition. [sent-8, score-1.003]
2 Each object is annotated by social cues, indicating e.g., whether the caregiver is looking at or touching the object. [sent-10, score-0.513]
3 We show how to model the task of inferring which objects are being talked about (and which words refer to which objects) as standard grammatical inference, and describe PCFG-based unigram models and adaptor grammar-based collocation models for the task. [sent-13, score-0.881]
4 Exploiting social cues improves the performance of all models. [sent-14, score-0.748]
5 Our models learn the relative importance of each social cue jointly with word-object mappings and collocation structure, consistent with the idea that children could discover the importance of particular social information sources during word learning. [sent-15, score-1.555]
6 1 Introduction From learning sounds to learning the meanings of words, social interactions are extremely important for children’s early language acquisition (Baldwin, 1993; Kuhl et al., 2003). [sent-16, score-0.505]
7 Yet computational or formal models of social interaction are rare, and those that exist have rarely gone beyond the stage of cue-weighting models. [sent-22, score-0.478]
8 In order to study the role that social cues play in language acquisition, this paper presents a structured statistical model of grounded learning that learns a mapping between words and objects from a corpus of child-directed utterances in a completely unsupervised fashion. [sent-23, score-1.102]
9 It exploits five different social cues, which indicate which object (if any) the child is looking at, which object the child is touching, etc. [sent-24, score-0.828]
10 Our models learn the salience of each social cue in establishing reference, relative to their co-occurrence with objects that are not being referred to. [sent-25, score-0.923]
11 The strings presented to our grammatical learner contain a prefix which encodes the objects and their social cues for each utterance, and the rules of the grammar encode relationships between these objects and specific words. [sent-32, score-1.33]
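As a concrete illustration of this reduction, the following minimal Python sketch writes an utterance and its non-linguistic context as a single string whose prefix lists each available object together with its active social cues. The separator tokens and the function name are illustrative assumptions, not the paper's exact input format.

```python
# Sketch: serialise an utterance plus its non-linguistic context as a
# string with an object-and-cue prefix (a Figure 2-style encoding).
# Separators ("|", "##") are assumptions made for this illustration.

def encode_utterance(context, words):
    """context: dict mapping object name -> list of its active social cues."""
    parts = [" ".join([obj] + cues) for obj, cues in context.items()]
    return " | ".join(parts) + " ## " + " ".join(words)

context = {"pig": ["child.eyes", "child.hands"], "dog": []}
print(encode_utterance(context, ["wheres", "the", "piggie"]))
# -> pig child.eyes child.hands | dog ## wheres the piggie
```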
12 This reduction of grounded learning to grammatical inference allows us to use standard grammatical inference procedures to learn our models. [sent-36, score-0.448]
13 Here we use the adaptor grammar package described in Johnson et al. [sent-37, score-0.432]
14 Figure 1: The 5 social cues in the Frank et al. (to appear) corpus. The value of each cue is a set of objects: child.eyes (objects the child is looking at), child.hands (objects the child is touching), mom.eyes (objects the care-giver is looking at), mom.hands (objects the care-giver is touching) and mom.point (objects the care-giver is pointing to). [sent-47, score-1.816]
15 The value of a social cue for an utterance is a subset of the available topics (i.e., the objects present in the non-linguistic context). [sent-49, score-1.019]
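The sketch below shows one plausible in-memory form of such an annotation; the field names and record layout are hypothetical, but the invariant (each cue value is a subset of the available topics) is the one stated above.

```python
# Sketch of one annotated utterance.  Cue names follow Figure 1; the
# record layout itself is an assumption for illustration.
utterance = {
    "words": ["wheres", "the", "piggie"],
    "available_topics": {"pig", "dog"},
    "cues": {
        "child.eyes":  {"pig"},  # objects the child is looking at
        "child.hands": {"pig"},  # objects the child is touching
        "mom.eyes":    set(),    # objects the care-giver is looking at
        "mom.hands":   set(),    # objects the care-giver is touching
        "mom.point":   set(),    # objects the care-giver is pointing to
    },
    "intended_topics": {"pig"},  # used only to evaluate the learner
}

# The value of every social cue must be a subset of the available topics.
assert all(v <= utterance["available_topics"] for v in utterance["cues"].values())
```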
16 Our models are intended as an “ideal learner” approach to early social language learning, attempting to weight the importance of social and structural factors in the acquisition of word-object correspondences. [sent-63, score-1.131]
17 1 A training corpus for social cues Our work here uses a corpus of child-directed speech annotated with social cues, described in Frank et al. (to appear). [sent-67, score-1.192]
18 The sessions were video-taped, and each utterance was annotated with the five social cues described in Figure 1. [sent-70, score-0.928]
19 Each utterance is annotated with the available topics (i.e., the objects in the non-linguistic context), the values of the social cues, and the set of intended topics, which the care-giver refers to. [sent-73, score-0.576]
20 All of these but the intended topics are provided to our learning algorithms; the intended topics are used to evaluate the output produced by our learners. [sent-75, score-0.422]
21 Generally the intended topics consist of zero or one elements from the available topics, but not always: it is possible for the caregiver to refer to two objects in a single utterance, or to refer to an object not in the current non-linguistic context (e. [sent-76, score-0.492]
22 Frank et al. (to appear) give extensive details on the corpus, including inter-annotator reliability information for all annotations, and provide detailed statistical analyses of the relationships between the various social cues, the available topics and the intended topics. [sent-81, score-0.7]
23 2 Previous work There is a growing body of work on the role of social cues in language acquisition. [sent-84, score-0.748]
24 The language acquisition research community has long recognized the importance of social cues for child language acquisition (Baldwin, 1991; Carpenter et al., 1998). [sent-85, score-1.044]
25 Yu and Ballard (2007) describe an associative learner that associates words with topics and that exploits prosodic as well as social cues. [sent-89, score-0.643]
26 The relative importance of the various social cues is specified a priori in their model (rather than learned, as it is here), and unfortunately their training corpus is not available. [sent-90, score-0.84]
27 Frank et al. (2008) describe a Bayesian model that learns the relationship between words and topics, but the version of their model that included social cues presented a number of challenges for inference. [sent-92, score-0.748]
28 , the objects in the non-linguistic context) and their associated social cues (the cues for the pig are child.eyes and child.hands, while the dog is not associated with any social cues). [sent-103, score-1.273]
30 The social cues associated with each object are generated either from a “Topical” or a “NotTopical” nonterminal, depending on whether the corresponding object is topical or not. [sent-113, score-0.985]
31 Johnson et al. (2010) reduce grounded learning to grammatical inference for adaptor grammars and show how it can be used to perform word segmentation as well as learning word-topic relationships, but their model does not take social cues into account. [sent-117, score-1.492]
32 2 Reducing grounded learning with social cues to grammatical inference This section explains how we reduce grounded learning problems with social cues to grammatical inference problems, which lets us apply a wide variety of grammatical inference algorithms to grounded learning problems. [sent-118, score-2.192]
33 An advantage of reducing grounded learning to grammatical inference is that it suggests new ways to generalise grounded learning models; we explore three such generalisations here. [sent-119, score-0.547]
34 Here we encode the nonlinguistic information in a “prefix” to each utterance as shown in Figure 2, and devise a grammar such that inference for the grammar corresponds to learning the word-topic relationships and the salience of the social cues for grounded learning. [sent-121, score-1.449]
35 All our models associate each utterance with zero or one topics (this means we cannot correctly analyse utterances with more than one intended topic). [sent-122, score-0.453]
36 We analyse an utterance associated with zero topics as having the special topic None, so we can assume that every utterance has exactly one topic. [sent-123, score-0.708]
37 All our grammars generate strings of the form shown in Figure 2, and they do so by parsing the prefix and the words of the utterance separately; the top-level rules of the grammar force the same topic to be associated with both the prefix and the words of the utterance (see Figure 3). [sent-124, score-0.861]
38 As Johnson et al. (2010) observe, this kind of grounded learning can be viewed as a specialised kind of topic inference in a topic model, where the utterance topic is constrained by the available objects (possible topics). [sent-127, score-1.088]
39 (c1, . . . , cℓ) is an ordered list of the social cues, T is the set of all non-None available topics, T0 = T ∪ {None}, and W is the set of words appearing in the utterance. [sent-140, score-0.444]
40 The rules expanding the Topic_t nonterminals ensure that the social cues for the available topic t are parsed under the Topical nonterminals. [sent-144, score-1.064]
41 All other available topics are parsed under TNone nonterminals, so their social cues are parsed under NotTopical nonterminals. [sent-145, score-0.935]
42 The rules expanding these non-terminals are specifically designed so that the generation of the social cues corresponds to a series of binary decisions about each social cue. [sent-146, score-1.219]
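A short sketch of how such a rule schema could be generated mechanically is given below; the nonterminal spellings (Sentence, Topic.t, Topical.i, and so on) paraphrase the paper's Figures 3-4 rather than reproduce them, so treat them as assumptions.

```python
# Sketch: emit the unigram-model rule schema in which each social cue
# contributes one binary (present/absent) decision under Topical and
# NotTopical nonterminals.  Nonterminal names are illustrative.

CUES = ["child.eyes", "child.hands", "mom.eyes", "mom.hands", "mom.point"]

def unigram_rules(topics=("pig", "dog"), cues=CUES):
    rules = []
    for t in list(topics) + ["None"]:
        # the same topic t must generate both the prefix and the words
        rules.append(f"Sentence -> Topic.{t} Words.{t}")
    for kind in ("Topical", "NotTopical"):
        for i, c in enumerate(cues):
            nxt = f"{kind}.{i + 1}" if i + 1 < len(cues) else f"{kind}.end"
            rules.append(f"{kind}.{i} -> {c} {nxt}")  # cue is present
            rules.append(f"{kind}.{i} -> {nxt}")      # cue is absent
    return rules

print("\n".join(unigram_rules()))
```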
43 For example, the probability of the rule that generates child.hands under the Topical nonterminals is the probability of an object that is an utterance topic occurring with the child.hands cue. [sent-151, score-0.425]
44 By estimating the probabilities of these rules, the model effectively learns the probability of each social cue being associated with a Topical or a NotTopical available topic, respectively. [sent-153, score-0.749]
45 2 Adaptor grammars Our other grounded learning models are based on reductions of grounded learning to adaptor grammar inference problems. [sent-162, score-1.02]
46 Informally, an adaptor grammar is specified by a set of rules just as in a PCFG, plus a set of adapted nonterminals. [sent-165, score-0.494]
47 The set of trees generated by an adaptor grammar is the same as the set of trees generated by a PCFG with the same rules, but the generative process differs. [sent-166, score-0.432]
48 Nonadapted nonterminals in an adaptor grammar expand just as they do in a PCFG: each rule is chosen according to its rule probability. [sent-167, score-0.606]
49 Thus an adaptor grammar can be viewed as caching each tree generated by each adapted nonterminal, and regenerating it with probability proportional to the number of times it was previously generated (with some probability mass reserved to generate “new” trees). [sent-170, score-0.491]
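This caching behaviour can be sketched as a Chinese-restaurant-style process (the zero-discount special case of the Pitman-Yor process mentioned below); base_sample stands in for an ordinary PCFG expansion of the adapted nonterminal, and all names here are assumptions for illustration.

```python
import random

# Sketch: an adapted nonterminal either regenerates a cached subtree
# (with probability proportional to its count) or draws a "new" subtree
# from the base PCFG (with probability alpha / (n + alpha)).

def adapted_sample(cache, alpha, base_sample):
    n = len(cache)
    if random.random() < alpha / (n + alpha):
        tree = base_sample()         # expand with ordinary PCFG rules
    else:
        tree = random.choice(cache)  # reuse a cached subtree
    cache.append(tree)
    return tree

cache, freq = [], {}
for _ in range(1000):
    t = adapted_sample(cache, 1.0, lambda: random.choice("abc"))
    freq[t] = freq.get(t, 0) + 1
print(freq)  # rich-get-richer: counts are typically far from uniform
```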
50 Our adaptor grammars are actually based on the more general Pitman-Yor Processes, as described in Johnson and Goldwater (2009). [sent-172, score-0.423]
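For reference, the textbook Pitman-Yor predictive probabilities, with discount $a$ and concentration $b$, are as follows (standard background, not an equation quoted from this paper):

$$P(\tau_k \mid \text{cache}) = \frac{n_k - a}{n + b}, \qquad P(\text{new tree} \mid \text{cache}) = \frac{b + aK}{n + b},$$

where $n_k$ is the number of cached copies of subtree $\tau_k$, $n = \sum_k n_k$, and $K$ is the number of distinct cached subtrees; setting $a = 0$ recovers the Chinese restaurant process sketched above.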
51 Figure 6: Sample parse generated by the collocation adaptor grammar. [sent-173, score-0.495]
52 The adapted nonterminals Colloc_t and Word_t are shown underlined; the subtrees they dominate are “cached” by the adaptor grammar. [sent-174, score-0.466]
53 Generic software is available for adaptor grammar inference, based either on Variational Bayes (Cohen et al., 2010) or Markov chain Monte Carlo (Johnson and Goldwater, 2009). [sent-177, score-0.479]
54 The collocation adaptor grammar in Figure 5 generates the words of the utterance as a sequence of collocations, each of which is a sequence of words. [sent-188, score-0.761]
55 Each collocation is either associated with the sentence topic or with the None topic, just like words in the unigram model. [sent-189, score-0.479]
56 Figure 6 shows a sample parse generated by the collocation adaptor grammar. [sent-190, score-0.495]
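The collocation grammar's schema can be sketched in the same style as the unigram rules above; Colloc.t and Word.t stand for the adapted nonterminals, and the exact spellings are again illustrative assumptions paraphrasing Figure 5.

```python
# Sketch: rule schema in which the words of an utterance are a sequence
# of collocations, each a sequence of words, associated either with the
# sentence topic t or with the None topic.

def collocation_rules(topics=("pig", "dog", "None")):
    rules = []
    for t in topics:
        rules.append(f"Words.{t} -> Collocs.{t}")
        rules.append(f"Collocs.{t} -> Colloc.{t} Collocs.{t}")
        rules.append(f"Collocs.{t} -> Colloc.{t}")
        if t != "None":
            # background collocations may be interleaved with topical ones
            rules.append(f"Collocs.{t} -> Colloc.None Collocs.{t}")
        # Colloc.t and Word.t are the adapted ("cached") nonterminals
        rules.append(f"Colloc.{t} -> Word.{t} Colloc.{t}")
        rules.append(f"Colloc.{t} -> Word.{t}")
    return rules

print("\n".join(collocation_rules()))
```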
57 Figure 7: Utterance topic, word topic and lexicon results for all models, on data with and without social cues. [sent-219, score-0.656]
58 Utterance topic shows how well the model discovered the intended topics at the utterance level, word topic shows how well the model associates word tokens with topics, and lexicon shows how well the topic most frequently associated with a word type matches an external word-topic dictionary. [sent-221, score-1.024]
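Two of these evaluations can be sketched in a few lines; the data structures (per-utterance label lists, a word-to-token-topics map, an external word-topic dictionary) are assumed shapes, not the paper's released code.

```python
from collections import Counter

# Sketch: utterance-topic accuracy and the lexicon evaluation.

def utterance_topic_accuracy(predicted, intended):
    """Exact match between predicted and intended utterance topics."""
    return sum(p == g for p, g in zip(predicted, intended)) / len(intended)

def lexicon_score(word_token_topics, dictionary):
    """For each word type in the dictionary, check that its most
    frequently assigned token topic matches the dictionary entry."""
    correct = total = 0
    for word, topics in word_token_topics.items():
        if word in dictionary:
            total += 1
            modal = Counter(topics).most_common(1)[0][0]
            correct += (modal == dictionary[word])
    return correct / total if total else 0.0

print(utterance_topic_accuracy(["pig", "None"], ["pig", "dog"]))             # 0.5
print(lexicon_score({"piggie": ["pig", "pig", "None"]}, {"piggie": "pig"}))  # 1.0
```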
59 3 Experimental evaluation We performed grammatical inference using the adaptor grammar software described in Johnson and Goldwater (2009). [sent-229, score-0.606]
60 , most frequent) analysis, which we evaluated as described below. (Because adaptor grammars are a generalisation of PCFGs, we could use the adaptor grammar software to estimate the unigram model.) [sent-233, score-1.015]
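Selecting the modal analysis from sampler output might look like the following minimal sketch (names assumed):

```python
from collections import Counter

# Sketch: pick the modal (most frequent) analysis for one utterance from
# the analyses produced across sampler iterations.
def modal_analysis(sampled_analyses):
    return Counter(sampled_analyses).most_common(1)[0][0]

print(modal_analysis(["pig", "pig", "dog", "pig"]))  # -> pig
```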
61 We estimated the error in our results by running our most complex model (the colloc0 model with all social cues) 20 times (i. [sent-235, score-0.444]
62 The standard deviation of all utterance topic and word topic measures is between 0. [sent-240, score-0.532]
63 The adaptor grammar software uses a sentence-wise blocked sampler. [sent-246, score-0.479]
64 The results of evaluating each model on the corpus with social cues, and on another corpus identical except that the social cues have been removed, are presented in Figure 7. [sent-247, score-1.192]
65 The frequency with which the models’ predicted topics exactly matches the intended topics is given under “utterance topic accuracy”; the f-score, precision and recall of each model’s topic predictions are also given in the table. [sent-250, score-0.694]
66 A topic label is scored as correct if it is given in our dictionary and the topic is one of the intended topics for the utterance. [sent-257, score-0.59]
67 Figure 8: Effect of using just one social cue on the experimental results for the unigram and collocation models. [sent-291, score-1.678]
68 The “importance” of a social cue can be quantified by the degree to which the model’s evaluation score improves when using a corpus containing that social cue, relative to its evaluation score when using a corpus without any social cues. [sent-292, score-1.176]
69 The most important social cue is the one which causes performance to improve the most. [sent-293, score-0.708]
70 However, the top-scoring result in every evaluation is always for a model trained using social cues, demonstrating the importance of these social cues. [sent-298, score-0.956]
71 The variant collocation model (trained on data with social cues) was the top-scoring model on four evaluation scores, which is more than any other model. [sent-299, score-0.631]
72 1 Evaluating the importance of social cues It is scientifically interesting to be able to evaluate the importance of each of the social cues to grounded learning. [sent-306, score-1.826]
73 One way to do this is to study the effect of adding or removing social cues from the corpus on the ability of our models to perform grounded learning. [sent-307, score-0.976]
74 An important social cue should have a large impact on our models’ performance; an unimportant cue should have little or no impact. [sent-308, score-0.972]
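Both methodologies (adding one cue at a time, Figure 8, and removing one cue at a time, Figure 9) fit in a few lines, assuming a hypothetical train_and_eval(cues) helper that trains a model on a corpus restricted to the given cues and returns a single evaluation score:

```python
# Sketch of the two cue-importance methodologies.  train_and_eval is a
# hypothetical helper: it trains on a corpus containing only the listed
# social cues and returns one evaluation score.

CUES = ["child.eyes", "child.hands", "mom.eyes", "mom.hands", "mom.point"]

def add_one_importance(train_and_eval, cues=CUES):
    base = train_and_eval([])  # corpus without any social cues
    return {c: train_and_eval([c]) - base for c in cues}

def subtract_one_importance(train_and_eval, cues=CUES):
    full = train_and_eval(cues)  # corpus with all social cues
    return {c: full - train_and_eval([x for x in cues if x != c]) for c in cues}
```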
75 The child.eyes social cue had the most impact on model performance. [sent-311, score-0.708]
76 Frank et al. (to appear) show that for the current corpus, the topic of the preceding utterance is very likely to be the topic of the current one also. [sent-318, score-0.532]
77 These two explanations of the child.eyes cue are diametrically opposed; the first explanation claims that the cue is important because the child is driving the discourse, while the second explanation claims that the cue is important because the child’s gaze follows the topic of the caregiver’s previous utterance. [sent-321, score-1.19]
78 Figure 9: Effect of using all but one social cue on the experimental results for the unigram and collocation models. [sent-353, score-1.119]
79 The “importance” of a social cue can be quantified by the degree to which the model’s evaluation score degrades when just that social cue is removed from the corpus, relative to its evaluation score when using a corpus with all social cues. [sent-354, score-1.884]
80 The most important social cue is the one which causes performance to degrade the most. [sent-355, score-0.708]
81 4 Conclusion and future work This paper presented four different grounded learning models that exploit social cues. [sent-357, score-0.672]
82 Here we used the same adaptor grammar software tools to learn all these models, so we can be relatively certain that any differences we observe are due to differences in the models, rather than quirks in the software. [sent-359, score-0.479]
83 Because the adaptor grammar software performs full Bayesian inference, including for model parameters, an unusual feature of our models is that we did not need to perform any parameter tuning whatsoever. [sent-360, score-0.513]
84 This feature is particularly interesting with respect to the parameters on social cues. [sent-361, score-0.444]
85 Psychological proposals have suggested that children may discover that particular social cues help in establishing reference (Baldwin, 1993; Hollich et al., 2000). [sent-362, score-0.833]
86 In contrast, the models described here could in principle discover a wide range of different social conventions. [sent-364, score-0.478]
87 We tested whether child.eyes effectively provides the same information as the previous topic by adding the previous topic as a (pseudo-) social cue. [sent-366, score-0.796]
88 A model given this pseudo-cue but without child.eyes scores essentially the same as the model with all social cues. [sent-373, score-0.444]
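A sketch of this manipulation, reusing the record layout from the earlier sketch: the pseudo-cue name and the use of the gold previous topic are assumptions made for illustration.

```python
# Sketch: add the previous utterance's topic as an extra (pseudo-) social
# cue.  "prev.topic" is an invented cue name; the gold intended topic
# supplies the previous topic here, though a model's own prediction
# could be used instead.

def add_prev_topic_cue(utterances):
    prev = None
    for u in utterances:
        cue = {prev} & u["available_topics"] if prev else set()
        u["cues"]["prev.topic"] = cue
        prev = next(iter(u["intended_topics"]), None)
    return utterances
```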
89 We used our models to investigate scientific questions about the role of social cues in grounded language learning. [sent-375, score-0.976]
90 Because the performance of all four models studied in this paper improves dramatically when provided with social cues in all ten evaluation metrics, this paper provides strong support for the view that social cues are a crucial information source for grounded language learning. [sent-376, score-1.724]
91 We also showed that the importance of the different social cues in grounded language learning can be evaluated using “add one cue” and “subtract one cue” methodologies. [sent-377, score-1.01]
92 The child.eyes cue is the most important of the five social cues studied here. [sent-379, score-1.012]
93 The child.eyes cue could be providing our models with information about the topic of the previous utterance. [sent-381, score-0.474]
94 If the child.eyes cue is just providing indirect information about topic continuity, then the importance of the child.eyes cue should decrease when we incorporate topic continuity into our models. [sent-386, score-0.508]
96 If instead the child’s gaze is driving the discourse, child.eyes should remain a strong cue even when anaphoric dependencies and topic continuity are incorporated into our models. [sent-389, score-0.499]
97 Using adaptor grammars to identify synergies in the unsupervised acquisition of linguistic structure. [sent-488, score-0.526]
98 PCFGs, topic models, adaptor grammars and learning topical collocations and the structure of proper names. [sent-493, score-0.686]
99 Foreign-language experience in infancy: Effects of short-term exposure and social interaction on phonetic learning. [sent-499, score-0.444]
100 A unified model of early word learning: Integrating statistical and social cues. [sent-518, score-0.444]
wordName wordTfidf (topN-words)
[('social', 0.444), ('adaptor', 0.346), ('cues', 0.304), ('cue', 0.264), ('grounded', 0.194), ('utterance', 0.18), ('topic', 0.176), ('wordst', 0.159), ('dog', 0.152), ('collocation', 0.149), ('tt', 0.134), ('objects', 0.132), ('topics', 0.131), ('wordt', 0.127), ('unigram', 0.113), ('topict', 0.111), ('child', 0.106), ('wordnone', 0.095), ('johnson', 0.095), ('grammar', 0.086), ('nonterminals', 0.085), ('intended', 0.08), ('caregiver', 0.08), ('frank', 0.078), ('grammars', 0.077), ('grammatical', 0.073), ('pcfg', 0.069), ('object', 0.069), ('importance', 0.068), ('abbreviates', 0.064), ('collocst', 0.064), ('gaze', 0.064), ('touching', 0.064), ('acquisition', 0.061), ('children', 0.06), ('continuity', 0.059), ('expand', 0.059), ('topical', 0.058), ('inference', 0.054), ('caregivers', 0.048), ('carpenter', 0.048), ('hollich', 0.048), ('kuhl', 0.048), ('nottopical', 0.048), ('pig', 0.048), ('prefix', 0.047), ('software', 0.047), ('baldwin', 0.046), ('relationships', 0.045), ('synergies', 0.042), ('nonterminal', 0.041), ('associated', 0.041), ('learner', 0.04), ('variant', 0.038), ('lexicon', 0.036), ('reductions', 0.035), ('adapted', 0.035), ('looking', 0.034), ('models', 0.034), ('goldwater', 0.033), ('colloct', 0.032), ('dare', 0.032), ('demuth', 0.032), ('generalise', 0.032), ('nonlinguistic', 0.032), ('nottopicalci', 0.032), ('piggie', 0.032), ('tnone', 0.032), ('topicalci', 0.032), ('bayesian', 0.031), ('rule', 0.03), ('collocations', 0.029), ('none', 0.029), ('parsed', 0.028), ('pcfgs', 0.028), ('associates', 0.028), ('utterances', 0.028), ('monographs', 0.028), ('hardisty', 0.028), ('uc', 0.028), ('dictionary', 0.027), ('rules', 0.027), ('iterations', 0.027), ('schema', 0.026), ('explanation', 0.026), ('infants', 0.025), ('platt', 0.025), ('bevan', 0.025), ('establishing', 0.025), ('katherine', 0.025), ('mappings', 0.024), ('proportional', 0.024), ('mon', 0.024), ('salience', 0.024), ('pus', 0.024), ('quantified', 0.024), ('toy', 0.024), ('macquarie', 0.024), ('mq', 0.024), ('orschinger', 0.024)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999964 88 acl-2012-Exploiting Social Information in Grounded Language Learning via Grammatical Reduction
Author: Mark Johnson ; Katherine Demuth ; Michael Frank
Abstract: This paper uses an unsupervised model of grounded language acquisition to study the role that social cues play in language acquisition. The input to the model consists of (orthographically transcribed) child-directed utterances accompanied by the set of objects present in the non-linguistic context. Each object is annotated by social cues, indicating e.g., whether the caregiver is looking at or touching the object. We show how to model the task of inferring which objects are being talked about (and which words refer to which objects) as standard grammatical inference, and describe PCFG-based unigram models and adaptor grammar-based collocation models for the task. Exploiting social cues improves the performance of all models. Our models learn the relative importance of each social cue jointly with word-object mappings and collocation structure, consis- tent with the idea that children could discover the importance of particular social information sources during word learning.
2 0.16544873 22 acl-2012-A Topic Similarity Model for Hierarchical Phrase-based Translation
Author: Xinyan Xiao ; Deyi Xiong ; Min Zhang ; Qun Liu ; Shouxun Lin
Abstract: Previous work using topic model for statistical machine translation (SMT) explore topic information at the word level. However, SMT has been advanced from word-based paradigm to phrase/rule-based paradigm. We therefore propose a topic similarity model to exploit topic information at the synchronous rule level for hierarchical phrase-based translation. We associate each synchronous rule with a topic distribution, and select desirable rules according to the similarity of their topic distributions with given documents. We show that our model significantly improves the translation performance over the baseline on NIST Chinese-to-English translation experiments. Our model also achieves a better performance and a faster speed than previous approaches that work at the word level.
3 0.16427109 86 acl-2012-Exploiting Latent Information to Predict Diffusions of Novel Topics on Social Networks
Author: Tsung-Ting Kuo ; San-Chuan Hung ; Wei-Shih Lin ; Nanyun Peng ; Shou-De Lin ; Wei-Fen Lin
Abstract: This paper brings a marriage of two seemly unrelated topics, natural language processing (NLP) and social network analysis (SNA). We propose a new task in SNA which is to predict the diffusion of a new topic, and design a learning-based framework to solve this problem. We exploit the latent semantic information among users, topics, and social connections as features for prediction. Our framework is evaluated on real data collected from public domain. The experiments show 16% AUC improvement over baseline methods. The source code and dataset are available at http://www.csie.ntu.edu.tw/~d97944007/dif fusion/ 1 Background The diffusion of information on social networks has been studied for decades. Generally, the proposed strategies can be categorized into two categories, model-driven and data-driven. The model-driven strategies, such as independent cascade model (Kempe et al., 2003), rely on certain manually crafted, usually intuitive, models to fit the diffusion data without using diffusion history. The data-driven strategies usually utilize learning-based approaches to predict the future propagation given historical records of prediction (Fei et al., 2011; Galuba et al., 2010; Petrovic et al., 2011). Data-driven strategies usually perform better than model-driven approaches because the past diffusion behavior is used during learning (Galuba et al., 2010). Recently, researchers started to exploit content information in data-driven diffusion models (Fei et al., 2011; Petrovic et al., 2011; Zhu et al., 2011). 344 However, most of the data-driven approaches assume that in order to train a model and predict the future diffusion of a topic, it is required to obtain historical records about how this topic has propagated in a social network (Petrovic et al., 2011; Zhu et al., 2011). We argue that such assumption does not always hold in the real-world scenario, and being able to forecast the propagation of novel or unseen topics is more valuable in practice. For example, a company would like to know which users are more likely to be the source of ‘viva voce’ of a newly released product for advertising purpose. A political party might want to estimate the potential degree of responses of a half-baked policy before deciding to bring it up to public. To achieve such goal, it is required to predict the future propagation behavior of a topic even before any actual diffusion happens on this topic (i.e., no historical propagation data of this topic are available). Lin et al. also propose an idea aiming at predicting the inference of implicit diffusions for novel topics (Lin et al., 2011). The main difference between their work and ours is that they focus on implicit diffusions, whose data are usually not available. Consequently, they need to rely on a model-driven approach instead of a datadriven approach. On the other hand, our work focuses on the prediction of explicit diffusion behaviors. Despite the fact that no diffusion data of novel topics is available, we can still design a data- driven approach taking advantage of some explicit diffusion data of known topics. Our experiments show that being able to utilize such information is critical for diffusion prediction. 2 The Novel-Topic Diffusion Model We start by assuming an existing social network G = (V, E), where V is the set of nodes (or user) v, and E is the set of link e. 
The set of topics is Proce dJienjgus, R ofep thueb 5lic0t hof A Knonruea ,l M 8-e1e4ti Jnugly o f2 t0h1e2 A.s ?c so2c0ia1t2io Ans fso rc Ciatoiomnp fuotart Cio nmaplu Ltiantgiounisatlic Lsi,n pgaugiestsi3c 4s4–348, denoted as T. Among them, some are considered as novel topics (denoted as N), while the rest (R) are used as the training records. We are also given a set of diffusion records D = {d | d = (src, dest, t) }, where src is the source node (or diffusion source), dest is the destination node, and t is the topic of the diffusion that belongs to R but not N. We assume that diffusions cannot occur between nodes without direct social connection; any diffusion pair implies the existence of a link e = (src, dest) ∈ E. Finally, we assume there are sets of keywords or tags that relevant to each topic (including existing and novel topics). Note that the set of keywords for novel topics should be seen in that of existing topics. From these sets of keywords, we construct a topicword matrix TW = (P(wordj | topici))i,j of which the elements stand for the conditional probabilities that a word appears in the text of a certain topic. Similarly, we also construct a user-word matrix UW= (P(wordj | useri))i,j from these sets of keywords. Given the above information, the goal is to predict whether a given link is active (i.e., belongs to a diffusion link) for topics in N. 2.1 The Framework The main challenge of this problem lays in that the past diffusion behaviors of new topics are missing. To address this challenge, we propose a supervised diffusion discovery framework that exploits the latent semantic information among users, topics, and their explicit / implicit interactions. Intuitively, four kinds of information are useful for prediction: • Topic information: Intuitively, knowing the signatures of a topic (e.g., is it about politics?) is critical to the success of the prediction. • User information: The information of a user such as the personality (e.g., whether this user is aggressive or passive) is generally useful. • User-topic interaction: Understanding the users' preference on certain topics can improve the quality of prediction. • Global information: We include some global features (e.g., topology info) of social network. Below we will describe how these four kinds of information can be modeled in our framework. 2.2 Topic Information We extract hidden topic category information to model topic signature. In particular, we exploit the 345 Latent Dirichlet Allocation (LDA) method (Blei et al., 2003), which is a widely used topic modeling technique, to decompose the topic-word matrix TW into hidden topic categories: TW = TH * HW , where TH is a topic-hidden matrix, HW is hiddenword matrix, and h is the manually-chosen parameter to determine the size of hidden topic categories. TH indicates the distribution of each topic to hidden topic categories, and HW indicates the distribution of each lexical term to hidden topic categories. Note that TW and TH include both existing and novel topics. We utilize THt,*, the row vector of the topic-hidden matrix TH for a topic t, as a feature set. In brief, we apply LDA to extract the topic-hidden vector THt,* to model topic signature (TG) for both existing and novel topics. Topic information can be further exploited. To predict whether a novel topic will be propagated through a link, we can first enumerate the existing topics that have been propagated through this link. 
For each such topic, we can calculate its similarity with the new topic based on the hidden vectors generated above (e.g., using cosine similarity between feature vectors). Then, we sum up the similarity values as a new feature: topic similarity (TS). For example, a link has previously propagated two topics for a total of three times {ACL, KDD, ACL}, and we would like to know whether a new topic, EMNLP, will propagate through this link. We can use the topic-hidden vector to generate the similarity values between EMNLP and the other topics (e.g., {0.6, 0.4, 0.6}), and then sum them up (1.6) as the value of TS. 2.3 User Information Similar to topic information, we extract latent personal information to model user signature (the users are anonymized already). We apply LDA on the user-word matrix UW: UW = UM * MW , where UM is the user-hidden matrix, MW is the hidden-word matrix, and m is the manually-chosen size of hidden user categories. UM indicates the distribution of each user to the hidden user categories (e.g., age). We then use UMu,*, the row vector of UM for the user u, as a feature set. In brief, we apply LDA to extract the user-hidden vector UMu,* for both source and destination nodes of a link to model user signature (UG). 2.4 User-Topic Interaction Modeling user-topic interaction turns out to be non-trivial. It is not useful to exploit latent semantic analysis directly on the user-topic matrix UR = UQ * QR , where UR represents how many times each user is diffused for existing topic R (R ∈ T), because UR does not contain information of novel topics, and neither do UQ and QR. Given no propagation record about novel topics, we propose a method that allows us to still extract implicit user-topic information. First, we extract from the matrix TH (described in Section 2.2) a subset RH that contains only information about existing topics. Next we apply left division to derive another userhidden matrix UH: UH = (RH \ URT)T = ((RHT RH )-1 RHT URT)T Using left division, we generate the UH matrix using existing topic information. Finally, we exploit UHu,*, the row vector of the user-hidden matrix UH for the user u, as a feature set. Note that novel topics were included in the process of learning the hidden topic categories on RH; therefore the features learned here do implicitly utilize some latent information of novel topics, which is not the case for UM. Experiments confirm the superiority of our approach. Furthermore, our approach ensures that the hidden categories in topic-hidden and user-hidden matrices are identical. Intuitively, our method directly models the user’s preference to topics’ signature (e.g., how capable is this user to propagate topics in politics category?). In contrast, the UM mentioned in Section 2.3 represents the users’ signature (e.g., aggressiveness) and has nothing to do with their opinions on a topic. In short, we obtain the user-hidden probability vector UHu,* as a feature set, which models user preferences to latent categories (UPLC). 2.5 Global Features Given a candidate link, we can extract global social features such as in-degree (ID) and outdegree (OD). We tried other features such as PageRank values but found them not useful. Moreover, we extract the number of distinct topics (NDT) for a link as a feature. The intuition behind this is that the more distinct topics a user has diffused to another, the more likely the diffusion will happen for novel topics. 
346 2.6 Complexity Analysis The complexity to produce each feature is as below: (1) Topic information: O(I * |T| * h * Bt) for LDA using Gibbs sampling, where Iis # of the iterations in sampling, |T| is # of topics, and Bt is the average # of tokens in a topic. (2) User information: O(I * |V| * m * Bu) , where |V| is # of users, and Bu is the average # of tokens for a user. (3) User-topic interaction: the time complexity is O(h3 + h2 * |T| + h * |T| * |V|). (4) Global features: O(|D|), where |D| is # of diffusions. 3 Experiments For evaluation, we try to use the diffusion records of old topics to predict whether a diffusion link exists between two nodes given a new topic. 3.1 Dataset and Evaluation Metric We first identify 100 most popular topic (e.g., earthquake) from the Plurk micro-blog site between 01/201 1 and 05/201 1. Plurk is a popular micro-blog service in Asia with more than 5 million users (Kuo et al., 2011). We manually separate the 100 topics into 7 groups. We use topic-wise 4-fold cross validation to evaluate our method, because there are only 100 available topics. For each group, we select 3/4 of the topics as training and 1/4 as validation. The positive diffusion records are generated based on the post-response behavior. That is, if a person x posts a message containing one of the selected topic t, and later there is a person y responding to this message, we consider a diffusion of t has occurred from x to y (i.e., (x, y, t) is a positive instance). Our dataset contains a total of 1,642,894 positive instances out of 100 distinct topics; the largest and smallest topic contains 303,424 and 2,166 diffusions, respectively. Also, the same amount of negative instances for each topic (totally 1,642,894) is sampled for binary classification (similar to the setup in KDD Cup 2011 Track 2). The negative links of a topic t are sampled randomly based on the absence of responses for that given topic. The underlying social network is created using the post-response behavior as well. We assume there is an acquaintance link between x and y if and only if x has responded to y (or vice versa) on at least one topic. Eventually we generated a social network of 163,034 nodes and 382,878 links. Furthermore, the sets of keywords for each topic are required to create the TW and UW matrices for latent topic analysis; we simply extract the content of posts and responses for each topic to create both matrices. We set the hidden category number h = m = 7, which is equal to the number of topic groups. We use area under ROC curve (AUC) to evaluate our proposed framework (Davis and Goadrich, 2006); we rank the testing instances based on their likelihood of being positive, and compare it with the ground truth to compute AUC. 3.2 Implementation and Baseline After trying many classifiers and obtaining similar results for all of them, we report only results from LIBLINEAR with c=0.0001 (Fan et al., 2008) due to space limitation. We remove stop-words, use SCWS (Hightman, 2012) for tokenization, and MALLET (McCallum, 2002) and GibbsLDA++ (Phan and Nguyen, 2007) for LDA. There are three baseline models we compare the result with. First, we simply use the total number of existing diffusions among all topics between two nodes as the single feature for prediction. Second, we exploit the independent cascading model (Kempe et al., 2003), and utilize the normalized total number of diffusions as the propagation probability of each link. 
Third, we try the heat diffusion model (Ma et al., 2008), set initial heat proportional to out-degree, and tune the diffusion time parameter until the best results are obtained. Note that we did not compare with any data-driven approaches, as we have not identified one that can predict diffusion of novel topics. 3.3 Results The result of each model is shown in Table 1. All except two features outperform the baseline. The best single feature is TS. Note that UPLC performs better than UG, which verifies our hypothesis that maintaining the same hidden features across different LDA models is better. We further conduct experiments to evaluate different combinations of features (Table 2), and found that the best one (TS + ID + NDT) results in about 16% improvement over the baseline, and outperforms the combination of all features. As stated in (Witten et al., 2011), 347 adding useless features may cause the performance of classifiers to deteriorate. Intuitively, TS captures both latent topic and historical diffusion information, while ID and NDT provide complementary social characteristics of users. 4 Conclusions The main contributions of this paper are as below: 1. We propose a novel task of predicting the diffusion of unseen topics, which has wide applications in real-world. 2. Compared to the traditional model-driven or content-independent data-driven works on diffusion analysis, our solution demonstrates how one can bring together ideas from two different but promising areas, NLP and SNA, to solve a challenging problem. 3. Promising experiment result (74% in AUC) not only demonstrates the usefulness of the proposed models, but also indicates that predicting diffusion of unseen topics without historical diffusion data is feasible. Acknowledgments This work was also supported by National Science Council, National Taiwan University and Intel Corporation under Grants NSC 100-291 1-I-002-001, and 101R7501. References David M. Blei, Andrew Y. Ng & Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3.993-1022. Jesse Davis & Mark Goadrich. 2006. The relationship between Precision-Recall and ROC curves. Proceedings of the 23rd international conference on Machine learning, Pittsburgh, Pennsylvania. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang & Chih-Jen Lin. 2008. LIBLINEAR: A Library for Large Linear Classification. J. Mach. Learn. Res., 9.1871-74. Hongliang Fei, Ruoyi Jiang, Yuhao Yang, Bo Luo & Jun Huan. 2011. Content based social behavior prediction: a multi-task learning approach. Proceedings of the 20th ACM international conference on Information and knowledge management, Glasgow, Scotland, UK. Wojciech Galuba, Karl Aberer, Dipanjan Chakraborty, Zoran Despotovic & Wolfgang Kellerer. 2010. Outtweeting the twitterers - predicting information cascades in microblogs. Proceedings of the 3rd conference on Online social networks, Boston, MA. Hightman. 2012. Simple Chinese Words Segmentation (SCWS). David Kempe, Jon Kleinberg & Eva Tardos. 2003. Maximizing the spread of influence through a social network. Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, Washington, D.C. Tsung-Ting Kuo, San-Chuan Hung, Wei-Shih Lin, Shou-De Lin, Ting-Chun Peng & Chia-Chun Shih. 2011. Assessing the Quality of Diffusion Models Using Real-World Social Network Data. Conference on Technologies and Applications of Artificial Intelligence, 2011. C.X. Lin, Q.Z. Mei, Y.L. Jiang, J.W. Han & S.X. Qi. 2011. 
Inferring the Diffusion and Evolution of Topics in Social Communities. Proceedings of the IEEE International Conference on Data Mining, 2011. Hao Ma, Haixuan Yang, Michael R. Lyu & Irwin King. 2008. Mining social networks using heat diffusion processes for marketing candidates selection. Proceeding of the 17th ACM conference on Information and knowledge management, Napa Valley, California, USA. Andrew Kachites McCallum. 2002. MALLET: A Machine Learning for Language Toolkit. Sasa Petrovic, Miles Osborne & Victor Lavrenko. 2011. RT to Win! Predicting Message Propagation in Twitter. International AAAI Conference on Weblogs and Social Media, 2011. 348 Xuan-Hieu Phan & Cam-Tu Nguyen. 2007. GibbsLDA++: A C/C++ implementation of latent Dirichlet allocation (LDA). Ian H. Witten, Eibe Frank & Mark A. Hall. 2011. Data Mining: Practical machine learning tools and techniques. San Francisco: Morgan Kaufmann Publishers Inc. Jiang Zhu, Fei Xiong, Dongzhen Piao, Yun Liu & Ying Zhang. 2011. Statistically Modeling the Effectiveness of Disaster Information in Social Media. Proceedings of the 2011 IEEE Global Humanitarian Technology Conference.
Author: Viet-An Nguyen ; Jordan Boyd-Graber ; Philip Resnik
Abstract: One of the key tasks for analyzing conversational data is segmenting it into coherent topic segments. However, most models of topic segmentation ignore the social aspect of conversations, focusing only on the words used. We introduce a hierarchical Bayesian nonparametric model, Speaker Identity for Topic Segmentation (SITS), that discovers (1) the topics used in a conversation, (2) how these topics are shared across conversations, (3) when these topics shift, and (4) a person-specific tendency to introduce new topics. We evaluate against current unsupervised segmentation models to show that including personspecific information improves segmentation performance on meeting corpora and on political debates. Moreover, we provide evidence that SITS captures an individual’s tendency to introduce new topics in political contexts, via analysis of the 2008 US presidential debates and the television program Crossfire. 1 Topic Segmentation as a Social Process Conversation, interactive discussion between two or more people, is one of the most essential and common forms of communication. Whether in an informal situation or in more formal settings such as a political debate or business meeting, a conversation is often not about just one thing: topics evolve and are replaced as the conversation unfolds. Discovering this hidden structure in conversations is a key problem for conversational assistants (Tur et al., 2010) and tools that summarize (Murray et al., 2005) and display (Ehlen et al., 2007) conversational data. Topic segmentation also can illuminate individuals’ agendas (Boydstun et al., 2011), patterns of agree- ment and disagreement (Hawes et al., 2009; Abbott 78 Jordan Boyd-Graber iSchool and UMIACS University of Maryland College Park, MD jbg@ umiac s .umd .edu Philip Resnik Department of Linguistics and UMIACS University of Maryland College Park, MD re snik @ umd .edu al., 2011), and relationships among conversational participants (Ireland et al., 2011). One of the most natural ways to capture conversational structure is topic segmentation (Reynar, 1998; Purver, 2011). Topic segmentation approaches range from simple heuristic methods based on lexical similarity (Morris and Hirst, 1991 ; Hearst, 1997) to more intricate generative models and supervised methods (Georgescul et al., 2006; Purver et al., 2006; Gruber et al., 2007; Eisenstein and Barzilay, 2008), which have been shown to outperform the established heuristics. However, previous computational work on conversational structure, particularly in topic discovery and topic segmentation, focuses primarily on conet tent, ignoring the speakers. We argue that, because conversation is a social process, we can understand conversational phenomena better by explicitly modeling behaviors of conversational participants. In Section 2, we incorporate participant identity in a new model we call Speaker Identity for Topic Segmentation (SITS), which discovers topical structure in conversation while jointly incorporating a participantlevel social component. Specifically, we explicitly model an individual’s tendency to introduce a topic. After outlining inference in Section 3 and introducing data in Section 4, we use SITS to improve state-ofthe-art-topic segmentation and topic identification models in Section 5. In addition, in Section 6, we also show that the per-speaker model is able to discover individuals who shape and influence the course of a conversation. Finally, we discuss related work and conclude the paper in Section 7. 
2 Modeling Multiparty Discussions Data Properties We are interested in turn-taking, multiparty discussion. This is a broad category, inProce Jedijung, sR oefpu thbeli c50 othf K Aonrneua,a8l -M14e Jtiunlgy o 2f0 t1h2e. A ?c s 2o0c1ia2ti Aosns fo cria Ctio nm fpourta Ctoiomnpault Laitniognuaislt Licisn,g puaigsteiscs 78–87, cluding political debates, business meetings, and online chats. More formally, such datasets contain C conversations. A conversation c has Tc turns, each of which is a maximal uninterrupted utterance by one speaker.1 In each turn t ∈ [1, Tc], a speaker ac,t utters N words {wc,t,n}. Eatch ∈ w [1o,rTd is from a vocabulary of size V , {awnd th}ere are M distinct speakers. Modeling Approaches The key insight of topic segmentation is that segments evince lexical cohesion (Galley et al., 2003; Olney and Cai, 2005). Words within a segment will look more like their neighbors than other words. This insight has been used to tune supervised methods (Hsueh et al., 2006) and inspire unsupervised models of lexical cohesion using bags of words (Purver et al., 2006) and language models (Eisenstein and Barzilay, 2008). We too take the unsupervised statistical approach. It requires few resources and is applicable in many domains without extensive training. Like previous approaches, we consider each turn to be a bag of words generated from an admixture of topics. Topics—after the topic modeling literature (Blei and Lafferty, 2009)—are multinomial distributions over terms. These topics are part of a generative model posited to have produced a corpus. However, topic models alone cannot model the dynamics of a conversation. Topic models typically do not model the temporal dynamics of individual documents, and those that do (Wang et al., 2008; Gerrish and Blei, 2010) are designed for larger documents and are not applicable here because they assume that most topics appear in every time slice. Instead, we endow each turn with a binary latent variable lc,t, called the topic shift. This latent variable signifies whether the speaker changed the topic of the conversation. To capture the topic-controlling behavior of the speakers across different conversations, we further associate each speaker m with a latent topic shift tendency, πm. Informally, this variable is intended to capture the propensity of a speaker to effect a topic shift. Formally, it represents the probability that the speaker m will change the topic (distribution) of a conversation. We take a Bayesian nonparametric approach (M¨uller and Quintana, 2004). Unlike 1Note the distinction with phonetic definition are bounded by silence. utterances, which by 79 parametric models, which a priori fix the number of topics, nonparametric models use a flexible number of topics to better represent data. Nonparametric distributions such as the Dirichlet process (Ferguson, 1973) share statistical strength among conversations using a hierarchical model, such as the hierarchical Dirichlet process (HDP) (Teh et al., 2006). 2.1 Generative Process In this section, we develop SITS, a generative model of multiparty discourse that jointly discovers topics and speaker-specific topic shifts from an unannotated corpus (Figure 1a). As in the hierarchical Dirichlet process (Teh et al., 2006), we allow an unbounded number of topics to be shared among the turns of the corpus. Topics are drawn from a base distribution H over multinomial distributions over the vocabulary, a finite Dirichlet with symmetric prior λ. 
Unlike the HDP, where every document (here, every turn) draws a new multinomial distribution from a Dirichlet process, the social and temporal dynamics of a conversation, as specified by the binary topic shift indicator lc,t, determine when new draws happen. The full generative process is as follows: 1. For speaker m ∈ [1, M], draw speaker shift probability πm ∼ Beta(γ) 2. Draw∼ global probability measure G0 ∼ DP(α, H) 3. For each conversation c ∈ [1, C] (a) Draw conversation distribution Gc ∼ DP(α0 , G0) (b) For each turn t ∈ [1, Tc] with speaker ac,t i. If t = 1, set the topic shift lc,t = 1. Otherwise, draw lc,t ∼ Bernoulli(πac,t ). ii. If lc,t = 1∼, d Breawrn Gc,t ∼ DP(αc, Gc). Otherwise, set Gc,t ≡ Gc,t−1 . iii. For each word ≡ind Gex n ∈ [1, Nc,t] • Draw ψc,t,n ∼ Gc,t • DDrraaww wc,t,n ∼ Multinomial(ψc,t,n) The hierarchy of Dirichlet processes allows statistical strength to be shared across contexts; within a conversation and across conversations. The perspeaker topic shift tendency πm allows speaker identity to influence the evolution of topics. To make notation concrete and aligned with the topic segmentation, we introduce notation for segments in a conversation. A segment s of conversation c is a sequence of turns [τ, τ0] such that lc,τ = lc,τ0+1 = 1and lc,t = 0, ∀t ∈ (τ, τ0] . When lc,t = 0, Gc,t is the same =Gc 0,t,−∀1t a ∈nd ( aτ,llτ τtopics (i.e. multinomial distributions over words) {ψc,t,n} that generate words in turn t and the topics{ ψ{ψc,t}−1,n} that generate words in turn t −1 come from{ψ ψthc,et −s1a,mn}e as Figure 1: Graphical model representations of our proposed models: (a) the nonparametric version; (b) the parametric version. Nodes represent random variables (shaded ones are observed), lines are probabilistic dependencies. Plates represent repetition. The innermost plates are turns, grouped in conversations. distribution. Thus all topics used in a segment s are drawn from a single distribution, Gc,s, , , , Gc,s | lc,1 lc,2 · · · lc,Tc , αc, Gc ∼ DP(αc, Gc) (1) For notational convenience, Sc denotes the number of segments in conversation c, and st denotes the segment index of turn t. We emphasize that all segment-related notations are derived from the posterior over the topic shifts land not part of the model itself. Parametric Version SITS is a generalization of a parametric model (Figure 1b) where each turn has a multinomial distribution over K topics. In the parametric case, the number of topics K is fixed. Each topic, as before, is a multinomial distribution φ1 . . . φK. In the parametric case, each turn t in conversation c has an explicit multinomial distribution over K topics θc,t, identical for turns within a segment. A new topic distribution θ is drawn from a Dirichlet distribution parameterized by α when the topic shift indicator lis 1. The parametric version does not share strength within or across conversations, unlike SITS. When applied on a single conversation without speaker identity (all speakers are identical) it is equivalent to (Purver et al., 2006). In our experiments (Section 5), we compare against both. 80 3 Inference To find the latent variables that best explain observed data, we use Gibbs sampling, a widely used Markov chain Monte Carlo inference technique (Neal, 2000; Resnik and Hardisty, 2010). The state space is latent variables for topic indices assigned to all tokens z = {zc,t,n} and topic shifts assigned to turns l= {lc,t}. {Wze marginalize over all other latent variablle =s. 
Here, we only present the conditional sampling equations; for more details, see our supplement.2 3.1 Sampling Topic Assignments To sample zc,t,n, the index of the shared topic assigned to token n of turn t in conversation c, we need to sample the path assigning each word token to a segment-specific topic, each segment-specific topic to a conversational topic and each conversational topic to a shared topic. For efficiency, we make use of the minimal path assumption (Wallach, 2008) to generate these assignments.3 Under the minimal path assumption, an observation is assumed to have been generated by using a new distribution if and only if there is no existing distribution with the same value. 2 http://www.cs.umd.edu/∼vietan/topicshift/appendix.pdf 3We also investigated using the maximal assumption and fully sampling assignments. We found the minimal path assumption worked as well as explicitly sampling seating assignments and that the maximal path assumption worked less well. We use Nc,s,k to denote the number of tokens in segment s in conversation c assigned topic k; Nc,k denotes the total number of segment-specific topics in conversation c assigned topic k and Nk denotes the number of conversational topics assigned topic k. TWk,w denotes the number of times the shared topic k is assigned to word w in the vocabulary. Marginal counts are represented with · and ∗ represents all hyperparameters. The condit·ional d∗istribution for zc,t,n is P(zc,t,n = k | wc,t,n = w, z−c,t,n, w−c,t,n, l, ∗) ∝ Nc−,sct ,kn+αNc −c,s−ct,kct·,n Nn+c −,·αc ,t0cnN +k−· αc,t0 ,n + αK × VT1 W k−, ·c,wctk, n e+w V.λ( 2), Here V is the size of the vocabulary, K is the current number of shared topics and the superscript −c,t,n denotes counts without considering wc,t,n. In Equation 2, the first factor is proportional to the probability of sampling a path according to the minimal path assumption; the second factor is proportional to the likelihood of observing w given the sampled topic. Since an uninformed prior is used, when a new topic is sampled, all tokens are equiprobable. 3.2 Sampling Topic Shifts Sampling the topic shift variable lc,t requires us to consider merging or splitting segments. We use kc,t to denote the shared topic indices of all tokens in turn t of conversation c; Sac,t,x to denote the number of times speaker ac,t is assigned the topic shift with value x ∈ {0, 1}; Jcx,s to denote the number of topics in segment s 1o}f conversation c if lc,t = x and Ncx,s,j to denote the number of tokens assigned to the segment-specific topic j when lc,t = x.4 Again, the superscript −c,t is used to denote exclusion of turn t of conversation c in the corresponding counts. Recall that the topic shift is a binary variable. We use 0 to represent the case that the topic distribution is identical to the previous turn. We sample this assignment P(lc,t = 0 | l−c,t, w, k, a, ∗) ∝ SSa−a−cc,c,ct,t , t·,0++ 2 γγ×αcJ0c,sNtx=Qc01,sjJt=c,0·,1s(tx(N −c0 1,s +t,j α−c) 1)!. (3) 4Deterministically knowQing the path assignments is the primary efficiency motivation for using the minimal path assumption. The alternative is to explicitly sample the path assignments, which is more complicated (for both notation and computation). This option is spelled in full detail in the supplementary material. 81 In Equation 3, the first factor is proportional to the probability of assigning a topic shift of value 0 to speaker ac,t and the second factor is proportional to the joint probability of all topics in segment st of conversation c when lc,t = 0. 
The other alternative is for the topic shift to be 1, which represents the introduction of a new distri- bution over topics inside an existing segment. We sample this as P(lc,t = 1 | l−c,t, w, k, a, ∗) ∝ S −a −c ,c t, t, t, ·1+ 2 γ ×αcJc1,(st−1x)NQ=c1,1(jJs=ct1−,1(s1t)−,·1()x(N −c1 1,( +st− α1c) ,j− 1)! αcJcQ1,sNxt=c1Q1,stJj,c=1·,(s1xt( −N 1c1, +stj α−c) 1)!. (4) As above, the first faQctor in Equation 4 is proportional to the probability of assigning a topic shift of value 1to speaker ac,t; the second factor in the big bracket is proportional to the joint distribution of the topics in segments st − 1 and st. In this case lc,t = 1 means splitting the current segment, which results in two joint probabilities for two segments. 4 Datasets This section introduces the three corpora we use. We preprocess the data to remove stopwords and remove turns containing fewer than five tokens. The ICSI Meeting Corpus: The ICSI Meeting Corpus (Janin et al., 2003) is 75 transcribed meetings. For evaluation, we used a standard set of reference segmentations (Galley et al., 2003) of 25 meetings. Segmentations are binary, i.e., each point of the document is either a segment boundary or not, and on average each meeting has 8 segment boundaries. After preprocessing, there are 60 unique speakers and the vocabulary contains 3346 non-stopword tokens. The 2008 Presidential Election Debates Our second dataset contains three annotated presidential debates (Boydstun et al., 2011) between Barack Obama and John McCain and a vice presidential debate between Joe Biden and Sarah Palin. Each turn is one of two types: questions (Q) from the moderator or responses (R) from a candidate. Each clause in a turn is coded with a Question Topic (TQ) and a Response Topic (TR). Thus, a turn has a list of TQ’s and TR’s both of length equal to the number of clauses in the turn. Topics are from the Policy Agendas Topics SpeakerTypeTurn clausesTQTR BrokawQbSeenfo.r Oeib ta gmeat,s [b.e.t.t]er A arned yo thuey sa oyuingght [. to. b]e th parte tphaere Adm foerri tchaant? economy is going to get much worse1N/A ObamaR[hN.o .m,.]e Is B a,um mtac mokenofs itdu iermenpt o ahrabt oaun th tel yt ,h we c Aaen’rm epea gryoic ithnangei e trco bo hinlaosvm e[.y t. o. h]elp ordinary familes be able to stay in their1 1 4 BrokawQSen. McCain, in all candor, do you think the economy is going to get worse before it gets better?1N/A McCainR[Iom.ftwho.trie]n Ikiegrtofih oeicwonumkteiv aegfn wdlyt.ebri[ua.dyc otuf]petfh ec tserivo bnlayd,islmfoaw nes,d staobptihelcaziteplt ihoneptlrheoscuatsni hgflauvmean rckne itnw– WmhoaisrcthgiaIngbetoalnitevshoe w ne wca vna,l ucet1 240 Table 1: Example turns from the annotated 2008 election debates. The topics (TQ and TR) are from the Policy Agendas Topics Codebook which contains the following codes of topic: Macroeconomics Community Development (14), Government Operations (20). (1), Housing & Codebook, a manual inventory of 19 major topics and 225 subtopics.5 Table 1 shows an example annotation. To get reference segmentations, we assign each turn a real value from 0 to 1indicating how much a turn changes the topic. For a question-typed turn, the score is the fraction of clause topics not appearing in the previous turn; for response-typed turns, the score is the fraction of clause topics that do not appear in the corresponding question. This results in a set of non-binary reference segmentations. 
For evaluation metrics that require binary segmentations, we create a binary segmentation by marking a turn as a segment boundary if its computed score is 1. This threshold is chosen to include only true segment boundaries.

CNN's Crossfire: Crossfire was a weekly U.S. television "talking heads" program engineered to incite heated arguments (hence the name). Each episode features two recurring hosts, two guests, and clips from the week's news. Our Crossfire dataset contains 1134 transcribed episodes aired between 2000 and 2004.6 There are 2567 unique speakers. Unlike the previous two datasets, Crossfire does not have explicit topic segmentations, so we use it to explore speaker-specific characteristics (Section 6).

5 Topic Segmentation Experiments

In this section, we examine how well SITS can replicate annotations of when new topics are introduced. We discuss metrics for evaluating an algorithm's segmentation against a gold annotation, describe our experimental setup, and report the results.

5 http://www.policyagendas.org/page/topic-codebook
6 http://www.cs.umd.edu/~vietan/topicshift/crossfire.zip

Evaluation Metrics To evaluate segmentations, we use Pk (Beeferman et al., 1999) and WindowDiff (WD) (Pevzner and Hearst, 2002). Both metrics measure the probability that two points in a document will be incorrectly separated by a segment boundary. Both techniques consider all spans of length k in the document and count whether the two endpoints of the window are (im)properly segmented against the gold segmentation (a sketch of the Pk computation follows the method list below). However, these metrics have drawbacks. First, they require both hypothesized and reference segmentations to be binary. Many algorithms (e.g., probabilistic approaches) give non-binary segmentations in which candidate boundaries have real-valued scores (e.g., a probability or confidence). Evaluation then requires arbitrary thresholding to binarize soft scores. To be fair, thresholds are set so that the number of segments is equal to a predefined value (Purver et al., 2006; Galley et al., 2003). To overcome these limitations, we also use Earth Mover's Distance (EMD) (Rubner et al., 2000), a metric that measures the distance between two distributions; the EMD is the minimal cost to transform one distribution into the other. Each segmentation can be considered a multi-dimensional distribution in which each candidate boundary is a dimension. In EMD, a distance function across features allows partial credit for "near miss" segment boundaries. In addition, because EMD operates on distributions, we can compute the distance between non-binary hypothesized segmentations and binary or real-valued reference segmentations. We use the FastEMD implementation (Pele and Werman, 2009).

Experimental Methods We applied the following methods to discover topic segmentations in a document:
• TextTiling (Hearst, 1997) is one of the earliest general-purpose topic segmentation algorithms, sliding a fixed-width window to detect major changes in lexical similarity.
• P-NoSpeaker-S: the parametric version without speaker identity, run on each conversation separately (Purver et al., 2006).
• P-NoSpeaker-M: the parametric version without speaker identity, run on all conversations.
• P-SITS: the parametric version of SITS with speaker identity, run on all conversations.
• NP-HMM: an HMM-based nonparametric model that assigns a single topic per turn. This model can be considered a Sticky HDP-HMM (Fox et al., 2008) with speaker identity.
• NP-SITS: the nonparametric version of SITS with speaker identity, run on all conversations.
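As referenced above, here is a minimal sketch of the Pk computation (our own illustrative implementation, not a particular released evaluation script; segmentations are represented as per-position segment labels):

    def p_k(reference, hypothesis, k=None):
        # Pk (Beeferman et al., 1999): fraction of width-k probe windows whose
        # two endpoints the hypothesis places in the same/different segments
        # inconsistently with the reference.
        n = len(reference)
        if k is None:
            # Conventional choice: half the average reference segment length.
            k = max(1, round(n / len(set(reference)) / 2))
        errors = 0
        for i in range(n - k):
            ref_same = reference[i] == reference[i + k]
            hyp_same = hypothesis[i] == hypothesis[i + k]
            errors += int(ref_same != hyp_same)
        return errors / (n - k)

    # Lower is better; identical segmentations score 0.0.
    print(p_k([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 1, 1]))  # 0.5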
Parameter Settings and Implementations In our experiments, all parameters of TextTiling are the same as in (Hearst, 1997). For the statistical models, Gibbs sampling with 10 randomly initialized chains is used. Initial hyperparameter values are sampled from U(0, 1) to favor sparsity; statistics are collected after 500 burn-in iterations with a lag of 25 iterations, over a total of 5000 iterations; and slice sampling (Neal, 2003) optimizes the hyperparameters.

Results and Analysis Table 2 shows the performance of the various models on the topic segmentation problem, using the ICSI corpus and the 2008 debates. Consistent with previous results, probabilistic models outperform TextTiling. In addition, among the probabilistic models, the models that had access to speaker information consistently segment better than those lacking such information, supporting our assertion that there is benefit to modeling conversation as a social process. Furthermore, NP-SITS outperforms NP-HMM in both experiments, suggesting that assigning a distribution over topics to each turn is better than assigning a single topic. This is consistent with the parametric results reported in (Purver et al., 2006).

The contribution of speaker identity seems more valuable in the debate setting. Debates are characterized by strong rewards for setting the agenda; dodging a question or moving the debate toward an opponent's weakness can be useful strategies (Boydstun et al., 2011). In contrast, meetings (particularly low-stakes ICSI meetings) are characterized by pragmatic rather than strategic topic shifts. Second, agenda-setting roles are clearer in formal debates; a moderator is tasked with setting the agenda and ensuring that the conversation does not wander too much. The nonparametric model does best on the smaller debate dataset. We suspect that an evaluation that directly assessed topic quality, either via prediction (Teh et al., 2006) or interpretability (Chang et al., 2009), would favor the nonparametric model more.

6 Evaluating Topic Shift Tendency

In this section, we focus on the ability of SITS to capture speaker-level attributes. Recall that SITS associates with each speaker a topic shift tendency π that represents the probability of asserting a new topic in the conversation. While topic segmentation is a well-studied problem, there are no established quantitative measurements of an individual's ability to control a conversation. To evaluate whether the tendency captures meaningful characteristics of speakers, we compare our inferred tendencies against insights from political science.

2008 Elections To obtain a posterior estimate of π (Figure 3), we create 10 chains with hyperparameters sampled from the uniform distribution U(0, 1) and average π over the 10 chains (as described in Section 5). In these debates, Ifill is the moderator of the debate between Biden and Palin; Brokaw, Lehrer, and Schieffer are the moderators of the three debates between Obama and McCain. Here, "Question" denotes questions from audience members in the "town hall" debate; the role of this "speaker" can be considered equivalent to that of a debate moderator. The topic shift tendencies of the moderators are much higher than those of the candidates. In the three debates between Obama and McCain, the moderators (Brokaw, Lehrer, and Schieffer) have significantly higher scores than both candidates. This is a useful reality check, since in a debate the moderators are the ones asking questions and literally controlling the topical focus.
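Concretely, given the sampled shift indicators, a per-speaker point estimate of π can be read off from the shift counts via the posterior mean of the symmetric Beta(γ, γ)-Bernoulli model implied by Equation 3, averaged over chains. A minimal sketch (our own illustration; the data structures and the γ value are hypothetical, not the authors' code):

    from collections import defaultdict

    def estimate_pi(sampled_shifts, speakers, gamma=0.5):
        # sampled_shifts: one dict per Gibbs sample, mapping (conversation, turn)
        # to the binary shift l. speakers maps (conversation, turn) to a speaker.
        totals = defaultdict(float)
        for sample in sampled_shifts:
            shift1 = defaultdict(int)  # S_{m,1}: turns where speaker m shifts topic
            count = defaultdict(int)   # S_{m,.}: all turns of speaker m
            for key, l in sample.items():
                m = speakers[key]
                shift1[m] += l
                count[m] += 1
            for m in count:
                # Posterior mean under a Beta(gamma, gamma) prior on pi_m.
                totals[m] += (shift1[m] + gamma) / (count[m] + 2 * gamma)
        return {m: totals[m] / len(sampled_shifts) for m in totals}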
Interestingly, in the vice-presidential debate, the score of moderator Ifill is only slightly higher than those of Palin and Biden; this is consistent with media commentary characterizing her as a weak moderator.7 Similarly, the "Question" speaker had a relatively high variance, consistent with an amalgamation of many distinct speakers. These topic shift tendencies suggest that all candidates manage to succeed at some points in setting and controlling the debate topics. Our model gives Obama a slightly higher score than McCain, consistent with social science claims (Boydstun et al., 2011) that Obama had the lead over McCain in setting the agenda. Table 4 shows examples of SITS-detected topic shifts.

7 http://harpers.org/archive/2008/10/hbc-90003659

[Figure 3: Topic shift tendencies of speakers in the 2008 Presidential Election Debates (larger means greater tendency).]

Crossfire Crossfire, unlike the debates, has many speakers. This allows us to examine more closely what we can learn about speakers' topic shift tendencies. We have verified that SITS can segment topics; assuming that changing the topic is useful for a speaker, how can we characterize who does so effectively? We examine the relationship between topic shift tendency, social roles, and political ideology. To focus on frequent speakers, we filter out speakers with fewer than 30 turns. Most speakers have a relatively small π, with the mode around 0.3. There are, however, speakers with very high topic shift tendencies. Table 5 shows the speakers with the highest values according to SITS. We find that there are three general patterns for who influences the course of a conversation in Crossfire. First, there are structural "speakers" the show uses to frame and propose new topics: audience questions, news clips (e.g., many of Gore's and Bush's turns from 2000), and voice-overs. That SITS is able to recover these is reassuring. Second, the stable of regular hosts receives high topic shift tendencies, which is reasonable given their experience with the format and their ostensible moderation roles (in practice, they also stoke lively discussion). The remaining class is more interesting. The remaining non-hosts with high topic shift tendency are relative moderates on the political spectrum:
• John Kasich, one of the few Republicans to support the assault weapons ban and now governor of Ohio, a swing state
• Christine Todd Whitman, former Republican governor of New Jersey, a very Democratic state
• John McCain, who before 2008 was known as a "maverick" for working with Democrats (e.g., Russ Feingold)
This suggests that, despite Crossfire's tendency to create highly partisan debates, those who can work across the political spectrum may be best positioned to influence the topic under discussion in highly polarized contexts. Table 4 shows detected topic shifts from these speakers; two of the examples (McCain and Whitman) show Republicans disagreeing with President Bush, and in the other, Kasich defends a Republican plan (school vouchers) popular with traditional Democratic constituencies.
[Table 4: Examples of turns detected as shifting the topic, shown alongside the previous turn, drawn from speakers with high topic shift tendency.]

[Table 5: Top speakers by topic shift tendencies π (columns: Rank, Speaker, π). We mark hosts (†) and "speakers" who often (but not always) appeared in clips (‡). Apart from those groups, the speakers with the highest tendency were political moderates.]

7 Related and Future Work

In the realm of statistical models, a number of techniques incorporate social connections and identity to explain content in social networks (Chang and Blei, 2009) and scientific corpora (Rosen-Zvi et al., 2004). However, these models ignore the temporal evolution of content, treating documents as static. Models that do investigate the evolution of topics over time typically ignore the identity of the speaker. For example: models having sticky topics over n-grams (Johnson, 2010), the sticky HDP-HMM (Fox et al., 2008), models that are an amalgam of sequential models and topic models (Griffiths et al., 2005; Wallach, 2006; Gruber et al., 2007; Ahmed and Xing, 2008; Boyd-Graber and Blei, 2008; Du et al., 2010), or explicit models of time or other relevant features as a distinct latent variable (Wang and McCallum, 2006; Eisenstein et al., 2010). In contrast, SITS jointly models topics and individuals' tendency to control a conversation. Not only does SITS outperform other models on standard computational linguistics baselines, but it also proposes intriguing hypotheses for social scientists.

Associating each speaker with a scalar that models their tendency to change the topic does improve performance on standard tasks, but it is inadequate to fully describe an individual. Modeling individuals' perspective (Paul and Girju, 2010), "side" (Thomas et al., 2006), or personal preferences for topics (Grimmer, 2009) would enrich the model and better illuminate the interaction of influence and topic. Statistical analysis of political discourse can help discover patterns that political scientists, who often work via a "close reading," might otherwise miss. We plan to work with social scientists to validate our implicit hypothesis that our topic shift tendency correlates well with intuitive measures of "influence."

Acknowledgements

This research was funded in part by the Army Research Laboratory through ARL Cooperative Agreement W911NF-09-2-0072 and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Laboratory. Jordan Boyd-Graber and Philip Resnik are also supported by US National Science Foundation grant #1018625. Any opinions, findings, conclusions, or recommendations expressed are the authors' and do not necessarily reflect those of the sponsors.

References

[Abbott et al., 2011] Abbott, R., Walker, M., Anand, P., Fox Tree, J. E., Bowmani, R., and King, J. (2011). How can you say such things?!?: Recognizing disagreement in informal political argument.
In Proceedings of the Workshop on Language in Social Media (LSM 2011), pages 2–11.
[Ahmed and Xing, 2008] Ahmed, A. and Xing, E. P. (2008). Dynamic non-parametric mixture models and the recurrent Chinese restaurant process: with applications to evolutionary clustering. In SDM, pages 219–230.
[Beeferman et al., 1999] Beeferman, D., Berger, A., and Lafferty, J. (1999). Statistical models for text segmentation. Machine Learning, 34:177–210.
[Blei and Lafferty, 2009] Blei, D. M. and Lafferty, J. (2009). Topic models. In Text Mining: Theory and Applications. Taylor and Francis, London.
[Boyd-Graber and Blei, 2008] Boyd-Graber, J. and Blei, D. M. (2008). Syntactic topic models. In Proceedings of Advances in Neural Information Processing Systems.
[Boydstun et al., 2011] Boydstun, A. E., Phillips, C., and Glazier, R. A. (2011). It's the economy again, stupid: Agenda control in the 2008 presidential debates. Forthcoming.
[Chang and Blei, 2009] Chang, J. and Blei, D. M. (2009). Relational topic models for document networks. In Proceedings of Artificial Intelligence and Statistics.
[Chang et al., 2009] Chang, J., Boyd-Graber, J., Wang, C., Gerrish, S., and Blei, D. M. (2009). Reading tea leaves: How humans interpret topic models. In Neural Information Processing Systems.
[Du et al., 2010] Du, L., Buntine, W., and Jin, H. (2010). Sequential latent Dirichlet allocation: Discover underlying topic structures within a document. In 2010 IEEE 10th International Conference on Data Mining (ICDM), pages 148–157.
[Ehlen et al., 2007] Ehlen, P., Purver, M., and Niekrasz, J. (2007). A meeting browser that learns. In Proceedings of the AAAI Spring Symposium on Interaction Challenges for Intelligent Assistants.
[Eisenstein and Barzilay, 2008] Eisenstein, J. and Barzilay, R. (2008). Bayesian unsupervised topic segmentation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
[Eisenstein et al., 2010] Eisenstein, J., O'Connor, B., Smith, N. A., and Xing, E. P. (2010). A latent variable model for geographic lexical variation. In EMNLP'10, pages 1277–1287.
[Ferguson, 1973] Ferguson, T. S. (1973). A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209–230.
[Fox et al., 2008] Fox, E. B., Sudderth, E. B., Jordan, M. I., and Willsky, A. S. (2008). An HDP-HMM for systems with state persistence. In Proceedings of the International Conference on Machine Learning.
[Galley et al., 2003] Galley, M., McKeown, K., Fosler-Lussier, E., and Jing, H. (2003). Discourse segmentation of multi-party conversation. In Proceedings of the Association for Computational Linguistics.
[Georgescul et al., 2006] Georgescul, M., Clark, A., and Armstrong, S. (2006). Word distributions for thematic segmentation in a support vector machine approach. In Conference on Computational Natural Language Learning.
[Gerrish and Blei, 2010] Gerrish, S. and Blei, D. M. (2010). A language-based approach to measuring scholarly impact. In Proceedings of the International Conference on Machine Learning.
[Griffiths et al., 2005] Griffiths, T. L., Steyvers, M., Blei, D. M., and Tenenbaum, J. B. (2005). Integrating topics and syntax. In Proceedings of Advances in Neural Information Processing Systems.
[Grimmer, 2009] Grimmer, J. (2009). A Bayesian hierarchical topic model for political texts: Measuring expressed agendas in Senate press releases. Political Analysis, 18:1–35.
[Gruber et al., 2007] Gruber, A., Rosen-Zvi, M., and Weiss, Y.
(2007). Hidden topic Markov models. In Artificial Intelligence and Statistics.
[Hawes et al., 2009] Hawes, T., Lin, J., and Resnik, P. (2009). Elements of a computational model for multi-party discourse: The turn-taking behavior of Supreme Court justices. Journal of the American Society for Information Science and Technology, 60(8):1607–1615.
[Hearst, 1997] Hearst, M. A. (1997). TextTiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1):33–64.
[Hsueh et al., 2006] Hsueh, P.-y., Moore, J. D., and Renals, S. (2006). Automatic segmentation of multiparty dialogue. In Proceedings of the European Chapter of the Association for Computational Linguistics.
[Ireland et al., 2011] Ireland, M. E., Slatcher, R. B., Eastwick, P. W., Scissors, L. E., Finkel, E. J., and Pennebaker, J. W. (2011). Language style matching predicts relationship initiation and stability. Psychological Science, 22(1):39–44.
[Janin et al., 2003] Janin, A., Baron, D., Edwards, J., Ellis, D., Gelbart, D., Morgan, N., Peskin, B., Pfau, T., Shriberg, E., Stolcke, A., and Wooters, C. (2003). The ICSI meeting corpus. In IEEE International Conference on Acoustics, Speech, and Signal Processing.
[Johnson, 2010] Johnson, M. (2010). PCFGs, topic models, adaptor grammars and learning topical collocations and the structure of proper names. In Proceedings of the Association for Computational Linguistics.
[Morris and Hirst, 1991] Morris, J. and Hirst, G. (1991). Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17:21–48.
[Müller and Quintana, 2004] Müller, P. and Quintana, F. A. (2004). Nonparametric Bayesian data analysis. Statistical Science, 19(1):95–110.
[Murray et al., 2005] Murray, G., Renals, S., and Carletta, J. (2005). Extractive summarization of meeting recordings. In European Conference on Speech Communication and Technology.
[Neal, 2000] Neal, R. M. (2000). Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249–265.
[Neal, 2003] Neal, R. M. (2003). Slice sampling. Annals of Statistics, 31:705–767.
[Olney and Cai, 2005] Olney, A. and Cai, Z. (2005). An orthonormal basis for topic segmentation in tutorial dialogue. In Proceedings of the Human Language Technology Conference.
[Paul and Girju, 2010] Paul, M. and Girju, R. (2010). A two-dimensional topic-aspect model for discovering multi-faceted topics. In Association for the Advancement of Artificial Intelligence.
[Pele and Werman, 2009] Pele, O. and Werman, M. (2009). Fast and robust earth mover's distances. In International Conference on Computer Vision.
[Pevzner and Hearst, 2002] Pevzner, L. and Hearst, M. A. (2002). A critique and improvement of an evaluation metric for text segmentation. Computational Linguistics, 28.
[Purver, 2011] Purver, M. (2011). Topic segmentation. In Tur, G. and de Mori, R., editors, Spoken Language Understanding: Systems for Extracting Semantic Information from Speech, pages 291–317. Wiley.
[Purver et al., 2006] Purver, M., Körding, K., Griffiths, T. L., and Tenenbaum, J. (2006). Unsupervised topic modelling for multi-party spoken discourse. In Proceedings of the Association for Computational Linguistics.
[Resnik and Hardisty, 2010] Resnik, P. and Hardisty, E. (2010). Gibbs sampling for the uninitiated. Technical Report UMIACS-TR-2010-04, University of Maryland. http://www.lib.umd.edu/drum/handle/1903/10058.
[Reynar, 1998] Reynar, J. C. (1998).
Topic Segmentation: Algorithms and Applications. PhD thesis, University of Pennsylvania.
[Rosen-Zvi et al., 2004] Rosen-Zvi, M., Griffiths, T. L., Steyvers, M., and Smyth, P. (2004). The author-topic model for authors and documents. In Proceedings of Uncertainty in Artificial Intelligence.
[Rubner et al., 2000] Rubner, Y., Tomasi, C., and Guibas, L. J. (2000). The earth mover's distance as a metric for image retrieval. International Journal of Computer Vision, 40:99–121.
[Teh et al., 2006] Teh, Y. W., Jordan, M. I., Beal, M. J., and Blei, D. M. (2006). Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581.
[Thomas et al., 2006] Thomas, M., Pang, B., and Lee, L. (2006). Get out the vote: Determining support or opposition from Congressional floor-debate transcripts. In Proceedings of Empirical Methods in Natural Language Processing.
[Tur et al., 2010] Tur, G., Stolcke, A., Voss, L., Peters, S., Hakkani-Tür, D., Dowding, J., Favre, B., Fernández, R., Frampton, M., Frandsen, M., Frederickson, C., Graciarena, M., Kintzing, D., Leveque, K., Mason, S., Niekrasz, J., Purver, M., Riedhammer, K., Shriberg, E., Tien, J., Vergyri, D., and Yang, F. (2010). The CALO meeting assistant system. IEEE Transactions on Audio, Speech, and Language Processing, 18:1601–1611.
[Wallach, 2006] Wallach, H. M. (2006). Topic modeling: Beyond bag-of-words. In Proceedings of the International Conference on Machine Learning.
[Wallach, 2008] Wallach, H. M. (2008). Structured Topic Models for Language. PhD thesis, University of Cambridge.
[Wang et al., 2008] Wang, C., Blei, D. M., and Heckerman, D. (2008). Continuous time dynamic topic models. In Proceedings of Uncertainty in Artificial Intelligence.
[Wang and McCallum, 2006] Wang, X. and McCallum, A. (2006). Topics over time: a non-Markov continuous-time model of topical trends. In Knowledge Discovery and Data Mining.
5 0.1628837 180 acl-2012-Social Event Radar: A Bilingual Context Mining and Sentiment Analysis Summarization System
Author: Wen-Tai Hsieh ; Chen-Ming Wu ; Tsun Ku ; Seng-cho T. Chou
Abstract: Social Event Radar is a new social networking-based service platform that aims to alert and monitor merchandise flaws, food-safety related issues, unexpected eruptions of disease, or campaign issues directed towards the Government, enterprises of any kind, or election parties, through a keyword expansion detection module, using a bilingual sentiment opinion analysis toolkit to produce the specific event's social dashboard and deliver outcomes that help authorities plan a "risk control" strategy. With the rapid development of social networks, people can now easily publish their opinions on the Internet. On the other hand, people can also obtain various opinions from others in a few seconds even though they do not know each other. A typical approach to obtaining required information is to use a search engine with some relevant keywords. We thus take social media and forums as our major data sources and aim at collecting specific issues efficiently and effectively in this work.
6 0.10732828 199 acl-2012-Topic Models for Dynamic Translation Model Adaptation
8 0.099640645 195 acl-2012-The Creation of a Corpus of English Metalanguage
9 0.098930687 98 acl-2012-Finding Bursty Topics from Microblogs
10 0.091355309 79 acl-2012-Efficient Tree-Based Topic Modeling
11 0.091262303 173 acl-2012-Self-Disclosure and Relationship Strength in Twitter Conversations
12 0.087829851 34 acl-2012-Automatically Learning Measures of Child Language Development
13 0.082294933 211 acl-2012-Using Rejuvenation to Improve Particle Filtering for Bayesian Word Segmentation
14 0.081067666 14 acl-2012-A Joint Model for Discovery of Aspects in Utterances
15 0.078321867 61 acl-2012-Cross-Domain Co-Extraction of Sentiment and Topic Lexicons
16 0.077721529 38 acl-2012-Bayesian Symbol-Refined Tree Substitution Grammars for Syntactic Parsing
17 0.076201193 41 acl-2012-Bootstrapping a Unified Model of Lexical and Phonetic Acquisition
18 0.074777946 203 acl-2012-Translation Model Adaptation for Statistical Machine Translation with Monolingual Topic Information
19 0.074683942 110 acl-2012-Historical Analysis of Legal Opinions with a Sparse Mixed-Effects Latent Variable Model
20 0.073491767 144 acl-2012-Modeling Review Comments
topicId topicWeight
[(0, -0.178), (1, 0.096), (2, 0.118), (3, 0.006), (4, -0.268), (5, 0.034), (6, -0.0), (7, -0.018), (8, 0.033), (9, 0.104), (10, -0.066), (11, 0.008), (12, -0.035), (13, 0.036), (14, 0.014), (15, -0.044), (16, -0.048), (17, -0.033), (18, -0.013), (19, -0.013), (20, -0.064), (21, -0.013), (22, -0.013), (23, -0.005), (24, -0.099), (25, 0.115), (26, 0.135), (27, 0.028), (28, 0.071), (29, -0.021), (30, -0.012), (31, 0.035), (32, 0.106), (33, 0.146), (34, 0.052), (35, 0.128), (36, 0.044), (37, -0.128), (38, -0.089), (39, -0.007), (40, -0.114), (41, 0.012), (42, 0.105), (43, -0.05), (44, 0.132), (45, 0.039), (46, 0.048), (47, -0.111), (48, 0.075), (49, 0.023)]
simIndex simValue paperId paperTitle
same-paper 1 0.96151912 88 acl-2012-Exploiting Social Information in Grounded Language Learning via Grammatical Reduction
Author: Mark Johnson ; Katherine Demuth ; Michael Frank
Abstract: This paper uses an unsupervised model of grounded language acquisition to study the role that social cues play in language acquisition. The input to the model consists of (orthographically transcribed) child-directed utterances accompanied by the set of objects present in the non-linguistic context. Each object is annotated by social cues, indicating e.g., whether the caregiver is looking at or touching the object. We show how to model the task of inferring which objects are being talked about (and which words refer to which objects) as standard grammatical inference, and describe PCFG-based unigram models and adaptor grammar-based collocation models for the task. Exploiting social cues improves the performance of all models. Our models learn the relative importance of each social cue jointly with word-object mappings and collocation structure, consis- tent with the idea that children could discover the importance of particular social information sources during word learning.
2 0.74432427 86 acl-2012-Exploiting Latent Information to Predict Diffusions of Novel Topics on Social Networks
Author: Tsung-Ting Kuo ; San-Chuan Hung ; Wei-Shih Lin ; Nanyun Peng ; Shou-De Lin ; Wei-Fen Lin
Abstract: This paper brings a marriage of two seemingly unrelated topics, natural language processing (NLP) and social network analysis (SNA). We propose a new task in SNA, which is to predict the diffusion of a new topic, and design a learning-based framework to solve this problem. We exploit the latent semantic information among users, topics, and social connections as features for prediction. Our framework is evaluated on real data collected from the public domain. The experiments show a 16% AUC improvement over baseline methods. The source code and dataset are available at http://www.csie.ntu.edu.tw/~d97944007/diffusion/

1 Background

The diffusion of information on social networks has been studied for decades. Generally, the proposed strategies fall into two categories, model-driven and data-driven. Model-driven strategies, such as the independent cascade model (Kempe et al., 2003), rely on certain manually crafted, usually intuitive, models to fit the diffusion data without using diffusion history. Data-driven strategies usually utilize learning-based approaches to predict future propagation given historical records (Fei et al., 2011; Galuba et al., 2010; Petrovic et al., 2011). Data-driven strategies usually perform better than model-driven approaches because past diffusion behavior is used during learning (Galuba et al., 2010). Recently, researchers have started to exploit content information in data-driven diffusion models (Fei et al., 2011; Petrovic et al., 2011; Zhu et al., 2011). However, most data-driven approaches assume that, in order to train a model and predict the future diffusion of a topic, it is necessary to obtain historical records of how this topic has propagated in a social network (Petrovic et al., 2011; Zhu et al., 2011). We argue that this assumption does not always hold in real-world scenarios, and that being able to forecast the propagation of novel or unseen topics is more valuable in practice. For example, a company would like to know which users are most likely to be the source of "viva voce" about a newly released product for advertising purposes. A political party might want to estimate the potential degree of response to a half-baked policy before deciding to bring it up to the public. To achieve such goals, it is necessary to predict the future propagation behavior of a topic even before any actual diffusion of that topic happens (i.e., when no historical propagation data for the topic are available). Lin et al. also propose an idea aimed at predicting the inference of implicit diffusions for novel topics (Lin et al., 2011). The main difference between their work and ours is that they focus on implicit diffusions, whose data are usually not available; consequently, they need to rely on a model-driven approach instead of a data-driven approach. Our work, on the other hand, focuses on the prediction of explicit diffusion behaviors. Despite the fact that no diffusion data for novel topics are available, we can still design a data-driven approach that takes advantage of explicit diffusion data for known topics. Our experiments show that being able to utilize such information is critical for diffusion prediction.

2 The Novel-Topic Diffusion Model

We start by assuming an existing social network G = (V, E), where V is the set of nodes (or users) v and E is the set of links e. The set of topics is
denoted as T. Among them, some are considered novel topics (denoted as N), while the rest (R) are used as the training records. We are also given a set of diffusion records D = {d | d = (src, dest, t)}, where src is the source node (the diffusion source), dest is the destination node, and t is the topic of the diffusion, which belongs to R but not N. We assume that diffusions cannot occur between nodes without a direct social connection; any diffusion pair implies the existence of a link e = (src, dest) ∈ E. Finally, we assume there are sets of keywords or tags relevant to each topic (both existing and novel). Note that the keywords for novel topics should also appear among those of existing topics. From these sets of keywords, we construct a topic-word matrix TW = (P(word_j | topic_i))_{i,j}, whose elements are the conditional probabilities that a word appears in the text of a certain topic. Similarly, we construct a user-word matrix UW = (P(word_j | user_i))_{i,j} from these sets of keywords. Given the above information, the goal is to predict whether a given link is active (i.e., belongs to a diffusion) for topics in N.

2.1 The Framework

The main challenge of this problem lies in the fact that the past diffusion behaviors of new topics are missing. To address this challenge, we propose a supervised diffusion discovery framework that exploits the latent semantic information among users, topics, and their explicit/implicit interactions. Intuitively, four kinds of information are useful for prediction:
• Topic information: Intuitively, knowing the signatures of a topic (e.g., is it about politics?) is critical to the success of the prediction.
• User information: The information of a user, such as personality (e.g., whether this user is aggressive or passive), is generally useful.
• User-topic interaction: Understanding a user's preference for certain topics can improve the quality of prediction.
• Global information: We include some global features (e.g., topology information) of the social network.
Below we describe how these four kinds of information are modeled in our framework.

2.2 Topic Information

We extract hidden topic category information to model topic signatures. In particular, we exploit the Latent Dirichlet Allocation (LDA) method (Blei et al., 2003), a widely used topic modeling technique, to decompose the topic-word matrix TW into hidden topic categories: TW = TH * HW, where TH is a topic-hidden matrix, HW is a hidden-word matrix, and h is a manually chosen parameter that determines the number of hidden topic categories. TH indicates the distribution of each topic over hidden topic categories, and HW indicates the distribution of each lexical term over hidden topic categories. Note that TW and TH include both existing and novel topics. We utilize TH_{t,*}, the row vector of the topic-hidden matrix TH for a topic t, as a feature set. In brief, we apply LDA to extract the topic-hidden vector TH_{t,*} to model the topic signature (TG) of both existing and novel topics.

Topic information can be further exploited. To predict whether a novel topic will be propagated through a link, we can first enumerate the existing topics that have been propagated through this link.
For each such topic, we can calculate its similarity to the new topic based on the hidden vectors generated above (e.g., using cosine similarity between feature vectors). Then, we sum up the similarity values as a new feature: topic similarity (TS). For example, suppose a link has previously propagated two topics for a total of three times, {ACL, KDD, ACL}, and we would like to know whether a new topic, EMNLP, will propagate through this link. We can use the topic-hidden vectors to generate the similarity values between EMNLP and the other topics (e.g., {0.6, 0.4, 0.6}), and then sum them up (1.6) as the value of TS.

2.3 User Information

Similar to topic information, we extract latent personal information to model user signatures (the users are already anonymized). We apply LDA on the user-word matrix UW: UW = UM * MW, where UM is the user-hidden matrix, MW is the hidden-word matrix, and m is the manually chosen number of hidden user categories. UM indicates the distribution of each user over hidden user categories (e.g., age). We then use UM_{u,*}, the row vector of UM for user u, as a feature set. In brief, we apply LDA to extract the user-hidden vector UM_{u,*} for both the source and destination nodes of a link to model the user signature (UG).

2.4 User-Topic Interaction

Modeling user-topic interaction turns out to be non-trivial. It is not useful to apply latent semantic analysis directly to the user-topic matrix UR = UQ * QR, where UR records how many times each user is involved in diffusions of each existing topic (R ⊂ T), because UR does not contain information about novel topics, and neither do UQ and QR. Given no propagation records for novel topics, we propose a method that still allows us to extract implicit user-topic information. First, we extract from the matrix TH (described in Section 2.2) a submatrix RH that contains only information about existing topics. Next, we apply left division to derive another user-hidden matrix UH: UH = (RH \ UR^T)^T = ((RH^T RH)^{-1} RH^T UR^T)^T. Using left division, we generate the UH matrix from existing-topic information. Finally, we use UH_{u,*}, the row vector of the user-hidden matrix UH for user u, as a feature set. Note that novel topics were included in the process of learning the hidden topic categories; therefore the features learned here do implicitly utilize some latent information about novel topics, which is not the case for UM. Experiments confirm the superiority of this approach. Furthermore, our approach ensures that the hidden categories in the topic-hidden and user-hidden matrices are identical. Intuitively, our method directly models each user's preference for topics' signatures (e.g., how capable is this user of propagating topics in the politics category?). In contrast, the UM of Section 2.3 represents a user's own signature (e.g., aggressiveness) and has nothing to do with their opinion of a topic. In short, we obtain the user-hidden probability vector UH_{u,*} as a feature set, which models user preferences for latent categories (UPLC).

2.5 Global Features

Given a candidate link, we can extract global social features such as in-degree (ID) and out-degree (OD). We tried other features, such as PageRank values, but found them not useful. Moreover, we extract the number of distinct topics (NDT) on a link as a feature. The intuition is that the more distinct topics a user has diffused to another user, the more likely a diffusion will happen for novel topics.
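As a minimal sketch of two of the features above, topic similarity (TS, Section 2.2) and the left-division step behind UPLC (Section 2.4), consider the following (our own illustration with numpy; all matrix shapes, names, and sizes are hypothetical, not the authors' code):

    import numpy as np

    def topic_similarity(new_topic_vec, propagated_topic_vecs):
        # TS: sum of cosine similarities between the new topic's hidden vector
        # and the hidden vectors of topics previously diffused on this link
        # (one row per past diffusion, so repeated topics count repeatedly).
        def cosine(u, v):
            return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return sum(cosine(new_topic_vec, t) for t in propagated_topic_vecs)

    def user_hidden(RH, UR):
        # Left division UH = (RH \ UR^T)^T, i.e., the least-squares solution X
        # of RH X = UR^T, transposed so each row is one user's hidden vector.
        # RH: |R| x h rows of TH for existing topics; UR: users x |R| counts.
        X, *_ = np.linalg.lstsq(RH, UR.T, rcond=None)
        return X.T

    # Hypothetical sizes: 80 existing topics, h = 7 hidden categories, 1000 users.
    rng = np.random.default_rng(0)
    UH = user_hidden(rng.random((80, 7)), rng.integers(0, 5, size=(1000, 80)))
    assert UH.shape == (1000, 7)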
2.6 Complexity Analysis

The complexity of producing each feature is as follows. (1) Topic information: O(I * |T| * h * B_t) for LDA using Gibbs sampling, where I is the number of sampling iterations, |T| is the number of topics, and B_t is the average number of tokens in a topic. (2) User information: O(I * |V| * m * B_u), where |V| is the number of users and B_u is the average number of tokens for a user. (3) User-topic interaction: the time complexity is O(h^3 + h^2 * |T| + h * |T| * |V|). (4) Global features: O(|D|), where |D| is the number of diffusions.

3 Experiments

For evaluation, we use the diffusion records of old topics to predict whether a diffusion link exists between two nodes given a new topic.

3.1 Dataset and Evaluation Metric

We first identify the 100 most popular topics (e.g., earthquake) from the Plurk micro-blog site between 01/2011 and 05/2011. Plurk is a popular micro-blog service in Asia with more than 5 million users (Kuo et al., 2011). We manually separate the 100 topics into 7 groups. We use topic-wise 4-fold cross validation to evaluate our method, because there are only 100 available topics. For each group, we select 3/4 of the topics for training and 1/4 for validation. The positive diffusion records are generated from post-response behavior. That is, if a person x posts a message containing one of the selected topics t, and later a person y responds to this message, we consider that a diffusion of t has occurred from x to y (i.e., (x, y, t) is a positive instance). Our dataset contains a total of 1,642,894 positive instances over 100 distinct topics; the largest and smallest topics contain 303,424 and 2,166 diffusions, respectively. The same number of negative instances per topic (1,642,894 in total) is sampled for binary classification (similar to the setup in KDD Cup 2011 Track 2). The negative links of a topic t are sampled randomly based on the absence of responses for that topic. The underlying social network is created using the post-response behavior as well: we assume there is an acquaintance link between x and y if and only if x has responded to y (or vice versa) on at least one topic. Eventually, we generated a social network of 163,034 nodes and 382,878 links. Furthermore, the sets of keywords for each topic are required to create the TW and UW matrices for latent topic analysis; we simply extract the content of posts and responses for each topic to create both matrices. We set the hidden category number h = m = 7, which is equal to the number of topic groups. We use area under the ROC curve (AUC) to evaluate our proposed framework (Davis and Goadrich, 2006); we rank the testing instances by their likelihood of being positive and compare this ranking with the ground truth to compute the AUC.

3.2 Implementation and Baseline

After trying many classifiers and obtaining similar results for all of them, we report only results from LIBLINEAR with c = 0.0001 (Fan et al., 2008) due to space limitations. We remove stop words, use SCWS (Hightman, 2012) for tokenization, and use MALLET (McCallum, 2002) and GibbsLDA++ (Phan and Nguyen, 2007) for LDA. We compare against three baseline models. First, we simply use the total number of existing diffusions between two nodes, across all topics, as a single feature for prediction. Second, we exploit the independent cascade model (Kempe et al., 2003) and use the normalized total number of diffusions as the propagation probability of each link.
Third, we try the heat diffusion model (Ma et al., 2008), setting the initial heat proportional to out-degree and tuning the diffusion time parameter until the best results are obtained. Note that we did not compare with other data-driven approaches, as we have not identified one that can predict the diffusion of novel topics.

3.3 Results

The result of each model is shown in Table 1. All but two features outperform the baseline. The best single feature is TS. Note that UPLC performs better than UG, which verifies our hypothesis that maintaining the same hidden features across the different LDA models is better. We further conduct experiments to evaluate different combinations of features (Table 2) and find that the best combination (TS + ID + NDT) yields about a 16% improvement over the baseline and outperforms the combination of all features. As stated in (Witten et al., 2011), adding useless features may cause the performance of classifiers to deteriorate. Intuitively, TS captures both latent topic and historical diffusion information, while ID and NDT provide complementary social characteristics of users.

4 Conclusions

The main contributions of this paper are as follows: 1. We propose a novel task of predicting the diffusion of unseen topics, which has wide applications in the real world. 2. Compared to the traditional model-driven or content-independent data-driven work on diffusion analysis, our solution demonstrates how one can bring together ideas from two different but promising areas, NLP and SNA, to solve a challenging problem. 3. Promising experimental results (74% AUC) not only demonstrate the usefulness of the proposed models but also indicate that predicting the diffusion of unseen topics without historical diffusion data is feasible.

Acknowledgments This work was also supported by the National Science Council, National Taiwan University, and Intel Corporation under Grants NSC 100-2911-I-002-001 and 101R7501.

References

David M. Blei, Andrew Y. Ng & Michael I. Jordan. 2003. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993–1022.
Jesse Davis & Mark Goadrich. 2006. The relationship between Precision-Recall and ROC curves. Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, Pennsylvania.
Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang & Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. J. Mach. Learn. Res., 9:1871–1874.
Hongliang Fei, Ruoyi Jiang, Yuhao Yang, Bo Luo & Jun Huan. 2011. Content based social behavior prediction: a multi-task learning approach. Proceedings of the 20th ACM International Conference on Information and Knowledge Management, Glasgow, Scotland, UK.
Wojciech Galuba, Karl Aberer, Dipanjan Chakraborty, Zoran Despotovic & Wolfgang Kellerer. 2010. Outtweeting the twitterers - predicting information cascades in microblogs. Proceedings of the 3rd Conference on Online Social Networks, Boston, MA.
Hightman. 2012. Simple Chinese Words Segmentation (SCWS).
David Kempe, Jon Kleinberg & Eva Tardos. 2003. Maximizing the spread of influence through a social network. Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, D.C.
Tsung-Ting Kuo, San-Chuan Hung, Wei-Shih Lin, Shou-De Lin, Ting-Chun Peng & Chia-Chun Shih. 2011. Assessing the quality of diffusion models using real-world social network data. Conference on Technologies and Applications of Artificial Intelligence, 2011.
C.X. Lin, Q.Z. Mei, Y.L. Jiang, J.W. Han & S.X. Qi. 2011.
Inferring the Diffusion and Evolution of Topics in Social Communities. Proceedings of the IEEE International Conference on Data Mining, 2011.
Hao Ma, Haixuan Yang, Michael R. Lyu & Irwin King. 2008. Mining social networks using heat diffusion processes for marketing candidates selection. Proceedings of the 17th ACM Conference on Information and Knowledge Management, Napa Valley, California, USA.
Andrew Kachites McCallum. 2002. MALLET: A Machine Learning for Language Toolkit.
Sasa Petrovic, Miles Osborne & Victor Lavrenko. 2011. RT to Win! Predicting message propagation in Twitter. International AAAI Conference on Weblogs and Social Media, 2011.
Xuan-Hieu Phan & Cam-Tu Nguyen. 2007. GibbsLDA++: A C/C++ implementation of latent Dirichlet allocation (LDA).
Ian H. Witten, Eibe Frank & Mark A. Hall. 2011. Data Mining: Practical Machine Learning Tools and Techniques. San Francisco: Morgan Kaufmann Publishers Inc.
Jiang Zhu, Fei Xiong, Dongzhen Piao, Yun Liu & Ying Zhang. 2011. Statistically modeling the effectiveness of disaster information in social media. Proceedings of the 2011 IEEE Global Humanitarian Technology Conference.
3 0.66934574 180 acl-2012-Social Event Radar: A Bilingual Context Mining and Sentiment Analysis Summarization System
Author: Wen-Tai Hsieh ; Chen-Ming Wu ; Tsun Ku ; Seng-cho T. Chou
Abstract: Social Event Radar is a new social networking-based service platform that aims to alert and monitor merchandise flaws, food-safety related issues, unexpected eruptions of disease, or campaign issues directed towards the Government, enterprises of any kind, or election parties, through a keyword expansion detection module, using a bilingual sentiment opinion analysis toolkit to produce the specific event's social dashboard and deliver outcomes that help authorities plan a "risk control" strategy. With the rapid development of social networks, people can now easily publish their opinions on the Internet. On the other hand, people can also obtain various opinions from others in a few seconds even though they do not know each other. A typical approach to obtaining required information is to use a search engine with some relevant keywords. We thus take social media and forums as our major data sources and aim at collecting specific issues efficiently and effectively in this work.
4 0.6639269 173 acl-2012-Self-Disclosure and Relationship Strength in Twitter Conversations
Author: JinYeong Bak ; Suin Kim ; Alice Oh
Abstract: In social psychology, it is generally accepted that one discloses more of his/her personal information to someone in a strong relationship. We present a computational framework for automatically analyzing such self-disclosure behavior in Twitter conversations. Our framework uses text mining techniques to discover topics, emotions, sentiments, lexical patterns, as well as personally identifiable information (PII) and personally embarrassing information (PEI). Our preliminary results illustrate that in relationships with high relationship strength, Twitter users show significantly more frequent behaviors of self-disclosure.
Author: Viet-An Nguyen ; Jordan Boyd-Graber ; Philip Resnik
Abstract: One of the key tasks for analyzing conversational data is segmenting it into coherent topic segments. However, most models of topic segmentation ignore the social aspect of conversations, focusing only on the words used. We introduce a hierarchical Bayesian nonparametric model, Speaker Identity for Topic Segmentation (SITS), that discovers (1) the topics used in a conversation, (2) how these topics are shared across conversations, (3) when these topics shift, and (4) a person-specific tendency to introduce new topics. We evaluate against current unsupervised segmentation models to show that including person-specific information improves segmentation performance on meeting corpora and on political debates. Moreover, we provide evidence that SITS captures an individual's tendency to introduce new topics in political contexts, via analysis of the 2008 US presidential debates and the television program Crossfire.

1 Topic Segmentation as a Social Process

Conversation, interactive discussion between two or more people, is one of the most essential and common forms of communication. Whether in an informal situation or in more formal settings such as a political debate or business meeting, a conversation is often not about just one thing: topics evolve and are replaced as the conversation unfolds. Discovering this hidden structure in conversations is a key problem for conversational assistants (Tur et al., 2010) and tools that summarize (Murray et al., 2005) and display (Ehlen et al., 2007) conversational data. Topic segmentation also can illuminate individuals' agendas (Boydstun et al., 2011), patterns of agreement and disagreement (Hawes et al., 2009; Abbott et al., 2011), and relationships among conversational participants (Ireland et al., 2011). One of the most natural ways to capture conversational structure is topic segmentation (Reynar, 1998; Purver, 2011). Topic segmentation approaches range from simple heuristic methods based on lexical similarity (Morris and Hirst, 1991; Hearst, 1997) to more intricate generative models and supervised methods (Georgescul et al., 2006; Purver et al., 2006; Gruber et al., 2007; Eisenstein and Barzilay, 2008), which have been shown to outperform the established heuristics. However, previous computational work on conversational structure, particularly in topic discovery and topic segmentation, focuses primarily on content, ignoring the speakers. We argue that, because conversation is a social process, we can understand conversational phenomena better by explicitly modeling the behaviors of conversational participants. In Section 2, we incorporate participant identity in a new model we call Speaker Identity for Topic Segmentation (SITS), which discovers topical structure in conversation while jointly incorporating a participant-level social component. Specifically, we explicitly model an individual's tendency to introduce a topic. After outlining inference in Section 3 and introducing data in Section 4, we use SITS to improve state-of-the-art topic segmentation and topic identification models in Section 5. In addition, in Section 6, we also show that the per-speaker model is able to discover individuals who shape and influence the course of a conversation. Finally, we discuss related work and conclude the paper in Section 7.
2 Modeling Multiparty Discussions

Data Properties We are interested in turn-taking, multiparty discussion. This is a broad category,
Unlike the HDP, where every document (here, every turn) draws a new multinomial distribution from a Dirichlet process, the social and temporal dynamics of a conversation, as specified by the binary topic shift indicator lc,t, determine when new draws happen. The full generative process is as follows: 1. For speaker m ∈ [1, M], draw speaker shift probability πm ∼ Beta(γ) 2. Draw∼ global probability measure G0 ∼ DP(α, H) 3. For each conversation c ∈ [1, C] (a) Draw conversation distribution Gc ∼ DP(α0 , G0) (b) For each turn t ∈ [1, Tc] with speaker ac,t i. If t = 1, set the topic shift lc,t = 1. Otherwise, draw lc,t ∼ Bernoulli(πac,t ). ii. If lc,t = 1∼, d Breawrn Gc,t ∼ DP(αc, Gc). Otherwise, set Gc,t ≡ Gc,t−1 . iii. For each word ≡ind Gex n ∈ [1, Nc,t] • Draw ψc,t,n ∼ Gc,t • DDrraaww wc,t,n ∼ Multinomial(ψc,t,n) The hierarchy of Dirichlet processes allows statistical strength to be shared across contexts; within a conversation and across conversations. The perspeaker topic shift tendency πm allows speaker identity to influence the evolution of topics. To make notation concrete and aligned with the topic segmentation, we introduce notation for segments in a conversation. A segment s of conversation c is a sequence of turns [τ, τ0] such that lc,τ = lc,τ0+1 = 1and lc,t = 0, ∀t ∈ (τ, τ0] . When lc,t = 0, Gc,t is the same =Gc 0,t,−∀1t a ∈nd ( aτ,llτ τtopics (i.e. multinomial distributions over words) {ψc,t,n} that generate words in turn t and the topics{ ψ{ψc,t}−1,n} that generate words in turn t −1 come from{ψ ψthc,et −s1a,mn}e as Figure 1: Graphical model representations of our proposed models: (a) the nonparametric version; (b) the parametric version. Nodes represent random variables (shaded ones are observed), lines are probabilistic dependencies. Plates represent repetition. The innermost plates are turns, grouped in conversations. distribution. Thus all topics used in a segment s are drawn from a single distribution, Gc,s, , , , Gc,s | lc,1 lc,2 · · · lc,Tc , αc, Gc ∼ DP(αc, Gc) (1) For notational convenience, Sc denotes the number of segments in conversation c, and st denotes the segment index of turn t. We emphasize that all segment-related notations are derived from the posterior over the topic shifts land not part of the model itself. Parametric Version SITS is a generalization of a parametric model (Figure 1b) where each turn has a multinomial distribution over K topics. In the parametric case, the number of topics K is fixed. Each topic, as before, is a multinomial distribution φ1 . . . φK. In the parametric case, each turn t in conversation c has an explicit multinomial distribution over K topics θc,t, identical for turns within a segment. A new topic distribution θ is drawn from a Dirichlet distribution parameterized by α when the topic shift indicator lis 1. The parametric version does not share strength within or across conversations, unlike SITS. When applied on a single conversation without speaker identity (all speakers are identical) it is equivalent to (Purver et al., 2006). In our experiments (Section 5), we compare against both. 80 3 Inference To find the latent variables that best explain observed data, we use Gibbs sampling, a widely used Markov chain Monte Carlo inference technique (Neal, 2000; Resnik and Hardisty, 2010). The state space is latent variables for topic indices assigned to all tokens z = {zc,t,n} and topic shifts assigned to turns l= {lc,t}. {Wze marginalize over all other latent variablle =s. 
Here, we only present the conditional sampling equations; for more details, see our supplement.2

3.1 Sampling Topic Assignments

To sample zc,t,n, the index of the shared topic assigned to token n of turn t in conversation c, we need to sample the path assigning each word token to a segment-specific topic, each segment-specific topic to a conversational topic, and each conversational topic to a shared topic. For efficiency, we make use of the minimal path assumption (Wallach, 2008) to generate these assignments.3 Under the minimal path assumption, an observation is assumed to have been generated by using a new distribution if and only if there is no existing distribution with the same value.

2 http://www.cs.umd.edu/∼vietan/topicshift/appendix.pdf
3 We also investigated using the maximal path assumption and fully sampling assignments. We found the minimal path assumption worked as well as explicitly sampling seating assignments and that the maximal path assumption worked less well.

We use Nc,s,k to denote the number of tokens in segment s of conversation c assigned topic k; Nc,k denotes the total number of segment-specific topics in conversation c assigned topic k; and Nk denotes the number of conversational topics assigned topic k. TWk,w denotes the number of times the shared topic k is assigned to word w in the vocabulary. Marginal counts are represented with · and ∗ represents all hyperparameters. The conditional distribution for zc,t,n is

$$P(z_{c,t,n} = k \mid w_{c,t,n} = w, \mathbf{z}^{-c,t,n}, \mathbf{w}^{-c,t,n}, \mathbf{l}, \ast) \;\propto\; \frac{N_{c,s_t,k}^{-c,t,n} + \alpha_c \dfrac{N_{c,k}^{-c,t,n} + \alpha_0 \dfrac{N_{k}^{-c,t,n} + \alpha/K}{N_{\cdot}^{-c,t,n} + \alpha}}{N_{c,\cdot}^{-c,t,n} + \alpha_0}}{N_{c,s_t,\cdot}^{-c,t,n} + \alpha_c} \times \begin{cases} \dfrac{TW_{k,w}^{-c,t,n} + \lambda}{TW_{k,\cdot}^{-c,t,n} + V\lambda}, & k \text{ existing} \\[4pt] \dfrac{1}{V}, & k \text{ new} \end{cases} \qquad (2)$$

Here V is the size of the vocabulary, K is the current number of shared topics, and the superscript −c,t,n denotes counts that exclude wc,t,n. In Equation 2, the first factor is proportional to the probability of sampling a path according to the minimal path assumption; the second factor is proportional to the likelihood of observing w given the sampled topic. Since an uninformed prior is used, when a new topic is sampled, all tokens are equiprobable.

3.2 Sampling Topic Shifts

Sampling the topic shift variable lc,t requires us to consider merging or splitting segments. We use kc,t to denote the shared topic indices of all tokens in turn t of conversation c; Sac,t,x to denote the number of times speaker ac,t is assigned the topic shift with value x ∈ {0, 1}; Jxc,s to denote the number of topics in segment s of conversation c if lc,t = x; and Nxc,s,j to denote the number of tokens assigned to the segment-specific topic j when lc,t = x.4 Again, the superscript −c,t is used to denote exclusion of turn t of conversation c from the corresponding counts.

4 Deterministically knowing the path assignments is the primary efficiency motivation for using the minimal path assumption. The alternative is to explicitly sample the path assignments, which is more complicated (for both notation and computation). This option is spelled out in full detail in the supplementary material.

Recall that the topic shift is a binary variable. We use 0 to represent the case that the topic distribution is identical to the previous turn, and we sample this assignment as

$$P(l_{c,t} = 0 \mid \mathbf{l}^{-c,t}, \mathbf{w}, \mathbf{k}, \mathbf{a}, \ast) \;\propto\; \frac{S_{a_{c,t},0}^{-c,t} + \gamma}{S_{a_{c,t},\cdot}^{-c,t} + 2\gamma} \times \frac{\alpha_c^{J^{0}_{c,s_t}} \prod_{j=1}^{J^{0}_{c,s_t}} \bigl(N^{0}_{c,s_t,j} - 1\bigr)!}{\prod_{x=1}^{N^{0}_{c,s_t,\cdot}} \bigl(x - 1 + \alpha_c\bigr)} \qquad (3)$$

In Equation 3, the first factor is proportional to the probability of assigning a topic shift of value 0 to speaker ac,t, and the second factor is proportional to the joint probability of all topics in segment st of conversation c when lc,t = 0.
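The second factor in Equation 3 (and, per segment, in the split case that follows) is a Chinese-restaurant-process joint probability over one segment's topic assignments. A small helper makes the computation explicit; this is a sketch written against the notation above, not the authors' code, and it uses lgamma since (N − 1)! = Γ(N).

```python
import math

def log_segment_prob(topic_counts, alpha_c):
    """Log of  alpha_c^J * prod_j (N_j - 1)! / prod_{x=1}^{N} (x - 1 + alpha_c),
    the joint probability of one segment's topic seating under DP(alpha_c, .).
    topic_counts[j] is the number of tokens on segment-specific topic j."""
    J = len(topic_counts)
    N = sum(topic_counts)
    logp = J * math.log(alpha_c)
    logp += sum(math.lgamma(n) for n in topic_counts)             # log (N_j - 1)!
    logp -= sum(math.log(x - 1 + alpha_c) for x in range(1, N + 1))
    return logp
```

For lc,t = 0 this term is evaluated once, on the merged segment; the split case multiplies two such terms, one per resulting segment.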
The other alternative is for the topic shift to be 1, which represents the introduction of a new distribution over topics inside an existing segment. We sample this as

$$P(l_{c,t} = 1 \mid \mathbf{l}^{-c,t}, \mathbf{w}, \mathbf{k}, \mathbf{a}, \ast) \;\propto\; \frac{S_{a_{c,t},1}^{-c,t} + \gamma}{S_{a_{c,t},\cdot}^{-c,t} + 2\gamma} \times \left[ \frac{\alpha_c^{J^{1}_{c,(s_t-1)}} \prod_{j=1}^{J^{1}_{c,(s_t-1)}} \bigl(N^{1}_{c,(s_t-1),j} - 1\bigr)!}{\prod_{x=1}^{N^{1}_{c,(s_t-1),\cdot}} \bigl(x - 1 + \alpha_c\bigr)} \cdot \frac{\alpha_c^{J^{1}_{c,s_t}} \prod_{j=1}^{J^{1}_{c,s_t}} \bigl(N^{1}_{c,s_t,j} - 1\bigr)!}{\prod_{x=1}^{N^{1}_{c,s_t,\cdot}} \bigl(x - 1 + \alpha_c\bigr)} \right] \qquad (4)$$

As above, the first factor in Equation 4 is proportional to the probability of assigning a topic shift of value 1 to speaker ac,t; the second factor, in the big bracket, is proportional to the joint distribution of the topics in segments st − 1 and st. In this case lc,t = 1 means splitting the current segment, which results in two joint probabilities for two segments.

4 Datasets

This section introduces the three corpora we use. We preprocess the data to remove stopwords and remove turns containing fewer than five tokens.

The ICSI Meeting Corpus The ICSI Meeting Corpus (Janin et al., 2003) is 75 transcribed meetings. For evaluation, we used a standard set of reference segmentations (Galley et al., 2003) of 25 meetings. Segmentations are binary, i.e., each point of the document is either a segment boundary or not, and on average each meeting has 8 segment boundaries. After preprocessing, there are 60 unique speakers and the vocabulary contains 3346 non-stopword tokens.

The 2008 Presidential Election Debates Our second dataset contains three annotated presidential debates (Boydstun et al., 2011) between Barack Obama and John McCain and a vice presidential debate between Joe Biden and Sarah Palin. Each turn is one of two types: questions (Q) from the moderator or responses (R) from a candidate. Each clause in a turn is coded with a Question Topic (TQ) and a Response Topic (TR). Thus, a turn has a list of TQ's and TR's, both of length equal to the number of clauses in the turn. Topics are from the Policy Agendas Topics Codebook, a manual inventory of 19 major topics and 225 subtopics.5 Table 1 shows an example annotation.

Table 1: Example turns from the annotated 2008 election debates. The topics (TQ and TR) are from the Policy Agendas Topics Codebook, which contains topic codes such as Macroeconomics (1), Housing & Community Development (14), and Government Operations (20).

Speaker | Type | Turn clauses | TQ | TR
Brokaw | Q | Sen. Obama, [. . . ] are you saying [. . . ] that the American economy is going to get much worse before it gets better and they ought to be prepared for that? | 1 | N/A
Obama | R | No, I am confident about the American economy. [. . . ] we are going to have to [. . . ] help ordinary families be able to stay in their homes. | 1 | 1, 14
Brokaw | Q | Sen. McCain, in all candor, do you think the economy is going to get worse before it gets better? | 1 | N/A
McCain | R | [. . . ] I think if we act effectively, if we stabilize the housing market – which I believe we can [. . . ] | 1 | [. . . ]

To get reference segmentations, we assign each turn a real value from 0 to 1 indicating how much a turn changes the topic. For a question-typed turn, the score is the fraction of clause topics not appearing in the previous turn; for response-typed turns, the score is the fraction of clause topics that do not appear in the corresponding question. This results in a set of non-binary reference segmentations.
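The turn-level scoring just described is simple to state in code. The sketch below assumes each turn arrives as a (type, clause_topics, question_topics) triple, where question_topics pairs a response with its question's topic codes; these names are hypothetical, not from the released data.

```python
def turn_shift_scores(turns):
    """Non-binary reference segmentation for the debate data.

    A question turn ('Q') scores the fraction of its clause topics absent
    from the previous turn; a response turn ('R') scores the fraction of
    its clause topics absent from the corresponding question.
    """
    scores, prev_topics = [], set()
    for turn_type, topics, question_topics in turns:
        reference = prev_topics if turn_type == "Q" else set(question_topics)
        novel = sum(1 for topic in topics if topic not in reference)
        scores.append(novel / len(topics) if topics else 0.0)
        prev_topics = set(topics)
    return scores
```

A turn whose clause topics are entirely new relative to its reference set scores 1, which is the threshold used next to produce binary boundaries.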
For evaluation metrics that require binary segmentations, we create a binary segmentation by setting a turn as a segment boundary if the computed score is 1. This threshold is chosen to include only true segment boundaries.

CNN's Crossfire Crossfire was a weekly U.S. television "talking heads" program engineered to incite heated arguments (hence the name). Each episode features two recurring hosts, two guests, and clips from the week's news. Our Crossfire dataset contains 1134 transcribed episodes aired between 2000 and 2004.6 There are 2567 unique speakers. Unlike the previous two datasets, Crossfire does not have explicit topic segmentations, so we use it to explore speaker-specific characteristics (Section 6).

5 http://www.policyagendas.org/page/topic-codebook
6 http://www.cs.umd.edu/∼vietan/topicshift/crossfire.zip

5 Topic Segmentation Experiments

In this section, we examine how well SITS can replicate annotations of when new topics are introduced. We discuss metrics for evaluating an algorithm's segmentation against a gold annotation, describe our experimental setup, and report results.

Evaluation Metrics To evaluate segmentations, we use Pk (Beeferman et al., 1999) and WindowDiff (WD) (Pevzner and Hearst, 2002). Both metrics measure the probability that two points in a document will be incorrectly separated by a segment boundary. Both techniques consider all spans of length k in the document and count whether the two endpoints of the window are (im)properly segmented against the gold segmentation.

However, these metrics have drawbacks. First, they require both hypothesized and reference segmentations to be binary. Many algorithms (e.g., probabilistic approaches) give non-binary segmentations where candidate boundaries have real-valued scores (e.g., probability or confidence). Thus, evaluation requires arbitrary thresholding to binarize soft scores. To be fair, thresholds are set so the number of segments is equal to a predefined value (Purver et al., 2006; Galley et al., 2003).

To overcome these limitations, we also use Earth Mover's Distance (EMD) (Rubner et al., 2000), a metric that measures the distance between two distributions: the EMD is the minimal cost to transform one distribution into the other. Each segmentation can be considered a multi-dimensional distribution where each candidate boundary is a dimension. In EMD, a distance function across features allows partial credit for "near miss" segment boundaries. In addition, because EMD operates on distributions, we can compute the distance between non-binary hypothesized segmentations and binary or real-valued reference segmentations. We use the FastEMD implementation (Pele and Werman, 2009).

Experimental Methods We applied the following methods to discover topic segmentations in a document (sketches of the window metrics follow the list):
• TextTiling (Hearst, 1997) is one of the earliest general-purpose topic segmentation algorithms, sliding a fixed-width window to detect major changes in lexical similarity.
• P-NoSpeaker-S: the parametric version without speaker identity, run on each conversation separately (Purver et al., 2006).
• P-NoSpeaker-M: the parametric version without speaker identity, run on all conversations.
• P-SITS: the parametric version of SITS with speaker identity, run on all conversations.
• NP-HMM: an HMM-based nonparametric model with a single topic per turn. This model can be considered a Sticky HDP-HMM (Fox et al., 2008) with speaker identity.
• NP-SITS: the nonparametric version of SITS with speaker identity, run on all conversations.
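For reference, here is a compact sketch of the two window metrics described above, operating on 0/1 boundary indicator vectors. It is a simplified rendering of the published definitions, with the window size k supplied by the caller rather than derived from the data.

```python
def pk(ref, hyp, k):
    """Pk (Beeferman et al., 1999): the rate at which a width-k probe's two
    endpoints are judged same-segment by one segmentation and different-
    segment by the other. ref/hyp are 0/1 boundary indicators between
    adjacent units (ref[i] = 1 means a boundary after unit i)."""
    probes = len(ref) - k + 1
    errors = sum((sum(ref[i:i + k]) == 0) != (sum(hyp[i:i + k]) == 0)
                 for i in range(probes))
    return errors / probes

def window_diff(ref, hyp, k):
    """WindowDiff (Pevzner and Hearst, 2002): the rate at which a width-k
    window contains different numbers of reference and hypothesized
    boundaries, giving partial sensitivity to near-miss boundaries."""
    probes = len(ref) - k + 1
    errors = sum(sum(ref[i:i + k]) != sum(hyp[i:i + k]) for i in range(probes))
    return errors / probes
```

Both return 0 for a perfect segmentation and grow toward 1 as the hypothesized boundaries diverge from the reference.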
Parameter Settings and Implementations In our TextTiling experiment, all parameters are the same as in (Hearst, 1997). For the statistical models, Gibbs sampling with 10 randomly initialized chains is used. Initial hyperparameter values are sampled from U(0, 1) to favor sparsity; statistics are collected after 500 burn-in iterations with a lag of 25 iterations over a total of 5000 iterations; and slice sampling (Neal, 2003) optimizes hyperparameters.

Results and Analysis Table 2 shows the performance of various models on the topic segmentation problem, using the ICSI corpus and the 2008 debates (window sizes for the Pk and WindowDiff metrics are chosen to replicate previous results). Consistent with previous results, probabilistic models outperform TextTiling. In addition, among the probabilistic models, the models that had access to speaker information consistently segment better than those lacking such information, supporting our assertion that there is benefit to modeling conversation as a social process. Furthermore, NP-SITS outperforms NP-HMM in both experiments, suggesting that assigning a distribution over topics to turns is better than assigning a single topic. This is consistent with parametric results reported in (Purver et al., 2006).

The contribution of speaker identity seems more valuable in the debate setting. Debates are characterized by strong rewards for setting the agenda; dodging a question or moving the debate toward an opponent's weakness can be useful strategies (Boydstun et al., 2011). In contrast, meetings (particularly low-stakes ICSI meetings) are characterized by pragmatic rather than strategic topic shifts. Second, agenda-setting roles are clearer in formal debates; a moderator is tasked with setting the agenda and ensuring the conversation does not wander too much. The nonparametric model does best on the smaller debate dataset. We suspect that an evaluation that directly assessed topic quality, either via prediction (Teh et al., 2006) or interpretability (Chang et al., 2009), would favor the nonparametric model more.

6 Evaluating Topic Shift Tendency

In this section, we focus on the ability of SITS to capture speaker-level attributes. Recall that SITS associates with each speaker a topic shift tendency π that represents the probability of asserting a new topic in the conversation. While topic segmentation is a well-studied problem, there are no established quantitative measurements of an individual's ability to control a conversation. To evaluate whether the tendency captures meaningful characteristics of speakers, we compare our inferred tendencies against insights from political science.

2008 Elections To obtain a posterior estimate of π (Figure 3), we create 10 chains with hyperparameters sampled from the uniform distribution U(0, 1) and average π over the 10 chains (as described in Section 5). In these debates, Ifill is the moderator of the debate between Biden and Palin; Brokaw, Lehrer, and Schieffer are the moderators of the three debates between Obama and McCain. Here "Question" denotes questions from audiences in the "town hall" debate; the role of this "speaker" can be considered equivalent to the debate moderator.

The topic shift tendencies of moderators are much higher than for candidates. In the three debates between Obama and McCain, the moderators—Brokaw, Lehrer and Schieffer—have significantly higher scores than both candidates. This is a useful reality check, since in a debate the moderators are the ones asking questions and literally controlling the topical focus.
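The posterior means behind these tendency scores can be read off the sampler's bookkeeping: under a symmetric Beta(γ, γ) prior, a speaker observed with S shifts out of T counted turns has posterior mean (S + γ)/(T + 2γ) within a chain, which is then averaged over chains. The count structure in this sketch is hypothetical, not the authors' data layout.

```python
import numpy as np

def estimate_pi(chains, gamma=1.0):
    """Posterior-mean topic shift tendency per speaker, averaged over chains.

    Each chain maps speaker -> (shift_count, turn_count) accumulated over
    post-burn-in samples; the Beta(gamma, gamma) prior gives the per-chain
    posterior mean (shift_count + gamma) / (turn_count + 2 * gamma).
    """
    estimates = {}
    for speaker in chains[0]:
        per_chain = [(chain[speaker][0] + gamma) / (chain[speaker][1] + 2 * gamma)
                     for chain in chains]
        estimates[speaker] = float(np.mean(per_chain))
    return estimates
```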
Interestingly, in the vice-presidential debate, the score of moderator Ifill is only slightly higher than those of Palin and Biden; this is consistent with media commentary characterizing her as a weak moderator.7 Similarly, the "Question" speaker had a relatively high variance, consistent with an amalgamation of many distinct speakers. These topic shift tendencies suggest that all candidates manage to succeed at some points in setting and controlling the debate topics. Our model gives Obama a slightly higher score than McCain, consistent with social science claims (Boydstun et al., 2011) that Obama had the lead in setting the agenda over McCain. Table 4 shows examples of SITS-detected topic shifts.

Figure 3: Topic shift tendencies π in the 2008 presidential election debates (larger means greater tendency).

7 http://harpers.org/archive/2008/10/hbc-90003659

Crossfire Crossfire, unlike the debates, has many speakers. This allows us to examine more closely what we can learn about speakers' topic shift tendency. We verified above that SITS can segment topics; assuming that changing the topic is useful for a speaker, how can we characterize who does so effectively? We examine the relationship between topic shift tendency, social roles, and political ideology.

To focus on frequent speakers, we filter out speakers with fewer than 30 turns. Most speakers have relatively small π, with the mode around 0.3. There are, however, speakers with very high topic shift tendencies. Table 5 shows the speakers having the highest values according to SITS.

We find that there are three general patterns for who influences the course of a conversation in Crossfire. First, there are structural "speakers" the show uses to frame and propose new topics. These are audience questions, news clips (e.g., many of Gore's and Bush's turns from 2000), and voice-overs. That SITS is able to recover these is reassuring. Second, the stable of regular hosts receives high topic shift tendencies, which is reasonable given their experience with the format and ostensible moderation roles (in practice they also stoke lively discussion).

The remaining class is more interesting. The remaining non-hosts with high topic shift tendency are relative moderates on the political spectrum:
• John Kasich, one of few Republicans to support the assault weapons ban and now governor of Ohio, a swing state;
• Christine Todd Whitman, former Republican governor of New Jersey, a very Democratic state;
• John McCain, who before 2008 was known as a "maverick" for working with Democrats (e.g., Russ Feingold).

This suggests that, despite Crossfire's tendency to create highly partisan debates, those who are able to work across the political spectrum may best be able to influence the topic under discussion in highly polarized contexts. Table 4 shows detected topic shifts from these speakers; two of these examples (McCain and Whitman) show disagreement of Republicans with President Bush. In the other, Kasich is defending a Republican plan (school vouchers) popular with traditional Democratic constituencies.
7 Related and Future Work

In the realm of statistical models, a number of techniques incorporate social connections and identity to explain content in social networks (Chang and Blei, 2009) and scientific corpora (Rosen-Zvi et al., 2004). However, these models ignore the temporal evolution of content, treating documents as static. Models that do investigate the evolution of topics over time typically ignore the identity of the speaker. Examples include models with sticky topics over n-grams (Johnson, 2010), the sticky HDP-HMM (Fox et al., 2008), models that are an amalgam of sequential models and topic models (Griffiths et al., 2005; Wallach, 2006; Gruber et al., 2007; Ahmed and Xing, 2008; Boyd-Graber and Blei, 2008; Du et al., 2010), and explicit models of time or other relevant features as a distinct latent variable (Wang and McCallum, 2006; Eisenstein et al., 2010). In contrast, SITS jointly models topics and individuals' tendency to control a conversation. Not only does SITS outperform other models using standard computational linguistics baselines, but it also proposes intriguing hypotheses for social scientists.

Table 4: Example topic shifts detected by SITS (the previous turn and the turn detected as shifting the topic), for speakers with high topic shift tendency.

Table 5: Top speakers by topic shift tendencies. We mark hosts (†) and "speakers" who often (but not always) appeared in clips (‡). Apart from those groups, speakers with the highest tendency were political moderates.

Associating each speaker with a scalar that models their tendency to change the topic does improve performance on standard tasks, but it is inadequate to fully describe an individual. Modeling individuals' perspective (Paul and Girju, 2010), "side" (Thomas et al., 2006), or personal preferences for topics (Grimmer, 2009) would enrich the model and better illuminate the interaction of influence and topic.

Statistical analysis of political discourse can help discover patterns that political scientists, who often work via a "close reading," might otherwise miss. We plan to work with social scientists to validate our implicit hypothesis that our topic shift tendency correlates well with intuitive measures of "influence."

Acknowledgements

This research was funded in part by the Army Research Laboratory through ARL Cooperative Agreement W911NF-09-2-0072 and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Laboratory. Jordan Boyd-Graber and Philip Resnik are also supported by US National Science Foundation Grant #1018625. Any opinions, findings, conclusions, or recommendations expressed are the authors' and do not necessarily reflect those of the sponsors.

References

[Abbott et al., 2011] Abbott, R., Walker, M., Anand, P., Fox Tree, J. E., Bowmani, R., and King, J. (2011). How can you say such things?!?: Recognizing disagreement in informal political argument.
In Proceedings of the Workshop on Language in Social Media (LSM 2011), pages 2–11.
[Ahmed and Xing, 2008] Ahmed, A. and Xing, E. P. (2008). Dynamic non-parametric mixture models and the recurrent Chinese restaurant process: with applications to evolutionary clustering. In SDM, pages 219–230.
[Beeferman et al., 1999] Beeferman, D., Berger, A., and Lafferty, J. (1999). Statistical models for text segmentation. Machine Learning, 34:177–210.
[Blei and Lafferty, 2009] Blei, D. M. and Lafferty, J. (2009). Text Mining: Theory and Applications, chapter Topic Models. Taylor and Francis, London.
[Boyd-Graber and Blei, 2008] Boyd-Graber, J. and Blei, D. M. (2008). Syntactic topic models. In Proceedings of Advances in Neural Information Processing Systems.
[Boydstun et al., 2011] Boydstun, A. E., Phillips, C., and Glazier, R. A. (2011). It's the economy again, stupid: Agenda control in the 2008 presidential debates. Forthcoming.
[Chang and Blei, 2009] Chang, J. and Blei, D. M. (2009). Relational topic models for document networks. In Proceedings of Artificial Intelligence and Statistics.
[Chang et al., 2009] Chang, J., Boyd-Graber, J., Wang, C., Gerrish, S., and Blei, D. M. (2009). Reading tea leaves: How humans interpret topic models. In Neural Information Processing Systems.
[Du et al., 2010] Du, L., Buntine, W., and Jin, H. (2010). Sequential latent Dirichlet allocation: Discover underlying topic structures within a document. In Data Mining (ICDM), 2010 IEEE 10th International Conference on, pages 148–157.
[Ehlen et al., 2007] Ehlen, P., Purver, M., and Niekrasz, J. (2007). A meeting browser that learns. In Proceedings of the AAAI Spring Symposium on Interaction Challenges for Intelligent Assistants.
[Eisenstein and Barzilay, 2008] Eisenstein, J. and Barzilay, R. (2008). Bayesian unsupervised topic segmentation. In Proceedings of Empirical Methods in Natural Language Processing.
[Eisenstein et al., 2010] Eisenstein, J., O'Connor, B., Smith, N. A., and Xing, E. P. (2010). A latent variable model for geographic lexical variation. In EMNLP'10, pages 1277–1287.
[Ferguson, 1973] Ferguson, T. S. (1973). A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209–230.
[Fox et al., 2008] Fox, E. B., Sudderth, E. B., Jordan, M. I., and Willsky, A. S. (2008). An HDP-HMM for systems with state persistence. In Proceedings of International Conference of Machine Learning.
[Galley et al., 2003] Galley, M., McKeown, K., Fosler-Lussier, E., and Jing, H. (2003). Discourse segmentation of multi-party conversation. In Proceedings of the Association for Computational Linguistics.
[Georgescul et al., 2006] Georgescul, M., Clark, A., and Armstrong, S. (2006). Word distributions for thematic segmentation in a support vector machine approach. In Conference on Computational Natural Language Learning.
[Gerrish and Blei, 2010] Gerrish, S. and Blei, D. M. (2010). A language-based approach to measuring scholarly impact. In Proceedings of International Conference of Machine Learning.
[Griffiths et al., 2005] Griffiths, T. L., Steyvers, M., Blei, D. M., and Tenenbaum, J. B. (2005). Integrating topics and syntax. In Proceedings of Advances in Neural Information Processing Systems.
[Grimmer, 2009] Grimmer, J. (2009). A Bayesian hierarchical topic model for political texts: Measuring expressed agendas in Senate press releases. Political Analysis, 18:1–35.
[Gruber et al., 2007] Gruber, A., Rosen-Zvi, M., and Weiss, Y.
(2007). Hidden topic Markov models. In Artificial Intelligence and Statistics.
[Hawes et al., 2009] Hawes, T., Lin, J., and Resnik, P. (2009). Elements of a computational model for multi-party discourse: The turn-taking behavior of Supreme Court justices. Journal of the American Society for Information Science and Technology, 60(8):1607–1615.
[Hearst, 1997] Hearst, M. A. (1997). TextTiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1):33–64.
[Hsueh et al., 2006] Hsueh, P.-y., Moore, J. D., and Renals, S. (2006). Automatic segmentation of multiparty dialogue. In Proceedings of the European Chapter of the Association for Computational Linguistics.
[Ireland et al., 2011] Ireland, M. E., Slatcher, R. B., Eastwick, P. W., Scissors, L. E., Finkel, E. J., and Pennebaker, J. W. (2011). Language style matching predicts relationship initiation and stability. Psychological Science, 22(1):39–44.
[Janin et al., 2003] Janin, A., Baron, D., Edwards, J., Ellis, D., Gelbart, D., Morgan, N., Peskin, B., Pfau, T., Shriberg, E., Stolcke, A., and Wooters, C. (2003). The ICSI meeting corpus. In IEEE International Conference on Acoustics, Speech, and Signal Processing.
[Johnson, 2010] Johnson, M. (2010). PCFGs, topic models, adaptor grammars and learning topical collocations and the structure of proper names. In Proceedings of the Association for Computational Linguistics.
[Morris and Hirst, 1991] Morris, J. and Hirst, G. (1991). Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17:21–48.
[Müller and Quintana, 2004] Müller, P. and Quintana, F. A. (2004). Nonparametric Bayesian data analysis. Statistical Science, 19(1):95–110.
[Murray et al., 2005] Murray, G., Renals, S., and Carletta, J. (2005). Extractive summarization of meeting recordings. In European Conference on Speech Communication and Technology.
[Neal, 2000] Neal, R. M. (2000). Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249–265.
[Neal, 2003] Neal, R. M. (2003). Slice sampling. Annals of Statistics, 31:705–767.
[Olney and Cai, 2005] Olney, A. and Cai, Z. (2005). An orthonormal basis for topic segmentation in tutorial dialogue. In Proceedings of the Human Language Technology Conference.
[Paul and Girju, 2010] Paul, M. and Girju, R. (2010). A two-dimensional topic-aspect model for discovering multi-faceted topics. In Association for the Advancement of Artificial Intelligence.
[Pele and Werman, 2009] Pele, O. and Werman, M. (2009). Fast and robust earth mover's distances. In International Conference on Computer Vision.
[Pevzner and Hearst, 2002] Pevzner, L. and Hearst, M. A. (2002). A critique and improvement of an evaluation metric for text segmentation. Computational Linguistics, 28.
[Purver, 2011] Purver, M. (2011). Topic segmentation. In Tur, G. and de Mori, R., editors, Spoken Language Understanding: Systems for Extracting Semantic Information from Speech, pages 291–317. Wiley.
[Purver et al., 2006] Purver, M., Körding, K., Griffiths, T. L., and Tenenbaum, J. (2006). Unsupervised topic modelling for multi-party spoken discourse. In Proceedings of the Association for Computational Linguistics.
[Resnik and Hardisty, 2010] Resnik, P. and Hardisty, E. (2010). Gibbs sampling for the uninitiated. Technical Report UMIACS-TR-2010-04, University of Maryland. http://www.lib.umd.edu/drum/handle/1903/10058.
[Reynar, 1998] Reynar, J. C. (1998).
Topic Segmentation: Algorithms and Applications. PhD thesis, University of Pennsylvania.
[Rosen-Zvi et al., 2004] Rosen-Zvi, M., Griffiths, T. L., Steyvers, M., and Smyth, P. (2004). The author-topic model for authors and documents. In Proceedings of Uncertainty in Artificial Intelligence.
[Rubner et al., 2000] Rubner, Y., Tomasi, C., and Guibas, L. J. (2000). The earth mover's distance as a metric for image retrieval. International Journal of Computer Vision, 40:99–121.
[Teh et al., 2006] Teh, Y. W., Jordan, M. I., Beal, M. J., and Blei, D. M. (2006). Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581.
[Thomas et al., 2006] Thomas, M., Pang, B., and Lee, L. (2006). Get out the vote: Determining support or opposition from Congressional floor-debate transcripts. In Proceedings of Empirical Methods in Natural Language Processing.
[Tur et al., 2010] Tur, G., Stolcke, A., Voss, L., Peters, S., Hakkani-Tür, D., Dowding, J., Favre, B., Fernández, R., Frampton, M., Frandsen, M., Frederickson, C., Graciarena, M., Kintzing, D., Leveque, K., Mason, S., Niekrasz, J., Purver, M., Riedhammer, K., Shriberg, E., Tien, J., Vergyri, D., and Yang, F. (2010). The CALO meeting assistant system. Transactions on Audio, Speech and Language Processing, 18:1601–1611.
[Wallach, 2006] Wallach, H. M. (2006). Topic modeling: Beyond bag-of-words. In Proceedings of International Conference of Machine Learning.
[Wallach, 2008] Wallach, H. M. (2008). Structured Topic Models for Language. PhD thesis, University of Cambridge.
[Wang et al., 2008] Wang, C., Blei, D. M., and Heckerman, D. (2008). Continuous time dynamic topic models. In Proceedings of Uncertainty in Artificial Intelligence.
[Wang and McCallum, 2006] Wang, X. and McCallum, A. (2006). Topics over time: A non-Markov continuous-time model of topical trends. In Knowledge Discovery and Data Mining.
6 0.59797418 110 acl-2012-Historical Analysis of Legal Opinions with a Sparse Mixed-Effects Latent Variable Model
7 0.55150306 6 acl-2012-A Comprehensive Gold Standard for the Enron Organizational Hierarchy
8 0.4608022 79 acl-2012-Efficient Tree-Based Topic Modeling
9 0.448697 195 acl-2012-The Creation of a Corpus of English Metalanguage
11 0.43233588 22 acl-2012-A Topic Similarity Model for Hierarchical Phrase-based Translation
12 0.43004137 31 acl-2012-Authorship Attribution with Author-aware Topic Models
13 0.42856213 70 acl-2012-Demonstration of IlluMe: Creating Ambient According to Instant Message Logs
14 0.38109037 34 acl-2012-Automatically Learning Measures of Child Language Development
15 0.37713248 98 acl-2012-Finding Bursty Topics from Microblogs
16 0.37445024 14 acl-2012-A Joint Model for Discovery of Aspects in Utterances
17 0.37336871 113 acl-2012-INPROwidth.3emiSS: A Component for Just-In-Time Incremental Speech Synthesis
18 0.3702971 199 acl-2012-Topic Models for Dynamic Translation Model Adaptation
19 0.35906303 211 acl-2012-Using Rejuvenation to Improve Particle Filtering for Bayesian Word Segmentation
20 0.338902 174 acl-2012-Semantic Parsing with Bayesian Tree Transducers
topicId topicWeight
[(25, 0.027), (26, 0.046), (28, 0.032), (30, 0.043), (36, 0.303), (37, 0.025), (39, 0.088), (59, 0.014), (74, 0.021), (82, 0.021), (84, 0.03), (85, 0.016), (90, 0.1), (92, 0.089), (94, 0.034), (99, 0.038)]
simIndex simValue paperId paperTitle
Author: Viet-An Nguyen ; Jordan Boyd-Graber ; Philip Resnik
Abstract: One of the key tasks for analyzing conversational data is segmenting it into coherent topic segments. However, most models of topic segmentation ignore the social aspect of conversations, focusing only on the words used. We introduce a hierarchical Bayesian nonparametric model, Speaker Identity for Topic Segmentation (SITS), that discovers (1) the topics used in a conversation, (2) how these topics are shared across conversations, (3) when these topics shift, and (4) a person-specific tendency to introduce new topics. We evaluate against current unsupervised segmentation models to show that including personspecific information improves segmentation performance on meeting corpora and on political debates. Moreover, we provide evidence that SITS captures an individual’s tendency to introduce new topics in political contexts, via analysis of the 2008 US presidential debates and the television program Crossfire. 1 Topic Segmentation as a Social Process Conversation, interactive discussion between two or more people, is one of the most essential and common forms of communication. Whether in an informal situation or in more formal settings such as a political debate or business meeting, a conversation is often not about just one thing: topics evolve and are replaced as the conversation unfolds. Discovering this hidden structure in conversations is a key problem for conversational assistants (Tur et al., 2010) and tools that summarize (Murray et al., 2005) and display (Ehlen et al., 2007) conversational data. Topic segmentation also can illuminate individuals’ agendas (Boydstun et al., 2011), patterns of agree- ment and disagreement (Hawes et al., 2009; Abbott 78 Jordan Boyd-Graber iSchool and UMIACS University of Maryland College Park, MD jbg@ umiac s .umd .edu Philip Resnik Department of Linguistics and UMIACS University of Maryland College Park, MD re snik @ umd .edu al., 2011), and relationships among conversational participants (Ireland et al., 2011). One of the most natural ways to capture conversational structure is topic segmentation (Reynar, 1998; Purver, 2011). Topic segmentation approaches range from simple heuristic methods based on lexical similarity (Morris and Hirst, 1991 ; Hearst, 1997) to more intricate generative models and supervised methods (Georgescul et al., 2006; Purver et al., 2006; Gruber et al., 2007; Eisenstein and Barzilay, 2008), which have been shown to outperform the established heuristics. However, previous computational work on conversational structure, particularly in topic discovery and topic segmentation, focuses primarily on conet tent, ignoring the speakers. We argue that, because conversation is a social process, we can understand conversational phenomena better by explicitly modeling behaviors of conversational participants. In Section 2, we incorporate participant identity in a new model we call Speaker Identity for Topic Segmentation (SITS), which discovers topical structure in conversation while jointly incorporating a participantlevel social component. Specifically, we explicitly model an individual’s tendency to introduce a topic. After outlining inference in Section 3 and introducing data in Section 4, we use SITS to improve state-ofthe-art-topic segmentation and topic identification models in Section 5. In addition, in Section 6, we also show that the per-speaker model is able to discover individuals who shape and influence the course of a conversation. Finally, we discuss related work and conclude the paper in Section 7. 
2 Modeling Multiparty Discussions Data Properties We are interested in turn-taking, multiparty discussion. This is a broad category, inProce Jedijung, sR oefpu thbeli c50 othf K Aonrneua,a8l -M14e Jtiunlgy o 2f0 t1h2e. A ?c s 2o0c1ia2ti Aosns fo cria Ctio nm fpourta Ctoiomnpault Laitniognuaislt Licisn,g puaigsteiscs 78–87, cluding political debates, business meetings, and online chats. More formally, such datasets contain C conversations. A conversation c has Tc turns, each of which is a maximal uninterrupted utterance by one speaker.1 In each turn t ∈ [1, Tc], a speaker ac,t utters N words {wc,t,n}. Eatch ∈ w [1o,rTd is from a vocabulary of size V , {awnd th}ere are M distinct speakers. Modeling Approaches The key insight of topic segmentation is that segments evince lexical cohesion (Galley et al., 2003; Olney and Cai, 2005). Words within a segment will look more like their neighbors than other words. This insight has been used to tune supervised methods (Hsueh et al., 2006) and inspire unsupervised models of lexical cohesion using bags of words (Purver et al., 2006) and language models (Eisenstein and Barzilay, 2008). We too take the unsupervised statistical approach. It requires few resources and is applicable in many domains without extensive training. Like previous approaches, we consider each turn to be a bag of words generated from an admixture of topics. Topics—after the topic modeling literature (Blei and Lafferty, 2009)—are multinomial distributions over terms. These topics are part of a generative model posited to have produced a corpus. However, topic models alone cannot model the dynamics of a conversation. Topic models typically do not model the temporal dynamics of individual documents, and those that do (Wang et al., 2008; Gerrish and Blei, 2010) are designed for larger documents and are not applicable here because they assume that most topics appear in every time slice. Instead, we endow each turn with a binary latent variable lc,t, called the topic shift. This latent variable signifies whether the speaker changed the topic of the conversation. To capture the topic-controlling behavior of the speakers across different conversations, we further associate each speaker m with a latent topic shift tendency, πm. Informally, this variable is intended to capture the propensity of a speaker to effect a topic shift. Formally, it represents the probability that the speaker m will change the topic (distribution) of a conversation. We take a Bayesian nonparametric approach (M¨uller and Quintana, 2004). Unlike 1Note the distinction with phonetic definition are bounded by silence. utterances, which by 79 parametric models, which a priori fix the number of topics, nonparametric models use a flexible number of topics to better represent data. Nonparametric distributions such as the Dirichlet process (Ferguson, 1973) share statistical strength among conversations using a hierarchical model, such as the hierarchical Dirichlet process (HDP) (Teh et al., 2006). 2.1 Generative Process In this section, we develop SITS, a generative model of multiparty discourse that jointly discovers topics and speaker-specific topic shifts from an unannotated corpus (Figure 1a). As in the hierarchical Dirichlet process (Teh et al., 2006), we allow an unbounded number of topics to be shared among the turns of the corpus. Topics are drawn from a base distribution H over multinomial distributions over the vocabulary, a finite Dirichlet with symmetric prior λ. 
Unlike the HDP, where every document (here, every turn) draws a new multinomial distribution from a Dirichlet process, the social and temporal dynamics of a conversation, as specified by the binary topic shift indicator lc,t, determine when new draws happen. The full generative process is as follows: 1. For speaker m ∈ [1, M], draw speaker shift probability πm ∼ Beta(γ) 2. Draw∼ global probability measure G0 ∼ DP(α, H) 3. For each conversation c ∈ [1, C] (a) Draw conversation distribution Gc ∼ DP(α0 , G0) (b) For each turn t ∈ [1, Tc] with speaker ac,t i. If t = 1, set the topic shift lc,t = 1. Otherwise, draw lc,t ∼ Bernoulli(πac,t ). ii. If lc,t = 1∼, d Breawrn Gc,t ∼ DP(αc, Gc). Otherwise, set Gc,t ≡ Gc,t−1 . iii. For each word ≡ind Gex n ∈ [1, Nc,t] • Draw ψc,t,n ∼ Gc,t • DDrraaww wc,t,n ∼ Multinomial(ψc,t,n) The hierarchy of Dirichlet processes allows statistical strength to be shared across contexts; within a conversation and across conversations. The perspeaker topic shift tendency πm allows speaker identity to influence the evolution of topics. To make notation concrete and aligned with the topic segmentation, we introduce notation for segments in a conversation. A segment s of conversation c is a sequence of turns [τ, τ0] such that lc,τ = lc,τ0+1 = 1and lc,t = 0, ∀t ∈ (τ, τ0] . When lc,t = 0, Gc,t is the same =Gc 0,t,−∀1t a ∈nd ( aτ,llτ τtopics (i.e. multinomial distributions over words) {ψc,t,n} that generate words in turn t and the topics{ ψ{ψc,t}−1,n} that generate words in turn t −1 come from{ψ ψthc,et −s1a,mn}e as Figure 1: Graphical model representations of our proposed models: (a) the nonparametric version; (b) the parametric version. Nodes represent random variables (shaded ones are observed), lines are probabilistic dependencies. Plates represent repetition. The innermost plates are turns, grouped in conversations. distribution. Thus all topics used in a segment s are drawn from a single distribution, Gc,s, , , , Gc,s | lc,1 lc,2 · · · lc,Tc , αc, Gc ∼ DP(αc, Gc) (1) For notational convenience, Sc denotes the number of segments in conversation c, and st denotes the segment index of turn t. We emphasize that all segment-related notations are derived from the posterior over the topic shifts land not part of the model itself. Parametric Version SITS is a generalization of a parametric model (Figure 1b) where each turn has a multinomial distribution over K topics. In the parametric case, the number of topics K is fixed. Each topic, as before, is a multinomial distribution φ1 . . . φK. In the parametric case, each turn t in conversation c has an explicit multinomial distribution over K topics θc,t, identical for turns within a segment. A new topic distribution θ is drawn from a Dirichlet distribution parameterized by α when the topic shift indicator lis 1. The parametric version does not share strength within or across conversations, unlike SITS. When applied on a single conversation without speaker identity (all speakers are identical) it is equivalent to (Purver et al., 2006). In our experiments (Section 5), we compare against both. 80 3 Inference To find the latent variables that best explain observed data, we use Gibbs sampling, a widely used Markov chain Monte Carlo inference technique (Neal, 2000; Resnik and Hardisty, 2010). The state space is latent variables for topic indices assigned to all tokens z = {zc,t,n} and topic shifts assigned to turns l= {lc,t}. {Wze marginalize over all other latent variablle =s. 
Here, we only present the conditional sampling equations; for more details, see our supplement.2 3.1 Sampling Topic Assignments To sample zc,t,n, the index of the shared topic assigned to token n of turn t in conversation c, we need to sample the path assigning each word token to a segment-specific topic, each segment-specific topic to a conversational topic and each conversational topic to a shared topic. For efficiency, we make use of the minimal path assumption (Wallach, 2008) to generate these assignments.3 Under the minimal path assumption, an observation is assumed to have been generated by using a new distribution if and only if there is no existing distribution with the same value. 2 http://www.cs.umd.edu/∼vietan/topicshift/appendix.pdf 3We also investigated using the maximal assumption and fully sampling assignments. We found the minimal path assumption worked as well as explicitly sampling seating assignments and that the maximal path assumption worked less well. We use Nc,s,k to denote the number of tokens in segment s in conversation c assigned topic k; Nc,k denotes the total number of segment-specific topics in conversation c assigned topic k and Nk denotes the number of conversational topics assigned topic k. TWk,w denotes the number of times the shared topic k is assigned to word w in the vocabulary. Marginal counts are represented with · and ∗ represents all hyperparameters. The condit·ional d∗istribution for zc,t,n is P(zc,t,n = k | wc,t,n = w, z−c,t,n, w−c,t,n, l, ∗) ∝ Nc−,sct ,kn+αNc −c,s−ct,kct·,n Nn+c −,·αc ,t0cnN +k−· αc,t0 ,n + αK × VT1 W k−, ·c,wctk, n e+w V.λ( 2), Here V is the size of the vocabulary, K is the current number of shared topics and the superscript −c,t,n denotes counts without considering wc,t,n. In Equation 2, the first factor is proportional to the probability of sampling a path according to the minimal path assumption; the second factor is proportional to the likelihood of observing w given the sampled topic. Since an uninformed prior is used, when a new topic is sampled, all tokens are equiprobable. 3.2 Sampling Topic Shifts Sampling the topic shift variable lc,t requires us to consider merging or splitting segments. We use kc,t to denote the shared topic indices of all tokens in turn t of conversation c; Sac,t,x to denote the number of times speaker ac,t is assigned the topic shift with value x ∈ {0, 1}; Jcx,s to denote the number of topics in segment s 1o}f conversation c if lc,t = x and Ncx,s,j to denote the number of tokens assigned to the segment-specific topic j when lc,t = x.4 Again, the superscript −c,t is used to denote exclusion of turn t of conversation c in the corresponding counts. Recall that the topic shift is a binary variable. We use 0 to represent the case that the topic distribution is identical to the previous turn. We sample this assignment P(lc,t = 0 | l−c,t, w, k, a, ∗) ∝ SSa−a−cc,c,ct,t , t·,0++ 2 γγ×αcJ0c,sNtx=Qc01,sjJt=c,0·,1s(tx(N −c0 1,s +t,j α−c) 1)!. (3) 4Deterministically knowQing the path assignments is the primary efficiency motivation for using the minimal path assumption. The alternative is to explicitly sample the path assignments, which is more complicated (for both notation and computation). This option is spelled in full detail in the supplementary material. 81 In Equation 3, the first factor is proportional to the probability of assigning a topic shift of value 0 to speaker ac,t and the second factor is proportional to the joint probability of all topics in segment st of conversation c when lc,t = 0. 
The other alternative is for the topic shift to be 1, which represents the introduction of a new distri- bution over topics inside an existing segment. We sample this as P(lc,t = 1 | l−c,t, w, k, a, ∗) ∝ S −a −c ,c t, t, t, ·1+ 2 γ ×αcJc1,(st−1x)NQ=c1,1(jJs=ct1−,1(s1t)−,·1()x(N −c1 1,( +st− α1c) ,j− 1)! αcJcQ1,sNxt=c1Q1,stJj,c=1·,(s1xt( −N 1c1, +stj α−c) 1)!. (4) As above, the first faQctor in Equation 4 is proportional to the probability of assigning a topic shift of value 1to speaker ac,t; the second factor in the big bracket is proportional to the joint distribution of the topics in segments st − 1 and st. In this case lc,t = 1 means splitting the current segment, which results in two joint probabilities for two segments. 4 Datasets This section introduces the three corpora we use. We preprocess the data to remove stopwords and remove turns containing fewer than five tokens. The ICSI Meeting Corpus: The ICSI Meeting Corpus (Janin et al., 2003) is 75 transcribed meetings. For evaluation, we used a standard set of reference segmentations (Galley et al., 2003) of 25 meetings. Segmentations are binary, i.e., each point of the document is either a segment boundary or not, and on average each meeting has 8 segment boundaries. After preprocessing, there are 60 unique speakers and the vocabulary contains 3346 non-stopword tokens. The 2008 Presidential Election Debates Our second dataset contains three annotated presidential debates (Boydstun et al., 2011) between Barack Obama and John McCain and a vice presidential debate between Joe Biden and Sarah Palin. Each turn is one of two types: questions (Q) from the moderator or responses (R) from a candidate. Each clause in a turn is coded with a Question Topic (TQ) and a Response Topic (TR). Thus, a turn has a list of TQ’s and TR’s both of length equal to the number of clauses in the turn. Topics are from the Policy Agendas Topics SpeakerTypeTurn clausesTQTR BrokawQbSeenfo.r Oeib ta gmeat,s [b.e.t.t]er A arned yo thuey sa oyuingght [. to. b]e th parte tphaere Adm foerri tchaant? economy is going to get much worse1N/A ObamaR[hN.o .m,.]e Is B a,um mtac mokenofs itdu iermenpt o ahrabt oaun th tel yt ,h we c Aaen’rm epea gryoic ithnangei e trco bo hinlaosvm e[.y t. o. h]elp ordinary familes be able to stay in their1 1 4 BrokawQSen. McCain, in all candor, do you think the economy is going to get worse before it gets better?1N/A McCainR[Iom.ftwho.trie]n Ikiegrtofih oeicwonumkteiv aegfn wdlyt.ebri[ua.dyc otuf]petfh ec tserivo bnlayd,islmfoaw nes,d staobptihelcaziteplt ihoneptlrheoscuatsni hgflauvmean rckne itnw– WmhoaisrcthgiaIngbetoalnitevshoe w ne wca vna,l ucet1 240 Table 1: Example turns from the annotated 2008 election debates. The topics (TQ and TR) are from the Policy Agendas Topics Codebook which contains the following codes of topic: Macroeconomics Community Development (14), Government Operations (20). (1), Housing & Codebook, a manual inventory of 19 major topics and 225 subtopics.5 Table 1 shows an example annotation. To get reference segmentations, we assign each turn a real value from 0 to 1indicating how much a turn changes the topic. For a question-typed turn, the score is the fraction of clause topics not appearing in the previous turn; for response-typed turns, the score is the fraction of clause topics that do not appear in the corresponding question. This results in a set of non-binary reference segmentations. 
For evaluation metrics that require binary segmentations, we create a binary segmentation by setting a turn as a segment boundary if the computed score is 1. This threshold is chosen to include only true segment boundaries. CNN’s Crossfire Crossfire was a weekly U.S. television “talking heads” program engineered to incite heated arguments (hence the name). Each episode features two recurring hosts, two guests, and clips from the week’s news. Our Crossfire dataset contains 1134 transcribed episodes aired between 2000 and 2004.6 There are 2567 unique speakers. Unlike the previous two datasets, Crossfire does not have explicit topic segmentations, so we use it to explore speaker-specific characteristics (Section 6). 5 Topic Segmentation Experiments In this section, we examine how well SITS can replicate annotations of when new topics are introduced. 5 http://www.policyagendas.org/page/topic-codebook 6 http://www.cs.umd.edu/∼vietan/topicshift/crossfire.zip 82 We discuss metrics for evaluating an algorithm’s segmentation against a gold annotation, describe our experimental setup, and report those results. Evaluation Metrics To evaluate segmentations, we use Pk (Beeferman et al., 1999) and WindowDiff (WD) (Pevzner and Hearst, 2002). Both metrics measure the probability that two points in a document will be incorrectly separated by a segment boundary. Both techniques consider all spans of length k in the document and count whether the two endpoints of the window are (im)properly segmented against the gold segmentation. However, these metrics have drawbacks. First, they require both hypothesized and reference segmentations to be binary. Many algorithms (e.g., probabilistic approaches) give non-binary segmentations where candidate boundaries have real-valued scores (e.g., probability or confidence). Thus, evaluation requires arbitrary thresholding to binarize soft scores. To be fair, thresholds are set so the number of segments are equal to a predefined value (Purver et al., 2006; Galley et al., 2003). To overcome these limitations, we also use Earth Mover’s Distance (EMD) (Rubner et al., 2000), a metric that measures the distance between two distributions. The EMD is the minimal cost to transform one distribution into the other. Each segmentation can be considered a multi-dimensional distribution where each candidate boundary is a dimension. In EMD, a distance function across features allows partial credit for “near miss” segment boundaries. In addition, because EMD operates on distributions, we can compute the distance between non-binary hypothesized segmentations with binary or real-valued reference segmentations. We use the FastEMD implementation (Pele and Werman, 2009). Experimental Methods We applied the following methods to discover topic segmentations in a document: • TextTiling (Hearst, 1997) is one of the earliest generalpurpose topic segmentation algorithms, sliding a fixedwidth window to detect major changes in lexical similarity. • P-NoSpeaker-S: parametric version without speaker identity run on keaerc-hS conversation (Purver et al., 2006) • P-NoSpeaker-M: parametric version without speaker identity run on Mall conversations • P-SITS: the parametric version of SITS with speaker identity run on all conversations • NP-HMM: the HMM-based nonparametric model which a single topic per turn. This model can be considered a Sticky HDP-HMM (Fox et al., 2008) with speaker identity. • NP-SITS: the nonparametric version of SITS with speaker identity run on all conversations. 
Parameter Settings and Implementations experiment, all parameters same as in (Hearst, 1997). of TextTiling In our are the For statistical models, Gibbs sampling with 10 randomly initialized chains is used. Initial hyperparameter values are sampled from U(0, 1) to favor sparsity; statistics are collected after 500 burn-in iterations with a lag of 25 iterations over a total of 5000 iterations; and slice sampling (Neal, 2003) optimizes hyperparameters. Results and Analysis Table 2 shows the perfor- mance of various models on the topic segmentation problem, using the ICSI corpus and the 2008 debates. Consistent with previous results, probabilistic models outperform TextTiling. In addition, among the probabilistic models, the models that had access to speaker information consistently segment better than those lacking such information, supporting our assertion that there is benefit to modeling conversation as a social process. Furthermore, NP-SITS outperforms NP-HMM in both experiments, suggesting that using a distribution over topics to turns is better than using a single topic. This is consistent with parametric results reported in (Purver et al., 2006). The contribution of speaker identity seems more valuable in the debate setting. Debates are characterized by strong rewards for setting the agenda; dodging a question or moving the debate toward an oppo83 nent’s weakness can be useful strategies (Boydstun et al., 2011). In contrast, meetings (particularly lowstakes ICSI meetings) are characterized by pragmatic rather than strategic topic shifts. Second, agendasetting roles are clearer in formal debates; a modera- tor is tasked with setting the agenda and ensuring the conversation does not wander too much. The nonparametric model does best on the smaller debate dataset. We suspect that an evaluation that directly accessed the topic quality, either via prediction (Teh et al., 2006) or interpretability (Chang et al., 2009) would favor the nonparametric model more. 6 Evaluating Topic Shift Tendency In this section, we focus on the ability of SITS to capture speaker-level attributes. Recall that SITS associates with each speaker a topic shift tendency π that represents the probability of asserting a new topic in the conversation. While topic segmentation is a well studied problem, there are no established quantitative measurements of an individual’s ability to control a conversation. To evaluate whether the tendency is capturing meaningful characteristics of speakers, we compare our inferred tendencies against insights from political science. 2008 Elections To obtain a posterior estimate of π (Figure 3) we create 10 chains with hyperparameters sampled from the uniform distribution U(0, 1) and averaged π over 10 chains (as described in Section 5). In these debates, Ifill is the moderator of the debate between Biden and Palin; Brokaw, Lehrer and Schieffer are the three moderators of three debates between Obama and McCain. Here “Question” denotes questions from audiences in “town hall” debate. The role of this “speaker” can be considered equivalent to the debate moderator. The topic shift tendencies of moderators are much higher than for candidates. In the three debates between Obama and McCain, the moderators— Brokaw, Lehrer and Schieffer—have significantly higher scores than both candidates. This is a useful reality check, since in a debate the moderators are the ones asking questions and literally controlling the topical focus. 
Interestingly, in the vice-presidential debate, the score of moderator Ifill is only slightly higher than those of Palin and Biden; this is consistent with media commentary characterizing her as a size of the metrics Pk and WindowDiff chosen to replicate previous results. weak moderator.7 Similarly, the “Question” speaker had a relatively high variance, consistent with an amalgamation of many distinct speakers. These topic shift tendencies suggest that all candidates manage to succeed at some points in setting and controlling the debate topics. Our model gives Obama a slightly higher score than McCain, consistent with social science claims (Boydstun et al., 2011) that Obama had the lead in setting the agenda over McCain. Table 4 shows of SITS-detected topic shifts. Crossfire Crossfire, unlike the debates, has many speakers. This allows us to examine more closely what we can learn about speakers’ topic shift tendency. We verified that SITS can segment topics, and assuming that changing the topic is useful for a speaker, how can we characterize who does so effectively? We examine the relationship between topic shift tendency, social roles, and political ideology. To focus on frequent speakers, we filter out speakers with fewer than 30 turns. Most speakers have relatively small π, with the mode around 0.3. There are, however, speakers with very high topic shift tendencies. Table 5 shows the speakers having the highest values according to SITS. We find that there are three general patterns for who influences the course of a conversation in Crossfire. First, there are structural “speakers” the show uses to frame and propose new topics. These are 7 http://harpers.org/archive/2008/10/hbc-90003659 84 2008 Presidential Election Debates (larger means greater tendency) audience questions, news clips (e.g. many of Gore’s and Bush’s turns from 2000), and voice overs. That SITS is able to recover these is reassuring. Second, the stable of regular hosts receives high topic shift tendencies, which is reasonable given their experience with the format and ostensible moderation roles (in practice they also stoke lively discussion). The remaining class is more interesting. The remaining non-hosts with high topic shift tendency are relative moderates on the political spectrum: • John Kasich, one of few Republicans to support the assault weapons ban and now governor of Ohio, a swing state • Christine Todd Whitman, former Republican governor of CNehrwis Jersey, a very iDtmemano,c froartmice srt Ratee • John McCain, who before 2008 was known as a “maverick” for working with Democrats (e.g. Russ Feingold) This suggests that, despite Crossfire’s tendency to create highly partisan debates, those who are able to work across the political spectrum may best be able to influence the topic under discussion in highly polarized contexts. Table 4 shows detected topic shifts from these speakers; two of these examples (McCain and Whitman) show disagreement of Republicans with President Bush. In the other, Kasich is defending a Republican plan (school vouchers) popular with traditional Democratic constituencies. 
[Table 4: Examples of topic shifts detected by SITS, pairing each previous turn with the turn detected as shifting the topic; the examples are drawn from speakers with high topic shift tendency.]
[Table 5: Top speakers by topic shift tendency π. We mark hosts (†) and "speakers" who often (but not always) appeared in clips (‡). Apart from those groups, the speakers with the highest tendency were political moderates.]
7 Related and Future Work In the realm of statistical models, a number of techniques incorporate social connections and identity to explain content in social networks (Chang and Blei, 2009) and scientific corpora (Rosen-Zvi et al., 2004). However, these models ignore the temporal evolution of content, treating documents as static. Models that do investigate the evolution of topics over time typically ignore the identity of the speaker. Examples include models with sticky topics over n-grams (Johnson, 2010), the sticky HDP-HMM (Fox et al., 2008), models that are an amalgam of sequential models and topic models (Griffiths et al., 2005; Wallach, 2006; Gruber et al., 2007; Ahmed and Xing, 2008; Boyd-Graber and Blei, 2008; Du et al., 2010), and explicit models of time or other relevant features as a distinct latent variable (Wang and McCallum, 2006; Eisenstein et al., 2010). In contrast, SITS jointly models topics and individuals' tendency to control a conversation. Not only does SITS outperform other models on standard computational linguistics baselines, it also proposes intriguing hypotheses for social scientists. Associating each speaker with a scalar that models their tendency to change the topic does improve performance on standard tasks, but it is inadequate to fully describe an individual. Modeling individuals' perspective (Paul and Girju, 2010), "side" (Thomas et al., 2006), or personal preferences for topics (Grimmer, 2009) would enrich the model and better illuminate the interaction of influence and topic. Statistical analysis of political discourse can help discover patterns that political scientists, who often work via "close reading," might otherwise miss. We plan to work with social scientists to validate our implicit hypothesis that topic shift tendency correlates well with intuitive measures of "influence." Acknowledgements This research was funded in part by the Army Research Laboratory through ARL Cooperative Agreement W911NF-09-2-0072 and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Laboratory. Jordan Boyd-Graber and Philip Resnik are also supported by US National Science Foundation grant #1018625. Any opinions, findings, conclusions, or recommendations expressed are the authors' and do not necessarily reflect those of the sponsors. References [Abbott et al., 2011] Abbott, R., Walker, M., Anand, P., Fox Tree, J. E., Bowmani, R., and King, J. (2011). How can you say such things?!?: Recognizing disagreement in informal political argument.
In Proceedings of the Workshop on Language in Social Media (LSM 2011), pages 2–11. [Ahmed and Xing, 2008] Ahmed, A. and Xing, E. P. (2008). Dynamic non-parametric mixture models and the recurrent Chinese restaurant process: with applications to evolutionary clustering. In SDM, pages 219–230. [Beeferman et al., 1999] Beeferman, D., Berger, A., and Lafferty, J. (1999). Statistical models for text segmentation. Machine Learning, 34:177–210. [Blei and Lafferty, 2009] Blei, D. M. and Lafferty, J. (2009). Text Mining: Theory and Applications, chapter Topic Models. Taylor and Francis, London. [Boyd-Graber and Blei, 2008] Boyd-Graber, J. and Blei, D. M. (2008). Syntactic topic models. In Proceedings of Advances in Neural Information Processing Systems. [Boydstun et al., 2011] Boydstun, A. E., Phillips, C., and Glazier, R. A. (2011). It's the economy again, stupid: Agenda control in the 2008 presidential debates. Forthcoming. [Chang and Blei, 2009] Chang, J. and Blei, D. M. (2009). Relational topic models for document networks. In Proceedings of Artificial Intelligence and Statistics. [Chang et al., 2009] Chang, J., Boyd-Graber, J., Wang, C., Gerrish, S., and Blei, D. M. (2009). Reading tea leaves: How humans interpret topic models. In Neural Information Processing Systems. [Du et al., 2010] Du, L., Buntine, W., and Jin, H. (2010). Sequential latent Dirichlet allocation: Discover underlying topic structures within a document. In 2010 IEEE 10th International Conference on Data Mining (ICDM), pages 148–157. [Ehlen et al., 2007] Ehlen, P., Purver, M., and Niekrasz, J. (2007). A meeting browser that learns. In Proceedings of the AAAI Spring Symposium on Interaction Challenges for Intelligent Assistants. [Eisenstein and Barzilay, 2008] Eisenstein, J. and Barzilay, R. (2008). Bayesian unsupervised topic segmentation. In Proceedings of Empirical Methods in Natural Language Processing. [Eisenstein et al., 2010] Eisenstein, J., O'Connor, B., Smith, N. A., and Xing, E. P. (2010). A latent variable model for geographic lexical variation. In EMNLP'10, pages 1277–1287. [Ferguson, 1973] Ferguson, T. S. (1973). A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209–230. [Fox et al., 2008] Fox, E. B., Sudderth, E. B., Jordan, M. I., and Willsky, A. S. (2008). An HDP-HMM for systems with state persistence. In Proceedings of International Conference of Machine Learning. [Galley et al., 2003] Galley, M., McKeown, K., Fosler-Lussier, E., and Jing, H. (2003). Discourse segmentation of multi-party conversation. In Proceedings of the Association for Computational Linguistics. [Georgescul et al., 2006] Georgescul, M., Clark, A., and Armstrong, S. (2006). Word distributions for thematic segmentation in a support vector machine approach. In Conference on Computational Natural Language Learning. [Gerrish and Blei, 2010] Gerrish, S. and Blei, D. M. (2010). A language-based approach to measuring scholarly impact. In Proceedings of International Conference of Machine Learning. [Griffiths et al., 2005] Griffiths, T. L., Steyvers, M., Blei, D. M., and Tenenbaum, J. B. (2005). Integrating topics and syntax. In Proceedings of Advances in Neural Information Processing Systems. [Grimmer, 2009] Grimmer, J. (2009). A Bayesian hierarchical topic model for political texts: Measuring expressed agendas in Senate press releases. Political Analysis, 18:1–35. [Gruber et al., 2007] Gruber, A., Rosen-Zvi, M., and Weiss, Y.
(2007). Hidden topic Markov models. In Artificial Intelligence and Statistics. [Hawes et al., 2009] Hawes, T., Lin, J., and Resnik, P. (2009). Elements of a computational model for multi-party discourse: The turn-taking behavior of Supreme Court justices. Journal of the American Society for Information Science and Technology, 60(8):1607–1615. [Hearst, 1997] Hearst, M. A. (1997). TextTiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1):33–64. [Hsueh et al., 2006] Hsueh, P.-y., Moore, J. D., and Renals, S. (2006). Automatic segmentation of multiparty dialogue. In Proceedings of the European Chapter of the Association for Computational Linguistics. [Ireland et al., 2011] Ireland, M. E., Slatcher, R. B., Eastwick, P. W., Scissors, L. E., Finkel, E. J., and Pennebaker, J. W. (2011). Language style matching predicts relationship initiation and stability. Psychological Science, 22(1):39–44. [Janin et al., 2003] Janin, A., Baron, D., Edwards, J., Ellis, D., Gelbart, D., Morgan, N., Peskin, B., Pfau, T., Shriberg, E., Stolcke, A., and Wooters, C. (2003). The ICSI meeting corpus. In IEEE International Conference on Acoustics, Speech, and Signal Processing. [Johnson, 2010] Johnson, M. (2010). PCFGs, topic models, adaptor grammars and learning topical collocations and the structure of proper names. In Proceedings of the Association for Computational Linguistics. [Morris and Hirst, 1991] Morris, J. and Hirst, G. (1991). Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17:21–48. [Müller and Quintana, 2004] Müller, P. and Quintana, F. A. (2004). Nonparametric Bayesian data analysis. Statistical Science, 19(1):95–110. [Murray et al., 2005] Murray, G., Renals, S., and Carletta, J. (2005). Extractive summarization of meeting recordings. In European Conference on Speech Communication and Technology. [Neal, 2000] Neal, R. M. (2000). Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249–265. [Neal, 2003] Neal, R. M. (2003). Slice sampling. Annals of Statistics, 31:705–767. [Olney and Cai, 2005] Olney, A. and Cai, Z. (2005). An orthonormal basis for topic segmentation in tutorial dialogue. In Proceedings of the Human Language Technology Conference. [Paul and Girju, 2010] Paul, M. and Girju, R. (2010). A two-dimensional topic-aspect model for discovering multi-faceted topics. In Association for the Advancement of Artificial Intelligence. [Pele and Werman, 2009] Pele, O. and Werman, M. (2009). Fast and robust earth mover's distances. In International Conference on Computer Vision. [Pevzner and Hearst, 2002] Pevzner, L. and Hearst, M. A. (2002). A critique and improvement of an evaluation metric for text segmentation. Computational Linguistics, 28. [Purver, 2011] Purver, M. (2011). Topic segmentation. In Tur, G. and de Mori, R., editors, Spoken Language Understanding: Systems for Extracting Semantic Information from Speech, pages 291–317. Wiley. [Purver et al., 2006] Purver, M., Körding, K., Griffiths, T. L., and Tenenbaum, J. (2006). Unsupervised topic modelling for multi-party spoken discourse. In Proceedings of the Association for Computational Linguistics. [Resnik and Hardisty, 2010] Resnik, P. and Hardisty, E. (2010). Gibbs sampling for the uninitiated. Technical Report UMIACS-TR-2010-04, University of Maryland. http://www.lib.umd.edu/drum/handle/1903/10058. [Reynar, 1998] Reynar, J. C. (1998).
Topic Segmentation: Algorithms and Applications. PhD thesis, University of Pennsylvania. [Rosen-Zvi et al., 2004] Rosen-Zvi, M., Griffiths, T. L., Steyvers, M., and Smyth, P. (2004). The author-topic model for authors and documents. In Proceedings of Uncertainty in Artificial Intelligence. [Rubner et al., 2000] Rubner, Y., Tomasi, C., and Guibas, L. J. (2000). The earth mover's distance as a metric for image retrieval. International Journal of Computer Vision, 40:99–121. [Teh et al., 2006] Teh, Y. W., Jordan, M. I., Beal, M. J., and Blei, D. M. (2006). Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581. [Thomas et al., 2006] Thomas, M., Pang, B., and Lee, L. (2006). Get out the vote: Determining support or opposition from Congressional floor-debate transcripts. In Proceedings of Empirical Methods in Natural Language Processing. [Tur et al., 2010] Tur, G., Stolcke, A., Voss, L., Peters, S., Hakkani-Tür, D., Dowding, J., Favre, B., Fernández, R., Frampton, M., Frandsen, M., Frederickson, C., Graciarena, M., Kintzing, D., Leveque, K., Mason, S., Niekrasz, J., Purver, M., Riedhammer, K., Shriberg, E., Tien, J., Vergyri, D., and Yang, F. (2010). The CALO meeting assistant system. IEEE Transactions on Audio, Speech, and Language Processing, 18:1601–1611. [Wallach, 2006] Wallach, H. M. (2006). Topic modeling: Beyond bag-of-words. In Proceedings of International Conference of Machine Learning. [Wallach, 2008] Wallach, H. M. (2008). Structured Topic Models for Language. PhD thesis, University of Cambridge. [Wang et al., 2008] Wang, C., Blei, D. M., and Heckerman, D. (2008). Continuous time dynamic topic models. In Proceedings of Uncertainty in Artificial Intelligence. [Wang and McCallum, 2006] Wang, X. and McCallum, A. (2006). Topics over time: a non-Markov continuous-time model of topical trends. In Knowledge Discovery and Data Mining.
same-paper 2 0.73701084 88 acl-2012-Exploiting Social Information in Grounded Language Learning via Grammatical Reduction
3 0.56909174 103 acl-2012-Grammar Error Correction Using Pseudo-Error Sentences and Domain Adaptation
Author: Kenji Imamura ; Kuniko Saito ; Kugatsu Sadamitsu ; Hitoshi Nishikawa
Abstract: This paper presents grammar error correction for Japanese particles that uses discriminative sequence conversion, which corrects erroneous particles by substitution, insertion, and deletion. The error correction task is hindered by the difficulty of collecting large error corpora. We tackle this problem by using pseudo-error sentences generated automatically. Furthermore, we apply domain adaptation, in which the pseudo-error sentences are from the source domain and the real-error sentences are from the target domain. Experiments show that stable improvement is achieved by using domain adaptation.
4 0.49889284 211 acl-2012-Using Rejuvenation to Improve Particle Filtering for Bayesian Word Segmentation
Author: Benjamin Borschinger ; Mark Johnson
Abstract: We present a novel extension to a recently proposed incremental learning algorithm for the word segmentation problem originally introduced in Goldwater (2006). By adding rejuvenation to a particle filter, we are able to considerably improve its performance, both in terms of finding higher-probability and higher-accuracy solutions.
5 0.48785129 174 acl-2012-Semantic Parsing with Bayesian Tree Transducers
Author: Bevan Jones ; Mark Johnson ; Sharon Goldwater
Abstract: Many semantic parsing models use tree transformations to map between natural language and meaning representation. However, while tree transformations are central to several state-of-the-art approaches, little use has been made of the rich literature on tree automata. This paper makes the connection concrete with a tree transducer based semantic parsing model and suggests that other models can be interpreted in a similar framework, increasing the generality of their contributions. In particular, this paper further introduces a variational Bayesian inference algorithm that is applicable to a wide class of tree transducers, producing state-of-the-art semantic parsing results while remaining applicable to any domain employing probabilistic tree transducers.
6 0.48000541 38 acl-2012-Bayesian Symbol-Refined Tree Substitution Grammars for Syntactic Parsing
7 0.47825581 80 acl-2012-Efficient Tree-based Approximation for Entailment Graph Learning
8 0.47671527 84 acl-2012-Estimating Compact Yet Rich Tree Insertion Grammars
9 0.47492599 132 acl-2012-Learning the Latent Semantics of a Concept from its Definition
10 0.47467884 21 acl-2012-A System for Real-time Twitter Sentiment Analysis of 2012 U.S. Presidential Election Cycle
11 0.47340623 31 acl-2012-Authorship Attribution with Author-aware Topic Models
12 0.47229475 36 acl-2012-BIUTEE: A Modular Open-Source System for Recognizing Textual Entailment
13 0.47194099 28 acl-2012-Aspect Extraction through Semi-Supervised Modeling
14 0.47046512 167 acl-2012-QuickView: NLP-based Tweet Search
15 0.4689365 79 acl-2012-Efficient Tree-Based Topic Modeling
16 0.4670479 10 acl-2012-A Discriminative Hierarchical Model for Fast Coreference at Large Scale
17 0.46663147 139 acl-2012-MIX Is Not a Tree-Adjoining Language
18 0.46424451 187 acl-2012-Subgroup Detection in Ideological Discussions
20 0.46171612 206 acl-2012-UWN: A Large Multilingual Lexical Knowledge Base