acl acl2013 acl2013-282 knowledge-graph by maker-knowledge-mining

282 acl-2013-Predicting and Eliciting Addressee's Emotion in Online Dialogue


Source: pdf

Author: Takayuki Hasegawa ; Nobuhiro Kaji ; Naoki Yoshinaga ; Masashi Toyoda

Abstract: While there have been many attempts to estimate the emotion of an addresser from her/his utterance, few studies have explored how her/his utterance affects the emotion of the addressee. This has motivated us to investigate two novel tasks: predicting the emotion of the addressee and generating a response that elicits a specific emotion in the addressee’s mind. We target Japanese Twitter posts as a source of dialogue data and automatically build training data for learning the predictors and generators. The feasibility of our approaches is assessed by using 1099 utterance-response pairs that are built by five human workers.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract While there have been many attempts to estimate the emotion of an addresser from her/his utterance, few studies have explored how her/his utterance affects the emotion of the addressee. [sent-9, score-1.358]

2 This has motivated us to investigate two novel tasks: predicting the emotion of the addressee and generating a response that elicits a specific emotion in the addressee’s mind. [sent-10, score-1.922]

3 1 Introduction When we have a conversation, we usually care about the emotion of the person to whom we speak. [sent-14, score-0.568]

4 To date, the modeling of emotion in a dialogue has extensively been studied in NLP as well as related areas (Forbes-Riley and Litman, 2004; Ayadi et al. [sent-16, score-0.746]

5 However, the past attempts are virtually restricted to estimating the emotion of an addresser1 from her/his utterance. [sent-18, score-0.568]

6 In contrast, few studies have explored how the emotion of the addressee is affected by the utterance. [sent-19, score-0.957]

7 The addressee in this example refers to the left-hand user, who receives the response. [sent-31, score-0.389]

8 With this motivation in mind, the paper investigates two novel tasks: (1) prediction of the addressee’s emotion and (2) generation of the response that elicits a prespecified emotion in the addressee’s mind. [sent-33, score-1.554]

9 For simplicity, we consider, as a history, an utterance and a response to it (Figure 1). [sent-35, score-0.391]

10 Given the history, the system predicts the addressee’s emotion that will be caused by the response. [sent-36, score-0.568]

11 For example, the system outputs JOY when the response is I hope you feel better soon, while it outputs SADNESS when the response is Sorry, but you can’t join us today. (Footnote 2: We adopt Plutchik (1980)’s eight emotional categories in both tasks.) [sent-37, score-0.772]

12 In the generation task, on the other hand, the system is provided with an utterance and an emotional category such as JOY or SADNESS, which is referred to as goal emotion. [sent-41, score-0.439]

13 Then the system generates the response that elicits the goal emotion in the addressee’s mind. [sent-42, score-0.947]

14 For example, I hope you feel better soon is generated as a response to I have had a high fever for 3 days when the goal emotion is specified as JOY, while Sorry, but you can’t join us today is generated for SADNESS (Figure 1). [sent-43, score-0.975]

15 Predicting the emotion of an addressee is useful for filtering flames or infelicitous expressions from online messages (Spertus, 1997). [sent-45, score-1.002]

16 The response generator that is aware of the emotion of an addressee is also useful for text completion in online conversation (Hasselgren et al. [sent-46, score-1.235]

17 We employ standard classifiers for predicting the emotion of an addressee. [sent-52, score-0.65]

18 Our contribution here is to investigate the effectiveness of new features that cannot be used in ordinary emotion recognition, the task of estimating the emotion of a speaker (or writer) from her/his utterance (or writing) (Ayadi et al. [sent-53, score-1.33]

19 To perform the generation task, we build a statistical response generator by following (Ritter et al. [sent-61, score-0.295]

20 To improve on the previous study, we investigate a method for controlling the content of the response, in our case for eliciting the goal emotion. [sent-63, score-0.407]

21 Using this data set, we train the classifiers that predict the emotion of an addressee, and the response generators that elicit the goal emotion. [sent-68, score-0.99]

22 2 Emotion-tagged Dialogue Corpus The key to making a supervised approach to predicting and eliciting the addressee’s emotion successful is to obtain large-scale, reliable training data effectively. [sent-70, score-0.757]

23 We thus automatically build a large-scale emotion-tagged dialogue corpus from microblog posts, and use it as the training data in the prediction and generation tasks. [sent-71, score-0.298]

24 We then explain how to automatically annotate utterances in the extracted dialogues with the addressers’ emotions by using emotional expressions as clues. [sent-74, score-0.777]

25 1 Mining dialogues from Twitter We have first crawled utterances (posts) from Twitter by using the Twitter REST API. [sent-76, score-0.389]

26 We then extracted dialogues from the resulting utterances, assuming that a series of utterances interchangeably made by two users forms a dialogue. [sent-81, score-0.388]

27 We here exploited the ‘in_reply_to_status_id’ field of each utterance, provided by the Twitter REST API, to link it to the other utterance, if any, to which it replied. [sent-82, score-0.31]
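
As an illustration of this dialogue-mining step, the following is a minimal Python sketch of threading crawled posts into two-user dialogues via the in_reply_to_status_id field; the post record layout and the exact dialogue criterion are simplifying assumptions, not the authors’ code.

```python
# A minimal sketch, assuming posts are dicts with 'id', 'user', 'text' and
# 'in_reply_to_status_id' (None for non-replies); the two-user alternation
# test approximates the paper's dialogue criterion.
def extract_dialogues(posts):
    by_id = {p["id"]: p for p in posts}
    has_reply = {p["in_reply_to_status_id"] for p in posts
                 if p["in_reply_to_status_id"] is not None}
    dialogues = []
    for post in posts:
        if post["id"] in has_reply:
            continue  # not a leaf; a longer chain will cover it
        chain, cur = [], post
        while cur is not None:  # walk reply links back to the root
            chain.append(cur)
            parent = cur["in_reply_to_status_id"]
            cur = by_id.get(parent) if parent is not None else None
        chain.reverse()  # root-first order
        users = [p["user"] for p in chain]
        # keep series of utterances made interchangeably by exactly two users
        if (len(chain) >= 2 and len(set(users)) == 2
                and all(a != b for a, b in zip(users, users[1:]))):
            dialogues.append([(p["user"], p["text"]) for p in chain])
    return dialogues
```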

28 Table 1 (statistics of the extracted data): # users 672,937; # dialogues 311,541,839; # unique utterances 1,007,403,858; ave. … [sent-85, score-0.369]

29 [Figure 2: The number of dialogues plotted against the dialogue length (# utterances in dialogue).] [sent-92, score-0.547]

30 Table 2: An illustration of an emotion-tagged dialogue: The first column shows a dialogue (a series of utterances interchangeably made by two users), while the second column shows the addresser’s emotion estimated from the utterance. [sent-100, score-1.016]

31 Table 1 lists the statistics of the extracted dialogues, while Figure 2 plots the number of dialogues against the dialogue length (the number of utterances in a dialogue). [sent-101, score-0.566]

32 Most of the dialogues (…2%) consist of at most 10 utterances, although the longest dialogue includes 1745 utterances and spans more than six weeks. [sent-103, score-0.429]

33 2 Tagging utterances with addressers’ emotions We then automatically labeled utterances in the obtained dialogues with the addressers’ emotions by using emotional expressions as clues (Table 2). [sent-105, score-1.184]

34 In this study, we have adopted Plutchik (1980)’s eight emotional categories (ANGER, ANTICIPATION, DISGUST, FEAR, JOY, SADNESS, SURPRISE, and TRUST) as the targets to label, and manually tailored around ten emotional expressions for each emotional category. [sent-106, score-0.7]

35 Because precise annotation is critical in the supervised learning scenario, we annotate utterances with the addressers’ emotions only when the emotional expressions do not: 1. … [sent-126, score-0.659]
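
The clue-based tagging described in sentences 33–35 can be sketched as follows; the clue words below are English placeholders (the paper’s roughly ten expressions per category are Japanese), and since the paper’s list of filtering conditions is truncated here, only a crude negation filter is shown as an assumed example.

```python
# A sketch of the rule-based tagging; clue words are English placeholders for
# the paper's Japanese expressions, and the negation filter stands in for the
# paper's (truncated) list of conditions under which clues are NOT trusted.
CLUES = {
    "JOY": ["happy", "glad"],
    "SADNESS": ["sad", "depressed"],
    "SURPRISE": ["surprised", "astonished"],
    "ANGER": ["angry", "furious"],
    # ANTICIPATION, DISGUST, FEAR, TRUST would be filled in analogously.
}
NEGATIONS = ("not", "never")

def tag_addresser_emotion(utterance):
    """Return the single emotion signalled by a clue expression, else None."""
    tokens = utterance.lower().split()
    hits = set()
    for emotion, clues in CLUES.items():
        for clue in clues:
            if clue not in tokens:
                continue
            idx = tokens.index(clue)
            # skip clues in the scope of (a crude approximation of) negation
            if any(neg in tokens[max(0, idx - 2):idx] for neg in NEGATIONS):
                continue
            hits.add(emotion)
    # annotate only when exactly one category fires, favouring precision
    return hits.pop() if len(hits) == 1 else None
```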

36 Two human workers measured the precision of the annotation by examining 100 labeled utterances randomly sampled for each emotional category. [sent-134, score-0.559]

37 3 Predicting Addressee’s Emotion This section describes a method for predicting the emotion elicited in an addressee when s/he receives a response to her/his utterance. [sent-141, score-1.334]

38 The input to this task is a pair of an utterance and a response to it, e. [sent-142, score-0.391]

39 , the two utterances in Figure 1, while the output is the addressee’s emotion among the emotional categories of Plutchik (1980) (JOY and SADNESS for the top and bottom dialogues in Figure 1, respectively). [sent-144, score-1.144]

40 Although a response could elicit multiple emotions in the addressee, in this paper we focus on predicting the most salient emotion elicited in the addressee and cast the prediction as a single-label multi-class classification problem. [sent-145, score-1.637]

41 We then construct a one-versus-the-rest classifier by combining eight binary classifiers, each of which predicts whether the response elicits the corresponding emotional category. [sent-146, score-0.579]
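
A minimal sketch of this one-versus-the-rest construction over Plutchik’s eight categories, assuming scikit-learn and logistic regression (the paper does not name its learner):

```python
# Word 1-3-gram features over the utterance-response pair feed eight binary
# classifiers combined one-versus-the-rest; scikit-learn and logistic
# regression are assumptions, not the paper's stated implementation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

EMOTIONS = ["ANGER", "ANTICIPATION", "DISGUST", "FEAR",
            "JOY", "SADNESS", "SURPRISE", "TRUST"]

def build_predictor():
    return make_pipeline(
        CountVectorizer(ngram_range=(1, 3)),
        OneVsRestClassifier(LogisticRegression(max_iter=1000)),
    )

# Usage: each x is an utterance and its response joined into one string,
# each y is one of EMOTIONS.
# clf = build_predictor(); clf.fit(train_x, train_y); clf.predict(test_x)
```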

42 For each emotion-tagged utterance in the corpus, we assume that the tagged emotion is elicited by the (last) response. [sent-149, score-0.822]

43 We thereby extract the pair of utterances preceding the emotion-tagged utterance and the tagged emotion as one training example. [sent-150, score-0.991]

44 Taking the dialogue in Table 2 as an example, we obtain one training example from the first two utterances and SURPRISE as the emotion elicited in user A. [sent-151, score-1.079]
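
The training-example extraction in sentences 42–44 can be sketched as follows, assuming a dialogue is stored as (user, text, emotion-or-None) triples in temporal order:

```python
# For every emotion-tagged utterance, the two preceding utterances become
# the (utterance, response) pair and the tag becomes the training label.
def extract_training_examples(dialogue):
    examples = []
    for i in range(2, len(dialogue)):
        emotion = dialogue[i][2]
        if emotion is None:
            continue
        utterance, response = dialogue[i - 2][1], dialogue[i - 1][1]
        examples.append((utterance, response, emotion))
    return examples

# For the dialogue of Table 2, the SURPRISE tag on user A's third post
# yields one example built from the first two utterances, labeled SURPRISE.
```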

45 The extracted n-grams could indicate a certain action that elicits a specific emotion (e. [sent-153, score-0.67]

46 Because word n-grams themselves are likely to be sparse, we estimate the addressers’ emotions from their utterances and exploit them to induce emotion features. [sent-160, score-0.975]

47 The addresser’s emotion has been reported to influence the addressee’s emotion. (Footnote 5: Because microblog posts are short, we expect emotions elicited by a response post not to be very diverse and a multi-class classification to be able to capture the essential crux of the prediction task.) [sent-161, score-1.736]

48 , 2012), while the addressee’s emotion just before receiving a response can be a reference to predict her/his emotion in question after receiving the response. [sent-164, score-1.372]

49 To induce emotion features, we exploit the rule-based approach used in Section 2. [sent-165, score-0.568]

50 Since the rule-based approach annotates utterances with emotions only when they contain emotional expressions, we independently train for each emotional category a binary classifier that estimates the addresser’s emotion from her/his utterance and apply it to the unlabeled utterances. [sent-167, score-1.562]
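
A sketch of this emotion-feature induction, assuming scikit-learn: one binary classifier per category is trained on the rule-tagged utterances of Section 2 and applied to unlabeled utterances, and its positive predictions become binary features.

```python
# One binary classifier per Plutchik category, trained on rule-tagged data;
# scikit-learn and logistic regression are assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

EMOTIONS = ["ANGER", "ANTICIPATION", "DISGUST", "FEAR",
            "JOY", "SADNESS", "SURPRISE", "TRUST"]

def train_emotion_taggers(tagged):
    """tagged: list of (utterance_text, emotion_label) pairs."""
    texts = [text for text, _ in tagged]
    taggers = {}
    for emotion in EMOTIONS:
        y = [int(label == emotion) for _, label in tagged]
        clf = make_pipeline(CountVectorizer(ngram_range=(1, 3)),
                            LogisticRegression(max_iter=1000))
        taggers[emotion] = clf.fit(texts, y)
    return taggers

def emotion_features(taggers, text, prefix):
    """E.g. {'utt_emo=JOY': 1} when the JOY tagger fires on the utterance."""
    return {f"{prefix}_emo={emo}": 1
            for emo, clf in taggers.items() if clf.predict([text])[0] == 1}
```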

51 (n ≤ 3) We should emphasize that the features induced from the addressee’s utterance are unique to this task and are hardly available in the related tasks that predicted the emotion of a reader of news articles (Lin and Hsin-Yihn, 2008) or personal stories (Socher et al. [sent-169, score-0.741]

52 4 Eliciting Addressee’s Emotion This section presents a method for generating a response that elicits the goal emotion, which is one of the emotional categories of Plutchik (1980), in the addressee. [sent-172, score-0.586]

53 2, we present how to adapt the model in order to generate a response that elicits the goal emotion in the addressee. [sent-177, score-0.947]

54 Similar to ordinary machine translation systems, the model is learned from pairs of an utterance and a response by using off-the-shelf tools for machine translation. [sent-182, score-0.459]
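
Sentence 54 casts response generation as translation; a noisy-channel-style sketch of how such a model scores candidate responses is given below. Here tm_logprob and lm_logprob are assumed callables (e.g. wrapping off-the-shelf MT and LM tools), and lam is an assumed interpolation weight, not a tuned value from the paper.

```python
# A translation model maps the utterance to a response; a language model
# scores the response's fluency. This is a sketch of the framework, not
# the authors' actual pipeline.
def score_response(utterance, response, tm_logprob, lm_logprob, lam=0.5):
    return (lam * tm_logprob(utterance, response)
            + (1 - lam) * lm_logprob(response))

def generate(utterance, candidates, tm_logprob, lm_logprob):
    # With a real decoder, candidates come from beam search; here we simply
    # rerank an externally supplied candidate list.
    return max(candidates,
               key=lambda r: score_response(utterance, r,
                                            tm_logprob, lm_logprob))
```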

55 On top of this framework, we have developed a response generator that elicits a specific emotion. [sent-202, score-0.361]

56 We use the emotion-tagged dialogue corpus to learn eight translation models and language models, each of which is specialized in generating the response that elicits one of the eight emotions (Plutchik, 1980). [sent-203, score-0.768]

57 Specifically, the models are learned from the utterances preceding those that are tagged with an emotional category. [sent-204, score-0.475]

58 As an example, let us examine how to learn the models for eliciting SURPRISE from the dialogue in Table 2. [sent-205, score-0.308]

59 In this case, the first two utterances are used to learn the translation model, while only the second utterance is used to learn the language model. [sent-206, score-0.434]
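
The per-emotion model adaptation in sentences 56–59 amounts to splitting the tagged corpus into per-emotion translation-model and language-model training sets; a sketch, reusing the (user, text, emotion-or-None) dialogue representation assumed earlier:

```python
# For an utterance tagged with emotion E, the preceding (utterance, response)
# pair trains the translation model for E, and the response alone trains the
# language model for E.
from collections import defaultdict

def adapted_training_data(dialogues):
    tm_pairs = defaultdict(list)   # emotion -> [(utterance, response), ...]
    lm_sents = defaultdict(list)   # emotion -> [response, ...]
    for dialogue in dialogues:
        for i in range(2, len(dialogue)):
            emotion = dialogue[i][2]
            if emotion is None:
                continue
            utterance, response = dialogue[i - 2][1], dialogue[i - 1][1]
            tm_pairs[emotion].append((utterance, response))
            lm_sents[emotion].append(response)
    return tm_pairs, lm_sents
```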

60 Because not all the utterances are tagged with an emotion in the emotion-tagged dialogue corpus, only a small fraction of the utterances can be used for learning the adapted models. [sent-208, score-0.33]

61 1 Test data To evaluate the proposed method, we built, as test data, sets of an utterance paired with responses that elicit a certain goal emotion (Table 5). [sent-231, score-0.986]

62 Each utterance in the test data has more than one response that elicits the same goal emotion, because these responses are used to compute the BLEU score (see section 5. [sent-233, score-0.418]
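
Given such multi-reference test data, the BLEU evaluation can be sketched with NLTK (an assumed tool; the paper does not name its BLEU implementation):

```python
# Corpus-level BLEU with multiple human references per test utterance.
from nltk.translate.bleu_score import corpus_bleu

def evaluate(system_outputs, reference_sets):
    """system_outputs: one tokenized generated response per test utterance;
    reference_sets: for each utterance, the tokenized human responses that
    elicit the same goal emotion."""
    return corpus_bleu(reference_sets, system_outputs)
```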

63 We first asked five human workers to produce responses to 80 utterances (10 utterances for each goal emotion). [sent-236, score-0.727]

64 Note that the 80 utterances do not overlap between workers and that each worker produced only one response to each utterance. [sent-237, score-0.653]

65 To alleviate the burden on the workers, we actually provided each worker with the utterances in the emotion-tagged corpus. [sent-238, score-0.316]

66 Then we asked each worker to select 80 utterances to which s/he thought s/he could easily respond. [sent-239, score-0.316]

67 We did not allow the same worker to produce more than one response to the same utterance. [sent-243, score-0.301]

68 In this way, we obtained 1200 responses for the 400 utterances in total. [sent-244, score-0.37]

69 Finally, we assessed the data quality to remove responses that were unlikely to elicit the goal emotion. [sent-245, score-0.282]

70 For each utterance-response pair, we asked two workers to judge whether the response elicited the goal emotion. [sent-246, score-0.46]

71 2 Prediction task We first report experimental results on predicting the addressee’s emotion within a dialogue. [sent-253, score-0.627]

72 Table 6 lists the number of utterance-response pairs used to train the eight binary classifiers for the individual emotional categories, which form a one-versus-the-rest classifier for the prediction task. [sent-254, score-0.367]

73 To investigate the impact of the features that are uniquely available in dialogue data, we compared classifiers trained with the following two sets of features in terms of precision, recall, and F1 for each emotional category. [sent-256, score-0.408]

74 RESPONSE The n-gram and emotion features induced from the response. [sent-257, score-0.568]

75 The n-gram and emotion features induced from the response and the addressee’s utterance. [sent-325, score-0.804]

76 We can see that the features induced from the addressee’s utterance significantly improved the prediction performance, F1, for emotions other than FEAR. [sent-327, score-0.355]

77 Table 8 shows a confusion matrix of the classifier using all the features, with the most frequently predicted emotions bold-faced and the most frequently confused emotions underlined for each emotional category. [sent-329, score-0.579]

78 The classifier was less likely to confuse positive emotions (JOY and ANTICIPATION) with negative emotions (ANGER, DISGUST, FEAR, and SADNESS), or vice versa. [sent-333, score-0.742]

79 In this example, the addressee will be surprised by the response only if s/he does not already know the fact it provides. [sent-343, score-0.457]

80 3 Generation task We next demonstrate the experimental results for eliciting the emotion of the addressee. [sent-345, score-0.698]

81 We use the utterance pairs summarized in Table 6 to learn the translation models and language models for eliciting each emotional category. [sent-346, score-0.542]

82 In this evaluation, the system is provided with the utterance and the goal emotion in the test data, and the generated responses are evaluated through the BLEU score. [sent-352, score-0.883]

83 The results demonstrate that model adaptation is useful for generating the responses that elicit the goal emotion. [sent-381, score-0.291]

84 In this evaluation, the baseline (‘no adaptation’ in Table 10) and the proposed method generated a response for each of the 396 utterances in the test data. [sent-390, score-0.515]

85 If the response was regarded as appropriate by either of the workers, it was further judged whether or not it elicited the goal emotion. [sent-393, score-0.967]

86 In particular, we can confirm that the proposed method can generate responses that more clearly elicit the addressee’s emotion. [sent-400, score-0.79]

87 , the system) feels anticipation, and consequently the emotion of the addressee is affected by the emotion of the speaker (i. [sent-428, score-1.568]

88 6 Related Work There have been a tremendous number of studies on predicting emotion from text or speech data (Ayadi et al. [sent-433, score-0.627]

89 Unlike our prediction task, most of them have exclusively focused on estimating the emotion of a speaker (or writer) from her/his utterance (or writing). [sent-437, score-0.788]

90 (2011) investigated predicting the emotion of a reader from the text that s/he reads. [sent-439, score-0.627]

91 , It rained suddenly when I went to see the cherry blossoms) and an emotion elicited by it (e. [sent-446, score-0.65]

92 A similar technique would be useful for predicting the emotion of an addressee as well. [sent-450, score-1.001]

93 At this moment, we are unaware of any statistical response generators that model the emotion of the user. [sent-453, score-0.823]

94 Those attempts are similar to our work in that they also aim at eliciting a certain emotion in the addressee. [sent-456, score-0.698]

95 7 Conclusion and Future Work In this paper, we have explored predicting and eliciting the emotion of an addressee by using a large amount of dialogue data obtained from microblog posts. [sent-460, score-1.364]

96 In the first attempt to model the emotion of an addressee in the field of NLP, we demonstrated that the response of the dialogue partner and the previous utterance of the addressee are useful for predicting the emotion. [sent-461, score-1.974]

97 In the generation task, on the other hand, we showed that the model adaptation approach successfully generates the responses that elicit the goal emotion. [sent-462, score-0.327]

98 Survey on speech emotion recognition: Features, classification schemes, and databases. [sent-472, score-0.568]

99 Predicting emotion in spoken dialogue from multiple knowledge sources. [sent-493, score-0.746]

100 Ranking reader emotions using pairwise loss minimization and emotional distribution regression. [sent-518, score-0.363]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('emotion', 0.568), ('addressee', 0.389), ('utterances', 0.251), ('response', 0.236), ('emotional', 0.207), ('dialogue', 0.178), ('emotions', 0.156), ('utterance', 0.155), ('sadness', 0.152), ('joy', 0.135), ('eliciting', 0.13), ('responses', 0.119), ('dialogues', 0.118), ('elicit', 0.103), ('elicits', 0.102), ('workers', 0.101), ('addressers', 0.093), ('disgust', 0.087), ('elicited', 0.082), ('fever', 0.071), ('ritter', 0.07), ('surprise', 0.068), ('addresser', 0.067), ('plutchik', 0.067), ('worker', 0.065), ('anticipation', 0.065), ('fear', 0.064), ('predicting', 0.059), ('balahur', 0.058), ('twitter', 0.053), ('ayadi', 0.047), ('sorry', 0.047), ('expressions', 0.045), ('prediction', 0.044), ('posts', 0.042), ('goal', 0.041), ('okyo', 0.04), ('microblog', 0.04), ('tokyo', 0.037), ('generation', 0.036), ('bleu', 0.036), ('bandyopadhyay', 0.035), ('trust', 0.035), ('eight', 0.034), ('feel', 0.034), ('interpolation', 0.033), ('adapted', 0.033), ('history', 0.03), ('happy', 0.029), ('okumura', 0.028), ('translation', 0.028), ('adaptation', 0.028), ('andres', 0.027), ('boldrini', 0.027), ('dybala', 0.027), ('ghamrawi', 0.027), ('hasselgren', 0.027), ('labtov', 0.027), ('patricio', 0.027), ('tokuhisa', 0.027), ('ynaga', 0.027), ('anger', 0.026), ('join', 0.025), ('confused', 0.024), ('ihave', 0.024), ('gree', 0.024), ('confuses', 0.024), ('montoyo', 0.024), ('classifiers', 0.023), ('generator', 0.023), ('pairs', 0.022), ('japanese', 0.022), ('socher', 0.022), ('feels', 0.022), ('ester', 0.022), ('dg', 0.022), ('gonz', 0.022), ('speaker', 0.021), ('confusions', 0.02), ('sennrich', 0.02), ('prec', 0.02), ('appropriateness', 0.02), ('crawled', 0.02), ('regarded', 0.02), ('humor', 0.019), ('rec', 0.019), ('nez', 0.019), ('generators', 0.019), ('interchangeably', 0.019), ('assessed', 0.019), ('conversation', 0.019), ('sarcasm', 0.019), ('lists', 0.019), ('predicted', 0.018), ('ordinary', 0.018), ('classifier', 0.018), ('japan', 0.017), ('tagged', 0.017), ('weights', 0.017), ('sentiment', 0.017)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999845 282 acl-2013-Predicting and Eliciting Addressee's Emotion in Online Dialogue

Author: Takayuki Hasegawa ; Nobuhiro Kaji ; Naoki Yoshinaga ; Masashi Toyoda

Abstract: While there have been many attempts to estimate the emotion of an addresser from her/his utterance, few studies have explored how her/his utterance affects the emotion of the addressee. This has motivated us to investigate two novel tasks: predicting the emotion of the addressee and generating a response that elicits a specific emotion in the addressee’s mind. We target Japanese Twitter posts as a source of dialogue data and automatically build training data for learning the predictors and generators. The feasibility of our approaches is assessed by using 1099 utterance-response pairs that are built by . five human workers.

2 0.47069609 209 acl-2013-Joint Modeling of News Readerâ•Žs and Comment Writerâ•Žs Emotions

Author: Huanhuan Liu ; Shoushan Li ; Guodong Zhou ; Chu-ren Huang ; Peifeng Li

Abstract: Emotion classification can be generally done from both the writer’s and reader’s perspectives. In this study, we find that two foundational tasks in emotion classification, i.e., reader’s emotion classification on the news and writer’s emotion classification on the comments, are strongly related to each other in terms of coarse-grained emotion categories, i.e., negative and positive. On the basis, we propose a respective way to jointly model these two tasks. In particular, a cotraining algorithm is proposed to improve semi-supervised learning of the two tasks. Experimental evaluation shows the effectiveness of our joint modeling approach. . 1

3 0.22794051 379 acl-2013-Utterance-Level Multimodal Sentiment Analysis

Author: Veronica Perez-Rosas ; Rada Mihalcea ; Louis-Philippe Morency

Abstract: During real-life interactions, people are naturally gesturing and modulating their voice to emphasize specific points or to express their emotions. With the recent growth of social websites such as YouTube, Facebook, and Amazon, video reviews are emerging as a new source of multimodal and natural opinions that has been left almost untapped by automatic opinion analysis techniques. This paper presents a method for multimodal sentiment classification, which can identify the sentiment expressed in utterance-level visual datastreams. Using a new multimodal dataset consisting of sentiment annotated utterances extracted from video reviews, we show that multimodal sentiment analysis can be effectively performed, and that the joint use of visual, acoustic, and linguistic modalities can lead to error rate reductions of up to 10.5% as compared to the best performing individual modality.

4 0.22147116 284 acl-2013-Probabilistic Sense Sentiment Similarity through Hidden Emotions

Author: Mitra Mohtarami ; Man Lan ; Chew Lim Tan

Abstract: Sentiment Similarity of word pairs reflects the distance between the words regarding their underlying sentiments. This paper aims to infer the sentiment similarity between word pairs with respect to their senses. To achieve this aim, we propose a probabilistic emotionbased approach that is built on a hidden emotional model. The model aims to predict a vector of basic human emotions for each sense of the words. The resultant emotional vectors are then employed to infer the sentiment similarity of word pairs. We apply the proposed approach to address two main NLP tasks, namely, Indirect yes/no Question Answer Pairs inference and Sentiment Orientation prediction. Extensive experiments demonstrate the effectiveness of the proposed approach.

5 0.1701894 184 acl-2013-Identification of Speakers in Novels

Author: Hua He ; Denilson Barbosa ; Grzegorz Kondrak

Abstract: Speaker identification is the task of at- tributing utterances to characters in a literary narrative. It is challenging to auto- mate because the speakers of the majority ofutterances are not explicitly identified in novels. In this paper, we present a supervised machine learning approach for the task that incorporates several novel features. The experimental results show that our method is more accurate and general than previous approaches to the problem.

6 0.12871401 79 acl-2013-Character-to-Character Sentiment Analysis in Shakespeare's Plays

7 0.11848418 90 acl-2013-Conditional Random Fields for Responsive Surface Realisation using Global Features

8 0.084284335 65 acl-2013-BRAINSUP: Brainstorming Support for Creative Sentence Generation

9 0.07744246 278 acl-2013-Patient Experience in Online Support Forums: Modeling Interpersonal Interactions and Medication Use

10 0.076314576 311 acl-2013-Semantic Neighborhoods as Hypergraphs

11 0.073482156 197 acl-2013-Incremental Topic-Based Translation Model Adaptation for Conversational Spoken Language Translation

12 0.068724364 141 acl-2013-Evaluating a City Exploration Dialogue System with Integrated Question-Answering and Pedestrian Navigation

13 0.067642704 63 acl-2013-Automatic detection of deception in child-produced speech using syntactic complexity features

14 0.056209877 355 acl-2013-TransDoop: A Map-Reduce based Crowdsourced Translation for Complex Domain

15 0.05348355 129 acl-2013-Domain-Independent Abstract Generation for Focused Meeting Summarization

16 0.050203018 265 acl-2013-Outsourcing FrameNet to the Crowd

17 0.050097391 248 acl-2013-Modelling Annotator Bias with Multi-task Gaussian Processes: An Application to Machine Translation Quality Estimation

18 0.048979595 315 acl-2013-Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression

19 0.048828989 11 acl-2013-A Multi-Domain Translation Model Framework for Statistical Machine Translation

20 0.04773796 190 acl-2013-Implicatures and Nested Beliefs in Approximate Decentralized-POMDPs


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.118), (1, 0.097), (2, 0.018), (3, 0.079), (4, -0.027), (5, -0.059), (6, 0.1), (7, -0.014), (8, 0.057), (9, 0.127), (10, -0.051), (11, 0.008), (12, -0.066), (13, 0.036), (14, -0.027), (15, -0.042), (16, -0.039), (17, 0.07), (18, 0.142), (19, 0.055), (20, -0.139), (21, -0.347), (22, 0.106), (23, -0.019), (24, -0.166), (25, 0.354), (26, 0.254), (27, -0.001), (28, 0.097), (29, 0.132), (30, 0.099), (31, 0.074), (32, -0.048), (33, 0.084), (34, -0.046), (35, -0.093), (36, 0.038), (37, -0.02), (38, 0.077), (39, 0.058), (40, 0.038), (41, -0.083), (42, 0.069), (43, 0.049), (44, -0.005), (45, -0.075), (46, 0.007), (47, 0.036), (48, 0.039), (49, 0.009)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96272653 282 acl-2013-Predicting and Eliciting Addressee's Emotion in Online Dialogue

Author: Takayuki Hasegawa ; Nobuhiro Kaji ; Naoki Yoshinaga ; Masashi Toyoda

Abstract: While there have been many attempts to estimate the emotion of an addresser from her/his utterance, few studies have explored how her/his utterance affects the emotion of the addressee. This has motivated us to investigate two novel tasks: predicting the emotion of the addressee and generating a response that elicits a specific emotion in the addressee’s mind. We target Japanese Twitter posts as a source of dialogue data and automatically build training data for learning the predictors and generators. The feasibility of our approaches is assessed by using 1099 utterance-response pairs that are built by . five human workers.

2 0.91670275 209 acl-2013-Joint Modeling of News Readerâ•Žs and Comment Writerâ•Žs Emotions

Author: Huanhuan Liu ; Shoushan Li ; Guodong Zhou ; Chu-ren Huang ; Peifeng Li

Abstract: Emotion classification can be generally done from both the writer’s and reader’s perspectives. In this study, we find that two foundational tasks in emotion classification, i.e., reader’s emotion classification on the news and writer’s emotion classification on the comments, are strongly related to each other in terms of coarse-grained emotion categories, i.e., negative and positive. On the basis, we propose a respective way to jointly model these two tasks. In particular, a cotraining algorithm is proposed to improve semi-supervised learning of the two tasks. Experimental evaluation shows the effectiveness of our joint modeling approach. . 1

3 0.54932028 184 acl-2013-Identification of Speakers in Novels

Author: Hua He ; Denilson Barbosa ; Grzegorz Kondrak

Abstract: Speaker identification is the task of at- tributing utterances to characters in a literary narrative. It is challenging to auto- mate because the speakers of the majority ofutterances are not explicitly identified in novels. In this paper, we present a supervised machine learning approach for the task that incorporates several novel features. The experimental results show that our method is more accurate and general than previous approaches to the problem.

4 0.52176696 379 acl-2013-Utterance-Level Multimodal Sentiment Analysis

Author: Veronica Perez-Rosas ; Rada Mihalcea ; Louis-Philippe Morency

Abstract: During real-life interactions, people are naturally gesturing and modulating their voice to emphasize specific points or to express their emotions. With the recent growth of social websites such as YouTube, Facebook, and Amazon, video reviews are emerging as a new source of multimodal and natural opinions that has been left almost untapped by automatic opinion analysis techniques. This paper presents a method for multimodal sentiment classification, which can identify the sentiment expressed in utterance-level visual datastreams. Using a new multimodal dataset consisting of sentiment annotated utterances extracted from video reviews, we show that multimodal sentiment analysis can be effectively performed, and that the joint use of visual, acoustic, and linguistic modalities can lead to error rate reductions of up to 10.5% as compared to the best performing individual modality.

5 0.51676738 284 acl-2013-Probabilistic Sense Sentiment Similarity through Hidden Emotions

Author: Mitra Mohtarami ; Man Lan ; Chew Lim Tan

Abstract: Sentiment Similarity of word pairs reflects the distance between the words regarding their underlying sentiments. This paper aims to infer the sentiment similarity between word pairs with respect to their senses. To achieve this aim, we propose a probabilistic emotionbased approach that is built on a hidden emotional model. The model aims to predict a vector of basic human emotions for each sense of the words. The resultant emotional vectors are then employed to infer the sentiment similarity of word pairs. We apply the proposed approach to address two main NLP tasks, namely, Indirect yes/no Question Answer Pairs inference and Sentiment Orientation prediction. Extensive experiments demonstrate the effectiveness of the proposed approach.

6 0.4521502 278 acl-2013-Patient Experience in Online Support Forums: Modeling Interpersonal Interactions and Medication Use

7 0.42186734 79 acl-2013-Character-to-Character Sentiment Analysis in Shakespeare's Plays

8 0.39214551 239 acl-2013-Meet EDGAR, a tutoring agent at MONSERRATE

9 0.37277186 190 acl-2013-Implicatures and Nested Beliefs in Approximate Decentralized-POMDPs

10 0.36214036 90 acl-2013-Conditional Random Fields for Responsive Surface Realisation using Global Features

11 0.31905967 65 acl-2013-BRAINSUP: Brainstorming Support for Creative Sentence Generation

12 0.2997494 257 acl-2013-Natural Language Models for Predicting Programming Comments

13 0.28347591 63 acl-2013-Automatic detection of deception in child-produced speech using syntactic complexity features

14 0.27809232 203 acl-2013-Is word-to-phone mapping better than phone-phone mapping for handling English words?

15 0.27090496 141 acl-2013-Evaluating a City Exploration Dialogue System with Integrated Question-Answering and Pedestrian Navigation

16 0.26649082 311 acl-2013-Semantic Neighborhoods as Hypergraphs

17 0.21165438 298 acl-2013-Recognizing Rare Social Phenomena in Conversation: Empowerment Detection in Support Group Chatrooms

18 0.20820375 86 acl-2013-Combining Referring Expression Generation and Surface Realization: A Corpus-Based Investigation of Architectures

19 0.20393364 178 acl-2013-HEADY: News headline abstraction through event pattern clustering

20 0.19405952 171 acl-2013-Grammatical Error Correction Using Integer Linear Programming


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.071), (6, 0.031), (11, 0.047), (15, 0.016), (21, 0.251), (24, 0.078), (26, 0.048), (28, 0.017), (35, 0.08), (42, 0.044), (48, 0.03), (64, 0.02), (70, 0.033), (88, 0.04), (90, 0.026), (95, 0.057)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.86054438 293 acl-2013-Random Walk Factoid Annotation for Collective Discourse

Author: Ben King ; Rahul Jha ; Dragomir Radev ; Robert Mankoff

Abstract: In this paper, we study the problem of automatically annotating the factoids present in collective discourse. Factoids are information units that are shared between instances of collective discourse and may have many different ways ofbeing realized in words. Our approach divides this problem into two steps, using a graph-based approach for each step: (1) factoid discovery, finding groups of words that correspond to the same factoid, and (2) factoid assignment, using these groups of words to mark collective discourse units that contain the respective factoids. We study this on two novel data sets: the New Yorker caption contest data set, and the crossword clues data set.

same-paper 2 0.78574443 282 acl-2013-Predicting and Eliciting Addressee's Emotion in Online Dialogue

Author: Takayuki Hasegawa ; Nobuhiro Kaji ; Naoki Yoshinaga ; Masashi Toyoda

Abstract: While there have been many attempts to estimate the emotion of an addresser from her/his utterance, few studies have explored how her/his utterance affects the emotion of the addressee. This has motivated us to investigate two novel tasks: predicting the emotion of the addressee and generating a response that elicits a specific emotion in the addressee’s mind. We target Japanese Twitter posts as a source of dialogue data and automatically build training data for learning the predictors and generators. The feasibility of our approaches is assessed by using 1099 utterance-response pairs that are built by . five human workers.

3 0.74797434 352 acl-2013-Towards Accurate Distant Supervision for Relational Facts Extraction

Author: Xingxing Zhang ; Jianwen Zhang ; Junyu Zeng ; Jun Yan ; Zheng Chen ; Zhifang Sui

Abstract: Distant supervision (DS) is an appealing learning method which learns from existing relational facts to extract more from a text corpus. However, the accuracy is still not satisfying. In this paper, we point out and analyze some critical factors in DS which have great impact on accuracy, including valid entity type detection, negative training examples construction and ensembles. We propose an approach to handle these factors. By experimenting on Wikipedia articles to extract the facts in Freebase (the top 92 relations), we show the impact of these three factors on the accuracy of DS and the remarkable improvement led by the proposed approach.

4 0.72424769 175 acl-2013-Grounded Language Learning from Video Described with Sentences

Author: Haonan Yu ; Jeffrey Mark Siskind

Abstract: We present a method that learns representations for word meanings from short video clips paired with sentences. Unlike prior work on learning language from symbolic input, our input consists of video of people interacting with multiple complex objects in outdoor environments. Unlike prior computer-vision approaches that learn from videos with verb labels or images with noun labels, our labels are sentences containing nouns, verbs, prepositions, adjectives, and adverbs. The correspondence between words and concepts in the video is learned in an unsupervised fashion, even when the video depicts si- multaneous events described by multiple sentences or when different aspects of a single event are described with multiple sentences. The learned word meanings can be subsequently used to automatically generate description of new video.

5 0.66433328 276 acl-2013-Part-of-Speech Induction in Dependency Trees for Statistical Machine Translation

Author: Akihiro Tamura ; Taro Watanabe ; Eiichiro Sumita ; Hiroya Takamura ; Manabu Okumura

Abstract: This paper proposes a nonparametric Bayesian method for inducing Part-ofSpeech (POS) tags in dependency trees to improve the performance of statistical machine translation (SMT). In particular, we extend the monolingual infinite tree model (Finkel et al., 2007) to a bilingual scenario: each hidden state (POS tag) of a source-side dependency tree emits a source word together with its aligned target word, either jointly (joint model), or independently (independent model). Evaluations of Japanese-to-English translation on the NTCIR-9 data show that our induced Japanese POS tags for dependency trees improve the performance of a forest- to-string SMT system. Our independent model gains over 1 point in BLEU by resolving the sparseness problem introduced in the joint model.

6 0.61303002 159 acl-2013-Filling Knowledge Base Gaps for Distant Supervision of Relation Extraction

7 0.56274968 2 acl-2013-A Bayesian Model for Joint Unsupervised Induction of Sentiment, Aspect and Discourse Representations

8 0.5547449 183 acl-2013-ICARUS - An Extensible Graphical Search Tool for Dependency Treebanks

9 0.55314213 83 acl-2013-Collective Annotation of Linguistic Resources: Basic Principles and a Formal Model

10 0.55251026 194 acl-2013-Improving Text Simplification Language Modeling Using Unsimplified Text Data

11 0.55192053 318 acl-2013-Sentiment Relevance

12 0.55125821 185 acl-2013-Identifying Bad Semantic Neighbors for Improving Distributional Thesauri

13 0.55086094 85 acl-2013-Combining Intra- and Multi-sentential Rhetorical Parsing for Document-level Discourse Analysis

14 0.55045944 99 acl-2013-Crowd Prefers the Middle Path: A New IAA Metric for Crowdsourcing Reveals Turker Biases in Query Segmentation

15 0.54871547 187 acl-2013-Identifying Opinion Subgroups in Arabic Online Discussions

16 0.54735023 267 acl-2013-PARMA: A Predicate Argument Aligner

17 0.54530632 272 acl-2013-Paraphrase-Driven Learning for Open Question Answering

18 0.54385251 230 acl-2013-Lightly Supervised Learning of Procedural Dialog Systems

19 0.54382467 377 acl-2013-Using Supervised Bigram-based ILP for Extractive Summarization

20 0.54377496 253 acl-2013-Multilingual Affect Polarity and Valence Prediction in Metaphor-Rich Texts