acl acl2011 acl2011-33 knowledge-graph by maker-knowledge-mining

33 acl-2011-An Affect-Enriched Dialogue Act Classification Model for Task-Oriented Dialogue


Source: pdf

Author: Kristy Boyer ; Joseph Grafsgaard ; Eun Young Ha ; Robert Phillips ; James Lester

Abstract: Dialogue act classification is a central challenge for dialogue systems. Although the importance of emotion in human dialogue is widely recognized, most dialogue act classification models make limited or no use of affective channels in dialogue act classification. This paper presents a novel affect-enriched dialogue act classifier for task-oriented dialogue that models facial expressions of users, in particular, facial expressions related to confusion. The findings indicate that the affect-enriched classifiers perform significantly better for distinguishing user requests for feedback and grounding dialogue acts within textual dialogue. The results point to ways in which dialogue systems can effectively leverage affective channels to improve dialogue act classification.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Raleigh, NC, USA. Abstract: Dialogue act classification is a central challenge for dialogue systems. [sent-4, score-0.803]

2 Although the importance of emotion in human dialogue is widely recognized, most dialogue act classification models make limited or no use of affective channels in dialogue act classification. [sent-5, score-2.327]

3 This paper presents a novel affect-enriched dialogue act classifier for task-oriented dialogue that models facial expressions of users, in particular, facial expressions related to confusion. [sent-6, score-2.684]

4 The findings indicate that the affect-enriched classifiers perform significantly better for distinguishing user requests for feedback and grounding dialogue acts within textual dialogue. [sent-7, score-0.806]

5 The results point to ways in which dialogue systems can effectively leverage affective channels to improve dialogue act classification. [sent-8, score-1.438]

6 For these systems, understanding the role of a user’s utterance in the broader context of the dialogue is a key challenge (Sridhar, Bangalore, & Narayanan, 2009). [sent-10, score-0.63]

7 Central to this endeavor is dialogue act classification, which categorizes the intention behind the user’s move (e. [sent-11, score-0.759]

8 Automatic dialogue act classification has been the focus of a … [sent-14, score-0.803]

9 These models may be further improved by leveraging regularities of the dialogue from both linguistic and extra-linguistic sources. [sent-17, score-0.567]

10 Human interaction has long been understood to include rich phenomena consisting of verbal and nonverbal cues, with facial expressions playing a vital role (Knapp & Hall, 2006; McNeill, 1992; Mehrabian, 2007; Russell, Bachorowski, & Fernandez-Dols, 2003; Schmidt & Cohn, 2001). [sent-19, score-0.704]

11 … limited to modeling textual features and not multimodal expressions of emotion such as facial actions. [sent-25, score-0.815]

12 Such multimodal expressions have only just begun to be explored within corpus-based dialogue research (Calvo & D'Mello, 2010; Cavicchio, 2009). [sent-26, score-0.676]

13 This paper presents a novel affect-enriched dialogue act classification approach that leverages knowledge of users’ facial expressions during computer-mediated textual human-human dialogue. [sent-27, score-1.507]

14 Intuitively, the user’s affective state is a promising source of information that may help to distinguish between particular dialogue acts (e. [sent-28, score-0.704]

15 We focus specifically on occurrences of students’ confusion-related facial actions during task-oriented tutorial dialogue. [sent-31, score-0.708]

16 First, confusion is known to be prevalent within tutoring, and its implications for student learning are thought to run deep (Graesser, Lu, Olde, Cooper-Pye, & Whitten, 2005). [sent-33, score-0.207]

17 Finally, automatic facial action recognition technologies are developing rapidly, and confusion-related facial action events are among those that can be reliably recognized automatically (Bartlett et al. [sent-36, score-1.542]

18 This promising development bodes well for the feasibility of automatic real-time confusion detection within dialogue systems. [sent-38, score-0.662]

19 1 Dialogue Act Classification Because of the importance of dialogue act classification within dialogue systems, it has been an active area of research for some time. [sent-40, score-1.397]

20 Early work on automatic dialogue act classification modeled discourse structure with hidden Markov models, experimenting with lexical and prosodic features, and applying the dialogue act model as a constraint to aid in automatic speech recognition (Stolcke et al. [sent-41, score-1.641]
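
As a rough illustration of the HMM-style approach described in the sentence above (a minimal sketch, not the cited authors' implementation; the dialogue act tag set and all probabilities below are invented for the example), Viterbi decoding over per-utterance emission scores looks like this:

```python
import math

# Hypothetical tag set and toy probabilities -- illustrative only.
ACTS = ["STATEMENT", "QUESTION", "GROUNDING"]
start = {"STATEMENT": 0.4, "QUESTION": 0.4, "GROUNDING": 0.2}
trans = {  # P(act_t | act_{t-1})
    "STATEMENT": {"STATEMENT": 0.5, "QUESTION": 0.3, "GROUNDING": 0.2},
    "QUESTION":  {"STATEMENT": 0.6, "QUESTION": 0.1, "GROUNDING": 0.3},
    "GROUNDING": {"STATEMENT": 0.4, "QUESTION": 0.4, "GROUNDING": 0.2},
}

def viterbi(emissions):
    """emissions: one dict per utterance mapping act -> P(utterance | act)."""
    V = [{a: math.log(start[a]) + math.log(emissions[0][a]) for a in ACTS}]
    back = []
    for e in emissions[1:]:
        row, ptr = {}, {}
        for a in ACTS:
            prev = max(ACTS, key=lambda p: V[-1][p] + math.log(trans[p][a]))
            row[a] = V[-1][prev] + math.log(trans[prev][a]) + math.log(e[a])
            ptr[a] = prev
        V.append(row)
        back.append(ptr)
    path = [max(ACTS, key=lambda a: V[-1][a])]  # backtrace the best sequence
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi([{"STATEMENT": 0.2, "QUESTION": 0.7, "GROUNDING": 0.1},
               {"STATEMENT": 0.6, "QUESTION": 0.2, "GROUNDING": 0.2}]))
```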

21 A recently proposed alternative approach involves treating dialogue utterances as documents within a latent semantic analysis framework, and applying feature enhancements that incorporate such information as speaker and utterance duration (Di Eugenio et al. [sent-44, score-0.713]
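
The LSA framing above can be sketched in a few lines with scikit-learn (an assumption; the cited work's exact pipeline and weighting are not reproduced), with speaker and duration appended as feature enhancements:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Toy utterances standing in for corpus data; each utterance is one "document".
utterances = [
    "how do i know where to end",
    "try tracing the loop by hand",
    "what does this error mean",
    "ok that looks right so far",
]
speakers = [1, 0, 1, 0]           # hypothetical: 1 = student, 0 = tutor
durations = [2.1, 3.0, 1.8, 1.4]  # hypothetical utterance durations (seconds)

tfidf = TfidfVectorizer().fit_transform(utterances)
latent = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# "Feature enhancement": append speaker and duration to the latent vectors.
X = np.column_stack([latent, speakers, durations])
```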

22 However, it takes a step beyond the previous work by including multimodal affective displays, specifically facial expressions, as features available to an affect-enriched dialogue act classification model. [sent-48, score-1.513]

23 2 Detecting Emotions in Dialogue Detecting emotional states during spoken dialogue is an active area of research, much of which focuses on detecting frustration so that a user can be automatically transferred to a human dialogue agent (López-Cózar et al. [sent-50, score-1.249]

24 Research on spoken dialogue has leveraged lexical features along with discourse cues and acoustic information to classify user emotion, sometimes at a coarse grain along a positive/negative axis (Lee & Narayanan, 2005). [sent-52, score-0.683]

25 Recent work on an affective companion agent has examined user emotion classification within conversational speech (Cavazza et al. [sent-53, score-0.28]

26 In contrast to that spoken dialogue research, the work in this paper is situated within textual dialogue, a widely used modality of communication for which a deeper understanding of user affect may substantially improve system performance. [sent-55, score-0.723]

27 While many projects have focused on linguistic cues, recent work has begun to explore numerous channels for affect detection including facial actions, electrocardiograms, skin conductance, and posture sensors (Calvo & D'Mello, 2010). [sent-56, score-0.666]

28 A recent project in a map task domain investigates some of these sources of affect data within task-oriented dialogue (Cavicchio, 2009). [sent-57, score-0.639]

29 Like that work, the current project utilizes facial action tagging, for which promising automatic technologies exist (Bartlett et al. [sent-58, score-0.76]

30 However, we leverage the recognized expressions of emotion for the task of dialogue act classification. [sent-60, score-0.927]

31 3 Categorizing Emotions and Discourse within Dialogue Sets of emotion taxonomies for discourse and dialogue are often application-specific, for example, focusing on the frustration of users who are interacting with a spoken dialogue system (López-Cózar et al. [sent-62, score-1.336]

32 In contrast, the most widely utilized emotion frameworks are not application-specific; for example, Ekman’s Facial Action Coding System (FACS) has been widely used as a rigorous technique for coding facial movements based on human facial anatomy (Ekman & Friesen, 1978). [sent-64, score-1.349]

33 Within this framework, facial movements are categorized into facial action units, which represent discrete movements of muscle groups. [sent-65, score-1.427]

34 Additionally, facial action descriptors (for movements not derived from facial muscles) and movement and visibility codes are included. [sent-66, score-1.392]

35 Ekman’s basic emotions (Ekman, 1999) have been used in recent work on classifying emotion expressed within blog text (Das & Bandyopadhyay, 2009), while other recent work (Nguyen, 2010) utilizes Russell’s core affect model (Russell, 2003) for a similar task. [sent-67, score-0.21]

36 During tutorial dialogue, students may not frequently experience Ekman’s basic emotions of happiness, sadness, anger, fear, surprise, and disgust. [sent-68, score-0.19]

37 Instead, students appear to more frequently experience cognitive-affective states such as flow and confusion (Calvo & D'Mello, 2010). [sent-69, score-0.14]

38 Our work leverages Ekman’s facial tagging scheme to identify a particular facial action unit, Action Unit 4 (AU4), that has been observed to correlate with confusion (Craig, D'Mello, Witherspoon, Sullins, & Graesser, 2004; D'Mello, Craig, Sullins, & Graesser, 2006; McDaniel et al. [sent-70, score-1.451]

39 4 Importance of Confusion in Tutorial Dialogue Among the affective states that students experience during tutorial dialogue, confusion is prevalent, and its implications for student learning are significant. [sent-73, score-0.406]

40 Students may express such confusion within dialogue as uncertainty, to which human tutors often adapt in a context-dependent fashion (Forbes-Riley et al. [sent-75, score-0.7]

41 Moreover, implementing adaptations to student uncertainty within a dialogue system can improve the effectiveness of the system (Forbes-Riley et al. [sent-77, score-0.706]

42 For tutorial dialogue, the importance of understanding student utterances is paramount for a system to positively impact student learning (Dzikovska, Moore, Steinhauser, & Campbell, 2010). [sent-79, score-0.346]

43 The importance of confusion as a cognitive-affective state during learning suggests that the presence of student confusion may serve as a useful constraining feature for dialogue act classification of student utterances. [sent-80, score-1.128]

44 This paper explores the use of facial expression features in this way. [sent-81, score-0.662]

45 3 Task-Oriented Dialogue Corpus The corpus was collected during a textual human-human tutorial dialogue study in the domain of introductory computer science (Boyer, Phillips, et al. [sent-82, score-0.658]

46 Students solved an introductory computer programming problem and carried on textual dialogue with tutors, who viewed a synchronized version of the students’ problem-solving workspace. [sent-84, score-0.592]

47 1 Dialogue act annotation The dialogue act annotation scheme (Table 1) was applied manually. [sent-92, score-0.995]

48 Dialogue act tags and relative frequencies across fourteen dialogues in the video corpus. [Table 1 columns: Student Dialogue Act | Example | Rel. Freq.] [sent-96, score-0.262]

49 2 Task action annotation The tutoring sessions were task-oriented, focusing on a computer programming exercise. [sent-114, score-0.221]

50 Each of those subtasks also had numerous fine-grained goals, and student task actions either contributed or did not contribute to the goals. [sent-116, score-0.19]

51 First, the subtask structure was annotated hierarchically, and then each task action was labeled for correctness according to the requirements of the assignment. [sent-119, score-0.206]

52 Inter-annotator agreement was computed on 20% of the corpus at the leaves of the subtask tagging scheme, and resulted in a simple kappa of κ=. [sent-120, score-0.127]

53 However, the leaves of the annotation scheme feature an implicit ordering (subtasks were completed in order, and adjacent subtasks are semantically more similar than subtasks at a greater distance); therefore, a weighted kappa is also meaningful to consider for this annotation. [sent-122, score-0.146]
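
For example, scikit-learn computes both statistics; the annotator codes below are hypothetical integers that encode the subtasks' implicit left-to-right ordering, so linear weighting gives partial credit for near-misses:

```python
from sklearn.metrics import cohen_kappa_score

annotator_a = [1, 1, 2, 3, 3, 4, 5, 5]  # hypothetical subtask leaf codes
annotator_b = [1, 2, 2, 3, 4, 4, 5, 5]

simple = cohen_kappa_score(annotator_a, annotator_b)                      # unweighted
weighted = cohen_kappa_score(annotator_a, annotator_b, weights="linear")  # ordered labels
print(simple, weighted)
```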

54 Excerpt from corpus illustrating annotations and interplay between dialogue and task. 13:38:09 Student: How do I know where to end? [sent-127, score-0.567]

55 3 Lexical and Syntactic Features In addition to the manually annotated dialogue and task features described above, syntactic features of each utterance were automatically extracted using the Stanford Parser (De Marneffe et al. [sent-133, score-0.68]
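
The paper's toolchain used the Stanford Parser; as an illustrative stand-in only (an assumption, not the authors' setup), spaCy yields comparable part-of-speech and dependency-relation features:

```python
from collections import Counter

import spacy

nlp = spacy.load("en_core_web_sm")  # substitute parser, not the Stanford Parser

def syntactic_features(utterance):
    """Bag of POS tags and dependency relations for one utterance."""
    doc = nlp(utterance)
    feats = Counter()
    for tok in doc:
        feats[f"pos={tok.pos_}"] += 1
        feats[f"dep={tok.dep_}"] += 1
    return dict(feats)

print(syntactic_features("How do I know where to end?"))
```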

56 Our prior work has shown that these lexical and syntactic features are highly predictive of dialogue acts during task-oriented tutorial dialogue (Boyer, Ha et al. [sent-138, score-1.274]

57 The FACS certification process requires annotators to pass a test designed to analyze their agreement with reference coders on a set of spontaneous facial expressions (Ekman & Rosenberg, 2005). [sent-141, score-0.722]

58 This annotator viewed the videos continuously and paused the playback whenever notable facial displays of Action Unit 4 (AU4: Brow Lowerer) were seen. [sent-142, score-0.639]

59 This action unit was chosen for this study based on its correlations with confusion in prior research (Craig, D'Mello, Witherspoon, Sullins, & Graesser, 2004; D'Mello, Craig, Sullins, & Graesser, 2006; McDaniel et al. [sent-143, score-0.262]

60 This annotator followed the same method as the first annotator, pausing the video at any point to tag facial action events. [sent-146, score-0.798]

61 At any given time in the video, the coder was first identifying whether an action unit event existed, and then describing the facial movements that were present. [sent-147, score-0.826]

62 In this way, the action unit event tags spanned discrete durations of varying length, as specified by the coders. [sent-149, score-0.194]

63 Windows in which both annotators agreed that no facial action event was present were tagged by default as neutral. [sent-152, score-0.76]
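
A minimal sketch of this windowing scheme (event spans, session length, and window width below are invented; the paper reports agreement at quarter-, half-, and one-second granularities, per the table that follows):

```python
def windows_with_event(events, session_len, win):
    """Flag each fixed-width window that overlaps any tagged event span.

    events: list of (start_sec, end_sec) AU4 event intervals from one annotator.
    Returns one boolean per window of width `win` seconds.
    """
    n = int(session_len / win)
    flags = [False] * n
    for start, end in events:
        lo, hi = int(start / win), min(n - 1, int(end / win))
        for i in range(lo, hi + 1):
            flags[i] = True
    return flags

# Hypothetical spans for two annotators over a 10-second clip, 1-second windows.
a = windows_with_event([(1.2, 2.8), (6.0, 6.4)], 10, 1.0)
b = windows_with_event([(1.0, 3.1)], 10, 1.0)
# Windows where both flags are False are "neutral" by default; kappa is then
# computed over the paired per-window labels.
```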

64 Figure 1 illustrates facial expressions that display facial Action Unit 4. [sent-153, score-1.299]

65 Kappa values for inter-annotator agreement on facial action events. [Table columns: Granularity — ¼ sec | ½ sec | 1 sec; row: Presence of AU4 (Brow Lowerer)] [sent-155, score-0.88]

66 [Table residue: .86 in the Brow Lowerer row.] Despite the fact that promising automatic approaches exist for identifying many facial action units (Bartlett et al. [sent-159, score-0.76]

67 First, manual annotation is more robust than automatic recognition of facial action units, and manual annotation facilitated an exploratory, comprehensive view of student facial expressions during learning through task-oriented dialogue. [sent-161, score-1.617]

68 Although a detailed discussion of the other emotions present in the corpus is beyond the scope of this paper, Figure 2 illustrates some other spontaneous student facial expressions that differ from those associated with confusion. [sent-162, score-0.886]

69 5 Models The goal of the modeling experiment was to determine whether the addition of confusion-related facial expression features significantly boosts dialogue act classification accuracy for student utterances. [sent-163, score-1.577]

70 1 Features We take a vector-based approach, in which the features consist of the following: Utterance Features • Dialogue act features: Manually annotated dialogue act for the past three utterances. [sent-165, score-0.976]

71 These features include tutor dialogue acts, annotated with a scheme analogous to that used to annotate student utterances (Boyer et al. [sent-166, score-0.81]
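
A sketch of how such dialogue act history features might be encoded as a one-hot vector (the short tags and the three-utterance window follow the description above; everything else is illustrative):

```python
ACTS = ["S", "Q", "G", "RF"]  # hypothetical short tags; see Table 1
HISTORY = 3                   # "past three utterances"

def history_features(act_sequence, t):
    """One-hot features for the acts of the HISTORY utterances before index t."""
    feats = {}
    for lag in range(1, HISTORY + 1):
        prev = act_sequence[t - lag] if t - lag >= 0 else "<none>"
        for a in ACTS + ["<none>"]:
            feats[f"act_lag{lag}={a}"] = 1.0 if prev == a else 0.0
    return feats

print(history_features(["S", "Q", "G", "RF"], 3))  # features for the 4th utterance
```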

72 2 Modeling Approach A logistic regression approach was used to classify the dialogue acts based on the above feature vectors. [sent-169, score-0.616]

73 The goal of this work is to explore the utility of confusion-related facial features in the context of particular dialogue act types. [sent-173, score-1.381]

74 For this reason, a specialized classifier was learned by dialogue act. [sent-174, score-0.596]
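
A minimal sketch of this one-classifier-per-dialogue-act design with scikit-learn; the chi-squared feature selection step (k=40) is an assumption standing in for whatever selection procedure the authors used, and the data are random stand-ins:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((200, 1500))                         # stand-in feature vectors
acts = rng.choice(["S", "Q", "G", "RF"], size=200)  # stand-in dialogue act labels

# One specialized binary classifier per dialogue act.
classifiers = {}
for act in ["S", "Q", "G", "RF"]:
    y = (acts == act).astype(int)
    clf = make_pipeline(SelectKBest(chi2, k=40), LogisticRegression(max_iter=1000))
    classifiers[act] = clf.fit(X, y)
```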

75 3 Classification Results The classification accuracy and kappa for each specialized classifier is displayed in Table 4. [sent-176, score-0.131]

76 Note that kappa statistics adjust for the accuracy that would be expected by majority-baseline chance; a kappa statistic of zero indicates that the classifier performed equal to chance, and a positive kappa statistic indicates that the classifier performed better than chance. [sent-177, score-0.174]
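
Concretely, with p_o the observed accuracy and p_e the chance (majority-baseline) accuracy, kappa = (p_o - p_e) / (1 - p_e); a quick worked check:

```python
def kappa_vs_chance(accuracy, chance):
    """Kappa of a classifier relative to a chance (e.g. majority-class) baseline."""
    return (accuracy - chance) / (1.0 - chance)

print(kappa_vs_chance(0.80, 0.60))  # 0.5 -> better than chance
print(kappa_vs_chance(0.60, 0.60))  # 0.0 -> equal to chance
```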

77 As the table illustrates, the feature selection chose to utilize the AU4 feature for every dialogue act except STATEMENT (S). [sent-179, score-0.759]

78 For GROUNDING (G) and REQUEST FOR FEEDBACK (RF), the facial expression features significantly improved the classification accuracy compared to a model that was learned without affective features. [sent-181, score-0.794]

79 6 Discussion Dialogue act classification is an essential task for dialogue systems, and it has been addressed with a variety of modeling approaches and feature sets. [sent-182, score-0.803]

80 We have presented a novel approach that treats facial expressions of students as constraining features for an affect-enriched dialogue act classification model in task-oriented tutorial dialogue. [sent-183, score-1.645]

81 The results suggest that knowledge of the student’s confusion-related facial expressions can significantly enhance dialogue act classification for two types of dialogue acts, GROUNDING and REQUEST FOR FEEDBACK. [sent-184, score-2.049]

82 1 Features Selected for Classification Out of more than 1500 features available during feature selection, each of the specialized dialogue act classifiers selected between 30 and 50 features in each condition (with and without affect features). [sent-189, score-0.883]

83 To gain insight into the specific features that were useful for classifying these dialogue acts, it is useful to examine which of the AU4 history features were chosen during feature selection. [sent-190, score-0.617]

84 Absence of this confusion-related facial action unit was associated with a higher probability of a grounding act, such as an acknowledgement. [sent-192, score-0.844]

85 For REQUEST FOR FEEDBACK, the predictive features were presence or absence of AU4 within ten seconds of the longest available history (three turns in the past), as well as the presence of AU4 within five seconds of the current utterance (the utterance whose dialogue act is being classified). [sent-194, score-0.964]
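
A sketch of such windowed presence features (AU4 spans, turn timestamps, and the session data are invented; the five- and ten-second windows mirror the description above):

```python
def au4_within(events, timestamp, window_sec):
    """True if any AU4 span overlaps the window [timestamp - window_sec, timestamp]."""
    lo = timestamp - window_sec
    return any(start <= timestamp and end >= lo for start, end in events)

au4_events = [(118.0, 121.5)]              # hypothetical AU4 spans (seconds)
turn_times = [100.0, 110.0, 120.0, 130.0]  # three prior turns + current utterance

features = {
    "au4_5s_current": au4_within(au4_events, turn_times[-1], 5.0),   # False
    "au4_5s_turn-1":  au4_within(au4_events, turn_times[-2], 5.0),   # True
    "au4_10s_turn-3": au4_within(au4_events, turn_times[0], 10.0),   # False
}
```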

86 2 Implications The results presented here demonstrate that leveraging knowledge of user affect, in particular of spontaneous facial expressions, may improve the performance of dialogue act classification models. [sent-199, score-1.478]

87 Perhaps most interestingly, displays of confusionrelated facial actions prior to a student dialogue move enabled an affect-enriched classifier to recognize requests for feedback with significantly greater accuracy than a classifier that did not have access to the facial action features. [sent-200, score-2.131]

88 Requesting feedback also seems to be an important behavior of students, characteristically engaged in more frequently by women than men, and more frequently by students with lower incoming knowledge than by students with higher incoming knowledge (Boyer, Vouk, & Lester, 2007). [sent-202, score-0.194]

89 First, the time-consuming nature of manual facial action tagging restricted the number of dialogues that could be tagged. [sent-205, score-0.818]

90 For example, the performance of the affect-enriched classifier was better for dialogue acts of interest such as positive feedback and questions, but this difference was not statistically reliable. [sent-207, score-0.666]

91 The field is only just beginning to understand facial expressions during learning and to correlate these facial actions with emotions. [sent-209, score-1.321]

92 Finally, the results of manual facial action annotation may constitute upper-bound findings for applying automatic facial expression analysis to dialogue act classification. [sent-211, score-2.178]

93 In particular, the role of facial expressions in humanhuman dialogue is widely recognized. [sent-213, score-1.246]

94 Facial expressions offer a promising channel for understanding the emotions experienced by users of dialogue systems, particularly given the ubiquity of webcam technologies and the increasing number of dialogue systems that are deployed on webcamenabled devices. [sent-214, score-1.268]

95 Dialogue act classification models have not fully leveraged some of the techniques emerging from work on sentiment analysis. [sent-217, score-0.236]

96 These approaches may prove particularly useful for identifying emotions in dialogue utterances. [sent-218, score-0.619]

97 Another important direction for future work involves more fully exploring the ways in which affect expression differs between textual and spoken dialogue. [sent-219, score-0.134]

98 Finally, as automatic facial tagging technologies mature, they may prove powerful enough to enable broadly deployed dialogue systems to feasibly leverage facial expression data in the near future. [sent-220, score-1.827]

99 Modeling dialogue structure with adjacency pair analysis and hidden Markov models. [sent-276, score-0.567]

100 Combining lexical, syntactic and prosodic cues for improved online dialog act tagging. [sent-569, score-0.217]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('facial', 0.597), ('dialogue', 0.567), ('act', 0.192), ('action', 0.163), ('ekman', 0.12), ('student', 0.112), ('boyer', 0.098), ('affective', 0.088), ('pantic', 0.087), ('sullins', 0.087), ('emotion', 0.086), ('expressions', 0.082), ('graesser', 0.075), ('students', 0.072), ('confusion', 0.068), ('bartlett', 0.067), ('tutorial', 0.066), ('brow', 0.065), ('utterance', 0.063), ('kappa', 0.058), ('utterances', 0.056), ('lowerer', 0.054), ('vouk', 0.054), ('phillips', 0.053), ('grounding', 0.053), ('emotions', 0.052), ('craig', 0.051), ('feedback', 0.05), ('tutor', 0.05), ('acts', 0.049), ('zar', 0.048), ('affect', 0.045), ('actions', 0.045), ('classification', 0.044), ('facs', 0.043), ('mcdaniel', 0.043), ('roisman', 0.043), ('witherspoon', 0.043), ('spontaneous', 0.043), ('subtask', 0.043), ('videos', 0.042), ('sridhar', 0.041), ('request', 0.041), ('expression', 0.04), ('calvo', 0.038), ('tutors', 0.038), ('video', 0.038), ('narayanan', 0.038), ('rf', 0.036), ('tutoring', 0.036), ('user', 0.035), ('rotaru', 0.035), ('movements', 0.035), ('coding', 0.034), ('subtasks', 0.033), ('ambadar', 0.033), ('friesen', 0.033), ('frustration', 0.033), ('mello', 0.033), ('olde', 0.033), ('wallis', 0.033), ('dialogues', 0.032), ('discourse', 0.032), ('zeng', 0.031), ('russell', 0.031), ('unit', 0.031), ('sec', 0.03), ('bangalore', 0.03), ('reed', 0.029), ('specialized', 0.029), ('litman', 0.028), ('within', 0.027), ('tagging', 0.026), ('features', 0.025), ('textual', 0.025), ('prosodic', 0.025), ('eugenio', 0.025), ('nonverbal', 0.025), ('ha', 0.024), ('spoken', 0.024), ('channels', 0.024), ('sigdial', 0.024), ('display', 0.023), ('emotional', 0.023), ('preceded', 0.023), ('recognition', 0.022), ('bachorowski', 0.022), ('buggy', 0.022), ('cavicchio', 0.022), ('disequilibrium', 0.022), ('dzikovska', 0.022), ('knapp', 0.022), ('lester', 0.022), ('moriyama', 0.022), ('schmidt', 0.022), ('silovsky', 0.022), ('steinhauser', 0.022), ('whitten', 0.022), ('annotation', 0.022), ('education', 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999964 33 acl-2011-An Affect-Enriched Dialogue Act Classification Model for Task-Oriented Dialogue

Author: Kristy Boyer ; Joseph Grafsgaard ; Eun Young Ha ; Robert Phillips ; James Lester

Abstract: Dialogue act classification is a central challenge for dialogue systems. Although the importance of emotion in human dialogue is widely recognized, most dialogue act classification models make limited or no use of affective channels in dialogue act classification. This paper presents a novel affect-enriched dialogue act classifier for task-oriented dialogue that models facial expressions of users, in particular, facial expressions related to confusion. The findings indicate that the affect-enriched classifiers perform significantly better for distinguishing user requests for feedback and grounding dialogue acts within textual dialogue. The results point to ways in which dialogue systems can effectively leverage affective channels to improve dialogue act classification.

2 0.5359804 185 acl-2011-Joint Identification and Segmentation of Domain-Specific Dialogue Acts for Conversational Dialogue Systems

Author: Fabrizio Morbini ; Kenji Sagae

Abstract: Individual utterances often serve multiple communicative purposes in dialogue. We present a data-driven approach for identification of multiple dialogue acts in single utterances in the context of dialogue systems with limited training data. Our approach results in significantly increased understanding of user intent, compared to two strong baselines.

3 0.38452524 91 acl-2011-Data-oriented Monologue-to-Dialogue Generation

Author: Paul Piwek ; Svetlana Stoyanchev

Abstract: This short paper introduces an implemented and evaluated monolingual Text-to-Text generation system. The system takes monologue and transforms it to two-participant dialogue. After briefly motivating the task of monologue-to-dialogue generation, we describe the system and present an evaluation in terms of fluency and accuracy.

4 0.30493084 272 acl-2011-Semantic Information and Derivation Rules for Robust Dialogue Act Detection in a Spoken Dialogue System

Author: Wei-Bin Liang ; Chung-Hsien Wu ; Chia-Ping Chen

Abstract: In this study, a novel approach to robust dialogue act detection for error-prone speech recognition in a spoken dialogue system is proposed. First, partial sentence trees are proposed to represent a speech recognition output sentence. Semantic information and the derivation rules of the partial sentence trees are extracted and used to model the relationship between the dialogue acts and the derivation rules. The constructed model is then used to generate a semantic score for dialogue act detection given an input speech utterance. The proposed approach is implemented and evaluated in a Mandarin spoken dialogue system for tour-guiding service. Combined with scores derived from the ASR recognition probability and the dialogue history, the proposed approach achieves 84.3% detection accuracy, an absolute improvement of 34.7% over the baseline of the semantic slot-based method with 49.6% detection accuracy.

5 0.26045957 227 acl-2011-Multimodal Menu-based Dialogue with Speech Cursor in DICO II+

Author: Staffan Larsson ; Alexander Berman ; Jessica Villing

Abstract: This paper describes Dico II+, an in-vehicle dialogue system demonstrating a novel combination of flexible multimodal menu-based dialogue and a “speech cursor” which enables menu navigation as well as browsing long lists using haptic input and spoken output.

6 0.19965234 260 acl-2011-Recognizing Authority in Dialogue with an Integer Linear Programming Constrained Model

7 0.19791661 312 acl-2011-Turn-Taking Cues in a Human Tutoring Corpus

8 0.13777617 21 acl-2011-A Pilot Study of Opinion Summarization in Conversations

9 0.13109289 257 acl-2011-Question Detection in Spoken Conversations Using Textual Conversations

10 0.12526065 226 acl-2011-Multi-Modal Annotation of Quest Games in Second Life

11 0.12210999 149 acl-2011-Hierarchical Reinforcement Learning and Hidden Markov Models for Task-Oriented Natural Language Generation

12 0.11197273 118 acl-2011-Entrainment in Speech Preceding Backchannels.

13 0.07680317 252 acl-2011-Prototyping virtual instructors from human-human corpora

14 0.063122466 288 acl-2011-Subjective Natural Language Problems: Motivations, Applications, Characterizations, and Implications

15 0.059102252 83 acl-2011-Contrasting Multi-Lingual Prosodic Cues to Predict Verbal Feedback for Rapport

16 0.057354107 156 acl-2011-IMASS: An Intelligent Microblog Analysis and Summarization System

17 0.054792337 205 acl-2011-Learning to Grade Short Answer Questions using Semantic Similarity Measures and Dependency Graph Alignments

18 0.05196384 207 acl-2011-Learning to Win by Reading Manuals in a Monte-Carlo Framework

19 0.050640393 95 acl-2011-Detection of Agreement and Disagreement in Broadcast Conversations

20 0.048914276 218 acl-2011-MemeTube: A Sentiment-based Audiovisual System for Analyzing and Displaying Microblog Messages


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.118), (1, 0.084), (2, -0.033), (3, 0.017), (4, -0.414), (5, 0.458), (6, -0.105), (7, -0.054), (8, 0.002), (9, 0.012), (10, 0.165), (11, 0.025), (12, 0.111), (13, -0.029), (14, 0.066), (15, -0.004), (16, 0.04), (17, 0.01), (18, -0.026), (19, 0.031), (20, -0.086), (21, 0.006), (22, 0.086), (23, -0.113), (24, 0.007), (25, -0.0), (26, -0.077), (27, 0.04), (28, 0.045), (29, 0.024), (30, 0.019), (31, -0.029), (32, -0.067), (33, -0.005), (34, -0.005), (35, -0.009), (36, -0.008), (37, -0.031), (38, 0.019), (39, 0.014), (40, 0.025), (41, 0.031), (42, 0.033), (43, -0.01), (44, 0.007), (45, 0.005), (46, -0.004), (47, 0.016), (48, -0.013), (49, -0.04)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.98498964 185 acl-2011-Joint Identification and Segmentation of Domain-Specific Dialogue Acts for Conversational Dialogue Systems

Author: Fabrizio Morbini ; Kenji Sagae

Abstract: Individual utterances often serve multiple communicative purposes in dialogue. We present a data-driven approach for identification of multiple dialogue acts in single utterances in the context of dialogue systems with limited training data. Our approach results in significantly increased understanding of user intent, compared to two strong baselines.

same-paper 2 0.98025239 33 acl-2011-An Affect-Enriched Dialogue Act Classification Model for Task-Oriented Dialogue

Author: Kristy Boyer ; Joseph Grafsgaard ; Eun Young Ha ; Robert Phillips ; James Lester

Abstract: Dialogue act classification is a central challenge for dialogue systems. Although the importance of emotion in human dialogue is widely recognized, most dialogue act classification models make limited or no use of affective channels in dialogue act classification. This paper presents a novel affect-enriched dialogue act classifier for task-oriented dialogue that models facial expressions of users, in particular, facial expressions related to confusion. The findings indicate that the affect-enriched classifiers perform significantly better for distinguishing user requests for feedback and grounding dialogue acts within textual dialogue. The results point to ways in which dialogue systems can effectively leverage affective channels to improve dialogue act classification.

3 0.94850326 91 acl-2011-Data-oriented Monologue-to-Dialogue Generation

Author: Paul Piwek ; Svetlana Stoyanchev

Abstract: This short paper introduces an implemented and evaluated monolingual Text-to-Text generation system. The system takes monologue and transforms it to two-participant dialogue. After briefly motivating the task of monologue-to-dialogue generation, we describe the system and present an evaluation in terms of fluency and accuracy.

4 0.93992186 227 acl-2011-Multimodal Menu-based Dialogue with Speech Cursor in DICO II+

Author: Staffan Larsson ; Alexander Berman ; Jessica Villing

Abstract: This paper describes Dico II+, an in-vehicle dialogue system demonstrating a novel combination of flexible multimodal menu-based dialogue and a “speech cursor” which enables menu navigation as well as browsing long lists using haptic input and spoken output.

5 0.82995349 272 acl-2011-Semantic Information and Derivation Rules for Robust Dialogue Act Detection in a Spoken Dialogue System

Author: Wei-Bin Liang ; Chung-Hsien Wu ; Chia-Ping Chen

Abstract: In this study, a novel approach to robust dialogue act detection for error-prone speech recognition in a spoken dialogue system is proposed. First, partial sentence trees are proposed to represent a speech recognition output sentence. Semantic information and the derivation rules of the partial sentence trees are extracted and used to model the relationship between the dialogue acts and the derivation rules. The constructed model is then used to generate a semantic score for dialogue act detection given an input speech utterance. The proposed approach is implemented and evaluated in a Mandarin spoken dialogue system for tour-guiding service. Combined with scores derived from the ASR recognition probability and the dialogue history, the proposed approach achieves 84.3% detection accuracy, an absolute improvement of 34.7% over the baseline of the semantic slot-based method with 49.6% detection accuracy.

6 0.66063476 312 acl-2011-Turn-Taking Cues in a Human Tutoring Corpus

7 0.63951617 260 acl-2011-Recognizing Authority in Dialogue with an Integer Linear Programming Constrained Model

8 0.62798977 118 acl-2011-Entrainment in Speech Preceding Backchannels.

9 0.52941602 226 acl-2011-Multi-Modal Annotation of Quest Games in Second Life

10 0.48363298 252 acl-2011-Prototyping virtual instructors from human-human corpora

11 0.35174757 257 acl-2011-Question Detection in Spoken Conversations Using Textual Conversations

12 0.34519908 149 acl-2011-Hierarchical Reinforcement Learning and Hidden Markov Models for Task-Oriented Natural Language Generation

13 0.31893897 21 acl-2011-A Pilot Study of Opinion Summarization in Conversations

14 0.23616645 156 acl-2011-IMASS: An Intelligent Microblog Analysis and Summarization System

15 0.21539576 215 acl-2011-MACAON An NLP Tool Suite for Processing Word Lattices

16 0.20799153 95 acl-2011-Detection of Agreement and Disagreement in Broadcast Conversations

17 0.17911179 35 acl-2011-An ERP-based Brain-Computer Interface for text entry using Rapid Serial Visual Presentation and Language Modeling

18 0.17072284 73 acl-2011-Collective Classification of Congressional Floor-Debate Transcripts

19 0.16747794 207 acl-2011-Learning to Win by Reading Manuals in a Monte-Carlo Framework

20 0.16583793 338 acl-2011-Wikulu: An Extensible Architecture for Integrating Natural Language Processing Techniques with Wikis


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(5, 0.068), (17, 0.04), (26, 0.032), (37, 0.059), (39, 0.037), (41, 0.127), (55, 0.015), (59, 0.034), (62, 0.012), (72, 0.046), (91, 0.023), (92, 0.224), (96, 0.116), (97, 0.01), (98, 0.049)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.79793268 33 acl-2011-An Affect-Enriched Dialogue Act Classification Model for Task-Oriented Dialogue

Author: Kristy Boyer ; Joseph Grafsgaard ; Eun Young Ha ; Robert Phillips ; James Lester

Abstract: Dialogue act classification is a central challenge for dialogue systems. Although the importance of emotion in human dialogue is widely recognized, most dialogue act classification models make limited or no use of affective channels in dialogue act classification. This paper presents a novel affect-enriched dialogue act classifier for task-oriented dialogue that models facial expressions of users, in particular, facial expressions related to confusion. The findings indicate that the affect-enriched classifiers perform significantly better for distinguishing user requests for feedback and grounding dialogue acts within textual dialogue. The results point to ways in which dialogue systems can effectively leverage affective channels to improve dialogue act classification.

2 0.7387166 36 acl-2011-An Efficient Indexer for Large N-Gram Corpora

Author: Hakan Ceylan ; Rada Mihalcea

Abstract: We introduce a new publicly available tool that implements efficient indexing and retrieval of large N-gram datasets, such as the Web1T 5-gram corpus. Our tool indexes the entire Web1T dataset with an index size of only 100 MB and performs a retrieval of any N-gram with a single disk access. With an increased index size of 420 MB and duplicate data, it also allows users to issue wild card queries provided that the wild cards in the query are contiguous. Furthermore, we also implement some of the smoothing algorithms that are designed specifically for large datasets and are shown to yield better language models than the traditional ones on the Web1T 5gram corpus (Yuret, 2008). We demonstrate the effectiveness of our tool and the smoothing algorithms on the English Lexical Substi- tution task by a simple implementation that gives considerable improvement over a basic language model.

3 0.72848308 208 acl-2011-Lexical Normalisation of Short Text Messages: Makn Sens a #twitter

Author: Bo Han ; Timothy Baldwin

Abstract: Twitter provides access to large volumes of data in real time, but is notoriously noisy, hampering its utility for NLP. In this paper, we target out-of-vocabulary words in short text messages and propose a method for identifying and normalising ill-formed words. Our method uses a classifier to detect ill-formed words, and generates correction candidates based on morphophonemic similarity. Both word similarity and context are then exploited to select the most probable correction candidate for the word. The proposed method doesn’t require any annotations, and achieves state-of-the-art performance over an SMS corpus and a novel dataset based on Twitter.

4 0.63962144 172 acl-2011-Insertion, Deletion, or Substitution? Normalizing Text Messages without Pre-categorization nor Supervision

Author: Fei Liu ; Fuliang Weng ; Bingqing Wang ; Yang Liu

Abstract: Most text message normalization approaches are based on supervised learning and rely on human labeled training data. In addition, the nonstandard words are often categorized into different types and specific models are designed to tackle each type. In this paper, we propose a unified letter transformation approach that requires neither pre-categorization nor human supervision. Our approach models the generation process from the dictionary words to nonstandard tokens under a sequence labeling framework, where each letter in the dictionary word can be retained, removed, or substituted by other letters/digits. To avoid the expensive and time consuming hand labeling process, we automatically collected a large set of noisy training pairs using a novel webbased approach and performed character-level . alignment for model training. Experiments on both Twitter and SMS messages show that our system significantly outperformed the stateof-the-art deletion-based abbreviation system and the jazzy spell checker (absolute accuracy gain of 21.69% and 18. 16% over jazzy spell checker on the two test sets respectively).

5 0.63611394 185 acl-2011-Joint Identification and Segmentation of Domain-Specific Dialogue Acts for Conversational Dialogue Systems

Author: Fabrizio Morbini ; Kenji Sagae

Abstract: Individual utterances often serve multiple communicative purposes in dialogue. We present a data-driven approach for identification of multiple dialogue acts in single utterances in the context of dialogue systems with limited training data. Our approach results in significantly increased understanding of user intent, compared to two strong baselines.

6 0.63523984 135 acl-2011-Faster and Smaller N-Gram Language Models

7 0.63010567 137 acl-2011-Fine-Grained Class Label Markup of Search Queries

8 0.62599653 219 acl-2011-Metagrammar engineering: Towards systematic exploration of implemented grammars

9 0.62153602 56 acl-2011-Bayesian Inference for Zodiac and Other Homophonic Ciphers

10 0.62040871 65 acl-2011-Can Document Selection Help Semi-supervised Learning? A Case Study On Event Extraction

11 0.61989772 312 acl-2011-Turn-Taking Cues in a Human Tutoring Corpus

12 0.6182909 196 acl-2011-Large-Scale Cross-Document Coreference Using Distributed Inference and Hierarchical Models

13 0.61778116 189 acl-2011-K-means Clustering with Feature Hashing

14 0.61277759 232 acl-2011-Nonparametric Bayesian Machine Transliteration with Synchronous Adaptor Grammars

15 0.61107957 328 acl-2011-Using Cross-Entity Inference to Improve Event Extraction

16 0.61048913 139 acl-2011-From Bilingual Dictionaries to Interlingual Document Representations

17 0.61026573 83 acl-2011-Contrasting Multi-Lingual Prosodic Cues to Predict Verbal Feedback for Rapport

18 0.60805607 40 acl-2011-An Error Analysis of Relation Extraction in Social Media Documents

19 0.60516644 143 acl-2011-Getting the Most out of Transition-based Dependency Parsing

20 0.60464895 58 acl-2011-Beam-Width Prediction for Efficient Context-Free Parsing