acl acl2013 acl2013-230 knowledge-graph by maker-knowledge-mining

230 acl-2013-Lightly Supervised Learning of Procedural Dialog Systems


Source: pdf

Author: Svitlana Volkova ; Pallavi Choudhury ; Chris Quirk ; Bill Dolan ; Luke Zettlemoyer

Abstract: Procedural dialog systems can help users achieve a wide range of goals. However, such systems are challenging to build, currently requiring manual engineering of substantial domain-specific task knowledge and dialog management strategies. In this paper, we demonstrate that it is possible to learn procedural dialog systems given only light supervision, of the type that can be provided by non-experts. We consider domains where the required task knowledge exists in textual form (e.g., instructional web pages) and where system builders have access to statements of user intent (e.g., search query logs or dialog interactions). To learn from such textual resources, we describe a novel approach that first automatically extracts task knowledge from instructions, then learns a dialog manager over this task knowledge to provide assistance. Evaluation in a Microsoft Office domain shows that the individual components are highly accurate and can be integrated into a dialog system that provides effective help to users.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract Procedural dialog systems can help users achieve a wide range of goals. [sent-5, score-0.821]

2 However, such systems are challenging to build, currently requiring manual engineering of substantial domain-specific task knowledge and dialog management strategies. [sent-6, score-0.821]

3 In this paper, we demonstrate that it is possible to learn procedural dialog systems given only light supervision, of the type that can be provided by non-experts. [sent-7, score-0.867]

4 To learn from such textual resources, we describe a novel approach that first automatically extracts task knowledge from instructions, then learns a dialog manager over this task knowledge to provide assistance. [sent-13, score-0.817]

5 Evaluation in a Microsoft Office domain shows that the individual components are highly accurate and can be integrated into a dialog system that provides effective help to users. [sent-14, score-0.818]

6 1 Introduction Procedural dialog systems aim to assist users with a wide range of goals. [sent-15, score-0.821]

7 , 2011), or enable interaction with a health care system; a dialog between a system (S) and user (U) can be automatically achieved by learning from instructional web pages and query click logs. [sent-20, score-0.66]

8 However, such systems are challenging to build, currently requiring expensive, expert engineering of significant domain-specific task knowledge and dialog management strategies. [sent-24, score-0.821]

9 In this paper, we present a new approach for learning procedural dialog systems from taskoriented textual resources in combination with light, non-expert supervision. [sent-25, score-0.867]

10 There are two key challenges: we must (1) learn to convert the textual knowledge into a usable form and (2) learn a dialog manager that provides robust assistance given such knowledge. [sent-36, score-0.841]

11 Next, we present an approach that uses example user intent statements to simulate dialog interactions, and learns how to best map user utterances to nodes in these induced dialog trees. [sent-42, score-2.18]

12 When combined, these approaches produce a complete dialog system that can engage in conversations by automatically moving between the nodes of a large collection of induced dialog trees. [sent-43, score-1.702]

13 Experiments in the Windows Office help domain demonstrate that it is possible to build an effective end-to-end dialog system. [sent-44, score-0.794]

14 We evaluate the dialog tree construction and dialog management components in isolation, demonstrating high accuracy (in the 80-90% range). [sent-45, score-1.668]

15 2 Overview of Approach Our task-oriented dialog system understands user utterances by mapping them to nodes in dialog trees generated from instructional text. [sent-48, score-2.067]

16 Figure 2 shows an example of a set of instructions and the corresponding dialog tree. [sent-49, score-0.95]

17 We aim to convert this text into a form that will enable a dialog system to automatically assist with the described task. [sent-54, score-0.818]

18 , Figure 2, right) with nodes to represent entire documents (labeled as topics t), nodes to represent user goals or intents (g), and system action nodes (a) that enable execution of specific commands. [sent-57, score-0.82]
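
A rough data-structure sketch of the dialog trees described above, with topic, goal and action nodes, each carrying an associated system action; the class and field names are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical node types mirroring the description above: a topic node t
# stands for an entire help document, goal nodes g capture user intents, and
# action nodes a correspond to executable instructions. Every node also
# carries a system action (a prompt or a command to execute).

@dataclass
class DialogNode:
    node_id: str
    kind: str                                   # "topic", "goal", or "action"
    text: str                                   # topic/goal title or instruction text
    system_action: str                          # what the system says or executes here
    children: List["DialogNode"] = field(default_factory=list)

def build_example_tree() -> DialogNode:
    """Builds a tiny dialog tree: one topic, one goal, two action nodes."""
    a1 = DialogNode("a1", "action", "Click Insert > Page Number", "execute: insert page number")
    a2 = DialogNode("a2", "action", "Choose a numbering format", "prompt: which format?")
    g1 = DialogNode("g1", "goal", "Add page numbers", "prompt: add page numbers?", [a1, a2])
    return DialogNode("t1", "topic", "Add and format page numbers", "prompt: which goal?", [g1])
```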

19 Finally, each node has an associated system action as, which can prompt user input (e. [sent-58, score-0.53]

20 Section 3 presents a scalable approach for inducing dialog trees. [sent-63, score-0.794]

21 Dialog Management To understand user intent and provide task assistance, we need a dialog management approach that specifies what the system should do and say. [sent-64, score-1.148]

22 We adopt a simple approach that at all times maintains an index into a node in a dialog tree. [sent-65, score-0.9]

23 Specifically, we learn classifiers that, given the dialog interaction history, predict how to pick the next tree node from the space of all nodes in the dialog trees that define the task knowledge. [sent-78, score-1.872]
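
A minimal sketch of this management loop, assuming a learned classifier can score every candidate node given the latest utterance and the interaction history; `score_node`, the node dictionaries and `get_user_utterance` are hypothetical stand-ins, not the paper's API.

```python
from typing import Callable, Dict, List, Optional

def manage_dialog(all_nodes: List[Dict[str, str]],
                  score_node: Callable[[Dict[str, str], str, List[str]], float],
                  get_user_utterance: Callable[[], Optional[str]]) -> None:
    """Keeps an index into the current node and re-predicts it after every user turn."""
    history: List[str] = []
    while True:
        utterance = get_user_utterance()
        if utterance is None:                   # end of the interaction
            break
        # Pick the next node from the space of ALL nodes in all dialog trees.
        current = max(all_nodes, key=lambda n: score_node(n, utterance, history))
        print(current["system_action"])         # prompt the user or execute a command
        history.append(utterance)
```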

24 The resulting approach allows for significant flexibility in traversing the dialog trees. [sent-81, score-0.794]

25 We found that queries longer than 4-5 words often resembled natural language utterances that could be used for dialog interactions. [sent-83, score-0.899]

26 Figure 2: An example instructional text paired with a section of the corresponding dialog tree. [sent-85, score-0.934]

27 We also collected instructional texts from the web pages that describe how to solve 76 of the most pressing user goals, as indicated by query click log statistics. [sent-87, score-0.531]

28 3 Building Dialog Trees from Instructions Our first problem is to convert sets of instructions for user goals to dialog trees, as shown in Figure 2. [sent-90, score-1.295]

29 In addition, we manually associate each node in a dialog tree with a training set of 10 queries. [sent-92, score-0.953]

30 Given queries q1 . . . qm per topic ti, our goals are as follows: associate queries and topics from OHP with goals, instructions and dialog trees. [sent-117, score-1.172]

31 from topics, goals and instructions, construct dialog trees f1 . . . fn. [sent-124, score-0.973]

32 Classify instructions into user interaction types, thereby identifying system action nodes as1. [sent-128, score-0.641]

33 Figure 2 (left) presents an example of a topic extracted from the help page, and a set of goals and instructions annotated with user action types. [sent-136, score-0.74]

34 In the next few sections of the paper, we outline an overall system component design demonstrating how queries and topics are mapped to the dialog trees in Figure 3. [sent-137, score-0.95]

35 The figure shows many-to-one relations between queries and topics, one-to-many relations between topics and goals, goals and instructions, and one-to-one relations between topics and dialog trees. [sent-138, score-1.082]

36 Experimental Setup As described in Section 2, our dataset consists of 76 goals grouped into 30 topics (average 2-3 goals per topic) for a total of 246 instructions (average 3 instructions per goal). [sent-147, score-0.647]

37 We manually label all instructions with user action au categories. [sent-148, score-0.569]

38 The example instructions with corresponding user action labels are shown in Figure 2 (left). [sent-154, score-0.518]

39 Finally, for every topic we automatically construct a dialog tree as shown in Figure 2 (right). [sent-177, score-0.916]

40 The dialog tree includes a topic t1 with goals g1 . [sent-178, score-1.069]

41 A dialog tree encodes a user-system dialog flow about a topic ti, represented as a directed unweighted graph fi = (V, E). [sent-183, score-1.748]

42 For example, in the dialog tree in Figure 2 there is a relation t1 → g4 between the topic t1 "add and format page numbers" and the goal g4 "include page of page X of Y with the page number". [sent-201, score-1.281]

43 Moreover, in the dialog tree, the topic level node has one index i ∈ [1. [sent-202, score-0.969]

44 4 Understanding Initial Queries This section presents a model for classifying initial user queries to nodes in a dialog tree, which allows for a variety of different types of queries. [sent-218, score-1.132]

45 Figure 4: Mapping initial user queries to nodes at different depths in a dialog tree. [sent-226, score-1.175]

46 Problem Definition Given an initial query, the dialog system initializes to a state s0, searches for the deepest relevant node given a query, and maps the query to a node on a topic ti, goal gj or action ak level in the dialog tree fi, as shown in Figure 4. [sent-227, score-2.505]

47 More formally, as input, we are given automatically constructed dialog trees f1. [sent-228, score-0.82]

48 fn for instructional text (help pages) annotated with topic, goal and action nodes and associated with system actions as shown in Figure 2 (right). [sent-231, score-0.528]

49 From the query logs, we associate queries with each node type: topic qt, goal qg and action qa. [sent-232, score-0.628]

50 We join these dialog trees representing different topics into a dialog network by introducing a global root. [sent-234, score-1.683]

51 Within the network, we aim to find (1) an initial dialog state s0 that maximizes the probability of the state given a query, p(s0 | q, θ); and (2) the deepest relevant node v ∈ V at topic ti, goal gj or action ak depth in the tree. [sent-235, score-1.465]

52 Initial Dialog State Model We aim to predict the best node in a dialog tree ti, gj, al ∈ V based on a user query q. [sent-236, score-1.337]

53 A query-to-node mapping is encoded as an initial dialog state s0 represented by a [sent-237, score-0.902]

54 binary vector over all nodes in the dialog network: s0 = [t1, g1, . . . ]. [sent-240, score-0.863]

55 We employ a log-linear model and try to maximize the initial dialog state distribution over the space of all nodes in a dialog network: p(s0 | q, θ) = exp(Σi θi φi(s0, q)) / Σs0′ exp(Σi θi φi(s0′, q)) (3). Optimization follows Eq. [sent-248, score-1.765]
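
A sketch of the log-linear scoring in Eq. (3): a softmax over all candidate nodes, assuming a feature function and weight vector are supplied (both hypothetical here).

```python
import math
from typing import Callable, Dict, Sequence

def initial_state_distribution(candidates: Sequence[str],
                               query: str,
                               featurize: Callable[[str, str], Dict[str, float]],
                               theta: Dict[str, float]) -> Dict[str, float]:
    """p(s0 | q, theta) proportional to exp(sum_i theta_i * phi_i(s0, q)), as in Eq. (3)."""
    scores = [sum(theta.get(f, 0.0) * v for f, v in featurize(node, query).items())
              for node in candidates]
    m = max(scores)                             # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return {node: e / z for node, e in zip(candidates, exps)}

# The predicted initial state s0 is the argmax of the returned distribution.
```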

56 Lexical features included query ngrams (up to 3-grams) associated with every node in a dialog tree, with stopwords removed and query unigrams stemmed. [sent-251, score-1.268]

57 Table 2: Initial dialog state classification results where L stands for lexical features, 10TFIDF - 10 best tf-idf scores, PO - prompt overlap, QO - query overlap, and QHistO - query history overlap. [sent-273, score-1.221]

58 tf-idf scores, query ngram overlap with the topic and goal descriptions, as well as system action prompts, and query ngram overlap with a history including queries from parent nodes. [sent-274, score-0.89]
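
A rough sketch of such overlap features: query n-gram overlap with the queries attached to a node, its system prompt, and the queries of its parent nodes. The names loosely follow Table 2's abbreviations; the paper's exact definitions may differ.

```python
from typing import Dict, List, Set, Tuple

def ngrams(text: str, n_max: int = 3) -> Set[Tuple[str, ...]]:
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for n in range(1, n_max + 1)
            for i in range(len(toks) - n + 1)}

def overlap(query: str, other: str) -> float:
    """Fraction of query n-grams that also occur in the other text."""
    q, o = ngrams(query), ngrams(other)
    return len(q & o) / len(q) if q else 0.0

def overlap_features(query: str, node_queries: List[str], system_prompt: str,
                     parent_queries: List[str]) -> Dict[str, float]:
    return {
        "PO": overlap(query, system_prompt),                                        # prompt overlap
        "QO": max((overlap(query, nq) for nq in node_queries), default=0.0),        # overlap with the node's queries
        "QHistO": max((overlap(query, pq) for pq in parent_queries), default=0.0),  # overlap with parent-node queries
    }
```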

59 Experimental Setup For each dialog tree, nodes corresponding to single instructions were hand-annotated with a small set of user queries, as described in Section 3. [sent-275, score-1.211]

60 Results The initial dialog state classification model of finding a single node given an initial query is described in Eq. 3. [sent-277, score-1.171]

61 We chose two simple baselines: (1) randomly select a node in a dialog network and (2) use a tf-idf 1-best match. Stemming, stopword removal, and including the top 10 tf-idf results as features led to a 19% increase in accuracy at the action node level over baseline (2). [sent-279, score-1.216]

62 Adding the following features led to an overall 26% improvement: query overlap with a system prompt (PO), query overlap with other node queries (QO), and query overlap with its parent queries (QHistO). [sent-280, score-0.939]

63 For nodes deeper in the network, the task of mapping a user query to an action becomes more challenging. [sent-282, score-0.572]

64 We use cosine similarity to rank all nodes in a dialog network and select the node with the highest rank. [sent-285, score-1.009]
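
A minimal sketch of that cosine-similarity baseline over bag-of-words vectors; plain term counts stand in for tf-idf weights, which is a simplification.

```python
import math
from collections import Counter
from typing import Dict, List, Tuple

def cosine(u: Dict[str, int], v: Dict[str, int]) -> float:
    dot = sum(c * v.get(w, 0) for w, c in u.items())
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def best_node(query: str, nodes: List[Tuple[str, str]]) -> str:
    """Return the id of the node whose text is most similar to the query.

    `nodes` is a list of (node_id, node_text) pairs covering the whole network.
    """
    qv = Counter(query.lower().split())
    node_id, _ = max(nodes, key=lambda n: cosine(qv, Counter(n[1].lower().split())))
    return node_id
```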

65 understate the utility of the resulting dialog system. [sent-286, score-0.794]

66 As long as a misclassification results in assignment to a node that is too high within the correct dialog tree, the user will experience a graceful failure: they may be forced to answer some redundant questions, but they will still be able to accomplish the task. [sent-288, score-1.140]

67 5 Understanding Query Refinements We also developed a classifier model for mapping followup queries to the nodes in a dialog network, while maintaining a dialog state that summarizes the history of the current interaction. [sent-289, score-1.841]

68 Problem Definition Similar to the problem definition in Section 4, we are given a network of dialog trees f1. [sent-290, score-0.881]

69 fn and a query q0, but in addition we are given the previous dialog state s, which contains the previous user utterance q and the last system action as. [sent-293, score-1.407]

70 We aim to find a new dialog state s0 that pairs a node from the dialog tree with updated history information, thereby undergoing a dialog state update. [sent-294, score-2.734]

71 We learn a linear classifier that models p(s0|q0, q, as, θ), the dialog state update distribution, where we constrain the new state s0 to contain the new utterance q0 we are interpreting. [sent-295, score-1.003]

72 An append action defines a dialog state update when transitioning from a node to its children at any depth in the same dialog tree e. [sent-298, score-2.175]

73 An override action defines a dialog state update when transitioning from a goal to its sibling node. [sent-306, score-1.245]

74 It could also be from an action node to another in its parent's sibling node in the same dialog tree e. [sent-307, score-1.123]

75 z (from an action node to another action node in a different goal in the same dialog tree) etc. [sent-315, score-1.305]

76 A reset action defines a dialog state update when transitioning from a node in a current dialog tree to any other node at any depth in a dialog tree other than the current dialog tree e. [sent-317, score-3.93]

77 z must be to a different goal or an action node in a different goal but in the same dialog tree. [sent-321, score-1.2]

78 Finally, a reset action should be used when the user’s intent is to restart the dialog (e. [sent-331, score-0.974]

79 and reset dialog state update actions in Table 3. [sent-336, score-1.046]

80 Figure 5 illustrates examples of append, override and reset dialog state updates. [sent-337, score-1.022]
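
One way to read the three update types is as a function of where the new node sits relative to the current one. The sketch below categorizes a transition under the simplifying assumption that each node's tree id and ancestor set are precomputed; both helpers are hypothetical.

```python
from typing import Dict, Set

def update_type(current: str, new: str,
                tree_of: Dict[str, str],
                ancestors: Dict[str, Set[str]]) -> str:
    """Classify a transition between two node ids as append, override, or reset."""
    if tree_of[new] != tree_of[current]:
        return "reset"       # jump into a different dialog tree
    if current in ancestors[new]:
        return "append"      # move down to a descendant in the same tree
    return "override"        # move sideways (e.g., to a sibling goal) in the same tree
```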

81 Dialog State Update Model We use a log-linear model to maximize a dialog state distribution over the space of all nodes in a dialog network: p(s0 | q0, q, as, θ) = exp(Σi θi φi(s0, q0, as, q)) / Σs0′ exp(Σi θi φi(s0′, q0, as, q)) (4). Optimization is done as described in Section 3. [sent-339, score-1.743]
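
A sketch of a feature function over the tuple (s0, q0, as, q) scored by the update model in Eq. (4); the feature names only loosely echo Table 4's abbreviations, and the exact definitions are assumptions.

```python
from typing import Dict, List

def token_overlap(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta) if ta else 0.0

def update_features(candidate_queries: List[str], candidate_prompt: str,
                    new_query: str, prev_query: str,
                    last_system_prompt: str) -> Dict[str, float]:
    """Features for scoring a candidate new state s0 given (q0, q, as)."""
    return {
        "Q": max((token_overlap(new_query, q) for q in candidate_queries), default=0.0),
        "P": token_overlap(new_query, candidate_prompt),
        "SQ": token_overlap(new_query, prev_query),
        "AsQ": token_overlap(new_query, last_system_prompt),
    }
```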

82 Experimental Setup Ideally, dialog systems should be evaluated relative to large volumes of real user interaction data. [sent-340, score-1.016]

83 Our query log data, however, does not include dialog turns, and so we turn to simulated user behavior to test our system. [sent-341, score-1.127]

84 To define a state s we sample a query q from a set of queries per node v and get a corresponding system action as for this node; to define a state s0, we sample a new query q0 from another node v0 ∈ V, v ≠ v0, which is sampled using a prior probability biased towards append: p(append)=0. [sent-345, score-0.937]

85 This prior distribution defines a dialog strategy where the user primarily continues the current goal and rarely resets. [sent-349, score-1.051]
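
A sketch of the simulation step described here: sample an update type from a prior biased towards append, then sample a target node of that type and one of its queries. The prior values below are placeholders, since the actual probabilities are truncated in the extracted text.

```python
import random
from typing import Dict, List, Tuple

PRIOR = {"append": 0.7, "override": 0.2, "reset": 0.1}   # placeholder values, not the paper's

def sample_turn(candidates_by_type: Dict[str, List[str]],
                queries_per_node: Dict[str, List[str]],
                rng: random.Random) -> Tuple[str, str, str]:
    """Returns (update_type, next_node_id, simulated_user_query)."""
    update = rng.choices(list(PRIOR), weights=list(PRIOR.values()), k=1)[0]
    node = rng.choice(candidates_by_type[update])
    query = rng.choice(queries_per_node[node])
    return update, node, query
```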

86 Results Table 4 reports results for dialog state updates for topic, goal and action nodes. [sent-352, score-1.15]

87 We also report performance for two types of dialog updates such as: append (App. [sent-353, score-0.923]

88 Table 4: Dialog state updates classification accuracies where L stands for lexical features, Q - query overlap, P - prompt overlap, SQ - previous state query overlap, S0Q - new state query overlap, S0ParQ - new state parent query overlap. [sent-365, score-1.001]

89 6 The Complete Dialog System Following the overall setup described in Section 2, we integrate the learned models into a complete dialog system. [sent-366, score-0.815]

90 7 Related work To the best of our knowledge, this paper presents the first effort to induce full procedural dialog systems from instructional text and query click logs. [sent-395, score-1.206]

91 Dialog Generation from Text Similarly to Piwek’s work (2007; 2010; 2011), we study extracting dialog knowledge from documents (monologues or instructions). [sent-406, score-0.794]

92 There is no model of dialog management or user interaction, and the approach does not use any machine learning. [sent-408, score-1.013]

93 In contrast, to the best of our knowledge, we are the first to demonstrate it is possible to learn complete, interactive dialog systems using instructional texts (and nonexpert annotation). [sent-409, score-0.934]

94 Dialog Modeling and User Simulation Many existing dialog systems learn dialog strategies from user interactions (Young, 2010; Rieser and Lemon, 2008). [sent-417, score-1.807]

95 Moreover, dialog data is often limited and, therefore, user simulation is commonly used (Scheffler and Young, 2002; Schatzmann et al. [sent-418, score-1.013]

96 Our overall approach is also related to many other dialog management approaches, including those that construct dialog graphs from dialog data via clustering (Lee et al. [sent-421, score-2.409]

97 , 2009), optimize dialog strategy using reinforcement learning (RL) (Scheffler and Young, 2002; Rieser and Lemon, 2008), or combine RL with information state update rules (Heeman, 2007). [sent-424, score-0.945]

98 8 Conclusions and Future Work This paper presented a novel approach for automatically constructing procedural dialog systems with light supervision, given only textual resources such as instructional text and search query click logs. [sent-426, score-1.206]

99 Although we showed it is possible to build complete systems, more work will be required to scale the approach to new domains, scale the complexity of the dialog manager, and explore the range of possible textual knowledge sources that could be incorporated. [sent-428, score-0.815]

100 A discriminative classification-based approach to information state updates for a multi-domain dialog system. [sent-509, score-0.915]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('dialog', 0.794), ('user', 0.192), ('action', 0.17), ('instructions', 0.156), ('goals', 0.153), ('query', 0.141), ('instructional', 0.14), ('intent', 0.111), ('node', 0.106), ('append', 0.094), ('state', 0.086), ('footer', 0.078), ('queries', 0.077), ('page', 0.075), ('override', 0.073), ('procedural', 0.073), ('reset', 0.069), ('topic', 0.069), ('nodes', 0.069), ('goal', 0.065), ('logs', 0.063), ('branavan', 0.06), ('actions', 0.06), ('click', 0.058), ('overlap', 0.058), ('tree', 0.053), ('gj', 0.051), ('au', 0.051), ('piwek', 0.049), ('intents', 0.045), ('scheffler', 0.045), ('traum', 0.045), ('dialogue', 0.043), ('insert', 0.042), ('network', 0.04), ('qo', 0.039), ('ti', 0.038), ('prompt', 0.038), ('update', 0.037), ('updates', 0.035), ('instruction', 0.034), ('qhisto', 0.033), ('steinhauser', 0.033), ('office', 0.033), ('ngrams', 0.033), ('header', 0.033), ('dialogs', 0.033), ('artzi', 0.033), ('virtual', 0.033), ('asli', 0.03), ('celikyilmaz', 0.03), ('luke', 0.03), ('interaction', 0.03), ('georgila', 0.03), ('rieser', 0.03), ('dzikovska', 0.03), ('zettlemoyer', 0.029), ('topics', 0.029), ('reinforcement', 0.028), ('utterances', 0.028), ('simulation', 0.027), ('schatzmann', 0.027), ('interactions', 0.027), ('users', 0.027), ('management', 0.027), ('trees', 0.026), ('redundant', 0.026), ('dilek', 0.025), ('assistance', 0.024), ('system', 0.024), ('ak', 0.024), ('museum', 0.023), ('aggarwal', 0.023), ('gk', 0.023), ('ngram', 0.023), ('microsoft', 0.023), ('manager', 0.023), ('caine', 0.022), ('forbell', 0.022), ('gerten', 0.022), ('graceful', 0.022), ('gwendolyn', 0.022), ('katsamanis', 0.022), ('kushman', 0.022), ('leanne', 0.022), ('priti', 0.022), ('rizzo', 0.022), ('initial', 0.022), ('definition', 0.021), ('history', 0.021), ('depth', 0.021), ('complete', 0.021), ('il', 0.021), ('po', 0.02), ('questions', 0.02), ('asked', 0.02), ('parent', 0.02), ('transitioning', 0.02), ('jillian', 0.02), ('charlie', 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 230 acl-2013-Lightly Supervised Learning of Procedural Dialog Systems

Author: Svitlana Volkova ; Pallavi Choudhury ; Chris Quirk ; Bill Dolan ; Luke Zettlemoyer

Abstract: Procedural dialog systems can help users achieve a wide range of goals. However, such systems are challenging to build, currently requiring manual engineering of substantial domain-specific task knowledge and dialog management strategies. In this paper, we demonstrate that it is possible to learn procedural dialog systems given only light supervision, of the type that can be provided by non-experts. We consider domains where the required task knowledge exists in textual form (e.g., instructional web pages) and where system builders have access to statements of user intent (e.g., search query logs or dialog interactions). To learn from such textual resources, we describe a novel approach that first automatically extracts task knowledge from instructions, then learns a dialog manager over this task knowledge to provide assistance. Evaluation in a Microsoft Office domain shows that the individual components are highly accurate and can be integrated into a dialog system that provides effective help to users.

2 0.54109764 124 acl-2013-Discriminative state tracking for spoken dialog systems

Author: Angeliki Metallinou ; Dan Bohus ; Jason Williams

Abstract: In spoken dialog systems, statistical state tracking aims to improve robustness to speech recognition errors by tracking a posterior distribution over hidden dialog states. Current approaches based on generative or discriminative models have different but important shortcomings that limit their accuracy. In this paper we discuss these limitations and introduce a new approach for discriminative state tracking that overcomes them by leveraging the problem structure. An offline evaluation with dialog data collected from real users shows improvements in both state tracking accuracy and the quality of the posterior probabilities. Features that encode speech recognition error patterns are particularly helpful, and training requires relatively few dialogs.

3 0.29800847 168 acl-2013-Generating Recommendation Dialogs by Extracting Information from User Reviews

Author: Kevin Reschke ; Adam Vogel ; Dan Jurafsky

Abstract: Recommendation dialog systems help users navigate e-commerce listings by asking questions about users’ preferences toward relevant domain attributes. We present a framework for generating and ranking fine-grained, highly relevant questions from user-generated reviews. We demonstrate our approach on a new dataset just released by Yelp, and release a new sentiment lexicon with 1329 adjectives for the restaurant domain.

4 0.095012859 183 acl-2013-ICARUS - An Extensible Graphical Search Tool for Dependency Treebanks

Author: Markus Gartner ; Gregor Thiele ; Wolfgang Seeker ; Anders Bjorkelund ; Jonas Kuhn

Abstract: We present ICARUS, a versatile graphical search tool to query dependency treebanks. Search results can be inspected both quantitatively and qualitatively by means of frequency lists, tables, or dependency graphs. ICARUS also ships with plugins that enable it to interface with tool chains running either locally or remotely.

5 0.094468772 141 acl-2013-Evaluating a City Exploration Dialogue System with Integrated Question-Answering and Pedestrian Navigation

Author: Srinivasan Janarthanam ; Oliver Lemon ; Phil Bartie ; Tiphaine Dalmas ; Anna Dickinson ; Xingkun Liu ; William Mackaness ; Bonnie Webber

Abstract: We present a city navigation and tourist information mobile dialogue app with integrated question-answering (QA) and geographic information system (GIS) modules that helps pedestrian users to navigate in and learn about urban environments. In contrast to existing mobile apps which treat these problems independently, our Android app addresses the problem of navigation and touristic questionanswering in an integrated fashion using a shared dialogue context. We evaluated our system in comparison with Samsung S-Voice (which interfaces to Google navigation and Google search) with 17 users and found that users judged our system to be significantly more interesting to interact with and learn from. They also rated our system above Google search (with the Samsung S-Voice interface) for tourist information tasks.

6 0.083122551 36 acl-2013-Adapting Discriminative Reranking to Grounded Language Learning

7 0.075352147 55 acl-2013-Are Semantically Coherent Topic Models Useful for Ad Hoc Information Retrieval?

8 0.073689483 132 acl-2013-Easy-First POS Tagging and Dependency Parsing with Beam Search

9 0.073265575 99 acl-2013-Crowd Prefers the Middle Path: A New IAA Metric for Crowdsourcing Reveals Turker Biases in Query Segmentation

10 0.069118291 273 acl-2013-Paraphrasing Adaptation for Web Search Ranking

11 0.067824587 285 acl-2013-Propminer: A Workflow for Interactive Information Extraction and Exploration using Dependency Trees

12 0.067351095 155 acl-2013-Fast and Accurate Shift-Reduce Constituent Parsing

13 0.066117004 107 acl-2013-Deceptive Answer Prediction with User Preference Graph

14 0.063893206 90 acl-2013-Conditional Random Fields for Responsive Surface Realisation using Global Features

15 0.060018454 272 acl-2013-Paraphrase-Driven Learning for Open Question Answering

16 0.059269927 266 acl-2013-PAL: A Chatterbot System for Answering Domain-specific Questions

17 0.05534393 315 acl-2013-Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression

18 0.053948011 121 acl-2013-Discovering User Interactions in Ideological Discussions

19 0.051268842 290 acl-2013-Question Analysis for Polish Question Answering

20 0.051156174 231 acl-2013-Linggle: a Web-scale Linguistic Search Engine for Words in Context


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.133), (1, 0.046), (2, -0.041), (3, -0.033), (4, 0.003), (5, -0.001), (6, 0.148), (7, -0.146), (8, 0.013), (9, 0.019), (10, -0.026), (11, 0.074), (12, -0.013), (13, 0.017), (14, 0.07), (15, -0.104), (16, -0.067), (17, 0.107), (18, 0.035), (19, -0.106), (20, -0.21), (21, -0.075), (22, 0.156), (23, 0.096), (24, 0.267), (25, -0.194), (26, 0.096), (27, 0.406), (28, -0.049), (29, -0.157), (30, -0.093), (31, 0.086), (32, 0.166), (33, 0.016), (34, -0.134), (35, -0.029), (36, 0.017), (37, 0.024), (38, 0.171), (39, -0.072), (40, 0.02), (41, -0.012), (42, -0.022), (43, -0.014), (44, -0.035), (45, -0.101), (46, -0.067), (47, 0.039), (48, 0.038), (49, 0.037)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.963696 230 acl-2013-Lightly Supervised Learning of Procedural Dialog Systems

Author: Svitlana Volkova ; Pallavi Choudhury ; Chris Quirk ; Bill Dolan ; Luke Zettlemoyer

Abstract: Procedural dialog systems can help users achieve a wide range of goals. However, such systems are challenging to build, currently requiring manual engineering of substantial domain-specific task knowledge and dialog management strategies. In this paper, we demonstrate that it is possible to learn procedural dialog systems given only light supervision, of the type that can be provided by non-experts. We consider domains where the required task knowledge exists in textual form (e.g., instructional web pages) and where system builders have access to statements of user intent (e.g., search query logs or dialog interactions). To learn from such textual resources, we describe a novel approach that first automatically extracts task knowledge from instructions, then learns a dialog manager over this task knowledge to provide assistance. Evaluation in a Microsoft Office domain shows that the individual components are highly accurate and can be integrated into a dialog system that provides effective help to users.

2 0.91377616 124 acl-2013-Discriminative state tracking for spoken dialog systems

Author: Angeliki Metallinou ; Dan Bohus ; Jason Williams

Abstract: In spoken dialog systems, statistical state tracking aims to improve robustness to speech recognition errors by tracking a posterior distribution over hidden dialog states. Current approaches based on generative or discriminative models have different but important shortcomings that limit their accuracy. In this paper we discuss these limitations and introduce a new approach for discriminative state tracking that overcomes them by leveraging the problem structure. An offline evaluation with dialog data collected from real users shows improvements in both state tracking accuracy and the quality of the posterior probabilities. Features that encode speech recognition error patterns are particularly helpful, and training requires relatively few dialogs.

3 0.7153098 168 acl-2013-Generating Recommendation Dialogs by Extracting Information from User Reviews

Author: Kevin Reschke ; Adam Vogel ; Dan Jurafsky

Abstract: Recommendation dialog systems help users navigate e-commerce listings by asking questions about users’ preferences toward relevant domain attributes. We present a framework for generating and ranking fine-grained, highly relevant questions from user-generated reviews. We demonstrate our approach on a new dataset just released by Yelp, and release a new sentiment lexicon with 1329 adjectives for the restaurant domain.

4 0.49659693 141 acl-2013-Evaluating a City Exploration Dialogue System with Integrated Question-Answering and Pedestrian Navigation

Author: Srinivasan Janarthanam ; Oliver Lemon ; Phil Bartie ; Tiphaine Dalmas ; Anna Dickinson ; Xingkun Liu ; William Mackaness ; Bonnie Webber

Abstract: We present a city navigation and tourist information mobile dialogue app with integrated question-answering (QA) and geographic information system (GIS) modules that helps pedestrian users to navigate in and learn about urban environments. In contrast to existing mobile apps which treat these problems independently, our Android app addresses the problem of navigation and touristic questionanswering in an integrated fashion using a shared dialogue context. We evaluated our system in comparison with Samsung S-Voice (which interfaces to Google navigation and Google search) with 17 users and found that users judged our system to be significantly more interesting to interact with and learn from. They also rated our system above Google search (with the Samsung S-Voice interface) for tourist information tasks.

5 0.36828834 90 acl-2013-Conditional Random Fields for Responsive Surface Realisation using Global Features

Author: Nina Dethlefs ; Helen Hastie ; Heriberto Cuayahuitl ; Oliver Lemon

Abstract: Surface realisers in spoken dialogue systems need to be more responsive than conventional surface realisers. They need to be sensitive to the utterance context as well as robust to partial or changing generator inputs. We formulate surface realisation as a sequence labelling task and combine the use of conditional random fields (CRFs) with semantic trees. Due to their extended notion of context, CRFs are able to take the global utterance context into account and are less constrained by local features than other realisers. This leads to more natural and less repetitive surface realisation. It also allows generation from partial and modified inputs and is therefore applicable to incremental surface realisation. Results from a human rating study confirm that users are sensitive to this extended notion of context and assign ratings that are significantly higher (up to 14%) than those for taking only local context into account.

6 0.35821947 176 acl-2013-Grounded Unsupervised Semantic Parsing

7 0.33592328 183 acl-2013-ICARUS - An Extensible Graphical Search Tool for Dependency Treebanks

8 0.30571371 373 acl-2013-Using Conceptual Class Attributes to Characterize Social Media Users

9 0.30224699 266 acl-2013-PAL: A Chatterbot System for Answering Domain-specific Questions

10 0.28435075 100 acl-2013-Crowdsourcing Interaction Logs to Understand Text Reuse from the Web

11 0.27117196 268 acl-2013-PATHS: A System for Accessing Cultural Heritage Collections

12 0.26580417 36 acl-2013-Adapting Discriminative Reranking to Grounded Language Learning

13 0.24002528 190 acl-2013-Implicatures and Nested Beliefs in Approximate Decentralized-POMDPs

14 0.2394743 313 acl-2013-Semantic Parsing with Combinatory Categorial Grammars

15 0.23771605 271 acl-2013-ParaQuery: Making Sense of Paraphrase Collections

16 0.2373601 285 acl-2013-Propminer: A Workflow for Interactive Information Extraction and Exploration using Dependency Trees

17 0.23332497 321 acl-2013-Sign Language Lexical Recognition With Propositional Dynamic Logic

18 0.22598876 133 acl-2013-Efficient Implementation of Beam-Search Incremental Parsers

19 0.22125937 325 acl-2013-Smoothed marginal distribution constraints for language modeling

20 0.21733807 239 acl-2013-Meet EDGAR, a tutoring agent at MONSERRATE


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.036), (6, 0.057), (11, 0.053), (15, 0.018), (24, 0.074), (26, 0.048), (28, 0.014), (35, 0.082), (40, 0.011), (42, 0.036), (48, 0.035), (57, 0.21), (64, 0.018), (70, 0.062), (83, 0.013), (88, 0.039), (90, 0.019), (95, 0.047)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.83311415 230 acl-2013-Lightly Supervised Learning of Procedural Dialog Systems

Author: Svitlana Volkova ; Pallavi Choudhury ; Chris Quirk ; Bill Dolan ; Luke Zettlemoyer

Abstract: Procedural dialog systems can help users achieve a wide range of goals. However, such systems are challenging to build, currently requiring manual engineering of substantial domain-specific task knowledge and dialog management strategies. In this paper, we demonstrate that it is possible to learn procedural dialog systems given only light supervision, of the type that can be provided by non-experts. We consider domains where the required task knowledge exists in textual form (e.g., instructional web pages) and where system builders have access to statements of user intent (e.g., search query logs or dialog interactions). To learn from such textual resources, we describe a novel approach that first automatically extracts task knowledge from instructions, then learns a dialog manager over this task knowledge to provide assistance. Evaluation in a Microsoft Office domain shows that the individual components are highly accurate and can be integrated into a dialog system that provides effective help to users.

2 0.77718991 325 acl-2013-Smoothed marginal distribution constraints for language modeling

Author: Brian Roark ; Cyril Allauzen ; Michael Riley

Abstract: We present an algorithm for re-estimating parameters of backoff n-gram language models so as to preserve given marginal distributions, along the lines of wellknown Kneser-Ney (1995) smoothing. Unlike Kneser-Ney, our approach is designed to be applied to any given smoothed backoff model, including models that have already been heavily pruned. As a result, the algorithm avoids issues observed when pruning Kneser-Ney models (Siivola et al., 2007; Chelba et al., 2010), while retaining the benefits of such marginal distribution constraints. We present experimental results for heavily pruned backoff ngram models, and demonstrate perplexity and word error rate reductions when used with various baseline smoothing methods. An open-source version of the algorithm has been released as part of the OpenGrm ngram library.1

3 0.76812083 23 acl-2013-A System for Summarizing Scientific Topics Starting from Keywords

Author: Rahul Jha ; Amjad Abu-Jbara ; Dragomir Radev

Abstract: In this paper, we investigate the problem of automatic generation of scientific surveys starting from keywords provided by a user. We present a system that can take a topic query as input and generate a survey of the topic by first selecting a set of relevant documents, and then selecting relevant sentences from those documents. We discuss the issues of robust evaluation of such systems and describe an evaluation corpus we generated by manually extracting factoids, or information units, from 47 gold standard documents (surveys and tutorials) on seven topics in Natural Language Processing. We have manually annotated 2,625 sentences with these factoids (around 375 sentences per topic) to build an evaluation corpus for this task. We present evaluation results for the performance of our system using this annotated data.

4 0.72371089 272 acl-2013-Paraphrase-Driven Learning for Open Question Answering

Author: Anthony Fader ; Luke Zettlemoyer ; Oren Etzioni

Abstract: We study question answering as a machine learning problem, and induce a function that maps open-domain questions to queries over a database of web extractions. Given a large, community-authored, question-paraphrase corpus, we demonstrate that it is possible to learn a semantic lexicon and linear ranking function without manually annotating questions. Our approach automatically generalizes a seed lexicon and includes a scalable, parallelized perceptron parameter estimation scheme. Experiments show that our approach more than quadruples the recall of the seed lexicon, with only an 8% loss in precision.

5 0.61190277 318 acl-2013-Sentiment Relevance

Author: Christian Scheible ; Hinrich Schutze

Abstract: A number of different notions, including subjectivity, have been proposed for distinguishing parts of documents that convey sentiment from those that do not. We propose a new concept, sentiment relevance, to make this distinction and argue that it better reflects the requirements of sentiment analysis systems. We demonstrate experimentally that sentiment relevance and subjectivity are related, but different. Since no large amount of labeled training data for our new notion of sentiment relevance is available, we investigate two semi-supervised methods for creating sentiment relevance classifiers: a distant supervision approach that leverages structured information about the domain of the reviews; and transfer learning on feature representations based on lexical taxonomies that enables knowledge transfer. We show that both methods learn sentiment relevance classifiers that perform well.

6 0.61061025 169 acl-2013-Generating Synthetic Comparable Questions for News Articles

7 0.60716826 2 acl-2013-A Bayesian Model for Joint Unsupervised Induction of Sentiment, Aspect and Discourse Representations

8 0.60632658 358 acl-2013-Transition-based Dependency Parsing with Selectional Branching

9 0.60581768 377 acl-2013-Using Supervised Bigram-based ILP for Extractive Summarization

10 0.60351372 185 acl-2013-Identifying Bad Semantic Neighbors for Improving Distributional Thesauri

11 0.60050404 224 acl-2013-Learning to Extract International Relations from Political Context

12 0.59947848 99 acl-2013-Crowd Prefers the Middle Path: A New IAA Metric for Crowdsourcing Reveals Turker Biases in Query Segmentation

13 0.59887677 83 acl-2013-Collective Annotation of Linguistic Resources: Basic Principles and a Formal Model

14 0.59821272 85 acl-2013-Combining Intra- and Multi-sentential Rhetorical Parsing for Document-level Discourse Analysis

15 0.59665471 346 acl-2013-The Impact of Topic Bias on Quality Flaw Prediction in Wikipedia

16 0.5960173 194 acl-2013-Improving Text Simplification Language Modeling Using Unsimplified Text Data

17 0.59600711 144 acl-2013-Explicit and Implicit Syntactic Features for Text Classification

18 0.59379536 176 acl-2013-Grounded Unsupervised Semantic Parsing

19 0.59327191 107 acl-2013-Deceptive Answer Prediction with User Preference Graph

20 0.59315687 158 acl-2013-Feature-Based Selection of Dependency Paths in Ad Hoc Information Retrieval