acl acl2010 acl2010-168 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Adam Vogel ; Dan Jurafsky
Abstract: We present a system that learns to follow navigational natural language directions. Where traditional models learn from linguistic annotation or word distributions, our approach is grounded in the world, learning by apprenticeship from routes through a map paired with English descriptions. Lacking an explicit alignment between the text and the reference path makes it difficult to determine what portions of the language describe which aspects of the route. We learn this correspondence with a reinforcement learning algorithm, using the deviation of the route we follow from the intended path as a reward signal. We demonstrate that our system successfully grounds the meaning of spatial terms like above and south into geometric properties of paths.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract We present a system that learns to follow navigational natural language directions. [sent-2, score-0.221]
2 Where traditional models learn from linguistic annotation or word distributions, our approach is grounded in the world, learning by apprenticeship from routes through a map paired with English descriptions. [sent-3, score-0.424]
3 Lacking an explicit alignment between the text and the reference path makes it difficult to determine what portions of the language describe which aspects of the route. [sent-4, score-0.245]
4 We learn this correspondence with a reinforcement learning algorithm, using the deviation of the route we follow from the intended path as a reward signal. [sent-5, score-0.698]
5 We demonstrate that our system successfully grounds the meaning of spatial terms like above and south into geometric properties of paths. [sent-6, score-0.489]
6 , 2004) must cope with the inherent ambiguity in spatial descriptions. [sent-10, score-0.371]
7 The semantics of imperative and spatial language is heavily dependent on the physical setting it is situated in, motivating automated learning approaches to acquiring meaning. [sent-11, score-0.544]
8 In contrast, we present an apprenticeship learning system which learns to imitate human instruction following, without linguistic annotation. [sent-13, score-0.637]
9 Solved using a reinforcement learning algorithm, our system acquires the meaning of spatial words through 1. [sent-14, score-0.564]
10 “you’re between springbok and highest viewpoint” (Figure 1: A path appears on the instruction giver’s map; the giver describes it to the instruction follower.) [sent-17, score-1.079]
11 This draws on the intuition that children learn to use spatial language through a mixture of observing adult language usage and situated interaction in the world, usually without explicit definitions (Tanz, 1980). [sent-19, score-0.449]
12 Our system learns to follow navigational directions in a route following task. [sent-20, score-0.375]
13 We use the HCRC Map Task corpus (Anderson et al., 1991), a collection of spoken dialogs describing paths to take through a map. [sent-22, score-0.111]
14 In this setting, two participants, the instruction giver and instruction follower, each have a map composed of named landmarks. [sent-23, score-1.424]
15 Furthermore, the instruction giver has a route drawn on her map, and it is her task to describe the path to the instruction follower, who cannot see the reference path. [sent-24, score-1.59]
16 Our system learns to interpret these navigational directions, without access to explicit linguistic annotation. [sent-25, score-0.221]
17 We frame direction following as an apprenticeship learning problem and solve it with a reinforcement learning algorithm, extending previous work on interpreting instructions by Branavan et al. [sent-26, score-0.465]
18 We learn a mapping from world state to action which most closely follows the reference route. [sent-30, score-0.229]
19 Our state space combines world and linguistic features, representing both our current position on the map and the communicative content of the utterances we are interpreting. [sent-31, score-0.334]
20 Using this reward signal as a form of supervision, we learn a policy to maximize the expected reward on unseen examples. [sent-33, score-0.509]
21 2 Related Work: Levit and Roy (2007) developed a spatial semantics for the Map Task corpus. [sent-34, score-0.405]
22 They represent instructions as Navigational Information Units, which decompose the meaning of an instruction into orthogonal constituents such as the reference object, the type of movement, and quantitative aspect. [sent-35, score-0.647]
23 For example, they represent the meaning of “move two inches toward the house” as a reference object (the house), a path descriptor (towards), and a quantitative aspect (two inches). [sent-36, score-0.338]
24 These representations are then combined to form a path through the map. [sent-37, score-0.159]
25 The semantics in our paper is simpler, eschewing quantitative aspects and path descriptors, and instead focusing on reference objects and frames of reference. [sent-39, score-0.328]
26 Learning to follow instructions by interacting with the world was recently introduced by Branavan et al. [sent-41, score-0.154]
27 Our reinforcement learning formulation follows closely from their work. [sent-43, score-0.211]
28 Their approach can incorporate expert supervision into the reward function in a similar manner to this paper, but is also able to learn effectively from environment feedback alone. [sent-44, score-0.367]
29 In the Map Task corpus we only observe expert route following behavior, but are not told how portions of the text correspond to parts of the path, leading to a difficult learning problem. [sent-46, score-0.274]
30 The semantics of spatial language has been studied for some time in the linguistics literature. [sent-47, score-0.405]
31 (Figure 2: The instruction giver and instruction follower face each other, and cannot see each other’s maps.) [sent-48, score-1.789]
32 Talmy (1983) classifies the way spatial meaning is encoded syntactically, and Fillmore (1997) studies spatial terms as a subset of deictic language, which depends heavily on non-linguistic context. [sent-49, score-0.397]
33 Levinson (2003) conducted a cross-linguistic semantic typology of spatial systems. [sent-50, score-0.371]
34 Levinson categorizes the frames of reference, or spatial coordinate systems,1 into egocentric and allocentric frames (Ex: “the road to the north of the house”, an allocentric description). [sent-51, score-0.415]
35 Levinson further classifies allocentric frames of reference into absolute, which includes the cardinal directions, and intrinsic, which refers to a featured side of an object, such as “the front of the car”. [sent-56, score-0.346]
36 The intrinsic frame of reference occurs rarely in the Map Task corpus and is ignored, as speakers tend not to mention features of the landmarks beyond their names. [sent-58, score-0.301]
37 Regier (1996) studied the learning of spatial language from static 2-D diagrams, learning to distinguish between terms with a connectionist model. [sent-59, score-0.433]
38 In contrast, we learn from whole texts paired with a path through a map. (Footnote 1: Not all languages exhibit all frames of reference.) [sent-61, score-0.12]
39 We use geometric features similar to Regier’s, capturing the allocentric frame of reference. [sent-65, score-0.193]
40 Spatial semantics have also been explored in physically grounded systems. [sent-66, score-0.137]
41 Kuipers (2000) developed the Spatial Semantic Hierarchy, a knowledge representation formalism for representing different levels of granularity in spatial knowledge. [sent-67, score-0.371]
42 More generally, apprenticeship learning is well studied in the reinforcement learning literature, where the goal is to mimic the behavior of an expert in some decision making domain. [sent-71, score-0.382]
43 The HCRC Map Task corpus (Anderson et al., 1991) is a set of dialogs between an instruction giver and an instruction follower. [sent-74, score-1.336]
44 Each participant has a map with small named landmarks. [sent-75, score-0.156]
45 Additionally, the instruction giver has a path drawn on her map, and must communicate this path to the instruction follower in natural language. [sent-76, score-1.708]
46 Figure 1 shows a portion of the instruction giver’s map and a sample of the instruction giver language which describes part of the path. [sent-77, score-1.381]
47 We restrict our attention to just the utterances of the instruction giver, ignoring the instruction follower. [sent-80, score-0.977]
48 This is to reduce redundancy and noise in the data: the instruction follower rarely introduces new information, instead asking for clarification or giving confirmation. [sent-81, score-0.618]
49 The landmarks on the instruction follower’s map sometimes differ in location from the instruction giver’s. [sent-82, score-1.397]
50 We ignore this caveat, giving the system access to the instruction giver’s landmarks, without the reference path. [sent-83, score-0.543]
51 Our task is to build an automated instruction follower. [sent-84, score-0.46]
52 Whereas the original participants could speak freely, our system does not have the ability to query the instruction giver and must instead rely only on the previously recorded dialogs. [sent-85, score-0.792]
53 4 Reinforcement Learning Formulation: We frame the direction following task as a sequential decision making problem. [sent-89, score-0.118]
54 We interpret utterances in order, where our interpretation is expressed by moving on the map. [sent-90, score-0.108]
55 Our goal is to construct a series of moves in the map which most closely matches the expert path. [sent-91, score-0.357]
56 We define intermediate steps in our interpretation as states in a set S, and interpretive steps as actions drawn from a set A. [sent-92, score-0.123]
57 To measure the fidelity of our path with respect to the expert, we define a reward function R : S × A → R+ which measures the utility of choosing a particular action in a particular state. [sent-93, score-0.596]
58 Executing action a in state s carries us to a new state s′, and we denote this transition function by s′ = T(s, a). [sent-94, score-0.444]
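To make this formulation concrete, here is a minimal Python sketch of an interpretation episode, written by us rather than taken from the paper; the function and argument names are illustrative assumptions.

```python
def run_episode(s0, policy, T, R, num_steps):
    """Illustrative sketch only, not the authors' implementation.
    Follow `policy` from initial state s0, accumulating reward R(s, a)
    and moving with the deterministic transition s' = T(s, a)."""
    s, total_reward = s0, 0.0
    for _ in range(num_steps):
        a = policy(s)
        total_reward += R(s, a)
        s = T(s, a)
    return total_reward
```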
59 A dialog d is composed of a sequence of utterances (u1, . . . , um) and is paired with a map, which is composed of a set of named landmarks (l1, . . . , ln). [sent-100, score-0.264]
60 1 State: The states of our decision making problem combine both our position in the dialog d and the path we have taken so far on the map. [sent-105, score-0.235]
61 A state s ∈ S is composed of a tuple s = (ui, l, c). [sent-106, score-0.123]
62 Here l is the named landmark we are located next to, and c is a cardinal direction drawn from {North, South, East, West} which determines which side of l we are on. [sent-107, score-0.261]
63 Lastly, ui is the utterance in d we are currently interpreting. [sent-108, score-0.172]
64 Footnote 2: Our learning algorithm is not dependent on a deterministic transition function and can be applied to domains with stochastic transitions, such as robot locomotion. [sent-109, score-0.177]
65 2 Action: An action a ∈ A is composed of a named landmark l, the target of the action, together with a cardinal direction c which determines which side to pass on. [sent-111, score-0.434]
66 An action may also be null; in this case, we interpret an utterance without moving on the map. [sent-113, score-0.118]
67 A target l together with a cardinal direction c determine a point on the map, which is a fixed distance from l in the direction of c. [sent-114, score-0.208]
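As a small geometric illustration of this target-point computation (our own sketch; the fixed distance is not specified in this excerpt, so dist is a hypothetical parameter, and we assume a map whose y axis points north):

```python
# Unit offsets per cardinal direction, assuming y increases to the north.
OFFSETS = {"North": (0, 1), "South": (0, -1), "East": (1, 0), "West": (-1, 0)}

def target_point(landmark_xy, c, dist=1.0):
    """Return the point a fixed distance `dist` from the landmark in direction c."""
    x, y = landmark_xy
    dx, dy = OFFSETS[c]
    return (x + dist * dx, y + dist * dy)
```

For example, target_point((4, 2), "West") gives (3.0, 2), a waypoint for passing the landmark on its west side.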
68 We make the assumption that at most one instruction occurs in a given utterance. [sent-115, score-0.46]
69 This does not always hold true: the instruction giver sometimes chains commands together in a single utterance. [sent-116, score-0.765]
70 3 Transition: Executing action a = (l′, c′) in state s = (ui, l, c) leads us to a new state s′ = T(s, a). [sent-118, score-0.351]
71 This transition moves us to the next utterance to interpret, and moves our location to the target of the action. [sent-119, score-0.233]
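A minimal rendering of these state, action, and transition definitions (our reconstruction; the field names and the null-action convention are ours, based on the description above):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class State:
    utterance_idx: int       # index of ui, the utterance being interpreted
    landmark: str            # l, the landmark we are located next to
    side: str                # c, one of "North", "South", "East", "West"

@dataclass(frozen=True)
class Action:
    landmark: Optional[str]  # l', target of the move; None encodes a null action
    side: Optional[str]      # c', the side on which to pass l'

def transition(s: State, a: Action) -> State:
    """s' = T(s, a): advance to the next utterance and, unless the action is
    null, move to the target landmark and side."""
    if a.landmark is None:
        return State(s.utterance_idx + 1, s.landmark, s.side)
    return State(s.utterance_idx + 1, a.landmark, a.side)
```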
72 Figure 3 displays the state transitions for two different actions. [sent-121, score-0.117]
73 To form a path through the map, we connect these state waypoints with a path planner based on A∗, where the landmarks are obstacles. [sent-122, score-0.588]
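The excerpt names A∗ with landmarks as obstacles but gives no further detail, so the sketch below is a generic 4-connected grid A∗ of our own construction; the paper’s planner may differ in grid resolution and cost structure:

```python
import heapq

def astar(start, goal, obstacles, width, height):
    """Shortest 4-connected grid path from start to goal that avoids the
    obstacle cells, using an admissible Manhattan-distance heuristic."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # entries: (f, g, cell, path)
    seen = set()
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in obstacles and nxt not in seen):
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None   # goal unreachable around the obstacles
```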
74 4 Reward: We define a reward function R(s, a) which measures the utility of executing action a in state s. [sent-125, score-0.619]
75 We wish to construct a route which follows the expert path as closely as possible. [sent-126, score-0.424]
76 We consider a proposed route P close to the expert path Pe if P visits landmarks in the same order as Pe, and also passes them on the correct side. [sent-127, score-0.605]
77 For a given transition s = (ui, l, c), a = (l′, c′), we have a binary feature indicating if the expert path moves from l to l′. [sent-128, score-0.487]
78 In Figure 3, both a1 and a2 visit the next landmark in the correct order. [sent-129, score-0.116]
79 To measure if an action is to the correct side of a landmark, we have another binary feature indicating if Pe passes l′ on side c′. [sent-130, score-0.405]
80 In addition, we have a feature which counts the number of words in ui which also occur in the name of l′. [sent-132, score-0.149]
81 Our reward function is a linear combination of these features. [sent-138, score-0.199]
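A sketch of this reward computation, reconstructed by us from the three features just described (the weights and the exact form of the word-overlap feature are not given in this excerpt):

```python
def reward_features(s, a, utterances, expert_moves, expert_sides):
    """Three features for state s = (i, l, c) and action a = (l2, c2).
    expert_moves is the set of (l, l2) landmark pairs the expert path
    traverses in order; expert_sides maps a landmark to the side the
    expert path passes it on. Both encodings are our assumptions."""
    i, l, c = s
    l2, c2 = a
    order_ok = 1.0 if (l, l2) in expert_moves else 0.0
    side_ok = 1.0 if expert_sides.get(l2) == c2 else 0.0
    name_words = set(l2.lower().split())
    overlap = float(sum(w in name_words for w in utterances[i].lower().split()))
    return [order_ok, side_ok, overlap]

def reward(s, a, weights, utterances, expert_moves, expert_sides):
    """R(s, a): linear combination of the features above."""
    feats = reward_features(s, a, utterances, expert_moves, expert_sides)
    return sum(w * f for w, f in zip(weights, feats))
```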
82 5 Policy: We formally define an interpretive strategy as a policy π : S → A, a mapping from states to actions. [sent-140, score-0.217]
83 Our goal is to find a policy π which maximizes the expected reward Eπ[R(s, π(s))]. [sent-141, score-0.296]
84 A given Q function implicitly defines a policy π by π(s) = arg max_a Q(s, a) (3). [sent-143, score-0.159]
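Equation (3) translates directly into code; here candidate_actions is a hypothetical enumerator of the legal (landmark, side) moves in a state:

```python
def greedy_policy(q, candidate_actions):
    """Policy induced by a Q function: pi(s) = arg max_a Q(s, a)."""
    return lambda s: max(candidate_actions(s), key=lambda a: q(s, a))
```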
85 Basic reinforcement learning methods treat states as atomic entities, in essence estimating Vπ as a table. [sent-144, score-0.194]
86 However, at test time we are following new directions for a map we haven’t previously seen. [sent-145, score-0.194]
87 The linguistic information in our feature representation includes the instruction giver’s utterance and the names of landmarks on the map. [sent-150, score-1.066]
88 Additionally, we furnish our algorithm with a list of English spatial terms, shown in Table 1. [sent-151, score-0.371]
89 Learning exactly which words influence decision making is difficult; reinforcement learning algorithms have problems with the large, sparse feature vectors common in natural language processing. [sent-153, score-0.159]
90 For a given state s = (u, l, c) and action a = (l′, c′), our feature vector φ(s, a) is composed of the following. (Table 1: the list of given spatial terms, including above, below, under, underneath, over, bottom, . . . ) [sent-154, score-0.729]
91 Egocentric Spatial: Binary feature which conjoins the cardinal direction we move in with each spatial term w ∈ u. [sent-158, score-0.505]
92 We conjoin this direction with each spatial term, giving binary features such as “the word down appears in the utterance and we move to the south”. [sent-161, score-0.605]
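A sketch of these conjunction features (ours; the spatial-term list is a partial stand-in for Table 1):

```python
# Partial inventory; Table 1 in the paper gives the full list of spatial terms.
SPATIAL_TERMS = {"above", "below", "under", "underneath", "over", "bottom",
                 "down", "up", "north", "south", "east", "west"}

def egocentric_features(utterance, move_direction):
    """Sparse binary features conjoining each spatial term in the utterance
    with the cardinal direction of the current move."""
    return {(w, move_direction): 1.0
            for w in utterance.lower().split() if w in SPATIAL_TERMS}
```

For instance, egocentric_features("go down past the rock", "South") yields {("down", "South"): 1.0}, the feature described in the quoted example.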
93 In a given state st, we act according to a probabilistic policy defined in terms of the Q function. [sent-166, score-0.238]
94 After every transition we update θ, which changes how we act in subsequent steps. [sent-167, score-0.12]
95 If we act greedily with respect to our current Q function, we might never visit states which are actually valuable. (Algorithm 1: the SARSA learning algorithm. Input: dialog set D, transition function T, learning rate αt. Output: feature weights θ. Pseudocode: initialize θ to small random values; repeat until θ converges; return θ.) [sent-169, score-0.161]
96 We utilize Boltzmann exploration, for which Pr(at | st; θ) = exp(θ⊤φ(st, at)/τ) / Σa′ exp(θ⊤φ(st, a′)/τ) (5). The parameter τ is referred to as the temperature, with a higher temperature causing more exploration, and a lower temperature causing more exploitation. [sent-171, score-0.154]
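Equation (5) as a sampling routine, a standard numerically stabilized softmax (the sparse-dict representation of θ and φ is our choice):

```python
import math
import random

def boltzmann_sample(theta, phi, s, actions, tau=1.0):
    """Sample a_t with Pr(a | s; theta) proportional to exp(theta.phi(s, a) / tau).
    theta and phi(s, a) are dicts mapping feature names to values."""
    def score(a):
        return sum(theta.get(k, 0.0) * v for k, v in phi(s, a).items()) / tau
    scores = [score(a) for a in actions]
    m = max(scores)                      # subtract the max to stabilize exp()
    weights = [math.exp(x - m) for x in scores]
    return random.choices(actions, weights=weights, k=1)[0]
```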
97 Acting with this exploration policy, we iterate through the training dialogs, updating our feature weights θ as we go. [sent-173, score-0.12]
98 The update step looks at two successive state transitions. [sent-174, score-0.108]
99 Suppose we are in state st, execute action at, receive reward rt = R(st, at), transition to state st+1, and there choose action at+1. [sent-175, score-0.798]
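The update equations themselves were dropped by the extraction; below is the textbook linear-function-approximation SARSA step that the surrounding description implies, reconstructed by us rather than copied from the paper:

```python
def sarsa_update(theta, phi_sa, phi_s2a2, r, alpha, gamma=1.0):
    """One SARSA step with linear function approximation:
    theta <- theta + alpha * (r + gamma * Q(s', a') - Q(s, a)) * phi(s, a),
    where Q(s, a) = theta . phi(s, a) and feature vectors are sparse dicts."""
    def dot(f):
        return sum(theta.get(k, 0.0) * v for k, v in f.items())
    delta = r + gamma * dot(phi_s2a2) - dot(phi_sa)   # temporal-difference error
    for k, v in phi_sa.items():
        theta[k] = theta.get(k, 0.0) + alpha * delta * v
    return theta
```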
100 6 Experimental Design: We evaluate our system on the Map Task corpus, splitting the corpus into 96 training dialogs and 32 test dialogs. [sent-185, score-0.111]
wordName wordTfidf (topN-words)
[('instruction', 0.46), ('spatial', 0.371), ('giver', 0.305), ('action', 0.191), ('landmarks', 0.19), ('reward', 0.168), ('path', 0.159), ('map', 0.156), ('follower', 0.131), ('policy', 0.128), ('reinforcement', 0.128), ('navigational', 0.119), ('route', 0.116), ('dialogs', 0.111), ('allocentric', 0.109), ('ui', 0.105), ('executing', 0.102), ('st', 0.1), ('expert', 0.097), ('apprenticeship', 0.095), ('cardinal', 0.082), ('landmark', 0.082), ('state', 0.08), ('sarsa', 0.071), ('utterance', 0.067), ('grounded', 0.066), ('direction', 0.063), ('transition', 0.062), ('instructions', 0.062), ('utterances', 0.057), ('reference', 0.056), ('south', 0.055), ('frame', 0.055), ('conjoins', 0.054), ('egocentric', 0.054), ('inches', 0.054), ('interpretive', 0.054), ('kuipers', 0.054), ('regier', 0.054), ('robot', 0.053), ('closely', 0.052), ('moves', 0.052), ('interpret', 0.051), ('learns', 0.051), ('follow', 0.051), ('house', 0.049), ('physical', 0.049), ('hcrc', 0.048), ('temperature', 0.048), ('utility', 0.047), ('learn', 0.045), ('frames', 0.044), ('feature', 0.044), ('levinson', 0.044), ('composed', 0.043), ('passes', 0.043), ('binary', 0.041), ('exploration', 0.041), ('dialog', 0.041), ('world', 0.041), ('pe', 0.039), ('anderson', 0.039), ('branavan', 0.039), ('rock', 0.039), ('directions', 0.038), ('transitions', 0.037), ('passing', 0.037), ('physically', 0.037), ('move', 0.036), ('iterate', 0.035), ('states', 0.035), ('quantitative', 0.035), ('meaning', 0.034), ('semantics', 0.034), ('visit', 0.034), ('windows', 0.034), ('drawn', 0.034), ('null', 0.033), ('situated', 0.033), ('indicating', 0.032), ('paired', 0.031), ('function', 0.031), ('credit', 0.031), ('learning', 0.031), ('locality', 0.03), ('portions', 0.03), ('act', 0.03), ('ex', 0.029), ('causing', 0.029), ('geometric', 0.029), ('update', 0.028), ('dw', 0.028), ('classifies', 0.028), ('giving', 0.027), ('side', 0.027), ('participants', 0.027), ('convergence', 0.026), ('rt', 0.026), ('heavily', 0.026), ('supervision', 0.026)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000002 168 acl-2010-Learning to Follow Navigational Directions
Author: Adam Vogel ; Dan Jurafsky
Abstract: We present a system that learns to follow navigational natural language directions. Where traditional models learn from linguistic annotation or word distributions, our approach is grounded in the world, learning by apprenticeship from routes through a map paired with English descriptions. Lacking an explicit alignment between the text and the reference path makes it difficult to determine what portions of the language describe which aspects of the route. We learn this correspondence with a reinforcement learning algorithm, using the deviation of the route we follow from the intended path as a reward signal. We demonstrate that our system successfully grounds the meaning of spatial terms like above and south into geometric properties of paths.
2 0.2985622 202 acl-2010-Reading between the Lines: Learning to Map High-Level Instructions to Commands
Author: S.R.K. Branavan ; Luke Zettlemoyer ; Regina Barzilay
Abstract: In this paper, we address the task of mapping high-level instructions to sequences of commands in an external environment. Processing these instructions is challenging—they posit goals to be achieved without specifying the steps required to complete them. We describe a method that fills in missing information using an automatically derived environment model that encodes states, transitions, and commands that cause these transitions to happen. We present an efficient approximate approach for learning this environment model as part of a policygradient reinforcement learning algorithm for text interpretation. This design enables learning for mapping high-level instructions, which previous statistical methods cannot handle.1
3 0.16413406 35 acl-2010-Automated Planning for Situated Natural Language Generation
Author: Konstantina Garoufi ; Alexander Koller
Abstract: We present a natural language generation approach which models, exploits, and manipulates the non-linguistic context in situated communication, using techniques from AI planning. We show how to generate instructions which deliberately guide the hearer to a location that is convenient for the generation of simple referring expressions, and how to generate referring expressions with context-dependent adjectives. We implement and evaluate our approach in the framework of the Challenge on Generating Instructions in Virtual Environments, finding that it performs well even under the constraints of realtime generation.
4 0.14859781 167 acl-2010-Learning to Adapt to Unknown Users: Referring Expression Generation in Spoken Dialogue Systems
Author: Srinivasan Janarthanam ; Oliver Lemon
Abstract: We present a data-driven approach to learn user-adaptive referring expression generation (REG) policies for spoken dialogue systems. Referring expressions can be difficult to understand in technical domains where users may not know the technical ‘jargon’ names of the domain entities. In such cases, dialogue systems must be able to model the user’s (lexical) domain knowledge and use appropriate referring expressions. We present a reinforcement learning (RL) framework in which the sys- tem learns REG policies which can adapt to unknown users online. Furthermore, unlike supervised learning methods which require a large corpus of expert adaptive behaviour to train on, we show that effective adaptive policies can be learned from a small dialogue corpus of non-adaptive human-machine interaction, by using a RL framework and a statistical user simulation. We show that in comparison to adaptive hand-coded baseline policies, the learned policy performs significantly better, with an 18.6% average increase in adaptation accuracy. The best learned policy also takes less dialogue time (average 1.07 min less) than the best hand-coded policy. This is because the learned policies can adapt online to changing evidence about the user’s domain expertise.
5 0.091343157 239 acl-2010-Towards Relational POMDPs for Adaptive Dialogue Management
Author: Pierre Lison
Abstract: Open-ended spoken interactions are typically characterised by both structural complexity and high levels of uncertainty, making dialogue management in such settings a particularly challenging problem. Traditional approaches have focused on providing theoretical accounts for either the uncertainty or the complexity of spoken dialogue, but rarely considered the two issues simultaneously. This paper describes ongoing work on a new approach to dialogue management which attempts to fill this gap. We represent the interaction as a Partially Observable Markov Decision Process (POMDP) over a rich state space incorporating both dialogue, user, and environment models. The tractability of the resulting POMDP can be preserved using a mechanism for dynamically constraining the action space based on prior knowledge over locally relevant dialogue structures. These constraints are encoded in a small set of general rules expressed as a Markov Logic network. The first-order expressivity of Markov Logic enables us to leverage the rich relational structure of the problem and efficiently abstract over large regions ofthe state and action spaces.
6 0.088139176 187 acl-2010-Optimising Information Presentation for Spoken Dialogue Systems
7 0.086629018 190 acl-2010-P10-5005 k2opt.pdf
8 0.082792237 142 acl-2010-Importance-Driven Turn-Bidding for Spoken Dialogue Systems
9 0.064182557 149 acl-2010-Incorporating Extra-Linguistic Information into Reference Resolution in Collaborative Task Dialogue
10 0.062332463 55 acl-2010-Bootstrapping Semantic Analyzers from Non-Contradictory Texts
11 0.056569453 97 acl-2010-Efficient Path Counting Transducers for Minimum Bayes-Risk Decoding of Statistical Machine Translation Lattices
12 0.053536706 184 acl-2010-Open-Domain Semantic Role Labeling by Modeling Word Spans
13 0.052090086 13 acl-2010-A Rational Model of Eye Movement Control in Reading
14 0.050547451 6 acl-2010-A Game-Theoretic Model of Metaphorical Bargaining
15 0.050312884 93 acl-2010-Dynamic Programming for Linear-Time Incremental Parsing
16 0.048831649 47 acl-2010-Beetle II: A System for Tutoring and Computational Linguistics Experimentation
17 0.048777312 215 acl-2010-Speech-Driven Access to the Deep Web on Mobile Devices
18 0.047759734 108 acl-2010-Expanding Verb Coverage in Cyc with VerbNet
19 0.047744323 194 acl-2010-Phrase-Based Statistical Language Generation Using Graphical Models and Active Learning
20 0.047281045 135 acl-2010-Hindi-to-Urdu Machine Translation through Transliteration
topicId topicWeight
[(0, -0.136), (1, 0.062), (2, -0.033), (3, -0.133), (4, -0.032), (5, -0.148), (6, -0.106), (7, 0.041), (8, 0.006), (9, 0.03), (10, -0.029), (11, -0.031), (12, 0.03), (13, -0.004), (14, -0.029), (15, -0.106), (16, 0.093), (17, 0.112), (18, -0.046), (19, 0.067), (20, 0.089), (21, -0.099), (22, -0.094), (23, 0.065), (24, 0.058), (25, -0.063), (26, -0.017), (27, 0.037), (28, -0.117), (29, -0.032), (30, 0.207), (31, -0.317), (32, 0.146), (33, 0.041), (34, -0.039), (35, 0.107), (36, 0.001), (37, 0.255), (38, -0.161), (39, -0.03), (40, 0.089), (41, -0.058), (42, 0.065), (43, 0.059), (44, -0.021), (45, -0.022), (46, 0.017), (47, -0.137), (48, 0.03), (49, 0.011)]
simIndex simValue paperId paperTitle
same-paper 1 0.96756357 168 acl-2010-Learning to Follow Navigational Directions
Author: Adam Vogel ; Dan Jurafsky
Abstract: We present a system that learns to follow navigational natural language directions. Where traditional models learn from linguistic annotation or word distributions, our approach is grounded in the world, learning by apprenticeship from routes through a map paired with English descriptions. Lacking an explicit alignment between the text and the reference path makes it difficult to determine what portions of the language describe which aspects of the route. We learn this correspondence with a reinforcement learning algorithm, using the deviation of the route we follow from the intended path as a reward signal. We demonstrate that our system successfully grounds the meaning of spatial terms like above and south into geometric properties of paths.
2 0.91721195 202 acl-2010-Reading between the Lines: Learning to Map High-Level Instructions to Commands
Author: S.R.K. Branavan ; Luke Zettlemoyer ; Regina Barzilay
Abstract: In this paper, we address the task of mapping high-level instructions to sequences of commands in an external environment. Processing these instructions is challenging—they posit goals to be achieved without specifying the steps required to complete them. We describe a method that fills in missing information using an automatically derived environment model that encodes states, transitions, and commands that cause these transitions to happen. We present an efficient approximate approach for learning this environment model as part of a policygradient reinforcement learning algorithm for text interpretation. This design enables learning for mapping high-level instructions, which previous statistical methods cannot handle.1
3 0.78194356 35 acl-2010-Automated Planning for Situated Natural Language Generation
Author: Konstantina Garoufi ; Alexander Koller
Abstract: We present a natural language generation approach which models, exploits, and manipulates the non-linguistic context in situated communication, using techniques from AI planning. We show how to generate instructions which deliberately guide the hearer to a location that is convenient for the generation of simple referring expressions, and how to generate referring expressions with context-dependent adjectives. We implement and evaluate our approach in the framework of the Challenge on Generating Instructions in Virtual Environments, finding that it performs well even under the constraints of realtime generation.
4 0.52950078 190 acl-2010-P10-5005 k2opt.pdf
Author: empty-author
Abstract: unkown-abstract
5 0.44630212 239 acl-2010-Towards Relational POMDPs for Adaptive Dialogue Management
Author: Pierre Lison
Abstract: Open-ended spoken interactions are typically characterised by both structural complexity and high levels of uncertainty, making dialogue management in such settings a particularly challenging problem. Traditional approaches have focused on providing theoretical accounts for either the uncertainty or the complexity of spoken dialogue, but rarely considered the two issues simultaneously. This paper describes ongoing work on a new approach to dialogue management which attempts to fill this gap. We represent the interaction as a Partially Observable Markov Decision Process (POMDP) over a rich state space incorporating both dialogue, user, and environment models. The tractability of the resulting POMDP can be preserved using a mechanism for dynamically constraining the action space based on prior knowledge over locally relevant dialogue structures. These constraints are encoded in a small set of general rules expressed as a Markov Logic network. The first-order expressivity of Markov Logic enables us to leverage the rich relational structure of the problem and efficiently abstract over large regions ofthe state and action spaces.
6 0.3331055 167 acl-2010-Learning to Adapt to Unknown Users: Referring Expression Generation in Spoken Dialogue Systems
7 0.32083589 142 acl-2010-Importance-Driven Turn-Bidding for Spoken Dialogue Systems
8 0.31979498 187 acl-2010-Optimising Information Presentation for Spoken Dialogue Systems
9 0.30795929 13 acl-2010-A Rational Model of Eye Movement Control in Reading
10 0.30507275 55 acl-2010-Bootstrapping Semantic Analyzers from Non-Contradictory Texts
11 0.27316964 224 acl-2010-Talking NPCs in a Virtual Game World
12 0.25983837 61 acl-2010-Combining Data and Mathematical Models of Language Change
13 0.25929275 85 acl-2010-Detecting Experiences from Weblogs
14 0.24605206 92 acl-2010-Don't 'Have a Clue'? Unsupervised Co-Learning of Downward-Entailing Operators.
15 0.21751206 179 acl-2010-Now, Where Was I? Resumption Strategies for an In-Vehicle Dialogue System
16 0.21545504 64 acl-2010-Complexity Assumptions in Ontology Verbalisation
17 0.21434346 29 acl-2010-An Exact A* Method for Deciphering Letter-Substitution Ciphers
18 0.20750855 18 acl-2010-A Study of Information Retrieval Weighting Schemes for Sentiment Analysis
19 0.20680811 108 acl-2010-Expanding Verb Coverage in Cyc with VerbNet
20 0.20638087 43 acl-2010-Automatically Generating Term Frequency Induced Taxonomies
topicId topicWeight
[(14, 0.045), (25, 0.07), (36, 0.318), (39, 0.013), (42, 0.038), (59, 0.078), (73, 0.036), (74, 0.012), (76, 0.014), (78, 0.036), (83, 0.065), (84, 0.034), (98, 0.137)]
simIndex simValue paperId paperTitle
same-paper 1 0.78057671 168 acl-2010-Learning to Follow Navigational Directions
Author: Adam Vogel ; Dan Jurafsky
Abstract: We present a system that learns to follow navigational natural language directions. Where traditional models learn from linguistic annotation or word distributions, our approach is grounded in the world, learning by apprenticeship from routes through a map paired with English descriptions. Lacking an explicit alignment between the text and the reference path makes it difficult to determine what portions of the language describe which aspects of the route. We learn this correspondence with a reinforcement learning algorithm, using the deviation of the route we follow from the intended path as a reward signal. We demonstrate that our system successfully grounds the meaning of spatial terms like above and south into geometric properties of paths.
2 0.77458537 16 acl-2010-A Statistical Model for Lost Language Decipherment
Author: Benjamin Snyder ; Regina Barzilay ; Kevin Knight
Abstract: In this paper we propose a method for the automatic decipherment of lost languages. Given a non-parallel corpus in a known related language, our model produces both alphabetic mappings and translations of words into their corresponding cognates. We employ a non-parametric Bayesian framework to simultaneously capture both low-level character mappings and highlevel morphemic correspondences. This formulation enables us to encode some of the linguistic intuitions that have guided human decipherers. When applied to the ancient Semitic language Ugaritic, the model correctly maps 29 of 30 letters to their Hebrew counterparts, and deduces the correct Hebrew cognate for 60% of the Ugaritic words which have cognates in Hebrew.
3 0.76192629 231 acl-2010-The Prevalence of Descriptive Referring Expressions in News and Narrative
Author: Raquel Hervas ; Mark Finlayson
Abstract: Generating referring expressions is a key step in Natural Language Generation. Researchers have focused almost exclusively on generating distinctive referring expressions, that is, referring expressions that uniquely identify their intended referent. While undoubtedly one of their most important functions, referring expressions can be more than distinctive. In particular, descriptive referring expressions, those that provide additional information not required for distinction, are critical to fluent, efficient, well-written text. We present a corpus analysis in which approximately one-fifth of 7,207 referring expressions in 24,422 words of news and narrative are descriptive. These data show that if we are ever to fully master natural language generation, especially for the genres of news and narrative, researchers will need to devote more attention to understanding how to generate descriptive, and not just distinctive, referring expressions.
4 0.5315147 167 acl-2010-Learning to Adapt to Unknown Users: Referring Expression Generation in Spoken Dialogue Systems
Author: Srinivasan Janarthanam ; Oliver Lemon
Abstract: We present a data-driven approach to learn user-adaptive referring expression generation (REG) policies for spoken dialogue systems. Referring expressions can be difficult to understand in technical domains where users may not know the technical ‘jargon’ names of the domain entities. In such cases, dialogue systems must be able to model the user’s (lexical) domain knowledge and use appropriate referring expressions. We present a reinforcement learning (RL) framework in which the sys- tem learns REG policies which can adapt to unknown users online. Furthermore, unlike supervised learning methods which require a large corpus of expert adaptive behaviour to train on, we show that effective adaptive policies can be learned from a small dialogue corpus of non-adaptive human-machine interaction, by using a RL framework and a statistical user simulation. We show that in comparison to adaptive hand-coded baseline policies, the learned policy performs significantly better, with an 18.6% average increase in adaptation accuracy. The best learned policy also takes less dialogue time (average 1.07 min less) than the best hand-coded policy. This is because the learned policies can adapt online to changing evidence about the user’s domain expertise.
5 0.51622397 211 acl-2010-Simple, Accurate Parsing with an All-Fragments Grammar
Author: Mohit Bansal ; Dan Klein
Abstract: We present a simple but accurate parser which exploits both large tree fragments and symbol refinement. We parse with all fragments of the training set, in contrast to much recent work on tree selection in data-oriented parsing and treesubstitution grammar learning. We require only simple, deterministic grammar symbol refinement, in contrast to recent work on latent symbol refinement. Moreover, our parser requires no explicit lexicon machinery, instead parsing input sentences as character streams. Despite its simplicity, our parser achieves accuracies of over 88% F1 on the standard English WSJ task, which is competitive with substantially more complicated state-of-theart lexicalized and latent-variable parsers. Additional specific contributions center on making implicit all-fragments parsing efficient, including a coarse-to-fine inference scheme and a new graph encoding.
6 0.51579887 214 acl-2010-Sparsity in Dependency Grammar Induction
7 0.51413345 93 acl-2010-Dynamic Programming for Linear-Time Incremental Parsing
8 0.51340115 62 acl-2010-Combining Orthogonal Monolingual and Multilingual Sources of Evidence for All Words WSD
9 0.51330024 202 acl-2010-Reading between the Lines: Learning to Map High-Level Instructions to Commands
10 0.51093554 172 acl-2010-Minimized Models and Grammar-Informed Initialization for Supertagging with Highly Ambiguous Lexicons
11 0.50938636 71 acl-2010-Convolution Kernel over Packed Parse Forest
12 0.50906515 218 acl-2010-Structural Semantic Relatedness: A Knowledge-Based Method to Named Entity Disambiguation
13 0.50598824 169 acl-2010-Learning to Translate with Source and Target Syntax
14 0.50508308 162 acl-2010-Learning Common Grammar from Multilingual Corpus
15 0.50472617 261 acl-2010-Wikipedia as Sense Inventory to Improve Diversity in Web Search Results
16 0.50457674 55 acl-2010-Bootstrapping Semantic Analyzers from Non-Contradictory Texts
17 0.50390249 133 acl-2010-Hierarchical Search for Word Alignment
18 0.5025667 245 acl-2010-Understanding the Semantic Structure of Noun Phrase Queries
19 0.50188744 146 acl-2010-Improving Chinese Semantic Role Labeling with Rich Syntactic Features
20 0.50173372 116 acl-2010-Finding Cognate Groups Using Phylogenies