acl acl2013 acl2013-90 knowledge-graph by maker-knowledge-mining

90 acl-2013-Conditional Random Fields for Responsive Surface Realisation using Global Features


Source: pdf

Author: Nina Dethlefs ; Helen Hastie ; Heriberto Cuayahuitl ; Oliver Lemon

Abstract: Surface realisers in spoken dialogue systems need to be more responsive than conventional surface realisers. They need to be sensitive to the utterance context as well as robust to partial or changing generator inputs. We formulate surface realisation as a sequence labelling task and combine the use of conditional random fields (CRFs) with semantic trees. Due to their extended notion of context, CRFs are able to take the global utterance context into account and are less constrained by local features than other realisers. This leads to more natural and less repetitive surface realisation. It also allows generation from partial and modified inputs and is therefore applicable to incremental surface realisation. Results from a human rating study confirm that users are sensitive to this extended notion of context and assign ratings that are significantly higher (up to 14%) than those for taking only local context into account.

Reference: text


Summary: the most important sentences generated by the tf-idf model

sentIndex sentText sentNum sentScore

1 Abstract Surface realisers in spoken dialogue systems need to be more responsive than conventional surface realisers. [sent-5, score-0.703]

2 They need to be sensitive to the utterance context as well as robust to partial or changing generator inputs. [sent-6, score-0.303]

3 We formulate surface realisation as a sequence labelling task and combine the use of conditional random fields (CRFs) with semantic trees. [sent-7, score-0.929]

4 Due to their extended notion of context, CRFs are able to take the global utterance context into account and are less constrained by local features than other realisers. [sent-8, score-0.317]

5 This leads to more natural and less repetitive surface realisation. [sent-9, score-0.35]

6 It also allows generation from partial and modified inputs and is therefore applicable to incremental surface realisation. [sent-10, score-0.726]

7 1 Introduction Surface realisation typically aims to produce output that is grammatically well-formed, natural and cohesive. [sent-12, score-0.461]

8 In interactive settings such as generation within a spoken dialogue system (SDS), [sent-18, score-0.512]

9 In addition, since interactions are dynamic, generator inputs from the dialogue manager can sometimes be partial or subject to subsequent modification. [sent-22, score-0.608]

10 Since dialogue acts are passed on to the generation module as soon as possible, this can sometimes lead to incomplete generator inputs (because the user is still speaking), or inputs that are subject to later modification (because of an initial ASR mis-recognition). [sent-24, score-0.769]

11 In this paper, we propose to formulate surface realisation as a sequence labelling task. [sent-25, score-0.816]

12 Our main hypothesis is that the use of global context in a CRF with semantic trees can lead to surface realisations that are better phrased, more natural and less repetitive than taking only local features into account. [sent-32, score-0.618]

13 In addition, we compare our system with alternative surface realisation methods from the literature, namely, a rank and boost approach and n-grams. [sent-34, score-0.74]

14 [c 2013 Association for Computational Linguistics, pages 1254–1263] to surface realisation within incremental systems, because CRFs are able to model context across full as well as partial generator inputs which may undergo modifications during generation. [sent-37, score-1.174]

15 As a demonstration, we apply our model to incremental surface realisation in a proof-of-concept study. [sent-38, score-0.944]

16 (2009) who also use CRFs to find the best surface realisation from a semantic tree. [sent-40, score-0.781]

17 ’s generator does not take context beyond the current utterance into account and is thus restricted to local features. [sent-43, score-0.346]

18 In terms of surface realisation from graphical models (and within the context of SDSs), our approach is also related to work by Georgila et al. [sent-45, score-0.785]

19 The last approach is also concerned with generating restaurant recommendations within an SDS. [sent-48, score-0.34]

20 In terms of surface realisation for SDSs, Oh and Rudnicky (2000) present foundational work in using an n-gram-based system. [sent-51, score-0.74]

21 They train a surface realiser based on a domain-dependent language model and use an overgeneration and ranking approach. [sent-52, score-0.374]

22 Candidate utterances are ranked according to a penalty function which penalises utterances that are too long or too short, repetitious utterances, and utterances that contain either more or less information than the dialogue act requires. [sent-53, score-0.522]

23 SPaRKy was also developed for the domain of restaurant recommendations and was shown to be equivalent to or better than a carefully designed template-based generator which had received high human ratings in the past (Stent et al. [sent-58, score-0.472]

24 This could present a problem in incremental settings, where generation speed is of particular importance. [sent-65, score-0.35]

25 More work on trainable realisation for SDSs generally includes Bulyko and Ostendorf (2002) who use finite state transducers, Nakatsu and White (2006) who use supervised learning, Varges (2006) who uses chart generation, and Konstas and Lapata (2012) who use weighted hypergraphs, among others. [sent-69, score-0.461]

26 1 Tree-based Semantic Representations The restaurant recommendations we generate can include any of the attributes shown in Table 1. [sent-71, score-0.435]

27 It is then the task of the surface realiser to find the best realisation, including whether to present them in one or several sentences. [sent-72, score-0.347]

28 This often is a sentence planning decision, but in our approach it is handled using CRF-based surface realisation. [sent-73, score-0.279]

29 The semantic forms underlying surface realisation can be produced in many ways. [sent-74, score-0.781]

30 While the user is speaking, the dialogue manager sends dialogue acts to the NLG module, which uses reinforcement learning to order semantic attributes and produce a semantic tree (see Dethlefs et al. [sent-76, score-1.148]

31 This paper focuses on surface realisation from these trees using a CRF as shown in the surface realisation module. [sent-78, score-1.48]

32 As shown in the architecture diagram in Figure 1, a CRF surface realiser takes a semantic tree as input. [sent-122, score-0.445]

33 2 Conditional Random Fields for Phrase-Based Surface Realisation The main idea of our approach is to treat surface realisation as a sequence labelling task in which a sequence of semantic inputs needs to be labelled with appropriate surface realisations. [sent-138, score-1.225]

34 The task is therefore to find a mapping between (observed) 1256 lexical, syntactic and semantic features and a (hidden) best surface realisation. [sent-139, score-0.32]
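Sequence labelling of this kind is typically decoded with the Viterbi algorithm. The following is a minimal toy sketch of that decoding step; the slot names, candidate phrases and scoring functions are invented for illustration and are not the authors' trained CRF.

```python
# Toy sequence labelling: map semantic slots (observed) to surface phrases
# (hidden labels) with Viterbi decoding. Scores are illustrative only.

def viterbi(slots, candidates, emit, trans):
    """Return the highest-scoring phrase sequence for a slot sequence."""
    # best[i][p] = (score, backpointer) for phrase p at position i
    best = [{p: (emit(slots[0], p), None) for p in candidates[slots[0]]}]
    for i in range(1, len(slots)):
        layer = {}
        for p in candidates[slots[i]]:
            score, prev = max(
                (best[i - 1][q][0] + trans(q, p) + emit(slots[i], p), q)
                for q in best[i - 1]
            )
            layer[p] = (score, prev)
        best.append(layer)
    # Trace the best path back from the final position.
    phrase, _ = max(best[-1].items(), key=lambda kv: kv[1][0])
    path = [phrase]
    for i in range(len(slots) - 1, 0, -1):
        phrase = best[i][phrase][1]
        path.append(phrase)
    return list(reversed(path))

candidates = {
    "name": ["The Beluga", "Beluga"],
    "area": ["in the city centre", "located centrally"],
}
emit = lambda slot, p: 1.0 if p in ("The Beluga", "in the city centre") else 0.5
trans = lambda q, p: 0.2 if q == "The Beluga" else 0.0

print(viterbi(["name", "area"], candidates, emit, trans))
# → ['The Beluga', 'in the city centre']
```

In the paper's setting the emission and transition scores come from the learned CRF feature weights rather than hand-set constants.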

35 We use the linear-chain Conditional Random Field (CRF) model for statistical phrase-based surface realisation, see Figure 2 (a). [sent-140, score-0.279]

36 This probabilistic model defines the posterior probability of labels (surface realisation phrases) y = {y1, . [sent-141, score-0.461]
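The formula is truncated in the extracted sentence above; assuming it is the standard linear-chain CRF, the posterior over a label sequence y = {y1, …, yT} given observations x would be:

```latex
p(y \mid x) = \frac{1}{Z(x)} \exp\left( \sum_{t=1}^{T} \sum_{k} \lambda_k \, f_k(y_{t-1}, y_t, x, t) \right),
\qquad
Z(x) = \sum_{y'} \exp\left( \sum_{t=1}^{T} \sum_{k} \lambda_k \, f_k(y'_{t-1}, y'_t, x, t) \right)
```

where the f_k are feature functions over adjacent labels and the full observation sequence, and the λ_k are learned weights. This is a reconstruction of the standard formulation, not a quotation from the paper.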

37 The generation context includes everything that has been generated for the current utterance so far. [sent-162, score-0.315]

38 shtml The semantics for each node are derived from the input dialogue acts (these are listed in Table 1) and are associated with nodes. [sent-173, score-0.38]

39 The lexical items are present in the generation context and are mapped to semantic tree nodes. [sent-174, score-0.289]

40 , each generation step needs to take the features of the entire generation history into account. [sent-177, score-0.292]

41 For the first constituent, The Beluga, this corresponds to the features {ˆ, BEGIN, NAME}, indicating the beginning of a sentence (where empty features are omitted), the beginning of a new generation context, and the next semantic slot required. [sent-179, score-0.279]
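A hedged sketch of how such a feature set might be assembled for each constituent; the feature names beyond ˆ, BEGIN and NAME (e.g. the HIST_ prefix) are assumptions for illustration, not the authors' feature templates.

```python
# Illustrative feature extraction for one constituent, following the paper's
# description: sentence position, generation history, and the next required
# semantic slot. Global context enters via the history features.

def features(history, slot):
    """Feature set for the next constituent given phrases generated so far."""
    feats = set()
    if not history:
        feats.add("^")        # beginning of a new generation context
        feats.add("BEGIN")    # beginning of a sentence
    feats.add(slot.upper())   # next semantic slot required, e.g. NAME
    feats.update(f"HIST_{h}" for h in history)  # global utterance history
    return feats

print(sorted(features([], "name")))        # first constituent: The Beluga
# ['BEGIN', 'NAME', '^']
print(sorted(features(["name"], "area")))  # later constituent sees history
# ['AREA', 'HIST_name']
```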

42 In this way, a sequence of surface form constituents is generated correspond- ing to latent states in the CRF. [sent-185, score-0.317]

43 Since global utterance features capture the full generation context (i. [sent-186, score-0.374]

44 This is useful for longer restaurant recommendations which may span over more than one utterance. [sent-189, score-0.34]

45 In this way, our approach implicitly treats sentence planning decisions such as the distribution of content over a set of messages in the same way as (or as part of) surface realisation. [sent-196, score-0.279]

46 A further capability of our surface realiser is that it can generate complete phrases from full as well as partial dialogue acts. [sent-197, score-0.723]

47 A demonstration of this is given in Section 5 in an application to incremental surface realisation. [sent-199, score-0.483]

48 To train the CRF, we used a data set of 552 restaurant recommendations from the website The List. [sent-200, score-0.34]

49 3 The data contains recommendations such as Located in the city centre, Beluga is a stylish yet laid-back restaurant with a smart menu of modern European cuisine. [sent-201, score-0.377]

50 4 Grammar Induction The grammar g of surface realisation candidates is obtained through an automatic grammar induction algorithm which can be run on unlabelled data and requires only minimal human intervention. [sent-203, score-0.8]

51 This grammar defines the surface realisation space for the CRFs. [sent-204, score-0.77]

52 We provide the human corpus of restaurant recommendations from Section 3. [sent-205, score-0.34]

53 The remainder needs to be hand-annotated at the moment, which includes mainly attributes like restaurant names or quality attributes, such as delicate, exquisite, etc. [sent-211, score-0.346]

54 We assume that cohesion can be identified by untrained judges as natural, well-phrased and non-repetitive surface forms. [sent-220, score-0.333]

55 1: function FINDGRAMMAR(utterances u, semantic attributes a) return grammar
   2:   for each utterance u do
   3:     if u contains a semantic attribute from a, such as venue, cuisine, etc. [sent-225, score-0.366]
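The FINDGRAMMAR loop above can be sketched as follows. This is a simplification under stated assumptions: the keyword lists are toy stand-ins for the paper's attribute detection, and the induced grammar here is just a phrase inventory per attribute.

```python
# Sketch of grammar induction: collect, per semantic attribute, the surface
# phrases that realise it in an unlabelled corpus. Keyword matching is an
# illustrative assumption, not the paper's induction procedure.

def find_grammar(utterances, attribute_keywords):
    grammar = {attr: set() for attr in attribute_keywords}
    for u in utterances:
        for attr, keywords in attribute_keywords.items():
            if any(k in u.lower() for k in keywords):
                grammar[attr].add(u)  # store phrase as a realisation candidate
    return grammar

corpus = ["located in the city centre",
          "a smart menu of modern European cuisine"]
keywords = {"area": ["centre", "area"], "food": ["cuisine", "menu"]}
g = find_grammar(corpus, keywords)
print(sorted(g["area"]))  # ['located in the city centre']
```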

56 (2010), whose system generates restaurant recommendations based on the SPaRKy system (Walker et al. [sent-234, score-0.34]

57 1 Human Rating Study We carried out a user rating study on the CrowdFlower crowd-sourcing platform. Each participant was shown part of a real human-system dialogue that emerged as part of the CLASSiC project evaluation (Rieser et al. [sent-241, score-0.431]

58 Each dialogue contained two variations for one of the utterances. [sent-246, score-0.33]

59 Table 2 gives an example of a dialogue segment presented to the participants. [sent-249, score-0.33]

60 The restaurant Gourmet Burger is an outstanding, expensive restaurant located in the central area. [sent-258, score-0.557]

61 Table 2: Example dialogue for participants to compare alternative outputs in italics, USR=user, SYS A=CRF (global), SYS B=CRF(local). [sent-266, score-0.33]

62 Possibly this is because the local context taken into account by both systems was not enough to ensure cohesion across surface phrases. [sent-298, score-0.467]

63 While CRF (global) often decides to aggregate attributes into one sentence, such as the Beluga is an outstanding restaurant in the city centre, CLASSiC tends to rely more on individual messages, such as The Beluga is an outstanding restaurant. [sent-315, score-0.443]

64 (2010) who also generate restaurant recommendations and asked participants questions similar to ours. [sent-320, score-0.34]

65 Table 4: Example dialogue where the dialogue manager needs to send incremental updates to the NLG. [sent-355, score-1.027]

66 Incremental surface realisation from semantic trees for this dialogue is shown in Figure 3. [sent-356, score-1.111]

67 5 Incremental Surface Realisation Recent years have seen increased interest in incremental dialogue processing (Skantze and Schlangen, 2009; Schlangen and Skantze, 2009). [sent-358, score-0.534]

68 From a dialogue perspective, they can be said to work on partial rather than full dialogue acts. [sent-360, score-0.706]

69 With respect to surface realisation, incremental NLG systems have predominantly relied on pre-defined templates (Purver and Otsuka, 2003; Skantze and Hjalmarsson, 2010; Dethlefs et al. [sent-361, score-0.483]

70 , 2003), a constraint satisfaction-based NLG architecture and marks important progress towards more flexible incremental surface realisation. [sent-366, score-0.483]

71 Especially for long utterances, or utterances separated by user turns, this may lead to surface form increments that are not well connected and lack cohesion. [sent-368, score-0.396]

72 1 Application to Incremental SR This section will discuss a proof-of-concept application of our approach to incremental surface realisation. [sent-370, score-0.483]

73 Table 4 shows an example dialogue between a user and system that contains a number of incremental phenomena that require hypothesis updates, system corrections and user bargeins. [sent-371, score-0.64]

74 Incremental surface realisation for this dialogue is shown in Figure 3, where processing steps are indicated as bold-face numbers and are triggered by partial dialogue acts that are sent from the dialogue manager, such as inform(area=centre [0. [sent-372, score-1.826]

75 Once a dialogue act is observed by the NLG system, a reinforcement learning agent determines the order of attributes and produces a semantic tree, as described in Section 3. [sent-375, score-0.524]

76 In the dialogue in Table 4, the user first asks for a nice restaurant in the centre. [sent-378, score-0.634]

77 The dialogue manager constructs a first attribute-value slot, inform(area=centre [0. [sent-379, score-0.423]

78 In a second step, the semantically annotated node gets expanded into a surface form that is chosen from a set of candidates (shown in curly brackets). [sent-385, score-0.279]

79 Step 3 then further expands the current tree adding a node for the food type and the name of a restaurant that the dialogue manager had passed. [sent-392, score-0.876]

80 Primitive attributes contain a single semantic type, such as area, whereas complex attributes contain multiple types, such as food, name, which need to be decomposed in a later processing step (see steps 4 and 6). [sent-394, score-0.284]
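The primitive/complex distinction above can be sketched as a small decomposition step. The slot inventory and the decomposition table are illustrative assumptions, not the paper's representation.

```python
# Sketch: a complex attribute bundles several semantic types and is split
# into primitive attributes in a later processing step.

DECOMPOSITION = {
    "food,name": ["food", "name"],  # complex attribute -> primitive attributes
}

def decompose(attribute):
    """Return primitive attributes; a primitive attribute maps to itself."""
    return DECOMPOSITION.get(attribute, [attribute])

print(decompose("area"))       # primitive: single semantic type -> ['area']
print(decompose("food,name"))  # complex: split later -> ['food', 'name']
```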

81 Step 5 again uses the CRF 7Note here that the information passed on to the NLG is distinct from the dialogue manager’s own actions. [sent-395, score-0.33]

82 In the example, the NLG is asked to generate a recommendation, but the dialogue manager actually decides to clarify the user’s preferences due to low confidence. [sent-396, score-0.423]

83 } Figure 3: Example of incremental surface realisation, where each generation step is indicated by a number. [sent-407, score-0.629]

84 Syntactic information in the form of parse categories is also taken into account for surface realisation, but has been omitted in this figure. [sent-410, score-0.376]

85 obtain the next surface realisation that connects with the previous one (so that a sequence of realisation “labels” appears: Right in the city centre and Bangkok). [sent-411, score-1.33]

86 This is important, because the local context would otherwise be restricted to a partial dialogue act, which can be much smaller than a full dialogue act and thus lead to short, repetitive sentences. [sent-413, score-0.877]

87 The dialogue continues as the system implicitly confirms the user’s preferred restaurant (SYS 1). [sent-414, score-0.581]

88 As a consequence, the dialogue manager needs to update its initial hypotheses and communicate this to NLG. [sent-416, score-0.423]

89 Afterwards, the dialogue continues and NLG involves mainly expanding the current tree into a full sequence of surface realisations for partial dialogue acts which come together into a full utterance. [sent-419, score-1.198]

90 They add new partial dialogue acts to the semantic tree. [sent-422, score-0.467]

91 For our application, the maximal context is 9 semantic attributes (for a surface form that uses all possible 10 attributes). [sent-425, score-0.46]

92 Updates are triggered by the hypothesis updates of the dialogue manager. [sent-428, score-0.4]

93 Whenever generated output needs to be modified, old expansions and surface forms are deleted first, before new ones can be expanded in their place. [sent-431, score-0.341]
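The delete-then-expand update described above can be sketched on a flat node-to-phrase map; the tree representation here is a deliberate simplification of the paper's semantic trees.

```python
# Sketch of an incremental update: when a hypothesis changes, the old
# expansion for a node is deleted first, then a new surface form is
# expanded in its place.

def update(tree, node, new_form):
    tree.pop(node, None)   # delete the old expansion and surface form first
    tree[node] = new_form  # then expand the new surface form in its place
    return tree

tree = {"area": "in the city centre", "food": "Thai"}  # node -> surface form
update(tree, "food", "Indian")  # ASR revises the food-type hypothesis
print(tree["food"])  # Indian
```

Because the update touches only the affected node, unchanged nodes keep their realisations, which is what lets the system avoid regenerating from scratch.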

94 2 Updates and Processing Speed Results Since fast responses are crucial in incremental systems, we measured the average time our system took for a surface realisation. [sent-433, score-0.483]

95 Since updates take effect directly on partial dialogue acts, rather than on the full generated utterance, we require around 50% fewer updates than if generating from scratch for every changed input hypothesis. [sent-439, score-0.516]

96 6 Conclusion and Future Directions We have presented a novel technique for surface realisation that treats generation as a sequence labelling task by combining a CRF with tree-based semantic representations. [sent-441, score-0.965]

97 An essential property of interactive surface realisers is to keep track of the utterance context including dependencies between linguistic features to generate cohesive utterances. [sent-442, score-0.506]

98 Keeping track of the global context is also important for incremental systems since generator inputs can be incomplete or subject to modification. [sent-446, score-0.474]

99 In a proof-of-concept study, we have argued that our approach is applicable to incremental surface realisation. [sent-447, score-0.483]

100 In addition, we may compare different sequence labelling algorithms for surface realisation (Nguyen and Guo, 2007) or segmented CRFs (Sarawagi and Cohen, 2005) and apply our method to more complex surface realisation domains such as text generation or summarisation. [sent-451, score-1.702]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('realisation', 0.461), ('dialogue', 0.33), ('surface', 0.279), ('restaurant', 0.251), ('incremental', 0.204), ('crf', 0.188), ('sparky', 0.169), ('generation', 0.146), ('beluga', 0.138), ('dethlefs', 0.136), ('utterance', 0.124), ('venue', 0.123), ('rieser', 0.109), ('skantze', 0.108), ('attributes', 0.095), ('sys', 0.094), ('manager', 0.093), ('nlg', 0.093), ('food', 0.092), ('recommendations', 0.089), ('generator', 0.088), ('gourmet', 0.077), ('schlangen', 0.077), ('repetitive', 0.071), ('updates', 0.07), ('inform', 0.069), ('cuay', 0.068), ('realisations', 0.068), ('realiser', 0.068), ('classic', 0.067), ('crfs', 0.065), ('utterances', 0.064), ('sdss', 0.061), ('global', 0.059), ('reinforcement', 0.058), ('tree', 0.057), ('located', 0.055), ('local', 0.055), ('cohesion', 0.054), ('centre', 0.054), ('usr', 0.054), ('mairesse', 0.054), ('nina', 0.054), ('name', 0.053), ('user', 0.053), ('inputs', 0.051), ('acts', 0.05), ('rating', 0.048), ('phrasing', 0.047), ('slot', 0.047), ('bangkok', 0.046), ('partial', 0.046), ('context', 0.045), ('ratings', 0.044), ('phone', 0.043), ('burger', 0.043), ('walker', 0.043), ('semantic', 0.041), ('heriberto', 0.041), ('huitl', 0.041), ('sds', 0.041), ('stent', 0.041), ('conditional', 0.04), ('area', 0.039), ('labelling', 0.038), ('sequence', 0.038), ('phrased', 0.038), ('city', 0.037), ('spoken', 0.036), ('oliver', 0.036), ('expansions', 0.035), ('helen', 0.035), ('verena', 0.035), ('attribute', 0.035), ('account', 0.034), ('gabriel', 0.034), ('parse', 0.032), ('fields', 0.032), ('hastie', 0.032), ('categories', 0.031), ('barges', 0.031), ('bulyko', 0.031), ('buschmeier', 0.031), ('postcode', 0.031), ('realisers', 0.031), ('spud', 0.031), ('grammar', 0.03), ('outstanding', 0.03), ('nodes', 0.028), ('sigdial', 0.028), ('sarawagi', 0.027), ('georgila', 0.027), ('nakatsu', 0.027), ('optimising', 0.027), ('overgeneration', 0.027), ('purver', 0.027), ('responsive', 0.027), ('rudnicky', 0.027), ('track', 0.027), ('deleted', 0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999923 90 acl-2013-Conditional Random Fields for Responsive Surface Realisation using Global Features

Author: Nina Dethlefs ; Helen Hastie ; Heriberto Cuayahuitl ; Oliver Lemon

Abstract: Surface realisers in spoken dialogue systems need to be more responsive than conventional surface realisers. They need to be sensitive to the utterance context as well as robust to partial or changing generator inputs. We formulate surface realisation as a sequence labelling task and combine the use of conditional random fields (CRFs) with semantic trees. Due to their extended notion of context, CRFs are able to take the global utterance context into account and are less constrained by local features than other realisers. This leads to more natural and less repetitive surface realisation. It also allows generation from partial and modified inputs and is therefore applicable to incremental surface realisation. Results from a human rating study confirm that users are sensitive to this extended notion of context and assign ratings that are significantly higher (up to 14%) than those for taking only local context into account.

2 0.13175569 141 acl-2013-Evaluating a City Exploration Dialogue System with Integrated Question-Answering and Pedestrian Navigation

Author: Srinivasan Janarthanam ; Oliver Lemon ; Phil Bartie ; Tiphaine Dalmas ; Anna Dickinson ; Xingkun Liu ; William Mackaness ; Bonnie Webber

Abstract: We present a city navigation and tourist information mobile dialogue app with integrated question-answering (QA) and geographic information system (GIS) modules that helps pedestrian users to navigate in and learn about urban environments. In contrast to existing mobile apps which treat these problems independently, our Android app addresses the problem of navigation and touristic questionanswering in an integrated fashion using a shared dialogue context. We evaluated our system in comparison with Samsung S-Voice (which interfaces to Google navigation and Google search) with 17 users and found that users judged our system to be significantly more interesting to interact with and learn from. They also rated our system above Google search (with the Samsung S-Voice interface) for tourist information tasks.

3 0.11848418 282 acl-2013-Predicting and Eliciting Addressee's Emotion in Online Dialogue

Author: Takayuki Hasegawa ; Nobuhiro Kaji ; Naoki Yoshinaga ; Masashi Toyoda

Abstract: While there have been many attempts to estimate the emotion of an addresser from her/his utterance, few studies have explored how her/his utterance affects the emotion of the addressee. This has motivated us to investigate two novel tasks: predicting the emotion of the addressee and generating a response that elicits a specific emotion in the addressee’s mind. We target Japanese Twitter posts as a source of dialogue data and automatically build training data for learning the predictors and generators. The feasibility of our approaches is assessed by using 1099 utterance-response pairs that are built by . five human workers.

4 0.11171058 86 acl-2013-Combining Referring Expression Generation and Surface Realization: A Corpus-Based Investigation of Architectures

Author: Sina Zarriess ; Jonas Kuhn

Abstract: We suggest a generation task that integrates discourse-level referring expression generation and sentence-level surface realization. We present a data set of German articles annotated with deep syntax and referents, including some types of implicit referents. Our experiments compare several architectures varying the order of a set of trainable modules. The results suggest that a revision-based pipeline, with intermediate linearization, significantly outperforms standard pipelines or a parallel architecture.

5 0.10388429 337 acl-2013-Tag2Blog: Narrative Generation from Satellite Tag Data

Author: Kapila Ponnamperuma ; Advaith Siddharthan ; Cheng Zeng ; Chris Mellish ; Rene van der Wal

Abstract: The aim of the Tag2Blog system is to bring satellite tagged wild animals “to life” through narratives that place their movements in an ecological context. Our motivation is to use such automatically generated texts to enhance public engagement with a specific species reintroduction programme, although the protocols developed here can be applied to any animal or other movement study that involves signal data from tags. We are working with one of the largest nature conservation charities in Europe in this regard, focusing on a single species, the red kite. We describe a system that interprets a sequence of locational fixes obtained from a satellite tagged individual, and constructs a story around its use of the landscape.

6 0.10229472 129 acl-2013-Domain-Independent Abstract Generation for Focused Meeting Summarization

7 0.090632737 184 acl-2013-Identification of Speakers in Novels

8 0.089914888 373 acl-2013-Using Conceptual Class Attributes to Characterize Social Media Users

9 0.088392958 168 acl-2013-Generating Recommendation Dialogs by Extracting Information from User Reviews

10 0.086309902 197 acl-2013-Incremental Topic-Based Translation Model Adaptation for Conversational Spoken Language Translation

11 0.068079256 315 acl-2013-Semi-Supervised Semantic Tagging of Conversational Understanding using Markov Topic Regression

12 0.067616172 249 acl-2013-Models of Semantic Representation with Visual Attributes

13 0.067450047 303 acl-2013-Robust multilingual statistical morphological generation models

14 0.066016339 21 acl-2013-A Statistical NLG Framework for Aggregated Planning and Realization

15 0.063893206 230 acl-2013-Lightly Supervised Learning of Procedural Dialog Systems

16 0.061626304 169 acl-2013-Generating Synthetic Comparable Questions for News Articles

17 0.061221179 375 acl-2013-Using Integer Linear Programming in Concept-to-Text Generation to Produce More Compact Texts

18 0.059515398 65 acl-2013-BRAINSUP: Brainstorming Support for Creative Sentence Generation

19 0.059299231 182 acl-2013-High-quality Training Data Selection using Latent Topics for Graph-based Semi-supervised Learning

20 0.053151917 46 acl-2013-An Infinite Hierarchical Bayesian Model of Phrasal Translation


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.157), (1, 0.02), (2, -0.027), (3, -0.021), (4, -0.022), (5, 0.016), (6, 0.076), (7, -0.068), (8, -0.012), (9, 0.047), (10, -0.084), (11, 0.026), (12, -0.035), (13, 0.009), (14, -0.024), (15, -0.058), (16, 0.02), (17, 0.032), (18, 0.044), (19, -0.072), (20, -0.115), (21, -0.101), (22, 0.11), (23, 0.033), (24, 0.052), (25, 0.057), (26, 0.114), (27, 0.036), (28, 0.002), (29, 0.007), (30, -0.003), (31, -0.01), (32, 0.068), (33, 0.113), (34, -0.037), (35, 0.071), (36, -0.035), (37, 0.027), (38, 0.036), (39, 0.031), (40, 0.075), (41, 0.099), (42, 0.073), (43, 0.006), (44, -0.048), (45, 0.056), (46, 0.121), (47, 0.008), (48, -0.178), (49, -0.078)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93565798 90 acl-2013-Conditional Random Fields for Responsive Surface Realisation using Global Features

Author: Nina Dethlefs ; Helen Hastie ; Heriberto Cuayahuitl ; Oliver Lemon

Abstract: Surface realisers in spoken dialogue systems need to be more responsive than conventional surface realisers. They need to be sensitive to the utterance context as well as robust to partial or changing generator inputs. We formulate surface realisation as a sequence labelling task and combine the use of conditional random fields (CRFs) with semantic trees. Due to their extended notion of context, CRFs are able to take the global utterance context into account and are less constrained by local features than other realisers. This leads to more natural and less repetitive surface realisation. It also allows generation from partial and modified inputs and is therefore applicable to incremental surface realisation. Results from a human rating study confirm that users are sensitive to this extended notion of context and assign ratings that are significantly higher (up to 14%) than those for taking only local context into account.

2 0.77796263 86 acl-2013-Combining Referring Expression Generation and Surface Realization: A Corpus-Based Investigation of Architectures

Author: Sina Zarriess ; Jonas Kuhn

Abstract: We suggest a generation task that integrates discourse-level referring expression generation and sentence-level surface realization. We present a data set of German articles annotated with deep syntax and referents, including some types of implicit referents. Our experiments compare several architectures varying the order of a set of trainable modules. The results suggest that a revision-based pipeline, with intermediate linearization, significantly outperforms standard pipelines or a parallel architecture.

3 0.73710674 337 acl-2013-Tag2Blog: Narrative Generation from Satellite Tag Data

Author: Kapila Ponnamperuma ; Advaith Siddharthan ; Cheng Zeng ; Chris Mellish ; Rene van der Wal

Abstract: The aim of the Tag2Blog system is to bring satellite tagged wild animals “to life” through narratives that place their movements in an ecological context. Our motivation is to use such automatically generated texts to enhance public engagement with a specific species reintroduction programme, although the protocols developed here can be applied to any animal or other movement study that involves signal data from tags. We are working with one of the largest nature conservation charities in Europe in this regard, focusing on a single species, the red kite. We describe a system that interprets a sequence of locational fixes obtained from a satellite tagged individual, and constructs a story around its use of the landscape.

4 0.66896671 141 acl-2013-Evaluating a City Exploration Dialogue System with Integrated Question-Answering and Pedestrian Navigation

Author: Srinivasan Janarthanam ; Oliver Lemon ; Phil Bartie ; Tiphaine Dalmas ; Anna Dickinson ; Xingkun Liu ; William Mackaness ; Bonnie Webber

Abstract: We present a city navigation and tourist information mobile dialogue app with integrated question-answering (QA) and geographic information system (GIS) modules that helps pedestrian users to navigate in and learn about urban environments. In contrast to existing mobile apps which treat these problems independently, our Android app addresses the problem of navigation and touristic questionanswering in an integrated fashion using a shared dialogue context. We evaluated our system in comparison with Samsung S-Voice (which interfaces to Google navigation and Google search) with 17 users and found that users judged our system to be significantly more interesting to interact with and learn from. They also rated our system above Google search (with the Samsung S-Voice interface) for tourist information tasks.

5 0.63942772 190 acl-2013-Implicatures and Nested Beliefs in Approximate Decentralized-POMDPs

Author: Adam Vogel ; Christopher Potts ; Dan Jurafsky

Abstract: Conversational implicatures involve reasoning about multiply nested belief structures. This complexity poses significant challenges for computational models of conversation and cognition. We show that agents in the multi-agent DecentralizedPOMDP reach implicature-rich interpretations simply as a by-product of the way they reason about each other to maximize joint utility. Our simulations involve a reference game of the sort studied in psychology and linguistics as well as a dynamic, interactional scenario involving implemented artificial agents.

6 0.61143458 21 acl-2013-A Statistical NLG Framework for Aggregated Planning and Realization

7 0.60907155 184 acl-2013-Identification of Speakers in Novels

8 0.58564067 1 acl-2013-"Let Everything Turn Well in Your Wife": Generation of Adult Humor Using Lexical Constraints

9 0.580697 375 acl-2013-Using Integer Linear Programming in Concept-to-Text Generation to Produce More Compact Texts

10 0.53890419 239 acl-2013-Meet EDGAR, a tutoring agent at MONSERRATE

11 0.53513855 36 acl-2013-Adapting Discriminative Reranking to Grounded Language Learning

12 0.52035171 321 acl-2013-Sign Language Lexical Recognition With Propositional Dynamic Logic

13 0.51054972 65 acl-2013-BRAINSUP: Brainstorming Support for Creative Sentence Generation

14 0.4927496 129 acl-2013-Domain-Independent Abstract Generation for Focused Meeting Summarization

15 0.49175045 203 acl-2013-Is word-to-phone mapping better than phone-phone mapping for handling English words?

16 0.47624046 282 acl-2013-Predicting and Eliciting Addressee's Emotion in Online Dialogue

17 0.46154648 303 acl-2013-Robust multilingual statistical morphological generation models

18 0.45879835 311 acl-2013-Semantic Neighborhoods as Hypergraphs

19 0.44734606 364 acl-2013-Typesetting for Improved Readability using Lexical and Syntactic Information

20 0.44196057 360 acl-2013-Translating Italian connectives into Italian Sign Language


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.046), (6, 0.054), (11, 0.048), (15, 0.013), (24, 0.051), (26, 0.041), (28, 0.016), (35, 0.063), (42, 0.099), (48, 0.035), (70, 0.04), (83, 0.235), (88, 0.054), (90, 0.046), (95, 0.061)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.79313529 90 acl-2013-Conditional Random Fields for Responsive Surface Realisation using Global Features

Author: Nina Dethlefs ; Helen Hastie ; Heriberto Cuayahuitl ; Oliver Lemon

Abstract: Surface realisers in spoken dialogue systems need to be more responsive than conventional surface realisers. They need to be sensitive to the utterance context as well as robust to partial or changing generator inputs. We formulate surface realisation as a sequence labelling task and combine the use of conditional random fields (CRFs) with semantic trees. Due to their extended notion of context, CRFs are able to take the global utterance context into account and are less constrained by local features than other realisers. This leads to more natural and less repetitive surface realisation. It also allows generation from partial and modified inputs and is therefore applicable to incremental surface realisation. Results from a human rating study confirm that users are sensitive to this extended notion of context and assign ratings that are significantly higher (up to 14%) than those for taking only local context into account.
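The abstract above frames surface realisation as a sequence labelling task with CRFs. The core decoding step of any linear-chain CRF is Viterbi search over label sequences; the sketch below shows that step in isolation, with purely illustrative labels and hand-set scores (the paper's actual features, labels, and training procedure are not reproduced here).

```python
# Minimal Viterbi decoding for a linear-chain model, as used when a CRF
# assigns one label per position of an utterance. Scores are illustrative.

def viterbi(labels, emit, trans, n):
    """emit[i][y]: score of label y at position i;
    trans[(y0, y)]: score of transitioning from label y0 to y."""
    # best[i][y] = best score of any label sequence ending in y at position i
    best = [{y: emit[0][y] for y in labels}]
    back = [{}]
    for i in range(1, n):
        best.append({})
        back.append({})
        for y in labels:
            prev, score = max(
                ((y0, best[i - 1][y0] + trans[(y0, y)]) for y0 in labels),
                key=lambda t: t[1],
            )
            best[i][y] = score + emit[i][y]
            back[i][y] = prev
    # backtrack from the highest-scoring final label
    y = max(best[n - 1], key=best[n - 1].get)
    seq = [y]
    for i in range(n - 1, 0, -1):
        y = back[i][y]
        seq.append(y)
    return list(reversed(seq))
```

Note that the transition scores are what give the model its (limited) context-sensitivity; the paper's contribution of global utterance features goes beyond what this local sketch captures.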

2 0.73329484 202 acl-2013-Is a 204 cm Man Tall or Small? Acquisition of Numerical Common Sense from the Web

Author: Katsuma Narisawa ; Yotaro Watanabe ; Junta Mizuno ; Naoaki Okazaki ; Kentaro Inui

Abstract: This paper presents novel methods for modeling numerical common sense: the ability to infer whether a given number (e.g., three billion) is large, small, or normal for a given context (e.g., number of people facing a water shortage). We first discuss the necessity of numerical common sense in solving textual entailment problems. We explore two approaches for acquiring numerical common sense. Both approaches start with extracting numerical expressions and their context from the Web. One approach estimates the distribution of numbers co-occurring within a context and examines whether a given value is large, small, or normal, based on the distribution. Another approach utilizes textual patterns with which speakers explicitly express their judgment about the value of a numerical expression. Experimental results demonstrate the effectiveness of both approaches.
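The first approach described above judges a value against the distribution of numbers seen for a context. A minimal sketch of that idea, using a z-score over a hypothetical sample of Web-extracted values (the function name, sample, and threshold are illustrative, not from the paper):

```python
# Judge whether a value is "large", "small", or "normal" relative to the
# distribution of numbers observed for a context.
import statistics

def judge(value, observed, z_threshold=1.0):
    mean = statistics.mean(observed)
    sd = statistics.pstdev(observed)
    if sd == 0:
        return "normal" if value == mean else ("large" if value > mean else "small")
    z = (value - mean) / sd
    if z > z_threshold:
        return "large"
    if z < -z_threshold:
        return "small"
    return "normal"

# e.g. hypothetical heights (cm) collected for a "man is ___ cm tall" context
heights = [165, 170, 172, 175, 178, 180, 182, 185]
print(judge(204, heights))  # prints "large", i.e. a 204 cm man is tall
```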

3 0.64257568 185 acl-2013-Identifying Bad Semantic Neighbors for Improving Distributional Thesauri

Author: Olivier Ferret

Abstract: Distributional thesauri are now widely used in a large number of Natural Language Processing tasks. However, they are far from containing only interesting semantic relations. As a consequence, improving such thesauri is an important issue that is mainly tackled indirectly through the improvement of semantic similarity measures. In this article, we propose a more direct approach focusing on the identification of the neighbors of a thesaurus entry that are not semantically linked to this entry. This identification relies on a discriminative classifier, trained from examples selected without supervision, for building a distributional model of the entry in texts. Its bad neighbors are found by applying this classifier to a representative set of occurrences of each of these neighbors. We evaluate the interest of this method for a large set of English nouns with various frequencies.
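The abstract above trains a discriminative classifier per entry; the toy sketch below illustrates only the underlying intuition with a much simpler stand-in: profile an entry by its observed contexts and flag neighbours whose contexts diverge too far. All names, data, and the threshold are hypothetical, and cosine overlap replaces the paper's classifier.

```python
# Illustrative neighbour filtering: compare a thesaurus entry's
# bag-of-contexts profile against each candidate neighbour's profile.
import math
from collections import Counter

def profile(contexts):
    """Bag-of-words profile from a list of token lists."""
    counts = Counter()
    for tokens in contexts:
        counts.update(tokens)
    return counts

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def bad_neighbours(entry_contexts, neighbour_contexts, threshold=0.2):
    """Return neighbours whose context profile barely overlaps the entry's."""
    entry_prof = profile(entry_contexts)
    return [n for n, ctxs in neighbour_contexts.items()
            if cosine(entry_prof, profile(ctxs)) < threshold]
```

For example, with entry contexts `[["pet", "fur", "purr"]]`, a neighbour whose contexts share "pet" and "fur" survives, while one occurring only with "engine" and "oil" is flagged as bad.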

4 0.62619454 207 acl-2013-Joint Inference for Fine-grained Opinion Extraction

Author: Bishan Yang ; Claire Cardie

Abstract: This paper addresses the task of fine-grained opinion extraction: the identification of opinion-related entities (the opinion expressions, the opinion holders, and the targets of the opinions) and the relations between opinion expressions and their targets and holders. Most existing approaches tackle the extraction of opinion entities and opinion relations in a pipelined manner, where the interdependencies among different extraction stages are not captured. We propose a joint inference model that leverages knowledge from predictors that optimize subtasks of opinion extraction, and seeks a globally optimal solution. Experimental results demonstrate that our joint inference approach significantly outperforms traditional pipeline methods and baselines that tackle subtasks in isolation for the problem of opinion extraction.

5 0.59175432 56 acl-2013-Argument Inference from Relevant Event Mentions in Chinese Argument Extraction

Author: Peifeng Li ; Qiaoming Zhu ; Guodong Zhou

Abstract: As a paratactic language, sentence-level argument extraction in Chinese suffers much from the frequent occurrence of ellipsis with regard to inter-sentence arguments. To resolve this problem, this paper proposes a novel global argument inference model to explore specific relationships, such as Coreference, Sequence and Parallel, among relevant event mentions to recover those inter-sentence arguments in the sentence, discourse and document layers which represent the cohesion of an event or a topic. Evaluation on the ACE 2005 Chinese corpus justifies the effectiveness of our global argument inference model over a state-of-the-art baseline.

6 0.58927643 226 acl-2013-Learning to Prune: Context-Sensitive Pruning for Syntactic MT

7 0.58779907 18 acl-2013-A Sentence Compression Based Framework to Query-Focused Multi-Document Summarization

8 0.58572388 127 acl-2013-Docent: A Document-Level Decoder for Phrase-Based Statistical Machine Translation

9 0.58546007 225 acl-2013-Learning to Order Natural Language Texts

10 0.58527571 68 acl-2013-Bilingual Data Cleaning for SMT using Graph-based Random Walk

11 0.58427405 98 acl-2013-Cross-lingual Transfer of Semantic Role Labeling Models

12 0.58409309 38 acl-2013-Additive Neural Networks for Statistical Machine Translation

13 0.58237123 172 acl-2013-Graph-based Local Coherence Modeling

14 0.58204496 83 acl-2013-Collective Annotation of Linguistic Resources: Basic Principles and a Formal Model

15 0.58168423 353 acl-2013-Towards Robust Abstractive Multi-Document Summarization: A Caseframe Analysis of Centrality and Domain

16 0.58167762 343 acl-2013-The Effect of Higher-Order Dependency Features in Discriminative Phrase-Structure Parsing

17 0.58150035 155 acl-2013-Fast and Accurate Shift-Reduce Constituent Parsing

18 0.58148861 166 acl-2013-Generalized Reordering Rules for Improved SMT

19 0.58040702 70 acl-2013-Bilingually-Guided Monolingual Dependency Grammar Induction

20 0.57769179 223 acl-2013-Learning a Phrase-based Translation Model from Monolingual Data with Application to Domain Adaptation