emnlp emnlp2013 emnlp2013-106 knowledge-graph by maker-knowledge-mining

106 emnlp-2013-Inducing Document Plans for Concept-to-Text Generation


Source: pdf

Author: Ioannis Konstas ; Mirella Lapata

Abstract: In a language generation system, a content planner selects which elements must be included in the output text and the ordering between them. Recent empirical approaches perform content selection without any ordering and have thus no means to ensure that the output is coherent. In this paper we focus on the problem of generating text from a database and present a trainable end-to-end generation system that includes both content selection and ordering. Content plans are represented intuitively by a set of grammar rules that operate on the document level and are acquired automatically from training data. We develop two approaches: the first one is inspired by Rhetorical Structure Theory and represents the document as a tree of discourse relations between database records; the second one requires little linguistic sophistication and uses tree structures to represent global patterns of database record sequences within a document. Experimental evaluation on two domains yields considerable improvements over the state of the art for both approaches.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract In a language generation system, a content planner selects which elements must be included in the output text and the ordering between them. [sent-4, score-0.369]

2 In this paper we focus on the problem of generating text from a database and present a trainable end-to-end generation system that includes both content selection and ordering. [sent-6, score-0.505]

3 Content plans are represented intuitively by a set of grammar rules that operate on the document level and are acquired automatically from training data. [sent-7, score-0.441]

4 Generation systems typically follow a pipeline architecture consisting of three components: content planning (selecting and ordering the . [sent-12, score-0.305]

5 parts of the input to be mentioned in the output text), sentence planning (determining the structure and lexical content of individual sentences), and surface realization (verbalizing the chosen content in natural language). [sent-16, score-0.557]

6 , by treating sentence planning and surface realization as one component (Angeli et al. [sent-25, score-0.274]

7 , 2010), by implementing content selection without any document planning (Konstas and Lapata, 2012; Angeli et al. [sent-26, score-0.457]

8 , 2010; Kim and Mooney, 2010), or by eliminating content planning entirely (Belz, 2008; Wong and Mooney, 2007). [sent-27, score-0.305]

9 In this paper we present a trainable end-to-end generation system that captures all components of the traditional pipeline, including document planning. [sent-28, score-0.241]

10 , 2005; Belz, 2008; Chen and Mooney, 2008; Kim and Mooney, 2010), our model performs content planning (i. [sent-30, score-0.305]

11 Figure 1: Database records and corresponding text for (a) weather forecasting and (b) Windows troubleshooting. [sent-36, score-0.401]

12 The input to our model is a set of database records and collocated descriptions, examples of which are shown in Figure 1. [sent-43, score-0.555]

13 Given this input, we define a probabilistic context-free grammar (PCFG) that captures the structure of the database and how it can be verbalized. [sent-44, score-0.328]

14 Specifically, we extend the model of Konstas and Lapata (2012) which also uses a PCFG to perform content selection and surface realization, but does not capture any aspect of document planning. [sent-45, score-0.295]

15 We represent content plans with grammar rules which operate on the document level and are embedded on top of the original PCFG. [sent-46, score-0.551]

16 We essentially learn a discourse grammar following two approaches. [sent-47, score-0.238]

17 The first one is linguistically naive but applicable to multiple languages and domains; it extracts rules representing global patterns of record sequences within a sentence and among sentences from a training corpus. [sent-48, score-0.448]

18 The second approach learns document plans based on Rhetorical Structure Theory (RST; Mann and Thompson, 1988); it therefore has a solid linguistic foundation, but is resource intensive as it assumes access to a text-level discourse parser. [sent-49, score-0.331]

19 We learn document plans automatically using both representations and develop a tractable decoding algorithm for finding the best output, i. [sent-50, score-0.235]

20 To the best of our knowledge, this is the first data-driven model to incorporate document planning in a joint end-to-end system. [sent-53, score-0.292]

21 2 Related Work Content planning is a fundamental component in a natural language generation system. [sent-57, score-0.279]

22 It is therefore not surprising that many content planners have been based on theories of discourse coherence (Hovy, 1993; Scott and de Souza, 1990). [sent-59, score-0.34]

23 In all cases, content plans are created manually, sometimes through corpus analysis. [sent-61, score-0.248]

24 More recent data-driven work focuses on end-to-end systems rather than individual components, however without taking document planning into account. [sent-69, score-0.292]

25 (2009) that selects which database records to talk about and then use an existing surface realizer (Wong and Mooney, 2007) to render the chosen records in natural language. [sent-71, score-0.858]

26 They break record selection into a series of locally coherent decisions, by first deciding on what records to talk about. [sent-76, score-0.705]

27 Konstas and Lapata (2012) propose a joint model, which recasts content selection and surface realization into a parsing problem. [sent-79, score-0.244]

28 Their model optimizes the choice of records, fields and words simultaneously; however, they still select and order records locally. [sent-80, score-0.383]

29 We replace their content selection mechanism (which is based on a simple markovized chaining of records) with global document representations. [sent-81, score-0.289]

30 A plan in our model is identified either as a sequence of sentences, each containing a sequence of records, or as a tree where the internal nodes denote discourse information and the leaf nodes correspond to records. [sent-82, score-0.31]

31 3 Problem Formulation The generator takes as input a set of database records d and outputs a text g that verbalizes some of these records. [sent-83, score-0.56]

32 Each record token ri ∈ d, with 1 ≤ i≤ |d|, has a type ri. [sent-84, score-0.392]

33 For example, in Figure 1b, win-target is a record type with three fields: cmd (denotes the action the user must perform on an object on their screen, e. [sent-91, score-0.392]
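
To make the input representation concrete, here is a minimal Python sketch of a record token with a type and named fields. The class and the field values are our illustrative assumptions, not the authors' code: only the cmd field is named in the excerpt above, so the other field names are hypothetical.

```python
from dataclasses import dataclass

# Minimal sketch of the input representation: a record token has a type
# (e.g. "win-target") and a set of named fields. Only "cmd" is named in
# the surrounding text; the other field names and all values here are
# hypothetical, for illustration only.
@dataclass
class Record:
    rtype: str
    fields: dict

# One record token from the Windows troubleshooting domain (cf. Figure 1b).
r = Record(rtype="win-target",
           fields={"cmd": "click",          # action performed on screen
                   "obj": "start",          # hypothetical field name
                   "target": "settings"})   # hypothetical field name

# A scenario d is then simply a list of such record tokens.
d = [r]
```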

34 , database records paired with texts w (see Figure 1). [sent-98, score-0.492]

35 , it selects which types of records belong to each sentence (or phrase) and how these sentences (or phrases) should be ordered. [sent-101, score-0.333]

36 Then it selects appropriate record tokens for each type and progressively chooses the most relevant fields; then, based on the values of the fields, it generates the final text, word by word. [sent-102, score-0.451]

37 The latter is essentially a PCFG which captures both the structure of the input database and the way it renders into natural language. [sent-104, score-0.244]

38 This grammar-based approach lends itself well to the incorporation of document planning which has traditionally assumed tree-like representations. [sent-105, score-0.292]

39 , the relationship between records, records and fields, fields and words. [sent-109, score-0.383]

40 These rules are domain-independent and could be applied to any database provided it follows the same structure. [sent-110, score-0.25]

41 , in rule (2) i ≠ j, so that a record cannot emit itself). [sent-116, score-0.392]

42 Rule (1) defines the expansion from the start symbol S to the first record R of type start. [sent-117, score-0.462]

43 The rules in (2) implement content selection, by choosing appropriate records from the database and generating a sequence. [sent-118, score-0.666]

44 start) is a place-holder symbol for the set of fields of record token rj. [sent-122, score-0.457]

45 This method is locally optimal, since it only keeps track of the previous type of record for each re-write. [sent-123, score-0.392]

46 The rules in (3) conclude content selection on the field level, i. [sent-124, score-0.267]
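
As a rough illustration of how rules (1)–(3) can be represented, the sketch below encodes productions as (left-hand side, right-hand side) pairs with weights. The symbols follow the description above, but the tuple encoding is our assumption and the weights are placeholders standing in for probabilities learned from training data (the excerpt does not fix the estimator).

```python
# Rough sketch of the rule schema described above, for a few record types
# ("desktop", "start", "start-target" appear in the running example). The
# encoding and the placeholder weights are assumptions for illustration.
rules = {
    # (1) expansion from the start symbol to the first record of type start
    ("S", ("R(start)",)): 1.0,
    # (2) record-level content selection, markovized on the previous record
    # type only (hence "locally optimal"): R(t_i) -> FS(r_j, start) R(t_j)
    ("R(start)", ("FS(r1,start)", "R(desktop)")): 0.7,
    ("R(desktop)", ("FS(r2,start)", "R(start-target)")): 0.3,
    # (3) field-level content selection hanging off each FS symbol
    ("FS(r1,start)", ("F(r1,cmd)",)): 0.5,
}

def weight(lhs, rhs):
    """Look up the weight of a production; unseen rules get weight 0."""
    return rules.get((lhs, tuple(rhs)), 0.0)
```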

47 During training, the records, fields and values of database d and the words w from the associated text are observed, and the model learns the mapping between them. [sent-136, score-0.3]

48 The mapping between the database and the observed text is unknown and thus the weights of the rules define a hidden correspondence h between records, fields and their values. [sent-138, score-0.364]

49 Decoding Given a trained grammar G and an input scenario from a database d, the model generates text by finding the most likely derivation, i. [sent-139, score-0.472]

50 , the likelihood of the grammar for a given database input scenario d. [sent-167, score-0.403]
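
The decoder itself is not spelled out in the excerpt, but the locally markovized record chaining described above admits a standard Viterbi search. The sketch below finds the most likely record-type sequence under bigram transition probabilities; it is a deliberate simplification of the actual decoder, which searches jointly over records, fields and words.

```python
import math

# Simplified Viterbi sketch over record types only; the model in the paper
# decodes jointly over records, fields and words. `trans` maps
# (previous_type, type) -> probability, with "<s>" as the start state.
def best_type_sequence(types, trans, length):
    smooth = 1e-12  # floor for unseen transitions
    score = {t: math.log(trans.get(("<s>", t), smooth)) for t in types}
    backptrs = []
    for _ in range(length - 1):
        new_score, ptr = {}, {}
        for t in types:
            prev, s = max(
                ((p, score[p] + math.log(trans.get((p, t), smooth)))
                 for p in types),
                key=lambda x: x[1])
            new_score[t], ptr[t] = s, prev
        score = new_score
        backptrs.append(ptr)
    # trace back from the highest-scoring final type
    t = max(score, key=score.get)
    seq = [t]
    for ptr in reversed(backptrs):
        t = ptr[t]
        seq.append(t)
    return list(reversed(seq))

# e.g. best_type_sequence({"desktop", "start", "start-target"}, trans, 3)
```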

51 5 Extensions In this section we extend the model of Konstas and Lapata (2012) by developing two more sophisticated content selection approaches which are informed by a global plan of the document to be generated. [sent-169, score-0.307]

52 1 Planning with Record Sequences Grammar Our key idea is to replace the content selection mechanism of the original model with a document plan which essentially defines a grammar on record types. [sent-171, score-0.793]

53 Then a sentence is further split into a sequence of record types. [sent-173, score-0.383]

54 Contrary to the original model, we observe a complete sequence2 of record types, split into sentences. [sent-174, score-0.344]

55 This way we learn domain-specific patterns of frequently occurring record type sequences among the sentences of a document, as well as more local structures within a sentence. [sent-175, score-0.432]

56 We thus substitute rules (1)–(2) in Figure 2 with sub-grammar GRSE based on record type sequences: Definition 1(GRSE grammar) GRSE = {ΣR, NRSE, PRSE, D} 2Note that a sequence is different from a permutation, as we may allow repetitions or omissions of certain record types. [sent-176, score-0.839]

57 tj) · where t is a record type, ti, tj, tl and tm may overlap and ra, rk are record tokens of type ti and tj respectively. [sent-196, score-0.852]

58 · 1/|s(tj)| where s(t) is a function that returns the set of records with type t (Liang et al. [sent-212, score-0.38]

59 Similarly to the original grammar G, we employ the use of features (in parentheses) to denote a sequence of record types. [sent-215, score-0.525]

60 The same record types may recur in different sentences, but not in the same one. [sent-216, score-0.344]

61 The weight of rule (a) is simply the joint probability of all the record types present, ordered and segmented appropriately into sentences in the document, given the start symbol. [sent-217, score-0.426]

62 Once record types have been selected (on a per sentence basis) we move on to rule (b) which describes how each non-terminal SENT expands to an ordered sequence of records R, as they are observed within a sentence (see the terminal symbol ‘. [sent-218, score-0.808]

63 Notice that a record type ti may correspond to several record tokens ra. [sent-220, score-0.769]

64 Rules (3)–(5) in grammar G make decisions on these tokens based on the overall content of the database and the field/value selection. [sent-221, score-0.465]

65 The weight of this rule is the product of the weights of each record type. [sent-222, score-0.392]

66 {1, . . . , |s(t)|} for record type t, where |s(t)| is the number of records with type t. [sent-226, score-0.392]

67 Figure 3d shows an example tree for the database input in Figure 1b, using GRSE and assuming that the alignments between records and text are given. [sent-230, score-0.662]

68 The top level of the tree refers to the sequence of record types as they are observed in the text. [sent-231, score-0.429]

69 The first sentence contains three records with types ‘desktop’, ‘start’ and ‘start-target’, each corresponding to the textual segments click start, point to settings, and then click control panel. [sent-232, score-0.446]

70 The next level of the tree denotes the choice of record tokens for each sentence, provided that we have decided on the choice and order of their types (see Figure 3b). [sent-233, score-0.344]

71 In Figure 3d, the bottom-left sub-tree corresponds to the choice of the first three records of Figure 1b. [sent-234, score-0.306]

72 Rule (a) enumerates all possible combinations of record type sequences and the number grows exponentially even for a few record types and a small sequence size. [sent-237, score-0.815]

73 To tackle this problem, we extracted rules for GRSE from the training data, based on the assumption that there will be far fewer unique sequences of record types per dataset than exhaustively enumerating all possibilities. [sent-238, score-0.448]

74 For each scenario, we obtain a word-by-word alignment between the database records and the corresponding text. [sent-239, score-0.492]

75 We then map the aligned record tokens to their corresponding types, merge adjacent words with the same type and segment on punctuation (see Figure 3b). [sent-243, score-0.392]

76 For GRSE, we take the alignments of records on words and map them to their corresponding types (a); we then segment record types into sentences (b); and finally, create a tree using grammar GRSE (c). [sent-273, score-0.894]
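
The GRSE extraction pipeline just described is easy to mimic in a few lines. The sketch below assumes a per-word alignment (one record type or None per word) and relative-frequency rule weights; both the input format and the weighting are simplifying assumptions, whereas the paper obtains its alignments with the unsupervised model of Liang et al.

```python
from collections import Counter

# Sketch of GRSE rule extraction: map aligned words to record types, merge
# adjacent words with the same type, segment on punctuation, and count the
# resulting document-level sequences.
def extract_plan(words, types):
    """types[i] is the record type aligned to words[i], or None."""
    doc, sent, prev = [], [], None
    for word, t in zip(words, types):
        if t is not None and t != prev:
            sent.append(t)            # merge adjacent words of the same type
        if t is not None:
            prev = t
        if word in {".", "!", "?"}:   # sentence boundary
            if sent:
                doc.append(tuple(sent))
            sent, prev = [], None
    if sent:
        doc.append(tuple(sent))
    return tuple(doc)

# Each distinct plan becomes one rule D -> SENT ... SENT; relative
# frequencies over the training scenarios stand in for the rule weights,
# exploiting the fact that far fewer unique sequences occur than the
# exponential number of possibilities.
plan_counts = Counter()

def rule_weight(plan):
    return plan_counts[plan] / sum(plan_counts.values())
```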

77 For GRST, we segment the text into EDUs based on the records they align to (d) and output the discourse tree (omitted here for brevity’s sake); we build the document plan once we substitute the EDUs with their corresponding record types (e). [sent-274, score-1.003]

78 Note that the original grammar is limited to the generation of categorical and integer values. [sent-276, score-0.293]

79 v ∈ V where V is the set of words for the fields of type string, and gen str is a function that takes the value of a string-typed field f. [sent-282, score-0.253]

80 , database records paired with texts), we make the following assumption: each record corresponds to a unique non-overlapping span in the collocated text, and can therefore be mapped to an EDU. [sent-296, score-0.868]

81 Assuming the text has been segmented and aligned to a sequence of records, we can create a discourse tree with record types (in place of their corresponding EDUs) as leaf nodes. [sent-297, score-0.607]
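
A small sketch of this leaf-substitution step: given a discourse tree over EDUs (e.g. from a text-level RST parser) and the record type aligned to each EDU in left-to-right order, replace each leaf with its type. The Node class is an assumed format for the parser output, not an actual parser API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Assumed discourse-parser output format: internal nodes carry a relation
# label (e.g. "elaboration"); leaves correspond to EDUs.
@dataclass
class Node:
    relation: Optional[str] = None
    children: List["Node"] = field(default_factory=list)
    record_type: Optional[str] = None   # filled in on leaves below

def substitute_leaves(node, types_iter):
    """Replace each EDU leaf (left-to-right) with its aligned record type."""
    if not node.children:
        node.record_type = next(types_iter)
    else:
        for child in node.children:
            substitute_leaves(child, types_iter)
    return node

# usage: substitute_leaves(tree, iter(["desktop", "start", "start-target"]))
```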

82 Figure 3e gives the discourse tree for the database input of Figure 1b, using GRST. [sent-302, score-0.044]

83 The database has 12 record types, each scenario contains on average 36 records, 5. [sent-325, score-0.574]

84 Grammar Extraction and Parameter Setting We obtained alignments between database records and textual segments for both domains and grammars (GRSE and GRST) using the unsupervised model of Liang et al. [sent-350, score-0.577]

85 In both cases content plans were extracted from (noisy) unsupervised alignments. [sent-389, score-0.248]

86 ), semantic correctness (does the meaning conveyed by the text correspond to the database input? [sent-397, score-0.25]

87 This suggests that document plans induced solely from data are of similar quality to those informed by RST. [sent-412, score-0.235]

88 Their system defines trigger patterns that specifically lexicalize record fields containing numbers. [sent-415, score-0.421]

89 In contrast, on WINHELP it is difficult to explicitly specify such patterns, as none of the record fields are numeric; as a result their system performs poorly compared to [sent-416, score-0.421]

90 The heuristics performed mostly anchor matching between database records and words in the text (e. [sent-420, score-0.529]

91 This is probably because the dataset shows more structural variations in the choice of record types at the document level, and therefore the grammar extracted from the unsupervised alignments is noisier. [sent-428, score-0.639]

92 Interestingly, we observe that document planning improves system output overall, not only in terms of coherence. [sent-439, score-0.324]

93 As far as coherence is concerned, the two content planners are rated comparably (differences in the means are not significant). [sent-441, score-0.244]

94 In sum, we observe that integrating document planning either via GRSE or GRST boosts performance. [sent-444, score-0.292]

95 Document plans induced from record sequences exhibit similar performance, compared to those generated using expert-derived linguistic knowledge. [sent-445, score-0.522]

96 8 Conclusions In this paper, we have proposed an end-to-end system that generates text from database input and captures all components of the traditional generation pipeline, including document planning. [sent-449, score-0.494]

97 Document plans are induced automatically from training data and are represented intuitively by PCFG rules capturing the structure of the database and the way it renders to text. [sent-450, score-0.415]

98 Our second approach draws inspiration from Rhetorical Structure Theory (Mann and Thompson, 1988) and represents a document as a tree with intermediate nodes corresponding to discourse relations, and leaf nodes to database records. [sent-453, score-0.47]

99 Our models could also benefit from the development of more sophisticated planners either via grammar refinement or more expressive grammar formalisms (Cohn et al. [sent-458, score-0.363]

100 Empirically estimating order constraints for content planning in generation. [sent-531, score-0.305]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('grse', 0.379), ('record', 0.344), ('records', 0.306), ('grst', 0.253), ('weathergov', 0.221), ('planning', 0.195), ('database', 0.186), ('winhelp', 0.174), ('angeli', 0.165), ('konstas', 0.163), ('grammar', 0.142), ('plans', 0.138), ('rhetorical', 0.126), ('content', 0.11), ('edus', 0.105), ('document', 0.097), ('discourse', 0.096), ('duboue', 0.095), ('rst', 0.088), ('elaboration', 0.088), ('generation', 0.084), ('sent', 0.081), ('nrst', 0.079), ('planner', 0.079), ('planners', 0.079), ('fields', 0.077), ('lapata', 0.074), ('click', 0.07), ('rules', 0.064), ('weather', 0.058), ('liang', 0.058), ('alignments', 0.056), ('coherence', 0.055), ('selection', 0.055), ('tj', 0.05), ('str', 0.05), ('mann', 0.05), ('rule', 0.048), ('type', 0.048), ('howald', 0.047), ('nrse', 0.047), ('prse', 0.047), ('prst', 0.047), ('branavan', 0.047), ('reiter', 0.047), ('mooney', 0.047), ('tree', 0.046), ('pcfg', 0.046), ('realization', 0.046), ('plan', 0.045), ('leaf', 0.045), ('scenario', 0.044), ('mellish', 0.041), ('markovization', 0.041), ('gen', 0.04), ('sequences', 0.04), ('sequence', 0.039), ('henceforth', 0.038), ('field', 0.038), ('carlson', 0.037), ('text', 0.037), ('symbol', 0.036), ('terminal', 0.035), ('start', 0.034), ('string', 0.034), ('operating', 0.034), ('scenarios', 0.034), ('integer', 0.034), ('trainable', 0.033), ('tl', 0.033), ('surface', 0.033), ('ti', 0.033), ('categorical', 0.033), ('output', 0.032), ('generates', 0.032), ('relations', 0.032), ('collocated', 0.032), ('ikonstas', 0.032), ('kibble', 0.032), ('passwords', 0.032), ('xplantioe', 0.032), ('mckeown', 0.031), ('input', 0.031), ('theory', 0.031), ('williams', 0.029), ('windows', 0.029), ('participants', 0.029), ('domains', 0.029), ('productions', 0.027), ('renders', 0.027), ('markovized', 0.027), ('gn', 0.027), ('stent', 0.027), ('decisions', 0.027), ('selects', 0.027), ('components', 0.027), ('inf', 0.027), ('correctness', 0.027), ('sandra', 0.027), ('wong', 0.026)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 106 emnlp-2013-Inducing Document Plans for Concept-to-Text Generation

Author: Ioannis Konstas ; Mirella Lapata

Abstract: In a language generation system, a content planner selects which elements must be included in the output text and the ordering between them. Recent empirical approaches perform content selection without any ordering and have thus no means to ensure that the output is coherent. In this paper we focus on the problem of generating text from a database and present a trainable end-to-end generation system that includes both content selection and ordering. Content plans are represented intuitively by a set of grammar rules that operate on the document level and are acquired automatically from training data. We develop two approaches: the first one is inspired by Rhetorical Structure Theory and represents the document as a tree of discourse relations between database records; the second one requires little linguistic sophistication and uses tree structures to represent global patterns of database record sequences within a document. Experimental evaluation on two domains yields considerable improvements over the state of the art for both approaches.

2 0.1385375 76 emnlp-2013-Exploiting Discourse Analysis for Article-Wide Temporal Classification

Author: Jun-Ping Ng ; Min-Yen Kan ; Ziheng Lin ; Wei Feng ; Bin Chen ; Jian Su ; Chew Lim Tan

Abstract: In this paper we classify the temporal relations between pairs of events on an article-wide basis. This is in contrast to much of the existing literature which focuses on just event pairs which are found within the same or adjacent sentences. To achieve this, we leverage on discourse analysis as we believe that it provides more useful semantic information than typical lexico-syntactic features. We propose the use of several discourse analysis frameworks, including 1) Rhetorical Structure Theory (RST), 2) PDTB-styled discourse relations, and 3) topical text segmentation. We explain how features derived from these frameworks can be effectively used with support vector machines (SVM) paired with convolution kernels. Experiments show that our proposal is effective in improving on the state-of-the-art significantly by as much as 16% in terms of F1, even if we only adopt less-than-perfect automatic discourse analyzers and parsers. Making use of more accurate discourse analysis can further boost gains to 35%.

3 0.10565983 174 emnlp-2013-Single-Document Summarization as a Tree Knapsack Problem

Author: Tsutomu Hirao ; Yasuhisa Yoshida ; Masaaki Nishino ; Norihito Yasuda ; Masaaki Nagata

Abstract: Recent studies on extractive text summarization formulate it as a combinatorial optimization problem such as a Knapsack Problem, a Maximum Coverage Problem or a Budgeted Median Problem. These methods successfully improved summarization quality, but they did not consider the rhetorical relations between the textual units of a source document. Thus, summaries generated by these methods may lack logical coherence. This paper proposes a single document summarization method based on the trimming of a discourse tree. This is a two-fold process. First, we propose rules for transforming a rhetorical structure theory-based discourse tree into a dependency-based discourse tree, which allows us to take a tree-trimming approach to summarization. Second, we formulate the problem of trimming a dependency-based discourse tree as a Tree Knapsack Problem, then solve it with integer linear programming (ILP). Evaluation results showed that our method improved ROUGE scores.

4 0.09460748 127 emnlp-2013-Max-Margin Synchronous Grammar Induction for Machine Translation

Author: Xinyan Xiao ; Deyi Xiong

Abstract: Traditional synchronous grammar induction estimates parameters by maximizing likelihood, which only has a loose relation to translation quality. Alternatively, we propose a max-margin estimation approach to discriminatively inducing synchronous grammars for machine translation, which directly optimizes translation quality measured by BLEU. In the max-margin estimation of parameters, we only need to calculate Viterbi translations. This further facilitates the incorporation of various non-local features that are defined on the target side. We test the effectiveness of our max-margin estimation framework on a competitive hierarchical phrase-based system. Experiments show that our max-margin method significantly outperforms the traditional twostep pipeline for synchronous rule extraction by 1.3 BLEU points and is also better than previous max-likelihood estimation method.

5 0.062937878 19 emnlp-2013-Adaptor Grammars for Learning Non-Concatenative Morphology

Author: Jan A. Botha ; Phil Blunsom

Abstract: This paper contributes an approach for expressing non-concatenative morphological phenomena, such as stem derivation in Semitic languages, in terms of a mildly context-sensitive grammar formalism. This offers a convenient level of modelling abstraction while remaining computationally tractable. The nonparametric Bayesian framework of adaptor grammars is extended to this richer grammar formalism to propose a probabilistic model that can learn word segmentation and morpheme lexicons, including ones with discontiguous strings as elements, from unannotated data. Our experiments on Hebrew and three variants of Arabic data find that the additional expressiveness to capture roots and templates as atomic units improves the quality of concatenative segmentation and stem identification. We obtain 74% accuracy in identifying triliteral Hebrew roots, while performing morphological segmentation with an F1-score of 78.1.

6 0.062864155 40 emnlp-2013-Breaking Out of Local Optima with Count Transforms and Model Recombination: A Study in Grammar Induction

7 0.061274823 10 emnlp-2013-A Multi-Teraflop Constituency Parser using GPUs

8 0.059570875 152 emnlp-2013-Predicting the Presence of Discourse Connectives

9 0.059182324 194 emnlp-2013-Unsupervised Relation Extraction with General Domain Knowledge

10 0.057269983 119 emnlp-2013-Learning Distributions over Logical Forms for Referring Expression Generation

11 0.053492907 201 emnlp-2013-What is Hidden among Translation Rules

12 0.052434329 14 emnlp-2013-A Synchronous Context Free Grammar for Time Normalization

13 0.052310433 187 emnlp-2013-Translation with Source Constituency and Dependency Trees

14 0.050639316 167 emnlp-2013-Semi-Markov Phrase-Based Monolingual Alignment

15 0.050305735 50 emnlp-2013-Combining PCFG-LA Models with Dual Decomposition: A Case Study with Function Labels and Binarization

16 0.050206121 5 emnlp-2013-A Discourse-Driven Content Model for Summarising Scientific Articles Evaluated in a Complex Question Answering Task

17 0.049123235 24 emnlp-2013-Application of Localized Similarity for Web Documents

18 0.047510415 164 emnlp-2013-Scaling Semantic Parsers with On-the-Fly Ontology Matching

19 0.046066839 71 emnlp-2013-Efficient Left-to-Right Hierarchical Phrase-Based Translation with Improved Reordering

20 0.043568172 166 emnlp-2013-Semantic Parsing on Freebase from Question-Answer Pairs


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.188), (1, -0.009), (2, -0.007), (3, 0.093), (4, -0.05), (5, 0.02), (6, -0.002), (7, -0.024), (8, 0.006), (9, 0.03), (10, -0.026), (11, -0.024), (12, -0.035), (13, 0.096), (14, 0.031), (15, 0.117), (16, -0.076), (17, -0.01), (18, 0.038), (19, 0.076), (20, 0.021), (21, -0.017), (22, 0.041), (23, -0.07), (24, -0.041), (25, -0.034), (26, 0.068), (27, -0.223), (28, 0.093), (29, -0.153), (30, -0.083), (31, -0.019), (32, 0.099), (33, 0.061), (34, -0.087), (35, -0.057), (36, -0.183), (37, -0.114), (38, 0.026), (39, -0.108), (40, 0.074), (41, -0.002), (42, -0.095), (43, -0.035), (44, -0.087), (45, 0.112), (46, 0.006), (47, 0.022), (48, -0.019), (49, 0.078)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93097073 106 emnlp-2013-Inducing Document Plans for Concept-to-Text Generation

Author: Ioannis Konstas ; Mirella Lapata

Abstract: In a language generation system, a content planner selects which elements must be included in the output text and the ordering between them. Recent empirical approaches perform content selection without any ordering and have thus no means to ensure that the output is coherent. In this paper we focus on the problem of generating text from a database and present a trainable end-to-end generation system that includes both content selection and ordering. Content plans are represented intuitively by a set of grammar rules that operate on the document level and are acquired automatically from training data. We develop two approaches: the first one is inspired by Rhetorical Structure Theory and represents the document as a tree of discourse relations between database records; the second one requires little linguistic sophistication and uses tree structures to represent global patterns of database record sequences within a document. Experimental evaluation on two domains yields considerable improvements over the state of the art for both approaches.

2 0.68190229 174 emnlp-2013-Single-Document Summarization as a Tree Knapsack Problem

Author: Tsutomu Hirao ; Yasuhisa Yoshida ; Masaaki Nishino ; Norihito Yasuda ; Masaaki Nagata

Abstract: Recent studies on extractive text summarization formulate it as a combinatorial optimization problem such as a Knapsack Problem, a Maximum Coverage Problem or a Budgeted Median Problem. These methods successfully improved summarization quality, but they did not consider the rhetorical relations between the textual units of a source document. Thus, summaries generated by these methods may lack logical coherence. This paper proposes a single document summarization method based on the trimming of a discourse tree. This is a two-fold process. First, we propose rules for transforming a rhetorical structure theory-based discourse tree into a dependency-based discourse tree, which allows us to take a tree-trimming approach to summarization. Second, we formulate the problem of trimming a dependency-based discourse tree as a Tree Knapsack Problem, then solve it with integer linear programming (ILP). Evaluation results showed that our method improved ROUGE scores.

3 0.5738287 76 emnlp-2013-Exploiting Discourse Analysis for Article-Wide Temporal Classification

Author: Jun-Ping Ng ; Min-Yen Kan ; Ziheng Lin ; Wei Feng ; Bin Chen ; Jian Su ; Chew Lim Tan

Abstract: In this paper we classify the temporal relations between pairs of events on an article-wide basis. This is in contrast to much of the existing literature which focuses on just event pairs which are found within the same or adjacent sentences. To achieve this, we leverage on discourse analysis as we believe that it provides more useful semantic information than typical lexico-syntactic features. We propose the use of several discourse analysis frameworks, including 1) Rhetorical Structure Theory (RST), 2) PDTB-styled discourse relations, and 3) topical text segmentation. We explain how features derived from these frameworks can be effectively used with support vector machines (SVM) paired with convolution kernels. Experiments show that our proposal is effective in improving on the state-of-the-art significantly by as much as 16% in terms of F1, even if we only adopt less-than-perfect automatic discourse analyzers and parsers. Making use of more accurate discourse analysis can further boost gains to 35%.

4 0.53772217 14 emnlp-2013-A Synchronous Context Free Grammar for Time Normalization

Author: Steven Bethard

Abstract: We present an approach to time normalization (e.g. the day before yesterday ⇒ 2013-04-12) based on a synchronous context free grammar. Synchronous rules map the source language to formally defined operators for manipulating times (FINDENCLOSED, STARTATENDOF, etc.). Time expressions are then parsed using an extended CYK+ algorithm, and converted to a normalized form by applying the operators recursively. For evaluation, a small set of synchronous rules for English time expressions was developed. Our model outperforms HeidelTime, the best time normalization system in TempEval 2013, on four different time normalization corpora.

5 0.49388593 152 emnlp-2013-Predicting the Presence of Discourse Connectives

Author: Gary Patterson ; Andrew Kehler

Abstract: We present a classification model that predicts the presence or omission of a lexical connective between two clauses, based upon linguistic features of the clauses and the type of discourse relation holding between them. The model is trained on a set of high frequency relations extracted from the Penn Discourse Treebank and achieves an accuracy of 86.6%. Analysis of the results reveals that the most informative features relate to the discourse dependencies between sequences of coherence relations in the text. We also present results of an experiment that provides insight into the nature and difficulty of the task.

6 0.49151996 10 emnlp-2013-A Multi-Teraflop Constituency Parser using GPUs

7 0.42223978 40 emnlp-2013-Breaking Out of Local Optima with Count Transforms and Model Recombination: A Study in Grammar Induction

8 0.38236177 19 emnlp-2013-Adaptor Grammars for Learning Non-Concatenative Morphology

9 0.36126328 5 emnlp-2013-A Discourse-Driven Content Model for Summarising Scientific Articles Evaluated in a Complex Question Answering Task

10 0.35479456 161 emnlp-2013-Rule-Based Information Extraction is Dead! Long Live Rule-Based Information Extraction Systems!

11 0.35461703 178 emnlp-2013-Success with Style: Using Writing Style to Predict the Success of Novels

12 0.34439394 61 emnlp-2013-Detecting Promotional Content in Wikipedia

13 0.34174424 153 emnlp-2013-Predicting the Resolution of Referring Expressions from User Behavior

14 0.32711968 63 emnlp-2013-Discourse Level Explanatory Relation Extraction from Product Reviews Using First-Order Logic

15 0.32410035 127 emnlp-2013-Max-Margin Synchronous Grammar Induction for Machine Translation

16 0.32267046 35 emnlp-2013-Automatically Detecting and Attributing Indirect Quotations

17 0.30871385 50 emnlp-2013-Combining PCFG-LA Models with Dual Decomposition: A Case Study with Function Labels and Binarization

18 0.30712482 203 emnlp-2013-With Blinkers on: Robust Prediction of Eye Movements across Readers

19 0.30280387 171 emnlp-2013-Shift-Reduce Word Reordering for Machine Translation

20 0.2968148 72 emnlp-2013-Elephant: Sequence Labeling for Word and Sentence Segmentation


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(3, 0.056), (18, 0.031), (21, 0.29), (22, 0.045), (30, 0.095), (45, 0.014), (50, 0.026), (51, 0.163), (66, 0.056), (71, 0.032), (75, 0.031), (77, 0.016), (96, 0.02), (97, 0.015)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.77012467 106 emnlp-2013-Inducing Document Plans for Concept-to-Text Generation

Author: Ioannis Konstas ; Mirella Lapata

Abstract: In a language generation system, a content planner selects which elements must be included in the output text and the ordering between them. Recent empirical approaches perform content selection without any ordering and have thus no means to ensure that the output is coherent. In this paper we focus on the problem of generating text from a database and present a trainable end-to-end generation system that includes both content selection and ordering. Content plans are represented intuitively by a set of grammar rules that operate on the document level and are acquired automatically from training data. We develop two approaches: the first one is inspired by Rhetorical Structure Theory and represents the document as a tree of discourse relations between database records; the second one requires little linguistic sophistication and uses tree structures to represent global patterns of database record sequences within a document. Experimental evaluation on two domains yields considerable improvements over the state of the art for both approaches.

2 0.58398557 56 emnlp-2013-Deep Learning for Chinese Word Segmentation and POS Tagging

Author: Xiaoqing Zheng ; Hanyang Chen ; Tianyu Xu

Abstract: This study explores the feasibility of performing Chinese word segmentation (CWS) and POS tagging by deep learning. We try to avoid task-specific feature engineering, and use deep layers of neural networks to discover relevant features to the tasks. We leverage large-scale unlabeled data to improve internal representation of Chinese characters, and use these improved representations to enhance supervised word segmentation and POS tagging models. Our networks achieved close to state-of-the-art performance with minimal computational cost. We also describe a perceptron-style algorithm for training the neural networks, as an alternative to maximum-likelihood method, to speed up the training process and make the learning algorithm easier to be implemented.

3 0.58104414 143 emnlp-2013-Open Domain Targeted Sentiment

Author: Margaret Mitchell ; Jacqui Aguilar ; Theresa Wilson ; Benjamin Van Durme

Abstract: We propose a novel approach to sentiment analysis for a low resource setting. The intuition behind this work is that sentiment expressed towards an entity, targeted sentiment, may be viewed as a span of sentiment expressed across the entity. This representation allows us to model sentiment detection as a sequence tagging problem, jointly discovering people and organizations along with whether there is sentiment directed towards them. We compare performance in both Spanish and English on microblog data, using only a sentiment lexicon as an external resource. By leveraging linguistically informed features within conditional random fields (CRFs) trained to minimize empirical risk, our best models in Spanish significantly outperform a strong baseline, and reach around 90% accuracy on the combined task of named entity recognition and sentiment prediction. Our models in English, trained on a much smaller dataset, are not yet statistically significant against their baselines.

4 0.57446039 47 emnlp-2013-Collective Opinion Target Extraction in Chinese Microblogs

Author: Xinjie Zhou ; Xiaojun Wan ; Jianguo Xiao

Abstract: Microblog messages pose severe challenges for current sentiment analysis techniques due to some inherent characteristics such as the length limit and informal writing style. In this paper, we study the problem of extracting opinion targets of Chinese microblog messages. Such fine-grained word-level task has not been well investigated in microblogs yet. We propose an unsupervised label propagation algorithm to address the problem. The opinion targets of all messages in a topic are collectively extracted based on the assumption that similar messages may focus on similar opinion targets. Topics in microblogs are identified by hashtags or using clustering algorithms. Experimental results on Chinese microblogs show the effectiveness of our framework and algorithms.

5 0.57373399 107 emnlp-2013-Interactive Machine Translation using Hierarchical Translation Models

Author: Jesus Gonzalez-Rubio ; Daniel Ortiz-Martinez ; Jose-Miguel Benedi ; Francisco Casacuberta

Abstract: Current automatic machine translation systems are not able to generate error-free translations and human intervention is often required to correct their output. Alternatively, an interactive framework that integrates the human knowledge into the translation process has been presented in previous works. Here, we describe a new interactive machine translation approach that is able to work with phrase-based and hierarchical translation models, and integrates error-correction all in a unified statistical framework. In our experiments, our approach outperforms previous interactive translation systems, and achieves estimated effort reductions of as much as 48% relative over a traditional post-edition system.

6 0.57320416 48 emnlp-2013-Collective Personal Profile Summarization with Social Networks

7 0.5730207 140 emnlp-2013-Of Words, Eyes and Brains: Correlating Image-Based Distributional Semantic Models with Neural Representations of Concepts

8 0.57300979 114 emnlp-2013-Joint Learning and Inference for Grammatical Error Correction

9 0.57227081 53 emnlp-2013-Cross-Lingual Discriminative Learning of Sequence Models with Posterior Regularization

10 0.5699985 175 emnlp-2013-Source-Side Classifier Preordering for Machine Translation

11 0.56977087 81 emnlp-2013-Exploring Demographic Language Variations to Improve Multilingual Sentiment Analysis in Social Media

12 0.56955254 38 emnlp-2013-Bilingual Word Embeddings for Phrase-Based Machine Translation

13 0.56951523 36 emnlp-2013-Automatically Determining a Proper Length for Multi-Document Summarization: A Bayesian Nonparametric Approach

14 0.56814069 167 emnlp-2013-Semi-Markov Phrase-Based Monolingual Alignment

15 0.56741726 13 emnlp-2013-A Study on Bootstrapping Bilingual Vector Spaces from Non-Parallel Data (and Nothing Else)

16 0.56719422 132 emnlp-2013-Mining Scientific Terms and their Definitions: A Study of the ACL Anthology

17 0.56692564 83 emnlp-2013-Exploring the Utility of Joint Morphological and Syntactic Learning from Child-directed Speech

18 0.56664324 15 emnlp-2013-A Systematic Exploration of Diversity in Machine Translation

19 0.56587398 82 emnlp-2013-Exploring Representations from Unlabeled Data with Co-training for Chinese Word Segmentation

20 0.56583285 157 emnlp-2013-Recursive Autoencoders for ITG-Based Translation