emnlp emnlp2012 emnlp2012-94 knowledge-graph by maker-knowledge-mining

94 emnlp-2012-Multiple Aspect Summarization Using Integer Linear Programming


Source: pdf

Author: Kristian Woodsend ; Mirella Lapata

Abstract: Multi-document summarization involves many aspects of content selection and surface realization. The summaries must be informative, succinct, grammatical, and obey stylistic writing conventions. We present a method where such individual aspects are learned separately from data (without any hand-engineering) but optimized jointly using an integer linear programme. The ILP framework allows us to combine the decisions of the expert learners and to select and rewrite source content through a mixture of objective setting, soft and hard constraints. Experimental results on the TAC-08 data set show that our model achieves state-of-the-art performance using ROUGE and significantly improves the informativeness of the summaries.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract Multi-document summarization involves many aspects of content selection and surface realization. [sent-4, score-0.551]

2 The summaries must be informative, succinct, grammatical, and obey stylistic writing conventions. [sent-5, score-0.416]

3 We present a method where such individual aspects are learned separately from data (without any hand-engineering) but optimized jointly using an integer linear programme. [sent-6, score-0.273]

4 The ILP framework allows us to combine the decisions of the expert learners and to select and rewrite source content through a mixture of objective setting, soft and hard constraints. [sent-7, score-0.587]

5 Of the many summarization paradigms that have been identified over the years (see Sparck Jones (1999) and Mani (2001) for comprehensive overviews), multi-document summarization, the task of producing summaries from clusters of thematically related documents, has consistently attracted attention. [sent-10, score-0.847]

6 Despite considerable research effort, the automatic generation of multi-document summaries that resemble those written by humans remains challenging. [sent-15, score-0.332]

7 This is primarily due to the task itself, which is complex and subject to several constraints: the summary must be maximally informative and minimally redundant, grammatical, and coherent, and must adhere to a pre-specified length and stylistic conventions. [sent-16, score-0.342]

8 An ideal model would learn to output summaries that simultaneously meet all these constraints from data (i.e., without hand-engineering). [sent-17, score-0.332]

9 Initial global formulations of the multi-document summarization task focused on extractive summarization and used approximate greedy algorithms for finding the sentences of the summary. [sent-22, score-0.52]

10 McDonald (2007) proposes an integer linear programming formulation that maximizes the sum of relevance scores of the selected sentences, penalized by their pairwise redundancy. [sent-25, score-0.222]

11 Gillick et al. (2008) develop an exact solution for a model similar to Filatova and Hatzivassiloglou (2004) under the assumption that the value of a summary is the sum of values of the unique concepts (approximated by bigrams) it contains. [sent-31, score-0.264]

12 Subsequent work (Berg-Kirkpatrick et al., 2011) extends this model to allow sentence compression in the form of word or constituent deletion. [sent-34, score-0.207]

13 In this paper we propose a model for multidocument summarization that attempts to cover many different aspects of the task such as content selection, surface realization, paraphrasing, and stylistic conventions. [sent-35, score-0.591]

14 These aspects are learned separately using specific “expert” predictors, but are optimized jointly using an integer linear programming model (ILP) to generate the output summary. [sent-36, score-0.306]

15 All experts are learned from data without requiring additional annotation over and above the summaries written for each document cluster. [sent-37, score-0.517]

16 Our predictors include the use of unique bigram information to model content and avoid redundancy, positional information to model important and poor locations of content, and language modeling to capture stylistic conventions. [sent-38, score-0.467]

17 The experts work collaboratively to rewrite the content using rules extracted from document clusters and model summaries. [sent-40, score-0.573]

18 Specifically, we propose quasi-synchronous tree substitution grammar (QTSG) as a flexible formalism to learn general tree edits from loosely-aligned phrase structure trees. [sent-42, score-0.213]

19 We evaluate our model on the 100-word “non-update” portion of the TAC-08 data set. (Our task is standard multi-document summarization and should not be confused with “guided” summarization, where system and human summarizers are given a list of important aspects to cover in the summary.) [sent-43, score-0.476]

20 There are thus requirements (e.g., relating to content or style) a summary must meet, but these are learned rather than specified in advance. [sent-46, score-0.407]

21 Experimental results show that our method obtains performance comparable to, and in some cases superior to, the state of the art in terms of ROUGE and human ratings of summary grammaticality and informativeness. [sent-48, score-0.289]

22 As all of the different experts are learned from data, the model could easily adapt to other summarization styles or conventions as needed. [sent-50, score-0.335]

23 ILP-based models have been developed for several subtasks ranging from sentence compression (Clarke and Lapata, 2008) to single- and multi-document summarization (McDonald, 2007; Martins and Smith, 2009; Gillick and Favre, 2009; Woodsend and Lapata, 2010; Berg-Kirkpatrick et al., 2011). [sent-52, score-0.412]

24 Most of these approaches are either purely extractive or implement a single rewrite operation, namely word deletion. [sent-56, score-0.249]

25 A key assumption in their model, which we also follow, is that input documents contain a variety of concepts, each of which is allocated a value, and the goal of a good summary is to maximize the sum of these values subject to the length constraint. [sent-60, score-0.289]

26 They essentially combine the same bigram content scoring system with features relating to the parse tree which they learn using a maximum-margin SVM trained on annotated gold-standard compressions. [sent-66, score-0.362]

27 Our multi-document summarization model jointly optimizes different aspects of the task involving both content selection and surface realization. [sent-67, score-0.551]

28 Our rewrite rules are encoded in quasi-synchronous tree substitution grammar and learned automatically from source documents and their summaries. [sent-75, score-0.703]

29 Secondly, our content selection component extends to features beyond the bigram horizon, as we learn to identify important concepts based on syntactic and positional information. [sent-77, score-0.382]

30 The different aspects (content, rewrite rules, style) of the summarization problem are addressed jointly; although decoupling learning from inference is perhaps less elegant from a modeling perspective, the learning process is more robust and reliable. [sent-82, score-0.379]

31 Modeling: There are many aspects to producing a good summary of multiple documents. [sent-83, score-0.292]

32 Stylistic features may be different in the summary from the original documents. [sent-85, score-0.226]

33 For instance, summaries tend to use more concise language, sources are not attributed as they are in news articles, and relative dates are not included. [sent-86, score-0.332]

34 In addition, the summary must be fluent, coherent, and respect a pre-specified maximum length requirement. [sent-87, score-0.27]

35 We present an approach where elements of all the above considerations are learned from training data by separate dedicated components, and then combined in an integer linear programme. [sent-88, score-0.256]

36 Content selection is performed partly through identifying the most salient topics (bigrams); an additional component learns to identify which information from the source documents should be in the summary based on positional information. [sent-89, score-0.582]

37 QTSG rules, learned from the training corpus, are used to generate alternative compressions and paraphrases of the source sentences, in the style suitable for the summaries. [sent-91, score-0.344]

38 Finally, an ILP model combines the output of these components into a summary, jointly optimizing content selection and surface realization preferences, and providing the flexibility to treat some components as soft constraints and others as hard constraints. [sent-92, score-0.558]

39 Nodes in the parse tree represent points where QTSG rules can be applied (and paraphrases generated), and they also represent decision points for the ILP. [sent-96, score-0.333]

40 We follow Gillick et al. (2008) in modeling the information content of the summary as the weighted sum of the individual information units it contains. [sent-100, score-0.356]

41 The weight w of each bigram is calculated from the number of source documents where the bigram was seen. [sent-102, score-0.334]
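
To make this weighting concrete, here is a minimal Python sketch (our own illustration, not the authors' code) that assigns each bigram a weight equal to the number of source documents in which it occurs:

```python
from collections import Counter
from itertools import chain

def sentence_bigrams(tokens):
    """Set of adjacent token pairs in one tokenized sentence."""
    return {(a, b) for a, b in zip(tokens, tokens[1:])}

def bigram_weights(documents):
    """documents: one list of tokenized sentences per source document.
    A bigram's weight is the number of documents that contain it."""
    weights = Counter()
    for doc in documents:
        doc_bigrams = set(chain.from_iterable(sentence_bigrams(s) for s in doc))
        weights.update(doc_bigrams)
    return weights

# toy usage: ("the", "storm") occurs in both documents, so its weight is 2
docs = [[["the", "storm", "hit", "the", "coast"]],
        [["the", "storm", "caused", "flooding"]]]
w = bigram_weights(docs)
```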

42 The counting mechanism is achieved by linking the variables z indicating nodes in the parse tree and b indicating bigrams: b_j ≤ Σ_{i ∈ N : j ∈ B_i} z_i  ∀ j ∈ B (2), where B_i ⊂ B is the subset of bigrams that are contained in node i. [sent-107, score-0.326]

43 Specifically, sentences in the cluster documents were aligned to sentences from corresponding human summaries. [sent-117, score-0.227]

44 Alignment was based rather simply on identifying the sentence pairs with the highest number of overlapping bigrams, without compensating for sentence length, or matching the sequence of information in the summaries and source documents (Nelken and Schieber). [sent-118, score-0.94]
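
A rough sketch of this alignment heuristic (simplified from the description above; all names are ours):

```python
def bigrams(tokens):
    return {(a, b) for a, b in zip(tokens, tokens[1:])}

def align_by_bigram_overlap(summary_sents, source_sents):
    """Pair each tokenized summary sentence with the source sentence that
    shares the most bigrams with it; source sentences that are never chosen
    can then serve as negative training examples."""
    pairs = []
    for summ in summary_sents:
        sb = bigrams(summ)
        best = max(range(len(source_sents)),
                   key=lambda i: len(sb & bigrams(source_sents[i])))
        pairs.append((summ, source_sents[best]))
    return pairs
```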

45 Matched sentences in the source documents were given positive labels, while unaligned sentences were given negative labels. [sent-121, score-0.239]

46 We trained an SVM on this data (tree nodes and their labels) using surface features that do not overlap with bigram information: sentence and paragraph position, POS-tag information. [sent-123, score-0.304]

47 The summary can be given a salience score f_S(z) using the raw SVM prediction scores of the individual parse tree nodes: f_S(z) = Σ_{i ∈ N} (Φ(i) · θ) z_i (3), where Φ(i) is the feature vector for node i, and θ the weights learned by the SVM. [sent-125, score-0.611]
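
As a hedged sketch of how such per-node scores might be produced, using scikit-learn's LinearSVC as a stand-in for the SVM described above (feature extraction for Φ(i) is left abstract; all names are illustrative):

```python
from sklearn.svm import LinearSVC

def train_salience_svm(X_train, y_train):
    """X_train: feature vectors Phi(i) for tree nodes of aligned sentences,
    y_train: positive/negative labels derived from the alignment."""
    clf = LinearSVC()
    clf.fit(X_train, y_train)
    return clf

def node_salience_scores(clf, X_nodes):
    """Raw signed decision values stand in for Phi(i) . theta, i.e. the
    per-node coefficients of the salience term f_S(z)."""
    return clf.decision_function(X_nodes)
```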

48 Surface Realization Using Style: Some sentences in the source documents will make poor summary sentences, despite the information they contain, and therefore contrary to the predictions of the content selection indicators described above. [sent-127, score-0.719]

49 This may be because the source sentence is very short, or is expressed as a quotation, or contains many pronouns that will not be resolved when the sentence is extracted. [sent-128, score-0.207]

50 Our idea is to learn which sentences are poor from a stylistic perspective using again aligned training data. [sent-129, score-0.226]

51 We train a second SVM on the aligned sentences and their labels using surface features at the sentence level, such as sentence length and POS-tag information. [sent-130, score-0.267]

52 The predictions of the SVM are incorporated into the ILP as a hard constraint, by forcing all parse tree nodes within those sentences predicted as poor (the set N−) to be zero: z_i = 0  ∀ i ∈ N− (4). [sent-134, score-0.438]

53 Surface Realization Using Lexical Preferences: Human-written summaries differ from the source news articles in a number of ways. [sent-136, score-0.441]

54 They delete extraneous information, merge material from several sentences, employ paraphrases and syntactic transformations, change the order of the source sentences and replace phrases or clauses with more general or specific descriptions. [sent-137, score-0.233]

55 Aside from the logistics of gathering training data large enough to provide robust estimates, we believe that a more compelling approach is to focus on the words that are unlikely to appear in the summary despite appearing in the source documents. [sent-141, score-0.335]

56 Table 3 shows lexemes that appear in both source and summary documents, but where the likelihood of the lexeme appearing in the summary is much less than that of it appearing in the document, taking into account that the summary is much shorter anyway. [sent-143, score-0.834]

57 As the amount of training data tends to be limited (there are usually only a few human-written summaries available per document cluster), we use a unigram language model, but conceivably a longer-range n-gram could be employed in the same vein. [sent-154, score-0.421]

58 We incorporate preferences about summary language into the model as a soft constraint. [sent-155, score-0.325]
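
The excerpt does not spell out the exact scoring, but one plausible reading is a log-ratio of unigram probabilities, penalising lexemes that are much rarer in summary language than in the source documents; the sketch below is an assumed formulation, not the paper's:

```python
import math
from collections import Counter

def unigram_probs(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def summary_language_penalty(word, p_summary, p_source, floor=1e-6):
    """Negative when `word` is markedly less likely in summaries than in the
    source text; relative frequencies already normalise for summary length."""
    ratio = p_summary.get(word, floor) / p_source.get(word, floor)
    return min(0.0, math.log(ratio))  # only ever penalise, never reward
```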

59 Quasi-synchronous Tree Substitution Grammar: Rewrite rules involving substitutions, deletions and reorderings are captured in our model using a quasi-synchronous tree substitution grammar. [sent-160, score-0.302]

60 Given an input (source) sentence S1 or its parse tree T1, the QTSG contains rules for generating possible translation trees T2. [sent-161, score-0.293]

61 A grammar node in the target tree T2 is modeled on a subset of nodes in the source tree, with a rather loose alignment between the trees. [sent-162, score-0.466]

62 We extract QTSG rules from aligned source and summary sentence pairs represented by their phrase structure trees. [sent-163, score-0.537]

63 Direct parent nodes are aligned where more than one child node aligns. [sent-165, score-0.231]

64 We do not assume an alignment between source and target root nodes, nor do we require a surjective alignment of all target nodes to the source tree. [sent-167, score-0.398]

65 QTSG rules are then created from aligned nodes above the leaf node level if all the nodes in the target tree can be explained using nodes from the source. [sent-168, score-0.657]

66 Individual rewrite rules describe the mapping of source tree fragments into target tree fragments, and so the grammar represents the space of valid target trees that can be produced from a given source tree (Eisner, 2003; Cohn and Lapata, 2009). [sent-169, score-0.831]

67 Many of the rules relate to the compression of noun phrases through deletion, and examples are shown in the upper box. [sent-171, score-0.251]
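
To give a feel for such rules, here is a constructed example of a noun-phrase compression rule (our own illustration, not one of the rules in the paper's figure), written as a source/target pair of bracketed tree fragments:

```python
# Illustrative tree-substitution rule deleting a PP modifier from an NP
# (hypothetical example, not taken from the paper's grammar).
source_fragment = "(NP (NP (DT the) (NN mayor)) (PP (IN of) (NP (NNP Springfield))))"
target_fragment = "(NP (DT the) (NN mayor))"
np_compression_rule = (source_fragment, target_fragment)
```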

68 An important rewrite operation is the abstraction of a sentence from a more complex source sentence, adding final punctuation if necessary (lower box). [sent-173, score-0.332]

69 At generation, paraphrases are created from source sentence parse trees by identifying and applying QTSG rules with matching structure. [sent-174, score-0.391]

70 The transduction process starts at the root node of the parse tree, applying QTSG rules to sub-trees until leaf nodes are reached. [sent-175, score-0.348]

71 We use the set C ⊂ N to be the set of nodes where a choice of paraphrases is available, and C_i ⊂ N, i ∈ C, to be the actual paraphrases of i. [sent-180, score-0.278]

72 Where there are alternatives, it makes sense of course to select only one, which we implement using the constraint: Σ_{j ∈ C_i} z_j = z_i  ∀ i ∈ C (6). More generally, we need to constrain the output to ensure that a parse tree structure is maintained. [sent-181, score-0.221]

73 For each node i ∈ N, the set D_i ⊂ N contains the list of dependent nodes (both ancestors and descendants) of node i, so that each set D_i contains the nodes that depend on the presence of i. [sent-182, score-0.242]

74 We introduce a constraint to force node i to be present if any of its dependent nodes are chosen: z_j → z_i  ∀ i ∈ N, j ∈ D_i (7). [sent-183, score-0.241]
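
A minimal PuLP sketch of constraints (6) and (7), where the implication z_j → z_i is encoded as z_j ≤ z_i over binary variables; the node variables z and the sets C_i and D_i are assumed to have been built elsewhere, and all names are placeholders:

```python
from pulp import lpSum

def add_tree_constraints(prob, z, choice_sets, dependents):
    """z: dict node_id -> binary LpVariable.
    choice_sets: dict i -> paraphrase alternatives C_i (constraint 6).
    dependents:  dict i -> nodes D_i whose presence requires node i (constraint 7)."""
    # (6) exactly one paraphrase of node i is kept, and only if i itself is kept
    for i, c_i in choice_sets.items():
        prob += lpSum(z[j] for j in c_i) == z[i]
    # (7) selecting a dependent node j forces its governing node i into the summary
    for i, d_i in dependents.items():
        for j in d_i:
            prob += z[j] <= z[i]
    return prob
```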

75 The ILP Objective: The model we propose for generating a multi-document summary is expressed as an integer linear programme and incorporates the content selection and surface realization preferences, as well as the soft and hard constraints described in the preceding sections. [sent-184, score-0.846]

76 Note that the scores in the objective are for each tree node and not each sentence. [sent-187, score-0.206]
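
Putting the pieces together, a hedged PuLP sketch of how such a node-level objective and the accompanying hard constraints could be assembled: the bigram weights w, salience scores s and language-preference penalties p stand for the quantities discussed above, they are simply added here whereas the paper may combine them differently, and the per-node length bookkeeping is our own simplification.

```python
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum

def build_summary_ilp(nodes, bigrams, node_bigrams, w, s, p,
                      poor_nodes, node_len, max_len):
    """Sketch of the joint model: bigram coverage + node salience + language
    preferences in the objective, with style and length as hard constraints."""
    prob = LpProblem("multi_aspect_summary", LpMaximize)
    z = {i: LpVariable(f"z_{i}", cat=LpBinary) for i in nodes}                  # tree nodes
    b = {j: LpVariable(f"b_{k}", cat=LpBinary) for k, j in enumerate(bigrams)}  # bigram units

    # objective terms attach to individual nodes and bigrams, not to sentences
    prob += (lpSum(w[j] * b[j] for j in bigrams)
             + lpSum(s[i] * z[i] for i in nodes)
             + lpSum(p[i] * z[i] for i in nodes))

    # (2) a bigram only counts if some selected node contains it
    for j in bigrams:
        prob += b[j] <= lpSum(z[i] for i in nodes if j in node_bigrams[i])
    # (4) nodes of stylistically poor sentences are excluded outright
    for i in poor_nodes:
        prob += z[i] == 0
    # overall length limit on the selected material
    prob += lpSum(node_len[i] * z[i] for i in nodes) <= max_len
    return prob, z, b
```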

77 This affords the model flexibility: the content selection elements are generally not competing with each other to give a decision on a sentence (see McDonald (2007)). [sent-188, score-0.336]

78 The ILP is implicitly searching the grammar rules for ways to rewrite the sentence, with the aim of including the salient nodes while removing negative-scoring nodes (deleting a node sets its contribution back to zero, increasing the overall score). [sent-190, score-0.635]

79 Figure 2 shows an example of a source sentence where the bigram, salience and language preference components of the ILP work together to score nodes in the parse tree. [sent-191, score-0.467]

80 As a rewrite possibility, the rewrite rule shown bottom left is available, which will remove the negative node. [sent-193, score-0.38]

81 Because of the high compression rate in this task, sentence alignment leads to an unbalanced data set. [sent-205, score-0.247]

82 The classifiers on their own would thus not be great predictors of salience or style, but in practice they were useful for breaking ties in bigram scores. [sent-214, score-0.261]

83 The resulting integer linear programmes were solved using SCIP, and it took 55 seconds on average to read in and solve a document cluster problem. [sent-222, score-0.211]

84 We evaluated the output summaries in two ways, using automatic measures and human judgements. [sent-241, score-0.332]

85 We selected clusters from the test set and generated summaries with our model (and its lesser variations). [sent-251, score-0.374]

86 Finally, they were presented with a summary and asked to rate it along two dimensions: grammaticality (is the summary fluent and grammatical?) and informativeness. [sent-256, score-0.515]

87 The final columns show the number of source sentences, the average compression ratio, and the proportion of sentences modified. [sent-265, score-0.302]

88 The multiple aspects ILP system (MA-ILP) yields ROUGE scores similar to B-K, despite performing rewriting operations which increase the scope for error and without requiring any hand-crafted compression rules or manually annotated training data. [sent-267, score-0.383]

89 Clearly the bigram content indicators are an important element for the ROUGE scores, as their removal yields a reduction of 2. [sent-271, score-0.211]

90 The model without QTSG rules (ILP w/o QTSG) is effectively limited to sentence extraction, and removing rewrite rules also lowers ROUGE scores to levels similar to ICSI-1. [sent-273, score-0.409]

91 We also show the number of source sentences (Count), the average compression ratio (CR %) and the proportion of sentences modified (Mod %) by each system. [sent-277, score-0.337]

92 All the subsystems are more aggressive in their rewriting than when used in combination (higher TER, higher compression rate and a larger number of sentences are modified). [sent-283, score-0.259]

93 We elicited grammaticality and informativeness ratings for a randomly selected model summary, ICSI-1, B-K, the multiple aspect ILP (MA-ILP), and the ILP w/o style, which we included in this study as it performed best under ROUGE. [sent-286, score-0.226]

94 Notice that summaries created by the ILP w/o style are rated poorly by humans, contrary to ROUGE. [sent-291, score-0.46]

95 The style component stops very short sentences and quotations from being included in the summary. (Table 6: Example summaries generated by the multiple aspects model (MA-ILP).) [sent-292, score-0.493]

96 Such sentences and quotations are excluded even if they have quite high bigram or content scores. [sent-293, score-0.472]

97 Without it, the model tends to generate summaries that are fragmentary and lacking proper context, resulting in lower grammaticality (and informativeness) when judged by humans. [sent-294, score-0.395]

98 This is not entirely surprising as our model includes additional content selection elements over and above the bigram units. [sent-298, score-0.336]

99 Example output summaries of the full ILP model are shown in Table 6. [sent-300, score-0.332]

100 In the future, we also plan to test the ability of the model to adapt to other multi-document summarization tasks, where the location of summary information is not as regular as it is in news articles. [sent-306, score-0.431]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('qtsg', 0.351), ('summaries', 0.332), ('ilp', 0.259), ('summary', 0.226), ('summarization', 0.205), ('gillick', 0.189), ('woodsend', 0.175), ('rewrite', 0.174), ('compression', 0.158), ('rouge', 0.154), ('content', 0.13), ('salience', 0.112), ('source', 0.109), ('nodes', 0.1), ('tree', 0.1), ('style', 0.095), ('rules', 0.093), ('integer', 0.09), ('paraphrases', 0.089), ('realization', 0.089), ('flr', 0.088), ('svm', 0.086), ('stylistic', 0.084), ('bigram', 0.081), ('experts', 0.079), ('selection', 0.076), ('kristian', 0.075), ('extractive', 0.075), ('bigrams', 0.075), ('surface', 0.074), ('node', 0.071), ('zi', 0.07), ('informativeness', 0.068), ('predictors', 0.068), ('substitution', 0.067), ('aspects', 0.066), ('rewriting', 0.066), ('documents', 0.063), ('grammaticality', 0.063), ('soft', 0.062), ('aligned', 0.06), ('mirella', 0.059), ('ter', 0.059), ('tac', 0.058), ('benoit', 0.057), ('positional', 0.057), ('filatova', 0.057), ('fb', 0.057), ('document', 0.055), ('cohn', 0.054), ('parse', 0.051), ('learned', 0.051), ('favre', 0.051), ('fs', 0.051), ('deletion', 0.051), ('salient', 0.051), ('sentence', 0.049), ('elements', 0.049), ('lexemes', 0.047), ('poor', 0.047), ('grammar', 0.046), ('components', 0.046), ('abstractive', 0.044), ('dorit', 0.044), ('hochba', 0.044), ('lmax', 0.044), ('sparck', 0.044), ('spect', 0.044), ('expert', 0.042), ('deletions', 0.042), ('clusters', 0.042), ('alignment', 0.04), ('concepts', 0.038), ('deshpande', 0.038), ('goldstein', 0.038), ('nelken', 0.038), ('dilek', 0.038), ('tuesday', 0.038), ('stsg', 0.038), ('preferences', 0.037), ('hard', 0.035), ('objective', 0.035), ('sentences', 0.035), ('dedicated', 0.034), ('inderjeet', 0.034), ('cluster', 0.034), ('lapata', 0.034), ('separately', 0.034), ('leaf', 0.033), ('redundancy', 0.033), ('contrary', 0.033), ('programming', 0.033), ('linear', 0.032), ('negative', 0.032), ('importantly', 0.032), ('affords', 0.032), ('multidocument', 0.032), ('maximally', 0.032), ('shifts', 0.032), ('penalized', 0.032)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000005 94 emnlp-2012-Multiple Aspect Summarization Using Integer Linear Programming

Author: Kristian Woodsend ; Mirella Lapata

Abstract: Multi-document summarization involves many aspects of content selection and surface realization. The summaries must be informative, succinct, grammatical, and obey stylistic writing conventions. We present a method where such individual aspects are learned separately from data (without any hand-engineering) but optimized jointly using an integer linear programme. The ILP framework allows us to combine the decisions of the expert learners and to select and rewrite source content through a mixture of objective setting, soft and hard constraints. Experimental results on the TAC-08 data set show that our model achieves state-of-the-art performance using ROUGE and significantly improves the informativeness of the summaries.

2 0.23797138 56 emnlp-2012-Framework of Automatic Text Summarization Using Reinforcement Learning

Author: Seonggi Ryang ; Takeshi Abekawa

Abstract: We present a new approach to the problem of automatic text summarization called Automatic Summarization using Reinforcement Learning (ASRL) in this paper, which models the process of constructing a summary within the framework of reinforcement learning and attempts to optimize the given score function with the given feature representation of a summary. We demonstrate that the method of reinforcement learning can be adapted to automatic summarization problems naturally and simply, and other summarizing techniques, such as sentence compression, can be easily adapted as actions of the framework. The experimental results indicated ASRL was superior to the best performing method in DUC2004 and comparable to the state of the art ILP-style method, in terms of ROUGE scores. The results also revealed ASRL can search for sub-optimal solutions efficiently under conditions for effectively selecting features and the score function.

3 0.11070186 20 emnlp-2012-Answering Opinion Questions on Products by Exploiting Hierarchical Organization of Consumer Reviews

Author: Jianxing Yu ; Zheng-Jun Zha ; Tat-Seng Chua

Abstract: This paper proposes to generate appropriate answers for opinion questions about products by exploiting the hierarchical organization of consumer reviews. The hierarchy organizes product aspects as nodes following their parent-child relations. For each aspect, the reviews and corresponding opinions on this aspect are stored. We develop a new framework for opinion Questions Answering, which enables accurate question analysis and effective answer generation by making use the hierarchy. In particular, we first identify the (explicit/implicit) product aspects asked in the questions and their sub-aspects by referring to the hierarchy. We then retrieve the corresponding review fragments relevant to the aspects from the hierarchy. In order to gener- ate appropriate answers from the review fragments, we develop a multi-criteria optimization approach for answer generation by simultaneously taking into account review salience, coherence, diversity, and parent-child relations among the aspects. We conduct evaluations on 11 popular products in four domains. The evaluated corpus contains 70,359 consumer reviews and 220 questions on these products. Experimental results demonstrate the effectiveness of our approach.

4 0.11062826 99 emnlp-2012-On Amortizing Inference Cost for Structured Prediction

Author: Vivek Srikumar ; Gourab Kundu ; Dan Roth

Abstract: This paper deals with the problem of predicting structures in the context of NLP. Typically, in structured prediction, an inference procedure is applied to each example independently of the others. In this paper, we seek to optimize the time complexity of inference over entire datasets, rather than individual examples. By considering the general inference representation provided by integer linear programs, we propose three exact inference theorems which allow us to re-use earlier solutions for certain instances, thereby completely avoiding possibly expensive calls to the inference procedure. We also identify several approximation schemes which can provide further speedup. We instantiate these ideas to the structured prediction task of semantic role labeling and show that we can achieve a speedup of over 2.5 using our approach while retaining the guarantees of exactness and a further speedup of over 3 using approximations that do not degrade performance.

5 0.089161187 58 emnlp-2012-Generalizing Sub-sentential Paraphrase Acquisition across Original Signal Type of Text Pairs

Author: Aurelien Max ; Houda Bouamor ; Anne Vilnat

Abstract: This paper describes a study on the impact of the original signal (text, speech, visual scene, event) of a text pair on the task of both manual and automatic sub-sentential paraphrase acquisition. A corpus of 2,500 annotated sentences in English and French is described, and performance on this corpus is reported for an efficient system combination exploiting a large set of features for paraphrase recognition. A detailed quantified typology of subsentential paraphrases found in our corpus types is given.

6 0.087243013 135 emnlp-2012-Using Discourse Information for Paraphrase Extraction

7 0.080150634 18 emnlp-2012-An Empirical Investigation of Statistical Significance in NLP

8 0.078171536 27 emnlp-2012-Characterizing Stylistic Elements in Syntactic Structure

9 0.077821903 16 emnlp-2012-Aligning Predicates across Monolingual Comparable Texts using Graph-based Clustering

10 0.076842949 3 emnlp-2012-A Coherence Model Based on Syntactic Patterns

11 0.075837962 109 emnlp-2012-Re-training Monolingual Parser Bilingually for Syntactic SMT

12 0.071763739 33 emnlp-2012-Discovering Diverse and Salient Threads in Document Collections

13 0.07174132 127 emnlp-2012-Transforming Trees to Improve Syntactic Convergence

14 0.071347974 81 emnlp-2012-Learning to Map into a Universal POS Tagset

15 0.070826747 1 emnlp-2012-A Bayesian Model for Learning SCFGs with Discontiguous Rules

16 0.07050515 39 emnlp-2012-Enlarging Paraphrase Collections through Generalization and Instantiation

17 0.06937325 130 emnlp-2012-Unambiguity Regularization for Unsupervised Learning of Probabilistic Grammars

18 0.064836286 14 emnlp-2012-A Weakly Supervised Model for Sentence-Level Semantic Orientation Analysis with Multiple Experts

19 0.062911302 104 emnlp-2012-Parse, Price and Cut-Delayed Column and Row Generation for Graph Based Parsers

20 0.06082847 82 emnlp-2012-Left-to-Right Tree-to-String Decoding with Prediction


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.259), (1, -0.03), (2, -0.059), (3, 0.047), (4, 0.005), (5, 0.05), (6, -0.047), (7, -0.041), (8, -0.016), (9, 0.166), (10, 0.147), (11, 0.119), (12, -0.172), (13, -0.012), (14, -0.001), (15, 0.002), (16, 0.055), (17, 0.021), (18, 0.203), (19, -0.036), (20, 0.175), (21, 0.308), (22, 0.247), (23, -0.063), (24, -0.192), (25, -0.018), (26, -0.127), (27, -0.044), (28, 0.045), (29, 0.056), (30, 0.051), (31, 0.235), (32, -0.139), (33, 0.0), (34, 0.075), (35, -0.098), (36, -0.04), (37, -0.026), (38, -0.047), (39, 0.088), (40, 0.007), (41, 0.024), (42, -0.021), (43, 0.028), (44, -0.002), (45, -0.045), (46, 0.072), (47, -0.021), (48, 0.02), (49, -0.029)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95137703 94 emnlp-2012-Multiple Aspect Summarization Using Integer Linear Programming

Author: Kristian Woodsend ; Mirella Lapata

Abstract: Multi-document summarization involves many aspects of content selection and surface realization. The summaries must be informative, succinct, grammatical, and obey stylistic writing conventions. We present a method where such individual aspects are learned separately from data (without any hand-engineering) but optimized jointly using an integer linear programme. The ILP framework allows us to combine the decisions of the expert learners and to select and rewrite source content through a mixture of objective setting, soft and hard constraints. Experimental results on the TAC-08 data set show that our model achieves state-of-the-art performance using ROUGE and significantly improves the informativeness of the summaries.

2 0.82846808 56 emnlp-2012-Framework of Automatic Text Summarization Using Reinforcement Learning

Author: Seonggi Ryang ; Takeshi Abekawa

Abstract: We present a new approach to the problem of automatic text summarization called Automatic Summarization using Reinforcement Learning (ASRL) in this paper, which models the process of constructing a summary within the framework of reinforcement learning and attempts to optimize the given score function with the given feature representation of a summary. We demonstrate that the method of reinforcement learning can be adapted to automatic summarization problems naturally and simply, and other summarizing techniques, such as sentence compression, can be easily adapted as actions of the framework. The experimental results indicated ASRL was superior to the best performing method in DUC2004 and comparable to the state of the art ILP-style method, in terms of ROUGE scores. The results also revealed ASRL can search for sub-optimal solutions efficiently under conditions for effectively selecting features and the score function.

3 0.45880142 99 emnlp-2012-On Amortizing Inference Cost for Structured Prediction

Author: Vivek Srikumar ; Gourab Kundu ; Dan Roth

Abstract: This paper deals with the problem of predicting structures in the context of NLP. Typically, in structured prediction, an inference procedure is applied to each example independently of the others. In this paper, we seek to optimize the time complexity of inference over entire datasets, rather than individual examples. By considering the general inference representation provided by integer linear programs, we propose three exact inference theorems which allow us to re-use earlier solutions for certain instances, thereby completely avoiding possibly expensive calls to the inference procedure. We also identify several approximation schemes which can provide further speedup. We instantiate these ideas to the structured prediction task of semantic role labeling and show that we can achieve a speedup of over 2.5 using our approach while retaining the guarantees of exactness and a further speedup of over 3 using approximations that do not degrade performance.

4 0.37281469 33 emnlp-2012-Discovering Diverse and Salient Threads in Document Collections

Author: Jennifer Gillenwater ; Alex Kulesza ; Ben Taskar

Abstract: We propose a novel probabilistic technique for modeling and extracting salient structure from large document collections. As in clustering and topic modeling, our goal is to provide an organizing perspective into otherwise overwhelming amounts of information. We are particularly interested in revealing and exploiting relationships between documents. To this end, we focus on extracting diverse sets of threads—singlylinked, coherent chains of important documents. To illustrate, we extract research threads from citation graphs and construct timelines from news articles. Our method is highly scalable, running on a corpus of over 30 million words in about four minutes, more than 75 times faster than a dynamic topic model. Finally, the results from our model more closely resemble human news summaries according to several metrics and are also preferred by human judges.

5 0.33895147 27 emnlp-2012-Characterizing Stylistic Elements in Syntactic Structure

Author: Song Feng ; Ritwik Banerjee ; Yejin Choi

Abstract: Much of the writing styles recognized in rhetorical and composition theories involve deep syntactic elements. However, most previous research for computational stylometric analysis has relied on shallow lexico-syntactic patterns. Some very recent work has shown that PCFG models can detect distributional difference in syntactic styles, but without offering much insights into exactly what constitute salient stylistic elements in sentence structure characterizing each authorship. In this paper, we present a comprehensive exploration of syntactic elements in writing styles, with particular emphasis on interpretable characterization of stylistic elements. We present analytic insights with respect to the authorship attribution task in two different domains. ,

6 0.32665354 17 emnlp-2012-An "AI readability" Formula for French as a Foreign Language

7 0.31309494 18 emnlp-2012-An Empirical Investigation of Statistical Significance in NLP

8 0.27410477 130 emnlp-2012-Unambiguity Regularization for Unsupervised Learning of Probabilistic Grammars

9 0.2717407 16 emnlp-2012-Aligning Predicates across Monolingual Comparable Texts using Graph-based Clustering

10 0.26809147 1 emnlp-2012-A Bayesian Model for Learning SCFGs with Discontiguous Rules

11 0.26807371 3 emnlp-2012-A Coherence Model Based on Syntactic Patterns

12 0.26710391 135 emnlp-2012-Using Discourse Information for Paraphrase Extraction

13 0.26548773 109 emnlp-2012-Re-training Monolingual Parser Bilingually for Syntactic SMT

14 0.26228574 128 emnlp-2012-Translation Model Based Cross-Lingual Language Model Adaptation: from Word Models to Phrase Models

15 0.25307095 133 emnlp-2012-Unsupervised PCFG Induction for Grounded Language Learning with Highly Ambiguous Supervision

16 0.25128382 20 emnlp-2012-Answering Opinion Questions on Products by Exploiting Hierarchical Organization of Consumer Reviews

17 0.24836418 121 emnlp-2012-Supervised Text-based Geolocation Using Language Models on an Adaptive Grid

18 0.24638215 104 emnlp-2012-Parse, Price and Cut-Delayed Column and Row Generation for Graph Based Parsers

19 0.23946694 88 emnlp-2012-Minimal Dependency Length in Realization Ranking

20 0.23744527 23 emnlp-2012-Besting the Quiz Master: Crowdsourcing Incremental Classification Games


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(16, 0.058), (25, 0.018), (34, 0.047), (60, 0.083), (63, 0.519), (64, 0.011), (65, 0.011), (73, 0.013), (74, 0.042), (76, 0.034), (80, 0.024), (86, 0.019), (95, 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95712179 94 emnlp-2012-Multiple Aspect Summarization Using Integer Linear Programming

Author: Kristian Woodsend ; Mirella Lapata

Abstract: Multi-document summarization involves many aspects of content selection and surface realization. The summaries must be informative, succinct, grammatical, and obey stylistic writing conventions. We present a method where such individual aspects are learned separately from data (without any hand-engineering) but optimized jointly using an integer linear programme. The ILP framework allows us to combine the decisions of the expert learners and to select and rewrite source content through a mixture of objective setting, soft and hard constraints. Experimental results on the TAC-08 data set show that our model achieves state-of-the-art performance using ROUGE and significantly improves the informativeness of the summaries.

2 0.95411545 90 emnlp-2012-Modelling Sequential Text with an Adaptive Topic Model

Author: Lan Du ; Wray Buntine ; Huidong Jin

Abstract: Topic models are increasingly being used for text analysis tasks, often times replacing earlier semantic techniques such as latent semantic analysis. In this paper, we develop a novel adaptive topic model with the ability to adapt topics from both the previous segment and the parent document. For this proposed model, a Gibbs sampler is developed for doing posterior inference. Experimental results show that with topic adaptation, our model significantly improves over existing approaches in terms of perplexity, and is able to uncover clear sequential structure on, for example, Herman Melville’s book “Moby Dick”.

3 0.92402816 100 emnlp-2012-Open Language Learning for Information Extraction

Author: Mausam ; Michael Schmitz ; Stephen Soderland ; Robert Bart ; Oren Etzioni

Abstract: Open Information Extraction (IE) systems extract relational tuples from text, without requiring a pre-specified vocabulary, by identifying relation phrases and associated arguments in arbitrary sentences. However, stateof-the-art Open IE systems such as REVERB and WOE share two important weaknesses (1) they extract only relations that are mediated by verbs, and (2) they ignore context, thus extracting tuples that are not asserted as factual. This paper presents OLLIE, a substantially improved Open IE system that addresses both these limitations. First, OLLIE achieves high yield by extracting relations mediated by nouns, adjectives, and more. Second, a context-analysis step increases precision by including contextual information from the sentence in the extractions. OLLIE obtains 2.7 times the area under precision-yield curve (AUC) compared to REVERB and 1.9 times the AUC of WOEparse. –

4 0.91266894 17 emnlp-2012-An "AI readability" Formula for French as a Foreign Language

Author: Thomas Francois ; Cedrick Fairon

Abstract: This paper present a new readability formula for French as a foreign language (FFL), which relies on 46 textual features representative of the lexical, syntactic, and semantic levels as well as some of the specificities of the FFL context. We report comparisons between several techniques for feature selection and various learning algorithms. Our best model, based on support vector machines (SVM), significantly outperforms previous FFL formulas. We also found that semantic features behave poorly in our case, in contrast with some previous readability studies on English as a first language.

5 0.83700329 97 emnlp-2012-Natural Language Questions for the Web of Data

Author: Mohamed Yahya ; Klaus Berberich ; Shady Elbassuoni ; Maya Ramanath ; Volker Tresp ; Gerhard Weikum

Abstract: The Linked Data initiative comprises structured databases in the Semantic-Web data model RDF. Exploring this heterogeneous data by structured query languages is tedious and error-prone even for skilled users. To ease the task, this paper presents a methodology for translating natural language questions into structured SPARQL queries over linked-data sources. Our method is based on an integer linear program to solve several disambiguation tasks jointly: the segmentation of questions into phrases; the mapping of phrases to semantic entities, classes, and relations; and the construction of SPARQL triple patterns. Our solution harnesses the rich type system provided by knowledge bases in the web of linked data, to constrain our semantic-coherence objective function. We present experiments on both the . in question translation and the resulting query answering.

6 0.68682402 20 emnlp-2012-Answering Opinion Questions on Products by Exploiting Hierarchical Organization of Consumer Reviews

7 0.68033582 8 emnlp-2012-A Phrase-Discovering Topic Model Using Hierarchical Pitman-Yor Processes

8 0.66727287 115 emnlp-2012-SSHLDA: A Semi-Supervised Hierarchical Topic Model

9 0.62287712 103 emnlp-2012-PATTY: A Taxonomy of Relational Patterns with Semantic Types

10 0.5829584 33 emnlp-2012-Discovering Diverse and Salient Threads in Document Collections

11 0.57508796 42 emnlp-2012-Entropy-based Pruning for Phrase-based Machine Translation

12 0.57483983 128 emnlp-2012-Translation Model Based Cross-Lingual Language Model Adaptation: from Word Models to Phrase Models

13 0.57421571 124 emnlp-2012-Three Dependency-and-Boundary Models for Grammar Induction

14 0.57154024 27 emnlp-2012-Characterizing Stylistic Elements in Syntactic Structure

15 0.56572896 11 emnlp-2012-A Systematic Comparison of Phrase Table Pruning Techniques

16 0.55600566 14 emnlp-2012-A Weakly Supervised Model for Sentence-Level Semantic Orientation Analysis with Multiple Experts

17 0.5539149 3 emnlp-2012-A Coherence Model Based on Syntactic Patterns

18 0.55063748 19 emnlp-2012-An Entity-Topic Model for Entity Linking

19 0.54877794 23 emnlp-2012-Besting the Quiz Master: Crowdsourcing Incremental Classification Games

20 0.54567528 114 emnlp-2012-Revisiting the Predictability of Language: Response Completion in Social Media