acl acl2010 acl2010-39 knowledge-graph by maker-knowledge-mining

39 acl-2010-Automatic Generation of Story Highlights


Source: pdf

Author: Kristian Woodsend ; Mirella Lapata

Abstract: In this paper we present a joint content selection and compression model for single-document summarization. The model operates over a phrase-based representation of the source document which we obtain by merging information from PCFG parse trees and dependency graphs. Using an integer linear programming formulation, the model learns to select and combine phrases subject to length, coverage and grammar constraints. We evaluate the approach on the task of generating “story highlights”—a small number of brief, self-contained sentences that allow readers to quickly gather information on news stories. Experimental results show that the model’s output is comparable to human-written highlights in terms of both grammaticality and content.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract In this paper we present a joint content selection and compression model for single-document summarization. [sent-4, score-0.388]

2 The model operates over a phrase-based representation of the source document which we obtain by merging information from PCFG parse trees and dependency graphs. [sent-5, score-0.188]

3 Using an integer linear programming formulation, the model learns to select and combine phrases subject to length, coverage and grammar constraints. [sent-6, score-0.18]

4 Experimental results show that the model’s output is comparable to human-written highlights in terms of both grammaticality and content. [sent-8, score-0.789]

5 Since the latter is beyond the capabilities of current NLP technology, most work today focuses on extractive summarization, where a summary is created simply by identifying and subsequently concatenating the most important sentences in a document. [sent-14, score-0.199]

6 This is in marked contrast with hand-written summaries which often combine several pieces of information from the original document (Jing, 2002) and exhibit many rewrite operations such as substitutions, insertions, deletions, or reorderings. [sent-21, score-0.235]

7 Sentence compression is often regarded as a promising first step towards ameliorating some of the problems associated with extractive summarization. [sent-22, score-0.381]

8 Interfacing extractive summarization with a sentence compression module could improve the conciseness of the generated summaries and render them more informative (Jing, 2000; Lin, 2003; Zajic et al. [sent-25, score-0.678]

9 Despite the bulk of work on sentence compression and summarization (see Clarke and Lapata 2008 and Mani 2001 for overviews), only a handful of approaches attempt to do both in a joint model (Daumé III and Marcu, 2002; Daumé III, 2006; Lin, 2003; Martins and Smith, 2009). [sent-27, score-0.562]

10 One reason for this might be the performance of sentence compression systems which falls short of attaining grammaticality levels of human output. [sent-28, score-0.539]

11 For example, Clarke and Lapata (2008) evaluate a range of state-of-the-art compression systems across different domains and show that machine generated compressions are consistently perceived as worse than the human gold standard. [sent-29, score-0.329]

12 …summarization that incorporates compression into the task. [sent-35, score-0.329]

13 A key insight in our approach is to formulate summarization as a phrase rather than sentence extraction problem. [sent-36, score-0.33]

14 Obviously, our output summaries must meet additional requirements such as sentence length, overall length, topic coverage and, importantly, grammaticality. [sent-38, score-0.22]

15 We combine phrase and dependency information into a single data structure, which allows us to express grammaticality as constraints across phrase dependencies. [sent-39, score-0.502]

16 We apply our model to the task of generating highlights for a single document. [sent-41, score-0.643]

17 Examples of CNN news articles with human-authored highlights are shown in Table 1. [sent-42, score-0.692]

18 Importantly, they represent the gist of the entire document and thus often differ substantially from the first n sentences in the article (Svore et al. [sent-44, score-0.214]

19 Experimental results show that our model’s output is comparable to hand-written highlights both in terms of grammaticality and informativeness. [sent-47, score-0.789]

20 2 Related work Much effort in automatic summarization has been devoted to sentence extraction which is often formalized as a classification task (Kupiec et al. [sent-48, score-0.2]

21 Given appropriately annotated training data, a binary classifier learns to predict for each document sentence if it is worth extracting. [sent-50, score-0.171]

22 Wan and Paris (2008) segment sentences heuristically into clauses before extraction takes place, and show that this improves summarization quality. [sent-56, score-0.182]

23 A few previous approaches have attempted to interface sentence compression with summarization. [sent-62, score-0.392]

24 Martins and Smith (2009) formulate a joint sentence extraction and summarization model as an ILP. [sent-68, score-0.233]

25 In the context of multi-document summarization, Daumé III’s (2006) vine-growth model creates summaries incrementally, either by starting a new sentence or by growing already existing ones. [sent-71, score-0.193]

26 We also develop an ILP-based compression and summarization model, however, several key differences set our approach apart. [sent-73, score-0.466]

27 Firstly, content selection is performed at the phrase rather than sentence level. [sent-74, score-0.219]

28 Secondly, the combination of phrase and dependency information into a single data structure is new, and important in allowing us to express grammaticality as constraints across phrase dependencies, rather than resorting to a language model. … original highlights that accompanied each story. [sent-75, score-1.112]

29 Headline generation models typically extract individual words from a document to produce a very short summary, whereas we extract phrases and ensure that they are combined into grammatical sentences through our ILP constraints. [sent-81, score-0.255]

30 CNN highlights are written by humans; we aim to do this automatically. [sent-93, score-0.61]

31 As can be seen, the highlights are written in a compressed, almost telegraphic manner. [sent-101, score-0.647]

32 For example, the document sentence “The poll found 69 percent of blacks said King’s vision has been fulfilled. [sent-106, score-0.208]

33 In general, there is a fair amount of lexical overlap between document sentences and highlights (42.44%). [sent-109, score-0.763]

34 However, the correspondence between document sentences and highlights is not always one-to-one. [sent-110, score-0.763]

35 Also note that the highlights need not form a coherent summary: each of them is relatively stand-alone, and there is little co-referencing between them. [sent-112, score-0.61]

36 Figure 1: An example phrase structure (a) and dependency (b) tree for the sentence “But whites remain less optimistic, the survey found.” [sent-113, score-0.399]

37 Overall, we observe a high degree of compression both at the document and sentence level. [sent-120, score-0.5]

38 The highlights summary tends to be ten times shorter than the corresponding article. [sent-121, score-0.686]

39 Furthermore, individual highlights have almost half the length of document sentences. [sent-122, score-0.756]
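
To make these figures concrete, here is a minimal sketch of the two compression rates being described; all token counts are hypothetical and purely for illustration.

```python
# Minimal sketch of the document- and sentence-level compression statistics
# described above; all token counts are hypothetical, for illustration only.

def compression_rate(source_tokens: int, summary_tokens: int) -> float:
    """Fraction of the source length retained by the summary."""
    return summary_tokens / source_tokens

article_tokens = 760        # assumed length of one CNN article
highlights_tokens = 72      # assumed combined length of its highlights
avg_sentence_tokens = 24    # assumed average document sentence length
avg_highlight_tokens = 13   # assumed average highlight length

# Document level: the highlights are roughly a tenth of the article.
print(f"document-level: {compression_rate(article_tokens, highlights_tokens):.2f}")
# Sentence level: a highlight is roughly half as long as a document sentence.
print(f"sentence-level: {compression_rate(avg_sentence_tokens, avg_highlight_tokens):.2f}")
```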

40 4 Modeling The objective of our model is to create the most informative story highlights possible, subject to constraints relating to sentence length, overall summary length, topic coverage, and grammaticality. [sent-123, score-0.919]

41 Specifically, the model selects phrases from which to form the highlights, and each highlight is created from a single sentence through phrase deletion. [sent-126, score-0.434]

42 Sentence Representation We obtain syntactic information by parsing every sentence twice, once with a phrase structure parser and once with a dependency parser. [sent-135, score-0.24]

43 The phrase structure and dependency-based representations for the sentence “But whites remain less optimistic, the survey found.” are shown in Figure 1. [sent-136, score-0.352]

44 We then combine the output from the two parsers, by mapping the dependencies to the edges of the phrase structure tree in a greedy fashion, shown in Figure 2(a). [sent-138, score-0.192]

45 Starting at the top node of the dependency graph, we choose a node i and a dependency arc to node j. [sent-139, score-0.253]

46 We assign the label of the dependency i → j to the first unassigned edge from the top of the phrase structure tree. [sent-141, score-0.177]

47 Finally, leaf nodes are merged into parent phrases, until each phrase node contains a minimum of two tokens, shown in Figure 2(b). [sent-146, score-0.32]

48 Because of this minimum length rule, it is possible for a merged node to be a clause rather than a phrase, but in the subsequent description we will use the term phrase rather loosely to describe any merged leaf node. [sent-147, score-0.397]

49 Figure 2: Dependencies are mapped onto phrase structure tree (a) and leaf nodes are merged with parent phrases (b). [sent-151, score-0.339]
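
As an illustration of the merging step above, a minimal sketch of folding short leaf nodes into their parents until every remaining phrase node spans at least two tokens; the Node class is an assumed stand-in for a real parser's tree node, not the paper's implementation.

```python
# Sketch of the leaf-merging step: leaf nodes with fewer than two tokens are
# folded into their parent until every remaining "phrase" node spans at least
# two tokens. The Node class below is an assumed stand-in for a real
# phrase-structure tree node; token-order bookkeeping is simplified.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    tokens: List[str] = field(default_factory=list)     # tokens held directly by this node
    children: List["Node"] = field(default_factory=list)

MIN_TOKENS = 2

def merge_small_leaves(node: Node) -> Node:
    """Recursively absorb leaves shorter than MIN_TOKENS into their parent."""
    node.children = [merge_small_leaves(c) for c in node.children]
    kept = []
    for child in node.children:
        if not child.children and len(child.tokens) < MIN_TOKENS:
            node.tokens.extend(child.tokens)   # absorb the short leaf
        else:
            kept.append(child)
    node.children = kept
    return node

# Toy tree for "the survey found": the single-token leaf "found" is merged upwards.
tree = Node(children=[Node(tokens=["the", "survey"]), Node(tokens=["found"])])
merged = merge_small_leaves(tree)
print([c.tokens for c in merged.children], merged.tokens)   # [['the', 'survey']] ['found']
```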

50 ILP model The merged phrase structure tree, such as shown in Figure 2(b), is the actual input to our model. [sent-152, score-0.215]

51 Each phrase in the document is given a salience score. [sent-153, score-0.315]

52 We obtain these scores from the output of a supervised machine learning algorithm that predicts for each phrase whether it should be included in the highlights or not (see Section 5 for details). [sent-154, score-0.827]
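
A hedged sketch of what such a phrase-salience predictor could look like; the feature set, the toy training data, and the choice of logistic regression are illustrative assumptions, since the text only states that a supervised learner scores each phrase.

```python
# Illustrative phrase-salience scorer: a binary classifier whose positive-class
# probability serves as the salience score f_i of a phrase. The features, the
# toy training data and the use of logistic regression are assumptions; the
# text only says a supervised learner predicts whether a phrase belongs in the
# highlights.

import numpy as np
from sklearn.linear_model import LogisticRegression

def phrase_features(phrase_tokens, sentence_position, doc_tf):
    """Toy feature vector: phrase length, sentence position, summed term frequency."""
    return [
        len(phrase_tokens),
        sentence_position,
        sum(doc_tf.get(t.lower(), 0) for t in phrase_tokens),
    ]

# X: feature vectors of training phrases; y: 1 if the phrase was aligned to a
# highlight via the label-mapping rule described later, else 0 (toy values).
X_train = np.array([[3, 0, 5.0], [2, 14, 0.5], [4, 1, 3.0], [2, 20, 0.2]])
y_train = np.array([1, 0, 1, 0])
clf = LogisticRegression().fit(X_train, y_train)

def salience(phrase_tokens, sentence_position, doc_tf) -> float:
    x = np.array([phrase_features(phrase_tokens, sentence_position, doc_tf)])
    return float(clf.predict_proba(x)[0, 1])   # probability of "in the highlights"

print(salience(["the", "poll", "found"], 0, {"poll": 3, "found": 2}))
```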

53 Let S be the set of sentences in a document, P be the set of phrases, and Ps ⊂ P be the set of phrases in each sentence s ∈ S. [sent-155, score-0.18]

54 Let fi denote the salience score for phrase i, determined by the machine learning algorithm, and li is its length in tokens. [sent-158, score-0.168]

55 Let the sets Di ⊂ P, ∀i ∈ P, capture the phrase dependency information for each phrase i, where each set Di contains the phrases that depend on the presence of i. [sent-163, score-0.379]

56 Our objective function is given in Equation (1a): it is the sum of the salience scores of all the phrases chosen to form the highlights of a given document, subject to the constraints in Equations (1b)–(1j). [sent-164, score-0.897]

57 Constraint (1b) ensures that the generated highlights do not exceed a total budget of LT tokens. [sent-169, score-0.61]

58 Highlights on a small screen device would presumably be shorter than highlights for news articles on the web. [sent-171, score-0.692]

59 In particular, these constraints stop highlights formed from sentences at the beginning of the document (which tend to have high salience scores) from being too long. [sent-174, score-0.888]

60 We enforce grammatical correctness through constraint (1f) which ensures that the phrase dependencies are respected. [sent-179, score-0.171]

61 The phrase dependency constraints, contained in the set Di and enforced by (1f), are the result of two rules based on the typed dependency information: 1. [sent-182, score-0.224]

62 The parent node p of the current node i must always be included if i is, unless the edge p → i is of type ccomp (clausal complement) or advcl (adverbial clause), in which case it is possible to include i without including p. [sent-185, score-0.231]
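
A minimal sketch of this first rule, assuming a simple (parent, child, label) edge representation rather than any particular parser's API; the second rule mentioned below is not covered here.

```python
# Sketch of the first dependency rule above: selecting a phrase forces its
# parent unless the connecting edge is a clausal complement (ccomp) or an
# adverbial clause (advcl). The (parent, child, label) edge list is an assumed
# representation; D mirrors the sets D_i used by constraint (1f).

EXEMPT_LABELS = {"ccomp", "advcl"}

def build_dependency_sets(edges):
    """edges: iterable of (parent_phrase, child_phrase, dependency_label).
    Returns D where D[i] is the set of phrases whose selection requires i."""
    D = {}
    for parent, child, label in edges:
        if label not in EXEMPT_LABELS:
            D.setdefault(parent, set()).add(child)
    return D

# Edges for the Figure 1 sentence (labels assumed for illustration):
edges = [
    ("found", "But whites remain less optimistic", "ccomp"),
    ("found", "the survey", "nsubj"),
]
print(build_dependency_sets(edges))
# {'found': {'the survey'}}: choosing "the survey" forces "found" into the
# highlight, while the ccomp clause can be chosen on its own.
```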

63 If the phrase “the survey” is chosen, then the parent node “found” will be included, and from our first rule the ccomp phrase must also be included, which results in the output: “But whites remain less optimistic, the survey found.” [sent-188, score-0.57]

64 If, on the other hand, the clause “But whites remain less optimistic” is chosen, then due to our second rule there is no constraint that forces the parent phrase “found” to be included in the highlights. [sent-189, score-0.395]

65 Which output is chosen (if any) depends on the scores of the phrases involved, and the influence of the other constraints. [sent-192, score-0.167]

66 Constraint (1g) tells the ILP to create a highlight if one of its constituent phrases is chosen. [sent-193, score-0.182]

67 Finally, note that a maximum number of highlights NS can be set beforehand, and (1h) limits the highlights to this maximum. [sent-194, score-1.22]
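
Putting the pieces together, a hedged sketch of the phrase-selection ILP using the PuLP solver; the salience scores, lengths, and dependency sets are toy values, and only constraints (1b), (1f), (1g) and (1h) are shown, so this is an illustration rather than the paper's full formulation (per-highlight length bounds and topic coverage are omitted).

```python
# Sketch of the phrase-selection ILP using PuLP. Scores, lengths and the
# dependency sets are toy values; only a subset of constraints (1b), (1f),
# (1g), (1h) is shown, so this is an illustration, not the full model.

from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum

# Toy data: two sentences, each split into phrases with salience f_i and length l_i.
phrases = {                      # phrase id -> (sentence id, salience, length in tokens)
    "p1": ("s1", 0.9, 6), "p2": ("s1", 0.2, 4),
    "p3": ("s2", 0.7, 8), "p4": ("s2", 0.4, 5),
}
D = {"p1": ["p2"]}               # p2 depends on the presence of p1 (constraint 1f)
LT = 15                          # total token budget (constraint 1b)
NS = 1                           # maximum number of highlights (constraint 1h)
sentences = {"s1", "s2"}

prob = LpProblem("story_highlights", LpMaximize)
x = {i: LpVariable(f"x_{i}", cat=LpBinary) for i in phrases}      # phrase chosen
y = {s: LpVariable(f"y_{s}", cat=LpBinary) for s in sentences}    # highlight created

# Objective (1a): total salience of the chosen phrases.
prob += lpSum(f * x[i] for i, (_, f, _) in phrases.items())

# (1b) length budget over all chosen phrases.
prob += lpSum(l * x[i] for i, (_, _, l) in phrases.items()) <= LT

# (1f) phrase dependencies: a dependent phrase may only appear with its head.
for head, dependents in D.items():
    for dep in dependents:
        prob += x[dep] <= x[head]

# (1g) a highlight is created if any of its sentence's phrases is chosen.
for i, (s, _, _) in phrases.items():
    prob += x[i] <= y[s]

# (1h) at most NS highlights.
prob += lpSum(y.values()) <= NS

prob.solve()
chosen = [i for i in phrases if x[i].value() == 1]
print("chosen phrases:", chosen)
```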

68 Two annotators manually aligned the highlights and document sentences. [sent-197, score-0.718]

69 Specifically, each sentence in the document was assigned one of three alignment labels: must be in the summary (1), could be in the summary (2), and is not in the summary (3). [sent-198, score-0.399]

70 The annotators were asked to label document sentences whose content was identical to the highlights as “must be in the summary”, sentences with partially overlapping content as “could be in the summary” and the remainder as “should not be in the summary”. [sent-199, score-0.86]

71 The mapping of sentence labels to phrase labels was unsupervised: if the phrase came from a sentence labeled (1), and there was a unigram overlap (excluding stop words) between the phrase and any of the original highlights, we marked this phrase with a positive label. [sent-203, score-0.646]
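
A minimal sketch of this label-mapping rule; the stop-word list, the tokenization, and the example highlight are simplifying assumptions.

```python
# Sketch of the label-mapping rule described above: a phrase from a sentence
# labelled "must be in the summary" (1) gets a positive label if it shares at
# least one non-stop-word unigram with any of the original highlights.
# The stop-word list and whitespace tokenization are simplifying assumptions.

STOP_WORDS = {"the", "a", "an", "of", "in", "on", "and", "but", "to", "has", "been"}

def content_words(text: str) -> set:
    return {w.lower().strip(".,\"'") for w in text.split()} - STOP_WORDS

def phrase_label(phrase: str, sentence_label: int, highlights: list) -> int:
    """Return 1 (positive) or 0 (negative) for a training phrase."""
    if sentence_label != 1:                     # only "must be in summary" sentences
        return 0
    phrase_words = content_words(phrase)
    return int(any(phrase_words & content_words(h) for h in highlights))

# Toy example, loosely based on the poll story discussed earlier.
highlights = ["Poll: 69 percent of blacks say King's vision fulfilled"]
print(phrase_label("69 percent of blacks", 1, highlights))   # -> 1
print(phrase_label("the survey found", 1, highlights))        # -> 0
```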

72 Highlight generation We generated highlights for a test set of 600 documents. [sent-211, score-0.64]

73 The ILP model (see Equation (1)) was parametrized as follows: the maximum number of highlights NS was 4, the overall limit on length LT was 75 tokens, the length of each highlight was in the range of [8, 28] tokens, and the topic coverage set T contained the top 5 tf.idf terms. [sent-216, score-0.857]
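
A minimal sketch of how the topic coverage set T (the top 5 tf.idf terms) could be computed; the tokenization, smoothing, and toy corpus are assumptions, as the text does not specify these details.

```python
# Sketch of building the topic coverage set T: the five terms of the document
# with the highest tf.idf weight. Tokenization, idf smoothing and the toy
# corpus are assumptions made for illustration.

import math
from collections import Counter

def top_tfidf_terms(document: str, corpus: list, k: int = 5) -> list:
    """Return the k document terms with the highest tf.idf weight."""
    doc_tokens = document.lower().split()
    tf = Counter(doc_tokens)
    n_docs = len(corpus)
    def idf(term: str) -> float:
        df = sum(term in other.lower().split() for other in corpus)
        return math.log((n_docs + 1) / (df + 1)) + 1.0     # smoothed idf
    scores = {t: tf[t] * idf(t) for t in tf}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy corpus standing in for the news collection used to compute idf.
corpus = [
    "the poll found a racial divide over whether king's vision is fulfilled",
    "whites remain less optimistic the survey found",
    "stocks fall as markets react to weak earnings",
]
T = top_tfidf_terms(corpus[0], corpus, k=5)
print(T)   # five highest-weighted terms of the first document
```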

74 The solution was converted into highlights by concatenating the chosen leaf nodes in order. [sent-220, score-0.688]

75 Summarization In order to examine the generality of our model and compare with previous work, we also evaluated our system on a vanilla summarization task. [sent-224, score-0.17]

76 There are no sentence length or grammaticality constraints, as there is no sentence compression. [sent-239, score-0.311]

77 For the highlight generation task, the original CNN highlights were used as the reference. [sent-245, score-0.75]

78 In addition, we evaluated the generated highlights by eliciting human judgments. [sent-247, score-0.61]

79 Participants were presented with a news article and its corresponding highlights and were asked to rate the latter along three dimensions: informativeness (do the highlights represent the article’s main topics?), grammaticality, and verbosity. [sent-248, score-1.355]

80 An ideal system would receive high numbers for grammaticality and informativeness and a low number for verbosity. [sent-253, score-0.208]

81 We randomly selected nine documents from the test set and generated highlights with our model and the sentence-based ILP baseline. [sent-254, score-0.676]

82 We also included the original highlights as a gold standard. [sent-255, score-0.637]

83 Each document in the DUC-2002 dataset is paired with … (Footnote 4: a Latin square design ensured that subjects did not see two different highlights of the same document.) [sent-261, score-0.718]

84 In both measures, the ILP sentence baseline has the best recall, while the ILP phrase model has the best precision (the differences are statistically significant). [sent-265, score-0.226]
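
For reference, a minimal sketch of the unigram-overlap (ROUGE-1-style) recall and precision being traded off here; the actual experiments used the standard ROUGE toolkit, so this is illustrative only.

```python
# Illustrative ROUGE-1-style recall and precision: clipped unigram overlap
# between a system highlight and a reference highlight. The real experiments
# used the standard ROUGE toolkit; this sketch only shows why a model can
# trade recall (longer output) against precision (shorter, compressed output).

from collections import Counter

def rouge1(system: str, reference: str):
    sys_counts = Counter(system.lower().split())
    ref_counts = Counter(reference.lower().split())
    overlap = sum((sys_counts & ref_counts).values())
    recall = overlap / max(sum(ref_counts.values()), 1)
    precision = overlap / max(sum(sys_counts.values()), 1)
    return recall, precision

reference = "whites remain less optimistic the survey found"
short_sys = "whites remain less optimistic"      # compressed output
long_sys = "whites remain less optimistic the survey found today in a new national poll"

print(rouge1(short_sys, reference))   # lower recall, perfect precision
print(rouge1(long_sys, reference))    # perfect recall, lower precision
```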

85 Average highlight lengths are shown in Table 3, and the compression rates they represent. [sent-268, score-0.439]

86 Our phrase model achieves the highest compression rates, whereas the sentence-based model tends to select long sentences even in comparison to the lead baseline. [sent-269, score-0.596]

87 There was no statistically significant difference in grammaticality between the highlights generated by the phrase ILP system and the original CNN highlights (mean differences were compared using a post-hoc Tukey test). [sent-273, score-1.497]

88 The grammaticality of the sentence ILP was significantly higher overall as no compression took place (α < 0. [sent-274, score-0.539]

89 …number of sentences, tokens per sentence, and compression rate, for CNN articles, their highlights, the ILP phrase model, and two baselines. [sent-281, score-0.329]

90 The highlights created by the sentence ILP were considered significantly more verbose (α < 0. [sent-287, score-0.699]

91 Overall, the highlights generated by the phrase ILP model were not significantly different from those written by humans. [sent-289, score-0.773]

92 The phrase ILP model achieves a significantly better F-score (for both ROUGE-1 and ROUGE-2) over the lead baseline, the sentence ILP model, and Martins and Smith. [sent-296, score-0.252]

93 Table 5: Generated highlights for the stories in Table 1 using the phrase ILP model. [sent-303, score-0.74]

94 Furthermore, as a standalone sentence compression system it yields state-of-the-art performance, comparable to McDonald’s (2006) discriminative model and superior to Hedge Trimmer (Zajic et al.). [sent-308, score-0.425]

95 7 Conclusions In this paper we proposed a joint content selection and compression model for single-document summarization. [sent-310, score-0.388]

96 Applying the model to the generation of “story highlights” (and single document summaries) shows that it is a viable alternative to extraction-based systems. [sent-314, score-0.171]

97 The results confirm that our system manages to create summaries at a high compression rate and yet maintain the informativeness and grammaticality of a competitive extractive system. [sent-323, score-0.686]

98 We would also like to generalize the model to arbitrary rewrite operations, as our results indicate that compression rates are likely to improve with more sophisticated paraphrasing. [sent-329, score-0.419]

99 Improving summarization performance by sentence compression a pilot study. [sent-427, score-0.529]

100 Multi-candidate reduction: Sentence compression as a tool for document summarization tasks. [sent-501, score-0.574]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('highlights', 0.61), ('ilp', 0.355), ('compression', 0.329), ('grammaticality', 0.147), ('martins', 0.141), ('summarization', 0.137), ('phrase', 0.13), ('cnn', 0.12), ('highlight', 0.11), ('document', 0.108), ('summaries', 0.097), ('whites', 0.093), ('salience', 0.077), ('summary', 0.076), ('clarke', 0.073), ('phrases', 0.072), ('zajic', 0.065), ('sentence', 0.063), ('story', 0.062), ('informativeness', 0.061), ('ccomp', 0.056), ('svore', 0.056), ('woodsend', 0.056), ('daum', 0.055), ('node', 0.053), ('mani', 0.053), ('jing', 0.053), ('merged', 0.052), ('extractive', 0.052), ('compressed', 0.051), ('optimistic', 0.049), ('constraints', 0.048), ('integer', 0.047), ('dependency', 0.047), ('clausal', 0.045), ('headline', 0.045), ('sentences', 0.045), ('smith', 0.044), ('rouge', 0.044), ('news', 0.043), ('leaf', 0.043), ('technische', 0.042), ('parent', 0.042), ('lapata', 0.041), ('constraint', 0.041), ('articles', 0.039), ('iii', 0.039), ('length', 0.038), ('nenkova', 0.038), ('achterberg', 0.037), ('blacks', 0.037), ('koch', 0.037), ('kristian', 0.037), ('lmys', 0.037), ('pslixi', 0.037), ('telegraphic', 0.037), ('verbosity', 0.037), ('wunderling', 0.037), ('chosen', 0.035), ('remain', 0.033), ('di', 0.033), ('model', 0.033), ('documents', 0.033), ('survey', 0.033), ('extraneous', 0.033), ('trimmer', 0.033), ('webexp', 0.033), ('output', 0.032), ('ns', 0.032), ('mirella', 0.032), ('article', 0.031), ('generation', 0.03), ('seattle', 0.03), ('mcdonald', 0.03), ('edges', 0.03), ('sparck', 0.03), ('gist', 0.03), ('hedge', 0.03), ('maxx', 0.03), ('xi', 0.03), ('rewrite', 0.03), ('universit', 0.03), ('variables', 0.029), ('clause', 0.029), ('lin', 0.029), ('scores', 0.028), ('coverage', 0.028), ('siddharthan', 0.028), ('conroy', 0.028), ('included', 0.027), ('sophisticated', 0.027), ('objective', 0.027), ('interviews', 0.027), ('kupiec', 0.027), ('lt', 0.026), ('formulation', 0.026), ('created', 0.026), ('lead', 0.026), ('content', 0.026), ('bars', 0.025)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999905 39 acl-2010-Automatic Generation of Story Highlights

Author: Kristian Woodsend ; Mirella Lapata

Abstract: In this paper we present a joint content selection and compression model for single-document summarization. The model operates over a phrase-based representation of the source document which we obtain by merging information from PCFG parse trees and dependency graphs. Using an integer linear programming formulation, the model learns to select and combine phrases subject to length, coverage and grammar constraints. We evaluate the approach on the task of generating “story highlights”—a small number of brief, self-contained sentences that allow readers to quickly gather information on news stories. Experimental results show that the model’s output is comparable to human-written highlights in terms of both grammaticality and content.

2 0.19150256 130 acl-2010-Hard Constraints for Grammatical Function Labelling

Author: Wolfgang Seeker ; Ines Rehbein ; Jonas Kuhn ; Josef Van Genabith

Abstract: For languages with (semi-) free word order (such as German), labelling grammatical functions on top of phrase-structural constituent analyses is crucial for making them interpretable. Unfortunately, most statistical classifiers consider only local information for function labelling and fail to capture important restrictions on the distribution of core argument functions such as subject, object etc., namely that there is at most one subject (etc.) per clause. We augment a statistical classifier with an integer linear program imposing hard linguistic constraints on the solution space output by the classifier, capturing global distributional restrictions. We show that this improves labelling quality, in particular for argument grammatical functions, in an intrinsic evaluation, and, importantly, grammar coverage for treebank-based (Lexical-Functional) grammar acquisition and parsing, in an extrinsic evaluation.

3 0.18757199 77 acl-2010-Cross-Language Document Summarization Based on Machine Translation Quality Prediction

Author: Xiaojun Wan ; Huiying Li ; Jianguo Xiao

Abstract: Cross-language document summarization is a task of producing a summary in one language for a document set in a different language. Existing methods simply use machine translation for document translation or summary translation. However, current machine translation services are far from satisfactory, which results in that the quality of the cross-language summary is usually very poor, both in readability and content. In this paper, we propose to consider the translation quality of each sentence in the English-to-Chinese cross-language summarization process. First, the translation quality of each English sentence in the document set is predicted with the SVM regression method, and then the quality score of each sentence is incorporated into the summarization process. Finally, the English sentences with high translation quality and high informativeness are selected and translated to form the Chinese summary. Experimental results demonstrate the effectiveness and usefulness of the proposed approach.

4 0.17448471 14 acl-2010-A Risk Minimization Framework for Extractive Speech Summarization

Author: Shih-Hsiang Lin ; Berlin Chen

Abstract: In this paper, we formulate extractive summarization as a risk minimization problem and propose a unified probabilistic framework that naturally combines supervised and unsupervised summarization models to inherit their individual merits as well as to overcome their inherent limitations. In addition, the introduction of various loss functions also provides the summarization framework with a flexible but systematic way to render the redundancy and coherence relationships among sentences and between sentences and the whole document, respectively. Experiments on speech summarization show that the methods deduced from our framework are very competitive with existing summarization approaches.

5 0.16806854 264 acl-2010-Wrapping up a Summary: From Representation to Generation

Author: Josef Steinberger ; Marco Turchi ; Mijail Kabadjov ; Ralf Steinberger ; Nello Cristianini

Abstract: The main focus of this work is to investigate robust ways for generating summaries from summary representations without recurring to simple sentence extraction and aiming at more human-like summaries. This is motivated by empirical evidence from TAC 2009 data showing that human summaries contain on average more and shorter sentences than the system summaries. We report encouraging preliminary results comparable to those attained by participating systems at TAC 2009.

6 0.15827371 38 acl-2010-Automatic Evaluation of Linguistic Quality in Multi-Document Summarization

7 0.12283915 11 acl-2010-A New Approach to Improving Multilingual Summarization Using a Genetic Algorithm

8 0.12239082 46 acl-2010-Bayesian Synchronous Tree-Substitution Grammar Induction and Its Application to Sentence Compression

9 0.12095662 8 acl-2010-A Hybrid Hierarchical Model for Multi-Document Summarization

10 0.11830817 240 acl-2010-Training Phrase Translation Models with Leaving-One-Out

11 0.10904238 188 acl-2010-Optimizing Informativeness and Readability for Sentiment Summarization

12 0.10638373 15 acl-2010-A Semi-Supervised Key Phrase Extraction Approach: Learning from Title Phrases through a Document Semantic Network

13 0.095351569 124 acl-2010-Generating Image Descriptions Using Dependency Relational Patterns

14 0.095162131 125 acl-2010-Generating Templates of Entity Summaries with an Entity-Aspect Model and Pattern Mining

15 0.090760842 171 acl-2010-Metadata-Aware Measures for Answer Summarization in Community Question Answering

16 0.087747365 136 acl-2010-How Many Words Is a Picture Worth? Automatic Caption Generation for News Images

17 0.08746016 127 acl-2010-Global Learning of Focused Entailment Graphs

18 0.072723798 31 acl-2010-Annotation

19 0.065132678 101 acl-2010-Entity-Based Local Coherence Modelling Using Topological Fields

20 0.064448975 196 acl-2010-Plot Induction and Evolutionary Search for Story Generation


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.213), (1, 0.016), (2, -0.081), (3, -0.013), (4, -0.045), (5, -0.001), (6, 0.043), (7, -0.295), (8, -0.044), (9, -0.054), (10, 0.009), (11, -0.064), (12, -0.105), (13, 0.08), (14, -0.009), (15, -0.055), (16, 0.012), (17, 0.04), (18, 0.047), (19, 0.032), (20, -0.025), (21, 0.059), (22, 0.053), (23, -0.002), (24, 0.043), (25, -0.044), (26, 0.056), (27, -0.017), (28, 0.05), (29, -0.034), (30, 0.008), (31, -0.031), (32, -0.109), (33, -0.033), (34, 0.034), (35, 0.017), (36, -0.067), (37, -0.053), (38, -0.168), (39, -0.005), (40, 0.046), (41, -0.075), (42, 0.03), (43, -0.107), (44, 0.013), (45, -0.181), (46, 0.002), (47, 0.004), (48, -0.03), (49, 0.041)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.94245893 39 acl-2010-Automatic Generation of Story Highlights

Author: Kristian Woodsend ; Mirella Lapata

Abstract: In this paper we present a joint content selection and compression model for single-document summarization. The model operates over a phrase-based representation of the source document which we obtain by merging information from PCFG parse trees and dependency graphs. Using an integer linear programming formulation, the model learns to select and combine phrases subject to length, coverage and grammar constraints. We evaluate the approach on the task of generating “story highlights”—a small number of brief, self-contained sentences that allow readers to quickly gather information on news stories. Experimental results show that the model’s output is comparable to human-written highlights in terms of both grammaticality and content.

2 0.71739316 264 acl-2010-Wrapping up a Summary: From Representation to Generation

Author: Josef Steinberger ; Marco Turchi ; Mijail Kabadjov ; Ralf Steinberger ; Nello Cristianini

Abstract: The main focus of this work is to investigate robust ways for generating summaries from summary representations without recurring to simple sentence extraction and aiming at more human-like summaries. This is motivated by empirical evidence from TAC 2009 data showing that human summaries contain on average more and shorter sentences than the system summaries. We report encouraging preliminary results comparable to those attained by participating systems at TAC 2009.

3 0.68028623 11 acl-2010-A New Approach to Improving Multilingual Summarization Using a Genetic Algorithm

Author: Marina Litvak ; Mark Last ; Menahem Friedman

Abstract: Automated summarization methods can be defined as “language-independent,” if they are not based on any language-specific knowledge. Such methods can be used for multilingual summarization defined by Mani (2001) as “processing several languages, with summary in the same language as input.” In this paper, we introduce MUSE, a language-independent approach for extractive summarization based on the linear optimization of several sentence ranking measures using a genetic algorithm. We tested our methodology on two languages, English and Hebrew, and evaluated its performance with ROUGE-1 Recall vs. state-of-the-art extractive summarization approaches. Our results show that MUSE performs better than the best known multilingual approach (TextRank) in both languages. Moreover, our experimental results on a bilingual (English and Hebrew) document collection suggest that MUSE does not need to be retrained on each language and the same model can be used across at least two different languages.

4 0.67876291 14 acl-2010-A Risk Minimization Framework for Extractive Speech Summarization

Author: Shih-Hsiang Lin ; Berlin Chen

Abstract: In this paper, we formulate extractive summarization as a risk minimization problem and propose a unified probabilistic framework that naturally combines supervised and unsupervised summarization models to inherit their individual merits as well as to overcome their inherent limitations. In addition, the introduction of various loss functions also provides the summarization framework with a flexible but systematic way to render the redundancy and coherence relationships among sentences and between sentences and the whole document, respectively. Experiments on speech summarization show that the methods deduced from our framework are very competitive with existing summarization approaches.

5 0.61646461 38 acl-2010-Automatic Evaluation of Linguistic Quality in Multi-Document Summarization

Author: Emily Pitler ; Annie Louis ; Ani Nenkova

Abstract: To date, few attempts have been made to develop and validate methods for automatic evaluation of linguistic quality in text summarization. We present the first systematic assessment of several diverse classes of metrics designed to capture various aspects of well-written text. We train and test linguistic quality models on consecutive years of NIST evaluation data in order to show the generality of results. For grammaticality, the best results come from a set of syntactic features. Focus, coherence and referential clarity are best evaluated by a class of features measuring local coherence on the basis of cosine similarity between sentences, coreference information, and summarization-specific features. Our best results are 90% accuracy for pairwise comparisons of competing systems over a test set of several inputs and 70% for ranking summaries of a specific input.

6 0.61275798 140 acl-2010-Identifying Non-Explicit Citing Sentences for Citation-Based Summarization.

7 0.60678893 77 acl-2010-Cross-Language Document Summarization Based on Machine Translation Quality Prediction

8 0.57441074 130 acl-2010-Hard Constraints for Grammatical Function Labelling

9 0.55835837 8 acl-2010-A Hybrid Hierarchical Model for Multi-Document Summarization

10 0.54498035 15 acl-2010-A Semi-Supervised Key Phrase Extraction Approach: Learning from Title Phrases through a Document Semantic Network

11 0.52146828 122 acl-2010-Generating Fine-Grained Reviews of Songs from Album Reviews

12 0.51292938 196 acl-2010-Plot Induction and Evolutionary Search for Story Generation

13 0.50909066 101 acl-2010-Entity-Based Local Coherence Modelling Using Topological Fields

14 0.5014537 188 acl-2010-Optimizing Informativeness and Readability for Sentiment Summarization

15 0.48398998 200 acl-2010-Profiting from Mark-Up: Hyper-Text Annotations for Guided Parsing

16 0.45205668 124 acl-2010-Generating Image Descriptions Using Dependency Relational Patterns

17 0.44331402 125 acl-2010-Generating Templates of Entity Summaries with an Entity-Aspect Model and Pattern Mining

18 0.4148213 171 acl-2010-Metadata-Aware Measures for Answer Summarization in Community Question Answering

19 0.40436658 136 acl-2010-How Many Words Is a Picture Worth? Automatic Caption Generation for News Images

20 0.39975718 240 acl-2010-Training Phrase Translation Models with Leaving-One-Out


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(4, 0.02), (14, 0.019), (19, 0.207), (25, 0.054), (33, 0.021), (39, 0.022), (40, 0.011), (42, 0.028), (44, 0.015), (59, 0.09), (72, 0.019), (73, 0.049), (78, 0.036), (83, 0.129), (84, 0.044), (97, 0.013), (98, 0.125), (99, 0.01)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.84422135 39 acl-2010-Automatic Generation of Story Highlights

Author: Kristian Woodsend ; Mirella Lapata

Abstract: In this paper we present a joint content selection and compression model for single-document summarization. The model operates over a phrase-based representation of the source document which we obtain by merging information from PCFG parse trees and dependency graphs. Using an integer linear programming formulation, the model learns to select and combine phrases subject to length, coverage and grammar constraints. We evaluate the approach on the task of generating “story highlights”—a small number of brief, self-contained sentences that allow readers to quickly gather information on news stories. Experimental results show that the model’s output is comparable to human-written highlights in terms of both grammaticality and content.

2 0.83501917 134 acl-2010-Hierarchical Sequential Learning for Extracting Opinions and Their Attributes

Author: Yejin Choi ; Claire Cardie

Abstract: Automatic opinion recognition involves a number of related tasks, such as identifying the boundaries of opinion expression, determining their polarity, and determining their intensity. Although much progress has been made in this area, existing research typically treats each of the above tasks in isolation. In this paper, we apply a hierarchical parameter sharing technique using Conditional Random Fields for fine-grained opinion analysis, jointly detecting the boundaries of opinion expressions as well as determining two of their key attributes, polarity and intensity. Our experimental results show that our proposed approach improves the performance over a baseline that does not exploit hierarchical structure among the classes. In addition, we find that the joint approach outperforms a baseline that is based on cascading two separate components.

3 0.70813978 101 acl-2010-Entity-Based Local Coherence Modelling Using Topological Fields

Author: Jackie Chi Kit Cheung ; Gerald Penn

Abstract: One goal of natural language generation is to produce coherent text that presents information in a logical order. In this paper, we show that topological fields, which model high-level clausal structure, are an important component of local coherence in German. First, we show in a sentence ordering experiment that topological field information improves the entity grid model of Barzilay and Lapata (2008) more than grammatical role and simple clausal order information do, particularly when manual annotations of this information are not available. Then, we incorporate the model enhanced with topological fields into a natural language generation system that generates constituent orders for German text, and show that the added coherence component improves performance slightly, though not statistically significantly.

4 0.70681566 153 acl-2010-Joint Syntactic and Semantic Parsing of Chinese

Author: Junhui Li ; Guodong Zhou ; Hwee Tou Ng

Abstract: This paper explores joint syntactic and semantic parsing of Chinese to further improve the performance of both syntactic and semantic parsing, in particular the performance of semantic parsing (in this paper, semantic role labeling). This is done from two levels. Firstly, an integrated parsing approach is proposed to integrate semantic parsing into the syntactic parsing process. Secondly, semantic information generated by semantic parsing is incorporated into the syntactic parsing model to better capture semantic information in syntactic parsing. Evaluation on Chinese TreeBank, Chinese PropBank, and Chinese NomBank shows that our integrated parsing approach outperforms the pipeline parsing approach on n-best parse trees, a natural extension of the widely used pipeline parsing approach on the top-best parse tree. Moreover, it shows that incorporating semantic role-related information into the syntactic parsing model significantly improves the performance of both syntactic parsing and semantic parsing. To our best knowledge, this is the first research on exploring syntactic parsing and semantic role labeling for both verbal and nominal predicates in an integrated way. 1

5 0.70597005 109 acl-2010-Experiments in Graph-Based Semi-Supervised Learning Methods for Class-Instance Acquisition

Author: Partha Pratim Talukdar ; Fernando Pereira

Abstract: Graph-based semi-supervised learning (SSL) algorithms have been successfully used to extract class-instance pairs from large unstructured and structured text collections. However, a careful comparison of different graph-based SSL algorithms on that task has been lacking. We compare three graph-based SSL algorithms for class-instance acquisition on a variety of graphs constructed from different domains. We find that the recently proposed MAD algorithm is the most effective. We also show that class-instance extraction can be significantly improved by adding semantic information in the form of instance-attribute edges derived from an independently developed knowledge base. All of our code and data will be made publicly available to encourage reproducible research in this area.

6 0.70203859 158 acl-2010-Latent Variable Models of Selectional Preference

7 0.70163238 71 acl-2010-Convolution Kernel over Packed Parse Forest

8 0.70121634 195 acl-2010-Phylogenetic Grammar Induction

9 0.70073557 120 acl-2010-Fully Unsupervised Core-Adjunct Argument Classification

10 0.70054042 65 acl-2010-Complexity Metrics in an Incremental Right-Corner Parser

11 0.70021844 184 acl-2010-Open-Domain Semantic Role Labeling by Modeling Word Spans

12 0.69896632 1 acl-2010-"Ask Not What Textual Entailment Can Do for You..."

13 0.69799119 55 acl-2010-Bootstrapping Semantic Analyzers from Non-Contradictory Texts

14 0.69697791 247 acl-2010-Unsupervised Event Coreference Resolution with Rich Linguistic Features

15 0.69680852 13 acl-2010-A Rational Model of Eye Movement Control in Reading

16 0.69646502 245 acl-2010-Understanding the Semantic Structure of Noun Phrase Queries

17 0.6962024 18 acl-2010-A Study of Information Retrieval Weighting Schemes for Sentiment Analysis

18 0.69384098 155 acl-2010-Kernel Based Discourse Relation Recognition with Temporal Ordering Information

19 0.69369262 251 acl-2010-Using Anaphora Resolution to Improve Opinion Target Identification in Movie Reviews

20 0.69278556 219 acl-2010-Supervised Noun Phrase Coreference Research: The First Fifteen Years