emnlp emnlp2012 emnlp2012-9 knowledge-graph by maker-knowledge-mining

9 emnlp-2012-A Sequence Labelling Approach to Quote Attribution


Source: pdf

Author: Timothy O'Keefe ; Silvia Pareti ; James R. Curran ; Irena Koprinska ; Matthew Honnibal

Abstract: Quote extraction and attribution is the task of automatically extracting quotes from text and attributing each quote to its correct speaker. The present state-of-the-art system uses gold standard information from previous decisions in its features, which, when removed, results in a large drop in performance. We treat the problem as a sequence labelling task, which allows us to incorporate sequence features without using gold standard information. We present results on two new corpora and an augmented version of a third, achieving a new state-of-the-art for systems using only realistic features.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract Quote extraction and attribution is the task of automatically extracting quotes from text and attributing each quote to its correct speaker. [sent-6, score-1.38]

2 We treat the problem as a sequence labelling task, which allows us to incorporate sequence features without using gold standard information. [sent-8, score-0.189]

3 1 Introduction News stories are often driven by the quotes made by politicians, sports stars, musicians, and celebrities. [sent-10, score-0.653]

4 When these stories exit the news cycle, the quotes they contain are often forgotten by both readers and journalists. [sent-11, score-0.697]

5 A system that automatically extracts quotes and attributes those quotes to the correct speaker would enable readers and journalists to place news in the context of all comments made by a person on a given topic. [sent-12, score-1.566]

6 Though quote attribution may appear to be a straightforward task, the simple rule-based approaches proposed thus far have produced disappointing results. [sent-13, score-0.714]

7 Going beyond these to machine learning approaches presents several problems that make quote attribution surprisingly difficult. [sent-14, score-0.714]

8 The main challenge is that while a large portion of quotes can be attributed to a speaker based on simple rules, [sent-15, score-0.955]

9 the remainder have few or no contextual clues as to who the correct speaker is. [sent-23, score-0.356]

10 Additionally, many quote sequences, such as dialogues, rely on the reader understanding that there is an alternating sequence of speakers, which creates dependencies between attribution decisions made by a classifier. [sent-24, score-0.843]

11 Elson and McKeown (2010) is the only study that directly uses machine learning in quote attribution, treating the task as a classification task, where each quote is attributed independently of other quotes. [sent-25, score-1.055]

12 To handle conversations and similar constructs they use gold standard information about speakers of previous quotes as features for their model. [sent-26, score-0.766]

13 The primary contribution of this paper is that we reformulate quote attribution as a sequence labelling task. [sent-28, score-0.8]

14 Our results show that a quote attribution system using only realistic features is highly feasible for the news domain, with accuracies of 92. [sent-33, score-0.796]

15 2 Background Early work into quote attribution by Zhang et al. [sent-38, score-0.714]

16 While they were able to extract quotes with high precision and recall, their attribution accuracy was highly dependent on the document in question, ranging from 47. [sent-40, score-0.803]

17 Their system proved to be very good at extracting quotes through simple rules, but when using a handcrafted decision tree to attribute those quotes to a speaker, they achieved an accuracy of only 65. [sent-44, score-1.236]

18 More recently, SAPIENS, a French-language quote extraction and attribution system, was developed by de La Clergerie et al. [sent-49, score-0.714]

19 It conducts a full parse of the text, which allows it to use patterns to extract direct and indirect quotes, as well as the speaker of each quote. [sent-51, score-0.371]

20 Their evaluation found that 19 out of 40 quotes (47. [sent-52, score-0.594]

21 For each quote they first find the nearest speech verb, then the grammatical actor of that verb, and finally select the appropriate speaker for that actor. [sent-57, score-0.877]
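
As a rough, purely illustrative approximation of this kind of heuristic (not the cited system's actual implementation), the sketch below scans backwards from a quote for the nearest reported-speech verb and returns its grammatical subject. The speech-verb list is hypothetical and the tokens are assumed to carry spaCy-style dependency attributes.

```python
# Toy approximation only; assumes dependency-parsed tokens with spaCy-style
# attributes (.lemma_, .dep_, .children, .text).
SPEECH_VERBS = {"say", "tell", "add", "state", "claim", "ask"}  # hypothetical list

def speaker_via_speech_verb(tokens, quote_start_index):
    """Scan backwards from the quote for the nearest reported-speech verb,
    then return the text of its grammatical subject (the 'actor')."""
    for tok in reversed(tokens[:quote_start_index]):
        if tok.lemma_ in SPEECH_VERBS:
            subjects = [c for c in tok.children if c.dep_ in ("nsubj", "nsubjpass")]
            return subjects[0].text if subjects else None
    return None
```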

22 (2010) describe PICTOR, which is principally a quote visualisation tool. [sent-63, score-0.505]

23 Their aim was to automatically identify both quotes and speakers, and then to attribute each quote to a speaker, in a corpus of classic literature that they compiled themselves. [sent-68, score-1.149]

24 To attribute a quote to a speaker they first classified the quotes into categories. [sent-73, score-1.443]

25 Several of the categories have a speaker explicit in their structure, so they attribute quotes to those speakers with no further processing. [sent-74, score-1.081]

26 For the remaining categories, they cast the attribution problem as a binary classification task, where each quote-speaker pair has a “speaker” or “not speaker” label predicted by the classifier. [sent-75, score-0.264]

27 They then reconciled these independent decisions using various techniques to produce a single speaker prediction for each quote. [sent-76, score-0.393]

28 First their corpus does not include quotes where all three annotators chose different speakers. [sent-80, score-0.646]

29 While these quotes include some cases where the annotators chose coreferent spans, it also includes cases of legitimate disagreement about the speaker. [sent-81, score-0.646]

30 In total it contains 3,126 quotes annotated with their speakers. [sent-91, score-0.594]

31 Elson and McKeown used an automated system to find named entity spans and nominal mentions in the text, with the named entities being linked to form a coreference chain (they did not link nominal mentions). [sent-92, score-0.224]

32 To ensure quality, all annotations from poorly performing annotators were removed, as were quotes where each annotator chose a different speaker. [sent-94, score-0.666]

33 Though excluding some quotes ensures quality annotations, it causes gaps in the quote chains, which is a problem for sequence labelling. [sent-95, score-1.149]

34 To rectify this, we conducted additional annotation of the quotes that were excluded by the original authors. [sent-97, score-0.619]

35 2 PDTB Attribution Corpus Extension (WSJ) Our next corpus is an extension to the attribution annotations found in the Penn Discourse TreeBank (PDTB). [sent-103, score-0.229]

36 From this corpus we use only direct quotes and the directly quoted portions of mixed quotes, giving us 4,923 quotes. [sent-107, score-0.683]

37 For the set of potential speakers we use the BBN pronoun coreference and entity type corpus (Weischedel and Brunstein, 2005), with automatically coreferred pronouns. [sent-108, score-0.214]

38 We automatically matched BBN entities to PDTB extension speakers, and included the PDTB speaker where no matching BBN entity could be found. [sent-109, score-0.357]

39 This means an automatic system has an opportunity to find the correct speaker for all quotes in the corpus. [sent-110, score-0.928]

40 Raw agreement on the speaker of each quote was high at 98. [sent-116, score-0.821]

41 4 Corpus Comparisons In order to compare the corpora we categorise the quotes into the categories defined by Elson and McKeown (2010), as shown in Table 1. [sent-122, score-0.639]

42 We assigned quotes to these categories by testing (after text preprocessing) whether the quote belonged to each category, in the order shown below: [sent-123, score-1.123]

43 Trigram – the quote appears consecutively with a mention of an entity, and a reported speech verb, in any order; [sent-124, score-0.55]

44 Added – the quote is in the same paragraph as another quote that precedes it; [sent-126, score-1.094]

45 Conversation – the quote appears in a paragraph on its own, and the two paragraphs preceding the current paragraph each contain a single quote, with alternating speakers; [sent-127, score-0.779]

46 Alone – the quote is in a paragraph on its own; [sent-128, score-0.589]

47 Miscellaneous – the quote matches none of the preceding categories. [sent-129, score-0.539]
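
A minimal sketch of this ordered test, under the assumption that quotes are pre-indexed by paragraph and that adjacency to an entity mention and speech verb has already been detected; the data model and bookkeeping are illustrative, not the authors' code, and the Backoff category (the second test, whose definition is not included in this summary) is skipped.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Quote:
    para: int                      # index of the paragraph containing the quote
    near_entity_and_verb: bool     # entity mention + speech verb adjacent (Trigram cue)
    speaker: Optional[str] = None  # known speaker, used only for the Conversation test

def categorise(quotes: List[Quote], quotes_per_para: Dict[int, int]) -> List[str]:
    """Assign each quote to the first matching category, in document order."""
    labels = []
    for i, q in enumerate(quotes):
        added = any(p.para == q.para for p in quotes[:i])
        alone = quotes_per_para.get(q.para, 0) == 1
        conversation = False
        if alone and q.para >= 2:
            prev = [p for p in quotes[:i] if p.para in (q.para - 2, q.para - 1)]
            if (len(prev) == 2
                    and quotes_per_para.get(q.para - 1, 0) == 1
                    and quotes_per_para.get(q.para - 2, 0) == 1
                    and prev[0].speaker != prev[1].speaker):
                conversation = True
        # Category 2 (Backoff) is not defined in this excerpt and is omitted.
        if q.near_entity_and_verb:
            labels.append("Trigram")
        elif added:
            labels.append("Added")
        elif conversation:
            labels.append("Conversation")
        elif alone:
            labels.append("Alone")
        else:
            labels.append("Miscellaneous")
    return labels
```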

48 Unsurprisingly, the two corpora from the news domain share similar proportions of quotes in each category. [sent-136, score-0.659]

49 The main differences are that the SMH uses a larger number of pronouns compared to the WSJ, which tends to use explicit attribution more frequently. [sent-137, score-0.209]

50 The SMH also has a significant proportion of quotes that appear alone in a paragraph, while the WSJ has almost none. [sent-138, score-0.594]

51 Finally, when attributing a quote using a trigram pattern, the SMH mostly uses the Quote-Person-Said pattern, while the WSJ mostly uses the Quote-Said-Person pattern. [sent-139, score-0.558]

52 Most notably the LIT corpus has a much higher proportion of quotes that fall into the Conversation and Alone categories. [sent-142, score-0.594]

53 The two news corpora have more quotes in the Trigram and Backoff categories. [sent-144, score-0.659]

54 4 Quote Extraction Quote extraction is the task of finding the spans that represent quotes within a document. [sent-145, score-0.615]

55 There are three types of quotes that can appear: [sent-146, score-0.594]

56 Direct quotes appear entirely between quotation marks, and are used to indicate that the speaker said precisely what is written; [sent-147, score-0.969]

57 Indirect quotes do not appear between or contain quotation marks, and are used to get the speaker’s point across without implying that the speaker used the exact words of the quote; [sent-148, score-0.969]

58 Mixed quotes are indirect quotes that contain a directly quoted portion. [sent-149, score-1.273]

59 In this work, we limit ourselves to detecting direct quotes and the direct portions of mixed quotes. [sent-150, score-0.653]

60 To extract quotes we use a regular expression that searches for text between quotation marks. [sent-151, score-0.653]

61 We also deal with the special case of multi-paragraph quotes, where a single quotation mark opens both the quote and every new paragraph that forms part of it, with a closing quotation mark appearing only at the very end of the quote. [sent-152, score-1.301]
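
The paper's regular expression is not reproduced in this summary, so the following is only a sketch of that kind of extractor, assuming plain text with straight double quotation marks and paragraphs supplied as a list of strings.

```python
import re

def extract_quotes(paragraphs):
    """Yield direct-quote strings, handling the multi-paragraph convention
    described above: an unmatched opening mark starts a quote, continuation
    paragraphs re-open with a quotation mark, and only the final paragraph
    closes the quote."""
    open_quote = None  # accumulated pieces of a quote spanning paragraphs
    for para in paragraphs:
        if open_quote is not None:
            if para.startswith('"'):
                body = para[1:]
                if '"' in body:                        # closing paragraph
                    open_quote.append(body[:body.rindex('"')])
                    yield " ".join(open_quote)
                    open_quote = None
                else:                                  # quote continues
                    open_quote.append(body)
                continue
            yield " ".join(open_quote)                 # convention broken: flush
            open_quote = None
        marks = [m.start() for m in re.finditer('"', para)]
        for i in range(0, len(marks) - 1, 2):          # paired marks: ordinary quotes
            yield para[marks[i] + 1:marks[i + 1]]
        if len(marks) % 2 == 1:                        # unmatched opening mark
            open_quote = [para[marks[-1] + 1:]]
    if open_quote is not None:
        yield " ".join(open_quote)
```

For example, `list(extract_quotes(text.split("\n\n")))` would return every direct quote string in a plain-text article whose paragraphs are separated by blank lines.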

62 5 Quote Attribution Given a document with a set of quotes and a set of entities, quote attribution is the task of finding the entity that represents the speaker of each quote, based on the context provided by the document. [sent-154, score-1.665]

63 Identifying the correct entity can involve choosing either an entire coreference chain representing an entity, or identifying a specific span of text that represents the entity. [sent-155, score-0.19]

64 Despite this, the best evidence about which chain is the speaker is found in the context of the individual text spans, and most existing systems aim to get the particular entity span correct. [sent-157, score-0.434]

65 For each quote it proceeds with the following steps: [sent-162, score-0.505]

66 Search backwards in the text from the end of the sentence the quote appears in for a reported speech verb. [sent-163, score-0.577]

67 Replace all quotes and speakers with special symbols. [sent-175, score-0.713]

68 The features for a particular pair of target quote (q) and target speaker (s) are summarised below. [sent-186, score-0.838]

69 Distance features, including the number of words between q and s, the number of paragraphs between q and s, the number of quotes between q and s, and the number of entity mentions between q and s. [sent-187, score-0.724]
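
A small, self-contained illustration of these distance features (the positional fields and names are assumptions; the paper's full feature set is much richer and also includes the sequence features discussed below):

```python
from collections import namedtuple

# Minimal positional record for a quote or an entity mention.
Span = namedtuple("Span", "token_index paragraph_index")

def distance_features(q: Span, s: Span, all_quotes, all_mentions) -> dict:
    """Distance features for a target quote q and a candidate speaker mention s."""
    lo, hi = sorted((q.token_index, s.token_index))
    return {
        "words_between": hi - lo,
        "paragraphs_between": abs(q.paragraph_index - s.paragraph_index),
        "quotes_between": sum(lo < other.token_index < hi for other in all_quotes),
        "mentions_between": sum(lo < m.token_index < hi for m in all_mentions),
    }
```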

70 They then reconcile these 15 classifications into one speaker prediction for the quote. [sent-202, score-0.316]

71 While E&M experimented with several different reconciliation methods, we simply chose the speaker with the highest probability attached to its “speaker” label. [sent-203, score-0.379]

72 In their work, E&M make a simplifying assumption that all previous attribution decisions were correct. [sent-209, score-0.261]

73 In Table 2 we show the effect of replacing the gold standard sequence features with features based on the predicted labels, or with no sequence features at all. [sent-211, score-0.208]

74 As the classifications are independent, the n decisions need to be reconciled, as more than one speaker might be predicted. [sent-219, score-0.368]

75 We reconcile the n decisions by attributing the quote to the speaker with the highest “speaker” probability. [sent-220, score-0.905]

76 Using a binary class with reconciliation in a greedy decoding model is equivalent to the method in Elson and McKeown (2010), except that the gold standard sequence features are replaced with predicted sequence features. [sent-221, score-0.358]
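
A hedged sketch of this binary-class formulation with greedy decoding. Scikit-learn's LogisticRegression is used here purely as a stand-in classifier (the summary does not name the learner), and candidates_for / features_for are hypothetical hooks that would supply the candidate speakers and the pair features, including sequence features built from the predictions already made.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

vectoriser = DictVectorizer()
clf = LogisticRegression(max_iter=1000)

def train(pair_feature_dicts, pair_labels):
    """pair_labels hold "speaker" / "not speaker" for each quote-candidate pair."""
    clf.fit(vectoriser.fit_transform(pair_feature_dicts), pair_labels)

def attribute_greedily(quotes, candidates_for, features_for):
    """Decode quotes in document order; each decision can feed predicted
    sequence features to later quotes through features_for()."""
    decisions = []
    for q in quotes:
        candidates = candidates_for(q)
        X = vectoriser.transform([features_for(q, c, decisions) for c in candidates])
        speaker_col = list(clf.classes_).index("speaker")
        probs = clf.predict_proba(X)[:, speaker_col]
        decisions.append(candidates[int(probs.argmax())])
    return decisions
```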

77 In other words, the candidate speaker immediately preceding the quote would be labelled “speaker1”, the speaker preceding it would be “speaker2” and so on. [sent-228, score-1.205]

78 This representation means that candidate speakers need to directly compete for probability mass, although it has the drawback that the evidence for the higher-numbered speakers is quite sparse. [sent-230, score-0.238]

79 The key difference is that where there were individual features that were calculated with respect to the speaker, there are now n features, one for each of the speaker candidates. [sent-232, score-0.333]

80 This allows the model to account for the strength of other candidates when assigning a speaker label. [sent-233, score-0.316]

81 8 Sequence Decoding We noted in the previous section that the E&M results are based on the unrealistic assumption that all previous quotes were attributed correctly. [sent-234, score-0.675]

82 We believe the transition information is important as many quotes have no explicit attribution in the text, and instead rely on the reader understanding something about the sequence of speakers. [sent-236, score-0.876]

83 For these experiments we regard the set of speaker attributions in a document as the sequence that we want to decode. [sent-237, score-0.395]

84 Each individual state therefore represents a sequence of w previous attribution decisions, and a decision for the current quote. [sent-238, score-0.279]

85 Either the transition probabilities from state to state can be learned explicitly, or the w previous attribution decisions can be used to build the sequence features for the current state, which implicitly encodes the transition probabilities. [sent-240, score-0.374]

86 The final decision for each quote is then just the speaker which is predicted by the sequence with the largest joint probability. [sent-250, score-0.912]

87 As we already know that they are accurate indicators of the speaker, we assign them a probability of 100%, which effectively forces the Viterbi decoder to choose the category predictions when they are available. [sent-252, score-0.396]

88 It is worth noting that quotes are only assigned to the Conversation category if the two prior quotes had alternating speakers. [sent-253, score-1.258]

89 As such, during the Viterbi decoding the categorisation of the quote actually needs to be recalculated with regard to the two previous attribution decisions. [sent-254, score-0.76]

90 By forcing the Viterbi decoder to choose category predictions when they are available, we get the advantage that quote sequences with no intervening text may be forced into the Conversation category, which is typically under-represented otherwise. [sent-255, score-0.605]
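
A simplified sketch of this kind of decoding: each state is the tuple of the w most recent attribution decisions, score stands in for the log-probability the classifier assigns to a candidate given the predicted sequence features, and forced_prediction returns a speaker (treated as probability 1, i.e. log-probability 0) whenever the rule-based categorisation, re-checked against the previous decisions, applies. All names here are hypothetical.

```python
def viterbi_attribute(quotes, candidates_for, score, forced_prediction, w=2):
    """Viterbi decoding over speaker attributions for one document.

    score(q, speaker, history) -> log-probability of attributing q to speaker;
    forced_prediction(q, history) -> speaker to force, or None.
    """
    # Each beam entry maps a state (last w decisions) to the best
    # (total log-probability, full decision history) reaching it.
    beams = {(): (0.0, [])}
    for q in quotes:
        new_beams = {}
        for state, (total, history) in beams.items():
            forced = forced_prediction(q, history)
            options = [forced] if forced is not None else candidates_for(q)
            for speaker in options:
                step = 0.0 if forced is not None else score(q, speaker, history)
                new_state = (state + (speaker,))[-w:]
                entry = (total + step, history + [speaker])
                if new_state not in new_beams or entry[0] > new_beams[new_state][0]:
                    new_beams[new_state] = entry
        beams = new_beams
    # Best complete sequence of speaker decisions.
    return max(beams.values(), key=lambda t: t[0])[1]
```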

91 We account for this by using a first-order linear chain CRF model, which learns the probabilities of progressing from speaker to speaker more directly. [sent-264, score-0.679]
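
The paper's term list mentions CRFSuite, so a linear-chain CRF could plausibly be trained along the following lines; this toy snippet uses the sklearn-crfsuite wrapper with positional speaker labels ("speaker1" for the nearest preceding candidate, and so on) and is not the authors' configuration.

```python
import sklearn_crfsuite  # pip install sklearn-crfsuite (wraps CRFsuite)

# Toy data: one document = one sequence of quotes; each quote has a feature
# dict, and its label is the positional speaker class described above.
X_train = [[{"dist_words": 3.0, "pattern": "Quote-Person-Said"},
            {"dist_words": 41.0, "pattern": "none"}]]
y_train = [["speaker1", "speaker2"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=200, all_possible_transitions=True)
crf.fit(X_train, y_train)    # learns feature weights plus label-transition weights
print(crf.predict(X_train))  # first-order Viterbi decoding over each quote sequence
```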

92 This indicates that the classifier is putting too much weight on the gold standard sequence features during training, and is misled into making poor decisions when the predicted features are used during test time. [sent-279, score-0.193]

93 10 Conclusion In this paper, we present the first large-scale evaluation of a quote attribution system on newswire from the 1989 Wall Street Journal (WSJ) and the 2009 Sydney Morning Herald (SMH), as well as comparing against previous work (Elson and McKeown, 2010) on 19th-century literature. [sent-345, score-0.714]

94 We demonstrate that by treating quote attribution as a sequence labelling task, we can achieve results that are very close to their results on newswire, though not for literature. [sent-347, score-0.8]

95 We will also explore other approaches to representing quote attribution with a CRF. [sent-349, score-0.714]

96 For the task more broadly, it would be beneficial to compare methods of finding indirect and mixed quotes, and to evaluate how well quote attribution performs on those quotes as opposed to just direct quotes. [sent-350, score-1.384]

97 1% for the WSJ corpus, demonstrate it is possible to develop an accurate and practical quote extraction system. [sent-353, score-0.505]

98 Automatic attribution of quoted speech in literary narrative. [sent-381, score-0.286]

99 A naive salience-based method for speaker identification in fiction books. [sent-393, score-0.345]

100 Automatic extraction of quotes and topics from news feeds. [sent-431, score-0.638]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('quotes', 0.594), ('quote', 0.505), ('speaker', 0.316), ('elson', 0.215), ('attribution', 0.209), ('smh', 0.125), ('speakers', 0.119), ('mckeown', 0.106), ('crf', 0.1), ('paragraph', 0.084), ('lit', 0.078), ('pdtb', 0.063), ('wsj', 0.06), ('quotation', 0.059), ('stories', 0.059), ('coreference', 0.054), ('decisions', 0.052), ('herald', 0.05), ('morning', 0.05), ('sequence', 0.05), ('quoted', 0.049), ('chain', 0.047), ('decoding', 0.046), ('viterbi', 0.045), ('attributed', 0.045), ('paragraphs', 0.045), ('news', 0.044), ('bbn', 0.043), ('category', 0.043), ('entity', 0.041), ('sydney', 0.04), ('pareti', 0.038), ('reconciliation', 0.038), ('sagot', 0.038), ('predictions', 0.037), ('gold', 0.036), ('unrealistic', 0.036), ('indirect', 0.036), ('labelling', 0.036), ('class', 0.035), ('preceding', 0.034), ('binary', 0.034), ('attributing', 0.032), ('honnibal', 0.032), ('quotations', 0.032), ('rulebased', 0.032), ('drop', 0.032), ('greedy', 0.031), ('span', 0.03), ('attributions', 0.029), ('fiction', 0.029), ('attribute', 0.028), ('conversation', 0.028), ('speech', 0.028), ('alternating', 0.027), ('annotators', 0.027), ('mentions', 0.027), ('backwards', 0.025), ('clergerie', 0.025), ('crfsuite', 0.025), ('glass', 0.025), ('hachey', 0.025), ('italicised', 0.025), ('keefe', 0.025), ('mamede', 0.025), ('pouliquen', 0.025), ('reconciled', 0.025), ('rectify', 0.025), ('rosa', 0.025), ('sarmento', 0.025), ('chose', 0.025), ('categories', 0.024), ('transition', 0.023), ('au', 0.022), ('classic', 0.022), ('erb', 0.022), ('mixed', 0.021), ('trigram', 0.021), ('predicted', 0.021), ('corpora', 0.021), ('spans', 0.021), ('realistic', 0.021), ('annotations', 0.02), ('curran', 0.02), ('decision', 0.02), ('sequences', 0.02), ('crc', 0.019), ('davis', 0.019), ('nsw', 0.019), ('unsurprising', 0.019), ('visualizing', 0.019), ('verb', 0.019), ('direct', 0.019), ('dialogues', 0.018), ('schneider', 0.018), ('silvia', 0.018), ('correct', 0.018), ('nominal', 0.017), ('features', 0.017), ('mention', 0.017)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 9 emnlp-2012-A Sequence Labelling Approach to Quote Attribution

Author: Timothy O'Keefe ; Silvia Pareti ; James R. Curran ; Irena Koprinska ; Matthew Honnibal

Abstract: Quote extraction and attribution is the task of automatically extracting quotes from text and attributing each quote to its correct speaker. The present state-of-the-art system uses gold standard information from previous decisions in its features, which, when removed, results in a large drop in performance. We treat the problem as a sequence labelling task, which allows us to incorporate sequence features without using gold standard information. We present results on two new corpora and an augmented version of a third, achieving a new state-of-the-art for systems using only realistic features.

2 0.063697509 71 emnlp-2012-Joint Entity and Event Coreference Resolution across Documents

Author: Heeyoung Lee ; Marta Recasens ; Angel Chang ; Mihai Surdeanu ; Dan Jurafsky

Abstract: We introduce a novel coreference resolution system that models entities and events jointly. Our iterative method cautiously constructs clusters of entity and event mentions using linear regression to model cluster merge operations. As clusters are built, information flows between entity and event clusters through features that model semantic role dependencies. Our system handles nominal and verbal events as well as entities, and our joint formulation allows information from event coreference to help entity coreference, and vice versa. In a cross-document domain with comparable documents, joint coreference resolution performs significantly better (over 3 CoNLL F1 points) than two strong baselines that resolve entities and events separately.

3 0.063525006 27 emnlp-2012-Characterizing Stylistic Elements in Syntactic Structure

Author: Song Feng ; Ritwik Banerjee ; Yejin Choi

Abstract: Much of the writing styles recognized in rhetorical and composition theories involve deep syntactic elements. However, most previous research for computational stylometric analysis has relied on shallow lexico-syntactic patterns. Some very recent work has shown that PCFG models can detect distributional difference in syntactic styles, but without offering much insights into exactly what constitute salient stylistic elements in sentence structure characterizing each authorship. In this paper, we present a comprehensive exploration of syntactic elements in writing styles, with particular emphasis on interpretable characterization of stylistic elements. We present analytic insights with respect to the authorship attribution task in two different domains.

4 0.047721814 3 emnlp-2012-A Coherence Model Based on Syntactic Patterns

Author: Annie Louis ; Ani Nenkova

Abstract: We introduce a model of coherence which captures the intentional discourse structure in text. Our work is based on the hypothesis that syntax provides a proxy for the communicative goal of a sentence and therefore the sequence of sentences in a coherent discourse should exhibit detectable structural patterns. Results show that our method has high discriminating power for separating out coherent and incoherent news articles reaching accuracies of up to 90%. We also show that our syntactic patterns are correlated with manual annotations of intentional structure for academic conference articles and can successfully predict the coherence of abstract, introduction and related work sections of these articles.

5 0.045739252 76 emnlp-2012-Learning-based Multi-Sieve Co-reference Resolution with Knowledge

Author: Lev Ratinov ; Dan Roth

Abstract: We explore the interplay of knowledge and structure in co-reference resolution. To inject knowledge, we use a state-of-the-art system which cross-links (or “grounds”) expressions in free text to Wikipedia. We explore ways of using the resulting grounding to boost the performance of a state-of-the-art co-reference resolution system. To maximize the utility of the injected knowledge, we deploy a learningbased multi-sieve approach and develop novel entity-based features. Our end system outperforms the state-of-the-art baseline by 2 B3 F1 points on non-transcript portion of the ACE 2004 dataset.

6 0.04274945 70 emnlp-2012-Joint Chinese Word Segmentation, POS Tagging and Parsing

7 0.040363759 120 emnlp-2012-Streaming Analysis of Discourse Participants

8 0.039932489 89 emnlp-2012-Mixed Membership Markov Models for Unsupervised Conversation Modeling

9 0.039850928 73 emnlp-2012-Joint Learning for Coreference Resolution with Markov Logic

10 0.039654423 112 emnlp-2012-Resolving Complex Cases of Definite Pronouns: The Winograd Schema Challenge

11 0.038852789 51 emnlp-2012-Extracting Opinion Expressions with semi-Markov Conditional Random Fields

12 0.036854882 98 emnlp-2012-No Noun Phrase Left Behind: Detecting and Typing Unlinkable Entities

13 0.036720514 93 emnlp-2012-Multi-instance Multi-label Learning for Relation Extraction

14 0.035905775 32 emnlp-2012-Detecting Subgroups in Online Discussions by Modeling Positive and Negative Relations among Participants

15 0.034278717 113 emnlp-2012-Resolving This-issue Anaphora

16 0.032579128 7 emnlp-2012-A Novel Discriminative Framework for Sentence-Level Discourse Analysis

17 0.032004211 122 emnlp-2012-Syntactic Surprisal Affects Spoken Word Duration in Conversational Contexts

18 0.031306028 106 emnlp-2012-Part-of-Speech Tagging for Chinese-English Mixed Texts with Dynamic Features

19 0.031281475 21 emnlp-2012-Assessment of ESL Learners' Syntactic Competence Based on Similarity Measures

20 0.03104735 72 emnlp-2012-Joint Inference for Event Timeline Construction


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.125), (1, 0.04), (2, 0.022), (3, -0.043), (4, 0.005), (5, 0.002), (6, -0.033), (7, -0.066), (8, -0.002), (9, -0.015), (10, -0.015), (11, -0.029), (12, -0.081), (13, 0.041), (14, -0.017), (15, -0.003), (16, 0.005), (17, -0.069), (18, -0.082), (19, 0.034), (20, -0.024), (21, -0.017), (22, 0.041), (23, -0.046), (24, -0.042), (25, 0.104), (26, 0.087), (27, 0.0), (28, 0.006), (29, -0.087), (30, -0.027), (31, 0.13), (32, 0.066), (33, 0.034), (34, -0.19), (35, -0.056), (36, 0.03), (37, 0.107), (38, -0.1), (39, -0.016), (40, 0.172), (41, 0.083), (42, 0.061), (43, 0.089), (44, -0.339), (45, -0.191), (46, -0.13), (47, -0.161), (48, 0.247), (49, -0.14)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95413166 9 emnlp-2012-A Sequence Labelling Approach to Quote Attribution

Author: Timothy O'Keefe ; Silvia Pareti ; James R. Curran ; Irena Koprinska ; Matthew Honnibal

Abstract: Quote extraction and attribution is the task of automatically extracting quotes from text and attributing each quote to its correct speaker. The present state-of-the-art system uses gold standard information from previous decisions in its features, which, when removed, results in a large drop in performance. We treat the problem as a sequence labelling task, which allows us to incorporate sequence features without using gold standard information. We present results on two new corpora and an augmented version of a third, achieving a new state-of-the-art for systems using only realistic features.

2 0.55279934 27 emnlp-2012-Characterizing Stylistic Elements in Syntactic Structure

Author: Song Feng ; Ritwik Banerjee ; Yejin Choi

Abstract: Much of the writing styles recognized in rhetorical and composition theories involve deep syntactic elements. However, most previous research for computational stylometric analysis has relied on shallow lexico-syntactic patterns. Some very recent work has shown that PCFG models can detect distributional difference in syntactic styles, but without offering much insights into exactly what constitute salient stylistic elements in sentence structure characterizing each authorship. In this paper, we present a comprehensive exploration of syntactic elements in writing styles, with particular emphasis on interpretable characterization of stylistic elements. We present analytic insights with respect to the authorship attribution task in two different domains.

3 0.38699141 122 emnlp-2012-Syntactic Surprisal Affects Spoken Word Duration in Conversational Contexts

Author: Vera Demberg ; Asad Sayeed ; Philip Gorinski ; Nikolaos Engonopoulos

Abstract: We present results of a novel experiment to investigate speech production in conversational data that links speech rate to information density. We provide the first evidence for an association between syntactic surprisal and word duration in recorded speech. Using the AMI corpus which contains transcriptions of focus group meetings with precise word durations, we show that word durations correlate with syntactic surprisal estimated from the incremental Roark parser over and above simpler measures, such as word duration estimated from a state-of-the-art text-to-speech system and word frequencies, and that the syntactic surprisal estimates are better predictors of word durations than a simpler version of surprisal based on trigram probabilities. This result supports the uniform information density (UID) hypothesis and points a way to more realistic artificial speech generation.

4 0.335262 118 emnlp-2012-Source Language Adaptation for Resource-Poor Machine Translation

Author: Pidong Wang ; Preslav Nakov ; Hwee Tou Ng

Abstract: We propose a novel, language-independent approach for improving machine translation from a resource-poor language to X by adapting a large bi-text for a related resource-rich language and X (the same target language). We assume a small bi-text for the resourcepoor language to X pair, which we use to learn word-level and phrase-level paraphrases and cross-lingual morphological variants between the resource-rich and the resource-poor language; we then adapt the former to get closer to the latter. Our experiments for Indonesian/Malay–English translation show that using the large adapted resource-rich bitext yields 6.7 BLEU points of improvement over the unadapted one and 2.6 BLEU points over the original small bi-text. Moreover, combining the small bi-text with the adapted bi-text outperforms the corresponding combinations with the unadapted bi-text by 1.5– 3 BLEU points. We also demonstrate applicability to other languages and domains.

5 0.24833322 120 emnlp-2012-Streaming Analysis of Discourse Participants

Author: Benjamin Van Durme

Abstract: Inferring attributes of discourse participants has been treated as a batch-processing task: data such as all tweets from a given author are gathered in bulk, processed, analyzed for a particular feature, then reported as a result of academic interest. Given the sources and scale of material used in these efforts, along with potential use cases of such analytic tools, discourse analysis should be reconsidered as a streaming challenge. We show that under certain common formulations, the batchprocessing analytic framework can be decomposed into a sequential series of updates, using as an example the task of gender classification. Once in a streaming framework, and motivated by large data sets generated by social media services, we present novel results in approximate counting, showing its applicability to space efficient streaming classification.

6 0.23026136 3 emnlp-2012-A Coherence Model Based on Syntactic Patterns

7 0.22131169 89 emnlp-2012-Mixed Membership Markov Models for Unsupervised Conversation Modeling

8 0.22076817 7 emnlp-2012-A Novel Discriminative Framework for Sentence-Level Discourse Analysis

9 0.21128479 76 emnlp-2012-Learning-based Multi-Sieve Co-reference Resolution with Knowledge

10 0.19862039 84 emnlp-2012-Linking Named Entities to Any Database

11 0.1928844 17 emnlp-2012-An "AI readability" Formula for French as a Foreign Language

12 0.1914023 51 emnlp-2012-Extracting Opinion Expressions with semi-Markov Conditional Random Fields

13 0.18684238 92 emnlp-2012-Multi-Domain Learning: When Do Domains Matter?

14 0.17829183 48 emnlp-2012-Exploring Adaptor Grammars for Native Language Identification

15 0.17587222 103 emnlp-2012-PATTY: A Taxonomy of Relational Patterns with Semantic Types

16 0.17396966 79 emnlp-2012-Learning Syntactic Categories Using Paradigmatic Representations of Word Context

17 0.17037846 77 emnlp-2012-Learning Constraints for Consistent Timeline Extraction

18 0.16860273 83 emnlp-2012-Lexical Differences in Autobiographical Narratives from Schizophrenic Patients and Healthy Controls

19 0.16595829 29 emnlp-2012-Concurrent Acquisition of Word Meaning and Lexical Categories

20 0.16470452 123 emnlp-2012-Syntactic Transfer Using a Bilingual Lexicon


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.014), (16, 0.039), (25, 0.015), (34, 0.052), (39, 0.011), (45, 0.013), (60, 0.086), (63, 0.077), (64, 0.02), (65, 0.03), (67, 0.319), (70, 0.023), (73, 0.024), (74, 0.068), (76, 0.037), (80, 0.023), (82, 0.012), (86, 0.023), (95, 0.022)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.74132156 9 emnlp-2012-A Sequence Labelling Approach to Quote Attribution

Author: Timothy O'Keefe ; Silvia Pareti ; James R. Curran ; Irena Koprinska ; Matthew Honnibal

Abstract: Quote extraction and attribution is the task of automatically extracting quotes from text and attributing each quote to its correct speaker. The present state-of-the-art system uses gold standard information from previous decisions in its features, which, when removed, results in a large drop in performance. We treat the problem as a sequence labelling task, which allows us to incorporate sequence features without using gold standard information. We present results on two new corpora and an augmented version of a third, achieving a new state-of-the-art for systems using only realistic features.

2 0.43700743 136 emnlp-2012-Weakly Supervised Training of Semantic Parsers

Author: Jayant Krishnamurthy ; Tom Mitchell

Abstract: We present a method for training a semantic parser using only a knowledge base and an unlabeled text corpus, without any individually annotated sentences. Our key observation is that multiple forms ofweak supervision can be combined to train an accurate semantic parser: semantic supervision from a knowledge base, and syntactic supervision from dependencyparsed sentences. We apply our approach to train a semantic parser that uses 77 relations from Freebase in its knowledge representation. This semantic parser extracts instances of binary relations with state-of-theart accuracy, while simultaneously recovering much richer semantic structures, such as conjunctions of multiple relations with partially shared arguments. We demonstrate recovery of this richer structure by extracting logical forms from natural language queries against Freebase. On this task, the trained semantic parser achieves 80% precision and 56% recall, despite never having seen an annotated logical form.

3 0.43091333 124 emnlp-2012-Three Dependency-and-Boundary Models for Grammar Induction

Author: Valentin I. Spitkovsky ; Hiyan Alshawi ; Daniel Jurafsky

Abstract: We present a new family of models for unsupervised parsing, Dependency and Boundary models, that use cues at constituent boundaries to inform head-outward dependency tree generation. We build on three intuitions that are explicit in phrase-structure grammars but only implicit in standard dependency formulations: (i) Distributions of words that occur at sentence boundaries such as English determiners resemble constituent edges. (ii) Punctuation at sentence boundaries further helps distinguish full sentences from fragments like headlines and titles, allowing us to model grammatical differences between complete and incomplete sentences. (iii) Sentence-internal punctuation boundaries help with longer-distance dependencies, since punctuation correlates with constituent edges. Our models induce state-of-the-art dependency grammars for many languages without special knowledge of optimal input sentence lengths or biased, manually-tuned initializers.

4 0.42423403 20 emnlp-2012-Answering Opinion Questions on Products by Exploiting Hierarchical Organization of Consumer Reviews

Author: Jianxing Yu ; Zheng-Jun Zha ; Tat-Seng Chua

Abstract: This paper proposes to generate appropriate answers for opinion questions about products by exploiting the hierarchical organization of consumer reviews. The hierarchy organizes product aspects as nodes following their parent-child relations. For each aspect, the reviews and corresponding opinions on this aspect are stored. We develop a new framework for opinion Questions Answering, which enables accurate question analysis and effective answer generation by making use of the hierarchy. In particular, we first identify the (explicit/implicit) product aspects asked in the questions and their sub-aspects by referring to the hierarchy. We then retrieve the corresponding review fragments relevant to the aspects from the hierarchy. In order to generate appropriate answers from the review fragments, we develop a multi-criteria optimization approach for answer generation by simultaneously taking into account review salience, coherence, diversity, and parent-child relations among the aspects. We conduct evaluations on 11 popular products in four domains. The evaluated corpus contains 70,359 consumer reviews and 220 questions on these products. Experimental results demonstrate the effectiveness of our approach.

5 0.42349559 14 emnlp-2012-A Weakly Supervised Model for Sentence-Level Semantic Orientation Analysis with Multiple Experts

Author: Lizhen Qu ; Rainer Gemulla ; Gerhard Weikum

Abstract: We propose the weakly supervised MultiExperts Model (MEM) for analyzing the semantic orientation of opinions expressed in natural language reviews. In contrast to most prior work, MEM predicts both opinion polarity and opinion strength at the level of individual sentences; such fine-grained analysis helps to understand better why users like or dislike the entity under review. A key challenge in this setting is that it is hard to obtain sentence-level training data for both polarity and strength. For this reason, MEM is weakly supervised: It starts with potentially noisy indicators obtained from coarse-grained training data (i.e., document-level ratings), a small set of diverse base predictors, and, if available, small amounts of fine-grained training data. We integrate these noisy indicators into a unified probabilistic framework using ideas from ensemble learning and graph-based semi-supervised learning. Our experiments indicate that MEM outperforms state-of-the-art methods by a significant margin.

6 0.42085817 51 emnlp-2012-Extracting Opinion Expressions with semi-Markov Conditional Random Fields

7 0.41822702 3 emnlp-2012-A Coherence Model Based on Syntactic Patterns

8 0.41752753 8 emnlp-2012-A Phrase-Discovering Topic Model Using Hierarchical Pitman-Yor Processes

9 0.41672763 23 emnlp-2012-Besting the Quiz Master: Crowdsourcing Incremental Classification Games

10 0.41630352 71 emnlp-2012-Joint Entity and Event Coreference Resolution across Documents

11 0.41615197 97 emnlp-2012-Natural Language Questions for the Web of Data

12 0.4156962 81 emnlp-2012-Learning to Map into a Universal POS Tagset

13 0.41424763 123 emnlp-2012-Syntactic Transfer Using a Bilingual Lexicon

14 0.41379991 89 emnlp-2012-Mixed Membership Markov Models for Unsupervised Conversation Modeling

15 0.41300616 122 emnlp-2012-Syntactic Surprisal Affects Spoken Word Duration in Conversational Contexts

16 0.41258267 109 emnlp-2012-Re-training Monolingual Parser Bilingually for Syntactic SMT

17 0.40989015 114 emnlp-2012-Revisiting the Predictability of Language: Response Completion in Social Media

18 0.4093838 64 emnlp-2012-Improved Parsing and POS Tagging Using Inter-Sentence Consistency Constraints

19 0.4077484 42 emnlp-2012-Entropy-based Pruning for Phrase-based Machine Translation

20 0.4076305 27 emnlp-2012-Characterizing Stylistic Elements in Syntactic Structure