acl acl2011 acl2011-73 knowledge-graph by maker-knowledge-mining

73 acl-2011-Collective Classification of Congressional Floor-Debate Transcripts


Source: pdf

Author: Clinton Burfoot ; Steven Bird ; Timothy Baldwin

Abstract: This paper explores approaches to sentiment classification of U.S. Congressional floor-debate transcripts. Collective classification techniques are used to take advantage of the informal citation structure present in the debates. We use a range of methods based on local and global formulations and introduce novel approaches for incorporating the outputs of machine learners into collective classification algorithms. Our experimental evaluation shows that the mean-field algorithm obtains the best results for the task, significantly outperforming the benchmark technique.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract This paper explores approaches to sentiment classification of U.S. Congressional floor-debate transcripts. [sent-4, score-0.237]

2 Collective classification techniques are used to take advantage of the informal citation structure present in the debates. [sent-7, score-0.653]

3 We use a range of methods based on local and global formulations and introduce novel approaches for incorporating the outputs of machine learners into collective classification algorithms. [sent-8, score-0.668]

4 1 Introduction Supervised document classification is a well-studied task. [sent-10, score-0.237]

5 Research has been performed across many document types with a variety of classification tasks. [sent-11, score-0.237]

6 Examples are topic classification of newswire articles (Yang and Liu, 1999), sentiment classification of movie reviews (Pang et al. [sent-12, score-0.237]

7 , 2002), and satire classification of news articles (Burfoot and Baldwin, 2009). [sent-13, score-0.2]

8 This and other work has established the usefulness of document classifiers as stand-alone systems and as components of broader NLP systems. [sent-14, score-0.145]

9 This paper deals with methods relevant to supervised document classification in domains with network structures, where collective classification can yield better performance than approaches that consider documents in isolation. [sent-15, score-0.898]

10 Simply put, a network structure is any set of relationships between documents that can be used to assist the document classification process. [sent-16, score-0.353]

11 Web encyclopedias and scholarly publications are two examples of document domains where network structures have been used to assist classification (Gantner and Schmidt-Thieme, 2009; Cao and Gao, 2005). [sent-17, score-0.353]

12 The contribution of this research is in four parts: (1) we introduce an approach that gives better-than-state-of-the-art performance for collective classification on the ConVote corpus of congressional debate transcripts (Thomas et al. [sent-18, score-0.814]

13 In the next section (Section 2) we provide a formal definition of collective classification and describe the ConVote corpus that is the basis for our experimental evaluation. [sent-20, score-0.585]

14 Subsequently, we describe and critique the established benchmark approach for congressional floor-debate transcript classification, before describing approaches based on three alternative collective classification algorithms (Section 3). [sent-21, score-0.812]

15 1 Task Definition Collective Classification Given a network and an object o in the network, there are three types of correlations that can be used. Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, Portland, Oregon, June 19-24, 2011. [sent-25, score-0.126]

16 Standard approaches to classification generally ignore any network information and only take into account the correlations in (1). [sent-29, score-0.326]

17 Collective classification takes advantage of the network by using all three sources. [sent-31, score-0.276]

18 Formally, collective classification takes a graph, made up of nodes V = {V1, ..., Vn}. [sent-34, score-0.621]

19 It consists of 3,857 speeches organized into 53 debates on specific pieces of legislation. [sent-50, score-0.165]

20 Each speech is tagged with the identity of the speaker and a “for” or “against” label derived from congressional voting records. [sent-51, score-0.261]

21 We apply collective classification to ConVote debates by letting V refer to the individual speakers in a debate and populating N using the citation graph between speakers. [sent-53, score-0.801]

22 The text of each instance is the concatenation of the speeches by a speaker within a debate. [sent-55, score-0.152]

23 This results in a corpus of 1,699 instances with a roughly even class distribution. [sent-56, score-0.172]
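
The corpus setup just described (speakers as nodes, citation edges between them, concatenated speeches as text) can be sketched as a minimal data structure. Everything here, class and field names included, is illustrative rather than taken from the paper:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Instance:
    speaker_id: str
    text: str                          # concatenated speeches in one debate
    label: Optional[str] = None        # "y" (for) or "n" (against)
    neighbors: List["Instance"] = field(default_factory=list)  # cited speakers

# A toy two-speaker debate with a mutual citation.
a = Instance("400115", "i agree with the gentlewoman from new york ...")
b = Instance("400378", "this bill deserves our support ...")
a.neighbors.append(b)   # speaker A cites speaker B
b.neighbors.append(a)

assert b in a.neighbors and a in b.neighbors
```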

24 3 Collective Classification Techniques In this section we describe techniques for performing collective classification on the ConVote corpus. [sent-61, score-0.585]

25 Let Ψ denote a set of functions representing the classification preferences produced by the content-only and citation classifiers: for each Vi ∈ V, φi ∈ Ψ is a function φi : L → R+ ∪ {0}. [sent-64, score-0.728]

26 Later in this section we will describe three collective classification algorithms capable of performing overall classification based on these inputs: (1) the minimum-cut approach, which is the benchmark for collective classification with ConVote, established by Thomas et al. [sent-66, score-1.447]

27 Iterative-classifier approach: This approach incorporates content-only and citation features into a single local classifier that works on the assumption that correct neighbor labels are already known. [sent-70, score-0.688]

28 For a detailed introduction to collective classification see Sen et al. [sent-75, score-0.585]

29 The phrase gentlewoman from New York by speaker 400115 is annotated as a reference to speaker 400378. [sent-99, score-0.17]

30 The content-only classifier is trained to predict y or n based on the unigram presence features found in speeches. [sent-105, score-0.184]

31 The citation classifier is trained to predict “same class” or “different class” labels based on the unigram presence features found in the context windows (30 tokens before, 20 tokens after) surrounding citations for each pair of speakers in the debate. [sent-106, score-0.905]

32 φi(y) = 1 if di > 2σi; (1 + di/(2σi))/2 if |di| ≤ 2σi; 0 if di < −2σi; where σi is the standard deviation of the decision plane distance, di, over all of the instances in the debate and φi(n) = 1 − φi(y). [sent-109, score-0.341]
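
Reading off the normalisation: the SVM decision-plane distance di is clipped at ±2σi and mapped linearly onto [0, 1]. A sketch of that mapping (the function name is ours), assuming the clipped-linear form:

```python
import statistics

def phi_y(d, sigma):
    """Clipped-linear normalisation of an SVM decision value d:
    saturates at 1 above 2*sigma, at 0 below -2*sigma, and is
    linear in between."""
    if d > 2 * sigma:
        return 1.0
    if d < -2 * sigma:
        return 0.0
    return (1 + d / (2 * sigma)) / 2

# sigma is taken over all decision values within one debate.
ds = [0.4, -0.2, 1.1, -0.9, 0.3]
sigma = statistics.stdev(ds)
prefs = {i: (phi_y(d, sigma), 1 - phi_y(d, sigma)) for i, d in enumerate(ds)}
assert all(abs(y + n - 1.0) < 1e-9 for y, n in prefs.values())  # phi(n) = 1 - phi(y)
```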

33 The citation classifier output is processed similarly: ψij(y, y) = α if dij > 4σij; α · dij/(4σij) if θ < dij ≤ 4σij; 0 if dij ≤ θ; where σij is the standard deviation of the decision plane distance, dij, over all of the citations in the debate and ψij(n, n) = ψij(y, y). [sent-110, score-0.974]

34 classify each citation context window separately, so their ψ values are actually calculated in a slightly more complicated way. [sent-114, score-0.581]

35 The cost function is modeled in a flow graph where extra source and sink nodes represent the y and n labels respectively. [sent-116, score-0.173]

36 Pairs classified as “same class” are linked with capacities defined by ψ. [sent-119, score-0.194]

37 An exact optimum and corresponding overall classification is efficiently computed by finding the minimum-cut of the flow graph (Blum and Chawla, 2001). [sent-120, score-0.305]
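
The flow-graph construction can be exercised end-to-end with a small max-flow routine. The following is a toy stand-in (Edmonds-Karp with illustrative integer capacities), not the authors' implementation:

```python
from collections import deque, defaultdict

def min_cut_partition(cap, source, sink):
    """Edmonds-Karp max-flow; returns the set of nodes left on the
    source side of a minimum cut (here: instances labelled 'y')."""
    for u in list(cap):
        for v in list(cap[u]):
            cap[v][u] += 0                     # ensure reverse residual arcs exist
    flow = defaultdict(lambda: defaultdict(int))

    def augmenting_path():
        parent = {source: None}
        q = deque([source])
        while q:
            u = q.popleft()
            for v in cap[u]:
                if v not in parent and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    if v == sink:
                        return parent
                    q.append(v)
        return None

    while (parent := augmenting_path()) is not None:
        v, bottleneck = sink, float("inf")
        while parent[v] is not None:           # find the path's bottleneck
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = sink
        while parent[v] is not None:           # push flow along the path
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u

    seen, q = {source}, deque([source])        # source side = residual-reachable
    while q:
        u = q.popleft()
        for v in cap[u]:
            if v not in seen and cap[u][v] - flow[u][v] > 0:
                seen.add(v)
                q.append(v)
    return seen

# Toy debate: source/sink arcs encode phi (content-only) preferences and
# the A-C arc encodes a strong "same class" psi score. Capacities are
# illustrative integers, not the paper's values.
cap = defaultdict(lambda: defaultdict(int))
cap["src"]["A"], cap["A"]["snk"] = 9, 1        # A leans "y"
cap["src"]["B"], cap["B"]["snk"] = 2, 8        # B leans "n"
cap["src"]["C"], cap["C"]["snk"] = 5, 5        # C is uncertain...
cap["A"]["C"] = cap["C"]["A"] = 6              # ...but agrees with A
y_side = min_cut_partition(cap, "src", "snk") - {"src"}
assert y_side == {"A", "C"}                    # agreement pulls C over to "y"
```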

38 point out, the model has no way of representing the “different class” output from the citation classifier and these citations must be discarded. [sent-126, score-0.7]

39 Inspection of the corpus shows that approximately 80% of citations indicate agreement, meaning that for the present task the impact of discarding this information may not be large. [sent-128, score-0.152]

40 However, the primary utility in collective approaches lies in their ability to fill in gaps in information not picked up by content-only classification. [sent-129, score-0.385]

41 The use of standard deviations appears problematic as, intuitively, the strength of a classification should be independent of its variance. [sent-132, score-0.256]

42 As a case in point, consider a set of instances in a debate all classified as similarly weak positives by the SVM. [sent-133, score-0.219]

43 The minimum-cut approach places instances in either the positive or negative class depending on which side of the cut they fall on. [sent-135, score-0.172]

44 This means that no measure of classification confidence is available. [sent-136, score-0.2]

45 A measure of classification confidence may also be necessary for incorporation into a broader system, e. [sent-138, score-0.2]

46 Tuning the α and θ parameters is likely to become a source of inaccuracy in cases where the tuning and test debates have dissimilar link structures. [sent-141, score-0.218]

47 For example, if the tuning debates tend to have fewer, more accurate links the α parameter will be higher. [sent-142, score-0.194]

48 This will not produce good results if the test debates have more frequent, less accurate links. [sent-143, score-0.134]

49 minimum-cut approach to incorporate “different class” citation classifications. [sent-147, score-0.453]

50 They use post hoc adjustments of graph capacities based on simple heuristics. [sent-148, score-0.124]

51 Two of the three approaches they trial appear to offer performance improvements: The SetTo heuristic: This heuristic works through E in order and tries to force Vi and Vj into different classes for every “different class” (dij < 0) citation classifier output where i < j. [sent-149, score-0.645]

52 The most obvious problem is the arbitrary nature of the manipulations, which produce a flow graph that has an indistinct relationship to the outputs of the two classifiers. [sent-158, score-0.16]

53 Finally, we note that the confidence of the citation classifier is not embodied in the graph structure. [sent-166, score-0.15]

54 The algorithm computes the fixed point equation for every node and continues to do so until the marginal probabilities bj(vj) stabilize. [sent-180, score-0.124]

55 Mean-field can be shown to be a variational method in the same way as loopy belief propagation, using a simpler trial distribution. [sent-181, score-0.222]
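
A naive mean-field loop for a pairwise model over the two labels can be sketched as follows; the potentials, names, and update schedule are a generic illustration, not the paper's exact formulation:

```python
import math

def pairwise(psi, i, j, li, lj):
    """Symmetric lookup of the pairwise preference psi."""
    return psi[(i, j)][(li, lj)] if (i, j) in psi else psi[(j, i)][(lj, li)]

def mean_field(phi, psi, neighbors, iters=50):
    """Naive mean-field over labels {'y', 'n'}: each marginal b_i is
    repeatedly recomputed from the local score phi_i and the
    neighbours' current marginals until a fixed point is reached."""
    labels = ("y", "n")
    b = {i: {l: 0.5 for l in labels} for i in phi}   # uniform start
    for _ in range(iters):
        for i in phi:
            logit = {}
            for l in labels:
                s = math.log(max(phi[i][l], 1e-12))
                for j in neighbors[i]:
                    for lj in labels:
                        s += b[j][lj] * math.log(max(pairwise(psi, i, j, l, lj), 1e-12))
                logit[l] = s
            z = sum(math.exp(v) for v in logit.values())
            b[i] = {l: math.exp(logit[l]) / z for l in labels}
    return b

# Toy debate: A is a confident "y", B is a weak "n", and the citation
# classifier strongly suggests they agree. All scores are illustrative.
phi = {"A": {"y": 0.8, "n": 0.2}, "B": {"y": 0.45, "n": 0.55}}
psi = {("A", "B"): {("y", "y"): 0.9, ("n", "n"): 0.9,
                    ("y", "n"): 0.1, ("n", "y"): 0.1}}
neighbors = {"A": ["B"], "B": ["A"]}
b = mean_field(phi, psi, neighbors)
assert b["B"]["y"] > 0.5   # agreement with A pulls B over to "y"
```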

56 Probabilistic SVM Normalisation: Unlike minimum-cut, the Markov random field approaches have inherent support for the “different class” output of the citation classifier. [sent-184, score-0.532]

57 By applying this technique to the base classifiers, we can produce new, simpler Ψ functions, φi(y) = Pi and ψij (y, y) = Pij where Pi is the probabilistic normalized output of the content-only classifier and Pij is the probabilistic normalized output of the citation classifier. [sent-189, score-0.577]
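
Platt-style probabilistic normalisation fits a sigmoid P(y = 1 | d) = 1 / (1 + exp(A·d + B)) to SVM decision values. The sketch below uses plain gradient descent rather than Platt's Newton-style optimiser with regularised targets, so it is a simplification:

```python
import math

def platt_fit(ds, ys, iters=2000, lr=0.1):
    """Fit P(y=1 | d) = 1 / (1 + exp(A*d + B)) to decision values ds
    with labels ys in {0, 1} by gradient descent on the negative
    log-likelihood (a simplified stand-in for Platt scaling)."""
    A, B = 0.0, 0.0
    n = len(ds)
    for _ in range(iters):
        gA = gB = 0.0
        for d, y in zip(ds, ys):
            p = 1.0 / (1.0 + math.exp(A * d + B))
            gA += (y - p) * d   # gradient of the NLL w.r.t. A
            gB += (y - p)       # gradient of the NLL w.r.t. B
        A -= lr * gA / n
        B -= lr * gB / n
    return A, B

# Toy decision values: positive distances correspond to label "y" (1).
A, B = platt_fit([2.0, 1.0, -1.0, -2.0], [1, 1, 0, 0])
prob = lambda d: 1.0 / (1.0 + math.exp(A * d + B))
assert prob(2.0) > 0.5 > prob(-2.0)   # monotone mapping onto probabilities
```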

58 4 Iterative Classifier Approach The dual-classifier approaches described above represent global attempts to solve the collective classification problem. [sent-196, score-0.585]

59 We can choose to narrow our focus to the local level, in which we aim to produce the best classification for a single instance with the assumption that all other parts of the problem (i. [sent-197, score-0.316]

60 , 2007), defined in Algorithm 1, is a simple technique for performing collective classification using such a local classifier. [sent-201, score-0.642]

61 After bootstrapping with a content-only classifier, it repeatedly generates new estimates for vi based on its current knowledge of Ni. [sent-202, score-0.26]
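
The iterative-classification loop just described can be sketched generically; the toy local rule and all names below are ours, standing in for the SVM local classifier:

```python
def iterative_classify(instances, neighbors, content_clf, local_clf, iters=30):
    """ICA sketch: bootstrap labels with a content-only classifier,
    then repeatedly re-classify each instance with a local classifier
    that also sees its neighbours' current label estimates."""
    labels = {i: content_clf(i) for i in instances}        # bootstrap
    for _ in range(iters):
        for i in instances:
            labels[i] = local_clf(i, [labels[j] for j in neighbors[i]])
    return labels

# Toy content-only scores P(y); C is a weak "n" on text alone.
lean = {"A": 0.9, "B": 0.8, "C": 0.45}
nbrs = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
content = lambda i: "y" if lean[i] > 0.5 else "n"

def local(i, neighbor_labels):
    # Hypothetical local rule: content lean plus weighted neighbour votes.
    votes = sum(1 if l == "y" else -1 for l in neighbor_labels)
    return "y" if lean[i] + 0.2 * votes > 0.5 else "n"

out = iterative_classify(lean, nbrs, content, local)
assert out["C"] == "y"   # agreeing neighbours flip the weak negative
```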

62 Citation presence and Citation count: Given that the majority of citations represent the “same class” relationship (see Section 3. [sent-207, score-0.204]

63 1), we can anticipate that content-only classification performance will be improved if we add features to represent the presence of neighbours of each class. [sent-208, score-0.252]

64 , are the elements of ui, the binary unigram feature vector used by the content-only classifier to represent instance i. [sent-216, score-0.162]

65 Alternatively, we can represent neighbor labels using binary citation presence values where any non-zero count becomes a 1 in the feature vector. [sent-217, score-0.643]
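
The two neighbour-label encodings can be illustrated side by side; the list layout is an illustration of appending them to a content-only unigram vector:

```python
def citation_features(unigrams, neighbor_labels, counts=True):
    """Append citation-count or citation-presence features to a
    content-only unigram vector (the encoding here is illustrative)."""
    n_y = sum(1 for l in neighbor_labels if l == "y")
    n_n = sum(1 for l in neighbor_labels if l == "n")
    if counts:
        return unigrams + [n_y, n_n]                # citation count
    return unigrams + [int(n_y > 0), int(n_n > 0)]  # citation presence

# Two "y" neighbours and one "n" neighbour:
assert citation_features([1, 0], ["y", "y", "n"]) == [1, 0, 2, 1]
assert citation_features([1, 0], ["y", "y", "n"], counts=False) == [1, 0, 1, 1]
```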

66 Context window: We can adopt a more nuanced model for citation information if we incorporate the citation context window features into the feature vector. [sent-218, score-1.034]

67 This is, in effect, a synthesis of the content-only and citation feature models. [sent-219, score-0.453]

68 Context window features come from the product space L × C, where C is the set of unigrams used in citation context windows and ci denotes the context window features for instance i. [sent-220, score-0.13]

69 This approach implements the intuition that speakers indicate their voting intentions by the words they use to refer to speakers whose vote is known. [sent-228, score-0.183]

70 As an example, consider the context window feature AGREE-FOR, indicating the presence of the agree unigram in the citation window I agree with the gentleman from Louisiana, where the label for the gentleman from Louisiana instance is y. [sent-230, score-0.988]
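
A sketch of building L × C product features from citation context windows, matching the AGREE-FOR example; the tuple encoding is ours:

```python
def context_window_features(citations):
    """Pair each unigram in a citation's context window with the cited
    instance's label, so e.g. ('agree', 'y') plays the role of the
    AGREE-FOR feature described above."""
    feats = set()
    for window_tokens, cited_label in citations:
        for tok in window_tokens:
            feats.add((tok, cited_label))
    return feats

# "I agree with the gentleman from Louisiana", whose instance label is y:
f = context_window_features([(["i", "agree", "with", "the", "gentleman"], "y")])
assert ("agree", "y") in f
```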

71 1 Local Classification We evaluate three models for local classification: citation presence features, citation count features and context window features. [sent-249, score-1.198]

72 In each case the SVM classifier is given feature vectors with both content-only and citation information, as described in Section 3. [sent-250, score-0.594]

73 Table 1 shows that context window performs the best with 89. [sent-252, score-0.128]

74 Knowing the words used in citations of each class is better than knowing the number of citations in each class, and better still than only knowing which classes of citations exist. [sent-258, score-0.615]

75 These results represent an upper-bound for the performance of the iterative classifier, which relies on iteration to produce the reliable information about citations given here by oracle. [sent-259, score-0.26]

76 2 Collective Classification Table 2 shows overall results for the three collective classification algorithms. [sent-261, score-0.585]

77 The iterative classifier was run separately with citation count and context window citation features, the two best performing local classification methods, both with a threshold of 30 iterations. [sent-262, score-0.71]

78 All three local classifiers are significant over the in-isolation classifier (p < . [sent-268, score-0.231]

80 Collective classification techniques can only have an impact on connected instances, so these figures are most important. [sent-272, score-0.267]

81 The figures for all instances show the performance of the classifiers in our real-world task, where both connected and isolated instances need to be classified and the end-user may not distinguish between the two types. [sent-273, score-0.375]

82 Each of the four collective classifiers outperforms the minimum-cut benchmark over connected instances, with the iterative classifier (context window) (79. [sent-274, score-0.753]

83 The dual-classifier approaches based on loopy belief propagation and mean-field do better than the iterative-classifier approaches by an average of about 3%. [sent-279, score-0.234]

84 Iterative classification performs slightly better with citation count features than with context window features, despite the fact that the context window model performs better in the local classifier evaluation. [sent-280, score-1.116]

85 We speculate that this may be due to citation count performing better when given incorrect neighbor labels. [sent-281, score-0.559]

86 This is an aspect of local classifier performance we do not otherwise measure, so a clear conclusion is not possible. [sent-282, score-0.152]

87 3 A Note on Error Propagation and Experimental Configuration Early in our experimental work we noticed that performance often varied greatly depending on the debates that were allocated to training, tuning and testing. [sent-290, score-0.152]

88 This leads us to conclude that the performance of collective classification methods is highly variable. [sent-297, score-0.585]

89 They note that the cost of incorrectly classifying a given instance can be magnified in collective classification, because errors are propagated throughout the network. [sent-299, score-0.415]

90 The extent to which this happens may depend on the random interaction between base classification accuracy and network structure. [sent-300, score-0.311]

91 From these statistical and theoretical factors we infer that more reliable conclusions can be drawn from collective classification experiments that use cross-validation instead of a single, fixed data split. [sent-302, score-0.585]
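
One way to follow this recommendation is to cross-validate over whole debates, so a debate's citation graph never straddles the train/test boundary. A sketch (the fold-assignment scheme is ours):

```python
def debate_folds(debates, k=10):
    """Split whole debates (not individual speakers) into k folds so
    each debate's citation graph stays intact within one fold."""
    folds = [[] for _ in range(k)]
    for idx, d in enumerate(sorted(debates)):
        folds[idx % k].append(d)
    return folds

# 53 debates, as in ConVote, into 10 roughly equal folds.
folds = debate_folds([f"debate_{i:02d}" for i in range(53)], k=10)
assert sum(len(f) for f in folds) == 53
assert max(len(f) for f in folds) - min(len(f) for f in folds) <= 1
```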

92 (2009) use ICA to improve sentiment polarity classification of dialogue acts in a corpus of multi-party meeting transcripts. [sent-304, score-0.269]

93 provides another argument for the usefulness of collective classification (specifically ICA), in this case as applied at a dialogue act level and relying on a complex system of annotations for link information. [sent-308, score-0.683]

94 Concessions to other stances are modeled, but there are no overt citations in the data that could be used to induce the network structure required for collective classification. [sent-310, score-0.613]

95 Pang and Lee (2005) use metric labeling to perform multi-class collective classification of movie reviews. [sent-311, score-0.585]

96 Metric labeling is a multi-class equivalent of the minimum-cut technique in which optimization is done over a cost function incorporating content-only and citation scores. [sent-312, score-0.453]

97 In cases where both citation and similarity links are present, the overall link score is taken as the sum of the two scores. [sent-319, score-0.561]

98 In the framework of this research, the approach would be to train a link meta-classifier to take scores from both link classifiers and output an overall link probability. [sent-321, score-0.277]

99 We rejected linear-chain CRFs as a candidate approach for our evaluation on the grounds that the arbitrarily connected graphs used in collective classification cannot be fully represented in graphical format, i. [sent-326, score-0.652]

100 Table 2: Speaker classification accuracies (%) over connected, isolated and all instances. [sent-375, score-0.239]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('citation', 0.453), ('collective', 0.385), ('vi', 0.26), ('classification', 0.2), ('ij', 0.188), ('vj', 0.185), ('convote', 0.183), ('citations', 0.152), ('incby', 0.137), ('setto', 0.137), ('congressional', 0.121), ('debate', 0.108), ('debates', 0.105), ('window', 0.1), ('loopy', 0.099), ('classifier', 0.095), ('class', 0.093), ('plane', 0.081), ('thomas', 0.08), ('iterative', 0.079), ('classifiers', 0.079), ('instances', 0.079), ('network', 0.076), ('burfoot', 0.069), ('capacities', 0.069), ('belief', 0.068), ('connected', 0.067), ('svm', 0.067), ('propagation', 0.067), ('link', 0.066), ('bansal', 0.063), ('speaker', 0.062), ('speeches', 0.06), ('vyi', 0.06), ('local', 0.057), ('speakers', 0.056), ('deviations', 0.056), ('uij', 0.056), ('trial', 0.055), ('count', 0.055), ('graph', 0.055), ('presence', 0.052), ('neighbor', 0.051), ('correlations', 0.05), ('markov', 0.05), ('flow', 0.05), ('somasundaran', 0.05), ('benchmark', 0.048), ('tuning', 0.047), ('bilgic', 0.046), ('contentonly', 0.046), ('gentleman', 0.046), ('gentlewoman', 0.046), ('louisiana', 0.046), ('pvj', 0.046), ('vxi', 0.046), ('bj', 0.046), ('deviation', 0.045), ('field', 0.044), ('sen', 0.044), ('offer', 0.042), ('links', 0.042), ('marginal', 0.041), ('assist', 0.04), ('dij', 0.04), ('pij', 0.04), ('label', 0.04), ('isolated', 0.039), ('voting', 0.038), ('node', 0.037), ('ica', 0.037), ('normalisation', 0.037), ('unigram', 0.037), ('sentiment', 0.037), ('document', 0.037), ('nodes', 0.036), ('pairwise', 0.036), ('ahead', 0.035), ('random', 0.035), ('knowing', 0.033), ('neighbors', 0.033), ('intentions', 0.033), ('dialogue', 0.032), ('classified', 0.032), ('labels', 0.032), ('ai', 0.031), ('instance', 0.03), ('platt', 0.029), ('transcript', 0.029), ('preferences', 0.029), ('established', 0.029), ('produce', 0.029), ('pang', 0.029), ('message', 0.028), ('ni', 0.028), ('di', 0.028), ('context', 0.028), ('crfs', 0.028), ('outputs', 0.026), ('normalization', 0.026)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999982 73 acl-2011-Collective Classification of Congressional Floor-Debate Transcripts

Author: Clinton Burfoot ; Steven Bird ; Timothy Baldwin

Abstract: This paper explores approaches to sentiment classification of U.S. Congressional floordebate transcripts. Collective classification techniques are used to take advantage of the informal citation structure present in the debates. We use a range of methods based on local and global formulations and introduce novel approaches for incorporating the outputs of machine learners into collective classification algorithms. Our experimental evaluation shows that the mean-field algorithm obtains the best results for the task, significantly outperforming the benchmark technique.

2 0.36261809 71 acl-2011-Coherent Citation-Based Summarization of Scientific Papers

Author: Amjad Abu-Jbara ; Dragomir Radev

Abstract: In citation-based summarization, text written by several researchers is leveraged to identify the important aspects of a target paper. Previous work on this problem focused almost exclusively on its extraction aspect (i.e. selecting a representative set of citation sentences that highlight the contribution of the target paper). Meanwhile, the fluency of the produced summaries has been mostly ignored. For example, diversity, readability, cohesion, and ordering of the sentences included in the summary have not been thoroughly considered. This resulted in noisy and confusing summaries. In this work, we present an approach for producing readable and cohesive citation-based summaries. Our experiments show that the pro- posed approach outperforms several baselines in terms of both extraction quality and fluency.

3 0.3008869 281 acl-2011-Sentiment Analysis of Citations using Sentence Structure-Based Features

Author: Awais Athar

Abstract: Sentiment analysis of citations in scientific papers and articles is a new and interesting problem due to the many linguistic differences between scientific texts and other genres. In this paper, we focus on the problem of automatic identification of positive and negative sentiment polarity in citations to scientific papers. Using a newly constructed annotated citation sentiment corpus, we explore the effectiveness of existing and novel features, including n-grams, specialised science-specific lexical features, dependency relations, sentence splitting and negation features. Our results show that 3-grams and dependencies perform best in this task; they outperform the sentence splitting, science lexicon and negation based features.

4 0.12223788 201 acl-2011-Learning From Collective Human Behavior to Introduce Diversity in Lexical Choice

Author: Vahed Qazvinian ; Dragomir R. Radev

Abstract: We analyze collective discourse, a collective human behavior in content generation, and show that it exhibits diversity, a property of general collective systems. Using extensive analysis, we propose a novel paradigm for designing summary generation systems that reflect the diversity of perspectives seen in reallife collective summarization. We analyze 50 sets of summaries written by human about the same story or artifact and investigate the diversity of perspectives across these summaries. We show how different summaries use various phrasal information units (i.e., nuggets) to express the same atomic semantic units, called factoids. Finally, we present a ranker that employs distributional similarities to build a net- work of words, and captures the diversity of perspectives by detecting communities in this network. Our experiments show how our system outperforms a wide range of other document ranking systems that leverage diversity.

5 0.090394318 332 acl-2011-Using Multiple Sources to Construct a Sentiment Sensitive Thesaurus for Cross-Domain Sentiment Classification

Author: Danushka Bollegala ; David Weir ; John Carroll

Abstract: We describe a sentiment classification method that is applicable when we do not have any labeled data for a target domain but have some labeled data for multiple other domains, designated as the source domains. We automat- ically create a sentiment sensitive thesaurus using both labeled and unlabeled data from multiple source domains to find the association between words that express similar sentiments in different domains. The created thesaurus is then used to expand feature vectors to train a binary classifier. Unlike previous cross-domain sentiment classification methods, our method can efficiently learn from multiple source domains. Our method significantly outperforms numerous baselines and returns results that are better than or comparable to previous cross-domain sentiment classification methods on a benchmark dataset containing Amazon user reviews for different types of products.

6 0.080286503 270 acl-2011-SciSumm: A Multi-Document Summarization System for Scientific Articles

7 0.08009658 204 acl-2011-Learning Word Vectors for Sentiment Analysis

8 0.077687576 174 acl-2011-Insights from Network Structure for Text Mining

9 0.076282896 183 acl-2011-Joint Bilingual Sentiment Classification with Unlabeled Parallel Corpora

10 0.070606098 292 acl-2011-Target-dependent Twitter Sentiment Classification

11 0.069656312 258 acl-2011-Ranking Class Labels Using Query Sessions

12 0.065250821 133 acl-2011-Extracting Social Power Relationships from Natural Language

13 0.064877838 139 acl-2011-From Bilingual Dictionaries to Interlingual Document Representations

14 0.062891245 221 acl-2011-Model-Based Aligner Combination Using Dual Decomposition

15 0.062373642 103 acl-2011-Domain Adaptation by Constraining Inter-Domain Variability of Latent Feature Representation

16 0.062339921 260 acl-2011-Recognizing Authority in Dialogue with an Integer Linear Programming Constrained Model

17 0.060844474 10 acl-2011-A Discriminative Model for Joint Morphological Disambiguation and Dependency Parsing

18 0.059580285 123 acl-2011-Exact Decoding of Syntactic Translation Models through Lagrangian Relaxation

19 0.059287243 150 acl-2011-Hierarchical Text Classification with Latent Concepts

20 0.059099939 54 acl-2011-Automatically Extracting Polarity-Bearing Topics for Cross-Domain Sentiment Classification


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.194), (1, 0.115), (2, 0.027), (3, -0.014), (4, -0.032), (5, 0.027), (6, -0.026), (7, 0.049), (8, -0.015), (9, -0.033), (10, -0.001), (11, -0.056), (12, -0.086), (13, 0.076), (14, -0.159), (15, -0.041), (16, -0.021), (17, -0.015), (18, 0.087), (19, -0.113), (20, -0.061), (21, -0.131), (22, 0.125), (23, -0.031), (24, 0.046), (25, 0.111), (26, -0.115), (27, 0.276), (28, -0.152), (29, 0.026), (30, -0.122), (31, 0.067), (32, -0.067), (33, 0.068), (34, 0.053), (35, -0.045), (36, 0.027), (37, 0.003), (38, -0.099), (39, 0.097), (40, 0.029), (41, 0.013), (42, -0.026), (43, 0.067), (44, -0.005), (45, -0.1), (46, -0.026), (47, -0.044), (48, -0.031), (49, -0.141)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93731868 73 acl-2011-Collective Classification of Congressional Floor-Debate Transcripts

Author: Clinton Burfoot ; Steven Bird ; Timothy Baldwin

Abstract: This paper explores approaches to sentiment classification of U.S. Congressional floordebate transcripts. Collective classification techniques are used to take advantage of the informal citation structure present in the debates. We use a range of methods based on local and global formulations and introduce novel approaches for incorporating the outputs of machine learners into collective classification algorithms. Our experimental evaluation shows that the mean-field algorithm obtains the best results for the task, significantly outperforming the benchmark technique.

2 0.82983488 71 acl-2011-Coherent Citation-Based Summarization of Scientific Papers

Author: Amjad Abu-Jbara ; Dragomir Radev

Abstract: In citation-based summarization, text written by several researchers is leveraged to identify the important aspects of a target paper. Previous work on this problem focused almost exclusively on its extraction aspect (i.e. selecting a representative set of citation sentences that highlight the contribution of the target paper). Meanwhile, the fluency of the produced summaries has been mostly ignored. For example, diversity, readability, cohesion, and ordering of the sentences included in the summary have not been thoroughly considered. This resulted in noisy and confusing summaries. In this work, we present an approach for producing readable and cohesive citation-based summaries. Our experiments show that the pro- posed approach outperforms several baselines in terms of both extraction quality and fluency.

3 0.64478022 281 acl-2011-Sentiment Analysis of Citations using Sentence Structure-Based Features

Author: Awais Athar

Abstract: Sentiment analysis of citations in scientific papers and articles is a new and interesting problem due to the many linguistic differences between scientific texts and other genres. In this paper, we focus on the problem of automatic identification of positive and negative sentiment polarity in citations to scientific papers. Using a newly constructed annotated citation sentiment corpus, we explore the effectiveness of existing and novel features, including n-grams, specialised science-specific lexical features, dependency relations, sentence splitting and negation features. Our results show that 3-grams and dependencies perform best in this task; they outperform the sentence splitting, science lexicon and negation based features.

4 0.62357944 67 acl-2011-Clairlib: A Toolkit for Natural Language Processing, Information Retrieval, and Network Analysis

Author: Amjad Abu-Jbara ; Dragomir Radev

Abstract: In this paper we present Clairlib, an opensource toolkit for Natural Language Processing, Information Retrieval, and Network Analysis. Clairlib provides an integrated framework intended to simplify a number of generic tasks within and across those three areas. It has a command-line interface, a graphical interface, and a documented API. Clairlib is compatible with all the common platforms and operating systems. In addition to its own functionality, it provides interfaces to external software and corpora. Clairlib comes with a comprehensive documentation and a rich set of tutorials and visual demos.

5 0.57993472 201 acl-2011-Learning From Collective Human Behavior to Introduce Diversity in Lexical Choice

Author: Vahed Qazvinian ; Dragomir R. Radev

Abstract: We analyze collective discourse, a collective human behavior in content generation, and show that it exhibits diversity, a property of general collective systems. Using extensive analysis, we propose a novel paradigm for designing summary generation systems that reflect the diversity of perspectives seen in reallife collective summarization. We analyze 50 sets of summaries written by human about the same story or artifact and investigate the diversity of perspectives across these summaries. We show how different summaries use various phrasal information units (i.e., nuggets) to express the same atomic semantic units, called factoids. Finally, we present a ranker that employs distributional similarities to build a net- work of words, and captures the diversity of perspectives by detecting communities in this network. Our experiments show how our system outperforms a wide range of other document ranking systems that leverage diversity.

6 0.48787624 270 acl-2011-SciSumm: A Multi-Document Summarization System for Scientific Articles

7 0.47675893 298 acl-2011-The ACL Anthology Searchbench

8 0.39766628 133 acl-2011-Extracting Social Power Relationships from Natural Language

9 0.3947762 84 acl-2011-Contrasting Opposing Views of News Articles on Contentious Issues

10 0.3874321 174 acl-2011-Insights from Network Structure for Text Mining

11 0.3701874 8 acl-2011-A Corpus of Scope-disambiguated English Text

12 0.36244956 338 acl-2011-Wikulu: An Extensible Architecture for Integrating Natural Language Processing Techniques with Wikis

13 0.36023131 314 acl-2011-Typed Graph Models for Learning Latent Attributes from Names

14 0.35395294 212 acl-2011-Local Histograms of Character N-grams for Authorship Attribution

15 0.35199657 231 acl-2011-Nonlinear Evidence Fusion and Propagation for Hyponymy Relation Mining

16 0.34556442 150 acl-2011-Hierarchical Text Classification with Latent Concepts

17 0.34257475 165 acl-2011-Improving Classification of Medical Assertions in Clinical Notes

18 0.33120924 286 acl-2011-Social Network Extraction from Texts: A Thesis Proposal

19 0.32426593 162 acl-2011-Identifying the Semantic Orientation of Foreign Words

20 0.3216041 50 acl-2011-Automatic Extraction of Lexico-Syntactic Patterns for Detection of Negation and Speculation Scopes


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(5, 0.044), (17, 0.05), (26, 0.026), (37, 0.134), (39, 0.042), (41, 0.09), (55, 0.025), (59, 0.043), (65, 0.242), (72, 0.031), (88, 0.015), (91, 0.04), (96, 0.128), (97, 0.012)]
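The line above is this paper's sparse LDA topic distribution, as (topicId, topicWeight) pairs. A minimal sketch of how such vectors might be compared to produce the simValue scores in the list below, using cosine similarity (the function and the second paper's vector are illustrative assumptions, not the site's actual pipeline):

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two sparse topic vectors,
    each given as a dict of {topicId: topicWeight}."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Topic distribution for this paper, copied from the list above.
paper = {5: 0.044, 17: 0.05, 26: 0.026, 37: 0.134, 39: 0.042,
         41: 0.09, 55: 0.025, 59: 0.043, 65: 0.242, 72: 0.031,
         88: 0.015, 91: 0.04, 96: 0.128, 97: 0.012}

# A hypothetical second paper's topic distribution.
other = {37: 0.2, 65: 0.3, 96: 0.1}

print(round(cosine_sim(paper, other), 4))
```

Ranking every other paper in the collection by this score, highest first, would yield a list in the same shape as the simIndex/simValue entries that follow.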

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.80723274 73 acl-2011-Collective Classification of Congressional Floor-Debate Transcripts

Author: Clinton Burfoot ; Steven Bird ; Timothy Baldwin

Abstract: This paper explores approaches to sentiment classification of U.S. Congressional floordebate transcripts. Collective classification techniques are used to take advantage of the informal citation structure present in the debates. We use a range of methods based on local and global formulations and introduce novel approaches for incorporating the outputs of machine learners into collective classification algorithms. Our experimental evaluation shows that the mean-field algorithm obtains the best results for the task, significantly outperforming the benchmark technique.

2 0.69399512 62 acl-2011-Blast: A Tool for Error Analysis of Machine Translation Output

Author: Sara Stymne

Abstract: We present BLAST, an open source tool for error analysis of machine translation (MT) output. We believe that error analysis, i.e., to identify and classify MT errors, should be an integral part of MT development, since it gives a qualitative view, which is not obtained by standard evaluation methods. BLAST can aid MT researchers and users in this process, by providing an easy-to-use graphical user interface. It is designed to be flexible, and can be used with any MT system, language pair, and error typology. The annotation task can be aided by highlighting similarities with a reference translation.

3 0.66616011 126 acl-2011-Exploiting Syntactico-Semantic Structures for Relation Extraction

Author: Yee Seng Chan ; Dan Roth

Abstract: In this paper, we observe that there exists a second dimension to the relation extraction (RE) problem that is orthogonal to the relation type dimension. We show that most of these second dimensional structures are relatively constrained and not difficult to identify. We propose a novel algorithmic approach to RE that starts by first identifying these structures and then, within these, identifying the semantic type of the relation. In the real RE problem where relation arguments need to be identified, exploiting these structures also allows reducing pipelined propagated errors. We show that this RE framework provides significant improvement in RE performance.

4 0.6646775 65 acl-2011-Can Document Selection Help Semi-supervised Learning? A Case Study On Event Extraction

Author: Shasha Liao ; Ralph Grishman

Abstract: Annotating training data for event extraction is tedious and labor-intensive. Most current event extraction tasks rely on hundreds of annotated documents, but this is often not enough. In this paper, we present a novel self-training strategy, which uses Information Retrieval (IR) to collect a cluster of related documents as the resource for bootstrapping. Also, based on the particular characteristics of this corpus, global inference is applied to provide more confident and informative data selection. We compare this approach to self-training on a normal newswire corpus and show that IR can provide a better corpus for bootstrapping and that global inference can further improve instance selection. We obtain gains of 1.7% in trigger labeling and 2.3% in role labeling through IR and an additional 1.1% in trigger labeling and 1.3% in role labeling by applying global inference.

5 0.66193181 111 acl-2011-Effects of Noun Phrase Bracketing in Dependency Parsing and Machine Translation

Author: Nathan Green

Abstract: Flat noun phrase structure was, up until recently, the standard in annotation for the Penn Treebanks. With the recent addition of internal noun phrase annotation, dependency parsing and applications down the NLP pipeline are likely affected. Some machine translation systems, such as TectoMT, use deep syntax as a language transfer layer. It is proposed that changes to the noun phrase dependency parse will have a cascading effect down the NLP pipeline and in the end, improve machine translation output, even with a reduction in parser accuracy that the noun phrase structure might cause. This paper examines this noun phrase structure's effect on dependency parsing, in English, with a maximum spanning tree parser and shows a 2.43% (0.23 BLEU point) improvement for English-to-Czech machine translation.

6 0.66107106 103 acl-2011-Domain Adaptation by Constraining Inter-Domain Variability of Latent Feature Representation

7 0.65779346 311 acl-2011-Translationese and Its Dialects

8 0.65659475 58 acl-2011-Beam-Width Prediction for Efficient Context-Free Parsing

9 0.65486085 324 acl-2011-Unsupervised Semantic Role Induction via Split-Merge Clustering

10 0.65360594 277 acl-2011-Semi-supervised Relation Extraction with Large-scale Word Clustering

11 0.65336537 246 acl-2011-Piggyback: Using Search Engines for Robust Cross-Domain Named Entity Recognition

12 0.65268987 331 acl-2011-Using Large Monolingual and Bilingual Corpora to Improve Coordination Disambiguation

13 0.65118909 92 acl-2011-Data point selection for cross-language adaptation of dependency parsers

14 0.65104711 292 acl-2011-Target-dependent Twitter Sentiment Classification

15 0.65063214 128 acl-2011-Exploring Entity Relations for Named Entity Disambiguation

16 0.64979625 196 acl-2011-Large-Scale Cross-Document Coreference Using Distributed Inference and Hierarchical Models

17 0.64838725 209 acl-2011-Lexically-Triggered Hidden Markov Models for Clinical Document Coding

18 0.64810562 183 acl-2011-Joint Bilingual Sentiment Classification with Unlabeled Parallel Corpora

19 0.64751875 202 acl-2011-Learning Hierarchical Translation Structure with Linguistic Annotations

20 0.64688039 34 acl-2011-An Algorithm for Unsupervised Transliteration Mining with an Application to Word Alignment