acl acl2013 acl2013-41 knowledge-graph by maker-knowledge-mining

41 acl-2013-Aggregated Word Pair Features for Implicit Discourse Relation Disambiguation


Source: pdf

Author: Or Biran ; Kathleen McKeown

Abstract: We present a reformulation of the word pair features typically used for the task of disambiguating implicit relations in the Penn Discourse Treebank. Our word pair features achieve significantly higher performance than the previous formulation when evaluated without additional features. In addition, we present results for a full system using additional features which achieves close to state of the art performance without resorting to gold syntactic parses or to context outside the relation.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 We present a reformulation of the word pair features typically used for the task of disambiguating implicit relations in the Penn Discourse Treebank. [sent-3, score-0.649]

2 Our word pair features achieve significantly higher performance than the previous formulation when evaluated without additional features. [sent-4, score-0.294]

3 In addition, we present results for a full system using additional features which achieves close to state of the art performance without resorting to gold syntactic parses or to context outside the relation. [sent-5, score-0.176]

4 1 Introduction Discourse relations such as contrast and causality are part of what makes a text coherent. [sent-6, score-0.111]

5 Being able to automatically identify these relations is important for many NLP tasks such as generation, question answering and textual entailment. [sent-7, score-0.111]

6 In some cases, discourse relations contain an explicit marker such as but or because which makes it easy to identify the relation. [sent-8, score-1.002]

7 Prior work (Pitler and Nenkova, 2009) showed that where explicit markers exist, the class of the relation can be disambiguated with f-scores higher than 90%. [sent-9, score-0.446]

8 Predicting the class of implicit discourse relations, however, is much more difficult. [sent-10, score-0.55]

9 Without an explicit marker to rely on, work on this task initially focused on using lexical cues in the form of word pairs mined from large corpora where they appear around an explicit marker (Marcu and Echihabi, 2002). [sent-11, score-1.501]

10 The intuition is that these pairs will tend to represent semantic relationships which are related to the discourse marker (for example, word pairs often appearing around but may tend to be antonyms). [sent-12, score-1.112]

11 While this approach showed some success and has been used extensively in later work, it has been pointed out by multiple authors that many of the most useful word pairs [sent-13, score-0.236]

12 are pairs of very common functional words, which contradicts the original intuition, and it is hard to explain why these are useful. [sent-15, score-0.194]

13 In this work we focus on the task of identifying and disambiguating implicit discourse relations which have no explicit marker. [sent-16, score-0.814]

14 In particular, we present a reformulation of the word pair features that have most often been used for this task in the past, replacing the sparse lexical features with dense aggregated score features. [sent-17, score-0.429]

15 We show that our formulation outperforms the original one while requiring fewer features, and that using a stop list of functional words does not significantly affect performance, suggesting that these features indeed represent semantically related content word pairs. [sent-19, score-0.434]

16 In addition, we present a system which combines these word pairs with additional features to achieve near state of the art performance without the use of syntactic parse features and of context outside the arguments of the relation. [sent-20, score-0.496]

17 2 Related Work This line of research began with (Marcu and Echihabi, 2002), who used a small number of unambiguous explicit markers and patterns involving them, such as [Arg1, but Arg2], to collect sets of word pairs from a large corpus using the cross-product of the words in Arg1 and Arg2. [sent-22, score-0.628]

18 The authors created a feature out of each pair and built a naive Bayes model directly from the unannotated corpus, updating the priors and posteriors using maximum likelihood. [sent-23, score-0.178]

19 While they demonstrated ... 1Reliable syntactic parses are not always available in domains other than newswire, and context (preceding relations, especially explicit relations) is not always available in some applications such as generation and question answering. [sent-24, score-0.145]

20 Second, it is constructed with the same unsupervised method they use to extract the word pairs by assuming that the patterns correspond to a particular relation and collecting the arguments from an unannotated corpus. [sent-29, score-0.484]

21 Even if the assumption is correct, these arguments are really taken from explicit relations with their markers removed, which as others have pointed out (Blair-Goldensohn et al. [sent-30, score-0.532]

22 More recently, implicit relation prediction has been evaluated on annotated implicit relations from the Penn Discourse Treebank (Prasad et al. [sent-33, score-0.674]

23 PDTB uses hierarchical relation types which abstract over other theories of discourse such as RST (Mann and Thompson, 1987) and SDRT (Asher and Lascarides, 2003). [sent-35, score-0.388]

24 It contains 40,600 annotated relations from the WSJ corpus. [sent-36, score-0.111]

25 Each relation has two arguments, Arg1 and Arg2, and the annotators decide whether it is explicit or implicit. [sent-37, score-0.248]

26 They used word pairs as well as additional features to train four binary classifiers, each corresponding to one of the high-level PDTB relation classes. [sent-40, score-0.379]

27 Although other features proved to be useful, word pairs were still the major contributor to most of these classifiers. [sent-41, score-0.246]

28 In fact, their best system for comparison included only the word pair features, and for all other classes other than expansion the word pair features alone achieved an f-score within 2 points of the best system. [sent-42, score-0.44]

29 Interestingly, they found that training the word pair features on PDTB itself was more useful than training them on an external corpus like Marcu and Echihabi (2002), although in some cases they resort to information gain in the external corpus for filtering the word pairs. [sent-43, score-0.266]

30 (2010) used a similar method and added features that explicitly try to predict the implicit marker in the relation, increasing performance. [sent-45, score-0.77]

31 Most recently to the best of our knowledge, Park and Cardie (2012) achieved the highest performance by optimizing the feature set. [sent-46, score-0.036]

32 , 2009), who are unique in evaluating on the more fine-grained second-level relation classes. [sent-48, score-0.103]

33 1 The Problem: Sparsity While Marcu and Echihabi (2002)’s approach of training a classifier from an unannotated corpus provides a relatively large amount of training data, this data does not consist of true implicit relations. [sent-50, score-0.293]

34 In fact, even the larger corpus of Marcu and Echihabi (2002) may not be quite large enough to solve the sparsity issue, given that the number of word pairs is quadratic in the vocabulary. [sent-53, score-0.208]

35 (2007) report that using even a very small stop list (25 words) significantly reduces performance, which is counter-intuitive. [sent-55, score-0.101]

36 They attribute this finding to the sparsity of the feature space. [sent-56, score-0.077]

37 , 2009) also shows that the top word pairs (ranked by information gain) all contain common functional words, and are not at all the semantically-related content words that were imagined. [sent-58, score-0.248]

38 In the case of some reportedly useful word pairs (the-and; in-the; the-of. [sent-59, score-0.201]

39 ) it is hard to explain how they might affect performance except through overfitting. [sent-62, score-0.067]

40 2 The Solution: Aggregation Representing each word pair as a single feature has the advantage of allowing the weights for each pair to be learned directly from the data. [sent-64, score-0.248]

41 Another possible approach is to aggregate some of the pairs together and learn weights from the data only for the aggregated sets of words. [sent-66, score-0.23]

42 For this approach to be effective, the pairs we choose to group together should have similar meaning with regard to predicting the relation. [sent-67, score-0.177]

43 They used aggregated word pair set features to predict whether or not a sentence is argumentative. [sent-69, score-0.297]

44 Their method is to group together word pairs that have been collected around the same explicit discourse marker: for every discourse marker such as therefore or however, they have a single feature whose value depends only on the word pairs collected around that marker. [sent-70, score-1.68]

45 This is reasonable given the intuition that the marker pattern is unambiguous and points at a particular relation. [sent-71, score-0.572]

46 Using one feature per marker can be seen as analogous (yet complementary) to Zhou et al. [sent-72, score-0.497]

47 (2010)’s approach of trying to predict the implicit connective by giving a score to each marker using a language model. [sent-73, score-0.691]

48 This work uses binary features which only indicate the appearance of one or more of the pairs. [sent-74, score-0.079]

49 The original frequencies of the word pairs are not used anywhere. [sent-75, score-0.167]

50 A more powerful approach is to use an informed function to weight the word pairs used inside each feature. [sent-76, score-0.167]

51 3 Our Approach Our approach is similar in that we choose to aggregate word pairs that were collected around the same explicit marker. [sent-78, score-0.394]

52 We first assembled a list of all 102 discourse markers used in PDTB, in both explicit and implicit relations. [sent-79, score-0.856]

53 2 Next, we extract word pairs for each marker from the Gigaword corpus by taking the cross product of words that appear in a sentence around that marker. [sent-80, score-0.719]
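As a hedged sketch, the extraction step described above can be approximated as follows. The sentence and marker here are invented for illustration, and the real pipeline additionally stems words and discards pairs that appear only once around a marker:

```python
from itertools import product

def extract_pairs(sentence, marker):
    """Cross-product of the words to the left and right of an explicit
    marker, i.e. the simple [Arg1 marker Arg2] pattern from the paper."""
    words = sentence.lower().split()
    if marker not in words:
        return []
    i = words.index(marker)
    left, right = words[:i], words[i + 1:]
    # Each (Arg1 word, Arg2 word) pair becomes a term in the marker's "document".
    return [(w1, w2) for w1, w2 in product(left, right)]

pairs = extract_pairs("the plan was ambitious but it failed", "but")
```

Over a corpus the size of Gigaword, accumulating these pairs per marker yields the frequency counts used later for the tf-idf weighting.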

54 This is a simpler approach than using patterns - for example, the marker because can appear in two patterns: [Arg1 because Arg2] and [because Arg1, Arg2], and we only use the first. [sent-81, score-0.544]

55 We leave the task of listing the possible patterns for each of the 102 markers to future work because of the significant manual effort required. [sent-82, score-0.205]

56 Meanwhile, we rely on the fact that we use a very large corpus and hope that the simple pattern [Arg1 marker Arg2] is enough to make our features useful. [sent-83, score-0.572]

57 There are, of course, markers for which this pattern does not normally apply, such as by comparison or on one hand. [sent-84, score-0.238]

58 We expect these features to be down-weighted by the final classifier, as explained at the end of this section. [sent-85, score-0.109]

59 When collecting the pairs, we stem the words and discard pairs which appear only once around the marker. [sent-86, score-0.273]

60 We can think of each discourse marker as having a corresponding unordered “document”, where each word pair is a term with an associated frequency. [sent-87, score-0.879]

61 We want to create a feature for each marker such that for each data instance (that is, for each potential relation in the PDTB data) the value for the feature is the relevance of the marker document to the data instance. [sent-88, score-1.097]

62 2In implicit relations, there is no marker in the text, but the implicit marker is provided by the human annotators. Each data instance in PDTB consists of two arguments, and can therefore also be represented as a set of word pairs extracted from the cross-product of the two arguments. [sent-89, score-1.617]

63 To represent the relevance of the instance to each marker, we set the value of the marker feature to the cosine similarity of the data instance and the marker’s “document”, where each word pair is a dimension. [sent-90, score-0.63]

64 Idf is calculated normally given that the set of all documents is defined as the 102 marker documents. [sent-94, score-0.504]
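A minimal sketch of this feature computation, with two toy marker "documents" standing in for the 102 real ones (all pairs and frequencies below are invented for illustration):

```python
import math
from collections import Counter

# Hypothetical per-marker documents: word pair -> frequency around that marker.
marker_docs = {
    "but":     Counter({("good", "bad"): 5, ("rise", "fall"): 3}),
    "because": Counter({("rain", "wet"): 4, ("fire", "smoke"): 2}),
}

def idf(pair, docs):
    # idf over the set of marker documents (102 in the paper; 2 here).
    df = sum(1 for d in docs.values() if pair in d)
    return math.log(len(docs) / df) if df else 0.0

def cosine_feature(instance_pairs, marker, docs):
    """tf-idf-weighted cosine similarity between a data instance
    (its Arg1 x Arg2 word pairs) and one marker's document."""
    doc = docs[marker]
    inst = Counter(instance_pairs)
    dims = set(doc) | set(inst)                     # each word pair is a dimension
    vec_doc = {p: doc[p] * idf(p, docs) for p in dims}
    vec_inst = {p: inst[p] * idf(p, docs) for p in dims}
    dot = sum(vec_doc[p] * vec_inst[p] for p in dims)
    norm = (math.sqrt(sum(v * v for v in vec_doc.values()))
            * math.sqrt(sum(v * v for v in vec_inst.values())))
    return dot / norm if norm else 0.0
```

Each PDTB instance thus gets one real-valued feature per marker, giving the 102-dimensional dense representation used below.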

65 We then train a binary classifier (logistic regression) using these 102 features for each of the four high-level relations in PDTB: comparison, contingency, expansion and temporal. [sent-95, score-0.244]
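To make the classification step concrete, here is a self-contained stand-in: a pure-Python gradient-descent logistic regression on synthetic data. The real system trains one such binary classifier per PDTB class on the 102 marker-similarity features; the data, dimensions and hyperparameters here are illustrative only:

```python
import math
import random

def train_logreg(X, y, epochs=200, lr=0.5):
    """Minimal binary logistic regression trained by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))      # sigmoid
            g = p - yi                           # gradient of log loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return int(1.0 / (1.0 + math.exp(-z)) > 0.5)

# Toy data: feature 0 alone determines the binary label.
random.seed(0)
X = [[random.random() for _ in range(4)] for _ in range(100)]
y = [int(x[0] > 0.5) for x in X]
w, b = train_logreg(X, y)
acc = sum(predict(w, b, x) == yi for x, yi in zip(X, y)) / len(X)
```

In practice an off-the-shelf implementation (e.g. a regularized logistic regression) would be used; the point is that uninformative marker features simply receive near-zero weights, as the paper expects.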

66 To make sure our results are comparable to previous work, we treat EntRel relations as instances of expansion and use sections 2-20 for training and sections 21-22 for testing. [sent-96, score-0.165]

67 As mentioned earlier, there are markers that do not fit the simple pattern we use. [sent-99, score-0.195]

68 In particular, some markers always or often appear as the first term of a sentence. [sent-100, score-0.204]

69 For these, we expect the list of word pairs to be empty or almost empty, since in most sentences there are no words on the left (and recall that we discard pairs that appear only once). [sent-101, score-0.418]

70 Since the features created for these markers will be uninformative, we expect them to be weighted down by the classifier and have no significant effect on prediction. [sent-102, score-0.272]

71 4 Evaluation of Word Pairs For our main evaluation, we evaluate the performance of word pair features when used with no additional features. [sent-103, score-0.242]

72 Our word pair features outperform the previous formulation (represented by the results reported by (Pitler et al. [sent-105, score-0.264]

73 For most relation classes, tf is significantly better than pmi. [sent-107, score-0.103]

74 3Significance was verified for all our own results shown in this paper with a standard t-test. [sent-108, score-0.074]

75 F-measure (accuracy) for various implementations of the word pair features, evaluated on the Comparison, Contingency, Expansion and Temporal classes. [sent-121, score-0.246]

76 tf and pmi refer to the word pair features used (by tf implementation), and the numbers refer to the indices of Table 3. [sent-134, score-0.36]

77 We also show results using a stop list of 50 common functional words. [sent-138, score-0.182]

78 The stop list has only a small effect on performance except in the temporal class. [sent-139, score-0.141]

79 This may be because of functional words like was and will which have a temporal effect. [sent-140, score-0.121]

80 5 Other Features For our secondary evaluation, we include additional features to complement the word pairs. [sent-141, score-0.228]

81 Previous work has relied on features based on the gold parse trees of the Penn Treebank (which overlaps with PDTB) and on contextual information from relations preceding the one being disambiguated. [sent-142, score-0.222]

82 We intentionally limit ourselves to features that do not require either so that our system can be readily used on arbitrary argument pairs. [sent-143, score-0.079]

83 WordNet Features: We define four features based on WordNet (Fellbaum, 1998) - Synonyms, Antonyms, Hypernyms and Hyponyms. [sent-144, score-0.079]

84 The values are the counts of word pairs in the cross-product of the words in the arguments that have the particular relation (synonymy, antonymy etc) between them. [sent-145, score-0.411]
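A sketch of one such count feature, using a tiny hand-built antonym set as a hypothetical stand-in for the WordNet lookups (the real features query synonymy, antonymy, hypernymy and hyponymy):

```python
# Hypothetical mini-lexicon standing in for WordNet antonym lookups.
ANTONYMS = {("hot", "cold"), ("rise", "fall"), ("good", "bad")}

def antonym_pair_count(arg1_words, arg2_words):
    """Count of pairs in the Arg1 x Arg2 cross-product related by antonymy
    (one of the four WordNet-based features)."""
    count = 0
    for w1 in arg1_words:
        for w2 in arg2_words:
            # Antonymy is symmetric, so check both orderings.
            if (w1, w2) in ANTONYMS or (w2, w1) in ANTONYMS:
                count += 1
    return count

n = antonym_pair_count(["prices", "rise"], ["they", "fall", "cold"])
```

The synonym, hypernym and hyponym counts follow the same pattern with the corresponding lexical relation.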

85 Verb Class: This is the count of pairs of verbs from Arg1 and Arg2 that share the same class, defined as the highest level Levin verb class (Levin, 1993) from the LCS database (Dorr, 2001). [sent-146, score-0.209]

86 Money, Percentages and Numbers (MPN): The counts of currency symbols/abbreviations, percentage signs or cues (“percent”, “BPS”. [sent-147, score-0.101]

87 We include the counts of positive and negative words according to the MPQA subjectivity lexicon for both arguments. [sent-154, score-0.036]

88 We also do not explicitly group negation with polarity (although we do have separate negation features). [sent-157, score-0.271]

89 Each word in the DAL gets a score for three dimensions - pleasantness (pleasant - unpleasant), activation (passive - active) and imagery (hard to imagine - easy to imagine). [sent-159, score-0.166]

90 Content Similarity: We use the cosine similarity and word overlap of the arguments as features. [sent-161, score-0.128]

91 Negation: Presence or absence of negation terms in each of the arguments. [sent-162, score-0.122]

92 6 Evaluation of Additional Features For our secondary evaluation, we present results for each feature category on its own in Table 3 and for our best system for each of the relation classes in Table 2. [sent-164, score-0.245]

93 7 Conclusion We presented an aggregated approach to word pair features and showed that it outperforms the previous formulation for all relation types but contingency. [sent-168, score-0.452]

94 With this approach, using a stop list does not have a major effect on results for most relation classes, which suggests most of the word pairs affecting performance are content word pairs which may truly be semantically related to the discourse structure. [sent-170, score-0.823]

95 In addition, we introduced the new and useful WordNet, Affect, Length and Negation feature categories. [sent-171, score-0.036]

96 (2009), who used mostly similar features, for comparison and temporal and is competitive with the most recent state of the art systems for contingency and expansion without using any syntactic or context features. [sent-173, score-0.132]

97 Recognizing implicit discourse relations in the penn discourse treebank. [sent-210, score-0.967]

98 Improving implicit discourse relation recognition through feature set optimization. [sent-225, score-0.654]

99 Using syntax to disambiguate explicit discourse connectives in text. [sent-229, score-0.478]

100 Automatic sense prediction for implicit discourse relations in text. [sent-234, score-0.626]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('marker', 0.461), ('pdtb', 0.374), ('discourse', 0.285), ('pitler', 0.283), ('implicit', 0.23), ('markers', 0.163), ('explicit', 0.145), ('echihabi', 0.117), ('pairs', 0.113), ('relations', 0.111), ('relation', 0.103), ('negation', 0.091), ('marcu', 0.089), ('aggregated', 0.085), ('biran', 0.084), ('functional', 0.081), ('pair', 0.079), ('features', 0.079), ('comparisoncontingencyexpansiontemporal', 0.076), ('arguments', 0.074), ('tf', 0.074), ('stop', 0.068), ('crossproduct', 0.068), ('affect', 0.067), ('secondary', 0.065), ('unannotated', 0.063), ('penn', 0.056), ('asher', 0.056), ('polarity', 0.055), ('expansion', 0.054), ('word', 0.054), ('lcs', 0.053), ('reformulation', 0.053), ('zhou', 0.053), ('park', 0.052), ('formulation', 0.052), ('iarpa', 0.051), ('antonyms', 0.051), ('around', 0.05), ('connectives', 0.048), ('prasad', 0.047), ('imagine', 0.047), ('mpqa', 0.047), ('levin', 0.045), ('emily', 0.044), ('normally', 0.043), ('mann', 0.043), ('unambiguous', 0.043), ('disambiguating', 0.043), ('patterns', 0.042), ('columbia', 0.042), ('classes', 0.041), ('appear', 0.041), ('ani', 0.041), ('sparsity', 0.041), ('temporal', 0.04), ('owen', 0.039), ('pointed', 0.039), ('art', 0.038), ('wilson', 0.037), ('feature', 0.036), ('cardie', 0.036), ('wordnet', 0.036), ('counts', 0.036), ('intuition', 0.036), ('kathleen', 0.035), ('collecting', 0.035), ('class', 0.035), ('group', 0.034), ('recognizing', 0.034), ('currency', 0.034), ('occurence', 0.034), ('dal', 0.034), ('inthe', 0.034), ('logics', 0.034), ('mpn', 0.034), ('pleasantness', 0.034), ('reportedly', 0.034), ('unpleasant', 0.034), ('discard', 0.034), ('mckeown', 0.033), ('list', 0.033), ('pattern', 0.032), ('bonnie', 0.032), ('aggregate', 0.032), ('preceding', 0.032), ('treebank', 0.031), ('cues', 0.031), ('antonymy', 0.031), ('imagery', 0.031), ('ziheng', 0.031), ('database', 0.031), ('absence', 0.031), ('expect', 0.03), ('additional', 0.03), ('success', 0.03), ('predicting', 0.03), ('verb', 0.03), ('outside', 0.029), ('kathy', 0.029)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999988 41 acl-2013-Aggregated Word Pair Features for Implicit Discourse Relation Disambiguation

Author: Or Biran ; Kathleen McKeown

Abstract: We present a reformulation of the word pair features typically used for the task of disambiguating implicit relations in the Penn Discourse Treebank. Our word pair features achieve significantly higher performance than the previous formulation when evaluated without additional features. In addition, we present results for a full system using additional features which achieves close to state of the art performance without resorting to gold syntactic parses or to context outside the relation.

2 0.45529482 229 acl-2013-Leveraging Synthetic Discourse Data via Multi-task Learning for Implicit Discourse Relation Recognition

Author: Man Lan ; Yu Xu ; Zhengyu Niu

Abstract: To overcome the shortage of labeled data for implicit discourse relation recognition, previous works attempted to automatically generate training data by removing explicit discourse connectives from sentences and then built models on these synthetic implicit examples. However, a previous study (Sporleder and Lascarides, 2008) showed that models trained on these synthetic data do not generalize very well to natural (i.e. genuine) implicit discourse data. In this work we revisit this issue and present a multi-task learning based system which can effectively use synthetic data for implicit discourse relation recognition. Results on PDTB data show that under the multi-task learning framework our models with the use of the prediction of explicit discourse connectives as auxiliary learning tasks, can achieve an averaged F1 improvement of 5.86% over baseline models.

3 0.23294853 2 acl-2013-A Bayesian Model for Joint Unsupervised Induction of Sentiment, Aspect and Discourse Representations

Author: Angeliki Lazaridou ; Ivan Titov ; Caroline Sporleder

Abstract: We propose a joint model for unsupervised induction of sentiment, aspect and discourse information and show that by incorporating a notion of latent discourse relations in the model, we improve the prediction accuracy for aspect and sentiment polarity on the sub-sentential level. We deviate from the traditional view of discourse, as we induce types of discourse relations and associated discourse cues relevant to the considered opinion analysis task; consequently, the induced discourse relations play the role of opinion and aspect shifters. The quantitative analysis that we conducted indicated that the integration of a discourse model increased the prediction accuracy results with respect to the discourse-agnostic approach and the qualitative analysis suggests that the induced representations encode a meaningful discourse structure.

4 0.19730335 85 acl-2013-Combining Intra- and Multi-sentential Rhetorical Parsing for Document-level Discourse Analysis

Author: Shafiq Joty ; Giuseppe Carenini ; Raymond Ng ; Yashar Mehdad

Abstract: We propose a novel approach for developing a two-stage document-level discourse parser. Our parser builds a discourse tree by applying an optimal parsing algorithm to probabilities inferred from two Conditional Random Fields: one for intrasentential parsing and the other for multisentential parsing. We present two approaches to combine these two stages of discourse parsing effectively. A set of empirical evaluations over two different datasets demonstrates that our discourse parser significantly outperforms the stateof-the-art, often by a wide margin.

5 0.14306951 386 acl-2013-What causes a causal relation? Detecting Causal Triggers in Biomedical Scientific Discourse

Author: Claudiu Mihaila ; Sophia Ananiadou

Abstract: Current domain-specific information extraction systems represent an important resource for biomedical researchers, who need to process vaster amounts of knowledge in short times. Automatic discourse causality recognition can further improve their workload by suggesting possible causal connections and aiding in the curation of pathway models. We here describe an approach to the automatic identification of discourse causality triggers in the biomedical domain using machine learning. We create several baselines and experiment with various parameter settings for three algorithms, i.e., Conditional Random Fields (CRF), Support Vector Machines (SVM) and Random Forests (RF). Also, we evaluate the impact of lexical, syntactic and semantic features on each of the algorithms and look at er- rors. The best performance of 79.35% F-score is achieved by CRFs when using all three feature types.

6 0.13767877 189 acl-2013-ImpAr: A Deterministic Algorithm for Implicit Semantic Role Labelling

7 0.099185556 16 acl-2013-A Novel Translation Framework Based on Rhetorical Structure Theory

8 0.09153676 280 acl-2013-Plurality, Negation, and Quantification:Towards Comprehensive Quantifier Scope Disambiguation

9 0.091191433 28 acl-2013-A Unified Morpho-Syntactic Scheme of Stanford Dependencies

10 0.089562602 56 acl-2013-Argument Inference from Relevant Event Mentions in Chinese Argument Extraction

11 0.086519003 339 acl-2013-Temporal Signals Help Label Temporal Relations

12 0.083400898 291 acl-2013-Question Answering Using Enhanced Lexical Semantic Models

13 0.077928528 298 acl-2013-Recognizing Rare Social Phenomena in Conversation: Empowerment Detection in Support Group Chatrooms

14 0.077132493 306 acl-2013-SPred: Large-scale Harvesting of Semantic Predicates

15 0.075805336 169 acl-2013-Generating Synthetic Comparable Questions for News Articles

16 0.07279516 318 acl-2013-Sentiment Relevance

17 0.07140518 159 acl-2013-Filling Knowledge Base Gaps for Distant Supervision of Relation Extraction

18 0.067695364 242 acl-2013-Mining Equivalent Relations from Linked Data

19 0.06734743 245 acl-2013-Modeling Human Inference Process for Textual Entailment Recognition

20 0.064245142 144 acl-2013-Explicit and Implicit Syntactic Features for Text Classification


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.199), (1, 0.102), (2, -0.045), (3, -0.034), (4, -0.091), (5, 0.099), (6, -0.03), (7, 0.048), (8, 0.013), (9, 0.182), (10, 0.168), (11, 0.03), (12, -0.139), (13, 0.108), (14, -0.007), (15, -0.124), (16, 0.124), (17, -0.18), (18, -0.179), (19, -0.264), (20, 0.069), (21, 0.151), (22, 0.061), (23, -0.095), (24, 0.017), (25, 0.065), (26, 0.085), (27, 0.069), (28, 0.004), (29, 0.006), (30, 0.051), (31, 0.007), (32, 0.057), (33, 0.034), (34, 0.046), (35, -0.033), (36, -0.002), (37, -0.063), (38, 0.045), (39, 0.095), (40, -0.033), (41, -0.061), (42, -0.012), (43, 0.039), (44, 0.024), (45, -0.056), (46, 0.033), (47, -0.025), (48, 0.077), (49, -0.016)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.97879195 229 acl-2013-Leveraging Synthetic Discourse Data via Multi-task Learning for Implicit Discourse Relation Recognition

Author: Man Lan ; Yu Xu ; Zhengyu Niu

Abstract: To overcome the shortage of labeled data for implicit discourse relation recognition, previous works attempted to automatically generate training data by removing explicit discourse connectives from sentences and then built models on these synthetic implicit examples. However, a previous study (Sporleder and Lascarides, 2008) showed that models trained on these synthetic data do not generalize very well to natural (i.e. genuine) implicit discourse data. In this work we revisit this issue and present a multi-task learning based system which can effectively use synthetic data for implicit discourse relation recognition. Results on PDTB data show that under the multi-task learning framework our models with the use of the prediction of explicit discourse connectives as auxiliary learning tasks, can achieve an averaged F1 improvement of 5.86% over baseline models.

same-paper 2 0.93724436 41 acl-2013-Aggregated Word Pair Features for Implicit Discourse Relation Disambiguation

Author: Or Biran ; Kathleen McKeown

Abstract: We present a reformulation of the word pair features typically used for the task of disambiguating implicit relations in the Penn Discourse Treebank. Our word pair features achieve significantly higher performance than the previous formulation when evaluated without additional features. In addition, we present results for a full system using additional features which achieves close to state of the art performance without resorting to gold syntactic parses or to context outside the relation.

3 0.82898784 85 acl-2013-Combining Intra- and Multi-sentential Rhetorical Parsing for Document-level Discourse Analysis

Author: Shafiq Joty ; Giuseppe Carenini ; Raymond Ng ; Yashar Mehdad

Abstract: We propose a novel approach for developing a two-stage document-level discourse parser. Our parser builds a discourse tree by applying an optimal parsing algorithm to probabilities inferred from two Conditional Random Fields: one for intrasentential parsing and the other for multisentential parsing. We present two approaches to combine these two stages of discourse parsing effectively. A set of empirical evaluations over two different datasets demonstrates that our discourse parser significantly outperforms the stateof-the-art, often by a wide margin.

4 0.70633548 2 acl-2013-A Bayesian Model for Joint Unsupervised Induction of Sentiment, Aspect and Discourse Representations

Author: Angeliki Lazaridou ; Ivan Titov ; Caroline Sporleder

Abstract: We propose a joint model for unsupervised induction of sentiment, aspect and discourse information and show that by incorporating a notion of latent discourse relations in the model, we improve the prediction accuracy for aspect and sentiment polarity on the sub-sentential level. We deviate from the traditional view of discourse, as we induce types of discourse relations and associated discourse cues relevant to the considered opinion analysis task; consequently, the induced discourse relations play the role of opinion and aspect shifters. The quantitative analysis that we conducted indicated that the integration of a discourse model increased the prediction accuracy results with respect to the discourse-agnostic approach and the qualitative analysis suggests that the induced representations encode a meaningful discourse structure.

5 0.56935978 386 acl-2013-What causes a causal relation? Detecting Causal Triggers in Biomedical Scientific Discourse

Author: Claudiu Mihaila ; Sophia Ananiadou

Abstract: Current domain-specific information extraction systems represent an important resource for biomedical researchers, who need to process vaster amounts of knowledge in short times. Automatic discourse causality recognition can further improve their workload by suggesting possible causal connections and aiding in the curation of pathway models. We here describe an approach to the automatic identification of discourse causality triggers in the biomedical domain using machine learning. We create several baselines and experiment with various parameter settings for three algorithms, i.e., Conditional Random Fields (CRF), Support Vector Machines (SVM) and Random Forests (RF). Also, we evaluate the impact of lexical, syntactic and semantic features on each of the algorithms and look at er- rors. The best performance of 79.35% F-score is achieved by CRFs when using all three feature types.

6 0.56579983 280 acl-2013-Plurality, Negation, and Quantification:Towards Comprehensive Quantifier Scope Disambiguation

7 0.54413682 189 acl-2013-ImpAr: A Deterministic Algorithm for Implicit Semantic Role Labelling

8 0.4999676 16 acl-2013-A Novel Translation Framework Based on Rhetorical Structure Theory

9 0.49672222 242 acl-2013-Mining Equivalent Relations from Linked Data

10 0.47422108 61 acl-2013-Automatic Interpretation of the English Possessive

11 0.46870634 298 acl-2013-Recognizing Rare Social Phenomena in Conversation: Empowerment Detection in Support Group Chatrooms

12 0.43362701 339 acl-2013-Temporal Signals Help Label Temporal Relations

13 0.42514905 293 acl-2013-Random Walk Factoid Annotation for Collective Discourse

14 0.42000842 367 acl-2013-Universal Conceptual Cognitive Annotation (UCCA)

15 0.39269403 215 acl-2013-Large-scale Semantic Parsing via Schema Matching and Lexicon Extension

16 0.36792985 159 acl-2013-Filling Knowledge Base Gaps for Distant Supervision of Relation Extraction

17 0.36727792 387 acl-2013-Why-Question Answering using Intra- and Inter-Sentential Causal Relations

18 0.36396897 365 acl-2013-Understanding Tables in Context Using Standard NLP Toolkits

19 0.3500374 28 acl-2013-A Unified Morpho-Syntactic Scheme of Stanford Dependencies

20 0.34625986 228 acl-2013-Leveraging Domain-Independent Information in Semantic Parsing


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.043), (6, 0.023), (11, 0.076), (14, 0.012), (24, 0.099), (26, 0.052), (35, 0.062), (42, 0.024), (48, 0.043), (70, 0.026), (88, 0.4), (90, 0.018), (95, 0.053)]
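The (topicId, topicWeight) pairs above are a sparse LDA topic distribution for this paper, and the simValue scores below are plausibly a similarity measure between such distributions. A minimal sketch of one common choice, cosine similarity over sparse topic vectors (the exact measure used by the site is not stated, and paper_b's weights here are hypothetical):

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse topic distributions,
    represented as {topicId: topicWeight} dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    if nu == 0.0 or nv == 0.0:
        return 0.0
    return dot / (nu * nv)

# A few of this paper's topic weights from the list above,
# and a hypothetical second paper's distribution.
paper_a = {0: 0.043, 6: 0.023, 11: 0.076, 24: 0.099, 88: 0.4}
paper_b = {0: 0.05, 11: 0.08, 88: 0.35, 95: 0.1}

sim = cosine(paper_a, paper_b)
```

Ranking all other papers by this score against the query paper's distribution would produce a (simIndex, simValue) list like the one below.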

similar papers list:

simIndex simValue paperId paperTitle

1 0.94941056 106 acl-2013-Decentralized Entity-Level Modeling for Coreference Resolution

Author: Greg Durrett ; David Hall ; Dan Klein

Abstract: Efficiently incorporating entity-level information is a challenge for coreference resolution systems due to the difficulty of exact inference over partitions. We describe an end-to-end discriminative probabilistic model for coreference that, along with standard pairwise features, enforces structural agreement constraints between specified properties of coreferent mentions. This model can be represented as a factor graph for each document that admits efficient inference via belief propagation. We show that our method can use entity-level information to outperform a basic pairwise system.

2 0.94447106 141 acl-2013-Evaluating a City Exploration Dialogue System with Integrated Question-Answering and Pedestrian Navigation

Author: Srinivasan Janarthanam ; Oliver Lemon ; Phil Bartie ; Tiphaine Dalmas ; Anna Dickinson ; Xingkun Liu ; William Mackaness ; Bonnie Webber

Abstract: We present a city navigation and tourist information mobile dialogue app with integrated question-answering (QA) and geographic information system (GIS) modules that helps pedestrian users to navigate in and learn about urban environments. In contrast to existing mobile apps which treat these problems independently, our Android app addresses the problem of navigation and touristic question-answering in an integrated fashion using a shared dialogue context. We evaluated our system in comparison with Samsung S-Voice (which interfaces to Google navigation and Google search) with 17 users and found that users judged our system to be significantly more interesting to interact with and learn from. They also rated our system above Google search (with the Samsung S-Voice interface) for tourist information tasks.

3 0.91790521 327 acl-2013-Sorani Kurdish versus Kurmanji Kurdish: An Empirical Comparison

Author: Kyumars Sheykh Esmaili ; Shahin Salavati

Abstract: Resource scarcity along with diversity– both in dialect and script–are the two primary challenges in Kurdish language processing. In this paper we aim at addressing these two problems by (i) building a text corpus for Sorani and Kurmanji, the two main dialects of Kurdish, and (ii) highlighting some of the orthographic, phonological, and morphological differences between these two dialects from statistical and rule-based perspectives.

4 0.88357437 299 acl-2013-Reconstructing an Indo-European Family Tree from Non-native English Texts

Author: Ryo Nagata ; Edward Whittaker

Abstract: Mother tongue interference is the phenomenon where linguistic systems of a mother tongue are transferred to another language. Although there has been plenty of work on mother tongue interference, very little is known about how strongly it is transferred to another language and about what relation there is across mother tongues. To address these questions, this paper explores and visualizes mother tongue interference preserved in English texts written by Indo-European language speakers. This paper further explores linguistic features that explain why certain relations are preserved in English writing, and which contribute to related tasks such as native language identification.

same-paper 5 0.85055059 41 acl-2013-Aggregated Word Pair Features for Implicit Discourse Relation Disambiguation

Author: Or Biran ; Kathleen McKeown

Abstract: We present a reformulation of the word pair features typically used for the task of disambiguating implicit relations in the Penn Discourse Treebank. Our word pair features achieve significantly higher performance than the previous formulation when evaluated without additional features. In addition, we present results for a full system using additional features which achieves close to state of the art performance without resorting to gold syntactic parses or to context outside the relation.

6 0.83801955 136 acl-2013-Enhanced and Portable Dependency Projection Algorithms Using Interlinear Glossed Text

7 0.79594761 345 acl-2013-The Haves and the Have-Nots: Leveraging Unlabelled Corpora for Sentiment Analysis

8 0.69758743 111 acl-2013-Density Maximization in Context-Sense Metric Space for All-words WSD

9 0.66208565 252 acl-2013-Multigraph Clustering for Unsupervised Coreference Resolution

10 0.60425055 258 acl-2013-Neighbors Help: Bilingual Unsupervised WSD Using Context

11 0.56711751 105 acl-2013-DKPro WSD: A Generalized UIMA-based Framework for Word Sense Disambiguation

12 0.54864955 130 acl-2013-Domain-Specific Coreference Resolution with Lexicalized Features

13 0.54529017 131 acl-2013-Dual Training and Dual Prediction for Polarity Classification

14 0.54066336 196 acl-2013-Improving pairwise coreference models through feature space hierarchy learning

15 0.53842413 253 acl-2013-Multilingual Affect Polarity and Valence Prediction in Metaphor-Rich Texts

16 0.53321606 177 acl-2013-GuiTAR-based Pronominal Anaphora Resolution in Bengali

17 0.52909946 70 acl-2013-Bilingually-Guided Monolingual Dependency Grammar Induction

18 0.52196836 292 acl-2013-Question Classification Transfer

19 0.52153587 318 acl-2013-Sentiment Relevance

20 0.51796931 20 acl-2013-A Stacking-based Approach to Twitter User Geolocation Prediction