acl acl2011 acl2011-126 knowledge-graph by maker-knowledge-mining

126 acl-2011-Exploiting Syntactico-Semantic Structures for Relation Extraction


Source: pdf

Author: Yee Seng Chan ; Dan Roth

Abstract: In this paper, we observe that there exists a second dimension to the relation extraction (RE) problem that is orthogonal to the relation type dimension. We show that most of these second dimensional structures are relatively constrained and not difficult to identify. We propose a novel algorithmic approach to RE that starts by first identifying these structures and then, within these, identifying the semantic type of the relation. In the real RE problem where relation arguments need to be identified, exploiting these structures also allows reducing pipelined propagated errors. We show that this RE framework provides significant improvement in RE performance.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Abstract In this paper, we observe that there exists a second dimension to the relation extraction (RE) problem that is orthogonal to the relation type dimension. [sent-2, score-0.466]

2 We show that most of these second dimensional structures are relatively constrained and not difficult to identify. [sent-3, score-0.181]

3 We propose a novel algorithmic approach to RE that starts by first identifying these structures and then, within these, identifying the semantic type of the relation. [sent-4, score-0.21]

4 In the real RE problem where relation arguments need to be identified, exploiting these structures also allows reducing pipelined propagated errors. [sent-5, score-0.398]

5 ”, one would like to extract the relation that “the Seattle zoo” is located-at “Seattle”. [sent-14, score-0.161]

6 Conceptually, this is a rather simple approach, as all spans of text are treated uniformly and are mapped to one of several relation types of interest. [sent-18, score-0.161]

7 In this paper we build on the observation that there exists a second dimension to the relation extraction problem that is orthogonal to the relation type di- mension: all relation types are expressed in one of several constrained syntactico-semantic structures. [sent-21, score-0.698]

8 For example, in “the Seattle zoo”, the entity mention “Seattle” modifies the noun “zoo”. [sent-24, score-0.479]

9 Thus, the two mentions “Seattle” and “the Seattle zoo”, are involved in what we later call a premodifier relation, one of several syntactico-semantic structures we identify in Section 3. [sent-25, score-0.508]

10 We highlight that all relation types can be expressed in one of several syntactico-semantic structures: Premodifiers, Possessive, Preposition, Formulaic, and Verbal. [sent-26, score-0.32]

11 As it turns out, most of these structures are relatively constrained and are not difficult to identify. [sent-27, score-0.181]

12 This suggests a novel algorithmic approach to RE that starts by first identifying these structures and then, within these, identifying the semantic type of the relation. [sent-28, score-0.21]

13 We explore one of these possibilities, making use of the constrained structures as a way to aid in the identification of the relations’ arguments. [sent-34, score-0.181]

14 The contributions of this paper are summarized below: • We highlight that all relation types are expressed as one of several syntactico-semantic structures and show that most of these are relatively constrained and not difficult to identify. [sent-36, score-0.364]

15 • We show that when one does not have a large number of training examples, exploiting the syntactico-semantic structures is crucial for RE performance. [sent-38, score-0.163]

16 The constrained structures allow us to jointly entertain argument candidates and relations built with them as arguments. [sent-40, score-0.23]

17 In the next section, we describe our relation extraction framework that leverages the syntactico-semantic structures. [sent-42, score-0.332]

18 We describe our mention entity typing system in Section 4 and features for the RE system in Section 5. [sent-44, score-0.513]

19 2 Relation Extraction Framework In Figure 1, we show the algorithm for training a typical baseline RE classifier (REbase), and for training a RE classifier that leverages the syntactico-semantic structures (REs). [sent-47, score-0.392]

20 When given a test example mention pair (xi,xj), we perform structure inference on it using the patterns described in Section 3. [sent-49, score-0.49]

21 Next, we show in Figure 2 our joint inference algorithmic framework that leverages the syntactico-semantic structures for RE, when mentions need to be predicted. [sent-51, score-0.48]

22 Since the structures are fairly constrained, we can use them to consider mention candidates that are originally predicted as non mentions. [sent-52, score-0.529]

23 As shown in Figure 2, we conservatively include such mentions when forming mention pairs, provided their null labels are predicted with a probability below a threshold t. [sent-53, score-0.598]
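
This conservative-inclusion step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy probability table stands in for the real mention entity typing (MET) classifier, and the function names are invented.

```python
# Hypothetical stand-in for the MET classifier: maps a candidate span
# to label probabilities (a real system would run a trained model).
def met_probs(chunk):
    scores = {
        "the Seattle zoo": {"FAC": 0.90, "null": 0.10},
        "Seattle":         {"GPE": 0.70, "null": 0.30},
        "yesterday":       {"GPE": 0.05, "null": 0.95},
    }
    return scores[chunk]

def candidate_mentions(chunks, t=0.5):
    """Keep predicted mentions, and also spans labeled null whose null
    probability is below the threshold t (conservative inclusion)."""
    kept = []
    for chunk in chunks:
        probs = met_probs(chunk)
        label = max(probs, key=probs.get)
        if label != "null":
            kept.append((chunk, label))   # confident mention
        elif probs["null"] < t:
            kept.append((chunk, label))   # low-confidence null: retain it,
                                          # the structure patterns may still
                                          # license it as a relation argument
    return kept

pairs = candidate_mentions(["the Seattle zoo", "Seattle", "yesterday"])
```

Here "yesterday" is discarded because its null label is confident (0.95 ≥ t), while both mention spans survive.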

24 Abbreviations: Lm: predicted entity label for mention m using the mention entity typing (MET) classifier described in Section 4; PMET: prediction probability according to the MET classifier; t: used for thresholding. [sent-61, score-1.118]

25 However, these works operate along the first dimension, that of using patterns to mine for relation type examples. [sent-65, score-0.277]

26 In contrast, in our RE framework, we apply patterns to identify the syntactico-semantic structure dimension first, and leverage this in the RE process. [sent-66, score-0.163]

27 In (Roth and Yih, 2007), the authors used entity types to constrain the (first dimensional) relation types allowed among them. [sent-67, score-0.267]

28 In our work, although a few of our patterns involve semantic type comparison, most of the patterns are syntactic in nature. [sent-68, score-0.203]

29 Most prior RE evaluation on ACE data assumed that mentions are already pre-annotated and given as input (Chan and Roth, 2010; Jiang and Zhai, 2007; Zhou et al. [sent-70, score-0.16]

30 In that work, the author did not address the pipelined errors propagated from the mention identification process. [sent-73, score-0.39]

31 In ACE-2004, when the annotators tagged a pair of mentions with a relation, they also specified the type of syntactico-semantic structure. [sent-75, score-0.225]

32 These four structures cover 80% of the mention pairs having valid semantic relations (we give the detailed breakdown in Section 7) and we show that they are relatively easy to identify using simple rules or patterns. [sent-82, score-0.562]

33 In this section, we indicate mentions using square bracket pairs, and use mi and mj to represent a mention pair. [sent-83, score-1.295]

34 Premodifier relations specify the proper adjective or proper noun premodifier and the following noun it modifies, e.g.: [sent-85, score-0.318]

35 [the [Seattle] zoo]. Possessive indicates that the first mention is in a possessive case, e.g.: [sent-87, score-0.468]

36 [[California] ’s Governor]. Preposition indicates that the two mentions are semantically related via the existence of a preposition, e.g. [sent-89, score-0.16]

37 We use the term syntactico-semantic structure in this paper as the mention pair exists in specific syntactic structures, and we use rules or patterns that are syntactically and semantically motivated to detect these structures. [sent-92, score-0.52]

38 If w1 = “’s” ∨ POS tag of w1 = POS, accept the mention pair. Let vl = last word in v+. [sent-105, score-0.552]

39 Abbreviations: Ec(m): coarse-grained entity type of mention m; Ld: labels in the dependency path between the head words; ‘|’ indicates disjunction. [sent-109, score-0.479]

40 1 Premodifier Structures We require that one of the mentions completely includes the other. [sent-117, score-0.16]

41 We use two patterns to differentiate between premodifier and possessive relations, by checking for the existence of POS tags PRP$, WP$, POS, and the word “’s”. [sent-122, score-0.705]

42 2 Possessive Structures The basic pattern for possessive is similar to that for premodifier: [u? [sent-127, score-0.155]

43 [v+] w+] If the word immediately following v+ is “’s” or its POS tag is “POS”, we accept the mention pair. [sent-128, score-0.138]

44 If the POS tag of the last word in v+ is either PRP$ or WP$, we accept the mention pair. [sent-129, score-0.482]
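
The premodifier/possessive decision rules above can be sketched as follows. The function name and span representation are illustrative assumptions; only the POS cues (the word “’s” and the tags POS, PRP$, WP$) come from the text, and the pair is assumed to already satisfy the nesting requirement (one mention completely includes the other).

```python
POSSESSIVE_LAST = {"PRP$", "WP$"}

def classify_nested_pair(tokens, tags, inner):
    """tokens/tags describe the outer mention mj; inner = (start, end)
    is the span of the inner mention mi, with end exclusive.
    Returns 'possessive' or 'premodifier'."""
    start, end = inner
    # cue 1: the word (or POS tag) immediately following the inner mention
    if end < len(tokens) and (tokens[end] == "'s" or tags[end] == "POS"):
        return "possessive"
    # cue 2: the POS tag of the last word of the inner mention
    if tags[end - 1] in POSSESSIVE_LAST:
        return "possessive"
    return "premodifier"
```

On the paper's two examples, [the [Seattle] zoo] comes out premodifier and [[California] ’s Governor] comes out possessive.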

45 4 Preposition Structures We first require the two mentions to be non-overlapping, and check for the existence of patterns such as “IN [mi] [mj]” and “[mi] (IN|TO) [mj]”. [sent-132, score-0.16]

46 If the only labels in the dependency path between the head words of mi and mj are “prep” (prepositional modifier), we accept the mention pair. [sent-133, score-0.726]
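
The dependency-path check can be sketched as below. The paper does not specify the parser or data structures; here a parse is assumed to be a list of (head, label, dependent) triples, and a collapsed-style “prep” edge is assumed to link the two head words directly.

```python
from collections import defaultdict, deque

def dep_path_labels(edges, src, dst):
    """Labels along the (undirected) path between two words in a
    dependency tree given as (head, label, dependent) triples."""
    adj = defaultdict(list)
    for head, lab, dep in edges:
        adj[head].append((dep, lab))
        adj[dep].append((head, lab))
    queue, seen = deque([(src, [])]), {src}
    while queue:  # BFS, recording the labels traversed
        node, labs = queue.popleft()
        if node == dst:
            return labs
        for nxt, lab in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, labs + [lab]))
    return None  # not in the same tree

def is_preposition_structure(edges, head_i, head_j):
    labs = dep_path_labels(edges, head_i, head_j)
    return bool(labs) and set(labs) == {"prep"}
```

With a collapsed edge ("zoo", "prep", "Seattle") the pair is accepted; two mentions linked only through a shared verb (nsubj/dobj path) are not.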

47 lw: last word in the mention; Bc(w): the Brown cluster bit string representing w; NE: named entity; and whether they satisfy certain semantic entity type constraints. [sent-138, score-0.269]

48 We first describe the features (an overview is given in Table 2) and then describe how we extract candidate mentions from sentences during evaluation. [sent-140, score-0.16]

49 1 Mention Extraction Features. Features for every word in the mention: for every word wk in a mention mi, we extract seven features. [sent-142, score-0.806]

50 These are a combination of wk itself, its POS tag, and its integer offset from the last word (lw) in the mention. [sent-143, score-0.219]

51 For instance, given the mention “the operation room”, the offsets for the three words in the mention are -2, -1, and 0 respectively. [sent-144, score-0.713]
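
The offset bookkeeping can be illustrated as follows. The exact seven-feature template is not fully specified in this summary, so the combinations shown are an assumption, keeping only the stated ingredients: the word itself, its POS tag, and its integer offset from the last word (lw).

```python
def word_features(tokens, pos_tags):
    """Per-word feature sketch: each word wk paired with its POS tag and
    its integer offset from the last word of the mention (lw has offset 0)."""
    n = len(tokens)
    feats = []
    for k, (w, p) in enumerate(zip(tokens, pos_tags)):
        off = k - (n - 1)  # distance to lw; negative for earlier words
        feats.append((w, p, off, f"{w}:{off}", f"{p}:{off}"))
    return feats

# The paper's example mention "the operation room":
offs = [f[2] for f in word_features(["the", "operation", "room"],
                                    ["DT", "NN", "NN"])]
```

The offsets come out as -2, -1, and 0, matching the example above.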

52 NE tags We automatically annotate the sentences with named entity (NE) tags using the named entity tagger of (Ratinov and Roth, 2009). [sent-153, score-0.212]

53 If the lw of mi coincides (actual token offset) with the lw of any NE annotated by the NE tagger, we extract the NE tag as a feature. [sent-155, score-0.667]

54 These mention candidates are then fed to our mention entity typing (MET) classifier for type prediction (more details in Section 6). [sent-160, score-0.97]

55 5 Relation Extraction System We build a supervised RE system using sentences annotated with entity mentions and predefined target relations. [sent-162, score-0.327]

56 During evaluation, when given a pair of mentions mi, mj, the system predicts whether any of the predefined target relations holds between the mention pair. [sent-163, score-0.735]

57 As part of our RE system, we need to extract the head word (hw) of a mention (m), which we heuristically determine as follows: if m contains a preposition and a noun preceding the preposition, we use the noun as the hw. [sent-168, score-0.528]

58 Given the hw of m, Pi,j refers to the sequence of POS tags in the immediate context of hw (we exclude the POS tag of hw). [sent-171, score-0.313]

59 For instance, P−2,−1 denotes the sequence of two POS tags on the immediate left of hw, and P−1,+1 denotes the POS tag on the immediate left of hw and the POS tag on the immediate right of hw. [sent-173, score-0.311]
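
The P_{i,j} context features can be illustrated as follows. Padding out-of-range offsets with None is an assumption, since the summary does not say how sentence boundaries are handled.

```python
def pos_context(sent_tags, hw_idx, i, j):
    """P_{i,j}: the sequence of POS tags at offsets i..j around the head
    word, skipping offset 0 (the hw's own tag is excluded); positions
    past the sentence boundary are padded with None (an assumption)."""
    out = []
    for off in range(i, j + 1):
        if off == 0:
            continue  # exclude the POS tag of hw itself
        k = hw_idx + off
        out.append(sent_tags[k] if 0 <= k < len(sent_tags) else None)
    return tuple(out)

# e.g. "he visited the Seattle zoo", hw = "zoo" at index 4
tags = ["PRP", "VBD", "DT", "NNP", "NN"]
left2 = pos_context(tags, 4, -2, -1)    # P_{-2,-1}
around = pos_context(tags, 4, -1, +1)   # P_{-1,+1}
```

P_{-2,-1} yields the two tags on the immediate left ("DT", "NNP"); P_{-1,+1} yields ("NNP", None), the None showing boundary padding since "zoo" ends the sentence.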

60 Lumping all the predefined target relations into a single label, we build a binary classifier to predict whether any of the predefined relations exists between a given mention pair. [sent-188, score-0.664]

61 In this work, we model the argument order of the mentions when performing RE, since relations are usually asymmetric in nature. [sent-189, score-0.209]

62 For instance, we consider mi:EMP-ORG:mj and mj:EMP-ORG:mi to be distinct relation types. [sent-190, score-0.161]

63 In our experiments, we extracted a total of 55,520 examples or mention pairs. [sent-191, score-0.344]

64 Out of these, 4,011 are positive relation examples annotated with 6 coarse-grained relation types and 22 fine-grained relation types. [sent-192, score-0.483]

65 We build a coarse-grained classifier to disambiguate between 13 relation labels (two asymmetric labels for each of the 6 coarse-grained relation types and a null label). [sent-193, score-0.481]

66 We similarly build a fine-grained classifier to disambiguate between 45 relation labels. [sent-194, score-0.274]
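
The 13- and 45-way label spaces follow from two asymmetric labels per relation type plus a null label, which can be sketched as below. Apart from EMP-ORG, the ACE-2004 coarse type names in the example are assumed, not taken from this text.

```python
def relation_label_space(rel_types):
    """Two asymmetric labels per relation type (mi:R:mj vs mj:R:mi)
    plus a single null label."""
    labels = ["null"]
    for t in rel_types:
        labels += [f"mi:{t}:mj", f"mj:{t}:mi"]
    return labels

# Six coarse-grained ACE-2004 types (names other than EMP-ORG assumed):
coarse = relation_label_space(["PHYS", "PER-SOC", "EMP-ORG",
                               "ART", "OTHER-AFF", "GPE-AFF"])
```

Six coarse types give 13 labels, and 22 fine-grained types give 45, matching the classifier sizes stated above.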

67 In that work, we also highlight that ACE annotators rarely duplicate a relation link for coreferent mentions. [sent-197, score-0.183]

68 For instance, assume mentions mi, mj, and mk are in the same sentence, mentions mi and mj are coreferent, and the annotators tag the mention pair mj, mk with a particular relation r. [sent-198, score-1.773]

69 The ACE-2004 annotation guidelines state that the DISC relation is established only for the purposes of the discourse and does not reference an official entity relevant to world knowledge. [sent-200, score-0.267]

70 In such cases, the annotators may not duplicate the relation r between mi and mk, thus leaving the gold relation label as null. [sent-203, score-0.642]

71 Since the RE recall scores only take into account non-null relation labels, this scoring method does not change the recall, but could marginally increase the precision scores by decreasing the count of RE predictions. [sent-207, score-0.161]

72 In the experiments described in this section, we use the gold mentions available in the data. [sent-213, score-0.16]

73 In Section 2, we described how we trained a baseline RE classifier (REbase) and a RE classifier using the syntactico-semantic patterns (REs). [sent-217, score-0.203]

74 We first apply REbase on each test example mention pair (mi,mj) to obtain the RE baseline results, showing these in Table 4 under the column “10 documents”, and in the rows “Binary”, “Coarse”, and “Fine”. [sent-218, score-0.409]

75 ACE-2004 defines 7 coarse-grained entity types, each of which is then refined into 43 fine-grained types. [Figure 3: Improvement in (gold mention) RE by using patterns, vs. proportion (%) of data used for training.] [sent-226, score-0.193]

76 Using the ACE data annotated with mentions and predefined entity types, we build a fine-grained mention entity typing (MET) classifier to disambiguate between 44 labels (43 fine-grained and a null label to indicate not a mention). [sent-228, score-0.995]

77 To obtain the coarse-grained entity type predictions from the classifier, we simply check which coarse-grained type the fine-grained prediction belongs to. [sent-229, score-0.259]

78 We apply REbase on all mention pairs (mi,mj) where both mi and mj have non-null entity type predictions. [sent-232, score-1.316]

79 In Section 2, we described our algorithmic approach (Figure 2) that takes advantage of the structures with predicted mentions. [sent-234, score-0.229]

80 The results show that by leveraging syntactico-semantic structures, we obtain significant F-measure improvements of 8. [sent-236, score-0.144]

81 In Section 6, we note that out of 55,520 mention pairs, only 4,011 exhibit valid relations. [sent-254, score-0.376]

82 Thus, the proportion of positive relation examples is very sparse, at about 7.2%. [sent-255, score-0.161]

83 If we can effectively identify and discard most of the negative relation examples, it should improve RE performance, including yielding training data with a more balanced label distribution. [sent-257, score-0.184]

84 As shown in Table 6, the patterns are effective in inferring the structure of mention pairs. [sent-259, score-0.454]

85 For instance, applying the premodifier patterns on the 55,520 mention pairs, we correctly identified 86.8% of the 1,224 premodifier occurrences as premodifiers, while incurring a false-positive rate of only about 20%. [sent-260, score-0.642] [sent-261, score-0.211]

87 We note that preposition structures are relatively harder to identify. [sent-264, score-0.24]

88 Some of the reasons include possibly multiple prepositions between a mention pair, preposition sense ambiguity, PP-attachment ambiguity, etc. [sent-265, score-0.447]

89 However, in general, we observe that inferring the structures allows us to discard a large portion of the mention pairs which have no valid relation between them. [sent-266, score-0.674]

90 The intuition behind this is the following: if we infer that there is a syntactico-semantic structure between a mention pair, then it is likely that the mention pair exhibits a valid relation. [sent-267, score-0.892]

91 Conversely, if there is a valid relation between a mention pair, then it is likely that there exists a syntactico-semantic structure between the mentions. [sent-268, score-0.59]

92 We note that leveraging the structures provides improvements in all experimental settings. [sent-275, score-0.168]

93 There are probably many near misses when we apply our structure patterns on predicted mentions. [sent-281, score-0.158]

94 For instance, for both premodifier and possessive structures, we require that one mention completely includes the other. [sent-282, score-0.679]

95 Relaxing this might potentially recover additional valid mention pairs and improve performance. [sent-283, score-0.376]

96 It will also be interesting to feed back the predictions of the structure patterns to the mention entity typing classifier and possibly retrain it to obtain a better classifier. [sent-285, score-0.711]

97 We thank Ming-Wei Chang and Quang Do for building the mention extraction system. [sent-289, score-0.376]

98 A systematic exploration of the feature space for relation extraction. [sent-318, score-0.161]

99 Relation extraction using convolution tree kernel expanded with entity features. [sent-335, score-0.138]

100 Global inference for entity and relation identification via a linear programming formulation. [sent-343, score-0.267]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('mj', 0.494), ('mention', 0.344), ('mi', 0.297), ('re', 0.283), ('premodifier', 0.211), ('relation', 0.161), ('mentions', 0.16), ('lw', 0.147), ('structures', 0.137), ('formulaic', 0.132), ('rebase', 0.13), ('possessive', 0.124), ('wk', 0.118), ('hw', 0.115), ('syntacticosemantic', 0.113), ('entity', 0.106), ('preposition', 0.103), ('offset', 0.101), ('zoo', 0.097), ('res', 0.094), ('patterns', 0.087), ('accept', 0.085), ('chan', 0.082), ('roth', 0.078), ('pos', 0.077), ('typing', 0.063), ('seattle', 0.062), ('ne', 0.06), ('classifier', 0.058), ('jj', 0.056), ('dimension', 0.053), ('tag', 0.053), ('met', 0.053), ('relations', 0.049), ('premodifiers', 0.049), ('predicted', 0.048), ('null', 0.046), ('algorithmic', 0.044), ('bc', 0.044), ('ace', 0.044), ('constrained', 0.044), ('disc', 0.043), ('coarsegrained', 0.039), ('dg', 0.039), ('binary', 0.039), ('coarse', 0.039), ('jiang', 0.038), ('wp', 0.037), ('ec', 0.036), ('pair', 0.036), ('predefined', 0.034), ('mk', 0.034), ('vl', 0.034), ('extraction', 0.032), ('categoryfeature', 0.032), ('fundel', 0.032), ('lmi', 0.032), ('lmj', 0.032), ('mcand', 0.032), ('fine', 0.032), ('valid', 0.032), ('pattern', 0.031), ('leveraging', 0.031), ('liblinear', 0.03), ('zhou', 0.03), ('zhai', 0.03), ('predictions', 0.03), ('exists', 0.03), ('immediate', 0.03), ('documents', 0.029), ('noun', 0.029), ('type', 0.029), ('ratinov', 0.029), ('rows', 0.029), ('satisfy', 0.028), ('disambiguate', 0.028), ('arguments', 0.028), ('build', 0.027), ('constituents', 0.027), ('bnews', 0.026), ('nwire', 0.026), ('org', 0.026), ('leverages', 0.026), ('exploiting', 0.026), ('prediction', 0.026), ('offsets', 0.025), ('greenwood', 0.025), ('qian', 0.025), ('abbreviations', 0.025), ('parse', 0.024), ('fold', 0.024), ('chunk', 0.024), ('coincides', 0.023), ('pipelined', 0.023), ('propagated', 0.023), ('seng', 0.023), ('label', 0.023), ('structure', 0.023), ('preceding', 0.023), ('highlight', 0.022)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000006 126 acl-2011-Exploiting Syntactico-Semantic Structures for Relation Extraction

Author: Yee Seng Chan ; Dan Roth

Abstract: In this paper, we observe that there exists a second dimension to the relation extraction (RE) problem that is orthogonal to the relation type dimension. We show that most of these second dimensional structures are relatively constrained and not difficult to identify. We propose a novel algorithmic approach to RE that starts by first identifying these structures and then, within these, identifying the semantic type of the relation. In the real RE problem where relation arguments need to be identified, exploiting these structures also allows reducing pipelined propagated errors. We show that this RE framework provides significant improvement in RE performance.

2 0.26847172 277 acl-2011-Semi-supervised Relation Extraction with Large-scale Word Clustering

Author: Ang Sun ; Ralph Grishman ; Satoshi Sekine

Abstract: We present a simple semi-supervised relation extraction system with large-scale word clustering. We focus on systematically exploring the effectiveness of different cluster-based features. We also propose several statistical methods for selecting clusters at an appropriate level of granularity. When training on different sizes of data, our semi-supervised approach consistently outperformed a state-of-the-art supervised baseline system. 1

3 0.19045986 12 acl-2011-A Generative Entity-Mention Model for Linking Entities with Knowledge Base

Author: Xianpei Han ; Le Sun

Abstract: Linking entities with knowledge base (entity linking) is a key issue in bridging the textual data with the structural knowledge base. Due to the name variation problem and the name ambiguity problem, the entity linking decisions are critically depending on the heterogenous knowledge of entities. In this paper, we propose a generative probabilistic model, called entitymention model, which can leverage heterogenous entity knowledge (including popularity knowledge, name knowledge and context knowledge) for the entity linking task. In our model, each name mention to be linked is modeled as a sample generated through a three-step generative story, and the entity knowledge is encoded in the distribution of entities in document P(e), the distribution of possible names of a specific entity P(s|e), and the distribution of possible contexts of a specific entity P(c|e). To find the referent entity of a name mention, our method combines the evidences from all the three distributions P(e), P(s|e) and P(c|e). Experimental results show that our method can significantly outperform the traditional methods. 1

4 0.16366872 196 acl-2011-Large-Scale Cross-Document Coreference Using Distributed Inference and Hierarchical Models

Author: Sameer Singh ; Amarnag Subramanya ; Fernando Pereira ; Andrew McCallum

Abstract: Cross-document coreference, the task of grouping all the mentions of each entity in a document collection, arises in information extraction and automated knowledge base construction. For large collections, it is clearly impractical to consider all possible groupings of mentions into distinct entities. To solve the problem we propose two ideas: (a) a distributed inference technique that uses parallelism to enable large scale processing, and (b) a hierarchical model of coreference that represents uncertainty over multiple granularities of entities to facilitate more effective approximate inference. To evaluate these ideas, we constructed a labeled corpus of 1.5 million disambiguated mentions in Web pages by selecting link anchors referring to Wikipedia entities. We show that the combination of the hierarchical model with distributed inference quickly obtains high accuracy (with error reduction of 38%) on this large dataset, demonstrating the scalability of our approach.

5 0.14975744 86 acl-2011-Coreference for Learning to Extract Relations: Yes Virginia, Coreference Matters

Author: Ryan Gabbard ; Marjorie Freedman ; Ralph Weischedel

Abstract: As an alternative to requiring substantial supervised relation training data, many have explored bootstrapping relation extraction from a few seed examples. Most techniques assume that the examples are based on easily spotted anchors, e.g., names or dates. Sentences in a corpus which contain the anchors are then used to induce alternative ways of expressing the relation. We explore whether coreference can improve the learning process. That is, if the algorithm considered examples such as his sister, would accuracy be improved? With coreference, we see on average a 2-fold increase in F-Score. Despite using potentially errorful machine coreference, we see significant increase in recall on all relations. Precision increases in four cases and decreases in six.

6 0.13467579 114 acl-2011-End-to-End Relation Extraction Using Distant Supervision from External Semantic Repositories

7 0.12977675 116 acl-2011-Enhancing Language Models in Statistical Machine Translation with Backward N-grams and Mutual Information Triggers

8 0.12731816 170 acl-2011-In-domain Relation Discovery with Meta-constraints via Posterior Regularization

9 0.11701045 63 acl-2011-Bootstrapping coreference resolution using word associations

10 0.10889732 23 acl-2011-A Pronoun Anaphora Resolution System based on Factorial Hidden Markov Models

11 0.10884731 65 acl-2011-Can Document Selection Help Semi-supervised Learning? A Case Study On Event Extraction

12 0.10409205 40 acl-2011-An Error Analysis of Relation Extraction in Social Media Documents

13 0.097171202 190 acl-2011-Knowledge-Based Weak Supervision for Information Extraction of Overlapping Relations

14 0.093179055 129 acl-2011-Extending the Entity Grid with Entity-Specific Features

15 0.088754259 328 acl-2011-Using Cross-Entity Inference to Improve Event Extraction

16 0.081604853 246 acl-2011-Piggyback: Using Search Engines for Robust Cross-Domain Named Entity Recognition

17 0.081111036 224 acl-2011-Models and Training for Unsupervised Preposition Sense Disambiguation

18 0.080549344 262 acl-2011-Relation Guided Bootstrapping of Semantic Lexicons

19 0.079366609 110 acl-2011-Effective Use of Function Words for Rule Generalization in Forest-Based Translation

20 0.07755737 128 acl-2011-Exploring Entity Relations for Named Entity Disambiguation


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.184), (1, 0.036), (2, -0.165), (3, -0.03), (4, 0.13), (5, 0.048), (6, -0.002), (7, -0.116), (8, -0.204), (9, 0.036), (10, 0.027), (11, -0.045), (12, -0.064), (13, -0.026), (14, 0.042), (15, 0.061), (16, -0.069), (17, -0.087), (18, 0.025), (19, 0.097), (20, -0.099), (21, -0.036), (22, 0.118), (23, 0.033), (24, 0.007), (25, -0.009), (26, 0.066), (27, 0.034), (28, 0.037), (29, 0.011), (30, -0.04), (31, 0.02), (32, 0.013), (33, 0.008), (34, -0.033), (35, 0.002), (36, -0.02), (37, -0.094), (38, -0.027), (39, -0.025), (40, -0.056), (41, 0.062), (42, 0.076), (43, 0.009), (44, 0.012), (45, -0.088), (46, -0.052), (47, 0.042), (48, -0.092), (49, 0.052)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96480948 126 acl-2011-Exploiting Syntactico-Semantic Structures for Relation Extraction

Author: Yee Seng Chan ; Dan Roth

Abstract: In this paper, we observe that there exists a second dimension to the relation extraction (RE) problem that is orthogonal to the relation type dimension. We show that most of these second dimensional structures are relatively constrained and not difficult to identify. We propose a novel algorithmic approach to RE that starts by first identifying these structures and then, within these, identifying the semantic type of the relation. In the real RE problem where relation arguments need to be identified, exploiting these structures also allows reducing pipelined propagated errors. We show that this RE framework provides significant improvement in RE performance.

2 0.79003137 277 acl-2011-Semi-supervised Relation Extraction with Large-scale Word Clustering

Author: Ang Sun ; Ralph Grishman ; Satoshi Sekine

Abstract: We present a simple semi-supervised relation extraction system with large-scale word clustering. We focus on systematically exploring the effectiveness of different cluster-based features. We also propose several statistical methods for selecting clusters at an appropriate level of granularity. When training on different sizes of data, our semi-supervised approach consistently outperformed a state-of-the-art supervised baseline system. 1

3 0.69750136 40 acl-2011-An Error Analysis of Relation Extraction in Social Media Documents

Author: Gregory Brown

Abstract: Relation extraction in documents allows the detection of how entities being discussed in a document are related to one another (e.g. partof). This paper presents an analysis of a relation extraction system based on prior work but applied to the J.D. Power and Associates Sentiment Corpus to examine how the system works on documents from a range of social media. The results are examined on three different subsets of the JDPA Corpus, showing that the system performs much worse on documents from certain sources. The proposed explanation is that the features used are more appropriate to text with strong editorial standards than the informal writing style of blogs.

4 0.68480545 190 acl-2011-Knowledge-Based Weak Supervision for Information Extraction of Overlapping Relations

Author: Raphael Hoffmann ; Congle Zhang ; Xiao Ling ; Luke Zettlemoyer ; Daniel S. Weld

Abstract: Information extraction (IE) holds the promise of generating a large-scale knowledge base from the Web’s natural language text. Knowledge-based weak supervision, using structured data to heuristically label a training corpus, works towards this goal by enabling the automated learning of a potentially unbounded number of relation extractors. Recently, researchers have developed multiinstance learning algorithms to combat the noisy training data that can come from heuristic labeling, but their models assume relations are disjoint — for example they cannot extract the pair Founded ( Jobs Apple ) and CEO-o f ( Jobs Apple ) . , , This paper presents a novel approach for multi-instance learning with overlapping relations that combines a sentence-level extrac- , tion model with a simple, corpus-level component for aggregating the individual facts. We apply our model to learn extractors for NY Times text using weak supervision from Freebase. Experiments show that the approach runs quickly and yields surprising gains in accuracy, at both the aggregate and sentence level.

5 0.68321252 86 acl-2011-Coreference for Learning to Extract Relations: Yes Virginia, Coreference Matters

Author: Ryan Gabbard ; Marjorie Freedman ; Ralph Weischedel

Abstract: As an alternative to requiring substantial supervised relation training data, many have explored bootstrapping relation extraction from a few seed examples. Most techniques assume that the examples are based on easily spotted anchors, e.g., names or dates. Sentences in a corpus which contain the anchors are then used to induce alternative ways of expressing the relation. We explore whether coreference can improve the learning process. That is, if the algorithm considered examples such as his sister, would accuracy be improved? With coreference, we see on average a 2-fold increase in F-Score. Despite using potentially errorful machine coreference, we see significant increase in recall on all relations. Precision increases in four cases and decreases in six.

6 0.65980422 196 acl-2011-Large-Scale Cross-Document Coreference Using Distributed Inference and Hierarchical Models

7 0.65253186 114 acl-2011-End-to-End Relation Extraction Using Distant Supervision from External Semantic Repositories

8 0.61117381 170 acl-2011-In-domain Relation Discovery with Meta-constraints via Posterior Regularization

9 0.57827842 12 acl-2011-A Generative Entity-Mention Model for Linking Entities with Knowledge Base

10 0.54541641 191 acl-2011-Knowledge Base Population: Successful Approaches and Challenges

11 0.53422332 23 acl-2011-A Pronoun Anaphora Resolution System based on Factorial Hidden Markov Models

12 0.49930698 262 acl-2011-Relation Guided Bootstrapping of Semantic Lexicons

13 0.49328521 63 acl-2011-Bootstrapping coreference resolution using word associations

14 0.46874142 261 acl-2011-Recognizing Named Entities in Tweets

15 0.45861995 85 acl-2011-Coreference Resolution with World Knowledge

16 0.45803031 293 acl-2011-Template-Based Information Extraction without the Templates

17 0.45243821 322 acl-2011-Unsupervised Learning of Semantic Relation Composition

18 0.45223743 129 acl-2011-Extending the Entity Grid with Entity-Specific Features

19 0.44776019 128 acl-2011-Exploring Entity Relations for Named Entity Disambiguation

20 0.42891407 116 acl-2011-Enhancing Language Models in Statistical Machine Translation with Backward N-grams and Mutual Information Triggers


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(5, 0.013), (17, 0.075), (26, 0.022), (37, 0.114), (39, 0.065), (41, 0.101), (53, 0.012), (55, 0.044), (59, 0.06), (71, 0.167), (72, 0.056), (91, 0.055), (96, 0.102), (97, 0.013)]
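The (topicId, topicWeight) pairs above form a sparse LDA topic distribution for this paper. How the site derives the simValue scores below is not stated; a plausible sketch (an assumption, not the site's documented method) is cosine similarity between such sparse topic vectors:

```python
import math

# Sparse LDA topic distributions, {topicId: topicWeight}.
# These example vectors are illustrative, not taken from the page.
paper_a = {5: 0.013, 17: 0.075, 37: 0.114, 71: 0.167, 96: 0.102}
paper_b = {17: 0.060, 37: 0.090, 71: 0.200, 91: 0.055}

def cosine_similarity(u, v):
    """Cosine similarity between two sparse topic-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0
    return dot / (norm_u * norm_v)

sim = cosine_similarity(paper_a, paper_b)
```

Ranking every other paper in the collection by this score, highest first, would produce a list in the same shape as the simIndex/simValue entries below.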

similar papers list:

simIndex simValue paperId paperTitle

1 0.88060731 273 acl-2011-Semantic Representation of Negation Using Focus Detection

Author: Eduardo Blanco ; Dan Moldovan

Abstract: Negation is present in all human languages and it is used to reverse the polarity of part of statements that are otherwise affirmative by default. A negated statement often carries positive implicit meaning, but to pinpoint the positive part from the negative part is rather difficult. This paper aims at thoroughly representing the semantics of negation by revealing implicit positive meaning. The proposed representation relies on focus of negation detection. For this, new annotation over PropBank and a learning algorithm are proposed.

2 0.85320938 307 acl-2011-Towards Tracking Semantic Change by Visual Analytics

Author: Christian Rohrdantz ; Annette Hautli ; Thomas Mayer ; Miriam Butt ; Daniel A. Keim ; Frans Plank

Abstract: This paper presents a new approach to detecting and tracking changes in word meaning by visually modeling and representing diachronic development in word contexts. Previous studies have shown that computational models are capable of clustering and disambiguating senses, a more recent trend investigates whether changes in word meaning can be tracked by automatic methods. The aim of our study is to offer a new instrument for investigating the diachronic development of word senses in a way that allows for a better understanding of the nature of semantic change in general. For this purpose we combine techniques from the field of Visual Analytics with unsupervised methods from Natural Language Processing, allowing for an interactive visual exploration of semantic change.

same-paper 3 0.85136175 126 acl-2011-Exploiting Syntactico-Semantic Structures for Relation Extraction

Author: Yee Seng Chan ; Dan Roth

Abstract: In this paper, we observe that there exists a second dimension to the relation extraction (RE) problem that is orthogonal to the relation type dimension. We show that most of these second dimensional structures are relatively constrained and not difficult to identify. We propose a novel algorithmic approach to RE that starts by first identifying these structures and then, within these, identifying the semantic type of the relation. In the real RE problem where relation arguments need to be identified, exploiting these structures also allows reducing pipelined propagated errors. We show that this RE framework provides significant improvement in RE performance.

4 0.74936956 65 acl-2011-Can Document Selection Help Semi-supervised Learning? A Case Study On Event Extraction

Author: Shasha Liao ; Ralph Grishman

Abstract: Annotating training data for event extraction is tedious and labor-intensive. Most current event extraction tasks rely on hundreds of annotated documents, but this is often not enough. In this paper, we present a novel self-training strategy, which uses Information Retrieval (IR) to collect a cluster of related documents as the resource for bootstrapping. Also, based on the particular characteristics of this corpus, global inference is applied to provide more confident and informative data selection. We compare this approach to self-training on a normal newswire corpus and show that IR can provide a better corpus for bootstrapping and that global inference can further improve instance selection. We obtain gains of 1.7% in trigger labeling and 2.3% in role labeling through IR and an additional 1.1% in trigger labeling and 1.3% in role labeling by applying global inference.

5 0.74687439 69 acl-2011-Clause Restructuring For SMT Not Absolutely Helpful

Author: Susan Howlett ; Mark Dras

Abstract: There are a number of systems that use a syntax-based reordering step prior to phrasebased statistical MT. An early work proposing this idea showed improved translation performance, but subsequent work has had mixed results. Speculations as to cause have suggested the parser, the data, or other factors. We systematically investigate possible factors to give an initial answer to the question: Under what conditions does this use of syntax help PSMT?

6 0.74029505 324 acl-2011-Unsupervised Semantic Role Induction via Split-Merge Clustering

7 0.73290813 209 acl-2011-Lexically-Triggered Hidden Markov Models for Clinical Document Coding

8 0.73280394 119 acl-2011-Evaluating the Impact of Coder Errors on Active Learning

9 0.73164892 58 acl-2011-Beam-Width Prediction for Efficient Context-Free Parsing

10 0.72754699 316 acl-2011-Unary Constraints for Efficient Context-Free Parsing

11 0.72685224 196 acl-2011-Large-Scale Cross-Document Coreference Using Distributed Inference and Hierarchical Models

12 0.72598255 331 acl-2011-Using Large Monolingual and Bilingual Corpora to Improve Coordination Disambiguation

13 0.72595453 32 acl-2011-Algorithm Selection and Model Adaptation for ESL Correction Tasks

14 0.72572213 277 acl-2011-Semi-supervised Relation Extraction with Large-scale Word Clustering

15 0.72472823 190 acl-2011-Knowledge-Based Weak Supervision for Information Extraction of Overlapping Relations

16 0.72461015 164 acl-2011-Improving Arabic Dependency Parsing with Form-based and Functional Morphological Features

17 0.72448683 269 acl-2011-Scaling up Automatic Cross-Lingual Semantic Role Annotation

18 0.72312838 311 acl-2011-Translationese and Its Dialects

19 0.72240353 246 acl-2011-Piggyback: Using Search Engines for Robust Cross-Domain Named Entity Recognition

20 0.72207701 170 acl-2011-In-domain Relation Discovery with Meta-constraints via Posterior Regularization