emnlp emnlp2013 emnlp2013-90 knowledge-graph by maker-knowledge-mining

90 emnlp-2013-Generating Coherent Event Schemas at Scale


Source: pdf

Author: Niranjan Balasubramanian ; Stephen Soderland ; Mausam ; Oren Etzioni

Abstract: Chambers and Jurafsky (2009) demonstrated that event schemas can be automatically induced from text corpora. However, our analysis of their schemas identifies several weaknesses, e.g., some schemas lack a common topic and distinct roles are incorrectly mixed into a single actor. It is due in part to their pair-wise representation that treats subject-verb independently from verb-object. This often leads to subject-verb-object triples that are not meaningful in the real-world. We present a novel approach to inducing open-domain event schemas that overcomes these limitations. Our approach uses co-occurrence statistics of semantically typed relational triples, which we call Rel-grams (relational n-grams). In a human evaluation, our schemas outperform Chambers’s schemas by wide margins on several evaluation criteria. Both Rel-grams and event schemas are freely available to the research community.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 However, our analysis of their schemas identifies several weaknesses, e. [sent-2, score-0.597]

2 , some schemas lack a common topic and distinct roles are incorrectly mixed into a single actor. [sent-4, score-0.702]

3 It is due in part to their pair-wise representation that treats subject-verb independently from verb-object. [sent-5, score-0.092]

4 This often leads to subject-verb-object triples that are not meaningful in the real-world. [sent-6, score-0.106]

5 We present a novel approach to inducing open-domain event schemas that overcomes these limitations. [sent-7, score-0.785]

6 Our approach uses co-occurrence statistics of semantically typed relational triples, which we call Rel-grams (relational n-grams). [sent-8, score-0.147]

7 In a human evaluation, our schemas outperform Chambers’s schemas by wide margins on several evaluation criteria. [sent-9, score-1.194]

8 Both Rel-grams and event schemas are freely available to the research community. [sent-10, score-0.748]

9 1 Introduction Event schemas (also known as templates or frames) have been widely used in information extraction. [sent-11, score-0.597]

10 An event schema is a set of actors (also known as slots) that play different roles in an event, such as the perpetrator, victim, and instrument in a bombing event. [sent-12, score-0.784]

11 represented as a set of (Actor, Rel, Actor) triples, and a set of instances for each actor A1, A2, etc. [sent-17, score-0.239]

12 Until recently, all event schemas in use in NLP were hand-engineered, e. [sent-19, score-0.748]

13 , the MUC templates and ACE event relations (ARPA, 1991 ; ARPA, 1998; Doddington et al. [sent-21, score-0.198]

14 The seminal work of Chambers and Jurafsky (2009) showed that event schemas can also be in- duced automatically from text corpora. [sent-24, score-0.748]

15 Instead of labeled roles, these schemas have a set of relations and actors. Their system is fully automatic, domain-independent, and scales to large text corpora. [sent-25, score-0.921]

16 However, we identify several limitations in the schemas produced by their system. [sent-27, score-1.194]

17 In the rest of this paper we use event schemas to refer to these automatically induced schemas with actors and relations. [sent-28, score-1.613]

18 edu/Users/cs/nchamber/data/schemas/acl09 [sent-31, score-0.110]

Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1721–1731, Seattle, Washington, USA, 18–21 October 2013. Association for Computational Linguistics.

19 mixes the events of fire spreading and disease spreading. [sent-33, score-0.170]

20 often lack coherence: mixing unrelated events and having actors whose entities do not play the same role in the schema. [sent-34, score-0.348]

21 Table 2 shows an event schema from Chambers that mixes the events of fire spreading and disease spreading. [sent-35, score-0.65]

22 Much of the incoherence of Chambers’ schemas can be traced to their representation, which uses pairs of elements from an assertion, thus treating subject-verb and verb-object separately. [sent-36, score-0.667]

23 This often leads to subject-verb-object triples that do not make sense in the real world. [sent-37, score-0.115]

24 Another limitation in the schemas Chambers released is that they restrict schemas to two actors, which can result in combining different actors. [sent-39, score-1.238]

25 1 Contributions We present an event schema induction algorithm that overcomes these weaknesses. [sent-42, score-0.517]

26 Our basic representation is triples of the form (Arg1, Relation, Arg2), extracted from a text corpus using Open Information Extraction (Mausam et al. [sent-43, score-0.109]

27 The use of triples aids in agreement between subject and object of a relation. [sent-45, score-0.116]

28 We also assign semantic types to arguments, both to alleviate data sparsity and to produce coherent actors for our schemas. [sent-49, score-0.425]

29 Table 1 shows an event schema generated by our system. [sent-50, score-0.48]

30 The schema makes several related assertions about a person using a drug, failing a test, and getting suspended. [sent-52, score-0.386]

31 The main actors in the schema include the person who failed the test, the drug used, and the agent that suspended the person. [sent-53, score-0.628]

32 Our first step in creating event schemas is to tabulate co-occurrence of tuples in a database that we call Rel-grams (relational n-grams) (Sections 3, 5. [sent-54, score-1.190]
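The tabulation step described here can be pictured as counting, for each ordered pair of relational triples, how often the pair co-occurs within a small window in a document. The sketch below is illustrative only: the documents, triples, and window size are invented, not the authors' actual pipeline or data.

```python
from collections import Counter

# Hypothetical per-document lists of (arg1, relation, arg2) triples,
# standing in for Open IE output over a large corpus.
docs = [
    [("person", "use", "drug"), ("person", "fail", "test"),
     ("agency", "suspend", "person")],
    [("person", "use", "drug"), ("person", "fail", "test")],
]

def relgram_counts(docs, window=3):
    """Tabulate co-occurrence counts for ordered pairs of triples that
    appear within `window` triples of each other in the same document."""
    counts = Counter()
    for triples in docs:
        for i, first in enumerate(triples):
            for second in triples[i + 1 : i + 1 + window]:
                counts[(first, second)] += 1
    return counts

counts = relgram_counts(docs)
print(counts[(("person", "use", "drug"), ("person", "fail", "test"))])  # 2
```

Conditional statistics such as how often one tuple follows another can then be read off these counts, which is the spirit of the Rel-grams model.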

33 We then perform analysis on a graph induced from the Rel-grams database and use this to create event schemas (Section 4). [sent-56, score-0.822]

34 We compared our event schemas with those of Chambers on several metrics including whether the schema pertains to a coherent topic or event and whether the actors play a coherent role in that event (Section 5. [sent-57, score-2.089]

35 Amazon Mechanical Turk workers judged that our schemas have significantly better coherence: 92% versus 82% have a coherent topic, and 81% versus 59% have coherent actors. [sent-59, score-1.113]

36 We release our open-domain event schemas and the Rel-grams database for further use by the NLP community. [sent-60, score-0.767]

37 2 System Overview Our approach to schema generation is based on the idea that frequently co-occurring relations in text capture relatedness of assertions about real-world events. [sent-61, score-0.433]

38 We begin by extracting a set of relational tuples from a large text corpus and tabulate occurrence of pairs of tuples in a database. [sent-62, score-0.873]

39 We then construct a graph from this database and identify high-connectivity nodes (relational tuples) in this graph as a starting point for constructing event schemas. [sent-63, score-0.226]

40 We use graph analysis to rank the tuples and merge arguments to form the actors in the schema. [sent-64, score-0.651]
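As a rough stand-in for this graph analysis, one can treat each tuple as a node, weight edges by co-occurrence counts, and rank nodes by total incident edge weight; the highest-ranked nodes then seed schemas. The counts below are invented, and the paper's actual ranking and argument-merging procedure is more involved than this sketch.

```python
from collections import defaultdict

# Hypothetical co-occurrence counts between relational tuples (graph edges).
cooccur = {
    (("person", "use", "drug"), ("person", "fail", "test")): 5,
    (("person", "fail", "test"), ("agency", "suspend", "person")): 3,
    (("person", "use", "drug"), ("agency", "suspend", "person")): 2,
}

def rank_by_connectivity(cooccur):
    """Rank tuple nodes by total incident edge weight, a simple proxy
    for the high-connectivity nodes used to seed event schemas."""
    strength = defaultdict(int)
    for (a, b), weight in cooccur.items():
        strength[a] += weight
        strength[b] += weight
    return sorted(strength.items(), key=lambda item: -item[1])

ranking = rank_by_connectivity(cooccur)
print(ranking[0])  # (('person', 'fail', 'test'), 8)
```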

41 3 Modeling Relational Co-occurrence In order to tabulate pairwise occurrences of relational tuples, we need a suitable relation-based representation. [sent-65, score-0.547]

42 We now describe the extraction and representation of relations, a database for storing co-occurrence information, and our probabilistic model for the co-occurrence. [sent-66, score-0.088]

43 We call this model Rel-grams, as it can be seen as a relational analog of the n-gram language model. [sent-67, score-0.138]

44 1 Relations Extraction and Representation We extract relational triples from each sentence in a large corpus using the OLLIE Open IE system (available at http://relgrams.). [sent-81, score-0.201]

45 4 This provides relational tuples in the format (Arg1, Relation, Arg2) where each tuple element is a phrase from the sentence. [sent-81, score-0.597]

46 For example: (a new study, was released in, 2008). Relational triples provide a more specific representation that is less ambiguous than (subj, verb) or (verb, obj) pairs. [sent-86, score-0.153]

47 To reduce sparsity and to improve generalization, we represent the relation phrase by its stemmed head verb plus any prepositions. [sent-88, score-0.082]

48 The relation phrase may include embedded nouns, in which case these are stemmed as well. [sent-89, score-0.082]

49 Moreover, tuple arguments are represented as stemmed head nouns, and we also record semantic types of the arguments. [sent-90, score-0.232]

50 We selected 29 semantic types from WordNet, examining the set of instances on a small development set to ensure that the types are useful, but not overly specific. [sent-91, score-0.116]
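The normalization just described (stemmed head verb plus prepositions for the relation; stemmed head nouns plus semantic types for the arguments) can be sketched as follows. The stemming table, type lexicon, and head-word heuristic below are toy stand-ins invented for the example; the paper uses a real stemmer and 29 WordNet-derived types.

```python
# Toy stemming table and type lexicon; both are invented for illustration.
STEMS = {"was released in": "release in", "exploded in": "explode in"}
TYPES = {"study": "cognition", "bomb": "artifact", "city": "location"}

def normalize(triple):
    """Reduce a raw (arg1, relation, arg2) triple to stemmed heads with
    semantic types, to cut data sparsity and improve generalization."""
    arg1, rel, arg2 = triple
    head1 = arg1.split()[-1]  # crude head-noun heuristic: last token
    head2 = arg2.split()[-1]
    return ((head1, TYPES.get(head1, "other")),
            STEMS.get(rel, rel),  # stemmed head verb plus prepositions
            (head2, TYPES.get(head2, "other")))

print(normalize(("a new study", "was released in", "2008")))
# (('study', 'cognition'), 'release in', ('2008', 'other'))
```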

51 While the Rel-grams suffer from noise in tuple validity, there is clearly a strong signal in the data about common topics and implications between tuples. [sent-104, score-0.554]

52 As we demonstrate in the following section, an end task can use graph analysis techniques to amplify this strong signal, producing high-quality relational schemas. [sent-105, score-0.159]

53 2 Schemas Evaluation In our schema evaluation, we are interested in assessing how well the schemas correspond to common-sense knowledge about real world events. [sent-107, score-0.959]

54 To this end, we focus on three measures: topical coherence, tuple validity, and actor coherence. [sent-108, score-0.335]

55 , the relations and actors should relate to some real world topic or event. [sent-111, score-0.404]

56 The tuples that comprise a schema should be valid assertions that make sense in the real world. [sent-112, score-0.892]

57 Finally, each actor in the schema should belong to a cohesive set that plays a consistent role in the relations. [sent-113, score-0.596]

58 We compare Rel-grams schemas against the state-of-the-art narrative schemas released by Chambers (Chambers and Jurafsky, 2009). [sent-115, score-1.238]

59 Each of the top instances for A1 or A2 is presented, holding the relation and the other actor fixed. [sent-122, score-0.278]

60 schemas are less expressive than ours: they do not associate types with actors, and each schema has a constant pre-specified number of relations. [sent-123, score-1.222]

61 For a fair comparison we use a similarly expressive version of our schemas that strips off argument types and has the same number of relations per schema (six) as their highest-quality output set. [sent-124, score-1.05]

62 The first task tests the coherence and validity of relations in a schema and the second does the same for the schema actors. [sent-128, score-0.819]

63 In order to make the tasks understandable to unskilled AMT workers, we followed the accepted practice of presenting them with grounded instances of the schemas (Wang et al. [sent-129, score-0.772]

64 , instantiating a schema with a specific argument instead of showing the various possibilities for an actor. [sent-132, score-0.358]

65 First, we collect the information in schemas as a set of tuples: S = {T1, T2, · · · , Tn}, where each tuple is of the form T : (X, Rel, Y), which conveys a relationship Rel between actors X and Y. [sent-133, score-0.978]

66 Each actor is represented by its highest frequency examples (instances). [sent-134, score-0.183]

67 Table 4 shows examples of schemas from Chambers and Rel-grams represented in this format. [sent-135, score-0.597]

68 Then, we create grounded tuples by randomly sampling from the top instances for each actor. [sent-136, score-0.520]
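The grounding step can be sketched as sampling one of the top-k instances for each actor variable and substituting it into every tuple, so each actor is grounded consistently across the schema. The schema, actor variables, and instance lists below are hypothetical:

```python
import random

# Hypothetical schema: tuples over actor variables, plus top instances per actor.
schema_tuples = [("A1", "explode in", "A2"), ("A1", "kill", "A3")]
instances = {"A1": ["bomb", "blast", "device"],
             "A2": ["city", "market", "station"],
             "A3": ["person", "soldier", "civilian"]}

def ground(schema_tuples, instances, k=5, seed=0):
    """Instantiate each actor variable with one of its top-k instances,
    mirroring the grounded tuples shown to annotators."""
    rng = random.Random(seed)
    chosen = {actor: rng.choice(top[:k]) for actor, top in instances.items()}
    return [(chosen[x], rel, chosen[y]) for x, rel, y in schema_tuples]

grounded = ground(schema_tuples, instances)
```

Repeating the call with different seeds yields the several random instantiations per schema used in the evaluation tasks.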

69 Task I: Topical Coherence To test whether the relations in a schema form a coherent topic or event, we presented the AMT annotators with a schema as a set of grounded tuples, showing each relation in the schema, but randomly selecting one of the top 5 instances from each actor. [sent-137, score-1.194]

70 We generated five such instantiations. Figure 3: (a) Has Topic: Percentage of schema instantiations with a coherent topic. [sent-138, score-0.672]

71 (b) Valid Tuples: Percentage of grounded statements that assert valid real-world relations. [sent-139, score-0.31]

72 (c) Valid + On Topic: Percentage of grounded statements where 1) the instantiation has a coherent topic, 2) the tuple is valid and 3) the relation belongs to the common topic. [sent-140, score-0.65]

73 We ask three kinds of questions on each grounded schema: (1) is each of the grounded tuples valid (i. [sent-145, score-0.711]

74 meaningful in the real world); (2) do the majority of relations form a coherent topic; and (3) does each tuple belong to the common topic. [sent-147, score-0.459]

75 Our instructions specified that the annotators should ignore grammar and focus on whether a tuple may be interpreted as a real world statement. [sent-150, score-0.207]

76 For example, the first tuple in R1 in Table 5 is a valid statement “a bomb exploded in a city”, but the tuples in C1 “a blast exploded a child”, “a child detonated a blast”, and “a child planted a blast” don’t make sense. [sent-151, score-0.777]

77 Task II: Actor Coherence To test whether the instances of an actor form a coherent set, we held the relation and one actor fixed and presented the AMT annotators with the top 5 instances for the other actor. [sent-152, score-0.716]

78 The first example R11 in Table 6 holds the relation “explode in” fixed, and A2 is grounded to the randomly selected instance “city”. [sent-153, score-0.158]

79 We present grounded tuples by varying A1 and ask annotators to judge whether these instances form a coherent topic and whether each instance belongs to that common topic. [sent-154, score-0.814]

80 As with Task I, we create five random instantiations for each schema. [sent-155, score-0.075]

81 Figure 4: Actor Coherence: Has Role bars compare the percentage of tuples where the tested actors have a coherent role. [sent-156, score-0.864]

82 Fits Role compares the percentage of top instances that fit the specified role for the tested actors. [sent-157, score-0.155]

83 2 Results We obtained a test set of 100 schemas per system by randomly sampling from the top 500 schemas from each system. [sent-162, score-1.194]

84 The Has Topic bars in Figure 3 show results for schema coherence. [sent-166, score-0.386]

85 Rel-grams has a higher proportion of schemas with a coherent topic: 91% compared to 82% for Chambers’. [sent-167, score-0.774]

86 The Valid Tuples bars in Figure 3 compare the percentage of valid grounded tuples in the schema instantiations. [sent-170, score-1.034]

87 A tuple was labeled valid if a majority of the annotators labeled it to be meaningful in the real world. [sent-171, score-0.382]

88 Here we see a dramatic difference: Rel-grams have 92% valid tuples, compared with Chambers’ 61%. [sent-172, score-0.128]

89 The Valid + On Topic bars in Figure 3 compare the percentage of tuples that are both valid and on topic, i. [sent-174, score-0.586]

90 Tuples from schema instantiations that did not have a coherent topic were labeled incorrect. [sent-177, score-0.638]

91 Rel-grams have a higher proportion of valid tuples belonging to a common topic, 82% compared to 58% for Chambers’ schemas, a 56% error reduction. [sent-178, score-0.492]

92 This is the strictest of the experiments described thus far: 1) the schema must have a topic, 2) the tuple must be valid, and 3) the tuple must belong to the topic. [sent-179, score-0.636]
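This strictest measure is a conjunction over per-tuple judgments; the sketch below computes it from invented annotation records (the field names are hypothetical, not the authors' data format):

```python
# Hypothetical per-tuple annotation records from the AMT tasks.
judgments = [
    {"valid": True,  "has_topic": True,  "on_topic": True},
    {"valid": True,  "has_topic": True,  "on_topic": False},
    {"valid": False, "has_topic": True,  "on_topic": True},
    {"valid": True,  "has_topic": False, "on_topic": True},
]

def valid_on_topic_rate(judgments):
    """Strictest measure: a tuple counts only if its schema instantiation
    has a coherent topic, the tuple is valid, and it belongs to that topic;
    tuples from topic-less instantiations are labeled incorrect."""
    hits = sum(j["valid"] and j["has_topic"] and j["on_topic"]
               for j in judgments)
    return hits / len(judgments)

print(valid_on_topic_rate(judgments))  # 0.25
```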

93 We evaluated schema actors from the top 25 schemas in Chambers’ and Rel-grams schemas, using grounded instances such as those in Table 6. [sent-181, score-1.349]

94 Figure 4 compares the percentage of tuples where the actors play a coherent role (Has Role), and the percentage of instances that fit that role for the actor (Fits Role). [sent-182, score-1.215]

95 Rel-grams has much higher actor coherence than Chambers’, with 97% judged to have a topic compared to 81%, and 81% of instances fitting the common role compared with Chambers’ 59%. [sent-183, score-0.453]

96 3 Error Analysis The errors in both our schemas and those of Chambers are primarily due to mismatched actors and extraction errors, although Chambers’ schemas have a larger number of actor-mismatch errors, and the cause of the errors differs between the two systems. [sent-186, score-1.758]

97 Examining the data published by Chambers, the main source of invalid tuples is mismatch of subject and object for a given relation, which accounts for 80% of the invalid tuples. [sent-187, score-0.503]

98 An example is (boiler, light, candle) where (boiler, light) and (light, candle) are well-formed, yet the entire tuple is not. [sent-189, score-0.133]

99 In addition, 43% of the invalid tuples seem to be from errors by the dependency parser. [sent-190, score-0.426]

100 Our schemas also suffer from mismatched actors: despite the semantic typing of the actors, we found a mismatch in 56% of the invalid tuples (5% of all tuples). [sent-191, score-1.32]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('schemas', 0.597), ('tuples', 0.345), ('schema', 0.329), ('actors', 0.248), ('chambers', 0.21), ('actor', 0.183), ('coherent', 0.158), ('event', 0.151), ('tuple', 0.133), ('valid', 0.128), ('relational', 0.119), ('grounded', 0.119), ('triples', 0.082), ('amt', 0.082), ('topic', 0.076), ('coherence', 0.075), ('instantiations', 0.075), ('tabulate', 0.064), ('invalid', 0.06), ('bars', 0.057), ('assertions', 0.057), ('instances', 0.056), ('percentage', 0.056), ('drug', 0.051), ('blast', 0.051), ('relations', 0.047), ('released', 0.044), ('role', 0.043), ('boiler', 0.043), ('candle', 0.043), ('chemas', 0.043), ('nchamber', 0.043), ('relgrams', 0.043), ('subjectverb', 0.043), ('ucla', 0.043), ('stemmed', 0.043), ('mausam', 0.041), ('belong', 0.041), ('annotators', 0.041), ('relation', 0.039), ('validity', 0.039), ('rel', 0.039), ('mismatch', 0.038), ('arguments', 0.037), ('assert', 0.037), ('mixes', 0.037), ('spreading', 0.037), ('exploded', 0.037), ('overcomes', 0.037), ('fire', 0.034), ('aids', 0.034), ('real', 0.033), ('database', 0.033), ('arpa', 0.032), ('disease', 0.032), ('mismatched', 0.032), ('fits', 0.031), ('events', 0.03), ('argument', 0.029), ('roles', 0.029), ('expressive', 0.029), ('workers', 0.029), ('cooccurrence', 0.028), ('instantiation', 0.028), ('light', 0.028), ('representation', 0.027), ('play', 0.027), ('jurafsky', 0.026), ('cited', 0.026), ('statements', 0.026), ('meaningful', 0.024), ('dat', 0.024), ('majority', 0.023), ('child', 0.023), ('ie', 0.023), ('treats', 0.022), ('examining', 0.022), ('errors', 0.021), ('graph', 0.021), ('http', 0.021), ('judged', 0.02), ('induced', 0.02), ('proportion', 0.019), ('topical', 0.019), ('types', 0.019), ('open', 0.019), ('belongs', 0.019), ('turk', 0.019), ('occurences', 0.019), ('sport', 0.019), ('doddington', 0.019), ('executive', 0.019), ('equality', 0.019), ('ollie', 0.019), ('perpetrator', 0.019), ('usna', 0.019), ('victim', 0.019), ('victims', 0.019), ('amplify', 0.019), ('analog', 0.019)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000002 90 emnlp-2013-Generating Coherent Event Schemas at Scale

Author: Niranjan Balasubramanian ; Stephen Soderland ; Mausam ; Oren Etzioni

Abstract: Chambers and Jurafsky (2009) demonstrated that event schemas can be automatically induced from text corpora. However, our analysis of their schemas identifies several weaknesses, e.g., some schemas lack a common topic and distinct roles are incorrectly mixed into a single actor. It is due in part to their pair-wise representation that treats subjectverb independently from verb-object. This often leads to subject-verb-object triples that are not meaningful in the real-world. We present a novel approach to inducing open-domain event schemas that overcomes these limitations. Our approach uses cooccurrence statistics of semantically typed relational triples, which we call Rel-grams (relational n-grams). In a human evaluation, our schemas outperform Chambers’s schemas by wide margins on several evaluation criteria. Both Rel-grams and event schemas are freely available to the research community.

2 0.46133298 75 emnlp-2013-Event Schema Induction with a Probabilistic Entity-Driven Model

Author: Nathanael Chambers

Abstract: Event schema induction is the task of learning high-level representations of complex events (e.g., a bombing) and their entity roles (e.g., perpetrator and victim) from unlabeled text. Event schemas have important connections to early NLP research on frames and scripts, as well as modern applications like template extraction. Recent research suggests event schemas can be learned from raw text. Inspired by a pipelined learner based on named entity coreference, this paper presents the first generative model for schema induction that integrates coreference chains into learning. Our generative model is conceptually simpler than the pipelined approach and requires far less training data. It also provides an interesting contrast with a recent HMM-based model. We evaluate on a common dataset for template schema extraction. Our generative model matches the pipeline’s performance, and outperforms the HMM by 7 F1 points (20%).

3 0.21118169 194 emnlp-2013-Unsupervised Relation Extraction with General Domain Knowledge

Author: Oier Lopez de Lacalle ; Mirella Lapata

Abstract: In this paper we present an unsupervised approach to relational information extraction. Our model partitions tuples representing an observed syntactic relationship between two named entities (e.g., “X was born in Y” and “X is from Y”) into clusters corresponding to underlying semantic relation types (e.g., BornIn, Located). Our approach incorporates general domain knowledge which we encode as First Order Logic rules and automatically combine with a topic model developed specifically for the relation extraction task. Evaluation results on the ACE 2007 English Relation Detection and Categorization (RDC) task show that our model outperforms competitive unsupervised approaches by a wide margin and is able to produce clusters shaped by both the data and the rules.

4 0.11686455 192 emnlp-2013-Unsupervised Induction of Contingent Event Pairs from Film Scenes

Author: Zhichao Hu ; Elahe Rahimtoroghi ; Larissa Munishkina ; Reid Swanson ; Marilyn A. Walker

Abstract: Human engagement in narrative is partially driven by reasoning about discourse relations between narrative events, and the expectations about what is likely to happen next that results from such reasoning. Researchers in NLP have tackled modeling such expectations from a range of perspectives, including treating it as the inference of the CONTINGENT discourse relation, or as a type of common-sense causal reasoning. Our approach is to model likelihood between events by drawing on several of these lines of previous work. We implement and evaluate different unsupervised methods for learning event pairs that are likely to be CONTINGENT on one another. We refine event pairs that we learn from a corpus of film scene descriptions utilizing web search counts, and evaluate our results by collecting human judgments ofcontingency. Our results indicate that the use of web search counts increases the av- , erage accuracy of our best method to 85.64% over a baseline of 50%, as compared to an average accuracy of 75. 15% without web search.

5 0.11156759 118 emnlp-2013-Learning Biological Processes with Global Constraints

Author: Aju Thalappillil Scaria ; Jonathan Berant ; Mengqiu Wang ; Peter Clark ; Justin Lewis ; Brittany Harding ; Christopher D. Manning

Abstract: Biological processes are complex phenomena involving a series of events that are related to one another through various relationships. Systems that can understand and reason over biological processes would dramatically improve the performance of semantic applications involving inference such as question answering (QA) – specifically “How? ” and “Why? ” questions. In this paper, we present the task of process extraction, in which events within a process and the relations between the events are automatically extracted from text. We represent processes by graphs whose edges describe a set oftemporal, causal and co-reference event-event relations, and characterize the structural properties of these graphs (e.g., the graphs are connected). Then, we present a method for extracting relations between the events, which exploits these structural properties by performing joint in- ference over the set of extracted relations. On a novel dataset containing 148 descriptions of biological processes (released with this paper), we show significant improvement comparing to baselines that disregard process structure.

6 0.097866915 16 emnlp-2013-A Unified Model for Topics, Events and Users on Twitter

7 0.075417481 41 emnlp-2013-Building Event Threads out of Multiple News Articles

8 0.06984967 93 emnlp-2013-Harvesting Parallel News Streams to Generate Paraphrases of Event Relations

9 0.063780196 74 emnlp-2013-Event-Based Time Label Propagation for Automatic Dating of News Articles

10 0.063229993 147 emnlp-2013-Optimized Event Storyline Generation based on Mixture-Event-Aspect Model

11 0.058263693 160 emnlp-2013-Relational Inference for Wikification

12 0.058156494 152 emnlp-2013-Predicting the Presence of Discourse Connectives

13 0.055117488 105 emnlp-2013-Improving Web Search Ranking by Incorporating Structured Annotation of Queries

14 0.050272204 77 emnlp-2013-Exploiting Domain Knowledge in Aspect Extraction

15 0.050207637 76 emnlp-2013-Exploiting Discourse Analysis for Article-Wide Temporal Classification

16 0.047069333 68 emnlp-2013-Effectiveness and Efficiency of Open Relation Extraction

17 0.041620091 166 emnlp-2013-Semantic Parsing on Freebase from Question-Answer Pairs

18 0.039670285 69 emnlp-2013-Efficient Collective Entity Linking with Stacking

19 0.039647989 154 emnlp-2013-Prior Disambiguation of Word Tensors for Constructing Sentence Vectors

20 0.039611913 106 emnlp-2013-Inducing Document Plans for Concept-to-Text Generation


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.141), (1, 0.156), (2, -0.01), (3, 0.214), (4, 0.006), (5, -0.075), (6, -0.178), (7, 0.078), (8, 0.012), (9, 0.002), (10, 0.091), (11, -0.153), (12, -0.092), (13, -0.03), (14, -0.095), (15, -0.132), (16, 0.07), (17, -0.195), (18, 0.176), (19, -0.298), (20, 0.224), (21, 0.252), (22, 0.219), (23, -0.073), (24, -0.101), (25, -0.235), (26, -0.0), (27, 0.026), (28, 0.023), (29, -0.119), (30, 0.119), (31, 0.132), (32, -0.05), (33, -0.002), (34, -0.089), (35, -0.036), (36, -0.056), (37, -0.032), (38, -0.002), (39, -0.031), (40, 0.194), (41, -0.002), (42, -0.073), (43, -0.021), (44, 0.03), (45, 0.046), (46, -0.0), (47, -0.018), (48, -0.018), (49, -0.028)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9788276 90 emnlp-2013-Generating Coherent Event Schemas at Scale

Author: Niranjan Balasubramanian ; Stephen Soderland ; Mausam ; Oren Etzioni

Abstract: Chambers and Jurafsky (2009) demonstrated that event schemas can be automatically induced from text corpora. However, our analysis of their schemas identifies several weaknesses, e.g., some schemas lack a common topic and distinct roles are incorrectly mixed into a single actor. It is due in part to their pair-wise representation that treats subjectverb independently from verb-object. This often leads to subject-verb-object triples that are not meaningful in the real-world. We present a novel approach to inducing open-domain event schemas that overcomes these limitations. Our approach uses cooccurrence statistics of semantically typed relational triples, which we call Rel-grams (relational n-grams). In a human evaluation, our schemas outperform Chambers’s schemas by wide margins on several evaluation criteria. Both Rel-grams and event schemas are freely available to the research community.

2 0.85613191 75 emnlp-2013-Event Schema Induction with a Probabilistic Entity-Driven Model

Author: Nathanael Chambers

Abstract: Event schema induction is the task of learning high-level representations of complex events (e.g., a bombing) and their entity roles (e.g., perpetrator and victim) from unlabeled text. Event schemas have important connections to early NLP research on frames and scripts, as well as modern applications like template extraction. Recent research suggests event schemas can be learned from raw text. Inspired by a pipelined learner based on named entity coreference, this paper presents the first generative model for schema induction that integrates coreference chains into learning. Our generative model is conceptually simpler than the pipelined approach and requires far less training data. It also provides an interesting contrast with a recent HMM-based model. We evaluate on a common dataset for template schema extraction. Our generative model matches the pipeline’s performance, and outperforms the HMM by 7 F1 points (20%).

3 0.40782893 194 emnlp-2013-Unsupervised Relation Extraction with General Domain Knowledge

Author: Oier Lopez de Lacalle ; Mirella Lapata

Abstract: In this paper we present an unsupervised approach to relational information extraction. Our model partitions tuples representing an observed syntactic relationship between two named entities (e.g., “X was born in Y” and “X is from Y”) into clusters corresponding to underlying semantic relation types (e.g., BornIn, Located). Our approach incorporates general domain knowledge which we encode as First Order Logic rules and automatically combine with a topic model developed specifically for the relation extraction task. Evaluation results on the ACE 2007 English Relation Detection and Categorization (RDC) task show that our model outperforms competitive unsupervised approaches by a wide margin and is able to produce clusters shaped by both the data and the rules.

4 0.34613031 192 emnlp-2013-Unsupervised Induction of Contingent Event Pairs from Film Scenes

Author: Zhichao Hu ; Elahe Rahimtoroghi ; Larissa Munishkina ; Reid Swanson ; Marilyn A. Walker

Abstract: Human engagement in narrative is partially driven by reasoning about discourse relations between narrative events, and the expectations about what is likely to happen next that results from such reasoning. Researchers in NLP have tackled modeling such expectations from a range of perspectives, including treating it as the inference of the CONTINGENT discourse relation, or as a type of common-sense causal reasoning. Our approach is to model likelihood between events by drawing on several of these lines of previous work. We implement and evaluate different unsupervised methods for learning event pairs that are likely to be CONTINGENT on one another. We refine event pairs that we learn from a corpus of film scene descriptions utilizing web search counts, and evaluate our results by collecting human judgments ofcontingency. Our results indicate that the use of web search counts increases the av- , erage accuracy of our best method to 85.64% over a baseline of 50%, as compared to an average accuracy of 75. 15% without web search.

5 0.27927315 161 emnlp-2013-Rule-Based Information Extraction is Dead! Long Live Rule-Based Information Extraction Systems!

Author: Laura Chiticariu ; Yunyao Li ; Frederick R. Reiss

Abstract: The rise of “Big Data” analytics over unstructured text has led to renewed interest in information extraction (IE). We surveyed the landscape ofIE technologies and identified a major disconnect between industry and academia: while rule-based IE dominates the commercial world, it is widely regarded as dead-end technology by the academia. We believe the disconnect stems from the way in which the two communities measure the benefits and costs of IE, as well as academia’s perception that rulebased IE is devoid of research challenges. We make a case for the importance of rule-based IE to industry practitioners. We then lay out a research agenda in advancing the state-of-theart in rule-based IE systems which we believe has the potential to bridge the gap between academic research and industry practice.

6 0.2661657 118 emnlp-2013-Learning Biological Processes with Global Constraints

7 0.2206288 16 emnlp-2013-A Unified Model for Topics, Events and Users on Twitter

8 0.18937114 41 emnlp-2013-Building Event Threads out of Multiple News Articles

9 0.18432592 106 emnlp-2013-Inducing Document Plans for Concept-to-Text Generation

10 0.17981884 68 emnlp-2013-Effectiveness and Efficiency of Open Relation Extraction

11 0.17824681 93 emnlp-2013-Harvesting Parallel News Streams to Generate Paraphrases of Event Relations

12 0.17285557 147 emnlp-2013-Optimized Event Storyline Generation based on Mixture-Event-Aspect Model

13 0.16726419 49 emnlp-2013-Combining Generative and Discriminative Model Scores for Distant Supervision

14 0.16236596 138 emnlp-2013-Naive Bayes Word Sense Induction

15 0.1598148 152 emnlp-2013-Predicting the Presence of Discourse Connectives

16 0.15726477 199 emnlp-2013-Using Topic Modeling to Improve Prediction of Neuroticism and Depression in College Students

17 0.15165211 105 emnlp-2013-Improving Web Search Ranking by Incorporating Structured Annotation of Queries

18 0.14908984 196 emnlp-2013-Using Crowdsourcing to get Representations based on Regular Expressions

19 0.14700051 74 emnlp-2013-Event-Based Time Label Propagation for Automatic Dating of News Articles

20 0.14433159 160 emnlp-2013-Relational Inference for Wikification


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(3, 0.021), (8, 0.335), (9, 0.022), (18, 0.035), (22, 0.095), (30, 0.054), (38, 0.016), (50, 0.012), (51, 0.155), (66, 0.018), (71, 0.038), (75, 0.042), (77, 0.016), (90, 0.013), (96, 0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.78028607 90 emnlp-2013-Generating Coherent Event Schemas at Scale

Author: Niranjan Balasubramanian ; Stephen Soderland ; Mausam ; Oren Etzioni

Abstract: Chambers and Jurafsky (2009) demonstrated that event schemas can be automatically induced from text corpora. However, our analysis of their schemas identifies several weaknesses, e.g., some schemas lack a common topic and distinct roles are incorrectly mixed into a single actor. This is due in part to their pair-wise representation, which treats subject-verb independently from verb-object. This often leads to subject-verb-object triples that are not meaningful in the real world. We present a novel approach to inducing open-domain event schemas that overcomes these limitations. Our approach uses co-occurrence statistics of semantically typed relational triples, which we call Rel-grams (relational n-grams). In a human evaluation, our schemas outperform Chambers’s schemas by wide margins on several evaluation criteria. Both Rel-grams and event schemas are freely available to the research community.

2 0.72228283 137 emnlp-2013-Multi-Relational Latent Semantic Analysis

Author: Kai-Wei Chang ; Wen-tau Yih ; Christopher Meek

Abstract: We present Multi-Relational Latent Semantic Analysis (MRLSA), which generalizes Latent Semantic Analysis (LSA). MRLSA provides an elegant approach to combining multiple relations between words by constructing a 3-way tensor. Similar to LSA, a low-rank approximation of the tensor is derived using a tensor decomposition. Each word in the vocabulary is thus represented by a vector in the latent semantic space, and each relation is captured by a latent square matrix. The degree to which two words have a specific relation can then be measured through simple linear algebraic operations. We demonstrate that by integrating multiple relations from both homogeneous and heterogeneous information sources, MRLSA achieves state-of-the-art performance on existing benchmark datasets for two relations, antonymy and is-a.
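The abstract's core idea, a word-by-word-by-relation tensor smoothed by a low-rank approximation, can be illustrated with a toy sketch. A per-slice truncated SVD stands in here for the paper's actual tensor decomposition, and the tensor values are random placeholders, not real data:

```python
import numpy as np

# Toy word-by-word-by-relation tensor: T[i, j, r] = strength of relation r
# between words i and j (values are hypothetical).
rng = np.random.default_rng(0)
T = rng.random((6, 6, 2))  # 6 words, 2 relations (e.g. synonymy, antonymy)

def lowrank_slice(M, k):
    """Rank-k approximation of one relation slice via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Smooth each relation slice; the degree to which words i and j stand in
# relation r is then read off the low-rank tensor.
T_hat = np.stack([lowrank_slice(T[:, :, r], k=2)
                  for r in range(T.shape[2])], axis=2)
degree = T_hat[0, 3, 1]  # relation 1 between word 0 and word 3
```

The low-rank step is what lets the model infer a relation between word pairs never observed together, analogous to how LSA smooths the term-document matrix.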

3 0.61974615 15 emnlp-2013-A Systematic Exploration of Diversity in Machine Translation

Author: Kevin Gimpel ; Dhruv Batra ; Chris Dyer ; Gregory Shakhnarovich

Abstract: This paper addresses the problem of producing a diverse set of plausible translations. We present a simple procedure that can be used with any statistical machine translation (MT) system. We explore three ways of using diverse translations: (1) system combination, (2) discriminative reranking with rich features, and (3) a novel post-editing scenario in which multiple translations are presented to users. We find that diversity can improve performance on these tasks, especially for sentences that are difficult for MT.

4 0.54266632 64 emnlp-2013-Discriminative Improvements to Distributional Sentence Similarity

Author: Yangfeng Ji ; Jacob Eisenstein

Abstract: Matrix and tensor factorization have been applied to a number of semantic relatedness tasks, including paraphrase identification. The key idea is that similarity in the latent space implies semantic relatedness. We describe three ways in which labeled data can improve the accuracy of these approaches on paraphrase classification. First, we design a new discriminative term-weighting metric called TF-KLD, which outperforms TF-IDF. Next, we show that using the latent representation from matrix factorization as features in a classification algorithm substantially improves accuracy. Finally, we combine latent features with fine-grained n-gram overlap features, yielding performance that is 3% more accurate than the prior state-of-the-art.
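The TF-KLD metric named in the abstract weights each term by how differently it is distributed across paraphrase versus non-paraphrase pairs. A minimal sketch of that idea, treating each term's occurrence as a Bernoulli variable; the helper names and counts are hypothetical:

```python
import math

def kld(p, q, eps=1e-9):
    """KL divergence between two Bernoulli distributions with
    parameters p and q, clipped away from 0 and 1 for stability."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def tf_kld_weight(pos_with, pos_total, neg_with, neg_total):
    """Weight a term by how differently it behaves in paraphrase (pos)
    vs non-paraphrase (neg) sentence pairs (counts are hypothetical)."""
    return kld(pos_with / pos_total, neg_with / neg_total)

# A term that appears far more often in paraphrase pairs gets high weight.
w = tf_kld_weight(pos_with=30, pos_total=100, neg_with=5, neg_total=100)
```

A term that is equally common in both classes contributes nothing, which is the discriminative improvement over TF-IDF the abstract claims.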

5 0.51114768 77 emnlp-2013-Exploiting Domain Knowledge in Aspect Extraction

Author: Zhiyuan Chen ; Arjun Mukherjee ; Bing Liu ; Meichun Hsu ; Malu Castellanos ; Riddhiman Ghosh

Abstract: Aspect extraction is one of the key tasks in sentiment analysis. In recent years, statistical models have been used for the task. However, such models without any domain knowledge often produce aspects that are not interpretable in applications. To tackle this issue, some knowledge-based topic models have been proposed, which allow the user to input prior domain knowledge to generate coherent aspects. However, existing knowledge-based topic models have several major shortcomings, e.g., little work has been done to incorporate the cannot-link type of knowledge or to automatically adjust the number of topics based on domain knowledge. This paper proposes a more advanced topic model, called MC-LDA (LDA with m-set and c-set), to address these problems, which is based on an Extended generalized Pólya urn (E-GPU) model (also proposed in this paper). Experiments on real-life product reviews from a variety of domains show that MC-LDA outperforms the existing state-of-the-art models markedly.

6 0.50268966 179 emnlp-2013-Summarizing Complex Events: a Cross-Modal Solution of Storylines Extraction and Reconstruction

7 0.49956882 48 emnlp-2013-Collective Personal Profile Summarization with Social Networks

8 0.49480796 152 emnlp-2013-Predicting the Presence of Discourse Connectives

9 0.4944258 51 emnlp-2013-Connecting Language and Knowledge Bases with Embedding Models for Relation Extraction

10 0.49264008 154 emnlp-2013-Prior Disambiguation of Word Tensors for Constructing Sentence Vectors

11 0.48964858 21 emnlp-2013-An Empirical Study Of Semi-Supervised Chinese Word Segmentation Using Co-Training

12 0.48784494 82 emnlp-2013-Exploring Representations from Unlabeled Data with Co-training for Chinese Word Segmentation

13 0.48760661 114 emnlp-2013-Joint Learning and Inference for Grammatical Error Correction

14 0.48701668 47 emnlp-2013-Collective Opinion Target Extraction in Chinese Microblogs

15 0.48685077 75 emnlp-2013-Event Schema Induction with a Probabilistic Entity-Driven Model

16 0.48675531 79 emnlp-2013-Exploiting Multiple Sources for Open-Domain Hypernym Discovery

17 0.48574921 168 emnlp-2013-Semi-Supervised Feature Transformation for Dependency Parsing

18 0.48545915 56 emnlp-2013-Deep Learning for Chinese Word Segmentation and POS Tagging

19 0.48536122 80 emnlp-2013-Exploiting Zero Pronouns to Improve Chinese Coreference Resolution

20 0.48444191 41 emnlp-2013-Building Event Threads out of Multiple News Articles