acl acl2012 acl2012-126 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Nathanael Chambers
Abstract: Temporal reasoners for document understanding typically assume that a document’s creation date is known. Algorithms to ground relative time expressions and order events often rely on this timestamp to assist the learner. Unfortunately, the timestamp is not always known, particularly on the Web. This paper addresses the task of automatic document timestamping, presenting two new models that incorporate rich linguistic features about time. The first is a discriminative classifier with new features extracted from the text’s time expressions (e.g., ‘since 1999’). This model alone improves on previous generative models by 77%. The second model learns probabilistic constraints between time expressions and the unknown document time. Imposing these learned constraints on the discriminative model further improves its accuracy. Finally, we present a new experiment design that facilitates easier comparison by future work.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract Temporal reasoners for document understanding typically assume that a document’s creation date is known. [sent-2, score-0.335]
2 Algorithms to ground relative time expressions and order events often rely on this timestamp to assist the learner. [sent-3, score-0.701]
3 Unfortunately, the timestamp is not always known, particularly on the Web. [sent-4, score-0.42]
4 This paper addresses the task of automatic document timestamping, presenting two new models that incorporate rich linguistic features about time. [sent-5, score-0.28]
5 The first is a discriminative classifier with new features extracted from the text’s time expressions (e.g., ‘since 1999’). [sent-6, score-0.363]
6 The second model learns probabilistic constraints between time expressions and the unknown document time. [sent-10, score-0.546]
7 1 Introduction This paper addresses a relatively new task in the NLP community: automatic document dating. [sent-13, score-0.249]
8 Given a document with unknown origins, what characteristics of its text indicate the year in which the document was written? [sent-14, score-1.016]
9 This paper proposes a learning approach that builds constraints from a document’s use of time expressions, and combines them with a new discriminative classifier that greatly improves previous work. [sent-15, score-0.318]
10 The temporal reasoning community has long depended on document timestamps to ground relative time expressions and events (Mani and Wilson, 2000; Llidó et al. [sent-16, score-0.905]
11 , 2003): And while there was no profit this year from discontinued operations, last year they contributed 34 million, before tax. [sent-19, score-0.984]
12 Reconstructing the timeline of events from this document requires extensive temporal knowledge, most notably, the document’s creation date to ground its relative expressions (e.g., …). [sent-20, score-0.429]
13 The TempEval competitions (Verhagen et al., 2009) include tasks to link events to the (known) document creation time, but state-of-the-art event-event ordering algorithms also rely on these timestamps (Chambers and Jurafsky, 2008; Yoshikawa et al., 2009). [sent-25, score-0.445]
14 Several IR applications depend on knowledge of when documents were posted, such as computing document relevance (Li and Croft, 2003; Dakka et al. [sent-29, score-0.313]
15 , 2008) and labeling search queries with temporal profiles (Diaz and Jones, 2004; Zhang et al.). [sent-30, score-0.206]
16 The first part of this paper describes a novel learning approach to document dating, presenting a discriminative model and rich linguistic features that have not previously been applied to the task. [sent-39, score-0.581]
17 The second half of this paper describes a novel learning algorithm that orders time expressions against the unknown timestamp. [sent-42, score-0.235]
18 These labels impose constraints on the possible timestamp and narrow down its range of valid dates. [sent-44, score-0.482]
19 We combine these constraints with our discriminative learner and see another relative improvement in accuracy by 9%. [sent-45, score-0.178]
20 2 Previous Work Most work on document dating has come from the IR and knowledge management communities, motivated by documents of unknown origin. [sent-46, score-0.686]
21 They learned unigram language models (LMs) for specific time periods and scored articles with log-likelihood ratio scores. [sent-49, score-0.225]
22 As above, they learned unigram LMs, but instead measured the KL-divergence between a document and a time period’s LM. [sent-55, score-0.395]
23 Our proposed models differ from this work by applying rich linguistic features, discriminative models, and by focusing on how time expressions improve accuracy. [sent-56, score-0.261]
24 They computed probability distributions over different time periods (e.g., …). [sent-59, score-0.151]
25 They focused on finding words that show periodic spikes (defined by the word’s standard deviation in its distribution over time), weighted with inverse document frequency scores. [sent-63, score-0.249]
26 who focus on fiction) all train on news articles from a particular time period, and test on articles in the same time period. [sent-66, score-0.238]
27 In fact, one of the systems in Kanhabua and Norvag (2008) simply searches for one training document that best matches a test document, and assigns its timestamp. [sent-68, score-0.249]
28 The majority of articles in our dataset contain time expressions (e.g., [sent-71, score-0.209]
29 the year 1998), yet these have not been incorporated into the models despite their obvious connection to the article’s timestamp. [sent-73, score-0.492]
30 This paper first describes how to include time expressions as traditional features, and then describes a more sophisticated temporal reasoning component that naturally fits into our classifier. [sent-74, score-0.427]
31 3 Timestamp Classifiers Labeling documents with timestamps is similar to topic classification, but instead of choosing from topics, we choose the most likely year (or other granularity) in which the document was written. [sent-75, score-0.638]
32 The subsequent sections then introduce our novel classifiers and temporal reasoners to compare against this model. [sent-77, score-0.273]
33 It weights tokens by the ratio of their probability in a specific year to their probability over the entire corpus. [sent-81, score-0.522]
34 The model thus requires an LM for each year and an LM for the entire corpus: NLLR(D, Y) = Σ_{w∈D} P(w|D) * log(P(w|Y) / P(w|C)) (1) where D is the target document, Y is the time span (e.g., a year), and C is the entire corpus. [sent-82, score-0.594]
35 A document is labeled with the year that satisfies argmax_Y NLLR(D, Y). [sent-85, score-0.772]
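A minimal sketch of this NLLR baseline, assuming per-year and whole-corpus unigram counts have already been collected (the names `year_counts` and `corpus_counts` and the add-one smoothing are illustrative choices, not the paper's exact setup):

```python
import math
from collections import Counter

def nllr(doc_tokens, year_counts, corpus_counts):
    """Score a document D against one year's unigram LM (Equation 1).

    doc_tokens:    list of tokens in the target document D
    year_counts:   Counter of token frequencies for the candidate year Y
    corpus_counts: Counter of token frequencies for the whole corpus C
    """
    doc_counts = Counter(doc_tokens)
    n_doc = len(doc_tokens)
    n_year = sum(year_counts.values())
    n_corpus = sum(corpus_counts.values())
    vocab = len(corpus_counts)
    score = 0.0
    for w, c in doc_counts.items():
        p_doc = c / n_doc
        # Add-one smoothing keeps unseen words from zeroing out the ratio.
        p_year = (year_counts[w] + 1) / (n_year + vocab)
        p_corpus = (corpus_counts[w] + 1) / (n_corpus + vocab)
        score += p_doc * math.log(p_year / p_corpus)
    return score

def label_document(doc_tokens, lms_by_year, corpus_counts):
    # argmax_Y NLLR(D, Y): pick the year whose LM best explains the document.
    return max(lms_by_year,
               key=lambda y: nllr(doc_tokens, lms_by_year[y], corpus_counts))
```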
36 Named entities are important to document dating because people and places come in and out of the news at precise moments in time. [sent-100, score-0.549]
37 However, document dating is not just a simple topic classification application, but rather relates to temporal phenomena that are often explicitly described in the text itself. [sent-111, score-0.691]
38 Language contains words and phrases that discuss the very time periods we aim to recover. [sent-112, score-0.151]
39 However, time expressions are sometimes included, and the last sentence in the original text contains a helpful relative clause: Their tickets will entitle them to a preview of. [sent-122, score-0.437]
40 This one clause is more valuable than the rest of the document, allowing us to infer that the document’s timestamp is before February 2000. [sent-126, score-0.425]
41 An educated guess might surmise that the article appeared in the year prior, 1999, which is the correct year. [sent-127, score-0.528]
42 Previous work on document dating does not integrate this information except to include the unigram ‘2000’ in the model. [sent-129, score-0.559]
43 3.2 Time Features To our knowledge, the following time features have not been used in a document dating setting. [sent-136, score-0.648]
44 Typed Dependency: The most basic time feature includes the governors of year mentions and the relation between them. [sent-139, score-0.804]
45 For example, consider the following context for the mention 1997: Torre, who watched the Kansas City Royals beat the Yankees, 13-6, on Friday for the first time since 1997. [sent-141, score-0.173]
46 This generalizes the features to capture time expressions with prepositions, as noun modifiers, or other constructs. [sent-145, score-0.24]
47 Verb Tense: An important syntactic feature for temporal positioning is the tense of the verb that dominates the time expression. [sent-146, score-0.388]
48 Verb Path: The verb path feature is the dependency path from the nearest verb to the year expression. [sent-150, score-0.638]
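These three features can be sketched against a generic dependency parse. The token representation below (dicts with 'word', 'pos', 'head', 'deprel') is a hypothetical stand-in for whatever parser is used, and the feature templates are illustrative rather than the paper's exact ones:

```python
def time_features(tokens, year_idx):
    """Extract typed-dependency, verb-tense, and verb-path features for
    one year mention. `tokens` is a hypothetical parse: a list of dicts
    with 'word', 'pos', 'head' (governor index, -1 for root), 'deprel'.
    """
    feats = []
    tok = tokens[year_idx]
    if tok['head'] >= 0:
        gov = tokens[tok['head']]
        # Typed dependency: governor word plus relation, e.g. 'dep=since_pobj'.
        feats.append('dep=%s_%s' % (gov['word'].lower(), tok['deprel']))
    # Walk up the tree to the nearest verb, recording the relations crossed.
    cur, rels = tok, []
    while cur['head'] >= 0:
        rels.append(cur['deprel'])
        cur = tokens[cur['head']]
        if cur['pos'].startswith('VB'):
            feats.append('tense=%s' % cur['pos'])  # tense read off the POS tag
            # Verb path: verb plus the dependency relations down to the year.
            feats.append('vpath=%s_%s' % (cur['word'].lower(),
                                          '_'.join(reversed(rels))))
            break
    return feats

# For 'since 1997' governed by the verb 'watched' this yields roughly:
# ['dep=since_pobj', 'tense=VBD', 'vpath=watched_prep_pobj']
```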
49 Figure 1: Three year mentions and their relation to the document creation year. [sent-158, score-0.993]
50 Relations can be correctly identified for training using known document timestamps. [sent-159, score-0.249]
51 People and places are often discussed during specific time periods, particularly in the news genre. [sent-162, score-0.168]
52 Collecting named entity mentions will differentiate between an article discussing a bill and one discussing the US President, Bill Clinton. [sent-163, score-0.212]
53 We extract NER features as sequences of uninterrupted tokens labeled with the same NER tag, ignoring unigrams (since unigrams are already included in the base model). [sent-164, score-0.198]
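A small sketch of that NER feature extraction, assuming parallel token and tag lists from any NER tagger:

```python
from itertools import groupby

def ner_features(tokens, tags):
    """Group consecutive tokens sharing an NER tag into one feature,
    skipping 'O' spans and unigram entities (unigrams are already in
    the base model). `tokens` and `tags` are parallel lists.
    """
    feats = []
    for tag, group in groupby(zip(tokens, tags), key=lambda pair: pair[1]):
        span = [word for word, _ in group]
        if tag != 'O' and len(span) > 1:
            feats.append('ner=%s_%s' % (tag, '_'.join(w.lower() for w in span)))
    return feats

# ner_features(['Bill', 'Clinton', 'signed'], ['PERSON', 'PERSON', 'O'])
# -> ['ner=PERSON_bill_clinton']
```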
54 4 Learning Time Constraints This section departs from the above document classifiers and instead classifies individual year mentions. [sent-166, score-0.302]
55 The goal is to automatically learn temporal constraints on the document’s timestamp. [sent-167, score-0.238]
56 Instead of predicting a single year for a document, a temporal constraint predicts a range of years. [sent-168, score-0.738]
57 Each time mention, such as ‘not since 2009’, is a constraint representing its relation to the document’s timestamp. [sent-169, score-0.206]
58 For example, the mentioned year ‘2009’ must occur before the year of document creation. [sent-170, score-1.233]
59 This section builds a classifier to label time mentions with their relations (e.g., [sent-171, score-0.431]
60 before, after, or simultaneous with the document’s timestamp), enabling these mentions to constrain the document classifiers described above. [sent-173, score-0.478]
61 Figure 1 gives an example of time mentions and the desired labels we wish to learn. [sent-174, score-0.278]
62 Figure 2: Distribution over years for a single document as output by a MaxEnt classifier. [sent-178, score-0.337]
63 Figure 2 illustrates a typical distribution output by a document classifier for a training document. [sent-179, score-0.32]
64 Two of the years appear likely (1999 and 2001); however, the document contains a time expression that seems to impose a strict constraint that should eliminate 2001 from consideration: Their tickets will entitle them to a preview of. [sent-180, score-0.742]
65 The clause until February 2000 in a present tense context may not definitively identify the document’s timestamp (1999 is a good guess), but as discussed earlier, it should remove all future years beyond 2000 from consideration. [sent-184, score-0.578]
66 We thus want to impose a constraint based on this phrase that says, loosely, ‘this document was likely written before 2000’. [sent-185, score-0.351]
67 The document classifiers described in previous sections cannot capture such ordering information. [sent-186, score-0.329]
68 The time features of Section 3.2 add richer time information (such as until pobj 2000 and open prep until pobj 2000), but they compete with many other features that can mislead the final classification. [sent-189, score-0.365]
69 An independent constraint learner may push the document classifier in the right direction. [sent-190, score-0.427]
70 4.1 Constraint Types We learn several types of constraints between each year mention and the document’s timestamp. [sent-192, score-0.625]
71 Year mentions are defined as tokens with exactly four digits, numerically between 1900 and 2100. [sent-193, score-0.176]
72 Let T be the document timestamp’s year, and M the year mention. [sent-194, score-0.741]
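The relation between M and T can be computed directly, as in the sketch below. The coarse label set with a 'three+' bucket mirrors the 'three+ years after' example later in the text; the exact set of constraint types is an assumption:

```python
import re

# Four-digit tokens numerically between 1900 and 2100.
YEAR_RE = re.compile(r'\b(19\d\d|20\d\d|2100)\b')

def relation_label(T, M):
    """Relation of a year mention M to the timestamp year T."""
    if M == T:
        return 'same'
    if M < T:
        return 'before' if T - M < 3 else 'three+ before'
    return 'after' if M - T < 3 else 'three+ after'
```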
73 Learning follows a standard supervised setup in which year mentions are treated as labeled training examples. [sent-207, score-0.699]
74 Labels for year mentions are automatically computed by comparing the actual timestamp of the training document (all documents in Gigaword have dates) with the integer value of the year token. [sent-208, score-1.861]
75 For example, a document written in 1997 might contain the phrase, “in the year 2000”. [sent-209, score-0.741]
76 The year token (2000) is thus three+ years after the timestamp (1997). [sent-210, score-0.968]
77 We use this relation as the label for the year mention’s training example. [sent-211, score-0.628]
78 Ultimately, we want to use similar syntactic constructs in training so that “in the year 2000” and “in the year 2003” mutually inform each other. [sent-212, score-1.031]
79 We thus compute the label for each time expression, and replace the integer year with the generic YEAR token to generalize mentions. [sent-213, score-0.594]
80 The text for this example becomes “in the year YEAR” (labeled as three+ years after). [sent-214, score-0.58]
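Putting the last two steps together, one way to generate these generalized training examples (reusing the hypothetical `YEAR_RE` and `relation_label` from the sketch above):

```python
def training_examples(text, timestamp_year):
    """Turn one dated training document into (context, label) pairs:
    each four-digit year mention is labeled against the known timestamp
    and rewritten as the generic YEAR token, so that 'in the year 2000'
    and 'in the year 2003' share the same features.
    """
    examples = []
    for m in YEAR_RE.finditer(text):
        label = relation_label(timestamp_year, int(m.group()))
        context = text[:m.start()] + 'YEAR' + text[m.end():]
        examples.append((context, label))
    return examples

# training_examples('in the year 2000', 1997)
# -> [('in the year YEAR', 'three+ after')]
```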
81 We train a MaxEnt model on each year mention, to be described next. [sent-215, score-0.492]
82 The vast majority of year mentions are references to the future (e.g., …). [sent-217, score-0.668]
83 4.2 Constraint Learner The features we use to classify year mentions are given in Table 1. [sent-221, score-0.699]
84 The same time features used in the document classifier of Section 3.2 are included here. [sent-222, score-0.453]
85 We use a MaxEnt classifier trained on the individual year mentions. [sent-225, score-0.563]
86 Documents often contain multiple (and different) year mentions; all are included in training and testing. [sent-226, score-0.518]
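Training the constraint learner then reduces to fitting a multiclass logistic regression (a standard MaxEnt implementation) over the mention features; a sketch using scikit-learn, with hypothetical feature dicts:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def train_constraint_learner(mention_feature_dicts, labels):
    """Fit the per-mention relation classifier.

    mention_feature_dicts: one {feature_name: 1} dict per year mention
    labels:                the automatically derived relation labels
    """
    vec = DictVectorizer()
    X = vec.fit_transform(mention_feature_dicts)
    clf = LogisticRegression(max_iter=1000)  # multinomial MaxEnt
    clf.fit(X, labels)
    return vec, clf
```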
87 This classifier labels mentions with relations, but in order to influence the document classifier, we need to map the relations to individual years. Table 1: Time constraint features (typed dependency, verb tense, verb path, and the related context features of Section 3.2). [sent-227, score-0.547]
88 Table 2: Training size of year mentions (and their relation to the document timestamp) in Gigaword’s NYT section, with counts for the before-timestamp, after-timestamp, and same-as-timestamp (141,201) relations. [sent-231, score-0.951]
89 We represent a MaxEnt classifier by P_Y(R|t) for a time mention t ∈ T_d and possible relations R. [sent-234, score-0.244]
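One plausible way to combine P_Y(R|t) with the document classifier's year distribution (an assumption about the combination, not necessarily the paper's exact formula) is to reweight each candidate year by the probability each mention assigns to the relation that year would induce:

```python
def constrained_year(doc_year_probs, mentions):
    """Reweight the document classifier's year distribution with the
    constraint learner's per-mention relation distributions.

    doc_year_probs: {year: P(year | document)} from the document model
    mentions:       list of (M, rel_probs) pairs, where M is a mention's
                    integer year and rel_probs is {relation: P_Y(R | t)}
    """
    scores = {}
    for y, p in doc_year_probs.items():
        s = p
        for M, rel_probs in mentions:
            # Relation that would hold between mention M and timestamp y.
            rel = relation_label(y, M)
            s *= rel_probs.get(rel, 1e-6)  # small floor for unseen relations
        scores[y] = s
    return max(scores, key=scores.get)
```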
90 Table 6 shows that performance increased most on the documents that contain at least one year mention (60% of the corpus). [sent-237, score-0.627]
91 Finally, Table 5 shows the results of the temporal constraint classifiers on year mentions. [sent-238, score-0.791]
92 Table 6: Accuracy on all documents and documents with at least one year mention (about 60% of the corpus). [sent-252, score-0.32]
93 Our richer syntax-based features apply only to year mentions, yet this small textual phenomenon leads to a surprising 13% relative improvement in accuracy. [sent-256, score-0.576]
94 Table 6 shows that a significant chunk of this improvement comes from documents containing year mentions, as expected. [sent-257, score-0.492]
95 Although most of its features are in the document classifier, by learning constraints it captures a different picture of time that a traditional document classifier does not address. [sent-259, score-0.764]
96 Combining this picture with the document classifier leads to a further improvement in accuracy. [sent-260, score-0.32]
97 Although we focused on year mentions here, there are several avenues for future study, including explorations of how other types of time expressions might inform the task. [sent-262, score-0.924]
98 We hope our explicit train/test environment encourages future comparison and progress on document dating. [sent-269, score-0.249]
99 Diaz and Jones, 2004. Using temporal profiles of queries for precision prediction. [sent-293, score-0.206]
100 Kanhabua and Norvag, 2008. Improving temporal language models for determining time of non-timestamped documents. [sent-301, score-0.278]
wordName wordTfidf (topN-words)
[('year', 0.492), ('timestamp', 0.388), ('dating', 0.266), ('document', 0.249), ('kanhabua', 0.193), ('norvag', 0.177), ('temporal', 0.176), ('mentions', 0.176), ('jong', 0.155), ('expressions', 0.107), ('maxent', 0.103), ('time', 0.102), ('nllr', 0.089), ('years', 0.088), ('pobj', 0.088), ('timestamps', 0.082), ('mention', 0.071), ('classifier', 0.071), ('constraint', 0.07), ('entitle', 0.067), ('hayden', 0.067), ('preview', 0.067), ('tickets', 0.067), ('tense', 0.065), ('documents', 0.064), ('constraints', 0.062), ('february', 0.062), ('tempeval', 0.059), ('ner', 0.057), ('verhagen', 0.057), ('unigrams', 0.055), ('classifiers', 0.053), ('discriminative', 0.052), ('relations', 0.051), ('kumar', 0.051), ('ir', 0.049), ('periods', 0.049), ('core', 0.047), ('inform', 0.047), ('events', 0.045), ('verb', 0.045), ('dakka', 0.044), ('dalli', 0.044), ('jintao', 0.044), ('kjetil', 0.044), ('llid', 0.044), ('planetarium', 0.044), ('pyear', 0.044), ('reasoners', 0.044), ('unigram', 0.044), ('creation', 0.042), ('reasoning', 0.042), ('collocations', 0.041), ('gaizauskas', 0.039), ('historical', 0.039), ('nattiya', 0.039), ('depended', 0.039), ('diaz', 0.039), ('officially', 0.039), ('reproducing', 0.039), ('chambers', 0.038), ('learner', 0.037), ('clause', 0.037), ('article', 0.036), ('heritage', 0.035), ('rel', 0.035), ('news', 0.034), ('relation', 0.034), ('museum', 0.033), ('timebank', 0.033), ('impose', 0.032), ('ground', 0.032), ('particularly', 0.032), ('gigaword', 0.032), ('features', 0.031), ('labeled', 0.031), ('lowercased', 0.031), ('hepple', 0.031), ('mani', 0.031), ('yoshikawa', 0.031), ('history', 0.031), ('builds', 0.031), ('community', 0.031), ('de', 0.03), ('ratio', 0.03), ('prep', 0.03), ('nathanael', 0.03), ('schilder', 0.03), ('profiles', 0.03), ('typed', 0.029), ('path', 0.028), ('lms', 0.027), ('td', 0.027), ('relative', 0.027), ('ordering', 0.027), ('unknown', 0.026), ('richer', 0.026), ('dates', 0.026), ('pustejovsky', 0.026), ('included', 0.026)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000014 126 acl-2012-Labeling Documents with Timestamps: Learning from their Time Expressions
Author: Nathanael Chambers
Abstract: Temporal reasoners for document understanding typically assume that a document’s creation date is known. Algorithms to ground relative time expressions and order events often rely on this timestamp to assist the learner. Unfortunately, the timestamp is not always known, particularly on the Web. This paper addresses the task of automatic document timestamping, presenting two new models that incorporate rich linguistic features about time. The first is a discriminative classifier with new features extracted from the text’s time expressions (e.g., ‘since 1999’). This model alone improves on previous generative models by 77%. The second model learns probabilistic constraints between time expressions and the unknown document time. Imposing these learned constraints on the discriminative model further improves its accuracy. Finally, we present a new experiment design that facilitates easier comparison by future work.
2 0.22817378 191 acl-2012-Temporally Anchored Relation Extraction
Author: Guillermo Garrido ; Anselmo Penas ; Bernardo Cabaleiro ; Alvaro Rodrigo
Abstract: Although much work on relation extraction has aimed at obtaining static facts, many of the target relations are actually fluents, as their validity is naturally anchored to a certain time period. This paper proposes a methodological approach to temporally anchored relation extraction. Our proposal performs distant supervised learning to extract a set of relations from a natural language corpus, and anchors each of them to an interval of temporal validity, aggregating evidence from documents supporting the relation. We use a rich graph-based document-level representation to generate novel features for this task. Results show that our implementation for temporal anchoring is able to achieve 69% of the upper bound performance imposed by the relation extraction step. Compared to the state of the art, the overall system achieves the highest precision reported.
3 0.16422851 90 acl-2012-Extracting Narrative Timelines as Temporal Dependency Structures
Author: Oleksandr Kolomiyets ; Steven Bethard ; Marie-Francine Moens
Abstract: We propose a new approach to characterizing the timeline of a text: temporal dependency structures, where all the events of a narrative are linked via partial ordering relations like BEFORE, AFTER, OVERLAP and IDENTITY. We annotate a corpus of children’s stories with temporal dependency trees, achieving agreement (Krippendorff’s Alpha) of 0.856 on the event words, 0.822 on the links between events, and of 0.700 on the ordering relation labels. We compare two parsing models for temporal dependency structures, and show that a deterministic non-projective dependency parser outperforms a graph-based maximum spanning tree parser, achieving labeled attachment accuracy of 0.647 and labeled tree edit distance of 0.596. Our analysis of the dependency parser errors gives some insights into future research directions.
4 0.14386429 99 acl-2012-Finding Salient Dates for Building Thematic Timelines
Author: Remy Kessler ; Xavier Tannier ; Caroline Hagege ; Veronique Moriceau ; Andre Bittar
Abstract: We present an approach for detecting salient (important) dates in texts in order to automatically build event timelines from a search query (e.g. the name of an event or person, etc.). This work was carried out on a corpus of newswire texts in English provided by the Agence France Presse (AFP). In order to extract salient dates that warrant inclusion in an event timeline, we first recognize and normalize temporal expressions in texts and then use a machine-learning approach to extract salient dates that relate to a particular topic. We focused only on extracting the dates and not the events to which they are related.
5 0.12646142 60 acl-2012-Coupling Label Propagation and Constraints for Temporal Fact Extraction
Author: Yafang Wang ; Maximilian Dylla ; Marc Spaniol ; Gerhard Weikum
Abstract: The Web and digitized text sources contain a wealth of information about named entities such as politicians, actors, companies, or cultural landmarks. Extracting this information has enabled the automated construction of large knowledge bases, containing hundreds of millions of binary relationships or attribute values about these named entities. However, in reality most knowledge is transient, i.e. changes over time, requiring a temporal dimension in fact extraction. In this paper we develop a methodology that combines label propagation with constraint reasoning for temporal fact extraction. Label propagation aggressively gathers fact candidates, and an Integer Linear Program is used to clean out false hypotheses that violate temporal constraints. Our method is able to improve on recall while keeping up with precision, which we demonstrate by experiments with biography-style Wikipedia pages and a large corpus of news articles.
6 0.11460845 135 acl-2012-Learning to Temporally Order Medical Events in Clinical Text
7 0.10177377 159 acl-2012-Pattern Learning for Relation Extraction with a Hierarchical Topic Model
8 0.098156318 192 acl-2012-Tense and Aspect Error Correction for ESL Learners Using Global Context
9 0.09804225 18 acl-2012-A Probabilistic Model for Canonicalizing Named Entity Mentions
10 0.095718391 73 acl-2012-Discriminative Learning for Joint Template Filling
11 0.092876822 10 acl-2012-A Discriminative Hierarchical Model for Fast Coreference at Large Scale
12 0.085150845 85 acl-2012-Event Linking: Grounding Event Reference in a News Archive
13 0.083314329 17 acl-2012-A Novel Burst-based Text Representation Model for Scalable Event Detection
14 0.081490584 33 acl-2012-Automatic Event Extraction with Structured Preference Modeling
15 0.079950005 91 acl-2012-Extracting and modeling durations for habits and events from Twitter
16 0.079779796 50 acl-2012-Collective Classification for Fine-grained Information Status
17 0.06837707 208 acl-2012-Unsupervised Relation Discovery with Sense Disambiguation
18 0.058457132 58 acl-2012-Coreference Semantics from Web Features
19 0.057820372 40 acl-2012-Big Data versus the Crowd: Looking for Relationships in All the Right Places
20 0.05699243 199 acl-2012-Topic Models for Dynamic Translation Model Adaptation
topicId topicWeight
[(0, -0.188), (1, 0.157), (2, -0.074), (3, 0.184), (4, 0.002), (5, -0.1), (6, -0.011), (7, -0.06), (8, 0.0), (9, -0.184), (10, -0.072), (11, -0.033), (12, -0.027), (13, 0.007), (14, 0.006), (15, -0.001), (16, 0.028), (17, 0.007), (18, -0.038), (19, -0.068), (20, -0.003), (21, -0.0), (22, -0.039), (23, 0.015), (24, 0.095), (25, 0.024), (26, 0.055), (27, -0.043), (28, 0.017), (29, -0.06), (30, -0.1), (31, 0.092), (32, 0.027), (33, 0.026), (34, 0.111), (35, 0.028), (36, -0.031), (37, 0.058), (38, 0.061), (39, 0.063), (40, 0.062), (41, 0.008), (42, -0.118), (43, -0.019), (44, -0.091), (45, -0.085), (46, 0.087), (47, -0.051), (48, -0.035), (49, -0.055)]
simIndex simValue paperId paperTitle
same-paper 1 0.95306057 126 acl-2012-Labeling Documents with Timestamps: Learning from their Time Expressions
Author: Nathanael Chambers
Abstract: Temporal reasoners for document understanding typically assume that a document’s creation date is known. Algorithms to ground relative time expressions and order events often rely on this timestamp to assist the learner. Unfortunately, the timestamp is not always known, particularly on the Web. This paper addresses the task of automatic document timestamping, presenting two new models that incorporate rich linguistic features about time. The first is a discriminative classifier with new features extracted from the text’s time expressions (e.g., ‘since 1999’). This model alone improves on previous generative models by 77%. The second model learns probabilistic constraints between time expressions and the unknown document time. Imposing these learned constraints on the discriminative model further improves its accuracy. Finally, we present a new experiment design that facilitates easier comparison by future work.
2 0.7658456 135 acl-2012-Learning to Temporally Order Medical Events in Clinical Text
Author: Preethi Raghavan ; Albert Lai ; Eric Fosler-Lussier
Abstract: We investigate the problem of ordering medical events in unstructured clinical narratives by learning to rank them based on their time of occurrence. We represent each medical event as a time duration, with a corresponding start and stop, and learn to rank the starts/stops based on their proximity to the admission date. Such a representation allows us to learn all of Allen’s temporal relations between medical events. Interestingly, we observe that this methodology performs better than a classification-based approach for this domain, but worse on the relationships found in the Timebank corpus. This finding has important implications for styles of data representation and resources used for temporal relation learning: clinical narratives may have different language attributes corresponding to temporal ordering relative to Timebank, implying that the field may need to look at a wider range ofdomains to fully understand the nature of temporal ordering.
3 0.7482425 191 acl-2012-Temporally Anchored Relation Extraction
Author: Guillermo Garrido ; Anselmo Penas ; Bernardo Cabaleiro ; Alvaro Rodrigo
Abstract: Although much work on relation extraction has aimed at obtaining static facts, many of the target relations are actually fluents, as their validity is naturally anchored to a certain time period. This paper proposes a methodological approach to temporally anchored relation extraction. Our proposal performs distant supervised learning to extract a set of relations from a natural language corpus, and anchors each of them to an interval of temporal validity, aggregating evidence from documents supporting the relation. We use a rich graph-based document-level representation to generate novel features for this task. Results show that our implementation for temporal anchoring is able to achieve 69% of the upper bound performance imposed by the relation extraction step. Compared to the state of the art, the overall system achieves the highest precision reported.
4 0.71989197 60 acl-2012-Coupling Label Propagation and Constraints for Temporal Fact Extraction
Author: Yafang Wang ; Maximilian Dylla ; Marc Spaniol ; Gerhard Weikum
Abstract: The Web and digitized text sources contain a wealth of information about named entities such as politicians, actors, companies, or cultural landmarks. Extracting this information has enabled the automated construction of large knowledge bases, containing hundreds of millions of binary relationships or attribute values about these named entities. However, in reality most knowledge is transient, i.e. changes over time, requiring a temporal dimension in fact extraction. In this paper we develop a methodology that combines label propagation with constraint reasoning for temporal fact extraction. Label propagation aggressively gathers fact candidates, and an Integer Linear Program is used to clean out false hypotheses that violate temporal constraints. Our method is able to improve on recall while keeping up with precision, which we demonstrate by experiments with biography-style Wikipedia pages and a large corpus of news articles.
5 0.71331245 99 acl-2012-Finding Salient Dates for Building Thematic Timelines
Author: Remy Kessler ; Xavier Tannier ; Caroline Hagege ; Veronique Moriceau ; Andre Bittar
Abstract: We present an approach for detecting salient (important) dates in texts in order to automatically build event timelines from a search query (e.g. the name of an event or person, etc.). This work was carried out on a corpus of newswire texts in English provided by the Agence France Presse (AFP). In order to extract salient dates that warrant inclusion in an event timeline, we first recognize and normalize temporal expressions in texts and then use a machine-learning approach to extract salient dates that relate to a particular topic. We focused only on extracting the dates and not the events to which they are related.
6 0.5516665 90 acl-2012-Extracting Narrative Timelines as Temporal Dependency Structures
7 0.53478396 91 acl-2012-Extracting and modeling durations for habits and events from Twitter
8 0.48192465 50 acl-2012-Collective Classification for Fine-grained Information Status
9 0.46360067 73 acl-2012-Discriminative Learning for Joint Template Filling
10 0.42464343 192 acl-2012-Tense and Aspect Error Correction for ESL Learners Using Global Context
11 0.42200699 18 acl-2012-A Probabilistic Model for Canonicalizing Named Entity Mentions
12 0.42034298 189 acl-2012-Syntactic Annotations for the Google Books NGram Corpus
13 0.41968808 58 acl-2012-Coreference Semantics from Web Features
14 0.41265711 133 acl-2012-Learning to "Read Between the Lines" using Bayesian Logic Programs
15 0.41103122 110 acl-2012-Historical Analysis of Legal Opinions with a Sparse Mixed-Effects Latent Variable Model
16 0.40102735 200 acl-2012-Toward Automatically Assembling Hittite-Language Cuneiform Tablet Fragments into Larger Texts
17 0.38724595 10 acl-2012-A Discriminative Hierarchical Model for Fast Coreference at Large Scale
18 0.36305684 31 acl-2012-Authorship Attribution with Author-aware Topic Models
19 0.35599238 159 acl-2012-Pattern Learning for Relation Extraction with a Hierarchical Topic Model
20 0.33675551 216 acl-2012-Word Epoch Disambiguation: Finding How Words Change Over Time
topicId topicWeight
[(25, 0.015), (26, 0.062), (28, 0.033), (30, 0.029), (37, 0.031), (39, 0.055), (52, 0.271), (59, 0.012), (64, 0.011), (74, 0.039), (82, 0.058), (84, 0.03), (85, 0.027), (90, 0.138), (92, 0.06), (94, 0.016), (99, 0.049)]
simIndex simValue paperId paperTitle
same-paper 1 0.77464098 126 acl-2012-Labeling Documents with Timestamps: Learning from their Time Expressions
Author: Nathanael Chambers
Abstract: Temporal reasoners for document understanding typically assume that a document’s creation date is known. Algorithms to ground relative time expressions and order events often rely on this timestamp to assist the learner. Unfortunately, the timestamp is not always known, particularly on the Web. This paper addresses the task of automatic document timestamping, presenting two new models that incorporate rich linguistic features about time. The first is a discriminative classifier with new features extracted from the text’s time expressions (e.g., ‘since 1999’). This model alone improves on previous generative models by 77%. The second model learns probabilistic constraints between time expressions and the unknown document time. Imposing these learned constraints on the discriminative model further improves its accuracy. Finally, we present a new experiment design that facil- itates easier comparison by future work.
2 0.74933904 35 acl-2012-Automatically Mining Question Reformulation Patterns from Search Log Data
Author: Xiaobing Xue ; Yu Tao ; Daxin Jiang ; Hang Li
Abstract: Natural language questions have become popular in web search. However, various questions can be formulated to convey the same information need, which poses a great challenge to search systems. In this paper, we automatically mined 5w1h question reformulation patterns from large scale search log data. The question reformulations generated from these patterns are further incorporated into the retrieval model. Experiments show that using question reformulation patterns can significantly improve the search performance of natural language questions.
Author: Xu Sun ; Houfeng Wang ; Wenjie Li
Abstract: We present a joint model for Chinese word segmentation and new word detection. We present high dimensional new features, including word-based features and enriched edge (label-transition) features, for the joint modeling. As we know, training a word segmentation system on large-scale datasets is already costly. In our case, adding high dimensional new features will further slow down the training speed. To solve this problem, we propose a new training method, adaptive online gradient descent based on feature frequency information, for very fast online training of the parameters, even given large-scale datasets with high dimensional features. Compared with existing training methods, our training method is an order magnitude faster in terms of training time, and can achieve equal or even higher accuracies. The proposed fast training method is a general purpose optimization method, and it is not limited in the specific task discussed in this paper.
4 0.72055012 105 acl-2012-Head-Driven Hierarchical Phrase-based Translation
Author: Junhui Li ; Zhaopeng Tu ; Guodong Zhou ; Josef van Genabith
Abstract: This paper presents an extension of Chiang’s hierarchical phrase-based (HPB) model, called Head-Driven HPB (HD-HPB), which incorporates head information in translation rules to better capture syntax-driven information, as well as improved reordering between any two neighboring non-terminals at any stage of a derivation to explore a larger reordering search space. Experiments on Chinese-English translation on four NIST MT test sets show that the HD-HPB model significantly outperforms Chiang’s model with average gains of 1.91 points absolute in BLEU.
5 0.58500874 187 acl-2012-Subgroup Detection in Ideological Discussions
Author: Amjad Abu-Jbara ; Pradeep Dasigi ; Mona Diab ; Dragomir Radev
Abstract: The rapid and continuous growth of social networking sites has led to the emergence of many communities of communicating groups. Many of these groups discuss ideological and political topics. It is not uncommon that the participants in such discussions split into two or more subgroups. The members of each subgroup share the same opinion toward the discussion topic and are more likely to agree with members of the same subgroup and disagree with members from opposing subgroups. In this paper, we propose an unsupervised approach for automatically detecting discussant subgroups in online communities. We analyze the text exchanged between the participants of a discussion to identify the attitude they carry toward each other and towards the various aspects of the discussion topic. We use attitude predictions to construct an attitude vector for each discussant. We use clustering techniques to cluster these vectors and, hence, determine the subgroup membership of each participant. We compare our methods to text clustering and other baselines, and show that our method achieves promising results.
6 0.57151818 191 acl-2012-Temporally Anchored Relation Extraction
7 0.55971831 99 acl-2012-Finding Salient Dates for Building Thematic Timelines
9 0.55641639 156 acl-2012-Online Plagiarized Detection Through Exploiting Lexical, Syntax, and Semantic Information
10 0.55628616 167 acl-2012-QuickView: NLP-based Tweet Search
11 0.55489033 174 acl-2012-Semantic Parsing with Bayesian Tree Transducers
12 0.55442512 28 acl-2012-Aspect Extraction through Semi-Supervised Modeling
13 0.55396479 148 acl-2012-Modified Distortion Matrices for Phrase-Based Statistical Machine Translation
14 0.55384177 123 acl-2012-Joint Feature Selection in Distributed Stochastic Learning for Large-Scale Discriminative Training in SMT
15 0.55383676 45 acl-2012-Capturing Paradigmatic and Syntagmatic Lexical Relations: Towards Accurate Chinese Part-of-Speech Tagging
16 0.55237091 130 acl-2012-Learning Syntactic Verb Frames using Graphical Models
17 0.55230165 41 acl-2012-Bootstrapping a Unified Model of Lexical and Phonetic Acquisition
18 0.55208731 73 acl-2012-Discriminative Learning for Joint Template Filling
19 0.5509268 214 acl-2012-Verb Classification using Distributional Similarity in Syntactic and Semantic Structures
20 0.55083847 12 acl-2012-A Graph-based Cross-lingual Projection Approach for Weakly Supervised Relation Extraction