acl acl2013 acl2013-160 knowledge-graph by maker-knowledge-mining

160 acl-2013-Fine-grained Semantic Typing of Emerging Entities


Source: pdf

Author: Ndapandula Nakashole ; Tomasz Tylenda ; Gerhard Weikum

Abstract: Methods for information extraction (IE) and knowledge base (KB) construction have been intensively studied. However, a largely under-explored case is tapping into highly dynamic sources like news streams and social media, where new entities are continuously emerging. In this paper, we present a method for discovering and semantically typing newly emerging out-of-KB entities, thus improving the freshness and recall of ontology-based IE and improving the precision and semantic rigor of open IE. Our method is based on a probabilistic model that feeds weights into integer linear programs that leverage type signatures of relational phrases and type correlation or disjointness constraints. Our experimental evaluation, based on crowdsourced user studies, shows our method performing significantly better than prior work.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 However, a largely under-explored case is tapping into highly dynamic sources like news streams and social media, where new entities are continuously emerging. [sent-4, score-0.263]

2 In this paper, we present a method for discovering and semantically typing newly emerging out-of-KB entities, thus improving the freshness and recall of ontology-based IE and improving the precision and semantic rigor of open IE. [sent-5, score-0.251]

3 Our method is based on a probabilistic model that feeds weights into integer linear programs that leverage type signatures of relational phrases and type correlation or disjointness constraints. [sent-6, score-0.953]

4 Most KB projects focus on entities that appear in Wikipedia (or other reference collections such as IMDB), and very few have tried to gather entities “in the long tail” beyond prominent sources. [sent-12, score-0.45]

5 Virtually all projects miss out on newly emerging entities that appear only in the latest news or social media. [sent-13, score-0.352]

6 Our goal in this paper is to discover emerging entities of this kind on the fly as they become noteworthy in news and social-media streams. [sent-16, score-0.352]

7 A similar theme is pursued in research on open information extraction (open IE) (Banko 2007; Fader 2011; Talukdar 2010; Venetis 2011; Wu 2012), which yields higher recall compared to ontology-style KB construction with canonicalized and semantically typed entities organized in prespecified classes. [sent-17, score-0.444]

8 These phrases are not canonicalized, so the same entity may appear under many different names, e. [sent-19, score-0.288]

9 Our aim is for all recognized and newly discovered entities to be semantically interpretable by having fine-grained types that connect them to KB classes. [sent-24, score-0.367]

10 The expectation is that this will boost the disambiguation of known entity names and the grouping of new entities, and will also strengthen the extraction of relational facts about entities. [sent-25, score-0.449]

11 For informative knowledge, new entities must be typed in a fine-grained manner (e. [sent-26, score-0.331]

12 The solution presented in this paper, called PEARL, leverages a repository of relational patterns that are organized in a type-signature taxonomy. [sent-36, score-0.251]

13 The type signatures of the relational phrases are cues for the type of the entity denoted by the noun phrase. [sent-40, score-0.951]

14 For example, an entity named Snoop Dogg that frequently co-occurs with the ⟨singer⟩ * distinctive voice in * ⟨song⟩ pattern is likely to be a singer. [sent-41, score-0.482]

15 , a singer), we can use a partially bound pattern to infer the type of the other entity (e. [sent-45, score-0.493]

16 Known entities are recognized and mapped to the KB using a recent tool for named entity disambiguation (Hoffart 2011). [sent-49, score-0.505]

17 For cleaning out false hypotheses among the type candidates for a new entity, we devised probabilistic models and an integer linear program that considers incompatibilities and correlations among entity types. [sent-50, score-0.418]

18 In summary, our contribution in this paper is a model for discovering and ontologically typing out-of-KB entities, using a fine-grained type system and harnessing relational paraphrases with type signatures for probabilistic weight computation. [sent-51, score-0.716]

19 b) The noun phrase is a known entity that can be directly mapped to the knowledge base. [sent-55, score-0.404]

20 d) The noun phrase is a new entity not known to the knowledge base at all. [sent-57, score-0.404]

21 We use an extensive dictionary of surface forms for in-KB entities (Hoffart 2012), to determine if a name or phrase refers to a known entity. [sent-59, score-0.326]

22 , a common noun phrase that denotes a class or a general concept), we base the decision on the following hypothesis (inspired by and generalizing (Bunescu 2006)): A given noun phrase, not known to the knowledge base, is a true entity if its headword is singular and is consistently capitalized (i. [sent-65, score-0.479]
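
A minimal sketch of this detection heuristic, assuming the headword of an English noun phrase can be approximated by its last token and that a suffix-based singular test is acceptable; the function name, its argument, and these approximations are ours, since the extracted text does not specify the implementation.

```python
def is_emerging_entity(mentions):
    """Heuristic from the hypothesis above: a noun phrase not in the KB is
    treated as a true (emerging) entity if its headword is singular and is
    consistently capitalized across all its occurrences in the corpus.

    `mentions` is the list of surface strings observed for the same noun
    phrase. Approximations (ours, not the paper's): the headword is the
    last token, and a token counts as singular if it does not end in "s".
    """
    heads = [m.split()[-1] for m in mentions if m.strip()]
    if not heads:
        return False
    consistently_capitalized = all(h[0].isupper() for h in heads)
    singular = not heads[0].lower().endswith("s")
    return consistently_capitalized and singular
```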

23 3 Typing Emerging Entities To deduce types for new entities we propose to align new entities along the type signatures of patterns they occur with. [sent-68, score-0.921]

24 In this manner we use the patterns to suggest types for the entities they occur with. [sent-69, score-0.441]

25 In particular, we infer entity types from pattern type signatures. [sent-70, score-0.601]

26 1 (Type Alignment Hypothesis) For a given pattern such as ⟨actor⟩’s character in ⟨movie⟩, we assume that an entity pair (x, y) frequently occurring with the pattern in text implies that x and y are of the types ⟨actor⟩ and ⟨movie⟩, respectively. [sent-72, score-0.676]

27 With polysemy, the same lexico-syntactic pattern can have different type signatures. [sent-76, score-0.265]

28 For an entity pair (x, y) occurring with the pattern “released”, x can be one of three different types. [sent-78, score-0.367]

29 We cannot expect that the phrases we extract in text will be exact matches of the typed relational patterns learned by PATTY. [sent-79, score-0.471]

30 Quite often however, the extracted phrase matches multiple relational patterns to various degrees. [sent-81, score-0.365]

31 Each of the matched relational patterns has its own type signature. [sent-82, score-0.441]

32 The type signatures of the various matched patterns can be incompatible with one another. [sent-83, score-0.363]

33 The problem of incorrect paths between entities emerges when a pair of entities occurring in the same sentence does not stand in a true subject-object relation. [sent-84, score-0.514]

34 We define and solve the following optimization problem: Definition 1 (Type Inference Optimization) Given all the candidate types for x, find the best types or “strongly supported” types for x. [sent-89, score-0.415]

35 Type disjointness constraints are constraints that indicate that, semantically, a pair of types cannot apply to the same entity at the same time. [sent-91, score-0.625]

36 We also study a relaxation of type disjointness constraints through the use of type correlation constraints. [sent-93, score-0.437]

37 Our task is therefore twofold: first, generate candidate types for new entities; second, find the best types for each new entity among its candidate types. [sent-94, score-0.626]

38 4 Candidate Types for Entities For a given entity, candidate types are types that can potentially be assigned to that entity, based on the entity’s co-occurrences with typed relational patterns. [sent-95, score-0.556]

39 Definition 2 (Candidate Type) Given a new entity x which occurs with a number of patterns p1, p2, . [sent-96, score-0.336]

40 , pn, where each pattern pi has a type signature with a domain and a range: if x occurs on the left of pi, we pick the domain of pi as a candidate type for x; if x occurs on the right of pi, we pick the range of pi as a candidate type for x. [sent-99, score-1.251]
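
The candidate-type generation of Definition 2 can be sketched as follows. The `Pattern` and `Observation` containers and the `match` callback are hypothetical stand-ins, not the paper's data model; `match` represents the exact-or-fuzzy pattern matching discussed below.

```python
from collections import namedtuple

# Hypothetical containers: a typed relational pattern with its signature
# sig(p) = (domain(p), range(p)), and an observed triple (x phrase y).
Pattern = namedtuple("Pattern", ["phrase", "domain", "range"])
Observation = namedtuple("Observation", ["left", "phrase", "right"])

def candidate_types(entity, observations, patterns, match):
    """Definition 2: if the entity occurs on the left of a matching pattern,
    the pattern's domain becomes a candidate type; if it occurs on the
    right, the pattern's range does."""
    candidates = set()
    for obs in observations:
        for p in patterns:
            if not match(obs.phrase, p.phrase):
                continue
            if obs.left == entity:
                candidates.add(p.domain)
            if obs.right == entity:
                candidates.add(p.range)
    return candidates
```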

41 Ideally, if an entity occurs with a pattern which is highly specific to a given type then the candidate type should have high confidence. [sent-101, score-0.774]

42 1 Uniform Weights We are given a new entity x which occurs with phrases (x phrase1 y1), (x phrase2 y2), . [sent-107, score-0.288]

43 The pi's are the typed relational patterns extracted by PATTY. [sent-115, score-0.357]

44 The facts are generated by matching phrases to relational patterns with type signatures. [sent-116, score-0.538]

45 The type signature of a pattern is denoted by sig(pi) = (domain(pi), range(pi)). We allow fuzzy matches, hence each fact comes with a match score. [sent-117, score-0.339]

46 This is the similarity degree between the phrase observed in text and the typed relational pattern. [sent-118, score-0.309]

47 The fuzzy match similarity score is: sim(phrase, pi), where similarity is the n-gram Jaccard similarity between the phrase and the typed pattern. [sent-120, score-0.24]
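
A sketch of this fuzzy match score as word n-gram Jaccard similarity; the choice of word bigrams and whitespace tokenization are our assumptions, since the text only states that n-gram Jaccard similarity is used (short single-word patterns may call for smaller n or character n-grams).

```python
def ngrams(tokens, n):
    """Set of contiguous n-grams of a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def sim(phrase, pattern_phrase, n=2):
    """n-gram Jaccard similarity between an observed phrase and the lexical
    part of a typed pattern (word bigrams by assumption)."""
    a = ngrams(phrase.split(), n)
    b = ngrams(pattern_phrase.split(), n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)
```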

48 The confidence that x is of type domain is defined as follows: Definition 4 (Candidate Type Confidence) For a given observation (x phrase y), where phrase matches patterns p1, . [sent-121, score-0.529]

49 To compute the final confidence for typeConf(x, domain), we aggregate the confidences over all phrases occurring with x. [sent-130, score-0.233]

50 , (x, phrasen, yn), the aggregate candidate type confidence is given by: aggTypeConf(x, d) = Σ_{phrasei} typeConf(x, phrasei, d) = Σ_{phrasei} Σ_{pj : domain(pj) = d} sim(phrasei, pj). The confidence for the range, typeConf(x, range), is computed analogously. [sent-134, score-0.447]
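
A sketch of this uniform-weight aggregation, reusing the hypothetical containers and `sim` from the sketches above; only the domain side is shown, the range side being symmetric.

```python
from collections import defaultdict

def agg_type_conf(entity, observations, patterns, sim):
    """aggTypeConf(x, d): for every phrase observed with x as left argument
    and every typed pattern p_j that fuzzily matches it, add sim(phrase, p_j)
    to the score of domain(p_j). An analogous loop over range(p_j) handles
    observations where x is the right argument."""
    scores = defaultdict(float)
    for obs in observations:
        if obs.left != entity:
            continue
        for p in patterns:
            s = sim(obs.phrase, p.phrase)
            if s > 0.0:
                scores[p.domain] += s
    return dict(scores)
```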

51 Thus this approach does not take into account the intuition that an entity occurring with a pattern which is highly specific to a given type is a stronger signal that the entity is of the type suggested. [sent-137, score-0.975]

52 2 Co-occurrence Likelihood Weight Computation We devise a likelihood model for computing weights for entity candidate types. [sent-140, score-0.443]

53 Central to this model is the estimation of the likelihood of a given type occurring with a given pattern. [sent-141, score-0.324]

54 Suppose using PATTY methods we mined a typed relational pattern ⟨t1⟩ p ⟨t2⟩. [sent-142, score-0.324]

55 Suppose that we now encounter a new entity pair (x, y) occurring with a phrase that matches p. [sent-143, score-0.406]

56 We can compute the likelihood of x and y being of types t1 and t2, respectively, from the likelihood of p co-occurring with entities of types t1, t2. [sent-144, score-0.615]

57 For example, this sums up over both ⟨musician⟩ plays ⟨song⟩ occurrences and ⟨actor⟩ plays ⟨fictional character⟩. [sent-155, score-0.276]

58 Multiple patterns can suggest the same type for an entity. [sent-159, score-0.298]

59 Therefore, the weight of the assertion that y is of type t is the total support strength from all phrases that suggest type t for y. [sent-160, score-0.44]

60 Definition 8 (Aggregate Likelihood) The aggregate likelihood candidate type confidence is given by: typeConf(x, domain) = Σ_{phrasei} Σ_{pj} sim(phrasei, pj) · P[t1, t2 | pj], summing the similarity-weighted type-pattern likelihoods over all phrases occurring with x and all matching patterns pj. [sent-161, score-0.460]
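
One way to realize this likelihood weighting (cf. Definition 8 as reconstructed above) is to estimate P[t1, t2 | p] from counts of typed entity pairs seen with each lexical pattern and then weight every fuzzy match by that likelihood. The count-based estimator, the key layout, and the reuse of the containers from the earlier sketches are our assumptions, not the paper's exact probabilistic model.

```python
from collections import Counter, defaultdict

def type_pattern_likelihood(typed_pair_counts):
    """Estimate P[t1, t2 | p] from counts keyed by (lexical pattern, t1, t2),
    i.e. how often pattern p was observed with an entity pair whose types are
    (t1, t2). This count-based estimator is an assumption of this sketch."""
    per_pattern = Counter()
    for (p, _, _), c in typed_pair_counts.items():
        per_pattern[p] += c
    return {key: c / per_pattern[key[0]] for key, c in typed_pair_counts.items()}

def likelihood_weighted_conf(entity, observations, patterns, sim, likelihood):
    """Aggregate likelihood weights: each matching typed pattern contributes
    sim(phrase, p) * P[domain(p), range(p) | p] to the candidate type it
    suggests for the left argument (range side handled analogously)."""
    scores = defaultdict(float)
    for obs in observations:
        if obs.left != entity:
            continue
        for p in patterns:
            s = sim(obs.phrase, p.phrase)
            if s <= 0.0:
                continue
            scores[p.domain] += s * likelihood.get((p.phrase, p.domain, p.range), 0.0)
    return dict(scores)
```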

61 In the next step we pick the best types for an entity among all its candidate types. [sent-165, score-0.462]

62 3 Integer Linear Program Formulation Given a set of weighted candidate types, our goal is to pick a compatible subset of types for x. [sent-167, score-0.234]

63 The additional asset that we leverage here is the compatibility of types: how likely is it that an entity belongs to both type ti and type tj. [sent-168, score-0.726]

64 Some types are mutually exclusive, for example, the type location rules out person and, at finer levels, city rules out river and building, and so on. [sent-169, score-0.298]

65 First, we define a decision variable Ti for each candidate type i = 1, . [sent-175, score-0.281]

66 These are binary variables: Ti = 1 means type ti is selected to be included in the set of types for x, Ti = 0 means we discard type ti for x. [sent-179, score-0.724]

67 In the following we develop two variants of this approach: a “hard” ILP with rigorous disjointness constraints, and a “soft” ILP which considers type correlations. [sent-180, score-0.395]

68 We infer type disjointness constraints from the YAGO2 knowledge base using occurrence statistics. [sent-182, score-0.437]

69 Notice that this introduces hard constraints whereby selecting one type of a disjoint pair rules out the second type. [sent-184, score-0.268]

70 We define type disjointness constraints Ti + Tj ≤ 1 for all disjoint pairs ti, tj (e. [sent-185, score-0.575]

71 The ILP is defined as follows: objective: max Σ_i Ti · wi; type disjointness constraints: Ti + Tj ≤ 1 ∀(ti, tj) disjoint. The weights wi are the aggregated likelihoods as specified in Definition 8. [sent-189, score-0.244]
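
A sketch of the “hard” ILP using the PuLP modelling library; PuLP is our choice of solver, not one named by the paper, and the input dictionaries are illustrative. `weights` maps candidate types to their aggregated likelihoods wi and `disjoint_pairs` lists the precomputed disjoint type pairs.

```python
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum

def select_types_hard(weights, disjoint_pairs):
    """Maximize sum_i T_i * w_i subject to T_i + T_j <= 1 for every disjoint
    pair (t_i, t_j); returns the selected types for a single entity."""
    prob = LpProblem("pearl_hard", LpMaximize)
    T = {t: LpVariable(f"T_{k}", cat=LpBinary) for k, t in enumerate(weights)}
    prob += lpSum(T[t] * w for t, w in weights.items())  # objective
    for ti, tj in disjoint_pairs:
        if ti in T and tj in T:
            prob += T[ti] + T[tj] <= 1  # hard disjointness constraint
    prob.solve()
    return [t for t, var in T.items() if var.value() == 1]
```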

72 In many cases, two types are not really mutually exclusive in the strict sense, but the likelihood that an entity belongs to both types is very low. [sent-191, score-0.514]

73 To this end, we precompute Pearson correlation coefficients for all type pairs (ti, tj) based on co-occurrences of types for the same entities. [sent-196, score-0.298]
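
The precomputation can be read as Pearson correlation over binary type-membership indicators of KB entities, which for 0/1 variables reduces to the phi coefficient; this reading and the data layout are our assumptions (a non-empty KB sample is assumed).

```python
from collections import Counter
from itertools import combinations
from math import sqrt

def type_correlations(entity_types):
    """Pearson (phi) correlation for every type pair, computed from the
    binary "entity belongs to type" indicators over all KB entities.
    entity_types: dict mapping each entity to the set of its KB types."""
    n = len(entity_types)
    all_types = sorted({t for ts in entity_types.values() for t in ts})
    single = {t: sum(1 for ts in entity_types.values() if t in ts) for t in all_types}
    both = Counter(frozenset(pair)
                   for ts in entity_types.values()
                   for pair in combinations(sorted(ts), 2))
    corr = {}
    for ti, tj in combinations(all_types, 2):
        pi, pj = single[ti] / n, single[tj] / n
        pij = both[frozenset((ti, tj))] / n
        denom = sqrt(pi * (1 - pi) * pj * (1 - pj))
        corr[frozenset((ti, tj))] = (pij - pi * pj) / denom if denom else 0.0
    return corr
```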

74 We additionally introduce pair-wise decision variables Yij, set to 1 if the entity at hand belongs to both types ti and tj, and 0 otherwise. [sent-198, score-0.454]

75 The ILP with correlations is defined as follows: objective: max α Σ_i Ti · wi + (1 − α) Σ_{i,j} Yij · vij; type correlation constraints: Yij + 1 ≥ Ti + Tj ∀i,j, Yij ≤ Ti ∀i,j, Yij ≤ Tj ∀i,j. Note that both ILP variants need to be solved per entity, not over all entities together. [sent-201, score-0.449]
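
A matching sketch of the “soft” variant with pairwise variables Yij, again using PuLP; the trade-off weight alpha, the enumeration of all candidate pairs, and keying the correlation coefficients vij by unordered pairs are illustrative choices of this sketch.

```python
from itertools import combinations
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum

def select_types_soft(weights, correlations, alpha=0.5):
    """max alpha * sum_i T_i w_i + (1 - alpha) * sum_ij Y_ij v_ij, subject to
    Y_ij + 1 >= T_i + T_j, Y_ij <= T_i, Y_ij <= T_j (so Y_ij = T_i AND T_j)."""
    types = list(weights)
    prob = LpProblem("pearl_soft", LpMaximize)
    T = {t: LpVariable(f"T_{i}", cat=LpBinary) for i, t in enumerate(types)}
    pairs = list(combinations(types, 2))
    Y = {p: LpVariable(f"Y_{i}", cat=LpBinary) for i, p in enumerate(pairs)}
    v = {p: correlations.get(frozenset(p), 0.0) for p in pairs}
    prob += (alpha * lpSum(T[t] * weights[t] for t in types)
             + (1 - alpha) * lpSum(Y[p] * v[p] for p in pairs))  # objective
    for (ti, tj) in pairs:
        y = Y[(ti, tj)]
        prob += y + 1 >= T[ti] + T[tj]
        prob += y <= T[ti]
        prob += y <= T[tj]
    prob.solve()
    return [t for t in types if T[t].value() == 1]
```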

76 For both the “hard” and “soft” variants of the ILP, the solution is the best types for entity x satisfying the constraints. [sent-205, score-0.336]

77 NNPLB (No Noun Phrase Left Behind) is the method presented in (Lin 2012), based on the propagation of types for known entities through salient patterns occurring with both known and unknown entities. [sent-215, score-0.587]

78 We implemented the algorithm in (Lin 2012) in our framework, using the relational patterns of PATTY (Nakashole 2012) for comparability. [sent-216, score-0.251]

79 To evaluate the quality of types assigned to emerging entities, we presented turkers with sentences from the news tagged with out-of-KB entities and the types inferred by the methods under test. [sent-223, score-0.672]

80 The turkers' task was to assess the correctness of types assigned to an entity mention. [sent-224, score-0.44]

81 To make the task easy for the turkers to understand, we combined the extracted entity and type into a sentence. [sent-225, score-0.418]

82 We first sampled among out-of-KB entities that were mentioned frequently in the news corpus: in at least 20 different news articles. [sent-237, score-0.301]

83 Since the turkers did not always agree on whether the type for a sample was good, we aggregated their answers. [sent-239, score-0.346]

84 This limitation could be addressed by applying PEARL’s ILPs and probabilistic weights to the candidate types suggested by NNPLB. [sent-252, score-0.253]

85 For a given entity mention e, an entity-typing system returns a ranked list of types {t1, t2, . [sent-274, score-0.373]

86 Table 3: PEARL-hard performance on a sample of frequent entities (mention frequency ≥ 20) and on a sample of entities of all mention frequencies. [sent-291, score-0.309]

87 We also studied PEARL-hard's performance on entities of different mention frequencies. [sent-293, score-0.262]

88 The third variation is PEARL without probabilistic weights (denoted Uniform). Tagging mentions of named entities with lexical types has been pursued in previous work. [sent-302, score-0.476]

89 Most well-known is the Stanford named entity recognition (NER) tagger (Finkel 2005) which assigns coarse-grained types like person, organization, location, and other to noun phrases that are likely to denote entities. [sent-303, score-0.523]

90 For unlinkable entities the NNPLB method (inspired by (Kozareva 2011)) picks types based on co-occurrence with salient relational patterns by propagating types of linked entities to unlinkable entities that occur with the same patterns. [sent-340, score-1.268]

91 In contrast, PEARL uses an ILP with type disjointness and correlation constraints to solve and penalize such inconsistencies. [sent-342, score-0.437]

92 NNPLB uses untyped patterns, whereas PEARL harnesses patterns with type signatures. [sent-343, score-0.374]

93 Furthermore, PEARL computes weights for candidate types based on patterns and type signatures. [sent-344, score-0.551]

94 NNPLB only assigns types to entities that appear in the subject role of a pattern. [sent-346, score-0.333]

95 This means that entities in the object role are not typed at all. [sent-347, score-0.331]

96 In contrast, PEARL infers types for entities in both the subject and object role. [sent-348, score-0.333]

97 Type disjointness constraints have been studied for other tasks in information extraction (Carlson 2010; Suchanek 2009), but using different formulations. [sent-349, score-0.247]

98 7 Conclusion This paper addressed the problem of detecting and semantically typing newly emerging entities, to support the life-cycle of large knowledge bases. [sent-350, score-0.251]

99 Our solution, PEARL, draws on a collection of semantically typed patterns for binary relations. [sent-351, score-0.248]

100 PEARL feeds probabilistic evidence derived from occurrences of such patterns into two kinds of ILPs, considering type disjointness or type correlations. [sent-352, score-0.739]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('pearl', 0.373), ('nnplb', 0.232), ('entity', 0.228), ('entities', 0.225), ('disjointness', 0.205), ('type', 0.19), ('hyena', 0.168), ('ilp', 0.166), ('relational', 0.143), ('kb', 0.138), ('typing', 0.128), ('ti', 0.118), ('pi', 0.118), ('types', 0.108), ('patterns', 0.108), ('weikum', 0.107), ('typed', 0.106), ('turkers', 0.104), ('suchanek', 0.104), ('tj', 0.102), ('yij', 0.097), ('patty', 0.093), ('yosef', 0.093), ('candidate', 0.091), ('emerging', 0.089), ('hoffart', 0.085), ('entwistle', 0.085), ('hsongi', 0.085), ('typeconf', 0.085), ('pattern', 0.075), ('noun', 0.075), ('fuzzy', 0.074), ('likelihood', 0.07), ('signatures', 0.065), ('occurring', 0.064), ('bonamassa', 0.063), ('expelled', 0.063), ('hactori', 0.063), ('unlinkable', 0.063), ('fleiss', 0.063), ('nakashole', 0.062), ('phrase', 0.06), ('phrases', 0.06), ('confidence', 0.057), ('brussels', 0.056), ('guitar', 0.056), ('weights', 0.054), ('matches', 0.054), ('aggregate', 0.052), ('named', 0.052), ('bbc', 0.052), ('organization', 0.048), ('feeds', 0.046), ('rahman', 0.044), ('summit', 0.044), ('triples', 0.043), ('plays', 0.043), ('analog', 0.042), ('canonicalized', 0.042), ('domxain', 0.042), ('eonft', 0.042), ('halbumi', 0.042), ('harnesses', 0.042), ('hmoviei', 0.042), ('hmusicani', 0.042), ('hsingeri', 0.042), ('lochte', 0.042), ('malick', 0.042), ('melinda', 0.042), ('pearlhard', 0.042), ('phrasen', 0.042), ('phxrasei', 0.042), ('precisionlowerprecisionupper', 0.042), ('realtytrac', 0.042), ('venetis', 0.042), ('constraints', 0.042), ('soft', 0.042), ('director', 0.041), ('known', 0.041), ('definition', 0.04), ('freebase', 0.038), ('news', 0.038), ('conceptnet', 0.037), ('album', 0.037), ('havasi', 0.037), ('ilps', 0.037), ('musician', 0.037), ('ndcg', 0.037), ('pursued', 0.037), ('facts', 0.037), ('sigmod', 0.037), ('sim', 0.037), ('mention', 0.037), ('disjoint', 0.036), ('yn', 0.035), ('pick', 0.035), ('vij', 0.034), ('cooccurring', 0.034), ('untyped', 0.034), ('semantically', 0.034)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9999997 160 acl-2013-Fine-grained Semantic Typing of Emerging Entities

Author: Ndapandula Nakashole ; Tomasz Tylenda ; Gerhard Weikum

Abstract: Methods for information extraction (IE) and knowledge base (KB) construction have been intensively studied. However, a largely under-explored case is tapping into highly dynamic sources like news streams and social media, where new entities are continuously emerging. In this paper, we present a method for discovering and semantically typing newly emerging out-of-KB entities, thus improving the freshness and recall of ontology-based IE and improving the precision and semantic rigor of open IE. Our method is based on a probabilistic model that feeds weights into integer linear programs that leverage type signatures of relational phrases and type correlation or disjointness constraints. Our experimental evaluation, based on crowdsourced user studies, shows our method performing significantly better than prior work.

2 0.3049742 179 acl-2013-HYENA-live: Fine-Grained Online Entity Type Classification from Natural-language Text

Author: Mohamed Amir Yosef ; Sandro Bauer ; Johannes Hoffart ; Marc Spaniol ; Gerhard Weikum

Abstract: Recent research has shown progress in achieving high-quality, very fine-grained type classification in hierarchical taxonomies. Within such a multi-level type hierarchy with several hundreds of types at different levels, many entities naturally belong to multiple types. In order to achieve high-precision in type classification, current approaches are either limited to certain domains or require time-consuming multistage computations. As a consequence, existing systems are incapable of performing ad-hoc type classification on arbitrary input texts. In this demo, we present a novel Web-based tool that is able to perform domain independent entity type classification under real time conditions. Thanks to its efficient implementation and compacted feature representation, the system is able to process text inputs on-the-fly while still achieving equally high precision as leading state-of-the-art implementations. Our system offers an online interface where natural-language text can be inserted, which returns semantic type labels for entity mentions. Furthermore, the user interface allows users to explore the assigned types by visualizing and navigating along the type-hierarchy.

3 0.23169358 352 acl-2013-Towards Accurate Distant Supervision for Relational Facts Extraction

Author: Xingxing Zhang ; Jianwen Zhang ; Junyu Zeng ; Jun Yan ; Zheng Chen ; Zhifang Sui

Abstract: Distant supervision (DS) is an appealing learning method which learns from existing relational facts to extract more from a text corpus. However, the accuracy is still not satisfying. In this paper, we point out and analyze some critical factors in DS which have great impact on accuracy, including valid entity type detection, negative training examples construction and ensembles. We propose an approach to handle these factors. By experimenting on Wikipedia articles to extract the facts in Freebase (the top 92 relations), we show the impact of these three factors on the accuracy of DS and the remarkable improvement led by the proposed approach.

4 0.1462969 139 acl-2013-Entity Linking for Tweets

Author: Xiaohua Liu ; Yitong Li ; Haocheng Wu ; Ming Zhou ; Furu Wei ; Yi Lu

Abstract: We study the task of entity linking for tweets, which tries to associate each mention in a tweet with a knowledge base entry. Two main challenges of this task are the dearth of information in a single tweet and the rich entity mention variations. To address these challenges, we propose a collective inference method that simultaneously resolves a set of mentions. Particularly, our model integrates three kinds of similarities, i.e., mention-entry similarity, entry-entry similarity, and mention-mention similarity, to enrich the context for entity linking, and to address irregular mentions that are not covered by the entity-variation dictionary. We evaluate our method on a publicly available data set and demonstrate the effectiveness of our method.

5 0.12611924 169 acl-2013-Generating Synthetic Comparable Questions for News Articles

Author: Oleg Rokhlenko ; Idan Szpektor

Abstract: We introduce the novel task of automatically generating questions that are relevant to a text but do not appear in it. One motivating example of its application is for increasing user engagement around news articles by suggesting relevant comparable questions, such as “is Beyonce a better singer than Madonna?”, for the user to answer. We present the first algorithm for the task, which consists of: (a) offline construction of a comparable question template database; (b) ranking of relevant templates to a given article; and (c) instantiation of templates only with entities in the article whose comparison under the template’s relation makes sense. We tested the suggestions generated by our algorithm via a Mechanical Turk experiment, which showed a significant improvement over the strongest baseline of more than 45% in all metrics.

6 0.12370483 219 acl-2013-Learning Entity Representation for Entity Disambiguation

7 0.12017729 71 acl-2013-Bootstrapping Entity Translation on Weakly Comparable Corpora

8 0.11523215 178 acl-2013-HEADY: News headline abstraction through event pattern clustering

9 0.11249101 377 acl-2013-Using Supervised Bigram-based ILP for Extractive Summarization

10 0.096202143 291 acl-2013-Question Answering Using Enhanced Lexical Semantic Models

11 0.096085757 138 acl-2013-Enriching Entity Translation Discovery using Selective Temporality

12 0.092445157 306 acl-2013-SPred: Large-scale Harvesting of Semantic Predicates

13 0.087501943 159 acl-2013-Filling Knowledge Base Gaps for Distant Supervision of Relation Extraction

14 0.086529061 272 acl-2013-Paraphrase-Driven Learning for Open Question Answering

15 0.085732877 256 acl-2013-Named Entity Recognition using Cross-lingual Resources: Arabic as an Example

16 0.085547075 172 acl-2013-Graph-based Local Coherence Modeling

17 0.080914125 43 acl-2013-Align, Disambiguate and Walk: A Unified Approach for Measuring Semantic Similarity

18 0.080642365 207 acl-2013-Joint Inference for Fine-grained Opinion Extraction

19 0.080475934 290 acl-2013-Question Analysis for Polish Question Answering

20 0.078121781 56 acl-2013-Argument Inference from Relevant Event Mentions in Chinese Argument Extraction


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.218), (1, 0.075), (2, -0.001), (3, -0.108), (4, 0.075), (5, 0.168), (6, -0.072), (7, -0.007), (8, 0.029), (9, -0.021), (10, -0.036), (11, -0.088), (12, -0.077), (13, -0.049), (14, -0.012), (15, 0.051), (16, 0.007), (17, 0.027), (18, -0.126), (19, 0.034), (20, -0.075), (21, -0.003), (22, -0.008), (23, 0.254), (24, 0.032), (25, 0.04), (26, 0.073), (27, -0.086), (28, -0.114), (29, 0.095), (30, 0.084), (31, -0.094), (32, 0.027), (33, -0.01), (34, -0.034), (35, 0.021), (36, 0.021), (37, -0.079), (38, -0.09), (39, -0.044), (40, 0.102), (41, -0.081), (42, -0.043), (43, -0.027), (44, -0.019), (45, 0.024), (46, -0.09), (47, -0.02), (48, -0.012), (49, 0.01)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97529781 160 acl-2013-Fine-grained Semantic Typing of Emerging Entities

Author: Ndapandula Nakashole ; Tomasz Tylenda ; Gerhard Weikum

Abstract: Methods for information extraction (IE) and knowledge base (KB) construction have been intensively studied. However, a largely under-explored case is tapping into highly dynamic sources like news streams and social media, where new entities are continuously emerging. In this paper, we present a method for discovering and semantically typing newly emerging out-of-KB entities, thus improving the freshness and recall of ontology-based IE and improving the precision and semantic rigor of open IE. Our method is based on a probabilistic model that feeds weights into integer linear programs that leverage type signatures of relational phrases and type correlation or disjointness constraints. Our experimental evaluation, based on crowdsourced user studies, shows our method performing significantly better than prior work.

2 0.90197164 352 acl-2013-Towards Accurate Distant Supervision for Relational Facts Extraction

Author: Xingxing Zhang ; Jianwen Zhang ; Junyu Zeng ; Jun Yan ; Zheng Chen ; Zhifang Sui

Abstract: Distant supervision (DS) is an appealing learning method which learns from existing relational facts to extract more from a text corpus. However, the accuracy is still not satisfying. In this paper, we point out and analyze some critical factors in DS which have great impact on accuracy, including valid entity type detection, negative training examples construction and ensembles. We propose an approach to handle these factors. By experimenting on Wikipedia articles to extract the facts in Freebase (the top 92 relations), we show the impact of these three factors on the accuracy of DS and the remarkable improvement led by the proposed approach.

3 0.85517496 179 acl-2013-HYENA-live: Fine-Grained Online Entity Type Classification from Natural-language Text

Author: Mohamed Amir Yosef ; Sandro Bauer ; Johannes Hoffart ; Marc Spaniol ; Gerhard Weikum

Abstract: Recent research has shown progress in achieving high-quality, very fine-grained type classification in hierarchical taxonomies. Within such a multi-level type hierarchy with several hundreds of types at different levels, many entities naturally belong to multiple types. In order to achieve high-precision in type classification, current approaches are either limited to certain domains or require time-consuming multistage computations. As a consequence, existing systems are incapable of performing ad-hoc type classification on arbitrary input texts. In this demo, we present a novel Web-based tool that is able to perform domain independent entity type classification under real time conditions. Thanks to its efficient implementation and compacted feature representation, the system is able to process text inputs on-the-fly while still achieving equally high precision as leading state-of-the-art implementations. Our system offers an online interface where natural-language text can be inserted, which returns semantic type labels for entity mentions. Furthermore, the user interface allows users to explore the assigned types by visualizing and navigating along the type-hierarchy.

4 0.75004101 139 acl-2013-Entity Linking for Tweets

Author: Xiaohua Liu ; Yitong Li ; Haocheng Wu ; Ming Zhou ; Furu Wei ; Yi Lu

Abstract: We study the task of entity linking for tweets, which tries to associate each mention in a tweet with a knowledge base entry. Two main challenges of this task are the dearth of information in a single tweet and the rich entity mention variations. To address these challenges, we propose a collective inference method that simultaneously resolves a set of mentions. Particularly, our model integrates three kinds of similarities, i.e., mention-entry similarity, entry-entry similarity, and mention-mention similarity, to enrich the context for entity linking, and to address irregular mentions that are not covered by the entity-variation dictionary. We evaluate our method on a publicly available data set and demonstrate the effectiveness of our method.

5 0.69508052 159 acl-2013-Filling Knowledge Base Gaps for Distant Supervision of Relation Extraction

Author: Wei Xu ; Raphael Hoffmann ; Le Zhao ; Ralph Grishman

Abstract: Distant supervision has attracted recent interest for training information extraction systems because it does not require any human annotation but rather employs existing knowledge bases to heuristically label a training corpus. However, previous work has failed to address the problem of false negative training examples mislabeled due to the incompleteness of knowledge bases. To tackle this problem, we propose a simple yet novel framework that combines a passage retrieval model using coarse features into a state-of-the-art relation extractor using multi-instance learning with fine features. We adapt the information retrieval technique of pseudo-relevance feedback to expand knowledge bases, assuming entity pairs in top-ranked passages are more likely to express a relation. Our proposed technique significantly improves the quality of distantly supervised relation extraction, boosting recall from 47.7% to 61.2% with a consistently high level of precision of around 93% in the experiments.

6 0.66519171 219 acl-2013-Learning Entity Representation for Entity Disambiguation

7 0.65978152 178 acl-2013-HEADY: News headline abstraction through event pattern clustering

8 0.64545602 138 acl-2013-Enriching Entity Translation Discovery using Selective Temporality

9 0.64489192 169 acl-2013-Generating Synthetic Comparable Questions for News Articles

10 0.57446188 71 acl-2013-Bootstrapping Entity Translation on Weakly Comparable Corpora

11 0.55113375 172 acl-2013-Graph-based Local Coherence Modeling

12 0.54879892 365 acl-2013-Understanding Tables in Context Using Standard NLP Toolkits

13 0.52918315 215 acl-2013-Large-scale Semantic Parsing via Schema Matching and Lexicon Extension

14 0.5225473 256 acl-2013-Named Entity Recognition using Cross-lingual Resources: Arabic as an Example

15 0.52028078 340 acl-2013-Text-Driven Toponym Resolution using Indirect Supervision

16 0.51645154 375 acl-2013-Using Integer Linear Programming in Concept-to-Text Generation to Produce More Compact Texts

17 0.51267081 242 acl-2013-Mining Equivalent Relations from Linked Data

18 0.47157362 356 acl-2013-Transfer Learning Based Cross-lingual Knowledge Extraction for Wikipedia

19 0.46093002 21 acl-2013-A Statistical NLG Framework for Aggregated Planning and Realization

20 0.45808041 232 acl-2013-Linguistic Models for Analyzing and Detecting Biased Language


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.042), (6, 0.027), (11, 0.054), (15, 0.013), (21, 0.016), (24, 0.031), (26, 0.037), (35, 0.456), (42, 0.036), (48, 0.037), (70, 0.031), (71, 0.045), (88, 0.027), (90, 0.026), (95, 0.048)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.98401397 160 acl-2013-Fine-grained Semantic Typing of Emerging Entities

Author: Ndapandula Nakashole ; Tomasz Tylenda ; Gerhard Weikum

Abstract: Methods for information extraction (IE) and knowledge base (KB) construction have been intensively studied. However, a largely under-explored case is tapping into highly dynamic sources like news streams and social media, where new entities are continuously emerging. In this paper, we present a method for discovering and semantically typing newly emerging out-of-KB entities, thus improving the freshness and recall of ontology-based IE and improving the precision and semantic rigor of open IE. Our method is based on a probabilistic model that feeds weights into integer linear programs that leverage type signatures of relational phrases and type correlation or disjointness constraints. Our experimental evaluation, based on crowdsourced user studies, shows our method performing significantly better than prior work.

2 0.98267084 278 acl-2013-Patient Experience in Online Support Forums: Modeling Interpersonal Interactions and Medication Use

Author: Annie Chen

Abstract: Though there has been substantial research concerning the extraction of information from clinical notes, to date there has been less work concerning the extraction of useful information from patient-generated content. Using a dataset comprised of online support group discussion content, this paper investigates two dimensions that may be important in the extraction of patient-generated experiences from text; significant individuals/groups and medication use. With regard to the former, the paper describes an approach involving the pairing of important figures (e.g. family, husbands, doctors, etc.) and affect, and suggests possible applications of such techniques to research concerning online social support, as well as integration into search interfaces for patients. Additionally, the paper demonstrates the extraction of side effects and sentiment at different phases in patient medication use, e.g. adoption, current use, discontinuation and switching, and demonstrates the utility of such an application for drug safety monitoring in online discussion forums. 1

3 0.98004639 55 acl-2013-Are Semantically Coherent Topic Models Useful for Ad Hoc Information Retrieval?

Author: Romain Deveaud ; Eric SanJuan ; Patrice Bellot

Abstract: The current topic modeling approaches for Information Retrieval do not allow to explicitly model query-oriented latent topics. More, the semantic coherence of the topics has never been considered in this field. We propose a model-based feedback approach that learns Latent Dirichlet Allocation topic models on the top-ranked pseudo-relevant feedback, and we measure the semantic coherence of those topics. We perform a first experimental evaluation using two major TREC test collections. Results show that retrieval performances tend to be better when using topics with higher semantic coherence.

4 0.97674769 76 acl-2013-Building and Evaluating a Distributional Memory for Croatian

Author: Jan Snajder ; Sebastian Pado ; Zeljko Agic

Abstract: We report on the first structured distributional semantic model for Croatian, DM.HR. It is constructed after the model of the English Distributional Memory (Baroni and Lenci, 2010), from a dependency-parsed Croatian web corpus, and covers about 2M lemmas. We give details on the linguistic processing and the design principles. An evaluation shows state-of-the-art performance on a semantic similarity task with particularly good performance on nouns. The resource is freely available.

5 0.97016674 311 acl-2013-Semantic Neighborhoods as Hypergraphs

Author: Chris Quirk ; Pallavi Choudhury

Abstract: Ambiguity preserving representations such as lattices are very useful in a number of NLP tasks, including paraphrase generation, paraphrase recognition, and machine translation evaluation. Lattices compactly represent lexical variation, but word order variation leads to a combinatorial explosion of states. We advocate hypergraphs as compact representations for sets of utterances describing the same event or object. We present a method to construct hypergraphs from sets of utterances, and evaluate this method on a simple recognition task. Given a set of utterances that describe a single object or event, we construct such a hypergraph, and demonstrate that it can recognize novel descriptions of the same event with high accuracy.

6 0.96587896 32 acl-2013-A relatedness benchmark to test the role of determiners in compositional distributional semantics

7 0.96317244 10 acl-2013-A Markov Model of Machine Translation using Non-parametric Bayesian Inference

8 0.9615559 122 acl-2013-Discriminative Approach to Fill-in-the-Blank Quiz Generation for Language Learners

9 0.82900059 60 acl-2013-Automatic Coupling of Answer Extraction and Information Retrieval

10 0.8227039 238 acl-2013-Measuring semantic content in distributional vectors

11 0.81172979 113 acl-2013-Derivational Smoothing for Syntactic Distributional Semantics

12 0.80729204 58 acl-2013-Automated Collocation Suggestion for Japanese Second Language Learners

13 0.7886588 283 acl-2013-Probabilistic Domain Modelling With Contextualized Distributional Semantic Vectors

14 0.78295946 121 acl-2013-Discovering User Interactions in Ideological Discussions

15 0.77302825 158 acl-2013-Feature-Based Selection of Dependency Paths in Ad Hoc Information Retrieval

16 0.76972783 352 acl-2013-Towards Accurate Distant Supervision for Relational Facts Extraction

17 0.76075763 347 acl-2013-The Role of Syntax in Vector Space Models of Compositional Semantics

18 0.7601229 219 acl-2013-Learning Entity Representation for Entity Disambiguation

19 0.75583297 291 acl-2013-Question Answering Using Enhanced Lexical Semantic Models

20 0.75529236 231 acl-2013-Linggle: a Web-scale Linguistic Search Engine for Words in Context