emnlp emnlp2010 emnlp2010-31 knowledge-graph by maker-knowledge-mining

31 emnlp-2010-Constraints Based Taxonomic Relation Classification


Source: pdf

Author: Quang Do ; Dan Roth

Abstract: Determining whether two terms in text have an ancestor relation (e.g. Toyota and car) or a sibling relation (e.g. Toyota and Honda) is an essential component of textual inference in NLP applications such as Question Answering, Summarization, and Recognizing Textual Entailment. Significant work has been done on developing stationary knowledge sources that could potentially support these tasks, but these resources often suffer from low coverage and noise, and are inflexible when needed to support terms that are not identical to those placed in them, making their use as general-purpose background knowledge resources difficult. In this paper, rather than building a stationary hierarchical structure of terms and relations, we describe a system that, given two terms, determines the taxonomic relation between them using a machine learning-based approach that makes use of existing resources. Moreover, we develop a global constraint optimization inference process and use it to leverage an existing knowledge base and to enforce relational constraints among terms, thus improving the classifier predictions. Our experimental evaluation shows that our approach significantly outperforms other systems built upon existing well-known knowledge sources.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Determining whether two terms in text have an ancestor relation (e.g. Toyota and car) or a sibling relation (e.g. Toyota and Honda) is an essential component of textual inference in NLP applications such as Question Answering, Summarization, and Recognizing Textual Entailment. [sent-2, score-0.392]

2 In this paper, rather than building a stationary hierarchical structure of terms and relations, we describe a system that, given two terms, determines the taxonomic relation between them using a machine learning-based approach that makes use of existing resources. [sent-8, score-0.816]

3 Moreover, we develop a global constraint optimization inference process and use it to leverage an existing knowledge base and to enforce relational constraints among terms, thus improving the classifier predictions. [sent-9, score-0.380]

4 1 Introduction Taxonomic relations that are read off of structured ontological knowledge bases have been shown to play important roles in many computational linguistics tasks, such as document clustering (Hotho et al.). [sent-11, score-0.177]

5 It is clear that the recognition of taxonomic relations between terms in sentences is essential to support textual inference tasks such as Recognizing Textual Entailment (RTE) (Dagan et al.). [sent-16, score-0.811]

6 However, identifying when these relations hold using fixed stationary hierarchical structures may be impaired by noise in the resource and by uncertainty in mapping targeted terms to concepts in the structures. [sent-23, score-0.322]

7 In the current work, we take a different approach, identifying directly whether a pair of terms holds a taxonomic relation. [sent-28, score-0.616]

8 This often happens when targeted terms have the same meaning as, but different surface forms from, the terms used in the resources (e.g. …). [sent-30, score-0.22]

9 We argue that it is essential to have a classifier that, given two terms, can build a semantic representation of the terms and determine the taxonomic relations between them. [sent-33, score-0.824]

10 Moreover, stationary resources are usually brittle because of the way most of them are built: using local relational patterns (e.g. …). [sent-36, score-0.196]

11 Infrequent terms are less likely to be covered, and some relations may not be supported well by these methods because their corresponding terms rarely appear in close proximity (e.g. …). [sent-40, score-0.339]

12 Motivated by the needs of NLP applications such as RTE, QA, Summarization, and the compositionality argument alluded to above, we focus on identifying two fundamental types of taxonomic relations: ancestor and sibling. [sent-47, score-0.75]

13 An ancestor relation and its directionality can help us infer that a statement with respect to the child (e.g. …). [sent-48, score-0.282]

14 (…, 2010) suggest isolating TE phenomena, such as recognizing taxonomic relations, and studying them separately; they discuss some characteristics of phenomena such as contradiction from a similar perspective to ours, but do not provide a solution. [sent-59, score-0.517]

15 In this paper, we present TAxonomic RElation Classifier (TAREC), a system that classifies taxonomic relations between a given pair of terms using a machine learning-based classifier. [sent-60, score-0.735]

16 An integral part of TAREC is also our inference model, which makes use of relational constraints to enforce coherency among several related predictions. [sent-61, score-0.183]

17 TAREC does not aim at building or extracting a hierarchical structure of concepts and relations, but rather at directly recognizing taxonomic relations given a pair of terms. [sent-62, score-0.695]

18 In addition, we make use of existing stationary ontologies to find terms related to the target terms, and classify those too. [sent-64, score-0.211]

19 2 Related Work There are several works that aim at building taxonomies and ontologies which organize concepts and their taxonomic relations into hierarchical structures. [sent-71, score-0.688]

20 Terms with a recognized hypernym relation are extracted and incorporated into a man-made lexical database, WordNet (Fellbaum, 1998), resulting in the extended WordNet, which has been augmented with over 400,000 synsets. [sent-75, score-0.197]

21 A natural way to use these hierarchical structures to support taxonomic relation classification is to map targeted terms onto the hierarchies and check if they subsume each other or share a common subsumer. [sent-79, score-0.763]

22 TAREC overcomes these limitations by searching and selecting the top relevant articles in Wikipedia for each input term; taxonomic relations are then recognized based on the features extracted from these articles. [sent-81, score-0.738]

23 (…, 2008) automatically harvest related terms from large corpora by starting with a few seeds of pre-specified relations (e.g. …). [sent-83, score-0.255]

24 Moreover, an Open IE system cannot control the extracted relations, and such control is essential when identifying taxonomic relations. [sent-89, score-0.6]

25 TAREC does not aim at extracting terms and building a stationary hierarchical structure of terms, but rather recognizes the taxonomic relation between any two given terms. [sent-102, score-0.846]

26 TAREC focuses on classifying two fundamental types of taxonomic relations: ancestor and sibling. [sent-103, score-0.631]

27 Determining whether two terms hold a taxonomic relation depends on a pragmatic decision of how far one wants to climb up a taxonomy to find a common subsumer. [sent-104, score-0.762]

28 TAREC makes use of a hierarchical structure as background knowledge and considers two terms to hold a taxonomic relation only if the relation can be recognized from information acquired by climbing up at most K levels from the representation of the target terms in the structure. [sent-111, score-1.066]
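
To make the K-level decision above concrete, here is a minimal sketch (not the authors' code; the `parents` map and all names are hypothetical) of recognizing ancestor and sibling relations by climbing at most K levels in a category hierarchy:

    def ancestors_up_to(term, parents, K):
        """Nodes reachable by following parent links for at most K steps."""
        frontier, seen = {term}, set()
        for _ in range(K):
            frontier = {p for t in frontier for p in parents.get(t, ())} - seen
            seen |= frontier
        return seen

    def relation(x, y, parents, K):
        anc_x = ancestors_up_to(x, parents, K)
        anc_y = ancestors_up_to(y, parents, K)
        if x in anc_y:
            return "ancestor(x, y)"
        if y in anc_x:
            return "ancestor(y, x)"
        if anc_x & anc_y:
            return "sibling"  # a common subsumer exists within K levels
        return "no relation"

    # Hypothetical toy hierarchy:
    parents = {"Toyota": ["car maker"], "Honda": ["car maker"],
               "car maker": ["company"]}
    print(relation("Toyota", "Honda", parents, K=2))      # -> sibling
    print(relation("car maker", "Toyota", parents, K=2))  # -> ancestor(x, y)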

29 It is also possible that the sibling relation can be recognized by clustering terms using vector space models. [sent-112, score-0.404]

30 If so, two terms are siblings if they belong to the same cluster. [sent-113, score-0.19]

31 To cast the problem of identifying taxonomic relations between two terms x and y from a machine learning perspective, we model it as a multi-class classification problem. [sent-114, score-0.71]
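
As a rough illustration of this multi-class cast (the summary does not name the learner, so scikit-learn's LogisticRegression stands in; feature extraction for a term pair is assumed to happen elsewhere):

    from sklearn.linear_model import LogisticRegression

    LABELS = ["ancestor(x,y)", "ancestor(y,x)", "sibling", "no relation"]

    def train_local_classifier(X_train, y_train):
        """X_train: one feature vector per term pair (pmi, overlap ratios, ...);
        y_train: indices into LABELS."""
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X_train, y_train)
        return clf

    def predict_with_score(clf, features):
        """Return the predicted relation and a real-valued score for it,
        matching the probability-like output described in sentence 42."""
        probs = clf.predict_proba([features])[0]
        best = int(probs.argmax())
        return LABELS[best], float(probs[best])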

32 This paper focuses on studying a fundamental problem of recognizing taxonomic relations (given well-segmented terms) and leaves the orthogonal issues aside. [sent-116, score-0.636]

33 2 The Overview of TAREC Assume that we already have a learned local classifier that can classify taxonomic relations between any two terms. [sent-119, score-0.709]

34 Given two terms, TAREC uses Wikipedia and the local classifier in an inference model to make a final prediction on the taxonomic relation between these two. [sent-120, score-0.774]

35 In practice, we first train a local classifier (Section 4), then incorporate it into an inference model (Section 5) to classify taxonomic relations between terms. [sent-122, score-0.761]

36 Normalizing input terms to Wikipedia: Although most commonly used terms have corresponding Wikipedia articles, there are still many terms with no corresponding Wikipedia article. [sent-125, score-0.383]

37 We wish to find a replacement such that the taxonomic relation is unchanged. [sent-127, score-0.64]

38 We first make a query with the two input terms (e.g. …). [sent-135, score-0.191]

39 2): TAREC leverages an existing knowledge base to extract additional terms related to the input terms, to be used in the inference model in step 3. [sent-155, score-0.242]

40 1): TAREC performs several local predictions using the local classifier R (Section 4) on the two input terms and the additional ones. [sent-158, score-0.211]

41 4 Learning Taxonomic Relations The local classifier of TAREC is trained on pairs of terms with correct taxonomic relation labels (some examples are shown in Table 1). [sent-160, score-0.832]

42 The trained classifier, when applied to a new input pair of terms, will return a real-valued number which can be interpreted as the probability of the predicted label. [sent-161, score-0.248]

43 Given two input terms, we first build a semantic representation for each term by using a local search engine to retrieve a list of top articles in Wikipedia that are relevant to the term. [sent-175, score-0.31]

44 Once we have a semantic representation of each term, in the form of the extracted articles, we extract from it features that we use as the representation of the two input terms in our learning algorithm. [sent-188, score-0.217]

45 From now on, we use the titles of x, the texts of x, and the categories of x to refer to the titles, texts, and categories of the associated articles in the representation of x. [sent-193, score-0.222]

46 … and categories associated with two input terms x and y in Table 3. [sent-206, score-0.213]

47 To collect categories of a term, we take the categories of its associated articles and go up K levels in the Wikipedia category system. [sent-207, score-0.196]

48 We capture this feature by the pointwise mutual information (pmi), which quantifies the discrepancy between the probability of two terms appearing together versus the probability of each term appearing independently. [sent-210, score-0.213]
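
A minimal sketch of the pmi computation from raw counts (the count source and the zero-count handling below are assumptions; the summary does not specify them):

    import math

    def pmi(count_xy, count_x, count_y, total):
        """pmi(x, y) = log( p(x, y) / (p(x) * p(y)) ), from raw counts.
        Returns -inf when the terms never co-occur (an assumed convention)."""
        if count_xy == 0:
            return float("-inf")
        p_xy = count_xy / total
        p_x, p_y = count_x / total, count_y / total
        return math.log(p_xy / (p_x * p_y))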

49 Overlap Ratios: The overlap ratio features capture the fact that the titles of a term usually overlap with the categories of its descendants. [sent-213, score-0.224]

50 We measure this overlap as the ratio of the number of common phrases used in the titles of one term and the categories of the other term. [sent-214, score-0.224]

51 A phrase is considered to be a common phrase if it appears in the titles of one term and the categories of the other term and it is also of one of the following types: (1) the whole string of a category, or (2) the head in the root form of a category, or (3) the post-modifier of a category. [sent-217, score-0.327]

52 Given the term pair (City, Chicago), we observe that City matches the head of the category Cities in Illinois of the term Chicago. [sent-222, score-0.276]
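
A rough sketch of the overlap-ratio feature under these matching rules; the head and post-modifier extraction below is deliberately naive (a preposition split plus a crude plural stripper) and only stands in for whatever analysis the paper actually uses:

    def singularize(word):
        """Crude root form, enough for this illustration."""
        if word.endswith("ies"):
            return word[:-3] + "y"
        if word.endswith("s"):
            return word[:-1]
        return word

    def category_phrases(category):
        """Candidate common phrases: whole string, head (root form), post-modifier."""
        cat = category.lower()
        phrases = {cat}
        for prep in (" in ", " of ", " by "):
            if prep in cat:
                head, _, post = cat.partition(prep)
                phrases |= {singularize(head), post}
        return phrases

    def overlap_ratio(titles, categories):
        titles = {t.lower() for t in titles}
        hits = sum(1 for c in categories if titles & category_phrases(c))
        return hits / len(categories) if categories else 0.0

    # "City" matches the head of Chicago's category "Cities in Illinois":
    print(overlap_ratio(["City"], ["Cities in Illinois"]))  # -> 1.0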

53 5 Inference with Relational Constraints Once we have a local multi-class classifier that maps a given pair of terms to one of the four possible relations, we use a constraint-based optimization algorithm to improve this prediction. [sent-227, score-0.244]

54 The key insight behind our inference model is that if we consider more than two terms, there are logical constraints that restrict the possible relations among them. [sent-228, score-0.208]

55 George W. Bush cannot be an ancestor or sibling of president if we are confident that president is an ancestor of Bill Clinton, and Bill Clinton is a sibling of George W. Bush. [sent-230, score-0.638]

56 We call the combination of terms and their relations a term network. [sent-232, score-0.332]

57 Figure 2 shows some n-term networks consisting of two input terms (x, y), and additional terms z, w, v. [sent-233, score-0.313]

58 The aforementioned observations show that if we can obtain additional terms that are related to the two target terms, we can enforce such coherency relational constraints and make a global prediction that would improve the prediction of the taxonomic relation between the two given terms. [sent-234, score-0.893]

59 Figure 2: Examples of n-term networks with two input terms x and y; panels (a)-(d) are built over President/Bush, Red/Green, Honda/Toyota, and Celcius/meter. [sent-239, score-0.196]

60 For a subset Z ∈ Z, we construct a term network whose nodes are x, y and all elements in Z; the edge, e, between every two nodes is one of four taxonomic relations whose weight, w(e), is given by a local classifier (Section 4). [sent-248, score-0.709]

61 A relational constraint is defined as a term network consisting of only its “illegitimate” edge settings, those that belong to a pre-defined list of invalid edge combinations. [sent-254, score-0.228]

62 For example, Figure 2b shows an invalid network where red is a sibling of both green and blue, and green is an ancestor of blue. [sent-255, score-0.436]

63 In Figure 2d, Celcius and meter cannot be siblings because they are children of two sibling terms temperature and length. [sent-256, score-0.318]
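
A small sketch of how such relational constraints can act as hard filters. The two invalid settings below just mirror the Figure 2 examples, whereas the paper uses a manually built list of 35 constraints; a full implementation would also try every binding of network nodes to the constraint's placeholders, which this sketch skips by assuming fixed roles:

    # Relations on ordered node pairs: "anc" = first is ancestor of second,
    # "sib" = siblings. Node names here are placeholders for the roles in Figure 2.
    INVALID = [
        # Figure 2b: x sib z, x sib y, and z anc y cannot all hold
        # (red sib green, red sib blue, green anc blue).
        {("x", "z"): "sib", ("x", "y"): "sib", ("z", "y"): "anc"},
        # Figure 2d: children of two sibling parents cannot be siblings
        # (temperature sib length, yet Celcius sib meter).
        {("z", "x"): "anc", ("w", "y"): "anc",
         ("z", "w"): "sib", ("x", "y"): "sib"},
    ]

    def violates_constraints(network):
        """network: dict mapping ordered node pairs to relation labels."""
        return any(all(network.get(pair) == rel for pair, rel in bad.items())
                   for bad in INVALID)

    bad_net = {("x", "z"): "sib", ("x", "y"): "sib", ("z", "y"): "anc"}
    print(violates_constraints(bad_net))  # -> True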

64 In our work, we use relational constraints as hard constraints. (Figure 3: Our YAGO query patterns used to obtain related terms for “x”.) [sent-262, score-0.269]

65 After picking the best term network t∗ for every Z ∈ Z, we make the final decision on the taxonomic relation between x and y. [sent-266, score-0.615]

66 Equation (2) finds the best taxonomic relation between the two input terms by averaging, for each candidate relation, the scores of the best term networks that assign that relation to the input pair. [sent-276, score-1.104]
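
Equation (2) itself does not survive the extraction; a plausible LaTeX reconstruction from this description, writing Z_r for the subsets whose best network t*_Z assigns relation r to (x, y) and w(t*_Z) for that network's score (this notation is an assumption), is:

    r^{*} \;=\; \arg\max_{r}\;
        \frac{1}{|\mathcal{Z}_{r}|} \sum_{Z \in \mathcal{Z}_{r}} w\big(t^{*}_{Z}\big)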

67 2 Extracting Related Terms In the inference model, we need to obtain other terms that are related to the two input terms. [sent-278, score-0.215]

68 The related term space is a space of direct ancestors, siblings and children in a particular resource. [sent-280, score-0.183]

69 To map our input terms to entities in YAGO, we use the MEANS relation defined in the YAGO ontology. [sent-288, score-0.295]

70 This allows us to obtain direct ancestors of an entity by using the TYPE relation which gives the entity’s classes. [sent-290, score-0.196]

71 We call the test set of this … (However, YAGO by itself is weaker than our approach in identifying taxonomic relations; see Section 6.)

72 Four types of taxonomic relations are covered with balanced numbers of examples in all data sets. [sent-323, score-0.6]

73 For a fair comparison, we first generate a semantic representation for each input term by following the same procedure used in TAREC described in Section 4. [sent-331, score-0.21]

74 The titles and categories of the articles in the representation of each input term are then extracted. [sent-332, score-0.328]

75 A term is an ancestor of the other if at least one of its titles is in the categories of the other term. [sent-334, score-0.374]

76 The ancestor relation is checked first, then sibling, and finally no relation. [sent-336, score-0.282]

77 We grouped some semantically similar classes for the purpose of classifying taxonomic relations. [sent-338, score-0.515]

78 TAREC (local) uses only our local classifier to identify taxonomic relations by choosing the relation with the highest confidence. [sent-349, score-0.841]

79 A term is an ancestor of the other if it can be found as a hypernym after going up K levels in the hierarchy from the other term. [sent-354, score-0.284]

80 Otherwise, there is no relation between the two input terms. [sent-356, score-0.185]

81 Because the YAGO ontology is a combination of Wikipedia and WordNet, this system is expected to perform well at recognizing taxonomic relations. [sent-360, score-0.588]

82 To access a term’s ancestors and siblings, we use patterns 1 and 2 in Figure 3 to map a term to the ontology and move up the ontology. [sent-361, score-0.238]

83 If an input term is not recognized by these systems, they return no relation. [sent-363, score-0.19]

84 We manually construct a pre-defined list of 35 relational constraints to use in the inference model. [sent-365, score-0.183]

85 We believe that our machine learning based classifier is very flexible in extracting features of the two input terms and thus in predicting their taxonomic relation. [sent-377, score-0.704]

86 On the other hand, other systems rely heavily on string matching techniques to map input terms to their respective ontologies, and these techniques are inflexible and brittle. [sent-378, score-0.2]

87 This clearly shows one limitation of using existing structured resources to classify taxonomic relations. [sent-379, score-0.481]

88 However, our procedure of building semantic representations for input terms, described in Section 4, ties the senses of the two input terms and thus, implicitly, may capture some sense information. [sent-381, score-0.326]

89 We also do not use this procedure in Yago07 because in YAGO, a term is mapped onto the ontology by using the MEANS operator (in Pattern 1, Figure 3). [sent-383, score-0.174]

90 However, since the information available in TypeDM does not support predicting the ancestor relation between terms, we only evaluate TypeDM in classifying sibling vs. no relation. [sent-395, score-0.41]

91 Out of 20,410 noun terms in TypeDM, there are only 345 terms overlapping with the instances in OrgData-I and belonging to 10 significant semantic classes. [sent-398, score-0.274]

92 The rest of the overlapping instances are randomly paired to make a dataset of 4,000 pairs of terms, balanced in the number of sibling and no-relation pairs. [sent-400, score-0.242]

93 TAREC (local), with the local classifier trained on the training set (with 4 relation classes) of Dataset-I, gives 78. [sent-403, score-0.241]

94 We also re-train and evaluate the local classifier of TAREC on the same training set but without ancestor relation pairs. [sent-407, score-0.391]

95 However, TypeDM can only work in a limited setting where semantic classes are given in advance, which is not practical in real-world applications; and of course, TypeDM does not help to recognize ancestor relations between two terms. [sent-412, score-0.387]

96 Precision and Recall: We want to study TAREC on individual taxonomic relations using Precision and Recall. [sent-415, score-0.6]

97 Sibling and no relation are the most difficult relations to classify. [sent-417, score-0.251]

98 The improvement of TAREC over TAREC (local) on the Wiki and WordNet test sets shows the contribution of the inference model, whereas the improvement on the non-Wikipedia test set shows the contribution of normalizing input terms to Wikipedia. [sent-442, score-0.215]

99 7 Conclusions We studied an important component of many computational linguistics tasks: given two target terms, determine the taxonomic relation between them. [sent-459, score-0.613]

100 Global inference for entity and relation identification via a linear programming formulation. [sent-607, score-0.184]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('tarec', 0.612), ('taxonomic', 0.481), ('yago', 0.2), ('wikipedia', 0.161), ('ancestor', 0.15), ('relation', 0.132), ('sibling', 0.128), ('relations', 0.119), ('typedm', 0.112), ('terms', 0.11), ('term', 0.103), ('relational', 0.094), ('siblings', 0.08), ('bush', 0.08), ('toyota', 0.075), ('ontology', 0.071), ('titles', 0.071), ('suchanek', 0.064), ('ancestors', 0.064), ('wordnet', 0.063), ('classifier', 0.06), ('semantic', 0.054), ('input', 0.053), ('stationary', 0.053), ('inference', 0.052), ('articles', 0.051), ('hdelimiteri', 0.05), ('kova', 0.05), ('lojze', 0.05), ('green', 0.05), ('blue', 0.05), ('categories', 0.05), ('snow', 0.05), ('local', 0.049), ('george', 0.049), ('ontologies', 0.048), ('category', 0.045), ('kozareva', 0.043), ('honda', 0.043), ('president', 0.041), ('networks', 0.04), ('hierarchical', 0.04), ('taxonomy', 0.039), ('ford', 0.039), ('coherency', 0.039), ('eligo', 0.037), ('inflexible', 0.037), ('ponzetto', 0.037), ('rudi', 0.037), ('tstrube', 0.037), ('constraints', 0.037), ('textual', 0.036), ('recognizing', 0.036), ('baroni', 0.036), ('maccartney', 0.036), ('pantel', 0.036), ('classes', 0.034), ('recognized', 0.034), ('roth', 0.032), ('sammons', 0.032), ('clinton', 0.032), ('lenci', 0.032), ('wiki', 0.032), ('hypernym', 0.031), ('bases', 0.031), ('network', 0.031), ('recognize', 0.03), ('pennacchiotti', 0.029), ('cb', 0.029), ('query', 0.028), ('replacement', 0.027), ('red', 0.027), ('entailment', 0.027), ('bill', 0.027), ('knowledge', 0.027), ('seeds', 0.026), ('built', 0.026), ('pair', 0.025), ('abad', 0.025), ('camry', 0.025), ('celcius', 0.025), ('chakrabarti', 0.025), ('espresso', 0.025), ('hred', 0.025), ('marjan', 0.025), ('navigating', 0.025), ('nde', 0.025), ('nigeria', 0.025), ('nonwikipedia', 0.025), ('politician', 0.025), ('sarmento', 0.025), ('saxena', 0.025), ('strube', 0.025), ('subclassof', 0.025), ('taiwan', 0.025), ('tonnes', 0.025), ('vef', 0.025), ('vikas', 0.025), ('vyas', 0.025), ('cities', 0.025)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000014 31 emnlp-2010-Constraints Based Taxonomic Relation Classification

Author: Quang Do ; Dan Roth

Abstract: Determining whether two terms in text have an ancestor relation (e.g. Toyota and car) or a sibling relation (e.g. Toyota and Honda) is an essential component of textual inference in NLP applications such as Question Answering, Summarization, and Recognizing Textual Entailment. Significant work has been done on developing stationary knowledge sources that could potentially support these tasks, but these resources often suffer from low coverage and noise, and are inflexible when needed to support terms that are not identical to those placed in them, making their use as general-purpose background knowledge resources difficult. In this paper, rather than building a stationary hierarchical structure of terms and relations, we describe a system that, given two terms, determines the taxonomic relation between them using a machine learning-based approach that makes use of existing resources. Moreover, we develop a global constraint optimization inference process and use it to leverage an existing knowledge base and to enforce relational constraints among terms, thus improving the classifier predictions. Our experimental evaluation shows that our approach significantly outperforms other systems built upon existing well-known knowledge sources.

2 0.2406524 12 emnlp-2010-A Semi-Supervised Method to Learn and Construct Taxonomies Using the Web

Author: Zornitsa Kozareva ; Eduard Hovy

Abstract: Although many algorithms have been developed to harvest lexical resources, few organize the mined terms into taxonomies. We propose (1) a semi-supervised algorithm that uses a root concept, a basic level concept, and recursive surface patterns to learn automatically from the Web hyponym-hypernym pairs subordinated to the root; (2) a Web based concept positioning procedure to validate the learned pairs’ is-a relations; and (3) a graph algorithm that derives from scratch the integrated taxonomy structure of all the terms. Comparing results with WordNet, we find that the algorithm misses some concepts and links, but also that it discovers many additional ones lacking in WordNet. We evaluate the taxonomization power of our method on reconstructing parts of the WordNet taxonomy. Experiments show that starting from scratch, the algorithm can reconstruct 62% of the WordNet taxonomy for the regions tested.

3 0.095038362 28 emnlp-2010-Collective Cross-Document Relation Extraction Without Labelled Data

Author: Limin Yao ; Sebastian Riedel ; Andrew McCallum

Abstract: We present a novel approach to relation extraction that integrates information across documents, performs global inference and requires no labelled text. In particular, we tackle relation extraction and entity identification jointly. We use distant supervision to train a factor graph model for relation extraction based on an existing knowledge base (Freebase, derived in parts from Wikipedia). For inference we run an efficient Gibbs sampler that leads to linear time joint inference. We evaluate our approach both for an indomain (Wikipedia) and a more realistic outof-domain (New York Times Corpus) setting. For the in-domain setting, our joint model leads to 4% higher precision than an isolated local approach, but has no advantage over a pipeline. For the out-of-domain data, we benefit strongly from joint modelling, and observe improvements in precision of 13% over the pipeline, and 15% over the isolated baseline.

4 0.084200226 11 emnlp-2010-A Semi-Supervised Approach to Improve Classification of Infrequent Discourse Relations Using Feature Vector Extension

Author: Hugo Hernault ; Danushka Bollegala ; Mitsuru Ishizuka

Abstract: Several recent discourse parsers have employed fully-supervised machine learning approaches. These methods require human annotators to beforehand create an extensive training corpus, which is a time-consuming and costly process. On the other hand, unlabeled data is abundant and cheap to collect. In this paper, we propose a novel semi-supervised method for discourse relation classification based on the analysis of cooccurring features in unlabeled data, which is then taken into account for extending the feature vectors given to a classifier. Our experimental results on the RST Discourse Treebank corpus and Penn Discourse Treebank indicate that the proposed method brings a significant improvement in classification accuracy and macro-average F-score when small training datasets are used. For instance, with training sets of c.a. 1000 labeled instances, the proposed method brings improvements in accuracy and macro-average F-score up to 50% compared to a baseline classifier. We believe that the proposed method is a first step towards detecting low-occurrence relations, which is useful for domains with a lack of annotated data.

5 0.075840183 112 emnlp-2010-Unsupervised Discovery of Negative Categories in Lexicon Bootstrapping

Author: Tara McIntosh

Abstract: Multi-category bootstrapping algorithms were developed to reduce semantic drift. By extracting multiple semantic lexicons simultaneously, a category’s search space may be restricted. The best results have been achieved through reliance on manually crafted negative categories. Unfortunately, identifying these categories is non-trivial, and their use shifts the unsupervised bootstrapping paradigm towards a supervised framework. We present NEG-FINDER, the first approach for discovering negative categories automatically. NEG-FINDER exploits unsupervised term clustering to generate multiple negative categories during bootstrapping. Our algorithm effectively removes the necessity of manual intervention and formulation of negative categories, with performance closely approaching that obtained using negative categories defined by a domain expert.

6 0.070806533 27 emnlp-2010-Clustering-Based Stratified Seed Sampling for Semi-Supervised Relation Classification

7 0.070166051 59 emnlp-2010-Identifying Functional Relations in Web Text

8 0.068062991 21 emnlp-2010-Automatic Discovery of Manner Relations and its Applications

9 0.061845984 107 emnlp-2010-Towards Conversation Entailment: An Empirical Investigation

10 0.054234967 51 emnlp-2010-Function-Based Question Classification for General QA

11 0.053354938 72 emnlp-2010-Learning First-Order Horn Clauses from Web Text

12 0.052069765 20 emnlp-2010-Automatic Detection and Classification of Social Events

13 0.043161076 33 emnlp-2010-Cross Language Text Classification by Model Translation and Semi-Supervised Learning

14 0.042035777 38 emnlp-2010-Dual Decomposition for Parsing with Non-Projective Head Automata

15 0.041413102 66 emnlp-2010-Inducing Word Senses to Improve Web Search Result Clustering

16 0.040202942 121 emnlp-2010-What a Parser Can Learn from a Semantic Role Labeler and Vice Versa

17 0.03988719 93 emnlp-2010-Resolving Event Noun Phrases to Their Verbal Mentions

18 0.039403997 7 emnlp-2010-A Mixture Model with Sharing for Lexical Semantics

19 0.037906945 87 emnlp-2010-Nouns are Vectors, Adjectives are Matrices: Representing Adjective-Noun Constructions in Semantic Space

20 0.037713788 73 emnlp-2010-Learning Recurrent Event Queries for Web Search


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.151), (1, 0.096), (2, -0.042), (3, 0.233), (4, 0.089), (5, -0.067), (6, -0.064), (7, 0.08), (8, 0.027), (9, 0.014), (10, -0.051), (11, -0.19), (12, -0.068), (13, -0.254), (14, 0.027), (15, 0.115), (16, -0.134), (17, -0.008), (18, -0.056), (19, -0.055), (20, 0.028), (21, -0.048), (22, 0.183), (23, 0.237), (24, -0.081), (25, 0.018), (26, -0.276), (27, -0.132), (28, 0.021), (29, -0.052), (30, -0.208), (31, -0.028), (32, -0.143), (33, 0.134), (34, 0.088), (35, -0.077), (36, -0.009), (37, -0.032), (38, -0.085), (39, -0.049), (40, 0.064), (41, -0.042), (42, 0.075), (43, 0.004), (44, 0.049), (45, 0.07), (46, -0.052), (47, 0.035), (48, -0.063), (49, 0.063)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.94191146 31 emnlp-2010-Constraints Based Taxonomic Relation Classification

Author: Quang Do ; Dan Roth

Abstract: Determining whether two terms in text have an ancestor relation (e.g. Toyota and car) or a sibling relation (e.g. Toyota and Honda) is an essential component of textual inference in NLP applications such as Question Answering, Summarization, and Recognizing Textual Entailment. Significant work has been done on developing stationary knowledge sources that could potentially support these tasks, but these resources often suffer from low coverage, noise, and are inflexible when needed to support terms that are not identical to those placed in them, making their use as general purpose background knowledge resources difficult. In this paper, rather than building a stationary hierarchical structure of terms and relations, we describe a system that, given two terms, determines the taxonomic relation between them using a machine learning-based approach that makes use of existing resources. Moreover, we develop a global constraint opti- mization inference process and use it to leverage an existing knowledge base also to enforce relational constraints among terms and thus improve the classifier predictions. Our experimental evaluation shows that our approach significantly outperforms other systems built upon existing well-known knowledge sources.

2 0.9128511 12 emnlp-2010-A Semi-Supervised Method to Learn and Construct Taxonomies Using the Web

Author: Zornitsa Kozareva ; Eduard Hovy

Abstract: Although many algorithms have been developed to harvest lexical resources, few organize the mined terms into taxonomies. We propose (1) a semi-supervised algorithm that uses a root concept, a basic level concept, and recursive surface patterns to learn automatically from the Web hyponym-hypernym pairs subordinated to the root; (2) a Web based concept positioning procedure to validate the learned pairs’ is-a relations; and (3) a graph algorithm that derives from scratch the integrated taxonomy structure of all the terms. Comparing results with WordNet, we find that the algorithm misses some concepts and links, but also that it discovers many additional ones lacking in WordNet. We evaluate the taxonomization power of our method on reconstructing parts of the WordNet taxonomy. Experiments show that starting from scratch, the algorithm can reconstruct 62% of the WordNet taxonomy for the regions tested.

3 0.31891942 28 emnlp-2010-Collective Cross-Document Relation Extraction Without Labelled Data

Author: Limin Yao ; Sebastian Riedel ; Andrew McCallum

Abstract: We present a novel approach to relation extraction that integrates information across documents, performs global inference and requires no labelled text. In particular, we tackle relation extraction and entity identification jointly. We use distant supervision to train a factor graph model for relation extraction based on an existing knowledge base (Freebase, derived in parts from Wikipedia). For inference we run an efficient Gibbs sampler that leads to linear time joint inference. We evaluate our approach both for an indomain (Wikipedia) and a more realistic outof-domain (New York Times Corpus) setting. For the in-domain setting, our joint model leads to 4% higher precision than an isolated local approach, but has no advantage over a pipeline. For the out-of-domain data, we benefit strongly from joint modelling, and observe improvements in precision of 13% over the pipeline, and 15% over the isolated baseline.

4 0.31655684 59 emnlp-2010-Identifying Functional Relations in Web Text

Author: Thomas Lin ; Mausam ; Oren Etzioni

Abstract: Determining whether a textual phrase denotes a functional relation (i.e., a relation that maps each domain element to a unique range element) is useful for numerous NLP tasks such as synonym resolution and contradiction detection. Previous work on this problem has relied on either counting methods or lexico-syntactic patterns. However, determining whether a relation is functional, by analyzing mentions of the relation in a corpus, is challenging due to ambiguity, synonymy, anaphora, and other linguistic phenomena. We present the LEIBNIZ system that overcomes these challenges by exploiting the synergy between the Web corpus and freely-available knowledge resources such as Freebase. It first computes multiple typed functionality scores, representing functionality of the relation phrase when its arguments are constrained to specific types. It then aggregates these scores to predict the global functionality for the phrase. LEIBNIZ outperforms previous work, increasing area under the precision-recall curve from 0.61 to 0.88. We utilize LEIBNIZ to generate the first public repository of automatically-identified functional relations.

5 0.28866756 11 emnlp-2010-A Semi-Supervised Approach to Improve Classification of Infrequent Discourse Relations Using Feature Vector Extension

Author: Hugo Hernault ; Danushka Bollegala ; Mitsuru Ishizuka

Abstract: Several recent discourse parsers have employed fully-supervised machine learning approaches. These methods require human annotators to beforehand create an extensive training corpus, which is a time-consuming and costly process. On the other hand, unlabeled data is abundant and cheap to collect. In this paper, we propose a novel semi-supervised method for discourse relation classification based on the analysis of cooccurring features in unlabeled data, which is then taken into account for extending the feature vectors given to a classifier. Our experimental results on the RST Discourse Treebank corpus and Penn Discourse Treebank indicate that the proposed method brings a significant improvement in classification accuracy and macro-average F-score when small training datasets are used. For instance, with training sets of c.a. 1000 labeled instances, the proposed method brings improvements in accuracy and macro-average F-score up to 50% compared to a baseline classifier. We believe that the proposed method is a first step towards detecting low-occurrence relations, which is useful for domains with a lack of annotated data.

6 0.27334267 21 emnlp-2010-Automatic Discovery of Manner Relations and its Applications

7 0.24568309 112 emnlp-2010-Unsupervised Discovery of Negative Categories in Lexicon Bootstrapping

8 0.21645932 107 emnlp-2010-Towards Conversation Entailment: An Empirical Investigation

9 0.20106797 27 emnlp-2010-Clustering-Based Stratified Seed Sampling for Semi-Supervised Relation Classification

10 0.18459782 109 emnlp-2010-Translingual Document Representations from Discriminative Projections

11 0.18199761 72 emnlp-2010-Learning First-Order Horn Clauses from Web Text

12 0.1728822 51 emnlp-2010-Function-Based Question Classification for General QA

13 0.15752167 66 emnlp-2010-Inducing Word Senses to Improve Web Search Result Clustering

14 0.15463448 20 emnlp-2010-Automatic Detection and Classification of Social Events

15 0.14638616 87 emnlp-2010-Nouns are Vectors, Adjectives are Matrices: Representing Adjective-Noun Constructions in Semantic Space

16 0.14622112 121 emnlp-2010-What a Parser Can Learn from a Semantic Role Labeler and Vice Versa

17 0.14250028 17 emnlp-2010-An Efficient Algorithm for Unsupervised Word Segmentation with Branching Entropy and MDL

18 0.13571243 7 emnlp-2010-A Mixture Model with Sharing for Lexical Semantics

19 0.13419266 120 emnlp-2010-What's with the Attitude? Identifying Sentences with Attitude in Online Discussions

20 0.13138948 93 emnlp-2010-Resolving Event Noun Phrases to Their Verbal Mentions


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(3, 0.016), (10, 0.043), (12, 0.036), (13, 0.293), (22, 0.022), (29, 0.062), (30, 0.057), (32, 0.016), (52, 0.021), (56, 0.059), (62, 0.039), (66, 0.096), (72, 0.059), (76, 0.025), (77, 0.02), (82, 0.025), (87, 0.019), (89, 0.012)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.70120764 31 emnlp-2010-Constraints Based Taxonomic Relation Classification

Author: Quang Do ; Dan Roth

Abstract: Determining whether two terms in text have an ancestor relation (e.g. Toyota and car) or a sibling relation (e.g. Toyota and Honda) is an essential component of textual inference in NLP applications such as Question Answering, Summarization, and Recognizing Textual Entailment. Significant work has been done on developing stationary knowledge sources that could potentially support these tasks, but these resources often suffer from low coverage, noise, and are inflexible when needed to support terms that are not identical to those placed in them, making their use as general purpose background knowledge resources difficult. In this paper, rather than building a stationary hierarchical structure of terms and relations, we describe a system that, given two terms, determines the taxonomic relation between them using a machine learning-based approach that makes use of existing resources. Moreover, we develop a global constraint opti- mization inference process and use it to leverage an existing knowledge base also to enforce relational constraints among terms and thus improve the classifier predictions. Our experimental evaluation shows that our approach significantly outperforms other systems built upon existing well-known knowledge sources.

2 0.55566651 57 emnlp-2010-Hierarchical Phrase-Based Translation Grammars Extracted from Alignment Posterior Probabilities

Author: Adria de Gispert ; Juan Pino ; William Byrne

Abstract: We report on investigations into hierarchical phrase-based translation grammars based on rules extracted from posterior distributions over alignments of the parallel text. Rather than restrict rule extraction to a single alignment, such as Viterbi, we instead extract rules based on posterior distributions provided by the HMM word-to-word alignment model. We define translation grammars progressively by adding classes of rules to a basic phrase-based system. We assess these grammars in terms of their expressive power, measured by their ability to align the parallel text from which their rules are extracted, and the quality of the translations they yield. In Chinese-to-English translation, we find that rule extraction from posteriors gives translation improvements. We also find that grammars with rules with only one nonterminal, when extracted from posteriors, can outperform more complex grammars extracted from Viterbi alignments. Finally, we show that the best way to exploit source-to-target and target-to-source alignment models is to build two separate systems and combine their output translation lattices.

3 0.44747889 55 emnlp-2010-Handling Noisy Queries in Cross Language FAQ Retrieval

Author: Danish Contractor ; Govind Kothari ; Tanveer Faruquie ; L V Subramaniam ; Sumit Negi

Abstract: Recent times have seen a tremendous growth in mobile-based data services that allow people to use Short Message Service (SMS) to access these data services. In a multilingual society it is essential that data services that were developed for a specific language be made accessible through other local languages also. In this paper, we present a service that allows a user to query a Frequently-Asked-Questions (FAQ) database built in a local language (Hindi) using noisy SMS English queries. The inherent noise in the SMS queries, along with the language mismatch, makes this a challenging problem. We handle these two problems by formulating the query similarity over FAQ questions as a combinatorial search problem where the search space consists of combinations of dictionary variations of the noisy query and its top-N translations. We demonstrate the effectiveness of our approach on a real-life dataset.

4 0.4473094 100 emnlp-2010-Staying Informed: Supervised and Semi-Supervised Multi-View Topical Analysis of Ideological Perspective

Author: Amr Ahmed ; Eric Xing

Abstract: With the proliferation of user-generated articles over the web, it becomes imperative to develop automated methods that are aware of the ideological bias implicit in a document collection. While there exist methods that can classify the ideological bias of a given document, little has been done toward understanding the nature of this bias on a topical level. In this paper we address the problem of modeling ideological perspective on a topical level using a factored topic model. We develop efficient inference algorithms using Collapsed Gibbs sampling for posterior inference, and give various evaluations and illustrations of the utility of our model on various document collections with promising results. Finally we give a Metropolis-Hasting inference algorithm for a semi-supervised extension with decent results.

5 0.44568583 72 emnlp-2010-Learning First-Order Horn Clauses from Web Text

Author: Stefan Schoenmackers ; Jesse Davis ; Oren Etzioni ; Daniel Weld

Abstract: Even the entire Web corpus does not explicitly answer all questions, yet inference can uncover many implicit answers. But where do inference rules come from? This paper investigates the problem of learning inference rules from Web text in an unsupervised, domain-independent manner. The SHERLOCK system, described herein, is a first-order learner that acquires over 30,000 Horn clauses from Web text. SHERLOCK embodies several innovations, including a novel rule scoring function based on Statistical Relevance (Salmon et al., 1971) which is effective on ambiguous, noisy and incomplete Web extractions. Our experiments show that inference over the learned rules discovers three times as many facts (at precision 0.8) as the TEXTRUNNER system which merely extracts facts explicitly stated in Web text.

6 0.44364798 32 emnlp-2010-Context Comparison of Bursty Events in Web Search and Online Media

7 0.44281411 51 emnlp-2010-Function-Based Question Classification for General QA

8 0.44027352 35 emnlp-2010-Discriminative Sample Selection for Statistical Machine Translation

9 0.43643346 105 emnlp-2010-Title Generation with Quasi-Synchronous Grammar

10 0.43629587 67 emnlp-2010-It Depends on the Translation: Unsupervised Dependency Parsing via Word Alignment

11 0.43581894 18 emnlp-2010-Assessing Phrase-Based Translation Models with Oracle Decoding

12 0.43578714 120 emnlp-2010-What's with the Attitude? Identifying Sentences with Attitude in Online Discussions

13 0.43447858 47 emnlp-2010-Example-Based Paraphrasing for Improved Phrase-Based Statistical Machine Translation

14 0.43418741 78 emnlp-2010-Minimum Error Rate Training by Sampling the Translation Lattice

15 0.43321475 65 emnlp-2010-Inducing Probabilistic CCG Grammars from Logical Form with Higher-Order Unification

16 0.43320912 49 emnlp-2010-Extracting Opinion Targets in a Single and Cross-Domain Setting with Conditional Random Fields

17 0.43294111 20 emnlp-2010-Automatic Detection and Classification of Social Events

18 0.43267286 63 emnlp-2010-Improving Translation via Targeted Paraphrasing

19 0.43050414 58 emnlp-2010-Holistic Sentiment Analysis Across Languages: Multilingual Supervised Latent Dirichlet Allocation

20 0.43013033 45 emnlp-2010-Evaluating Models of Latent Document Semantics in the Presence of OCR Errors