acl acl2012 acl2012-217 knowledge-graph by maker-knowledge-mining

217 acl-2012-Word Sense Disambiguation Improves Information Retrieval


Source: pdf

Author: Zhi Zhong ; Hwee Tou Ng

Abstract: Previous research has conflicting conclusions on whether word sense disambiguation (WSD) systems can improve information retrieval (IR) performance. In this paper, we propose a method to estimate sense distributions for short queries. Together with the senses predicted for words in documents, we propose a novel approach to incorporate word senses into the language modeling approach to IR and also exploit the integration of synonym relations. Our experimental results on standard TREC collections show that using the word senses tagged by a supervised WSD system, we obtain significant improvements over a state-of-the-art IR system.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract Previous research has conflicting conclusions on whether word sense disambiguation (WSD) systems can improve information retrieval (IR) performance. [sent-4, score-0.488]

2 Together with the senses predicted for words in documents, we propose a novel approach to incorporate word senses into the language modeling approach to IR and also exploit the integration of synonym relations. [sent-6, score-0.891]

3 Our experimental results on standard TREC collections show that using the word senses tagged by a supervised WSD system, we obtain significant improvements over a state-of-the-art IR system. [sent-7, score-0.439]

4 1 Introduction Word sense disambiguation (WSD) is the task of identifying the correct meaning of a word in context. [sent-8, score-0.341]

5 The 273 ambiguities of these query words can hurt retrieval precision. [sent-16, score-0.42]

6 Identifying the correct meaning of the ambiguous words in both queries and documents can help improve retrieval precision. [sent-17, score-0.364]

7 Some of the early research showed a drop in retrieval performance by using word senses (Krovetz and Croft, 1992; Voorhees, 1993). [sent-22, score-0.488]

8 Some other experiments observed improvements by integrating word senses in IR systems (Sch u¨tze and Pedersen, 1995; Gonzalo et al. [sent-23, score-0.372]

9 This paper proposes the use of word senses to improve the performance of IR. [sent-27, score-0.341]

10 We propose an approach to annotate the senses for short queries. [sent-28, score-0.341]

11 We incorporate word senses into the language modeling (LM) approach to IR (Ponte and Croft, 1998), and utilize sense synonym relations to further improve the performance. [sent-29, score-0.825]

12 We present our method of generating word senses for query terms in Section 4, followed by our novel method of incorporating word senses and their synonyms into the LM approach in Section 5. [sent-38, score-1.074]

13 Krovetz and Croft (1992) studied the sense matches between terms in a query and the document collection. [sent-42, score-0.628]

14 They concluded that the benefits of WSD in IR are not as expected because query words have skewed sense distribution and the collocation effect from other query terms already performs some disambiguation. [sent-43, score-0.855]

15 Sanderson (1994; 2000) used pseudowords to introduce artificial word ambiguity in order to study the impact of sense ambiguity on IR. [sent-44, score-0.325]

16 They obtained significant improvements by representing documents and queries with accurate senses as well as synsets (synonym sets). [sent-48, score-0.589]

17 Several works attempted to disambiguate terms in both queries and documents with the senses predefined in hand-crafted sense inventories, and then used the senses to perform indexing and retrieval. [sent-52, score-1.303]

18 However, it is hard to judge the effect of word senses because of the overall poor performance of both their baseline method and their system. [sent-60, score-0.374]

19 (2004) tagged words with 25 root senses of nouns in WordNet. [sent-62, score-0.365]

20 Their retrieval method maintained the stem-based index and adjusted the term weight in a document according to its sense matching result with the query. [sent-63, score-0.544]

21 They attributed the improvement achieved on TREC collections to their coarse-grained, consistent, and flexible sense tagging method. [sent-64, score-0.33]

22 The integration of senses into the traditional stem-based index overcomes some of the negative impact of disambiguation errors. [sent-65, score-0.447]

23 Different from using predefined sense inventories, Sch u¨tze and Pedersen (1995) induced the sense inventory directly from the text retrieval collection. [sent-66, score-0.726]

24 For each word, its occurrences were clustered into senses based on the similarities of their contexts. [sent-67, score-0.341]

25 Their experiments showed that using senses improved retrieval performance, and the combination of word-based ranking and sense-based ranking can further improve performance. [sent-68, score-0.488]

26 Because the sense inventory is collection-dependent, it is also hard to expand the text collection without redoing the preprocessing. [sent-70, score-0.388]

27 Some researchers achieved improvements by expanding the disambiguated query words with synonyms and some other information from WordNet (Voorhees, 1994; Liu et al. [sent-72, score-0.358]

28 It is important to reduce the negative impact of erroneous disambiguation, and the integration of senses into a traditional term index, such as a stem-based index, is a possible solution. [sent-79, score-0.424]

29 It is also interesting to investigate the utilization of semantic relations among senses in IR. [sent-81, score-0.381]

30 1 The language modeling approach In the language modeling approach to IR, language models are constructed for each query q and each document d in a text collection C. [sent-84, score-0.362]

31 The documents in C are ranked by the distance to a given query q according to the language models. [sent-85, score-0.366]

32 One of the commonly used measures of the similarity between query model and document model is negative Kullback-Leibler (KL) divergence (Lafferty and Zhai, 2001). [sent-88, score-0.319]
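
As a minimal sketch of this ranking criterion (not the paper's implementation): dropping the query-model entropy, which is constant per query, leaves the rank-equivalent score Σ_t p(t|θq) log p(t|θd). The function name and the toy models below are hypothetical, and the document model is assumed to be pre-smoothed so every query term has nonzero probability.

```python
import math

def neg_kl_score(query_model, smoothed_doc_prob):
    """Score a document by (rank-equivalent) negative KL divergence between
    the query model and the document model: sum_t p(t|q) * log p(t|d).
    smoothed_doc_prob must return a nonzero probability for every term,
    e.g. via Dirichlet-prior smoothing."""
    return sum(p_q * math.log(smoothed_doc_prob(t))
               for t, p_q in query_model.items())

# Toy usage with made-up probabilities:
q_model = {"territorial": 0.4, "waters": 0.3, "dispute": 0.3}
d_prob = lambda t: {"waters": 0.02, "dispute": 0.005}.get(t, 1e-6)
print(neg_kl_score(q_model, d_prob))
```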

33 In the first step, ranked documents are retrieved from C by a normal retrieval method with the original query q. [sent-98, score-0.625]

34 In the second step, a number of terms are selected from the top k ranked documents Dq for query expansion, under the assumption that these k documents are relevant to the query. [sent-99, score-0.523]

35 Then, the expanded query is used to retrieve the documents from C. [sent-100, score-0.366]

36 Finally, the relevance model is interpolated with the original query model: p(t|θq_prf) = λ p(t|θq_r) + (1 − λ) p(t|θq), (4) where parameter λ controls the amount of feedback. [sent-107, score-0.328]
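
Equation (4) can be stated directly in code. A minimal sketch: `interpolate_prf` is a hypothetical name, and both models are represented as term-to-probability dictionaries.

```python
def interpolate_prf(query_model, relevance_model, lam):
    """Eq. (4): p(t|theta_q_prf) = lam * p(t|theta_q_r) + (1 - lam) * p(t|theta_q),
    where lam in [0, 1] controls the amount of feedback."""
    terms = set(query_model) | set(relevance_model)
    return {t: lam * relevance_model.get(t, 0.0)
               + (1.0 - lam) * query_model.get(t, 0.0)
            for t in terms}
```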

37 The usage of X is supposed to provide more relevant feedback documents and feedback query terms. [sent-110, score-0.532]

38 Then, we propose the method of assigning senses to query terms. [sent-113, score-0.647]

39 1 Word sense disambiguation system Previous research shows that translations in another language can be used to disambiguate the meanings of words (Chan and Ng, 2005; Zhong and Ng, 2009). [sent-115, score-0.441]

40 2 Estimating sense distributions for query terms In IR, terms in both queries and the text collection can be ambiguous. [sent-135, score-0.781]

41 Similar to the PRF method, assuming that the top k documents retrieved by the basic method are relevant to the query, these k documents can be used to represent query q (Broder et al. [sent-143, score-0.603]

42 We propose a method to estimate the sense probabilities of each query term of q from these top k retrieved documents. [sent-146, score-0.703]

43 Given a query q, suppose Dq is the set of top k documents retrieved by the basic method, with the probability score p(q|θd) assigned to d ∈ Dq. [sent-149, score-0.507]

44 Basically, we utilized the sense distribution of the words with the same stem form in Dq as a proxy to estimate the sense probabilities of a query term. [sent-151, score-0.872]

45 The retrieval scores are used to weight the information from the corresponding retrieved documents in Dq. [sent-152, score-0.319]
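
A sketch of one plausible reading of this estimation step: the extract says the sense distributions of same-stem words in the top-k documents Dq are combined, weighted by the retrieval scores p(q|θd), but does not give the exact formula. The score-weighted average below and the names used are therefore assumptions.

```python
def estimate_sense_distribution(retrieved):
    """Estimate p(t, s, q) for a query term t from the top-k retrieved
    documents Dq.  `retrieved` is a list of (p_q_given_d, sense_counts)
    pairs, where sense_counts maps each sense to the tagged count of
    occurrences of words sharing t's stem in that document.  The weighting
    scheme (a score-weighted average, normalized into a distribution) is an
    assumption."""
    totals = {}
    for p_q_given_d, sense_counts in retrieved:
        for sense, count in sense_counts.items():
            totals[sense] = totals.get(sense, 0.0) + p_q_given_d * count
    z = sum(totals.values())
    return {s: v / z for s, v in totals.items()} if z > 0.0 else {}
```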

46 5 Incorporating Senses into Language Modeling Approaches In this section, we propose to incorporate senses into the LM approach to IR. [sent-153, score-0.341]

47 Then, we describe the integration of sense synonym relations into our model. [sent-154, score-0.526]

48 1 Incorporating senses as smoothing With the method described in Section 4. [sent-156, score-0.414]

49 2, both the terms in queries and documents have been sense tagged. [sent-157, score-0.526]

50 Suppose p(t, s, q) is the probability of tagging a query term t ∈ q as sense s, and p(w, s, d) is the probability of tagging a word occurrence w ∈ d as sense s. [sent-159, score-1.014]

51 Given a query q and a document d in a text collection C, we want to re-estimate the language models by making use of the sense information assigned to them. [sent-160, score-0.593]

52 Define the frequency of s in d as stf(s, d) = Σ_{w∈d} p(w, s, d), and the frequency of s in C as stf(s, C) = Σ_{d∈C} stf(s, d). [sent-161, score-0.694]

53 Define the frequencies of a sense set S in d and C as stf(S, d) = Σ_{s∈S} stf(s, d) and stf(S, C) = Σ_{s∈S} stf(s, C). [sent-162, score-1.153]

54 {p(t, s1, q), ..., p(t, sn, q)} is the vector of probabilities assigned to the senses of t, and W: {stf(s1, d), ...} is the corresponding vector of sense frequencies in d. [sent-169, score-0.341]

55 (6) In sen(t, q, d), the last item stf(S(t, q), d) calculates the sum of the sense frequencies of t's senses in d, which represents the amount of t's sense information in d. [sent-176, score-1.173]

56 The first item α^∆cos(t,q,d) is a weight on the sense information reflecting the relative sense similarity ∆cos(t, q, d), where α is a positive parameter that controls the impact of sense similarity. [sent-177, score-0.831]

57 When ∆cos(t, q, d) is larger than zero, such that the sense similarity of d and q according to t is above the average, the weight for the sense information is larger than 1; otherwise, it is less than 1. [sent-178, score-0.554]

58 For t ∉ q, because the sense set S(t, q) is empty, stf(S(t, q), d) equals zero and tfsen(t, d) is identical to tf(t, d). [sent-180, score-0.575]

59 With senses incorporated, the term frequency is influenced by the sense information. [sent-181, score-0.624]
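
Sentences 55–59 describe Equation (6) piece by piece; the additive form below reconstructs it from those pieces (tfsen reduces to tf exactly when stf(S(t,q), d) = 0, matching sentence 58). The exponential reading of α^∆cos and the helper names are assumptions consistent with, but not verbatim from, the extract.

```python
import math

def delta_cos(v, w, avg_cos):
    """Relative sense similarity: cosine between the query-side sense
    probability vector v and the document-side sense frequency vector w,
    minus the per-term average cosine, so positive means above average."""
    num = sum(a * b for a, b in zip(v, w))
    den = math.sqrt(sum(a * a for a in v)) * math.sqrt(sum(b * b for b in w))
    return (num / den if den else 0.0) - avg_cos

def tf_sen(tf_td, stf_Stq_d, dcos, alpha=2.0):
    """Eq. (6) sketch: tfsen(t, d) = tf(t, d) + alpha**dcos * stf(S(t,q), d).
    With alpha > 1, dcos > 0 weights the sense information above 1 and
    dcos < 0 below 1; alpha = 2.0 here is an arbitrary placeholder, not the
    tuned value."""
    return tf_td + (alpha ** dcos) * stf_Stq_d
```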

60 In this part, we further integrate the synonym relations of senses into the LM approach. [sent-186, score-0.548]

61 Suppose R(s) is the set of senses having a synonym relation with sense s. [sent-187, score-0.785]

62 Define S(q) as the set of senses of query q, S(q) = ∪_{t∈q} S(t, q), and define R(s, q) = R(s) − S(q). [sent-188, score-0.614]

63 We update the frequency of a query term t in d by integrating the synonym relations as follows: tfsyn(t, d) = tfsen(t, d) + syn(t, q, d), (8) where syn(t, q, d) is a function measuring the synonym information in d: syn(t, q, d) = Σ_{s∈S(t)} β(s, q) p(t, s, q) stf(R(s, q), d). [sent-189, score-0.72]

64 The last item stf(R(s, q), d) in syn(t, q, d) is the sum of the sense frequencies of R(s, q) in d. [sent-190, score-0.517]

65 Notice that the synonym senses already appearing in S(q) are not included in the calculation, because the information of these senses has been used in some other places in the retrieval function. [sent-191, score-0.996]

66 The frequency of synonyms, stf(R(s, q), d), is weighted by p(t, s, q) together with a scaling function β(s, q): β(s, q) = min(1, stf(s, C) / stf(R(s, q), C)). [sent-192, score-0.453]

67 When stf (s, C), the frequency of sense s in C, is less than stf (R(s, q) , C), the frequency of R(s, q) in C, the function β(s, q) scales down the impact of synonyms according to the ratio of these two frequencies. [sent-193, score-0.813]

68 The scaling function makes sure that the overall impact of the synonym senses is not greater than the original word senses. [sent-194, score-0.508]
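
Equations (8)–(9) as reconstructed from sentences 63–68. This is a sketch under the assumption that the per-sense quantities have been precomputed; the function and variable names are illustrative.

```python
def beta(stf_s_C, stf_Rsq_C):
    """Eq. (9): beta(s, q) = min(1, stf(s, C) / stf(R(s,q), C)), which
    scales down synonym evidence when the synonyms are collectively more
    frequent in C than the original sense."""
    return min(1.0, stf_s_C / stf_Rsq_C) if stf_Rsq_C > 0.0 else 1.0

def tf_syn(tfsen_td, per_sense):
    """Eq. (8): tfsyn(t, d) = tfsen(t, d) + syn(t, q, d), with
    syn(t, q, d) = sum over s in S(t) of
        beta(s, q) * p(t, s, q) * stf(R(s,q), d).
    `per_sense` holds one (p_tsq, stf_Rsq_d, stf_s_C, stf_Rsq_C) tuple per
    sense s in S(t)."""
    syn = sum(beta(stf_s_C, stf_Rsq_C) * p_tsq * stf_Rsq_d
              for p_tsq, stf_Rsq_d, stf_s_C, stf_Rsq_C in per_sense)
    return tfsen_td + syn
```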

69 With this language model, the probability of a query term in a document is enlarged by the synonyms of its senses; the more of its synonym senses appear in a document, the higher the probability. [sent-196, score-0.951]

70 Consequently, documents with more synonym senses of the query terms will get higher retrieval rankings. [sent-197, score-1.053]

71 We use 50 queries from TREC6 Ad Hoc task as the development set, and evaluate on 50 queries from TREC7 Ad Hoc task, 50 queries from TREC8 Ad Hoc task, 50 queries from ROBUST 2003 (RB03), and 49 queries from ROBUST 2004 (RB04). [sent-203, score-0.647]

72 The first column lists the query topics, and the column #qry is the number of queries. [sent-207, score-0.353]

73 The column Ave gives the average query length, and the column Rels is the total number of relevant documents. [sent-208, score-0.385]

74 We use the Lemur toolkit as the basic retrieval tool, and select the default unigram LM approach based on KL-divergence and the Dirichlet-prior smoothing method in Lemur as our basic retrieval approach. [sent-211, score-0.39]

75 We set the smoothing parameter in Equation 3 to 400 by tuning on the TREC6 query set over the range {100, 400, 700, 1000, 1500, 2000, 3000, 4000, 5000}. [sent-215, score-0.313]
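
The basic retrieval approach named in sentences 74–75 uses standard Dirichlet-prior smoothing, which can be written out directly. The exact form of Equation 3 is not shown in this extract, so the standard formulation below is an assumption; the function name is illustrative, and μ = 400 is the value the extract reports tuning.

```python
def dirichlet_prob(tf_td, doc_len, p_t_C, mu=400.0):
    """Dirichlet-prior smoothed document language model:
    p(t|theta_d) = (tf(t, d) + mu * p(t|C)) / (|d| + mu).
    mu = 400 matches the smoothing parameter tuned on the TREC6 queries."""
    return (tf_td + mu * p_t_C) / (doc_len + mu)
```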

76 With this basic method, up to 100 top ranked documents Dq are retrieved for each query q from the extended text collection C ∪ X, for use in performing PRF and generating query senses. [sent-216, score-0.71]

77 To estimate the sense distributions for terms in query q, the method described in Section 4. [sent-235, score-0.615]

78 The method Even assigns equal probabilities to all senses for each word, and the method MFS tags the words with their corresponding most frequent senses. [sent-239, score-0.407]
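
The two baseline taggers in sentence 78 are straightforward; here is a minimal sketch with illustrative names, assuming a sense inventory and corpus sense frequencies are available.

```python
def even_tagger(senses):
    """Even baseline: assign equal probability to every sense of the word."""
    p = 1.0 / len(senses)
    return {s: p for s in senses}

def mfs_tagger(senses, sense_freq):
    """MFS baseline: put all probability on the most frequent sense, where
    sense_freq maps each sense to its corpus frequency."""
    best = max(senses, key=lambda s: sense_freq.get(s, 0))
    return {s: (1.0 if s == best else 0.0) for s in senses}
```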

79 Assuming that senses with the same Chinese translation are synonyms, we can generate a set of synonyms for each sense and then utilize these synonym relations in the method proposed in Section 5. [sent-244, score-0.635]

80 The column Comb shows the results on the union of the four test query sets. [sent-252, score-0.313]

81 The rows Stemprf+{MFS, Even, WSD} are the results of Stemprf incorporated with the senses generated for the original query terms, by applying the approach proposed in Section 5. [sent-257, score-0.644]

82 Compared to the baseline method, all methods with senses integrated achieve consistent improvements on all query sets. [sent-259, score-0.581]

83 The integration of senses into the baseline method has two aspects of impact. [sent-261, score-0.416]

84 First, the morphological roots of senses overcome the irregular inflection problem. [sent-262, score-0.368]

85 Thus, the documents containing the irregular inflections are retrieved when senses are integrated. [sent-263, score-0.54]

86 As sink in the query {ferry sinkings} is an irregular verb, the usage of senses improves the retrieval recall by retrieving the documents containing the inflection forms sunk, sank, and sunken. [sent-265, score-0.65]

87 Second, the senses output by the supervised WSD system help identify the meanings of query terms. [sent-266, score-0.681]

88 Take topic 357 {territorial waters dispute} for example: the stem form of waters is water, and its appropriate sense in this query should be water 水域 (body of water) instead of the most frequent sense of water 水 (H2O). [sent-267, score-0.992]

89 In Stemprf +WSD, we correctly identify the minority sense for this query term. [sent-268, score-0.55]

90 Although the most frequent sense of counterfeit, 冒牌 (not genuine), is not wrong, another sense, 伪钞 (forged money), is more accurate for this query term. [sent-270, score-0.887]

91 The integration of synonym relations further improves the performance no matter what kind of sense tagging method is applied. [sent-276, score-0.542]

92 It shows that the WSD technique can help choose the appropriate senses for synonym expansion. [sent-281, score-0.508]

93 We proposed a method for assigning senses to terms in short queries, and also described an approach to integrate senses into an LM approach for IR. [sent-287, score-0.747]

94 In the experiments on four query sets of the TREC collection, we compared the performance of a supervised WSD method and two baseline WSD methods. [sent-288, score-0.343]

95 Our experimental results showed that the incorporation of senses improved a state-of-the-art baseline, a stem-based LM approach with the PRF method. [sent-289, score-0.341]

96 Enhancing query translation with relevance feedback in translingual information retrieval. [sent-391, score-0.374]

97 Information retrieval using word senses: root sense tagging approach. [sent-401, score-0.471]

98 Using WordNet to disambiguate word senses for text retrieval. [sent-509, score-0.411]

99 Word sense disambiguation for all words without hard labor. [sent-534, score-0.341]

100 It Makes Sense: A widecoverage word sense disambiguation system for free text. [sent-541, score-0.341]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('wsd', 0.532), ('senses', 0.341), ('sense', 0.277), ('query', 0.273), ('stemprf', 0.222), ('stf', 0.212), ('synonym', 0.167), ('retrieval', 0.147), ('ir', 0.144), ('queries', 0.124), ('mfs', 0.118), ('trec', 0.116), ('cos', 0.114), ('syn', 0.103), ('dq', 0.099), ('documents', 0.093), ('sigir', 0.085), ('retrieved', 0.079), ('prf', 0.074), ('disambiguate', 0.07), ('gonzalo', 0.064), ('zhong', 0.064), ('disambiguation', 0.064), ('lm', 0.061), ('croft', 0.057), ('relevance', 0.055), ('synonyms', 0.054), ('pd', 0.052), ('expansion', 0.048), ('wordnet', 0.048), ('acm', 0.048), ('feedback', 0.046), ('document', 0.046), ('stem', 0.045), ('krovetz', 0.044), ('lemur', 0.044), ('stokoe', 0.044), ('tfsen', 0.044), ('voorhees', 0.044), ('collection', 0.043), ('integration', 0.042), ('zhai', 0.042), ('tf', 0.042), ('usage', 0.042), ('term', 0.041), ('smoothing', 0.04), ('column', 0.04), ('relations', 0.04), ('hoc', 0.039), ('water', 0.039), ('ponte', 0.039), ('tq', 0.039), ('chan', 0.038), ('calculates', 0.038), ('supervised', 0.037), ('sen', 0.036), ('pseudo', 0.035), ('xp', 0.034), ('method', 0.033), ('tze', 0.033), ('suppose', 0.033), ('terms', 0.032), ('pages', 0.032), ('relevant', 0.032), ('dt', 0.031), ('improvements', 0.031), ('meanings', 0.03), ('collections', 0.03), ('rows', 0.03), ('counterfeit', 0.03), ('counterfeiting', 0.03), ('dagger', 0.03), ('inquery', 0.03), ('mpodel', 0.03), ('ofp', 0.03), ('ogilvie', 0.03), ('sinkings', 0.03), ('probability', 0.029), ('frequency', 0.029), ('frequencies', 0.028), ('irregular', 0.027), ('logp', 0.027), ('development', 0.027), ('chinese', 0.027), ('lavrenko', 0.026), ('bendersky', 0.026), ('sch', 0.025), ('lafferty', 0.025), ('ps', 0.025), ('indexing', 0.025), ('inventory', 0.025), ('root', 0.024), ('ambiguity', 0.024), ('waters', 0.024), ('kwok', 0.024), ('broder', 0.024), ('unigram', 0.023), ('singapore', 0.023), ('tagging', 0.023), ('ad', 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999982 217 acl-2012-Word Sense Disambiguation Improves Information Retrieval

Author: Zhi Zhong ; Hwee Tou Ng

Abstract: Previous research has conflicting conclusions on whether word sense disambiguation (WSD) systems can improve information retrieval (IR) performance. In this paper, we propose a method to estimate sense distributions for short queries. Together with the senses predicted for words in documents, we propose a novel approach to incorporate word senses into the language modeling approach to IR and also exploit the integration of synonym relations. Our experimental results on standard TREC collections show that using the word senses tagged by a supervised WSD system, we obtain significant improvements over a state-of-the-art IR system.

2 0.31945568 152 acl-2012-Multilingual WSD with Just a Few Lines of Code: the BabelNet API

Author: Roberto Navigli ; Simone Paolo Ponzetto

Abstract: In this paper we present an API for programmatic access to BabelNet a wide-coverage multilingual lexical knowledge base and multilingual knowledge-rich Word Sense Disambiguation (WSD). Our aim is to provide the research community with easy-to-use tools to perform multilingual lexical semantic analysis and foster further research in this direction. – –

3 0.25737551 132 acl-2012-Learning the Latent Semantics of a Concept from its Definition

Author: Weiwei Guo ; Mona Diab

Abstract: In this paper we study unsupervised word sense disambiguation (WSD) based on sense definition. We learn low-dimensional latent semantic vectors of concept definitions to construct a more robust sense similarity measure wmfvec. Experiments on four all-words WSD data sets show significant improvement over the baseline WSD systems and LDA based similarity measures, achieving results comparable to state of the art WSD systems.

4 0.18611462 208 acl-2012-Unsupervised Relation Discovery with Sense Disambiguation

Author: Limin Yao ; Sebastian Riedel ; Andrew McCallum

Abstract: To discover relation types from text, most methods cluster shallow or syntactic patterns of relation mentions, but consider only one possible sense per pattern. In practice this assumption is often violated. In this paper we overcome this issue by inducing clusters of pattern senses from feature representations of patterns. In particular, we employ a topic model to partition entity pairs associated with patterns into sense clusters using local and global features. We merge these sense clusters into semantic relations using hierarchical agglomerative clustering. We compare against several baselines: a generative latent-variable model, a clustering method that does not disambiguate between path senses, and our own approach but with only local features. Experimental results show our proposed approach discovers dramatically more accurate clusters than models without sense disambiguation, and that incorporating global features, such as the document theme, is crucial.

5 0.15651685 212 acl-2012-Using Search-Logs to Improve Query Tagging

Author: Kuzman Ganchev ; Keith Hall ; Ryan McDonald ; Slav Petrov

Abstract: Syntactic analysis of search queries is important for a variety of information-retrieval tasks; however, the lack of annotated data makes training query analysis models difficult. We propose a simple, efficient procedure in which part-of-speech tags are transferred from retrieval-result snippets to queries at training time. Unlike previous work, our final model does not require any additional resources at run-time. Compared to a state-ofthe-art approach, we achieve more than 20% relative error reduction. Additionally, we annotate a corpus of search queries with partof-speech tags, providing a resource for future work on syntactic query analysis.

6 0.15209875 142 acl-2012-Mining Entity Types from Query Logs via User Intent Modeling

7 0.12307023 206 acl-2012-UWN: A Large Multilingual Lexical Knowledge Base

8 0.11241624 44 acl-2012-CSNIPER - Annotation-by-query for Non-canonical Constructions in Large Corpora

9 0.10421934 35 acl-2012-Automatically Mining Question Reformulation Patterns from Search Log Data

10 0.10405669 216 acl-2012-Word Epoch Disambiguation: Finding How Words Change Over Time

11 0.099074125 27 acl-2012-Arabic Retrieval Revisited: Morphological Hole Filling

12 0.08678779 66 acl-2012-DOMCAT: A Bilingual Concordancer for Domain-Specific Computer Assisted Translation

13 0.078637585 134 acl-2012-Learning to Find Translations and Transliterations on the Web

14 0.077879176 161 acl-2012-Polarity Consistency Checking for Sentiment Dictionaries

15 0.071867958 203 acl-2012-Translation Model Adaptation for Statistical Machine Translation with Monolingual Topic Information

16 0.063466705 117 acl-2012-Improving Word Representations via Global Context and Multiple Word Prototypes

17 0.059684601 99 acl-2012-Finding Salient Dates for Building Thematic Timelines

18 0.059198271 97 acl-2012-Fast and Scalable Decoding with Language Model Look-Ahead for Phrase-based Statistical Machine Translation

19 0.055439837 159 acl-2012-Pattern Learning for Relation Extraction with a Hierarchical Topic Model

20 0.054848462 157 acl-2012-PDTB-style Discourse Annotation of Chinese Text


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.181), (1, 0.065), (2, 0.023), (3, 0.092), (4, 0.046), (5, 0.207), (6, -0.035), (7, -0.006), (8, -0.026), (9, -0.065), (10, 0.3), (11, 0.067), (12, 0.263), (13, 0.152), (14, -0.011), (15, -0.332), (16, 0.082), (17, 0.118), (18, 0.008), (19, -0.04), (20, 0.147), (21, -0.015), (22, -0.082), (23, 0.004), (24, 0.012), (25, -0.144), (26, -0.029), (27, 0.077), (28, -0.054), (29, -0.1), (30, 0.033), (31, 0.071), (32, 0.038), (33, -0.112), (34, -0.015), (35, 0.059), (36, -0.001), (37, 0.019), (38, -0.058), (39, 0.012), (40, -0.036), (41, -0.017), (42, 0.06), (43, 0.002), (44, 0.006), (45, 0.042), (46, 0.029), (47, -0.023), (48, 0.085), (49, 0.049)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96232027 217 acl-2012-Word Sense Disambiguation Improves Information Retrieval

Author: Zhi Zhong ; Hwee Tou Ng

Abstract: Previous research has conflicting conclusions on whether word sense disambiguation (WSD) systems can improve information retrieval (IR) performance. In this paper, we propose a method to estimate sense distributions for short queries. Together with the senses predicted for words in documents, we propose a novel approach to incorporate word senses into the language modeling approach to IR and also exploit the integration of synonym relations. Our experimental results on standard TREC collections show that using the word senses tagged by a supervised WSD system, we obtain significant improvements over a state-of-the-art IR system.

2 0.75369537 152 acl-2012-Multilingual WSD with Just a Few Lines of Code: the BabelNet API

Author: Roberto Navigli ; Simone Paolo Ponzetto

Abstract: In this paper we present an API for programmatic access to BabelNet a wide-coverage multilingual lexical knowledge base and multilingual knowledge-rich Word Sense Disambiguation (WSD). Our aim is to provide the research community with easy-to-use tools to perform multilingual lexical semantic analysis and foster further research in this direction. – –

3 0.71166468 132 acl-2012-Learning the Latent Semantics of a Concept from its Definition

Author: Weiwei Guo ; Mona Diab

Abstract: In this paper we study unsupervised word sense disambiguation (WSD) based on sense definition. We learn low-dimensional latent semantic vectors of concept definitions to construct a more robust sense similarity measure wmfvec. Experiments on four all-words WSD data sets show significant improvement over the baseline WSD systems and LDA based similarity measures, achieving results comparable to state of the art WSD systems.

4 0.56218034 206 acl-2012-UWN: A Large Multilingual Lexical Knowledge Base

Author: Gerard de Melo ; Gerhard Weikum

Abstract: We present UWN, a large multilingual lexical knowledge base that describes the meanings and relationships of words in over 200 languages. This paper explains how link prediction, information integration and taxonomy induction methods have been used to build UWN based on WordNet and extend it with millions of named entities from Wikipedia. We additionally introduce extensions to cover lexical relationships, frame-semantic knowledge, and language data. An online interface provides human access to the data, while a software API enables applications to look up over 16 million words and names.

5 0.52569437 142 acl-2012-Mining Entity Types from Query Logs via User Intent Modeling

Author: Patrick Pantel ; Thomas Lin ; Michael Gamon

Abstract: We predict entity type distributions in Web search queries via probabilistic inference in graphical models that capture how entitybearing queries are generated. We jointly model the interplay between latent user intents that govern queries and unobserved entity types, leveraging observed signals from query formulations and document clicks. We apply the models to resolve entity types in new queries and to assign prior type distributions over an existing knowledge base. Our models are efficiently trained using maximum likelihood estimation over millions of real-world Web search queries. We show that modeling user intent significantly improves entity type resolution for head queries over the state ofthe art, on several metrics, without degradation in tail query performance.

6 0.45874742 208 acl-2012-Unsupervised Relation Discovery with Sense Disambiguation

7 0.43549204 44 acl-2012-CSNIPER - Annotation-by-query for Non-canonical Constructions in Large Corpora

8 0.43403307 212 acl-2012-Using Search-Logs to Improve Query Tagging

9 0.42718503 216 acl-2012-Word Epoch Disambiguation: Finding How Words Change Over Time

10 0.41838664 35 acl-2012-Automatically Mining Question Reformulation Patterns from Search Log Data

11 0.37862593 161 acl-2012-Polarity Consistency Checking for Sentiment Dictionaries

12 0.36913925 156 acl-2012-Online Plagiarized Detection Through Exploiting Lexical, Syntax, and Semantic Information

13 0.32985756 66 acl-2012-DOMCAT: A Bilingual Concordancer for Domain-Specific Computer Assisted Translation

14 0.31342289 145 acl-2012-Modeling Sentences in the Latent Space

15 0.29114735 7 acl-2012-A Computational Approach to the Automation of Creative Naming

16 0.2892704 134 acl-2012-Learning to Find Translations and Transliterations on the Web

17 0.28396359 49 acl-2012-Coarse Lexical Semantic Annotation with Supersenses: An Arabic Case Study

18 0.28123927 117 acl-2012-Improving Word Representations via Global Context and Multiple Word Prototypes

19 0.27487782 14 acl-2012-A Joint Model for Discovery of Aspects in Utterances

20 0.27031481 27 acl-2012-Arabic Retrieval Revisited: Morphological Hole Filling


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(8, 0.233), (25, 0.036), (26, 0.044), (28, 0.05), (30, 0.028), (37, 0.022), (39, 0.065), (52, 0.013), (57, 0.016), (74, 0.034), (82, 0.014), (84, 0.023), (85, 0.04), (90, 0.145), (92, 0.099), (94, 0.018), (99, 0.043)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.77112454 182 acl-2012-Spice it up? Mining Refinements to Online Instructions from User Generated Content

Author: Gregory Druck ; Bo Pang

Abstract: There are a growing number of popular web sites where users submit and review instructions for completing tasks as varied as building a table and baking a pie. In addition to providing their subjective evaluation, reviewers often provide actionable refinements. These refinements clarify, correct, improve, or provide alternatives to the original instructions. However, identifying and reading all relevant reviews is a daunting task for a user. In this paper, we propose a generative model that jointly identifies user-proposed refinements in instruction reviews at multiple granularities, and aligns them to the appropriate steps in the original instructions. Labeled data is not readily available for these tasks, so we focus on the unsupervised setting. In experiments in the recipe domain, our model provides 90. 1% F1 for predicting refinements at the review level, and 77.0% F1 for predicting refinement segments within reviews.

same-paper 2 0.76513672 217 acl-2012-Word Sense Disambiguation Improves Information Retrieval

Author: Zhi Zhong ; Hwee Tou Ng

Abstract: Previous research has conflicting conclusions on whether word sense disambiguation (WSD) systems can improve information retrieval (IR) performance. In this paper, we propose a method to estimate sense distributions for short queries. Together with the senses predicted for words in documents, we propose a novel approach to incorporate word senses into the language modeling approach to IR and also exploit the integration of synonym relations. Our experimental results on standard TREC collections show that using the word senses tagged by a supervised WSD system, we obtain significant improvements over a state-of-the-art IR system.

3 0.67278349 9 acl-2012-A Cost Sensitive Part-of-Speech Tagging: Differentiating Serious Errors from Minor Errors

Author: Hyun-Je Song ; Jeong-Woo Son ; Tae-Gil Noh ; Seong-Bae Park ; Sang-Jo Lee

Abstract: All types of part-of-speech (POS) tagging errors have been equally treated by existing taggers. However, the errors are not equally important, since some errors affect the performance of subsequent natural language processing (NLP) tasks seriously while others do not. This paper aims to minimize these serious errors while retaining the overall performance of POS tagging. Two gradient loss functions are proposed to reflect the different types of errors. They are designed to assign a larger cost to serious errors and a smaller one to minor errors. Through a set of POS tagging experiments, it is shown that the classifier trained with the proposed loss functions reduces serious errors compared to state-of-the-art POS taggers. In addition, the experimental result on text chunking shows that fewer serious errors help to improve the performance of sub- sequent NLP tasks.

4 0.64714622 174 acl-2012-Semantic Parsing with Bayesian Tree Transducers

Author: Bevan Jones ; Mark Johnson ; Sharon Goldwater

Abstract: Many semantic parsing models use tree transformations to map between natural language and meaning representation. However, while tree transformations are central to several state-of-the-art approaches, little use has been made of the rich literature on tree automata. This paper makes the connection concrete with a tree transducer based semantic parsing model and suggests that other models can be interpreted in a similar framework, increasing the generality of their contributions. In particular, this paper further introduces a variational Bayesian inference algorithm that is applicable to a wide class of tree transducers, producing state-of-the-art semantic parsing results while remaining applicable to any domain employing probabilistic tree transducers.

5 0.64447623 132 acl-2012-Learning the Latent Semantics of a Concept from its Definition

Author: Weiwei Guo ; Mona Diab

Abstract: In this paper we study unsupervised word sense disambiguation (WSD) based on sense definition. We learn low-dimensional latent semantic vectors of concept definitions to construct a more robust sense similarity measure wmfvec. Experiments on four all-words WSD data sets show significant improvement over the baseline WSD systems and LDA based similarity measures, achieving results comparable to state of the art WSD systems.

6 0.6441825 22 acl-2012-A Topic Similarity Model for Hierarchical Phrase-based Translation

7 0.64308214 167 acl-2012-QuickView: NLP-based Tweet Search

8 0.64231873 38 acl-2012-Bayesian Symbol-Refined Tree Substitution Grammars for Syntactic Parsing

9 0.63807249 31 acl-2012-Authorship Attribution with Author-aware Topic Models

10 0.63780272 156 acl-2012-Online Plagiarized Detection Through Exploiting Lexical, Syntax, and Semantic Information

11 0.63530529 36 acl-2012-BIUTEE: A Modular Open-Source System for Recognizing Textual Entailment

12 0.63374019 84 acl-2012-Estimating Compact Yet Rich Tree Insertion Grammars

13 0.63010168 214 acl-2012-Verb Classification using Distributional Similarity in Syntactic and Semantic Structures

14 0.62932634 61 acl-2012-Cross-Domain Co-Extraction of Sentiment and Topic Lexicons

15 0.62865347 28 acl-2012-Aspect Extraction through Semi-Supervised Modeling

16 0.62816393 142 acl-2012-Mining Entity Types from Query Logs via User Intent Modeling

17 0.62685579 10 acl-2012-A Discriminative Hierarchical Model for Fast Coreference at Large Scale

18 0.62623256 98 acl-2012-Finding Bursty Topics from Microblogs

19 0.62475413 148 acl-2012-Modified Distortion Matrices for Phrase-Based Statistical Machine Translation

20 0.62402004 88 acl-2012-Exploiting Social Information in Grounded Language Learning via Grammatical Reduction