acl acl2010 acl2010-250 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Gerard de Melo ; Gerhard Weikum
Abstract: Wikipedia articles in different languages are connected by interwiki links that are increasingly being recognized as a valuable source of cross-lingual information. Unfortunately, large numbers of links are imprecise or simply wrong. In this paper, techniques to detect such problems are identified. We formalize their removal as an optimization task based on graph repair operations. We then present an algorithm with provable properties that uses linear programming and a region growing technique to tackle this challenge. This allows us to transform Wikipedia into a much more consistent multilingual register of the world’s entities and concepts.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract Wikipedia articles in different languages are connected by interwiki links that are increasingly being recognized as a valuable source of cross-lingual information. [sent-3, score-0.442]
2 Unfortunately, large numbers of links are imprecise or simply wrong. [sent-4, score-0.197]
3 In this paper, techniques to detect such problems are identified. [sent-5, score-0.029]
4 We formalize their removal as an optimization task based on graph repair operations. [sent-6, score-0.235]
5 We then present an algorithm with provable properties that uses linear programming and a region growing technique to tackle this challenge. [sent-7, score-0.052]
6 This allows us to transform Wikipedia into a much more consistent multilingual register of the world’s entities and concepts. [sent-8, score-0.13]
7 The open community-maintained encyclopedia Wikipedia has not only turned the Internet into a more useful and linguistically diverse source of information, but is also increasingly being used in computational applications as a large-scale source of linguistic and encyclopedic knowledge. [sent-10, score-0.024]
8 To allow cross-lingual navigation, Wikipedia offers cross-lingual interwiki links that for instance connect the Indonesian article about Albert Einstein to the corresponding articles in over 100 other languages. [sent-11, score-0.383]
9 In the ideal case, a set of articles connected directly or indirectly via such links would all describe the same entity or concept. [sent-13, score-0.332]
10 Due to conceptual drift, different granularities, as well as mistakes made by editors, we frequently find concepts as different as economics and manager in the same connected component. [sent-14, score-0.117]
11 Filtering out inaccurate links enables us to exploit Wikipedia’s multilinguality in a much safer manner and allows us to create a multilingual register of named entities. [sent-15, score-0.395]
12 Our research contributions are: 1) We identify criteria to detect inaccurate connections in Wikipedia’s cross-lingual link structure. [sent-19, score-0.202]
13 2) We formalize the task of removing such links as an optimization problem. [sent-20, score-0.228]
14 3) We introduce an algorithm that attempts to repair the cross-lingual graph in a minimally invasive way. [sent-21, score-0.206]
15 This algorithm has an approximation guarantee with respect to optimal solutions. [sent-22, score-0.111]
16 4) We show how this algorithm can be used to combine all editions of Wikipedia into a single large-scale multilingual register of named entities and concepts. [sent-23, score-0.173]
17 2 Detecting Inaccurate Links In this paper, we model the union of cross-lingual links provided by all editions of Wikipedia as an undirected graph G = (V, E) with edge weights w(e) for e ∈ E. [sent-24, score-0.447]
18 Individual links are weighted equally by defining w(e) = 2 if there are reciprocal links between the two pages, 1 if there is a single link, and 0 otherwise. [sent-26, score-0.147]
19 Alternatively, one could easily plug in cross-lingual measures of semantic relatedness between article texts. [sent-29, score-0.082]
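As a minimal sketch of this weighting scheme (using the networkx library; the page identifiers and link pairs below are illustrative placeholders, not actual Wikipedia data):

import networkx as nx

# Hypothetical input: directed interwiki links as (source_page, target_page) pairs.
links = {("en:Albert_Einstein", "id:Albert_Einstein"),
         ("id:Albert_Einstein", "en:Albert_Einstein"),
         ("eo:Germana_Imperiestro", "de:Deutsches_Kaiserreich")}

G = nx.Graph()
for u, v in links:
    # w(e) = 2 for reciprocal link pairs, 1 for a single unreciprocated link;
    # unlinked pairs (weight 0) are simply not added as edges.
    G.add_edge(u, v, weight=2 if (v, u) in links else 1)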
20 It turns out that an astonishing number of connected components in this graph harbour inaccurate links between articles. [sent-30, score-0.557]
21 For instance, the Esperanto article ‘Germana Imperiestro’ is about German emperors and another Esperanto article ‘Germana Imperiestra Regno’ is about the German Empire, but, as of June 2010, both are linked to the English and German articles about the German Empire. [sent-31, score-0.232]
22 Over time, some inaccurate links may be fixed, but in this and in large numbers of other cases, the imprecise connection has persisted for many years. [sent-32, score-0.319]
23 In order to detect such cases, we need to have some way of specifying that two articles are likely to be distinct. [sent-33, score-0.029]
24 Figure 1: Connected component with inaccurate links (simplified) [sent-36, score-0.269]
25 2.1 Distinctness Assertions Figure 1 shows a connected component that conflates the concept of television as a medium with the concept of TV sets as devices. [sent-37, score-0.293]
26 Articles like ‘Television’ are distinct from ‘Television set’ and ‘TV set’. [sent-40, score-0.073]
27 In general, we may have several sets of entities Di,1, . . . [sent-41, score-0.029]
28 . . . , Di,li, for which we assume that any two entities u, v from different sets are pairwise distinct with some degree of confidence or weight. [sent-44, score-0.125]
29 (Distinctness Assertions) Given a set of nodes V, a distinctness assertion is a collection Di = (Di,1, . . . , Di,li) of pairwise disjoint subsets of V, any two nodes from different subsets being asserted to be distinct. [sent-49, score-0.864]
30 We found that many components with inaccurate links can be identified automatically with the following distinctness assertions. [sent-55, score-0.961]
31 (Distinctness between articles from the same Wikipedia edition) For each language-specific edition of Wikipedia, a separate assertion (Di,1, Di,2, . . . ) [sent-57, score-0.314]
32 can be made, where each Di,j contains an individual article together with its respective redirection pages. [sent-60, score-0.151]
33 Two articles from the same Wikipedia edition very likely describe distinct concepts unless they are redirects of each other. [sent-61, score-0.233]
34 For example, ‘Georgia (country)’ is distinct from ‘Georgia (U.S. state)’. [sent-62, score-0.073]
35 Additionally, there are also redirects that are clearly marked by a category or template as involving topic drift, e.g. [sent-65, score-0.117]
36 redirects from songs to albums or artists, from products to companies, etc. [sent-67, score-0.115]
37 We keep such redirects in a Di,j distinct from that of their redirect targets. [sent-68, score-0.19]
38 (Distinctness between categories from the same Wikipedia edition) For each language-specific edition of Wikipedia, a separate assertion (Di,1, Di,2, . . . ) [sent-70, score-0.246]
39 is made, where each Di,j contains a category page together with any redirects. [sent-73, score-0.025]
40 (Distinctness for links with anchor identifiers) The English ‘Division by zero’, for instance, links to the German ‘Null#Division’. [sent-76, score-0.342]
41 The latter is only a part of a larger article about the number zero in general, so we can make a distinctness assertion to separate ‘Division by zero’ from ‘Null’ . [sent-77, score-0.942]
42 In general, for each interwiki link or redirection with an anchor identifier, we add an assertion (Di,1, Di,2) where Di,1,Di,2 represent the respective articles without anchor identifiers. [sent-78, score-0.521]
43 These three types of distinctness assertions are instantiated for all articles and categories of all Wikipedia editions. [sent-79, score-1.005]
44 The assertion weights are tunable; the simplest choice is using a uniform weight for all assertions (note that these weights are different from the edge weights in the graph). [sent-80, score-0.658]
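A small sketch of how the first and third assertion types could be instantiated; the input structures (article lists, redirect maps, anchor links) are hypothetical placeholders for whatever is extracted from the Wikipedia dumps:

def article_assertion(articles, redirects):
    # Type 1: within one language edition, each article together with its redirect
    # pages forms one group Di,j; pages in different groups are asserted distinct.
    return [[a] + redirects.get(a, []) for a in articles]

def anchor_assertion(source, target_with_anchor):
    # Type 3: a link such as en:'Division by zero' -> de:'Null#Division' yields an
    # assertion that the two articles, with the anchor identifier stripped, are distinct.
    return [[source], [target_with_anchor.split("#")[0]]]

# Example calls (hypothetical identifiers):
# article_assertion(["en:Georgia_(country)", "en:Georgia_(U.S._state)"], {})
# anchor_assertion("en:Division_by_zero", "de:Null#Division")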
45 2.2 Enforcing Consistency Given a graph G representing cross-lingual links between Wikipedia pages, as well as distinctness assertions D1, . . . [sent-83, score-1.198]
46 , Dn with weights w(Di), we may find that nodes that are asserted to be distinct are in the same connected component. [sent-86, score-0.343]
47 We can then try to apply repair operations to reconcile the graph’s link structure with the distinctness assertions and obtain global consistency. [sent-87, score-0.828]
48 There are two ways to modify the input, and for each we can also consider the corresponding weights as a sort of cost that quantifies how much we are changing the original input: a) Edge cutting: We may remove an edge e ∈ E from the graph, paying cost w(e). [sent-88, score-0.286]
49 b) Distinctness assertion relaxation: We may remove a node v ∈ V from a distinctness assertion Di, paying cost w(Di). [sent-89, score-0.888]
50 Removing edges allows us to split connected components into multiple smaller components, thereby ensuring that two nodes asserted to be distinct are no longer connected directly or indirectly. [sent-90, score-0.5]
51 In Figure 1, for instance, we could delete the edge from the Spanish ‘TV set’ article to the Japanese ‘television’ article. [sent-91, score-0.141]
52 In contrast, removing nodes from distinctness assertions means that we decide to give up our claim of them being distinct, instead allowing them to share a connected component. [sent-92, score-1.159]
53 Our reliance on costs is based on the assumption that the link structure or topology of the graph provides the best indication of which cross-lingual links to remove. [sent-93, score-0.345]
54 In Figure 1, we have distinctness assertions between nodes in two densely connected clusters that are tied together only by a single spurious link. [sent-94, score-1.132]
55 In such cases, edge removals can easily yield separate connected components. [sent-95, score-0.201]
56 When, however, the two nodes are strongly connected via many different paths with high weights, we may instead opt for removing one of the two nodes from the distinctness assertion. [sent-96, score-0.935]
57 The aim will be to balance the costs for removing edges from the graph with the costs for removing nodes from distinctness assertions to produce a consistent solution with a minimal total repair cost. [sent-97, score-1.485]
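As a worked example with hypothetical weights: if the two clusters in Figure 1 are tied together by a single spurious edge of weight w(e) = 1 while the distinctness assertion carries weight w(Di) = 5, cutting the edge costs 1 and relaxing the assertion for one node costs 5, so the cut is preferred; if instead the clusters were joined by four edges of weight 2, cutting would cost 8 and removing one node from the assertion (cost 5) would be the cheaper repair.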
58 We accommodate our knowledge about distinctness while staying as close as possible to what Wikipedia provides as input. [sent-98, score-0.685]
59 Let G be an undirected graph with a set of vertices V and a set of edges E weighted by w : E → R. [sent-100, score-0.206]
60 If we use a set C ⊆ E to specify which edges we want to cut from the original graph, and sets Ui to specify which nodes we want to remove from distinctness assertions, we can begin by defining WDGS solutions as follows. [sent-101, score-0.805]
61 Given a graph G = (V, E) and n distinctness assertions D1, . [sent-104, score-1.051]
62 . . . , Dn, a tuple S = (C, U1, . . . , Un) is a valid WDGS solution if and only if ∀i, j, k ≠ j, u ∈ Di,j \ Ui, v ∈ Di,k \ Ui: P(u, v, E \ C) = ∅, i.e. [sent-110, score-0.054]
63 the set of paths from u to v in the graph (V, E \ C) is empty. [sent-112, score-0.114]
64 Let w : E → R be a weight function for edges e ∈ E, and w(Di) (i = 1, . . . [sent-115, score-0.065]
65 , n) be weights for the distinctness assertions. [sent-118, score-0.08]
66 The (total) cost of a WDGS solution S = (C, U1, . . . , Un) is c(S) = Σe∈C w(e) + Σi w(Di) · |Ui|. [sent-119, score-0.089]
67 A WDGS problem instance P consists of a graph G = (V, E) with edge weights w(e) and n distinctness assertions D1, . [sent-127, score-1.167]
68 . . . , Dn with weights w(Di). The objective consists in finding a valid solution S = (C, U1, . . . , Un) with minimal total cost c(S). [sent-131, score-0.054]
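A direct, unoptimized check of these two definitions (a sketch assuming the networkx graph representation from above; it only validates and prices a given candidate solution, it does not search for an optimal one):

import networkx as nx

def is_valid(G, C, assertions, U):
    # Valid iff, after cutting the edges in C, no two surviving nodes from different
    # groups of the same distinctness assertion remain connected by any path.
    H = G.copy()
    H.remove_edges_from(C)
    for Di, Ui in zip(assertions, U):
        groups = [[v for v in Dij if v not in Ui] for Dij in Di]
        for j in range(len(groups)):
            for k in range(j + 1, len(groups)):
                for u in groups[j]:
                    for v in groups[k]:
                        if u in H and v in H and nx.has_path(H, u, v):
                            return False
    return True

def cost(G, C, U, assertion_weights):
    # c(S) = sum of cut edge weights + sum over assertions of w(Di) * |Ui|.
    return (sum(G[u][v]["weight"] for u, v in C)
            + sum(w * len(Ui) for w, Ui in zip(assertion_weights, U)))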
69 It turns out that finding optimal solutions efficiently is a hard problem (proofs in Appendix A). [sent-138, score-0.025]
70 3 Approximation Algorithm Due to the hardness of WDGS, we devise a polynomial-time approximation algorithm with an approximation factor of 4 ln(nq + 1), where n is the number of distinctness assertions and q = max_{i,j} |Di,j|. [sent-142, score-1.089]
71 This means that for all problem instances P, we can guarantee c(S(P)) / c(S∗(P)) ≤ 4 ln(nq + 1), where S(P) is the solution determined by our algorithm, and S∗(P) is an optimal solution. [sent-143, score-0.115]
72 Note that this approximation guarantee is independent of how long each Di is, and that it merely represents an upper bound on the worst case scenario. [sent-144, score-0.086]
73 Our algorithm first solves a linear program (LP) relaxation of the original problem, which gives us hints as to which edges should most likely be cut and which nodes should most likely be removed from distinctness assertions. [sent-146, score-0.872]
74 Note that this is a continuous LP, not an integer linear program (ILP); the latter would not be tractable due to the large number of variables and constraints of the problem. [sent-147, score-0.059]
75 After solving the linear program, a new extended graph is constructed and the optimal LP solution is used to define a distance metric on it. [sent-148, score-0.218]
76 The final solution is obtained by smartly selecting regions in this extended graph as the individual output components, employing a region growing technique in the spirit of the seminal work by Leighton and Rao (1999). [sent-149, score-0.25]
77 Edges that cross the boundaries of these regions are cut. [sent-150, score-0.03]
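Once the edge set C to cut has been determined (by whatever means, including the LP-based region-growing procedure just described), assembling the multilingual register reduces to taking connected components; a minimal sketch, again assuming the networkx representation:

import networkx as nx

def build_register(G, C):
    # Each connected component that remains after cutting the selected edges is taken
    # as one multilingual entry (entity or concept) of the register.
    H = G.copy()
    H.remove_edges_from(C)
    return [sorted(component) for component in nx.connected_components(H)]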
wordName wordTfidf (topN-words)
[('distinctness', 0.66), ('wdgs', 0.344), ('assertions', 0.277), ('wikipedia', 0.177), ('television', 0.151), ('assertion', 0.151), ('links', 0.147), ('inaccurate', 0.122), ('connected', 0.117), ('graph', 0.114), ('di', 0.103), ('repair', 0.092), ('redirects', 0.092), ('tv', 0.09), ('interwiki', 0.086), ('article', 0.082), ('register', 0.075), ('distinct', 0.073), ('edition', 0.07), ('articles', 0.068), ('un', 0.066), ('edges', 0.065), ('ui', 0.064), ('edge', 0.059), ('esperanto', 0.057), ('germana', 0.057), ('weights', 0.057), ('solution', 0.054), ('nodes', 0.053), ('dn', 0.052), ('removing', 0.052), ('link', 0.051), ('mpi', 0.05), ('imprecise', 0.05), ('paying', 0.05), ('georgia', 0.05), ('planck', 0.05), ('approximation', 0.05), ('lp', 0.049), ('anchor', 0.048), ('redirection', 0.046), ('nq', 0.043), ('editions', 0.043), ('asserted', 0.043), ('german', 0.041), ('criterion', 0.036), ('guarantee', 0.036), ('division', 0.035), ('relaxation', 0.035), ('cost', 0.035), ('program', 0.034), ('ucken', 0.033), ('costs', 0.033), ('components', 0.032), ('drift', 0.031), ('regions', 0.03), ('saarbr', 0.03), ('detect', 0.029), ('entities', 0.029), ('formalize', 0.029), ('max', 0.029), ('definition', 0.028), ('informatics', 0.027), ('remove', 0.027), ('region', 0.027), ('undirected', 0.027), ('multilingual', 0.026), ('category', 0.025), ('optimal', 0.025), ('staying', 0.025), ('ejr', 0.025), ('lecting', 0.025), ('dew', 0.025), ('melo', 0.025), ('cile', 0.025), ('hve', 0.025), ('eed', 0.025), ('einstein', 0.025), ('conflates', 0.025), ('densely', 0.025), ('astonishing', 0.025), ('safer', 0.025), ('redirect', 0.025), ('constrast', 0.025), ('separate', 0.025), ('linear', 0.025), ('increasingly', 0.024), ('zero', 0.024), ('null', 0.024), ('pairwise', 0.023), ('respective', 0.023), ('hardness', 0.023), ('stance', 0.023), ('weikum', 0.023), ('tunable', 0.023), ('ehe', 0.023), ('dthe', 0.023), ('albums', 0.023), ('cu', 0.023), ('artists', 0.023)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999976 250 acl-2010-Untangling the Cross-Lingual Link Structure of Wikipedia
Author: Gerard de Melo ; Gerhard Weikum
Abstract: Wikipedia articles in different languages are connected by interwiki links that are increasingly being recognized as a valuable source of cross-lingual information. Unfortunately, large numbers of links are imprecise or simply wrong. In this paper, techniques to detect such problems are identified. We formalize their removal as an optimization task based on graph repair operations. We then present an algorithm with provable properties that uses linear programming and a region growing technique to tackle this challenge. This allows us to transform Wikipedia into a much more consistent multilingual register of the world’s entities and concepts.
2 0.11596935 261 acl-2010-Wikipedia as Sense Inventory to Improve Diversity in Web Search Results
Author: Celina Santamaria ; Julio Gonzalo ; Javier Artiles
Abstract: Is it possible to use sense inventories to improve Web search results diversity for one word queries? To answer this question, we focus on two broad-coverage lexical resources of a different nature: WordNet, as a de-facto standard used in Word Sense Disambiguation experiments; and Wikipedia, as a large coverage, updated encyclopaedic resource which may have a better coverage of relevant senses in Web pages. Our results indicate that (i) Wikipedia has a much better coverage of search results, (ii) the distribution of senses in search results can be estimated using the internal graph structure of the Wikipedia and the relative number of visits received by each sense in Wikipedia, and (iii) associating Web pages to Wikipedia senses with simple and efficient algorithms, we can produce modified rankings that cover 70% more Wikipedia senses than the original search engine rankings. 1 Motivation The application of Word Sense Disambiguation (WSD) to Information Retrieval (IR) has been subject of a significant research effort in the recent past. The essential idea is that, by indexing and matching word senses (or even meanings) , the retrieval process could better handle polysemy and synonymy problems (Sanderson, 2000). In practice, however, there are two main difficulties: (i) for long queries, IR models implicitly perform disambiguation, and thus there is little room for improvement. This is the case with most standard IR benchmarks, such as TREC (trec.nist.gov) or CLEF (www.clef-campaign.org) ad-hoc collections; (ii) for very short queries, disambiguation j ul io @ l i uned . e s j avart s . @bec . uned . e s may not be possible or even desirable. This is often the case with one word and even two word queries in Web search engines. In Web search, there are at least three ways of coping with ambiguity: • • • Promoting diversity in the search results (Clarke negt al., 2008): given th seea query s”uolatssis”, the search engine may try to include representatives for different senses of the word (such as the Oasis band, the Organization for the Advancement of Structured Information Standards, the online fashion store, etc.) among the top results. Search engines are supposed to handle diversity as one of the multiple factors that influence the ranking. Presenting the results as a set of (labelled) cPlruessteenrtsi nragth tehre eth reansu as a a rsan ake sde lti ostf (Carpineto et al., 2009). Complementing search results with search suggestions (e.g. e”oaracshis band”, ”woitahsis s fashion store”) that serve to refine the query in the intended way (Anick, 2003). All of them rely on the ability of the search engine to cluster search results, detecting topic similarities. In all of them, disambiguation is implicit, a side effect of the process but not its explicit target. Clustering may detect that documents about the Oasis band and the Oasis fashion store deal with unrelated topics, but it may as well detect a group of documents discussing why one of the Oasis band members is leaving the band, and another group of documents about Oasis band lyrics; both are different aspects of the broad topic Oasis band. A perfect hierarchical clustering should distinguish between the different Oasis senses at a first level, and then discover different topics within each of the senses. Is it possible to use sense inventories to improve search results for one word queries? To answer 1357 Proce dingUsp opfs thaela 4, 8Stwhe Adnen u,a 1l1- M16e Jtiunlgy o 2f0 t1h0e. 
A ?c s 2o0c1ia0ti Aosnso focria Ctio nm fpourta Ctoiomnpault Laitniognuaislt Licisn,g puaigsetisc 1s357–136 , this question, we will focus on two broad-coverage lexical resources of a different nature: WordNet (Miller et al., 1990), as a de-facto standard used in Word Sense Disambiguation experiments and many other Natural Language Processing research fields; and Wikipedia (www.wikipedia.org), as a large coverage and updated encyclopedic resource which may have a better coverage of relevant senses in Web pages. Our hypothesis is that, under appropriate conditions, any of the above mechanisms (clustering, search suggestions, diversity) might benefit from an explicit disambiguation (classification of pages in the top search results) using a wide-coverage sense inventory. Our research is focused on four relevant aspects of the problem: 1. Coverage: Are Wikipedia/Wordnet senses representative of search results? Otherwise, trying to make a disambiguation in terms of a fixed sense inventory would be meaningless. 2. If the answer to (1) is positive, the reverse question is also interesting: can we estimate search results diversity using our sense inven- tories? 3. Sense frequencies: knowing sense frequencies in (search results) Web pages is crucial to have a usable sense inventory. Is it possible to estimate Web sense frequencies from currently available information? 4. Classification: The association of Web pages to word senses must be done with some unsupervised algorithm, because it is not possible to hand-tag training material for every possible query word. Can this classification be done accurately? Can it be effective to promote diversity in search results? In order to provide an initial answer to these questions, we have built a corpus consisting of 40 nouns and 100 Google search results per noun, manually annotated with the most appropriate Wordnet and Wikipedia senses. Section 2 describes how this corpus has been created, and in Section 3 we discuss WordNet and Wikipedia coverage of search results according to our testbed. As this initial results clearly discard Wordnet as a sense inventory for the task, the rest of the paper mainly focuses on Wikipedia. In Section 4 we estimate search results diversity from our testbed, finding that the use of Wikipedia could substantially improve diversity in the top results. In Section 5 we use the Wikipedia internal link structure and the number of visits per page to estimate relative frequencies for Wikipedia senses, obtaining an estimation which is highly correlated with actual data in our testbed. Finally, in Section 6 we discuss a few strategies to classify Web pages into word senses, and apply the best classifier to enhance diversity in search results. The paper concludes with a discussion of related work (Section 7) and an overall discussion of our results in Section 8. 2 Test Set 2.1 Set of Words The most crucial step in building our test set is choosing the set of words to be considered. We are looking for words which are susceptible to form a one-word query for a Web search engine, and therefore we should focus on nouns which are used to denote one or more named entities. At the same time we want to have some degree of comparability with previous research on Word Sense Disambiguation, which points to noun sets used in Senseval/SemEval evaluation campaigns1 . 
Our budget for corpus annotation was enough for two persons-month, which limited us to handle 40 nouns (usually enough to establish statistically significant differences between WSD algorithms, although obviously limited to reach solid figures about the general behaviour of words in the Web). With these arguments in mind, we decided to choose: (i) 15 nouns from the Senseval-3 lexical sample dataset, which have been previously employed by (Mihalcea, 2007) in a related experiment (see Section 7); (ii) 25 additional words which satisfy two conditions: they are all ambiguous, and they are all names for music bands in one of their senses (not necessarily the most salient). The Senseval set is: {argument, arm, atmosphere, bank, degree, difference, disc, irmm-, age, paper, party, performance, plan, shelter, sort, source}. The bands set is {amazon, apple, camel, cell, columbia, cream, foreigner, fox, genesis, jaguar, oasis, pioneer, police, puma, rainbow, shell, skin, sun, tesla, thunder, total, traffic, trapeze, triumph, yes}. Fpoerz e,a trchiu noun, we looked up all its possible senses in WordNet 3.0 and in Wikipedia (using 1http://senseval.org 1358 Table 1: Coverage of Search Results: Wikipedia vs. WordNet Wikiped#ia documents # senses WordNe#t documents Senseval setava2il4a2b/1le0/u0sedassign8e7d7 to (5 s9o%me) senseavai9la2b/5le2/usedassigne6d96 to (4 s6o%m)e sense # senses BaTnodtsa lset868420//21774421323558 ((5546%%))17780/3/9911529995 (2 (342%%)) Wikipedia disambiguation pages). Wikipedia has an average of 22 senses per noun (25.2 in the Bands set and 16. 1in the Senseval set), and Wordnet a much smaller figure, 4.5 (3. 12 for the Bands set and 6.13 for the Senseval set). For a conventional dictionary, a higher ambiguity might indicate an excess of granularity; for an encyclopaedic resource such as Wikipedia, however, it is just an indication of larger coverage. Wikipedia en- tries for camel which are not in WordNet, for instance, include the Apache Camel routing and mediation engine, the British rock band, the brand of cigarettes, the river in Cornwall, and the World World War I fighter biplane. 2.2 Set of Documents We retrieved the 150 first ranked documents for each noun, by submitting the nouns as queries to a Web search engine (Google). Then, for each document, we stored both the snippet (small description of the contents of retrieved document) and the whole HTML document. This collection of documents contain an implicit new inventory of senses, based on Web search, as documents retrieved by a noun query are associated with some sense of the noun. Given that every document in the top Web search results is supposed to be highly relevant for the query word, we assume a ”one sense per document” scenario, although we allow annotators to assign more than one sense per document. In general this assumption turned out to be correct except in a few exceptional cases (such as Wikipedia disambiguation pages): only nine docu- ments received more than one WordNet sense, and 44 (1. 1% of all annotated pages) received more than one Wikipedia sense. 2.3 Manual Annotation We implemented an annotation interface which stored all documents and a short description for every Wordnet and Wikipedia sense. The annotators had to decide, for every document, whether there was one or more appropriate senses in each of the dictionaries. They were instructed to provide annotations for 100 documents per name; if an URL in the list was corrupt or not available, it had to be discarded. 
We provided 150 documents per name to ensure that the figure of 100 usable documents per name could be reached without problems. Each judge provided annotations for the 4,000 documents in the final data set. In a second round, they met and discussed their independent annotations together, reaching a consensus judgement for every document. 3 Coverage of Web Search Results: Wikipedia vs Wordnet Table 1 shows how Wikipedia and Wordnet cover the senses in search results. We report each noun subset separately (Senseval and bands subsets) as well as aggregated figures. The most relevant fact is that, unsurprisingly, Wikipedia senses cover much more search results (56%) than Wordnet (32%). If we focus on the top ten results, in the bands subset (which should be more representative of plausible web queries) Wikipedia covers 68% of the top ten documents. This is an indication that it can indeed be useful for promoting diversity or help clustering search results: even if 32% of the top ten documents are not covered by Wikipedia, it is still a representative source of senses in the top search results. We have manually examined all documents in the top ten results that are not covered by Wikipedia: a majority of the missing senses consists of names of (generally not well-known) companies (45%) and products or services (26%); the other frequent type (12%) of non annotated doc- ument is disambiguation pages (from Wikipedia and also from other dictionaries). It is also interesting to examine the degree of overlap between Wikipedia and Wordnet senses. Being two different types of lexical resource, they might have some degree of complementarity. Table 2 shows, however, that this is not the case: most of the (annotated) documents either fit Wikipedia senses (26%) or both Wikipedia and Wordnet (29%), and just 3% fit Wordnet only. 1359 Table 2: Overlap between Wikipedia and Wordnet in Search Results # documents annotated with Senseval setWikipe60di7a ( &40 W%o)rdnetWi2k7ip0e (d1i8a% on)lyWo8r9d (n6e%t o)nly534no (3n6e%) BaTnodtsa slet1517729 ( (2239%%))1708566 (3 (216%%))12176 ( (13%%))11614195 ( (4415%%)) Therefore, Wikipedia seems to extend the coverage of Wordnet rather than providing complementary sense information. If we wanted to extend the coverage of Wikipedia, the best strategy seems to be to consider lists ofcompanies, products and services, rather than complementing Wikipedia with additional sense inventories. 4 Diversity in Google Search Results Once we know that Wikipedia senses are a representative subset of actual Web senses (covering more than half of the documents retrieved by the search engine), we can test how well search results respect diversity in terms of this subset of senses. Table 3 displays the number of different senses found at different depths in the search results rank, and the average proportion of total senses that they represent. These results suggest that diversity is not a major priority for ranking results: the top ten results only cover, in average, 3 Wikipedia senses (while the average number of senses listed in Wikipedia is 22). When considering the first 100 documents, this number grows up to 6.85 senses per noun. Another relevant figure is the frequency of the most frequent sense for each word: in average, 63% of the pages in search results belong to the most frequent sense of the query word. 
This is roughly comparable with most frequent sense figures in standard annotated corpora such as Semcor (Miller et al., 1993) and the Senseval/Semeval data sets, which suggests that diversity may not play a major role in the current Google ranking algorithm. Of course this result must be taken with care, because variability between words is high and unpredictable, and we are using only 40 nouns for our experiment. But what we have is a positive indication that Wikipedia could be used to improve diversity or cluster search results: potentially the first top ten results could cover 6.15 different senses in average (see Section 6.5), which would be a substantial growth. 5 Sense Frequency Estimators for Wikipedia Wikipedia disambiguation pages contain no systematic information about the relative importance of senses for a given word. Such information, however, is crucial in a lexicon, because sense distributions tend to be skewed, and knowing them can help disambiguation algorithms. We have attempted to use two estimators of expected sense distribution: • • Internal relevance of a word sense, measured as incoming alinnckes o ffo ar wthoer U seRnLs o, fm a given sense in Wikipedia. External relevance of a word sense, measured as ttheren naulm rebleevr aonfc vei osifts a f woro trhde s eUnRsLe, mofe a given sense (as reported in http://stats.grok.se). The number of internal incoming links is expected to be relatively stable for Wikipedia articles. As for the number of visits, we performed a comparison of the number of visits received by the bands noun subset in May, June and July 2009, finding a stable-enough scenario with one notorious exception: the number of visits to the noun Tesla raised dramatically in July, because July 10 was the anniversary of the birth of Nicola Tesla, and a special Google logo directed users to the Wikipedia page for the scientist. We have measured correlation between the relative frequencies derived from these two indicators and the actual relative frequencies in our testbed. Therefore, for each noun w and for each sense wi, we consider three values: (i) proportion of documents retrieved for w which are manually assigned to each sense wi; (ii) inlinks(wi) : relative amount of incoming links to each sense wi; and (iii) visits(wi) : relative number of visits to the URL for each sense wi. We have measured the correlation between these three values using a linear regression correlation coefficient, which gives a correlation value of .54 for the number of visits and of .71 for the number of incoming links. Both estimators seem 1360 Table 3: Diversity in Search Results according to Wikipedia F ir s t 12570 docsBave6n425.rd9854a6 s8get#snSe 65sien43. v68a3s27elarcthesTu6543l.o t5083as5lBvaen.r3d2a73s81gectovrSaegnso. 4f32v615aWlsiketpdaTs.3oe249tn01asle to be positively correlated with real relative frequencies in our testbed, with a strong preference for the number of links. We have experimented with weighted combinations of both indicators, using weights of the form (k, 1 k) , k ∈ {0, 0.1, 0.2 . . . 1}, reaching a maxi(mk,a1l c−okrre),lkati ∈on { 0of, .07.13, f0o.r2 t.h.e. following weights: − freq(wi) = 0.9∗inlinks(wi) +0. 1∗visits(wi) (1) This weighted estimator provides a slight advantage over the use of incoming links only (.73 vs .71). Overall, we have an estimator which has a strong correlation with the distribution of senses in our testbed. In the next section we will test its utility for disambiguation purposes. 
6 Association of Wikipedia Senses to Web Pages We want to test whether the information provided by Wikipedia can be used to classify search results accurately. Note that we do not want to consider approaches that involve a manual creation of training material, because they can’t be used in practice. Given a Web page p returned by the search engine for the query w, and the set of senses w1 . . . wn listed in Wikipedia, the task is to assign the best candidate sense to p. We consider two different techniques: • A basic Information Retrieval approach, wAhe breas tche I dfoocrmumateionnts Ranetdr tvhael Wikipedia pages are represented using a Vector Space Model (VSM) and compared with a standard cosine measure. This is a basic approach which, if successful, can be used efficiently to classify search results. An approach based on a state-of-the-art supervised oWacShD b system, extracting training examples automatically from Wikipedia content. We also compute two baselines: • • • A random assignment of senses (precision is computed as itghnem ienvnter osfe oenfs tehse ( pnruemcibsieorn o isf senses, for every test case). A most frequent sense heuristic which uses our eosstitm fraetiqoune otf s sense frequencies acnhd u assigns the same sense (the most frequent) to all documents. Both are naive baselines, but it must be noted that the most frequent sense heuristic is usually hard to beat for unsupervised WSD algorithms in most standard data sets. We now describe each of the two main approaches in detail. 6.1 VSM Approach For each word sense, we represent its Wikipedia page in a (unigram) vector space model, assigning standard tf*idf weights to the words in the document. idf weights are computed in two different ways: 1. Experiment VSM computes inverse document frequencies in the collection of retrieved documents (for the word being considered). 2. Experiment VSM-GT uses the statistics provided by the Google Terabyte collection (Brants and Franz, 2006), i.e. it replaces the collection of documents with statistics from a representative snapshot of the Web. 3. Experiment VSM-mixed combines statistics from the collection and from the Google Terabyte collection, following (Chen et al., 2009). The document p is represented in the same vector space as the Wikipedia senses, and it is compared with each of the candidate senses wi via the cosine similarity metric (we have experimented 1361 with other similarity metrics such as χ2, but differences are irrelevant). The sense with the highest similarity to p is assigned to the document. In case of ties (which are rare), we pick the first sense in the Wikipedia disambiguation page (which in practice is like a random decision, because senses in disambiguation pages do not seem to be ordered according to any clear criteria). We have also tested a variant of this approach which uses the estimation of sense frequencies presented above: once the similarities are computed, we consider those cases where two or more senses have a similar score (in particular, all senses with a score greater or equal than 80% of the highest score). In that cases, instead of using the small similarity differences to select a sense, we pick up the one which has the largest frequency according to our estimator. We have applied this strategy to the best performing system, VSM-GT, resulting in experiment VSM-GT+freq. 6.2 WSD Approach We have used TiMBL (Daelemans et al., 2001), a state-of-the-art supervised WSD system which uses Memory-Based Learning. 
The key, in this case, is how to extract learning examples from the Wikipedia automatically. For each word sense, we basically have three sources of examples: (i) occurrences of the word in the Wikipedia page for the word sense; (ii) occurrences of the word in Wikipedia pages pointing to the page for the word sense; (iii) occurrences of the word in external pages linked in the Wikipedia page for the word sense. After an initial manual inspection, we decided to discard external pages for being too noisy, and we focused on the first two options. We tried three alternatives: • • • TiMBL-core uses only the examples found Tini MtheB page rfoer u tshees sense being atrmaipneleds. TiMBL-inlinks uses the examples found in Wikipedia pages pointing etxoa mthep sense being trained. TiMBL-all uses both sources of examples. In order to classify a page p with respect to the senses for a word w, we first disambiguate all occurrences of w in the page p. Then we choose the sense which appears most frequently in the page according to TiMBL results. In case of ties we pick up the first sense listed in the Wikipedia disambiguation page. We have also experimented with a variant of the approach that uses our estimation of sense frequencies, similarly to what we did with the VSM approach. In this case, (i) when there is a tie between two or more senses (which is much more likely than in the VSM approach), we pick up the sense with the highest frequency according to our estimator; and (ii) when no sense reaches 30% of the cases in the page to be disambiguated, we also resort to the most frequent sense heuristic (among the candidates for the page). This experiment is called TiMBL-core+freq (we discarded ”inlinks” and ”all” versions because they were clearly worse than ”core”). 6.3 Classification Results Table 4 shows classification results. The accuracy of systems is reported as precision, i.e. the number of pages correctly classified divided by the total number of predictions. This is approximately the same as recall (correctly classified pages divided by total number of pages) for our systems, because the algorithms provide an answer for every page containing text (actual coverage is 94% because some pages only contain text as part of an image file such as photographs and logotypes). Table 4: Classification Results Experiment Precision random most frequent sense (estimation) .19 .46 TiMBL-core TiMBL-inlinks TiMBL-all TiMBL-core+freq .60 .50 .58 .67 VSM VSM-GT VSM-mixed VSM-GT+freq .67 .68 .67 .69 All systems are significantly better than the random and most frequent sense baselines (using p < 0.05 for a standard t-test). Overall, both approaches (using TiMBL WSD machinery and using VSM) lead to similar results (.67 vs. .69), which would make VSM preferable because it is a simpler and more efficient approach. Taking a 1362 Figure 1: Precision/Coverage curves for VSM-GT+freq classification algorithm closer look at the results with TiMBL, there are a couple of interesting facts: • There is a substantial difference between using only examples itaalke dnif fferroemnc tehe b Wikipedia Web page for the sense being trained (TiMBL-core, .60) and using examples from the Wikipedia pages pointing to that page (TiMBL-inlinks, .50). Examples taken from related pages (even if the relationship is close as in this case) seem to be too noisy for the task. This result is compatible with findings in (Santamar ı´a et al., 2003) using the Open Directory Project to extract examples automatically. 
• Our estimation of sense frequencies turns oOuutr rto e tbiem very helpful sfeor f cases wcihesere t our TiMBL-based algorithm cannot provide an answer: precision rises from .60 (TiMBLcore) to .67 (TiMBL-core+freq). The difference is statistically significant (p < 0.05) according to the t-test. As for the experiments with VSM, the variations tested do not provide substantial improvements to the baseline (which is .67). Using idf frequencies obtained from the Google Terabyte corpus (instead of frequencies obtained from the set of retrieved documents) provides only a small improvement (VSM-GT, .68), and adding the estimation of sense frequencies gives another small improvement (.69). Comparing the baseline VSM with the optimal setting (VSM-GT+freq), the difference is small (.67 vs .69) but relatively robust (p = 0.066 according to the t-test). Remarkably, the use of frequency estimations is very helpful for the WSD approach but not for the SVM one, and they both end up with similar performance figures; this might indicate that using frequency estimations is only helpful up to certain precision ceiling. 6.4 Precision/Coverage Trade-off All the above experiments are done at maximal coverage, i.e., all systems assign a sense for every document in the test collection (at least for every document with textual content). But it is possible to enhance search results diversity without annotating every document (in fact, not every document can be assigned to a Wikipedia sense, as we have discussed in Section 3). Thus, it is useful to investigate which is the precision/coverage trade-off in our dataset. We have experimented with the best performing system (VSM-GT+freq), introducing a similarity threshold: assignment of a document to a sense is only done if the similarity of the document to the Wikipedia page for the sense exceeds the similarity threshold. We have computed precision and coverage for every threshold in the range [0.00 −0.90] (beyond 0e.v9e0ry coverage was null) anngde represented 0th] e(b breeysuolntds in Figure 1 (solid line). The graph shows that we 1363 can classify around 20% of the documents with a precision above .90, and around 60% of the documents with a precision of .80. Note that we are reporting disambiguation results using a conventional WSD test set, i.e., one in which every test case (every document) has been manually assigned to some Wikipedia sense. But in our Web Search scenario, 44% of the documents were not assigned to any Wikipedia sense: in practice, our classification algorithm would have to cope with all this noise as well. Figure 1 (dotted line) shows how the precision/coverage curve is affected when the algorithm attempts to disambiguate all documents retrieved by Google, whether they can in fact be assigned to a Wikipedia sense or not. At a coverage of 20%, precision drops approximately from .90 to .70, and at a coverage of 60% it drops from .80 to .50. We now address the question of whether this performance is good enough to improve search re- sults diversity in practice. 6.5 Using Classification to Promote Diversity We now want to estimate how the reported classification accuracy may perform in practice to enhance diversity in search results. 
In order to provide an initial answer to this question, we have re-ranked the documents for the 40 nouns in our testbed, using our best classifier (VSM-GT+freq) and making a list of the top-ten documents with the primary criterion of maximising the number of senses represented in the set, and the secondary criterion of maximising the similarity scores of the documents to their assigned senses. The algorithm proceeds as follows: we fill each position in the rank (starting at rank 1), with the document which has the highest similarity to some of the senses which are not yet represented in the rank; once all senses are represented, we start choosing a second representative for each sense, following the same criterion. The process goes on until the first ten documents are selected. We have also produced a number of alternative rankings for comparison purposes: clustering (centroids): this method applies eHriiengrarc (hciecnatlr Agglomerative Clustering which proved to be the most competitive clustering algorithm in a similar task (Artiles et al., 2009) to the set of search results, forcing the algorithm to create ten clusters. The centroid of each cluster is then selected Table 5: Enhancement of Search Results Diversity • – – rank@10 # senses coverage Original rank2.8049% Wikipedia 4.75 77% clustering (centroids) 2.50 42% clustering (top ranked) 2.80 46% random 2.45 43% upper bound6.1597% as one of the top ten documents in the new rank. • clustering (top ranked): Applies the same clustering algorithm, db u)t: tAhpisp lti emse t tehe s top ranked document (in the original Google rank) of each cluster is selected. • • random: Randomly selects ten documents frraonmd otmhe: :se Rt aofn dreomtrielyve sde lreecstuslts te. upper bound: This is the maximal diversity tuhpapt can o beu nodb:tai Tnheids iins our mteasxtbiemda. lN doivteer tshitayt coverage is not 100%, because some words have more than ten meanings in Wikipedia and we are only considering the top ten documents. All experiments have been applied on the full set of documents in the testbed, including those which could not be annotated with any Wikipedia sense. Coverage is computed as the ratio of senses that appear in the top ten results compared to the number of senses that appear in all search results. Results are presented in Table 5. Note that diversity in the top ten documents increases from an average of 2.80 Wikipedia senses represented in the original search engine rank, to 4.75 in the modified rank (being 6.15 the upper bound), with the coverage of senses going from 49% to 77%. With a simple VSM algorithm, the coverage of Wikipedia senses in the top ten results is 70% larger than in the original ranking. Using Wikipedia to enhance diversity seems to work much better than clustering: both strategies to select a representative from each cluster are unable to improve the diversity of the original ranking. Note, however, that our evaluation has a bias towards using Wikipedia, because only Wikipedia senses are considered to estimate diversity. Of course our results do not imply that the Wikipedia modified rank is better than the original 1364 Google rank: there are many other factors that influence the final ranking provided by a search engine. What our results indicate is that, with simple and efficient algorithms, Wikipedia can be used as a reference to improve search results diversity for one-word queries. 
7 Related Work Web search results clustering and diversity in search results are topics that receive an increasing attention from the research community. Diversity is used both to represent sub-themes in a broad topic, or to consider alternative interpretations for ambiguous queries (Agrawal et al., 2009), which is our interest here. Standard IR test collections do not usually consider ambiguous queries, and are thus inappropriate to test systems that promote diversity (Sanderson, 2008); it is only recently that appropriate test collections are being built, such as (Paramita et al., 2009) for image search and (Artiles et al., 2009) for person name search. We see our testbed as complementary to these ones, and expect that it can contribute to foster research on search results diversity. To our knowledge, Wikipedia has not explicitly been used before to promote diversity in search results; but in (Gollapudi and Sharma, 2009), it is used as a gold standard to evaluate diversification algorithms: given a query with a Wikipedia disambiguation page, an algorithm is evaluated as promoting diversity when different documents in the search results are semantically similar to different Wikipedia pages (describing the alternative senses of the query). Although semantic similarity is measured automatically in this work, our results confirm that this evaluation strategy is sound, because Wikipedia senses are indeed representative of search results. (Clough et al., 2009) analyses query diversity in a Microsoft Live Search, using click entropy and query reformulation as diversity indicators. It was found that at least 9.5% - 16.2% of queries could benefit from diversification, although no correlation was found between the number of senses of a word in Wikipedia and the indicators used to discover diverse queries. This result does not discard, however, that queries where applying diversity is useful cannot benefit from Wikipedia as a sense inventory. In the context of clustering, (Carmel et al., 2009) successfully employ Wikipedia to enhance automatic cluster labeling, finding that Wikipedia labels agree with manual labels associated by humans to a cluster, much more than with signif- icant terms that are extracted directly from the text. In a similar line, both (Gabrilovich and Markovitch, 2007) and (Syed et al., 2008) provide evidence suggesting that categories of Wikipedia articles can successfully describe common concepts in documents. In the field of Natural Language Processing, there has been successful attempts to connect Wikipedia entries to Wordnet senses: (RuizCasado et al., 2005) reports an algorithm that provides an accuracy of 84%. (Mihalcea, 2007) uses internal Wikipedia hyperlinks to derive sensetagged examples. But instead of using Wikipedia directly as sense inventory, Mihalcea then manually maps Wikipedia senses into Wordnet senses (claiming that, at the time of writing the paper, Wikipedia did not consistently report ambiguity in disambiguation pages) and shows that a WSD system based on acquired sense-tagged examples reaches an accuracy well beyond an (informed) most frequent sense heuristic. 8 Conclusions We have investigated whether generic lexical resources can be used to promote diversity in Web search results for one-word, ambiguous queries. 
We have compared WordNet and Wikipedia and arrived to a number of conclusions: (i) unsurprisingly, Wikipedia has a much better coverage of senses in search results, and is therefore more appropriate for the task; (ii) the distribution of senses in search results can be estimated using the internal graph structure of the Wikipedia and the relative number of visits received by each sense in Wikipedia, and (iii) associating Web pages to Wikipedia senses with simple and efficient algorithms, we can produce modified rankings that cover 70% more Wikipedia senses than the original search engine rankings. We expect that the testbed created for this research will complement the - currently short - set of benchmarking test sets to explore search results diversity and query ambiguity. Our testbed is publicly available for research purposes at http://nlp.uned.es. Our results endorse further investigation on the use of Wikipedia to organize search results. Some limitations of our research, however, must be 1365 noted: (i) the nature of our testbed (with every search result manually annotated in terms of two sense inventories) makes it too small to extract solid conclusions on Web searches (ii) our work does not involve any study of diversity from the point of view of Web users (i.e. when a Web query addresses many different use needs in practice); research in (Clough et al., 2009) suggests that word ambiguity in Wikipedia might not be related with diversity of search needs; (iii) we have tested our classifiers with a simple re-ordering of search results to test how much diversity can be improved, but a search results ranking depends on many other factors, some of them more crucial than diversity; it remains to be tested how can we use document/Wikipedia associations to improve search results clustering (for instance, providing seeds for the clustering process) and to provide search suggestions. Acknowledgments This work has been partially funded by the Spanish Government (project INES/Text-Mess) and the Xunta de Galicia. References R. Agrawal, S. Gollapudi, A. Halverson, and S. Leong. 2009. Diversifying Search Results. In Proc. of WSDM’09. ACM. P. Anick. 2003. Using Terminological Feedback for Web Search Refinement : a Log-based Study. In Proc. ACM SIGIR 2003, pages 88–95. ACM New York, NY, USA. J. Artiles, J. Gonzalo, and S. Sekine. 2009. WePS 2 Evaluation Campaign: overview of the Web People Search Clustering Task. In 2nd Web People Search Evaluation Workshop (WePS 2009), 18th WWW Conference. 2009. T. Brants and A. Franz. 2006. Web 1T 5-gram, version 1. Philadelphia: Linguistic Data Consortium. D. Carmel, H. Roitman, and N. Zwerdling. 2009. Enhancing Cluster Labeling using Wikipedia. In Pro- ceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval, pages 139–146. ACM. C. Carpineto, S. Osinski, G. Romano, and Dawid Weiss. 2009. A Survey of Web Clustering Engines. ACM Computing Surveys, 41(3). Y. Chen, S. Yat Mei Lee, and C. Huang. 2009. PolyUHK: A Robust Information Extraction System for Web Personal Names. In Proc. WWW’09 (WePS2 Workshop). ACM. C. Clarke, M. Kolla, G. Cormack, O. Vechtomova, A. Ashkan, S. B ¨uttcher, and I. MacKinnon. 2008. Novelty and Diversity in Information Retrieval Evaluation. In Proc. SIGIR ’08, pages 659–666. ACM. P. Clough, M. Sanderson, M. Abouammoh, S. Navarro, and M. Paramita. 2009. Multiple Approaches to Analysing Query Diversity. In Proc. of SIGIR 2009. ACM. W. Daelemans, J. Zavrel, K. van der Sloot, and A. 
van den Bosch. 2001 . TiMBL: Tilburg Memory Based Learner, version 4.0, Reference Guide. Technical report, University of Antwerp. E. Gabrilovich and S. Markovitch. 2007. Computing Semantic Relatedness using Wikipedia-based Explicit Semantic Analysis. In Proceedings of The 20th International Joint Conference on Artificial Intelligence (IJCAI), Hyderabad, India. S. Gollapudi and A. Sharma. 2009. An Axiomatic Approach for Result Diversification. In Proc. WWW 2009, pages 381–390. ACM New York, NY, USA. R. Mihalcea. 2007. Using Wikipedia for Automatic Word Sense Disambiguation. In Proceedings of NAACL HLT, volume 2007. G. Miller, C. R. Beckwith, D. Fellbaum, Gross, and K. Miller. 1990. Wordnet: An on-line lexical database. International Journal of Lexicograph, 3(4). G.A Miller, C. Leacock, R. Tengi, and Bunker R. T. 1993. A Semantic Concordance. In Proceedings of the ARPA WorkShop on Human Language Technology. San Francisco, Morgan Kaufman. M. Paramita, M. Sanderson, and P. Clough. 2009. Diversity in Photo Retrieval: Overview of the ImageCLEFPhoto task 2009. CLEF working notes, 2009. M. Ruiz-Casado, E. Alfonseca, and P. Castells. 2005. Automatic Assignment of Wikipedia Encyclopaedic Entries to Wordnet Synsets. Advances in Web Intelligence, 3528:380–386. M. Sanderson. 2000. Retrieving with Good Sense. Information Retrieval, 2(1):49–69. M. Sanderson. 2008. Ambiguous Queries: Test Collections Need More Sense. In Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval, pages 499–506. ACM New York, NY, USA. C. Santamar ı´a, J. Gonzalo, and F. Verdejo. 2003. Automatic Association of Web Directories to Word Senses. Computational Linguistics, 29(3):485–502. Z. S. Syed, T. Finin, and Joshi. A. 2008. Wikipedia as an Ontology for Describing Documents. In Proc. ICWSM’08. 1366
3 0.094311878 44 acl-2010-BabelNet: Building a Very Large Multilingual Semantic Network
Author: Roberto Navigli ; Simone Paolo Ponzetto
Abstract: In this paper we present BabelNet a very large, wide-coverage multilingual semantic network. The resource is automatically constructed by means of a methodology that integrates lexicographic and encyclopedic knowledge from WordNet and Wikipedia. In addition Machine Translation is also applied to enrich the resource with lexical information for all languages. We conduct experiments on new and existing gold-standard datasets to show the high quality and coverage of the resource. –
4 0.084944226 156 acl-2010-Knowledge-Rich Word Sense Disambiguation Rivaling Supervised Systems
Author: Simone Paolo Ponzetto ; Roberto Navigli
Abstract: One of the main obstacles to highperformance Word Sense Disambiguation (WSD) is the knowledge acquisition bottleneck. In this paper, we present a methodology to automatically extend WordNet with large amounts of semantic relations from an encyclopedic resource, namely Wikipedia. We show that, when provided with a vast amount of high-quality semantic relations, simple knowledge-lean disambiguation algorithms compete with state-of-the-art supervised WSD systems in a coarse-grained all-words setting and outperform them on gold-standard domain-specific datasets.
5 0.069693744 218 acl-2010-Structural Semantic Relatedness: A Knowledge-Based Method to Named Entity Disambiguation
Author: Xianpei Han ; Jun Zhao
Abstract: The name ambiguity problem has raised urgent demands for efficient, high-quality named entity disambiguation methods. In recent years, the increasing availability of large-scale, rich semantic knowledge sources (such as Wikipedia and WordNet) has created new opportunities to enhance named entity disambiguation by developing algorithms that can best exploit these knowledge sources. The problem is that these knowledge sources are heterogeneous and most of the semantic knowledge within them is embedded in complex structures, such as graphs and networks. This paper proposes a knowledge-based method, called Structural Semantic Relatedness (SSR), which can enhance named entity disambiguation by capturing and leveraging the structural semantic knowledge in multiple knowledge sources. Empirical results show that, in comparison with classical BOW-based methods and social network based methods, our method can significantly improve the disambiguation performance by 8.7% and 14.7%, respectively.
6 0.068582848 20 acl-2010-A Transition-Based Parser for 2-Planar Dependency Structures
8 0.067686997 159 acl-2010-Learning 5000 Relational Extractors
9 0.06538666 127 acl-2010-Global Learning of Focused Entailment Graphs
10 0.059858661 185 acl-2010-Open Information Extraction Using Wikipedia
11 0.054261759 24 acl-2010-Active Learning-Based Elicitation for Semi-Supervised Word Alignment
12 0.050554805 109 acl-2010-Experiments in Graph-Based Semi-Supervised Learning Methods for Class-Instance Acquisition
13 0.047648914 125 acl-2010-Generating Templates of Entity Summaries with an Entity-Aspect Model and Pattern Mining
14 0.044173416 170 acl-2010-Letter-Phoneme Alignment: An Exploration
15 0.043937217 113 acl-2010-Extraction and Approximation of Numerical Attributes from the Web
16 0.039006107 210 acl-2010-Sentiment Translation through Lexicon Induction
17 0.037653055 133 acl-2010-Hierarchical Search for Word Alignment
18 0.037637495 166 acl-2010-Learning Word-Class Lattices for Definition and Hypernym Extraction
19 0.036520071 87 acl-2010-Discriminative Modeling of Extraction Sets for Machine Translation
20 0.033937279 198 acl-2010-Predicate Argument Structure Analysis Using Transformation Based Learning
topicId topicWeight
[(0, -0.099), (1, 0.018), (2, -0.036), (3, -0.011), (4, 0.088), (5, 0.003), (6, 0.07), (7, 0.037), (8, -0.008), (9, -0.009), (10, -0.051), (11, -0.084), (12, -0.092), (13, 0.007), (14, -0.018), (15, -0.01), (16, 0.008), (17, 0.12), (18, -0.057), (19, 0.028), (20, -0.121), (21, -0.001), (22, -0.051), (23, -0.058), (24, -0.008), (25, 0.033), (26, 0.003), (27, -0.108), (28, -0.022), (29, -0.053), (30, 0.051), (31, 0.037), (32, -0.02), (33, 0.013), (34, -0.094), (35, 0.03), (36, 0.046), (37, -0.097), (38, -0.05), (39, -0.097), (40, 0.019), (41, -0.065), (42, -0.024), (43, -0.081), (44, -0.065), (45, -0.001), (46, -0.008), (47, -0.019), (48, 0.046), (49, 0.029)]
simIndex simValue paperId paperTitle
same-paper 1 0.96385962 250 acl-2010-Untangling the Cross-Lingual Link Structure of Wikipedia
Author: Gerard de Melo ; Gerhard Weikum
Abstract: Wikipedia articles in different languages are connected by interwiki links that are increasingly being recognized as a valuable source of cross-lingual information. Unfortunately, large numbers of links are imprecise or simply wrong. In this paper, techniques to detect such problems are identified. We formalize their removal as an optimization task based on graph repair operations. We then present an algorithm with provable properties that uses linear programming and a region growing technique to tackle this challenge. This allows us to transform Wikipedia into a much more consistent multilingual register of the world’s entities and concepts.
Author: Decong Li ; Sujian Li ; Wenjie Li ; Wei Wang ; Weiguang Qu
Abstract: Extracting key phrases from documents is a fundamental and important task. Generally, phrases in a document are not independent in delivering its content. In order to capture and make better use of their relationships in key phrase extraction, we suggest exploring Wikipedia knowledge to model a document as a semantic network, where both n-ary and binary relationships among phrases are formulated. Based on the commonly accepted assumption that the title of a document is elaborated to reflect its content, and that key phrases consequently tend to have semantics close to the title, we propose a novel semi-supervised key phrase extraction approach in this paper by computing phrase importance in the semantic network, through which the influence of title phrases is propagated to the other phrases iteratively. Experimental results demonstrate the remarkable performance of this approach.
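The abstract above describes the title-driven propagation only at a high level. As a purely illustrative sketch (not the authors' algorithm), the following Python toy shows one common way such iterative propagation can be realized: a personalized random walk over a weighted phrase graph whose restart mass is concentrated on the title phrases. The graph construction, damping factor and stopping criterion are assumptions made for the example, and the paper's n-ary relations are simplified here to pairwise edges.

from collections import defaultdict

def propagate_importance(edges, title_phrases, damping=0.85, iters=50, tol=1e-6):
    # edges: iterable of (phrase_a, phrase_b, weight); title_phrases: seed set.
    graph = defaultdict(dict)
    for a, b, w in edges:
        graph[a][b] = w
        graph[b][a] = w
    nodes = list(graph)
    # Restart (teleport) mass is concentrated on the title phrases.
    seed = {n: (1.0 / len(title_phrases) if n in title_phrases else 0.0) for n in nodes}
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new_score = {}
        for n in nodes:
            incoming = sum(score[m] * graph[m][n] / sum(graph[m].values()) for m in graph[n])
            new_score[n] = (1 - damping) * seed[n] + damping * incoming
        converged = max(abs(new_score[n] - score[n]) for n in nodes) < tol
        score = new_score
        if converged:
            break
    return sorted(score.items(), key=lambda kv: -kv[1])

# Toy usage: phrases co-occurring in one document, seeded by two title phrases.
edges = [("key phrase", "semantic network", 2.0),
         ("semantic network", "wikipedia", 1.0),
         ("key phrase", "document title", 3.0)]
print(propagate_importance(edges, {"key phrase", "document title"})[:3])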
3 0.5417906 261 acl-2010-Wikipedia as Sense Inventory to Improve Diversity in Web Search Results
Author: Celina Santamaria ; Julio Gonzalo ; Javier Artiles
Abstract: Is it possible to use sense inventories to improve Web search results diversity for one word queries? To answer this question, we focus on two broad-coverage lexical resources of a different nature: WordNet, as a de-facto standard used in Word Sense Disambiguation experiments; and Wikipedia, as a large coverage, updated encyclopaedic resource which may have a better coverage of relevant senses in Web pages. Our results indicate that (i) Wikipedia has a much better coverage of search results, (ii) the distribution of senses in search results can be estimated using the internal graph structure of the Wikipedia and the relative number of visits received by each sense in Wikipedia, and (iii) associating Web pages to Wikipedia senses with simple and efficient algorithms, we can produce modified rankings that cover 70% more Wikipedia senses than the original search engine rankings. 1 Motivation The application of Word Sense Disambiguation (WSD) to Information Retrieval (IR) has been subject of a significant research effort in the recent past. The essential idea is that, by indexing and matching word senses (or even meanings) , the retrieval process could better handle polysemy and synonymy problems (Sanderson, 2000). In practice, however, there are two main difficulties: (i) for long queries, IR models implicitly perform disambiguation, and thus there is little room for improvement. This is the case with most standard IR benchmarks, such as TREC (trec.nist.gov) or CLEF (www.clef-campaign.org) ad-hoc collections; (ii) for very short queries, disambiguation j ul io @ l i uned . e s j avart s . @bec . uned . e s may not be possible or even desirable. This is often the case with one word and even two word queries in Web search engines. In Web search, there are at least three ways of coping with ambiguity: • • • Promoting diversity in the search results (Clarke negt al., 2008): given th seea query s”uolatssis”, the search engine may try to include representatives for different senses of the word (such as the Oasis band, the Organization for the Advancement of Structured Information Standards, the online fashion store, etc.) among the top results. Search engines are supposed to handle diversity as one of the multiple factors that influence the ranking. Presenting the results as a set of (labelled) cPlruessteenrtsi nragth tehre eth reansu as a a rsan ake sde lti ostf (Carpineto et al., 2009). Complementing search results with search suggestions (e.g. e”oaracshis band”, ”woitahsis s fashion store”) that serve to refine the query in the intended way (Anick, 2003). All of them rely on the ability of the search engine to cluster search results, detecting topic similarities. In all of them, disambiguation is implicit, a side effect of the process but not its explicit target. Clustering may detect that documents about the Oasis band and the Oasis fashion store deal with unrelated topics, but it may as well detect a group of documents discussing why one of the Oasis band members is leaving the band, and another group of documents about Oasis band lyrics; both are different aspects of the broad topic Oasis band. A perfect hierarchical clustering should distinguish between the different Oasis senses at a first level, and then discover different topics within each of the senses. Is it possible to use sense inventories to improve search results for one word queries? To answer 1357 Proce dingUsp opfs thaela 4, 8Stwhe Adnen u,a 1l1- M16e Jtiunlgy o 2f0 t1h0e. 
A ?c s 2o0c1ia0ti Aosnso focria Ctio nm fpourta Ctoiomnpault Laitniognuaislt Licisn,g puaigsetisc 1s357–136 , this question, we will focus on two broad-coverage lexical resources of a different nature: WordNet (Miller et al., 1990), as a de-facto standard used in Word Sense Disambiguation experiments and many other Natural Language Processing research fields; and Wikipedia (www.wikipedia.org), as a large coverage and updated encyclopedic resource which may have a better coverage of relevant senses in Web pages. Our hypothesis is that, under appropriate conditions, any of the above mechanisms (clustering, search suggestions, diversity) might benefit from an explicit disambiguation (classification of pages in the top search results) using a wide-coverage sense inventory. Our research is focused on four relevant aspects of the problem: 1. Coverage: Are Wikipedia/Wordnet senses representative of search results? Otherwise, trying to make a disambiguation in terms of a fixed sense inventory would be meaningless. 2. If the answer to (1) is positive, the reverse question is also interesting: can we estimate search results diversity using our sense inven- tories? 3. Sense frequencies: knowing sense frequencies in (search results) Web pages is crucial to have a usable sense inventory. Is it possible to estimate Web sense frequencies from currently available information? 4. Classification: The association of Web pages to word senses must be done with some unsupervised algorithm, because it is not possible to hand-tag training material for every possible query word. Can this classification be done accurately? Can it be effective to promote diversity in search results? In order to provide an initial answer to these questions, we have built a corpus consisting of 40 nouns and 100 Google search results per noun, manually annotated with the most appropriate Wordnet and Wikipedia senses. Section 2 describes how this corpus has been created, and in Section 3 we discuss WordNet and Wikipedia coverage of search results according to our testbed. As this initial results clearly discard Wordnet as a sense inventory for the task, the rest of the paper mainly focuses on Wikipedia. In Section 4 we estimate search results diversity from our testbed, finding that the use of Wikipedia could substantially improve diversity in the top results. In Section 5 we use the Wikipedia internal link structure and the number of visits per page to estimate relative frequencies for Wikipedia senses, obtaining an estimation which is highly correlated with actual data in our testbed. Finally, in Section 6 we discuss a few strategies to classify Web pages into word senses, and apply the best classifier to enhance diversity in search results. The paper concludes with a discussion of related work (Section 7) and an overall discussion of our results in Section 8. 2 Test Set 2.1 Set of Words The most crucial step in building our test set is choosing the set of words to be considered. We are looking for words which are susceptible to form a one-word query for a Web search engine, and therefore we should focus on nouns which are used to denote one or more named entities. At the same time we want to have some degree of comparability with previous research on Word Sense Disambiguation, which points to noun sets used in Senseval/SemEval evaluation campaigns1 . 
Our budget for corpus annotation was enough for two persons-month, which limited us to handle 40 nouns (usually enough to establish statistically significant differences between WSD algorithms, although obviously limited to reach solid figures about the general behaviour of words in the Web). With these arguments in mind, we decided to choose: (i) 15 nouns from the Senseval-3 lexical sample dataset, which have been previously employed by (Mihalcea, 2007) in a related experiment (see Section 7); (ii) 25 additional words which satisfy two conditions: they are all ambiguous, and they are all names for music bands in one of their senses (not necessarily the most salient). The Senseval set is: {argument, arm, atmosphere, bank, degree, difference, disc, irmm-, age, paper, party, performance, plan, shelter, sort, source}. The bands set is {amazon, apple, camel, cell, columbia, cream, foreigner, fox, genesis, jaguar, oasis, pioneer, police, puma, rainbow, shell, skin, sun, tesla, thunder, total, traffic, trapeze, triumph, yes}. Fpoerz e,a trchiu noun, we looked up all its possible senses in WordNet 3.0 and in Wikipedia (using 1http://senseval.org 1358 Table 1: Coverage of Search Results: Wikipedia vs. WordNet Wikiped#ia documents # senses WordNe#t documents Senseval setava2il4a2b/1le0/u0sedassign8e7d7 to (5 s9o%me) senseavai9la2b/5le2/usedassigne6d96 to (4 s6o%m)e sense # senses BaTnodtsa lset868420//21774421323558 ((5546%%))17780/3/9911529995 (2 (342%%)) Wikipedia disambiguation pages). Wikipedia has an average of 22 senses per noun (25.2 in the Bands set and 16. 1in the Senseval set), and Wordnet a much smaller figure, 4.5 (3. 12 for the Bands set and 6.13 for the Senseval set). For a conventional dictionary, a higher ambiguity might indicate an excess of granularity; for an encyclopaedic resource such as Wikipedia, however, it is just an indication of larger coverage. Wikipedia en- tries for camel which are not in WordNet, for instance, include the Apache Camel routing and mediation engine, the British rock band, the brand of cigarettes, the river in Cornwall, and the World World War I fighter biplane. 2.2 Set of Documents We retrieved the 150 first ranked documents for each noun, by submitting the nouns as queries to a Web search engine (Google). Then, for each document, we stored both the snippet (small description of the contents of retrieved document) and the whole HTML document. This collection of documents contain an implicit new inventory of senses, based on Web search, as documents retrieved by a noun query are associated with some sense of the noun. Given that every document in the top Web search results is supposed to be highly relevant for the query word, we assume a ”one sense per document” scenario, although we allow annotators to assign more than one sense per document. In general this assumption turned out to be correct except in a few exceptional cases (such as Wikipedia disambiguation pages): only nine docu- ments received more than one WordNet sense, and 44 (1. 1% of all annotated pages) received more than one Wikipedia sense. 2.3 Manual Annotation We implemented an annotation interface which stored all documents and a short description for every Wordnet and Wikipedia sense. The annotators had to decide, for every document, whether there was one or more appropriate senses in each of the dictionaries. They were instructed to provide annotations for 100 documents per name; if an URL in the list was corrupt or not available, it had to be discarded. 
We provided 150 documents per name to ensure that the figure of 100 usable documents per name could be reached without problems. Each judge provided annotations for the 4,000 documents in the final data set. In a second round, they met and discussed their independent annotations together, reaching a consensus judgement for every document. 3 Coverage of Web Search Results: Wikipedia vs Wordnet Table 1 shows how Wikipedia and Wordnet cover the senses in search results. We report each noun subset separately (Senseval and bands subsets) as well as aggregated figures. The most relevant fact is that, unsurprisingly, Wikipedia senses cover much more search results (56%) than Wordnet (32%). If we focus on the top ten results, in the bands subset (which should be more representative of plausible web queries) Wikipedia covers 68% of the top ten documents. This is an indication that it can indeed be useful for promoting diversity or help clustering search results: even if 32% of the top ten documents are not covered by Wikipedia, it is still a representative source of senses in the top search results. We have manually examined all documents in the top ten results that are not covered by Wikipedia: a majority of the missing senses consists of names of (generally not well-known) companies (45%) and products or services (26%); the other frequent type (12%) of non annotated doc- ument is disambiguation pages (from Wikipedia and also from other dictionaries). It is also interesting to examine the degree of overlap between Wikipedia and Wordnet senses. Being two different types of lexical resource, they might have some degree of complementarity. Table 2 shows, however, that this is not the case: most of the (annotated) documents either fit Wikipedia senses (26%) or both Wikipedia and Wordnet (29%), and just 3% fit Wordnet only. 1359 Table 2: Overlap between Wikipedia and Wordnet in Search Results # documents annotated with Senseval setWikipe60di7a ( &40 W%o)rdnetWi2k7ip0e (d1i8a% on)lyWo8r9d (n6e%t o)nly534no (3n6e%) BaTnodtsa slet1517729 ( (2239%%))1708566 (3 (216%%))12176 ( (13%%))11614195 ( (4415%%)) Therefore, Wikipedia seems to extend the coverage of Wordnet rather than providing complementary sense information. If we wanted to extend the coverage of Wikipedia, the best strategy seems to be to consider lists ofcompanies, products and services, rather than complementing Wikipedia with additional sense inventories. 4 Diversity in Google Search Results Once we know that Wikipedia senses are a representative subset of actual Web senses (covering more than half of the documents retrieved by the search engine), we can test how well search results respect diversity in terms of this subset of senses. Table 3 displays the number of different senses found at different depths in the search results rank, and the average proportion of total senses that they represent. These results suggest that diversity is not a major priority for ranking results: the top ten results only cover, in average, 3 Wikipedia senses (while the average number of senses listed in Wikipedia is 22). When considering the first 100 documents, this number grows up to 6.85 senses per noun. Another relevant figure is the frequency of the most frequent sense for each word: in average, 63% of the pages in search results belong to the most frequent sense of the query word. 
This is roughly comparable with most frequent sense figures in standard annotated corpora such as Semcor (Miller et al., 1993) and the Senseval/Semeval data sets, which suggests that diversity may not play a major role in the current Google ranking algorithm. Of course this result must be taken with care, because variability between words is high and unpredictable, and we are using only 40 nouns for our experiment. But what we have is a positive indication that Wikipedia could be used to improve diversity or cluster search results: potentially the first top ten results could cover 6.15 different senses on average (see Section 6.5), which would be a substantial increase.

Table 3: Diversity in Search Results according to Wikipedia [per-depth figures illegible in this extraction]

5 Sense Frequency Estimators for Wikipedia

Wikipedia disambiguation pages contain no systematic information about the relative importance of senses for a given word. Such information, however, is crucial in a lexicon, because sense distributions tend to be skewed, and knowing them can help disambiguation algorithms. We have attempted to use two estimators of expected sense distribution:
• Internal relevance of a word sense, measured as the number of incoming links to the URL of a given sense in Wikipedia.
• External relevance of a word sense, measured as the number of visits to the URL of a given sense (as reported in http://stats.grok.se).
The number of internal incoming links is expected to be relatively stable for Wikipedia articles. As for the number of visits, we performed a comparison of the number of visits received by the bands noun subset in May, June and July 2009, finding a stable-enough scenario with one notorious exception: the number of visits to the noun Tesla rose dramatically in July, because July 10 was the anniversary of the birth of Nikola Tesla, and a special Google logo directed users to the Wikipedia page for the scientist.

We have measured the correlation between the relative frequencies derived from these two indicators and the actual relative frequencies in our testbed. For each noun w and for each sense wi, we consider three values: (i) the proportion of documents retrieved for w which are manually assigned to each sense wi; (ii) inlinks(wi): the relative amount of incoming links to each sense wi; and (iii) visits(wi): the relative number of visits to the URL for each sense wi. We have measured the correlation between these values using a linear regression correlation coefficient, which gives a correlation value of .54 for the number of visits and of .71 for the number of incoming links. Both estimators thus seem to be positively correlated with real relative frequencies in our testbed, with a strong preference for the number of links. We have experimented with weighted combinations of both indicators, using weights of the form (k, 1−k), k ∈ {0, 0.1, 0.2, . . . , 1}, reaching a maximal correlation of .73 for the following weights:

freq(wi) = 0.9 * inlinks(wi) + 0.1 * visits(wi)    (1)

This weighted estimator provides a slight advantage over the use of incoming links only (.73 vs .71). Overall, we have an estimator which has a strong correlation with the distribution of senses in our testbed. In the next section we will test its utility for disambiguation purposes.
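Equation (1) above is simple enough to state directly in code. A minimal sketch follows, assuming raw inlink and visit counts per sense that are first normalized into relative frequencies; the function names and the normalization step are illustrative, and only the 0.9/0.1 weighting comes from the text.

def sense_frequencies(inlink_counts, visit_counts, w_inlinks=0.9, w_visits=0.1):
    # inlink_counts, visit_counts: dicts mapping sense -> raw count for one word.
    def normalize(counts):
        total = sum(counts.values())
        return {s: (c / total if total else 0.0) for s, c in counts.items()}
    inlinks = normalize(inlink_counts)
    visits = normalize(visit_counts)
    return {s: w_inlinks * inlinks[s] + w_visits * visits.get(s, 0.0) for s in inlinks}

# Toy usage for an ambiguous word with three hypothetical Wikipedia senses.
print(sense_frequencies(
    {"Oasis_(band)": 900, "Oasis_(store)": 50, "Oasis_(geography)": 50},
    {"Oasis_(band)": 7000, "Oasis_(store)": 2000, "Oasis_(geography)": 1000}))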
6 Association of Wikipedia Senses to Web Pages

We want to test whether the information provided by Wikipedia can be used to classify search results accurately. Note that we do not want to consider approaches that involve a manual creation of training material, because they can't be used in practice. Given a Web page p returned by the search engine for the query w, and the set of senses w1 . . . wn listed in Wikipedia, the task is to assign the best candidate sense to p. We consider two different techniques:
• A basic Information Retrieval approach, where both the documents and the Wikipedia pages are represented using a Vector Space Model (VSM) and compared with a standard cosine measure. This is a basic approach which, if successful, can be used efficiently to classify search results.
• An approach based on a state-of-the-art supervised WSD system, extracting training examples automatically from Wikipedia content.
We also compute two baselines:
• A random assignment of senses (precision is computed as the inverse of the number of senses, for every test case).
• A most frequent sense heuristic which uses our estimation of sense frequencies and assigns the same sense (the most frequent) to all documents.
Both are naive baselines, but it must be noted that the most frequent sense heuristic is usually hard to beat for unsupervised WSD algorithms in most standard data sets. We now describe each of the two main approaches in detail.

6.1 VSM Approach

For each word sense, we represent its Wikipedia page in a (unigram) vector space model, assigning standard tf*idf weights to the words in the document. idf weights are computed in three different ways:
1. Experiment VSM computes inverse document frequencies in the collection of retrieved documents (for the word being considered).
2. Experiment VSM-GT uses the statistics provided by the Google Terabyte collection (Brants and Franz, 2006), i.e. it replaces the collection of documents with statistics from a representative snapshot of the Web.
3. Experiment VSM-mixed combines statistics from the collection and from the Google Terabyte collection, following (Chen et al., 2009).
The document p is represented in the same vector space as the Wikipedia senses, and it is compared with each of the candidate senses wi via the cosine similarity metric (we have experimented
The key, in this case, is how to extract learning examples from the Wikipedia automatically. For each word sense, we basically have three sources of examples: (i) occurrences of the word in the Wikipedia page for the word sense; (ii) occurrences of the word in Wikipedia pages pointing to the page for the word sense; (iii) occurrences of the word in external pages linked in the Wikipedia page for the word sense. After an initial manual inspection, we decided to discard external pages for being too noisy, and we focused on the first two options. We tried three alternatives: • • • TiMBL-core uses only the examples found Tini MtheB page rfoer u tshees sense being atrmaipneleds. TiMBL-inlinks uses the examples found in Wikipedia pages pointing etxoa mthep sense being trained. TiMBL-all uses both sources of examples. In order to classify a page p with respect to the senses for a word w, we first disambiguate all occurrences of w in the page p. Then we choose the sense which appears most frequently in the page according to TiMBL results. In case of ties we pick up the first sense listed in the Wikipedia disambiguation page. We have also experimented with a variant of the approach that uses our estimation of sense frequencies, similarly to what we did with the VSM approach. In this case, (i) when there is a tie between two or more senses (which is much more likely than in the VSM approach), we pick up the sense with the highest frequency according to our estimator; and (ii) when no sense reaches 30% of the cases in the page to be disambiguated, we also resort to the most frequent sense heuristic (among the candidates for the page). This experiment is called TiMBL-core+freq (we discarded ”inlinks” and ”all” versions because they were clearly worse than ”core”). 6.3 Classification Results Table 4 shows classification results. The accuracy of systems is reported as precision, i.e. the number of pages correctly classified divided by the total number of predictions. This is approximately the same as recall (correctly classified pages divided by total number of pages) for our systems, because the algorithms provide an answer for every page containing text (actual coverage is 94% because some pages only contain text as part of an image file such as photographs and logotypes). Table 4: Classification Results Experiment Precision random most frequent sense (estimation) .19 .46 TiMBL-core TiMBL-inlinks TiMBL-all TiMBL-core+freq .60 .50 .58 .67 VSM VSM-GT VSM-mixed VSM-GT+freq .67 .68 .67 .69 All systems are significantly better than the random and most frequent sense baselines (using p < 0.05 for a standard t-test). Overall, both approaches (using TiMBL WSD machinery and using VSM) lead to similar results (.67 vs. .69), which would make VSM preferable because it is a simpler and more efficient approach. Taking a 1362 Figure 1: Precision/Coverage curves for VSM-GT+freq classification algorithm closer look at the results with TiMBL, there are a couple of interesting facts: • There is a substantial difference between using only examples itaalke dnif fferroemnc tehe b Wikipedia Web page for the sense being trained (TiMBL-core, .60) and using examples from the Wikipedia pages pointing to that page (TiMBL-inlinks, .50). Examples taken from related pages (even if the relationship is close as in this case) seem to be too noisy for the task. This result is compatible with findings in (Santamar ı´a et al., 2003) using the Open Directory Project to extract examples automatically. 
• Our estimation of sense frequencies turns oOuutr rto e tbiem very helpful sfeor f cases wcihesere t our TiMBL-based algorithm cannot provide an answer: precision rises from .60 (TiMBLcore) to .67 (TiMBL-core+freq). The difference is statistically significant (p < 0.05) according to the t-test. As for the experiments with VSM, the variations tested do not provide substantial improvements to the baseline (which is .67). Using idf frequencies obtained from the Google Terabyte corpus (instead of frequencies obtained from the set of retrieved documents) provides only a small improvement (VSM-GT, .68), and adding the estimation of sense frequencies gives another small improvement (.69). Comparing the baseline VSM with the optimal setting (VSM-GT+freq), the difference is small (.67 vs .69) but relatively robust (p = 0.066 according to the t-test). Remarkably, the use of frequency estimations is very helpful for the WSD approach but not for the SVM one, and they both end up with similar performance figures; this might indicate that using frequency estimations is only helpful up to certain precision ceiling. 6.4 Precision/Coverage Trade-off All the above experiments are done at maximal coverage, i.e., all systems assign a sense for every document in the test collection (at least for every document with textual content). But it is possible to enhance search results diversity without annotating every document (in fact, not every document can be assigned to a Wikipedia sense, as we have discussed in Section 3). Thus, it is useful to investigate which is the precision/coverage trade-off in our dataset. We have experimented with the best performing system (VSM-GT+freq), introducing a similarity threshold: assignment of a document to a sense is only done if the similarity of the document to the Wikipedia page for the sense exceeds the similarity threshold. We have computed precision and coverage for every threshold in the range [0.00 −0.90] (beyond 0e.v9e0ry coverage was null) anngde represented 0th] e(b breeysuolntds in Figure 1 (solid line). The graph shows that we 1363 can classify around 20% of the documents with a precision above .90, and around 60% of the documents with a precision of .80. Note that we are reporting disambiguation results using a conventional WSD test set, i.e., one in which every test case (every document) has been manually assigned to some Wikipedia sense. But in our Web Search scenario, 44% of the documents were not assigned to any Wikipedia sense: in practice, our classification algorithm would have to cope with all this noise as well. Figure 1 (dotted line) shows how the precision/coverage curve is affected when the algorithm attempts to disambiguate all documents retrieved by Google, whether they can in fact be assigned to a Wikipedia sense or not. At a coverage of 20%, precision drops approximately from .90 to .70, and at a coverage of 60% it drops from .80 to .50. We now address the question of whether this performance is good enough to improve search re- sults diversity in practice. 6.5 Using Classification to Promote Diversity We now want to estimate how the reported classification accuracy may perform in practice to enhance diversity in search results. 
In order to provide an initial answer to this question, we have re-ranked the documents for the 40 nouns in our testbed, using our best classifier (VSM-GT+freq) and making a list of the top-ten documents with the primary criterion of maximising the number of senses represented in the set, and the secondary criterion of maximising the similarity scores of the documents to their assigned senses. The algorithm proceeds as follows: we fill each position in the rank (starting at rank 1), with the document which has the highest similarity to some of the senses which are not yet represented in the rank; once all senses are represented, we start choosing a second representative for each sense, following the same criterion. The process goes on until the first ten documents are selected. We have also produced a number of alternative rankings for comparison purposes: clustering (centroids): this method applies eHriiengrarc (hciecnatlr Agglomerative Clustering which proved to be the most competitive clustering algorithm in a similar task (Artiles et al., 2009) to the set of search results, forcing the algorithm to create ten clusters. The centroid of each cluster is then selected Table 5: Enhancement of Search Results Diversity • – – rank@10 # senses coverage Original rank2.8049% Wikipedia 4.75 77% clustering (centroids) 2.50 42% clustering (top ranked) 2.80 46% random 2.45 43% upper bound6.1597% as one of the top ten documents in the new rank. • clustering (top ranked): Applies the same clustering algorithm, db u)t: tAhpisp lti emse t tehe s top ranked document (in the original Google rank) of each cluster is selected. • • random: Randomly selects ten documents frraonmd otmhe: :se Rt aofn dreomtrielyve sde lreecstuslts te. upper bound: This is the maximal diversity tuhpapt can o beu nodb:tai Tnheids iins our mteasxtbiemda. lN doivteer tshitayt coverage is not 100%, because some words have more than ten meanings in Wikipedia and we are only considering the top ten documents. All experiments have been applied on the full set of documents in the testbed, including those which could not be annotated with any Wikipedia sense. Coverage is computed as the ratio of senses that appear in the top ten results compared to the number of senses that appear in all search results. Results are presented in Table 5. Note that diversity in the top ten documents increases from an average of 2.80 Wikipedia senses represented in the original search engine rank, to 4.75 in the modified rank (being 6.15 the upper bound), with the coverage of senses going from 49% to 77%. With a simple VSM algorithm, the coverage of Wikipedia senses in the top ten results is 70% larger than in the original ranking. Using Wikipedia to enhance diversity seems to work much better than clustering: both strategies to select a representative from each cluster are unable to improve the diversity of the original ranking. Note, however, that our evaluation has a bias towards using Wikipedia, because only Wikipedia senses are considered to estimate diversity. Of course our results do not imply that the Wikipedia modified rank is better than the original 1364 Google rank: there are many other factors that influence the final ranking provided by a search engine. What our results indicate is that, with simple and efficient algorithms, Wikipedia can be used as a reference to improve search results diversity for one-word queries. 
7 Related Work Web search results clustering and diversity in search results are topics that receive an increasing attention from the research community. Diversity is used both to represent sub-themes in a broad topic, or to consider alternative interpretations for ambiguous queries (Agrawal et al., 2009), which is our interest here. Standard IR test collections do not usually consider ambiguous queries, and are thus inappropriate to test systems that promote diversity (Sanderson, 2008); it is only recently that appropriate test collections are being built, such as (Paramita et al., 2009) for image search and (Artiles et al., 2009) for person name search. We see our testbed as complementary to these ones, and expect that it can contribute to foster research on search results diversity. To our knowledge, Wikipedia has not explicitly been used before to promote diversity in search results; but in (Gollapudi and Sharma, 2009), it is used as a gold standard to evaluate diversification algorithms: given a query with a Wikipedia disambiguation page, an algorithm is evaluated as promoting diversity when different documents in the search results are semantically similar to different Wikipedia pages (describing the alternative senses of the query). Although semantic similarity is measured automatically in this work, our results confirm that this evaluation strategy is sound, because Wikipedia senses are indeed representative of search results. (Clough et al., 2009) analyses query diversity in a Microsoft Live Search, using click entropy and query reformulation as diversity indicators. It was found that at least 9.5% - 16.2% of queries could benefit from diversification, although no correlation was found between the number of senses of a word in Wikipedia and the indicators used to discover diverse queries. This result does not discard, however, that queries where applying diversity is useful cannot benefit from Wikipedia as a sense inventory. In the context of clustering, (Carmel et al., 2009) successfully employ Wikipedia to enhance automatic cluster labeling, finding that Wikipedia labels agree with manual labels associated by humans to a cluster, much more than with signif- icant terms that are extracted directly from the text. In a similar line, both (Gabrilovich and Markovitch, 2007) and (Syed et al., 2008) provide evidence suggesting that categories of Wikipedia articles can successfully describe common concepts in documents. In the field of Natural Language Processing, there has been successful attempts to connect Wikipedia entries to Wordnet senses: (RuizCasado et al., 2005) reports an algorithm that provides an accuracy of 84%. (Mihalcea, 2007) uses internal Wikipedia hyperlinks to derive sensetagged examples. But instead of using Wikipedia directly as sense inventory, Mihalcea then manually maps Wikipedia senses into Wordnet senses (claiming that, at the time of writing the paper, Wikipedia did not consistently report ambiguity in disambiguation pages) and shows that a WSD system based on acquired sense-tagged examples reaches an accuracy well beyond an (informed) most frequent sense heuristic. 8 Conclusions We have investigated whether generic lexical resources can be used to promote diversity in Web search results for one-word, ambiguous queries. 
We have compared WordNet and Wikipedia and arrived to a number of conclusions: (i) unsurprisingly, Wikipedia has a much better coverage of senses in search results, and is therefore more appropriate for the task; (ii) the distribution of senses in search results can be estimated using the internal graph structure of the Wikipedia and the relative number of visits received by each sense in Wikipedia, and (iii) associating Web pages to Wikipedia senses with simple and efficient algorithms, we can produce modified rankings that cover 70% more Wikipedia senses than the original search engine rankings. We expect that the testbed created for this research will complement the - currently short - set of benchmarking test sets to explore search results diversity and query ambiguity. Our testbed is publicly available for research purposes at http://nlp.uned.es. Our results endorse further investigation on the use of Wikipedia to organize search results. Some limitations of our research, however, must be 1365 noted: (i) the nature of our testbed (with every search result manually annotated in terms of two sense inventories) makes it too small to extract solid conclusions on Web searches (ii) our work does not involve any study of diversity from the point of view of Web users (i.e. when a Web query addresses many different use needs in practice); research in (Clough et al., 2009) suggests that word ambiguity in Wikipedia might not be related with diversity of search needs; (iii) we have tested our classifiers with a simple re-ordering of search results to test how much diversity can be improved, but a search results ranking depends on many other factors, some of them more crucial than diversity; it remains to be tested how can we use document/Wikipedia associations to improve search results clustering (for instance, providing seeds for the clustering process) and to provide search suggestions. Acknowledgments This work has been partially funded by the Spanish Government (project INES/Text-Mess) and the Xunta de Galicia. References R. Agrawal, S. Gollapudi, A. Halverson, and S. Leong. 2009. Diversifying Search Results. In Proc. of WSDM’09. ACM. P. Anick. 2003. Using Terminological Feedback for Web Search Refinement : a Log-based Study. In Proc. ACM SIGIR 2003, pages 88–95. ACM New York, NY, USA. J. Artiles, J. Gonzalo, and S. Sekine. 2009. WePS 2 Evaluation Campaign: overview of the Web People Search Clustering Task. In 2nd Web People Search Evaluation Workshop (WePS 2009), 18th WWW Conference. 2009. T. Brants and A. Franz. 2006. Web 1T 5-gram, version 1. Philadelphia: Linguistic Data Consortium. D. Carmel, H. Roitman, and N. Zwerdling. 2009. Enhancing Cluster Labeling using Wikipedia. In Pro- ceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval, pages 139–146. ACM. C. Carpineto, S. Osinski, G. Romano, and Dawid Weiss. 2009. A Survey of Web Clustering Engines. ACM Computing Surveys, 41(3). Y. Chen, S. Yat Mei Lee, and C. Huang. 2009. PolyUHK: A Robust Information Extraction System for Web Personal Names. In Proc. WWW’09 (WePS2 Workshop). ACM. C. Clarke, M. Kolla, G. Cormack, O. Vechtomova, A. Ashkan, S. B ¨uttcher, and I. MacKinnon. 2008. Novelty and Diversity in Information Retrieval Evaluation. In Proc. SIGIR ’08, pages 659–666. ACM. P. Clough, M. Sanderson, M. Abouammoh, S. Navarro, and M. Paramita. 2009. Multiple Approaches to Analysing Query Diversity. In Proc. of SIGIR 2009. ACM. W. Daelemans, J. Zavrel, K. van der Sloot, and A. 
van den Bosch. 2001 . TiMBL: Tilburg Memory Based Learner, version 4.0, Reference Guide. Technical report, University of Antwerp. E. Gabrilovich and S. Markovitch. 2007. Computing Semantic Relatedness using Wikipedia-based Explicit Semantic Analysis. In Proceedings of The 20th International Joint Conference on Artificial Intelligence (IJCAI), Hyderabad, India. S. Gollapudi and A. Sharma. 2009. An Axiomatic Approach for Result Diversification. In Proc. WWW 2009, pages 381–390. ACM New York, NY, USA. R. Mihalcea. 2007. Using Wikipedia for Automatic Word Sense Disambiguation. In Proceedings of NAACL HLT, volume 2007. G. Miller, C. R. Beckwith, D. Fellbaum, Gross, and K. Miller. 1990. Wordnet: An on-line lexical database. International Journal of Lexicograph, 3(4). G.A Miller, C. Leacock, R. Tengi, and Bunker R. T. 1993. A Semantic Concordance. In Proceedings of the ARPA WorkShop on Human Language Technology. San Francisco, Morgan Kaufman. M. Paramita, M. Sanderson, and P. Clough. 2009. Diversity in Photo Retrieval: Overview of the ImageCLEFPhoto task 2009. CLEF working notes, 2009. M. Ruiz-Casado, E. Alfonseca, and P. Castells. 2005. Automatic Assignment of Wikipedia Encyclopaedic Entries to Wordnet Synsets. Advances in Web Intelligence, 3528:380–386. M. Sanderson. 2000. Retrieving with Good Sense. Information Retrieval, 2(1):49–69. M. Sanderson. 2008. Ambiguous Queries: Test Collections Need More Sense. In Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval, pages 499–506. ACM New York, NY, USA. C. Santamar ı´a, J. Gonzalo, and F. Verdejo. 2003. Automatic Association of Web Directories to Word Senses. Computational Linguistics, 29(3):485–502. Z. S. Syed, T. Finin, and Joshi. A. 2008. Wikipedia as an Ontology for Describing Documents. In Proc. ICWSM’08. 1366
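Section 6.5 of the excerpt above describes the sense-aware re-ranking only in prose. A rough, illustrative sketch of that greedy loop is given below; the similarity dictionary sims[d][s] is an assumed input (for instance, the VSM cosine scores of Section 6.1), and documents that match no Wikipedia sense are not handled here.

def diversify(sims, top_k=10):
    # Greedily fill each rank position with the document most similar to a sense
    # that is not yet represented; once every sense is covered, start a new round.
    all_senses = {s for senses in sims.values() for s in senses}
    remaining = set(sims)
    uncovered = set(all_senses)
    ranking = []
    while remaining and len(ranking) < top_k:
        if not uncovered:
            uncovered = set(all_senses)   # pick second (third, ...) representatives
        best_doc, best_sense, best_sim = None, None, -1.0
        for d in remaining:
            for s in uncovered:
                if sims[d].get(s, 0.0) > best_sim:
                    best_doc, best_sense, best_sim = d, s, sims[d].get(s, 0.0)
        ranking.append(best_doc)
        remaining.remove(best_doc)
        uncovered.discard(best_sense)
    return ranking

# Toy usage with three documents and two senses of the same query word.
sims = {"doc1": {"band": 0.9, "store": 0.1},
        "doc2": {"band": 0.8, "store": 0.2},
        "doc3": {"band": 0.3, "store": 0.7}}
print(diversify(sims, top_k=3))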
4 0.51554257 44 acl-2010-BabelNet: Building a Very Large Multilingual Semantic Network
Author: Roberto Navigli ; Simone Paolo Ponzetto
Abstract: In this paper we present BabelNet, a very large, wide-coverage multilingual semantic network. The resource is automatically constructed by means of a methodology that integrates lexicographic and encyclopedic knowledge from WordNet and Wikipedia. In addition, Machine Translation is applied to enrich the resource with lexical information for all languages. We conduct experiments on new and existing gold-standard datasets to show the high quality and coverage of the resource.
5 0.48014826 218 acl-2010-Structural Semantic Relatedness: A Knowledge-Based Method to Named Entity Disambiguation
Author: Xianpei Han ; Jun Zhao
Abstract: The name ambiguity problem has raised urgent demands for efficient, high-quality named entity disambiguation methods. In recent years, the increasing availability of large-scale, rich semantic knowledge sources (such as Wikipedia and WordNet) has created new opportunities to enhance named entity disambiguation by developing algorithms that can best exploit these knowledge sources. The problem is that these knowledge sources are heterogeneous and most of the semantic knowledge within them is embedded in complex structures, such as graphs and networks. This paper proposes a knowledge-based method, called Structural Semantic Relatedness (SSR), which can enhance named entity disambiguation by capturing and leveraging the structural semantic knowledge in multiple knowledge sources. Empirical results show that, in comparison with classical BOW-based methods and social network based methods, our method can significantly improve the disambiguation performance by 8.7% and 14.7%, respectively.
6 0.46959886 156 acl-2010-Knowledge-Rich Word Sense Disambiguation Rivaling Supervised Systems
7 0.44798297 185 acl-2010-Open Information Extraction Using Wikipedia
8 0.44481492 29 acl-2010-An Exact A* Method for Deciphering Letter-Substitution Ciphers
9 0.4290919 109 acl-2010-Experiments in Graph-Based Semi-Supervised Learning Methods for Class-Instance Acquisition
10 0.40231293 112 acl-2010-Extracting Social Networks from Literary Fiction
11 0.39330646 159 acl-2010-Learning 5000 Relational Extractors
12 0.35908079 166 acl-2010-Learning Word-Class Lattices for Definition and Hypernym Extraction
13 0.34383154 127 acl-2010-Global Learning of Focused Entailment Graphs
14 0.31381202 186 acl-2010-Optimal Rank Reduction for Linear Context-Free Rewriting Systems with Fan-Out Two
15 0.30644816 125 acl-2010-Generating Templates of Entity Summaries with an Entity-Aspect Model and Pattern Mining
16 0.29262936 7 acl-2010-A Generalized-Zero-Preserving Method for Compact Encoding of Concept Lattices
17 0.2899923 113 acl-2010-Extraction and Approximation of Numerical Attributes from the Web
18 0.28903544 39 acl-2010-Automatic Generation of Story Highlights
19 0.28040466 141 acl-2010-Identifying Text Polarity Using Random Walks
20 0.2786043 126 acl-2010-GernEdiT - The GermaNet Editing Tool
topicId topicWeight
[(9, 0.01), (14, 0.014), (16, 0.397), (25, 0.065), (59, 0.079), (72, 0.035), (73, 0.041), (78, 0.032), (83, 0.071), (84, 0.026), (98, 0.111)]
simIndex simValue paperId paperTitle
1 0.78692007 223 acl-2010-Tackling Sparse Data Issue in Machine Translation Evaluation
Author: Ondrej Bojar ; Kamil Kos ; David Marecek
Abstract: We illustrate and explain problems of n-grams-based machine translation (MT) metrics (e.g. BLEU) when applied to morphologically rich languages such as Czech. A novel metric SemPOS based on the deep-syntactic representation of the sentence tackles the issue and retains the performance for translation to English as well.
same-paper 2 0.75627685 250 acl-2010-Untangling the Cross-Lingual Link Structure of Wikipedia
Author: Gerard de Melo ; Gerhard Weikum
Abstract: Wikipedia articles in different languages are connected by interwiki links that are increasingly being recognized as a valuable source of cross-lingual information. Unfortunately, large numbers of links are imprecise or simply wrong. In this paper, techniques to detect such problems are identified. We formalize their removal as an optimization task based on graph repair operations. We then present an algorithm with provable properties that uses linear programming and a region growing technique to tackle this challenge. This allows us to transform Wikipedia into a much more consistent multilingual register of the world’s entities and concepts.
3 0.61658585 249 acl-2010-Unsupervised Search for the Optimal Segmentation for Statistical Machine Translation
Author: Coskun Mermer ; Ahmet Afsin Akin
Abstract: We tackle the previously unaddressed problem of unsupervised determination of the optimal morphological segmentation for statistical machine translation (SMT) and propose a segmentation metric that takes into account both sides of the SMT training corpus. We formulate the objective function as the posterior probability of the training corpus according to a generative segmentation-translation model. We describe how the IBM Model-1 translation likelihood can be computed incrementally between adjacent segmentation states for efficient computation. Submerging the proposed segmentation method in a SMT task from morphologically-rich Turkish to English does not exhibit the expected improvement in translation BLEU scores and confirms the robustness of phrase-based SMT to translation unit combinatorics. A positive outcome of this work is the described modification to the sequential search algorithm of Morfessor (Creutz and Lagus, 2007) that enables arbitrary-fold parallelization of the computation, which unexpectedly improves the translation performance as measured by BLEU.
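The objective sketched in this abstract rests on the standard IBM Model-1 translation likelihood. For concreteness, a minimal sketch of that corpus log-likelihood is shown below; the lexical table t is an assumed input, and the incremental recomputation between adjacent segmentation states described in the paper is not reproduced here.

import math

def model1_log_likelihood(corpus, t, epsilon=1.0):
    # corpus: list of (source_tokens, target_tokens); t: dict (tgt_word, src_word) -> prob.
    ll = 0.0
    for src, tgt in corpus:
        src_null = ["<NULL>"] + list(src)            # Model-1 adds a NULL source word
        ll += math.log(epsilon) - len(tgt) * math.log(len(src_null))
        for e in tgt:
            ll += math.log(sum(t.get((e, f), 1e-12) for f in src_null))
    return ll

# Toy usage with a two-pair corpus and a hand-set lexical table.
corpus = [(["ev"], ["house"]), (["ev", "ler"], ["houses"])]
t = {("house", "ev"): 0.9, ("houses", "ev"): 0.5, ("houses", "ler"): 0.4}
print(model1_log_likelihood(corpus, t))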
4 0.58788103 110 acl-2010-Exploring Syntactic Structural Features for Sub-Tree Alignment Using Bilingual Tree Kernels
Author: Jun Sun ; Min Zhang ; Chew Lim Tan
Abstract: We propose Bilingual Tree Kernels (BTKs) to capture the structural similarities across a pair of syntactic translational equivalences and apply BTKs to sub-tree alignment along with some plain features. Our study reveals that the structural features embedded in a bilingual parse tree pair are very effective for sub-tree alignment and the bilingual tree kernels can well capture such features. The experimental results show that our approach achieves a significant improvement on both gold standard tree bank and automatically parsed tree pairs against a heuristic similarity based method. We further apply the sub-tree alignment in machine translation with two methods. It is suggested that the subtree alignment benefits both phrase and syntax based systems by relaxing the constraint of the word alignment. 1
5 0.42637134 115 acl-2010-Filtering Syntactic Constraints for Statistical Machine Translation
Author: Hailong Cao ; Eiichiro Sumita
Abstract: Source language parse trees offer very useful but imperfect reordering constraints for statistical machine translation. A lot of effort has been made for soft applications of syntactic constraints. We alternatively propose the selective use of syntactic constraints. A classifier is built automatically to decide whether a node in the parse trees should be used as a reordering constraint or not. Using this information yields a 0.8 BLEU point improvement over a full constraint-based system.
6 0.42524981 163 acl-2010-Learning Lexicalized Reordering Models from Reordering Graphs
7 0.42405427 192 acl-2010-Paraphrase Lattice for Statistical Machine Translation
9 0.42248631 71 acl-2010-Convolution Kernel over Packed Parse Forest
10 0.41774374 87 acl-2010-Discriminative Modeling of Extraction Sets for Machine Translation
11 0.41384754 56 acl-2010-Bridging SMT and TM with Translation Recommendation
12 0.41249594 48 acl-2010-Better Filtration and Augmentation for Hierarchical Phrase-Based Translation Rules
13 0.41201782 102 acl-2010-Error Detection for Statistical Machine Translation Using Linguistic Features
14 0.41040969 12 acl-2010-A Probabilistic Generative Model for an Intermediate Constituency-Dependency Representation
15 0.40470284 147 acl-2010-Improving Statistical Machine Translation with Monolingual Collocation
16 0.40262428 113 acl-2010-Extraction and Approximation of Numerical Attributes from the Web
17 0.39854205 169 acl-2010-Learning to Translate with Source and Target Syntax
18 0.39787215 218 acl-2010-Structural Semantic Relatedness: A Knowledge-Based Method to Named Entity Disambiguation
19 0.39733171 109 acl-2010-Experiments in Graph-Based Semi-Supervised Learning Methods for Class-Instance Acquisition
20 0.3966817 127 acl-2010-Global Learning of Focused Entailment Graphs