acl acl2011 acl2011-246 knowledge-graph by maker-knowledge-mining

246 acl-2011-Piggyback: Using Search Engines for Robust Cross-Domain Named Entity Recognition


Source: pdf

Author: Stefan Rüd ; Massimiliano Ciaramita ; Jens Müller ; Hinrich Schütze

Abstract: We use search engine results to address a particularly difficult cross-domain language processing task, the adaptation of named entity recognition (NER) from news text to web queries. The key novelty of the method is that we submit a token with context to a search engine and use similar contexts in the search results as additional information for correctly classifying the token. We achieve strong gains in NER performance on news, in-domain and out-of-domain, and on web queries.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 The key novelty of the method is that we submit a token with context to a search engine and use similar contexts in the search results as additional information for correctly classifying the token. [sent-2, score-0.572]

2 To address this problem, we propose a new type of features for NLP data, features extracted from search engine results. [sent-6, score-0.435]

3 Our motivation is that search engine results can be viewed as a substitute for the world knowledge that is required in NLP tasks, but that can only be extracted from a standard training set or precompiled resources to a limited extent. [sent-7, score-0.335]

4 For example, a named entity (NE) recognizer trained on news text may tag the NE London in an out-of-domain web query like London Klondike gold rush as a location. [sent-8, score-0.322]

5 But if we train the recognizer on features derived from search results for the sentence to be tagged, correct classification as person is possible. [sent-9, score-0.241]

6 This is because the search results for London Klondike gold rush contain snippets in which the first name Jack precedes London; this is a sure indicator of a last name and hence an NE of type person. [sent-10, score-0.254]

7 We call our approach piggyback and search-result-derived features piggyback features, because we piggyback on a search engine like Google for solving a difficult NLP task. [sent-11, score-1.776]

8 In this paper, we use piggyback features to address a particularly hard cross-domain problem, the application of an NER system trained on news to web queries. [sent-12, score-0.558]

9 But queries are generally lowercase and even if uppercase characters are used, they are not consistent enough to be reliable features. [sent-15, score-0.287]

10 Thus, applying NER systems trained on news to web queries requires a robust cross-domain approach. [sent-16, score-0.293]

11 News-to-queries adaptation is also hard because queries provide limited context for NEs. [sent-17, score-0.343]

12 In a short query like buy ford or ford pardon, there is much less context than in news. [sent-19, score-0.188]

13 The lack of context and capitalization, and the noisiness of real-world web queries (tokenization irregularities and misspellings) all make NER hard. [sent-20, score-0.222]

14 The low annotator agreement we found for queries (Section 5) also confirms this point. [sent-21, score-0.154]

15 The correct identification of NEs in web queries can be crucial for providing relevant pages and ads to users. [sent-22, score-0.222]

16 Lexical, part-of-speech (PoS), shape and gazetteer features are standard. [sent-30, score-0.207]

17 While the impact of different types of features is well understood for standard NER, fundamentally different types of features can be used when leveraging search engine results. [sent-31, score-0.435]

18 Returning to the NE London in the query London Klondike gold rush, the feature “proportion of search engine results in which a first name precedes the token of interest” is likely to be useful in NER. [sent-32, score-0.534]

19 Since using search engine results for cross-domain robustness is a new approach in NLP, the design of appropriate features is crucial to its success. [sent-33, score-0.404]

20 One main contribution of this paper is the large array of piggyback features that we propose in Section 4. [sent-38, score-0.452]

21 The results in Section 7 show that piggyback features significantly increase NER performance. [sent-40, score-0.452]

22 We discuss challenges of using piggyback features due to the cost of querying search engines and present our conclusions and future work in Section 8. [sent-42, score-0.68]

23 (2008) found that capitalization of NEs in web queries is inconsistent and not a reliable cue for NER. [sent-44, score-0.376]

24 This is also promising, but the context in search results is richer and potentially more informative than that of other queries in logs. [sent-47, score-0.331]

25 The insight that search results provide useful additional context for natural language expressions is not new. [sent-48, score-0.177]

26 Perhaps the oldest and best known application is pseudo-relevance feedback, which uses words and phrases from search results for query expansion (Rocchio, 1971; Xu and Croft, 1996). [sent-49, score-0.257]

27 Search counts or search results have also been used for sentiment analysis (Turney, 2002), for transliteration (Grefenstette et al. [sent-50, score-0.177]

28 (2007), but they mainly used frequency statistics as opposed to what we view as the main strength of search results: the ability to get additional contextually similar uses of the token that is to be classified. [sent-54, score-0.238]

29 Our approach fits within this line of work in that it empirically investigates features with good cross-domain generalization properties. [sent-76, score-0.243]

30 [Section 3: Standard NER features] As is standard in supervised NER, we train an NE tagger on a dataset where each token is represented as a feature vector. [sent-86, score-0.211]

31 We will refer to the target token, the token we define the feature vector for, as w0. [sent-88, score-0.208]

32 The binary feature WORD(k,i) (line 1) is 1 iff wi, the ith word in the dictionary, occurs at position k with respect to w0. [sent-93, score-0.172]

33 The analogous feature for part of speech, POS(k,t) (line 2), is 1 iff wk has been tagged with PoS t, as determined by the TnT tagger (Brants, 2000). [sent-95, score-0.174]
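
A minimal sketch of how such binary WORD(k,i) and POS(k,t) features might be generated; the window size, the dictionary filter, and all names are illustrative assumptions, not specifics from the paper.

```python
def standard_features(tokens, tags, pos, window=2, vocab=None):
    # Binary WORD(k, w) and POS(k, t) features for the target token w0 at
    # index `pos`, for offsets k in [-window, window].
    feats = {}
    for k in range(-window, window + 1):
        j = pos + k
        if 0 <= j < len(tokens):
            w = tokens[j].lower()
            if vocab is None or w in vocab:
                feats[("WORD", k, w)] = 1   # word w occurs at offset k
            feats[("POS", k, tags[j])] = 1  # PoS tag t occurs at offset k
    return feats

# Example: features for "ford" in the query "buy ford".
print(standard_features(["buy", "ford"], ["VB", "NN"], 1))
```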

34 The two gazetteer features are the binary features GAZBl(k,i) and GAZ-Il (k,i). [sent-108, score-0.241]

35 [Section 4: Piggyback features] Feature groups URL, LEX, BOW, and MISC are piggyback features. [sent-112, score-0.512]

36 Each trigram wi−1wiwi+1 is submitted as a query to the search engine. [sent-114, score-0.257]

37 The search engine returns a search result for the query consisting of, in most cases, 10 snippets, each of which contains 0, 1, or more hits of the search term wi. [sent-116, score-0.741]

38 w0 is the token that is to be classified (PER, LOC, ORG, or O), and the previous word and the next word serve as context that the search engine can exploit to provide snippets in which w0 is used in the same NE category as in the input text. [sent-121, score-0.41]
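
A rough sketch of this querying step; `search_snippets` is a hypothetical callable standing in for whatever search API is used, since no specific interface is prescribed here.

```python
def trigram_query(tokens, i):
    # Build the trigram w_{i-1} w_i w_{i+1} around the target token w0 = w_i,
    # dropping neighbors that fall outside the sentence.
    left = tokens[i - 1] if i > 0 else ""
    right = tokens[i + 1] if i + 1 < len(tokens) else ""
    return " ".join(t for t in (left, tokens[i], right) if t)

def snippets_for_token(tokens, i, search_snippets):
    # `search_snippets`: hypothetical function mapping a query string to a
    # list of (title, snippet_text, url) triples, typically 10 per query.
    return search_snippets(trigram_query(tokens, i))
```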

39 The feature URL-SUBPART (line 7) is the fraction of URLs in the search result containing w0 as a substring. [sent-130, score-0.263]

40 For URL-MI (line 8), each URL in the search result is split on special characters into parts (e. [sent-132, score-0.214]

41 We refer to the set of all parts in the search result as URL-parts. [sent-135, score-0.177]

42 The value of MIu(p, PER) is computed on the search results of the training set as the mutual information (MI) between (i) w0 being PER and (ii) p occurring as part of a URL in the search result. [sent-136, score-0.395]
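
A sketch of the two URL features, assuming URLs are split on non-alphanumeric characters; the MI value is the standard mutual information of two binary events computed from a 2x2 contingency table over the training set, with smoothing and other details left out.

```python
import math
import re

def url_subpart(urls, w0):
    # URL-SUBPART: fraction of result URLs containing w0 as a substring.
    return sum(w0.lower() in u.lower() for u in urls) / len(urls) if urls else 0.0

def url_parts(url):
    # Split a URL on special characters into parts, as used for URL-MI.
    return [p for p in re.split(r"[^a-z0-9]+", url.lower()) if p]

def binary_mi(n11, n10, n01, n00):
    # MI between two binary events, e.g. (w0 is PER) and (part p occurs in
    # a result URL), from training-set counts of the four joint outcomes.
    n = n11 + n10 + n01 + n00
    if n == 0:
        return 0.0
    mi = 0.0
    for n_xy, n_x, n_y in ((n11, n11 + n10, n11 + n01),
                           (n10, n11 + n10, n10 + n00),
                           (n01, n01 + n00, n11 + n01),
                           (n00, n01 + n00, n10 + n00)):
        if n_xy:
            mi += (n_xy / n) * math.log(n_xy * n / (n_x * n_y))
    return mi
```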

43 These features assess how appropriate the words occurring in w0’s local contexts in the search result are for an NE class. [sent-153, score-0.309]

44 The value of NEIGHBOR(k) (k = −1 or 1) is defined as the average log ratio of NE-BNC(v, k) and OTHER-BNC(v, k), averaged over the set k-neighbors, the set of words that occur at position k with respect to s0 in the search result. [sent-156, score-0.177]
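
A sketch of NEIGHBOR(k) under the assumption that NE-BNC(·, k) and OTHER-BNC(·, k) are precomputed count tables from the BNC; the add-alpha smoothing is an assumption to keep the log ratio defined for unseen words.

```python
import math

def neighbor_feature(k_neighbors, ne_bnc_k, other_bnc_k, alpha=0.5):
    # Average log ratio of NE vs. non-NE neighbor counts for the words
    # found at offset k from s0 in the snippets. ne_bnc_k / other_bnc_k:
    # hypothetical dicts mapping a word to its count at offset k next to
    # an NE / a non-NE in the reference corpus.
    if not k_neighbors:
        return 0.0
    ratios = [math.log((ne_bnc_k.get(v, 0) + alpha) /
                       (other_bnc_k.get(v, 0) + alpha)) for v in k_neighbors]
    return sum(ratios) / len(ratios)
```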

45 Note that the search engine could be used again for this purpose; for practical reasons we preferred a static resource for this first study where many design variants were explored. [sent-159, score-0.307]

46 The feature LEX-MI interprets words occurring before or after s0 as indicators of named-entityhood. [sent-160, score-0.182]

47 MId(v, PER) is computed on the search results of the training set as the MI between (i) w0 being PER and (ii) v occurring close to s0 in the search result, either to the left (d = −1) or to the right (d = 1) of s0. [sent-162, score-0.395]

48 MIb(v, PER) is computed on the search results of the training set as the MI between (i) w0 being PER and (ii) v occurring anywhere in the search result. [sent-171, score-0.395]

49 The average is computed over all words v ∈ bow-words that occur in a particular search result. [sent-173, score-0.177]
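
The averaging step might be assembled as below, reusing MIb values precomputed on the training set; all names are illustrative.

```python
def bow_feature(result_tokens, mib_per, bow_words):
    # Average MIb(v, PER) over the bow-words v that occur anywhere in this
    # search result; result_tokens is the set of tokens from all snippets.
    present = [v for v in bow_words if v in result_tokens]
    return sum(mib_per[v] for v in present) / len(present) if present else 0.0
```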

50 We collect the remaining piggyback features in the group MISC. [sent-176, score-0.496]

51 The UPPERCASE and ALLCAPS features (lines 12 and 13) compute the fraction of occurrences of w0 in the search result with capitalization of only the first letter and all letters, respectively. [sent-177, score-0.365]
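
A minimal sketch of the two capitalization features; treating a hit as initial-capitalized iff its first letter is uppercase and the rest lowercase is one reading of the description above.

```python
def capitalization_features(hits):
    # `hits`: surface forms of w0's occurrences in the snippets (titles
    # excluded, as noted below).
    if not hits:
        return 0.0, 0.0
    upper = sum(h[:1].isupper() and h[1:] == h[1:].lower()
                for h in hits) / len(hits)
    allcaps = sum(len(h) > 1 and h.isupper() for h in hits) / len(hits)
    return upper, allcaps
```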

52 We exclude titles: capitalization in titles is not a consistent clue for NE status. [sent-178, score-0.164]

53 The SPECIAL-TITLE feature (line 15) captures this by counting the occurrences of numbers and special characters in s−1 and s1 in titles of the search result. [sent-183, score-0.34]

54 The TITLE-WORD feature (line 16) computes the fraction of occurrences of w0 in the titles of the search result. [sent-184, score-0.303]

55 The NOMINAL-POS feature (line 17) calculates the proportion of nominal PoS tags (NN, NNS, NP, NPS) of s0 in the search result, as determined by a PoS tagging of the snippets using TreeTagger (Schmid, 1994). [sent-185, score-0.348]

56 This feature is complementary to the feature group LEX in that it is based on shape and PoS and does not estimate different parameters for each word. [sent-189, score-0.246]

57 The feature PHRASE-HIT(−1) (line 19) calculates the proportion of occurrences of w0 in the search result where the left neighbor in the snippet is equal to the word preceding w0 in the search string, i. [sent-190, score-0.415]

58 This feature helps identify phrases: search strings containing NEs are more likely to occur as a phrase in search results. [sent-194, score-0.44]
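
A sketch of PHRASE-HIT(−1); snippet tokenization and case-insensitive matching are assumptions.

```python
def phrase_hit_left(snippets_tokens, w_prev, w0):
    # Fraction of w0's occurrences in the snippets whose left neighbor
    # equals the word preceding w0 in the search string (w_prev).
    hits = matches = 0
    for toks in snippets_tokens:
        for i, t in enumerate(toks):
            if t.lower() == w0.lower():
                hits += 1
                if i > 0 and toks[i - 1].lower() == w_prev.lower():
                    matches += 1
    return matches / hits if hits else 0.0
```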

59 The ACRONYM feature (line 20) computes the proportion of the initials of w−1w0 or w0w1 or w−1w0w1 occurring in the search result. [sent-195, score-0.347]
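
ACRONYM might look as follows; forming initials from the two bigrams and the trigram around w0, and matching them per snippet as whole tokens, are interpretations of the one-sentence description above.

```python
def acronym_feature(snippets_tokens, w_m1, w0, w_p1):
    # Proportion of snippets containing the initials of w-1 w0, w0 w+1, or
    # w-1 w0 w+1 as a token (e.g., "jl" for "jack london").
    grams = [(w_m1, w0), (w0, w_p1), (w_m1, w0, w_p1)]
    acronyms = {"".join(w[0].lower() for w in g) for g in grams if all(g)}
    if not snippets_tokens or not acronyms:
        return 0.0
    hits = sum(any(t.lower().replace(".", "") in acronyms for t in toks)
               for toks in snippets_tokens)
    return hits / len(snippets_tokens)
```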

60 The binary feature EMPTY (line 21) returns 1 iff the search result is empty. [sent-197, score-0.317]

61 This distinguishes feature values that are zero for other reasons (e.g., for the feature ALLCAPS) from values that are zero because the search engine found no hits. [sent-200, score-0.393]

62 As capitalization is absent from queries, we lowercased both CoNLL and IEER. [sent-212, score-0.312]

63 Notice that this step is necessary: otherwise, virtually no NNP/NNPS categories would be predicted on the query data, because the lowercase NEs of web queries never occur in properly capitalized news; this causes an NER tagger trained on standard PoS to underpredict NEs (1–3% positive rate). [sent-214, score-0.386]

64 The 2005 KDD Cup is a query topic categorization task based on 800,000 queries (Li et al. [sent-215, score-0.234]

65 We use a random subset of 2000 queries as a source of web queries. [sent-217, score-0.222]

66 By means of simple regular expressions, we excluded from sampling queries that looked like URLs or emails (≈ 15%), as they are easy to identify and do not provide a significant challenge. (A reviewer points out that we use the terms in-domain and out-of-domain somewhat liberally.) [sent-218, score-0.185]

67 We also excluded queries shorter than 10 characters (4%) and longer than 50 characters (2%) to provide annotators with enough context, but not an overly complex task. [sent-226, score-0.228]

68 We instructed workers to follow the CoNLL 2003 NER guidelines (augmented with several examples from queries that we annotated) and to identify up to three NEs in a short text and copy and paste them into a box with an associated multiple-choice menu with the four CoNLL NE labels: LOC, MISC, ORG, and PER. [sent-228, score-0.187]

69 In a first round we produced 1000 queries later used for development. [sent-230, score-0.154]

70 Given the low annotator agreement (.34 for KDD-T; Cohen, 1960), we remove queries with less than 50% agreement, averaged over the tokens in the query. [sent-237, score-0.154]
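
The filtering rule itself is simple; a sketch, assuming per-token agreement is the fraction of annotators who chose the majority label for that token.

```python
def keep_query(per_token_agreement, threshold=0.5):
    # Keep a query only if annotator agreement, averaged over its tokens,
    # reaches 50%.
    return sum(per_token_agreement) / len(per_token_agreement) >= threshold
```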

71 PER is about as prevalent in KDD as in CoNLL, but LOC and ORG have higher percentages, reflecting the fact that people search frequently for locations and commercial organizations. [sent-248, score-0.177]

72 As a benchmark we use the baseline model with gazetteer features (BASE and GAZ). [sent-271, score-0.177]

73 In each column, the best numbers within a dataset for the “lowercased” runs are bolded (see below for discussion of the “capitalization” runs on lines c9 and i9). [sent-279, score-0.159]

74 Removing GAZ, URL, BOW, or MISC from line c7 causes small, comparable decreases in performance (lines c3–c6). [sent-284, score-0.178]

75 These feature groups seem to have about the same importance in this experimental setting, but leaving out BASE decreases F1 by a larger 6. [sent-285, score-0.185]

76 The main result for CoNLL is that using piggyback features (line c7) improves F1 of a standard NER system that uses only BASE and GAZ (line c1) by 4. [sent-287, score-0.452]

77 Comparing lines c7 and c9, we see that piggyback features are able to recover all the performance that is lost when proper capitalization is unavailable. [sent-292, score-0.659]

78 Compared to standard NER (using feature groups BASE and GAZ), our combined feature set achieves a performance that is more than 10% higher (lines i8 vs i1). [sent-312, score-0.267]

79 This demonstrates that piggyback features have robust cross-domain generalization properties. [sent-313, score-0.525]

80 The comparison of lines i8 and i9 confirms that the features effectively compensate for the lack of capitalization, and perform almost as well as (although statistically worse than) a model trained on capitalized data. [sent-314, score-0.193]

81 On line k7, we show results for this run for KDD-T and for runs that differ by one feature group (lines k2–k6, k8). [sent-316, score-0.307]

82 On lines k2–k6, performance generally decreases on ALL and the three NE classes when dropping one of the five feature groups on line k7. [sent-321, score-0.407]

83 The key take-away from our results on KDD-T is that piggyback features are again (as for IEER) significantly better than standard feature groups BASE and GAZ. [sent-325, score-0.598]

84 Search engine based adaptation has an advantage of 9. [sent-326, score-0.165]

85 While the improvement due to piggyback features increases as out-of-domain data become more different from the in-domain training set, performance declines in absolute terms. [sent-336, score-0.452]

86 Because search engines attempt to make optimal use of the context a word occurs in, hits shown to the user usually include other uses of the word in semantically similar snippets. [sent-345, score-0.26]

87 Our first contribution is that we have shown that this basic idea of using search engines for robust domain-independent feature representations yields solid results for one specific NLP problem, NER. [sent-347, score-0.347]

88 A third contribution of this paper is the release of an annotated dataset for web query NER. [sent-358, score-0.148]

89 We hope that this dataset will foster more research on cross-domain generalization and domain adaptation, in particular for NER and the difficult problem of web query understanding. [sent-359, score-0.223]

90 However, the general idea of using search to provide rich context information to NLP systems is applicable to a broad array of tasks. [sent-361, score-0.177]

91 We used a web search engine in the experiments presented in this paper. [sent-366, score-0.375]

92 Latencies when using one of the three main commercial search engines Bing, Google and Yahoo! [sent-367, score-0.177]

93 Search engines also tend to limit the number of queries per user and IP address. [sent-372, score-0.275]

94 To gain widespread acceptance of the piggyback idea of using search results for robust NLP, we therefore must explore alternatives to search engines. [sent-373, score-0.775]

95 In future work, we plan to develop more efficient methods of using search results for cross-domain generalization to avoid the cost of issuing a large number of queries to search engines. [sent-374, score-0.548]

96 Another avenue we are pursuing is to build a specialized search system for our application in a way similar to Cafarella and Etzioni (2005). [sent-376, score-0.177]

97 While we need good coverage of a large variety of domains for our approach to work, it is not clear how big the index of the search engine must be for good performance. [sent-377, score-0.348]

98 Conceivably, collections much smaller than those indexed by major search engines (e. [sent-378, score-0.228]

99 It is important to keep in mind, however, that one of the key factors a search engine allows us to leverage is the notion of relevance which might not be always possible to model as accurately with other data. [sent-381, score-0.307]

100 Annotating large email datasets for named entity recognition with mechanical turk. [sent-472, score-0.158]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('ner', 0.403), ('piggyback', 0.388), ('ieer', 0.222), ('url', 0.184), ('search', 0.177), ('gaz', 0.166), ('misc', 0.163), ('org', 0.159), ('nes', 0.155), ('queries', 0.154), ('conll', 0.14), ('line', 0.139), ('ne', 0.138), ('engine', 0.13), ('capitalization', 0.124), ('lex', 0.121), ('gazetteer', 0.113), ('loc', 0.112), ('mi', 0.092), ('feature', 0.086), ('bow', 0.083), ('lines', 0.083), ('kdd', 0.082), ('query', 0.08), ('mib', 0.074), ('miu', 0.074), ('neighbor', 0.073), ('per', 0.07), ('web', 0.068), ('amt', 0.067), ('features', 0.064), ('token', 0.061), ('groups', 0.06), ('london', 0.059), ('mechanical', 0.057), ('klondike', 0.055), ('named', 0.055), ('iff', 0.054), ('staff', 0.054), ('ford', 0.054), ('engines', 0.051), ('turian', 0.051), ('sang', 0.051), ('base', 0.05), ('amazon', 0.049), ('entity', 0.046), ('capitalized', 0.046), ('nlp', 0.046), ('tnt', 0.045), ('turk', 0.044), ('group', 0.044), ('proportion', 0.043), ('snippets', 0.042), ('meulder', 0.042), ('domains', 0.041), ('occurring', 0.041), ('generalization', 0.04), ('farkas', 0.04), ('ciaramita', 0.04), ('titles', 0.04), ('decreases', 0.039), ('lowercase', 0.038), ('bnc', 0.038), ('news', 0.038), ('runs', 0.038), ('characters', 0.037), ('allcaps', 0.037), ('lawson', 0.037), ('meliha', 0.037), ('poibeau', 0.037), ('sha', 0.037), ('snippet', 0.036), ('pos', 0.035), ('vs', 0.035), ('adaptation', 0.035), ('rush', 0.035), ('wk', 0.034), ('lowercased', 0.034), ('robustness', 0.033), ('robust', 0.033), ('guidelines', 0.033), ('cup', 0.033), ('occurs', 0.032), ('urls', 0.031), ('wikipedia', 0.03), ('sahami', 0.03), ('cunningham', 0.03), ('massimiliano', 0.03), ('reliable', 0.03), ('shape', 0.03), ('google', 0.029), ('location', 0.029), ('fujita', 0.028), ('chinchor', 0.028), ('grefenstette', 0.028), ('uppercase', 0.028), ('barr', 0.028), ('world', 0.028), ('twitter', 0.027), ('contexts', 0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999958 246 acl-2011-Piggyback: Using Search Engines for Robust Cross-Domain Named Entity Recognition

Author: Stefan Rüd ; Massimiliano Ciaramita ; Jens Müller ; Hinrich Schütze

Abstract: We use search engine results to address a particularly difficult cross-domain language processing task, the adaptation of named entity recognition (NER) from news text to web queries. The key novelty of the method is that we submit a token with context to a search engine and use similar contexts in the search results as additional information for correctly classifying the token. We achieve strong gains in NER performance on news, in-domain and out-of-domain, and on web queries.

2 0.18387112 271 acl-2011-Search in the Lost Sense of "Query": Question Formulation in Web Search Queries and its Temporal Changes

Author: Bo Pang ; Ravi Kumar

Abstract: Web search is an information-seeking activity. Often times, this amounts to a user seeking answers to a question. However, queries, which encode the user’s information need, are typically not expressed as full-length natural language sentences, in particular as questions. Rather, they consist of one or more text fragments. As humans become more search-engine-savvy, do natural-language questions still have a role to play in web search? Through a systematic, large-scale study, we find to our surprise that as time goes by, web users are more likely to use questions to express their search intent.

3 0.17413144 182 acl-2011-Joint Annotation of Search Queries

Author: Michael Bendersky ; W. Bruce Croft ; David A. Smith

Abstract: Marking up search queries with linguistic annotations such as part-of-speech tags, capitalization, and segmentation is an important part of query processing and understanding in information retrieval systems. Due to their brevity and idiosyncratic structure, search queries pose a challenge to existing NLP tools. To address this challenge, we propose a probabilistic approach for performing joint query annotation. First, we derive a robust set of unsupervised independent annotations, using queries and pseudo-relevance feedback. Then, we stack additional classifiers on the independent annotations, and exploit the dependencies between them to further improve the accuracy, even with a very limited amount of available training data. We evaluate our method using a range of queries extracted from a web search log. Experimental results verify the effectiveness of our approach for both short keyword queries, and verbose natural language queries.

4 0.16912684 261 acl-2011-Recognizing Named Entities in Tweets

Author: Xiaohua LIU ; Shaodian ZHANG ; Furu WEI ; Ming ZHOU

Abstract: The challenges of Named Entities Recognition (NER) for tweets lie in the insufficient information in a tweet and the unavailability of training data. We propose to combine a K-Nearest Neighbors (KNN) classifier with a linear Conditional Random Fields (CRF) model under a semi-supervised learning framework to tackle these challenges. The KNN based classifier conducts pre-labeling to collect global coarse evidence across tweets while the CRF model conducts sequential labeling to capture fine-grained information encoded in a tweet. The semi-supervised learning plus the gazetteers alleviate the lack of training data. Extensive experiments show the advantages of our method over the baselines as well as the effectiveness of KNN and semisupervised learning.

5 0.16198599 128 acl-2011-Exploring Entity Relations for Named Entity Disambiguation

Author: Danuta Ploch

Abstract: Named entity disambiguation is the task of linking an entity mention in a text to the correct real-world referent predefined in a knowledge base, and is a crucial subtask in many areas like information retrieval or topic detection and tracking. Named entity disambiguation is challenging because entity mentions can be ambiguous and an entity can be referenced by different surface forms. We present an approach that exploits Wikipedia relations between entities co-occurring with the ambiguous form to derive a range of novel features for classifying candidate referents. We find that our features improve disambiguation results significantly over a strong popularity baseline, and are especially suitable for recognizing entities not contained in the knowledge base. Our system achieves state-of-the-art results on the TAC-KBP 2009 dataset.

6 0.13993464 181 acl-2011-Jigs and Lures: Associating Web Queries with Structured Entities

7 0.13235497 124 acl-2011-Exploiting Morphology in Turkish Named Entity Recognition System

8 0.12621883 258 acl-2011-Ranking Class Labels Using Query Sessions

9 0.1050447 137 acl-2011-Fine-Grained Class Label Markup of Search Queries

10 0.10334685 199 acl-2011-Learning Condensed Feature Representations from Large Unsupervised Data Sets for Supervised Learning

11 0.10173581 256 acl-2011-Query Weighting for Ranking Model Adaptation

12 0.093995832 242 acl-2011-Part-of-Speech Tagging for Twitter: Annotation, Features, and Experiments

13 0.091794983 333 acl-2011-Web-Scale Features for Full-Scale Parsing

14 0.087189436 19 acl-2011-A Mobile Touchable Application for Online Topic Graph Extraction and Exploration of Web Content

15 0.084155299 191 acl-2011-Knowledge Base Population: Successful Approaches and Challenges

16 0.081604853 126 acl-2011-Exploiting Syntactico-Semantic Structures for Relation Extraction

17 0.080831692 13 acl-2011-A Graph Approach to Spelling Correction in Domain-Centric Search

18 0.073048398 127 acl-2011-Exploiting Web-Derived Selectional Preference to Improve Statistical Dependency Parsing

19 0.069563709 26 acl-2011-A Speech-based Just-in-Time Retrieval System using Semantic Search

20 0.067912906 163 acl-2011-Improved Modeling of Out-Of-Vocabulary Words Using Morphological Classes


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.216), (1, 0.065), (2, -0.091), (3, 0.029), (4, -0.067), (5, -0.137), (6, 0.016), (7, -0.221), (8, 0.019), (9, 0.023), (10, 0.031), (11, 0.049), (12, -0.024), (13, -0.042), (14, 0.023), (15, -0.016), (16, 0.004), (17, 0.028), (18, -0.009), (19, -0.048), (20, -0.007), (21, -0.069), (22, 0.028), (23, -0.019), (24, -0.001), (25, 0.017), (26, 0.017), (27, 0.02), (28, -0.084), (29, 0.048), (30, -0.016), (31, 0.067), (32, -0.028), (33, 0.029), (34, 0.048), (35, 0.099), (36, 0.034), (37, -0.038), (38, -0.068), (39, -0.009), (40, 0.073), (41, 0.039), (42, -0.012), (43, -0.045), (44, 0.051), (45, -0.031), (46, -0.03), (47, 0.131), (48, -0.062), (49, 0.044)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93843859 246 acl-2011-Piggyback: Using Search Engines for Robust Cross-Domain Named Entity Recognition

Author: Stefan Rüd ; Massimiliano Ciaramita ; Jens Müller ; Hinrich Schütze

Abstract: We use search engine results to address a particularly difficult cross-domain language processing task, the adaptation of named entity recognition (NER) from news text to web queries. The key novelty of the method is that we submit a token with context to a search engine and use similar contexts in the search results as additional information for correctly classifying the token. We achieve strong gains in NER performance on news, in-domain and out-of-domain, and on web queries.

2 0.74269444 181 acl-2011-Jigs and Lures: Associating Web Queries with Structured Entities

Author: Patrick Pantel ; Ariel Fuxman

Abstract: We propose methods for estimating the probability that an entity from an entity database is associated with a web search query. Association is modeled using a query entity click graph, blending general query click logs with vertical query click logs. Smoothing techniques are proposed to address the inherent data sparsity in such graphs, including interpolation using a query synonymy model. A large-scale empirical analysis of the smoothing techniques, over a 2-year click graph collected from a commercial search engine, shows significant reductions in modeling error. The association models are then applied to the task of recommending products to web queries, by annotating queries with products from a large catalog and then mining query- product associations through web search session analysis. Experimental analysis shows that our smoothing techniques improve coverage while keeping precision stable, and overall, that our top-performing model affects 9% of general web queries with 94% precision.

3 0.66287649 271 acl-2011-Search in the Lost Sense of "Query": Question Formulation in Web Search Queries and its Temporal Changes

Author: Bo Pang ; Ravi Kumar

Abstract: Web search is an information-seeking activity. Often times, this amounts to a user seeking answers to a question. However, queries, which encode user’s information need, are typically not expressed as full-length natural language sentences in particular, as questions. Rather, they consist of one or more text fragments. As humans become more searchengine-savvy, do natural-language questions still have a role to play in web search? Through a systematic, large-scale study, we find to our surprise that as time goes by, web users are more likely to use questions to express their search intent. —

4 0.65276963 182 acl-2011-Joint Annotation of Search Queries

Author: Michael Bendersky ; W. Bruce Croft ; David A. Smith

Abstract: W. Bruce Croft Dept. of Computer Science University of Massachusetts Amherst, MA cro ft @ c s .uma s s .edu David A. Smith Dept. of Computer Science University of Massachusetts Amherst, MA dasmith@ c s .umas s .edu articles or web pages). As previous research shows, these differences severely limit the applicability of Marking up search queries with linguistic annotations such as part-of-speech tags, capitalization, and segmentation, is an impor- tant part of query processing and understanding in information retrieval systems. Due to their brevity and idiosyncratic structure, search queries pose a challenge to existing NLP tools. To address this challenge, we propose a probabilistic approach for performing joint query annotation. First, we derive a robust set of unsupervised independent annotations, using queries and pseudo-relevance feedback. Then, we stack additional classifiers on the independent annotations, and exploit the dependencies between them to further improve the accuracy, even with a very limited amount of available training data. We evaluate our method using a range of queries extracted from a web search log. Experimental results verify the effectiveness of our approach for both short keyword queries, and verbose natural language queries.

5 0.64885169 258 acl-2011-Ranking Class Labels Using Query Sessions

Author: Marius Pasca

Abstract: The role of search queries, as available within query sessions or in isolation from one another, is examined in the context of ranking the class labels (e.g., brazilian cities, business centers, hilly sites) extracted from Web documents for various instances (e.g., rio de janeiro). The co-occurrence of a class label and an instance, in the same query or within the same query session, is used to reinforce the estimated relevance of the class label for the instance. Experiments over evaluation sets of instances associated with Web search queries illustrate the higher quality of the query-based, re-ranked class labels, relative to ranking baselines using document-based counts.

6 0.62316227 261 acl-2011-Recognizing Named Entities in Tweets

7 0.6228587 191 acl-2011-Knowledge Base Population: Successful Approaches and Challenges

8 0.60620236 137 acl-2011-Fine-Grained Class Label Markup of Search Queries

9 0.60399139 128 acl-2011-Exploring Entity Relations for Named Entity Disambiguation

10 0.57702553 36 acl-2011-An Efficient Indexer for Large N-Gram Corpora

11 0.57220274 13 acl-2011-A Graph Approach to Spelling Correction in Domain-Centric Search

12 0.56224042 256 acl-2011-Query Weighting for Ranking Model Adaptation

13 0.5544647 199 acl-2011-Learning Condensed Feature Representations from Large Unsupervised Data Sets for Supervised Learning

14 0.53833145 124 acl-2011-Exploiting Morphology in Turkish Named Entity Recognition System

15 0.53424907 165 acl-2011-Improving Classification of Medical Assertions in Clinical Notes

16 0.53414911 42 acl-2011-An Interface for Rapid Natural Language Processing Development in UIMA

17 0.49801236 320 acl-2011-Unsupervised Discovery of Domain-Specific Knowledge from Text

18 0.49794313 333 acl-2011-Web-Scale Features for Full-Scale Parsing

19 0.49408275 97 acl-2011-Discovering Sociolinguistic Associations with Structured Sparsity

20 0.4919402 89 acl-2011-Creative Language Retrieval: A Robust Hybrid of Information Retrieval and Linguistic Creativity


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(5, 0.032), (17, 0.046), (26, 0.051), (37, 0.1), (39, 0.057), (41, 0.074), (53, 0.014), (55, 0.043), (59, 0.035), (61, 0.17), (72, 0.084), (91, 0.043), (96, 0.123), (97, 0.017), (98, 0.013)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.89122206 165 acl-2011-Improving Classification of Medical Assertions in Clinical Notes

Author: Youngjun Kim ; Ellen Riloff ; Stephane Meystre

Abstract: We present an NLP system that classifies the assertion type of medical problems in clinical notes used for the Fourth i2b2/VA Challenge. Our classifier uses a variety of linguistic features, including lexical, syntactic, lexicosyntactic, and contextual features. To overcome an extremely unbalanced distribution of assertion types in the data set, we focused our efforts on adding features specifically to improve the performance of minority classes. As a result, our system reached 94.17% micro-averaged and 79.76% macro-averaged F1-measures, and showed substantial recall gains on the minority classes.

2 0.86189878 271 acl-2011-Search in the Lost Sense of "Query": Question Formulation in Web Search Queries and its Temporal Changes

Author: Bo Pang ; Ravi Kumar

Abstract: Web search is an information-seeking activity. Often times, this amounts to a user seeking answers to a question. However, queries, which encode the user’s information need, are typically not expressed as full-length natural language sentences, in particular as questions. Rather, they consist of one or more text fragments. As humans become more search-engine-savvy, do natural-language questions still have a role to play in web search? Through a systematic, large-scale study, we find to our surprise that as time goes by, web users are more likely to use questions to express their search intent.

same-paper 3 0.83327049 246 acl-2011-Piggyback: Using Search Engines for Robust Cross-Domain Named Entity Recognition

Author: Stefan Rüd ; Massimiliano Ciaramita ; Jens Müller ; Hinrich Schütze

Abstract: We use search engine results to address a particularly difficult cross-domain language processing task, the adaptation of named entity recognition (NER) from news text to web queries. The key novelty of the method is that we submit a token with context to a search engine and use similar contexts in the search results as additional information for correctly classifying the token. We achieve strong gains in NER performance on news, in-domain and out-of-domain, and on web queries.

4 0.80821508 147 acl-2011-Grammatical Error Correction with Alternating Structure Optimization

Author: Daniel Dahlmeier ; Hwee Tou Ng

Abstract: We present a novel approach to grammatical error correction based on Alternating Structure Optimization. As part of our work, we introduce the NUS Corpus of Learner English (NUCLE), a fully annotated one million words corpus of learner English available for research purposes. We conduct an extensive evaluation for article and preposition errors using various feature sets. Our experiments show that our approach outperforms two baselines trained on non-learner text and learner text, respectively. Our approach also outperforms two commercial grammar checking software packages.

5 0.79451275 228 acl-2011-N-Best Rescoring Based on Pitch-accent Patterns

Author: Je Hun Jeon ; Wen Wang ; Yang Liu

Abstract: In this paper, we adopt an n-best rescoring scheme using pitch-accent patterns to improve automatic speech recognition (ASR) performance. The pitch-accent model is decoupled from the main ASR system, thus allowing us to develop it independently. N-best hypotheses from recognizers are rescored by additional scores that measure the correlation of the pitch-accent patterns between the acoustic signal and lexical cues. To test the robustness of our algorithm, we use two different data sets and recognition setups: the first one is English radio news data that has pitch accent labels, but the recognizer is trained from a small amount of data and has a high error rate; the second one is English broadcast news data using a state-of-the-art SRI recognizer. Our experimental results demonstrate that our approach is able to reduce word error rate relatively by about 3%. This gain is consistent across the two different tests, showing promising future directions of incorporating prosodic information to improve speech recognition.

6 0.74630737 119 acl-2011-Evaluating the Impact of Coder Errors on Active Learning

7 0.74443352 88 acl-2011-Creating a manually error-tagged and shallow-parsed learner corpus

8 0.74334347 32 acl-2011-Algorithm Selection and Model Adaptation for ESL Correction Tasks

9 0.74218273 261 acl-2011-Recognizing Named Entities in Tweets

10 0.7365799 126 acl-2011-Exploiting Syntactico-Semantic Structures for Relation Extraction

11 0.73449045 48 acl-2011-Automatic Detection and Correction of Errors in Dependency Treebanks

12 0.73271191 209 acl-2011-Lexically-Triggered Hidden Markov Models for Clinical Document Coding

13 0.73247927 331 acl-2011-Using Large Monolingual and Bilingual Corpora to Improve Coordination Disambiguation

14 0.73191249 36 acl-2011-An Efficient Indexer for Large N-Gram Corpora

15 0.72898084 64 acl-2011-C-Feel-It: A Sentiment Analyzer for Micro-blogs

16 0.72855836 252 acl-2011-Prototyping virtual instructors from human-human corpora

17 0.72828948 292 acl-2011-Target-dependent Twitter Sentiment Classification

18 0.72781104 269 acl-2011-Scaling up Automatic Cross-Lingual Semantic Role Annotation

19 0.72709966 34 acl-2011-An Algorithm for Unsupervised Transliteration Mining with an Application to Word Alignment

20 0.72695267 311 acl-2011-Translationese and Its Dialects