
158 acl-2011-Identification of Domain-Specific Senses in a Machine-Readable Dictionary


Source: pdf

Author: Fumiyo Fukumoto ; Yoshimi Suzuki

Abstract: This paper focuses on domain-specific senses and presents a method for assigning a category/domain label to each sense of a word in a dictionary. The method first maps each sense of a word in the dictionary to its corresponding category. We used a text classification technique to select appropriate senses for each domain. Then, senses were scored by computing rank scores. We used a Markov Random Walk (MRW) model. The method was tested on English and Japanese resources, WordNet 3.0 and the EDR Japanese dictionary. For evaluation of the method, we compared the English results with the Subject Field Codes (SFC) resource. We also compared the English and Japanese results to the first sense heuristic in the WSD task. These results suggest that identification of domain-specific senses (IDSS) may actually be of benefit.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: This paper focuses on domain-specific senses and presents a method for assigning a category/domain label to each sense of a word in a dictionary. [sent-4, score-0.801]

2 The method first maps each sense of a word in the dictionary to its corresponding category. [sent-5, score-0.233]

3 We used a text classification technique to select appropriate senses for each domain. [sent-6, score-0.599]

4 Then, senses were scored by computing the rank scores. [sent-7, score-0.602]

5 We also compared the English and Japanese results to the first sense heuristic in the WSD task. [sent-12, score-0.223]

6 These results suggest that identification of domain-specific senses (IDSS) may actually be of benefit. [sent-13, score-0.598]

7 1 Introduction: The domain-specific sense of a word is crucial information for many NLP tasks and their applications, such as Word Sense Disambiguation (WSD) and Information Retrieval (IR). [sent-14, score-0.194]

8 McCarthy et al. presented a method to find predominant noun senses automatically using a thesaurus acquired from raw textual corpora and the WordNet similarity package (McCarthy et al. [sent-16, score-0.676]

9 , to try to capture changes in ranking of senses for documents from different domains. [sent-28, score-0.65]

10 Domain adaptation is also an approach focusing on domain-specific senses, used in the WSD task (Chan and Ng, 2007; Zhong et al. [sent-29, score-0.619]

11 They proposed supervised domain adaptation on a manually selected subset of 21 nouns from the DSO corpus, with examples from the Brown corpus and the Wall Street Journal corpus. [sent-33, score-0.217]

12 They used active learning, count-merging, and predominant sense estimation in order to save target annotation effort. [sent-34, score-0.275]

13 They showed that for the set of nouns which have different predominant senses between the training and target domains, the annotation effort was reduced by up to 29%. [sent-35, score-0.678]

14 Agirre and Lacalle presented a method of supervised domain adaptation (Agirre and Lacalle, 2009). [sent-38, score-0.149]

15 They made use of unlabeled data with SVM (Vapnik, 1995), a combination of kernels and SVM, and showed that domain adaptation is an important technique for WSD systems. [sent-39, score-0.149]

16 The major motivation for domain adaptation is that the sense distribution depends on the domain in which a word is used. [sent-40, score-0.466]

17 In the context of dictionary-based approaches, the first sense heuristic applied to WordNet is often used as a baseline for supervised WSD systems (Cotton et al. [sent-42, score-0.257]

18 , 1998), as the senses in WordNet are ordered according to the frequency data in the manually tagged resource SemCor (Miller et al. [sent-43, score-0.656]

19 A drawback in the first sense heuristic applied to WordNet is the small size of the SemCor corpus. [sent-47, score-0.257]

20 Therefore, senses that do not occur in SemCor are often ordered arbitrarily. [sent-48, score-0.592]

21 More seriously, the decision is not based on the domain but on the frequency of SemCor data. [sent-49, score-0.098]

22 WordNet synsets were annotated with Subject Field Codes (SFC) by a procedure that exploits the WordNet structure (Magnini and Cavaglia, 2000; Bentivogli et al. [sent-52, score-0.062]

23 The results showed that 96% of the WordNet synsets of the noun hierarchy could have been annotated using 115 different SFC labels, while identification of the domain labels for word senses required a considerable amount of hand-labeling. [sent-54, score-0.785]

24 In this paper, we focus on domain-specific senses and propose a method for assigning a category/domain label to each sense of a word in a dictionary. [sent-55, score-0.801]

25 Our approach is automated, and requires only documents assigned to domains/categories, such as the Reuters corpus, and a dictionary with gloss text, such as WordNet. [sent-56, score-0.304]

26 Therefore, it can be applied easily to a new domain, sense inventory, or language, given sufficient documents. [sent-57, score-0.194]

27 2 Identification of Domain-Specific Senses: Our approach, IDSS, consists of two steps: selection of senses and computation of rank scores. [sent-58, score-0.602]

28 2.1 Selection of senses: The first step in finding domain-specific senses is to select appropriate senses for each domain. [sent-60, score-1.704]

29 Let D be a domain set, and S be a set of senses that the word w ∈ W has. [sent-68, score-0.666]

30 For each sense s ∈ S, and for each d ∈ D, we applied word replacement, i.e. [sent-75, score-0.194]

31 we replaced w in the training documents assigned to the domain d with its gloss text in a dictionary. [sent-77, score-0.367]

All the training and test documents are tagged by a part-of-speech tagger and represented as term vectors with frequencies. [sent-78, score-0.115]

33 If the classification accuracy of the domain d is equal to or higher than that without word replacement, the sense s of a word w is judged to be a candidate sense in the domain d. [sent-83, score-0.641]
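
To make this selection step concrete, here is a minimal sketch, not the authors' implementation: the `accuracy` callable stands in for the SVM-based text classifier described above, and plain `str.replace` stands in for replacing the POS-tagged occurrences of the word with its gloss.

```python
from typing import Callable, Dict, List

Doc = str  # a training document as raw text

def select_candidate_senses(
    word: str,
    glosses: Dict[str, str],                      # sense id -> gloss text
    docs_by_domain: Dict[str, List[Doc]],         # domain -> training documents
    accuracy: Callable[[str, List[Doc]], float],  # (domain, docs) -> classification accuracy
) -> Dict[str, List[str]]:
    """Word-replacement sense selection: a sense s of `word` is kept as a
    candidate for domain d if replacing `word` by the gloss of s in the
    domain-d training documents does not lower classification accuracy."""
    candidates: Dict[str, List[str]] = {}
    for domain, docs in docs_by_domain.items():
        baseline = accuracy(domain, docs)  # accuracy without replacement
        kept = []
        for sense_id, gloss in glosses.items():
            replaced = [doc.replace(word, gloss) for doc in docs]
            if accuracy(domain, replaced) >= baseline:
                kept.append(sense_id)  # accuracy did not drop: candidate sense
        candidates[domain] = kept
    return candidates
```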

2.2 Computation of rank scores: We note that the text classification accuracy used in the selection of senses depends on the number of words composing a gloss in the dictionary. [sent-86, score-0.806]

35 As a result, many of the classification accuracies with word replacement were equal to those without word replacement. [sent-88, score-0.137]

36 Then, in the second procedure, we scored senses by using the MRW model. [sent-89, score-0.568]

37 Given a set of senses Sd in the domain d, Gd = (Sd, E) is a graph reflecting the relationships between senses in the set. [sent-90, score-1.234]

38 Each sense si in Sd is represented by its gloss text from a dictionary. [sent-91, score-0.421]

39 Each edge eij in E is associated with an affinity weight f(i → j) between senses si and sj (i ≠ j). [sent-93, score-0.674]

40 The transition probability from si to sj is then defined by normalizing the corresponding affinity weight: p(i → j) = f(i → j) / Σ_{k=1}^{|Sd|} f(i → k) if Σ_k f(i → k) ≠ 0, and 0 otherwise.

41 We used the row-normalized matrix U = (Uij)_{|Sd|×|Sd|} to describe Gd, with each entry corresponding to the transition probability, where Uij = p(i → j). [sent-99, score-0.089]

42 The saliency scores Score(si) can be formulated in recursive matrix form as in the MRW model: λ = μ M^T λ + ((1 − μ)/|Sd|) e, where λ = [Score(si)]_{|Sd|×1} is the vector of saliency scores for the senses, e is a column vector of ones, and μ is a damping factor. [sent-101, score-0.134]

43 The final transition matrix is given by formula (1), and each sense's score in a specific domain is obtained from the principal eigenvector of the new transition matrix M. [sent-110, score-0.47]

44 We note that the matrix M is high-dimensional. [sent-114, score-0.052]
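
As a rough illustration of this scoring step, the sketch below normalizes a given affinity matrix into the transition matrix, applies the damping term, and runs power iteration, which converges to the principal eigenvector mentioned above. The damping factor μ = 0.85 is the conventional PageRank-style choice, an assumption rather than the authors' reported setting.

```python
import numpy as np

def mrw_scores(affinity: np.ndarray, mu: float = 0.85,
               tol: float = 1e-8, max_iter: int = 1000) -> np.ndarray:
    """Saliency scores for |Sd| senses under the MRW model.

    `affinity[i, j]` holds the affinity weight f(i -> j); i == j is ignored.
    """
    n = affinity.shape[0]
    f = affinity.astype(float).copy()
    np.fill_diagonal(f, 0.0)  # no self-transitions (i != j)
    row_sums = f.sum(axis=1, keepdims=True)
    # p(i -> j) = f(i -> j) / sum_k f(i -> k); all-zero rows stay zero
    u = np.divide(f, row_sums, out=np.zeros_like(f), where=row_sums != 0)
    m = mu * u.T + (1.0 - mu) / n  # damped transition matrix M
    scores = np.full(n, 1.0 / n)   # uniform initial scores
    for _ in range(max_iter):      # power iteration
        new = m @ scores
        new /= new.sum()           # renormalize to guard against zero rows
        if np.abs(new - scores).sum() < tol:
            break
        scores = new
    return scores
```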

45 We selected the topmost K% of senses according to rank score for each domain and made a sense-domain list. [sent-116, score-0.739]

46 For each word w in a document, we find the sense s that has the highest score within the list. [sent-117, score-0.194]

47 If the domain with the highest score for the sense s matches the domain of the document in which w appears, s is regarded as a domain-specific sense of the word w. [sent-118, score-0.645]
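
A minimal sketch of this lookup, under the assumption that the MRW scores are available as a mapping from (sense, domain) pairs; the helper names are illustrative, not from the paper.

```python
from typing import Dict, List, Optional, Tuple

def build_sense_domain_list(scores: Dict[Tuple[str, str], float],
                            k_percent: float) -> Dict[str, List[str]]:
    """Keep the topmost K% of senses per domain, ranked by MRW score."""
    by_domain: Dict[str, List[Tuple[float, str]]] = {}
    for (sense, domain), score in scores.items():
        by_domain.setdefault(domain, []).append((score, sense))
    top: Dict[str, List[str]] = {}
    for domain, pairs in by_domain.items():
        pairs.sort(reverse=True)
        cutoff = max(1, round(len(pairs) * k_percent / 100))
        top[domain] = [sense for _, sense in pairs[:cutoff]]
    return top

def domain_specific_sense(senses_of_w: List[str], doc_domain: str,
                          scores: Dict[Tuple[str, str], float],
                          top: Dict[str, List[str]]) -> Optional[str]:
    """Highest-scoring listed sense of w, accepted only when its best domain
    matches the domain of the document in which w appears."""
    best: Optional[Tuple[float, str, str]] = None  # (score, sense, domain)
    for domain, listed in top.items():
        for s in senses_of_w:
            if s in listed:
                cand = (scores[(s, domain)], s, domain)
                if best is None or cand > best:
                    best = cand
    if best is not None and best[2] == doc_domain:
        return best[1]  # s is a domain-specific sense of w
    return None
```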

3.1 WordNet 3.0: We assigned Reuters categories to each sense of words in WordNet 3.0. [sent-121, score-0.299]

49 The Reuters documents are organized into 126 categories (Rose et al. [sent-123, score-0.152]

50 We selected 20 categories covering a variety of genres. [sent-125, score-0.109]

51 We used one month of documents, from 20th Aug to 19th Sept 1996, to train the SVM model. [sent-126, score-0.034]

52 Similarly, we classified the following month of documents into these 20 categories. [sent-127, score-0.116]

53 All documents were tagged by Tree Tagger (Schmid, 1995). [sent-128, score-0.115]

54 For each category, we collected nouns occurring more than five times in the one-year Reuters corpus. [sent-130, score-0.066]

55 The training data is used to estimate K according to rank score, and the test data is used to evaluate the method with the estimated value of K. [sent-132, score-0.034]

56 , the total number of words and senses, and the number of selected senses (Select S), i.e., those for which the classification accuracy of each domain was equal to or higher than the result without word replacement. [sent-139, score-0.786]

57 There are no existing sense-tagged data for these 20 categories that could be used for evaluation. [sent-141, score-0.07]

58 Therefore, we selected a limited number of words and evaluated these words qualitatively. [sent-142, score-0.039]

59 Table 3 shows the results for 12 Reuters categories that could be mapped to SFC labels. [sent-146, score-0.107]

60 In Table 3, “Reuters” shows categories, and “IDSS” shows the number of senses assigned by our approach. [sent-147, score-0.603]

61 “SFC” refers to the number of senses appearing in the SFC resource. [sent-148, score-0.654]

62 “S & R” denotes the number of senses appearing in both SFC and Reuters corpus. [sent-149, score-0.629]

63 “Prec” is the ratio of correct assignments by “IDSS” to the total number of “IDSS” assignments. [sent-150, score-0.056]

64 We manually evaluated senses not appearing in the SFC resource. [sent-151, score-0.629]

65 Therefore, recall denotes the ratio of the number of senses on which our approach and SFC agree to the total number of senses appearing in both SFC and Reuters. [sent-153, score-1.253]
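
Stated compactly, our reading of these two definitions is:

```latex
\mathrm{Prec} = \frac{|\text{correct IDSS assignments}|}{|\text{IDSS assignments}|},
\qquad
\mathrm{Rec} = \frac{|\text{IDSS} \cap \text{SFC}|}{|\text{SFC} \cap \text{Reuters}|}
```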

66 Examining the result of text classification by word replacement, the former was 0. [sent-157, score-0.055]

67 One reason is related to the length of the gloss in WordNet: the average number of words composing the gloss assigned to “weather” was 8. [sent-160, score-0.331]

68 IDSS depends on the size of gloss text in WordNet. [sent-163, score-0.173]

69 Efficacy can be improved if we can assign gloss sentences to WordNet based on corpus statistics. [sent-164, score-0.148]

70 In the WSD task, the first sense heuristic is often applied because it is powerful and does not require expensive hand-annotated data sets. [sent-166, score-0.281]

71 We thus compared the results obtained by our method to those obtained by the first sense heuristic. [sent-167, score-0.194]

72 For each of the 12 categories, we randomly picked 10 words from the senses assigned by our approach. [sent-168, score-0.631]

73 For each word, we selected 10 sentences from the documents belonging to each corresponding category. (Table 3: The results against the SFC resource.) [sent-169, score-0.746]

74 “Sense” refers to the average number of senses per word. [sent-172, score-0.62]

75 648, while the result obtained by the first sense heuristic was 0. [sent-174, score-0.281]

76 Table 4 also shows that the overall performance obtained by our method was better than that with the first sense heuristic in all categories. [sent-176, score-0.257]

77 3.2 EDR dictionary: We assigned categories from Japanese Mainichi newspapers to each sense of words in the EDR Japanese dictionary. [sent-178, score-0.409]

78 We selected 4 categories, each of which has a sufficient number of documents. [sent-180, score-0.039]

79 All documents were tagged by the morphological analyzer ChaSen (Matsumoto et al. [sent-181, score-0.115]

80 We used 10,000 documents for each category from 1991 to 2000 to train the SVM model. [sent-183, score-0.153]

81 We classified another 600 documents from the same period into one of these four categories. [sent-184, score-0.082]

82 Table 5 shows categories and F-score (Baseline) by SVM. [sent-185, score-0.07]

(Table 4: IDSS against the first sense heuristic (WordNet).) [sent-195, score-0.295]

84 97 807598 02 Table 5: Text classification performance (Baseline) CatWordsSensesS sensesPrec IES nc pto ie rn tnocametiyonal3 4, 71 628504 971 2917, 59206296812 1 9013, 50764371 741. [sent-198, score-0.098]

(Table 6: The # of selected senses (EDR).) We selected senses for each category and evaluated these senses qualitatively. [sent-200, score-1.207]

86 The average precision for four categories was 0. [sent-201, score-0.07]

87 In the WSD task, we randomly picked 30 words from the senses assigned by our method. [sent-203, score-0.631]

88 For each word, we selected 10 sentences from the documents belonging to each corresponding category. [sent-204, score-0.147]

89 As we can see from Table 7, IDSS was also better than the first sense heuristic on the Japanese data. [sent-206, score-0.223]

90 For the first sense heuristic, there was no significant difference between English and Japanese, while the number of senses per word in the Japanese resource was 3. [sent-207, score-0.82]

, the small size of the EDR corpus. (Table 7: IDSS against the first sense heuristic (EDR).) [sent-212, score-0.261] [sent-218, score-0.257]

93 Therefore, there are many senses that do not occur in the corpus. [sent-219, score-0.568]

94 In fact, there are 62,460 nouns which appeared in both EDR and the Mainichi newspapers (from 1991 to 2000), with 164,761 senses in all. [sent-220, score-0.629]

95 Of these, there are 114,267 senses not appearing in the EDR corpus. [sent-221, score-0.629]

96 This also demonstrates that automatic IDSS is more effective than the frequency-based first sense heuristic. [sent-222, score-0.194]

97 4 Conclusion: We presented a method for assigning categories to each sense of words in a machine-readable dictionary. [sent-223, score-0.303]

98 Moreover, the WSD results obtained by our method outperformed the first sense heuristic in both English and Japanese. [sent-228, score-0.281]

99 Future work will include: (i) applying the method to other part-of-speech words, (ii) comparing the method with other existing automated methods, and (iii) extending the method to find domain-specific senses of unknown words. [sent-229, score-0.568]

100 Domain adaptation with active learning for word sense disambiguation. [sent-275, score-0.245]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('senses', 0.568), ('sfc', 0.375), ('idss', 0.295), ('sense', 0.194), ('edr', 0.189), ('reuters', 0.167), ('wsd', 0.155), ('wordnet', 0.151), ('gloss', 0.148), ('semcor', 0.142), ('sd', 0.11), ('yamanashi', 0.107), ('japanese', 0.101), ('domain', 0.098), ('magnini', 0.089), ('mccarthy', 0.086), ('documents', 0.082), ('predominant', 0.081), ('buitelaar', 0.08), ('mrw', 0.08), ('replacement', 0.08), ('mainichi', 0.071), ('categories', 0.07), ('uij', 0.065), ('svm', 0.063), ('heuristic', 0.063), ('synsets', 0.062), ('appearing', 0.061), ('catwordssensess', 0.054), ('cavaglia', 0.054), ('chasen', 0.054), ('cotton', 0.054), ('netlib', 0.054), ('matrix', 0.052), ('adaptation', 0.051), ('fukumoto', 0.047), ('chand', 0.047), ('zhong', 0.047), ('agirre', 0.046), ('codes', 0.044), ('si', 0.044), ('bentivogli', 0.043), ('koeling', 0.043), ('lacalle', 0.043), ('saliency', 0.041), ('weather', 0.041), ('miller', 0.041), ('assigning', 0.039), ('year', 0.039), ('dictionary', 0.039), ('interdisciplinary', 0.039), ('weeds', 0.039), ('prec', 0.039), ('selected', 0.039), ('rn', 0.038), ('corresponded', 0.037), ('war', 0.037), ('transition', 0.037), ('medicine', 0.036), ('brin', 0.036), ('assigned', 0.035), ('month', 0.034), ('rank', 0.034), ('tagged', 0.033), ('category', 0.032), ('affinity', 0.032), ('newspapers', 0.032), ('divided', 0.031), ('resource', 0.031), ('classification', 0.031), ('identification', 0.03), ('sj', 0.03), ('nouns', 0.029), ('nc', 0.029), ('heuristics', 0.029), ('resources', 0.028), ('picked', 0.028), ('ies', 0.028), ('graduate', 0.027), ('par', 0.027), ('noun', 0.027), ('ut', 0.027), ('equal', 0.026), ('rose', 0.026), ('belonging', 0.026), ('ratio', 0.025), ('matsumoto', 0.025), ('depends', 0.025), ('refers', 0.025), ('result', 0.024), ('ordered', 0.024), ('publically', 0.024), ('forner', 0.024), ('uts', 0.024), ('tributed', 0.024), ('hypertextual', 0.024), ('needless', 0.024), ('rno', 0.024), ('anp', 0.024), ('customization', 0.024)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000007 158 acl-2011-Identification of Domain-Specific Senses in a Machine-Readable Dictionary

Author: Fumiyo Fukumoto ; Yoshimi Suzuki

Abstract: This paper focuses on domain-specific senses and presents a method for assigning a category/domain label to each sense of a word in a dictionary. The method first maps each sense of a word in the dictionary to its corresponding category. We used a text classification technique to select appropriate senses for each domain. Then, senses were scored by computing rank scores. We used a Markov Random Walk (MRW) model. The method was tested on English and Japanese resources, WordNet 3.0 and the EDR Japanese dictionary. For evaluation of the method, we compared the English results with the Subject Field Codes (SFC) resource. We also compared the English and Japanese results to the first sense heuristic in the WSD task. These results suggest that identification of domain-specific senses (IDSS) may actually be of benefit.

2 0.32382366 307 acl-2011-Towards Tracking Semantic Change by Visual Analytics

Author: Christian Rohrdantz ; Annette Hautli ; Thomas Mayer ; Miriam Butt ; Daniel A. Keim ; Frans Plank

Abstract: This paper presents a new approach to detecting and tracking changes in word meaning by visually modeling and representing diachronic development in word contexts. Previous studies have shown that computational models are capable of clustering and disambiguating senses, a more recent trend investigates whether changes in word meaning can be tracked by automatic methods. The aim of our study is to offer a new instrument for investigating the diachronic development of word senses in a way that allows for a better understanding of the nature of semantic change in general. For this purpose we combine techniques from the field of Visual Analytics with unsupervised methods from Natural Language Processing, allowing for an interactive visual exploration of semantic change.

3 0.31637478 198 acl-2011-Latent Semantic Word Sense Induction and Disambiguation

Author: Tim Van de Cruys ; Marianna Apidianaki

Abstract: In this paper, we present a unified model for the automatic induction of word senses from text, and the subsequent disambiguation of particular word instances using the automatically extracted sense inventory. The induction step and the disambiguation step are based on the same principle: words and contexts are mapped to a limited number of topical dimensions in a latent semantic word space. The intuition is that a particular sense is associated with a particular topic, so that different senses can be discriminated through their association with particular topical dimensions; in a similar vein, a particular instance of a word can be disambiguated by determining its most important topical dimensions. The model is evaluated on the SEMEVAL-20 10 word sense induction and disambiguation task, on which it reaches stateof-the-art results.

4 0.19005263 334 acl-2011-Which Noun Phrases Denote Which Concepts?

Author: Jayant Krishnamurthy ; Tom Mitchell

Abstract: Resolving polysemy and synonymy is required for high-quality information extraction. We present ConceptResolver, a component for the Never-Ending Language Learner (NELL) (Carlson et al., 2010) that handles both phenomena by identifying the latent concepts that noun phrases refer to. ConceptResolver performs both word sense induction and synonym resolution on relations extracted from text using an ontology and a small amount of labeled data. Domain knowledge (the ontology) guides concept creation by defining a set of possible semantic types for concepts. Word sense induction is performed by inferring a set of semantic types for each noun phrase. Synonym detection exploits redundant informa- tion to train several domain-specific synonym classifiers in a semi-supervised fashion. When ConceptResolver is run on NELL’s knowledge base, 87% of the word senses it creates correspond to real-world concepts, and 85% of noun phrases that it suggests refer to the same concept are indeed synonyms.

5 0.18505079 224 acl-2011-Models and Training for Unsupervised Preposition Sense Disambiguation

Author: Dirk Hovy ; Ashish Vaswani ; Stephen Tratz ; David Chiang ; Eduard Hovy

Abstract: We present a preliminary study on unsupervised preposition sense disambiguation (PSD), comparing different models and training techniques (EM, MAP-EM with L0 norm, Bayesian inference using Gibbs sampling). To our knowledge, this is the first attempt at unsupervised preposition sense disambiguation. Our best accuracy reaches 56%, a significant improvement (at p <.001) of 16% over the most-frequent-sense baseline.

6 0.17270699 167 acl-2011-Improving Dependency Parsing with Semantic Classes

7 0.17164683 240 acl-2011-ParaSense or How to Use Parallel Corpora for Word Sense Disambiguation

8 0.1668023 96 acl-2011-Disambiguating temporal-contrastive connectives for machine translation

9 0.10419847 304 acl-2011-Together We Can: Bilingual Bootstrapping for WSD

10 0.077634864 238 acl-2011-P11-2093 k2opt.pdf

11 0.073111065 119 acl-2011-Evaluating the Impact of Coder Errors on Active Learning

12 0.067568608 222 acl-2011-Model-Portability Experiments for Textual Temporal Analysis

13 0.066754743 104 acl-2011-Domain Adaptation for Machine Translation by Mining Unseen Words

14 0.066371642 145 acl-2011-Good Seed Makes a Good Crop: Accelerating Active Learning Using Language Modeling

15 0.061409831 148 acl-2011-HITS-based Seed Selection and Stop List Construction for Bootstrapping

16 0.056345224 162 acl-2011-Identifying the Semantic Orientation of Foreign Words

17 0.054284405 109 acl-2011-Effective Measures of Domain Similarity for Parsing

18 0.050662432 179 acl-2011-Is Machine Translation Ripe for Cross-Lingual Sentiment Classification?

19 0.049638145 332 acl-2011-Using Multiple Sources to Construct a Sentiment Sensitive Thesaurus for Cross-Domain Sentiment Classification

20 0.049208481 229 acl-2011-NULEX: An Open-License Broad Coverage Lexicon


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.134), (1, 0.05), (2, -0.056), (3, -0.011), (4, 0.001), (5, -0.042), (6, 0.148), (7, 0.062), (8, -0.039), (9, -0.025), (10, 0.017), (11, -0.253), (12, 0.248), (13, 0.049), (14, -0.003), (15, -0.193), (16, 0.24), (17, 0.239), (18, -0.033), (19, 0.201), (20, 0.071), (21, -0.118), (22, 0.01), (23, 0.076), (24, -0.073), (25, 0.046), (26, 0.026), (27, -0.06), (28, -0.055), (29, 0.081), (30, -0.05), (31, 0.079), (32, -0.001), (33, 0.031), (34, -0.099), (35, -0.034), (36, -0.091), (37, -0.008), (38, -0.006), (39, -0.047), (40, -0.002), (41, -0.006), (42, 0.047), (43, 0.031), (44, 0.033), (45, -0.022), (46, -0.023), (47, -0.018), (48, 0.016), (49, -0.053)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96902782 158 acl-2011-Identification of Domain-Specific Senses in a Machine-Readable Dictionary

Author: Fumiyo Fukumoto ; Yoshimi Suzuki

Abstract: This paper focuses on domain-specific senses and presents a method for assigning a category/domain label to each sense of a word in a dictionary. The method first maps each sense of a word in the dictionary to its corresponding category. We used a text classification technique to select appropriate senses for each domain. Then, senses were scored by computing rank scores. We used a Markov Random Walk (MRW) model. The method was tested on English and Japanese resources, WordNet 3.0 and the EDR Japanese dictionary. For evaluation of the method, we compared the English results with the Subject Field Codes (SFC) resource. We also compared the English and Japanese results to the first sense heuristic in the WSD task. These results suggest that identification of domain-specific senses (IDSS) may actually be of benefit.

2 0.91567975 307 acl-2011-Towards Tracking Semantic Change by Visual Analytics

Author: Christian Rohrdantz ; Annette Hautli ; Thomas Mayer ; Miriam Butt ; Daniel A. Keim ; Frans Plank

Abstract: This paper presents a new approach to detecting and tracking changes in word meaning by visually modeling and representing diachronic development in word contexts. Previous studies have shown that computational models are capable of clustering and disambiguating senses, a more recent trend investigates whether changes in word meaning can be tracked by automatic methods. The aim of our study is to offer a new instrument for investigating the diachronic development of word senses in a way that allows for a better understanding of the nature of semantic change in general. For this purpose we combine techniques from the field of Visual Analytics with unsupervised methods from Natural Language Processing, allowing for an interactive visual exploration of semantic change.

3 0.82015848 198 acl-2011-Latent Semantic Word Sense Induction and Disambiguation

Author: Tim Van de Cruys ; Marianna Apidianaki

Abstract: In this paper, we present a unified model for the automatic induction of word senses from text, and the subsequent disambiguation of particular word instances using the automatically extracted sense inventory. The induction step and the disambiguation step are based on the same principle: words and contexts are mapped to a limited number of topical dimensions in a latent semantic word space. The intuition is that a particular sense is associated with a particular topic, so that different senses can be discriminated through their association with particular topical dimensions; in a similar vein, a particular instance of a word can be disambiguated by determining its most important topical dimensions. The model is evaluated on the SEMEVAL-20 10 word sense induction and disambiguation task, on which it reaches stateof-the-art results.

4 0.75520873 334 acl-2011-Which Noun Phrases Denote Which Concepts?

Author: Jayant Krishnamurthy ; Tom Mitchell

Abstract: Resolving polysemy and synonymy is required for high-quality information extraction. We present ConceptResolver, a component for the Never-Ending Language Learner (NELL) (Carlson et al., 2010) that handles both phenomena by identifying the latent concepts that noun phrases refer to. ConceptResolver performs both word sense induction and synonym resolution on relations extracted from text using an ontology and a small amount of labeled data. Domain knowledge (the ontology) guides concept creation by defining a set of possible semantic types for concepts. Word sense induction is performed by inferring a set of semantic types for each noun phrase. Synonym detection exploits redundant informa- tion to train several domain-specific synonym classifiers in a semi-supervised fashion. When ConceptResolver is run on NELL’s knowledge base, 87% of the word senses it creates correspond to real-world concepts, and 85% of noun phrases that it suggests refer to the same concept are indeed synonyms.

5 0.65810424 96 acl-2011-Disambiguating temporal-contrastive connectives for machine translation

Author: Thomas Meyer

Abstract: Temporal–contrastive discourse connectives (although, while, since, etc.) signal various types ofrelations between clauses such as temporal, contrast, concession and cause. They are often ambiguous and therefore difficult to translate from one language to another. We discuss several new and translation-oriented experiments for the disambiguation of a specific subset of discourse connectives in order to correct some of the translation errors made by current statistical machine translation systems.

6 0.63788623 224 acl-2011-Models and Training for Unsupervised Preposition Sense Disambiguation

7 0.57268375 240 acl-2011-ParaSense or How to Use Parallel Corpora for Word Sense Disambiguation

8 0.43761161 167 acl-2011-Improving Dependency Parsing with Semantic Classes

9 0.43209839 304 acl-2011-Together We Can: Bilingual Bootstrapping for WSD

10 0.41243035 222 acl-2011-Model-Portability Experiments for Textual Temporal Analysis

11 0.36429739 341 acl-2011-Word Maturity: Computational Modeling of Word Knowledge

12 0.35975489 120 acl-2011-Even the Abstract have Color: Consensus in Word-Colour Associations

13 0.33854049 319 acl-2011-Unsupervised Decomposition of a Document into Authorial Components

14 0.33170983 148 acl-2011-HITS-based Seed Selection and Stop List Construction for Bootstrapping

15 0.30184054 229 acl-2011-NULEX: An Open-License Broad Coverage Lexicon

16 0.30114621 288 acl-2011-Subjective Natural Language Problems: Motivations, Applications, Characterizations, and Implications

17 0.29764101 145 acl-2011-Good Seed Makes a Good Crop: Accelerating Active Learning Using Language Modeling

18 0.26850444 248 acl-2011-Predicting Clicks in a Vocabulary Learning System

19 0.26362041 119 acl-2011-Evaluating the Impact of Coder Errors on Active Learning

20 0.26086158 213 acl-2011-Local and Global Algorithms for Disambiguation to Wikipedia


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(1, 0.23), (5, 0.021), (17, 0.037), (26, 0.02), (37, 0.154), (39, 0.031), (41, 0.052), (55, 0.018), (59, 0.041), (72, 0.036), (91, 0.028), (96, 0.109), (97, 0.112)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.825167 35 acl-2011-An ERP-based Brain-Computer Interface for text entry using Rapid Serial Visual Presentation and Language Modeling

Author: Kenneth Hild ; Umut Orhan ; Deniz Erdogmus ; Brian Roark ; Barry Oken ; Shalini Purwar ; Hooman Nezamfar ; Melanie Fried-Oken

Abstract: Event related potentials (ERP) corresponding to stimuli in electroencephalography (EEG) can be used to detect the intent of a person for brain computer interfaces (BCI). This paradigm is widely used to build letter-byletter text input systems using BCI. Nevertheless using a BCI-typewriter depending only on EEG responses will not be sufficiently accurate for single-trial operation in general, and existing systems utilize many-trial schemes to achieve accuracy at the cost of speed. Hence incorporation of a language model based prior or additional evidence is vital to improve accuracy and speed. In this demonstration we will present a BCI system for typing that integrates a stochastic language model with ERP classification to achieve speedups, via the rapid serial visual presentation (RSVP) paradigm.

2 0.80562991 6 acl-2011-A Comprehensive Dictionary of Multiword Expressions

Author: Kosho Shudo ; Akira Kurahone ; Toshifumi Tanabe

Abstract: It has been widely recognized that one of the most difficult and intriguing problems in natural language processing (NLP) is how to cope with idiosyncratic multiword expressions. This paper presents an overview of the comprehensive dictionary (JDMWE) of Japanese multiword expressions. The JDMWE is characterized by a large notational, syntactic, and semantic diversity of contained expressions as well as a detailed description of their syntactic functions, structures, and flexibilities. The dictionary contains about 104,000 expressions, potentially 750,000 expressions. This paper shows that the JDMWE’s validity can be supported by comparing the dictionary with a large-scale Japanese N-gram frequency dataset, namely the LDC2009T08, generated by Google Inc. (Kudo et al. 2009). 1

same-paper 3 0.76345068 158 acl-2011-Identification of Domain-Specific Senses in a Machine-Readable Dictionary

Author: Fumiyo Fukumoto ; Yoshimi Suzuki

Abstract: This paper focuses on domain-specific senses and presents a method for assigning a category/domain label to each sense of a word in a dictionary. The method first maps each sense of a word in the dictionary to its corresponding category. We used a text classification technique to select appropriate senses for each domain. Then, senses were scored by computing rank scores. We used a Markov Random Walk (MRW) model. The method was tested on English and Japanese resources, WordNet 3.0 and the EDR Japanese dictionary. For evaluation of the method, we compared the English results with the Subject Field Codes (SFC) resource. We also compared the English and Japanese results to the first sense heuristic in the WSD task. These results suggest that identification of domain-specific senses (IDSS) may actually be of benefit.

4 0.70934027 315 acl-2011-Types of Common-Sense Knowledge Needed for Recognizing Textual Entailment

Author: Peter LoBue ; Alexander Yates

Abstract: Understanding language requires both linguistic knowledge and knowledge about how the world works, also known as common-sense knowledge. We attempt to characterize the kinds of common-sense knowledge most often involved in recognizing textual entailments. We identify 20 categories of common-sense knowledge that are prevalent in textual entailment, many of which have received scarce attention from researchers building collections of knowledge.

5 0.70634675 203 acl-2011-Learning Sub-Word Units for Open Vocabulary Speech Recognition

Author: Carolina Parada ; Mark Dredze ; Abhinav Sethy ; Ariya Rastrow

Abstract: Large vocabulary speech recognition systems fail to recognize words beyond their vocabulary, many of which are information rich terms, like named entities or foreign words. Hybrid word/sub-word systems solve this problem by adding sub-word units to large vocabulary word based systems; new words can then be represented by combinations of subword units. Previous work heuristically created the sub-word lexicon from phonetic representations of text using simple statistics to select common phone sequences. We propose a probabilistic model to learn the subword lexicon optimized for a given task. We consider the task of out of vocabulary (OOV) word detection, which relies on output from a hybrid model. A hybrid model with our learned sub-word lexicon reduces error by 6.3% and 7.6% (absolute) at a 5% false alarm rate on an English Broadcast News and MIT Lectures task respectively.

6 0.69162738 281 acl-2011-Sentiment Analysis of Citations using Sentence Structure-Based Features

7 0.67676079 167 acl-2011-Improving Dependency Parsing with Semantic Classes

8 0.66776133 10 acl-2011-A Discriminative Model for Joint Morphological Disambiguation and Dependency Parsing

9 0.62830949 111 acl-2011-Effects of Noun Phrase Bracketing in Dependency Parsing and Machine Translation

10 0.62771487 309 acl-2011-Transition-based Dependency Parsing with Rich Non-local Features

11 0.62701333 2 acl-2011-AM-FM: A Semantic Framework for Translation Quality Assessment

12 0.62572891 122 acl-2011-Event Extraction as Dependency Parsing

13 0.62447119 334 acl-2011-Which Noun Phrases Denote Which Concepts?

14 0.62366712 85 acl-2011-Coreference Resolution with World Knowledge

15 0.62307954 14 acl-2011-A Hierarchical Model of Web Summaries

16 0.62048608 230 acl-2011-Neutralizing Linguistically Problematic Annotations in Unsupervised Dependency Parsing Evaluation

17 0.61853433 164 acl-2011-Improving Arabic Dependency Parsing with Form-based and Functional Morphological Features

18 0.61711061 332 acl-2011-Using Multiple Sources to Construct a Sentiment Sensitive Thesaurus for Cross-Domain Sentiment Classification

19 0.61707646 204 acl-2011-Learning Word Vectors for Sentiment Analysis

20 0.61633962 250 acl-2011-Prefix Probability for Probabilistic Synchronous Context-Free Grammars