acl acl2010 acl2010-80 knowledge-graph by maker-knowledge-mining

80 acl-2010-Cross Lingual Adaptation: An Experiment on Sentiment Classifications


Source: pdf

Author: Bin Wei ; Christopher Pal

Abstract: In this paper, we study the problem of using an annotated corpus in English for the same natural language processing task in another language. While various machine translation systems are available, automated translation is still far from perfect. To minimize the noise introduced by translations, we propose to use only key "reliable" parts from the translations and apply structural correspondence learning (SCL) to find a low dimensional representation shared by the two languages. We perform experiments on an English-Chinese sentiment classification task and compare our results with a previous co-training approach. To alleviate the problem of data sparseness, we create extra pseudo-examples for SCL by making queries to a search engine. Experiments on real-world on-line review data demonstrate the two techniques can effectively improve the performance compared to previous work.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 While various machine translation systems are available, automated translation is still far from perfect. [sent-5, score-0.058]

2 To minimize the noise introduced by translations, we propose to use only key "reliable" parts from the translations and apply structural correspondence learning (SCL) to find a low dimensional representation shared by the two languages. [sent-6, score-0.178]

3 We perform experiments on an English-Chinese sentiment classification task and compare our results with a previous co-training approach. [sent-7, score-0.184]

4 To alleviate the problem of data sparseness, we create extra pseudo-examples for SCL by making queries to a search engine. [sent-8, score-0.059]

5 1 Introduction In this paper we are interested in the problem of transferring knowledge gained from data gathered in one language to another language. [sent-10, score-0.06]

6 However, while machine translation has been the subject of a great deal of development in recent years, many of the recent gains in performance manifest as syntactically as opposed to semantically correct sentences. [sent-12, score-0.029]

7 For example, “PIANYI” is a word mainly used in positive comments in Chinese but its translation from the online Google translator is always “cheap”, a word typically used in a negative context in English. [sent-13, score-0.216]

8 In this setting classifiers are trained in both languages and the two classifiers teach each other for the unlabeled examples. [sent-18, score-0.084]

9 The co-training approach manages to boost the performance as it allows the text similarity in the target language to compete with the “fake” similarity from the translated texts. [sent-19, score-0.095]

10 However, the translated texts are still used as training data and thus can potentially mislead the classifier. [sent-20, score-0.102]

11 As we are not really interested in predicting something on the language created by the translator, but rather on the real one, it may be better to further diminish the role of the translated texts in the learning process. [sent-21, score-0.16]

12 Motivated by this observation, we suggest viewing this problem as a special case of domain adaptation: in the source domain we mainly observe English features, while in the other domain we mostly observe features from Chinese. [sent-22, score-0.306]

13 The problem we address is how to associate the features under a unified setting. [sent-23, score-0.09]

14 There has been a lot of work in domain adaptation for NLP (Dai et al. [sent-24, score-0.123]

15 , 2007)(Jiang and Zhai, 2007) and one suitable choice for our problem is the approach based on structural correspondence learning (SCL) as in (Blitzer et al. [sent-25, score-0.11]

16 The key idea of SCL is to identify a low-dimensional representation that captures the correspondence between features from both domains (xs and xt in our case) by modeling their correlations with some special pivot features. [sent-28, score-0.783]

17 The SCL approach is a good fit for our problem as it performs knowledge transfer through identifying important features. [sent-29, score-0.057]

18 In the cross-lingual setting, we can restrict the translated texts by using them only through the pivot features. [sent-30, score-0.629]

19 , 2003), where the authors translate a keyword lexicon to perform cross-lingual text categorization. [sent-35, score-0.046]

20 One can either choose to translate a corpus in the target language and apply the classifier in the source language to obtain labeled data, or directly translate the existing data set to the new language. [sent-39, score-0.283]

21 , 2008) for the subjective analysis task and an average 65 F1 score was reported. [sent-41, score-0.041]

22 In (Wan, 2008), the authors propose to combine both strategies with ensemble learning and train a bi-lingual classifier. [sent-42, score-0.061]

23 In this paper, we are also interested in exploring whether a search engine can be used to improve the performance of NLP systems through reducing the effect of data sparseness. [sent-43, score-0.132]

24 As the SCL algorithm we use here is based on co-occurrence statistics, we adopt a simple approach of creating pseudo-examples from the query counts returned by Google. [sent-44, score-0.058]

25 2 Our Approach To begin, we give a formal definition of the problem we are considering. [sent-45, score-0.055]

26 Assume we have two languages ls and lt and denote features in these two languages as xs and xt respectively. [sent-46, score-0.466]

27 We also have text-level translations and we use xt0 for features in the translations from ls to lt and xs0 for the other direction. [sent-47, score-0.338]

28 Let y be the output variable we want to predict; we have labeled examples (y, xs) and some unlabeled examples (xt). [sent-48, score-0.095]

29 In this paper, we consider the binary sentiment classification (positive or negative) problem where ls and lt correspond to English and Chinese (for general sentiment analysis, we refer the readers to the various previous studies as in (Turney, 2002),(Pang et al. [sent-50, score-0.39]

30 1 Structural Correspondence Learning (SCL) Due to space limitations, we give a very brief overview of the SCL framework here. [sent-55, score-0.029]

31 When SCL is used in a domain adaptation problem, one first needs to find a set of pivot features xp. [sent-57, score-0.769]

32 These pivot features should behave in a similar manner in both domains, and can be used as “references” to estimate how much other features may contribute when used in a classifier to predict a target variable. [sent-58, score-0.753]

33 These features can either be identified with heuristics (Blitzer et al. [sent-59, score-0.064]

34 No hand-labeling is required and this specific feature doesn’t need to be present in our labeled training data of the source domain. [sent-64, score-0.109]

35 The SCL approach of (Ando and Zhang, 2005) formulates the above idea by constructing a set of linear predictors for each of the pivot features. [sent-65, score-0.593]

36 The weight matrix obtained from training these linear predictors {w} [sent-67, score-0.066]

37 will encode the co-occurrence statistics between an ordinary feature and the pivot features. [sent-68, score-0.562]

38 In the next step we can then train a classifier on the extended feature (x, w ∗ x) in the source domain. [sent-71, score-0.171]

39 As w ties features with similar behavior relative to the pivot features together, if such a classifier has good performance on the source domain, it will likely do well on the target domain as well. [sent-73, score-0.83]
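
As a rough illustration of this pipeline (a sketch, not the authors' code), the following Python snippet trains one linear predictor per pivot feature on unlabeled data using a modified Huber loss as in the SCL literature, stacks the learned weights, and takes their top singular vectors as the shared projection. The function names, the loss choice and the dimensionality k are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def scl_projection(X_unlabeled, pivot_idx, k=50):
    """Learn an SCL-style projection from unlabeled data.

    X_unlabeled : (n_docs, n_feats) array of binary feature indicators
    pivot_idx   : column indices of the chosen pivot features
    k           : dimensionality of the shared low-dimensional space
    """
    n_feats = X_unlabeled.shape[1]
    non_pivot = np.setdiff1d(np.arange(n_feats), pivot_idx)
    W = np.zeros((len(pivot_idx), len(non_pivot)))

    # One linear predictor per pivot feature: predict whether that pivot
    # occurs in a document from the document's non-pivot features.
    for row, p in enumerate(pivot_idx):
        y = (X_unlabeled[:, p] > 0).astype(int)
        clf = SGDClassifier(loss="modified_huber", alpha=1e-4, max_iter=10)
        clf.fit(X_unlabeled[:, non_pivot], y)
        W[row] = clf.coef_[0]

    # The top right singular vectors of the stacked predictor weights
    # give the shared low-dimensional structure.
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    theta = Vt[:k]                      # shape (k, n_non_pivot)
    return theta, non_pivot

def extend_features(X, theta, non_pivot):
    """Extended representation (x, w * x) used to train the final classifier."""
    return np.hstack([X, X[:, non_pivot] @ theta.T])
```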

40 2 SCL for the Cross-lingual Adaptation We view our task as a domain adaptation problem. [sent-75, score-0.178]

41 The source domain corresponds to English reviews and the target domain to Chinese ones. [sent-76, score-0.391]

42 But as the conditional distribution can be quite different for the original language and the pseudo language produced by the machine translators, these two strategies give poor performance as reported in (Wan, 2009). [sent-80, score-0.091]

43 Our solution to this problem is simple: instead of using all the features as (xs, xt0) and (xs0, xt), we only preserve the pivot features in the translated texts xs0 and xt0 respectively and discard the other features produced by the translator. [sent-81, score-0.847]

44 So, now we will have (xs, xtp) and (xsp, xt) where x(s|t)p are pivot features in the source and the target languages. [sent-82, score-0.657]

45 In other words, when we use the SCL on our problem, the translations are only used to decide if a certain pivot feature occurs or not in the training of the linear predictors. [sent-83, score-0.656]

46 All the other non-pivot features in the translations are blocked to reduce the noise. [sent-84, score-0.124]

47 In the original SCL as we mentioned earlier, the final classifier is trained on the extended features (x, w ∗ x). [sent-85, score-0.132]

48 To represent this constraint, we can modify the vector to be (wp ∗ x, w ∗ x), where wp is a constant matrix that only selects the pivot features. [sent-88, score-0.593]

49 Experiments show that using only pivot features actually outperforms the full feature setting. [sent-90, score-0.626]
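
A minimal sketch of this restricted representation, under the assumption that the combined feature space and the pivot column indices are already known: wp is just a constant 0/1 selection matrix, so the final classifier sees pivot occurrences plus the shared SCL features and nothing else from the translations. The helper names are illustrative.

```python
import numpy as np

def pivot_selector(n_feats, pivot_idx):
    """Constant matrix wp that keeps only the pivot columns of x."""
    wp = np.zeros((len(pivot_idx), n_feats))
    wp[np.arange(len(pivot_idx)), pivot_idx] = 1.0
    return wp

def restricted_representation(x, wp, theta, non_pivot):
    """(wp * x, w * x): pivot occurrences plus the shared SCL features.

    x is the combined feature vector of one example in which the
    translated side has already been reduced to its pivot features,
    i.e. (xs, xtp) or (xsp, xt)."""
    return np.concatenate([wp @ x, theta @ x[non_pivot]])
```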

50 For the selection of the pivot features, we follow the automatic selection method proposed in (Blitzer et al. [sent-91, score-0.527]

51 We first select some candidates that occur at least some constant number of times in reviews of the two languages. [sent-93, score-0.177]

52 Then, we rank these features according to their conditional entropy to the labels on the training set. [sent-94, score-0.064]
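
A hedged sketch of this selection step; the thresholds, helper names and the use of scikit-learn's mutual information estimator are assumptions rather than the paper's exact procedure (ranking by mutual information with the labels induces the same ordering as ranking by conditional entropy, since the label entropy is constant).

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_pivots(X_en, X_zh, X_labeled, y_labeled, min_count=10, n_pivots=500):
    """Pick pivot features that are frequent in the reviews of both
    languages, then rank them by their dependence on the sentiment labels.

    X_en, X_zh : document-feature matrices over a shared vocabulary
                 (e.g. built from the reviews plus their translations)
    X_labeled  : labeled training matrix, y_labeled its sentiment labels
    """
    freq_en = np.asarray((X_en > 0).sum(axis=0)).ravel()
    freq_zh = np.asarray((X_zh > 0).sum(axis=0)).ravel()
    candidates = np.where((freq_en >= min_count) & (freq_zh >= min_count))[0]

    mi = mutual_info_classif(X_labeled[:, candidates], y_labeled,
                             discrete_features=True)
    order = np.argsort(mi)[::-1]        # most informative candidates first
    return candidates[order[:n_pivots]]
```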

53 In Table 1, we give some of the pivot features, with English pivot features such as "excellent", "poor", "very", "comfortable", "average" and "easy", paired against Chinese pivot features such as wanmei (perfect) and xiaoguo hen (effect is very good). [sent-95, score-0.62]

54 Further Chinese pivot features in the table include cha (poor), tisheng (improve), shushi (comfortable), feichang hao (very good) and chuse (excellent). Table 1: Some pivot features. [sent-98, score-0.527]

55 As we can see from the table, although we only have text-level translations we still get some features with similar meaning from different languages, just like performing an alignment of words. [sent-100, score-0.158]

56 3 Utilizing the Search Engine Data sparseness is a common problem in NLP tasks. [sent-102, score-0.072]

57 On the other hand, search engines nowadays usually index a huge amount of web pages. [sent-103, score-0.057]

58 We now show how they can also be used as a valuable data source in a less obvious way. [sent-104, score-0.036]

59 Previous studies like (Bollegala, 2007) have shown that search engine results can be comparable to language statistics from a large scale corpus for some NLP tasks like word sense disambiguation. [sent-105, score-0.098]

60 For our problem, we use the query counts returned by a search engine to compute the correlations between a normal feature and the pivot features. [sent-106, score-0.744]

61 Consider the word "PIANYI", which is mostly used in positive comments: the query "CHANPIN(product) PING(comment) CHA(bad) PIANYI" has 2,900,000 results, while "CHANPIN(product) PING(comment) HAO(good) PIANYI" returns 57,400,000 pages. [sent-107, score-0.092]

62 The results imply the word "PIANYI" is closer to the pivot feature "good" and behaves less similarly to the pivot feature "bad". [sent-108, score-1.124]

63 To add the query counts into the SCL scheme, we create pseudo examples when training linear predictors for pivot features. [sent-109, score-0.713]

64 To construct a pseudo-positive example between a certain feature xi and a certain pivot feature xp, we simply query the term xi xp and get a count c1. [sent-110, score-0.721]

65 These pseudo examples are equivalent to texts with a single word and the count is used to approximate the empirical expectation. [sent-123, score-0.138]
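
The sketch below illustrates how such single-word pseudo-examples could be folded into the training of one pivot predictor. hit_count is a hypothetical wrapper around a search engine API (the paper queries Google), only the pseudo-positive side is shown, and the count-to-weight scaling is an assumption rather than the authors' exact formula.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def hit_count(query):
    """Hypothetical wrapper returning the page count a search engine
    reports for `query`, e.g. "CHANPIN PING HAO PIANYI"."""
    raise NotImplementedError("plug a real search engine API in here")

def train_pivot_predictor_with_queries(X_unlab, y_pivot, rare_idx, rare_terms,
                                       pivot_term, scale=1e-6):
    """Train one pivot predictor, padding the unlabeled data with
    single-word pseudo-examples weighted by query counts."""
    n_feats = X_unlab.shape[1]
    pseudo_X, pseudo_w = [], []
    for idx, term in zip(rare_idx, rare_terms):
        c = hit_count(f"{term} {pivot_term}")   # e.g. "PIANYI HAO"
        x = np.zeros(n_feats)
        x[idx] = 1.0                            # a pseudo text containing one word
        pseudo_X.append(x)
        pseudo_w.append(c * scale)              # count approximates the expectation

    X = np.vstack([X_unlab] + pseudo_X)
    y = np.concatenate([y_pivot, np.ones(len(pseudo_X))])   # pseudo-positives
    w = np.concatenate([np.ones(len(y_pivot)), pseudo_w])

    clf = SGDClassifier(loss="modified_huber", alpha=1e-4, max_iter=10)
    clf.fit(X, y, sample_weight=w)
    return clf.coef_[0]
```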

66 As an initial experiment, we select 10,000 Chinese features that occur more than once in the Chinese unlabeled data set but are not frequent enough to be captured by the original SCL. [sent-124, score-0.121]

67 And we also select the top 20 most informative Chinese pivot features to perform the queries. [sent-125, score-0.591]

68 1 Data Set For comparison, we use the same data set as in (Wan, 2009): Test Set (Labeled Chinese Reviews): The data set contains a total of 886 labeled product reviews in Chinese (451 positive reviews and 435 negative ones). [sent-127, score-0.497]

69 These reviews are extracted from a popular Chinese IT product website, IT168. [sent-128, score-0.235]

70 The reviews are mainly about electronic devices like mp3 players, mobile phones, digital cameras and computers. [sent-129, score-0.238]

71 Training Set (Labeled English Reviews): This is the data set used in the domain adaptation experiment of (Blitzer et al. [sent-130, score-0.123]

72 The data set consists of 8000 reviews, 4000 positive and 4000 negative; it is a public data set available on the web. [sent-133, score-0.237]

73 Unlabeled Set (Unlabeled Chinese Reviews): 1000 Chinese reviews downloaded from the same website as the Chinese training set. [sent-134, score-0.209]

74 We translate each English review into Chinese and vice versa through the public Google Translation service. [sent-136, score-0.072]

75 Also following the setting in (Wan, 2009), we only use the Chinese unlabeled data and English training sets for our SCL training procedures. [sent-137, score-0.057]

76 The features we used are bigrams and unigrams in the two languages as in (Wan, 2009). [sent-139, score-0.118]

77 The features are also pre-processed and normalized as in (Blitzer et al. [sent-142, score-0.064]
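
As a rough illustration of such a feature pipeline (the Chinese segmentation step and the exact normalization are assumptions; the paper follows the preprocessing of (Blitzer et al.)), unigram and bigram count features could be built as follows. Each vectorizer would then be fit on the corresponding reviews and their translations so that both sides share one combined feature space.

```python
from sklearn.feature_extraction.text import CountVectorizer

def build_vectorizers():
    """Unigram + bigram count features for the two languages."""
    # English side: default word tokenization after lowercasing.
    en_vec = CountVectorizer(ngram_range=(1, 2), lowercase=True, min_df=2)
    # Chinese side: assumes reviews were already segmented into
    # space-separated words by a word segmenter before vectorization.
    zh_vec = CountVectorizer(ngram_range=(1, 2), lowercase=False, min_df=2,
                             token_pattern=r"[^ ]+")
    return en_vec, zh_vec
```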

78 2 Comparisons We compare our procedure with the co-training scheme reported in (Wan, 2009): CoTrain: The method with the best performance in (Wan, 2009). [sent-156, score-0.039]

79 Two standard SVMs are trained using the co-training scheme for the Chinese views and the English views. [sent-157, score-0.039]

80 SCL-O: The basic SCL except that we use all features from the translated texts instead of only the pivot features. [sent-160, score-0.693]

81 SCL-C: The training procedure is still the same as SCL-B except that at test time we only use the Chinese pivot features and neglect the English pivot features from translations. [sent-161, score-0.039]

82 SCL-E: The same as SCL-B except that in the training of linear pivot predictors, we also use the pseudo examples constructed from queries of the search engine. [sent-162, score-0.622]

83 Tables 2 and 3 give results measured on the positive labeled reviews and the negative reviews separately. [sent-163, score-0.5]

84 Table 4: Overall Accuracy of Different Methods. We also notice that using all the features, including the ones from translations, actually deteriorates the performance from 0. [sent-172, score-0.158]

85 The model incorporating the co-occurrence count information from the search engine has the best overall performance of 0. [sent-175, score-0.137]

86 It is interesting to note that the simple scheme we have adopted increased the recall performance on the negative reviews significantly. [sent-177, score-0.261]

87 After examining the reviews, we find the negative part contains some idioms and words mainly used on the internet and the query count seems to be able to capture their usage. [sent-178, score-0.174]

88 Finally, as our final goal is to train a Chinese sentiment classifier, it will be best if our model can only rely on the Chinese features. [sent-179, score-0.171]

89 This observation suggests that the translations are still helpful for the cross-lingual adaptation problem, as the translators perform some implicit semantic mapping. [sent-181, score-0.19]

90 4 Conclusion In this paper, we are interested in adapting existing knowledge to a new language. [sent-182, score-0.06]

91 We show that instead of fully relying on automatic translation, which may be misleading for a highly semantic task like sentiment analysis, using techniques like SCL to connect the two languages through feature-level mapping seems a more suitable choice. [sent-183, score-0.166]

92 We also perform an initial experiment using the co-occurrence statistics from a search engine to handle the data sparseness problem in the adaptation process, and the result is encouraging. [sent-184, score-0.274]

93 As future research we believe a promising avenue of exploration is to construct a probabilistic version of the SCL approach which could offer a more explicit model of the relations between the two domains and the relations between the search engine results and the model parameters. [sent-185, score-0.098]

94 Also, in the current work, we select the pivot features by simple ranking with mutual information, which only considers the distribution information. [sent-186, score-0.591]

95 Incorporating the confidence from the translator may further improve the performance. [sent-187, score-0.076]

96 A framework for learning predictive structures from multiple tasks and unlabeled data. [sent-190, score-0.057]

97 Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. [sent-212, score-0.243]

98 Measuring semantic similarity between words using web search engines. [sent-216, score-0.033]

99 A two-stage approach to domain adaptation for statistical classifiers. [sent-224, score-0.178]

100 Using bilingual knowledge and ensemble techniques for unsupervised chinese sentiment analysis. [sent-249, score-0.357]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('scl', 0.538), ('pivot', 0.527), ('chinese', 0.189), ('reviews', 0.177), ('xs', 0.152), ('blitzer', 0.151), ('pianyi', 0.14), ('sentiment', 0.139), ('wan', 0.132), ('xt', 0.11), ('adaptation', 0.104), ('ando', 0.099), ('translations', 0.094), ('translator', 0.076), ('domain', 0.074), ('awful', 0.073), ('classifier', 0.068), ('predictors', 0.066), ('engine', 0.065), ('translated', 0.065), ('features', 0.064), ('banea', 0.063), ('pseudo', 0.062), ('translators', 0.06), ('query', 0.058), ('unlabeled', 0.057), ('correspondence', 0.056), ('chanpin', 0.056), ('thumbs', 0.053), ('adaption', 0.049), ('sparseness', 0.046), ('ls', 0.046), ('translate', 0.046), ('cotraining', 0.045), ('eal', 0.045), ('montr', 0.045), ('negative', 0.045), ('xiaojun', 0.042), ('dai', 0.042), ('subjective', 0.041), ('mihalcea', 0.04), ('carmen', 0.04), ('bel', 0.04), ('wp', 0.04), ('lt', 0.04), ('scheme', 0.039), ('count', 0.039), ('labeled', 0.038), ('texts', 0.037), ('ping', 0.036), ('source', 0.036), ('feature', 0.035), ('interested', 0.034), ('comment', 0.034), ('positive', 0.034), ('mcdonald', 0.034), ('search', 0.033), ('train', 0.032), ('mainly', 0.032), ('pang', 0.032), ('website', 0.032), ('janyce', 0.032), ('fernando', 0.032), ('good', 0.031), ('zhang', 0.031), ('target', 0.03), ('rada', 0.029), ('ensemble', 0.029), ('translation', 0.029), ('digital', 0.029), ('give', 0.029), ('english', 0.028), ('structural', 0.028), ('bigrams', 0.027), ('xp', 0.027), ('languages', 0.027), ('svms', 0.027), ('xi', 0.027), ('correlations', 0.026), ('rochester', 0.026), ('matrix', 0.026), ('ryan', 0.026), ('public', 0.026), ('adapting', 0.026), ('product', 0.026), ('problem', 0.026), ('jiang', 0.025), ('tihn', 0.024), ('fake', 0.024), ('geo', 0.024), ('wells', 0.024), ('nuria', 0.024), ('comparsion', 0.024), ('tioon', 0.024), ('nowadays', 0.024), ('xre', 0.024), ('bon', 0.024), ('cole', 0.024), ('diminish', 0.024), ('englishchinese', 0.024)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000004 80 acl-2010-Cross Lingual Adaptation: An Experiment on Sentiment Classifications

Author: Bin Wei ; Christopher Pal

Abstract: In this paper, we study the problem of using an annotated corpus in English for the same natural language processing task in another language. While various machine translation systems are available, automated translation is still far from perfect. To minimize the noise introduced by translations, we propose to use only key "reliable" parts from the translations and apply structural correspondence learning (SCL) to find a low dimensional representation shared by the two languages. We perform experiments on an English-Chinese sentiment classification task and compare our results with a previous co-training approach. To alleviate the problem of data sparseness, we create extra pseudo-examples for SCL by making queries to a search engine. Experiments on real-world on-line review data demonstrate the two techniques can effectively improve the performance compared to previous work.

2 0.34153864 78 acl-2010-Cross-Language Text Classification Using Structural Correspondence Learning

Author: Peter Prettenhofer ; Benno Stein

Abstract: We present a new approach to cross-language text classification that builds on structural correspondence learning, a recently proposed theory for domain adaptation. The approach uses unlabeled documents, along with a simple word translation oracle, in order to induce task-specific, cross-lingual word correspondences. We report on analyses that reveal quantitative insights about the use of unlabeled data and the complexity of interlanguage correspondence modeling. We conduct experiments in the field of cross-language sentiment classification, employing English as source language, and German, French, and Japanese as target languages. The results are convincing; they demonstrate both the robustness and the competitiveness of the presented ideas.

3 0.25541398 50 acl-2010-Bilingual Lexicon Generation Using Non-Aligned Signatures

Author: Daphna Shezaf ; Ari Rappoport

Abstract: Bilingual lexicons are fundamental resources. Modern automated lexicon generation methods usually require parallel corpora, which are not available for most language pairs. Lexicons can be generated using non-parallel corpora or a pivot language, but such lexicons are noisy. We present an algorithm for generating a high quality lexicon from a noisy one, which only requires an independent corpus for each language. Our algorithm introduces non-aligned signatures (NAS), a cross-lingual word context similarity score that avoids the over-constrained and inefficient nature of alignment-based methods. We use NAS to eliminate incorrect translations from the generated lexicon. We evaluate our method by improving the quality of noisy Spanish-Hebrew lexicons generated from two pivot English lexicons. Our algorithm substantially outperforms other lexicon generation methods.

4 0.14004947 209 acl-2010-Sentiment Learning on Product Reviews via Sentiment Ontology Tree

Author: Wei Wei ; Jon Atle Gulla

Abstract: Existing works on sentiment analysis on product reviews suffer from the following limitations: (1) The knowledge of hierarchical relationships of products attributes is not fully utilized. (2) Reviews or sentences mentioning several attributes associated with complicated sentiments are not dealt with very well. In this paper, we propose a novel HL-SOT approach to labeling a product’s attributes and their associated sentiments in product reviews by a Hierarchical Learning (HL) process with a defined Sentiment Ontology Tree (SOT). The empirical analysis against a humanlabeled data set demonstrates promising and reasonable performance of the proposed HL-SOT approach. While this paper is mainly on sentiment analysis on reviews of one product, our proposed HLSOT approach is easily generalized to labeling a mix of reviews of more than one products.

5 0.12022372 210 acl-2010-Sentiment Translation through Lexicon Induction

Author: Christian Scheible

Abstract: The translation of sentiment information is a task from which sentiment analysis systems can benefit. We present a novel, graph-based approach using SimRank, a well-established vertex similarity algorithm to transfer sentiment information between a source language and a target language graph. We evaluate this method in comparison with SO-PMI.

6 0.11374415 105 acl-2010-Evaluating Multilanguage-Comparability of Subjectivity Analysis Systems

7 0.11251727 77 acl-2010-Cross-Language Document Summarization Based on Machine Translation Quality Prediction

8 0.099287964 18 acl-2010-A Study of Information Retrieval Weighting Schemes for Sentiment Analysis

9 0.091869585 188 acl-2010-Optimizing Informativeness and Readability for Sentiment Summarization

10 0.085864209 157 acl-2010-Last but Definitely Not Least: On the Role of the Last Sentence in Automatic Polarity-Classification

11 0.083218917 123 acl-2010-Generating Focused Topic-Specific Sentiment Lexicons

12 0.076941907 122 acl-2010-Generating Fine-Grained Reviews of Songs from Album Reviews

13 0.072993033 22 acl-2010-A Unified Graph Model for Sentence-Based Opinion Retrieval

14 0.068888716 141 acl-2010-Identifying Text Polarity Using Random Walks

15 0.068780802 146 acl-2010-Improving Chinese Semantic Role Labeling with Rich Syntactic Features

16 0.067413159 79 acl-2010-Cross-Lingual Latent Topic Extraction

17 0.064671285 42 acl-2010-Automatically Generating Annotator Rationales to Improve Sentiment Classification

18 0.062790416 83 acl-2010-Dependency Parsing and Projection Based on Word-Pair Classification

19 0.057732712 153 acl-2010-Joint Syntactic and Semantic Parsing of Chinese

20 0.057614285 241 acl-2010-Transition-Based Parsing with Confidence-Weighted Classification


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.181), (1, 0.034), (2, -0.123), (3, 0.176), (4, -0.041), (5, -0.013), (6, -0.031), (7, 0.004), (8, -0.025), (9, 0.121), (10, -0.007), (11, 0.084), (12, 0.036), (13, -0.094), (14, -0.126), (15, -0.014), (16, 0.243), (17, -0.229), (18, 0.041), (19, -0.078), (20, -0.025), (21, -0.109), (22, -0.015), (23, -0.199), (24, -0.064), (25, 0.04), (26, 0.069), (27, -0.044), (28, -0.123), (29, -0.039), (30, -0.041), (31, -0.031), (32, 0.113), (33, -0.079), (34, 0.031), (35, 0.183), (36, -0.065), (37, -0.079), (38, 0.215), (39, 0.059), (40, -0.031), (41, -0.065), (42, -0.04), (43, -0.133), (44, -0.045), (45, -0.132), (46, -0.081), (47, -0.089), (48, 0.107), (49, 0.113)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.92248249 80 acl-2010-Cross Lingual Adaptation: An Experiment on Sentiment Classifications

Author: Bin Wei ; Christopher Pal

Abstract: In this paper, we study the problem of using an annotated corpus in English for the same natural language processing task in another language. While various machine translation systems are available, automated translation is still far from perfect. To minimize the noise introduced by translations, we propose to use only key ‘reliable” parts from the translations and apply structural correspondence learning (SCL) to find a low dimensional representation shared by the two languages. We perform experiments on an EnglishChinese sentiment classification task and compare our results with a previous cotraining approach. To alleviate the problem of data sparseness, we create extra pseudo-examples for SCL by making queries to a search engine. Experiments on real-world on-line review data demonstrate the two techniques can effectively improvetheperformancecomparedtoprevious work.

2 0.84680557 78 acl-2010-Cross-Language Text Classification Using Structural Correspondence Learning

Author: Peter Prettenhofer ; Benno Stein

Abstract: We present a new approach to cross-language text classification that builds on structural correspondence learning, a recently proposed theory for domain adaptation. The approach uses unlabeled documents, along with a simple word translation oracle, in order to induce task-specific, cross-lingual word correspondences. We report on analyses that reveal quantitative insights about the use of unlabeled data and the complexity of interlanguage correspondence modeling. We conduct experiments in the field of cross-language sentiment classification, employing English as source language, and German, French, and Japanese as target languages. The results are convincing; they demonstrate both the robustness and the competitiveness of the presented ideas.

3 0.65829873 50 acl-2010-Bilingual Lexicon Generation Using Non-Aligned Signatures

Author: Daphna Shezaf ; Ari Rappoport

Abstract: Bilingual lexicons are fundamental resources. Modern automated lexicon generation methods usually require parallel corpora, which are not available for most language pairs. Lexicons can be generated using non-parallel corpora or a pivot language, but such lexicons are noisy. We present an algorithm for generating a high quality lexicon from a noisy one, which only requires an independent corpus for each language. Our algorithm introduces non-aligned signatures (NAS), a cross-lingual word context similarity score that avoids the over-constrained and inefficient nature of alignment-based methods. We use NAS to eliminate incorrect translations from the generated lexicon. We evaluate our method by improving the quality of noisy Spanish-Hebrew lexicons generated from two pivot English lexicons. Our algorithm substantially outperforms other lexicon generation methods.

4 0.43640062 122 acl-2010-Generating Fine-Grained Reviews of Songs from Album Reviews

Author: Swati Tata ; Barbara Di Eugenio

Abstract: Music Recommendation Systems often recommend individual songs, as opposed to entire albums. The challenge is to generate reviews for each song, since only full album reviews are available on-line. We developed a summarizer that combines information extraction and generation techniques to produce summaries of reviews of individual songs. We present an intrinsic evaluation of the extraction components, and of the informativeness of the summaries; and a user study of the impact of the song review summaries on users’ decision making processes. Users were able to make quicker and more informed decisions when presented with the summary as compared to the full album review.

5 0.43382508 157 acl-2010-Last but Definitely Not Least: On the Role of the Last Sentence in Automatic Polarity-Classification

Author: Israela Becker ; Vered Aharonson

Abstract: Two psycholinguistic and psychophysical experiments show that in order to efficiently extract polarity of written texts such as customer reviews on the Internet, one should concentrate computational efforts on messages in the final position of the text.

6 0.42771298 105 acl-2010-Evaluating Multilanguage-Comparability of Subjectivity Analysis Systems

7 0.4000119 209 acl-2010-Sentiment Learning on Product Reviews via Sentiment Ontology Tree

8 0.34965977 210 acl-2010-Sentiment Translation through Lexicon Induction

9 0.34523356 18 acl-2010-A Study of Information Retrieval Weighting Schemes for Sentiment Analysis

10 0.32678664 123 acl-2010-Generating Focused Topic-Specific Sentiment Lexicons

11 0.31601954 25 acl-2010-Adapting Self-Training for Semantic Role Labeling

12 0.31238785 63 acl-2010-Comparable Entity Mining from Comparative Questions

13 0.30207905 42 acl-2010-Automatically Generating Annotator Rationales to Improve Sentiment Classification

14 0.29914016 212 acl-2010-Simple Semi-Supervised Training of Part-Of-Speech Taggers

15 0.29140711 26 acl-2010-All Words Domain Adapted WSD: Finding a Middle Ground between Supervision and Unsupervision

16 0.28227389 52 acl-2010-Bitext Dependency Parsing with Bilingual Subtree Constraints

17 0.28030849 253 acl-2010-Using Smaller Constituents Rather Than Sentences in Active Learning for Japanese Dependency Parsing

18 0.27877593 235 acl-2010-Tools for Multilingual Grammar-Based Translation on the Web

19 0.27778986 193 acl-2010-Personalising Speech-To-Speech Translation in the EMIME Project

20 0.27426061 226 acl-2010-The Human Language Project: Building a Universal Corpus of the World's Languages


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(7, 0.032), (14, 0.015), (25, 0.06), (39, 0.011), (42, 0.023), (44, 0.012), (49, 0.248), (59, 0.078), (71, 0.043), (72, 0.035), (73, 0.055), (76, 0.019), (78, 0.021), (83, 0.08), (84, 0.024), (98, 0.161)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.78690708 80 acl-2010-Cross Lingual Adaptation: An Experiment on Sentiment Classifications

Author: Bin Wei ; Christopher Pal

Abstract: In this paper, we study the problem of using an annotated corpus in English for the same natural language processing task in another language. While various machine translation systems are available, automated translation is still far from perfect. To minimize the noise introduced by translations, we propose to use only key "reliable" parts from the translations and apply structural correspondence learning (SCL) to find a low dimensional representation shared by the two languages. We perform experiments on an English-Chinese sentiment classification task and compare our results with a previous co-training approach. To alleviate the problem of data sparseness, we create extra pseudo-examples for SCL by making queries to a search engine. Experiments on real-world on-line review data demonstrate the two techniques can effectively improve the performance compared to previous work.

2 0.70883864 253 acl-2010-Using Smaller Constituents Rather Than Sentences in Active Learning for Japanese Dependency Parsing

Author: Manabu Sassano ; Sadao Kurohashi

Abstract: We investigate active learning methods for Japanese dependency parsing. We propose active learning methods of using partial dependency relations in a given sentence for parsing and evaluate their effectiveness empirically. Furthermore, we utilize syntactic constraints of Japanese to obtain more labeled examples from precious labeled ones that annotators give. Experimental results show that our proposed methods improve considerably the learning curve of Japanese dependency parsing. In order to achieve an accuracy of over 88.3%, one of our methods requires only 34.4% of labeled examples as compared to passive learning.

3 0.7062974 79 acl-2010-Cross-Lingual Latent Topic Extraction

Author: Duo Zhang ; Qiaozhu Mei ; ChengXiang Zhai

Abstract: Probabilistic latent topic models have recently enjoyed much success in extracting and analyzing latent topics in text in an unsupervised way. One common deficiency of existing topic models, though, is that they would not work well for extracting cross-lingual latent topics simply because words in different languages generally do not co-occur with each other. In this paper, we propose a way to incorporate a bilingual dictionary into a probabilistic topic model so that we can apply topic models to extract shared latent topics in text data of different languages. Specifically, we propose a new topic model called Probabilistic Cross-Lingual Latent Semantic Analysis (PCLSA) which extends the Probabilistic Latent Semantic Analysis (PLSA) model by regularizing its likelihood function with soft constraints defined based on a bilingual dictionary. Both qualitative and quantitative experimental results show that the PCLSA model can effectively extract cross-lingual latent topics from multilingual text data.

4 0.63579357 78 acl-2010-Cross-Language Text Classification Using Structural Correspondence Learning

Author: Peter Prettenhofer ; Benno Stein

Abstract: We present a new approach to cross-language text classification that builds on structural correspondence learning, a recently proposed theory for domain adaptation. The approach uses unlabeled documents, along with a simple word translation oracle, in order to induce task-specific, cross-lingual word correspondences. We report on analyses that reveal quantitative insights about the use of unlabeled data and the complexity of interlanguage correspondence modeling. We conduct experiments in the field of cross-language sentiment classification, employing English as source language, and German, French, and Japanese as target languages. The results are convincing; they demonstrate both the robustness and the competitiveness of the presented ideas.

5 0.63295722 209 acl-2010-Sentiment Learning on Product Reviews via Sentiment Ontology Tree

Author: Wei Wei ; Jon Atle Gulla

Abstract: Existing works on sentiment analysis on product reviews suffer from the following limitations: (1) The knowledge of hierarchical relationships of products attributes is not fully utilized. (2) Reviews or sentences mentioning several attributes associated with complicated sentiments are not dealt with very well. In this paper, we propose a novel HL-SOT approach to labeling a product’s attributes and their associated sentiments in product reviews by a Hierarchical Learning (HL) process with a defined Sentiment Ontology Tree (SOT). The empirical analysis against a humanlabeled data set demonstrates promising and reasonable performance of the proposed HL-SOT approach. While this paper is mainly on sentiment analysis on reviews of one product, our proposed HLSOT approach is easily generalized to labeling a mix of reviews of more than one products.

6 0.63105261 127 acl-2010-Global Learning of Focused Entailment Graphs

7 0.63055611 207 acl-2010-Semantics-Driven Shallow Parsing for Chinese Semantic Role Labeling

8 0.62998307 113 acl-2010-Extraction and Approximation of Numerical Attributes from the Web

9 0.629843 109 acl-2010-Experiments in Graph-Based Semi-Supervised Learning Methods for Class-Instance Acquisition

10 0.62972069 146 acl-2010-Improving Chinese Semantic Role Labeling with Rich Syntactic Features

11 0.62959367 133 acl-2010-Hierarchical Search for Word Alignment

12 0.62928772 218 acl-2010-Structural Semantic Relatedness: A Knowledge-Based Method to Named Entity Disambiguation

13 0.62880909 93 acl-2010-Dynamic Programming for Linear-Time Incremental Parsing

14 0.62821114 188 acl-2010-Optimizing Informativeness and Readability for Sentiment Summarization

15 0.62778538 211 acl-2010-Simple, Accurate Parsing with an All-Fragments Grammar

16 0.62656236 83 acl-2010-Dependency Parsing and Projection Based on Word-Pair Classification

17 0.62627971 174 acl-2010-Modeling Semantic Relevance for Question-Answer Pairs in Web Social Communities

18 0.62356639 261 acl-2010-Wikipedia as Sense Inventory to Improve Diversity in Web Search Results

19 0.62312675 5 acl-2010-A Framework for Figurative Language Detection Based on Sense Differentiation

20 0.62303072 245 acl-2010-Understanding the Semantic Structure of Noun Phrase Queries