emnlp emnlp2012 emnlp2012-137 knowledge-graph by maker-knowledge-mining

137 emnlp-2012-Why Question Answering using Sentiment Analysis and Word Classes


Source: pdf

Author: Jong-Hoon Oh ; Kentaro Torisawa ; Chikara Hashimoto ; Takuya Kawada ; Stijn De Saeger ; Jun'ichi Kazama ; Yiou Wang

Abstract: In this paper we explore the utility of sentiment analysis and semantic word classes for improving why-question answering on a large-scale web corpus. Our work is motivated by the observation that a why-question and its answer often follow the pattern that if something undesirable happens, the reason is also often something undesirable, and if something desirable happens, the reason is also often something desirable. To the best of our knowledge, this is the first work that introduces sentiment analysis to non-factoid question answering. We combine this simple idea with semantic word classes for ranking answers to why-questions and show that on a set of 850 why-questions our method gains 15.2% improvement in precision at the top-1 answer over a baseline state-of-the-art QA system that achieved the best performance in a shared task of Japanese non-factoid QA in NTCIR-6.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: In this paper we explore the utility of sentiment analysis and semantic word classes for improving why-question answering on a large-scale web corpus. [sent-3, score-0.518]

2 Our work is motivated by the observation that a why-question and its answer often follow the pattern that if something undesirable happens, the reason is also often something undesirable, and if something desirable happens, the reason is also often something desirable. [sent-4, score-0.675]

3 To the best of our knowledge, this is the first work that introduces sentiment analysis to non-factoid question answering. [sent-5, score-0.418]

4 Our method gains 15.2% improvement in precision at the top-1 answer over a baseline state-of-the-art QA system that achieved the best performance in a shared task of Japanese non-factoid QA in NTCIR-6. [sent-7, score-0.45]

5 1 Introduction: Question Answering (QA) research for factoid questions has recently achieved great success, as demonstrated by IBM’s Watson at Jeopardy!: its accuracy has been reported to be around 85% on factoid questions (Ferrucci et al., 2010). [sent-8, score-0.492]

6 Although shared tasks such as NTCIR-6 (2007) have stimulated the research community to move beyond factoid QA, comparatively little attention has been paid to QA for non-factoid questions such as why-questions and how-to questions, and the performance of the state-of-the-art non-factoid QA systems reported in the literature (Murata et al., 2007) remains limited. [sent-12, score-0.491]

7 Consider the following question Q1, and its answer candidates A1-1 and A1-2. [sent-23, score-0.665]

8 […] by automatic sentiment analysis of questions and answers. [sent-32, score-0.509]

9 A second observation motivating this work is that there are often significant associations between the lexico-semantic classes of words in a question and those in its answer sentence. [sent-33, score-0.692]

10 For instance, questions concerning diseases like Q1 often have answers that include references to specific semantic word classes such as chemicals (like A1-1), viruses, body parts, and so on. [sent-34, score-0.571]

11 Another issue is that naively introducing the sentiment orientation of words or phrases in question and answer sentences is insufficient, since answer candidate sentences may contain multiple sentiment expressions with different polarities. [sent-37, score-2.473]

12 (About 33% of correct answers in our test set contained such multiple sentiment expressions with different polarities.) [sent-39, score-0.589]

13 For example, if A1-2 contained a second sentiment expression with negative polarity like the example below, “Trusting a specific food is not effective for preventing cancer, but maintaining a healthy weight may help lower the risk of various types of cancer,” [sent-40, score-0.487]

14 then both A1-1 and A1-2 would contain sentiment expressions with the same polarity as that of Q1. [sent-41, score-0.517]

15 Thus, we cannot expect sentiment orientation alone to reliably recognize A1-1 as a correct answer to Q1. [sent-42, score-0.881]

16 To address this problem, we consider not only the sentiment polarity but also the content of the sentiment expressions associated with that polarity, in both questions and their answer candidates. [sent-43, score-1.76]

17 To deal with the data sparseness problem that arises when using the content of sentiment expressions, we developed a feature set that effectively combines polarity and semantic word classes. [sent-44, score-0.634]

18 We exploit these two main ideas (concerning sentiment orientation and semantic classes, as described so far) to train a supervised classifier that ranks answer candidates to why-questions. [sent-45, score-1.094]

19 (…8% in P@1) when answer candidates containing at least one correct answer are given to our re-ranker. [sent-49, score-1.042]

20 2 Approach: Our proposed method is composed of answer retrieval and answer re-ranking. [sent-50, score-0.982]

21 The first step, answer retrieval, extracts a set of answer candidates to a why-question from a 600 million page Japanese Web corpus. [sent-51, score-1.004]

22 The answer retrieval is our implementation of the state-of-the-art method that showed the best performance in the shared task of Japanese non-factoid QA in NTCIR-6 (Murata et al., 2007). [sent-52, score-0.507]

23 The second step, answer re-ranking, is the focus of this work. [sent-55, score-0.45]

24 To balance the coverage and relevance of retrieved documents, we use the set of documents retrieved by these two queries for obtaining answer candidates. [sent-62, score-0.45]

25 Each document in the retrieval result is split into a set of answer candidates consisting of five consecutive sentences; subsequent answer candidates can share up to two sentences, to avoid errors due to wrong document segmentation. [sent-63, score-1.165]
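
The window-and-overlap scheme above is simple to reproduce. The sketch below assumes sentence splitting has already been done and reads “share up to two sentences” as a fixed stride of three; the exact stride policy is our assumption, not the paper’s stated rule.

```python
def answer_candidates(sentences, window=5, overlap=2):
    """Split a document into overlapping windows of consecutive sentences.

    Consecutive candidates share `overlap` sentences, which guards against
    losing an answer to a bad document-segmentation boundary.
    """
    stride = window - overlap
    return [sentences[i:i + window]
            for i in range(0, max(len(sentences) - overlap, 1), stride)]
```

For a 10-sentence document this yields candidates covering sentences 1–5, 4–8, and 7–10.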

26 The length of acceptable answer candidates for why-QA in the literature ranges from one sentence to two paragraphs (Fukumoto et al.). [sent-68, score-0.554]

27 Murata et al. (2007)’s method uses text search to look for answer candidates containing terms from the question together with additional clue terms referring to “reason” or “cause.” [sent-77, score-0.71]

28 The top-20 answer candidates for each question are passed on to the next step, answer re-ranking. [sent-79, score-0.665]

29 S(q, ac) assigns a tf-idf-like score to answer candidates, where 1/dist(t1, t2) plays the role of tf and 1/df(t2) plays the role of idf for terms t1 and t2 shared by q and ac. [sent-80, score-0.554]

30 S(q, ac) = max_{t1∈T} Σ_{t2∈T} log(ts(t1, t2))  (1), where ts(t1, t2) = N / (2 × dist(t1, t2) × df(t2)). Here T is the set of terms, including nouns, verbs, and adjectives, in question q that appear in answer candidate ac. [sent-81, score-0.596]
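
A sketch of Eq. (1) under the reconstruction above. The extracted formula was garbled, so the placement of N (corpus size) and the token-distance reading of dist are our assumptions, chosen to match the stated roles of 1/dist as tf and N/df as idf.

```python
import math

def retrieval_score(q_terms, ac_tokens, df, N):
    """Eq. (1) as reconstructed above -- a sketch, not the authors' code.

    q_terms:   content words (nouns/verbs/adjectives) of the question
    ac_tokens: token list of the answer candidate
    df:        document-frequency dict; N: corpus size
    """
    T = [t for t in q_terms if t in ac_tokens]
    if not T:
        return float("-inf")
    pos = {t: ac_tokens.index(t) for t in T}   # first occurrence (assumed)
    def ts(t1, t2):
        dist = abs(pos[t1] - pos[t2]) + 1      # +1 avoids a zero distance
        return N / (2.0 * dist * df.get(t2, N))
    return max(sum(math.log(ts(t1, t2)) for t2 in T) for t1 in T)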

31 All answer candidates of a question are ranked in descending order of the score given by SVMs. [sent-86, score-0.665]
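
The paper’s classifier appears to be TinySVM (a footnote URL fragment survives later in this summary); the stand-in below uses scikit-learn’s linear SVM just to show the score-then-sort step, with random vectors in place of the real features.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_train = rng.random((200, 16))     # stand-in feature vectors
y_train = rng.integers(0, 2, 200)   # 1 = correct answer, 0 = incorrect

clf = LinearSVC(C=1.0)              # TinySVM in the paper; this is a stand-in
clf.fit(X_train, y_train)

X_q = rng.random((20, 16))          # the 20 candidates of one question
scores = clf.decision_function(X_q)
ranking = np.argsort(-scores)       # candidate indices, best first
```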

32 We trained and tested the re-ranker using 10-fold cross validation on a corpus composed of 850 why-questions and their top-20 answer candidates provided by the answer retrieval procedure in Section 2. [sent-87, score-1.111]

33 The answer candidates were manually annotated by three human annotators (not by the authors). [sent-89, score-0.584]

34 3 Features for Answer Re-ranking: This section describes our feature sets for answer re-ranking: features from morphological and syntactic analysis (MSA), features representing semantic word classes (SWC), and features from sentiment analysis (SA). [sent-91, score-1.024]

35 MSA, which has been widely used for re-ranking answers in the literature, is used to identify associations between questions and answers at the morpheme, word phrase, and syntactic dependency levels. [sent-92, score-0.51]

36 SA is used for identifying sentiment orientation associations between questions and answers as well as expressing the combination of each sentiment expression and its polarity. [sent-95, score-1.104]

37 We represent all sentences in a question and its answer candidate in three ways: morphemes, word phrases (bunsetsu), and syntactic dependency chains. [sent-103, score-0.669]

38 From each question and answer candidate we extract n-grams of morphemes, word phrases, and syntactic dependencies, where n ranges from 1 to 3. [sent-105, score-0.622]

39 MSA1 consists of n-gram features from all sentences in a question and its answer candidates, and distinguishes an n-gram feature found in a question from the same feature found in answer candidates. [sent-109, score-1.252]

40 MSA2 contains n-grams found in the answer. (A bunsetsu is a syntactic constituent composed of a content word and several function words such as post-positions and case markers.) [sent-110, score-0.501]

41 MSA3 consists of the n-gram features that contain one of the clue terms used for answer retrieval (riyuu (reason), genin (cause), or youin (cause)). [sent-142, score-0.638]

42 Here too, n-grams obtained from the questions and answer candidates are distinguished. [sent-143, score-0.756]

43 Finally, MSA4 is the percentage of the question terms found in an answer candidate. [sent-144, score-0.561]
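
A compact sketch of MSA1, MSA3, and MSA4 over plain token n-grams. The paper also uses morpheme- and dependency-level n-grams and a separate MSA2 set, which we omit; the “Q:”/“A:” prefixes and the one-decimal binning of MSA4 are our own encoding choices.

```python
def ngrams(tokens, n_max=3):
    """All 1..n_max token n-grams, joined with spaces."""
    return {" ".join(tokens[i:i + n])
            for n in range(1, n_max + 1)
            for i in range(len(tokens) - n + 1)}

def msa_features(q_tokens, a_tokens, clue_terms=("riyuu", "genin", "youin")):
    feats = set()
    # MSA1: the same n-gram is a different feature in a question vs. an answer
    feats |= {"Q:" + g for g in ngrams(q_tokens)}
    feats |= {"A:" + g for g in ngrams(a_tokens)}
    # MSA3: n-grams containing a "reason"/"cause" clue term
    feats |= {"CLUE:" + g for g in ngrams(a_tokens)
              if any(c in g.split() for c in clue_terms)}
    # MSA4: fraction of question terms covered by the answer candidate
    q = set(q_tokens)
    overlap = len(q & set(a_tokens)) / len(q) if q else 0.0
    feats.add("MSA4:%.1f" % overlap)
    return feats
```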

44 Again, word class n-grams obtained from questions are distinguished from the ones in answer candidates. [sent-171, score-0.689]

45 The second type of SWC, SWC2, represents word class n-grams in an answer candidate, in which question terms are replaced by their respective semantic word classes. [sent-173, score-0.66]

46 These features capture the correspondence between semantic word classes in the question and answer candidates. [sent-175, score-0.734]
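
A sketch of SWC2, reusing ngrams() from the MSA sketch above; word_class is assumed to map a word to its cluster id from the large-scale noun clustering mentioned in the conclusion.

```python
def swc2_features(a_tokens, q_terms, word_class, n_max=3):
    """Replace question terms inside the answer candidate by their semantic
    word classes, then take n-grams of the relabeled sequence."""
    q_terms = set(q_terms)
    relabeled = ["CLASS_" + word_class[t] if (t in q_terms and t in word_class)
                 else t
                 for t in a_tokens]
    return {"SWC2:" + g for g in ngrams(relabeled, n_max)}
```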

47 We use an opinion extraction tool and the sentiment orientation lexicon included in the tool for these features. [sent-178, score-0.516]

48 It extracts linguistic expressions representing opinions (henceforth, we call them sentiment phrases) from a Japanese sentence and then identifies the polarity of these sentiment phrases using machine learning techniques. [sent-183, score-0.871]

49 For example, “rickets occur” in Q2 and “Deficiency of vitamin D can cause rickets” in A2 can be identified as sentiment phrases with negative polarity. [sent-184, score-0.697]

50 The tool identifies sentiment phrases and their polarity by using polarities of words and dependency subtrees as evidence, where these polarities are given in a word polarity dictionary. [sent-185, score-0.953]

51 In this paper, we use a trained model and a word polarity dictionary (containing about 35,000 entries) distributed via the ALAGIN forum for our sentiment analysis. [sent-186, score-0.487]

52 Polarity classification is evaluated under the condition that all of the sentiment phrases are correctly extracted. [sent-189, score-0.354]

53 Word polarity features are used for identifying associations between the polarity of words in a question and that in a correct answer. [sent-206, score-0.581]

54 We expect our classifier to learn from this question and answer pair that if a word with negative polarity appears in a question then its correct answer is likely to contain a negative polarity word as well. [sent-210, score-1.52]

55 SA@W1 and SA@W2 in Table 1 are sentiment analysis features from word polarity n-grams, i.e., n-grams that contain at least one word with a word polarity. [sent-211, score-0.513]

56 SA@W1 is concerned with all word polarity n-grams in questions and answer candidates. [sent-214, score-0.832]
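
A minimal sketch of the SA@W idea: look up word polarities in a dictionary (such as the roughly 35,000-entry ALAGIN lexicon mentioned above) and emit per-side polarity features plus an agreement feature. The feature names are ours, and single-word lookups stand in for the paper’s polarity n-grams.

```python
def word_polarity_features(q_tokens, a_tokens, polarity):
    """polarity: dict mapping a word to 'pos' or 'neg'."""
    q_pols = {polarity[w] for w in q_tokens if w in polarity}
    a_pols = {polarity[w] for w in a_tokens if w in polarity}
    feats = {"QPOL:" + p for p in q_pols} | {"APOL:" + p for p in a_pols}
    # agreement: e.g. a negative question word answered by a negative word
    feats |= {"AGREE:" + p for p in q_pols & a_pols}
    return feats
```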

57 3 Phrase Polarity (SA@P): The opinion extraction tool is applied to a question and its answer candidate to identify sentiment phrases and their phrase polarities. [sent-224, score-0.997]

58 In preliminary tests we found that sentiment phrases do not help to identify correct answers if the answer sentences including the sentiment phrases do not have any term from the question. [sent-225, score-1.314]

59 So we restrict the target sentiment phrases to those acquired from sentences containing at least one question term. [sent-226, score-0.465]

60 First, SA@P1 and SA@P2 are features concerned with phrase-polarity agreement between sentiment phrases in a question and its answer candidate. [sent-228, score-0.941]

61 We consider all possible pairs of sentiment phrases from the question and answer. [sent-229, score-0.465]
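 
A sketch of SA@P1/SA@P2 over all question-answer phrase pairs. Each phrase is assumed to arrive from the opinion extraction tool as a (text, polarity) pair, and the agree/disagree encoding is our own.

```python
def phrase_polarity_agreement(q_phrases, a_phrases):
    """q_phrases/a_phrases: lists of (text, polarity) sentiment phrases."""
    feats = []
    for _, pq in q_phrases:
        for _, pa in a_phrases:
            tag = "AGREE" if pq == pa else "DISAGREE"
            feats.append(f"SAP:{tag}:{pq}-{pa}")
    return feats
```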

62 Secondly, following the original hypothesis underlying this paper, we assume that sentiment phrases often represent the core part of the correct answer. [sent-231, score-0.842]

63 SA@P3 represents the content of these sentiment phrases as n-grams of morphemes, words, and syntactic dependencies of the sentiment phrases, together with their phrase polarity. [sent-235, score-0.64]

64 Furthermore, SA@P4 is the subset of SA@P3 n-grams restricted to those that include terms found in the question, and SA@P5 indicates the percentage of sentiment n-grams from the question that are found in a given answer candidate. [sent-236, score-0.868]
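
SA@P5 reduces to an n-gram overlap ratio. The sketch below again uses plain token n-grams (via ngrams() from the MSA sketch) in place of the morpheme/word/dependency n-grams of the paper.

```python
def sa_p5(q_phrase_tokens, a_tokens, n_max=3):
    """Fraction of the question's sentiment-phrase n-grams that reappear
    in the answer candidate."""
    q_grams = ngrams(q_phrase_tokens, n_max)
    if not q_grams:
        return 0.0
    return len(q_grams & ngrams(a_tokens, n_max)) / len(q_grams)
```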

65 These features consist of word class n-grams and joint class-polarity n-grams taken from sentiment phrases, together with their phrase polarity. [sent-238, score-0.37]

66 SA@P10 represents the semantic content of two sentiment phrases with the same sentiment orientation (one from a question and the other from an answer candidate) using word class n-grams, together with the phrase polarity in agreement. [sent-240, score-1.407]

67 These questions were sampled from the “Yahoo! Chiebukuro Data (2nd edition)”; each consists of a single sentence and contains the interrogative naze (why), and our annotators verified that these questions are meaningful without further context. [sent-246, score-0.434]

68 Note that the correct answer to these questions does not have to be either in our target corpus or in real-world Web texts. [sent-258, score-0.69]

69 Finally, QS3 contains why-questions that have at least one answer in our target corpus (a 600 million page Japanese Web corpus). [sent-260, score-0.45]

70 Because randomly selected passages from our target corpus have little chance of generating good why-questions, we extracted passages from our target corpus that include at least one of the clue terms used in our answer retrieval step (i.e., riyuu, genin, or youin). [sent-263, score-0.552]
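
The passage filter is a one-liner; the clue terms are the three given in Section 2, and plain substring matching here stands in for whatever morphological matching the authors actually used.

```python
CLUE_TERMS = ("riyuu", "genin", "youin")   # "reason"/"cause" clue terms

def qs3_source_passages(passages):
    """Keep only passages containing at least one clue term."""
    return [p for p in passages if any(c in p for c in CLUE_TERMS)]
```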

71 This setting may not necessarily reflect a “real world” distribution of why-questions, in which ideally a wide range of people ask questions that may or may not have an answer in our corpus. [sent-267, score-0.652]

72 However, QS3 allows us to evaluate our method under the idealized conditions where we have a perfect answer retrieval module whose answer candidates always contain at least one correct answer (the source passage used for creating the why-question). [sent-268, score-1.598]

73 Under these circumstances we found that our method achieves almost 65% precision in P@1, which suggests that it can potentially perform with high precision if the answer candidates given by the answer retrieval module contain at least one correct answer. [sent-270, score-1.099]

74 Additionally, we use QS3 for building training data, to check whether questions that do not reflect the real-world distribution of why-questions are useful for improving the system’s performance on “real-world” questions (see Section 5).

75 In addition, we checked QS1, QS2 and QS3 for questions having the same topic, to avoid the possibility that the distribution of questions is biased towards certain topics. [sent-274, score-0.404]

76 In the end we obtained 250 questions in QS1, 250 questions in QS2 and 350 questions in QS3. [sent-279, score-0.606]

77 Set2 is mainly used for estimating the ideal-case performance of our method with a perfect answer retrieval module. [sent-284, score-0.507]

78 We used our answer retrieval system to obtain the top-20 answer candidates for each question, and all question-answer (candidate) pairs were checked by three annotators, whose inter-rater agreement (Fleiss’ kappa) was 0.[…] [sent-286, score-1.061]

79 Note that word and phrase polarities were not considered by the annotators in building our test sets; these polarities are automatically identified using the word polarity dictionary and the opinion extraction tool. [sent-305, score-0.478]

80 We confirmed that about 35% of questions and 40% of answer candidates had at least one sentiment phrase according to the opinion extraction tool, and that about 45% of questions and 85% of answer candidates contained at least one word with a polarity according to the word polarity dictionary. [sent-306, score-2.255]

81 P@1 measures how many questions have a correct top answer candidate. [sent-309, score-0.69]
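
P@1 as used here is straightforward to compute from per-question rankings:

```python
def precision_at_1(ranked_labels):
    """ranked_labels: one list of 0/1 correctness labels per question,
    ordered best-ranked first. P@1 = share of questions whose top-ranked
    candidate is correct."""
    return sum(labels[0] for labels in ranked_labels) / len(ranked_labels)
```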

82 The data set consists of 10,000 question-answer pairs (500 questions with their 20 answer candidates), and was partitioned into 10 subsamples such that the questions in one subsample do not overlap with those of the other subsamples. [sent-314, score-0.924]
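
A sketch of that question-level partitioning: splitting question ids rather than QA pairs is what keeps the 20 candidates of one question out of train and test at the same time.

```python
def question_folds(question_ids, k=10):
    """Partition distinct question ids into k disjoint folds."""
    qs = sorted(set(question_ids))
    return [set(qs[i::k]) for i in range(k)]

# usage: QA pairs whose question id is in fold f form the test set of round f
```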

83 It shows the effect of answer re-ranking when evaluating our proposed method with training data built from real-world why-questions alone. [sent-317, score-0.45]

84 B-QA is our answer retrieval system alone; the other five systems re-rank its top-20 answer candidates using their own re-rankers. [sent-325, score-0.45]

85 B-QA: our answer retrieval system, our implementation of Murata et al. (2007).

86 The CR features include binary features indicating whether an answer candidate contains a causal relation pattern, which causal relation pattern the answer candidate has, and whether the question-answer pair contains a causal relation instance (cause in the answer, effect in the question). [sent-338, score-1.444]

87 Due to this lower coverage, the WordNet features in Japanese may have less power for finding a correct answer than those in English used in Verberne et al. [sent-352, score-0.514]

88 UpperBound: a system that ranks all n correct answers as the top n results of the 20 answer candidates if there are any. [sent-356, score-0.71]
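
UpperBound’s P@1 follows directly from the definition: a question scores 1 whenever any of its retrieved candidates is correct. A minimal sketch:

```python
def upper_bound_p1(label_sets):
    """label_sets: one list of 0/1 labels per question (its 20 candidates)."""
    return sum(1 for labels in label_sets if any(labels)) / len(label_sets)
```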

89 SA (features from sentiment analysis) and SWC (features from semantic word classes) yielded a significant performance improvement: the gap between MSA+SWC+SA and MSA+SWC was 2.6%–6% in P@1. [sent-387, score-0.369]

90 This gap supports the hypothesis about sentiment analysis and semantic word classes put forward in this paper. [sent-389, score-0.454]

91 We believe that this is mainly because SA@W and SWC are based on semantic and sentiment information at the word level, and these often capture a similar type of information. [sent-395, score-0.369]

92 Here, we assume a perfect answer retrieval module that adds the source passage that was used for generating the original why-question in Set2 as a correct answer to the set of existing answer candidates, giving 21 answer candidates. [sent-399, score-1.944]

93 This evaluation result suggests that our re-ranker can potentially perform with high precision when the answer candidates given by the answer retrieval module contain at least one correct answer. [sent-403, score-1.549]

94 Our work differs from the above approaches in that we propose semantic word classes and sentiment analysis as a new type of semantic features, and show their usefulness in why-QA. [sent-411, score-0.516]

95 Sentiment analysis has been used before on the slightly unusual task of opinion question answering, where the system is asked to answer subjective opinion questions (Stoyanov et al.). [sent-412, score-0.915]

96 To the best of our knowledge though, no previous work has systematically explored the use of sentiment analysis in a general QA setting beyond opinion questions. [sent-415, score-0.383]

97 7 Conclusion In this paper, we have explored the utility of sentiment analysis and semantic word classes for ranking answer candidates to why-questions. [sent-416, score-1.008]

98 We proposed a set of semantic features that exploit sentiment analysis and semantic word classes obtained from largescale noun clustering, and used them to train an answer candidate re-ranker. [sent-417, score-1.027]

99 A system for answering non-factoid Japanese questions by using passage retrieval weighted based on type of answer. [sent-479, score-0.372]

100 Learning to rank answers to nonfactoid questions from web collections. [sent-510, score-0.363]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('answer', 0.45), ('sa', 0.364), ('sentiment', 0.307), ('swc', 0.269), ('msa', 0.231), ('questions', 0.202), ('polarity', 0.18), ('qa', 0.172), ('verberne', 0.146), ('answers', 0.118), ('rickets', 0.113), ('question', 0.111), ('candidates', 0.104), ('higashinaka', 0.099), ('japanese', 0.097), ('polarities', 0.096), ('causal', 0.092), ('orientation', 0.086), ('classes', 0.085), ('wdisease', 0.085), ('murata', 0.076), ('opinion', 0.076), ('nutrients', 0.071), ('isozaki', 0.066), ('answering', 0.064), ('cv', 0.063), ('semantic', 0.062), ('deficiency', 0.061), ('diseases', 0.061), ('vitamin', 0.061), ('wordnet', 0.06), ('cancer', 0.057), ('retrieval', 0.057), ('fukumoto', 0.057), ('cause', 0.056), ('cr', 0.054), ('passage', 0.049), ('yahoo', 0.047), ('tool', 0.047), ('phrases', 0.047), ('associations', 0.046), ('something', 0.046), ('clue', 0.045), ('factoid', 0.044), ('torisawa', 0.043), ('chemicals', 0.043), ('genin', 0.043), ('nonfactoid', 0.043), ('riyuu', 0.043), ('youin', 0.043), ('undesirable', 0.041), ('morphemes', 0.041), ('saeger', 0.041), ('expressing', 0.038), ('correct', 0.038), ('class', 0.037), ('upperbound', 0.037), ('subsamples', 0.037), ('kentaro', 0.036), ('candidate', 0.035), ('kazama', 0.034), ('subsample', 0.033), ('hashimoto', 0.031), ('stijn', 0.031), ('relation', 0.03), ('annotators', 0.03), ('expressions', 0.03), ('thumbs', 0.029), ('aerts', 0.028), ('april', 0.028), ('aq', 0.028), ('chiebukuro', 0.028), ('disallowed', 0.028), ('mars', 0.028), ('nelleke', 0.028), ('nitrosamine', 0.028), ('objection', 0.028), ('peterarno', 0.028), ('questionanswer', 0.028), ('suzan', 0.028), ('wbc', 0.028), ('wcondition', 0.028), ('whyquestions', 0.028), ('nakagawa', 0.027), ('ir', 0.026), ('surdeanu', 0.026), ('features', 0.026), ('syntactic', 0.026), ('morphological', 0.026), ('wn', 0.026), ('cross', 0.025), ('composed', 0.025), ('boves', 0.024), ('oostdijk', 0.024), ('lou', 0.024), ('auction', 0.024), ('alagin', 0.024), ('bond', 0.024), ('inhibitory', 0.024), ('salt', 0.024)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000006 137 emnlp-2012-Why Question Answering using Sentiment Analysis and Word Classes

Author: Jong-Hoon Oh ; Kentaro Torisawa ; Chikara Hashimoto ; Takuya Kawada ; Stijn De Saeger ; Jun'ichi Kazama ; Yiou Wang

Abstract: In this paper we explore the utility of sentiment analysis and semantic word classes for improving why-question answering on a large-scale web corpus. Our work is motivated by the observation that a why-question and its answer often follow the pattern that if something undesirable happens, the reason is also often something undesirable, and if something desirable happens, the reason is also often something desirable. To the best of our knowledge, this is the first work that introduces sentiment analysis to non-factoid question answering. We combine this simple idea with semantic word classes for ranking answers to why-questions and show that on a set of 850 why-questions our method gains 15.2% improvement in precision at the top-1 answer over a baseline state-of-the-art QA system that achieved the best performance in a shared task of Japanese non-factoid QA in NTCIR-6.

2 0.32577857 20 emnlp-2012-Answering Opinion Questions on Products by Exploiting Hierarchical Organization of Consumer Reviews

Author: Jianxing Yu ; Zheng-Jun Zha ; Tat-Seng Chua

Abstract: This paper proposes to generate appropriate answers for opinion questions about products by exploiting the hierarchical organization of consumer reviews. The hierarchy organizes product aspects as nodes following their parent-child relations. For each aspect, the reviews and corresponding opinions on this aspect are stored. We develop a new framework for opinion Questions Answering, which enables accurate question analysis and effective answer generation by making use of the hierarchy. In particular, we first identify the (explicit/implicit) product aspects asked in the questions and their sub-aspects by referring to the hierarchy. We then retrieve the corresponding review fragments relevant to the aspects from the hierarchy. In order to generate appropriate answers from the review fragments, we develop a multi-criteria optimization approach for answer generation by simultaneously taking into account review salience, coherence, diversity, and parent-child relations among the aspects. We conduct evaluations on 11 popular products in four domains. The evaluated corpus contains 70,359 consumer reviews and 220 questions on these products. Experimental results demonstrate the effectiveness of our approach.

3 0.18918496 23 emnlp-2012-Besting the Quiz Master: Crowdsourcing Incremental Classification Games

Author: Jordan Boyd-Graber ; Brianna Satinoff ; He He ; Hal Daume III

Abstract: Cost-sensitive classification, where the features used in machine learning tasks have a cost, has been explored as a means of balancing knowledge against the expense of incrementally obtaining new features. We introduce a setting where humans engage in classification with incrementally revealed features: the collegiate trivia circuit. By providing the community with a web-based system to practice, we collected tens of thousands of implicit word-by-word ratings of how useful features are for eliciting correct answers. Observing humans’ classification process, we improve the performance of a state-of-the-art classifier. We also use the dataset to evaluate a system to compete in the incremental classification task through a reduction of reinforcement learning to classification. Our system learns when to answer a question, performing better than baselines and most human players.

4 0.18031076 28 emnlp-2012-Collocation Polarity Disambiguation Using Web-based Pseudo Contexts

Author: Yanyan Zhao ; Bing Qin ; Ting Liu

Abstract: This paper focuses on the task of collocation polarity disambiguation. The collocation refers to a binary tuple of a polarity word and a target (such as ⟨long, battery life⟩ or ⟨long, startup⟩), in which the sentiment orientation of the polarity word (“long”) changes along with different targets (“battery life” or “startup”). To disambiguate a collocation’s polarity, previous work always turned to investigate the polarities of its surrounding contexts, and then assigned the majority polarity to the collocation. However, these contexts are limited, thus the resulting polarity is insufficient to be reliable. We therefore propose an unsupervised three-component framework to expand some pseudo contexts from web, to help disambiguate a collocation’s polarity. Without using any additional labeled data, experiments show that our method is effective.

5 0.16384847 14 emnlp-2012-A Weakly Supervised Model for Sentence-Level Semantic Orientation Analysis with Multiple Experts

Author: Lizhen Qu ; Rainer Gemulla ; Gerhard Weikum

Abstract: We propose the weakly supervised MultiExperts Model (MEM) for analyzing the semantic orientation of opinions expressed in natural language reviews. In contrast to most prior work, MEM predicts both opinion polarity and opinion strength at the level of individual sentences; such fine-grained analysis helps to understand better why users like or dislike the entity under review. A key challenge in this setting is that it is hard to obtain sentence-level training data for both polarity and strength. For this reason, MEM is weakly supervised: It starts with potentially noisy indicators obtained from coarse-grained training data (i.e., document-level ratings), a small set of diverse base predictors, and, if available, small amounts of fine-grained training data. We integrate these noisy indicators into a unified probabilistic framework using ideas from ensemble learning and graph-based semi-supervised learning. Our experiments indicate that MEM outperforms state-of-the-art methods by a significant margin.

6 0.15544474 34 emnlp-2012-Do Neighbours Help? An Exploration of Graph-based Algorithms for Cross-domain Sentiment Classification

7 0.14258482 135 emnlp-2012-Using Discourse Information for Paraphrase Extraction

8 0.12439799 97 emnlp-2012-Natural Language Questions for the Web of Data

9 0.10544816 15 emnlp-2012-Active Learning for Imbalanced Sentiment Classification

10 0.10155846 116 emnlp-2012-Semantic Compositionality through Recursive Matrix-Vector Spaces

11 0.096132211 101 emnlp-2012-Opinion Target Extraction Using Word-Based Translation Model

12 0.081205651 51 emnlp-2012-Extracting Opinion Expressions with semi-Markov Conditional Random Fields

13 0.079772495 44 emnlp-2012-Excitatory or Inhibitory: A New Semantic Orientation Extracts Contradiction and Causality from the Web

14 0.078706227 41 emnlp-2012-Entity based QA Retrieval

15 0.076416343 107 emnlp-2012-Polarity Inducing Latent Semantic Analysis

16 0.07494767 112 emnlp-2012-Resolving Complex Cases of Definite Pronouns: The Winograd Schema Challenge

17 0.069414757 32 emnlp-2012-Detecting Subgroups in Online Discussions by Modeling Positive and Negative Relations among Participants

18 0.058122389 139 emnlp-2012-Word Salad: Relating Food Prices and Descriptions

19 0.052896842 40 emnlp-2012-Ensemble Semantics for Large-scale Unsupervised Relation Extraction

20 0.050950285 136 emnlp-2012-Weakly Supervised Training of Semantic Parsers


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.209), (1, 0.172), (2, -0.043), (3, 0.353), (4, 0.262), (5, -0.156), (6, -0.111), (7, 0.003), (8, -0.062), (9, 0.011), (10, 0.067), (11, 0.028), (12, 0.038), (13, -0.074), (14, 0.042), (15, -0.046), (16, 0.055), (17, 0.21), (18, 0.007), (19, -0.011), (20, 0.127), (21, -0.004), (22, 0.028), (23, 0.235), (24, 0.222), (25, -0.047), (26, 0.194), (27, 0.085), (28, 0.088), (29, 0.027), (30, -0.025), (31, 0.028), (32, 0.042), (33, 0.005), (34, -0.066), (35, 0.022), (36, 0.027), (37, -0.006), (38, 0.022), (39, 0.006), (40, -0.073), (41, 0.015), (42, -0.005), (43, -0.048), (44, -0.033), (45, -0.022), (46, 0.032), (47, -0.031), (48, -0.006), (49, -0.034)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97495049 137 emnlp-2012-Why Question Answering using Sentiment Analysis and Word Classes

Author: Jong-Hoon Oh ; Kentaro Torisawa ; Chikara Hashimoto ; Takuya Kawada ; Stijn De Saeger ; Jun'ichi Kazama ; Yiou Wang

Abstract: In this paper we explore the utility of sentiment analysis and semantic word classes for improving why-question answering on a large-scale web corpus. Our work is motivated by the observation that a why-question and its answer often follow the pattern that if something undesirable happens, the reason is also often something undesirable, and if something desirable happens, the reason is also often something desirable. To the best of our knowledge, this is the first work that introduces sentiment analysis to non-factoid question answering. We combine this simple idea with semantic word classes for ranking answers to why-questions and show that on a set of 850 why-questions our method gains 15.2% improvement in precision at the top-1 answer over a baseline state-of-the-art QA system that achieved the best performance in a shared task of Japanese non-factoid QA in NTCIR-6.

2 0.81460208 20 emnlp-2012-Answering Opinion Questions on Products by Exploiting Hierarchical Organization of Consumer Reviews

Author: Jianxing Yu ; Zheng-Jun Zha ; Tat-Seng Chua

Abstract: This paper proposes to generate appropriate answers for opinion questions about products by exploiting the hierarchical organization of consumer reviews. The hierarchy organizes product aspects as nodes following their parent-child relations. For each aspect, the reviews and corresponding opinions on this aspect are stored. We develop a new framework for opinion Questions Answering, which enables accurate question analysis and effective answer generation by making use the hierarchy. In particular, we first identify the (explicit/implicit) product aspects asked in the questions and their sub-aspects by referring to the hierarchy. We then retrieve the corresponding review fragments relevant to the aspects from the hierarchy. In order to gener- ate appropriate answers from the review fragments, we develop a multi-criteria optimization approach for answer generation by simultaneously taking into account review salience, coherence, diversity, and parent-child relations among the aspects. We conduct evaluations on 11 popular products in four domains. The evaluated corpus contains 70,359 consumer reviews and 220 questions on these products. Experimental results demonstrate the effectiveness of our approach.

Abstract: This paper proposes to generate appropriate answers for opinion questions about products by exploiting the hierarchical organization of consumer reviews. The hierarchy organizes product aspects as nodes following their parent-child relations. For each aspect, the reviews and corresponding opinions on this aspect are stored. We develop a new framework for opinion Questions Answering, which enables accurate question analysis and effective answer generation by making use of the hierarchy. In particular, we first identify the (explicit/implicit) product aspects asked in the questions and their sub-aspects by referring to the hierarchy. We then retrieve the corresponding review fragments relevant to the aspects from the hierarchy. In order to generate appropriate answers from the review fragments, we develop a multi-criteria optimization approach for answer generation by simultaneously taking into account review salience, coherence, diversity, and parent-child relations among the aspects. We conduct evaluations on 11 popular products in four domains. The evaluated corpus contains 70,359 consumer reviews and 220 questions on these products. Experimental results demonstrate the effectiveness of our approach.

Author: Jordan Boyd-Graber ; Brianna Satinoff ; He He ; Hal Daume III

Abstract: Cost-sensitive classification, where the features used in machine learning tasks have a cost, has been explored as a means of balancing knowledge against the expense of incrementally obtaining new features. We introduce a setting where humans engage in classification with incrementally revealed features: the collegiate trivia circuit. By providing the community with a web-based system to practice, we collected tens of thousands of implicit word-by-word ratings of how useful features are for eliciting correct answers. Observing humans’ classification process, we improve the performance of a state-of-the-art classifier. We also use the dataset to evaluate a system to compete in the incremental classification task through a reduction of reinforcement learning to classification. Our system learns when to answer a question, performing better than baselines and most human players.

4 0.44701913 97 emnlp-2012-Natural Language Questions for the Web of Data

Author: Mohamed Yahya ; Klaus Berberich ; Shady Elbassuoni ; Maya Ramanath ; Volker Tresp ; Gerhard Weikum

Abstract: The Linked Data initiative comprises structured databases in the Semantic-Web data model RDF. Exploring this heterogeneous data by structured query languages is tedious and error-prone even for skilled users. To ease the task, this paper presents a methodology for translating natural language questions into structured SPARQL queries over linked-data sources. Our method is based on an integer linear program to solve several disambiguation tasks jointly: the segmentation of questions into phrases; the mapping of phrases to semantic entities, classes, and relations; and the construction of SPARQL triple patterns. Our solution harnesses the rich type system provided by knowledge bases in the web of linked data, to constrain our semantic-coherence objective function. We present experiments on both the question translation and the resulting query answering.

5 0.41703126 28 emnlp-2012-Collocation Polarity Disambiguation Using Web-based Pseudo Contexts

Author: Yanyan Zhao ; Bing Qin ; Ting Liu

Abstract: This paper focuses on the task of collocation polarity disambiguation. The collocation refers to a binary tuple of a polarity word and a target (such as ⟨long, battery life⟩ or ⟨long, startup⟩), in which the sentiment orientation of the polarity word (“long”) changes along with different targets (“battery life” or “startup”). To disambiguate a collocation’s polarity, previous work always turned to investigate the polarities of its surrounding contexts, and then assigned the majority polarity to the collocation. However, these contexts are limited, thus the resulting polarity is insufficient to be reliable. We therefore propose an unsupervised three-component framework to expand some pseudo contexts from web, to help disambiguate a collocation’s polarity. Without using any additional labeled data, experiments show that our method is effective.

6 0.40604031 14 emnlp-2012-A Weakly Supervised Model for Sentence-Level Semantic Orientation Analysis with Multiple Experts

7 0.39966848 107 emnlp-2012-Polarity Inducing Latent Semantic Analysis

8 0.38682607 34 emnlp-2012-Do Neighbours Help? An Exploration of Graph-based Algorithms for Cross-domain Sentiment Classification

9 0.38498592 15 emnlp-2012-Active Learning for Imbalanced Sentiment Classification

10 0.32605073 41 emnlp-2012-Entity based QA Retrieval

11 0.322725 44 emnlp-2012-Excitatory or Inhibitory: A New Semantic Orientation Extracts Contradiction and Causality from the Web

12 0.26342428 139 emnlp-2012-Word Salad: Relating Food Prices and Descriptions

13 0.25898302 32 emnlp-2012-Detecting Subgroups in Online Discussions by Modeling Positive and Negative Relations among Participants

14 0.24415025 116 emnlp-2012-Semantic Compositionality through Recursive Matrix-Vector Spaces

15 0.24178906 112 emnlp-2012-Resolving Complex Cases of Definite Pronouns: The Winograd Schema Challenge

16 0.23320988 135 emnlp-2012-Using Discourse Information for Paraphrase Extraction

17 0.23087816 101 emnlp-2012-Opinion Target Extraction Using Word-Based Translation Model

18 0.18923597 62 emnlp-2012-Identifying Constant and Unique Relations by using Time-Series Text

19 0.18473832 77 emnlp-2012-Learning Constraints for Consistent Timeline Extraction

20 0.18467255 29 emnlp-2012-Concurrent Acquisition of Word Meaning and Lexical Categories


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.023), (16, 0.027), (25, 0.02), (34, 0.055), (36, 0.31), (60, 0.113), (63, 0.067), (64, 0.018), (65, 0.03), (70, 0.023), (73, 0.015), (74, 0.036), (76, 0.059), (80, 0.015), (86, 0.026), (95, 0.065)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.72553325 137 emnlp-2012-Why Question Answering using Sentiment Analysis and Word Classes

Author: Jong-Hoon Oh ; Kentaro Torisawa ; Chikara Hashimoto ; Takuya Kawada ; Stijn De Saeger ; Jun'ichi Kazama ; Yiou Wang

Abstract: In this paper we explore the utility of sentiment analysis and semantic word classes for improving why-question answering on a large-scale web corpus. Our work is motivated by the observation that a why-question and its answer often follow the pattern that if something undesirable happens, the reason is also often something undesirable, and if something desirable happens, the reason is also often something desirable. To the best of our knowledge, this is the first work that introduces sentiment analysis to non-factoid question answering. We combine this simple idea with semantic word classes for ranking answers to why-questions and show that on a set of 850 why-questions our method gains 15.2% improvement in precision at the top-1 answer over a baseline state-of-the-art QA system that achieved the best performance in a shared task of Japanese non-factoid QA in NTCIR-6.

2 0.47917554 20 emnlp-2012-Answering Opinion Questions on Products by Exploiting Hierarchical Organization of Consumer Reviews

Author: Jianxing Yu ; Zheng-Jun Zha ; Tat-Seng Chua

Abstract: This paper proposes to generate appropriate answers for opinion questions about products by exploiting the hierarchical organization of consumer reviews. The hierarchy organizes product aspects as nodes following their parent-child relations. For each aspect, the reviews and corresponding opinions on this aspect are stored. We develop a new framework for opinion Questions Answering, which enables accurate question analysis and effective answer generation by making use of the hierarchy. In particular, we first identify the (explicit/implicit) product aspects asked in the questions and their sub-aspects by referring to the hierarchy. We then retrieve the corresponding review fragments relevant to the aspects from the hierarchy. In order to generate appropriate answers from the review fragments, we develop a multi-criteria optimization approach for answer generation by simultaneously taking into account review salience, coherence, diversity, and parent-child relations among the aspects. We conduct evaluations on 11 popular products in four domains. The evaluated corpus contains 70,359 consumer reviews and 220 questions on these products. Experimental results demonstrate the effectiveness of our approach.

3 0.47783193 14 emnlp-2012-A Weakly Supervised Model for Sentence-Level Semantic Orientation Analysis with Multiple Experts

Author: Lizhen Qu ; Rainer Gemulla ; Gerhard Weikum

Abstract: We propose the weakly supervised MultiExperts Model (MEM) for analyzing the semantic orientation of opinions expressed in natural language reviews. In contrast to most prior work, MEM predicts both opinion polarity and opinion strength at the level of individual sentences; such fine-grained analysis helps to understand better why users like or dislike the entity under review. A key challenge in this setting is that it is hard to obtain sentence-level training data for both polarity and strength. For this reason, MEM is weakly supervised: It starts with potentially noisy indicators obtained from coarse-grained training data (i.e., document-level ratings), a small set of diverse base predictors, and, if available, small amounts of fine-grained training data. We integrate these noisy indicators into a unified probabilistic framework using ideas from ensemble learning and graph-based semi-supervised learning. Our experiments indicate that MEM outperforms state-of-the-art methods by a significant margin.

4 0.47333503 136 emnlp-2012-Weakly Supervised Training of Semantic Parsers

Author: Jayant Krishnamurthy ; Tom Mitchell

Abstract: We present a method for training a semantic parser using only a knowledge base and an unlabeled text corpus, without any individually annotated sentences. Our key observation is that multiple forms of weak supervision can be combined to train an accurate semantic parser: semantic supervision from a knowledge base, and syntactic supervision from dependency-parsed sentences. We apply our approach to train a semantic parser that uses 77 relations from Freebase in its knowledge representation. This semantic parser extracts instances of binary relations with state-of-the-art accuracy, while simultaneously recovering much richer semantic structures, such as conjunctions of multiple relations with partially shared arguments. We demonstrate recovery of this richer structure by extracting logical forms from natural language queries against Freebase. On this task, the trained semantic parser achieves 80% precision and 56% recall, despite never having seen an annotated logical form.

5 0.46693847 110 emnlp-2012-Reading The Web with Learned Syntactic-Semantic Inference Rules

Author: Ni Lao ; Amarnag Subramanya ; Fernando Pereira ; William W. Cohen

Abstract: We study how to extend a large knowledge base (Freebase) by reading relational information from a large Web text corpus. Previous studies on extracting relational knowledge from text show the potential of syntactic patterns for extraction, but they do not exploit background knowledge of other relations in the knowledge base. We describe a distributed, Web-scale implementation of a path-constrained random walk model that learns syntactic-semantic inference rules for binary relations from a graph representation of the parsed text and the knowledge base. Experiments show significant accuracy improvements in binary relation prediction over methods that consider only text, or only the existing knowledge base.

6 0.46495649 3 emnlp-2012-A Coherence Model Based on Syntactic Patterns

7 0.46425349 47 emnlp-2012-Explore Person Specific Evidence in Web Person Name Disambiguation

8 0.46416634 71 emnlp-2012-Joint Entity and Event Coreference Resolution across Documents

9 0.46366644 23 emnlp-2012-Besting the Quiz Master: Crowdsourcing Incremental Classification Games

10 0.46203452 52 emnlp-2012-Fast Large-Scale Approximate Graph Construction for NLP

11 0.4620263 92 emnlp-2012-Multi-Domain Learning: When Do Domains Matter?

12 0.46068689 107 emnlp-2012-Polarity Inducing Latent Semantic Analysis

13 0.45785287 114 emnlp-2012-Revisiting the Predictability of Language: Response Completion in Social Media

14 0.45543426 78 emnlp-2012-Learning Lexicon Models from Search Logs for Query Expansion

15 0.45508549 39 emnlp-2012-Enlarging Paraphrase Collections through Generalization and Instantiation

16 0.45477873 93 emnlp-2012-Multi-instance Multi-label Learning for Relation Extraction

17 0.45408884 97 emnlp-2012-Natural Language Questions for the Web of Data

18 0.45376885 124 emnlp-2012-Three Dependency-and-Boundary Models for Grammar Induction

19 0.45367509 5 emnlp-2012-A Discriminative Model for Query Spelling Correction with Latent Structural SVM

20 0.45278424 51 emnlp-2012-Extracting Opinion Expressions with semi-Markov Conditional Random Fields