acl acl2013 acl2013-284 knowledge-graph by maker-knowledge-mining

284 acl-2013-Probabilistic Sense Sentiment Similarity through Hidden Emotions


Source: pdf

Author: Mitra Mohtarami ; Man Lan ; Chew Lim Tan

Abstract: Sentiment Similarity of word pairs reflects the distance between the words regarding their underlying sentiments. This paper aims to infer the sentiment similarity between word pairs with respect to their senses. To achieve this aim, we propose a probabilistic emotion-based approach that is built on a hidden emotional model. The model aims to predict a vector of basic human emotions for each sense of the words. The resultant emotional vectors are then employed to infer the sentiment similarity of word pairs. We apply the proposed approach to address two main NLP tasks, namely, Indirect yes/no Question Answer Pairs inference and Sentiment Orientation prediction. Extensive experiments demonstrate the effectiveness of the proposed approach.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 This paper aims to infer the sentiment similarity between word pairs with respect to their senses. [sent-3, score-0.469]

2 To achieve this aim, we propose a probabilistic emotion-based approach that is built on a hidden emotional model. [sent-4, score-0.402]

3 The model aims to predict a vector of basic human emotions for each sense of the words. [sent-5, score-0.395]

4 The resultant emotional vectors are then employed to infer the sentiment similarity of word pairs. [sent-6, score-0.804]

5 Semantic similarity measures such as LSA (Landauer et al., 1998) can effectively capture the similarity between semantically related words like "car" and "automobile", but they are less effective in relating words with similar sentiment orientation like "excellent" and "superior". [sent-11, score-0.527]

6 For example, the following relations show the semantic similarity between some sentiment words as computed by LSA. [sent-12, score-0.489]

7 Clearly, the sentiment similarity between the above words should be in the reversed order. [sent-69, score-0.431]

8 In fact, the sentiment intensity of "excellent" is closer to "superior" than to "good". [sent-70, score-0.282]

9 The sentiment similarity between "good" and "bad" should be 0. [sent-76, score-0.406]

10 In this paper, we propose a probabilistic approach to detect the sentiment similarity of words regarding their senses and underlying sentiments. [sent-77, score-0.497]

11 For this purpose, we propose to model the hidden emotions of word senses. [sent-78, score-0.506]

12 Clearly, the sentiment words in IQAPs are the pivots for inferring the yes or no answers. [sent-83, score-0.428]

13 We show that the sentiment similarity between such words (e.g., the adjectives in the question and answer) can be used to infer these answers. [sent-84, score-0.431]

14 The second application (SO prediction) aims to determine the sentiment orientation of individual words. [sent-87, score-0.331]

15 Previous research utilized the semantic relations between words obtained from WordNet (Hassan and Radev, 2010) and semantic similarity measures (e.g., PMI). [sent-88, score-0.238]

16 In this paper, we show that sentiment similarity between word pairs can be effectively utilized to compute SO of words. [sent-91, score-0.485]

17 (Section 2: Sentiment Similarity through Hidden Emotions) As we discussed above, semantic similarity measures are less effective at inferring sentiment similarity between word pairs. [sent-95, score-0.621]

18 In addition, different senses of sentiment words carry different human emotions. [sent-96, score-0.346]

19 In fact, a sentiment word can be represented as a vector of emotions with intensity values from "very weak" to "very strong". [sent-97, score-0.638]

20 For example, Table 1 shows several sentiment words and their corresponding emotion vectors based on the following set of emotions: e = [anger, disgust, sadness, fear, guilt, interest, joy, shame, surprise]. [sent-98, score-0.518]
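As a quick illustration of this representation, below is a minimal Python sketch of words as intensity vectors over the nine emotions listed above. The intensity values are invented for demonstration and are not the actual values from the paper's Table 1.

```python
import numpy as np

# The nine basic emotions used in Table 1 (in the order given above).
EMOTIONS = ["anger", "disgust", "sadness", "fear", "guilt",
            "interest", "joy", "shame", "surprise"]

# Illustrative emotion vectors; these intensities are made up for the
# example and are NOT the paper's Table 1 values.
emotion_vectors = {
    "rude":    np.array([0.7, 0.6, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]),
    "doleful": np.array([0.0, 0.0, 0.8, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]),
}

def dominant_emotions(word, k=2):
    """Return the k emotions with the highest nonzero intensity for a word."""
    vec = emotion_vectors[word]
    top = np.argsort(vec)[::-1][:k]
    return [(EMOTIONS[i], float(vec[i])) for i in top if vec[i] > 0]

print(dominant_emotions("rude"))     # [('anger', 0.7), ('disgust', 0.6)]
print(dominant_emotions("doleful"))  # [('sadness', 0.8)]
```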

21 One of the words in Table 1 has 0.5 intensity values with respect to the emotions "disgust" and "sadness", with an overall negative score. [sent-101, score-0.356]

22 The difficulty of the sentiment similarity prediction task is evident when terms carry different types of emotions. [sent-116, score-0.476]

23 For instance, all the words in Table 1 have negative sentiment orientation, but they carry different emotions with different emotion vectors. [sent-117, score-0.695]

24 For example, "rude" reflects the emotions "anger" and "disgust", while the word "doleful" only reflects the emotion "sadness". [sent-118, score-0.547]

25 We show that emotion vectors of the words can be effectively utilized to predict the sentiment similarity between them. [sent-120, score-0.697]

26 Previous research shows little agreement about the number and types of the basic emotions (Ortony and Turner, 1990; Izard, 1971). [sent-121, score-0.356]

27 Thus, we assume that the number and types of basic emotions are hidden and not pre-defined and propose a Probabilistic Sense Sentiment Similarity (PSSS) approach to extract the hidden emotions of word senses to infer their sentiment similarity. [sent-122, score-1.396]

28 There are various feelings and emotions behind such ratings with respect to the content of the reviews. [sent-127, score-0.443]

29 Figure 1 shows the intermediate layer of hidden emotions behind the ratings (sentiments) assigned to the documents (reviews) containing the words. [sent-128, score-0.617]

30 It shows that hidden emotions (e_i) link the ratings (r_j) and the documents (d_k). [sent-130, score-0.632]

31 In this section, we aim to employ ratings and the relations among ratings, documents, and words to extract the hidden emotions. [sent-131, score-0.262]

32 As Figure 2 shows, a rating r from a set of ratings R = {r_1, ..., r_p} is assigned to a hidden emotion set E = {e_1, ..., e_k}. [sent-133, score-0.488]

33 A document d from a set of documents D = {d_1, ..., d_N} with vocabulary set W = {w_1, ..., w_M} is associated with the hidden emotion set. [sent-134, score-0.323]

34 (Hidden emotional model) The model presented in Figure 2(a) has been explored in (Mohtarami et al., 2013). [sent-136, score-0.252]

35 The joint probability for the BHEM is defined by marginalizing over the hidden emotion e, using the class probability of the hidden emotion e being assigned to the observation (w, d, r). [sent-191, score-0.648]

36 1. E-step: calculates posterior probabilities for the hidden emotions given the words, documents and ratings, and 2. M-step: re-estimates the model parameters from these posteriors. [sent-377, score-0.530]
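For concreteness, here is a minimal EM sketch for a PLSA-style hidden-emotion model over (word, document, rating) observations. It assumes the factorization P(w, d, r) = Σ_e P(e) P(w|e) P(d|e) P(r|e), which is consistent with the description of BHEM (w, d, and r each linked directly to the hidden emotion e), but the paper's exact parameterization and update equations are not reproduced here.

```python
import numpy as np

def em_hidden_emotions(counts, K, n_iter=50, seed=0):
    """EM for a PLSA-style hidden-emotion model, assuming
    P(w,d,r) = sum_e P(e) P(w|e) P(d|e) P(r|e).
    counts: (W, D, R) array of observation counts (toy-sized; real data
    would need sparse observations rather than a dense tensor)."""
    rng = np.random.default_rng(seed)
    W, D, R = counts.shape
    p_e = rng.dirichlet(np.ones(K))              # P(e)
    p_w_e = rng.dirichlet(np.ones(W), size=K)    # rows: P(w|e), shape (K, W)
    p_d_e = rng.dirichlet(np.ones(D), size=K)    # rows: P(d|e), shape (K, D)
    p_r_e = rng.dirichlet(np.ones(R), size=K)    # rows: P(r|e), shape (K, R)
    for _ in range(n_iter):
        # E-step: posterior P(e | w, d, r) for every observation cell.
        post = (p_e[:, None, None, None]
                * p_w_e[:, :, None, None]
                * p_d_e[:, None, :, None]
                * p_r_e[:, None, None, :])       # shape (K, W, D, R)
        post /= post.sum(axis=0, keepdims=True) + 1e-12
        # M-step: re-estimate the parameters from expected counts.
        weighted = post * counts[None]           # expected counts per emotion
        mass = weighted.sum(axis=(1, 2, 3))      # total mass per emotion
        p_w_e = weighted.sum(axis=(2, 3)) / (mass[:, None] + 1e-12)
        p_d_e = weighted.sum(axis=(1, 3)) / (mass[:, None] + 1e-12)
        p_r_e = weighted.sum(axis=(1, 2)) / (mass[:, None] + 1e-12)
        p_e = mass / mass.sum()
    # The column of p_w_e for a word w, read across emotions, is its
    # emotional vector P(w|e) -- the quantity used below for similarity.
    return p_e, p_w_e, p_d_e, p_r_e
```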

37 The reason is that w bypasses d to directly associate with the hidden emotion e in Figure 2(b). [sent-650, score-0.299]

38 Finally, we construct the emotional vectors using the algorithm presented in Table 2. [sent-823, score-0.314]

39 Our goal is to infer the emotional vector for each word w, which can be obtained from the probability P(w|e). [sent-826, score-0.315]

40 (Enriching Hidden Emotional Models) We enrich our emotional model by employing the requirement that the emotional vectors of two synonymous words w1 and w2 should be similar. [sent-847, score-0.591]

41 For this purpose, we utilize the semantic similarity between each pair of words and create an enriched matrix. [sent-848, score-0.306]

42 To compute the semantic similarity between word senses, we utilize their synsets. [sent-850, score-0.230]
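The synset-based formula itself did not survive extraction. As an illustrative stand-in (an assumption, not necessarily the paper's measure), the sketch below scores two words by the Jaccard overlap of their WordNet synonym lemmas, using NLTK:

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def synset_overlap_similarity(w1, w2):
    """Jaccard overlap between the synonym-lemma sets of two words.
    An illustrative synset-based similarity; the paper's exact formula
    is not reproduced here."""
    syn1 = {l.name() for s in wn.synsets(w1) for l in s.lemmas()}
    syn2 = {l.name() for s in wn.synsets(w2) for l in s.lemmas()}
    if not syn1 or not syn2:
        return 0.0
    return len(syn1 & syn2) / len(syn1 | syn2)

print(synset_overlap_similarity("car", "automobile"))  # nonzero overlap
```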

43 In addition, note that employing the synsets of the words helps to obtain different emotional vectors for each sense of a word. [sent-915, score-0.378]

44 The resultant enriched matrix W×W is multiplied with the inputs of our hidden models (the matrices W×D or W×R). [sent-916, score-0.312]
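The enrichment step itself is a plain matrix product. A minimal sketch, assuming dense NumPy arrays for the W×W similarity matrix and the W×D / W×R inputs:

```python
import numpy as np

def enrich_inputs(sim_ww, word_doc, word_rating):
    """Multiply the W x W enriched (similarity) matrix into the hidden
    model's inputs, producing enriched W x D and W x R matrices."""
    enriched_wd = sim_ww @ word_doc      # (W, W) @ (W, D) -> (W, D)
    enriched_wr = sim_ww @ word_rating   # (W, W) @ (W, R) -> (W, R)
    return enriched_wd, enriched_wr
```

Intuitively, the multiplication spreads each word's document and rating evidence onto its synonyms, which pushes synonymous senses toward similar emotional vectors.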

45 To address this issue, we measure the confidence of each opinion word in the enriched matrix. [sent-933, score-0.251]
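The confidence formula is not shown in the extracted text. The sketch below therefore uses a purely hypothetical confidence (the mean strength of a word's similarity links in the enriched matrix), together with the thresholding scheme discussed in Section 7:

```python
def word_confidence(sim_ww, idx):
    """Hypothetical confidence for word idx: the mean of its nonzero
    similarity links in the enriched matrix. A placeholder only; the
    paper's actual confidence measure did not survive extraction."""
    row = sim_ww[idx]
    links = row[row > 0]
    return float(links.mean()) if links.size else 0.0

def filter_by_confidence(sim_ww, threshold):
    """Zero out the rows of low-confidence words, mirroring the
    confidence-thresholding experiment described later."""
    out = sim_ww.copy()
    for i in range(out.shape[0]):
        if word_confidence(out, i) < threshold:
            out[i] = 0.0
    return out
```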

46 We use the approach of (Mohtarami et al., 2013) to compute the sentiment similarity between two words. [sent-1011, score-0.430]

47 This approach compares the emotional vectors of the given words. [sent-1012, score-0.252]

48 Let X and Y be the emotional vectors of two words. [sent-1013, score-0.314]

49 Two words are considered similar in sentiment iff they satisfy both of the following conditions. [sent-1081, score-0.282]

50 Finally, we compute the sentiment similarity (SS). [sent-1141, score-0.430]
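The two conditions and the SS formula did not survive extraction. As an illustrative stand-in, the sketch below imposes two hypothetical conditions (strong cosine agreement between the emotional vectors, and matching dominant emotions) before assigning a nonzero score; the paper's actual conditions and formula may differ:

```python
import numpy as np

def sentiment_similarity(X, Y, tau=0.8):
    """Illustrative SS between emotional vectors X and Y. Both the
    threshold tau and the two conditions below are assumptions."""
    cos = float(np.dot(X, Y) /
                (np.linalg.norm(X) * np.linalg.norm(Y) + 1e-12))
    same_dominant = np.argmax(X) == np.argmax(Y)
    # Condition 1: the vectors agree strongly; condition 2: they share
    # a dominant emotion. Only then do we report a nonzero similarity.
    return cos if (cos >= tau and same_dominant) else 0.0
```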

51 (Section 5: Applications) We explain how our approach utilizes the sentiment similarity between words to perform the IQAP inference and SO prediction tasks, respectively. [sent-1192, score-0.542]

52 In IQAPs, we employ the sentiment similarity between the adjectives in questions and answers to interpret the indirect answers. [sent-1193, score-0.447]
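A minimal sketch of that decision rule, assuming a sentiment-similarity function ss and a hypothetical decision threshold (the actual algorithm in Figure 4 may be more involved):

```python
def iqap_answer(question_adj, answer_adj, ss, threshold=0.5):
    """Interpret an indirect answer: if the adjective in the answer is
    sentimentally similar to the adjective in the question, infer "yes";
    otherwise infer "no"."""
    return "yes" if ss(question_adj, answer_adj) >= threshold else "no"

# e.g. Q: "Was the movie good?"  A: "It was excellent."
# iqap_answer("good", "excellent", ss)  ->  "yes"
```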

53 In the SO prediction task, we aim to compute more accurate SO values using our sentiment similarity method. [sent-1200, score-0.430]

54 Turney and Littman (2003) proposed a method in which the SO of a word is calculated based on its semantic similarity with seven positive words minus its similarity with seven negative words as shown in Figure 5. [sent-1201, score-0.385]
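Turney and Littman's formulation uses seven positive and seven negative paradigm words. The sketch below parameterizes the similarity function so that either a semantic similarity (as in the original method) or the PSSS sentiment similarity (as this paper proposes) can be plugged in:

```python
# The seven positive and seven negative paradigm words of
# Turney and Littman (2003).
POS_SEEDS = ["good", "nice", "excellent", "positive",
             "fortunate", "correct", "superior"]
NEG_SEEDS = ["bad", "nasty", "poor", "negative",
             "unfortunate", "wrong", "inferior"]

def sentiment_orientation(word, sim):
    """SO of a word: its total similarity to the positive seeds minus
    its total similarity to the negative seeds."""
    return (sum(sim(word, p) for p in POS_SEEDS)
            - sum(sim(word, n) for n in NEG_SEEDS))
```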

55 Here, we consider six emotions for both bridged and series models. [sent-1223, score-0.581]

56 The reason is that PSSS is based on the combination of a sentiment space (through the ratings, and the matrices W×R in BHEM and D×R in SHEM) and a semantic space (through the input W×D in SHEM and the enriched matrix W×W in both hidden models). [sent-1245, score-0.626]

57 This is because the emotional vectors of the words are directly computed from the EM steps of BHEM. [sent-1250, score-0.391]

58 However, the emotional vectors of SHEM are computed after the EM steps finish, using Equation (14). [sent-1251, score-0.366]
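Equation (14) itself is not reproduced in the extracted text. The sketch below shows one natural reading of the description, to be treated as an assumption: each word's emotional vector is a count-weighted, normalized average of the emotional vectors of the documents containing it.

```python
import numpy as np

def shem_word_vectors(word_doc, doc_emotions):
    """Derive word emotional vectors from document emotional vectors,
    in the spirit of Equation (14) (exact form unknown).
    word_doc: (W, D) word-document counts; doc_emotions: (D, K)."""
    vecs = word_doc @ doc_emotions               # (W, K)
    totals = vecs.sum(axis=1, keepdims=True)
    return vecs / np.where(totals > 0, totals, 1.0)
```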

59 This causes the SHEM model to estimate the number and types of the hidden emotions less accurately than BHEM, although the overall performances of SHEM and BHEM are comparable, as explained in Section 7. [sent-1252, score-0.580]

60 (Evaluation of IQAPs Inference) To apply our PSSS to the IQAPs inference task, we use it as the sentiment similarity measure in the algorithm explained in Figure 4. [sent-1254, score-0.469]

61 The second row of Table 4 shows the results of using a popular semantic similarity measure, PMI, as the sentiment similarity (SS) measure in Figure 4. [sent-1260, score-0.558]

62 The result shows that PMI is less effective at capturing sentiment similarity. [sent-1268, score-0.282]

63 Table 4 shows the effectiveness of our sentiment similarity measure. [sent-1271, score-0.406]

64 (Analysis and Discussions: Number and Types of Emotions) In our PSSS approach, there is no limitation on the number and types of emotions, as we assumed the emotions are hidden. [sent-1281, score-0.712]

65 Figures 6 and 7 show the results of the hidden models (SHEM and BHEM) on the SO prediction and IQAPs inference tasks, respectively, with different numbers of emotions. [sent-1283, score-0.261]

66 First, for SHEM, there is no significant difference between the performances with six and 11 emotions in the SO prediction task. [sent-1288, score-0.515]

67 (Figure 6 caption: Performance of BHEM and SHEM on SO prediction with different numbers of emotions.) [sent-1290, score-0.426]

68 (Figure 7 caption: Performance of BHEM and SHEM on IQAPs inference with different numbers of emotions.) The same holds for BHEM. [sent-1291, score-0.397]

69 Also, the performances of SHEM on the IQAP inference task with six and 11 emotions are comparable. [sent-1292, score-0.486]

70 Thus, we consider the dimensionality at which both hidden emotional models present reasonable performance over both tasks. [sent-1294, score-0.402]

71 Second, as shown in Figures 6 and 7, in contrast to BHEM, the performance of SHEM does not change considerably with different numbers of emotions on either task. [sent-1296, score-0.356]

72 This is because, in SHEM, the emotional vectors of the words are derived from the emotional vectors of the documents after the EM steps (see Equation (14)). [sent-1297, score-0.677]

73 However, in BHEM, the emotional vectors are directly obtained from the EM steps. [sent-1298, score-0.314]

74 Therefore, based on the above discussion, the estimated number of emotions is six in our development dataset. [sent-1301, score-0.393]

75 The types of the emotions can then be interpreted by observing the top k words in each emotion. [sent-1307, score-0.381]

76 For example, Table 5 shows the top 6 words for three out of six emotions obtained for BHEM. [sent-1308, score-0.418]

77 The corresponding emotions for these categories can be interpreted as "wonderful", "boring" and "disreputable", respectively. [sent-1310, score-0.356]

78 We also observed that, in SHEM with eleven emotions, some of the emotion categories have similar top k words, such that they can be merged to represent the same emotion. [sent-1311, score-0.323]

79 Thus, it indicates that BHEM is better than SHEM at estimating the number of emotions. [sent-1312, score-0.377]

80 (Effect of Synsets and Antonyms) We show the important effect of synsets and antonyms in computing the sentiment similarity of words. [sent-1314, score-0.543]

81 For this purpose, we repeat the experiment for SO prediction by computing sentiment similarity of word pairs with and without using synonyms and antonyms. [sent-1315, score-0.54]

82 (Figure caption: Effect of confidence values in SO prediction with different numbers of emotions in BHEM.) The two highest performances are obtained when we use synonyms, and the two lowest performances are achieved when we do not use synonyms. [sent-1322, score-0.439]

83 To illustrate the utility of the confidence value, we repeat the experiment for SO prediction by BHEM using all the words appearing in the enriched matrix with different confidence thresholds. [sent-1329, score-0.406]

84 This is because a large threshold filters a large number of words from the enriched model, which decreases the effect of the enriched matrix. [sent-1342, score-0.259]

85 (Series Model) The bridged and series models are both hidden-emotion models developed to predict sense sentiment similarity. [sent-1352, score-1.015]

86 • In BHEM, the emotional vectors are directly computed from the EM steps. [sent-1359, score-0.344]

87 However, the emotional vector of a word in SHEM is computed using the emotional vectors of the documents containing the word. [sent-1360, score-0.62]

88 This adds noise to the emotional vectors of the words. [sent-1361, score-0.314]

89 • BHEM gives a more accurate estimation of the type and number of emotions than SHEM. [sent-1362, score-0.356]

90 Most previous works employed semantic similarity of word pairs to address SO prediction and IQAP inference tasks. [sent-1366, score-0.263]

91 However, such approaches did not take into account the sentiment similarity between words. [sent-1374, score-0.406]

92 We showed that measuring the sentiment similarity between the adjectives in the question and answer leads to higher performance compared to semantic similarity measures. [sent-1381, score-0.520]

93 In our previous work, Mohtarami et al. (2012), we proposed an approach to predict the sentiment similarity of words using their emotional vectors. [sent-1383, score-0.683]

94 We assumed that the type and number of emotions are pre-defined and our approach was based on this assumption. [sent-1384, score-0.356]

95 Furthermore, the emotions in different datasets can vary. [sent-1386, score-0.356]

96 We addressed this in Mohtarami et al. (2013) by considering the emotions as hidden, and presented a hidden emotional model called SHEM. [sent-1388, score-0.908]

97 This paper also considers the emotions as hidden and presents another hidden emotional model, called BHEM, that gives a more accurate estimation of the number and types of the hidden emotions. [sent-1389, score-1.058]

98 (Section 9: Conclusion) We propose a probabilistic approach to infer the sentiment similarity between word senses with respect to automatically learned hidden emotions. [sent-1390, score-0.658]

99 Experiments show that our sentiment similarity models lead to effective emotional vector construction and significantly outperform semantic similarity measures on the two NLP tasks. [sent-1393, score-0.810]

100 Semantic text similarity using corpus-based word similarity and string similarity. [sent-1412, score-0.248]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('bhem', 0.441), ('emotions', 0.356), ('shem', 0.35), ('sentiment', 0.282), ('emotional', 0.252), ('iqaps', 0.243), ('psss', 0.175), ('mohtarami', 0.167), ('hidden', 0.15), ('emotion', 0.149), ('bridged', 0.149), ('similarity', 0.124), ('iqap', 0.106), ('enriched', 0.104), ('rating', 0.102), ('ratings', 0.087), ('antonyms', 0.082), ('wj', 0.078), ('pmi', 0.078), ('confidence', 0.074), ('prediction', 0.07), ('infer', 0.063), ('vectors', 0.062), ('neviarouskaya', 0.061), ('sentimentally', 0.061), ('equation', 0.059), ('em', 0.058), ('yes', 0.058), ('ss', 0.057), ('proceeding', 0.055), ('performances', 0.052), ('sadness', 0.05), ('orientation', 0.049), ('answer', 0.045), ('mitra', 0.044), ('pp', 0.044), ('synonyms', 0.042), ('adjectives', 0.041), ('inference', 0.041), ('amiri', 0.04), ('wi', 0.039), ('senses', 0.039), ('sense', 0.039), ('wsd', 0.039), ('series', 0.039), ('littman', 0.038), ('six', 0.037), ('disgust', 0.037), ('matrix', 0.037), ('lsa', 0.036), ('opinion', 0.036), ('marneffe', 0.033), ('utilized', 0.033), ('negative', 0.032), ('deceive', 0.03), ('disreputable', 0.03), ('doleful', 0.03), ('wosynwant', 0.03), ('turney', 0.03), ('computed', 0.03), ('thresholding', 0.03), ('synsets', 0.029), ('syn', 0.029), ('reviews', 0.028), ('mpqa', 0.028), ('semantic', 0.028), ('positive', 0.027), ('acii', 0.027), ('ortony', 0.027), ('extraordinary', 0.027), ('nonuniform', 0.027), ('hadi', 0.027), ('regarding', 0.027), ('chew', 0.027), ('effect', 0.026), ('utilize', 0.025), ('matrices', 0.025), ('bayes', 0.025), ('inherit', 0.025), ('words', 0.025), ('excellent', 0.024), ('documents', 0.024), ('value', 0.024), ('compute', 0.024), ('prendinger', 0.023), ('potts', 0.023), ('alena', 0.023), ('follows', 0.023), ('steps', 0.022), ('maas', 0.022), ('hassan', 0.022), ('landauer', 0.022), ('depend', 0.022), ('explained', 0.022), ('effectively', 0.022), ('repeat', 0.022), ('indicates', 0.021), ('reflects', 0.021), ('mitsuru', 0.021), ('resultant', 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 284 acl-2013-Probabilistic Sense Sentiment Similarity through Hidden Emotions

Author: Mitra Mohtarami ; Man Lan ; Chew Lim Tan

Abstract: Sentiment Similarity of word pairs reflects the distance between the words regarding their underlying sentiments. This paper aims to infer the sentiment similarity between word pairs with respect to their senses. To achieve this aim, we propose a probabilistic emotion-based approach that is built on a hidden emotional model. The model aims to predict a vector of basic human emotions for each sense of the words. The resultant emotional vectors are then employed to infer the sentiment similarity of word pairs. We apply the proposed approach to address two main NLP tasks, namely, Indirect yes/no Question Answer Pairs inference and Sentiment Orientation prediction. Extensive experiments demonstrate the effectiveness of the proposed approach.

2 0.22830421 209 acl-2013-Joint Modeling of News Readerâ•Žs and Comment Writerâ•Žs Emotions

Author: Huanhuan Liu ; Shoushan Li ; Guodong Zhou ; Chu-ren Huang ; Peifeng Li

Abstract: Emotion classification can be generally done from both the writer’s and reader’s perspectives. In this study, we find that two foundational tasks in emotion classification, i.e., reader’s emotion classification on the news and writer’s emotion classification on the comments, are strongly related to each other in terms of coarse-grained emotion categories, i.e., negative and positive. On this basis, we propose a respective way to jointly model these two tasks. In particular, a co-training algorithm is proposed to improve semi-supervised learning of the two tasks. Experimental evaluation shows the effectiveness of our joint modeling approach.

3 0.22147116 282 acl-2013-Predicting and Eliciting Addressee's Emotion in Online Dialogue

Author: Takayuki Hasegawa ; Nobuhiro Kaji ; Naoki Yoshinaga ; Masashi Toyoda

Abstract: While there have been many attempts to estimate the emotion of an addresser from her/his utterance, few studies have explored how her/his utterance affects the emotion of the addressee. This has motivated us to investigate two novel tasks: predicting the emotion of the addressee and generating a response that elicits a specific emotion in the addressee’s mind. We target Japanese Twitter posts as a source of dialogue data and automatically build training data for learning the predictors and generators. The feasibility of our approaches is assessed by using 1099 utterance-response pairs that are built by five human workers.

4 0.19681536 188 acl-2013-Identifying Sentiment Words Using an Optimization-based Model without Seed Words

Author: Hongliang Yu ; Zhi-Hong Deng ; Shiyingxue Li

Abstract: Sentiment Word Identification (SWI) is a basic technique in many sentiment analysis applications. Most existing research exploits seed words, which leads to low robustness. In this paper, we propose a novel optimization-based model for SWI. Unlike previous approaches, our model exploits the sentiment labels of documents instead of seed words. Several experiments on real datasets show that WEED is effective and outperforms the state-of-the-art methods with seed words.

5 0.18896697 79 acl-2013-Character-to-Character Sentiment Analysis in Shakespeare's Plays

Author: Eric T. Nalisnick ; Henry S. Baird

Abstract: We present an automatic method for analyzing sentiment dynamics between characters in plays. This literary format’s structured dialogue allows us to make assumptions about who is participating in a conversation. Once we have an idea of who a character is speaking to, the sentiment in his or her speech can be attributed accordingly, allowing us to generate lists of a character’s enemies and allies as well as pinpoint scenes critical to a character’s emotional development. Results of experiments on Shakespeare’s plays are presented along with discussion of how this work can be extended to unstructured texts (i.e. novels).

6 0.18677753 379 acl-2013-Utterance-Level Multimodal Sentiment Analysis

7 0.16897275 2 acl-2013-A Bayesian Model for Joint Unsupervised Induction of Sentiment, Aspect and Discourse Representations

8 0.15313603 318 acl-2013-Sentiment Relevance

9 0.14595678 211 acl-2013-LABR: A Large Scale Arabic Book Reviews Dataset

10 0.13238777 345 acl-2013-The Haves and the Have-Nots: Leveraging Unlabelled Corpora for Sentiment Analysis

11 0.12858947 81 acl-2013-Co-Regression for Cross-Language Review Rating Prediction

12 0.12317397 148 acl-2013-Exploring Sentiment in Social Media: Bootstrapping Subjectivity Clues from Multilingual Twitter Streams

13 0.11385977 147 acl-2013-Exploiting Topic based Twitter Sentiment for Stock Prediction

14 0.11331926 115 acl-2013-Detecting Event-Related Links and Sentiments from Social Media Texts

15 0.098169476 131 acl-2013-Dual Training and Dual Prediction for Polarity Classification

16 0.093089022 43 acl-2013-Align, Disambiguate and Walk: A Unified Approach for Measuring Semantic Similarity

17 0.088655531 187 acl-2013-Identifying Opinion Subgroups in Arabic Online Discussions

18 0.086476266 278 acl-2013-Patient Experience in Online Support Forums: Modeling Interpersonal Interactions and Medication Use

19 0.086108923 91 acl-2013-Connotation Lexicon: A Dash of Sentiment Beneath the Surface Meaning

20 0.085887194 168 acl-2013-Generating Recommendation Dialogs by Extracting Information from User Reviews


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.161), (1, 0.236), (2, 0.001), (3, 0.137), (4, -0.081), (5, -0.145), (6, -0.002), (7, -0.0), (8, 0.022), (9, 0.135), (10, 0.121), (11, -0.01), (12, -0.016), (13, -0.065), (14, 0.085), (15, 0.056), (16, -0.007), (17, 0.02), (18, 0.073), (19, 0.099), (20, -0.088), (21, -0.177), (22, 0.013), (23, 0.005), (24, -0.157), (25, 0.145), (26, 0.156), (27, -0.012), (28, 0.028), (29, 0.03), (30, 0.004), (31, 0.03), (32, -0.073), (33, 0.006), (34, -0.028), (35, -0.069), (36, -0.0), (37, 0.014), (38, 0.033), (39, 0.024), (40, -0.033), (41, -0.114), (42, -0.015), (43, -0.007), (44, -0.02), (45, -0.084), (46, -0.007), (47, 0.058), (48, 0.005), (49, 0.08)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.91331321 284 acl-2013-Probabilistic Sense Sentiment Similarity through Hidden Emotions

Author: Mitra Mohtarami ; Man Lan ; Chew Lim Tan

Abstract: Sentiment Similarity of word pairs reflects the distance between the words regarding their underlying sentiments. This paper aims to infer the sentiment similarity between word pairs with respect to their senses. To achieve this aim, we propose a probabilistic emotion-based approach that is built on a hidden emotional model. The model aims to predict a vector of basic human emotions for each sense of the words. The resultant emotional vectors are then employed to infer the sentiment similarity of word pairs. We apply the proposed approach to address two main NLP tasks, namely, Indirect yes/no Question Answer Pairs inference and Sentiment Orientation prediction. Extensive experiments demonstrate the effectiveness of the proposed approach.

2 0.81876272 209 acl-2013-Joint Modeling of News Readerâ•Žs and Comment Writerâ•Žs Emotions

Author: Huanhuan Liu ; Shoushan Li ; Guodong Zhou ; Chu-ren Huang ; Peifeng Li

Abstract: Emotion classification can be generally done from both the writer’s and reader’s perspectives. In this study, we find that two foundational tasks in emotion classification, i.e., reader’s emotion classification on the news and writer’s emotion classification on the comments, are strongly related to each other in terms of coarse-grained emotion categories, i.e., negative and positive. On this basis, we propose a respective way to jointly model these two tasks. In particular, a co-training algorithm is proposed to improve semi-supervised learning of the two tasks. Experimental evaluation shows the effectiveness of our joint modeling approach.

3 0.75785959 79 acl-2013-Character-to-Character Sentiment Analysis in Shakespeare's Plays

Author: Eric T. Nalisnick ; Henry S. Baird

Abstract: We present an automatic method for analyzing sentiment dynamics between characters in plays. This literary format’s structured dialogue allows us to make assumptions about who is participating in a conversation. Once we have an idea of who a character is speaking to, the sentiment in his or her speech can be attributed accordingly, allowing us to generate lists of a character’s enemies and allies as well as pinpoint scenes critical to a character’s emotional development. Results of experiments on Shakespeare’s plays are presented along with discussion of how this work can be extended to unstructured texts (i.e. novels).

4 0.70977587 379 acl-2013-Utterance-Level Multimodal Sentiment Analysis

Author: Veronica Perez-Rosas ; Rada Mihalcea ; Louis-Philippe Morency

Abstract: During real-life interactions, people are naturally gesturing and modulating their voice to emphasize specific points or to express their emotions. With the recent growth of social websites such as YouTube, Facebook, and Amazon, video reviews are emerging as a new source of multimodal and natural opinions that has been left almost untapped by automatic opinion analysis techniques. This paper presents a method for multimodal sentiment classification, which can identify the sentiment expressed in utterance-level visual datastreams. Using a new multimodal dataset consisting of sentiment annotated utterances extracted from video reviews, we show that multimodal sentiment analysis can be effectively performed, and that the joint use of visual, acoustic, and linguistic modalities can lead to error rate reductions of up to 10.5% as compared to the best performing individual modality.

5 0.69861829 282 acl-2013-Predicting and Eliciting Addressee's Emotion in Online Dialogue

Author: Takayuki Hasegawa ; Nobuhiro Kaji ; Naoki Yoshinaga ; Masashi Toyoda

Abstract: While there have been many attempts to estimate the emotion of an addresser from her/his utterance, few studies have explored how her/his utterance affects the emotion of the addressee. This has motivated us to investigate two novel tasks: predicting the emotion of the addressee and generating a response that elicits a specific emotion in the addressee’s mind. We target Japanese Twitter posts as a source of dialogue data and automatically build training data for learning the predictors and generators. The feasibility of our approaches is assessed by using 1099 utterance-response pairs that are built by five human workers.

6 0.58014274 188 acl-2013-Identifying Sentiment Words Using an Optimization-based Model without Seed Words

7 0.50627631 318 acl-2013-Sentiment Relevance

8 0.49118307 211 acl-2013-LABR: A Large Scale Arabic Book Reviews Dataset

9 0.48845512 91 acl-2013-Connotation Lexicon: A Dash of Sentiment Beneath the Surface Meaning

10 0.48610035 117 acl-2013-Detecting Turnarounds in Sentiment Analysis: Thwarting

11 0.48315677 278 acl-2013-Patient Experience in Online Support Forums: Modeling Interpersonal Interactions and Medication Use

12 0.47169912 131 acl-2013-Dual Training and Dual Prediction for Polarity Classification

13 0.46589813 148 acl-2013-Exploring Sentiment in Social Media: Bootstrapping Subjectivity Clues from Multilingual Twitter Streams

14 0.44045776 345 acl-2013-The Haves and the Have-Nots: Leveraging Unlabelled Corpora for Sentiment Analysis

15 0.43965933 2 acl-2013-A Bayesian Model for Joint Unsupervised Induction of Sentiment, Aspect and Discourse Representations

16 0.42655247 81 acl-2013-Co-Regression for Cross-Language Review Rating Prediction

17 0.37254527 115 acl-2013-Detecting Event-Related Links and Sentiments from Social Media Texts

18 0.36506104 147 acl-2013-Exploiting Topic based Twitter Sentiment for Stock Prediction

19 0.35485119 294 acl-2013-Re-embedding words

20 0.34832323 49 acl-2013-An annotated corpus of quoted opinions in news articles


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.468), (6, 0.012), (11, 0.045), (21, 0.01), (24, 0.053), (26, 0.06), (35, 0.076), (42, 0.018), (48, 0.043), (70, 0.031), (88, 0.039), (90, 0.012), (95, 0.035)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.98424995 104 acl-2013-DKPro Similarity: An Open Source Framework for Text Similarity

Author: Daniel Bar ; Torsten Zesch ; Iryna Gurevych

Abstract: We present DKPro Similarity, an open source framework for text similarity. Our goal is to provide a comprehensive repository of text similarity measures which are implemented using standardized interfaces. DKPro Similarity comprises a wide variety of measures ranging from ones based on simple n-grams and common subsequences to high-dimensional vector comparisons and structural, stylistic, and phonetic measures. In order to promote the reproducibility of experimental results and to provide reliable, permanent experimental conditions for future studies, DKPro Similarity additionally comes with a set of full-featured experimental setups which can be run out-of-the-box and be used for future systems to built upon.

2 0.97658324 269 acl-2013-PLIS: a Probabilistic Lexical Inference System

Author: Eyal Shnarch ; Erel Segal-haLevi ; Jacob Goldberger ; Ido Dagan

Abstract: This paper presents PLIS, an open source Probabilistic Lexical Inference System which combines two functionalities: (i) a tool for integrating lexical inference knowledge from diverse resources, and (ii) a framework for scoring textual inferences based on the integrated knowledge. We provide PLIS with two probabilistic implementations of this framework. PLIS is available for download and developers of text processing applications can use it as an off-the-shelf component for injecting lexical knowledge into their applications. PLIS is easily configurable, components can be extended or replaced with user generated ones to enable system customization and further research. PLIS includes an online interactive viewer, which is a powerful tool for investigating lexical inference processes.

3 0.97004408 150 acl-2013-Extending an interoperable platform to facilitate the creation of multilingual and multimodal NLP applications

Author: Georgios Kontonatsios ; Paul Thompson ; Riza Theresa Batista-Navarro ; Claudiu Mihaila ; Ioannis Korkontzelos ; Sophia Ananiadou

Abstract: U-Compare is a UIMA-based workflow construction platform for building natural language processing (NLP) applications from heterogeneous language resources (LRs), without the need for programming skills. U-Compare has been adopted within the context of the METANET Network of Excellence, and over 40 LRs that process 15 European languages have been added to the U-Compare component library. In line with METANET’s aims of increasing communication between citizens of different European countries, U-Compare has been extended to facilitate the development of a wider range of applications, including both multilingual and multimodal workflows. The enhancements exploit the UIMA Subject of Analysis (Sofa) mechanism, which allows different facets of the input data to be represented. We demonstrate how our customised extensions to U-Compare allow the construction and testing of NLP applications that transform the input data in different ways, e.g., machine translation, automatic summarisation and text-to-speech.

4 0.96908009 12 acl-2013-A New Set of Norms for Semantic Relatedness Measures

Author: Sean Szumlanski ; Fernando Gomez ; Valerie K. Sims

Abstract: We have elicited human quantitative judgments of semantic relatedness for 122 pairs of nouns and compiled them into a new set of relatedness norms that we call Rel-122. Judgments from individual subjects in our study exhibit high average correlation to the resulting relatedness means (r = 0.77, σ = 0.09, N = 73), although not as high as Resnik’s (1995) upper bound for expected average human correlation to similarity means (r = 0.90). This suggests that human perceptions of relatedness are less strictly constrained than perceptions of similarity and establishes a clearer expectation for what constitutes human-like performance by a computational measure of semantic relatedness. We compare the results of several WordNet-based similarity and relatedness measures to our Rel-122 norms and demonstrate the limitations of WordNet for discovering general indications of semantic relatedness. We also offer a critique of the field’s reliance upon similarity norms to evaluate relatedness measures.

5 0.96350646 277 acl-2013-Part-of-speech tagging with antagonistic adversaries

Author: Anders Sgaard

Abstract: Supervised NLP tools and on-line services are often used on data that is very different from the manually annotated data used during development. The performance loss observed in such cross-domain applications is often attributed to covariate shifts, with out-of-vocabulary effects as an important subclass. Many discriminative learning algorithms are sensitive to such shifts because highly indicative features may swamp other indicative features. Regularized and adversarial learning algorithms have been proposed to be more robust against covariate shifts. We present a new perceptron learning algorithm using antagonistic adversaries and compare it to previous proposals on 12 multilin- gual cross-domain part-of-speech tagging datasets. While previous approaches do not improve on our supervised baseline, our approach is better across the board with an average 4% error reduction.

6 0.95907575 362 acl-2013-Turning on the Turbo: Fast Third-Order Non-Projective Turbo Parsers

same-paper 7 0.92773938 284 acl-2013-Probabilistic Sense Sentiment Similarity through Hidden Emotions

8 0.84576595 307 acl-2013-Scalable Decipherment for Machine Translation via Hash Sampling

9 0.82553226 118 acl-2013-Development and Analysis of NLP Pipelines in Argo

10 0.77918959 105 acl-2013-DKPro WSD: A Generalized UIMA-based Framework for Word Sense Disambiguation

11 0.75412619 51 acl-2013-AnnoMarket: An Open Cloud Platform for NLP

12 0.70421594 43 acl-2013-Align, Disambiguate and Walk: A Unified Approach for Measuring Semantic Similarity

13 0.70092785 304 acl-2013-SEMILAR: The Semantic Similarity Toolkit

14 0.6989727 239 acl-2013-Meet EDGAR, a tutoring agent at MONSERRATE

15 0.68838727 297 acl-2013-Recognizing Partial Textual Entailment

16 0.66950256 385 acl-2013-WebAnno: A Flexible, Web-based and Visually Supported System for Distributed Annotations

17 0.66623831 96 acl-2013-Creating Similarity: Lateral Thinking for Vertical Similarity Judgments

18 0.659235 237 acl-2013-Margin-based Decomposed Amortized Inference

19 0.65692461 157 acl-2013-Fast and Robust Compressive Summarization with Dual Decomposition and Multi-Task Learning

20 0.65660775 191 acl-2013-Improved Bayesian Logistic Supervised Topic Models with Data Augmentation