emnlp emnlp2013 emnlp2013-94 knowledge-graph by maker-knowledge-mining

94 emnlp-2013-Identifying Manipulated Offerings on Review Portals


Source: pdf

Author: Jiwei Li ; Myle Ott ; Claire Cardie

Abstract: Recent work has developed supervised methods for detecting deceptive opinion spam— fake reviews written to sound authentic and deliberately mislead readers. And whereas past work has focused on identifying individual fake reviews, this paper aims to identify offerings (e.g., hotels) that contain fake reviews. We introduce a semi-supervised manifold ranking algorithm for this task, which relies on a small set of labeled individual reviews for training. Then, in the absence of gold standard labels (at an offering level), we introduce a novel evaluation procedure that ranks artificial instances of real offerings, where each artificial offering contains a known number of injected deceptive reviews. Experiments on a novel dataset of hotel reviews show that the proposed method outperforms state-of-the-art learning baselines.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Abstract Recent work has developed supervised methods for detecting deceptive opinion spam— fake reviews written to sound authentic and deliberately mislead readers. [sent-3, score-1.081]

2 And whereas past work has focused on identifying individual fake reviews, this paper aims to identify offerings (e. [sent-4, score-0.259]

3 We introduce a semi-supervised manifold ranking algorithm for this task, which relies on a small set of labeled individual reviews for training. [sent-7, score-0.849]

4 Then, in the absence of gold standard labels (at an offering level), we introduce a novel evaluation procedure that ranks artificial instances of real offerings, where each artificial offering contains a known number of injected deceptive reviews. [sent-8, score-0.654]

5 Experiments on a novel dataset of hotel reviews show that the proposed method outperforms state-of-the-art learning baselines. [sent-9, score-0.786]

6 1 Introduction Consumers increasingly rely on user-generated online reviews when making purchase decisions (Cone, 2011; Ipsos, 2012). [sent-10, score-0.413]

7 Accordingly, there is growing interest in developing automatic, usually learning-based, methods to help users identify deceptive opinion spam (see Section 2). [sent-15, score-0.609]

8 Even in fully-supervised settings, however, automatic methods are imperfect at identifying individual deceptive reviews, and erroneously labeling genuine reviews as deceptive may frustrate and alienate honest reviewers. [sent-16, score-1.401]

9 An alternative approach, not yet considered in previous work, is to instead identify those product or service offerings where fake reviews appear with high probability. [sent-17, score-0.646]

10 For example, a hotel manager may post fake positive reviews to promote their own hotel, or fake negative reviews to demote a competitor’s hotel. [sent-18, score-1.405]

11 In both cases, rather than identifying these deceptive reviews individually, it may be preferable to identify the manipulated offering (i. [sent-19, score-1.074]

12 Accordingly, this paper addresses the novel task of identifying manipulated offerings, which we frame as a ranking problem, where the goal is to rank offerings by the proportion of their reviews that are believed to be deceptive. [sent-22, score-0.796]

13 We propose a novel three-layer graph model, based on manifold ranking (Zhou et al., 2003a; 2003b), to jointly model deceptive language at the offering-, review- and term-level. [sent-23, score-0.94]

14 In particular, rather than treating reviews within the same offering as independent units, there is a reinforcing relationship between offerings and reviews. [sent-24, score-0.614]

15 For example, the Federal Trade Commission (FTC) has updated their guidelines on the use of endorsements and testimonials in advertising to suggest that posting deceptive reviews may be unlawful in the United States (FTC, 2009). [sent-26, score-0.937]

16 Intuitively, and as depicted in Figure 1 for hotel offerings, we represent hotels, reviews and terms as nodes in a graph, where each hotel is connected to its reviews, and each review, in turn, is connected to the terms used within it. [sent-30, score-1.115]

17 The influence of labeled data is propagated along the graph to unlabeled data, such that a hotel is considered more deceptive if it is heavily linked with other deceptive reviews, and a review, in turn, is more deceptive if it is generated by a deceptive hotel. [sent-31, score-2.322]

18 The success of our semi-supervised approach further depends on the ability to learn patterns of truthful and deceptive reviews that generalize across reviews of different offerings. [sent-32, score-1.555]

19 This is challenging, because reviews often contain offering-specific vocabulary. [sent-33, score-0.413]

20 For example, reviews of hotels in Los Angeles are more likely to include keywords such as “beach”, “sea”, “sunshine” or “LA”, while reviews of Juneau hotels may contain “glacier”, “Juneau”, “bear” or “aurora borealis. [sent-34, score-1.37]

21 ” A hotel review might also mention the hotel’s restaurant or bar by name. [sent-35, score-0.448]

22 Accordingly, we propose a dimensionality-reduction approach, based on Latent Dirichlet Allocation (LDA) (Blei et al, 2003), to obtain a vector representation of reviews for the ranking algorithm that generalizes across reviews of different offerings. [sent-37, score-0.946]

23 Specifically, we train an LDA-based topic model to view each review as a mixture of aspect-, city-, hotel- and review-specific topics (see Section 6). [sent-38, score-0.24]
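The dimensionality-reduction idea can be sketched with standard LDA, which stands in here for the paper's RLDA variant (RLDA additionally separates background-, city-, hotel- and review-specific topics): each review becomes a low-dimensional topic-mixture vector instead of a sparse n-gram vector. The corpus and topic count below are toy values.

```python
# Sketch: represent reviews as topic mixtures so that features generalize
# across hotels and cities. Standard LDA from scikit-learn stands in for
# the paper's RLDA model; the corpus and settings are illustrative only.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

reviews = [
    "great room clean staff great location",
    "terrible service dirty room rude staff",
    "beach sunshine view wonderful stay",
    "glacier tour friendly staff amazing view",
]

counts = CountVectorizer().fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(counts)  # one topic-mixture row per review

print(theta.shape)  # (4, 3); each row is a distribution over topics
```

A downstream classifier or ranker then consumes `theta` rather than raw n-gram counts, which is what lets it transfer across cities with disjoint vocabularies.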

24 We find that, compared to models trained either on the full vocabulary, or trained on standard LDA document-topic vectors, this representation allows our models to generalize better across reviews of different offerings. [sent-44, score-0.435]

25 In particular, in the absence of gold standard offering-level labels, we introduce a novel evaluation procedure for this task, in which we rank numerous versions of each hotel, where each hotel version contains a different number of injected, known deceptive reviews. [sent-46, score-0.88]

26 Thus, we expect hotel versions with larger proportions of deceptive reviews to be ranked higher than those with smaller proportions. [sent-47, score-1.317]

27 We use the Ott et al. (2011) dataset of 800 positive (5-star) reviews of 20 Chicago hotels (400 deceptive and 400 truthful). [sent-49, score-1.188]

28 For evaluation, we construct a new FOUR-CITIES dataset, containing 40 deceptive and 40 truthful reviews for each of eight hotels in four different cities (640 reviews total), following the procedure outlined in Ott et al. (2011). [sent-50, score-1.87]

29 We find that our manifold ranking approach outperforms several state-of-the-art learning baselines on this task, including transductive Support Vector Regression. [sent-52, score-0.412]

30 We additionally apply our approach to a large-scale collection of real-world reviews from TripAdvisor and explore the resulting ranking. [sent-53, score-0.413]

31 In the sections below, we discuss related work (Section 2) and describe the datasets used in this work (Section 3), the dimensionality-reduction approach for representing reviews (Section 4), and the semi-supervised manifold ranking approach (Section 5). [sent-54, score-0.825]

32 2 Related Work A number of recent approaches have focused on identifying individual fake reviews or users who post fake reviews. [sent-56, score-0.667]

33 Yoo and Gretzel (2009) gathered 40 truthful and 42 deceptive hotel reviews and manually compared the psychologically relevant linguistic differences between them. [sent-58, score-1.471]

34 Ott et al. (2011) solicited deceptive reviews from workers on Amazon Mechanical Turk and built a dataset containing 400 deceptive and 400 truthful reviews, which they use to train and evaluate supervised SVM classifiers. [sent-62, score-1.623]

35 (2012) study spam produced by groups of fake reviewers. [sent-66, score-0.212]

36 (2013) use topic models to detect differences between deceptive and truthful topic-word distributions. [sent-68, score-0.767]

37 In contrast, in this work we aim to identify fake reviews at an offering level. [sent-69, score-0.588]

38 The manifold-ranking method (Zhou et al., 2003a; Zhou et al., 2003b) is a mutual-reinforcement ranking approach initially proposed to rank data points along their underlying manifold structure. [sent-80, score-0.471]

39 3 Dataset In this paper, we train all of our models using the CHICAGO dataset of Ott et al. (2011), which contains 20 deceptive and 20 truthful reviews from each of 20 Chicago hotels (800 reviews total). [sent-82, score-1.904]

40 (Approaches for identifying individual fake reviews may be applied to our task, for example, by averaging the review-level predictions for an offering.) [sent-83, score-0.575]

41 This dataset is unique in that it contains known (gold standard) deceptive reviews, solicited through Amazon Mechanical Turk, and is publicly available. [sent-86, score-0.481]

42 Unfortunately, the CHICAGO dataset is limited, both in size (800 reviews) and scope, in that it contains only reviews of hotels in one city: Chicago. [sent-87, score-0.707]

43 Accordingly, in order to perform a more realistic evaluation for our task, we construct a new dataset, FOUR-CITIES, that contains 40 deceptive and 40 truthful reviews from each of eight hotels in four different cities (640 reviews total). [sent-88, score-1.87]

44 We build the FOUR-CITIES dataset using the same procedure as Ott et al. (2011), by creating and dividing 320 Mechanical Turk jobs, called Human Intelligence Tasks (HITs), evenly across eight of the most popular hotels in our four chosen cities (see Table 1). [sent-89, score-0.436]

45 Each HIT presents a worker with the name of a hotel and a link to the hotel’s website. [sent-90, score-0.351]

46 Workers are asked to imagine that they work for the marketing department of the hotel and that their boss has asked them to write a fake positive review, as if they were a customer, to be posted on a travel review website. [sent-91, score-0.562]

47 Finally, we augment our deceptive FOUR-CITIES reviews with a matching set of truthful reviews from TripAdvisor by randomly sampling 40 positive (5-star) reviews for each of the eight chosen hotels. [sent-93, score-1.975]

48 While we cannot know for sure that the sampled reviews are truthful, previous work has suggested that rates of deception among popular hotels are likely to be low (Jindal and Liu, 2008; Lim et al., 2010). [sent-94, score-0.685]

49 4 Topic Models for Dimensionality Reduction As mentioned in the introduction, we want to learn patterns of truthful and deceptive reviews that apply across reviews of different offerings. (We use the dataset available at: http://www.) [sent-95, score-1.142]

50 This is challenging, however, because hotel reviews often contain specific information about the hotel or city, and it is unclear whether these features will generalize to reviews of other hotels. [sent-101, score-1.55]

51 We exclude city-, hotel- and review-specific topics, as well as the background topic. [sent-106, score-0.378]

52 2 Inference for RLDA Given the review collection, our goal is to find the most likely assignment yw (and zw if yw = 0) for each word, w, in each review. [sent-132, score-0.325]

53 Ekw, E1w, E2w, E3w and E4w denote the number of times that the word w is assigned to aspect k, the background topic, review-specific topic r, hotel-specific topic h, and city-specific topic c, respectively. [sent-137, score-0.281]

54 Manifold Ranking for Hotels In this section, we describe our ranking algorithm based on manifold ranking (Zhou et al., 2003a; Zhou et al., 2003b) that tries to jointly model deceptive language at the hotel-, review- and term-level. [sent-143, score-1.013]

55 EHR, ERR and ERT respectively denote the edges between hotels and reviews, between reviews and reviews, and between reviews and terms. [sent-147, score-1.511]

56 The normalized matrix between reviews, DRR ∈ R^(NR×NR), is calculated as follows: DRR = M^(−1/2) · P · M^(−1/2) (5). sim(Ri, wj) denotes the similarity between review Ri and term wj, and is the conditional probability of word wj given review Ri. [sent-152, score-0.77]
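The symmetric normalization in Eq. (5) can be sketched in NumPy. One assumption is made explicit: P is taken to be a symmetric review-review affinity matrix and M its diagonal degree matrix (M_ii = sum_j P_ij), a common choice for this normalization; this excerpt does not state the paper's exact definition of M.

```python
import numpy as np

# Symmetric normalization D = M^(-1/2) * P * M^(-1/2), as in Eq. (5).
# Assumption: M is the diagonal degree matrix of the affinity matrix P.
P = np.array([[0.0, 0.8, 0.2],
              [0.8, 0.0, 0.5],
              [0.2, 0.5, 0.0]])

m_inv_sqrt = np.diag(1.0 / np.sqrt(P.sum(axis=1)))
D = m_inv_sqrt @ P @ m_inv_sqrt

print(np.allclose(D, D.T))  # True: the normalization preserves symmetry
```

Symmetry matters because it keeps the propagation step in the ranking iteration well behaved (the normalized matrix has spectral radius at most 1).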

57 Input: the hotel set VD, review set VR, term set VT, and the normalized transition probability matrices DHR, DRR, DRH, DRT, DTT, DTR. [sent-164, score-0.475]

58 Initialization: set the scores of labeled reviews to +1 or −1 and all other (unlabeled) reviews to 0: SR0 = [+1, −1, ..., 0, ...]. [sent-167, score-0.85]

59 2 Reinforcement Ranking Based on the Manifold Method Based on the set of labeled reviews, nodes for truthful reviews (positive) are initialized with a high score (+1) and nodes for deceptive reviews with a low score (−1). [sent-195, score-1.144]

60 Let SH, SR and ST denote the ranking scores of hotels, reviews and terms, which are updated during each iteration, by propagating scores along the normalized transition matrices (DHR, DRH, DRR, DRT, DTR, DTT), until convergence. [sent-197, score-0.533]
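The iterative propagation with clamped labels can be sketched on a toy two-layer graph (reviews and terms only; the full model adds the hotel layer). The damping factor alpha, the matrix names, and the unweighted update form are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

# Toy two-layer sketch of the manifold-ranking iteration: labeled review
# scores are clamped to +/-1 and propagated between reviews and terms via
# row-normalized transition matrices until convergence. alpha and the
# update form are assumptions for illustration.
rng = np.random.default_rng(0)
n_reviews, n_terms = 6, 10

W = rng.random((n_reviews, n_terms))         # review-term affinities
D_rt = W / W.sum(axis=1, keepdims=True)      # review <- term propagation
D_tr = W.T / W.T.sum(axis=1, keepdims=True)  # term <- review propagation

labels = np.array([1.0, -1.0, 0.0, 0.0, 0.0, 0.0])  # two labeled reviews
is_labeled = labels != 0
alpha = 0.8

S_r = labels.copy()
for _ in range(100):
    S_t = D_tr @ S_r                  # score each term from its reviews
    S_r = alpha * (D_rt @ S_t)        # score each review from its terms
    S_r[is_labeled] = labels[is_labeled]  # clamp labeled reviews

print(S_r)  # labeled entries stay at +1 / -1; others settle in between
```

After convergence, unlabeled reviews carry scores between the two clamped extremes, and a hotel-layer score would be obtained by propagating one step further through a hotel-review matrix.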

61 (The scores of labeled reviews are fixed to +1 or −1.) [sent-207, score-0.437]

62 Here, we will directly evaluate the effectiveness of RLDA by comparing the performance of binary deceptive vs. truthful classifiers. [sent-209, score-0.481]

63 Second, we train a two-layer manifold classifier, which is a simplified version of the model presented in Section 5. [sent-215, score-0.292]

64 In this model, the graph consists of only review and term layers, and the score of a labeled review is fixed to +1 or −1 in each iteration. [sent-216, score-0.268]

65 After convergence, reviews with scores greater than 0 are classified as truthful, and those with scores less than 0 as deceptive. [sent-217, score-0.413]

66 We find that SVM and MANIFOLD are comparable in all six conditions, and not surprisingly, perform best when evaluated on reviews from the two Chicago hotels in our FOUR-CITIES data. [sent-222, score-0.685]

67 However, the N-GRAM and LDA feature sets perform much worse than RLDA when evaluation is performed on reviews from the other three (non-Chicago) cities. [sent-223, score-0.413]

68 This confirms that classifiers trained on n-gram features overfit to the training data (CHICAGO) and do not generalize well to reviews from other cities. [sent-224, score-0.455]

69 7 Identifying Manipulated Hotels In this section, we evaluate the performance of our manifold ranking approach (see Section 5) on the task of identifying manipulated hotels. [sent-226, score-0.531]

70 We consider several baseline ranking approaches to compare to our manifold ranking approach. [sent-228, score-0.532]

71 Like the manifold ranking approach, the baselines also employ both the CHICAGO dataset (labeled) and FOUR-CITIES dataset (without labels). [sent-229, score-0.456]

72 Topic number is set. While we have not investigated the effects of unlabeled data in detail, providing additional unlabeled data (beyond the test set) boosts the manifold ranking performances reported below by 1-2%. [sent-231, score-0.412]

73 Results correspond to evaluation on reviews for the two Chicago hotels from FOUR-CITIES and on the non-Chicago FOUR-CITIES reviews (six hotels). [sent-245, score-1.098]

74 Each baseline makes review-level predictions and then ranks each hotel by the average of those predictions. [sent-247, score-0.351]
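The baseline ranking step, averaging review-level predictions per hotel, can be sketched directly. The hotel ids and prediction scores below are made up; scores near −1 mean a review is predicted deceptive, so hotels with the lowest mean rank as most manipulated.

```python
from collections import defaultdict

# Sketch of the baseline: rank hotels by the mean of their review-level
# predictions. Hotel ids and scores are hypothetical toy values.
preds = [("hotel_a", 0.9), ("hotel_a", -0.8),
         ("hotel_b", 0.7), ("hotel_b", 0.6),
         ("hotel_c", -0.9), ("hotel_c", -0.4)]

scores = defaultdict(list)
for hotel, s in preds:
    scores[hotel].append(s)

# Ascending mean score = most manipulated first.
ranking = sorted(scores, key=lambda h: sum(scores[h]) / len(scores[h]))
print(ranking)  # ['hotel_c', 'hotel_a', 'hotel_b']
```

This treats a hotel's reviews as independent units, which is exactly the assumption the manifold model relaxes.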

75 For example, the first version of a hotel will have 40 truthful and 0 deceptive reviews, the second version 39 truthful and 1 deceptive, and the 41st version 0 truthful and 40 deceptive. [sent-258, score-1.51]

76 In total, we generate 41 × 8 = 328 versions of hotel reviews. [sent-259, score-0.379]

77 We expect versions with larger proportions of deceptive reviews to receive lower scores by the ranking models (i. [sent-260, score-1.039]

78 In particular, NDCG rewards rankings with the most relevant results at the top positions (Liu, 2009), which is also our objective, namely, to rank versions that have higher proportions of deceptive reviews nearer to the top. [sent-265, score-0.947]

79 Let R(m) denote the relevance score of the mth ranked hotel version. [sent-266, score-0.37]

80 We define the ideal ranking according to the proportion of deceptive reviews in each version, and report NDCG scores for the Nth ranked hotel versions (N = 8 to 328), at intervals of 8 (to account for ties among the eight hotels). [sent-268, score-1.466]
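The NDCG@N computation can be sketched as follows. The gain R(m) is taken here to be the proportion of injected deceptive reviews in the version ranked at position m, and the standard log2 discount is assumed; the paper's exact gain and discount are not shown in this excerpt.

```python
import math

# Sketch of NDCG@N for the injected-review evaluation. Relevance values
# and the log2 discount are illustrative assumptions.
def dcg(rels, n):
    return sum(r / math.log2(m + 2) for m, r in enumerate(rels[:n]))

def ndcg(rels_in_predicted_order, n):
    ideal = sorted(rels_in_predicted_order, reverse=True)
    return dcg(rels_in_predicted_order, n) / dcg(ideal, n)

# Five toy versions; relevance = injected-deceptive proportion, listed in
# the order the model ranked them.
predicted = [0.75, 1.0, 0.25, 0.5, 0.0]
print(round(ndcg(predicted, 3), 3))  # 0.874
```

A perfect ranking (relevances already in descending order) scores 1.0, which is why NDCG rewards placing the most heavily manipulated versions at the top.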

81 trained on limited data, to generalize to reviews of different hotels in different locations. [sent-275, score-0.707]

82 We also find that approaches that model a reinforcing relationship between hotels and their reviews are better than approaches that model reviews as independent units, e. [sent-276, score-1.119]

83 This confirms our intuition that a hotel is more deceptive if it is connected with many deceptive reviews and, in turn, a review is more deceptive if it is from a deceptive hotel. [sent-281, score-2.372]

84 8 Qualitative Evaluation We now present qualitative evaluations for the RLDA topic model and the manifold ranking model. [sent-282, score-0.472]

85 Table 3 gives the top words for four aspect topics and four city-specific topics in the RLDA topic model; Table 4 gives the highest and lowest ranking term weights in our three-layer manifold model. [sent-284, score-0.715]

86 By comparing the first row of topics in Table 3, corresponding to aspect topics, to the top words in Table 4, we observe that the learned topics relate to truthful and deceptive classes. [sent-285, score-0.942]

87 For example, Topics 1 and 4 share many terms with the top truthful terms in the manifold model, e. [sent-286, score-0.518]

88 Similarly, Topics 2 and 7 share many terms with the top deceptive terms in the manifold model, e. [sent-289, score-0.773]

89 The top row presents topic words from four aspect topics (K = 10) and the bottom row presents top words from four city-specific topics. [sent-293, score-0.231]

90 Finally, we apply our ranking model to a large-scale collection of real-world reviews from TripAdvisor. [sent-301, score-0.533]

91 We crawl 878,561 reviews from 3,945 hotels in 25 US cities from TripAdvisor excluding all non-5-star reviews and removing hotels with fewer than 100 reviews. [sent-302, score-1.406]

92 In the end, we collect 244,810 reviews from 838 hotels. [sent-303, score-0.413]
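The corpus filtering described above (keep only 5-star reviews, then drop hotels with fewer than 100 remaining reviews) can be sketched with toy records; the field names and counts below are hypothetical, not the actual TripAdvisor crawl.

```python
from collections import Counter

# Sketch of the corpus filtering: keep only 5-star reviews, then drop
# hotels with fewer than 100 remaining reviews. Toy data, hypothetical
# field names.
reviews = ([{"hotel": "A", "stars": 5}] * 120
           + [{"hotel": "A", "stars": 3}] * 10
           + [{"hotel": "B", "stars": 5}] * 40)

five_star = [r for r in reviews if r["stars"] == 5]
per_hotel = Counter(r["hotel"] for r in five_star)
kept = [r for r in five_star if per_hotel[r["hotel"]] >= 100]

print(len(kept), sorted({r["hotel"] for r in kept}))  # 120 ['A']
```

Hotel B is dropped because only 40 of its reviews survive the 5-star filter, mirroring how the 878,561 crawled reviews reduce to 244,810 reviews from 838 hotels.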

93 We apply our manifold ranking model and rank all 838 hotels. [sent-304, score-0.412]

94 First, we present a histogram of the resulting manifold ranking scores in Figure 6. [sent-305, score-0.412]

95 04, which in our quantitative evaluation (Section 7) corresponded to a hotel with 34 truthful and 6 deceptive reviews. [sent-307, score-1.058]

96 These results suggest that the majority of reviews in TripAdvisor are truthful, in line with previous findings by Ott et al. [sent-308, score-0.413]

97 Next, we note that previous work has hypothesized that deceptive reviews are more likely to be posted by first-time review writers, or singleton reviewers (Ott et al, 2011; Wu et al, 2011). [sent-310, score-1.02]

98 Accordingly, if this hypothesis were valid, then manipulated hotels would have an above-average proportion of singleton reviews. [sent-311, score-0.419]

99 Noting that lower scores correspond to a higher predicted proportion of deceptive reviews, we observe that hotels ranked as more deceptive by our model have, on average, much higher proportions of singleton reviews than hotels ranked as less deceptive. [sent-313, score-1.142]

100 9 Conclusion We study the problem of identifying manipulated offerings on review portals and propose a novel three-layer graph model, based on manifold ranking, for ranking offerings by the proportion of their reviews expected to be instances of deceptive opinion spam. [sent-314, score-2.001]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('deceptive', 0.481), ('reviews', 0.413), ('hotel', 0.351), ('manifold', 0.292), ('hotels', 0.272), ('truthful', 0.226), ('rlda', 0.166), ('ranking', 0.12), ('offerings', 0.119), ('ott', 0.118), ('fake', 0.114), ('spam', 0.098), ('review', 0.097), ('manipulated', 0.093), ('yw', 0.083), ('topics', 0.083), ('al', 0.077), ('tripadvisor', 0.071), ('wj', 0.068), ('multi', 0.066), ('lda', 0.064), ('zw', 0.062), ('offering', 0.061), ('topic', 0.06), ('rj', 0.058), ('chicago', 0.057), ('sim', 0.053), ('aspect', 0.05), ('ddrraaww', 0.048), ('dir', 0.047), ('dimensionality', 0.047), ('jindal', 0.039), ('ndcg', 0.038), ('cities', 0.036), ('ri', 0.036), ('ftc', 0.036), ('iiff', 0.036), ('ndcgn', 0.036), ('reinforcement', 0.035), ('accordingly', 0.034), ('vr', 0.033), ('zhou', 0.031), ('injected', 0.031), ('drr', 0.031), ('opinion', 0.03), ('eight', 0.029), ('singleton', 0.029), ('ffoorr', 0.028), ('eeaacchh', 0.028), ('cardie', 0.028), ('versions', 0.028), ('draw', 0.027), ('background', 0.027), ('term', 0.027), ('myle', 0.026), ('identifying', 0.026), ('proportions', 0.025), ('proportion', 0.025), ('labeled', 0.024), ('chirita', 0.024), ('cityspecific', 0.024), ('deliberately', 0.024), ('dengyong', 0.024), ('drt', 0.024), ('ehr', 0.024), ('jiwei', 0.024), ('juneau', 0.024), ('manifoldranking', 0.024), ('myleott', 0.024), ('srk', 0.024), ('testimonials', 0.024), ('threelayer', 0.024), ('bing', 0.023), ('graph', 0.023), ('wan', 0.023), ('kx', 0.023), ('gibbs', 0.022), ('generalize', 0.022), ('claire', 0.022), ('dataset', 0.022), ('reduction', 0.021), ('nitin', 0.021), ('corne', 0.021), ('shk', 0.021), ('jianwu', 0.021), ('err', 0.021), ('reinforcing', 0.021), ('vh', 0.021), ('yoo', 0.021), ('ww', 0.02), ('classifiers', 0.02), ('liu', 0.02), ('hi', 0.02), ('absence', 0.02), ('blei', 0.02), ('ranked', 0.019), ('row', 0.019), ('portals', 0.019), ('posting', 0.019), ('authentic', 0.019)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000007 94 emnlp-2013-Identifying Manipulated Offerings on Review Portals

Author: Jiwei Li ; Myle Ott ; Claire Cardie

Abstract: Recent work has developed supervised methods for detecting deceptive opinion spam— fake reviews written to sound authentic and deliberately mislead readers. And whereas past work has focused on identifying individual fake reviews, this paper aims to identify offerings (e.g., hotels) that contain fake reviews. We introduce a semi-supervised manifold ranking algorithm for this task, which relies on a small set of labeled individual reviews for training. Then, in the absence of gold standard labels (at an offering level), we introduce a novel evaluation procedure that ranks artificial instances of real offerings, where each artificial offering contains a known number of injected deceptive reviews. Experiments on a novel dataset of hotel reviews show that the proposed method outperforms state-of-the-art learning baselines.

2 0.20929959 202 emnlp-2013-Where Not to Eat? Improving Public Policy by Predicting Hygiene Inspections Using Online Reviews

Author: Jun Seok Kang ; Polina Kuznetsova ; Michael Luca ; Yejin Choi

Abstract: This paper offers an approach for governments to harness the information contained in social media in order to make public inspections and disclosure more efficient. As a case study, we turn to restaurant hygiene inspections which are done for restaurants throughout the United States and in most of the world and are a frequently cited example of public inspections and disclosure. We present the first empirical study that shows the viability of statistical models that learn the mapping between textual signals in restaurant reviews and the hygiene inspection records from the Department of Public Health. The learned model achieves over 82% accuracy in discriminating severe offenders from places with no violation, and provides insights into salient cues in reviews that are indicative of the restaurant's sanitary conditions. Our study suggests that public disclosure policy can be improved by mining public opinions from social media to target inspections and to provide alternative forms of disclosure to customers.

3 0.10974138 77 emnlp-2013-Exploiting Domain Knowledge in Aspect Extraction

Author: Zhiyuan Chen ; Arjun Mukherjee ; Bing Liu ; Meichun Hsu ; Malu Castellanos ; Riddhiman Ghosh

Abstract: Aspect extraction is one of the key tasks in sentiment analysis. In recent years, statistical models have been used for the task. However, such models without any domain knowledge often produce aspects that are not interpretable in applications. To tackle the issue, some knowledge-based topic models have been proposed, which allow the user to input some prior domain knowledge to generate coherent aspects. However, existing knowledge-based topic models have several major shortcomings, e.g., little work has been done to incorporate the cannot-link type of knowledge or to automatically adjust the number of topics based on domain knowledge. This paper proposes a more advanced topic model, called MC-LDA (LDA with m-set and c-set), to address these problems, which is based on an Extended generalized Pólya urn (E-GPU) model (which is also proposed in this paper). Experiments on real-life product reviews from a variety of domains show that MCLDA outperforms the existing state-of-the-art models markedly.

4 0.098488718 95 emnlp-2013-Identifying Multiple Userids of the Same Author

Author: Tieyun Qian ; Bing Liu

Abstract: This paper studies the problem of identifying users who use multiple userids to post in social media. Since multiple userids may belong to the same author, it is hard to directly apply supervised learning to solve the problem. This paper proposes a new method, which still uses supervised learning but does not require training documents from the involved userids. Instead, it uses documents from other userids for classifier building. The classifier can be applied to documents of the involved userids. This is possible because we transform the document space to a similarity space and learning is performed in this new space. Our evaluation is done in the online review domain. The experimental results using a large number of userids and their reviews show that the proposed method is highly effective. 1

5 0.07075315 169 emnlp-2013-Semi-Supervised Representation Learning for Cross-Lingual Text Classification

Author: Min Xiao ; Yuhong Guo

Abstract: Cross-lingual adaptation aims to learn a prediction model in a label-scarce target language by exploiting labeled data from a labelrich source language. An effective crosslingual adaptation system can substantially reduce the manual annotation effort required in many natural language processing tasks. In this paper, we propose a new cross-lingual adaptation approach for document classification based on learning cross-lingual discriminative distributed representations of words. Specifically, we propose to maximize the loglikelihood of the documents from both language domains under a cross-lingual logbilinear document model, while minimizing the prediction log-losses of labeled documents. We conduct extensive experiments on cross-lingual sentiment classification tasks of Amazon product reviews. Our experimental results demonstrate the efficacy of the pro- posed cross-lingual adaptation approach.

6 0.063732438 120 emnlp-2013-Learning Latent Word Representations for Domain Adaptation using Supervised Word Clustering

7 0.060387172 29 emnlp-2013-Automatic Domain Partitioning for Multi-Domain Learning

8 0.059446316 99 emnlp-2013-Implicit Feature Detection via a Constrained Topic Model and SVM

9 0.058752902 147 emnlp-2013-Optimized Event Storyline Generation based on Mixture-Event-Aspect Model

10 0.058250956 63 emnlp-2013-Discourse Level Explanatory Relation Extraction from Product Reviews Using First-Order Logic

11 0.054774191 121 emnlp-2013-Learning Topics and Positions from Debatepedia

12 0.050893348 148 emnlp-2013-Orthonormal Explicit Topic Analysis for Cross-Lingual Document Matching

13 0.049611773 47 emnlp-2013-Collective Opinion Target Extraction in Chinese Microblogs

14 0.046340536 62 emnlp-2013-Detection of Product Comparisons - How Far Does an Out-of-the-Box Semantic Role Labeling System Take You?

15 0.045677401 100 emnlp-2013-Improvements to the Bayesian Topic N-Gram Models

16 0.043935668 16 emnlp-2013-A Unified Model for Topics, Events and Users on Twitter

17 0.042656578 184 emnlp-2013-This Text Has the Scent of Starbucks: A Laplacian Structured Sparsity Model for Computational Branding Analytics

18 0.041978545 7 emnlp-2013-A Hierarchical Entity-Based Approach to Structuralize User Generated Content in Social Media: A Case of Yahoo! Answers

19 0.037884675 109 emnlp-2013-Is Twitter A Better Corpus for Measuring Sentiment Similarity?

20 0.037122294 188 emnlp-2013-Tree Kernel-based Negation and Speculation Scope Detection with Structured Syntactic Parse Features


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.118), (1, 0.056), (2, -0.091), (3, -0.024), (4, 0.045), (5, -0.013), (6, 0.061), (7, 0.035), (8, 0.009), (9, -0.072), (10, -0.06), (11, -0.137), (12, -0.109), (13, 0.009), (14, 0.075), (15, 0.114), (16, 0.004), (17, -0.03), (18, -0.07), (19, -0.14), (20, 0.009), (21, 0.011), (22, -0.043), (23, -0.007), (24, -0.068), (25, 0.303), (26, -0.079), (27, 0.065), (28, -0.135), (29, 0.024), (30, -0.016), (31, -0.075), (32, 0.023), (33, 0.055), (34, -0.053), (35, 0.037), (36, 0.034), (37, 0.079), (38, 0.038), (39, -0.274), (40, 0.173), (41, 0.146), (42, -0.168), (43, 0.012), (44, 0.096), (45, -0.083), (46, -0.032), (47, -0.036), (48, -0.018), (49, -0.126)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96493948 94 emnlp-2013-Identifying Manipulated Offerings on Review Portals

Author: Jiwei Li ; Myle Ott ; Claire Cardie

Abstract: Recent work has developed supervised methods for detecting deceptive opinion spam— fake reviews written to sound authentic and deliberately mislead readers. And whereas past work has focused on identifying individual fake reviews, this paper aims to identify offerings (e.g., hotels) that contain fake reviews. We introduce a semi-supervised manifold ranking algorithm for this task, which relies on a small set of labeled individual reviews for training. Then, in the absence of gold standard labels (at an offering level), we introduce a novel evaluation procedure that ranks artificial instances of real offerings, where each artificial offering contains a known number of injected deceptive reviews. Experiments on a novel dataset of hotel reviews show that the proposed method outperforms state-of-the-art learning baselines.

2 0.92090911 202 emnlp-2013-Where Not to Eat? Improving Public Policy by Predicting Hygiene Inspections Using Online Reviews

Author: Jun Seok Kang ; Polina Kuznetsova ; Michael Luca ; Yejin Choi

Abstract: This paper offers an approach for governments to harness the information contained in social media in order to make public inspections and disclosure more efficient. As a case study, we turn to restaurant hygiene inspections which are done for restaurants throughout the United States and in most of the world and are a frequently cited example of public inspections and disclosure. We present the first empirical study that shows the viability of statistical models that learn the mapping between textual signals in restaurant reviews and the hygiene inspection records from the Department of Public Health. The learned model achieves over 82% accuracy in discriminating severe offenders from places with no violation, and provides insights into salient cues in reviews that are indicative of the restaurant's sanitary conditions. Our study suggests that public disclosure policy can be improved by mining public opinions from social media to target inspections and to provide alternative forms of disclosure to customers.

3 0.43527031 95 emnlp-2013-Identifying Multiple Userids of the Same Author

Author: Tieyun Qian ; Bing Liu

Abstract: This paper studies the problem of identifying users who use multiple userids to post in social media. Since multiple userids may belong to the same author, it is hard to directly apply supervised learning to solve the problem. This paper proposes a new method, which still uses supervised learning but does not require training documents from the involved userids. Instead, it uses documents from other userids to build the classifier. The classifier can then be applied to documents of the involved userids. This is possible because we transform the document space into a similarity space, and learning is performed in this new space. Our evaluation is done in the online review domain. The experimental results, using a large number of userids and their reviews, show that the proposed method is highly effective.
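The core trick described here — mapping document pairs into a similarity space so that a classifier trained on pairs from *other* authors transfers to the userids under suspicion — can be sketched as follows. The feature channels, helper names, and toy documents are illustrative assumptions; the paper's actual feature set is richer.

```python
from collections import Counter
import math

def cosine(c1, c2):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(c1[w] * c2[w] for w in c1)
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def pair_features(doc_a, doc_b):
    """Map a document PAIR into similarity space: one value per channel.
    Two illustrative channels: word-overlap cosine and length ratio."""
    ca, cb = Counter(doc_a.split()), Counter(doc_b.split())
    len_ratio = min(len(doc_a), len(doc_b)) / max(len(doc_a), len(doc_b))
    return [cosine(ca, cb), len_ratio]

# Training pairs come from OTHER userids with known authorship: label 1 if a
# pair is same-author, 0 otherwise; a standard binary classifier trained on
# these similarity vectors is then applied to pairs of suspect userids.
feats_same = pair_features("great room clean staff", "clean room great view")
feats_diff = pair_features("great room clean staff", "terrible food slow service")
```

The point of the transformation is that the features describe the *relation* between two documents rather than either author's identity, which is why no training data from the involved userids is needed.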

4 0.40751421 77 emnlp-2013-Exploiting Domain Knowledge in Aspect Extraction

Author: Zhiyuan Chen ; Arjun Mukherjee ; Bing Liu ; Meichun Hsu ; Malu Castellanos ; Riddhiman Ghosh

Abstract: Aspect extraction is one of the key tasks in sentiment analysis. In recent years, statistical models have been used for the task. However, such models without any domain knowledge often produce aspects that are not interpretable in applications. To tackle the issue, some knowledge-based topic models have been proposed, which allow the user to input some prior domain knowledge to generate coherent aspects. However, existing knowledge-based topic models have several major shortcomings, e.g., little work has been done to incorporate the cannot-link type of knowledge or to automatically adjust the number of topics based on domain knowledge. This paper proposes a more advanced topic model, called MC-LDA (LDA with m-set and c-set), to address these problems, which is based on an Extended generalized Pólya urn (E-GPU) model (which is also proposed in this paper). Experiments on real-life product reviews from a variety of domains show that MC-LDA outperforms the existing state-of-the-art models markedly.

5 0.38610446 184 emnlp-2013-This Text Has the Scent of Starbucks: A Laplacian Structured Sparsity Model for Computational Branding Analytics

Author: William Yang Wang ; Edward Lin ; John Kominek

Abstract: We propose a Laplacian structured sparsity model to study computational branding analytics. To do this, we collected customer reviews from Starbucks, Dunkin' Donuts, and other coffee shops across 38 major cities in the Midwestern and Northeastern regions of the USA. We study brand-related language use through these reviews, with a focus on brand satisfaction and gender factors. In particular, we perform three tasks: automatic brand identification from raw text, joint brand-satisfaction prediction, and joint brand-gender-satisfaction prediction. This work extends previous studies in text classification by incorporating the dependency and interaction among local features in the form of structured sparsity in a log-linear model. Our quantitative evaluation shows that our approach, which combines the advantages of graphical modeling and sparsity modeling techniques, significantly outperforms various standard and state-of-the-art text classification algorithms. In addition, qualitative analysis of our model reveals important features of the language use associated with specific brands.

6 0.30503023 100 emnlp-2013-Improvements to the Bayesian Topic N-Gram Models

7 0.30093241 63 emnlp-2013-Discourse Level Explanatory Relation Extraction from Product Reviews Using First-Order Logic

8 0.27867332 147 emnlp-2013-Optimized Event Storyline Generation based on Mixture-Event-Aspect Model

9 0.2515384 199 emnlp-2013-Using Topic Modeling to Improve Prediction of Neuroticism and Depression in College Students

10 0.24902868 131 emnlp-2013-Mining New Business Opportunities: Identifying Trend related Products by Leveraging Commercial Intents from Microblogs

11 0.24695712 29 emnlp-2013-Automatic Domain Partitioning for Multi-Domain Learning

12 0.24512008 28 emnlp-2013-Automated Essay Scoring by Maximizing Human-Machine Agreement

13 0.22632796 188 emnlp-2013-Tree Kernel-based Negation and Speculation Scope Detection with Structured Syntactic Parse Features

14 0.22243606 99 emnlp-2013-Implicit Feature Detection via a Constrained Topic Model and SVM

15 0.22020781 200 emnlp-2013-Well-Argued Recommendation: Adaptive Models Based on Words in Recommender Systems

16 0.2120882 178 emnlp-2013-Success with Style: Using Writing Style to Predict the Success of Novels

17 0.21047866 189 emnlp-2013-Two-Stage Method for Large-Scale Acquisition of Contradiction Pattern Pairs using Entailment

18 0.20459807 203 emnlp-2013-With Blinkers on: Robust Prediction of Eye Movements across Readers

19 0.20128575 144 emnlp-2013-Opinion Mining in Newspaper Articles by Entropy-Based Word Connections

20 0.2001335 191 emnlp-2013-Understanding and Quantifying Creativity in Lexical Composition


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(3, 0.02), (9, 0.014), (18, 0.037), (22, 0.061), (30, 0.051), (40, 0.387), (51, 0.153), (66, 0.037), (71, 0.028), (75, 0.041), (90, 0.021), (96, 0.022)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.73780233 94 emnlp-2013-Identifying Manipulated Offerings on Review Portals

Author: Jiwei Li ; Myle Ott ; Claire Cardie

Abstract: Recent work has developed supervised methods for detecting deceptive opinion spam: fake reviews written to sound authentic and deliberately mislead readers. Whereas past work has focused on identifying individual fake reviews, this paper aims to identify offerings (e.g., hotels) that contain fake reviews. We introduce a semi-supervised manifold ranking algorithm for this task, which relies on a small set of labeled individual reviews for training. Then, in the absence of gold-standard labels (at the offering level), we introduce a novel evaluation procedure that ranks artificial instances of real offerings, where each artificial offering contains a known number of injected deceptive reviews. Experiments on a novel dataset of hotel reviews show that the proposed method outperforms state-of-the-art learning baselines.

2 0.65536636 2 emnlp-2013-A Convex Alternative to IBM Model 2

Author: Andrei Simion ; Michael Collins ; Cliff Stein

Abstract: The IBM translation models have been hugely influential in statistical machine translation; they are the basis of the alignment models used in modern translation systems. Excluding IBM Model 1, the IBM translation models, and practically all variants proposed in the literature, have relied on the optimization of likelihood functions or similar functions that are non-convex, and hence have multiple local optima. In this paper we introduce a convex relaxation of IBM Model 2, and describe an optimization algorithm for the relaxation based on a subgradient method combined with exponentiated-gradient updates. Our approach gives the same level of alignment accuracy as IBM Model 2.
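The exponentiated-gradient updates mentioned in this abstract are attractive for alignment models because the multiplicative update followed by renormalization keeps the parameters on the probability simplex automatically. Below is a generic EG step applied to a toy convex objective, not the paper's actual relaxation of IBM Model 2; the objective, step size, and iteration count are illustrative assumptions.

```python
import math

def eg_step(p, grad, eta=0.5):
    """One exponentiated-gradient step: multiplicative update + renormalize,
    which keeps p on the probability simplex (as alignment parameters must be)."""
    w = [pi * math.exp(-eta * g) for pi, g in zip(p, grad)]
    z = sum(w)
    return [wi / z for wi in w]

# Toy convex objective on the simplex: f(p) = ||p - t||^2 with interior target t,
# so EG should converge to p = t while every iterate remains a distribution.
t = [0.7, 0.2, 0.1]
p = [1 / 3] * 3
for _ in range(200):
    grad = [2 * (pi - ti) for pi, ti in zip(p, t)]
    p = eg_step(p, grad)
```

In the paper's setting the gradient (or subgradient) comes from the convex relaxation's objective rather than a squared distance, but the simplex-preserving update has the same shape.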

3 0.43090701 77 emnlp-2013-Exploiting Domain Knowledge in Aspect Extraction

Author: Zhiyuan Chen ; Arjun Mukherjee ; Bing Liu ; Meichun Hsu ; Malu Castellanos ; Riddhiman Ghosh

Abstract: Aspect extraction is one of the key tasks in sentiment analysis. In recent years, statistical models have been used for the task. However, such models without any domain knowledge often produce aspects that are not interpretable in applications. To tackle the issue, some knowledge-based topic models have been proposed, which allow the user to input some prior domain knowledge to generate coherent aspects. However, existing knowledge-based topic models have several major shortcomings, e.g., little work has been done to incorporate the cannot-link type of knowledge or to automatically adjust the number of topics based on domain knowledge. This paper proposes a more advanced topic model, called MC-LDA (LDA with m-set and c-set), to address these problems, which is based on an Extended generalized Pólya urn (E-GPU) model (which is also proposed in this paper). Experiments on real-life product reviews from a variety of domains show that MC-LDA outperforms the existing state-of-the-art models markedly.

4 0.43052399 51 emnlp-2013-Connecting Language and Knowledge Bases with Embedding Models for Relation Extraction

Author: Jason Weston ; Antoine Bordes ; Oksana Yakhnenko ; Nicolas Usunier

Abstract: This paper proposes a novel approach for relation extraction from free text which is trained to jointly use information from the text and from existing knowledge. Our model is based on scoring functions that operate by learning low-dimensional embeddings of words, entities and relationships from a knowledge base. We empirically show on New York Times articles aligned with Freebase relations that our approach is able to efficiently use the extra information provided by a large subset of Freebase data (4M entities, 23k relationships) to improve over methods that rely on text features alone.
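One common instantiation of embedding-based scoring for (entity, relation, entity) triples is a translation-style score, where a relation acts as a vector offset between its head and tail entities. The paper's exact scoring function may differ, and the two-dimensional vectors below are hand-set purely to make the idea visible; in practice all embeddings are learned from the knowledge base and aligned text.

```python
import numpy as np

# Hypothetical tiny vocabulary; entities and relations share one vector space.
emb = {
    "Paris":      np.array([1.0, 0.0]),
    "Berlin":     np.array([0.0, 0.0]),
    "France":     np.array([1.0, 1.0]),
    "capital_of": np.array([0.0, 1.0]),  # relation as a translation vector
}

def score(head, rel, tail):
    """Higher is better: negative distance between head + relation and tail."""
    return -np.linalg.norm(emb[head] + emb[rel] - emb[tail])

good = score("Paris", "capital_of", "France")    # head + relation lands on tail
bad = score("Berlin", "capital_of", "France")    # mismatched triple scores lower
```

Training then pushes observed triples to score above corrupted ones (e.g. via a ranking loss), which is what lets the extra Freebase triples inform the text-based extractor.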

5 0.43000227 48 emnlp-2013-Collective Personal Profile Summarization with Social Networks

Author: Zhongqing Wang ; Shoushan LI ; Fang Kong ; Guodong Zhou

Abstract: Personal profile information on social media like LinkedIn.com and Facebook.com is at the core of many interesting applications, such as talent recommendation and contextual advertising. However, personal profiles are usually poorly organized, given the large amount of available information, so it is always a challenge for people to find the desired information in them. In this paper, we address the task of personal profile summarization by leveraging both personal profile textual information and social networks. Here, the use of social networks is motivated by the intuition that people with similar academic, business or social connections (e.g. co-major, co-university, and co-corporation) tend to have similar experience and summaries. To carry out the learning process, we propose a collective factor graph (CoFG) model that incorporates all these sources of knowledge to summarize personal profiles, with local textual attribute functions and social connection factors. Extensive evaluation on a large-scale dataset from LinkedIn.com demonstrates the effectiveness of the proposed approach.

6 0.42984629 179 emnlp-2013-Summarizing Complex Events: a Cross-Modal Solution of Storylines Extraction and Reconstruction

7 0.42707327 152 emnlp-2013-Predicting the Presence of Discourse Connectives

8 0.42468041 64 emnlp-2013-Discriminative Improvements to Distributional Sentence Similarity

9 0.42335156 202 emnlp-2013-Where Not to Eat? Improving Public Policy by Predicting Hygiene Inspections Using Online Reviews

10 0.42244828 82 emnlp-2013-Exploring Representations from Unlabeled Data with Co-training for Chinese Word Segmentation

11 0.42180219 154 emnlp-2013-Prior Disambiguation of Word Tensors for Constructing Sentence Vectors

12 0.42175466 47 emnlp-2013-Collective Opinion Target Extraction in Chinese Microblogs

13 0.42128143 79 emnlp-2013-Exploiting Multiple Sources for Open-Domain Hypernym Discovery

14 0.42120746 76 emnlp-2013-Exploiting Discourse Analysis for Article-Wide Temporal Classification

15 0.42042586 99 emnlp-2013-Implicit Feature Detection via a Constrained Topic Model and SVM

16 0.42012703 80 emnlp-2013-Exploiting Zero Pronouns to Improve Chinese Coreference Resolution

17 0.41938245 69 emnlp-2013-Efficient Collective Entity Linking with Stacking

18 0.4192912 110 emnlp-2013-Joint Bootstrapping of Corpus Annotations and Entity Types

19 0.41907507 132 emnlp-2013-Mining Scientific Terms and their Definitions: A Study of the ACL Anthology

20 0.41896158 53 emnlp-2013-Cross-Lingual Discriminative Learning of Sequence Models with Posterior Regularization