acl acl2012 acl2012-190 knowledge-graph by maker-knowledge-mining

190 acl-2012-Syntactic Stylometry for Deception Detection


Source: pdf

Author: Song Feng ; Ritwik Banerjee ; Yejin Choi

Abstract: Most previous studies in computerized deception detection have relied only on shallow lexico-syntactic patterns. This paper investigates syntactic stylometry for deception detection, adding a somewhat unconventional angle to prior literature. Over four different datasets spanning from the product review to the essay domain, we demonstrate that features driven from Context Free Grammar (CFG) parse trees consistently improve the detection performance over several baselines that are based only on shallow lexico-syntactic features. Our results improve the best published result on the hotel review data (Ott et al., 2011) reaching 91.2% accuracy with 14% error reduction. ,

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 This paper investigates syntactic stylometry for deception detection, adding a somewhat unconventional angle to prior literature. [sent-2, score-0.817]

2 Over four different datasets spanning from the product review to the essay domain, we demonstrate that features driven from Context Free Grammar (CFG) parse trees consistently improve the detection performance over several baselines that are based only on shallow lexico-syntactic features. [sent-3, score-0.845]

3 Our results improve the best published result on the hotel review data (Ott et al. [sent-4, score-0.18]

4 1 Introduction Previous studies in computerized deception detection have relied only on shallow lexico-syntactic cues. [sent-7, score-0.664]

5 (2007)) , while some recent ones explored the use of machine learning techniques using simple lexico-syntactic patterns, such as n-grams and part-of-speech (POS) tags (Mihalcea and Strapparava (2009) , Ott et al. [sent-13, score-0.037]

6 These previous studies unveil interesting correlations between certain lexical items or categories and deception that may not be readily apparent to human judges. [sent-15, score-0.362]

7 (2011) in the hotel review domain results in very insightful observations: [sent-17, score-0.18]

8 deceptive reviewers tend to use verbs and personal pronouns (e. [sent-19, score-0.06]

9 g., "I", "my") more often, while truthful reviewers tend to use more of nouns, adjectives, prepositions. [sent-21, score-0.691]

10 In parallel to these shallow lexical patterns, might there be deep syntactic structures that are lurking in deceptive writing? [sent-22, score-0.708]

11 This paper investigates syntactic stylometry for deception detection, adding a somewhat unconventional angle to prior literature. [sent-23, score-0.817]

12 Over four different datasets spanning from the product review domain to the essay domain, we find that features driven from Context Free Grammar (CFG) parse trees consistently improve the detection performance over several baselines that are based only on shallow lexico-syntactic features. [sent-24, score-0.845]

13 Our results improve the best published result on the hotel review data of Ott et al. [sent-25, score-0.18]

14 We also achieve substantial improvement over the essay data of Mihalcea and Strapparava (2009), obtaining up to 85. [sent-28, score-0.228]

15 2 Four Datasets To explore different types of deceptive writing, we consider the following four datasets spanning from the product review to the essay domain: I. [sent-30, score-0.888]

16 (2011), this dataset contains 400 truthful reviews obtained from www.tripadvisor.com [sent-32, score-0.475]

17 and 400 deceptive reviews gathered using Amazon Mechanical Turk, evenly distributed across 20 Chicago hotels. [sent-34, score-0.652]

18 Figure 1: Parsed trees (Deceptive and Truthful examples from the TripAdvisor–Gold and TripAdvisor–Heuristic datasets). II. [sent-37, score-0.04]

19 TripAdvisor—Heuristic: This dataset contains 400 truthful and 400 deceptive reviews harvested from www.tripadvisor.com, [sent-38, score-0.97]

20 based on fake review detection heuristics introduced in Feng et al. [sent-40, score-0.357]

21 Yelp: This dataset is our own creation using www.yelp.com. [sent-43, score-0.086]

22 We collect 400 filtered reviews and 400 displayed reviews for 35 Italian restaurants with average ratings in the range of [3. [sent-46, score-0.474]

23 Class labels are based on the meta data, which tells us whether each review is filtered by Yelp’s automated review filtering system or not. [sent-49, score-0.311]

24 We expect that filtered reviews roughly correspond to deceptive reviews, and displayed reviews to truthful ones, but not without considerable noise. [sent-50, score-1.105]

25 We only collect 5-star reviews to avoid unwanted noise from varying ratings. (Footnote 1:) Specifically, using the notation of Feng et al. [sent-51, score-0.253]

26 (2012) , data created by Strategy-distΦ heuristic, with HS , S as deceptive and HS0 , T as truthful. [sent-52, score-0.447]

27 Essays: Introduced in Mihalcea and Strapparava (2009), this corpus contains truthful and deceptive essays collected using Amazon Mechanical Turk for the following three topics: "Abortion" (100 essays per class), "Best Friend" (98 essays per class), and "Death Penalty" (98 essays per class). [sent-55, score-1.055]

28 3 Feature Encoding Words Previous work has shown that bag-of-words features are effective in detecting domain-specific deception (Ott et al. [sent-56, score-0.444]

29 Shallow Syntax As has been used in many previous studies in stylometry (e. [sent-59, score-0.151]

30 (1998) , Zhao and Zobel (2007)) , we utilize part-of-speech (POS) tags to encode shallow syntactic information. [sent-62, score-0.201]

31 (2011) found that even though POS tags are effective in detecting fake product reviews, they are not as effective as words. [sent-64, score-0.307]
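
As a rough, hedged illustration (not the authors' code): the "words" and "shallow syntax" feature sets described above can be reproduced as simple count vectors over word unigrams and over POS tags. The choice of NLTK's tagger and scikit-learn's CountVectorizer below is our own assumption.

```python
# Hedged sketch of the "words" and "shallow syntax" feature sets:
# unigram counts plus part-of-speech (POS) tag counts.
# Requires the NLTK 'punkt' and 'averaged_perceptron_tagger' data packages.
import nltk
from sklearn.feature_extraction.text import CountVectorizer

reviews = [
    "The room was spacious and the staff were friendly.",
    "My husband and I will definitely return to this wonderful hotel!",
]

# Bag-of-words: unigram counts over lowercased tokens.
word_vec = CountVectorizer(lowercase=True)
X_words = word_vec.fit_transform(reviews)

# Shallow syntax: replace each token by its POS tag, then count the tags.
def pos_sequence(text):
    tokens = nltk.word_tokenize(text)
    return " ".join(tag for _, tag in nltk.pos_tag(tokens))

pos_docs = [pos_sequence(r) for r in reviews]
pos_vec = CountVectorizer(token_pattern=r"\S+", lowercase=False)  # keep tags like PRP$
X_pos = pos_vec.fit_transform(pos_docs)

print(X_words.shape, X_pos.shape)
```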

32 Deep syntax We experiment with four different encodings of production rules based on the Probabilistic Context Free Grammar (PCFG) parse trees as follows: • r: unlexicalized production rules (i. [sent-66, score-0.676]

33 e., all production rules except for those with terminal nodes); • r∗: lexicalized production rules (i. [sent-68, score-0.19]

34 • rˆ: unlexicalized production rules combined with the grandparent node; • rˆ∗: lexicalized production rules (i. [sent-78, score-0.19]

35 e., all production rules) combined with the grandparent node, e. [sent-85, score-0.238]
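
The four production-rule encodings can be illustrated with a short, hedged sketch over an NLTK-style constituency tree. The paper parsed sentences with the Berkeley parser; the bracketed example tree and the helper function below are our own invention, not the authors' code.

```python
# Sketch of the four PCFG production-rule encodings (r, r*, r^, r^*)
# over an NLTK-style constituency tree; the example tree is invented.
from nltk import Tree

def rule_features(tree_str, lexicalized=False, grandparent=False):
    """Return production-rule feature strings for one parsed sentence."""
    tree = Tree.fromstring(tree_str)
    feats = []

    def visit(node, parent_label="ROOT"):
        if not isinstance(node, Tree):
            return
        children = [child.label() if isinstance(child, Tree) else child
                    for child in node]
        is_lexical = all(not isinstance(child, Tree) for child in node)
        if is_lexical and not lexicalized:
            return  # unlexicalized encodings drop rules with terminal nodes
        lhs = node.label()
        if grandparent:
            # annotate the left-hand side with its parent,
            # i.e. the grandparent of the rule's children
            lhs = f"{lhs}^{parent_label}"
        feats.append(f"{lhs} -> {' '.join(children)}")
        for child in node:
            visit(child, node.label())

    visit(tree)
    return feats

example = "(S (NP (PRP I)) (VP (VBD loved) (NP (DT this) (NN hotel))))"
print(rule_features(example))                                        # r
print(rule_features(example, lexicalized=True))                      # r*
print(rule_features(example, grandparent=True))                      # r^
print(rule_features(example, lexicalized=True, grandparent=True))    # r^*
```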

36 4 Experimental Results For all classification tasks, we use an SVM classifier, with 80% of the data for training and 20% for testing, under 5-fold cross-validation. [sent-88, score-0.029]

37 We use the Berkeley PCFG parser (Petrov and Klein, 2007) to parse sentences. [sent-90, score-0.045]
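
A minimal sketch of this evaluation setup follows, assuming scikit-learn's LinearSVC as a stand-in for the SVM and toy documents in place of the real 800-review datasets; 5-fold cross-validation corresponds to the 80%/20% train/test rotation mentioned above.

```python
# Hedged sketch of the classification setup: count features -> linear SVM,
# evaluated with 5-fold cross-validation (each fold: 80% train, 20% test).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins; in practice each document would be a review's words,
# POS sequence, or production-rule strings from the sketch above.
docs = ["my husband and I loved this amazing hotel"] * 5 + \
       ["the bathroom floor was cold and the bed was small"] * 5
labels = [1] * 5 + [0] * 5  # 1 = deceptive, 0 = truthful

clf = make_pipeline(CountVectorizer(), LinearSVC())
scores = cross_val_score(clf, docs, labels, cv=5)
print("mean accuracy:", scores.mean())
```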

38 Table 2 presents the classification performance using various features across four different datasets introduced earlier. [sent-91, score-0.111]

39 1 TripAdvisor–Gold We first discuss the results for the TripAdvisor– Gold dataset shown in Table 2. [sent-93, score-0.044]

40 (2011), bag-of-words features achieve surprisingly high performance, reaching up to 89. [sent-95, score-0.19]

41 Deep syntactic features, encoded as rˆ∗, slightly improve this performance, achieving 90. [sent-97, score-0.104]

42 When these syntactic features are combined with unigram features, we attain the best performance of 91. [sent-99, score-0.135]

43 Given the power of word-based features, one might wonder whether the PCFG-driven features are useful only due to their lexical production rules. [sent-105, score-0.261]

44 To address such doubts, we include experiments with unlexicalized rules, r and rˆ. [sent-106, score-0.051]

45 8% accuracy respectively, which are significantly higher than that of a random baseline (∼50. [sent-109, score-0.027]

46 0%), confirming statistical differences in deep syntactic structures. [sent-110, score-0.164]

47 Another question one might have is whether the performance gain of PCFG features is mostly from local sequences of POS tags, indirectly encoded in the production rules. [sent-113, score-0.259]

48 Comparing the performance of [shallow syntax+words] and [deep syntax+words] in Table 2, we find statistical evidence that deep syntax-based features offer information that is not available in simple POS sequences. [sent-114, score-0.187]

49 The sig- [Table 3: Most discriminative phrasal tags in PCFG parse trees: TripAdvisor data (Deceptive and Truthful columns for TripAdvisor–Gold and TripAdvisor–Heuristic).] [sent-118, score-0.137]

50 nificance of these results comes from the fact that these two datasets consist of real (fake) reviews in the wild, rather than manufactured ones that might invite unwanted signals that can unexpectedly help with classification accuracy. [sent-119, score-0.36]

51 In sum, these results indicate the existence of statistical signals hidden in deep syntax even in real product reviews with noisy gold standards. [sent-120, score-0.516]

52 3 Essay Finally, the last dataset in Table 2, Essay, confirms similar trends again: the deep syntactic features consistently improve the performance over several baselines based only on shallow lexico-syntactic features. [sent-122, score-0.398]

53 The final results, reaching accuracy as high as 85%, substantially outperform what has been previously reported in Mihalcea and Strapparava (2009) . [sent-123, score-0.094]

54 How robust are the syntactic cues in the cross topic setting? [sent-124, score-0.135]

55 Table 4 compares the results of Mihalcea and Strapparava (2009) and ours, demonstrating that syntactic features achieve substantially and surprisingly more robust results. [sent-125, score-0.13]

56 4 Discriminative Production Rules To give more concrete insights, we provide the 10 most discriminative unlexicalized production rules (augmented with the grandparent node) for each class in Table 1. [sent-127, score-0.39]

57 We order the rules based on the feature weights assigned by the LIBLINEAR classifier. [sent-128, score-0.051]
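
Reading off the most discriminative rules from a linear model's per-feature weights could look like the following hedged sketch; scikit-learn's LinearSVC is used here as a stand-in for LIBLINEAR, and the rule strings and labels are toy examples, not the paper's data.

```python
# Hedged sketch: rank production-rule features by linear-classifier weight.
# Large positive weights lean toward one class, large negative toward the other.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Each "document" is the space-joined rule features of one review (toy data);
# rule right-hand sides are joined with "_" so every rule is a single token.
docs = ["SBAR^NP->S NP^VP->NP_SBAR", "SBAR^NP->S VP->VBD_NP",
        "NP->DT_NN VP->VBZ_ADJP", "NP->DT_NN PP->IN_NP"] * 5
labels = [1, 1, 0, 0] * 5  # 1 = deceptive, 0 = truthful

vec = CountVectorizer(token_pattern=r"\S+", lowercase=False)
X = vec.fit_transform(docs)
clf = LinearSVC().fit(X, labels)

names = np.array(vec.get_feature_names_out())
order = np.argsort(clf.coef_[0])          # ascending: most "truthful" first
print("truthful-leaning rules:", names[order[:3]])
print("deceptive-leaning rules:", names[order[-3:]])
```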

58 Notice that the two production rules in bold, [SBAR^NP → S] and [NP^VP → NP SBAR], are parts of the parse tree shown in Figure 1, whose sentence is taken from an actual fake review. [sent-129, score-0.432]

59 Table 3 shows the most discriminative phrasal tags in the PCFG parse [sent-130, score-0.137]

60 00 Table 4: Cross topic deception detection accuracy: Essay data trees for each class. [sent-136, score-0.492]

61 Interestingly, we find more frequent use of VP, SBAR (clause introduced by a subordinating conjunction), and WHADVP in deceptive reviews than in truthful reviews. [sent-137, score-0.866]

62 5 Related Work Much of the previous work for detecting deceptive product reviews focused on related, but slightly different problems, e. [sent-138, score-0.802]

63 (2010)) due to notable difficulty in obtaining gold standard labels. [sent-145, score-0.056]

64 4 The Yelp data we explored in this work shares a similar spirit in that gold standard labels are harvested from existing meta data, which are not guaranteed to align well with true hidden labels as to deceptive v. [sent-146, score-0.593]

65 Two previous studies obtained more precise gold standard labels by hiring Amazon turkers to write deceptive articles (e. [sent-149, score-0.503]

66 (2011)) , both of which have been examined in this study with respect to their syntactic characteristics. [sent-152, score-0.067]

67 Although we are not aware of any prior work that dealt with syntactic cues in deceptive writing directly, prior work on hedge detection (e. [sent-153, score-0.678]

68 6 Conclusion We investigated syntactic stylometry for deception detection, adding a somewhat unconventional angle to previous studies. [sent-157, score-0.778]

69 Experimental results consistently find statistical evidence of deep syntactic patterns that are helpful in discriminating deceptive writing. [sent-158, score-0.645]

70 (Footnote 4) It is not possible for a human judge to tell with full confidence whether a given review is a fake or not. [sent-159, score-0.237]

71 On lying and being lied to: A linguistic analysis of deception in computer-mediated communication. [sent-195, score-0.362]

72 Exploiting rich features for detecting hedges and their scope. [sent-212, score-0.114]

73 The lie detector: Explorations in the automatic recognition of deceptive language. [sent-224, score-0.447]

74 Finding deceptive opinion spam by any stretch of the imagination. [sent-235, score-0.492]

75 Cues to deception and ability to detect lies as a function of police interview styles. [sent-264, score-0.362]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('deceptive', 0.447), ('deception', 0.362), ('ott', 0.246), ('tripadvisor', 0.211), ('reviews', 0.205), ('production', 0.19), ('truthful', 0.184), ('essay', 0.168), ('strapparava', 0.161), ('stylometry', 0.151), ('mihalcea', 0.142), ('jindal', 0.131), ('yelp', 0.121), ('fake', 0.12), ('review', 0.117), ('essays', 0.106), ('shallow', 0.097), ('deep', 0.097), ('nitin', 0.096), ('detection', 0.09), ('pcfg', 0.088), ('detecting', 0.082), ('unconventional', 0.079), ('angle', 0.072), ('sbar', 0.072), ('product', 0.068), ('reaching', 0.067), ('syntactic', 0.067), ('hotel', 0.063), ('greene', 0.06), ('hancock', 0.06), ('upto', 0.06), ('viewers', 0.06), ('vrij', 0.06), ('syntax', 0.058), ('gold', 0.056), ('liblinear', 0.056), ('feng', 0.054), ('computerized', 0.053), ('brook', 0.053), ('stony', 0.053), ('pennebaker', 0.053), ('unlexicalized', 0.051), ('bing', 0.051), ('rules', 0.051), ('datasets', 0.049), ('cm', 0.048), ('grandparent', 0.048), ('harvested', 0.048), ('unwanted', 0.048), ('somewhat', 0.047), ('amazon', 0.045), ('spam', 0.045), ('parse', 0.045), ('dataset', 0.044), ('meta', 0.042), ('www', 0.042), ('mukherjee', 0.04), ('prp', 0.04), ('yejin', 0.04), ('trees', 0.04), ('cues', 0.039), ('driven', 0.039), ('spanning', 0.039), ('class', 0.039), ('investigates', 0.039), ('encoded', 0.037), ('tags', 0.037), ('unigram', 0.036), ('relied', 0.036), ('filtered', 0.035), ('writing', 0.035), ('ny', 0.035), ('consistently', 0.034), ('cfg', 0.033), ('free', 0.033), ('pos', 0.033), ('choi', 0.033), ('features', 0.032), ('fan', 0.032), ('lim', 0.032), ('turk', 0.032), ('signals', 0.032), ('np', 0.031), ('surprisingly', 0.031), ('concrete', 0.031), ('introduced', 0.03), ('cross', 0.029), ('displayed', 0.029), ('discriminative', 0.028), ('baselines', 0.027), ('heuristic', 0.027), ('phrasal', 0.027), ('accuracy', 0.027), ('invite', 0.026), ('kristen', 0.026), ('ihs', 0.026), ('footprints', 0.026), ('uhn', 0.026), ('lexicosyntactic', 0.026)]
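
For context, the per-word tf-idf weights above and the "similar papers" rankings below are consistent with a standard tf-idf plus cosine-similarity pipeline; the sketch below is our own reconstruction with toy paper texts, not the site's actual code.

```python
# Hedged sketch of "similar papers computed by tfidf model":
# tf-idf vectors per paper, ranked by cosine similarity to the query paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = {  # toy stand-ins for full paper texts
    "acl-2012-190": "syntactic stylometry deception detection cfg parse trees reviews",
    "acl-2012-144": "modeling review comments reader evaluations of reviews",
    "acl-2012-182": "mining refinements to online instructions from user reviews",
}
ids = list(papers)
X = TfidfVectorizer().fit_transform(papers.values())

query = ids.index("acl-2012-190")
sims = cosine_similarity(X[query], X).ravel()
for pid, score in sorted(zip(ids, sims), key=lambda p: -p[1]):
    print(f"{score:.3f}  {pid}")
```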

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9999997 190 acl-2012-Syntactic Stylometry for Deception Detection

Author: Song Feng ; Ritwik Banerjee ; Yejin Choi

Abstract: Most previous studies in computerized deception detection have relied only on shallow lexico-syntactic patterns. This paper investigates syntactic stylometry for deception detection, adding a somewhat unconventional angle to prior literature. Over four different datasets spanning from the product review to the essay domain, we demonstrate that features driven from Context Free Grammar (CFG) parse trees consistently improve the detection performance over several baselines that are based only on shallow lexico-syntactic features. Our results improve the best published result on the hotel review data (Ott et al., 2011) reaching 91.2% accuracy with 14% error reduction. ,

2 0.12070784 144 acl-2012-Modeling Review Comments

Author: Arjun Mukherjee ; Bing Liu

Abstract: Writing comments about news articles, blogs, or reviews have become a popular activity in social media. In this paper, we analyze reader comments about reviews. Analyzing review comments is important because reviews only tell the experiences and evaluations of reviewers about the reviewed products or services. Comments, on the other hand, are readers’ evaluations of reviews, their questions and concerns. Clearly, the information in comments is valuable for both future readers and brands. This paper proposes two latent variable models to simultaneously model and extract these key pieces of information. The results also enable classification of comments accurately. Experiments using Amazon review comments demonstrate the effectiveness of the proposed models.

3 0.095876411 182 acl-2012-Spice it up? Mining Refinements to Online Instructions from User Generated Content

Author: Gregory Druck ; Bo Pang

Abstract: There are a growing number of popular web sites where users submit and review instructions for completing tasks as varied as building a table and baking a pie. In addition to providing their subjective evaluation, reviewers often provide actionable refinements. These refinements clarify, correct, improve, or provide alternatives to the original instructions. However, identifying and reading all relevant reviews is a daunting task for a user. In this paper, we propose a generative model that jointly identifies user-proposed refinements in instruction reviews at multiple granularities, and aligns them to the appropriate steps in the original instructions. Labeled data is not readily available for these tasks, so we focus on the unsupervised setting. In experiments in the recipe domain, our model provides 90. 1% F1 for predicting refinements at the review level, and 77.0% F1 for predicting refinement segments within reviews.

4 0.073526815 8 acl-2012-A Corpus of Textual Revisions in Second Language Writing

Author: John Lee ; Jonathan Webster

Abstract: This paper describes the creation of the first large-scale corpus containing drafts and final versions of essays written by non-native speakers, with the sentences aligned across different versions. Furthermore, the sentences in the drafts are annotated with comments from teachers. The corpus is intended to support research on textual revision by language learners, and how it is influenced by feedback. This corpus has been converted into an XML format conforming to the standards of the Text Encoding Initiative (TEI).

5 0.069112562 100 acl-2012-Fine Granular Aspect Analysis using Latent Structural Models

Author: Lei Fang ; Minlie Huang

Abstract: In this paper, we present a structural learning model forjoint sentiment classification and aspect analysis of text at various levels of granularity. Our model aims to identify highly informative sentences that are aspect-specific in online custom reviews. The primary advantages of our model are two-fold: first, it performs document-level and sentence-level sentiment polarity classification jointly; second, it is able to find informative sentences that are closely related to some respects in a review, which may be helpful for aspect-level sentiment analysis such as aspect-oriented summarization. The proposed method was evaluated with 9,000 Chinese restaurant reviews. Preliminary experiments demonstrate that our model obtains promising performance. 1

6 0.059464727 127 acl-2012-Large-Scale Syntactic Language Modeling with Treelets

7 0.054634001 108 acl-2012-Hierarchical Chunk-to-String Translation

8 0.051189926 37 acl-2012-Baselines and Bigrams: Simple, Good Sentiment and Topic Classification

9 0.049725518 185 acl-2012-Strong Lexicalization of Tree Adjoining Grammars

10 0.047200043 154 acl-2012-Native Language Detection with Tree Substitution Grammars

11 0.04679824 109 acl-2012-Higher-order Constituent Parsing and Parser Combination

12 0.042993486 88 acl-2012-Exploiting Social Information in Grounded Language Learning via Grammatical Reduction

13 0.040931329 5 acl-2012-A Comparison of Chinese Parsers for Stanford Dependencies

14 0.040146083 115 acl-2012-Identifying High-Impact Sub-Structures for Convolution Kernels in Document-level Sentiment Classification

15 0.039992627 193 acl-2012-Text-level Discourse Parsing with Rich Linguistic Features

16 0.037516128 186 acl-2012-Structuring E-Commerce Inventory

17 0.037275847 28 acl-2012-Aspect Extraction through Semi-Supervised Modeling

18 0.037049659 214 acl-2012-Verb Classification using Distributional Similarity in Syntactic and Semantic Structures

19 0.036393628 151 acl-2012-Multilingual Subjectivity and Sentiment Analysis

20 0.036291398 38 acl-2012-Bayesian Symbol-Refined Tree Substitution Grammars for Syntactic Parsing


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.13), (1, 0.048), (2, -0.005), (3, -0.063), (4, -0.033), (5, 0.008), (6, -0.014), (7, 0.02), (8, -0.071), (9, 0.011), (10, -0.015), (11, -0.009), (12, -0.061), (13, 0.024), (14, -0.02), (15, -0.049), (16, 0.026), (17, -0.063), (18, -0.016), (19, -0.053), (20, 0.011), (21, 0.024), (22, -0.031), (23, 0.07), (24, -0.045), (25, -0.042), (26, -0.004), (27, -0.043), (28, 0.066), (29, -0.006), (30, -0.029), (31, -0.121), (32, -0.087), (33, 0.044), (34, 0.018), (35, 0.033), (36, -0.17), (37, 0.101), (38, 0.036), (39, -0.002), (40, 0.111), (41, 0.317), (42, 0.005), (43, 0.01), (44, -0.001), (45, 0.072), (46, -0.019), (47, -0.101), (48, 0.153), (49, -0.154)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9291203 190 acl-2012-Syntactic Stylometry for Deception Detection

Author: Song Feng ; Ritwik Banerjee ; Yejin Choi

Abstract: Most previous studies in computerized deception detection have relied only on shallow lexico-syntactic patterns. This paper investigates syntactic stylometry for deception detection, adding a somewhat unconventional angle to prior literature. Over four different datasets spanning from the product review to the essay domain, we demonstrate that features driven from Context Free Grammar (CFG) parse trees consistently improve the detection performance over several baselines that are based only on shallow lexico-syntactic features. Our results improve the best published result on the hotel review data (Ott et al., 2011) reaching 91.2% accuracy with 14% error reduction. ,

2 0.67193687 182 acl-2012-Spice it up? Mining Refinements to Online Instructions from User Generated Content

Author: Gregory Druck ; Bo Pang

Abstract: There are a growing number of popular web sites where users submit and review instructions for completing tasks as varied as building a table and baking a pie. In addition to providing their subjective evaluation, reviewers often provide actionable refinements. These refinements clarify, correct, improve, or provide alternatives to the original instructions. However, identifying and reading all relevant reviews is a daunting task for a user. In this paper, we propose a generative model that jointly identifies user-proposed refinements in instruction reviews at multiple granularities, and aligns them to the appropriate steps in the original instructions. Labeled data is not readily available for these tasks, so we focus on the unsupervised setting. In experiments in the recipe domain, our model provides 90. 1% F1 for predicting refinements at the review level, and 77.0% F1 for predicting refinement segments within reviews.

3 0.6079545 144 acl-2012-Modeling Review Comments

Author: Arjun Mukherjee ; Bing Liu

Abstract: Writing comments about news articles, blogs, or reviews have become a popular activity in social media. In this paper, we analyze reader comments about reviews. Analyzing review comments is important because reviews only tell the experiences and evaluations of reviewers about the reviewed products or services. Comments, on the other hand, are readers’ evaluations of reviews, their questions and concerns. Clearly, the information in comments is valuable for both future readers and brands. This paper proposes two latent variable models to simultaneously model and extract these key pieces of information. The results also enable classification of comments accurately. Experiments using Amazon review comments demonstrate the effectiveness of the proposed models.

4 0.52970511 8 acl-2012-A Corpus of Textual Revisions in Second Language Writing

Author: John Lee ; Jonathan Webster

Abstract: This paper describes the creation of the first large-scale corpus containing drafts and final versions of essays written by non-native speakers, with the sentences aligned across different versions. Furthermore, the sentences in the drafts are annotated with comments from teachers. The corpus is intended to support research on textual revision by language learners, and how it is influenced by feedback. This corpus has been converted into an XML format conforming to the standards of the Text Encoding Initiative (TEI).

5 0.42262709 156 acl-2012-Online Plagiarized Detection Through Exploiting Lexical, Syntax, and Semantic Information

Author: Wan-Yu Lin ; Nanyun Peng ; Chun-Chao Yen ; Shou-de Lin

Abstract: In this paper, we introduce a framework that identifies online plagiarism by exploiting lexical, syntactic and semantic features that includes duplication-gram, reordering and alignment of words, POS and phrase tags, and semantic similarity of sentences. We establish an ensemble framework to combine the predictions of each model. Results demonstrate that our system can not only find considerable amount of real-world online plagiarism cases but also outperforms several state-of-the-art algorithms and commercial software. Keywords Plagiarism Detection, Lexical, Syntactic, Semantic 1.

6 0.33829752 200 acl-2012-Toward Automatically Assembling Hittite-Language Cuneiform Tablet Fragments into Larger Texts

7 0.32970676 28 acl-2012-Aspect Extraction through Semi-Supervised Modeling

8 0.32172233 37 acl-2012-Baselines and Bigrams: Simple, Good Sentiment and Topic Classification

9 0.30761135 185 acl-2012-Strong Lexicalization of Tree Adjoining Grammars

10 0.30697513 108 acl-2012-Hierarchical Chunk-to-String Translation

11 0.29408187 100 acl-2012-Fine Granular Aspect Analysis using Latent Structural Models

12 0.28821367 186 acl-2012-Structuring E-Commerce Inventory

13 0.2802037 129 acl-2012-Learning High-Level Planning from Text

14 0.27873909 92 acl-2012-FLOW: A First-Language-Oriented Writing Assistant System

15 0.27834699 127 acl-2012-Large-Scale Syntactic Language Modeling with Treelets

16 0.24607393 110 acl-2012-Historical Analysis of Legal Opinions with a Sparse Mixed-Effects Latent Variable Model

17 0.24435648 15 acl-2012-A Meta Learning Approach to Grammatical Error Correction

18 0.2382386 195 acl-2012-The Creation of a Corpus of English Metalanguage

19 0.23478554 77 acl-2012-Ecological Evaluation of Persuasive Messages Using Google AdWords

20 0.23461205 76 acl-2012-Distributional Semantics in Technicolor


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(25, 0.017), (26, 0.037), (28, 0.037), (29, 0.334), (30, 0.069), (37, 0.032), (39, 0.07), (59, 0.023), (74, 0.022), (82, 0.033), (84, 0.03), (85, 0.032), (90, 0.098), (92, 0.042), (94, 0.02), (99, 0.034)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.71857935 190 acl-2012-Syntactic Stylometry for Deception Detection

Author: Song Feng ; Ritwik Banerjee ; Yejin Choi

Abstract: Most previous studies in computerized deception detection have relied only on shallow lexico-syntactic patterns. This paper investigates syntactic stylometry for deception detection, adding a somewhat unconventional angle to prior literature. Over four different datasets spanning from the product review to the essay domain, we demonstrate that features driven from Context Free Grammar (CFG) parse trees consistently improve the detection performance over several baselines that are based only on shallow lexico-syntactic features. Our results improve the best published result on the hotel review data (Ott et al., 2011) reaching 91.2% accuracy with 14% error reduction. ,

2 0.63733512 121 acl-2012-Iterative Viterbi A* Algorithm for K-Best Sequential Decoding

Author: Zhiheng Huang ; Yi Chang ; Bo Long ; Jean-Francois Crespo ; Anlei Dong ; Sathiya Keerthi ; Su-Lin Wu

Abstract: Sequential modeling has been widely used in a variety of important applications including named entity recognition and shallow parsing. However, as more and more real time large-scale tagging applications arise, decoding speed has become a bottleneck for existing sequential tagging algorithms. In this paper we propose 1-best A*, 1-best iterative A*, k-best A* and k-best iterative Viterbi A* algorithms for sequential decoding. We show the efficiency of these proposed algorithms for five NLP tagging tasks. In particular, we show that iterative Viterbi A* decoding can be several times or orders of magnitude faster than the state-of-the-art algorithm for tagging tasks with a large number of labels. This algorithm makes real-time large-scale tagging applications with thousands of labels feasible.

3 0.41913241 75 acl-2012-Discriminative Strategies to Integrate Multiword Expression Recognition and Parsing

Author: Matthieu Constant ; Anthony Sigogne ; Patrick Watrin

Abstract: and Parsing Anthony Sigogne Universit e´ Paris-Est LIGM, CNRS France s igogne @univ-mlv . fr Patrick Watrin Universit e´ de Louvain CENTAL Belgium pat rick .wat rin @ ucl ouvain .be view, their incorporation has also been considered The integration of multiword expressions in a parsing procedure has been shown to improve accuracy in an artificial context where such expressions have been perfectly pre-identified. This paper evaluates two empirical strategies to integrate multiword units in a real constituency parsing context and shows that the results are not as promising as has sometimes been suggested. Firstly, we show that pregrouping multiword expressions before parsing with a state-of-the-art recognizer improves multiword recognition accuracy and unlabeled attachment score. However, it has no statistically significant impact in terms of F-score as incorrect multiword expression recognition has important side effects on parsing. Secondly, integrating multiword expressions in the parser grammar followed by a reranker specific to such expressions slightly improves all evaluation metrics.

4 0.4103187 28 acl-2012-Aspect Extraction through Semi-Supervised Modeling

Author: Arjun Mukherjee ; Bing Liu

Abstract: Aspect extraction is a central problem in sentiment analysis. Current methods either extract aspects without categorizing them, or extract and categorize them using unsupervised topic modeling. By categorizing, we mean the synonymous aspects should be clustered into the same category. In this paper, we solve the problem in a different setting where the user provides some seed words for a few aspect categories and the model extracts and clusters aspect terms into categories simultaneously. This setting is important because categorizing aspects is a subjective task. For different application purposes, different categorizations may be needed. Some form of user guidance is desired. In this paper, we propose two statistical models to solve this seeded problem, which aim to discover exactly what the user wants. Our experimental results show that the two proposed models are indeed able to perform the task effectively. 1

5 0.41021064 206 acl-2012-UWN: A Large Multilingual Lexical Knowledge Base

Author: Gerard de Melo ; Gerhard Weikum

Abstract: We present UWN, a large multilingual lexical knowledge base that describes the meanings and relationships of words in over 200 languages. This paper explains how link prediction, information integration and taxonomy induction methods have been used to build UWN based on WordNet and extend it with millions of named entities from Wikipedia. We additionally introduce extensions to cover lexical relationships, frame-semantic knowledge, and language data. An online interface provides human access to the data, while a software API enables applications to look up over 16 million words and names.

6 0.41005194 80 acl-2012-Efficient Tree-based Approximation for Entailment Graph Learning

7 0.40935841 144 acl-2012-Modeling Review Comments

8 0.40889013 187 acl-2012-Subgroup Detection in Ideological Discussions

9 0.4077886 19 acl-2012-A Ranking-based Approach to Word Reordering for Statistical Machine Translation

10 0.40667063 148 acl-2012-Modified Distortion Matrices for Phrase-Based Statistical Machine Translation

11 0.40462703 102 acl-2012-Genre Independent Subgroup Detection in Online Discussion Threads: A Study of Implicit Attitude using Textual Latent Semantics

12 0.40385491 174 acl-2012-Semantic Parsing with Bayesian Tree Transducers

13 0.40251023 175 acl-2012-Semi-supervised Dependency Parsing using Lexical Affinities

14 0.40189439 182 acl-2012-Spice it up? Mining Refinements to Online Instructions from User Generated Content

15 0.40071273 130 acl-2012-Learning Syntactic Verb Frames using Graphical Models

16 0.40065727 191 acl-2012-Temporally Anchored Relation Extraction

17 0.3993361 139 acl-2012-MIX Is Not a Tree-Adjoining Language

18 0.3991282 83 acl-2012-Error Mining on Dependency Trees

19 0.39852756 214 acl-2012-Verb Classification using Distributional Similarity in Syntactic and Semantic Structures

20 0.39844459 99 acl-2012-Finding Salient Dates for Building Thematic Timelines