acl acl2011 acl2011-194 knowledge-graph by maker-knowledge-mining

194 acl-2011-Language Use: What can it tell us?


Source: pdf

Author: Marjorie Freedman ; Alex Baron ; Vasin Punyakanok ; Ralph Weischedel

Abstract: For 20 years, information extraction has focused on facts expressed in text. In contrast, this paper is a snapshot of research in progress on inferring properties and relationships among participants in dialogs, even though these properties/relationships need not be expressed as facts. For instance, can a machine detect that someone is attempting to persuade another to action or to change beliefs or is asserting their credibility? We report results on both English and Arabic discussion forums. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract For 20 years, information extraction has focused on facts expressed in text. [sent-2, score-0.16]

2 In contrast, this paper is a snapshot of research in progress on inferring properties and relationships among participants in dialogs, even though these properties/relationships need not be expressed as facts. [sent-3, score-0.107]

3 For instance, can a machine detect that someone is attempting to persuade another to action or to change beliefs or is asserting their credibility? [sent-4, score-0.244]

4 We report results on both English and Arabic discussion forums. [sent-5, score-0.059]

5 1 Introduction Extracting explicitly stated information has been tested in the MUC, ACE, and QA evaluations. [sent-6, score-0.063]

6 Sentiment analysis uses implicit meaning of text, but has focused primarily on text known to be rich in opinions (product reviews, editorials) and delves into only one aspect of implicit meaning. [sent-12, score-0.142]

7 Our long-term goal is to predict social roles in informal group discussion from language uses (LU), even if those roles are not explicitly stated; for example, using the communication during a meeting, identify the leader of a group. [sent-13, score-0.651]

8 This paper provides a snapshot of preliminary, ongoing research in predicting two classes of language use: [sent-14, score-0.147]

9 Establish-Credibility and Attempt-To-Persuade. [sent-21, score-0.216]

10 Technical challenges include dealing with the facts that those LUs are rare and subjective and that human judgments have low agreement. [sent-22, score-0.297]

11 Because the phenomena are rare, always predicting the absence of an LU is a very high baseline. [sent-25, score-0.206]

12 2 Language Uses (LUs) A language use refers to an aspect of the social intention of how a communicator uses language. [sent-28, score-0.244]

13 The information that supports a decision about an implicit social action or role is likely to be distributed over more than one turn in a dialog; therefore, a language use is defined, annotated, and predicted across a thread in the dialog. [sent-29, score-0.591]

14 Because our current work uses discussion forums, threads provide a natural, explicit unit of analysis. [sent-30, score-0.278]

15 An Attempt-to-Persuade occurs when a poster tries to convince other participants to change their beliefs or actions over the course of a thread. [sent-32, score-0.227]

16 Typically, there is at least some resistance on the part of the posters being persuaded. [sent-33, score-0.051]

17 To distinguish between actual persuasion and discussions that involve differing opinions, a poster needs to engage [sent-34, score-0.388]

18 in multiple persuasion posts (turns) to be considered exhibiting the LU. (Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Short Papers, pages 341–345, Portland, Oregon, June 19-24, 2011.) [sent-36, score-0.341]

19 Establish-Credibility occurs when a poster attempts to increase their standing within the group. [sent-37, score-0.104]

20 …explicit statements of authority, demonstration of expertise through knowledge, and providing verifiable information (e.g., …). [sent-40, score-0.041]

21 Data selection focused on the number of messages and posters in a thread, as well as the frequency of known indicators like quotations. [sent-47, score-0.188]

22 Elsewhere, similar, iterative annotation processes have yielded significant improvements in agreement for word sense and coreference (Hovy et al.). [sent-51, score-0.094]

23 While LUs were annotated for a poster over the full thread, annotators also marked specific messages in the thread for presence of evidence of the language use. [sent-53, score-0.733]

24 Table 1 includes annotator consistency at both the evidence (message) and LU level. [sent-54, score-0.193]

25 Discussions suggested that disagreement did not come from a misunderstanding of the task but was the result of differing intuitions about difficult-to-define labels. [sent-61, score-0.185]

26 The task is to predict, for every participant in a given thread, whether the participant exhibits Attempt-to-Persuade and/or Establish-Credibility. [sent-64, score-0.244]

27 If there is insufficient evidence of an LU for a participant, then the LU value for that poster is negative. [sent-65, score-0.197]

28 Internally we measured predictions of message-level evidence as well. [sent-67, score-0.18]

29 For English, 139 threads from Google Groups and LiveJournal have been annotated for Attempt-to-Persuade, and 103 threads for Attempt-to-Establish-Credibility. [sent-69, score-0.475]

30 Due to low annotator agreement, attempting to resolve annotation disagreement by the standard adjudication process was too time-consuming. [sent-75, score-0.361]

31 Instead, the evaluation scheme, similar to the pyramid scheme used for summarization evaluation, assigns scores to each example based on its level of agreement among the annotators. [sent-76, score-0.092]

32 Specifically, each example is assigned positive and negative scores, p = n+/N and n = n-/N, where n+ is the number of annotators that annotate the example as positive, n- the number that annotate it as negative, and N the total number of annotators. [sent-77, score-0.138]

33 A system that outputs positive on the example results in p correct and n incorrect. [sent-79, score-0.05]

34 The system gets p incorrect and n correct for predicting negative. [sent-80, score-0.096]

35 Each example x_i is associated with positive and negative scores, p_i and n_i. [sent-83, score-0.166]

36 Let r_i = 1 if the system outputs positive for example x_i and 0 for negative. [sent-84, score-0.089]

37 The partial accuracy, recall, precision, and F-measure can be computed by: pA = 100 × Σ_i (r_i p_i + (1 − r_i) n_i) / Σ_i (p_i + n_i); pR = 100 × Σ_i r_i p_i / Σ_i p_i; pP = 100 × Σ_i r_i p_i / Σ_i r_i; pF = 2 pR pP / (pR + pP). The maximum pA and pF may be less than 100 when there is disagreement between annotators. [sent-85, score-0.132]

38 npA = 100 × pA / max(pA) and npF = 100 × pF / max(pF). URLs and judgments are available by email. [sent-87, score-0.071]
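A minimal sketch of these partial metrics, assuming per-example positive/negative scores from two annotators (all variable names are illustrative, not from the authors' code):

```python
# Sketch of the partial (pyramid-style) metrics described above.
# p[i] and n[i] are the fractions of annotators marking example i
# positive/negative; r[i] = 1 if the system predicts positive on i.

def partial_metrics(p, n, r):
    pA = 100 * sum(ri * pi + (1 - ri) * ni
                   for pi, ni, ri in zip(p, n, r)) / sum(pi + ni for pi, ni in zip(p, n))
    pR = 100 * sum(ri * pi for pi, ri in zip(p, r)) / sum(p)
    pP = 100 * sum(ri * pi for pi, ri in zip(p, r)) / sum(r)
    pF = 2 * pR * pP / (pR + pP)
    return pA, pR, pP, pF

# Two annotators: example 0 agreed positive, example 1 split, example 2 agreed negative.
p = [1.0, 0.5, 0.0]
n = [0.0, 0.5, 1.0]
r = [1, 1, 0]  # system outputs positive on the first two examples
pA, pR, pP, pF = partial_metrics(p, n, r)
```

Note that on the split example even a "correct" positive prediction earns only partial credit, which is why the maximum pA and pF fall below 100 under disagreement and the normalized npA/npF are reported.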

39 We process a thread in three stages: (1) linguistic analysis of each message (post) to yield features, (2) prediction of message-level properties using an SVM on the extracted features, and (3) simple rules that predict language uses over the thread. [sent-89, score-0.504]
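The three stages can be sketched with trivial stand-ins for the real components (the actual system uses SERIF features and an SVM; every function, feature, and threshold here is hypothetical):

```python
# Hypothetical skeleton of the three-stage thread pipeline.

def extract_features(message):
    # Phase 1 stand-in: simple surface features of a post
    # (word count and number of question marks).
    return [len(message.split()), message.count("?")]

def has_lu_evidence(features):
    # Phase 2 stand-in for the message-level SVM classifier.
    return features[0] > 5

def thread_exhibits_lu(messages):
    # Phase 3: a simple rule over message-level predictions.
    evidence_count = sum(has_lu_evidence(extract_features(m)) for m in messages)
    return evidence_count > 2

posts = ["You really should reconsider your position on this issue entirely."] * 3
result = thread_exhibits_lu(posts)
```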

40 Figure 1: Message and LU Prediction. Phase 1: The SERIF Information Extraction Engine extracts features which are designed to capture different aspects of the posts. [sent-90, score-0.038]

41 The features include simple features that can be extracted from the surface text of the posts and the structure of the posts within the threads. [sent-91, score-0.236]

42 …subjective words, and mentions of … level (Section 3). Phase 2: Given training data from the posts, an SVM predicts if the message contains evidence for an LU. [sent-104, score-0.201]

43 The motivation for this level is (1) Posts provide a compact unit with reliably extractable, specific, explicit features. [sent-105, score-0.046]

44 (3) Pointing to posts offers a clearer justification for the predictions. [sent-107, score-0.118]

45 (4) In our experiments, errors here do not seem to percolate to the thread level. [sent-108, score-0.313]

46 In fact, accuracy at the message level is not directly predictive of accuracy at the thread level. [sent-109, score-0.494]

47 Phase 3: Given the infrequency of the Attempt-to-Persuade and Establish-Credibility LUs, we wrote a few rules to predict LUs over threads, given the predictions at the message level. [sent-110, score-0.278]

48 For instance, if the number of messages with evidence for persuasion is greater than 2 from a given participant, then the system predicts AttemptToPersuade. [sent-111, score-0.45]
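A sketch of this kind of thread-level rule, counting evidence messages per poster (the threshold of 2 follows the example in the text; function and variable names are illustrative):

```python
# Hypothetical Phase-3 rule: a poster exhibits Attempt-to-Persuade if more
# than `threshold` of their messages carry persuasion evidence.

def predict_attempt_to_persuade(message_predictions, threshold=2):
    """message_predictions: list of (poster, has_persuasion_evidence) pairs."""
    counts = {}
    for poster, has_evidence in message_predictions:
        if has_evidence:
            counts[poster] = counts.get(poster, 0) + 1
    return {poster for poster, c in counts.items() if c > threshold}

# Poster "a" has three evidence messages, poster "b" only one.
preds = [("a", True), ("a", True), ("a", True), ("b", True), ("b", False)]
persuaders = predict_attempt_to_persuade(preds)
```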

49 To predict that a poster is exhibiting the Attempt-to-Persuade LU, the system need not find every piece of evidence that the LU is present, but rather just needs to find sufficient evidence for identifying the LU. [sent-113, score-0.403]

50 Our message-level classifiers were trained with an SVM that optimizes F-measure (Joachims, 2005). [sent-114, score-0.181]

51 Because annotation disagreement is a major challenge, we experimented with various ways to account for (and make use of) noisy, dual-annotated text. [sent-115, score-0.18]

52 These included removing examples with disagreement; treating an example as negative if any annotator marked the example negative; and treating an example as positive if any annotator marked the example as positive. [sent-118, score-0.289]

53 An alternative (and more principled) approach is to incorporate positive and negative scores for each example into the optimization procedure. [sent-119, score-0.089]

54 Because each example was annotated by the same number of annotators (2 in this case), we are able to treat each annotator’s decision as an independent example without augmenting the SVM optimization process. [sent-120, score-0.086]
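A minimal sketch of that training-data construction, assuming exactly two labels per example as in the text (names are illustrative):

```python
# Each annotator's decision becomes an independent training example,
# so a disagreement contributes one positive and one negative copy
# of the same feature vector.

def expand_dual_annotations(examples):
    """examples: list of (features, [label_from_annotator1, label_from_annotator2])."""
    expanded = []
    for features, labels in examples:
        for label in labels:
            expanded.append((features, label))
    return expanded

# First example: both annotators say positive; second: they disagree.
data = [([0.1, 0.3], [1, 1]), ([0.7, 0.2], [1, 0])]
expanded = expand_dual_annotations(data)
```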

55 Table 3 shows results for predicting message-level evidence of an LU (Phase 2). [sent-126, score-0.37]

56 Table 4 shows performance on the task of predicting an LU for each poster. [sent-127, score-0.096]

57 Additionally, Arabic messages are much shorter, and the phenomena are even rarer (as illustrated by the high npA, i.e., accuracy, of the A baseline). [sent-129, score-0.349]

58 Like our dataset, each example in the external evaluation dataset was annotated by 3 annotators. [sent-130, score-0.037]

59 6 Related Research Research in authorship profiling (Chung & Pennebaker, 2007; Argamon et al., in press; Abbasi and Chen, 2005) has identified traits, such as status, sex, age, gender, and native language. [sent-140, score-0.12]

60 Models and predictions in this field have primarily used simple word-based features, e. [sent-141, score-0.087]

61 Social science researchers have studied how social roles develop in online communities (Fisher et al., 2006), [sent-144, score-0.328]

62 and have attempted to categorize these roles in multiple ways (Golder and Donath, 2004; Turner et al.). [sent-145, score-0.128]

63 Welser et al. (2007) have investigated the feasibility of detecting such roles automatically using posting frequency (but not the content of the messages). [sent-148, score-0.165]

64 Sentiment analysis requires understanding the implicit nature of the text. [sent-149, score-0.071]

65 Work on perspective and sentiment analysis frequently uses a corpus known to be rich in sentiment such as reviews or editorials (e. [sent-150, score-0.247]

66 Both the MPQA corpus and the various corpora of editorials and reviews have tended towards more formal, edited, non-conversational text. [sent-154, score-0.194]

67 Our work, in contrast, specifically targets interactive discussions in an informal setting. [sent-155, score-0.149]

68 Work outside of computational linguistics that has looked at persuasion has tended to examine language in a persuasive context (e. [sent-156, score-0.219]

69 Their work focuses on chat transcripts in an experimental setting designed to be rich in the phenomena of interest. [sent-161, score-0.11]

70 Like our work, their predictions operate over the conversation, and not a single utterance. [sent-162, score-0.087]

71 We work with threaded online discussions in which the phenomena in question are rare. [sent-165, score-0.175]

72 Our annotators and system must distinguish between the language use and text that is opinionated without an intention to persuade or establish credibility. [sent-166, score-0.182]

73 7 Conclusions and Future Work In this work in progress, we presented a hybrid statistical and rule-based approach to detecting properties not explicitly stated, but evident from language use. [sent-167, score-0.037]

74 Annotation at the message (turn) level provided training data useful for predicting rare phenomena at the discussion level while reducing the need for turn-level predictions to be accurate. [sent-168, score-0.681]

75 Weighing subjective judgments overcame the need for high annotator consistency. [sent-169, score-0.243]

76 For English, the system beats both baselines with respect to accuracy and F, despite the fact that, because the phenomena are rare, always predicting the absence of a language use is a high baseline. [sent-170, score-0.273]

77 This work has explored LUs, the implicit, social purpose behind the words of a message. [sent-172, score-0.157]

78 Future work will explore incorporating LU predictions to predict the social roles played by the participants in a thread, for example using persuasion and credibility to establish which participants in a discussion are serving as informal leaders. [sent-173, score-0.912]

79 All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI or the U. [sent-175, score-0.041]

80 (2006) “Friends, foes, and fringe: norms and structure in political discussion networks”, Proceedings of the 2006 International Conference on Digital Government Research. [sent-227, score-0.059]

81 "Visualizing the signatures of social roles in online discussion groups," In The Journal of Social Structure, vol. [sent-265, score-0.344]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('lus', 0.325), ('thread', 0.313), ('threads', 0.219), ('lu', 0.179), ('persuasion', 0.166), ('social', 0.157), ('messages', 0.137), ('message', 0.135), ('disagreement', 0.132), ('roles', 0.128), ('golder', 0.123), ('welser', 0.123), ('posts', 0.118), ('phenomena', 0.11), ('emai', 0.108), ('poster', 0.104), ('arabic', 0.103), ('rare', 0.102), ('annotator', 0.1), ('predicting', 0.096), ('fisher', 0.094), ('participant', 0.094), ('editorials', 0.094), ('evidence', 0.093), ('pf', 0.088), ('predictions', 0.087), ('informal', 0.084), ('donath', 0.082), ('iripi', 0.082), ('npa', 0.082), ('persuade', 0.082), ('weibe', 0.082), ('pennebaker', 0.082), ('phase', 0.075), ('subjective', 0.072), ('abbasi', 0.072), ('strzalkowski', 0.072), ('judgments', 0.071), ('implicit', 0.071), ('beliefs', 0.067), ('odni', 0.067), ('beats', 0.067), ('profiling', 0.067), ('turner', 0.067), ('discussions', 0.065), ('stated', 0.063), ('iarpa', 0.063), ('credibility', 0.063), ('weighing', 0.059), ('discussion', 0.059), ('exhibiting', 0.057), ('predict', 0.056), ('participants', 0.056), ('mpqa', 0.055), ('argamon', 0.055), ('predicts', 0.054), ('sentiment', 0.053), ('tended', 0.053), ('differing', 0.053), ('authorship', 0.053), ('facts', 0.052), ('fmeasure', 0.051), ('posters', 0.051), ('snapshot', 0.051), ('intention', 0.051), ('positive', 0.05), ('action', 0.05), ('annotators', 0.049), ('tweet', 0.048), ('annotation', 0.048), ('pr', 0.047), ('reviews', 0.047), ('level', 0.046), ('agreement', 0.046), ('chung', 0.046), ('urls', 0.046), ('attempting', 0.045), ('ace', 0.045), ('communities', 0.043), ('pa', 0.043), ('modifier', 0.041), ('statements', 0.041), ('xi', 0.039), ('negative', 0.039), ('communication', 0.039), ('pi', 0.038), ('age', 0.038), ('joachims', 0.037), ('annotated', 0.037), ('smith', 0.037), ('detecting', 0.037), ('hthe', 0.036), ('mcfarland', 0.036), ('communicator', 0.036), ('mozart', 0.036), ('serif', 0.036), ('extractable', 0.036), ('imperatives', 0.036), ('computermediated', 
0.036), ('adjudication', 0.036)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000008 194 acl-2011-Language Use: What can it tell us?

Author: Marjorie Freedman ; Alex Baron ; Vasin Punyakanok ; Ralph Weischedel

Abstract: For 20 years, information extraction has focused on facts expressed in text. In contrast, this paper is a snapshot of research in progress on inferring properties and relationships among participants in dialogs, even though these properties/relationships need not be expressed as facts. For instance, can a machine detect that someone is attempting to persuade another to action or to change beliefs or is asserting their credibility? We report results on both English and Arabic discussion forums. 1

2 0.12884571 286 acl-2011-Social Network Extraction from Texts: A Thesis Proposal

Author: Apoorv Agarwal

Abstract: In my thesis, I propose to build a system that would enable extraction of social interactions from texts. To date I have defined a comprehensive set of social events and built a preliminary system that extracts social events from news articles. I plan to improve the performance of my current system by incorporating semantic information. Using domain adaptation techniques, I propose to apply my system to a wide range of genres. By extracting linguistic constructs relevant to social interactions, I will be able to empirically analyze different kinds of linguistic constructs that people use to express social interactions. Lastly, I will attempt to make convolution kernels more scalable and interpretable.

3 0.12355303 156 acl-2011-IMASS: An Intelligent Microblog Analysis and Summarization System

Author: Jui-Yu Weng ; Cheng-Lun Yang ; Bo-Nian Chen ; Yen-Kai Wang ; Shou-De Lin

Abstract: This paper presents a system to summarize a Microblog post and its responses with the goal to provide readers a more constructive and concise set of information for efficient digestion. We introduce a novel two-phase summarization scheme. In the first phase, the post plus its responses are classified into four categories based on the intention: interrogation, sharing, discussion, and chat. For each type of post, in the second phase, we exploit different strategies, including opinion analysis, response pair identification, and response relevancy detection, to summarize and highlight critical information to display. This system provides an alternative thinking about machine summarization: by utilizing AI approaches, computers are capable of constructing deeper and more user-friendly abstraction. 1

4 0.1176804 218 acl-2011-MemeTube: A Sentiment-based Audiovisual System for Analyzing and Displaying Microblog Messages

Author: Cheng-Te Li ; Chien-Yuan Wang ; Chien-Lin Tseng ; Shou-De Lin

Abstract: Micro-blogging services provide platforms for users to share their feelings and ideas on the move. In this paper, we present a search-based demonstration system, called MemeTube, to summarize the sentiments of microblog messages in an audiovisual manner. MemeTube provides three main functions: (1) recognizing the sentiments of messages, (2) generating music melodies automatically based on detected sentiments, and (3) producing an animation of real-time piano playing for audiovisual display. Our MemeTube system can be accessed via: http://mslab.csie.ntu.edu.tw/memetube/ .

5 0.11635871 31 acl-2011-Age Prediction in Blogs: A Study of Style, Content, and Online Behavior in Pre- and Post-Social Media Generations

Author: Sara Rosenthal ; Kathleen McKeown

Abstract: We investigate whether wording, stylistic choices, and online behavior can be used to predict the age category of blog authors. Our hypothesis is that significant changes in writing style distinguish pre-social media bloggers from post-social media bloggers. Through experimentation with a range of years, we found that the birth dates of students in college at the time when social media such as AIM, SMS text messaging, MySpace and Facebook first became popular, enable accurate age prediction. We also show that internet writing characteristics are important features for age prediction, but that lexical content is also needed to produce significantly more accurate results. Our best results allow for 81.57% accuracy.

6 0.10606055 64 acl-2011-C-Feel-It: A Sentiment Analyzer for Micro-blogs

7 0.10189823 133 acl-2011-Extracting Social Power Relationships from Natural Language

8 0.098742977 121 acl-2011-Event Discovery in Social Media Feeds

9 0.087825641 288 acl-2011-Subjective Natural Language Problems: Motivations, Applications, Characterizations, and Implications

10 0.087031931 157 acl-2011-I Thou Thee, Thou Traitor: Predicting Formal vs. Informal Address in English Literature

11 0.086761974 95 acl-2011-Detection of Agreement and Disagreement in Broadcast Conversations

12 0.086369924 45 acl-2011-Aspect Ranking: Identifying Important Product Aspects from Online Consumer Reviews

13 0.086231105 211 acl-2011-Liars and Saviors in a Sentiment Annotated Corpus of Comments to Political Debates

14 0.085951231 7 acl-2011-A Corpus for Modeling Morpho-Syntactic Agreement in Arabic: Gender, Number and Rationality

15 0.084324501 226 acl-2011-Multi-Modal Annotation of Quest Games in Second Life

16 0.082814947 292 acl-2011-Target-dependent Twitter Sentiment Classification

17 0.080449507 145 acl-2011-Good Seed Makes a Good Crop: Accelerating Active Learning Using Language Modeling

18 0.078534298 281 acl-2011-Sentiment Analysis of Citations using Sentence Structure-Based Features

19 0.075100377 159 acl-2011-Identifying Noun Product Features that Imply Opinions

20 0.073871531 289 acl-2011-Subjectivity and Sentiment Analysis of Modern Standard Arabic


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.193), (1, 0.133), (2, 0.018), (3, -0.026), (4, -0.013), (5, 0.087), (6, 0.05), (7, -0.062), (8, -0.012), (9, 0.021), (10, -0.085), (11, -0.01), (12, -0.082), (13, 0.041), (14, -0.052), (15, -0.058), (16, -0.022), (17, -0.03), (18, -0.044), (19, -0.08), (20, 0.071), (21, 0.092), (22, -0.045), (23, 0.082), (24, -0.037), (25, 0.013), (26, 0.058), (27, 0.017), (28, 0.02), (29, -0.096), (30, 0.035), (31, -0.042), (32, -0.114), (33, 0.106), (34, 0.022), (35, -0.008), (36, -0.122), (37, -0.041), (38, 0.002), (39, 0.019), (40, 0.067), (41, 0.074), (42, 0.041), (43, -0.042), (44, -0.136), (45, 0.003), (46, -0.006), (47, -0.045), (48, -0.062), (49, -0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9389165 194 acl-2011-Language Use: What can it tell us?

Author: Marjorie Freedman ; Alex Baron ; Vasin Punyakanok ; Ralph Weischedel

Abstract: For 20 years, information extraction has focused on facts expressed in text. In contrast, this paper is a snapshot of research in progress on inferring properties and relationships among participants in dialogs, even though these properties/relationships need not be expressed as facts. For instance, can a machine detect that someone is attempting to persuade another to action or to change beliefs or is asserting their credibility? We report results on both English and Arabic discussion forums. 1

2 0.78197229 31 acl-2011-Age Prediction in Blogs: A Study of Style, Content, and Online Behavior in Pre- and Post-Social Media Generations

Author: Sara Rosenthal ; Kathleen McKeown

Abstract: We investigate whether wording, stylistic choices, and online behavior can be used to predict the age category of blog authors. Our hypothesis is that significant changes in writing style distinguish pre-social media bloggers from post-social media bloggers. Through experimentation with a range of years, we found that the birth dates of students in college at the time when social media such as AIM, SMS text messaging, MySpace and Facebook first became popular, enable accurate age prediction. We also show that internet writing characteristics are important features for age prediction, but that lexical content is also needed to produce significantly more accurate results. Our best results allow for 81.57% accuracy.

3 0.7646873 133 acl-2011-Extracting Social Power Relationships from Natural Language

Author: Philip Bramsen ; Martha Escobar-Molano ; Ami Patel ; Rafael Alonso

Abstract: Sociolinguists have long argued that social context influences language use in all manner of ways, resulting in lects 1. This paper explores a text classification problem we will call lect modeling, an example of what has been termed computational sociolinguistics. In particular, we use machine learning techniques to identify social power relationships between members of a social network, based purely on the content of their interpersonal communication. We rely on statistical methods, as opposed to language-specific engineering, to extract features which represent vocabulary and grammar usage indicative of social power lect. We then apply support vector machines to model the social power lects representing superior-subordinate communication in the Enron email corpus. Our results validate the treatment of lect modeling as a text classification problem – albeit a hard one – and constitute a case for future research in computational sociolinguistics. 1

4 0.74977386 286 acl-2011-Social Network Extraction from Texts: A Thesis Proposal

Author: Apoorv Agarwal

Abstract: In my thesis, I propose to build a system that would enable extraction of social interactions from texts. To date I have defined a comprehensive set of social events and built a preliminary system that extracts social events from news articles. I plan to improve the performance of my current system by incorporating semantic information. Using domain adaptation techniques, I propose to apply my system to a wide range of genres. By extracting linguistic constructs relevant to social interactions, I will be able to empirically analyze different kinds of linguistic constructs that people use to express social interactions. Lastly, I will attempt to make convolution kernels more scalable and interpretable.

5 0.73288655 288 acl-2011-Subjective Natural Language Problems: Motivations, Applications, Characterizations, and Implications

Author: Cecilia Ovesdotter Alm

Abstract: This opinion paper discusses subjective natural language problems in terms of their motivations, applications, characterizations, and implications. It argues that such problems deserve increased attention because of their potential to challenge the status of theoretical understanding, problem-solving methods, and evaluation techniques in computational linguistics. The author supports a more holistic approach to such problems; a view that extends beyond opinion mining or sentiment analysis.

6 0.69855589 156 acl-2011-IMASS: An Intelligent Microblog Analysis and Summarization System

7 0.58419287 218 acl-2011-MemeTube: A Sentiment-based Audiovisual System for Analyzing and Displaying Microblog Messages

8 0.58100009 306 acl-2011-Towards Style Transformation from Written-Style to Audio-Style

9 0.57077664 214 acl-2011-Lost in Translation: Authorship Attribution using Frame Semantics

10 0.53933674 97 acl-2011-Discovering Sociolinguistic Associations with Structured Sparsity

11 0.52853447 136 acl-2011-Finding Deceptive Opinion Spam by Any Stretch of the Imagination

12 0.52472574 120 acl-2011-Even the Abstract have Color: Consensus in Word-Colour Associations

13 0.51723391 289 acl-2011-Subjectivity and Sentiment Analysis of Modern Standard Arabic

14 0.51282901 84 acl-2011-Contrasting Opposing Views of News Articles on Contentious Issues

15 0.50190175 299 acl-2011-The Arabic Online Commentary Dataset: an Annotated Dataset of Informal Arabic with High Dialectal Content

16 0.49625421 212 acl-2011-Local Histograms of Character N-grams for Authorship Attribution

17 0.49445513 211 acl-2011-Liars and Saviors in a Sentiment Annotated Corpus of Comments to Political Debates

18 0.49290413 74 acl-2011-Combining Indicators of Allophony

19 0.48811761 77 acl-2011-Computing and Evaluating Syntactic Complexity Features for Automated Scoring of Spontaneous Non-Native Speech

20 0.48449978 138 acl-2011-French TimeBank: An ISO-TimeML Annotated Reference Corpus


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(5, 0.032), (17, 0.036), (26, 0.042), (31, 0.017), (37, 0.058), (39, 0.036), (41, 0.08), (55, 0.019), (59, 0.043), (72, 0.039), (88, 0.327), (91, 0.039), (96, 0.144), (97, 0.013)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.78207147 194 acl-2011-Language Use: What can it tell us?

Author: Marjorie Freedman ; Alex Baron ; Vasin Punyakanok ; Ralph Weischedel

Abstract: For 20 years, information extraction has focused on facts expressed in text. In contrast, this paper is a snapshot of research in progress on inferring properties and relationships among participants in dialogs, even though these properties/relationships need not be expressed as facts. For instance, can a machine detect that someone is attempting to persuade another to action or to change beliefs or is asserting their credibility? We report results on both English and Arabic discussion forums. 1

2 0.63972896 103 acl-2011-Domain Adaptation by Constraining Inter-Domain Variability of Latent Feature Representation

Author: Ivan Titov

Abstract: We consider a semi-supervised setting for domain adaptation where only unlabeled data is available for the target domain. One way to tackle this problem is to train a generative model with latent variables on the mixture of data from the source and target domains. Such a model would cluster features in both domains and ensure that at least some of the latent variables are predictive of the label on the source domain. The danger is that these predictive clusters will consist of features specific to the source domain only and, consequently, a classifier relying on such clusters would perform badly on the target domain. We introduce a constraint enforcing that marginal distributions of each cluster (i.e., each latent variable) do not vary significantly across domains. We show that this constraint is effec- tive on the sentiment classification task (Pang et al., 2002), resulting in scores similar to the ones obtained by the structural correspondence methods (Blitzer et al., 2007) without the need to engineer auxiliary tasks.

3 0.62482566 108 acl-2011-EdIt: A Broad-Coverage Grammar Checker Using Pattern Grammar

Author: Chung-chi Huang ; Mei-hua Chen ; Shih-ting Huang ; Jason S. Chang

Abstract: We introduce a new method for learning to detect grammatical errors in learner’s writing and provide suggestions. The method involves parsing a reference corpus and inferring grammar patterns in the form of a sequence of content words, function words, and parts-of-speech (e.g., “play ~ role in Ving” and “look forward to Ving”). At runtime, the given passage submitted by the learner is matched using an extended Levenshtein algorithm against the set of pattern rules in order to detect errors and provide suggestions. We present a prototype implementation of the proposed method, EdIt, that can handle a broad range of errors. Promising results are illustrated with three common types of errors in nonnative writing. 1

4 0.59503233 264 acl-2011-Reordering Metrics for MT

Author: Alexandra Birch ; Miles Osborne

Abstract: One of the major challenges facing statistical machine translation is how to model differences in word order between languages. Although a great deal of research has focussed on this problem, progress is hampered by the lack of reliable metrics. Most current metrics are based on matching lexical items in the translation and the reference, and their ability to measure the quality of word order has not been demonstrated. This paper presents a novel metric, the LRscore, which explicitly measures the quality of word order by using permutation distance metrics. We show that the metric is more consistent with human judgements than other metrics, including the BLEU score. We also show that the LRscore can successfully be used as the objective function when training translation model parameters. Training with the LRscore leads to output which is preferred by humans. Moreover, the translations incur no penalty in terms of BLEU scores.

5 0.53290135 332 acl-2011-Using Multiple Sources to Construct a Sentiment Sensitive Thesaurus for Cross-Domain Sentiment Classification

Author: Danushka Bollegala ; David Weir ; John Carroll

Abstract: We describe a sentiment classification method that is applicable when we do not have any labeled data for a target domain but have some labeled data for multiple other domains, designated as the source domains. We automat- ically create a sentiment sensitive thesaurus using both labeled and unlabeled data from multiple source domains to find the association between words that express similar sentiments in different domains. The created thesaurus is then used to expand feature vectors to train a binary classifier. Unlike previous cross-domain sentiment classification methods, our method can efficiently learn from multiple source domains. Our method significantly outperforms numerous baselines and returns results that are better than or comparable to previous cross-domain sentiment classification methods on a benchmark dataset containing Amazon user reviews for different types of products.

6 0.51995826 121 acl-2011-Event Discovery in Social Media Feeds

7 0.51763916 93 acl-2011-Dealing with Spurious Ambiguity in Learning ITG-based Word Alignment

8 0.51626259 31 acl-2011-Age Prediction in Blogs: A Study of Style, Content, and Online Behavior in Pre- and Post-Social Media Generations

9 0.51524377 226 acl-2011-Multi-Modal Annotation of Quest Games in Second Life

10 0.51281965 295 acl-2011-Temporal Restricted Boltzmann Machines for Dependency Parsing

11 0.51183194 5 acl-2011-A Comparison of Loopy Belief Propagation and Dual Decomposition for Integrated CCG Supertagging and Parsing

12 0.50909555 137 acl-2011-Fine-Grained Class Label Markup of Search Queries

13 0.50770688 196 acl-2011-Large-Scale Cross-Document Coreference Using Distributed Inference and Hierarchical Models

14 0.50642276 37 acl-2011-An Empirical Evaluation of Data-Driven Paraphrase Generation Techniques

15 0.50627202 218 acl-2011-MemeTube: A Sentiment-based Audiovisual System for Analyzing and Displaying Microblog Messages

16 0.50600129 324 acl-2011-Unsupervised Semantic Role Induction via Split-Merge Clustering

17 0.50510621 40 acl-2011-An Error Analysis of Relation Extraction in Social Media Documents

18 0.5051055 244 acl-2011-Peeling Back the Layers: Detecting Event Role Fillers in Secondary Contexts

19 0.50459844 65 acl-2011-Can Document Selection Help Semi-supervised Learning? A Case Study On Event Extraction

20 0.50451702 145 acl-2011-Good Seed Makes a Good Crop: Accelerating Active Learning Using Language Modeling