acl acl2013 acl2013-310 knowledge-graph by maker-knowledge-mining

310 acl-2013-Semantic Frames to Predict Stock Price Movement


Source: pdf

Author: Boyi Xie ; Rebecca J. Passonneau ; Leon Wu ; German G. Creamer

Abstract: Semantic frames are a rich linguistic resource. There has been much work on semantic frame parsers, but less that applies them to general NLP problems. We address a task to predict change in stock price from financial news. Semantic frames help to generalize from specific sentences to scenarios, and to detect the (positive or negative) roles of specific companies. We introduce a novel tree representation, and use it to train predictive models with tree kernels using support vector machines. Our experiments test multiple text representations on two binary classification tasks, change of price and polarity. Experiments show that features derived from semantic frame parsing have significantly better performance across years on the polarity task.

Reference: text


Summary: the most important sentences generated by the tf-idf model

sentIndex sentText sentNum sentScore

1 There has been much work on semantic frame parsers, but less that applies them to general NLP problems. [sent-6, score-0.366]

2 We address a task to predict change in stock price from financial news. [sent-7, score-0.9]

3 Our experiments test multiple text representations on two binary classification tasks, change of price and polarity. [sent-10, score-0.439]

4 Experiments show that features derived from semantic frame parsing have significantly better performance across years on the polarity task. [sent-11, score-0.56]

5 1 Introduction. A growing literature evaluates the financial effects of media on the market (Tetlock, 2007; Engelberg and Parsons, 2011). [sent-12, score-0.402]

6 Recent work has applied NLP techniques to various financial media (conventional news, tweets) to detect sentiment in conventional news (Devitt and Ahmad, 2007; Haider and Mehrotra, 2011) or message boards (Chua et al. [sent-13, score-0.48]

7 , 2009), or discriminate expert from nonexpert investors in financial tweets (Bar-Haim et al. [sent-14, score-0.294]

8 We hypothesize that conventional news can be used to predict changes in the stock price of specific companies, and that the semantic features that best represent relevant aspects of the news vary across sectors. [sent-18, score-0.885]

9 Figure 1: Summary of financial news items pertaining to Google in April, 2012. [sent-60, score-0.357]

10 To test this hypothesis, we use price information to label data from six years of financial news. [sent-62, score-0.606]

11 Our experiments test several document representations for two binary classification tasks, change of price and polarity. [sent-63, score-0.439]

12 Our main contribution is a novel tree representation based on semantic frame parses that performs significantly better than enriched bag-of-words vectors. [sent-64, score-0.508]

13 Figure 1 shows a constructed example based on extracts from financial news about Google in April, 2012. [sent-65, score-0.357]

14 It illustrates how a series of events reported in the news precedes and potentially predicts a large change in Google’s stock price. [sent-66, score-0.414]

15 Google’s early announcement of quarterly earnings possibly presages trouble, and its stock price falls soon after reports of a legal action against Google by Oracle. [sent-67, score-0.55]

16 Accurate detection of events and relations that might have an impact on stock price should benefit from document representation that captures sentiment in lexical items (e. [sent-69, score-0.671]

17 A frame is a lexical semantic representation. [sent-72, score-0.366]

18 To the best of our knowledge, this paper is the first to apply semantic frames in this domain. [sent-79, score-0.227]

19 On the polarity task, the semantic frame features encoded as trees perform significantly better across years and sectors than bag-of-words vectors (BOW), and outperform BOW vectors enhanced with semantic frame features, and a supervised topic modeling approach. [sent-80, score-1.087]

20 The results on the price change task show the same trend, but are not statistically significant, possibly due to the volatility of the market in 2007 and the following several years. [sent-81, score-0.603]

21 Yet even modest predictive performance on both tasks could have an impact, as discussed below, if incorporated into financial models such as Rydberg and Shephard (2003). [sent-82, score-0.235]

22 Section 4 presents vector-based and tree-based features from semantic frame parses, and section 5 describes our dataset. [sent-84, score-0.366]

23 Many news organizations that feature financial news, such as Reuters, the Wall Street Journal and Bloomberg, devote significant resources to the analysis of corporate news. [sent-87, score-0.397]

24 Much of the data that would support studies of a link between the news media and the market are publicly available. [sent-88, score-0.289]

25 Because very few stock market investors directly observe firms’ production activities, they get most of their information secondhand. [sent-91, score-0.405]

26 We hypothesize that semantic frames can address these issues. [sent-107, score-0.227]

27 Most of the NLP literature on semantic frames addresses how to build robust semantic frame parsers, with intrinsic evaluation against gold standard parses. [sent-108, score-0.593]

28 There have been few applications of semantic frame parsing for extrinsic tasks. [sent-109, score-0.366]

29 To test for measurable benefits of semantic frame parsing, this paper poses the following questions: 1. [sent-110, score-0.366]

30 Are semantic frames useful for document representation of financial news? [sent-111, score-0.498]

31 Rather, we investigate whether computational linguistic methodologies can improve our understanding of a company’s fundamental market value, and whether linguistic information derived from news produces a consistent enough result to benefit more comprehensive financial models. [sent-119, score-0.486]

32 3 Related Work. NLP has recently been applied to financial text for market analysis, primarily using bag-of-words (BOW) document representation. [sent-120, score-0.364]

33 Luss and d’Aspremont (2008) use text classification to model price movements of financial assets on a per-day basis. [sent-121, score-0.599]

34 (2009) address a text regression problem to predict the financial risk of investment in companies. [sent-124, score-0.275]

35 They analyze 10-K reports to predict stock return volatility. [sent-125, score-0.257]

36 (2012) correlate text with financial time series volume and price data. [sent-128, score-0.568]

37 They find that graph centrality measures like page rank and degree are more strongly correlated to both price and traded volume for an aggregation of similar companies, while individual stocks are less correlated. [sent-129, score-0.333]

38 (2000) present an approach to identify news stories that influence the behavior of financial markets, and predict trends in stock prices based on the content of news stories that precede the trends. [sent-131, score-0.771]

39 We explore a rich feature space that relies on frame semantic parsing. [sent-134, score-0.366]

40 Other resources for sentiment detection include the Dictionary of Affect in Language (DAL) to score the prior polarity of words, as in Agarwal et al. [sent-138, score-0.241]

41 , 2003), based on the theory of frame semantics (Fillmore, 1976). [sent-142, score-0.315]

42 Our work addresses classification tasks that have potential relevance to an influential financial model (Rydberg and Shephard, 2003). [sent-147, score-0.266]

43 This model decomposes stock price analysis of financial data into a three-part ADS model - activity (a binary process modeling the price move or not), direction (another binary process modeling the direction of the moves) and size (a number quantifying the size of the moves). [sent-148, score-1.118]

44 Our two binary classification tasks for news, price change and polarity, are analogous to their activity and direction. [sent-149, score-0.439]

45 At present, our goal is limited to the determination of whether NLP features can uncover information from news that could help predict stock price movement or support analysts’ investigations. [sent-151, score-0.712]

46 The second is a tree representation that encodes semantic frame features, and depends on tree kernel measures for support vector machine classification. [sent-154, score-0.596]

47 The semantic parses of both methods are derived from SEMAFOR (Das and Smith, 2012; Das and Smith, 2011), which solves the semantic parsing problem by rule-based target identification, log-linear-model-based frame identification, and frame element filling. [sent-155, score-0.714]

48 FrameNet defines hundreds of frames, each of which represents a scenario associated with semantic roles, or frame elements, that serve as participants in the scenario the frame signifies. [sent-167, score-0.681]

49 Here we use F for the frame name, FT for the target words, and FE for frame elements. [sent-172, score-0.63]

50 For example, we define idf-adjusted weighted frame features, such as w_F for attribute F in document d, as w_{F,d} = f(F,d) × log(|D| / |{d ∈ D : F ∈ d}|), where f(F,d) is the frequency of F in d, D is the whole document set, and |·| is the cardinality operator. [sent-174, score-0.315]
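This idf-adjusted frame weighting can be sketched as follows, assuming documents are given as lists of frame names produced by a semantic frame parser; the input format and the function name are illustrative, not the authors' code:

```python
import math

def frame_weights(docs):
    """Compute idf-adjusted frame weights w_{F,d} = f(F,d) * log(|D| / df(F)).

    `docs`: a list of documents, each a list of frame names.
    Returns one {frame: weight} dict per document.
    """
    n_docs = len(docs)
    # Document frequency: number of documents containing each frame.
    df = {}
    for doc in docs:
        for frame in set(doc):
            df[frame] = df.get(frame, 0) + 1
    weights = []
    for doc in docs:
        w = {}
        for frame in set(doc):
            f_Fd = doc.count(frame)  # raw frequency of frame F in document d
            w[frame] = f_Fd * math.log(n_docs / df[frame])
        weights.append(w)
    return weights
```

A frame occurring in every document gets weight 0, as in standard tf-idf.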

51 SemTree can distinguish the roles of each company of interest, or designated object (e. [sent-184, score-0.277]

52 1 Construction of Tree Representation. The semantic frame parse of a sentence is a forest of trees, each of which corresponds to a semantic frame. [sent-189, score-0.417]

53 SemTree encodes the original frame structure and its leaf words and phrases, and highlights a designated object at a particular node as follows. [sent-190, score-0.478]

54 For each lexical item (target) that evokes a frame, a backbone is found by extracting the path from the root to the role filler mentioning a designated object; the backbone is then reversed to promote the designated object. [sent-191, score-0.278]

55 If multiple frames have been assigned to the same designated object, their backbones are merged. [sent-192, score-0.347]

56 Lastly, the frame elements and frame targets are inserted at the frame root. [sent-193, score-0.945]

57 The top of Figure 2 shows the semantic parse for sentence a from section 2; we use it to illustrate tree construction for designated object Oracle. [sent-194, score-0.287]

58 The reversed paths extracted from each frame root to the designated object Oracle become the backbones (Figures 2-(3)&(4)). [sent-196, score-0.528]
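The backbone extraction and reversal steps above can be sketched as follows, under an assumed nested-tuple encoding of frame parses (each frame is `(label, children...)` with leaf strings at the bottom); the node labels and helper names are illustrative, and the merging of multiple backbones is omitted:

```python
def find_path(node, target, path=()):
    """Return the root-to-leaf label path ending at the role filler
    that mentions the designated object (`target`), or None."""
    if isinstance(node, str):
        return path if target in node else None
    label = node[0]
    for child in node[1:]:
        found = find_path(child, target, path + (label,))
        if found is not None:
            return found
    return None

def backbone(frame, designated):
    """Reversed path from the frame root to the designated object,
    promoting the designated object to the root of the backbone."""
    path = find_path(frame, designated)
    return (designated,) + tuple(reversed(path)) if path else None
```

For a Judgment_communication frame whose COMMUNICATOR role mentions Oracle, the backbone puts Oracle at the root, followed by the role and the frame name.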

59 Figure 2: Constructing a tree representation for the designated object Oracle in the sentence shown. [sent-240, score-0.272]

60 5 Dataset. We use publicly available financial news from Reuters from January 2007 through August 2012. [sent-247, score-0.357]

61 This time frame includes a severe economic downturn in 2007-2010 followed by a modest recovery in 2011-2012. [sent-248, score-0.355]

62 The timestamp of the news is extracted for a later alignment with stock price information, which will be discussed in section 6. [sent-251, score-0.672]

63 We remove articles that only list stock prices or only show tables of accounting reports. [sent-258, score-0.282]

64 For example, the consumer staples sector has 40 companies. [sent-274, score-0.348]

65 We test the influence of news to predict (1) a change in stock price (change task), and (2) the polarity of change (increase vs. [sent-281, score-1.018]

66 1 Labels, Evaluation Metrics, and Settings. We align publicly available daily stock price data from Yahoo Finance with the Reuters news using a method to avoid back-casting. [sent-288, score-0.672]

67 In particular, we use the daily adjusted closing price - the price quoted at the end of a trading day (4PM US Eastern Time), then adjusted for dividends, stock splits, and other corporate actions. [sent-289, score-0.333]

68 We create two types of labels for news documents using the price data, to label the existence of a change and the direction of change. [sent-290, score-0.53]

69 Based on the finding of a one-day delay of the price response to the information embedded in the news by Tetlock et al. [sent-292, score-0.455]

70 To constrain the number of parameters, we also use a threshold value (r) of a 2% change, based on the distribution of price changes across our data. [sent-294, score-0.333]

71 polarity = +1 if p_{t(0)+Δt} > p_{t(−1)} and change = +1, and −1 if p_{t(0)+Δt} < p_{t(−1)} and change = +1, where p_{t(−1)} is the adjusted closing price at the end of the last trading day, and p_{t(0)+Δt} is the price at the end of the trading day after the Δt-day delay. [sent-297, score-0.333]
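The labeling scheme described in this section (a change label when the relative price move after the Δt delay exceeds the 2% threshold r, and a polarity label for its direction) can be sketched as follows; how the paper treats polarity when there is no qualifying change is an assumption here:

```python
def label_news(prev_close, next_close, r=0.02):
    """Return (change, polarity) labels for one news day.

    prev_close: adjusted closing price at the end of the last trading day.
    next_close: adjusted closing price after the delay.
    change   = +1 if the relative move exceeds threshold r, else -1.
    polarity = +1 (increase) or -1 (decrease) when change == +1;
               0 otherwise (an assumed convention for no-change days).
    """
    rel = (next_close - prev_close) / prev_close
    change = 1 if abs(rel) > r else -1
    polarity = (1 if rel > 0 else -1) if change == 1 else 0
    return change, polarity
```

Dividend and split adjustment of the prices is assumed handled upstream.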

72 The average ratios of +/- classes for change and polarity over the six years’ data are 0. [sent-300, score-0.231]

73 Because the time frame for our experiments includes an economic crisis followed by a recovery period, we note that the ratio between increase and decrease of price flips between 2007, where it is 1. [sent-305, score-0.724]

74 Table 4: Average MCC for the change and polarity tasks by feature representation, for 2008-2010, for 2011-2012, and for all 5 years, with associated p-values of ANOVAs for comparison to BOW. [sent-369, score-0.269]
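For reference, MCC here is the Matthews correlation coefficient over the binary confusion matrix; a minimal implementation:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion counts.
    Returns 0.0 when any marginal is zero (a conventional choice)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom
```

MCC ranges from -1 to +1 and, unlike accuracy, is robust to the class skew noted for these tasks.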

75 Separate means are shown for the test years of financial crisis (2008-2010) and economic recovery (2011-2012) to highlight the differences in performance that might result from market volatility. [sent-384, score-0.442]

76 Table 5: Sample sLDA topics for consumer staples for test year 2010 (train on 2009), polarity task. [sent-393, score-0.398]

77 SemTree combined with FWD (SemTreeFWD) generally gives the best performance in both change and polarity tasks. [sent-394, score-0.231]

78 SemTree results here are based on the subset tree (SST) kernel, because of its greater precision in computing common frame structures and consistently better performance over the subtree (ST) kernel. [sent-395, score-0.388]
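A minimal illustration of the subset-tree (SST) kernel's fragment counting, in the style of Collins and Duffy's convolution kernel; the actual experiments rely on SVM tree-kernel machinery, so this recursion over nested-tuple trees is only a sketch:

```python
def production(node):
    """Label of a node plus the labels of its immediate children."""
    return (node[0],) + tuple(c if isinstance(c, str) else c[0] for c in node[1:])

def delta(n1, n2, lam=1.0):
    """Count common subset-tree fragments rooted at n1 and n2,
    with decay factor lam."""
    if isinstance(n1, str) or isinstance(n2, str):
        return 0.0
    if production(n1) != production(n2):
        return 0.0
    prod = lam
    for c1, c2 in zip(n1[1:], n2[1:]):
        if isinstance(c1, str):  # matching pre-terminal children add no factor
            continue
        prod *= 1.0 + delta(c1, c2, lam)
    return prod

def nodes(t):
    if isinstance(t, str):
        return []
    result = [t]
    for c in t[1:]:
        result.extend(nodes(c))
    return result

def sst_kernel(t1, t2, lam=1.0):
    """SST kernel: total number of shared subset-tree fragments."""
    return sum(delta(n1, n2, lam) for n1 in nodes(t1) for n2 in nodes(t2))
```

Unlike the subtree (ST) kernel, fragments here need not extend all the way to the leaves, which is the "greater precision in computing common frame structures" noted above.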

79 For polarity detection, SemTreeFWD was significantly better than BOW for each sector (see boldface p-values). [sent-400, score-0.317]

80 Table 5 displays a sample of sLDA topics with good performance on polarity for the consumer staples sector for training year 2009. [sent-403, score-0.559]

81 The positive topics are related to stock index details and retail data. [sent-404, score-0.267]

82 Figure 3: Best performing SemTree fragments for increase (+) and decrease (−) of price for the consumer staples sector across training years; negative fragments include (PHENOMENON(Perception_active(Target)(PERCEIVER_AGENTIVE)(PHENOMENON))), (TRIGGER(Response)), (Target(cuts)), and (VICTIM(Cause_harm(Target(hurt))(VICTIM))). [sent-415, score-0.751]

83 3 words, with an average of 14 frames per sentence, 3 of them with a GICS company as a role filler. [sent-418, score-0.326]

84 Because SemTree encodes only the frames containing a designated object (company), these are the frames we evaluated. [sent-419, score-0.515]

85 On average, about half the frames with a designated object were correct, and two thirds of those frames we judged to be important. [sent-420, score-0.515]

86 Post hoc analysis indicates this may be due to the aptness of semantic frame parsing for polarity. [sent-424, score-0.366]

87 To analyze which were the best performing features within sectors, we extracted the best performing frame fragments for the polarity task using a tree kernel feature engineering method presented in Pighin and Moschitti (2009). [sent-427, score-0.626]

88 Figure 3 shows the best performing SemTree fragments of the polarity task for the consumer staples sector. [sent-429, score-0.377]

89 Recall that we hypothesized differences in semantic frame features across sectors. [sent-430, score-0.366]

90 For example, in consumer staples, (EVALUEE(Judgment_communication)) has positive polarity, compared with negative polarity in the information technology sector. [sent-433, score-0.24]

91 (GOODS(Commerce_sell)) is related to a decrease in price for 2008 and 2009 but to an increase in price for 2010-2012. [sent-446, score-0.702]

92 We attribute this change to the difficulty of predicting stock price trends when there is the high volatility typical of a financial crisis. [sent-450, score-0.926]

93 50 per share, Ackman wrote, meaning that would essentially be paying for real estate, but gaining Longs' pharmacy benefit management business and retail operations for free, is treated as predicting a positive polarity for CVS. [sent-457, score-0.237]

94 8 Conclusion. We have presented a model for predicting stock price movement from news. [sent-460, score-0.55]

95 The experimental results show that our feature representation performs significantly better than BOW on the polarity task, and shows promise on the change task. [sent-463, score-0.267]

96 The signals generated by this algorithm could improve the prediction of a financial time series model, such as ADS (Rydberg and Shephard, 2003). [sent-465, score-0.235]

97 We will also explore different labeling methods, such as a threshold for price change tuned by sectors and background economics. [sent-468, score-0.569]

98 A sentiment detection engine for internet stock message boards. [sent-501, score-0.302]

99 Sentiment polarity identification in financial news: A cohesion-based approach. [sent-535, score-0.391]

100 Semantic frames as an anchor representation for sentiment analysis. [sent-618, score-0.297]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('price', 0.333), ('semtree', 0.315), ('frame', 0.315), ('financial', 0.235), ('stock', 0.217), ('frames', 0.176), ('sector', 0.161), ('mcc', 0.161), ('sectors', 0.161), ('polarity', 0.156), ('fwd', 0.149), ('market', 0.129), ('bow', 0.126), ('news', 0.122), ('designated', 0.121), ('company', 0.114), ('dal', 0.103), ('staples', 0.103), ('tetlock', 0.1), ('companies', 0.089), ('firms', 0.088), ('slda', 0.088), ('google', 0.087), ('sentiment', 0.085), ('consumer', 0.084), ('change', 0.075), ('tree', 0.073), ('longs', 0.066), ('perceiver', 0.066), ('rydberg', 0.066), ('volatility', 0.066), ('investors', 0.059), ('oracle', 0.056), ('year', 0.055), ('judgment', 0.054), ('sst', 0.054), ('cvs', 0.054), ('victim', 0.054), ('semantic', 0.051), ('backbones', 0.05), ('creamer', 0.05), ('luss', 0.05), ('retail', 0.05), ('semtreefwd', 0.05), ('shephard', 0.05), ('sued', 0.05), ('trading', 0.048), ('agentive', 0.048), ('skew', 0.048), ('kernel', 0.048), ('sue', 0.046), ('april', 0.044), ('kernels', 0.044), ('agarwal', 0.043), ('object', 0.042), ('android', 0.041), ('ppi', 0.041), ('economic', 0.04), ('corporate', 0.04), ('predict', 0.04), ('media', 0.038), ('corp', 0.038), ('das', 0.038), ('years', 0.038), ('day', 0.037), ('august', 0.037), ('decrease', 0.036), ('ads', 0.036), ('framenet', 0.036), ('moschitti', 0.036), ('representation', 0.036), ('role', 0.036), ('prices', 0.035), ('fragments', 0.034), ('lavrenko', 0.034), ('anovas', 0.033), ('aspremont', 0.033), ('baldi', 0.033), ('defend', 0.033), ('donor', 0.033), ('engelberg', 0.033), ('estate', 0.033), ('evaluee', 0.033), ('germ', 0.033), ('gics', 0.033), ('haider', 0.033), ('jurman', 0.033), ('kogan', 0.033), ('lawsuits', 0.033), ('orcl', 0.033), ('suing', 0.033), ('parses', 0.033), ('reuters', 0.032), ('ruppenhofer', 0.032), ('java', 0.031), ('hurt', 0.031), ('classification', 0.031), ('fillmore', 0.031), ('business', 0.031), ('accounting', 0.03)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999905 310 acl-2013-Semantic Frames to Predict Stock Price Movement

Author: Boyi Xie ; Rebecca J. Passonneau ; Leon Wu ; German G. Creamer

Abstract: Semantic frames are a rich linguistic resource. There has been much work on semantic frame parsers, but less that applies them to general NLP problems. We address a task to predict change in stock price from financial news. Semantic frames help to generalize from specific sentences to scenarios, and to detect the (positive or negative) roles of specific companies. We introduce a novel tree representation, and use it to train predictive models with tree kernels using support vector machines. Our experiments test multiple text representations on two binary classification tasks, change of price and polarity. Experiments show that features derived from semantic frame parsing have significantly better performance across years on the polarity task.

2 0.21454532 265 acl-2013-Outsourcing FrameNet to the Crowd

Author: Marco Fossati ; Claudio Giuliano ; Sara Tonelli

Abstract: We present the first attempt to perform full FrameNet annotation with crowdsourcing techniques. We compare two approaches: the first one is the standard annotation methodology of lexical units and frame elements in two steps, while the second is a novel approach aimed at acquiring frames in a bottom-up fashion, starting from frame element annotation. We show that our methodology, relying on a single annotation step and on simplified role definitions, outperforms the standard one both in terms of accuracy and time.

3 0.18509044 147 acl-2013-Exploiting Topic based Twitter Sentiment for Stock Prediction

Author: Jianfeng Si ; Arjun Mukherjee ; Bing Liu ; Qing Li ; Huayi Li ; Xiaotie Deng

Abstract: This paper proposes a technique to leverage topic based sentiments from Twitter to help predict the stock market. We first utilize a continuous Dirichlet Process Mixture model to learn the daily topic set. Then, for each topic we derive its sentiment according to its opinion words distribution to build a sentiment time series. We then regress the stock index and the Twitter sentiment time series to predict the market. Experiments on real-life S&P100; Index show that our approach is effective and performs better than existing state-of-the-art non-topic based methods. 1

4 0.13381505 115 acl-2013-Detecting Event-Related Links and Sentiments from Social Media Texts

Author: Alexandra Balahur ; Hristo Tanev

Abstract: Nowadays, the importance of Social Media is constantly growing, as people often use such platforms to share mainstream media news and comment on the events that they relate to. As such, people no loger remain mere spectators to the events that happen in the world, but become part of them, commenting on their developments and the entities involved, sharing their opinions and distributing related content. This paper describes a system that links the main events detected from clusters of newspaper articles to tweets related to them, detects complementary information sources from the links they contain and subsequently applies sentiment analysis to classify them into positive, negative and neutral. In this manner, readers can follow the main events happening in the world, both from the perspective of mainstream as well as social media and the public’s perception on them. This system will be part of the EMM media monitoring framework working live and it will be demonstrated using Google Earth.

5 0.13330643 224 acl-2013-Learning to Extract International Relations from Political Context

Author: Brendan O'Connor ; Brandon M. Stewart ; Noah A. Smith

Abstract: We describe a new probabilistic model for extracting events between major political actors from news corpora. Our unsupervised model brings together familiar components in natural language processing (like parsers and topic models) with contextual political information— temporal and dyad dependence—to infer latent event classes. We quantitatively evaluate the model’s performance on political science benchmarks: recovering expert-assigned event class valences, and detecting real-world conflict. We also conduct a small case study based on our model’s inferences. A supplementary appendix, and replication software/data are available online, at: http://brenocon.com/irevents

6 0.12958905 188 acl-2013-Identifying Sentiment Words Using an Optimization-based Model without Seed Words

7 0.11270738 162 acl-2013-FrameNet on the Way to Babel: Creating a Bilingual FrameNet Using Wiktionary as Interlingual Connection

8 0.11008187 131 acl-2013-Dual Training and Dual Prediction for Polarity Classification

9 0.10675814 134 acl-2013-Embedding Semantic Similarity in Tree Kernels for Domain Adaptation of Relation Extraction

10 0.10481297 119 acl-2013-Diathesis alternation approximation for verb clustering

11 0.099913739 144 acl-2013-Explicit and Implicit Syntactic Features for Text Classification

12 0.098757014 146 acl-2013-Exploiting Social Media for Natural Language Processing: Bridging the Gap between Language-centric and Real-world Applications

13 0.095303215 148 acl-2013-Exploring Sentiment in Social Media: Bootstrapping Subjectivity Clues from Multilingual Twitter Streams

14 0.091009699 2 acl-2013-A Bayesian Model for Joint Unsupervised Induction of Sentiment, Aspect and Discourse Representations

15 0.081235252 253 acl-2013-Multilingual Affect Polarity and Valence Prediction in Metaphor-Rich Texts

16 0.076106071 318 acl-2013-Sentiment Relevance

17 0.074559242 117 acl-2013-Detecting Turnarounds in Sentiment Analysis: Thwarting

18 0.071995884 379 acl-2013-Utterance-Level Multimodal Sentiment Analysis

19 0.068905741 187 acl-2013-Identifying Opinion Subgroups in Arabic Online Discussions

20 0.068793625 351 acl-2013-Topic Modeling Based Classification of Clinical Reports


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.194), (1, 0.153), (2, -0.023), (3, 0.041), (4, -0.054), (5, -0.015), (6, 0.055), (7, 0.072), (8, 0.048), (9, 0.006), (10, 0.032), (11, 0.028), (12, -0.021), (13, 0.006), (14, -0.01), (15, -0.014), (16, 0.001), (17, 0.06), (18, 0.08), (19, 0.015), (20, 0.076), (21, -0.009), (22, 0.012), (23, -0.036), (24, -0.015), (25, -0.062), (26, -0.111), (27, -0.064), (28, 0.009), (29, 0.092), (30, -0.021), (31, 0.069), (32, 0.04), (33, -0.077), (34, -0.132), (35, -0.061), (36, -0.099), (37, -0.103), (38, 0.105), (39, 0.123), (40, 0.093), (41, 0.074), (42, -0.033), (43, 0.0), (44, -0.139), (45, 0.074), (46, -0.131), (47, -0.017), (48, 0.054), (49, 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.91449535 310 acl-2013-Semantic Frames to Predict Stock Price Movement

Author: Boyi Xie ; Rebecca J. Passonneau ; Leon Wu ; German G. Creamer

Abstract: Semantic frames are a rich linguistic resource. There has been much work on semantic frame parsers, but less that applies them to general NLP problems. We address a task to predict change in stock price from financial news. Semantic frames help to generalize from specific sentences to scenarios, and to detect the (positive or negative) roles of specific companies. We introduce a novel tree representation, and use it to train predictive models with tree kernels using support vector machines. Our experiments test multiple text representations on two binary classification tasks, change of price and polarity. Experiments show that features derived from semantic frame parsing have significantly better performance across years on the polarity task.

2 0.74334699 265 acl-2013-Outsourcing FrameNet to the Crowd

Author: Marco Fossati ; Claudio Giuliano ; Sara Tonelli

Abstract: We present the first attempt to perform full FrameNet annotation with crowdsourcing techniques. We compare two approaches: the first one is the standard annotation methodology of lexical units and frame elements in two steps, while the second is a novel approach aimed at acquiring frames in a bottom-up fashion, starting from frame element annotation. We show that our methodology, relying on a single annotation step and on simplified role definitions, outperforms the standard one both in terms of accuracy and time.

3 0.66569704 119 acl-2013-Diathesis alternation approximation for verb clustering

Author: Lin Sun ; Diana McCarthy ; Anna Korhonen

Abstract: Although diathesis alternations have been used as features for manual verb classification, and there is recent work on incorporating such features in computational models of human language acquisition, work on large scale verb classification has yet to examine the potential for using diathesis alternations as input features to the clustering process. This paper proposes a method for approximating diathesis alternation behaviour in corpus data and shows, using a state-of-the-art verb clustering system, that features based on alternation approximation outperform those based on independent subcategorization frames. Our alternation-based approach is particularly adept at leveraging information from less frequent data.

4 0.57857382 224 acl-2013-Learning to Extract International Relations from Political Context

Author: Brendan O'Connor ; Brandon M. Stewart ; Noah A. Smith

Abstract: We describe a new probabilistic model for extracting events between major political actors from news corpora. Our unsupervised model brings together familiar components in natural language processing (like parsers and topic models) with contextual political information— temporal and dyad dependence—to infer latent event classes. We quantitatively evaluate the model’s performance on political science benchmarks: recovering expert-assigned event class valences, and detecting real-world conflict. We also conduct a small case study based on our model’s inferences. A supplementary appendix, and replication software/data are available online, at: http://brenocon.com/irevents

5 0.54465884 192 acl-2013-Improved Lexical Acquisition through DPP-based Verb Clustering

Author: Roi Reichart ; Anna Korhonen

Abstract: Subcategorization frames (SCFs), selectional preferences (SPs) and verb classes capture related aspects of the predicateargument structure. We present the first unified framework for unsupervised learning of these three types of information. We show how to utilize Determinantal Point Processes (DPPs), elegant probabilistic models that are defined over the possible subsets of a given dataset and give higher probability mass to high quality and diverse subsets, for clustering. Our novel clustering algorithm constructs a joint SCF-DPP DPP kernel matrix and utilizes the efficient sampling algorithms of DPPs to cluster together verbs with similar SCFs and SPs. We evaluate the induced clusters in the context of the three tasks and show results that are superior to strong baselines for each 1.

6 0.48850527 162 acl-2013-FrameNet on the Way to Babel: Creating a Bilingual FrameNet Using Wiktionary as Interlingual Connection

7 0.4729664 213 acl-2013-Language Acquisition and Probabilistic Models: keeping it simple

8 0.46699122 131 acl-2013-Dual Training and Dual Prediction for Polarity Classification

9 0.46217978 147 acl-2013-Exploiting Topic based Twitter Sentiment for Stock Prediction

10 0.44755644 349 acl-2013-The mathematics of language learning

11 0.44707963 115 acl-2013-Detecting Event-Related Links and Sentiments from Social Media Texts

12 0.44391808 117 acl-2013-Detecting Turnarounds in Sentiment Analysis: Thwarting

13 0.42818183 144 acl-2013-Explicit and Implicit Syntactic Features for Text Classification

14 0.40464258 188 acl-2013-Identifying Sentiment Words Using an Optimization-based Model without Seed Words

15 0.39887401 148 acl-2013-Exploring Sentiment in Social Media: Bootstrapping Subjectivity Clues from Multilingual Twitter Streams

16 0.39705706 91 acl-2013-Connotation Lexicon: A Dash of Sentiment Beneath the Surface Meaning

17 0.39303854 318 acl-2013-Sentiment Relevance

18 0.39027986 344 acl-2013-The Effects of Lexical Resource Quality on Preference Violation Detection

19 0.39006475 14 acl-2013-A Novel Classifier Based on Quantum Computation

20 0.38555503 379 acl-2013-Utterance-Level Multimodal Sentiment Analysis


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.046), (6, 0.035), (11, 0.05), (15, 0.018), (24, 0.052), (26, 0.084), (35, 0.066), (38, 0.015), (42, 0.036), (48, 0.04), (51, 0.267), (70, 0.047), (88, 0.047), (90, 0.027), (95, 0.072)]
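The topicId/topicWeight pairs above are a sparse LDA topic vector for this paper; "similar papers computed by lda model" then reduces to a vector similarity between such profiles. A minimal sketch — the cosine measure and the second paper's vector are illustrative assumptions, not taken from this page's own pipeline:

```python
import math

def cosine(a, b):
    """Cosine similarity between sparse topic vectors (dicts topicId -> weight)."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    if na == 0.0 or nb == 0.0:
        return 0.0
    return dot / (na * nb)

# Sparse LDA topic distribution for this paper, as listed above.
this_paper = {0: 0.046, 6: 0.035, 11: 0.05, 15: 0.018, 24: 0.052,
              26: 0.084, 35: 0.066, 38: 0.015, 42: 0.036, 48: 0.04,
              51: 0.267, 70: 0.047, 88: 0.047, 90: 0.027, 95: 0.072}

# A hypothetical second paper's topic vector, for illustration only.
other_paper = {11: 0.12, 26: 0.05, 51: 0.30, 70: 0.02, 95: 0.10}

score = cosine(this_paper, other_paper)
```

Ranking every candidate paper by this score against `this_paper` yields a simValue list of the kind shown below.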

similar papers list:

simIndex simValue paperId paperTitle

1 0.85855025 35 acl-2013-Adaptation Data Selection using Neural Language Models: Experiments in Machine Translation

Author: Kevin Duh ; Graham Neubig ; Katsuhito Sudoh ; Hajime Tsukada

Abstract: Data selection is an effective approach to domain adaptation in statistical machine translation. The idea is to use language models trained on small in-domain text to select similar sentences from large general-domain corpora, which are then incorporated into the training data. Substantial gains have been demonstrated in previous works, which employ standard n-gram language models. Here, we explore the use of neural language models for data selection. We hypothesize that the continuous vector representation of words in neural language models makes them more effective than n-grams for modeling unknown word contexts, which are prevalent in general-domain text. In a comprehensive evaluation of 4 language pairs (English to German, French, Russian, Spanish), we found that neural language models are indeed viable tools for data selection: while the improvements are varied (i.e. 0.1 to 1.7 gains in BLEU), they are fast to train on small in-domain data and can sometimes substantially outperform conventional n-grams.
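The selection criterion this abstract describes — rank general-domain sentences by how much more probable an in-domain LM finds them than a general-domain LM — works with any pair of language models. The sketch below uses add-one-smoothed unigram models as stand-ins for the paper's neural LMs, and all the corpora are made up:

```python
import math
from collections import Counter

def unigram_lm(corpus, vocab):
    """Add-one-smoothed unigram model; a stand-in for the paper's neural LMs."""
    counts = Counter(w for sent in corpus for w in sent.split())
    total = sum(counts.values()) + len(vocab)
    return {w: (counts[w] + 1) / total for w in vocab}

def per_word_logprob(lm, sent):
    words = sent.split()
    return sum(math.log(lm[w]) for w in words) / len(words)

in_domain = ["the translation model is trained",
             "the language model is adapted"]
general = ["the cat sat on the mat",
           "the translation model is trained",
           "stocks fell on monday"]
vocab = {w for s in in_domain + general for w in s.split()}

lm_in = unigram_lm(in_domain, vocab)
lm_gen = unigram_lm(general, vocab)

# Cross-entropy-difference score: higher means more in-domain-like.
scored = sorted(general,
                key=lambda s: per_word_logprob(lm_in, s) - per_word_logprob(lm_gen, s),
                reverse=True)
selected = scored[:1]  # keep the most in-domain-like sentences
```

The top-ranked sentences are then added to the SMT training data, which is the adaptation step the abstract reports BLEU gains for.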

same-paper 2 0.79359984 310 acl-2013-Semantic Frames to Predict Stock Price Movement

Author: Boyi Xie ; Rebecca J. Passonneau ; Leon Wu ; German G. Creamer

Abstract: Semantic frames are a rich linguistic resource. There has been much work on semantic frame parsers, but less that applies them to general NLP problems. We address a task to predict change in stock price from financial news. Semantic frames help to generalize from specific sentences to scenarios, and to detect the (positive or negative) roles of specific companies. We introduce a novel tree representation, and use it to train predictive models with tree kernels using support vector machines. Our experiments test multiple text representations on two binary classification tasks, change of price and polarity. Experiments show that features derived from semantic frame parsing have significantly better performance across years on the polarity task.

3 0.61220646 201 acl-2013-Integrating Translation Memory into Phrase-Based Machine Translation during Decoding

Author: Kun Wang ; Chengqing Zong ; Keh-Yih Su

Abstract: Since statistical machine translation (SMT) and translation memory (TM) complement each other in matched and unmatched regions, integrated models are proposed in this paper to incorporate TM information into phrase-based SMT. Unlike previous multi-stage pipeline approaches, which directly merge TM result into the final output, the proposed models refer to the corresponding TM information associated with each phrase at SMT decoding. On a Chinese–English TM database, our experiments show that the proposed integrated Model-III is significantly better than either the SMT or the TM systems when the fuzzy match score is above 0.4. Furthermore, integrated Model-III achieves overall 3.48 BLEU points improvement and 2.62 TER points reduction in comparison with the pure SMT system. Besides, the proposed models also outperform previous approaches significantly.
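The fuzzy match score that gates the abstract's result (gains when it is above 0.4) measures how close an input sentence is to a TM entry's source side. A minimal sketch, using `difflib`'s ratio as a stand-in for the edit-distance-based score in the paper; the sentences and the helper names are illustrative:

```python
import difflib

def fuzzy_match_score(source, tm_source):
    """Word-level similarity between the input and a TM entry's source side.
    difflib's ratio (2*matches/total) stands in for the paper's FMS."""
    return difflib.SequenceMatcher(None, source.split(), tm_source.split()).ratio()

def use_tm_info(source, tm_source, threshold=0.4):
    # The abstract reports that TM integration helps when FMS > 0.4.
    return fuzzy_match_score(source, tm_source) > threshold

score = fuzzy_match_score("the contract will be signed tomorrow",
                          "the contract will be signed today")
```

In the integrated models, this score (and the matched TM phrases) would be consulted per phrase at decoding time rather than used to pick a whole output, which is the paper's departure from pipeline approaches.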

4 0.60903066 383 acl-2013-Vector Space Model for Adaptation in Statistical Machine Translation

Author: Boxing Chen ; Roland Kuhn ; George Foster

Abstract: This paper proposes a new approach to domain adaptation in statistical machine translation (SMT) based on a vector space model (VSM). The general idea is first to create a vector profile for the in-domain development (“dev”) set. This profile might, for instance, be a vector with a dimensionality equal to the number of training subcorpora; each entry in the vector reflects the contribution of a particular subcorpus to all the phrase pairs that can be extracted from the dev set. Then, for each phrase pair extracted from the training data, we create a vector with features defined in the same way, and calculate its similarity score with the vector representing the dev set. Thus, we obtain a decoding feature whose value represents the phrase pair’s closeness to the dev set. This is a simple, computationally cheap form of instance weighting for phrase pairs. Experiments on large scale NIST evaluation data show improvements over strong baselines: +1.8 BLEU on Arabic to English and +1.4 BLEU on Chinese to English over a non-adapted baseline, and significant improvements in most circumstances over baselines with linear mixture model adaptation. An informal analysis suggests that VSM adaptation may help in making a good choice among words with the same meaning, on the basis of style and genre.
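The abstract's general idea — a dev-set profile over training subcorpora, a same-dimensioned vector per phrase pair, and their similarity as a decoding feature — can be sketched directly. All counts, phrase pairs, and the cosine choice below are illustrative assumptions:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical profile: contribution of three training subcorpora to the
# phrase pairs extractable from the dev set (dimensions = subcorpora).
dev_profile = [120.0, 30.0, 5.0]

# Per-phrase-pair vectors over the same subcorpus dimensions (made-up counts).
phrase_pairs = {
    ("maison", "house"): [80.0, 10.0, 2.0],        # distributed like the dev set
    ("bourse", "stock market"): [1.0, 2.0, 90.0],  # mostly from subcorpus 2
}

# Similarity to the dev profile becomes one extra decoding feature per pair.
features = {pp: cosine(vec, dev_profile) for pp, vec in phrase_pairs.items()}
```

A pair whose subcorpus distribution matches the dev set's gets a high feature value, so the decoder is nudged toward in-domain phrase pairs — the cheap instance weighting the abstract claims.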

5 0.54472089 318 acl-2013-Sentiment Relevance

Author: Christian Scheible ; Hinrich Schutze

Abstract: A number of different notions, including subjectivity, have been proposed for distinguishing parts of documents that convey sentiment from those that do not. We propose a new concept, sentiment relevance, to make this distinction and argue that it better reflects the requirements of sentiment analysis systems. We demonstrate experimentally that sentiment relevance and subjectivity are related, but different. Since no large amount of labeled training data for our new notion of sentiment relevance is available, we investigate two semi-supervised methods for creating sentiment relevance classifiers: a distant supervision approach that leverages structured information about the domain of the reviews; and transfer learning on feature representations based on lexical taxonomies that enables knowledge transfer. We show that both methods learn sentiment relevance classifiers that perform well.

6 0.54190058 144 acl-2013-Explicit and Implicit Syntactic Features for Text Classification

7 0.54115868 2 acl-2013-A Bayesian Model for Joint Unsupervised Induction of Sentiment, Aspect and Discourse Representations

8 0.53837681 131 acl-2013-Dual Training and Dual Prediction for Polarity Classification

9 0.53594071 369 acl-2013-Unsupervised Consonant-Vowel Prediction over Hundreds of Languages

10 0.53259671 236 acl-2013-Mapping Source to Target Strings without Alignment by Analogical Learning: A Case Study with Transliteration

11 0.53196454 117 acl-2013-Detecting Turnarounds in Sentiment Analysis: Thwarting

12 0.53098696 196 acl-2013-Improving pairwise coreference models through feature space hierarchy learning

13 0.53053874 233 acl-2013-Linking Tweets to News: A Framework to Enrich Short Text Data in Social Media

14 0.53005922 70 acl-2013-Bilingually-Guided Monolingual Dependency Grammar Induction

15 0.52972424 373 acl-2013-Using Conceptual Class Attributes to Characterize Social Media Users

16 0.52967149 81 acl-2013-Co-Regression for Cross-Language Review Rating Prediction

17 0.52940434 95 acl-2013-Crawling microblogging services to gather language-classified URLs. Workflow and case study

18 0.52742219 333 acl-2013-Summarization Through Submodularity and Dispersion

19 0.52697366 295 acl-2013-Real-World Semi-Supervised Learning of POS-Taggers for Low-Resource Languages

20 0.52493495 187 acl-2013-Identifying Opinion Subgroups in Arabic Online Discussions