acl acl2013 acl2013-20 knowledge-graph by maker-knowledge-mining

20 acl-2013-A Stacking-based Approach to Twitter User Geolocation Prediction


Source: pdf

Author: Bo Han ; Paul Cook ; Timothy Baldwin

Abstract: We implement a city-level geolocation prediction system for Twitter users. The system infers a user’s location based on both tweet text and user-declared metadata using a stacking approach. We demonstrate that the stacking method substantially outperforms benchmark methods, achieving 49% accuracy on a benchmark dataset. We further evaluate our method on a recent crawl of Twitter data to investigate the impact of temporal factors on model generalisation. Our results suggest that user-declared location metadata is more sensitive to temporal change than the text of Twitter messages. We also describe two ways of accessing/demoing our system.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract We implement a city-level geolocation prediction system for Twitter users. [sent-7, score-0.657]

2 The system infers a user’s location based on both tweet text and user-declared metadata using a stacking approach. [sent-8, score-0.674]

3 We demonstrate that the stacking method substantially outperforms benchmark methods, achieving 49% accuracy on a benchmark dataset. [sent-9, score-0.29]

4 Our results suggest that user-declared location metadata is more sensitive to temporal change than the text of Twitter messages. [sent-11, score-0.302]

5 1 Introduction In this paper, we present and evaluate a geolocation prediction method for Twitter users. [sent-13, score-0.657]

6 1 Given a user’s tweet data as input, the task of user level geolocation prediction is to infer a primary location (i. [sent-14, score-1.182]

7 (2012)) for the user from a discrete set of pre-defined locations (Cheng et al. [sent-17, score-0.261]

8 For instance, President Obama’s location might be predicted to be Washington D.C., [sent-19, score-0.219]

9 USA, based on his public tweets and profile metadata. [sent-21, score-0.317]

10 Although Twitter allows users to specify a plain text description of their location in their profile, these descriptions tend to be ad hoc and unreliable (Cheng (Footnote 1: We only use public Twitter data for experiments and exemplification in this study.) [sent-26, score-0.412]

11 Recently, user geolocation prediction based on a user’s tweets has become popular (Wing and Baldridge, 2011; Roller et al. [sent-29, score-1.011]

12 , 2012), based on the assumption that tweets implicitly contain locating information, and with appropriate statistical modeling, the true location can be inferred. [sent-30, score-0.405]

13 For instance, if a user frequently mentions NYC, JFK and yankees, it is likely that they are from New York City, USA. [sent-31, score-0.163]

14 In this paper, we discuss an implementation of a global city-level geolocation prediction system for English Twitter users. [sent-32, score-0.657]

15 The system utilises both tweet text and public profile metadata for modeling and inference. [sent-33, score-0.422]

16 Specifically, we train multinomial Bayes classifiers based on location indicative words (LIWs) in tweets (Han et al. [sent-34, score-0.502]

17 The proposed stacking model is compared with benchmarks on a public geolocation dataset. [sent-37, score-0.883]

18 Experimental results demonstrate that our stacking model outperforms benchmark methods by a large margin, achieving 49% accuracy on the test data. [sent-38, score-0.242]

19 We further evaluate the stacking model on a more recent crawl of public tweets. [sent-39, score-0.293]

20 This experiment tests the effectiveness of a geolocation model trained on “old” data when applied to “new” data. [sent-40, score-0.581]

21 The results reveal that user-declared locations are more variable over time than tweet text and time zone data. [sent-41, score-0.378]

22 2 Background and Related Work Identifying the geolocation of objects has been widely studied in the research literature over target objects including webpages (Zong et al. [sent-42, score-0.581]

23 Recently, a considerable amount of work has been devoted to extending geolocation prediction for Twitter users. [sent-46, score-0.657]

(Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 7–12, Sofia, Bulgaria, August 4–9 2013.)

24 (Cheng et al. [sent-48, score-0.104]

25 User geolocation is generally related to a “home” location where a user regularly resides, and user mobility is ignored. [sent-54, score-1.091]

26 Twitter allows users to declare their home locations in plain text in their profile; however, this data has been found to be unstructured and ad hoc in preliminary research (Cheng et al. [sent-55, score-0.345]

27 , 1999) cannot be applied to Twitter-based user geolocation, as IPs are only known to the service provider and are non-trivial to retrieve in a mobile Internet environment. [sent-59, score-0.197]

28 Although social network information has been proven effective in inferring user locations (Backstrom et al. [sent-60, score-0.299]

29 , 2013), we focus exclusively on message and metadata information in this paper, as they are more readily accessible. [sent-63, score-0.155]

30 Beyond identifying geographical references using off-the-shelf tools, more sophisticated methods have been introduced in the social media realm. [sent-73, score-0.134]

31 (2010) built a simple generative model based on tweet words, and further added words which are local to particular regions and applied smoothing to under-represented locations. [sent-75, score-0.178]

32 (2011) applied different similarity measures to the task, and investigated the relative difficulty of geolocation prediction at city, state, and country levels. [sent-77, score-0.657]

33 Wing and Baldridge (2011) introduced a grid-based representation for geolocation modeling and inference based on fixed latitude and longitude values, and aggregated all tweets in a single cell. [sent-78, score-0.772]

34 One drawback to the uniform-sized cell representation is that it introduces class imbalance: urban areas tend to contain far more tweets than rural areas. [sent-80, score-0.248]

35 Given that most tweets are from urban areas, Han et al. [sent-83, score-0.224]

36 (2012) consider a city-based class division, and explore different feature selection methods to extract “location indicative words”, which they show to improve prediction accuracy. [sent-84, score-0.148]
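The “location indicative words” idea above can be illustrated with a rough, hypothetical scoring rule (the actual feature-selection criteria of Han et al. (2012) are not reproduced in this summary): score each word by how concentrated its usage is in a single city, and keep only high-scoring words as features.

```python
from collections import Counter, defaultdict

def liw_scores(words_by_city):
    """Score each word by its maximum per-city usage proportion.

    words_by_city: dict mapping city name -> list of word tokens seen in
    tweets from that city. A score near 1.0 means the word occurs almost
    exclusively in one city (a 'location indicative word'-style signal);
    a score near 1/num_cities means it is uninformative. This rule is an
    illustrative stand-in, not the paper's actual selection method.
    """
    per_word = defaultdict(Counter)
    for city, words in words_by_city.items():
        for w in words:
            per_word[w][city] += 1
    return {w: max(c.values()) / sum(c.values()) for w, c in per_word.items()}

def select_liws(words_by_city, threshold=0.8):
    """Keep words whose concentration score exceeds the threshold."""
    return {w for w, s in liw_scores(words_by_city).items() if s > threshold}
```

On a toy corpus, a term like "yankees" seen only in New York tweets scores 1.0, while a word used everywhere scores low and is discarded.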

37 Additionally, time zone information has been incorporated in a coarse-to-fine hierarchical model by first determining the time zone, and then disambiguating locations within it (Mahmud et al. [sent-85, score-0.2]

38 When designing a practical geolocation system, simple models such as naive Bayes and nearest prototype methods (e. [sent-91, score-0.692]

39 As such, we build off the text-based naive Bayes-based geolocation system of Han et al. [sent-95, score-0.641]

40 By selecting a reduced set of “location indicative words”, prediction can be further accelerated. [sent-97, score-0.124]

41 3 Methodology In this study, we adopt the same city-based representation and multinomial naive Bayes learner as Han et al. [sent-98, score-0.112]

42 The city-based representation consists of 3,709 cities throughout the world, and is obtained by aggregating smaller cities with the largest nearby city. [sent-100, score-0.152]

43 (2012) found that using feature selection to identify “location indicative words” led to improvements in geolocation performance. [sent-102, score-0.629]

44 (2012), only the text of Twitter messages was used, and training was based exclusively on geotagged tweets, despite these accounting for only around 1% of the total public data on Twitter. [sent-106, score-0.312]

45 In this research, we include additional non-geotagged tweets (e. [sent-107, score-0.191]

46 , posted from a non-GPS enabled device) for those users who have geotagged tweets (allowing us to determine a home location for the user). [sent-109, score-0.832]

47 In addition to including non-geotagged data in modeling and inference, we further take advantage of the text-based metadata embedded in a user’s public profile (and included in the JSON object for each tweet). [sent-110, score-0.244]

48 This metadata is potentially complementary to the tweet message and of benefit for geolocation prediction, especially the user-declared location and time zone, which we consider here. [sent-111, score-1.141]

49 As such, we adopt a statistical approach to model each selected metadata field, by capturing the text in the form of character 4-grams, and training a multinomial naive Bayes classifier for each field. [sent-117, score-0.23]
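A minimal sketch of the per-field modeling described above, assuming character 4-gram features and an add-one-smoothed multinomial naive Bayes classifier per field; the city labels and field strings below are invented toy data, not the paper's.

```python
import math
from collections import Counter

def char_ngrams(text, n=4):
    """Character n-grams of a metadata field value (lowercased);
    strings shorter than n fall back to the whole string."""
    text = text.lower()
    return [text[i:i + n] for i in range(len(text) - n + 1)] or [text]

class TinyMultinomialNB:
    """Add-one-smoothed multinomial naive Bayes; one instance per field."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.prior = Counter(labels)
        self.n = len(labels)
        self.counts = {c: Counter() for c in self.classes}
        for feats, y in zip(docs, labels):
            self.counts[y].update(feats)
        self.vocab_size = len({f for c in self.counts.values() for f in c})
        self.totals = {c: sum(self.counts[c].values()) for c in self.classes}
        return self

    def predict(self, feats):
        def log_posterior(c):
            lp = math.log(self.prior[c] / self.n)
            for f in feats:
                lp += math.log((self.counts[c][f] + 1)
                               / (self.totals[c] + self.vocab_size))
            return lp
        return max(self.classes, key=log_posterior)
```

Training one such classifier on user-declared location strings and another on time-zone strings yields per-field base models of the kind the paper calls MB-LOC and MB-TZ.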

50 To combine together the tweet text and metadata fields, we use stacking (Wolpert, 1992). [sent-118, score-0.49]

51 First, a multinomial naive Bayes base classifier (L0) is learned for each data type using 10-fold cross validation. [sent-120, score-0.142]

52 This is carried out for the tweet text (TEXT), user-declared location (MB-LOC) and user-declared time zone (MB-TZ). [sent-121, score-0.464]
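The stacking procedure in sentences 50 to 52 can be sketched as follows. This is a toy: the base learner is a deliberately simple most-frequent-label-per-feature-value model standing in for the multinomial naive Bayes classifiers, 2-fold rather than 10-fold cross-validation is used, and the field names and data are invented.

```python
from collections import Counter

def fit_base(examples):
    """Toy level-0 learner: map each feature value to its most common
    label (a stand-in for the per-field naive Bayes classifiers)."""
    by_feat = {}
    for feat, label in examples:
        by_feat.setdefault(feat, Counter())[label] += 1
    fallback = Counter(label for _, label in examples).most_common(1)[0][0]
    table = {feat: c.most_common(1)[0][0] for feat, c in by_feat.items()}
    return lambda feat: table.get(feat, fallback)

def out_of_fold(examples, k=2):
    """Level-0 predictions via k-fold cross-validation, so each example
    is predicted by a model that was not trained on it (the paper
    describes 10-fold)."""
    preds = [None] * len(examples)
    for fold in range(k):
        train = [e for i, e in enumerate(examples) if i % k != fold]
        model = fit_base(train)
        for i, (feat, _) in enumerate(examples):
            if i % k == fold:
                preds[i] = model(feat)
    return preds

def level1_rows(fields, labels, k=2):
    """Build the level-1 training set: one row per user, pairing the
    out-of-fold base predictions for every field with the gold city.
    A meta-classifier trained on these rows completes the stack."""
    cols = {name: out_of_fold(list(zip(values, labels)), k)
            for name, values in fields.items()}
    names = sorted(cols)
    return [tuple(cols[n][i] for n in names) + (labels[i],)
            for i in range(len(labels))]
```

The key point the cross-validation enforces is that the level-1 learner sees realistic (held-out) base predictions rather than optimistically overfit ones.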

53 4 Evaluation and Discussion In this section, we compare our proposed stacking approach with existing benchmarks on a public dataset, and investigate the impact of time using a recently collected dataset. [sent-124, score-0.302]

54 1 Evaluation Measures In line with other work on user geolocation prediction, we use three evaluation measures: • Acc : The percentage of correct city-level predictions. [sent-126, score-0.744]

55 (Table 1: Results over WORLD.) • Acc@161: The percentage of predictions within a 161 km radius of the home location (Cheng et al. [sent-141, score-0.3]

56 • Median: The median distance from the predicted city to the home location (Eisenstein et al. [sent-145, score-0.113]
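The three measures just listed (city-level accuracy, accuracy within a 161 km radius of the home location, which this literature calls Acc@161, and median error distance) can be computed as below; the haversine formula gives the great-circle distance, and the city names and coordinates are illustrative.

```python
import math
from statistics import median

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

def evaluate(predicted, gold, coords):
    """predicted/gold: lists of city names; coords: city -> (lat, lon).
    Returns (Acc, Acc@161, Median) as defined above."""
    acc = sum(p == g for p, g in zip(predicted, gold)) / len(gold)
    dists = [haversine_km(coords[p], coords[g])
             for p, g in zip(predicted, gold)]
    acc161 = sum(d <= 161 for d in dists) / len(dists)
    return acc, acc161, median(dists)
```

Note that Acc@161 and Median reward near misses (a prediction in a neighbouring city), which plain Acc does not.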

57 4M users whose tweets are primarily identified as English based on the output of the langid.py [sent-151, score-0.333]

58 language identification tool (Lui and Baldwin, 2012), and who have posted at least 10 geotagged tweets. [sent-152, score-0.237]

59 The city-level home location for a geotagged user is determined as follows. [sent-153, score-0.675]

60 First, each of a user’s geotagged tweets is mapped to its nearest city (based on the same set of 3,709 cities used for the city-based location representation). [sent-154, score-0.758]

61 Then, the most frequent city for a user is selected as the home location. [sent-155, score-0.347]
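The home-location assignment in sentences 59 to 61 (map each geotagged tweet to its nearest city, then take the user's most frequent city) can be sketched as below; a toy city set stands in for the 3,709 real cities, and squared lat/lon distance stands in for proper geodesic distance.

```python
from collections import Counter

def nearest_city(point, cities):
    """cities: name -> (lat, lon). Squared Euclidean distance on raw
    coordinates is a toy stand-in for great-circle distance."""
    def d2(q):
        return (point[0] - q[0]) ** 2 + (point[1] - q[1]) ** 2
    return min(cities, key=lambda name: d2(cities[name]))

def home_city(geotagged_points, cities):
    """Map each geotagged tweet to its nearest city and return the
    most frequent one as the user's home location."""
    votes = Counter(nearest_city(p, cities) for p in geotagged_points)
    return votes.most_common(1)[0][0]
```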

62 (2012) based on KD-tree partitioned grid cells, which we denote as KL; and (2) the multinomial naive Bayes city-level geolocation model of Han et al. [sent-157, score-0.735]

63 To remedy this, we find the closest city to the centroid of each grid cell in the KD-tree representation, and map the classification onto this city. [sent-160, score-0.11]

64 We present results including non-geotagged data for users with geotagged messages for the two methods, as KL-NG and MB-NG, respectively. [sent-161, score-0.346]

65 We also present results based on the user-declared location (MB-LOC) and time zone (MB-TZ), and finally the stacking method (STACKING) which combines MB-NG, MB-LOC and MB-TZ. [sent-162, score-0.48]

66 The approximate doubling of Acc for KL-NG and MB-NG over KL and MB, respectively, demonstrates the high utility of non-geotagged data in tweet text-based geolocation prediction. [sent-164, score-0.759]

67 (2010) that user-declared locations are too unreliable to use for user geolocation, we find evidence indicating that they are indeed a valuable source of information for this task. [sent-169, score-0.331]

68 The best overall results are achieved for the stacking approach (STACKING), assigning almost half of the test users to the correct city-level location, and improving more than four-fold on the previous-best accuracy (i. [sent-170, score-0.298]

69 These results also suggest that there is strong complementarity between user metadata and tweet text. [sent-173, score-0.459]

70 3 Evaluation on Time-Heterogeneous Data In addition to the original held-out test data (WORLDtest) from WORLD, we also developed a new geotagged evaluation dataset using the Twitter Streaming API. [sent-175, score-0.212]

71 Given that Twitter users and topics change over time, an essential question is whether the statistical model learned from the “old” training data is still effective on the “new” test data. [sent-179, score-0.104]

72 By selecting users with at least 10 geotagged tweets and a declared language of English, 55k users were obtained. [sent-181, score-0.611]

73 For each user, their recent status updates were aggregated, and non-English users were filtered out based on the language predictions of langid.py. [sent-182, score-0.142]

74 For some users with geotagged tweets from many cities, the most frequent city might not be an appropriate representation of their home location for evaluation. [sent-184, score-0.875]

75 To improve the evaluation data quality, we therefore exclude users who have less than 50% of their geotagged tweets originating from a single city. [sent-185, score-0.507]
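The 50% filter in sentence 75 amounts to a one-line check on each user's per-city counts of geotagged tweets, sketched here:

```python
from collections import Counter

def keep_for_evaluation(city_counts):
    """city_counts: Counter of a user's geotagged tweets per city.
    Keep the user only if a single (modal) city accounts for at least
    half of their geotagged tweets."""
    total = sum(city_counts.values())
    return total > 0 and max(city_counts.values()) / total >= 0.5
```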

76 5 Architecture and Access In this section, we describe the architecture of the proposed geolocation system, as well as two ways of accessing the live system. [sent-229, score-0.581]

77 3 The core structure of the system consists of two parts: (1) the interface; (2) the back-end geolocation service. [sent-230, score-0.581]

78 We offer two interfaces to access the system: a Twitter bot and a web interface. [sent-231, score-0.112]

79 A daemon process detects any user mentions of the bot in tweets via keyword matching through the Twitter search API. [sent-233, score-0.442]

80 The screen name of the tweet author is extracted and sent to the back-end geolocation service, and the predicted user geolocation is sent to the Twitter user in a direct message, as shown in Figure 1. [sent-234, score-1.818]

81 Users can input a Twitter user screen name through the web interface, whereby a call is made to the back-end geolocation service to geolocate that user. [sent-241, score-0.829]

82 Figure 2: Web interface for user geolocation. [sent-243, score-0.163]

83 These coordinates are utilised to validate our predictions, and are not used in the geolocation process. [sent-245, score-0.581]

84 The red marker is the predicted city-based user geolocation. [sent-246, score-0.198]

85 When the Twitter bot is mentioned in a tweet, that user is sent a direct message with the predicted geolocation. [sent-248, score-0.368]

86 The geolocation results are rendered on a map (along with any geotagged tweets for the user) as in Figure 2. [sent-249, score-0.403]

87 The back-end geolocation service crawls recent tweets for a given user in real time,5 and word and n-gram features are extracted from both the text and the user metadata. [sent-250, score-1.132]

88 6 Summary and Future Work In this paper, we presented a city-level geolocation prediction system for Twitter users. [sent-252, score-0.657]

89 Over a public dataset, our stacking method exploiting both tweet text and user metadata substantially outperforms benchmark methods. (Footnote 4: Currently, only Google Chrome is supported: www.google.com/intl/en/chrome/.)

90 (Footnote 5: Up to 200 tweets are crawled, the upper bound of messages returned per single request based on Twitter API v1.)

91 Find me if you can: improving geographical prediction with social and spatial proximity. [sent-274, score-0.256]

92 You are where you tweet: a content-based approach to geo-locating twitter users. [sent-283, score-0.248]

93 Geolocation prediction in social media data by finding location indicative words. [sent-305, score-0.346]

94 Tweets from justin bieber’s heart: the dynamics of the location field in user profiles. [sent-311, score-0.347]

95 Detecting geographical references in the form of place names and associated spatial natural language. [sent-329, score-0.142]

96 An efficient location extraction algorithm by leveraging web contextual information. [sent-356, score-0.208]

97 Supervised text-based geolocation using language models on an adaptive grid. [sent-372, score-0.581]

98 : a classification approach to geolocating users based on their social ties. [sent-378, score-0.142]

99 Earthquake shakes twitter users: real-time event detection by social sensors. [sent-389, score-0.286]

100 Simple supervised document geolocation with geodesic grids. [sent-395, score-0.581]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('geolocation', 0.581), ('twitter', 0.248), ('geotagged', 0.212), ('livetest', 0.194), ('stacking', 0.194), ('tweets', 0.191), ('location', 0.184), ('tweet', 0.178), ('user', 0.163), ('metadata', 0.118), ('home', 0.116), ('backstrom', 0.108), ('cheng', 0.106), ('han', 0.105), ('users', 0.104), ('zone', 0.102), ('locations', 0.098), ('geographical', 0.096), ('bot', 0.088), ('worldtest', 0.086), ('roller', 0.076), ('prediction', 0.076), ('cities', 0.076), ('public', 0.07), ('city', 0.068), ('wing', 0.066), ('glasgow', 0.065), ('mahmud', 0.065), ('naive', 0.06), ('profile', 0.056), ('mb', 0.055), ('bayes', 0.053), ('multinomial', 0.052), ('www', 0.051), ('acc', 0.05), ('indicative', 0.048), ('benchmark', 0.048), ('eisenstein', 0.048), ('lars', 0.047), ('lieberman', 0.047), ('spatial', 0.046), ('sent', 0.045), ('median', 0.045), ('kl', 0.044), ('buyukokkten', 0.043), ('chrome', 0.043), ('kinsella', 0.043), ('maceachren', 0.043), ('nicta', 0.043), ('quercini', 0.043), ('sadilek', 0.043), ('userdeclared', 0.043), ('lui', 0.042), ('grid', 0.042), ('crandall', 0.038), ('hecht', 0.038), ('langid', 0.038), ('leidner', 0.038), ('raleigh', 0.038), ('rout', 0.038), ('wolpert', 0.038), ('social', 0.038), ('benchmarks', 0.038), ('message', 0.037), ('stacked', 0.037), ('world', 0.036), ('generalisation', 0.035), ('geospatial', 0.035), ('sigspatial', 0.035), ('predicted', 0.035), ('service', 0.034), ('cells', 0.033), ('urban', 0.033), ('analytics', 0.032), ('newer', 0.032), ('sakaki', 0.032), ('messages', 0.03), ('locating', 0.03), ('base', 0.03), ('australian', 0.029), ('baldridge', 0.029), ('crawl', 0.029), ('melbourne', 0.029), ('jose', 0.028), ('unreliable', 0.027), ('hoc', 0.027), ('screen', 0.027), ('hong', 0.027), ('nearest', 0.027), ('classifiers', 0.027), ('timothy', 0.027), ('zong', 0.027), ('yin', 0.027), ('certainly', 0.026), ('geographic', 0.026), ('posted', 0.025), ('old', 0.025), ('web', 0.024), ('prototype', 0.024), ('class', 0.024)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999934 20 acl-2013-A Stacking-based Approach to Twitter User Geolocation Prediction

Author: Bo Han ; Paul Cook ; Timothy Baldwin

Abstract: We implement a city-level geolocation prediction system for Twitter users. The system infers a user’s location based on both tweet text and user-declared metadata using a stacking approach. We demonstrate that the stacking method substantially outperforms benchmark methods, achieving 49% accuracy on a benchmark dataset. We further evaluate our method on a recent crawl of Twitter data to investigate the impact of temporal factors on model generalisation. Our results suggest that user-declared location metadata is more sensitive to temporal change than the text of Twitter messages. We also describe two ways of accessing/demoing our system.

2 0.21177624 115 acl-2013-Detecting Event-Related Links and Sentiments from Social Media Texts

Author: Alexandra Balahur ; Hristo Tanev

Abstract: Nowadays, the importance of Social Media is constantly growing, as people often use such platforms to share mainstream media news and comment on the events that they relate to. As such, people no loger remain mere spectators to the events that happen in the world, but become part of them, commenting on their developments and the entities involved, sharing their opinions and distributing related content. This paper describes a system that links the main events detected from clusters of newspaper articles to tweets related to them, detects complementary information sources from the links they contain and subsequently applies sentiment analysis to classify them into positive, negative and neutral. In this manner, readers can follow the main events happening in the world, both from the perspective of mainstream as well as social media and the public’s perception on them. This system will be part of the EMM media monitoring framework working live and it will be demonstrated using Google Earth.

3 0.18615323 146 acl-2013-Exploiting Social Media for Natural Language Processing: Bridging the Gap between Language-centric and Real-world Applications

Author: Simone Paolo Ponzetto ; Andrea Zielinski

Abstract: unkown-abstract

4 0.17878054 233 acl-2013-Linking Tweets to News: A Framework to Enrich Short Text Data in Social Media

Author: Weiwei Guo ; Hao Li ; Heng Ji ; Mona Diab

Abstract: Many current Natural Language Processing [NLP] techniques work well assuming a large context of text as input data. However they become ineffective when applied to short texts such as Twitter feeds. To overcome the issue, we want to find a related newswire document to a given tweet to provide contextual support for NLP tasks. This requires robust modeling and understanding of the semantics of short texts. The contribution of the paper is two-fold: 1. we introduce the Linking-Tweets-toNews task as well as a dataset of linked tweet-news pairs, which can benefit many NLP applications; 2. in contrast to previ- ous research which focuses on lexical features within the short texts (text-to-word information), we propose a graph based latent variable model that models the inter short text correlations (text-to-text information). This is motivated by the observation that a tweet usually only covers one aspect of an event. We show that using tweet specific feature (hashtag) and news specific feature (named entities) as well as temporal constraints, we are able to extract text-to-text correlations, and thus completes the semantic picture of a short text. Our experiments show significant improvement of our new model over baselines with three evaluation metrics in the new task.

5 0.16996613 319 acl-2013-Sequential Summarization: A New Application for Timely Updated Twitter Trending Topics

Author: Dehong Gao ; Wenjie Li ; Renxian Zhang

Abstract: The growth of the Web 2.0 technologies has led to an explosion of social networking media sites. Among them, Twitter is the most popular service by far due to its ease for realtime sharing of information. It collects millions of tweets per day and monitors what people are talking about in the trending topics updated timely. Then the question is how users can understand a topic in a short time when they are frustrated with the overwhelming and unorganized tweets. In this paper, this problem is approached by sequential summarization which aims to produce a sequential summary, i.e., a series of chronologically ordered short subsummaries that collectively provide a full story about topic development. Both the number and the content of sub-summaries are automatically identified by the proposed stream-based and semantic-based approaches. These approaches are evaluated in terms of sequence coverage, sequence novelty and sequence correlation and the effectiveness of their combination is demonstrated. 1 Introduction and Background Twitter, as a popular micro-blogging service, collects millions of real-time short text messages (known as tweets) every second. It acts as not only a public platform for posting trifles about users’ daily lives, but also a public reporter for real-time news. Twitter has shown its powerful ability in information delivery in many events, like the wildfires in San Diego and the earthquake in Japan. Nevertheless, the side effect is individual users usually sink deep under millions of flooding-in tweets. To alleviate this problem, the applications like whatthetrend 1 have evolved from Twitter to provide services that encourage users to edit explanatory tweets about a trending topic, which can be regarded as topic summaries. It is to some extent a good way to help users understand trending topics. 1 whatthetrend.com There is also pioneering research in automatic Twitter trending topic summarization. 
(O'Connor et al., 2010) explained Twitter trending topics by providing a list of significant terms. Users could utilize these terms to drill down to the tweets which are related to the trending topics. (Sharifi et al., 2010) attempted to provide a one-line summary for each trending topic using phrase reinforcement ranking. The relevance model employed by (Harabagiu and Hickl, 2011) generated summaries in larger size, i.e., 250word summaries, by synthesizing multiple high rank tweets. (Duan et al., 2012) incorporate the user influence and content quality information in timeline tweet summarization and employ reinforcement graph to generate summaries for trending topics. Twitter summarization is an emerging research area. Current approaches still followed the traditional summarization route and mainly focused on mining tweets of both significance and representativeness. Though, the summaries generated in such a way can sketch the most important aspects of the topic, they are incapable of providing full descriptions of the changes of the focus of a topic, and the temporal information or freshness of the tweets, especially for those newsworthy trending topics, like earthquake and sports meeting. As the main information producer in Twitter, the massive crowd keeps close pace with the development of trending topics and provide the timely updated information. The information dynamics and timeliness is an important consideration for Twitter summarization. That is why we propose sequential summarization in this work, which aims to produce sequential summaries to capture the temporal changes of mass focus. Our work resembles update summarization promoted by TAC 2 which required creating summaries with new information assuming the reader has already read some previous documents under the same topic. 
Given two chronologically ordered documents sets about a topic, the systems were asked to generate two 2 www.nist.gov/tac 567 summaries, and the second one should inform the user of new information only. In order to achieve this goal, existing approaches mainly emphasized the novelty of the subsequent summary (Li and Croft, 2006; Varma et al., 2009; Steinberger and Jezek, 2009). Different from update summarization, we focus more on the temporal change of trending topics. In particular, we need to automatically detect the “update points” among a myriad of related tweets. It is the goal of this paper to set up a new practical summarization application tailored for timely updated Twitter messages. With the aim of providing a full description of the focus changes and the records of the timeline of a trending topic, the systems are expected to discover the chronologically ordered sets of information by themselves and they are free to generate any number of update summaries according to the actual situations instead of a fixed number of summaries as specified in DUC/TAC. Our main contributions include novel approaches to sequential summarization and corresponding evaluation criteria for this new application. All of them will be detailed in the following sections. 2 Sequential Summarization Sequential summarization proposed here aims to generate a series of chronologically ordered subsummaries for a given Twitter trending topic. Each sub-summary is supposed to represent one main subtopic or one main aspect of the topic, while a sequential summary, made up by the subsummaries, should retain the order the information is delivered to the public. In such a way, the sequential summary is able to provide a general picture of the entire topic development. 2.1 Subtopic Segmentation One of the keys to sequential summarization is subtopic segmentation. How many subtopics have attracted the public attention, what are they, and how are they developed? 
It is important to provide the valuable and organized materials for more fine-grained summarization approaches. We proposed the following two approaches to automatically detect and chronologically order the subtopics. 2.1.1 Stream-based Subtopic Detection and Ordering Typically when a subtopic is popular enough, it will create a certain level of surge in the tweet stream. In other words, every surge in the tweet stream can be regarded as an indicator of the appearance of a subtopic that is worthy of being summarized. Our early investigation provides evidence to support this assumption. By examining the correlations between tweet content changes and volume changes in randomly selected topics, we have observed that the changes in tweet volume can really provide the clues of topic development or changes of crowd focus. The stream-based subtopic detection approach employs the offline peak area detection (Opad) algorithm (Shamma et al., 2010) to locate such surges by tracing tweet volume changes. It regards the collection of tweets at each such surge time range as a new subtopic. Offline Peak Area Detection (Opad) Algorithm 1: Input: TS (tweets stream, each twi with timestamp ti); peak interval window ∆? (in hour), and time stepℎ (ℎ ≪ ∆?); 2: Output: Peak Areas PA. 3: Initial: two time slots: ?′ = ? = ?0 + ∆?; Tweet numbers: ?′ = ? = ?????(?) 4: while (?? = ? + ℎ) < ??−1 5: update ?′ = ?? + ∆? and ?′ = ?????(?′) 6: if (?′ < ? And up-hilling) 7: output one peak area ??? 8: state of down-hilling 9: else 10: update ? = ?′ and ? = ?′ 11: state of up-hilling 12: 13: function ?????(?) 14: Count tweets in time interval T The subtopics detected by the Opad algorithm are naturally ordered in the timeline. 2.1.2 Semantic-based Subtopic Detection and Ordering Basically the stream-based approach monitors the changes of the level of user attention. 
It is easy to implement and intuitively works, but it fails to handle the cases where the posts about the same subtopic are received at different time ranges due to the difference of geographical and time zones. This may make some subtopics scattered into several time slots (peak areas) or one peak area mixed with more than one subtopic. In order to sequentially segment the subtopics from the semantic aspect, the semantic-based subtopic detection approach breaks the time order of tweet stream, and regards each tweet as an individual short document. It takes advantage of Dynamic Topic Modeling (David and Michael, 2006) to explore the tweet content. 568 DTM in nature is a clustering approach which can dynamically generate the subtopic underlying the topic. Any clustering approach requires a pre-specified cluster number. To avoid tuning the cluster number experimentally, the subtopic number required by the semantic-based approach is either calculated according to heuristics or determined by the number of the peak areas detected from the stream-based approach in this work. Unlike the stream-based approach, the subtopics formed by DTM are the sets of distributions of subtopic and word probabilities. They are time independent. Thus, the temporal order among these subtopics is not obvious and needs to be discovered. We use the probabilistic relationships between tweets and topics learned from DTM to assign each tweet to a subtopic that it most likely belongs to. Then the subtopics are ordered temporally according to the mean values of their tweets’ timestamps. 2.2 Sequential Summary Generation Once the subtopics are detected and ordered, the tweets belonging to each subtopic are ranked and the most significant one is extracted to generate the sub-summary regarding that subtopic. Two different ranking strategies are adopted to conform to two different subtopic detection mechanisms. 
For a tweet in a peak area, the linear combination of two measures is considered to evaluate its significance as a sub-summary: (1) subtopic representativeness, measured by the cosine similarity between the tweet and the centroid of all the tweets in the same peak area; and (2) crowd endorsement, measured by the number of times the tweet is re-tweeted, normalised by the total number of re-tweets. With the DTM model, the significance of a tweet is evaluated directly from the word distribution of its subtopic. MMR (Carbonell and Goldstein, 1998) is used to reduce redundancy in sub-summary generation.

3 Experiments and Evaluations

The experiments are conducted on 24 Twitter trending topics collected using the Twitter APIs (https://dev.twitter.com/). The statistics are shown in Table 1. Owing to the shortage of gold-standard sequential summaries, we invited two annotators to read the chronologically ordered tweets and write a series of sub-summaries for each topic independently. Each sub-summary is limited to 140 characters to comply with the tweet length limit, but the annotators are free to choose the number of sub-summaries; they produced 6.3 and 4.8 sub-summaries per sequential summary on average, respectively. These two sets of sequential summaries serve as reference summaries, against which system-generated summaries are evaluated from the following three aspects.

Sequence Coverage. Sequence coverage measures the N-gram match between system-generated and human-written summaries (stopwords removed first). Because temporal information is an important factor in sequential summaries, we propose a position-aware coverage measure that accommodates position information in the matching. Let S = {s_1, s_2, …, s_k} denote a sequential summary and s_i the i-th sub-summary. N-gram coverage is defined as:

Coverage = (1/|S^h|) · Σ_{s^h_j ∈ S^h} Σ_{s^s_i ∈ S^s} (1/d_ij) · ( Σ_{n-gram ∈ s^h_j} Count_match(n-gram) ) / ( Σ_{n-gram ∈ s^h_j} Count(n-gram) )

where d_ij = λ·|i − j| + 1, and i and j denote the serial numbers of the sub-summaries in the system-generated summary s^s_i and the human-written summary s^h_j, respectively. λ serves as a coefficient that discounts long-distance matched sub-summaries. We evaluate unigram, bigram, and skipped-bigram matches; as in ROUGE (Lin, 2004), the skip distance is up to four words.

Sequence Novelty. Sequence novelty evaluates the average novelty of two successive sub-summaries. Information content (IC) has been used to measure the novelty of update summaries (Aggarwal et al., 2009). In this paper, the novelty of a system-generated sequential summary is defined as the average of the IC increments between adjacent sub-summaries:

Novelty = (1/(|S| − 1)) · Σ_{i>1} (IC_{s_i} − IC_{s_i, s_{i−1}})

where |S| is the number of sub-summaries in the sequential summary, IC_{s_i} = Σ_{w ∈ s_i} ic_w, and IC_{s_i, s_{i−1}} = Σ_{w ∈ s_i ∩ s_{i−1}} ic_w is the information overlapping between the two adjacent sub-summaries. Here ic_w = itf_w · Relevance(w, T_t), where w is a word, itf_w is the inverse tweet frequency of w, and T_t is the set of all tweets in the trending topic. The relevance function ensures that the information brought in by new sub-summaries is not only novel but also related to the topic.

Sequence Correlation. Sequence correlation evaluates the sequential matching degree between system-generated and human-written summaries. In statistics, Kendall's tau coefficient is often used to measure the association between two sequences (Lapata, 2006); the basic idea is to count the concordant and discordant pairs that contain the same elements in the two sequences. Borrowing this idea, for each sub-summary in a human-written summary we find its best-matched sub-summary (judged by cosine similarity) in the corresponding system-generated summary, and then define the correlation in terms of the concordance between the two matched sub-summary sequences:

Correlation = 2 · (|#concordant pairs| − |#discordant pairs|) / (n(n − 1))

where n is the number of human-written sub-summaries.

Tables 2 and 3 below present the evaluation results. For the stream-based approach, we set Δt = 3 hours experimentally. For the semantic-based approach, we compare three ways of defining the subtopic number K: (1) Semantic-based 1: following the approach proposed in (Li et al., 2007), we first derive the matrix of tweet cosine similarities; given the 1-norms of the eigenvalues λ_i (i = 1, 2, …, n) of the similarity matrix and the ratios r_i = ‖λ_i‖_1 / ‖λ‖_2, the subtopic number is K = i + 1 if r_i − r_{i+1} > δ (δ = 0.4). (2) Semantic-based 2: using the rule of thumb of (Wan and Yang, 2008), K = √n, where n is the number of tweets. (3) Combined: K is set to the number of peak areas detected by the Opad algorithm, and only the tweets within the peak areas are fed to DTM; this combined setting is our new proposal.

The experiments confirm the superiority of the semantic-based approach over the stream-based approach on the content coverage and novelty evaluations, showing that the former is better at modeling subtopic content. The sub-summaries generated by the stream-based approach have comparable sequence (i.e., order) correlation with the human summaries. Combining the advantages of the two approaches leads to the best overall results.

Table 2. N-Gram Coverage Evaluation (unigram, bigram, and skipped-bigram scores for the stream-based, semantic-based, and combined settings)

Table 3. Novelty and Correlation Evaluation (novelty and correlation scores for the stream-based, semantic-based, and combined settings)

4 Concluding Remarks

We introduce a new application for Twitter trending topics, sequential summarization, which aims to reveal the developing scenario of a trending topic while retaining the order of information presentation.
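The position-aware coverage idea can be sketched concretely. This is a plausible rendering rather than the authors' exact measure (the published formula is partly unreadable in this copy): for each human-written sub-summary it takes the best position-discounted n-gram recall against any system sub-summary, with the distance discount d_ij = λ·|i − j| + 1 described above. The function name, the use of a max over system sub-summaries, and the whitespace tokenizer are all simplifying assumptions.

```python
def position_aware_coverage(system, reference, lam=1.0, n=1):
    """Sketch of position-discounted n-gram coverage.

    system, reference: lists of sub-summary strings, in order.
    lam: coefficient discounting long-distance matches via
         d_ij = lam * |i - j| + 1.
    n:   n-gram order (1 = unigram, 2 = bigram, ...).
    """
    def ngrams(text):
        toks = text.lower().split()
        return [tuple(toks[k:k + n]) for k in range(len(toks) - n + 1)]

    total = 0.0
    for j, ref in enumerate(reference):
        ref_grams = ngrams(ref)
        if not ref_grams:
            continue
        best = 0.0
        for i, sys_sub in enumerate(system):
            sys_grams = set(ngrams(sys_sub))
            match = sum(g in sys_grams for g in ref_grams)
            # Discount matches between sub-summaries far apart in position.
            d = lam * abs(i - j) + 1
            best = max(best, match / (d * len(ref_grams)))
        total += best
    return total / len(reference)
```

A perfectly aligned, identical summary scores 1.0; an unmatched reference sub-summary contributes 0, and matches found at distant positions are penalised by the distance factor.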
We develop several solutions to automatically detect, segment, and order subtopics temporally, and to extract the most significant tweets into sub-summaries that compose the sequential summaries. Empirically, the combination of the stream-based approach and the semantic-based approach leads to sequential summaries with high coverage, low redundancy, and good order.

Acknowledgments

The work described in this paper is supported by a Hong Kong RGC project (PolyU No. 5202/12E) and a National Natural Science Foundation of China grant (NSFC No. 61272291).

References

Aggarwal Gaurav, Sumbaly Roshan and Sinha Shakti. 2009. Update Summarization. Stanford: CS224N Final Projects.

Blei David M. and Jordan Michael I. 2006. Dynamic topic models. In Proceedings of the 23rd International Conference on Machine Learning, 113-120. Pittsburgh, Pennsylvania.

Carbonell Jaime and Goldstein Jade. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the 21st Annual International Conference on Research and Development in Information Retrieval, 335-336. Melbourne, Australia.

Duan Yajuan, Chen Zhimin, Wei Furu, Zhou Ming and Heung-Yeung Shum. 2012. Twitter Topic Summarization by Ranking Tweets using Social Influence and Content Quality. In Proceedings of the 24th International Conference on Computational Linguistics, 763-780. Mumbai, India.

Harabagiu Sanda and Hickl Andrew. 2011. Relevance Modeling for Microblog Summarization. In Proceedings of the 5th International AAAI Conference on Weblogs and Social Media. Barcelona, Spain.

Lapata Mirella. 2006. Automatic evaluation of information ordering: Kendall's tau. Computational Linguistics, 32(4): 1-14.

Li Wenyuan, Ng Wee-Keong, Liu Ying and Ong Kok-Leong. 2007. Enhancing the Effectiveness of Clustering with Spectra Analysis. IEEE Transactions on Knowledge and Data Engineering, 19(7): 887-902.

Li Xiaoyan and Croft W. Bruce. 2006.
Improving novelty detection for general topics using sentence level information patterns. In Proceedings of the 15th ACM International Conference on Information and Knowledge Management, 238-247. New York, USA.

Lin Chin-Yew. 2004. ROUGE: a Package for Automatic Evaluation of Summaries. In Proceedings of the ACL Workshop on Text Summarization Branches Out, 74-81. Barcelona, Spain.

Liu Fei, Liu Yang and Weng Fuliang. 2011. Why is “SXSW” trending? Exploring Multiple Text Sources for Twitter Topic Summarization. In Proceedings of the ACL Workshop on Language in Social Media, 66-75. Portland, Oregon.

O'Connor Brendan, Krieger Michel and Ahn David. 2010. TweetMotif: Exploratory Search and Topic Summarization for Twitter. In Proceedings of the 4th International AAAI Conference on Weblogs and Social Media, 384-385. Atlanta, Georgia.

Shamma A. David, Kennedy Lyndon and Churchill F. Elizabeth. 2010. Tweetgeist: Can the Twitter Timeline Reveal the Structure of Broadcast Events? In Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work, 589-593. Savannah, Georgia, USA.

Sharifi Beaux, Hutton Mark-Anthony and Kalita Jugal. 2010. Summarizing Microblogs Automatically. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, 685-688. Los Angeles, California.

Steinberger Josef and Jezek Karel. 2009. Update summarization based on novel topic distribution. In Proceedings of the 9th ACM Symposium on Document Engineering, 205-213. Munich, Germany.

Varma Vasudeva, Bharat Vijay, Kovelamudi Sudheer, Praveen Bysani, Kumar K. N, Kranthi Reddy, Karuna Kumar and Nitin Maganti. 2009. IIIT Hyderabad at TAC 2009. In Proceedings of the 2009 Text Analysis Conference. Gaithersburg, Maryland.

Wan Xiaojun and Yang Jianjun. 2008. Multi-document summarization using cluster-based link analysis.
In Proceedings of the 31st Annual International Conference on Research and Development in Information Retrieval, 299-306. Singapore, Singapore.

6 0.15932199 240 acl-2013-Microblogs as Parallel Corpora

7 0.13382749 147 acl-2013-Exploiting Topic based Twitter Sentiment for Stock Prediction

8 0.13189545 373 acl-2013-Using Conceptual Class Attributes to Characterize Social Media Users

9 0.12705255 45 acl-2013-An Empirical Study on Uncertainty Identification in Social Media Context

10 0.1188503 114 acl-2013-Detecting Chronic Critics Based on Sentiment Polarity and User's Behavior in Social Media

11 0.11261931 340 acl-2013-Text-Driven Toponym Resolution using Indirect Supervision

12 0.11149985 139 acl-2013-Entity Linking for Tweets

13 0.11034116 33 acl-2013-A user-centric model of voting intention from Social Media

14 0.10799544 328 acl-2013-Stacking for Statistical Machine Translation

15 0.10502593 148 acl-2013-Exploring Sentiment in Social Media: Bootstrapping Subjectivity Clues from Multilingual Twitter Streams

16 0.093435906 301 acl-2013-Resolving Entity Morphs in Censored Data

17 0.092112049 338 acl-2013-Task Alternation in Parallel Sentence Retrieval for Twitter Translation

18 0.088531263 42 acl-2013-Aid is Out There: Looking for Help from Tweets during a Large Scale Disaster

19 0.078715518 141 acl-2013-Evaluating a City Exploration Dialogue System with Integrated Question-Answering and Pedestrian Navigation

20 0.076227017 37 acl-2013-Adaptive Parser-Centric Text Normalization


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.14), (1, 0.144), (2, 0.026), (3, 0.043), (4, 0.154), (5, 0.057), (6, 0.18), (7, 0.083), (8, 0.187), (9, -0.161), (10, -0.17), (11, 0.029), (12, 0.075), (13, -0.054), (14, 0.038), (15, -0.025), (16, 0.034), (17, -0.002), (18, 0.037), (19, -0.077), (20, 0.036), (21, 0.077), (22, 0.011), (23, -0.055), (24, 0.007), (25, 0.014), (26, 0.036), (27, 0.035), (28, -0.005), (29, 0.008), (30, 0.015), (31, 0.018), (32, 0.016), (33, 0.058), (34, 0.011), (35, 0.069), (36, -0.035), (37, 0.007), (38, 0.018), (39, 0.029), (40, -0.037), (41, -0.039), (42, -0.08), (43, 0.12), (44, 0.044), (45, -0.002), (46, 0.019), (47, 0.021), (48, -0.023), (49, -0.012)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95133942 20 acl-2013-A Stacking-based Approach to Twitter User Geolocation Prediction

Author: Bo Han ; Paul Cook ; Timothy Baldwin

Abstract: We implement a city-level geolocation prediction system for Twitter users. The system infers a user’s location based on both tweet text and user-declared metadata using a stacking approach. We demonstrate that the stacking method substantially outperforms benchmark methods, achieving 49% accuracy on a benchmark dataset. We further evaluate our method on a recent crawl of Twitter data to investigate the impact of temporal factors on model generalisation. Our results suggest that user-declared location metadata is more sensitive to temporal change than the text of Twitter messages. We also describe two ways of accessing/demoing our system.

2 0.84760982 45 acl-2013-An Empirical Study on Uncertainty Identification in Social Media Context

Author: Zhongyu Wei ; Junwen Chen ; Wei Gao ; Binyang Li ; Lanjun Zhou ; Yulan He ; Kam-Fai Wong

Abstract: Uncertainty text detection is important to many social-media-based applications since more and more users utilize social media platforms (e.g., Twitter, Facebook, etc.) as information source to produce or derive interpretations based on them. However, existing uncertainty cues are ineffective in social media context because of its specific characteristics. In this paper, we propose a variant of annotation scheme for uncertainty identification and construct the first uncertainty corpus based on tweets. We then conduct experiments on the generated tweets corpus to study the effectiveness of different types of features for uncertainty text identification.

3 0.79709613 146 acl-2013-Exploiting Social Media for Natural Language Processing: Bridging the Gap between Language-centric and Real-world Applications

Author: Simone Paolo Ponzetto ; Andrea Zielinski

Abstract: unkown-abstract

4 0.73472977 233 acl-2013-Linking Tweets to News: A Framework to Enrich Short Text Data in Social Media

Author: Weiwei Guo ; Hao Li ; Heng Ji ; Mona Diab

Abstract: Many current Natural Language Processing [NLP] techniques work well assuming a large context of text as input data. However they become ineffective when applied to short texts such as Twitter feeds. To overcome the issue, we want to find a related newswire document to a given tweet to provide contextual support for NLP tasks. This requires robust modeling and understanding of the semantics of short texts. The contribution of the paper is two-fold: 1. we introduce the Linking-Tweets-toNews task as well as a dataset of linked tweet-news pairs, which can benefit many NLP applications; 2. in contrast to previ- ous research which focuses on lexical features within the short texts (text-to-word information), we propose a graph based latent variable model that models the inter short text correlations (text-to-text information). This is motivated by the observation that a tweet usually only covers one aspect of an event. We show that using tweet specific feature (hashtag) and news specific feature (named entities) as well as temporal constraints, we are able to extract text-to-text correlations, and thus completes the semantic picture of a short text. Our experiments show significant improvement of our new model over baselines with three evaluation metrics in the new task.

5 0.71981722 115 acl-2013-Detecting Event-Related Links and Sentiments from Social Media Texts

Author: Alexandra Balahur ; Hristo Tanev

Abstract: Nowadays, the importance of Social Media is constantly growing, as people often use such platforms to share mainstream media news and comment on the events that they relate to. As such, people no loger remain mere spectators to the events that happen in the world, but become part of them, commenting on their developments and the entities involved, sharing their opinions and distributing related content. This paper describes a system that links the main events detected from clusters of newspaper articles to tweets related to them, detects complementary information sources from the links they contain and subsequently applies sentiment analysis to classify them into positive, negative and neutral. In this manner, readers can follow the main events happening in the world, both from the perspective of mainstream as well as social media and the public’s perception on them. This system will be part of the EMM media monitoring framework working live and it will be demonstrated using Google Earth.

6 0.70642823 33 acl-2013-A user-centric model of voting intention from Social Media

7 0.69377166 319 acl-2013-Sequential Summarization: A New Application for Timely Updated Twitter Trending Topics

8 0.67476171 114 acl-2013-Detecting Chronic Critics Based on Sentiment Polarity and User's Behavior in Social Media

9 0.60381007 42 acl-2013-Aid is Out There: Looking for Help from Tweets during a Large Scale Disaster

10 0.60370785 95 acl-2013-Crawling microblogging services to gather language-classified URLs. Workflow and case study

11 0.59473073 373 acl-2013-Using Conceptual Class Attributes to Characterize Social Media Users

12 0.58558607 301 acl-2013-Resolving Entity Morphs in Censored Data

13 0.55612743 240 acl-2013-Microblogs as Parallel Corpora

14 0.47342315 147 acl-2013-Exploiting Topic based Twitter Sentiment for Stock Prediction

15 0.41582921 340 acl-2013-Text-Driven Toponym Resolution using Indirect Supervision

16 0.41259435 141 acl-2013-Evaluating a City Exploration Dialogue System with Integrated Question-Answering and Pedestrian Navigation

17 0.40987989 139 acl-2013-Entity Linking for Tweets

18 0.38110429 30 acl-2013-A computational approach to politeness with application to social factors

19 0.37326068 268 acl-2013-PATHS: A System for Accessing Cultural Heritage Collections

20 0.36412901 338 acl-2013-Task Alternation in Parallel Sentence Retrieval for Twitter Translation


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.041), (6, 0.044), (11, 0.047), (23, 0.011), (24, 0.055), (26, 0.087), (35, 0.083), (38, 0.015), (42, 0.036), (48, 0.038), (70, 0.038), (82, 0.258), (88, 0.051), (90, 0.018), (95, 0.068)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.80481905 20 acl-2013-A Stacking-based Approach to Twitter User Geolocation Prediction

Author: Bo Han ; Paul Cook ; Timothy Baldwin

Abstract: We implement a city-level geolocation prediction system for Twitter users. The system infers a user’s location based on both tweet text and user-declared metadata using a stacking approach. We demonstrate that the stacking method substantially outperforms benchmark methods, achieving 49% accuracy on a benchmark dataset. We further evaluate our method on a recent crawl of Twitter data to investigate the impact of temporal factors on model generalisation. Our results suggest that user-declared location metadata is more sensitive to temporal change than the text of Twitter messages. We also describe two ways of accessing/demoing our system.

2 0.76924658 379 acl-2013-Utterance-Level Multimodal Sentiment Analysis

Author: Veronica Perez-Rosas ; Rada Mihalcea ; Louis-Philippe Morency

Abstract: During real-life interactions, people are naturally gesturing and modulating their voice to emphasize specific points or to express their emotions. With the recent growth of social websites such as YouTube, Facebook, and Amazon, video reviews are emerging as a new source of multimodal and natural opinions that has been left almost untapped by automatic opinion analysis techniques. This paper presents a method for multimodal sentiment classification, which can identify the sentiment expressed in utterance-level visual datastreams. Using a new multimodal dataset consisting of sentiment annotated utterances extracted from video reviews, we show that multimodal sentiment analysis can be effectively performed, and that the joint use of visual, acoustic, and linguistic modalities can lead to error rate reductions of up to 10.5% as compared to the best performing individual modality.

3 0.67834795 17 acl-2013-A Random Walk Approach to Selectional Preferences Based on Preference Ranking and Propagation

Author: Zhenhua Tian ; Hengheng Xiang ; Ziqi Liu ; Qinghua Zheng

Abstract: This paper presents an unsupervised random walk approach to alleviate data sparsity for selectional preferences. Based on the measure of preferences between predicates and arguments, the model aggregates all the transitions from a given predicate to its nearby predicates, and propagates their argument preferences as the given predicate’s smoothed preferences. Experimental results show that this approach outperforms several state-of-the-art methods on the pseudo-disambiguation task, and it better correlates with human plausibility judgements.

4 0.5699656 132 acl-2013-Easy-First POS Tagging and Dependency Parsing with Beam Search

Author: Ji Ma ; Jingbo Zhu ; Tong Xiao ; Nan Yang

Abstract: In this paper, we combine easy-first dependency parsing and POS tagging algorithms with beam search and structured perceptron. We propose a simple variant of “early-update” to ensure valid update in the training process. The proposed solution can also be applied to combine beam search and structured perceptron with other systems that exhibit spurious ambiguity. On CTB, we achieve 94.01% tagging accuracy and 86.33% unlabeled attachment score with a relatively small beam width. On PTB, we also achieve state-of-the-art performance. 1

5 0.56594753 2 acl-2013-A Bayesian Model for Joint Unsupervised Induction of Sentiment, Aspect and Discourse Representations

Author: Angeliki Lazaridou ; Ivan Titov ; Caroline Sporleder

Abstract: We propose a joint model for unsupervised induction of sentiment, aspect and discourse information and show that by incorporating a notion of latent discourse relations in the model, we improve the prediction accuracy for aspect and sentiment polarity on the sub-sentential level. We deviate from the traditional view of discourse, as we induce types of discourse relations and associated discourse cues relevant to the considered opinion analysis task; consequently, the induced discourse relations play the role of opinion and aspect shifters. The quantitative analysis that we conducted indicated that the integration of a discourse model increased the prediction accuracy results with respect to the discourse-agnostic approach and the qualitative analysis suggests that the induced representations encode a meaningful discourse structure.

6 0.56462467 318 acl-2013-Sentiment Relevance

7 0.56108534 144 acl-2013-Explicit and Implicit Syntactic Features for Text Classification

8 0.55764931 131 acl-2013-Dual Training and Dual Prediction for Polarity Classification

9 0.55476987 295 acl-2013-Real-World Semi-Supervised Learning of POS-Taggers for Low-Resource Languages

10 0.55220991 95 acl-2013-Crawling microblogging services to gather language-classified URLs. Workflow and case study

11 0.55104929 369 acl-2013-Unsupervised Consonant-Vowel Prediction over Hundreds of Languages

12 0.55017561 333 acl-2013-Summarization Through Submodularity and Dispersion

13 0.55008519 117 acl-2013-Detecting Turnarounds in Sentiment Analysis: Thwarting

14 0.54941583 194 acl-2013-Improving Text Simplification Language Modeling Using Unsimplified Text Data

15 0.54931939 196 acl-2013-Improving pairwise coreference models through feature space hierarchy learning

16 0.54849368 147 acl-2013-Exploiting Topic based Twitter Sentiment for Stock Prediction

17 0.54747909 236 acl-2013-Mapping Source to Target Strings without Alignment by Analogical Learning: A Case Study with Transliteration

18 0.54680866 70 acl-2013-Bilingually-Guided Monolingual Dependency Grammar Induction

19 0.54514068 373 acl-2013-Using Conceptual Class Attributes to Characterize Social Media Users

20 0.5448997 340 acl-2013-Text-Driven Toponym Resolution using Indirect Supervision