emnlp emnlp2013 emnlp2013-34 knowledge-graph by maker-knowledge-mining

34 emnlp-2013-Automatically Classifying Edit Categories in Wikipedia Revisions


Source: pdf

Author: Johannes Daxenberger ; Iryna Gurevych

Abstract: In this paper, we analyze a novel set of features for the task of automatic edit category classification. Edit category classification assigns categories such as spelling error correction, paraphrase or vandalism to edits in a document. Our features are based on differences between two versions of a document including meta data, textual and language properties and markup. In a supervised machine learning experiment, we achieve a micro-averaged F1 score of .62 on a corpus of edits from the English Wikipedia. In this corpus, each edit has been multi-labeled according to a 21-category taxonomy. A model trained on the same data achieves state-of-the-art performance on the related task of fluency edit classification. We apply pattern mining to automatically labeled edits in the revision histories of different Wikipedia articles. Our results suggest that high-quality articles show a higher degree of homogeneity with respect to their collaboration patterns as compared to random articles.

Reference: text


Summary: the most important sentences generated by the tf-idf model

sentIndex sentText sentNum sentScore

1 Abstract In this paper, we analyze a novel set of features for the task of automatic edit category classification. [sent-3, score-0.639]

2 Edit category classification assigns categories such as spelling error correction, paraphrase or vandalism to edits in a document. [sent-4, score-0.981]

3 Our features are based on differences between two versions of a document including meta data, textual and language properties and markup. [sent-5, score-0.169]

4 In a supervised machine learning experiment, we achieve a micro-averaged F1 score of .62 on a corpus of edits from the English Wikipedia. [sent-7, score-0.484]

5 In this corpus, each edit has been multi-labeled according to a 21-category taxonomy. [sent-8, score-0.513]

6 A model trained on the same data achieves state-of-the-art performance on the related task of fluency edit classification. [sent-9, score-0.577]

7 We apply pattern mining to automatically labeled edits in the revision histories of different Wikipedia articles. [sent-10, score-0.817]

8 Our results suggest that high-quality articles show a higher degree of homogeneity with respect to their collaboration patterns as compared to random articles. [sent-11, score-0.243]

9 1 Introduction Due to its ever-evolving and collaboratively built content, Wikipedia has been the subject of many NLP studies. [sent-12, score-0.045]

10 While the number of newly created articles in the online encyclopedia declined in the last few years (Suh et al. [sent-13, score-0.14]

11 , 2009), the number of edits in existing articles is rather stable. [sent-14, score-0.539]

12 Wikipedia’s revision history stores all changes made to any page in the encyclopedia in separate revisions. [sent-23, score-0.381]

13 Previous studies have exploited revision history data in tasks such as preposition error correction (Cahill et al. [sent-24, score-0.452]

14 , 2013), spelling error correction (Zesch, 2012) or paraphrasing (Max and Wisniewski, 2010). [sent-25, score-0.19]

15 (2013) outline several applications benefiting from revision history data. [sent-28, score-0.321]

16 They argue for a unified approach to extract and classify edits from revision histories based on a predefined edit category taxonomy. [sent-29, score-1.495]

17 In this work, we show how the extraction and automatic multi-label classification of any edit in Wikipedia can be handled with a single approach. [sent-30, score-0.601]

18 Therefore, we use the 21-category edit classification taxonomy developed in previous work (Daxenberger and Gurevych, 2012). [sent-31, score-0.653]

19 This taxonomy enables a fine-grained analysis of edit activity in revision histories. [sent-32, score-0.851]

20 We present the results from an automatic classification experiment, based on an annotated corpus of edits in the English Wikipedia. [sent-33, score-0.572]

21 To the best of our knowledge, this is the first approach that allows classifying each single edit in Wikipedia into one or more of 21 different edit categories using supervised machine learning. [sent-35, score-1.103]


23 An edit is a coherent, local change which modifies a document and which can be related to certain meta data (e.g., the author, comment, and time stamp of a revision). [sent-42, score-0.617]

24 In edit category classification, we aim to detect all n edits e^k_{v-1,v} with 0 ≤ k < n in adjacent versions r_{v-1}, r_v of a document (we refer to the older revision as r_{v-1} and to the newer as r_v) and assign each of them to one or more edit categories. [sent-46, score-1.889]
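The detection step formalized above can be sketched with Python's standard difflib; the function name, the simplified line-based segmentation, and the category labels applied here are illustrative, not the paper's actual implementation:

```python
import difflib

def extract_edits(r_old: str, r_new: str):
    """Extract coherent, local edits between two adjacent revisions.

    Returns (operation, old_span, new_span) tuples, where operation is
    'Insertion', 'Deletion' or 'Modification'. A line-based diff is
    used, as in the paper; the segmentation here is simplified.
    """
    old_lines = r_old.splitlines()
    new_lines = r_new.splitlines()
    matcher = difflib.SequenceMatcher(a=old_lines, b=new_lines)
    edits = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "insert":
            edits.append(("Insertion", "", "\n".join(new_lines[j1:j2])))
        elif tag == "delete":
            edits.append(("Deletion", "\n".join(old_lines[i1:i2]), ""))
        elif tag == "replace":
            edits.append(("Modification",
                          "\n".join(old_lines[i1:i2]),
                          "\n".join(new_lines[j1:j2])))
    return edits

r_old = "The cat sat.\nIt was grey.\n"
r_new = "The cat sat.\nIt was gray.\nA new line.\n"
print(extract_edits(r_old, r_new))
```

Each returned tuple corresponds to one edit e^k_{v-1,v}; the classifier then assigns one or more categories to each.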

25 There exist at least two main applications of edit category classification: First, a fine-grained classification of edits in collaboratively created documents such as Wikipedia articles, scientific papers or research proposals, would help us to better understand the collaborative writing process. [sent-47, score-1.256]

26 This includes answers to questions about the kind of contribution of individual authors (Who has added substantial content? [sent-48, score-0.047]

27 ) and about the kind of collaboration which characterizes different articles (Liu and Ram, 2011). [sent-50, score-0.102]

28 Second, automatic classification of edits can generate large amounts of training data for the above-mentioned NLP systems. [sent-51, score-0.601]

29 Edit category classification is related to the better known task of document pair classification. [sent-52, score-0.244]

30 In document pair classification, a pair of documents has to be assigned to one or more categories (e. [sent-53, score-0.068]

31 Here, the document may be a very short text, such as a sentence or a single word. [sent-56, score-0.03]

32 Applications of document pair classification include plagiarism detection (Potthast et al. [sent-57, score-0.184]

33 , 2012) or text similarity detection (Bär et al. [sent-59, score-0.043]

34 In edit category classification, we also have two documents. [sent-61, score-0.639]

35 However, these documents are different versions of the same text. [sent-62, score-0.04]

36 The main contributions of this paper are: First, we introduce a novel feature set for edit category classification. [sent-64, score-0.639]

37 We propose the new task of edit category classification and show that our model is able to classify edits from a 21-category taxonomy. [sent-66, score-1.25]

38 Furthermore, our model achieves state-of-the-art performance in a fluency edit classification task (Bronner and Monz, 2012). [sent-67, score-0.665]

39 Third, we analyze collaboration patterns based on edit categories on two subsets of Wikipedia articles, namely featured and non-featured articles. [sent-68, score-0.734]

40 We detect correlations between collaboration patterns and high-quality articles. [sent-69, score-0.135]

41 This is demonstrated by the fact that featured articles have a higher degree of homogeneity with respect to their collaboration patterns as compared to random articles. [sent-70, score-0.291]

42 We also demonstrate an application of our classifier model in Section 5 by mining frequent collaboration patterns in the revision histories of different articles. [sent-75, score-0.203]
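Such collaboration-pattern mining can be sketched as counting contiguous edit-category sequences across revision histories and keeping those above a minimum support. The input format, pattern length, and threshold below are hypothetical, as this excerpt does not specify the paper's mining algorithm:

```python
from collections import Counter

def frequent_patterns(histories, length=2, min_support=2):
    """Count contiguous edit-category patterns of a given length across
    revision histories and keep those meeting a minimum support.

    `histories` maps an article name to the sequence of edit categories
    observed in its revision history (illustrative input format).
    """
    counts = Counter()
    for categories in histories.values():
        seen = set()  # count each pattern at most once per article
        for i in range(len(categories) - length + 1):
            seen.add(tuple(categories[i:i + length]))
        counts.update(seen)
    return {p: c for p, c in counts.items() if c >= min_support}

histories = {
    "article_a": ["SPELLING", "MARKUP-I", "VANDALISM", "REVERT"],
    "article_b": ["INFORMATION-I", "VANDALISM", "REVERT"],
}
print(frequent_patterns(histories))  # → {('VANDALISM', 'REVERT'): 2}
```

Comparing the pattern sets mined from featured and from random articles is then one way to quantify the homogeneity claim made above.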

43 2 Related Work Wikipedia is a huge data source for generating training data for edit category classification, as all previous versions of each page in the encyclopedia are stored in its revision history. [sent-77, score-1.056]

44 Unsurprisingly, the number of studies extracting certain kinds of Wikipedia edits keeps growing. [sent-78, score-0.505]

45 Most of these use manually defined rules or filters to find the right kind of edits. [sent-79, score-0.026]

46 Among the latter, there are NLP applications such as the detection of lexical errors (Nelken and Yamangil, 2008), spelling error correction (Max and Wisniewski, 2010; Zesch, 2012), preposition error correction (Cahill et al. [sent-80, score-0.339]

47 , 2013), sentence compression (Nelken and Yamangil, 2008; Yamangil and Nelken, 2008), summarization (Nelken and Yamangil, 2008), simplification (Yatskar et al. [sent-81, score-0.023]

48 , 2010; Woodsend and Lapata, 2011), paraphrasing (Max and Wisniewski, 2010; Dutrey et al. [sent-82, score-0.027]

49 , 2011), textual entailment (Zanzotto and Pennacchiotti, 2010; Cabrio et al. [sent-83, score-0.025]

50 Bronner and Monz (2012) define features for the supervised classification of factual and fluency edits. [sent-88, score-0.175]

51 Furthermore, they use features based on POS tags, named entities, acronyms, and a language model. Figure 1: An example edit from WPEC labeled with REFERENCE-M, as displayed by Wikimedia’s diff page tool. [sent-90, score-0.583]

52 Vandalism detection in Wikipedia has mostly been defined as a binary machine learning task, where the goal is to classify a pair of adjacent revisions as vandalized or not-vandalized based on edit category features. [sent-93, score-0.832]

53 (2011), the authors group these features into meta data (author, comment and time stamp of a revision), reputation (author and article reputation), and textual (language-independent) features. [sent-95, score-0.233]

54 This classifier was also used in the vandalism detection study of Javanmardi et al. [sent-102, score-0.176]

55 In contrast to the approach of Bronner and Monz (2012) and previous vandalism classification studies, we build a model which accounts for multi-labeling and a fine-grained edit category system. [sent-104, score-0.86]

56 Our feature set builds upon existing work while adding a substantial number of new features. [sent-105, score-0.021]

57 In this corpus, each pair of adjacent revisions is segmented into one or more edits. [sent-108, score-0.137]

58 This enables an accurate picture of the editing process, as an author may perform several independent edits in the same revision. [sent-109, score-0.572]

59 when an entire new paragraph including text, references and markup is added. [sent-115, score-0.121]

60 These are calculated via a line-based diff comparison on the source text (including wiki markup). [sent-117, score-0.068]

61 As previously suggested (Daxenberger and Gurevych, 2012), inside modified lines, only the span of text which has actually been changed is marked as an edit (either Insertion, Deletion or Modification), not the entire line. [sent-118, score-0.535]
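Marking only the changed span within a modified line can be sketched with a character-level SequenceMatcher; the function name is illustrative and this is a simplification of the paper's span-marking procedure:

```python
import difflib

def changed_spans(old_line: str, new_line: str):
    """Return only the changed spans of a modified line, rather than
    the whole line (a simplified sketch of span-level edit marking)."""
    matcher = difflib.SequenceMatcher(a=old_line, b=new_line)
    spans = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":
            spans.append((tag, old_line[i1:i2], new_line[j1:j2]))
    return spans

print(changed_spans("It was grey and small.", "It was gray and small."))
```

Running this on the example line yields a single replace span ("e" → "a") instead of flagging the entire line as modified.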

62 In Daxenberger and Gurevych (2012), we divide the 21-category taxonomy into text-base (meaning-changing edits), surface (non-meaning-changing edits) and Wikipedia policy (VANDALISM and REVERT) edits. [sent-121, score-0.052]

63 Among the text-base edits, we include categories for templates, references (internal and external links), files and information, each of which is further divided into an insertion (I), deletion (D) and modification (M) category. [sent-122, score-0.164]

64 Surface edits consist of paraphrases, spelling and grammar corrections, relocations and markup edits. [sent-123, score-0.665]

65 The latter category contains all edits which affect markup elements that are not covered by any of the other categories and is divided into insertions, deletions and modifications. [sent-124, score-0.817]

66 We also suggested an OTHER category, which is intended for edits which cannot be labeled due to segmentation errors. [sent-126, score-0.506]
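The taxonomy described above can be sketched as a flat list that indeed totals 21 categories; the exact label strings here are paraphrased from the text, not the paper's official names:

```python
# Text-base edits: templates, references, files and information,
# each divided into insertion (I), deletion (D) and modification (M).
TEXT_BASE = [f"{kind}-{op}"
             for kind in ("TEMPLATE", "REFERENCE", "FILE", "INFORMATION")
             for op in ("I", "D", "M")]                    # 12 categories

# Surface edits: paraphrases, spelling/grammar corrections,
# relocations, plus markup insertions/deletions/modifications.
SURFACE = (["PARAPHRASE", "SPELLING-GRAMMAR", "RELOCATION"]
           + [f"MARKUP-{op}" for op in ("I", "D", "M")])   # 6 categories

POLICY = ["VANDALISM", "REVERT"]                           # 2 categories

TAXONOMY = TEXT_BASE + SURFACE + POLICY + ["OTHER"]
print(len(TAXONOMY))  # → 21
```

The arithmetic (12 + 6 + 2 + 1) matches the 21-category taxonomy the paper classifies against.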

67 Figure 1 shows an example edit from WPEC, labeled with the REFERENCE-M category. In this example, n = 1 (unigrams). [sent-127, score-0.513]

68 True if m corresponds to an internal link, false otherwise. [sent-128, score-0.037]

69 Table 1: List of edit category classification features with explanations. [sent-129, score-0.727]

70 The values correspond to the example edit from Figure 1. [sent-130, score-0.513]

71 m may refer to internal link, external link, image, template or markup element. [sent-131, score-0.179]

72 The overall inter-annotator agreement measured as Krippendorff’s α is . [sent-137, score-0.024]

73 WPEC consists of 981 revision pairs, segmented into 1,995 edits. [sent-140, score-0.291]

74 We define edit category classification as a multi-label classification task. [sent-141, score-0.815]

75 For the sake of readability, in the following we will refer to an edit e^k_{v-1,v} as e_i, with e_i ∈ E, where 0 ≤ i < 1995 and E is the set of all edits. [sent-142, score-0.547]
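The micro-averaged F1 reported in the abstract (.62) pools decisions across all labels before computing precision and recall. A stdlib sketch of this metric over multi-label predictions (the category names in the example are illustrative):

```python
def micro_f1(gold, pred):
    """Micro-averaged F1 over multi-label predictions: pool true
    positives, false positives and false negatives across all labels
    and instances before computing precision and recall."""
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        g, p = set(g), set(p)
        tp += len(g & p)   # labels correctly predicted
        fp += len(p - g)   # labels predicted but not gold
        fn += len(g - p)   # gold labels missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = [{"SPELLING"}, {"MARKUP-I", "REFERENCE-M"}, {"VANDALISM"}]
pred = [{"SPELLING"}, {"MARKUP-I"}, {"REVERT"}]
print(round(micro_f1(gold, pred), 3))  # → 0.571
```

Because each edit e_i can carry several of the 21 labels, micro-averaging weights frequent categories more heavily than macro-averaging would.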


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('edit', 0.513), ('edits', 0.484), ('revision', 0.265), ('wpec', 0.214), ('daxenberger', 0.183), ('wikipedia', 0.147), ('vandalism', 0.133), ('rv', 0.128), ('category', 0.126), ('bronner', 0.122), ('nelken', 0.122), ('yamangil', 0.122), ('markup', 0.121), ('gurevych', 0.116), ('collaboration', 0.106), ('adler', 0.092), ('monz', 0.09), ('classification', 0.088), ('revisions', 0.08), ('wisniewski', 0.08), ('correction', 0.079), ('meta', 0.074), ('reputation', 0.073), ('histories', 0.068), ('fluency', 0.064), ('evk', 0.061), ('javanmardi', 0.061), ('stamp', 0.061), ('spelling', 0.06), ('encyclopedia', 0.058), ('articles', 0.055), ('ferschke', 0.053), ('zesch', 0.053), ('homogeneity', 0.053), ('potthast', 0.053), ('taxonomy', 0.052), ('ukp', 0.048), ('cahill', 0.048), ('deletions', 0.048), ('featured', 0.048), ('insertions', 0.048), ('adt', 0.045), ('collaboratively', 0.045), ('diff', 0.045), ('detection', 0.043), ('insertion', 0.043), ('editing', 0.04), ('versions', 0.04), ('classify', 0.039), ('categories', 0.038), ('internal', 0.037), ('deletion', 0.036), ('ei', 0.034), ('history', 0.033), ('adjacent', 0.031), ('preposition', 0.03), ('document', 0.03), ('patterns', 0.029), ('huge', 0.029), ('paraphrase', 0.028), ('link', 0.028), ('max', 0.028), ('paraphrasing', 0.027), ('newer', 0.027), ('ref', 0.027), ('darmstadt', 0.027), ('declined', 0.027), ('revert', 0.027), ('thor', 0.027), ('segmented', 0.026), ('kind', 0.026), ('modification', 0.026), ('page', 0.025), ('textual', 0.025), ('error', 0.024), ('interannotator', 0.024), ('cabrio', 0.024), ('edi', 0.024), ('nunes', 0.024), ('ovf', 0.024), ('ubiquitous', 0.024), ('author', 0.023), ('simplification', 0.023), ('benefiting', 0.023), ('plagiarism', 0.023), ('breiman', 0.023), ('yatskar', 0.023), ('factual', 0.023), ('johannes', 0.023), ('wiki', 0.023), ('woodsend', 0.023), ('suggested', 0.022), ('studies', 0.021), ('characterizes', 0.021), ('madnani', 0.021), ('zanzotto', 0.021), ('recasens', 0.021), ('enables', 0.021), ('substantial', 0.021), ('external', 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999988 34 emnlp-2013-Automatically Classifying Edit Categories in Wikipedia Revisions

Author: Johannes Daxenberger ; Iryna Gurevych

Abstract: In this paper, we analyze a novel set of features for the task of automatic edit category classification. Edit category classification assigns categories such as spelling error correction, paraphrase or vandalism to edits in a document. Our features are based on differences between two versions of a document including meta data, textual and language properties and markup. In a supervised machine learning experiment, we achieve a micro-averaged F1 score of .62 on a corpus of edits from the English Wikipedia. In this corpus, each edit has been multi-labeled according to a 21-category taxonomy. A model trained on the same data achieves state-of-the-art performance on the related task of fluency edit classification. We apply pattern mining to automatically labeled edits in the revision histories of different Wikipedia articles. Our results suggest that high-quality articles show a higher degree of homogeneity with respect to their collaboration patterns as compared to random articles.

2 0.1149377 61 emnlp-2013-Detecting Promotional Content in Wikipedia

Author: Shruti Bhosale ; Heath Vinicombe ; Raymond Mooney

Abstract: This paper presents an approach for detecting promotional content in Wikipedia. By incorporating stylometric features, including features based on n-gram and PCFG language models, we demonstrate improved accuracy at identifying promotional articles, compared to using only lexical information and metafeatures.

3 0.069200411 168 emnlp-2013-Semi-Supervised Feature Transformation for Dependency Parsing

Author: Wenliang Chen ; Min Zhang ; Yue Zhang

Abstract: In current dependency parsing models, conventional features (i.e. base features) defined over surface words and part-of-speech tags in a relatively high-dimensional feature space may suffer from the data sparseness problem and thus exhibit less discriminative power on unseen data. In this paper, we propose a novel semi-supervised approach to addressing the problem by transforming the base features into high-level features (i.e. meta features) with the help of a large amount of automatically parsed data. The meta features are used together with base features in our final parser. Our studies indicate that our proposed approach is very effective in processing unseen data and features. Experiments on Chinese and English data sets show that the final parser achieves the best-reported accuracy on the Chinese data and comparable accuracy with the best known parsers on the English data.

4 0.063860334 135 emnlp-2013-Monolingual Marginal Matching for Translation Model Adaptation

Author: Ann Irvine ; Chris Quirk ; Hal Daume III

Abstract: When using a machine translation (MT) model trained on OLD-domain parallel data to translate NEW-domain text, one major challenge is the large number of out-of-vocabulary (OOV) and new-translation-sense words. We present a method to identify new translations of both known and unknown source language words that uses NEW-domain comparable document pairs. Starting with a joint distribution of source-target word pairs derived from the OLD-domain parallel corpus, our method recovers a new joint distribution that matches the marginal distributions of the NEW-domain comparable document pairs, while minimizing the divergence from the OLD-domain distribution. Adding learned translations to our French-English MT model results in gains of about 2 BLEU points over strong baselines.

5 0.051408108 69 emnlp-2013-Efficient Collective Entity Linking with Stacking

Author: Zhengyan He ; Shujie Liu ; Yang Song ; Mu Li ; Ming Zhou ; Houfeng Wang

Abstract: Entity disambiguation works by linking ambiguous mentions in text to their corresponding real-world entities in knowledge base. Recent collective disambiguation methods enforce coherence among contextual decisions at the cost of non-trivial inference processes. We propose a fast collective disambiguation approach based on stacking. First, we train a local predictor g0 with learning to rank as base learner, to generate initial ranking list of candidates. Second, top k candidates of related instances are searched for constructing expressive global coherence features. A global predictor g1 is trained in the augmented feature space and stacking is employed to tackle the train/test mismatch problem. The proposed method is fast and easy to implement. Experiments show its effectiveness over various algorithms on several public datasets. By learning a rich semantic relatedness measure be- . tween entity categories and context document, performance is further improved.

6 0.050143976 97 emnlp-2013-Identifying Web Search Query Reformulation using Concept based Matching

7 0.043663111 160 emnlp-2013-Relational Inference for Wikification

8 0.042919517 9 emnlp-2013-A Log-Linear Model for Unsupervised Text Normalization

9 0.042583611 114 emnlp-2013-Joint Learning and Inference for Grammatical Error Correction

10 0.041818064 24 emnlp-2013-Application of Localized Similarity for Web Documents

11 0.040569387 42 emnlp-2013-Building Specialized Bilingual Lexicons Using Large Scale Background Knowledge

12 0.040264696 169 emnlp-2013-Semi-Supervised Representation Learning for Cross-Lingual Text Classification

13 0.036419597 150 emnlp-2013-Pair Language Models for Deriving Alternative Pronunciations and Spellings from Pronunciation Dictionaries

14 0.036110934 31 emnlp-2013-Automatic Feature Engineering for Answer Selection and Extraction

15 0.036046937 64 emnlp-2013-Discriminative Improvements to Distributional Sentence Similarity

16 0.035968028 132 emnlp-2013-Mining Scientific Terms and their Definitions: A Study of the ACL Anthology

17 0.035339698 110 emnlp-2013-Joint Bootstrapping of Corpus Annotations and Entity Types

18 0.032365091 167 emnlp-2013-Semi-Markov Phrase-Based Monolingual Alignment

19 0.030372556 107 emnlp-2013-Interactive Machine Translation using Hierarchical Translation Models

20 0.030328497 7 emnlp-2013-A Hierarchical Entity-Based Approach to Structuralize User Generated Content in Social Media: A Case of Yahoo! Answers


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.104), (1, 0.021), (2, -0.014), (3, -0.011), (4, -0.0), (5, 0.01), (6, 0.041), (7, 0.038), (8, 0.035), (9, -0.043), (10, 0.01), (11, 0.044), (12, -0.01), (13, 0.022), (14, 0.011), (15, 0.022), (16, -0.097), (17, -0.022), (18, 0.016), (19, 0.061), (20, -0.112), (21, 0.077), (22, 0.097), (23, 0.014), (24, 0.075), (25, 0.034), (26, 0.073), (27, 0.011), (28, 0.131), (29, -0.005), (30, 0.125), (31, -0.045), (32, 0.032), (33, 0.058), (34, 0.135), (35, -0.015), (36, 0.023), (37, -0.086), (38, 0.017), (39, 0.041), (40, 0.183), (41, -0.059), (42, -0.049), (43, -0.115), (44, -0.063), (45, -0.099), (46, -0.132), (47, -0.202), (48, 0.083), (49, 0.043)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95822668 34 emnlp-2013-Automatically Classifying Edit Categories in Wikipedia Revisions

Author: Johannes Daxenberger ; Iryna Gurevych

Abstract: In this paper, we analyze a novel set of features for the task of automatic edit category classification. Edit category classification assigns categories such as spelling error correction, paraphrase or vandalism to edits in a document. Our features are based on differences between two versions of a document including meta data, textual and language properties and markup. In a supervised machine learning experiment, we achieve a micro-averaged F1 score of .62 on a corpus of edits from the English Wikipedia. In this corpus, each edit has been multi-labeled according to a 21-category taxonomy. A model trained on the same data achieves state-of-the-art performance on the related task of fluency edit classification. We apply pattern mining to automatically labeled edits in the revision histories of different Wikipedia articles. Our results suggest that high-quality articles show a higher degree of homogeneity with respect to their collaboration patterns as compared to random articles.

2 0.79852891 61 emnlp-2013-Detecting Promotional Content in Wikipedia

Author: Shruti Bhosale ; Heath Vinicombe ; Raymond Mooney

Abstract: This paper presents an approach for detecting promotional content in Wikipedia. By incorporating stylometric features, including features based on n-gram and PCFG language models, we demonstrate improved accuracy at identifying promotional articles, compared to using only lexical information and metafeatures.

3 0.47839424 168 emnlp-2013-Semi-Supervised Feature Transformation for Dependency Parsing

Author: Wenliang Chen ; Min Zhang ; Yue Zhang

Abstract: In current dependency parsing models, conventional features (i.e. base features) defined over surface words and part-of-speech tags in a relatively high-dimensional feature space may suffer from the data sparseness problem and thus exhibit less discriminative power on unseen data. In this paper, we propose a novel semi-supervised approach to addressing the problem by transforming the base features into high-level features (i.e. meta features) with the help of a large amount of automatically parsed data. The meta features are used together with base features in our final parser. Our studies indicate that our proposed approach is very effective in processing unseen data and features. Experiments on Chinese and English data sets show that the final parser achieves the best-reported accuracy on the Chinese data and comparable accuracy with the best known parsers on the English data.

4 0.35403448 114 emnlp-2013-Joint Learning and Inference for Grammatical Error Correction

Author: Alla Rozovskaya ; Dan Roth

Abstract: State-of-the-art systems for grammatical error correction are based on a collection of independently-trained models for specific errors. Such models ignore linguistic interactions at the sentence level and thus do poorly on mistakes that involve grammatical dependencies among several words. In this paper, we identify linguistic structures with interacting grammatical properties and propose to address such dependencies via joint inference and joint learning. We show that it is possible to identify interactions well enough to facilitate a joint approach and, consequently, that joint methods correct incoherent predictions that independentlytrained classifiers tend to produce. Furthermore, because the joint learning model considers interacting phenomena during training, it is able to identify mistakes that require mak- ing multiple changes simultaneously and that standard approaches miss. Overall, our model significantly outperforms the Illinois system that placed first in the CoNLL-2013 shared task on grammatical error correction.

5 0.34765467 133 emnlp-2013-Modeling Scientific Impact with Topical Influence Regression

Author: James Foulds ; Padhraic Smyth

Abstract: When reviewing scientific literature, it would be useful to have automatic tools that identify the most influential scientific articles as well as how ideas propagate between articles. In this context, this paper introduces topical influence, a quantitative measure of the extent to which an article tends to spread its topics to the articles that cite it. Given the text of the articles and their citation graph, we show how to learn a probabilistic model to recover both the degree of topical influence of each article and the influence relationships between articles. Experimental results on corpora from two well-known computer science conferences are used to illustrate and validate the proposed approach.

6 0.34294423 189 emnlp-2013-Two-Stage Method for Large-Scale Acquisition of Contradiction Pattern Pairs using Entailment

7 0.33771536 178 emnlp-2013-Success with Style: Using Writing Style to Predict the Success of Novels

8 0.33255398 24 emnlp-2013-Application of Localized Similarity for Web Documents

9 0.32451361 198 emnlp-2013-Using Soft Constraints in Joint Inference for Clinical Concept Recognition

10 0.3207747 199 emnlp-2013-Using Topic Modeling to Improve Prediction of Neuroticism and Depression in College Students

11 0.30851942 135 emnlp-2013-Monolingual Marginal Matching for Translation Model Adaptation

12 0.30628657 69 emnlp-2013-Efficient Collective Entity Linking with Stacking

13 0.27115381 162 emnlp-2013-Russian Stress Prediction using Maximum Entropy Ranking

14 0.27070281 26 emnlp-2013-Assembling the Kazakh Language Corpus

15 0.27015612 5 emnlp-2013-A Discourse-Driven Content Model for Summarising Scientific Articles Evaluated in a Complex Question Answering Task

16 0.25749174 64 emnlp-2013-Discriminative Improvements to Distributional Sentence Similarity

17 0.256152 150 emnlp-2013-Pair Language Models for Deriving Alternative Pronunciations and Spellings from Pronunciation Dictionaries

18 0.24716589 171 emnlp-2013-Shift-Reduce Word Reordering for Machine Translation

19 0.24675043 27 emnlp-2013-Authorship Attribution of Micro-Messages

20 0.24519403 197 emnlp-2013-Using Paraphrases and Lexical Semantics to Improve the Accuracy and the Robustness of Supervised Models in Situated Dialogue Systems


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(3, 0.07), (18, 0.03), (22, 0.03), (30, 0.051), (45, 0.474), (50, 0.021), (51, 0.124), (66, 0.021), (71, 0.024), (75, 0.028), (77, 0.01), (96, 0.022)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.78034508 34 emnlp-2013-Automatically Classifying Edit Categories in Wikipedia Revisions

Author: Johannes Daxenberger ; Iryna Gurevych

Abstract: In this paper, we analyze a novel set of features for the task of automatic edit category classification. Edit category classification assigns categories such as spelling error correction, paraphrase or vandalism to edits in a document. Our features are based on differences between two versions of a document including meta data, textual and language properties and markup. In a supervised machine learning experiment, we achieve a micro-averaged F1 score of .62 on a corpus of edits from the English Wikipedia. In this corpus, each edit has been multi-labeled according to a 21-category taxonomy. A model trained on the same data achieves state-of-the-art performance on the related task of fluency edit classification. We apply pattern mining to automatically labeled edits in the revision histories of different Wikipedia articles. Our results suggest that high-quality articles show a higher degree of homogeneity with respect to their collaboration patterns as compared to random articles.

2 0.69317013 171 emnlp-2013-Shift-Reduce Word Reordering for Machine Translation

Author: Katsuhiko Hayashi ; Katsuhito Sudoh ; Hajime Tsukada ; Jun Suzuki ; Masaaki Nagata

Abstract: This paper presents a novel word reordering model that employs a shift-reduce parser for inversion transduction grammars. Our model uses rich syntax parsing features for word reordering and runs in linear time. We apply it to postordering of phrase-based machine translation (PBMT) for Japanese-to-English patent tasks. Our experimental results show that our method achieves a significant improvement of +3.1 BLEU scores against 30.15 BLEU scores of the baseline PBMT system.

3 0.67965877 58 emnlp-2013-Dependency Language Models for Sentence Completion

Author: Joseph Gubbins ; Andreas Vlachos

Abstract: Sentence completion is a challenging semantic modeling task in which models must choose the most appropriate word from a given set to complete a sentence. Although a variety of language models have been applied to this task in previous work, none of the existing approaches incorporate syntactic information. In this paper we propose to tackle this task using a pair of simple language models in which the probability of a sentence is estimated as the probability of the lexicalisation of a given syntactic dependency tree. We apply our approach to the Microsoft Research Sentence Completion Challenge and show that it improves on n-gram language models by 8.7 percentage points, achieving the highest accuracy reported to date apart from neural language models that are more complex and ex- pensive to train.

4 0.38417321 116 emnlp-2013-Joint Parsing and Disfluency Detection in Linear Time

Author: Mohammad Sadegh Rasooli ; Joel Tetreault

Abstract: We introduce a novel method to jointly parse and detect disfluencies in spoken utterances. Our model can use arbitrary features for parsing sentences and adapt itself with out-ofdomain data. We show that our method, based on transition-based parsing, performs at a high level of accuracy for both the parsing and disfluency detection tasks. Additionally, our method is the fastest for the joint task, running in linear time.

5 0.37038869 168 emnlp-2013-Semi-Supervised Feature Transformation for Dependency Parsing

Author: Wenliang Chen ; Min Zhang ; Yue Zhang

Abstract: In current dependency parsing models, conventional features (i.e. base features) defined over surface words and part-of-speech tags in a relatively high-dimensional feature space may suffer from the data sparseness problem and thus exhibit less discriminative power on unseen data. In this paper, we propose a novel semi-supervised approach to addressing the problem by transforming the base features into high-level features (i.e. meta features) with the help of a large amount of automatically parsed data. The meta features are used together with base features in our final parser. Our studies indicate that our proposed approach is very effective in processing unseen data and features. Experiments on Chinese and English data sets show that the final parser achieves the best-reported accuracy on the Chinese data and comparable accuracy with the best known parsers on the English data.

6 0.36959478 146 emnlp-2013-Optimal Incremental Parsing via Best-First Dynamic Programming

7 0.35594904 50 emnlp-2013-Combining PCFG-LA Models with Dual Decomposition: A Case Study with Function Labels and Binarization

8 0.35110456 150 emnlp-2013-Pair Language Models for Deriving Alternative Pronunciations and Spellings from Pronunciation Dictionaries

9 0.34776571 190 emnlp-2013-Ubertagging: Joint Segmentation and Supertagging for English

10 0.34125137 14 emnlp-2013-A Synchronous Context Free Grammar for Time Normalization

11 0.34038213 87 emnlp-2013-Fish Transporters and Miracle Homes: How Compositional Distributional Semantics can Help NP Parsing

12 0.33648697 107 emnlp-2013-Interactive Machine Translation using Hierarchical Translation Models

13 0.33534199 114 emnlp-2013-Joint Learning and Inference for Grammatical Error Correction

14 0.32705235 53 emnlp-2013-Cross-Lingual Discriminative Learning of Sequence Models with Posterior Regularization

15 0.32692388 132 emnlp-2013-Mining Scientific Terms and their Definitions: A Study of the ACL Anthology

16 0.32592964 66 emnlp-2013-Dynamic Feature Selection for Dependency Parsing

17 0.32546562 86 emnlp-2013-Feature Noising for Log-Linear Structured Prediction

18 0.32414261 181 emnlp-2013-The Effects of Syntactic Features in Automatic Prediction of Morphology

19 0.3239333 68 emnlp-2013-Effectiveness and Efficiency of Open Relation Extraction

20 0.32387841 187 emnlp-2013-Translation with Source Constituency and Dependency Trees