emnlp emnlp2011 emnlp2011-12 knowledge-graph by maker-knowledge-mining

12 emnlp-2011-A Weakly-supervised Approach to Argumentative Zoning of Scientific Documents


Source: pdf

Author: Yufan Guo ; Anna Korhonen ; Thierry Poibeau

Abstract: Argumentative Zoning (AZ) – analysis of the argumentative structure of a scientific paper – has proved useful for a number of information access tasks. Current approaches to AZ rely on supervised machine learning (ML). Requiring large amounts of annotated data, these approaches are expensive to develop and port to different domains and tasks. A potential solution to this problem is to use weakly-supervised ML instead. We investigate the performance of four weakly-supervised classifiers on scientific abstract data annotated for multiple AZ classes. Our best classifier, based on the combination of active learning and self-training, outperforms our best supervised classifier, yielding a high accuracy of 81% when using just 10% of the labeled data. This result suggests that weakly-supervised learning could be employed to improve the practical applicability and portability of AZ across different information access tasks.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Argumentative Zoning (AZ) – analysis of the argumentative structure of a scientific paper – has proved useful for a number of information access tasks. [sent-7, score-0.48]

2 Current approaches to AZ rely on supervised machine learning (ML). [sent-8, score-0.078]

3 A potential solution to this problem is to use weakly-supervised ML instead. [sent-10, score-0.123]

4 We investigate the performance of four weakly-supervised classifiers on scientific abstract data annotated for multiple AZ classes. [sent-11, score-0.148]

5 Our best classifier based on the combination of active learning and self-training outperforms our best supervised classifier, yielding a high accuracy of 81% when using just 10% of the labeled data. [sent-12, score-0.473]

6 Many practical tasks require accessing specific types of information in scientific literature. [sent-14, score-0.148]

7 For example, a reader of scientific literature may be looking for information about the objective of the study in question, the methods used in the study, the results obtained, or the conclusions drawn by authors. [sent-15, score-0.148]

8 Some of these classify sentences according to typical section names seen in scientific documents (Lin et al. [sent-18, score-0.148]

9 on argumentative zones (Teufel and Moens, 2002; Mizuta et al. [sent-22, score-0.192]

10 However, relying on fully supervised machine learning (ML) and a large body of annotated data, existing approaches are expensive to develop and port to different scientific domains and tasks. [sent-31, score-0.27]

11 Relying on a small amount of labeled data and a large pool of unlabeled data, weakly-supervised techniques (e. [sent-33, score-0.175]

12 semi-supervision, active learning, co/tri-training, self-training) aim to keep the advantages of fully supervised approaches. [sent-35, score-0.326]

13 They have been applied to a wide range of NLP tasks, including named-entity recognition, question answering, information extraction, text classification and many others (Abney, 2008), yielding performance levels similar or equivalent to those of fully supervised techniques. [sent-36, score-0.206]

14 Such techniques have not yet been applied to the analysis of the information structure of scientific documents by the aforementioned approaches. [sent-39, score-0.148]

15 Recent experiments have demonstrated the usefulness of weakly-supervised learning for classifying discourse relations in scientific texts, e.g. [sent-40, score-0.196]

16 In this paper, we investigate the potential of weakly-supervised learning for Argumentative Zoning (AZ) of scientific abstracts. [sent-45, score-0.148]

17 AZ is an approach to information structure which provides an analysis of the rhetorical progression of the scientific argument in a document (Teufel and Moens, 2002). [sent-46, score-0.242]

18 It has been used to analyze scientific texts in various disciplines including computational linguistics (Teufel and Moens, 2002), law (Hachey and Grover, 2006), biology (Mizuta et al. [sent-47, score-0.202]

19 This suggests that a weakly-supervised approach would be more practical than a fully supervised one for the real-world application of AZ. [sent-52, score-0.122]

20 Our best weakly-supervised classifier (Active SVM with self-training) outperforms the best supervised classifier (SVM), yielding a high accuracy of 81% when using just 10% of the labeled data. [sent-55, score-0.392]

21 When using just one third of the labeled data, it performs equally well as a fully supervised SVM which uses 100% of the labeled data. [sent-56, score-0.373]

22 Our investigation suggests that weakly-supervised learning could be employed to improve the practical applicability and portability of AZ to different information access tasks. [sent-57, score-0.206]

23 (2010) provide a corpus of 1000 biomedical abstracts (consisting of 7985 sentences and 225785 words) annotated according to three schemes of information structure – those based on section names (Hirohata et al. [sent-61, score-0.192]

24 AZ is a scheme which provides an analysis of the rhetorical progression of the scientific argument, following the knowledge claims made by authors. [sent-67, score-0.285]

25 Seven categories of this scheme (out of the 10 possible) actually appear in abstracts and in the resulting corpus. [sent-76, score-0.151]

26 For example, the Method zone (METH) is for sentences which describe a way of doing research, esp. [sent-78, score-0.088]

27 according to a defined and regular plan; a special form of procedure or characteristic set of procedures employed in a field of study as a mode of investigation and inquiry. [sent-79, score-0.083]

28 An example of a biomedical abstract annotated according to AZ is shown in Figure 1, with different zones highlighted in different colors. [sent-80, score-0.151]

29 For example, the RES zone is highlighted in lemon green. [sent-81, score-0.088]

30 Method (METH): a way of doing research, esp. according to a defined and regular plan; a special form of procedure or characteristic set of procedures employed in a field of study as a mode of investigation and inquiry. Result (RES): the effect, consequence, issue or outcome of an experiment; the quantity, formula, etc. [sent-86, score-0.083]

31 Analysis of N-terminal globin adducts is a common approach for monitoring the internal formation of BD-derived epoxides. [sent-90, score-0.111]

32 The procedure utilizes trypsin hydrolysis of globin and immunoaffinity (IA) purification of alkylated heptapeptides. [sent-94, score-0.081]

33 As internal standard, the labeled rat-[(13)C(5)(15)N]-Val (1-11) was prepared through direct alkylation of the corresponding peptide with EB. [sent-97, score-0.143]

34 (2010) used a variety of features in their fully supervised ML experiments on different schemes of information structure. [sent-120, score-0.162]

35 , 2007) trained on biomedical literature were employed for POS tagging, lemmatization and parsing. [sent-163, score-0.13]

36 Support Vector Machines (SVM) and Conditional Random Fields (CRF) have proved the best-performing fully supervised methods in most recent works on information structure, e.g. [sent-172, score-0.176]

37 We therefore implemented these methods as well as weakly-supervised variations of them: active SVM with and without self-training, transductive SVM and semi-supervised CRF. [sent-178, score-0.337]

38 Given a separating hyperplane w · x − b = 0, we want to maximize the distance from the hyperplane to the data points, or the distance between two parallel hyperplanes each of which separates the data. [sent-184, score-0.104]
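
In standard max-margin notation this objective reads as follows (a textbook hard-margin formulation; the paper's kernel and parameter settings are not given here):

```latex
% The margin between the parallel hyperplanes w·x - b = 1 and w·x - b = -1
% is 2/||w||, so maximizing it is equivalent to:
\min_{w,\,b} \; \tfrac{1}{2}\lVert w \rVert^2
\quad \text{subject to} \quad y_i\,(w \cdot x_i - b) \ge 1 \quad \forall i
```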

39 Active SVM (ASVM) starts with a small amount of labeled data, and iteratively chooses a proportion of unlabeled data for which the SVM has less confidence to be labeled (the labels can be restored from the original corpus) and used in the next round of learning, i.e. [sent-197, score-0.283]
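
A minimal sketch of such a pool-based loop, using scikit-learn's SVC as a stand-in base learner and a simulated oracle that restores gold labels from the corpus; the kernel, batch size and features are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.svm import SVC

def active_svm(X, y, seed_idx, rounds=10, batch=20):
    """Pool-based active learning with least-confidence querying."""
    labeled = set(seed_idx)
    clf = None
    for _ in range(rounds):
        idx = sorted(labeled)
        clf = SVC(kernel="linear", probability=True).fit(X[idx], y[idx])
        pool = np.array([i for i in range(len(X)) if i not in labeled])
        if pool.size == 0:
            break
        conf = clf.predict_proba(X[pool]).max(axis=1)  # top class probability
        query = pool[np.argsort(conf)[:batch]]         # least confident first
        labeled.update(int(i) for i in query)          # oracle restores gold labels
    return clf
```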

40 Active SVM with self-training (ASSVM) is an extension of ASVM where each round of training has two steps: (i) training on the labeled, and testing on the unlabeled data, and querying; (ii) training on both labeled and unlabeled/machine-labeled data by using the estimates from step (i). [sent-206, score-0.175]

41 The idea of ASSVM is to make the best use of the labeled data, and to make the most use of the unlabeled data. [sent-207, score-0.175]
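
One ASSVM round could then look like the sketch below, adding the self-training step (ii) on top of the querying step (i); the confidence threshold and the SVC stand-in are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVC

def assvm_round(X, y, labeled, batch=20, threshold=0.9):
    """One round: (i) query uncertain instances, (ii) retrain with self-labels."""
    idx = sorted(labeled)
    clf = SVC(kernel="linear", probability=True).fit(X[idx], y[idx])
    pool = np.array([i for i in range(len(X)) if i not in labeled])
    proba = clf.predict_proba(X[pool])
    conf = proba.max(axis=1)
    pseudo = clf.classes_[proba.argmax(axis=1)]
    labeled.update(int(i) for i in pool[np.argsort(conf)[:batch]])  # step (i)
    # step (ii): add confidently machine-labeled data that was not just queried
    keep = (conf >= threshold) & ~np.isin(pool, sorted(labeled))
    idx = sorted(labeled)
    X_aug = np.vstack([X[idx], X[pool[keep]]])
    y_aug = np.concatenate([y[idx], pseudo[keep]])
    return SVC(kernel="linear", probability=True).fit(X_aug, y_aug), labeled
```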

42 Transductive SVM (TSVM) is an extension of SVM which takes advantage of both labeled and unlabeled data (Vapnik, 1998). [sent-208, score-0.175]
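
The transductive objective can be sketched in the usual Vapnik-style soft-margin form; the trade-off parameters C, C* and the labels y*_j assigned to unlabeled points are standard notation, not values taken from the paper:

```latex
% Labels for the unlabeled points are optimized jointly with the hyperplane,
% pushing the decision boundary away from dense regions of unlabeled data.
\min_{w,\,b,\,y^*,\,\xi,\,\xi^*} \; \tfrac{1}{2}\lVert w \rVert^2
    + C \sum_i \xi_i + C^* \sum_j \xi^*_j
\quad \text{s.t.} \quad y_i (w \cdot x_i - b) \ge 1 - \xi_i, \quad
y^*_j (w \cdot x_j - b) \ge 1 - \xi^*_j, \quad \xi_i, \xi^*_j \ge 0
```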

43 Table 3 shows the results for the four weakly-supervised and two supervised methods when 10% of the training data is used. [sent-232, score-0.201]

44 ASSVM is the best performing method with an accuracy of 81% and a macro F-score of .76. (Table 3: Results when using 10% of the labeled data.) [sent-237, score-0.185]

45 Figure 2: Learning curve for the different methods (SVM, CRF, ASVM, ASSVM, TSVM, SSCRF) when using 0-100% of the labeled data. [sent-282, score-0.108]

46 The macro F-score is calculated for the 5 scheme categories which are found by all the methods. [sent-290, score-0.12]
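
For reference, a macro F-score restricted to a shared label set can be computed as follows; which five AZ categories are shared by all methods is not stated here, so the label choice below is an assumption (the abbreviations follow the paper's scheme):

```python
from sklearn.metrics import f1_score

shared = ["BKG", "OBJ", "METH", "RES", "CON"]          # assumed shared labels
y_true = ["BKG", "METH", "RES", "RES", "CON", "OBJ"]   # toy gold labels
y_pred = ["BKG", "METH", "RES", "CON", "CON", "OBJ"]   # toy predictions
# average per-class F1 over only the categories found by all methods
macro_f = f1_score(y_true, y_pred, labels=shared, average="macro")
print(round(macro_f, 2))
```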

47 Both methods outperform supervised SVM with a statistically significant difference (p < . [sent-293, score-0.078]

48 Figure 2 shows the learning curve of different methods (in terms of accuracy) when the percentage of the labeled data (in the training set) ranges from 0 to 100%. [sent-312, score-0.108]

49 ASSVM outperforms other methods, reaching its best performance of 88% accuracy when using ∼40% of the labeled data. [sent-313, score-0.144]

50 Indeed, when using 33% of the labeled data, it already performs equally well as fully-supervised SVM using 100% of the labeled data. [sent-314, score-0.143]

51 The advantage of ASSVM over ASVM (the second best method) is clear especially when 20-40% of the labeled data is used. [sent-315, score-0.108]

52 SVM and TSVM tend to perform quite similarly with each other when more than 25% of the labeled data is used, but when less data is available, SVM performs better. [sent-316, score-0.108]

53 Looking at the CRF-based methods, SSCRF outperforms CRF in particular when 10-25% of the labeled data is used. [sent-317, score-0.108]

54 We employed in our experiments a collection of features which had performed well in previous supervised AZ experiments. [sent-328, score-0.128]

55 We took our best-performing method, ASSVM, and conducted a leave-one-out analysis of the features with 10% of the labeled data. [sent-330, score-0.108]

56 Table 4: Leave-one-out feature results for ASSVM when using 10% of the labeled data. [sent-332, score-0.108]
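
A sketch of that leave-one-out procedure: drop one feature group at a time, retrain, and compare each score against the all-features baseline. The plain SVC stand-in for ASSVM and the column-to-group mapping are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def leave_one_out_features(X_tr, y_tr, X_te, y_te, group_of_col):
    """group_of_col[j] names the feature group that column j belongs to."""
    scores = {}
    for group in set(group_of_col):
        keep = np.array([g != group for g in group_of_col])
        clf = SVC(kernel="linear").fit(X_tr[:, keep], y_tr)
        scores[group] = accuracy_score(y_te, clf.predict(X_te[:, keep]))
    return scores  # a large drop for a group indicates an important feature
```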

57 Voice is particularly important for CON, which differs from other categories in the sense that it is marked by frequent usage of active voice. [sent-419, score-0.24]

58 In our experiments, the majority of weakly-supervised methods outperformed their corresponding supervised methods when using just 10% of the labeled data. [sent-428, score-0.309]

59 (2010) made a similar discovery when comparing fully supervised versions of SVM and CRF. [sent-431, score-0.122]

60 Our best performing weakly-supervised methods were those based on active learning. [sent-432, score-0.204]

61 Making good use of both labeled and unlabeled data, active learning combined with self-training (ASSVM) proved to be the most useful method. [sent-433, score-0.433]

62 Given 10% of the labeled data, ASSVM obtained an accuracy of 81% and an F-score of .76. [sent-434, score-0.144]

63 It reached its top performance (88% accuracy) when using 40% of the labeled data, and performed equally well as fully supervised SVM (i.e. [sent-436, score-0.265]

64 100% of the labeled data) when using just one third of the labeled data. [sent-438, score-0.216]

65 This result is in line with the results of many other text classification works where active learning (alone or in combination with other techniques such as self-training) has proved similarly useful, e.g. [sent-439, score-0.295]

66 While active learning iteratively explores the unknown aspects of the unlabeled data, semisupervised learning attempts to make the best use of what it already knows about the data. [sent-444, score-0.302]

67 In our experiments, semi-supervised methods (TSVM and SSCRF) did not perform equally well as active learning – TSVM even produced a lower accuracy than SVM with the same amount of labeled data – although these methods have gained success in related works. [sent-445, score-0.383]

68 (2006) employed a much larger data set than we did – one including 5,448 labeled instances (in 3 classes) and 5,210-25,145 unlabeled instances. [sent-453, score-0.225]

69 Given more labeled and unlabeled data per class we might be able to obtain better performance using SSCRF also on our task. [sent-454, score-0.175]

70 However, given the high cost of obtaining labeled data methods not needing it are preferable. [sent-455, score-0.108]

71 – 6 Conclusions and future work Our experiments show that weakly-supervised learning can be used to identify AZ in scientific documents with good accuracy when only a limited amount of labeled data is available. [sent-456, score-0.292]

72 Recently, some work has been done on the related task of classification of discourse relations in scientific texts: (Hernault et al. [sent-464, score-0.233]

73 They obtained 30-60% accuracy on the RST Discourse Treebank (including 41 relation types) when using 100-10000 labeled and 100000 unlabeled instances. [sent-466, score-0.211]

74 The accuracy was 20-60% when using the labeled data only. [sent-467, score-0.144]

75 However, although related, the task of discourse relation classification differs substantially from our task in that it focuses on local discourse relations while our task focuses on the global structure of the scientific document. [sent-468, score-0.281]

76 First, the approach to active learning could be improved in various ways. [sent-470, score-0.204]

77 The query strategy we employed (uncertainty sampling) is a relatively straightforward method which only considers the best estimate for each unlabeled instance, disregarding other estimates that may contain useful information. [sent-471, score-0.117]
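
Margin sampling, for instance, is a small step beyond least-confidence querying: it also consults the runner-up estimate, addressing exactly the limitation noted above (a sketch, not from the paper):

```python
import numpy as np

def least_confidence(proba):
    """Query score used above: low top probability => more uncertain."""
    return 1.0 - proba.max(axis=1)

def margin_sampling(proba):
    """Also uses the runner-up estimate: small top-two gap => more uncertain."""
    top2 = np.sort(proba, axis=1)[:, -2:]
    return -(top2[:, 1] - top2[:, 0])  # negate so larger score = query first
```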

78 , 2006) shows that Logistic Regression (LR) outperforms SVM when used with active learning, yielding a higher F-score on the Reuters-21578 data set (binary classification, 10,788 documents in total, 100 of them labeled). [sent-480, score-0.251]

79 It would be interesting to explore whether supervised methods other than SVM are optimal for active learning when applied to our task. [sent-481, score-0.282]

80 In addition, other combinations of weakly-supervised methods might be worth looking into, such as EM+active learning (McCallum and Nigam, 1998) and co-training+EM+active learning (Muslea et al. [sent-484, score-0.123]

81 , 2002), which have proved promising in related text classification works. [sent-485, score-0.091]

82 Our feature analysis showed that not all the features which had proved promising in fully supervised experiments were equally promising when applied to weakly-supervised learning from smaller data. [sent-487, score-0.211]

83 One of the key motivations for developing a weakly-supervised approach is to facilitate easy porting of schemes such as AZ to new tasks and domains. [sent-491, score-0.163]

84 Recent research shows that active learning in a target domain can leverage information from a different but related (source) domain (Rai et al. [sent-492, score-0.204]

85 In the future, we plan to investigate the usefulness of weakly-supervised learning for identifying other schemes of information structure, e.g. [sent-498, score-0.081]

86 , 2010), and not only in scientific abstracts but also in full journal papers which typically exemplify a larger set of scheme categories. [sent-504, score-0.263]

87 , 2007), and for practical tasks such as manual review of scientific papers for research purposes (Guo et al. [sent-507, score-0.148]

88 Approximate statistical tests for comparing supervised classification learning algorithms. [sent-550, score-0.115]

89 Identifying the information structure of scientific abstracts: an investigation of three different schemes. [sent-559, score-0.181]

90 Identifying sections in scientific abstracts using conditional random fields. [sent-590, score-0.22]

91 Learning from labeled and unlabeled documents: A comparative study on semisupervised text classification. [sent-610, score-0.206]

92 Corpora for the conceptualisation and zoning of scientific papers. [sent-624, score-0.254]

93 Employing em and pool-based active learning for text classification. [sent-643, score-0.204]

94 A baseline feature set for learning rhetorical zones using full articles in the biomedical domain. [sent-679, score-0.215]

95 Using argumentation to extract key sentences from biomedical abstracts. [sent-723, score-0.115]

96 Multi-dimensional classification of biomedical text: Toward automated, practical provision of high-utility text to diverse users. [sent-744, score-0.117]

97 Summarizing scientific articles: Experiments with relevance and rhetorical status. [sent-771, score-0.212]

98 Towards domain-independent argumentative zoning: Evidence from chemistry and computational linguistics. [sent-778, score-0.192]

99 Support vector machine active learning with applications to text classification. [sent-784, score-0.204]



similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('az', 0.388), ('teufel', 0.248), ('assvm', 0.229), ('active', 0.204), ('svm', 0.184), ('meth', 0.179), ('mizuta', 0.176), ('sscrf', 0.176), ('obj', 0.17), ('tsvm', 0.159), ('guo', 0.153), ('scientific', 0.148), ('bkg', 0.141), ('moens', 0.138), ('res', 0.138), ('hirohata', 0.123), ('weaklysupervised', 0.123), ('argumentative', 0.121), ('labeled', 0.108), ('asvm', 0.106), ('liakata', 0.106), ('zoning', 0.106), ('crf', 0.092), ('fut', 0.088), ('zone', 0.088), ('biomedical', 0.08), ('supervised', 0.078), ('con', 0.076), ('abstracts', 0.072), ('voice', 0.072), ('chemistry', 0.071), ('shatkay', 0.071), ('tbahriti', 0.071), ('zones', 0.071), ('ml', 0.069), ('unlabeled', 0.067), ('rel', 0.064), ('rhetorical', 0.064), ('jiao', 0.061), ('mullen', 0.061), ('bd', 0.06), ('transductive', 0.055), ('hachey', 0.055), ('biology', 0.054), ('proved', 0.054), ('grover', 0.053), ('hernault', 0.053), ('hyperplanes', 0.053), ('novak', 0.053), ('sinz', 0.053), ('hyperplane', 0.051), ('employed', 0.05), ('tong', 0.049), ('gr', 0.049), ('discourse', 0.048), ('korhonen', 0.048), ('yielding', 0.047), ('globin', 0.046), ('hoi', 0.046), ('ruch', 0.046), ('subj', 0.046), ('fully', 0.044), ('scheme', 0.043), ('mf', 0.043), ('macro', 0.041), ('ncsubj', 0.041), ('plan', 0.041), ('schemes', 0.04), ('weka', 0.038), ('classification', 0.037), ('categories', 0.036), ('accuracy', 0.036), ('adducts', 0.035), ('argumentation', 0.035), ('assay', 0.035), ('cccp', 0.035), ('chichester', 0.035), ('coresc', 0.035), ('deb', 0.035), ('epoxides', 0.035), ('grs', 0.035), ('immunoaffinity', 0.035), ('merity', 0.035), ('muslea', 0.035), ('peptide', 0.035), ('universvm', 0.035), ('yufan', 0.035), ('equally', 0.035), ('int', 0.034), ('investigation', 0.033), ('sun', 0.033), ('location', 0.033), ('intervals', 0.032), ('semisupervised', 0.031), ('software', 0.031), ('progression', 0.03), ('chapelle', 0.03), ('ens', 0.03), ('med', 0.03), ('monitoring', 0.03)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000007 12 emnlp-2011-A Weakly-supervised Approach to Argumentative Zoning of Scientific Documents

Author: Yufan Guo ; Anna Korhonen ; Thierry Poibeau

Abstract: Argumentative Zoning (AZ) – analysis of the argumentative structure of a scientific paper – has proved useful for a number of information access tasks. Current approaches to AZ rely on supervised machine learning (ML). Requiring large amounts of annotated data, these approaches are expensive to develop and port to different domains and tasks. A potential solution to this problem is to use weakly-supervised ML instead. We investigate the performance of four weakly-supervised classifiers on scientific abstract data annotated for multiple AZ classes. Our best classifier, based on the combination of active learning and self-training, outperforms our best supervised classifier, yielding a high accuracy of 81% when using just 10% of the labeled data. This result suggests that weakly-supervised learning could be employed to improve the practical applicability and portability of AZ across different information access tasks.

2 0.11808154 28 emnlp-2011-Closing the Loop: Fast, Interactive Semi-Supervised Annotation With Queries on Features and Instances

Author: Burr Settles

Abstract: This paper describes DUALIST, an active learning annotation paradigm which solicits and learns from labels on both features (e.g., words) and instances (e.g., documents). We present a novel semi-supervised training algorithm developed for this setting, which is (1) fast enough to support real-time interactive speeds, and (2) at least as accurate as preexisting methods for learning with mixed feature and instance labels. Human annotators in user studies were able to produce near-stateof-the-art classifiers—on several corpora in a variety of application domains—with only a few minutes of effort.

3 0.10126399 9 emnlp-2011-A Non-negative Matrix Factorization Based Approach for Active Dual Supervision from Document and Word Labels

Author: Chao Shen ; Tao Li

Abstract: In active dual supervision, not only informative examples but also features are selected for labeling to build a high quality classifier with low cost. However, how to measure the informativeness for both examples and feature on the same scale has not been well solved. In this paper, we propose a non-negative matrix factorization based approach to address this issue. We first extend the matrix factorization framework to explicitly model the corresponding relationships between feature classes and examples classes. Then by making use of the reconstruction error, we propose a unified scheme to determine which feature or example a classifier is most likely to benefit from having labeled. Empirical results demonstrate the effectiveness of our proposed methods.

4 0.075945392 17 emnlp-2011-Active Learning with Amazon Mechanical Turk

Author: Florian Laws ; Christian Scheible ; Hinrich Schutze

Abstract: Supervised classification needs large amounts of annotated training data that is expensive to create. Two approaches that reduce the cost of annotation are active learning and crowdsourcing. However, these two approaches have not been combined successfully to date. We evaluate the utility of active learning in crowdsourcing on two tasks, named entity recognition and sentiment detection, and show that active learning outperforms random selection of annotation examples in a noisy crowdsourcing scenario.

5 0.062445335 68 emnlp-2011-Hypotheses Selection Criteria in a Reranking Framework for Spoken Language Understanding

Author: Marco Dinarelli ; Sophie Rosset

Abstract: Reranking models have been successfully applied to many tasks of Natural Language Processing. However, there are two aspects of this approach that need a deeper investigation: (i) Assessment of hypotheses generated for reranking at classification phase: baseline models generate a list of hypotheses and these are used for reranking without any assessment; (ii) Detection of cases where reranking models provide a worse result: the best hypothesis provided by the reranking model is assumed to be always the best result. In some cases the reranking model provides an incorrect hypothesis while the baseline best hypothesis is correct, especially when baseline models are accurate. In this paper we propose solutions for these two aspects: (i) a semantic inconsistency metric to select possibly more correct n-best hypotheses from a large set generated by an SLU baseline model. The selected hypotheses are reranked applying a state-of-the-art model based on Partial Tree Kernels, which encode SLU hypotheses in Support Vector Machines with complex structured features; (ii) finally, we apply a decision strategy, based on confidence values, to select the final hypothesis between the first ranked hypothesis provided by the baseline SLU model and the first ranked hypothesis provided by the re-ranker. We show the effectiveness of these solutions presenting comparative results obtained reranking hypotheses generated by a very accurate Conditional Random Field model. We evaluate our approach on the French MEDIA corpus. The results show significant improvements with respect to current state-of-the-art and previous re-ranking models.

6 0.058756806 48 emnlp-2011-Enhancing Chinese Word Segmentation Using Unlabeled Data

7 0.056566324 119 emnlp-2011-Semantic Topic Models: Combining Word Distributional Statistics and Dictionary Definitions

8 0.051367719 142 emnlp-2011-Unsupervised Discovery of Discourse Relations for Eliminating Intra-sentence Polarity Ambiguities

9 0.051188339 23 emnlp-2011-Bootstrapped Named Entity Recognition for Product Attribute Extraction

10 0.051043749 96 emnlp-2011-Multilayer Sequence Labeling

11 0.0413482 84 emnlp-2011-Learning the Information Status of Noun Phrases in Spoken Dialogues

12 0.04102812 50 emnlp-2011-Evaluating Dependency Parsing: Robust and Heuristics-Free Cross-Annotation Evaluation

13 0.040870763 106 emnlp-2011-Predicting a Scientific Communitys Response to an Article

14 0.039709728 57 emnlp-2011-Extreme Extraction - Machine Reading in a Week

15 0.039355189 7 emnlp-2011-A Joint Model for Extended Semantic Role Labeling

16 0.039315324 128 emnlp-2011-Structured Relation Discovery using Generative Models

17 0.038878772 67 emnlp-2011-Hierarchical Verb Clustering Using Graph Factorization

18 0.037871905 94 emnlp-2011-Modelling Discourse Relations for Arabic

19 0.037865348 98 emnlp-2011-Named Entity Recognition in Tweets: An Experimental Study

20 0.037612379 144 emnlp-2011-Unsupervised Learning of Selectional Restrictions and Detection of Argument Coercions


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.152), (1, -0.094), (2, -0.04), (3, 0.029), (4, 0.022), (5, -0.011), (6, -0.002), (7, -0.069), (8, -0.06), (9, -0.011), (10, -0.006), (11, -0.144), (12, -0.031), (13, 0.042), (14, -0.09), (15, -0.06), (16, 0.059), (17, -0.023), (18, -0.121), (19, 0.1), (20, 0.015), (21, 0.121), (22, 0.047), (23, 0.056), (24, -0.066), (25, -0.038), (26, -0.026), (27, 0.153), (28, -0.048), (29, 0.113), (30, -0.003), (31, 0.043), (32, 0.01), (33, 0.216), (34, 0.015), (35, -0.044), (36, 0.139), (37, 0.066), (38, 0.157), (39, 0.085), (40, 0.238), (41, -0.019), (42, 0.093), (43, -0.069), (44, -0.052), (45, -0.073), (46, -0.035), (47, -0.036), (48, -0.027), (49, 0.105)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.94018936 12 emnlp-2011-A Weakly-supervised Approach to Argumentative Zoning of Scientific Documents

Author: Yufan Guo ; Anna Korhonen ; Thierry Poibeau

Abstract: Argumentative Zoning (AZ) – analysis of the argumentative structure of a scientific paper – has proved useful for a number of information access tasks. Current approaches to AZ rely on supervised machine learning (ML). Requiring large amounts of annotated data, these approaches are expensive to develop and port to different domains and tasks. A potential solution to this problem is to use weakly-supervised ML instead. We investigate the performance of four weakly-supervised classifiers on scientific abstract data annotated for multiple AZ classes. Our best classifier, based on the combination of active learning and self-training, outperforms our best supervised classifier, yielding a high accuracy of 81% when using just 10% of the labeled data. This result suggests that weakly-supervised learning could be employed to improve the practical applicability and portability of AZ across different information access tasks.

2 0.70455581 28 emnlp-2011-Closing the Loop: Fast, Interactive Semi-Supervised Annotation With Queries on Features and Instances

Author: Burr Settles

Abstract: This paper describes DUALIST, an active learning annotation paradigm which solicits and learns from labels on both features (e.g., words) and instances (e.g., documents). We present a novel semi-supervised training algorithm developed for this setting, which is (1) fast enough to support real-time interactive speeds, and (2) at least as accurate as preexisting methods for learning with mixed feature and instance labels. Human annotators in user studies were able to produce near-stateof-the-art classifiers—on several corpora in a variety of application domains—with only a few minutes of effort.

3 0.63536769 9 emnlp-2011-A Non-negative Matrix Factorization Based Approach for Active Dual Supervision from Document and Word Labels

Author: Chao Shen ; Tao Li

Abstract: In active dual supervision, not only informative examples but also features are selected for labeling to build a high quality classifier with low cost. However, how to measure the informativeness for both examples and feature on the same scale has not been well solved. In this paper, we propose a non-negative matrix factorization based approach to address this issue. We first extend the matrix factorization framework to explicitly model the corresponding relationships between feature classes and examples classes. Then by making use of the reconstruction error, we propose a unified scheme to determine which feature or example a classifier is most likely to benefit from having labeled. Empirical results demonstrate the effectiveness of our proposed methods.

4 0.43764672 48 emnlp-2011-Enhancing Chinese Word Segmentation Using Unlabeled Data

Author: Weiwei Sun ; Jia Xu

Abstract: This paper investigates improving supervised word segmentation accuracy with unlabeled data. Both large-scale in-domain data and small-scale document text are considered. We present a unified solution to include features derived from unlabeled data to a discriminative learning model. For the large-scale data, we derive string statistics from Gigaword to assist a character-based segmenter. In addition, we introduce the idea about transductive, document-level segmentation, which is designed to improve the system recall for out-ofvocabulary (OOV) words which appear more than once inside a document. Novel features1 result in relative error reductions of 13.8% and 15.4% in terms of F-score and the recall of OOV words respectively.

5 0.43284559 82 emnlp-2011-Learning Local Content Shift Detectors from Document-level Information

Author: Richard Farkas

Abstract: Information-oriented document labeling is a special document multi-labeling task where the target labels refer to a specific information instead of the topic of the whole document. These kind oftasks are usually solved by looking up indicator phrases and analyzing their local context to filter false positive matches. Here, we introduce an approach for machine learning local content shifters which detects irrelevant local contexts using just the original document-level training labels. We handle content shifters in general, instead of learning a particular language phenomenon detector (e.g. negation or hedging) and form a single system for document labeling and content shift detection. Our empirical results achieved 24% error reduction compared to supervised baseline methods – on three document label– ing tasks.

6 0.40648836 17 emnlp-2011-Active Learning with Amazon Mechanical Turk

7 0.38938448 23 emnlp-2011-Bootstrapped Named Entity Recognition for Product Attribute Extraction

8 0.33262399 91 emnlp-2011-Literal and Metaphorical Sense Identification through Concrete and Abstract Context

9 0.32432243 106 emnlp-2011-Predicting a Scientific Communitys Response to an Article

10 0.32368067 96 emnlp-2011-Multilayer Sequence Labeling

11 0.3103683 84 emnlp-2011-Learning the Information Status of Noun Phrases in Spoken Dialogues

12 0.30673799 68 emnlp-2011-Hypotheses Selection Criteria in a Reranking Framework for Spoken Language Understanding

13 0.27334732 94 emnlp-2011-Modelling Discourse Relations for Arabic

14 0.26023051 79 emnlp-2011-Lateen EM: Unsupervised Training with Multiple Objectives, Applied to Dependency Grammar Induction

15 0.24756357 7 emnlp-2011-A Joint Model for Extended Semantic Role Labeling

16 0.23564081 142 emnlp-2011-Unsupervised Discovery of Discourse Relations for Eliminating Intra-sentence Polarity Ambiguities

17 0.23160814 46 emnlp-2011-Efficient Subsampling for Training Complex Language Models

18 0.23005322 63 emnlp-2011-Harnessing WordNet Senses for Supervised Sentiment Classification

19 0.22810028 67 emnlp-2011-Hierarchical Verb Clustering Using Graph Factorization

20 0.22688095 1 emnlp-2011-A Bayesian Mixture Model for PoS Induction Using Multiple Features


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(23, 0.1), (36, 0.027), (37, 0.516), (45, 0.047), (53, 0.01), (54, 0.022), (57, 0.01), (62, 0.016), (64, 0.016), (66, 0.029), (69, 0.011), (79, 0.034), (82, 0.019), (90, 0.011), (96, 0.03), (98, 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.91904968 74 emnlp-2011-Inducing Sentence Structure from Parallel Corpora for Reordering

Author: John DeNero ; Jakob Uszkoreit

Abstract: When translating among languages that differ substantially in word order, machine translation (MT) systems benefit from syntactic preordering—an approach that uses features from a syntactic parse to permute source words into a target-language-like order. This paper presents a method for inducing parse trees automatically from a parallel corpus, instead of using a supervised parser trained on a treebank. These induced parses are used to preorder source sentences. We demonstrate that our induced parser is effective: it not only improves a state-of-the-art phrase-based system with integrated reordering, but also approaches the performance of a recent preordering method based on a supervised parser. These results show that the syntactic structure which is relevant to MT pre-ordering can be learned automatically from parallel text, thus establishing a new application for unsupervised grammar induction.

2 0.91821969 5 emnlp-2011-A Fast Re-scoring Strategy to Capture Long-Distance Dependencies

Author: Anoop Deoras ; Tomas Mikolov ; Kenneth Church

Abstract: A re-scoring strategy is proposed that makes it feasible to capture more long-distance dependencies in the natural language. Two pass strategies have become popular in a number of recognition tasks such as ASR (automatic speech recognition), MT (machine translation) and OCR (optical character recognition). The first pass typically applies a weak language model (n-grams) to a lattice and the second pass applies a stronger language model to N best lists. The stronger language model is intended to capture more longdistance dependencies. The proposed method uses RNN-LM (recurrent neural network language model), which is a long span LM, to rescore word lattices in the second pass. A hill climbing method (iterative decoding) is proposed to search over islands of confusability in the word lattice. An evaluation based on Broadcast News shows speedups of 20 over basic N best re-scoring, and word error rate reduction of 8% (relative) on a highly competitive setup.

same-paper 3 0.81660491 12 emnlp-2011-A Weakly-supervised Approach to Argumentative Zoning of Scientific Documents

Author: Yufan Guo ; Anna Korhonen ; Thierry Poibeau

Abstract: Argumentative Zoning (AZ) – analysis of the argumentative structure of a scientific paper – has proved useful for a number of information access tasks. Current approaches to AZ rely on supervised machine learning (ML). Requiring large amounts of annotated data, these approaches are expensive to develop and port to different domains and tasks. A potential solution to this problem is to use weakly-supervised ML instead. We investigate the performance of four weakly-supervised classifiers on scientific abstract data annotated for multiple AZ classes. Our best classifier, based on the combination of active learning and self-training, outperforms our best supervised classifier, yielding a high accuracy of 81% when using just 10% of the labeled data. This result suggests that weakly-supervised learning could be employed to improve the practical applicability and portability of AZ across different information access tasks.

4 0.49191034 68 emnlp-2011-Hypotheses Selection Criteria in a Reranking Framework for Spoken Language Understanding

Author: Marco Dinarelli ; Sophie Rosset

Abstract: Reranking models have been successfully applied to many tasks of Natural Language Processing. However, there are two aspects of this approach that need a deeper investigation: (i) Assessment of hypotheses generated for reranking at classification phase: baseline models generate a list of hypotheses and these are used for reranking without any assessment; (ii) Detection of cases where reranking models provide a worse result: the best hypothesis provided by the reranking model is assumed to be always the best result. In some cases the reranking model provides an incorrect hypothesis while the baseline best hypothesis is correct, especially when baseline models are accurate. In this paper we propose solutions for these two aspects: (i) a semantic inconsistency metric to select possibly more correct n-best hypotheses from a large set generated by an SLU baseline model. The selected hypotheses are reranked applying a state-of-the-art model based on Partial Tree Kernels, which encode SLU hypotheses in Support Vector Machines with complex structured features; (ii) finally, we apply a decision strategy, based on confidence values, to select the final hypothesis between the first ranked hypothesis provided by the baseline SLU model and the first ranked hypothesis provided by the re-ranker. We show the effectiveness of these solutions presenting comparative results obtained reranking hypotheses generated by a very accurate Conditional Random Field model. We evaluate our approach on the French MEDIA corpus. The results show significant improvements with respect to current state-of-the-art and previous re-ranking models.

5 0.48493952 46 emnlp-2011-Efficient Subsampling for Training Complex Language Models

Author: Puyang Xu ; Asela Gunawardana ; Sanjeev Khudanpur

Abstract: We propose an efficient way to train maximum entropy language models (MELM) and neural network language models (NNLM). The advantage of the proposed method comes from a more robust and efficient subsampling technique. The original multi-class language modeling problem is transformed into a set of binary problems where each binary classifier predicts whether or not a particular word will occur. We show that the binarized model is as powerful as the standard model and allows us to aggressively subsample negative training examples without sacrificing predictive performance. Empirical results show that we can train MELM and NNLM at 1% ∼ 5% of the standard complexity with no loss in performance.

6 0.4268221 15 emnlp-2011-A novel dependency-to-string model for statistical machine translation

7 0.41793314 8 emnlp-2011-A Model of Discourse Predictions in Human Sentence Processing

8 0.41083634 136 emnlp-2011-Training a Parser for Machine Translation Reordering

9 0.40442166 59 emnlp-2011-Fast and Robust Joint Models for Biomedical Event Extraction

10 0.40309805 123 emnlp-2011-Soft Dependency Constraints for Reordering in Hierarchical Phrase-Based Translation

11 0.39852607 108 emnlp-2011-Quasi-Synchronous Phrase Dependency Grammars for Machine Translation

12 0.39602366 66 emnlp-2011-Hierarchical Phrase-based Translation Representations

13 0.39458948 13 emnlp-2011-A Word Reordering Model for Improved Machine Translation

14 0.39437348 134 emnlp-2011-Third-order Variational Reranking on Packed-Shared Dependency Forests

15 0.39396459 23 emnlp-2011-Bootstrapped Named Entity Recognition for Product Attribute Extraction

16 0.38016385 1 emnlp-2011-A Bayesian Mixture Model for PoS Induction Using Multiple Features

17 0.37826943 25 emnlp-2011-Cache-based Document-level Statistical Machine Translation

18 0.37428606 85 emnlp-2011-Learning to Simplify Sentences with Quasi-Synchronous Grammar and Integer Programming

19 0.37218323 53 emnlp-2011-Experimental Support for a Categorical Compositional Distributional Model of Meaning

20 0.37187445 30 emnlp-2011-Compositional Matrix-Space Models for Sentiment Analysis