acl acl2013 acl2013-194 acl2013-194-reference knowledge-graph by maker-knowledge-mining
Source: pdf
Author: David Kauchak
Abstract: In this paper we examine language modeling for text simplification. Unlike some text-to-text translation tasks, text simplification is a monolingual translation task allowing for text in both the input and output domain to be used for training the language model. We explore the relationship between normal English and simplified English and compare language models trained on varying amounts of text from each. We evaluate the models intrinsically with perplexity and extrinsically on the lexical simplification task from SemEval 2012. We find that a combined model using both simplified and normal English data achieves a 23% improvement in perplexity and a 24% improvement on the lexical simplification task over a model trained only on simple data. Post-hoc analysis shows that the additional unsimplified data provides better coverage for unseen and rare n-grams.
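The combined model described in the abstract interpolates probabilities from a language model trained on simplified English with one trained on normal English. The sketch below is a hedged illustration of fixed-weight linear interpolation with two toy unigram models in Python; it is not the paper's implementation (the paper builds n-gram models with standard toolkits such as SRILM), and the function names, toy training strings, and 0.5 weight are invented for the example.

import math
from collections import Counter

def train_unigram_lm(tokens):
    # Unigram model with add-one smoothing and a single reserved "unseen" type.
    counts = Counter(tokens)
    total = sum(counts.values())
    vocab = len(counts) + 1
    def prob(word):
        return (counts[word] + 1) / (total + vocab)
    return prob

def interpolate(p_simple, p_normal, lam=0.5):
    # Fixed-weight linear interpolation: lam * P_simple(w) + (1 - lam) * P_normal(w).
    return lambda w: lam * p_simple(w) + (1 - lam) * p_normal(w)

def perplexity(prob, tokens):
    # Perplexity = 2 ** (average negative log2 probability per token).
    log_sum = sum(math.log2(prob(w)) for w in tokens)
    return 2 ** (-log_sum / len(tokens))

# Toy data standing in for simplified vs. normal English training text.
simple_lm = train_unigram_lm("the cat sat on the mat".split())
normal_lm = train_unigram_lm("the feline reposed upon the rug".split())
combined = interpolate(simple_lm, normal_lm, lam=0.5)
print(perplexity(combined, "the cat sat on the rug".split()))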
Michiel Bacchiani and Brian Roark. 2003. Unsupervised language model adaptation. In Proceedings of ICASSP.
Michele Banko, Vibhu Mittal, and Michael Witbrock. 2000. Headline generation based on statistical translation. In Proceedings of ACL.
Jerome R. Bellegarda. 2004. Statistical language model adaptation: Review and perspectives. Speech Communication.
Or Biran, Samuel Brody, and Noémie Elhadad. 2011. Putting it simply: A context-aware approach to lexical simplification. In Proceedings of ACL.
Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language models in machine translation. In Proceedings of EMNLP.
Raman Chandrasekar and Bangalore Srinivas. 1997. Automatic induction of rules for text simplification. Knowledge Based Systems.
Stanley Chen, Douglas Beeferman, and Ronald Rosenfeld. 1998. Evaluation metrics for language models. In DARPA Broadcast News Transcription and Understanding Workshop.
Trevor Cohn and Mirella Lapata. 2009. Sentence compression as tree transduction. Journal of Artificial Intelligence Research.
William Coster and David Kauchak. 2011a. Learning to simplify sentences using Wikipedia. In Proceedings of the Workshop on Text-To-Text Generation.
William Coster and David Kauchak. 2011b. Simple English Wikipedia: A new text simplification task. In Proceedings of ACL.
Hal Daumé III and Daniel Marcu. 2002. A noisy-channel model for document compression. In Proceedings of ACL.
David Graff and Christopher Cieri. 2003. English Gigaword. http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2003T05.
Carsten Eickhoff, Pavel Serdyukov, and Arjen P. de Vries. 2010. Web page classification on child suitability. In Proceedings of CIKM.
Michel Galley and Kathleen McKeown. 2007. Lexicalized Markov grammars for sentence compression. In Proceedings of HLT-NAACL.
Le Quan Ha, E. I. Sicilia-Garcia, Ji Ming, and F. J. Smith. 2003. Extension of Zipf's law to word and character n-grams for English and Chinese. Computational Linguistics and Chinese Language Processing.
Bo-June Hsu. 2007. Generalized linear interpolation of language models. In IEEE Workshop on ASRU.
Frederick Jelinek and Robert Mercer. 1980. Interpolated estimation of Markov source parameters from sparse data. In Proceedings of the Workshop on Pattern Recognition in Practice.
Kevin Knight and Daniel Marcu. 2002. Summarization beyond sentence extraction: A probabilistic approach to sentence compression. Artificial Intelligence.
J. Richard Landis and Gary G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics.
Gondy Leroy, James E. Endicott, Obay Mouradi, David Kauchak, and Melissa Just. 2012. Improving perceived and actual text difficulty for health information consumers using semi-automated methods. In American Medical Informatics Association (AMIA) Fall Symposium.
Courtney Napoles and Mark Dredze. 2010. Learning simple Wikipedia: A cogitation in ascertaining abecedarian language. In Proceedings of the HLT/NAACL Workshop on Computational Linguistics and Writing.
Tadashi Nomoto. 2009. A comparison of model free versus model intensive approaches to sentence compression. In Proceedings of EMNLP.
Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering.
Ronald Rosenfeld. 1996. A maximum entropy approach to adaptive statistical language modeling. Computer Speech and Language.
Lucia Specia, Sujay Kumar Jauhar, and Rada Mihalcea. 2012. SemEval-2012 Task 1: English lexical simplification. In Joint Conference on Lexical and Computational Semantics (*SEM).
Lucia Specia. 2010. Translating from complex to simplified sentences. In Proceedings of Computational Processing of the Portuguese Language.
Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. In Proceedings of ICSLP.
Hisami Suzuki and Jianfeng Gao. 2005. A comparative study on language model adaptation techniques. In Proceedings of EMNLP.
Jenine Turner and Eugene Charniak. 2005. Supervised and unsupervised learning for sentence compression. In Proceedings of ACL.
Ken Urano. 2000. Lexical simplification and elaboration: Sentence comprehension and incidental vocabulary acquisition. Master's thesis, University of Hawaii.
Kristian Woodsend and Mirella Lapata. 2011. Learning to simplify sentences with quasi-synchronous grammar and integer programming. In Proceedings of EMNLP.
Sander Wubben, Antal van den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of ACL.
Mark Yatskar, Bo Pang, Cristian Danescu-Niculescu-Mizil, and Lillian Lee. 2010. For the sake of simplicity: Unsupervised extraction of lexical simplifications from Wikipedia. In Proceedings of NAACL.
Bing Zhao, Matthias Eck, and Stephan Vogel. 2004. Language model adaptation for statistical machine translation with structured query models. In Proceedings of COLING.
Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of COLING.