acl acl2011 acl2011-124 knowledge-graph by maker-knowledge-mining

124 acl-2011-Exploiting Morphology in Turkish Named Entity Recognition System


Source: pdf

Author: Reyyan Yeniterzi

Abstract: Turkish is an agglutinative language with complex morphological structures; therefore, using only word forms is not enough for many computational tasks. In this paper we analyze the effect of morphology in a Named Entity Recognition system for Turkish. We start with the standard word-level representation and incrementally explore the effect of capturing syntactic and contextual properties of tokens. Furthermore, we also explore a new representation in which roots and morphological features are represented as separate tokens instead of representing only words as tokens. Using syntactic and contextual properties with the new representation provides a 7.6% relative improvement over the baseline.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Turkish is an agglutinative language with complex morphological structures; therefore, using only word forms is not enough for many computational tasks. [sent-3, score-0.252]

2 In this paper we analyze the effect of morphology in a Named Entity Recognition system for Turkish. [sent-4, score-0.021]

3 We start with the standard word-level representation and incrementally explore the effect of capturing syntactic and contextual properties of tokens. [sent-5, score-0.056]

4 Furthermore, we also explore a new representation in which roots and morphological features are represented as separate tokens instead of representing only words as tokens. [sent-6, score-0.368]

5 Using syntactic and contextual properties with the new representation provides a 7.6% relative improvement over the baseline. [sent-7, score-0.056]

6 1 Introduction One of the main tasks of information extraction is Named Entity Recognition (NER), which aims to locate and classify the named entities in unstructured text. [sent-9, score-0.145]

7 State-of-the-art NER systems have been produced for several languages, but despite all these recent improvements, developing a NER system for Turkish is still a challenging task due to the structure of the language. [sent-10, score-0.021]

8 Turkish is a morphologically complex language with very productive inflectional and derivational processes. [sent-11, score-0.05]

9 Many local and non-local syntactic structures are represented as morphemes, which in the end produce Turkish words with complex morphological structures. (The author is also affiliated with iLab and the Center for the Future of Work of Heinz College, Carnegie Mellon University.) [sent-12, score-0.46]

10 This productive nature of Turkish results in the production of thousands of word forms from a given root, which causes data sparseness problems in model training. [sent-14, score-0.075]

11 In order to prevent this problem in our NER system, we propose several features which capture the meaning and syntactic properties of the token in addition to its contextual properties. [sent-15, score-0.127]

12 We also propose a sequence-of-morphemes representation which uses roots and morphological features as tokens instead of words. [sent-16, score-0.512]

13 2 Related Work The first paper (Cucerzan and Yarowsky, 1999) on Turkish NER describes a language-independent bootstrapping algorithm that learns from word-internal and contextual information of entities. [sent-18, score-0.034]

14 The authors followed a statistical approach (HMMs) for the NER task together with some other Information Extraction related tasks. [sent-23, score-0.04]

15 In order to deal with the agglutinative structure of Turkish, the authors worked with the root-morpheme level of the word instead of the surface form. [sent-24, score-0.037]

16 A recent work (Küçük and Yazıcı, 2009) presents the first rule-based NER system for Turkish. [sent-25, score-0.021]

17 The authors used several information sources such as dictionaries, lists of well-known entities, and context patterns. [sent-26, score-0.058]

18 Furthermore, all these systems used word-level tokenization, but in this paper we present a new tokenization method which represents each root and morphological feature as a separate token. [sent-29, score-0.646]

19 3 Approach In this work, we used two tokenization methods. [sent-30, score-0.083]

20 We also introduced a morpheme-level model in which morphological features are represented as states. [sent-32, score-0.291]

21 We used several features which were created from deep and shallow analysis of the words. [sent-33, score-0.028]

22 1 Word-Level Model Word-level tokenization is very commonly used in NER systems. [sent-36, score-0.083]

23 In this model, each word is represented with one state. [sent-37, score-0.026]

24 Since a CRF can use any number of features to infer the hidden state, we develop several feature sets which allow us to represent more information about the word. [sent-38, score-0.067]
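
For context, the hidden-state inference mentioned above is the standard linear-chain CRF of Lafferty et al. (2001); the feature functions f_k below are built from the feature sets described in this section:

    p(y | x) = (1 / Z(x)) * exp( sum_t sum_k lambda_k * f_k(y_{t-1}, y_t, x, t) )
    Z(x)     = sum_{y'} exp( sum_t sum_k lambda_k * f_k(y'_{t-1}, y'_t, x, t) )

where x is the token sequence, y the tag sequence, and the weights lambda_k are learned from the training data.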

25 1 Lexical Model In this model, only the word tokens are used in their surface form. [sent-41, score-0.029]

26 This model is effective for many languages which do not have complex morphological structures. [sent-42, score-0.237]

27 However, for morphologically rich languages, further analysis of words is required in order to prevent data sparseness problems and produce more accurate NER systems. [sent-43, score-0.092]

28 2 Root Feature An analysis (Hakkani-Tür, 2000) of English and Turkish news articles with around 10 million words showed that on average 5 different Turkish word forms are produced from the same root. [sent-46, score-0.021]

29 In order to decrease this high variation of words, we use the root forms of the words as an additional feature. [sent-47, score-0.204]

30 3 Part-of-Speech and Proper-Noun Features Named entities are mostly noun phrases, such as a first name and last name, or an organization name and the type of the organization. [sent-50, score-0.308]

31 This property has been used widely in NER systems as a hint to determine the possible named entities. [sent-51, score-0.087]

32 Part-of-Speech tags of the words depend highly on the language and the available Part-of-Speech tagger. [sent-52, score-0.028]

33 Taggers may distinguish proper nouns with or without their types. [sent-53, score-0.058]

34 We used a Turkish morphological analyzer (Oflazer, 1994) which analyzes words into roots and morphological features. [sent-54, score-0.539]

35 An example of the output of the analyzer is given in Table 1. [sent-55, score-0.061]

36 The part-of-speech tag of each word is also reported by the tool. [sent-56, score-0.074]

37 We use these tags as additional features and call them part-of-speech (POS) features. [sent-57, score-0.056]

38 The morphological analyzer has a proper name database, which is used to tag Turkish person, location and organization names as proper nouns. [sent-58, score-0.596]

39 An example named entity with this +Prop tag is given in Table 1. [sent-59, score-0.174]

40 Although the use of this tag is limited to the given database, and not all named entities are tagged with it, we use it as a feature to help distinguish named entities. [sent-60, score-0.402]

41 The initial letter of most named entities is in upper case, which makes the case feature a very common one in NER tasks. [sent-65, score-0.246]

42 We also use this feature and mark each token as UC or LC depending on its initial letter. [sent-66, score-0.083]

43 We don't do anything special for the first words in sentences. (Footnote: the meanings of the morphological tags used here are as follows: +A3pl - 3rd person plural; +P3sg - 3rd person singular possessive; +Gen - Genitive case; +Prop - Proper Noun; +A3sg - 3rd person singular; +Pnon - No possessive agreement; +Nom - Nominative case.) [sent-67, score-0.206]

44 Table 1: Examples of the output of the Turkish morphological analyzer. [sent-68, score-0.276]

    WORD                            ROOT     ANALYSIS
    beyinlerinin (of their brains)  beyin    beyin+Noun+A3pl+P3sg+Gen
    Amerika (America)               Amerika  Amerika+Noun+Prop+A3sg+Pnon+Nom
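
As a concrete illustration, here is a minimal Python sketch of splitting an analysis string of the form shown in Table 1 into a root and its morphological tags; the function name and the plain '+'-separated input format are assumptions for illustration, not the analyzer's actual API:

    def parse_analysis(analysis):
        # 'beyin+Noun+A3pl+P3sg+Gen' -> ('beyin', ['Noun', 'A3pl', 'P3sg', 'Gen'])
        parts = analysis.split("+")
        return parts[0], parts[1:]

    root, tags = parse_analysis("Amerika+Noun+Prop+A3sg+Pnon+Nom")
    is_proper = "Prop" in tags  # the +Prop tag used as the proper-noun feature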

45 An example phrase in the word-level model is given in Table 2. [sent-69, score-0.022]

46 The first column is the lexical form of the word, the following columns are the features, and the tag is in the last column. [sent-71, score-0.102]
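
A hedged sketch of emitting one such training line, with the surface form first, the root, POS, proper-noun, and case features next, and the NER tag last; the UC/LC values follow the text above, while the NotProp negative value, the tab separator, and the exact column order are assumptions:

    def word_level_line(word, root, pos, tags, ner_tag):
        prop = "Prop" if "Prop" in tags else "NotProp"  # proper-noun feature
        case = "UC" if word[0].isupper() else "LC"      # case feature
        return "\t".join([word, root, pos, prop, case, ner_tag])

    # word_level_line("Amerika", "Amerika", "Noun",
    #                 ["Noun", "Prop", "A3sg", "Pnon", "Nom"], "B-LOCATION")
    # -> "Amerika\tAmerika\tNoun\tProp\tUC\tB-LOCATION"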

47 2 Morpheme-Level Model Using Part-of-Speech tags as features introduces some syntactic properties of the word to the model, but information from other morphological tags, such as number/person agreement, possessive agreement, or case, is still missing. [sent-73, score-0.384]

48 In order to see the effect of these morphological tags in NER, we propose a morpheme-level tokenization method which represents a word with several states: one state for the root and one state for each morphological feature. [sent-74, score-0.815]

49 In a setting like this, the model has to be restricted from assigning different labels to different parts of the word. [sent-75, score-0.022]

50 In order to do this, we use an additional feature called the root-morph feature. [sent-76, score-0.039]

51 The root-morph is a feature which is assigned the value “root” for states containing a root and the value “morph” for states containing a morpheme. [sent-77, score-0.301]

52 Since there are no prefixes in Turkish, a model trained with this feature will give zero probability (or close to zero probability if there is any smoothing) for assigning any B-* (Begin any NE) tag to a morph state. [sent-78, score-0.496]

53 Similarly, a transition from a state with a B-* or I-* (Inside any NE) tag to a morph state with the O (Other) tag will get zero probability from the model. [sent-79, score-0.556]
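
Below is a minimal sketch of this morpheme-level expansion under the constraints just described: the root state carries the word's tag, and every following morph state of an entity continues it with the matching I-* tag; the helper name and tuple layout are ours:

    def morpheme_level_states(root, tags, ner_tag):
        # One state for the root plus one per morphological feature, each
        # carrying the root-morph feature value; B-* can only occur on roots.
        inside = "I-" + ner_tag[2:] if ner_tag != "O" else "O"
        states = [(root, "Root", ner_tag)]
        states += [(tag, "Morph", inside) for tag in tags]
        return states

    # morpheme_level_states("Amerika", ["Noun", "Prop", "A3sg", "Pnon", "Nom"], "B-LOCATION")
    # -> [('Amerika', 'Root', 'B-LOCATION'), ('Noun', 'Morph', 'I-LOCATION'), ...]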

54 In this representation, each row represents a state and each word is represented with several states. [sent-82, score-0.085]

55 The first row of each word contains the root, the POS tag, and the Root value for the root-morph feature. [sent-83, score-0.098]

56 The rest of the rows of the same word contain the morphemes and the Morph value for the root-morph feature. [sent-84, score-0.192]

57 Three types of named entities (person, organization, and location) were tagged in this dataset. [sent-89, score-0.166]

58 If the word is not a proper name, then it is tagged as other. [sent-90, score-0.089]

59 The number of words and named entities for each NE type in the train and test sets are given in Table 4. [sent-91, score-0.145]

60 Table 4: The number of words and named entities in the train and test sets. [sent-92, score-0.145]

           #WORDS   #PER.   #ORG.   #LOC.
    TRAIN  445,498  21,701  14,510  12,138
    TEST   47,344   2,400   1,595   1,402

61 5 Experiments and Results Before using our data in the experiments, we applied the Turkish morphological analyzer tool (Oflazer, 1994) and then used the morphological disambiguator (Sak et al.). [sent-95, score-0.327]

62 1 Word-level Model In order to see the effects of the features individually, we inserted them into the model one by one iteratively and applied the model to the test set. [sent-99, score-0.072]
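
A small sketch of how the cumulative feature sets for these one-by-one experiments could be built; the group names follow the features described earlier, while the actual training and evaluation calls (CRF++ invocations in the paper) are left out:

    feature_groups = ["lexical", "root", "pos", "prop", "case"]
    cumulative = [feature_groups[: i + 1] for i in range(len(feature_groups))]
    # -> [['lexical'], ['lexical', 'root'], ['lexical', 'root', 'pos'], ...]
    # each cumulative set is used to train and test one CRF model in turn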

63 We can observe that each feature improves the performance of the system. [sent-101, score-0.039]

64 Overall, the F-measure increased by 6 points when all the features were used. [sent-102, score-0.028]

65 2 Morpheme-level Model In order to make a fair comparison between the word-level and morpheme-level models, we used all the features in both models. [sent-104, score-0.051]

66 According to the table, the morpheme-level model achieved better results than the word-level model in person and location entities. (Footnote: CRF++: Yet Another CRF Toolkit was used for the CRF experiments.) [sent-106, score-0.136]

67 Even though the word-level model got a better F-measure score on the organization entity type, the morpheme-level model is much better than the word-level model in terms of recall. [sent-112, score-0.092]

68 Using morpheme-level tokenization to introduce morphological information to the model did not hurt the system, but it also did not produce a significant improvement. [sent-113, score-0.32]

69 One explanation can be that morphological information is not helpful in NER tasks. [sent-115, score-0.215]

70 Morphemes in Turkish words provide the necessary syntactic meaning to the word, which may not be useful for finding named entities. [sent-116, score-0.145]

71 Dividing the word into root and morphemes and using them as separate tokens may not be the best way of using morphemes in the model. [sent-118, score-0.639]

72 Other ways of representing morphemes in the model may produce more effective results. [sent-119, score-0.214]

73 Even though it is impossible to make a fair comparison between these two systems, it would be good to note how these systems performed with respect to their baselines, which is the lexical model in both. [sent-123, score-0.023]

Table 5: F-measure results of the word-level model (columns: PERSON, ORGANIZATION, LOCATION, OVERALL; rows: the Lexical Model (LM) baseline followed by the models with added features).

75 6 Conclusion and Future Work In this paper, we explored the effects of using features like root, POS tag, proper noun, and case on the performance of the NER task. [sent-176, score-0.162]

76 All these features seem to improve the system significantly. [sent-177, score-0.028]

77 We also explored a new way of including the morphological information of words in the system by using several tokens per word. [sent-178, score-0.244]

78 This method produced results comparable to the regular word-level tokenization but did not yield a significant improvement. [sent-179, score-0.104]

79 As future work we are going to explore other ways of representing morphemes in the model. [sent-180, score-0.227]

80 Here we represented morphemes as separate states, but including them as features together with the root state may produce better models. [sent-181, score-0.507]

81 Another approach we will also focus on is dividing words into characters and applying character-level models (Klein et al.). [sent-182, score-0.031]

82 Acknowledgments The author would like to thank William W. [sent-184, score-0.027]

83 The author also thanks Kemal Oflazer for providing the data set and the morphological analyzer. [sent-186, score-0.451]

84 The statements made herein are solely the responsibility of the author. [sent-188, score-0.021]

85 Silviu Cucerzan and David Yarowsky. 1999. Language independent named entity recognition combining morphological and contextual evidence. [sent-191, score-0.425]

86 Haşim Sak, Tunga Güngör, and Murat Saraçlar. 2008. Turkish language resources: Morphological parser, morphological disambiguator and web corpus. [sent-216, score-0.266]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('notprop', 0.359), ('turkish', 0.335), ('prop', 0.333), ('morph', 0.315), ('ner', 0.226), ('morphological', 0.215), ('root', 0.204), ('morphemes', 0.192), ('uc', 0.188), ('ayval', 0.179), ('ilias', 0.179), ('lc', 0.155), ('yazar', 0.154), ('tur', 0.137), ('pnon', 0.128), ('gum', 0.102), ('morp', 0.102), ('nom', 0.098), ('named', 0.087), ('tokenization', 0.083), ('noun', 0.076), ('oflazer', 0.074), ('kemal', 0.074), ('tag', 0.074), ('analyzer', 0.061), ('crf', 0.058), ('entities', 0.058), ('entity', 0.058), ('proper', 0.058), ('person', 0.052), ('amerika', 0.051), ('disambiguator', 0.051), ('flavor', 0.051), ('ilab', 0.051), ('reyyan', 0.051), ('sak', 0.051), ('organization', 0.048), ('roots', 0.048), ('lm', 0.046), ('okhan', 0.045), ('dilek', 0.044), ('pos', 0.044), ('name', 0.042), ('location', 0.04), ('feature', 0.039), ('ne', 0.038), ('agglutinative', 0.037), ('going', 0.035), ('state', 0.035), ('cucerzan', 0.034), ('contextual', 0.034), ('agreements', 0.033), ('gen', 0.032), ('tagged', 0.031), ('dividing', 0.031), ('recognition', 0.031), ('possessive', 0.03), ('states', 0.029), ('tokens', 0.029), ('features', 0.028), ('tags', 0.028), ('hmms', 0.028), ('productive', 0.027), ('author', 0.027), ('database', 0.026), ('sparseness', 0.026), ('represented', 0.026), ('mellon', 0.024), ('row', 0.024), ('morphologically', 0.023), ('fair', 0.023), ('carnegie', 0.023), ('zero', 0.023), ('letter', 0.023), ('behrang', 0.023), ('brains', 0.023), ('vlc', 0.023), ('wtoh', 0.023), ('properties', 0.022), ('separate', 0.022), ('singular', 0.022), ('model', 0.022), ('prevent', 0.022), ('acquire', 0.021), ('token', 0.021), ('uk', 0.021), ('morphology', 0.021), ('aolf', 0.021), ('genitive', 0.021), ('yeniterzi', 0.021), ('duce', 0.021), ('responsibility', 0.021), ('produced', 0.021), ('atus', 0.02), ('doecnitat', 0.02), ('jhuene', 0.02), ('paomgepsu', 0.02), ('sieosnsi', 0.02), ('ueesdain', 0.02), ('foonr', 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999994 124 acl-2011-Exploiting Morphology in Turkish Named Entity Recognition System

Author: Reyyan Yeniterzi

Abstract: Turkish is an agglutinative language with complex morphological structures; therefore, using only word forms is not enough for many computational tasks. In this paper we analyze the effect of morphology in a Named Entity Recognition system for Turkish. We start with the standard word-level representation and incrementally explore the effect of capturing syntactic and contextual properties of tokens. Furthermore, we also explore a new representation in which roots and morphological features are represented as separate tokens instead of representing only words as tokens. Using syntactic and contextual properties with the new representation provides a 7.6% relative improvement over the baseline.

2 0.15135734 10 acl-2011-A Discriminative Model for Joint Morphological Disambiguation and Dependency Parsing

Author: John Lee ; Jason Naradowsky ; David A. Smith

Abstract: Most previous studies of morphological disambiguation and dependency parsing have been pursued independently. Morphological taggers operate on n-grams and do not take into account syntactic relations; parsers use the “pipeline” approach, assuming that morphological information has been separately obtained. However, in morphologically-rich languages, there is often considerable interaction between morphology and syntax, such that neither can be disambiguated without the other. In this paper, we propose a discriminative model that jointly infers morphological properties and syntactic structures. In evaluations on various highly-inflected languages, this joint model outperforms both a baseline tagger in morphological disambiguation, and a pipeline parser in head selection.

3 0.1490743 75 acl-2011-Combining Morpheme-based Machine Translation with Post-processing Morpheme Prediction

Author: Ann Clifton ; Anoop Sarkar

Abstract: This paper extends the training and tuning regime for phrase-based statistical machine translation to obtain fluent translations into morphologically complex languages (we build an English to Finnish translation system). Our methods use unsupervised morphology induction. Unlike previous work we focus on morphologically productive phrase pairs – our decoder can combine morphemes across phrase boundaries. Morphemes in the target language may not have a corresponding morpheme or word in the source language. Therefore, we propose a novel combination of post-processing morphology prediction with morpheme-based translation. We show, using both automatic evaluation scores and linguistically motivated analyses of the output, that our methods outperform previously proposed ones and provide the best known results on the English-Finnish Europarl translation task. Our methods are mostly language independent, so they should improve translation into other target languages with complex morphology.

4 0.14652477 289 acl-2011-Subjectivity and Sentiment Analysis of Modern Standard Arabic

Author: Muhammad Abdul-Mageed ; Mona Diab ; Mohammed Korayem

Abstract: Although Subjectivity and Sentiment Analysis (SSA) has been witnessing a flurry of novel research, there are few attempts to build SSA systems for Morphologically-Rich Languages (MRL). In the current study, we report efforts to partially fill this gap. We present a newly developed manually annotated corpus of Modern Standard Arabic (MSA) together with a new polarity lexicon. The corpus is a collection of newswire documents annotated on the sentence level. We also describe an automatic SSA tagging system that exploits the annotated data. We investigate the impact of different levels of preprocessing settings on the SSA classification task. We show that by explicitly accounting for the rich morphology the system is able to achieve significantly higher levels of performance.

5 0.13235497 246 acl-2011-Piggyback: Using Search Engines for Robust Cross-Domain Named Entity Recognition

Author: Stefan Rud ; Massimiliano Ciaramita ; Jens Muller ; Hinrich Schutze

Abstract: We use search engine results to address a particularly difficult cross-domain language processing task, the adaptation of named entity recognition (NER) from news text to web queries. The key novelty of the method is that we submit a token with context to a search engine and use similar contexts in the search results as additional information for correctly classifying the token. We achieve strong gains in NER performance on news, in-domain and out-of-domain, and on web queries.

6 0.12809169 318 acl-2011-Unsupervised Bilingual Morpheme Segmentation and Alignment with Context-rich Hidden Semi-Markov Models

7 0.1146281 261 acl-2011-Recognizing Named Entities in Tweets

8 0.1048243 163 acl-2011-Improved Modeling of Out-Of-Vocabulary Words Using Morphological Classes

9 0.098843254 164 acl-2011-Improving Arabic Dependency Parsing with Form-based and Functional Morphological Features

10 0.069677547 12 acl-2011-A Generative Entity-Mention Model for Linking Entities with Knowledge Base

11 0.060965784 329 acl-2011-Using Deep Morphology to Improve Automatic Error Detection in Arabic Handwriting Recognition

12 0.059688896 44 acl-2011-An exponential translation model for target language morphology

13 0.058724198 238 acl-2011-P11-2093 k2opt.pdf

14 0.056288682 128 acl-2011-Exploring Entity Relations for Named Entity Disambiguation

15 0.054531787 184 acl-2011-Joint Hebrew Segmentation and Parsing using a PCFGLA Lattice Parser

16 0.054113735 310 acl-2011-Translating from Morphologically Complex Languages: A Paraphrase-Based Approach

17 0.051989257 193 acl-2011-Language-independent compound splitting with morphological operations

18 0.051058494 27 acl-2011-A Stacked Sub-Word Model for Joint Chinese Word Segmentation and Part-of-Speech Tagging

19 0.050408602 199 acl-2011-Learning Condensed Feature Representations from Large Unsupervised Data Sets for Supervised Learning

20 0.049781986 313 acl-2011-Two Easy Improvements to Lexical Weighting


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.119), (1, 0.001), (2, -0.031), (3, -0.035), (4, -0.007), (5, -0.008), (6, 0.105), (7, -0.063), (8, -0.015), (9, 0.125), (10, -0.036), (11, 0.059), (12, -0.159), (13, 0.011), (14, 0.103), (15, -0.086), (16, 0.018), (17, 0.005), (18, -0.031), (19, 0.081), (20, 0.028), (21, -0.04), (22, 0.055), (23, -0.078), (24, 0.013), (25, 0.019), (26, -0.0), (27, -0.018), (28, -0.069), (29, 0.079), (30, 0.01), (31, 0.039), (32, 0.009), (33, 0.018), (34, 0.044), (35, 0.117), (36, -0.032), (37, -0.077), (38, -0.081), (39, 0.072), (40, -0.076), (41, -0.043), (42, -0.013), (43, -0.141), (44, 0.128), (45, -0.053), (46, -0.011), (47, 0.024), (48, -0.02), (49, -0.093)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93139398 124 acl-2011-Exploiting Morphology in Turkish Named Entity Recognition System

Author: Reyyan Yeniterzi

Abstract: Turkish is an agglutinative language with complex morphological structures; therefore, using only word forms is not enough for many computational tasks. In this paper we analyze the effect of morphology in a Named Entity Recognition system for Turkish. We start with the standard word-level representation and incrementally explore the effect of capturing syntactic and contextual properties of tokens. Furthermore, we also explore a new representation in which roots and morphological features are represented as separate tokens instead of representing only words as tokens. Using syntactic and contextual properties with the new representation provides a 7.6% relative improvement over the baseline.

2 0.73714465 10 acl-2011-A Discriminative Model for Joint Morphological Disambiguation and Dependency Parsing

Author: John Lee ; Jason Naradowsky ; David A. Smith

Abstract: Most previous studies of morphological disambiguation and dependency parsing have been pursued independently. Morphological taggers operate on n-grams and do not take into account syntactic relations; parsers use the “pipeline” approach, assuming that morphological information has been separately obtained. However, in morphologically-rich languages, there is often considerable interaction between morphology and syntax, such that neither can be disambiguated without the other. In this paper, we propose a discriminative model that jointly infers morphological properties and syntactic structures. In evaluations on various highly-inflected languages, this joint model outperforms both a baseline tagger in morphological disambiguation, and a pipeline parser in head selection.

3 0.55460691 193 acl-2011-Language-independent compound splitting with morphological operations

Author: Klaus Macherey ; Andrew Dai ; David Talbot ; Ashok Popat ; Franz Och

Abstract: Translating compounds is an important problem in machine translation. Since many compounds have not been observed during training, they pose a challenge for translation systems. Previous decompounding methods have often been restricted to a small set of languages as they cannot deal with more complex compound forming processes. We present a novel and unsupervised method to learn the compound parts and morphological operations needed to split compounds into their compound parts. The method uses a bilingual corpus to learn the morphological operations required to split a compound into its parts. Furthermore, monolingual corpora are used to learn and filter the set of compound part candidates. We evaluate our method within a machine translation task and show significant improvements for various languages to show the versatility of the approach.

4 0.53393406 75 acl-2011-Combining Morpheme-based Machine Translation with Post-processing Morpheme Prediction

Author: Ann Clifton ; Anoop Sarkar

Abstract: This paper extends the training and tuning regime for phrase-based statistical machine translation to obtain fluent translations into morphologically complex languages (we build an English-to-Finnish translation system). Our methods use unsupervised morphology induction. Unlike previous work we focus on morphologically productive phrase pairs – our decoder can combine morphemes across phrase boundaries. Morphemes in the target language may not have a corresponding morpheme or word in the source language. Therefore, we propose a novel combination of post-processing morphology prediction with morpheme-based translation. We show, using both automatic evaluation scores and linguistically motivated analyses of the output, that our methods outperform previously proposed ones and provide the best known results on the English-Finnish Europarl translation task. Our methods are mostly language independent, so they should improve translation into other target languages with complex morphology. 1 Translation and Morphology Languages with rich morphological systems present significant hurdles for statistical machine translation (SMT), most notably data sparsity, source-target asymmetry, and problems with automatic evaluation. In this work, we propose to address the problem of morphological complexity in an English-to-Finnish MT task within a phrase-based translation framework. We focus on unsupervised segmentation methods to derive the morphological information supplied to the MT model in order to provide coverage on very large datasets and for languages with few hand-annotated resources. In fact, in our experiments, unsupervised morphology always outperforms the use of a hand-built morphological analyzer. Rather than focusing on a few linguistically motivated aspects of Finnish morphological behaviour, we develop techniques for handling morphological complexity in general. We chose Finnish as our target language for this work, because it exemplifies many of the problems morphologically complex languages present for SMT. Among all the languages in the Europarl data-set, Finnish is the most difficult language to translate from and into, as was demonstrated in the MT Summit shared task (Koehn, 2005). Another reason is the current lack of knowledge about how to apply SMT successfully to agglutinative languages like Turkish or Finnish. Our main contributions are: 1) the introduction of the notion of segmented translation where we explicitly allow phrase pairs that can end with a dangling morpheme, which can connect with other morphemes as part of the translation process, and 2) the use of a fully segmented translation model in combination with a post-processing morpheme prediction system, using unsupervised morphology induction. Both of these approaches beat the state of the art on the English-Finnish translation task. Morphology can express both content and function categories, and our experiments show that it is important to use morphology both within the translation model (for morphology with content) and outside it (for morphology contributing to fluency). Automatic evaluation measures for MT, BLEU (Papineni et al., 2002), WER (Word Error Rate) and PER (Position Independent Word Error Rate) use the word as the basic unit rather than morphemes.
In a word comprised of multiple morphemes, getting even a single morpheme wrong means the entire word is wrong. In addition to standard MT evaluation measures, we perform a detailed linguistic analysis of the output. Our proposed approaches are significantly better than the state of the art, achieving the highest reported BLEU scores on the English-Finnish Europarl version 3 data-set. Our linguistic analysis shows that our models have fewer morpho-syntactic errors compared to the word-based baseline. 2 Models 2.1 Baseline Models We set up three baseline models for comparison in this work. The first is a basic word-based model (called Baseline in the results); we trained this on the original unsegmented version of the text. Our second baseline is a factored translation model (Koehn and Hoang, 2007) (called Factored), which used as factors the word, “stem” (see Section 2.2) and suffix. These are derived from the same unsupervised segmentation model used in other experiments. The results (Table 3) show that a factored model was unable to match the scores of a simple word-based baseline. We hypothesize that this may be an inherently difficult representational form for a language with the degree of morphological complexity found in Finnish. Because the morphology generation must be precomputed, for languages with a high degree of morphological complexity, the combinatorial explosion makes it unmanageable to capture the full range of morphological productivity. In addition, because the morphological variants are generated on a per-word basis within a given phrase, it excludes productive morphological combination across phrase boundaries and makes it impossible for the model to take into account any long-distance dependencies between morphemes. We conclude from this result that it may be more useful for an agglutinative language to use morphology beyond the confines of the phrasal unit, and condition its generation on more than just the local target stem. In order to compare the performance of unsupervised segmentation for translation, our third baseline is a segmented translation model based on a supervised segmentation model (called Sup), using the hand-built Omorfi morphological analyzer (Pirinen and Listenmaa, 2007), which provided slightly higher BLEU scores than the word-based baseline. 2.2 Segmented Translation For segmented translation models, it cannot be taken for granted that greater linguistic accuracy in segmentation yields improved translation (Chang et al., 2008). Rather, the goal in segmentation for translation is instead to maximize the amount of lexical content-carrying morphology, while generalizing over the information not helpful for improving the translation model. We therefore trained several different segmentation models, considering factors of granularity, coverage, and source-target symmetry. We performed unsupervised segmentation of the target data, using Morfessor (Creutz and Lagus, 2005) and Paramor (Monson, 2008), two top systems from the Morpho Challenge 2008 (their combined output was the Morpho Challenge winner). However, translation models based upon either Paramor alone or the combined systems' output could not match the word-based baseline, so we concentrated on Morfessor. Morfessor uses minimum description length criteria to train an HMM-based segmentation model.
When tested against a human-annotated gold standard of linguistic morpheme segmentations for Finnish, this algorithm outperforms competing unsupervised methods, achieving an F-score of 67.0% on a 3 million sentence corpus (Creutz and Lagus, 2006). Varying the perplexity threshold in Morfessor does not segment more word types, but rather over-segments the same word types. In order to get robust, common segmentations, we trained the segmenter on the 5,000 most frequent words (for the factored model baseline we also used the same settings, perplexity = 30 and the 5,000 most frequent words, but with all but the last suffix collapsed and called the “stem”); we then used this to segment the entire data set. In order to improve coverage, we then further segmented any word type that contained a match from the most frequent suffix set, looking for the longest matching suffix character string. We call this method Unsup L-match (a toy version is sketched in the code below). After the segmentation, word-internal morpheme boundary markers were inserted into the segmented text to be used to reconstruct the surface forms in the MT output. We then trained the Moses phrase-based system (Koehn et al., 2007) on the segmented and marked text. After decoding, it was a simple matter to join together all adjacent morphemes with word-internal boundary markers to reconstruct the surface forms. Figure 1(a) gives the full model overview for all the variants of the segmented translation model (supervised/unsupervised; with and without the Unsup L-match procedure). Table 1 shows how morphemes are being used in the MT system. [Table 1: counts of phrases containing morphemes (“Morph”) and hanging morphemes (“Hanging Morph”) in the phrase table and in translation, for the training, tuning and test sets.] Of the phrases that included segmentations (“Morph” in Table 1), roughly a third were “productive”, i.e. had a hanging morpheme (with a form such as stem+) that could be joined to a suffix (“Hanging Morph” in Table 1). However, in phrases used while decoding the development and test data, roughly a quarter of the phrases that generated the translated output included segmentations, but of these, only a small fraction (6%) had a hanging morpheme; and while there are many possible reasons to account for this we were unable to find a single convincing cause. 2.3 Morphology Generation Morphology generation as a post-processing step allows major vocabulary reduction in the translation model, and allows the use of morphologically targeted features for modeling inflection. A possible disadvantage of this approach is that in this model there is no opportunity to consider the morphology in translation since it is removed prior to training the translation model. Morphology generation models can use a variety of bilingual and contextual information to capture dependencies between morphemes, often more long-distance than what is possible using n-gram language models over morphemes in the segmented model. Similar to previous work (Minkov et al., 2007; Toutanova et al., 2008), we model morphology generation as a sequence learning problem. Unlike previous work, we use unsupervised morphology induction and use automatically generated suffix classes as tags. The first phase of our morphology prediction model is to train an MT system that produces morphologically simplified word forms in the target language. The output word forms are complex stems (a stem and some suffixes) but still missing some important suffix morphemes. In the second phase, the output of the MT decoder is then tagged with a sequence of abstract suffix tags.
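As a concrete illustration of the Unsup L-match step and the boundary-marker stitching described above, here is a minimal Python sketch; the suffix set, the `+` marker convention, and the function names are illustrative assumptions, not the authors' actual code.

```python
BOUNDARY = "+"  # word-internal morpheme boundary marker

def l_match_segment(word, frequent_suffixes):
    """Split off the longest suffix from frequent_suffixes that
    matches the end of word, if any."""
    for suffix in sorted(frequent_suffixes, key=len, reverse=True):
        if word.endswith(suffix) and len(word) > len(suffix):
            # mark the boundary on both sides so adjacent morphemes
            # can be rejoined after decoding
            return [word[: -len(suffix)] + BOUNDARY, BOUNDARY + suffix]
    return [word]

def stitch(tokens):
    """Rejoin adjacent tokens that carry matching boundary markers."""
    words, buffer = [], ""
    for tok in tokens:
        if buffer.endswith(BOUNDARY) and tok.startswith(BOUNDARY):
            buffer = buffer[:-1] + tok[1:]  # drop both markers and join
        else:
            if buffer:
                words.append(buffer)
            buffer = tok
    if buffer:
        words.append(buffer)
    return [w.rstrip(BOUNDARY) for w in words]  # tolerate a hanging morpheme

print(l_match_segment("koskevaa", {"vaa", "a"}))      # ['koske+', '+vaa']
print(stitch(["koske+", "+va+", "+a", "eduskunta"]))  # ['koskevaa', 'eduskunta']
```

The design point worth noting is that stitching happens only after decoding, so a hanging morpheme produced by one phrase can be completed by a suffix contributed by another phrase.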
In particular, the output of the MT decoder is a sequence of complex stems denoted by x and the output is a sequence of suffix class tags denoted by y. We extract a list of parts from (x, y) and map them to a d-dimensional feature vector Φ(x, y), with each dimension being a real number. We infer the best sequence of tags using F(x) = argmax_y p(y | x, w), where F(x) returns the highest scoring output y∗. A conditional random field (CRF) (Lafferty et al., 2001) defines the conditional probability as a linear score for each candidate y and a global normalization term: log p(y | x, w) = Φ(x, y) · w − log Z, where Z = Σ_{y′} exp(Φ(x, y′) · w) and the sum ranges over all candidate tag sequences y′. We use stochastic gradient descent (using crfsgd, http://leon.bottou.org/projects/sgd) to train the weight vector w. So far, this is all off-the-shelf sequence learning. However, the output y∗ from the CRF decoder is still only a sequence of abstract suffix tags. The third and final phase in our morphology prediction model, GEN(x), is to take the abstract suffix tag sequence y∗ and then map it into fully inflected word forms, and rank those outputs using a morphemic language model. The abstract suffix tags are extracted from the unsupervised morpheme learning process, and are carefully designed to enable CRF training and decoding. We call this model CRF-LM for short. Figure 1(b) shows the full pipeline and Figure 2 shows a worked example of all the steps involved. [Figure 1: training and testing pipelines for the SMT models — (a) the segmented translation model; (b) the post-processing model for translation and generation.] We use the morphologically segmented training data (obtained using the segmented corpus described in Section 2.2, except that we do not use Unsup L-match here: when evaluating the CRF model on the suffix prediction task, it obtained 95.61% without Unsup L-match and 82.99% with it) and remove selected suffixes to create a morphologically simplified version of the training data. The MT model is trained on the morphologically simplified training data. The output from the MT system is then used as input to the CRF model. The CRF model was trained on ∼210,000 Finnish sentences, consisting of ∼1.5 million tokens; the 2,000-sentence Europarl test set consisted of 41,434 stem tokens. The labels in the output sequence y were obtained by selecting the most productive 150 stems, and then collapsing certain vowels into equivalence classes corresponding to Finnish vowel harmony patterns. Thus variants -kö and -ko become the vowel-generic enclitic particle -kO, and variants -ssä and -ssa become the vowel-generic inessive case marker -ssA, etc. This is the only language-specific component of our translation model. However, we expect this approach to work for other agglutinative languages as well.
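To make the vowel-generic collapsing concrete, here is a toy sketch assuming only the standard Finnish front/back harmony pairs (a/ä, o/ö, u/y); the authors' full tag-set (44 labels) is richer than this simplified table.

```python
# Toy vowel-harmony collapsing: surface suffix variants map to one
# vowel-generic abstract tag. The pair table is a simplification.
HARMONY = str.maketrans({"a": "A", "ä": "A", "o": "O", "ö": "O", "u": "U", "y": "U"})

def abstract_tag(suffix):
    """Map a surface suffix to its vowel-generic equivalence class."""
    return suffix.translate(HARMONY)

assert abstract_tag("-ko") == abstract_tag("-kö") == "-kO"     # enclitic particle
assert abstract_tag("-ssa") == abstract_tag("-ssä") == "-ssA"  # inessive case
```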
For fusional languages like Spanish, another mapping from suffix to abstract tags might be needed. These suffix transformations to their equivalence classes prevent morphophonemic variants of the same morpheme from competing against each other in the prediction model. This resulted in 44 possible label outputs per stem, which was a reasonably sized tag-set for CRF training. The CRF was trained on monolingual features of the segmented text for suffix prediction, where t is the current token: word stems s_{t−n}, ..., s_t, ..., s_{t+n} (n = 4) and morph predictions y_{t−2}, y_{t−1}, y_t (a toy feature extractor is sketched below). With this simple feature set, we were able to use features over longer distances, resulting in a total of 1,110,075 model features. After CRF-based recovery of the suffix tag sequence, we use a bigram language model trained on a fully segmented version of the training data to recover the original vowels. We used bigrams only, because the suffix vowel harmony alternation depends only upon the preceding phonemes in the word from which it was segmented.
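Here is a hedged sketch of that feature template — a stem window of ±4 plus the two previous tag predictions — in the spirit of crfsgd-style feature strings; the exact string format and names are assumptions for illustration.

```python
def suffix_features(stems, prev_tags, t, n=4):
    """Feature strings for predicting the suffix tag of complex stem t."""
    feats = []
    for k in range(-n, n + 1):          # stem window s_{t-n} .. s_{t+n}
        if 0 <= t + k < len(stems):
            feats.append(f"stem[{k}]={stems[t + k]}")
    for k in (2, 1):                    # previous predictions y_{t-2}, y_{t-1}
        if t - k >= 0:
            feats.append(f"tag[-{k}]={prev_tags[t - k]}")
    return feats

stems = ["koskeva+", "mietintö+", "käsiteltävä+"]
print(suffix_features(stems, ["+A"], 1))
# ['stem[-1]=koskeva+', 'stem[0]=mietintö+', 'stem[1]=käsiteltävä+', 'tag[-1]=+A']
```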
[Figure 2: worked example of all steps in the post-processing morphology prediction model — (a) training: the original text (e.g. koskevaa mietintöä käsitellään) is segmented, final suffixes are mapped to the abstract tag-set (with mappings such as A = {a, ä}), the final suffix is peeled off, and the SMT and CRF models are trained on the transformed data; (b) decoding: the decoder output is stitched up, the CRF predicts the abstract suffix tags, the bigram language model disambiguates the surface vowels, and the morphemes are stitched into final surface forms, which are compared to the reference translation.] 3 Experimental Results For all of the models built in this paper, we used the Europarl version 3 corpus (Koehn, 2005) English-Finnish training data set, as well as the standard development and test data sets. Our parallel training data consists of ∼1 million sentences of 40 words or less, while the development and test sets were each 2,000 sentences long. In all the experiments conducted in this paper, we used the Moses phrase-based translation system (Koehn et al., 2007; http://www.statmt.org/moses/), 2008 version. We trained all of the Moses systems herein using the standard features: language model, reordering model, translation model, and word penalty; in addition to these, the factored experiments called for additional translation and generation features for the added factors as noted above. We used in all experiments the following settings: a hypothesis stack size of 100, distortion limit of 6, phrase translations limit of 20, and maximum phrase length of 20. For the language models, we used SRILM 5-gram language models (Stolcke, 2002) for all factors. For our word-based Baseline system, we trained a word-based model using the same Moses system with identical settings. For evaluation against segmented translation systems in segmented forms before word reconstruction, we also segmented the baseline system's word-based output. All the BLEU scores reported are for lowercase evaluation. We did an initial evaluation of the segmented output translation for each system using the notion of m-BLEU score (Luong et al., 2010), where the BLEU score is computed by comparing the segmented output with a segmented reference translation (a toy computation is sketched at the end of this section). Table 2 shows the m-BLEU scores for various systems. [Table 2: segmented-model m-BLEU scores. Sup refers to the supervised segmentation baseline model; m-BLEU indicates that the segmented output was evaluated against a segmented version of the reference (this measure does not have the same correlation with human judgement as BLEU); No Uni indicates the segmented BLEU score without unigrams.] We also show the m-BLEU score without unigrams, since over-segmentation could lead to artificially high m-BLEU scores. In fact, if we compare the relative improvement of our m-BLEU scores for the Unsup L-match system, we see a relative improvement of 39.75% over the baseline. Luong et al. (2010) report an m-BLEU score of 55.64% but obtain a relative improvement of 0.6% over their baseline m-BLEU score. We find that when using a good segmentation model, segmentation of the morphologically complex target language improves model performance over an unsegmented baseline (the confidence scores come from bootstrap resampling). Table 3 shows the evaluation scores for all the baselines and the methods introduced in this paper using standard word-based lowercase BLEU, WER and PER. [Table 3: translation scores (lowercase BLEU, WER and TER) for all models. The ∗ indicates a statistically significant improvement in BLEU score over the Baseline model; the boldface scores are the best performing scores per evaluation measure.] We do better than Luong et al. (2010), the previous best score for this task. We also show a better relative improvement over our baseline when compared to Luong et al. (2010): a relative improvement of 4.86% for Unsup L-match compared to our baseline word-based model, compared to their 1.65% improvement over their baseline word-based model. Our best performing method used unsupervised morphology with L-match (see Section 2.2) and the improvement is significant: bootstrap resampling provides a confidence margin of ±0.77 and a t-test (Collins et al., 2005) showed significance with p = 0.001.
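As a toy illustration of the m-BLEU evaluation used above, the sketch below scores segmented output against a segmented reference, with an option to drop unigram precision for the "No Uni" variant; NLTK's corpus_bleu stands in for whatever scorer was actually used, and zeroing the unigram weight is one plausible reading of "without unigrams".

```python
from nltk.translate.bleu_score import corpus_bleu

def m_bleu(hyp_lines, ref_lines, no_unigrams=False):
    """BLEU over morpheme tokens; 'No Uni' zeroes the unigram weight."""
    hyps = [h.split() for h in hyp_lines]      # segmented hypotheses
    refs = [[r.split()] for r in ref_lines]    # one segmented reference each
    weights = (0.0, 1/3, 1/3, 1/3) if no_unigrams else (0.25, 0.25, 0.25, 0.25)
    return corpus_bleu(refs, hyps, weights=weights)

hyp = ["koske+ +va+ +a mietintö+ +ä"]
ref = ["koske+ +va+ +a mietintö+ +ä"]
print(m_bleu(hyp, ref))  # 1.0 for an exact segmented match
```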
3.1 Morphological Fluency Analysis To see how well the models were doing at getting morphology right, we examined several patterns of morphological behavior. While we wish to explore minimally supervised morphological MT models, and use as little language-specific information as possible, we do want to use linguistic analysis on the output of our system to see how well the models capture essential morphological information in the target language. So, we ran the word-based baseline system, the segmented model (Unsup L-match), and the prediction model (CRF-LM) outputs, along with the reference translation, through the supervised morphological analyzer Omorfi (Pirinen and Listenmaa, 2007). Using this analysis, we looked at a variety of linguistic constructions that might reveal patterns in morphological behavior. These were: (a) explicitly marked noun forms, (b) noun-adjective case agreement, (c) subject-verb person/number agreement, (d) transitive object case marking, (e) postpositions, and (f) possession. In each of these categories, we looked for construction matches on a per-sentence level between the models' output and the reference translation (a toy version of this matching is sketched at the end of this subsection). Table 4 shows the models' performance on the constructions we examined. [Table 4: the models' performance on the constructions examined — number of occurrences per sentence, precision (P), recall (R) and F-score (F), averaged over the various translations. The constructions are listed in descending order of their frequency in the texts; the highlighted value in each column is the most accurate with respect to the reference value.] In all of the categories, the CRF-LM model achieves the best precision score, as we explain below, while the Unsup L-match model most frequently gets the highest recall score. A general pattern in the most prevalent of these constructions is that the baseline tends to prefer the least marked form for noun cases (corresponding to the nominative) more than the reference or the CRF-LM model. The baseline leaves nouns in the (unmarked) nominative far more than the reference, while the CRF-LM model comes much closer, so it seems to fare better at explicitly marking forms, rather than defaulting to the more frequent unmarked form. Finnish adjectives must be marked with the same case as their head noun, while verbs must agree in person and number with their subject. We saw that in both these categories, the CRF-LM model outperforms for precision, while the segmented model gets the best recall. In addition, Finnish generally marks direct objects of verbs with the accusative or the partitive case; we observed more accusative/partitive-marked nouns following verbs in the CRF-LM output than in the baseline, as illustrated by example (1) in Fig. 3. While neither translation picks the same verb as in the reference for the input ‘clarify,’ the CRF-LM output paraphrases it by using a grammatical construction of the transitive verb followed by a noun phrase inflected with the accusative case, correctly capturing the transitive construction. The baseline translation instead follows ‘give’ with a direct object in the nominative case. To help clarify the constructions in question, we have used Google Translate (http://translate.google.com/) to provide back-translations of our MT output into English; to contextualize these back-translations, we have provided Google's back-translation of the reference. The use of postpositions shows another difference between the models. Finnish postpositions require the preceding noun to be in the genitive or sometimes partitive case, which occurs correctly more frequently in the CRF-LM than the baseline. In example (2) in Fig. 3, all three translations correspond to the English text, ‘with the basque nationalists.’ However, the CRF-LM output is more grammatical than the baseline, because not only do the adjective and noun agree for case, but the noun ‘baskien’ to which the postposition ‘kanssa’ belongs is marked with the correct genitive case. However, this well-formedness is not rewarded by BLEU, because ‘baskien’ does not match the reference. In addition, while Finnish may express possession using case marking alone, it has another construction for possession; this can disambiguate an otherwise ambiguous clause. This alternate construction uses a pronoun in the genitive case followed by a possessive-marked noun; we see that the CRF-LM model correctly marks this construction more frequently than the baseline. As example (3) in Fig. 3 shows, while neither model correctly translates ‘matkan’ (‘trip’), the baseline's output attributes the inessive ‘yhteydessä’ (‘connection’) as belonging to ‘tulokset’ (‘results’), and misses marking the possession linking it to ‘Commissioner Fischler’.
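The per-sentence construction matching could be realized as in the sketch below; since the exact matching criterion is not spelled out above, the per-sentence min-count overlap used here is one plausible reading, not the authors' definition.

```python
# Hypothetical sketch of per-sentence construction matching for one
# construction type (e.g. noun-adjective case agreement): overlap of
# per-sentence counts between a model's analyzed output and the
# analyzed reference, summarized as precision/recall/F-score.

def construction_prf(sys_counts, ref_counts):
    """sys_counts[i]/ref_counts[i]: construction count in sentence i."""
    matched = sum(min(s, r) for s, r in zip(sys_counts, ref_counts))
    precision = matched / max(sum(sys_counts), 1)
    recall = matched / max(sum(ref_counts), 1)
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

# e.g. three sentences, counts taken from Omorfi analyses
print(construction_prf([1, 0, 2], [1, 1, 1]))  # (0.667, 0.667, 0.667)
```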
Our manual evaluation shows that the CRF-LM model is producing output translations that are more morphologically fluent than the word-based baseline and the segmented translation Unsup L-match system, even though the word choices lead to a lower BLEU score overall when compared to Unsup L-match. [Figure 3: Morphological fluency analysis (see Section 3.1) — the three worked examples (1)-(3) discussed above, each giving the English input, the Finnish reference, the baseline output, and the CRF-LM output with morphological glosses, together with Google back-translations into English.] 4 Related Work The work on morphology in MT can be grouped into three categories: factored models, segmented translation, and morphology generation. Factored models (Koehn and Hoang, 2007) factor the phrase translation probabilities over additional information annotated to each word, allowing for text to be represented on multiple levels of analysis. We discussed the drawbacks of factored models for our task in Section 2.1. While Koehn and Hoang (2007), Yang and Kirchhoff (2006), and Avramidis and Koehn (2008) obtain improvements using factored models for translation into English, German, Spanish, and Czech, these models may be less useful for capturing long-distance dependencies in languages with much more complex morphological systems such as Finnish. In our experiments factored models did worse than the baseline. Segmented translation performs morphological analysis on the morphologically complex text for use in the translation model (Brown et al., 1993; Goldwater and McClosky, 2005; de Gispert and Mariño, 2008). This method unpacks complex forms into simpler, more frequently occurring components, and may also increase the symmetry of the lexically realized content between source and target.
In a somewhat orthogonal approach to ours, Ma et al. (2007) use alignment of a parallel text to pack together adjacent segments in the alignment output, which are then fed back to the word aligner to bootstrap an improved alignment, which is then used in the translation model. We compared our results against Luong et al. (2010) in Table 3 since their results are directly comparable to ours. They use a segmented phrase table and language model along with the word-based versions in the decoder and in tuning a Finnish target. Their approach requires segmented phrases to match word boundaries, eliminating morphologically productive phrases. In their work a segmented language model can score a translation, but cannot insert morphology that does not show source-side reflexes. In order to perform a similar experiment that still allowed for morphologically productive phrases, we tried training a segmented translation model, the output of which we stitched up in tuning so as to tune to a word-based reference. The goal of this experiment was to control the segmented model's tendency to overfit by rewarding it for using correct whole-word forms. However, we found that this approach was less successful than using the segmented reference in tuning, and could not meet the baseline (13.97% BLEU best tuning score, versus 14.93% BLEU for the baseline best tuning score). Previous work in segmented translation has often used linguistically motivated morphological analysis selectively applied based on a language-specific heuristic.
A typical approach is to select a highly inflecting class of words and segment them for particular morphology (de Gispert and Mariño, 2008; Ramanathan et al., 2009). Popović and Ney (2004) perform segmentation to reduce morphological complexity of the source to translate into an isolating target, reducing the translation error rate for the English target. For Czech-to-English, Goldwater and McClosky (2005) lemmatized the source text and inserted a set of ‘pseudowords’ expected to have lexical reflexes in English. Minkov et al. (2007) and Toutanova et al. (2008) use a Maximum Entropy Markov Model for morphology generation. The main drawback to this approach is that it removes morphological information from the translation model (which only uses stems); this can be a problem for languages in which morphology expresses lexical content. de Gispert (2008) uses a language-specific targeted morphological classifier for Spanish verbs to avoid this issue. Talbot and Osborne (2006) use clustering to group morphological variants of words for word alignments and for smoothing phrase translation tables. Habash (2007) provides various methods to incorporate morphological variants of words in the phrase table in order to help recognize out of vocabulary words in the source language. 5 Conclusion and Future Work We found that using a segmented translation model based on unsupervised morphology induction and a model that combined morpheme segments in the translation model with a postprocessing morphology prediction model gave us better BLEU scores than a word-based baseline. Using our proposed approach we obtain better scores than the state of the art on the English-Finnish translation task (Luong et al., 2010): from 14.82% BLEU to 15.09%, while using a simpler model. We show that using morphological segmentation in the translation model can improve output translation scores. We also demonstrate that for Finnish (and possibly other agglutinative languages), phrase-based MT benefits from allowing the translation model access to morphological segmentation yielding productive morphological phrases. Taking advantage of linguistic analysis of the output we show that using a post-processing morphology generation model can improve translation fluency on a sub-word level, in a manner that is not captured by the BLEU word-based evaluation measure. In order to help with replication of the results in this paper, we have run the various morphological analysis steps and created the necessary training, tuning and test data files needed in order to train, tune and test any phrase-based machine translation system with our data. The files can be downloaded from natlang.cs.sfu.ca. In future work we hope to explore the utility of phrases with productive morpheme boundaries and explore why they are not used more pervasively in the decoder. Evaluation measures for morphologically complex languages and tuning to those measures are also important future work directions. Also, we would like to explore a non-pipelined approach to morphological pre- and post-processing so that a globally trained model could be used to remove the target side morphemes that would improve the translation model and then predict those morphemes in the target language. Acknowledgements This research was partially supported by NSERC, Canada (RGPIN: 264905) and a Google Faculty Award.
We would like to thank Christian Monson, Franz Och, Fred Popowich, Howard Johnson, Majid Razmara, Baskaran Sankaran and the anonymous reviewers for their valuable comments on this work. We would particularly like to thank the developers of the open-source Moses machine translation toolkit and the Omorfi morphological analyzer for Finnish which we used for our experiments. References Eleftherios Avramidis and Philipp Koehn. 2008. Enriching morphologically poor languages for statistical machine translation. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 763–770, Columbus, Ohio, USA. Association for Computational Linguistics. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. Pi-Chuan Chang, Michel Galley, and Christopher D. Manning. 2008. Optimizing Chinese word segmentation for machine translation performance. In Proceedings of the Third Workshop on Statistical Machine Translation, pages 224–232, Columbus, Ohio, June. Association for Computational Linguistics. Michael Collins, Philipp Koehn, and Ivona Kucerova. 2005. Clause restructuring for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL '05). Association for Computational Linguistics. Mathias Creutz and Krista Lagus. 2005. Inducing the morphological lexicon of a natural language from unannotated text. In Proceedings of the International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning (AKRR '05), pages 106–113, Espoo, Finland. Mathias Creutz and Krista Lagus. 2006. Morfessor in the morpho challenge. In Proceedings of the PASCAL Challenge Workshop on Unsupervised Segmentation of Words into Morphemes. Adrià de Gispert and José Mariño. 2008. On the impact of morphology in English to Spanish statistical MT. Speech Communication, 50(11-12). Sharon Goldwater and David McClosky. 2005. Improving statistical MT through morphological analysis. In Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 676–683, Vancouver, B.C., Canada. Association for Computational Linguistics. Philipp Koehn and Hieu Hoang. 2007. Factored translation models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 868–876, Prague, Czech Republic. Association for Computational Linguistics. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In ACL '07: Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of Machine Translation Summit X, pages 79–86, Phuket, Thailand. Association for Computational Linguistics. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data.
In Proceedings of the 18th International Conference on Machine Learning, pages 282–289, San Francisco, California, USA. Association for Computing Machinery. Minh-Thang Luong, Preslav Nakov, and Min-Yen Kan. 2010. A hybrid morpheme-word representation for machine translation of morphologically rich languages. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 148–157, Cambridge, Massachusetts. Association for Computational Linguistics. Yanjun Ma, Nicolas Stroppa, and Andy Way. 2007. Bootstrapping word alignment via word packing. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 304–311, Prague, Czech Republic. Association for Computational Linguistics. Einat Minkov, Kristina Toutanova, and Hisami Suzuki. 2007. Generating complex morphology for machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL '07), pages 128–135, Prague, Czech Republic. Association for Computational Linguistics. Christian Monson. 2008. Paramor and morpho challenge 2008. In Lecture Notes in Computer Science: Workshop of the Cross-Language Evaluation Forum (CLEF 2008), Revised Selected Papers. Nizar Habash. 2007. Four techniques for online handling of out-of-vocabulary words in Arabic-English statistical machine translation. In Proceedings of the 46th Annual Meeting of the Association of Computational Linguistics, Columbus, Ohio. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Tommi Pirinen and Inari Listenmaa. 2007. Omorfi morphological analyzer. http://gna.org/projects/omorfi. Maja Popović and Hermann Ney. 2004. Towards the use of word stems and suffixes for statistical machine translation. In Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC), pages 1585–1588, Lisbon, Portugal. European Language Resources Association (ELRA). Ananthakrishnan Ramanathan, Hansraj Choudhary, Avishek Ghosh, and Pushpak Bhattacharyya. 2009. Case markers and morphology: Addressing the crux of the fluency problem in English-Hindi SMT. In Proceedings of the Joint Conference of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, pages 800–808, Suntec, Singapore. Association for Computational Linguistics. Andreas Stolcke. 2002. SRILM – an extensible language modeling toolkit. In Proceedings of the 7th International Conference on Spoken Language Processing, 3:901–904. David Talbot and Miles Osborne. 2006. Modelling lexical redundancy for machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 969–976, Sydney, Australia, July. Association for Computational Linguistics. Kristina Toutanova, Hisami Suzuki, and Achim Ruopp. 2008. Applying morphology generation models to machine translation. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 514–522, Columbus, Ohio, USA. Association for Computational Linguistics.
Mei Yang and Katrin Kirchhoff. 2006. Phrase-based backoff models for machine translation of highly inflected languages. In Proceedings of the European Chapter of the Association for Computational Linguistics, pages 41–48, Trento, Italy. Association for Computational Linguistics.

5 0.51295102 163 acl-2011-Improved Modeling of Out-Of-Vocabulary Words Using Morphological Classes

Author: Thomas Mueller ; Hinrich Schuetze

Abstract: We present a class-based language model that clusters rare words of similar morphology together. The model improves the prediction of words after histories containing outof-vocabulary words. The morphological features used are obtained without the use of labeled data. The perplexity improvement compared to a state of the art Kneser-Ney model is 4% overall and 81% on unknown histories.

6 0.50094885 164 acl-2011-Improving Arabic Dependency Parsing with Form-based and Functional Morphological Features

7 0.46900484 303 acl-2011-Tier-based Strictly Local Constraints for Phonology

8 0.46030334 249 acl-2011-Predicting Relative Prominence in Noun-Noun Compounds

9 0.45903331 310 acl-2011-Translating from Morphologically Complex Languages: A Paraphrase-Based Approach

10 0.45677713 289 acl-2011-Subjectivity and Sentiment Analysis of Modern Standard Arabic

11 0.44729483 261 acl-2011-Recognizing Named Entities in Tweets

12 0.43947691 238 acl-2011-P11-2093 k2opt.pdf

13 0.42583549 318 acl-2011-Unsupervised Bilingual Morpheme Segmentation and Alignment with Context-rich Hidden Semi-Markov Models

14 0.41949844 246 acl-2011-Piggyback: Using Search Engines for Robust Cross-Domain Named Entity Recognition

15 0.4028126 44 acl-2011-An exponential translation model for target language morphology

16 0.39923012 184 acl-2011-Joint Hebrew Segmentation and Parsing using a PCFGLA Lattice Parser

17 0.38446939 297 acl-2011-That's What She Said: Double Entendre Identification

18 0.37674919 27 acl-2011-A Stacked Sub-Word Model for Joint Chinese Word Segmentation and Part-of-Speech Tagging

19 0.3665247 7 acl-2011-A Corpus for Modeling Morpho-Syntactic Agreement in Arabic: Gender, Number and Rationality

20 0.36475915 329 acl-2011-Using Deep Morphology to Improve Automatic Error Detection in Arabic Handwriting Recognition


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(5, 0.013), (17, 0.023), (31, 0.02), (37, 0.058), (39, 0.048), (41, 0.053), (55, 0.48), (59, 0.02), (72, 0.061), (91, 0.021), (96, 0.104)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.82249832 24 acl-2011-A Scalable Probabilistic Classifier for Language Modeling

Author: Joel Lang

Abstract: We present a novel probabilistic classifier, which scales well to problems that involve a large number of classes and require training on large datasets. A prominent example of such a problem is language modeling. Our classifier is based on the assumption that each feature is associated with a predictive strength, which quantifies how well the feature can predict the class by itself. The predictions of individual features can then be combined according to their predictive strength, resulting in a model, whose parameters can be reliably and efficiently estimated. We show that a generative language model based on our classifier consistently matches modified Kneser-Ney smoothing and can outperform it if sufficiently rich features are incorporated.

2 0.79275858 78 acl-2011-Confidence-Weighted Learning of Factored Discriminative Language Models

Author: Viet Ha Thuc ; Nicola Cancedda

Abstract: Language models based on word surface forms only are unable to benefit from available linguistic knowledge, and tend to suffer from poor estimates for rare features. We propose an approach to overcome these two limitations. We use factored features that can flexibly capture linguistic regularities, and we adopt confidence-weighted learning, a form of discriminative online learning that can better take advantage of a heavy tail of rare features. Finally, we extend the confidence-weighted learning to deal with label noise in training data, a common case with discriminative language modeling.

same-paper 3 0.79046142 124 acl-2011-Exploiting Morphology in Turkish Named Entity Recognition System

Author: Reyyan Yeniterzi

Abstract: Turkish is an agglutinative language with complex morphological structures; therefore, using only word forms is not enough for many computational tasks. In this paper we analyze the effect of morphology in a Named Entity Recognition system for Turkish. We start with the standard word-level representation and incrementally explore the effect of capturing syntactic and contextual properties of tokens. Furthermore, we also explore a new representation in which roots and morphological features are represented as separate tokens instead of representing only words as tokens. Using syntactic and contextual properties with the new representation provides a 7.6% relative improvement over the baseline.

4 0.78814733 275 acl-2011-Semi-Supervised Modeling for Prenominal Modifier Ordering

Author: Margaret Mitchell ; Aaron Dunlop ; Brian Roark

Abstract: In this paper, we argue that ordering prenominal modifiers, typically pursued as a supervised modeling task, is particularly well-suited to semi-supervised approaches. By relying on automatic parses to extract noun phrases, we can scale up the training data by orders of magnitude. This minimizes the predominant issue of data sparsity that has informed most previous approaches. We compare several recent approaches, and find improvements from additional training data across the board; however, none outperform a simple n-gram model.

5 0.69655621 144 acl-2011-Global Learning of Typed Entailment Rules

Author: Jonathan Berant ; Ido Dagan ; Jacob Goldberger

Abstract: Extensive knowledge bases of entailment rules between predicates are crucial for applied semantic inference. In this paper we propose an algorithm that utilizes transitivity constraints to learn a globally-optimal set of entailment rules for typed predicates. We model the task as a graph learning problem and suggest methods that scale the algorithm to larger graphs. We apply the algorithm over a large data set of extracted predicate instances, from which a resource of typed entailment rules has been recently released (Schoenmackers et al., 2010). Our results show that using global transitivity information substantially improves performance over this resource and several baselines, and that our scaling methods allow us to increase the scope of global learning of entailment-rule graphs.

6 0.67179823 237 acl-2011-Ordering Prenominal Modifiers with a Reranking Approach

7 0.63003165 245 acl-2011-Phrase-Based Translation Model for Question Retrieval in Community Question Answer Archives

8 0.47806394 175 acl-2011-Integrating history-length interpolation and classes in language modeling

9 0.47521091 150 acl-2011-Hierarchical Text Classification with Latent Concepts

10 0.47056043 119 acl-2011-Evaluating the Impact of Coder Errors on Active Learning

11 0.46525595 116 acl-2011-Enhancing Language Models in Statistical Machine Translation with Backward N-grams and Mutual Information Triggers

12 0.46517637 145 acl-2011-Good Seed Makes a Good Crop: Accelerating Active Learning Using Language Modeling

13 0.4548521 280 acl-2011-Sentence Ordering Driven by Local and Global Coherence for Summary Generation

14 0.45268476 17 acl-2011-A Large Scale Distributed Syntactic, Semantic and Lexical Language Model for Machine Translation

15 0.44814605 135 acl-2011-Faster and Smaller N-Gram Language Models

16 0.44536024 85 acl-2011-Coreference Resolution with World Knowledge

17 0.44501215 163 acl-2011-Improved Modeling of Out-Of-Vocabulary Words Using Morphological Classes

18 0.44360626 38 acl-2011-An Empirical Investigation of Discounting in Cross-Domain Language Models

19 0.44258255 197 acl-2011-Latent Class Transliteration based on Source Language Origin

20 0.4396092 316 acl-2011-Unary Constraints for Efficient Context-Free Parsing