acl acl2013 acl2013-122 knowledge-graph by maker-knowledge-mining

122 acl-2013-Discriminative Approach to Fill-in-the-Blank Quiz Generation for Language Learners


Source: pdf

Author: Keisuke Sakaguchi ; Yuki Arase ; Mamoru Komachi

Abstract: We propose discriminative methods to generate semantic distractors of fill-in-the-blank quizzes for language learners using a large-scale language learners’ corpus. Unlike previous studies, the proposed methods aim at satisfying both reliability and validity of generated distractors; distractors should be exclusive against answers to avoid multiple answers in one quiz, and distractors should discriminate learners’ proficiency. Detailed user evaluation with 3 native and 23 non-native speakers of English shows that our methods achieve better reliability and validity than previous methods.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract We propose discriminative methods to generate semantic distractors of fill-in-the-blank quizzes for language learners using a large-scale language learners’ corpus. [sent-8, score-1.13]

2 Unlike previous studies, the proposed methods aim at satisfying both reliability and validity of generated distractors; distractors should be exclusive against answers to avoid multiple answers in one quiz, and distractors should discriminate learners’ proficiency. [sent-9, score-1.592]

3 Detailed user evaluation with 3 native and 23 non-native speakers of English shows that our methods achieve better reliability and validity than previous methods. [sent-10, score-0.206]

4 1 Introduction Fill-in-the-blank is a popular style used for evaluating the proficiency of language learners, from homework to official tests such as TOEIC and TOEFL. [sent-11, score-0.087]

5 As shown in Figure 1, a quiz is composed of 4 parts: (1) a sentence, (2) a blank to fill in, (3) the correct answer, and (4) distractors (incorrect options). [sent-12, score-0.917]

6 However, it is not easy to come up with appropriate distractors without rich experience in language education. [sent-13, score-0.675]

7 There are two major requirements that distractors should satisfy: reliability and validity (Alderson et al.). [sent-14, score-0.782]

8 First, distractors should be reliable; they must be exclusive against the answer, and none of the distractors can replace the answer, to avoid allowing multiple correct answers in one quiz. [sent-16, score-1.538]

9 Second, distractors should be valid; they should discriminate learners’ proficiency adequately. [sent-17, score-0.783]

10 Figure 1 shows an example where (a) blaming is the answer and (b) accusing is a distractor. [sent-29, score-0.051]

11 There are previous studies on distractor generation for automatic fill-in-the-blank quiz generation (Mitkov et al.). [sent-30, score-0.558]

12 Hoshino and Nakagawa (2005) randomly selected distractors from words in the same document. [sent-32, score-0.675]

13 Previous work (2005) collected distractor candidates that are close to the answer in terms of word frequency, and ranked them by an association/collocation measure between the candidate and surrounding words in a given context. [sent-36, score-0.441]

14 Dahlmeier and Ng (2011) generated candidates for collocation error correction in English as a Second Language (ESL) writing using paraphrasing with a native language (L1) pivoting technique. [sent-37, score-0.29]

15 This method takes a sentence containing a collocation error as input, translates it into L1, and then translates it back to English to generate correction candidates. [sent-38, score-0.204]

16 Although the purpose is different, the technique is also applicable to distractor generation. [sent-39, score-0.318]

17 To the best of our knowledge, no previous study has fully employed actual errors made by ESL learners for distractor generation. [sent-40, score-0.479]

18 In this paper, we propose automated distractor generation methods using a large-scale ESL corpus with a discriminative model. [sent-41, score-0.425]

19 We focus on semantically confusing distractors that measure learners’ competence to distinguish word senses and select an appropriate word. [sent-42, score-0.675]

20 We especially target verbs, because verbs are difficult for language learners to use correctly (Leacock et al.). [sent-43, score-0.213]

21 Our proposed methods use discriminative models [sent-45, score-0.06]

22 Figure 2: Example of a sentence correction pair and error tags (Replacement, Deletion, and Insertion). [sent-49, score-0.123]

23 trained on error patterns extracted from an ESL corpus, and can generate exclusive distractors by taking the context of a given sentence into consideration. [sent-50, score-0.819]

24 We conduct a human evaluation with 3 native and 23 non-native speakers of English. [sent-51, score-0.099]

25 Furthermore, the non-native speakers’ performance on quizzes generated by our method has about 0.76 [sent-54, score-0.22]

26 correlation coefficient with their TOEIC scores, which shows that distractors generated by our methods satisfy validity. [sent-55, score-0.795]

27 Contributions of this paper are twofold: (1) we present methods for generating reliable and valid distractors, and (2) we demonstrate the effectiveness of an ESL corpus and discriminative models for distractor generation. [sent-56, score-0.437]

28 2 Proposed Method To generate distractors, we first need to decide which word to blank. [sent-57, score-0.048]

29 We then generate distractor candidates and rank them by a criterion to select which distractors to output. [sent-58, score-1.441]

30 In this section, we propose our methods for extracting target words from an ESL corpus and selecting distractors by a discriminative model that considers the long-distance context of a given sentence. [sent-59, score-0.787]

31 For generating semantic distractors, we regard a correction as a target and the misused word as one of the distractor candidates. [sent-63, score-0.455]

32 In the Lang-8 corpus, there is no clue to align the original and corrected words. [sent-64, score-0.066]

33 In addition, words may be deleted and inserted in the corrected sentence, which makes the alignment difficult. [sent-65, score-0.066]

34 Therefore, we detect word deletion, insertion, and replacement by dynamic programming. [sent-66, score-0.043]

35 We compare a corrected sentence against its original sentence, and when word insertion and deletion errors are identified, we insert a placeholder (Figure 2). [sent-70, score-0.147]
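
The summary names only dynamic programming, so a minimal word-level alignment sketch may help; this is illustrative code, not the authors' released implementation, and "___" stands in for the placeholder of Figure 2.

    # Minimal word-level edit-distance alignment (illustrative only).
    # Detects KEEP/REPLACE/DELETE/INSERT between an original learner
    # sentence and its correction; "___" marks the placeholder slot.
    def align(orig, corr):
        n, m = len(orig), len(corr)
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            dp[i][0] = i
        for j in range(m + 1):
            dp[0][j] = j
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                dp[i][j] = min(dp[i - 1][j - 1] + (orig[i - 1] != corr[j - 1]),
                               dp[i - 1][j] + 1,   # deletion
                               dp[i][j - 1] + 1)   # insertion
        ops, i, j = [], n, m
        while i > 0 or j > 0:
            if i and j and dp[i][j] == dp[i - 1][j - 1] + (orig[i - 1] != corr[j - 1]):
                tag = "KEEP" if orig[i - 1] == corr[j - 1] else "REPLACE"
                ops.append((tag, orig[i - 1], corr[j - 1])); i -= 1; j -= 1
            elif i and dp[i][j] == dp[i - 1][j] + 1:
                ops.append(("DELETE", orig[i - 1], "___")); i -= 1
            else:
                ops.append(("INSERT", "___", corr[j - 1])); j -= 1
        return ops[::-1]

    # align("he accused me".split(), "he blamed me".split())
    # -> [("KEEP","he","he"), ("REPLACE","accused","blamed"), ("KEEP","me","me")]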

36 We extract (original, replacement) pairs by comparing trigrams around the replacement in the original and corrected sentences, to take the surrounding context of the target into account. [sent-73, score-0.138]

37 These error-correction pairs are a mixture of grammatical mistakes, spelling errors, and semantic confusions. [sent-74, score-0.039]

38 Therefore, we identify pairs due to semantic confusion; we exclude grammatical error corrections by eliminating pairs whose error and correction have different part-of-speech (POS), and exclude spelling error corrections based on edit-distance. [sent-75, score-0.307]

39 As a result, we extract 689 unique verbs (lemma) and 3,885 correction pairs in total. [sent-76, score-0.083]

40 Using the error-correction pairs, we calculate conditional probabilities P(we|wc), which represent how probable it is that ESL learners misuse the word wc as we. [sent-77, score-0.026]

41 Based on the probabilities, we compute a confusion matrix. [sent-78, score-0.068]

42 The confusion matrix can generate distractors reflecting error patterns of ESL learners. [sent-79, score-0.852]

43 Given a sentence, we identify verbs appearing in the confusion matrix and make them blank, then output distractor candidates that have a high confusion probability. [sent-80, score-0.54]
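
To make the confusion-matrix step concrete, here is a small sketch; the (error, correction) pair format and the cutoff k are assumptions for illustration, not the paper's specification.

    from collections import Counter, defaultdict

    def build_confusion_matrix(pairs):
        # pairs: iterable of (w_e, w_c), i.e. (learner's error, correction).
        # Returns P(w_e | w_c) as a nested dict keyed by the correct word.
        counts = defaultdict(Counter)
        for w_e, w_c in pairs:
            counts[w_c][w_e] += 1
        return {w_c: {w_e: n / sum(ctr.values()) for w_e, n in ctr.items()}
                for w_c, ctr in counts.items()}

    def distractor_candidates(conf, answer, k=5):
        # Verbs that learners most often confuse with the (blanked) answer.
        cands = conf.get(answer, {})
        return sorted(cands, key=cands.get, reverse=True)[:k]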

44 We rank the candidates by a generative model to consider the surrounding context (e.g., a 5-gram language model). [sent-81, score-0.1]

45 We refer to this generative method as the Confusion Matrix Method (CFM). [sent-84, score-0.028]

46 2.2 Discriminative Model for Distractor Generation and Selection To generate distractors that consider long-distance context and reflect detailed syntactic information of the sentence, we train a classifier for each target word using error-correction pairs extracted from the ESL corpus. [sent-86, score-0.755]

47 A classifier for (footnote 5: because the Lang-8 corpus does not have POS tags, we …) [sent-87, score-0.02]

48 a target word takes a sentence (in which the target word appears) as input and outputs a verb as the best distractor given the context, using the following features: 5-gram (±1 and ±2 words of the target), lemmas of the 5-gram words, and the dependency type with the child word (lemma) of the target. [sent-90, score-0.404]

49 An example of the features and the label for the classifier of a target verb (blame). [sent-95, score-0.032]

50 These classifiers are based on a discriminative model: Support Vector Machine (SVM) (Vapnik, 1995). [sent-96, score-0.06]
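
A rough sketch of one such per-target classifier, using scikit-learn as the paper does (footnote 6); the window features approximate the feature description above, while the lemma and dependency features are stubbed out for brevity.

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    def context_features(tokens, i):
        # +-1/+-2 word window around the blanked target (the 5-gram features);
        # the full system also adds lemmas and the dependency type with the
        # target's child word, omitted in this sketch.
        feats = {}
        for off in (-2, -1, 1, 2):
            j = i + off
            feats["w[%+d]" % off] = tokens[j] if 0 <= j < len(tokens) else "<PAD>"
        return feats

    def train_target_classifier(instances):
        # instances: list of (tokens, target_index, erroneous_verb_label)
        # drawn from error-correction pairs for one target verb.
        X = [context_features(t, i) for t, i, _ in instances]
        y = [label for _, _, label in instances]
        clf = make_pipeline(DictVectorizer(), LinearSVC())  # default settings
        clf.fit(X, y)
        return clf  # clf.predict([context_features(tokens, i)]) -> best distractor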

51 First, we directly use the corrected sentences in the Lang-8 corpus. [sent-98, score-0.066]

52 Second, we train classifiers with an ESL-simulated native corpus, because (1) the number of sentences containing a certain error-correction pair is still limited in the ESL corpus and (2) corrected sentences are still difficult to parse correctly due to inherent noise in the Lang-8 corpus. [sent-101, score-0.153]

53 For each target in a given sentence, we artificially change the target into an incorrect word according to the error probabilities obtained from the learners’ confusion matrix explained in Section 2.1. [sent-103, score-0.379]

54 In order to collect a sufficient amount of training data, we generate 100 samples for each training sentence in which the target word is replaced with an erroneous word. [sent-105, score-0.102]
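
The simulation step can be pictured as follows, assuming the P(we|wc) dictionary from the confusion-matrix sketch above; the sentence format and helper names are illustrative.

    import random

    def simulate_esl_corpus(native_sents, conf, n_samples=100):
        # native_sents: (tokens, target_index) pairs from a parsed native corpus.
        # For each sentence, draw 100 artificial learner errors for the target
        # according to P(w_e | w_c); yields (corrupted_tokens, index, w_e).
        for tokens, i in native_sents:
            dist = conf.get(tokens[i])
            if not dist:
                continue
            errors, probs = zip(*dist.items())
            for _ in range(n_samples):
                w_e = random.choices(errors, weights=probs)[0]
                corrupted = list(tokens)
                corrupted[i] = w_e
                yield corrupted, i, w_e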

55 3 Evaluation with Native Speakers In this experiment, we evaluate the reliability of the generated distractors. [sent-107, score-0.073]

56 The authors asked for the help of 3 native speakers of English (1 male and 2 females, majoring in computer science) from an author’s graduate school. [sent-108, score-0.169]

57 We provide each participant with a $30 gift card as compensation for completing the task. [sent-109, score-0.151]

58 Footnote 6: We use a linear SVM with default settings in the scikit-learn toolkit 0.… [sent-110, score-0.019]

59 Footnote 9: The implementation is available at https://github. [sent-121, score-0.023]

60 com/keisks/disc-sim-esl. Proposed methods (CFM: Confusion Matrix Method, DiscESL: Discriminative model with ESL corpus, DiscSimESL: Discriminative model with simulated ESL corpus) and baselines (THM: Thesaurus Method, RTM: Roundtrip Method). [sent-122, score-0.019]

61 In order to compare distractors generated by different methods, we ask participants to solve generated fill-in-the-blank quizzes of the form presented in Figure 1. [sent-123, score-1.007]

62 Each quiz has 3 options: (a) only word A is correct, (b) only word B is correct, (c) both are correct. [sent-124, score-0.186]

63 The source sentences used to generate quizzes are collected from VOA and are not included in the training dataset of DiscSimESL. [sent-125, score-0.234]

64 We generate 50 quizzes using different sentences per method to avoid showing the same sentence multiple times to participants. [sent-126, score-0.22]

65 We randomly ordered the quizzes generated by different methods for fair comparison. [sent-127, score-0.184]

66 The THM is based on (Sumita et al., 2005) and extracts distractor candidates from synonyms of the target taken from WordNet 3.0. [sent-131, score-0.393]

67 The RTM is based on (Dahlmeier and Ng, 2011) and extracts distractor candidates from a roundtrip (pivoting) translation lexicon constructed from the WIT3 corpus (Cettolo et al.). [sent-132, score-0.433]

68 In this dictionary, the target word is translated into Japanese words and they are translated back to English as distractor candidates. [sent-135, score-0.35]

69 To consider (local) context, the candidates generated by the THM, RTM, and CFM are re-ranked by a 5-gram language [sent-136, score-0.077]

70 model score trained on the Google 1T Web Corpus (Brants and Franz, 2006) with the IRSTLM toolkit (version 5.…, sourceforge.net/projects/irstlm/files/irstlm/). [sent-146, score-0.018]
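
The re-ranking amounts to scoring each candidate in context; in the sketch below, lm_logprob stands for whatever 5-gram model scorer is available (e.g., one built with IRSTLM) and is an assumed interface rather than a real API.

    def rerank_by_lm(tokens, blank_index, candidates, lm_logprob):
        # Fill the blank with each candidate and sort by language-model
        # score, so the candidates most fluent in this context come first.
        def score(cand):
            filled = list(tokens)
            filled[blank_index] = cand
            return lm_logprob(" ".join(filled))
        return sorted(candidates, key=score, reverse=True)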

71 As an evaluation metric, we compute the ratio of appropriate distractors (RAD) by the following equation: RAD = NAD/NALL, where NALL is the total number of quizzes and NAD is the number of quizzes on which at least 2 participants agree by selecting the correct answer. [sent-147, score-1.081]
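
Computed directly, the metric looks like the following; the per-quiz judgment encoding is an assumption for illustration.

    def rad(judgments, min_agree=2):
        # judgments: for each quiz, the list of the 3 raters' choices,
        # each "answer", "distractor", or "both".
        # N_AD counts quizzes where >= 2 raters picked the correct answer;
        # a quiz is judged inappropriate when >= 2 raters picked "both".
        n_all = len(judgments)
        n_ad = sum(1 for js in judgments if js.count("answer") >= min_agree)
        return n_ad / n_all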

72 When at least 2 participants select option (c) (both options are correct), we judge the distractor to be inappropriate. [sent-148, score-0.431]

73 Table 3 shows the results of the first experiment: RAD with a 95% confidence interval and inter-rater agreement κ. [sent-150, score-0.018]

74 These results show the effectiveness of using an ESL corpus to generate reliable distractors. [sent-155, score-0.107]

75 With respect to κ, our discriminative models achieve up to [sent-156, score-0.06]

76 0.2 higher agreement than baselines, indicating that the discriminative models can generate sound distractors more effectively than generative models. [sent-158, score-0.811]

77 The lower κ on generative models may be because the distractors are semantically too close to the target (correct answer), as in the following example: “The coalition has *published/issued a report saying that …” [sent-159, score-0.735]

78 As a result, quizzes from generative models are not reliable, since both published and issued are correct. [sent-163, score-0.253]

79 4 Evaluation with ESL Learners In this experiment, we evaluate the validity of generated distractors regarding ESL learners’ proficiency. [sent-164, score-0.812]

80 Table 4 shows, for each method, (1) the correlation of quiz accuracy with [sent-166, score-0.044]

81 participants’ TOEIC scores, (2) the average percentage of correct answers (Corr), incorrect answers choosing the distractor (Dist), and incorrect answers that both are correct (Both), chosen by participants, and (3) the standard deviation (Std) of Corr. [sent-171, score-0.577]

82 Twenty-three Japanese native speakers (15 males and 8 females) participated. [sent-174, score-0.099]

83 All the participants, who have taken at least 8 years of English education, self-report their proficiency levels as TOEIC scores from 380 to 990. [sent-175, score-0.087]

84 All the participants are graduate students majoring in science-related courses. [sent-176, score-0.148]

85 We call for participants by e-mailing a graduate school. [sent-177, score-0.106]

86 We provide each participant with a $10 gift card as compensation for completing the task. [sent-178, score-0.151]

87 We ask participants to solve 20 quizzes per method in the same manner as Section 3. [sent-179, score-0.228]

88 To evaluate validity of distractors, we use only reliable quizzes accepted in Section 3. [sent-180, score-0.257]

89 Namely, we exclude quizzes for which both options are correct. [sent-181, score-0.214]

90 We evaluate the correlation between learners’ accuracy on the generated quizzes and their TOEIC scores. [sent-182, score-0.218]

91 The correlation coefficient r and the standard deviation of DiscSimESL show that its distractors achieve the best validity. [sent-184, score-0.707]
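
A sketch of this validity computation, assuming Pearson's r (the text speaks of a “correlation coefficient r”); accuracies and toeic_scores are per-participant lists.

    from scipy.stats import pearsonr

    def validity_correlation(accuracies, toeic_scores):
        # Per-participant quiz accuracy vs. self-reported TOEIC score;
        # the paper reports a coefficient of about 0.76 for DiscSimESL.
        r, p = pearsonr(accuracies, toeic_scores)
        return r, p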

92 It illustrates that DiscSimESL achieves a higher level of positive correlation than THM. [sent-189, score-0.034]

93 Table 4 also shows a high percentage of choosing “(c) both are correct” on DiscSimESL, which indicates that distractors generated by DiscSimESL are difficult to distinguish for ESL learners but not for native speakers, as in the following example: … [sent-190, score-0.935]

94 The relatively lower correlation coefficient of DiscESL may be caused by inherent noise in parsing the Lang-8 corpus and by the domain difference from the quiz sentences (VOA). [sent-197, score-0.29]

95 5 Conclusion We have presented methods that automatically generate semantic distractors for fill-in-the-blank quizzes for ESL learners. [sent-198, score-0.909]

96 The proposed methods employ discriminative models trained on error patterns extracted from an ESL corpus and can generate reliable distractors by taking the context of a given sentence into consideration. [sent-199, score-0.902]

97 …3% of distractors are reliable when generated by our method (DiscSimESL). [sent-201, score-0.748]

98 The distractors also show about 0.76 correlation coefficient with learners’ TOEIC scores, indicating that they have better validity than previous methods. [sent-203, score-0.809]

99 Moreover, we will take ESL learners’ proficiency into account for generating distractors of appropriate levels for different learners. [sent-205, score-0.762]

100 We are grateful to Yangyang Xi for granting permission to use text from Lang-8 and Takuya Fujino for his error pair extraction algorithm. [sent-207, score-0.038]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('distractors', 0.675), ('esl', 0.325), ('distractor', 0.318), ('quiz', 0.186), ('toeic', 0.17), ('learners', 0.161), ('quizzes', 0.15), ('discsimesl', 0.148), ('thm', 0.148), ('rad', 0.112), ('rtm', 0.106), ('proficiency', 0.087), ('participants', 0.078), ('confusion', 0.068), ('validity', 0.068), ('corrected', 0.066), ('correction', 0.063), ('discriminative', 0.06), ('voa', 0.056), ('roundtrip', 0.052), ('dahlmeier', 0.052), ('answer', 0.051), ('speakers', 0.05), ('native', 0.049), ('generate', 0.048), ('candidates', 0.043), ('replacement', 0.043), ('alderson', 0.042), ('cfm', 0.042), ('compensation', 0.042), ('discesl', 0.042), ('hoshino', 0.042), ('majoring', 0.042), ('misused', 0.042), ('reliable', 0.039), ('reliability', 0.039), ('error', 0.038), ('sumita', 0.037), ('exclusive', 0.036), ('options', 0.035), ('komachi', 0.035), ('females', 0.035), ('irstlm', 0.035), ('generated', 0.034), ('correlation', 0.034), ('collocation', 0.033), ('coefficient', 0.032), ('target', 0.032), ('card', 0.031), ('deletion', 0.03), ('cettolo', 0.03), ('pivoting', 0.03), ('surrounding', 0.029), ('exclude', 0.029), ('insertion', 0.029), ('completing', 0.029), ('correct', 0.028), ('graduate', 0.028), ('generative', 0.028), ('blank', 0.028), ('arbor', 0.027), ('generation', 0.027), ('leacock', 0.027), ('gift', 0.027), ('corenlp', 0.026), ('wc', 0.026), ('corrections', 0.026), ('oe', 0.026), ('mitkov', 0.025), ('incorrect', 0.025), ('matrix', 0.023), ('https', 0.023), ('participant', 0.022), ('st', 0.022), ('answers', 0.022), ('jp', 0.022), ('sentence', 0.022), ('brants', 0.021), ('discriminate', 0.021), ('verbs', 0.02), ('thesaurus', 0.02), ('corpus', 0.02), ('educational', 0.02), ('spelling', 0.02), ('satisfy', 0.02), ('ke', 0.019), ('metropolitan', 0.019), ('nad', 0.019), ('cikit', 0.019), ('cloze', 0.019), ('danling', 0.019), ('haidian', 0.019), ('lep', 0.019), ('ofgrammatical', 0.019), ('scikitlearn', 0.019), ('sugaya', 0.019), ('yuki', 0.019), ('inherent', 0.018), ('interval', 0.018), ('ann', 0.018)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 122 acl-2013-Discriminative Approach to Fill-in-the-Blank Quiz Generation for Language Learners

Author: Keisuke Sakaguchi ; Yuki Arase ; Mamoru Komachi

Abstract: We propose discriminative methods to generate semantic distractors of fill-in-the-blank quizzes for language learners using a large-scale language learners’ corpus. Unlike previous studies, the proposed methods aim at satisfying both reliability and validity of generated distractors; distractors should be exclusive against answers to avoid multiple answers in one quiz, and distractors should discriminate learners’ proficiency. Detailed user evaluation with 3 native and 23 non-native speakers of English shows that our methods achieve better reliability and validity than previous methods.

2 0.19684094 8 acl-2013-A Learner Corpus-based Approach to Verb Suggestion for ESL

Author: Yu Sawai ; Mamoru Komachi ; Yuji Matsumoto

Abstract: We propose a verb suggestion method which uses candidate sets and domain adaptation to incorporate error patterns produced by ESL learners. The candidate sets are constructed from a large scale learner corpus to cover various error patterns made by learners. Furthermore, the model is trained using both a native corpus and the learner corpus via a domain adaptation technique. Experiments on two learner corpora show that the candidate sets increase the coverage of error patterns and domain adaptation improves the performance for verb suggestion.

3 0.1062834 58 acl-2013-Automated Collocation Suggestion for Japanese Second Language Learners

Author: Lis Pereira ; Erlyn Manguilimotan ; Yuji Matsumoto

Abstract: This study addresses issues of Japanese language learning concerning word combinations (collocations). Japanese learners may be able to construct grammatically correct sentences, however, these may sound “unnatural”. In this work, we analyze correct word combinations using different collocation measures and word similarity methods. While other methods use well-formed text, our approach makes use of a large Japanese language learner corpus for generating collocation candidates, in order to build a system that is more sensitive to constructions that are difficult for learners. Our results show that we get better results compared to other methods that use only wellformed text. 1

4 0.068653069 235 acl-2013-Machine Translation Detection from Monolingual Web-Text

Author: Yuki Arase ; Ming Zhou

Abstract: We propose a method for automatically detecting low-quality Web-text translated by statistical machine translation (SMT) systems. We focus on the phrase salad phenomenon that is observed in existing SMT results and propose a set of computationally inexpensive features to effectively detect such machine-translated sentences from a large-scale Web-mined text. Unlike previous approaches that require bilingual data, our method uses only monolingual text as input; therefore it is applicable for refining data produced by a variety of Web-mining activities. Evaluation results show that the proposed method achieves an accuracy of 95.8% for sentences and 80.6% for text in noisy Web pages.

5 0.043934252 291 acl-2013-Question Answering Using Enhanced Lexical Semantic Models

Author: Wen-tau Yih ; Ming-Wei Chang ; Christopher Meek ; Andrzej Pastusiak

Abstract: In this paper, we study the answer sentence selection problem for question answering. Unlike previous work, which primarily leverages syntactic analysis through dependency tree matching, we focus on improving the performance using models of lexical semantic resources. Experiments show that our systems can be consistently and significantly improved with rich lexical semantic information, regardless of the choice of learning algorithms. When evaluated on a benchmark dataset, the MAP and MRR scores are increased by 8 to 10 points, compared to one of our baseline systems using only surface-form matching. Moreover, our best system also outperforms pervious work that makes use of the dependency tree structure by a wide margin.

6 0.037491869 364 acl-2013-Typesetting for Improved Readability using Lexical and Syntactic Information

7 0.035896618 342 acl-2013-Text Classification from Positive and Unlabeled Data using Misclassified Data Correction

8 0.034393337 263 acl-2013-On the Predictability of Human Assessment: when Matrix Completion Meets NLP Evaluation

9 0.033719178 107 acl-2013-Deceptive Answer Prediction with User Preference Graph

10 0.031591736 44 acl-2013-An Empirical Examination of Challenges in Chinese Parsing

11 0.031358913 241 acl-2013-Minimum Bayes Risk based Answer Re-ranking for Question Answering

12 0.030646157 251 acl-2013-Mr. MIRA: Open-Source Large-Margin Structured Learning on MapReduce

13 0.030574478 37 acl-2013-Adaptive Parser-Centric Text Normalization

14 0.03028862 135 acl-2013-English-to-Russian MT evaluation campaign

15 0.029247642 60 acl-2013-Automatic Coupling of Answer Extraction and Information Retrieval

16 0.02906296 328 acl-2013-Stacking for Statistical Machine Translation

17 0.028972913 299 acl-2013-Reconstructing an Indo-European Family Tree from Non-native English Texts

18 0.028958149 221 acl-2013-Learning Non-linear Features for Machine Translation Using Gradient Boosting Machines

19 0.028763661 31 acl-2013-A corpus-based evaluation method for Distributional Semantic Models

20 0.028190389 65 acl-2013-BRAINSUP: Brainstorming Support for Creative Sentence Generation


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.09), (1, -0.003), (2, 0.018), (3, -0.026), (4, -0.006), (5, -0.018), (6, -0.014), (7, -0.039), (8, 0.03), (9, 0.005), (10, -0.026), (11, 0.029), (12, -0.013), (13, -0.01), (14, -0.043), (15, 0.005), (16, -0.013), (17, 0.013), (18, -0.002), (19, -0.003), (20, 0.093), (21, -0.004), (22, 0.069), (23, -0.041), (24, 0.093), (25, 0.133), (26, 0.005), (27, 0.009), (28, -0.012), (29, -0.008), (30, -0.063), (31, -0.027), (32, -0.065), (33, -0.054), (34, -0.042), (35, 0.084), (36, -0.053), (37, 0.019), (38, -0.003), (39, -0.052), (40, -0.022), (41, -0.082), (42, 0.059), (43, -0.079), (44, 0.081), (45, 0.087), (46, -0.052), (47, -0.0), (48, -0.135), (49, 0.002)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.89019793 122 acl-2013-Discriminative Approach to Fill-in-the-Blank Quiz Generation for Language Learners

Author: Keisuke Sakaguchi ; Yuki Arase ; Mamoru Komachi

Abstract: We propose discriminative methods to generate semantic distractors of fill-in-the-blank quizzes for language learners using a large-scale language learners’ corpus. Unlike previous studies, the proposed methods aim at satisfying both reliability and validity of generated distractors; distractors should be exclusive against answers to avoid multiple answers in one quiz, and distractors should discriminate learners’ proficiency. Detailed user evaluation with 3 native and 23 non-native speakers of English shows that our methods achieve better reliability and validity than previous methods.

2 0.85072416 8 acl-2013-A Learner Corpus-based Approach to Verb Suggestion for ESL

Author: Yu Sawai ; Mamoru Komachi ; Yuji Matsumoto

Abstract: We propose a verb suggestion method which uses candidate sets and domain adaptation to incorporate error patterns produced by ESL learners. The candidate sets are constructed from a large scale learner corpus to cover various error patterns made by learners. Furthermore, the model is trained using both a native corpus and the learner corpus via a domain adaptation technique. Experiments on two learner corpora show that the candidate sets increase the coverage of error patterns and domain adaptation improves the performance for verb suggestion.

3 0.80551636 58 acl-2013-Automated Collocation Suggestion for Japanese Second Language Learners

Author: Lis Pereira ; Erlyn Manguilimotan ; Yuji Matsumoto

Abstract: This study addresses issues of Japanese language learning concerning word combinations (collocations). Japanese learners may be able to construct grammatically correct sentences, however, these may sound “unnatural”. In this work, we analyze correct word combinations using different collocation measures and word similarity methods. While other methods use well-formed text, our approach makes use of a large Japanese language learner corpus for generating collocation candidates, in order to build a system that is more sensitive to constructions that are difficult for learners. Our results show that we get better results compared to other methods that use only wellformed text. 1

4 0.5283519 186 acl-2013-Identifying English and Hungarian Light Verb Constructions: A Contrastive Approach

Author: Veronika Vincze ; Istvan Nagy T. ; Richard Farkas

Abstract: Here, we introduce a machine learningbased approach that allows us to identify light verb constructions (LVCs) in Hungarian and English free texts. We also present the results of our experiments on the SzegedParalellFX English–Hungarian parallel corpus where LVCs were manually annotated in both languages. With our approach, we were able to contrast the performance of our method and define language-specific features for these typologically different languages. Our presented method proved to be sufficiently robust as it achieved approximately the same scores on the two typologically different languages.

5 0.47971737 299 acl-2013-Reconstructing an Indo-European Family Tree from Non-native English Texts

Author: Ryo Nagata ; Edward Whittaker

Abstract: Mother tongue interference is the phenomenon where linguistic systems of a mother tongue are transferred to another language. Although there has been plenty of work on mother tongue interference, very little is known about how strongly it is transferred to another language and about what relation there is across mother tongues. To address these questions, this paper explores and visualizes mother tongue interference preserved in English texts written by Indo-European language speakers. This paper further explores linguistic features that explain why certain relations are preserved in English writing, and which contribute to related tasks such as native language identification.

6 0.47300249 364 acl-2013-Typesetting for Improved Readability using Lexical and Syntactic Information

7 0.4662317 371 acl-2013-Unsupervised joke generation from big data

8 0.45104888 213 acl-2013-Language Acquisition and Probabilistic Models: keeping it simple

9 0.43432304 235 acl-2013-Machine Translation Detection from Monolingual Web-Text

10 0.4179379 302 acl-2013-Robust Automated Natural Language Processing with Multiword Expressions and Collocations

11 0.37873974 34 acl-2013-Accurate Word Segmentation using Transliteration and Language Model Projection

12 0.3655256 65 acl-2013-BRAINSUP: Brainstorming Support for Creative Sentence Generation

13 0.35774499 110 acl-2013-Deepfix: Statistical Post-editing of Statistical Machine Translation Using Deep Syntactic Analysis

14 0.35539693 366 acl-2013-Understanding Verbs based on Overlapping Verbs Senses

15 0.33571243 149 acl-2013-Exploring Word Order Universals: a Probabilistic Graphical Model Approach

16 0.33547798 246 acl-2013-Modeling Thesis Clarity in Student Essays

17 0.3347486 202 acl-2013-Is a 204 cm Man Tall or Small ? Acquisition of Numerical Common Sense from the Web

18 0.32661355 276 acl-2013-Part-of-Speech Induction in Dependency Trees for Statistical Machine Translation

19 0.32652777 1 acl-2013-"Let Everything Turn Well in Your Wife": Generation of Adult Humor Using Lexical Constraints

20 0.32468036 322 acl-2013-Simple, readable sub-sentences


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.042), (6, 0.02), (11, 0.04), (15, 0.02), (24, 0.035), (26, 0.043), (28, 0.01), (35, 0.512), (40, 0.011), (42, 0.037), (48, 0.017), (70, 0.024), (88, 0.027), (90, 0.017), (93, 0.019), (95, 0.043)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.9801814 278 acl-2013-Patient Experience in Online Support Forums: Modeling Interpersonal Interactions and Medication Use

Author: Annie Chen

Abstract: Though there has been substantial research concerning the extraction of information from clinical notes, to date there has been less work concerning the extraction of useful information from patient-generated content. Using a dataset comprised of online support group discussion content, this paper investigates two dimensions that may be important in the extraction of patient-generated experiences from text; significant individuals/groups and medication use. With regard to the former, the paper describes an approach involving the pairing of important figures (e.g. family, husbands, doctors, etc.) and affect, and suggests possible applications of such techniques to research concerning online social support, as well as integration into search interfaces for patients. Additionally, the paper demonstrates the extraction of side effects and sentiment at different phases in patient medication use, e.g. adoption, current use, discontinuation and switching, and demonstrates the utility of such an application for drug safety monitoring in online discussion forums. 1

2 0.97875476 55 acl-2013-Are Semantically Coherent Topic Models Useful for Ad Hoc Information Retrieval?

Author: Romain Deveaud ; Eric SanJuan ; Patrice Bellot

Abstract: The current topic modeling approaches for Information Retrieval do not allow to explicitly model query-oriented latent topics. More, the semantic coherence of the topics has never been considered in this field. We propose a model-based feedback approach that learns Latent Dirichlet Allocation topic models on the top-ranked pseudo-relevant feedback, and we measure the semantic coherence of those topics. We perform a first experimental evaluation using two major TREC test collections. Results show that retrieval perfor- mances tend to be better when using topics with higher semantic coherence.

3 0.97384739 160 acl-2013-Fine-grained Semantic Typing of Emerging Entities

Author: Ndapandula Nakashole ; Tomasz Tylenda ; Gerhard Weikum

Abstract: Methods for information extraction (IE) and knowledge base (KB) construction have been intensively studied. However, a largely under-explored case is tapping into highly dynamic sources like news streams and social media, where new entities are continuously emerging. In this paper, we present a method for discovering and semantically typing newly emerging out-ofKB entities, thus improving the freshness and recall of ontology-based IE and improving the precision and semantic rigor of open IE. Our method is based on a probabilistic model that feeds weights into integer linear programs that leverage type signatures of relational phrases and type correlation or disjointness constraints. Our experimental evaluation, based on crowdsourced user studies, show our method performing significantly better than prior work.

4 0.97038651 76 acl-2013-Building and Evaluating a Distributional Memory for Croatian

Author: Jan Snajder ; Sebastian Pado ; Zeljko Agic

Abstract: We report on the first structured distributional semantic model for Croatian, DM.HR. It is constructed after the model of the English Distributional Memory (Baroni and Lenci, 2010), from a dependencyparsed Croatian web corpus, and covers about 2M lemmas. We give details on the linguistic processing and the design principles. An evaluation shows state-of-theart performance on a semantic similarity task with particularly good performance on nouns. The resource is freely available.

5 0.9637143 311 acl-2013-Semantic Neighborhoods as Hypergraphs

Author: Chris Quirk ; Pallavi Choudhury

Abstract: Ambiguity preserving representations such as lattices are very useful in a number of NLP tasks, including paraphrase generation, paraphrase recognition, and machine translation evaluation. Lattices compactly represent lexical variation, but word order variation leads to a combinatorial explosion of states. We advocate hypergraphs as compact representations for sets of utterances describing the same event or object. We present a method to construct hypergraphs from sets of utterances, and evaluate this method on a simple recognition task. Given a set of utterances that describe a single object or event, we construct such a hypergraph, and demonstrate that it can recognize novel descriptions of the same event with high accuracy.

6 0.95735979 10 acl-2013-A Markov Model of Machine Translation using Non-parametric Bayesian Inference

7 0.95506608 32 acl-2013-A relatedness benchmark to test the role of determiners in compositional distributional semantics

same-paper 8 0.95429355 122 acl-2013-Discriminative Approach to Fill-in-the-Blank Quiz Generation for Language Learners

9 0.80558962 60 acl-2013-Automatic Coupling of Answer Extraction and Information Retrieval

10 0.79435521 238 acl-2013-Measuring semantic content in distributional vectors

11 0.7832464 113 acl-2013-Derivational Smoothing for Syntactic Distributional Semantics

12 0.78003824 58 acl-2013-Automated Collocation Suggestion for Japanese Second Language Learners

13 0.75712013 283 acl-2013-Probabilistic Domain Modelling With Contextualized Distributional Semantic Vectors

14 0.75575495 121 acl-2013-Discovering User Interactions in Ideological Discussions

15 0.74373341 158 acl-2013-Feature-Based Selection of Dependency Paths in Ad Hoc Information Retrieval

16 0.73339897 219 acl-2013-Learning Entity Representation for Entity Disambiguation

17 0.73000878 352 acl-2013-Towards Accurate Distant Supervision for Relational Facts Extraction

18 0.72816998 347 acl-2013-The Role of Syntax in Vector Space Models of Compositional Semantics

19 0.72491157 231 acl-2013-Linggle: a Web-scale Linguistic Search Engine for Words in Context

20 0.72437227 371 acl-2013-Unsupervised joke generation from big data