acl acl2013 acl2013-387 knowledge-graph by maker-knowledge-mining

387 acl-2013-Why-Question Answering using Intra- and Inter-Sentential Causal Relations


Source: pdf

Author: Jong-Hoon Oh ; Kentaro Torisawa ; Chikara Hashimoto ; Motoki Sano ; Stijn De Saeger ; Kiyonori Ohtake

Abstract: In this paper, we explore the utility of intra- and inter-sentential causal relations between terms or clauses as evidence for answering why-questions. To the best of our knowledge, this is the first work that uses both intra- and inter-sentential causal relations for why-QA. We also propose a method for assessing the appropriateness of causal relations as answers to a given question using the semantic orientation of excitation proposed by Hashimoto et al. (2012). By applying these ideas to Japanese why-QA, we improved precision by 4.4% against all the questions in our test set over the current state-of-the-art system for Japanese why-QA. In addition, unlike the state-of-the-art system, our system could achieve very high precision (83.2%) for 25% of all the questions in the test set by restricting its output to the confident answers only.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract In this paper, we explore the utility of intra- and inter-sentential causal relations between terms or clauses as evidence for answering why-questions. [sent-4, score-0.931]

2 To the best of our knowledge, this is the first work that uses both intra- and inter-sentential causal relations for why-QA. [sent-5, score-0.895]

3 We also propose a method for assessing the appropriateness of causal relations as answers to a given question using the semantic orientation of excitation proposed by Hashimoto et al. [sent-6, score-1.243]

4 1 Introduction “Why-question answering” (why-QA) is a task to retrieve answers from a given text archive for a why-question, such as “Why are tsunamis generated?” [sent-12, score-0.326]

5 Consider the sentence A1 in Table 1, which represents the causal relation between the cause, “the ocean’s water mass ...” [sent-20, score-0.98]

6 Cause and effect parts of each causal relation, marked with [...]. [sent-24, score-0.881]

7 This is a good answer to the question, “Why are tsunamis generated?” [sent-33, score-0.384]

8 Our method finds text fragments that include such causal relations with an effect part that resembles a given question and provides them as answers. [sent-35, score-0.988]

9 Some methods utilized the causal relations between terms as evidence for finding answers (i.e. [sent-38, score-0.973]

10 , matching a cause term with an answer text and its effect term with a question) (Girju, 2003; Higashinaka and Isozaki, 2008). [sent-40, score-0.357]

11 , a text fragment that may be provided as an answer, explicitly contains a complex causal relation. [sent-45, score-0.93]

12 For example, A5 in Table 1 is an incorrect answer to “Why are tsunamis generated?” [sent-48, score-0.384]

13 The first challenge is to accurately identify a wide range of causal relations like those in Table 1 in answer candidates. [sent-54, score-1.031]

14 To meet this challenge, we developed a sequence labeling method that identifies not only intra-sentential causal relations, i.e. [sent-55, score-0.812]

15 , the causal relations between two terms/phrases/clauses expressed in a single sentence (e.g. [sent-57, score-0.895]

16 , A1 in Table 1), but also the inter-sentential causal relations, which are the causal relations between two terms/phrases/clauses expressed in two adjacent sentences (e.g. [sent-59, score-1.707]

17 The second challenge is assessing the appropriateness of each identified causal relation as an answer to a given question. [sent-62, score-1.007]

18 This is important since the causal relations identified in the answer candidates may have nothing to do with a given question. [sent-63, score-1.13]

19 In this case, we have to reject these causal relations because they are inappropriate as an answer to the question. [sent-64, score-1.031]

20 When a single answer candidate contains many causal relations, we also have to select the appropriate ones. [sent-65, score-1.037]

21 Those in A1–A3 are appropriate answers to “Why are tsunamis generated? [sent-67, score-0.37]

22 When we made our system provide only its confident answers, selected according to their confidence scores, the precision of these confident answers was 83.2%. [sent-84, score-0.297]

23 2 Related Work Although there were many previous works on the acquisition of intra- and inter-sentential causal relations from texts (Khoo et al. [sent-89, score-0.895]

24 , 2012), their application to why-QA was limited to causal relations between terms (Girju, 2003; Higashinaka and Isozaki, 2008). [sent-95, score-0.895]

25 , 2012), and causal relations between terms (Girju, 2003; Higashinaka and Isozaki, 2008) has been used. [sent-99, score-0.895]

26 On the other hand, our method explicitly identifies intra- and inter-sentential causal relations between terms/phrases/clauses that have complex structures and uses the identified relations to answer a why-question. [sent-101, score-1.133]

27 We extended our previous work by introducing causal relations recognized from answer candidates to the answer re-ranking. [sent-111, score-1.247]

28 The top ranked passages are regarded as answer candidates in the answer re-ranking. [sent-120, score-0.371]

29 In this work, we propose causal relation features generated from intra- and inter-sentential causal relations in answer candidates and use them along with the features proposed in our previous work for training our re-ranker. [sent-126, score-2.031]

30 4 Causal Relations for Why-QA We describe causal relation recognition in Section 4.1. [sent-127, score-0.949]

31 and describe the features (of our re-ranker) generated from causal relations in Section 4.2. [sent-128, score-0.935]

32 4.1 Causal Relation Recognition We restrict causal relations to those expressed by such cue phrases for causality as (the Japanese counterparts of) “because” and “as a result,” as in previous work (Khoo et al. [sent-131, score-1.033]

33 , 2000; Inui and Okumura, 2005) and recognize them in the following two steps: extracting causal relation candidates and recognizing causal relations from these candidates. [sent-132, score-1.924]

34 4.1.1 Extracting Causal Relation Candidates We identify cue phrases for causality in answer candidates using the regular expressions in Table 2. [sent-135, score-0.354]

35 Then, for each identified cue phrase, we extract three sentences as a causal relation candidate, where one contains the cue phrase and the other two are the previous and next sentences in the answer candidate. [sent-136, score-1.229]

36 When there is more than one cue phrase in an answer candidate, we use all of them for extracting the causal relation candidates, assuming that each of the cue phrases is linked to different causal relations. [sent-137, score-2.022]

37 We call a cue phrase used for extracting a causal relation candidate a c-marker (causality marker) of the candidate to distinguish it from the other cue phrases in the same causal relation candidate. [sent-138, score-2.094]
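
A minimal sketch of this candidate-extraction step, assuming English stand-ins such as "because" and "this causes" for the Japanese cue-phrase regular expressions of Table 2 (which are not reproduced here):

```python
import re

# Hypothetical stand-ins for the Japanese cue-phrase patterns of Table 2.
CUE_PATTERN = re.compile(r"\b(because|as a result|this causes)\b", re.IGNORECASE)

def extract_candidates(answer_sentences):
    """One three-sentence causal relation candidate per cue phrase occurrence.

    Each candidate is (previous sentence, cue sentence, next sentence,
    c-marker), where the c-marker is the cue phrase that triggered the
    extraction; other cue phrases in the same window yield their own
    separate candidates.
    """
    candidates = []
    for i, sent in enumerate(answer_sentences):
        for match in CUE_PATTERN.finditer(sent):
            prev_s = answer_sentences[i - 1] if i > 0 else ""
            next_s = answer_sentences[i + 1] if i + 1 < len(answer_sentences) else ""
            candidates.append((prev_s, sent, next_s, match.group(0)))
    return candidates
```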

38 4.1.2 Recognizing Causal Relations Next, we recognize the spans of the cause and effect parts of a causal relation linked to a c-marker. [sent-148, score-1.095]

39 In our task, CRFs take three sentences of a causal relation candidate as input and generate their cause-effect annotations with a set of possible cause-effect IOB labels, including Begin-Cause (B-C), Inside-Cause (I-C), Begin-Effect (B-E), Inside-Effect (I-E), and Outside (O). [sent-151, score-0.975]

40 We used the three types of feature sets in Table 3 for training the CRFs, where j is in the range i−4 ≤ j ≤ i+4 for the current position i in a causal relation candidate. [sent-154, score-0.812]
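
A rough sketch of the feature windows, using plain surface tokens as a simplified stand-in for the lexical, morphological, and syntactic feature sets of Table 3; the CRF itself could be trained with CRF++ as in the paper, or with a toolkit such as sklearn-crfsuite:

```python
# Cause/effect IOB labels: Begin/Inside-Cause, Begin/Inside-Effect, Outside.
LABELS = ["B-C", "I-C", "B-E", "I-E", "O"]

def token_features(tokens, c_marker_index, i, window=4):
    """Features for position i, drawn from every position j with
    i-4 <= j <= i+4, plus the signed distance to the c-marker."""
    feats = {"dist_to_cmarker": str(i - c_marker_index)}
    for j in range(i - window, i + window + 1):
        if 0 <= j < len(tokens):
            feats["tok[%+d]" % (j - i)] = tokens[j]
    return feats

def sequence_features(tokens, c_marker_index):
    # One feature dict per token; (feature sequence, label sequence) pairs
    # are then fed to the CRF trainer, e.g. with sklearn-crfsuite:
    #   crf = sklearn_crfsuite.CRF(algorithm="lbfgs")
    #   crf.fit(X_train, y_train)
    return [token_features(tokens, c_marker_index, i) for i in range(len(tokens))]
```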

41 Figure 2: Recognizing causal relations by sequence labeling: The underlined text “This causes” represents a c-marker, and EOS and EOA represent end-of-sentence and end-of-answer-candidate markers. [sent-156, score-0.937]

42 Syntactic features: The span of the causal relations in a given causal relation candidate strongly depends on the c-marker in the candidate. [sent-159, score-1.87]

43 Especially for intra-sentential causal relations, their cause and effect parts often appear in the subtrees of the c-marker’s node or those of the c-marker’s parent node in a syntactic dependency tree structure. [sent-160, score-1.071]

44 , 2012) to assess the appropriateness of each causal relation obtained by our causal relation recognizer as an answer to a given question. [sent-185, score-2.037]

45 Finding answers with term matching and partial tree matching has been used in the literature of question answering (Girju, 2003; Narayanan and Harabagiu, 2004; Moschitti et al. [sent-186, score-0.313]

46 Each feature type expresses the causal relations in an answer candidate that are determined to be appropriate as answers to a given question by term matching (tf1–tf4), partial tree matching (pf1–pf4), and excitation polarity matching (ef1–ef4). [sent-193, score-1.639]

47 We call these causal relations, which are used for generating our causal relation features, “candidates of an appropriate causal relation” in this section. [sent-194, score-2.896]

48 Note that if one answer candidate has more than one candidate of an appropriate causal relation found by one matching method, we generated features for each appropriate candidate and merged all of them for the answer candidate. [sent-195, score-1.502]
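
A small sketch of this merging step; the union-with-maximum policy below is an assumption, since the text only states that the features from all appropriate candidates are merged per answer candidate:

```python
def merge_features(per_relation_features):
    """Merge the feature dicts generated from each appropriate causal
    relation into a single dict for the whole answer candidate, keeping
    the maximum value when the same feature fires more than once."""
    merged = {}
    for feats in per_relation_features:
        for name, value in feats.items():
            merged[name] = max(value, merged.get(name, 0))
    return merged
```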

49 4.2.1 Term Matching Our term matching method judges that a causal relation is a candidate of an appropriate causal relation if its effect part contains at least one content word (noun, verb, or adjective) from the question. [sent-206, score-2.071]

50 For example, all the causal relations of A1–A4 in Table 1 are candidates of an appropriate causal relation to the question, “Why is a tsunami generated?” [sent-207, score-1.975]
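
A minimal sketch of the term matching judgment, assuming part-of-speech tagging is done upstream and both inputs arrive as (word, POS) pairs:

```python
CONTENT_POS = {"noun", "verb", "adjective"}

def content_words(tagged):
    """Content words (nouns, verbs, and adjectives) of a tagged text."""
    return {word for word, pos in tagged if pos in CONTENT_POS}

def is_term_match(effect_part, question):
    """The causal relation is a candidate of an appropriate causal relation
    if its effect part shares at least one content word with the question."""
    return bool(content_words(effect_part) & content_words(question))
```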

51 tf1–tf4 are generated from candidates of an appropriate causal relation identified by term matching. [sent-209, score-1.116]

52 For example, the word 3-gram “this/cause/QW” is extracted from “This causes tsunamis” in A2 for “Why is a tsunami generated?” [sent-212, score-0.292]
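
A sketch of such n-gram extraction; note that the paper's example uses lemmatized forms ("cause" for "causes"), while this sketch works on surface tokens:

```python
def qw_ngrams(effect_tokens, question_content_words, n=3):
    """tf1-style word n-grams over the effect part, with every word that
    also appears in the question replaced by the placeholder QW, e.g.
    ["this", "causes", "tsunamis"] -> ["this/causes/QW"] when the
    question contains "tsunamis"."""
    marked = ["QW" if t in question_content_words else t for t in effect_tokens]
    return ["/".join(marked[i:i + n]) for i in range(len(marked) - n + 1)]
```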

53 tf3 is a binary feature that indicates the existence of candidates of an appropriate causal relation identified by term matching in an answer candidate. [sent-218, score-1.266]

54 tf4 represents the degree of the relevance of the candidates of an appropriate causal relation measured by the number of matched terms: one, two, and more than two. [sent-219, score-1.078]

55 4.2.2 Partial Tree Matching Our partial tree matching method judges a causal relation to be a candidate of an appropriate causal relation if its effect part contains at least one partial tree of the question, where the partial tree covers more than one content word. [sent-222, score-2.231]

56 For example, only the causal relation of A1 among A1–A4 is a candidate of an appropriate causal relation for the question “Why are tsunamis generated?” [sent-223, score-2.242]
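
A simplified sketch of the partial tree matching judgment: dependency structures are reduced to sets of head-dependent edges, and an edge over two content words is taken as the minimal partial tree (the paper matches larger subtrees as well):

```python
def is_partial_tree_match(effect_edges, question_edges, content):
    """True if some question edge whose two endpoints are both content
    words also appears among the effect part's dependency edges."""
    for head, dep in question_edges:
        if head in content and dep in content and (head, dep) in effect_edges:
            return True
    return False
```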

57 pf1–pf4 are generated from candidates of an appropriate causal relation identified by partial tree matching. [sent-225, score-1.156]

58 pf3 is a binary feature that indicates whether an answer candidate contains candidates of an appropriate causal relation identified by partial tree matching. [sent-229, score-1.314]

59 pf4 represents the degree of the relevance of the candidate of an appropriate causal relation measured by the number of matched partial trees: one, two, and more than two. [sent-230, score-1.057]

60 This consistency suggests that A1 is a good answer to the question “Why are tsunamis caused?” [sent-243, score-0.429]

61 This suggests that A4 is not a good answer to “Why are tsunamis caused?” [sent-246, score-0.384]

62 Next, we assume that a causal relation is appropriate as an answer to a question if the effect part of the causal relation and the question share at least one common noun with the same polarity. [sent-255, score-2.134]
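
A sketch of this polarity check; the excitation polarity lexicon itself (Hashimoto et al., 2012) is assumed as a given mapping from each noun in context to "excitatory" or "inhibitory":

```python
def polarity_match(effect_polarities, question_polarities):
    """True if the effect part and the question share at least one noun
    with the same excitation polarity. Both arguments map each noun to
    its polarity in context ('excitatory' or 'inhibitory')."""
    shared = set(effect_polarities) & set(question_polarities)
    return any(effect_polarities[n] == question_polarities[n] for n in shared)
```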

63 More detailed information concerning the configurations of all the nouns in all the candidates of an appropriate causal relation (including their cause parts) and the question is encoded into our feature set ef1–ef4 in Table 4, and the final judgment is made by our re-ranker. [sent-256, score-1.232]

64 ef1 indicates whether each type of noun-polarity pair exists in a causal relation. [sent-258, score-0.812]

65 In other words, ef2 indicates whether each type of noun-polarity pair exists in the causal relation for each word class. [sent-261, score-0.93]

66 ef3 indicates the existence of candidates of an appropriate causal relation identified by this matching scheme, and ef4 represents the number of noun-polarity pairs shared by the question and the candidates of an appropriate causal relation (one, two, and more than two). [sent-262, score-2.198]

67 5 Experiments We experimented with causal relation recognition and why-QA with our causal relation features. [sent-263, score-1.879]

68 This why-QA data set is composed of 850 Japanese why-questions and their top-20 answer candidates obtained by answer candidate extraction from 600 million Japanese web pages. [sent-267, score-0.413]

69 5.2 Data Set for Causal Relation Recognition We built a data set composed of manually annotated causal relations for evaluating our causal relation recognition. [sent-279, score-1.841]

70 Finally, we had a data set made of 16,051 causal relation candidates, 8,117 of which had a true causal relation; the numbers of intra- and inter-sentential causal relations were 7,120 and 997, respectively. [sent-282, score-2.637]

71 We performed 10-fold cross validation to evaluate our causal relation recognition with this 10-fold data. [sent-284, score-0.949]

72 5.3 Causal Relation Recognition We used CRF++ (http://code.google.com/p/crfpp/) for training our causal relation recognizer. [sent-286, score-0.949]

73 the result for our baseline system that recognizes a causal relation by simply taking the two phrases adjacent to a c-marker (i.e. [sent-295, score-0.93]

74 , before and after) as cause and effect parts of the causal relation. [sent-297, score-0.977]

75 In other words, we judged that a causal relation recognized by BASELINE is correct if both cause and effect parts in the gold standard are adjacent to a c-marker. [sent-299, score-1.095]

76 INTRA-SENT and INTER-SENT represent the results for intra- and inter-sentential causal relations, and ALL represents the result for both types of causal relations by our method. [sent-300, score-1.814]

77 From these results, we confirmed that our method recognized both intra- and inter-sentential causal relations with over 80% precision, and it significantly outperformed our baseline system in both precision and recall rates. [sent-301, score-0.956]

78 Table: results of causal relation recognition (%). We also investigated the contribution of the three types of features used in our causal relation recognition to the performance. [sent-305, score-0.985]

79 We used the causal relations obtained from the 10-fold cross validation for our why-QA experiments. [sent-309, score-0.895]

80 5.4 Why-Question Answering We performed why-QA experiments to confirm the effectiveness of intra- and inter-sentential causal relations in a why-QA task. [sent-311, score-0.895]

81 OURCF uses a re-ranker trained with only our causal relation features. [sent-314, score-0.93]

82 OH+PREVCF is a system with a re-ranker trained with the features used in OH and with the causal relation feature proposed in Higashinaka and Isozaki (2008). [sent-317, score-0.963]

83 The causal relation feature includes an indicator that determines whether the causal relations between two terms appear in a question-answer pair, i.e., the cause in an answer and its effect in a question. [sent-318, score-2.105]

84 We acquired the causal relation instances (between terms) from 600 million Japanese web pages using the method of De Saeger et al. [sent-319, score-0.93]

85 (2009) and exploited the top-100,000 causal relation instances in this system. [sent-320, score-0.93]

86 PROPOSED has a re-ranker trained with our causal relation features as well as the three types of features proposed in Oh et al. [sent-321, score-0.98]

87 Comparison between OH and PROPOSED reveals the contribution of our causal relation features to why-QA. [sent-323, score-0.947]

88 Although this suggests the effectiveness of our causal relation features, the overall performance of OURCF was lower than that of OH. [sent-339, score-0.93]

89 Figure 4: Effect of causal relation features on the top-answers. We also compared confident answers of OURCF, OH, and PROPOSED by making each system provide only the k confident top-answers (for k questions) selected by their SVM scores given by each system’s re-ranker. [sent-342, score-1.197]
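
A sketch of this confident-answer evaluation, with each system's per-question SVM re-ranker scores assumed to be available:

```python
def precision_at_k_confident(top_answers, k):
    """Precision of the k most confident top-answers, as in Figure 4.
    `top_answers` holds one (svm_score, is_correct) pair per question."""
    confident = sorted(top_answers, key=lambda p: p[0], reverse=True)[:k]
    return sum(1 for _, ok in confident if ok) / max(len(confident), 1)
```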

90 This experiment confirmed that our causal relation features were also effective in improving the quality of the highly confident answers. [sent-349, score-1.025]

91 We think that one of the reasons is the relatively small coverage of the excitation polarity lexicon, a core resource in our excitation polarity matching. [sent-352, score-0.41]

92 Next, we investigated the contribution of the intra- and inter-sentential causal relations to the performance of our method. [sent-354, score-0.895]

93 We used only one of the two types of causal relations for generating causal relation features (INTRA-SENT and INTER-SENT) for training our re-ranker and compared the results in these settings with the result when both were used (ALL (PROPOSED)). [sent-355, score-1.842]

94 Both intra- and inter-sentential causal relations contributed to the performance improvement. [sent-357, score-0.895]

95 Table 8: Results with/without intra- and inter-sentential causal relations (%). We also investigated the contributions of the three types of causal relation features by ablation tests (Table 9). [sent-360, score-1.842]

96 Table 9: why-QA results (%). 6 Conclusion In this paper, we explored the utility of intra- and inter-sentential causal relations for ranking answer candidates to why-questions. [sent-365, score-1.111]

97 We also proposed a method for assessing the appropriateness of causal relations as answers to a given question using the semantic orientation of excitation. [sent-366, score-1.1]

98 Our system achieved 83.2% precision for its confident answers, when it only provided its confident answers for 25% of all the questions in our test set. [sent-371, score-0.275]

99 Investigating the characteristics of causal relations in Japanese text. [sent-430, score-0.895]

100 Extracting causal knowledge from a medical database using graphical patterns. [sent-440, score-0.812]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('causal', 0.812), ('tsunamis', 0.248), ('oh', 0.196), ('excitation', 0.143), ('answer', 0.136), ('relation', 0.118), ('hashimoto', 0.106), ('japanese', 0.098), ('cause', 0.096), ('higashinaka', 0.086), ('relations', 0.083), ('candidates', 0.08), ('answers', 0.078), ('causality', 0.075), ('ourcf', 0.065), ('cue', 0.063), ('torisawa', 0.063), ('girju', 0.063), ('polarity', 0.062), ('saeger', 0.06), ('confident', 0.058), ('questions', 0.056), ('murata', 0.055), ('templates', 0.053), ('isozaki', 0.051), ('stijn', 0.05), ('verberne', 0.05), ('effect', 0.048), ('chikara', 0.048), ('candidate', 0.045), ('question', 0.045), ('appropriate', 0.044), ('prevcf', 0.043), ('kentaro', 0.043), ('appropriateness', 0.041), ('excitatory', 0.038), ('inhibitory', 0.038), ('partial', 0.038), ('matching', 0.037), ('polarities', 0.036), ('answering', 0.036), ('kazama', 0.035), ('kiyonori', 0.029), ('khoo', 0.029), ('mj', 0.027), ('water', 0.026), ('tsunami', 0.026), ('varga', 0.026), ('node', 0.026), ('ichi', 0.026), ('precision', 0.025), ('qw', 0.025), ('orientation', 0.025), ('sentiment', 0.024), ('represents', 0.024), ('classes', 0.023), ('generated', 0.023), ('tree', 0.022), ('inundation', 0.022), ('rct', 0.022), ('riaz', 0.022), ('parts', 0.021), ('inui', 0.02), ('syntactic', 0.02), ('term', 0.02), ('confirmed', 0.02), ('recognition', 0.019), ('morpheme', 0.019), ('jun', 0.019), ('recognizing', 0.019), ('motoki', 0.019), ('sano', 0.019), ('suzan', 0.019), ('identified', 0.019), ('caused', 0.019), ('passages', 0.019), ('nouns', 0.019), ('jth', 0.018), ('configurations', 0.018), ('subtree', 0.018), ('causes', 0.018), ('phrase', 0.018), ('radinsky', 0.018), ('lou', 0.018), ('blanco', 0.018), ('features', 0.017), ('content', 0.017), ('swer', 0.017), ('boves', 0.017), ('bunsetsu', 0.017), ('weaken', 0.017), ('androutsopoulos', 0.017), ('proposed', 0.016), ('entailment', 0.016), ('outperformed', 0.016), ('composed', 0.016), ('class', 0.016), ('lightweight', 0.016), ('narayanan', 0.016), ('masaki', 0.016)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000002 387 acl-2013-Why-Question Answering using Intra- and Inter-Sentential Causal Relations

Author: Jong-Hoon Oh ; Kentaro Torisawa ; Chikara Hashimoto ; Motoki Sano ; Stijn De Saeger ; Kiyonori Ohtake

Abstract: In this paper, we explore the utility of intra- and inter-sentential causal relations between terms or clauses as evidence for answering why-questions. To the best of our knowledge, this is the first work that uses both intra- and inter-sentential causal relations for why-QA. We also propose a method for assessing the appropriateness of causal relations as answers to a given question using the semantic orientation of excitation proposed by Hashimoto et al. (2012). By applying these ideas to Japanese why-QA, we improved precision by 4.4% against all the questions in our test set over the current state-of-the-art system for Japanese why-QA. In addition, unlike the state-of-the-art system, our system could achieve very high precision (83.2%) for 25% of all the questions in the test set by restricting its output to the confident answers only.

2 0.49449569 386 acl-2013-What causes a causal relation? Detecting Causal Triggers in Biomedical Scientific Discourse

Author: Claudiu Mihaila ; Sophia Ananiadou

Abstract: Current domain-specific information extraction systems represent an important resource for biomedical researchers, who need to process vaster amounts of knowledge in short times. Automatic discourse causality recognition can further improve their workload by suggesting possible causal connections and aiding in the curation of pathway models. We here describe an approach to the automatic identification of discourse causality triggers in the biomedical domain using machine learning. We create several baselines and experiment with various parameter settings for three algorithms, i.e., Conditional Random Fields (CRF), Support Vector Machines (SVM) and Random Forests (RF). Also, we evaluate the impact of lexical, syntactic and semantic features on each of the algorithms and look at errors. The best performance of 79.35% F-score is achieved by CRFs when using all three feature types.

3 0.15781698 42 acl-2013-Aid is Out There: Looking for Help from Tweets during a Large Scale Disaster

Author: Istvan Varga ; Motoki Sano ; Kentaro Torisawa ; Chikara Hashimoto ; Kiyonori Ohtake ; Takao Kawai ; Jong-Hoon Oh ; Stijn De Saeger

Abstract: The 2011 Great East Japan Earthquake caused a wide range of problems, and as countermeasures, many aid activities were carried out. Many of these problems and aid activities were reported via Twitter. However, most problem reports and corresponding aid messages were not successfully exchanged between victims and local governments or humanitarian organizations, overwhelmed by the vast amount of information. As a result, victims could not receive necessary aid and humanitarian organizations wasted resources on redundant efforts. In this paper, we propose a method for discovering matches between problem reports and aid messages. Our system contributes to problem-solving in a large scale disaster situation by facilitating communication between victims and humanitarian organizations.

4 0.10936445 291 acl-2013-Question Answering Using Enhanced Lexical Semantic Models

Author: Wen-tau Yih ; Ming-Wei Chang ; Christopher Meek ; Andrzej Pastusiak

Abstract: In this paper, we study the answer sentence selection problem for question answering. Unlike previous work, which primarily leverages syntactic analysis through dependency tree matching, we focus on improving the performance using models of lexical semantic resources. Experiments show that our systems can be consistently and significantly improved with rich lexical semantic information, regardless of the choice of learning algorithms. When evaluated on a benchmark dataset, the MAP and MRR scores are increased by 8 to 10 points, compared to one of our baseline systems using only surface-form matching. Moreover, our best system also outperforms previous work that makes use of the dependency tree structure by a wide margin.

5 0.091745295 60 acl-2013-Automatic Coupling of Answer Extraction and Information Retrieval

Author: Xuchen Yao ; Benjamin Van Durme ; Peter Clark

Abstract: Information Retrieval (IR) and Answer Extraction are often designed as isolated or loosely connected components in Question Answering (QA), with repeated overengineering on IR, and not necessarily performance gain for QA. We propose to tightly integrate them by coupling automatically learned features for answer extraction to a shallow-structured IR model. Our method is very quick to implement, and significantly improves IR for QA (measured in Mean Average Precision and Mean Reciprocal Rank) by 10%-20% against an uncoupled retrieval baseline in both document and passage retrieval, which further leads to a downstream 20% improvement in QA F1.

6 0.09009099 169 acl-2013-Generating Synthetic Comparable Questions for News Articles

7 0.084472746 107 acl-2013-Deceptive Answer Prediction with User Preference Graph

8 0.081524976 241 acl-2013-Minimum Bayes Risk based Answer Re-ranking for Question Answering

9 0.071566023 272 acl-2013-Paraphrase-Driven Learning for Open Question Answering

10 0.057059675 292 acl-2013-Question Classification Transfer

11 0.054228738 245 acl-2013-Modeling Human Inference Process for Textual Entailment Recognition

12 0.053359322 188 acl-2013-Identifying Sentiment Words Using an Optimization-based Model without Seed Words

13 0.052180704 297 acl-2013-Recognizing Partial Textual Entailment

14 0.052073304 329 acl-2013-Statistical Machine Translation Improves Question Retrieval in Community Question Answering via Matrix Factorization

15 0.051526055 266 acl-2013-PAL: A Chatterbot System for Answering Domain-specific Questions

16 0.050410341 290 acl-2013-Question Analysis for Polish Question Answering

17 0.050350748 2 acl-2013-A Bayesian Model for Joint Unsupervised Induction of Sentiment, Aspect and Discourse Representations

18 0.049930677 159 acl-2013-Filling Knowledge Base Gaps for Distant Supervision of Relation Extraction

19 0.047881026 242 acl-2013-Mining Equivalent Relations from Linked Data

20 0.047464266 197 acl-2013-Incremental Topic-Based Translation Model Adaptation for Conversational Spoken Language Translation


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.124), (1, 0.075), (2, -0.025), (3, -0.049), (4, -0.002), (5, 0.107), (6, -0.018), (7, -0.105), (8, 0.09), (9, 0.071), (10, 0.102), (11, -0.012), (12, -0.039), (13, 0.021), (14, 0.028), (15, -0.015), (16, 0.06), (17, -0.14), (18, -0.065), (19, -0.011), (20, 0.086), (21, -0.04), (22, 0.066), (23, -0.057), (24, -0.005), (25, 0.07), (26, -0.039), (27, 0.004), (28, 0.007), (29, -0.08), (30, 0.019), (31, 0.1), (32, -0.151), (33, 0.09), (34, -0.117), (35, 0.123), (36, 0.079), (37, 0.147), (38, 0.027), (39, -0.049), (40, 0.062), (41, 0.11), (42, 0.125), (43, 0.049), (44, -0.191), (45, 0.064), (46, -0.157), (47, 0.14), (48, -0.038), (49, -0.11)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.92323154 387 acl-2013-Why-Question Answering using Intra- and Inter-Sentential Causal Relations

Author: Jong-Hoon Oh ; Kentaro Torisawa ; Chikara Hashimoto ; Motoki Sano ; Stijn De Saeger ; Kiyonori Ohtake

Abstract: In this paper, we explore the utility of intra- and inter-sentential causal relations between terms or clauses as evidence for answering why-questions. To the best of our knowledge, this is the first work that uses both intra- and inter-sentential causal relations for why-QA. We also propose a method for assessing the appropriateness of causal relations as answers to a given question using the semantic orientation of excitation proposed by Hashimoto et al. (2012). By applying these ideas to Japanese why-QA, we improved precision by 4.4% against all the questions in our test set over the current state-of-theart system for Japanese why-QA. In addi- tion, unlike the state-of-the-art system, our system could achieve very high precision (83.2%) for 25% of all the questions in the test set by restricting its output to the confident answers only.

2 0.75810939 386 acl-2013-What causes a causal relation? Detecting Causal Triggers in Biomedical Scientific Discourse

Author: Claudiu Mihaila ; Sophia Ananiadou

Abstract: Current domain-specific information extraction systems represent an important resource for biomedical researchers, who need to process vaster amounts of knowledge in short times. Automatic discourse causality recognition can further improve their workload by suggesting possible causal connections and aiding in the curation of pathway models. We here describe an approach to the automatic identification of discourse causality triggers in the biomedical domain using machine learning. We create several baselines and experiment with various parameter settings for three algorithms, i.e., Conditional Random Fields (CRF), Support Vector Machines (SVM) and Random Forests (RF). Also, we evaluate the impact of lexical, syntactic and semantic features on each of the algorithms and look at errors. The best performance of 79.35% F-score is achieved by CRFs when using all three feature types.

3 0.49964741 42 acl-2013-Aid is Out There: Looking for Help from Tweets during a Large Scale Disaster

Author: Istvan Varga ; Motoki Sano ; Kentaro Torisawa ; Chikara Hashimoto ; Kiyonori Ohtake ; Takao Kawai ; Jong-Hoon Oh ; Stijn De Saeger

Abstract: The 2011 Great East Japan Earthquake caused a wide range of problems, and as countermeasures, many aid activities were carried out. Many of these problems and aid activities were reported via Twitter. However, most problem reports and corresponding aid messages were not successfully exchanged between victims and local governments or humanitarian organizations, overwhelmed by the vast amount of information. As a result, victims could not receive necessary aid and humanitarian organizations wasted resources on redundant efforts. In this paper, we propose a method for discovering matches between problem reports and aid messages. Our system contributes to problem-solving in a large scale disaster situation by facilitating communication between victims and humanitarian organizations.

4 0.3709605 266 acl-2013-PAL: A Chatterbot System for Answering Domain-specific Questions

Author: Yuanchao Liu ; Ming Liu ; Xiaolong Wang ; Limin Wang ; Jingjing Li

Abstract: In this paper, we propose PAL, a prototype chatterbot for answering non-obstructive psychological domain-specific questions. This system focuses on providing primary suggestions or helping people relieve pressure by extracting knowledge from online forums, based on which the chatterbot system is constructed. The strategies used by PAL, including semantic-extension-based question matching, solution management with personal information consideration, and XML-based knowledge pattern construction, are described and discussed. We also conduct a primary test for the feasibility of our system.

5 0.36241353 69 acl-2013-Bilingual Lexical Cohesion Trigger Model for Document-Level Machine Translation

Author: Guosheng Ben ; Deyi Xiong ; Zhiyang Teng ; Yajuan Lu ; Qun Liu

Abstract: In this paper, we propose a bilingual lexical cohesion trigger model to capture lexical cohesion for document-level machine translation. We integrate the model into hierarchical phrase-based machine translation and achieve an absolute improvement of 0.85 BLEU points on average over the baseline on NIST Chinese-English test sets.

6 0.36232778 291 acl-2013-Question Answering Using Enhanced Lexical Semantic Models

7 0.33842957 60 acl-2013-Automatic Coupling of Answer Extraction and Information Retrieval

8 0.33406159 241 acl-2013-Minimum Bayes Risk based Answer Re-ranking for Question Answering

9 0.31794503 218 acl-2013-Latent Semantic Tensor Indexing for Community-based Question Answering

10 0.31794441 169 acl-2013-Generating Synthetic Comparable Questions for News Articles

11 0.31737185 239 acl-2013-Meet EDGAR, a tutoring agent at MONSERRATE

12 0.3063907 321 acl-2013-Sign Language Lexical Recognition With Propositional Dynamic Logic

13 0.30071107 272 acl-2013-Paraphrase-Driven Learning for Open Question Answering

14 0.2975263 206 acl-2013-Joint Event Extraction via Structured Prediction with Global Features

15 0.2966499 158 acl-2013-Feature-Based Selection of Dependency Paths in Ad Hoc Information Retrieval

16 0.2940177 329 acl-2013-Statistical Machine Translation Improves Question Retrieval in Community Question Answering via Matrix Factorization

17 0.29379171 292 acl-2013-Question Classification Transfer

18 0.29008681 254 acl-2013-Multimodal DBN for Predicting High-Quality Answers in cQA portals

19 0.28988743 297 acl-2013-Recognizing Partial Textual Entailment

20 0.28955019 34 acl-2013-Accurate Word Segmentation using Transliteration and Language Model Projection


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.036), (6, 0.023), (11, 0.096), (15, 0.018), (24, 0.02), (25, 0.209), (26, 0.04), (28, 0.013), (35, 0.081), (42, 0.037), (48, 0.056), (70, 0.051), (88, 0.034), (90, 0.021), (92, 0.087), (95, 0.057)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.77829373 387 acl-2013-Why-Question Answering using Intra- and Inter-Sentential Causal Relations

Author: Jong-Hoon Oh ; Kentaro Torisawa ; Chikara Hashimoto ; Motoki Sano ; Stijn De Saeger ; Kiyonori Ohtake

Abstract: In this paper, we explore the utility of intra- and inter-sentential causal relations between terms or clauses as evidence for answering why-questions. To the best of our knowledge, this is the first work that uses both intra- and inter-sentential causal relations for why-QA. We also propose a method for assessing the appropriateness of causal relations as answers to a given question using the semantic orientation of excitation proposed by Hashimoto et al. (2012). By applying these ideas to Japanese why-QA, we improved precision by 4.4% against all the questions in our test set over the current state-of-theart system for Japanese why-QA. In addi- tion, unlike the state-of-the-art system, our system could achieve very high precision (83.2%) for 25% of all the questions in the test set by restricting its output to the confident answers only.

2 0.69409591 144 acl-2013-Explicit and Implicit Syntactic Features for Text Classification

Author: Matt Post ; Shane Bergsma

Abstract: Syntactic features are useful for many text classification tasks. Among these, tree kernels (Collins and Duffy, 2001) have been perhaps the most robust and effective syntactic tool, appealing for their empirical success, but also because they do not require an answer to the difficult question of which tree features to use for a given task. We compare tree kernels to different explicit sets of tree features on five diverse tasks, and find that explicit features often perform as well as tree kernels on accuracy and always in orders of magnitude less time, and with smaller models. Since explicit features are easy to generate and use (with publicly available tools), we suggest they should always be included as baseline comparisons in tree kernel method evaluations.

3 0.68602049 95 acl-2013-Crawling microblogging services to gather language-classified URLs. Workflow and case study

Author: Adrien Barbaresi

Abstract: We present a way to extract links from messages published on microblogging platforms and we classify them according to the language and possible relevance of their target in order to build a text corpus. Three platforms are taken into consideration: FriendFeed, identi.ca and Reddit, as they account for a relative diversity of user profiles and more importantly user languages. In order to explore them, we introduce a traversal algorithm based on user pages. As we target lesser-known languages, we try to focus on non-English posts by filtering out English text. Using mature open-source software from the NLP research field, a spell checker (aspell) and a language identification system (langid.py), our case study and our benchmarks give an insight into the linguistic structure of the considered services.

4 0.64863175 213 acl-2013-Language Acquisition and Probabilistic Models: keeping it simple

Author: Aline Villavicencio ; Marco Idiart ; Robert Berwick ; Igor Malioutov

Abstract: Hierarchical Bayesian Models (HBMs) have been used with some success to capture empirically observed patterns of under- and overgeneralization in child language acquisition. However, as is well known, HBMs are "ideal" learning systems, assuming access to unlimited computational resources that may not be available to child language learners. Consequently, it remains crucial to carefully assess the use of HBMs along with alternative, possibly simpler, candidate models. This paper presents such an evaluation for a language acquisition domain where explicit HBMs have been proposed: the acquisition of English dative constructions. In particular, we present a detailed, empirically grounded model-selection comparison of HBMs vs. a simpler alternative based on clustering along with maximum likelihood estimation that we call linear competition learning (LCL). Our results demonstrate that LCL can match HBM model performance without incurring the high computational costs associated with HBMs.

5 0.64193517 42 acl-2013-Aid is Out There: Looking for Help from Tweets during a Large Scale Disaster

Author: Istvan Varga ; Motoki Sano ; Kentaro Torisawa ; Chikara Hashimoto ; Kiyonori Ohtake ; Takao Kawai ; Jong-Hoon Oh ; Stijn De Saeger

Abstract: The 2011 Great East Japan Earthquake caused a wide range of problems, and as countermeasures, many aid activities were carried out. Many of these problems and aid activities were reported via Twitter. However, most problem reports and corresponding aid messages were not successfully exchanged between victims and local governments or humanitarian organizations, overwhelmed by the vast amount of information. As a result, victims could not receive necessary aid and humanitarian organizations wasted resources on redundant efforts. In this paper, we propose a method for discovering matches between problem reports and aid messages. Our system contributes to problem-solving in a large scale disaster situation by facilitating communication between victims and humanitarian organizations.

6 0.63746631 77 acl-2013-Can Markov Models Over Minimal Translation Units Help Phrase-Based SMT?

7 0.58767486 25 acl-2013-A Tightly-coupled Unsupervised Clustering and Bilingual Alignment Model for Transliteration

8 0.5836845 17 acl-2013-A Random Walk Approach to Selectional Preferences Based on Preference Ranking and Propagation

9 0.58191186 169 acl-2013-Generating Synthetic Comparable Questions for News Articles

10 0.57927924 358 acl-2013-Transition-based Dependency Parsing with Selectional Branching

11 0.57747889 215 acl-2013-Large-scale Semantic Parsing via Schema Matching and Lexicon Extension

12 0.57666939 155 acl-2013-Fast and Accurate Shift-Reduce Constituent Parsing

13 0.57653314 245 acl-2013-Modeling Human Inference Process for Textual Entailment Recognition

14 0.5764901 242 acl-2013-Mining Equivalent Relations from Linked Data

15 0.57647145 275 acl-2013-Parsing with Compositional Vector Grammars

16 0.57414162 61 acl-2013-Automatic Interpretation of the English Possessive

17 0.5739693 27 acl-2013-A Two Level Model for Context Sensitive Inference Rules

18 0.57375932 102 acl-2013-DErivBase: Inducing and Evaluating a Derivational Morphology Resource for German

19 0.57368493 318 acl-2013-Sentiment Relevance

20 0.57368147 202 acl-2013-Is a 204 cm Man Tall or Small ? Acquisition of Numerical Common Sense from the Web