acl acl2011 acl2011-9 knowledge-graph by maker-knowledge-mining

9 acl-2011-A Cross-Lingual ILP Solution to Zero Anaphora Resolution


Source: pdf

Author: Ryu Iida ; Massimo Poesio

Abstract: We present an ILP-based model of zero anaphora detection and resolution that builds on the joint determination of anaphoricity and coreference model proposed by Denis and Baldridge (2007), but revises it and extends it into a three-way ILP problem also incorporating subject detection. We show that this new model outperforms several baselines and competing models, as well as a direct translation of the Denis / Baldridge model, for both Italian and Japanese zero anaphora. We incorporate our model in complete anaphoric resolvers for both Italian and Japanese, showing that our approach leads to improved performance also when not used in isolation, provided that separate classifiers are used for zeros and for ex- plicitly realized anaphors.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: We present an ILP-based model of zero anaphora detection and resolution that builds on the joint determination of anaphoricity and coreference model proposed by Denis and Baldridge (2007), but revises it and extends it into a three-way ILP problem also incorporating subject detection. [sent-5, score-1.573]

2 We show that this new model outperforms several baselines and competing models, as well as a direct translation of the Denis / Baldridge model, for both Italian and Japanese zero anaphora. [sent-6, score-0.279]

3 We incorporate our model in complete anaphoric resolvers for both Italian and Japanese, showing that our approach leads to improved performance also when not used in isolation, provided that separate classifiers are used for zeros and for ex- plicitly realized anaphors. [sent-7, score-0.326]

4 1 Introduction: In so-called ‘pro-drop’ languages such as Japanese and many Romance languages including Italian, phonetic realization is not required for anaphoric references in contexts in which English would use non-contrastive pronouns: e.g. [sent-8, score-0.271]

5 The felicitousness of zero anaphoric reference depends on the referred entity being sufficiently salient; hence this type of data, particularly in Japanese and Italian, played a key role in early work in coreference resolution, e.g. [sent-23, score-0.748]

6 Zero anaphora resolution has remained a very active area of study for researchers working on Japanese, because of the prevalence of zeros in such languages¹ (Seki et al. [sent-28, score-0.687]

7 But now the availability of corpora annotated to study anaphora, including zero anaphora, in languages such as Italian (e.g. [sent-35, score-0.251]

8 , 2010), is leading to a renewed interest in zero anaphora resolution, particularly in light of the mediocre results obtained on zero anaphors by most systems participating in SEMEVAL. [sent-39, score-1.045]

9 It is therefore natural to view zero anaphora resolution as a joint inference problem.¹ [sent-41, score-0.893]

10 ¹As shown in Table 1, 64.3% of anaphors in the NAIST Text Corpus of Anaphora are zeros. [sent-42, score-0.146]

11 We demonstrate that treating zero anaphora resolution as a three-way inference problem is successful for both Italian and Japanese. [sent-47, score-0.893]

12 We integrate the zero anaphora resolver with a coreference resolver and demonstrate that the approach leads to improved results for both Italian and Japanese. [sent-48, score-0.982]

13 In Section 4 we show the experimental results with zero anaphora only. [sent-52, score-0.648]

14 In Section 5 we discuss experiments testing whether adding our zero anaphora detector and resolver to a full coreference resolver results in an overall increase in performance. [sent-53, score-0.982]

15 2 Using ILP for joint anaphoricity and coreference determination: Integer Linear Programming (ILP) is a method for constraint-based inference aimed at finding the values for a set of variables that maximize a (linear) objective function while satisfying a number of constraints. [sent-55, score-0.510]

16 Roth and Yih (2004) advocated ILP as a general solution for a number of NLP tasks that require combining multiple classifiers and for which the traditional pipeline architecture is not appropriate, such as entity disambiguation and relation extraction. [sent-56, score-0.047]

17 Denis and Baldridge (2007) defined the following objective function for the joint anaphoricity and coreference determination problem. [sent-57, score-0.510]

18 xij ∈ {0, 1} ∀⟨i, j⟩ ∈ P and yj ∈ {0, 1} ∀j ∈ M. Here M stands for the set of mentions in the document, and P for the set of possible coreference links over these mentions. [sent-75, score-0.492]

19 yj is an indicator variable that is set to 1 if mention j is anaphoric, and 0 otherwise. [sent-81, score-0.066]

20 cijC = −log(P(COREF|i, j)) are (logs of) probabilities produced by an antecedent identification classifier, whereas cjA = −log(P(ANAPH|j)) are the probabilities produced by an anaphoricity determination classifier, both with −log. [sent-84, score-0.187]
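For reference, here is a LaTeX sketch of the joint objective these costs plug into, reconstructed from the description above and from Denis and Baldridge (2007). The complement costs c̄ (paying −log(1 − P) when a variable is off) are an assumption based on that paper, not stated in the extracted text:

```latex
\min \;
  \sum_{\langle i,j\rangle \in P}
    \Big( c^{C}_{ij}\, x_{ij} + \bar{c}^{C}_{ij}\,(1 - x_{ij}) \Big)
+ \sum_{j \in M}
    \Big( c^{A}_{j}\, y_{j} + \bar{c}^{A}_{j}\,(1 - y_{j}) \Big)
```

with c^C_ij = −log P(COREF|i, j), c̄^C_ij = −log(1 − P(COREF|i, j)), and the anaphoricity costs c^A_j, c̄^A_j defined analogously from P(ANAPH|j).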

21 A solution to antecedent identification and anaphoricity determination is guided by the following three constraints. [sent-86, score-0.454]

22 Resolve anaphors: if a mention is anaphoric (yj = 1), it must be coreferent with at least one antecedent. [sent-97, score-0.291]

23 Do not resolve non-anaphors: if a mention is non-anaphoric (yj = 0), it should have no antecedents. [sent-102, score-0.119]
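A LaTeX sketch of the three constraints as Denis and Baldridge (2007) formulate them. The extraction preserves the prose only for the last two, so the first constraint ("resolve only anaphors") and the exact algebraic forms are assumptions based on that paper, with Mj denoting the set of candidate antecedents of mention j:

```latex
% Resolve only anaphors: a link to j is allowed only if j is anaphoric
x_{ij} \le y_j \quad \forall \langle i,j\rangle \in P
% Resolve anaphors: an anaphoric mention needs at least one antecedent
y_j \le \sum_{i \in M_j} x_{ij} \quad \forall j \in M
% Do not resolve non-anaphors (5): links to j force y_j toward 1
\frac{1}{|M_j|} \sum_{i \in M_j} x_{ij} \le y_j \quad \forall j \in M
```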

24 Best First: In the context of zero anaphora resolution, the ‘Do not resolve non-anaphors’ constraint (5) is too weak, as it allows the redundant choice of more than one candidate antecedent. [sent-112, score-0.734]
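The best-first repair presumably replaces the weak constraint with one permitting at most one antecedent per anaphor; together with "resolve anaphors" above, this forces exactly one antecedent for each anaphoric mention. A sketch of the assumed form:

```latex
\sum_{i \in M_j} x_{ij} \le 1 \quad \forall j \in M
```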

25 A subject detection model: The greatest difficulty in zero anaphora resolution, in comparison to, say, pronoun resolution, is zero anaphora detection. [sent-121, score-1.781]

26 Relying on the parser alone is not enough: most dependency parsers are not very accurate at identifying, on syntactic grounds only, the cases in which the verb does not have a subject. [sent-122, score-0.12]

27 Again, it seems reasonable to suppose this is because zero anaphora detection requires a combination of syntactic information and information about the current context. [sent-123, score-0.722]

28 Within the ILP framework, this hypothesis can be implemented by turning the zero anaphora resolution optimization problem into one with three indicator variables, with the objective function in (8). [sent-124, score-0.916]

29 The third variable, zj, encodes the information provided by the parser: it is 1 with cost cjS = −log(P(SUBJ|j)) if the parser thinks that verb j has an explicit subject with probability P(SUBJ|j); otherwise it is 0. [sent-125, score-0.096]

30 xij ∈ {0, 1} ∀⟨i, j⟩ ∈ P, yj ∈ {0, 1} ∀j ∈ M, zj ∈ {0, 1} ∀j ∈ M. The crucial fact about the relation between zj and yj is that a verb has either a syntactically realized NP or a zero pronoun as a subject, but not both. [sent-145, score-0.864]
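A LaTeX sketch of the presumed three-way objective (8) referenced above, extending the two-term objective with the subject-detection term; the complement costs are again an assumption carried over from the two-way formulation:

```latex
\min \;
  \sum_{\langle i,j\rangle \in P}
    \Big( c^{C}_{ij}\, x_{ij} + \bar{c}^{C}_{ij}\,(1 - x_{ij}) \Big)
+ \sum_{j \in M}
    \Big( c^{A}_{j}\, y_{j} + \bar{c}^{A}_{j}\,(1 - y_{j}) \Big)
+ \sum_{j \in M}
    \Big( c^{S}_{j}\, z_{j} + \bar{c}^{S}_{j}\,(1 - z_{j}) \Big)
```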

31 Resolve only non-subjects: if a predicate j syntactically depends on a subject (zj = 1), then predicate j should have no antecedents for its subject zero pronoun. [sent-147, score-0.622]

32 yj + zj ≤ 1 ∀j ∈ M (9). 4 Experiment 1: zero anaphora resolution: In a first round of experiments, we evaluated the performance of the model proposed in Section 3 on zero anaphora only (i.e. [sent-148, score-1.796]

33 , not attempting to resolve other types of anaphoric expressions). [sent-150, score-0.269]
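Putting the pieces together, here is a minimal executable sketch of the three-way ILP in Python using PuLP. The function name and the input probability dictionaries are hypothetical classifier outputs, and the complement costs and exact constraint set follow the reconstruction above rather than a verified transcription of the paper's equations:

```python
import math
import pulp

def solve_zero_anaphora_ilp(p_coref, p_anaph, p_subj):
    """p_coref: {(i, j): P(COREF|i, j)} over candidate links P;
    p_anaph: {j: P(ANAPH|j)} and p_subj: {j: P(SUBJ|j)} over sites M."""
    nlog = lambda p: -math.log(min(max(p, 1e-10), 1 - 1e-10))  # clipped -log
    prob = pulp.LpProblem("zero_anaphora", pulp.LpMinimize)
    x = {(i, j): pulp.LpVariable(f"x_{i}_{j}", cat="Binary")
         for (i, j) in p_coref}
    y = {j: pulp.LpVariable(f"y_{j}", cat="Binary") for j in p_anaph}
    z = {j: pulp.LpVariable(f"z_{j}", cat="Binary") for j in p_subj}

    # Objective: pay -log p when a variable is on, -log(1 - p) when off.
    prob += (
        pulp.lpSum(nlog(p) * x[ij] + nlog(1 - p) * (1 - x[ij])
                   for ij, p in p_coref.items())
        + pulp.lpSum(nlog(p) * y[j] + nlog(1 - p) * (1 - y[j])
                     for j, p in p_anaph.items())
        + pulp.lpSum(nlog(p) * z[j] + nlog(1 - p) * (1 - z[j])
                     for j, p in p_subj.items())
    )

    for j in y:
        links = [ij for ij in x if ij[1] == j]  # M_j: candidate antecedents
        for ij in links:
            prob += x[ij] <= y[j]                           # resolve only anaphors
        if links:
            prob += y[j] <= pulp.lpSum(x[ij] for ij in links)  # resolve anaphors
            prob += pulp.lpSum(x[ij] for ij in links) <= 1     # best first
        prob += y[j] + z[j] <= 1                            # subject constraint (9)

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    links_on = {ij for ij, v in x.items() if v.value() == 1}
    return links_on, {j: int(y[j].value()) for j in y}
```

For instance, with p_coref = {("m1", "v2"): 0.8}, p_anaph = {"v2": 0.7}, and p_subj = {"v2": 0.2} (all made-up numbers), the solver links m1 to v2 and sets y = 1, z = 0, since the constraint (9) rules out a zero pronoun and an explicit subject coexisting.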

34 The table shows that NP anaphora occurs more frequently than zero-anaphora in Italian, whereas in Japanese the frequency of anaphoric zero-anaphors² is almost double the frequency of the remaining anaphoric expressions. [sent-153, score-0.849]

35 , 2010), where both zero-anaphora and NP coreference are annotated. ²In Japanese, like in Italian, zero anaphors are often used non-anaphorically, to refer to situationally introduced entities, as in I went to John’s office, but they told me that he had left. [sent-156, score-0.397]

36 Table 1: Italian and Japanese Data Sets. [sent-158, score-0.246]

37 In Italian, zero pronouns may only occur as omitted subjects of verbs. [sent-163, score-0.316]

38 Therefore, in the task of zero-anaphora resolution all verbs appearing in a text are considered candidates for zero pronouns, and all gold mentions or system mentions preceding a candidate zero pronoun are considered as candidate antecedents. [sent-164, score-1.059]
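A minimal sketch of this candidate generation step; the tuple representation and function name are hypothetical:

```python
def candidate_pairs(verbs, mentions):
    """verbs: [(verb_id, offset)] candidate zero-pronoun sites;
    mentions: [(mention_id, offset)] gold or system mentions."""
    for v_id, v_off in verbs:
        for m_id, m_off in mentions:
            if m_off < v_off:          # antecedent must precede the verb
                yield (m_id, v_id)     # a candidate link <i, j> in P
```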

39 (In contrast, in the experiments on coreference resolution discussed in the following section, all mentions are considered as both candidate anaphors and candidate antecedents.) [sent-165, score-0.801]

40 To compare the results with gold mentions and with system-detected mentions, we carried out an evaluation using the mentions automatically detected by the Italian version of the BART system (I-BART) (Poesio et al. [sent-166, score-0.212]

41 Japanese: For Japanese coreference we used the NAIST Text Corpus (Iida et al. [sent-168, score-0.246]

42 , version 1.4β, which contains the annotated data about NP coreference and zero-anaphoric relations. [sent-170, score-0.251]

43 In addition, we used a Japanese named entity tagger, CaboCha⁵, to automatically tag named entity labels. [sent-172, score-0.05]

44 For evaluation, articles published from January 1st to January 11th and the editorials from January to August were used for training, and articles dated January 14th to 17th and editorials dated October to December were used for testing, as done by Taira et al. [sent-185, score-0.108]

45 Furthermore, in the experiments we only considered subject zero pronouns, for a fair comparison with Italian zero anaphora. [sent-188, score-0.392]

46 To directly reflect this difference, we created two antecedent identification models: one for intra-sentential zero-anaphora, induced using the training instances in which a zero pronoun and its candidate antecedent appear in the same sentence, and the other for inter-sentential zero-anaphora. ⁶http://chasen-legacy. [sent-197, score-0.739]

47 (2001), antecedent identification and anaphoricity determination are simultaneously executed by a single classifier. [sent-204, score-0.454]

48 DS-CASCADE: the model first filters out non-anaphoric candidate anaphors using an anaphoricity determination model, then selects an antecedent from the set of candidate antecedents of each anaphoric candidate anaphor using an antecedent identification model. [sent-205, score-1.327]

49 Features: The feature sets for antecedent identification and anaphoricity determination are briefly summarized in Table 2 and Table 3, respectively. [sent-207, score-0.454]

50 Creating subject detection models: To create a subject detection model for Italian, we used the TUT corpus⁹ (Bosco et al. [sent-211, score-0.34]

51 We induced a maximum entropy classifier using as items all arcs of dependency relations, each of which is used as a positive instance if its label is subject, and as a negative instance otherwise. [sent-213, score-0.089]
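A minimal sketch of such a classifier, using logistic regression (a maximum entropy model) over per-arc feature dictionaries; the feature names are illustrative assumptions, not the paper's actual feature set:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_subject_detector(arcs, labels):
    """arcs: one feature dict per dependency arc, e.g.
    {'head_lemma': 'dare', 'head_pos': 'V', 'dep_pos': 'N', 'dep_num': 'sg'};
    labels: 1 if the arc label is subject, 0 otherwise."""
    model = make_pipeline(DictVectorizer(sparse=True),
                          LogisticRegression(max_iter=1000))
    model.fit(arcs, labels)
    return model  # P(SUBJ|j) can be read off model.predict_proba(...)[:, 1]
```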

52 To train the Japanese subject detection model we used 1,753 articles contained in both the NAIST Text Corpus and the Kyoto University Text Corpus. [sent-214, score-0.17]

53 To create the training instances, every pair of a predicate and its dependent is extracted. ⁸http://www. [sent-216, score-0.067]

54 Table 3: Features for anaphoricity determination. Each extracted pair is judged as positive if its relation is subject, and as negative otherwise. [sent-224, score-0.297]

55 As features for Italian, we used the lemmas and PoS tags of a predicate and its dependents, as well as their morphological information (i.e. [sent-225, score-0.067]

56 For Japanese, the head lemmas of predicate and dependent chunks as well as the functional words involved with these two chunks were used as features. [sent-229, score-0.113]

57 Results with zero anaphora only: In zero anaphora resolution, we need to find all predicates that have anaphoric unrealized subjects (i.e. [sent-233, score-0.271]

58 zero pronouns which have an antecedent in a text), and then identify an antecedent for each such argument. [sent-235, score-0.592]

59 The performance of each model at zero-anaphora detection and resolution is shown in Table 4. (From the caption of Table 2: features marked with ‘*’ are only used in Italian, while features marked with ‘**’ are only used in Japanese.) [sent-237, score-0.352]

60 Table 2: Features used for antecedent identification. [sent-238, score-0.19]

61 Table 4: Results on zero pronouns, using recall / precision / F over link detection as the metric (model-theoretic metrics do not apply for this task, as only subsets of coreference chains are considered). [sent-247, score-0.616]
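A minimal sketch of this link-based scoring, assuming gold and predicted antecedent links are represented as sets of (antecedent, anaphor) pairs; the function name is hypothetical:

```python
def link_prf(gold_links, pred_links):
    """Recall / precision / F over zero-pronoun antecedent links."""
    gold, pred = set(gold_links), set(pred_links)
    tp = len(gold & pred)                        # correctly detected links
    r = tp / len(gold) if gold else 0.0
    p = tp / len(pred) if pred else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return r, p, f
```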

62 5 Experiment 2: coreference resolution for all anaphors: In a second series of experiments we evaluated the performance of our models together with a full coreference system resolving all anaphors, not just zeros. [sent-250, score-0.908]

63 Separating vs. combining classifiers: Different types of nominal expressions display very different anaphoric behavior: e.g. [sent-252, score-0.248]

64 , pronoun resolution involves very different types of information from nominal expression resolution, depending more on syntactic information and on the local context and less on commonsense knowledge. [sent-254, score-0.315]

65 But the most common approach to coreference resolution (Soon et al. [sent-255, score-0.491]

66 ) is to use a single classifier to identify antecedents of all anaphoric expressions, relying on the ability of the machine learning algorithm to learn these differences. [sent-257, score-0.299]

67 These models, however, often fail to capture the differences in anaphoric behavior between different types of expressions, one of the reasons being that the number of training instances is often too small to learn such differences. [sent-258, score-0.226]

68 Using different models would appear to be key in the case of zero-anaphora resolution, which differs even more from the rest of anaphora resolution, e.g. [sent-259, score-0.495]

69 Likewise, anaphoricity determination models were induced separately for these two anaphora types. [sent-263, score-0.698]

70 Results with all anaphors: In Table 5 and Table 6 we show the (MUC scorer) results obtained by adding the zero anaphora resolution models proposed in this paper to both a combined and a separated classifier. [sent-265, score-0.904]

71 For the separated classifier, we use the ILP+BF model for explicitly realized NPs, and different ILP models for zeros. [sent-266, score-0.069]

72 The results show that the separated classifier works systematically better than a combined classifier. [sent-267, score-0.064]

73 In particular, the effect of introducing the separated model with ILP+BF+SUBJ is more significant with system-detected mentions: there it obtained performance more than 13 points better than I-BART. [sent-280, score-0.114]

74 6 Related work: We are not aware of any previous machine learning model for zero anaphora in Italian, but there has been quite a lot of work on Japanese zero-anaphora (Iida et al. [sent-281, score-0.746]

75 (2009), zero-anaphora resolution is considered as a sub-task of predicate argument structure analysis, taking the NAIST text corpus as a target data set. [sent-289, score-0.368]

76 (2010) applied decision lists and transformation-based learning respectively in order to manually analyze which clues are important for each argument assignment. [sent-292, score-0.056]

77 (2009) also tackled the same problem setting by applying a pairwise classifier for each argument. [sent-294, score-0.06]

78 In their approach, a ‘null’ argument is explicitly added into the set of candidate arguments to learn the situation where an argument of a predicate is ‘exophoric’. [sent-295, score-0.278]

79 They adopted the BACT learning algorithm (Kudo and Matsumoto, 2004) to effectively learn subtrees useful for both antecedent identification and zero pronoun detection. [sent-300, score-0.511]

80 (2009) obtained interesting experimental results about the relationship between zero-anaphora resolution and the scale of automatically acquired case frames. [sent-304, score-0.372]

81 They also proposed a probabilistic model of Japanese zero-anaphora in which an argument assignment score is estimated based on the automatically acquired case frames. [sent-306, score-0.085]

82 They concluded that case frames acquired from larger corpora lead to better F-scores on zero-anaphora resolution. [sent-307, score-0.127]

83 Although we used gold mentions in our evaluations, mention detection is also essential. [sent-311, score-0.195]

84 As a next step, we also need to take into account ways of incorporating a mention detection model into the ILP formulation. [sent-312, score-0.117]

85 7 Conclusion: In this paper, we developed a new ILP-based model of zero anaphora detection and resolution that extends the coreference resolution model proposed by Denis and Baldridge (2007) by introducing modified constraints and a subject detection model. [sent-313, score-1.65]

86 We evaluated this model both individually and as part of the overall coreference task for both Italian and Japanese zero anaphora, obtaining clear improvements in performance. [sent-314, score-0.497]

87 One avenue for future research is motivated by the observation that, whereas introducing the subject detection model and the best-first constraint results in higher precision while maintaining recall compared to the baselines, that precision is still low. [sent-315, score-0.192]

88 One of the major sources of error is that zero pronouns are frequently used in Italian and Japanese in contexts in which English would use the so-called generic they: “I walked into the hotel and (they) said . [sent-316, score-0.296]

89 In such cases, the zero pronoun detection model is often incorrect. [sent-319, score-0.395]

90 We are considering adding a generic they detection component. [sent-320, score-0.074]

91 We also intend to experiment with introducing more sophisticated antecedent identification models in the ILP framework. [sent-321, score-0.212]

92 (2003) showed that the relative comparison of two candidate antecedents leads to better accuracy than the pairwise model. [sent-324, score-0.12]

93 Finally, we would like to test our model with English constructions which closely resemble zero anaphora. [sent-327, score-0.251]

94 Joint determination of anaphoricity and coreference resolution using integer programming. [sent-366, score-0.755]

95 Incorporating contextual cues in trainable models for coreference resolution. [sent-396, score-0.246]

96 Annotating a Japanese text corpus with predicateargument and coreference relations. [sent-411, score-0.246]

97 Japanese zero pronoun resolution based on ranking rules and machine learning. [sent-424, score-0.566]

98 A probabilistic method for analyzing Japanese anaphora integrating zero pronoun detection and resolution. [sent-508, score-0.792]

99 A machine learning approach to coreference resolution of noun phrases. [sent-519, score-0.491]

100 A Japanese predicate argument structure analysis using decision lists. [sent-526, score-0.123]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('anaphora', 0.397), ('italian', 0.328), ('zero', 0.251), ('coreference', 0.246), ('resolution', 0.245), ('anaphoric', 0.226), ('japanese', 0.218), ('iida', 0.187), ('taira', 0.18), ('ilp', 0.178), ('yj', 0.168), ('anaphoricity', 0.159), ('antecedent', 0.148), ('anaphors', 0.146), ('denis', 0.132), ('baldridge', 0.119), ('determination', 0.105), ('zeroanaphora', 0.098), ('subject', 0.096), ('naist', 0.093), ('imamura', 0.087), ('zj', 0.087), ('mentions', 0.078), ('detection', 0.074), ('centering', 0.071), ('pronoun', 0.07), ('predicate', 0.067), ('sasano', 0.065), ('poesio', 0.065), ('semeval', 0.063), ('rodriguez', 0.058), ('argument', 0.056), ('bf', 0.05), ('markable', 0.05), ('subj', 0.05), ('cja', 0.049), ('textpro', 0.049), ('unrealized', 0.049), ('zeros', 0.045), ('antecedents', 0.045), ('pronouns', 0.045), ('resolver', 0.044), ('recasens', 0.043), ('candidate', 0.043), ('resolve', 0.043), ('mention', 0.043), ('january', 0.042), ('identification', 0.042), ('pianta', 0.04), ('attardi', 0.037), ('induced', 0.037), ('separated', 0.036), ('dell', 0.036), ('muc', 0.034), ('realized', 0.033), ('cjs', 0.033), ('delogu', 0.033), ('exophoric', 0.033), ('featuredescription', 0.033), ('nonanaphoric', 0.033), ('ryu', 0.033), ('simi', 0.033), ('pairwise', 0.032), ('inui', 0.032), ('acquired', 0.029), ('bosco', 0.029), ('dated', 0.029), ('kobdani', 0.029), ('orletta', 0.029), ('uryupina', 0.029), ('trento', 0.028), ('classifier', 0.028), ('baselines', 0.028), ('detected', 0.028), ('cj', 0.027), ('walker', 0.027), ('di', 0.027), ('versley', 0.027), ('soon', 0.025), ('resolving', 0.025), ('editorials', 0.025), ('fujita', 0.025), ('entity', 0.025), ('proposal', 0.024), ('seki', 0.024), ('visit', 0.024), ('dependency', 0.024), ('indicator', 0.023), ('predicates', 0.023), ('chunks', 0.023), ('mj', 0.023), ('introducing', 0.022), ('classifiers', 0.022), ('coreferent', 0.022), ('grosz', 0.022), ('discourse', 0.021), ('isozaki', 0.021), ('yih', 0.021), ('kyoto', 0.02), ('subjects', 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000004 9 acl-2011-A Cross-Lingual ILP Solution to Zero Anaphora Resolution

Author: Ryu Iida ; Massimo Poesio

Abstract: We present an ILP-based model of zero anaphora detection and resolution that builds on the joint determination of anaphoricity and coreference model proposed by Denis and Baldridge (2007), but revises it and extends it into a three-way ILP problem also incorporating subject detection. We show that this new model outperforms several baselines and competing models, as well as a direct translation of the Denis / Baldridge model, for both Italian and Japanese zero anaphora. We incorporate our model in complete anaphoric resolvers for both Italian and Japanese, showing that our approach leads to improved performance also when not used in isolation, provided that separate classifiers are used for zeros and for ex- plicitly realized anaphors.

2 0.29982129 23 acl-2011-A Pronoun Anaphora Resolution System based on Factorial Hidden Markov Models

Author: Dingcheng Li ; Tim Miller ; William Schuler

Abstract: and Wellner, This paper presents a supervised pronoun anaphora resolution system based on factorial hidden Markov models (FHMMs). The basic idea is that the hidden states of FHMMs are an explicit short-term memory with an antecedent buffer containing recently described referents. Thus an observed pronoun can find its antecedent from the hidden buffer, or in terms of a generative model, the entries in the hidden buffer generate the corresponding pronouns. A system implementing this model is evaluated on the ACE corpus with promising performance.

3 0.18010348 63 acl-2011-Bootstrapping coreference resolution using word associations

Author: Hamidreza Kobdani ; Hinrich Schuetze ; Michael Schiehlen ; Hans Kamp

Abstract: In this paper, we present an unsupervised framework that bootstraps a complete coreference resolution (CoRe) system from word associations mined from a large unlabeled corpus. We show that word associations are useful for CoRe – e.g., the strong association between Obama and President is an indicator of likely coreference. Association information has so far not been used in CoRe because it is sparse and difficult to learn from small labeled corpora. Since unlabeled text is readily available, our unsupervised approach addresses the sparseness problem. In a self-training framework, we train a decision tree on a corpus that is automatically labeled using word associations. We show that this unsupervised system has better CoRe performance than other learning approaches that do not use manually labeled data. .

4 0.17861792 196 acl-2011-Large-Scale Cross-Document Coreference Using Distributed Inference and Hierarchical Models

Author: Sameer Singh ; Amarnag Subramanya ; Fernando Pereira ; Andrew McCallum

Abstract: Cross-document coreference, the task of grouping all the mentions of each entity in a document collection, arises in information extraction and automated knowledge base construction. For large collections, it is clearly impractical to consider all possible groupings of mentions into distinct entities. To solve the problem we propose two ideas: (a) a distributed inference technique that uses parallelism to enable large scale processing, and (b) a hierarchical model of coreference that represents uncertainty over multiple granularities of entities to facilitate more effective approximate inference. To evaluate these ideas, we constructed a labeled corpus of 1.5 million disambiguated mentions in Web pages by selecting link anchors referring to Wikipedia entities. We show that the combination of the hierarchical model with distributed inference quickly obtains high accuracy (with error reduction of 38%) on this large dataset, demonstrating the scalability of our approach.

5 0.14645457 85 acl-2011-Coreference Resolution with World Knowledge

Author: Altaf Rahman ; Vincent Ng

Abstract: While world knowledge has been shown to improve learning-based coreference resolvers, the improvements were typically obtained by incorporating world knowledge into a fairly weak baseline resolver. Hence, it is not clear whether these benefits can carry over to a stronger baseline. Moreover, since there has been no attempt to apply different sources of world knowledge in combination to coreference resolution, it is not clear whether they offer complementary benefits to a resolver. We systematically compare commonly-used and under-investigated sources of world knowledge for coreference resolution by applying them to two learning-based coreference models and evaluating them on documents annotated with two different annotation schemes.

6 0.13743098 86 acl-2011-Coreference for Learning to Extract Relations: Yes Virginia, Coreference Matters

7 0.13485441 129 acl-2011-Extending the Entity Grid with Entity-Specific Features

8 0.094784558 144 acl-2011-Global Learning of Typed Entailment Rules

9 0.08616703 238 acl-2011-P11-2093 k2opt.pdf

10 0.080098405 197 acl-2011-Latent Class Transliteration based on Source Language Origin

11 0.072717726 240 acl-2011-ParaSense or How to Use Parallel Corpora for Word Sense Disambiguation

12 0.065850377 324 acl-2011-Unsupervised Semantic Role Induction via Split-Merge Clustering

13 0.065024234 126 acl-2011-Exploiting Syntactico-Semantic Structures for Relation Extraction

14 0.061680309 161 acl-2011-Identifying Word Translations from Comparable Corpora Using Latent Topic Models

15 0.060409535 323 acl-2011-Unsupervised Part-of-Speech Tagging with Bilingual Graph-Based Projections

16 0.056935873 6 acl-2011-A Comprehensive Dictionary of Multiword Expressions

17 0.055983365 334 acl-2011-Which Noun Phrases Denote Which Concepts?

18 0.055155549 191 acl-2011-Knowledge Base Population: Successful Approaches and Challenges

19 0.054405425 110 acl-2011-Effective Use of Function Words for Rule Generalization in Forest-Based Translation

20 0.052162785 187 acl-2011-Jointly Learning to Extract and Compress


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.14), (1, 0.033), (2, -0.123), (3, -0.013), (4, 0.062), (5, 0.043), (6, 0.059), (7, -0.038), (8, -0.214), (9, 0.025), (10, 0.046), (11, -0.036), (12, -0.081), (13, -0.045), (14, 0.022), (15, 0.041), (16, -0.03), (17, 0.062), (18, 0.054), (19, 0.056), (20, -0.053), (21, 0.018), (22, 0.023), (23, 0.115), (24, -0.105), (25, 0.034), (26, -0.103), (27, -0.109), (28, -0.191), (29, -0.16), (30, 0.133), (31, -0.112), (32, -0.004), (33, -0.049), (34, -0.185), (35, -0.022), (36, -0.013), (37, 0.021), (38, 0.043), (39, -0.043), (40, 0.097), (41, -0.014), (42, -0.023), (43, 0.041), (44, 0.038), (45, -0.01), (46, 0.002), (47, -0.132), (48, -0.014), (49, -0.167)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95223254 9 acl-2011-A Cross-Lingual ILP Solution to Zero Anaphora Resolution

Author: Ryu Iida ; Massimo Poesio

Abstract: We present an ILP-based model of zero anaphora detection and resolution that builds on the joint determination of anaphoricity and coreference model proposed by Denis and Baldridge (2007), but revises it and extends it into a three-way ILP problem also incorporating subject detection. We show that this new model outperforms several baselines and competing models, as well as a direct translation of the Denis / Baldridge model, for both Italian and Japanese zero anaphora. We incorporate our model in complete anaphoric resolvers for both Italian and Japanese, showing that our approach leads to improved performance also when not used in isolation, provided that separate classifiers are used for zeros and for ex- plicitly realized anaphors.

2 0.81245029 85 acl-2011-Coreference Resolution with World Knowledge

Author: Altaf Rahman ; Vincent Ng

Abstract: While world knowledge has been shown to improve learning-based coreference resolvers, the improvements were typically obtained by incorporating world knowledge into a fairly weak baseline resolver. Hence, it is not clear whether these benefits can carry over to a stronger baseline. Moreover, since there has been no attempt to apply different sources of world knowledge in combination to coreference resolution, it is not clear whether they offer complementary benefits to a resolver. We systematically compare commonly-used and under-investigated sources of world knowledge for coreference resolution by applying them to two learning-based coreference models and evaluating them on documents annotated with two different annotation schemes.

3 0.79502702 23 acl-2011-A Pronoun Anaphora Resolution System based on Factorial Hidden Markov Models

Author: Dingcheng Li ; Tim Miller ; William Schuler

Abstract: and Wellner, This paper presents a supervised pronoun anaphora resolution system based on factorial hidden Markov models (FHMMs). The basic idea is that the hidden states of FHMMs are an explicit short-term memory with an antecedent buffer containing recently described referents. Thus an observed pronoun can find its antecedent from the hidden buffer, or in terms of a generative model, the entries in the hidden buffer generate the corresponding pronouns. A system implementing this model is evaluated on the ACE corpus with promising performance.

4 0.79326367 63 acl-2011-Bootstrapping coreference resolution using word associations

Author: Hamidreza Kobdani ; Hinrich Schuetze ; Michael Schiehlen ; Hans Kamp

Abstract: In this paper, we present an unsupervised framework that bootstraps a complete coreference resolution (CoRe) system from word associations mined from a large unlabeled corpus. We show that word associations are useful for CoRe – e.g., the strong association between Obama and President is an indicator of likely coreference. Association information has so far not been used in CoRe because it is sparse and difficult to learn from small labeled corpora. Since unlabeled text is readily available, our unsupervised approach addresses the sparseness problem. In a self-training framework, we train a decision tree on a corpus that is automatically labeled using word associations. We show that this unsupervised system has better CoRe performance than other learning approaches that do not use manually labeled data. .

5 0.60553247 196 acl-2011-Large-Scale Cross-Document Coreference Using Distributed Inference and Hierarchical Models

Author: Sameer Singh ; Amarnag Subramanya ; Fernando Pereira ; Andrew McCallum

Abstract: Cross-document coreference, the task of grouping all the mentions of each entity in a document collection, arises in information extraction and automated knowledge base construction. For large collections, it is clearly impractical to consider all possible groupings of mentions into distinct entities. To solve the problem we propose two ideas: (a) a distributed inference technique that uses parallelism to enable large scale processing, and (b) a hierarchical model of coreference that represents uncertainty over multiple granularities of entities to facilitate more effective approximate inference. To evaluate these ideas, we constructed a labeled corpus of 1.5 million disambiguated mentions in Web pages by selecting link anchors referring to Wikipedia entities. We show that the combination of the hierarchical model with distributed inference quickly obtains high accuracy (with error reduction of 38%) on this large dataset, demonstrating the scalability of our approach.

6 0.42746454 129 acl-2011-Extending the Entity Grid with Entity-Specific Features

7 0.36471829 86 acl-2011-Coreference for Learning to Extract Relations: Yes Virginia, Coreference Matters

8 0.32765085 238 acl-2011-P11-2093 k2opt.pdf

9 0.3161743 1 acl-2011-(11-06-spirl)

10 0.30704343 8 acl-2011-A Corpus of Scope-disambiguated English Text

11 0.28602448 126 acl-2011-Exploiting Syntactico-Semantic Structures for Relation Extraction

12 0.27510148 334 acl-2011-Which Noun Phrases Denote Which Concepts?

13 0.27107435 297 acl-2011-That's What She Said: Double Entendre Identification

14 0.26430345 284 acl-2011-Simple Unsupervised Grammar Induction from Raw Text with Cascaded Finite State Models

15 0.25780824 95 acl-2011-Detection of Agreement and Disagreement in Broadcast Conversations

16 0.25430432 229 acl-2011-NULEX: An Open-License Broad Coverage Lexicon

17 0.25305638 101 acl-2011-Disentangling Chat with Local Coherence Models

18 0.25193939 215 acl-2011-MACAON An NLP Tool Suite for Processing Word Lattices

19 0.24138799 191 acl-2011-Knowledge Base Population: Successful Approaches and Challenges

20 0.23038822 194 acl-2011-Language Use: What can it tell us?


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(5, 0.019), (13, 0.021), (16, 0.293), (17, 0.041), (26, 0.019), (31, 0.013), (37, 0.114), (39, 0.036), (41, 0.057), (53, 0.015), (55, 0.071), (59, 0.05), (72, 0.037), (91, 0.027), (96, 0.087)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.78853118 291 acl-2011-SystemT: A Declarative Information Extraction System

Author: Yunyao Li ; Frederick Reiss ; Laura Chiticariu

Abstract: Frederick R. Reiss IBM Research - Almaden 650 Harry Road San Jose, CA 95120 frre i s @us . ibm . com s Laura Chiticariu IBM Research - Almaden 650 Harry Road San Jose, CA 95120 chit i us .ibm . com @ magnitude larger than classical IE corpora. An Emerging text-intensive enterprise applications such as social analytics and semantic search pose new challenges of scalability and usability to Information Extraction (IE) systems. This paper presents SystemT, a declarative IE system that addresses these challenges and has been deployed in a wide range of enterprise applications. SystemT facilitates the development of high quality complex annotators by providing a highly expressive language and an advanced development environment. It also includes a cost-based optimizer and a high-performance, flexible runtime with minimum memory footprint. We present SystemT as a useful resource that is freely available, and as an opportunity to promote research in building scalable and usable IE systems.

same-paper 2 0.73445845 9 acl-2011-A Cross-Lingual ILP Solution to Zero Anaphora Resolution

Author: Ryu Iida ; Massimo Poesio

Abstract: We present an ILP-based model of zero anaphora detection and resolution that builds on the joint determination of anaphoricity and coreference model proposed by Denis and Baldridge (2007), but revises it and extends it into a three-way ILP problem also incorporating subject detection. We show that this new model outperforms several baselines and competing models, as well as a direct translation of the Denis / Baldridge model, for both Italian and Japanese zero anaphora. We incorporate our model in complete anaphoric resolvers for both Italian and Japanese, showing that our approach leads to improved performance also when not used in isolation, provided that separate classifiers are used for zeros and for ex- plicitly realized anaphors.

3 0.6305871 320 acl-2011-Unsupervised Discovery of Domain-Specific Knowledge from Text

Author: Dirk Hovy ; Chunliang Zhang ; Eduard Hovy ; Anselmo Penas

Abstract: Learning by Reading (LbR) aims at enabling machines to acquire knowledge from and reason about textual input. This requires knowledge about the domain structure (such as entities, classes, and actions) in order to do inference. We present a method to infer this implicit knowledge from unlabeled text. Unlike previous approaches, we use automatically extracted classes with a probability distribution over entities to allow for context-sensitive labeling. From a corpus of 1.4m sentences, we learn about 250k simple propositions about American football in the form of predicateargument structures like “quarterbacks throw passes to receivers”. Using several statistical measures, we show that our model is able to generalize and explain the data statistically significantly better than various baseline approaches. Human subjects judged up to 96.6% of the resulting propositions to be sensible. The classes and probabilistic model can be used in textual enrichment to improve the performance of LbR end-to-end systems.

4 0.62047654 283 acl-2011-Simple English Wikipedia: A New Text Simplification Task

Author: William Coster ; David Kauchak

Abstract: In this paper we examine the task of sentence simplification which aims to reduce the reading complexity of a sentence by incorporating more accessible vocabulary and sentence structure. We introduce a new data set that pairs English Wikipedia with Simple English Wikipedia and is orders of magnitude larger than any previously examined for sentence simplification. The data contains the full range of simplification operations including rewording, reordering, insertion and deletion. We provide an analysis of this corpus as well as preliminary results using a phrase-based trans- lation approach for simplification.

5 0.56856877 254 acl-2011-Putting it Simply: a Context-Aware Approach to Lexical Simplification

Author: Or Biran ; Samuel Brody ; Noemie Elhadad

Abstract: We present a method for lexical simplification. Simplification rules are learned from a comparable corpus, and the rules are applied in a context-aware fashion to input sentences. Our method is unsupervised. Furthermore, it does not require any alignment or correspondence among the complex and simple corpora. We evaluate the simplification according to three criteria: preservation of grammaticality, preservation of meaning, and degree of simplification. Results show that our method outperforms an established simplification baseline for both meaning preservation and simplification, while maintaining a high level of grammaticality.

6 0.53580582 85 acl-2011-Coreference Resolution with World Knowledge

7 0.51386255 237 acl-2011-Ordering Prenominal Modifiers with a Reranking Approach

8 0.49993312 126 acl-2011-Exploiting Syntactico-Semantic Structures for Relation Extraction

9 0.49847144 289 acl-2011-Subjectivity and Sentiment Analysis of Modern Standard Arabic

10 0.49756891 92 acl-2011-Data point selection for cross-language adaptation of dependency parsers

11 0.49739248 186 acl-2011-Joint Training of Dependency Parsing Filters through Latent Support Vector Machines

12 0.49697912 23 acl-2011-A Pronoun Anaphora Resolution System based on Factorial Hidden Markov Models

13 0.49474162 164 acl-2011-Improving Arabic Dependency Parsing with Form-based and Functional Morphological Features

14 0.49352551 119 acl-2011-Evaluating the Impact of Coder Errors on Active Learning

15 0.49293613 331 acl-2011-Using Large Monolingual and Bilingual Corpora to Improve Coordination Disambiguation

16 0.49175858 103 acl-2011-Domain Adaptation by Constraining Inter-Domain Variability of Latent Feature Representation

17 0.49142125 292 acl-2011-Target-dependent Twitter Sentiment Classification

18 0.4911586 324 acl-2011-Unsupervised Semantic Role Induction via Split-Merge Clustering

19 0.49064457 311 acl-2011-Translationese and Its Dialects

20 0.49027991 128 acl-2011-Exploring Entity Relations for Named Entity Disambiguation