emnlp emnlp2012 emnlp2012-112 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Altaf Rahman ; Vincent Ng
Abstract: We examine the task of resolving complex cases of definite pronouns, specifically those for which traditional linguistic constraints on coreference (e.g., Binding Constraints, gender and number agreement) as well as commonly-used resolution heuristics (e.g., string-matching facilities, syntactic salience) are not useful. Being able to solve this task has broader implications in artificial intelligence: a restricted version of it, sometimes referred to as the Winograd Schema Challenge, has been suggested as a conceptually and practically appealing alternative to the Turing Test. We employ a knowledge-rich approach to this task, which yields a pronoun resolver that outperforms state-of-the-art resolvers by nearly 18 points in accuracy on our dataset.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract We examine the task of resolving complex cases of definite pronouns, specifically those for which traditional linguistic constraints on coreference (e.g., Binding Constraints, gender and number agreement) as well as commonly-used resolution heuristics (e.g., string-matching facilities, syntactic salience) are not useful. [sent-3, score-0.323]
2 We employ a knowledge-rich approach to this task, which yields a pronoun resolver that outperforms state-of-the-art resolvers by nearly 18 points in accuracy on our dataset. [sent-9, score-0.702]
3 1 Introduction Despite the significant amount of work on pronoun resolution in the natural language processing community in the past forty years, the problem is still far from being solved. [sent-10, score-0.486]
4 Humans can resolve the pronoun easily, but state-of-the-art coreference resolvers cannot. [sent-15, score-0.744]
5 The reason is that humans have the kind of world knowledge needed to resolve the pronouns that machines do not. [sent-16, score-0.45]
6 Our goal in this paper is to examine the resolution of complex cases of definite pronouns that appear in sentences exemplified by the shout example. [sent-21, score-0.52]
7 (the connective appears between the two clauses, just like because in the shout example), where the first clause contains two or more candidate antecedents (e.g., [sent-24, score-0.71]
8 Ed and Tim), and the second clause contains the target pronoun (e.g., [sent-26, score-0.452]
9 he); and (2) the target pronoun agrees in gender, number, and semantic class with each candidate antecedent, but does not have any overlap in content words with any of them. [sent-28, score-0.517]
10 For convenience, we will refer to the target pronoun that appears in this kind of sentence as a difficult pronoun. [sent-29, score-0.458]
11 Note that many traditional linguistic constraints on coreference are no longer useful for resolving difficult pronouns. [sent-30, score-0.302]
12 The target pronoun in each sentence is italicized, and its antecedent is boldfaced. [sent-45, score-0.746]
13 String-matching facilities will not be useful either, since the pronoun and its candidate antecedents do not have any words in common. [sent-46, score-0.731]
14 Twin sentences were used extensively by researchers in the 1970s to illustrate the difficulty of pronoun resolution (Hirst, 1981). [sent-48, score-0.486]
15 State-of-the-art resolvers (e.g., Lee et al., 2011) cannot accurately resolve the difficult pronouns in these structurally simple sentences, as they do not have the mechanism to capture the fine distinctions between twin sentences. [sent-61, score-0.447]
16 In other words, when given these sentences, the best that the existing resolvers can do to resolve the pronouns is guessing. [sent-62, score-0.427]
17 In fact, the Stanford coreference resolver (Lee et al., 2011) [sent-66, score-0.401]
18 In fact, being able to automatically resolve difficult pronouns has broader implications in artificial intelligence. [sent-69, score-0.333]
19 Strictly speaking, we are addressing a relaxed version of the Challenge: while Levesque focuses solely on definite pronouns whose resolution requires background knowledge not expressed in the words of a sentence, we do not impose such a condition on a sentence. [sent-72, score-0.424]
20 Levesque believes that “with a very high probability”, anything that can resolve correctly a series of difficult pronouns “is thinking in the full-bodied sense we usually reserve for people”. [sent-75, score-0.333]
21 Each student was also asked to annotate the candidate antecedents, the target pronoun, and the correct antecedent for each sentence she composed. [sent-106, score-0.529]
22 Each sentence pair was cross-checked by one other student to ensure that it (1) conforms to the desired constraints and (2) does not contain pronouns with ambiguous antecedents (in other words, a human should not be confused as to which candidate antecedent is the correct one). [sent-108, score-0.935]
23 While not requested by us, the students annotated exactly two candidate antecedents for each sentence. [sent-120, score-0.39]
24 For ease of exposition, we will assume below that there are two candidate antecedents per sentence. [sent-121, score-0.39]
25 3 Machine Learning Framework Since our goal is to determine which of the two candidate antecedents is the correct antecedent for the target pronoun in each sentence, our system assumes as input the sentence, the target pronoun, and the two candidate antecedents. [sent-122, score-1.276]
26 Given a pronoun and two candidate antecedents, we aim to train a ranking model that ranks the two candidates such that the correct antecedent is assigned a higher rank. [sent-128, score-0.782]
27 More formally, given training sentence Sk containing target pronoun Ak, correct antecedent Ck and incorrect antecedent Ik, we create two feature vectors, xCAk and xIAk, where xCAk is generated from Ak and Ck, and xIAk is generated from Ak and Ik. [sent-129, score-1.104]
28 For each test instance, the target pronoun is resolved to the higher-ranked candidate antecedent. [sent-134, score-0.567]
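As a minimal sketch of this ranking framework: the extract does not name the learner, so the snippet below assumes a linear model trained on difference vectors, a standard reduction of pairwise ranking to binary classification; the feature extraction itself is taken as given, and all names are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

def train_ranker(training_pairs):
    # training_pairs: list of (x_correct, x_incorrect) numpy feature
    # vectors, one pair per training sentence (x_correct built from the
    # pronoun and its correct antecedent, x_incorrect from the other).
    X, y = [], []
    for x_c, x_i in training_pairs:
        X.append(x_c - x_i); y.append(1)   # correct should outrank incorrect
        X.append(x_i - x_c); y.append(0)
    model = LogisticRegression()
    model.fit(np.array(X), np.array(y))
    return model

def resolve(model, x1, x2):
    # Resolve the target pronoun to the higher-ranked candidate antecedent.
    scores = model.decision_function(np.array([x1, x2]))
    return 0 if scores[0] >= scores[1] else 1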
29 4 Linguistic Features We derive linguistic features for resolving difficult pronouns from eight components, as described below. [sent-135, score-0.358]
30 Below is a chain learned by Chambers and Jurafsky (2008): borrow-s, invest-s, spend-s, pay-s, raise-s, lend-s. As we can see, a narrative chain is composed of a sequence of events (verbs) together with the roles of the protagonist. [sent-143, score-0.342]
31 We employ narrative chains to heuristically predict the antecedent for the target pronoun, and encode the prediction as a feature. [sent-146, score-0.673]
32 Given a sentence, we first determine the event the target pronoun participates in and its role in the event. [sent-148, score-0.498]
33 Second, we determine the event(s) that the candidate antecedents participate in. [sent-150, score-0.39]
34 In (2), both candidate antecedents participate in the punish event. [sent-151, score-0.451]
35 Third, we pair each event participated in by each candidate antecedent with each event participated in by the pronoun. [sent-152, score-0.583]
36 Note that try and escape are associated with the role of the pronoun that we extracted in the first step. [sent-154, score-0.424]
37 In other words, the protagonist in this chain is the subject of an escape event and the object of a punish event. [sent-157, score-0.347]
38 Fifth, from the extracted chain, we obtain the role played by the pronoun (i.e., [sent-158, score-0.375]
39 the protagonist) in the event in which the candidate antecedents participate. [sent-160, score-0.461]
40 In our example, the pronoun plays an object role in the punish event. [sent-161, score-0.471]
41 Finally, we extract the candidate antecedent that plays the extracted role, which in our example is the second antecedent, Tim. [sent-162, score-0.441]
42 Assume in the rest of the paper that i1 and i2 are the feature vectors corresponding to the first candidate antecedent and the second candidate antecedent. (Throughout the paper, the subject/object of an event refers to its deep rather than surface subject/object.) [sent-164, score-0.636]
43 For an alternative way of using narrative chains for coreference resolution, see Irwin et al. [sent-170, score-0.419]
44 For our running example, since Tim is predicted to be the antecedent of he, the value of NC in i2 is 1, and its value in i1 is 0. [sent-173, score-0.317]
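A minimal sketch of this multi-step narrative-chain heuristic, assuming event extraction has already produced (verb, role) slots; the chain representation and all names here are hypothetical stand-ins for Chambers and Jurafsky's learned chains.

# Each narrative chain is a set of (verb, role) slots that its
# protagonist fills, e.g. {("escape", "subj"), ("try", "subj"),
# ("punish", "obj")} for the running example.
def nc_predict(chains, pron_slots, cand_slots):
    # pron_slots: (verb, role) slots the pronoun fills, e.g.
    #   {("escape", "subj"), ("try", "subj")}.
    # cand_slots: per-candidate slots, e.g.
    #   {0: {("punish", "subj")}, 1: {("punish", "obj")}}  (Ed, Tim).
    # Returns the index of the predicted antecedent, or None.
    for chain in chains:
        if not (pron_slots & chain):     # chain must cover a pronoun event
            continue
        for idx, slots in cand_slots.items():
            # the candidate whose actual role matches the protagonist's
            # role in that event is predicted to be the antecedent
            if slots & chain:
                return idx
    return None

# The NC feature is then 1 in the feature vector of the predicted
# candidate and 0 in the other (e.g. NC(i2)=1, NC(i1)=0 when Tim wins).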
45 Similarly, humans resolve it to the knife in (3b) by exploiting the world knowledge that the word sharp can be used to describe a knife but not flesh. [sent-180, score-0.442]
46 On the other hand, only four queries are generated for (3a): (Q1) “lions are”; (Q2) … (The nth candidate antecedent in a sentence is the nth annotated NP encountered when processing the sentence in a left-to-right manner.) [sent-187, score-0.594]
47 In sentence (2), Ed is the first candidate antecedent and Tim is the second. [sent-188, score-0.477]
48 The role of the threshold x should be obvious: it ensures that a heuristic decision is made only if the difference between the counts for the two queries is sufficiently large, because otherwise there is no reason for us to prefer one candidate antecedent to the other. [sent-195, score-0.603]
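A sketch of this thresholded decision; the counts would come from web hit counts for the generated queries, and both the threshold value and the relative-margin form of the test below are assumptions, since the extract only says the difference must be sufficiently large.

def count_heuristic(count1, count2, x=0.2):
    # count1/count2: hit counts for the queries built from candidate 1
    # and candidate 2. Abstain (return None) unless one count exceeds
    # the other by a relative margin of at least x.
    total = count1 + count2
    if total == 0:
        return None
    margin = (count1 - count2) / total
    if margin > x:
        return 0
    if margin < -x:
        return 1
    return None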
49 Note that other researchers have also used lexico-syntactic patterns to generate search queries for bridging anaphora resolution (e.g., [sent-197, score-0.331]
50 (2003)), and learning selectional preferences for pronoun resolution (e.g., [sent-203, score-0.486]
51 The reason is that both candidate antecedents in the sentence are proper names belonging to the same type (which in this case is PERSON). [sent-218, score-0.426]
52 The pronoun he refers to John in (5a) and Jim in (5b). [sent-236, score-0.341]
53 In (5a), the use of even though yields a clause of concession, which flips the polarity of more popular (from positive to negative), whereas in (5b), the use of because yields a clause of cause, which does not change the polarity of more popular (i.e., it remains positive). [sent-239, score-0.444]
54 We assign a rank value (i.e., “better” or “worse”) to the pronoun and the two candidate antecedents. [sent-248, score-0.465]
55 For instance, to determine the rank value of the pronoun A, we first determine the polarity value pA of its anchor word wA, which is either the verb v for which A serves as the deep subject, or the adjective modifying A if v does not exist, using Wilson et al. (2005). [sent-249, score-0.636]
56 The polarity values of the two candidate antecedents can be determined in a similar fashion. [sent-260, score-0.553]
57 Specifically, if the rank value of the pronoun or the rank value of one or both of the candidate antecedents cannot be determined, the values of all three binary features will be set to zero for both i1 and i2. [sent-264, score-0.839]
58 To compute HPOL1, which is a binary feature, we (1) employ a heuristic resolution procedure, which resolves the pronoun to the candidate antecedent with the same rank value, and then (2) encode the outcome of this heuristic procedure as the value of HPOL1. [sent-266, score-1.185]
59 For example, since the first candidate antecedent, John, is predicted to be the antecedent in (5a), HPOL1(i1)=1 and HPOL1(i2)=0. [sent-267, score-0.441]
60 The value of HPOL2 is the concatenation of the polarity values determined for the pronoun and the candidate antecedent. [sent-268, score-0.628]
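A sketch of HPOL1 and HPOL2 as described above; rank values are assumed to be the strings "better"/"worse" (or None when undeterminable, in which case the features are zeroed per the earlier paragraph), and HPOL3 is omitted because its definition does not appear in this extract.

def hpol_features(rank_pron, pol_pron, ranks, pols):
    # rank_pron / ranks: rank values ("better"/"worse"/None) of the
    # pronoun and of the two candidates; pol_pron / pols: their polarity
    # values. Returns one feature dict per candidate (for i1 and i2).
    if rank_pron is None or None in ranks:
        return [{"HPOL1": 0, "HPOL2": 0} for _ in ranks]
    feats = []
    for rank_c, pol_c in zip(ranks, pols):
        feats.append({
            # HPOL1: heuristically resolve to the candidate whose rank
            # value matches the pronoun's
            "HPOL1": 1 if rank_c == rank_pron else 0,
            # HPOL2: concatenation of the pronoun's and the candidate's
            # polarity values
            "HPOL2": "{}|{}".format(pol_pron, pol_c),
        })
    return feats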
61 5 Machine-Learned Polarity In the previous subsection, we compute the polarity of a word by updating its prior polarity heuristically with contextual information. [sent-273, score-0.379]
62 Given a sentence and the polarity values of the phrases annotated by OpinionFinder, we determine the rank values of the pronoun and the two candidate antecedents by mapping them to the polarized phrases using the dependency relations provided by the Stanford dependency parser. [sent-277, score-0.984]
63 We create three binary features, LPOL1, LPOL2, and LPOL3, whose values are computed in the same way as HPOL1, HPOL2, and HPOL3, respectively, except that the computation here is based on the machine-learned polarity values rather than the heuristically determined polarity values. [sent-278, score-0.42]
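The mapping step for this machine-learned variant can be sketched as follows; OpinionFinder's phrase polarities are stood in for by a word-to-polarity dict, the dependency lookup is reduced to a governor table, and both data structures (and all names) are simplifying assumptions.

def lpol_rank_values(phrase_polarity, governor, mentions):
    # phrase_polarity: head word -> polarity, from OpinionFinder output.
    # governor: token -> governing word, from the Stanford dependency parser.
    # mentions: head tokens of the pronoun and the two candidates.
    # Returns mention -> polarity value (None when nothing maps to it).
    return {m: phrase_polarity.get(governor.get(m)) for m in mentions}

# LPOL1-LPOL3 are then computed exactly as HPOL1-HPOL3, but from these
# machine-learned polarity values.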
64 Humans resolve they to Google in (6a) by exploiting the world knowledge that there is a causal relation (signaled by the discourse connective because) between the want event and the buy event. [sent-282, score-0.425]
65 Each triple is of the form (V, Conn, X), where Conn is a discourse connective, V is a stemmed verb in the clause preceding Conn, and X is a stemmed verb or an adjective in the clause following Conn. [sent-288, score-0.459]
66 We use the frequency counts of these triples to heuristically predict the correct antecedent for a target pronoun. [sent-291, score-0.422]
67 Given a sentence where Conn is the discourse connective, X is the stemmed verb governing the target pronoun A or the adjective modifying A (if the governing verb is a to be verb), and V is the stemmed verb governing the candidate antecedents, we retrieve the frequency count of the triple (V, Conn, X). [sent-292, score-1.038]
68 If the count is at least 100, we employ a procedure for heuristically selecting the antecedent for the target anaphor. [sent-293, score-0.464]
69 Specifically, if X is a verb, then it resolves the target pronoun to the candidate antecedent that has the same grammatical role as the pronoun. [sent-294, score-0.978]
70 However, if X is an adjective and the sentence does not involve comparison, then it resolves the target pronoun to the candidate antecedent serving as the subject of V . [sent-295, score-1.059]
71 In our running example, the triple (buy, because, want) occurs 860 times in our corpus, so the pronoun they is resolved to the candidate antecedent that occurs as the subject of buy. [sent-297, score-0.924]
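A sketch of the triple lookup and the role-based resolution rule just described; triple_counts stands in for the corpus-derived frequency table, and the grammatical-role bookkeeping is simplified to strings.

def triple_predict(triple_counts, v, conn, x, x_is_verb,
                   pron_role, cand_roles, min_count=100):
    # triple_counts: {(V, Conn, X): corpus frequency}. pron_role: the
    # pronoun's grammatical role; cand_roles: role of each candidate
    # with respect to V (e.g. ["subj", "obj"]).
    if triple_counts.get((v, conn, x), 0) < min_count:
        return None                      # count too low: no prediction
    # X a verb: pick the candidate with the pronoun's grammatical role;
    # X an adjective (non-comparative case): pick the subject of V.
    target_role = pron_role if x_is_verb else "subj"
    for idx, role in enumerate(cand_roles):
        if role == target_role:
            return idx
    return None

# Running example: triple_counts[("buy", "because", "want")] = 860, so
# "they" is resolved to the candidate serving as the subject of "buy".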
72 7 Semantic Compatibility Some of the queries generated by the Google component, such as Q1 and Q2, aim to capture the semantic compatibility between a candidate antecedent, C, and the verb governing the target pronoun, V . [sent-301, score-0.41]
73 Assuming that the target pronoun and its governing verb V have grammatical relation GR, we create three features, SC1, SC2, and SC3, based on our semantic compatibility component. [sent-308, score-0.587]
74 SC1 encodes the MI value of the head noun of a candidate antecedent and V (and GR). [sent-309, score-0.441]
75 SC2 is a binary feature whose value indicates which of the candidate antecedents has a larger MI value with V (and GR). [sent-310, score-0.39]
76 In other words, SC2 and SC3 employ different measures to heuristically predict the correct antecedent for the target pronoun. [sent-312, score-0.422]
77 If the target pronoun is governed by a to be verb, the values of these three features will all be set to zero. [sent-313, score-0.393]
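A sketch of SC1 and SC2; the exact MI formula is not given in this extract, so the standard pointwise-mutual-information form below is an assumption, and SC3 (which uses a different measure) is left out for the same reason.

import math

def pmi(count_nv, count_n, count_v, total):
    # PMI between a candidate's head noun and the governing verb V
    # (with the relation GR folded into the counts); assumed formula.
    if count_nv == 0 or count_n == 0 or count_v == 0:
        return float("-inf")
    return math.log(count_nv * total / (count_n * count_v))

def sc_features(mi1, mi2):
    # SC1: each candidate's MI value with V (and GR);
    # SC2: binary flag for which candidate has the larger MI value.
    return ({"SC1": mi1, "SC2": 1 if mi1 > mi2 else 0},
            {"SC1": mi2, "SC2": 1 if mi2 > mi1 else 0})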
78 Assuming that W is an arbitrary word in a sentence S that is not part of a candidate antecedent and Conn is the connective in S, we create three types of binary-valued antecedent-independent features, namely (1) unigrams, where we create one feature for each W. (We use the same formula as described in Section 4.) [sent-325, score-0.692]
79 Let (1) HC1 and HC2 be the head words of candidate antecedents C1 and C2, respectively; (2) VC1 , VC2 , and VA be the verbs governing C1, C2, and the target pronoun A, respectively; and (3) JC1 , JC2 , and JA be the adjectives modifying C1, C2, and A, respectively. [sent-330, score-0.834]
80 We create from each candidate antecedent four features, each of which is a word pair. [sent-331, score-0.482]
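A sketch of these per-candidate word-pair features; which four pairs are formed is not recoverable from this extract, so the choice below is purely illustrative, and it respects the restriction (stated in the footnote carried by the next line) that an adjective in one clause is never paired with a noun in the other.

def word_pair_features(hc, vc, jc, va, ja):
    # hc: head word of the candidate; vc/va: verbs governing the
    # candidate / the pronoun; jc/ja: adjectives modifying them (None
    # when absent). Illustrative pair choice: noun-verb, verb-verb,
    # verb-adjective, and adjective-adjective pairs only.
    pairs = [(hc, va), (vc, va), (vc, ja), (jc, ja)]
    return [l + "|" + r for l, r in pairs if l and r]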
81 Our first baseline is a resolver that randomly guesses the antecedent for the target pronoun. (Pairing an adjective A in one clause with a noun N in another clause may mislead the learner into thinking that N is modified by A, and hence we do not create such pairs.) [sent-344, score-0.747]
82 Since there are two candidate antecedents per sentence, the Random baseline should achieve an accuracy of 50%. [sent-347, score-0.39]
83 Recall from Section 3 that our system assumes as input not only a sentence containing a target pronoun but also the two candidate antecedents. [sent-353, score-0.553]
84 Hence, if the Stanford resolver decides to resolve the target pronoun, it will resolve it to one of the two candidate antecedents. [sent-355, score-0.623]
85 Given that the Random baseline correctly resolves 50% of pronouns and the Stanford resolver correctly resolves only 40. [sent-362, score-0.645]
86 Hence, to ensure a fairer comparison, we produce “adjusted” scores for the Stanford resolver, where we “force” it to resolve all of the unresolved target pronouns by assuming that probabilistically half of them will be resolved correctly. [sent-366, score-0.401]
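The adjustment amounts to crediting every unresolved pronoun at chance; a one-line version, with purely illustrative numbers:

def adjusted_accuracy(n_correct, n_resolved, n_total):
    # Each pronoun left unresolved is assumed correct with probability 0.5.
    return (n_correct + 0.5 * (n_total - n_resolved)) / n_total

# e.g. adjusted_accuracy(30, 70, 100) == 0.45  (illustrative figures only)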
87 Note that the Baseline Ranker will be applied to resolve only those pronouns that are left unresolved by Stanford. [sent-397, score-0.299]
88 Results in row 3 of Table 3 show that the adjusted accuracy of this Combined resolver is 55. [sent-398, score-0.318]
89 These results suggest that our features are more useful for resolving difficult pronouns than those commonly used for coreference resolution. [sent-413, score-0.494]
90 Lexical Features, our system still outperforms all the baseline resolvers: as can be seen from the last row of Table 4, in the absence of the Lexical Features, our resolver achieves an adjusted accuracy of 67. [sent-434, score-0.318]
91 Google is especially good at capturing facts, such as lions are predators and zebras are not predators, helping us correctly resolve sentences such as (5a) and (5b), as well as those in sentence pair (I) in Table 1. [sent-446, score-0.371]
92 However, it may not be good at handling pronouns whose resolution requires an understanding of the connection between the facts or events described in the two clauses of a sentence. [sent-447, score-0.408]
93 For example, narrative chains fail to capture the causal relation between the events expressed by angry and shout in sentence (1b). [sent-452, score-0.486]
94 This can be attributed to the simplicity of our Heuristic Polarity component: determining the polarity of a word based on its prior polarity is too naïve. [sent-455, score-0.326]
95 6 Conclusions We investigated the resolution of complex cases of definite pronouns, a problem that was under extensive discussion by coreference researchers in the 1970s but has received revived interest owing in part to its relevance to the Turing Test. [sent-457, score-0.368]
96 In addition, we plan to integrate our resolver into a general-purpose coreference system and evaluate the resulting resolver on standard evaluation corpora such as MUC, ACE, and OntoNotes. [sent-460, score-0.634]
97 Simple coreference resolution with rich syntactic and semantic features. [sent-512, score-0.313]
98 Stanford’s multi-pass sieve coreference resolution system at the CoNLL-201 1 shared task. [sent-532, score-0.313]
99 A new, fully automatic version of Mitkov’s knowledge-poor pronoun resolution method. [sent-545, score-0.486]
100 A machine learning approach to pronoun resolution in spoken dialogue. [sent-615, score-0.486]
wordName wordTfidf (topN-words)
[('pronoun', 0.341), ('antecedent', 0.317), ('antecedents', 0.266), ('resolver', 0.233), ('pronouns', 0.192), ('coreference', 0.168), ('polarity', 0.163), ('narrative', 0.154), ('resolution', 0.145), ('connective', 0.133), ('resolvers', 0.128), ('shout', 0.128), ('candidate', 0.124), ('jim', 0.123), ('conn', 0.114), ('twin', 0.114), ('poesio', 0.111), ('resolves', 0.11), ('resolve', 0.107), ('resolving', 0.1), ('winograd', 0.1), ('chains', 0.097), ('ranker', 0.091), ('knife', 0.086), ('lions', 0.086), ('stanford', 0.085), ('queries', 0.081), ('framenet', 0.072), ('predators', 0.071), ('zebras', 0.071), ('event', 0.071), ('tim', 0.068), ('stemmed', 0.067), ('compatibility', 0.062), ('punish', 0.061), ('anaphora', 0.06), ('clause', 0.059), ('flesh', 0.057), ('levesque', 0.057), ('roles', 0.057), ('else', 0.056), ('definite', 0.055), ('rahman', 0.055), ('rank', 0.054), ('heuristically', 0.053), ('schema', 0.052), ('target', 0.052), ('triple', 0.051), ('governing', 0.051), ('resolved', 0.05), ('xiaofeng', 0.049), ('escape', 0.049), ('massimo', 0.048), ('adjusted', 0.047), ('chain', 0.047), ('heuristic', 0.047), ('bridging', 0.045), ('ponzetto', 0.045), ('opinionfinder', 0.044), ('turing', 0.044), ('world', 0.044), ('nc', 0.044), ('humans', 0.044), ('sharp', 0.043), ('unannotated', 0.043), ('cbr', 0.043), ('gr', 0.043), ('protagonist', 0.043), ('shouted', 0.043), ('unadjusted', 0.043), ('versley', 0.043), ('xcak', 0.043), ('xiak', 0.043), ('count', 0.042), ('create', 0.041), ('chambers', 0.041), ('subj', 0.041), ('subject', 0.041), ('verb', 0.04), ('discourse', 0.038), ('row', 0.038), ('adjective', 0.038), ('sentiment', 0.037), ('events', 0.037), ('google', 0.037), ('mi', 0.036), ('sentence', 0.036), ('object', 0.035), ('role', 0.034), ('difficult', 0.034), ('clauses', 0.034), ('anaphors', 0.033), ('mitkov', 0.033), ('claudio', 0.033), ('simone', 0.033), ('eight', 0.032), ('knowledge', 0.032), ('ed', 0.032), ('strube', 0.032), ('kind', 0.031)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999845 112 emnlp-2012-Resolving Complex Cases of Definite Pronouns: The Winograd Schema Challenge
Author: Altaf Rahman ; Vincent Ng
Abstract: We examine the task of resolving complex cases of definite pronouns, specifically those for which traditional linguistic constraints on coreference (e.g., Binding Constraints, gender and number agreement) as well as commonly-used resolution heuristics (e.g., string-matching facilities, syntactic salience) are not useful. Being able to solve this task has broader implications in artificial intelligence: a restricted version of it, sometimes referred to as the Winograd Schema Challenge, has been suggested as a conceptually and practically appealing alternative to the Turing Test. We employ a knowledge-rich approach to this task, which yields a pronoun resolver that outperforms state-of-the-art resolvers by nearly 18 points in accuracy on our dataset.
2 0.3681986 113 emnlp-2012-Resolving This-issue Anaphora
Author: Varada Kolhatkar ; Graeme Hirst
Abstract: We annotate and resolve a particular case of abstract anaphora, namely, this-issue anaphora. We propose a candidate ranking model for this-issue anaphora resolution that explores different issue-specific and general abstract-anaphora features. The model is not restricted to nominal or verbal antecedents; rather, it is able to identify antecedents that are arbitrary spans of text. Our results show that (a) the model outperforms the strong adjacent-sentence baseline; (b) general abstract-anaphora features, as distinguished from issue-specific features, play a crucial role in this-issue anaphora resolution, suggesting that our approach can be generalized for other NPs such as this problem and this debate; and (c) it is possible to reduce the search space in order to improve performance.
3 0.21893592 71 emnlp-2012-Joint Entity and Event Coreference Resolution across Documents
Author: Heeyoung Lee ; Marta Recasens ; Angel Chang ; Mihai Surdeanu ; Dan Jurafsky
Abstract: We introduce a novel coreference resolution system that models entities and events jointly. Our iterative method cautiously constructs clusters of entity and event mentions using linear regression to model cluster merge operations. As clusters are built, information flows between entity and event clusters through features that model semantic role dependencies. Our system handles nominal and verbal events as well as entities, and our joint formulation allows information from event coreference to help entity coreference, and vice versa. In a cross-document domain with comparable documents, joint coreference resolution performs significantly better (over 3 CoNLL F1 points) than two strong baselines that resolve entities and events separately.
4 0.1277674 73 emnlp-2012-Joint Learning for Coreference Resolution with Markov Logic
Author: Yang Song ; Jing Jiang ; Wayne Xin Zhao ; Sujian Li ; Houfeng Wang
Abstract: Pairwise coreference resolution models must merge pairwise coreference decisions to generate final outputs. Traditional merging methods adopt different strategies such as the best-first method and enforcing the transitivity constraint, but most of these methods are used independently of the pairwise learning methods as an isolated inference procedure at the end. We propose a joint learning model which combines pairwise classification and mention clustering with Markov logic. Experimental results show that our joint learning system outperforms independent learning systems. Our system gives a better performance than all the learning-based systems from the CoNLL-2011 shared task on the same dataset. Compared with the best system from CoNLL-2011, which employs a rule-based method, our system shows competitive performance.
5 0.1242087 28 emnlp-2012-Collocation Polarity Disambiguation Using Web-based Pseudo Contexts
Author: Yanyan Zhao ; Bing Qin ; Ting Liu
Abstract: This paper focuses on the task of collocation polarity disambiguation. The collocation refers to a binary tuple of a polarity word and a target (such as ⟨long, battery life⟩ or ⟨long, startup⟩), in which the sentiment orientation of the polarity word (“long”) changes along with different targets (“battery life” or “startup”). To disambiguate a collocation’s polarity, previous work always turned to investigate the polarities of its surrounding contexts, and then assigned the majority polarity to the collocation. However, these contexts are limited, so the resulting polarity is insufficient to be reliable. We therefore propose an unsupervised three-component framework to expand pseudo contexts from the web to help disambiguate a collocation’s polarity. Without using any additional labeled data, experiments show that our method is effective.
6 0.11579979 14 emnlp-2012-A Weakly Supervised Model for Sentence-Level Semantic Orientation Analysis with Multiple Experts
7 0.11261031 76 emnlp-2012-Learning-based Multi-Sieve Co-reference Resolution with Knowledge
8 0.1075571 72 emnlp-2012-Joint Inference for Event Timeline Construction
9 0.098824978 36 emnlp-2012-Domain Adaptation for Coreference Resolution: An Adaptive Ensemble Approach
10 0.088139921 80 emnlp-2012-Learning Verb Inference Rules from Linguistically-Motivated Evidence
11 0.078629822 135 emnlp-2012-Using Discourse Information for Paraphrase Extraction
12 0.07494767 137 emnlp-2012-Why Question Answering using Sentiment Analysis and Word Classes
13 0.070749573 97 emnlp-2012-Natural Language Questions for the Web of Data
14 0.063388005 136 emnlp-2012-Weakly Supervised Training of Semantic Parsers
15 0.061370254 32 emnlp-2012-Detecting Subgroups in Online Discussions by Modeling Positive and Negative Relations among Participants
16 0.057764228 55 emnlp-2012-Forest Reranking through Subtree Ranking
17 0.053039305 110 emnlp-2012-Reading The Web with Learned Syntactic-Semantic Inference Rules
18 0.051713295 91 emnlp-2012-Monte Carlo MCMC: Efficient Inference by Approximate Sampling
19 0.05091605 85 emnlp-2012-Local and Global Context for Supervised and Unsupervised Metonymy Resolution
20 0.049757876 101 emnlp-2012-Opinion Target Extraction Using Word-Based Translation Model
topicId topicWeight
[(0, 0.226), (1, 0.219), (2, -0.041), (3, -0.103), (4, 0.157), (5, -0.134), (6, -0.109), (7, -0.196), (8, 0.097), (9, 0.036), (10, -0.061), (11, -0.085), (12, 0.009), (13, 0.353), (14, -0.002), (15, -0.299), (16, -0.117), (17, 0.079), (18, -0.131), (19, 0.065), (20, 0.05), (21, 0.139), (22, -0.156), (23, -0.148), (24, 0.115), (25, -0.116), (26, 0.026), (27, 0.072), (28, -0.035), (29, 0.028), (30, 0.172), (31, 0.053), (32, -0.06), (33, 0.098), (34, 0.032), (35, 0.079), (36, -0.043), (37, -0.069), (38, -0.028), (39, 0.028), (40, 0.069), (41, 0.108), (42, -0.027), (43, -0.008), (44, 0.027), (45, -0.004), (46, 0.039), (47, 0.017), (48, 0.026), (49, 0.012)]
simIndex simValue paperId paperTitle
same-paper 1 0.95150399 112 emnlp-2012-Resolving Complex Cases of Definite Pronouns: The Winograd Schema Challenge
Author: Altaf Rahman ; Vincent Ng
Abstract: We examine the task of resolving complex cases of definite pronouns, specifically those for which traditional linguistic constraints on coreference (e.g., Binding Constraints, gender and number agreement) as well as commonly-used resolution heuristics (e.g., string-matching facilities, syntactic salience) are not useful. Being able to solve this task has broader implications in artificial intelligence: a restricted version of it, sometimes referred to as the Winograd Schema Challenge, has been suggested as a conceptually and practically appealing alternative to the Turing Test. We employ a knowledge-rich approach to this task, which yields a pronoun resolver that outperforms state-of-the-art resolvers by nearly 18 points in accuracy on our dataset.
2 0.90647858 113 emnlp-2012-Resolving This-issue Anaphora
Author: Varada Kolhatkar ; Graeme Hirst
Abstract: We annotate and resolve a particular case of abstract anaphora, namely, this-issue anaphora. We propose a candidate ranking model for this-issue anaphora resolution that explores different issue-specific and general abstract-anaphora features. The model is not restricted to nominal or verbal antecedents; rather, it is able to identify antecedents that are arbitrary spans of text. Our results show that (a) the model outperforms the strong adjacent-sentence baseline; (b) general abstract-anaphora features, as distinguished from issue-specific features, play a crucial role in this-issue anaphora resolution, suggesting that our approach can be generalized for other NPs such as this problem and this debate; and (c) it is possible to reduce the search space in order to improve performance.
3 0.42394766 71 emnlp-2012-Joint Entity and Event Coreference Resolution across Documents
Author: Heeyoung Lee ; Marta Recasens ; Angel Chang ; Mihai Surdeanu ; Dan Jurafsky
Abstract: We introduce a novel coreference resolution system that models entities and events jointly. Our iterative method cautiously constructs clusters of entity and event mentions using linear regression to model cluster merge operations. As clusters are built, information flows between entity and event clusters through features that model semantic role dependencies. Our system handles nominal and verbal events as well as entities, and our joint formulation allows information from event coreference to help entity coreference, and vice versa. In a cross-document domain with comparable documents, joint coreference resolution performs significantly better (over 3 CoNLL F1 points) than two strong baselines that resolve entities and events separately.
4 0.34178162 73 emnlp-2012-Joint Learning for Coreference Resolution with Markov Logic
Author: Yang Song ; Jing Jiang ; Wayne Xin Zhao ; Sujian Li ; Houfeng Wang
Abstract: Pairwise coreference resolution models must merge pairwise coreference decisions to generate final outputs. Traditional merging methods adopt different strategies such as the best-first method and enforcing the transitivity constraint, but most of these methods are used independently of the pairwise learning methods as an isolated inference procedure at the end. We propose a joint learning model which combines pairwise classification and mention clustering with Markov logic. Experimental results show that our joint learning system outperforms independent learning systems. Our system gives a better performance than all the learning-based systems from the CoNLL-2011 shared task on the same dataset. Compared with the best system from CoNLL-2011, which employs a rule-based method, our system shows competitive performance.
5 0.28980526 28 emnlp-2012-Collocation Polarity Disambiguation Using Web-based Pseudo Contexts
Author: Yanyan Zhao ; Bing Qin ; Ting Liu
Abstract: This paper focuses on the task of collocation polarity disambiguation. The collocation refers to a binary tuple of a polarity word and a target (such as ⟨long, battery life⟩ or ⟨long, startup⟩), in which the sentiment orientation of the polarity word (“long”) changes along with different targets (“battery life” or “startup”). To disambiguate a collocation’s polarity, previous work always turned to investigate the polarities of its surrounding contexts, and then assigned the majority polarity to the collocation. However, these contexts are limited, so the resulting polarity is insufficient to be reliable. We therefore propose an unsupervised three-component framework to expand pseudo contexts from the web to help disambiguate a collocation’s polarity. Without using any additional labeled data, experiments show that our method is effective.
6 0.28877872 14 emnlp-2012-A Weakly Supervised Model for Sentence-Level Semantic Orientation Analysis with Multiple Experts
7 0.25210711 76 emnlp-2012-Learning-based Multi-Sieve Co-reference Resolution with Knowledge
8 0.24152367 85 emnlp-2012-Local and Global Context for Supervised and Unsupervised Metonymy Resolution
9 0.22841781 36 emnlp-2012-Domain Adaptation for Coreference Resolution: An Adaptive Ensemble Approach
10 0.2264654 80 emnlp-2012-Learning Verb Inference Rules from Linguistically-Motivated Evidence
11 0.21826492 135 emnlp-2012-Using Discourse Information for Paraphrase Extraction
12 0.21025699 32 emnlp-2012-Detecting Subgroups in Online Discussions by Modeling Positive and Negative Relations among Participants
13 0.20740752 72 emnlp-2012-Joint Inference for Event Timeline Construction
14 0.20451932 137 emnlp-2012-Why Question Answering using Sentiment Analysis and Word Classes
15 0.1909911 97 emnlp-2012-Natural Language Questions for the Web of Data
16 0.18810937 110 emnlp-2012-Reading The Web with Learned Syntactic-Semantic Inference Rules
17 0.17453392 9 emnlp-2012-A Sequence Labelling Approach to Quote Attribution
18 0.1603823 136 emnlp-2012-Weakly Supervised Training of Semantic Parsers
19 0.15795852 123 emnlp-2012-Syntactic Transfer Using a Bilingual Lexicon
20 0.15614799 26 emnlp-2012-Building a Lightweight Semantic Model for Unsupervised Information Extraction on Short Listings
topicId topicWeight
[(2, 0.018), (16, 0.068), (25, 0.014), (34, 0.044), (60, 0.063), (63, 0.048), (64, 0.029), (65, 0.044), (70, 0.014), (73, 0.401), (74, 0.034), (76, 0.048), (80, 0.02), (86, 0.033), (95, 0.041)]
simIndex simValue paperId paperTitle
1 0.84195775 125 emnlp-2012-Towards Efficient Named-Entity Rule Induction for Customizability
Author: Ajay Nagesh ; Ganesh Ramakrishnan ; Laura Chiticariu ; Rajasekar Krishnamurthy ; Ankush Dharkar ; Pushpak Bhattacharyya
Abstract: Generic rule-based systems for Information Extraction (IE) have been shown to work reasonably well out-of-the-box, and achieve state-of-the-art accuracy with further domain customization. However, it is generally recognized that manually building and customizing rules is a complex and labor-intensive process. In this paper, we discuss an approach that facilitates the process of building customizable rules for Named-Entity Recognition (NER) tasks via rule induction, in the Annotation Query Language (AQL). Given a set of basic features and an annotated document collection, our goal is to generate an initial set of rules with reasonable accuracy, that are interpretable and thus can be easily refined by a human developer. We present an efficient rule induction process, modeled on a four-stage manual rule development process, and present initial promising results with our system. We also propose a simple notion of extractor complexity as a first step to quantify the interpretability of an extractor, and study the effect of induction bias and customization of basic features on the accuracy and complexity of induced rules. We demonstrate through experiments that the induced rules have good accuracy and low complexity according to our complexity measure.
same-paper 2 0.8354075 112 emnlp-2012-Resolving Complex Cases of Definite Pronouns: The Winograd Schema Challenge
Author: Altaf Rahman ; Vincent Ng
Abstract: We examine the task of resolving complex cases of definite pronouns, specifically those for which traditional linguistic constraints on coreference (e.g., Binding Constraints, gender and number agreement) as well as commonly-used resolution heuristics (e.g., string-matching facilities, syntactic salience) are not useful. Being able to solve this task has broader implications in artificial intelligence: a restricted version of it, sometimes referred to as the Winograd Schema Challenge, has been suggested as a conceptually and practically appealing alternative to the Turing Test. We employ a knowledge-rich approach to this task, which yields a pronoun resolver that outperforms state-of-the-art resolvers by nearly 18 points in accuracy on our dataset.
3 0.73167545 115 emnlp-2012-SSHLDA: A Semi-Supervised Hierarchical Topic Model
Author: Xian-Ling Mao ; Zhao-Yan Ming ; Tat-Seng Chua ; Si Li ; Hongfei Yan ; Xiaoming Li
Abstract: Supervised hierarchical topic modeling and unsupervised hierarchical topic modeling are usually used to obtain hierarchical topics, such as hLLDA and hLDA. Supervised hierarchical topic modeling makes heavy use of the information from observed hierarchical labels, but cannot explore new topics, while unsupervised hierarchical topic modeling is able to detect new topics automatically in the data space, but does not make use of any information from hierarchical labels. In this paper, we propose a semi-supervised hierarchical topic model which aims to explore new topics automatically in the data space while incorporating the information from observed hierarchical labels into the modeling process, called Semi-Supervised Hierarchical Latent Dirichlet Allocation (SSHLDA). We also prove that hLDA and hLLDA are special cases of SSHLDA. We conduct experiments on the Yahoo! Answers and ODP datasets, and assess the performance in terms of perplexity and clustering. The experimental results show that the predictive ability of SSHLDA is better than that of the baselines, and that SSHLDA also achieves significant improvement over the baselines for clustering on the F-Score measure.
4 0.39516088 71 emnlp-2012-Joint Entity and Event Coreference Resolution across Documents
Author: Heeyoung Lee ; Marta Recasens ; Angel Chang ; Mihai Surdeanu ; Dan Jurafsky
Abstract: We introduce a novel coreference resolution system that models entities and events jointly. Our iterative method cautiously constructs clusters of entity and event mentions using linear regression to model cluster merge operations. As clusters are built, information flows between entity and event clusters through features that model semantic role dependencies. Our system handles nominal and verbal events as well as entities, and our joint formulation allows information from event coreference to help entity coreference, and vice versa. In a cross-document domain with comparable documents, joint coreference resolution performs significantly better (over 3 CoNLL F1 points) than two strong baselines that resolve entities and events separately.
5 0.38567367 124 emnlp-2012-Three Dependency-and-Boundary Models for Grammar Induction
Author: Valentin I. Spitkovsky ; Hiyan Alshawi ; Daniel Jurafsky
Abstract: We present a new family of models for unsupervised parsing, Dependency and Boundary models, that use cues at constituent boundaries to inform head-outward dependency tree generation. We build on three intuitions that are explicit in phrase-structure grammars but only implicit in standard dependency formulations: (i) Distributions of words that occur at sentence boundaries, such as English determiners, resemble constituent edges. (ii) Punctuation at sentence boundaries further helps distinguish full sentences from fragments like headlines and titles, allowing us to model grammatical differences between complete and incomplete sentences. (iii) Sentence-internal punctuation boundaries help with longer-distance dependencies, since punctuation correlates with constituent edges. Our models induce state-of-the-art dependency grammars for many languages without special knowledge of optimal input sentence lengths or biased, manually-tuned initializers.
6 0.3807753 23 emnlp-2012-Besting the Quiz Master: Crowdsourcing Incremental Classification Games
7 0.36115739 20 emnlp-2012-Answering Opinion Questions on Products by Exploiting Hierarchical Organization of Consumer Reviews
8 0.35773811 113 emnlp-2012-Resolving This-issue Anaphora
9 0.35703245 93 emnlp-2012-Multi-instance Multi-label Learning for Relation Extraction
10 0.35023612 51 emnlp-2012-Extracting Opinion Expressions with semi-Markov Conditional Random Fields
11 0.35020757 89 emnlp-2012-Mixed Membership Markov Models for Unsupervised Conversation Modeling
12 0.34703177 8 emnlp-2012-A Phrase-Discovering Topic Model Using Hierarchical Pitman-Yor Processes
13 0.34629944 30 emnlp-2012-Constructing Task-Specific Taxonomies for Document Collection Browsing
14 0.34625798 27 emnlp-2012-Characterizing Stylistic Elements in Syntactic Structure
15 0.34441647 14 emnlp-2012-A Weakly Supervised Model for Sentence-Level Semantic Orientation Analysis with Multiple Experts
16 0.34351161 77 emnlp-2012-Learning Constraints for Consistent Timeline Extraction
17 0.34319365 47 emnlp-2012-Explore Person Specific Evidence in Web Person Name Disambiguation
18 0.34298146 64 emnlp-2012-Improved Parsing and POS Tagging Using Inter-Sentence Consistency Constraints
19 0.34169829 10 emnlp-2012-A Statistical Relational Learning Approach to Identifying Evidence Based Medicine Categories
20 0.3412981 110 emnlp-2012-Reading The Web with Learned Syntactic-Semantic Inference Rules