acl acl2010 acl2010-92 knowledge-graph by maker-knowledge-mining

92 acl-2010-Don't 'Have a Clue'? Unsupervised Co-Learning of Downward-Entailing Operators.


Source: pdf

Author: Cristian Danescu-Niculescu-Mizil ; Lillian Lee

Abstract: Researchers in textual entailment have begun to consider inferences involving downward-entailing operators, an interesting and important class of lexical items that change the way inferences are made. Recent work proposed a method for learning English downward-entailing operators that requires access to a high-quality collection of negative polarity items (NPIs). However, English is one of the very few languages for which such a list exists. We propose the first approach that can be applied to the many languages for which there is no pre-existing high-precision database of NPIs. As a case study, we apply our method to Romanian and show that our method yields good results. Also, we perform a cross-linguistic analysis that suggests interesting connections to some findings in linguistic typology.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Don’t ‘have a clue’? Unsupervised co-learning of downward-entailing operators. Cristian Danescu-Niculescu-Mizil and Lillian Lee, Department of Computer Science, Cornell University. cristian@cs.cornell.edu [sent-2, score-0.583]

2 Abstract Researchers in textual entailment have begun to consider inferences involving downward-entailing operators, an interesting and important class of lexical items that change the way inferences are made. [sent-6, score-0.293]

3 Recent work proposed a method for learning English downward-entailing operators that requires access to a high-quality collection of negative polarity items (NPIs). [sent-7, score-0.859]

4 However, English is one of the very few languages for which such a list exists. [sent-8, score-0.104]

5 We propose the first approach that can be applied to the many languages for which there is no pre-existing high-precision database of NPIs. [sent-9, score-0.05]

6 Also, we perform a cross-linguistic analysis that suggests interesting connections to some findings in linguistic typology. [sent-11, score-0.058]

7 —From the movie Police, Adjective. Downward-entailing operators are an interesting and varied class of lexical items that change the default way of dealing with certain types of inferences. [sent-20, score-0.69]

8 We explain what downward entailing means by first demonstrating the “default” behavior, which is upward entailing. [sent-23, score-0.035]

9 The word ‘observed’ is an example of an upward-entailing operator: the statement (i) ‘Witnesses observed opium use.’ entails the more general statement (ii) ‘Witnesses observed narcotic use.’ [sent-24, score-0.066]

10 That is, the truth value is preserved (⇒) if we replace the argument of an upward-entailing operator by a superset (a more general version); in our case, the set ‘opium use’ was replaced by the superset ‘narcotic use’. [sent-27, score-0.172]

11 Downward-entailing (DE) (also known as downward monotonic or monotone decreasing) operators violate this default inference rule: with DE operators, reasoning instead goes from “sets to subsets”. [sent-28, score-0.684]

12 An example is the word ‘bans’: ‘The law bans opium use’ ⇐ (⇏) ‘The law bans narcotic use’. [sent-29, score-0.264]
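
A toy sketch in Python (not from the paper; the sets, predicates, and example items below are invented for illustration) makes the two directions concrete by modeling an upward-entailing context as non-empty set intersection and a downward-entailing context as set inclusion:

```python
# Model 'opium use' as a subset of the more general 'narcotic use'.
opium_use = {"opium"}
narcotic_use = {"opium", "heroin", "morphine"}  # superset of opium_use

# Upward-entailing operator ('observed'): truth is preserved when the
# argument is replaced by a superset.
def observed(witnessed_events, category):
    return len(witnessed_events & category) > 0

# Downward-entailing operator ('bans'): truth is preserved when the
# argument is replaced by a subset.
def bans(banned_category, use_category):
    return use_category <= banned_category

witnessed = {"opium"}
assert observed(witnessed, opium_use)     # 'observed opium use' ...
assert observed(witnessed, narcotic_use)  # ... entails 'observed narcotic use'

assert bans(narcotic_use, narcotic_use)   # 'bans narcotic use' ...
assert bans(narcotic_use, opium_use)      # ... entails 'bans opium use'
```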

13 Although DE behavior represents an exception to the default, DE operators are as a class rather common. [sent-30, score-0.583]

14 Some are simple negations, such as ‘not’, but some other English DE operators are ‘without’, ‘reluctant to’, ‘to doubt’, and ‘to allow’. [sent-32, score-0.583]

15 Because DE operators violate the default “sets to supersets” inference, identifying them can potentially improve performance in many NLP tasks. [sent-34, score-0.649]

16 Perhaps the most obvious such tasks are those involving textual entailment, such as question answering, information extraction, summarization, and the evaluation of machine translation [4]. [sent-35, score-0.087]

17 Downward-entailing inferences impose a greater cognitive load than inferences in the opposite direction [8]. [sent-42, score-0.058]

18 We use the work of [5] (DLD09 for short) as a starting point, as they present the first and, until now, only algorithm for automatically extracting DE operators from data. [sent-47, score-0.583]

19 DLD09 critically depends on access to a high-quality, carefully curated collection of negative polarity items (NPIs): lexical items such as ‘any’, ‘ever’, or the idiom ‘have a clue’ that tend to occur only in negative environments (see §2 for more details). [sent-49, score-0.484]

20 DLD09 uses NPIs as signals of the occurrence of downward-entailing operators. [sent-50, score-0.026]

21 To circumvent this problem, we introduce a knowledge-lean co-learning approach. [sent-52, score-0.035]

22 Our algorithm is initialized with a very small seed set of NPIs (which we describe how to generate), and then iterates between (a) discovering a set of DE operators using a collection of pseudo-NPIs, a concept we introduce, and (b) using the newly acquired DE operators to detect new pseudo-NPIs. [sent-53, score-1.3]

23 Preliminary work on learning (German) NPIs using a small list of simple known DE operators did not yield strong results [14]. [sent-55, score-0.637]

24 Hoeksema [10] discusses why NPIs might be hard to learn from data. [sent-56, score-0.026]

25 Also, our preliminary work determined that one of the most famous co-learning algorithms, hubs and authorities or HITS [11], is poorly suited to our problem. [sent-59, score-0.02]

26 Contributions To begin with, we apply our algorithm to produce the first large list of DE operators for a language other than English. [sent-60, score-0.637]

27 In our case study on Romanian (§4), we achieve quite high precisions at k (for example, one iteration achieves a precision at 30 of 87%). [sent-61, score-0.031]

28 Auxiliary experiments explore the effects of using a large but noisy NPI list, should one be available for the language in question. [sent-62, score-0.02]

29 Finally (§5), we engage in some cross-linguistic analysis based on the results of applying our algorithm to English. [sent-64, score-0.027]

30 We find that there are some suggestive connections with findings in linguistic typology. [sent-65, score-0.058]

31 Appendix available A more complete account of our work and its implications can be found in a version of this paper containing appendices, available at www. [sent-66, score-0.021]

32 2 DLD09: successes and challenges In this section, we briefly summarize those aspects of the DLD09 method that are important to understanding how our new co-learning method works. [sent-70, score-0.023]

33 DE operators and NPIs Acquiring DE operators is challenging because of the complete lack of annotated data. [sent-71, score-1.187]

34 DLD09’s insight was to make use of negative polarity items (NPIs), which are words or phrases that tend to occur only in negative contexts. [sent-72, score-0.327]

35 The reason they did so is that Ladusaw’s hypothesis [7, 13] asserts that NPIs only occur within the scope of DE operators. [sent-73, score-0.057]

36 Figure 1 depicts examples involving the English NPIs ‘any’ and ‘have a clue’ (in the idiomatic sense) that illustrate this relationship. [sent-74, score-0.03]

37 Thus, NPIs can be treated as clues that a DE operator might be present (although DE operators may also occur without NPIs). [sent-76, score-0.702]

38 We explored three different edge-weighting schemes based on co-occurrence frequencies and seed-set membership, but the results were extremely poor; HITS invariably retrieved very frequent words. [sent-77, score-0.026]

39 Figure 1: Examples consistent with Ladusaw’s hypothesis that NPIs can only occur within the scope of DE operators. [sent-79, score-0.057]

40 DLD09 algorithm Potential DE operators are collected by extracting those words that appear in an NPI’s context at least once. [sent-81, score-0.583]

41 Then, the potential DE operators x are ranked by f(x) := (fraction of NPI contexts that contain x) / (relative frequency of x in the corpus), which compares x’s probability of occurrence conditioned on the appearance of an NPI with its probability of occurrence overall. [sent-82, score-0.759]
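
As a concrete (hypothetical) rendering of this extraction-and-ranking step, here is a minimal Python sketch that approximates f(x) by treating each sentence containing an NPI as one NPI context; the real DLD09 system defines contexts more carefully, and the tiny corpus and NPI list below are invented for illustration:

```python
from collections import Counter

def rank_de_candidates(sentences, npis):
    """Rank candidate DE operators x by
    f(x) = (fraction of NPI contexts containing x) / (relative frequency of x)."""
    overall = Counter()      # sentences containing each word, overall
    in_npi_ctx = Counter()   # sentences containing each word, among NPI contexts
    n_sents = 0
    n_npi_ctx = 0
    for sent in sentences:
        words = set(sent.lower().split())
        n_sents += 1
        is_npi_ctx = bool(words & npis)
        n_npi_ctx += is_npi_ctx
        for w in words:
            overall[w] += 1
            if is_npi_ctx:
                in_npi_ctx[w] += 1
    scores = {w: (c / n_npi_ctx) / (overall[w] / n_sents)
              for w, c in in_npi_ctx.items() if w not in npis}
    return sorted(scores.items(), key=lambda kv: -kv[1])

corpus = ["Nobody has ever seen it",
          "I doubt they have any proof",
          "They have proof"]
print(rank_de_candidates(corpus, npis={"ever", "any"}))
```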

42 The method just outlined requires access to a list of NPIs. [sent-83, score-0.085]

43 DLD09’s system used a subset of John Lawler’s carefully curated and “moderately complete” list of English NPIs. [sent-84, score-0.114]

44 The resultant rankings of candidate English DE operators were judged to be of high quality. [sent-85, score-0.583]

45 The challenge in porting to other languages: cluelessness Can the unsupervised approach of DLD09 be successfully applied to languages other than English? [sent-86, score-0.05]

46 One might wonder whether one can circumvent the NPI-acquisition problem by simply translating a known English NPI list into the target language. [sent-88, score-0.089]

47 However, NPI-hood need not be preserved under translation [17]. [sent-89, score-0.043]

48 Thus, for most languages, we lack the critical clues that DLD09 depends on. [sent-90, score-0.051]

49 For Romanian, we treated only negations (‘nu’ and ‘n-’) and questions as well-known environments. [sent-92, score-0.049]

50 DLD09 used an additional distilled score, but we found that the distilled score performed worse on Romanian. [sent-93, score-0.076]

51 Note that we cannot evaluate impact on textual inference because, to our knowledge, no publicly available textual-entailment system or evaluation data for Romanian exists. [sent-102, score-0.037]

52 We therefore examine the system outputs directly to determine whether the top-ranked items are actually DE operators or not. [sent-103, score-0.674]

53 Our evaluation metric is precision at k of a given system’s ranked list of candidate DE operators; it is not possible to evaluate recall since no list of Romanian DE operators exists (a problem that is precisely the motivation for this paper). [sent-104, score-0.754]
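
As a reference point, precision at k is straightforward to compute once the human labels are available; a minimal Python sketch follows (the label list is hypothetical, and counting only clear ‘DE’ judgments as correct is one possible convention, since ‘Hard’ items are reported separately in the paper’s figures):

```python
def precision_at_k(ranked_labels, k):
    """Fraction of the top-k candidates judged to be true DE operators.
    Labels are 'DE', 'not DE', or 'Hard'; only 'DE' counts as correct here."""
    top = ranked_labels[:k]
    return sum(label == "DE" for label in top) / len(top)

# Hypothetical judgments for a system's top five ranked candidates:
labels = ["DE", "DE", "Hard", "DE", "not DE"]
print(precision_at_k(labels, 5))  # -> 0.6
```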

54 To evaluate the results, two native Romanian speakers labeled the system outputs as being “DE”, “not DE” or “Hard (to decide)”. [sent-105, score-0.02]

55 The labeling protocol, which was somewhat complex to prevent bias, is described in the externally available appendices (§7). [sent-106, score-0.197]

56 The complete system output and annotations are publicly available at: http://www. [sent-108, score-0.021]

57 3.2 Generating a seed set Even though, as discussed above, the translation of an NPI need not be an NPI, a preliminary review of the literature indicates that in many languages, there is some NPI that can be translated as ‘any’ or related forms like ‘anybody’. [sent-113, score-0.119]

58 Thus, with a small amount of effort, one can form a minimal NPI seed set for the DLD09 method by using an appropriate target-language translation of ‘any’. [sent-114, score-0.099]

59 For Romanian, we used ‘vreo’ and ‘vreun’, which are the feminine and masculine translations of English ‘any’. [sent-115, score-0.02]

60 3.3 DLD09 using the Romanian seed set We first check whether DLD09 performs well with the two-item seed set described in §3.2. [sent-117, score-0.158]

61 The results are fairly poor. Figure 2: Left: Number of DE operators in the top k results returned by the co-learning method at each iteration. [sent-120, score-0.604]

62 Curves are: DE (blue/darkest/largest) and Hard (red/lighter, sometimes non-existent). [sent-127, score-0.02]

63 (See blue/dark bars in figure 3 in the externally available appendices for detailed results.) [sent-129, score-0.197]

64 This relatively unsatisfactory performance may be a consequence of the very small size of the NPI list employed, and may therefore indicate that it would be fruitful to investigate automatically extending our list of clues. [sent-130, score-0.108]

65 3.4 Main idea: a co-learning approach Our main insight is that not only can NPIs be used as clues for finding DE operators, as shown by DLD09, but conversely, DE operators (if known) can potentially be used to discover new NPI-like clues, which we refer to as pseudo-NPIs (or pNPIs for short). [sent-132, score-0.662]

66 By “NPI-like” we mean, “serve as possible indicators of the presence of DE operators, regardless of whether they are actually restricted to negative contexts, as true NPIs are”. [sent-133, score-0.052]

67 For example, in English newswire, the words ‘allegation’ or ‘rumor’ tend to occur mainly in DE contexts, like ‘ denied ’ or ‘ dismissed ’, even though they are clearly not true NPIs (the sentence ‘I heard a rumor’ is fine). [sent-134, score-0.025]

68 Given this insight, we approach the problem using an iterative co-learning paradigm that integrates the search for new DE operators with a search for new pNPIs. [sent-135, score-0.611]

69 First, we describe an algorithm that is the “reverse” of DLD09 (henceforth rDLD), in that it retrieves and ranks pNPIs assuming a given list of DE operators. [sent-136, score-0.054]

70 Then, our co-learning algorithm consists of the iteration of the following two steps: (DE learning) Apply DLD09 using a set N of pseudo-NPIs to retrieve a list of candidate DE operators ranked by f (defined in Section 2); extend the set D with the top n operators in this list. [sent-138, score-0.686]

71 (pNPI learning) Apply rDLD using the set D to retrieve a list of pNPIs ranked by fr; extend N with the top nr pNPIs in this list. [sent-140, score-0.045]

72 At each iteration, we consider the output of the [sent-143, score-0.026]

73 algorithm to be the ranked list of DE operators retrieved in the DE-learning step. [sent-144, score-0.728]

74 In our experiments, we initialized n to 10 and set nr to 1. [sent-145, score-0.056]
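
Putting the two steps together, the following Python sketch shows one plausible shape for the co-learning loop under the stated settings (n = 10, nr = 1); the ranker hooks rank_de (a DLD09-style ranking by f) and rank_pnpi (an rDLD-style ranking by fr) are assumptions supplied by the caller, not part of the paper’s published method, and the iteration count is arbitrary:

```python
def co_learn(corpus, seed_npis, rank_de, rank_pnpi, iterations=15, n=10, nr=1):
    """Sketch of the co-learning loop described above.

    rank_de(corpus, npis)     -> DE-operator candidates ranked by f  (DLD09)
    rank_pnpi(corpus, de_ops) -> pNPI candidates ranked by fr        (rDLD)
    Both rankers are assumed to be provided by the caller.
    """
    pnpis = list(seed_npis)  # N: grows by nr pseudo-NPIs per iteration
    de_ops = []              # D: grows by (up to) n DE operators per iteration
    for _ in range(iterations):
        # (DE learning) retrieve candidates ranked by f; keep the top n new ones
        for op in [x for x in rank_de(corpus, pnpis) if x not in de_ops][:n]:
            de_ops.append(op)
        # (pNPI learning) retrieve pNPIs ranked by fr; keep the top nr new ones
        for p in [x for x in rank_pnpi(corpus, de_ops) if x not in pnpis][:nr]:
            pnpis.append(p)
    # the output is the ranked DE-operator list from the DE-learning steps
    return de_ops, pnpis
```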

75 Figure 2 plots the number of correctly retrieved DE operators in the top k outputs at each iteration. [sent-147, score-0.65]

76 The point at iteration 0 corresponds to a datapoint already discussed above, namely, DLD09 applied to the two ‘any’-translation NPIs. [sent-148, score-0.044]

77 (Thus, a larger seed set does not necessarily mean better performance.) [sent-154, score-0.079]

78 5 Cross-linguistic analysis Applying our algorithm to English: connections to linguistic typology So far, we have made no assumptions about the language on which our algorithm is applied. [sent-156, score-0.064]

79 Note that in some sense, this is a perverse question: the motivation behind our algorithm is the non-existence of a high-quality list of NPIs for the language in question, and English is essentially the only case that does not fit this description. [sent-159, score-0.054]

80 On the other hand, the fact that DLD09 applied their method for extraction of DE operators to English necessitates some form of comparison, for the sake of experimental completeness. [sent-160, score-0.583]

81 We thus ran our algorithm on the English BLLIP newswire corpus with seed set {‘any’}. [sent-161, score-0.108]

82 The addition of pNPIs has very little effect: the precisions at k are good at the beginning and stay about the same across iterations (for details see figure 5 in the externally available appendices). [sent-163, score-0.054]

83 Why is English ‘any’ seemingly so “powerful”, in contrast to Romanian, where iterating beyond the initial ‘any’ translations leads to better results? [sent-165, score-0.02]

84 Interestingly, findings from linguistic typology may shed some light on this issue. [sent-166, score-0.05]

85 Haspelmath [9] compares the functions of indefinite pronouns in 40 languages. [sent-167, score-0.055]

86 In the other languages (including Romanian), no indefinite pronoun can serve as a sufficient seed. [sent-170, score-0.05]

87 Using translation Another interesting question is whether directly translating DE operators from English is an alternative to our method. [sent-172, score-0.603]

88 First, we emphasize that there exists no complete list of English DE operators (the largest available collection is the one extracted by DLD09). [sent-173, score-0.706]

89 Second, we do not know whether DE operators in one language translate into DE operators in another language. [sent-174, score-1.166]

90 Therefore, a significant fraction of the DE operators derived by our co-learning algorithm would have been missed by the translation alternative even under ideal conditions. [sent-177, score-0.603]

91 6 Conclusions We have introduced the first method for discovering downward-entailing operators that is universally applicable. [sent-178, score-0.583]

92 Previous work on automatically detecting DE operators assumed the existence of a high-quality collection of NPIs, which renders it inapplicable in most languages, where such a resource does not exist. [sent-179, score-0.606]

93 Auxiliary experiments described in the externally available appendices show that pNPIs are actually more effective seeds than a noisy “true” NPI list. [sent-182, score-0.217]

94 Finally, we noted some cross-linguistic differences in performance, and found an interesting connection between these differences and Haspelmath’s [9] characterization of cross-linguistic variation in the occurrence of indefinite pronouns. [sent-183, score-0.081]

95 Creating a natural logic inference system with combinatory categorial grammar. [sent-194, score-0.022]

96 The role of negative polarity and concord marking in natural language reasoning. [sent-205, score-0.052]

97 Negative polarity items: Corpus linguistics, semantics, and psycholinguistics: Day 2: Corpus linguistics. [sent-252, score-0.099]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('operators', 0.583), ('npis', 0.416), ('npi', 0.307), ('romanian', 0.253), ('pnpis', 0.219), ('de', 0.181), ('appendices', 0.131), ('polarity', 0.099), ('seed', 0.079), ('items', 0.071), ('bans', 0.066), ('cristian', 0.066), ('externallyavailable', 0.066), ('haspelmath', 0.066), ('narcotic', 0.066), ('opium', 0.066), ('clue', 0.062), ('inferences', 0.058), ('cristi', 0.057), ('indefinite', 0.055), ('list', 0.054), ('negative', 0.052), ('clues', 0.051), ('languages', 0.05), ('negations', 0.049), ('english', 0.045), ('iteration', 0.044), ('anca', 0.044), ('defendant', 0.044), ('ladusaw', 0.044), ('pnpi', 0.044), ('rdld', 0.044), ('rumor', 0.044), ('vreo', 0.044), ('vreun', 0.044), ('witnesses', 0.044), ('operator', 0.043), ('entailment', 0.039), ('tnha', 0.038), ('curated', 0.038), ('distilled', 0.038), ('ranked', 0.038), ('textual', 0.037), ('connections', 0.036), ('default', 0.036), ('fr', 0.035), ('downward', 0.035), ('circumvent', 0.035), ('lichte', 0.035), ('doubt', 0.033), ('initialized', 0.032), ('scope', 0.032), ('access', 0.031), ('superset', 0.031), ('precisions', 0.031), ('involving', 0.03), ('comma', 0.03), ('violate', 0.03), ('newswire', 0.029), ('typology', 0.028), ('insight', 0.028), ('iterative', 0.028), ('gorithm', 0.027), ('xs', 0.027), ('retrieved', 0.026), ('hard', 0.026), ('occurrence', 0.026), ('odf', 0.026), ('occur', 0.025), ('exists', 0.025), ('tac', 0.025), ('contexts', 0.024), ('pascal', 0.024), ('dagan', 0.024), ('nr', 0.024), ('ido', 0.023), ('hits', 0.023), ('rada', 0.023), ('challenges', 0.023), ('beginning', 0.023), ('collection', 0.023), ('preserved', 0.023), ('findings', 0.022), ('semantics', 0.022), ('carefully', 0.022), ('tt', 0.022), ('categorial', 0.022), ('car', 0.022), ('lillian', 0.021), ('mihalcea', 0.021), ('complete', 0.021), ('re', 0.021), ('top', 0.021), ('preliminary', 0.02), ('translations', 0.02), ('noisy', 0.02), ('translation', 0.02), ('outputs', 0.02), ('van', 0.02), ('iv', 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 92 acl-2010-Don't 'Have a Clue'? Unsupervised Co-Learning of Downward-Entailing Operators.

Author: Cristian Danescu-Niculescu-Mizil ; Lillian Lee

Abstract: Researchers in textual entailment have begun to consider inferences involving downward-entailing operators, an interesting and important class of lexical items that change the way inferences are made. Recent work proposed a method for learning English downward-entailing operators that requires access to a high-quality collection of negative polarity items (NPIs). However, English is one of the very few languages for which such a list exists. We propose the first approach that can be applied to the many languages for which there is no pre-existing high-precision database of NPIs. As a case study, we apply our method to Romanian and show that our method yields good results. Also, we perform a cross-linguistic analysis that suggests interesting connections to some findings in linguistic typology.

2 0.097848795 35 acl-2010-Automated Planning for Situated Natural Language Generation

Author: Konstantina Garoufi ; Alexander Koller

Abstract: We present a natural language generation approach which models, exploits, and manipulates the non-linguistic context in situated communication, using techniques from AI planning. We show how to generate instructions which deliberately guide the hearer to a location that is convenient for the generation of simple referring expressions, and how to generate referring expressions with context-dependent adjectives. We implement and evaluate our approach in the framework of the Challenge on Generating Instructions in Virtual Environments, finding that it performs well even under the constraints of realtime generation.

3 0.062694326 27 acl-2010-An Active Learning Approach to Finding Related Terms

Author: David Vickrey ; Oscar Kipersztok ; Daphne Koller

Abstract: We present a novel system that helps nonexperts find sets of similar words. The user begins by specifying one or more seed words. The system then iteratively suggests a series of candidate words, which the user can either accept or reject. Current techniques for this task typically bootstrap a classifier based on a fixed seed set. In contrast, our system involves the user throughout the labeling process, using active learning to intelligently explore the space of similar words. In particular, our system can take advantage of negative examples provided by the user. Our system combines multiple preexisting sources of similarity data (a standard thesaurus, WordNet, contextual similarity), enabling it to capture many types of similarity groups (“synonyms of crash,” “types of car,” etc.). We evaluate on a hand-labeled evaluation set; our system improves over a strong baseline by 36%.

4 0.059698611 141 acl-2010-Identifying Text Polarity Using Random Walks

Author: Ahmed Hassan ; Dragomir Radev

Abstract: Automatically identifying the polarity of words is a very important task in Natural Language Processing. It has applications in text classification, text filtering, analysis of product review, analysis of responses to surveys, and mining online discussions. We propose a method for identifying the polarity of words. We apply a Markov random walk model to a large word relatedness graph, producing a polarity estimate for any given word. A key advantage of the model is its ability to accurately and quickly assign a polarity sign and magnitude to any word. The method could be used both in a semi-supervised setting where a training set of labeled words is used, and in an unsupervised setting where a handful of seeds is used to define the two polarity classes. The method is experimentally tested using a manually labeled set of positive and negative words. It outperforms the state of the art methods in the semi-supervised setting. The results in the unsupervised setting is comparable to the best reported values. However, the proposed method is faster and does not need a large corpus.

5 0.058793463 123 acl-2010-Generating Focused Topic-Specific Sentiment Lexicons

Author: Valentin Jijkoun ; Maarten de Rijke ; Wouter Weerkamp

Abstract: We present a method for automatically generating focused and accurate topicspecific subjectivity lexicons from a general purpose polarity lexicon that allow users to pin-point subjective on-topic information in a set of relevant documents. We motivate the need for such lexicons in the field of media analysis, describe a bootstrapping method for generating a topic-specific lexicon from a general purpose polarity lexicon, and evaluate the quality of the generated lexicons both manually and using a TREC Blog track test set for opinionated blog post retrieval. Although the generated lexicons can be an order of magnitude more selective than the general purpose lexicon, they maintain, or even improve, the performance of an opin- ion retrieval system.

6 0.055005237 33 acl-2010-Assessing the Role of Discourse References in Entailment Inference

7 0.050332814 2 acl-2010-"Was It Good? It Was Provocative." Learning the Meaning of Scalar Adjectives

8 0.046531931 222 acl-2010-SystemT: An Algebraic Approach to Declarative Information Extraction

9 0.046056878 129 acl-2010-Growing Related Words from Seed via User Behaviors: A Re-Ranking Based Approach

10 0.04440375 1 acl-2010-"Ask Not What Textual Entailment Can Do for You..."

11 0.042267174 157 acl-2010-Last but Definitely Not Least: On the Role of the Last Sentence in Automatic Polarity-Classification

12 0.04096828 134 acl-2010-Hierarchical Sequential Learning for Extracting Opinions and Their Attributes

13 0.040299967 210 acl-2010-Sentiment Translation through Lexicon Induction

14 0.039844405 226 acl-2010-The Human Language Project: Building a Universal Corpus of the World's Languages

15 0.039589118 121 acl-2010-Generating Entailment Rules from FrameNet

16 0.039367523 127 acl-2010-Global Learning of Focused Entailment Graphs

17 0.038741782 89 acl-2010-Distributional Similarity vs. PU Learning for Entity Set Expansion

18 0.038007185 258 acl-2010-Weakly Supervised Learning of Presupposition Relations between Verbs

19 0.036752217 208 acl-2010-Sentence and Expression Level Annotation of Opinions in User-Generated Discourse

20 0.036406588 160 acl-2010-Learning Arguments and Supertypes of Semantic Relations Using Recursive Patterns


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.114), (1, 0.041), (2, -0.041), (3, -0.002), (4, -0.002), (5, -0.001), (6, 0.019), (7, 0.035), (8, -0.017), (9, -0.025), (10, -0.006), (11, 0.08), (12, -0.019), (13, -0.008), (14, -0.011), (15, -0.006), (16, 0.027), (17, -0.009), (18, -0.02), (19, 0.057), (20, -0.011), (21, -0.006), (22, -0.018), (23, -0.012), (24, -0.018), (25, -0.01), (26, -0.086), (27, 0.065), (28, -0.033), (29, 0.059), (30, 0.037), (31, -0.01), (32, 0.013), (33, -0.074), (34, 0.032), (35, -0.004), (36, 0.006), (37, 0.075), (38, 0.007), (39, 0.025), (40, -0.053), (41, 0.006), (42, 0.013), (43, 0.041), (44, 0.013), (45, -0.023), (46, 0.035), (47, 0.018), (48, 0.147), (49, -0.07)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.91217822 92 acl-2010-Don't 'Have a Clue'? Unsupervised Co-Learning of Downward-Entailing Operators.

Author: Cristian Danescu-Niculescu-Mizil ; Lillian Lee

Abstract: Researchers in textual entailment have begun to consider inferences involving downward-entailing operators, an interesting and important class of lexical items that change the way inferences are made. Recent work proposed a method for learning English downward-entailing operators that requires access to a high-quality collection of negative polarity items (NPIs). However, English is one of the very few languages for which such a list exists. We propose the first approach that can be applied to the many languages for which there is no pre-existing high-precision database of NPIs. As a case study, we apply our method to Romanian and show that our method yields good results. Also, we perform a cross-linguistic analysis that suggests interesting connections to some findings in linguistic typology.

2 0.52768236 138 acl-2010-Hunting for the Black Swan: Risk Mining from Text

Author: Jochen Leidner ; Frank Schilder

Abstract: In the business world, analyzing and dealing with risk permeates all decisions and actions. However, to date, risk identification, the first step in the risk management cycle, has always been a manual activity with little to no intelligent software tool support. In addition, although companies are required to list risks to their business in their annual SEC filings in the USA, these descriptions are often very highlevel and vague. In this paper, we introduce Risk Mining, which is the task of identifying a set of risks pertaining to a business area or entity. We argue that by combining Web mining and Information Extraction (IE) techniques, risks can be detected automatically before they materialize, thus providing valuable business intelligence. We describe a system that induces a risk taxonomy with concrete risks (e.g., interest rate changes) at its leaves and more abstract risks (e.g., financial risks) closer to its root node. The taxonomy is induced via a bootstrapping algorithms starting with a few seeds. The risk taxonomy is used by the system as input to a risk monitor that matches risk mentions in financial documents to the abstract risk types, thus bridging a lexical gap. Our system is able to automatically generate company specific “risk maps”, which we demonstrate for a corpus of earnings report conference calls.

3 0.50503278 68 acl-2010-Conditional Random Fields for Word Hyphenation

Author: Nikolaos Trogkanis ; Charles Elkan

Abstract: Finding allowable places in words to insert hyphens is an important practical problem. The algorithm that is used most often nowadays has remained essentially unchanged for 25 years. This method is the TEX hyphenation algorithm of Knuth and Liang. We present here a hyphenation method that is clearly more accurate. The new method is an application of conditional random fields. We create new training sets for English and Dutch from the CELEX European lexical resource, and achieve error rates for English of less than 0.1% for correctly allowed hyphens, and less than 0.01% for Dutch. Experiments show that both the Knuth/Liang method and a leading current commercial alternative have error rates several times higher for both languages.

4 0.49627611 43 acl-2010-Automatically Generating Term Frequency Induced Taxonomies

Author: Karin Murthy ; Tanveer A Faruquie ; L Venkata Subramaniam ; Hima Prasad K ; Mukesh Mohania

Abstract: We propose a novel method to automatically acquire a term-frequency-based taxonomy from a corpus using an unsupervised method. A term-frequency-based taxonomy is useful for application domains where the frequency with which terms occur on their own and in combination with other terms imposes a natural term hierarchy. We highlight an application for our approach and demonstrate its effectiveness and robustness in extracting knowledge from real-world data.

5 0.49318543 35 acl-2010-Automated Planning for Situated Natural Language Generation

Author: Konstantina Garoufi ; Alexander Koller

Abstract: We present a natural language generation approach which models, exploits, and manipulates the non-linguistic context in situated communication, using techniques from AI planning. We show how to generate instructions which deliberately guide the hearer to a location that is convenient for the generation of simple referring expressions, and how to generate referring expressions with context-dependent adjectives. We implement and evaluate our approach in the framework of the Challenge on Generating Instructions in Virtual Environments, finding that it performs well even under the constraints of realtime generation.

6 0.48384011 222 acl-2010-SystemT: An Algebraic Approach to Declarative Information Extraction

7 0.47901744 141 acl-2010-Identifying Text Polarity Using Random Walks

8 0.46979395 2 acl-2010-"Was It Good? It Was Provocative." Learning the Meaning of Scalar Adjectives

9 0.46076304 157 acl-2010-Last but Definitely Not Least: On the Role of the Last Sentence in Automatic Polarity-Classification

10 0.4456926 63 acl-2010-Comparable Entity Mining from Comparative Questions

11 0.43739516 19 acl-2010-A Taxonomy, Dataset, and Classifier for Automatic Noun Compound Interpretation

12 0.4309991 181 acl-2010-On Learning Subtypes of the Part-Whole Relation: Do Not Mix Your Seeds

13 0.42867494 182 acl-2010-On the Computational Complexity of Dominance Links in Grammatical Formalisms

14 0.42692083 61 acl-2010-Combining Data and Mathematical Models of Language Change

15 0.42329761 67 acl-2010-Computing Weakest Readings

16 0.41991431 258 acl-2010-Weakly Supervised Learning of Presupposition Relations between Verbs

17 0.41860017 27 acl-2010-An Active Learning Approach to Finding Related Terms

18 0.41836339 166 acl-2010-Learning Word-Class Lattices for Definition and Hypernym Extraction

19 0.41806367 100 acl-2010-Enhanced Word Decomposition by Calibrating the Decision Threshold of Probabilistic Models and Using a Model Ensemble

20 0.41165441 85 acl-2010-Detecting Experiences from Weblogs


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(6, 0.322), (7, 0.014), (14, 0.019), (25, 0.057), (33, 0.012), (39, 0.018), (42, 0.039), (44, 0.012), (59, 0.06), (73, 0.047), (78, 0.04), (80, 0.027), (83, 0.079), (84, 0.045), (98, 0.119)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.87712616 179 acl-2010-Now, Where Was I? Resumption Strategies for an In-Vehicle Dialogue System

Author: Jessica Villing

Abstract: In-vehicle dialogue systems often contain more than one application, e.g. a navigation and a telephone application. This means that the user might, for example, interrupt the interaction with the telephone application to ask for directions from the navigation application, and then resume the dialogue with the telephone application. In this paper we present an analysis of interruption and resumption behaviour in human-human in-vehicle dialogues and also propose some implications for resumption strategies in an in-vehicle dialogue system.

same-paper 2 0.7433362 92 acl-2010-Don't 'Have a Clue'? Unsupervised Co-Learning of Downward-Entailing Operators.

Author: Cristian Danescu-Niculescu-Mizil ; Lillian Lee

Abstract: Researchers in textual entailment have begun to consider inferences involving downward-entailing operators, an interesting and important class of lexical items that change the way inferences are made. Recent work proposed a method for learning English downward-entailing operators that requires access to a high-quality collection of negative polarity items (NPIs). However, English is one of the very few languages for which such a list exists. We propose the first approach that can be applied to the many languages for which there is no pre-existing high-precision database of NPIs. As a case study, we apply our method to Romanian and show that our method yields good results. Also, we perform a cross-linguistic analysis that suggests interesting connections to some findings in linguistic typology.

3 0.48005605 71 acl-2010-Convolution Kernel over Packed Parse Forest

Author: Min Zhang ; Hui Zhang ; Haizhou Li

Abstract: This paper proposes a convolution forest kernel to effectively explore rich structured features embedded in a packed parse forest. As opposed to the convolution tree kernel, the proposed forest kernel does not have to commit to a single best parse tree, is thus able to explore very large object spaces and much more structured features embedded in a forest. This makes the proposed kernel more robust against parsing errors and data sparseness issues than the convolution tree kernel. The paper presents the formal definition of convolution forest kernel and also illustrates the computing algorithm to fast compute the proposed convolution forest kernel. Experimental results on two NLP applications, relation extraction and semantic role labeling, show that the proposed forest kernel significantly outperforms the baseline of the convolution tree kernel. 1

4 0.47854352 251 acl-2010-Using Anaphora Resolution to Improve Opinion Target Identification in Movie Reviews

Author: Niklas Jakob ; Iryna Gurevych

Abstract: unkown-abstract

5 0.47634131 153 acl-2010-Joint Syntactic and Semantic Parsing of Chinese

Author: Junhui Li ; Guodong Zhou ; Hwee Tou Ng

Abstract: This paper explores joint syntactic and semantic parsing of Chinese to further improve the performance of both syntactic and semantic parsing, in particular the performance of semantic parsing (in this paper, semantic role labeling). This is done from two levels. Firstly, an integrated parsing approach is proposed to integrate semantic parsing into the syntactic parsing process. Secondly, semantic information generated by semantic parsing is incorporated into the syntactic parsing model to better capture semantic information in syntactic parsing. Evaluation on Chinese TreeBank, Chinese PropBank, and Chinese NomBank shows that our integrated parsing approach outperforms the pipeline parsing approach on n-best parse trees, a natural extension of the widely used pipeline parsing approach on the top-best parse tree. Moreover, it shows that incorporating semantic role-related information into the syntactic parsing model significantly improves the performance of both syntactic parsing and semantic parsing. To our best knowledge, this is the first research on exploring syntactic parsing and semantic role labeling for both verbal and nominal predicates in an integrated way. 1

6 0.4740907 65 acl-2010-Complexity Metrics in an Incremental Right-Corner Parser

7 0.47365433 101 acl-2010-Entity-Based Local Coherence Modelling Using Topological Fields

8 0.47297472 109 acl-2010-Experiments in Graph-Based Semi-Supervised Learning Methods for Class-Instance Acquisition

9 0.47295266 136 acl-2010-How Many Words Is a Picture Worth? Automatic Caption Generation for News Images

10 0.47292835 214 acl-2010-Sparsity in Dependency Grammar Induction

11 0.47223964 55 acl-2010-Bootstrapping Semantic Analyzers from Non-Contradictory Texts

12 0.47210371 211 acl-2010-Simple, Accurate Parsing with an All-Fragments Grammar

13 0.4709028 93 acl-2010-Dynamic Programming for Linear-Time Incremental Parsing

14 0.47036606 198 acl-2010-Predicate Argument Structure Analysis Using Transformation Based Learning

15 0.47014305 39 acl-2010-Automatic Generation of Story Highlights

16 0.46999025 116 acl-2010-Finding Cognate Groups Using Phylogenies

17 0.46933162 146 acl-2010-Improving Chinese Semantic Role Labeling with Rich Syntactic Features

18 0.46929705 248 acl-2010-Unsupervised Ontology Induction from Text

19 0.46865422 208 acl-2010-Sentence and Expression Level Annotation of Opinions in User-Generated Discourse

20 0.46836239 202 acl-2010-Reading between the Lines: Learning to Map High-Level Instructions to Commands