acl acl2011 acl2011-274 knowledge-graph by maker-knowledge-mining

274 acl-2011-Semi-Supervised Frame-Semantic Parsing for Unknown Predicates


Source: pdf

Author: Dipanjan Das ; Noah A. Smith

Abstract: We describe a new approach to disambiguating semantic frames evoked by lexical predicates previously unseen in a lexicon or annotated data. Our approach makes use of large amounts of unlabeled data in a graph-based semi-supervised learning framework. We construct a large graph where vertices correspond to potential predicates and use label propagation to learn possible semantic frames for new ones. The label-propagated graph is used within a frame-semantic parser and, for unknown predicates, results in over 15% absolute improvement in frame identification accuracy and over 13% absolute improvement in full frame-semantic parsing F1 score on a blind test set, over a state-of-the-art supervised baseline.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 We describe a new approach to disambiguating semantic frames evoked by lexical predicates previously unseen in a lexicon or annotated data. [sent-4, score-0.596]

2 We construct a large graph where vertices correspond to potential predicates and use label propagation to learn possible semantic frames for new ones. [sent-6, score-0.835]

3 The lexicon suggests an analysis based on the theory of frame semantics (Fillmore, 1982). [sent-11, score-0.493]

4 Johansson and Nugues (2007) used WordNet (Fellbaum, 1998) to expand the list of targets that can evoke frames and trained classifiers to identify the best-suited frame for the newly created targets. [sent-16, score-1.199]

5 In past work, we described an approach where latent variables were used in a probabilistic model to predict frames for unseen targets (Das et al. [sent-17, score-0.841]

6 Unseen targets continue to present a major obstacle to domain-general semantic analysis. [sent-21, score-0.433]

7 In this paper, we address the problem of identifying the semantic frames for targets unseen either in FrameNet (including the exemplar sentences) or the collection of full-text annotations released along with the lexicon. [sent-22, score-1.031]

8 Notwithstanding state-of-the-art results, that approach was only able to identify the correct frame for 1.9% of unseen targets in the test data available at that time. [sent-27, score-0.452] [sent-28, score-0.480]

10 Each row under the sentence corresponds to a semantic frame and its set of corresponding arguments. [sent-39, score-0.501]

11 Thick lines indicate targets that evoke frames; thin solid/dotted lines with labels indicate arguments. [sent-40, score-0.408]

12 Next, we perform label propagation on the graph, which is initialized by frame distributions over the seen targets. [sent-43, score-0.656]

13 The resulting smoothed graph consists of posterior distributions over semantic frames for each target in the graph, thus increasing coverage. [sent-44, score-0.687]

14 Considering unseen targets in test data, significant absolute improvements of 15.7% and 13.7% are observed for frame identification and full frame-semantic parsing, respectively, indicating improved coverage for hitherto unobserved predicates (§6). [sent-46, score-0.48] [sent-49, score-0.793]

16 Early work on frame-semantic role labeling made use of the exemplar sentences in the FrameNet corpus, each of which is annotated for a single frame and its arguments (Thompson et al. [sent-53, score-0.552]

17 , 2007), there has been work on identifying multiple frames and their corresponding sets of arguments in a sentence. [sent-58, score-0.339]

18 In the domain of frame semantics, previous work has sought to extend the coverage of FrameNet by exploiting resources like VerbNet, WordNet, or Wikipedia (Shi and Mihalcea, 2005; Giuglea and Moschitti, 2006; Pennacchiotti et al. [sent-66, score-0.501]

19 Bejan (2009) used self-training to improve frame identification and reported improvements, but did not explicitly model unknown targets. [sent-70, score-0.646]

20 2 Graph-based Semi-Supervised Learning In graph-based semi-supervised learning, one constructs a graph whose vertices are labeled and unlabeled examples. [sent-79, score-0.274]

21 In contrast, we make use of the smoothed graph during inference in a probabilistic setting, in turn using it for the full frame-semantic parsing task. [sent-87, score-0.279]

22 (2010) proposed the use of a graph over substructures of an underlying sequence model, and used a smoothed graph for domain adaptation of part-of-speech taggers. [sent-89, score-0.343]

23 3 Approach Overview Our overall approach to handling unobserved targets consists of four distinct stages. [sent-95, score-0.414]

24 Before going into the details of each stage individually, we provide their overview here: Graph Construction: A graph consisting of vertices corresponding to targets is constructed using a combination of frame similarity (for observed targets) and distributional similarity as edge weights. [sent-96, score-1.201]

25 Label Propagation: The observed targets (a small subset of the vertices) are initialized with empirical frame distributions extracted from FrameNet annotations. [sent-98, score-0.887]

26 Label propagation results in a distribution of frames for each vertex in the graph. [sent-99, score-0.501]

27 Supervised Learning: Frame identification and argument identification models are trained following Das et al. [sent-100, score-0.324]

28 The graph is used to define the set of candidate frames for unseen targets. [sent-102, score-0.592]

29 Parsing: The frame identification model of Das et al. [sent-103, score-0.588]

30 disambiguated among only those frames associated with a seen target in the annotated data. [sent-104, score-0.401]

31 For an unseen target, all frames in the FrameNet lexicon were considered (a large number). [sent-105, score-0.476]

32 The current work replaces that strategy, considering only the top M frames in the distribution produced by label propagation. [sent-106, score-0.38]

33 This strategy results in large improvements in frame identification for the unseen targets and makes inference much faster. [sent-107, score-1.09]

34 4 Semi-Supervised Learning We perform semi-supervised learning by constructing a graph of vertices representing a large number of targets, and learn frame distributions for those which were not observed in FrameNet annotations. [sent-110, score-0.75]

35 1 Graph Construction We construct a graph with targets as vertices. [sent-112, score-0.541]

36 For example, two targets corresponding to the same lemma would look like boast. [sent-114, score-0.384]

37 At the end of this processing step, we were left with 61,702 units—approximately six times more than the targets found in FrameNet annotations—each labeled with one of 4 coarse tags. [sent-131, score-0.384]

38 We considered only the top 20 most similar targets for each target, and noted Lin’s similarity between two targets t and u, which we call simDL (t, u). [sent-132, score-0.804]

39 54 and the training section of the full-text annotations that we use to train the probabilistic frame parser (see §6. [sent-135, score-0.561]

40 For a pair of targets t and u, we measured the Euclidean distance between their frame distributions. [sent-138, score-0.836]

41 Finally, the overall similarity between two given targets t and u was computed as: sim(t, u) = α · simFN(t, u) + (1 − α) · simDL(t, u) Note that this score is symmetric because its two components are symmetric. [sent-146, score-0.42]
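
To make the interpolation concrete, here is a minimal Python sketch of how the two similarity components might be computed and combined. The conversion of the Euclidean distance between frame distributions into simFN via exp(-d), the function names, and the default value of alpha are assumptions of this sketch, not the authors' implementation.

    import math

    def sim_fn(p_t, p_u):
        # Frame-distribution similarity between two observed targets.
        # p_t, p_u: dicts mapping frame name -> empirical probability.
        # The text measures the Euclidean distance between the two
        # distributions; turning that distance into a similarity with
        # exp(-d) is an assumption of this sketch.
        frames = set(p_t) | set(p_u)
        d = math.sqrt(sum((p_t.get(f, 0.0) - p_u.get(f, 0.0)) ** 2
                          for f in frames))
        return math.exp(-d)

    def combined_sim(t, u, frame_dists, sim_dl, alpha=0.5):
        # sim(t, u) = alpha * simFN(t, u) + (1 - alpha) * simDL(t, u).
        # frame_dists: empirical frame distributions for observed targets;
        # sim_dl: precomputed distributional (Lin) similarity function.
        # When either target is unobserved, this sketch drops simFN to 0.
        fn = 0.0
        if t in frame_dists and u in frame_dists:
            fn = sim_fn(frame_dists[t], frame_dists[u])
        return alpha * fn + (1.0 - alpha) * sim_dl(t, u)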

42 We hope that distributionally similar targets would have the same semantic frames because ideally, lexical units evoking the same set of frames appear in similar syntactic contexts. [sent-148, score-1.151]

43 We would also like to involve the annotated data in graph construction so that it can eliminate some noise in the automatically constructed thesaurus. [sent-149, score-0.229]

44 We link vertices t and u in the graph with edge weight wtu, defined as: wtu = sim(t, u) if t ∈ K(u) or u ∈ K(t), and 0 otherwise. The hyperparameters are tuned by cross-validation (§6. [sent-151, score-0.251]
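
A sketch of this edge construction, gating edges by mutual membership in each other's K most similar targets; the brute-force neighbour search and the default K = 20 (echoing the 20 nearest neighbours mentioned earlier in the summary) are illustrative assumptions.

    def build_edges(targets, sim, k=20):
        # Link t and u with weight sim(t, u) when u is among t's K most
        # similar targets or vice versa; leave them unconnected otherwise.
        # 'sim' is a symmetric two-argument similarity, e.g. the combined
        # score sketched earlier.
        knn = {
            t: set(sorted((u for u in targets if u != t),
                          key=lambda u: sim(t, u), reverse=True)[:k])
            for t in targets
        }
        edges = {}
        for t in targets:
            for u in targets:
                if t != u and (u in knn[t] or t in knn[u]):
                    edges[(t, u)] = sim(t, u)
        return edges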

45 2 Label Propagation First, we softly label those vertices of the constructed graph for which frame distributions are available from the FrameNet data (the same distributions that are used to compute simFN). [sent-154, score-0.888]

46 Thus, initially, a small fraction of the vertices in the graph are labeled. In future work, one might consider learning a similarity metric from the annotated data, so as to exactly suit the frame identification task. [sent-155, score-0.871]

47 For simplicity, only the most probable frames under the empirical distribution for the observed targets are shown; we actually label each vertex with the full empirical distribution over frames for the corresponding observed target in the data. [sent-158, score-1.245]

48 Let V denote the set of all vertices in the graph, Vl ⊂ V be the set of known targets, and F denote the set of all frames. [sent-164, score-0.522]

49 For each known target t ∈ Vl, we have an initial frame distribution rt. [sent-170, score-0.514]

50 2 requires that, for known targets, we stay close to the initial frame distributions. [sent-177, score-0.452]

51 The second term is the graph smoothness regularizer, which encourages the distributions of similar nodes (large wtu) to be similar. [sent-178, score-0.208]

52 The final distribution of frames for a target t is denoted by qt∗. [sent-197, score-0.401]
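
The propagation step itself can be pictured as iterative updates for a squared-loss objective with the three ingredients described above: stay close to the seed distributions for known targets, smooth distributions across edges, and regularize lightly toward the uniform distribution. The update rule below, the default values of mu and nu, and the fixed iteration count are assumptions of this sketch, not the authors' optimizer.

    def label_propagation(edges, seeds, frames, mu=0.5, nu=1e-6, iters=10):
        # edges: {(t, u): w_tu}; seeds: {target: {frame: prob}} for the
        # observed targets; frames: list of all frame names.
        vertices = {v for e in edges for v in e} | set(seeds)
        uniform = 1.0 / len(frames)
        q = {t: dict(seeds.get(t, {f: uniform for f in frames}))
             for t in vertices}
        neighbours = {t: {} for t in vertices}
        for (t, u), w in edges.items():
            neighbours[t][u] = w
            neighbours[u][t] = w
        for _ in range(iters):
            new_q = {}
            for t in vertices:
                is_seed = 1.0 if t in seeds else 0.0
                denom = is_seed + nu + mu * sum(neighbours[t].values())
                dist = {}
                for f in frames:
                    num = (is_seed * seeds.get(t, {}).get(f, 0.0)
                           + nu * uniform
                           + mu * sum(w * q[u].get(f, 0.0)
                                      for u, w in neighbours[t].items()))
                    dist[f] = num / denom
                z = sum(dist.values()) or 1.0
                new_q[t] = {f: p / z for f, p in dist.items()}
            q = new_q
        return q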

53 Note that in all our experiments, we assume that the targets are marked in a given sentence of which we want to extract a frame-semantic analysis. [sent-200, score-0.384]

54 1 Frame Identification For a given sentence x with frame-evoking targets t, let ti denote the ith target (a word sequence). [sent-203, score-0.543]

55 The set of candidate frames Fi for ti is defined to include every frame f such that ti ∈ Lf. [sent-212, score-0.485]

56 (2010a) considered all frames F in FrameNet as candidates. [sent-214, score-0.339]

57 Instead, in our work, we check whether ti ∈ V, where V are the vertices of the constructed graph, and set: Fi = {f : f ∈ M-best frames under qt∗i} (6) The integer M is set using cross-validation (§6. [sent-215, score-0.548]
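
The candidate pruning of Equation 6 can be sketched as follows; the fall-back order for targets outside the graph and the helper names are assumptions of this sketch, and the default M = 2 mirrors the cross-validated value reported later in the summary.

    def candidate_frames(target, seen_frames, q, all_frames, m=2):
        # Frames observed with the target in annotated data if it is seen;
        # otherwise the M most probable frames under the propagated
        # distribution q[target] if the target is a graph vertex;
        # otherwise the whole frame lexicon.
        if target in seen_frames:
            return sorted(seen_frames[target])
        if target in q:
            return sorted(q[target], key=q[target].get, reverse=True)[:m]
        return list(all_frames)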

58 The frame prediction rule uses a probabilistic model over frames for a target: fi ← argmaxf∈Fi Σℓ∈Lf p(f, ℓ | ti, x) (7) Note that a latent variable ℓ ∈ Lf is used, which is marginalized out. [sent-218, score-0.889]

59 Broadly, lexical semantic relationships between the “prototype” variable ℓ (belonging to the set of seen targets for a frame f) and the target ti are used as features for frame identification, but since ℓ is unobserved, it is summed out both during inference and training. [sent-219, score-1.472]
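
In code, the prediction rule of Equation 7 amounts to marginalizing the latent prototype variable before taking the argmax. The score callable below stands in for the log-linear model's feature-weight dot product; the feature details are placeholders, not the paper's feature set.

    import math

    def identify_frame(target, sentence, candidates, prototypes, score):
        # For each candidate frame f, sum the unnormalized model weight
        # over f's prototypes (seen targets that evoke f), then take the
        # argmax; the shared partition function cancels in the argmax.
        def marginal(f):
            return sum(math.exp(score(f, proto, target, sentence))
                       for proto in prototypes.get(f, []))
        return max(candidates, key=marginal)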

60 A feature might relate a frame f to a prototype ℓ, represent a lexical-semantic relationship between ℓ and ti, or encode part of the syntax of the sentence (Das et al. [sent-223, score-0.452]

61 While training, in the partition function of the log-linear model, all frames F in FrameNet are summed up for a target ti instead of only Fi (as in Eq. [sent-236, score-0.474]

62 Let Rfi = {r1, . . . , r|Rfi|} denote frame fi’s roles observed in FrameNet annotations. [sent-247, score-0.476]

63 Naïve prediction of roles using Equation 10 may result in overlap among arguments filling different roles of a frame, since the argument identification model fills each role independently of the others. [sent-256, score-0.344]

64 We want to enforce the constraint that two roles of a single frame cannot be filled by overlapping spans. [sent-257, score-0.516]
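
One simple way to picture the non-overlap constraint is a greedy decoder over candidate spans. This is purely illustrative, since the paper enforces the constraint with its own decoding procedure, and the data layout below is an assumption.

    def fill_roles(role_candidates):
        # role_candidates: {role: [(start, end, score), ...]} with end
        # exclusive; a (None, None, score) entry is the null span.
        # Each role proposes its best span; spans are accepted in order of
        # score, and a role whose span overlaps an accepted one falls back
        # to the null span.
        proposals = []
        for role, spans in role_candidates.items():
            best = max(spans, key=lambda s: s[2])
            proposals.append((best[2], role, best[0], best[1]))
        taken, assignment = [], {}
        for _, role, start, end in sorted(proposals, reverse=True):
            if start is None or all(end <= s or start >= e for s, e in taken):
                assignment[role] = (start, end)
                if start is not None:
                    taken.append((start, end))
            else:
                assignment[role] = (None, None)
        return assignment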

65 78 documents with full-text annotations with multiple frames per sentence were also released (a superset of the SemEval’07 dataset). [sent-272, score-0.453]

66 Our training split of the full-text annotations contained 3,256 sentences with 19,582 frame annotations with corresponding roles, while the test set contained 2,420 sentences with 4,458 annotations (the test set contained fewer annotated targets per sentence). [sent-275, score-0.96]

67 In this work we assume the frame-evoking targets have been correctly identified in training and test data. [sent-283, score-0.384]

68 For finding targets in a raw sentence, we used a relaxed target identification scheme, where we marked every target seen in the lexicon and all other words which were not prepositions, particles, proper nouns, foreign words and Wh-words as potential frame evoking units. [sent-291, score-1.177]

69 This was done so as to find unseen targets and get frame annotations with SEMAFOR on them. [sent-292, score-0.994]

70 We appended these automatic annotations to the training data, resulting in 711,401 frame annotations, more than 36 times the supervised data. [sent-293, score-0.573]

71 These data were next used to train a frame identification model (§5. [sent-294, score-0.588]

72 The third baseline uses a graph constructed only with Lin’s thesaurus, without using supervised data. [sent-300, score-0.24]

73 Label propagation was run on this graph (and hyperparameters tuned using cross-validation). [sent-304, score-0.33]

74 The posterior distribution of frames over targets was next used for frame identification (Eq. [sent-305, score-1.311]

75 Note that we only self-train the frame identification model and not the argument identification model, which is fixed throughout. [sent-317, score-0.776]

76 The uniform regularization hyperparameter ν for graph construction was set to 10⁻⁶ and not tuned. [sent-332, score-0.214]

77 For each cross-validation split, four folds were used to train a frame identification model, construct a graph, run label propagation, and then the model was tested on the fifth fold. [sent-333, score-0.741]
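
The tuning procedure amounts to standard five-fold selection; a schematic version follows, where the train and evaluate callables are placeholders for the paper's pipeline of graph construction, label propagation, and frame identification.

    def cross_validate(sentences, hyperparam_grid, train, evaluate, folds=5):
        # For each hyperparameter setting, train on four folds and test on
        # the held-out fifth; keep the setting with the best mean score.
        fold_data = [sentences[i::folds] for i in range(folds)]
        best_setting, best_score = None, float("-inf")
        for setting in hyperparam_grid:
            scores = []
            for held_out in range(folds):
                train_set = [s for i in range(folds) if i != held_out
                             for s in fold_data[i]]
                model = train(train_set, **setting)
                scores.append(evaluate(model, fold_data[held_out]))
            mean = sum(scores) / folds
            if mean > best_score:
                best_setting, best_score = setting, mean
        return best_setting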

78 The standard evaluation script from the SemEval’07 task calculates precision, recall, and F1 score for frames and arguments; it also provides a score that gives partial credit for hypothesizing a frame related to the correct one in the FrameNet lexicon. [sent-351, score-0.791]

79 4 Results Tables 1 and 2 present results for frame identification and full frame-semantic parsing, respectively. [sent-357, score-0.659]

80 short of the supervised baseline SEMAFOR, unlike what was observed by Bejan (2009) for the frame identification task. [sent-364, score-0.625]

81 This indicates that a graph constructed with some knowledge of the supervised data is more powerful. [sent-366, score-0.24]

82 7% absolute accuracy improvement over SEMAFOR for frame identification, and 13. [sent-368, score-0.474]

83 When all the test targets are considered, the gains are still significant, resulting in 5.4% relative error reduction over SEMAFOR for frame identification, and 1. [sent-370, score-0.384] [sent-371, score-0.452]

85 2% of the test set targets are unseen in training. [sent-374, score-0.48]

86 This is because our model now disambiguates between only M = 2 frames instead of the full set of 877 frames in FrameNet. [sent-379, score-0.708]

87 None of these targets were present in the supervised FrameNet data. [sent-393, score-0.421]

88 The difference in speed is noticeable, as SEMAFOR takes 131 seconds for frame identification, while the FullGraph model only takes 39 seconds. [sent-397, score-0.452]

89 Note that the model identifies an incorrect frame REASON for the target discrepancy. [sent-401, score-0.514]

90 The excerpt from our constructed graph in Figure 2 shows the same target discrepancy. [sent-405, score-0.265]

91 N drawn from annotated data, which evokes the frame SIMILARITY. [sent-408, score-0.452]

92 Thus, after label propagation, we expect the frame SIMILARITY to receive high probability for the target discrepancy. [sent-409, score-0.555]

93 Table 3 shows the top 5 frames that are assigned the highest posterior probabilities in the distribution qt∗ for four hand-selected test targets absent in supervised data, including discrepancy. [sent-411, score-0.76]

94 For all of them, the FullGraph model identifies the correct frames for all four words in the test data by ranking these frames in the top M = 2. [sent-413, score-0.678]

95 Across unknown targets, on average the M = 2 most common frames in the posterior distribution qt∗ found by FullGraph have = or seven times the average across all frames. [sent-416, score-0.397]

96 This suggests that the graph propagation method is confident only in predicting the top few frames out of the whole possible set. [sent-417, score-0.608]

97 Moreover, the automatically selected number of frames to extract per unknown target, M = 2, suggests that only a few meaningful frames were assigned to unknown predicates. [sent-418, score-0.794]

98 This matches the nature of FrameNet data, where the average frame ambiguity for a target type is 1. [sent-419, score-0.514]

99 We showed that graph-based label propagation and resulting smoothed frame distributions over unseen targets significantly improved the coverage of a state-of-the-art semantic frame disambiguation model to previously unseen predicates, also improving the quality of full frame-semantic parses. [sent-422, score-0.48]

100 BiFrameNet: bilingual frame semantics resource construction by crosslingual induction. [sent-532, score-0.478]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('frame', 0.452), ('targets', 0.384), ('frames', 0.339), ('framenet', 0.284), ('semafor', 0.28), ('das', 0.161), ('graph', 0.157), ('identification', 0.136), ('qt', 0.131), ('fullgraph', 0.115), ('propagation', 0.112), ('subramanya', 0.107), ('rfi', 0.099), ('unseen', 0.096), ('vertices', 0.09), ('lingraph', 0.082), ('fi', 0.076), ('semeval', 0.074), ('rk', 0.074), ('ti', 0.073), ('exemplar', 0.072), ('wtu', 0.066), ('roles', 0.064), ('target', 0.062), ('annotations', 0.062), ('hyperparameters', 0.061), ('unknown', 0.058), ('thesaurus', 0.058), ('vl', 0.057), ('argument', 0.052), ('distributions', 0.051), ('vertex', 0.05), ('semantic', 0.049), ('bejan', 0.049), ('entropic', 0.049), ('framesemantic', 0.049), ('simdl', 0.049), ('simfn', 0.049), ('coverage', 0.049), ('predicates', 0.047), ('constructed', 0.046), ('lexicon', 0.041), ('parsing', 0.041), ('label', 0.041), ('evoking', 0.04), ('shi', 0.038), ('schneider', 0.038), ('supervised', 0.037), ('similarity', 0.036), ('pad', 0.035), ('lth', 0.034), ('verbnet', 0.034), ('gets', 0.033), ('bells', 0.033), ('fmi', 0.033), ('giuglea', 0.033), ('matsubayashi', 0.033), ('tli', 0.033), ('tonelli', 0.033), ('fillmore', 0.031), ('hyperparameter', 0.031), ('spans', 0.03), ('lf', 0.03), ('unobserved', 0.03), ('full', 0.03), ('released', 0.029), ('smoothed', 0.029), ('ai', 0.028), ('role', 0.028), ('petrov', 0.028), ('urstenau', 0.027), ('fleischman', 0.027), ('unlabeled', 0.027), ('construction', 0.026), ('quadratic', 0.026), ('johansson', 0.025), ('parser', 0.025), ('niu', 0.025), ('lin', 0.025), ('evoke', 0.024), ('evoked', 0.024), ('alia', 0.024), ('ponzetto', 0.024), ('scanning', 0.024), ('regularizer', 0.024), ('denote', 0.024), ('xn', 0.023), ('erk', 0.023), ('superset', 0.023), ('srl', 0.022), ('appended', 0.022), ('probabilistic', 0.022), ('semisupervised', 0.022), ('absolute', 0.022), ('improvements', 0.022), ('thompson', 0.021), ('inter', 0.021), ('bengio', 0.021), ('fung', 0.021), ('pennacchiotti', 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999893 274 acl-2011-Semi-Supervised Frame-Semantic Parsing for Unknown Predicates

Author: Dipanjan Das ; Noah A. Smith

Abstract: We describe a new approach to disambiguating semantic frames evoked by lexical predicates previously unseen in a lexicon or annotated data. Our approach makes use of large amounts of unlabeled data in a graph-based semi-supervised learning framework. We construct a large graph where vertices correspond to potential predicates and use label propagation to learn possible semantic frames for new ones. The label-propagated graph is used within a frame-semantic parser and, for unknown predicates, results in over 15% absolute improvement in frame identification accuracy and over 13% absolute improvement in full frame-semantic parsing F1 score on a blind test set, over a state-of-the-art supervised baseline.

2 0.19597301 3 acl-2011-A Bayesian Model for Unsupervised Semantic Parsing

Author: Ivan Titov ; Alexandre Klementiev

Abstract: We propose a non-parametric Bayesian model for unsupervised semantic parsing. Following Poon and Domingos (2009), we consider a semantic parsing setting where the goal is to (1) decompose the syntactic dependency tree of a sentence into fragments, (2) assign each of these fragments to a cluster of semantically equivalent syntactic structures, and (3) predict predicate-argument relations between the fragments. We use hierarchical PitmanYor processes to model statistical dependencies between meaning representations of predicates and those of their arguments, as well as the clusters of their syntactic realizations. We develop a modification of the MetropolisHastings split-merge sampler, resulting in an efficient inference algorithm for the model. The method is experimentally evaluated by us- ing the induced semantic representation for the question answering task in the biomedical domain.

3 0.16366896 216 acl-2011-MEANT: An inexpensive, high-accuracy, semi-automatic metric for evaluating translation utility based on semantic roles

Author: Chi-kiu Lo ; Dekai Wu

Abstract: We introduce a novel semi-automated metric, MEANT, that assesses translation utility by matching semantic role fillers, producing scores that correlate with human judgment as well as HTER but at much lower labor cost. As machine translation systems improve in lexical choice and fluency, the shortcomings of widespread n-gram based, fluency-oriented MT evaluation metrics such as BLEU, which fail to properly evaluate adequacy, become more apparent. But more accurate, nonautomatic adequacy-oriented MT evaluation metrics like HTER are highly labor-intensive, which bottlenecks the evaluation cycle. We first show that when using untrained monolingual readers to annotate semantic roles in MT output, the non-automatic version of the metric HMEANT achieves a 0.43 correlation coefficient with human adequacy judgments at the sentence level, far superior to BLEU at only 0.20, and equal to the far more expensive HTER. We then replace the human semantic role annotators with automatic shallow semantic parsing to further automate the evaluation metric, and show that even the semiautomated evaluation metric achieves a 0.34 correlation coefficient with human adequacy judgment, which is still about 80% as closely correlated as HTER despite an even lower labor cost for the evaluation procedure. The results show that our proposed metric is significantly better correlated with human judgment on adequacy than current widespread automatic evaluation metrics, while being much more cost effective than HTER.

4 0.16128694 323 acl-2011-Unsupervised Part-of-Speech Tagging with Bilingual Graph-Based Projections

Author: Dipanjan Das ; Slav Petrov

Abstract: We describe a novel approach for inducing unsupervised part-of-speech taggers for languages that have no labeled training data, but have translated text in a resource-rich language. Our method does not assume any knowledge about the target language (in particular no tagging dictionary is assumed), making it applicable to a wide array of resource-poor languages. We use graph-based label propagation for cross-lingual knowledge transfer and use the projected labels as features in an unsupervised model (Berg-Kirkpatrick et al., 2010). Across eight European languages, our approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.

5 0.13232405 214 acl-2011-Lost in Translation: Authorship Attribution using Frame Semantics

Author: Steffen Hedegaard ; Jakob Grue Simonsen

Abstract: We investigate authorship attribution using classifiers based on frame semantics. The purpose is to discover whether adding semantic information to lexical and syntactic methods for authorship attribution will improve them, specifically to address the difficult problem of authorship attribution of translated texts. Our results suggest (i) that frame-based classifiers are usable for author attribution of both translated and untranslated texts; (ii) that frame-based classifiers generally perform worse than the baseline classifiers for untranslated texts, but (iii) perform as well as, or superior to the baseline classifiers on translated texts; (iv) that—contrary to current belief—naïve classifiers based on lexical markers may perform tolerably on translated texts if the combination of author and translator is present in the training set of a classifier.

6 0.13019001 269 acl-2011-Scaling up Automatic Cross-Lingual Semantic Role Annotation

7 0.10189538 324 acl-2011-Unsupervised Semantic Role Induction via Split-Merge Clustering

8 0.09685801 211 acl-2011-Liars and Saviors in a Sentiment Annotated Corpus of Comments to Political Debates

9 0.086984187 314 acl-2011-Typed Graph Models for Learning Latent Attributes from Names

10 0.084532671 84 acl-2011-Contrasting Opposing Views of News Articles on Contentious Issues

11 0.081421405 85 acl-2011-Coreference Resolution with World Knowledge

12 0.080566518 87 acl-2011-Corpus Expansion for Statistical Machine Translation with Semantic Role Label Substitution Rules

13 0.075217731 286 acl-2011-Social Network Extraction from Texts: A Thesis Proposal

14 0.070472308 229 acl-2011-NULEX: An Open-License Broad Coverage Lexicon

15 0.069951847 231 acl-2011-Nonlinear Evidence Fusion and Propagation for Hyponymy Relation Mining

16 0.068377383 234 acl-2011-Optimal Head-Driven Parsing Complexity for Linear Context-Free Rewriting Systems

17 0.068199769 292 acl-2011-Target-dependent Twitter Sentiment Classification

18 0.063410617 167 acl-2011-Improving Dependency Parsing with Semantic Classes

19 0.063113861 325 acl-2011-Unsupervised Word Alignment with Arbitrary Features

20 0.060944963 293 acl-2011-Template-Based Information Extraction without the Templates


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.177), (1, 0.028), (2, -0.049), (3, -0.045), (4, 0.03), (5, 0.0), (6, 0.047), (7, 0.04), (8, 0.018), (9, -0.03), (10, 0.039), (11, -0.043), (12, 0.041), (13, 0.04), (14, -0.089), (15, -0.06), (16, -0.079), (17, -0.074), (18, 0.022), (19, -0.092), (20, 0.04), (21, 0.041), (22, -0.074), (23, -0.031), (24, 0.014), (25, 0.005), (26, -0.163), (27, -0.133), (28, -0.011), (29, 0.011), (30, 0.103), (31, -0.042), (32, -0.084), (33, 0.013), (34, 0.12), (35, -0.038), (36, 0.126), (37, -0.003), (38, -0.182), (39, -0.049), (40, -0.041), (41, 0.14), (42, -0.021), (43, 0.087), (44, 0.105), (45, -0.014), (46, -0.148), (47, -0.053), (48, -0.027), (49, -0.02)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.94557774 274 acl-2011-Semi-Supervised Frame-Semantic Parsing for Unknown Predicates

Author: Dipanjan Das ; Noah A. Smith

Abstract: We describe a new approach to disambiguating semantic frames evoked by lexical predicates previously unseen in a lexicon or annotated data. Our approach makes use of large amounts of unlabeled data in a graph-based semi-supervised learning framework. We construct a large graph where vertices correspond to potential predicates and use label propagation to learn possible semantic frames for new ones. The label-propagated graph is used within a frame-semantic parser and, for unknown predicates, results in over 15% absolute improvement in frame identification accuracy and over 13% absolute improvement in full frame-semantic parsing F1 score on a blind test set, over a state-of-the-art supervised baseline.

2 0.69090331 269 acl-2011-Scaling up Automatic Cross-Lingual Semantic Role Annotation

Author: Lonneke van der Plas ; Paola Merlo ; James Henderson

Abstract: Broad-coverage semantic annotations for training statistical learners are only available for a handful of languages. Previous approaches to cross-lingual transfer of semantic annotations have addressed this problem with encouraging results on a small scale. In this paper, we scale up previous efforts by using an automatic approach to semantic annotation that does not rely on a semantic ontology for the target language. Moreover, we improve the quality of the transferred semantic annotations by using a joint syntactic-semantic parser that learns the correlations between syntax and semantics of the target language and smooths out the errors from automatic transfer. We reach a labelled F-measure for predicates and arguments of only 4% and 9% points, respectively, lower than the upper bound from manual annotations.

3 0.6408903 3 acl-2011-A Bayesian Model for Unsupervised Semantic Parsing

Author: Ivan Titov ; Alexandre Klementiev

Abstract: We propose a non-parametric Bayesian model for unsupervised semantic parsing. Following Poon and Domingos (2009), we consider a semantic parsing setting where the goal is to (1) decompose the syntactic dependency tree of a sentence into fragments, (2) assign each of these fragments to a cluster of semantically equivalent syntactic structures, and (3) predict predicate-argument relations between the fragments. We use hierarchical Pitman-Yor processes to model statistical dependencies between meaning representations of predicates and those of their arguments, as well as the clusters of their syntactic realizations. We develop a modification of the Metropolis-Hastings split-merge sampler, resulting in an efficient inference algorithm for the model. The method is experimentally evaluated by using the induced semantic representation for the question answering task in the biomedical domain.

4 0.6301291 324 acl-2011-Unsupervised Semantic Role Induction via Split-Merge Clustering

Author: Joel Lang ; Mirella Lapata

Abstract: In this paper we describe an unsupervised method for semantic role induction which holds promise for relieving the data acquisition bottleneck associated with supervised role labelers. We present an algorithm that iteratively splits and merges clusters representing semantic roles, thereby leading from an initial clustering to a final clustering of better quality. The method is simple, surprisingly effective, and allows to integrate linguistic knowledge transparently. By combining role induction with a rule-based component for argument identification we obtain an unsupervised end-to-end semantic role labeling system. Evaluation on the CoNLL 2008 benchmark dataset demonstrates that our method outperforms competitive unsupervised approaches by a wide margin.

5 0.6217739 68 acl-2011-Classifying arguments by scheme

Author: Vanessa Wei Feng ; Graeme Hirst

Abstract: Argumentation schemes are structures or templates for various kinds of arguments. Given the text of an argument with premises and conclusion identified, we classify it as an instance of one of five common schemes, using features specific to each scheme. We achieve accuracies of 63–91% in one-against-others classification and 80–94% in pairwise classification (baseline = 50% in both cases).

6 0.58603805 214 acl-2011-Lost in Translation: Authorship Attribution using Frame Semantics

7 0.57310861 323 acl-2011-Unsupervised Part-of-Speech Tagging with Bilingual Graph-Based Projections

8 0.54481953 84 acl-2011-Contrasting Opposing Views of News Articles on Contentious Issues

9 0.47577161 216 acl-2011-MEANT: An inexpensive, high-accuracy, semi-automatic metric for evaluating translation utility based on semantic roles

10 0.42787161 314 acl-2011-Typed Graph Models for Learning Latent Attributes from Names

11 0.4108572 320 acl-2011-Unsupervised Discovery of Domain-Specific Knowledge from Text

12 0.40034509 303 acl-2011-Tier-based Strictly Local Constraints for Phonology

13 0.39263359 234 acl-2011-Optimal Head-Driven Parsing Complexity for Linear Context-Free Rewriting Systems

14 0.3898885 174 acl-2011-Insights from Network Structure for Text Mining

15 0.38381845 319 acl-2011-Unsupervised Decomposition of a Document into Authorial Components

16 0.37680072 200 acl-2011-Learning Dependency-Based Compositional Semantics

17 0.3754847 231 acl-2011-Nonlinear Evidence Fusion and Propagation for Hyponymy Relation Mining

18 0.37358364 162 acl-2011-Identifying the Semantic Orientation of Foreign Words

19 0.36625466 167 acl-2011-Improving Dependency Parsing with Semantic Classes

20 0.36305562 293 acl-2011-Template-Based Information Extraction without the Templates


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(5, 0.029), (17, 0.051), (26, 0.025), (37, 0.084), (39, 0.049), (41, 0.057), (53, 0.045), (55, 0.035), (59, 0.079), (72, 0.034), (78, 0.223), (91, 0.037), (96, 0.128), (97, 0.013)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.768866 274 acl-2011-Semi-Supervised Frame-Semantic Parsing for Unknown Predicates

Author: Dipanjan Das ; Noah A. Smith

Abstract: We describe a new approach to disambiguating semantic frames evoked by lexical predicates previously unseen in a lexicon or annotated data. Our approach makes use of large amounts of unlabeled data in a graph-based semi-supervised learning framework. We construct a large graph where vertices correspond to potential predicates and use label propagation to learn possible semantic frames for new ones. The label-propagated graph is used within a frame-semantic parser and, for unknown predicates, results in over 15% absolute improvement in frame identification accuracy and over 13% absolute improvement in full frame-semantic parsing F1 score on a blind test set, over a state-of-the-art supervised baseline.

2 0.70020032 202 acl-2011-Learning Hierarchical Translation Structure with Linguistic Annotations

Author: Markos Mylonakis ; Khalil Sima'an

Abstract: While it is generally accepted that many translation phenomena are correlated with linguistic structures, employing linguistic syntax for translation has proven a highly non-trivial task. The key assumption behind many approaches is that translation is guided by the source and/or target language parse, employing rules extracted from the parse tree or performing tree transformations. These approaches enforce strict constraints and might overlook important translation phenomena that cross linguistic constituents. We propose a novel flexible modelling approach to introduce linguistic information of varying granularity from the source side. Our method induces joint probability synchronous grammars and estimates their parameters, by selecting and weighing together linguistically motivated rules according to an objective function directly targeting generalisation over future data. We obtain statistically significant improvements across 4 different language pairs with English as source, mounting up to +1.92 BLEU for Chinese as target.

3 0.65405834 324 acl-2011-Unsupervised Semantic Role Induction via Split-Merge Clustering

Author: Joel Lang ; Mirella Lapata

Abstract: In this paper we describe an unsupervised method for semantic role induction which holds promise for relieving the data acquisition bottleneck associated with supervised role labelers. We present an algorithm that iteratively splits and merges clusters representing semantic roles, thereby leading from an initial clustering to a final clustering of better quality. The method is simple, surprisingly effective, and allows to integrate linguistic knowledge transparently. By combining role induction with a rule-based component for argument identification we obtain an unsupervised end-to-end semantic role labeling system. Evaluation on the CoNLL 2008 benchmark dataset demonstrates that our method outperforms competitive unsupervised approaches by a wide margin.

4 0.64901257 170 acl-2011-In-domain Relation Discovery with Meta-constraints via Posterior Regularization

Author: Harr Chen ; Edward Benson ; Tahira Naseem ; Regina Barzilay

Abstract: We present a novel approach to discovering relations and their instantiations from a collection of documents in a single domain. Our approach learns relation types by exploiting meta-constraints that characterize the general qualities of a good relation in any domain. These constraints state that instances of a single relation should exhibit regularities at multiple levels of linguistic structure, including lexicography, syntax, and document-level context. We capture these regularities via the structure of our probabilistic model as well as a set of declaratively-specified constraints enforced during posterior inference. Across two domains our approach successfully recovers hidden relation structure, comparable to or outperforming previous state-of-the-art approaches. Furthermore, we find that a small set of constraints is applicable across the domains, and that using domain-specific constraints can further improve performance.

5 0.64454782 327 acl-2011-Using Bilingual Parallel Corpora for Cross-Lingual Textual Entailment

Author: Yashar Mehdad ; Matteo Negri ; Marcello Federico

Abstract: This paper explores the use of bilingual parallel corpora as a source of lexical knowledge for cross-lingual textual entailment. We claim that, in spite of the inherent difficulties of the task, phrase tables extracted from parallel data allow to capture both lexical relations between single words, and contextual information useful for inference. We experiment with a phrasal matching method in order to: i) build a system portable across languages, and ii) evaluate the contribution of lexical knowledge in isolation, without interaction with other inference mechanisms. Results achieved on an English-Spanish corpus obtained from the RTE3 dataset support our claim, with an overall accuracy above average scores reported by RTE participants on monolingual data. Finally, we show that using parallel corpora to extract paraphrase tables reveals their potential also in the monolingual setting, improving the results achieved with other sources of lexical knowledge.

6 0.64422184 164 acl-2011-Improving Arabic Dependency Parsing with Form-based and Functional Morphological Features

7 0.64335346 132 acl-2011-Extracting Paraphrases from Definition Sentences on the Web

8 0.64297545 190 acl-2011-Knowledge-Based Weak Supervision for Information Extraction of Overlapping Relations

9 0.6423595 269 acl-2011-Scaling up Automatic Cross-Lingual Semantic Role Annotation

10 0.6420567 3 acl-2011-A Bayesian Model for Unsupervised Semantic Parsing

11 0.64060593 126 acl-2011-Exploiting Syntactico-Semantic Structures for Relation Extraction

12 0.63854289 323 acl-2011-Unsupervised Part-of-Speech Tagging with Bilingual Graph-Based Projections

13 0.6371249 131 acl-2011-Extracting Opinion Expressions and Their Polarities - Exploration of Pipelines and Joint Models

14 0.63619578 87 acl-2011-Corpus Expansion for Statistical Machine Translation with Semantic Role Label Substitution Rules

15 0.63604796 235 acl-2011-Optimal and Syntactically-Informed Decoding for Monolingual Phrase-Based Alignment

16 0.63539809 66 acl-2011-Chinese sentence segmentation as comma classification

17 0.63510203 137 acl-2011-Fine-Grained Class Label Markup of Search Queries

18 0.63363558 128 acl-2011-Exploring Entity Relations for Named Entity Disambiguation

19 0.63313913 65 acl-2011-Can Document Selection Help Semi-supervised Learning? A Case Study On Event Extraction

20 0.63259387 178 acl-2011-Interactive Topic Modeling