acl acl2012 acl2012-64 knowledge-graph by maker-knowledge-mining

64 acl-2012-Crosslingual Induction of Semantic Roles


Source: pdf

Author: Ivan Titov ; Alexandre Klementiev

Abstract: We argue that multilingual parallel data provides a valuable source of indirect supervision for induction of shallow semantic representations. Specifically, we consider unsupervised induction of semantic roles from sentences annotated with automatically-predicted syntactic dependency representations and use a state-of-the-art generative Bayesian non-parametric model. At inference time, instead of only seeking the model which explains the monolingual data available for each language, we regularize the objective by introducing a soft constraint penalizing for disagreement in argument labeling on aligned sentences. We propose a simple approximate learning algorithm for our set-up which results in efficient inference. When applied to German-English parallel data, our method obtains a substantial improvement over a model trained without using the agreement signal, when both are tested on non-parallel sentences.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Abstract We argue that multilingual parallel data provides a valuable source of indirect supervision for induction of shallow semantic representations. [sent-3, score-0.375]

2 Specifically, we consider unsupervised induction of semantic roles from sentences annotated with automatically-predicted syntactic dependency representations and use a state-of-the-art generative Bayesian non-parametric model. [sent-4, score-0.689]

3 At inference time, instead of only seeking the model which explains the monolingual data available for each language, we regularize the objective by introducing a soft constraint penalizing for disagreement in argument labeling on aligned sentences. [sent-5, score-0.675]

4 The goal of this work is to show that parallel data is useful in unsupervised induction of shallow semantic representations. [sent-10, score-0.448]

5 Semantic role labeling (SRL) (Gildea and Jurafsky, 2002) involves predicting predicate argument structure, i.e. [sent-11, score-0.903]

6 both the identification of arguments and their assignment to underlying semantic roles. [sent-13, score-0.401]

7 (c) [A1 Mary] was blamed [A2 for planning a theft] [A0 by Peter]: the arguments 'Peter', 'Mary', and 'planning a theft' of the predicate 'blame' take the agent (A0), patient (A1), and reason (A2) roles, respectively. [sent-16, score-0.478]

8 In this work, we focus on predicting argument roles. [sent-17, score-0.434]

9 Though syntactic representations are often predictive of semantic roles (Levin, 1993), the interface between syntactic and semantic representations is far from trivial. [sent-23, score-0.747]

10 In this work, we make use of unsupervised data along with parallel texts and learn to induce semantic structures in two languages simultaneously. [sent-32, score-0.375]

11 As does most of the recent work on unsupervised SRL, we assume that our data is annotated with automatically-predicted syntactic dependency parses and aim to induce a model of linking between syntax and semantics in an unsupervised way. [sent-33, score-0.489]

12 For example, in our sentences (a) and (b) representing so-called blame alternation (Levin, 1993), the same information is conveyed in two different ways and a successful model of semantic role labeling needs to learn the corresponding linkings from the data. [sent-35, score-0.506]

13 Inducing them solely based on monolingual data, though possible, may be tricky as selectional preferences of the roles are not particularly restrictive; similar restrictions for patient and agent roles may further complicate the process. [sent-36, score-0.364]

14 Maximizing agreement between the roles predicted for both languages would provide a strong signal for inducing the proper linkings in our examples. [sent-38, score-0.276]

15 In this work, we begin with a state-of-the-art monolingual unsupervised Bayesian model (Titov and Klementiev, 2012) and focus on improving its performance in the crosslingual setting. [sent-39, score-0.39]

16 It induces a linking between syntax and semantics, encoded as a clustering of syntactic signatures of predicate arguments. [sent-40, score-0.476]

17 For predicates present in both sides of a bitext, we guide models in both languages to prefer clusterings which maximize agreement between predicate argument structures predicted for each aligned predicate pair. [sent-42, score-1.11]

18 This work is the first to consider the crosslingual setting for unsupervised SRL. [sent-49, score-0.267]

19 Section 2 begins with a definition of the crosslingual semantic role induction task we address in this paper. [sent-53, score-0.603]

20 In Section 3, we describe the base monolingual model, and in Section 4 we propose an extension for the crosslingual setting. [sent-54, score-0.218]

21 2 Problem Definition As we mentioned in the introduction, in this work we focus on the labeling stage of semantic role labeling. [sent-58, score-0.446]

22 Instead of assuming the availability of role annotated data, we rely only on automatically generated syntactic dependency graphs in both languages. [sent-61, score-0.306]

23 While we cannot expect that syntactic structure can trivially map to a semantic representation (footnote 1), we can make use of syntactic cues. [sent-62, score-0.344]

24 In the labeling stage, semantic roles are represented by clusters of arguments, and labeling a particular argument corresponds to deciding on its role cluster. [sent-63, score-1.099]

25 However, instead of dealing with argument occurrences directly (Footnote 1: although it provides a strong baseline which is difficult to beat; Grenager and Manning, 2006; Lang and Lapata, 2010; Lang and Lapata, 2011a), [sent-64, score-0.434]

26 we represent them as predicate-specific syntactic signatures, and refer to them as argument keys. [sent-65, score-0.533]

27 This representation aids our models in inducing high purity clusters (of argument keys) while reducing their granularity. [sent-66, score-0.577]

28 We follow (Lang and Lapata, 2011a) and use the following syntactic features for English to form the argument key representation: • Active or passive verb voice (ACT/PASS). [sent-67, score-0.583]

29 In the example sentences in Section 1, the argument keys for the candidate argument 'Peter' in sentences (a) and (c) would be ACT:LEFT:SBJ and PASS:RIGHT:LGS->by, respectively. [sent-72, score-0.785]
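To make the argument key representation concrete, here is a minimal Python sketch that assembles a key from the syntactic features listed above (verb voice, linear position relative to the predicate, dependency relation, and the preposition for prepositional arguments). The function name and exact encoding are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of building a predicate-specific argument key from the
# syntactic features listed above. Feature values are illustrative.

def argument_key(voice, position, relation, preposition=None):
    """Return a syntactic signature such as 'ACT:LEFT:SBJ'."""
    parts = [voice, position, relation]            # e.g. 'ACT', 'LEFT', 'SBJ'
    if preposition is not None:                    # encode the preposition, e.g. 'LGS->by'
        parts[-1] = f"{relation}->{preposition}"
    return ":".join(parts)

# 'Peter' in sentence (a): active voice, left of the predicate, subject
print(argument_key("ACT", "LEFT", "SBJ"))          # ACT:LEFT:SBJ
# 'by Peter' in sentence (c): passive voice, right of the predicate, LGS + 'by'
print(argument_key("PASS", "RIGHT", "LGS", "by"))  # PASS:RIGHT:LGS->by
```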

30 While aiming to increase the purity of argument key clusters, this particular representation will not always produce a good match: e. [sent-73, score-0.568]

31 In sum, we treat the unsupervised semantic role labeling task as clustering of argument keys. [sent-79, score-1.055]

32 Thus, argument occurrences in the corpus whose keys are clustered together are assigned the same semantic role. [sent-80, score-0.772]

33 The objective of this work is to improve argument key clusterings by inducing them simultaneously in two languages. [sent-81, score-0.543]

34 3 Monolingual Model In this section we describe one of the Bayesian models for semantic role induction proposed in (Titov and Klementiev, 2012). [sent-82, score-0.467]

35 2 The Generative Story In Section 2 we defined our task as clustering of argument keys, where each cluster corresponds to a semantic role. [sent-103, score-0.747]

36 If an argument key k is assigned to a role r (k ∈ r), all of its occurrences are labeled r. [sent-104, score-0.691]

37 First, it enforces the selectional restriction assumption: namely it stipulates that the distribution over potential argument fillers is sparse for every role, implying that ‘peaky’ distributions of arguments for each role r are preferred to flat distributions. [sent-106, score-0.881]

38 Second, each role normally appears at most once per predicate occurrence. [sent-107, score-0.413]

39 The model associates two distributions with each predicate: one governs the selection of argument fillers for each semantic role, and the other models (and penalizes) duplicate occurrence of roles. [sent-109, score-0.661]

40 Each predicate occurrence is generated independently given these distributions. [sent-110, score-0.252]

41 Let us describe the model by first defining how the set of model parameters and an argument key clustering are drawn, and then explaining the generation of individual predicate and argument instances. [sent-111, score-1.205]

42 For each predicate p, we start by generating a partition of argument keys Bp with each subset r ∈ Bp representing a single semantic role. [sent-113, score-0.832]

43 The crucial part of the model is the set of selectional preference parameters θp,r, the distributions of arguments x for each role r of predicate p. [sent-115, score-0.655]

44 We represent arguments by lemmas of their syntactic heads. [sent-116, score-0.258]

45 The preference for sparseness of the distributions θp,r is encoded by drawing them from the DP prior DP(β, H(A)) with a small concentration parameter β; the base probability distribution H(A) is just the normalized frequencies of arguments in the corpus. [sent-117, score-0.278]
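The sketch below illustrates, under the usual Polya-urn (Chinese restaurant) view of a Dirichlet process draw, how these two pieces could interact: the base distribution H(A) is the normalized corpus frequencies of argument fillers, and a small concentration β makes each role's filler distribution reuse earlier draws, i.e. become 'peaky'. This is a toy illustration of the prior, assuming the standard DP predictive rule; it is not the authors' inference code.

```python
import random
from collections import Counter

def base_distribution(corpus_fillers):
    """H(A): normalized frequencies of argument fillers in the corpus."""
    counts = Counter(corpus_fillers)
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}

def sample_filler(previous_fillers, H, beta=0.1):
    """Polya-urn predictive rule for a DP(beta, H) draw: reuse an earlier filler
    with probability proportional to its count, or back off to H with weight beta.
    A small beta yields sparse ('peaky') per-role filler distributions."""
    n = len(previous_fillers)
    if random.random() < beta / (n + beta):
        fillers, weights = zip(*H.items())
        return random.choices(fillers, weights=weights)[0]
    counts = Counter(previous_fillers)
    fillers, weights = zip(*counts.items())
    return random.choices(fillers, weights=weights)[0]
```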

46 The geometric distribution ψp,r is used to model the number of times a role r appears with a given predicate occurrence. [sent-118, score-0.413]

47 If 0 is drawn then the semantic role is not realized for the given occurrence; otherwise the number of additional occurrences of role r is drawn from the geometric distribution Geom(ψp,r). [sent-120, score-0.494]

48 The Beta priors over ψ can indicate the preference towards generating at most one argument for each role. [sent-121, score-0.478]

49 Now, when parameters and argument key clusterings are chosen, we can summarize the remainder of the generative story as follows. [sent-122, score-0.583]

50 For each predicate role we independently decide on the number of role occurrences. [sent-124, score-0.666]

51 Then each of the arguments is generated (see GenArgument) by choosing an argument key kp,r uniformly from the set of argument keys assigned to the cluster r, and finally choosing its filler xp,r, where the filler is the lemma of the syntactic head of the argument. [sent-125, score-1.544]
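A compact sketch of this generative story is given below: for every role, a geometric draw decides how many times the role is realized, and each realization picks an argument key uniformly from the role's cluster and then a filler lemma. The parameter names (psi, theta) and the use of fixed filler tables are simplifying assumptions; in the model itself the filler distributions are DP-distributed as described above.

```python
import random

def geometric_count(p, rng=random):
    """Number of realizations of a role: 0 with probability p, otherwise one
    occurrence plus a geometric number of additional ones (a simplifying
    reading of the Geom(psi) draw described above)."""
    if rng.random() < p:
        return 0
    extra = 0
    while rng.random() >= p:
        extra += 1
    return 1 + extra

def generate_predicate_occurrence(role_clusters, psi, theta, rng=random):
    """role_clusters: role -> list of argument keys clustered into that role
    psi:           role -> parameter of the geometric count distribution
    theta:         role -> dict mapping filler lemmas to probabilities"""
    arguments = []
    for r, keys in role_clusters.items():
        for _ in range(geometric_count(psi[r], rng)):
            key = rng.choice(keys)                       # uniform over the cluster
            fillers, weights = zip(*theta[r].items())
            filler = rng.choices(fillers, weights=weights)[0]
            arguments.append((r, key, filler))
    return arguments
```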

52 However, the preposition is not ignored but rather encoded in the corresponding argument key, as explained in Section 2. [sent-127, score-0.434]

53 4 Multilingual Extension As we argued in Section 1, our goal is to penalize for disagreement in semantic structures predicted for each language on parallel data. [sent-129, score-0.242]

54 In doing so, as in much of previous work on unsupervised induction of linguistic structures, we rely on automatically produced word alignments. [sent-130, score-0.245]

55 In Section 6, we describe how we use word alignment to decide if two arguments are aligned; for now, we assume that (noisy) argument alignments are given. [sent-131, score-0.65]

56 Intuitively, when two arguments are aligned in parallel data, we expect them to be labeled with the same semantic role in both languages. [sent-132, score-0.634]

57 A straightforward implementation of this idea would require us to maintain one-to-one mapping between semantic roles across languages. [sent-137, score-0.287]

58 Instead of assuming this correspondence, we penalize for the lack of isomorphism between the sets of roles in aligned predicates with the penalty dependent on the degree of violation. [sent-138, score-0.405]

59 This softer approach is more appropriate in our setting, as individual argument keys do not always deterministically map to gold standard roles, and strict penalization would result in the propagation of the corresponding over-coarse clusters to the other language. [sent-139, score-0.685]

60 where P̂(r(l) | r(l′)) is the proportion of times the role r(l′) of predicate p(l′) in language l′ is aligned to the role r(l) of predicate p(l) in language l, f_r(l) is the total number of times the role is aligned, and γ(l) is a non-negative constant. [sent-146, score-0.892]
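The summary does not spell out the exact functional form of the penalty, so the sketch below assumes one plausible instantiation built only from the quantities defined above: each role r in language l contributes gamma * f_r * (1 - max_r' P̂(r | r')), so frequent roles whose aligned occurrences are spread over many roles in the other language are penalized most. Treat this as a hypothetical illustration, not the paper's objective.

```python
from collections import Counter

def alignment_penalty(aligned_role_pairs, gamma=1.0):
    """Hypothetical penalty using the quantities defined above.

    aligned_role_pairs: one (r, r_prime) pair per aligned argument, where r is
    the role assigned in language l and r_prime the role in language l'."""
    f = Counter()        # f_r: number of aligned occurrences of role r
    joint = Counter()    # co-occurrence counts of (r, r_prime)
    for r, r_prime in aligned_role_pairs:
        f[r] += 1
        joint[(r, r_prime)] += 1
    penalty = 0.0
    for r, f_r in f.items():
        best = max(c for (rr, _), c in joint.items() if rr == r)
        p_hat = best / f_r                   # max_r' P_hat(r | r')
        penalty += gamma * f_r * (1.0 - p_hat)
    return penalty
```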

61 Second, more frequent roles should have higher penalty as they compete with the joint probability term, the likelihood part of which scales linearly with role counts. [sent-149, score-0.487]

62 When choosing P̂(r(l) | r(l′)) ... (Footnote 4: the average purity for argument keys with automatic argument identification and using predicted syntactic trees, before any clustering, is approximately 90.) [sent-155, score-1.386]

63 We start by discussing search for the maximum a posteriori clustering of argument keys in the monolingual set-up and then discuss how it can be extended to accommodate the role alignment penalty. [sent-163, score-1.006]

64 Nevertheless, searching for a MAP clustering can be expensive: even a move involving a single argument key implies some computations for all its occurrences in the corpus. [sent-166, score-0.565]

65 ... (Daume III, 2007), we use a greedy procedure where we start with each argument key assigned to an individual cluster, and then iteratively try to merge clusters. [sent-169, score-0.484]

66 Each move involves (1) choosing an argument key and (2) deciding on a cluster to reassign it to. [sent-170, score-0.615]

67 Instead of choosing argument keys randomly at the first stage, we order them by corpus frequency. [sent-172, score-0.671]

68 This ordering is beneficial as getting clustering right for frequent argument keys is more important and the corresponding decisions should be made earlier. [sent-173, score-0.707]
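The greedy MAP search just described can be sketched as follows: start with each argument key in its own cluster, visit keys in decreasing order of corpus frequency, and move each key to whichever cluster most improves the objective. The `score` callable is a placeholder for the model's (unnormalized) log posterior, which this summary does not spell out; the sketch only shows the control flow.

```python
def greedy_map_clustering(argument_keys, frequency, score):
    """Greedy search over clusterings of one predicate's argument keys.

    argument_keys: iterable of argument keys
    frequency:     dict mapping key -> corpus frequency
    score:         callable scoring a dict key -> cluster id (higher is better)"""
    # start with each argument key assigned to an individual cluster
    assignment = {k: i for i, k in enumerate(argument_keys)}
    # process keys in decreasing order of corpus frequency
    for key in sorted(argument_keys, key=lambda k: -frequency[k]):
        best_cluster, best_score = assignment[key], score(assignment)
        for cluster in set(assignment.values()):
            candidate = dict(assignment)          # try reassigning the key
            candidate[key] = cluster
            s = score(candidate)
            if s > best_score:
                best_cluster, best_score = cluster, s
        assignment[key] = best_cluster
    return assignment
```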

69 The role alignment penalty introduces interdependencies between the objectives for each bilingual predicate pair chosen by the assignment algorithm as discussed in Section 4. [sent-178, score-0.646]

70 At first glance it may seem that the alignment penalty can be easily integrated into the greedy MAP search algorithm: instead of considering individual argument keys, one could use pairs of argument keys and decide on their assignment to clusters jointly. [sent-180, score-1.352]

71 However, given that there is no isomorphic mapping between argument keys across languages, this solution is unlikely to be satisfactory. [sent-181, score-0.626]

72 For each predicate, we first induce semantic roles independently for the first language, as described in Section 5. [sent-183, score-0.333]

73 6.1 Data: We run our main experiments on the English-German section of the Europarl v6 parallel corpus. (Footnote 6: we also considered a variation of this idea where a pair of argument keys is chosen randomly, proportional to their alignment frequency, and multiple iterations are repeated.) [sent-190, score-0.74]

74 It consists of a list of 8 rules, which use non-lexicalized properties of syntactic paths between a predicate and a candidate argument to iteratively discard non-arguments from the list of all words in a sentence. [sent-206, score-0.739]

75 For German, we use the LTH argument identification classifier. [sent-207, score-0.493]

76 Accuracy of argument identification on CoNLL 2009 using predicted syntactic analyses was 80. [sent-208, score-0.631]

77 For every argument identified in the previous stage, we chose a set of words consisting of the argument’s syntactic head and, for prepositional phrases, the head noun of the object noun phrase. [sent-213, score-0.533]

78 We mark arguments in two languages as aligned if there is any word alignment between the corresponding sets and if they are arguments of aligned predicates. [sent-214, score-0.546]
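A direct transcription of this alignment heuristic, with illustrative data structures (sets of token indices for each argument and a set of word-alignment index pairs):

```python
def arguments_aligned(arg_words_l1, arg_words_l2, word_alignments, predicates_aligned):
    """Two arguments are marked as aligned if their predicates are aligned and
    any word alignment links their word sets.

    arg_words_l1, arg_words_l2: sets of token indices for the two arguments
    word_alignments: set of (i, j) index pairs produced by the word aligner
    predicates_aligned: bool, whether the governing predicates are aligned"""
    if not predicates_aligned:
        return False
    return any((i, j) in word_alignments for i in arg_words_l1 for j in arg_words_l2)
```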

79 Purity measures the degree to which each cluster contains arguments sharing the same gold role: PU = (1/N) Σ_i max_j |G_j ∩ C_i|, where C_i is the set of arguments in the i-th induced cluster, G_j is the set of arguments in the j-th gold cluster, and N is the total number of arguments. [sent-217, score-0.616]
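The purity metric can be computed directly from this definition; the sketch below takes the induced clusters C_i and gold clusters G_j as sets of argument identifiers.

```python
def purity(induced_clusters, gold_clusters):
    """PU = (1/N) * sum_i max_j |G_j intersect C_i|, as defined above.

    induced_clusters: list of sets C_i of argument identifiers
    gold_clusters:    list of sets G_j of argument identifiers"""
    n = sum(len(c) for c in induced_clusters)                # N: total arguments
    matched = sum(max(len(c & g) for g in gold_clusters)     # best gold match per cluster
                  for c in induced_clusters)
    return matched / n
```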

80 Since our goal is to evaluate the clustering algorithms, we do not include incorrectly identified arguments when computing these metrics. [sent-219, score-0.24]

81 Parameters governing duplicate role generation and penalty weights γ(·) were set to be the same for both languages, and are 100, 1. [sent-222, score-0.346]

82 We begin by evaluating our base monolingual model MonoBayes alone against the current best approaches to unsupervised semantic role induction. [sent-233, score-0.607]

83 ... performance with gold argument identification and gold syntactic parses on the CoNLL 2008 shared-task dataset. [sent-239, score-0.628]

84 We report the results using gold argument identification and gold syntactic parses in order to focus the evaluation on the argument labeling stage and to minimize the noise due to automatic syntactic annotations. [sent-242, score-1.254]

85 Additionally, we compute the syntactic function baseline (SyntF), which simply clusters predicate arguments according to the dependency relation to their head. [sent-244, score-0.523]

86 Note that recent unsupervised SRL methods ... (Footnote 9: the scores are computed on correctly identified arguments only, and tend to be higher in these experiments, probably because the complex arguments get discarded by the argument identifier.) [sent-254, score-0.883]

87 The relatively low expressivity and limited purity of our argument keys (see discussion in Section 4) are likely to limit potential improvements when using them in crosslingual learning. [sent-263, score-0.846]

88 The natural next step would be to consider crosslingual learning with a more expressive model of the syntactic frame and syntax-semantics linking. [sent-264, score-0.235]

89 ..., 2009) or morphological analysis (Snyder and Barzilay, 2008), and we are not aware of any previous work on induction of semantic representations in the crosslingual setting. [sent-267, score-0.454]

90 Learning of semantic representations in the context of monolingual weakly-parallel data was studied in Titov and Kozhevnikov (2010) but their setting was semisupervised and they experimented only on a restricted domain. [sent-268, score-0.286]

91 Early unsupervised approaches to the SRL task include (Swier and Stevenson, 2004), where the VerbNet verb lexicon was used to guide unsupervised learning, and the generative model of Grenager and Manning (2006), which exploits linguistic priors on the syntactic-semantic interface. [sent-276, score-0.262]

92 More recently, the role induction problem has been studied in Lang and Lapata (2010) where it has been reformulated as a problem of detecting alternations and mapping non-standard linkings to the canonical ones. [sent-277, score-0.376]

93 Later, Lang and Lapata (2011a) proposed an algorithmic approach to clustering argument signatures which achieves higher accuracy and outperforms the syntactic baseline. [sent-278, score-0.664]

94 In Lang and Lapata (2011b), the role induction problem is formulated as a graph partitioning problem: each vertex in the graph corresponds to a predicate occurrence and edges represent lexical and syntactic similarities between the occurrences. [sent-279, score-0.694]

95 Also, a related task of unsupervised argument identification has been considered in Abend et al. [sent-281, score-0.624]

96 8 Conclusions This work adds unsupervised semantic role labeling to the list of NLP tasks benefiting from the crosslingual induction setting. [sent-283, score-0.79]

97 We show that an agreement signal extracted from parallel data provides indirect supervision capable of substantially improving a state-of-the-art model for semantic role induction. [sent-284, score-0.445]

98 Semi-supervised semantic role labeling using the Latent Words Language Model. [sent-322, score-0.409]

99 Corpus expansion for statistical machine translation with semantic role label substitution rules. [sent-339, score-0.353]

100 The CoNLL-2008 shared task on joint parsing of syntactic and semantic dependencies. [sent-497, score-0.245]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('argument', 0.434), ('lang', 0.252), ('role', 0.207), ('predicate', 0.206), ('keys', 0.192), ('lapata', 0.161), ('arguments', 0.159), ('semantic', 0.146), ('roles', 0.141), ('penalty', 0.139), ('crosslingual', 0.136), ('unsupervised', 0.131), ('srl', 0.126), ('induction', 0.114), ('titov', 0.112), ('theft', 0.105), ('snyder', 0.103), ('conll', 0.102), ('syntactic', 0.099), ('klementiev', 0.091), ('cluster', 0.086), ('purity', 0.084), ('monolingual', 0.082), ('clustering', 0.081), ('restaurant', 0.075), ('german', 0.07), ('aligned', 0.065), ('blamed', 0.063), ('crps', 0.063), ('plas', 0.063), ('bp', 0.062), ('predicates', 0.06), ('identification', 0.059), ('clusters', 0.059), ('clusterings', 0.059), ('multilingual', 0.058), ('representations', 0.058), ('alignment', 0.057), ('parallel', 0.057), ('labeling', 0.056), ('linkings', 0.055), ('mirella', 0.055), ('bayesian', 0.055), ('surdeanu', 0.054), ('induced', 0.053), ('semantics', 0.052), ('grenager', 0.051), ('key', 0.05), ('customers', 0.05), ('goldwasser', 0.05), ('signatures', 0.05), ('planning', 0.05), ('gj', 0.047), ('ivan', 0.047), ('independently', 0.046), ('johansson', 0.045), ('choosing', 0.045), ('pado', 0.044), ('preference', 0.044), ('blame', 0.042), ('burchardt', 0.042), ('deschacht', 0.042), ('fillers', 0.042), ('kaisser', 0.042), ('kozhevnikov', 0.042), ('mbpa', 0.042), ('mmci', 0.042), ('monobayes', 0.042), ('salsa', 0.042), ('begin', 0.041), ('languages', 0.041), ('syntax', 0.04), ('joel', 0.04), ('der', 0.04), ('regina', 0.04), ('projection', 0.04), ('story', 0.04), ('europarl', 0.04), ('distributions', 0.039), ('predicted', 0.039), ('kuhn', 0.039), ('inference', 0.038), ('stage', 0.037), ('assignment', 0.037), ('pu', 0.037), ('mikhail', 0.037), ('swier', 0.037), ('abend', 0.037), ('meal', 0.037), ('urstenau', 0.037), ('mary', 0.037), ('parses', 0.036), ('concentration', 0.036), ('motivates', 0.036), ('discussing', 0.035), ('alexandre', 0.035), ('customer', 0.035), ('substantially', 0.035), ('graph', 0.034), ('dirichlet', 0.034)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999923 64 acl-2012-Crosslingual Induction of Semantic Roles

Author: Ivan Titov ; Alexandre Klementiev

Abstract: We argue that multilingual parallel data provides a valuable source of indirect supervision for induction of shallow semantic representations. Specifically, we consider unsupervised induction of semantic roles from sentences annotated with automatically-predicted syntactic dependency representations and use a state-of-the-art generative Bayesian non-parametric model. At inference time, instead of only seeking the model which explains the monolingual data available for each language, we regularize the objective by introducing a soft constraint penalizing for disagreement in argument labeling on aligned sentences. We propose a simple approximate learning algorithm for our set-up which results in efficient inference. When applied to German-English parallel data, our method obtains a substantial improvement over a model trained without using the agreement signal, when both are tested on non-parallel sentences.

2 0.35484561 147 acl-2012-Modeling the Translation of Predicate-Argument Structure for SMT

Author: Deyi Xiong ; Min Zhang ; Haizhou Li

Abstract: Predicate-argument structure contains rich semantic information of which statistical machine translation hasn’t taken full advantage. In this paper, we propose two discriminative, feature-based models to exploit predicateargument structures for statistical machine translation: 1) a predicate translation model and 2) an argument reordering model. The predicate translation model explores lexical and semantic contexts surrounding a verbal predicate to select desirable translations for the predicate. The argument reordering model automatically predicts the moving direction of an argument relative to its predicate after translation using semantic features. The two models are integrated into a state-of-theart phrase-based machine translation system and evaluated on Chinese-to-English transla- , tion tasks with large-scale training data. Experimental results demonstrate that the two models significantly improve translation accuracy.

3 0.32627741 209 acl-2012-Unsupervised Semantic Role Induction with Global Role Ordering

Author: Nikhil Garg ; James Henserdon

Abstract: We propose a probabilistic generative model for unsupervised semantic role induction, which integrates local role assignment decisions and a global role ordering decision in a unified model. The role sequence is divided into intervals based on the notion of primary roles, and each interval generates a sequence of secondary roles and syntactic constituents using local features. The global role ordering consists of the sequence of primary roles only, thus making it a partial ordering.

4 0.18573263 33 acl-2012-Automatic Event Extraction with Structured Preference Modeling

Author: Wei Lu ; Dan Roth

Abstract: This paper presents a novel sequence labeling model based on the latent-variable semiMarkov conditional random fields for jointly extracting argument roles of events from texts. The model takes in coarse mention and type information and predicts argument roles for a given event template. This paper addresses the event extraction problem in a primarily unsupervised setting, where no labeled training instances are available. Our key contribution is a novel learning framework called structured preference modeling (PM), that allows arbitrary preference to be assigned to certain structures during the learning procedure. We establish and discuss connections between this framework and other existing works. We show empirically that the structured preferences are crucial to the success of our task. Our model, trained without annotated data and with a small number of structured preferences, yields performance competitive to some baseline supervised approaches.

5 0.17424405 176 acl-2012-Sentence Compression with Semantic Role Constraints

Author: Katsumasa Yoshikawa ; Ryu Iida ; Tsutomu Hirao ; Manabu Okumura

Abstract: For sentence compression, we propose new semantic constraints to directly capture the relations between a predicate and its arguments, whereas the existing approaches have focused on relatively shallow linguistic properties, such as lexical and syntactic information. These constraints are based on semantic roles and superior to the constraints of syntactic dependencies. Our empirical evaluation on the Written News Compression Corpus (Clarke and Lapata, 2008) demonstrates that our system achieves results comparable to other state-of-the-art techniques.

6 0.15479384 53 acl-2012-Combining Textual Entailment and Argumentation Theory for Supporting Online Debates Interactions

7 0.13495068 208 acl-2012-Unsupervised Relation Discovery with Sense Disambiguation

8 0.12720555 48 acl-2012-Classifying French Verbs Using French and English Lexical Resources

9 0.12464432 4 acl-2012-A Comparative Study of Target Dependency Structures for Statistical Machine Translation

10 0.11822063 214 acl-2012-Verb Classification using Distributional Similarity in Syntactic and Semantic Structures

11 0.11319434 172 acl-2012-Selective Sharing for Multilingual Dependency Parsing

12 0.10028513 167 acl-2012-QuickView: NLP-based Tweet Search

13 0.095180169 130 acl-2012-Learning Syntactic Verb Frames using Graphical Models

14 0.084786668 12 acl-2012-A Graph-based Cross-lingual Projection Approach for Weakly Supervised Relation Extraction

15 0.08344014 140 acl-2012-Machine Translation without Words through Substring Alignment

16 0.082954504 3 acl-2012-A Class-Based Agreement Model for Generating Accurately Inflected Translations

17 0.082272805 5 acl-2012-A Comparison of Chinese Parsers for Stanford Dependencies

18 0.081083953 63 acl-2012-Cross-lingual Parse Disambiguation based on Semantic Correspondence

19 0.080447838 127 acl-2012-Large-Scale Syntactic Language Modeling with Treelets

20 0.080067851 179 acl-2012-Smaller Alignment Models for Better Translations: Unsupervised Word Alignment with the l0-norm


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.278), (1, 0.019), (2, -0.086), (3, 0.037), (4, 0.02), (5, -0.009), (6, -0.061), (7, 0.14), (8, 0.058), (9, 0.093), (10, 0.062), (11, -0.066), (12, 0.165), (13, -0.166), (14, -0.484), (15, 0.083), (16, 0.002), (17, -0.111), (18, 0.024), (19, 0.226), (20, -0.073), (21, 0.114), (22, -0.01), (23, -0.068), (24, 0.051), (25, -0.046), (26, 0.05), (27, 0.015), (28, -0.018), (29, 0.02), (30, 0.024), (31, -0.071), (32, 0.041), (33, -0.006), (34, 0.01), (35, 0.058), (36, -0.152), (37, 0.048), (38, -0.063), (39, -0.095), (40, 0.024), (41, -0.13), (42, -0.101), (43, 0.033), (44, -0.031), (45, 0.017), (46, -0.063), (47, -0.101), (48, -0.074), (49, 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97914463 64 acl-2012-Crosslingual Induction of Semantic Roles

Author: Ivan Titov ; Alexandre Klementiev

Abstract: We argue that multilingual parallel data provides a valuable source of indirect supervision for induction of shallow semantic representations. Specifically, we consider unsupervised induction of semantic roles from sentences annotated with automatically-predicted syntactic dependency representations and use a stateof-the-art generative Bayesian non-parametric model. At inference time, instead of only seeking the model which explains the monolingual data available for each language, we regularize the objective by introducing a soft constraint penalizing for disagreement in argument labeling on aligned sentences. We propose a simple approximate learning algorithm for our set-up which results in efficient inference. When applied to German-English parallel data, our method obtains a substantial improvement over a model trained without using the agreement signal, when both are tested on non-parallel sentences.

2 0.92507178 209 acl-2012-Unsupervised Semantic Role Induction with Global Role Ordering

Author: Nikhil Garg ; James Henserdon

Abstract: We propose a probabilistic generative model for unsupervised semantic role induction, which integrates local role assignment decisions and a global role ordering decision in a unified model. The role sequence is divided into intervals based on the notion of primary roles, and each interval generates a sequence of secondary roles and syntactic constituents using local features. The global role ordering consists of the sequence of primary roles only, thus making it a partial ordering.

3 0.67321426 147 acl-2012-Modeling the Translation of Predicate-Argument Structure for SMT

Author: Deyi Xiong ; Min Zhang ; Haizhou Li

Abstract: Predicate-argument structure contains rich semantic information of which statistical machine translation hasn’t taken full advantage. In this paper, we propose two discriminative, feature-based models to exploit predicateargument structures for statistical machine translation: 1) a predicate translation model and 2) an argument reordering model. The predicate translation model explores lexical and semantic contexts surrounding a verbal predicate to select desirable translations for the predicate. The argument reordering model automatically predicts the moving direction of an argument relative to its predicate after translation using semantic features. The two models are integrated into a state-of-theart phrase-based machine translation system and evaluated on Chinese-to-English transla- , tion tasks with large-scale training data. Experimental results demonstrate that the two models significantly improve translation accuracy.

4 0.62904 53 acl-2012-Combining Textual Entailment and Argumentation Theory for Supporting Online Debates Interactions

Author: Elena Cabrio ; Serena Villata

Abstract: Blogs and forums are widely adopted by online communities to debate about various issues. However, a user that wants to cut in on a debate may experience some difficulties in extracting the current accepted positions, and can be discouraged from interacting through these applications. In our paper, we combine textual entailment with argumentation theory to automatically extract the arguments from debates and to evaluate their acceptability.

5 0.49723196 176 acl-2012-Sentence Compression with Semantic Role Constraints

Author: Katsumasa Yoshikawa ; Ryu Iida ; Tsutomu Hirao ; Manabu Okumura

Abstract: For sentence compression, we propose new semantic constraints to directly capture the relations between a predicate and its arguments, whereas the existing approaches have focused on relatively shallow linguistic properties, such as lexical and syntactic information. These constraints are based on semantic roles and superior to the constraints of syntactic dependencies. Our empirical evaluation on the Written News Compression Corpus (Clarke and Lapata, 2008) demonstrates that our system achieves results comparable to other state-of-the-art techniques.

6 0.46953261 33 acl-2012-Automatic Event Extraction with Structured Preference Modeling

7 0.41296959 172 acl-2012-Selective Sharing for Multilingual Dependency Parsing

8 0.37670457 214 acl-2012-Verb Classification using Distributional Similarity in Syntactic and Semantic Structures

9 0.37103343 129 acl-2012-Learning High-Level Planning from Text

10 0.36544842 48 acl-2012-Classifying French Verbs Using French and English Lexical Resources

11 0.36247155 11 acl-2012-A Feature-Rich Constituent Context Model for Grammar Induction

12 0.35713199 130 acl-2012-Learning Syntactic Verb Frames using Graphical Models

13 0.32926369 4 acl-2012-A Comparative Study of Target Dependency Structures for Statistical Machine Translation

14 0.32890749 208 acl-2012-Unsupervised Relation Discovery with Sense Disambiguation

15 0.31256911 127 acl-2012-Large-Scale Syntactic Language Modeling with Treelets

16 0.28877941 112 acl-2012-Humor as Circuits in Semantic Networks

17 0.28869328 72 acl-2012-Detecting Semantic Equivalence and Information Disparity in Cross-lingual Documents

18 0.28684008 63 acl-2012-Cross-lingual Parse Disambiguation based on Semantic Correspondence

19 0.28122619 117 acl-2012-Improving Word Representations via Global Context and Multiple Word Prototypes

20 0.27359536 167 acl-2012-QuickView: NLP-based Tweet Search


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(25, 0.022), (26, 0.062), (28, 0.031), (30, 0.026), (37, 0.367), (39, 0.036), (59, 0.015), (71, 0.022), (74, 0.028), (82, 0.022), (84, 0.026), (85, 0.034), (90, 0.102), (92, 0.065), (94, 0.028), (99, 0.049)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.89662498 42 acl-2012-Bootstrapping via Graph Propagation

Author: Max Whitney ; Anoop Sarkar

Abstract: Bootstrapping a classifier from a small set of seed rules can be viewed as the propagation of labels between examples via features shared between them. This paper introduces a novel variant of the Yarowsky algorithm based on this view. It is a bootstrapping learning method which uses a graph propagation algorithm with a well defined objective function. The experimental results show that our proposed bootstrapping algorithm achieves state of the art performance or better on several different natural language data sets.

same-paper 2 0.89095449 64 acl-2012-Crosslingual Induction of Semantic Roles

Author: Ivan Titov ; Alexandre Klementiev

Abstract: We argue that multilingual parallel data provides a valuable source of indirect supervision for induction of shallow semantic representations. Specifically, we consider unsupervised induction of semantic roles from sentences annotated with automatically-predicted syntactic dependency representations and use a stateof-the-art generative Bayesian non-parametric model. At inference time, instead of only seeking the model which explains the monolingual data available for each language, we regularize the objective by introducing a soft constraint penalizing for disagreement in argument labeling on aligned sentences. We propose a simple approximate learning algorithm for our set-up which results in efficient inference. When applied to German-English parallel data, our method obtains a substantial improvement over a model trained without using the agreement signal, when both are tested on non-parallel sentences.

3 0.88967896 114 acl-2012-IRIS: a Chat-oriented Dialogue System based on the Vector Space Model

Author: Rafael E. Banchs ; Haizhou Li

Abstract: This system demonstration paper presents IRIS (Informal Response Interactive System), a chat-oriented dialogue system based on the vector space model framework. The system belongs to the class of examplebased dialogue systems and builds its chat capabilities on a dual search strategy over a large collection of dialogue samples. Additional strategies allowing for system adaptation and learning implemented over the same vector model space framework are also described and discussed. 1

4 0.87243104 115 acl-2012-Identifying High-Impact Sub-Structures for Convolution Kernels in Document-level Sentiment Classification

Author: Zhaopeng Tu ; Yifan He ; Jennifer Foster ; Josef van Genabith ; Qun Liu ; Shouxun Lin

Abstract: Convolution kernels support the modeling of complex syntactic information in machinelearning tasks. However, such models are highly sensitive to the type and size of syntactic structure used. It is therefore an important challenge to automatically identify high impact sub-structures relevant to a given task. In this paper we present a systematic study investigating (combinations of) sequence and convolution kernels using different types of substructures in document-level sentiment classification. We show that minimal sub-structures extracted from constituency and dependency trees guided by a polarity lexicon show 1.45 pointabsoluteimprovementinaccuracy overa bag-of-words classifier on a widely used sentiment corpus. 1

5 0.84735906 71 acl-2012-Dependency Hashing for n-best CCG Parsing

Author: Dominick Ng ; James R. Curran

Abstract: Optimising for one grammatical representation, but evaluating over a different one is a particular challenge for parsers and n-best CCG parsing. We find that this mismatch causes many n-best CCG parses to be semantically equivalent, and describe a hashing technique that eliminates this problem, improving oracle n-best F-score by 0.7% and reranking accuracy by 0.4%. We also present a comprehensive analysis of errors made by the C&C; CCG parser, providing the first breakdown of the impact of implementation decisions, such as supertagging, on parsing accuracy.

6 0.61035317 214 acl-2012-Verb Classification using Distributional Similarity in Syntactic and Semantic Structures

7 0.60581362 146 acl-2012-Modeling Topic Dependencies in Hierarchical Text Categorization

8 0.56988287 80 acl-2012-Efficient Tree-based Approximation for Entailment Graph Learning

9 0.56335783 147 acl-2012-Modeling the Translation of Predicate-Argument Structure for SMT

10 0.56018651 63 acl-2012-Cross-lingual Parse Disambiguation based on Semantic Correspondence

11 0.54331452 130 acl-2012-Learning Syntactic Verb Frames using Graphical Models

12 0.53968495 106 acl-2012-Head-driven Transition-based Parsing with Top-down Prediction

13 0.53655875 184 acl-2012-String Re-writing Kernel

14 0.53641051 183 acl-2012-State-of-the-Art Kernels for Natural Language Processing

15 0.53562087 175 acl-2012-Semi-supervised Dependency Parsing using Lexical Affinities

16 0.53447884 121 acl-2012-Iterative Viterbi A* Algorithm for K-Best Sequential Decoding

17 0.5318085 211 acl-2012-Using Rejuvenation to Improve Particle Filtering for Bayesian Word Segmentation

18 0.53082913 219 acl-2012-langid.py: An Off-the-shelf Language Identification Tool

19 0.52843159 5 acl-2012-A Comparison of Chinese Parsers for Stanford Dependencies

20 0.52738577 30 acl-2012-Attacking Parsing Bottlenecks with Unlabeled Data and Relevant Factorizations