acl acl2010 acl2010-153 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Junhui Li ; Guodong Zhou ; Hwee Tou Ng
Abstract: This paper explores joint syntactic and semantic parsing of Chinese to further improve the performance of both syntactic and semantic parsing, in particular the performance of semantic parsing (in this paper, semantic role labeling). This is done from two levels. Firstly, an integrated parsing approach is proposed to integrate semantic parsing into the syntactic parsing process. Secondly, semantic information generated by semantic parsing is incorporated into the syntactic parsing model to better capture semantic information in syntactic parsing. Evaluation on Chinese TreeBank, Chinese PropBank, and Chinese NomBank shows that our integrated parsing approach outperforms the pipeline parsing approach on n-best parse trees, a natural extension of the widely used pipeline parsing approach on the top-best parse tree. Moreover, it shows that incorporating semantic role-related information into the syntactic parsing model significantly improves the performance of both syntactic parsing and semantic parsing. To our best knowledge, this is the first research on exploring syntactic parsing and semantic role labeling for both verbal and nominal predicates in an integrated way. 1
Reference: text
sentIndex sentText sentNum sentScore
1 cn Abstract This paper explores joint syntactic and semantic parsing of Chinese to further improve the performance of both syntactic and semantic parsing, in particular the performance of semantic parsing (in this paper, semantic role labeling). [sent-3, score-1.817]
2 Firstly, an integrated parsing approach is proposed to integrate semantic parsing into the syntactic parsing process. [sent-5, score-1.372]
3 Secondly, semantic information generated by semantic parsing is incorporated into the syntactic parsing model to better capture semantic information in syntactic parsing. [sent-6, score-1.511]
4 Evaluation on Chinese TreeBank, Chinese PropBank, and Chinese NomBank shows that our integrated parsing approach outperforms the pipeline parsing approach on n-best parse trees, a natural extension of the widely used pipeline parsing approach on the top-best parse tree. [sent-7, score-1.787]
5 Moreover, it shows that incorporating semantic role-related information into the syntactic parsing model significantly improves the performance of both syntactic parsing and semantic parsing. [sent-8, score-1.336]
6 To the best of our knowledge, this is the first research exploring syntactic parsing and semantic role labeling for both verbal and nominal predicates in an integrated way. [sent-9, score-1.519]
7 Due to the difficulty in deep semantic parsing, most previous work focuses on shallow semantic parsing, which assigns a simple structure (such as WHO did WHAT to WHOM, WHEN, WHERE, WHY, HOW) to each predicate in a sentence. [sent-11, score-0.599]
8 Given a sentence and a predicate (either a verb or a noun) in the sentence, SRL recognizes and maps all the constituents in the sentence into their corresponding semantic arguments (roles) of the predicate. [sent-15, score-0.485]
9 According to predicate type, SRL can be divided into SRL for verbal predicates (verbal SRL, in short) and SRL for nominal predicates (nominal SRL, in short). [sent-23, score-0.995]
10 Nevertheless, for both verbal and nominal SRL, state-of-the-art systems depend heavily on the top-best parse tree and there exists a large performance gap between SRL based on the gold parse tree and the top-best parse tree. [sent-27, score-1.157]
11 ©2010 Association for Computational Linguistics, pages 108–117. While it may be difficult to further improve syntactic parsing, a promising alternative is to perform both syntactic and semantic parsing in an integrated way. [sent-40, score-0.97]
12 Given the close interaction between the two tasks, joint learning not only allows uncertainty about syntactic parsing to be carried forward to semantic parsing but also allows useful information from semantic parsing to be carried backward to syntactic parsing. [sent-41, score-1.661]
13 This paper explores joint learning of syntactic and semantic parsing for Chinese texts from two levels. [sent-42, score-0.71]
14 Firstly, an integrated parsing approach is proposed to benefit from the close interaction between syntactic and semantic parsing. [sent-43, score-0.782]
15 This is done by integrating semantic parsing into the syntactic parsing process. [sent-44, score-0.951]
16 Secondly, various semantic role-related features are directly incorporated into the syntactic parsing model to better capture semantic role-related information in syntactic parsing. [sent-45, score-1.064]
17 To the best of our knowledge, this is the first research exploring syntactic parsing and SRL for verbal and nominal predicates in an integrated way. [sent-48, score-1.247]
18 Section 4 presents our proposed method of joint syntactic and semantic parsing for Chinese texts. [sent-52, score-0.71]
19 2 Related Work Compared to the large body of work on either syntactic parsing (Ratnaparkhi, 1999; Collins, 1999; Charniak, 2001 ; Petrov and Klein, 2007), or SRL (Carreras and Màrquez, 2004; Carreras and Màrquez, 2005; Jiang and Ng, 2006), there is relatively less work on their joint learning. [sent-55, score-0.537]
20 (2005) adopted the outputs of multiple SRL systems (each on a single parse tree) and combined them into a coherent predicate argument output by solving an optimization problem. [sent-57, score-0.522]
21 As an alternative to the above pseudo-joint learning methods (strictly speaking, they are still pipeline methods), one can augment the syntactic label of a constituent with semantic information, like what function parsing does (Merlo and Musillo, 2005). [sent-60, score-0.968]
22 Based on this observation, they incorporated semantic role information into syntactic parse trees by extending syntactic constituent labels with their coarse-grained semantic roles (core argument or adjunct argument) in the sentence, and thus unified semantic parsing and syntactic parsing. [sent-62, score-2.035]
23 However, the results obtained with this method were negative, and they concluded that semantic parsing on PropBank was too difficult due to the differences between chunk annotation and tree structure. [sent-64, score-0.582]
24 Motivated by Yi and Palmer (2005), Merlo and Musillo (2008) first extended a statistical parser to produce a richly annotated tree that identifies and labels nodes with semantic role labels as well as syntactic labels. [sent-65, score-0.582]
25 , 2009) tackled joint parsing of syntactic and semantic dependencies. [sent-75, score-0.71]
26 (2000) and Finkel and Manning (2009) showed the effectiveness of joint learning on syntactic parsing and some simple NLP tasks, such as information extraction and named entity recognition. [sent-78, score-0.537]
27 3 Baseline: Pipeline Parsing on the Top-Best Parse Tree. In this section, we briefly describe our approach to syntactic parsing and semantic role labeling, as well as the baseline system with pipeline parsing on the top-best parse tree. [sent-86, score-1.396]
28 The parser recasts a syntactic parse tree as a sequence of decisions similar to those of a standard shift-reduce parser, and the parsing process is organized into three left-to-right passes via four procedures: TAG, CHUNK, BUILD, and CHECK. [sent-89, score-0.816]
29 In the figure, the verbal predicate “提供/provide” is annotated with three core arguments (i. [sent-100, score-0.603]
30 )” as Arg2, and “NP (人民币/RMB 贷款/loan)” as Arg1), while the nominal predicate “贷款/loan” is annotated with two core arguments (i. [sent-105, score-0.583]
31 Unlike verbal predicate recognition, nominal predicate recognition is quite complicated. [sent-121, score-1.022]
32 Therefore, automatic predicate recognition is vital to nominal SRL. [sent-125, score-0.519]
33 For nominal predicates, a binary classifier is trained to predict whether a noun is a nominal predicate or not. [sent-130, score-0.713]
34 Let the nominal predicate candidate be w0, and its left and right neighboring words/POSs be w-1/p-1and w1/p1, respectively. [sent-132, score-0.483]
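The context features described above can be sketched as follows; the function name and sentence-boundary padding token are hypothetical illustrations, not the paper's implementation:

```python
def nominal_predicate_features(words, tags, i):
    """Features for deciding whether the noun at position i is a nominal
    predicate: the candidate w0/p0 plus the left and right neighboring
    words and POS tags (w-1/p-1 and w1/p1)."""
    def at(seq, j, pad="<S>"):
        # Pad with a boundary symbol when the neighbor falls outside the sentence.
        return seq[j] if 0 <= j < len(seq) else pad
    return {
        "w0": words[i], "p0": tags[i],
        "w-1": at(words, i - 1), "p-1": at(tags, i - 1),
        "w1": at(words, i + 1), "p1": at(tags, i + 1),
    }
```

These features would then feed a binary classifier that predicts predicate vs. non-predicate for each noun.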
35 2 SRL for Chinese Predicates Our Chinese SRL models for both verbal and nominal predicates adopt the widely-used SRL framework, which divides the task into three sequential sub-tasks: argument pruning, argument identification, and argument classification. [sent-137, score-0.905]
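The three sequential sub-tasks can be sketched as a small pipeline; the four callables are hypothetical placeholders for the actual pruning heuristic and classifiers:

```python
def label_arguments(candidates, prune, identify, classify):
    """Three-stage SRL pipeline: prune unlikely candidate constituents,
    identify which survivors are arguments, then assign each identified
    argument a semantic role label."""
    pruned = [c for c in candidates if not prune(c)]       # argument pruning
    identified = [c for c in pruned if identify(c)]        # argument identification
    return {c: classify(c) for c in identified}            # argument classification
```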
36 As a result, our Chinese verbal and nominal SRL systems achieve performance of 92. [sent-145, score-0.504]
37 , 2009), the top-best parse tree is first returned from our syntactic parser and then fed into the SRL system. [sent-154, score-0.526]
38 Specifically, the verbal and nominal SRL labelers handle verbal and nominal predicates, respectively. [sent-155, score-0.5]
39 4 Joint Syntactic and Semantic Parsing In this section, we first explore pipeline parsing on N-best parse trees, as a natural extension of pipeline parsing on the top-best parse tree. [sent-158, score-1.366]
40 Then, joint syntactic and semantic parsing is explored for Chinese texts from two levels. [sent-159, score-0.71]
41 Firstly, an integrated parsing approach to joint syntactic and semantic parsing is proposed. [sent-160, score-1.131]
42 Secondly, various semantic role-related features are directly incorporated into the syntactic parsing model for better interaction between the two tasks. [sent-161, score-0.703]
43 1 Pipeline Parsing on N-best Parse Trees The pipeline parsing approach employed in this paper is largely motivated by the general framework of re-ranking, as proposed in Sutton and McCallum (2005). [sent-163, score-0.512]
44 The idea behind this ap- proach is that it allows uncertainty about syntactic parsing to be carried forward through an N-best list, and that a reliable SRL system, to a certain extent, can reflect qualities of syntactic parse trees. [sent-164, score-0.842]
45 Given a sentence x, a joint parsing model is defined over a semantic frame F and a parse tree t in a log-linear way: Score(F, t|x) = (1 − α) log P(F|t, x) + α log P(t|x) (1) where P(t|x) is returned by a probabilistic syntactic parsing model, e. [sent-165, score-1.302]
46 , our syntactic parser, and P(F|t, x) is returned by a probabilistic semantic parsing model, e. [sent-167, score-0.686]
47 Assume:
t: constituent which is complete with a “YES” decision of the CHECK procedure
P: number of predicates
Pi: the i-th predicate
S: SRL result, the set of predicates and their arguments
BEGIN
srl_prob = 0. [sent-171, score-0.691]
48 In our pipeline parsing approach, P(t|x) is calculated as the product of all involved decisions’ probabilities in the syntactic parsing model, and P(F|t, x) is calculated as the product of all the semantic role labels’ probabilities in a sentence (including both verbal and nominal SRL). [sent-173, score-1.705]
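A minimal sketch of that factorization, working in the log domain to avoid underflow (the individual step probabilities are whatever the two models emit):

```python
import math

def log_prob_product(probs):
    """Both model scores are products of per-step probabilities:
    P(t|x) over the parser's decisions, and P(F|t, x) over the semantic
    role labels (verbal and nominal) in a sentence. Summing logs is
    numerically safer than multiplying raw probabilities."""
    return sum(math.log(p) for p in probs)
```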
49 In particular, (F*, t*) with maximal Score(F, t|x) is selected as the final syntactic and semantic parsing results. [sent-176, score-0.656]
50 Given a sentence, N-best parse trees are generated first using the syntactic parser, and then for each parse tree, we predict the best SRL frame using our verbal and nominal SRL systems. [sent-177, score-1.111]
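The N-best pipeline, scored with the log-linear combination of Equation (1), can be sketched as follows; `srl_decode` is a hypothetical stand-in for the verbal and nominal SRL systems, and α = 0.5 reflects the equal weights used in the experiments:

```python
def rerank_nbest(nbest, srl_decode, alpha=0.5):
    """Pipeline parsing on N-best parse trees: for each tree, decode the
    best SRL frame, rescore with Equation (1), and keep the best pair.
    `nbest` is a list of (tree, log P(t|x)); `srl_decode` returns
    (frame, log P(F|t, x)) for a tree."""
    best, best_score = None, float("-inf")
    for tree, log_p_tree in nbest:
        frame, log_p_frame = srl_decode(tree)
        score = (1 - alpha) * log_p_frame + alpha * log_p_tree
        if score > best_score:
            best, best_score = (frame, tree), score
    return best
```

Note how a strong SRL frame can promote a tree that the syntactic model alone ranked lower.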
51 However, pipeline parsing on all possible parse trees is time-consuming and thus unrealistic. [sent-182, score-0.758]
52 As an alternative, we turn to integrated parsing, which aims to perform syntactic and semantic parsing synchronously. [sent-183, score-0.782]
53 The key idea is to construct a parse tree in a bottom-up way so that it is feasible to perform SRL at suitable moments, instead of only when the whole parse tree is built. [sent-184, score-0.482]
54 Integrated parsing is practicable, mostly due to the following two observations: (1) Given a predicate in a parse tree, its semantic arguments are usually siblings of the predicate, or siblings of its ancestor. [sent-185, score-0.993]
55 Actually, this special observation has been widely employed in SRL to prune non-arguments for a verbal or nominal predicate (Xue, 2008; Li et al. [sent-186, score-0.733]
56 As far as our syntactic parser is concerned, we invoke the SRL systems once a new constituent covering a predicate is complete with a “YES” decision from the CHECK procedure. [sent-192, score-0.582]
57 At this point, the verbal SRL system is invoked to predict the semantic label of the constituent “NP (人民币/RMB 贷款/loan)”, given the verbal predicate “VV (提供/provide)”. [sent-195, score-1.021]
58 In this way, both syntactic and semantic parsing are accomplished when the root node TOP is formed. [sent-201, score-0.656]
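The trigger condition can be sketched as follows; the span representation and helper names are hypothetical, but the logic mirrors the text: when a newly completed constituent covers a predicate, SRL is run on the child subtrees that do not themselves contain the predicate (its new siblings), and their log-probabilities are folded into the search score as in Equation (1):

```python
def covers(span, pos):
    """True if the half-open span (start, end) contains position pos."""
    return span[0] <= pos < span[1]

def score_new_constituent(span, child_spans, pred_pos, srl_log_prob, base):
    """Integrated-parsing hook: once a constituent with the given span is
    completed with a "YES" CHECK decision and covers the predicate at
    pred_pos, add the SRL log-probabilities of the predicate's sibling
    subtrees to the running score `base`."""
    if not covers(span, pred_pos):
        return base  # no predicate covered: nothing to label yet
    extra = sum(srl_log_prob(c) for c in child_spans if not covers(c, pred_pos))
    return base + extra
```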
59 In particular, the probability computed from the SRL model is interpolated with that of the syntactic parsing model in a log-linear way (with equal weights in our experiments). [sent-204, score-0.483]
60 In contrast to traditional syntactic parsers where no semantic role-related information is used, it may be interesting to investigate the contribution of such information in the syntactic parsing model, due to the availability of such information in the syntactic parsing process. [sent-214, score-1.327]
61 In addition, it is found that 11% of predicates in a sentence are assigned two or more core arguments with the same label due to semantic parsing errors (partly caused by syntactic parsing errors in automatic parse trees). [sent-215, score-1.353]
62 This is abnormal since a predicate normally only allows at most one argument of each core argument role (i. [sent-216, score-0.547]
63 Therefore, such syntactic errors should be avoidable by considering those arguments already obtained in the bottom-up parsing process. [sent-219, score-0.562]
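This constraint is easy to check during the bottom-up process; a sketch, assuming PropBank-style labels where Arg0–Arg4 are core roles and ArgM-* labels are adjuncts that may legitimately repeat:

```python
CORE = {"Arg0", "Arg1", "Arg2", "Arg3", "Arg4"}

def has_duplicate_core(labels):
    """A predicate normally takes at most one argument per core role, so a
    repeated core label among the arguments found so far signals a
    parsing error that the search can penalize or avoid."""
    core = [lab for lab in labels if lab in CORE]
    return len(core) != len(set(core))
```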
64 In terms of our syntactic parsing model, this is done by directly incorporating various semantic role-related features into the syntactic parsing model (i. [sent-221, score-1.16]
65 For the example shown in Figure 2, once the constituent “VP (提供/provide 人民币/RMB 贷款/loan)”, which covers a verbal predicate “VV (提供/provide)”, is complete, the verbal SRL model would be triggered first to mark constituent “NP (人民币/RMB 贷款/loan)” as ARG1, given predicate “VV (提供/provide)”. [sent-224, score-1.221]
66 Table 2 lists various semantic role-related features explored in our syntactic parsing model and their instantiations with regard to the example shown in Figure 2. [sent-226, score-0.677]
67 Moreover, we differentiate whether the focus predicate is verbal or nominal, and whether it is the head word of the current constituent. [sent-229, score-0.503]
68 The algorithm repeatedly selects the one feature that contributes the most, and stops when adding any of the remaining features fails to improve the syntactic parsing performance. [sent-233, score-0.504]
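This greedy forward selection can be sketched as follows; `evaluate` is a hypothetical stand-in for training and scoring the parser with a given feature set:

```python
def greedy_select(features, evaluate):
    """Greedy forward feature selection: repeatedly add the single
    remaining feature whose addition improves the score the most; stop
    as soon as no remaining feature yields an improvement."""
    selected, best = [], evaluate([])
    remaining = list(features)
    while remaining:
        gains = [(evaluate(selected + [f]), f) for f in remaining]
        score, f = max(gains)
        if score <= best:
            break  # no remaining feature improves performance
        selected.append(f)
        remaining.remove(f)
        best = score
    return selected
```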
69 (提供/provide+Arg0, 提供/provide+Arg2) Table 2: SRL-related features and their instantiations for syntactic parsing, with “VP (提供/provide 人民 币/RMB 贷款/loan)” as the current constituent C and “提供/provide” as the focus predicate P, based on Figure 2. [sent-249, score-0.557]
70 We are not aware of any SRL system combining automatic predicate recognition, verbal SRL and nominal SRL on Chinese PropBank and NomBank. [sent-273, score-0.733]
71 (2009) combined nominal predicate recognition and nominal SRL on Chinese NomBank. [sent-276, score-0.749]
72 , 2009) included both verbal and nominal SRL on dependency parsing, instead of constituent-based syntactic parsing. [sent-278, score-0.668]
73 2 Results and Discussions Results of pipeline parsing on N-best parse trees. [sent-281, score-0.683]
74 While performing pipeline parsing on N-best parse trees, 20-best (the same as the heap size in our syntactic parsing) parse trees are obtained for each sentence using our syntactic parser as described in Section 3. [sent-282, score-1.351]
75 Table 3 compares the two pipeline parsing approaches on the top-best parse tree and the N-best parse trees. [sent-286, score-0.924]
76 It shows that the approach on N-best parse trees outperforms the one on the top-best parse tree by 0. [sent-287, score-0.487]
77 In addition, syntactic parsing also benefits from the N-best parse trees approach with improvement of 0. [sent-289, score-0.756]
78 This suggests that pipeline parsing on N-best parse trees can improve both syntactic and semantic parsing. [sent-291, score-1.119]
79 Moreover, the huge performance gap between Chinese semantic parsing on the gold parse tree and that on the top-best parse tree leaves much room for performance improvement. [sent-295, score-0.998]
80 78 Table 3: Syntactic and semantic parsing performance on test data (using gold standard word boundaries). [sent-377, score-0.492]
81 Table 3 also compares the integrated parsing approach with the two pipeline parsing approaches. [sent-380, score-0.933]
82 It shows that the integrated parsing approach improves the performance of both syntactic and semantic parsing by 0. [sent-381, score-1.101]
83 09 (>>>) respectively in F1-measure over the pipeline parsing approach on the top-best parse tree. [sent-383, score-0.683]
84 It is also not surprising to find out that the integrated parsing approach outperforms the pipeline parsing approach on 20-best parse trees by 0. [sent-384, score-1.179]
85 67 (>>>) in F1-measure on SRL, since it explores a larger search space, even though the integrated parsing approach combines the SRL probability and the syntactic parsing probability in the same manner as the pipeline parsing approach on 20-best parse trees. [sent-385, score-1.614]
86 However, the syntactic parsing performance gap between the integrated parsing approach and the pipeline parsing approach on 20-best parse trees is negligible. [sent-386, score-1.686]
87 As we assumed, knowledge about the detected and expected semantic roles is helpful for syntactic parsing. [sent-390, score-0.722]
88 It shows that integrating semantic role-related features into integrated parsing significantly enhances the performance of both syntactic and semantic parsing by 0. [sent-392, score-1.295]
89 In addition, it shows that it outperforms the widely-used pipeline parsing approach on top-best parse tree by 0. [sent-395, score-0.753]
90 Finally, it shows that it outperforms the widely-used pipeline parsing approach on 20-best parse trees by 0. [sent-398, score-0.758]
91 This is very encouraging, considering the notorious difficulty and complexity of both the syntactic and semantic parsing tasks. [sent-401, score-0.676]
92 In addition, it shows that the performance of predicate recognition is very stable due to its high dependence on POS tagging results, rather than syntactic parsing results. [sent-403, score-0.796]
93 Finally, it is not surprising that predicate recognition performs better when verbal and nominal predicates are mixed than on either verbal or nominal predicates alone. [sent-404, score-1.559]
94 3 Extending the Word-based Syntactic Parser to a Character-based Syntactic Parser The above experimental results on a word-based syntactic parser (assuming correct word segmentation) show that both syntactic and semantic parsing benefit from our integrated parsing approach. [sent-406, score-1.311]
95 Table 4 lists the syntactic and semantic parsing performance by adopting the character-based parser. [sent-420, score-0.68]
96 Table 4 shows that integrated parsing benefits syntactic and semantic parsing when automatic word segmentation is considered. [sent-421, score-1.14]
97 6 Conclusion In this paper, we explore joint syntactic and semantic parsing to improve the performance of both syntactic and semantic parsing, in particular that of semantic parsing. [sent-445, score-1.268]
98 Evaluation shows that our integrated parsing approach outperforms the pipeline parsing approach on N-best parse trees, a natural extension of the widely-used pipeline parsing approach on the top-best parse tree. [sent-446, score-1.787]
99 It also shows that incorporating semantic information into syntactic parsing significantly improves the performance of both syntactic and semantic parsing. [sent-447, score-1.041]
100 To the best of our knowledge, this is the first successful research exploring syntactic parsing and semantic role labeling for verbal and nominal predicates in an integrated way. [sent-449, score-1.519]
wordName wordTfidf (topN-words)
[('srl', 0.536), ('parsing', 0.295), ('predicate', 0.253), ('verbal', 0.25), ('nominal', 0.23), ('pipeline', 0.217), ('chinese', 0.193), ('syntactic', 0.188), ('semantic', 0.173), ('parse', 0.171), ('predicates', 0.131), ('integrated', 0.126), ('xue', 0.116), ('vv', 0.114), ('argument', 0.098), ('constituent', 0.095), ('propbank', 0.09), ('nombank', 0.08), ('roles', 0.079), ('trees', 0.075), ('vp', 0.073), ('korean', 0.073), ('chtb', 0.071), ('tree', 0.07), ('rquez', 0.065), ('arguments', 0.059), ('role', 0.057), ('loan', 0.057), ('joint', 0.054), ('merlo', 0.052), ('poschunk', 0.048), ('rmb', 0.048), ('carreras', 0.048), ('sutton', 0.047), ('parser', 0.046), ('conll', 0.044), ('chunk', 0.044), ('labeling', 0.042), ('li', 0.042), ('core', 0.041), ('yes', 0.039), ('hajic', 0.039), ('ng', 0.039), ('character', 0.038), ('tou', 0.038), ('pradhan', 0.038), ('np', 0.038), ('recognition', 0.036), ('segmentation', 0.036), ('pass', 0.035), ('lluis', 0.034), ('guodong', 0.033), ('check', 0.033), ('gabriele', 0.032), ('ipnaer', 0.032), ('junhui', 0.032), ('koomen', 0.032), ('musillo', 0.032), ('soen', 0.032), ('adjunct', 0.032), ('nn', 0.031), ('merged', 0.03), ('detected', 0.03), ('palmer', 0.03), ('returned', 0.03), ('ratnaparkhi', 0.029), ('meyers', 0.028), ('nianwen', 0.028), ('cate', 0.028), ('qiaoming', 0.028), ('pos', 0.028), ('surdeanu', 0.028), ('hwee', 0.028), ('exploring', 0.027), ('benefits', 0.027), ('incorporated', 0.026), ('frame', 0.026), ('golden', 0.026), ('covers', 0.025), ('pp', 0.025), ('mccallum', 0.024), ('labels', 0.024), ('performance', 0.024), ('encouraging', 0.023), ('files', 0.023), ('procedure', 0.022), ('firstly', 0.022), ('paola', 0.022), ('jiang', 0.022), ('siblings', 0.021), ('fed', 0.021), ('zhou', 0.021), ('secondly', 0.021), ('features', 0.021), ('sanda', 0.02), ('johansson', 0.02), ('mihai', 0.02), ('shared', 0.02), ('tag', 0.02), ('considering', 0.02)]
simIndex simValue paperId paperTitle
same-paper 1 0.9999997 153 acl-2010-Joint Syntactic and Semantic Parsing of Chinese
Author: Junhui Li ; Guodong Zhou ; Hwee Tou Ng
Abstract: This paper explores joint syntactic and semantic parsing of Chinese to further improve the performance of both syntactic and semantic parsing, in particular the performance of semantic parsing (in this paper, semantic role labeling). This is done from two levels. Firstly, an integrated parsing approach is proposed to integrate semantic parsing into the syntactic parsing process. Secondly, semantic information generated by semantic parsing is incorporated into the syntactic parsing model to better capture semantic information in syntactic parsing. Evaluation on Chinese TreeBank, Chinese PropBank, and Chinese NomBank shows that our integrated parsing approach outperforms the pipeline parsing approach on n-best parse trees, a natural extension of the widely used pipeline parsing approach on the top-best parse tree. Moreover, it shows that incorporating semantic role-related information into the syntactic parsing model significantly improves the performance of both syntactic parsing and semantic parsing. To our best knowledge, this is the first research on exploring syntactic parsing and semantic role labeling for both verbal and nominal predicates in an integrated way. 1
2 0.49263909 207 acl-2010-Semantics-Driven Shallow Parsing for Chinese Semantic Role Labeling
Author: Weiwei Sun
Abstract: One deficiency of current shallow parsing based Semantic Role Labeling (SRL) methods is that syntactic chunks are too small to effectively group words. To partially resolve this problem, we propose semantics-driven shallow parsing, which takes into account both syntactic structures and predicate-argument structures. We also introduce several new “path” features to improve shallow parsing based SRL method. Experiments indicate that our new method obtains a significant improvement over the best reported Chinese SRL result.
3 0.39674062 184 acl-2010-Open-Domain Semantic Role Labeling by Modeling Word Spans
Author: Fei Huang ; Alexander Yates
Abstract: Most supervised language processing systems show a significant drop-off in performance when they are tested on text that comes from a domain significantly different from the domain of the training data. Semantic role labeling techniques are typically trained on newswire text, and in tests their performance on fiction is as much as 19% worse than their performance on newswire text. We investigate techniques for building open-domain semantic role labeling systems that approach the ideal of a train-once, use-anywhere system. We leverage recently-developed techniques for learning representations of text using latent-variable language models, and extend these techniques to ones that provide the kinds of features that are useful for semantic role labeling. In experiments, our novel system reduces error by 16% relative to the previous state of the art on out-of-domain text.
4 0.37262678 146 acl-2010-Improving Chinese Semantic Role Labeling with Rich Syntactic Features
Author: Weiwei Sun
Abstract: Developing features has been shown crucial to advancing the state-of-the-art in Semantic Role Labeling (SRL). To improve Chinese SRL, we propose a set of additional features, some of which are designed to better capture structural information. Our system achieves 93.49 Fmeasure, a significant improvement over the best reported performance 92.0. We are further concerned with the effect of parsing in Chinese SRL. We empirically analyze the two-fold effect, grouping words into constituents and providing syntactic information. We also give some preliminary linguistic explanations.
5 0.33119589 94 acl-2010-Edit Tree Distance Alignments for Semantic Role Labelling
Author: Hector-Hugo Franco-Penya
Abstract: ―Tree SRL system‖ is a Semantic Role Labelling supervised system based on a tree-distance algorithm and a simple k-NN implementation. The novelty of the system lies in comparing the sentences as tree structures with multiple relations instead of extracting vectors of features for each relation and classifying them. The system was tested with the English CoNLL-2009 shared task data set where 79% accuracy was obtained. 1
6 0.32715616 49 acl-2010-Beyond NomBank: A Study of Implicit Arguments for Nominal Predicates
7 0.29530802 216 acl-2010-Starting from Scratch in Semantic Role Labeling
8 0.27925622 25 acl-2010-Adapting Self-Training for Semantic Role Labeling
9 0.23080245 238 acl-2010-Towards Open-Domain Semantic Role Labeling
10 0.22027354 17 acl-2010-A Structured Model for Joint Learning of Argument Roles and Predicate Senses
11 0.20321807 120 acl-2010-Fully Unsupervised Core-Adjunct Argument Classification
12 0.17377754 198 acl-2010-Predicate Argument Structure Analysis Using Transformation Based Learning
13 0.16236162 206 acl-2010-Semantic Parsing: The Task, the State of the Art and the Future
14 0.13104877 132 acl-2010-Hierarchical Joint Learning: Improving Joint Parsing and Named Entity Recognition with Non-Jointly Labeled Data
15 0.12934195 71 acl-2010-Convolution Kernel over Packed Parse Forest
16 0.12508321 203 acl-2010-Rebanking CCGbank for Improved NP Interpretation
17 0.12391918 158 acl-2010-Latent Variable Models of Selectional Preference
18 0.10269818 115 acl-2010-Filtering Syntactic Constraints for Statistical Machine Translation
19 0.10250113 169 acl-2010-Learning to Translate with Source and Target Syntax
20 0.10075638 99 acl-2010-Efficient Third-Order Dependency Parsers
topicId topicWeight
[(0, -0.302), (1, 0.125), (2, 0.505), (3, 0.235), (4, -0.037), (5, 0.012), (6, -0.246), (7, -0.045), (8, -0.124), (9, 0.075), (10, 0.065), (11, -0.138), (12, -0.009), (13, -0.008), (14, -0.177), (15, 0.008), (16, -0.056), (17, -0.028), (18, -0.017), (19, 0.029), (20, -0.091), (21, -0.088), (22, -0.002), (23, 0.072), (24, -0.028), (25, -0.051), (26, -0.072), (27, 0.002), (28, -0.063), (29, -0.043), (30, -0.009), (31, 0.033), (32, -0.019), (33, -0.012), (34, 0.043), (35, -0.008), (36, 0.005), (37, -0.003), (38, 0.002), (39, -0.007), (40, -0.027), (41, -0.003), (42, -0.024), (43, 0.001), (44, 0.002), (45, 0.028), (46, 0.027), (47, 0.022), (48, -0.012), (49, -0.004)]
simIndex simValue paperId paperTitle
same-paper 1 0.95135182 153 acl-2010-Joint Syntactic and Semantic Parsing of Chinese
Author: Junhui Li ; Guodong Zhou ; Hwee Tou Ng
Abstract: This paper explores joint syntactic and semantic parsing of Chinese to further improve the performance of both syntactic and semantic parsing, in particular the performance of semantic parsing (in this paper, semantic role labeling). This is done from two levels. Firstly, an integrated parsing approach is proposed to integrate semantic parsing into the syntactic parsing process. Secondly, semantic information generated by semantic parsing is incorporated into the syntactic parsing model to better capture semantic information in syntactic parsing. Evaluation on Chinese TreeBank, Chinese PropBank, and Chinese NomBank shows that our integrated parsing approach outperforms the pipeline parsing approach on n-best parse trees, a natural extension of the widely used pipeline parsing approach on the top-best parse tree. Moreover, it shows that incorporating semantic role-related information into the syntactic parsing model significantly improves the performance of both syntactic parsing and semantic parsing. To our best knowledge, this is the first research on exploring syntactic parsing and semantic role labeling for both verbal and nominal predicates in an integrated way. 1
2 0.93250912 207 acl-2010-Semantics-Driven Shallow Parsing for Chinese Semantic Role Labeling
Author: Weiwei Sun
Abstract: One deficiency of current shallow parsing based Semantic Role Labeling (SRL) methods is that syntactic chunks are too small to effectively group words. To partially resolve this problem, we propose semantics-driven shallow parsing, which takes into account both syntactic structures and predicate-argument structures. We also introduce several new “path” features to improve shallow parsing based SRL method. Experiments indicate that our new method obtains a significant improvement over the best reported Chinese SRL result.
3 0.87746662 146 acl-2010-Improving Chinese Semantic Role Labeling with Rich Syntactic Features
Author: Weiwei Sun
Abstract: Developing features has been shown crucial to advancing the state-of-the-art in Semantic Role Labeling (SRL). To improve Chinese SRL, we propose a set of additional features, some of which are designed to better capture structural information. Our system achieves 93.49 Fmeasure, a significant improvement over the best reported performance 92.0. We are further concerned with the effect of parsing in Chinese SRL. We empirically analyze the two-fold effect, grouping words into constituents and providing syntactic information. We also give some preliminary linguistic explanations.
4 0.78105283 184 acl-2010-Open-Domain Semantic Role Labeling by Modeling Word Spans
Author: Fei Huang ; Alexander Yates
Abstract: Most supervised language processing systems show a significant drop-off in performance when they are tested on text that comes from a domain significantly different from the domain of the training data. Semantic role labeling techniques are typically trained on newswire text, and in tests their performance on fiction is as much as 19% worse than their performance on newswire text. We investigate techniques for building open-domain semantic role labeling systems that approach the ideal of a train-once, use-anywhere system. We leverage recently-developed techniques for learning representations of text using latent-variable language models, and extend these techniques to ones that provide the kinds of features that are useful for semantic role labeling. In experiments, our novel system reduces error by 16% relative to the previous state of the art on out-of-domain text.
5 0.76367819 216 acl-2010-Starting from Scratch in Semantic Role Labeling
Author: Michael Connor ; Yael Gertner ; Cynthia Fisher ; Dan Roth
Abstract: A fundamental step in sentence comprehension involves assigning semantic roles to sentence constituents. To accomplish this, the listener must parse the sentence, find constituents that are candidate arguments, and assign semantic roles to those constituents. Each step depends on prior lexical and syntactic knowledge. Where do children learning their first languages begin in solving this problem? In this paper we focus on the parsing and argument-identification steps that precede Semantic Role Labeling (SRL) training. We combine a simplified SRL with an unsupervised HMM part-of-speech tagger, and experiment with psycholinguistically motivated ways to label clusters resulting from the HMM so that they can be used to parse input for the SRL system. The results show that proposed shallow representations of sentence structure are robust to reductions in parsing accuracy, and that the contribution of alternative representations of sentence structure to successful semantic role labeling varies with the integrity of the parsing and argument-identification stages.
6 0.68405694 238 acl-2010-Towards Open-Domain Semantic Role Labeling
7 0.6771186 94 acl-2010-Edit Tree Distance Alignments for Semantic Role Labelling
8 0.64675534 25 acl-2010-Adapting Self-Training for Semantic Role Labeling
9 0.64186865 49 acl-2010-Beyond NomBank: A Study of Implicit Arguments for Nominal Predicates
10 0.59525323 17 acl-2010-A Structured Model for Joint Learning of Argument Roles and Predicate Senses
11 0.53897005 120 acl-2010-Fully Unsupervised Core-Adjunct Argument Classification
12 0.48713791 198 acl-2010-Predicate Argument Structure Analysis Using Transformation Based Learning
13 0.45168874 206 acl-2010-Semantic Parsing: The Task, the State of the Art and the Future
14 0.38617352 203 acl-2010-Rebanking CCGbank for Improved NP Interpretation
15 0.36374432 130 acl-2010-Hard Constraints for Grammatical Function Labelling
16 0.32975861 99 acl-2010-Efficient Third-Order Dependency Parsers
17 0.32903409 263 acl-2010-Word Representations: A Simple and General Method for Semi-Supervised Learning
18 0.3272402 108 acl-2010-Expanding Verb Coverage in Cyc with VerbNet
19 0.32380357 93 acl-2010-Dynamic Programming for Linear-Time Incremental Parsing
20 0.320095 248 acl-2010-Unsupervised Ontology Induction from Text
topicId topicWeight
[(7, 0.062), (13, 0.015), (14, 0.019), (25, 0.104), (33, 0.015), (39, 0.01), (42, 0.02), (53, 0.014), (59, 0.094), (73, 0.046), (78, 0.122), (80, 0.02), (83, 0.134), (84, 0.033), (89, 0.057), (98, 0.146)]
simIndex simValue paperId paperTitle
same-paper 1 0.93935716 153 acl-2010-Joint Syntactic and Semantic Parsing of Chinese
Author: Junhui Li ; Guodong Zhou ; Hwee Tou Ng
Abstract: This paper explores joint syntactic and semantic parsing of Chinese to further improve the performance of both syntactic and semantic parsing, in particular the performance of semantic parsing (in this paper, semantic role labeling). This is done at two levels. Firstly, an integrated parsing approach is proposed to integrate semantic parsing into the syntactic parsing process. Secondly, semantic information generated by semantic parsing is incorporated into the syntactic parsing model to better capture semantic information in syntactic parsing. Evaluation on Chinese TreeBank, Chinese PropBank, and Chinese NomBank shows that our integrated parsing approach outperforms the pipeline parsing approach on n-best parse trees, a natural extension of the widely used pipeline parsing approach on the top-best parse tree. Moreover, it shows that incorporating semantic role-related information into the syntactic parsing model significantly improves the performance of both syntactic parsing and semantic parsing. To the best of our knowledge, this is the first research on exploring syntactic parsing and semantic role labeling for both verbal and nominal predicates in an integrated way.
2 0.92776895 71 acl-2010-Convolution Kernel over Packed Parse Forest
Author: Min Zhang ; Hui Zhang ; Haizhou Li
Abstract: This paper proposes a convolution forest kernel to effectively explore rich structured features embedded in a packed parse forest. As opposed to the convolution tree kernel, the proposed forest kernel does not have to commit to a single best parse tree, and is thus able to explore very large object spaces and much more structured features embedded in a forest. This makes the proposed kernel more robust against parsing errors and data sparseness issues than the convolution tree kernel. The paper presents the formal definition of the convolution forest kernel and also illustrates an algorithm to efficiently compute it. Experimental results on two NLP applications, relation extraction and semantic role labeling, show that the proposed forest kernel significantly outperforms the baseline of the convolution tree kernel.
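The forest kernel described above generalizes the classic convolution tree kernel (Collins and Duffy). As an illustration of the baseline it is compared against, here is a minimal sketch of that tree kernel, not the forest variant from the paper; trees are encoded as nested tuples, and all names and the decay value are illustrative choices.

```python
# Minimal sketch of the convolution tree kernel (Collins-Duffy style).
# A tree is a tuple (label, child, child, ...); leaves are plain strings.

def production(node):
    """The CFG production rooted at a node, e.g. NP -> DT NN."""
    return (node[0], tuple(c if isinstance(c, str) else c[0] for c in node[1:]))

def nodes(tree):
    """Yield every internal (non-leaf) node of a tuple-encoded tree."""
    yield tree
    for child in tree[1:]:
        if not isinstance(child, str):
            yield from nodes(child)

def common_subtrees(n1, n2, lam=0.5):
    """C(n1, n2): decayed count of common subtree fragments rooted here."""
    if production(n1) != production(n2):
        return 0.0
    score = lam
    for c1, c2 in zip(n1[1:], n2[1:]):
        if not isinstance(c1, str):  # lexical leaves add no further fragments
            score *= 1.0 + common_subtrees(c1, c2, lam)
    return score

def tree_kernel(t1, t2, lam=0.5):
    """K(T1, T2) = sum of C(n1, n2) over all pairs of internal nodes."""
    return sum(common_subtrees(n1, n2, lam)
               for n1 in nodes(t1) for n2 in nodes(t2))

t = ("S", ("NP", ("DT", "the"), ("NN", "cat")), ("VP", ("VB", "sat")))
print(tree_kernel(t, t))  # prints 5.234375
```

A practical implementation would memoize `common_subtrees` over node pairs to get the quadratic dynamic program; the forest kernel of the paper instead sums such fragment counts over all trees packed in the forest.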
3 0.92722237 155 acl-2010-Kernel Based Discourse Relation Recognition with Temporal Ordering Information
Author: WenTing Wang ; Jian Su ; Chew Lim Tan
Abstract: Syntactic knowledge is important for discourse relation recognition. Yet only heuristically selected flat paths and 2-level production rules have been used to incorporate such information so far. In this paper we propose using a tree kernel based approach to automatically mine the syntactic information from the parse trees for discourse analysis, applying the kernel function to the tree structures directly. These structural syntactic features, together with other normal flat features, are incorporated into our composite kernel to capture diverse knowledge for simultaneous discourse identification and classification for both explicit and implicit relations. The experiment shows the tree kernel approach is able to give statistically significant improvements over the flat syntactic path feature. We also illustrate that the tree kernel approach covers more structure information than the production rules, which allows the tree kernel to further incorporate information from a higher dimension space for possible better discrimination. Besides, we further propose to leverage temporal ordering information to constrain the interpretation of discourse relation, which also demonstrates statistically significant improvements for discourse relation recognition on PDTB 2.0 for both explicit and implicit relations as well.
4 0.92369676 70 acl-2010-Contextualizing Semantic Representations Using Syntactically Enriched Vector Models
Author: Stefan Thater ; Hagen Furstenau ; Manfred Pinkal
Abstract: We present a syntactically enriched vector model that supports the computation of contextualized semantic representations in a quasi compositional fashion. It employs a systematic combination of first- and second-order context vectors. We apply our model to two different tasks and show that (i) it substantially outperforms previous work on a paraphrase ranking task, and (ii) achieves promising results on a word-sense similarity task; to our knowledge, it is the first time that an unsupervised method has been applied to this task.
5 0.91817844 10 acl-2010-A Latent Dirichlet Allocation Method for Selectional Preferences
Author: Alan Ritter ; Mausam Mausam ; Oren Etzioni
Abstract: The computation of selectional preferences, the admissible argument values for a relation, is a well-known NLP task with broad applicability. We present LDA-SP, which utilizes LinkLDA (Erosheva et al., 2004) to model selectional preferences. By simultaneously inferring latent topics and topic distributions over relations, LDA-SP combines the benefits of previous approaches: like traditional class-based approaches, it produces human-interpretable classes describing each relation’s preferences, but it is competitive with non-class-based methods in predictive power. We compare LDA-SP to several state-of-the-art methods, achieving an 85% increase in recall at 0.9 precision over mutual information (Erk, 2007). We also evaluate LDA-SP’s effectiveness at filtering improper applications of inference rules, where we show substantial improvement over Pantel et al.’s system (Pantel et al., 2007).
6 0.91654158 17 acl-2010-A Structured Model for Joint Learning of Argument Roles and Predicate Senses
7 0.91025317 158 acl-2010-Latent Variable Models of Selectional Preference
8 0.89842325 120 acl-2010-Fully Unsupervised Core-Adjunct Argument Classification
9 0.89663994 229 acl-2010-The Influence of Discourse on Syntax: A Psycholinguistic Model of Sentence Processing
10 0.8949064 49 acl-2010-Beyond NomBank: A Study of Implicit Arguments for Nominal Predicates
11 0.89330035 23 acl-2010-Accurate Context-Free Parsing with Combinatory Categorial Grammar
12 0.89014822 101 acl-2010-Entity-Based Local Coherence Modelling Using Topological Fields
13 0.88879365 130 acl-2010-Hard Constraints for Grammatical Function Labelling
14 0.88808191 207 acl-2010-Semantics-Driven Shallow Parsing for Chinese Semantic Role Labeling
15 0.88525856 169 acl-2010-Learning to Translate with Source and Target Syntax
16 0.88520122 109 acl-2010-Experiments in Graph-Based Semi-Supervised Learning Methods for Class-Instance Acquisition
17 0.88315713 41 acl-2010-Automatic Selectional Preference Acquisition for Latin Verbs
18 0.88294697 248 acl-2010-Unsupervised Ontology Induction from Text
19 0.88182133 198 acl-2010-Predicate Argument Structure Analysis Using Transformation Based Learning
20 0.88085961 146 acl-2010-Improving Chinese Semantic Role Labeling with Rich Syntactic Features