acl acl2010 acl2010-83 knowledge-graph by maker-knowledge-mining

83 acl-2010-Dependency Parsing and Projection Based on Word-Pair Classification


Source: pdf

Author: Wenbin Jiang ; Qun Liu

Abstract: In this paper we describe an intuitionistic method for dependency parsing, where a classifier is used to determine whether a pair of words forms a dependency edge. We also propose an effective strategy for dependency projection, where the dependency relationships of the word pairs in the source language are projected to the word pairs of the target language, leading to a set of classification instances rather than a complete tree. Experiments show that the classifier trained on the projected classification instances significantly outperforms previous projected dependency parsers. More importantly, when this classifier is integrated into a maximum spanning tree (MST) dependency parser, obvious improvement is obtained over the MST baseline.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 In this paper we describe an intuitionistic method for dependency parsing, where a classifier is used to determine whether a pair of words forms a dependency edge. [sent-3, score-0.969]

2 And we also propose an effective strategy for dependency projection, where the dependency relationships of the word pairs in the source language are projected to the word pairs of the target language, leading to a set of classification instances rather than a complete tree. [sent-4, score-1.556]

3 Experiments show that the classifier trained on the projected classification instances significantly outperforms previous projected dependency parsers. [sent-5, score-1.865]

4 More importantly, when this classifier is integrated into a maximum spanning tree (MST) dependency parser, obvious improvement is obtained over the MST baseline. [sent-6, score-0.51]

5 1 Introduction Supervised dependency parsing has achieved the state-of-the-art in recent years (McDonald et al. [sent-7, score-0.465]

6 For example, unsupervised dependency parsing (Klein and Manning, 2004), which is totally based on unannotated data, and semi-supervised dependency parsing (Koo et al. [sent-11, score-0.961]

7 For dependency projection, the relationship between words in the parsed sentences can be simply projected across the word alignment to words in the unparsed sentences, according to the DCA assumption (Hwa et al. [sent-20, score-1.055]

8 Such a projection procedure suffers greatly from word alignment errors and syntactic non-isomorphism between languages, which usually lead to conflicting projected relationships and incomplete projected dependency structures. [sent-22, score-1.529]

9 Smith and Eisner (2009) perform dependency projection and annotation adaptation with quasi-synchronous grammar features. [sent-25, score-0.632]

10 Jiang and Liu (2009) resort to a dynamic programming procedure to search for a complete projected tree. [sent-26, score-0.625]

11 However, these strategies all share the constraint that dependency projection must produce complete projected trees. [sent-27, score-1.144]

12 Because of free translation, syntactic non-isomorphism between languages, and word alignment errors, it would be strained to project the dependency structure completely from one language to another. [sent-28, score-0.482]

13 We propose an effective method for dependency projection, which does not have to produce complete projected trees. [sent-29, score-0.953]

14 Given a word-aligned bilingual corpus with the source language sentences parsed, the dependency relationships of the word pairs in the source language are projected to the word pairs of the target language. [sent-30, score-0.974]

15 A dependency relationship is a boolean value that represents whether this word pair forms a dependency edge. [sent-31, score-0.771]

16 Meanwhile, we propose an intuitionistic model for dependency parsing, which uses a classifier to determine whether a pair of words forms a dependency edge. [sent-33, score-0.995]
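
To make this view concrete, here is a minimal sketch of scoring every ordered word pair with such a classifier; the `classifier` object and the `extract_features` function are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch of the word-pair classification view of parsing.
# For every ordered pair (i, j), a classifier estimates the probability
# that word x[i] is the head of word x[j].
# `classifier` and `extract_features` are hypothetical placeholders.

def score_all_pairs(x, classifier, extract_features):
    """Return a matrix C where C[i][j] approximates C(i, j) from the paper."""
    n = len(x)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                C[i][j] = classifier.predict_proba(extract_features(x, i, j))
    return C
```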

17 The classifier can then be trained on the projected classification instance set, so as to build a projected dependency parser without the need for complete projected trees. [sent-34, score-2.443]

18 Figure 1: Illegal (a) and incomplete (b) dependency trees produced by the simple-collection method. [sent-37, score-0.442]

19 Experimental results show that the classifier trained on the projected classification instances significantly outperforms the projected dependency parsers in previous works. [sent-38, score-1.865]

20 The classifier trained on the Chinese projected classification instances achieves a precision of 58. [sent-39, score-0.943]

21 More importantly, when this classifier is integrated into a 2nd-ordered maximum spanning tree (MST) dependency parser (McDonald and Pereira, 2006) in a weighted average manner, significant improvement is obtained over the MST baselines. [sent-41, score-0.752]

22 In the rest of this paper, we first describe the word-pair classification model for dependency parsing (section 2) and the generation method of projected classification instances (section 3). [sent-47, score-1.327]

23 Then we describe an application of the projected parser: boosting a state-of-the-art 2nd-ordered MST parser (section 4). [sent-48, score-0.659]

24 After the comparisons with previous works on dependency parsing and projection, we finally give the experimental results. [sent-49, score-0.511]

25 y denotes the dependency tree for sentence x, and (i, j) ∈ y represents a dependency edge from word xi to word xj, where xi is the parent of xj. [sent-53, score-0.918]

26 The task of the word-pair classification model is to determine whether any candidate word pair, xi and xj, s.t. [sent-54, score-0.201]

27 1 ≤ i, j ≤ |x| and i ≠ j, forms a dependency edge. The classification result C(i, j) can be a boolean value: C(i, j) = p, p ∈ {0, 1} (1), as produced by a support vector machine (SVM) classifier (Vapnik, 1998). [sent-57, score-0.171]

28 p = 1 indicates that the classifier supports the candidate edge (i, j), and p = 0 the contrary. [sent-58, score-0.292]

29 C(i, j) can also be a real-valued probability: C(i, j) = p, 0 ≤ p ≤ 1 (2), as produced by a maximum entropy (ME) classifier (Berger et al. [sent-59, score-0.171]

30 p is a probability which indicates the degree to which the classifier supports the candidate edge (i, j). [sent-61, score-0.292]

31 Ideally, given the classification results for all candidate word pairs, the dependency parse tree can be composed of the candidate edges with higher scores (1 for the boolean-valued classifier, and large p for the real-valued classifier). [sent-62, score-0.641]

32 However, more robust strategies should be investigated, since the ambiguity of the language syntax and classification errors usually lead to illegal or incomplete parsing results, as shown in Figure 1. [sent-63, score-0.266]

33 Following the edge-based factorization method (Eisner, 1996), we factorize the score of a dependency tree s(x, y) into its dependency edges, and design a dynamic programming algorithm to search for the candidate parse with the maximum score. [sent-64, score-1.013]

34 This strategy alleviates classification errors to some degree and ensures a valid, complete dependency parse tree. [sent-65, score-0.653]

35 Here we give the calculation of dependency probability C(i, j). [sent-68, score-0.372]

36 We use w to denote the weight vector of the ME model, and f(i, j, r) to denote the feature vector for the assumption that the word pair i and j has a dependency relationship r. [sent-70, score-0.467]

37 The symbol r indicates the supposed classification result, where r = + means we suppose it as a dependency edge and r = − means the contrary. [sent-71, score-0.583]

38 The dependency probability can then be defined as: [sent-73, score-0.372]

39 C(i, j) = exp(Σk wk·fk(i, j, +)) / Σr exp(Σk wk·fk(i, j, r)) (5) The feature templates for the classifier are similar to those of the 1st-ordered MST model (McDonald et al. [sent-74, score-0.223]
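
Read this way, equation (5) is a two-class softmax over the ME feature weights. A minimal sketch, assuming `w` is a dict of feature weights and `f(i, j, r)` yields the active feature names; both are hypothetical stand-ins for the paper's feature templates:

```python
import math

def dependency_prob(w, f, i, j):
    """Equation (5): P(edge i -> j) as a softmax over r in {+, -}.
    w: dict mapping feature name -> weight (the ME weight vector, an assumption).
    f: callable returning the active features for word pair (i, j) and label r."""
    s_pos = sum(w.get(feat, 0.0) for feat in f(i, j, "+"))
    s_neg = sum(w.get(feat, 0.0) for feat in f(i, j, "-"))
    m = max(s_pos, s_neg)  # shift for numerical stability
    e_pos, e_neg = math.exp(s_pos - m), math.exp(s_neg - m)
    return e_pos / (e_pos + e_neg)
```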

40 Previous graph-based dependency models usually use the index distance between word i and word j. (Footnote: we exclude the in-between features of McDonald et al. [sent-78, score-0.471]

41 Here, V[i, j] contains the candidate parsing segments of the span [i, j], and the function EVAL(d) accumulates the scores of all the edges in dependency segment d. [sent-97, score-0.541]
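
The summary does not reproduce the full algorithm, but the span table V[i, j] suggests a standard first-order, Eisner-style dynamic program over edge scores. The sketch below is one conventional implementation of that idea, not the authors' exact decoder; index 0 plays the role of an artificial root.

```python
def eisner(score):
    """First-order Eisner decoding over edge scores.
    score[h][m] is the score of edge head h -> modifier m; index 0 is an
    artificial ROOT. Returns heads, where heads[m] is the head of word m."""
    n = len(score)
    NEG = float("-inf")
    # [s][t][d]: best span score; d = 0 means the head is t, d = 1 means s.
    com = [[[NEG, NEG] for _ in range(n)] for _ in range(n)]  # complete spans
    inc = [[[NEG, NEG] for _ in range(n)] for _ in range(n)]  # incomplete spans
    com_bp = [[[0, 0] for _ in range(n)] for _ in range(n)]
    inc_bp = [[[0, 0] for _ in range(n)] for _ in range(n)]
    for i in range(n):
        com[i][i][0] = com[i][i][1] = 0.0
    for k in range(1, n):
        for s in range(n - k):
            t = s + k
            # Incomplete spans: add the arc between s and t on top of two
            # smaller complete spans meeting at r.
            for r in range(s, t):
                base = com[s][r][1] + com[r + 1][t][0]
                if base + score[t][s] > inc[s][t][0]:  # arc t -> s
                    inc[s][t][0], inc_bp[s][t][0] = base + score[t][s], r
                if base + score[s][t] > inc[s][t][1]:  # arc s -> t
                    inc[s][t][1], inc_bp[s][t][1] = base + score[s][t], r
            # Complete spans: combine a complete span with an incomplete one.
            for r in range(s, t):
                if com[s][r][0] + inc[r][t][0] > com[s][t][0]:
                    com[s][t][0], com_bp[s][t][0] = com[s][r][0] + inc[r][t][0], r
            for r in range(s + 1, t + 1):
                if inc[s][r][1] + com[r][t][1] > com[s][t][1]:
                    com[s][t][1], com_bp[s][t][1] = inc[s][r][1] + com[r][t][1], r
    heads = [0] * n
    _backtrack(inc_bp, com_bp, 0, n - 1, 1, True, heads)
    return heads

def _backtrack(inc_bp, com_bp, s, t, d, complete, heads):
    """Recover the arcs chosen by the dynamic program."""
    if s == t:
        return
    if complete:
        r = com_bp[s][t][d]
        if d == 0:
            _backtrack(inc_bp, com_bp, s, r, 0, True, heads)
            _backtrack(inc_bp, com_bp, r, t, 0, False, heads)
        else:
            _backtrack(inc_bp, com_bp, s, r, 1, False, heads)
            _backtrack(inc_bp, com_bp, r, t, 1, True, heads)
    else:
        r = inc_bp[s][t][d]
        if d == 0:
            heads[s] = t  # arc t -> s
        else:
            heads[t] = s  # arc s -> t
        _backtrack(inc_bp, com_bp, s, r, 1, True, heads)
        _backtrack(inc_bp, com_bp, r + 1, t, 0, True, heads)
```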

42 3 Projected Classification Instance After the introduction of the word-pair classification model, we now describe the extraction of projected dependency instances. [sent-99, score-1.028]

43 In order to alleviate the effect of word alignment errors, we base the projection on the alignment matrix, a compact representation of multiple GIZA++ (Och and Ney, 2000) results, rather than on the single word alignment used in previous dependency projection works. [sent-100, score-1.023]

44 We finally define the probability of the supposed projected dependency edge as: Cp(i, j) = exp(s+(i, j)) / (exp(s+(i, j)) + exp(s−(i, j))) (10) The probability Cp(i, j) is a real value between 0 and 1. [sent-108, score-1.027]

45 On the other hand, there are as many as 2|f|(|f| − 1) candidate projected dependency instances for the target sentence f. [sent-111, score-0.922]
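
The equations defining s+(i, j) and s−(i, j) are not reproduced in this summary. A plausible reading, consistent with the alignment-matrix description above, accumulates alignment probabilities over source word pairs that do or do not form a dependency edge. The sketch below illustrates that reading together with equation (10) and threshold filtering; the accumulation formula and all names here are assumptions, not the authors' exact definitions.

```python
import math

def extract_projected_instances(A, src_head, tgt_len, threshold=0.7):
    """Sketch of projected classification-instance extraction.
    A[sp][t]: alignment-matrix probability that source word sp aligns to target word t.
    src_head[sp]: head index of source word sp in the parsed source sentence.
    Assumed: s+(i, j) sums A[ip][i] * A[jp][j] over source edges ip -> jp, and
    s-(i, j) over source pairs that are not edges (an assumption, since the
    defining equations are not shown in this summary)."""
    m = len(A)
    instances = []
    for i in range(tgt_len):
        for j in range(tgt_len):
            if i == j:
                continue
            s_pos = s_neg = 0.0
            for ip in range(m):
                for jp in range(m):
                    if ip == jp:
                        continue
                    weight = A[ip][i] * A[jp][j]
                    if src_head[jp] == ip:
                        s_pos += weight
                    else:
                        s_neg += weight
            # Equation (10): squash the two accumulated scores into Cp(i, j).
            d = max(s_pos, s_neg)
            cp = math.exp(s_pos - d) / (math.exp(s_pos - d) + math.exp(s_neg - d))
            # Keep only confident instances: larger thresholds yield better
            # but fewer instances (see the experiments below).
            if cp >= threshold:
                instances.append((i, j, "+"))
            elif cp <= 1.0 - threshold:
                instances.append((i, j, "-"))
    return instances
```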

46 4 Boosting an MST Parser The classifier can be used to boost an existing parser trained on human-annotated trees. [sent-113, score-0.284]

47 For a sentence x to be parsed, the enhanced parser selects the best parse ỹ according to both the baseline model B and the projected classifier C. [sent-115, score-0.845]

48 ỹ = argmax_y [sB(x, y) + λ·sC(x, y)] (11) Here, sB and sC denote the evaluation functions of the baseline model and the projected classifier, respectively. [sent-116, score-0.576]

49 The parameter λ is the relative weight of the projected classifier against the baseline model. [sent-117, score-0.721]

50 As described previously, the score of a dependency tree given by a word-pair classifier can be factored into each candidate dependency edge in this tree. [sent-121, score-1.108]

51 Therefore, the projected classifier can be deeply integrated with a baseline model at each dependency edge, provided the evaluation score given by the baseline model can also be factored into dependency edges. [sent-122, score-1.639]
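
Since both scores factor over edges, the interpolation in equation (11) can be pushed inside the decoder as a per-edge weighted sum. A minimal sketch, reusing the `eisner` decoder sketched earlier; `baseline_edge_score` and `projected_prob` are assumed callables, not the authors' API:

```python
def integrated_parse(x, baseline_edge_score, projected_prob, lam=0.5):
    """Equation (11) realized edge by edge: combine the baseline MST edge score
    with the projected classifier's probability, then decode once."""
    n = len(x)
    combined = [[0.0] * n for _ in range(n)]
    for h in range(n):
        for m in range(n):
            if h != m:
                combined[h][m] = (baseline_edge_score(x, h, m)
                                  + lam * projected_prob(x, h, m))
    return eisner(combined)  # decoder sketched after sentence 41
```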

52 , 2006) parsing algorithms are related to our word-pair classification model. [sent-130, score-0.199]

53 Similar to the graph-based method, our model is factored on dependency edges, and its decoding procedure also aims to find a maximum spanning tree in a fully connected directed graph. [sent-131, score-0.561]

54 On the training method, however, our model obviously differs from other graph-based models in that we only need a set of word-pair dependency instances rather than a regular dependency treebank. [sent-133, score-0.844]

55 The most apparent similarity between our model and the transition-based category is that both need a classifier to perform classification conditioned on a certain configuration. [sent-135, score-0.303]

56 The classifier in our model predicts a dependency probability for each pair of words, while the classifier in a transition-based model gives a possible next transition operation such as shift or reduce. [sent-137, score-0.766]

57 For our method, the evaluation score of a candidate parse is factorized into each dependency edge, while for the transition-based models, the score is factorized into each transition operation. [sent-139, score-0.497]

58 Thanks to a reminder from the third reviewer of our paper, we find that the pairwise classification schema has also been used in Japanese dependency parsing (Uchimoto et al. [sent-140, score-0.199]

59 2 Dependency Projection Many works try to learn parsing knowledge from bilingual corpora. [sent-144, score-0.191]

60 (2009) induce dependency grammar via projection from aligned bilingual corpora, and use some thresholds to filter out noise and some hand-written rules to handle heterogeneity. [sent-149, score-0.679]

61 Smith and Eisner (2009) perform dependency projection and annotation adaptation with Quasi-Synchronous Grammar features. [sent-150, score-0.632]

62 Jiang and Liu (2009) use the alignment matrix and a dynamic programming search algorithm to obtain better projected dependency trees. [sent-151, score-1.098]

63 All previous works for dependency projection (Hwa et al. [sent-152, score-0.64]

64 , 2009; Smith and Eisner, 2009; Jiang and Liu, 2009) need complete projected trees to train the projected parsers. [sent-154, score-1.131]

65 Because of free translation, word alignment errors, and the heterogeneity between the two languages, it is strained and less effective to project the dependency tree completely to the target language sentence. [sent-155, score-0.485]

66 On the contrary, our dependency projection strategy prefers to extract a set of dependency instances, which coincides with our model's demand for a training corpus. [sent-156, score-1.017]

67 An obvious advantage of this strategy is that we can select an appropriate filtering threshold to obtain dependency instances of good quality. [sent-157, score-0.531]

68 In addition, our word-pair classification model can be integrated deeply into a state-of-the-art MST dependency model. [sent-158, score-0.598]

69 factorized into dependency edges, the integration can be conducted at each dependency edge, by weightedly averaging their evaluation scores for this dependency edge. [sent-162, score-1.198]

70 This strategy makes better use of the projected parser while allowing faster decoding, compared with the cascaded approach of Jiang and Liu (2009). [sent-163, score-0.672]

71 Then we investigate the effectiveness of the dependency projection by evaluating the projected classifiers trained on the projected classification instances. [sent-165, score-1.842]

72 Finally, we report the performance of the integrated dependency parser, which combines the projected classifier and the 2nd-ordered MST dependency parser. [sent-166, score-1.589]

73 The constituent trees in the two treebanks are transformed to dependency trees according to the head-finding rules of Yamada and Matsumoto (2003). [sent-173, score-0.428]

74 For a dependency tree with n words, only n − 1 positive dependency instances can be extracted. [sent-177, score-0.788]

75 They account for only a small proportion of all the dependency instances. [sent-178, score-0.372]

76 The MaxEnt toolkit by Zhang is adopted to train the ME classifier on extracted instances. [sent-189, score-0.171]

77 Therefore, if trained on instances extracted from human-annotated treebanks, the word-pair classification model would not demonstrate its advantage over existing state-of-the-art dependency parsing methods. [sent-214, score-0.713]

78 2 Dependency Projection In this work we focus on the dependency projection from English to Chinese. [sent-216, score-0.594]

79 We use the FBIS Chinese-English bitext as the bilingual corpus for dependency projection. [sent-217, score-0.424]

80 The English sentences are then parsed by an implementation of the 2nd-ordered MST model of McDonald and Pereira (2006), which is trained on dependency trees extracted from WSJ. [sent-223, score-0.477]

81 Larger thresholds result in better but fewer classification instances; the lower coverage of the instances would hurt the performance of the classifier. [sent-227, score-0.213]

82 Table 4: The performance of the projected classifier on the test sets of CTB 2. [sent-233, score-0.721]

83 Table 5: Performance improvement brought by the projected classifier to the baseline 2nd-ordered MST parsers trained on CTB 1. [sent-240, score-0.763]

84 The classifier corresponding to this threshold is evaluated on the test set of CTB 5. [sent-249, score-0.205]

85 Table 4 shows the performance of the projected classifier, as well as the performance of previous works on the corresponding test sets. [sent-253, score-0.596]

86 The projected classifier significantly outperforms previous works on both test sets, which demonstrates that the word-pair classification model, although falling behind the state-of-the-art on human-annotated treebanks, performs well in projected dependency parsing. [sent-254, score-1.795]

87 We credit this to its good collaboration with the word-pair classification instance extraction for dependency projection. [sent-255, score-0.478]

88 First, we implement a chart-based dynamic programming parser for the 2nd-ordered MST model, and develop a training procedure based on the perceptron algorithm with averaged parameters (Collins, 2002). [sent-258, score-0.175]

89 Then, at each derivation step of this 2nd-ordered MST parser, we weightedly add the evaluation score given by the projected classifier to the original MST evaluation score. [sent-260, score-0.762]

90 0 as the baseline, the projected classifier brings an accuracy improvement of about 0. [sent-264, score-0.721]

91 This provides a promising strategy for boosting the parsing performance of resource-scarce languages. [sent-271, score-0.182]

92 Although this parsing method falls behind previous models, it can collaborate well with the word-pair classification instance extraction strategy for dependency projection, and achieves the state-of-the-art in projected dependency parsing. [sent-274, score-1.544]

93 In addition, when integrated into a 2nd-ordered MST parser, the projected parser brings significant improvement to the baseline, especially for baselines trained on smaller treebanks. [sent-275, score-0.716]

94 This provides a new strategy for resource-scarce languages to train high-precision dependency parsers. [sent-276, score-0.423]

95 However, considering its lower performance on human-annotated treebanks, the dependency parsing method itself still needs further investigation, especially on the training method of the classifier. [sent-277, score-0.465]

96 Three new probabilistic models for dependency parsing: An exploration. [sent-320, score-0.372]

97 Automatic adaptation of annotation standards for dependency parsing using projected treebank as source corpus. [sent-344, score-1.089]

98 Corpusbased induction of syntactic structure: Models of dependency and constituency. [sent-353, score-0.372]

99 Japanese dependency structure analysis based on support vector machines. [sent-361, score-0.372]

100 Japanese dependency structure analysis based on maximum entropy models. [sent-413, score-0.372]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('projected', 0.55), ('dependency', 0.372), ('ctb', 0.29), ('mst', 0.29), ('projection', 0.222), ('classifier', 0.171), ('hwa', 0.128), ('mcdonald', 0.123), ('classification', 0.106), ('parsing', 0.093), ('chinese', 0.08), ('edge', 0.078), ('cp', 0.077), ('instances', 0.074), ('jiang', 0.073), ('parser', 0.071), ('alignment', 0.069), ('collins', 0.069), ('iand', 0.068), ('wsj', 0.061), ('wordj', 0.061), ('pereira', 0.057), ('treebanks', 0.056), ('ganchev', 0.055), ('intuitionistic', 0.054), ('integrated', 0.053), ('bilingual', 0.052), ('strategy', 0.051), ('decoding', 0.05), ('carreras', 0.05), ('liu', 0.049), ('yc', 0.046), ('works', 0.046), ('tree', 0.044), ('candidate', 0.043), ('eisner', 0.043), ('trained', 0.042), ('deeply', 0.041), ('wenbin', 0.041), ('factorized', 0.041), ('qun', 0.041), ('spanning', 0.041), ('argmyax', 0.041), ('argyymax', 0.041), ('buf', 0.041), ('cstances', 0.041), ('illegal', 0.041), ('isomerism', 0.041), ('okan', 0.041), ('weightedly', 0.041), ('boosting', 0.038), ('adaptation', 0.038), ('dynamic', 0.038), ('nivre', 0.038), ('programming', 0.037), ('ratio', 0.037), ('parsed', 0.037), ('rebecca', 0.036), ('xavier', 0.036), ('humanannotated', 0.036), ('weinberg', 0.036), ('amy', 0.036), ('treebank', 0.036), ('huang', 0.035), ('smith', 0.035), ('threshold', 0.034), ('yamada', 0.034), ('matsumoto', 0.034), ('thresholds', 0.033), ('edges', 0.033), ('accumulation', 0.033), ('matrix', 0.032), ('unannotated', 0.031), ('distance', 0.031), ('complete', 0.031), ('increment', 0.031), ('xia', 0.031), ('uchimoto', 0.029), ('factorization', 0.029), ('perceptron', 0.029), ('ryan', 0.028), ('factored', 0.028), ('proceedings', 0.028), ('penn', 0.028), ('curves', 0.028), ('reranking', 0.028), ('partitions', 0.028), ('comma', 0.028), ('liang', 0.028), ('pos', 0.028), ('enhanced', 0.027), ('fk', 0.027), ('sb', 0.027), ('supposed', 0.027), ('koo', 0.027), ('relationship', 0.027), ('model', 0.026), ('xi', 0.026), ('templates', 0.026), ('incomplete', 0.026)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999994 83 acl-2010-Dependency Parsing and Projection Based on Word-Pair Classification

Author: Wenbin Jiang ; Qun Liu

Abstract: In this paper we describe an intuitionistic method for dependency parsing, where a classifier is used to determine whether a pair of words forms a dependency edge. We also propose an effective strategy for dependency projection, where the dependency relationships of the word pairs in the source language are projected to the word pairs of the target language, leading to a set of classification instances rather than a complete tree. Experiments show that the classifier trained on the projected classification instances significantly outperforms previous projected dependency parsers. More importantly, when this classifier is integrated into a maximum spanning tree (MST) dependency parser, obvious improvement is obtained over the MST baseline.

2 0.22208109 143 acl-2010-Importance of Linguistic Constraints in Statistical Dependency Parsing

Author: Bharat Ram Ambati

Abstract: Statistical systems with high accuracy are very useful in real-world applications. If these systems can capture basic linguistic information, then the usefulness of these statistical systems improve a lot. This paper is an attempt at incorporating linguistic constraints in statistical dependency parsing. We consider a simple linguistic constraint that a verb should not have multiple subjects/objects as its children in the dependency tree. We first describe the importance of this constraint considering Machine Translation systems which use dependency parser output, as an example application. We then show how the current state-ofthe-art dependency parsers violate this constraint. We present two new methods to handle this constraint. We evaluate our methods on the state-of-the-art dependency parsers for Hindi and Czech. 1

3 0.21647836 99 acl-2010-Efficient Third-Order Dependency Parsers

Author: Terry Koo ; Michael Collins

Abstract: We present algorithms for higher-order dependency parsing that are “third-order” in the sense that they can evaluate substructures containing three dependencies, and “efficient” in the sense that they require only O(n^4) time. Importantly, our new parsers can utilize both sibling-style and grandchild-style interactions. We evaluate our parsers on the Penn Treebank and Prague Dependency Treebank, achieving unlabeled attachment scores of 93.04% and 87.38%, respectively.

4 0.17794001 48 acl-2010-Better Filtration and Augmentation for Hierarchical Phrase-Based Translation Rules

Author: Zhiyang Wang ; Yajuan Lv ; Qun Liu ; Young-Sook Hwang

Abstract: This paper presents a novel filtration criterion to restrict the rule extraction for the hierarchical phrase-based translation model, where a bilingual but relaxed wellformed dependency restriction is used to filter out bad rules. Furthermore, a new feature which describes the regularity that the source/target dependency edge triggers the target/source word is also proposed. Experimental results show that, the new criteria weeds out about 40% rules while with translation performance improvement, and the new feature brings another improvement to the baseline system, especially on larger corpus.

5 0.16067314 20 acl-2010-A Transition-Based Parser for 2-Planar Dependency Structures

Author: Carlos Gomez-Rodriguez ; Joakim Nivre

Abstract: Finding a class of structures that is rich enough for adequate linguistic representation yet restricted enough for efficient computational processing is an important problem for dependency parsing. In this paper, we present a transition system for 2-planar dependency trees (trees that can be decomposed into at most two planar graphs) and show that it can be used to implement a classifier-based parser that runs in linear time and outperforms a state-of-the-art transition-based parser on four data sets from the CoNLL-X shared task. In addition, we present an efficient method for determining whether an arbitrary tree is 2-planar and show that 99% or more of the trees in existing treebanks are 2-planar.

6 0.15868211 52 acl-2010-Bitext Dependency Parsing with Bilingual Subtree Constraints

7 0.1509247 69 acl-2010-Constituency to Dependency Translation with Forests

8 0.14994307 93 acl-2010-Dynamic Programming for Linear-Time Incremental Parsing

9 0.14844255 241 acl-2010-Transition-Based Parsing with Confidence-Weighted Classification

10 0.13609797 242 acl-2010-Tree-Based Deterministic Dependency Parsing - An Application to Nivre's Method -

11 0.12825641 214 acl-2010-Sparsity in Dependency Grammar Induction

12 0.11907735 84 acl-2010-Detecting Errors in Automatically-Parsed Dependency Relations

13 0.10725436 161 acl-2010-Learning Better Data Representation Using Inference-Driven Metric Learning

14 0.10614375 195 acl-2010-Phylogenetic Grammar Induction

15 0.098812424 253 acl-2010-Using Smaller Constituents Rather Than Sentences in Active Learning for Japanese Dependency Parsing

16 0.094523586 153 acl-2010-Joint Syntactic and Semantic Parsing of Chinese

17 0.089607872 133 acl-2010-Hierarchical Search for Word Alignment

18 0.088217504 130 acl-2010-Hard Constraints for Grammatical Function Labelling

19 0.081244744 150 acl-2010-Inducing Domain-Specific Semantic Class Taggers from (Almost) Nothing

20 0.080792688 110 acl-2010-Exploring Syntactic Structural Features for Sub-Tree Alignment Using Bilingual Tree Kernels


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.247), (1, -0.105), (2, 0.077), (3, 0.064), (4, -0.074), (5, -0.063), (6, 0.076), (7, 0.008), (8, -0.072), (9, 0.275), (10, -0.218), (11, 0.024), (12, -0.082), (13, 0.135), (14, 0.15), (15, -0.05), (16, -0.011), (17, -0.028), (18, 0.018), (19, -0.084), (20, 0.001), (21, 0.008), (22, 0.062), (23, -0.047), (24, -0.019), (25, -0.0), (26, 0.034), (27, 0.04), (28, -0.054), (29, -0.022), (30, -0.056), (31, 0.027), (32, 0.001), (33, -0.024), (34, 0.026), (35, -0.011), (36, -0.09), (37, -0.024), (38, 0.056), (39, -0.146), (40, -0.09), (41, -0.078), (42, -0.007), (43, -0.064), (44, 0.017), (45, -0.043), (46, 0.075), (47, -0.062), (48, -0.111), (49, 0.014)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97349375 83 acl-2010-Dependency Parsing and Projection Based on Word-Pair Classification

Author: Wenbin Jiang ; Qun Liu

Abstract: In this paper we describe an intuitionistic method for dependency parsing, where a classifier is used to determine whether a pair of words forms a dependency edge. We also propose an effective strategy for dependency projection, where the dependency relationships of the word pairs in the source language are projected to the word pairs of the target language, leading to a set of classification instances rather than a complete tree. Experiments show that the classifier trained on the projected classification instances significantly outperforms previous projected dependency parsers. More importantly, when this classifier is integrated into a maximum spanning tree (MST) dependency parser, obvious improvement is obtained over the MST baseline.

2 0.89293182 143 acl-2010-Importance of Linguistic Constraints in Statistical Dependency Parsing

Author: Bharat Ram Ambati

Abstract: Statistical systems with high accuracy are very useful in real-world applications. If these systems can capture basic linguistic information, then the usefulness of these statistical systems improve a lot. This paper is an attempt at incorporating linguistic constraints in statistical dependency parsing. We consider a simple linguistic constraint that a verb should not have multiple subjects/objects as its children in the dependency tree. We first describe the importance of this constraint considering Machine Translation systems which use dependency parser output, as an example application. We then show how the current state-ofthe-art dependency parsers violate this constraint. We present two new methods to handle this constraint. We evaluate our methods on the state-of-the-art dependency parsers for Hindi and Czech. 1

3 0.83533651 99 acl-2010-Efficient Third-Order Dependency Parsers

Author: Terry Koo ; Michael Collins

Abstract: We present algorithms for higher-order dependency parsing that are “third-order” in the sense that they can evaluate substructures containing three dependencies, and “efficient” in the sense that they require only O(n^4) time. Importantly, our new parsers can utilize both sibling-style and grandchild-style interactions. We evaluate our parsers on the Penn Treebank and Prague Dependency Treebank, achieving unlabeled attachment scores of 93.04% and 87.38%, respectively.

4 0.77066272 241 acl-2010-Transition-Based Parsing with Confidence-Weighted Classification

Author: Martin Haulrich

Abstract: We show that using confidence-weighted classification in transition-based parsing gives results comparable to using SVMs with faster training and parsing time. We also compare with other online learning algorithms and investigate the effect of pruning features when using confidenceweighted classification.

5 0.6829772 242 acl-2010-Tree-Based Deterministic Dependency Parsing - An Application to Nivre's Method -

Author: Kotaro Kitagawa ; Kumiko Tanaka-Ishii

Abstract: Nivre’s method was improved by enhancing deterministic dependency parsing through application of a tree-based model. The model considers all words necessary for selection of parsing actions by including words in the form of trees. It chooses the most probable head candidate from among the trees and uses this candidate to select a parsing action. In an evaluation experiment using the Penn Treebank (WSJ section), the proposed model achieved higher accuracy than did previous deterministic models. Although the proposed model’s worst-case time complexity is O(n^2), the experimental results demonstrated an average parsing time not much slower than O(n).

6 0.67449665 52 acl-2010-Bitext Dependency Parsing with Bilingual Subtree Constraints

7 0.66442472 20 acl-2010-A Transition-Based Parser for 2-Planar Dependency Structures

8 0.65125847 12 acl-2010-A Probabilistic Generative Model for an Intermediate Constituency-Dependency Representation

9 0.64525437 253 acl-2010-Using Smaller Constituents Rather Than Sentences in Active Learning for Japanese Dependency Parsing

10 0.63200748 93 acl-2010-Dynamic Programming for Linear-Time Incremental Parsing

11 0.62080145 84 acl-2010-Detecting Errors in Automatically-Parsed Dependency Relations

12 0.59857911 48 acl-2010-Better Filtration and Augmentation for Hierarchical Phrase-Based Translation Rules

13 0.55741924 195 acl-2010-Phylogenetic Grammar Induction

14 0.55649298 214 acl-2010-Sparsity in Dependency Grammar Induction

15 0.55244577 130 acl-2010-Hard Constraints for Grammatical Function Labelling

16 0.51611131 69 acl-2010-Constituency to Dependency Translation with Forests

17 0.4564037 161 acl-2010-Learning Better Data Representation Using Inference-Driven Metric Learning

18 0.44865692 200 acl-2010-Profiting from Mark-Up: Hyper-Text Annotations for Guided Parsing

19 0.43096083 211 acl-2010-Simple, Accurate Parsing with an All-Fragments Grammar

20 0.42439491 252 acl-2010-Using Parse Features for Preposition Selection and Error Detection


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(7, 0.013), (14, 0.052), (18, 0.013), (25, 0.054), (39, 0.014), (42, 0.028), (44, 0.013), (59, 0.103), (71, 0.011), (72, 0.016), (73, 0.042), (76, 0.035), (78, 0.025), (83, 0.066), (84, 0.019), (98, 0.206), (99, 0.211)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.86540949 106 acl-2010-Event-Based Hyperspace Analogue to Language for Query Expansion

Author: Tingxu Yan ; Tamsin Maxwell ; Dawei Song ; Yuexian Hou ; Peng Zhang

Abstract: p . zhang1 @ rgu .ac .uk Bag-of-words approaches to information retrieval (IR) are effective but assume independence between words. The Hyperspace Analogue to Language (HAL) is a cognitively motivated and validated semantic space model that captures statistical dependencies between words by considering their co-occurrences in a surrounding window of text. HAL has been successfully applied to query expansion in IR, but has several limitations, including high processing cost and use of distributional statistics that do not exploit syntax. In this paper, we pursue two methods for incorporating syntactic-semantic information from textual ‘events’ into HAL. We build the HAL space directly from events to investigate whether processing costs can be reduced through more careful definition of word co-occurrence, and improve the quality of the pseudo-relevance feedback by applying event information as a constraint during HAL construction. Both methods significantly improve performance results in comparison with original HAL, and interpolation of HAL and relevance model expansion outperforms either method alone.

same-paper 2 0.85552973 83 acl-2010-Dependency Parsing and Projection Based on Word-Pair Classification

Author: Wenbin Jiang ; Qun Liu

Abstract: In this paper we describe an intuitionistic method for dependency parsing, where a classifier is used to determine whether a pair of words forms a dependency edge. We also propose an effective strategy for dependency projection, where the dependency relationships of the word pairs in the source language are projected to the word pairs of the target language, leading to a set of classification instances rather than a complete tree. Experiments show that the classifier trained on the projected classification instances significantly outperforms previous projected dependency parsers. More importantly, when this classifier is integrated into a maximum spanning tree (MST) dependency parser, obvious improvement is obtained over the MST baseline.

3 0.8230046 15 acl-2010-A Semi-Supervised Key Phrase Extraction Approach: Learning from Title Phrases through a Document Semantic Network

Author: Decong Li ; Sujian Li ; Wenjie Li ; Wei Wang ; Weiguang Qu

Abstract: It is a fundamental and important task to extract key phrases from documents. Generally, phrases in a document are not independent in delivering the content of the document. In order to capture and make better use of their relationships in key phrase extraction, we suggest exploring the Wikipedia knowledge to model a document as a semantic network, where both n-ary and binary relationships among phrases are formulated. Based on a commonly accepted assumption that the title of a document is always elaborated to reflect the content of a document and consequently key phrases tend to have close semantics to the title, we propose a novel semi-supervised key phrase extraction approach in this paper by computing the phrase importance in the semantic network, through which the influence of title phrases is propagated to the other phrases iteratively. Experimental results demonstrate the remarkable performance of this approach. 1

4 0.74592501 93 acl-2010-Dynamic Programming for Linear-Time Incremental Parsing

Author: Liang Huang ; Kenji Sagae

Abstract: Incremental parsing techniques such as shift-reduce have gained popularity thanks to their efficiency, but there remains a major problem: the search is greedy and only explores a tiny fraction of the whole space (even with beam search) as opposed to dynamic programming. We show that, surprisingly, dynamic programming is in fact possible for many shift-reduce parsers, by merging “equivalent” stacks based on feature values. Empirically, our algorithm yields up to a five-fold speedup over a state-of-the-art shift-reduce depen- dency parser with no loss in accuracy. Better search also leads to better learning, and our final parser outperforms all previously reported dependency parsers for English and Chinese, yet is much faster.

5 0.7449221 133 acl-2010-Hierarchical Search for Word Alignment

Author: Jason Riesa ; Daniel Marcu

Abstract: We present a simple yet powerful hierarchical search algorithm for automatic word alignment. Our algorithm induces a forest of alignments from which we can efficiently extract a ranked k-best list. We score a given alignment within the forest with a flexible, linear discriminative model incorporating hundreds of features, and trained on a relatively small amount of annotated data. We report results on Arabic-English word alignment and translation tasks. Our model outperforms a GIZA++ Model-4 baseline by 6.3 points in F-measure, yielding a 1.1 BLEU score increase over a state-of-the-art syntax-based machine translation system.

6 0.74363124 79 acl-2010-Cross-Lingual Latent Topic Extraction

7 0.74344897 52 acl-2010-Bitext Dependency Parsing with Bilingual Subtree Constraints

8 0.74017197 62 acl-2010-Combining Orthogonal Monolingual and Multilingual Sources of Evidence for All Words WSD

9 0.73883468 232 acl-2010-The S-Space Package: An Open Source Package for Word Space Models

10 0.73869956 146 acl-2010-Improving Chinese Semantic Role Labeling with Rich Syntactic Features

11 0.73675168 170 acl-2010-Letter-Phoneme Alignment: An Exploration

12 0.73594952 125 acl-2010-Generating Templates of Entity Summaries with an Entity-Aspect Model and Pattern Mining

13 0.73573965 20 acl-2010-A Transition-Based Parser for 2-Planar Dependency Structures

14 0.73464918 218 acl-2010-Structural Semantic Relatedness: A Knowledge-Based Method to Named Entity Disambiguation

15 0.73438871 80 acl-2010-Cross Lingual Adaptation: An Experiment on Sentiment Classifications

16 0.73319256 87 acl-2010-Discriminative Modeling of Extraction Sets for Machine Translation

17 0.73186785 214 acl-2010-Sparsity in Dependency Grammar Induction

18 0.73145103 99 acl-2010-Efficient Third-Order Dependency Parsers

19 0.73132372 211 acl-2010-Simple, Accurate Parsing with an All-Fragments Grammar

20 0.73109174 188 acl-2010-Optimizing Informativeness and Readability for Sentiment Summarization