acl acl2012 acl2012-87 knowledge-graph by maker-knowledge-mining

87 acl-2012-Exploiting Multiple Treebanks for Parsing with Quasi-synchronous Grammars


Source: pdf

Author: Zhenghua Li ; Ting Liu ; Wanxiang Che

Abstract: We present a simple and effective framework for exploiting multiple monolingual treebanks with different annotation guidelines for parsing. Several types of transformation patterns (TP) are designed to capture the systematic annotation inconsistencies among different treebanks. Based on such TPs, we design quasi-synchronous grammar features to augment the baseline parsing models. Our approach can significantly advance the state-of-the-art parsing accuracy on two widely used target treebanks (Penn Chinese Treebank 5.1 and 6.0) using the Chinese Dependency Treebank as the source treebank. The improvements are respectively 1.37% and 1.10% with automatic part-of-speech tags. Moreover, an indirect comparison indicates that our approach also outperforms previous work based on treebank conversion.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: We present a simple and effective framework for exploiting multiple monolingual treebanks with different annotation guidelines for parsing. [sent-4, score-0.391]

2 Several types of transformation patterns (TP) are designed to capture the systematic annotation inconsistencies among different treebanks. [sent-5, score-0.281]

3 Based on such TPs, we design quasi-synchronous grammar features to augment the baseline parsing models. [sent-6, score-0.314]

4 Our approach can significantly advance the state-of-the-art parsing accuracy on two widely used target treebanks (Penn Chinese Treebank 5.1 and 6.0) using the Chinese Dependency Treebank as the source treebank. [sent-7, score-0.498]

5 Moreover, an indirect comparison indicates that our approach also outperforms previous work based on treebank conversion. [sent-13, score-0.369]

6 As a structural classification problem that is more challenging than binary classification and sequence labeling, syntactic parsing is especially prone to the data sparseness problem. [sent-15, score-0.167]

7 However, the heavy cost of treebanking typically limits any single treebank in both scale and genre. [sent-16, score-0.360]

8 At present, learning from a single treebank seems inadequate for further boosting parsing accuracy. [sent-17, score-0.496]

9 Incorporating an increased number of global features, such as third-order features in graph-based parsers, slightly affects parsing accuracy (Koo and Collins, 2010; Li et al. [sent-21, score-0.238]

10 Therefore, studies have recently resorted to other resources for the enhancement of parsing models, such as large-scale unlabeled data (Koo et al. [sent-26, score-0.139]

11 , 2011), and bilingual texts or cross-lingual treebanks (Burkett and Klein, 2008; Huang et al. [sent-29, score-0.214]

12 The existence of multiple monolingual treebanks opens another door for addressing this issue. [sent-33, score-0.250]

13 For example, Table 1 lists a few publicly available Chinese treebanks that are motivated by different linguistic theories or applications. [sent-34, score-0.214]

14 Despite the divergence of annotation philosophy, these treebanks contain rich human knowledge of Chinese syntax and therefore share a great deal of common ground. [sent-44, score-0.312]

15 Therefore, exploiting multiple treebanks is very attractive for boosting parsing accuracy. [sent-45, score-0.427]

16 This example illustrates that the two treebanks annotate coordination constructions differently. [sent-50, score-0.214]

17 One natural idea for multiple treebank exploitation is treebank conversion. [sent-52, score-0.652]

18 First, the annotations in the source treebank are converted into the style of the target treebank. [sent-53, score-0.553]

19 Then, the converted treebank and the target treebank are combined. [sent-54, score-0.799]

20 Finally, the combined treebank is used to train a better parser. [sent-55, score-0.326]
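Read as a recipe, the conversion-based pipeline of entries 17-20 amounts to three steps. Below is a minimal sketch; the helper names (convert_to_target_style, train_parser) and the flat list-of-trees representation are illustrative assumptions, not the authors' code.

```python
# Schematic sketch of the treebank-conversion baseline (entries 17-20).
# Every name here (Tree, convert_to_target_style, train_parser) is a
# hypothetical stand-in for whatever toolkit is actually used.
from typing import Callable, List

Tree = object  # placeholder type for a dependency tree of either style

def conversion_pipeline(
    source_treebank: List[Tree],
    target_treebank: List[Tree],
    convert_to_target_style: Callable[[Tree], Tree],
    train_parser: Callable[[List[Tree]], object],
) -> object:
    # Step 1: convert source-style annotations into the target style.
    converted = [convert_to_target_style(tree) for tree in source_treebank]
    # Step 2: combine the converted treebank with the target treebank.
    combined = converted + target_treebank
    # Step 3: train a (hopefully better) parser on the combined data.
    return train_parser(combined)
```

Entry 21 below explains why step 1 is the weak link: the inconsistencies are nontrivial, so rule-based conversion is infeasible and automatic conversion is noisy.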

21 However, the inconsistencies among different treebanks are normally nontrivial, which makes rule-based conversion infeasible. [sent-56, score-0.417]

22 For example, a number of inconsistencies between CTB5 and CDT are lexicon-sensitive; that is, the two treebanks adopt different annotations for certain lexicons (or word senses). [sent-57, score-0.141]

23 Niu et al. (2009) use sophisticated strategies to reduce the noise in the converted treebank after automatic treebank conversion. [sent-59, score-0.735]

24 The proposed framework avoids directly addressing the difficult annotation transformation problem, but focuses on modeling the annotation inconsistencies using transformation patterns (TP). [sent-61, score-0.421]

25 The TPs are used to compose quasi-synchronous grammar (QG) features, such that the knowledge of the source treebank can inspire the target parser to build better trees. [sent-62, score-0.573]
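To make entry 25 concrete, here is a minimal sketch of how QG-style features could fire: a candidate target arc (h, m) is looked up in the source parser's automatically produced tree over the same sentence, and the relation found there, conjoined with POS context, becomes a feature. The relation inventory and feature strings below are illustrative assumptions, not the paper's exact transformation patterns.

```python
# Sketch of QG-style features: classify how a candidate target arc (h, m)
# is realized in the source-style parse of the same sentence, then emit
# features combining that relation with POS context. The relation names
# and feature templates below are assumptions for illustration only.
from typing import List

def source_relation(src_heads: List[int], h: int, m: int) -> str:
    """src_heads[i] is the head of word i in the source-style tree;
    index 0 is the artificial root, with src_heads[0] == -1."""
    if src_heads[m] == h:
        return "same-arc"        # source tree agrees with the candidate arc
    if src_heads[h] == m:
        return "reversed-arc"    # head and modifier are swapped
    if src_heads[m] == src_heads[h]:
        return "siblings"        # both words attach to a common head
    g = src_heads[m]
    if g > 0 and src_heads[g] == h:
        return "grandparent"     # h is m's grandparent in the source tree
    return "other"

def qg_features(src_heads: List[int], pos: List[str], h: int, m: int) -> List[str]:
    rel = source_relation(src_heads, h, m)
    # Conjoin the source-side relation with POS tags so the model can
    # softly learn pattern-specific annotation divergences.
    return [f"QG:{rel}", f"QG:{rel}:{pos[h]}->{pos[m]}"]
```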

26 We conduct extensive experiments using CDT as the source treebank to enhance two target treebanks (CTB5 and CTB6). [sent-63, score-0.637]

27 Results show that our approach can significantly boost state-of-the-art parsing accuracy. [sent-64, score-0.219]

28 Moreover, an indirect comparison indicates that our approach also outperforms the treebank conversion approach of Niu et al. (2009). [sent-65, score-0.531]

29 CTB5 is converted to dependency structures following the standard practice of dependency parsing (Zhang and Clark, 2008b). [sent-67, score-0.388]

30 Jiang et al. (2009) improve the performance of word segmentation and part-of-speech (POS) tagging on CTB5 using another large-scale corpus with different annotation standards (People’s Daily). [sent-72, score-0.176]

31 However, handling syntactic annotation inconsistencies is significantly more challenging in our case of parsing. [sent-74, score-0.296]

32 Smith and Eisner (2009) propose effective QG features for parser adaptation and projection. [sent-75, score-0.159]

33 First, they conduct simulated experiments on one treebank by manually creating a few trivial annotation inconsistencies based on two heuristic rules. [sent-77, score-0.565]

34 They then focus on better adapting a parser to a new annotation style with only a few sentences of the target style. [sent-78, score-0.321]

35 In contrast, we experiment with two real large-scale treebanks, and boost the state-of-the-art parsing accuracy using QG features. [sent-79, score-0.242]

36 Second, we explore much richer QG features to fully exploit the knowledge of the source treebank. [sent-80, score-0.08]

37 These features are tailored to the dependency parsing problem. [sent-81, score-0.319]

38 In summary, the present work makes substantial progress in modeling structural annotation inconsistencies with QG features for parsing. [sent-82, score-0.286]

39 Previous work on treebank conversion primarily focuses on converting one grammar formalism of a treebank into another and then conducting a study on the converted treebank (Collins et al. [sent-83, score-1.161]

40 Niu et al. (2009) is, to our knowledge, the only study to date that combines the converted treebank with the existing target treebank. [sent-87, score-0.473]

41 They automatically convert the dependency-structure CDT into the phrase-structure style of CTB5 using a statistical constituency parser trained on CTB5. [sent-88, score-0.192]

42 Their experiments show that the combined treebank can significantly improve the performance of constituency parsers. [sent-89, score-0.355]

43 Instead of using the noisy converted treebank as additional training data, our approach allows the QG-enhanced parsing models to softly learn the systematic inconsistencies through QG features, which makes it simpler and more robust. [sent-91, score-0.689]

44 Our approach is also intuitively related to stacked learning (SL), a machine learning framework that has recently been applied to dependency parsing to integrate the two mainstream parsing models, i.e., the graph-based and transition-based models. [sent-92, score-0.411]

45 However, the SL framework trains two parsers on the same treebank and therefore does not need to consider the problem of annotation inconsistencies. [sent-96, score-0.467]

46 Given an input sentence x = w0 w1 ... wn, the goal of dependency parsing is to build a dependency tree as depicted in Figure 1, denoted by d = {(h, m, l) : 0 ≤ h ≤ n, 0 < m ≤ n, l ∈ L}, where (h, m, l) indicates a directed arc with label l from the head wh to the modifier wm. [sent-103, score-0.405]
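Entry 46's definition maps directly onto a set of labeled arcs; below is a minimal sketch of that representation with a basic well-formedness check (acyclicity omitted). The encoding is illustrative, not the paper's data structure.

```python
# The tree of entry 46 as a set of labeled arcs (h, m, l): 0 <= h <= n
# selects the head (0 is the artificial root w0), 0 < m <= n selects the
# modifier, and l comes from the label set L. Illustrative encoding only.
from typing import Set, Tuple

Arc = Tuple[int, int, str]  # (head index, modifier index, label)

def is_well_formed(arcs: Set[Arc], n: int) -> bool:
    """Every word 1..n must receive exactly one in-range head
    (the acyclicity check is omitted for brevity)."""
    heads = {}
    for h, m, label in arcs:
        if not (0 <= h <= n and 0 < m <= n):
            return False
        if m in heads:              # a modifier may have only one head
            return False
        heads[m] = h
    return len(heads) == n          # every word 1..n is attached

# Example: "w0 He saw her" with n = 3.
tree = {(0, 2, "ROOT"), (2, 1, "SBJ"), (2, 3, "OBJ")}
assert is_well_formed(tree, 3)
```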

47 At this point, both CTB5 and CTB6 contain dependency structures conforming to the style of CDT. [sent-107, score-0.18]

48 CTB5 as the Target Treebank: Table 4 shows the results when the gold-standard POS tags of CTB5 are adopted by the parsing models. [sent-109, score-0.175]

49 We aim to analyze the efficacy of QG features under the ideal scenario wherein the parsing models suffer from no error propagation from POS tagging. [sent-110, score-0.354]

50 We determine that our baseline O2 model achieves comparable accuracy with the state-of-the-art parsers. [sent-111, score-0.081]

51 We also find that QG features can boost the parsing accuracy by a large margin when the baseline parser is weak (O1). [sent-112, score-0.43]

52 When gold-standard POS tags are available, the baseline features are very reliable and the QG features become less helpful for more complex models. [sent-115, score-0.159]

53 We then turn to the more realistic scenario wherein the gold-standard POS tags of the target treebank are unavailable. [sent-117, score-0.499]

54 We find that QG features result in a surprisingly large improvement over the O1 baseline and can also boost the state-of-the-art parsing accuracy by a large margin. [sent-157, score-0.318]

55 Li et al. (2011) show that a joint POS tagging and dependency parsing model can significantly improve parsing accuracy over a pipeline model. [sent-159, score-0.559]

56 Our QG-enhanced parser outperforms their best joint model by 0. [sent-160, score-0.144]

57 Moreover, the QG features can be used to enhance a joint model and achieve higher accuracy, which we leave as future work. [sent-162, score-0.079]

58 We select the state-of-the-art O2 parser and focus on the realistic scenario with automatic POS tags. [sent-165, score-0.141]

59 Table 6 compares the efficacy of different feature sets. [sent-166, score-0.095]

60 The first major row analyzes the efficacy of the feature sets. (We could use the POS tags produced by TaggerPD in Section 5.) [sent-167, score-0.131]

61 When using the few QG features in Table 2, the accuracy is very close to that when using the basic features. [sent-173, score-0.099]

62 The second major row compares the efficacy of the three kinds of QG features corresponding to the three types of scoring parts. [sent-175, score-0.142]

63 Meanwhile, the source parser ParserCDT is trained on the whole CDT-train. [sent-182, score-0.145]

64 We can see that the QG features yield a larger improvement when the target treebank is of smaller scale, which is quite reasonable. [sent-183, score-0.465]

65 More importantly, the curves indicate that a QG-enhanced parser trained on a target treebank of 16,000 sentences may achieve comparable accuracy with a baseline parser trained on a treebank that is double the size (32,000), which is very encouraging. [sent-184, score-1.021]

66 In the right subfigure, the target parser is trained on the whole CTB5-train, whereas the source parser is trained on part of the CDT-train, and “55. [sent-185, score-0.570]

67 The curve clearly demonstrates that the QG features are more helpful when the source treebank gets larger, which can be explained as follows. [sent-187, score-0.406]

68 A larger source treebank produces a more accurate source parser; the better source parser can then parse the target treebank more reliably; and finally, the target parser can better learn the annotation divergences based on QG features. [sent-188, score-1.341]

69 Table 7 presents the detailed effect of the QG features on different dependency patterns. [sent-190, score-0.18]

70 A pattern “VV → NN” refers to a right-directed dependency with the head tagged as “VV” and the modifier tagged as “NN”. [sent-191, score-0.133]

71 whereas “←” means left-directed; the “w/o QG” column shows the number of times the corresponding dependency pattern appears in the gold-standard trees but is missed by the baseline parser, whereas the signed figures in the “+QG” column are the changes made by the QG-enhanced parser. [sent-194, score-0.197]
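The per-pattern error analysis of entries 70-71 is mechanical to reproduce; a sketch follows, assuming gold and predicted trees are given as parallel head arrays alongside POS tags (index 0 being the artificial root).

```python
# Count how often a directed dependency pattern such as "VV -> NN" (head
# tagged VV, right-directed arc to a modifier tagged NN) appears in the
# gold trees but is missed by a parser, as in Table 7. The input format
# (parallel head and POS arrays, index 0 = root) is an assumption.
from collections import Counter
from typing import List, Tuple

Sentence = Tuple[List[int], List[str]]  # (heads, POS tags)

def pattern(h: int, m: int, pos: List[str]) -> str:
    arrow = "->" if h < m else "<-"     # right- vs. left-directed arc
    return f"{pos[h]} {arrow} {pos[m]}"

def missed_patterns(gold: List[Sentence], pred: List[Sentence]) -> Counter:
    missed = Counter()
    for (gold_heads, pos), (pred_heads, _) in zip(gold, pred):
        for m in range(1, len(gold_heads)):
            if pred_heads[m] != gold_heads[m]:   # gold arc not recovered
                missed[pattern(gold_heads[m], m, pos)] += 1
    return missed
```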

72 Figure 5: Parsing accuracy (UAS) comparison on CTB5-test when the scale of CDT and CTB5 varies (in thousands of sentences); the two panels plot accuracy against the training set size of CTB5 and of CDT. [sent-196, score-0.086]
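Figure 5's y-axis is UAS; for reference, a minimal sketch of the metric, the percentage of words whose predicted head matches the gold head (punctuation handling omitted here).

```python
# Unlabeled attachment score (UAS), the metric plotted in Figure 5: the
# percentage of words (excluding the artificial root) whose predicted
# head index equals the gold head index. Punctuation handling omitted.
from typing import List

def uas(gold_heads: List[List[int]], pred_heads: List[List[int]]) -> float:
    correct = total = 0
    for gold, pred in zip(gold_heads, pred_heads):
        for m in range(1, len(gold)):   # skip index 0, the artificial root
            correct += int(gold[m] == pred[m])
            total += 1
    return 100.0 * correct / total
```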

73 We find that the QG features can significantly help a variety of dependency patterns (i. [sent-200, score-0.209]

74 CTB6 as the Target Treebank: We use CTB6 as the target treebank to further verify the efficacy of our approach. [sent-204, score-0.485]

75 Compared with CTB5, CTB6 is of larger scale and is converted into dependency structures according to finer-grained head-finding rules (Hajič et al. [sent-205, score-0.329]

76 We directly adopt the same transformation patterns and features tuned on CTB5. [sent-207, score-0.089]

77 We list the top three systems of the CoNLL 2009 shared task in Table 8, showing that our approach also advances the state-of-the-art parsing accuracy on this data set. [sent-210, score-0.220]

78 The parsing accuracies of the top systems may be underestimated since the accuracy of the provided POS tags in CoNLL 2009 is only 92. [sent-216, score-0.227]

79 Niu et al. (2009) use the maximum-entropy-inspired generative parser (GP) of Charniak (2000) as their constituent parser. [sent-232, score-0.112]

80 They automatically convert the dependency-structure CDT to the phrase-structure annotation style of CTB5X and use the converted treebank as additional labeled data. [sent-235, score-0.587]

81 We convert their phrase-structure results on CTB5X-test into dependency structures using the same head-finding rules. [sent-236, score-0.217]

82 To compare with their results, we run our baseline and QG-enhanced O2 parsers on CTB5X. [sent-237, score-0.072]

83 The indirect comparison indicates that our approach achieves a larger improvement than their treebank-conversion-based method. [sent-239, score-0.459]

84 Conclusions: The current paper proposes a simple and effective framework for exploiting multiple large-scale treebanks of different annotation styles. [sent-240, score-0.355]

85 We design rich TPs to model the annotation inconsistencies and consequently propose QG features based on these TPs. [sent-241, score-0.286]

86 Extensive experiments show that our approach can effectively utilize the syntactic knowledge from another treebank and significantly improve the state-of-the-art parsing accuracy. [sent-242, score-0.522]

87 In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 693–702, Portland, Oregon, USA, June. [sent-255, score-0.068]

88 In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task, pages 67–72, Boulder, Colorado, June. [sent-260, score-0.068]

89 In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 877–886, Honolulu, Hawaii, October. [sent-265, score-0.068]

90 In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, CoNLL ’10, pages 46–54, Stroudsburg, PA, USA. [sent-270, score-0.068]

91 In Proceedings of CoNLL 2009: Shared Task, pages 49–54. [sent-283, score-0.068]

92 Sinica treebank: Design criteria, representational issues and implementation, chapter 13, pages 231–248. [sent-285, score-0.068]

93 In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 570–579, Singapore, August. [sent-290, score-0.068]

94 In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 21–29, Uppsala, Sweden, July. [sent-295, score-0.068]

95 A latent variable model of synchronous syntactic-semantic parsing for multiple languages. [sent-307, score-0.139]

96 Automatic adaptation of annotation standards: Chinese word segmentation and POS tagging – a case study. [sent-327, score-0.226]

97 Joint models for Chinese POS tagging and dependency parsing. [sent-342, score-0.334]

98 The Penn Chinese Treebank: Phrase structure annotation of a large corpus. [sent-419, score-0.098]

99 A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing. [sent-431, score-0.133]

100 Exploiting web-derived selectional preference to improve statistical dependency parsing. [sent-441, score-0.133]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('qg', 0.613), ('treebank', 0.326), ('cdt', 0.23), ('treebanks', 0.214), ('fbs', 0.153), ('inconsistencies', 0.141), ('parsing', 0.139), ('dependency', 0.133), ('niu', 0.119), ('parser', 0.112), ('annotation', 0.098), ('efficacy', 0.095), ('pos', 0.093), ('converted', 0.083), ('fqg', 0.077), ('tps', 0.077), ('chinese', 0.073), ('pages', 0.068), ('conll', 0.067), ('burkett', 0.065), ('target', 0.064), ('conversion', 0.062), ('quasisynchronous', 0.061), ('koo', 0.058), ('wanxiang', 0.057), ('chen', 0.055), ('ting', 0.055), ('joakim', 0.054), ('smith', 0.052), ('accuracy', 0.052), ('che', 0.051), ('parentheses', 0.051), ('headfinding', 0.051), ('qgenhanced', 0.051), ('subfigure', 0.051), ('boost', 0.051), ('singapore', 0.05), ('association', 0.05), ('features', 0.047), ('style', 0.047), ('wenliang', 0.045), ('gp', 0.044), ('wherein', 0.044), ('parsers', 0.043), ('exploiting', 0.043), ('nivre', 0.043), ('yue', 0.043), ('standards', 0.043), ('indirect', 0.043), ('xia', 0.043), ('transformation', 0.042), ('jiang', 0.04), ('zhenghua', 0.038), ('wenbin', 0.038), ('grammar', 0.038), ('monolingual', 0.036), ('nn', 0.036), ('tags', 0.036), ('sinica', 0.036), ('martins', 0.036), ('li', 0.035), ('jan', 0.035), ('tr', 0.035), ('tagging', 0.035), ('whereas', 0.035), ('scale', 0.034), ('rp', 0.034), ('xue', 0.033), ('zhang', 0.033), ('source', 0.033), ('convert', 0.033), ('tp', 0.033), ('uas', 0.033), ('kazama', 0.033), ('sl', 0.033), ('joint', 0.032), ('charniak', 0.032), ('noah', 0.032), ('oregon', 0.031), ('terry', 0.031), ('boosting', 0.031), ('bansal', 0.031), ('mcdonald', 0.031), ('ryan', 0.031), ('proceedings', 0.03), ('iwpt', 0.03), ('haji', 0.03), ('baseline', 0.029), ('significantly', 0.029), ('kentaro', 0.029), ('honolulu', 0.029), ('scenario', 0.029), ('huang', 0.029), ('moreover', 0.029), ('shared', 0.029), ('hawaii', 0.028), ('larger', 0.028), ('zhou', 0.028), ('syntactic', 0.028), ('nianwen', 0.028)]
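The bracketed word weights above and the "[sent-N, score-S]" annotations in the summary section come from a tfidf model; a minimal sketch of how such per-sentence scores could be computed follows. The exact tokenization, weighting, and normalization of the mining pipeline are not documented, so everything below is an assumption.

```python
# Minimal tfidf sentence scorer of the kind that could produce the
# "[sent-N, score-S]" annotations above: treat each sentence as a document,
# weight words by tf * idf, and length-normalize. All modeling choices
# here (tokenization, log idf, normalization) are assumptions.
import math
from collections import Counter
from typing import List

def tfidf_sentence_scores(sentences: List[str]) -> List[float]:
    docs = [s.lower().split() for s in sentences]
    n = len(docs)
    df = Counter(word for doc in docs for word in set(doc))  # document freq.
    scores = []
    for doc in docs:
        tf = Counter(doc)
        total = sum(tf[w] * math.log(n / df[w]) for w in tf)
        scores.append(total / max(len(doc), 1))              # length-normalize
    return scores
```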

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000002 87 acl-2012-Exploiting Multiple Treebanks for Parsing with Quasi-synchronous Grammars

Author: Zhenghua Li ; Ting Liu ; Wanxiang Che

Abstract: We present a simple and effective framework for exploiting multiple monolingual treebanks with different annotation guidelines for parsing. Several types of transformation patterns (TP) are designed to capture the systematic annotation inconsistencies among different treebanks. Based on such TPs, we design quasi-synchronous grammar features to augment the baseline parsing models. Our approach can significantly advance the state-of-the-art parsing accuracy on two widely used target treebanks (Penn Chinese Treebank 5.1 and 6.0) using the Chinese Dependency Treebank as the source treebank. The improvements are respectively 1.37% and 1.10% with automatic part-of-speech tags. Moreover, an indirect comparison indicates that our approach also outperforms previous work based on treebank conversion.

2 0.2029167 5 acl-2012-A Comparison of Chinese Parsers for Stanford Dependencies

Author: Wanxiang Che ; Valentin Spitkovsky ; Ting Liu

Abstract: Stanford dependencies are widely used in natural language processing as a semantically oriented representation, commonly generated either by (i) converting the output of a constituent parser, or (ii) predicting dependencies directly. Previous comparisons of the two approaches for English suggest that starting from constituents yields higher accuracies. In this paper, we re-evaluate both methods for Chinese, using more accurate dependency parsers than in previous work. Our comparison of performance and efficiency across seven popular open source parsers (four constituent and three dependency) shows, by contrast, that recent higher-order graph-based techniques can be more accurate, though somewhat slower, than constituent parsers. We demonstrate also that n-way jackknifing is a useful technique for producing automatic (rather than gold) part-of-speech tags to train Chinese dependency parsers. Finally, we analyze the relations produced by both kinds of parsing and suggest which specific parsers to use in practice.
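The n-way jackknifing this abstract mentions, producing automatic rather than gold POS tags for parser training, can be sketched as below; the tagger-training interface is a hypothetical stand-in, not the tool used in the paper.

```python
# Sketch of n-way jackknifing: each fold of the parser's training data is
# POS-tagged by a tagger trained on the remaining folds, so the parser is
# trained on realistic automatic tags rather than gold ones. The
# train_tagger interface is a hypothetical stand-in.
from typing import Callable, List, Tuple

TaggedSent = Tuple[List[str], List[str]]   # (words, gold POS tags)
Tagger = Callable[[List[str]], List[str]]  # words -> predicted tags

def jackknife_tags(
    data: List[TaggedSent],
    train_tagger: Callable[[List[TaggedSent]], Tagger],
    n_folds: int = 10,
) -> List[List[str]]:
    auto_tags: List[List[str]] = [[] for _ in data]
    for k in range(n_folds):
        train = [sent for i, sent in enumerate(data) if i % n_folds != k]
        tagger = train_tagger(train)          # tagger never sees fold k
        for i, (words, _) in enumerate(data):
            if i % n_folds == k:
                auto_tags[i] = tagger(words)  # automatic tags for fold k
    return auto_tags
```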

3 0.18780814 213 acl-2012-Utilizing Dependency Language Models for Graph-based Dependency Parsing Models

Author: Wenliang Chen ; Min Zhang ; Haizhou Li

Abstract: Most previous graph-based parsing models increase decoding complexity when they use high-order features due to exact-inference decoding. In this paper, we present an approach to enriching high-order feature representations for graph-based dependency parsing models using a dependency language model and beam search. The dependency language model is built on a large amount of additional auto-parsed data that is processed by a baseline parser. Based on the dependency language model, we represent a set of features for the parsing model. Finally, the features are efficiently integrated into the parsing model during decoding using beam search. Our approach has two advantages. First, we utilize rich high-order features defined over a view of large scope and an additional large raw corpus. Second, our approach does not increase the decoding complexity. We evaluate the proposed approach on English and Chinese data. The experimental results show that our new parser achieves the best accuracy on the Chinese data and comparable accuracy with the best known systems on the English data.

4 0.17481659 109 acl-2012-Higher-order Constituent Parsing and Parser Combination

Author: Xiao Chen ; Chunyu Kit

Abstract: This paper presents a higher-order model for constituent parsing aimed at utilizing more local structural context to decide the score of a grammar rule instance in a parse tree. Experiments on English and Chinese treebanks confirm its advantage over its first-order version. It achieves its best F1 scores of 91.86% and 85.58% on the two languages, respectively, and further pushes them to 92.80% and 85.60% via combination with other highperformance parsers.

5 0.15434782 106 acl-2012-Head-driven Transition-based Parsing with Top-down Prediction

Author: Katsuhiko Hayashi ; Taro Watanabe ; Masayuki Asahara ; Yuji Matsumoto

Abstract: This paper presents a novel top-down headdriven parsing algorithm for data-driven projective dependency analysis. This algorithm handles global structures, such as clause and coordination, better than shift-reduce or other bottom-up algorithms. Experiments on the English Penn Treebank data and the Chinese CoNLL-06 data show that the proposed algorithm achieves comparable results with other data-driven dependency parsing algorithms.

6 0.13921088 168 acl-2012-Reducing Approximation and Estimation Errors for Chinese Lexical Processing with Heterogeneous Annotations

7 0.13524154 119 acl-2012-Incremental Joint Approach to Word Segmentation, POS Tagging, and Dependency Parsing in Chinese

8 0.1297731 45 acl-2012-Capturing Paradigmatic and Syntagmatic Lexical Relations: Towards Accurate Chinese Part-of-Speech Tagging

9 0.12230293 25 acl-2012-An Exploration of Forest-to-String Translation: Does Translation Help or Hurt Parsing?

10 0.1218068 4 acl-2012-A Comparative Study of Target Dependency Structures for Statistical Machine Translation

11 0.11847926 175 acl-2012-Semi-supervised Dependency Parsing using Lexical Affinities

12 0.10414038 30 acl-2012-Attacking Parsing Bottlenecks with Unlabeled Data and Relevant Factorizations

13 0.096005812 172 acl-2012-Selective Sharing for Multilingual Dependency Parsing

14 0.093329489 127 acl-2012-Large-Scale Syntactic Language Modeling with Treelets

15 0.092115432 90 acl-2012-Extracting Narrative Timelines as Temporal Dependency Structures

16 0.091804199 170 acl-2012-Robust Conversion of CCG Derivations to Phrase Structure Trees

17 0.088572808 95 acl-2012-Fast Syntactic Analysis for Statistical Language Modeling via Substructure Sharing and Uptraining

18 0.0849232 63 acl-2012-Cross-lingual Parse Disambiguation based on Semantic Correspondence

19 0.084620997 122 acl-2012-Joint Evaluation of Morphological Segmentation and Syntactic Parsing

20 0.08204776 189 acl-2012-Syntactic Annotations for the Google Books NGram Corpus


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.231), (1, -0.017), (2, -0.215), (3, -0.188), (4, -0.062), (5, -0.042), (6, 0.03), (7, -0.136), (8, 0.02), (9, 0.011), (10, 0.134), (11, 0.121), (12, 0.031), (13, -0.005), (14, 0.065), (15, 0.044), (16, 0.019), (17, -0.001), (18, -0.048), (19, 0.061), (20, -0.039), (21, 0.055), (22, 0.036), (23, -0.02), (24, 0.071), (25, 0.027), (26, -0.05), (27, 0.072), (28, 0.024), (29, 0.01), (30, -0.018), (31, -0.021), (32, 0.009), (33, 0.054), (34, -0.086), (35, 0.035), (36, 0.016), (37, -0.072), (38, 0.09), (39, 0.021), (40, 0.004), (41, -0.037), (42, -0.042), (43, -0.038), (44, -0.014), (45, 0.054), (46, -0.014), (47, -0.001), (48, -0.014), (49, -0.013)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96254951 87 acl-2012-Exploiting Multiple Treebanks for Parsing with Quasi-synchronous Grammars

Author: Zhenghua Li ; Ting Liu ; Wanxiang Che

Abstract: We present a simple and effective framework for exploiting multiple monolingual treebanks with different annotation guidelines for parsing. Several types of transformation patterns (TP) are designed to capture the systematic annotation inconsistencies among different treebanks. Based on such TPs, we design quasi-synchronous grammar features to augment the baseline parsing models. Our approach can significantly advance the state-of-the-art parsing accuracy on two widely used target treebanks (Penn Chinese Treebank 5.1 and 6.0) using the Chinese Dependency Treebank as the source treebank. The improvements are respectively 1.37% and 1.10% with automatic part-of-speech tags. Moreover, an indirect comparison indicates that our approach also outperforms previous work based on treebank conversion.

2 0.92491883 5 acl-2012-A Comparison of Chinese Parsers for Stanford Dependencies

Author: Wanxiang Che ; Valentin Spitkovsky ; Ting Liu

Abstract: Stanford dependencies are widely used in natural language processing as a semantically oriented representation, commonly generated either by (i) converting the output of a constituent parser, or (ii) predicting dependencies directly. Previous comparisons of the two approaches for English suggest that starting from constituents yields higher accuracies. In this paper, we re-evaluate both methods for Chinese, using more accurate dependency parsers than in previous work. Our comparison of performance and efficiency across seven popular open source parsers (four constituent and three dependency) shows, by contrast, that recent higher-order graph-based techniques can be more accurate, though somewhat slower, than constituent parsers. We demonstrate also that n-way jackknifing is a useful technique for producing automatic (rather than gold) part-of-speech tags to train Chinese dependency parsers. Finally, we analyze the relations produced by both kinds of parsing and suggest which specific parsers to use in practice.

3 0.83127517 213 acl-2012-Utilizing Dependency Language Models for Graph-based Dependency Parsing Models

Author: Wenliang Chen ; Min Zhang ; Haizhou Li

Abstract: Most previous graph-based parsing models increase decoding complexity when they use high-order features due to exact-inference decoding. In this paper, we present an approach to enriching high-order feature representations for graph-based dependency parsing models using a dependency language model and beam search. The dependency language model is built on a large amount of additional auto-parsed data that is processed by a baseline parser. Based on the dependency language model, we represent a set of features for the parsing model. Finally, the features are efficiently integrated into the parsing model during decoding using beam search. Our approach has two advantages. First, we utilize rich high-order features defined over a view of large scope and an additional large raw corpus. Second, our approach does not increase the decoding complexity. We evaluate the proposed approach on English and Chinese data. The experimental results show that our new parser achieves the best accuracy on the Chinese data and comparable accuracy with the best known systems on the English data.

4 0.79213482 30 acl-2012-Attacking Parsing Bottlenecks with Unlabeled Data and Relevant Factorizations

Author: Emily Pitler

Abstract: Prepositions and conjunctions are two of the largest remaining bottlenecks in parsing. Across various existing parsers, these two categories have the lowest accuracies, and mistakes made have consequences for downstream applications. Prepositions and conjunctions are often assumed to depend on lexical dependencies for correct resolution. As lexical statistics based on the training set only are sparse, unlabeled data can help ameliorate this sparsity problem. By including unlabeled data features into a factorization of the problem which matches the representation of prepositions and conjunctions, we achieve a new state-of-the-art for English dependencies with 93.55% correct attachments on the current standard. Furthermore, conjunctions are attached with an accuracy of 90.8%, and prepositions with an accuracy of 87.4%.

5 0.77631593 109 acl-2012-Higher-order Constituent Parsing and Parser Combination

Author: Xiao Chen ; Chunyu Kit

Abstract: This paper presents a higher-order model for constituent parsing aimed at utilizing more local structural context to decide the score of a grammar rule instance in a parse tree. Experiments on English and Chinese treebanks confirm its advantage over its first-order version. It achieves its best F1 scores of 91.86% and 85.58% on the two languages, respectively, and further pushes them to 92.80% and 85.60% via combination with other highperformance parsers.

6 0.73124784 106 acl-2012-Head-driven Transition-based Parsing with Top-down Prediction

7 0.71707273 175 acl-2012-Semi-supervised Dependency Parsing using Lexical Affinities

8 0.68852848 119 acl-2012-Incremental Joint Approach to Word Segmentation, POS Tagging, and Dependency Parsing in Chinese

9 0.68679315 45 acl-2012-Capturing Paradigmatic and Syntagmatic Lexical Relations: Towards Accurate Chinese Part-of-Speech Tagging

10 0.6487633 122 acl-2012-Joint Evaluation of Morphological Segmentation and Syntactic Parsing

11 0.64731544 95 acl-2012-Fast Syntactic Analysis for Statistical Language Modeling via Substructure Sharing and Uptraining

12 0.63927788 4 acl-2012-A Comparative Study of Target Dependency Structures for Statistical Machine Translation

13 0.60963947 75 acl-2012-Discriminative Strategies to Integrate Multiword Expression Recognition and Parsing

14 0.6023789 168 acl-2012-Reducing Approximation and Estimation Errors for Chinese Lexical Processing with Heterogeneous Annotations

15 0.5965178 172 acl-2012-Selective Sharing for Multilingual Dependency Parsing

16 0.57714373 11 acl-2012-A Feature-Rich Constituent Context Model for Grammar Induction

17 0.56762838 189 acl-2012-Syntactic Annotations for the Google Books NGram Corpus

18 0.53608847 71 acl-2012-Dependency Hashing for n-best CCG Parsing

19 0.53577816 63 acl-2012-Cross-lingual Parse Disambiguation based on Semantic Correspondence

20 0.50364918 127 acl-2012-Large-Scale Syntactic Language Modeling with Treelets


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(7, 0.011), (15, 0.245), (25, 0.025), (26, 0.03), (28, 0.044), (30, 0.028), (37, 0.052), (39, 0.032), (71, 0.059), (74, 0.019), (82, 0.047), (84, 0.022), (85, 0.029), (90, 0.145), (92, 0.043), (94, 0.019), (99, 0.069)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.79952037 100 acl-2012-Fine Granular Aspect Analysis using Latent Structural Models

Author: Lei Fang ; Minlie Huang

Abstract: In this paper, we present a structural learning model for joint sentiment classification and aspect analysis of text at various levels of granularity. Our model aims to identify highly informative sentences that are aspect-specific in online customer reviews. The primary advantages of our model are two-fold: first, it performs document-level and sentence-level sentiment polarity classification jointly; second, it is able to find informative sentences that are closely related to some aspects in a review, which may be helpful for aspect-level sentiment analysis such as aspect-oriented summarization. The proposed method was evaluated with 9,000 Chinese restaurant reviews. Preliminary experiments demonstrate that our model obtains promising performance.

2 0.75443631 11 acl-2012-A Feature-Rich Constituent Context Model for Grammar Induction

Author: Dave Golland ; John DeNero ; Jakob Uszkoreit

Abstract: We present LLCCM, a log-linear variant of the constituent context model (CCM) of grammar induction. LLCCM retains the simplicity of the original CCM but extends robustly to long sentences. On sentences of up to length 40, LLCCM outperforms CCM by 13.9% bracketing F1 and outperforms a right-branching baseline in regimes where CCM does not.

same-paper 3 0.73828495 87 acl-2012-Exploiting Multiple Treebanks for Parsing with Quasi-synchronous Grammars

Author: Zhenghua Li ; Ting Liu ; Wanxiang Che

Abstract: We present a simple and effective framework for exploiting multiple monolingual treebanks with different annotation guidelines for parsing. Several types of transformation patterns (TP) are designed to capture the systematic annotation inconsistencies among different treebanks. Based on such TPs, we design quasi-synchronous grammar features to augment the baseline parsing models. Our approach can significantly advance the state-of-the-art parsing accuracy on two widely used target treebanks (Penn Chinese Treebank 5.1 and 6.0) using the Chinese Dependency Treebank as the source treebank. The improvements are respectively 1.37% and 1.10% with automatic part-of-speech tags. Moreover, an indirect comparison indicates that our approach also outperforms previous work based on treebank conversion.

4 0.63955563 28 acl-2012-Aspect Extraction through Semi-Supervised Modeling

Author: Arjun Mukherjee ; Bing Liu

Abstract: Aspect extraction is a central problem in sentiment analysis. Current methods either extract aspects without categorizing them, or extract and categorize them using unsupervised topic modeling. By categorizing, we mean the synonymous aspects should be clustered into the same category. In this paper, we solve the problem in a different setting where the user provides some seed words for a few aspect categories and the model extracts and clusters aspect terms into categories simultaneously. This setting is important because categorizing aspects is a subjective task. For different application purposes, different categorizations may be needed. Some form of user guidance is desired. In this paper, we propose two statistical models to solve this seeded problem, which aim to discover exactly what the user wants. Our experimental results show that the two proposed models are indeed able to perform the task effectively. 1

5 0.61571115 4 acl-2012-A Comparative Study of Target Dependency Structures for Statistical Machine Translation

Author: Xianchao Wu ; Katsuhito Sudoh ; Kevin Duh ; Hajime Tsukada ; Masaaki Nagata

Abstract: This paper presents a comparative study of target dependency structures yielded by several state-of-the-art linguistic parsers. Our approach is to measure the impact of these nonisomorphic dependency structures to be used for string-to-dependency translation. Besides using traditional dependency parsers, we also use the dependency structures transformed from PCFG trees and predicate-argument structures (PASs) which are generated by an HPSG parser and a CCG parser. The experiments on Chinese-to-English translation show that the HPSG parser’s PASs achieved the best dependency and translation accuracies. 1

6 0.61011624 5 acl-2012-A Comparison of Chinese Parsers for Stanford Dependencies

7 0.59842587 191 acl-2012-Temporally Anchored Relation Extraction

8 0.58753145 213 acl-2012-Utilizing Dependency Language Models for Graph-based Dependency Parsing Models

9 0.58494854 146 acl-2012-Modeling Topic Dependencies in Hierarchical Text Categorization

10 0.58474606 187 acl-2012-Subgroup Detection in Ideological Discussions

11 0.58391011 45 acl-2012-Capturing Paradigmatic and Syntagmatic Lexical Relations: Towards Accurate Chinese Part-of-Speech Tagging

12 0.58246785 63 acl-2012-Cross-lingual Parse Disambiguation based on Semantic Correspondence

13 0.58168066 30 acl-2012-Attacking Parsing Bottlenecks with Unlabeled Data and Relevant Factorizations

14 0.58136827 62 acl-2012-Cross-Lingual Mixture Model for Sentiment Classification

15 0.5807749 156 acl-2012-Online Plagiarized Detection Through Exploiting Lexical, Syntax, and Semantic Information

16 0.58052111 102 acl-2012-Genre Independent Subgroup Detection in Online Discussion Threads: A Study of Implicit Attitude using Textual Latent Semantics

17 0.57904685 12 acl-2012-A Graph-based Cross-lingual Projection Approach for Weakly Supervised Relation Extraction

18 0.57869411 214 acl-2012-Verb Classification using Distributional Similarity in Syntactic and Semantic Structures

19 0.57857376 40 acl-2012-Big Data versus the Crowd: Looking for Relationships in All the Right Places

20 0.57767183 25 acl-2012-An Exploration of Forest-to-String Translation: Does Translation Help or Hurt Parsing?