emnlp emnlp2012 emnlp2012-7 knowledge-graph by maker-knowledge-mining

7 emnlp-2012-A Novel Discriminative Framework for Sentence-Level Discourse Analysis


Source: pdf

Author: Shafiq Joty ; Giuseppe Carenini ; Raymond Ng

Abstract: We propose a complete probabilistic discriminative framework for performing sentence-level discourse analysis. Our framework comprises a discourse segmenter, based on a binary classifier, and a discourse parser, which applies an optimal CKY-like parsing algorithm to probabilities inferred from a Dynamic Conditional Random Field. We show on two corpora that our approach outperforms the state-of-the-art, often by a wide margin.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 University of British Columbia, Vancouver, BC, V6T 1Z4, Canada. Abstract: We propose a complete probabilistic discriminative framework for performing sentence-level discourse analysis. [sent-5, score-0.423]

2 Our framework comprises a discourse segmenter, based on a binary classifier, and a discourse parser, which applies an optimal CKY-like parsing algorithm to probabilities inferred from a Dynamic Conditional Random Field. [sent-6, score-0.902]

3 1 Introduction Automatic discourse analysis has been shown to be critical in several fundamental Natural Language Processing (NLP) tasks including text generation (Prasad et al. [sent-8, score-0.391]

4 The adjacent EDUs are connected by a rhetorical relation (e.g., ELABORATION). [sent-13, score-0.175]

5 The resulting larger text spans are recursively also subject to this relation linking. [sent-15, score-0.205]

6 A span linked by a rhetorical relation can be either a NUCLEUS or a SATELLITE depending on how central the message is to the author. [sent-16, score-0.355]

7 Discourse analysis in RST involves two subtasks: (i) breaking the text into EDUs (known as discourse segmentation) and (ii) linking the EDUs into a labeled hierarchical tree structure (known as discourse parsing). [sent-17, score-0.851]
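The labeled hierarchical tree structure described above can be illustrated with a toy example. The tree and helper below are assumed for illustration (not taken from the corpus): EDUs sit at the leaves and nuclearity-annotated relation labels at the internal nodes.

```python
# Toy illustration (assumed example, not from RST-DT) of a sentence-level
# discourse tree: EDUs e1..e3 at the leaves, relation labels with nuclearity
# at internal nodes, mirroring constituents like ATTRIBUTION-NS[(1,2),3].
tree = ("ATTRIBUTION-NS",
        ("ELABORATION-NS", "e1", "e2"),  # span (1,2): e1 NUCLEUS, e2 SATELLITE
        "e3")                            # SATELLITE of the top-level relation

def leaves(t):
    """Collect the EDUs of a tree encoded as (label, left, right) tuples."""
    if isinstance(t, str):
        return [t]
    _, left, right = t
    return leaves(left) + leaves(right)
```

Here `leaves(tree)` recovers the original EDU sequence `["e1", "e2", "e3"]`, showing that the tree is a hierarchical structure over a contiguous segmentation of the sentence.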

8 Previous studies on discourse analysis have been quite successful in identifying what machine learning approaches and what features are more useful for automatic discourse segmentation and parsing (Soricut and Marcu, 2003; Subba and Eugenio, 2009; duVerle and Prendinger, 2009). [sent-19, score-1.018]

9 In this paper, we propose a new sentence-level discourse parser that addresses both limitations. [sent-21, score-0.46]

10 By representing the structure and the relation of each discourse tree constituent jointly and by explicitly capturing the sequential and hierarchical dependencies between constituents of a discourse tree, our DCRF model does not make any independence assumption among these properties. [sent-24, score-1.124]

11 Our parsing model supports a bottom-up parsing algorithm which is non-greedy and provably optimal. [sent-27, score-0.188]

12 The discourse parser assumes that the input text has already been segmented into EDUs. [sent-28, score-0.503]

13 As an additional contribution of this paper, we propose a novel discriminative approach to discourse segmentation that not only achieves state-of-the-art performance, but also reduces the time and space complexities by using fewer features. [sent-29, score-0.563]

14 Notice that the combination of our segmenter with our parser forms a complete probabilistic discriminative framework for performing sentence-level discourse analysis. [sent-30, score-0.602]

15 The empirical evaluation indicates that our approach to discourse parsing outperforms the state-of-the-art by a wide margin. [sent-32, score-0.485]

16 Moreover, we show this to be the case on two very different genres: news articles and instructional how-to-do manuals. [sent-33, score-0.236]

17 In the rest of the paper, after discussing related work, we present our discourse parser. [sent-34, score-0.391]

18 2 Related work Automatic discourse analysis has a long history; see (Stede, 2011) for a detailed overview. [sent-37, score-0.391]

19 Soricut and Marcu (2003) present the publicly available SPADE1 system that comes with probabilistic models for sentence-level discourse segmentation and parsing based on lexical and syntactic features derived from the lexicalized syntactic tree of a sentence. [sent-38, score-0.771]

20 Their parsing algorithm finds the most probable DT for a sentence, where the probabilities of the constituents are estimated by their parsing model. [sent-39, score-0.365]

21 Each constituent (e.g., ATTRIBUTION-NS[(1,2),3] in Figure 1) in a DT has two components: first, the label denoting the relation, and second, the structure indicating which spans are being linked by the relation. [sent-42, score-0.205]

22 The nuclearity statuses of the spans are built into the relation labels. [sent-43, score-0.527]

23 For example, NS[(1,2),3] means that span (1,2) is the NUCLEUS and it comes before span 3, which is the SATELLITE. [sent-45, score-0.36]
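A minimal sketch (not the authors' code) of how folding nuclearity statuses into relation labels expands the label space; the relation names are an illustrative subset:

```python
from itertools import product

# Illustrative subset of rhetorical relations and the three nuclearity
# orderings. In practice not every relation licenses every ordering
# (the paper notes 70 distinct combined labels in the Instructional corpus);
# here we simply enumerate all pairs to show how the joint space is formed.
relations = ["ELABORATION", "ATTRIBUTION", "JOINT"]
nuclearities = ["NS", "SN", "NN"]  # nucleus-satellite orders

# Joint label space: one label per (relation, nuclearity) pair.
labels = [f"{rel}-{nuc}" for rel, nuc in product(relations, nuclearities)]
```

With 3 relations and 3 orderings this gives 9 combined labels such as `ELABORATION-NS`.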

24 However, SPADE does not capture the hierarchical dependencies between the constituents in the parsing model. [sent-49, score-0.249]

25 Furthermore, SPADE relies only on lexico-syntactic features, and it follows a generative approach to estimate the model parameters for the segmentation and the parsing models. [sent-50, score-0.21]

26 They evaluate on the RST Discourse Treebank (Carlson et al., 2002), which contains human-annotated discourse trees for news articles. [sent-52, score-0.391]

27 Subsequent research addresses the question of how much syntax one really needs in discourse analysis. [sent-53, score-0.391]

28 Sporleder and Lapata (2005) focus on discourse chunking, comprising the two subtasks of segmentation and non-hierarchical nuclearity assignment. [sent-54, score-0.736]

29 On the different genre of instructional manuals, Subba and Eugenio (2009) propose a shift-reduce parser that relies on a classifier to find the appropriate relation between two text segments. [sent-59, score-0.402]

30 However, their discourse parser implements a greedy approach (hence not optimal) and their classifier disregards the sequence and hierarchical dependencies. [sent-62, score-0.582]

31 Hernault et al. (2010) present the HILDA system that comes with a segmenter and a parser based on Support Vector Machines (SVMs). [sent-64, score-0.179]

32 The discourse parser builds a DT iteratively, utilizing two SVM classifiers in each iteration: (i) a binary classifier decides which two adjacent spans to link, and (ii) a multi-class classifier then connects the selected spans with the appropriate relation. [sent-66, score-0.776]
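A hedged sketch of the greedy, HILDA-style iterative linking described above: at each step, merge the adjacent pair of spans the binary scorer likes best, then label the new constituent with a relation classifier. Both scorers here are hypothetical stand-ins, not HILDA's actual SVMs.

```python
# Greedy bottom-up linking over EDU spans (i, j), 1-based inclusive.
# link_score(left, right) stands in for the binary "link these?" classifier;
# relation_of(left, right) stands in for the multi-class relation classifier.
def greedy_parse(edus, link_score, relation_of):
    """Build a tree greedily; returns {span: (relation, left, right)}."""
    spans = [(i, i) for i in range(1, len(edus) + 1)]
    tree = {}
    while len(spans) > 1:
        # (i) pick the best-scoring pair of adjacent spans to link
        k = max(range(len(spans) - 1),
                key=lambda k: link_score(spans[k], spans[k + 1]))
        left, right = spans[k], spans[k + 1]
        merged = (left[0], right[1])
        # (ii) label the new constituent with a relation
        tree[merged] = (relation_of(left, right), left, right)
        spans[k:k + 2] = [merged]
    return tree
```

Because each merge is committed immediately, an early mistake cannot be undone, which is exactly the non-optimality the text contrasts with the CKY-like search.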

33 , 2008), syntactic chunking (Sha and Pereira, 2003) and discourse chunking (Ghosh et al. [sent-71, score-0.516]

34 Given the EDUs, identified either manually or by an automatic segmenter (see Section 4), the discourse parsing problem is to decide which spans to connect (i.e., the structure) and with which relations (i.e., the labels). [sent-85, score-0.728]

35 The second component, the parsing algorithm, finds the most probable DT among the candidate discourse trees. [sent-97, score-0.537]

36 1 Parsing Model A DT can be represented as a set of constituents of the form R[i, m, j], which denotes a rhetorical relation R that holds between the span containing EDUs i through m, and the span containing EDUs m+1 through j. [sent-99, score-0.634]

37 Notice that a relation (e.g., ATTRIBUTION-NS[(1,2),3]) also indicates the nuclearity assignments of the spans being connected, which can be one of NUCLEUS-SATELLITE (NS), SATELLITE-NUCLEUS (SN) and NUCLEUS-NUCLEUS (NN). [sent-101, score-0.362]

38 Our parsing model not only models the structure and the relation jointly, but also captures linear sequence dependencies and hierarchical dependencies between constituents of a DT. [sent-107, score-0.285]

39 A text span can be either an EDU or a concatenation of a sequence of EDUs. [sent-112, score-0.21]

40 Let Rj ∈ {1, . . . , M} denote the discourse relation between spans Wj−1 and Wj, given that M is the total number of relations in our relation set. [sent-117, score-0.661]

41 Notice that we now model the structure and the relation jointly and also take the sequential dependencies between adjacent constituents into consideration. [sent-118, score-0.199]

42 Figure 2: A Dynamic CRF as a discourse parsing model. [sent-119, score-0.485]

43 We can obtain the conditional probabilities (i.e., P(c|C, Θ)) of the constituents of all candidate DTs for a sentence by applying the DCRF parsing model recursively at different levels, and by computing the posterior marginals of the relation-structure pairs. [sent-120, score-0.168] [sent-122, score-0.196]

45 The DCRF model for the first level is shown in Figure 3(a), where the (observed) EDUs are the spans in the span sequence. [sent-124, score-0.313]

46 Given this model, we obtain the probabilities of the constituents R[1, 1, 2] and R[2, 2, 3] by computing the posterior marginals P(R2 , S2=1 |e1, e2, e3, Θ) and P(R3, S3=1 |e1, e2, e3, Θ), respectively. [sent-125, score-0.227]

47 At the second level (see Figure 3(b)), there are two possible span sequences (e1:2, e3) and (e1, e2:3). [sent-126, score-0.211]

48 We apply our DCRF model to the two possible span sequences and obtain the probabilities of the constituents R[1, 2, 3] and R[1, 1, 3] by computing the posterior marginals P(R3, S3=1 |e1:2, e3, Θ) and P(R2:3, S2:3=1 |e1, e2:3, Θ), respectively. [sent-128, score-0.438]
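The span sequences enumerated at the first two levels can be sketched as follows. This is a simplified illustration of the enumeration in Figures 3 and 4, covering only levels 1 and 2; it is not the authors' full implementation.

```python
# A span (i, j) covers EDUs i..j (1-based, inclusive).
def level_sequences(n_edus: int, level: int):
    """Enumerate span sequences at a given level, for the first two levels.

    At level 1 every EDU is its own span; at level 2 exactly one pair of
    adjacent EDUs is merged, mirroring the sequences (e1:2, e3) and
    (e1, e2:3) the text describes for a three-EDU sentence.
    """
    if level == 1:
        return [[(i, i) for i in range(1, n_edus + 1)]]
    seqs = []
    for m in range(1, n_edus):            # merge EDUs m and m+1
        seq = [(i, i) for i in range(1, m)]
        seq.append((m, m + 1))
        seq += [(i, i) for i in range(m + 2, n_edus + 1)]
        seqs.append(seq)
    return seqs
```

For three EDUs at level 2 this yields the two sequences from the text, [(1,2),(3,3)] and [(1,1),(2,3)]; each sequence is then fed to the DCRF model to obtain posterior marginals for its constituents.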

49 At the first level (Figure 4(a)), there is only one possible span sequence to which we apply our DCRF model. [sent-133, score-0.18]

50 We obtain the probabilities of the constituents R[1, 1, 2], R[2, 2, 3] and R[3, 3, 4] by computing the posterior marginals P(R2, S2=1|e1, e2, e3, e4, Θ), P(R3, S3=1|e1, e2, e3, e4, Θ) and P(R4, S4=1|e1, e2, e3, e4, Θ), respectively. [sent-134, score-0.227]

51 Likewise, the posterior marginals P(R2:3, S2:3=1|e1, e2:3, e4, Θ) and P(R4, S4=1|e1, e2:3, e4, Θ) in the DCRF model applied to the sequence (e1, e2:3, e4) represent the probabilities of the constituents R[1, 1, 3] and R[2, 3, 4], respectively. [sent-137, score-0.257]

52 At the third level (Figure 4(c)), there are three possible sequences, (e1:3, e4), (e1, e2:4) and (e1:2, e3:4), to which we apply our model and acquire the probabilities of the constituents R[1, 3, 4], R[1, 1, 4] and R[1, 2, 4] by computing their respective posterior marginals. [sent-139, score-0.196]

53 Figure 4: DCRF model applied to the sequences at different levels of a discourse tree. [sent-140, score-0.422]

54 Note that these features are defined on two consecutive spans Wj−1 and Wj of a span sequence. [sent-149, score-0.339]

55 For example, in a sentence containing three EDUs, a span containing two of these EDUs will have a relative EDU number of 0. [sent-155, score-0.18]

56 We also measure the distances of the spans from the beginning and to the end of the sentence in terms of the number of EDUs. [sent-157, score-0.211]

57 8 organizational features Relative number of EDUs in span 1 and span 2. [sent-158, score-0.452]

58 Distances of span 1 in EDUs to the beginning and to the end. [sent-160, score-0.229]

59 Distances of span 2 in EDUs to the beginning and to the end. [sent-161, score-0.229]

60 8 N-gram features Beginning and end lexical N-grams in span 1. [sent-162, score-0.206]

61 5 dominance set features Syntactic labels of the head node and the attachment node. [sent-166, score-0.227]

62 2 substructure features Root nodes of the left and right rhetorical subtrees. [sent-170, score-0.176]

63 Discourse cues (e.g., because, but), when present, signal rhetorical relations between two text segments (Knott and Dale, 1994; Marcu, 2000a). [sent-174, score-0.168]

64 To build the lexical N-gram dictionary empirically from the training corpus, we consider the first and last N tokens (N ∈ {1, 2}) of each span and rank them according to their mutual information with the two labels, Structure and Relation. [sent-179, score-0.18]
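A hedged sketch of this dictionary construction: collect the first/last 1- and 2-grams of each span and rank them by a simplified mutual-information estimate against one label. The toy estimator (summing only over observed gram-label co-occurrences) and the data are illustrative, not the paper's exact procedure.

```python
import math
from collections import Counter

def span_ngrams(tokens):
    """First/last 1- and 2-grams of a span, as tuples."""
    grams = {(tokens[0],), (tokens[-1],)}
    if len(tokens) >= 2:
        grams |= {tuple(tokens[:2]), tuple(tokens[-2:])}
    return grams

def rank_by_mi(spans, labels):
    """Rank candidate n-grams by (simplified) mutual information with labels."""
    n = len(spans)
    label_count = Counter(labels)
    gram_count, joint = Counter(), Counter()
    for tokens, y in zip(spans, labels):
        for g in span_ngrams(tokens):
            gram_count[g] += 1
            joint[(g, y)] += 1
    def mi(g):
        # sum over labels of p(g, y) * log(p(g, y) / (p(g) p(y)))
        total = 0.0
        for y in label_count:
            p_gy = joint[(g, y)] / n
            if p_gy > 0:
                total += p_gy * math.log(
                    p_gy / ((gram_count[g] / n) * (label_count[y] / n)))
        return total
    return sorted(gram_count, key=mi, reverse=True)
```

In the real system the ranking would be computed separately against the Structure and Relation labels; here a single label sequence stands in for both.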

65 Figure 5: A discourse segmented lexicalized syntactic tree. [sent-182, score-0.5]

66 A dominance set D (shown at the bottom of Figure 5 for our example) contains these attachment points of the EDUs in a DS-LST. [sent-187, score-0.169]

67 In addition to the syntactic and lexical information of the head and attachment nodes, each element in D also represents a dominance relationship between the EDUs involved. [sent-188, score-0.206]

68 In order to extract dominance set features for two consecutive spans ei:j and ej+1:k, we first compute D from the DS-LST of the sentence. [sent-191, score-0.291]

69 In our running example, for the spans e1 and e2 (Figure 3(a)), the relevant dominance set element is (1, efforts/NP)>(2, to/S). [sent-193, score-0.265]

70 We encode the syntactic labels and lexical heads of NH and NA and the dominance relationship (i. [sent-194, score-0.226]

71 We also incorporate more contextual information by including the above features computed for the neighboring span pairs in the current feature vector. [sent-197, score-0.206]

72 For the two adjacent spans ei:j and ej+1:k, we extract the roots of the rhetorical subtrees spanning over ei:j (left) and ej+1:k (right). [sent-199, score-0.236]

73 2 Parsing Algorithm Our parsing model above assigns a conditional probability to every possible DT constituent for a sentence; the job of the parsing algorithm is to find the most probable DT. [sent-208, score-0.302]

74 Formally, this can be written as DT∗ = argmax_DT P(DT|Θ). Our discourse parser implements a probabilistic CKY-like bottom-up algorithm for computing the most likely parse of a sentence using dynamic programming; see (Jurafsky and Martin, 2008) for a description. [sent-209, score-0.524]

75 The cell [i, j] in the Dynamic Programming Table (DPT) represents the span containing EDUs i through j and stores the probability of a constituent R[i, m, j], where m = argmax_{i≤k≤j} P(R[i, k, j]). [sent-211, score-0.226]
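The CKY-like bottom-up search can be sketched as follows. Here `score(i, k, j)` is a hypothetical stand-in for the model-estimated probability of the best constituent linking EDUs i..k with k+1..j (the real values would come from the DCRF posteriors), and subtree scores are assumed to combine multiplicatively for the sketch.

```python
# Fill a table P over EDU spans bottom-up, exactly in the CKY style:
# P[(i, j)] is the probability of the best discourse subtree over EDUs i..j.
def best_tree_prob(n: int, score) -> float:
    """Probability of the most likely discourse tree over EDUs 1..n."""
    P = {(i, i): 1.0 for i in range(1, n + 1)}  # single EDUs are certain
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            # try every split point k; non-greedy, so all splits compete
            P[(i, j)] = max(
                P[(i, k)] * P[(k + 1, j)] * score(i, k, j)
                for k in range(i, j)
            )
    return P[(1, n)]
```

Because every split point is considered before committing, the search is non-greedy; keeping backpointers at each `max` (omitted here for brevity) recovers the tree itself.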

76 4 The Discourse Segmenter Our discourse parser above assumes that the input sentences have already been segmented into EDUs. [sent-217, score-0.503]

77 Our segmenter implements a binary classifier to decide, for each word (except the last) in a sentence, whether to put an EDU boundary after that word. [sent-219, score-0.225]
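The per-word decision scheme can be sketched as follows; `is_boundary` is a hypothetical placeholder for the trained classifier, which in the real system consults the lexico-syntactic features described next.

```python
# Split a sentence into EDUs given a per-word boundary predicate.
# is_boundary(words, idx) stands in for the binary classifier's decision
# "place an EDU boundary after words[idx]?".
def segment(words, is_boundary):
    """Return the sentence as a list of EDUs (each a list of words)."""
    edus, current = [], []
    for idx, w in enumerate(words):
        current.append(w)
        # no decision after the final word; it always closes the last EDU
        if idx < len(words) - 1 and is_boundary(words, idx):
            edus.append(current)
            current = []
    edus.append(current)
    return edus
```

Since one independent binary decision is made per word, the feature set only needs to describe a local window around each candidate boundary, which is what keeps the segmenter's time and space costs low.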

78 1 Features Used in the Segmentation Model Our set of features for discourse segmentation is mostly inspired by previous studies but used in a novel way, as we describe below. [sent-228, score-0.533]

79 To decide on an EDU boundary after a token wk, we find the lowest constituent in the lexicalized syntactic tree that spans over tokens wi . [sent-231, score-0.337]

80 Shallow syntactic parse (or Chunk) and POS tags have been shown to possess valuable cues for discourse segmentation (Fisher and Roark, 2007). [sent-243, score-0.544]

81 , 2002) that contains discourse annotations for 385 Wall Street Journal news articles from the Penn Treebank (Marcus et al. [sent-259, score-0.391]

82 Second, we use the Instructional corpus developed by Subba and Eugenio (2009) that contains discourse annotations for 176 instructional how-to-do manuals on home-repair. [sent-261, score-0.653]

83 However, the existence of a well-formed DT is not a necessity for discourse segmentation; therefore, we do not exclude any sentence from our discourse segmentation experiments. [sent-268, score-0.898]

84 2 Experimental Setup We perform our experiments on discourse parsing in RST-DT with the 18 coarser relations (see Figure 6) defined in (Carlson and Marcu, 2001) and also used in SPADE and HILDA. [sent-270, score-0.456]

85 The Instructional corpus uses a different relation set (e.g., GOAL:ACT, CAUSE:EFFECT, GENERAL-SPECIFIC). Not all relations take all the possible nuclearity statuses. [sent-276, score-0.294]

86 Attaching the nuclearity statuses to these relations gives 70 distinct relations in the Instructional corpus. [sent-279, score-0.42]

87 3 Parsing based on Manual Segmentation First, we present the results of our discourse parser based on manual segmentation. [sent-289, score-0.46]

88 This may be due to the small amount of data available for training and the imbalanced distribution of a large number of discourse relations in this corpus. [sent-327, score-0.456]

89 More specifically, when we combine the organizational features with the dominance set features (see S2), we get about 2% absolute improvement in nuclearity and relations. [sent-330, score-0.517]

90 Due to the lack of human-annotated syntactic trees in the Instructional corpus, we train SPADE on this corpus using syntactic trees produced by an automatic parser. Note that the high segmentation accuracy reported in (Hernault et al. [sent-363, score-0.19]

91 5 Parsing based on Automatic Segmentation In order to evaluate our full system, we feed our discourse parser the output of our discourse segmenter. [sent-382, score-0.851]

92 Nevertheless, taking into account the segmentation results in Table 4, this is not surprising because previous studies (Soricut and Marcu, 2003) have already shown that automatic segmentation is the primary impediment to high-accuracy discourse parsing. [sent-404, score-0.623]

93 6 Error Analysis and Discussion The results in Table 2 suggest that, given a manually segmented discourse, our sentence-level discourse parser finds the unlabeled (i.e., span) discourse tree and assigns the nuclearity statuses to the spans at a performance level close to human annotators. [sent-410] [sent-412, score-0.855]

95 We, therefore, look more closely into the performance of our parser on the hardest task of relation labeling. [sent-413, score-0.167]

96 Figure 6 shows the confusion matrix for the relation labeling task using manual segmentation on the RST-DT test set. [sent-414, score-0.25]

97 Based on these observations we will pursue two ways to improve our discourse parser. [sent-435, score-0.391]

98 6 Conclusion In this paper, we have described a complete probabilistic discriminative framework for performing sentence-level discourse analysis. [sent-440, score-0.423]

99 In ongoing work, we plan to generalize our DCRF-based parser to multi-sentential text and also verify to what extent parsing and segmentation can be jointly performed. [sent-442, score-0.279]

100 A longer term goal is to extend our framework to also work with graph structures of discourse, as recommended by several recent discourse theories (Wolf and Gibson, 2005). [sent-443, score-0.391]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('edus', 0.427), ('discourse', 0.391), ('spade', 0.305), ('dcrf', 0.259), ('instructional', 0.236), ('nuclearity', 0.229), ('span', 0.18), ('dt', 0.148), ('spans', 0.133), ('dominance', 0.132), ('segmentation', 0.116), ('segmenter', 0.11), ('subba', 0.107), ('rhetorical', 0.103), ('marcu', 0.102), ('constituents', 0.099), ('soricut', 0.095), ('parsing', 0.094), ('eugenio', 0.092), ('duverle', 0.092), ('hilda', 0.079), ('prendinger', 0.076), ('relation', 0.072), ('parser', 0.069), ('fisher', 0.068), ('organizational', 0.066), ('relations', 0.065), ('marginals', 0.062), ('hernault', 0.061), ('prasad', 0.061), ('statuses', 0.061), ('carlson', 0.058), ('wj', 0.053), ('boundary', 0.051), ('beginning', 0.049), ('roark', 0.049), ('rst', 0.047), ('substructure', 0.047), ('constituent', 0.046), ('chunking', 0.044), ('conditional', 0.043), ('segmented', 0.043), ('tree', 0.041), ('posterior', 0.04), ('edu', 0.04), ('dpt', 0.039), ('elaboration', 0.039), ('sporleder', 0.039), ('implements', 0.039), ('absolute', 0.038), ('syntactic', 0.037), ('attachment', 0.037), ('labeling', 0.034), ('crfs', 0.033), ('penn', 0.032), ('discriminative', 0.032), ('labels', 0.032), ('sequences', 0.031), ('bagging', 0.031), ('biran', 0.031), ('dinesh', 0.031), ('ghosh', 0.031), ('miltsakaki', 0.031), ('reparameterization', 0.031), ('satellite', 0.031), ('stede', 0.031), ('treebank', 0.03), ('sequence', 0.03), ('distances', 0.029), ('lexicalized', 0.029), ('dependencies', 0.028), ('confusion', 0.028), ('independence', 0.028), ('hierarchical', 0.028), ('lr', 0.028), ('cross', 0.027), ('charniak', 0.027), ('validation', 0.027), ('finds', 0.027), ('ej', 0.026), ('probabilities', 0.026), ('hardest', 0.026), ('knott', 0.026), ('manuals', 0.026), ('verberne', 0.026), ('features', 0.026), ('chunk', 0.025), ('heads', 0.025), ('sutton', 0.025), ('probable', 0.025), ('production', 0.025), ('dynamic', 0.025), ('classifier', 0.025), ('notice', 0.025), ('crf', 0.024), ('reduces', 0.024), ('carenini', 0.024), 
('nh', 0.024), ('sal', 0.024), ('wainwright', 0.024)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000004 7 emnlp-2012-A Novel Discriminative Framework for Sentence-Level Discourse Analysis

Author: Shafiq Joty ; Giuseppe Carenini ; Raymond Ng

Abstract: We propose a complete probabilistic discriminative framework for performing sentence-level discourse analysis. Our framework comprises a discourse segmenter, based on a binary classifier, and a discourse parser, which applies an optimal CKY-like parsing algorithm to probabilities inferred from a Dynamic Conditional Random Field. We show on two corpora that our approach outperforms the state-of-the-art, often by a wide margin.

2 0.13522071 70 emnlp-2012-Joint Chinese Word Segmentation, POS Tagging and Parsing

Author: Xian Qian ; Yang Liu

Abstract: In this paper, we propose a novel decoding algorithm for discriminative joint Chinese word segmentation, part-of-speech (POS) tagging, and parsing. Previous work often used a pipeline method (Chinese word segmentation followed by POS tagging and parsing), which suffers from error propagation and is unable to leverage information in later modules for earlier components. In our approach, we train the three individual models separately during training, and incorporate them together in a unified framework during decoding. We extend the CYK parsing algorithm so that it can deal with word segmentation and POS tagging features. As far as we know, this is the first work on joint Chinese word segmentation, POS tagging and parsing. Our experimental results on Chinese Tree Bank 5 corpus show that our approach outperforms the state-of-the-art pipeline system.

3 0.095337011 3 emnlp-2012-A Coherence Model Based on Syntactic Patterns

Author: Annie Louis ; Ani Nenkova

Abstract: We introduce a model of coherence which captures the intentional discourse structure in text. Our work is based on the hypothesis that syntax provides a proxy for the communicative goal of a sentence and therefore the sequence of sentences in a coherent discourse should exhibit detectable structural patterns. Results show that our method has high discriminating power for separating out coherent and incoherent news articles reaching accuracies of up to 90%. We also show that our syntactic patterns are correlated with manual annotations of intentional structure for academic conference articles and can successfully predict the coherence of abstract, introduction and related work sections of these articles.

4 0.089516841 135 emnlp-2012-Using Discourse Information for Paraphrase Extraction

Author: Michaela Regneri ; Rui Wang

Abstract: Previous work on paraphrase extraction using parallel or comparable corpora has generally not considered the documents’ discourse structure as a useful information source. We propose a novel method for collecting paraphrases relying on the sequential event order in the discourse, using multiple sequence alignment with a semantic similarity measure. We show that adding discourse information boosts the performance of sentence-level paraphrase acquisition, which consequently gives a tremendous advantage for extracting phraselevel paraphrase fragments from matched sentences. Our system beats an informed baseline by a margin of 50%.

5 0.089501858 105 emnlp-2012-Parser Showdown at the Wall Street Corral: An Empirical Investigation of Error Types in Parser Output

Author: Jonathan K. Kummerfeld ; David Hall ; James R. Curran ; Dan Klein

Abstract: Constituency parser performance is primarily interpreted through a single metric, F-score on WSJ section 23, that conveys no linguistic information regarding the remaining errors. We classify errors within a set of linguistically meaningful types using tree transformations that repair groups of errors together. We use this analysis to answer a range of questions about parser behaviour, including what linguistic constructions are difficult for stateof-the-art parsers, what types of errors are being resolved by rerankers, and what types are introduced when parsing out-of-domain text.

6 0.089014575 131 emnlp-2012-Unified Dependency Parsing of Chinese Morphological and Syntactic Structures

7 0.088022158 16 emnlp-2012-Aligning Predicates across Monolingual Comparable Texts using Graph-based Clustering

8 0.084988184 45 emnlp-2012-Exploiting Chunk-level Features to Improve Phrase Chunking

9 0.069787279 136 emnlp-2012-Weakly Supervised Training of Semantic Parsers

10 0.067886256 80 emnlp-2012-Learning Verb Inference Rules from Linguistically-Motivated Evidence

11 0.061807945 51 emnlp-2012-Extracting Opinion Expressions with semi-Markov Conditional Random Fields

12 0.059467647 89 emnlp-2012-Mixed Membership Markov Models for Unsupervised Conversation Modeling

13 0.059287068 113 emnlp-2012-Resolving This-issue Anaphora

14 0.058797996 12 emnlp-2012-A Transition-Based System for Joint Part-of-Speech Tagging and Labeled Non-Projective Dependency Parsing

15 0.058411274 64 emnlp-2012-Improved Parsing and POS Tagging Using Inter-Sentence Consistency Constraints

16 0.058173083 127 emnlp-2012-Transforming Trees to Improve Syntactic Convergence

17 0.057205185 57 emnlp-2012-Generalized Higher-Order Dependency Parsing with Cube Pruning

18 0.055842243 65 emnlp-2012-Improving NLP through Marginalization of Hidden Syntactic Structure

19 0.052687578 77 emnlp-2012-Learning Constraints for Consistent Timeline Extraction

20 0.052288361 109 emnlp-2012-Re-training Monolingual Parser Bilingually for Syntactic SMT


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.211), (1, -0.037), (2, 0.07), (3, -0.053), (4, 0.086), (5, 0.05), (6, -0.017), (7, -0.034), (8, -0.143), (9, -0.006), (10, -0.111), (11, -0.013), (12, -0.116), (13, 0.037), (14, 0.012), (15, 0.065), (16, -0.045), (17, -0.04), (18, -0.037), (19, -0.063), (20, -0.008), (21, 0.054), (22, 0.024), (23, -0.078), (24, 0.15), (25, -0.043), (26, 0.07), (27, -0.054), (28, 0.049), (29, -0.14), (30, 0.031), (31, -0.042), (32, 0.04), (33, -0.104), (34, 0.014), (35, 0.052), (36, -0.134), (37, 0.241), (38, -0.127), (39, -0.134), (40, -0.135), (41, -0.155), (42, 0.025), (43, 0.024), (44, -0.041), (45, -0.059), (46, 0.258), (47, -0.155), (48, 0.104), (49, 0.026)]
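Similarity under the LSI model amounts to comparing dense topic-weight vectors like the one above, typically with cosine similarity. A minimal sketch of that ranking step (the paper vector is truncated to its first five dimensions from the 50-dimensional listing above, and the candidate vectors and paper IDs are made-up illustrations, not values from this page):

```python
import math

def cosine(u, v):
    # cosine similarity between two equal-length topic-weight vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    if nu == 0.0 or nv == 0.0:
        return 0.0
    return dot / (nu * nv)

# first five LSI weights of this paper, copied from the listing above
paper = [0.211, -0.037, 0.07, -0.053, 0.086]

# hypothetical candidate vectors for two other papers (illustrative only)
candidates = {
    "45-chunking": [0.15, -0.02, 0.11, -0.06, 0.03],
    "3-coherence": [0.18, 0.01, 0.05, -0.04, 0.09],
}

# rank candidates by similarity to this paper, most similar first
ranked = sorted(candidates.items(),
                key=lambda kv: cosine(paper, kv[1]), reverse=True)
```

The `simValue` column in the lists below is the analogous similarity score, computed over the full topic vectors.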

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93667823 7 emnlp-2012-A Novel Discriminative Framework for Sentence-Level Discourse Analysis

Author: Shafiq Joty ; Giuseppe Carenini ; Raymond Ng

Abstract: We propose a complete probabilistic discriminative framework for performing sentence-level discourse analysis. Our framework comprises a discourse segmenter, based on a binary classifier, and a discourse parser, which applies an optimal CKY-like parsing algorithm to probabilities inferred from a Dynamic Conditional Random Field. We show on two corpora that our approach outperforms the state-of-the-art, often by a wide margin.

2 0.53256309 45 emnlp-2012-Exploiting Chunk-level Features to Improve Phrase Chunking

Author: Junsheng Zhou ; Weiguang Qu ; Fen Zhang

Abstract: Most existing systems solved the phrase chunking task with the sequence labeling approaches, in which the chunk candidates cannot be treated as a whole during parsing process so that the chunk-level features cannot be exploited in a natural way. In this paper, we formulate phrase chunking as a joint segmentation and labeling task. We propose an efficient dynamic programming algorithm with pruning for decoding, which allows the direct use of the features describing the internal characteristics of chunk and the features capturing the correlations between adjacent chunks. A relaxed, online maximum margin training algorithm is used for learning. Within this framework, we explored a variety of effective feature representations for Chinese phrase chunking. The experimental results show that the use of chunk-level features can lead to significant performance improvement, and that our approach achieves state-of-the-art performance. In particular, our approach is much better at recognizing long and complicated phrases.

3 0.5186404 3 emnlp-2012-A Coherence Model Based on Syntactic Patterns

Author: Annie Louis ; Ani Nenkova

Abstract: We introduce a model of coherence which captures the intentional discourse structure in text. Our work is based on the hypothesis that syntax provides a proxy for the communicative goal of a sentence and therefore the sequence of sentences in a coherent discourse should exhibit detectable structural patterns. Results show that our method has high discriminating power for separating out coherent and incoherent news articles reaching accuracies of up to 90%. We also show that our syntactic patterns are correlated with manual annotations of intentional structure for academic conference articles and can successfully predict the coherence of abstract, introduction and related work sections of these articles.

4 0.44517997 16 emnlp-2012-Aligning Predicates across Monolingual Comparable Texts using Graph-based Clustering

Author: Michael Roth ; Anette Frank

Abstract: Generating coherent discourse is an important aspect in natural language generation. Our aim is to learn factors that constitute coherent discourse from data, with a focus on how to realize predicate-argument structures in a model that exceeds the sentence level. We present an important subtask for this overall goal, in which we align predicates across comparable texts, admitting partial argument structure correspondence. The contribution of this work is two-fold: We first construct a large corpus resource of comparable texts, including an evaluation set with manual predicate alignments. Secondly, we present a novel approach for aligning predicates across comparable texts using graph-based clustering with Mincuts. Our method significantly outperforms other alignment techniques when applied to this novel alignment task, by a margin of at least 6.5 percentage points in F1-score.

5 0.39739743 38 emnlp-2012-Employing Compositional Semantics and Discourse Consistency in Chinese Event Extraction

Author: Peifeng Li ; Guodong Zhou ; Qiaoming Zhu ; Libin Hou

Abstract: Current Chinese event extraction systems suffer much from two problems in trigger identification: unknown triggers and word segmentation errors to known triggers. To resolve these problems, this paper proposes two novel inference mechanisms to explore special characteristics in Chinese via compositional semantics inside Chinese triggers and discourse consistency between Chinese trigger mentions. Evaluation on the ACE 2005 Chinese corpus justifies the effectiveness of our approach over a strong baseline.

6 0.36658278 119 emnlp-2012-Spectral Dependency Parsing with Latent Variables

7 0.36131382 131 emnlp-2012-Unified Dependency Parsing of Chinese Morphological and Syntactic Structures

8 0.35287318 70 emnlp-2012-Joint Chinese Word Segmentation, POS Tagging and Parsing

9 0.34804901 80 emnlp-2012-Learning Verb Inference Rules from Linguistically-Motivated Evidence

10 0.3229146 109 emnlp-2012-Re-training Monolingual Parser Bilingually for Syntactic SMT

11 0.31592757 135 emnlp-2012-Using Discourse Information for Paraphrase Extraction

12 0.31575921 105 emnlp-2012-Parser Showdown at the Wall Street Corral: An Empirical Investigation of Error Types in Parser Output

13 0.31254396 55 emnlp-2012-Forest Reranking through Subtree Ranking

14 0.30233189 46 emnlp-2012-Exploiting Reducibility in Unsupervised Dependency Parsing

15 0.2960127 136 emnlp-2012-Weakly Supervised Training of Semantic Parsers

16 0.28955606 64 emnlp-2012-Improved Parsing and POS Tagging Using Inter-Sentence Consistency Constraints

17 0.28614768 27 emnlp-2012-Characterizing Stylistic Elements in Syntactic Structure

18 0.28229082 9 emnlp-2012-A Sequence Labelling Approach to Quote Attribution

19 0.25576037 124 emnlp-2012-Three Dependency-and-Boundary Models for Grammar Induction

20 0.25467974 89 emnlp-2012-Mixed Membership Markov Models for Unsupervised Conversation Modeling


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.013), (16, 0.034), (25, 0.023), (34, 0.059), (60, 0.074), (63, 0.045), (64, 0.023), (65, 0.026), (70, 0.012), (74, 0.501), (76, 0.041), (80, 0.013), (86, 0.025), (95, 0.013)]
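Unlike the dense LSI vector, the LDA mixture above is given sparsely as (topicId, topicWeight) pairs; omitted topics carry (near-)zero mass. A small sketch of expanding such a sparse mixture into a dense vector, assuming a 100-topic model (the actual topic count is not stated on this page):

```python
def to_dense(sparse_pairs, num_topics):
    # expand sparse (topic_id, weight) pairs into a dense topic vector,
    # with zeros for every topic not listed
    v = [0.0] * num_topics
    for topic_id, weight in sparse_pairs:
        v[topic_id] = weight
    return v

# sparse LDA topic mixture copied from the listing above
paper_lda = [(2, 0.013), (16, 0.034), (25, 0.023), (34, 0.059),
             (60, 0.074), (63, 0.045), (64, 0.023), (65, 0.026),
             (70, 0.012), (74, 0.501), (76, 0.041), (80, 0.013),
             (86, 0.025), (95, 0.013)]

dense = to_dense(paper_lda, 100)
dominant = max(paper_lda, key=lambda p: p[1])  # topic 74 dominates this paper
```

Once expanded, these dense vectors can be compared with the same similarity measure used for the LSI vectors to produce the `simValue` scores below.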

similar papers list:

simIndex simValue paperId paperTitle

1 0.98745126 21 emnlp-2012-Assessment of ESL Learners' Syntactic Competence Based on Similarity Measures

Author: Su-Youn Yoon ; Suma Bhat

Abstract: This study presents a novel method that measures English language learners’ syntactic competence towards improving automated speech scoring systems. In contrast to most previous studies which focus on the length of production units such as the mean length of clauses, we focused on capturing the differences in the distribution of morpho-syntactic features or grammatical expressions across proficiency. We estimated the syntactic competence through the use of corpus-based NLP techniques. Assuming that the range and sophistication of grammatical expressions can be captured by the distribution of Part-of-Speech (POS) tags, vector space models of POS tags were constructed. We use a large corpus of English learners’ responses that are classified into four proficiency levels by human raters. Our proposed feature measures the similarity of a given response with the most proficient group and then estimates the learner’s syntactic competence level. Widely outperforming the state-of-the-art measures of syntactic complexity, our method attained a significant correlation with human-rated scores. The correlation between human-rated scores and features based on manual transcription was 0.43 and the same based on ASR-hypothesis was slightly lower, 0.42. An important advantage of our method is its robustness against speech recognition errors not to mention the simplicity of feature generation that captures a reasonable set of learner-specific syntactic errors.

2 0.92300767 1 emnlp-2012-A Bayesian Model for Learning SCFGs with Discontiguous Rules

Author: Abby Levenberg ; Chris Dyer ; Phil Blunsom

Abstract: We describe a nonparametric model and corresponding inference algorithm for learning Synchronous Context Free Grammar derivations for parallel text. The model employs a Pitman-Yor Process prior which uses a novel base distribution over synchronous grammar rules. Through both synthetic grammar induction and statistical machine translation experiments, we show that our model learns complex translational correspondences— including discontiguous, many-to-many alignments—and produces competitive translation results. Further, inference is efficient and we present results on significantly larger corpora than prior work.

same-paper 3 0.89084762 7 emnlp-2012-A Novel Discriminative Framework for Sentence-Level Discourse Analysis

Author: Shafiq Joty ; Giuseppe Carenini ; Raymond Ng

Abstract: We propose a complete probabilistic discriminative framework for performing sentencelevel discourse analysis. Our framework comprises a discourse segmenter, based on a binary classifier, and a discourse parser, which applies an optimal CKY-like parsing algorithm to probabilities inferred from a Dynamic Conditional Random Field. We show on two corpora that our approach outperforms the state-of-the-art, often by a wide margin.

4 0.69968998 130 emnlp-2012-Unambiguity Regularization for Unsupervised Learning of Probabilistic Grammars

Author: Kewei Tu ; Vasant Honavar

Abstract: We introduce a novel approach named unambiguity regularization for unsupervised learning of probabilistic natural language grammars. The approach is based on the observation that natural language is remarkably unambiguous in the sense that only a tiny portion of the large number of possible parses of a natural language sentence are syntactically valid. We incorporate an inductive bias into grammar learning in favor of grammars that lead to unambiguous parses on natural language sentences. The resulting family of algorithms includes the expectation-maximization algorithm (EM) and its variant, Viterbi EM, as well as a so-called softmax-EM algorithm. The softmax-EM algorithm can be implemented with a simple and computationally efficient extension to standard EM. In our experiments of unsupervised dependency grammar learning, we show that unambiguity regularization is beneficial to learning, and in combination with annealing (of the regularization strength) and sparsity priors it leads to improvement over the current state of the art.

5 0.56618482 31 emnlp-2012-Cross-Lingual Language Modeling with Syntactic Reordering for Low-Resource Speech Recognition

Author: Ping Xu ; Pascale Fung

Abstract: This paper proposes cross-lingual language modeling for transcribing source resource-poor languages and translating them into target resource-rich languages if necessary. Our focus is to improve the speech recognition performance of low-resource languages by leveraging the language model statistics from resource-rich languages. The most challenging work of cross-lingual language modeling is to solve the syntactic discrepancies between the source and target languages. We therefore propose syntactic reordering for cross-lingual language modeling, and present a first result that compares inversion transduction grammar (ITG) reordering constraints to IBM and local constraints in an integrated speech transcription and translation system. Evaluations on resource-poor Cantonese speech transcription and Cantonese to resource-rich Mandarin translation tasks show that our proposed approach improves the system performance significantly, up to 3.4% relative WER reduction in Cantonese transcription and 13.3% relative bilingual evaluation understudy (BLEU) score improvement in Mandarin transcription compared with the system without reordering.

6 0.56055957 122 emnlp-2012-Syntactic Surprisal Affects Spoken Word Duration in Conversational Contexts

7 0.55341721 88 emnlp-2012-Minimal Dependency Length in Realization Ranking

8 0.54678476 136 emnlp-2012-Weakly Supervised Training of Semantic Parsers

9 0.53920811 109 emnlp-2012-Re-training Monolingual Parser Bilingually for Syntactic SMT

10 0.53054869 82 emnlp-2012-Left-to-Right Tree-to-String Decoding with Prediction

11 0.52427769 125 emnlp-2012-Towards Efficient Named-Entity Rule Induction for Customizability

12 0.51601809 51 emnlp-2012-Extracting Opinion Expressions with semi-Markov Conditional Random Fields

13 0.51166666 124 emnlp-2012-Three Dependency-and-Boundary Models for Grammar Induction

14 0.50860798 8 emnlp-2012-A Phrase-Discovering Topic Model Using Hierarchical Pitman-Yor Processes

15 0.50074238 133 emnlp-2012-Unsupervised PCFG Induction for Grounded Language Learning with Highly Ambiguous Supervision

16 0.48824826 75 emnlp-2012-Large Scale Decipherment for Out-of-Domain Machine Translation

17 0.48131213 27 emnlp-2012-Characterizing Stylistic Elements in Syntactic Structure

18 0.47140491 70 emnlp-2012-Joint Chinese Word Segmentation, POS Tagging and Parsing

19 0.4702282 14 emnlp-2012-A Weakly Supervised Model for Sentence-Level Semantic Orientation Analysis with Multiple Experts

20 0.46433425 81 emnlp-2012-Learning to Map into a Universal POS Tagset