acl acl2012 acl2012-107 knowledge-graph by maker-knowledge-mining

107 acl-2012-Heuristic Cube Pruning in Linear Time


Source: pdf

Author: Andrea Gesmundo ; Giorgio Satta ; James Henderson

Abstract: We propose a novel heuristic algorithm for Cube Pruning running in linear time in the beam size. Empirically, we show a gain in running time of a standard machine translation system, at a small loss in accuracy.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Heuristic Cube Pruning in Linear Time Andrea Gesmundo Department of Computer Science University of Geneva andrea . [sent-1, score-0.117]

2 Abstract We propose a novel heuristic algorithm for Cube Pruning running in linear time in the beam size. [sent-3, score-0.549]

3 Empirically, we show a gain in running time of a standard machine translation system, at a small loss in accuracy. [sent-4, score-0.294]

4 1 Introduction Since its first appearance in (Huang and Chiang, 2005), the Cube Pruning (CP) algorithm has quickly gained popularity in statistical natural language processing. [sent-5, score-0.283]

5 Informally, this algorithm applies to scenarios in which we have the k-best solutions for two input sub-problems, and we need to compute the k-best solutions for the new problem representing the combination of the two sub-problems. [sent-6, score-0.533]

6 Standard implementations of CP run in time O(k log(k)), with k being the size of the input/output beam (Huang and Chiang, 2005). [sent-8, score-0.238]

7 Gesmundo and Henderson (2010) propose Faster CP (FCP), which optimizes the algorithm but keeps the O(k log(k)) time complexity. [sent-9, score-0.281]

8 Here, we propose a novel heuristic algorithm for [sent-10, score-0.093]

9 CP running in time O(k) and evaluate its impact on the efficiency and performance of a real-world machine translation system. [sent-11, score-0.217]

10 Giorgio Satta Department of Information Engineering University of Padua satta@dei.unipd . [sent-12, score-0.113]

11 James Henderson Department of Computer Science University of Geneva james . [sent-14, score-0.077]

12 Let L = ⟨x0, . . . , xk−1⟩ be a list over R, that is, an ordered sequence of real numbers, possibly with repetitions. [sent-19, score-0.120]

13 We say that L is descending if xi ≥ xj for every i,j with 0 ≤ i < j < k. [sent-22, score-0.706]

14 We write L1 ⊕ L2 to denote the descending list over R with [sent-30, score-0.583]

15 elements xi + yj for every i,j with 0 ≤ i < k and 0 ≤ j < k′. [sent-31, score-0.180]

16 In cube pruning (CP) we are given as input two descending lists L1, L2 over R with |L1| = |L2| = k, and we are asked to compute the descending list consisting of the first k elements of L1 ⊕ L2. [sent-32, score-1.342]
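This definition can be sketched with the standard frontier search over the combination grid, in the spirit of Huang and Chiang (2005). The sketch below is an illustrative O(k log k) baseline, not the linear-time heuristic proposed in this paper, and the function name is my own:

```python
import heapq

def cube_pruning_baseline(L1, L2, k):
    """Return the first k elements of L1 (+) L2 in descending order.

    Both inputs are assumed descending. Because of monotonicity,
    the best unexplored combination is always a grid neighbour of
    an already-popped one, so a max-heap over the frontier suffices.
    """
    # Max-heap via negation: start from the best combination (0, 0).
    heap = [(-(L1[0] + L2[0]), 0, 0)]
    seen = {(0, 0)}
    out = []
    while heap and len(out) < k:
        neg, i, j = heapq.heappop(heap)
        out.append(-neg)
        # Push the two frontier neighbours of (i, j).
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < len(L1) and nj < len(L2) and (ni, nj) not in seen:
                seen.add((ni, nj))
                heapq.heappush(heap, (-(L1[ni] + L2[nj]), ni, nj))
    return out

print(cube_pruning_baseline([9.0, 5.0, 1.0], [4.0, 3.0, 0.0], 3))
# [13.0, 12.0, 9.0]
```

Each of the k pops costs O(log k) heap work, which is exactly the O(k log(k)) bound the paper's heuristic improves on.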

17 A problem related to CP is the k-way merge problem (Horowitz and Sahni, 1983). [sent-33, score-0.333]
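For context, the k-way merge problem mentioned here (combining k sorted lists into one sorted list) is available directly in Python's standard library; a minimal illustration on descending lists:

```python
import heapq

# heapq.merge lazily merges sorted inputs in O(n log k) total work;
# reverse=True handles descending lists (Python 3.5+).
lists = [[9, 4, 1], [8, 3], [7, 5, 2]]
merged = list(heapq.merge(*lists, reverse=True))
print(merged)  # [9, 8, 7, 5, 4, 3, 2, 1]
```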


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('cp', 0.457), ('descending', 0.44), ('cube', 0.23), ('pruning', 0.196), ('unige', 0.183), ('chiang', 0.173), ('henderson', 0.147), ('geneva', 0.136), ('huang', 0.131), ('andrea', 0.117), ('merge', 0.117), ('xk', 0.114), ('beam', 0.104), ('running', 0.104), ('bet', 0.1), ('dife', 0.1), ('kbest', 0.1), ('nas', 0.1), ('riesa', 0.1), ('smundo', 0.1), ('wsit', 0.1), ('att', 0.091), ('gesmundo', 0.091), ('reel', 0.091), ('ch', 0.089), ('tlo', 0.085), ('preliminaries', 0.085), ('inexact', 0.085), ('solutions', 0.081), ('ed', 0.081), ('yk', 0.081), ('monotonic', 0.081), ('giorgio', 0.077), ('tohfe', 0.077), ('ame', 0.077), ('eg', 0.077), ('bl', 0.077), ('satta', 0.073), ('informally', 0.073), ('optimizes', 0.071), ('department', 0.07), ('log', 0.068), ('xi', 0.068), ('heuristic', 0.068), ('popularity', 0.068), ('li', 0.068), ('implementations', 0.066), ('appearance', 0.066), ('keeps', 0.066), ('lists', 0.065), ('dc', 0.064), ('yj', 0.064), ('rw', 0.062), ('gained', 0.056), ('algorithm', 0.053), ('time', 0.051), ('xj', 0.05), ('st', 0.05), ('every', 0.048), ('scenarios', 0.048), ('applies', 0.047), ('ea', 0.047), ('loss', 0.045), ('compute', 0.045), ('marcu', 0.044), ('possibly', 0.043), ('conditions', 0.043), ('knight', 0.042), ('write', 0.042), ('linear', 0.04), ('propose', 0.04), ('empirically', 0.04), ('quickly', 0.04), ('efficiency', 0.038), ('say', 0.038), ('translation', 0.037), ('elements', 0.036), ('faster', 0.035), ('gain', 0.035), ('decoding', 0.032), ('james', 0.032), ('programming', 0.031), ('consisting', 0.03), ('dynamic', 0.029), ('engineering', 0.029), ('impact', 0.028), ('representing', 0.028), ('input', 0.026), ('numbers', 0.026), ('definition', 0.026), ('alignment', 0.026), ('denote', 0.025), ('let', 0.024), ('problem', 0.024), ('combining', 0.023), ('de', 0.022), ('certain', 0.022), ('standard', 0.022), ('computer', 0.021), ('run', 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 107 acl-2012-Heuristic Cube Pruning in Linear Time

Author: Andrea Gesmundo ; Giorgio Satta ; James Henderson

Abstract: We propose a novel heuristic algorithm for Cube Pruning running in linear time in the beam size. Empirically, we show a gain in running time of a standard machine translation system, at a small loss in accuracy.

2 0.089467779 155 acl-2012-NiuTrans: An Open Source Toolkit for Phrase-based and Syntax-based Machine Translation

Author: Tong Xiao ; Jingbo Zhu ; Hao Zhang ; Qiang Li

Abstract: We present a new open source toolkit for phrase-based and syntax-based machine translation. The toolkit supports several state-of-the-art models developed in statistical machine translation, including the phrase-based model, the hierachical phrase-based model, and various syntaxbased models. The key innovation provided by the toolkit is that the decoder can work with various grammars and offers different choices of decoding algrithms, such as phrase-based decoding, decoding as parsing/tree-parsing and forest-based decoding. Moreover, several useful utilities were distributed with the toolkit, including a discriminative reordering model, a simple and fast language model, and an implementation of minimum error rate training for weight tuning. 1

3 0.06813807 97 acl-2012-Fast and Scalable Decoding with Language Model Look-Ahead for Phrase-based Statistical Machine Translation

Author: Joern Wuebker ; Hermann Ney ; Richard Zens

Abstract: In this work we present two extensions to the well-known dynamic programming beam search in phrase-based statistical machine translation (SMT), aiming at increased efficiency of decoding by minimizing the number of language model computations and hypothesis expansions. Our results show that language model based pre-sorting yields a small improvement in translation quality and a speedup by a factor of 2. Two look-ahead methods are shown to further increase translation speed by a factor of2 without changing the search space and a factor of 4 with the side-effect of some additional search errors. We compare our ap- proach with Moses and observe the same performance, but a substantially better trade-off between translation quality and speed. At a speed of roughly 70 words per second, Moses reaches 17.2% BLEU, whereas our approach yields 20.0% with identical models.

4 0.064797021 140 acl-2012-Machine Translation without Words through Substring Alignment

Author: Graham Neubig ; Taro Watanabe ; Shinsuke Mori ; Tatsuya Kawahara

Abstract: In this paper, we demonstrate that accurate machine translation is possible without the concept of “words,” treating MT as a problem of transformation between character strings. We achieve this result by applying phrasal inversion transduction grammar alignment techniques to character strings to train a character-based translation model, and using this in the phrase-based MT framework. We also propose a look-ahead parsing algorithm and substring-informed prior probabilities to achieve more effective and efficient alignment. In an evaluation, we demonstrate that character-based translation can achieve results that compare to word-based systems while effectively translating unknown and uncommon words over several language pairs.

5 0.063163362 105 acl-2012-Head-Driven Hierarchical Phrase-based Translation

Author: Junhui Li ; Zhaopeng Tu ; Guodong Zhou ; Josef van Genabith

Abstract: This paper presents an extension of Chiang’s hierarchical phrase-based (HPB) model, called Head-Driven HPB (HD-HPB), which incorporates head information in translation rules to better capture syntax-driven information, as well as improved reordering between any two neighboring non-terminals at any stage of a derivation to explore a larger reordering search space. Experiments on Chinese-English translation on four NIST MT test sets show that the HD-HPB model significantly outperforms Chiang’s model with average gains of 1.91 points absolute in BLEU. 1

6 0.057691559 25 acl-2012-An Exploration of Forest-to-String Translation: Does Translation Help or Hurt Parsing?

7 0.052618347 137 acl-2012-Lemmatisation as a Tagging Task

8 0.051938031 106 acl-2012-Head-driven Transition-based Parsing with Top-down Prediction

9 0.05152639 179 acl-2012-Smaller Alignment Models for Better Translations: Unsupervised Word Alignment with the l0-norm

10 0.050484594 204 acl-2012-Translation Model Size Reduction for Hierarchical Phrase-based Statistical Machine Translation

11 0.050365619 9 acl-2012-A Cost Sensitive Part-of-Speech Tagging: Differentiating Serious Errors from Minor Errors

12 0.049947821 123 acl-2012-Joint Feature Selection in Distributed Stochastic Learning for Large-Scale Discriminative Training in SMT

13 0.04798438 121 acl-2012-Iterative Viterbi A* Algorithm for K-Best Sequential Decoding

14 0.047727872 213 acl-2012-Utilizing Dependency Language Models for Graph-based Dependency Parsing Models

15 0.047383644 109 acl-2012-Higher-order Constituent Parsing and Parser Combination

16 0.045240495 57 acl-2012-Concept-to-text Generation via Discriminative Reranking

17 0.044122502 128 acl-2012-Learning Better Rule Extraction with Translation Span Alignment

18 0.04124428 141 acl-2012-Maximum Expected BLEU Training of Phrase and Lexicon Translation Models

19 0.040568583 143 acl-2012-Mixing Multiple Translation Models in Statistical Machine Translation

20 0.039856296 127 acl-2012-Large-Scale Syntactic Language Modeling with Treelets


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.103), (1, -0.063), (2, -0.01), (3, -0.014), (4, -0.021), (5, -0.013), (6, 0.014), (7, 0.011), (8, 0.029), (9, -0.01), (10, -0.034), (11, -0.032), (12, -0.047), (13, -0.029), (14, 0.023), (15, 0.022), (16, 0.019), (17, 0.103), (18, 0.07), (19, -0.028), (20, 0.059), (21, -0.042), (22, -0.053), (23, 0.035), (24, 0.01), (25, 0.009), (26, -0.002), (27, -0.0), (28, -0.014), (29, -0.044), (30, 0.05), (31, -0.005), (32, -0.036), (33, 0.059), (34, 0.011), (35, 0.041), (36, -0.067), (37, 0.058), (38, -0.161), (39, -0.089), (40, 0.053), (41, -0.041), (42, 0.039), (43, 0.032), (44, -0.016), (45, 0.041), (46, 0.104), (47, 0.034), (48, 0.023), (49, 0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95644611 107 acl-2012-Heuristic Cube Pruning in Linear Time

Author: Andrea Gesmundo ; Giorgio Satta ; James Henderson

Abstract: We propose a novel heuristic algorithm for Cube Pruning running in linear time in the beam size. Empirically, we show a gain in running time of a standard machine translation system, at a small loss in accuracy.

2 0.59361738 121 acl-2012-Iterative Viterbi A* Algorithm for K-Best Sequential Decoding

Author: Zhiheng Huang ; Yi Chang ; Bo Long ; Jean-Francois Crespo ; Anlei Dong ; Sathiya Keerthi ; Su-Lin Wu

Abstract: Sequential modeling has been widely used in a variety of important applications including named entity recognition and shallow parsing. However, as more and more real time large-scale tagging applications arise, decoding speed has become a bottleneck for existing sequential tagging algorithms. In this paper we propose 1-best A*, 1-best iterative A*, k-best A* and k-best iterative Viterbi A* algorithms for sequential decoding. We show the efficiency of these proposed algorithms for five NLP tagging tasks. In particular, we show that iterative Viterbi A* decoding can be several times or orders of magnitude faster than the state-of-the-art algorithm for tagging tasks with a large number of labels. This algorithm makes real-time large-scale tagging applications with thousands of labels feasible.

3 0.55457282 139 acl-2012-MIX Is Not a Tree-Adjoining Language

Author: Makoto Kanazawa ; Sylvain Salvati

Abstract: The language MIX consists of all strings over the three-letter alphabet {a, b, c} that contain an equal n-luemttebrer a olpfh occurrences }o tfh heaatch c olentttaeinr. We prove Joshi’s (1985) conjecture that MIX is not a tree-adjoining language.

4 0.52581006 68 acl-2012-Decoding Running Key Ciphers

Author: Sravana Reddy ; Kevin Knight

Abstract: There has been recent interest in the problem of decoding letter substitution ciphers using techniques inspired by natural language processing. We consider a different type of classical encoding scheme known as the running key cipher, and propose a search solution using Gibbs sampling with a word language model. We evaluate our method on synthetic ciphertexts of different lengths, and find that it outperforms previous work that employs Viterbi decoding with character-based models.

5 0.4877353 185 acl-2012-Strong Lexicalization of Tree Adjoining Grammars

Author: Andreas Maletti ; Joost Engelfriet

Abstract: Recently, it was shown (KUHLMANN, SATTA: Tree-adjoining grammars are not closed under strong lexicalization. Comput. Linguist., 2012) that finitely ambiguous tree adjoining grammars cannot be transformed into a normal form (preserving the generated tree language), in which each production contains a lexical symbol. A more powerful model, the simple context-free tree grammar, admits such a normal form. It can be effectively constructed and the maximal rank of the nonterminals only increases by 1. Thus, simple context-free tree grammars strongly lexicalize tree adjoining grammars and themselves.

6 0.4701179 97 acl-2012-Fast and Scalable Decoding with Language Model Look-Ahead for Phrase-based Statistical Machine Translation

7 0.46087712 155 acl-2012-NiuTrans: An Open Source Toolkit for Phrase-based and Syntax-based Machine Translation

8 0.42857683 57 acl-2012-Concept-to-text Generation via Discriminative Reranking

9 0.42704546 181 acl-2012-Spectral Learning of Latent-Variable PCFGs

10 0.39487216 105 acl-2012-Head-Driven Hierarchical Phrase-based Translation

11 0.38613856 108 acl-2012-Hierarchical Chunk-to-String Translation

12 0.38029119 128 acl-2012-Learning Better Rule Extraction with Translation Span Alignment

13 0.37810785 131 acl-2012-Learning Translation Consensus with Structured Label Propagation

14 0.37727645 137 acl-2012-Lemmatisation as a Tagging Task

15 0.36470938 67 acl-2012-Deciphering Foreign Language by Combining Language Models and Context Vectors

16 0.36113808 174 acl-2012-Semantic Parsing with Bayesian Tree Transducers

17 0.35749227 89 acl-2012-Exploring Deterministic Constraints: from a Constrained English POS Tagger to an Efficient ILP Solution to Chinese Word Segmentation

18 0.35719594 123 acl-2012-Joint Feature Selection in Distributed Stochastic Learning for Large-Scale Discriminative Training in SMT

19 0.33160725 33 acl-2012-Automatic Event Extraction with Structured Preference Modeling

20 0.32500383 143 acl-2012-Mixing Multiple Translation Models in Statistical Machine Translation


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(26, 0.035), (28, 0.037), (37, 0.016), (39, 0.037), (47, 0.459), (74, 0.026), (82, 0.065), (85, 0.021), (90, 0.107), (92, 0.025), (94, 0.038), (99, 0.026)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.79233062 107 acl-2012-Heuristic Cube Pruning in Linear Time

Author: Andrea Gesmundo ; Giorgio Satta ; James Henderson

Abstract: We propose a novel heuristic algorithm for Cube Pruning running in linear time in the beam size. Empirically, we show a gain in running time of a standard machine translation system, at a small loss in accuracy.

2 0.29973552 57 acl-2012-Concept-to-text Generation via Discriminative Reranking

Author: Ioannis Konstas ; Mirella Lapata

Abstract: This paper proposes a data-driven method for concept-to-text generation, the task of automatically producing textual output from non-linguistic input. A key insight in our approach is to reduce the tasks of content selection (“what to say”) and surface realization (“how to say”) into a common parsing problem. We define a probabilistic context-free grammar that describes the structure of the input (a corpus of database records and text describing some of them) and represent it compactly as a weighted hypergraph. The hypergraph structure encodes exponentially many derivations, which we rerank discriminatively using local and global features. We propose a novel decoding algorithm for finding the best scoring derivation and generating in this setting. Experimental evaluation on the ATIS domain shows that our model outperforms a competitive discriminative system both using BLEU and in a judgment elicitation study.

3 0.29839808 187 acl-2012-Subgroup Detection in Ideological Discussions

Author: Amjad Abu-Jbara ; Pradeep Dasigi ; Mona Diab ; Dragomir Radev

Abstract: The rapid and continuous growth of social networking sites has led to the emergence of many communities of communicating groups. Many of these groups discuss ideological and political topics. It is not uncommon that the participants in such discussions split into two or more subgroups. The members of each subgroup share the same opinion toward the discussion topic and are more likely to agree with members of the same subgroup and disagree with members from opposing subgroups. In this paper, we propose an unsupervised approach for automatically detecting discussant subgroups in online communities. We analyze the text exchanged between the participants of a discussion to identify the attitude they carry toward each other and towards the various aspects of the discussion topic. We use attitude predictions to construct an attitude vector for each discussant. We use clustering techniques to cluster these vectors and, hence, determine the subgroup membership of each participant. We compare our methods to text clustering and other baselines, and show that our method achieves promising results.

4 0.29794812 12 acl-2012-A Graph-based Cross-lingual Projection Approach for Weakly Supervised Relation Extraction

Author: Seokhwan Kim ; Gary Geunbae Lee

Abstract: Although researchers have conducted extensive studies on relation extraction in the last decade, supervised approaches are still limited because they require large amounts of training data to achieve high performances. To build a relation extractor without significant annotation effort, we can exploit cross-lingual annotation projection, which leverages parallel corpora as external resources for supervision. This paper proposes a novel graph-based projection approach and demonstrates the merits of it by using a Korean relation extraction system based on projected dataset from an English-Korean parallel corpus.

5 0.2943722 90 acl-2012-Extracting Narrative Timelines as Temporal Dependency Structures

Author: Oleksandr Kolomiyets ; Steven Bethard ; Marie-Francine Moens

Abstract: We propose a new approach to characterizing the timeline of a text: temporal dependency structures, where all the events of a narrative are linked via partial ordering relations like BEFORE, AFTER, OVERLAP and IDENTITY. We annotate a corpus of children’s stories with temporal dependency trees, achieving agreement (Krippendorff’s Alpha) of 0.856 on the event words, 0.822 on the links between events, and of 0.700 on the ordering relation labels. We compare two parsing models for temporal dependency structures, and show that a deterministic non-projective dependency parser outperforms a graph-based maximum spanning tree parser, achieving labeled attachment accuracy of 0.647 and labeled tree edit distance of 0.596. Our analysis of the dependency parser errors gives some insights into future research directions.

6 0.29273763 188 acl-2012-Subgroup Detector: A System for Detecting Subgroups in Online Discussions

7 0.28423983 25 acl-2012-An Exploration of Forest-to-String Translation: Does Translation Help or Hurt Parsing?

8 0.28317937 191 acl-2012-Temporally Anchored Relation Extraction

9 0.28146926 123 acl-2012-Joint Feature Selection in Distributed Stochastic Learning for Large-Scale Discriminative Training in SMT

10 0.27968988 45 acl-2012-Capturing Paradigmatic and Syntagmatic Lexical Relations: Towards Accurate Chinese Part-of-Speech Tagging

11 0.27966857 102 acl-2012-Genre Independent Subgroup Detection in Online Discussion Threads: A Study of Implicit Attitude using Textual Latent Semantics

12 0.27814734 73 acl-2012-Discriminative Learning for Joint Template Filling

13 0.27688015 150 acl-2012-Multilingual Named Entity Recognition using Parallel Data and Metadata from Wikipedia

14 0.276831 140 acl-2012-Machine Translation without Words through Substring Alignment

15 0.27655053 3 acl-2012-A Class-Based Agreement Model for Generating Accurately Inflected Translations

16 0.27611363 172 acl-2012-Selective Sharing for Multilingual Dependency Parsing

17 0.27543947 99 acl-2012-Finding Salient Dates for Building Thematic Timelines

18 0.27538469 85 acl-2012-Event Linking: Grounding Event Reference in a News Archive

19 0.27526599 127 acl-2012-Large-Scale Syntactic Language Modeling with Treelets

20 0.27512932 136 acl-2012-Learning to Translate with Multiple Objectives