acl acl2013 acl2013-176 knowledge-graph by maker-knowledge-mining

176 acl-2013-Grounded Unsupervised Semantic Parsing


Source: pdf

Author: Hoifung Poon

Abstract: We present the first unsupervised approach for semantic parsing that rivals the accuracy of supervised approaches in translating natural-language questions to database queries. Our GUSP system produces a semantic parse by annotating the dependency-tree nodes and edges with latent states, and learns a probabilistic grammar using EM. To compensate for the lack of example annotations or question-answer pairs, GUSP adopts a novel grounded-learning approach to leverage database for indirect supervision. On the challenging ATIS dataset, GUSP attained an accuracy of 84%, effectively tying with the best published results by supervised approaches.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: We present the first unsupervised approach for semantic parsing that rivals the accuracy of supervised approaches in translating natural-language questions to database queries. [sent-2, score-0.25]

2 Our GUSP system produces a semantic parse by annotating the dependency-tree nodes and edges with latent states, and learns a probabilistic grammar using EM. [sent-3, score-0.159]

3 1 Introduction: Semantic parsing maps text to a formal meaning representation such as logical forms or structured queries. [sent-6, score-0.166]

4 In this paper, we present the GUSP system, which combines unsupervised semantic parsing with grounded learning from a database. [sent-21, score-0.171]

5 GUSP starts with the dependency tree of a sentence and produces a semantic parse by annotating the nodes and edges with latent semantic states derived from the database. [sent-22, score-0.431]

6 To compensate for the lack of direct supervision, GUSP constrains the search space using the database schema, and bootstraps learning using lexical scores computed from the names and values of database elements. [sent-24, score-0.174]

7 Unlike USP, GUSP predetermines the target logical forms based on the database schema, which alleviates the difficulty in learning and ensures that the output semantic parses can be directly used in querying the database. [sent-26, score-0.262]

8 GUSP addresses this by augmenting the state space to represent semantic relations beyond the immediate dependency neighborhood. [sent-32, score-0.222]

9 Liang et al. (2011) proposed DCS for dependency-based compositional semantics, which represents a semantic parse as a tree with nodes representing database elements and operations, and edges representing relational joins. [sent-47, score-0.292]

10 In this paper, we focus on semantic parsing for natural-language interfaces to databases (Grosz et al.). [sent-48, score-0.188]

11 USP defines a probabilistic model over the dependency tree and semantic parse using Markov logic (Domingos and Lowd, 2009), and recursively clusters and composes synonymous dependency treelets using a hard EM-like procedure. [sent-53, score-0.228]

12 Therefore, to answer complex questions against a database, it requires an additional ontology matching step to resolve USP clusters with database elements. [sent-59, score-0.148]

13 Figure 1: End-to-end question answering by GUSP for the sentence "get flight from toronto to san diego stopping in dtw". [sent-74, score-0.474]

14 Top: the dependency tree of the sentence is annotated with latent semantic states by GUSP. [sent-75, score-0.272]

15 Raising occurs from flight to get and sinking occurs from get to diego. [sent-77, score-0.386]

16 GUSP produces a semantic parse of the question by annotating its dependency tree with latent semantic states. [sent-98, score-0.254]

17 Specifically, it defines the semantic states based on the database schema, and derives lexical-trigger scores from database elements to bootstrap learning. [sent-103, score-0.364]

18 Second, in contrast to most existing approaches for semantic parsing, GUSP starts directly from dependency trees and focuses on translating them into semantic parses. [sent-104, score-0.161]

19 To combat this problem, GUSP introduces a novel dependency-based meaning representation with an augmented state space to account for semantic relations that are nonlocal in the dependency tree. [sent-108, score-0.248]

20 GUSP also handles complex linguistic phenomena and syntax-semantics mismatch by explicitly augmenting the state space, whereas USP’s capability in handling such phenomena is indirect and more limited. [sent-111, score-0.211]

21 Their approach to semantic parsing, however, differs from GUSP in that it induced the semantic tree directly from a sentence, rather than starting from a dependency tree and annotating it. [sent-114, score-0.246]

22 For example, in the phrase cheapest flight to Seattle, the scope of cheapest can be either flight or flight to seattle. [sent-132, score-1.006]

23 2 Simple Semantic States. Node states: GUSP creates a state E:X (E short for entity) for each database entity X (i.e., a database table). [sent-144, score-0.410]

24 It also creates a state P:Y (P short for property) and a state V:Y (V short for value) for each database attribute Y (i.e., a column). [sent-146, score-0.291]

25 For example, the ATIS domain contains entities such as flight and fare, which may contain properties such as the departure time of a flight (flight.departure_time). [sent-151, score-0.655]

26 In the semantic parse in Figure 1, for example, flight is assigned to the entity state E:flight, while toronto is assigned to the value state V:city.name. [sent-155, score-0.745]

27 There is a special node state NULL, which signifies that the subtree headed by the word contributes no meaning to the semantic parse. [sent-157, score-0.374]
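
The following Python sketch illustrates how such node states could be enumerated from a database schema. It is a minimal illustration only: the toy schema fragment and the State representation are assumptions, not structures from the paper.

```python
from collections import namedtuple

# A node state is a kind (E, P, V, or NULL) plus the database element it
# refers to, e.g. E:flight or V:city.name.
State = namedtuple("State", ["kind", "element"])

# Toy fragment of an ATIS-like schema: table -> attributes (assumption).
SCHEMA = {
    "flight": ["departure_time", "from_airport", "to_airport"],
    "fare": ["one_way_cost"],
    "city": ["name"],
}

def simple_node_states(schema):
    """One E state per entity (table), one P and one V state per attribute,
    plus the special NULL state for meaningless subtrees."""
    states = [State("NULL", None)]
    for table, attrs in schema.items():
        states.append(State("E", table))
        for attr in attrs:
            states.append(State("P", f"{table}.{attr}"))
            states.append(State("V", f"{table}.{attr}"))
    return states

if __name__ == "__main__":
    for s in simple_node_states(SCHEMA):
        print(f"{s.kind}:{s.element}" if s.element else s.kind)
```
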

28 Edge states: GUSP creates an edge state for each valid relational join path connecting two node states. [sent-160, score-0.597]

29 GUSP enforces the constraint that the node states of the dependency parent and child must agree with the node states in the edge state. [sent-162, score-0.697]

30 The edge state E:flight-V:flight.departure_time, for instance, represents a natural join between the flight entity and the property value departure time. [sent-164, score-0.524]

31 For a dependency edge e : a → b, the assignment to E:flight-V:flight.departure_time [sent-165, score-0.15]

32 signifies that a represents a flight entity, and b represents the value of its departure time. [sent-166, score-0.408]

33 An edge state may also represent a relational path consisting of a series of joins. [sent-167, score-0.25]

34 For example, Zettlemoyer and Collins (2007) used a predicate from(f, c) to signify that flight f starts from city c. [sent-168, score-0.361]

35 In the ATIS database, however, this amounts to a path of three joins: flight. [sent-169, score-0.314]

36 from_airport-airport, airport-airport_service, airport_service-city. In GUSP, this is represented by the edge state flight-flight. [sent-170, score-1.056]

37 from_airport-airport-airport_service-city. [sent-171, score-0.324]

38 GUSP only creates edge states for relational join paths up to length four, as longer paths rarely correspond to meaningful semantic relations. [sent-172, score-0.423]
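
A hedged sketch of how candidate edge states could be enumerated as join paths of length at most four over a foreign-key graph. The join graph below is an invented toy fragment; the real ATIS schema and GUSP's validity checks are not reproduced here.

```python
from collections import defaultdict

# Toy foreign-key joins over an ATIS-like schema (names are illustrative).
JOINS = [
    ("flight", "flight.from_airport"),
    ("flight", "flight.to_airport"),
    ("flight.from_airport", "airport"),
    ("flight.to_airport", "airport"),
    ("airport", "airport_service"),
    ("airport_service", "city"),
]

def build_graph(joins):
    graph = defaultdict(set)
    for a, b in joins:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def join_paths(graph, start, max_joins=4):
    """Enumerate acyclic join paths with at most max_joins joins starting at
    `start`; each path is a candidate edge state between two node states."""
    paths = []
    def dfs(path):
        if len(path) > 1:
            paths.append(tuple(path))
        if len(path) - 1 == max_joins:
            return
        for nxt in graph[path[-1]]:
            if nxt not in path:
                dfs(path + [nxt])
    dfs([start])
    return paths

if __name__ == "__main__":
    for p in join_paths(build_graph(JOINS), "flight"):
        print("-".join(p))  # e.g. flight-flight.from_airport-airport-airport_service-city
```
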

39 In GUSP, this is handled by introducing, for each node state such as E:airline, a new node state such as E:airline:C, where C signifies composition. [sent-174, score-0.462]

40 Operator states: GUSP creates node states for the logical and comparison operators (OR, AND, NOT, MORE, LESS, EQ). [sent-180, score-0.452]

41 Additionally, to handle cases where prepositions and logical connectives are collapsed into the label of a dependency edge, as in Stanford dependencies, GUSP introduces an edge state for each triple of an operator and two node states, such as E:flight-AND-E:fare. [sent-181, score-0.745]

42 Quantifier states: GUSP creates a node state for each of the standard SQL functions: argmin, argmax, count, and sum. [sent-182, score-0.415]

43 Additionally, it creates a node state for each pair of compatible function and property. [sent-183, score-0.26]

44 For example, argmin can be applied to any numeric property, in particular flight.departure_time, [sent-184, score-0.335]

45 and so a node state pairing argmin with the property P:flight.departure_time is created. [sent-185, score-0.552]
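
A small sketch of how operator and quantifier states could be added on top of the simple states. Which properties count as numeric, and the pairing of every SQL function with every numeric property, are simplifying assumptions here.

```python
OPERATORS = ["OR", "AND", "NOT", "MORE", "LESS", "EQ"]
SQL_FUNCTIONS = ["argmin", "argmax", "count", "sum"]

# Placeholder: the set of numeric properties is an assumption about the schema.
NUMERIC_PROPERTIES = ["flight.departure_time", "fare.one_way_cost"]

def operator_states():
    """One node state per logical or comparison operator."""
    return [f"OP:{op}" for op in OPERATORS]

def quantifier_states(numeric_properties):
    """One node state per SQL function, plus one per (function, property)
    pair, e.g. argmin paired with flight.departure_time."""
    states = [f"Q:{fn}" for fn in SQL_FUNCTIONS]
    states += [f"Q:{fn}:P:{prop}"
               for fn in SQL_FUNCTIONS for prop in numeric_properties]
    return states

print(operator_states())
print(quantifier_states(NUMERIC_PROPERTIES))
```
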

46 4 Complex Semantic States: For sentences with a correct dependency tree and well-aligned syntax and semantics, the simple semantic states suffice for annotating the correct semantic parse. [sent-188, score-0.347]

47 In Figure 1, the dependency tree contains multiple errors: from toronto and to san diego are mistakenly attached to get, which has no literal meaning here; stopping in dtw is also wrongly attached to diego rather than flight. [sent-190, score-0.348]

48 Annotating such a tree with only simple states will lead to incorrect semantic parses, e.g., [sent-191, score-0.223]

49 by joining V:city:san diego with V:airport:dtw via E:airport_service, rather than joining E:flight with V:airport:dtw via E:flight_stop. [sent-193, score-0.934]

50 To overcome these challenges, GUSP introduces three types of complex states to handle syntax-semantics divergence. [sent-194, score-0.158]

51 Raising: For each simple node state N, GUSP creates a "raised" state N:R (R short for raised). [sent-196, score-0.377]

52 A raised state signifies a word that carries little or no meaning of its own, but effectively takes one of its child states to be its own ("raises" it). [sent-197, score-0.374]

53 Correspondingly, GUSP creates a “raising” edge state N-R-N, which signifies that the parent is a raised state and its meaning is derived from the dependency child of state N. [sent-198, score-0.74]

54 For all other children, the parent behaves just as state N. [sent-199, score-0.177]

55 For example, in Figure 1, get is assigned to the raised state E:flight:R, and the edge between get and flight is assigned to the raising edge state E:flight-R-E:flight. [sent-200, score-1.765]

56 Sinking: For simple node states A, B and an edge state E connecting the two, GUSP creates a "sinking" node state A+E+B:S (S for sinking). [sent-201, score-0.725]

57 When a node n is assigned to such a sinking state, n can behave as either A or B for its children (i.e., [sent-202, score-0.166]

58 the edge states can connect to either one), and n's parent must be of state B. [sent-204, score-0.393]

59 In Figure 1, for example, diego is assigned to the sinking state V:city.name [sent-205, score-0.29]

60 + E:flight (the edge state is omitted for brevity). [sent-206, score-0.532]

61 For child san, diego behaves as in state V:city.name, [sent-208, score-0.287]

62 and their edge state is a simple compositional join. [sent-209, score-0.218]

63 For the other child stopping, diego behaves as in state E:flight, and their edge state is a relational join connecting flight with flight_stop. [sent-210, score-0.246]

64 Effectively, this connects stopping with get and eventually with flight (due to raising), virtually correcting the syntax-semantics mismatch stemming from attachment errors. [sent-211, score-0.373]

65 Implicit: For simple node states A, B and an edge state E connecting the two, GUSP also creates a node state A+E+B:I (I for implicit) with the "implicit" state B. [sent-212, score-0.842]

66 For example, to obtain the correct semantic parse for "Give me the fare from Seattle to Boston", one needs to infer the existence of a flight entity, as in "Give me the fare (of a flight) from Seattle to Boston". [sent-214, score-0.461]

67 As in sinking, child nodes have access to either of the two simple states, but the implicit state is not visible to the parent node. [sent-216, score-0.25]
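
To summarize the three complex state types, here is a minimal sketch of how the augmented state space could be derived from simple node and edge states. The string encoding of states and the toy inputs are illustrative assumptions only.

```python
def raised_states(node_states):
    """N:R for each simple node state N: the word itself is meaningless and
    takes the meaning of one dependency child of state N."""
    return [f"{n}:R" for n in node_states]

def sinking_states(triples):
    """A+E+B:S for simple states A, B joined by edge state E: the node acts
    as A or B toward its children, and as B toward its parent."""
    return [f"{a}+{e}+{b}:S" for a, e, b in triples]

def implicit_states(triples):
    """A+E+B:I for simple states A, B joined by edge state E: children may
    attach to A or B, but only A is visible to the parent (B is implicit)."""
    return [f"{a}+{e}+{b}:I" for a, e, b in triples]

# Toy inputs loosely mirroring Figure 1 (the edge name is a placeholder).
nodes = ["E:flight", "V:city.name"]
triples = [("V:city.name", "city_to_flight_join", "E:flight")]

print(raised_states(nodes))
print(sinking_states(triples))
print(implicit_states(triples))
```
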

68 If a database element has a name of k words, each word is assigned score 1/k for the corresponding node state. [sent-219, score-0.181]

69 In a sentence, if a word w triggers a node state with score s, its dependency children and left and right neighbors also receive a discounted trigger score. [sent-221, score-0.26]

70 For multi-word values of property Y, and for a dependency edge connecting two collocated words, GUSP assigns a score of 1.0 [sent-227, score-0.192]

71 to the edge state joining the value node state V:Y to its composition state V:Y:C, as well as the edge state joining two composition states V:Y:C. [sent-228, score-0.95]
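
A rough sketch of how the lexical trigger scores described above could be computed. The tokenization of element names and the neighbor discount value are assumptions; the exact discount used in the paper is not preserved in this extract.

```python
def name_trigger_scores(element_names):
    """A database element named with k words triggers its node state with
    score 1/k for each of those words."""
    scores = {}
    for state, name in element_names.items():
        words = name.lower().replace("_", " ").split()
        for w in words:
            scores.setdefault(w, {})[state] = 1.0 / len(words)
    return scores

def propagate_to_neighbors(tokens, trigger_scores, discount=0.5):
    """A word that triggers a state with score s also passes a discounted
    score to its left and right neighbors (dependency children would be
    handled analogously); the discount value here is a placeholder."""
    token_scores = [dict(trigger_scores.get(t, {})) for t in tokens]
    for i, t in enumerate(tokens):
        for state, s in trigger_scores.get(t, {}).items():
            for j in (i - 1, i + 1):
                if 0 <= j < len(tokens):
                    token_scores[j][state] = max(token_scores[j].get(state, 0.0),
                                                 discount * s)
    return token_scores

# Toy element names (assumptions).
elements = {"P:flight.departure_time": "departure_time", "E:flight": "flight"}
print(propagate_to_neighbors(["departure", "time", "of", "flight"],
                             name_trigger_scores(elements)))
```
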

72 6 The GUSP Model: In a nutshell, the GUSP model resembles a tree HMM, which models the emission of words and dependencies by node and edge states, as well as transitions between an edge state and the parent and child node states. [sent-236, score-0.642]

73 Specifically, GUSP defines a probability distribution over dependency tree d and semantic parse z by P_θ(d, z) = (1/Z) exp Σ_i f_i(d, z) · w_i(d, z), where f_i and w_i are features and their weights, and Z is the normalization constant that sums over all possible d, z (over the same unlabeled tree). [sent-240, score-0.179]

74 For example, given a token t that triggers node state N with score s, there is a corresponding feature 1(lemma = t, state = N) with weight α · s, where α is a parameter. [sent-242, score-0.328]

75 Emission features for node states: GUSP uses two templates for emission of node states: for raised states, 1(token = ·), i.e., [sent-243, score-0.399]

76 the emission weights for all raised states are tied; for the non-raised states, 1(lemma = ·, state = N). [sent-245, score-0.161]

77 Complexity prior: To favor simple semantic parses, GUSP imposes an exponential prior with weight β on node states that are not NULL or raised, and on each relational join in an edge state. [sent-248, score-0.412]
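
The following sketch scores one assignment of node and edge states for a fixed dependency tree under a log-linear model of the form above. The feature templates, the trigger bonus, and the complexity penalty are toy stand-ins for GUSP's actual features, and the partition function Z is omitted because it cancels when comparing assignments for the same tree.

```python
def log_score(tree, assignment, weights, alpha=1.0, beta=1.0,
              trigger=lambda lemma, state: 0.0):
    """Unnormalized log-score: emission features for node states, transition
    features for edge states, lexical-trigger bonuses (alpha * score), and a
    complexity penalty (beta per state that is neither NULL nor raised)."""
    total = 0.0
    for node, (lemma, parent) in tree.items():
        state = assignment["nodes"][node]
        total += weights.get(("emit", lemma, state), 0.0)
        total += alpha * trigger(lemma, state)
        if state != "NULL" and not state.endswith(":R"):
            total -= beta                      # complexity prior
        if parent is not None:
            edge_state = assignment["edges"][(parent, node)]
            total += weights.get(("trans", assignment["nodes"][parent],
                                  edge_state, state), 0.0)
    return total

# Toy tree (node -> (lemma, parent)) and assignment, mirroring part of Figure 1.
tree = {0: ("get", None), 1: ("flight", 0), 2: ("toronto", 1)}
assignment = {
    "nodes": {0: "E:flight:R", 1: "E:flight", 2: "V:city.name"},
    "edges": {(0, 1): "E:flight-R-E:flight", (1, 2): "E:flight-V:city.name"},
}
print(log_score(tree, assignment, weights={}))
```
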

78 8 Query Generation: Given a semantic parse, GUSP generates the SQL query by a depth-first traversal that recursively computes the denotation of a node from the denotations of its children and its node state and edge states. [sent-258, score-0.531]

79 Value node state: GUSP creates a semantic object of the given type with a unique index and the word constant. [sent-261, score-0.339]

80 For example, the denotation for node toronto is a city object with the word constant toronto. [sent-262, score-0.218]

81 Entity or property node state: GUSP creates a semantic object of the given type with a unique relation index. [sent-268, score-0.362]

82 For example, the denotation for node flight is simply a flight object with a unique index. [sent-269, score-0.792]

83 Simple edge state: GUSP appends the child denotation to that of the parent, and appends equality constraints corresponding to the relational join path. [sent-272, score-0.398]

84 In the case of composition, such as the join between diego and san, GUSP simply keeps the parent object, while adding to it the words from the child. [sent-273, score-0.146]

85 In the case of a more complex join, such as that between stopping and dtw, GUSP adds the relational constraints that join flight_stop with airport. [sent-274, score-0.83]

86 Raising edge state: GUSP simply takes the child denotation and sets it as the parent's denotation. [sent-277, score-0.315]

87 Implicit and sinking states: GUSP maintains two separate denotations for the two simple states in the complex state, and processes their respective edge states accordingly. [sent-278, score-0.621]

88 For example, the node diego contains two denotations, one for V:city.name and one for E:flight. [sent-279, score-0.195]

89 Domain-independent states: For comparator states such as MORE or LESS, GUSP changes the default equality constraint to the corresponding inequality. [sent-281, score-0.268]

90 Resolving scoping ambiguities: GUSP delays applying quantifiers until the child semantic object differs from the parent one or until reaching the root. [sent-285, score-0.192]
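
A simplified sketch of the depth-first denotation computation for query generation. The denotation is reduced to a set of tables and value constraints rendered as a flat SQL-like string; raising, sinking, implicit states, operators, and the equality constraints along join paths are all elided, and every name is illustrative rather than the paper's actual data structure.

```python
def generate_query(tree, assignment, root):
    """Depth-first traversal: entity states contribute tables, value states
    contribute column constraints on the word constant, and simple edge
    states pull in the intermediate tables of their join path."""
    tables, constraints = [], []

    def children(node):
        return [n for n, (_, parent) in tree.items() if parent == node]

    def visit(node):
        kind, _, element = assignment["nodes"][node].partition(":")
        if kind == "E":
            tables.append(element)
        elif kind == "V":
            constraints.append(f"{element} = '{tree[node][0]}'")
        for child in children(node):
            visit(child)
            edge = assignment["edges"].get((node, child), "")
            for hop in edge.split("-")[1:-1]:   # intermediate join elements
                tables.append(hop.split(".")[0])

    visit(root)
    uniq = list(dict.fromkeys(tables))
    return (f"SELECT DISTINCT {uniq[0]}.* FROM {', '.join(uniq)} "
            f"WHERE {' AND '.join(constraints) if constraints else 'TRUE'}")

# Toy example mirroring a fragment of Figure 1 (states are assumptions).
tree = {0: ("flight", None), 1: ("toronto", 0)}
assignment = {
    "nodes": {0: "E:flight", 1: "V:city.name"},
    "edges": {(0, 1): "flight-flight.from_airport-airport-airport_service-city"},
}
print(generate_query(tree, assignment, root=0))
```
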

91 Since our goal is not to produce a specific logical form, we directly evaluate on the end-to-end task of translating questions into database queries and measure question-answering accuracy. [sent-292, score-0.194]

92 Since SPLAT does not output dependency trees, we ran the Stanford parser over SPLAT parses to generate dependency trees in the Stanford dependency representation (de Marneffe et al.). [sent-303, score-0.171]

93 Above, we discussed an edge state that joins a flight with its starting city: flight-flight.from_airport-airport-airport_service-city. [sent-313, score-0.873]

94 The ATIS database also contains another path of the same length: flight-flight.to_airport-airport-airport_service-city. [sent-315, score-0.401]

95 There is no obvious information to disambiguate between flight.from_airport and flight.to_airport. [sent-328, score-0.314]

96 Additionally, GUSP-FULL substantially outperformed GUSP-SIMPLE, highlighting the challenges of syntax-semantics mismatch in ATIS, and demonstrating the importance and effectiveness of complex states for handling such mismatch. [sent-354, score-0.185]

97 All three types of complex states contributed significantly. [sent-355, score-0.158]

98 5 Conclusion: This paper introduces grounded unsupervised semantic parsing, which leverages an available database for indirect supervision and uses a grounded meaning representation to account for syntax-semantics mismatch in dependency-based semantic parsing. [sent-365, score-0.426]

99 The resulting GUSP system is the first unsupervised approach to attain an accuracy comparable to the best supervised systems in translating complex natural-language questions to database queries. [sent-366, score-0.173]

100 (Footnote 5) Note that this is still different from the currently predominant approaches in semantic parsing, which learn to parse both syntax and semantics from semantic-parsing datasets alone; these are considerably smaller than the resources available for syntactic parsing. [sent-369, score-0.198]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('gusp', 0.815), ('flight', 0.314), ('states', 0.134), ('atis', 0.119), ('state', 0.117), ('edge', 0.101), ('rport', 0.099), ('sql', 0.095), ('node', 0.094), ('usp', 0.087), ('database', 0.087), ('sinking', 0.072), ('logical', 0.07), ('airport', 0.063), ('zettlemoyer', 0.061), ('ime', 0.059), ('semantic', 0.056), ('diego', 0.054), ('forty', 0.054), ('join', 0.051), ('child', 0.05), ('dependency', 0.049), ('creates', 0.049), ('denotation', 0.047), ('city', 0.047), ('grounded', 0.045), ('poon', 0.045), ('parsing', 0.045), ('dtw', 0.045), ('fubl', 0.045), ('emission', 0.044), ('parse', 0.041), ('parent', 0.041), ('raising', 0.04), ('signifies', 0.04), ('questions', 0.037), ('superlatives', 0.036), ('supervision', 0.035), ('domingos', 0.035), ('tree', 0.033), ('raised', 0.033), ('relational', 0.032), ('stopping', 0.032), ('cheapest', 0.032), ('splat', 0.032), ('toronto', 0.03), ('clarke', 0.029), ('joins', 0.027), ('departure', 0.027), ('mismatch', 0.027), ('ervi', 0.027), ('rvi', 0.027), ('meaning', 0.026), ('joining', 0.026), ('quantifier', 0.026), ('san', 0.025), ('forms', 0.025), ('mooney', 0.025), ('fare', 0.025), ('unsupervised', 0.025), ('hoifung', 0.024), ('luke', 0.024), ('complex', 0.024), ('parses', 0.024), ('indirect', 0.024), ('kwiatkowski', 0.023), ('property', 0.023), ('collins', 0.023), ('edges', 0.023), ('entity', 0.023), ('object', 0.023), ('implicit', 0.022), ('pedro', 0.022), ('questionanswer', 0.022), ('quantifiers', 0.022), ('denotations', 0.022), ('liang', 0.022), ('titov', 0.021), ('argmin', 0.021), ('nodes', 0.02), ('popescu', 0.02), ('orschinger', 0.02), ('operators', 0.02), ('annotating', 0.019), ('ninth', 0.019), ('connecting', 0.019), ('answering', 0.019), ('schema', 0.019), ('behaves', 0.019), ('whereas', 0.019), ('em', 0.019), ('answers', 0.019), ('null', 0.018), ('arrival', 0.018), ('auxilliary', 0.018), ('guspsimple', 0.018), ('pstaatreen', 0.018), ('servi', 0.018), ('staotet', 0.018), ('seattle', 0.018)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999994 176 acl-2013-Grounded Unsupervised Semantic Parsing

Author: Hoifung Poon

Abstract: We present the first unsupervised approach for semantic parsing that rivals the accuracy of supervised approaches in translating natural-language questions to database queries. Our GUSP system produces a semantic parse by annotating the dependency-tree nodes and edges with latent states, and learns a probabilistic grammar using EM. To compensate for the lack of example annotations or question-answer pairs, GUSP adopts a novel grounded-learning approach to leverage database for indirect supervision. On the challenging ATIS dataset, GUSP attained an accuracy of 84%, effectively tying with the best published results by supervised approaches.

2 0.079131544 228 acl-2013-Leveraging Domain-Independent Information in Semantic Parsing

Author: Dan Goldwasser ; Dan Roth

Abstract: Semantic parsing is a domain-dependent process by nature, as its output is defined over a set of domain symbols. Motivated by the observation that interpretation can be decomposed into domain-dependent and independent components, we suggest a novel interpretation model, which augments a domain dependent model with abstract information that can be shared by multiple domains. Our experiments show that this type of information is useful and can reduce the annotation effort significantly when moving between domains.

3 0.072107755 272 acl-2013-Paraphrase-Driven Learning for Open Question Answering

Author: Anthony Fader ; Luke Zettlemoyer ; Oren Etzioni

Abstract: We study question answering as a machine learning problem, and induce a function that maps open-domain questions to queries over a database of web extractions. Given a large, community-authored, question-paraphrase corpus, we demonstrate that it is possible to learn a semantic lexicon and linear ranking function without manually annotating questions. Our approach automatically generalizes a seed lexicon and includes a scalable, parallelized perceptron parameter estimation scheme. Experiments show that our approach more than quadruples the recall of the seed lexicon, with only an 8% loss in precision.

4 0.062085919 274 acl-2013-Parsing Graphs with Hyperedge Replacement Grammars

Author: David Chiang ; Jacob Andreas ; Daniel Bauer ; Karl Moritz Hermann ; Bevan Jones ; Kevin Knight

Abstract: Hyperedge replacement grammar (HRG) is a formalism for generating and transforming graphs that has potential applications in natural language understanding and generation. A recognition algorithm due to Lautemann is known to be polynomial-time for graphs that are connected and of bounded degree. We present a more precise characterization of the algorithm’s complexity, an optimization analogous to binarization of contextfree grammars, and some important implementation details, resulting in an algorithm that is practical for natural-language applications. The algorithm is part of Bolinas, a new software toolkit for HRG processing.

5 0.058305532 36 acl-2013-Adapting Discriminative Reranking to Grounded Language Learning

Author: Joohyun Kim ; Raymond Mooney

Abstract: We adapt discriminative reranking to improve the performance of grounded language acquisition, specifically the task of learning to follow navigation instructions from observation. Unlike conventional reranking used in syntactic and semantic parsing, gold-standard reference trees are not naturally available in a grounded setting. Instead, we show how the weak supervision of response feedback (e.g. successful task completion) can be used as an alternative, experimentally demonstrating that its performance is comparable to training on gold-standard parse trees.

6 0.056440484 291 acl-2013-Question Answering Using Enhanced Lexical Semantic Models

7 0.054282442 215 acl-2013-Large-scale Semantic Parsing via Schema Matching and Lexicon Extension

8 0.053091548 19 acl-2013-A Shift-Reduce Parsing Algorithm for Phrase-based String-to-Dependency Translation

9 0.051288854 313 acl-2013-Semantic Parsing with Combinatory Categorial Grammars

10 0.049207408 230 acl-2013-Lightly Supervised Learning of Procedural Dialog Systems

11 0.048916847 124 acl-2013-Discriminative state tracking for spoken dialog systems

12 0.048693594 312 acl-2013-Semantic Parsing as Machine Translation

13 0.04866083 26 acl-2013-A Transition-Based Dependency Parser Using a Dynamic Parsing Strategy

14 0.047740739 155 acl-2013-Fast and Accurate Shift-Reduce Constituent Parsing

15 0.047558714 358 acl-2013-Transition-based Dependency Parsing with Selectional Branching

16 0.047095109 212 acl-2013-Language-Independent Discriminative Parsing of Temporal Expressions

17 0.047023579 276 acl-2013-Part-of-Speech Induction in Dependency Trees for Statistical Machine Translation

18 0.045252264 275 acl-2013-Parsing with Compositional Vector Grammars

19 0.044271201 169 acl-2013-Generating Synthetic Comparable Questions for News Articles

20 0.044246595 133 acl-2013-Efficient Implementation of Beam-Search Incremental Parsers


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.116), (1, -0.013), (2, -0.066), (3, -0.041), (4, -0.049), (5, 0.021), (6, 0.031), (7, -0.045), (8, 0.026), (9, -0.012), (10, 0.028), (11, -0.019), (12, 0.014), (13, -0.024), (14, 0.022), (15, 0.003), (16, -0.01), (17, 0.001), (18, -0.003), (19, -0.016), (20, -0.014), (21, -0.014), (22, 0.036), (23, 0.055), (24, 0.012), (25, -0.013), (26, 0.013), (27, 0.067), (28, -0.002), (29, -0.005), (30, 0.005), (31, 0.003), (32, 0.015), (33, -0.006), (34, 0.011), (35, -0.009), (36, 0.01), (37, -0.019), (38, 0.014), (39, 0.031), (40, -0.011), (41, 0.033), (42, 0.007), (43, 0.004), (44, -0.031), (45, 0.073), (46, -0.018), (47, 0.026), (48, 0.001), (49, 0.046)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.91570151 176 acl-2013-Grounded Unsupervised Semantic Parsing

Author: Hoifung Poon

Abstract: We present the first unsupervised approach for semantic parsing that rivals the accuracy of supervised approaches in translating natural-language questions to database queries. Our GUSP system produces a semantic parse by annotating the dependency-tree nodes and edges with latent states, and learns a probabilistic grammar using EM. To compensate for the lack of example annotations or question-answer pairs, GUSP adopts a novel grounded-learning approach to leverage database for indirect supervision. On the challenging ATIS dataset, GUSP attained an accuracy of 84%, effectively tying with the best published results by supervised approaches.

2 0.72674525 313 acl-2013-Semantic Parsing with Combinatory Categorial Grammars

Author: Yoav Artzi ; Nicholas FitzGerald ; Luke Zettlemoyer

Abstract: unkown-abstract

3 0.71218896 215 acl-2013-Large-scale Semantic Parsing via Schema Matching and Lexicon Extension

Author: Qingqing Cai ; Alexander Yates

Abstract: Supervised training procedures for semantic parsers produce high-quality semantic parsers, but they have difficulty scaling to large databases because of the sheer number of logical constants for which they must see labeled training data. We present a technique for developing semantic parsers for large databases based on a reduction to standard supervised training algorithms, schema matching, and pattern learning. Leveraging techniques from each of these areas, we develop a semantic parser for Freebase that is capable of parsing questions with an F1 that improves by 0.42 over a purely-supervised learning algorithm.

4 0.70937866 36 acl-2013-Adapting Discriminative Reranking to Grounded Language Learning

Author: Joohyun Kim ; Raymond Mooney

Abstract: We adapt discriminative reranking to improve the performance of grounded language acquisition, specifically the task of learning to follow navigation instructions from observation. Unlike conventional reranking used in syntactic and semantic parsing, gold-standard reference trees are not naturally available in a grounded setting. Instead, we show how the weak supervision of response feedback (e.g. successful task completion) can be used as an alternative, experimentally demonstrating that its performance is comparable to training on gold-standard parse trees.

5 0.65468735 228 acl-2013-Leveraging Domain-Independent Information in Semantic Parsing

Author: Dan Goldwasser ; Dan Roth

Abstract: Semantic parsing is a domain-dependent process by nature, as its output is defined over a set of domain symbols. Motivated by the observation that interpretation can be decomposed into domain-dependent and independent components, we suggest a novel interpretation model, which augments a domain dependent model with abstract information that can be shared by multiple domains. Our experiments show that this type of information is useful and can reduce the annotation effort significantly when moving between domains.

6 0.612454 163 acl-2013-From Natural Language Specifications to Program Input Parsers

7 0.58645278 311 acl-2013-Semantic Neighborhoods as Hypergraphs

8 0.57745701 272 acl-2013-Paraphrase-Driven Learning for Open Question Answering

9 0.57221299 275 acl-2013-Parsing with Compositional Vector Grammars

10 0.56864858 26 acl-2013-A Transition-Based Dependency Parser Using a Dynamic Parsing Strategy

11 0.55654979 343 acl-2013-The Effect of Higher-Order Dependency Features in Discriminative Phrase-Structure Parsing

12 0.55185694 260 acl-2013-Nonconvex Global Optimization for Latent-Variable Models

13 0.5497151 161 acl-2013-Fluid Construction Grammar for Historical and Evolutionary Linguistics

14 0.54517931 324 acl-2013-Smatch: an Evaluation Metric for Semantic Feature Structures

15 0.54273885 155 acl-2013-Fast and Accurate Shift-Reduce Constituent Parsing

16 0.53710878 124 acl-2013-Discriminative state tracking for spoken dialog systems

17 0.53656727 349 acl-2013-The mathematics of language learning

18 0.53461999 348 acl-2013-The effect of non-tightness on Bayesian estimation of PCFGs

19 0.52829492 212 acl-2013-Language-Independent Discriminative Parsing of Temporal Expressions

20 0.52521431 291 acl-2013-Question Answering Using Enhanced Lexical Semantic Models


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.037), (6, 0.062), (11, 0.073), (23, 0.012), (24, 0.036), (26, 0.05), (28, 0.016), (35, 0.073), (42, 0.05), (46, 0.235), (48, 0.044), (64, 0.033), (70, 0.053), (88, 0.03), (90, 0.021), (95, 0.045)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.87645662 360 acl-2013-Translating Italian connectives into Italian Sign Language

Author: Camillo Lugaresi ; Barbara Di Eugenio

Abstract: We present a corpus analysis of how Italian connectives are translated into LIS, the Italian Sign Language. Since corpus resources are scarce, we propose an alignment method between the syntactic trees of the Italian sentence and of its LIS translation. This method, and clustering applied to its outputs, highlight the different ways a connective can be rendered in LIS: with a corresponding sign, by affecting the location or shape of other signs, or being omitted altogether. We translate these findings into a computational model that will be integrated into the pipeline of an existing Italian-LIS rendering system. Initial experiments to learn the four possible translations with Decision Trees give promising results.

same-paper 2 0.79275626 176 acl-2013-Grounded Unsupervised Semantic Parsing

Author: Hoifung Poon

Abstract: We present the first unsupervised approach for semantic parsing that rivals the accuracy of supervised approaches in translating natural-language questions to database queries. Our GUSP system produces a semantic parse by annotating the dependency-tree nodes and edges with latent states, and learns a probabilistic grammar using EM. To compensate for the lack of example annotations or question-answer pairs, GUSP adopts a novel grounded-learning approach to leverage database for indirect supervision. On the challenging ATIS dataset, GUSP attained an accuracy of 84%, effectively tying with the best published results by supervised approaches.

3 0.57583606 174 acl-2013-Graph Propagation for Paraphrasing Out-of-Vocabulary Words in Statistical Machine Translation

Author: Majid Razmara ; Maryam Siahbani ; Reza Haffari ; Anoop Sarkar

Abstract: Out-of-vocabulary (oov) words or phrases still remain a challenge in statistical machine translation especially when a limited amount of parallel text is available for training or when there is a domain shift from training data to test data. In this paper, we propose a novel approach to finding translations for oov words. We induce a lexicon by constructing a graph on source language monolingual text and employ a graph propagation technique in order to find translations for all the source language phrases. Our method differs from previous approaches by adopting a graph propagation approach that takes into account not only one-step (from oov directly to a source language phrase that has a translation) but multi-step paraphrases from oov source language words to other source language phrases and eventually to target language translations. Experimental results show that our graph propagation method significantly improves performance over two strong baselines under intrinsic and extrinsic evaluation metrics.

4 0.57062972 126 acl-2013-Diverse Keyword Extraction from Conversations

Author: Maryam Habibi ; Andrei Popescu-Belis

Abstract: A new method for keyword extraction from conversations is introduced, which preserves the diversity of topics that are mentioned. Inspired from summarization, the method maximizes the coverage of topics that are recognized automatically in transcripts of conversation fragments. The method is evaluated on excerpts of the Fisher and AMI corpora, using a crowdsourcing platform to elicit comparative relevance judgments. The results demonstrate that the method outperforms two competitive baselines.

5 0.56079417 83 acl-2013-Collective Annotation of Linguistic Resources: Basic Principles and a Formal Model

Author: Ulle Endriss ; Raquel Fernandez

Abstract: Crowdsourcing, which offers new ways of cheaply and quickly gathering large amounts of information contributed by volunteers online, has revolutionised the collection of labelled data. Yet, to create annotated linguistic resources from this data, we face the challenge of having to combine the judgements of a potentially large group of annotators. In this paper we investigate how to aggregate individual annotations into a single collective annotation, taking inspiration from the field of social choice theory. We formulate a general formal model for collective annotation and propose several aggregation methods that go beyond the commonly used majority rule. We test some of our methods on data from a crowdsourcing experiment on textual entailment annotation.

6 0.55896282 333 acl-2013-Summarization Through Submodularity and Dispersion

7 0.55432659 22 acl-2013-A Structured Distributional Semantic Model for Event Co-reference

8 0.55376583 155 acl-2013-Fast and Accurate Shift-Reduce Constituent Parsing

9 0.55365086 275 acl-2013-Parsing with Compositional Vector Grammars

10 0.55358207 228 acl-2013-Leveraging Domain-Independent Information in Semantic Parsing

11 0.55286723 318 acl-2013-Sentiment Relevance

12 0.55131841 159 acl-2013-Filling Knowledge Base Gaps for Distant Supervision of Relation Extraction

13 0.55073416 36 acl-2013-Adapting Discriminative Reranking to Grounded Language Learning

14 0.55060613 212 acl-2013-Language-Independent Discriminative Parsing of Temporal Expressions

15 0.54946053 15 acl-2013-A Novel Graph-based Compact Representation of Word Alignment

16 0.5492183 169 acl-2013-Generating Synthetic Comparable Questions for News Articles

17 0.54896933 132 acl-2013-Easy-First POS Tagging and Dependency Parsing with Beam Search

18 0.54860687 225 acl-2013-Learning to Order Natural Language Texts

19 0.54779118 185 acl-2013-Identifying Bad Semantic Neighbors for Improving Distributional Thesauri

20 0.54695576 265 acl-2013-Outsourcing FrameNet to the Crowd