acl acl2013 acl2013-321 knowledge-graph by maker-knowledge-mining

321 acl-2013-Sign Language Lexical Recognition With Propositional Dynamic Logic


Source: pdf

Author: Arturo Curiel ; Christophe Collet

Abstract: This paper explores the use of Propositional Dynamic Logic (PDL) as a suitable formal framework for describing Sign Language (SL), the language of deaf people, in the context of natural language processing. SLs are visual, complete, standalone languages which are just as expressive as oral languages. Signs in SL usually correspond to sequences of highly specific body postures interleaved with movements, which make reference to real-world objects, characters or situations. Here we propose a formal representation of SL signs that will help us with the analysis of automatically collected hand-tracking data from French Sign Language (FSL) video corpora. We further show how such a representation could help us with the design of computer-aided SL verification tools, which in turn would bring us closer to the development of an automatic recognition system for these languages.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Sign Language Lexical Recognition With Propositional Dynamic Logic. Arturo Curiel, Université Paul Sabatier, 118 route de Narbonne, IRIT, 31062 Toulouse, France. curiel@irit.fr [sent-1, score-0.039]

2 Christophe Collet, Université Paul Sabatier, 118 route de Narbonne, IRIT, 31062 Toulouse, France. collet@irit.fr Abstract. [sent-2, score-0.039]

3 This paper explores the use of Propositional Dynamic Logic (PDL) as a suitable formal framework for describing Sign Language (SL) , the language of deaf people, in the context of natural language processing. [sent-3, score-0.133]

4 SLs are visual, complete, standalone languages which are just as expressive as oral languages. [sent-4, score-0.078]

5 Signs in SL usually correspond to sequences of highly specific body postures interleaved with movements, which make reference to real world objects, characters or situations. [sent-5, score-0.251]

6 Here we propose a formal representation of SL signs that will help us with the analysis of automatically collected hand-tracking data from French Sign Language (FSL) video corpora. [sent-6, score-0.188]

7 We further show how such a representation could help us with the design of computer-aided SL verification tools, which in turn would bring us closer to the development of an automatic recognition system for these languages. [sent-7, score-0.09]

8 1 Introduction Sign languages (SL) , the vernaculars of deaf people, are complete, rich, standalone communication systems which have evolved in parallel with oral languages (Valli and Lucas, 2000) . [sent-8, score-0.165]

9 However, in contrast to oral languages, research in automatic SL processing has not yet managed to build a complete formal definition oriented to their automatic recognition (Cuxac and Dalle, 2007). [sent-9, score-0.162]

10 In SL, both hands and nonmanual features (NMF), e.g. [sent-10, score-0.088]

11 facial muscles, can convey information with their placements, configurations and movements. [sent-12, score-0.037]

12 Our research strives to address the formalization problem by introducing a logical language that lets us represent SL from the lowest level, so as to render the recognition task more approachable. [sent-14, score-0.039]

13 For this, we use an instance of a formal logic, specifically Propositional Dynamic Logic (PDL) , as a possible description language for SL signs. [sent-15, score-0.046]

14 1 Current Sign Language Research Extensive efforts have been made to achieve efficient automatic capture and representation of the subtle nuances commonly present in sign language discourse (Ong and Ranganath, 2005) . [sent-21, score-0.418]

15 Research ranges from the development of hand and body trackers (Dreuw et al. [sent-22, score-0.153]

16 Works like (Losson and Vannobel, 1998) deal with the creation of a lexical description oriented to computer-based sign animation. [sent-30, score-0.458]

17 Both propose a thoroughly geometrical parametric encoding of signs, thus leaving behind meaningful information necessary for recognition and introducing data beyond the scope of recognition. [sent-32, score-0.039]

18 We work with our own variant of this logic, the Propositional Dynamic Logic for Sign Language (PDLSL) , which is just an instantiation of PDL where we take signers’ movements as programs . [sent-39, score-0.054]

19 Our sign formalization is based on the approach of (Liddell and Johnson, 1989) and (Filhol, 2008). [sent-40, score-0.418]

20 They describe signs as sequences of immutable key postures and movement transitions. [sent-41, score-0.247]

21 In general, each key posture will be characterized by the concurrent parametric state of each body articulator over a time-interval. [sent-42, score-0.448]

22 For us, a body articulator is any relevant body part involved in signing. [sent-43, score-0.502]

23 The parameters taken into account can vary from articulator to articulator, but most of the time they comprise their configurations, orientations and their placement within one or more places of articulation. [sent-44, score-0.307]
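
To make the key-posture description concrete, here is a minimal sketch of how such a posture could be stored; the class and field names (ArticulatorState, KeyPosture) are illustrative assumptions, not the authors' implementation.

    from dataclasses import dataclass, field
    from typing import Dict, Tuple

    @dataclass
    class ArticulatorState:
        """Parametric state of one body articulator during a key posture."""
        configuration: str                  # hand shape, e.g. "OPENPALM"
        orientation: Tuple[float, float]    # 2D direction the articulator points to
        place_of_articulation: str          # e.g. "HEAD", "CHEST", "NEUTRAL"
        position: Tuple[float, float]       # 2D coordinates from the tracker

    @dataclass
    class KeyPosture:
        """Concurrent state of every tracked articulator over a time interval."""
        start_frame: int
        end_frame: int
        articulators: Dict[str, ArticulatorState] = field(default_factory=dict)

    # Example: the right hand resting open on the chest for 15 frames.
    posture = KeyPosture(120, 135, {"R": ArticulatorState("OPENPALM", (0.0, 1.0), "CHEST", (0.42, 0.55))})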

24 Transitions will correspond to the movements executed between fixed postures. [sent-45, score-0.054]

25 Let the set of body articulators for SL be {D, W, R, L}, where D, W, R and L represent the dominant, weak, right and left hands, respectively. [sent-51, score-0.12]

26 Both D and W can be aliases for the right or left hands, but they change depending on whether the signer is right-handed or left-handed, or even depending on the context. [sent-52, score-0.044]

27 Let Ψ be the two-dimensional projection of a human body skeleton, seen from the front. [sent-53, score-0.12]

28 We define the set of places of articulation for SL as ΛSL = {HEAD, CHEST, NEUTRAL, . [sent-54, score-0.161]

29 Let CSL be the set of possible morphological configurations for a hand. [sent-58, score-0.037]
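
As a rough illustration of the primitive sets just introduced, they could be written down as plain constants; the members shown beyond those named in the text are placeholders, since the full sets depend on the SL being modeled.

    # Body articulators: dominant, weak, right and left hand.
    ARTICULATORS = {"D", "W", "R", "L"}

    # Places of articulation on the 2D skeleton projection (extract only).
    PLACES_OF_ARTICULATION = {"HEAD", "CHEST", "NEUTRAL"}

    # Morphological hand configurations (illustrative members).
    CONFIGURATIONS = {"OPENPALM", "CLAMP", "INDEX"}

    # Discrete 2D directions used for relative placement and movement.
    DIRECTIONS = {"UP", "DOWN", "LEFT", "RIGHT",
                  "UP_LEFT", "UP_RIGHT", "DOWN_LEFT", "DOWN_RIGHT"}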

30 Let the ∆ vector δb indicate movement with respect to the dominant or weak hand in the following manner: [sent-63, score-0.049]

31 δb = ←δ if D ≡ L or W ≡ R, and δb = δ if D ≡ R or W ≡ L. Finally, let →v1 and →v2 be any two vectors with the same origin. [sent-64, score-0.033]

32 We denote the rotation angle between the two as θ(→v1, →v2). [sent-65, score-0.039]
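
A small sketch of the two geometric helpers implied above: mirroring a direction when the dominant hand is the left one, and the rotation angle between two vectors with a common origin. The handedness convention and the discrete direction names are assumptions.

    import math

    MIRROR = {"LEFT": "RIGHT", "RIGHT": "LEFT",
              "UP_LEFT": "UP_RIGHT", "UP_RIGHT": "UP_LEFT",
              "DOWN_LEFT": "DOWN_RIGHT", "DOWN_RIGHT": "DOWN_LEFT",
              "UP": "UP", "DOWN": "DOWN"}

    def delta_b(delta: str, dominant_is_left: bool) -> str:
        """Mirror a direction horizontally for left-handed signers, so that
        descriptions written for right-handed signers still apply."""
        return MIRROR[delta] if dominant_is_left else delta

    def rotation_angle(v1, v2) -> float:
        """Angle (radians) between two 2D vectors sharing the same origin."""
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        return math.acos(max(-1.0, min(1.0, dot / norm)))

    print(delta_b("UP_LEFT", dominant_is_left=True))  # UP_RIGHT
    print(round(rotation_angle((1, 0), (0, 1)), 3))   # 1.571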

33 Now we define the set of atomic propositions that we will use to characterize fixed states, and a set of atomic actions to describe movements. [sent-66, score-0.289]

34 2 (Atomic Propositions for SL Body Articulators ΦSL) The set of atomic propositions for SL articulators (ΦSL) is defined as: . [sent-68, score-0.297]

35 Figure 1: Possible places of articulation in BSL. [sent-70, score-0.161]

36 Intuitively, β1δβ2 indicates that articulator β1 is placed in relative direction δ with respect to articulator β2 . [sent-71, score-0.524]

37 Let the current place of articulation of β2 be the origin point of β2 ’s Cartesian system (Cβ2). [sent-72, score-0.116]

38 Let vector →β1 describe the current place of articulation of β1 in β2's Cartesian system Cβ2. [sent-73, score-0.116]

39 β1δβ2 holds when, ∀→v ∈ ∆, θ(→β1, δ) ≤ θ(→β1, →v). Ξβλ1 asserts that articulator β1 is located in λ. [sent-75, score-0.262]

40 Tββ21 is active whenever articulator β1 physically touches articulator β2 . [sent-76, score-0.524]

41 Fcβ1 indicates that c is the morphological configuration of articulator β1 . [sent-77, score-0.295]

42 Finally, ∠δβ1 means that articulator β1 is oriented towards direction δ ∈ ∆. [sent-78, score-0.302]

43 Intuitively, ∠δβ1 holds when the vector perpendicular to the plane of the palm has the smallest rotation angle with respect to δ. Definition 2. [sent-80, score-0.039]
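
A hedged sketch of how these atomic propositions could be evaluated against a tracked key posture; the predicate names mirror the ones above, but the posture representation and the geometric thresholds are assumptions made for illustration.

    import math

    # A key posture is represented here as a dict:
    # {articulator: {"pos": (x, y), "place": str, "config": str}}

    def relative_direction(posture, b1, b2, delta_vec, cos_thresh=0.7):
        """beta1-delta-beta2: b1 lies roughly in direction delta_vec from b2."""
        x1, y1 = posture[b1]["pos"]
        x2, y2 = posture[b2]["pos"]
        rel = (x1 - x2, y1 - y2)
        dot = rel[0] * delta_vec[0] + rel[1] * delta_vec[1]
        return dot / (math.hypot(*rel) * math.hypot(*delta_vec) + 1e-9) > cos_thresh

    def located_in(posture, b1, place):
        """Xi: articulator b1 is located at place of articulation `place`."""
        return posture[b1]["place"] == place

    def touches(posture, b1, b2, eps=0.03):
        """T: b1 physically touches b2 (here: tracked positions nearly coincide)."""
        (x1, y1), (x2, y2) = posture[b1]["pos"], posture[b2]["pos"]
        return math.hypot(x1 - x2, y1 - y2) < eps

    def has_configuration(posture, b1, config):
        """F: config is the morphological configuration of b1."""
        return posture[b1]["config"] == config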

44 3 (Atomic Actions for SL Body Articulators ΠSL) The atomic actions for SL articulators ( ΠSL) are given by the following set: δ δ. [sent-81, score-0.254]

45 Let β1's position before movement be the origin of β1's Cartesian system (Cβ1), and consider the position vector of β1 in Cβ1 after moving. [sent-85, score-0.049]

46 β1 occurs when articulator β1 moves rapidly and continuously (thrills) without changing it’ ’s current place of articulation. [sent-88, score-0.262]

47 4 (Action Language for SL Body Articulators ASL) The action language for body articulators (ASL) is given by the following rule: [sent-90, score-0.17]

48 Finally, action α∗ indicates the reflexive transitive closure of α. [sent-94, score-0.05]
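
The action language (atomic movements composed into larger movements) could be represented as a small expression tree; the constructor set below is an assumption based on the operators that appear in the text, namely ∩ for simultaneous two-hand movement and * for the reflexive transitive closure, plus ordinary sequencing.

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Move:      # atomic action: an articulator moves in a direction
        articulator: str
        direction: str

    @dataclass
    class Seq:       # alpha ; beta : one movement followed by another
        first: "Action"
        second: "Action"

    @dataclass
    class Par:       # alpha ∩ beta : simultaneous movements (e.g. both hands)
        left: "Action"
        right: "Action"

    @dataclass
    class Star:      # alpha* : reflexive transitive closure (repetition)
        body: "Action"

    Action = Union[Move, Seq, Par, Star]

    # Both hands moving down at the same time, possibly repeated.
    example = Star(Par(Move("R", "DOWN"), Move("L", "DOWN")))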

49 Models correspond to connected graphs representing key postures and transitions: states are determined by the values of their propositions, while edges represent sets of executed movements. [sent-100, score-0.165]

50 Here we present only a small extract of the logic semantics. [sent-101, score-0.099]

51 6 (Sign Language Utterance Model USL) A sign language utterance model (USL) is a tuple USL = (S, R, ⟦·⟧ΠSL, ⟦·⟧ΦSL) where: [sent-103, score-0.418]

52 • S is a non-empty set of states. • R is a transition relation R ⊆ S × S where, ∀s ∈ S, ∃s0 ∈ S such that (s, s0) ∈ R. [sent-104, score-0.034]
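
Read this way, an utterance model behaves like a labelled transition system: each state carries the atomic propositions holding in one key posture, and each edge carries the set of movements detected between two postures. A minimal sketch under that reading, with invented field names:

    from dataclasses import dataclass, field
    from typing import Dict, FrozenSet, List, Tuple

    @dataclass
    class UtteranceModel:
        """Labelled transition system over detected key postures."""
        # state id -> atomic propositions holding in that key posture
        valuation: Dict[int, FrozenSet[str]] = field(default_factory=dict)
        # (source, target) -> atomic actions observed on that transition
        transitions: Dict[Tuple[int, int], FrozenSet[str]] = field(default_factory=dict)

        def successors(self, state: int) -> List[int]:
            return [t for (s, t) in self.transitions if s == state]

    m = UtteranceModel(
        valuation={0: frozenset({"Xi(R,CHEST)", "F(R,CLAMP)"}),
                   1: frozenset({"Xi(R,NEUTRAL)", "F(R,CLAMP)"})},
        transitions={(0, 1): frozenset({"Move(R,DOWN)"})},
    )
    print(m.successors(0))  # [1]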

53 We also need to define a structure over sequences of states to model internal dependencies between them; nevertheless, we decided to omit the rest of our semantics, alongside satisfaction conditions, for the sake of readability. [sent-107, score-0.034]

54 3 Use Case: Semi-Automatic Sign Recognition We now present an example of how we can use our formalism in a semi-automatic sign recognition system. [sent-108, score-0.457]

55 Figure 2 shows a simple module diagram exemplifying information flow in the system’s architecture. [sent-109, score-0.078]

56 Figure 2: Information flow in a semi-automatic SL lexical recognition system. [sent-111, score-0.039]

57 1 Tracking and Segmentation Module The process starts by capturing relevant information from video corpora. [sent-113, score-0.032]

58 We use an existing head and hand tracker expressly developed for SL research (Gonzalez and Collet, 2011) . [sent-114, score-0.033]

59 This tool analyses individual video instances, and returns the frame-by-frame positions of the tracked articulators. [sent-115, score-0.066]

60 By using this information, the module can immediately calculate speeds and directions on the fly for each hand. [sent-116, score-0.078]

61 The module further employs the method proposed by the authors in (Gonzalez and Collet, 2012) to achieve sub-lexical segmentation from the previously calculated data. [sent-117, score-0.078]

62 Like them, we use the relative velocity between hands to identify when hands either move at the same time, independently or don’t move at all. [sent-118, score-0.176]

63 With these, we can produce a set of possible key postures and transitions that will serve as input to the modeling module. [sent-119, score-0.168]
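
A rough sketch of the segmentation idea just described: per-frame hand speeds are computed from the tracker output, and frames where both hands are (nearly) still are taken as candidate key postures, the remaining spans being transitions. The threshold and the still/moving criterion are assumptions; (Gonzalez and Collet, 2012) describe the actual method.

    import math

    def speeds(positions):
        """Frame-to-frame speed from a list of (x, y) tracker positions."""
        return [math.dist(positions[i + 1], positions[i])
                for i in range(len(positions) - 1)]

    def candidate_postures(right_pos, left_pos, still_thresh=0.005):
        """Frame indices where both hands move slower than the threshold."""
        vr, vl = speeds(right_pos), speeds(left_pos)
        return [i for i, (r, l) in enumerate(zip(vr, vl))
                if r < still_thresh and l < still_thresh]

    # Tiny synthetic example: both hands pause between frames 2 and 3.
    right = [(0.0, 0.0), (0.05, 0.0), (0.10, 0.0), (0.10, 0.0), (0.15, 0.0)]
    left = [(1.0, 0.0), (0.95, 0.0), (0.90, 0.0), (0.90, 0.0), (0.85, 0.0)]
    print(candidate_postures(right, left))  # [2]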

64 2 Model Extraction Module This module calculates a propositional state for each static posture, where atomic PDLSL formulas codify the information tracked in the previous part. [sent-121, score-0.32]

65 Detected movements are interpreted as PDLSL actions between states. [sent-122, score-0.103]

66 Here, each key posture is codified into propositions acknowledging the hand positions with respect to each other (RL←), their place of articulation (e. [sent-154, score-0.069]

67 g. “left hand floats over the torse” with ΞTLORSE), their configuration (e. [sent-156, score-0.066]

68 g. “right hand is open” with FORPENPALM CONFIG) and their movements (e. [sent-158, score-0.033]

69 g. “left hand moves to the up-left direction” with %L). [sent-160, score-0.033]

70 This module also checks that the generated graph is correct: it will discard simple tracking errors to ensure that the resulting LTS will remain consistent. [sent-161, score-0.077]
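
A hedged sketch of this extraction step: each segmented key posture is mapped to the set of atomic propositions that hold in it, and each transition to the movements observed between two postures. The proposition spellings and the posture representation are placeholders for the PDLSL primitives defined earlier.

    def extract_state(posture):
        """Map one key posture (articulator -> observed parameters) to the
        atomic propositions that hold in it."""
        props = set()
        for art, obs in posture.items():
            props.add(f"Xi({art},{obs['place']})")   # place of articulation
            props.add(f"F({art},{obs['config']})")   # hand configuration
        return frozenset(props)

    def extract_action(prev, curr, eps=0.01):
        """Map a transition between two postures to atomic movements
        (here: the dominant direction of each hand's displacement)."""
        actions = set()
        for art in prev:
            dx = curr[art]["pos"][0] - prev[art]["pos"][0]
            dy = curr[art]["pos"][1] - prev[art]["pos"][1]
            if abs(dx) > eps or abs(dy) > eps:
                horiz = "RIGHT" if dx > 0 else "LEFT"
                vert = "UP" if dy > 0 else "DOWN"
                actions.add(f"Move({art},{vert if abs(dy) >= abs(dx) else horiz})")
        return frozenset(actions)

    p0 = {"R": {"pos": (0.4, 0.6), "place": "CHEST", "config": "CLAMP"}}
    p1 = {"R": {"pos": (0.4, 0.4), "place": "NEUTRAL", "config": "CLAMP"}}
    print(sorted(extract_state(p0)))       # ['F(R,CLAMP)', 'Xi(R,CHEST)']
    print(sorted(extract_action(p0, p1)))  # ['Move(R,DOWN)']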

71 3 Verification Module First of all, the verification module has to be loaded with a database of sign descriptions encoded as PDLSL formulas. [sent-163, score-0.547]

72 These will characterize the specific sequence of key postures that morphologically describe a sign. [sent-164, score-0.131]

73 For example, let’s take the case for sign “route” in FSL, shown in figure 4, with the following PDLSL formulation, Example 3. [sent-165, score-0.418]

74 Formula (1) describes ROUTEFSL as a sign with two key postures, connected by a two-hand simultaneous movement (represented with operator ∩). [sent-168, score-0.467]

75 It also indicates the position of each hand, their orientation, whether they touch and their respective configurations (in this example, both hold the same CLAMP configuration). [sent-169, score-0.037]

76 The module can then verify whether a sign formula in the lexical database holds in any sub-sequence of states of the graph generated in the previous step. [sent-170, score-0.53]

77 Algorithm 1 PDLSL Verification Algorithm. Require: SL model MSL; connected graph GSL; lexical database DBSL. 1: Proposals_For[state_qty] 2: for state s ∈ GSL do 3: for sign ϕ ∈ DBSL where s ∈ ϕ do 4: if MSL, s ⊨ ϕ then 5: Proposals_For[s] . [sent-172, score-0.418]
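
A runnable reading of Algorithm 1, under a strong simplification: sign descriptions are assumed to be pre-compiled into a sequence of required proposition sets, one per key posture, and the PDLSL satisfaction check MSL, s ⊨ ϕ is replaced by plain set inclusion. The real verification module checks full PDLSL formulas, so this only illustrates the control flow.

    def satisfies(state_props, required_props):
        """Simplified satisfaction test: the state must contain every atomic
        proposition required by this key posture of the sign."""
        return required_props <= state_props

    def verify(states, lexicon):
        """states: list of frozensets of atomic propositions, one per key
        posture of the utterance graph. lexicon: sign name -> list of
        required proposition sets. Returns, per state index, the signs
        whose description is matched starting at that state."""
        proposals = {i: [] for i in range(len(states))}
        for sign, key_postures in lexicon.items():
            for start in range(len(states) - len(key_postures) + 1):
                if all(satisfies(states[start + k], req)
                       for k, req in enumerate(key_postures)):
                    proposals[start].append(sign)
        return proposals

    states = [frozenset({"Xi(R,CHEST)", "F(R,CLAMP)", "F(L,CLAMP)"}),
              frozenset({"Xi(R,NEUTRAL)", "F(R,CLAMP)", "F(L,CLAMP)"})]
    lexicon = {"ROUTE_FSL": [frozenset({"F(R,CLAMP)", "F(L,CLAMP)"}),
                             frozenset({"Xi(R,NEUTRAL)"})]}
    print(verify(states, lexicon))  # {0: ['ROUTE_FSL'], 1: []}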

78 4 Conclusions and Future Work We have shown how a logical language can be used to model SL signs for semi-automatic recognition, albeit with some restrictions. [sent-175, score-0.067]

79 The traits we have chosen to represent were imposed by the limits of the tracking tools we had at our disposal, most notably working with 2D coordinates. [sent-176, score-0.077]

80 Our primitive sets were intentionally defined in a very general fashion for the same reason: all of the perceived directions, articulators and places of articulation can easily change their domains, depending on the SL we are modeling or the technological constraints we have to deal with. [sent-178, score-0.292]

81 Propositions can also be changed, or even induced, by existing written sign representation languages such as Zebedee (Filhol, 2008) or HamNoSys (Hanke, 2004) , mainly for the sake of extendability. [sent-179, score-0.418]

82 From the application side, we still need to create an extensive sign database codified in PDLSL and try recognition on other corpora, with different tracking information. [sent-180, score-0.573]

83 For verification and model extraction, further optimizations are expected, including the handling of data inconsistencies and repairing broken queries when verifying the graph. [sent-181, score-0.051]

84 We also expect to finish the definition of our formal semantics, as well as proving the correctness and complexity of our algorithms. [sent-184, score-0.083]

85 Problématique des chercheurs en traitement automatique des langues des signes, volume 48 of Traitement Automatique des Langues. [sent-191, score-0.58]

86 High level models for sign language analysis by a vision system. [sent-197, score-0.418]

87 Enhancing a sign language translation system with vision-based features. [sent-206, score-0.418]

88 The SignSpeak project - bridging the gap between signers and speakers. [sent-212, score-0.039]

89 Modèle descriptif des signes pour un traitement automatique des langues des signes. [sent-219, score-0.585]

90 Zebedee: a lexical description model for sign language synthesis. [sent-225, score-0.418]

91 Robust tracking for processing of videos of communication’s gestures. [sent-235, score-0.077]

92 Robust body parts tracking using particle filter and dynamic template. [sent-239, score-0.242]

93 Sign segmentation using dynamics and hand configuration for semi-automatic annotation of sign language corpora. [sent-243, score-0.484]

94 HamNoSys—Representing sign language data in language resources and language processing contexts. [sent-248, score-0.418]

95 Analyse sémantico-cognitive d’énoncés en Langue des Signes Française pour une génération automatique de séquences gestuelles. [sent-259, score-0.21]

96 Using signing space as a representation for sign language processing. [sent-265, score-0.418]

97 Re-thinking sign language verb classes: the body as subject. [sent-284, score-0.538]

98 Automatic sign language analysis: a survey and the future beyond lexical meaning. [sent-291, score-0.418]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('sl', 0.505), ('sign', 0.418), ('articulator', 0.262), ('pdlsl', 0.197), ('propositional', 0.134), ('articulators', 0.131), ('postures', 0.131), ('tlr', 0.131), ('body', 0.12), ('articulation', 0.116), ('irit', 0.116), ('des', 0.105), ('logic', 0.099), ('propositions', 0.092), ('hands', 0.088), ('collet', 0.087), ('deaf', 0.087), ('module', 0.078), ('tracking', 0.077), ('dalle', 0.077), ('dreuw', 0.077), ('pdl', 0.077), ('atomic', 0.074), ('patrice', 0.071), ('gonzalez', 0.071), ('rl', 0.069), ('signs', 0.067), ('aronoff', 0.066), ('config', 0.066), ('filhol', 0.066), ('meir', 0.066), ('posture', 0.066), ('routefsl', 0.066), ('signes', 0.066), ('usl', 0.066), ('automatique', 0.061), ('christophe', 0.058), ('sylvie', 0.058), ('movements', 0.054), ('langues', 0.053), ('bsl', 0.053), ('verification', 0.051), ('action', 0.05), ('actions', 0.049), ('movement', 0.049), ('asl', 0.048), ('formal', 0.046), ('traitement', 0.046), ('dynamic', 0.045), ('places', 0.045), ('oral', 0.044), ('curiel', 0.044), ('cuxac', 0.044), ('dbsl', 0.044), ('fcllamp', 0.044), ('fcrlamp', 0.044), ('fsl', 0.044), ('gallaudet', 0.044), ('gianni', 0.044), ('gibet', 0.044), ('gsl', 0.044), ('hamnosys', 0.044), ('lenseigne', 0.044), ('liddell', 0.044), ('linguistique', 0.044), ('losson', 0.044), ('matilde', 0.044), ('msl', 0.044), ('narbonne', 0.044), ('pour', 0.044), ('signer', 0.044), ('tlorse', 0.044), ('valli', 0.044), ('zebedee', 0.044), ('simulation', 0.04), ('oriented', 0.04), ('recognition', 0.039), ('route', 0.039), ('codified', 0.039), ('lts', 0.039), ('rotation', 0.039), ('sabatier', 0.039), ('signers', 0.039), ('heidelberg', 0.038), ('transitions', 0.037), ('definition', 0.037), ('configurations', 0.037), ('toulouse', 0.036), ('tiv', 0.036), ('states', 0.034), ('lr', 0.034), ('wendy', 0.034), ('tracked', 0.034), ('standalone', 0.034), ('gesture', 0.034), ('fischer', 0.034), ('configuration', 0.033), ('hand', 0.033), ('universit', 0.033), ('video', 0.032)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000007 321 acl-2013-Sign Language Lexical Recognition With Propositional Dynamic Logic

Author: Arturo Curiel ; Christophe Collet

Abstract: This paper explores the use of Propositional Dynamic Logic (PDL) as a suitable formal framework for describing Sign Language (SL), the language of deaf people, in the context of natural language processing. SLs are visual, complete, standalone languages which are just as expressive as oral languages. Signs in SL usually correspond to sequences of highly specific body postures interleaved with movements, which make reference to real-world objects, characters or situations. Here we propose a formal representation of SL signs that will help us with the analysis of automatically collected hand-tracking data from French Sign Language (FSL) video corpora. We further show how such a representation could help us with the design of computer-aided SL verification tools, which in turn would bring us closer to the development of an automatic recognition system for these languages.

2 0.18010506 360 acl-2013-Translating Italian connectives into Italian Sign Language

Author: Camillo Lugaresi ; Barbara Di Eugenio

Abstract: We present a corpus analysis of how Italian connectives are translated into LIS, the Italian Sign Language. Since corpus resources are scarce, we propose an alignment method between the syntactic trees of the Italian sentence and of its LIS translation. This method, and clustering applied to its outputs, highlight the different ways a connective can be rendered in LIS: with a corresponding sign, by affecting the location or shape of other signs, or being omitted altogether. We translate these findings into a computational model that will be integrated into the pipeline of an existing Italian-LIS rendering system. Initial experiments to learn the four possible translations with Decision Trees give promising results.

3 0.056601014 175 acl-2013-Grounded Language Learning from Video Described with Sentences

Author: Haonan Yu ; Jeffrey Mark Siskind

Abstract: We present a method that learns representations for word meanings from short video clips paired with sentences. Unlike prior work on learning language from symbolic input, our input consists of video of people interacting with multiple complex objects in outdoor environments. Unlike prior computer-vision approaches that learn from videos with verb labels or images with noun labels, our labels are sentences containing nouns, verbs, prepositions, adjectives, and adverbs. The correspondence between words and concepts in the video is learned in an unsupervised fashion, even when the video depicts simultaneous events described by multiple sentences or when different aspects of a single event are described with multiple sentences. The learned word meanings can be subsequently used to automatically generate description of new video.

4 0.047260068 379 acl-2013-Utterance-Level Multimodal Sentiment Analysis

Author: Veronica Perez-Rosas ; Rada Mihalcea ; Louis-Philippe Morency

Abstract: During real-life interactions, people are naturally gesturing and modulating their voice to emphasize specific points or to express their emotions. With the recent growth of social websites such as YouTube, Facebook, and Amazon, video reviews are emerging as a new source of multimodal and natural opinions that has been left almost untapped by automatic opinion analysis techniques. This paper presents a method for multimodal sentiment classification, which can identify the sentiment expressed in utterance-level visual datastreams. Using a new multimodal dataset consisting of sentiment annotated utterances extracted from video reviews, we show that multimodal sentiment analysis can be effectively performed, and that the joint use of visual, acoustic, and linguistic modalities can lead to error rate reductions of up to 10.5% as compared to the best performing individual modality.

5 0.04501361 26 acl-2013-A Transition-Based Dependency Parser Using a Dynamic Parsing Strategy

Author: Francesco Sartorio ; Giorgio Satta ; Joakim Nivre

Abstract: We present a novel transition-based, greedy dependency parser which implements a flexible mix of bottom-up and top-down strategies. The new strategy allows the parser to postpone difficult decisions until the relevant information becomes available. The novel parser has a ∼12% error reduction in unlabeled attachment score over an arc-eager parser, with a slow-down factor of 2.8.

6 0.044716191 124 acl-2013-Discriminative state tracking for spoken dialog systems

7 0.035360683 366 acl-2013-Understanding Verbs based on Overlapping Verbs Senses

8 0.034522131 384 acl-2013-Visual Features for Linguists: Basic image analysis techniques for multimodally-curious NLPers

9 0.034438718 240 acl-2013-Microblogs as Parallel Corpora

10 0.03411724 155 acl-2013-Fast and Accurate Shift-Reduce Constituent Parsing

11 0.033447616 36 acl-2013-Adapting Discriminative Reranking to Grounded Language Learning

12 0.033338889 80 acl-2013-Chinese Parsing Exploiting Characters

13 0.033113159 19 acl-2013-A Shift-Reduce Parsing Algorithm for Phrase-based String-to-Dependency Translation

14 0.031321205 115 acl-2013-Detecting Event-Related Links and Sentiments from Social Media Texts

15 0.030486241 249 acl-2013-Models of Semantic Representation with Visual Attributes

16 0.030460961 187 acl-2013-Identifying Opinion Subgroups in Arabic Online Discussions

17 0.029704399 380 acl-2013-VSEM: An open library for visual semantics representation

18 0.02909871 167 acl-2013-Generalizing Image Captions for Image-Text Parallel Corpus

19 0.027172094 358 acl-2013-Transition-based Dependency Parsing with Selectional Branching

20 0.026990477 29 acl-2013-A Visual Analytics System for Cluster Exploration


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.084), (1, 0.011), (2, -0.023), (3, -0.006), (4, -0.012), (5, -0.039), (6, 0.02), (7, -0.004), (8, 0.023), (9, 0.027), (10, -0.063), (11, -0.026), (12, -0.019), (13, 0.028), (14, 0.014), (15, -0.031), (16, -0.021), (17, -0.006), (18, -0.003), (19, -0.028), (20, -0.012), (21, 0.004), (22, 0.019), (23, -0.013), (24, 0.015), (25, -0.007), (26, -0.05), (27, 0.033), (28, 0.034), (29, -0.033), (30, 0.002), (31, 0.045), (32, 0.022), (33, -0.009), (34, 0.04), (35, -0.001), (36, -0.022), (37, 0.012), (38, -0.011), (39, 0.021), (40, -0.024), (41, 0.057), (42, 0.026), (43, 0.036), (44, -0.099), (45, 0.059), (46, 0.023), (47, 0.014), (48, -0.066), (49, -0.058)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.92288959 321 acl-2013-Sign Language Lexical Recognition With Propositional Dynamic Logic

Author: Arturo Curiel ; Christophe Collet

Abstract: This paper explores the use of Propositional Dynamic Logic (PDL) as a suitable formal framework for describing Sign Language (SL), the language of deaf people, in the context of natural language processing. SLs are visual, complete, standalone languages which are just as expressive as oral languages. Signs in SL usually correspond to sequences of highly specific body postures interleaved with movements, which make reference to real-world objects, characters or situations. Here we propose a formal representation of SL signs that will help us with the analysis of automatically collected hand-tracking data from French Sign Language (FSL) video corpora. We further show how such a representation could help us with the design of computer-aided SL verification tools, which in turn would bring us closer to the development of an automatic recognition system for these languages.

2 0.69704121 360 acl-2013-Translating Italian connectives into Italian Sign Language

Author: Camillo Lugaresi ; Barbara Di Eugenio

Abstract: We present a corpus analysis of how Italian connectives are translated into LIS, the Italian Sign Language. Since corpus resources are scarce, we propose an alignment method between the syntactic trees of the Italian sentence and of its LIS translation. This method, and clustering applied to its outputs, highlight the different ways a connective can be rendered in LIS: with a corresponding sign, by affecting the location or shape of other signs, or being omitted altogether. We translate these findings into a computational model that will be integrated into the pipeline of an existing Italian-LIS rendering system. Initial experiments to learn the four possible translations with Decision Trees give promising results.

3 0.53227311 175 acl-2013-Grounded Language Learning from Video Described with Sentences

Author: Haonan Yu ; Jeffrey Mark Siskind

Abstract: We present a method that learns representations for word meanings from short video clips paired with sentences. Unlike prior work on learning language from symbolic input, our input consists of video of people interacting with multiple complex objects in outdoor environments. Unlike prior computer-vision approaches that learn from videos with verb labels or images with noun labels, our labels are sentences containing nouns, verbs, prepositions, adjectives, and adverbs. The correspondence between words and concepts in the video is learned in an unsupervised fashion, even when the video depicts simultaneous events described by multiple sentences or when different aspects of a single event are described with multiple sentences. The learned word meanings can be subsequently used to automatically generate description of new video.

4 0.5265761 90 acl-2013-Conditional Random Fields for Responsive Surface Realisation using Global Features

Author: Nina Dethlefs ; Helen Hastie ; Heriberto Cuayahuitl ; Oliver Lemon

Abstract: Surface realisers in spoken dialogue systems need to be more responsive than conventional surface realisers. They need to be sensitive to the utterance context as well as robust to partial or changing generator inputs. We formulate surface realisation as a sequence labelling task and combine the use of conditional random fields (CRFs) with semantic trees. Due to their extended notion of context, CRFs are able to take the global utterance context into account and are less constrained by local features than other realisers. This leads to more natural and less repetitive surface realisation. It also allows generation from partial and modified inputs and is therefore applicable to incremental surface realisation. Results from a human rating study confirm that users are sensitive to this extended notion of context and assign ratings that are significantly higher (up to 14%) than those for taking only local context into account.

5 0.51203996 86 acl-2013-Combining Referring Expression Generation and Surface Realization: A Corpus-Based Investigation of Architectures

Author: Sina Zarriess ; Jonas Kuhn

Abstract: We suggest a generation task that integrates discourse-level referring expression generation and sentence-level surface realization. We present a data set of German articles annotated with deep syntax and referents, including some types of implicit referents. Our experiments compare several architectures varying the order of a set of trainable modules. The results suggest that a revision-based pipeline, with intermediate linearization, significantly outperforms standard pipelines or a parallel architecture.

6 0.50136048 337 acl-2013-Tag2Blog: Narrative Generation from Satellite Tag Data

7 0.50093257 239 acl-2013-Meet EDGAR, a tutoring agent at MONSERRATE

8 0.49006388 349 acl-2013-The mathematics of language learning

9 0.48475128 1 acl-2013-"Let Everything Turn Well in Your Wife": Generation of Adult Humor Using Lexical Constraints

10 0.47460729 190 acl-2013-Implicatures and Nested Beliefs in Approximate Decentralized-POMDPs

11 0.45732719 161 acl-2013-Fluid Construction Grammar for Historical and Evolutionary Linguistics

12 0.45606154 364 acl-2013-Typesetting for Improved Readability using Lexical and Syntactic Information

13 0.44362602 249 acl-2013-Models of Semantic Representation with Visual Attributes

14 0.4396354 303 acl-2013-Robust multilingual statistical morphological generation models

15 0.43119162 72 acl-2013-Bridging Languages through Etymology: The case of cross language text categorization

16 0.42865118 14 acl-2013-A Novel Classifier Based on Quantum Computation

17 0.42839935 198 acl-2013-IndoNet: A Multilingual Lexical Knowledge Network for Indian Languages

18 0.42526507 203 acl-2013-Is word-to-phone mapping better than phone-phone mapping for handling English words?

19 0.4168101 378 acl-2013-Using subcategorization knowledge to improve case prediction for translation to German

20 0.41601169 370 acl-2013-Unsupervised Transcription of Historical Documents


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.029), (3, 0.426), (6, 0.029), (11, 0.049), (14, 0.032), (24, 0.038), (26, 0.028), (28, 0.012), (35, 0.049), (42, 0.031), (48, 0.047), (70, 0.051), (88, 0.018), (90, 0.018), (95, 0.052)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.83382726 321 acl-2013-Sign Language Lexical Recognition With Propositional Dynamic Logic

Author: Arturo Curiel ; Christophe Collet

Abstract: This paper explores the use of Propositional Dynamic Logic (PDL) as a suitable formal framework for describing Sign Language (SL), the language of deaf people, in the context of natural language processing. SLs are visual, complete, standalone languages which are just as expressive as oral languages. Signs in SL usually correspond to sequences of highly specific body postures interleaved with movements, which make reference to real-world objects, characters or situations. Here we propose a formal representation of SL signs that will help us with the analysis of automatically collected hand-tracking data from French Sign Language (FSL) video corpora. We further show how such a representation could help us with the design of computer-aided SL verification tools, which in turn would bring us closer to the development of an automatic recognition system for these languages.

2 0.47755462 216 acl-2013-Large tagset labeling using Feed Forward Neural Networks. Case study on Romanian Language

Author: Tiberiu Boros ; Radu Ion ; Dan Tufis

Abstract: Standard methods for part-of-speech tagging suffer from data sparseness when used on highly inflectional languages (which require large lexical tagset inventories). For this reason, a number of alternative methods have been proposed over the years. One of the most successful methods used for this task, Tiered Tagging (Tufiş, 1999), exploits a reduced set of tags derived by removing several recoverable features from the lexicon morpho-syntactic descriptions. A second phase is aimed at recovering the full set of morpho-syntactic features. In this paper we present an alternative method to Tiered Tagging, based on local optimizations with Neural Networks, and we show how, by properly encoding the input sequence in a general Neural Network architecture, we achieve results similar to the Tiered Tagging methodology, significantly faster and without requiring extensive linguistic knowledge as implied by the previously mentioned method.

3 0.28943825 155 acl-2013-Fast and Accurate Shift-Reduce Constituent Parsing

Author: Muhua Zhu ; Yue Zhang ; Wenliang Chen ; Min Zhang ; Jingbo Zhu

Abstract: Shift-reduce dependency parsers give comparable accuracies to their chartbased counterparts, yet the best shiftreduce constituent parsers still lag behind the state-of-the-art. One important reason is the existence of unary nodes in phrase structure trees, which leads to different numbers of shift-reduce actions between different outputs for the same input. This turns out to have a large empirical impact on the framework of global training and beam search. We propose a simple yet effective extension to the shift-reduce process, which eliminates size differences between action sequences in beam-search. Our parser gives comparable accuracies to the state-of-the-art chart parsers. With linear run-time complexity, our parser is over an order of magnitude faster than the fastest chart parser.

4 0.28839874 31 acl-2013-A corpus-based evaluation method for Distributional Semantic Models

Author: Abdellah Fourtassi ; Emmanuel Dupoux

Abstract: Evaluation methods for Distributional Semantic Models typically rely on behaviorally derived gold standards. These methods are difficult to deploy in languages with scarce linguistic/behavioral resources. We introduce a corpus-based measure that evaluates the stability of the lexical semantic similarity space using a pseudo-synonym same-different detection task and no external resources. We show that it enables to predict two behaviorbased measures across a range of parameters in a Latent Semantic Analysis model.

5 0.28700006 329 acl-2013-Statistical Machine Translation Improves Question Retrieval in Community Question Answering via Matrix Factorization

Author: Guangyou Zhou ; Fang Liu ; Yang Liu ; Shizhu He ; Jun Zhao

Abstract: Community question answering (CQA) has become an increasingly popular research topic. In this paper, we focus on the problem of question retrieval. Question retrieval in CQA can automatically find the most relevant and recent questions that have been solved by other users. However, the word ambiguity and word mismatch problems bring about new challenges for question retrieval in CQA. State-of-the-art approaches address these issues by implicitly expanding the queried questions with additional words or phrases using monolingual translation models. While useful, the effectiveness of these models is highly dependent on the availability of quality parallel monolingual corpora (e.g., question-answer pairs) in the absence of which they are troubled by noise issue. In this work, we propose an alternative way to address the word ambiguity and word mismatch problems by taking advantage of potentially rich semantic information drawn from other languages. Our proposed method employs statistical machine translation to improve question retrieval and enriches the question representation with the translated words from other languages via matrix factorization. Experiments conducted on a real CQA data show that our proposed approach is promising.

6 0.28582197 80 acl-2013-Chinese Parsing Exploiting Characters

7 0.28517485 275 acl-2013-Parsing with Compositional Vector Grammars

8 0.28484982 272 acl-2013-Paraphrase-Driven Learning for Open Question Answering

9 0.283117 82 acl-2013-Co-regularizing character-based and word-based models for semi-supervised Chinese word segmentation

10 0.28301692 137 acl-2013-Enlisting the Ghost: Modeling Empty Categories for Machine Translation

11 0.28258082 7 acl-2013-A Lattice-based Framework for Joint Chinese Word Segmentation, POS Tagging and Parsing

12 0.28241977 169 acl-2013-Generating Synthetic Comparable Questions for News Articles

13 0.28240949 185 acl-2013-Identifying Bad Semantic Neighbors for Improving Distributional Thesauri

14 0.28239158 134 acl-2013-Embedding Semantic Similarity in Tree Kernels for Domain Adaptation of Relation Extraction

15 0.28128797 78 acl-2013-Categorization of Turkish News Documents with Morphological Analysis

16 0.28123105 314 acl-2013-Semantic Roles for String to Tree Machine Translation

17 0.28106067 62 acl-2013-Automatic Term Ambiguity Detection

18 0.28083867 212 acl-2013-Language-Independent Discriminative Parsing of Temporal Expressions

19 0.28053617 267 acl-2013-PARMA: A Predicate Argument Aligner

20 0.28043053 318 acl-2013-Sentiment Relevance