acl acl2013 acl2013-321 acl2013-321-reference knowledge-graph by maker-knowledge-mining

321 acl-2013-Sign Language Lexical Recognition With Propositional Dynamic Logic


Source: pdf

Author: Arturo Curiel ; Christophe Collet

Abstract: This paper explores the use of Propositional Dynamic Logic (PDL) as a suitable formal framework for describing Sign Language (SL), the language of deaf people, in the context of natural language processing. SLs are visual, complete, standalone languages which are just as expressive as oral languages. Signs in SL usually correspond to sequences of highly specific body postures interleaved with movements, which make reference to real-world objects, characters or situations. Here we propose a formal representation of SL signs that will help us with the analysis of automatically collected hand-tracking data from French Sign Language (FSL) video corpora. We further show how such a representation could help us with the design of computer-aided SL verification tools, which in turn would bring us closer to the development of an automatic recognition system for these languages.
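To make the abstract's idea concrete, here is a minimal, purely illustrative PDL-style sketch; it is not taken from the paper itself, and the proposition and program names (posture_start, move, posture_end) are hypothetical. In PDL, atomic propositions can describe body postures and atomic programs can describe movements, so a simple lexical sign built from a starting posture, a hand movement and an ending posture could be constrained by a formula such as

posture_start → [move] posture_end

read as "whenever the movement move is performed from the starting posture, the ending posture holds afterwards". PDL's sequencing (;), choice (∪) and iteration (*) operators (Fischer and Ladner, 1979) would then compose such elementary steps into descriptions of full signs.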


reference text

Mark Aronoff, Irit Meir, and Wendy Sandler. 2005. The paradox of sign language morphology. Language, 81(2):301.

Christian Cuxac and Patrice Dalle. 2007. Problématique des chercheurs en traitement automatique des langues des signes, volume 48 of Traitement Automatique des Langues. Lavoisier, http://www.editions-hermes.fr/, October.

Patrice Dalle. 2006. High level models for sign language analysis by a vision system. In Workshop on the Representation and Processing of Sign Language: Lexicographic Matters and Didactic Scenarios (LREC), Italy, ELDA, pages 17–20.

DictaSign. 2012. http://www.dictasign.eu.

Philippe Dreuw, Daniel Stein, and Hermann Ney. 2009. Enhancing a sign language translation system with vision-based features. In Miguel Sales Dias, Sylvie Gibet, Marcelo M. Wanderley, and Rafael Bastos, editors, Gesture-Based Human-Computer Interaction and Simulation, number 5085 in Lecture Notes in Computer Science, pages 108–113. Springer Berlin Heidelberg, January.

Philippe Dreuw, Hermann Ney, Gregorio Martinez, Onno Crasborn, Justus Piater, Jose Miguel Moya, and Mark Wheatley. 2010. The SignSpeak project - bridging the gap between signers and speakers. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, et al., editors, Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta, May. European Language Resources Association (ELRA).

Michael Filhol. 2008. Modèle descriptif des signes pour un traitement automatique des langues des signes. Ph.D. thesis, Université Paris-Sud (Paris 11).

Michael Filhol. 2009. Zebedee: a lexical description model for sign language synthesis. Internal report, LIMSI.

Michael J. Fischer and Richard E. Ladner. 1979. Propositional dynamic logic of regular programs. Journal of Computer and System Sciences, 18(2):194–211, April.

Frédéric Gianni and Patrice Dalle. 2009. Robust tracking for processing of videos of communication's gestures. Gesture-Based Human-Computer Interaction and Simulation, pages 93–101.

Matilde Gonzalez and Christophe Collet. 2011. Robust body parts tracking using particle filter and dynamic template. In 2011 18th IEEE International Conference on Image Processing (ICIP), pages 529–532, September.

Matilde Gonzalez and Christophe Collet. 2012. Sign segmentation using dynamics and hand configuration for semi-automatic annotation of sign language corpora. In Eleni Efthimiou, Georgios Kouroupetroglou, and Stavroula-Evita Fotinea, editors, Gesture and Sign Language in Human-Computer Interaction and Embodied Communication, number 7206 in Lecture Notes in Computer Science, pages 204–215. Springer Berlin Heidelberg, January.

Thomas Hanke. 2004. HamNoSys—Representing sign language data in language resources and language processing contexts. In Proceedings of the Workshop on the Representation and Processing of Sign Languages "From SignWriting to Image Processing", Lisbon, Portugal, 30 May.

Jaakko Hintikka. 1962. Knowledge and Belief. Cornell University Press, Ithaca, N.Y.

Fanch Lejeune. 2004. Analyse sémantico-cognitive d'énoncés en Langue des Signes Française pour une génération automatique de séquences gestuelles. Ph.D. thesis, Orsay University, France.

Boris Lenseigne and Patrice Dalle. 2006. Using signing space as a representation for sign language processing. In Sylvie Gibet, Nicolas Courty, and Jean-François Kamp, editors, Gesture in Human-Computer Interaction and Simulation, number 3881 in Lecture Notes in Computer Science, pages 25–36. Springer Berlin Heidelberg, January.

S. K. Liddell and R. E. Johnson. 1989. American Sign Language: The phonological base. Gallaudet University Press, Washington, DC.

Olivier Losson and Jean-Marc Vannobel. 1998. Sign language formal description and synthesis. International Journal of Virtual Reality, 3:27–34.

Irit Meir, Carol Padden, Mark Aronoff, and Wendy Sandler. 2006. Re-thinking sign language verb classes: the body as subject. In Sign Languages: Spinning and Unraveling the Past, Present and Future. 9th Theoretical Issues in Sign Language Research Conference, Florianopolis, Brazil, volume 382.

Sylvie C. W. Ong and Surendra Ranganath. 2005. Automatic sign language analysis: a survey and the future beyond lexical meaning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(6):873–891, June.

William C. Stokoe. 2005. Sign language structure: An outline of the visual communication systems of the American deaf. Journal of Deaf Studies and Deaf Education, 10(1):3–37, January.

Clayton Valli and Ceil Lucas. 2000. Linguistics of American Sign Language Text, 3rd Edition: An Introduction. Gallaudet University Press.

Henri Wittmann. 1991. Classification linguistique des langues signées non vocalement. Revue québécoise de linguistique théorique et appliquée, 10(1):88.