Sign Language Recognition Based on Hand and Body Skeletal Data

D. Konstantinidis
K. Dimitropoulos
P. Daras
3DTV Conference (3DTV-CON), IEEE, Stockholm-Helsinki, 3-5 June 2018.


Sign language recognition (SLR) is a challenging but highly important research field for computer vision systems that aim to facilitate communication for deaf and hearing-impaired people. In this work, we propose an accurate and robust deep learning-based methodology for sign language recognition from video sequences. Our novel method relies on hand and body skeletal features extracted from RGB videos and therefore acquires skeletal data that are highly discriminative for gesture recognition, without the need for any additional equipment, such as data gloves, that may restrict the signer's movements. Experimentation on a large publicly available sign language dataset reveals the superiority of our methodology over other state-of-the-art approaches that rely solely on RGB features.
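To make the skeleton-based idea in the abstract concrete, the following is a minimal hypothetical sketch, not the authors' actual deep learning architecture: per-frame 2D skeletal keypoints (as an RGB pose estimator such as OpenPose would produce) are normalized to be invariant to the signer's position and scale, summarized into a fixed-size video descriptor, and classified here with a simple nearest-centroid rule standing in for the paper's deep network. The joint count (25) and the choice of reference joint are assumptions for illustration only.

```python
import numpy as np

N_JOINTS = 25  # joints per frame in a typical pose-estimator output (assumption)
REF_JOINT = 1  # reference joint (e.g. neck) used for normalization (assumption)

def normalize_frames(frames):
    """frames: (T, N_JOINTS, 2) array of (x, y) keypoints per video frame.
    Translate each frame so the reference joint sits at the origin, then
    scale by the largest coordinate magnitude, removing the signer's
    absolute position and size from the features."""
    centered = frames - frames[:, REF_JOINT:REF_JOINT + 1, :]
    scale = np.abs(centered).max() or 1.0
    return centered / scale

def video_descriptor(frames):
    """Collapse a variable-length keypoint sequence into a fixed-size
    descriptor: mean and standard deviation of each joint coordinate."""
    norm = normalize_frames(np.asarray(frames, dtype=float))
    return np.concatenate([norm.mean(axis=0).ravel(), norm.std(axis=0).ravel()])

def fit_centroids(videos, labels):
    """Compute one centroid per sign class in descriptor space."""
    descs = np.stack([video_descriptor(v) for v in videos])
    classes = sorted(set(labels))
    return {c: descs[[l == c for l in labels]].mean(axis=0) for c in classes}

def predict(centroids, video):
    """Assign a video to the sign class with the nearest centroid."""
    d = video_descriptor(video)
    return min(centroids, key=lambda c: np.linalg.norm(centroids[c] - d))
```

Because the descriptor is built purely from skeletal coordinates, the classifier never sees raw pixels, which is the property the abstract highlights: the discriminative signal comes from the extracted skeleton rather than from any extra capture hardware.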