A Deep Learning Approach for Analyzing Video and Skeletal Features in Sign Language Recognition

Authors
D. Konstantinidis
K. Dimitropoulos
P. Daras
Year
2018
Venue
IEEE International Conference on Imaging Systems and Techniques, Krakow, Poland, 16-18 October 2018.

Abstract

Sign language recognition (SLR) refers to the classification of signs with a specific meaning performed by deaf and hearing-impaired people in their everyday communication. In this work, we propose a deep learning-based framework in which we examine and analyze the contribution of video (image and optical flow) and skeletal (body, hand and face) features to the challenging task of isolated SLR, where each signed video corresponds to a single word. Moreover, we employ various fusion schemes in order to identify the optimal way to combine the information obtained from the various feature representations and to propose a robust SLR methodology. Our experiments on two sign language datasets and comparisons with state-of-the-art SLR methods reveal the superiority of optimally combining skeletal and video features for SLR tasks.
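
The abstract names the feature streams (video and skeletal) and mentions fusion schemes without giving implementation details. As a concrete illustration only, the following is a minimal PyTorch sketch of one plausible late-fusion scheme, in which separate encoders embed each stream and a linear head classifies the isolated sign. All module names, dimensions, and the concatenation-based fusion are assumptions for illustration, not the paper's actual architecture.

    # Minimal sketch of a two-stream late-fusion classifier for isolated SLR.
    # Dimensions, names, and concatenation-based fusion are assumptions;
    # the paper's exact architecture is not specified in the abstract.
    import torch
    import torch.nn as nn

    class StreamEncoder(nn.Module):
        """Encodes one feature stream (e.g. video or skeletal) into a fixed vector."""
        def __init__(self, in_dim: int, hidden_dim: int = 256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim),
                nn.ReLU(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    class LateFusionSLR(nn.Module):
        """Concatenates per-stream embeddings and classifies the signed word."""
        def __init__(self, video_dim: int, skeletal_dim: int,
                     num_classes: int, hidden_dim: int = 256):
            super().__init__()
            self.video_enc = StreamEncoder(video_dim, hidden_dim)
            self.skeletal_enc = StreamEncoder(skeletal_dim, hidden_dim)
            self.classifier = nn.Linear(2 * hidden_dim, num_classes)

        def forward(self, video_feats, skeletal_feats):
            fused = torch.cat([self.video_enc(video_feats),
                               self.skeletal_enc(skeletal_feats)], dim=-1)
            return self.classifier(fused)

    # Usage with random stand-in features (batch of 4 signed videos).
    model = LateFusionSLR(video_dim=512, skeletal_dim=128, num_classes=100)
    logits = model(torch.randn(4, 512), torch.randn(4, 128))
    print(logits.shape)  # torch.Size([4, 100])

Late fusion (concatenating per-stream embeddings before the classifier) is only one of the fusion schemes the abstract alludes to; early fusion of raw features or score-level averaging of per-stream predictions would follow the same two-encoder structure.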