Authors: N. Adaloglou, T. Chatzis, I. Papastratis, A. Stergioulas, G. T. Papadopoulos, V. Zacharopoulou, G. J. Xydopoulos, K. Atzakas, D. Papazachariou, P. Daras

Year: 2021

Venue: IEEE Transactions on Multimedia
In this paper, a comparative experimental assessment of computer vision-based methods for sign language recognition is conducted. By implementing the most recent deep neural network methods in this field, a thorough evaluation on multiple publicly available datasets is performed. The aim of the present study is to provide insights into sign language recognition, focusing on mapping non-segmented video streams to glosses. For this task, two new sequence training criteria, known from the fields of speech and scene text recognition, are introduced. Furthermore, a plethora of pretraining schemes is thoroughly discussed. Finally, a new RGB+D dataset for the Greek sign language is created. To the best of our knowledge, this is the first sign language dataset where three annotation levels (individual gloss, sentence, and spoken language) are provided for the same set of video captures.
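Mapping non-segmented video streams to gloss sequences is a weakly supervised problem: the gloss order is known, but frame-level alignments are not. Training objectives in this setting, including the two criteria the paper introduces, typically build on Connectionist Temporal Classification (CTC), which marginalizes over all feasible alignments. The sketch below illustrates this setup with plain CTC in PyTorch; the toy encoder, tensor shapes, and vocabulary size are placeholder assumptions for illustration, not the models or criteria evaluated in the paper.

```python
import torch
import torch.nn as nn

# Illustrative CTC-style training step for continuous sign language
# recognition: per-frame gloss scores are aligned to an unsegmented
# gloss sequence. All numbers and modules below are hypothetical.

T, N, C = 100, 2, 311   # frames, batch size, gloss vocabulary (blank at index 0)
feat_dim = 512          # assumed dimensionality of per-frame video features

# Stand-in frame-level encoder; in practice a 2D/3D CNN plus a temporal
# module (e.g. a BLSTM) would produce these scores from raw video.
encoder = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, C))

frames = torch.randn(T, N, feat_dim)             # stand-in video features
log_probs = encoder(frames).log_softmax(dim=-1)  # (T, N, C), as CTCLoss expects

# Unsegmented gloss targets: only the ordered gloss labels are given,
# with no frame-level segmentation (indices 1..C-1; 0 is the blank).
targets = torch.randint(1, C, (N, 12), dtype=torch.long)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 12, dtype=torch.long)

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()  # gradients are summed over all feasible alignments
print(loss.item())
```

The key design point this captures is that no per-frame labels are required: the loss itself handles the alignment between the long frame sequence and the much shorter gloss sequence, which is what makes recognition from non-segmented video streams tractable.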