Authors: S. Asteriadis, A. Chatzitofis, D. Zarpalas, D. Alexiadis, P. Daras
Year: 2013
Venue: Mirage 2013, 6th International Conference on Computer Vision / Computer Graphics Collaboration Techniques and Applications, June 6–7, 2013, Berlin, Germany
Human motion estimation is a topic that has received considerable attention in recent decades. A vast range of applications employ human motion tracking, and the industry continuously offers novel motion tracking systems that open new paths compared to the traditionally used passive cameras. Motion tracking algorithms, in their general form, estimate the skeletal structure of the human body and model it as a set of joints and limbs. However, human motion tracking systems usually work on a single-sensor basis, hypothesizing about occluded parts. We hereby present a methodology for fusing information from multiple sensors (Microsoft Kinect sensors were utilized in this work), based on a series of factors, that can alleviate the problems of occlusion and noisy estimates of 3D joint positions.
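The core idea, fusing per-joint 3D estimates from several sensors according to reliability factors, can be illustrated with a minimal confidence-weighted averaging sketch. This is an assumption-laden illustration, not the paper's actual algorithm: the function name, the simple weighted mean, and the example confidence values are all hypothetical, and the paper's specific fusion factors are not reproduced here.

```python
import numpy as np

def fuse_joint_positions(joint_estimates, confidences):
    """Fuse 3D estimates of one joint observed by multiple sensors.

    joint_estimates: (num_sensors, 3) array of 3D joint positions, assumed
        to be already registered into a common world coordinate frame.
    confidences: (num_sensors,) array of non-negative reliability weights
        (e.g. low for occluded or noisy views).

    Returns the confidence-weighted average position, or None if no sensor
    provides a usable estimate.
    """
    joint_estimates = np.asarray(joint_estimates, dtype=float)
    confidences = np.asarray(confidences, dtype=float)

    total = confidences.sum()
    if total <= 0.0:
        return None  # joint unreliable in every view
    weights = confidences / total
    return weights @ joint_estimates


# Example: three Kinects observe the same joint; the second view is occluded.
estimates = [[0.10, 1.20, 2.00],
             [0.50, 1.50, 2.30],   # occluded -> near-zero confidence
             [0.12, 1.18, 2.05]]
confs = [0.9, 0.05, 0.8]
print(fuse_joint_positions(estimates, confs))
```

In such a scheme the occluded view contributes almost nothing to the fused position, which is how multi-sensor fusion can compensate for occlusions and noisy single-sensor estimates.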