Dr. Zarpalas and Dr. Daras are Guest Editors of Sensors Special Issue "Sensors Fusion for Human-Centric 3D Capturing"

July 15, 2019, 1:37 p.m.

This Special Issue aims to capture recent advances in 3D capture technology that fuse data from multiple sensors (cameras, inertial, infrared, or depth sensors) to produce high-quality 3D human representations (i.e., 3D motion, shape, appearance, performance, and activity). It invites contributions addressing multi-sensor and multi-modal information fusion for capturing humans in 3D, and surveys the current and emerging state of human-capture technologies such as 3D reconstruction, motion, and action recognition. Submitted papers should clearly demonstrate novel contributions and innovative applications covering, but not limited to, any of the following topics around 3D human capturing using multiple sensor modalities:

  • Multi-modal data fusion;
  • Multi-sensor alignment;
  • Sensor data denoising and completion;
  • Multi-modal learning for sensor domain invariant representations;
  • Cross-modality transfer learning;
  • Self-supervised multi-modal learning;
  • Multi-sensor and multi-modal capturing systems;
  • Multi-modal dynamic scene capturing;
  • Open source frameworks and libraries for working with multi-modal sensors;
  • Multi-modal and multi-sensor applications (HCI, 3D capture for XR and/or free-viewpoint video, tele-presence, motion capture, real-time action recognition, simultaneous body, hands and face capture, non-rigid 3D reconstruction of humans, real-time calibration systems, and systems integrating multiple sensor types).


For more information regarding the topics of interest, the submission process, and the relevant deadlines, visit the Special Issue webpage.