Authors
A. Chatzitofis, L. Saroglou, P. Boutis, P. Drakoulis, N. Zioulis, S. Subramanyam, B. Kevelham, C. Charbonnier, P. Cesar, D. Zarpalas, S. Kollias, P. Daras
Year
2020
Venue
IEEE Access, vol. 8, pp. 176241-176262, 2020.
We introduce HUMAN4D, a large and multimodal 4D dataset that contains a variety of human activities simultaneously captured by a professional marker-based MoCap system, a volumetric capture system and an audio recording system. By capturing 2 female and 2 male professional actors performing various full-body movements and expressions, HUMAN4D provides a diverse set of motions and poses encountered as part of single- and multi-person daily, physical and social activities (jumping, dancing, etc.), along with multi-RGBD (mRGBD), volumetric and audio data. Despite the existence of multi-view color datasets captured with hardware (HW) synchronization, to the best of our knowledge, HUMAN4D is the first and only public resource that provides volumetric depth maps with high synchronization precision, owing to the use of intra- and inter-sensor HW-SYNC. Moreover, a spatio-temporally aligned scanned and rigged 3D character complements HUMAN4D to enable joint research on time-varying and high-quality dynamic meshes. We provide evaluation baselines by benchmarking HUMAN4D with state-of-the-art human pose estimation and 3D compression methods. We apply OpenPose and AlphaPose, reaching 70.02% and 82.95% mAP (PCKh-0.5) on single-person and 68.48% and 73.94% mAP (PCKh-0.5) on two-person 2D pose estimation, respectively. In 3D pose, a recent multi-view approach, Learnable Triangulation, achieves 80.26% mAP (PCK3D-10cm). For 3D compression, we benchmark the open-source 3D codecs Draco, Corto and CWIPC with respect to online encoding and steady bit-rates between 7-155 Mbps and 2-90 Mbps for mesh- and point-based volumetric video, respectively. Qualitative and quantitative visual comparisons between mesh-based volumetric data reconstructed at different qualities and the captured RGB footage showcase the available options with respect to 4D representations. HUMAN4D is introduced to enable joint research on spatio-temporally aligned pose, volumetric, mRGBD and audio data cues. The dataset and its code are available online.
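The 2D pose baselines above are reported as mAP under the PCKh-0.5 criterion, i.e., a predicted joint counts as correct when it lies within half the head-segment length of its ground-truth location. The sketch below is only a minimal illustration of that criterion, not the dataset's official evaluation script; the function names, array shapes and the dummy data are assumptions for demonstration.

```python
import numpy as np

def pckh_correct(pred, gt, head_size, alpha=0.5):
    """Hypothetical PCKh check: a predicted 2D joint is counted as correct
    when its pixel distance to the ground-truth joint is below
    alpha * head segment length (alpha = 0.5 for PCKh-0.5).

    pred, gt  : (J, 2) arrays of 2D joint coordinates in pixels
    head_size : scalar length of the head segment for this person/frame
    """
    dists = np.linalg.norm(pred - gt, axis=1)   # per-joint pixel error
    return dists <= alpha * head_size           # boolean correctness per joint

def pckh_score(preds, gts, head_sizes, alpha=0.5):
    """Fraction of correctly localized joints over a set of frames."""
    correct = [pckh_correct(p, g, h, alpha)
               for p, g, h in zip(preds, gts, head_sizes)]
    return float(np.mean(np.concatenate(correct)))

if __name__ == "__main__":
    # Dummy example: 17 joints, two frames, predictions perturbed by noise.
    rng = np.random.default_rng(0)
    gts = [rng.uniform(0, 512, size=(17, 2)) for _ in range(2)]
    preds = [g + rng.normal(scale=5.0, size=g.shape) for g in gts]
    print(pckh_score(preds, gts, head_sizes=[60.0, 60.0]))
```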