Authors: D. Alexiadis, D. Zarpalas, P. Daras
Year: 2013
Venue: IEEE Transactions on Multimedia, Vol. 15, No. 2, pp. 339-358, February 2013. IEEE Distinguished Paper; IEEE MMTC R-Letter, Vol. 4, No. 3, June 2013.
The problem of robust, realistic and especially fast 3-D reconstruction of objects, although extensively studied, remains a challenging research task. Most state-of-the-art approaches that target real-time applications, such as immersive reality, mainly address the problem of synthesizing intermediate views for given viewpoints, rather than generating a single complete 3-D surface. In this paper, we present a multiple-Kinect capturing system and a novel methodology for the creation of accurate, realistic, full 3-D reconstructions of moving foreground objects, e.g., humans, to be exploited in real-time applications. The proposed method generates multiple textured meshes from multiple RGB-Depth streams, applies a coarse-to-fine registration algorithm and finally merges the separate meshes into a single 3-D surface. Although the Kinect sensor has attracted the attention of many researchers and home enthusiasts and has already appeared in many applications over the Internet, none of the previously presented works can produce full 3-D models of moving objects from multiple Kinect streams in real time. We present the capturing setup, the methodology for its calibration and the details of the proposed algorithm for the real-time fusion of multiple meshes. The presented experimental results verify the effectiveness of the approach with respect to 3-D reconstruction quality, as well as the achieved frame rates.
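As a rough illustration of the pipeline described above (per-sensor point generation, coarse-to-fine registration, merging into a single surface), the following Python sketch fuses one synchronized frame from multiple depth sensors. It is not the paper's implementation: it relies on the Open3D library, which postdates the paper; it assumes precomputed per-camera intrinsics and coarse extrinsic calibration; it stands in point-to-plane ICP for the fine registration step; and it substitutes Poisson surface reconstruction for the paper's mesh-merging stage. The function name and parameter values are hypothetical.

```python
# Illustrative sketch only -- ICP and Poisson reconstruction stand in for
# the paper's coarse-to-fine registration and mesh-merging stages.
import open3d as o3d

def reconstruct_frame(depth_images, intrinsics, coarse_extrinsics):
    """Fuse one synchronized frame from multiple depth cameras.

    depth_images      -- list of o3d.geometry.Image depth maps, one per sensor
    intrinsics        -- list of o3d.camera.PinholeCameraIntrinsic objects
    coarse_extrinsics -- list of 4x4 world-to-camera matrices from calibration
    """
    clouds = []
    for depth, intr, extr in zip(depth_images, intrinsics, coarse_extrinsics):
        # Back-project each depth map into a point cloud, placed in a common
        # world frame by the coarse calibration (the "coarse" step).
        pcd = o3d.geometry.PointCloud.create_from_depth_image(
            depth, intr, extrinsic=extr, depth_scale=1000.0, depth_trunc=4.0)
        pcd.estimate_normals()
        clouds.append(pcd)

    # "Fine" step: refine each cloud against the first sensor's cloud
    # with point-to-plane ICP (an assumed stand-in for the paper's method).
    reference = clouds[0]
    for pcd in clouds[1:]:
        result = o3d.pipelines.registration.registration_icp(
            pcd, reference, max_correspondence_distance=0.02,
            estimation_method=o3d.pipelines.registration
                                 .TransformationEstimationPointToPlane())
        pcd.transform(result.transformation)

    # Merging step: concatenate the aligned clouds and extract a single
    # surface (Poisson here, in place of the paper's mesh-fusion algorithm).
    merged = clouds[0]
    for pcd in clouds[1:]:
        merged += pcd
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        merged, depth=8)
    return mesh
```

In the paper's setting, the coarse extrinsics would come from the described multi-Kinect calibration procedure, and this per-frame loop would have to complete within the real-time budget, which is precisely where the paper's dedicated mesh-fusion algorithm differs from the generic stand-ins used above.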