Due to recent technological advancements in digital photography, users can capture high-resolution videos and images with low-cost digital cameras and mobile devices. The ease of capturing and the redundancy of visual data offer new opportunities for exploiting user-generated data for multi-view reconstruction of large-scale scenes. CERTH explores ways of generating photorealistic models from uncalibrated user-generated visual data. The first step of 3D reconstruction is to estimate the camera positions using state-of-the-art structure-from-motion methods. The next step is to generate dense, accurate depth maps from stereo image pairs. Finally, the depth maps are fused to generate the final 3D reconstruction.
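A core building block of the structure-from-motion step described above is triangulation: once two camera poses are known, each 2D correspondence is lifted to a 3D point. The following is a minimal numpy-only sketch of linear (DLT) triangulation under assumed toy values (identity intrinsics, a second camera translated one unit along the x axis); the function and variable names are illustrative, not part of CERTH's implementation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image coordinates."""
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right-singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    """Project a 3D point with a 3x4 projection matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy two-camera setup (assumed values): identity intrinsics,
# second camera shifted 1 unit along x (a simple stereo baseline).
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 5.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free correspondences the linear system has an exact null vector, so the recovered point matches the ground truth; in practice the SVD solution is used as an initialization for nonlinear refinement.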
Though mature, the task of generating depth maps from stereo image pairs via disparity estimation remains challenging, since there is still room to improve how methods handle the depth discontinuities and occlusions that occur in the images. Time efficiency and accuracy can also be further improved.
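To make the disparity-estimation step concrete, here is a minimal numpy-only sketch of naive SAD block matching on a rectified pair, run on synthetic toy data with a known ground-truth shift. This is a deliberately simple baseline for illustration, not CERTH's method; the function name and parameters are assumptions.

```python
import numpy as np

def block_matching_disparity(left, right, max_disp=8, block=3):
    """Naive block matching: for each pixel of the rectified left image,
    search leftward in the right image for the block with the smallest
    sum of absolute differences (SAD); the shift is the disparity."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(ref.astype(int) - cand.astype(int)).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic rectified pair: the left view is the right view shifted by
# 3 pixels, so the true disparity is 3 over the interior (toy data).
np.random.seed(0)
right = np.random.randint(0, 256, (12, 20))
left = np.roll(right, 3, axis=1)
disp = block_matching_disparity(left, right, max_disp=6)
```

This baseline illustrates exactly the weaknesses mentioned above: near depth discontinuities a block straddles two depths, and occluded pixels have no valid match, which is why more sophisticated aggregation and occlusion handling are needed.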
So far, experimental results have shown that CERTH’s approach to depth-map extraction is competitive with existing methods.
An advanced framework for immersive media capturing, representation, encoding and semi‐automated collaborative content production.
D. Alexiadis, G. Kordelas, K. Apostolakis, J. Agapito, J. Vegas, E. Izquierdo, P. Daras, “Reconstruction for 3D Immersive Virtual Worlds”, WIAMIS 2012: The 13th International Workshop on Image Analysis for Multimedia Interactive Services, 23rd–25th May 2012, Dublin City University, Ireland