Immersive environments

This work showcases the bulk of our research towards immersive user interaction within a networked virtual environment. Presented in the context of an online collaborative virtual dance masterclass application, the milestones achieved towards seamlessly blending real and virtual information include the automatic, real-time reconstruction of users within the shared virtual scene, virtual character embodiment and full-body control via skeleton tracking, and environmental reaction and feedback to user interventions in the form of a dance routine evaluation process. In addition, real-time rendering of 3D content with standard web 3D graphics technology, capturing and recording of RGB-D data over the web, auto-stereoscopic post-processing effects for glasses-free 3D on specialized displays, and spatial 3D sound have been addressed as part of this research. With the integrated demo developed for the 3DLife project, we demonstrate how research in these fields has matured and, combined with consumer-grade hardware such as the Microsoft Kinect sensor, can enable widespread deployment of web-based 3D Tele-Immersion systems.
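To give a flavour of the web-rendering and spatial-sound side, the sketch below shows how a shared 3D scene with a positional sound source attached to a virtual character can be set up in the browser. It assumes Three.js (TypeScript) as the WebGL layer; the placeholder avatar geometry and the 'loop.ogg' clip are illustrative assumptions, not the project's actual code or assets.

// Minimal sketch: browser-side 3D rendering plus spatial audio (Three.js, TypeScript).
// Assumptions: Three.js is available as a module; 'loop.ogg' is a placeholder clip URL.
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.set(0, 1.6, 3);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Stand-in for a reconstructed user / skeleton-driven virtual character.
const avatar = new THREE.Mesh(
  new THREE.CylinderGeometry(0.3, 0.3, 1.7, 16),
  new THREE.MeshStandardMaterial({ color: 0x4488ff })
);
avatar.position.set(0, 0.85, 0);
scene.add(avatar);
scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 1.0));

// Spatial 3D sound: a positional source that is localized at the avatar,
// heard through a listener attached to the viewer's camera.
const listener = new THREE.AudioListener();
camera.add(listener);
const sound = new THREE.PositionalAudio(listener);
new THREE.AudioLoader().load('loop.ogg', (buffer) => {
  sound.setBuffer(buffer);
  sound.setRefDistance(1);
  sound.setLoop(true);
  sound.play();
});
avatar.add(sound);

function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
}
animate();

In the actual application the placeholder cylinder would be replaced by the reconstructed or rigged character mesh, and its pose would be updated each frame from the Kinect skeleton-tracking stream.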

Demonstrations

Visit our live application

http://vcl.iti.gr/3DLife

Relevant Projects

3DLife
Bringing the Media Internet to Life


Relevant Publications

T. Semertzidis, P. Daras, P. Moore, L. Makris, M. G. Strintzis, "Automatic creation of 3D environments from a single sketch using Content Centric Networks", IEEE Communications Magazine, vol. 49, no. 3, pp. 152-157, March 2011, doi: 10.1109/MCOM.2011.5723813

D. Alexiadis, G. Kordelas, K. Apostolakis, J. Agapito, J. Vegas, E. Izquierdo, P. Daras, "Reconstruction for 3D Immersive Virtual Worlds", WIAMIS 2012: The 13th International Workshop on Image Analysis for Multimedia Interactive Services, 23-25 May 2012, Dublin City University, Ireland

S. Essid, X. Lin, M. Gowing, G. Kordelas, A. Aksay, P. Kelly, T. Fillon, Q. Zhang, A. Dielmann, V. Kitanovski, R. Tournemenne, A. Masurelle, E. Izquierdo, N. E. O'Connor, P. Daras, G. Richard, "A multi-modal dance corpus for research into interaction between humans in virtual environments", Journal on Multimodal User Interfaces, Special Issue on Multimodal Corpora, Springer, Aug. 2012, doi: 10.1007/s12193-012-0109-5

S. Essid, X. Lin, M. Gowing, G. Kordelas, A. Aksay, P. Kelly, T. Fillon, Q. Zhang, A. Dielmann, V. Kitanovski, R. Tournemenne, N. E. O'Connor, P. Daras, G. Richard, "A multimodal dance corpus for research into real-time interaction between humans in online virtual environments", 13th International Conference on Multimodal Interaction (ICMI 2011), Alicante, Spain, 14-18 November 2011

T. Semertzidis, K. McGuinness, P. Daras, L. Makris, N. E. O'Connor, M. G. Strintzis, "Creation of virtual worlds from 3D models retrieved from content aware networks based on sketch and image queries", WIAMIS 2011, 13-15 April 2011, Delft, The Netherlands