Tele-immersion (TI) is an emerging Future Internet (FI) technology that supports realistic interpersonal communication, allowing remote users to share activities and interact within shared simulated environments. It lifts the restrictions imposed by geographical location at the fine line separating the real world from the virtual, offering an all-around immersive experience in which the actual 3D appearance of every peer is embedded.
Imagine a party of friends placed into a virtual environment simulating the Athenian Acropolis, taking an exciting guided tour through this cultural re-enactment, or a tele-learning scenario where an expert is tele-immersed in a simulated factory to teach a group of workers how to operate newly acquired machinery.
Given the freedom offered by virtual environments, the interaction possibilities are endless. Tele-immersion will change the way people communicate and interact, lift the barrier of physical presence, and open new pathways in industries such as education, healthcare, entertainment, broadcasting, and many others.
VCL’s vision for TI-related technologies dates back to 2011, and this experience has matured into original expertise combining rich scientific knowledge with technological know-how. This is evident in the first tele-immersion-based ski competition, held between an indoor user in Thessaloniki, Greece and an outdoor user in Schladming, Austria. Below is VCL’s most recent TI incarnation, targeting a mixed-reality game experience:
SpaceWars is a tele-immersive game in which two players are placed in the same virtual arena atop futuristic hovercrafts and engage each other in a Capture-the-Flag style match. Built on the latest version of the TI platform developed at VCL, SpaceWars was designed to stress-test the technology with respect to real-time interaction between remote users under the demanding responsiveness requirements of a multiplayer game.
VCL owns several depth sensors operating in a spatio-temporally aligned multi-view setup, using Kinect v2, Intel RealSense D400-series, and Azure Kinect DK devices. Depth sensors are the key to unlocking next-level computer vision applications, as depth sensing fundamentally changes how people interact with technology. In AR/VR, depth sensors can be used to sense real 3D environments and reconstruct them in the virtual world. By giving computers an entire extra dimension of data, depth sensing vastly expands what they are capable of, and depth sensing systems can be combined with existing computer vision applications to greatly enhance their performance and meet the requirements of real-life deployment.
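To illustrate how a depth frame yields 3D structure, the following is a minimal sketch of back-projecting a depth map to a point cloud using the standard pinhole camera model. The intrinsic values (fx, fy, cx, cy) are illustrative placeholders, not the calibration of any specific sensor mentioned above.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in metres) to an N x 3 point cloud
    using the pinhole camera model: X = (u - cx) * Z / fx, etc."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Drop invalid pixels, which depth sensors typically report as zero.
    return points[points[:, 2] > 0]

# Toy 4x4 depth frame, 2 m everywhere; intrinsics are illustrative only.
depth = np.full((4, 4), 2.0)
pts = depth_to_point_cloud(depth, fx=365.0, fy=365.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3)
```

In a real pipeline each sensor's factory or checkerboard calibration would supply the intrinsics, and the resulting clouds would then be filtered and fused.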
VCL operates two state-of-the-art 3D capturing stations for the live capture of persons and their reconstruction as textured 3D meshes. Each station comprises 4 depth cameras, 5 PCs, and assorted tripods and networking equipment. VCL also owns an Xsens MVN system, a flexible, camera-less, full-body wearable human motion capture solution that delivers high-quality motion capture data.
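In such a multi-camera station, each camera observes the subject in its own coordinate frame, so the per-camera point clouds must be brought into a common world frame via extrinsic calibration before a single mesh can be reconstructed. The sketch below shows this fusion step with toy rigid transforms; the rotation and translation values are purely illustrative, not an actual calibration.

```python
import numpy as np

def to_world(points, R, t):
    """Apply a rigid transform (rotation R, translation t) that maps
    camera-frame points into the shared world frame."""
    return points @ R.T + t

def fuse_views(clouds, extrinsics):
    """Concatenate per-camera point clouds after transforming each
    one into the common world frame."""
    return np.vstack([to_world(p, R, t)
                      for p, (R, t) in zip(clouds, extrinsics)])

# Two toy single-point "clouds" seeing the same physical point.
# Camera 0 is the world reference; camera 1 faces it from the
# opposite side: rotated 180 deg about Y, shifted 2 m along Z.
R0, t0 = np.eye(3), np.zeros(3)
R1 = np.diag([-1.0, 1.0, -1.0])           # 180 deg rotation about Y
t1 = np.array([0.0, 0.0, 2.0])
clouds = [np.array([[0.0, 0.0, 1.0]]), np.array([[0.0, 0.0, 1.0]])]
fused = fuse_views(clouds, [(R0, t0), (R1, t1)])
print(fused)  # both observations coincide at (0, 0, 1) in world space
```

After fusion, surface reconstruction and texture mapping produce the final textured mesh, as described in the publications below.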
P. Athanasoulis, E. Christakis, K. Konstantoudakis, P. Drakoulis, S. Rizou, A. Weitz, A. Doumanoglou, N. Zioulis, D. Zarpalas, “Optimizing QoE and Cost in a 3D Immersive Media Platform: A Reinforcement Learning Approach”, In International Conference on Advances in Multimedia (MMEDIA), Lisbon, Portugal, February 23-27, 2020.
D. Alexiadis, D. Zarpalas, P. Daras, “Real-time, Realistic Full-body 3D Reconstruction and Texture Mapping from Multiple Kinects”, 11th IEEE IVMSP Workshop: 3D Image/Video Technologies and Applications, Yonsei University, Seoul, Korea, 10-12 June 2013.
K. Christaki, K. Apostolakis, A. Doumanoglou, N. Zioulis, D. Zarpalas, P. Daras, “Space Wars: An AugmentedVR Game”, 25th International Conference on MultiMedia Modeling (MMM), Thessaloniki, Greece, January 8-11, 2019.
D. Alexiadis, D. Zarpalas, P. Daras, “Real-Time, Full 3-D Reconstruction of Moving Foreground Objects From Multiple Consumer Depth Cameras”, IEEE Transactions on Multimedia, July 2012.
D. Alexiadis, D. Zarpalas, P. Daras, “Fast and smooth 3D reconstruction using multiple RGB-Depth sensors”, IEEE International Conference on Visual Communications and Image Processing, VCIP 2014, Valletta, Malta.
P. Daras, D. Tzovaras, A. Mademlis, A. Axenopoulos, D. Zarpalas, M. G. Strintzis, “3D search and retrieval using Krawtchouk moments”, SHREC2006, 3D Shape Retrieval Contest, Technical Report UU-CS-2006-030, ISSN 0924-3275, June 2006.