Voice-driven interaction in XR spaces

Funding Organization
European Commission
Funding Programme
Horizon Europe
Funding Instrument
Research & Innovation Action
Starting Date
Oct. 1, 2022
Total budget
ITI budget
Scientific responsible
Dr. Dimitrios Zarpalas


VOXReality is an ambitious project whose goal is to facilitate and exploit the convergence of two important technologies: natural language processing (NLP) and computer vision (CV). Both are experiencing huge performance gains due to the emergence of data-driven methods, specifically machine learning (ML) and artificial intelligence (AI). On one hand, CV/ML are driving the extended reality (XR) revolution beyond what was previously possible; on the other, speech-based interfaces and text-based content understanding are revolutionizing human-machine and human-human interaction.

VOXReality will employ an economical approach to integrating language- and vision-based AI models, with either unidirectional or bidirectional exchanges between the two modalities. Vision systems drive both AR and VR, while language understanding offers humans a natural way to interact with the backends of XR systems or to create multi-modal XR experiences combining vision and sound. The results of the project will be twofold:

  • a set of pretrained next-generation XR models that combine language and vision AI at various levels and in various ways, enabling richer, more natural immersive experiences that are expected to boost XR adoption, and 
  • a set of applications using these models to demonstrate innovations in various sectors.

The above technologies will be validated through three use cases:

  1. Personal Assistants, an emerging type of digital technology that supports humans in their daily tasks, with core functionality centered on human-to-machine interaction;
  2. Virtual Conferences, which are hosted and run entirely online, typically on a virtual conferencing platform that sets up a shared virtual environment and lets attendees view or participate from anywhere in the world;
  3. Theaters, where VOXReality will combine language translation, audiovisual user associations, and AR VFX triggered by predetermined speech.

Similar Projects