Factory2Fit is an H2020-funded research and innovation project creating worker-centred solutions for the factories of the future. Mr. Kostas Apostolakis, from the Visual Computing Lab, demonstrates AR training solutions for workers of the PrimaPower factory in Kauhava using HoloLens! http://factory2fit.eu/ https://twitter.com/factory2fit_eu
Press release 12/1/18 – Improving sound experience and safety at large cultural events in the city
Press release 12/1/18 – Management of Networked IoT Wearables – Very Large Scale Demonstration of Cultural and Security Applications. Improving sound experience and safety at large cultural events in the city: the innovation project MONICA will demonstrate how cities can use the Internet of Things to deal with sound, noise and security challenges at big, open-air cultural events. A range of applications will be demonstrated in six major European cities, involving more than 100,000 users in total. Imagine sound…
RESEARCH – CREATE – INNOVATE
The Single RTDI State Aid Action “RESEARCH – CREATE – INNOVATE” support measure is funded by the Operational Programme Competitiveness, Entrepreneurship and Innovation 2014-2020 (EPAnEK). The measure aims to support research and innovation, technological development and demonstration in operating enterprises for the development of new or improved products, to foster synergies among enterprises, research and development centres and the higher education sector, and to support the patenting of research results and industrial property. In that context, the main objectives of…
Best Paper Award ICBHI 2017
This work presented a fall detection method based on Recurrent Neural Networks. It leverages the ability of recurrent networks to process sequential data, such as acceleration measurements from body-worn devices, and uses data augmentation in the form of random rotations of the input acceleration signal. The proposed method detected all but one fall event while producing no false alarms when tested on the publicly available URFD dataset. Proceedings/Precision Medicine Powered by pHealth…
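To make the two ingredients described above concrete, here is a minimal sketch of random-rotation augmentation for tri-axial acceleration windows and a small recurrent classifier over the resulting sequences. The window format, the use of a GRU and all layer sizes are assumptions chosen for illustration, not the configuration of the awarded paper.

```python
import numpy as np
import torch
import torch.nn as nn


def random_rotation_matrix():
    """Sample a random 3x3 rotation matrix via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(np.random.randn(3, 3))
    q = q * np.sign(np.diag(r))      # fix the sign ambiguity of the factorisation
    if np.linalg.det(q) < 0:         # ensure a proper rotation (det = +1)
        q[:, 0] *= -1
    return q


def augment(window):
    """Randomly rotate a (time, 3) acceleration window (data augmentation)."""
    return window @ random_rotation_matrix().T


class FallDetectorRNN(nn.Module):
    """Recurrent classifier over acceleration sequences (hypothetical sizes)."""

    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=3, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 2)          # fall / no-fall logits

    def forward(self, x):                       # x: (batch, time, 3)
        _, h = self.rnn(x)
        return self.fc(h[-1])                   # classify from the last hidden state


# Example usage on a dummy window of 200 accelerometer samples:
window = np.random.randn(200, 3)
x = torch.tensor(augment(window), dtype=torch.float32).unsqueeze(0)
logits = FallDetectorRNN()(x)                   # shape: (1, 2)
```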
The Visual Computing Lab of CERTH-ITI participated in the IEEE International Conference on Computer Vision (ICCV) 2017
The Visual Computing Lab of CERTH-ITI participated in the IEEE International Conference on Computer Vision (ICCV) 2017, held 22-29 October in Venice, Italy. The conference, the major international computer vision event, featured an exciting programme of 621 papers presenting the latest advances in the field. This year, ICCV attendance increased by 113%, reaching 3,107 attendees! Our work was presented as a poster paper entitled “Non-linear Convolution Filters for CNN-based Learning”, by G. Zoumpourlis, A. Doumanoglou, N. Vretos…
MaTHiSiS – educational scheme based on custom-made and adaptable learning goals and educational material
PRESS RELEASE – Release Date: 01/11/2017. MaTHiSiS – educational scheme based on custom-made and adaptable learning goals and educational material: a three-year EU co-funded project to create a ubiquitous e-learning ecosystem for mainstream and special education, industrial training and career guidance. MaTHiSiS is a 36-month project funded by the European Union under the H2020 work programme that will assist the educational process for learners, their tutors and caregivers by creating a novel and continuously adaptable “robot/machine/computer”-human interaction ecosystem. This system…
S. Thermos, G. T. Papadopoulos, P. Daras, G. Potamianos, “Deep Affordance-grounded Sensorimotor Object Recognition”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), Honolulu, Hawaii, USA, July 2017. Abstract: It is well established in cognitive neuroscience that human perception of objects constitutes a complex process, in which object appearance information is combined with evidence about the so-called object “affordances”, namely the types of actions that humans typically perform when interacting with them. This fact has recently motivated the “sensorimotor” approach to…
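As a rough illustration of the idea of combining appearance with affordance evidence, the sketch below fuses two feature streams by simple late concatenation. The stream encoders, input modalities and fusion scheme are assumptions chosen for brevity and do not reflect the architecture proposed in the CVPR 2017 paper.

```python
import torch
import torch.nn as nn


class TwoStreamRecognizer(nn.Module):
    """Generic appearance + affordance late-fusion classifier (illustrative only)."""

    def __init__(self, num_classes=10, feat_dim=128):
        super().__init__()
        # Appearance stream: encodes an RGB crop of the object.
        self.appearance = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim), nn.ReLU(),
        )
        # Affordance stream: encodes a hand-object interaction cue
        # (e.g. a single-channel motion or depth map).
        self.affordance = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim), nn.ReLU(),
        )
        # Late fusion: concatenate both feature vectors and classify.
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, rgb, interaction):
        fused = torch.cat([self.appearance(rgb), self.affordance(interaction)], dim=1)
        return self.classifier(fused)


# Example usage on dummy inputs:
model = TwoStreamRecognizer()
scores = model(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))  # (2, 10)
```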
Download the layer’s code here: VolterraConvolution.zip. The code is based on the following publication: G. Zoumpourlis, A. Doumanoglou, N. Vretos, P. Daras, “Non-linear Convolution Filters for CNN-based Learning”, IEEE International Conference on Computer Vision (ICCV 2017), Venice, Italy, October 22-29, 2017.
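For a feel of what a non-linear (Volterra-style) convolution does, the sketch below adds a quadratic term over each patch to the usual linear convolution. It is a simplified illustration of the general idea only: kernel size, parameterisation and initialisation are assumptions, and the actual layer should be taken from the VolterraConvolution.zip archive above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class QuadraticConv2d(nn.Module):
    """Convolution with an added second-order (quadratic) term per patch (illustrative)."""

    def __init__(self, in_ch, out_ch, k=3):     # k is assumed odd for simple same-padding
        super().__init__()
        self.k, self.out_ch = k, out_ch
        n = in_ch * k * k                        # number of elements in each patch
        self.w1 = nn.Parameter(torch.randn(out_ch, n) * 0.01)     # linear term weights
        self.w2 = nn.Parameter(torch.randn(out_ch, n, n) * 0.01)  # quadratic term weights
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x):                        # x: (B, in_ch, H, W)
        B, _, H, W = x.shape
        patches = F.unfold(x, self.k, padding=self.k // 2)        # (B, n, H*W)
        patches = patches.transpose(1, 2)                         # (B, H*W, n)
        linear = patches @ self.w1.t()                            # (B, H*W, out_ch)
        # Quadratic form p^T W2 p for every patch p and every output channel.
        quad = torch.einsum('bpn,onm,bpm->bpo', patches, self.w2, patches)
        out = (linear + quad + self.bias).transpose(1, 2)         # (B, out_ch, H*W)
        return out.reshape(B, self.out_ch, H, W)


# Example usage:
layer = QuadraticConv2d(3, 8)
y = layer(torch.randn(1, 3, 32, 32))             # (1, 8, 32, 32)
```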