Deep Affordance-grounded Sensorimotor Object Recognition (CVPR 2017, Honolulu)

S. Thermos, G. T. Papadopoulos, P. Daras, G. Potamianos, “Deep Affordance-grounded Sensorimotor Object Recognition”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), Honolulu, Hawaii, USA, July 2017.


It is well established in cognitive neuroscience that human perception of objects is a complex process, in which object appearance information is combined with evidence about the object's so-called “affordances”, namely the types of actions that humans typically perform when interacting with it. This fact has recently motivated the “sensorimotor” approach to the challenging task of automatic object recognition, in which both information sources are fused to improve robustness. In this work, the aforementioned paradigm is adopted, surpassing current limitations of sensorimotor object recognition research. Specifically, the deep learning paradigm is introduced to the problem for the first time, and a number of novel, neuro-biologically and neuro-physiologically inspired architectures are developed that utilize state-of-the-art neural networks to fuse the available information sources in multiple ways. The proposed methods are evaluated on a large RGB-D corpus, collected specifically for the task of sensorimotor object recognition and made publicly available. Experimental results demonstrate the utility of affordance information for object recognition, with its inclusion achieving up to a 29% relative error reduction.
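The core fusion idea can be illustrated schematically as a two-stream network with late fusion: one stream processes object appearance features, the other affordance (interaction) features, and their outputs are concatenated before a classifier. The sketch below is a minimal NumPy illustration under assumed dimensions and with simple linear/ReLU streams standing in for deep sub-networks; it is not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative assumptions, not from the paper):
# appearance feature size, affordance feature size, hidden size, classes.
D_APP, D_AFF, D_HID, N_CLASSES = 64, 32, 16, 10

# Randomly initialized weights stand in for trained sub-networks.
w_app = rng.standard_normal((D_APP, D_HID))
w_aff = rng.standard_normal((D_AFF, D_HID))
w_cls = rng.standard_normal((2 * D_HID, N_CLASSES))

def stream(x, w):
    """One modality 'stream': linear projection + ReLU,
    a stand-in for a deep per-modality sub-network."""
    return np.maximum(x @ w, 0.0)

def fuse_and_classify(x_app, x_aff):
    """Late fusion: encode each modality separately, concatenate the
    two feature vectors, then apply a shared linear classifier."""
    fused = np.concatenate([stream(x_app, w_app),
                            stream(x_aff, w_aff)], axis=-1)
    logits = fused @ w_cls
    return int(np.argmax(logits))

# Example: classify one (appearance, affordance) feature pair.
pred = fuse_and_classify(rng.standard_normal(D_APP),
                         rng.standard_normal(D_AFF))
```

Other fusion points are possible (e.g. fusing earlier, at intermediate feature maps); the late-fusion variant above is simply the easiest to sketch in a few lines.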

Full document available here.

Visual Computing Lab

The focus of the Visual Computing Laboratory is to develop new algorithms and architectures for applications in the areas of 3D processing, image/video processing, computer vision, pattern recognition, bioinformatics and medical imaging.

Contact Information

Dr. Petros Daras, Principal Researcher Grade Α
1st km Thermi – Panorama, 57001, Thessaloniki, Greece
P.O. Box: 60361
Tel.: +30 2310 464160 (ext. 156)
Fax: +30 2310 464164