Authors: S. Thermos, G. T. Papadopoulos, P. Daras, G. Potamianos
Year: 2018
Venue: IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7-10 October 2018
Sensorimotor learning, namely the process of understanding the physical world by combining visual and motor information, has recently been investigated, achieving promising results for the task of 2D/3D object recognition. Following the recent trend in computer vision, powerful deep neural networks (NNs) have been used to model the "sensory" and "motor" information, namely the object appearance and affordance. However, existing implementations cannot efficiently address the spatio-temporal nature of human-object interaction. Inspired by recent work on attention-based learning, this paper introduces an attention-enhanced NN-based model that learns to selectively focus on parts of the physical interaction where the object appearance is corrupted by occlusions and deformations. The model's attention mechanism relies on the confidence of classifying an object based solely on its appearance. Three metrics are used to measure the latter, namely the prediction entropy, the average N-best likelihood difference, and the N-best likelihood dispersion. Evaluation of the attention-enhanced model on the SOR3D dataset shows 33% and 26% relative improvements over the appearance-only and the spatio-temporal fusion baseline models, respectively.
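As a rough illustration of the three confidence measures named above, the sketch below computes them from a softmax output in Python. The exact formulations used in the paper may differ; the helper name `confidence_metrics`, the choice of standard deviation for the dispersion term, and the value of `n_best` are assumptions made here for the example.

```python
import numpy as np

def confidence_metrics(probs, n_best=5):
    """Sketch of three appearance-classifier confidence measures
    (prediction entropy, average N-best likelihood difference,
    N-best likelihood dispersion), computed from a softmax output.
    These are illustrative definitions, not the paper's exact ones."""
    probs = np.asarray(probs, dtype=np.float64)
    probs = probs / probs.sum()                      # ensure a valid distribution

    # 1) Prediction entropy: high entropy -> low confidence.
    entropy = -np.sum(probs * np.log(probs + 1e-12))

    # 2) Average N-best likelihood difference: mean gap between the top
    #    likelihood and the next (n_best - 1) ones; a small gap -> low confidence.
    top = np.sort(probs)[::-1][:n_best]
    avg_diff = np.mean(top[0] - top[1:])

    # 3) N-best likelihood dispersion: spread (here, standard deviation) of the
    #    top-N likelihoods; low dispersion -> the classifier is undecided.
    dispersion = np.std(top)

    return entropy, avg_diff, dispersion

# Example: a fairly uncertain prediction over 10 object classes.
print(confidence_metrics([0.22, 0.18, 0.15, 0.12, 0.10, 0.08, 0.06, 0.04, 0.03, 0.02]))
```

Such scores could then gate how strongly the model attends to the appearance stream versus the motor (affordance) stream at each time step of the interaction.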