Authors
D. Konstantinidis
K. Dimitropoulos
I. Ioakimidis
B. Langlet
P. Daras
Year
2019
Venue
In Proceedings of the 12th International Conference on Computer Vision Systems (ICVS), Thessaloniki, Greece, September 23-25, 2019.
Past research has provided compelling evidence of correlations between individual eating styles and the development of (un)healthy eating patterns, obesity and other medical conditions. In this setting, an automatic, non-invasive food bite detection system can be a valuable tool in the hands of nutritionists, dietary experts and medical doctors for exploring real-life eating behaviors and dietary habits. Unfortunately, the automatic detection of food bites can be challenging due to occlusions between hands and mouth, the use of different kitchen utensils and personalized eating habits. On the other hand, although accurate, manual bite detection is time-consuming for the annotator, making it infeasible for large-scale experimental deployments or real-life settings. To this end, we propose a novel deep learning methodology that relies solely on human body and face motion data extracted from videos depicting people eating meals. The purpose is to develop a system that can accurately, robustly and automatically identify food bite instances, with the long-term goal of complementing or even replacing the manual bite-annotation protocols currently in use. The experimental results on a large dataset reveal the superb classification performance of the proposed methodology on the task of bite detection and pave the way for additional research on automatic bite detection systems.
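To make the general idea concrete, the sketch below shows one possible way to frame bite detection as sequence classification over per-frame body and face keypoints extracted from video. This is a hypothetical illustration, not the architecture described in the paper: the bidirectional LSTM, the keypoint count and the clip length are all assumptions.

```python
import torch
import torch.nn as nn


class BiteClassifier(nn.Module):
    """Illustrative sequence classifier over per-frame body/face keypoints.

    Assumption: each video clip is represented by the flattened (x, y)
    coordinates of tracked keypoints per frame; the paper's actual network
    and feature set may differ.
    """

    def __init__(self, num_keypoints=88, hidden_size=128):
        super().__init__()
        # Temporal model over the motion features of the clip.
        self.lstm = nn.LSTM(
            input_size=num_keypoints * 2,
            hidden_size=hidden_size,
            num_layers=2,
            batch_first=True,
            bidirectional=True,
        )
        # Binary output: does this clip contain a bite or not.
        self.head = nn.Linear(2 * hidden_size, 1)

    def forward(self, x):
        # x: (batch, frames, num_keypoints * 2) keypoint coordinates.
        _, (h_n, _) = self.lstm(x)
        # Concatenate the final forward and backward hidden states.
        feat = torch.cat([h_n[-2], h_n[-1]], dim=-1)
        return self.head(feat)  # raw logit; apply sigmoid for a probability


if __name__ == "__main__":
    model = BiteClassifier()
    clips = torch.randn(4, 150, 88 * 2)  # 4 clips of 150 frames each
    print(model(clips).shape)            # torch.Size([4, 1])
```

In such a setup, a long meal recording would be split into short overlapping windows and each window scored for the presence of a bite, which is one common way to turn frame-level motion data into discrete bite events.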