Authors: A. Psaltis, C. Chatzikonstantinou, C. Z. Patrikakis, P. Daras
Year: 2023
Venue: Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops
This work proposes a holistic approach to catastrophic forgetting in computer vision models trained under incremental learning. Specifically, it outlines a series of steps for effectively training models in distributed environments: extracting meaningful representations, modeling them into transferable knowledge, and propagating that knowledge through a continual distillation mechanism. It further introduces a federated learning algorithm tailored to the problem that eliminates the need for central model transfer, combining multi-scale representation learning with a knowledge distillation technique. Finally, it adapts a contrastive learning technique that combines current knowledge with previous model states, aiming to preserve previously learned knowledge while incorporating new knowledge. Thorough experimentation provides a comprehensive analysis of the problem, showing that the proposed method achieves strong results in a federated environment with reduced communication cost, along with robust performance in highly distributed incremental scenarios.
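As a rough illustration of the general idea, the PyTorch sketch below pairs a distillation loss against a frozen snapshot of the previous model with a contrastive term that aligns current embeddings to the previous model's embeddings of the same samples. This is a minimal sketch of the generic technique, not the paper's implementation; the model interface (a model returning both logits and an embedding) and all names and hyperparameters (`prev_model`, `curr_model`, `T`, `tau`, `lambda_kd`, `lambda_con`) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T: float = 2.0):
    # Hinton-style distillation: KL divergence between
    # temperature-softened output distributions.
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

def contrastive_past_loss(z_curr, z_prev, tau: float = 0.1):
    # InfoNCE: pull each current embedding toward the previous model's
    # embedding of the same sample; push it away from other samples.
    z_curr = F.normalize(z_curr, dim=-1)
    z_prev = F.normalize(z_prev, dim=-1)
    logits = z_curr @ z_prev.t() / tau                 # (B, B) similarity matrix
    targets = torch.arange(z_curr.size(0), device=z_curr.device)
    return F.cross_entropy(logits, targets)

def continual_step(curr_model, prev_model, x, y,
                   lambda_kd: float = 1.0, lambda_con: float = 0.5):
    # Assumes each model returns (logits, embedding); prev_model is a
    # frozen copy taken before training on the new increment.
    logits, z = curr_model(x)
    with torch.no_grad():
        prev_logits, z_prev = prev_model(x)
    ce = F.cross_entropy(logits, y)                    # learn the new task
    kd = kd_loss(logits, prev_logits)                  # preserve old predictions
    con = contrastive_past_loss(z, z_prev)             # preserve old representations
    return ce + lambda_kd * kd + lambda_con * con
```

In a federated incremental setting, a loss of this shape would be computed locally at each client, so only model updates (rather than data or a central teacher) need to be exchanged; the weighting between the cross-entropy, distillation, and contrastive terms governs the stability-plasticity trade-off.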