FedRAL: Cost-Effective Distributed Annotation via Federated Reinforced Active Learning

Authors
Y. Lazaridis
A. Kastellos
A. Psaltis
P. Daras
Year
2024
Venue
Washington, USA

Abstract

This paper addresses the challenge of reducing annotation costs in distributed learning environments, particularly in systems with limited data and computational resources such as edge devices. We propose Federated Reinforced Active Learning (FedRAL), a framework that integrates Federated Learning with Reinforced Active Learning to optimize data labeling under strict cost constraints. The method is designed for small-scale networks where data is scarce and only a few training epochs are available. By embedding reinforcement learning in the active-learning loop, the system selects the most informative data samples, enabling efficient training while significantly reducing the need for extensive annotation. The approach is particularly suited to settings where minimizing both annotation and computational costs is critical. The proposed method is evaluated on the CIFAR-10 and CIFAR-100 datasets using ResNet18, with 5 and 10 clients. Results demonstrate that the method significantly reduces annotation costs and improves learning outcomes, making it well suited to cost-sensitive distributed systems.
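
To make the workflow concrete, the sketch below illustrates the general pattern the abstract describes: each client spends a small annotation budget on the unlabeled samples its current model finds most informative, trains locally on its labeled pool, and a server aggregates the client models FedAvg-style. This is a simplified illustration, not the paper's implementation; the entropy-based `score_samples` heuristic stands in for the reinforcement-learned selection policy, and `SmallNet`, the synthetic data, and the budget values are assumptions made for brevity (the paper itself uses ResNet18 on CIFAR-10/100 with 5 and 10 clients).

```python
# Minimal sketch of federated active learning under an annotation budget.
# Assumptions: entropy-based acquisition in place of the paper's RL policy,
# a tiny stand-in network, synthetic data, and equal-weight FedAvg.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Stand-in classifier (the paper uses ResNet18 on CIFAR-10/100)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.head(self.features(x))

def score_samples(model, unlabeled_x):
    """Assumed acquisition signal: predictive entropy of the current model."""
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_x), dim=1)
        return -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)

def client_update(global_model, labeled_x, labeled_y,
                  unlabeled_x, oracle_y, budget, epochs=1):
    """Annotate `budget` high-scoring samples, then train locally."""
    model = copy.deepcopy(global_model)
    idx = score_samples(model, unlabeled_x).topk(budget).indices
    x = torch.cat([labeled_x, unlabeled_x[idx]])
    y = torch.cat([labeled_y, oracle_y[idx]])  # oracle labels the chosen samples
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return model.state_dict()

def fedavg(states):
    """Average client parameters with equal weighting."""
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = torch.stack([s[k].float() for s in states]).mean(dim=0)
    return avg

if __name__ == "__main__":
    torch.manual_seed(0)
    num_clients, budget = 5, 8
    global_model = SmallNet()
    # Synthetic stand-in data; real experiments would use CIFAR-10/100 splits.
    clients = [(torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,)),
                torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,)))
               for _ in range(num_clients)]
    for rnd in range(3):  # a few federated rounds
        states = [client_update(global_model, lx, ly, ux, uy, budget)
                  for lx, ly, ux, uy in clients]
        global_model.load_state_dict(fedavg(states))
        print(f"round {rnd}: aggregated {num_clients} client updates")
```

In this sketch the per-round annotation budget is the main cost lever: each client labels only `budget` new samples per round, which is the kind of strict labeling constraint the abstract targets.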