This study investigates the challenging task of training visual models with very little available data, a problem further complicated by distributions that are imbalanced and scattered across nodes. To address this diverse availability of training data in different federated settings, a self-supervised learning approach tailored to each scenario is proposed. In particular, a hybrid scheme combining self-supervised and supervised learning techniques under a federated umbrella is employed at both the global and the local level, harnessing the potential of unlabeled data. Extensive experiments provide a detailed analysis of the problem at hand and demonstrate the particular characteristics of the proposed learning schemes in distributed scenarios. The overall approach achieves superior recognition performance on the broadest public dataset currently available, surpassing all baselines by a substantial margin. Moreover, the proposed solution can operate efficiently at the local level without prior knowledge of the characteristics or distribution of the data across nodes.