Designing AI Systems with Correction Mechanisms Towards Attack-Resilient Architectures

Authors
E. Kafali
C. Spartalis
T. Semertzidis
C. Z. Patrikakis
P. Daras
Year
2025
Venue
2025 IEEE International Conference on Cyber Security and Resilience (CSR)

Abstract

AI models face increasing security and privacy threats that compromise their integrity and reliability. While numerous AI-based approaches have been proposed to detect and mitigate such risks, they often address specific aspects in isolation, leaving a lack of unified guidance for building robust and resilient AI systems. This article proposes an attack-resilient framework that addresses both security and privacy threats against AI systems by integrating detection mechanisms, corrective actions, and explainable AI techniques. The proposed framework aims to equip AI systems with resilience strategies, improving defenses against evolving threats while ensuring reliability and compliance in high-stakes applications such as healthcare and GDPR-regulated environments.
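
The paper itself does not include code; the sketch below is a minimal, hypothetical illustration (in Python) of how the three ingredients named in the abstract, namely threat detection, corrective actions, and explainability for auditing, might be composed around a model's inference path. All class and function names are illustrative assumptions, not the authors' implementation.

# Minimal illustrative sketch (not from the paper): composing detection,
# correction, and explanation around a model call. All names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Finding:
    threat: str          # e.g. "adversarial_input", "membership_inference"
    score: float         # detector confidence in [0, 1]
    explanation: str     # human-readable rationale kept for auditing/compliance

@dataclass
class ResilientPipeline:
    model: Callable[[list], list]                                  # the protected AI model
    detectors: List[Callable[[list], Finding]] = field(default_factory=list)
    correctors: Dict[str, Callable[[list], list]] = field(default_factory=dict)
    audit_log: List[Finding] = field(default_factory=list)

    def predict(self, inputs: list) -> list:
        # 1. Detection: run every registered detector on the incoming batch.
        for detect in self.detectors:
            finding = detect(inputs)
            if finding.score > 0.5:
                # 2. Explainability: record why the input was flagged.
                self.audit_log.append(finding)
                # 3. Correction: apply the registered corrective action, if any.
                correct = self.correctors.get(finding.threat)
                if correct is not None:
                    inputs = correct(inputs)
        return self.model(inputs)

if __name__ == "__main__":
    # Example wiring with trivial stand-ins for the model, detector, and corrector.
    pipeline = ResilientPipeline(
        model=lambda xs: [x * 2 for x in xs],
        detectors=[lambda xs: Finding("adversarial_input", 0.9, "out-of-range values")],
        correctors={"adversarial_input": lambda xs: [min(max(x, 0.0), 1.0) for x in xs]},
    )
    print(pipeline.predict([1.5, -0.2, 0.3]))   # inputs clamped, then scored by the model

The design choice here is only to show the control flow the abstract implies: detectors feed an audit log (supporting explainability and compliance), and per-threat corrective actions are applied before inference; the actual mechanisms in the paper may differ.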