Defense Strategy against Byzantine Attacks in Federated Machine Learning: Developments towards Explainability

Nuria Rodriguez-Barroso*, Javier Del Ser*, M. Victoria Luzon, Francisco Herrera*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding · Conference contribution · peer-review

Abstract

The rise of high-risk AI systems has led to escalating concerns, prompting regulatory efforts such as the recently approved EU AI Act. In this context, the development of responsible AI systems is crucial. To this end, trustworthy AI techniques aim at requirements (including transparency, privacy awareness and fairness) that contribute to the development of responsible, robust and safe AI systems. Among them, Federated Learning (FL) has emerged as a key approach to safeguarding data privacy while enabling the collaborative training of AI models. However, FL is prone to adversarial attacks, particularly byzantine attacks, which aim to modify the behavior of the model. This work addresses this issue by proposing an eXplainable and Impartial Dynamic Defense against Byzantine Attacks (XI-DDaBA). This defense mechanism relies on robust aggregation operators and filtering techniques to mitigate the effects of adversarial attacks in FL, while providing explanations for its decisions and ensuring that clients with poor data quality are not discriminated. Experimental simulations are discussed to assess the performance of XI-DDaBA against other baselines from the literature, and to showcase its provided explanations. Overall, XI-DDaBA aligns with the need for responsible AI systems in high-risk collaborative learning scenarios through the explainable and impartial provision of robustness against attacks.
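The abstract states that XI-DDaBA relies on robust aggregation operators to mitigate byzantine attacks. As an illustration of that general idea (not the paper's actual operator, whose details are not given here), the following sketch contrasts coordinate-wise median aggregation with plain averaging when one hypothetical byzantine client submits a poisoned update:

```python
import statistics

def median_aggregate(updates):
    """Robust aggregation: take the coordinate-wise median of client updates,
    which a small fraction of extreme (byzantine) values cannot drag far."""
    return [statistics.median(coords) for coords in zip(*updates)]

def mean_aggregate(updates):
    """Naive aggregation: coordinate-wise mean, sensitive to a single outlier."""
    return [sum(coords) / len(coords) for coords in zip(*updates)]

# Three honest clients send updates close to the true value (~1.0 per coordinate);
# one byzantine client sends an extreme poisoned update.
honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
byzantine = [[100.0, -100.0]]
updates = honest + byzantine

robust = median_aggregate(updates)  # stays near [1.05, 0.95]
naive = mean_aggregate(updates)     # dragged to roughly [25.75, -24.25]
```

The median resists the poisoned update, while the mean is pulled far from the honest consensus; robust operators of this kind are the standard starting point for byzantine-resilient FL.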

Original language: English
Title of host publication: 2024 IEEE International Conference on Fuzzy Systems, FUZZ-IEEE 2024 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9798350319545
Publication status: Published - 2024
Event: 2024 IEEE International Conference on Fuzzy Systems, FUZZ-IEEE 2024 - Yokohama, Japan
Duration: 30 Jun 2024 - 5 Jul 2024

Publication series

Name: IEEE International Conference on Fuzzy Systems
ISSN (Print): 1098-7584

Conference

Conference: 2024 IEEE International Conference on Fuzzy Systems, FUZZ-IEEE 2024
Country/Territory: Japan
City: Yokohama
Period: 30/06/24 - 05/07/24

Keywords

  • adversarial attacks
  • byzantine attacks
  • Federated Learning
  • safe AI
  • trustworthy AI
