TY - GEN
T1 - Defense Strategy against Byzantine Attacks in Federated Machine Learning
T2 - 2024 IEEE International Conference on Fuzzy Systems, FUZZ-IEEE 2024
AU - Rodríguez-Barroso, Nuria
AU - Del Ser, Javier
AU - Luzón, M. Victoria
AU - Herrera, Francisco
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - The rise of high-risk AI systems has led to escalating concerns, prompting regulatory efforts such as the recently approved EU AI Act. In this context, the development of responsible AI systems is crucial. To this end, trustworthy AI techniques aim to satisfy requirements (including transparency, privacy awareness and fairness) that contribute to the development of responsible, robust and safe AI systems. Among them, Federated Learning (FL) has emerged as a key approach to safeguarding data privacy while enabling the collaborative training of AI models. However, FL is prone to adversarial attacks, particularly Byzantine attacks, which aim to modify the behavior of the model. This work addresses this issue by proposing an eXplainable and Impartial Dynamic Defense against Byzantine Attacks (XI-DDaBA). This defense mechanism relies on robust aggregation operators and filtering techniques to mitigate the effects of adversarial attacks in FL, while providing explanations for its decisions and ensuring that clients with poor data quality are not discriminated against. Experimental simulations are discussed to assess the performance of XI-DDaBA against other baselines from the literature, and to showcase the explanations it provides. Overall, XI-DDaBA aligns with the need for responsible AI systems in high-risk collaborative learning scenarios through the explainable and impartial provision of robustness against attacks.
AB - The rise of high-risk AI systems has led to escalating concerns, prompting regulatory efforts such as the recently approved EU AI Act. In this context, the development of responsible AI systems is crucial. To this end, trustworthy AI techniques aim to satisfy requirements (including transparency, privacy awareness and fairness) that contribute to the development of responsible, robust and safe AI systems. Among them, Federated Learning (FL) has emerged as a key approach to safeguarding data privacy while enabling the collaborative training of AI models. However, FL is prone to adversarial attacks, particularly Byzantine attacks, which aim to modify the behavior of the model. This work addresses this issue by proposing an eXplainable and Impartial Dynamic Defense against Byzantine Attacks (XI-DDaBA). This defense mechanism relies on robust aggregation operators and filtering techniques to mitigate the effects of adversarial attacks in FL, while providing explanations for its decisions and ensuring that clients with poor data quality are not discriminated against. Experimental simulations are discussed to assess the performance of XI-DDaBA against other baselines from the literature, and to showcase the explanations it provides. Overall, XI-DDaBA aligns with the need for responsible AI systems in high-risk collaborative learning scenarios through the explainable and impartial provision of robustness against attacks.
KW - adversarial attacks
KW - Byzantine attacks
KW - Federated Learning
KW - safe AI
KW - trustworthy AI
UR - http://www.scopus.com/inward/record.url?scp=85201570978&partnerID=8YFLogxK
U2 - 10.1109/FUZZ-IEEE60900.2024.10611769
DO - 10.1109/FUZZ-IEEE60900.2024.10611769
M3 - Conference contribution
AN - SCOPUS:85201570978
T3 - IEEE International Conference on Fuzzy Systems
BT - 2024 IEEE International Conference on Fuzzy Systems, FUZZ-IEEE 2024 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 30 June 2024 through 5 July 2024
ER -