Abstract
Federated Learning has emerged as a promising approach to collaborative training across organizations without centralizing data. However, it remains vulnerable to privacy breaches and to attacks that compromise model robustness, such as data and model poisoning. This work presents PRoT-FL, a privacy-preserving and robust Training Manager capable of coordinating multiple training sessions simultaneously. PRoT-FL conducts each training session through a Federated Learning scheme that resists privacy attacks while ensuring robustness. To do so, model exchange is carried out by a “Private Training Protocol” over secure channels, and the protocol is combined with a public blockchain network to provide auditability, integrity, and transparency. The original contributions of this work are: (i) a “Private Training Protocol” that breaks the link between a model and its generator, (ii) the integration of this protocol into a complete system, PRoT-FL, which acts as an orchestrator and manages multiple training sessions, and (iii) an evaluation of privacy, robustness, and performance. The theoretical analysis shows that PRoT-FL suits a wide range of scenarios, handling multiple privacy attacks while allowing a flexible choice of defenses against attacks on robustness. Experiments are conducted on three benchmark datasets and compared with traditional Federated Learning under different robust aggregation rules. The results show that those rules still apply to PRoT-FL and that the accuracy of the final model is not degraded while data privacy is maintained.
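As context for the comparison above: a robust aggregation rule replaces plain federated averaging with a statistic that tolerates a minority of poisoned client updates. The sketch below is illustrative only (it is not the paper's protocol, and the clients, values, and function names are hypothetical), contrasting the mean with the classic coordinate-wise median:

```python
from statistics import mean, median

def fed_avg(updates):
    # Plain federated averaging: a single poisoned client can
    # skew every coordinate of the aggregated model update.
    return [mean(coord) for coord in zip(*updates)]

def coord_median(updates):
    # Coordinate-wise median: a classic robust aggregation rule
    # that tolerates a minority of arbitrarily poisoned updates.
    return [median(coord) for coord in zip(*updates)]

# Three honest clients and one poisoned update (hypothetical values).
honest = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1]]
poisoned = [[100.0, -100.0]]
updates = honest + poisoned

print(fed_avg(updates))       # skewed by the outlier, roughly [25.75, -23.5]
print(coord_median(updates))  # near the honest consensus, roughly [1.05, 1.95]
```

The paper's experiments suggest such rules remain applicable when the model exchange itself is privatized, which is why the comparison against traditional Federated Learning uses the same aggregation rules on both sides.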
| Original language | English |
|---|---|
| Article number | 103929 |
| Journal | Information Processing and Management |
| Volume | 62 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - Jan 2025 |
Keywords
- Blockchain
- Cryptography
- Federated learning
- Privacy
- Robustness
- Security