Abstract
Despite its potential benefits, federated learning (FL) is vulnerable to various types of attacks that can compromise the accuracy and security of the trained model. While several defense mechanisms have been proposed to protect FL against such attacks, attackers continuously develop more advanced techniques to bypass them. In this context, this paper proposes a novel attack mechanism that allows malicious users to optimize their crafted reports, maximizing the potential damage while limiting the chances of being detected. The proposed attack technique is a robust approach designed to bypass existing defense mechanisms in FL. Our main contributions are investigating the FL model attack from the attacker's perspective, proposing a model relaxation approach that optimizes a single poisoning-ratio variable, and formulating the trade-off between the chances of being detected and the amount of damage the attack could cause. Additionally, we introduce three new attack designs, namely DTA, ATA, and NEA, which maximize the effect of the attack. The proposed Distance Target Attack (DTA) minimizes the distance from the target attack model, while the Accuracy Target Attack (ATA) deteriorates the accuracy of the global model. Furthermore, the Number Estimation Attack (NEA) aims to maximize the expected number of attackers that can bypass the aggregation detection mechanisms. Numerical results based on the KDD dataset confirm the ability of the proposed approach to deteriorate the global model accuracy: the experiments show that the proposed DTA, ATA, and NEA attacks can significantly reduce the accuracy of the global model. These results also demonstrate the effectiveness and robustness of the proposed attack mechanism in compromising the accuracy and security of FL models.
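The abstract does not reproduce the paper's formulation, but the DTA idea of trading damage against detectability via a single poisoning ratio can be illustrated with a minimal sketch. Everything below is an assumption made for illustration only: the poisoning-ratio variable `alpha`, the norm-threshold stand-in for the aggregator's detection rule, and the linear interpolation between an honest update and the attacker's target model are not taken from the paper.

```python
import numpy as np

def craft_dta_report(honest_update: np.ndarray,
                     target_model: np.ndarray,
                     norm_threshold: float) -> np.ndarray:
    """Hypothetical DTA-style report: move as far as possible toward the
    attacker's target model while keeping the report under an (assumed)
    norm-based detection threshold applied by the aggregator."""
    direction = target_model - honest_update
    best_alpha = 0.0
    # Scan the poisoning ratio alpha in [0, 1]; keep the largest value whose
    # crafted report honest + alpha * direction still passes the norm check.
    for alpha in np.linspace(0.0, 1.0, 101):
        report = honest_update + alpha * direction
        if np.linalg.norm(report) <= norm_threshold:
            best_alpha = alpha
    return honest_update + best_alpha * direction

# Toy usage: random vectors stand in for flattened model updates.
rng = np.random.default_rng(0)
honest = rng.normal(scale=0.1, size=1000)
target = rng.normal(scale=1.0, size=1000)
poisoned = craft_dta_report(honest, target, norm_threshold=5.0)
```

The grid search over `alpha` is only a stand-in for the paper's optimization; the point it illustrates is the compromise between pushing the report toward the target model (more damage) and staying inside whatever region the detection mechanism accepts.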
Original language | English |
---|---|
Title of host publication | 2023 International Wireless Communications and Mobile Computing, IWCMC 2023 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 1644-1648 |
Number of pages | 5 |
ISBN (Electronic) | 9798350333398 |
DOIs | |
Publication status | Published - 2023 |
Event | 19th IEEE International Wireless Communications and Mobile Computing Conference, IWCMC 2023 - Hybrid, Marrakesh, Morocco (Duration: 19 Jun 2023 → 23 Jun 2023) |
Publication series
Name | 2023 International Wireless Communications and Mobile Computing, IWCMC 2023 |
---|---|
Conference
Conference | 19th IEEE International Wireless Communications and Mobile Computing Conference, IWCMC 2023 |
---|---|
Country/Territory | Morocco |
City | Hybrid, Marrakesh |
Period | 19/06/23 → 23/06/23 |
Keywords
- Byzantine attacks
- Distributed Learning
- Federated Learning
- Intrusion Detection
Fingerprint
Dive into the research topics of 'Federating Learning Attacks: Maximizing Damage while Evading Detection'. Together they form a unique fingerprint.
Projects
- 1 Finished
- EX-QNRF-NPRPS-37: Secure Federated Edge Intelligence Framework for AI-driven 6G Applications
Abdallah, M. M. (Lead Principal Investigator), Al Fuqaha, A. (Principal Investigator), Hamood, M. (Graduate Student), Aboueleneen, N. (Graduate Student), Student-1, G. (Graduate Student), Student-2, G. (Graduate Student), Fellow-1, P. D. (Post Doctoral Fellow), Assistant-1, R. (Research Assistant), Mohamed, D. A. (Principal Investigator), Mahmoud, D. M. (Principal Investigator), Al-Dhahir, P. N. (Principal Investigator) & Khattab, P. T. (Principal Investigator)
19/04/21 → 30/08/24
Project: Applied Research