Detecting and mitigating poisoning attacks in federated learning using generative adversarial networks
Article Title | Detecting and mitigating poisoning attacks in federated learning using generative adversarial networks |
---|---
ERA Journal ID | 17819 |
Article Category | Article |
Authors | Zhao, Ying, Chen, Junjun, Zhang, Jiale, Wu, Di, Blumenstein, Michael and Yu, Shui |
Journal Title | Concurrency and Computation: Practice and Experience |
Journal Citation | 34 (7) |
Article Number | e5906 |
Number of Pages | 12 |
Year | 2022 |
Publisher | John Wiley & Sons |
Place of Publication | United Kingdom |
ISSN | 1532-0626; 1532-0634
Digital Object Identifier (DOI) | https://doi.org/10.1002/cpe.5906 |
Web Address (URL) | https://onlinelibrary.wiley.com/doi/full/10.1002/cpe.5906 |
Abstract | In the age of the Internet of Things (IoT), large numbers of sensors and edge devices are deployed in various application scenarios. Therefore, collaborative learning is widely used in IoT to implement crowd intelligence by inviting multiple participants to complete a training task. As a collaborative learning framework, federated learning is designed to preserve user data privacy: participants jointly train a global model without uploading their private training data to a third-party server. Nevertheless, federated learning is under the threat of poisoning attacks, where adversaries can upload malicious model updates to contaminate the global model. To detect and mitigate poisoning attacks in federated learning, we propose a poisoning defense mechanism that uses generative adversarial networks to generate auditing data during training and removes adversaries by auditing their model accuracy. Experiments conducted on two well-known datasets, MNIST and Fashion-MNIST, suggest that federated learning is vulnerable to poisoning attacks and that the proposed defense method can detect and mitigate them.
Keywords | federated learning; generative adversarial networks; poisoning attacks; model security |
Contains Sensitive Content | Does not contain sensitive content |
ANZSRC Field of Research 2020 | 4602. Artificial intelligence; 4604. Cybersecurity and privacy
Public Notes | Files associated with this item cannot be displayed due to copyright restrictions. |
Byline Affiliations | Beijing University of Chemical Technology, China; Nanjing University of Aeronautics and Astronautics, China; University of Technology Sydney
https://research.usq.edu.au/item/z4y13/detecting-and-mitigating-poisoning-attacks-in-federated-learning-using-generative-adversarial-networks
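
The defense described in the abstract audits each participant's submitted model on server-side auditing data and excludes updates whose accuracy falls below a threshold before aggregation. The sketch below illustrates only that audit-and-filter step, not the authors' implementation: in the paper the auditing data is produced by a GAN trained during the federated procedure, whereas here a labelled synthetic set stands in for it, and the linear model, the function names (`audit_accuracy`, `filtered_fedavg`), and the 0.6 threshold are all illustrative assumptions.

```python
# Minimal sketch of accuracy-based auditing of client updates,
# assuming a GAN-generated auditing set is already available
# (a labelled synthetic set stands in for it here).
import numpy as np

def audit_accuracy(weights, X_audit, y_audit):
    """Accuracy of a linear classifier (weights: features x classes)
    on the server-side auditing data."""
    preds = np.argmax(X_audit @ weights, axis=1)
    return float(np.mean(preds == y_audit))

def filtered_fedavg(client_weights, X_audit, y_audit, threshold=0.5):
    """Keep only client updates whose audited accuracy clears the
    threshold, then average the survivors (FedAvg on the kept set)."""
    kept = [w for w in client_weights
            if audit_accuracy(w, X_audit, y_audit) >= threshold]
    if not kept:
        # Every update was flagged; a real server would retain the
        # previous global model instead of aborting.
        raise ValueError("every update failed the audit")
    return np.mean(kept, axis=0)

# Toy usage: 3 near-correct clients and 1 poisoned update.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
true_w = rng.normal(size=(10, 3))
y = np.argmax(X @ true_w, axis=1)
honest = [true_w + 0.05 * rng.normal(size=true_w.shape) for _ in range(3)]
poisoned = -true_w  # a label-flipping-style malicious update
global_w = filtered_fedavg(honest + [poisoned], X, y, threshold=0.6)
print("audited global accuracy:", audit_accuracy(global_w, X, y))
```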