PDGAN: A novel poisoning defense method in federated learning using generative adversarial network
Paper/Presentation Title | PDGAN: A novel poisoning defense method in federated learning using generative adversarial network |
---|---|
Presentation Type | Paper |
Authors | Zhao, Ying; Chen, Junjun; Zhang, Jiale; Wu, Di; Teng, Jian; Yu, Shui |
Journal or Proceedings Title | Proceedings of the 19th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP 2019) |
Journal Citation | Vol. 11944, pp. 595-609 |
Number of Pages | 15 |
Year | 2020 |
Publisher | Springer |
Place of Publication | Switzerland |
ISBN | 9783030389901; 9783030389918 |
Digital Object Identifier (DOI) | https://doi.org/10.1007/978-3-030-38991-8_39 |
Web Address (URL) of Paper | https://link.springer.com/chapter/10.1007/978-3-030-38991-8_39 |
Web Address (URL) of Conference Proceedings | https://link.springer.com/book/10.1007/978-3-030-38991-8 |
Conference/Event | 19th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP 2019) |
Event Details | 19th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP 2019); Parent event: International Conference on Algorithms and Architectures for Parallel Processing; Delivery: In person; Event date: 09-11 Dec 2019; Event location: Melbourne, Australia |
Abstract | Federated learning can complete an enormous training task efficiently by inviting participants to train a deep learning model collaboratively, and user privacy is well preserved because participants only upload model parameters to the centralized server. However, attackers can initiate poisoning attacks by uploading malicious updates in federated learning, significantly degrading the accuracy of the global model. To address this vulnerability, we propose a novel poisoning defense generative adversarial network (PDGAN) to defend against poisoning attacks. PDGAN reconstructs training data from model updates and audits the accuracy of each participant's model using the generated data. Specifically, a participant whose accuracy is lower than a predefined threshold is identified as an attacker, and that attacker's model parameters are removed from the training procedure in the current iteration. Experiments conducted on the MNIST and Fashion-MNIST datasets demonstrate that our approach can indeed defend against poisoning attacks in federated learning. (An illustrative sketch of the threshold-based auditing step appears after this record.) |
Keywords | Federated learning; Poisoning defense; Generative adversarial network |
Contains Sensitive Content | Does not contain sensitive content |
ANZSRC Field of Research 2020 | 4604. Cybersecurity and privacy; 4602. Artificial intelligence |
Public Notes | Files associated with this item cannot be displayed due to copyright restrictions. |
Series | Lecture Notes in Computer Science |
Byline Affiliations | Beijing University of Chemical Technology, China; Nanjing University of Aeronautics and Astronautics, China; University of Technology Sydney |
https://research.usq.edu.au/item/z4y19/pdgan-a-novel-poisoning-defense-method-in-federated-learning-using-generative-adversarial-network
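For illustration of the defense described in the abstract, the following is a minimal Python/NumPy sketch of the threshold-based auditing and aggregation step only; it does not implement the paper's GAN, and a synthetic audit set stands in for the GAN-generated data. The function and variable names (`audit_accuracy`, `pdgan_style_aggregate`, the linear toy models) are hypothetical and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def audit_accuracy(weights, audit_x, audit_y):
    """Evaluate a participant's (toy, linear) model on the audit data."""
    preds = (audit_x @ weights > 0).astype(int)
    return float((preds == audit_y).mean())

def pdgan_style_aggregate(participant_weights, audit_x, audit_y, threshold=0.5):
    """Average only those updates whose audited accuracy meets the threshold,
    mimicking the paper's idea of dropping suspected attackers for this round."""
    kept = [w for w in participant_weights
            if audit_accuracy(w, audit_x, audit_y) >= threshold]
    if not kept:
        raise ValueError("all participant updates fell below the audit threshold")
    return np.mean(kept, axis=0)

# Synthetic audit set standing in for GAN-generated samples (assumption).
audit_x = rng.normal(size=(200, 10))
true_w = rng.normal(size=10)
audit_y = (audit_x @ true_w > 0).astype(int)

# Honest updates near the true model, plus one sign-flipped "poisoned" update.
honest = [true_w + 0.1 * rng.normal(size=10) for _ in range(4)]
poisoned = [-true_w]

global_w = pdgan_style_aggregate(honest + poisoned, audit_x, audit_y, threshold=0.6)
print("audited accuracies:",
      [round(audit_accuracy(w, audit_x, audit_y), 2) for w in honest + poisoned])
print("aggregated model accuracy:",
      round(audit_accuracy(global_w, audit_x, audit_y), 2))
```

Under these assumptions the sign-flipped update scores near zero on the audit set and is excluded, while the honest updates pass the threshold and are averaged into the global model.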