Poisoning attack in federated learning using generative adversarial nets
Paper/Presentation Title | Poisoning attack in federated learning using generative adversarial nets |
---|---|
Presentation Type | Paper |
Authors | Zhang, Jiale; Chen, Junjun; Wu, Di; Chen, Bing; Yu, Shui |
Journal or Proceedings Title | Proceedings of 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE) |
Journal Citation | pp. 374-380 |
Number of Pages | 7 |
Year | 2019 |
Publisher | IEEE (Institute of Electrical and Electronics Engineers) |
Place of Publication | United States |
Digital Object Identifier (DOI) | https://doi.org/10.1109/TrustCom/BigDataSE.2019.00057 |
Web Address (URL) of Paper | https://ieeexplore.ieee.org/abstract/document/8887357 |
Web Address (URL) of Conference Proceedings | https://ieeexplore.ieee.org/xpl/conhome/8883860/proceeding |
Conference/Event | 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE) |
Event Details | Parent event: IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom); Delivery: In person; Event date: 05 to 08 Aug 2019; Event location: Rotorua, New Zealand |
Abstract | Federated learning is a novel distributed learning framework in which a deep learning model is trained collaboratively among thousands of participants. Only model parameters are shared between the server and the participants, which prevents the server from directly accessing the private training data. However, we notice that the federated learning architecture is vulnerable to an active attack from insider participants, called a poisoning attack, in which the attacker acts as a benign participant and uploads poisoned updates to the server, easily degrading the performance of the global model. In this work, we study and evaluate a poisoning attack on federated learning systems based on generative adversarial nets (GAN). The attacker first acts as a benign participant and stealthily trains a GAN to mimic prototypical samples from the other participants' training sets, which do not belong to the attacker. These generated samples, fully controlled by the attacker, are then used to produce poisoning updates, and the attacker compromises the global model by uploading the scaled poisoning updates to the server. In our evaluation, we show that the attacker in our construction can successfully generate samples of other benign participants using the GAN, and that the global model achieves more than 80% accuracy on both the poisoning task and the main task. |
Keywords | Federated learning; poisoning attack; generative adversarial nets; security; privacy |
Contains Sensitive Content | Does not contain sensitive content |
ANZSRC Field of Research 2020 | 4602. Artificial intelligence; 4604. Cybersecurity and privacy |
Public Notes | Files associated with this item cannot be displayed due to copyright restrictions. |
Byline Affiliations | Nanjing University of Aeronautics and Astronautics, China; Beijing University of Chemical Technology, China; University of Technology Sydney |
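
The abstract above describes a three-step attack loop: the attacker trains a GAN against the shared global model to mimic other participants' data, mislabels the generated samples to derive a poisoning update, and scales that update so it survives server-side averaging. The toy Python sketch below illustrates only this control flow; it is not the authors' implementation, and every function name, dimension, and constant is an illustrative assumption (models are reduced to flat NumPy parameter vectors, and the GAN and local training steps are stubs).

```python
# Toy sketch of the GAN-based poisoning loop described in the abstract.
# All names and numbers are hypothetical illustrations, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
DIM = 10          # size of the (toy) flattened model parameter vector
SCALE = 5.0       # scaling factor applied to the poisoned update
ROUNDS = 3        # federated training rounds simulated here

def train_gan_on_global_model(global_params):
    """Stand-in for training a GAN whose discriminator is initialised from
    the global model, so the generator mimics prototypical samples of the
    other participants (the paper's first step). Returns fake samples."""
    return rng.normal(size=(32, 4))

def poisoned_local_update(global_params, generated_samples):
    """Stand-in for local training on mislabeled generated samples;
    returns a (toy) parameter delta derived by the attacker."""
    return rng.normal(scale=0.1, size=global_params.shape)

global_params = np.zeros(DIM)
for rnd in range(ROUNDS):
    # 1. Attacker behaves like a benign participant and receives the model,
    #    then uses it to generate lookalike samples of other participants.
    samples = train_gan_on_global_model(global_params)
    # 2. Attacker mislabels the generated samples and trains locally to
    #    obtain a poisoning update.
    delta = poisoned_local_update(global_params, samples)
    # 3. The update is scaled up so it survives server-side averaging.
    poisoned_update = SCALE * delta
    # 4. The server averages the attacker's update with benign ones
    #    (benign updates are simulated here as small random deltas).
    benign_updates = [rng.normal(scale=0.1, size=DIM) for _ in range(9)]
    global_params += np.mean(benign_updates + [poisoned_update], axis=0)
    print(f"round {rnd}: |global| = {np.linalg.norm(global_params):.3f}")
```

The scaling step is the crux: with plain federated averaging over N participants, an update multiplied by a large enough factor dominates the mean, which is how the single insider can steer the global model while still appearing to follow the protocol.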
https://research.usq.edu.au/item/z4y1z/poisoning-attack-in-federated-learning-using-generative-adversarial-nets