Defending poisoning attacks in federated learning via adversarial training method
Paper/Presentation Title | Defending poisoning attacks in federated learning via adversarial training method |
---|---|
Presentation Type | Paper |
Authors | Zhang, Jiale; Wu, Di; Liu, Chengyong; Chen, Bing |
Journal or Proceedings Title | Proceedings of the 3rd International Conference on Frontiers in Cyber Security (FCS 2020) |
Journal Citation | pp. 83-94 |
Number of Pages | 12 |
Year | 2020 |
Publisher | Springer |
Place of Publication | Singapore |
ISBN | 9789811597381; 9789811597398 |
Digital Object Identifier (DOI) | https://doi.org/10.1007/978-981-15-9739-8_7 |
Web Address (URL) of Paper | https://link.springer.com/chapter/10.1007/978-981-15-9739-8_7 |
Web Address (URL) of Conference Proceedings | https://link.springer.com/book/10.1007/978-981-15-9739-8 |
Conference/Event | 3rd International Conference on Frontiers in Cyber Security (FCS 2020) |
Event Details | Delivery: In person; Event Date: 15 to 17 Nov 2020; Event Location: Tianjin, China |
Abstract | Recently, federated learning has shown significant advantages in protecting training data privacy by maintaining a joint model across multiple clients. However, recent work on its model security has shown that federated learning exhibits inherent vulnerabilities to active attacks launched by malicious participants. Poisoning is one of the most powerful active attacks, in which an inside attacker uploads crafted local model updates to degrade the global model's performance. In this paper, we first illustrate how the poisoning attack works in the context of federated learning. We then propose a corresponding defense method that relies on a well-researched adversarial training technique, pivotal training, which improves the robustness of the global model against poisoned local updates. The main contribution of this work is that the countermeasure is simple and scalable: it requires no complex accuracy validation and only changes the optimization objectives and loss functions. Finally, we demonstrate the effectiveness of the proposed mitigation mechanisms through extensive experiments. |
Keywords | Federated learning; Poisoning attacks; Label-flipping; Pivotal training |
Contains Sensitive Content | Does not contain sensitive content |
ANZSRC Field of Research 2020 | 4602. Artificial intelligence |
Public Notes | Files associated with this item cannot be displayed due to copyright restrictions. |
Byline Affiliations | Nanjing University of Aeronautics and Astronautics, China; University of Technology Sydney |
https://research.usq.edu.au/item/z4y1v/defending-poisoning-attacks-in-federated-learning-via-adversarial-training-method
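
The abstract describes a label-flipping poisoning attack in which an inside attacker uploads crafted local model updates to a federated learning server. The following is a minimal illustrative sketch (not the authors' code) of that threat model against plain federated averaging; the linear model, client counts, and synthetic data are assumptions made for the example.

```python
# Minimal sketch of label-flipping poisoning against federated averaging.
# All shapes, hyperparameters, and data here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
NUM_CLIENTS, DIM, CLASSES = 5, 20, 10
global_w = np.zeros((DIM, CLASSES))          # shared linear model (logits = x @ w)

def local_update(w, x, y, lr=0.1, epochs=5):
    """One client's local training: softmax regression via gradient descent."""
    w = w.copy()
    for _ in range(epochs):
        logits = x @ w
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        onehot = np.eye(CLASSES)[y]
        w -= lr * x.T @ (p - onehot) / len(x)
    return w

def federated_round(global_w, client_data, malicious=frozenset()):
    """One round: each client trains locally, the server averages the results."""
    updates = []
    for cid, (x, y) in enumerate(client_data):
        if cid in malicious:
            y = (y + 1) % CLASSES            # label flipping: shift every label
        updates.append(local_update(global_w, x, y))
    return np.mean(updates, axis=0)          # FedAvg: simple parameter averaging

# Synthetic clients; client 0 acts as the inside attacker.
client_data = [(rng.normal(size=(64, DIM)), rng.integers(0, CLASSES, 64))
               for _ in range(NUM_CLIENTS)]
global_w = federated_round(global_w, client_data, malicious={0})
```

Because the server simply averages whatever updates it receives, the flipped-label update is blended into the global model, which is the degradation the paper's defense targets.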
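The defense described in the abstract builds on pivotal training, an adversarial training technique that changes only the optimization objective and loss function. The sketch below shows a generic pivotal-training objective in that spirit (a predictor f trained against an adversary r, minimizing task loss minus a weighted adversary loss); the network sizes, the nuisance target z, and the trade-off weight lam are illustrative assumptions, not the paper's exact formulation.

```python
# Generic pivotal-training objective: min_f max_r [L_f - lam * L_r].
# Hedged sketch only; architectures and targets are assumptions for illustration.
import torch
import torch.nn as nn

DIM, CLASSES, NUISANCE = 20, 10, 2
f = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, CLASSES))     # predictor
r = nn.Sequential(nn.Linear(CLASSES, 16), nn.ReLU(), nn.Linear(16, NUISANCE))  # adversary
opt_f = torch.optim.SGD(f.parameters(), lr=0.05)
opt_r = torch.optim.SGD(r.parameters(), lr=0.05)
ce = nn.CrossEntropyLoss()
lam = 1.0

def training_step(x, y, z):
    """One alternating step: the adversary learns to recover the nuisance z from
    f's outputs, then f minimizes its task loss while making r fail."""
    # 1) update adversary r on the predictor's (detached) outputs
    logits = f(x).detach()
    loss_r = ce(r(logits), z)
    opt_r.zero_grad(); loss_r.backward(); opt_r.step()
    # 2) update predictor f with the pivotal objective
    logits = f(x)
    loss_f = ce(logits, y) - lam * ce(r(logits), z)
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
    return loss_f.item()

# Toy batch to exercise the step.
x = torch.randn(64, DIM)
y = torch.randint(0, CLASSES, (64,))
z = torch.randint(0, NUISANCE, (64,))
print(training_step(x, y, z))
```

The appeal of this style of defense, as the abstract notes, is that it touches only the loss and optimization procedure, so it composes with standard federated training without extra accuracy-validation machinery.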