Defending against membership inference attacks in federated learning via adversarial example
Paper/Presentation Title | Defending against membership inference attacks in federated learning via adversarial example |
---|---|
Presentation Type | Paper |
Authors | Xie, Yuanyuan; Chen, Bing; Zhang, Jiale; and Wu, Di |
Journal or Proceedings Title | Proceedings of 2021 17th International Conference on Mobility, Sensing and Networking (MSN) |
Journal Citation | pp. 153-160 |
Number of Pages | 8 |
Year | 2021 |
Publisher | IEEE (Institute of Electrical and Electronics Engineers) |
Place of Publication | United States |
Digital Object Identifier (DOI) | https://doi.org/10.1109/MSN53354.2021.00036 |
Web Address (URL) of Paper | https://ieeexplore.ieee.org/abstract/document/9751527 |
Web Address (URL) of Conference Proceedings | https://ieeexplore.ieee.org/xpl/conhome/9751460/proceeding |
Conference/Event | 2021 17th International Conference on Mobility, Sensing and Networking (MSN) |
Event Details | 2021 17th International Conference on Mobility, Sensing and Networking (MSN). Parent event: International Conference on Mobility, Sensing and Networking. Delivery: in person. Dates: 13 to 15 Dec 2021. Location: Exeter, United Kingdom. |
Abstract | Federated learning has attracted attention in recent years due to its native privacy-preserving features. However, it is still vulnerable to various attacks, such as backdoor, poisoning, and membership inference attacks. A membership inference attack aims to determine whether a specific record was used to train the model, with privacy-leaking ramifications for participants who use their local data to train the shared model. Recent research on countermeasures mainly focuses on protecting the model parameters and has limitations in guaranteeing privacy while restraining the loss of the model. This paper proposes Fedefend, which applies adversarial examples to defend against membership inference attacks in federated learning. The proposed approach adds well-designed noise to the attack features of the target model in each iteration so that they become adversarial examples. In addition, we consider the utility loss of the model and use an adversarial method to generate noise that constrains this loss to a certain extent, efficiently achieving a trade-off between privacy and the utility of the federated learning model. We evaluate the proposed Fedefend on two benchmark datasets, and the experimental results demonstrate that Fedefend performs well. |
Contains Sensitive Content | Does not contain sensitive content |
ANZSRC Field of Research 2020 | 460407. System and network security |
Public Notes | Files associated with this item cannot be displayed due to copyright restrictions. |
Byline Affiliations | Nanjing University of Aeronautics and Astronautics, China; Yangzhou University, China; Deakin University, Australia |
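The defense described in the abstract above perturbs the target model's per-iteration attack features (its confidence scores) so they become adversarial examples for the membership inference model, while bounding the utility loss. Below is a minimal illustrative sketch of that idea, not the paper's actual algorithm: it assumes the defender holds a logistic-regression stand-in for the attack model (weights `w`, bias `b`) and crafts a small additive noise that drives the attack's membership score toward an uninformative 0.5 without changing the predicted class. All names and parameters here (`attack_score`, `fedefend_noise`, `eps`, `lr`) are hypothetical.

```python
import numpy as np

def attack_score(conf, w, b):
    """Hypothetical defender-side stand-in for the membership inference
    model: logistic regression mapping a confidence vector to P(member)."""
    return 1.0 / (1.0 + np.exp(-(conf @ w + b)))

def fedefend_noise(conf, w, b, eps=0.05, steps=20, lr=0.1):
    """Craft additive noise that pushes the attack model's membership score
    toward 0.5 (uninformative), keeps the predicted class unchanged, and
    stays inside an epsilon ball -- the utility constraint."""
    noise = np.zeros_like(conf)
    best = noise.copy()
    target = np.argmax(conf)                      # prediction to preserve
    for _ in range(steps):
        s = attack_score(conf + noise, w, b)
        # Gradient of (s - 0.5)^2 w.r.t. the perturbed confidence vector.
        grad = 2.0 * (s - 0.5) * s * (1.0 - s) * w
        noise = np.clip(noise - lr * grad, -eps, eps)
        if np.argmax(conf + noise) == target:     # utility constraint holds
            best = noise.copy()
    return best

# Toy usage with made-up numbers: defend a single confidence vector.
rng = np.random.default_rng(0)
conf = np.array([0.7, 0.2, 0.1])                  # target model's softmax output
w, b = rng.normal(size=3), 0.0                    # stand-in attack parameters
defended = np.clip(conf + fedefend_noise(conf, w, b), 1e-6, None)
defended /= defended.sum()                        # re-normalize to a distribution
print(defended, attack_score(defended, w, b))
```

In the paper's federated setting this perturbation is applied at each training iteration; the sketch only shows the per-vector step, and real defenses in this family (e.g., MemGuard) solve the constrained optimization more carefully.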
https://research.usq.edu.au/item/z4y17/defending-against-membership-inference-attacks-in-federated-learning-via-adversarial-example