FedInverse: Evaluating Privacy Leakage in Federated Learning
Paper/Presentation Title | FedInverse: Evaluating Privacy Leakage in Federated Learning
---|---
Presentation Type | Paper |
Authors | Wu, Di; Bai, Jun; Song, Yiliao; Chen, Junjun; Zhou, Wei; Xiang, Yong; Sajjanhar, Atul
Journal or Proceedings Title | The Twelfth International Conference on Learning Representations |
Number of Pages | 31 |
Year | 2024 |
Place of Publication | Austria |
Web Address (URL) of Paper | https://openreview.net/forum?id=nTNgkEIfeb |
Web Address (URL) of Conference Proceedings | https://openreview.net/group?id=ICLR.cc/2024/Conference#tab-accept-oral |
Conference/Event | The Twelfth International Conference on Learning Representations |
Event Details | Delivery: In person. Event Date: 7–11 May 2024. Event Location: Vienna, Austria. Event Venue: Messe Wien Exhibition and Congress Center.
Abstract | Federated Learning (FL) is a distributed machine learning technique in which multiple devices (such as smartphones or IoT devices) train a shared global model using their local data. FL promises better data privacy because individual data is not shared with servers or other participants. However, this research uncovers a critical insight: a model inversion (MI) attacker, posing as a benign participant, can invert the shared global model and obtain the data belonging to other participants. In such scenarios, distinguishing between attackers and benign participants becomes challenging, leading to severe data-leakage risk in FL. In addition, we found that even the most advanced defense approaches could not effectively address this issue. It is therefore important to evaluate the data-leakage risks of an FL system before using it. Motivated by this, we propose FedInverse to evaluate whether the FL global model can be inverted by MI attackers. In particular, FedInverse can be optimized by leveraging the Hilbert-Schmidt independence criterion (HSIC) as a regularizer to adjust the diversity of the MI attack generator. We test FedInverse with three typical MI attackers: GMI, KED-MI, and VMI. The experiments show that FedInverse can effectively evaluate the risk that attackers successfully obtain data belonging to other participants. The code for this work is available at https://github.com/Jun-B0518/FedInverse
Contains Sensitive Content | Does not contain sensitive content |
ANZSRC Field of Research 2020 | 4602 Artificial intelligence; 4604 Cybersecurity and privacy
Public Notes | Files associated with this item cannot be displayed due to copyright restrictions. |
Byline Affiliations | School of Mathematics, Physics and Computing; Deakin University; University of Adelaide; Peking University, China; Swinburne University of Technology
https://research.usq.edu.au/item/z5993/fedinverse-evaluating-privacy-leakage-in-federated-learning
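
For context on the method the abstract describes: FedInverse uses the Hilbert-Schmidt independence criterion (HSIC) as a regularizer to adjust the diversity of the MI attack generator. The sketch below is a minimal, hypothetical illustration of the biased empirical HSIC estimator, HSIC(X, Y) ≈ tr(KHLH) / (n - 1)^2, with Gaussian kernels; the function names, kernel choices, and bandwidths are assumptions, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    """RBF kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma^2))."""
    sq = np.sum(X ** 2, axis=1)
    # Pairwise squared distances; clamp tiny negatives from float error.
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(X, Y, sigma_x=1.0, sigma_y=1.0):
    """Biased empirical HSIC estimator: tr(K H L H) / (n - 1)^2,
    where H = I - (1/n) 11^T centers the kernel matrices."""
    n = X.shape[0]
    K = gaussian_kernel(X, sigma_x)
    L = gaussian_kernel(Y, sigma_y)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Toy check: dependent pairs should score higher than independent ones.
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 2))
print(hsic(Z, Z + 0.1 * rng.normal(size=Z.shape)))  # dependent -> clearly positive
print(hsic(Z, rng.normal(size=(200, 2))))           # independent -> near zero
```

How this estimator enters the attack generator's objective (its sign, weighting, and the variables it is computed over) is specified in the paper and the GitHub repository above; the sketch only shows the quantity itself.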