BADFSS: Backdoor Attacks on Federated Self-Supervised Learning
Paper/Presentation Title | BADFSS: Backdoor Attacks on Federated Self-Supervised Learning
Presentation Type | Presentation |
Authors | Zhang, Jiale; Zhu, Chengcheng; Wu, Di; Sun, Xiaobing; Yong, Jianming; and Long, Guodong
Editors | Larson, Kate |
Journal or Proceedings Title | Proceedings of the 33rd International Joint Conference on Artificial Intelligence (IJCAI-24) |
Journal Citation | pp. 548-558 |
Number of Pages | 11 |
Year | 2024 |
Place of Publication | Korea |
ISBN | 9781956792041 |
Digital Object Identifier (DOI) | https://doi.org/10.24963/ijcai.2024/61 |
Web Address (URL) of Paper | https://www.ijcai.org/proceedings/2024/61 |
Web Address (URL) of Conference Proceedings | https://www.ijcai.org/Proceedings/2024/ |
Conference/Event | 33rd International Joint Conference on Artificial Intelligence (IJCAI-24) |
Event Details | 33rd International Joint Conference on Artificial Intelligence (IJCAI-24); parent event: International Joint Conference on Artificial Intelligence; delivery: in person; dates: 3–9 Aug 2024; location: Jeju, Korea; rank: A
Abstract | Self-supervised learning (SSL) can learn remarkable representations from centrally available data. Recent works further combine federated learning (FL) with SSL to learn from rapidly growing decentralized unlabeled images (e.g., from cameras and phones), which often cannot be centralized due to privacy constraints. Extensive attention has been paid to designing new frameworks and methods that achieve better performance for SSL-based FL. However, this effort has not yet taken the security of SSL-based FL into consideration. We aim to explore backdoor attacks in the context of SSL-based FL via an in-depth empirical study. In this paper, we propose BADFSS, a novel backdoor attack against SSL-based FL. First, BADFSS learns a backdoored encoder via supervised contrastive learning on poisoned datasets constructed from local datasets. Then, BADFSS employs attention alignment to enhance the backdoor effect and maintain consistency between the backdoored and global encoders. Finally, we perform empirical evaluations of the proposed attack on four datasets and compare BADFSS with three existing backdoor attacks transferred to federated self-supervised learning. The experiments demonstrate that BADFSS outperforms the baseline methods and is effective under various settings.
Keywords | AI Ethics, Trust, Fairness: ETF: Trustworthy AI; AI Ethics, Trust, Fairness: ETF: Safety and robustness; Multidisciplinary Topics and Applications: MTA: Security and privacy |
Contains Sensitive Content | Does not contain sensitive content |
ANZSRC Field of Research 2020 | 4602. Artificial intelligence; 4604. Cybersecurity and privacy
Byline Affiliations | Yangzhou University, China; School of Mathematics, Physics and Computing; School of Business; University of Technology Sydney
https://research.usq.edu.au/item/z987q/badfss-backdoor-attacks-on-federated-self-supervised-learning
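The abstract names two attack components: a supervised contrastive loss over poisoned local data, and an attention-alignment term between the backdoored and global encoders. A minimal NumPy sketch of both is below; this is an illustration of the general techniques, not the paper's exact formulation (the function names, the squared-distance alignment form, and all hyperparameters are assumptions).

```python
import numpy as np

def sup_con_loss(features, labels, temperature=0.5):
    """Supervised contrastive loss: for each anchor, pull together all
    samples sharing its label and push apart the rest. In a backdoor
    setting, trigger-stamped samples would all carry the attacker's
    target label (illustrative assumption, not the paper's exact loss)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature  # pairwise cosine similarities
    n = len(labels)
    loss = 0.0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        denom = sum(np.exp(sim[i, k]) for k in range(n) if k != i)
        loss += -np.mean([np.log(np.exp(sim[i, j]) / denom) for j in positives])
    return loss / n

def attention_alignment(local_maps, global_maps):
    """Mean squared distance between L2-normalized attention maps of the
    backdoored (local) and global encoders -- a generic stand-in for an
    attention-alignment penalty keeping the two encoders consistent."""
    a = local_maps / (np.linalg.norm(local_maps, axis=-1, keepdims=True) + 1e-8)
    b = global_maps / (np.linalg.norm(global_maps, axis=-1, keepdims=True) + 1e-8)
    return float(np.mean((a - b) ** 2))
```

A local attacker would then minimize something like `sup_con_loss(z, y) + lam * attention_alignment(A_local, A_global)`, trading off backdoor strength against staying close to the global model (the weighting `lam` is likewise an assumption).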