Towards next-generation federated learning: A case study on privacy attacks in artificial intelligence systems
Paper
Sharma, Ekta, Deo, Ravinesh C., Davey, Christopher P., Carter, Brad D. and Salcedo-Sanz, Sancho. 2024. "Towards next-generation federated learning: A case study on privacy attacks in artificial intelligence systems." 2024 IEEE Conference on Artificial Intelligence (CAI 2024), Singapore, 25-27 Jun 2024. United States: IEEE (Institute of Electrical and Electronics Engineers). https://doi.org/10.1109/CAI59869.2024.00259
Paper/Presentation Title | Towards next-generation federated learning: A case study on privacy attacks in artificial intelligence systems |
---|---|
Presentation Type | Paper |
Authors | Sharma, Ekta, Deo, Ravinesh C., Davey, Christopher P., Carter, Brad D. and Salcedo-Sanz, Sancho |
Journal or Proceedings Title | Proceedings of 2024 IEEE Conference on Artificial Intelligence (CAI 2024) |
Journal Citation | pp. 1446-1452 |
Number of Pages | 8 |
Year | 2024 |
Publisher | IEEE (Institute of Electrical and Electronics Engineers) |
Place of Publication | United States |
ISBN | 9798350354096 |
Digital Object Identifier (DOI) | https://doi.org/10.1109/CAI59869.2024.00259 |
Web Address (URL) of Paper | https://ieeexplore.ieee.org/document/10605448 |
Web Address (URL) of Conference Proceedings | https://ieeexplore.ieee.org/xpl/conhome/10605128/proceeding |
Conference/Event | 2024 IEEE Conference on Artificial Intelligence (CAI 2024) |
Event Details | Delivery: In person; Event Date: 25-27 Jun 2024; Event Location: Singapore |
Abstract | Accuracy and trust are crucial for ChatGPT and other artificial intelligence (AI) markets. One frequently overlooked challenge is data leakage, which has highly consequential implications. Federated learning (FL) is recognised as a new era of secure AI systems, with the FL market estimated to reach USD 266.77 million by 2030 according to Polaris Market Research (1). This paper focuses on FL-based approaches for improving AI safety and examines the significance of deep learning (DL) and its privacy implications. This has been achieved through six models: Federated Convolutional Neural Network (F-CNN), Federated Averaging CNN (FA-CNN), Federated Adam (FA), Malicious Generative Adversarial Network (MGAN), Federated MGAN (FMGAN) and Conditional GAN (CGAN). The authors analysed the MNIST and CIFAR-10 datasets and conducted extensive numerical evaluations to confirm improved user privacy in federated learning for AI models. A case study with fast convergence speed and excellent asymptotic test accuracy was designed to outline white-box attacks on the MGAN, FMGAN and CGAN models. The study also implemented active inference attacks on deep neural networks trained without sharing raw data through FL. We created 256 synthetic images specifically to test the effectiveness of the original classifier. These counterfeit visuals effectively deceived the classifier, appearing as legitimate representations of the true class labels. Trimming shared parameters was ineffective in preventing the attack, revealing limitations in collaborative learning. The generator achieved the lowest loss (0.0104) of all models in the study and was also the fastest after the FMGAN model. FMGAN performed best with the highest accuracy (0.9613), followed by CGAN (0.9208), MGAN (0.9163), FA (0.5148), F-CNN (0.4376) and FA-CNN (0.4285). It also demonstrated high efficiency, completing a successful attack within 0.7459 milliseconds.
The Adam-led federated approach exhibited the longest processing time, at approximately 10.52 minutes. The case study illustrates the risks of surveillance and manipulation by attackers, who pressured participants into disclosing confidential information. It also aimed to increase flexibility and robustness. Our work is accessible to diverse audiences, facilitating the adoption and practical application of deep learning methods for privacy protection by major corporations. |
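The federated models described in the abstract rely on server-side aggregation of client updates. As a minimal sketch of the federated averaging idea underlying models such as FA-CNN (a generic illustration, not the authors' implementation; `fed_avg` and the toy weight lists are hypothetical names introduced here), the server combines client parameters weighted by local dataset size, so no raw data leaves the clients:

```python
# Minimal sketch of federated averaging (FedAvg-style aggregation).
# Generic illustration only: model parameters are plain lists of floats,
# and client_sizes are the local dataset sizes used as weights.

def fed_avg(client_weights, client_sizes):
    """Return the size-weighted average of client parameter vectors."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Example: two clients; client B holds 3x as much data as client A.
w_a = [0.0, 2.0]  # client A parameters after local training
w_b = [1.0, 4.0]  # client B parameters after local training
global_w = fed_avg([w_a, w_b], client_sizes=[100, 300])
# -> [0.75, 3.5]: client B's larger dataset dominates the average
```

Only these aggregated parameters are shared between rounds, which is precisely the surface the paper's inference attacks target: the updates themselves can still leak information about the underlying data.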
Keywords | Artificial Intelligence; Machine Learning; Federated Learning; Data Security; Attacks; Deep Learning |
Contains Sensitive Content | Does not contain sensitive content |
ANZSRC Field of Research 2020 | 4602. Artificial intelligence |
Public Notes | Files associated with this item cannot be displayed due to copyright restrictions. |
Byline Affiliations | School of Mathematics, Physics and Computing; Centre for Astrophysics; University of Alcala, Spain |
Permalink | https://research.usq.edu.au/item/z9965/towards-next-generation-federated-learning-a-case-study-on-privacy-attacks-in-artificial-intelligence-systems |