Data Poisoning Attacks with Hybrid Particle Swarm Optimization Algorithms Against Federated Learning in Connected and Autonomous Vehicles

Publication Name

IEEE Access

Abstract

As a state-of-the-art distributed learning approach, federated learning has gained considerable popularity in connected and autonomous vehicles (CAVs). In federated learning, models are trained locally, and only model parameters, rather than raw data, are exchanged to aggregate a global model. Compared with traditional learning approaches, the stronger privacy protection and reduced network bandwidth consumption offered by federated learning make it particularly attractive for CAVs. On the other hand, poisoning attacks, which can compromise the integrity of a trained model by injecting crafted perturbations into the training samples, have become a major threat to deep learning in recent years. It has been shown that the distributed nature of federated learning makes it more vulnerable to poisoning attacks, so the strategies and attack methods available to adversaries are worth studying. In this paper, two novel optimization-based black-box, clean-label data poisoning attack methods are proposed. Poisoning perturbations are generated using particle swarm optimization hybridized with simulated annealing and with a genetic algorithm, respectively. The attack methods are evaluated in experiments on a traffic sign recognition system for CAVs, and the results show that the prediction accuracy of the global model is significantly degraded even when only a small portion of the training data is poisoned using the proposed methods.
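To make the attack idea in the abstract concrete, the sketch below illustrates one of the two hybrid strategies described: particle swarm optimization combined with a simulated-annealing acceptance step to search for a small, clean-label poisoning perturbation under a black-box fitness signal. This is not the authors' implementation; the fitness function, parameter values, and image shape are hypothetical stand-ins so the example runs on its own.

```python
# Minimal sketch (assumed, not the paper's code): PSO hybridized with
# simulated annealing to craft a bounded, clean-label poisoning perturbation.
import numpy as np

rng = np.random.default_rng(0)

def fitness(perturbation, sample):
    """Hypothetical black-box objective. In the paper's setting this would
    score how much the poisoned sample degrades the (federated) global model;
    here a toy surrogate stands in so the sketch is runnable."""
    return -np.linalg.norm((sample + perturbation) - 0.5)

def pso_sa_attack(sample, eps=0.03, n_particles=20, iters=50,
                  w=0.7, c1=1.5, c2=1.5, temp=1.0, cooling=0.95):
    dim = sample.size
    pos = rng.uniform(-eps, eps, (n_particles, dim))   # candidate perturbations
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p, sample) for p in pos])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    gbest_fit = pbest_fit.max()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Standard PSO velocity/position update, clipped to the clean-label budget.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        cand = np.clip(pos + vel, -eps, eps)
        for i in range(n_particles):
            f_new, f_old = fitness(cand[i], sample), fitness(pos[i], sample)
            # Simulated-annealing acceptance: take improvements, and occasionally
            # accept worse moves with probability exp(dF / T) to escape local optima.
            if f_new >= f_old or rng.random() < np.exp((f_new - f_old) / temp):
                pos[i] = cand[i]
                if f_new > pbest_fit[i]:
                    pbest[i], pbest_fit[i] = cand[i].copy(), f_new
                    if f_new > gbest_fit:
                        gbest, gbest_fit = cand[i].copy(), f_new
        temp *= cooling
    return gbest

# Toy usage: perturb one flattened "traffic-sign" image held by a malicious client.
sample = rng.random(32 * 32)
delta = pso_sa_attack(sample)
poisoned = np.clip(sample + delta, 0.0, 1.0)
```

The perturbation stays within a small budget (eps) so the poisoned image keeps its original, plausible label, matching the clean-label assumption; the genetic-algorithm hybrid mentioned in the abstract would replace the annealing acceptance with crossover and mutation over the particle population.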

Open Access Status

This publication may be available as open access

Volume

11

First Page

136361

Last Page

136369

Link to publisher version (DOI)

http://dx.doi.org/10.1109/ACCESS.2023.3337638