Data Poisoning Attack Using Hybrid Particle Swarm Optimization in Connected and Autonomous Vehicles

Publication Name

Proceedings of IEEE Asia-Pacific Conference on Computer Science and Data Engineering, CSDE 2022

Abstract

The development of connected and autonomous vehicles (CAVs) relies heavily on deep learning technology, which has been widely applied to perform a variety of tasks in CAVs. However, deep learning faces several security concerns. Data poisoning attacks, one such threat, can compromise deep learning models by injecting poisoned training samples. The poisoned models may make more incorrect predictions and, in the worst case, cause fatal CAV accidents. The principles of poisoning attacks are therefore worth studying in order to propose countermeasures. In this work, we propose a black-box, clean-label data poisoning attack method that uses hybrid particle swarm optimization with simulated annealing to generate perturbations for poisoning. The attack method is evaluated by experiments on the deep learning models of traffic sign recognition systems in CAVs, and the results show that the classification accuracies of the target models are noticeably degraded by our method using only a small proportion of poisoned samples in the GTSRB dataset.
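The abstract describes the optimizer only at a high level. As a rough illustration, the Python sketch below shows one common way to hybridize particle swarm optimization with a simulated-annealing acceptance step in order to search for a bounded perturbation under black-box queries. The objective query_model, the L-infinity bound eps, and all hyperparameters are hypothetical placeholders for illustration, not the authors' actual settings.

    # Illustrative sketch only; the paper's exact algorithm is not reproduced here.
    import numpy as np

    def query_model(x):
        """Placeholder black-box objective (assumption): higher means a more
        effective perturbation. Replace with real queries to the target model."""
        return -np.linalg.norm(x - 0.5)  # dummy surrogate for illustration

    def hybrid_pso_sa(dim, n_particles=20, n_iters=50, eps=8 / 255,
                      w=0.7, c1=1.5, c2=1.5, t0=1.0, cooling=0.95, seed=None):
        """Hybrid PSO: standard velocity/position updates, plus a simulated-annealing
        acceptance rule that lets personal bests occasionally accept worse solutions."""
        rng = np.random.default_rng(seed)
        pos = rng.uniform(-eps, eps, size=(n_particles, dim))  # candidate perturbations
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_val = np.array([query_model(p) for p in pos])
        gbest = pbest[np.argmax(pbest_val)].copy()
        temp = t0
        for _ in range(n_iters):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, -eps, eps)  # keep perturbations imperceptibly small
            vals = np.array([query_model(p) for p in pos])
            for i in range(n_particles):
                delta = vals[i] - pbest_val[i]
                # SA acceptance: take improvements always, worse moves with prob. exp(delta/T)
                if delta > 0 or rng.random() < np.exp(delta / max(temp, 1e-12)):
                    pbest[i], pbest_val[i] = pos[i].copy(), vals[i]
            gbest = pbest[np.argmax(pbest_val)].copy()
            temp *= cooling  # cool the annealing temperature each iteration
        return gbest

    # Example: generate one perturbation vector for a flattened 32x32x3 traffic sign image.
    if __name__ == "__main__":
        delta = hybrid_pso_sa(dim=32 * 32 * 3, seed=0)
        print("perturbation L_inf norm:", np.abs(delta).max())

In such a clean-label setting, the returned perturbation would be added to correctly labeled training images so the poisoned samples remain visually and semantically consistent with their labels.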

Open Access Status

This publication is not available as open access

Funding Number

23A520043

Funding Sponsor

Natural Science Foundation of Henan Province

Link to publisher version (DOI)

http://dx.doi.org/10.1109/CSDE56538.2022.10089285