Machine learning is increasingly prevalent in modern technology and has been adopted across a wide range of application areas. However, researchers have demonstrated that machine learning models are vulnerable to adversarial examples in their inputs, giving rise to a field of research known as adversarial machine learning. Among potential adversarial attacks are dataset poisoning methods, in which input samples are perturbed to mislead machine learning models into producing undesirable results. While such perturbations are often subtle and imperceptible to a human observer, they can severely degrade the performance of machine learning models. This paper presents two methods of verifying the visual fidelity of image-based datasets using QR codes to detect perturbations in the data. In the first method, a verification string is stored for each image in the dataset; these strings can be used to determine whether an individual image has been perturbed. In the second method, only a single verification string is stored, which can be used to verify whether the entire dataset is intact. A rough sketch of both verification modes follows.
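As a minimal illustration of the two verification modes described above, the sketch below assumes the verification string is a cryptographic hash of the image bytes (for the per-image method) or of all per-image strings in a fixed order (for the whole-dataset method). The function names (image_verification_string, dataset_verification_string, is_image_intact, is_dataset_intact) are hypothetical; the paper's actual construction stores verification data via QR codes, which this sketch does not reproduce.

```python
import hashlib
from pathlib import Path


def image_verification_string(image_path: Path) -> str:
    """Method 1 (sketch): derive a per-image verification string.

    Assumes the string is a SHA-256 hash of the raw image bytes,
    not the paper's QR-code-based construction.
    """
    return hashlib.sha256(image_path.read_bytes()).hexdigest()


def dataset_verification_string(image_paths: list[Path]) -> str:
    """Method 2 (sketch): derive one verification string for a whole
    dataset by hashing the per-image strings in sorted-path order."""
    digest = hashlib.sha256()
    for path in sorted(image_paths):
        digest.update(image_verification_string(path).encode())
    return digest.hexdigest()


def is_image_intact(image_path: Path, stored: str) -> bool:
    # Recompute the per-image string and compare with the stored one;
    # any perturbation of the image bytes changes the hash.
    return image_verification_string(image_path) == stored


def is_dataset_intact(image_paths: list[Path], stored: str) -> bool:
    # A single comparison verifies the entire dataset, at the cost of
    # not identifying which image (if any) was perturbed.
    return dataset_verification_string(image_paths) == stored
```

The trade-off between the two modes is storage versus localization: per-image strings can pinpoint which sample was perturbed, while a single dataset-wide string needs only one stored value but reports only that something changed.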
Citation
Chow, Y., Susilo, W., Wang, J., Buckland, R., Baek, J., Kim, J., & Li, N. (2021). Utilizing QR codes to verify the visual fidelity of image datasets for machine learning. Journal of Network and Computer Applications, 173, Article 102834.