Adversarial Attacks and Defenses in a Self-driving Car Environment

Deya Chatterjee, Poovammal

Abstract

In an age where machine learning and deep learning are ubiquitous in software and technology, ML/DL techniques have achieved numerous state-of-the-art results in use cases such as fraud detection and image classification, but concerns have also been raised about the potential threats they pose through breaches of privacy and security. A relevant problem in machine learning security is the adversarial attack, in which perturbing an input with an imperceptible amount of noise can “fool” a neural network and cause it to misclassify that input. This problem is particularly pronounced in self-driving cars, where an adversary can perturb the images of traffic signals or road signs perceived by the car's sensors. This can, for example, lead the car to turn left when a sign tells it to stop, or to turn right (and crash) when it is supposed to go straight, which would lead to fatalities and widespread chaos. Hence, crafting appropriate defenses for each scenario of adversarial attack is important to actually realize the idea of autonomous driving and enable safe use of SDCs. We have studied various types of adversarial attacks, both physical (i.e., out-of-environment) and in-environment, as well as suitable defenses. After modeling the attacks, the drop in accuracy or performance of the simulation was studied. The attacks were modeled with the CleverHans library and the CARLA simulator.
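To illustrate the kind of perturbation described above, the sketch below shows the Fast Gradient Sign Method (FGSM), one of the classic attacks implemented in libraries such as CleverHans. It is a minimal illustration only, not the authors' exact experimental setup; the model, image batch, and labels (sign_classifier, stop_sign_batch, true_labels) are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, eps=0.03):
    """Craft an adversarial example with the Fast Gradient Sign Method.

    For small eps the perturbation is imperceptible to a human, yet it can
    flip the model's prediction (e.g. a stop sign read as a speed limit).
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)                      # forward pass
    loss = F.cross_entropy(logits, label)      # loss w.r.t. the true label
    loss.backward()                            # gradient of the loss w.r.t. the input
    # Step in the direction that increases the loss, then keep pixel values valid.
    adv_image = image + eps * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()

# Hypothetical usage with a pretrained traffic-sign classifier:
# adv = fgsm_attack(sign_classifier, stop_sign_batch, true_labels, eps=0.03)
# sign_classifier(adv).argmax(dim=1)   # may no longer predict "stop"
```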

Keywords: autonomous cars, machine learning, deep learning, adversarial attack.

Published
2020-05-05
How to Cite
Deya Chatterjee, Poovammal. (2020). Adversarial Attacks and Defenses in a Self-driving Car Environment. International Journal of Advanced Science and Technology, 29(06), 2233-2240. Retrieved from http://sersc.org/journals/index.php/IJAST/article/view/13512