Implementation of Black Box Attack Using Explainable Artificial Intelligence (XAI) By Robustness Algorithm
This paper introduces the implementation of a black-box attack on explainable artificial intelligence (XAI). Artificial intelligence (AI) has spread into a wide range of applications, receives broad media coverage, and plays a significant role in our society. AI has proven a powerful and useful tool in many ways, but it has also raised fears of a dangerous weapon that could undermine human thinking, take people's jobs, and create mass unemployment. When applications become good enough to replace human work, changes in the job market will follow as a consequence. Without a doubt, AI and the models associated with it are widely used, and their adoption continues to grow in science and industry. Yet as these models become more complex and deliver predictions with convincing accuracy, transparency is easily lost in that complexity; as a consequence, too often the models are mere black boxes to their users.
In the era of Big Data, as the number of audit-data features increases, the performance of human-centred smart intrusion detection systems (IDS) degrades in both training time and classification accuracy, and many SVM-based intrusion detection algorithms have been widely used to identify intrusions quickly and accurately. This project proposes the SVM-GA algorithm (feature selection, feature weighting, and parameter optimization of a support vector machine based on the genetic algorithm), which combines the characteristics of the genetic algorithm (GA) and the support vector machine (SVM).
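The SVM-GA idea described above can be sketched in code. The following is a minimal illustration under stated assumptions, not the paper's actual algorithm: each chromosome encodes a binary feature mask plus log-scaled `C` and `gamma` for an RBF-kernel SVM, and fitness is cross-validated accuracy. The dataset (scikit-learn's breast-cancer set) and all GA parameters (population size, elite count, mutation rate) are hypothetical stand-ins chosen for brevity.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)  # stand-in for audit data
n_features = X.shape[1]

def fitness(chrom):
    """Cross-validated accuracy of an SVM on the selected features."""
    mask = chrom[:n_features].astype(bool)
    if not mask.any():          # empty feature set: worst fitness
        return 0.0
    C = 10 ** chrom[n_features]          # C encoded on a log10 scale
    gamma = 10 ** chrom[n_features + 1]  # gamma encoded on a log10 scale
    clf = SVC(C=C, gamma=gamma)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def random_chrom():
    """Random feature mask plus log10(C) in [-2, 2], log10(gamma) in [-4, 0]."""
    mask = rng.integers(0, 2, n_features).astype(float)
    return np.concatenate([mask, [rng.uniform(-2, 2), rng.uniform(-4, 0)]])

def crossover(a, b):
    """Single-point crossover between two parent chromosomes."""
    point = rng.integers(1, len(a))
    return np.concatenate([a[:point], b[point:]])

def mutate(chrom, rate=0.05):
    """Bit-flip mutation on the mask, Gaussian jitter on the parameters."""
    c = chrom.copy()
    for i in range(n_features):
        if rng.random() < rate:
            c[i] = 1 - c[i]
    c[n_features] += rng.normal(0, 0.1)
    c[n_features + 1] += rng.normal(0, 0.1)
    return c

# Small GA loop: keep an elite, breed the rest from elite parents.
pop = [random_chrom() for _ in range(10)]
for gen in range(5):
    scored = sorted(pop, key=fitness, reverse=True)
    elite = scored[:4]
    children = [mutate(crossover(elite[rng.integers(4)], elite[rng.integers(4)]))
                for _ in range(6)]
    pop = elite + children

best = max(pop, key=fitness)
print("best CV accuracy:", fitness(best))
```

In a real SVM-GA IDS, the fitness function would typically also penalize the number of selected features and the model's training time, so the GA trades classification accuracy against the feature-count growth that the paragraph above identifies as the bottleneck.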