Developing a Robust Algorithm for Security in Black-Box Systems Using Explainable Artificial Intelligence (XAI)
Abstract
A key limitation of Artificial Intelligence based systems is that they often lack transparency. The black-box nature of these systems enables powerful predictions, but their decisions cannot be directly explained. This issue has triggered a new debate on explainable AI (XAI), which must be addressed for AI to continue making steady progress without disruption.
Explainable Artificial Intelligence (XAI) is a branch of AI that provides a set of tools, strategies, and algorithms for generating high-quality, interpretable, intuitive, and human-understandable explanations of AI decisions. This paper offers a holistic assessment of the current XAI landscape in deep learning and gives mathematical summaries of seminal work. We begin by establishing a taxonomy that categorises XAI strategies by the scope of their explanations, their algorithmic methodology, and their level of explanation or application, all of which aid the development of reliable, interpretable, and self-explanatory deep learning models. We then describe the key ideas employed in XAI research and present a timeline of significant XAI studies from 2007 to 2020. After thoroughly discussing each category of methods, we evaluate the explanation maps created by XAI algorithms on image data, highlight the limitations of this methodology, and suggest potential future routes to improve XAI assessment.
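To make the notion of an "explanation map" concrete, the following is a minimal sketch, not taken from the paper, of a gradient-based saliency map for a toy linear scorer. For a linear model f(x) = w · x, the gradient ∂f/∂x equals w, so per-pixel importance reduces to |w|; real XAI methods apply the same idea to deep networks via backpropagation. The function name `saliency_map` and the toy model are assumptions for illustration.

```python
import numpy as np

def saliency_map(weights: np.ndarray, image: np.ndarray) -> np.ndarray:
    """Per-pixel importance map for a toy linear scorer f(x) = sum(w * x).

    weights, image: arrays of the same shape (H, W).
    For this model, df/dx_ij = w_ij, so the saliency is |w|,
    normalised to [0, 1] for visualisation.
    """
    grad = np.abs(weights)      # |df/dx| for the linear model
    return grad / grad.max()    # scale so the strongest pixel is 1.0

# Toy usage: random "weights" and a random 4x4 "image"
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
img = rng.random((4, 4))
smap = saliency_map(w, img)
```

For deep models the gradient is computed numerically with respect to the input image rather than read off from fixed weights, but the resulting heat map is interpreted the same way: brighter pixels contributed more to the prediction.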