Securing Artificial Intelligence Applications using Protector Hiding
Artificial Intelligence (AI) is advancing rapidly, and researchers continue to broaden the scope of AI instruments; the field underpins much existing research and shapes many emerging research topics. As the name suggests, AI refers to intelligence exhibited by machines that act as people do to accomplish the objectives they are given. One such application of AI is providing safeguards against present-day cyber threats. Yet while AI is described theoretically in terms of intelligence, its implementations consist of a great deal of code that must itself be secured against intruders. AI has enormous potential to make the world a better, wiser place, but it also poses many security dangers. Because of a lack of security awareness during the early development of AI systems, attackers can alter inference results in ways that lead to misjudgment. Such security weaknesses can be disastrous in vital domains such as healthcare, transportation, and surveillance, where successful attacks on AI systems can cause property damage or put people's lives at risk. The technologies used to implement AI code must therefore be protected. Several lines of research exist, such as default defense systems, but these remain insufficiently secure. The authors of this paper propose a technique called "Protector Hiding" as an alternative for securing Artificial Intelligence.