Unboxing the Classification for Visualization of the Outcomes with Naïve User Perspective
Most ML-based models behave like a black box, in the sense that their behaviour is not easily understandable to naïve users. This paper proposes a two-layer framework that evaluates the learning acquired by an ML model and fosters a human user's trust through on-demand explanations. To verify the reliability of a model's learning, the idea is to understand its behaviour and assess to what extent the model has incorporated the important characteristics of the provided dataset. Information-gain measures based on entropy and the Gini index are used to compute dataset characteristics. Feature importance, a global surrogate model, and local surrogate models are used to understand model behaviour. Measuring the degree of agreement between the provided dataset and the learned model is modelled as a two-judge, n-participant rank-correlation problem. A positive association under Spearman's rank correlation serves as an indicator of the reliability of the learned ML-based model.
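The core idea of the framework can be illustrated with a minimal sketch. The code below is a hypothetical toy example, not the paper's implementation: it ranks features of a small synthetic binary dataset by information gain (computed with entropy; a Gini variant is included), treats a hard-coded list `model_scores` as a stand-in for importances that would come from the learned model, and checks the agreement of the two rankings with a tie-free version of Spearman's rank correlation.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label vector, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gini(labels):
    """Gini impurity of a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def information_gain(feature, labels, impurity=entropy):
    """Impurity of the labels minus the weighted impurity after
    splitting on each distinct value of the feature."""
    parent = impurity(labels)
    weighted = 0.0
    for v in np.unique(feature):
        mask = feature == v
        weighted += mask.mean() * impurity(labels[mask])
    return parent - weighted

def spearman(a, b):
    """Spearman's rho for tie-free score vectors:
    Pearson correlation of the rank vectors."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# Toy dataset (hypothetical): three binary features, binary label.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
X = np.column_stack([
    y ^ (rng.random(200) < 0.1),   # strongly predictive (10% noise)
    y ^ (rng.random(200) < 0.4),   # weakly predictive (40% noise)
    rng.integers(0, 2, size=200),  # pure noise
])

# Judge 1: rank features by information gain on the dataset.
dataset_scores = [information_gain(X[:, j], y) for j in range(X.shape[1])]

# Judge 2: stand-in for feature importances extracted from the model.
model_scores = [0.7, 0.25, 0.05]  # hypothetical values

# Agreement between the two judges over the n participating features.
rho = spearman(dataset_scores, model_scores)
print(rho)
```

A positive `rho` would indicate, in the spirit of the abstract, that the model's notion of importance agrees with the dataset's intrinsic characteristics; in a real pipeline `model_scores` would come from feature-importance or surrogate-model analysis.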