Unboxing the Classification for Visualization of the Outcomes with Naïve User Perspective

  • Pawan Kumar, Manmohan Sharma

Abstract

Most ML-based models behave like a black box in the sense that their behaviour is not easily understandable to naïve users. This paper proposes a two-layer framework for evaluating the learning acquired by an ML model and for facilitating human-user trust through on-demand explanations. To verify the reliability of a model's learning, the idea is to understand its behaviour and assess to what extent the model has incorporated the important characteristics of the provided dataset. Information-gain measures based on entropy and the Gini index are used to compute dataset characteristics. Feature importance, a global surrogate model, and local surrogate models are used to understand model behaviour. Measuring the degree of agreement between the provided dataset and the learned model is modelled as a 2-judge, n-participant rank correlation problem. A positive association under Spearman's rank correlation acts as an indicator of the reliability of the learned ML-based model.
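The following is a minimal sketch of the agreement check described in the abstract, not the authors' exact pipeline: the dataset's feature ordering (information gain, one "judge") is compared against the trained model's feature-importance ordering (the second "judge") over the features (the "participants") using Spearman's rank correlation. The dataset, the random-forest model, and the use of scikit-learn's mutual-information estimator as the information-gain measure are illustrative assumptions.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif

# Illustrative dataset; the paper's own datasets may differ.
X, y = load_breast_cancer(return_X_y=True)

# "Judge 1": dataset characteristics via an entropy-based information-gain
# measure (mutual information between each feature and the class label).
info_gain = mutual_info_classif(X, y, random_state=0)

# "Judge 2": the learned model's view via its feature-importance scores.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
model_importance = model.feature_importances_

# Each feature is a "participant"; spearmanr ranks both score vectors
# internally and returns Spearman's rho and its p-value.
rho, p_value = spearmanr(info_gain, model_importance)
print(f"Spearman's rho = {rho:.3f} (p = {p_value:.3g})")

# A clearly positive rho suggests the model's learned importance ordering
# agrees with the dataset's information-gain ordering, i.e. the model has
# picked up the important characteristics of the provided data.
```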

Published
2020-10-01
How to Cite
Pawan Kumar, Manmohan Sharma. (2020). Unboxing the Classification for Visualization of the Outcomes with Naïve User Perspective. International Journal of Control and Automation, 13(4), 1312-1325. Retrieved from http://sersc.org/journals/index.php/IJCA/article/view/33158