Comparison of Interpretable Models on Telecom Churn Data

N. Sashi Kiran¹, Dr. T. Uma Devi²

Abstract

Machine learning models play a major role in the prescriptive and predictive analysis of many fields of research and business. However, these models often act as black boxes, and the decisions they make are not explained. Understanding the decisions of these black-box algorithms is critical for optimizing business metrics. While the interpretability of machine learning models has been extensively studied and demonstrated in the context of various research fields and business models, the individual interpretation techniques and the comparative analysis between them are not well explored.

Different types of interpretation methods are used to explain machine learning models. In this paper, interpretable models were built using LIME and SHAP, and a comparative study was performed between the models' predictions and the actual predictions using binning as a metric. The usage pattern of global and local interpretable models is also established.
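As a rough illustration of how the two interpretation methods are typically applied, the sketch below trains a churn classifier and produces a local LIME explanation for a single customer alongside a global SHAP summary. The synthetic data and feature names are illustrative assumptions, not the dataset used in the paper.

```python
# Minimal sketch: local (LIME) and global (SHAP) explanations for a churn classifier.
# The feature matrix, labels, and feature names below are synthetic placeholders.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["tenure", "monthly_charges", "total_charges", "num_complaints"]
X = rng.random((500, len(feature_names)))
y = (X[:, 1] + 0.5 * X[:, 3] > 0.9).astype(int)  # stand-in churn labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Local interpretation: explain one customer's churn prediction with LIME.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["no churn", "churn"], mode="classification")
lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba, num_features=4)
print(lime_exp.as_list())

# Global interpretation: aggregate SHAP values over the test set.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test)
# Depending on the shap version, a binary classifier yields either a list of two
# arrays (one per class) or a single 3-D array; take the churn class either way.
churn_shap = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
shap.summary_plot(churn_shap, X_test, feature_names=feature_names)
```

The LIME output ranks the features driving one prediction (local view), while the SHAP summary plot aggregates attributions across the whole test set (global view), which mirrors the global-versus-local usage pattern discussed above.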

Published
2020-07-01