Interpretability of Machine Learning models

Nikhil Verma
5 min read · Jun 7, 2022

If a machine learning model performs well, why not simply trust it and ignore why it made a particular decision? The problem is that a single metric, such as classification accuracy, is an incomplete description of most real-world tasks.
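To make the point concrete, here is a minimal sketch (the dataset and the "always predict the majority class" model are hypothetical, chosen only for illustration): on an imbalanced problem, a model that has learned nothing useful can still report high accuracy.

```python
# Hypothetical imbalanced dataset: 95% negative class, 5% positive class.
labels = [0] * 95 + [1] * 5

# A trivial model that always predicts the majority class.
predictions = [0] * 100

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

true_positives = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
recall = true_positives / sum(labels)

print(accuracy)  # 0.95 — looks impressive in isolation
print(recall)    # 0.0  — the model never detects the positive class
```

Accuracy alone says this model is 95% correct; only by asking further questions, including *why* it predicts what it does, do we discover it is useless for the minority class.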

Let us dive deeper into why interpretability matters. When it comes to predictive modelling, you have to make a trade-off:

  • Do you just want to know what is predicted? For example, the probability that a customer will churn or how effective some drug will be for a…
