
6 reasons why model interpretability is important

Machine Learning has become a fundamental part of many companies’ data science efforts. But as models have grown more complex, it can be difficult to know what is causing them to make certain predictions. That is why we observe a rapid rise of interpretability tools such as SHAP or DALEX.
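To give a feel for what such tools look like in practice, here is a minimal sketch (not taken from any real project) of running SHAP on a tree-based model; the synthetic data and the random forest are placeholders for your own model and dataset.

```python
# Minimal SHAP sketch on placeholder data; adapt the model and data to your own case.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=10, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)     # explainer specialised for tree ensembles
shap_values = explainer.shap_values(X)    # per-feature contribution to each prediction
shap.summary_plot(shap_values, X)         # global summary: which features drive the model
```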

In this article, I will discuss some reasons why interpretability is so important. There are more of them than you might expect, and some are far from obvious.

1. More confidence that the model works well

One of the most important aspects of interpretability is that it allows you to gain confidence in your model and be sure that it does what it was supposed to do.

The first step in assessing model quality is a properly defined metric, chosen for the application: it can be, for example, accuracy, F1 score, or MAPE. However, even a well-chosen metric can be computed in the wrong way or turn out to be uninformative. Hence, we usually achieve the highest confidence in model quality only by understanding what the model is doing and why it gives the predictions it does.
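The snippet below is a small, made-up illustration of an uninformative metric: on a heavily imbalanced problem, accuracy looks excellent while F1 score reveals that the model never finds the positive class.

```python
# Toy example: accuracy can look great while the model is useless for the rare class.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0] * 95 + [1] * 5   # 5% positive class
y_pred = [0] * 100            # a "model" that always predicts the majority class

print("accuracy:", accuracy_score(y_true, y_pred))  # 0.95, looks fine
print("F1 score:", f1_score(y_true, y_pred))        # 0.0, the positives are never caught
```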

2. Building trust

In Machine Learning projects, trust is the foundation. Without trust, you cannot build relationships or collaborations. Trust allows you to work together and share data, knowledge, and expertise. It also allows you to continue working with each other in the future.

In order for your model to be interpretable and trustworthy, it needs to have clear explanations of what it does and why it does it. The more transparent you can make your model’s output, the more likely it will be that others will trust your results — and want to work with you again!

3. Debugging

Understanding the model and how it works is extremely important when the model’s output falls below your expectations and you start investigating what is going on. If you do not understand the model, it is very hard to debug it when something goes wrong with your prediction results.

For example, if an algorithm gives accurate predictions for some data points in a test set but not for others, perhaps the well-predicted points come from the same distribution as the training set. When you understand which features your model relies on most, you may find that the mis-predicted points are outliers with respect to those features, or that some feature values are missing for them.
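A rough sketch of this debugging workflow, under the assumption of a tree-based scikit-learn model and synthetic placeholder data, could look like this: take the features the model relies on most and check whether the mis-classified test points fall outside the range seen during training.

```python
# Sketch: are the mis-classified points outliers on the model's most important features?
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

top = np.argsort(model.feature_importances_)[::-1][:3]   # 3 most important features
wrong = model.predict(X_test) != y_test                   # mis-classified test points

for f in top:
    lo, hi = np.percentile(X_train[:, f], [1, 99])        # typical range seen in training
    out = (X_test[wrong, f] < lo) | (X_test[wrong, f] > hi)
    print(f"feature {f}: {out.mean():.0%} of wrong predictions fall outside the training range")
```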

4. Simpler alternative

Thanks to interpretability, you can learn which features are important for your model and propose a simpler alternative model with similar predictive power.

For example, suppose you have a classification problem with thousands of features and you discover that only 10% of them are significant for predicting customer churn. Now that you know which features matter and which do not, you may be able to remove the unimportant features and retrain your model. It will be faster, because it has fewer parameters, and the process of data gathering and preprocessing will also be simpler.
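Here is a sketch of that idea on synthetic placeholder data (your churn dataset would take its place): select only the features the model finds important, retrain, and compare against the full model.

```python
# Sketch: drop low-importance features and check how much predictive power remains.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=100, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Keep features whose importance is above the mean importance (the default threshold here).
selector = SelectFromModel(RandomForestClassifier(random_state=0)).fit(X_train, y_train)
X_train_small, X_test_small = selector.transform(X_train), selector.transform(X_test)

full = RandomForestClassifier(random_state=0).fit(X_train, y_train)
small = RandomForestClassifier(random_state=0).fit(X_train_small, y_train)

print("kept", X_train_small.shape[1], "of", X_train.shape[1], "features")
print("full  model accuracy:", full.score(X_test, y_test))
print("small model accuracy:", small.score(X_test_small, y_test))
```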

5. Extending domain knowledge

The tools for model explainability show us the features that affect the model’s output the most. Some relations among features and their correlation with the output are intuitive and well known to experts: for example, if you are trying to predict whether or not a customer will default on a loan, their repayment history will be an important factor. However, models very often detect correlations previously unexplored by humans. Thanks to this, human decisions can improve in the future, even if you decide not to replace them with the model.
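One way to surface such relationships, sketched below with scikit-learn's permutation importance (SHAP or DALEX would serve the same role), is to measure how much the score drops when each feature is shuffled; the synthetic data stands in for a real domain dataset.

```python
# Sketch: permutation importance as a starting point for discussion with domain experts.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the score drop: big drops mark the relationships
# the model relies on, which may or may not match the experts' intuition.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")
```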

6. Regulatory compliance

If a model is being used to determine whether or not you can get approved for something (a loan, medical insurance, etc.), then it is important to be sure that the model makes sense and provides an accurate answer. If the decision is based on an opaque black box, you may have no idea why one person was approved and another was not. This lack of transparency also makes it difficult to prove in court that the model works properly if issues with its predictions arise.

These six reasons explain why the interpretability of a model is important.

Having an interpretable model is important for being sure that the model works well and for building trust between you and your end users. It is also important for debugging the model, which can help you quickly isolate issues and fix them before deploying the model in production.

Model explanation is also useful even if you do not end up using the current version of your model: it enables building a simpler alternative, and it extends the knowledge of human experts and improves their decisions.

Finally, if you are going through regulatory compliance checks, having an interpretable version of your model will make these procedures much easier.

I hope you have found this article useful in your data science journey. Please join us on our blog for more articles!

