Responsible AI Series: Part 4

In this Responsible AI mini-series, we have discussed three model-agnostic machine learning interpretability techniques offered by the MathWorks product suite:

  1. Local Interpretable Model-Agnostic Explanations (LIME)
  2. Partial Dependence Plots (PDP) and Individual Conditional Expectation (ICE) plots
  3. Shapley Values

In this fourth and final piece, we investigate how to choose an interpretability technique based on the nature of your project.

Interpretability of Machine Learning Models

Why Interpretable Machine Learning?

Interpretable machine learning techniques help us understand the underlying operations of black-box machine learning algorithms. With machine learning models now used across multidisciplinary projects, it is important not only to understand a model's underlying workflow, but also to choose the correct interpretability technique for the nature of the project. Machine learning interpretability is crucial because it helps build security and trust between machine learning models, their developers and users, and the regulators who approve them.

Ernst and Young classify responsible AI into three important components, shown in Figure 1: Purposeful Design, Agile Governance, and Vigilant Supervision. Our Responsible AI series sits squarely within Vigilant Supervision: its purpose is to ensure model reliability, eliminate model bias, and enhance transparency by carefully fine-tuning and evaluating machine learning systems. One of the key strategies for achieving this understanding is the use of interpretability techniques to expose the inner workings of black-box models.

Figure 1: Ernst and Young’s trusted AI framework

Interpretability Techniques

An overview of the three interpretability techniques discussed in our mini-series is presented below:

LIME

LIME operates at the local prediction level of a machine learning model, whether classification or regression. At this level, the aim is to explain the factors that contribute to an individual prediction, e.g., why a particular loan application was rejected. LIME does this by fitting a simple, interpretable surrogate model around the prediction of interest.
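
To make this concrete, here is a minimal sketch of how a LIME explanation can be produced in MATLAB with the Statistics and Machine Learning Toolbox (R2020b or later). The creditData table, its Default response variable, and the choice of five important predictors are illustrative assumptions rather than part of the original series:

    % Train a black-box classifier on a hypothetical credit-scoring table
    mdl = fitcensemble(creditData, 'Default');

    % Create a LIME explainer; by default it uses the predictor data stored in mdl
    explainer = lime(mdl);

    % Explain a single query point, e.g. one rejected loan application,
    % by fitting a simple local model that uses five important predictors
    queryPoint = creditData(1, mdl.PredictorNames);
    explainer = fit(explainer, queryPoint, 5);

    % Plot the black-box and simple-model predictions with predictor importance
    plot(explainer)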

PDP and ICE

PDP and ICE examine the effect of one or two predictors on the model's predictions. A PDP averages the model output over the remaining predictors to show the overall effect of the chosen feature values, while ICE plots show the same effect separately for each observation, without averaging.
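
As a rough sketch, again assuming MATLAB with the Statistics and Machine Learning Toolbox, a PDP and an ICE plot for a regression model can be generated with plotPartialDependence; the loanData table, its InterestRate response, and the Income predictor are hypothetical:

    % Train a black-box regression model on a hypothetical table of loan records
    mdl = fitrensemble(loanData, 'InterestRate');

    % Partial dependence of the prediction on one predictor,
    % averaged over all observations in the training data
    plotPartialDependence(mdl, 'Income')

    % Individual Conditional Expectation: one curve per observation, no averaging
    plotPartialDependence(mdl, 'Income', 'Conditional', 'absolute')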

Shapley Values

The Shapley values technique explains the contribution of each predictor to an individual prediction by estimating how far that predictor shifts the prediction away from the average prediction, which serves as the baseline for comparison.
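
The sketch below, assuming MATLAB R2021a or later with the Statistics and Machine Learning Toolbox and reusing the hypothetical creditData classifier from the LIME sketch, shows how Shapley values for a single prediction can be computed:

    % Create a Shapley explainer; the training data stored in mdl
    % defines the average (baseline) prediction
    explainer = shapley(mdl);

    % Compute Shapley values for one query point (one loan application)
    explainer = fit(explainer, creditData(1, mdl.PredictorNames));

    % Bar chart of each predictor's contribution relative to the average prediction
    plot(explainer)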

How to Choose a Suitable Interpretability Technique?

Table 1 summarises the common use cases for each class of interpretability technique. The techniques discussed in this series can be classified as Local or Global. Local techniques primarily focus on explaining the factors behind an individual prediction, whilst Global techniques describe the behaviour of a model across an entire dataset. Local techniques are commonly used to validate predictions whose driving features are not immediately obvious, and to select features that improve the explainability of a prediction and reduce model bias. Global techniques are suitable for analysing a model's overall behaviour and for comparing one model's performance against another's. Finally, the choice of technique can also depend on the user's domain knowledge: more experienced users may opt for more complex interpretability techniques. Overall, the interpretation of classification and regression models can be enhanced by using the appropriate Global or Local technique, and these techniques are often used in conjunction with one another to achieve a more holistic interpretation of a model's predictions, as in the short sketch below.
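
As a small illustration of using a Global and a Local technique together, and again reusing the hypothetical MATLAB classifier and predictor names from the sketches above, one might first inspect a predictor's overall effect and then drill into a single prediction:

    % Global view: the average effect of 'Income' on the predicted score
    % for the first class of the classifier
    plotPartialDependence(mdl, 'Income', mdl.ClassNames(1))

    % Local view: why did the model score this particular application the way it did?
    localExplainer = shapley(mdl);
    localExplainer = fit(localExplainer, creditData(1, mdl.PredictorNames));
    plot(localExplainer)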

Table 1: Classification of Interpretability Techniques

What Can I Do Next?

  • Opti-Num Solutions has been involved in numerous model building projects. Read our case study on Model Monitoring for Responsible AI to find out how we implemented a practical Responsible AI solution.
  • Request a trial.
  • Find out more from the team.
  • Visit our Financial Data Science Focus Area page to learn more.
  • Follow us

References

  1. Mathworks.com. 2021. Interpretability. [online] Available at: <https://www.mathworks.com/discovery/interpretability.html> [Accessed 21 April 2021].
  2. Analyticsvidhya.com. 2020. Global Model Interpretability Techniques for Black Box Models. [online] Available at: <https://www.analyticsvidhya.com/blog/2020/10/global-model-interpretability-techniques-for-black-box-models/> [Accessed 1 May 2021].