Responsible AI Series

Download the code

In this mini-series, we are investigating three different model-agnostic machine learning interpretability techniques offered by the MathWorks product suite. These techniques are:

  1. Local Interpretable Model-Agnostic Explanations (LIME)
  2. Partial Dependence and Individual Conditional Expectations Plots
  3. Shapley Values

Interpretability of Machine Learning Models using LIME

Why Interpretable Machine Learning?

Practitioners turn to interpretable machine learning to understand how machine learning models make predictions. Interpretable machine learning refers to techniques that overcome the black-box nature of most machine learning algorithms by revealing how individual features contribute to a model's predictions. It is used for three main reasons:

  1. Overcoming the “black-box” nature of the model:
    • The internal representations of many models are not intuitive and are often difficult to understand.
    • For example: if a portfolio manager uses a model that recommends investing a large portion of available funds in a particular asset class, the manager needs to be confident in that recommendation and able to justify it to investors.
  2. Regulatory requirements:
    • In South Africa, machine learning is rarely used to implement regulatory models in the finance industry, but interpretability techniques can increase confidence in the behavior of machine learning models.
    • South African financial regulations state that if a loan applicant is declined credit, they are entitled to know why their application was declined. Any machine learning model used for such decisions therefore needs to be interpretable.
  3. Model validation:
    • Are there biases in the data?
    • Where or why are predictions wrong?
    • Explore “what-if” scenarios. This is important for stress testing and validating the model against particular edge cases.

Local Interpretable Model-Agnostic Explanations – LIME

Interpretable machine learning techniques such as Partial Dependence and Individual Conditional Expectation Plots work over an entire training or testing data set, i.e. at a global level. This article focuses on a technique known as Local Interpretable Model-Agnostic Explanations (LIME). LIME operates at the level of a single prediction of a machine learning model, whether classification or regression: it seeks to explain the factors that contribute to an individual prediction, e.g. why a particular loan application was rejected.

Figure 1 shows the trade-off between model performance and model explainability for common machine learning models. Models such as decision trees or linear regressions are inherently interpretable; however, they often lack the predictive power and accuracy of less explainable models.

[Figure 1: Trade-off between model performance and model explainability]

How does LIME work? LIME approximates a complex model (e.g. an SVM, ensemble, boosted ensemble, or random forest) in the neighbourhood of interest by identifying important predictors and fitting a simple interpretable model, such as a decision tree or a linear model. The simple model is then used to “explain” how the complex model behaves in that neighbourhood. Figure 2 illustrates the LIME workflow.

[Figure 2: The LIME workflow]

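The perturb-and-fit procedure described above can be sketched in a few lines of Python. This is a hedged illustration of the general LIME idea, not MATLAB's implementation: the function name, the Gaussian sampling scale, and the kernel width are all assumptions made for this example.

```python
import numpy as np

def lime_explain(predict_fn, query_point, n_samples=1000, kernel_width=0.75, seed=0):
    """Fit a weighted linear surrogate around query_point and return its
    coefficients as local feature importances (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    d = query_point.shape[0]
    # 1. Perturb: draw synthetic samples in the neighbourhood of interest.
    X = query_point + rng.normal(scale=0.5, size=(n_samples, d))
    # 2. Query the complex model on the synthetic samples.
    y = predict_fn(X)
    # 3. Weight each sample by proximity to the query point (Gaussian kernel).
    dist = np.linalg.norm(X - query_point, axis=1)
    w = np.exp(-(dist ** 2) / (kernel_width ** 2))
    # 4. Fit a simple interpretable model: weighted least squares with intercept.
    A = np.hstack([np.ones((n_samples, 1)), X])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    # coef[1:] are the local importances of each predictor.
    return coef[1:]

# Example: a nonlinear "black box" that, near the origin, is dominated by
# its first feature (sin(x0) is locally linear; 0.1*x1^2 is locally flat).
black_box = lambda X: np.sin(X[:, 0]) + 0.1 * X[:, 1] ** 2
importances = lime_explain(black_box, np.array([0.0, 0.0]))
```

Reading the surrogate's coefficients at the query point shows the first predictor dominating locally, which mirrors how LIME reports important predictors for a single query point.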
To explore in more detail how the built-in MATLAB lime function assists in interpreting a machine learning model, download the interactive Live Script. It walks you through an example of predicting credit card default and interpreting local default predictions with the metrics determined by LIME.

What’s to come?

In the next part of the series, we will look at how you can interpret your data at a global level using Partial Dependence Plots and Individual Conditional Expectation Plots.
