Explainability for machine learning models in MATLAB

As artificial intelligence becomes more prevalent in the financial industry, its use presents considerable opportunities but also challenges. AI models can price options through deep hedging rather than the Black-Scholes equations, neural networks can forecast minute-ahead foreign exchange rates, and trading algorithms can cut slippage in stock trading to improve execution. But how sure are we that such a model will perform as expected?

There is a trade-off between model simplicity and predictive power, as illustrated in Figure 1. Very simple models, such as calculation- or score-based models, are highly explainable: we can understand how they work and know what to expect from their behaviour. However, their predictive power is weak. At the other end of the spectrum sits the deep neural network: its predictive power is high, but its black-box feature extraction makes it poorly explainable.

Figure 1: Trade-off between model simplicity and predictive power

Unexplainable models carry risks for those who use them. How can a credit scoring AI model be probed when it discriminates against certain demographics? What happens when the model appears to be ‘sexist’?

There is an explicit demand, from both responsible business practitioners and regulatory bodies, for the black-box behaviour of AI models to be made explainable. There is no industry standard on how to achieve this, but there have been significant developments in improving the transparency and explainability of models. In this article, we will show you some of the model interrogation methods implemented in MATLAB that help to explain AI model behaviour. These methods are a step in the right direction towards creating more explainable AI models.

The development of explainable and robust models is not solely dependent on your choice of model. Following a particular workflow when developing your AI model also contributes to its explainability. Figure 2 illustrates such a workflow, adapted from a representative blueprint for human-friendly machine learning. Of particular importance is the fact that, while the model assessment consists of several steps, including traditional model assessment and post-hoc explanations, the assessment process is iterative. Further, a pair of “human eyes” is integral to this workflow. If the model, for any reason, becomes unsuitable or unexplainable, it is decommissioned from use.

Figure 2: Workflow for human-friendly machine learning.

Two post-hoc explanation techniques, along with their MATLAB implementations, will be discussed: the individual conditional expectation (ICE) plot and the partial dependence plot (PDP). Note that these are only a small sample of the many ways in which an AI model can be interrogated.

To explain how these plots are used, let's imagine we are developing a model that predicts the probability of a customer defaulting on a loan, and that we want to understand how a variable, for example customer age, influences the model's prediction. The PDP shows the marginal relationship between a predictor variable and the response variable. An example of a PDP is shown in Figure 3. The rationale is that customer age is varied while the other variables are kept constant, revealing the marginal effect of age on the prediction. From Figure 3, we can see that customer age appears to affect the model's output more strongly for elderly customers (age 60 and above) than for other age groups.

Figure 3: PDP for customer age
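
To make the rationale concrete, the sketch below computes a partial dependence curve for customer age by hand. The names Mdl, Tbl and CustAge are assumptions for illustration: a trained classifier, its table of predictors and the age column, with the second score column taken to be the 'default' class.

% Minimal sketch of partial dependence, under the assumptions stated above.
ageGrid = linspace(min(Tbl.CustAge), max(Tbl.CustAge), 50);
pd = zeros(size(ageGrid));
for k = 1:numel(ageGrid)
    T = Tbl;
    T.CustAge(:) = ageGrid(k);      % fix age at one value, keep all other predictors as observed
    [~, score] = predict(Mdl, T);   % class scores for every customer
    pd(k) = mean(score(:, 2));      % average predicted probability of default
end
plot(ageGrid, pd)
xlabel('Customer age'), ylabel('Average predicted probability of default')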

The ICE plot, shown in Figure 4, is based on a similar concept but offers a more granular view. Instead of the average effect of customer age, it shows a separate line for each data point. Its use is therefore to check whether the effect on each individual data point corroborates the average marginal effect shown by the PDP.

Figure 4: ICE plot for customer age
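
Staying with the same hypothetical names as in the earlier sketch (Mdl, Tbl, CustAge and ageGrid), an ICE plot simply keeps the individual prediction curves instead of averaging them; the PDP is recovered as their mean.

% Minimal ICE sketch under the same assumptions as before:
% one curve per customer, with the PDP as the average of those curves.
ice = zeros(height(Tbl), numel(ageGrid));
for k = 1:numel(ageGrid)
    T = Tbl;
    T.CustAge(:) = ageGrid(k);
    [~, score] = predict(Mdl, T);
    ice(:, k) = score(:, 2);                         % predicted probability of default per customer
end
plot(ageGrid, ice', 'Color', [0.8 0.8 0.8]), hold on
plot(ageGrid, mean(ice, 1), 'k', 'LineWidth', 2)     % averaging the ICE curves gives the PDP
xlabel('Customer age'), ylabel('Predicted probability of default')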

Both of these plots can be created in MATLAB with a single line of code, where Mdl is a trained machine learning model (anything from a support vector machine to a decision tree) and Vars lists the features to visualise.
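
A minimal sketch, assuming the plotPartialDependence function from the Statistics and Machine Learning Toolbox; for classification models, recent releases also take the class of interest as an extra argument, and the 'Conditional' option switches from a PDP to an ICE plot.

plotPartialDependence(Mdl, Vars)                              % partial dependence plot
plotPartialDependence(Mdl, Vars, 'Conditional', 'absolute')   % ICE plot: one line per observation

As an illustrative, end-to-end usage example, the sketch below trains a classifier on synthetic data and produces both plots for customer age; all names and the data itself are hypothetical rather than taken from a real credit data set.

% Illustrative sketch: train a classifier on synthetic credit data and
% produce the PDP and ICE plots for customer age.
rng(0)                                          % for reproducibility
n = 500;
CustAge = randi([21 75], n, 1);
Income  = 20e3 + 60e3*rand(n, 1);
Default = categorical(rand(n, 1) < 0.10 + 0.004*max(CustAge - 55, 0), ...
                      [false true], {'good', 'bad'});
Tbl = table(CustAge, Income, Default);

Mdl = fitctree(Tbl, 'Default');                 % any classifier could be used here

% PDP and ICE plots for the predicted score of the 'bad' (default) class
figure, plotPartialDependence(Mdl, 'CustAge', 'bad')
figure, plotPartialDependence(Mdl, 'CustAge', 'bad', 'Conditional', 'absolute')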

There is no unequivocal way to create an explainable machine learning model. However, using the “human-friendly” machine learning workflow in Figure 2 as a guide is a step in the right direction. By working to a structured workflow and making use of analysis techniques such as PDP and ICE plots, the black box of machine learning can certainly begin to open.

Speak to us about possible approaches and available functionality to improve AI explainability in MATLAB.

What Can I Do Next?

Follow us

References: