Implied volatility surface generation using autoencoders in MATLAB


An essential tool in the pricing of options is the implied volatility (IV) surface. A problem faced when pricing illiquid options is that there is often insufficient market data to construct these surfaces. Alternatively, models can be used to simulate the surfaces, but these rely on assumptions which don't necessarily reflect the behaviour of the market.

A possible solution is to generate new surfaces whose characteristics do reflect the current market situation. This article explores that idea by means of a special type of deep neural network, the autoencoder, used here for data generation. The idea is that, given some training data (existing IV surfaces), the autoencoder can learn the features of typical surfaces and generate new ones.

Existing work on deep learning for IV surfaces specifically is scarce, although the use of gated neural networks has been investigated [1]. While that study considered arbitrage-free conditions and limiting boundaries, it was aimed at predicting the appearance of IV surfaces, not generating new ones.

Beyond using autoencoders to generate new IV surfaces, we can also subject those surfaces to a particular perturbation. For example, if we have an IV surface and want to simulate adverse market conditions, how would that change the appearance of the surface? How could we apply a given perturbation to the IV surface?

The example covered in this article is implemented in MATLAB, and you can run it yourself and experiment by downloading the code.

Autoencoders are a type of deep neural network trained in an unsupervised fashion to extract features from data: they learn the features of the input such that the input can be replicated at the output. They consist of three main parts: an encoder network, the latent space, and a decoder network. The encoder network compresses the input via an encoding that it learns from the training data. The compressed input now lives in the latent space, a condensed representation of the data. Thereafter, the data in the latent space is reconstructed by being passed through the decoder network. Autoencoders are typically used for data generation (the focus of this article), but also for noise removal and dimensionality reduction.
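As a rough sketch of how this looks in MATLAB's Deep Learning Toolbox, the encoder and decoder can be defined as two small dlnetwork objects. The layer sizes below are illustrative assumptions, not the architecture of the downloadable code:

% Encoder: 28-by-28-by-1 surface -> two-element latent vector
latentDim = 2;
encoderLayers = [
    imageInputLayer([28 28 1], 'Normalization', 'none')
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(latentDim)];          % the latent representation

% Decoder: two-element latent vector -> flattened 28-by-28 surface
decoderLayers = [
    featureInputLayer(latentDim)
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(28*28)];

encNet = dlnetwork(encoderLayers);
decNet = dlnetwork(decoderLayers);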

In this workflow, an implied volatility (IV) surface is sent into the autoencoder as input. The surface is compressed to the latent space, and while the surface is in the latent space, the perturbation is performed. Thereafter, the perturbed, compressed surface is converted back into an IV surface by passing through the decoder network. This workflow is depicted in Figure 1.

Figure 1: Implied volatility surface generation workflow

Before this can be done, as with any deep learning or machine learning application, training data is required. Since we did not have access to real-world market data, the training data had to be simulated. To this end, 10,000 artificial IV surfaces are synthesised, of which 8,000 are used to train the encoder and decoder networks; the autoencoder needs as much training data as possible to learn the features of IV surfaces so that it can generate realistic new ones. Special thanks go to Jörg Kienitz of Quaternion Risk Management for supplying the code that generates the IV surfaces using the Heston model. To generate the data yourself, run the live script Part1_SynthesiseData.mlx.
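To give a flavour of what such synthesis involves, the sketch below prices European calls under the Heston model and inverts them to implied volatilities over a strike/maturity grid. It assumes Financial Instruments Toolbox (optByHestonNI) and Financial Toolbox (blsimpv), and all parameter values are hypothetical; Part1_SynthesiseData.mlx performs the actual synthesis:

% Hypothetical market and Heston parameters
AssetPrice = 100;  Rate = 0.02;
Settle   = datetime(2020, 1, 1);
Maturity = Settle + calmonths(3:3:24);      % eight maturities
Strike   = (80:5:120)';                     % nine strikes
V0 = 0.04; ThetaV = 0.05; Kappa = 1.5; SigmaV = 0.3; RhoSV = -0.7;

IV = zeros(numel(Strike), numel(Maturity));
for j = 1:numel(Maturity)
    % Heston call prices, then invert Black-Scholes for implied vols
    Price = optByHestonNI(Rate, AssetPrice, Settle, Maturity(j), ...
        'call', Strike, V0, ThetaV, Kappa, SigmaV, RhoSV);
    IV(:, j) = blsimpv(AssetPrice, Strike, Rate, ...
        yearfrac(Settle, Maturity(j)), Price);
end
surf(Strike, yearfrac(Settle, Maturity), IV')   % visualise one surface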

The networks are then trained in MATLAB. Special thanks go to Tomaso Cetto from MathWorks for assistance in adapting an example using a variational autoencoder into the regular autoencoder used here. An important training parameter is the dimension of the latent space. Since this is the space to which the surfaces must be compressed, what should this dimension be? The latent space here was chosen to have dimensions 1 × 2, i.e. it contains two elements. This means that every surface (which is effectively of dimensions 28 × 28 × 1) is condensed into dimensions 1 × 2. The latent space dimension is a deep learning hyperparameter: there is no set requirement for what it should be, and the choice is application-specific. A latent space with dimensions 1 × 2 will also assist in visualisation later. To train the networks, run the live script Part2_TrainAutoencoder.mlx.
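For intuition, a regular (non-variational) autoencoder of this kind can be trained with a custom training loop in which the loss is simply the reconstruction error. The sketch below reuses encNet and decNet from the earlier sketch; the names and hyperparameters are assumptions, and Part2_TrainAutoencoder.mlx is the reference implementation:

% One training iteration (inside the usual epoch/mini-batch loops);
% avgE, avgSqE, avgD, avgSqD are initialised to [] before the loop
X = dlarray(XBatch, 'SSCB');                % XBatch is 28x28x1xminiBatchSize
[loss, gradE, gradD] = dlfeval(@modelLoss, encNet, decNet, X);
[encNet, avgE, avgSqE] = adamupdate(encNet, gradE, avgE, avgSqE, iteration);
[decNet, avgD, avgSqD] = adamupdate(decNet, gradD, avgD, avgSqD, iteration);

% Local function (placed at the end of the script)
function [loss, gradE, gradD] = modelLoss(encNet, decNet, X)
    Z    = forward(encNet, X);                    % compress to the latent space
    XHat = forward(decNet, Z);                    % reconstruct (784-by-batch)
    T    = reshape(stripdims(X), [], size(X, 4)); % flatten targets to match
    loss = mse(XHat, T);                          % reconstruction error
    [gradE, gradD] = dlgradient(loss, encNet.Learnables, decNet.Learnables);
end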

Once the networks are trained, we are ready to send an IV surface into the encoder network. This can be seen in the live script Part3_PerturbAndReconstruct.mlx. Figure 2 is an example of an input surface.

Figure 2: Input implied volatility surface

As per the workflow described in Figure 1, a perturbation must now occur. In this example, we perform a simple mathematical perturbation: the mean of the original surface is subtracted from each value, and the result is divided by the original surface's standard deviation. Note that this perturbation is applied to the surface's representation in the latent space.
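A minimal sketch of this encode-perturb-decode step might look as follows (variable names are assumptions; Part3_PerturbAndReconstruct.mlx contains the actual implementation):

X = dlarray(single(surfaceIn), 'SSCB');   % surfaceIn is one 28x28 IV surface
Z = predict(encNet, X);                   % its 1x2 latent representation

mu    = mean(surfaceIn, 'all');           % statistics of the original surface
sigma = std(surfaceIn, 0, 'all');
ZPerturbed = (Z - mu) / sigma;            % the standardising perturbation

XHat = predict(decNet, ZPerturbed);       % decode back to a surface
surfaceOut = reshape(extractdata(XHat), 28, 28);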

For comparison, the input surface is also passed through the autoencoder with no perturbation. The two output surfaces can be seen in Figure 3.

Figure 3: Output implied volatility surfaces

Figure 3 shows the difference the autoencoder makes. Figure 3b represents an IV surface that has been generated by the autoencoder and which could subsequently be used for options pricing, while Figure 3a shows the effect of the perturbation on the appearance of the IV surface.

While this example covered a lot of ground, one significant recommendation can be made: during the training of the encoder and decoder networks, conditions should be enforced so that the resulting surfaces are arbitrage-free [2]. This constrains the generated surfaces to be closer to what surfaces look like in real-world applications.
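Even without such constraints during training, generated surfaces can at least be screened afterwards. The sketch below applies two standard checks to a discrete IV grid: calendar-spread monotonicity of total implied variance (here at fixed strike, assuming negligible carry) and butterfly convexity of Black-Scholes call prices in strike. It assumes Financial Toolbox for blsprice and uniformly spaced strikes, and is a simplification of the full conditions in [2]:

% IV is an nStrikes-by-nMaturities grid; K: strikes; T: maturities in years
function ok = checkNoArbitrage(IV, S0, K, T, r)
    % Calendar spreads: total implied variance non-decreasing in maturity
    w = IV.^2 .* T(:)';
    calendarOK = all(diff(w, 1, 2) >= 0, 'all');

    % Butterflies: call prices convex in (uniformly spaced) strikes
    butterflyOK = true;
    for j = 1:numel(T)
        C = blsprice(S0, K(:), r, T(j), IV(:, j));
        butterflyOK = butterflyOK && all(diff(C, 2) >= 0);
    end
    ok = calendarOK && butterflyOK;
end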

Within this example, we have covered an extensive workflow: IV surface data synthesis, autoencoder training, and IV surface perturbation and reconstruction. These are all deep-learning, data-driven approaches to options pricing within MATLAB. Do you have any real-world IV surface data from the market? Download the code and see how the autoencoder responds to your market data.

By using MATLAB and autoencoders to generate implied volatility surfaces, maybe we are getting a step closer to solving the elusive problem of a lack of market data.


References

  • [1] Y. Zheng, Y. Yang, and B. Chen, 'Gated neural networks for implied volatility surfaces', arXiv:1904.12834 [cs, q-fin], Jan. 2020. Accessed: Aug. 05, 2020. [Online]. Available: http://arxiv.org/abs/1904.12834
  • [2] X. Wang, Y. Zhao, and Y. Bao, 'Arbitrage-free conditions for implied volatility surface by Delta', North American Journal of Economics and Finance, vol. 48, pp. 819–834, Apr. 2019, doi: 10.1016/j.najef.2018.08.011