The Broad Index (also known as the Trade-Weighted US Dollar Index) seeks to measure the value of the US Dollar relative to other international currencies. In this example, we use deep learning to try to predict the Broad Index.

In this article, you will see how easily you can train and validate a neural network, and then how the resulting predictions can be applied in a trading strategy. With simple manipulations of historical data, we can leverage the flexibility of deep learning in MATLAB to generate future predictions of time-series data with reasonable precision.

We use a Long Short-Term Memory (LSTM) network in this example. An LSTM is a type of neural network that can learn dependencies between different time steps of sequential data. This makes it well suited to predicting trends in data defined as a time series, like the Broad Index.

The data that we use has already been prepared for us. This code focuses on the simplicity of setting up, training, and testing your deep learning model. Because we are trying to predict a continuous value, we use an LSTM for regression as opposed to classification, where we would try to predict one of a discrete set of output classes.

This code has been prepared using a Live Script in MATLAB. This allows you to intersperse text and figures amongst your code to make one readable document, which you are free to share in a variety of formats (PDF, HTML or .docx).

The data is imported from the Federal Reserve Economic Dataset (FRED), and spans nearly 18 years of data from 1 January 2000 to 1 December 2017. The series can easily be imported via a datafeed in MATLAB; here, that step has already been done for us. As mentioned, the data has also already been separated into training and testing sets, such that approximately 85% of the data is used for training and the remaining 15% for testing.
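Assuming the imported series sits in a numeric vector, a chronological 85/15 split might look like this (the variable names are illustrative, not necessarily those of the original script):

```matlab
% Chronological 85/15 split: train on the earlier part of the series,
% test on the most recent part. "data" holds the Broad Index values.
numTimeSteps = numel(data);
numTrain = floor(0.85 * numTimeSteps);

dataTrain = data(1:numTrain);
dataTest  = data(numTrain+1:end);
```

A chronological split (rather than a random one) matters here: shuffling a time series before splitting would leak future information into training.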

You will notice that this is a single time series. How, then, do we obtain predictor and response variables to train our network? By shifting the series by a specified sequence length, we create a response variable from the predictor data. We do this for both the training and test data to give us predictors and responses to use for training and testing.
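With a shift of one time step, the idea above reduces to a one-line slice: at each time t the network sees x(t) and learns to predict x(t+1). A minimal sketch, reusing the illustrative names from before:

```matlab
% Predictors are the series up to the second-to-last value; responses are
% the same series shifted forward by one step.
XTrain = dataTrain(1:end-1);   % predictor: value at time t
YTrain = dataTrain(2:end);     % response:  value at time t+1

XTest = dataTest(1:end-1);
YTest = dataTest(2:end);
```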

When you build any deep neural network in MATLAB, including an LSTM, you need three things:

- training data
- network architecture, i.e. the layers
- training options

We already have imported our training data in the previous section.

We now define the network layers as below. For an LSTM, we need a sequence input layer where we specify the number of features we are trying to predict. In this case, nFeatures = 1 because we are only trying to predict the next value. We specify an LSTM layer with 256 hidden units. A network modelling a very simple, linear sequence could have a single hidden unit in its LSTM layer; as the system becomes more complex, more hidden units are required to capture that complexity. As the Broad Index is volatile and non-linear, we choose 256 hidden units as a starting point. A fully connected layer is added, followed by a regression layer, as this is a regression problem.
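The layer stack described above can be written in a few lines using the standard Deep Learning Toolbox layer functions:

```matlab
nFeatures    = 1;    % one value in, one value predicted per time step
nHiddenUnits = 256;  % starting point for a volatile, non-linear series
nResponses   = 1;

layers = [ ...
    sequenceInputLayer(nFeatures)
    lstmLayer(nHiddenUnits)
    fullyConnectedLayer(nResponses)
    regressionLayer];
```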

It is now time to specify some parameters to use when training the network. The initial learning rate is set to 0.01, and it governs how the network is optimised. If it is too small, training may take very long; if it is too large, training may not converge, or may settle at a less-than-optimal value. The mini-batch size specifies how many smaller chunks the data must be separated into. This makes training faster because smaller batches are processed at a time, instead of the whole dataset. The number of epochs describes how many times the entirety of the data must be fully passed through the network. We have chosen 6 epochs, as the network behaviour stabilises after this. Too many epochs can result in an extended training time, while too few affect the quality of the trained network.
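These options are collected with trainingOptions. The learning rate and epoch count come from the discussion above; the solver choice and mini-batch value below are assumptions for illustration, as the text does not state them:

```matlab
options = trainingOptions('adam', ...       % solver: an assumption here
    'InitialLearnRate', 0.01, ...
    'MiniBatchSize', 128, ...               % illustrative value
    'MaxEpochs', 6, ...
    'Plots', 'training-progress');          % show RMSE and loss while training
```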

There are many other training options described in the documentation, but these ones are highlighted as important to this discussion.

Using the single line of code below, train your network! All you have to do is specify the training data, network architecture as the layers, and training options.
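That single line, using the names defined in the sketches above, is:

```matlab
% Train the LSTM: data, architecture (layers), and options in one call.
net = trainNetwork(XTrain, YTrain, layers, options);
```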

The top plot shows the Root Mean Square Error (RMSE) because this is a regression problem. A network that performs classification would instead be characterised by its accuracy. Both types of network, however, can be described by their loss (the bottom plot). For both the RMSE and the loss, we ideally want the plot to tend towards zero.

The predict() function allows us to use our trained network to make predictions based on our testing and training datasets.
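A minimal sketch of those two calls, again with the illustrative variable names used earlier:

```matlab
% Generate next-step predictions for both datasets with the trained network.
YPredTrain = predict(net, XTrain);
YPredTest  = predict(net, XTest);
```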

The input data makes use of a set of normalised data. The data was normalised for a better fit and to prevent the training from diverging. Since it was standardised by scaling the training and testing sets to have zero mean and unit variance, we convert the normalised predictions back to index units using the mean (mu1) and standard deviation (sigma1).
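A sketch of both directions of that transformation, keeping the mu1 and sigma1 names from the text (the surrounding variable names are illustrative):

```matlab
% Standardise using training-set statistics (zero mean, unit variance)...
mu1    = mean(dataTrain);
sigma1 = std(dataTrain);
dataTrainStd = (dataTrain - mu1) / sigma1;

% ...and, after prediction, map the network output back to index units.
YPred = sigma1 * YPredStd + mu1;
```

Note that the same training-set mu1 and sigma1 are applied to the test data; computing fresh statistics on the test set would leak information.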

Let’s plot the true testing data against the predicted testing data to get a feel for how the network performed.
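One way to produce such a comparison plot (labels and styling are illustrative):

```matlab
figure
plot(YTest, 'b')              % observed Broad Index values
hold on
plot(YPred, 'r--')            % network predictions
hold off
legend('Observed', 'Predicted')
xlabel('Trading day')
ylabel('Broad Index')
title('Test data: observed vs predicted')
```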

The RMSE below shows that the network performs well for an initial exploration and build of a predictive model.
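The test-set RMSE is a one-liner on the de-normalised predictions:

```matlab
% Root mean square error between predictions and observed test values.
rmse = sqrt(mean((YPred - YTest).^2));
```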

Now that we have made a prediction of the future values of the broad index, what do we do with it?

Let’s backtest this prediction as a trading strategy.

Starting on a particular day, the aim will be to predict what the price will be tomorrow. We will buy assets when the predicted value is above the actual index price, and we will sell assets when the predicted value is below the actual price. Assume the initial equity to be $10,000 and the index multiplier to be $1,000.
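One plausible reading of that rule is sketched below; the original script's accounting may differ, and all names here are illustrative:

```matlab
% Minimal back-test sketch of the long/short rule described above.
initialEquity = 10000;          % starting portfolio value in dollars
multiplier    = 1000;           % dollar value of one index point

equity    = zeros(size(YTest));
equity(1) = initialEquity;
cash      = initialEquity;

for t = 1:numel(YTest)-1
    % Go long if tomorrow's prediction exceeds today's actual price,
    % otherwise go short.
    if YPred(t+1) > YTest(t)
        position = 1;           % long
    else
        position = -1;          % short
    end

    % Daily P&L: position times the index change, scaled by the multiplier.
    cash = cash + position * multiplier * (YTest(t+1) - YTest(t));
    equity(t+1) = cash;
end

plot(equity)
xlabel('Trading day')
ylabel('Portfolio value ($)')
```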

The equity curve above shows us that the portfolio value grows over the 504-day trading period.

By using deep learning, we have been able to construct a trading strategy with positive returns, built from data that is difficult to model.

Deep learning does not have to be a mysterious beast. You can see how straightforward it is in MATLAB to build a network, train it and test it. Once we have made a prediction, we can then successfully apply it elsewhere: this time, in a trading strategy. Deep learning can easily be integrated into your workflow and can allow you to take your analysis to the next level.

**References**

Numpacharoen, K. *Deep Learning in Finance: LSTM Networks for Regression* (2018).

**Download** the code and script. **Request** a trial. **Find out more** from the team.
