Multilayer perceptrons for time series forecasting

Multilayer perceptrons (MLPs) are one of the basic architectures of neural networks. At a very high level, they consist of three components:

  • The input layer: A vector of features.
  • The hidden layers: Each hidden layer consists of N neurons.
  • The output layer: Output of the network; depends on the task (regression/classification).

The input of each hidden layer is first transformed linearly (multiplication by weights and adding the bias term) and then non-linearly (by applying activation functions such as ReLU). Thanks to the non-linear activation, the network is able to model complex, non-linear relationships between the features and the target.
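The transformation described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the recipe's code; the feature and neuron counts (4 and 8) are arbitrary choices made for the example:

```python
import torch
from torch import nn

torch.manual_seed(42)

# a single hidden layer: linear transformation (weights + bias) ...
linear = nn.Linear(in_features=4, out_features=8)
# ... followed by a non-linear activation
activation = nn.ReLU()

x = torch.randn(1, 4)           # a batch with one 4-dimensional feature vector
hidden = activation(linear(x))  # the hidden layer's output
print(hidden.shape)             # torch.Size([1, 8])
```

Because ReLU clips negative values to zero, every element of `hidden` is non-negative.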

A multilayer perceptron contains multiple hidden layers (also called dense layers or fully connected layers) stacked on top of each other. The following diagram presents a network with a single hidden layer and an MLP with two hidden layers:

A potential benefit of using deep learning approaches for modeling time series is that they do not make assumptions about the underlying data. As opposed to the ARIMA-class models introduced in Chapter 3, Time Series Modeling, there is no need for the series to be stationary.

In this recipe, we show how to estimate a multilayer perceptron for financial time series forecasting using PyTorch.
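Before diving into the recipe, the overall architecture can be sketched as follows. This is a hypothetical model, assuming a simple one-step-ahead setup in which the last `n_lags` observations of the series are the input features; the layer sizes are illustrative, not the book's exact configuration:

```python
import torch
from torch import nn


class MLP(nn.Module):
    """A small MLP mapping the last `n_lags` observations to a one-step forecast."""

    def __init__(self, n_lags: int = 12, n_hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_lags, n_hidden),    # input layer -> first hidden layer
            nn.ReLU(),
            nn.Linear(n_hidden, n_hidden),  # second hidden layer
            nn.ReLU(),
            nn.Linear(n_hidden, 1),         # output layer: a single forecast value
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


model = MLP()
batch = torch.randn(16, 12)   # 16 windows, each holding 12 lagged observations
forecast = model(batch)
print(forecast.shape)         # torch.Size([16, 1])
```

Since the output layer is a single linear unit with no activation, the network performs regression, which is what a point forecast requires.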
