There's more...

In this section, we mention a few additional, interesting topics connected to MLPs.

Multivariate setting: We can also use the multilayer perceptron in the multivariate setting. Two possible cases are (a sketch of the first case follows the list):

  • Multiple input series: Multiple time series are used to predict the future value(s) of a single time series.
  • Multiple parallel series: Multiple time series are used to predict future value(s) of multiple time series simultaneously.
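
As an illustration, here is a minimal sketch of the first case, assuming two hypothetical input series (the names series_a and series_b, and all values, are ours and purely for demonstration):

import numpy as np

# two hypothetical input series, e.g., prices of two related assets
series_a = np.arange(10, dtype=float)
series_b = np.arange(10, dtype=float) * 10

n_lags = 3
X, y = [], []
for step in range(len(series_a) - n_lags):
    # each input sample stacks the lagged values of both series
    X.append(np.concatenate([series_a[step:step + n_lags],
                             series_b[step:step + n_lags]]))
    # the target is the next value of the single series of interest
    y.append(series_a[step + n_lags])

X, y = np.array(X), np.array(y)
print(X.shape, y.shape)  # (7, 6) (7,)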

A sequential approach to defining the network's architecture: We can also define the network's architecture using nn.Sequential, which might be familiar to anyone who has worked with Keras before. The idea is that the input tensor is sequentially passed through the specified layers.

In the following, we define the same network as we have already used before in this recipe:

model = nn.Sequential(
    nn.Linear(3, 8),
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(8, 4),
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(4, 1)
)

model

Running the code prints the following architecture:

Sequential(
  (0): Linear(in_features=3, out_features=8, bias=True)
  (1): ReLU()
  (2): Dropout(p=0.2, inplace=False)
  (3): Linear(in_features=8, out_features=4, bias=True)
  (4): ReLU()
  (5): Dropout(p=0.2, inplace=False)
  (6): Linear(in_features=4, out_features=1, bias=True)
)

The layers are indexed with consecutive integers. Alternatively, we can pass an OrderedDict to nn.Sequential and provide custom names for the layers. Since a dictionary's keys must be unique, we should provide a unique name for each operation.
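
For example, the same network could be defined with custom layer names as follows (the names such as dense_1 are our own, arbitrary choice):

from collections import OrderedDict

import torch.nn as nn

model = nn.Sequential(OrderedDict([
    ('dense_1', nn.Linear(3, 8)),
    ('relu_1', nn.ReLU()),
    ('dropout_1', nn.Dropout(0.2)),
    ('dense_2', nn.Linear(8, 4)),
    ('relu_2', nn.ReLU()),
    ('dropout_2', nn.Dropout(0.2)),
    ('output', nn.Linear(4, 1)),
]))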

Estimating neural networks using scikit-learn: It is also possible to train multilayer perceptrons using scikit-learn, thanks to MLPClassifier and MLPRegressor. They are not as flexible and customizable as networks trained using PyTorch (or any other deep learning framework); however, they are a good starting point and easy to use, as they follow the familiar scikit-learn API. Here, we show the code defining a simple network resembling the one we used in this recipe. Please refer to chapter 10's Notebook in the accompanying GitHub repository for a short example of how to train such a network.

We can define the MLP model as follows:

from sklearn.neural_network import MLPRegressor

mlp = MLPRegressor(hidden_layer_sizes=(8, 4),
                   learning_rate='constant',
                   batch_size=5,
                   max_iter=1000)
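
Training and prediction then follow the standard scikit-learn API; a minimal sketch, assuming X_train, y_train, and X_test were prepared beforehand (hypothetical names):

# fit the network and predict on unseen data
mlp.fit(X_train, y_train)
y_pred = mlp.predict(X_test)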

Multi-period forecast: Lastly, we can also use the multilayer perceptron to forecast more than one timestep ahead. To do so, we need to appropriately prepare the input data and slightly modify the network's architecture to account for more than one output. We do not present the entire code here, just the modified parts. For the entire code, including the training of the network, please refer to the accompanying Notebook on GitHub.

The following code presents the modified function for transforming the time series into a dataset accepted by the MLP:

def create_input_data(series, n_lags=1, n_leads=1):
    X, y = [], []
    for step in range(len(series) - n_lags - n_leads + 1):
        end_step = step + n_lags
        forward_end = end_step + n_leads
        X.append(series[step:end_step])
        y.append(series[end_step:forward_end])
    return np.array(X), np.array(y)
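
As a quick illustration of the resulting shapes, we can apply the function to a toy series of ten observations, using three lags and a two-step-ahead target:

import numpy as np

series = np.arange(10, dtype=float)
X, y = create_input_data(series, n_lags=3, n_leads=2)
print(X.shape, y.shape)  # (6, 3) (6, 2)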

We also present the modified network's architecture:

class MLP(nn.Module):
    def __init__(self, input_size, output_size):
        super(MLP, self).__init__()
        self.linear1 = nn.Linear(input_size, 16)
        self.linear2 = nn.Linear(16, 8)
        self.linear3 = nn.Linear(8, output_size)
        self.dropout = nn.Dropout(p=0.2)

    def forward(self, x):
        x = self.linear1(x)
        x = F.relu(x)
        x = self.dropout(x)
        x = self.linear2(x)
        x = F.relu(x)
        x = self.dropout(x)
        x = self.linear3(x)
        return x
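
For instance, assuming three lagged values as input and a two-step forecast horizon (as in the forecasts discussed below), the network could be instantiated as follows:

# three lags in, two forecasted time steps out
model = MLP(input_size=3, output_size=2)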

Lastly, we examine the plotted forecasts (the figure is available in the accompanying Notebook):

For each time point, we forecast two values: time t+1 and t+2. The network's second prediction is always very close to the first one; hence, it does not accurately capture the dynamics of the stock prices.
