Making predictions

Our model is now ready for use, so we can use it to generate our predictions:

trainPred = model.predict(trainX)   # predictions on the training inputs
testPred = model.predict(testX)     # predictions on the test inputs
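As explained next, predict() processes the input samples in batches. If needed (for example, to limit memory usage), the batch size can be set explicitly; a minimal sketch, where the value of 32 is only an illustrative choice and not part of the original example:

trainPred = model.predict(trainX, batch_size=32)   # batch_size chosen only for illustration
testPred = model.predict(testX, batch_size=32)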

The predict() method generates output predictions for the input samples; computation is done in batches, and a NumPy array of predictions is returned. Previously, when the data was scaled, we used the fit_transform() function. As we said, this function is particularly useful because it stores the parameters of the transformation it applied. These parameters are needed now that the forecasts have been made: to compare them with the actual data, we must bring them back to their initial form (before normalization). In other words, the predictions must be converted back to the original scale before being compared with the actual values:

trainPred = scaler.inverse_transform(trainPred)
trainY = scaler.inverse_transform([trainY])   # trainY is one-dimensional, so it is wrapped to form a 2D array
testPred = scaler.inverse_transform(testPred)
testY = scaler.inverse_transform([testY])     # the same applies to testY
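As a quick check, the fitted scaler keeps the parameters it learned during fit_transform(), and inverse_transform() reuses them. Assuming the scaler is scikit-learn's MinMaxScaler (as is typical for this kind of 0-1 normalization), those parameters can be inspected directly; this is only an illustrative sketch, not part of the original example:

print(scaler.data_min_)   # minimum of each feature seen during fitting
print(scaler.data_max_)   # maximum of each feature seen during fitting
print(scaler.scale_)      # per-feature scaling factor applied by transform()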

Applying inverse_transform() in this way simply cancels the effect of normalization and restores the data to its original scale. To estimate the performance of the algorithm, we will calculate the root mean squared error (RMSE):

trainScore = math.sqrt(mean_squared_error(trainY[0], trainPred[:,0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(testY[0], testPred[:,0]))
print('Test Score: %.2f RMSE' % (testScore))

Root mean square error (RMSE) measures how much error there is between two datasets; in other words, it compares a predicted value with an observed value.
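For reference, the RMSE of a set of predictions is the square root of the mean of the squared differences between the predicted and observed values. A minimal NumPy sketch of the same computation performed above by math.sqrt() and mean_squared_error(), shown only as an illustrative equivalent:

import numpy

def rmse(observed, predicted):
    # Square root of the mean of the squared differences
    observed = numpy.asarray(observed, dtype=float)
    predicted = numpy.asarray(predicted, dtype=float)
    return numpy.sqrt(numpy.mean((observed - predicted) ** 2))

# Equivalent to math.sqrt(mean_squared_error(trainY[0], trainPred[:, 0])):
# trainScore = rmse(trainY[0], trainPred[:, 0])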

The following results are returned:

Train Score: 1.12 RMSE
Test Score: 1.35 RMSE

After evaluating the model's performance, we can visualize the results in a graph. Because the network's inputs are built from a look-back window, the predictions are offset with respect to the original series, so they must be shifted before plotting in order to line up with the data they refer to. This operation must be carried out both on the train set and on the test set:

trainPredPlot = numpy.empty_like(dataset)          # array with the same shape as the dataset
trainPredPlot[:,:] = numpy.nan                     # filled with NaN so that empty positions are not plotted
trainPredPlot[1:len(trainPred)+1,:] = trainPred    # train predictions, shifted forward by one step

Then perform the same operation on the test set:

testPredPlot = numpy.empty_like(dataset)                     # array with the same shape as the dataset
testPredPlot[:,:] = numpy.nan                                # filled with NaN so that empty positions are not plotted
testPredPlot[len(trainPred)+2:len(dataset),:] = testPred     # test predictions, shifted so that they follow the training portion

Finally, we have to plot the actual data and the predictions:

plt.plot(scaler.inverse_transform(dataset))   # actual data, back on the original scale
plt.plot(trainPredPlot)                       # predictions on the training set
plt.plot(testPredPlot)                        # predictions on the test set
plt.show()
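If we want the graph to be easier to read, a legend can be added to the same plot; a minimal sketch, where the label strings are only illustrative:

plt.plot(scaler.inverse_transform(dataset), label='Actual data')
plt.plot(trainPredPlot, label='Train predictions')
plt.plot(testPredPlot, label='Test predictions')
plt.legend()
plt.show()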

The following graph shows the actual data and the predictions:

Analysis of the graph confirms what the RMSE values already suggested: the model has done an excellent job of fitting both the training and the test data.
