Training

While things might seem very different at this point, training an LSTM is actually no different from training a deep neural network on a typical cross-sectional problem:

LAGS = 10

df = read_data()
df_train = select_dates(df, start="2017-01-01", end="2017-05-31")
df_test = select_dates(df, start="2017-06-01", end="2017-06-30")
X_train, X_test, y_train, y_test = prep_data(df_train, df_test, lags=LAGS)

model = build_network(sequence_length=LAGS)
callbacks = create_callbacks("lstm_100_100")

# shuffle=False preserves the temporal order of the batches,
# which the stateful LSTM depends on
model.fit(x=X_train, y=y_train,
          batch_size=100,
          epochs=10,
          shuffle=False,
          callbacks=callbacks)

model.save("lstm_model.h5")

After preparing our data, we instantiate a network with the architecture we've walked through and then call fit on it as expected.
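For reference, here's a minimal sketch of what build_network might look like. The two stacked 100-unit LSTM layers are an assumption suggested by the "lstm_100_100" callback name, and the single input feature and fixed batch size of 100 are assumptions as well:

from keras.models import Sequential
from keras.layers import LSTM, Dense

def build_network(sequence_length=10, batch_size=100, n_features=1):
    # Hypothetical sketch: layer sizes, batch size, and feature count
    # are assumptions, not necessarily the exact code used here.
    model = Sequential()
    # Stateful layers need a fixed batch size, so we declare the full
    # batch_input_shape rather than just input_shape.
    model.add(LSTM(100,
                   batch_input_shape=(batch_size, sequence_length,
                                      n_features),
                   return_sequences=True,
                   stateful=True))
    model.add(LSTM(100, stateful=True))
    model.add(Dense(1))  # single-value regression output
    model.compile(optimizer="adam", loss="mean_squared_error")
    return model

Note that with stateful=True, the batch size passed to fit has to match the one baked into batch_input_shape.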

Here I'm using a stateful LSTM. One practical benefit of stateful LSTMs is that they tend to need fewer training epochs than stateless ones. If you refactored this as a stateless LSTM, you might need 100 epochs before the network finished learning, whereas here we use only 10.
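One thing to keep in mind: Keras won't clear the carried-over cell state between epochs on its own. If you want each epoch to start from a clean state, a common pattern (a sketch, not the code above) is to drive the epochs manually and call reset_states() after each pass:

for epoch in range(10):
    # One pass over the training data; shuffle=False keeps the batches
    # in temporal order, which statefulness depends on.
    model.fit(x=X_train, y=y_train,
              batch_size=100,
              epochs=1,
              shuffle=False,
              callbacks=callbacks)
    # Clear the LSTM cell state so this epoch's history doesn't leak
    # into the next one.
    model.reset_states()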
