Training our model

Now that we've defined our model, we're all set to train it. Here's how we do that:

input_features = data["train_X"].shape[1]
model = build_network(input_features=input_features)
model.fit(x=data["train_X"],
          y=data["train_y"],
          batch_size=32,
          epochs=20,
          verbose=1,
          validation_data=(data["val_X"], data["val_y"]),
          callbacks=callbacks)

This should look pretty familiar if you've already read Chapter 2, Using Deep Learning to Solve Regression Problems. It's, for the most part, the same. The callback list contains the TensorBoard callback, so let's watch our network train for 20 epochs and see what happens:
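In case you haven't built the callback list yet, here is a minimal sketch of what it might look like. The log directory path and the helper function name are assumptions for illustration; the exact setup in this book's code may differ:

```python
from tensorflow.keras.callbacks import TensorBoard


def create_callbacks():
    # Assumed log directory; point this wherever you want
    # TensorBoard to write its event files.
    tensorboard_callback = TensorBoard(log_dir="./logs",
                                       histogram_freq=1,
                                       write_graph=True)
    return [tensorboard_callback]


callbacks = create_callbacks()
```

You can then launch TensorBoard with `tensorboard --logdir ./logs` and watch the loss curves update as training runs.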

[TensorBoard screenshot: training and validation loss over 20 epochs]

While our training loss continues to trend downward, we can see that our val_loss is jumping all over the place. We're overfitting after about the eighth epoch.

There are several ways that we can reduce the variance in our network and manage this overfitting, and we will cover most of those methods in the next chapter. Before we do, however, I want to show you something useful called the checkpoint callback.
