Now that we've defined our neural network and loaded our data, all that's left is to train it.
Here's how I'll train the model we've just built:
model = build_network(data["train_X"].shape[1])
model.fit(x=data["train_X"], y=data["train_y"],
          batch_size=30,
          epochs=50,
          validation_data=(data["val_X"], data["val_y"]),
          verbose=1,
          callbacks=callbacks)
I'm reusing the same callbacks we've used in previous chapters. However, I'm not using the ROC AUC callback we built in Chapter 4, Using Keras for Binary Classification, because ROC AUC isn't clearly defined for multiclass classifiers.
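As a reminder, here's a minimal sketch of what that callbacks list might contain. This is an illustration, not the book's exact code: it assumes the `tensorflow.keras` API, and the log directory and checkpoint filename pattern are hypothetical.

```python
from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard


def create_callbacks():
    # Write training/validation curves so we can watch them in TensorBoard.
    # The "./logs" directory is illustrative.
    tensorboard = TensorBoard(log_dir="./logs")
    # Save the model whenever validation loss improves; the filename
    # pattern here is a hypothetical example.
    checkpoint = ModelCheckpoint(filepath="model.{epoch:02d}.h5",
                                 monitor="val_loss",
                                 save_best_only=True)
    return [tensorboard, checkpoint]


callbacks = create_callbacks()
```

Passing this list to `model.fit()` is what produces the TensorBoard graphs we're about to look at.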
Let's watch TensorBoard as our model trains:
Before you read the next paragraph, take a second and think about what these graphs are telling us. Got it? OK, let's move on.
So, this is a familiar situation. Our training loss is continuing to creep down while our validation loss is going up: we're overfitting. While early stopping is certainly an option, let me show you a few new tricks for handling overfitting. We'll look at dropout and L2 regularization in the next section. Before we do, however, we should look at how to measure accuracy and make predictions using a multiclass network.
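To build some intuition before that section, here's a quick conceptual sketch in plain NumPy (my own illustration, not the book's code) of what those two tricks do: inverted dropout randomly zeroes a fraction of a layer's activations during training and rescales the survivors, while L2 regularization adds a penalty proportional to the sum of squared weights to the loss, discouraging large weights.

```python
import numpy as np

rng = np.random.default_rng(42)


def dropout(activations, rate):
    """Inverted dropout: zero each activation with probability `rate`,
    then scale the survivors so the expected value is unchanged."""
    keep_prob = 1.0 - rate
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob


def l2_penalty(weights, lam):
    """L2 penalty added to the loss: lam * sum of squared weights."""
    return lam * np.sum(weights ** 2)


a = np.ones((4, 8))
dropped = dropout(a, rate=0.5)
# Roughly half the activations become 0; survivors are scaled to 2.0.

w = np.array([[1.0, -2.0], [3.0, 0.5]])
penalty = l2_penalty(w, lam=0.01)  # 0.01 * (1 + 4 + 9 + 0.25) = 0.1425
```

In Keras, the `Dropout` layer and the `kernel_regularizer` argument on `Dense` layers do this bookkeeping for us, as we'll see next.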