Measuring the performance of our model

Now that our MLP has been trained, we can start to understand how good it is. To do so, I'll make predictions on our Train, Val, and Test datasets and measure the mean absolute error (MAE) of each. The code to do that is as follows:

print("Model Train MAE: " + str(mean_absolute_error(data["train_y"], model.predict(data["train_X"]))))
print("Model Val MAE: " + str(mean_absolute_error(data["val_y"], model.predict(data["val_X"]))))
print("Model Test MAE: " + str(mean_absolute_error(data["test_y"], model.predict(data["test_X"]))))

For our MLP, this is how well we did:

Model Train MAE: 0.190074701809
Model Val MAE: 0.213255747475
Model Test MAE: 0.199885450841
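
Since our data has been scaled to zero mean and unit variance, these MAE values are in standardized units. If you'd like to report the error in the target's original units, here is a minimal sketch of converting it back. It assumes the targets were standardized with a scikit-learn StandardScaler; the y_scaler name and the toy target values below are hypothetical stand-ins for whatever your pipeline actually fit:

from sklearn.preprocessing import StandardScaler
import numpy as np

# Hypothetical stand-in: in a real pipeline, y_scaler would be the
# StandardScaler fit on the training targets before training.
y_raw = np.array([[1.2], [3.4], [2.1], [5.6]])  # placeholder target values
y_scaler = StandardScaler().fit(y_raw)

# Standardization is linear and MAE is an average of absolute residuals,
# so multiplying the scaled-space MAE by the target's standard deviation
# recovers the MAE in the target's original units.
scaled_test_mae = 0.199885450841  # our Test MAE from above
print("Test MAE in original units: " + str(scaled_test_mae * y_scaler.scale_[0]))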

The Train MAE is 0.19, and our Val MAE is 0.21. These two errors are close to each other, so overfitting isn't something I'd be too concerned about. Because I expected some amount of overfitting and don't see it (overfitting is usually the bigger problem), I hypothesize that this model might have too much bias. Said another way, we might not be able to fit the data closely enough. When this occurs, we need to add more layers, more neurons, or both to our model. We need to go deeper. Let's do that next.

We can attempt to reduce the network's bias by adding parameters to it, in the form of more neurons, more layers, or both. While you might be tempted to start tuning your optimizer, it's usually better to find a network architecture you're comfortable with first.
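
As a preview of what going deeper might look like, here is a minimal sketch of a wider, deeper MLP built with Keras's functional API. The three hidden layers, the 32 neurons per layer, and the build_deeper_network name are illustrative assumptions, not the exact architecture we'll settle on:

from keras.layers import Input, Dense
from keras.models import Model

def build_deeper_network(input_features):
    # Stack several wider hidden layers to give the network more
    # capacity, which is how we attack a high-bias problem.
    inputs = Input(shape=(input_features,), name="input")
    x = Dense(32, activation='relu', name="hidden1")(inputs)
    x = Dense(32, activation='relu', name="hidden2")(x)
    x = Dense(32, activation='relu', name="hidden3")(x)
    prediction = Dense(1, activation='linear', name="output")(x)
    model = Model(inputs=inputs, outputs=prediction)
    model.compile(optimizer='adam', loss='mean_absolute_error')
    return model

Once a deeper model like this is built, the training and measurement code from above applies unchanged; only the architecture differs.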