Keras records the loss and accuracy for each epoch in the History callback returned by model.fit(), which we have saved as the variable history. We can retrieve this data from the dictionary history.history. Let's check out the dictionary keys:
print(history.history.keys())
This will output dict_keys(['loss', 'acc']).
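Note that the name of the accuracy key depends on the Keras version: older releases report it as 'acc', while newer (TensorFlow 2.x) releases use 'accuracy'. A small helper like the following sketch (the function name and the sample history values are our own, for illustration) keeps the plotting code working in either case:

```python
def get_accuracy_series(history_dict):
    """Return the per-epoch accuracy list regardless of Keras version.

    Older Keras stores it under 'acc', newer versions under 'accuracy'.
    """
    for key in ('acc', 'accuracy'):
        if key in history_dict:
            return history_dict[key]
    raise KeyError("no accuracy metric found in history")

# A hypothetical history dict from a three-epoch run:
fake_history = {'loss': [0.9, 0.5, 0.3], 'acc': [0.55, 0.72, 0.81]}
print(get_accuracy_series(fake_history))
```

With a real model you would call it as get_accuracy_series(history.history).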
Next, we will plot out the loss function and accuracy along epochs in line graphs:
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
matplotlib.style.use('seaborn')

# Plot the training loss across epochs
pd.DataFrame(history.history['loss']).plot()
plt.legend([])  # hide the default column-label legend
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.title('Training loss across 100 epochs', fontsize=20, fontweight='bold')
plt.show()

# Plot the training accuracy across epochs
pd.DataFrame(history.history['acc']).plot()
plt.legend([])
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.title('Training accuracy across 100 epochs', fontsize=20, fontweight='bold')
plt.show()
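Since loss and accuracy live on different scales, it can also be convenient to view them in a single figure with two y-axes. The following sketch does this with a twin axis; the history values are made-up stand-ins for history.history, and the Agg backend is selected only so the snippet runs headless:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend; omit in a notebook
import matplotlib.pyplot as plt

# Hypothetical per-epoch values standing in for history.history
history_dict = {'loss': [0.9, 0.6, 0.4, 0.3], 'acc': [0.5, 0.65, 0.78, 0.85]}

fig, ax_loss = plt.subplots()
ax_acc = ax_loss.twinx()  # second y-axis sharing the epoch axis
ax_loss.plot(history_dict['loss'], color='tab:red', label='loss')
ax_acc.plot(history_dict['acc'], color='tab:blue', label='accuracy')
ax_loss.set_xlabel('Epoch')
ax_loss.set_ylabel('Loss')
ax_acc.set_ylabel('Accuracy')
ax_loss.set_title('Loss and accuracy per epoch')
fig.savefig('history.png')
```

The twin-axis layout makes it easy to check that drops in loss line up with gains in accuracy epoch by epoch.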
As training progresses, the loss decreases while the accuracy increases, which is exactly what we want to see. Here is the first graph, showing the loss:
The next graph shows how the accuracy changes across epochs:
From these graphs, we can observe a general trend of decreasing loss and increasing accuracy over the course of training, with some fluctuation from epoch to epoch. We can then judge whether the final accuracy and the learning rate are satisfactory and optimize the model where necessary.
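One common way to act on these curves is to stop training once the loss plateaus. Keras provides an EarlyStopping callback for this; the following is only a simplified sketch of the underlying idea (the function name and thresholds are our own, not Keras's implementation): stop when the loss has not improved by at least min_delta for patience consecutive epochs.

```python
def should_stop(losses, patience=3, min_delta=1e-3):
    """Return True if the loss has not improved by at least `min_delta`
    over the last `patience` epochs (a simplified early-stopping check)."""
    if len(losses) <= patience:
        return False  # not enough history yet
    best_before = min(losses[:-patience])   # best loss up to the window
    recent_best = min(losses[-patience:])   # best loss inside the window
    return recent_best > best_before - min_delta

# The loss stalls around 0.4 for the last three epochs:
print(should_stop([0.9, 0.5, 0.4, 0.41, 0.40, 0.42]))
```

Running this check against history.history['loss'] after each epoch mirrors what the real callback automates for you.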