Pretrained CNN model as a feature extractor with image augmentation

We will leverage the same data generators for our training and validation datasets that we used before. The code for building them is shown here again for ease of understanding:

train_datagen = ImageDataGenerator(rescale=1./255, zoom_range=0.3,
                                   rotation_range=50,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   horizontal_flip=True,
                                   fill_mode='nearest')

val_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow(train_imgs, train_labels_enc,
                                     batch_size=30)
val_generator = val_datagen.flow(validation_imgs, validation_labels_enc,
                                 batch_size=20)
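With batch sizes of 30 for training and 20 for validation, the steps_per_epoch and validation_steps arguments used in fit_generator later determine how many images each epoch consumes. A quick sanity check (assuming, as with the dataset used earlier in this chapter, 3,000 training and 1,000 validation images):

```python
# Each epoch draws steps * batch_size samples from the generator.
train_batch_size = 30    # batch_size passed to train_datagen.flow
val_batch_size = 20      # batch_size passed to val_datagen.flow
steps_per_epoch = 100    # value passed to model.fit_generator
validation_steps = 50

train_images_per_epoch = steps_per_epoch * train_batch_size
val_images_per_epoch = validation_steps * val_batch_size

print(train_images_per_epoch)  # 3000 -- one full (augmented) pass over the training set
print(val_images_per_epoch)    # 1000 -- one full pass over the validation set
```

So each epoch sees exactly one pass over the data, with the training images randomly augmented on the fly.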

Let's now build our deep learning model architecture. We won't extract the bottleneck features like last time since we will be training on data generators; hence, we will be passing the vgg_model object as an input to our own model:

model = Sequential() 

model.add(vgg_model)
model.add(Dense(512, activation='relu', input_dim=input_shape))
model.add(Dropout(0.3))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=2e-5),
              metrics=['accuracy'])
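As a reminder, the vgg_model object added as the first layer above is the frozen VGG-16 convolutional base built earlier. A minimal sketch of how such a frozen extractor can be constructed (assuming TensorFlow's bundled Keras and the 150x150x3 input size used for our images; in practice you would pass weights='imagenet', replaced by weights=None here only to keep the sketch light):

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Flatten
from tensorflow.keras.models import Model

# Convolutional base only (no dense head); use weights='imagenet' in practice.
base = VGG16(include_top=False, weights=None, input_shape=(150, 150, 3))
flat = Flatten()(base.output)  # flatten the final 4x4x512 feature maps
vgg_model = Model(base.input, flat)

# Freeze every layer so only the new dense classifier on top gets trained.
vgg_model.trainable = False
for layer in vgg_model.layers:
    layer.trainable = False
```

The flattened output of this base (a vector of 4 x 4 x 512 = 8,192 features) is what our dense layers consume, which is why input_dim=input_shape in the model definition above.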

As you can see, everything else stays the same. We bring the learning rate down slightly since we will be training for 100 epochs and don't want to make any sudden, abrupt weight adjustments to our model layers. Do remember that the VGG-16 model's layers are still frozen here, and we are still using it as a basic feature extractor only:

history = model.fit_generator(train_generator, steps_per_epoch=100,
                              epochs=100,
                              validation_data=val_generator,
                              validation_steps=50,
                              verbose=1)

Epoch 1/100
100/100 - 45s 449ms/step - loss: 0.6511 - acc: 0.6153 - val_loss: 0.5147 - val_acc: 0.7840
Epoch 2/100
100/100 - 41s 414ms/step - loss: 0.5651 - acc: 0.7110 - val_loss: 0.4249 - val_acc: 0.8180
...
...
Epoch 99/100
100/100 - 42s 417ms/step - loss: 0.2656 - acc: 0.8907 - val_loss: 0.2757 - val_acc: 0.9050
Epoch 100/100
100/100 - 42s 418ms/step - loss: 0.2876 - acc: 0.8833 - val_loss: 0.2665 - val_acc: 0.9000

We can see that our model achieves an overall validation accuracy of 90%, a slight improvement over our previous model. The train and validation accuracies are also quite close to each other, indicating that the model is not overfitting. This is reinforced by the following plots of model accuracy and loss:

The train and validation accuracy curves stay quite close to each other and the model doesn't overfit. We also reach 90% accuracy, which is neat! Let's save this model to disk now for future evaluation on the test data:
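The accuracy and loss curves can be drawn from the history object returned by fit_generator. A small helper along these lines (a sketch assuming matplotlib, with the 'acc'/'val_acc' key names matching the training log above; the function name and output filename are illustrative):

```python
import matplotlib.pyplot as plt

def plot_history(hist, out_file='training_curves.png'):
    """Plot train/validation accuracy and loss side by side.

    `hist` is a dict like history.history, with keys
    'acc', 'val_acc', 'loss', 'val_loss'.
    """
    epochs = range(1, len(hist['acc']) + 1)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
    ax1.plot(epochs, hist['acc'], label='train acc')
    ax1.plot(epochs, hist['val_acc'], label='val acc')
    ax1.set_title('Accuracy')
    ax1.set_xlabel('epoch')
    ax1.legend()
    ax2.plot(epochs, hist['loss'], label='train loss')
    ax2.plot(epochs, hist['val_loss'], label='val loss')
    ax2.set_title('Loss')
    ax2.set_xlabel('epoch')
    ax2.legend()
    fig.savefig(out_file)
    plt.close(fig)
    return out_file

# Usage after training: plot_history(history.history)
```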

model.save('cats_dogs_tlearn_img_aug_cnn.h5')

We will now fine-tune the VGG-16 model to build our final classifier, unfreezing blocks 4 and 5 as discussed at the beginning of this section.
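Unfreezing blocks 4 and 5 amounts to walking the model's layer list and flipping the trainable flag from the first layer of block 4 onwards, leaving the earlier blocks frozen. A sketch of that logic (layer names assumed to follow Keras's standard VGG-16 naming, e.g. block4_conv1; the helper name is illustrative):

```python
def unfreeze_from(layers, start_name='block4_conv1'):
    """Set trainable=True for every layer from `start_name` onwards,
    and freeze everything before it. Works on any objects exposing
    .name and .trainable attributes (e.g. Keras layers)."""
    trainable = False
    for layer in layers:
        if layer.name == start_name:
            trainable = True
        layer.trainable = trainable
    return layers

# Usage with the actual model:
#   unfreeze_from(vgg_model.layers)
# Then re-compile the full model with a small learning rate before
# training, so the pretrained weights are only gently adjusted.
```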
