Training (fine-tuning)

To fine-tune the network, we need to unfreeze some of those frozen layers. How many layers you unfreeze is up to you, and you can unfreeze as much of the network as you like. In practice, though, we usually only see benefits from unfreezing the top-most layers. Here I'm unfreezing only the very last inception block, which starts at layer 249 in the graph. The following code depicts this technique:

from keras.optimizers import SGD

def build_model_fine_tuning(model, learning_rate=0.0001, momentum=0.9):
    # Keep everything below the last inception block frozen.
    for layer in model.layers[:249]:
        layer.trainable = False
    # Unfreeze the last inception block so its weights can be updated.
    for layer in model.layers[249:]:
        layer.trainable = True
    # Recompile with a small SGD learning rate; trainability changes
    # only take effect after the model is compiled.
    model.compile(optimizer=SGD(lr=learning_rate, momentum=momentum),
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model
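If you're unsure where the last inception block begins, one quick way to choose a cut-off index (assuming model is the InceptionV3-based model we built earlier) is to print every layer's index and name and look for the block boundary:

# Enumerate the layers to see at which index each block starts.
for idx, layer in enumerate(model.layers):
    print(idx, layer.name)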

Also note that I'm using a very small learning rate with stochastic gradient descent for fine-tuning. It's important to move the weights very slowly at this point, to keep from taking too big a leap in the wrong direction and destroying the pretrained features. I would not recommend using Adam or RMSProp for fine-tuning. The following code depicts the fine-tuning mechanism:

callbacks_ft = create_callbacks(name='fine_tuning')

# stage 2 fit
model = build_model_fine_tuning(model)
model.fit_generator(
    train_generator,
    steps_per_epoch=train_generator.n // batch_size,
    epochs=epochs,
    validation_data=val_generator,
    validation_steps=val_generator.n // batch_size,
    callbacks=callbacks_ft,
    verbose=2)

scores = model.evaluate_generator(val_generator,
                                  steps=val_generator.n // batch_size)
print("Step 2 Scores: Loss: " + str(scores[0]) +
      " Accuracy: " + str(scores[1]))

We can review our TensorBoard graphs once again to see whether our fine-tuning effort gained us anything.

There's no doubt that our model improves, but only by a very small amount. Small as the scale is, you'll notice that the validation loss is struggling to improve and may be showing early signs of overfitting.
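If the validation loss does start to climb, one common guard is an EarlyStopping callback that halts training once validation loss stops improving. A minimal sketch, which could be appended to callbacks_ft before fitting (the patience value here is illustrative, not from the original run):

from keras.callbacks import EarlyStopping

# Stop fine-tuning after 3 epochs with no improvement in validation loss.
early_stop = EarlyStopping(monitor='val_loss', patience=3)
callbacks_ft.append(early_stop)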

In this case, fine-tuning offered little to no benefit, but that isn't always so. In this example, the target and source domains are very similar. As we learned earlier, the more the source and target domains differ, the more benefit you will get from fine-tuning.
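If your source and target domains were further apart, you might unfreeze more of the network. A minimal sketch that parameterizes the cut-off index (freeze_until is a name I'm introducing here, not from the original code):

def build_model_fine_tuning_at(model, freeze_until=249,
                               learning_rate=0.0001, momentum=0.9):
    # Everything below freeze_until stays frozen; everything above trains.
    for layer in model.layers[:freeze_until]:
        layer.trainable = False
    for layer in model.layers[freeze_until:]:
        layer.trainable = True
    model.compile(optimizer=SGD(lr=learning_rate, momentum=momentum),
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

Lowering freeze_until to the start of an earlier inception block trains more of the network, at the cost of a greater risk of overfitting on a small dataset.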
