Training with a generator

If you haven't used a generator before, it works like an iterator. Calling the ImageDataGenerator's .flow() method returns an iterator over the data you pass it; each time you draw from that iterator, it produces a new training minibatch, with random transformations applied to the images it was fed.
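
As a quick sketch, assuming the same data dictionary used in the fit call below (the specific augmentation parameter values here are illustrative choices, not from the original):

from keras.preprocessing.image import ImageDataGenerator

# example augmentation settings: small rotations, shifts, and horizontal flips
data_generator = ImageDataGenerator(rotation_range=20,
                                    width_shift_range=0.1,
                                    height_shift_range=0.1,
                                    horizontal_flip=True)

# .flow() returns an iterator over the training data
batches = data_generator.flow(data["train_X"], data["train_y"], batch_size=32)

# each draw from the iterator yields a freshly augmented minibatch of 32 images
batch_X, batch_y = next(batches)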

The Keras Model class comes with a .fit_generator() method that allows us to fit the model from a generator rather than a dataset held in memory:

model.fit_generator(data_generator.flow(data["train_X"], data["train_y"], batch_size=32),
                    steps_per_epoch=len(data["train_X"]) // 32,
                    epochs=200,
                    validation_data=(data["val_X"], data["val_y"]),
                    verbose=1,
                    callbacks=callbacks)

Here, we've replaced the traditional x and y arguments with the generator. Most importantly, notice the steps_per_epoch parameter. Because the generator can keep drawing from the training set indefinitely, applying fresh random transformations each time, we can use more minibatches per epoch than we have training examples. In this example, I'm only drawing enough batches per epoch to cover the training set roughly once (len(data["train_X"]) // 32 steps), but that isn't required; we can, and often should, push this number higher, as shown in the sketch below.
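
For example, if you wanted each epoch to see roughly twice as many augmented minibatches as a single pass over the training data, you could compute a larger step count and pass it as steps_per_epoch (the factor of 2 here is just an illustrative choice, not from the original):

# two augmented "passes" over the training data per epoch
steps_per_epoch = 2 * (len(data["train_X"]) // 32)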

Before we wrap things up, let's look at how beneficial image augmentation is in this case:

As you can see, just a little bit of image augmentation really helped us out. Not only is our overall accuracy higher, but our network is overfitting much more slowly. If you have a computer vision problem with only a small amount of data, image augmentation is something you'll want to do.
