Training method

Let's walk through the critical pieces of this training method to make sure each step is easy to understand:

  1. First, we create a train method and loop over the number of specified epochs:
def train(self):
    for e in range(self.EPOCHS):

  2. Next, we are going to grab a batch of random images from our training dataset and create our x_real_images and y_real_labels variables:
# Grab a batch
count_real_images = int(self.BATCH/2)
starting_index = randint(0, (len(self.X_train)-count_real_images))
real_images_raw = self.X_train[starting_index:(starting_index + count_real_images)]
x_real_images = real_images_raw.reshape(count_real_images, self.W, self.H, self.C)
y_real_labels = np.ones([count_real_images, 1])
  3. Notice that we only grabbed half the number of images that we specified with the BATCH variable. Why? Because we're going to fill the other half of the batch with images produced by our generator in the next step:
# Grab Generated Images for this training batch
latent_space_samples = self.sample_latent_space(count_real_images)
x_generated_images = self.generator.Generator.predict(latent_space_samples)
y_generated_labels = np.zeros([self.BATCH-count_real_images, 1])
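
The sample_latent_space helper used here is defined elsewhere in the class. As a minimal sketch of what it might look like, assuming the latent vector length is stored in a hypothetical self.LATENT_SPACE_SIZE attribute:
import numpy as np

def sample_latent_space(self, instances):
    # Draw `instances` noise vectors from a standard normal distribution;
    # self.LATENT_SPACE_SIZE is an assumed attribute holding the latent dimension
    return np.random.normal(0, 1, (instances, self.LATENT_SPACE_SIZE))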
  4. We've now assembled a whole batch for training. We need to concatenate these two sets into the x_batch and y_batch variables for training:
# Combine to train on the discriminator
x_batch = np.concatenate([x_real_images, x_generated_images])
y_batch = np.concatenate([y_real_labels, y_generated_labels])

This is where it gets interesting: we're going to use this batch to train our discriminator. Because the labels mark which images are generated, the discriminator is constantly learning to find the imperfections that separate the generated images from the real ones.

  5. Let's train the discriminator and grab a loss value to report:
# Now, train the discriminator with this batch;
# train_on_batch returns a list of metrics, so [0] keeps just the loss value
discriminator_loss = self.discriminator.Discriminator.train_on_batch(x_batch, y_batch)[0]

We'll now train the GAN with mislabeled generator outputs. That is to say, we will generate images from noise and assign them a label of one (real) while training the GAN. Why? This is the adversarial portion of the training, where we use the newly trained discriminator to improve the generated output; the reported GAN loss describes how much the generated outputs confuse the discriminator.

  6. Here's the code to train the generator:
# Generate Noise
x_latent_space_samples = self.sample_latent_space(self.BATCH)
y_generated_labels = np.ones([self.BATCH, 1])
generator_loss = self.gan.gan_model.train_on_batch(x_latent_space_samples, y_generated_labels)
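
For this step to update only the generator, the discriminator's weights are typically frozen inside the combined GAN model before it is compiled. The exact assembly lives elsewhere in the GAN class; here is a minimal sketch of the idea (the Sequential wrapper, optimizer choice, and variable names are assumptions, not the book's exact code):
from keras.models import Sequential
from keras.optimizers import Adam

# Freeze the discriminator so the adversarial step trains the generator only
discriminator.trainable = False
gan_model = Sequential()
gan_model.add(generator)       # noise in, image out
gan_model.add(discriminator)   # image in, real/fake score out
gan_model.compile(loss='binary_crossentropy', optimizer=Adam())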
  7. Two pieces are left at the end of the script: printing the loss metrics to the screen and periodically checking the model by saving sample images to the data folder:
print('Epoch: '+str(int(e))+', [Discriminator :: Loss: '+str(discriminator_loss)+'], [ Generator :: Loss: '+str(generator_loss)+']')

if e % self.CHECKPOINT == 0:
    self.plot_checkpoint(e)
return
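
The plot_checkpoint helper is also defined elsewhere in the class. As a rough sketch of what it might do, it samples the generator and writes a grid of images to the data folder (the grid size, filename pattern, and grayscale reshape below are assumptions):
import matplotlib.pyplot as plt

def plot_checkpoint(self, e):
    # Sample 16 images from the generator using random latent vectors
    noise = self.sample_latent_space(16)
    images = self.generator.Generator.predict(noise)
    plt.figure(figsize=(10, 10))
    for i in range(16):
        plt.subplot(4, 4, i + 1)
        # Assumes single-channel images; reshape to H x W for plotting
        plt.imshow(images[i].reshape(self.H, self.W), cmap='gray')
        plt.axis('off')
    plt.tight_layout()
    plt.savefig('data/sample_'+str(e)+'.png')  # hypothetical filename pattern
    plt.close('all')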

That's how you train the GAN. You're now officially a GAN master.
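
Assuming the surrounding Trainer class wires all of these pieces together, kicking off training looks like this (the constructor arguments below are illustrative, not the book's exact signature):
# Hypothetical usage; the constructor arguments are assumptions
trainer = Trainer(width=28, height=28, channels=1,
                  epochs=50000, batch=32, checkpoint=500)
trainer.train()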
