Training method

The train method shares common code with the DCGAN implementation, with a few critical changes: we need to collect data for our batch generator differently, and we need to train each of the networks we just developed (four in total: the two discriminators directly, and the two generators through the combined model):

  1. Start out by defining the train method in a similar way to our DCGAN implementation, except with multiple sets of training folders:

    def train(self):
        for e in range(self.EPOCHS):
            b = 0
            X_train_A_temp = deepcopy(self.X_train_A)
            X_train_B_temp = deepcopy(self.X_train_B)

            while min(len(X_train_A_temp), len(X_train_B_temp)) > self.BATCH:
                # Keep track of batches
                b = b + 1
Because each batch holds a single image, the two domains are not strictly required to contain the same number of images. This means that our while statement needs to account for one folder being smaller than the other: the epoch ends when the smaller of the two image arrays, A or B, has no more images left.
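To see why the epoch ends when the smaller domain runs out, here is a minimal, self-contained sketch of the consumption loop, with plain NumPy arrays standing in for image data (the array sizes and BATCH value are made up for illustration):

```python
import numpy as np
from random import randint

# Standalone sketch: two unequal "image" arrays are drained batch-by-batch
# until the smaller one can no longer fill a full batch.
BATCH = 3
X_A = np.arange(10)   # domain A: 10 images
X_B = np.arange(14)   # domain B: 14 images

batches = 0
while min(len(X_A), len(X_B)) > BATCH:
    start = randint(0, min(len(X_A), len(X_B)) - BATCH)
    # The same index range is removed from both domains each iteration
    X_A = np.delete(X_A, range(start, start + BATCH), 0)
    X_B = np.delete(X_B, range(start, start + BATCH), 0)
    batches += 1

print(batches)  # 10 -> 7 -> 4 -> 1 in domain A: three batches, then A is exhausted
```

Note that the stopping condition depends only on the smaller array, so the larger domain simply leaves its surplus images unused for that epoch.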

  2. This code will look familiar; the key difference is that we have now added a second data source, so we need an A and a B version of our batches:
    # Train Discriminator
    # Grab real images for this training batch
    count_real_images = int(self.BATCH)
    starting_indexs = randint(0, (min(len(X_train_A_temp), len(X_train_B_temp)) - count_real_images))
    real_images_raw_A = X_train_A_temp[starting_indexs : (starting_indexs + count_real_images)]
    real_images_raw_B = X_train_B_temp[starting_indexs : (starting_indexs + count_real_images)]

    # Delete the images used until we have none left
    X_train_A_temp = np.delete(X_train_A_temp, range(starting_indexs, (starting_indexs + count_real_images)), 0)
    X_train_B_temp = np.delete(X_train_B_temp, range(starting_indexs, (starting_indexs + count_real_images)), 0)
    batch_A = real_images_raw_A.reshape(count_real_images, self.W_A, self.H_A, self.C_A)
    batch_B = real_images_raw_B.reshape(count_real_images, self.W_B, self.H_B, self.C_B)
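As a quick sanity check on the slicing and reshaping above, the following standalone sketch uses made-up dimensions (the W, H, C, and BATCH values are hypothetical) to show how rows of flattened pixel data become a (batch, width, height, channels) tensor:

```python
import numpy as np

# Hypothetical dimensions; the real values come from the class attributes
W, H, C = 4, 4, 3
BATCH = 2

# Ten flattened "images" per domain, stored one per row
X_A = np.random.rand(10, W * H * C)
X_B = np.random.rand(10, W * H * C)

start = 3  # a single shared starting index pairs the rows drawn from A and B
raw_A = X_A[start : start + BATCH]
raw_B = X_B[start : start + BATCH]

batch_A = raw_A.reshape(BATCH, W, H, C)
batch_B = raw_B.reshape(BATCH, W, H, C)
print(batch_A.shape)  # (2, 4, 4, 3)
```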
  3. As introduced in Chapter 4, Dreaming of New Outdoor Structures Using DCGAN, we add label noise to the training process when developing the batches for training the individual discriminators:
    if self.flipCoin():
        x_batch_A = batch_A
        x_batch_B = batch_B
        y_batch_A = np.ones([count_real_images, 1])
        y_batch_B = np.ones([count_real_images, 1])
    else:
        x_batch_B = self.generator_A_to_B.Generator.predict(batch_A)
        x_batch_A = self.generator_B_to_A.Generator.predict(batch_B)
        y_batch_A = np.zeros([self.BATCH, 1])
        y_batch_B = np.zeros([self.BATCH, 1])
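The flipCoin helper itself is not shown in this section; the sketch below uses a minimal stand-in for it (assumed to behave like the DCGAN chapter's version, returning True with a given probability) to show how the coin flip selects between a real-labelled batch and a fake-labelled batch. The image arrays here are random noise purely for illustration:

```python
import random
import numpy as np

def flip_coin(chance=0.5):
    # Minimal stand-in for the class's flipCoin helper:
    # returns True with probability `chance`
    return random.random() < chance

BATCH = 4
random.seed(0)  # with this seed, the first flip lands on the fake branch

if flip_coin():
    # "Heads": train on real images with label 1
    x_batch = np.random.rand(BATCH, 8, 8, 3)  # stand-in for real images
    y_batch = np.ones([BATCH, 1])
else:
    # "Tails": train on generated images with label 0
    x_batch = np.random.rand(BATCH, 8, 8, 3)  # stand-in for generator output
    y_batch = np.zeros([BATCH, 1])

print(y_batch.ravel())
```

Alternating real and fake batches this way keeps each discriminator from seeing only one class of input per update.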
  4. Train discriminator A and discriminator B with the newly developed batches:
    # Now, train each discriminator with its batch
    self.discriminator_A.Discriminator.trainable = True
    discriminator_loss_A = self.discriminator_A.Discriminator.train_on_batch(x_batch_A, y_batch_A)[0]
    self.discriminator_A.Discriminator.trainable = False
    self.discriminator_B.Discriminator.trainable = True
    discriminator_loss_B = self.discriminator_B.Discriminator.train_on_batch(x_batch_B, y_batch_B)[0]
    self.discriminator_B.Discriminator.trainable = False
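The reason for toggling trainable is that a discriminator's weights should only move during its own update, not during the combined model's generator update. The toy stub below is not real Keras code; it only mimics that intent so the pattern is visible in isolation:

```python
# Toy illustration of the freeze/unfreeze pattern: this stub only "updates"
# its weights when the trainable flag is True, mimicking the behavior the
# training loop relies on.
class StubDiscriminator:
    def __init__(self):
        self.trainable = True
        self.updates = 0  # counts how many weight updates actually happened

    def train_on_batch(self, x, y):
        if self.trainable:
            self.updates += 1
        return [0.0]  # stand-in for the [loss] list Keras returns

d_A = StubDiscriminator()

# Discriminator step: unfreeze, update once, freeze again
d_A.trainable = True
loss = d_A.train_on_batch(None, None)[0]
d_A.trainable = False

# Generator step: with d_A frozen, a combined-model update leaves it untouched
d_A.train_on_batch(None, None)

print(d_A.updates)  # 1
```

In real Keras, the trainable flag interacts with model compilation, so the exact mechanics differ; the stub only captures the alternating-update idea.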
  5. Train your GAN model with all of your input values and record the loss:
    # In practice, flipping the label when training the generator improves convergence
    if self.flipCoin(chance=0.9):
        y_generated_labels = np.ones([self.BATCH, 1])
    else:
        y_generated_labels = np.zeros([self.BATCH, 1])
    generator_loss = self.gan.gan_model.train_on_batch(
        [x_batch_A, x_batch_B],
        [y_generated_labels, y_generated_labels,
         x_batch_A, x_batch_B, x_batch_A, x_batch_B])
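With chance=0.9, the generator is shown "real" labels for roughly nine batches out of ten. The seeded simulation below (using the same minimal flipCoin stand-in, not the class's actual helper) checks that frequency:

```python
import random
import numpy as np

def flip_coin(chance=0.5):
    # Minimal stand-in for the class's flipCoin helper
    return random.random() < chance

random.seed(42)
BATCH = 1

# Simulate the label schedule used for the generator update: roughly 90%
# of batches are labelled as "real" (ones), the remainder as "fake" (zeros)
ones_batches = 0
trials = 1000
for _ in range(trials):
    if flip_coin(chance=0.9):
        y = np.ones([BATCH, 1])
        ones_batches += 1
    else:
        y = np.zeros([BATCH, 1])

print(ones_batches / trials)  # close to 0.9
```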
  6. Check the output of the batches periodically and at the epoch level:
    print('Batch: ' + str(int(b)) +
          ', [Discriminator_A :: Loss: ' + str(discriminator_loss_A) +
          '], [Generator :: Loss: ' + str(generator_loss) + ']')
    if b % self.CHECKPOINT == 0:
        label = str(e) + '_' + str(b)
        self.plot_checkpoint(label)

    print('Epoch: ' + str(int(e)) +
          ', [Discriminator_A :: Loss: ' + str(discriminator_loss_A) +
          '], [Generator :: Loss: ' + str(generator_loss) + ']')

    return