Understanding the changes in pseudocode

The original training process, as seen in Chapter 3, *My First GAN in Under 100 Lines*, is as follows:

### Pseudocode

```
On Epoch:
    Batch Size = ##
    x_train_real = Half of Batch From Real Images
    y_train_real = 1s (Real) x Batch Size
    x_train_gen = Half of Batch from Generated Images
    y_train_gen = 0s (Fake) x Batch Size

    train_discriminator(concat(x_train_real, x_train_gen),
                        concat(y_train_real, y_train_gen))

    x_generated_images = generator(batch_size)
    y_labels = 1s for Real
    train_gan(x_generated_images, y_labels)
```
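
To make the flow concrete, here is a minimal Python sketch of this original step in the Keras style used throughout the book. The names `discriminator`, `generator`, `gan` (the stacked generator-plus-discriminator model), `real_images`, and `latent_dim` are illustrative assumptions, not the book's exact variables:

```python
import numpy as np

# Hypothetical sketch: one "mixed batch" training step.
# Assumes compiled Keras models `discriminator`, `generator`, and `gan`.
def train_step_mixed(discriminator, generator, gan,
                     real_images, batch_size, latent_dim):
    half = batch_size // 2

    # Half of the batch from real images, labeled 1 (real)
    idx = np.random.randint(0, real_images.shape[0], half)
    x_train_real = real_images[idx]
    y_train_real = np.ones((half, 1))

    # Half of the batch from generated images, labeled 0 (fake)
    noise = np.random.normal(0, 1, (half, latent_dim))
    x_train_gen = generator.predict(noise)
    y_train_gen = np.zeros((half, 1))

    # Train the discriminator on the concatenated real + fake batch
    discriminator.train_on_batch(
        np.concatenate([x_train_real, x_train_gen]),
        np.concatenate([y_train_real, y_train_gen]))

    # Train the generator through the stacked model; the labels are all
    # 1s so the generator is rewarded for fooling the discriminator
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    gan.train_on_batch(noise, np.ones((batch_size, 1)))
```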

The updated process is as follows:

### Pseudocode

```
On Epoch:
    Batch Size = ##
    if flipcoin():
        x_train = Half of Batch From Real Images
        y_train = 1s (Real) x Batch Size
    else:
        x_train = Half of Batch from Generated Images
        y_train = 0s (Fake) x Batch Size

    train_discriminator(x_train, y_train)

    x_generated_images = generator(batch_size)
    if flipcoin():
        y_labels = 1s for Real
    else:
        y_labels = 0s for Fake (Noisy Labels)
    train_gan(x_generated_images, y_labels)
```
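
Here is a matching sketch of the updated step, again with hedged, illustrative names. The `flipcoin` helper mirrors the one hinted at in the pseudocode, and the 0.9 noisy-label probability is an assumed value, not one taken from the book:

```python
import numpy as np

def flipcoin(chance=0.5):
    # True with probability `chance` (assumed helper, per the pseudocode)
    return np.random.binomial(1, chance) == 1

# Hypothetical sketch: each discriminator update now sees an all-real OR
# an all-generated half-batch, never a mix; the generator occasionally
# receives flipped ("noisy") labels.
def train_step_updated(discriminator, generator, gan,
                       real_images, batch_size, latent_dim):
    half = batch_size // 2

    if flipcoin():
        # Half-batch of real images, labeled 1 (real)
        idx = np.random.randint(0, real_images.shape[0], half)
        x_train = real_images[idx]
        y_train = np.ones((half, 1))
    else:
        # Half-batch of generated images, labeled 0 (fake)
        noise = np.random.normal(0, 1, (half, latent_dim))
        x_train = generator.predict(noise)
        y_train = np.zeros((half, 1))

    discriminator.train_on_batch(x_train, y_train)

    # Generator training through the stacked model, with noisy labels:
    # mostly 1s (real), occasionally flipped to 0s (assumed 90/10 split)
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    if flipcoin(chance=0.9):
        y_labels = np.ones((batch_size, 1))
    else:
        y_labels = np.zeros((batch_size, 1))
    gan.train_on_batch(noise, y_labels)
```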

A key change in this training step is that we no longer mix the generated and real images when training the discriminator, as illustrated in the following diagram:

*(Diagram: each discriminator batch is drawn entirely from real images or entirely from generated images, chosen by a coin flip.)*

These minor changes to the trainer class work toward improving the GAN's performance and, in some cases, are the reason training converges at all. For example, while writing this chapter, we ran into a few instances where the discriminator would diverge, causing the generator to produce only noise.

We'll discuss coin flipping in the upcoming recipe on parameter tuning. The trainer class's Python code follows this pseudocode directly.