GAN loss function

As mentioned earlier, both the discriminator and the generator have their own loss functions, and each depends on the output of the other's network. We can think of the GAN as playing a minimax game between the discriminator and the generator that looks like the following:
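
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]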

Here, D is our discriminator, G is our generator, z is a random vector fed as input to the generator, and x is a real image. Although we have given the combined GAN loss here, it is easier to consider the two optimizations separately.

To train the GAN, we alternate gradient updates between the discriminator and the generator. When updating the discriminator, we want to maximize the probability that the discriminator makes the right choice. When updating the generator, we want to minimize the probability that the discriminator makes the right choice.

However, for practical implementation, we alter the GAN loss function slightly from the one given earlier; this helps training converge. The alteration is that, when updating the generator, rather than minimizing the probability that the discriminator makes the right choice, we instead maximize the probability that the discriminator makes the wrong choice:
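
\max_G \; \mathbb{E}_{z \sim p_z(z)}[\log D(G(z))]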

When updating the discriminator, we try to maximize the probability that it makes the correct choice on both real and fake data:
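
\max_D \; \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

To make the alternating updates concrete, here is a minimal sketch in PyTorch (an assumption on our part; the surrounding chapter may use a different framework). The toy generator and discriminator architectures, layer sizes, and optimizer settings are placeholders for illustration, not a prescribed design:

import torch
import torch.nn as nn

# Hypothetical toy networks: any models with matching shapes would do.
G = nn.Sequential(nn.Linear(64, 784), nn.Tanh())    # z -> fake image
D = nn.Sequential(nn.Linear(784, 1), nn.Sigmoid())  # image -> P(real)

opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real):  # real: (batch, 784) tensor of real images
    batch = real.size(0)
    z = torch.randn(batch, 64)

    # Discriminator update: maximize log D(x) + log(1 - D(G(z))),
    # i.e. minimize BCE with label 1 for real and label 0 for fake.
    fake = G(z).detach()  # detach so this step does not update G
    loss_d = (bce(D(real), torch.ones(batch, 1)) +
              bce(D(fake), torch.zeros(batch, 1)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator update (the altered, non-saturating loss): maximize
    # log D(G(z)), i.e. minimize BCE of D's output on fakes against label 1.
    loss_g = bce(D(G(z)), torch.ones(batch, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

Each call to train_step performs one discriminator update followed by one generator update, which is the alternating scheme described above.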
