Training the SRGAN

Training the SRGAN is a two-step process. In the first step, we train the discriminator network on real and generated high-resolution images. In the second step, we train the adversarial network, which in turn updates the generator network. A minimal sketch of this alternating loop is given after the steps below. Let's start training the network.

Perform the following steps to train the SRGAN network:

  1. Start by defining the hyperparameters required for training (the two image shapes defined here are also used by the data-loading helper sketched after these steps):
# Define hyperparameters
data_dir = "Paht/to/the/dataset/img_align_celeba/*.*"
epochs = 20000
batch_size = 1

# Shape of low-resolution and high-resolution images
low_resolution_shape = (64, 64, 3)
high_resolution_shape = (256, 256, 3)
  2. Next, define the training optimizer. For all networks, we will use the Adam optimizer with a learning rate of 0.0002 and a beta_1 value of 0.5:
# Common optimizer for all networks
common_optimizer = Adam(0.0002, 0.5)
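
Before moving on to the training loop itself, we need a way to feed images of both resolutions to the networks. The following sample_images() function is a minimal, hypothetical sketch of such a data-loading helper; its name, signature, and the use of Pillow for resizing are assumptions rather than verbatim code. It samples a random batch of files matching data_dir, resizes each image to the high- and low-resolution shapes defined above, and scales pixel values to the range [-1, 1]:

# Hypothetical data-loading helper (a sketch, not verbatim code)
import glob

import numpy as np
from PIL import Image


def sample_images(data_dir, batch_size, high_resolution_shape, low_resolution_shape):
    # Pick a random batch of image paths from the dataset directory
    all_images = glob.glob(data_dir)
    images_batch = np.random.choice(all_images, size=batch_size)

    high_resolution_images = []
    low_resolution_images = []
    for img_path in images_batch:
        img = Image.open(img_path).convert("RGB")

        # Resize the same image to both target resolutions
        img_high = img.resize(high_resolution_shape[:2])
        img_low = img.resize(low_resolution_shape[:2])

        high_resolution_images.append(np.asarray(img_high, dtype=np.float32))
        low_resolution_images.append(np.asarray(img_low, dtype=np.float32))

    # Scale pixel values from [0, 255] to [-1, 1]
    return (np.array(high_resolution_images) / 127.5 - 1.0,
            np.array(low_resolution_images) / 127.5 - 1.0)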
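
With the data helper in place, each training iteration follows the two-step process described at the beginning of this section: first the discriminator is updated on real and generated images, and then the adversarial network is trained, which updates the generator. The sketch below assumes that generator, discriminator, adversarial_model, and the VGG feature extractor vgg were built and compiled in the previous sections, and that the ordering of the adversarial model's inputs and outputs matches this outline; treat it as a sketch under those assumptions rather than the exact training code:

# Hedged sketch of the alternating training loop (the model objects are
# assumed to have been built and compiled in earlier sections)
import numpy as np

for epoch in range(epochs):
    # Step 1: train the discriminator on real and generated images
    high_resolution_images, low_resolution_images = sample_images(
        data_dir, batch_size, high_resolution_shape, low_resolution_shape)
    generated_high_resolution_images = generator.predict(low_resolution_images)

    # Label real images as 1 and generated images as 0, matching the
    # discriminator's output shape
    label_shape = (batch_size,) + discriminator.output_shape[1:]
    real_labels = np.ones(label_shape)
    fake_labels = np.zeros(label_shape)

    d_loss_real = discriminator.train_on_batch(high_resolution_images, real_labels)
    d_loss_fake = discriminator.train_on_batch(generated_high_resolution_images, fake_labels)
    d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

    # Step 2: train the adversarial network, which updates only the generator.
    # The generator is pushed to fool the discriminator (real_labels) and to
    # reproduce the VGG features of the real high-resolution images.
    image_features = vgg.predict(high_resolution_images)
    g_loss = adversarial_model.train_on_batch(
        [low_resolution_images, high_resolution_images],
        [real_labels, image_features])

    print("Epoch:{} d_loss:{} g_loss:{}".format(epoch, d_loss, g_loss))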