The generator network

The generator network is a CNN that takes a 100-dimensional vector z and generates an image with a dimension of (64, 64, 3). Let's implement the generator network in the Keras framework.

Perform the following steps to implement the generator network:

  1. Start by creating the two input layers to the generator network:
latent_dims = 100
num_classes = 6

# Input layer for vector z
input_z_noise = Input(shape=(latent_dims, ))

# Input layer for conditioning variable
input_label = Input(shape=(num_classes, ))
  2. Next, concatenate the two inputs along the last (feature) axis, as shown here:
x = concatenate([input_z_noise, input_label])

The preceding step produces a concatenated tensor of shape (batch_size, 106).
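As a quick sanity check on the resulting dimension, here is a minimal NumPy analogue of the concatenation (the batch size of 4 is an arbitrary choice for illustration):

```python
import numpy as np

# A hypothetical mini-batch: 4 noise vectors of length 100 and
# 4 one-hot condition vectors of length 6 (the dimensions from the text).
z = np.random.normal(size=(4, 100))
labels = np.eye(6)[[0, 1, 2, 3]]

# Concatenating along the last axis mirrors Keras' concatenate() for vectors.
x = np.concatenate([z, labels], axis=-1)
print(x.shape)  # (4, 106)
```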

  3. Next, add a dense (fully connected) block with the following configurations:
    • Units (nodes): 2,048
    • Input dimension: 106
    • Activation: LeakyReLU with alpha equal to 0.2
    • Dropout: 0.2:
x = Dense(2048, input_dim=latent_dims+num_classes)(x)
x = LeakyReLU(alpha=0.2)(x)
x = Dropout(0.2)(x)
  4. Next, add a second dense (fully connected) block with the following configurations:
    • Units (nodes): 16,384
    • Batch normalization: Yes
    • Activation: LeakyReLU with alpha equal to 0.2
    • Dropout: 0.2:
x = Dense(256 * 8 * 8)(x)
x = BatchNormalization()(x)
x = LeakyReLU(alpha=0.2)(x)
x = Dropout(0.2)(x)
  5. Next, reshape the output from the last dense layer to a three-dimensional tensor of shape (8, 8, 256):
x = Reshape((8, 8, 256))(x)

This layer will generate a tensor of shape (batch_size, 8, 8, 256).
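The 16,384 units of the previous dense layer are exactly 256 * 8 * 8, so the reshape loses no elements. A NumPy sketch of the same operation (batch size of 4 chosen for illustration):

```python
import numpy as np

batch_size = 4
dense_out = np.zeros((batch_size, 256 * 8 * 8))  # output of the 16,384-unit dense layer
x = dense_out.reshape((batch_size, 8, 8, 256))   # mirrors Reshape((8, 8, 256))
print(x.shape)  # (4, 8, 8, 256)
```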

  6. Next, add an upsampling block that contains an upsampling layer followed by a 2D convolution layer and a batch normalization layer with the following configurations:
    • Upsampling size: (2, 2)
    • Filters: 128
    • Kernel size: 5
    • Padding: same
    • Batch normalization: Yes, with momentum equal to 0.8
    • Activation: LeakyReLU with alpha equal to 0.2:
x = UpSampling2D(size=(2, 2))(x)
x = Conv2D(filters=128, kernel_size=5, padding='same')(x)
x = BatchNormalization(momentum=0.8)(x)
x = LeakyReLU(alpha=0.2)(x)
UpSampling2D repeats the rows of its input x times and the columns y times, where (x, y) is the size argument; with size=(2, 2), both spatial dimensions are doubled.
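This row-and-column repetition can be reproduced with plain NumPy, which makes the effect of size=(2, 2) easy to see on a tiny 2 x 2 array:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])

# UpSampling2D(size=(2, 2)) repeats each row twice, then each column twice:
up = np.repeat(np.repeat(a, 2, axis=0), 2, axis=1)
print(up)
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
```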
  7. Next, add another upsampling block (similar to the previous one), as shown in the following code. The configuration is the same as the previous block, except that the number of filters used in the convolution layer is 64:
x = UpSampling2D(size=(2, 2))(x)
x = Conv2D(filters=64, kernel_size=5, padding='same')(x)
x = BatchNormalization(momentum=0.8)(x)
x = LeakyReLU(alpha=0.2)(x)
  8. Next, add the last upsampling block. The configuration is similar to the previous one, except that three filters are used in the convolution layer, batch normalization is omitted, and a tanh activation replaces LeakyReLU:
x = UpSampling2D(size=(2, 2))(x)
x = Conv2D(filters=3, kernel_size=5, padding='same')(x)
x = Activation('tanh')(x)
  9. Finally, create a Keras model and specify the inputs and the outputs for the generator network:
model = Model(inputs=[input_z_noise, input_label], outputs=[x])

The entire code for the generator network is shown here:

from keras.layers import (Input, Dense, LeakyReLU, Dropout, BatchNormalization,
                          Reshape, UpSampling2D, Conv2D, Activation, concatenate)
from keras.models import Model


def build_generator():
    """
    Create a generator model with the hyperparameter values defined as follows
    :return: Generator model
    """
    latent_dims = 100
    num_classes = 6

    input_z_noise = Input(shape=(latent_dims,))
    input_label = Input(shape=(num_classes,))

    x = concatenate([input_z_noise, input_label])

    x = Dense(2048, input_dim=latent_dims + num_classes)(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.2)(x)

    x = Dense(256 * 8 * 8)(x)
    x = BatchNormalization()(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.2)(x)

    x = Reshape((8, 8, 256))(x)

    x = UpSampling2D(size=(2, 2))(x)
    x = Conv2D(filters=128, kernel_size=5, padding='same')(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = LeakyReLU(alpha=0.2)(x)

    x = UpSampling2D(size=(2, 2))(x)
    x = Conv2D(filters=64, kernel_size=5, padding='same')(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = LeakyReLU(alpha=0.2)(x)

    x = UpSampling2D(size=(2, 2))(x)
    x = Conv2D(filters=3, kernel_size=5, padding='same')(x)
    x = Activation('tanh')(x)

    model = Model(inputs=[input_z_noise, input_label], outputs=[x])
    return model
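As a check that the architecture actually reaches the target (64, 64, 3) image, the spatial dimensions can be traced by hand: each of the three upsampling blocks doubles the 8 x 8 tensor from the Reshape layer, and the final convolution sets the channel count to 3. A small pure-Python sketch of this shape arithmetic (no Keras required; the function name is our own):

```python
def generator_output_shape():
    """Trace the tensor shape through the generator described above."""
    # concatenate gives 106 features; Dense(2048) and Dense(256 * 8 * 8)
    # then produce the 16,384 features consumed by the Reshape layer.
    h, w, c = 8, 8, 256                # Reshape((8, 8, 256))
    for filters in (128, 64, 3):       # the three upsampling blocks
        h, w = h * 2, w * 2            # UpSampling2D(size=(2, 2)) doubles h and w
        c = filters                    # Conv2D with padding='same' keeps h and w
    return (h, w, c)

print(generator_output_shape())  # (64, 64, 3)
```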

We have now successfully created the generator network. We will next write the code for the discriminator network.
