The discriminator network

The discriminator network is a CNN. Let's implement it in the Keras framework.

Perform the following steps to implement the discriminator network:

  1. Start by creating two input layers, as our discriminator network will process two inputs:
# Specify hyperparameters
# Input image shape
input_shape = (64, 64, 3)
# Input conditioning variable shape
label_shape = (6,)

# Two input layers
image_input = Input(shape=input_shape)
label_input = Input(shape=label_shape)

  2. Next, add a 2-D convolution block (Conv2D + Activation function) with the following configuration:
    • Filters = 64
    • Kernel size: 3
    • Strides: 2
    • Padding: same
    • Activation: LeakyReLU with alpha equal to 0.2:
x = Conv2D(64, kernel_size=3, strides=2, padding='same')(image_input)
x = LeakyReLU(alpha=0.2)(x)
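With padding='same' and a stride of 2, Keras computes the output spatial size as the ceiling of the input size divided by the stride, so the 64 x 64 input becomes a 32 x 32 feature map after this block. A quick arithmetic sketch (plain Python, independent of Keras):

```python
import math

def conv_output_size(input_size, stride):
    """Spatial output size of a Conv2D layer with padding='same'."""
    return math.ceil(input_size / stride)

# The 64 x 64 input image after one stride-2 convolution:
print(conv_output_size(64, 2))  # 32
```

This is why, in the next step, the label tensor is expanded to 32 x 32: it must match the spatial dimensions of this feature map before the two can be concatenated.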
  3. Next, expand label_input so that it has a shape of (32, 32, 6), matching the spatial dimensions of the convolution output:
label_input1 = Lambda(expand_label_input)(label_input)

The expand_label_input function is as follows:

# The expand_label_input function
def expand_label_input(x):
    x = K.expand_dims(x, axis=1)
    x = K.expand_dims(x, axis=1)
    x = K.tile(x, [1, 32, 32, 1])
    return x

The preceding function transforms a tensor of shape (6,) into a tensor of shape (32, 32, 6).
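To see the expand-and-tile logic concretely, here is an equivalent NumPy sketch (NumPy stands in for the Keras backend K; note that in the real network the first axis is the batch dimension):

```python
import numpy as np

def expand_label_input_np(x):
    """NumPy equivalent of the Keras-backend expand_label_input."""
    x = np.expand_dims(x, axis=1)       # (batch, 6) -> (batch, 1, 6)
    x = np.expand_dims(x, axis=1)       # (batch, 1, 6) -> (batch, 1, 1, 6)
    return np.tile(x, [1, 32, 32, 1])   # -> (batch, 32, 32, 6)

labels = np.zeros((4, 6))  # a batch of 4 conditioning vectors
print(expand_label_input_np(labels).shape)  # (4, 32, 32, 6)
```

Each 6-element label vector is copied to every one of the 32 x 32 spatial positions, so every pixel of the feature map "sees" the same conditioning information.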

  4. Next, concatenate the transformed label tensor and the output of the last convolution layer along the channel dimension, as shown here:
x = concatenate([x, label_input1], axis=3)
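Concatenating the (32, 32, 64) feature map with the tiled (32, 32, 6) label tensor along axis 3 (the channel axis) yields a (32, 32, 70) tensor per sample. A NumPy sketch of the shape arithmetic:

```python
import numpy as np

features = np.zeros((4, 32, 32, 64))  # conv output for a batch of 4
labels = np.zeros((4, 32, 32, 6))     # tiled conditioning labels

# Concatenation along the channel axis adds the channel counts: 64 + 6 = 70
merged = np.concatenate([features, labels], axis=3)
print(merged.shape)  # (4, 32, 32, 70)
```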
  5. Add a convolution block (2D convolution layer + batch normalization + activation function) with the following configuration:
    • Filters: 128
    • Kernel size: 3
    • Strides: 2
    • Padding: same
    • Batch normalization: Yes
    • Activation: LeakyReLU with alpha equal to 0.2:
x = Conv2D(128, kernel_size=3, strides=2, padding='same')(x)
x = BatchNormalization()(x)
x = LeakyReLU(alpha=0.2)(x)
  6. Next, add two more convolution blocks, as follows:
x = Conv2D(256, kernel_size=3, strides=2, padding='same')(x)
x = BatchNormalization()(x)
x = LeakyReLU(alpha=0.2)(x)

x = Conv2D(512, kernel_size=3, strides=2, padding='same')(x)
x = BatchNormalization()(x)
x = LeakyReLU(alpha=0.2)(x)
  7. Next, add a flatten layer:
x = Flatten()(x)
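At this point, the spatial resolution has been halved four times (64 → 32 → 16 → 8 → 4) and the last convolution has 512 filters, so the flattened vector has 4 x 4 x 512 = 8,192 elements. The arithmetic:

```python
import math

size, channels = 64, 3  # input image: 64 x 64 x 3
for filters in [64, 128, 256, 512]:
    size = math.ceil(size / 2)  # each stride-2 conv with padding='same' halves the size
    channels = filters

print(size, channels, size * size * channels)  # 4 512 8192
```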
  8. Next, add a dense layer (classification layer) that outputs a probability:
x = Dense(1, activation='sigmoid')(x)
  9. Finally, create a Keras model and specify the inputs and outputs for the discriminator network:
model = Model(inputs=[image_input, label_input], outputs=[x])

The entire code for the discriminator network looks as follows:

def build_discriminator():
    """
    Create the discriminator model with the hyperparameter values defined above
    :return: Discriminator model
    """
    input_shape = (64, 64, 3)
    label_shape = (6,)
    image_input = Input(shape=input_shape)
    label_input = Input(shape=label_shape)

    x = Conv2D(64, kernel_size=3, strides=2, padding='same')(image_input)
    x = LeakyReLU(alpha=0.2)(x)

    label_input1 = Lambda(expand_label_input)(label_input)
    x = concatenate([x, label_input1], axis=3)

    x = Conv2D(128, kernel_size=3, strides=2, padding='same')(x)
    x = BatchNormalization()(x)
    x = LeakyReLU(alpha=0.2)(x)

    x = Conv2D(256, kernel_size=3, strides=2, padding='same')(x)
    x = BatchNormalization()(x)
    x = LeakyReLU(alpha=0.2)(x)

    x = Conv2D(512, kernel_size=3, strides=2, padding='same')(x)
    x = BatchNormalization()(x)
    x = LeakyReLU(alpha=0.2)(x)

    x = Flatten()(x)
    x = Dense(1, activation='sigmoid')(x)

    model = Model(inputs=[image_input, label_input], outputs=[x])
    return model

We have now successfully created the encoder, the generator, and the discriminator networks. In the next section, we will assemble everything and train the network.
