The generator network

The generator network takes an image of dimensions (256, 256, 1) from a source domain A and translates it into an image in a target domain B, also of dimensions (256, 256, 1). Let's implement the generator network in the Keras framework.

Perform the following steps to create the generator network:

  1. Start by defining the hyperparameters required for the generator network:
kernel_size = 4
strides = 2
leakyrelu_alpha = 0.2
upsampling_size = 2
dropout = 0.5
output_channels = 1
input_shape = (256, 256, 1)
  2. Now create an input layer to feed input to the network as follows:
input_layer = Input(shape=input_shape)
The input layer takes an input image of a shape of (256, 256, 1) and passes it to the next layer in the network.
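To make the expected tensor shape concrete, here is a small NumPy sketch (NumPy is used only for illustration; it is not part of the generator code) showing how a single grayscale image of shape (256, 256, 1) gains a batch dimension, since Keras models consume batches of images rather than single images:

```python
import numpy as np

# A single grayscale image: height x width x channels
img = np.zeros((256, 256, 1), dtype=np.float32)

# Keras models consume batches, so a batch axis is prepended
batch = np.expand_dims(img, axis=0)
print(batch.shape)  # (1, 256, 256, 1)
```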

As mentioned, the generator network has two parts: an encoder and a decoder. In the next few steps, we will write the code for the encoder part.

  3. Add the first convolutional block to the generator network, with the parameters indicated previously in The architecture of pix2pix section:
# 1st Convolutional block in the encoder network
encoder1 = Convolution2D(filters=64, kernel_size=kernel_size, padding='same', strides=strides)(input_layer)
encoder1 = LeakyReLU(alpha=leakyrelu_alpha)(encoder1)

The first convolutional block contains a 2D convolution layer with an activation function. Unlike the other seven convolutional blocks, it doesn't have a batch normalization layer. 
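As a sanity check on the shapes, with padding='same' a convolution's output size depends only on the stride: output = ceil(input / stride). The quick sketch below (plain Python, no Keras required) confirms that a stride-2 block halves the 256 x 256 input:

```python
import math

def conv_same_out(size, stride=2):
    # With padding='same', output size = ceil(input / stride)
    return math.ceil(size / stride)

print(conv_same_out(256))  # 128
```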

  4. Add the remaining seven convolutional blocks to the encoder network:
# 2nd Convolutional block in the encoder network
encoder2 = Convolution2D(filters=128, kernel_size=kernel_size, padding='same', strides=strides)(encoder1)
encoder2 = BatchNormalization()(encoder2)
encoder2 = LeakyReLU(alpha=leakyrelu_alpha)(encoder2)

# 3rd Convolutional block in the encoder network
encoder3 = Convolution2D(filters=256, kernel_size=kernel_size, padding='same', strides=strides)(encoder2)
encoder3 = BatchNormalization()(encoder3)
encoder3 = LeakyReLU(alpha=leakyrelu_alpha)(encoder3)

# 4th Convolutional block in the encoder network
encoder4 = Convolution2D(filters=512, kernel_size=kernel_size, padding='same', strides=strides)(encoder3)
encoder4 = BatchNormalization()(encoder4)
encoder4 = LeakyReLU(alpha=leakyrelu_alpha)(encoder4)

# 5th Convolutional block in the encoder network
encoder5 = Convolution2D(filters=512, kernel_size=kernel_size, padding='same', strides=strides)(encoder4)
encoder5 = BatchNormalization()(encoder5)
encoder5 = LeakyReLU(alpha=leakyrelu_alpha)(encoder5)

# 6th Convolutional block in the encoder network
encoder6 = Convolution2D(filters=512, kernel_size=kernel_size, padding='same', strides=strides)(encoder5)
encoder6 = BatchNormalization()(encoder6)
encoder6 = LeakyReLU(alpha=leakyrelu_alpha)(encoder6)

# 7th Convolutional block in the encoder network
encoder7 = Convolution2D(filters=512, kernel_size=kernel_size, padding='same', strides=strides)(encoder6)
encoder7 = BatchNormalization()(encoder7)
encoder7 = LeakyReLU(alpha=leakyrelu_alpha)(encoder7)

# 8th Convolutional block in the encoder network
encoder8 = Convolution2D(filters=512, kernel_size=kernel_size, padding='same', strides=strides)(encoder7)
encoder8 = BatchNormalization()(encoder8)
encoder8 = LeakyReLU(alpha=leakyrelu_alpha)(encoder8)

This is the end of the encoder part in the generator network. The second part in the generator network is the decoder. In the next few steps, let's write the code for the decoder.
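Before moving on, it is worth tracing the encoder's shapes. Each stride-2 block halves the spatial size, and the filter counts follow the code above, so the input shrinks from (256, 256, 1) down to a (1, 1, 512) bottleneck. The sketch below (plain Python, no Keras required) reproduces this progression:

```python
import math

# Filter counts of the eight encoder blocks, as in the code above
filters = [64, 128, 256, 512, 512, 512, 512, 512]
size = 256
for i, f in enumerate(filters, start=1):
    size = math.ceil(size / 2)  # stride-2 convolution with 'same' padding
    print(f"encoder{i}: ({size}, {size}, {f})")
# The last line printed is encoder8: (1, 1, 512)
```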

  5. Add the first upsampling convolutional block to the decoder network, with the parameters indicated previously in The architecture of pix2pix section:
# 1st Upsampling Convolutional Block in the decoder network
decoder1 = UpSampling2D(size=upsampling_size)(encoder8)
decoder1 = Convolution2D(filters=512, kernel_size=kernel_size, padding='same')(decoder1)
decoder1 = BatchNormalization()(decoder1)
decoder1 = Dropout(dropout)(decoder1)
decoder1 = concatenate([decoder1, encoder7], axis=3)
decoder1 = Activation('relu')(decoder1)

The first upsampling block takes its input from the last layer of the encoder. It has a 2D upsampling layer, a 2D convolution layer, a batch normalization layer, a dropout layer, a concatenation operation, and an activation function. Refer to the Keras documentation, available at https://keras.io/, to find out more about these layers.
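The concatenation along the channel axis is what widens the decoder tensors: concatenating two tensors simply adds their channel counts. For example, decoder1's convolution produces 512 feature maps and the skip connection from encoder7 contributes another 512, so the block outputs 1024 channels. A plain-Python sketch of this channel bookkeeping:

```python
def skip_concat_channels(conv_filters, skip_channels):
    # concatenate along the channel axis adds the channel counts
    return conv_filters + skip_channels

# decoder1: 512 conv filters + encoder7's 512 channels
print(skip_concat_channels(512, 512))  # 1024
```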

  6. Similarly, add the next seven upsampling convolutional blocks as follows:
# 2nd Upsampling Convolutional block in the decoder network
decoder2 = UpSampling2D(size=upsampling_size)(decoder1)
decoder2 = Convolution2D(filters=1024, kernel_size=kernel_size, padding='same')(decoder2)
decoder2 = BatchNormalization()(decoder2)
decoder2 = Dropout(dropout)(decoder2)
decoder2 = concatenate([decoder2, encoder6])
decoder2 = Activation('relu')(decoder2)

# 3rd Upsampling Convolutional block in the decoder network
decoder3 = UpSampling2D(size=upsampling_size)(decoder2)
decoder3 = Convolution2D(filters=1024, kernel_size=kernel_size, padding='same')(decoder3)
decoder3 = BatchNormalization()(decoder3)
decoder3 = Dropout(dropout)(decoder3)
decoder3 = concatenate([decoder3, encoder5])
decoder3 = Activation('relu')(decoder3)

# 4th Upsampling Convolutional block in the decoder network
decoder4 = UpSampling2D(size=upsampling_size)(decoder3)
decoder4 = Convolution2D(filters=1024, kernel_size=kernel_size, padding='same')(decoder4)
decoder4 = BatchNormalization()(decoder4)
decoder4 = concatenate([decoder4, encoder4])
decoder4 = Activation('relu')(decoder4)

# 5th Upsampling Convolutional block in the decoder network
decoder5 = UpSampling2D(size=upsampling_size)(decoder4)
decoder5 = Convolution2D(filters=1024, kernel_size=kernel_size, padding='same')(decoder5)
decoder5 = BatchNormalization()(decoder5)
decoder5 = concatenate([decoder5, encoder3])
decoder5 = Activation('relu')(decoder5)

# 6th Upsampling Convolutional block in the decoder network
decoder6 = UpSampling2D(size=upsampling_size)(decoder5)
decoder6 = Convolution2D(filters=512, kernel_size=kernel_size, padding='same')(decoder6)
decoder6 = BatchNormalization()(decoder6)
decoder6 = concatenate([decoder6, encoder2])
decoder6 = Activation('relu')(decoder6)

# 7th Upsampling Convolutional block in the decoder network
decoder7 = UpSampling2D(size=upsampling_size)(decoder6)
decoder7 = Convolution2D(filters=256, kernel_size=kernel_size, padding='same')(decoder7)
decoder7 = BatchNormalization()(decoder7)
decoder7 = concatenate([decoder7, encoder1])
decoder7 = Activation('relu')(decoder7)

# Last Convolutional layer
decoder8 = UpSampling2D(size=upsampling_size)(decoder7)
decoder8 = Convolution2D(filters=output_channels, kernel_size=kernel_size, padding='same')(decoder8)
decoder8 = Activation('tanh')(decoder8)

The activation function for the last layer is tanh because we intend the generator to produce values in the range of -1 to 1. The concatenate layers add the skip-connections. The last layer generates a tensor of dimensions (256, 256, 1).
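Because the tanh output lies in -1 to 1, the training images are typically scaled to the same range before being fed to the network. A common convention (an assumption here; the preprocessing code is not shown in this section) is to map 8-bit pixel values with x / 127.5 - 1:

```python
def scale_to_tanh_range(pixel):
    # Map an 8-bit pixel value in [0, 255] into [-1, 1]
    return pixel / 127.5 - 1.0

def unscale(value):
    # Inverse mapping back to [0, 255]
    return (value + 1.0) * 127.5

print(scale_to_tanh_range(0))    # -1.0
print(scale_to_tanh_range(255))  # 1.0
```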

The 'concatenate' layer concatenates tensors along the channel dimension by default. You can also pass a value for the axis argument to choose the axis along which the tensors are concatenated.
  7. Finally, create a Keras model by specifying the inputs and outputs for the generator network:
# Create a Keras model
model = Model(inputs=[input_layer], outputs=[decoder8])

The entire code for the generator network inside a Python function looks as follows:

def build_unet_generator():
    """
    Create U-Net Generator using the hyperparameter values defined
    below
    """
    kernel_size = 4
    strides = 2
    leakyrelu_alpha = 0.2
    upsampling_size = 2
    dropout = 0.5
    output_channels = 1
    input_shape = (256, 256, 1)

    input_layer = Input(shape=input_shape)

    # Encoder Network

    # 1st Convolutional block in the encoder network
    encoder1 = Convolution2D(filters=64, kernel_size=kernel_size,
                             padding='same', strides=strides)(input_layer)
    encoder1 = LeakyReLU(alpha=leakyrelu_alpha)(encoder1)

    # 2nd Convolutional block in the encoder network
    encoder2 = Convolution2D(filters=128, kernel_size=kernel_size,
                             padding='same', strides=strides)(encoder1)
    encoder2 = BatchNormalization()(encoder2)
    encoder2 = LeakyReLU(alpha=leakyrelu_alpha)(encoder2)

    # 3rd Convolutional block in the encoder network
    encoder3 = Convolution2D(filters=256, kernel_size=kernel_size,
                             padding='same', strides=strides)(encoder2)
    encoder3 = BatchNormalization()(encoder3)
    encoder3 = LeakyReLU(alpha=leakyrelu_alpha)(encoder3)

    # 4th Convolutional block in the encoder network
    encoder4 = Convolution2D(filters=512, kernel_size=kernel_size,
                             padding='same', strides=strides)(encoder3)
    encoder4 = BatchNormalization()(encoder4)
    encoder4 = LeakyReLU(alpha=leakyrelu_alpha)(encoder4)

    # 5th Convolutional block in the encoder network
    encoder5 = Convolution2D(filters=512, kernel_size=kernel_size,
                             padding='same', strides=strides)(encoder4)
    encoder5 = BatchNormalization()(encoder5)
    encoder5 = LeakyReLU(alpha=leakyrelu_alpha)(encoder5)

    # 6th Convolutional block in the encoder network
    encoder6 = Convolution2D(filters=512, kernel_size=kernel_size,
                             padding='same', strides=strides)(encoder5)
    encoder6 = BatchNormalization()(encoder6)
    encoder6 = LeakyReLU(alpha=leakyrelu_alpha)(encoder6)

    # 7th Convolutional block in the encoder network
    encoder7 = Convolution2D(filters=512, kernel_size=kernel_size,
                             padding='same', strides=strides)(encoder6)
    encoder7 = BatchNormalization()(encoder7)
    encoder7 = LeakyReLU(alpha=leakyrelu_alpha)(encoder7)

    # 8th Convolutional block in the encoder network
    encoder8 = Convolution2D(filters=512, kernel_size=kernel_size,
                             padding='same', strides=strides)(encoder7)
    encoder8 = BatchNormalization()(encoder8)
    encoder8 = LeakyReLU(alpha=leakyrelu_alpha)(encoder8)

    # Decoder Network

    # 1st Upsampling Convolutional Block in the decoder network
    decoder1 = UpSampling2D(size=upsampling_size)(encoder8)
    decoder1 = Convolution2D(filters=512, kernel_size=kernel_size,
                             padding='same')(decoder1)
    decoder1 = BatchNormalization()(decoder1)
    decoder1 = Dropout(dropout)(decoder1)
    decoder1 = concatenate([decoder1, encoder7], axis=3)
    decoder1 = Activation('relu')(decoder1)

    # 2nd Upsampling Convolutional block in the decoder network
    decoder2 = UpSampling2D(size=upsampling_size)(decoder1)
    decoder2 = Convolution2D(filters=1024, kernel_size=kernel_size,
                             padding='same')(decoder2)
    decoder2 = BatchNormalization()(decoder2)
    decoder2 = Dropout(dropout)(decoder2)
    decoder2 = concatenate([decoder2, encoder6])
    decoder2 = Activation('relu')(decoder2)

    # 3rd Upsampling Convolutional block in the decoder network
    decoder3 = UpSampling2D(size=upsampling_size)(decoder2)
    decoder3 = Convolution2D(filters=1024, kernel_size=kernel_size,
                             padding='same')(decoder3)
    decoder3 = BatchNormalization()(decoder3)
    decoder3 = Dropout(dropout)(decoder3)
    decoder3 = concatenate([decoder3, encoder5])
    decoder3 = Activation('relu')(decoder3)

    # 4th Upsampling Convolutional block in the decoder network
    decoder4 = UpSampling2D(size=upsampling_size)(decoder3)
    decoder4 = Convolution2D(filters=1024, kernel_size=kernel_size,
                             padding='same')(decoder4)
    decoder4 = BatchNormalization()(decoder4)
    decoder4 = concatenate([decoder4, encoder4])
    decoder4 = Activation('relu')(decoder4)

    # 5th Upsampling Convolutional block in the decoder network
    decoder5 = UpSampling2D(size=upsampling_size)(decoder4)
    decoder5 = Convolution2D(filters=1024, kernel_size=kernel_size,
                             padding='same')(decoder5)
    decoder5 = BatchNormalization()(decoder5)
    decoder5 = concatenate([decoder5, encoder3])
    decoder5 = Activation('relu')(decoder5)

    # 6th Upsampling Convolutional block in the decoder network
    decoder6 = UpSampling2D(size=upsampling_size)(decoder5)
    decoder6 = Convolution2D(filters=512, kernel_size=kernel_size,
                             padding='same')(decoder6)
    decoder6 = BatchNormalization()(decoder6)
    decoder6 = concatenate([decoder6, encoder2])
    decoder6 = Activation('relu')(decoder6)

    # 7th Upsampling Convolutional block in the decoder network
    decoder7 = UpSampling2D(size=upsampling_size)(decoder6)
    decoder7 = Convolution2D(filters=256, kernel_size=kernel_size,
                             padding='same')(decoder7)
    decoder7 = BatchNormalization()(decoder7)
    decoder7 = concatenate([decoder7, encoder1])
    decoder7 = Activation('relu')(decoder7)

    # Last Convolutional layer
    decoder8 = UpSampling2D(size=upsampling_size)(decoder7)
    decoder8 = Convolution2D(filters=output_channels,
                             kernel_size=kernel_size,
                             padding='same')(decoder8)
    decoder8 = Activation('tanh')(decoder8)

    model = Model(inputs=[input_layer], outputs=[decoder8])
    return model

We have now successfully created a Keras model for the generator network. In the next section, we will create a Keras model for the discriminator network.
