The residual blocks

The residual blocks contain two 2D convolutional layers, each followed by a batch normalization layer. A ReLU activation follows the first convolution; the activation after the second convolution is applied only after the skip connection is added.

  1. Let's define the residual blocks. The following code defines a residual block completely:
from keras.layers import Conv2D, BatchNormalization, ReLU, ZeroPadding2D, add

def residual_block(input):
    """
    Residual block in the generator network
    :param input: input tensor to the block
    :return: output tensor of the block
    """
    # First convolution, followed by batch normalization and ReLU
    x = Conv2D(128 * 4, kernel_size=(3, 3), padding='same', strides=1)(input)
    x = BatchNormalization()(x)
    x = ReLU()(x)

    # Second convolution, followed by batch normalization only;
    # the activation is applied after the skip connection below
    x = Conv2D(128 * 4, kernel_size=(3, 3), strides=1, padding='same')(x)
    x = BatchNormalization()(x)

    # Skip connection: add the block's input to the convolution output
    x = add([x, input])
    x = ReLU()(x)

    return x

The initial input is added to the output of the second 2D convolutional layer (the skip connection), and the resulting tensor is the output of the block. Note that for this element-wise addition to work, the input to the block must have the same shape as the convolution output, that is, 512 (128 * 4) feature maps.
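As a quick sanity check, here is a minimal sketch confirming that the block preserves the tensor shape (the 16 x 16 spatial size is a hypothetical example; 512 channels are needed to match the 128 * 4 filters):

from keras import Input, Model

inp = Input(shape=(16, 16, 512))  # hypothetical spatial size; 512 channels match 128 * 4
out = residual_block(inp)
model = Model(inp, out)
print(model.output_shape)  # (None, 16, 16, 512): same shape as the input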

  2. Next, add a 2D convolutional block with the following hyperparameters:
    • Padding size: (1, 1)
    • Filters: 512
    • Kernel size: (3, 3)
    • Strides: 1
    • Batch normalization: Yes
    • Activation: ReLU
# c_code is the tensor produced by the preceding part of the generator network
x = ZeroPadding2D(padding=(1, 1))(c_code)
# A 3x3 'valid' convolution after (1, 1) zero-padding preserves the spatial
# dimensions, which is equivalent to padding='same'
x = Conv2D(512, kernel_size=(3, 3), strides=1, use_bias=False)(x)
x = BatchNormalization()(x)
x = ReLU()(x)
  3. After that, add four residual blocks sequentially:
x = residual_block(x)
x = residual_block(x)
x = residual_block(x)
x = residual_block(x)
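
Equivalently, the four calls can be written as a loop:

for _ in range(4):
    x = residual_block(x)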

The output tensor of the residual blocks is then passed to the upsampling blocks. Let's write the code for the upsampling blocks.
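As a rough sketch only, a typical upsampling block in this style of generator applies nearest-neighbor upsampling followed by convolution, batch normalization, and ReLU. The function name, filter counts, and usage below are assumptions for illustration, not the author's code:

from keras.layers import UpSampling2D, Conv2D, BatchNormalization, ReLU

def upsampling_block(x, filters):
    """
    Hypothetical upsampling block: doubles the spatial dimensions, then
    applies convolution, batch normalization, and ReLU.
    """
    x = UpSampling2D(size=(2, 2))(x)
    x = Conv2D(filters, kernel_size=(3, 3), padding='same', strides=1, use_bias=False)(x)
    x = BatchNormalization()(x)
    x = ReLU()(x)
    return x

# Hypothetical usage: halve the filter count while doubling the spatial size
x = upsampling_block(x, 256)
x = upsampling_block(x, 128)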
