Configuring the discriminator network

Before moving forward, let's have a look at the architecture of the discriminator network:

The preceding diagram gives a top-level overview of the architecture of the discriminator network. 

As mentioned, the discriminator network is a CNN that contains 10 layers (you can add more layers to the network according to your requirements). It takes an image of dimensions 64 x 64 x 3, downsamples it using 2D convolutional layers, and passes the result to fully connected layers for classification. Its output is a single value between 0 and 1, produced by a sigmoid activation: values close to 1 indicate that the image passed to the discriminator is real, and values close to 0 indicate that it is fake. 
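For example, thresholding the sigmoid output at 0.5 (a common convention, not something fixed by the network itself) turns the probability into a hard real/fake label. The helper name below is ours, for illustration only:

```python
def label_from_score(score):
    """Map the discriminator's sigmoid output in [0, 1] to a hard label."""
    return 'real' if score >= 0.5 else 'fake'

print(label_from_score(0.93))  # -> real
print(label_from_score(0.08))  # -> fake
```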

Let's have a look at the layers in the discriminator network:

| Layer # | Layer name | Configuration |
| --- | --- | --- |
| 1 | Input layer | input_shape=(batch_size, 64, 64, 3), output_shape=(batch_size, 64, 64, 3) |
| 2 | 2D convolutional layer | filters=128, kernel_size=(5, 5), strides=(1, 1), padding='same', input_shape=(batch_size, 64, 64, 3), output_shape=(batch_size, 64, 64, 128), activation='leakyrelu', leaky_relu_alpha=0.2 |
| 3 | MaxPooling2D | pool_size=(2, 2), input_shape=(batch_size, 64, 64, 128), output_shape=(batch_size, 32, 32, 128) |
| 4 | 2D convolutional layer | filters=256, kernel_size=(3, 3), strides=(1, 1), padding='valid', input_shape=(batch_size, 32, 32, 128), output_shape=(batch_size, 30, 30, 256), activation='leakyrelu', leaky_relu_alpha=0.2 |
| 5 | MaxPooling2D | pool_size=(2, 2), input_shape=(batch_size, 30, 30, 256), output_shape=(batch_size, 15, 15, 256) |
| 6 | 2D convolutional layer | filters=512, kernel_size=(3, 3), strides=(1, 1), padding='valid', input_shape=(batch_size, 15, 15, 256), output_shape=(batch_size, 13, 13, 512), activation='leakyrelu', leaky_relu_alpha=0.2 |
| 7 | MaxPooling2D | pool_size=(2, 2), input_shape=(batch_size, 13, 13, 512), output_shape=(batch_size, 6, 6, 512) |
| 8 | Flatten layer | input_shape=(batch_size, 6, 6, 512), output_shape=(batch_size, 18432) |
| 9 | Dense layer | neurons=1024, input_shape=(batch_size, 18432), output_shape=(batch_size, 1024), activation='leakyrelu', leaky_relu_alpha=0.2 |
| 10 | Dense layer | neurons=1, input_shape=(batch_size, 1024), output_shape=(batch_size, 1), activation='sigmoid' |

Let's have a look at how tensors flow from the first layer to the last layer. The following diagram shows input and output shapes for the different layers:

These configurations are valid for Keras APIs with the TensorFlow backend and the channels_last format.
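As a concrete reference, the layer table can be sketched with the Keras Sequential API. This is a minimal sketch assuming tf.keras (TensorFlow 2) with the channels_last format; the function name build_discriminator is ours. Note that the first convolutional layer uses padding='same' so that the 64 x 64 spatial size is preserved, which is what the stated output shapes require:

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D,
                                     Flatten, Dense, LeakyReLU)


def build_discriminator():
    """Build the 10-layer discriminator CNN described in the table."""
    model = Sequential([
        Input(shape=(64, 64, 3)),                          # 64 x 64 x 3
        Conv2D(128, (5, 5), strides=(1, 1), padding='same'),
        LeakyReLU(0.2),                                    # 64 x 64 x 128
        MaxPooling2D(pool_size=(2, 2)),                    # 32 x 32 x 128
        Conv2D(256, (3, 3), strides=(1, 1), padding='valid'),
        LeakyReLU(0.2),                                    # 30 x 30 x 256
        MaxPooling2D(pool_size=(2, 2)),                    # 15 x 15 x 256
        Conv2D(512, (3, 3), strides=(1, 1), padding='valid'),
        LeakyReLU(0.2),                                    # 13 x 13 x 512
        MaxPooling2D(pool_size=(2, 2)),                    # 6 x 6 x 512
        Flatten(),                                         # 18432
        Dense(1024),
        LeakyReLU(0.2),                                    # 1024
        Dense(1, activation='sigmoid'),                    # 1 (real/fake score)
    ])
    return model
```

Calling `build_discriminator().summary()` prints the per-layer output shapes, which you can compare against the table above.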