Generator

As mentioned in the Architecture of DCGAN section, the generator network consists of several 2D convolutional layers, upsampling layers, a reshape layer, and a batch normalization layer. In Keras, every operation can be specified as a layer; even activation functions are layers and can be added to a model just like a normal dense layer.
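For example, `Activation('tanh')` is a layer with no trainable weights that simply applies the elementwise tanh function to its input. A minimal NumPy sketch of what such a layer computes (NumPy is used here purely for illustration, not as part of the Keras model):

```python
import numpy as np

# What an Activation('tanh') layer computes: elementwise tanh,
# squashing every value into the range (-1, 1) without changing the shape.
x = np.array([[-2.0, 0.0, 2.0],
              [ 1.0, -1.0, 3.0]])
y = np.tanh(x)

print(y.shape)  # (2, 3) -- an activation layer never changes the shape
```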

Perform the following steps to create a generator network:

  1. Let's start by creating a Sequential Keras model:
gen_model = Sequential()
  1. Next, add a dense layer that has 2,048 nodes and takes a 100-dimensional noise vector as its input, followed by a tanh activation layer:
gen_model.add(Dense(units=2048, input_dim=100))
gen_model.add(Activation('tanh'))
  1. Next, add the second layer, which is also a dense layer that has 16,384 neurons. This is followed by a batch normalization layer with default hyperparameters and tanh as the activation function:
gen_model.add(Dense(256*8*8))
gen_model.add(BatchNormalization())
gen_model.add(Activation('tanh'))

The output of the second dense layer is a tensor of a shape of (16384,) per sample. Here, 16,384 is simply 256 x 8 x 8, the number of neurons in the dense layer, which allows us to reshape the output in the next step.
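The correspondence between the flat vector and the (8, 8, 256) feature map can be checked with a quick NumPy sketch (illustration only, not part of the Keras model):

```python
import numpy as np

# A flat vector with 256 * 8 * 8 = 16,384 entries, as produced
# by the second dense layer for a single sample.
flat = np.arange(256 * 8 * 8, dtype=np.float32)
print(flat.shape)    # (16384,)

# Reshaping rearranges the same values into an 8 x 8 x 256 tensor.
volume = flat.reshape((8, 8, 256))
print(volume.shape)  # (8, 8, 256)
```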

  1. Next, add a reshape layer to the network to reshape the tensor from the last layer to a tensor of a shape of (batch_size, 8, 8, 256):
# Reshape layer
gen_model.add(Reshape((8, 8, 256), input_shape=(256*8*8,)))
  1. Next, add a 2D upsampling layer to alter the shape from (8, 8, 256) to (16, 16, 256). An upsampling size of (2, 2) doubles both spatial dimensions of the tensor, so here we get 256 feature maps of a dimension of 16 x 16:
gen_model.add(UpSampling2D(size=(2, 2)))
  1. Next, add a 2D convolutional layer. This applies 2D convolutions on the tensor using a specified number of filters. Here, we are using 128 filters and a kernel of a shape of (5, 5):
gen_model.add(Conv2D(128, (5, 5), padding='same'))
gen_model.add(Activation('tanh'))
  1. Next, add a 2D upsampling layer to change the shape of the tensor from (batch_size, 16, 16, 128) to (batch_size, 32, 32, 128):
gen_model.add(UpSampling2D(size=(2, 2))) 

A 2D upsampling layer repeats the rows and columns of the tensor size[0] and size[1] times, respectively.
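This row-and-column repetition can be reproduced with NumPy's repeat; a sketch of what `UpSampling2D(size=(2, 2))` does to a single channel (not the Keras implementation itself):

```python
import numpy as np

# A tiny 2 x 2 single-channel feature map.
x = np.array([[1, 2],
              [3, 4]])

# UpSampling2D(size=(2, 2)) repeats each row size[0] times and
# each column size[1] times, doubling both spatial dimensions.
up = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
print(up)
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
```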

  1. Next, add a second 2D convolutional layer with 64 filters and a kernel size of (5, 5) followed by tanh as the activation function:
gen_model.add(Conv2D(64, (5, 5), padding='same'))
gen_model.add(Activation('tanh'))
  1. Next, add a 2D upsampling layer to change the shape from (batch_size, 32, 32, 64) to (batch_size, 64, 64, 64):
gen_model.add(UpSampling2D(size=(2, 2))) 
  1. Finally, add the third 2D convolutional layer with three filters and a kernel size of (5, 5) followed by tanh as the activation function:
gen_model.add(Conv2D(3, (5, 5), padding='same'))
gen_model.add(Activation('tanh'))

The generator network will output a tensor of a shape of (batch_size, 64, 64, 3). Each image tensor in this batch corresponds to an image of a dimension of 64 x 64 with three channels: Red, Green, and Blue (RGB).
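The chain of shape transformations in the steps above can be traced without Keras at all. In the following plain-Python sketch, each helper function is hypothetical and merely mirrors a layer's effect on the per-sample output shape (stride-1 'same'-padded convolutions preserve height and width, changing only the channel count):

```python
# Per-sample shape of each layer's output, ignoring the batch dimension.
def dense(units):                 # a dense layer outputs a flat vector
    return (units,)

def reshape(shape):               # reshape reinterprets the same values
    return shape

def upsample(shape, size=2):      # doubles height and width
    h, w, c = shape
    return (h * size, w * size, c)

def conv_same(shape, filters):    # 'same' padding keeps height and width
    h, w, _ = shape
    return (h, w, filters)

shape = dense(2048)               # (2048,)
shape = dense(256 * 8 * 8)        # (16384,)
shape = reshape((8, 8, 256))      # (8, 8, 256)
shape = upsample(shape)           # (16, 16, 256)
shape = conv_same(shape, 128)     # (16, 16, 128)
shape = upsample(shape)           # (32, 32, 128)
shape = conv_same(shape, 64)      # (32, 32, 64)
shape = upsample(shape)           # (64, 64, 64)
shape = conv_same(shape, 3)       # (64, 64, 3)
print(shape)                      # (64, 64, 3)
```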

The complete code for the generator network wrapped in a Python method looks as follows:

from keras.models import Sequential
from keras.layers import Dense, Activation, BatchNormalization, \
    Reshape, UpSampling2D, Conv2D


def get_generator():
    gen_model = Sequential()

    gen_model.add(Dense(2048, input_dim=100))
    gen_model.add(Activation('tanh'))

    gen_model.add(Dense(256 * 8 * 8))
    gen_model.add(BatchNormalization())
    gen_model.add(Activation('tanh'))

    gen_model.add(Reshape((8, 8, 256), input_shape=(256 * 8 * 8,)))
    gen_model.add(UpSampling2D(size=(2, 2)))

    gen_model.add(Conv2D(128, (5, 5), padding='same'))
    gen_model.add(Activation('tanh'))

    gen_model.add(UpSampling2D(size=(2, 2)))

    gen_model.add(Conv2D(64, (5, 5), padding='same'))
    gen_model.add(Activation('tanh'))

    gen_model.add(UpSampling2D(size=(2, 2)))

    gen_model.add(Conv2D(3, (5, 5), padding='same'))
    gen_model.add(Activation('tanh'))
    return gen_model
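The first dense layer declares input_dim=100, so the network consumes latent vectors of length 100. A sketch of sampling a batch of input noise with NumPy, where batch_size is an illustrative value:

```python
import numpy as np

batch_size = 16  # illustrative value; choose to suit your training loop

# Sample a batch of latent vectors from a standard normal distribution;
# each row is one 100-dimensional noise vector fed to the generator.
z = np.random.normal(0, 1, size=(batch_size, 100))
print(z.shape)   # (16, 100)
```

Calling gen_model.predict(z) on such a batch would then return a tensor of a shape of (batch_size, 64, 64, 3).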

Now that we have created the generator network, let's work on creating the discriminator network.
