Building the generator

The generator uses a few new layers that we will talk about in this section. First, take a moment to skim through the following code:

from keras.layers import (Input, Dense, Reshape, BatchNormalization,
                          UpSampling2D, Conv2D, Activation)
from keras.models import Model

def build_generator(noise_shape=(100,)):
    input = Input(noise_shape)
    # Project the noise vector into a 7 x 7 x 128 tensor
    x = Dense(128 * 7 * 7, activation="relu")(input)
    x = Reshape((7, 7, 128))(x)
    x = BatchNormalization(momentum=0.8)(x)
    # 7 x 7 x 128 -> 14 x 14 x 128
    x = UpSampling2D()(x)
    x = Conv2D(128, kernel_size=3, padding="same")(x)
    x = Activation("relu")(x)
    x = BatchNormalization(momentum=0.8)(x)
    # 14 x 14 x 128 -> 28 x 28 x 128, then the convolution reduces it to 28 x 28 x 64
    x = UpSampling2D()(x)
    x = Conv2D(64, kernel_size=3, padding="same")(x)
    x = Activation("relu")(x)
    x = BatchNormalization(momentum=0.8)(x)
    # Collapse to a single channel: 28 x 28 x 1 grayscale image
    x = Conv2D(1, kernel_size=3, padding="same")(x)
    out = Activation("tanh")(x)

    model = Model(input, out)
    print("-- Generator -- ")
    model.summary()
    return model

We have not previously used the UpSampling2D layer. This layer increases the rows and columns of the input tensor, leaving the number of channels unchanged. It does this by repeating the values in the input tensor. By default, it will double the input's rows and columns. If we give an UpSampling2D layer a 7 x 7 x 128 input, it will give us a 14 x 14 x 128 output.
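If you want to see this doubling for yourself, a quick shape check like the following sketch works. It wraps a single UpSampling2D layer in a tiny model; the variable names are just for illustration:

import numpy as np
from keras.layers import Input, UpSampling2D
from keras.models import Model

inp = Input((7, 7, 128))
out = UpSampling2D()(inp)  # size=(2, 2) by default, so rows and columns double
m = Model(inp, out)
print(m.output_shape)  # (None, 14, 14, 128)

x = np.zeros((1, 7, 7, 128))
print(m.predict(x).shape)  # (1, 14, 14, 128)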

Typically when we build a CNN, we start with an image that is very tall and wide and use convolutional layers to get a tensor that's very deep but less tall and wide. Here I will do the opposite. I'll use a dense layer and a reshape to start with a 7 x 7 x 128 tensor and then, after doubling it twice, I'll be left with a tensor whose spatial dimensions are 28 x 28. Since I need a grayscale image, I can use a convolutional layer with a single filter to get a 28 x 28 x 1 output.
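As a sanity check on this arithmetic, you can confirm the final output shape directly; this sketch assumes the build_generator function shown above is in scope:

generator = build_generator()
print(generator.output_shape)  # (None, 28, 28, 1)

# Trace of the shapes through the network:
# Dense(128 * 7 * 7) -> Reshape : 7  x 7  x 128
# UpSampling2D -> Conv2D(128)   : 14 x 14 x 128
# UpSampling2D -> Conv2D(64)    : 28 x 28 x 64
# Conv2D(1)                     : 28 x 28 x 1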

This sort of generator arithmetic is a little off-putting and can seem awkward at first, but after a few painful hours you will get the hang of it!
