The fully connected classifier

The merged tensor is then fed to a block consisting of a 1 × 1 convolutional layer followed by a dense layer that performs the classification:

x3 = Conv2D(64 * 8, kernel_size=1, padding="same", strides=1)(merged_input)
x3 = BatchNormalization()(x3)
x3 = LeakyReLU(alpha=0.2)(x3)
x3 = Flatten()(x3)
x3 = Dense(1)(x3)
x3 = Activation('sigmoid')(x3)

x3 is the output of the discriminator network: a single value passed through a sigmoid activation, representing the probability that the input image is real rather than fake.

Finally, create a model:

stage2_dis = Model(inputs=[input_layer, input_layer2], outputs=[x3])

As you can see, this model takes two inputs and returns one output.
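Before looking at the complete listing, it is worth checking the shapes at the merge point. The image branch downsamples the 256 × 256 input by six stride-2 convolutions (256 → 128 → 64 → 32 → 16 → 8 → 4), so the residual-added feature map is (4, 4, 512); concatenating the (4, 4, 128) conditioning input along the channel axis yields (4, 4, 640). A minimal sketch verifying this, using stand-in `Input` layers for the two branches and assuming TensorFlow's bundled Keras:

```python
from tensorflow.keras.layers import Input, concatenate
from tensorflow.keras.models import Model

# Stand-ins for the two tensors that meet at the concatenation block:
image_features = Input(shape=(4, 4, 512))   # plays the role of added_x
text_embedding = Input(shape=(4, 4, 128))   # plays the role of input_layer2

# Concatenation happens along the last (channel) axis by default.
merged = concatenate([image_features, text_embedding])
model = Model([image_features, text_embedding], merged)

print(model.output_shape)  # (None, 4, 4, 640)
```

The channel counts simply add up (512 + 128 = 640), while the spatial dimensions must match exactly, which is why the conditioning input is shaped (4, 4, 128) rather than a flat vector.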

The complete code for the discriminator network is as follows:

from keras.layers import (Input, Conv2D, BatchNormalization, LeakyReLU,
                          Flatten, Dense, Activation, add, concatenate)
from keras.models import Model


def build_stage2_discriminator():
    input_layer = Input(shape=(256, 256, 3))

    x = Conv2D(64, (4, 4), padding='same', strides=2, input_shape=(256, 256, 3), use_bias=False)(input_layer)
    x = LeakyReLU(alpha=0.2)(x)

    x = Conv2D(128, (4, 4), padding='same', strides=2, use_bias=False)(x)
    x = BatchNormalization()(x)
    x = LeakyReLU(alpha=0.2)(x)

    x = Conv2D(256, (4, 4), padding='same', strides=2, use_bias=False)(x)
    x = BatchNormalization()(x)
    x = LeakyReLU(alpha=0.2)(x)

    x = Conv2D(512, (4, 4), padding='same', strides=2, use_bias=False)(x)
    x = BatchNormalization()(x)
    x = LeakyReLU(alpha=0.2)(x)

    x = Conv2D(1024, (4, 4), padding='same', strides=2, use_bias=False)(x)
    x = BatchNormalization()(x)
    x = LeakyReLU(alpha=0.2)(x)

    x = Conv2D(2048, (4, 4), padding='same', strides=2, use_bias=False)(x)
    x = BatchNormalization()(x)
    x = LeakyReLU(alpha=0.2)(x)

    x = Conv2D(1024, (1, 1), padding='same', strides=1, use_bias=False)(x)
    x = BatchNormalization()(x)
    x = LeakyReLU(alpha=0.2)(x)

    # Note: no activation here; the residual branch is added first,
    # and LeakyReLU is applied after the addition
    x = Conv2D(512, (1, 1), padding='same', strides=1, use_bias=False)(x)
    x = BatchNormalization()(x)

    x2 = Conv2D(128, (1, 1), padding='same', strides=1, use_bias=False)(x)
    x2 = BatchNormalization()(x2)
    x2 = LeakyReLU(alpha=0.2)(x2)

    x2 = Conv2D(128, (3, 3), padding='same', strides=1, use_bias=False)(x2)
    x2 = BatchNormalization()(x2)
    x2 = LeakyReLU(alpha=0.2)(x2)

    x2 = Conv2D(512, (3, 3), padding='same', strides=1, use_bias=False)(x2)
    x2 = BatchNormalization()(x2)

    # Residual connection followed by the shared activation
    added_x = add([x, x2])
    added_x = LeakyReLU(alpha=0.2)(added_x)

    input_layer2 = Input(shape=(4, 4, 128))

    # Concatenation block
    merged_input = concatenate([added_x, input_layer2])

    x3 = Conv2D(64 * 8, kernel_size=1, padding="same", strides=1)(merged_input)
    x3 = BatchNormalization()(x3)
    x3 = LeakyReLU(alpha=0.2)(x3)
    x3 = Flatten()(x3)
    x3 = Dense(1)(x3)
    x3 = Activation('sigmoid')(x3)

    stage2_dis = Model(inputs=[input_layer, input_layer2], outputs=[x3])
    return stage2_dis

We have now successfully created the models for both stages of the StackGAN: Stage-I and Stage-II. Let's move on to training them.
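As a preview of how the discriminator will be used during training, the sketch below performs a single discriminator update. To keep the snippet self-contained it builds a tiny stand-in network with the same input shapes as build_stage2_discriminator() (a 256 × 256 RGB image and a 4 × 4 × 128 conditioning tensor); the stand-in layers, batch size, and random data are illustrative assumptions, not part of the book's code. TensorFlow's bundled Keras is assumed:

```python
import numpy as np
from tensorflow.keras.layers import Input, Conv2D, Flatten, Dense, concatenate
from tensorflow.keras.models import Model

# Tiny stand-in for build_stage2_discriminator(), matching its input shapes.
img_in = Input(shape=(256, 256, 3))
emb_in = Input(shape=(4, 4, 128))
x = Conv2D(8, (4, 4), strides=64, padding='same')(img_in)  # -> (4, 4, 8)
x = concatenate([x, emb_in])
out = Dense(1, activation='sigmoid')(Flatten()(x))
toy_dis = Model([img_in, emb_in], out)

# The discriminator outputs a probability, so binary cross-entropy is
# the natural loss for the real-versus-fake decision.
toy_dis.compile(loss='binary_crossentropy', optimizer='adam')

# One update on a batch of (random, illustrative) "real" images:
# real images are labeled 1; generated images would be labeled 0.
batch = 2
real_images = np.random.rand(batch, 256, 256, 3).astype('float32')
embeddings = np.random.rand(batch, 4, 4, 128).astype('float32')
real_labels = np.ones((batch, 1), dtype='float32')

loss = toy_dis.train_on_batch([real_images, embeddings], real_labels)
```

In the actual training loop, the same pattern is repeated with Stage-II generator outputs labeled as fake, alternating with generator updates.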
