Building the generator model

Next, in developing this class, we'll create two methods: a block method and a model creation method. A consolidated, runnable sketch of the whole class follows these steps:

  1. The block method is used by the model creation method. It defines a template for the block reused throughout this architecture, consisting of a Deconv3D layer, BatchNorm, and a ReLU activation:
def block(self, first_layer, filter_size=512, stride_size=(2, 2, 2),
          kernel_size=(4, 4, 4), padding='same'):
    # Reusable template: Deconv3D followed by BatchNorm and a ReLU activation
    x = Deconv3D(filters=filter_size, kernel_size=kernel_size,
                 strides=stride_size, kernel_initializer='glorot_normal',
                 bias_initializer='zeros', padding=padding)(first_layer)
    x = BatchNormalization()(x)
    x = Activation(activation='relu')(x)
    return x
  2. Create the model method, which uses the input shape we defined in the initialization step and starts with one block:
def model(self):
    input_layer = Input(shape=self.INPUT_SHAPE)

    x = self.block(input_layer, filter_size=256, stride_size=(1, 1, 1),
                   kernel_size=(4, 4, 4), padding='valid')

  3. Create a second block and halve the number of filters:
    x = self.block(x, filter_size=128, stride_size=(2, 2, 2),
                   kernel_size=(4, 4, 4))
  4. The final block requires a few changes, so we'll define it explicitly here instead of calling the block method; the number of filters drops to 3 and the ReLU activation is omitted (a sigmoid follows in the next step):
    x = Deconv3D(filters=3, kernel_size=(4, 4, 4), strides=(2, 2, 2),
                 kernel_initializer='glorot_normal',
                 bias_initializer='zeros', padding='same')(x)
    x = BatchNormalization()(x)
  5. At the end of this method, apply a sigmoid activation and create the model by explicitly specifying its input and output layers:
    output_layer = Activation(activation='sigmoid')(x)
    model = Model(inputs=input_layer, outputs=output_layer)
    return model
  6. The final step is to define the summary method (this should start to look familiar by now):
def summary(self):
    return self.Generator.summary()
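
For reference, here is how the pieces above might fit together as one class. This is a minimal sketch under a few assumptions rather than the book's exact code: it uses tf.keras and imports Conv3DTranspose under the name Deconv3D, and the constructor signature and the (1, 1, 1, 100) latent shape are illustrative placeholders, since this section only states that INPUT_SHAPE is set during initialization and that summary() reports on self.Generator:

    from tensorflow.keras.layers import Input, BatchNormalization, Activation
    from tensorflow.keras.layers import Conv3DTranspose as Deconv3D
    from tensorflow.keras.models import Model

    class Generator:
        def __init__(self, input_shape=(1, 1, 1, 100)):
            # Assumed constructor: the recipe only says INPUT_SHAPE is set during
            # initialization; the (1, 1, 1, 100) latent shape is a placeholder
            self.INPUT_SHAPE = input_shape
            self.Generator = self.model()

        def block(self, first_layer, filter_size=512, stride_size=(2, 2, 2),
                  kernel_size=(4, 4, 4), padding='same'):
            # Reusable Deconv3D -> BatchNorm -> ReLU template
            x = Deconv3D(filters=filter_size, kernel_size=kernel_size,
                         strides=stride_size, kernel_initializer='glorot_normal',
                         bias_initializer='zeros', padding=padding)(first_layer)
            x = BatchNormalization()(x)
            x = Activation(activation='relu')(x)
            return x

        def model(self):
            input_layer = Input(shape=self.INPUT_SHAPE)
            # First block: 'valid' padding with unit strides grows each spatial
            # dimension by kernel_size - 1 (1x1x1 -> 4x4x4 with a 4x4x4 kernel)
            x = self.block(input_layer, filter_size=256, stride_size=(1, 1, 1),
                           kernel_size=(4, 4, 4), padding='valid')
            # Second block: halve the filters, double the spatial resolution
            x = self.block(x, filter_size=128, stride_size=(2, 2, 2),
                           kernel_size=(4, 4, 4))
            # Final upsampling layer: 3 output channels, no ReLU
            x = Deconv3D(filters=3, kernel_size=(4, 4, 4), strides=(2, 2, 2),
                         kernel_initializer='glorot_normal',
                         bias_initializer='zeros', padding='same')(x)
            x = BatchNormalization()(x)
            output_layer = Activation(activation='sigmoid')(x)
            return Model(inputs=input_layer, outputs=output_layer)

        def summary(self):
            return self.Generator.summary()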

The next recipe will focus on developing the discriminator class!
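
Before moving on, a quick smoke test of the sketch above can confirm the output shape; the batch size and latent shape here are arbitrary placeholders:

    import numpy as np

    generator = Generator(input_shape=(1, 1, 1, 100))
    generator.summary()

    # Push random latent volumes through the untrained generator
    noise = np.random.normal(0, 1, size=(2, 1, 1, 1, 100))
    fake_voxels = generator.Generator.predict(noise)
    print(fake_voxels.shape)  # (2, 16, 16, 16, 3) with the layer settings above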
