How to do it...

With this generator, we will replicate the architecture from the paper, matching its number of filters and its block style.

These are the steps for this:

  1. Imports will match many of the previous chapters:

#!/usr/bin/env python
import sys
import numpy as np
from keras.layers import Dense, Reshape, Input, BatchNormalization, Concatenate
from keras_contrib.layers.normalization import InstanceNormalization
from keras.layers.core import Activation
from keras.layers.convolutional import UpSampling2D, Convolution2D, MaxPooling2D, Deconvolution2D
from keras.layers.advanced_activations import LeakyReLU
from keras.models import Sequential, Model
from keras.optimizers import Adam, SGD, Nadam, Adamax
from keras import initializers
from keras.utils import plot_model

It's important to note that we are using a layer from keras_contrib: the InstanceNormalization layer. This layer is used in the original CycleGAN paper, and we are lucky there is an open source implementation we can leverage here.
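A practical note, and an assumption on our part: keras-contrib is a separate package from Keras, and the layer's module path has moved between versions, so a defensive import such as the following sketch can save some head-scratching:

# keras-contrib is installed separately from Keras, typically via:
#   pip install git+https://www.github.com/keras-team/keras-contrib.git
try:
    # older keras-contrib releases
    from keras_contrib.layers.normalization import InstanceNormalization
except ImportError:
    # newer releases export the layer at the package top level
    from keras_contrib.layers import InstanceNormalization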

  2. Next, we instantiate the class and simplify the input to the class:

class Generator(object):
    def __init__(self, width=28, height=28, channels=1):

        self.W = width
        self.H = height
        self.C = channels
        self.SHAPE = (width, height, channels)

        self.Generator = self.model()
        self.OPTIMIZER = Adam(lr=1e-4, beta_1=0.2)
        self.Generator.compile(loss='binary_crossentropy',
                               optimizer=self.OPTIMIZER, metrics=['accuracy'])

        self.save_model()
        self.summary()

This class is much simpler than the instantiations in previous chapters. Why? Because in the CycleGAN paper, the complexity lives in the architecture itself and in how the individual models are combined.

  3. The critical piece of this generator is the model method; here is where we start:

    def model(self):
        input_layer = Input(shape=self.SHAPE)
  4. The first part of the model involves 2D convolutions with this InstanceNormalization layer:

        down_1 = Convolution2D(32, kernel_size=4, strides=2,
                               padding='same', activation=LeakyReLU(alpha=0.2))(input_layer)
        norm_1 = InstanceNormalization()(down_1)

        down_2 = Convolution2D(32*2, kernel_size=4, strides=2,
                               padding='same', activation=LeakyReLU(alpha=0.2))(norm_1)
        norm_2 = InstanceNormalization()(down_2)

        down_3 = Convolution2D(32*4, kernel_size=4, strides=2,
                               padding='same', activation=LeakyReLU(alpha=0.2))(norm_2)
        norm_3 = InstanceNormalization()(down_3)

        down_4 = Convolution2D(32*8, kernel_size=4, strides=2,
                               padding='same', activation=LeakyReLU(alpha=0.2))(norm_3)
        norm_4 = InstanceNormalization()(down_4)

How does InstanceNormalization differ from BatchNormalization and other similar techniques? BatchNormalization computes the mean and variance of each channel across the entire mini-batch, whereas InstanceNormalization computes them per channel within each individual sample. For style transfer, this means every image is normalized on its own, so the generator's output does not depend on which other images happen to share the batch.
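To make that difference concrete, here is a minimal NumPy sketch; it only mirrors the statistics each layer computes, not the layers' full implementations (the learnable scale and offset are omitted):

import numpy as np

# a toy batch: 8 images, 28x28, 32 channels (channels-last)
x = np.random.rand(8, 28, 28, 32)

# BatchNormalization pools statistics over the batch and spatial axes:
# one mean/variance per channel, shared by every image in the batch
batch_mean = x.mean(axis=(0, 1, 2), keepdims=True)   # shape (1, 1, 1, 32)

# InstanceNormalization pools only over the spatial axes:
# one mean/variance per channel, per image
inst_mean = x.mean(axis=(1, 2), keepdims=True)       # shape (8, 1, 1, 32)
inst_var = x.var(axis=(1, 2), keepdims=True)
x_norm = (x - inst_mean) / np.sqrt(inst_var + 1e-5)  # normalized per instance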

  5. The upsample blocks are similar to the downsample blocks, but they bring us back up to the original resolution of our images:

        upsample_1 = UpSampling2D()(norm_4)
        up_conv_1 = Convolution2D(32*4, kernel_size=4, strides=1,
                                  padding='same', activation='relu')(upsample_1)
        norm_up_1 = InstanceNormalization()(up_conv_1)
        add_skip_1 = Concatenate()([norm_up_1, norm_3])

        upsample_2 = UpSampling2D()(add_skip_1)
        up_conv_2 = Convolution2D(32*2, kernel_size=4, strides=1,
                                  padding='same', activation='relu')(upsample_2)
        norm_up_2 = InstanceNormalization()(up_conv_2)
        add_skip_2 = Concatenate()([norm_up_2, norm_2])

        upsample_3 = UpSampling2D()(add_skip_2)
        up_conv_3 = Convolution2D(32, kernel_size=4, strides=1,
                                  padding='same', activation='relu')(upsample_3)
        norm_up_3 = InstanceNormalization()(up_conv_3)
        add_skip_3 = Concatenate()([norm_up_3, norm_1])

Notice the use of InstanceNormalization again; normalizing each sample on its own is an important ingredient that helps the network generalize the style transfer function. Notice also the Concatenate layers, which form the skip connections between the downsample and upsample paths, as the sketch below shows.
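A skip connection built with Concatenate stacks tensors along the channel axis rather than adding them. Here is a tiny, self-contained sketch; the 8x8x128 shapes are illustrative, matching norm_up_1 and norm_3 for a 64x64 input:

from keras.layers import Input, Concatenate
from keras.models import Model

a = Input(shape=(8, 8, 128))   # stands in for norm_up_1 (32*4 channels)
b = Input(shape=(8, 8, 128))   # stands in for norm_3 (32*4 channels)
merged = Concatenate()([a, b])

m = Model([a, b], merged)
print(m.output_shape)          # (None, 8, 8, 256): channels stacked, not summed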

  6. The last piece of the generator model method is the output layer and the structure of this model:

        last_upsample = UpSampling2D()(add_skip_3)
        output_layer = Convolution2D(3, kernel_size=4, strides=1,
                                     padding='same', activation='tanh')(last_upsample)

        return Model(input_layer, output_layer)

Why does the model method actually build the Model inside the return statement? Because when we link up the architecture in the GAN stage, we will need this structure to connect the inputs and outputs of the individual models.
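As a minimal sketch of that linking step (the one-layer discriminator below is a hypothetical stand-in for illustration, not the recipe's actual critic, and the 64x64x3 shape is our assumption):

from keras.layers import Input, Flatten, Dense
from keras.models import Model

# hypothetical stand-in discriminator, only to show the wiring
d_in = Input(shape=(64, 64, 3))
d_out = Dense(1, activation='sigmoid')(Flatten()(d_in))
discriminator = Model(d_in, d_out)
discriminator.trainable = False        # freeze the critic inside the stacked model

generator = Generator(width=64, height=64, channels=3).Generator

gan_input = Input(shape=(64, 64, 3))
fake_image = generator(gan_input)      # the returned Model is callable like a layer
validity = discriminator(fake_image)
combined = Model(gan_input, validity)  # one graph spanning both models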

  7. As with our previous classes, we have similar helper methods included with this class. Note that self.Generator is already a Keras Model, so it is passed to plot_model directly:

    def summary(self):
        return self.Generator.summary()

    def save_model(self):
        plot_model(self.Generator, to_file='/data/Generator_Model.png')

That's it! This is what you need from the generator to make it work.
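Finally, a quick smoke test of the class on its own. The width and height should be divisible by 16 so that the four stride-2 downsamples line up cleanly with the skip connections; the 64x64x3 size here is our assumption for illustration:

# build, compile, plot, and summarize the generator in one go
gen = Generator(width=64, height=64, channels=3)

model = gen.Generator          # the underlying Keras Model
print(model.input_shape)       # (None, 64, 64, 3)
print(model.output_shape)      # (None, 64, 64, 3), tanh-activated RGB output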
