Getting Ready

A denoising autoencoder will also have the KL divergence penalty term; it differs from the sparse autoencoder of the previous recipe in two main respects. First, n_hidden, the number of hidden units in the bottleneck layer, is greater than the number of units in the input layer, m; that is, n_hidden > m. Second, the input to the encoder is a corrupted version of the original input; to achieve this, we add the function corruption, which adds Gaussian noise to the input:

import numpy as np

def corruption(x, noise_factor=0.3):
    # Corrupt the input by adding scaled Gaussian noise
    noisy_imgs = x + noise_factor * np.random.randn(*x.shape)
    # Keep pixel values in the valid [0, 1] range
    noisy_imgs = np.clip(noisy_imgs, 0., 1.)
    return noisy_imgs
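As a quick sanity check, the function can be applied to a small batch of flattened images; the batch shape (4, 784) is chosen here purely for illustration, matching MNIST-style inputs scaled to [0, 1]:

```python
import numpy as np

def corruption(x, noise_factor=0.3):
    # Corrupt the input by adding scaled Gaussian noise
    noisy_imgs = x + noise_factor * np.random.randn(*x.shape)
    # Keep pixel values in the valid [0, 1] range
    return np.clip(noisy_imgs, 0., 1.)

# Hypothetical batch: 4 flattened 28x28 images with values in [0, 1]
x = np.random.rand(4, 784)
noisy = corruption(x, noise_factor=0.3)

print(noisy.shape)                         # same shape as the input
print(noisy.min() >= 0.0, noisy.max() <= 1.0)  # values remain clipped to [0, 1]
```

During training, the corrupted batch is fed to the encoder while the clean batch x serves as the reconstruction target, so the network learns to undo the noise.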