How it works...

A VAE learns to reconstruct its inputs and, at the same time, to generate new images by sampling from the latent space. Because the decoder is trained on the dataset, the generated images follow the same distribution as the training data.
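As a minimal sketch of this generation step: new images come from drawing z from the standard normal prior and passing it through the decoder. The decoder weights and layer sizes below are illustrative stand-ins for a trained network, not the recipe's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 2-D latent space, 784 = 28*28 pixels for MNIST.
n_z, n_hidden, n_out = 2, 16, 784

# Randomly initialized decoder weights -- a stand-in for the trained generator.
W_h, b_h = rng.standard_normal((n_z, n_hidden)), np.zeros(n_hidden)
W_o, b_o = rng.standard_normal((n_hidden, n_out)), np.zeros(n_out)

def generate(z):
    """Map latent codes z through the decoder to pixel space."""
    h = np.tanh(z @ W_h + b_h)                      # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W_o + b_o)))   # sigmoid -> pixels in [0, 1]

# New images are produced by sampling z from the standard normal prior.
z = rng.standard_normal((5, n_z))
images = generate(z)
print(images.shape)  # (5, 784)
```

With a trained decoder, each row of `images` would be a plausible new digit; here the point is only the sampling mechanism.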

We can also see the data in the latent space by defining a transform function within the VariationalAutoencoder class:

def transform(self, X):
    """Transform data by mapping it into the latent space."""
    # Note: this maps to the mean of the distribution; we could
    # alternatively sample from the Gaussian distribution.
    return self.sess.run(self.z_mean, feed_dict={self.x: X})

Applying the transform function to the MNIST dataset maps each image to a point in the latent space, which can then be plotted to visualize how the digit classes cluster.
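A minimal sketch of such a visualization follows. The names `vae.transform`, `X`, and `y` are hypothetical here; a random projection stands in for the trained encoder so the snippet is self-contained:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Stand-ins for the MNIST test images/labels and the trained encoder.
X = rng.random((500, 784))
y = rng.integers(0, 10, size=500)
proj = rng.standard_normal((784, 2))
z = X @ proj  # in practice: z = vae.transform(X)

plt.figure(figsize=(6, 5))
plt.scatter(z[:, 0], z[:, 1], c=y, cmap="tab10", s=5)
plt.colorbar(label="digit")
plt.xlabel("z[0]")
plt.ylabel("z[1]")
plt.savefig("latent_space.png")
print(z.shape)  # (500, 2)
```

With a trained two-dimensional latent space, points of the same color (digit class) tend to group together in this scatter plot.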
