Variational autoencoders

Our first true generative model, one that can create new data resembling the training data, will be the variational autoencoder (VAE). The VAE has the same structure as a normal autoencoder, but adds a constraint that forces the compressed representation (the latent space) to follow a Gaussian distribution with zero mean and unit variance.
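
To make the constraint concrete, here is a minimal VAE sketch in PyTorch. The framework choice, the layer sizes, and the 784-dimensional input (an MNIST-style flattened image) are illustrative assumptions, not taken from this text. The encoder outputs a mean and a log-variance rather than a single code, and a KL-divergence term in the loss pushes the latent distribution toward a zero mean, unit variance Gaussian:

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    # Illustrative sizes: 784-dim input, 400-dim hidden, 20-dim latent space.
    def __init__(self, input_dim=784, latent_dim=20):
        super().__init__()
        self.enc = nn.Linear(input_dim, 400)
        self.mu = nn.Linear(400, latent_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(400, latent_dim)   # log-variance of q(z|x)
        self.dec1 = nn.Linear(latent_dim, 400)
        self.dec2 = nn.Linear(400, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, so gradients can flow through mu and logvar.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction term plus the KL divergence between q(z|x) and N(0, I);
    # the KL term is the constraint that shapes the latent space.
    recon = F.binary_cross_entropy(recon_x, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

The reparameterization step is what makes the model trainable: sampling directly from a Gaussian is not differentiable, but rewriting the sample as mu + sigma * eps lets gradients reach the encoder's outputs.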

The idea behind forcing this constraint on the latent space is that, when we want to use the VAE to generate new data, we can simply sample vectors from a unit Gaussian distribution and pass them to the trained decoder. This constraint on the latent vectors is what distinguishes a VAE from a normal autoencoder: it gives us a way to create new latent vectors that can be fed to the decoder to generate data.
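
Once such a model is trained, generation reduces to sampling from a unit Gaussian and decoding. A short sketch, assuming the hypothetical VAE class from the previous listing (with its 20-dimensional latent space) has already been trained:

model = VAE()   # in practice, load trained weights here
model.eval()
with torch.no_grad():
    z = torch.randn(16, 20)    # 16 latent vectors from a unit Gaussian
    samples = model.decode(z)  # decoded into data space, e.g. images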

The following figure shows that the VAE has exactly the same structure as a regular autoencoder, except for the constraint on the latent space:

[Figure: VAE architecture, identical to an autoencoder apart from the Gaussian constraint on the latent space]