Implementing Variational Autoencoders

Variational Autoencoders (VAEs) are a mix of the best of both worlds: neural networks and Bayesian inference. They have emerged as one of the most popular approaches to unsupervised learning. They are Autoencoders with a twist: along with the conventional Encoder and Decoder networks of Autoencoders (see Chapter 8, Autoencoders), they have additional stochastic layers. The stochastic layer after the Encoder network samples the latent representation from a Gaussian distribution, and the one after the Decoder network samples the output from a Bernoulli distribution. Like GANs, Variational Autoencoders can be used to generate images and figures based on the distribution they have been trained on. VAEs also allow one to place complex priors on the latent space and thus learn powerful latent representations.
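To make this structure concrete, here is a minimal sketch of such a network in PyTorch (the framework, the layer sizes such as 784/400/20, and all names such as VAE are illustrative assumptions, not taken from the text). The Encoder maps the input to the mean and log-variance of a Gaussian, the stochastic layer samples z with the reparameterization trick, and the Decoder's sigmoid output provides per-pixel Bernoulli parameters:

import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        # Encoder q_phi(z|x): hidden layer, then Gaussian mean and log-variance.
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        # Decoder p_theta(x|z): sigmoid output gives Bernoulli parameters.
        self.dec = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, x_dim), nn.Sigmoid(),
        )

    def reparameterize(self, mu, logvar):
        # Stochastic layer: z = mu + sigma * eps with eps ~ N(0, I),
        # so z|x ~ N(mu, sigma^2) while gradients still flow through mu, sigma.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.dec(z), mu, logvar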

The following diagram describes a VAE. The Encoder network qφ(z|x) approximates the true but intractable posterior distribution p(z|x), where x is the input to the VAE and z is the latent representation. The Decoder network pθ(x|z) takes the d-dimensional latent variable (also called the latent space) as its input and generates new images following the same distribution as p(x). The latent representation is sampled from z|x ~ 𝒩(μ(x), Σ(x)), and the output of the Decoder network is sampled from x|z ~ Bernoulli(fθ(z)), where fθ(z) denotes the Decoder output:

Example of the Encoder-Decoder architecture of a VAE.
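Since p(z|x) is intractable, a VAE is trained by maximizing the evidence lower bound (ELBO) on log p(x): E over qφ(z|x) of log pθ(x|z), minus KL(qφ(z|x) ‖ p(z)), with the prior p(z) usually taken to be 𝒩(0, I). The following is a hedged sketch of this loss, paired with the illustrative VAE class above: binary cross-entropy is the negative Bernoulli log-likelihood, and the KL term has a closed form for diagonal Gaussians.

import torch
import torch.nn.functional as F

def elbo_loss(x_hat, x, mu, logvar):
    # Reconstruction term: -log p_theta(x|z) for Bernoulli outputs.
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    # KL(q_phi(z|x) || N(0, I)) in closed form for a diagonal Gaussian.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl  # minimizing this is maximizing the ELBO

In a training loop one would compute x_hat, mu, logvar = model(x), then loss = elbo_loss(x_hat, x, mu, logvar) followed by loss.backward(); after training, decoding samples z ~ 𝒩(0, I) generates new images.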