How it works...

It is interesting to note that in the preceding code, we reduced the dimensions of the input from 784 to 256, and the network could still reconstruct the original image. Let's compare this performance with that of the RBM (Chapter 7, Unsupervised Learning) using the same hidden-layer dimension:

We can see that the images reconstructed by the autoencoder are much crisper than those reconstructed by the RBM. The reason is that the autoencoder has additional weights to train (the weights from the hidden layer to the decoder output layer), so it can retain more of what it learns. As a result, the autoencoder outperforms the RBM, even though both compress the information to the same number of dimensions.
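To make the distinction concrete, the following is a minimal sketch (not the book's exact model) of a one-hidden-layer autoencoder that compresses 784-dimensional inputs to 256 dimensions. Note that the decoder has its own weight matrix, `W_dec`; these are the additional trainable weights that an RBM, which reuses the transposed encoder weights, does not have. All names, the learning rate, and the random stand-in batch are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 784, 256   # input and hidden dimensions, as in the text
lr = 0.5                    # illustrative learning rate

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Separate encoder and decoder weights: the decoder matrix W_dec is the
# extra set of parameters the autoencoder trains, unlike an RBM.
W_enc = rng.normal(0, 0.01, (n_in, n_hidden))
W_dec = rng.normal(0, 0.01, (n_hidden, n_in))
b_enc = np.zeros(n_hidden)
b_dec = np.zeros(n_in)

# Stand-in batch (e.g. MNIST pixel intensities scaled to [0, 1])
x = rng.random((32, n_in))

def step(x):
    """One gradient-descent step on the reconstruction MSE."""
    global W_enc, W_dec, b_enc, b_dec
    h = sigmoid(x @ W_enc + b_enc)        # encode: 784 -> 256
    x_hat = sigmoid(h @ W_dec + b_dec)    # decode: 256 -> 784
    err = x_hat - x
    loss = np.mean(err ** 2)
    # Backpropagate through both weight matrices
    d_out = 2 * err * x_hat * (1 - x_hat) / x.size
    d_hid = (d_out @ W_dec.T) * h * (1 - h)
    W_dec -= lr * (h.T @ d_out)
    b_dec -= lr * d_out.sum(axis=0)
    W_enc -= lr * (x.T @ d_hid)
    b_enc -= lr * d_hid.sum(axis=0)
    return loss

losses = [step(x) for _ in range(200)]
```

Because both `W_enc` and `W_dec` receive gradient updates, the reconstruction loss in `losses` falls as training proceeds, which is the mechanism behind the crisper reconstructions discussed above.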
