Uses and limitations of autoencoders

Autoencoders are appealing in their simplicity, but they are somewhat limited in what they can do. One potential use is pretraining a model: you use your model as the encoder and build a mirror-image decoder to pair with it. Autoencoders work well for pretraining because you can train them to reconstruct your dataset without needing any labels. Once trained, you keep the encoder's weights and fine-tune them on your intended task.
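To make this concrete, here is a minimal sketch of the two-stage process in PyTorch. The class names (Encoder, Decoder), the layer sizes, and the toy data standing in for pretrain_loader and task_loader are all assumptions made for illustration, not code from this chapter.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, latent_dim))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=32, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, out_dim))
    def forward(self, z):
        return self.net(z)

encoder, decoder = Encoder(), Decoder()
autoencoder = nn.Sequential(encoder, decoder)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

# Toy batches standing in for your real (unlabelled and labelled) data.
pretrain_loader = [(torch.rand(64, 784), None) for _ in range(10)]
task_loader = [(torch.rand(64, 784), torch.randint(0, 10, (64,))) for _ in range(10)]

# Stage 1: pretrain by reconstruction -- no labels needed.
for x, _ in pretrain_loader:
    x = x.view(x.size(0), -1)
    loss = nn.functional.mse_loss(autoencoder(x), x)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: reuse the pretrained encoder and fine-tune it on the intended task,
# here a hypothetical 10-class classification problem.
classifier = nn.Sequential(encoder, nn.Linear(32, 10))
opt = torch.optim.Adam(classifier.parameters(), lr=1e-4)
for x, y in task_loader:
    x = x.view(x.size(0), -1)
    loss = nn.functional.cross_entropy(classifier(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

The important point is simply that the encoder's weights survive into the second stage, so the fine-tuning run starts from a representation already shaped by the reconstruction objective.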

Another use is as a form of lossy compression for your data, provided the data isn't too complex. You can use the autoencoder to reduce the dimensionality down to two or three dimensions and then visualize your inputs in the latent space to see whether anything useful emerges, such as inputs of the same class clustering together.
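A rough sketch of that visualization idea follows, reusing the hypothetical Encoder class from the previous listing but with a two-dimensional latent space. The random inputs and labels are again stand-ins for your own dataset, and the encoder is assumed to have already been trained.

import matplotlib.pyplot as plt
import torch

encoder_2d = Encoder(in_dim=784, latent_dim=2)   # assumed already trained

x = torch.rand(500, 784)                         # stand-in for your dataset
labels = torch.randint(0, 10, (500,))            # stand-in for class labels

with torch.no_grad():
    z = encoder_2d(x)                            # shape (500, 2): the latent codes

plt.scatter(z[:, 0].numpy(), z[:, 1].numpy(), c=labels.numpy(), s=5, cmap="tab10")
plt.xlabel("latent dimension 1")
plt.ylabel("latent dimension 2")
plt.title("Inputs projected into the 2-D latent space")
plt.show()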

One limitation of autoencoders, however, is that they cannot be used to generate new data for us. This is because we don't know how to create new latent vectors to feed to the decoder; the only way to obtain a latent vector is to run the encoder on existing input data. We will now look at a modification of the autoencoder that helps solve this issue.
