Creating a DeepDream network

Google trained a neural network on ImageNet for the Large Scale Visual Recognition Challenge (ILSVRC) in 2014 and made it open source in July 2015. The original architecture is presented in Going Deeper with Convolutions, by Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich (https://arxiv.org/abs/1409.4842). The network learned a hierarchical representation of each image: the lower layers learned low-level features, such as lines and edges, while the higher layers learned more sophisticated patterns, such as eyes, noses, mouths, and so on. Therefore, if we try to visualize what a higher layer in the network represents, we will see a mix of different features extracted from the original ImageNet images, such as the eye of a bird or the mouth of a dog. With this in mind, if we take a new image and iteratively adjust it to maximize the activation of an upper layer of the network, the result is a new, visionary image. In this visionary image, some of the patterns learned by the higher layers are dreamt (that is, imagined) into the original image. Here is an example of such a visionary image:

An example of Google Deep Dreams as seen in https://commons.wikimedia.org/wiki/File:Aurelia-aurita-3-0009.jpg
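The core mechanic described above, adjusting the input image by gradient ascent so that a chosen layer's activation grows, can be sketched without any pretrained network. The following is a minimal, hypothetical NumPy illustration: the "network" is a single random convolutional filter followed by a ReLU (standing in for one Inception layer), the objective is the mean activation of that layer, and the gradient with respect to the image is computed by hand and applied with the normalized gradient-ascent step that DeepDream uses. All names (`conv2d`, `dream_step`) and the toy sizes are assumptions for illustration, not part of Google's code.

```python
import numpy as np

def conv2d(img, w):
    # valid 2-D cross-correlation of a single-channel image with one filter
    H, W = img.shape
    kh, kw = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * w)
    return out

def dream_step(img, w, lr=0.1):
    # forward pass: one conv "layer" with ReLU, objective = mean activation
    pre = conv2d(img, w)
    act = np.maximum(pre, 0.0)
    loss = act.mean()
    # backward pass: gradient of the mean activation w.r.t. the input image
    dact = (pre > 0).astype(float) / act.size
    grad = np.zeros_like(img)
    kh, kw = w.shape
    for i in range(dact.shape[0]):
        for j in range(dact.shape[1]):
            grad[i:i + kh, j:j + kw] += dact[i, j] * w
    # normalized gradient ascent on the image itself, as in DeepDream
    img = img + lr * grad / (np.abs(grad).mean() + 1e-8)
    return img, loss

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16))  # toy stand-in for the input photograph
w = rng.normal(size=(3, 3))      # toy stand-in for a learned filter
losses = []
for _ in range(20):
    img, loss = dream_step(img, w)
    losses.append(loss)
```

After a few iterations the layer's mean activation rises, which is exactly the "dreaming" loop: with a real network, the same ascent on the pixels amplifies whatever patterns that layer has learned to detect.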