Generating Anime Characters Using DCGANs

As we know, convolutional layers are very good at processing images. They can effectively learn important features, such as edges, shapes, and complex objects, as demonstrated by networks such as Inception, AlexNet, Visual Geometry Group (VGG), and ResNet. Ian Goodfellow and others proposed a Generative Adversarial Network (GAN) with dense layers in their paper titled Generative Adversarial Nets, which can be found at the following link: https://arxiv.org/pdf/1406.2661.pdf. Complex architectures, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Long Short-Term Memory (LSTM) networks, were not initially tested in GANs. The development of Deep Convolutional Generative Adversarial Networks (DCGANs) was an important step toward using CNNs for image generation. A DCGAN uses convolutional layers instead of dense layers. DCGANs were proposed by Alec Radford, Luke Metz, and Soumith Chintala in their paper, Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, which can be found at the following link: https://arxiv.org/pdf/1511.06434.pdf. Since then, DCGANs have been widely used for various image generation tasks. In this chapter, we will use a DCGAN architecture to generate anime characters.
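To make the contrast with dense layers concrete, the following is a minimal NumPy sketch of the stride-2 transposed convolution that a DCGAN generator typically stacks to upsample a small feature map into a larger image. The function name, shapes, and random values here are purely illustrative, not the book's actual implementation:

```python
import numpy as np

def conv2d_transpose(x, kernel, stride=2):
    """Minimal single-channel transposed convolution.

    Each input value "paints" a scaled copy of the kernel onto the
    output, spaced `stride` pixels apart, so the spatial size grows.
    x: (H, W) feature map, kernel: (k, k) learned filter.
    """
    h, w = x.shape
    k = kernel.shape[0]
    out = np.zeros((h * stride + k - stride, w * stride + k - stride))
    for i in range(h):
        for j in range(w):
            out[i * stride:i * stride + k,
                j * stride:j * stride + k] += x[i, j] * kernel
    return out

# A 4x4 feature map is upsampled to 9x9 by one stride-2 layer;
# a DCGAN generator repeats this until it reaches the image size.
feature = np.random.randn(4, 4)
kernel = np.random.randn(3, 3)
up = conv2d_transpose(feature, kernel)
print(up.shape)  # (9, 9)
```

In a real DCGAN these layers operate on multi-channel tensors and their kernels are learned during training; frameworks such as Keras provide this operation as a ready-made layer, so you never write the loop yourself.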

In this chapter, we will be covering the following topics:

  • Introducing DCGANs
  • Architectural details of a GAN
  • Setting up the project
  • Preparing the dataset for training
  • A Keras implementation of a DCGAN for the generation of anime characters
  • Training the DCGAN on the anime character dataset
  • Evaluating the trained model
  • Optimizing the networks by tuning the hyperparameters
  • Practical applications of DCGANs