Updated and revised second edition of the bestselling guide to advanced deep learning with TensorFlow 2 and Keras

Key Features

  • Explore the most advanced deep learning techniques that drive modern AI results
  • New coverage of unsupervised deep learning using mutual information, object detection, and semantic segmentation
  • Completely updated for TensorFlow 2.x

Book Description

Advanced Deep Learning with TensorFlow 2 and Keras, Second Edition is a completely updated edition of the bestselling guide to the advanced deep learning techniques available today. Revised for TensorFlow 2.x, this edition introduces you to the practical side of deep learning with new chapters on unsupervised learning using mutual information, object detection (SSD), and semantic segmentation (FCN and PSPNet), further allowing you to create your own cutting-edge AI projects.

Using the open-source Keras deep learning library, the book features hands-on projects that show you how to create more effective AI with the most up-to-date techniques.

Starting with an overview of multi-layer perceptrons (MLPs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs), the book introduces more cutting-edge techniques as you explore deep neural network architectures, including ResNet and DenseNet, and learn how to create autoencoders. You will then learn about GANs and how they can unlock new levels of AI performance.
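
To give a concrete flavor of these building blocks, here is a minimal sketch (not the book's exact code) of the kind of MNIST MLP classifier developed in the early chapters, assuming a TensorFlow 2.x installation; the layer sizes and dropout rate are illustrative:

```python
# A minimal MNIST MLP sketch, assuming TensorFlow 2.x is installed;
# layer sizes and dropout rate are illustrative, not the book's exact values.
import tensorflow as tf
from tensorflow.keras import layers, models

# Load MNIST and flatten each 28x28 image into a 784-dim vector in [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Two hidden ReLU layers with dropout regularization,
# and a softmax output over the 10 digit classes.
model = models.Sequential([
    layers.Dense(256, activation="relu", input_shape=(784,)),
    layers.Dropout(0.45),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.45),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128,
          validation_data=(x_test, y_test))
```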

Next, you'll discover how a variational autoencoder (VAE) is implemented, and how GANs and VAEs have the generative power to synthesize data that can be extremely convincing to humans. You'll also learn to implement deep reinforcement learning (DRL) methods such as Deep Q-Learning and policy gradient methods, which are critical to many modern results in AI.
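
As one example, the reparameterization trick at the heart of the VAE chapter fits in a few lines. The sketch below is a minimal illustration assuming TensorFlow 2.x; the Sampling layer name is ours, not the book's:

```python
# A minimal sketch of the VAE reparameterization trick, assuming
# TensorFlow 2.x; the Sampling layer name is illustrative, not the book's.
import tensorflow as tf
from tensorflow.keras import layers

class Sampling(layers.Layer):
    """Sample z = mean + exp(0.5 * log_var) * eps, with eps ~ N(0, I).

    Drawing eps externally moves the randomness out of the learned
    parameters, so gradients can flow through z_mean and z_log_var.
    """
    def call(self, inputs):
        z_mean, z_log_var = inputs
        eps = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * eps
```

Placed between the encoder's (z_mean, z_log_var) outputs and the decoder, this layer keeps the stochastic sampling step differentiable, which is what makes end-to-end VAE training possible.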

What you will learn

  • Use mutual information maximization techniques to perform unsupervised learning
  • Use segmentation to identify the pixel-wise class of each object in an image
  • Identify both the bounding box and class of objects in an image using object detection
  • Learn the building blocks for advanced techniques - MLPs, CNNs, and RNNs
  • Understand deep neural networks - including ResNet and DenseNet
  • Understand and build generative models - autoencoders, VAEs, and GANs
  • Discover and implement deep reinforcement learning methods (a short Q-learning sketch follows this list)
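
As a taste of the reinforcement learning material, here is a minimal tabular Q-learning sketch; the toy "corridor" environment, reward scheme, and hyperparameters below are illustrative assumptions, not the book's own example:

```python
# A minimal tabular Q-learning sketch; the toy corridor environment,
# reward scheme, and hyperparameters are illustrative assumptions.
import numpy as np

n_states, n_actions = 5, 2           # states 0..4; action 0 = left, 1 = right
gamma, alpha, eps = 0.9, 0.1, 0.3    # discount, learning rate, exploration rate
q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(42)

for _ in range(1000):                # episodes
    s = 0
    for _ in range(200):             # cap episode length
        # Epsilon-greedy action selection.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0   # reward only at the goal
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
        q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
        s = s_next
        if s == n_states - 1:        # goal reached; end the episode
            break

print(q.round(3))  # the learned Q values favor "right" in every state
```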

Who this book is for

This is not an introductory book, so fluency with Python is required. You should also be familiar with basic machine learning approaches, and practical experience with deep learning will be helpful. Knowledge of Keras or TensorFlow 2.x is not required, but is recommended.

Table of Contents

  1. Preface
    1. Who this book is for
    2. What this book covers
    3. To get the most out of this book
      1. Download the example code files
      2. Download the color images
      3. Conventions used
    4. Get in touch
      1. Reviews
  2. Introducing Advanced Deep Learning with Keras
    1. 1. Why is Keras the perfect deep learning library?
      1. Installing Keras and TensorFlow
    2. 2. MLP, CNN, and RNN
      1. The differences between MLP, CNN, and RNN
    3. 3. Multilayer Perceptron (MLP)
      1. The MNIST dataset
      2. The MNIST digit classifier model
      3. Building a model using MLP and Keras
      4. Regularization
      5. Output activation and loss function
      6. Optimization
      7. Performance evaluation
      8. Model summary
    4. 4. Convolutional Neural Network (CNN)
      1. Convolution
      2. Pooling operations
      3. Performance evaluation and model summary
    5. 5. Recurrent Neural Network (RNN)
    6. 6. Conclusion
    7. 7. References
  3. Deep Neural Networks
    1. 1. Functional API
      1. Creating a two-input and one-output model
    2. 2. Deep Residual Network (ResNet)
    3. 3. ResNet v2
    4. 4. Densely Connected Convolutional Network (DenseNet)
      1. Building a 100-layer DenseNet-BC for CIFAR10
    5. 5. Conclusion
    6. 6. References
  4. Autoencoders
    1. 1. Principles of autoencoders
    2. 2. Building an autoencoder using Keras
    3. 3. Denoising autoencoders (DAEs)
    4. 4. Automatic colorization autoencoder
    5. 5. Conclusion
    6. 6. References
  5. Generative Adversarial Networks (GANs)
    1. 1. An Overview of GANs
      1. Principles of GANs
    2. 2. Implementing DCGAN in Keras
    3. 3. Conditional GAN
    4. 4. Conclusion
    5. 5. References
  6. Improved GANs
    1. 1. Wasserstein GAN
      1. Distance functions
      2. Distance function in GANs
      3. Use of Wasserstein loss
      4. WGAN implementation using Keras
    2. 2. Least-squares GAN (LSGAN)
    3. 3. Auxiliary Classifier GAN (ACGAN)
    4. 4. Conclusion
    5. 5. References
  7. Disentangled Representation GANs
    1. 1. Disentangled representations
      1. InfoGAN
      2. Implementation of InfoGAN in Keras
      3. Generator outputs of InfoGAN
    2. 2. StackedGAN
      1. Implementation of StackedGAN in Keras
      2. Generator outputs of StackedGAN
    3. 3. Conclusion
    4. 4. References
  8. Cross-Domain GANs
    1. 1. Principles of CycleGAN
      1. The CycleGAN model
      2. Implementing CycleGAN using Keras
      3. Generator outputs of CycleGAN
      4. CycleGAN on MNIST and SVHN datasets
    2. 2. Conclusion
    3. 3. References
  9. Variational Autoencoders (VAEs)
    1. 1. Principles of VAE
      1. Variational inference
      2. Core equation
      3. Optimization
      4. Reparameterization trick
      5. Decoder testing
      6. VAE in Keras
      7. Using CNN for AE
    2. 2. Conditional VAE (CVAE)
    3. 3. β-VAE: VAE with disentangled latent representations
    4. 4. Conclusion
    5. 5. References
  10. Deep Reinforcement Learning
    1. 1. Principles of Reinforcement Learning (RL)
    2. 2. The Q value
    3. 3. Q-learning example
      1. Q-Learning in Python
    4. 4. Nondeterministic environment
    5. 5. Temporal-difference learning
      1. Q-learning on OpenAI Gym
    6. 6. Deep Q-Network (DQN)
      1. DQN on Keras
      2. Double Q-learning (DDQN)
    7. 7. Conclusion
    8. 8. References
  11. Policy Gradient Methods
    1. 1. Policy gradient theorem
    2. 2. Monte Carlo policy gradient (REINFORCE) method
    3. 3. REINFORCE with baseline method
    4. 4. Actor-Critic method
    5. 5. Advantage Actor-Critic (A2C) method
    6. 6. Policy Gradient methods using Keras
    7. 7. Performance evaluation of policy gradient methods
    8. 8. Conclusion
    9. 9. References
  12. Object Detection
    1. 1. Object detection
    2. 2. Anchor boxes
    3. 3. Ground truth anchor boxes
    4. 4. Loss functions
    5. 5. SSD model architecture
    6. 6. SSD model architecture in Keras
    7. 7. SSD objects in Keras
    8. 8. SSD model in Keras
    9. 9. Data generator model in Keras
    10. 10. Example dataset
    11. 11. SSD model training
    12. 12. Non-Maximum Suppression (NMS) algorithm
    13. 13. SSD model validation
    14. 14. Conclusion
    15. 15. References
  13. Semantic Segmentation
    1. 1. Segmentation
    2. 2. Semantic segmentation network
    3. 3. Semantic segmentation network in Keras
    4. 4. Example dataset
    5. 5. Semantic segmentation validation
    6. 6. Conclusion
    7. 7. References
  14. Unsupervised Learning Using Mutual Information
    1. 1. Mutual Information
    2. 2. Mutual Information and Entropy
    3. 3. Unsupervised learning by maximizing the Mutual Information of discrete random variables
    4. 4. Encoder network for unsupervised clustering
    5. 5. Unsupervised clustering implementation in Keras
    6. 6. Validation using MNIST
    7. 7. Unsupervised learning by maximizing the Mutual Information of continuous random variables
    8. 8. Estimating the Mutual Information of a bivariate Gaussian
    9. 9. Unsupervised clustering using continuous random variables in Keras
    10. 10. Conclusion
    11. 11. References
  15. Other Books You May Enjoy
  16. Index