

Deep learning simplified by taking supervised, unsupervised, and reinforcement learning to the next level using the Python ecosystem

Key Features

  • Build deep learning models with transfer learning principles in Python
  • Implement transfer learning to solve real-world research problems
  • Perform complex operations such as image captioning and neural style transfer

Book Description

Transfer learning is a machine learning (ML) technique where knowledge gained from solving one set of problems is used to solve other, similar problems.

The purpose of this book is twofold. First, we focus on detailed coverage of deep learning (DL) and transfer learning, comparing and contrasting the two with easy-to-follow concepts and examples. Second, we focus on real-world research problems, with hands-on examples using TensorFlow, Keras, and the Python ecosystem.

The book starts with the key essential concepts of ML and DL, followed by coverage of important DL architectures such as convolutional neural networks (CNNs), deep neural networks (DNNs), recurrent neural networks (RNNs), long short-term memory (LSTM), and capsule networks. Our focus then shifts to transfer learning concepts such as model freezing, fine-tuning, and pre-trained models, including VGG, Inception, and ResNet, and we show, with practical examples, how such models can outperform DL models trained from scratch. In the concluding chapters, we focus on a multitude of real-world case studies and problems in areas such as computer vision, audio analysis, and natural language processing (NLP).
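
To give a flavor of the feature-extraction and model-freezing ideas mentioned above, here is a minimal sketch of transfer learning with a pre-trained VGG16 model in Keras. The input size, binary classifier head, and optimizer settings below are illustrative assumptions, not taken from the book's case studies.

```python
# Minimal transfer learning sketch: freeze a pre-trained VGG16 convolutional
# base and train a small classifier head on top (feature extraction).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Load VGG16 pre-trained on ImageNet, without its original classifier head
conv_base = VGG16(weights='imagenet', include_top=False,
                  input_shape=(150, 150, 3))
conv_base.trainable = False  # model freezing: keep pre-trained weights fixed

# Stack a small, trainable classifier on top of the frozen base
model = models.Sequential([
    conv_base,
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.Dense(1, activation='sigmoid'),  # e.g. a binary image classifier
])

model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.summary()
```

Fine-tuning follows the same pattern, except that a few of the top convolutional layers are unfrozen and retrained with a very small learning rate.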

By the end of this book, you will be able to implement both DL and transfer learning principles in your own systems.

What you will learn

  • Set up your own DL environment with graphics processing unit (GPU) and cloud support
  • Delve into transfer learning principles with ML and DL models
  • Explore various DL architectures, including CNN, LSTM, and capsule networks
  • Learn about data and network representation and loss functions
  • Get to grips with models and strategies in transfer learning
  • Walk through potential challenges in building complex transfer learning models from scratch
  • Explore real-world research problems related to computer vision and audio analysis
  • Understand how transfer learning can be leveraged in NLP

Who this book is for

Hands-On Transfer Learning with Python is for data scientists, machine learning engineers, analysts, and developers with an interest in data and in applying state-of-the-art transfer learning methodologies to solve tough real-world problems. Basic proficiency in machine learning and Python is required.

Downloading the example code for this book

You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you.

Table of Contents

  1. Title Page
  2. Copyright and Credits
    1. Hands-On Transfer Learning with Python
  3. Dedication
  4. Packt Upsell
    1. Why subscribe?
    2. PacktPub.com
  5. Foreword
  6. Contributors
    1. About the authors
    2. About the reviewer
    3. Packt is searching for authors like you
  7. Preface
    1. Who this book is for
    2. What this book covers
    3. To get the most out of this book
      1. Download the example code files
      2. Download the color images
      3. Conventions used
    4. Get in touch
      1. Reviews
  8. Machine Learning Fundamentals
    1. Why ML?
      1. Formal definition
      2. Shallow and deep learning
    2. ML techniques
      1. Supervised learning
        1. Classification
        2. Regression
      2. Unsupervised learning
        1. Clustering
        2. Dimensionality reduction
        3. Association rule mining
        4. Anomaly detection
    3. CRISP-DM
      1. Business understanding
      2. Data understanding
      3. Data preparation
      4. Modeling
      5. Evaluation
      6. Deployment
    4. Standard ML workflow
      1. Data retrieval
      2. Data preparation
        1. Exploratory data analysis
        2. Data processing and wrangling
        3. Feature engineering and extraction
        4. Feature scaling and selection
      3. Modeling
      4. Model evaluation and tuning
        1. Model evaluation
        2. Bias variance trade-off
          1. Bias
          2. Variance
          3. Trade-off
          4. Underfitting
          5. Overfitting
          6. Generalization
        3. Model tuning
      5. Deployment and monitoring
    5. Exploratory data analysis
    6. Feature extraction and engineering
      1. Feature engineering strategies
        1. Working with numerical data
        2. Working with categorical data
        3. Working with image data
          1. Deep learning based automated feature extraction
        4. Working with text data
          1. Text preprocessing
          2. Feature engineering
    7. Feature selection
    8. Summary
  9. Deep Learning Essentials
    1. What is deep learning?
    2. Deep learning frameworks
    3. Setting up a cloud-based deep learning environment with GPU support
      1. Choosing a cloud provider
      2. Setting up your virtual server
      3. Configuring your virtual server
      4. Installing and updating deep learning dependencies 
      5. Accessing your deep learning cloud environment
      6. Validating GPU-enablement on your deep learning environment
    4. Setting up a robust, on-premise deep learning environment with GPU support
    5. Neural network basics
      1. A simple linear neuron
      2. Gradient-based optimization
      3. The Jacobian and Hessian matrices
      4. Chain rule of derivatives
      5. Stochastic Gradient Descent
      6. Non-linear neural units
      7. Learning a simple non-linear unit – logistic unit
      8. Loss functions
      9. Data representations
        1. Tensor examples
        2. Tensor operations
      10. Multilayered neural networks
      11. Backprop – training deep neural networks
      12. Challenges in neural network learning
        1. Ill-conditioning
        2. Local minima and saddle points 
        3. Cliffs and exploding gradients
        4. Initialization – bad correspondence between the local and global structure of the objective
        5. Inexact gradients
      13. Initialization of model parameters
        1. Initialization heuristics
      14. Improvements of SGD
        1. The momentum method
        2. Nesterov momentum
        3. Adaptive learning rate – separate for each connection
          1. AdaGrad
          2. RMSprop
          3. Adam
      15. Overfitting and underfitting in neural networks
        1. Model capacity
        2. How to avoid overfitting – regularization
          1. Weight-sharing
          2. Weight-decay 
          3. Early stopping
          4. Dropout
          5. Batch normalization
          6. Do we need more data?
      16. Hyperparameters of the neural network
        1. Automatic hyperparameter tuning
          1. Grid search
    6. Summary
  10. Understanding Deep Learning Architectures
    1. Neural network architecture
      1. Why different architectures are needed
    2. Various architectures
      1. MLPs and deep neural networks
      2. Autoencoder neural networks
      3. Variational autoencoders 
      4. Generative Adversarial Networks
        1. Text-to-image synthesis using the GAN architecture
      5. CNNs 
        1. The convolution operator
        2. Stride and padding mode in convolution
        3. The convolution layer
        4. LeNet architecture
        5. AlexNet
        6. ZFNet
        7. GoogLeNet (inception network)
        8. VGG
        9. Residual Neural Networks
      6. Capsule networks 
      7. Recurrent neural networks 
        1. LSTMs
        2. Stacked LSTMs
        3. Encoder-decoder – Neural Machine Translation
        4. Gated Recurrent Units
      8. Memory Neural Networks
        1. MemN2Ns
      9. Neural Turing Machine 
        1. Selective attention
        2. Read operation
        3. Write operation
      10. The attention-based neural network model
    3. Summary
  11. Transfer Learning Fundamentals
    1. Introduction to transfer learning
      1. Advantages of transfer learning
    2. Transfer learning strategies
    3. Transfer learning and deep learning
      1. Transfer learning methodologies
        1. Feature-extraction
        2. Fine-tuning
      2. Pretrained models
      3. Applications
    4. Deep transfer learning types
      1. Domain adaptation
      2. Domain confusion
      3. Multitask learning
      4. One-shot learning
      5. Zero-shot learning
    5. Challenges of transfer learning
      1. Negative transfer
      2. Transfer bounds
    6. Summary
  12. Unleashing the Power of Transfer Learning
    1. The need for transfer learning
      1. Formulating our real-world problem
      2. Building our dataset
      3. Formulating our approach
    2. Building CNN models from scratch
      1. Basic CNN model
      2. CNN model with regularization
      3. CNN model with image augmentation
    3. Leveraging transfer learning with pretrained CNN models
      1. Understanding the VGG-16 model
      2. Pretrained CNN model as a feature extractor
      3. Pretrained CNN model as a feature extractor with image augmentation
      4. Pretrained CNN model with fine-tuning and image augmentation
    4. Evaluating our deep learning models
      1. Model predictions on a sample test image
      2. Visualizing what a CNN model perceives
      3. Evaluating model performance on test data
    5. Summary
  13. Image Recognition and Classification
    1. Deep learning-based image classification
    2. Benchmarking datasets
    3. State-of-the-art deep image classification models
    4. Image classification and transfer learning
      1. CIFAR-10
        1. Building an image classifier
        2. Transferring knowledge
      2. Dog Breed Identification dataset
        1. Exploratory analysis
        2. Data preparation
        3. Dog classifier using transfer learning
    5. Summary
  14. Text Document Categorization
    1. Text categorization
      1. Traditional text categorization
      2. Shortcomings of BoW models
      3. Benchmark datasets
    2. Word representations
      1. Word2vec model
      2. Word2vec using gensim
      3. GloVe model
    3. CNN document model
      1. Building a review sentiment classifier
      2. What has embedding changed most?
      3. Transfer learning – application to the IMDB dataset
      4. Training on the full IMDB dataset with Word2vec embeddings
      5. Creating document summaries with CNN model
      6. Multiclass classification with the CNN model
      7. Visualizing document embeddings
    4. Summary
  15. Audio Event Identification and Classification
    1. Understanding audio event classification
      1. Formulating our real-world problem
    2. Exploratory analysis of audio events
    3. Feature engineering and representation of audio events
    4. Audio event classification with transfer learning
      1. Building datasets from base features
      2. Transfer learning for feature extraction
      3. Building the classification model
      4. Evaluating the classifier performance
    5. Building a deep learning audio event identifier
    6. Summary
  16. DeepDream
    1. Introduction
      1. Algorithmic pareidolia in computer vision
      2. Visualizing feature maps
    2. DeepDream
      1. Examples
    3. Summary
  17. Style Transfer
    1. Understanding neural style transfer
    2. Image preprocessing methodology
    3. Building loss functions
      1. Content loss
      2. Style loss
      3. Total variation loss
      4. Overall loss function
    4. Constructing a custom optimizer
    5. Style transfer in action
    6. Summary
  18. Automated Image Caption Generator
    1. Understanding image captioning
    2. Formulating our objective
    3. Understanding the data
    4. Approach to automated image captioning
      1. Conceptual approach
      2. Practical hands-on approach
        1. Image feature extractor – DCNN model with transfer learning
        2. Text caption generator – sequence-based language model with LSTM
        3. Encoder-decoder model
    5. Image feature extraction with transfer learning
    6. Building a vocabulary for our captions
    7. Building an image caption dataset generator
    8. Building our image language encoder-decoder deep learning model
    9. Training our image captioning deep learning model
    10. Evaluating our image captioning deep learning model
      1. Loading up data and models
      2. Understanding greedy and beam search
      3. Implementing a beam search-based caption generator
      4. Understanding and implementing BLEU scoring
      5. Evaluating model performance on test data
    11. Automated image captioning in action!
      1. Captioning sample images from outdoor scenes
      2. Captioning sample images from popular sports
      3. Future scope for improvement
    12. Summary
  19. Image Colorization
    1. Problem statement
    2. Color images
      1. Color theory
      2. Color models and color spaces
        1. RGB
        2. YUV
        3. LAB
      3. Problem statement revisited
    3. Building a coloring deep neural network
      1. Preprocessing
        1. Standardization
      2. Loss function
      3. Encoder
      4. Transfer learning – feature extraction
      5. Fusion layer
      6. Decoder
      7. Postprocessing
      8. Training and results
    4. Challenges
    5. Further improvements
    6. Summary
  20. Other Books You May Enjoy
    1. Leave a review - let other readers know what you think