Discover ways to implement various deep learning algorithms by leveraging Python and other technologies

Key Features

  • Learn to build deep learning models through several hands-on activities
  • Begin with simple machine learning problems, and finish by building a complex system of your own
  • Teach your machines to see by mastering the technologies required for image recognition

Book Description

Deep learning is rapidly becoming the preferred way of solving data problems. This is thanks, in part, to its huge variety of mathematical algorithms and their ability to find patterns that are otherwise invisible to us.

Deep Learning from the Basics begins with a fast-paced introduction to deep learning with Python: its definition, characteristics, and applications. You'll learn how to use the Python interpreter and script files in your applications, and how to utilize NumPy and Matplotlib in your deep learning models. As you progress through the book, you'll study multilayer perceptrons and their limitations, implement a three-layer neural network using multidimensional array calculations, and then discover backpropagation, an efficient way to calculate the gradients of weight parameters.
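
For a taste of what this looks like in practice, the following is a minimal sketch of a three-layer forward pass in plain NumPy. The layer sizes, weight initialization, and variable names are illustrative assumptions, not the book's code.

    import numpy as np

    def sigmoid(x):
        # Sigmoid activation, as used in the book's early chapters
        return 1.0 / (1.0 + np.exp(-x))

    # Hypothetical layer sizes: 2 inputs -> 3 hidden -> 2 hidden -> 1 output
    rng = np.random.default_rng(0)
    W1, b1 = rng.standard_normal((2, 3)), np.zeros(3)
    W2, b2 = rng.standard_normal((3, 2)), np.zeros(2)
    W3, b3 = rng.standard_normal((2, 1)), np.zeros(1)

    x = np.array([1.0, 0.5])    # input vector
    a1 = sigmoid(x @ W1 + b1)   # first layer
    a2 = sigmoid(a1 @ W2 + b2)  # second layer
    y = a2 @ W3 + b3            # identity output layer
    print(y)

The output layer here applies the identity function; for classification, the book replaces it with softmax.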

By the end of the book, you'll have the knowledge to apply the relevant technologies in deep learning.

What you will learn

  • Use Python with minimal external dependencies to implement deep learning programs
  • Study the various deep learning and neural network theories
  • Learn how to determine learning coefficients and the initial values of weights
  • Implement techniques such as Batch Normalization, Dropout, and Adam (a minimal Adam sketch follows this list)
  • Explore applications such as automated driving, image generation, and reinforcement learning
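
As a preview of the optimizer material, the following is a minimal sketch of the Adam update rule. The class name, the dictionary-of-arrays interface, and the hyperparameter defaults are assumptions for illustration, not the book's implementation.

    import numpy as np

    class Adam:
        # Adam keeps running averages of the gradients (m) and the squared
        # gradients (v), with bias correction for their zero-initialized state
        def __init__(self, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
            self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
            self.m, self.v, self.t = None, None, 0

        def update(self, params, grads):
            if self.m is None:
                self.m = {k: np.zeros_like(v) for k, v in params.items()}
                self.v = {k: np.zeros_like(v) for k, v in params.items()}
            self.t += 1
            for k in params:
                self.m[k] = self.beta1 * self.m[k] + (1 - self.beta1) * grads[k]
                self.v[k] = self.beta2 * self.v[k] + (1 - self.beta2) * grads[k] ** 2
                m_hat = self.m[k] / (1 - self.beta1 ** self.t)  # bias correction
                v_hat = self.v[k] / (1 - self.beta2 ** self.t)
                params[k] -= self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

Here, params and grads are assumed to be dictionaries mapping parameter names (for example, 'W1' or 'b1') to NumPy arrays, with update called once per mini-batch.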

Who this book is for

Deep Learning from the Basics is designed for data scientists, data analysts, and developers who want to use deep learning techniques to develop efficient solutions. This book is ideal for those who want a deeper understanding as well as an overview of the technologies. Some working knowledge of Python is a must. Knowledge of NumPy and pandas will be beneficial, but not essential.

Table of Contents

  1. Preface
    1. About the Book
    2. About the Authors
    3. Learning Objectives
    4. Audience
    5. Approach
  2. Introduction
    1. Concept of This Book
    2. For Whom Is This Book Written?
    3. For Whom Is This Book Not Written?
    4. How to Read This Book
    5. Acknowledgments
  3. 1. Introduction to Python
    1. What is Python?
    2. Installing Python
    3. Python Versions
    4. External Libraries That We Use
    5. Anaconda Distribution
    6. Python Interpreter
    7. Mathematical Operations
    8. Data Types
    9. Variables
    10. Lists
    11. Dictionaries
    12. Boolean
    13. if Statements
    14. for Statements
    15. Functions
    16. Python Script Files
    17. Saving in a File
    18. Classes
    19. NumPy
    20. Importing NumPy
    21. Creating a NumPy Array
    22. Mathematical Operations in NumPy
    23. N-Dimensional NumPy Arrays
    24. Broadcasting
    25. Accessing Elements
    26. Matplotlib
    27. Drawing a Simple Graph
    28. Features of pyplot
    29. Displaying Images
    30. Summary
  4. 2. Perceptrons
    1. What Is a Perceptron?
    2. Simple Logic Circuits
    3. AND Gate
    4. NAND and OR Gates
    5. Implementing Perceptrons
    6. Easy Implementation
    7. Introducing Weights and Bias
    8. Implementation with Weights and Bias
    9. Limitations of Perceptrons
    10. XOR Gate
    11. Linear and Nonlinear
    12. Multilayer Perceptrons
    13. Combining the Existing Gates
    14. Implementing an XOR Gate
    15. From NAND to a Computer
    16. Summary
  5. 3. Neural Networks
    1. From Perceptrons to Neural Networks
    2. Neural Network Example
    3. Reviewing the Perceptron
    4. Introducing an Activation Function
    5. Activation Function
    6. Sigmoid Function
    7. Implementing a Step Function
    8. Step Function Graph
    9. Implementing a Sigmoid Function
    10. Comparing the Sigmoid Function and the Step Function
    11. Nonlinear Function
    12. ReLU Function
    13. Calculating Multidimensional Arrays
    14. Multidimensional Arrays
    15. Matrix Multiplication
    16. Matrix Multiplication in a Neural Network
    17. Implementing a Three-Layer Neural Network
    18. Examining the Symbols
    19. Implementing Signal Transmission in Each Layer
    20. Implementation Summary
    21. Designing the Output Layer
    22. Identity Function and Softmax Function
    23. Issues when Implementing the Softmax Function
    24. Characteristics of the Softmax Function
    25. Number of Neurons in the Output Layer
    26. Handwritten Digit Recognition
    27. MNIST Dataset
    28. Neural Network Inference
    29. Batch Processing
    30. Summary
  6. 4. Neural Network Training
    1. Learning from Data
    2. Data-Driven
    3. Training Data and Test Data
    4. Loss Function
    5. Sum of Squared Errors
    6. Cross-Entropy Error
    7. Mini-Batch Learning
    8. Implementing Cross-Entropy Error (Using Batches)
    9. Why Do We Configure a Loss Function?
    10. Numerical Differentiation
    11. Derivative
    12. Examples of Numerical Differentiation
    13. Partial Derivative
    14. Gradient
    15. Gradient Method
    16. Gradients for a Neural Network
    17. Implementing a Training Algorithm
    18. A Two-Layer Neural Network as a Class
    19. Implementing Mini-Batch Training
    20. Using Test Data for Evaluation
    21. Summary
  7. 5. Backpropagation
    1. Computational Graphs
    2. Using Computational Graphs to Solve Problems
    3. Local Calculation
    4. Why Do We Use Computational Graphs?
    5. Chain Rule
    6. Backward Propagation in a Computational Graph
    7. What Is the Chain Rule?
    8. The Chain Rule and Computational Graphs
    9. Backward Propagation
    10. Backward Propagation in an Addition Node
    11. Backward Propagation in a Multiplication Node
    12. Apples Example
    13. Implementing a Simple Layer
    14. Implementing a Multiplication Layer
    15. Implementing an Addition Layer
    16. Implementing the Activation Function Layer
    17. ReLU Layer
    18. Sigmoid Layer
    19. Implementing the Affine and Softmax Layers
    20. Affine Layer
    21. Batch-Based Affine Layer
    22. Softmax-with-Loss Layer
    23. Implementing Backpropagation
    24. Overall View of Neural Network Training
    25. Presupposition
    26. Implementing a Neural Network That Supports Backpropagation
    27. Gradient Check
    28. Training Using Backpropagation
    29. Summary
  8. 6. Training Techniques
    1. Updating Parameters
    2. Story of an Adventurer
    3. SGD
    4. Disadvantage of SGD
    5. Momentum
    6. AdaGrad
    7. Adam
    8. Which Update Technique Should We Use?
    9. Using the MNIST Dataset to Compare the Update Techniques
    10. Initial Weight Values
    11. How About Setting the Initial Weight Values to 0?
    12. Distribution of Activations in the Hidden Layers
    13. Initial Weight Values for ReLU
    14. Using the MNIST Dataset to Compare the Weight Initializers
    15. Batch Normalization
    16. Batch Normalization Algorithm
    17. Evaluating Batch Normalization
    18. Regularization
    19. Overfitting
    20. Weight Decay
    21. Dropout
    22. Validating Hyperparameters
    23. Validation Data
    24. Optimizing Hyperparameters
    25. Implementing Hyperparameter Optimization
    26. Summary
  9. 7. Convolutional Neural Networks
    1. Overall Architecture
    2. The Convolution Layer
    3. Issues with the Fully Connected Layer
    4. Convolution Operations
    5. Padding
    6. Stride
    7. Performing a Convolution Operation on Three-Dimensional Data
    8. Thinking in Blocks
    9. Batch Processing
    10. The Pooling Layer
    11. Characteristics of a Pooling Layer
    12. Implementing the Convolution and Pooling Layers
    13. Four-Dimensional Arrays
    14. Expansion by im2col
    15. Implementing a Convolution Layer
    16. Implementing a Pooling Layer
    17. Implementing a CNN
    18. Visualizing a CNN
    19. Visualizing the Weight of the First Layer
    20. Using a Hierarchical Structure to Extract Information
    21. Typical CNNs
    22. LeNet
    23. AlexNet
    24. Summary
  10. 8. Deep Learning
    1. Making a Network Deeper
    2. Deeper Networks
    3. Improving Recognition Accuracy
    4. Motivation for a Deeper Network
    5. A Brief History of Deep Learning
    6. ImageNet
    7. VGG
    8. GoogLeNet
    9. ResNet
    10. Accelerating Deep Learning
    11. Challenges to Overcome
    12. Using GPUs for Acceleration
    13. Distributed Training
    14. Reducing the Bit Number for Arithmetic Precision
    15. Practical Uses of Deep Learning
    16. Object Detection
    17. Segmentation
    18. Generating Image Captions
    19. The Future of Deep Learning
    20. Converting Image Styles
    21. Generating Images
    22. Automated Driving
    23. Deep Q-Networks (Reinforcement Learning)
    24. Summary
  11. Appendix A
    1. Computational Graph of the Softmax-with-Loss Layer
    2. Forward Propagation
    3. Backward Propagation
    4. Summary