
Deep learning is everywhere, making this powerful driver of AI something more STEM professionals need to know. Learning which library commands to use is one thing, but to truly understand the discipline, you need to grasp the mathematical concepts that make it tick. This book will give you a working knowledge of topics in probability, statistics, linear algebra, and differential calculus – the essential math needed to make deep learning comprehensible, which is key to practicing it successfully.

Each of the four subfields is contextualized with Python code and hands-on, real-world examples that bridge the gap between pure mathematics and its applications in deep learning. Chapters build upon one another, with foundational topics such as Bayes’ theorem followed by more advanced concepts, like training neural networks using vectors, matrices, and derivatives of functions. You’ll ultimately put all this math to use as you explore and implement deep learning algorithms, including backpropagation and gradient descent – the foundational algorithms that have enabled the AI revolution.
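
To give a flavor of where those final chapters land, here is a minimal sketch of gradient descent on a one-variable function – our own illustration, not code from the book, with an arbitrary learning rate and starting point. The calculus chapters supply the derivative machinery that extends this update rule to full networks:

    # Gradient descent on f(x) = (x - 3)^2, whose derivative is 2(x - 3).
    # The minimum is at x = 3; each step moves x against the gradient.
    def gradient_descent(lr=0.1, steps=50):
        x = 0.0                  # arbitrary starting point
        for _ in range(steps):
            grad = 2 * (x - 3)   # f'(x), computed by hand here
            x -= lr * grad       # the core update rule
        return x

    print(gradient_descent())    # prints a value very close to 3.0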

You’ll learn:

• The rules of probability, probability distributions, and Bayesian probability
• The use of statistics for understanding datasets and evaluating models
• How to manipulate vectors and matrices, and use both to move data through a neural network (see the sketch after this list)
• How to use linear algebra to implement principal component analysis and singular value decomposition
• How to apply improved versions of gradient descent, like RMSprop, Adagrad, and Adadelta
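
To make the third bullet concrete, here is a minimal sketch – again our own illustration, with made-up layer sizes and input values – of a weight matrix and bias vector moving one input through a single fully connected layer in NumPy:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 3))      # weight matrix: 3 inputs -> 4 outputs
    b = np.zeros(4)                  # bias vector
    x = np.array([0.5, -1.2, 2.0])   # one input sample

    z = W @ x + b                    # linear step: matrix-vector product plus bias
    y = np.maximum(z, 0)             # ReLU activation
    print(y.shape)                   # (4,)

Every layer of a traditional network repeats this pattern, which is why the linear algebra chapters sit at the heart of the book.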

Once you understand the core math concepts presented throughout this book through the lens of AI programming, you’ll have the foundational know-how to easily follow and work with deep learning.

Table of Contents

  1. Cover Page
  2. Title Page
  3. Copyright Page
  4. Dedication
  5. About the Author
  6. About the Technical Reviewer
  7. BRIEF CONTENTS
  8. CONTENTS IN DETAIL
  9. FOREWORD
  10. ACKNOWLEDGMENTS
  11. INTRODUCTION
    1. Who Is This Book For?
    2. About This Book
  12. 1 SETTING THE STAGE
    1. Installing the Toolkits
    2. NumPy
    3. SciPy
    4. Matplotlib
    5. Scikit-Learn
    6. Summary
  13. 2 PROBABILITY
    1. Basic Concepts
    2. The Rules of Probability
    3. Joint and Marginal Probability
    4. Summary
  14. 3 MORE PROBABILITY
    1. Probability Distributions
    2. Bayes’ Theorem
    3. Summary
  15. 4 STATISTICS
    1. Types of Data
    2. Summary Statistics
    3. Quantiles and Box Plots
    4. Missing Data
    5. Correlation
    6. Hypothesis Testing
    7. Summary
  16. 5 LINEAR ALGEBRA
    1. Scalars, Vectors, Matrices, and Tensors
    2. Arithmetic with Tensors
    3. Summary
  17. 6 MORE LINEAR ALGEBRA
    1. Square Matrices
    2. Eigenvectors and Eigenvalues
    3. Vector Norms and Distance Metrics
    4. Principal Component Analysis
    5. Singular Value Decomposition and Pseudoinverse
    6. Summary
  18. 7 DIFFERENTIAL CALCULUS
    1. Slope
    2. Derivatives
    3. Minima and Maxima of Functions
    4. Partial Derivatives
    5. Gradients
    6. Summary
  19. 8 MATRIX CALCULUS
    1. The Formulas
    2. The Identities
    3. Jacobians and Hessians
    4. Some Examples of Matrix Calculus Derivatives
    5. Summary
  20. 9 DATA FLOW IN NEURAL NETWORKS
    1. Representing Data
    2. Data Flow in Traditional Neural Networks
    3. Data Flow in Convolutional Neural Networks
    4. Summary
  21. 10 BACKPROPAGATION
    1. What Is Backpropagation?
    2. Backpropagation by Hand
    3. Backpropagation for Fully Connected Networks
    4. Computational Graphs
    5. Summary
  22. 11 GRADIENT DESCENT
    1. The Basic Idea
    2. Stochastic Gradient Descent
    3. Momentum
    4. Adaptive Gradient Descent
    5. Summary
    6. Epilogue
  23. APPENDIX: GOING FURTHER
    1. Probability and Statistics
    2. Linear Algebra
    3. Calculus
    4. Deep Learning
  24. INDEX