Updated and revised second edition of the bestselling guide to exploring and mastering the most important algorithms for solving complex machine learning problems

Key Features

  • Updated to include new algorithms and techniques
  • Code updated to Python 3.8 & TensorFlow 2.x
  • New coverage of regression analysis, time series analysis, deep learning models, and cutting-edge applications

Book Description

Mastering Machine Learning Algorithms, Second Edition helps you harness the real power of machine learning algorithms to meet today's overwhelming data needs. This newly updated and revised guide will help you master algorithms that are widely used in the supervised, semi-supervised, unsupervised, and reinforcement learning domains.

You will use modern libraries from the Python ecosystem – including NumPy and Keras – to extract features from datasets of varying complexity. Covering techniques that range from Bayesian models and Markov chain Monte Carlo sampling to hidden Markov models, this machine learning book teaches you how to extract features from your dataset, perform complex dimensionality reduction, and train supervised and semi-supervised models using Python-based libraries such as scikit-learn. You will also discover practical applications for complex techniques such as maximum likelihood estimation, Hebbian learning, and ensemble learning, and learn how to use TensorFlow 2.x to train effective deep neural networks.
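
As a taste of the workflow the book builds toward, here is a minimal sketch that chains dimensionality reduction and supervised training with scikit-learn. The choice of the built-in Wine dataset, the number of principal components, and the logistic regression classifier are illustrative assumptions, not prescriptions from the book.

    # Minimal sketch: scaling, PCA-based dimensionality reduction, and
    # supervised training, using scikit-learn's built-in Wine dataset
    # purely for illustration.
    from sklearn.datasets import load_wine
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_wine(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42)

    # Scale, project onto the first 5 principal components, then classify
    model = make_pipeline(StandardScaler(), PCA(n_components=5),
                          LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    print("Test accuracy: {:.3f}".format(model.score(X_test, y_test)))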

By the end of this book, you will be ready to solve end-to-end machine learning problems and tackle real-world use cases.

What you will learn

  • Understand the characteristics of a machine learning algorithm
  • Implement algorithms from supervised, semi-supervised, unsupervised, and RL domains
  • Learn how regression works in time-series analysis and risk prediction
  • Create, model, and train complex probabilistic models
  • Cluster high-dimensional data and evaluate model accuracy
  • Discover how artificial neural networks work – train, optimize, and validate them (see the sketch after this list)
  • Work with autoencoders, Hebbian networks, and GANs
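
The following is a minimal sketch of the neural-network workflow referenced above – defining, training, and validating a small multilayer perceptron with TensorFlow 2.x and Keras. The synthetic dataset and the specific layer sizes are assumptions made purely to keep the example self-contained.

    # Minimal sketch: a small MLP trained and validated with TensorFlow 2.x
    # and Keras. The data is synthetic, generated only for illustration.
    import numpy as np
    import tensorflow as tf

    # Synthetic binary classification data (assumed, not from the book)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20)).astype("float32")
    y = (X[:, 0] + X[:, 1] > 0).astype("float32")

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # Hold out 20% of the samples for validation during training
    model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
    loss, acc = model.evaluate(X, y, verbose=0)
    print(f"Accuracy on the full dataset: {acc:.3f}")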

Who this book is for

This book is for data science professionals who want to delve into complex ML algorithms to understand how various machine learning models can be built. Knowledge of Python programming is required.

Table of Contents

  1. Preface
    1. Who this book is for
    2. What this book covers
    3. To get the most out of this book
      1. Download the example code files
      2. Download the color images
      3. Conventions used
    4. Get in touch
      1. Reviews
  2. Machine Learning Model Fundamentals
    1. Models and data
      1. Structure and properties of the datasets
        1. Limited Sample Populations
        2. Scaling datasets
        3. Whitening
        4. Training, validation, and test sets
        5. Cross-validation
    2. Characteristics of a machine learning model
      1. Learnability
      2. Capacity of a model
        1. Vapnik-Chervonenkis capacity
      3. Bias of an estimator
        1. Underfitting
      4. Variance of an estimator
        1. Overfitting
        2. The Cramér-Rao bound
    3. Summary
    4. Further reading
  3. Loss Functions and Regularization
    1. Defining loss and cost functions
      1. Examples of cost functions
        1. Mean squared error
        2. Huber cost function
        3. Hinge cost function
        4. Categorical cross-entropy
    2. Regularization
      1. Examples of Regularization Techniques
        1. L2 or Ridge regularization
        2. L1 or Lasso regularization
        3. ElasticNet
        4. Early stopping
    3. Summary
    4. Further reading
  4. Introduction to Semi-Supervised Learning
    1. Semi-supervised scenario
      1. Causal scenarios
      2. Transductive learning
      3. Inductive learning
      4. Semi-supervised assumptions
        1. Smoothness assumption
        2. Cluster assumption
        3. Manifold assumption
    2. Generative Gaussian Mixture
      1. Generative Gaussian Mixture theory
      2. Example of a Generative Gaussian Mixture
      3. Generative Gaussian Mixtures summary
        1. Weighted log-likelihood
    3. Self-Training
      1. Self-Training theory
      2. Example of Self-Training with the Iris dataset
      3. Self-Training summary
    4. Co-Training
      1. Co-Training theory
      2. Example of Co-Training with the Wine dataset
      3. Co-Training summary
    5. Summary
    6. Further reading
  5. Advanced Semi-Supervised Classification
    1. Contrastive Pessimistic Likelihood Estimation
      1. CPLE Theory
      2. Example of contrastive pessimistic likelihood estimation
      3. CPLE Summary
    2. Semi-supervised Support Vector Machines (S3VM)
      1. S3VM Theory
      2. Example of S3VM
      3. S3VM Summary
    3. Transductive Support Vector Machines (TSVM)
      1. TSVM Theory
      2. Example of TSVM
        1. Analysis of different TSVM configurations
      3. TSVM Summary
    4. Summary
    5. Further reading
  6. Graph-Based Semi-Supervised Learning
    1. Label propagation
    2. Example of label propagation
      1. Label propagation in scikit-learn
    3. Label spreading
      1. Example of label spreading
      2. Increasing the smoothness with Laplacian regularization
    4. Label propagation based on Markov random walks
      1. Example of label propagation based on Markov random walks
    5. Manifold learning
      1. Isomap
        1. Example of Isomap
      2. Locally linear embedding
        1. Example of LLE
      3. Laplacian Spectral Embedding
        1. Example of Laplacian Spectral Embedding
      4. t-SNE
        1. Example of t-distributed stochastic neighbor embedding
    6. Summary
    7. Further reading
  7. Clustering and Unsupervised Models
    1. K-nearest neighbors
      1. K-d trees
      2. Ball trees
      3. Fitting a KNN model
      4. Example of KNN with scikit-learn
    2. K-means
      1. K-means++
      2. Example of K-means with scikit-learn
    3. Evaluation metrics
      1. Homogeneity score
      2. Completeness score
      3. Adjusted Rand index
      4. Silhouette score
    4. Summary
    5. Further reading
  8. Advanced Clustering and Unsupervised Models
    1. Fuzzy C-means
      1. Example of Fuzzy C-means with SciKit-Fuzzy
    2. Spectral clustering
      1. Example of spectral clustering with scikit-learn
    3. DBSCAN
      1. Example of DBSCAN with scikit-learn
        1. The Calinski-Harabasz score
        2. The Davies-Bouldin score
      2. Analysis of DBSCAN results
    4. Summary
    5. Further reading
  9. Clustering and Unsupervised Models for Marketing
    1. Biclustering
      1. Example of Spectral Biclustering with scikit-learn
    2. Introduction to Market Basket Analysis with the Apriori Algorithm
      1. Example of Apriori in Python
    3. Summary
    4. Further reading
  10. Generalized Linear Models and Regression
    1. GLMs
      1. Least Squares Estimation
      2. Bias and Variance of Least Squares Estimators
      3. Example of Linear regression with Python
      4. Computing Linear regression Confidence Intervals with Statsmodels
      5. Increasing the robustness to outliers with Huber loss
    2. Other regression techniques
      1. Ridge Regression
        1. Example of Ridge Regression with scikit-learn
      2. Risk modeling with Lasso and Logistic Regression
        1. Example of Risk modeling with Lasso and Logistic Regression
      3. Polynomial Regression
        1. Examples of Polynomial Regressions
      4. Isotonic Regression
        1. Example of Isotonic Regression
    3. Summary
    4. Further reading
  11. Introduction to Time-Series Analysis
    1. Time-series
      1. Smoothing
    2. Introduction to linear models for time-series
      1. Autocorrelation
      2. AR, MA, and ARMA processes
        1. Modeling non-stationary trends with ARIMA
    3. Summary
    4. Further reading
  12. Bayesian Networks and Hidden Markov Models
    1. Conditional probabilities and Bayes' theorem
      1. Conjugate priors
    2. Bayesian networks
      1. Sampling from a Bayesian network
        1. Direct sampling
        2. A gentle introduction to Markov Chains
        3. Gibbs sampling
        4. The Metropolis-Hastings algorithm
      2. Sampling using PyMC3
        1. Running the Sampling Process
      3. Sampling using PyStan
    3. Hidden Markov Models
      1. The Forward-Backward algorithm
        1. Forward phase
        2. Backward phase
        3. HMM parameter estimation
      2. The Viterbi algorithm
        1. Finding the most likely hidden state sequence using the Viterbi algorithm and hmmlearn
    4. Summary
    5. Further reading
  13. The EM Algorithm
    1. MLE and MAP Learning
    2. EM Algorithm
      1. Convex functions and Jensen's inequality
      2. Application of Jensen's inequality to the EM algorithm
      3. An example of parameter estimation
    3. Gaussian Mixture
      1. Example of Gaussian Mixture with scikit-learn
      2. Determining the optimal number of components using AIC and BIC
      3. Automatic component selection using Bayesian Gaussian Mixture
    4. Summary
    5. Further reading
  14. Component Analysis and Dimensionality Reduction
    1. Factor Analysis
      1. Linear relation analysis
      2. Example of Factor Analysis with scikit-learn
    2. Principal Component Analysis
      1. Component importance evaluation
      2. Example of PCA with scikit-learn
      3. Kernel PCA
      4. Sparse PCA
    3. Independent Component Analysis
      1. Example of FastICA with scikit-learn
    4. Addendum to Hidden Markov Models
    5. Summary
    6. Further reading
  15. Hebbian Learning
    1. Hebb's rule
      1. Analysis of the Covariance Rule
        1. Example of application of the covariance rule
      2. Weight vector stabilization and Oja's rule
    2. Sanger's network
      1. Example of Sanger's network
    3. Rubner-Tavan's network
      1. Example of Rubner-Tavan's Network
    4. Self-organizing maps
      1. Kohonen Maps
      2. Example of SOM
    5. Summary
    6. Further reading
  16. Fundamentals of Ensemble Learning
    1. Ensemble learning fundamentals
    2. Random forests
      1. Random forest fundamentals
      2. Why use Decision Trees?
      3. Random forests and the bias-variance trade-off
      4. Example of random forest with scikit-learn
        1. Feature importance
    3. AdaBoost
      1. AdaBoost.SAMME
      2. AdaBoost.SAMME.R
      3. AdaBoost.R2
      4. Example of AdaBoost with scikit-learn
    4. Summary
    5. Further reading
  17. Advanced Boosting Algorithms
    1. Gradient boosting
      1. Loss functions for gradient boosting
      2. Example of gradient tree boosting with scikit-learn
      3. Example of gradient boosting with XGBoost
        1. Evaluating the predictive power of the features
    2. Ensembles of voting classifiers
      1. Example of voting classifiers with scikit-learn
    3. Ensemble learning as model selection
    4. Summary
    5. Further reading
  18. Modeling Neural Networks
    1. The basic artificial neuron
    2. The perceptron
      1. Example of a Perceptron with scikit-learn
    3. Multilayer Perceptrons (MLPs)
      1. Activation functions
        1. Sigmoid and Hyperbolic Tangent
        2. Rectifier activation functions
        3. Softmax
    4. The back-propagation algorithm
      1. Stochastic gradient descent (SGD)
      2. Weight initialization
      3. Example of MLP with TensorFlow and Keras
    5. Summary
    6. Further reading
  19. Optimizing Neural Networks
    1. Optimization algorithms
      1. Gradient perturbation
      2. Momentum and Nesterov momentum
        1. SGD with Momentum in TensorFlow and Keras
      3. RMSProp
        1. RMSProp in TensorFlow and Keras
      4. Adam
        1. Adam in TensorFlow and Keras
      5. AdaGrad
        1. AdaGrad with TensorFlow and Keras
      6. AdaDelta
        1. AdaDelta in TensorFlow and Keras
    2. Regularization and Dropout
      1. Regularization
        1. Regularization in TensorFlow and Keras
      2. Dropout
        1. Dropout with TensorFlow and Keras
    3. Batch normalization
      1. Example of batch normalization with TensorFlow and Keras
    4. Summary
    5. Further reading
  20. Deep Convolutional Networks
    1. Deep convolutional networks
    2. Convolutional operators
      1. Bidimensional discrete convolutions
        1. Strides and Padding
      2. Atrous convolution
      3. Separable convolution
      4. Transpose convolution
    3. Pooling layers
      1. Other helpful layers
    4. Example of a deep convolutional network with TensorFlow and Keras
      1. Example of a deep convolutional network with TensorFlow/Keras and data augmentation
    5. Summary
    6. Further reading
  21. Recurrent Neural Networks
    1. Recurrent networks
      1. Backpropagation through time
      2. Limitations of BPTT
    2. Long Short-Term Memory (LSTM)
      1. Gated Recurrent Unit (GRU)
      2. Example of an LSTM with TensorFlow and Keras
    3. Transfer learning
    4. Summary
    5. Further reading
  22. Autoencoders
    1. Autoencoders
      1. Example of a deep convolutional autoencoder with TensorFlow
    2. Denoising autoencoders
      1. Example of a denoising autoencoder with TensorFlow
    3. Sparse autoencoders
      1. Adding sparseness to the Fashion MNIST deep convolutional autoencoder
    4. Variational autoencoders
      1. Example of a VAE with TensorFlow
    5. Summary
    6. Further reading
  23. Introduction to Generative Adversarial Networks
    1. Adversarial training
    2. Deep Convolutional GANs
      1. Example of DCGAN with TensorFlow
      2. Mode collapse
    3. Wasserstein GAN
      1. Example of WGAN with TensorFlow
    4. Summary
    5. Further reading
  24. Deep Belief Networks
    1. Introduction to Markov random fields
    2. Restricted Boltzmann Machines
      1. Contrastive Divergence
    3. Deep Belief Networks
      1. Example of an unsupervised DBN in Python
      2. Example of a supervised DBN in Python
    4. Summary
    5. Further reading
  25. Introduction to Reinforcement Learning
    1. Fundamental concepts of RL
      1. The Markov Decision Process
      2. Environment
        1. Rewards
        2. A checkerboard environment in Python
      3. Policy
    2. Policy iteration
      1. Policy iteration in the checkerboard environment
    3. Value iteration
      1. Value iteration in the checkerboard environment
    4. The TD(0) algorithm
      1. TD(0) in the checkerboard environment
    5. Summary
    6. Further reading
  26. Advanced Policy Estimation Algorithms
    1. TD(λ) algorithm
      1. TD(λ) in a more complex checkerboard environment
      2. Actor-Critic TD(0) in the checkerboard environment
    2. SARSA algorithm
      1. SARSA in the checkerboard environment
    3. Q-learning
      1. Q-learning in the checkerboard environment
      2. Q-learning modeling the policy with a neural network
    4. Direct policy search through policy gradient
      1. Example of policy gradient with OpenAI Gym CartPole
    5. Summary
    6. Further reading
  27. Other Books You May Enjoy
  28. Index