Machine Learning with TensorFlow, Second Edition is a fully revised guide to building machine learning models using Python and TensorFlow. You’ll apply core ML concepts to real-world challenges, such as sentiment analysis, text classification, and image recognition. Hands-on examples illustrate neural network techniques for deep speech processing, facial identification, and autoencoding with CIFAR-10.

Table of Contents

  1. Machine Learning with TensorFlow, 2e
  2. Copyright
  3. dedication
  4. Praise for the First Edition
  5. front matter
    1. foreword
    2. preface
    3. acknowledgments
    4. about this book
    5. How this book is organized: A roadmap
    6. About the code
    7. liveBook discussion forum
    8. about the author
    9. about the cover illustration
  6. contents
  7. Part 1 Your machine-learning rig
  8. 1 A machine-learning odyssey
    1. 1.1 Machine-learning fundamentals
    2. 1.1.1 Parameters
    3. 1.1.2 Learning and inference
    4. 1.2 Data representation and features
    5. 1.3 Distance metrics
    6. 1.4 Types of learning
    7. 1.4.1 Supervised learning
    8. 1.4.2 Unsupervised learning
    9. 1.4.3 Reinforcement learning
    10. 1.4.4 Meta-learning
    11. 1.5 TensorFlow
    12. 1.6 Overview of future chapters
    13. Summary
  9. 2 TensorFlow essentials
    1. 2.1 Ensuring that TensorFlow works
    2. 2.2 Representing tensors
    3. 2.3 Creating operators
    4. 2.4 Executing operators within sessions
    5. 2.5 Understanding code as a graph
    6. 2.5.1 Setting session configurations
    7. 2.6 Writing code in Jupyter
    8. 2.7 Using variables
    9. 2.8 Saving and loading variables
    10. 2.9 Visualizing data using TensorBoard
    11. 2.9.1 Implementing a moving average
    12. 2.9.2 Visualizing the moving average
    13. 2.10 Putting it all together: The TensorFlow system architecture and API
    14. Summary
  10. Part 2 Core learning algorithms
  11. 3 Linear regression and beyond
    1. 3.1 Formal notation
    2. 3.1.1 How do you know the regression algorithm is working?
    3. 3.2 Linear regression
    4. 3.3 Polynomial model
    5. 3.4 Regularization
    6. 3.5 Application of linear regression
    7. Summary
  12. 4 Using regression for call-center volume prediction
    1. 4.1 What is 311?
    2. 4.2 Cleaning the data for regression
    3. 4.3 What’s in a bell curve? Predicting Gaussian distributions
    4. 4.4 Training your call prediction regressor
    5. 4.5 Visualizing the results and plotting the error
    6. 4.6 Regularization and training test splits
    7. Summary
  13. 5 A gentle introduction to classification
    1. 5.1 Formal notation
    2. 5.2 Measuring performance
    3. 5.2.1 Accuracy
    4. 5.2.2 Precision and recall
    5. 5.2.3 Receiver operating characteristic curve
    6. 5.3 Using linear regression for classification
    7. 5.4 Using logistic regression
    8. 5.4.1 Solving 1D logistic regression
    9. 5.4.2 Solving 2D regression
    10. 5.5 Multiclass classifier
    11. 5.5.1 One-versus-all
    12. 5.5.2 One-versus-one
    13. 5.5.3 Softmax regression
    14. 5.6 Application of classification
    15. Summary
  14. 6 Sentiment classification: Large movie-review dataset
    1. 6.1 Using the Bag of Words model
    2. 6.1.1 Applying the Bag of Words model to movie reviews
    3. 6.1.2 Cleaning all the movie reviews
    4. 6.1.3 Exploratory data analysis on your Bag of Words
    5. 6.2 Building a sentiment classifier using logistic regression
    6. 6.2.1 Setting up the training for your model
    7. 6.2.2 Performing the training for your model
    8. 6.3 Making predictions using your sentiment classifier
    9. 6.4 Measuring the effectiveness of your classifier
    10. 6.5 Creating the softmax-regression sentiment classifier
    11. 6.6 Submitting your results to Kaggle
    12. Summary
  15. 7 Automatically clustering data
    1. 7.1 Traversing files in TensorFlow
    2. 7.2 Extracting features from audio
    3. 7.3 Using k-means clustering
    4. 7.4 Segmenting audio
    5. 7.5 Clustering with a self-organizing map
    6. 7.6 Applying clustering
    7. Summary
  16. 8 Inferring user activity from Android accelerometer data
    1. 8.1 The User Activity from Walking dataset
    2. 8.1.1 Creating the dataset
    3. 8.1.2 Computing jerk and extracting the feature vector
    4. 8.2 Clustering similar participants based on jerk magnitudes
    5. 8.3 Different classes of user activity for a single participant
    6. Summary
  17. 9 Hidden Markov models
    1. 9.1 Example of a not-so-interpretable model
    2. 9.2 Markov model
    3. 9.3 Hidden Markov model
    4. 9.4 Forward algorithm
    5. 9.5 Viterbi decoding
    6. 9.6 Uses of HMMs
    7. 9.6.1 Modeling a video
    8. 9.6.2 Modeling DNA
    9. 9.6.3 Modeling an image
    10. 9.7 Application of HMMs
    11. Summary
  18. 10 Part-of-speech tagging and word-sense disambiguation
    1. 10.1 Review of HMM example: Rainy or Sunny
    2. 10.2 PoS tagging
    3. 10.2.1 The big picture: Training and predicting PoS with HMMs
    4. 10.2.2 Generating the ambiguity PoS tagged dataset
    5. 10.3 Algorithms for building the HMM for PoS disambiguation
    6. 10.3.1 Generating the emission probabilities
    7. 10.4 Running the HMM and evaluating its output
    8. 10.5 Getting more training data from the Brown Corpus
    9. 10.6 Defining error bars and metrics for PoS tagging
    10. Summary
  19. Part 3 The neural network paradigm
  20. 11 A peek into autoencoders
    1. 11.1 Neural networks
    2. 11.2 Autoencoders
    3. 11.3 Batch training
    4. 11.4 Working with images
    5. 11.5 Application of autoencoders
    6. Summary
  21. 12 Applying autoencoders: The CIFAR-10 image dataset
    1. 12.1 What is CIFAR-10?
    2. 12.1.1 Evaluating your CIFAR-10 autoencoder
    3. 12.2 Autoencoders as classifiers
    4. 12.2.1 Using the autoencoder as a classifier via loss
    5. 12.3 Denoising autoencoders
    6. 12.4 Stacked deep autoencoders
    7. Summary
  22. 13 Reinforcement learning
    1. 13.1 Formal notions
    2. 13.1.1 Policy
    3. 13.1.2 Utility
    4. 13.2 Applying reinforcement learning
    5. 13.3 Implementing reinforcement learning
    6. 13.4 Exploring other applications of reinforcement learning
    7. Summary
  23. 14 Convolutional neural networks
    1. 14.1 Drawback of neural networks
    2. 14.2 Convolutional neural networks
    3. 14.3 Preparing the image
    4. 14.3.1 Generating filters
    5. 14.3.2 Convolving using filters
    6. 14.3.3 Max pooling
    7. 14.4 Implementing a CNN in TensorFlow
    8. 14.4.1 Measuring performance
    9. 14.4.2 Training the classifier
    10. 14.5 Tips and tricks to improve performance
    11. 14.6 Application of CNNs
    12. Summary
  24. 15 Building a real-world CNN: VGG-Face and VGG-Face Lite
    1. 15.1 Making a real-world CNN architecture for CIFAR-10
    2. 15.1.1 Loading and preparing the CIFAR-10 image data
    3. 15.1.2 Performing data augmentation
    4. 15.2 Building a deeper CNN architecture for CIFAR-10
    5. 15.2.1 CNN optimizations for increasing learned parameter resilience
    6. 15.3 Training and applying a better CIFAR-10 CNN
    7. 15.4 Testing and evaluating your CNN for CIFAR-10
    8. 15.4.1 CIFAR-10 accuracy results and ROC curves
    9. 15.4.2 Evaluating the softmax predictions per class
    10. 15.5 Building VGG-Face for facial recognition
    11. 15.5.1 Picking a subset of VGG-Face for training VGG-Face Lite
    12. 15.5.2 TensorFlow’s Dataset API and data augmentation
    13. 15.5.3 Creating a TensorFlow dataset
    14. 15.5.4 Training using TensorFlow datasets
    15. 15.5.5 VGG-Face Lite model and training
    16. 15.5.6 Training and evaluating VGG-Face Lite
    17. 15.5.7 Evaluating and predicting with VGG-Face Lite
    18. Summary
  25. 16 Recurrent neural networks
    1. 16.1 Introduction to RNNs
    2. 16.2 Implementing a recurrent neural network
    3. 16.3 Using a predictive model for time-series data
    4. 16.4 Applying RNNs
    5. Summary
  26. 17 LSTMs and automatic speech recognition
    1. 17.1 Preparing the LibriSpeech corpus
    2. 17.1.1 Downloading, cleaning, and preparing LibriSpeech OpenSLR data
    3. 17.1.2 Converting the audio
    4. 17.1.3 Generating per-audio transcripts
    5. 17.1.4 Aggregating audio and transcripts
    6. 17.2 Using the deep-speech model
    7. 17.2.1 Preparing the input audio data for deep speech
    8. 17.2.2 Preparing the text transcripts as character-level numerical data
    9. 17.2.3 The deep-speech model in TensorFlow
    10. 17.2.4 Connectionist temporal classification in TensorFlow
    11. 17.3 Training and evaluating deep speech
    12. Summary
  27. 18 Sequence-to-sequence models for chatbots
    1. 18.1 Building on classification and RNNs
    2. 18.2 Understanding seq2seq architecture
    3. 18.3 Vector representation of symbols
    4. 18.4 Putting it all together
    5. 18.5 Gathering dialogue data
    6. Summary
  28. 19 Utility landscape
    1. 19.1 Preference model
    2. 19.2 Image embedding
    3. 19.3 Ranking images
    4. Summary
    5. What’s next
  29. appendix Installation instructions
    1. A.1 Installing the book’s code with Docker
    2. A.1.1 Installing Docker in Windows
    3. A.1.2 Installing Docker in Linux
    4. A.1.3 Installing Docker in macOS
    5. A.1.4 Using Docker
    6. A.2 Getting the data and storing models
    7. A.3 Necessary libraries
    8. A.4 Converting the call-center example to TensorFlow 2
    9. A.4.1 The call-center example with TF2
  30. index