Index
A
- a.eval(), Initializing Constant Tensors
- A3C algorithm
- A3C.fit() method, Training the Policy
- accuracy, Binary Classification Metrics
- accuracy_score(), Evaluating Model Accuracy
- acknowledgments, Acknowledgments
- actions, Reinforcement Learning
- activations, Activations
- adding tensors, Tensor Addition and Scaling
- add_output() method, Defining a Graph of Layers
- advantage functions, Policy Learning
- adversarial models, Adversarial models
- agents, Markov Decision Processes
- AI winters, “Neurons” in Fully Connected Networks
- AlexNet, AlexNet, Convolutional Neural Networks
- algorithms
- A3C algorithm, The A3C Algorithm-The A3C Algorithm
- asynchronous training, Asynchronous Training
- black-box, Hyperparameter Optimization Algorithms
- catastrophic forgetting and, Q-Learning
- finding baselines, Setting Up a Baseline
- for reinforcement learning, Reinforcement Learning Algorithms-Asynchronous Training
- graduate student descent, Graduate Student Descent
- grid search, Grid Search
- policy learning, Policy Learning
- Q-learning, Q-Learning
- random hyperparameter search, Random Hyperparameter Search
- AlphaGo, AlphaGo, Reinforcement Learning, Markov Decision Processes, Limits of Reinforcement Learning, The A3C Algorithm
- Anaconda Python, Installing TensorFlow and Getting Started, Installing DeepChem
- architectural primitives, Deep Learning Primitives-Long Short-Term Memory Cells
- architectures
- AlexNet, AlexNet
- AlphaGo, AlphaGo
- generative adversarial networks (GANs), Generative Adversarial Networks, Sampling from Recurrent Networks
- Google's neural machine translation (Google-NMT), Google Neural Machine Translation
- LeNet, LeNet
- neural captioning model, Neural Captioning Model
- Neural Turing machine (NTM), Neural Turing Machines
- one-shot models, One-Shot Models
- ResNet, ResNet
- argparse tool, Code for Preprocessing
- artificial general intelligences (AGIs), Limits of Reinforcement Learning, Is Artificial General Intelligence Imminent?
- artificial intelligence
- ASICs (application-specific integrated circuits), Tensor Processing Units
- Asynchronous Advantage Actor-Critic (see A3C algorithm)
- asynchronous reinforcement, The A3C Algorithm
- asynchronous training, Asynchronous Training
- ATARI arcade games, Reinforcement Learning
- atrous convolutions, Dilated Convolutions
- attributions, Using Code Examples
- autoencoders, Generating Images with Variational Autoencoders, Sampling from Recurrent Networks
- automated statistician project, Hyperparameter Optimization Algorithms
- automatic differentiation, Learning Fully Connected Networks with Backpropagation
- (see also backpropagation)
- automatic differentiation systems, Automatic Differentiation Systems
- automatic model tuning, Hyperparameter Optimization Algorithms
B
- backpropagation, Learning Fully Connected Networks with Backpropagation, Universal Convergence Theorem
- baselines, Setting Up a Baseline
- binary classification
- black-box learning algorithms, Model Evaluation and Hyperparameter Optimization, Hyperparameter Optimization Algorithms
- blind optimization, Metrics, Metrics, Metrics
- bounding boxes, Object Detection and Localization
- Breakout arcade game, Reinforcement Learning
- broadcasting, Introduction to Broadcasting
- build() method, Defining a Graph of Layers
- build_graph() method, The A3C Algorithm
C
- c.eval(), Imperative and Declarative Programming
- casting functions, Tensor Types
- catastrophic forgetting, Q-Learning
- central processing unit (CPU) training
- character-level modeling, Processing the Penn Treebank Corpus
- chess match, Reinforcement Learning
- chip speeds, GPU Training
- Cifar10 benchmark set
- classical regularization, Weight regularization
- classification metrics, Evaluating Trained Models
- classification problems
- co-adaptation, Dropout
- code examples, obtaining and using, Using Code Examples
- coefficient of determination, Metrics for evaluating regression models
- comments and questions, How to Contact Us
- computation graphs
- computational limitations, Universal Convergence Theorem
- computations in TensorFlow, Basic Computations in TensorFlow-Introduction to Broadcasting
- computer vs. human chess match, Reinforcement Learning
- confusion matrix, Multiclass Classification Metrics
- contact information, How to Contact Us
- continuous functions, Functions and Differentiability
- continuous probability distributions, Adding noise with Gaussians
- contravariant indices, Mathematical Asides
- control-flow operations, Long Short-Term Memory (LSTM)
- convolutional kernels, Convolutional Kernels
- convolutional layer
- Convolutional Neural Networks (CNNs)
- convolutional weights, The Convolutional Architecture
- covariant indices, Mathematical Asides
- CPU-based simulations, Asynchronous Training
- cross-entropy loss, Cross-entropy loss, Logistic Regression in TensorFlow
- CUDA (compute unified device architecture), GPU Training
- cuDNN library, Long Short-Term Memory (LSTM), GPU Training
- cutoffs, choosing, Binary Classification Metrics
- Cybenko, George, Universal Convergence Theorem
- CycleGAN, Adversarial models
D
- data parallelism, Data Parallelism
- debugging, visual vs. nonvisual styles, Visualizing linear regression models with TensorBoard
- decision trees, Setting Up a Baseline
- declarative programming, Imperative and Declarative Programming
- deep learning
- alternate applications of, Deep Learning Outside the Tech Industry
- architectures, Deep Learning Architectures-Neural Turing Machines
- defined, “Neurons” in Fully Connected Networks
- ethical use of, Using Deep Learning Ethically
- frameworks, Deep Learning Frameworks-Limitations of TensorFlow
- history of, Machine Learning Eats Computer Science
- mathematical review, Mathematical Review-Automatic Differentiation Systems
- overhyped claims regarding, “Neurons” in Fully Connected Networks
- potential for artificial general intelligence, Is Artificial General Intelligence Imminent?
- primitives, Deep Learning Primitives-Long Short-Term Memory Cells
- deep networks
- DeepBlue system, Reinforcement Learning
- DeepChem
- accepting minibatches of placeholders, Accepting Minibatches of Placeholders
- adding dropouts to hidden layers, Adding Dropout to a Hidden Layer
- evaluating model accuracy, Evaluating Model Accuracy
- hidden layers, Implementing a Hidden Layer
- implementing minibatching, Implementing Minibatching
- installing, Installing DeepChem
- MoleculeNet dataset collection, Tox21 Dataset
- reinforcement learning, Reinforcement Learning
- Tox21 dataset, Tox21 Dataset
- using TensorBoard to track model convergence, Using TensorBoard to Track Model Convergence
- DeepMind, Reinforcement Learning, The A3C Algorithm
- derivative-driven optimization, Functions and Differentiability
- derivatives, Functions and Differentiability
- detection and localization, Object Detection and Localization
- diagonal matrices, Matrix Operations
- differentiability, Functions and Differentiability, Automatic Differentiation Systems
- dilated convolutions, Dilated Convolutions
- directed graphs, The Layer Abstraction, Defining a Graph of Layers
- discounted rewards, Q-Learning
- discrete probability distributions, Adding noise with Gaussians
- DistBelief, Data Parallelism
- distributed deep network training
- Downpour stochastic gradient descent (SGD), Data Parallelism
- dropout, Dropout, Adding Dropout to a Hidden Layer
- dtype notation, Tensor Types, Introduction to Broadcasting
E
- early stopping, Early stopping
- Eastman, Peter, Reinforcement Learning
- elementwise multiplication, Tensor Addition and Scaling
- end-to-end training, Neural Captioning Model
- environments, Reinforcement Learning, Markov Decision Processes
- epochs, Gradient Descent
- errata, How to Contact Us
- error metrics, Evaluating Trained Models
- ethical considerations, Using Deep Learning Ethically
- eval(), Imperative and Declarative Programming
F
- face detection technologies, Using Deep Learning Ethically
- false positives/negatives, reducing number of, Binary Classification Metrics
- feature engineering, The A3C Algorithm
- featurization, Scalars, Vectors, and Matrices
- feed dictionaries, Feed dictionaries and Fetches
- fetches, Feed dictionaries and Fetches
- field programmable gate arrays (FPGAs), Field Programmable Gate Arrays
- filewriters (TensorBoard), Summaries and file writers for TensorBoard
- filters, Convolutional Kernels
- fit() method, Training the Policy
- fixed nonlinear transformations, Pooling Layers
- for loops, Training models with TensorFlow, Grid Search
- Fourier series, Universal Convergence Theorem
- Fourier transforms, Learnable Representations
- frameworks, Deep Learning Frameworks-Limitations of TensorFlow
- fully connected deep networks
- fully connected layer, Fully Connected Layer
- function minimization, Functions and Differentiability, Gradient Descent
- functional programming, TensorFlow Variables
- functions
- future rewards, Q-Learning
G
- GAN (generative adversarial network), Generative Adversarial Networks, Adversarial models, Sampling from Recurrent Networks
- gated recurrent units (GRU), Gated Recurrent Units (GRU)
- Gaussian probability distributions, Adding noise with Gaussians
- generalization, Model Evaluation and Hyperparameter Optimization, Setting Up a Baseline
- generalization of accuracy, Multiclass Classification Metrics
- get_input_tensors(), The Layer Abstraction
- get_layer_variables() method, Defining a Graph of Layers
- Google neural machine translation (GNMT), Google Neural Machine Translation, Seq2seq Models
- gradient, Functions and Differentiability, Gradient Descent, Taking gradients with TensorFlow
- gradient descent, Gradient Descent-Gradient Descent, The Basic Recurrent Architecture
- gradient instability, Recurrent Cells
- graduate student descent, Graduate Student Descent
- graph convolutions, Graph Convolutions
- graphics processing unit (GPU) training, GPU Training
- graphs, TensorFlow Graphs
- grid-search method, Grid Search
H
- handwritten digits, The MNIST Dataset
- hardware memory interconnects, Model Parallelism
- hidden layers, Implementing a Hidden Layer
- hyperparameter optimization, Hyperparameter Optimization-Review
- algorithms for, Hyperparameter Optimization Algorithms-Random Hyperparameter Search
- automating, Hyperparameter Optimization Algorithms
- black-box learning algorithms, Model Evaluation and Hyperparameter Optimization, Hyperparameter Optimization Algorithms
- defined, Hyperparameter Optimization
- metrics selection, Metrics, Metrics, Metrics-Regression Metrics
- model evaluation and, Model Evaluation and Hyperparameter Optimization
- overview of, Hyperparameter Optimization
- hyperparameters, defined, Gradient Descent
I
- IBM's TrueNorth project, Neuromorphic Chips
- identity matrices, Matrix Operations
- image segmentation, Image Segmentation
- ImageNet Large Scale Visual Recognition Challenge (ILSVRC), AlexNet, Multiclass Classification Metrics
- images
- imbalanced datasets, Tox21 Dataset
- imperative programming, Imperative and Declarative Programming
- implicit type casting, Introduction to Broadcasting
- inference-only hardware, Custom Hardware for Deep Networks
- InfiniBand interconnects, Model Parallelism
- Intel, GPU Training
L
- L2 loss, Adversarial models
- language modeling, Sampling from Recurrent Networks
- Laplace transforms, Learnable Representations
- large deep networks
- Layer object, The Layer Abstraction
- learnable parameters, Gradient Descent
- learnable representations, Learnable Representations
- learnable variables, Defining a Graph of Layers
- learnable weights, Gradient Descent, The Convolutional Architecture
- learned classes, Metrics for evaluating classification models
- learning rates, Learning rates
- learning the prior, Gradient Descent
- Legendre transforms, Learnable Representations
- LeNet, LeNet
- LeNet-5 convolutional architecture
- Leswing, Karl, Reinforcement Learning
- limitations of TensorFlow, Limitations of TensorFlow
- linear regression
- local receptive fields, Local Receptive Fields, Graph Convolutions
- localization, Object Detection and Localization
- logging statements, Summaries and file writers for TensorBoard, Visualizing linear regression models with TensorBoard
- logistic regression, model case study, Logistic Regression in TensorFlow-Metrics for evaluating classification models
- long short-term memory (LSTM) cells, Long Short-Term Memory Cells, Long Short-Term Memory (LSTM), The Basic Recurrent Architecture
- loss functions, Loss Functions-Cross-entropy loss, Adversarial models, The A3C Loss Function
M
- machine learning
- Markov decision processes, Reinforcement Learning, Markov Decision Processes
- master-level chess, Reinforcement Learning
- mathematical tools
- matrix mathematics, Matrix Mathematics
- matrix multiplication, Tensor Addition and Scaling-Matrix Operations
- matrix operations, Matrix Operations
- max pooling, Pooling Layers, TensorFlow Convolutional Primitives
- McCulloch, Warren S., “Neurons” in Fully Connected Networks
- mean, Adding noise with Gaussians
- metrics
- classification metrics, Evaluating Trained Models
- defined, Metrics, Metrics, Metrics
- error metrics, Evaluating Trained Models
- for binary classification, Binary Classification Metrics
- for evaluating classification models, Metrics for evaluating classification models
- for evaluating regression models, Metrics for evaluating regression models
- for multiclass classification, Multiclass Classification Metrics
- regression metrics, Regression Metrics
- role of common sense in, Metrics, Metrics, Metrics
- minibatches, Gradient Descent, Minibatching-Implementing Minibatching
- minima, Functions and Differentiability, Gradient Descent
- Minsky, Marvin, Learning Fully Connected Networks with Backpropagation
- MNIST dataset, The MNIST Dataset
- model parallelism, Model Parallelism
- model-based algorithms, Reinforcement Learning Algorithms
- model-free algorithms, Reinforcement Learning Algorithms
- models
- molecular machine learning, Loading MNIST
- multiclass classification, Multiclass Classification Metrics
- multidimensional inputs, Convolutional Kernels
- multilayer perceptron networks, Learning Fully Connected Networks with Backpropagation
- (see also fully connected deep networks)
- multilinear functions, Mathematical Asides
- multiplying tensors, Tensor Addition and Scaling
- multivariate functions, Functions and Differentiability
N
- name scopes, Name scopes
- ndarray, An (extremely) brief introduction to NumPy
- neural captioning model, Neural Captioning Model
- neural network layers
- neural networks, “Neurons” in Fully Connected Networks
- (see also fully connected deep networks)
- Neural Turing machine (NTM), Neural Turing Machines
- neuromorphic chips, Neuromorphic Chips
- np.ndarray, Feed dictionaries and Fetches
- NumPy, Initializing Constant Tensors, An (extremely) brief introduction to NumPy
- Nvidia, GPU Training
- NVLINK interconnects, Model Parallelism
P
- Papert, Seymour, Learning Fully Connected Networks with Backpropagation
- parameterization, Gradient Descent
- Pearson correlation coefficient, Metrics for evaluating regression models, Regression Metrics
- Penn Treebank corpus
- perceptrons, Learning Fully Connected Networks with Backpropagation
- perplexity, The Basic Recurrent Architecture
- physics, tensors in, Tensors in Physics
- Pitts, Walter, “Neurons” in Fully Connected Networks
- placeholders, Placeholders, Accepting Minibatches of Placeholders, The Convolutional Architecture
- policy learning, Policy Learning, The A3C Algorithm
- pooling layers, Pooling Layers
- precision, Binary Classification Metrics
- prediction categories, Binary Classification Metrics
- primitives
- probability density functions, Adding noise with Gaussians
- probability distributions, Probability distributions
- prospective prediction, Loading MNIST
- Python API, Installing TensorFlow and Getting Started
R
- random forests method, Setting Up a Baseline
- random values, Sampling Random Tensors
- rank-3 tensors, Tensors, Tensor Shape Manipulations
- read_cifar10() method, Downloading and Loading the Data
- recall, Binary Classification Metrics
- receiver operator curve (ROC), Binary Classification Metrics
- rectified linear activation (ReLU), Activations
- recurrent neural networks (RNNs)
- regression problems
- regularization
- reinforcement learning (RL)
- representation learning, Learnable Representations
- ResNet, ResNet
- restore() method, Defining a Graph of Layers
- reuse_variables(), The Basic Recurrent Architecture
- rewards, Reinforcement Learning, Markov Decision Processes, Q-Learning
- RGB images, Convolutional Kernels
- RMSE (root-mean-squared error), Metrics for evaluating regression models, Regression Metrics
- ROC-AUC (area under the receiver operator curve), Binary Classification Metrics
- rollout concept, Policy Learning
- Rosenblatt, Frank, Learning Fully Connected Networks with Backpropagation
S
- sample_weight argument, Evaluating Model Accuracy
- sampling, Generating Images with Variational Autoencoders
- sampling sequences, Sampling from Recurrent Networks
- scalars, Scalars, Vectors, and Matrices
- scaling, Tensor Addition and Scaling
- scikit-learn library, Setting Up a Baseline
- scopes, Name scopes
- SELECT command, Imperative and Declarative Programming
- semiconductors, GPU Training
- sentience, Is Artificial General Intelligence Imminent?
- separating line, Metrics for evaluating classification models
- sequence-to-sequence (seq2seq) models, Seq2seq Models
- sess.run(), Training models with TensorFlow, Implementing Minibatching
- sess.run(var), Feed dictionaries and Fetches
- sessions, TensorFlow Sessions
- set_loss() method, Defining a Graph of Layers
- set_optimizer() method, Defining a Graph of Layers
- shapes, manipulating, Tensor Shape Manipulations
- sigmoid function, Logistic Regression in TensorFlow, Activations
- simulations, Reinforcement Learning Algorithms, Asynchronous Training
- sklearn.metrics, Evaluating Model Accuracy
- specificity, Binary Classification Metrics
- speech spectrograms, Overview of Recurrent Architectures
- spike train, Neuromorphic Chips
- spurious correlations, Fully Connected Networks Memorize
- squared Pearson correlation coefficient, Metrics for evaluating regression models, Regression Metrics
- standard deviation, Adding noise with Gaussians
- StarCraft, Limits of Reinforcement Learning
- stateful programming, TensorFlow Variables
- stationary evolution rule, Overview of Recurrent Architectures
- stochastic gradient descent (SGD), Data Parallelism
- Stone-Weierstrass theorem, Universal Convergence Theorem
- stride size, Convolutional Kernels
- structure-agnostic networks, Fully Connected Deep Networks
- structured spatial data, Convolutional Neural Networks
- summaries (TensorBoard), Summaries and file writers for TensorBoard
- superintelligence, Is Artificial General Intelligence Imminent?
- supervised problems, Classification and regression
- supplemental material, Using Code Examples
- symmetry breaking, Sampling Random Tensors
- synthetic datasets, Creating Toy Datasets-Toy classification datasets
T
- Taylor series, Universal Convergence Theorem
- tensor processing units (TPU), Tensor Processing Units
- TensorBoard
- TensorFlow
- basic computations in, Basic Computations in TensorFlow-Introduction to Broadcasting
- basic machine learning concepts, Learning with TensorFlow-Toy classification datasets
- benefits of, Preface
- documentation, Installing TensorFlow and Getting Started
- fundamental underlying primitive of, Deep Learning Frameworks
- graphs, TensorFlow Graphs, Placeholders
- installing and getting started, Installing TensorFlow and Getting Started
- limitations of, Limitations of TensorFlow
- matrix operations, Matrix Operations
- new TensorFlow concepts, New TensorFlow Concepts-Training models with TensorFlow
- sessions, TensorFlow Sessions
- topics covered, Preface
- training linear and logistic models in, Training Linear and Logistic Models in TensorFlow-Metrics for evaluating classification models
- training models with, Training models with TensorFlow
- variables, TensorFlow Variables
- TensorFlow Eager, Imperative and Declarative Programming
- TensorGraph objects, Defining a Graph of Layers
- tensors
- adding and scaling, Tensor Addition and Scaling
- broadcasting, Introduction to Broadcasting
- creating and manipulating, Basic Computations in TensorFlow-Introduction to Broadcasting
- evaluating value of tensors, Initializing Constant Tensors
- initializing constant tensors, Initializing Constant Tensors
- matrix mathematics, Matrix Mathematics
- matrix operations, Matrix Operations
- as multilinear functions, Mathematical Asides
- in physics, Tensors in Physics
- rank-3 tensors, Tensors
- sampling random, Sampling Random Tensors
- scalars, vectors, and matrices, Scalars, Vectors, and Matrices-Scalars, Vectors, and Matrices
- shape manipulations, Tensor Shape Manipulations
- types of, Tensor Types
- test sets, Model Evaluation and Hyperparameter Optimization
- tf.assign(), TensorFlow Variables
- tf.constant(), Initializing Constant Tensors
- tf.contrib, The Basic Recurrent Architecture
- tf.convert_to_tensor, The Layer Abstraction
- tf.data, Loading Data into TensorFlow
- tf.diag(diagonal), Matrix Operations
- tf.estimator, The Layer Abstraction
- tf.expand_dims, Tensor Shape Manipulations
- tf.eye(), Matrix Operations
- tf.fill(), Initializing Constant Tensors
- tf.FixedLengthRecordReader, Downloading and Loading the Data
- tf.flags, Code for Preprocessing
- tf.float32, Tensor Types
- tf.float64, Tensor Types
- tf.get_default_graph(), TensorFlow Graphs
- tf.GFile, Code for Preprocessing
- tf.global_variables_initializer, TensorFlow Variables
- tf.gradients, Taking gradients with TensorFlow
- tf.Graph, TensorFlow Graphs, TensorFlow Variables, Defining a Graph of Layers
- tf.InteractiveSession(), Installing TensorFlow and Getting Started, Initializing Constant Tensors, TensorFlow Sessions
- tf.keras, The Layer Abstraction
- tf.matmul(), Matrix Operations, TensorFlow Graphs
- tf.matrix_transpose(), Matrix Operations
- tf.name_scope, Implementing a Hidden Layer
- tf.name_scope(name), Name scopes
- tf.name_scopes, Visualizing linear regression models with TensorBoard
- tf.nn.conv2d, TensorFlow Convolutional Primitives, The Convolutional Architecture
- tf.nn.dropout(x, keep_prob), Adding Dropout to a Hidden Layer
- tf.nn.embedding_lookup, The Basic Recurrent Architecture
- tf.nn.max_pool, TensorFlow Convolutional Primitives, The Convolutional Architecture
- tf.nn.relu, Implementing a Hidden Layer
- tf.ones(), Initializing Constant Tensors
- tf.Operation, TensorFlow Graphs
- tf.placeholder, Placeholders
- tf.Queue, Loading Data into TensorFlow
- tf.random_normal(), Sampling Random Tensors
- tf.random_uniform(), Sampling Random Tensors
- tf.range(start, limit, delta), Matrix Operations
- tf.register_tensor_conversion_function, The Layer Abstraction
- tf.reshape(), Tensor Shape Manipulations
- tf.Session(), TensorFlow Sessions, Defining a Graph of Layers
- tf.squeeze, Tensor Shape Manipulations
- tf.summary, Summaries and file writers for TensorBoard
- tf.summary.FileWriter, Summaries and file writers for TensorBoard, Visualizing linear regression models with TensorBoard
- tf.summary.merge_all(), Summaries and file writers for TensorBoard
- tf.summary.scalar, Summaries and file writers for TensorBoard
- tf.Tensor, TensorFlow Graphs
- tf.Tensor.eval(), Initializing Constant Tensors, TensorFlow Sessions
- tf.Tensor.get_shape(), Tensor Shape Manipulations
- tf.to_double(), Tensor Types
- tf.to_float(), Tensor Types
- tf.to_int32(), Tensor Types
- tf.to_int64(), Tensor Types
- tf.train, Optimizers
- tf.train.AdamOptimizer, Optimizers, Visualizing linear regression models with TensorBoard
- tf.truncated_normal(), Sampling Random Tensors
- tf.Variable(), TensorFlow Variables, Gradient Descent, Defining a Graph of Layers
- tf.zeros(), Initializing Constant Tensors
- tic-tac-toe agent
- time-series modeling, Overview of Recurrent Architectures
- topics covered, Preface
- Tox21 dataset, Tox21 Dataset, Hyperparameter Optimization, Hyperparameter Optimization Algorithms
- toy datasets, Creating Toy Datasets-Toy classification datasets
- training and inference hardware, Custom Hardware for Deep Networks
- training loss, Fully Connected Networks Memorize
- train_op, Training models with TensorFlow
- transformations, Learnable Representations, Pooling Layers
- transistor sizes, GPU Training
- true positive rate (TPR), Binary Classification Metrics
- TrueNorth project, Neuromorphic Chips
- tuning (see hyperparameter optimization)
- Turing completeness, Neural Turing Machines
- Turing machines, Neural Turing Machines
- type casting, Introduction to Broadcasting
- typographical conventions, Conventions Used in This Book
V
- validation sets, Early stopping, Model Evaluation and Hyperparameter Optimization, Loading MNIST
- value functions, Policy Learning
- vanishing gradient problem, Activations
- variables, TensorFlow Variables
- variational autoencoders, Generating Images with Variational Autoencoders, Sampling from Recurrent Networks
- vector spaces, Mathematical Asides
- vectors, Scalars, Vectors, and Matrices
- videos, Convolutional Neural Networks, The Convolutional Architecture
- vocabulary, building, Code for Preprocessing