Index
A
Acquisition function
Activation functions
  identity function
  leaky ReLU
  ReLU
  sigmoid function
  Swish activation function
  tanh (hyperbolic tangent) activation function
Adam (Adaptive Moment Estimation) optimizer
Advanced optimizers
  Adam
  exponentially weighted averages
  momentum
  RMSProp
Anomaly detection
Autoencoders
  See Feed-forward autoencoders
  applications
    anomaly detection
    classification
    denoising autoencoders
    dimensionality reduction
  components
  with convolutional layers
  definition
  general structure
  handwritten digits
  Keras implementation
  regularization
  training
B
Bayes error
Bayesian optimization
  acquisition function
  Gaussian processes
  Nadaraya-Watson regression
  prediction with Gaussian processes
  stationary processes
Bias
Binary classification problem
Binary cross-entropy (BCE)
Black-box function
Black-box optimization
  constraints
  functions
  global optimization
  gradient descent
  hyper-parameters
  problems
Black-box problem
  Bayesian optimization. See Bayesian optimization
  coarse to fine optimization
  grid search
  random search
  sampling on logarithmic scale
Convolutional neural networks (CNN)
Decoder
Recurrent neural networks (RNN)
Broadcasting
C
Callback functions
Chatbots
Cheap functions
Coarse to fine optimization
compile() method
compile()/fit() approach
Complex neural network
Conditional GANs (CGANs)
Confusion matrix
Constrained optimization
Convolution
Convolutional autoencoder (CA)
Convolutional layer
Convolutional neural networks (CNN)
  building blocks
    convolutional layer
    pooling layer
    stacking layers
  convolution
  example
  filters
  kernels
  padding
  pooling
Cost function
Cross-entropy
Curse of dimensionality
Custom callback class
Custom training loops
D
Dataset splitting
  MNIST dataset
  training dataset
  unbalanced class distribution
Datasets with distributions
Dedicated functions
Deep learning
Denoising autoencoders
Development set
Dimensionality reduction
Discriminator
Discriminator network architecture
  CGAN
  GANs
Dropout
E
Early stopping
evaluate() method
Extreme overfitting
F
Fashion MNIST dataset
Feed-forward autoencoders
  architecture
  learned representation
  loss function
    BCE
    MSE
  reconstructing handwritten digits
  reconstruction error
  ReLU activation function
  sigmoid function
Feed-forward neural networks (FFNN)
  adding layers
  advantages, hidden layers
  building neural networks with layers
  cost function vs. epochs
  creation
  hyper-parameters
  Keras implementation
  memory footprint, formula
  memory requirements of models
  network architecture
  network architectures and Q parameters
  one-hot encoding
  practical example
  variations of gradient descent
  weight initialization
  wrong predictions
  Zalando dataset
Filters
Fractals
Functional Keras APIs
G
Gaussian processes
Generating image labels
Generating text
Generative adversarial networks (GANs)
  conditional
  with Keras and MNIST
  training algorithm
Generator
Generator neural network architecture
Global minimum
Gradient descent (GD) algorithm
Gradients calculation
GradientTape()
Grid search
H
Handwritten digits
Human-level performance
  definition
  error
  evaluating
  Karpathy
  on MNIST
    Bayes error
    error
Hyperbolic tangent
Hyper-parameter tuning
  problem
  Zalando dataset
I, J
Identity (activation) function
ImageNet challenge
Instructive dataset
K
Keras
Keras layers, activation function
Kernels
k-fold cross-validation
L
Leaky ReLU
Learning
  assumption
  definition
  neural networks, definition
  optimization algorithms. See Optimization algorithms
  rate
  training
Linear regression model
  Keras implementation
  model’s learning phase
  performance evaluation
Line search
Local minimum
Logarithmic scale
Logistic regression with single neuron
  dataset, classification problem
  dataset splitting
  Keras implementation
  model’s learning phase
  model’s performance evaluation
Loss function
M
Manual metric analysis
Matrix dimensions
Matrix notation
Max pooling algorithm
Mean squared error (MSE) function
Memory footprint formula
Metric analysis
  dataset splitting. See Dataset splitting
  diagram
  k-fold cross-validation
  manual
Mini-batch GD
Minimization process
MNIST dataset
Model’s learning phase
Momentum optimizer
N
Nadaraya-Watson regression
Network architecture
Network complexity
Network function
Neural network (NN)
  description
  definition of learning
Neuron
  activation functions. See Activation functions
  computational graph
  computational steps
  definition
  implementation in Keras
  linear regression
  output
Notations
NumPy library
O
One-hot encoding
Optimization algorithms
  gradient descent
  line search
  steepest descent
  trust region
Optimization problem
Optimizers
  Adam
  Keras in TensorFlow 2.5
  number of iterations
  performance comparison
  small coding digression
Optimizing metric
Overfitting
  prevention strategies
  training data
P, Q
Padding
Parametric rectified linear unit
Pooling layer
predict() method
Predicted function
R
Radial basis function
Radon dataset
Random search
Reconstruction error (RE)
Recurrent neural networks (RNNs)
  definition
  information
  learning to count
  notation
  recurrent meaning
  schematic representation
  use cases
Regularization
  autoencoders
  definition
  dropout
  early stopping
  l1 regularization
  l2 regularization
    Keras implementation
    theory
  lp norm
  network complexity
  parameter
  term
  weights going to zero
ReLU (rectified linear unit) activation function
Right mini-batch size
S
Saving and loading models
Sequential model
SGD. See Stochastic Gradient Descent (SGD)
Sigmoid function
Single-neuron model
Softmax activation function
Speech recognition
Stationary processes
Steepest descent
Stochastic Gradient Descent (SGD)
Supervised learning
Surrogate function
Swish activation function
T
Tanh (hyperbolic tangent) activation function
tape.gradient()
Target variables
TensorFlow code
Test dataset
Test set
Training dataset
Training set overfitting
Translation
Trigonometric function
Trust region approach
U, V, W, X, Y
Unbalanced class distribution
Unbalanced datasets
Unconstrained optimization problem
Upper confidence bound (UCB)
Z
Zalando dataset