Index


SYMBOL

* function

A

absolute value
academic papers
accuracy
activation functions
  adding to layer
  computable
  continuous and infinite in domain
  defined
  hidden-layer
  installation instructions
  monotonic
  nonlinear
  output layer
  similar inputs
  slope
  softmax computation
  upgrading MNIST network
activation input parameter
actual error
__add__ function
addition backpropagation
additional functions, adding support for
algorithms
alpha
Anaconda framework
AND operator

arbitrary length
  backpropagation with
  challenge of
  forward propagation with
  weight update with

architecture of neural networks
  importance of visualization tools
  language
  overview of
artificial neural networks
attenuation

autograd (automatic gradient computation)
  adding cross entropy to
  adding indexing to
  general discussion
  implementing LSTM with
  to train neural network
  upgrading to support multiuse tensors
  used multiple times
automatic optimization.
    See also autograd (automatic gradient computation).
  adding
  adding support
    for additional functions
    for negation
  addition backpropagation
  deep learning framework
  dynamic computation graph
  layers
    adding support for layer types
    containing layers
    cross-entropy layer
    embedding
    in Keras or PyTorch
    loss-function layers
    nonlinearity layers
    recurrent neural network (RNN) layer
  tensors
    defined
    that are used multiple times
averaged word vectors, RNN

B

Babi dataset

backpropagation
  addition
  in code
  iteration of
  recurrent neural network (RNN)
  truncated
    disadvantages of
    iterating with
  weighted average delta
  weighted average error
.backward( ) method
bash commands
batch gradient descent
batch_loss
batch_size
batch_size/alpha pair
Bernoulli distribution
blogging, to teach deep learning
Bongo Java
bptt variable

C

calculus
cell-update vector
character language modeling.
    See also LSTM (long short-term memory).
ciphertext
cluster labels

comparing
  mean squared error
  measuring error
computable activation functions
computation graphs
concatenation
conditional (sometimes) correlation
continuous functions
convolutional neural networks
  convolutional layer
  implementation in NumPy
  reusing weights in multiple places
corners.
    See convolutional neural networks.

correlation
  coefficients
  creating
  indirect
  input/output
  learning
  negative
  searching for
  selective
correlation summarization
counting-based learning.
    See nonparametric learning.
creation_op add
creators attribute
cross communication
cross-entropy class
cross-entropy layer
cryptography
curves

D

data, grouping
data patterns
datapoints

datasets
  Babi
  clustering into groups
  IMDB movie reviews
  learning whole
  MNIST
  preparing data
  streetlight problem
  transforming
debugging frameworks
decoder weight matrix

deep learning
  adapted for beginning learners
  analogies and
  defined
  difficulty level for learning
  ongoing education
  overview
  project-based teaching method
  reasons for
    incremental automation of intelligence
    potential for automation of skilled labor
    stimulates intelligence and creativity
  requirements for
    high school–level mathematics
    Jupyter Notebook
    NumPy Python library
    personal challenge to solve
    Python knowledge
  subfield of machine learning
  teaching
  textbook for
  to understand frameworks and code libraries
deep learning framework.
    See also automatic optimization.
deep neural network
  backpropagation
    in code
    iteration of
    weighted average delta
    weighted average error
  batch gradient descent
  building
  correlation
    creating
    indirect
    learning
  full gradient descent
  importance of
  learning whole dataset
  linear versus nonlinear networks
  making predictions
  matrices and matrix relationship
    importing matrices into Python
    patterns
    streetlight problem
  overfitting
  preparing data
  running program
  sometimes correlation
  stacking
  stochastic gradient descent
  weight pressure
    conflicting pressure
    weight update
defeat_delta variable
delta, multiplying by slope
delta variable
deniability

derivatives
  calculating
  defined
  example
  relationship between weight and error and
  using to learn
  weight_delta
diagonal
diff variable
direct imitation
direction_and_amount variable
discorrelation
divergence
dot products
  neural prediction
  visualizing
double negative
down pressure

dropout technique, regulation
  in code
  evaluated on MNIST
  general discussion
dynamic computation graph
DyNet framework

E

early stopping
edges.
    See convolutional neural networks.
ele_mul function
elementwise addition
elementwise multiplication
elementwise operation
embedding layers
encryption
error curve

errors
  error attribution
  mean squared error
  measuring
  positive
  reducing
Euclidean distance
execution and output analysis, RNN
expand function, adding support for

exploding gradient problem
  countering with LSTM
  general discussion
  toy example

F



federated learning
  general discussion
  hacking into
  homomorphically encrypted
  overview of
fill-in-the-blank task, neural networks
for loop
forward( ) method
forward propagation
  finishing with .index_select( ) method
  importance of visualization tools
  linking variables
  with multiple inputs
  how it works
  runnable code
  weighted sum
  with multiple outputs
  predicting with multiple inputs
  using single input
  neural networks
  defined
  purpose of
  simple
  stacking
  NumPy Python library
  overview of
  predicting on predictions
  prediction
  recurrent neural network
  side-by-side visualization
frameworks, debugging
full gradient descent
functions

G

gates
generalization, regulation
get_parameters( ) method
global correlation summarization
goal_pred variable
goal_prediction variable
GPUs
gradient descent.
    See also neural learning.
  breaking
  general discussion
  iteration of
  with multiple inputs
    freezing one weight
    single-weight neural network versus
    turning single delta into three weight_delta values
  with multiple inputs and outputs
    gradient descent generalizes to arbitrarily large networks
    neural networks making multiple predictions using single input
  with multiple outputs
  visualizing dot products (weighted sums)
  weights
graphs
grouping data
Gryffindor

H

hidden variable
hidden-layer activation functions
hidden-to-hidden layer
hidden-to-output layer
high-low pattern
homomorphic encryption

hot and cold learning
  characteristics of
  example
  general discussion

I

ICML (International Conference on Machine Learning)
identity matrices
identity vectors
image classification
images matrix
IMDB movie reviews dataset
imitation
inceptionism
.index_select( ) method
indices array
indirect correlation
indirect imitation
infinite functions
infinite parameters
input -> goal_prediction pairs
input data pattern
input datapoints
input datasets
input layers
input node
input values
input variable
input vector
input/output correlation
inputs
  gradient descent with
    freezing one weight
    general discussion
    generalizes to arbitrarily large networks
    neural networks making multiple predictions using only single input
    single-weight neural network vs.
    turning single delta (on node) into three weight_delta values
  how it works
  overview of
  runnable code
  weighted sums
input-to-hidden layer
installation instructions, activation functions
intelligence targeting
intermediate dataset
intermediate predictions
International Conference on Machine Learning (ICML)

J

Jupyter Notebook

K

Keras framework
kernel_output
kernels
knob_weight variable

L

labels
language, neural networks understanding
  embedding layer
  fill-in-the-blank task
  general discussion
  IMDB movie reviews dataset
  interpreting output
  loss function
  meaning of neuron
  natural language processing
  neural architecture
  predicting movie reviews
  supervised natural language processing
  word analogies
  word correlation
  word embeddings
Lasagne framework
Layer class
layer_0_delta variable
layer_1_delta variable
layer_2_delta variable

layers
  adding activation functions to
  adding support for layer types
  containing layers
  cross-entropy layer
  dimensionality of matrices and
  embedding
  embedding layer translates indices into activations
  in Keras or PyTorch
  loss-function layers
  nonlinearity layers
linear neural networks
linear nodes
list objects
local correlation summarization
log function
logical analysis
logical AND
long short-term memory.
    See LSTM (long short-term memory).
loss function
loss.backward( ) function
loss-function layers
lossless representation
lower weights
LSTM (long short-term memory)
  character language model
    training
    tuning
    upgrading
  countering vanishing and exploding gradients with
  gates
  using autograd system to implement

M

machine learning
make_sent_vect function
matrices and matrix relationship
  importing matrices into Python
  layers and
  patterns
  streetlight problem
matrix multiplication, adding support for
max pooling
mean pooling
mean squared error
measuring error
memorization, regulation
memorizing neural network code
mini-layers
missing values

MNIST (Modified National Institute of Standards and Technology) dataset
  overview of
  three-layer network on
  upgrading
MNIST digit classifier
MNISTPreprocessor notebook
monotonic activation functions
multi-input gradient descent
multiple inputs
  gradient descent with
    freezing one weight
    general discussion
    generalizes to arbitrarily large networks
    neural networks making multiple predictions using only single input
    single-weight neural network versus
    turning single delta (on node) into three weight_delta values
  how it works
  runnable code
  weighted sums
multiple outputs
  gradient descent with
    generalizes to arbitrarily large networks
    neural networks making multiple predictions using only single input
  how it works
  predicting with multiple inputs
  using single input
multiplication function, adding support for

N

n linear layers
n output neurons
n_batches
n_hidden parameter
n_layers input parameter
Nanodegree
NaNs (not-a-numbers)
natural language processing.
    See NLP.
n-dimensional tensors
__neg__ function
negation, adding support for
negative correlation
negative derivatives
negative labels
negative numbers
negative reversal attribute
negative sampling
negative sensitivity
negative weight
neural architecture.
    See architecture of neural networks.
neural learning
  alpha
  calculus and
  comparing
    mean squared error
    measuring error
  derivatives
    calculating
    defined
    example
    relationship between weight and error and
    using to learn
    weight_delta
  divergence
  error attribution
  functions
  gradient descent
    breaking
    general discussion
    iteration of
  hot and cold learning
    characteristics of
  memorizing
  overcorrections
    visualizing
  reducing error
  steps of
neural networks.
    See also deep neural network; neural learning.
  backpropagation
    in code
    iteration of
    weighted average delta
    weighted average error
  batch gradient descent
  building
  correlation
    creating
    indirect
    learning
  defined
  full gradient descent
  importance of
  learning whole dataset
  linear versus nonlinear networks
  making multiple predictions using single input
  making predictions
  matrices and matrix relationship
    importing matrices into Python
    patterns
    streetlight problem
  overfitting
  preparing data
  purpose of
  running program
  simple
  sometimes correlation
  stacking
  stochastic gradient descent
  visualizing
    architecture
    correlation summarization
    importance of visualization tools
    side by side
    simplifying
    vector-matrix multiplication
  weight pressure
    conflicting pressure
    weight update
neural prediction
  with multiple inputs
  how it works
  runnable code
  weighted sum
  with multiple outputs
  predicting with multiple inputs
  using single input
  neural networks
  defined
  purpose of
  simple
  stacking
  NumPy Python library
  predicting on predictions
  prediction
neurons2nd3rd
NLP (natural language processing)
nodes
noise.
    See also regulation.
nonlinear activation functions
nonlinear neural networks
nonlinearities.
    See also activation functions.
nonlinearity layers

nonparametric learning
  counting-based methods
  parametric learning versus
normalization
normalized variants
normed_weights matrix
NOT operator
not-a-numbers (NaNs)
np.dot function
NumPy Python library

O

objective function
one_hot utility matrix
one-dimensional tensors
one-hot encoding
on-the-job training
open source project
OpenMined
OR operator
output data pattern
output datapoints
output datasets

output layer activation functions
  choosing
  configurations
  no activation function
  sigmoid
  softmax

outputs
  converting to slope
  gradient descent with
    generalizes to arbitrarily large networks
    neural networks making multiple predictions using only single input
  how it works
  neural networks
  predicting with multiple inputs
  using single input

overcorrections
  alpha
  visualizing

overfitting
  causes of
  general discussion
  overview of
overshooting

P

parameters

parametric learning
  nonparametric learning versus
  supervised
  unsupervised
patterns
peer support, for deep learning
perplexity metric
pip install phe command
pixels
plaintext
plausible deniability
pooling
positive errors
positive labels
positive sensitivity
practice, importance of
pred variable

predictions
  deep neural networks
  with multiple inputs
  how it works
  runnable code
  weighted sum
  with multiple outputs
  predicting with multiple inputs
  using single input
  neural networks
  defined
  purpose of
  simple
  stacking
  NumPy Python library
  predicting images
  predicting movie reviews
  predicting on predictions
  prediction
privacy.
    See also security and privacy.
  federated learning
    general discussion
    hacking into
    homomorphically encrypted
  homomorphic encryption
  privacy
  secure aggregation
  spam
private key
probabilities.
    See activation functions.
project-based teaching method
propagation
  finishing with .index_select( ) method
  importance of visualization tools
  linking variables
  with multiple inputs
  how it works
  runnable code
  weighted sum
  with multiple outputs
  predicting with multiple inputs
  using single input
  neural networks
  defined
  purpose of
  simple
  stacking
  NumPy Python library
  overview of
  predicting on predictions
  prediction
  recurrent neural network
  side-by-side visualization
public key
pure error

Python
  creating sentence embeddings using identity matrices in
  forward propagation in
  learning
  NumPy Python library
Python Codecademy course
PyTorch

R

random subsections
randomized response
randomness
raw error
recurrent embeddings
recurrent matrix
recurrent neural network.
    See RNN.
reducing error
regularization
regulation
  batch gradient descent
  dropout technique
    in code
    evaluated on MNIST
    general discussion
  early stopping
  generalization
  memorization
  overfitting
    causes of
  predicting images
  three-layer network on MNIST
relu function
relu2deriv function
reviews2vectors matrix
RNN (recurrent neural network)
  arbitrary length of data
    backpropagation with
    challenge of
    forward propagation with
    weight update with
  averaged word vectors
  Babi dataset
  backpropagation
  character language modeling
  comparing sentence vectors
  execution and output analysis
  forward propagation in Python
  overview of
  sentence embeddings
  setting up
  vanishing and exploding gradients
  word embeddings
    how information is stored in
    limitations of
    neural networks use of
    summing
RNNCell class
runnable code, neural prediction
running .backward( ) method

S

sampling output
scalar multiples
scalar-matrix multiplication
scalars
scaling attribute
security and privacy
  federated learning
    general discussion
    hacking into
    homomorphically encrypted
  homomorphic encryption
  privacy
  secure aggregation
  spam
selective correlation
self.children counter
self.data array
self.w_hh layer
self.w_ho layer
self.w_ho.forward(h)
self.w_ih layer
self.weight
sensitivity
sent2output layer
sentence embeddings, RNN
  sentence vectors
  transition matrices
sentiment dataset
Sequential( ) method
SGD class
shape
sharpness of attenuation
short-term memory
sigmoid( ) function
signal.
    See also regulation.
similarity
simple neural networks
simplifying visualization
single-input gradient descent
single-weight neural network

slope
  converting output to
  multiplying delta by
  overview of

softmax computation
  activation functions
  output layer activation functions
softmax function
sometimes (conditional) correlation
sox_delta variable
spam
square matrix
squiggly line
stacked convolutional layers
stacking neural networks
stacks of layers
static computation graph
step_amount variable
stickiness
stochastic gradient descent optimizer
stopping attribute

streetlight problem
  datasets
  matrices and matrix relationship
subtraction function, adding support for
sum function, adding support for
sum pooling
.sum(dim) function
summing embeddings
supervised machine learning
supervised natural language processing (NLP)
supervised parametric learning

T

tanh( ) function
target labels, meaning of neuron based on
tasks, NLP
Tensor class
Tensor.backward( ) function
TensorFlow

tensors
  adding nonlinear functions to
  automatic gradient computation (autograd)
  defined
  that are used multiple times
test( ) function
test_images variable
test_labels variable
testing accuracy
Theano
three-layer networks
topic classification
train( ) function
training accuracy
Training-Acc
transition weights
transpose function, adding support for
trial and error.
    See also parametric learning.
true signal

truncated backpropagation
  disadvantages of
  iterating with
  overview of
Twitter
two-dimensional tensors
two-layer networks

U

<unk> tokens
unsupervised machine learning
up pressure
utility functions

V

validation set

vanishing gradient problem
  countering with LSTM
  general discussion
  toy example
variable-length text

variables
  linking
  multiplying
variants
vect_mat_mul function
vectors
vector-scalar addition and multiplication
virtual graph
visualizing neural networks
  architecture
  correlation summarization
  importance of visualization tools
  side by side
  simplifying
  vector-matrix multiplication
    defined
    letters can be combined to indicate functions and operations
    linking variables
    using letters instead of pictures
volume

W

w_sum function

weight pressure
  conflicting pressure
  weight update
Weight Pressure table
weight values
weight variable
weight vector
weight_delta variable
weighted average delta
weighted average error
weighted sums
  neural prediction
  visualizing
weights.
    See also multiple inputs.
  batch gradient descent
  convolutional neural networks
  freezing one weight
  full gradient descent
  MNIST dataset
  stochastic gradient descent
  turning single delta into three weight_delta values
  visualizing weight values
  weight update with arbitrary length
weights variable
weights vector
wlrec predictor
word analogies
word correlation, capturing in input data

word embeddings
  comparing
  recurrent neural network
    how information is stored in
    limitations of
    neural networks use of
    summing
word vectors

Y

y.creation_op

Z

z.backward( ) function
