Index
A
Absolute error
Accuracy
Activation functions
ReLU
sigmoid
tanh
AdaBoost
AlexNet
Anaconda
Anaconda Navigator
Anaconda Python distribution
Apriori algorithm
Artificial neural networks (ANN)
regression
Artificial neuron
B
Backpropagation
Backpropagation through time (BPTT)
Bagging
Bayes theorem
Bias and variance
Biological neuron
Boosting
AdaBoost
decision stump
in Python
Bootstrap aggregation process
C
Class-conditional independence
Classification report
Classifying handwritten digits
Clustering
K-Means in Python
DBSCAN
image segmentation
middle blob
standard sklearn functions
value of k
randomly generated data
Comma-separated values (CSV) files
Computer vision (CV)
Conditional probability
Confusion matrix
Continuous variables
Convolutional neural networks (CNNs)
configuration
deep networks
image classification
See Image classification
padding/stride
in PyTorch
structure
Convolution operation
Cost function
Covariance matrix
CPython implementation
Cross-correlation
Cross entropy loss
CRoss Industry Standard Process for Data Mining (CRISP-DM) process
business understanding
data preparation phase
data understanding
deployment
evaluation phase
methodology
modelling
Cross validation
D
Dataframe
Data science life cycle
CRISP-DM process
See CRoss Industry Standard Process for Data Mining (CRISP-DM) process
iterative process
model creation
Data variables
interval data
nominal data
ordinal data
ratio data
DecisionTreeClassifier
Decision trees
definition
learning phase
in Python
interpreting
pruning
splitting criterions
Deep learning algorithms
Deep neural networks
Density-Based Spatial Clustering of Applications with Noise (DBSCAN)
Dimensionality reduction
curse of
principal component analysis
covariance matrix
data points
maximal amount of variance
in Python
transformation of dataset
Discrete variables
E
Ensemble learning methods
top-level schematic
Entropy
F
False negatives (FN)
False positives (FP)
Feature extraction
Feedforward neural network
artificial neural network
training neural networks
backpropagation
gradient descent
loss functions
See Loss functions
fit() method
F-measure
Forget gate
forward() method
Frequent pattern mining (FP mining)
market basket analysis
in Python
F1-Score
G
Gated recurrent unit (GRU)
Gaussian distribution
Gaussian naive Bayes model
Gradient descent
Grid search
H
Hinge loss
Histogram
Hyperparameter tuning
dataset creation
grid search
random search
I
Image classification
DataLoader()
CNN-based neural network
filters visualized
loss function/optimizer
Matplotlib
max pooling
MNIST Fashion dataset
model’s parameters
operations
ReLU
size of max pooling
ImageNet
Indexing of Ndarrays
Inference method
Input gate
Interval data
Iris dataset
J
Jupyter Notebooks
K
Kernel functions
Kernel trick
k-fold cross validation
K-means
L
Labelled dataset
Lagrangian formulation
Laplacian correction
Lasso regression
Learning
data
hosting, model
model preparation
prediction phase
problem defining
Leave-one-out cross validation
Linear regression
evaluation measures
polynomial features
Python
supervised learning method
visualizing
linspace() routine
Linux distributions
load_iris() method
Logistic regression
boundary line
creation
decision boundary
line vs. curve, expression probability
parameters learning
Python
Long short-term memory
Loss functions
cross entropy loss
hinge loss
mean absolute error
mean squared error
negative log likelihood loss
M
Machine learning
definition
models as microservice
supervised learning
unsupervised learning
Market basket analysis
Matplotlib data visualization
bar plot
histogram
linear plot
magic command
multiline plot
pie chart
Pyplot module
scatterplot
sinusoidal plot
subplots
Matrix operations
Maximum entropy models
Maximum margin classifiers
Max pooling
Mean absolute error
Mean squared error (MSE)
Miniconda
Min-max scaling
MinMax transformation
MNIST Fashion dataset
model.score() method
Modified National Institute of Standards and Technology (MNIST) database
Multidimensional arrays
Multilayer ANN
boundaries
decision boundaries
Iris dataset
neural network with hidden layer
NN Class in PyTorch
overfitting/dropouts
reduction in losses
ReLU
N
Naive Bayes
Bayes theorem
classifiers
conditional probability
multinomial
posterior probability
in Python
Natural language processing (NLP)
applications
pipeline
removing stopwords
segmentation
stemming/lemmatization
tokenization
word vectors
Natural Language ToolKit (NLTK)
N Dimensional Array (Ndarray)
constants
creation
indexing
multidimensional
NumPy routines
properties
Negative log likelihood loss
Neural networks
Nominal data
Nonlinear classification
Normalization
min-max scaling
standard scaling
np.identity() function
NumPy
library install
Ndarray
See N Dimensional Array (Ndarray)
numerical computation
routines, Ndarray creation
O
Ordinal data
Output gate
Overfitting
P, Q
Pandas
dataframes
data visualization
area plots
bins/buckets histogram
box plots
dataframe creation
generated dataset
histogram
horizontal bars
stacked histogram
stacked horizontal bars
stacked vertical bars
unstacked area plots
vertical bars
definition
series
PATH variable
Perceptrons
in Python
structure with constituent computations
weights with random values
Performance measures
accuracy
confusion matrix
f-measure
precision
recall
Performance metrics
Pip3 utility
Polynomial regression
Porter’s Stemmer
Precision
predict() method
Preprocessing text
NLP
See Natural language processing (NLP)
Principal component analysis
Processing images
PyDotPlus
Python
distributions
implementations
installation
Linux distributions
macOS
Windows
modes
interactive mode
script mode
performance metrics
Python 3 programming language
applications
history
object-oriented/procedural programming
philosophy
PyTorch
CNN
definition
installation
NN Class
tensor operations
tensors creation
R
RandomForestClassifier
Random forest in Python
Randomly generated dataset
Random search
Ratio data
Recall
Rectified Linear Unit (ReLU)
Recurrent neural networks (RNNs)
LSTM cell
in Python
types
many to many
many to one
one to many
one to one
Recurrent unit
Regression
ANN
randomly generated samples
Regularization
ROC Curve
Root mean squared error
S
Scientific Python ecosystem
Scikit-learn
API
definition
hello-world experiment
installation
R-squared value
Segmentation
Sentiment analysis
Sigmoid activation function
sklearn.metrics
Stacked generalization
Stacking ensemble
Standard scaling
Stratified cross validation
Subplots
Supervised learning
classification
regression
system structure
data annotation/data preparation
data collection
data wrangling
end-to-end process
evaluation
model deployment
model development
problem understanding
training
Support vector classifiers
Support vector machines (SVM)
decision boundaries
hyperplane
Kernel Trick
maximum margin hyperplane
nonlinear classification
in Python
Support vector machines (SVM) classifier
T
Tanh activation function
Test dataset
Time series prediction
Tokenization
torchvision.datasets()
Training and testing processes
Training dataset
Transformation
nominal attributes
nonnumeric features into numeric
ordinal attributes
Transform method
True negatives (TN)
True positives (TP)
Type 1/Type 2 Error
U, V
Underfitting
Unsupervised learning
Unsupervised learning methods
clustering
See Clustering
dimensionality reduction
FP mining
W, X, Y, Z
Weak learners