Index
A
Accuracy
Advanced machine learning concepts
Apriori algorithm
Area under the curve (AUC)
Artificial neural networks (ANN)
Association rule
definition
mlxtend, implementation
output
visualization techniques
B
Backpropagation
Best predictions
C
Classification algorithm-based recommender system
buying propensity model
data collection
DataFrame (pandas)
data preprocessing
decision tree
EDA
feature engineering
implementation
KNN
logistic regression
random forest
steps
train-test split
Classification algorithms
Classification and regression tree (CART)
Clustering
Clustering-based recommender systems
approaches
data collection and downloading
data importing
data preprocessing
elbow method
exploratory data analysis
hierarchical
implementation
k-means clustering
Co-clustering
Collaborative-based filtering methods
Collaborative-based recommendation engines
Collaborative filtering
approaches
customer’s purchase history and ratings
data1
data collection
DataFrame
dataset
features
size
importing basic libraries
machine learning
memory-based approach
model-based approaches
recommendation engines
types
user-to-user
Confusion matrix
Content-based filtering
Content-based recommender systems
co-occurrence matrix
cosine similarity
Euclidean output
Euclidean similarity
Manhattan similarity
CountVectorizer features
cosine similarity
Euclidean output
Euclidean similarity
Manhattan similarity
data collection
data preprocessing
output
pre-trained models
text
text to features
word embeddings
download word embeddings
fastText features
cosine output
cosine similarity
Euclidean similarity
GloVe features
co-occurrence matrix
cosine similarity
Euclidean similarity
Manhattan similarity
implementation
import data as a DataFrame (pandas)
similarity measures
cosine similarity
Euclidean distance
Manhattan distance
types
steps
TF-IDF features
cosine output
cosine similarity
Euclidean similarity
Word2vec features
Euclidean similarity
function
Manhattan output
Manhattan similarity
Conversational recommendations
Co-occurrence matrix
Cosine similarity
CountVectorizer
D
DataFrame
Data preprocessing
Dataset insights
customer
loyal customers
money spent per customer
number of orders per customer
DateTime, patterns
orders placed per month
orders placed per day
orders placed per hour
preprocessing data
free items/sales
frequently ordered items
highest number of customers
item insights
output
sold items based on quantity
invoice numbers
market basket DataFrame
Decision node
Decision tree
Deep learning
Deep neural networks (DNNs)
Dendrogram method
E
Elbow method
End-to-end graph-based recommender system
End-to-end neural collaborative filtering
Euclidean distance
Explicit feedback
Exploratory data analysis (EDA)
F
FastText features
Feature engineering
Forward propagation
G
Gender distribution
Gini index
GloVe features
Graph-based recommendation systems
DataFrame
data importing
function
Jaccard similarity
knowledge graph structures
movie recommendations
Neo4j and Python notebook connection
primary components
product recommendations
recommendations output, customers
required libraries
sample output
steps
Grundy
H
Hierarchical clustering
dendrogram method
description column
df_product data
distance
elbow method
highlights
k-means, 15 clusters
k-means model
score_df
steps
text preprocessing
text-to-features conversion
Hybrid recommendation systems
data collection
data preparation
deep learning
getting recommendations
hybrid model
implementation
LightFM Python package
ML classification
ML clustering
model building
train and test data
working mechanism
I
ID mappings
Implicit feedback
Inverse document frequency (IDF)
Item-item collaborative filtering
Item purchase data matrix
Item-to-item collaborative filtering
J
Joint Representation Learning (JRL)
K
K-means clustering
K-nearest neighbor (KNN)
KNN-based approach
KNN method
Knowledge graphs
L
Label encoding
Leaf node
LightFM documentation
LightFM model
Linear regression
Logistic regression
M
Machine learning (ML) methods
artificial intelligence
categories
CSR matrix
fitted KNN model
KNN algorithms
output, KNN approach
reinforcement learning
supervised learning
unsupervised learning
user-to-user filtering
Manhattan distance
Market basket analysis (MBA)
creating function
definition
implementation
cleaning data
DataFrame, import data
dataset insights
libraries
validation
Matplotlib package
Matrix factorization (MF)
Memory-based approach
Model-based approaches
co-clustering
DataFrame
NMF method
Python package
recommendations
SVD method
Multilayer perceptron (MLP)
Multi-task learning
Multi-task recommenders
advantages
architecture
computer vision
e-commerce websites
JRL
NLP
N
Natural language processing (NLP)
Neural collaborative filtering (NCF)
activation function
collaborative filtering methods
data collection
implementation
import data as a DataFrame (pandas)
matrix factorization
MLP
modeling
neural networks
structure
Neural network
Non-negative matrix factorization (NMF)
O
One-hot encoding (OHE)
One-to-one relationship
Overfitting
P, Q
Pearson’s correlation
Popularity-based rule
buy again
definition
global popularity items
import packages
output
popular items by country
Product-to-features relationship data
Profile vector
Propensity modeling
R
Random forest
Rating scale approach
Real-time recommendations
Receiver operating characteristic (ROC)
Recommendation engines
collaborative-based filtering
content-based filtering method
market basket analysis
types
Recommendation systems
definition
types
Recommender systems
Rectified linear activation function (ReLU)
Reduction in variance
Regression
Reinforcement learning
RMSE and MAE performance metrics
Root node
Rule-based recommendation systems
buy-again
popularity-based rule
S
Self-organizing maps (SOM)
Singular value decomposition (SVD)
Statistical modeling
Supervised learning
classification
KNN
regression
T
Term Frequency–Inverse Document Frequency (TF-IDF)
Term frequency (TF)
Title
Top N approach
Train-test split
Types of contextual recommenders
U, V
Unsupervised learning
Unsupervised machine learning algorithms
user_item_interaction_matrix
User-to-product relationship
User-to-user collaborative filtering
cosine_similarity
DataFrame
data matrix covering purchase history
find similar users
implementation
item purchase data matrix
Item similarity scores DataFrame
item-to-item collaborative filtering
KNN-based approach
matrix
purchase data matrix
similar item recommendations
similar user recommendations
user similarity scores DataFrame
User-user collaborative filtering
W, X, Y, Z
Word2vec features
Word embeddings
Worst predictions