Machine Learning
Authors: Amit Kumar Das, Saikat Dutt, Subramanian Chandramouli
Release Date: 2018/04/01
ISBN: 9789389588132
Topic: Data
Book Description
"Table of Contents: 1 Introduction to Machine Learning 2 Preparing to Model 3 Modelling and Evaluation 4 Basics of Feature Engineering 5 Brief Overview of Probability 6 Bayesian Concept Learning 7 Supervised Learning: Classification 8 Supervised Learning: Regression 9 Unsupervised Learning 10 Basics of Neural Network 11 Other Types of Learning"
Table of Contents
Cover
About Pearson
Title Page
Contents
Preface
Acknowledgements
About the Authors
Model Syllabus for Machine Learning
Lesson plan
1 Introduction to Machine Learning
1.1 Introduction
1.2 What is Human Learning?
1.3 Types of Human Learning
1.3.1 Learning under expert guidance
1.3.2 Learning guided by knowledge gained from experts
1.3.3 Learning by self
1.4 What is Machine Learning?
1.4.1 How do machines learn?
1.4.2 Well-posed learning problem
1.5 Types of Machine Learning
1.5.1 Supervised learning
1.5.2 Unsupervised learning
1.5.3 Reinforcement learning
1.5.4 Comparison – supervised, unsupervised, and reinforcement learning
1.6 Problems Not To Be Solved Using Machine Learning
1.7 Applications of Machine Learning
1.7.1 Banking and finance
1.7.2 Insurance
1.7.3 Healthcare
1.8 State-of-The-Art Languages/Tools In Machine Learning
1.8.1 Python
1.8.2 R
1.8.3 Matlab
1.8.4 SAS
1.8.5 Other languages/tools
1.9 Issues in Machine Learning
1.10 Summary
2 Preparing to Model
2.1 Introduction
2.2 Machine Learning Activities
2.3 Basic Types of Data in Machine Learning
2.4 Exploring Structure of Data
2.4.1 Exploring numerical data
2.4.2 Plotting and exploring numerical data
2.4.3 Exploring categorical data
2.4.4 Exploring relationship between variables
2.5 Data Quality and Remediation
2.5.1 Data quality
2.5.2 Data remediation
2.6 Data Pre-Processing
2.6.1 Dimensionality reduction
2.6.2 Feature subset selection
2.7 Summary
3 Modelling and Evaluation
3.1 Introduction
3.2 Selecting a Model
3.2.1 Predictive models
3.2.2 Descriptive models
3.3 Training a Model (for Supervised Learning)
3.3.1 Holdout method
3.3.2 K-fold Cross-validation method
3.3.3 Bootstrap sampling
3.3.4 Lazy vs. Eager learner
3.4 Model Representation and Interpretability
3.4.1 Underfitting
3.4.2 Overfitting
3.4.3 Bias–variance trade-off
3.5 Evaluating Performance of a Model
3.5.1 Supervised learning – classification
3.5.2 Supervised learning – regression
3.5.3 Unsupervised learning – clustering
3.6 Improving Performance of a Model
3.7 Summary
4 Basics of Feature Engineering
4.1 Introduction
4.1.1 What is a feature?
4.1.2 What is feature engineering?
4.2 Feature Transformation
4.2.1 Feature construction
4.2.2 Feature extraction
4.3 Feature Subset Selection
4.3.1 Issues in high-dimensional data
4.3.2 Key drivers of feature selection – feature relevance and redundancy
4.3.3 Measures of feature relevance and redundancy
4.3.4 Overall feature selection process
4.3.5 Feature selection approaches
4.4 Summary
5 Brief Overview of Probability
5.1 Introduction
5.2 Importance of Statistical Tools in Machine Learning
5.3 Concept of Probability – Frequentist and Bayesian Interpretation
5.3.1 A brief review of probability theory
5.4 Random Variables
5.4.1 Discrete random variables
5.4.2 Continuous random variables
5.5 Some Common Discrete Distributions
5.5.1 Bernoulli distributions
5.5.2 Binomial distribution
5.5.3 The multinomial and multinoulli distributions
5.5.4 Poisson distribution
5.6 Some Common Continuous Distributions
5.6.1 Uniform distribution
5.6.2 Gaussian (normal) distribution
5.6.3 The Laplace distribution
5.7 Multiple Random Variables
5.7.1 Bivariate random variables
5.7.2 Joint distribution functions
5.7.3 Joint probability mass functions
5.7.4 Joint probability density functions
5.7.5 Conditional distributions
5.7.6 Covariance and correlation
5.8 Central Limit Theorem
5.9 Sampling Distributions
5.9.1 Sampling with replacement
5.9.2 Sampling without replacement
5.9.3 Mean and variance of sample
5.10 Hypothesis Testing
5.11 Monte Carlo Approximation
5.12 Summary
6 Bayesian Concept Learning
6.1 Introduction
6.2 Why are Bayesian Methods Important?
6.3 Bayes’ Theorem
6.3.1 Prior
6.3.2 Posterior
6.3.3 Likelihood
6.4 Bayes’ Theorem and Concept Learning
6.4.1 Brute-force Bayesian algorithm
6.4.2 Concept of consistent learners
6.4.3 Bayes optimal classifier
6.4.4 Naïve Bayes classifier
6.4.5 Applications of Naïve Bayes classifier
6.4.6 Handling continuous numeric features in Naïve Bayes classifier
6.5 Bayesian Belief Network
6.5.1 Independence and conditional independence
6.5.2 Use of the Bayesian Belief network in machine learning
6.6 Summary
7 Supervised Learning: Classification
7.1 Introduction
7.2 Example of Supervised Learning
7.3 Classification Model
7.4 Classification Learning Steps
7.5 Common Classification Algorithms
7.5.1 k-Nearest Neighbour (kNN)
7.5.2 Decision tree
7.5.3 Random forest model
7.5.4 Support vector machines
7.6 Summary
8 Supervised Learning: Regression
8.1 Introduction
8.2 Example of Regression
8.3 Common Regression Algorithms
8.3.1 Simple linear regression
8.3.2 Multiple linear regression
8.3.3 Assumptions in regression analysis
8.3.4 Main problems in regression analysis
8.3.5 Improving accuracy of the linear regression model
8.3.6 Polynomial regression model
8.3.7 Logistic regression
8.3.8 Maximum likelihood estimation
8.4 Summary
9 Unsupervised Learning
9.1 Introduction
9.2 Unsupervised vs Supervised Learning
9.3 Application of Unsupervised Learning
9.4 Clustering
9.4.1 Clustering as a machine learning task
9.4.2 Different types of clustering techniques
9.4.3 Partitioning methods
9.4.4 K-Medoids: a representative object-based technique
9.4.5 Hierarchical clustering
9.4.6 Density-based methods - DBSCAN
9.5 Finding Pattern using Association Rule
9.5.1 Definition of common terms
9.5.2 Association rule
9.5.3 The Apriori algorithm for association rule learning
9.5.4 Build the Apriori principle rules
9.6 Summary
10 Basics of Neural Network
10.1 Introduction
10.2 Understanding the Biological Neuron
10.3 Exploring the Artificial Neuron
10.4 Types of Activation Functions
10.4.1 Identity function
10.4.2 Threshold/step function
10.4.3 ReLU (Rectified Linear Unit) function
10.4.4 Sigmoid function
10.4.5 Hyperbolic tangent function
10.5 Early Implementations of ANN
10.5.1 McCulloch–Pitts model of neuron
10.5.2 Rosenblatt’s perceptron
10.5.3 ADALINE network model
10.6 Architectures of Neural Network
10.6.1 Single-layer feed forward network
10.6.2 Multi-layer feed forward ANNs
10.6.3 Competitive network
10.6.4 Recurrent network
10.7 Learning Process in ANN
10.7.1 Number of layers
10.7.2 Direction of signal flow
10.7.3 Number of nodes in layers
10.7.4 Weight of interconnection between neurons
10.8 Backpropagation
10.9 Deep Learning
10.10 Summary
11 Other Types of Learning
11.1 Introduction
11.2 Representation Learning
11.2.1 Supervised neural networks and multilayer perceptron
11.2.2 Independent component analysis (Unsupervised)
11.2.3 Autoencoders
11.2.4 Various forms of clustering
11.3 Active Learning
11.3.1 Heuristics for active learning
11.3.2 Active learning query strategies
11.4 Instance-Based Learning (Memory-based Learning)
11.4.1 Radial basis function
11.4.2 Pros and cons of instance-based learning method
11.5 Association Rule Learning Algorithm
11.5.1 Apriori algorithm
11.5.2 Eclat algorithm
11.6 Ensemble Learning Algorithm
11.6.1 Bootstrap aggregation (Bagging)
11.6.2 Boosting
11.6.3 Gradient boosting machines (GBM)
11.7 Regularization Algorithm
11.8 Summary
Appendix A: Programming Machine Learning in R
Appendix B: Programming Machine Learning in Python
Appendix C: A Case Study on Machine Learning Application: Grouping Similar Service Requests and Classifying a New One
Model Question Paper-1
Model Question Paper-2
Model Question Paper-3
Index
Copyright