Machine learning

Machine learning is one of the trending buzzwords in the technology domain today, along with big data. Although a lot of it is over-hyped, machine learning (ML) has proved to be effective in solving tough problems and has been significant in propelling the rise of artificial intelligence (AI). Machine learning sits at the intersection of computer science, statistics, and mathematics, combining concepts from knowledge mining and discovery, artificial intelligence, pattern detection, optimization, and learning theory to develop algorithms and techniques that can learn from and make predictions on data without being explicitly programmed.

The learning here refers to making computers or machines intelligent based on the data and algorithms we provide to them, a process also known as training the model, so that they can detect patterns and derive insights from new data. This can be better understood from the definition of machine learning by the well-known professor Tom Mitchell, who said, "A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E."

Let's say the task (T) of the system is to predict the sales of an organization for the next year. To perform such a task, it needs to rely on historical sales information, which we shall call its experience (E). Its performance (P) is measured by how well it predicts the sales in any given year. Thus, we can generalize that a system has successfully learned how to predict the sales (task T) if it gets better at predicting them (improves its performance P) by utilizing the past information (experience E).
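To make this concrete, here is a minimal sketch in Python using scikit-learn; the yearly sales figures and the choice of a linear model are purely illustrative assumptions, not part of the original example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# Experience (E): historical sales data (hypothetical figures, in millions)
years = np.array([[2015], [2016], [2017], [2018], [2019]])
sales = np.array([12.0, 14.5, 16.1, 18.3, 20.2])

# Task (T): predict sales for a future year
model = LinearRegression().fit(years, sales)
predicted_next = model.predict([[2020]])

# Performance (P): how close the predictions are to the actual values,
# measured here on the historical years for simplicity
mae = mean_absolute_error(sales, model.predict(years))
print(f"Predicted 2020 sales: {predicted_next[0]:.1f}, mean absolute error: {mae:.2f}")
```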

Machine learning techniques

Machine-learning techniques are basically algorithms that can work on data and extract insights from it, which can include discovering, forecasting, or predicting trends and patterns. The idea is to build a model using a combination of data and algorithms, which can then be applied to new, previously unseen data to derive actionable insights.

Which technique to apply depends on the type of data at hand and the objective of the problem we are trying to solve. People are often tempted to learn a couple of algorithms and then try to apply them to every problem. An important point to remember here is that there is no universal machine-learning algorithm which fits all problems. The main inputs to machine-learning algorithms are features, which are extracted from data using a process known as feature extraction, often coupled with another process called feature engineering (building new features from existing features). Each feature can be described as an attribute of the dataset, such as your location, age, number of posts, shares, and so on, if we were dealing with data related to social media user profiles. Machine-learning techniques can be classified into two major types, namely supervised learning and unsupervised learning.
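As a minimal sketch of feature engineering, assuming a hypothetical pandas DataFrame of social media user profiles (the column names and values are made up for illustration):

```python
import pandas as pd

# Hypothetical social media user profiles; columns and values are illustrative only
profiles = pd.DataFrame({
    "age": [25, 34, 29],
    "num_posts": [120, 45, 300],
    "num_shares": [30, 5, 90],
    "account_created": pd.to_datetime(["2018-01-15", "2020-06-01", "2019-03-20"]),
})

# Feature engineering: build new features from the existing ones
profiles["shares_per_post"] = profiles["num_shares"] / profiles["num_posts"]
profiles["account_age_days"] = (pd.Timestamp("2021-01-01") - profiles["account_created"]).dt.days

print(profiles[["age", "num_posts", "shares_per_post", "account_age_days"]])
```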

Supervised learning

Supervised learning techniques are a subset of the family of machine-learning algorithms which are mainly used in predictive modeling and forecasting. A predictive model is basically a model constructed using a supervised learning algorithm on features or attributes from training data (the available data used to train or build the model), such that we can use this model to make predictions on newer, previously unseen data points. Supervised learning algorithms try to model the relationships and dependencies between the target prediction output and the input features, such that we can predict the output values for new data based on the relationships learned from the dataset used during model training or building.

There are two main types of supervised learning techniques:

  • Classification: These algorithms build predictive models from training data where the response variable to be predicted is categorical. These models then apply what was learned from the training data to new, previously unseen data to predict its class or category labels. The output classes belong to discrete categories. Classification algorithms include decision trees, support vector machines, random forests, and many more; a decision-tree example appears in the sketch after this list.
  • Regression: These algorithms are used to build predictive models on data where the response variable to be predicted is numerical. The algorithm builds a model from the input features and output response values of the training data, and this model is used to predict values for new data. The output values in this case are continuous numeric values, not discrete categories. Regression algorithms include linear regression, multiple regression, ridge regression, and lasso regression, among many others; a ridge-regression example appears in the sketch after this list.
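The following is a minimal sketch of both flavors using scikit-learn's bundled toy datasets; the specific datasets, models, and train/test split are assumptions made for illustration, not recommendations from the original text.

```python
from sklearn.datasets import load_iris, load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import Ridge
from sklearn.metrics import accuracy_score, mean_squared_error

# Classification: predict a categorical response (iris species)
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
clf = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)
print("Classification accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Regression: predict a continuous numeric response (disease progression score)
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
reg = Ridge(alpha=1.0).fit(X_train, y_train)
print("Regression mean squared error:", mean_squared_error(y_test, reg.predict(X_test)))
```

In both cases the model is trained on one portion of the data and evaluated on data it has not seen, which mirrors the training and prediction workflow described above.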

Unsupervised learning

Unsupervised learning techniques are a subset of the family of machine-learning algorithms which are mainly used in pattern detection, dimensionality reduction, and descriptive modeling. A descriptive model is basically a model constructed from an unsupervised machine-learning algorithm and features from input data, similar to the supervised learning process; however, there are no output response variables in this case. These algorithms try to mine the input data for rules, detect patterns, and summarize and group the data points, which helps derive meaningful insights and describe the data better to the users. There is no specific concept of training or testing data here, since we do not have any specific relationship mapping between input features and output response variables (which do not exist in this case). There are three main types of unsupervised learning techniques:

  • Clustering: The main objective of clustering algorithms is to cluster or group input data points into different classes or categories using only features derived from the input data and no other external information. Unlike classification, the output response labels are not known beforehand in clustering. There are different approaches to building clustering models, such as centroid-based approaches, hierarchical approaches, and many more. Some popular clustering algorithms include k-means, k-medoids, and hierarchical clustering; a k-means example appears in the sketch after this list.
  • Association rule mining: These algorithms are used to mine and extract rules and patterns of significance from data. These rules explain relationships between different variables and attributes, and also depict frequent item sets and patterns which occur in the data.
  • Dimensionality reduction: These algorithms help in reducing the number of features or variables in the dataset by computing a set of principal representative variables. They are mainly used for feature selection; a principal component analysis example appears in the sketch after this list.
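Below is a minimal sketch of two of these techniques using scikit-learn; the dataset, the number of clusters, and the number of components are arbitrary choices for illustration. Association rule mining is typically performed with algorithms such as Apriori (available, for example, in the mlxtend library) and is omitted here for brevity.

```python
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Unlabeled input data: use only the iris features and ignore the labels
X, _ = load_iris(return_X_y=True)

# Clustering: group the data points into 3 clusters (an arbitrary choice here)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print("Cluster sizes:", [int((kmeans.labels_ == c).sum()) for c in range(3)])

# Dimensionality reduction: project the 4 original features onto 2 principal components
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)
print("Reduced shape:", X_reduced.shape)
print("Explained variance ratio:", pca.explained_variance_ratio_)
```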