Classification metrics

Before we start discussing classification metrics, we have to introduce an important concept called the confusion matrix. Let's assume that we have two classes and an algorithm that assigns one of them to each object. In this case, the confusion matrix will look like this:

                         y = 1 (actual)         y = 0 (actual)
  ŷ = 1 (predicted)      True positive (TP)     False positive (FP)
  ŷ = 0 (predicted)      False negative (FN)    True negative (TN)

Here, ŷ is the predicted class of the object and y is the ground-truth label. The confusion matrix is an abstraction that we use to calculate different classification metrics. It gives us the number of items that were classified correctly and misclassified, and it also tells us the type of misclassification. The false negatives are positive items that our algorithm incorrectly classified as negative, while the false positives are negative items that our algorithm incorrectly classified as positive. In this section, we'll learn how to use this matrix to calculate different classification performance metrics.
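The following is a minimal sketch of how the four confusion matrix counts can be computed for a binary classifier; it assumes the labels are encoded as 1 (positive) and 0 (negative), and the function name confusion_counts is a hypothetical helper introduced only for illustration:

# A minimal sketch: count TP, FP, FN, and TN for a binary classifier.
# Assumes labels are encoded as 1 (positive) and 0 (negative).
def confusion_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 1)  # predicted positive, actually positive
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 0)  # predicted positive, actually negative
    fn = sum(1 for t, p in zip(y_true, y_pred) if p == 0 and t == 1)  # predicted negative, actually positive
    tn = sum(1 for t, p in zip(y_true, y_pred) if p == 0 and t == 0)  # predicted negative, actually negative
    return tp, fp, fn, tn

# Example with made-up predictions: four correct answers, one false positive,
# and one false negative.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
tp, fp, fn, tn = confusion_counts(y_true, y_pred)
print(tp, fp, fn, tn)  # 2 1 1 2

Once these four counts are available, the metrics discussed in the rest of this section can be expressed as simple ratios over them.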
