Classification metrics

Let's look at some of the commonly used classification metrics: 

  • Classification accuracy: For classification problems, accuracy is the standard measurement metric. It is measured as the ratio of the number of correct predictions to the total number of predictions. For example, in a use case classifying animals based on images, we count the correct classifications against held-out evaluation data and divide by the total number of classification attempts. We need to run multiple tests, splitting the sample data into different evaluation sets, in order to avoid over- and underfitting the data. We can also deploy cross-validation to optimize performance by comparing various models instead of running multiple tests with different random samples on the same model. With these methods, we can improve classification accuracy in an iterative manner. 
  • Confusion matrix: Information about incorrect classifications is also important for improving the overall reliability of the model. A confusion matrix tabulates predicted labels against actual labels, breaking errors down into false positives and false negatives, and is a useful technique for understanding overall model performance. 
  • Logarithmic loss: This is an important metric for understanding model performance when the model's output is a probability value between 0 and 1. Lower log loss is better, since confident but wrong predictions are penalized heavily; for the ML model to be useful, log loss should be minimized. The threshold typically set for this metric is less than 0.1. 
  • Area under curve (AUC): AUC offers an aggregate performance metric across all possible classification thresholds, rather than at a single fixed cutoff. 
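The four metrics above can be computed by hand for a small binary example. The sketch below uses only the standard library and made-up example data (the labels, probabilities, and the 0.5 decision threshold are illustrative assumptions, not values from the text); in practice a library such as scikit-learn provides equivalent functions.

```python
import math

# Hypothetical example data: true binary labels and predicted probabilities
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_prob = [0.9, 0.2, 0.7, 0.6, 0.65, 0.1, 0.8, 0.35]

# Hard predictions at an assumed 0.5 threshold
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]

# Classification accuracy: correct predictions / total predictions
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Confusion matrix cells for the binary case (actual vs. predicted)
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

# Logarithmic loss: heavily penalizes confident wrong probabilities
log_loss = -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for t, p in zip(y_true, y_prob)) / len(y_true)

# AUC via the Mann-Whitney statistic: the probability that a randomly
# chosen positive example is scored above a randomly chosen negative one
pos = [p for t, p in zip(y_true, y_prob) if t == 1]
neg = [p for t, p in zip(y_true, y_prob) if t == 0]
auc = sum((pp > pn) + 0.5 * (pp == pn)
          for pp in pos for pn in neg) / (len(pos) * len(neg))
```

Note that accuracy and the confusion matrix depend on the chosen threshold, while log loss and AUC are computed directly from the probabilities, which is why AUC summarizes performance across all thresholds at once.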