Evaluating the model

Once the model has been trained on the training set, the evaluation set is used to assess model quality. The evaluation results are available on the EVALUATE tab and include Avg Precision, Precision, and Recall. Here is a screenshot of the web interface showing the model evaluation:

As seen in the screenshot, the user interface displays the model evaluation metrics. The following metrics are important:

  • Avg Precision: This summarizes model performance across all of the score thresholds. 
  • Precision: This is a measure of the proportion of positive identifications that are correct. Mathematically, precision is defined as Precision = True Positives / (True Positives + False Positives). A true positive represents an outcome where the model correctly predicts the positive class. A false positive represents an outcome where the model incorrectly predicts the positive class. 
  • Recall: This is a measure of the proportion of actual positives that are identified correctly. Mathematically, recall is defined as Recall = True Positives / (True Positives + False Negatives). A false negative represents an outcome where the model incorrectly predicts the negative class. 
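The two formulas above can be sketched as a short calculation. The confusion-matrix counts here are hypothetical, for illustration only:

```python
# Hypothetical confusion-matrix counts for a single label.
tp = 90  # true positives: positive class predicted correctly
fp = 10  # false positives: positive class predicted incorrectly
fn = 30  # false negatives: negative class predicted incorrectly

precision = tp / (tp + fp)  # 90 / 100 = 0.90
recall = tp / (tp + fn)     # 90 / 120 = 0.75

print(f"precision={precision:.2f}, recall={recall:.2f}")
```

Note the trade-off: lowering the score threshold usually raises recall (fewer missed positives) at the cost of precision, which is why both metrics are reported per threshold.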

A model can be fully evaluated by using both precision and recall measures, and hence average precision is significant in understanding the model's effectiveness. AutoML provides a consolidated view of the model parameters across all of the labels, along with the parameter values for a specific label:
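Average precision can be thought of as the area under the precision-recall curve, accumulated as the score threshold sweeps from high to low. A minimal sketch, using made-up (recall, precision) points rather than real model output:

```python
# Hypothetical (recall, precision) points observed at decreasing score
# thresholds, ordered by increasing recall (illustrative values only).
pr_points = [(0.0, 1.0), (0.25, 0.95), (0.5, 0.9), (0.75, 0.8), (1.0, 0.6)]

# Average precision: sum the precision at each step, weighted by the
# increase in recall from the previous step.
average_precision = sum(
    (r_n - r_prev) * p_n
    for (r_prev, _), (r_n, p_n) in zip(pr_points, pr_points[1:])
)

print(f"average precision = {average_precision:.4f}")
```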

The model can also be evaluated by using REST APIs, which can be invoked from the command line as well as programmatically. 
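As a sketch of the programmatic route, the snippet below calls the AutoML `modelEvaluations` REST endpoint with Python's standard library. The project ID, model ID, and OAuth token are placeholders you must supply; the exact response fields depend on the model type:

```python
import json
import urllib.request

API_ROOT = "https://automl.googleapis.com/v1"

def evaluations_url(project_id, model_id, location="us-central1"):
    # Build the REST endpoint for a model's evaluation results.
    return (f"{API_ROOT}/projects/{project_id}/locations/{location}/"
            f"models/{model_id}/modelEvaluations")

def get_model_evaluations(project_id, model_id, access_token,
                          location="us-central1"):
    # Call the endpoint with an OAuth bearer token and parse the JSON reply.
    request = urllib.request.Request(
        evaluations_url(project_id, model_id, location),
        headers={"Authorization": f"Bearer {access_token}"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Usage (requires a real project, model, and token, for example from
# `gcloud auth print-access-token`):
# evaluations = get_model_evaluations("my-project", "ICN123", token)
```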
