Evaluation

Evaluation is the next important task once the model has been developed. It lets you decide whether the model performs well on the given dataset and gives you confidence that it can handle data it has never seen. An evaluation framework for streaming models typically rests on the following:

  • Error estimation: Errors are estimated with a holdout set or with the interleaved test-then-train (prequential) method, in which every arriving instance is used to test the model before it is used to train it; k-fold cross-validation is also used (a minimal sketch of the test-then-train loop follows this list).
  • Performance measures: The Kappa statistic is used because it corrects raw accuracy for chance agreement, which makes it a more sensitive measure for streaming classifiers, where class distributions are often imbalanced and drift over time (see the Kappa sketch after this list).
  • Statistical validation: When comparing classifiers, we must separate genuine differences from those that could arise by chance. McNemar's test is the most popular test in the streaming setting, used to assess the statistical significance of the difference between two classifiers (see the McNemar sketch after this list). When working with a single classifier, confidence intervals on the parameter estimates indicate the reliability of the results.
  • The cost measure of the process: Because streaming data may require third-party or cloud-based solutions to acquire and process it, the cost per hour of usage and the memory consumed are also considered for evaluation purposes.
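
To make the interleaved test-then-train loop concrete, here is a minimal Python sketch. The MajorityClassLearner and the synthetic stream are hypothetical stand-ins chosen only to make the example runnable; any incremental model exposing predict and partial_fit would do.

    from collections import Counter

    class MajorityClassLearner:
        """Toy incremental learner; always predicts the most frequent label seen."""
        def __init__(self):
            self.counts = Counter()

        def predict(self, x):
            return self.counts.most_common(1)[0][0] if self.counts else None

        def partial_fit(self, x, y):
            self.counts[y] += 1

    def prequential_accuracy(stream, model):
        """Interleaved test-then-train: each instance is used to test the
        current model first, and only afterwards to train it, so every
        prediction is made on data the model has not yet seen."""
        correct = total = 0
        for x, y in stream:
            if model.predict(x) == y:   # 1) test on the unseen instance
                correct += 1
            model.partial_fit(x, y)     # 2) then train on that instance
            total += 1
        return correct / total if total else 0.0

    # Usage: a tiny synthetic stream of (features, label) pairs.
    stream = [({"f": i % 3}, i % 2) for i in range(1000)]
    print(prequential_accuracy(stream, MajorityClassLearner()))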
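
The Kappa computation below follows the standard definition: the observed accuracy p0 is compared against the accuracy pc a chance classifier would achieve given the same class marginals. The helper name and the example confusion matrix are illustrative only.

    def kappa_statistic(confusion):
        """Cohen's Kappa from a square confusion matrix
        (rows = true class, columns = predicted class)."""
        k = len(confusion)
        n = sum(sum(row) for row in confusion)
        p0 = sum(confusion[i][i] for i in range(k)) / n   # observed accuracy
        # Chance agreement: product of the true/predicted class frequencies.
        pc = sum(
            (sum(confusion[i]) / n) * (sum(row[i] for row in confusion) / n)
            for i in range(k)
        )
        return (p0 - pc) / (1 - pc)

    # 90% raw accuracy looks strong, but Kappa exposes how much of it
    # is explained by class imbalance alone.
    print(kappa_statistic([[85, 5], [5, 5]]))   # ~0.44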
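
McNemar's test needs only the two disagreement counts collected while both classifiers process the same stream: a, the instances classifier A got right and B got wrong, and b, the reverse. The sketch below uses the common continuity-corrected statistic; the counts shown are hypothetical.

    def mcnemar_statistic(a, b):
        """McNemar's statistic (with continuity correction) from the
        disagreement counts a and b."""
        if a + b == 0:
            return 0.0
        return (abs(a - b) - 1) ** 2 / (a + b)

    a, b = 250, 180   # hypothetical disagreement counts from one run
    m = mcnemar_statistic(a, b)
    # Under the null hypothesis the statistic follows a chi-squared
    # distribution with 1 degree of freedom; the 5% critical value is ~3.84.
    print(m, "significant" if m > 3.841 else "not significant")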