The importance of evaluation

Another important aspect is model evaluation. Unless you apply your models to new data and measure a business objective, you're not doing predictive analytics. Evaluation techniques, such as cross-validation and separate train/test sets, simply split your data, which can only give you an estimate of how your model will perform. Life often doesn't hand you a training dataset with all of the cases clearly defined, so there is a lot of creativity involved in defining these two sets on a real-world dataset.
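As a rough illustration of these two techniques, the following sketch uses scikit-learn (an assumption; the same idea applies to any machine learning library), with a synthetic dataset standing in for your real data:

    # A minimal sketch of a held-out test set and cross-validation
    # using scikit-learn; the dataset here is synthetic placeholder data.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split, cross_val_score

    X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

    # Separate train/test sets: hold out 25% of the data that the model
    # never sees during training and use it only to estimate performance.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))

    # Cross-validation: repeat the split k times and average the scores,
    # which gives a less variable estimate than a single train/test split.
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print("5-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

Keep in mind that both numbers are only estimates; the real test is how the model behaves on genuinely new data against the business objective.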

At the end of the day, we want to improve a business objective, such as increasing the ad conversion rate or getting more clicks on recommended items. To measure the improvement, execute A/B tests, measuring differences in metrics across statistically identical populations that each experience a different algorithm. This way, decisions on the product are always data-driven.

A/B testing is a method for running a randomized experiment with two variants: A, which corresponds to the original version and serves as the control; and B, which corresponds to a variation. The method can be used to determine whether the variation outperforms the original version. It can be used to test everything from website changes to sales emails to search ads. Udacity offers a free course covering the design and analysis of A/B tests at https://www.udacity.com/course/ab-testing--ud257.
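To make this concrete, here is a small sketch of how the outcome of such a test might be checked for statistical significance using a two-proportion z-test; the conversion counts, sample sizes, and the choice of test are all illustrative assumptions, not a prescribed procedure:

    # A sketch of analyzing an A/B test with a two-proportion z-test.
    # The counts below are hypothetical illustration values.
    import math

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """Return the z statistic and two-sided p-value for the difference
        in conversion rate between variant A (control) and variant B."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled conversion rate
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = math.erfc(abs(z) / math.sqrt(2))    # two-sided p-value
        return z, p_value

    # 10,000 users saw each version; variant B converted slightly better.
    z, p = two_proportion_z_test(conv_a=500, n_a=10000, conv_b=600, n_b=10000)
    print("z = %.2f, p = %.4f" % (z, p))
    if p < 0.05:
        print("The difference is statistically significant at the 5% level.")
    else:
        print("No significant difference detected; keep the original version.")

The key point is that the decision to roll out B is based on a measured, significant difference in the metric, not on intuition.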