Model evaluation

In the last section, we completed our model estimation. Now it is time to evaluate the estimated models to see whether they meet our client's criteria, so that we can either move on to explaining the results or go back to an earlier stage to refine our predictive models.

As mentioned earlier, for this project our MLlib-based recommendations are evaluated by measuring the Mean Squared Error (MSE) of the rating predictions. However, most users will want to perform further evaluation with their preferred measurements.
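As a minimal sketch of this MSE calculation, assuming the test results have been collected into (actual, predicted) rating pairs, as you would obtain from an MLlib model's predictions joined against the test ratings:

```python
# Sketch: Mean Squared Error over (actual, predicted) rating pairs.
# `ratings` is hypothetical sample data; in practice you would collect
# the model's predictions on the test set into pairs like these.
def mean_squared_error(pairs):
    return sum((actual - pred) ** 2 for actual, pred in pairs) / len(pairs)

ratings = [(4.0, 3.5), (2.0, 2.5), (5.0, 4.5)]
print(mean_squared_error(ratings))  # 0.25
```

A lower MSE indicates that the predicted ratings sit closer to the actual ratings on average.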

In practice, the model estimation results from SPSS Modeler can be exported for evaluation with other tools, such as R, as some users may wish. Within SPSS Modeler itself, we can create a Modeler node against the test data to evaluate our results.

One of the most commonly used evaluation approaches is to measure the correlation between the predicted and actual ratings on our test dataset of movie users.
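This correlation can be sketched with a standard Pearson coefficient; the two rating lists below are hypothetical sample data standing in for the actual and predicted ratings of the test users:

```python
import math

# Sketch: Pearson correlation between actual and predicted ratings.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

actual = [4.0, 2.0, 5.0, 3.0]       # hypothetical test ratings
predicted = [3.5, 2.5, 4.5, 3.0]    # hypothetical model predictions
print(pearson(actual, predicted))
```

A coefficient close to 1.0 indicates that the model ranks users' preferences in nearly the same order as their actual ratings.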

Another commonly used error index for memory-based algorithms can be calculated through the following steps:

  1. For each user a in the test set:
    1. Split a's votes into an observed set (I) and a set to predict (P).
    2. Measure the average absolute deviation between the predicted and actual votes in P.
    3. Predict the votes in P and form a ranked list.
    4. Score the list by its expected utility (Ra), assuming that (a) the utility of the kth item in the list is max(va,k - d, 0), where d is the default vote, and (b) the probability of reaching rank k drops exponentially in k.
  2. Average Ra over all users in the test set.
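The expected-utility score in step 4 above can be sketched as follows. The half-life parameter `alpha` (the rank at which the chance of a user reaching an item halves), the default vote `d`, and the sample votes are all assumptions for illustration:

```python
# Sketch: expected utility Ra of a ranked list of actual votes,
# following the two assumptions in step 4. `votes` holds the actual
# votes for the items in predicted-rank order (hypothetical data).
def expected_utility(votes, d=3.0, alpha=5.0):
    # Item utility is max(vote - d, 0); the weight at rank k decays
    # exponentially, halving every (alpha - 1) ranks.
    return sum(
        max(v - d, 0.0) / 2 ** ((k - 1) / (alpha - 1))
        for k, v in enumerate(votes, start=1)
    )

print(expected_utility([5.0, 4.0, 2.0, 5.0]))
```

Per step 2, the per-user scores Ra would then be averaged over all users in the test set to produce the final index.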

In SPSS Modeler, once the model is built, you can:

  1. Attach a Table node to explore your results.
  2. Use the Analysis node to create a coincidence matrix showing the pattern of matches between each predicted field and its target field. Run the Analysis node to see the results.