Part III. Ranking

What are the appropriate candidates for a given recommendation? Which of these candidates are the best? What about the ten best?

Sometimes the best recommender system is simply item availability, but in the majority of cases, you’re hoping to capture subtle signals about user preference to deliver excellent recommendations amongst potentially millions of options. Personalization is the name of the game; while we previously focused on item-item similarity with respect to some external meaning, we need to start attempting to infer user taste and desire.

We’d also better start framing this as a machine learning task. Beyond discussions of features and architectures, we’ll need to define the objective functions. At first blush, the objective for recommendations is the simple binary “did they like it?”, so maybe we’re simply predicting the outcome of a Bernoulli trial. However, as we discussed in the introduction, there are a variety of ways to get a signal about how much they liked it. Moreover, recommendation systems in most cases grant one kindness: you get multiple shots on goal. Usually you get to recommend a few different options, so we’re especially interested in predicting which items they’ll like the most. In this section, we’ll take all we’ve learned and start getting numbers out. We’ll also talk about the explicit loss functions used to train and evaluate your models.
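To make the Bernoulli-trial framing concrete, here is a minimal sketch (not from the book; the item names and helper functions are hypothetical) of the two pieces that paragraph describes: a binary cross-entropy loss, which is the negative log-likelihood of Bernoulli outcomes under predicted like-probabilities, and a top-k selection that exploits the “multiple shots on goal” of a recommendation slate.

```python
import math

def bce_loss(y_true, p_pred):
    # Binary cross-entropy: negative mean log-likelihood of the observed
    # "did they like it?" outcomes under the model's probabilities.
    eps = 1e-12  # guard against log(0)
    return -sum(
        y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
        for y, p in zip(y_true, p_pred)
    ) / len(y_true)

def top_k(scores, k):
    # With multiple shots on goal, recommend the k highest-scored items.
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy data: observed likes (1/0) vs. predicted like-probabilities.
likes = [1, 0, 1, 1, 0]
probs = [0.9, 0.2, 0.7, 0.6, 0.3]
print(bce_loss(likes, probs))

# A slate of two recommendations drawn from hypothetical item scores.
scores = {"item_a": 0.9, "item_b": 0.2, "item_c": 0.7}
print(top_k(scores, 2))
```

Training drives `bce_loss` down; serving calls something like `top_k`, which is why later chapters care about losses that order items well, not just classify them.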
