Scaling Up Prediction to Terabyte Click Logs

In the previous chapter, we developed an ad click-through predictor using a logistic regression classifier and showed that the algorithm scales well by training it efficiently on up to 1 million click log samples. In this chapter, we will further boost the scalability of the ad click-through predictor by utilizing a powerful parallel computing (or, more specifically, distributed computing) tool called Apache Spark. We will demystify how Apache Spark is used to scale up learning on massive data, as opposed to limiting model training to a single machine. We will use PySpark, the Python API for Spark, to explore the click log data, develop classification solutions on the entire click log dataset, and evaluate performance, all in a distributed manner. Aside from this, we will introduce two approaches to handling categorical features: one is based on hashing, a technique borrowed from computer science, while the other fuses multiple features. Both will be implemented in Spark as well, and a short sketch of each follows the topic list below.
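
To give an early feel for what distributed processing looks like in PySpark, here is a minimal sketch of loading a click log into a distributed DataFrame. It assumes a local Spark installation; the application name and the file path ctr_data/train.csv are placeholders, not taken from this book:

```python
from pyspark.sql import SparkSession

# Create (or reuse) a SparkSession -- the entry point for DataFrame-based PySpark
spark = SparkSession.builder \
    .appName("CTRPrediction") \
    .getOrCreate()

# Read the click log into a distributed DataFrame, inferring column types
# (the path below is a placeholder for your own click log file)
df = spark.read.csv("ctr_data/train.csv", header=True, inferSchema=True)

# Peek at the schema and a few rows without collecting the whole dataset
df.printSchema()
df.show(5)
```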

In this chapter, we will cover the following topics:

  • The main components of Apache Spark
  • Spark installation
  • Deployment of Spark applications
  • Fundamental data structures in PySpark
  • Core programming in PySpark
  • The implementation of ad click-through prediction in PySpark
  • Exploratory data analysis in PySpark
  • Caching and persistence in Spark
  • What feature hashing is
  • The implementation of feature hashing in PySpark
  • What feature interaction is
  • The implementation of feature interaction in PySpark
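
As promised above, here is a rough preview of the two categorical-feature approaches, sketched with Spark ML's built-in FeatureHasher and Interaction transformers (Interaction is available in the Python API from Spark 3.0 onward). The column names used here, such as site_id and device_id, are hypothetical placeholders rather than columns from this book's dataset:

```python
from pyspark.ml.feature import FeatureHasher, Interaction

# Feature hashing: map categorical columns into a fixed-size feature vector
# via a hash function, avoiding a huge one-hot encoded dimensionality
hasher = FeatureHasher(inputCols=["site_id", "device_id", "banner_pos"],
                       outputCol="hashed_features",
                       numFeatures=10000)
# 'df' is the click log DataFrame loaded earlier; the columns above are placeholders
hashed_df = hasher.transform(df)

# Feature interaction: fuse two (numeric or vector) columns into their
# element-wise product features, e.g. one-hot vectors produced upstream
interaction = Interaction(inputCols=["site_vec", "device_vec"],
                          outputCol="site_x_device")
```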