Summary

In this chapter, we discussed the many types of RDDs, such as ShuffledRDD, PairRDD, SequenceFileRDD, HadoopRDD, and so on. We also looked at the three main types of aggregations: groupByKey, reduceByKey, and aggregateByKey. We looked at how partitioning works and why a proper partitioning plan is important for improving performance. We also looked at shuffling and the concepts of narrow and wide dependencies, which are the basic tenets of how Spark jobs are broken into stages. Finally, we looked at the important concepts of broadcast variables and accumulators.
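As a quick refresher on the three aggregations recapped above, the following is a minimal sketch (not one of the chapter's original examples; the object name AggregationSketch and the sample data are purely illustrative). It contrasts groupByKey, which shuffles all values, with reduceByKey and aggregateByKey, which combine values on the map side before the shuffle:

```scala
import org.apache.spark.sql.SparkSession

object AggregationSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("AggregationSketch")
      .master("local[*]")   // local mode, for illustration only
      .getOrCreate()
    val sc = spark.sparkContext

    // A small pair RDD of (word, count) records.
    val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3), ("b", 4)))

    // groupByKey: every value is shuffled to the reducer before being combined.
    val grouped = pairs.groupByKey().mapValues(_.sum)

    // reduceByKey: values are combined map-side first, so less data is shuffled.
    val reduced = pairs.reduceByKey(_ + _)

    // aggregateByKey: the result type can differ from the value type;
    // here we compute (sum, count) per key.
    val aggregated = pairs.aggregateByKey((0, 0))(
      (acc, v) => (acc._1 + v, acc._2 + 1),   // merge a value within a partition
      (a, b)   => (a._1 + b._1, a._2 + b._2)  // merge partial results across partitions
    )

    grouped.collect().foreach(println)
    reduced.collect().foreach(println)
    aggregated.collect().foreach(println)

    spark.stop()
  }
}
```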

The flexibility of RDDs makes it easy to adapt them to most use cases and to perform the operations needed to accomplish the goal.

In the next chapter, we will switch gears to the higher layer of abstraction built on top of RDDs as part of the Tungsten initiative, namely DataFrames and Spark SQL, and see how it all comes together in Chapter 8, Introduce a Little Structure – Spark SQL.
