Transformation of an existing RDD

RDDs are immutable by design; hence, new RDDs are created by applying transformations to existing RDDs. filter() is one typical example of a transformation.
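The semantics of filter() mirror the filter on plain Scala collections: the original collection is left untouched, and a new one containing only the matching elements is returned. A minimal sketch using plain Scala collections (no Spark required) illustrates the idea; the names here are illustrative, not from the original text:

```scala
object FilterSketch {
  def main(args: Array[String]): Unit = {
    val nums = Seq(1, 2, 3, 4, 5)
    // Like rdd.filter(...), this returns a NEW collection;
    // `nums` itself is unchanged (immutability).
    val evens = nums.filter(_ % 2 == 0)
    println(nums)  // List(1, 2, 3, 4, 5)
    println(evens) // List(2, 4)
  }
}
```

On a real RDD the call is the same shape, for example rdd_one.filter(_ % 2 == 0), except that the predicate is evaluated in parallel across the RDD's partitions.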

The following example creates a simple RDD of integers and transforms it by multiplying each integer by 2. Again, we use the SparkContext's parallelize() function to distribute a sequence of integers into an RDD as a set of partitions. Then, we use the map() function to transform that RDD into another RDD by multiplying each number by 2.

scala> val rdd_one = sc.parallelize(Seq(1,2,3))
rdd_one: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:24

scala> rdd_one.take(10)
res0: Array[Int] = Array(1, 2, 3)

scala> val rdd_one_x2 = rdd_one.map(i => i * 2)
rdd_one_x2: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[9] at map at <console>:26

scala> rdd_one_x2.take(10)
res9: Array[Int] = Array(2, 4, 6)
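Because every transformation returns a new RDD, transformations compose naturally into pipelines. A minimal sketch with plain Scala collections (no Spark required) mirrors what chaining map() and filter() on an RDD would look like; the threshold used here is an arbitrary illustration:

```scala
object ChainSketch {
  def main(args: Array[String]): Unit = {
    val result = Seq(1, 2, 3)
      .map(_ * 2)    // like rdd_one.map(i => i * 2)
      .filter(_ > 2) // keep only values greater than 2
    println(result)  // List(4, 6)
  }
}
```

On an RDD, the equivalent chain would be rdd_one.map(_ * 2).filter(_ > 2); no computation runs until an action such as take() or collect() is invoked, since transformations are evaluated lazily.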