There's more...

There is also a streaming version of the KMeans implementation in Spark that allows you to cluster incoming features on the fly. The streaming version of KMeans is covered in more detail in Chapter 13, Spark Streaming and Machine Learning Library.
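As a quick preview, the following is a minimal sketch of how the StreamingKMeans API can be wired up against a socket text stream. The host, port, number of clusters, dimensionality, and batch interval are illustrative assumptions, not values from the recipe:

import org.apache.spark.SparkConf
import org.apache.spark.mllib.clustering.StreamingKMeans
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingKMeansSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("StreamingKMeansSketch")
    val ssc = new StreamingContext(conf, Seconds(5))

    // Assumed source: space-separated doubles arriving on a local socket (for example, via `nc -lk 9999`)
    val points = ssc.socketTextStream("localhost", 9999)
      .map(line => Vectors.dense(line.trim.split(' ').map(_.toDouble)))

    // Streaming KMeans model: 3 clusters over 2-dimensional points, random initial centers
    val model = new StreamingKMeans()
      .setK(3)
      .setDecayFactor(1.0)
      .setRandomCenters(2, 0.0)

    model.trainOn(points)            // update the cluster centers as each batch arrives
    model.predictOn(points).print()  // assign each incoming point to its nearest center

    ssc.start()
    ssc.awaitTermination()
  }
}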

There is also a utility class, KMeansDataGenerator, that helps you generate RDD test data for KMeans. We found this to be very useful during our application development process:

def generateKMeansRDD(sc: SparkContext, numPoints: Int, k: Int, d: Int, r: Double, numPartitions: Int = 2): RDD[Array[Double]] 

This call uses the Spark context to create an RDD while allowing you to specify the number of points, clusters (k), dimensions (d), a scaling factor (r), and the number of partitions.

The documentation for generateKMeansRDD, which generates an RDD containing test data for KMeans, can be found at http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib.util.KMeansDataGenerator$.
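The following is a minimal sketch of using generateKMeansRDD to produce test data and feed it to KMeans.train(); the point count, k, dimensionality, scaling factor, partition count, and iteration count are arbitrary example values:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.util.KMeansDataGenerator

object GenerateKMeansDataSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[*]").setAppName("GenerateKMeansDataSketch")
    val sc = new SparkContext(conf)

    // 1,000 points drawn around 5 cluster centers in 3 dimensions,
    // scaling factor r = 2.0, spread across 4 partitions
    val data = KMeansDataGenerator.generateKMeansRDD(sc, 1000, 5, 3, 2.0, 4)

    // The generator returns RDD[Array[Double]]; convert rows to Vectors before training
    val vectors = data.map(a => Vectors.dense(a)).cache()

    val model = KMeans.train(vectors, 5, 20)  // k = 5, maxIterations = 20
    model.clusterCenters.foreach(println)

    sc.stop()
  }
}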
