Introduction to SparkR

R is one of the most popular statistical programming languages, with a rich set of features that support statistical computing, data processing, and machine learning tasks. However, processing large-scale datasets in R is usually tedious because the R runtime is single-threaded; as a result, only datasets that fit in a single machine's memory can be processed. To overcome this limitation and bring the full power of Spark to R, SparkR was initially developed at the AMPLab as a lightweight R frontend to Apache Spark that uses Spark's distributed computation engine.

This enables R programmers to use Spark for large-scale data analysis from RStudio or the R shell. In Spark 2.1.0, SparkR provides a distributed data frame implementation that supports operations such as selection, filtering, and aggregation. It is conceptually similar to working with R data frames or packages such as dplyr, but it scales to large datasets.
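As a quick illustration of these operations, the following sketch (assuming a local Spark 2.1.0 installation with the SparkR package on the library path) converts the built-in faithful dataset into a SparkDataFrame and applies selection, filtering, and aggregation:

  # Load SparkR and start (or connect to) a Spark session
  library(SparkR)
  sparkR.session(appName = "SparkR-intro")

  # Convert a local R data frame (the built-in 'faithful' dataset)
  # into a distributed SparkDataFrame
  df <- as.DataFrame(faithful)

  # Selection: project a single column
  head(select(df, df$eruptions))

  # Filtering: keep rows where the waiting time exceeds 70 minutes
  head(filter(df, df$waiting > 70))

  # Aggregation: count observations per waiting time
  counts <- summarize(groupBy(df, df$waiting), count = n(df$waiting))
  head(arrange(counts, desc(counts$count)))

  # Stop the session when finished
  sparkR.session.stop()

The same operations could be expressed against data loaded from external sources (for example, CSV or Parquet files) via read.df; the in-memory dataset is used here only to keep the sketch self-contained.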
