Apache Spark with Jupyter Notebooks on IBM DataScience Experience

When talking about the cloud, nearly everybody has a different view. Some think of virtual machines, which you can start and stop instantaneously, while others think of file storage. But the cloud is more than that. In this chapter, we will talk about using Apache Spark in the cloud. This is, of course, more than provisioning some virtual machines and installing an Apache Spark cluster on them.

We are talking about so-called Platform as a Service (PaaS) clouds, where everything is a service, ready to be consumed, and the operation of components is done by the cloud provider. Therefore, on PaaS clouds, Apache Spark comes pre-installed, in such a way that workers can be added to and removed from the cluster on the fly. And not only is Apache Spark pre-installed, the required tooling around it is also available.

In this chapter, we'll introduce the following:

  • Why notebooks are the new standard for data science
  • A practical example of how to analyze bearing vibration data
  • The trinity of data science programming languages (Scala, Python, R) in action
  • Interactive, exploratory data analysis and visualization

In data science, it is now common practice to use notebooks such as Jupyter (covered in this chapter) and Zeppelin (covered in the next chapter, on running Apache Spark on Kubernetes).

Another trend we are seeing more and more is that people tend to prefer Python and R over Scala and Java, especially for one-off scripts and visualization, whereas Scala and Java remain the leaders for long-running Extract, Transform, Load (ETL) processes.
