Preface

Having already written an introductory book on the Hadoop ecosystem, I was pleased to be asked by Packt to write a book on Apache Spark. Being a practical person with a support and maintenance background, I am drawn to system builds and integration. So, I always ask the questions "How can the systems be used?", "How do they fit together?", and "What do they integrate with?" In this book, I will describe each module of Spark and explain how it can be used with practical examples. I will also show how the functionality of Spark can be extended with extra libraries, such as H2O from http://h2o.ai/.

I will show how Apache Spark's graph processing module, GraphX, can be used in conjunction with the Titan graph database from Aurelius (now DataStax). Pairing GraphX with Titan couples graph-based processing with graph-based storage. The streaming chapter will show how data can be passed to Spark Streaming using tools like Apache Flume and Apache Kafka.

Given that in the last few years there has been a large-scale migration to cloud-based services, I will examine the Spark cloud service available at https://databricks.com/. I will do so from a practical viewpoint; this book does not attempt to answer the question "server or cloud?", which I believe to be a subject for a separate book. It simply examines the service that is available.

What this book covers

Chapter 1, Apache Spark, gives a complete overview of Spark, the functionality of its modules, and the tools available for processing and storage. This chapter briefly covers SQL, Streaming, GraphX, MLlib, Databricks, and Hive on Spark.

Chapter 2, Apache Spark MLlib, covers the MLlib module, where MLlib stands for Machine Learning Library. This chapter describes the Apache Hadoop and Spark cluster that I will be using throughout this book, as well as the operating system involved: CentOS. It also describes the development environment being used: Scala and SBT. It provides examples of both installing and building Apache Spark. A worked example of classification using the Naïve Bayes algorithm is explained, as is clustering with KMeans. Finally, an example build is used to extend Spark with some Artificial Neural Network (ANN) work by Bert Greevenbosch (www.bertgreevenbosch.nl). I have always been interested in neural nets, and being able to use Bert's work (with his permission) in this chapter was enjoyable. So, the final topic in this chapter classifies some small images, including distorted ones, using a simple ANN. The results are quite good!
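
As a taste of what is to come, the following is a minimal sketch of MLlib-based Naïve Bayes classification in the style used in this chapter. It assumes the Spark 1.x MLlib API, and the HDFS path and CSV layout (a numeric label followed by comma-separated numeric features) are hypothetical:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.mllib.classification.NaiveBayes
    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.regression.LabeledPoint

    object NBayesSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("nbayes"))

        // Hypothetical CSV data: label, feature1, feature2, ...
        val data = sc.textFile("hdfs:///data/spark/nbayes/train.csv").map { line =>
          val parts = line.split(',')
          LabeledPoint(parts(0).toDouble, Vectors.dense(parts.tail.map(_.toDouble)))
        }

        // Train on 70% of the data, score on the remaining 30%
        val Array(train, test) = data.randomSplit(Array(0.7, 0.3))
        val model = NaiveBayes.train(train, lambda = 1.0)

        val accuracy = test.map(p => (model.predict(p.features), p.label))
          .filter { case (predicted, actual) => predicted == actual }
          .count().toDouble / test.count()
        println(s"Accuracy: $accuracy")
        sc.stop()
      }
    }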

Chapter 3, Apache Spark Streaming, covers the comparison of Apache Spark to Apache Storm, and especially of Spark Streaming to Storm; in my view, Spark offers much more functionality. For instance, the data used in one Spark module can be passed to and used in another. Also, as shown in this chapter, Spark Streaming integrates easily with big data movement technologies like Flume and Kafka.

So, the streaming chapter starts by giving an overview of checkpointing and explains when you might want to use it. It gives Scala code examples of how checkpointing can be used, and shows how the data can be stored on HDFS. It then moves on to give practical examples, in Scala, as well as execution examples, of TCP, file, Flume, and Kafka-based streaming. The last two options are demonstrated by processing an RSS data stream, which is finally stored on HDFS.
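
The following is a minimal sketch of a checkpointed stream in the style of this chapter, assuming the Spark 1.x streaming API; the checkpoint directory, host, port, and output path are hypothetical:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object StreamSketch {
      def main(args: Array[String]): Unit = {
        val checkpointDir = "hdfs:///data/spark/checkpoint" // hypothetical path

        // Build a context that checkpoints to HDFS and saves each batch there too
        def createContext(): StreamingContext = {
          val ssc = new StreamingContext(new SparkConf().setAppName("stream"), Seconds(10))
          ssc.checkpoint(checkpointDir)
          val lines = ssc.socketTextStream("localhost", 10777) // hypothetical TCP source
          lines.saveAsTextFiles("hdfs:///data/spark/stream/out")
          ssc
        }

        // Recover from the checkpoint on restart, or create a fresh context
        val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
        ssc.start()
        ssc.awaitTermination()
      }
    }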

Chapter 4, Apache Spark SQL, explains the Spark SQL context in Scala code terms. It explains file I/O in text, Parquet, and JSON formats. Using Apache Spark 1.3, it explains the use of DataFrames by example and shows the methods that they make available for data analytics. It also introduces Spark SQL by Scala-based example, showing how temporary tables can be created and how SQL-based operations can be used against them.
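
As a sketch of the Spark 1.3 style used here, the following reads a hypothetical Parquet file into a DataFrame, registers it as a temporary table, and runs SQL against it:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    object SqlSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("sql"))
        val sqlContext = new SQLContext(sc)

        // Spark 1.3 API; the HDFS path and schema are hypothetical
        val df = sqlContext.parquetFile("hdfs:///data/spark/sql/adult.parquet")

        // Register a temporary table and run SQL against it
        df.registerTempTable("adult")
        sqlContext.sql("SELECT COUNT(*) FROM adult WHERE age > 30")
          .collect().foreach(println)
        sc.stop()
      }
    }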

Next, the Hive context is introduced. Initially, a local context is created, and Hive QL operations are executed against it. Then, a method is introduced to integrate an existing distributed CDH 5.3 Hive installation with a Spark Hive context. Operations against this context are then shown to update a Hive database on the cluster. In this way, Spark applications can be created and scheduled so that Hive operations are driven by the real-time Spark engine.
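
A minimal sketch of a Spark Hive context follows; with a hive-site.xml for the CDH cluster on the classpath, the same code would run against the distributed metastore rather than a local one (the table definition is hypothetical):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    object HiveSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("hive"))
        val hiveContext = new HiveContext(sc)

        // Create a table, then list what the metastore now contains
        hiveContext.sql("CREATE TABLE IF NOT EXISTS adult2 (age INT, workclass STRING)")
        hiveContext.sql("SHOW TABLES").collect().foreach(println)
        sc.stop()
      }
    }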

Finally, the ability to create user-defined functions (UDFs) is introduced, and the UDFs created are then used in SQL calls against the temporary tables.
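
Continuing the SQL sketch above, a UDF can be registered and then called from SQL; the function, table, and column names are hypothetical:

    // Register a Scala function as a UDF and use it against the temporary table
    sqlContext.udf.register("ageBracket", (age: Int) => if (age < 30) "young" else "older")
    sqlContext.sql("SELECT age, ageBracket(age) FROM adult")
      .collect().foreach(println)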

Chapter 5, Apache Spark GraphX, introduces the Apache Spark GraphX module for graph processing. It works through a series of graph functions by example, from basic counting to triangle processing. It then introduces Kenny Bastani's Mazerunner work, which integrates the Neo4j NoSQL database with Apache Spark. This work has been included with Kenny's permission; take a look at www.kennybastani.com.
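
The following is a minimal GraphX sketch in the spirit of those examples, building a tiny hypothetical graph and running a count and a triangle count over it:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.graphx.{Edge, Graph, PartitionStrategy}

    object GraphSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("graphx"))

        // Tiny hypothetical graph; edges are in canonical direction (src < dst),
        // as the triangle count implementation expects
        val vertices = sc.parallelize(Seq((1L, "Mike"), (2L, "Sarah"), (3L, "John")))
        val edges = sc.parallelize(Seq(
          Edge(1L, 2L, "friend"), Edge(2L, 3L, "friend"), Edge(1L, 3L, "friend")))
        val graph = Graph(vertices, edges).partitionBy(PartitionStrategy.RandomVertexCut)

        // Basic counting, then per-vertex triangle counts
        println(s"vertices: ${graph.vertices.count()}, edges: ${graph.edges.count()}")
        graph.triangleCount().vertices.collect().foreach(println)
        sc.stop()
      }
    }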

This chapter introduces Docker, then Neo4j, and then the Neo4j interface. Finally, it works through some of the Mazerunner-supplied functionality via the supplied REST interface.

Chapter 6, Graph-based Storage, examines graph-based storage, given that Apache Spark graph processing has been introduced in this book. I looked for a product that could integrate with Hadoop, was open source, could scale to a very high degree, and could integrate with Apache Spark.

Although it is still a relatively young product, both in terms of community support and development, I think that Titan from Aurelius (now DataStax) fits the bill. The 0.9.x releases available as I write use Apache TinkerPop for graph processing.

This chapter provides worked examples of graph creation and storage using the Gremlin shell and Titan. It shows how both HBase and Cassandra can be used as Titan's backend storage.

Chapter 7, Extending Spark with H2O, talks about the H2O machine learning library, developed at http://h2o.ai/, which can be used to extend the functionality of Apache Spark. In this chapter, I examine the sourcing and installation of H2O, as well as the Flow interface for data analytics. The architecture of Sparkling Water is examined, as are data quality and performance tuning.

Finally, a worked example of deep learning is created and executed. Chapter 2, Apache Spark MLlib, used a simple ANN for neural classification. This chapter uses a highly configurable and tunable H2O deep learning neural network for classification. The result is a fast and accurate trained neural model, as you will see.
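
To give a feel for the style of code involved, here is a sketch of H2O deep learning from Scala; it assumes the Sparkling Water and H2O 3.x APIs of that era, and the file path, response column, and network shape are hypothetical:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.h2o.H2OContext
    import hex.deeplearning.DeepLearning
    import hex.deeplearning.DeepLearningModel.DeepLearningParameters
    import water.fvec.H2OFrame

    object DeepLearnSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("h2o"))
        val h2oContext = new H2OContext(sc).start() // start H2O on the Spark cluster

        // Hypothetical training data, parsed into an H2O frame
        val train = new H2OFrame(new java.io.File("/data/spark/h2o/train.csv"))

        val params = new DeepLearningParameters()
        params._train = train._key
        params._response_column = "income"  // hypothetical label column
        params._hidden = Array(200, 200)    // two hidden layers; highly tunable
        params._epochs = 100

        val model = new DeepLearning(params).trainModel.get
        println(model)
      }
    }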

Chapter 8, Spark Databricks, introduces the AWS cloud-based Apache Spark cluster system available at https://databricks.com/. It offers a step-by-step process for setting up both an AWS account and a Databricks account. It then steps through the Databricks account functionality in terms of Notebooks, Folders, Jobs, Libraries, development environments, and more.

It examines the table-based storage and processing in Databricks, and also introduces the DBUtils package, which provides Databricks utility functionality. This is all done by example to give you a good understanding of how this cloud-based system can be used.
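
For a flavour of DBUtils, the following notebook-cell sketch lists files, then writes and reads a small file; the dbutils object and display function are supplied by the Databricks notebook environment, and the paths are hypothetical:

    // Run in a Databricks Scala notebook cell
    display(dbutils.fs.ls("/"))                // list the root of the Databricks file system
    dbutils.fs.put("/tmp/hello.txt", "hello")  // write a small file
    println(dbutils.fs.head("/tmp/hello.txt")) // read its first bytes back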

Chapter 9, Databricks Visualization, extends the Databricks coverage by concentrating on data visualization and dashboards. It then examines the Databricks REST interface, showing how clusters can be managed remotely using various example REST API calls. Finally, it looks at data movement in terms of tables, folders, and libraries.

The cluster management section of this chapter shows that it is possible to launch Apache Spark on AWS EC2 using scripts supplied with the Spark release. The https://databricks.com/ service takes this functionality a step further by providing a method to easily create and resize multiple EC2-based Spark clusters. It provides extra functionality for cluster management and usage, as well as for user access and security, as these two chapters show. Given that the people who brought us Apache Spark have created this service, it must be worth considering and examining.
