Chapter 8. Integration with Hadoop

Big data is the latest trend in the technical community and in industry in general. Cassandra and many other NoSQL solutions solve a major part of the problem: storing large datasets in a scalable manner while keeping mutations and retrieval queries fast. However, this is only half the picture; the other major part is processing. A database that integrates well with analytical tools such as Apache Hadoop, Twitter Storm, Pig, Spark, and other platforms will be the preferable choice.

Cassandra provides native support for Hadoop MapReduce, Pig, Hive, and Oozie. Only minor changes are needed to get the Hadoop family up and working with Cassandra. Third-party support for Hadoop and Solr has taken Cassandra's integration story to the next level: proprietary tooling such as DataStax Enterprise Edition for Cassandra makes it easy to work with Hadoop and adds full-text search over Cassandra data using Solr. DataStax Enterprise Edition also provides support for the Spark project.
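To give a flavor of how small those changes are, here is a minimal sketch that points a MapReduce job at a Cassandra column family instead of an HDFS directory. It assumes the classes shipped in Cassandra's org.apache.cassandra.hadoop package (ColumnFamilyInputFormat and ConfigHelper, as found in the Cassandra 1.x/2.x line) are on the classpath; the keyspace "demo", the column family "events", and the contact point are hypothetical placeholders.

import java.nio.ByteBuffer;

import org.apache.cassandra.hadoop.ColumnFamilyInputFormat;
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.thrift.SlicePredicate;
import org.apache.cassandra.thrift.SliceRange;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class CassandraInputSetup {

  // Configure a MapReduce job to read its input from a Cassandra
  // column family instead of an HDFS directory. The keyspace ("demo"),
  // column family ("events"), and contact point are placeholders.
  public static Job newCassandraJob() throws Exception {
    Job job = Job.getInstance(new Configuration(), "cassandra-input-demo");

    job.setInputFormatClass(ColumnFamilyInputFormat.class);

    Configuration conf = job.getConfiguration();
    ConfigHelper.setInputInitialAddress(conf, "127.0.0.1");  // any live node
    ConfigHelper.setInputRpcPort(conf, "9160");              // Thrift port
    ConfigHelper.setInputPartitioner(conf,
        "org.apache.cassandra.dht.Murmur3Partitioner");      // must match the cluster
    ConfigHelper.setInputColumnFamily(conf, "demo", "events");

    // An empty slice range means "fetch every column of each row".
    SlicePredicate predicate = new SlicePredicate().setSlice_range(
        new SliceRange(ByteBuffer.allocate(0), ByteBuffer.allocate(0),
            false, Integer.MAX_VALUE));
    ConfigHelper.setInputSlicePredicate(conf, predicate);

    // From here on, the job is configured like any other MapReduce job:
    // set the mapper, reducer, and output format as usual.
    return job;
  }
}

Apart from the input format and these ConfigHelper calls, the rest of the job (the mapper, reducer, and output) is written exactly as it would be for plain HDFS input.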

Cassandra is a very powerful database engine, and so far we have looked at its salient features as a standalone piece of software. In this chapter, we will see how Cassandra can be used as a data store for third-party processing frameworks such as Hadoop MapReduce and Pig.

Using Hadoop

Hadoop is for data processing. You may ask, "So are MATLAB, R, Octave, Python (with NLTK and many other data analysis libraries), and SAS; why Hadoop, then?" Those are great tools, but they are good for data that fits in memory. That means you can churn through a couple of gigabytes, maybe tens of gigabytes, and the rate of processing is bounded by the CPU of that one machine, perhaps 16 cores. This poses a big restriction, because at the Internet scale data no longer stays within gigabyte limits.

In the age of billions of mobile phones (there were an estimated 7.7 billion mobile users at the end of 2014; source: http://mobithinking.com/mobile-marketing-tools/latest-mobile-stats/a#subscribers), we generate humongous amounts of data every second (Twitter has reported 143,199 tweets per second; source: http://dazeinfo.com/2014/04/29/7-7-billion-mobile-devices-among-7-1-billion-world-population-end-2014/) by checking in to places, tagging photos, uploading videos, commenting, messaging, purchasing, dining, and running (fitness apps monitor our activities); nearly every such event is recorded somewhere. And it does not stop at organic data generation.

A lot of data, far more than the organic kind, is generated by machines (http://en.wikipedia.org/wiki/Machine-generated_data): web logs, financial market data, readings from various sensors (including the ones in your cell phone), machine part telemetry, and many more. Health, genomics, and medical science hold some of the most interesting big data corpora ready to be analyzed and inferred from. To get a glimpse of how big genetic data can be, consider the 1000 Genomes Project (http://www.1000genomes.org/). This data is available for free (apart from storage charges) to anyone, and the genome data for (only) 1,700 individuals already makes a corpus of 200 terabytes. It is doubtful that any conventional in-memory computation tool such as R or MATLAB can handle that; Hadoop helps you process data of that extent.

Hadoop is an example of distributed computing, so you can scale beyond a single computer. Hadoop virtualizes the storage and the processors: you can roughly treat a 10-machine Hadoop cluster as one machine with 10 times the processing power and 10 times the storage capacity of a single node. With multiple machines processing the data in parallel, Hadoop is a good fit for large unstructured datasets. It can help you clean data (data munging) and perform data transformations, while HDFS provides redundant, distributed data storage. Effectively, it can serve as your extract, transform, and load (ETL) platform.
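To make the programming model concrete, here is a minimal sketch of the classic word-count job, written against the standard org.apache.hadoop.mapreduce API: the map phase runs in parallel over the input splits that HDFS scatters across the cluster, and the reduce phase aggregates the per-word counts. The input and output paths are passed on the command line.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: runs on each input split, in parallel across the cluster.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);  // emit (word, 1)
      }
    }
  }

  // Reducer: receives all the counts for one word and sums them up.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);  // local pre-aggregation on each node
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

You would run it with something like hadoop jar wordcount.jar WordCount /input /output. Each mapper works on a block of the file stored locally on its node, which is what lets the same job scale from one machine to hundreds without any change to the code.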
