Hadoop for near real-time applications

Hadoop has been popular for fast, performant batch processing of large volumes of varied data arriving at high velocity. However, there has always been a need to handle data for near real-time applications as well.

While Flume did provide some level of stream-based processing in the Hadoop ecosystem, it required considerable implementation effort for custom processing. Most of Flume's source and sink implementations perform data ETL roles; any custom processing requirement meant implementing a custom sink.
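To make the ETL-oriented role of Flume's sources and sinks concrete, the following is a minimal sketch of a Flume agent configuration that reads events from a network source and lands them in HDFS. The agent, source, channel, and sink names (a1, r1, c1, k1) and the HDFS path are placeholders chosen for illustration.

```
# Name the components of agent a1
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Source: listen for newline-delimited events on a TCP port
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sources.r1.channels = c1

# Channel: buffer events in memory between source and sink
a1.channels.c1.type = memory

# Sink: write events into HDFS (placeholder path)
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /flume/events
a1.sinks.k1.channel = c1
```

Note that the pipeline above only moves and stores data; any transformation beyond this ETL flow would require writing a custom sink or interceptor, which is the implementation burden described above.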

A more mature option for near real-time processing of data came with Spark Streaming, which works with HDFS and, as discussed earlier, is based on micro-batches. It offers greater capabilities than Flume, such as pipeline-based processing in near real time.
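The micro-batch idea is that records arriving on a stream are grouped into small batches, each of which is then processed with ordinary batch logic. The following is a minimal, self-contained sketch of that grouping step in plain Java; it is a conceptual illustration only, not Spark Streaming code, and the method and class names are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;

public class MicroBatchDemo {
    // Split an incoming event stream into small batches, the way a
    // micro-batch engine groups records arriving within each batch interval.
    static List<List<Integer>> microBatch(List<Integer> events, int batchSize) {
        List<List<Integer>> batches = new ArrayList<>();
        for (int i = 0; i < events.size(); i += batchSize) {
            batches.add(new ArrayList<>(
                events.subList(i, Math.min(i + batchSize, events.size()))));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> events = List.of(1, 2, 3, 4, 5, 6, 7);
        // Each batch is then handed to ordinary batch processing logic.
        for (List<Integer> batch : microBatch(events, 3)) {
            int sum = batch.stream().mapToInt(Integer::intValue).sum();
            System.out.println(batch + " -> " + sum);
        }
    }
}
```

In a real engine the batch boundary is a time interval rather than a count, but the pipeline structure (group, then batch-process) is the same.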

However, even when data was processed in near real time and stored in the Hadoop File System, an even greater challenge remained: how to access data randomly from HDFS, which is primarily a sequential filesystem.

In order to solve the problem of random access to data, HBase was implemented based on Google's Bigtable architecture. Although it allows random access, it is key-value oriented: data can be looked up directly only if its key is available. For partial-match scenarios it is not appropriate, as such queries can trigger file scans within HDFS.
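The lookup trade-off above can be illustrated without an HBase cluster. HBase keeps rows sorted by row key, so a full-key lookup is cheap and a key-prefix range scan is feasible, while anything else degenerates into scanning. The sketch below mimics this with Java's sorted `TreeMap`; the keys and values are invented for the example, and this is an analogy for the access pattern, not the HBase API.

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class KeyLookupDemo {
    public static void main(String[] args) {
        // Rows kept sorted by key, as rows in an HBase region are
        // (illustration only; not the real HBase client API).
        TreeMap<String, String> rows = new TreeMap<>();
        rows.put("user#1001", "alice");
        rows.put("user#1002", "bob");
        rows.put("order#2001", "book");

        // Direct lookup: cheap when the full key is known.
        System.out.println(rows.get("user#1001")); // alice

        // Key-prefix match: still works, but only because keys are sorted,
        // so it becomes a bounded range scan over consecutive keys.
        SortedMap<String, String> scan = rows.subMap("user#", "user#\uffff");
        System.out.println(scan.size()); // 2

        // A match on anything other than the key (e.g. value == "bob")
        // has no such shortcut and would require scanning every row.
    }
}
```

This is why the text says partial-match queries are a poor fit: unless the query maps onto a key range, the store has nothing better than a scan.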
