MapReduce

One of the important paradigms by which the Hadoop framework processes large datasets is the MapReduce programming model. MapReduce also uses the master-slave concept: the input file is first split into smaller pieces, and each piece is fed to a worker node, which processes the data (the map task); the master then collects and combines the results (the reduce task) and returns the output. This is depicted in the following figure:

Figure 11: Working of MapReduce programming model in Hadoop

As shown in the preceding figure, the map phase sends the queries (code to data) out to the nodes, and the reduce phase collects the results, collates them, and sends them back. YARN handles resource management and job scheduling here, while MapReduce provides the framework for distributing the code (query) across multiple nodes for execution/processing. MapReduce is a Java-based programming model inspired by Google's MapReduce paper.
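To make the map and reduce phases concrete, the following is a minimal sketch of a word count, simulated in plain Java with no Hadoop dependencies. The class and method names are illustrative, not Hadoop's actual API: in a real job, the map and reduce functions would extend Hadoop's `Mapper` and `Reducer` classes and run on separate nodes.

```java
import java.util.*;
import java.util.stream.*;

// Illustrative sketch only: simulates the map and reduce phases of a
// word count in-process, without the Hadoop framework.
public class WordCountSketch {

    // Map task: each "worker" turns one split of the input
    // into intermediate (word, 1) pairs.
    static List<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\s+"))
                .filter(w -> !w.isEmpty())
                .map(w -> Map.entry(w, 1))
                .collect(Collectors.toList());
    }

    // Reduce task: the "master" side collates the intermediate pairs,
    // summing the counts for each word.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, Integer> pair : pairs) {
            counts.merge(pair.getKey(), pair.getValue(), Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // The input file, broken into smaller splits; in Hadoop each split
        // would be fed to a map task on a different worker node.
        List<String> splits = List.of("the quick brown fox", "the lazy dog", "the fox");

        // Each split is mapped independently, then the results are collected.
        List<Map.Entry<String, Integer>> intermediate = splits.stream()
                .flatMap(s -> map(s).stream())
                .collect(Collectors.toList());

        // Reduce collates the intermediate pairs into the final counts.
        System.out.println(reduce(intermediate));
    }
}
```

The key property this illustrates is that each map call depends only on its own split, so the map tasks can run in parallel on different nodes; only the reduce step needs to see the combined intermediate results.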
