One of the important paradigms through which the Hadoop framework processes large datasets is the MapReduce programming model. MapReduce also uses the master-slave concept: the input file is first broken into smaller pieces, and each piece is fed to a worker node, which processes the data (the map task); the master then collects and combines the results (the reduce task) and returns the output. This is depicted in a pictorial fashion in the following figure:
As shown in the preceding figure, the map step sends the queries (code to data) to the nodes, and the reduce step collects the results, collates them, and sends them back. YARN handles resource management and job scheduling here, while MapReduce provides the framework for distributing the code (query) across multiple nodes for execution/processing. MapReduce is a Java-based programming model inspired by Google's MapReduce paper.
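The map-then-reduce flow described above can be sketched in plain Python. This is only a minimal, single-machine simulation of the concept (word counting, the canonical MapReduce example), not Hadoop's actual Java API; the function names and sample splits are illustrative assumptions.

```python
from collections import defaultdict

def map_phase(line):
    # Map task: each worker emits (word, 1) pairs for its input split
    return [(word, 1) for word in line.split()]

def reduce_phase(mapped_pairs):
    # Reduce task: the master collates intermediate pairs by key
    # and sums the counts for each word
    counts = defaultdict(int)
    for word, count in mapped_pairs:
        counts[word] += count
    return dict(counts)

# The "input file" broken into smaller pieces (splits), one per worker
splits = ["big data big", "data processing"]

# Each worker runs the map task on its own split
mapped = []
for split in splits:
    mapped.extend(map_phase(split))

# The master collects the mapped output and reduces it
result = reduce_phase(mapped)
print(result)  # {'big': 2, 'data': 2, 'processing': 1}
```

In real Hadoop, the intermediate pairs are shuffled and sorted by key across the cluster before the reduce tasks run, so that all values for a given key arrive at the same reducer.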