We said that the data flow between Hadoop and a relational database is rarely a linear, single-direction process. Indeed, the situation where data is processed within Hadoop and the results are then inserted into a relational database is arguably the more common case. We will explore this now.
Thinking about how to copy the output of a MapReduce job into a relational database, we encounter considerations similar to those raised by data import into Hadoop.
The obvious approach is to modify a reducer to generate the output for each key and its associated values and then insert them directly into a database via JDBC. We do not have to worry about source column partitioning, as in the import case, but we do still need to think about how much load we are placing on the database and whether we need to consider timeouts for long-running tasks. In addition, just as in the mapper situation, this approach tends to issue many individual queries against the database, which is typically much less efficient than bulk operations.
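To make the cost of row-at-a-time inserts concrete, here is a minimal sketch contrasting per-row inserts with accumulating rows and flushing them in bulk. This is illustrative only: sqlite3 stands in for the JDBC-accessed database, and the table, column names, and batch size are hypothetical, not anything prescribed by the text.

```python
import sqlite3

def insert_per_row(conn, rows):
    """One INSERT statement per row -- the pattern a naive reducer produces."""
    for name, total in rows:
        conn.execute("INSERT INTO totals (name, total) VALUES (?, ?)",
                     (name, total))
    conn.commit()

def insert_batched(conn, rows, batch_size=1000):
    """Accumulate rows and flush them in bulk -- far fewer round trips."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= batch_size:
            conn.executemany(
                "INSERT INTO totals (name, total) VALUES (?, ?)", batch)
            batch = []
    if batch:  # flush any remaining partial batch
        conn.executemany(
            "INSERT INTO totals (name, total) VALUES (?, ?)", batch)
    conn.commit()

# Demonstration against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE totals (name TEXT, total INTEGER)")
insert_batched(conn, [("alice", 3), ("bob", 7)])
print(conn.execute("SELECT COUNT(*) FROM totals").fetchone()[0])  # prints 2
```

With a real remote database, the difference matters even more than it does locally: each single-row statement pays a network round trip, while a batch amortizes that cost across many rows.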
Often, a superior approach is not to work around the usual MapReduce case of generating output files, as with the preceding example, but instead to exploit it.
Most relational databases can ingest data from source files, either through custom tools or through a bulk-load statement such as MySQL's LOAD DATA. Within the reducer, therefore, we can modify the data output to make it more easily ingested into our relational destination. This obviates the need to consider issues such as reducers placing load on the database or how to handle long-running tasks, but it does require a second step external to our MapReduce job.
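A minimal sketch of this idea, assuming the destination is a MySQL-style table loaded with LOAD DATA and that fields are terminated by tabs; the key names, column layout, and reduce logic here are hypothetical illustrations, not a prescribed implementation:

```python
# Hypothetical reduce function that emits its output as tab-separated
# lines, so a statement such as
#   LOAD DATA LOCAL INFILE 'part-r-00000' INTO TABLE totals
#   FIELDS TERMINATED BY '\t';
# can ingest the job's output files directly.

def reduce_to_load_format(key, values):
    """Emit one tab-delimited record per key, ready for bulk loading."""
    return "%s\t%d" % (key, sum(values))

# Simulated reducer input: each key with its list of associated values.
grouped = [("alice", [1, 2]), ("bob", [7])]
lines = [reduce_to_load_format(key, values) for key, values in grouped]
print("\n".join(lines))
```

The second, external step is then simply to copy the job's output files out of HDFS and point the database's bulk loader at them.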