Create a data file named ‘sqoop-data.txt’ in the local file system. This file follows the same schema as the ‘Employee’ table; insert some rows into it, with the fields in each row separated by tabs. Finally, push ‘sqoop-data.txt’ into HDFS using the command mentioned below.
hadoop fs -put /usr/local/sqoop-data.txt /data/sqoop-data.txt
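To sanity-check the upload, list the directory and print the file back from HDFS. The sample rows shown first are purely illustrative, assuming the ‘Employee’ table has three columns (an id, a name and an address); the fields are tab separated.
1	Alex	USA
2	Nilanjan	India
3	Jasmin	UK
hadoop fs -ls /data
hadoop fs -cat /data/sqoop-data.txt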
Export all rows of ‘sqoop-data.txt’ into the MySQL table ‘employee’ with the command below.
sqoop export --connect jdbc:mysql://localhost:3306/sqoopTest
--username root --password admin --table employee
--export-dir /data/sqoop-data.txt -m 1
--input-fields-terminated-by '\t'
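Once the export job completes, the rows can be verified from the MySQL shell:
SELECT * FROM employee;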
7.4.2 Sqoop IMPORT (Importing Fresh Table from MySQL to HIVE)
Create a table named emp in MySQL.
create table emp (emp_id int, emp_name varchar(30), emp_add
varchar(30));
Insert dummy data into table emp.
INSERT INTO emp (emp_id, emp_name, emp_add) VALUES (1, 'Alex', 'USA');
INSERT INTO emp (emp_id, emp_name, emp_add) VALUES (2, 'Nilanjan', 'India');
INSERT INTO emp (emp_id, emp_name, emp_add) VALUES (3, 'Jasmin', 'UK');
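The inserted rows can be confirmed with a quick query before running the import:
SELECT * FROM emp;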
Create an external table in HIVE.
create external table emp_mysql(emp_id int,
emp_name string,
emp_add string)
row format delimited
fields terminated by ','
location '/data/sqoopTest';
Run the Sqoop command below to import the data into HIVE.
sqoop import --connect jdbc:mysql://localhost:3306/sqoopTest
--username root --password admin --table emp --split-by emp_id -m 1
--target-dir /data/sqoopTest
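Because the external table's location (/data/sqoopTest) matches the import's target directory, the imported rows become visible in HIVE as soon as the job finishes:
select * from emp_mysql;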
7.4.3 Flume
Flume is used to ingest streaming data into HDFS. Typically, streaming data might be log files.
Assume that an e-commerce web application wants to analyse the customer behaviour from a particular region. To do so, they would need to move the available web log data into HDFS.
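As a minimal sketch of how such an agent could be wired up (the agent name, log path and HDFS directory below are assumptions for illustration only), a Flume configuration declares an exec source that tails the web server access log, a memory channel that buffers events and an hdfs sink that writes them into HDFS.
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sink1
# exec source: tail the web server access log (path is an assumption)
agent1.sources.src1.type = exec
agent1.sources.src1.command = tail -F /var/log/httpd/access_log
agent1.sources.src1.channels = ch1
# memory channel: buffers events between source and sink
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 10000
# hdfs sink: write events as plain text under /data/weblogs
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = /data/weblogs
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.channel = ch1
The agent is then started with the flume-ng launcher:
flume-ng agent --conf conf --conf-file weblog.conf --name agent1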