Spark

As a general-purpose data engine, Apache Spark integrates closely with Hive. Spark SQL supports a subset of HQL and can leverage the Hive metastore to write to or query Hive tables. This approach is also called Spark over Hive. To configure Spark to use the Hive metastore, you only need to copy hive-site.xml to the ${SPARK_HOME}/conf directory. After that, running the spark-sql command enters the Spark SQL interactive environment, where you can write SQL to query Hive tables.
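
A minimal sketch of this setup (assuming Hive's configuration lives under ${HIVE_HOME}/conf and a hypothetical Hive table named employee already exists):

    $ cp ${HIVE_HOME}/conf/hive-site.xml ${SPARK_HOME}/conf/
    $ ${SPARK_HOME}/bin/spark-sql
    spark-sql> SHOW DATABASES;                    -- databases are read from the Hive metastore
    spark-sql> SELECT * FROM employee LIMIT 5;    -- queries the hypothetical Hive table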

On the other hand, Hive over Spark (officially called Hive on Spark) is a similar approach in the opposite direction: it lets Hive use Spark as an alternative execution engine. In this case, users stay in Hive and write HQL, which transparently runs on the Spark engine. Hive over Spark requires the YARN FairScheduler and setting hive.execution.engine=spark. For more details, refer to https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started.
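
A minimal sketch of switching engines inside a Hive session (assuming Hive on Spark has already been set up per the wiki page above, with a compatible Spark build and YARN configured, and reusing the hypothetical employee table):

    hive> SET hive.execution.engine=spark;    -- switch this session from the default engine to Spark
    hive> SELECT count(*) FROM employee;      -- the same HQL now runs as a Spark job on YARN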
