Chapter 3. Overview of Apache Drill

Apache Drill is a distributed schema-on-read query engine loosely associated with the Hadoop ecosystem. This chapter unpacks this statement so you understand what Drill is and how it works before we dive into the details of using, deploying, and extending Drill.

The Apache Hadoop Ecosystem

Many excellent books exist to describe Apache Hadoop and its components. Here we expand on the introduction in Chapter 1 with the key concepts needed to understand Drill.

Hadoop consists of the Hadoop Distributed File System (HDFS), MapReduce, and the YARN job scheduler. Drill is best thought of as an alternative to YARN and MapReduce for processing data stored in HDFS.

The extended Hadoop ecosystem (sometimes called “Hadoop and Friends”) includes a wide variety of tools. For our purposes these include the following:

  • Alternative storage engines (e.g., MapR-FS and Amazon S3)

  • Database-like storage engines (e.g., HBase and MapR-DB)

  • Compute engines (e.g., MapReduce and Apache Spark)

  • Query engines (e.g., Apache Hive, Drill, Presto, Apache Impala, and Spark SQL)

  • Coordination tools (e.g., Apache ZooKeeper and etcd)

  • Cluster coordinators (e.g., manual, YARN, Mesos, and Docker/Kubernetes)

This list does not begin to touch on the many tools for coordinating workflows (e.g., Oozie and Airflow), data ingest (e.g., Sqoop and Kafka), or many other purposes.

Drill Is a Low-Latency Query Engine

With so many options, you might wonder: where does Drill fit into this ecosystem? Drill is a low-latency query engine intended for interactive queries, especially those that power business intelligence tools such as Tableau.

Hadoop provides many query tool options.1 The granddaddy is Apache Hive: a SQL-like engine that generates MapReduce jobs. Hive is like a freight train: it handles the massive scale needed for complex ETL tasks but moves too slowly for interactive queries. Apache Drill, Apache Impala, Hive LLAP, and Presto are SQL query engines that target interactive uses. Although Impala is fast, it has historically been difficult to build and extend, works primarily with the Parquet file format, and requires the Hive metastore.

Drill’s sweet spot is that it works with a wide variety of file formats and is extendable by savvy Java programmers. (Later chapters explain how to create extensions for user-defined functions and file format plug-ins.) Because Drill is developed in Java, it is far easier to build than the C++-based Impala.

Unlike other query engines, Drill can retrieve data not just from HDFS, but also from a wide variety of other storage systems (MapR-FS, Amazon S3, HBase, Kafka, and more). The Drill community continues to add integrations with other systems.

A key difference between Drill on the one hand and Hive or Impala on the other is that Drill does not require that you define metadata before querying files. Although Drill can use the Hive metastore, most users achieve great results by pointing Drill directly at their files without the complexity of metadata, metadata refreshes, and so on.

Note

See Chapter 8 for more on the nuances of using Drill without metadata.
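In practice, this means pointing a query directly at a file or directory path. A rough sketch (the dfs storage plug-in is Drill’s default filesystem plug-in, but the path shown here is hypothetical and will differ in your installation):

-- Query a file directly, with no table definition created beforehand.
SELECT *
FROM dfs.`/data/customers/customers.csv`
LIMIT 10;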

Distributed Processing with HDFS

HDFS stores data in (preferably) large files and provides a client API to access those files from any node in the Hadoop cluster. Historically, the best performance occurs when an application observes “data locality”: moving processing to the nodes that contain the data of interest.

HDFS is a distributed filesystem. It divides each file into a set of blocks, typically 128 MB or 256 MB in size, and distributes the blocks across a set of data nodes. For resilience, it replicates each block, typically creating three copies. Data locality thus means moving computation to one of the three available nodes for each block and then combining the results. This is exactly what the classic MapReduce algorithm does: it “maps” work to data nodes and then “reduces” (collects) the results.

Conceptually Drill works the same way, though as we will see, the details differ to minimize query latency.

Elements of a Drill System

Figure 3-1 shows how Drill fits into a typical Hadoop system.

Figure 3-1. Overview of Drill in the Hadoop ecosystem

Drill runs on each data node, typically alongside YARN (which handles Spark and MapReduce jobs). Drill reads data from HDFS using the HDFS client API. Because Drill resides on the same nodes as the data, data reads are local, meaning that reads are often on a single node and almost always on the same rack.

When run in Amazon Elastic Compute Cloud (Amazon EC2), Drill resides on a set of EC2 instances and typically reads data from files stored in Amazon S3: a configuration called “separation of compute and storage.” Such a configuration requires that all your data flows over the Amazon network. 

Drill uses ZooKeeper to coordinate the Drill cluster. Each Drillbit registers itself in ZooKeeper so that it can be found by the other Drillbits and by Drill clients. Drill also uses ZooKeeper for its configuration (the storage and format configuration discussed in later chapters) and to implement its admission control mechanism (discussed in Chapter 9).

To connect to Drill, you use a Drill client—typically the JDBC or ODBC interfaces—and some client application such as SQLLine (discussed later) or Tableau. The client application connects to ZooKeeper to learn which Drillbits are running, and then (typically) connects to a random Drillbit, which acts as the “Foreman” to process your queries. 

Drill Operation: The 30,000-Foot View

We now know enough to sketch out how Drill operates in the most general sense:

  1. You store your data in a distributed filesystem (such as HDFS) in one of Drill’s many supported formats (Parquet, JSON, CSV, and so on).

  2. You start Drillbits on each of your data nodes. Drill forms a cluster through ZooKeeper.

  3. You connect to Drill using a tool of your choice. The tool queries ZooKeeper to locate a Drillbit, which becomes the Foreman. You submit queries to the Foreman, which parses the SQL and distributes work to the other “worker” Drillbits (the equivalent of the “map” step).

  4. The Drillbits obtain your data from the filesystem, perform the requested processing, and send the results back to the Foreman, which consolidates the results (the equivalent of the “reduce” step).

  5. The Foreman returns the results to your client tool, which displays them, charts them, and so on.

Chapter 2 showed how to set up Drill in a simple single-node “cluster” for testing and development; Chapter 9 shows how to deploy Drill into a production Hadoop-like (HDFS, MapR-FS, or Amazon S3) cluster.

Drill Is a Query Engine, Not a Database

Chapter 1 mentioned that one common source of confusion for people coming from the relational database world is that Drill uses SQL. If databases such as MySQL, Postgres, and Oracle also use SQL, doesn’t that mean that Drill is a database, too? Understanding this distinction is key to understanding Drill’s place in the world.

A full relational database includes a storage layer and thus provides two kinds of operations:

  • Data definition language (DDL) statements, such as CREATE DATABASE, CREATE TABLE, and CREATE INDEX

  • Data manipulation language (DML) statements (also called “CRUD,” for Create, Read, Update, and Delete), such as INSERT, SELECT, UPDATE, and DELETE

With Drill, HDFS (or a similar system) provides the storage layer. As a result, Drill needs no statements to create a database.

Drill treats files (actually, collections of files, which we discuss later) as tables. Drill’s version of the CREATE TABLE statement is CREATE TABLE AS (CTAS), which writes files directly to HDFS. Further, because files are scanned sequentially (more on this momentarily), we do not have indexes, so there are no CREATE INDEX statements. (As discussed later, big data query engines use partitioning in place of indexes.)
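As a sketch of CTAS (the dfs.tmp workspace is writable in Drill’s default configuration; the source path and column names are hypothetical and assume CSV files with headers):

-- Write the query results as files under the dfs.tmp workspace.
CREATE TABLE dfs.tmp.`ca_customers` AS
SELECT `name`, `state`
FROM dfs.`/data/customers/customers.csv`
WHERE `state` = 'CA';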

Although you can add files to HDFS, each file is immutable after it is written. Thus, Drill supports no INSERT, UPDATE, or DELETE statements.

Files carry their own schema (the so-called “schema-free” or “schema-on-read” model). Thus, there are no DDL statements related to creating table, column, or index schemas.

If Drill does not handle storage or schema definition, what is left? In big data, the primary operation is the (distributed) execution of queries: the SELECT statement. Thus, to a first approximation, you can think of Drill as a SQL engine that performs SELECT statements across large distributed datasets.  

Drill Operation Overview

We’re now ready to look at the next level of detail of how Drill processes each query. Understanding these steps provides context for later chapters. Big data clusters are highly complex, and it can often be difficult to know where to look when problems occur. Understanding Drill’s operation will help you determine whether the problem is in your query, your configuration, your files, and so on.

Drill Components

Drill is composed of four main parts:

  • The Drill client, which connects to a Foreman, submits SQL statements, and receives results

  • The Foreman, a Drill server selected to maintain your session state and process your queries; the Drill client spreads this role across the cluster by randomly selecting one of the Drillbits to act as the Foreman for each session

  • The worker Drillbit servers, which do the actual work of running your query

  • The ZooKeeper server, which coordinates the Drillbits within the Drill cluster

SQL Session State

Drill is a SQL engine. SQL is a stateful language: commands you issue in your session influence how later commands behave. As a trivial example, a session often begins with a USE statement to select the database to work with. Later SQL commands interpret names, such as SELECT * FROM `customers`, in the context of the selected database.
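A minimal sketch of this pattern, assuming a hypothetical dfs.sales workspace that contains the customers data:

USE dfs.sales;
-- `customers` now resolves relative to the dfs.sales workspace.
SELECT * FROM `customers` LIMIT 10;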

Although Drill is designed to be completely symmetrical, SQL requires that we break this symmetry at the connection level. In Drill you can connect to any random Drillbit, but after you’re connected, all your subsequent queries must go to that same Drillbit so that they can reuse the session state stored in that Drillbit. In fact, the Foreman for a session is simply the Drillbit that maintains state for your specific connection. (Other Drillbits will act as the Foreman for other connections.)

The state that Drill maintains includes the following (a short example follows the list):

  • The database or workspace that you’ve selected with the USE command

  • Session options set with the ALTER SESSION command

  • The state for in-flight queries (the Drill client can run multiple queries in parallel)

  • Temporary tables created by using the CREATE TEMPORARY TABLE AS (CTTAS) command
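As a brief illustration of session-scoped commands (store.format is a standard Drill session option; the temporary table and its source query are hypothetical):

-- Session options apply only to the current connection.
ALTER SESSION SET `store.format` = 'csv';

-- Temporary tables live only for the duration of the session.
CREATE TEMPORARY TABLE ca_customers_tmp AS
SELECT * FROM `customers` WHERE `state` = 'CA';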

Session state lives only as long as the connection is active. If your connection drops, you will need to re-create session state when you reconnect. (This is why the Drill client cannot reconnect automatically. If it did, a query that worked fine one moment might fail the next because of the hidden loss of session state.)

Statement Preparation

Drill follows classic database practice to run a SQL statement:

  • Prepare the statement (translate from SQL into a physical execution plan).

  • Execute the physical plan to produce the desired results.

SQL is well known for providing a “relational calculus” format: you specify what you want, not how to get it. For example:

SELECT `name` FROM `customers.csv` WHERE `state` = 'CA'

to find the names of customers in California. Drill translates this to “relational algebra,” such as “scan the customers.csv table, and then apply a filter that keeps the rows where state equals 'CA'.” Databases use the term “prepare” for this translation process.

The preparation phase itself consists of a number of steps:

  1. Parse the SQL statement into an internal parse tree.

  2. Perform semantic analysis on the parse tree by resolving names against the selected database, the schema (set of tables) in that database, the set of columns in those tables, and the set of functions defined within Drill.

  3. Convert the SQL parse tree into a logical plan, which can be thought of as a block diagram of the major operations needed to perform the given query.

  4. Convert the logical plan into a physical plan by performing a cost-based optimization step that looks for the most efficient way to execute the logical plan.

  5. Convert the physical plan into an execution plan by determining how to distribute work across the available worker Drillbits.

Drill uses Apache Calcite for much of this processing (up to the physical plan). Although the preceding steps apply to all SQL statements, as we saw earlier, Drill is a query engine, so our primary focus is on SELECT statements.

Figure 3-2 illustrates the steps.

Figure 3-2. Life cycle of a Drill query

Parsing and semantic analysis

The parse phase is the one that checks your SQL syntax. If you make a SQL syntax error, Drill will display a long error message that attempts to explain the error. (Error reporting is a work in progress in Drill.)

The next step looks up names, and this can seem complex to those accustomed to databases with schemas. At prepare time, Drill knows the names of databases (called storage plug-ins in Drill) and tables. However, Drill does not know the names of columns. So, it postpones some decisions about columns to runtime. In particular, Drill does not know the data types of columns until it runs the query. This means that type-related errors that you might expect to be caught at prepare time are, in Drill, found only at runtime.

Furthermore, Drill will accept any column name, even the names of columns that don’t actually exist in your table. Again, Drill does not know the set of available columns at prepare time. Because tables are just (collections of) files, it might even be that newer versions of the file have the requested column, but older files do not. Drill’s schema-on-read system can handle such scenarios, whereas a system based on metadata cannot.
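For example, the following sketch (a hypothetical directory of JSON files, with a loyalty_tier column that exists only in the newer files) prepares and runs without complaint; rows read from the older files simply return NULL for the missing column:

SELECT `name`, `loyalty_tier`
FROM dfs.`/data/customers_json`
LIMIT 20;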

Logical and physical plans

Once the names are known, Drill uses Calcite to work out a logical plan for the query. For example, Drill might identify that you want to join tables A and B. The next step, the physical plan, works out the best way to implement the join. (Like most query engines, Drill implements multiple join operators and chooses the lowest-cost alternative for each query.) 

The logical and physical plans are key to understanding how Drill operates and how to optimize queries. The Drill web console provides a query profile for each query. The query profile includes a visualization of the query plan in a form that is a hybrid of the logical and physical plans. (The diagram shows decisions made in the physical plan but omits many of the gory details, making the diagram closer to the logical plan.)

A query plan is a set of operators grouped into major fragments. An operator is a distinct operation such as “scan a file,” “filter a result set,” or “perform a join.” Often, several operators can be combined into a pipeline. For example, the scan operation is typically combined with a filter operation.

Because Drill is distributed, data must sometimes be exchanged among Drillbits (called a “shuffle” in MapReduce and Spark). For example, to perform a GROUP BY operation, like rows are collected from all scan operators and sent to the Drillbit that will perform the aggregation for each group.
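For instance, a query such as the following (over a hypothetical orders directory) forces an exchange: rows with the same state value must be routed to the same Drillbit before the counts can be computed:

SELECT `state`, COUNT(*) AS order_count
FROM dfs.`/data/orders`
GROUP BY `state`;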

The set of operations that can be done without an exchange is grouped into a thread of execution called a major fragment. The Drill web console query profile view uses colors to show the set of operators that belong to the same major fragment. Every major fragment starts with either a scan or an exchange receiver. Each major fragment ends with either an exchange sender or (for the root fragment) a send back to the Drill client.

Operators make up a tree. The root is the screen operator (the one that talks to the Drill client). The leaves are scan operators that read data into Drill. Likewise, major fragments form a tree, but at a higher level of abstraction. The Drill web console displays the tree with the screen (root) operator at the top, and the scans (leaves) at the bottom.

Data flows from the leaves to the intermediate nodes and ultimately to the root (screen). From the perspective of an internal node, Drill uses the term upstream to indicate the source of data, which is toward the leaves. Similarly, the term downstream indicates the destination of the data, which is toward the root. (Confusingly, in the Drill web console, upstream is toward the bottom of the image; downstream is toward the top.)

Distribution

The final step of query preparation is to distribute the physical plan across nodes. Because each major fragment is a single thread of execution, Drill performs distribution at the level of the major fragment. If, for instance, a major fragment scans a file, Drill will slice the fragment into pieces: one for each input file (if scanning a directory) and for each HDFS block in each file. The result is a set of slices (called minor fragments)—the smallest possible units of work in Drill.

Drill distributes the set of minor fragments to Drillbits for execution. The slicing process is a bit complex but well worth understanding because it helps you to understand the query profile and how best to tune query execution.

When parallelizing fragments, Drill considers where best to place each. For a scan, each fragment corresponds to one HDFS block for one file, so the best place to execute that fragment is on a node with the data. Because HDFS uses three-way replication, Drill has three “best” choices for the node to hold the fragment. Preferring to place a fragment on the same node as the data is called data affinity. Data affinity is only possible when Drill runs on the same node as the data. When compute and storage are separated (as when using Amazon S3), data affinity is ignored and Drill simply divides up work randomly.

Suppose that your file has 1,000 blocks, but your cluster has only 100 CPUs. Drill could create 1,000 different scan fragments, but doing so would simply create far more threads than CPUs, causing unnecessary CPU contention. Instead, Drill picks an ideal slice target per node. (By default, this number is 70% of the number of cores on the node.) Drill will create no more than this number of minor fragments. It then assigns files or file blocks to minor fragments, combining blocks if needed. In this example, with 100 cores, Drill will create 70 units of work, each of which will read approximately 14 blocks. Of course, if your query reads just a few blocks, Drill will create fewer minor fragments than the maximum.
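You can inspect and adjust this ceiling through Drill’s planner options. The sketch below uses the standard planner.width.max_per_node option; the value shown is arbitrary:

-- Show the current per-node parallelism setting.
SELECT * FROM sys.options WHERE name = 'planner.width.max_per_node';

-- Lower the ceiling for this session, for example on a busy shared cluster.
ALTER SESSION SET `planner.width.max_per_node` = 8;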

Parallelization of internal operators is similar, but without data affinity considerations. Instead, Drill might group data based on hash keys (for joins and grouping) or other considerations.

The slice target mentioned earlier applies to the slices of each major fragment. At present, Drill slices each major fragment independently. If your query is complex and has, say, 10 major fragments, Drill will parallelize each to 70% of the number of cores on your system. The result will be seven threads per core. Further, each query is parallelized independently of other queries (which might be planned on different Drillbits). As a result, in extreme cases Drill might create far more threads than the system has cores, leading to thrashing. If you find yourself pushing Drill to these extremes, consider enabling admission control (discussed in Chapter 9) to limit the number of queries.

Drill uses ZooKeeper to determine the set of Drillbits in the cluster. Thus, if you add a new Drillbit to the cluster, Drill will use it for new queries as soon as the Drillbit registers itself with ZooKeeper.

It is worth noting that the root fragment (the one with the screen operator) is special: it always has just one minor fragment, and that minor fragment always runs on the Foreman node for that session.

For those familiar with Apache Spark, Drill and Spark use similar structures but different names. Table 3-1 provides a quick reference.

Table 3-1. Comparison of distribution terminology
Drill            MapReduce          Spark
Query            Job                Job
Major fragment   Map or reduce      Stage
Minor fragment   Mapper, reducer    Stage
Operator         N/A                Task
Exchange         Shuffle            Shuffle

Statement Execution

After Drill has a finished query plan, the next step is to do something with the plan. Drill has two choices:

  • Return the plan to the user (this is what happens if you use the EXPLAIN PLAN statement; a brief example follows this list).

  • Execute the plan on the cluster (the most common case).
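A sketch of the first option, reusing the customer query shown earlier:

-- Returns the execution plan as text instead of running the query.
EXPLAIN PLAN FOR
SELECT `name` FROM `customers.csv` WHERE `state` = 'CA';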

The Foreman is responsible for orchestrating the query process. It will:

  • Optionally wait for an opportunity to execute (if admission control is enabled).

  • Send minor fragments to Drillbits for execution.

  • Monitor progress of execution, watching for failures (due to problems in the query or the failure of a Drillbit).

  • Track minor fragments as they complete.

  • After the root fragment completes, shut down the query and write the query profile.

Drill uses a state machine for each query to track the progress of each minor fragment. The Drill web console collects this information to show the state of active queries. When a query completes, the final state of the query (along with a wealth of detailed data) is written to the query profile, which you can view in the Drill web console. In fact, the best way to develop a deeper understanding of the process described in this section is to spend time exploring the profile for a completed query.

Data representation

We have seen that Drill distributes work to Drillbits in the form of minor fragments, each of which contains a set of operators. Leaf fragments contain a set of one or more scan operators. The next question is: how does Drill represent the rows within the query?

Drill is a SQL engine, meaning that data is represented as rows and columns. JDBC and ODBC present query results one row at a time, with methods to access each column.

Drill, however, is a big data engine; processing data at the row level would not be very efficient. Thus, even though Drill processes data row by row, it does so by grouping rows into record batches that range in size from a few thousand up to 65,536 rows. Record batches are the unit of exchange at shuffles, and the units of data returned to the Drill client. The JDBC and ODBC drivers receive record batches and then iterate over them to return individual rows.

Suppose that your query reads a block of data that contains a million rows. Drill will read this data as a set of, say, 20 record batches with 50,000 records per batch. (The actual batch size varies depending on data format and size.) The minor fragment passes the record batches from one operator to the next, applying varying transforms. When a batch reaches the top of the fragment, Drill sends the batch over the wire to the next minor fragment up the tree.

Query engines typically use one of two formats for data: row oriented or columnar. Drill is a columnar engine. This means that Drill stores data as columns, termed value vectors. Think of a value vector as an array of values. For example, if you have rows of data that contain an integer column N, all values for N are stored together in an array. Rows are simply a common index into a group of value vectors. The Drill record batch is thus simply a collection of value vectors.

Some of Drill’s older materials talked about vectorized processing, the idea being that Drill might eventually use the Single Instruction, Multiple Data (SIMD) instructions of modern processors to speed up execution. At present, this is mostly still an aspiration, for two reasons. First, Drill uses SQL data types, which often include null values that SIMD instructions do not support. Second, because Drill is SQL-based, it tends to work row by row rather than column by column, which makes it difficult to use a SIMD instruction to process a column. (The Drill project welcomes contributions to solve challenges such as this.)

You will encounter the columnar representation in two ways. First, the query profile displays the number of record batches processed by each operator and fragment. Second, if you create Drill extensions (as described in later chapters), you will work directly with Drill’s value vectors and record batches.

Drill’s value vectors were the starting point for the Apache Arrow project, which provided a general-purpose columnar format for a variety of tools. Although Drill does not yet use Arrow, such use is possible in the future. (Drill has implemented substantial additional functionality not yet available in Arrow, so additional work is needed before the two can reconverge.)

Low-Latency Features

We mentioned earlier that Drill is a low-latency query engine. Drill uses a number of techniques to reduce latency.

Long-lived Drillbits

Even though MapReduce and thus Hive are known for extreme scalability, they are not particularly fast. Part of this is due to the way that MapReduce itself works: YARN must start a new process for each mapper or reducer.

Spark is faster because each application reuses the same set of Spark executors for each of its jobs. However, each application uses a separate set of executors, which incurs startup costs.

Drill takes the Spark idea one step further: it uses a single executor (the Drillbit) per node and for all minor fragments on that node. This architecture is similar to that used for Impala (the impalad process) and for Hive LLAP (Live Long and Process).

Because Drillbits are long-running, Drill can execute minor fragments immediately upon receipt without the overhead of starting a new process.

This speed-up comes at a cost, however: a loss of workload isolation. In some cases, excessive memory use by one query can cause another query to run out of memory. Excessive computation by one query can slow down all queries. This “noisy neighbor” problem occurs in many concurrent systems, and it typically becomes a problem in Drill only under heavy load. Because even separate processes will run into problems under heavy load, Drill’s design is not an unreasonable trade-off. Drill’s admission control mechanism can help reduce the noisy-neighbor problem if it does occur.

Code generation

Spark requires separate executors for each application because Spark users express their applications as code (in Scala, Java, or Python). Similarly, MapReduce must distribute application code to each mapper or reducer. Drill avoids the code distribution step for two reasons:

  • Drill uses SQL as its input language, avoiding the need to execute application-specific code.

  • Drill generates Java code on each Drillbit for each query, allowing a single Drillbit to handle any number of queries.

Earlier, we discussed that Drill translates a SQL statement into a physical plan sent to the Drillbits. To gain maximum performance, each operator translates its part of the physical plan to Java code, which the operator then executes against each of its incoming record batches.

Local code generation not only avoids the cost and complexity of code distribution but is an essential aspect of Drill’s schema-on-read approach. Only when an operator sees the first batch of data will it know the type of each column. For example, a query might contain the clause A + B. Until we know the types of the columns, we don’t know whether we need to perform integer, floating-point, or decimal arithmetic. Further, different scan operators might see different versions of the file schema, and so each fragment might, in some cases, see a slightly different schema and generate different code. Thus, Drill cannot generate code up front, nor can the same code be used by all fragments.

Drill processes this sample expression in a Project operator. As soon as that operator sees the first batch of data it can determine, say, that A is an INT and B is a DOUBLE (floating-point number). This allows the operator to generate Java code to convert A from INT to DOUBLE, and then add the two DOUBLEs.
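In SQL terms, the generated code behaves as if Drill had rewritten the expression with an explicit cast once the types became known. A rough sketch, assuming a hypothetical table t with columns A (INT) and B (DOUBLE):

-- What the query says:
SELECT A + B FROM t;

-- What the generated code effectively computes once the types are known:
SELECT CAST(A AS DOUBLE) + B FROM t;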

This is a trivial example. Code generation is used heavily in filter, aggregation, join, and other operations. This not only allows Drill to use the same generic Drillbit for all queries, but also lets it execute (Drill-generated) code specific to each query.

Network exchanges

The final key performance tool in Drill is its “in-memory” processing model. MapReduce writes shuffle data to disk after the map phase, to be read in the reduce phase. Drill avoids intermediate disk files by transferring data directly over the network from the Drillbit running an upstream (sending) fragment to the Drillbit running the downstream (receiving) fragment.

The Drill value vector format is designed to optimize exchanges: the same data buffers used for processing data within operators are used in Drill’s Netty-based network layer. Drill uses a sophisticated handshake mechanism to implement backpressure: a signal to the producing fragments to stop sending if the downstream fragment can’t keep up. You will see this delay reflected in the times reported in the query profile.

If a query runs slower than you expect, it might be due to a related issue: some operators (such as the sort operator) cannot finish until they receive all of their inputs. If data is skewed (some fragments process much more data than others), the sort must wait for the slowest sender, even if all other fragments have completed. The query profile timing numbers and timing charts help you to identify such cases.

Although Drill uses network exchanges to avoid shuffle files, there are times when Drill will still write to disk. Buffering operators, such as sort, hash aggregate, and hash join operators, must store all (or for a join, part) of their input in memory. If the query includes more data than will fit in memory, Drill operators will spill data to a temporary file on the local disk (or, if configured to do so, to the distributed filesystem). Operators that support spilling gather spill statistics that are available in the query profile. Although spilling ensures that queries complete, spilling is slower than in-memory processing. If a query begins to slow unexpectedly as data sizes increase, check the query profile to see whether operators have begun to spill.
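If the profile shows heavy spilling, one common mitigation is to give buffering operators more memory per query. The option below is a standard Drill planner option; the value (in bytes, here 4 GB) is only an example:

-- Allow buffering operators (sort, hash aggregate, hash join) to use
-- more memory per query on each node.
ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 4294967296;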

Conclusion

People coming to big data from a traditional relational database sometimes expect Drill to “just work,” assuming that, like a traditional database, Drill has full control over data layout, memory, CPU usage, plan optimization, and so on. The previous section should help you to understand the many complexities that Drill must manage in a big data system. Data is stored in HDFS. As we will soon see, the way that you store data (that is, partitioning) has a large impact on query performance. Because Drill is schema-on-read, the format of the file can affect query success if the file format is ambiguous or inconsistent.

The performance of a query is heavily influenced by disk I/O performance, by network performance, by the amount of memory available to Drill, and by the load placed on the system by other Drill queries and other compute tools.

Apache Drill is a very handy tool to have in your big data toolkit. But Drill is not magic: it is only as good as the design of your data and cluster. The preceding overview has introduced some of the concepts that you should be aware of as you tune your application; we expand on these ideas in later chapters.

This chapter provided only a high-level overview of Drill’s operation. Much more detailed information is available online in the Apache Drill documentation.

1 Distributed data storage is a complex and fascinating topic that we only touch upon here. For an in-depth discussion, see the excellent O’Reilly book Designing Data-Intensive Applications by Martin Kleppmann.
