Designing the schema

A point repeated several times throughout this book is the importance of knowing our data and what it will be used for. In this practical exercise, that knowledge matters more than ever: we will design our schema step by step, considering every detail involved.

In the previous section, we defined the data set that we want to extract from the web server access logs. The next step is to list the requirements associated with the clients of the database. As stated previously, one process will be responsible for capturing the information from the log files and writing it to MongoDB, and another process will read the information already persisted in the database.

One point of concern is write performance in MongoDB, because it is important to ensure that the information is recorded almost in real time. Since we do not have a prior estimate of the data volume per second, we will be optimistic: let's assume a huge volume of data flowing from the web server to our MongoDB instance at all times.

Given this requirement, we must also worry about how the data will grow over time: more events mean more documents inserted. These are the main requirements our system must satisfy to operate well.

At first sight, we might imagine that it is all about defining a document format and persisting it into a collection. But thinking this way ignores MongoDB's well-known schema flexibility. So we will analyze the throughput problem in order to define where and how to persist the information.

Capturing an event request

Analyzing the web server throughput is perhaps the simplest task: counting the events gives us a number that represents the web server's throughput.

So, if for each generated event we write a document recording when the operation happened, does that mean we can easily get the throughput? Yes! The simplest MongoDB document from which we can analyze throughput looks like the following:

{
    "_id" : ObjectId("5518ce5c322fa17f4243f85f"),
    "request_line" : "191.32.254.162 [29/Mar/2015:18:35:26 -0400] \"GET /media/catalog/product/image/2.jpg HTTP/1.1\" 200 0.000 867"
}

Executing the count method on the collection of these documents gives us the web server's throughput. Assuming that we have a collection called events, to find out the throughput we execute the following command in the mongo shell:

db.events.find({}).count()

This command returns the total number of events generated on our web server so far. But is this the number we want? No. There is no point in having a total number of events without placing it in a period of time. Would it be of any use to know that the web server has processed 10,000 events so far without knowing when we started recording them, or even when the last event was generated?

If we want to count the events in a given period of time, the easiest way is to include a field that represents the event's creation date. An example of such a document is shown as follows:

{
    "_id" : ObjectId("5518ce5c322fa17f4243f85f"),
    "request_line" : "191.32.254.162 [29/Mar/2015:18:35:26 -0400] \"GET /media/catalog/product/image/2.jpg HTTP/1.1\" 200 0.000 867",
    "date_created" : ISODate("2015-03-30T04:17:32.246Z")
}
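With date_created in place, counting the requests in an arbitrary window becomes a simple range query; here is a minimal sketch in the mongo shell (the window boundaries are illustrative):

db.events.find({
    date_created: {
        $gte: ISODate("2015-03-30T04:00:00Z"),
        $lt: ISODate("2015-03-30T05:00:00Z")
    }
}).count()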

Going further, instead of a single total we can break the requests down per period of time in one query using the aggregation framework. Executing the following command on the mongo shell returns the total number of requests per minute:

db.events.aggregate([
{
    $group: {
        _id: {
            request_time: {
                month: {
                    $month: "$date_created"
                },
                day: {
                    $dayOfMonth: "$date_created"
                },
                year: {
                    $year: "$date_created"
                },
                hour: {
                    $hour: "$date_created"
                },
                min: {
                    $minute: "$date_created"
                }
            }
        },
        count: {
            $sum: 1
        }
    }
}])

Note

The aggregation pipeline has its limits. If the aggregate command returns a single document that exceeds the BSON document size limit, the command produces an error. Since MongoDB's 2.6 release, the aggregate command returns a cursor, so it can return result sets of any size.

You can find more about aggregation pipeline limits in the MongoDB reference manual at http://docs.mongodb.org/manual/core/aggregation-pipeline-limits/.
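Also worth knowing: each pipeline stage is limited to 100 MB of RAM. If a stage needs more than that, the aggregation can be allowed to spill to temporary files on disk by passing the allowDiskUse option. A minimal sketch, reusing the fields we already have:

db.events.aggregate(
    [
        { $group: { _id: { $minute: "$date_created" }, count: { $sum: 1 } } }
    ],
    { allowDiskUse: true }
)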

In the pipeline, we defined a $group stage to group the documents by month, day, year, hour, and minute, and we count the occurrences using the $sum operator. Executing this aggregate command produces documents such as these:

{
    "_id": {
        "request_time": {
            "month": 3,
            "day": 30,
            "year": 2015,
            "hour": 4,
            "min": 48
        }
    },
    "count": 50
}
{
    "_id": {
        "request_time": {
            "month": 3,
            "day": 30,
            "year": 2015,
            "hour": 4,
            "min": 38
        }
    },
    "count": 13
}
{
    "_id": {
        "request_time": {
            "month": 3,
            "day": 30,
            "year": 2015,
            "hour": 4,
            "min": 17
        }
    },
    "count": 26
}

From this output, we can tell how many requests the web server received in each time period. This is the result of the $group stage's behavior: it collects the documents into groups based on one or more fields. Here, we fed each part of the date_created field (the month, day, year, hour, and minute) into the group stage of the aggregation pipeline.
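If we only care about a specific window, a $match stage can be placed in front of the $group stage so that only the matching documents are grouped; a sketch with illustrative dates:

db.events.aggregate([
    {
        $match: {
            date_created: {
                $gte: ISODate("2015-03-30T04:00:00Z"),
                $lt: ISODate("2015-03-30T05:00:00Z")
            }
        }
    },
    {
        $group: {
            _id: {
                hour: { $hour: "$date_created" },
                min: { $minute: "$date_created" }
            },
            count: { $sum: 1 }
        }
    }
])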

However, if you want to know which resource on your web server is accessed most often, or which one has the highest throughput, none of these options fits the request. Fortunately, a quick solution is within easy reach: deconstruct the event and create a richer document, as you can see in the following example:

{
    "_id" : ObjectId("5519baca82d8285709606ce9"),
    "remote_address" : "191.32.254.162",
    "date_created" : ISODate("2015-03-29T18:35:25Z"),
    "http_method" : "GET",
    "resource" : "/media/catalog/product/cache/1/image/200x267/9df78eab33525d08d6e5fb8d27136e95/2/_/2.jpg",
    "http_version" : "HTTP/1.1",
    "status": 200,
    "request_time" : 0,
    "request_length" : 867
}

By using this document design, it is possible to know each resource's throughput per minute with the help of the aggregation framework:

db.events.aggregate([
    {
        $group: {
            _id: "$resource",
            hits: {
                $sum: 1
            }
        }
    },
    {
        $project: {
            _id: 0,
            resource: "$_id",
            throughput: {
                $divide: [
                    "$hits",
                    1440
                ]
            }
        }
    },
    {
        $sort: {
            throughput: -1
        }
    }
])

In the preceding pipeline, the first stage groups by resource and counts how many requests each resource received during an entire day. The next stage uses the $project operator together with the $divide operator to take the number of hits on a given resource and calculate the average per minute, dividing by 1,440, the total number of minutes in a day. Finally, we sort the results in descending order to see which resources have the highest throughput.

To keep things clear, we will execute the pipeline step by step and explain the results for each step. In the execution of the first phase, we have the following:

db.events.aggregate([{$group: {_id: "$resource", hits: {$sum: 1}}}])

This execution groups the documents of the events collection by the resource field and counts the occurrences into the hits field using the $sum operator with the value 1. The result returned looks like the following:

{ "_id" : "/", "hits" : 5201 }
{ "_id" : "/legal/faq", "hits" : 1332 }
{ "_id" : "/legal/terms", "hits" : 3512 }

In the second phase of the pipeline, we use the operator $project, which will give us the value of hits per minute:

db.events.aggregate([
    {
        $group: {
            _id: "$resource",
            hits: {
                $sum: 1
            }
        }
    },
    {
        $project: {
            _id: 0,
            resource: "$_id",
            throughput: {
                $divide: [
                    "$hits",
                    1440
                ]
            }
        }
    }
])

The following is the result of this phase:

{ "resource" : "/", "throughput" : 3.6118055555555557 }
{ "resource" : "/legal/faq", "throughput" : 0.925 }
{ "resource" : "/legal/terms", "throughput" : 2.438888888888889 }

The last phase of the pipeline is to order the results by throughput in descending order:

db.events.aggregate([
    {
        $group: {
            _id: "$resource",
            hits: {
                $sum: 1
            }
        }
    },
    {
        $project: {
            _id: 0,
            resource: "$_id",
            throughput: {
                $divide: [
                    "$hits",
                    1440
                ]
            }
        }
    },
    {
        $sort: {
            throughput: -1
        }
    }
])

The output produced is like the following:

{ "resource" : "/", "throughput" : 3.6118055555555557 }
{ "resource" : "/legal/terms", "throughput" : 2.438888888888889 }
{ "resource" : "/legal/faq", "throughput" : 0.925 }

It looks like we have succeeded in obtaining a good design for our document. We can now extract the desired analysis, and others as we will see further on, so we could stop here. Wrong! Let's review our requirements, compare them to the model we designed, and figure out whether this is really the best solution.

Our desire is to know the measure of throughput per minute for all web server resources. In the model we designed, one document is created per event in our web server, and by using the aggregation framework we can calculate the information that we need for our analysis.

What can be wrong with this solution? Well, if you think it's the number of documents in the collection, you are right. One document per event can generate a huge collection, depending on the web server traffic. Obviously, we could adopt a sharding strategy and distribute the collection across many hosts. But first, we will see how we can take advantage of MongoDB's schema flexibility to reduce the collection size and optimize the queries.

A one-document solution

Having one document per event may be advantageous when we need a huge amount of detail for analysis. But in the problem we are trying to solve, it is expensive to persist one document for each HTTP request.

We will take advantage of MongoDB's schema flexibility, which lets documents grow over time. The following proposal's main goal is to reduce the number of persisted documents, while also optimizing read and write queries on our collection.

The document we are looking for should provide us with all the information needed to know a resource's throughput in requests per minute; thus, we can have a document with this structure:

  • A field with the resource
  • A field with the event date
  • A field with the total hits for the day, and a subdocument with the hits for each minute of the day

The following document implements all the requirements described in the preceding list:

{
   "_id" : ObjectId("552005f5e2202a2f6001d7b0"),
   "resource" : "/",
   "date" : ISODate("2015-05-02T03:00:00Z"),
   "daily" : 215840,
   "minute" : {
      "0" : 90,
      "1" : 150,
      "2" : 143,
      ...
      "1439" : 210
   }
}

With this document design, we can retrieve the number of events that happened on a given resource in every minute. We can also know, via the daily field, the total number of requests during the day, and use it to calculate whatever we want, such as requests per minute or requests per hour.
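For instance, a rough requests-per-minute figure for each resource can be derived straight from the daily counter; a minimal sketch in the mongo shell (1,440 being the number of minutes in a day):

db.events.aggregate([
    {
        $project: {
            _id: 0,
            resource: 1,
            date: 1,
            avg_rpm: { $divide: ["$daily", 1440] }
        }
    }
])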

To demonstrate the write and read operations we could make on this collection, we will make use of JavaScript code running on the Node.js platform. So, before continuing, we must make sure we have Node.js installed on our machine.

Note

If you need help, you will find more information at http://nodejs.org.

The first thing we should do is create a directory where our application will live. In a terminal, execute the following command:

mkdir throughput_project

Next, we navigate to the directory we created and initialize the project:

cd throughput_project
npm init

Answer all the questions asked by the wizard to create the initial structure of our new project. At this point, we will have a package.json file based on the answers we gave.

The next step is to set up the MongoDB driver for our project. We can do this by editing the package.json file, including the driver reference for its dependencies, or by executing the following command:

npm install mongodb --save

The preceding command will install the MongoDB driver for our project and save the reference in the package.json file. Our file should look like this:

{
  "name": "throughput_project",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "test": "echo "Error: no test specified" && exit 1"
  },
  "author": "Wilson da Rocha França",
  "license": "ISC",
  "dependencies": {
    "mongodb": "^2.0.25"
  }
}

The last step is to create the app.js file with our sample code. The following sample code shows how to count an event on our web server and record it in our collection:

var fs = require('fs');
var util = require('util');
var mongo = require('mongodb').MongoClient;
var assert = require('assert');

// Connection URL
var url = 'mongodb://127.0.0.1:27017/monitoring';
// Create the date object and set hours, minutes,
// seconds and milliseconds to 00:00:00.000
var today = new Date();
today.setHours(0, 0, 0, 0);

var logDailyHit = function(db, resource, callback){
  // Get the events collection
  var collection = db.collection('events');
  // Update daily stats
  collection.update({resource: resource, date: today},
    {$inc : {daily: 1}}, {upsert: true},
    function(error, result){
      assert.equal(error, null);
      assert.equal(1, result.result.n);
      console.log("Daily Hit logged");
      callback(result);
  });
}

var logMinuteHit = function(db, resource, callback) {
  // Get the events collection
  var collection = db.collection('events');
  // Get current minute to update
  var currentDate = new Date();
  var minute = currentDate.getMinutes();
  var hour = currentDate.getHours();
  // We calculate the minute of the day
  var minuteOfDay = minute + (hour * 60);
  var minuteField = util.format('minute.%s', minuteOfDay);
  // Create a update object
  var update = {};
  var inc = {};
  inc[minuteField] = 1;
  update['$inc'] = inc;

  // Update minute stats
  collection.update({resource: resource, date: today},
    update, {upsert: true}, function(error, result){
      assert.equal(error, null);
      assert.equal(1, result.result.n);
      console.log("Minute Hit logged");
      callback(result);
  });
}

// Connect to MongoDB and log
mongo.connect(url, function(err, db) {
  assert.equal(null, err);
  console.log("Connected to server");
  var resource = "/";
  logDailyHit(db, resource, function() {
    logMinuteHit(db, resource, function(){
      db.close();
      console.log("Disconnected from server")
      });
    });
});

The preceding sample code is quite simple. The logDailyHit function is responsible for logging an event by incrementing the document's daily field by one unit. The second function, logMinuteHit, is responsible for logging the occurrence of an event by incrementing the minute field that represents the current minute of the day. Both functions issue an update with the upsert option set to true, so that if the document does not exist, it will be created.
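The same upsert logic can be tried by hand in the mongo shell; in this sketch both increments are combined into a single $inc for brevity, and the date and minute number are illustrative:

db.events.update(
    { resource: "/", date: ISODate("2015-04-04T03:00:00Z") },
    { $inc: { daily: 1, "minute.183": 1 } },
    { upsert: true }
)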

Running the code will record one event on the resource "/". To do so, navigate to the project directory and execute the following command:

node app.js

If everything is fine, we should see the following output after running the command:

Connected to server
Daily Hit logged
Minute Hit logged
Disconnected from server

To get a feel for this, we will execute a findOne command on the mongo shell and watch the result:

db.events.findOne()
{
   "_id" : ObjectId("5520ade00175e1fb3361b860"),
   "resource" : "/",
   "date" : ISODate("2015-04-04T03:00:00Z"),
   "daily" : 383,
   "minute" : {
      "0" : 90,
      "1" : 150,
      "2" : 143
   }
}

In addition to everything the previous models give us, this one has some advantages over them. The first thing we notice is that every time we register a new event on the web server, we manipulate only one document. Another advantage is how easily we can find the information we are looking for: for a given resource, an entire day's information lives in a single document, so each query touches far fewer documents.

The way this schema design deals with time will give us many benefits when we think about reports. Both textual and graphical representations can be easily extracted from this collection for historical or real-time analysis.

However, as with the previous approaches, we have to deal with a few limitations. As we have seen, we increment both the daily field and a minute field in the document as events occur on the web server. When no event has yet been reported for a resource on a given day, a new document is created, since we use the upsert option on the update. The same thing happens when an event occurs on a resource for the first time in a given minute: the $inc operator creates the new minute field and sets its value to 1. This means that our document grows over time and will exceed the size MongoDB initially allocated for it. MongoDB automatically performs a reallocation operation every time the space allocated to a document fills up, and these reallocations, happening throughout the entire day, have a direct impact on the database's performance.

What should we do? Live with it? No. We can reduce the impact of reallocation by adding a process that preallocates space for our document. In short, we will make the application responsible for creating the document with all the minutes a day can have, initializing every field with the value 0. By doing this, we avoid triggering too many reallocation operations in MongoDB during the day.

Note

To learn more about record allocation strategies, visit the MongoDB reference user manual at http://docs.mongodb.org/manual/core/storage/#record-allocation-strategies.

To give an example of how we can preallocate the document space, we can create a new function in our app.js file:

var fs = require('fs');
var util = require('util');
var mongo = require('mongodb').MongoClient;
var assert = require('assert');

// Connection URL
var url = 'mongodb://127.0.0.1:27017/monitoring';

var preAllocate = function(db, resource, callback){
  // Get the events collection
  var collection = db.collection('events');
  var now = new Date();
  now.setHours(0,0,0,0);
  // Create the minute document
  var minuteDoc = {};
  for(var i = 0; i < 1440; i++){
    minuteDoc[i] = 0;
  }
  // Update minute stats
  collection.update(
      {resource: resource,
        date: now,
        daily: 0},
      {$set: {minute: minuteDoc}},
      {upsert: true}, function(error, result){
        assert.equal(error, null);
        assert.equal(1, result.result.n);
        console.log("Pre-allocated successfully!");
        callback(result);
  });
}

// Connect to MongoDB and log
mongo.connect(url, function(err, db) {
  assert.equal(null, err);
  console.log("Connected to server");
  var resource = "/";
  preAllocate(db, resource, function(){
    db.close();
    console.log("Disconnected from server")
  });
});


To preallocate space for the "/" resource on the current date, just run the following command:

node app.js

The output of the execution is something like this:

Connected to server
Pre-allocated successfully!
Disconnected from server

We can run a findOne command on the mongo shell to check the new document. The document created is very long, so we will show just a piece of it:

db.events.findOne();
{
   "_id" : ObjectId("551fd893eb6efdc4e71260a0"),
   "daily" : 0,
   "date" : ISODate("2015-04-06T03:00:00Z"),
   "resource" : "/",
   "minute" : {
      "0" : 0,
      "1" : 0,
      "2" : 0,
      "3" : 0,
      ...
      "1439" : 0,
   }
}

It is recommended to preallocate the document before midnight to keep the application running smoothly. If we schedule this creation with an appropriate safety margin, we avoid the risk of the document being created only when the first event occurs after midnight.
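One possible way to schedule this (a sketch, not part of the code above) is to move the preallocation into its own script, hypothetically named preallocate.js and adjusted to target the next day's date, and trigger it with a cron entry shortly before midnight:

# Hypothetical crontab entry: preallocate the next day's documents at 23:50
50 23 * * * /usr/bin/node /path/to/throughput_project/preallocate.js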

Well, with the reallocation problem solved, we can go back to the issue that initiated our document redesign: the growth of our data.

Even after reducing the number of documents in our collection to one document per resource per day, we can still run into a storage space problem. This can happen when too many resources receive events on our web server, and we cannot predict how many new resources our application will have over its lifecycle. To solve this issue, we will use two different techniques: TTL indexes and sharding.

TTL indexes

We do not always need to keep all log information stored on our servers forever. It has become standard practice among operations teams to limit the number of log files kept on disk.

By the same reasoning, we can limit the number of documents living in our collection. To make this happen, we create a TTL index on the date field, specifying how long a document should exist in the collection. Remember that, once we create a TTL index, MongoDB automatically removes expired documents from the collection.

Suppose that the event hit information is useful for just one year. We will create an index on the date field with the expireAfterSeconds property set to 31556926, which corresponds to one year in seconds.

The following command, executed on the mongo shell, creates the index on our events collection:

db.events.createIndex({date: 1}, {expireAfterSeconds: 31556926})

If the index does not exist, the output should look like this:

{
   "createdCollectionAutomatically" : false,
   "numIndexesBefore" : 1,
   "numIndexesAfter" : 2,
   "ok" : 1
}

Once this is done, our documents will live in our collection for one year, based on the date field, and after this MongoDB will remove them automatically.
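To confirm that the TTL index was created as expected, we can list the collection's indexes and, if we are curious, look at the TTL monitor's counters in the mongo shell:

// List the indexes and check the expireAfterSeconds property
db.events.getIndexes()

// Counters for the background TTL monitor (passes and deleted documents)
db.serverStatus().metrics.ttl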

Sharding

If you are one of those people who have infinite resources and would like to have a lot of information stored on disk, then one solution to mitigate the storage space problem is to distribute the data by sharding your collection.

As we stated before, choosing the shard key deserves extra effort, since it is through the shard key that we guarantee our read and write operations are evenly distributed across the shards, that is, that each query targets a single shard or only a few shards in the cluster.

Since we have full control over how many resources (or pages) exist on our web server and how this number will grow or shrink, the resource name becomes a good candidate for the shard key. However, if one resource receives far more requests (or events) than the others, that shard will be overloaded. To avoid this, we include the date field in the shard key, which also gives us better performance for queries that include this field in their criteria.

Remember: our goal is not to explain how to set up a sharded cluster. We will simply present the command that shards our collection, assuming you have already created your sharded cluster.

To shard the events collection with the shard key we chose, we will execute the following command on the mongos shell:

mongos> sh.shardCollection("monitoring.events", {resource: 1, date: 1})

The expected output is:

{ "collectionsharded" : "monitoring.events", "ok" : 1 }

Tip

If our events collection already has any documents in it, we need to create an index whose prefix is the shard key before sharding the collection. To create the index, execute the following command:

db.events.createIndex({resource: 1, date: 1})

With sharding enabled on the collection, we have more capacity to store data in the events collection, and a potential performance gain as the data grows.
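As data starts to flow in, we can check how it is being spread across the shards from the mongos shell; the output will vary with your cluster:

// Overview of shards, databases, and chunk ranges
sh.status()

// Per-shard document and data size distribution for the collection
db.events.getShardDistribution()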

Now that we've designed our document and prepared our collection to receive a huge amount of data, let's perform some queries!

Querying for reports

Until now, we have focused our efforts on storing data in our database. This does not mean that we are not concerned about read operations. Everything we did was guided by the profile we outlined for our application, trying to cover all the requirements and prepare our database for whatever comes our way.

So, we will now illustrate some of the possibilities that we have to query our collection, in order to build reports based on the stored data.

If what we need is real-time information about the total hits on a resource, we can use our daily field to query the data. With this field, we can determine the total hits on a resource at a particular time of day, or even the average requests per minute on the resource based on the minute of the day.

To query the total hits for the current day, we will create a new function called getCurrentDayhitStats and, to query the hits in the current minute, we will create the getCurrentMinuteStats function in the app.js file:

var fs = require('fs');
var util = require('util');
var mongo = require('mongodb').MongoClient;
var assert = require('assert');

// Connection URL
var url = 'mongodb://127.0.0.1:27017/monitoring';

var getCurrentDayhitStats = function(db, resource, callback){
  // Get the events collection
  var collection = db.collection('events');
  var now = new Date();
  now.setHours(0,0,0,0);
  collection.findOne({resource: "/", date: now},
    {daily: 1}, function(err, doc) {
    assert.equal(err, null);
    console.log("Document found.");
    console.dir(doc);
    callback(doc);
  });
}

var getCurrentMinuteStats = function(db, resource, callback){
  // Get the events collection
  var collection = db.collection('events');
  var now = new Date();
  // get hours and minutes and hold
  var hour = now.getHours();
  var minute = now.getMinutes();
  // calculate minute of the day to create field name
  var minuteOfDay = minute + (hour * 60);
  var minuteField = util.format('minute.%s', minuteOfDay);
  // set hour to zero to put on criteria
  now.setHours(0, 0, 0, 0);
  // create the project object and set minute of the day value
  var project = {};
  project[minuteField] = 1;
  collection.findOne({resource: "/", date: now},
    project, function(err, doc) {
    assert.equal(err, null);
    console.log("Document found.");
    console.dir(doc);
    callback(doc);
  });
}

// Connect to MongoDB and log
mongo.connect(url, function(err, db) {
  assert.equal(null, err);
  console.log("Connected to server");
  var resource = "/";
  getCurrentDayhitStats(db, resource, function(){
    getCurrentMinuteStats(db, resource, function(){
      db.close();
      console.log("Disconnected from server");
    });
  });
});

To see the magic happening, we should run the following command in the terminal:

node app.js

If everything is fine, the output should look like this:

Connected to server
Document found.
{ _id: 551fdacdeb6efdc4e71260a2, daily: 27450 }
Document found.
{ _id: 551fdacdeb6efdc4e71260a2, minute: { '183': 142 } }
Disconnected from server

Another possibility is to retrieve daily information to calculate the average requests per minute of a resource, or to get the set of data between two dates to build a graph or a table.

The following code has two new functions, getAverageRequestPerMinuteStats, which calculates the average number of requests per minute of a resource, and getBetweenDatesDailyStats, which shows how to retrieve the set of data between two dates. Let's see what the app.js file looks like:

var fs = require('fs');
var util = require('util');
var mongo = require('mongodb').MongoClient;
var assert = require('assert');

// Connection URL
var url = 'mongodb://127.0.0.1:27017/monitoring';

var getAverageRequestPerMinuteStats = function(db, resource, callback){
  // Get the events collection
  var collection = db.collection('events');
  var now = new Date();
  // get hours and minutes and hold
  var hour = now.getHours();
  var minute = now.getMinutes();
  // calculate minute of the day to get the avg
  var minuteOfDay = minute + (hour * 60);
  // set hour to zero to put on criteria
  now.setHours(0, 0, 0, 0);
  // create the project object and set minute of the day value
  collection.findOne({resource: resource, date: now},
    {daily: 1}, function(err, doc) {
    assert.equal(err, null);
    console.log("The avg rpm is: "+doc.daily / minuteOfDay);
    console.dir(doc);
    callback(doc);
  });
}

var getBetweenDatesDailyStats = function(db, resource, dtFrom, dtTo, callback){
  // Get the events collection
  var collection = db.collection('events');
  // set hours for date parameters
  dtFrom.setHours(0,0,0,0);
  dtTo.setHours(0,0,0,0);
  collection.find({date:{$gte: dtFrom, $lte: dtTo}, resource: resource},
  {date: 1, daily: 1},{sort: [['date', 1]]}).toArray(function(err, docs) {
    assert.equal(err, null);
    console.log("Documents founded.");
    console.dir(docs);
    callback(docs);
  });
}

// Connect to MongoDB and log
mongo.connect(url, function(err, db) {
  assert.equal(null, err);
  console.log("Connected to server");
  var resource = "/";
  getAverageRequestPerMinuteStats(db, resource, function(){
    var now = new Date();
    var yesterday = new Date(now.getTime());
    yesterday.setDate(now.getDate() -1);
    getBetweenDatesDailyStats(db, resource, yesterday, now, function(){
      db.close();
      console.log("Disconnected from server");
    });

  });
});

As you can see, there are many ways to query the data in the events collection. These were very simple examples of how to extract the data, but they are functional and reliable ones.
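As one last illustration, the same collection can also answer site-wide questions; the following sketch totals the hits of every resource for a given day (the date is illustrative):

db.events.aggregate([
    { $match: { date: ISODate("2015-04-06T03:00:00Z") } },
    { $group: { _id: "$date", total_hits: { $sum: "$daily" } } }
])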
