CHAPTER 10

image

Connecting to a Data Store

If you are building an application with Node.js, you are almost inevitably going to need some form of data storage. This can be as simple as in-memory storage or any number of data storage solutions. The Node.js community has created many drivers and connectivity bridges for nearly any data store you might encounter in your application development. In this chapter you will investigate how to use Node.js to connect to many of these, including the following:

  • MySQL
  • Microsoft SQL Server
  • PostgreSQL
  • MongoDB
  • CouchDB
  • Redis
  • Cassandra

image Note  This chapter contains a wide variety of databases to utilize with Node.js. It may not be in your best interest to spend your time installing each of these databases on your local machines or environments. This chapter focuses on the Node.js communication with these databases and not the installation and initialization of each database. Because this book is designed to focus on Node.js and how it can operate in various use cases and with all of these database types, you are encouraged to find the recipe that suits your specific database needs.

10-1. Connecting to MySQL

Problem

Many developers are first introduced to database programming via MySQL. Many therefore want to carry this familiarity over from an existing application into a Node.js application, which means you need to be able to connect to a MySQL database from within your Node.js code.

Solution

When you begin to integrate MySQL into your application, you must first choose a Node.js package to serve as the MySQL driver. If you search npm for MySQL, you will likely see the mysql package listed at or near the top. You then install that package with the $ npm install mysql command.

Once the package is installed, you are ready to use MySQL in your Node.js application. To connect and execute queries, you can use a module similar to the one that you see in Listing 10-1. In this example you can utilize the MySQL sample database, Sakila, which you can install from the instructions found at http://dev.mysql.com/doc/sakila/en/sakila-installation.html.

Listing 10-1.  Connecting and Querying MySQL

/**
* mysql
*/
 
var mysql = require('mysql');
 
var connectionConfig = {
        host: 'localhost',
        user: 'root',
        password: '',
        database: 'sakila'
};
 
var connection = mysql.createConnection(connectionConfig);
 
 
connection.connect(function(err) {
        console.log('connection::connected');
});
 
connection.query('SELECT * FROM actor', function(err, rows, fields) {
  if (err) throw err;
 
  rows.forEach(function(row) {
        console.log(row.first_name, row.last_name);
  });
});
 
var actor = { first_name: 'Wil', last_name: 'Wheaton' };
connection.query('INSERT INTO actor SET ?', actor, function(err, results) {
        if (err) throw err;
 
        console.log(results);
});
 
connection.end(function(err) {
        console.log('connection::end');
});

How It Works

Using the mysql module to connect to MySQL begins with a connection configuration object. The connection object in your solution simply provides the host, user, password, and database to which you wish to connect. These are the basic settings, but there are other options that can be configured on this object, as you see in Table 10-1.

Table 10-1. Connection Options for MySQL

Option Description
bigNumberStrings When used with supportBigNumbers, big numbers will be represented by strings in your JavaScript. Default: false
charset Names the charset that you want to use for the connection. Default: UTF8_GENERAL_CI
database Lists the name of the database on the MySQL server.
debug Prints details using stdout. Default: false
flags Lists nondefault connection flags to be used.
host Provides the hostname of the database server to which you are connecting. Default: localhost
insecureAuth Allows connecting to insecure (older) server authentication methods. Default: false
multipleStatements Allows more than one statement per query. This could allow for SQL injection attacks. Default: false
password Lists the password of the MySQL user.
port Gives the port number on the machine where the MySQL server instance resides. Default: 3306
queryFormat Creates a custom query function.
socketPath Provides the path to a Unix socket. This will cause the host and port to be ignored.
stringifyObjects Will stringify objects instead of converting their values. Default: false
supportBigNumbers Use this when using BIGINT or DECIMAL columns. Default: false
timezone Lists the time zone for local dates. Default: local
typeCast Converts types to native JavaScript types. Default: true
user Lists the MySQL user to utilize for authentication.

Once you create your connection configuration object, you are able to instantiate a new connection to your MySQL server. This is done by calling mysql.createConnection(config), which instantiates the connection object, internally wrapping your settings in a ConnectionConfig object.

You can see in Listing 10-2 that the connection object will actually attempt to create the connection in the Protocol module, which performs the necessary MySQL handshake in order to connect to the server.

Listing 10-2.  Connecting in the MySQL Module

module.exports = Connection;
Util.inherits(Connection, EventEmitter);
function Connection(options) {
  EventEmitter.call(this);
 
  this.config = options.config;
 
  this._socket        = options.socket;
  this._protocol      = new Protocol({config: this.config, connection: this});
  this._connectCalled = false;
  this.state          = "disconnected";
}

Now that you have a connection to the MySQL server, you are able to query by using that connection. In the solution you were able to perform two distinct types of queries.

The first query was an explicit select statement from the actor table in your database. This required nothing more than properly forming the query as a string for the first parameter of the connection.query method. The connection.query method accepts up to three arguments: sql, values, and a callback. If the values argument is omitted, the module detects this by checking whether the second argument is a function, and just the SQL is enqueued to be executed on the server. The callback is invoked once the query has completed.
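This optional-middle-argument pattern can be sketched with a plain function (a simplified illustration of the technique, not the mysql module's actual internals):

```javascript
// Sketch of how an optional `values` argument can be detected:
// if the second argument is a function, it must be the callback.
function normalizeQueryArgs(sql, values, callback) {
  if (typeof values === 'function') {
    // Only sql and a callback were supplied; shift the arguments.
    callback = values;
    values = undefined;
  }
  return { sql: sql, values: values, callback: callback };
}

// With values supplied:
var withValues = normalizeQueryArgs('SELECT ?', [1], function() {});
// Without values -- the function slides into the callback slot:
var noValues = normalizeQueryArgs('SELECT 1', function() {});
```

This is the same convention used throughout Node.js libraries for optional arguments that precede a trailing callback.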

In the second query, you pass in some values that you wish to insert into the database. These values take the form of a JavaScript object and are bound to the ‘?’ placeholder in your INSERT query. One of the great things about this method is that the mysql module attempts to safely escape all of the data that you load into your database, in order to mitigate SQL injection attacks. An escape matrix performs different types of escaping for different value types in the mysql module (see Table 10-2).

Table 10-2. Escape Matrix

Value Type How It Is Converted
Arrays Turned to a list [‘a’, ‘b’] => ‘a’, ‘b’
Boolean ‘True’ / ‘false’ strings
Buffers Hex strings
Dates ‘YYYY-mm-dd HH:ii:ss’ strings
NaN/Infinity As is, because MySQL has nothing to convert them to
Nested Arrays Grouped lists [[‘a’,’b’], [‘c’,’d’]] => (‘a’, ‘b’), (‘c’, ‘d’)
Numbers None
Objects Key-‘value’ pairs are generated; nested objects turn to strings
Strings Safely escaped
Undefined/null NULL
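A few rows of this matrix can be mimicked in plain JavaScript (a simplified sketch of the conversions described above, not the mysql module's actual escaping code, which also handles buffers, dates, and nested objects):

```javascript
// Simplified illustration of the value conversions in Table 10-2.
function sketchEscape(value) {
  if (value === undefined || value === null) return 'NULL';
  if (typeof value === 'boolean') return value ? 'true' : 'false';
  if (typeof value === 'number') return String(value); // numbers pass through
  if (Array.isArray(value)) {
    // Arrays become comma-separated lists of escaped members.
    return value.map(sketchEscape).join(', ');
  }
  if (typeof value === 'string') {
    // Strings are quoted, with embedded quotes escaped.
    return "'" + value.replace(/'/g, "\\'") + "'";
  }
  return String(value);
}

sketchEscape(['a', 'b']); // -> "'a', 'b'"
```

The point of the real matrix is the same: every value type has a deterministic, safe textual form before it ever reaches the SQL string.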

This is just a basic example for connecting and executing queries with MySQL by using the mysql module. There are other methods that you can utilize as well. You can stream the response from a query and bind to the events in order to perform specific actions on a row as it is returned, before continuing to the next. An example of this would be as shown in Listing 10-3.

Listing 10-3.  Streaming a Query

/**
* mysql
*/
 
var mysql = require('mysql');
 
var connectionConfig = {
        host: 'localhost',
        user: 'root',
        password: '',
        database: 'sakila'
};
 
var connection = mysql.createConnection(connectionConfig);
 
 
connection.connect(function(err) {
        console.log('connection::connected');
});
 
var query = connection.query('SELECT * FROM actor');
 
query.on('error', function(err) {
 
        console.log(err);
 
}).on('fields', function(fields) {
 
        console.log(fields);
 
}).on('result', function(row) {
        connection.pause();
        console.log(row);
        connection.resume();
}).on('end', function(err) {
        console.log('connection::end');
});

Here you see that the query itself does not change; but instead of passing a callback to the query method, you bind to the events that the query emits while it executes. As the fields are parsed, they are processed; then, for each row, you process the data before moving on to the next record. This is done by calling the connection.pause() function, performing your actions, and then calling the connection.resume() method.

Connecting and using MySQL in Node.js is straightforward when you use a framework such as the mysql module. If MySQL is your database of choice, it should not limit your ability to choose Node.js as your data access server.

10-2. Connecting to Microsoft SQL Server

Problem

You want to integrate your Node.js application into a Microsoft SQL Server instance.

Solution

Just as with MySQL, there are several options for a Microsoft SQL Server driver with Node.js. One of the most popular packages is tedious, named after the Tabular Data Stream (TDS) protocol used to connect to SQL Server. You first install this package via npm with the $ npm install tedious command.

You then build a set of modules that interact with SQL Server. The first portion of this solution, Listing 10-4, utilizes tedious to create a connection to your SQL Server instance. The second part, as seen in Listing 10-5, is the module that contains the interaction with the data on your SQL Server instance.

image Note  SQL Server is a Microsoft product. As such, the following implementation will work only if your server is running Windows and SQL Server.

Listing 10-4.  Connecting to Your SQL Server Instance

/*
* Using MS SQL
*/
 
var TDS = require('tedious'),
        Conn = TDS.Connection,
        aModel = require('./10-2-1.js');
 
var conn = new Conn({
        userName: 'sa',
        password: 'pass',
        server: 'localhost',
        options: {
            database: 'Northwind',
            rowCollectionOnRequestCompletion: true
        }
});
 
function handleResult(err, res) {
        if (err) throw err;
        console.log(res);
}
 
conn.on('connect', function(err) {
        if (err) throw err;
 
        aModel.getByParameter(conn, 'parameter', handleResult);
 
        aModel.getByParameterSP(conn, 'parameter', handleResult);
});

Listing 10-5.  Querying Microsoft SQL Server

var TDS = require('tedious'),
        TYPES = TDS.TYPES,
        Request = TDS.Request;
var aModel = module.exports = {
        // Use vanilla SQL
        getByParameter: function(conn, parm, callback) {
                var q = 'select * from model (NOLOCK) where identifier = @parm';
 
                var req = new Request(q, function(err, rowcount, rows) {
                        callback( err, rows );
                });
                req.addParameter('parm', TYPES.UniqueIdentifierN, parm);
 
                conn.execSql(req);
        },
        // Use a Stored Procedure
        getByParameterSP: function(conn, parm, callback) {
                var q = 'exec sp_getModelByParameter @parm';
                var req = new Request(q, function(err, rowcount, rows) {
                        callback( err, rows );
                });
                req.addParameter('parm', TYPES.UniqueIdentifierN, parm);
 
                conn.execSql(req);
        }
};

How It Works

To connect to a Microsoft SQL Server by using the tedious module, you first need to create a connection. This is done by using the TDS.Connection object and instantiating it with a configuration object. In your solution, to create the connection you send a username, password, server name, and a set of options to be used in the connection. There are a number of options that can be passed into this object, as shown in Table 10-3.

Table 10-3. TDS.Connection Configuration

Setting Description
options.cancelTimeout Time until a cancel request times out. Default: 5 seconds
options.connectTimeout Time to wait until the connection attempt times out. Default: 15 seconds
options.cryptoCredentialsDetails Object that will contain any credentials required for encryption. Default: Empty Object ‘{}’
options.database The name of the database to connect to
options.debug.data Boolean telling whether debug information about packet data is sent. Default: False
options.debug.packet Boolean telling whether debug information about packets will be sent. Default: False
options.debug.payload Boolean telling whether debug information about packet payload is sent. Default: False
options.debug.token Boolean telling whether debug information about stream tokens is sent. Default: False
options.encrypt Sets whether or not to encrypt requests. Default: False
options.instanceName The named instance to connect to.
options.isolationLevel The level of isolation on the server, or when the server will allow data to be seen from another operation. Default: Read Uncommitted (This is known as “dirty reads,” or the lowest level of isolation. A given transaction can see uncommitted transactions from another transaction.)
options.packetSize The size limit for packets that are sent to and from the server. Default: 4 KB
options.port The port on which to connect. This option is mutually exclusive with options.instanceName. Default: 1433
options.requestTimeout Time until a given request times out. Default: 15 seconds
options.rowCollectionOnDone Boolean that states that a row collection will be received when the ‘done’, ‘doneInProc’, and ‘doneProc’ events are emitted. Default: False
options.rowCollectionOnRequestCompletion Boolean that, when true, will provide a row collection in the Request callback. Default: False
options.tdsVersion The version of the TDS protocol that the connection is to use. Default: 7_2
options.textsize Sets the maximum width of any column for text data types. Default: 2147483647
.password The password associated with the username
.server The name or IP address of the server to which you wish to connect
.userName Username to use for connecting to the MS SQL Server instance (Note: Windows authentication connections are not supported.)

Once you have passed these options to the connection object, which is a Node.js EventEmitter, you then bind to the ‘connect’ event. There are several ways in which the ‘connect’ event can be emitted from the connection as described below:

  • A successful connection
  • A login failure
  • After connectTimeout has elapsed
  • After a socket error occurs during connection

Once you have successfully connected to SQL Server, you then call your module that contains your requests. TDS.Request is an EventEmitter that allows you to execute SQL either by a plain T-SQL string or a stored procedure. The request also accepts a callback, which will either be called directly, or the result will be applied to the ‘requestCompleted’ event.

Just as with many SQL Server implementations, you can pass parameters to the SQL that you wish to execute. In both of the examples from your solution (one an SQL text and one a stored procedure), you pass a named parameter. This named parameter is added to the request by using the Request.addParameter() method. The addParameter() method accepts up to four arguments: name, type, value, and an options object. The type that is used when adding a parameter can be any type in the TDS.Types object that is allowed to be part of a parameter. These are Bit, TinyInt, SmallInt, Int, BigInt, Float,  Real, SmallDateTime, DateTime, VarChar, Text, NVarChar, Null, UniqueIdentifier, and UniqueIdentifierN.
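As an illustration of choosing among these types, a hypothetical helper might map plain JavaScript values onto TDS type names (this mapping is an assumption made for illustration; tedious itself does not provide such a helper, and in real code you pass TYPES.<Name> explicitly to Request.addParameter()):

```javascript
// Hypothetical helper: pick a TDS type name for a JS value.
// Not part of tedious -- a sketch of one reasonable mapping.
function guessTdsTypeName(value) {
  if (value === null || value === undefined) return 'Null';
  if (typeof value === 'boolean') return 'Bit';
  if (typeof value === 'number') {
    return Number.isInteger(value) ? 'Int' : 'Float';
  }
  if (value instanceof Date) return 'DateTime';
  // Fall back to a Unicode string type for everything else.
  return 'NVarChar';
}
```

Whatever mapping you choose, the type name must match the declared type of the parameter on the SQL Server side.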

Once you have made the request object and added the parameters that you need, you can then execute the SQL statement by calling connection.execSql(<Request>), passing the request to the method. When the request completes, your callback executes and you can handle the result and the rows accordingly.

You now understand how to implement a connection to MS SQL Server by using Node.js and the tedious package to manage TDS connections.

10-3. Using PostgreSQL with Node.js

Problem

You are going to use PostgreSQL for your database and need to utilize this in your Node.js application.

Solution

There are several packages that can be used for connecting to PostgreSQL. This solution utilizes the node-postgres module, a low-level PostgreSQL client. Listing 10-6 shows a simple example of connecting to a PostgreSQL instance, executing a simple query, and logging the result.

Listing 10-6.  Connecting to PostgreSQL and Executing a Query

/**
* PostgreSQL
*/
 
var pg = require('pg');
 
var connectionString = 'tcp://postgres:pass@localhost/postgres';
 
var client = new pg.Client(connectionString);
 
client.connect(function(err) {
        if (err) throw err;
 
        client.query("SELECT EXTRACT(CENTURY FROM TIMESTAMP '2011-11-11 11:11:11')", function(err, result) {
                if (err)  throw err;
 
                console.log(result.rows[0]);
 
                client.end();
        });
});

How It Works

This solution begins by installing the node-postgres module with $ npm install pg. You can then require it in your Node.js code and create a connection to your PostgreSQL instance by instantiating a new client. The client constructor parses the connection string parameters and creates the connection, as seen in Listing 10-7.

Listing 10-7.  Client Construct of node-postgres

var Client = function(config) {
  EventEmitter.call(this);
 
  this.connectionParameters = new ConnectionParameters(config);
  this.user = this.connectionParameters.user;
  this.database = this.connectionParameters.database;
  this.port = this.connectionParameters.port;
  this.host = this.connectionParameters.host;
  this.password = this.connectionParameters.password;
 
  var c = config || {};
 
  this.connection = c.connection || new Connection({
    stream: c.stream,
    ssl: c.ssl
  });
  this.queryQueue = [];
  this.binary = c.binary || defaults.binary;
  this.encoding = 'utf8';
  this.processID = null;
  this.secretKey = null;
  this.ssl = c.ssl || false;
};

Once you have created this connection, you next execute a query. This is done by calling client.query() and passing an SQL string as the first parameter. The second parameter can be either an array of values to be applied to the query, as you saw in Section 10-1, or the callback. The callback function receives two arguments: an error, if one occurred, and the results of the query. The results, as you see, contain an array of the rows returned. Once you have handled the results, you can close the client connection by calling client.end(), which closes the connection via the connection.end() method.

Your example used a plaintext SQL statement for node-postgres to execute. There are two other methods for executing a query with node-postgres: parameterized queries and prepared statements.

Parameterized queries allow you to pass parameters to the query, such as 'select description from products where name=$1', ['sandals']. By using parameterized queries, you gain a greater level of protection from SQL injection attacks. They also perform more slowly than plaintext queries, because these statements are prepared and then executed before each execution.

The final type of query that you are able to execute with node-postgres is a prepared statement. One of these will be prepared once, and then for each session connection to postgres, the execution plan for this SQL query is cached so that if it is executed more than once it becomes the most efficient way to execute SQL with node-postgres. Like the parameterized queries, a prepared statement also provides a similar barrier to SQL injection attacks. The prepared statement is created by passing an object to the query method that has a name, text, and a values attribute. You can then call these prepared statements by the names that you provided for them.
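The object shape that node-postgres expects for a prepared statement can be sketched as a small builder (the statement name and SQL below are illustrative examples, not from the chapter's database):

```javascript
// Build the {name, text, values} object described above.
// The statement name and SQL here are made-up examples.
function preparedStatement(name, text, values) {
  if (!name || !text) {
    throw new Error('a prepared statement needs a name and SQL text');
  }
  return { name: name, text: text, values: values || [] };
}

var stmt = preparedStatement(
  'product_by_name',
  'select description from products where name = $1',
  ['sandals']
);
// In real code you would pass this to client.query(stmt, callback);
// subsequent calls reusing the same name reuse the cached plan.
```

Reusing the name across calls within a session is what lets PostgreSQL cache the execution plan.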

Utilizing node-postgres allows you to interact directly and efficiently with PostgreSQL from your Node.js application. The next sections depart from the traditional SQL interfaces with Node.js, and you will begin to investigate several NoSQL options for connecting to Node.js.

10-4. Using Mongoose to Connect to MongoDB

Problem

You want to be able to utilize MongoDB in your Node.js application. To do this, you choose to integrate with Mongoose.

Solution

When you use MongoDB with your Node.js application, there are many drivers you can choose from to connect to your data store. However, what is likely the most widely used solution is to integrate your MongoDB instance with the Mongoose module. After installing with $ npm install mongoose, you can then create a connection to MongoDB by using the connection methods outlined in Listing 10-8.

Listing 10-8.  Connecting to MongoDB with Mongoose

/**
* Connecting to MongoDB with Mongoose
*/
 
var mongoose = require('mongoose');
 
// simple connection string
// mongoose.connect('mongodb://localhost/test');
mongoose.connect('mongodb://localhost/test', {
        db: { native_parser: false },
        server: { poolSize: 1 }
        // replset:  { rs_name : 'myReplicaSetName' },
        // user: 'username',
        // pass: 'password'
});
 
// using authentication
// mongoose.connect('mongodb://username:password@host/collection')
 
mongoose.connection.on('open', function() {
        console.log('huzzah! connection open');
});
 
mongoose.connection.on('connecting', function() {
        console.log('connecting');
});
 
mongoose.connection.on('connected', function() {
        console.log('connected');
});
 
mongoose.connection.on('reconnected', function() {
        console.log('reconnected');
});
 
mongoose.connection.on('disconnecting', function() {
        console.log('disconnecting');
});
 
mongoose.connection.on('disconnected', function() {
        console.log('disconnected');
});
 
mongoose.connection.on('error', function(error) {
        console.log('error', error);
});
 
mongoose.connection.on('close', function() {
        console.log('connection closed');
});

How It Works

Connecting to MongoDB in general is not complex. It requires a MongoDB-specific uniform resource identifier (URI) scheme, which will point to a server (or servers) that can host your MongoDB data. In Mongoose, the same URI scheme is used, as seen in Listing 10-9, with the addition of several options that you saw in Listing 10-8.

Listing 10-9.  MongoDB Connection String

mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/database][?options]
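A helper that assembles a URI of this shape makes the pieces concrete (a sketch; the host and database names below are examples, and the optional ?options portion is omitted):

```javascript
// Assemble a MongoDB connection URI from its parts, following the
// scheme shown in Listing 10-9. `hosts` is an array of 'host[:port]'
// strings; auth credentials and database are optional.
function buildMongoUri(hosts, database, user, pass) {
  var auth = user ? user + ':' + pass + '@' : '';
  var db = database ? '/' + database : '';
  return 'mongodb://' + auth + hosts.join(',') + db;
}

buildMongoUri(['localhost:27017'], 'test');
// -> 'mongodb://localhost:27017/test'
```

Multiple hosts are listed comma-separated, which is how replica set members are named in the same URI.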

For Mongoose, you use the mongoose.connect(<uri>, <options>) method. The options that you set in Mongoose can be set like any of those listed in Table 10-4.

Table 10-4. Mongoose Connection Options

Option Description
.auth The authentication mechanism options, including the source and type of mechanism to use.
.db Passed to the connection .db instance (e.g., {native_parser: true} will use native binary JSON [BSON] parsing).
.mongos Boolean that enables the high-availability option for mongos. This should be set to true if connecting through multiple mongos instances.
.pass The password associated with the username.
.replset This is the name of the replica set to use, assuming that the Mongoose instance you are connecting to is a member of a set.
.server Passed to the connection server instance (e.g., {poolSize: 1} number of pools).
.user The username to be used for authentication.

The connection object inherits from the Node.js EventEmitter, so you can see from your solution that there are many events that you are able to subscribe to by using Mongoose. These events are outlined and described in Table 10-5.

Table 10-5. Mongoose Connection Events

Event Description
‘close’ Emitted after disconnected and ‘onClose’ are executed on all connections.
‘connected’ Emitted once the successful connection to the database occurs.
‘connecting’ Emitted when connection.open or connection.openSet is executed on the connection.
‘disconnected’ Emitted after disconnecting.
‘disconnecting’ Emitted when the connection.close() event is executed.
‘error’ Emitted when an error occurs (i.e., when a Mongo instance is dropped).
‘fullsetup’ Emitted in a replica set when all nodes are connected.
‘open’ Emitted once the connection is opened to the MongoDB instance.
‘reconnected’ Emitted after a subsequent connection.

This is the basic scenario and setup for connecting to MongoDB by using Mongoose. In the next section you will investigate how to use Mongoose to smartly model a data store and retrieve it in MongoDB.

10-5. Modeling Data for Mongoose

Problem

You want to model your MongoDB data with Mongoose in your Node.js application.

Solution

When you model data with Mongoose, you need to utilize the mongoose.model() method. You do not necessarily need the mongoose.Schema method, but for the model created in Listing 10-10 it is utilized to build the schema for the model.

Listing 10-10.  Creating a Model with Mongoose

/**
* Modeling data with Mongoose
*/
 
var mongoose = require('mongoose'),
        Schema = mongoose.Schema,
        ObjectId = Schema.ObjectId;
 
mongoose.connect('mongodb://localhost/test');
 
var productModel = new Schema({
        productId: ObjectId,
        name: String,
        description: String,
        price: Number
});
 
 
var Product = mongoose.model('Product', productModel);
 
var sandal = new Product({name: 'sandal', description: 'something to wear', price: 12});
 
sandal.save(function(err) {
        if (err) console.log(err);
 
        console.log('sandal created');
});
 
Product.find({name: 'sandal'}).exec(function(err, product) {
        if (err) console.log(err);
 
        console.log(product);
});

How It Works

One of the reasons why Mongoose has become a high priority for anyone utilizing MongoDB in a Node.js application is that Mongoose naturally models data. You only need to generate a schema model once for your data, and then you can use that model to fetch, update, and remove data from MongoDB.

In your solution, you start by importing the mongoose.Schema object. The schema is created by passing a JavaScript object that contains the actual schema you wish to model, as well as an optional second parameter that contains the options for your model. The schema not only allows you to name the fields in your model but also lets you specify types for the values in your schema. The types are declared in the schema instantiation as { <fieldname> : <DataType> }. The available types are the following:

  • Array
  • Boolean
  • Buffer
  • Date
  • Mixed
  • Number
  • ObjectId

StringThe options that you create for your schema are any that are shown in Table 10-6.

Table 10-6. Options for mongoose.Schema

Option Description
autoIndex Boolean that determines whether MongoDB will autogenerate an index. Default:  True
bufferCommands Boolean that determines whether the commands are buffered when a connection is lost, until a reconnect occurs. Default: True
capped Sets the MongoDB to be capped—meaning that the collection is a fixed size. Default: False
collection String that sets the collection name.
id Returns the document’s _id field—or the hexString of the object. If set to false, this will be undefined. Default: True
_id Tells whether MongoDB will create the _id field when the model object is created. Default:  True
read Sets query.read options on the schema. This string determines whether your application will read from the primary, secondary,  or nearest Mongo in a replicated set. Options: ‘primary’ ‘primaryPreferred’ ‘secondary’ ‘secondaryPreferred’ ‘nearest’
safe Boolean that sets whether an error is passed to the callback. Default: True
shardKey Sets which sharded collection to target.
strict Boolean that ensures that nonmodel values passed to the constructor aren’t saved. Default: True
toJSON Converts the model to JavaScript Object Notation (JSON).
toObject Converts the model to a plain JavaScript object.
versionKey Sets the version of the schema when the model is created. Default: __v: 0

When you create your “product” schema, you create a simple object that contains a productId, which imports the ObjectId or the hexString of the product. Your model will also create a name as a string, a description as a string, and a price as a number for the products you wish to store and retrieve.

From the schema object, you now create an actual Mongoose model by using the mongoose.model() method, passing the name you chose for your model and the schema itself. You are now able to use this new Product model to create a product, passing in the object that you wish to store as a document on your MongoDB server. In this solution, you create a sandal object, which is then saved by using the sandal.save() method, which takes a callback.

You can also find and remove data from your model. In this solution you query your model by using Product.find({ name: 'sandal' }), which will search for all products named “sandal” and return that in the exec() callback. From within the callback you have access to the array of all the products named “sandal.” If you wish to remove all or some of these, you could loop through these results and remove them individually, as shown in Listing 10-11.

Listing 10-11.  Using Mongoose to Remove Records

Product.find({name: 'sandal'}).exec(function(err, products) {
        if (err) console.log(err);
        console.log(products);
        for (var i = 0; i < products.length; i++) {
                if (i >= 3) {
                        products[i].remove(function() {
                                console.log('removing');
                        });
                }
        }
});

You have seen how to connect to and implement a schema with Mongoose in your Node.js application. The Mongoose object model allows for clean implementation of your models to the MongoDB document database.

10-6. Connecting to CouchDB

Problem

You want to utilize CouchDB in your Node.js application.

Solution

CouchDB is a database that utilizes JSON documents, HTTP for its application programming interface (API), and JavaScript for MapReduce. Because of this, it has become a natural fit for many Node.js developers. There are several modules that can be used to build your Node.js application using CouchDB. In this solution you will utilize Nano, which is a lightweight module that supports CouchDB. It can be installed by using $ npm install nano.

In Listing 10-12, you will create a database and insert a document into that database. Next, you will update that document in the database. You will then, in Listing 10-13, retrieve the document and remove it from the database.

Listing 10-12.  Creating a Database and Documents in CouchDB with Nano

/**
* CouchDB
*/
 
var nano = require('nano')('http://localhost:5984');
 
nano.db.create('products', function(err, body, header) {
        if (err) console.log(err);
 
        console.log(body, header);
});
 
var products = nano.db.use('products');
 
products.insert({ name: 'sandals', description: 'for your feet', price: 12.00}, 'sandals', function(err, body, header) {
        if (err) console.log(err);
 
        console.log(body, header);
});
 
products.get('sandals', { revs_info: true }, function(err, body, header) {
        if (err) console.log(err);
 
        console.log(body, header);
});
 
// Updating in CouchDB with Nano.
products.get('sandals', function(err, body, header) {
        if (!err) {
                products.insert({name: 'sandals', description: 'flip flops', price: 12.50, _rev: body._rev }, 'sandals', function(err, body, header) {
                        if (!err) {
                                console.log(body, header);
                        }
                });
        }
});

Listing 10-13.  Removing a Document from a CouchDB Database

var nano = require('nano')('http://localhost:5984');
 
var products = nano.db.use('products');
// Deleting in CouchDB with Nano.
products.get('sandals', function(err, body, header) {
        if (!err) {
                products.destroy( 'sandals', body._rev, function(err, body, header) {
                        if (!err) {
                                console.log(body, header);
                        }
                        nano.db.destroy('products');
                });
        }
});

You can also create requests to CouchDB by using Nano through a single nano.request() interface, as shown in Listing 10-14.

Listing 10-14.  Using nano.request()

nano.request({
        db: 'products',
        doc: 'sandals',
        method: 'get'
}, function(err, body, header) {
        if (err) console.log('request::err', err);
        console.log('request::body', body);
});

How It Works

Using CouchDB is a natural fit for many Node.js applications. The Nano module was designed to be a “minimalistic CouchDB driver for Node.js.” Not only is Nano minimalistic, but it also supports using pipes, and you get direct access to errors from CouchDB.

In Listing 10-12 you first required the Nano module and connected to your server. Connecting to a server is as simple as pointing to a host and port that contain the CouchDB instance that is running. Next, you create a database on your CouchDB server that will hold all of the products that you wish to create. When you call any method with Nano, you can add a callback that will receive an error, body, and header parameter. When you first create a database with Nano, you will see the body and the request response JSON that looks like Listing 10-15.

Listing 10-15.  Callback after Creating “Products”

Body: { ok: true }
Header: { location: 'http://localhost:5984/products',
  date: 'Sun, 28 Jul 2013 14:34:01 GMT',
  'content-type': 'application/json',
  'cache-control': 'must-revalidate',
  'status-code': 201,
  uri: 'http://localhost:5984/products' }

Once you have created your database, you then create your first product document. You create the “sandals” document by passing a JavaScript object that contains the information about the product to the products.insert() method. The second argument is the name that you want to associate with this document. The body and header responses will let you know that the insert was okay, as seen in Listing 10-16.

Listing 10-16.  Inserting a Product

Body: { ok: true,
  id: 'sandals',
  rev: '1-e62b89a561374872bab560cef58d1d61' }
Header: { location: 'http://localhost:5984/products/sandals',
  etag: '"1-e62b89a561374872bab560cef58d1d61"',
  date: 'Sun, 28 Jul 2013 14:34:01 GMT',
  'content-type': 'application/json',
  'cache-control': 'must-revalidate',
  'status-code': 201,
  uri: 'http://localhost:5984/products/sandals' }

If you want to update a document in CouchDB with Nano, you need to get the revision identifier for the particular document you wish to update and then call products.insert() again, passing in that revision identifier along with the updated document. In your solution you did this by using products.get(), then using the body._rev revision identifier from the callback to update the document (see Listing 10-17).

Listing 10-17.  Updating an Existing Document

products.get('sandals', function(err, body, header) {
        if (!err) {
                products.insert({name: 'sandals', description: 'flip flops', price: 12.50, _rev: body._rev }, 'sandals', function(err, body, header) {
                        if (!err) {
                                console.log(body, header);
                        }
                });
        }
});

After you have created, inserted, and updated a document, you will likely want to be able to remove items from time to time. To do this, you also need a reference to the revision of the document that you are planning to remove. This means that you can first fetch the document with products.get() and pass the body._rev identifier from the callback into the products.destroy() method. This will remove the document from your database. If, however, you want to remove your database in its entirety, you can call nano.db.destroy(<DBNAME>);.
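The revision requirement is a form of optimistic concurrency control: a write succeeds only if it carries the document's current revision. The hypothetical in-memory store below sketches that rule; the names (TinyStore, its insert method, the rev format) are illustrative and are not part of the Nano API or CouchDB's actual revision scheme.

```javascript
// Hypothetical in-memory store illustrating CouchDB-style revision checks.
// Not the Nano API -- just the conflict rule it works against.
function TinyStore() {
    this.docs = {};
}

// Insert succeeds only when the supplied _rev matches the stored revision
// (or the document does not exist yet), mirroring CouchDB's conflict rule.
TinyStore.prototype.insert = function (id, doc) {
    var existing = this.docs[id];
    if (existing && doc._rev !== existing._rev) {
        return { ok: false, error: 'conflict' };
    }
    var rev = (existing ? parseInt(existing._rev, 10) + 1 : 1) + '-rev';
    this.docs[id] = Object.assign({}, doc, { _rev: rev });
    return { ok: true, id: id, rev: rev };
};

var store = new TinyStore();
var first = store.insert('sandals', { price: 12.00 });  // first write: ok
var stale = store.insert('sandals', { price: 12.50 });  // no _rev: conflict
var fresh = store.insert('sandals', { price: 12.50, _rev: first.rev }); // ok
```

This is why the update and delete examples in this recipe always fetch the document first: they need the latest _rev before writing.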

The key to the Nano framework is that all of these functions are really wrappers around the nano.request() method that you saw in Listing 10-14. There, your request was made to the “sandals” document in the “products” database as an HTTP GET, which is the same as products.get('sandals'). The equivalent for a destroy action is the HTTP verb DELETE, so in the instance of nano.db.destroy('products'), you are essentially writing nano.request({db: 'products', method: 'DELETE'}, callback);.
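Because each convenience method reduces to an HTTP request, you can sketch the mapping yourself. The helper below is purely illustrative (it is not part of Nano); it builds the options object that nano.request() would receive for the calls used in this recipe.

```javascript
// Illustrative helper: build the nano.request() options that correspond
// to Nano's convenience methods. Not part of the Nano API.
function toRequest(db, doc, method) {
    var opts = { db: db, method: method };
    if (doc) opts.doc = doc;
    return opts;
}

// products.get('sandals')     -> GET /products/sandals
var getReq = toRequest('products', 'sandals', 'GET');

// nano.db.destroy('products') -> DELETE /products
var destroyReq = toRequest('products', null, 'DELETE');
```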

10-7. Using Redis

Problem

You want to utilize the Redis key-value store in your Node.js application.

Solution

Redis is an extremely powerful and popular key-value data store. Because it is so popular, there are many Node.js implementations to choose from. Some integrate with web frameworks such as Express, and others are more specialized. One implementation that is recommended on the Redis website is node_redis, found at https://github.com/mranney/node_redis. To install node_redis, you can utilize npm as follows: $ npm install redis.

Using node_redis is simple for those familiar with Redis because the API mirrors the Redis commands. All of your get, set, hget, and hgetall commands can be executed just as they would be directly from Redis itself. A simple example of getting and setting values and hash values is shown in Listing 10-18.

Listing 10-18.  Getting and Setting String and Hash Key-Value Pairs with node_redis

/**
* Redis
*/
 
var redis = require("redis"),
    client = redis.createClient();
 
client.on("error", function (err) {
    console.log("Error " + err);
});
 
client.set("key", "value", redis.print);
 
client.hset("hash key", "hashtest 1", "some value", redis.print);
client.hset(["hash key", "hashtest 2", "some other value"], redis.print);
client.hkeys("hash key", function (err, replies) {
    console.log(replies.length + " replies:");
    replies.forEach(function (reply, i) {
        console.log("    " + i + ": " + reply);
    });
    client.quit();
});
 
client.hgetall('hash key', function(err, replies) {
    // hgetall returns an object keyed by hash field, not an array.
    Object.keys(replies).forEach(function(field) {
        console.log(field + ': ' + replies[field]);
    });
});
 
 
client.get("key", function(err, reply) {
    if (err) console.log(err);
 
    console.log(reply);
});

At other times, instead of just storing hashes for session-level key-value storage, you may want to implement a loosely coupled publish and subscribe paradigm. This will feel very familiar to Node.js developers who are used to event-driven development but who want to leverage Redis for these purposes. An example of using publish and subscribe is shown in Listing 10-19.

Listing 10-19.  Publish and Subscribe Example

/**
* Pub/Sub
*/
 
var redis = require("redis"),
    subscriber = redis.createClient(),
    publisher = redis.createClient();
 
subscriber.on("subscribe", function (topic, count) {
    publisher.publish("event topic", "your event has occurred");
});
 
subscriber.on("message", function (topic, message) {
    console.log("message received:: " + topic + ": " + message);
    subscriber.end();
    publisher.end();
});
 
subscriber.subscribe("event topic");

How It Works

Once you install node_redis via $ npm install redis, you have access to the full implementation of Redis in Node.js. As you saw in Listing 10-18, you can easily create a new client by utilizing redis.createClient(). The createClient() method will create a connection to the port and host of the Redis instance, which defaults to 127.0.0.1:6379, and will then instantiate a RedisClient object, as shown in Listing 10-20.

Listing 10-20.  Node_redis createClient

exports.createClient = function (port_arg, host_arg, options) {
    var port = port_arg || default_port,
        host = host_arg || default_host,
        redis_client, net_client;
 
    net_client = net.createConnection(port, host);
 
    redis_client = new RedisClient(net_client, options);
 
    redis_client.port = port;
    redis_client.host = host;
 
    return redis_client;
};

The RedisClient inherits the Node.js EventEmitter and will emit several events, as shown in Table 10-7.

Table 10-7. RedisClient Events

Event Description
‘connect’ Emitted when the connection to the server is established. Unless the client option ‘no_ready_check’ is set to true, this fires at the same time as ‘ready’; if that option is set, ‘connect’ fires as soon as the stream is connected, and you are then free to send commands to Redis.
‘drain’ The RedisClient will emit ‘drain’ when the Transmission Control Protocol (TCP) connection to the Redis server has been buffering but is once again writable.
‘end’ Once the client connection to the Redis server has closed, this event is emitted.
‘error’ The RedisClient will emit ‘error’ when there is an exception from the Redis server.
‘idle’ The RedisClient will emit ‘idle’ once there are no outstanding messages that are awaiting a response.
‘ready’ The client will emit the ‘ready’ event once a connection is established to the Redis server and the server reports that it is ready to receive commands. If you send commands before the ‘ready’ event, they will be queued and executed just before this event is emitted.

In your solution, you then set a string value and a hash value. String values are set and retrieved by using client.set and client.get. You also used client.hset, client.hkeys, and client.hgetall in order to deal with hashes. These methods map directly to entering the commands in the Redis command-line client (see Listing 10-21).

Listing 10-21.  Redis set, get, hset, hkeys, and hgetall

> set key value
OK
> get key
"value"
> hset 'hash key' 'hashtest 1' 'blah'
(integer) 0
> hset 'hash key' 'hashtest 2' 'cheese'
(integer) 0
> hkeys 'hash key'
1) "hashtest 1"
2) "hashtest 2"
> hgetall 'hash key'
1) "hashtest 1"
2) "blah"
3) "hashtest 2"
4) "cheese"

You then created a publish and subscribe solution. This can be utilized alongside Node.js’s event model in order to create a loosely coupled integration between segregated portions of your application. To get started, you created two RedisClients called publisher and subscriber. First, you call subscriber.subscribe() on the topic you wish to listen for, then publish to that topic using publisher.publish() once the subscriber’s ‘subscribe’ event has been emitted. You can then bind the subscriber to the ‘message’ event and perform some variety of action once this event has been published.

You have now utilized Redis to store key-value pairs, as well as hashed keys in a data store with node_redis. You also have performed publish and subscribe methods by using Redis to power these messages.

10-8. Connecting to Cassandra

Problem

You are utilizing Cassandra to log events from your application and you would like to implement this logging with Node.js.

Solution

There are many different drivers for Cassandra across different programming languages. For Node.js, the helenus package, installed with $ npm install helenus, is at the forefront because it provides bindings to both the Thrift protocol and the Cassandra Query Language (CQL).

In Listing 10-22 you will create a logging mechanism that will log events that are occurring on your Node.js server.

Listing 10-22.  Using helenus to Create a Logging Application for Cassandra

var helenus = require('helenus'),
  pool = new helenus.ConnectionPool({
     hosts      : ['127.0.0.1:9160'],
     keyspace   : 'my_ks',
     user       : 'username',
     password   : 'pass',
     timeout    : 3000//,
     //cqlVersion : '3.0.0' // specify this if you're using Cassandra 1.1 and want to use CQL 3
});
 
var logger = module.exports = {
    /**
    * Logs data to the Cassandra cluster
    *
    * @param status     the status event that you want to log
    * @param message    the detailed message of the event
    * @param stack      the stack trace of the event
    * @param callback   optional callback
    */
    log: function(status, message, stack, callback) {
        pool.connect(function(err, keyspace){
            console.log('connected');
            keyspace.get('logger', function(err, cf) {
                var dt = Date.parse(new Date());
                //Create a column
                var column = {};
                column['time'] = dt;
                column['status'] = status;
                column['message'] = message;
                column['stack'] = stack;
 
                var timeUUID = helenus.TimeUUID.fromTimestamp(new Date());
 
                cf.insert(timeUUID, column, function(err) {
                    if (err) {
                        console.log('error', err);
                    }
                    console.log('insert complete');
                    if (callback) {
                        callback();
                    } else {
                        return;
                    }
 
                });
            });
        });
    }
};

How It Works

The helenus module is a robust solution for connecting to the Cassandra database. In your solution, after importing the helenus module, you connect to Cassandra. This is done by passing a simple object to helenus.ConnectionPool(). The object that creates this connection pool contains several options, as outlined in Table 10-8.

Table 10-8. Connection Pool Options

Option Description
.cqlVersion Names the version of CQL you wish to utilize.
.hosts Provides an array of values that are the IP address and port for all the Cassandra instances in your cluster.
.keyspace Lists the keyspace on the Cassandra cluster that you wish to connect to initially.
.password Provides the password that you wish to use to connect to the node.
.timeout Is the timeout in milliseconds.
.user Gives the username for connecting to the node.

Once you have created your connection pool, you call pool.connect(). When the connection has been made, the callback provides a reference to the default keyspace that you configured in your connection pool. Alternatively, you can connect to a keyspace by using the pool.use('keyspacename', function(err, keyspace) {}); method.

You now have access to a keyspace on your Cassandra cluster. To gain access to the logger column family, you call keyspace.get('logger'...), which will fetch the column family and return a reference so that you can then operate on the column family directly.

Now that you have gained access to the column family to which you wish to write data, you can create the column that you want to insert. In this solution, assume that your logger column family has a row key that is a TimeUUID type, creating a unique timestamp for each entry. Helenus allows you to use this type of key easily because TimeUUID is a built-in type. You can access this type and create a new TimeUUID by using the fromTimestamp method on the object, as seen in Listing 10-23. You will also see that, if needed, helenus provides a way to generate UUID types.

Listing 10-23.  Creating a New TimeUUID

> helenus.TimeUUID.fromTimestamp(new Date());
c19515c0-f7c4-11e2-9257-fd79518d2700
 
> new helenus.UUID();
7b451d58-548f-4602-a26e-2ecc78bae57c
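The value of a TimeUUID row key is that each entry gets a key that is both unique and time-ordered. The sketch below illustrates that idea with a plain timestamp-plus-counter key; it is a hypothetical stand-in, not a real version-1 UUID and not how helenus generates TimeUUIDs.

```javascript
// Hypothetical sketch of a unique, time-ordered key in the spirit of a
// TimeUUID row key. NOT a real version-1 UUID.
var counter = 0;
function timeKey(date) {
    counter += 1;
    // Zero-pad the millisecond timestamp so that string order matches
    // chronological order; append a counter for uniqueness within one ms.
    var ms = String(date.getTime());
    while (ms.length < 15) ms = '0' + ms;
    return ms + '-' + counter;
}

var k1 = timeKey(new Date(2013, 6, 28, 14, 34, 1));
var k2 = timeKey(new Date(2013, 6, 28, 14, 34, 2));
// k1 sorts before k2, so log rows written later always sort after earlier ones.
```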

Aside from the row key in your logger column family, you simply pass the timestamp, status, message, and stack trace of the event you wish to log. These all become parts of an object you named “column.” Now that you have the row key and the column values, you can insert them into Cassandra by calling the cf.insert method on your column family.

This solution leveraged JavaScript objects to generate a model-like implementation that was converted to the Cassandra Thrift protocol in order to insert data. Helenus also allows you to insert data by using the CQL language. An implementation similar to Listing 10-22, but utilizing CQL, is shown in Listing 10-24. The step to retrieve the column family is omitted because CQL operates directly on the keyspace.

Listing 10-24.  Using CQL to Log Data to Cassandra

var helenus = require('helenus'),
    pool = new helenus.ConnectionPool({
       hosts      : ['127.0.0.1:9160'],
       keyspace   : 'my_ks',
       user       : 'username',
       password   : 'pass',
       timeout    : 3000//,
       //cqlVersion : '3.0.0' // specify this if you're using Cassandra 1.1 and want to use CQL 3
    });
 
var logger = module.exports = {
    /**
    * Logs data to the Cassandra cluster
    *
    * @param status     the status event that you want to log
    * @param message    the detailed message of the event
    * @param stack      the stack trace of the event
    * @param callback   optional callback
    */
    log: function(status, message, stack, callback) {
        pool.connect(function(err, keyspace){
            keyspace.get('logger', function(err, cf) {
                var dt = Date.parse(new Date());
                //Create a column
                var column = {};
                column['time'] = dt;
                column['status'] = status;
                column['message'] = message;
                column['stack'] = stack;
 
                var timeUUID = helenus.TimeUUID.fromTimestamp(new Date());
                var cqlInsert = 'INSERT INTO logger (log_time, time, status, message, stack) ' +
                                'VALUES ( %s, %s, %s, %s, %s )';
 
                var cqlParams = [ timeUUID, column.time, column.status, column.message, column.stack ];
                pool.cql(cqlInsert, cqlParams, function(err, results) {
                    if (err) logger.log('ERROR', JSON.stringify(err), err.stack);
                });
            });
        });
    }
};
 

10-9. Using Riak with Node.js

Problem

You want to be able to utilize the highly scalable distributed database, Riak, with your Node.js application.

Solution

Riak is designed for high availability across distributed systems. It is designed to be fast and scalable, which makes it a good fit for many Node.js applications. In Listing 10-25, you will once again create a data store that will create, update, and retrieve your product data.

image Note  Riak is not currently supported on Windows machines. The following implementation should work on Linux or OS X.

Listing 10-25.  Using Riak with Node.js

/**
* RIAK
*/
 
var db = require('riak-js').getClient();
 
db.exists('products', 'huaraches', function(err, exists, meta) {
        if (exists) {
                db.remove('products', 'huaraches', function(err, value, meta) {
                        if (err) console.log(err);
                        console.log('removed huaraches');
                });
        }
});
 
db.save('products', 'flops', { name: 'flip flops', description: 'super for your feet', price: 12.50}, function(err) {
        if (err) console.log(err);
        console.log('flip flops created');
        process.emit('prod');
});
 
db.save('products', 'flops', { name: 'flip flops', description: 'fun for your feet', price: 12.00}, function(err) {
        if (err) console.log(err);
        console.log('flip flops updated');
        process.emit('prod');
});
 
db.save('products', 'huaraches', {name: 'huaraches', description: 'more fun for your feet', price: 20.00}, function(err) {
        if (err) console.log(err);
 
        console.log('huaraches created');
        process.emit('prod');
 
        db.get('products', 'huaraches', function(err, value, meta) {
                if (err) console.log(err);
                console.log(value);
        });
 
});
 
process.on('prod', function() {
 
        db.getAll('products', function(err, value, meta) {
                if (err) console.log(err);
                console.log(value);
        });
 
});

How It Works

To create this solution, you started with the Node.js module riak-js, installed by using $ npm install riak-js. You then connected to the server by using the getClient() method. When called with no arguments, this method will connect to a Riak node running on the local machine with its default settings, but it can also be configured with an options object.

You are now connected to the Riak instance through the db object. First, you see how concise the API is with db.exists(<bucket>, <key>, callback). This callback will receive a value of true if the key exists in the bucket on your Riak node. Because the bucket key did exist, you removed that particular entry by pointing to the same bucket and key with the db.remove() method.

Next, you saved some data to your Riak node by using the db.save method. This method accepts a bucket, key, and a value you wish to set for the bucket key. This can be a JavaScript object, a number, or a string that you wish to store for the key’s value. You also, as with all requests with riak-js, gain access to the callback function. The callbacks have three values: an error if one occurred, a value that is passed as the result of the riak-js method, and the meta object.

After you save your huaraches to the products bucket, you retrieve that key by utilizing the db.get() function. Again, this method takes the bucket and the key in order to determine which data on your node you wish to retrieve, and the callback receives the value and the meta associated with the data. There is one other way to access data with riak-js: db.getAll(), which retrieves all the values for a given bucket. The resultant value in the callback will be an array of the data associated with that bucket.
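The bucket/key/callback shape of this API is easy to emulate in memory, which can be handy for unit tests that should not touch a live Riak node. The stub below is hypothetical (its names and behavior are not part of riak-js); it only mirrors the (err, value, meta) callback convention described above.

```javascript
// Hypothetical in-memory stub mimicking the riak-js (err, value, meta)
// callback shape. For tests only; not a real Riak client.
function MemoryRiak() {
    this.buckets = {};
}

MemoryRiak.prototype.save = function (bucket, key, value, cb) {
    (this.buckets[bucket] = this.buckets[bucket] || {})[key] = value;
    cb(null);
};

MemoryRiak.prototype.get = function (bucket, key, cb) {
    var b = this.buckets[bucket] || {};
    if (key in b) cb(null, b[key], { bucket: bucket, key: key });
    else cb(new Error('not found'));
};

var db = new MemoryRiak();
var result;
db.save('products', 'huaraches', { price: 20.00 }, function () {
    db.get('products', 'huaraches', function (err, value) {
        result = value;
    });
});
```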

You have used riak-js to interact with your Riak cluster in Node.js. This provides a simple API to insert, read, update, and remove data on your node. Riak is a powerful distributed database solution that, aside from these simple tasks, can also perform more complex searches by using map and reduce functions through a similar API. In order to do this, you could search all products in your node by running the following commands (see Listing 10-26).

Listing 10-26.  Map Reduce with riak-js

db.mapreduce
        .add('products')
        .map(function(v) {
                return [Riak.mapValuesJson(v)[0]];
        })
        .run(function(err, value, meta) {
                console.log(value);
        });