Utilities

In the next group of modules, we will look at utilities. The functions chosen here are used across many different types of applications. We will cover everything from events and cryptography to buffers and npm.

Events

Events are used in many built-in Node objects. This is because emitting and listening for events is the perfect way to let another function know when to start executing. This is especially true in the asynchronous world of Node.js. Anytime we use the on function of an object, it means that it has inherited from EventEmitter. All of the examples will assume that the events variable is already created as follows:

var events = require('events');

EventEmitter

This is the parent class that can be inherited from to create a new EventEmitter:

events.EventEmitter

Description

Node.js has a fully featured event system that we can easily inherit from and implement. We do not need any extra frameworks or custom code. EventEmitter is the class to inherit from, and doing so makes every function in the rest of this section available.

Here is an example of setting up a custom EventEmitter:

var util = require('util');
var events = require('events');

function MyEventEmitter(){
    events.EventEmitter.call(this);
    this.test = function (emitThis) {
        this.emit('testEvent', emitThis);
    }
}

util.inherits(MyEventEmitter, events.EventEmitter);

var myEE = new MyEventEmitter();

myEE.on('testEvent', function (data) { console.log(data) });

myEE.test('test');

on

This function adds a listener for a specific event:

emitter.on(event, listenerFunction)
emitter.addListener(event, listenerFunction)

Description

The on function has become the preferred naming convention for adding listeners to events; as an example, jQuery uses the exact same function name for its event listeners. The event parameter is the string name of the event that will be emitted. The listenerFunction parameter is what will be executed when the event is emitted.

The listenerFunction parameter can be an anonymous function or a reference to a function. The preferred way of adding a listener is with a reference to a function. This will allow us to remove this specific listener at a later time.

Here is an example based on our new MyEventEmitter class:

var quickLog = function (data) {
    console.log('quickLog: ' + data);
}
myEE.on('testEvent', quickLog);

once

This works just like on, except it only executes once and then removes itself as a listener:

emitter.once(event, listenerFunction)

removeListener

This is the function that is used to remove a listener from an event:

emitter.removeListener(event, function)

Description

When we are done listening for this event, we will want to remove our listener. This will help prevent memory leaks. If we added an anonymous function as the listener, then we cannot remove it as we do not have a reference to it. Using our previous example from on, we will remove the listener:

myEE.removeListener('testEvent', quickLog);

removeAllListeners

This function will remove every listener for all events or a specific event:

emitter.removeAllListeners([event])

Description

This is essentially the nuclear option for removing listeners: it is indiscriminate and will even remove listeners we did not add. Use it as a last resort.

An example that removes all the listeners from this event is shown here. If the event was left blank, it would remove all listeners for all events:

myEE.removeAllListeners('testEvent');

setMaxListeners

This function sets the number of listeners that can be added before Node.js warns about a possible memory leak:

emitter.setMaxListeners(numberOfListeners)

Node.js has a helpful warning when the number of listeners exceeds a threshold. The default value is 10, so when you add the eleventh listener, Node.js will warn:

(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.

As a general rule, this warning is worth heeding: if we keep adding listeners to an event, there is a good chance of a memory leak. However, there will be times when we need more than 10 listeners on an event. This is where we use setMaxListeners. If we set the max listeners to zero, we can add as many as we want.

Here is an example of setting the max listeners to 50:

myEE.setMaxListeners(50);

emit

This is how we fire off an event:

emitter.emit(eventName, [argument], […])

If we have extended an object to be an event emitter, then we will want to emit some events! This is the function to do that. It will execute all the event listeners that have attached themselves to this event based on eventName.

Here is an example that shows a listener being added and then emitting an event:

myEE.on('testEvent', function (data) { console.log(data) });
myEE.emit('testEvent', 'Emit This!', 'Another Argument!');

Crypto

Every modern application needs cryptography. A great example of Node.js using cryptography is with HTTPS. We will not explore the inner workings of HTTPS as there is a module (the https module) that does this for us. We will look at the cryptographic functions used to hash, store, and check passwords.

In the same way as the other modules, we will require the crypto module and have it available for use in our examples. Here is what we will need:

var crypto = require('crypto');

createHash

This function will return a crypto.Hash object:

crypto.createHash(algorithm)

Description

The algorithms that can be used will be different for each system as it relies on OpenSSL. We can find out the algorithms that crypto can use by calling crypto.getHashes(). This will return an array of strings that can then be passed into createHash as the algorithm.

The return object from this function is a crypto.Hash object, which is covered in the next section.

Here is an example that creates an MD5 hash:

var md5 = crypto.createHash('md5');

The hash object

This is the object that is returned from crypto.createHash:

hash.update(data, [encoding])
hash.digest([encoding])

Description

Once we get a reference to the Hash object that has been returned, we will see that it has two functions. The first is update, which allows us to add to the data that will be hashed. This can be called multiple times, which is important if we want to hash a stream input.

The next function is digest. This will return the digest based on the algorithm the hash object was created with. The string encoding for this function can be hex, binary, or base64.

Here is a full example of reading data from a file and then calculating the MD5 hash of the file:

var fs = require('fs');
var f = fs.readFileSync(__dirname + '/test.txt');

var md5 = crypto.createHash('md5');
md5.update(f);
console.log(md5.digest('base64'));

Tip

Do not use hashing for passwords. The next function we will cover is much more secure than a simple hash. A digest hash is great for checking if some data has changed, as the hashes will be different if even one bit is different. In addition to this, it can be used as a key or identifier. A great example is using it for a memory cache and using the hash as the key. If the data is the same, the key will be the same.

pbkdf2

This function will use HMAC-SHA1 multiple times to create a derived key:

crypto.pbkdf2(password, salt, iterations, keyLength, callback)
crypto.pbkdf2Sync(password, salt, iterations, keyLength)

Return Type

The pbkdf2Sync function returns the derived key as a buffer, while pbkdf2 passes it to the callback as a buffer.

Description

Pbkdf2 (password-based key derivation function 2) is designed for password storage. It is better than plain hashing because it is deliberately slow to compute. Hashing is designed to be a quick calculation, which is bad for passwords, as modern CPUs can calculate thousands upon thousands of hashes a second. This makes cracking a hashed password easy.

Pbkdf2 fixes this using a work factor in iterations. The higher the iterations, the longer the calculation will take. Now, instead of calculating thousands a second, we can slow a CPU down to just a few a second. This is a significant decrease.

These are the parameters used:

  • password: This is the string we want to create a derived key for.
  • salt: This is a string that is combined with the password. Doing this ensures that the same password will not have the same hash if salt is different.
  • iterations: This is the work factor that instructs the function how many times to repeat. This can be increased as CPUs become faster. Currently, at least 10,000 should create a reasonably secure derived key.
  • keyLength: This is the desired length of the returned derived key.

Here is an example of creating a derived key for the string password and salt:

crypto.pbkdf2('password', 'salt', 10000, 32, function (err, key) {
    console.log(key.toString('base64'));
});

randomBytes

This returns cryptographically strong pseudo-random data:

crypto.randomBytes(length, [callback])

Return type

A buffer is returned.

Description

Random data is needed for various functions. While no computed data can be truly random, there are varying levels of randomness, and the output of randomBytes is random enough to use in cryptographic functions. A perfect use of randomBytes is creating a salt for pbkdf2. The salt is combined with the password to create a different hash even if the passwords are the same.

This function can be executed asynchronously or synchronously: if a callback is passed, it executes asynchronously; otherwise, it executes synchronously and returns the buffer.

Here is a function to create a random salt for pbkdf2. If you compare this example to the previous one, you will see that this example outputs a unique string each time, while the previous one does not:

var random = crypto.randomBytes(256);

crypto.pbkdf2('password', random.toString('base64'), 10000, 32, function (err, key) {
    console.log(key.toString('base64'));
});

pseudoRandomBytes

This returns pseudo-random data:

crypto.pseudoRandomBytes(length, [callback])

Return Type

A buffer is returned.

Description

This works exactly like randomBytes, except that it is not cryptographically strong. This means we should not use it with any cryptographic functions such as pbkdf2.

We can use it for anything else that requires randomness, for example, a filename or a cache key.

Here is a simple example of executing this function asynchronously:

crypto.pseudoRandomBytes(256, function (err, randomData) {
    console.log(randomData.toString('base64'));
});

Buffer

Node.js uses buffers internally for many things. We have seen this as we have had buffers returned for many functions. This is because anytime raw data or binary data needs to be stored, it will be stored in a buffer.

Buffers have a few quirks that we must keep in mind. First, buffers store data, but to utilize the data held inside them, we must encode it; we will cover the encodings and how to do this in this section. Second, buffers cannot be resized: when one is created, we must give a length, and that will always be the length of that buffer. We can, of course, create a new buffer that is larger and then copy over the data. Finally, Buffer is a global, so we do not need var buffer = require('buffer'); it can be used in any file at any time.
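Here is a minimal sketch of that resize-by-copy pattern, using the copy function from Node's buffer API (not otherwise covered in this section):

```javascript
var small = new Buffer('abc');

// Allocate a larger buffer and copy the old contents into it.
var bigger = new Buffer(6);
small.copy(bigger, 0);

// There is now room to append more data after the copied bytes.
bigger.write('def', 3);

console.log(bigger.toString()); // 'abcdef'
```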

Buffer creation

The initialization function of a buffer is as shown in the following code snippet:

new Buffer(size)
new Buffer(array)
new Buffer(str, [encoding])

Return value

This returns a buffer object.

Description

There are three different ways to initialize a buffer. The first form allocates a buffer of the given size in bytes, the next initializes it from an array of octets, and the final one initializes it from a string. The encoding is optional, as it defaults to UTF8.

This example demonstrates initializing using an array with the hello string in ASCII:

var hello = [72, 101, 108, 108, 111];
var buffer = new Buffer(hello);
console.log(buffer.toString('ascii'));

index

This gets the value at the specific index in the buffer:

buffer[index]

Return Value

This returns the value at the index.

Description

This works much like an array. Here is an example of getting the first index in the buffer:

var buffer = new Buffer('Hello!');
console.log(buffer[0]);

toString

This returns the buffer as a string based on the encoding:

buffer.toString([encoding], [start], [end])

Return Value

This returns a string.

Description

This is most likely the function that you will use to get the data out of a buffer. The first parameter is encoding, which is one of the following:

  • ascii
  • utf8
  • utf16le
  • base64
  • binary
  • hex

All the parameters are optional, so this means that they all have default values. Encoding is defaulted to UTF8, start is 0, and end is the end of the buffer.

Here is an example of creating a buffer and retrieving the data out of it. It explicitly defines each of the parameters:

var buffer = new Buffer('Hello this is a buffer');
console.log(buffer.toString('utf8', 0, buffer.length));

toJSON

This returns the contents of the buffer as a JavaScript object:

buffer.toJSON()

Return Value

This returns an array with the contents of the buffer.

Description

This will return the contents of the buffer mapped to an array. This example is similar to the previous one, but with toJSON:

var buffer = new Buffer('Hello this is a buffer');
console.log(buffer.toJSON());

isBuffer

This is a class method that will determine whether an object is a buffer:

Buffer.isBuffer(objectToTest)

Return Value

This returns a Boolean value.

Description

Remember that this is a class method, so it can be executed without a new instance of buffer. Here is an example of using the function:

var buffer = new Buffer('Hello this is a buffer');
console.log(Buffer.isBuffer(buffer));

write

This writes a string to the buffer:

buffer.write(stringToWrite, [offset], [length], [encoding])

Return value

This returns an integer of the number of bytes written.

Description

This function will write the string passed into the buffer. Much like the other buffer functions, there are many optional parameters that have defaults. The default offset is 0, the length is buffer.length – offset, and the encoding is UTF8.

This example writes to a buffer twice using the return value from the first write to append the second string:

var buffer = new Buffer(12);
var written = buffer.write('Buffer ', 0, 7, 'utf8');
console.log(written);
buffer.write('time.', written);
console.log(buffer.toString());

byteLength

This function will get the length of a string in bytes based on the encoding:

Buffer.byteLength(string, [encoding])

Return value

This returns an integer of the number of bytes needed.

Description

Buffers cannot be resized after initialization, so we may need to know how big a string is beforehand. This is where byteLength comes in. Note that, like isBuffer, it is a class method called on Buffer itself.

Here is an example that determines the size of a string and then writes it to the buffer:

var byteLength = Buffer.byteLength('Buffer time.', 'utf8');
var buffer = new Buffer(byteLength);
var written = buffer.write('Buffer time.', 0, buffer.length, 'utf8');
console.log(buffer.toString());

readUInt

This will get an unsigned integer at a certain spot in the buffer:

buffer.readUInt8(offset, [noAssert])
buffer.readUInt16LE(offset, [noAssert])
buffer.readUInt16BE(offset, [noAssert])
buffer.readUInt32LE(offset, [noAssert])
buffer.readUInt32BE(offset, [noAssert])

Return Value

This returns an unsigned integer of the size used, for example, an 8-bit integer for readUInt8.

Description

Not all data stored in a buffer is exactly a byte. Sometimes, data will need to be read in 16-bit or 32-bit chunks. In addition to this, you can also specify whether the data is little endian or big endian, denoted by LE or BE in the function name, respectively.

The offset is the position in the buffer to start reading at. The noAssert parameter, which defaults to false, controls validation: when it is set to true, no check is made that the offset falls within the buffer.

Here is an example of setting data and then reading the data out with readUInt16:

var buffer = new Buffer(2);
buffer[0] = 0x1;
buffer[1] = 0x2;
console.log(buffer.readUInt16LE(0));
console.log(buffer.readUInt16BE(0));

writeUInt

This writes an unsigned integer to a buffer:

buffer.writeUInt8(value, offset, [noAssert])
buffer.writeUInt16LE(value, offset, [noAssert])
buffer.writeUInt16BE(value, offset, [noAssert])
buffer.writeUInt32LE(value, offset, [noAssert])
buffer.writeUInt32BE(value, offset, [noAssert])

Description

This is the exact opposite of the readUInt function. Sometimes, we need to write data that is larger than one byte in length. These functions make it simple.

The noAssert parameter is optional and defaults to false; when it is set to true, no validation is run on the value or offset.

Here is an example of writing UInt16 to the buffer:

var buffer = new Buffer(4);
buffer.writeUInt16LE(0x0001, 0);
buffer.writeUInt16LE(0x0002, 2);
console.log(buffer);

Tip

Note that there are more read and write functions of this type for the buffer class. Instead of creating a very redundant section, I will list them. Remember that they work in the same fashion as the functions we have covered: readInt8, readInt16LE, readInt16BE, readInt32LE, readInt32BE, readDoubleLE, readDoubleBE, readFloatLE, and readFloatBE. There is a write() function that maps to each one of these as well.
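As a quick sketch of one of those variants, here is a writeDoubleLE/readDoubleLE round trip; the other read and write functions follow the same pattern:

```javascript
var buffer = new Buffer(8); // a double is 8 bytes

buffer.writeDoubleLE(3.14159, 0);
console.log(buffer.readDoubleLE(0)); // 3.14159
```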

Console

This is much like the console that is present in most modern browsers. The console essentially maps to stdout and stderr. This isn't a module, and each function is not very complex, so let's jump right in.

log

This writes to stdout:

console.log(message, […])

Description

This is probably the most used console function. It is great for debugging, and the output can be combined with piping to write to a file for a history. The multiple parameters work like printf-style format placeholders.

Here is an example of using multiple parameters:

console.log('Multiple parameters in %s', 'console.log');

dir

This is an alias for util.inspect:

console.dir(object)

Description

Many times, the output of console.log and console.dir will be similar. However, when trying to look at an object, console.dir should be preferred.
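Here is a small sketch showing console.dir on a nested object; because it runs the object through util.inspect, the nested properties are expanded in the output:

```javascript
var obj = { name: 'test', nested: { deep: true } };

// console.dir formats the object with util.inspect before printing.
console.dir(obj);
```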

time and timeEnd

These two functions are used together to mark the start and end of a timer:

console.time(label)
console.timeEnd(label)

Description

These two functions will always be used together. console.time starts a timer that can be stopped with console.timeEnd by passing the same label. When timeEnd is called, it will log the elapsed time between start and end in milliseconds.

Here is an example that uses setTimeout:

console.time('simple-timer');
setTimeout(function () {
    console.timeEnd('simple-timer');
}, 500);

trace

This logs to the console and includes a stack trace:

console.trace(message, […])

Description

This works much like console.log. The first parameter can be treated like a formatted string, with the other parameters supplying the additional input. The main difference is that console.trace will include a stack trace when it logs to the console.

Here is a simple example:

console.trace('This should be the first line.');

npm (Node Package Manager)

npm is not a Node.js module like the others covered so far; it is Node's package manager. We will look at some of npm's features and uses, as almost every Node.js project will use npm.

Most modern platforms have a way of grouping together code that serves a function or purpose called packages. Node.js uses npm to track, update, pin, and install these packages.

init

This initializes a node module by creating package.json in the current directory:

npm init

Description

An interactive console session will ask you quite a few questions and use the answers to build a package.json file for you. This is a great way to kick off a new module. If a package.json file already exists, this will not delete it or any of its current properties.

package.json

This is the file that has all the information about your project.

Description

This is not a function or command, but it is the most important file in your project. It determines all the information about your project. Although, technically, the only required properties are name and version, there are many properties that can be set. If this file was created using npm init, you will have quite a few already filled out. It would be tedious to list all the possibilities, so here are just a few of the most useful: name, version, scripts, dependencies, devDependencies, authors, and license.

The npm docs at https://www.npmjs.org/doc/files/package.json.html go through all the settings and their uses.
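As an illustrative sketch (the name and values here are made up), a minimal package.json using those properties might look like this:

```json
{
  "name": "my-module",
  "version": "0.1.0",
  "scripts": {
    "test": "node test.js"
  },
  "dependencies": {},
  "devDependencies": {},
  "license": "MIT"
}
```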

install

This is the command to install a package:

npm install
npm install [package[@version]] [--save | --save-dev]

Description

This is the main way to install new packages and their dependencies. If npm install is called without any parameters, it will use package.json and install all the dependencies.

This command also allows you to install packages by executing it with the name of the package. This can be augmented by adding a version; if the version is omitted, it will install the newest version. In addition to this, you can use the save flags to create a property in dependencies or devDependencies.

This is not everything npm install can do, but these are its most common uses.

update

This will update a package to the newest version:

npm update [package]

Description

This command will update all the packages or a specific package to the newest version.

shrinkwrap

This will explicitly define all the dependencies for a project:

npm shrinkwrap

Description

This is different from the basic list of dependencies in package.json. Most packages have requirements of their own. When a package is installed, npm will go out and find the newest version that matches the dependency's specified version. This can lead to different versions of packages installed when run at different times. This is something that most developers want to avoid.

One way to combat this is to run npm shrinkwrap. It will create npm-shrinkwrap.json. This file will explicitly define the versions currently installed recursively for every package installed. This ensures that when you run npm install again, you will know what package versions will get installed.

run

This will run an arbitrary command:

npm run [script]

Description

The package.json file has a property named scripts. This is an object that can have a list of commands that can be run by npm. These scripts can be anything that runs from the command line.

There are three npm commands that use this scripts object: npm test, npm start, and npm stop. These map to the test, start, and stop properties of the scripts object, respectively.

Here is an example of a scripts object from package.json and the way to call it from npm:

"scripts": {
    "hey": "echo hey"
}

npm run hey

Stream

Stream is an interface that is used by many internal objects. Any time data needs to be read or written, it is most likely done through a stream. This fits with the Node.js asynchronous paradigm. If we are reading a large file from the filesystem, we would create a listener to tell us when each chunk of data is ready to be read. This does not change if that file is coming from the network, an HTTP request, or stdin.

In this book, we are only going to cover using streams; the stream interface can also be implemented by your own objects.

Streams can be readable, writable, or duplex (both). We will cover readable and writable streams separately.

Readable

This is, of course, a stream that we can get data out of. A readable stream can be in one of two modes: flowing or non-flowing. Which mode it is in depends on which events are listened for.

To put a stream in the flowing mode, you will just have to listen for the data event. Conversely, to put a stream in the non-flowing mode, you will have to listen for the readable event and then call the stream.read() function to retrieve data. The easiest way to understand the modes is to think about the data as a lot of chunks. In the flowing mode, every time a chunk is ready, the data event will fire, and you can read that chunk in the callback of the data event. In the non-flowing mode, the chunk will fire a readable event, and then, you will have to call read to get the chunk.

Here is a list of events:

  • readable
  • data
  • end
  • close
  • error

Here are two examples, one that uses flowing and one that uses non-flowing:

var fs = require('fs');
var readable = fs.createReadStream('test.txt');

readable.on('data', function (chunk) {
    console.log(chunk.toString());
});

var fs = require('fs');
var readable = fs.createReadStream('test.txt');

readable.on('readable', function () {
    var chunk;
    while ((chunk = readable.read()) !== null) {
        console.log(chunk.toString());
    }
});

read

Use with a non-flowing readable stream:

readable.read([size])

Return value

This returns either a string, buffer, or null. A string is returned if the encoding is set. A buffer is returned if the encoding is not set. Finally, null is returned when there is no data in the stream.

Description

This reads from a stream. Most of the time, the optional parameter size is not needed and should be avoided. Only use this with a non-flowing stream.

Here is a simple example that reads a file:

readable.on('readable', function () {
    var chunk;
    while ((chunk = readable.read()) !== null) {
        console.log(chunk.toString());
    }
});

setEncoding

This sets the encoding of the stream:

stream.setEncoding(encoding)

Description

By default, a readable stream will output a buffer. This will set the encoding of the buffer, so a string is returned.

Here is the opening example with setEncoding used:

readable.setEncoding('utf8');
readable.on('readable', function () {
    var chunk;
    while ((chunk = readable.read()) !== null) {
        console.log(chunk);
    }
});

resume and pause

These functions pause and resume a stream:

stream.pause()
stream.resume()

Description

Pause will stop a stream from emitting data events. If this is called on a non-flowing stream, it will be changed into a flowing stream and be paused. Resume will cause the stream to start emitting data events again.

Here is an example that pauses the stream for a few seconds before reading it:

readable.pause();
readable.on('data', function (chunk) {
    console.log(chunk.toString());
});
setTimeout(function () { readable.resume();}, 3000);

pipe

This allows you to take the output of a readable stream and send it to the input of a writable stream:

readable.pipe(writable, [options])

Return Value

This returns the writable stream so that piping can be chained.

Description

This is exactly the same as piping output in a shell.

A great design paradigm is to pipe from a stream to another stream that transforms the stream and then pipe that to the output. For example, you want to send a file over the network. You would open the file as a readable stream, pass it to a duplex stream that would compress it, and pipe the output of the compression to a socket.

Here is a simple example of piping output to stdout:

var readable = fs.createReadStream('test.txt');
readable.pipe(process.stdout);

Writable

This is the stream the data goes to. This is a little simpler as there are really only two functions that matter: write and end.

Here are the events and details of when they fire:

  • drain: This fires when the internal buffer has written all the data
  • finish: This fires when the stream has been ended and all the data has been written
  • error: This fires when an error occurs

Note

It is important to note that a stream can be given more data than it can write in a timely fashion. This is especially true when writing to a network stream. Because of this, the finish and drain events exist to let your program know that the data has actually been written.

write

This function writes to the stream:

writable.write(chunk, [encoding], [callback])

Return value

This returns a Boolean value: true if the data was handled completely, and false if the internal buffer is full. In the latter case, wait for the drain event before writing more.

Description

This is the main function of a writable stream. The chunk can be a buffer or string, encoding defaults to UTF8, and the callback is called when the current chunk of data has been written.

Here is a simple example:

var fs = require('fs');
var writable = fs.createWriteStream('WriteStream.txt');

var hasWritten = writable.write('Write this!', 'utf8', function () {
    console.log('The buffer has written');
});

end

This will close the stream, and no more data can be written:

writable.end([chunk], [encoding], [callback])

Description

When you are done writing to a stream, the end function should be called on it. All the parameters are optional: chunk is data that you can write before the stream ends, encoding defaults to UTF8, and the callback will be attached to the finish event.

Here is an example to end a writable stream:

var fs = require('fs');
var writable = fs.createWriteStream('WriteStream.txt');

writable.end('Last data written.', 'utf8', function () {
    //this runs when everything has been written.
});