Testing hapi applications with lab

As I mentioned earlier, testing is considered paramount in the hapi ecosystem: every module in the ecosystem, along with all of its dependencies, must maintain 100% code coverage at all times.

Fortunately, hapi provides us with tools that make testing hapi apps much easier, through a module called shot, which simulates network requests to a hapi server. Taking the hello world server from Chapter 1, Introducing hapi.js, let's write a simple test for it:

const Code = require('code');
const Lab = require('lab');
const Hapi = require('hapi');
const lab = exports.lab = Lab.script();
lab.test('It will return Hello World', (done) => {
  const server = new Hapi.Server();
  server.connection();
  server.route({
    method: 'GET',
    path: '/',
    handler: function (request, reply) {
      return reply('Hello World\n');
    }
  });
  server.inject('/', (res) => {
    Code.expect(res.statusCode).to.equal(200);
    Code.expect(res.result).to.equal('Hello World\n');
    done();
  });
});

Now that we are more familiar with what a test script looks like, most of this will look familiar. However, you may have noticed that we never made a call to server.start() in the test. This means the server was never started and no port was assigned, yet we can still make requests against it using the server.inject() API. Not having to start a server means less setup and teardown before and after tests, and a test suite that runs faster, as fewer resources are required. server.inject() can be used with the same API irrespective of whether the server has been started or not. The server.inject() API is provided by the shot module (https://github.com/hapijs/shot), if you would like to read more about how it works under the hood.
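It's also worth knowing that server.inject() accepts an options object in place of a URL string, which lets us control the method, headers, and payload of the simulated request. As a sketch (the /users route and its payload here are hypothetical, not part of our hello world server), injecting a POST request might look like this:

// A sketch: injecting a POST request with a payload and a custom header.
// The /users route is hypothetical; inside a lab test we would call done()
// once our assertions have run, as before.
server.inject({
  method: 'POST',
  url: '/users',
  payload: { name: 'John' },
  headers: { 'x-custom-header': 'value' }
}, (res) => {
  Code.expect(res.statusCode).to.equal(200);
  done();
});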

Code coverage

As I mentioned earlier in the chapter, having 100% code coverage is paramount in the hapi ecosystem, and in my opinion, hugely important for any application to have. Without a code coverage target, writing tests can feel like an empty or unrewarding task, where we don't know how many tests are enough, or how much of our application or module has been covered. With any task, we should know what our goal is; testing is no different, and that is what code coverage gives us. Even with 100% coverage, things can still go wrong, but it means at the very least, every line of code has been considered and has at least one test covering it. I've found from working on modules for hapi that trying to achieve 100% code coverage actually gamifies the process of writing tests, making it a more enjoyable experience overall.

Fortunately, lab has code coverage integrated, so we don't need an extra module to achieve this. It's as simple as adding the --coverage or -c flag to our test script command. Under the hood, lab builds an abstract syntax tree of the code so it can work out which lines are executed during the test run, and the resulting coverage is added to the console output when we run tests. The code coverage tool will also highlight the lines that are not covered by tests, so you know what has not been tested. This is extremely useful in identifying where to focus your testing effort.

It is also possible to enforce a minimum percentage of code coverage required for a suite of tests to pass, through lab's --threshold or -t flag followed by an integer. This is used for all the modules in the hapi ecosystem, where the threshold is set to 100, meaning 100%.
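For example, assuming lab is installed (typically as a dev dependency, exposing the binary at ./node_modules/.bin/lab or via an npm script), running the suite with coverage enabled and a 100% threshold would look like this:

$ lab -c -t 100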

Having a threshold of 100% for code coverage makes it much easier to manage the changes in a codebase. When any update or pull request is submitted, we can run the test suite against the changes, and know that all tests pass and all the code is covered before we even look at what has been changed in the proposed submission. There are services that even automate this process for us such as Travis CI (https://travis-ci.org/).

It's also worth knowing that the coverage report can be displayed in a number of formats; I suggest reading the lab documentation, available at https://github.com/hapijs/lab, for a full list of these reporters with explanations.
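For instance, at the time of writing, lab can write an HTML coverage report to a file using the -r (reporter) and -o (output) flags:

$ lab -c -r html -o coverage.html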

Let's now look at what's involved in getting 100% coverage for our previous example. First of all, we'll move our server code to a separate file, which we will place in the /lib directory, and call index.js.

It's worth noting here that placing all module code in a directory called lib, with the associated tests for each file in a test directory, is not only good testing practice but also the typical module structure in the hapi ecosystem. Preferably there is a one-to-one mapping, as we have here, where all the tests for lib/index.js are located in test/index.js. When trying to find out how a feature within a module works, this one-to-one mapping makes it much easier to find the associated tests and see examples of the feature in use.

So, having separated our server from our tests, let's look at what our two files now look like; first, lib/index.js:

const Hapi = require('hapi');
const server = new Hapi.Server();
server.connection();
server.route({
  method: 'GET',
  path: '/',
  handler: function (request, reply) {
    return reply('Hello World\n');
  }
});
module.exports = server;

The main change here is that we export our server at the end for another file to require and start it if necessary. Our test file in test/index.js will now look like:

const Code = require('code');
const Lab = require('lab');
const server = require('../lib/index.js');
const lab = exports.lab = Lab.script();
lab.test('It will return Hello World', (done) => {
  server.inject('/', (res) => {
    Code.expect(res.statusCode).to.equal(200);
    Code.expect(res.result).to.equal('Hello World\n');
    done();
  });
});

Finally, to test our code coverage, we update our npm test script to include the coverage flag, --coverage or -c. If you run this, you'll find we already have 100% of the code covered with this one test. An interesting exercise here is to find out which versions of hapi this code functions correctly with. At the time of writing, this code was written for hapi version 11.x.x on Node.js version 4.0.0. Will it work against hapi version 9 or 10? You can test this now by installing an older version with:

$ npm install hapi@10

This will give you an idea of how easy it can be to check whether your codebase works against different versions of libraries. If you have some time, it would be interesting to see how this example runs on different versions of Node. (Hint: it breaks on any version earlier than 4.0.0!)

In this example, we got 100% code coverage with one test. Unfortunately, we are rarely this lucky. As the complexity of our codebase increases, so does the complexity of our tests, which is where knowing how to write testable code comes in. This is a skill that comes with practice, by writing tests as you write your application or module code.

Linting

Also built into lab is linting support. Linting is the process of using static analysis to check whether a code style is adhered to. The rules of the code style can be defined in an .eslintrc or .jshintrc file. By default, lab will enforce the hapi style guide rules mentioned in the first chapter of this book.

The idea of linting is that all code has the same structure, making it much easier to spot bugs and keep the code tidy. As JavaScript is a very flexible language, linters are regularly used to forbid bad practices such as global or unused variables, as the sketch after the next paragraph shows.
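If you are not using the default hapi rules, a minimal .eslintrc forbidding exactly those two practices might look like the following sketch, which uses standard ESLint rules (the value 2 marks a rule as an error):

{
  "rules": {
    "no-undef": 2,
    "no-unused-vars": 2
  }
}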

To enable the lab linter, simply add the linter flag, --lint or -L, to the test command. I generally stick with the default hapi style guide rules, as they are chosen to promote easy-to-read code that is easily testable, and they forbid many bad practices. However, it's easy to customize the linting rules used; for this, I again recommend referring to the lab documentation.
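Putting the flags from this chapter together, a typical npm test script for a module in this style might look like the following sketch of a package.json scripts entry, assuming lab is installed as a dev dependency:

{
  "scripts": {
    "test": "lab -c -t 100 -L"
  }
}

With this in place, npm test runs the suite, fails if coverage drops below 100%, and lints the code, all in a single command.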
