Unit Testing and Functional Testing

Unit testing has become a primary part of good software development practice. It is a method by which individual units of source code are tested to ensure they function properly. Each unit is theoretically the smallest testable part of an application. 

In unit testing, each unit is tested separately, isolating the unit under test as much as possible from other parts of the application. If a test fails, you would want it to be due to a bug in your code rather than a bug in the package that your code happens to use. A common technique is to use mock objects or mock data to isolate individual parts of the application from one another.

Functional testing, on the other hand, doesn't try to test individual components. Instead, it tests the whole system. Generally speaking, unit testing is performed by the development team, while functional testing is performed by a Quality Assurance (QA) or Quality Engineering (QE) team. Both testing models are needed to fully certify an application. An analogy might be that unit testing is similar to ensuring that each word in a sentence is correctly spelled, while functional testing ensures that the paragraph containing that sentence has a good structure. 

Writing a book requires not just ensuring the words are correctly spelled, but ensuring that the words string together as useful grammatically correct sentences and chapters that convey the intended meaning. Similarly, a successful software application requires much more than ensuring each "unit" correctly behaves. Does the system as a whole perform the intended actions?

In this chapter, we'll cover the following topics:

  • Assertions as the basis of software tests
  • The Mocha unit testing framework and the Chai assertions library
  • Using tests to find and fix bugs
  • Using Docker to manage test infrastructure
  • Testing a REST backend service
  • UI functional testing in a real web browser using Puppeteer
  • Improving UI testability with element ID attributes

By the end of this chapter, you will know how to use Mocha, as well as how to write test cases both for directly invoked code under test and for code accessed via REST services. You will have also learned how to use Docker Compose to manage test infrastructure, both on your laptop and on the AWS EC2 Swarm infrastructure from Chapter 12, Deploying Docker Swarm to AWS EC2 with Terraform.

That's a lot of territory to cover, so let's get started.

Assert – the basis of testing methodologies

Node.js has a useful built-in testing tool known as the assert module. Its functionality is similar to assert libraries in other languages. Namely, it's a collection of functions for testing conditions, and if the conditions indicate an error, the assert function throws an exception. It's not a complete test framework by any stretch of the imagination, but it can still be used for some amount of testing.

At its simplest, a test suite is a series of assert calls to validate the behavior of the thing being tested. For example, a test suite could instantiate the user authentication service, then make an API call and use assert methods to validate the result, then make another API call to validate its results, and so on.

Consider the following code snippet, which you can save in a file named deleteFile.mjs:

import fs from 'fs';

export function deleteFile(fname, callback) {
    fs.stat(fname, (err, stats) => {
        if (err)
            callback(new Error(`the file ${fname} does not exist`));
        else {
            fs.unlink(fname, err => {
                if (err) callback(new Error(`Could not delete ${fname}`));
                else callback();
            });
        }
    });
}

The first thing to notice is that this code contains several layers of asynchronous callback functions. This presents a couple of challenges:

  • Capturing errors from deep inside a callback
  • Detecting conditions where the callbacks are never called

The following is an example of using assert for testing. Create a file named test-deleteFile.mjs containing the following:

import assert from 'assert';
import { deleteFile } from './deleteFile.mjs';

deleteFile("no-such-file", (err) => {
    assert.ok(err);
    assert.ok(err instanceof Error);
    assert.match(err.message, /does not exist/);
});

This is what's called a negative test scenario, in that it tests whether requesting to delete a nonexistent file produces the correct error. The deleteFile function reports an error containing the text does not exist if the file to be deleted does not exist. This test ensures the correct error is reported, and it would fail if the wrong error arrives or if no error arrives at all.

If you are looking for a quick way to test, the assert module can be useful when used this way. Each test case would call a function, then use one or more assert statements to test the results. In this case, the assert statements first ensure that err has some kind of value, then ensure that value is an Error instance, and finally ensure that the message attribute has the expected text. If it runs and no messages are printed, then the test passes. But what happens if the deleteFile callback is never called? Will this test case catch that error?

$ node test-deleteFile.mjs 

No news is good news, meaning it ran without messages and therefore the test passed.

The assert module is used by many of the test frameworks as a core tool for writing test cases. What the test frameworks do is create a familiar test suite and test case structure to encapsulate your test code, plus create a context in which a series of test cases are robustly executed.

For example, we asked about the error of the callback function never being called. Test frameworks usually have a timeout so that if no result of any kind is supplied within a set number of milliseconds, then the test case is considered an error.

There are many styles of assertion libraries available in Node.js. Later in this chapter, we'll use the Chai assertion library (http://chaijs.com/), which gives you a choice between three different assertion styles (should, expect, and assert).
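To give a sense of the difference, here is a small sketch showing the same checks written in each of Chai's three styles. The result object is purely hypothetical; the point is the syntax:

import Chai from 'chai';
const { assert, expect } = Chai;
Chai.should();  // extends Object.prototype with a .should property

const result = { status: 'ok', items: [ 1, 2, 3 ] };

// assert style
assert.equal(result.status, 'ok');
assert.lengthOf(result.items, 3);

// expect style
expect(result.status).to.equal('ok');
expect(result.items).to.have.lengthOf(3);

// should style
result.status.should.equal('ok');
result.items.should.have.lengthOf(3);

All three forms express the same checks; which one to use is purely a matter of taste.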

Testing a Notes model

Let's start our unit testing journey with the data models we wrote for the Notes application. Because this is unit testing, the models should be tested separately from the rest of the Notes application.

In the case of most of the Notes models, isolating their dependencies implies creating a mock database. Are you going to test the data model or the underlying database? Mocking out a database means creating a fake database implementation, which does not look like a productive use of our time. You can argue that testing a data model is really about testing the interaction between your code and the database. Since mocking out the database means not testing that interaction, we should test our code against the database engine in order to validate that interaction.

With that line of reasoning in mind, we'll skip mocking out the database, and instead run the tests against a database containing test data. To simplify launching the test database, we'll use Docker to start and stop a version of the Notes application stack that's set up for testing.

Let's start by setting up the tools.

Mocha and Chai – the chosen test tools

If you haven't already done so, duplicate the source tree so that you can use it in this chapter. For example, if you have a directory named chap12, create one named chap13 and copy everything from chap12 into it.

In the notes directory, create a new directory named test.

Mocha (http://mochajs.org/) is one of many test frameworks available for Node.js. As you'll see shortly, it helps us write test cases and test suites, and it provides a test results reporting mechanism. It was chosen over the alternatives because it supports Promises. It fits very well with the Chai assertion library mentioned earlier. 

While in the notes/test directory, type the following to install Mocha and Chai:

$ npm init
... answer the questions to create package.json
$ npm install mocha chai cross-env npm-run-all --save-dev
...

This, of course, sets up a package.json file and installs the required packages.

Beyond Mocha and Chai, we've installed two additional tools. The first, cross-env, is one we've used before and it enables cross-platform support for setting environment variables on the command line. The second, npm-run-all, simplifies using package.json to drive build or test procedures.

For the documentation of cross-env, go to https://www.npmjs.com/package/cross-env.

For the documentation of npm-run-all, go to https://www.npmjs.com/package/npm-run-all.

With the tools set up, we can move on to creating tests.

Notes model test suite

Because we have several Notes models, the test suite should run against any model. We can write tests using the NotesStore API, and an environment variable should be used to declare the model to test. Therefore, the test script will load notes-store.mjs and call functions on the object it supplies. Other environment variables will be used for other configuration settings.

Because we've written the Notes application using ES6 modules, we have a small item to consider. Older Mocha releases only supported running tests in CommonJS modules, so this would require us to jump through a couple of hoops to test Notes modules.  But the current release of Mocha does support them, meaning we can freely use ES6 modules.

We'll start by writing a single test case and go through the steps of running that test and getting the results. After that, we'll write several more test cases, and even find a couple of bugs. These bugs will give us a chance to debug the application and fix any problems. We'll close out this section by discussing how to run tests that require us to set up background services, such as a database server.

Creating the initial Notes model test case

In the test directory, create a file named test-model.mjs containing the following. This will be the outer shell of the test suite:

import util from 'util';
import Chai from 'chai';
const assert = Chai.assert;
import { useModel as useNotesModel } from '../models/notes-store.mjs';

var store;

describe('Initialize', function() {
    this.timeout(100000);
    it('should successfully load the model', async function() {
        try {
            // Initialize just as in app.mjs
            // If these execute without exception the test succeeds
            store = await useNotesModel(process.env.NOTES_MODEL);
        } catch (e) {
            console.error(e);
            throw e;
        }
    });
});

This loads in the required modules and implements the first test case.

The Chai library supports three flavors of assertions. We're using the assert style here, but it's easy to use a different style if you prefer.

For the other assertion styles supported by Chai, see http://chaijs.com/guide/styles/.

Chai's assertions include a very long list of useful assertion functions. For the documentation, see http://chaijs.com/api/assert/.

To load the model to be tested, we call the useModel function (renamed as useNotesModel). You'll remember that this uses the import() function to dynamically select the actual NotesStore implementation to use. The NOTES_MODEL environment variable is used to select which to load.
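As a reminder of what that looks like, the mechanism in notes-store.mjs is roughly as follows. This is only a sketch of the idea; the actual code was written in an earlier chapter and may differ in its details:

var _NotesStore;

export async function useModel(model) {
    try {
        // Dynamically load the chosen implementation, such as
        // notes-memory.mjs, notes-fs.mjs, or notes-sqlite3.mjs
        const NotesStoreModule = await import(`./notes-${model}.mjs`);
        const NotesStoreClass = NotesStoreModule.default;
        _NotesStore = new NotesStoreClass();
        return _NotesStore;
    } catch (err) {
        throw new Error(`No recognized NotesStore in ${model} because ${err}`);
    }
}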

Calling this.timeout adjusts the time allowed for completing the test. By default, Mocha allows 2,000 milliseconds (2 seconds) for a test case to be completed. This particular test case might take longer than that, so we've given it more time.

The test function is declared as async. Mocha can be used in a callback fashion, where Mocha passes a callback into the test function, which the test invokes to indicate completion or an error. However, it can also be used with async test functions, meaning that we can throw errors in the normal way and Mocha will automatically capture them to determine whether the test fails.

Generally, Mocha looks to see if the function throws an exception or whether the test case takes too long to execute (a timeout situation). In either case, Mocha will indicate a test failure. That's, of course, simple to determine for non-asynchronous code. But Node.js is all about asynchronous code, and Mocha has two models for testing asynchronous code. In the first (not seen here), Mocha passes in a callback function, and the test code is to call the callback function. In the second, as seen here, it looks for a Promise being returned by the test function and determines a pass/fail regarding whether the Promise is in the resolve or reject state.
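For comparison, here is a sketch of the callback style, which we are not using in this suite. Mocha passes a done callback into the test function; the test calls done() on success, or done(err) to report a failure:

it('should report an error for a missing file', function(done) {
    // deleteFile is the function shown earlier in this chapter
    deleteFile('no-such-file', (err) => {
        try {
            assert.ok(err instanceof Error);
            done();      // success
        } catch (e) {
            done(e);     // report the assertion failure to Mocha
        }
    });
});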

We are keeping the NotesStore model in the global store variable so that it can be used by all tests. The test, in this case, is whether we can load a given NotesStore implementation. As the comment states, if this executes without throwing an exception, the test has succeeded.  The other purpose of this test is to initialize the variable for use by other test cases.

It is useful to notice that this code carefully avoids loading app.mjs. Instead, it loads the test driver module, models/notes-store.mjs, and whatever module is loaded by useNotesModel. The NotesStore implementation is what's being tested, and the spirit of unit testing says to isolate it as much as possible.

Before we proceed further, let's talk about how Mocha structures tests.

With Mocha, a test suite is contained within a describe block. The first argument is a piece of descriptive text that you use to tailor the presentation of test results. The second argument is a function that contains the contents of the given test suite.

The it function is a test case. The intent is for us to read this as it should successfully load the model. Then, the code within the function is used to check that assertion.

With Mocha, it is important to not use arrow functions in the describe and it blocks. By now, you will have grown fond of arrow functions because of how much easier they are to write. However, Mocha calls these functions with a this object containing useful functions for Mocha. Because arrow functions avoid setting up a this object, Mocha would break.
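To make the pitfall concrete, here is a small sketch. With a regular function, this is Mocha's context object, so this.timeout is available; with an arrow function, it is not:

// Works: a regular function receives Mocha's context as `this`
describe('Initialize', function() {
    this.timeout(100000);       // fine
    // ...
});

// Breaks: an arrow function keeps the enclosing `this`, so there is
// no Mocha context here
describe('Initialize', () => {
    // this.timeout(100000);    // fails: this is not Mocha's context
    // ...
});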

Now that we have a test case written, let's learn how to run tests.

Running the first test case

Now that we have a test case, let's run the test. In the package.json file, add the following scripts section:

"scripts": {
"test-all": "npm-run-all test-notes-memory test-level test-notes-
fs test-notes-sqlite3 test-notes-sequelize-sqlite",
"test-notes-memory": "cross-env NOTES_MODEL=memory mocha test
-model",
"test-level": "cross-env NOTES_MODEL=level mocha test-model",
"test-notes-fs": "cross-env NOTES_MODEL=fs mocha test-model",
"pretest-notes-sqlite3": "rm -f chap13.sqlite3 && sqlite3
chap13.sqlite3 --init ../models/schema-sqlite3.sql </dev/null",
"test-notes-sqlite3": "cross-env NOTES_MODEL=sqlite3
SQLITE_FILE=chap13.sqlite3 mocha test-model",
"test-notes-sequelize-sqlite": "cross-env NOTES_MODEL=sequelize
SEQUELIZE_CONNECT=sequelize-sqlite.yaml mocha test-model"
}

What we've done here is create a test-all script that will run the test suite against the individual NotesStore implementations. We can run this script to run every test combination, or we can run a specific script to test just the one combination. For example, test-notes-sequelize-sqlite will run tests against SequelizeNotesStore using the SQLite3 database.

It uses npm-run-all to support running the tests in series. Normally, in a package.json script, we would write this:

"test-all": "npm run test-notes-memory && npm run test-level && npm 
run test-notes-fs && ..."

This runs a series of steps one after another, relying on a feature of the Bash shell. The npm-run-all tool serves the same purpose, namely running one package.json script after another in the series. The first advantage is that the code is simpler and more compact, making it easier to read, while the other advantage is that it is cross-platform. We're using cross-env for the same purpose so that the test scripts can be executed on Windows as easily as they can be on Linux or macOS.

For the test-notes-sequelize-sqlite test, look closely. Here, you can see that we need a database configuration file named sequelize-sqlite.yaml. Create that file with the following code:

dbname: notestest
username:
password:
params:
    dialect: sqlite
    storage: notestest-sequelize.sqlite3
    logging: false

This, as the test script name suggests, uses SQLite3 as the underlying database, storing it in the named file.

We are missing two combinations, test-notes-sequelize-mysql for SequelizeNotesStore using MySQL and test-notes-mongodb, which tests against MongoDBNotesStore. We'll implement these combinations later.

Having automated the run of all test combinations, we can try it out:

$ npm run test-all

> [email protected] test-all /Users/David/Chapter13/notes/test
> npm-run-all test-notes-memory test-level test-notes-fs test-notes-sqlite3 test-notes-sequelize-sqlite


> [email protected] test-notes-memory /Users/David/Chapter13/notes/test
> cross-env NOTES_MODEL=memory mocha test-model

  Initialize
    ✓ should successfully load the model

  1 passing (8ms)
...

If all has gone well, you'll get this result for every test combination currently supported in the test-all script.

This completes the first test, which was to demonstrate how to create tests and execute them. All that remains is to write more tests.

Adding some tests

That was easy, but if we want to find what bugs we created, we need to test some functionality. Now, let's create a test suite for testing NotesStore, which will contain several test suites for different aspects of NotesStore.

What does that mean? Remember that the describe function is the container for a test suite and that the it function is the container for a test case. By simply nesting describe functions, we can contain a test suite within a test suite. It will be clearer what that means after we implement this:

describe('Model Test', function() {
    describe('check keylist', function() {
        before(async function() {
            await store.create('n1', 'Note 1', 'Note 1');
            await store.create('n2', 'Note 2', 'Note 2');
            await store.create('n3', 'Note 3', 'Note 3');
        });
        ...
        after(async function() {
            const keyz = await store.keylist();
            for (let key of keyz) {
                await store.destroy(key);
            }
        });
    });
    ...
});

Here, we have a describe function that defines a test suite containing another describe function. That's the structure of a nested test suite.

We have not defined any test cases yet (they will go where the ellipsis is), but we do have the before and after functions. These two functions do what their names suggest; namely, the before function runs before all the test cases, while the after function runs after all the test cases have finished. The before function is meant to set up the conditions that will be tested, while the after function is meant for teardown.

In this case, the before function adds entries to NotesStore, while the after function removes all entries. The idea is to have a clean slate after each nested test suite is executed.

The before and after functions are what Mocha calls a hook. The other hooks are beforeEach and afterEach. The difference is that the Each hooks are triggered before or after each test case's execution.

These two hooks also serve as test cases since the create and destroy methods could fail, in which case the hook will fail.
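For instance, if we wanted every individual test case to start from a freshly populated store, rather than populating it once per suite, we could use the Each variants. This is only a sketch of that alternative, not something this suite needs:

describe('check keylist', function() {
    beforeEach(async function() {
        // Runs before every it() in this suite
        await store.create('n1', 'Note 1', 'Note 1');
    });
    afterEach(async function() {
        // Runs after every it() in this suite
        for (let key of await store.keylist()) {
            await store.destroy(key);
        }
    });
    // ... test cases ...
});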

Between the before and after hook functions, add the following test cases:

it("should have three entries", async function() {
const keyz = await store.keylist();
assert.exists(keyz);
assert.isArray(keyz);
assert.lengthOf(keyz, 3);
});

it("should have keys n1 n2 n3", async function() {
const keyz = await store.keylist();
assert.exists(keyz);
assert.isArray(keyz);
assert.lengthOf(keyz, 3);
for (let key of keyz) {
assert.match(key, /n[123]/, "correct key");
}
});

it("should have titles Node #", async function() {
const keyz = await store.keylist();
assert.exists(keyz);
assert.isArray(keyz);
assert.lengthOf(keyz, 3);
var keyPromises = keyz.map(key => store.read(key));
const notez = await Promise.all(keyPromises);
for (let note of notez) {
assert.match(note.title, /Note [123]/, "correct title");
}
});

As suggested by the description for this test suite, the functions all test the keylist method.

For each test case, we start by calling keylist, then use assert methods to check different aspects of the array that is returned. The idea is to call NotesStore API functions, then test the results to check whether they match the expected results.

Now, we can run the tests and get the following:

$ npm run test-all
...
> [email protected] test-notes-fs /Users/David/Chapter13/notes/test
> NOTES_MODEL=fs mocha test-model

  Initialize
    ✓ should successfully load the model (174ms)

  Model Test
    check keylist
      ✓ should have three entries
      ✓ should have keys n1 n2 n3
      ✓ should have titles Node #

  4 passing (226ms)
...

Compare the output with the descriptive strings in the describe and it functions. You'll see that the structure of this output matches the structure of the test suites and test cases. In other words, we should structure the describe and it blocks so that the test report reads well.

As they say, testing is never completed, only exhausted. So, let's see how far we can go before exhausting ourselves.

More tests for the Notes model

That wasn't enough to test much, so let's go ahead and add some more tests:

describe('Model Test', function() {
    ...
    describe('read note', function() {
        before(async function() {
            await store.create('n1', 'Note 1', 'Note 1');
        });

        it('should have proper note', async function() {
            const note = await store.read('n1');
            assert.exists(note);
            assert.deepEqual({
                key: note.key, title: note.title, body: note.body
            }, {
                key: 'n1',
                title: 'Note 1',
                body: 'Note 1'
            });
        });

        it('Unknown note should fail', async function() {
            try {
                const note = await store.read('badkey12');
                assert.notExists(note);
                throw new Error('should not get here');
            } catch(err) {
                // An error is expected, so it is an error if
                // the 'should not get here' error is thrown
                assert.notEqual(err.message, 'should not get here');
            }
        });

        after(async function() {
            const keyz = await store.keylist();
            for (let key of keyz) {
                await store.destroy(key);
            }
        });
    });
    ...
});

These tests check the read method. In the first test case, we check whether it successfully reads a known Note, while in the second test case, we have a negative test of what happens if we read a non-existent Note.

Negative tests are very important to ensure that functions fail when they're supposed to fail and that failures are indicated correctly.

The Chai Assertions API includes some very expressive assertions. In this case, we've used the deepEqual method, which does a deep comparison of two objects. You'll see that for the first argument, we pass in an object and that for the second, we pass an object that's used to check the first. To see why this is useful, let's force it to indicate an error by inserting FAIL into one of the test strings.

After running the tests, we get the following output:

> [email protected] test-notes-memory /Users/David/Chapter13/notes/test
> NOTES_MODEL=memory mocha test-model

  Initialize
    ✓ should successfully load the model

  Model Test
    check keylist
      ✓ should have three entries
      ✓ should have keys n1 n2 n3
      ✓ should have titles Node #
    read note
      1) should have proper note
      ✓ Unknown note should fail

  5 passing (35ms)
  1 failing

  1) Model Test
       read note
         should have proper note:

      AssertionError: expected { Object (key, title, ...) } to deeply equal { Object (key, title, ...) }
      + expected - actual

       {
         "body": "Note 1"
         "key": "n1"
      -  "title": "Note 1"
      +  "title": "Note 1 FAIL"
       }

      at Context.<anonymous> (file:///Users/David/Chapter13/notes/test/test-model.mjs:76:16)

This is what a failed test looks like. Instead of the checkmark, there is a number, and the number corresponds to a report below it. In the failure report, the deepEqual function gave us clear information about how the object fields differed. In this case, it is the test we forced to fail because we wanted to see how the deepEqual function works.

Notice that for the negative tests (where the test passes if an error is thrown), we run the code under test in a try/catch block. The throw new Error line in each case should not execute, because the preceding code should throw an error first. In the catch block, we then check whether the error we caught is the 'should not get here' error we threw ourselves; if it is, the assertion fails, because that means the code under test did not throw the error it was supposed to.
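If this pattern starts to feel repetitive, it can be captured in a small helper. The following is a hypothetical sketch, not something the chapter's test suite requires; assertRejects and its optional message pattern are our own invention:

// Assert that the given promise rejects; optionally check the message
async function assertRejects(promise, pattern) {
    try {
        await promise;
    } catch (err) {
        if (pattern) assert.match(err.message, pattern);
        return;       // the expected failure happened
    }
    assert.fail('expected the operation to fail, but it succeeded');
}

// Usage sketch
it('Unknown note should fail', async function() {
    await assertRejects(store.read('badkey12'));
});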

Diagnosing test failures

We can add more tests because, obviously, these tests are not sufficient to be able to ship Notes to the public. After doing so, and then running the tests against the different test combinations, we will find this result for the SQLite3 combination:

$ npm run test-notes-sqlite3

> [email protected] test-notes-sqlite3 /Users/David/Chapter13/notes/test
> rm -f chap13.sqlite3 && sqlite3 chap13.sqlite3 --init ../models/schema-sqlite3.sql </dev/null && NOTES_MODEL=sqlite3 SQLITE_FILE=chap13.sqlite3 mocha test-model

  Initialize
    ✓ should successfully load the model (89ms)

  Model Test
    check keylist
      ✓ should have three entries
      ✓ should have keys n1 n2 n3
      ✓ should have titles Node #
    read note
      ✓ should have proper note
      1) Unknown note should fail
    change note
      ✓ after a successful model.update
    destroy note
      ✓ should remove note
      2) should fail to remove unknown note

  7 passing (183ms)
  2 failing

  1) Model Test
       read note
         Unknown note should fail:
     Uncaught TypeError: Cannot read property 'notekey' of undefined
      at Statement.<anonymous> (file:///Users/David/Chapter13/notes/models/notes-sqlite3.mjs:79:43)

  2) Model Test
       destroy note
         should fail to remove unknown note:

      AssertionError: expected 'should not get here' to not equal 'should not get here'
      + expected - actual

      at Context.<anonymous> (file:///Users/David/Chapter13/notes/test/test-model.mjs:152:20)

Our test suite found two errors, one of which is the error we mentioned in Chapter 7, Data Storage and Retrieval. Both failures came from the negative test cases. In one case, the test calls store.read("badkey12"), while in the other, it calls store.destroy("badkey12").

It is easy enough to insert console.log calls and learn what is going on.

For the read method, SQLite3 gave us undefined for row. The test suite successfully calls the read function multiple times with a notekey value that does exist. Obviously, the failure is limited to the case of an invalid notekey value. In such cases, the query gives an empty result set and SQLite3 invokes the callback with undefined in both the error and the row values. Indeed, the equivalent SQL SELECT statement does not throw an error; it simply returns an empty result set. An empty result set isn't an error, so we received no error and an undefined row.

However, we defined read to throw an error if no such Note exists. This means this function must be written to detect this condition and throw an error.

There is a difference between the read functions in models/notes-sqlite3.mjs and models/notes-sequelize.mjs. On the day we wrote SequelizeNotesStore, we must have thought through this function more carefully than we did on the day we wrote SQLITE3NotesStore. In SequelizeNotesStore.read, there is an error that's thrown when we receive an empty result set, and it has a check that we can adapt. Let's rewrite the read function in models/notes-sqlite3.mjs so that it reads as follows:

async read(key) {
    var db = await connectDB();
    var note = await new Promise((resolve, reject) => {
        db.get("SELECT * FROM notes WHERE notekey = ?",
            [ key ], (err, row) => {
                if (err) return reject(err);
                if (!row) {
                    reject(new Error(`No note found for ${key}`));
                } else {
                    const note = new Note(row.notekey, row.title, row.body);
                    resolve(note);
                }
            });
    });
    return note;
}

If this receives an empty result set, an error is thrown. While the database doesn't treat an empty result set as an error, Notes does. Furthermore, Notes already knows how to deal with a thrown error in this case. Make this change and that particular test case will pass.

There is a second, similar error in the destroy logic. It is not an SQL error if this statement (from models/notes-sqlite3.mjs) does not delete anything:

db.run("DELETE FROM notes WHERE notekey = ?;", ... );

Unfortunately, SQL gives us no option for making this DELETE statement fail if it does not delete any records. Therefore, we must add our own check that the record exists, namely the following:

async destroy(key) {
    var db = await connectDB();
    const note = await this.read(key);
    return await new Promise((resolve, reject) => {
        db.run("DELETE FROM notes WHERE notekey = ?;",
            [ key ], err => {
                if (err) return reject(err);
                this.emitDestroyed(key);
                resolve();
            });
    });
}

Therefore, we read the note and, as a byproduct, we verify the note exists. If the note doesn't exist, read will throw an error, and the DELETE operation will not even run.

When we run test-notes-sequelize-sqlite, there is also a similar failure in its destroy method. In models/notes-sequelize.mjs, make the following change:

async destroy(key) {
    await connectDB();
    const note = await SQNote.findOne({ where: { notekey: key } });
    if (!note) {
        throw new Error(`No note found for ${key}`);
    } else {
        await SQNote.destroy({ where: { notekey: key } });
    }
    this.emitDestroyed(key);
}

This is the same change; that is, to first read the Note corresponding to the given key, and if the Note does not exist, to throw an error.

Likewise, when running test-level, we get a similar failure, and the solution is to edit models/notes-level.mjs to make the following change:

async destroy(key) {
    const db = await connectDB();
    const note = Note.fromJSON(await db.get(key));
    await db.del(key);
    this.emitDestroyed(key);
}

As with the other NotesStore implementations, this reads the Note before trying to destroy it. If the read operation fails, then the test case sees the expected error.

These are the bugs we referred to in Chapter 7, Data Storage and Retrieval. We simply forgot to check for these conditions in these models. Thankfully, our diligent testing caught the problem. At least, that's the story to tell the managers rather than telling them that we forgot to check for something we already knew could happen.

Testing against databases that require server setup – MySQL and MongoDB

That was good, but we obviously won't run Notes in production with a database such as SQLite3 or Level. We can run Notes against the SQL databases supported by Sequelize (such as MySQL) and against MongoDB. Clearly, we've been remiss in not testing those two combinations. 

Our test results matrix reads as follows:

  • notes-fs: PASS
  • notes-memory: PASS
  • notes-level: 1 failure, now fixed
  • notes-sqlite3: 2 failures, now fixed
  • notes-sequelize with SQLite3: 1 failure, now fixed
  • notes-sequelize with MySQL: untested
  • notes-mongodb: untested

The two untested NotesStore implementations both require that we set up a database server. We avoided testing these combinations, but our manager won't accept that excuse because the CEO needs to know we've completed the test cycles. Notes must be tested with a configuration similar to the production environment.

In production, we'll be using a regular database server, with MySQL or MongoDB being the primary choices. Therefore, we need a low-overhead way to run tests against those databases. Testing against the production configuration must be so easy that we feel no resistance in doing so, to ensure that tests are run often enough to make the desired impact.

In this section, we made a lot of progress and have a decent start on a test suite for the NotesStore database modules. We learned how to set up test suites and test cases in Mocha, as well as how to get useful test reporting. We learned how to use package.json to drive test suite execution. We also learned about negative test scenarios and how to diagnose errors that come up.  

But we need to work on this issue of testing against a database server. Fortunately, we've already worked with a piece of technology that supports easily creating and destroying the deployment infrastructure. Hello, Docker!

In the next section, we'll learn how to repurpose the Docker Compose deployment as a test infrastructure.

Using Docker Swarm to manage test infrastructure

One advantage Docker gives is the ability to install the production environment on our laptop. In Chapter 12, Deploying Docker Swarm to AWS EC2 Using Terraform, we converted a Docker setup that ran on our laptop so that it could be deployed on real cloud hosting infrastructure. That relied on converting a Docker Compose file into a Docker Stack file, along with customization for the environment we built on AWS EC2 instances.

In this section, we'll repurpose the Stack file as test infrastructure deployed to a Docker Swarm. One approach is to simply run the same deployment to AWS EC2, substituting new values for the var.project_name and var.vpc_name variables. In other words, the EC2 infrastructure could be deployed this way:

$ terraform apply --var project_name=notes-test --var vpc_name=notes-test-vpc

This would deploy a second VPC with a different name that's explicitly for test execution and that would not disturb the production deployment. It's quite common in Terraform to customize the deployment this way for different targets.

In this section, we'll try something different. We can use Docker Swarm in other contexts, not just the AWS EC2 infrastructure we set up. Specifically, it is easy to use Docker Swarm with the Docker for Windows or Docker for macOS that's running on our laptop.

What we'll do is configure Docker on our laptop so that it supports swarm mode and create a slightly modified version of the Stack file in order to run the tests on our laptop. This will solve the issue of running tests against a MySQL database server, and also lets us test the long-neglected MongoDB module. This will demonstrate how to use Docker Swarm for test infrastructure and how to perform semi-automated test execution inside the containers using a shell script.

Let's get started.

Using Docker Swarm to deploy test infrastructure

We had a great experience using Docker Compose and Swarm to orchestrate Notes application deployment on both our laptop and our AWS infrastructure. The whole system, with five independent services, is easily described in compose-local/docker-compose.yml and compose-swarm/docker-compose.yml. What we'll do is duplicate the Stack file, then make a couple of small changes required to support test execution in a local swarm.

To configure the Docker installation on our laptop for swarm mode, simply type the following:

$ docker swarm init

As before, this will print a message about the join token. If desired, if you have multiple computers in your office, it might be interesting for you to experiment with setting up a local Swarm. But for this exercise, that's not important. This is because we can do everything required with a single-node Swarm.

This isn't a one-way street, meaning that when you're done with this exercise, it is easy to turn off swarm mode. Simply shut down anything deployed to your local Swarm and run the following command:

$ docker swarm leave --force

Normally, this is used for a host that you wish to detach from an existing swarm. If there is only one host remaining in a swarm, the effect will be to shut down the swarm.

Now that we know how to initialize swarm mode on our laptop, let's set about creating a stack file suitable for use on our laptop.

Create a new directory, compose-stack-test-local, as a sibling to the notes, users, and compose-local directories. Copy compose-stack/docker-compose.yml to that directory. We'll be making several small changes to this file and no changes to the existing Dockerfiles. As much as it is possible, it is important to test the same containers that are used in the production deployment. This means it's acceptable to inject test files into the containers, but not modify them.

Make every deploy tag look like this:

deploy:
    replicas: 1

This deletes the placement constraints we declared for use on AWS EC2 and sets it to one replica for each service. For a single-node cluster, we don't worry about placement, of course, and there is no need for more than one instance of any service.

For the database services, remove the volumes tag. This tag is only required when the database's data directory must persist; for test infrastructure, the data directory is unimportant and can be thrown away at will. Likewise, remove the top-level volumes tag.

For the svc-notes and svc-userauth services, make these changes:

services:
    ...
    svc-userauth:
        image: compose-stack-test-local/svc-userauth
        ...
        ports:
            - "5858:5858"
        ...
        environment:
            SEQUELIZE_CONNECT: sequelize-docker-mysql.yaml
            SEQUELIZE_DBHOST: db-userauth
        ...
    svc-notes:
        image: compose-stack-test-local/svc-notes
        ...
        volumes:
            - type: bind
              source: ../notes/test
              target: /notesapp/test
            - type: bind
              source: ../notes/models/schema-sqlite3.sql
              target: /notesapp/models/schema-sqlite3.sql
        ports:
            - "3000:3000"
        environment:
            ...
            TWITTER_CALLBACK_HOST: "http://localhost:3000"
            SEQUELIZE_CONNECT: models/sequelize-docker-mysql.yaml
            SEQUELIZE_DBHOST: db-notes
            NOTES_MODEL: sequelize
            ...
    ...

This injects the files required for testing into the svc-notes container. Obviously, this is the test directory that we created in the previous section for the Notes service. Those tests also require the SQLite3 schema file since it is used by the corresponding test script. In both cases, we can use bind mounts to inject the files into the running container.

The Notes test suite follows a normal practice for Node.js projects of putting test files in the test directory. When building the container, we obviously don't include the test files because they're not required for deployment. But running tests requires having that directory inside the running container. Fortunately, Docker makes this easy. We simply mount the directory into the correct place.

The bottom line is this approach gives us the following advantages:

  • The test code is in notes/test, where it belongs.
  • The test code is not copied into the production container.
  • In test mode, the test directory appears where it belongs.

For Docker (using docker run) and Docker Compose, the volume is mounted from a directory on the localhost. But for swarm mode, with a multi-node swarm, the container could be deployed on any host matching the placement constraints we declare. In a swarm, bind volume mounts like the ones shown here will try to mount from a directory on the host that the container has been deployed in. But we are not using a multi-node swarm; instead, we are using a single-node swarm. Therefore, the container will mount the named directory from our laptop, and all will be fine. But as soon as we decide to run testing on a multi-node swarm, we'll need to come up with a different strategy for injecting these files into the container.

We've also changed the port mappings. For svc-userauth, we've made its port visible to give ourselves the option of testing the REST service from the host computer. For the svc-notes service, this will make it appear on port 3000. In the environment section, make sure you do not set a PORT variable. Finally, we adjust TWITTER_CALLBACK_HOST so that it uses localhost:3000 since we're deploying on the localhost.

For both services, we're changing the image tag from the one associated with the AWS ECR repository to one of our own design. We won't be publishing these images to an image repository, so we can use any image tag we like.

For both services, we are using the Sequelize data model, using the existing MySQL-oriented configuration file, and setting the SEQUELIZE_DBHOST variable to refer to the container holding the database. 

We've defined a Docker Stack file that should be useful for deploying the Notes application stack in a Swarm. The difference between the deployment on AWS EC2 and here is simply the configuration. With a few simple configuration changes, we've mounted test files into the appropriate container, reconfigured the volumes and the environment variables, and changed the deployment descriptors so that they're suitable for a single-node swarm running on our laptop.

Let's deploy this and see how well we did.

Executing tests under Docker Swarm

We've repurposed our Docker Stack file so that it describes deploying to a single-node swarm, ensuring the containers are set up to be useful for testing. Our next step is to deploy the Stack to a swarm and execute the tests inside the Notes container.

To set it up, run the following commands:

$ docker swarm init
... ignore the output showing the docker swarm join command

$ printf '...' | docker secret create TWITTER_CONSUMER_SECRET -
$ printf '...' | docker secret create TWITTER_CONSUMER_KEY -

We run swarm init to turn on swarm mode on our laptop, then add the two TWITTER secrets to the swarm. Since it is a single-node swarm, we don't need to run a docker swarm join command to add new nodes to the swarm.

Then, in the compose-stack-test-local directory, we can run these commands:

$ docker-compose build
...
Building svc-userauth
...
Successfully built 876860f15968
Successfully tagged compose-stack-test-local/svc-userauth:latest
Building svc-notes
...
Successfully built 1c4651c37a86
Successfully tagged compose-stack-test-local/svc-notes:latest

$ docker stack deploy --compose-file docker-compose.yml notes
Ignoring unsupported options: build, restart
...
Creating network notes_authnet
Creating network notes_svcnet
Creating network notes_frontnet
Creating service notes_db-userauth
Creating service notes_svc-userauth
Creating service notes_db-notes
Creating service notes_svc-notes
Creating service notes_redis

Because a Stack file is also a Compose file, we can run docker-compose build to build the images. Because of the image tags, this will automatically tag the images so that they match the image names we specified.

Then, we use docker stack deploy, as we did when deploying to AWS EC2. Unlike the AWS deployment, we do not need to push the images to repositories, which means we do not need to use the --with-registry-auth option. This will behave almost identically to the swarm we deployed to EC2, so we explore the deployed services in the same way:

$ docker service ls
... output of current services
$ docker service ps notes_svc-notes
... status information for the named service
$ docker ps
... running container list for local host

Because this is a single-host swarm, we don't need to use SSH to access the swarm nodes, nor do we need to set up remote access using docker context. Instead, we run the Docker commands, and they act on the Docker instance on the localhost. 

The docker ps command will tell us the precise container name for each service. With that knowledge, we can run the following to gain access:

$ docker exec -it notes_svc-notes.1.c8ojirrbrv2sfbva9l505s3nv bash
root@265672675de1:/notesapp#
root@265672675de1:/notesapp# cd test
root@265672675de1:/notesapp/test# apt-get -y install sqlite3
...
root@265672675de1:/notesapp/test# rm -rf node_modules/
root@265672675de1:/notesapp/test# npm install
...

Because, in swarm mode, the containers have unique names, we have to run docker ps to get the container name, then paste it into this command to start a Bash shell inside the container.

Inside the container, we see the test directory is there as expected. But we have a couple of setup steps to perform. The first is to install the SQLite3 command-line tools since the scripts in package.json use that command. The second is to remove any existing node_modules directory because we don't know if it was built for this container or for the laptop. After that, we need to run npm install to install the dependencies.

Having done this, we can run the tests:

root@265672675de1:/notesapp/test# npm run test-all
...

The tests should execute as they did on our laptop, but they're running inside the container instead. However, the MySQL test won't have run because the package.json scripts are not set up to run that one automatically. Therefore, we can add this to package.json:

"test-notes-sequelize-mysql": "cross-env NOTES_MODEL=sequelize 
SEQUELIZE_CONNECT=../models/sequelize-docker-mysql.yaml
SEQUELIZE_DBHOST=db-notes mocha test-model"

This is the command that's required to execute the test suite against the MySQL database.

Then, we can run the tests against MySQL, like so:

root@265672675de1:/notesapp/test# npm run test-notes-sequelize-mysql
...

The tests should execute correctly against MySQL.

To automate this, we can create a file named run.sh containing the following code:

#!/bin/sh
SVC_NOTES=$1
# docker exec -it ${SVC_NOTES} apt-get -y install sqlite3
docker exec -it --workdir /notesapp/test -e DEBUG= ${SVC_NOTES} rm -rf node_modules
docker exec -it --workdir /notesapp/test -e DEBUG= ${SVC_NOTES} npm install
docker exec -it --workdir /notesapp/test -e DEBUG= ${SVC_NOTES} npm run test-notes-memory
docker exec -it --workdir /notesapp/test -e DEBUG= ${SVC_NOTES} npm run test-notes-fs
docker exec -it --workdir /notesapp/test -e DEBUG= ${SVC_NOTES} npm run test-level
docker exec -it --workdir /notesapp/test -e DEBUG= ${SVC_NOTES} npm run test-notes-sqlite3
docker exec -it --workdir /notesapp/test -e DEBUG= ${SVC_NOTES} npm run test-notes-sequelize-sqlite
docker exec -it --workdir /notesapp/test -e DEBUG= ${SVC_NOTES} npm run test-notes-sequelize-mysql
# docker exec -it --workdir /notesapp/test -e DEBUG= ${SVC_NOTES} npm run test-notes-mongodb

The script executes each script in notes/test/package.json individually. If you prefer, you can replace these with a single line that executes npm run test-all.

This script takes a command-line argument for the container name holding the svc-notes service. Since the tests are located in that container, that's where the tests must be run. The script can be executed like so:

$ sh run.sh notes_svc-notes.1.c8ojirrbrv2sfbva9l505s3nv

This runs the preceding script, which will run each test combination individually while also making sure the DEBUG variable is not set. This variable is set in the Dockerfile and causes debugging information to be printed among the test results output. Inside the script, the --workdir option sets the current directory of the executed command to the test directory, simplifying the invocation of the test scripts.

Of course, this script won't execute as-is on Windows. To convert it for use with PowerShell, save the text starting at the second line into run.ps1, and then change the variable assignment and references to PowerShell syntax (for example, $SVC_NOTES = $args[0] and $SVC_NOTES).

We have succeeded in semi-automating test execution for most of our test matrix. However, there is a glaring hole in the test matrix, namely the lack of testing on MongoDB. Plugging that hole will let us see how we can set up MongoDB under Docker.

MongoDB setup under Docker and testing Notes against MongoDB

In Chapter 7, Data Storage and Retrieval, we developed MongoDB support for Notes. Since then, we've focused on Sequelize. To make up for that slight, let's make sure we at least test our MongoDB support. Testing on MongoDB simply requires defining a container for the MongoDB database and a little bit of configuration.

Visit https://hub.docker.com/_/mongo/ for the official MongoDB container. You'll be able to retrofit this in order to deploy the Notes application running on MongoDB.

Add the following code to compose-stack-test-local/docker-compose.yml:

    # Uncomment this for testing MongoDB
    db-notes-mongo:
        image: mongo:4.2
        container_name: db-notes-mongo
        networks:
            - frontnet
        # volumes:
        #     - ./db-notes-mongo:/data/db

That's all that's required to add a MongoDB container to a Docker Compose/Stack file. We've connected it to frontnet so that the database is accessible by svc-notes. If we wanted the svc-notes container to use MongoDB, we'd need some environment variables (MONGO_URL, MONGO_DBNAME, and NOTES_MODEL) to tell Notes to use MongoDB. 

But we'd also run into a problem that we created for ourselves in Chapter 9, Dynamic Client/Server Interaction with Socket.IO. In that chapter, we created a messaging subsystem so that our users can leave messages for each other. That messaging system is currently implemented to store messages in the same Sequelize database where the Notes are stored. But to run Notes with no Sequelize database would mean a failure in the messaging system. Obviously, the messaging system can be rewritten, for instance, to allow storage in a MongoDB database, or to support running both MongoDB and Sequelize at the same time.

Because we were careful, we can execute code in models/notes-mongodb.mjs without it being affected by other code. With that in mind, we'll simply execute the Notes test suite against MongoDB and report the results.

Then, in notes/test/package.json, we can add a line to facilitate running tests on MongoDB:

"test-notes-mongodb": "cross-env MONGO_URL=mongodb://db-notes-mongo/
MONGO_DBNAME=chap13-test NOTES_MODEL=mongodb mocha --no-timeouts test-
model"

We simply added the MongoDB container to frontnet, making the database available at the URL shown here. Hence, it's simple to now run the test suite using the Notes MongoDB model. 

The --no-timeouts option was necessary to avoid a spurious error while testing the suite against MongoDB. This option instructs Mocha to not check whether a test case execution takes too long.
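An equivalent approach, if you prefer to keep the setting with the test code rather than on the command line, is to disable the timeout for the suite from inside the describe block; a quick sketch:

describe('Model Test', function() {
    // A timeout of zero tells Mocha not to enforce a time limit
    // on the test cases in this suite
    this.timeout(0);
    // ...
});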

The final requirement is to uncomment (or add) the following line in run.sh (or run.ps1 for Windows):

docker exec -it --workdir /notesapp/test -e DEBUG= ${SVC_NOTES} npm run test-notes-mongodb

This ensures MongoDB can be tested alongside the other test combinations. But when we run this, an error might crop up:

(node:475) DeprecationWarning: current Server Discovery and Monitoring 
engine is deprecated, and will be removed in a future version. To use
the new Server Discover and Monitoring engine, pass option
{ useUnifiedTopology: true } to the MongoClient constructor.

The problem is that the initializer for the MongoClient object has changed slightly. Therefore, we must modify notes/models/notes-mongodb.mjs with this new connectDB function:

const connectDB = async () => {
    if (!client) {
        client = await MongoClient.connect(process.env.MONGO_URL, {
            useNewUrlParser: true, useUnifiedTopology: true
        });
    }
}

This adds a pair of useful configuration options, including the option explicitly named in the error message. Otherwise, the code is unchanged.

To make sure the container is running with the updated code, rerun the docker-compose build and docker stack deploy steps shown earlier. Doing so rebuilds the images, and then updates the services. Because the svc-notes container will relaunch, you'll need to install the Ubuntu sqlite3 package again.

Once you've done that, the tests will all execute correctly, including the MongoDB combination.

We can now report the final test results matrix to the manager:

  • models-fs: PASS
  • models-memory: PASS
  • models-levelup: 1 failure, now fixed, PASS
  • models-sqlite3: 2 failures, now fixed, PASS
  • models-sequelize with SQLite3: 1 failure, now fixed, PASS
  • models-sequelize with MySQL: PASS
  • models-mongodb: PASS

The manager will tell you "good job" and then remember that the models are only a portion of the Notes application. We've left two areas completely untested:

  • The REST API for the user authentication service
  • Functional testing of the user interface

In this section, we've learned how to repurpose a Docker Stack file so that we can launch the Notes stack on our laptop. It took a few simple reconfigurations of the Stack file and we were ready to go, and we even injected the files that are useful for testing. With a little bit more work, we finished testing against all configuration combinations of the Notes database modules.

Our next task is to handle testing the REST API for the user authentication service.

Testing REST backend services

It's now time to turn our attention to the user authentication service. We've mentioned testing this service, saying that we'll get to it later. We developed a command-line tool for both administration and ad hoc testing. While that has been useful all along, it's time to get cracking with some real tests.

There's a question of which tool to use for testing the authentication service. Mocha does a good job of organizing a series of test cases, and we should reuse it here. But the thing we have to test is a REST service. The customer of this service, the Notes application, uses it through the REST API, giving us a perfect rationalization to test the REST interface rather than calling the functions directly. Our ad hoc scripts used the SuperAgent library to simplify making REST API calls. There happens to be a companion library, SuperTest, that is meant for REST API testing. It's easy to use that library within a Mocha test suite, so let's take that route.

For the documentation on SuperTest, look here: https://www.npmjs.com/package/supertest.

Create a directory named compose-stack-test-local/userauth. This directory will contain a test suite for the user authentication REST service. In that directory, create a file named test.mjs that contains the following code:

import Chai from 'chai';
const assert = Chai.assert;
import supertest from 'supertest';
const request = supertest(process.env.URL_USERS_TEST);
const authUser = 'them';
const authKey = 'D4ED43C0-8BD6-4FE2-B358-7C0E230D11EF';

describe('Users Test', function() {
    ...
});

This sets up Mocha and the SuperTest client. The URL_USERS_TEST environment variable specifies the base URL of the server to run the test against. You'll almost certainly be using http://localhost:5858, given the configuration we've used earlier, but it can be any URL pointing to any host. SuperTest initializes itself a little differently to SuperAgent.

The SuperTest module supplies a function, and we call that function with the URL_USERS_TEST variable. That gives us an object, which we call request, that is used for interacting with the service under test.

We've also set up a pair of variables to store the authentication user ID and key. These are the same values that are in the user authentication server. We simply need to supply them when making API calls.

Finally, there's the outer shell of the Mocha test suite. So, let's start filling in the before and after hooks:

before(async function() {
    await request
        .post('/create-user')
        .send({
            username: "me", password: "w0rd", provider: "local",
            familyName: "Einarrsdottir", givenName: "Ashildr",
            middleName: "",
            emails: [], photos: []
        })
        .set('Content-Type', 'application/json')
        .set('Accept', 'application/json')
        .auth(authUser, authKey);
});

after(async function() {
    await request
        .delete('/destroy/me')
        .set('Content-Type', 'application/json')
        .set('Accept', 'application/json')
        .auth(authUser, authKey);
});

These are our before and after hooks. We'll use them to establish a test user at the start and then clean up by removing that user at the end.

This gives us a taste of how the SuperTest API works. If you refer back to cli.mjs, you'll see the similarities to SuperAgent.

The post and delete methods we can see here declare the HTTP verb to use. The send method provides an object for the POST operation. The set method sets header values, while the auth method sets up authentication:

describe('List user', function() {
    it("list created users", async function() {
        const res = await request.get('/list')
            .set('Content-Type', 'application/json')
            .set('Accept', 'application/json')
            .auth(authUser, authKey);
        assert.exists(res.body);
        assert.isArray(res.body);
        assert.lengthOf(res.body, 1);
        assert.deepEqual(res.body[0], {
            username: "me", id: "me", provider: "local",
            familyName: "Einarrsdottir", givenName: "Ashildr",
            middleName: "",
            emails: [], photos: []
        });
    });
});

Now, we can test some API methods, such as the /list operation.

We have already guaranteed that there is an account in the before hook, so /list should give us an array with one entry.

This follows the general pattern for using Mocha to test a REST API method. First, we use SuperTest's request object to call the API method and await its result. Once we have the result, we use assert methods to validate it is what's expected.

Add the following test cases:

describe('find user', function() {
it("find created users", async function() {
const res = await request.get('/find/me')
.set('Content-Type', 'application/json')
.set('Acccept', 'application/json')
.auth(authUser, authKey);
assert.exists(res.body);
assert.isObject(res.body);
assert.deepEqual(res.body, {
username: "me", id: "me", provider: "local",
familyName: "Einarrsdottir", givenName: "Ashildr",
middleName: "",
emails: [], photos: []
});
});
it('fail to find non-existent users', async function() {
var res;
try {
res = await request.get('/find/nonExistentUser')
.set('Content-Type', 'application/json')
.set('Accept', 'application/json')
.auth(authUser, authKey);
} catch(e) {
return; // Test is okay in this case
}
assert.exists(res.body);
assert.isObject(res.body);
assert.deepEqual(res.body, {});
});
});

We are checking the /find operation in two ways:

  • Positive test: Looking for the account we know exists – failure is indicated if the user account is not found
  • Negative test: Looking for the one we know does not exist – failure is indicated if we receive something other than an error or an empty object

Add the following test case:

describe('delete user', function() {
it('delete nonexistent users', async function() {
let res;
try {
res = await request.delete('/destroy/nonExistentUser')
.set('Content-Type', 'application/json')
.set('Accept', 'application/json')
.auth(authUser, authKey);
} catch(e) {
return; // Test is okay in this case
}
assert.exists(res);
assert.exists(res.error);
assert.notEqual(res.status, 200);
});
});

Finally, we should check the /destroy operation. The positive case is already exercised in the after hook, where we destroy a known user account. We also need to perform the negative test and verify its behavior against an account we know does not exist.

The desired behavior is that either an error is thrown or the result shows an HTTP status indicating an error. In fact, the current authentication server code gives a 500 status code, along with some other information.

This gives us enough tests to move forward and automate the test run.

In compose-stack-test-local/docker-compose.yml, we need to inject the test.mjs script into the svc-userauth-test container. We'll add that here:

svc-userauth-test:
  ...
  volumes:
    - type: bind
      source: ./userauth
      target: /userauth/test

This injects the userauth directory into the container as the /userauth/test directory. As we did previously, we then must get into the container and run the test script.

The next step is creating a package.json file to hold any dependencies and a script to run the test:

{
  "name": "userauth-test",
  "version": "1.0.0",
  "description": "Test suite for user authentication server",
  "scripts": {
    "test": "cross-env URL_USERS_TEST=http://localhost:5858 mocha test.mjs"
  },
  "dependencies": {
    "chai": "^4.2.0",
    "mocha": "^7.1.1",
    "supertest": "^4.0.2",
    "cross-env": "^7.0.2"
  }
}

In the dependencies, we list Mocha, Chai, SuperTest, and cross-env. Then, in the test script, we use cross-env to set the required environment variable and run Mocha. This should run the tests.

We could run this test suite from our laptop. Because the test directory is injected into the container, we can also run the tests inside the container. To do so, add the following code to run.sh:

SVC_USERAUTH=$2
...
docker exec -it -e DEBUG= --workdir /userauth/test ${SVC_USERAUTH} rm -rf node_modules
docker exec -it -e DEBUG= --workdir /userauth/test ${SVC_USERAUTH} npm install
docker exec -it -e DEBUG= --workdir /userauth/test ${SVC_USERAUTH} npm run test

This adds a second argument, in this case, the container name for svc-userauth, and uses the script to run the test suite inside that container. The first two commands ensure that the packages are installed for the operating system inside the container, while the last runs the test suite.

Now, if you run the run.sh test script, you'll see the required packages get installed. Then, the test suite will be executed.

The result will look like this:

$ sh run.sh notes_svc-notes.1.c8ojirrbrv2sfbva9l505s3nv notes_svc-userauth.1.puos4jqocjji47vpcp9nrakmy 
...

> userauth-test@1.0.0 test /userauth/test
> cross-env URL_USERS_TEST=http://localhost:5858 mocha test.mjs

Users Test
List user
list created users
find user
find created users
fail to find non-existent users
delete user
delete nonexistent users


4 passing (312ms)

Because URL_USERS_TEST can take any URL, we could run the test suite against any instance of the user authentication service. For example, we could test an instance deployed on AWS EC2 from our laptop using a suitable value for URL_USERS_TEST.

We're making good progress. We now have test suites for both the Notes and User Authentication services. We have learned how to test a REST service using the REST API. This is different than directly calling internal functions because it is an end-to-end test of the complete system, in the role of a consumer of the service.

Our next task is to automate test results reporting.

Automating test results reporting

It's cool we have automated test execution, and Mocha makes the test results look nice with all those checkmarks. But what if management wants a graph of test failure trends over time? There could be any number of reasons to report test results as data rather than as a user-friendly printout on the console.

For example, tests are often not run on a developer laptop or by a quality team tester, but by automated background systems. The CI/CD model is widely used, in which tests are run by the CI/CD system on every commit to the shared code repository. When fully implemented, if the tests all pass on a particular commit, then the system is automatically deployed to a server, possibly the production servers. In such a circumstance, the user-friendly test result report is not useful, and instead, it must be delivered as data that can be displayed on a CI/CD results dashboard website.

Mocha uses what's called a Reporter to report test results. A Mocha Reporter is a module that prints data in whatever format it supports. More information on this can be found on the Mocha website: https://mochajs.org/#reporters.

You will find the current list of available reporters like so:

# mocha --reporters

dot - dot matrix
doc - html documentation
spec - hierarchical spec list
json - single json object
progress - progress bar
list - spec-style listing
tap - test-anything-protocol
...

Then, you can use a specific Reporter, like so:

root@df3e8a7561a7:/userauth/test# npm run test -- --reporter tap

> userauth-test@1.0.0 test /userauth/test
> cross-env URL_USERS_TEST=http://localhost:5858 mocha test.mjs "--reporter" "tap"

1..4
ok 1 Users Test List user list created users
ok 2 Users Test find user find created users
ok 3 Users Test find user fail to find non-existent users
ok 4 Users Test delete user delete nonexistent users
# tests 4
# pass 4
# fail 0

In the npm run script-name command, we can inject command-line arguments, as we've done here. The -- token tells npm to append the remainder of its command line to the command that is executed. The effect is as if we had run this:

root@df3e8a7561a7:/userauth/test# URL_USERS_TEST=http://localhost:5858 mocha test.mjs "--reporter" "tap"

For Mocha, the --reporter option selects which Reporter to use. In this case, we selected the TAP reporter, and the output follows that format.

Test Anything Protocol (TAP) is a widely used test results format, which makes it more likely you'll find higher-level reporting tools that can consume it. Obviously, the next step would be to save the results into a file somewhere, after mounting a host directory into the container.

In this section, we learned about the test results reporting formats supported by Mocha. This will give you a starting point for collecting long-term results tracking and other useful software quality metrics. Often, software teams rely on quality metrics trends as part of deciding whether a product can be shipped to the public.

In the next section, we'll round off our tour of testing methodologies by learning about a framework for frontend testing.

Frontend headless browser testing with Puppeteer

A big cost area in testing is manual user interface testing. Therefore, a wide range of tools has been developed to automate running tests at the HTTP level. Selenium is a popular tool implemented in Java, for example. In the Node.js world, we have a few interesting choices. The chai-http plugin to Chai would let us interact at the HTTP level with the Notes application while staying within the now-familiar Chai environment. 

However, in this section, we'll use Puppeteer (https://github.com/GoogleChrome/puppeteer). This tool is a high-level Node.js module used to control a headless Chrome or Chromium browser, using the DevTools protocol. This protocol allows tools to instrument, inspect, debug, and profile Chromium or Chrome browser instances. The key result is that we can test the Notes application in a real browser so that we have greater assurance it behaves correctly for users. 

The Puppeteer website has extensive documentation that's worth reading: https://pptr.dev/.

Puppeteer is meant to be a general-purpose test automation tool and has a strong feature set for that purpose. Because it's easy to make web page screenshots with Puppeteer, it can also be used in a screenshot service.
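
For example, a minimal screenshot sketch might look like the following. This is just an illustration of the idea; the URL and output filename are placeholders rather than anything from the Notes project:

import puppeteer from 'puppeteer';

// Minimal screenshot sketch; the URL and filename are illustrative.
async function capture(url, path) {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto(url);
    await page.screenshot({ path, fullPage: true });
    await browser.close();
}

capture('http://localhost:3000', 'home-page.png')
    .catch(err => { console.error(err); });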

Because Puppeteer is controlling a real web browser, your user interface tests will be very close to live browser testing, without having to hire a human to do the work. Because it uses a headless version of Chrome, no visible browser window will show on your screen, and tests can be run in the background instead. It can also drive other browsers by using the DevTools protocol.

First, let's set up a directory to work in.

Setting up a Puppeteer-based testing project directory

First, let's set up the directory that we'll install Puppeteer in, as well as the other packages that will be required for this project:

$ mkdir test-compose/notesui
$ cd test-compose/notesui
$ npm init
... answer the questions
$ npm install puppeteer@^4.x mocha@^7.x chai@^4.x supertest@^4.x bcrypt@^4.x [email protected] --save

This installs not just Puppeteer, but also Mocha, Chai, SuperTest, and bcrypt. We'll also be using the package.json file to record scripts.

During installation, you'll see that Puppeteer causes Chromium to be downloaded, like so:

Downloading Chromium r756035 - 118.4 Mb [======= ] 35% 30.4s 

The Puppeteer package will launch that Chromium instance as needed, managing it as a background process and communicating with it using the DevTools protocol.

The approach we'll follow is to test against the Notes stack we've deployed in the test Docker infrastructure. Therefore, we need to launch that infrastructure:

$ cd ..
$ docker stack deploy --compose-file docker-compose.yml notes
... as before

Depending on what you need to do, docker-compose build might also be required. In any case, this brings up the test infrastructure and lets you see the running system.

We can use a browser to visit http://localhost:3000 and so on. Because this system won't contain any users, our test script will have to add a test user so that the test can log in and add notes. 

Another item of significance is that tests will be running in an anonymous Chromium instance. Even if we use Chrome as our normal desktop browser, this Chromium instance will have no connection to our normal desktop setup. That's a good thing from a testability standpoint since it means your test results will not be affected by your personal web browser configuration. On the other hand, it means Twitter login testing is not possible, because that Chromium instance does not have a Twitter login session.

With those things in mind, let's write an initial test suite. We'll start with a simple initial test case to prove we can run Puppeteer inside Mocha. Then, we'll test the login and logout functionality, the ability to add notes, and a couple of negative test scenarios. We'll close this section with a discussion on improving testability in HTML applications. Let's get started.

Creating an initial Puppeteer test for the Notes application stack

Our first test goal is to set up the outline of a test suite. We will need to do the following, in order:

  1. Add a test user to the user authentication service.
  2. Launch the browser.
  3. Visit the home page.
  4. Verify the home page came up.
  5. Close the browser.
  6. Delete the test user.

This will establish that we have the ability to interact with the launched infrastructure, start the browser, and see the Notes application. We will continue our policy of cleaning up after the test, to ensure a clean environment for subsequent runs, by adding and then removing a test user.

In the notesui directory, create a file named uitest.mjs containing the following code:

import Chai from 'chai';
const assert = Chai.assert;
import supertest from 'supertest';
const request = supertest(process.env.URL_USERS_TEST);
const authUser = 'them';
const authKey = 'D4ED43C0-8BD6-4FE2-B358-7C0E230D11EF';
import { default as bcrypt } from 'bcrypt';
const saltRounds = 10;
import puppeteer from 'puppeteer';

async function hashpass(password) {
let salt = await bcrypt.genSalt(saltRounds);
let hashed = await bcrypt.hash(password, salt);
return hashed;
}

This imports and configures the required modules. This includes setting up bcrypt support in the same way that is used in the authentication server. We've also copied in the authentication key for the user authentication backend service. As we did for the REST test suite, we will use the SuperTest library to add, verify, and remove the test user using the REST API snippets copied from the REST tests.

Add the following test block:

describe('Initialize test user', function() {
it('should successfully add test user', async function() {
await request.post('/create-user').send({
username: "testme", password: await hashpass("w0rd"),
provider: "local",
familyName: "Einarrsdottir", givenName: "Ashildr",
middleName: "TEST", emails: [ "[email protected]" ],
photos: []
})
.set('Content-Type', 'application/json')
.set('Accept', 'application/json')
.auth(authUser, authKey);
});
});

This adds a user to the authentication service. Refer back and you'll see this is similar to the test case in the REST test suite. If you want a verification phase, there is another test case that calls the /find/testme endpoint to verify the result. Since we've already verified the authentication system, we do not need to reverify it here. We just need to ensure we have a known test user we can use for scenarios where the browser must be logged in.
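
If you do want that verification phase, a sketch of such a test case might look like this. It reuses the /find endpoint from the REST test suite; the exact fields to assert on depend on what your authentication server returns:

describe('Verify test user', function() {
    it('should find the test user', async function() {
        const res = await request.get('/find/testme')
            .set('Content-Type', 'application/json')
            .set('Accept', 'application/json')
            .auth(authUser, authKey);
        assert.exists(res.body);
        assert.isObject(res.body);
        assert.equal(res.body.username, 'testme');
    });
});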

Keep this at the very end of uitest.mjs:

describe('Destroy test user', function() {
it('should successfully destroy test user', async function() {
await request.delete('/destroy/testme')
.set('Content-Type', 'application/json')
.set('Accept', 'application/json')
.auth(authUser, authKey);
});
});

At the end of the test execution, we should run this to delete the test user. The policy is to clean up after we execute the test. Again, this was copied from the user authentication service test suite. Between those two, add the following:

describe('Notes', function() {
this.timeout(100000);
let browser;
let page;

before(async function() {
browser = await puppeteer.launch({
slowMo: 500, headless: false
});
page = await browser.newPage();
});

it('should visit home page', async function() {
await page.goto(process.env.NOTES_HOME_URL);
await page.waitForSelector('a.nav-item[href="/users/login"]');
});

// Other test scenarios go here.

after(async function() {
await page.close();
await browser.close();
});
});

Remember that within describe, the tests are the it blocks. The before block is executed before all the it blocks, and the after block is executed afterward.

In the before function, we set up Puppeteer by launching a Puppeteer instance and starting a new Page object. Because puppeteer.launch has the headless option set to false, we'll see a browser window on the screen. This will be useful so we can see what's happening. The slowMo option also helps us see what's happening by slowing down the browser interaction. In the after function, we call the close method on those objects in order to close out the browser. The puppeteer.launch method takes an options object, with a long list of attributes that are worth learning about.
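
For example, here is a hedged sketch of a launch call showing a few of the commonly useful options; the values are illustrative, and in a CI environment you would typically switch back to headless mode:

browser = await puppeteer.launch({
    headless: true,          // no visible browser window, suitable for CI
    slowMo: 250,             // slow each operation down by 250 milliseconds
    defaultViewport: { width: 1280, height: 800 },
    args: [ '--no-sandbox' ] // sometimes needed when running inside a container
});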

The browser object represents the entire browser instance that the test is being run on. In contrast, the page object represents what is essentially the currently open tab in the browser. Most Puppeteer functions execute asynchronously, so we use async functions and the await keyword.

The timeout setting is required because it sometimes takes a longish time for the browser instance to launch. We're being generous with the timeout to minimize the risk of spurious test failures.

For the it clause, we do a tiny amount of browser interaction. Being a wrapper around a browser tab, the page object has methods related to managing an open tab. For example, the goto method tells the browser tab to navigate to the given URL. In this case, the URL is the Notes home page, which is passed in as an environment variable.

The waitForSelector method is part of a group of methods that wait for certain conditions. These include waitForFileChooser, waitForFunction, waitForNavigation, waitForRequest, waitForResponse, and waitForXPath. These, and the waitFor method, all cause Puppeteer to asynchronously wait for a condition to happen in the browser. The purpose of these methods is to give the browser time to respond to some input, such as clicking on a button. In this case, it waits until the web page loading process has an element visible at the given CSS selector. That selector refers to the Login button, which will be in the header.

In other words, this test visits the Notes home page and then waits until the Login button appears. We could call that a simple smoke test that's quickly executed and determines that the basic functionality is there.
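
If you want this smoke test to assert a little more, one option is to also check the page title. This is a hedged addition; it assumes the Notes home page title contains the word Notes, which you should confirm against the layout template before relying on it:

it('should visit home page', async function() {
    await page.goto(process.env.NOTES_HOME_URL);
    await page.waitForSelector('a.nav-item[href="/users/login"]');
    // Hypothetical extra check: only valid if the page <title> includes "Notes".
    assert.include(await page.title(), 'Notes');
});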

Executing the initial Puppeteer test

We have the beginning of a Puppeteer-driven test suite for the Notes application. We have already launched the test infrastructure using docker-compose. To run the test script, add the following to the scripts section of the package.json file:

"test": "cross-env URL_USERS_TEST=http://localhost:5858
NOTES_HOME_URL=http://localhost:3000 mocha uitest.mjs"

The test infrastructure we deployed earlier exposes the user authentication service on port 5858 and the Notes application on port 3000. If you want to test against a different deployment, adjust these URLs appropriately. Before running this, the Docker test infrastructure must be launched, which should have already happened.

Let's try running this initial test suite:

$ npm run test

> notesui@1.0.0 test /Users/David/Chapter13/compose-test/notesui
> URL_USERS_TEST=http://localhost:5858 NOTES_HOME_URL=http://localhost:3000 mocha uitest.mjs

Initialize test user
should successfully add test user (125ms)
Notes
should visit home page (1328ms)
Destroy test user
should successfully destroy test user (53ms)

3 passing (5s)

We have successfully created the structure that we can run these tests in. We have set up Puppeteer and the related packages and created one useful test. The primary win is to have a structure to build further tests on top of.

Our next step is to add more tests.

Testing login/logout functionality in Notes

In the previous section, we created the outline within which to test the Notes user interface. We didn't do much testing regarding the application, but we proved that we can test Notes using Puppeteer.

In this section, we'll add an actual test. Namely, we'll test the login and logout functionality. The steps for this are as follows:

  1. Log in using the test user identity.
  2. Verify that the browser was logged in.
  3. Log out.
  4. Verify that the browser is logged out.

In uitest.mjs, insert the following test code:

describe('log in and log out correctly', function() {
this.timeout(100000);

it('should log in correctly', async function() {
await page.click('a.nav-item[href="/users/login"]');
await page.waitForSelector('form[action="/users/login"]');
await page.type('[name=username]', "testme", {delay: 100});
await page.type('[name=password]', "w0rd", {delay: 100});
await page.keyboard.press('Enter');
await page.waitForNavigation({
'waitUntil': 'domcontentloaded'
});
});

it('should be logged in', async function() {
assert.isNotNull(await page.$('a[href="/users/logout"]'));
});

it('should log out correctly', async function() {
await page.click('a[href="/users/logout"]');
});

it('should be logged out', async function() {
await page.waitForSelector('a.nav-item[href="/users/login"]');
});
});

// Other test scenarios go here.

This is our test implementation for logging in and out. We have to specify the timeout value because it is a new describe block.

The click method takes a CSS selector, meaning this first click event is sent to the Login button. A CSS selector, as the name implies, is similar to or identical to the selectors we'd write in a CSS file. With a CSS selector, we can target specific elements on the page.

To determine the selector to use, look at the HTML for the templates and learn how to describe the element you wish to target. It may be necessary to add ID attributes into the HTML to improve testability.

The Puppeteer documentation refers to the CSS Selectors documentation on the Mozilla Developer Network website: https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Selectors.
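
For reference, these are the kinds of selectors this chapter's tests rely on, shown with the Puppeteer calls that use them:

// An <a> element with the nav-item class and a specific href attribute:
await page.click('a.nav-item[href="/users/login"]');
// A <form> element whose action attribute matches exactly:
await page.waitForSelector('form[action="/users/login"]');
// Any element whose name attribute is "username":
await page.type('[name=username]', "testme", {delay: 100});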

Clicking on the Login button will, of course, cause the Login page to appear. To verify this, we wait until the page contains a form that posts to /users/login. That form is in login.hbs.

The type method acts as a user typing text. In this case, the selectors target the Username and Password fields of the login form. The delay option inserts a pause of 100 milliseconds after typing each character. It was noted in testing that sometimes, the text arrived with missing letters, indicating that Puppeteer can type faster than the browser can accept.

The page.keyboard object has various methods related to keyboard events. In this case, we're asking to generate the equivalent to pressing Enter on the keyboard. Since, at that point, the focus is in the Login form, that will cause the form to be submitted to the Notes application. Alternatively, there is a button on that form, and the test could instead click on the button.

The waitForNavigation method has a number of options for waiting on page refreshes to finish. The selected option causes a wait until the DOM content of the new page is loaded.

The $ method searches the DOM for the first element matching the selector and returns a handle to it, or null if nothing matches. (Its sibling, $$, returns an array of every matching element.) Therefore, this is a way to test whether the application got logged in: we simply check whether the page has a Logout button.

To log out, we click on the Logout button. Then, to verify the application logged out, we wait for the page to refresh and show a Login button:

$ npm run test

> notesui@1.0.0 test /Users/David/Chapter13/compose-test/notesui
> URL_USERS_TEST=http://localhost:5858 NOTES_HOME_URL=http://localhost:3000 mocha uitest.mjs

Initialize test user
should successfully add test user (188ms)
should successfully verify test user exists

Notes
should visit home page (1713ms)
log in and log out correctly
should log in correctly (2154ms)
should be logged in
should log out correctly (287ms)
should be logged out (55ms)

Destroy test user
should successfully destroy test user (38ms)
should successfully verify test user gone (39ms)

9 passing (7s)

With that, our new tests are passing. Notice that the time required to execute some of the tests is rather long. Even longer times were observed while debugging the test, which is why we set long timeouts.

That's good, but of course, there is more to test, such as the ability to add a Note.

Testing the ability to add Notes

We have a test case to verify login/logout functionality. The point of this application is adding notes, so we need to test this feature. As a side effect, we will learn how to verify page content with Puppeteer.

To test this feature, we will need to follow these steps:

  1. Log in and verify we are logged in.
  2. Click the Add Note button to get to the form.
  3. Enter the information for a Note.
  4. Verify that we are showing the Note and that the content is correct.
  5. Click on the Delete button and confirm deleting the Note.
  6. Verify that we end up on the home page.
  7. Log out.

You might be wondering "Isn't it duplicative to log in again?" The previous tests focused on login/logout. Surely that could have ended with the browser in the logged-in state? With the browser still logged in, this test would not need to log in again. While that is true, it would leave the login/logout scenario incompletely tested. It would be cleaner for each scenario to be standalone in terms of whether or not the user is logged in. To avoid duplication, let's refactor the test slightly.

In the outermost describe block, add the following two functions:

describe('Notes', function() {
this.timeout(100000);
let browser;
let page;

async function doLogin() {
await page.click('a.nav-item[href="/users/login"]');
await page.waitForSelector('form[action="/users/login"]');
await page.type('[name=username]', "testme", {delay: 150});
await page.type('[name=password]', "w0rd", {delay: 150});
await page.keyboard.press('Enter');
await page.waitForNavigation({
'waitUntil': 'domcontentloaded'
});
}

async function checkLogin() {
const btnLogout = await page.$('a[href="/users/logout"]');
assert.isNotNull(btnLogout);
}
...
});

This is the same code as the body of the test cases shown previously, but moved into standalone functions. With this change, any test case that needs to log in as the test user can call these functions.
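
If you find yourself repeating the logout steps as well, the same refactoring applies. A sketch of the corresponding helpers, assuming the same selectors used elsewhere in this suite, might look like this:

async function doLogout() {
    await page.click('a[href="/users/logout"]');
    await page.waitForSelector('a.nav-item[href="/users/login"]');
}

async function checkLogout() {
    // page.$ returns null when no element matches, so a null result means
    // the Logout button is gone and the browser is logged out.
    assert.isNull(await page.$('a[href="/users/logout"]'));
}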

Then, we need to change the login/logout tests to this:

describe('log in and log out correctly', function() {
this.timeout(100000);

it('should log in correctly', doLogin);
it('should be logged in', checkLogin);
...
});

All we've done is move the code that had been here into standalone functions. This means we can reuse those functions in other tests, thus avoiding duplicated code.

Add the following code for the Note creation test suite to uitest.mjs:

describe('allow creating notes', function() {
this.timeout(100000);

it('should log in correctly', doLogin);
it('should be logged in', checkLogin);

it('should go to Add Note form', async function() {
await page.click('a[href="/notes/add"]');
await page.waitForSelector('form[action="/notes/save"]');
await page.type('[name=notekey]', "testkey", {delay: 200});
await page.type('[name=title]', "Test Note Subject", {delay: 150});
await page.type('[name=body]', "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.", {delay: 100});
await page.click('button[type="submit"]');
});

it('should view newly created Note', async function() {
await page.waitForSelector('h3#notetitle');
assert.include(
await page.$eval('h3#notetitle', el => el.textContent),
"Test Note Subject"
);
assert.include(
await page.$eval('#notebody', el => el.textContent),
"Lorem ipsum dolor"
);
assert.include(page.url(), '/notes/view');
});

it('should delete newly created Note', async function() {
assert.isNotNull(await page.$('a#notedestroy'));
await page.click('a#notedestroy');
await page.waitForSelector('form[action="/notes/destroy/confirm"]');
await page.click('button[type="submit"]');
await page.waitForSelector('#notetitles');
assert.isNotNull(await page.$('a[href="/users/logout"]'));
assert.isNotNull(await page.$('a[href="/notes/add"]'));
});

it('should log out', async function() {
await page.click('a[href="/users/logout"]');
await page.waitForSelector('a[href="/users/login"]');
});
});

These are our test cases for adding and deleting Notes. We start with the doLogin and checkLogin functions to ensure the browser is logged in.

After clicking on the Add Note button and waiting for the browser to show the form in which we enter the Note details, we need to enter text into the form fields. The page.type method acts as a user typing on a keyboard and types the given text into the field identified by the selector.

The interesting part comes when we verify the note being shown. After clicking the Submit button, the browser is, of course, taken to the page to view the newly created Note. To do this, we use page.$eval to retrieve text from certain elements on the screen.

The page.$eval method finds the first element matching the selector and calls the supplied callback function with that element. In our case, the callback reads the textContent property to retrieve the textual form of the element. Then, we're able to use the assert.include function to test that the element contains the required text.
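
If you need every matching element rather than just the first, the companion method page.$$eval passes the whole array of matches to the callback. A hedged illustration, not part of this test suite:

// Collect the text of every <a> element on the current page,
// purely to show the shape of the $$eval API.
const linkTexts = await page.$$eval('a', elements =>
    elements.map(el => el.textContent));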

The page.url() method, as its name suggests, returns the URL currently being viewed. We can test whether that URL contains /notes/view to be certain the browser is viewing a note.

To delete the note, we start by verifying that the Delete button is on the screen. Of course, this button is there if the user is logged in. Once the button is verified, we click on it and wait for the FORM that confirms that we want to delete the Note. Once it shows up, we can click on the button, after which we are supposed to land on the home page.

Notice that to find the Delete button, we need to refer to a#notedestroy. As it stands, the template in question does not have that ID anywhere. Because the HTML for the Delete button was not set up so that we could easily create a CSS selector, we must edit views/noteview.hbs to change the Delete button to this:

<a class="btn btn-outline-dark" id="notedestroy"
href="/notes/destroy?key={{notekey}}"
role="button">Delete</a>

All we did was add the ID attribute. This is an example of improving testability, which we'll discuss later.

A technique we're using is to call page.$ to query whether a given element is on the page. This method returns the first element matching the selector, or null if there is no match. We simply test whether the return value is non-null, which makes for an easy way to check that an element is present.
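
If this check shows up in many places, it could be wrapped in a small helper alongside doLogin and checkLogin. A hypothetical sketch:

// Hypothetical helper: true if at least one element matches the selector.
async function isPresent(selector) {
    return (await page.$(selector)) !== null;
}

// Usage: assert.isTrue(await isPresent('a[href="/notes/add"]'));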

We end this by logging out by clicking on the Logout button.

Having created these test cases, we can run the test suite again:

$ npm run test

> notesui@1.0.0 test /Users/David/Chapter13/compose-test/notesui
> URL_USERS_TEST=http://localhost:5858 NOTES_HOME_URL=http://localhost:3000 mocha uitest.mjs

Initialize test user
should successfully add test user (228ms)
should successfully verify test user exists (46ms)

Notes
should visit home page (2214ms)
log in and log out correctly
should log in correctly (2567ms)
should be logged in
should log out correctly (298ms)
should be logged out
allow creating notes
should log in correctly (2288ms)
should be logged in
should go to Add Note form (18221ms)
should view newly created Note (39ms)
should delete newly created Note (1225ms)

12 passing (1m)

We have more passing tests and have made good progress. Notice how one of the test cases took 18 seconds to finish. That's partly because we slowed text entry down to make sure it is correctly received in the browser, and there is a fair amount of text to enter. There was a reason we increased the timeout.

In earlier tests, we had success with negative tests, so let's see if we can find any bugs that way.

Implementing negative tests with Puppeteer

Remember that a negative test is used to purposely invoke scenarios that will fail. The idea is to ensure the application fails correctly, in the expected manner.

We have two scenarios for an easy negative test:

  • Attempt to log in using a bad user ID and password
  • Access a bad URL

Both of these are easy to implement, so let's see how it works.

Testing login with a bad user ID

A simple way to ensure we have a bad username and password is to generate random text strings for both. An easy way to do that is with the uuid package. This package generates Universally Unique Identifiers (UUIDs), and one of its modes simply generates a unique random string. That's all we need for this test; it guarantees that the string will be unique.

To make this crystal clear, by using a unique random string, we ensure that we don't accidentally use a username that might be in the database. Therefore, we will be certain of supplying an unknown username when trying to log in.

In uitest.mjs, add the following to the imports:

import { v4 as uuidv4 } from 'uuid';

There are several methods supported by the uuid package, and the v4 method is what generates random strings.

Then, add the following scenario:

describe('reject unknown user', function() {
this.timeout(100000);

it('should fail to log in unknown user correctly', async function() {
assert.isNotNull(await page.$('a[href="/users/login"]'));
await page.click('a.nav-item[href="/users/login"]');
await page.waitForSelector('form[action="/users/login"]');
await page.type('[name=username]', uuidv4(), {delay: 150});
await page.type('[name=password]',
await hashpass(uuidv4()), {delay: 150});
await page.keyboard.press('Enter');
await page.waitForSelector('form[action="/users/login"]');
assert.isNotNull(await page.$('a[href="/users/login"]'));
assert.isNotNull(await page.$('form[action="/users/login"]'));
});
});

This starts with the login scenario. Instead of a fixed username and password, we instead use the results of calling uuidv4(), or the random UUID string.

This does the login action, and then we wait for the resulting page. In trying this manually, we learn that it simply returns us to the login screen and that there is no additional message. Therefore, the test looks for the login form and ensures there is a Login button. Between the two, we are certain the user is not logged in.

We did not find a code error with this test, but we did find a user experience problem: on a failed login attempt, Notes simply redisplays the login form without any message (such as unknown username or password). The user is left confused about what just happened. So, let's put that on our backlog to fix.

Testing a response to a bad URL 

Our next negative test is to try a bad URL in Notes. We coded Notes to return a 404 status code, which means the page or resource was not found. The test is to ask the browser to visit the bad URL, then verify that the result uses the correct error message.

Add the following test case:

describe('reject unknown URL', function() {
this.timeout(100000);

it('should fail on unknown URL correctly', async function() {
let u = new URL(process.env.NOTES_HOME_URL);
u.pathname = '/bad-unknown-url';
let response = await page.goto(u.toString());
await page.waitForSelector('header.page-header');
assert.equal(response.status(), 404);
assert.include(
await page.$eval('h1', el => el.textContent),
"Not Found"
);
assert.include(
await page.$eval('h2', el => el.textContent),
"404"
);
});
});

This computes the bad URL by taking the URL for the home page (NOTES_HOME_URL) and setting the pathname portion of the URL to /bad-unknown-url. Since there is no route in Notes for this path, we're certain to get an error. If we wanted more certainty, it seems we could use the uuidv4() function to make the URL random.
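
If we wanted that extra certainty, the change is small. A sketch, assuming uuidv4 is already imported as shown earlier:

// Make the bad path random so it cannot collide with any real route.
u.pathname = `/bad-unknown-url-${uuidv4()}`;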

Calling page.goto() simply gets the browser to go to the requested URL. For the subsequent page, we wait until a page with a header element shows up. Because this page doesn't have much on it, the header element is the best choice for determining when we have the subsequent page.

To check the 404 status code, we call response.status(), which is the status code that's received in the HTTP response. Then, we call page.$eval to get a couple of items from the page and make sure they contain the text that's expected.

In this case, we did not find any code problems, but we did find another user experience problem. The error page is downright ugly and user-unfriendly. We know the user experience team will scream about this, so add it to your backlog to do something to improve this page.

In this section, we wrapped up test development by creating a couple of negative tests. While this didn't result in finding code bugs, we found a pair of user experience problems. We know this will result in an unpleasant discussion with the user experience team, so we've proactively added a task to the backlog to fix those pages. We also learned to stay on the lookout for any kind of problem that crops up along the way. It's well known that problems found by the development or testing team are the cheapest to fix; the cost goes up tremendously once the problems are being reported by the user community.

Before we wrap up this chapter, we need to talk a little more in-depth about testability.

Improving testability in the Notes UI

While the Notes application displays well in the browser, how do we write test software to distinguish one page from another? As we saw in this section, the UI test often performed an action that caused a page refresh and had to wait for the next page to appear. This means our test must be able to inspect the page and work out whether the browser is displaying the correct page. An incorrect page is itself a bug in the application. Once the test determines it is the correct page, it can then validate the data on the page.

The bottom line is a simple requirement: every HTML element the tests need to interact with or verify must be easily addressable using a CSS selector.

While in most cases it is easy to code a CSS selector for every element, in a few cases, this is difficult. The Software Quality Engineering (SQE) manager has requested our assistance. At stake is the testing budget, which will be stretched further the more the SQE team can automate their tests.

All that's necessary is to add a few id or class attributes to HTML elements to improve testability. With a few identifiers and a commitment to maintaining those identifiers, the SQE team can write repeatable test scripts to validate the application.

We have already seen one example of this: the Delete button in views/noteview.hbs. It proved impractical to write a reliable CSS selector for that button as it stood, so we added an ID attribute that let us write the test.

In general, testability is about adding things to an API or user interface for the benefit of software quality testers. For an HTML user interface, that means making sure test scripts can locate any element in the HTML DOM. And as we've seen, the id and class attributes go a long way to satisfying that need.

In this section, we learned about user interface testing as a form of functional testing. We used Puppeteer, a framework for driving a headless Chromium browser instance, as the vehicle for testing the Notes user interface. We learned how to automate user interface actions and how to verify that the web pages that showed up matched with their correct behavior. That included test scenarios covering login, logout, adding notes, and logging in with a bad user ID. While this didn't discover any outright failures, watching the user interaction told us of some usability problems with Notes.

With that, we are ready to close out this chapter.

Summary

We covered a lot of territory in this chapter and looked at three distinct areas of testing: unit testing, REST API testing, and UI functional tests. Ensuring that an application is well tested is an important step on the road to software success. A team that does not follow good testing practices is often bogged down with fixing regression after regression.

First, we talked about the potential simplicity of simply using the assert module for testing. While the test frameworks, such as Mocha, provide great features, we can go a long way with a simple script.

There is a place for test frameworks, such as Mocha, if only to regularize our test cases and to produce test results reports. We used Mocha and Chai for this, and these tools were quite successful. We even found a couple of bugs with a small test suite.

When starting down the unit testing road, one design consideration is mocking out dependencies. But it's not always a good use of our time to replace every dependency with a mock version. As a result, we ran our tests against a live database, but with test data.

To ease the administrative burden of running tests, we used Docker to automate setting up and tearing down the test infrastructure. Just as Docker was useful in automating the deployment of the Notes application, it's also useful in automating test infrastructure deployment.

Finally, we were able to test the Notes web user interface in a real web browser. We can't trust that unit testing will find every bug; some bugs will only show up in the web browser. 

In this book, we've covered the full life cycle of Node.js development, from concept, through various stages of development, to deployment and testing. This will give you a strong foundation from which to start developing Node.js applications.

In the next chapter, we'll explore another critical area: security. We'll start by using HTTPS to encrypt and authenticate user access to Notes. We'll use several Node.js packages to limit the chances of security intrusions.
