Our first Mocha test

First, let's add a new command to the scripts field of our package.json file:

"test": "mocha --exit test/ --require babel-hook --require @babel/polyfill --recursive"

If you now execute npm run test, Mocha runs all tests in the test folder, which we'll create in a second. The --require option loads the specified file or package before running the tests; here, it loads a babel-hook.js file, which we'll also create, as well as @babel/polyfill. The --recursive parameter tells Mocha to traverse the complete file tree of the test folder, not just the first level. This behavior is useful because it allows us to structure our tests across multiple files and folders.
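After this change, the scripts field of our package.json might look like the following sketch; the server entry is a placeholder for whatever your project already defines:

```json
{
  "scripts": {
    "server": "...",
    "test": "mocha --exit test/ --require babel-hook --require @babel/polyfill --recursive"
  }
}
```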

Let's begin with the babel-hook.js file by adding it to the root of our project, next to the package.json file. Insert the following code:

require("@babel/register")({
  "plugins": [
    "require-context-hook"
  ],
  "presets": ["@babel/env", "@babel/react"]
});

The purpose of this file is to give us an alternative Babel configuration file to our standard .babelrc file. If you compare both files, you should see that we use the require-context-hook plugin. We already use this plugin when starting the back end with npm run server. It allows us to import our Sequelize models using a regular expression.

When we start our tests with npm run test, this file is required first. Inside the babel-hook.js file, we load @babel/register, which compiles all files imported afterward in our tests according to the preceding configuration.

Notice that when running a production build or environment, the production database is also used, and all changes are made to that database. Verify that you have configured the database credentials correctly in the server's configuration folder. You only have to set the host, username, password, and database environment variables correctly.
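As a minimal sketch, setting these variables for a local run could look like the following; the variable names come from the paragraph above, but the values are placeholder assumptions:

```shell
# Database credentials read by the server's configuration folder.
# The values below are placeholders -- substitute your own local credentials.
export host=localhost
export username=devuser
export password=devpassword
export database=graphbook_dev
```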

This gives us the option to start our back end server from within our test file and render our application on the server. The preparation for our test is now finished. Create a folder named test in the root of our project to hold all runnable tests. Mocha scans all files and folders inside it, and every test found is executed. To get a basic test running, create app.test.js. This is the main file: it makes sure that our back end is running, and in it we can subsequently define further tests. The first version of our test looks as follows:

const assert = require('assert');
const request = require('request');
const expect = require('chai').expect;
const should = require('chai').should();

describe('Graphbook application test', function() {

  it('renders and serves the index page', function(done) {
    request('http://localhost:8000', function(err, res, body) {
      should.not.exist(err);
      should.exist(res);
      expect(res.statusCode).to.be.equal(200);
      assert.ok(body.indexOf('<html') !== -1);
      done(err);
    });
  });

});

Let's take a closer look at what's happening here:

  1. We import the Node.js assert function. It gives us the ability to verify the value or the type of a variable.
  2. We import the request package, which we use to send queries against our back end.
  3. We import two Chai functions, expect and should, from the chai package. Neither of these is included in Mocha, but they both improve the test's functionality significantly.
  4. The beginning of the test starts with the describe function. Because Mocha executes the app.test.js file, we're in the correct scope and can use all Mocha functions. The describe function is used to structure your test and its output.
  5. We use the it function, which initiates the first test.

The it function can be understood as a feature of our application that we want to test inside the callback function. As the first parameter, you should enter a sentence, such as 'it does this and that', that's easily readable. The function itself waits for the complete execution of the callback function in the second parameter. The result of the callback will either be that all assertions were successful, or that, for some reason, a test failed or the callback didn't complete in a reasonable amount of time.
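This waiting behavior can be illustrated with a minimal sketch in plain Node.js; runTest is a hypothetical stand-in for Mocha's internals, not its real API:

```javascript
// Sketch of how a done()-based test with a timeout works, using only
// plain Node.js timers. runTest is an illustrative helper, not Mocha's API.
function runTest(name, fn, timeoutMs, report) {
  let finished = false;
  // If done is never called in time, the test is marked as failed.
  const timer = setTimeout(function() {
    if (!finished) {
      finished = true;
      report(name, new Error('timeout of ' + timeoutMs + 'ms exceeded'));
    }
  }, timeoutMs);
  // done mirrors Mocha's callback: a falsy argument means success,
  // an Error object marks the test as failed.
  const done = function(err) {
    if (finished) return;
    finished = true;
    clearTimeout(timer);
    report(name, err || null);
  };
  fn(done);
}

// Usage: the callback calls done in time, so the test passes.
runTest('renders and serves the index page', function(done) {
  setTimeout(function() { done(null); }, 10);
}, 2000, function(name, err) {
  console.log(err ? name + ' failed: ' + err.message : name + ' passed');
});
```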

The describe function is the header of our test's output. Then, we have a new row for each it function we execute. Each row represents a single test step. The it function passes a done function to the callback. The done function has to be executed once all assertions are finished and there's nothing left to do. If it isn't executed in a certain amount of time, the current test is marked as failed. In the preceding code, the first thing we did was send an HTTP GET request to http://localhost:8000, which is accepted by our back end server. The expected answer will be in the form of server-side rendered HTML created through React.

To prove that the response holds this information, we make some assertions in our preceding test:

  1. We use the should function from Chai. The great thing is that it's chainable and represents a sentence that directly explains the meaning of what we're doing. The should.not.exist function chain makes sure that the given value is empty. The result is true if the value is undefined or null, for example. The consequence is that when the err variable is filled, the assertion fails and so our test, 'renders and serves the index page', fails too.
  2. The same goes for the should.exist line. It makes sure that the res variable, which is the response given by the back end, is filled. Otherwise, there's a problem with the back end.
  3. The expect function can also represent a sentence, like both functions before. We expect res.statusCode to have a value of 200. This assertion can be written as expect(res.statusCode).to.be.equal(200). We can be sure that everything has gone well if the HTTP status is 200.
  4. If nothing has failed so far, we check whether the returned body, which is the third callback parameter of the request function, is valid. For our test scenario, we only need to check whether it contains an html tag.
  5. We execute the done function. We pass the err object as a parameter. The result of this function is much like the should.not.exist function. If you pass a filled error object to the done function, the test fails. The tests become more readable when using the Chai syntax.

If you execute npm run test now, the test fails: our first should.not.exist assertion throws an error, because we didn't start the back end before running the test. Start the back end in a second terminal, with the correct environment variables, using npm run server, and rerun the test. This time, the test is successful.

The output is good, but the process isn't very intuitive. The current workflow is hard to implement when running the tests automatically while deploying your application or pushing new commits to your version-control system. We'll change this behavior next.
