Quality assurance is a phrase that is prone to send shivers down the spines of developers—which is unfortunate. After all, don’t you want to make quality software? Of course you do. So it’s not the end goal that’s the sticking point; it’s the politics of the matter. I’ve found that two common situations arise in web development:
There’s usually a QA department, and, unfortunately, an adversarial relationship springs up between QA and development. This is the worst thing that can happen. Both departments are playing on the same team, for the same goal, but QA often defines success as finding more bugs, while development defines success as generating fewer bugs, and that serves as the basis for conflict and competition.
Often, there is no QA department; the development staff is expected to serve the dual role of establishing QA and developing software. This is not a ridiculous stretch of the imagination or a conflict of interest. However, QA is a very different discipline than development, and it attracts different personalities and talents. This is not an impossible situation, and certainly there are developers out there who have the QA mind-set, but when deadlines loom, it’s usually QA that gets the short shrift, to the project’s detriment.
With most real-world endeavors, multiple skills are required, and increasingly, it’s harder to be an expert in all of those skills. However, some competency in the areas for which you are not directly responsible will make you more valuable to the team and make the team function more effectively. A developer acquiring QA skills offers a great example: these two disciplines are so tightly intertwined that cross-disciplinary understanding is extremely valuable.
It is also common to shift activities traditionally done by QA to development, making developers responsible for QA. In this paradigm, software engineers who specialize in QA act almost as consultants to developers, helping them build QA into their development workflow. Whether QA roles are divided or integrated, it is clear that understanding QA is beneficial to developers.
This book is not for QA professionals; it is aimed at developers. So my goal is not to make you a QA expert but to give you some experience in that area. If your organization has a dedicated QA staff, it will make it easier for you to communicate and collaborate with them. If it does not, it will give you a starting point for establishing a comprehensive QA plan for your project.
In this chapter, you’ll learn the following:
Quality fundamentals and effective habits
The types of tests (unit and integration)
How to write unit tests with Jest
How to write integration tests with Puppeteer
How to configure ESLint to help prevent common errors
What continuous integration is and where to start learning about it
Development is, by and large, a creative process: envisioning something and then translating it into reality. QA, in contrast, lives more in the realm of validation and order. As such, a large part of QA is simply a matter of knowing what needs to be done and making sure it gets done. It is a discipline well-suited for checklists, procedures, and documentation. I would go so far as to say the primary activity of QA is not the testing of software itself but the creation of a comprehensive, repeatable QA plan.
I recommend the creation of a QA plan for every project, no matter how big or small (yes, even your weekend “fun” project!). The QA plan doesn’t have to be big or elaborate; you can put it in a text file or a word processing document or a wiki. The objective of the QA plan is to record all of the steps you’ll take to ensure that your product is functioning as intended.
In whatever form it takes, the QA plan is a living document. You will update it in response to the following:
New features
Changes in existing features
Removed features
Changes in testing technologies or techniques
Defects that were missed by the QA plan
That last point deserves special mention. No matter how robust your QA is, defects will happen. And when they do, you should ask yourself, “How could we have prevented this?” When you answer that question, you can modify your QA plan accordingly to prevent future instances of this type of defect.
By now you might be getting a feel for the not insignificant effort involved in QA, and you might be reasonably wondering how much effort you want to put into it.
QA can be expensive—sometimes very expensive. So is it worth it? It’s a complicated formula with complicated inputs. Most organizations operate on some kind of “return on investment” model. If you spend money, you must expect to receive at least as much money in return (preferably more). With QA, though, the relationship can be muddy. A well-established and well-regarded product, for example, may be able to get by with quality issues for longer than a new and unknown project. Obviously, no one wants to produce a low-quality product, but the pressures in technology are high. Time-to-market can be critical, and sometimes it’s better to come to market with something that’s less than perfect than to come to market with the perfect product months later.
In web development, quality can be broken down into four dimensions:
Reach refers to the market penetration of your product: the number of people viewing your website or using your service. There’s a direct correlation between reach and profitability: the more people who visit the website, the more people who buy the product or service. From a development perspective, search engine optimization (SEO) will have the biggest impact on reach, which is why we will be including SEO in our QA plan.
Once people are visiting your site or using your service, the quality of your site’s functionality will have a large impact on user retention; a site that works as advertised is more likely to drive return visits than one that doesn’t. Functionality offers the most opportunity for test automation.
Whereas functionality is concerned with functional correctness, usability evaluates human-computer interaction (HCI). The fundamental question is, “Is the functionality delivered in a way that is useful to the target audience?” This often translates to “Is it easy to use?” though the pursuit of ease can often oppose flexibility or power; what seems easy to a programmer might be different from what seems easy to a nontechnical consumer. In other words, you must consider your target audience when assessing usability. Since a fundamental input to a usability measurement is a user, usability is not usually something that can be automated. However, user testing should be included in your QA plan.
Aesthetics is the most subjective of the four dimensions and is therefore the least relevant to development. While there are few development concerns when it comes to your site’s aesthetics, routine reviews of your site’s aesthetics should be part of your QA plan. Show your site to a representative sample audience, and find out if it feels dated or does not invoke the desired response. Keep in mind that aesthetics is time sensitive (aesthetic standards shift over time) and audience specific (what appeals to one audience may be completely uninteresting to another).
While all four dimensions should be addressed in your QA plan, functionality and SEO can be tested automatically during development, so those will be the focus of this chapter.
Broadly speaking, in your website, there are two “realms”: logic (often called business logic, a term I eschew because of its bias toward commercial endeavor) and presentation. You can think of your website’s logic existing in kind of a pure intellectual domain. For example, in our Meadowlark Travel scenario, there might be a rule that a customer must possess a valid driver’s license before renting a scooter. This is a simple data-based rule: for every scooter reservation, the user needs a valid driver’s license. The presentation of this is disconnected. Perhaps it’s just a checkbox on the final form of the order page, or perhaps the customer has to provide a valid driver’s license number, which is validated by Meadowlark Travel. It’s an important distinction, because things should be as clear and simple as possible in the logic domain, whereas the presentation can be as complicated or as simple as it needs to be. The presentation is also subject to usability and aesthetic concerns, whereas the business domain is not.
Whenever possible, you should seek a clear delineation between your logic and presentation. There are many ways to do that, and in this book, we will be focusing on encapsulating logic in JavaScript modules. Presentation, on the other hand, will be a combination of HTML, CSS, multimedia, JavaScript, and frontend frameworks like React, Vue, or Angular.
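To make the delineation concrete, here is a minimal sketch of what the scooter-rental rule might look like as logic-only code. The function name and the shape of the customer object are my inventions for illustration, not part of the actual Meadowlark code:

```javascript
// A business rule as a pure function: data in, verdict out. It knows
// nothing about forms, checkboxes, or Express; rendering that verdict
// (a disabled button, an error message) is the presentation layer's job.
const canRentScooter = customer =>
  Boolean(customer && customer.hasValidDriversLicense)

console.log(canRentScooter({ hasValidDriversLicense: true }))  // → true
console.log(canRentScooter({}))                                // → false
```

Because the rule takes plain data and returns a plain value, it can be unit tested without mocking anything at all.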
The type of testing we will be considering in this book falls into two broad categories: unit testing and integration testing (I am considering system testing to be a type of integration testing). Unit testing is very fine-grained, testing single components to make sure they function properly, whereas integration testing tests the interaction between multiple components or even the whole system.
In general, unit testing is more useful and appropriate for logic testing. Integration testing is useful in both realms.
In this book, we will be using the following techniques and software to accomplish thorough testing:
Unit tests cover the smallest units of functionality in your application, usually a single function. They are almost always written by developers, not QA (though QA should be empowered to assess the quality and coverage of unit tests). In this book, we’ll be using Jest for unit tests.
Integration tests cover larger units of functionality, usually involving multiple parts of your application (functions, modules, subsystems, etc.). Since we are building web applications, the “ultimate” integration test is to render the application in a browser, manipulate that browser, and verify that the application behaves as expected. These tests are typically more complicated to set up and maintain, and since the focus of this book isn’t QA, we’ll have only one simple example of this, using Puppeteer and Jest.
Linting isn’t about finding errors but potential errors. The general concept of linting is that it identifies areas that could represent possible errors, or fragile constructs that could lead to errors in the future. We will be using ESLint for linting.
Let’s start with Jest, our test framework (which will run both unit and integration tests).
I struggled somewhat to decide which testing framework to use in this book. Jest began its life as a framework to test React applications (and it is still the obvious choice for that), but Jest is not React-specific and is an excellent general-purpose testing framework. It’s certainly not the only one: Mocha, Jasmine, Ava, and Tape are also excellent choices.
In the end, I chose Jest because I feel it offers the best overall experience (an opinion backed by Jest’s excellent scores in the State of JavaScript 2018 survey). That said, there are a lot of similarities among the testing frameworks mentioned here, so you should be able to take what you learn and apply it to your favorite test framework.
To install Jest, run the following from your project root:
npm install --save-dev jest
(Note that we use --save-dev here; this tells npm that this is a development dependency and is not needed for the application itself to function; it will be listed in the devDependencies section of the package.json file instead of the dependencies section.)
Before we move on, we need a way to run Jest (which will run any tests in our project). The conventional way to do that is to add a script to package.json. Edit package.json (ch05/package.json in the companion repo), and modify the scripts property (or add it if it doesn’t exist):
"scripts": { "test": "jest" },
Now you can run all the tests in your project simply by typing the following:
npm test
If you try that now, you’ll probably get an error that there aren’t any tests configured…because we haven’t added any yet. So let’s write some unit tests!
Now we’ll turn our attention to unit testing. Since the focus of unit testing is on isolating a single function or component, we’ll first need to learn about mocking, an important technique for achieving that isolation.
One of the challenges you’ll frequently face is how to write code that is “testable.” In general, code that tries to do too much or assumes a lot of dependencies is harder to test than focused code that assumes few or no dependencies.
Whenever you have a dependency, you have something that needs to be mocked (simulated) for effective testing. For example, our primary dependency is Express, which is already thoroughly tested, so we don’t need or want to test Express itself, just how we use it. The only way we can determine if we’re using Express correctly is to simulate Express itself.
The routes we currently have (the home page, About page, 404 page, and 500 page) are pretty difficult to test because they assume three dependencies on Express: they assume we have an Express app (so we can have app.get), as well as request and response objects. Fortunately, it’s pretty easy to eliminate the dependence on the Express app itself (the request and response objects are harder…more on that later). And we’re not using very much functionality from the response object (only the render method), so it will be easy to mock, as we’ll see shortly.
We don’t really have a lot of code in our application to test yet. To date, we’ve added only a handful of route handlers and the getFortune function.
To make our app more testable, we’re going to extract the actual route handlers to their own library. Create a file lib/handlers.js (ch05/lib/handlers.js in the companion repo):
const fortune = require('./fortune')

exports.home = (req, res) => res.render('home')

exports.about = (req, res) =>
  res.render('about', { fortune: fortune.getFortune() })

exports.notFound = (req, res) => res.render('404')

exports.serverError = (err, req, res, next) => res.render('500')
Now we can rewrite our meadowlark.js application file to use these handlers (ch05/meadowlark.js in the companion repo):
// typically at the top of the file
const handlers = require('./lib/handlers')

app.get('/', handlers.home)

app.get('/about', handlers.about)

// custom 404 page
app.use(handlers.notFound)

// custom 500 page
app.use(handlers.serverError)
It’s easier now to test those handlers: they are just functions that take request and response objects, and we need to verify that we’re using those objects correctly.
There are multiple ways to identify tests to Jest. The two most common are to put tests in subdirectories named __tests__ (two underscores before and after tests) and to name files with the extension .test.js. I personally like to combine the two techniques because they each serve a purpose in my mind. Putting tests in __tests__ directories keeps my tests from cluttering up my source directories (otherwise, everything will look doubled in your source directory…you’ll have a foo.test.js for every file foo.js), and having the .test.js extension means that if I’m looking at a bunch of tabs in my editor, I can see at a glance which is a test and which is source code.
So let’s create a file called lib/__tests__/handlers.test.js (ch05/lib/__tests__/handlers.test.js in the companion repo):
const handlers = require('../handlers')

test('home page renders', () => {
  const req = {}
  const res = { render: jest.fn() }
  handlers.home(req, res)
  expect(res.render.mock.calls[0][0]).toBe('home')
})
If you’re new to testing, this will probably look pretty weird, so let’s break it down.
First, we import the code we’re trying to test (in this case, the route handlers). Then each test has a description; we’re trying to describe what’s being tested. In this case, we want to make sure that the home page gets rendered.
To invoke our render, we need request and response objects. We’d be writing code all week if we wanted to simulate the whole request and response objects, but fortunately we don’t actually need much from them. We know that we don’t need anything at all from the request object in this case (so we just use an empty object), and the only thing we need from the response object is a render method. Note how we construct the render function: we just call a Jest method called jest.fn(). This creates a generic mock function that keeps track of how it’s called.
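If jest.fn() feels like magic, the idea behind it is small enough to sketch by hand. The following is an illustration of the concept only, not Jest’s actual implementation:

```javascript
// A toy version of jest.fn(): a function that records the arguments of
// every call in a mock.calls array, mirroring the shape Jest exposes.
function makeMockFn() {
  const calls = []
  const fn = (...args) => { calls.push(args) }
  fn.mock = { calls }
  return fn
}

const render = makeMockFn()
render('home')                          // simulate a handler calling res.render

console.log(render.mock.calls.length)   // → 1
console.log(render.mock.calls[0][0])    // → 'home'
```

The real jest.fn() records much more (return values, this bindings, call order), but the core trick is the same: stand in for the dependency and remember how you were used.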
Finally, we get to the important part of the test: assertions. We’ve gone to all the trouble to invoke the code we’re testing, but how do we assert that it did what it should? In this case, what the code should do is call the render method of the response object with the string home. Jest’s mock function keeps track of all the times it got called, so all we have to do is verify that it got called exactly once (it would probably be a problem if it got called twice) and that it got called with home as its first argument (the first array index specifies which invocation, and the second one specifies which argument).
It can get tedious to constantly rerun your tests every time you make a change to your code. Fortunately, most test frameworks have a “watch” mode that constantly monitors your code and tests for changes and reruns them automatically. To run your tests in watch mode, type npm test -- --watch (the extra double dash is necessary to let npm know to pass the --watch argument to Jest).
Go ahead and change your home handler to render something other than the home view; you’ll notice that your test now fails, and you’ve caught a bug!
We can now add tests for our other routes:
test('about page renders with fortune', () => {
  const req = {}
  const res = { render: jest.fn() }
  handlers.about(req, res)
  expect(res.render.mock.calls.length).toBe(1)
  expect(res.render.mock.calls[0][0]).toBe('about')
  expect(res.render.mock.calls[0][1])
    .toEqual(expect.objectContaining({
      fortune: expect.stringMatching(/\W/),
    }))
})

test('404 handler renders', () => {
  const req = {}
  const res = { render: jest.fn() }
  handlers.notFound(req, res)
  expect(res.render.mock.calls.length).toBe(1)
  expect(res.render.mock.calls[0][0]).toBe('404')
})

test('500 handler renders', () => {
  const err = new Error('some error')
  const req = {}
  const res = { render: jest.fn() }
  const next = jest.fn()
  handlers.serverError(err, req, res, next)
  expect(res.render.mock.calls.length).toBe(1)
  expect(res.render.mock.calls[0][0]).toBe('500')
})
Note some extra functionality in the “about” and server error tests. The “about” render function gets called with a fortune, so we’ve added an expectation that it will get a fortune that is a string containing at least one character. It’s beyond the scope of this book to describe all of the functionality that is available to you through Jest and its expect method, but you can find comprehensive documentation on the Jest home page. Note that the server error handler takes four arguments, not two, so we have to provide additional mocks.
You might be realizing that tests are not a “set it and forget it” affair. For example, if we renamed our “home” view for legitimate reasons, our test would fail, and then we would have to fix the test in addition to fixing the code.
For this reason, teams put a lot of effort into setting realistic expectations about what should be tested and how specific the tests should be. For example, we didn’t have to check to see if the “about” handler was being called with a fortune…which would save us from having to fix the test if we ditch that feature.
Furthermore, I can’t offer much advice about how thoroughly you should test your code. I would expect you to have very different standards for testing code for avionics or medical equipment than for testing the code behind a marketing website.
What I can offer you is a way to answer the question, “How much of my code is tested?” The answer to that is called code coverage, which we’ll discuss next.
Code coverage offers a quantitative answer to how much of your code is tested, but like most topics in programming, there are no simple answers.
Jest helpfully provides some automated code coverage analysis. To see how much of your code is tested, run the following:
npm test -- --coverage
If you’ve been following along, you should see a bunch of reassuringly green “100%” coverage numbers for the files in lib. Jest will report on the coverage percentage of statements (Stmts), branches, functions (Funcs), and lines.
Statements refer to JavaScript statements, such as every expression, control flow statement, etc. Note that you could have 100% line coverage but not 100% statement coverage, because you can put multiple statements on a single line in JavaScript. Branch coverage refers to control flow statements, such as if-else. If you have an if-else statement and your test exercises only the if part, you will have 50% branch coverage for that statement.
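To see why line and branch coverage can differ, consider this one-line function (my own example, not from the Meadowlark code). A test suite that only ever passes a truthy name covers 100% of its lines but only 50% of its branches:

```javascript
// One line, two branches: the ternary's false arm is a branch that a
// test must exercise separately to reach 100% branch coverage.
const greet = name => name ? `Hello, ${name}!` : 'Hello, stranger!'

console.log(greet('Ann'))   // exercises the truthy branch
console.log(greet(''))      // exercises the falsy branch
```

A coverage tool like Jest’s would report the first call alone as full line coverage but partial branch coverage; the second call is what completes the branch count.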
You may note that meadowlark.js does not have 100% coverage. This is not necessarily a problem; if you look at our refactored meadowlark.js file, you’ll see that most of what’s in there now is simply configuration…we’re just gluing things together. We’re configuring Express with the relevant middleware and starting the server. Not only would it be hard to meaningfully test this code, but it’s a reasonable argument that you shouldn’t have to since it’s merely assembling well-tested code.
You could even make the argument that the tests we’ve written so far are not particularly useful; they’re also just verifying that we’re configuring Express correctly.
Once again, I have no easy answers. At the end of the day, the type of application you’re building, your level of experience, and the size and configuration of your team will have a large impact on how far down the testing rabbit hole you go. I encourage you to err on the side of too much testing rather than too little, but with experience, you’ll find the “just right” sweet spot.
There’s currently nothing interesting to test in our application; we just have a couple of pages and there’s no interaction. So before we write an integration test, let’s add some functionality that we can test. In the interest of keeping things simple, we’ll let that functionality be a link that allows you to get from the home page to the About page. It doesn’t get much simpler than that! And yet, as simple as that would appear to a user, it is a true integration test because it’s exercising not only two Express route handlers, but also the HTML and the DOM interaction (the user clicking the link and the resulting page navigation). Let’s add a link to views/home.handlebars:
<p>
  Questions?  Check out our
  <a href="/about" data-test-id="about">About Us</a> page!
</p>
You might be wondering about the data-test-id attribute. To make testing easier, we need some way to identify the link so we can (virtually) click it. We could have used a CSS class for this, but I prefer to reserve classes for styling and use data attributes for automation. We could also have searched for the text About Us, but that would be a fragile and expensive DOM search. We could also have queried against the href parameter, which would make sense (but then it would be harder to make this test fail, which we want to do for educational purposes).
We can go ahead and run our application and verify with our clumsy human hands that the functionality works as intended before we move on to something more automated.
Before we jump into installing Puppeteer and writing an integration test, we need to modify our application so that it can be required as a module (right now it is designed only to be run directly). The way to do that in Node is a little opaque: at the bottom of meadowlark.js, replace the call to app.listen with the following:
if (require.main === module) {
  app.listen(port, () => {
    console.log(`Express started on http://localhost:${port}` +
      '; press Ctrl-C to terminate.')
  })
} else {
  module.exports = app
}
I’ll skip the technical explanation for this as it’s rather tedious, but if you’re curious, a careful reading of Node’s module documentation will make it clear. What’s important to know is that if you run a JavaScript file directly with node, require.main will equal the global module; otherwise, the file is being imported from another module.
Now that we’ve got that out of the way, we can install Puppeteer. Puppeteer is essentially a controllable, headless version of Chrome. (Headless simply means that the browser is capable of running without actually rendering a UI on-screen.) To install Puppeteer:
npm install --save-dev puppeteer
We’ll also install a small utility to find an open port so that we don’t get a lot of test errors because our app can’t start on the port we requested:
npm install --save-dev portfinder
Now we can write an integration test that does the following:
Starts our application server on an unoccupied port
Launches a headless Chrome browser and opens a page
Navigates to our application’s home page
Finds a link with data-test-id="about" and clicks it
Waits for the navigation to happen
Verifies that we are on the /about page
Create a directory called integration-tests (you’re welcome to call it something else if you like) and a file in that directory called basic-navigation.test.js (ch05/integration-tests/basic-navigation.test.js in the companion repo):
const portfinder = require('portfinder')
const puppeteer = require('puppeteer')

const app = require('../meadowlark.js')

let server = null
let port = null

beforeEach(async () => {
  port = await portfinder.getPortPromise()
  server = app.listen(port)
})

afterEach(() => {
  server.close()
})

test('home page links to about page', async () => {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()
  await page.goto(`http://localhost:${port}`)
  await Promise.all([
    page.waitForNavigation(),
    page.click('[data-test-id="about"]'),
  ])
  expect(page.url()).toBe(`http://localhost:${port}/about`)
  await browser.close()
})
We are using Jest’s beforeEach and afterEach hooks to start our server before each test and stop it after each test (right now we have only one test, so this will really become meaningful when we add more tests). We could instead use beforeAll and afterAll so we’re not starting and tearing down our server for every test, which may speed up your tests, but at the cost of not having a “clean” environment for each test. That is, if one of your tests makes changes that affect the outcome of future tests, you’re introducing hard-to-maintain dependencies.
Our actual test uses Puppeteer’s API, which gives us a lot of DOM query functionality. Note that almost everything here is asynchronous, and we’re using await liberally to make the test easier to read and write (almost all of the Puppeteer API calls return a promise). We wrap the navigation and the click together in a call to Promise.all to prevent race conditions, per the Puppeteer documentation.
There’s far more functionality in the Puppeteer API than I could hope to cover in this book. Fortunately, it has excellent documentation.
Testing is a vital backstop in ensuring the quality of your product, but it’s not the only tool at your disposal. Linting helps you prevent common errors in the first place.
A good linter is like having a second set of eyes: it will spot things that slide right past our human brains. The original JavaScript linter is Douglas Crockford’s JSLint. In 2011, Anton Kovalyov forked JSLint, and JSHint was born. Kovalyov found that JSLint was becoming too opinionated, and he wanted to create a more customizable, community-developed JavaScript linter. After JSHint came Nicholas Zakas’s ESLint, which has become the most popular choice (it won by a landslide in the 2017 State of JavaScript survey). In addition to its ubiquity, ESLint appears to be the most actively maintained linter, and I prefer its flexible configuration over JSHint’s, so it is what I recommend.
ESLint can be installed on a per project basis or globally. To avoid inadvertently breaking things, I try to avoid global installations (for example, if I install ESLint globally and update it frequently, old projects may no longer lint successfully because of breaking changes, and now I have to do the extra work of updating my project).
To install ESLint in your project:
npm install --save-dev eslint
ESLint requires a configuration file to tell it which rules to apply. Doing this from scratch would be a time-consuming task, so fortunately ESLint provides a utility for creating one for you. From your project root, run the following:
./node_modules/.bin/eslint --init
If we had installed ESLint globally, we could just use eslint --init. The awkward ./node_modules/.bin path is required to directly run locally installed utilities. We’ll see soon that we don’t have to do that if we add utilities to the scripts section of our package.json file, which is recommended for things we do frequently. However, creating an ESLint configuration is something we have to do only once per project.
ESLint will ask you some questions. For most of them, it’s safe to choose the defaults, but a couple deserve note:
Since we’re using Node (as opposed to code that will run in the browser), you’ll want to choose “CommonJS (require/exports).” You may have client-side JavaScript in your project too, in which case you may want a separate lint configuration. The easiest way to do this is to have two separate projects, but it is possible to have multiple ESLint configurations in the same project. Consult the ESLint documentation for more information.
Unless you see Express on there (I don’t at the time of this writing), choose “None of these.”
Choose Node.
Now that ESLint is set up, we need a convenient way of running it. Add the following to the scripts section of your package.json:
"lint": "eslint meadowlark.js lib"
Note that we have to explicitly tell ESLint what files and directories we want to lint. This is an argument for collecting all of your source under one directory (usually src).
Now brace yourself and run the following:
npm run lint
You’ll probably see a lot of unpleasant-looking errors—that’s usually what happens when you first run ESLint. However, if you’ve been following along with the Jest test, there will be some spurious errors related to Jest, which look like this:
3:1    error  'test' is not defined    no-undef
5:25   error  'jest' is not defined    no-undef
7:3    error  'expect' is not defined  no-undef
8:3    error  'expect' is not defined  no-undef
11:1   error  'test' is not defined    no-undef
13:25  error  'jest' is not defined    no-undef
15:3   error  'expect' is not defined  no-undef
ESLint (quite sensibly) doesn’t appreciate unrecognized global variables. Jest injects global variables (notably test, describe, jest, and expect). Fortunately, this is an easy problem to fix. In your project root, open the .eslintrc.js file (this is the ESLint configuration). In the env section, add the following:
"jest": true,
Now if you run npm run lint again, you should see a lot fewer errors.
So what to do about the remaining errors? Here’s where I can offer wisdom but no specific guidance. Broadly speaking, a linting error has one of three causes:
It’s a legitimate problem, and you should fix it. It may not always be obvious, in which case you may need to refer to the ESLint documentation for the particular error.
It’s a rule you don’t agree with, and you can simply disable it. Many of the rules in ESLint are a matter of opinion. I’ll demonstrate disabling a rule in a moment.
You agree with the rule, but there’s a specific instance where it’s infeasible or too costly to fix. For those situations, you can disable rules for specific lines in a file, which we’ll also see an example of.
If you’ve been following along, you should currently see the following errors:
/Users/ethan/wdne2e-companion/ch05/meadowlark.js
  27:5  error  Unexpected console statement  no-console

/Users/ethan/wdne2e-companion/ch05/lib/handlers.js
  10:39  error  'next' is defined but never used  no-unused-vars
ESLint complains about console logging because it’s not necessarily a good way to provide output for your application; it can be noisy and inconsistent, and, depending on how you run it, the output can get swept under the rug. However, for our use, let’s say it doesn’t bother us and we want to disable that rule. Open your .eslintrc.js file, find the rules section (if there isn’t a rules section, create one at the top level of the exported object), and add the following rule:

"rules": {
  "no-console": "off",
},
Now if we run npm run lint again, we’ll see that error is no more! The next one is a little trickier….
Open lib/handlers.js and consider the line in question:
exports.serverError = (err, req, res, next) => res.render('500')
ESLint is correct; we’re providing next as an argument but not doing anything with it (we’re also not doing anything with err and req, but because of the way JavaScript treats function arguments, we have to put something there so we can get at res, which we are using).
You may be tempted to just remove the next argument. “What’s the harm?” you may think. And indeed, there would be no runtime errors, and your linter would be happy…but a hard-to-see harm would be done: your custom error handler would stop working! (If you want to see for yourself, throw an exception from one of your routes and try visiting it, and then remove the next argument from the serverError handler.)
Express is doing something subtle here: it’s using the number of parameters your function declares to recognize that it’s supposed to be an error handler. Without that next argument—whether you use it or not—Express no longer recognizes it as an error handler.
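To see this mechanism in isolation—this is an illustrative sketch, not Express’s actual source—note that every JavaScript function reports its declared parameter count via its length property, which is enough to tell a regular middleware from an error handler:

```javascript
// A plain middleware declares three parameters...
const middleware = (req, res, next) => next()
// ...while an error handler declares four.
const errorHandler = (err, req, res, next) => res.render('500')

// Function.length is the number of declared parameters, which is
// how a framework can distinguish the two kinds of handler.
const isErrorHandler = fn => fn.length === 4

console.log(isErrorHandler(middleware))   // false
console.log(isErrorHandler(errorHandler)) // true
```

This is also why removing the unused next parameter changes behavior: it drops the function’s declared arity from four to three.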
What the Express team has done with the error handler is undeniably “clever,” but clever code can often be confusing, easy to break, or inscrutable. As much as I love Express, this is one choice I think the team got wrong: I think it should have found a less idiosyncratic and more explicit way to specify an error handler.
We can’t change our handler code, and we need our error handler, but we like this rule and don’t want to disable it. We could just live with the error, but errors like this accumulate and become a constant irritation, eventually eroding the very point of having a linter. Fortunately, we can fix it by disabling that rule for just those lines. Edit lib/handlers.js and add the following around your error handler:
// Express recognizes the error handler by way of its four
// arguments, so we have to disable ESLint's no-unused-vars rule
/* eslint-disable no-unused-vars */
exports.serverError = (err, req, res, next) => res.render('500')
/* eslint-enable no-unused-vars */
Linting can be a little frustrating at first—it may feel like it’s constantly tripping you up. And certainly you should feel free to disable rules that don’t suit you. Eventually, you will find it less and less frustrating as you learn to avoid the common mistakes that linting is designed to catch.
Testing and linting are undeniably useful, but any tool is worthless if you never use it! It may seem crazy to go to the time and trouble of writing unit tests and setting up linting only to let them fall into disuse, but I’ve seen it happen, especially when the pressure is on. Fortunately, there is a way to ensure that these helpful tools don’t get forgotten: continuous integration.
I’ll leave you with another extremely useful QA concept: continuous integration (CI). It’s especially important if you’re working on a team, but even if you’re working on your own, it can provide some helpful discipline.
Basically, CI runs some or all of your tests every time you contribute code to a source code repository (you can control which branches this applies to). If all of the tests pass, nothing usually happens (you may get an email saying “good job,” depending on how your CI is configured).
If, on the other hand, there are failures, the consequences are usually more…public. Again, it depends on how you configure your CI, but usually the entire team gets an email saying that you “broke the build.” If your integration master is really sadistic, sometimes your boss is also on that email list! I’ve even known teams that set up lights and sirens when someone broke the build, and in one particularly creative office, a tiny robotic foam missile launcher fired soft projectiles at the offending developer! It’s a powerful incentive to run your QA toolchain before committing.
It’s beyond the scope of this book to cover installing and configuring a CI server, but a chapter on QA wouldn’t be complete without mentioning it.
Currently, the most popular CI server for Node projects is Travis CI. Travis CI is a hosted solution, which can be appealing (it saves you from having to set up your own CI server). If you’re using GitHub, it offers excellent integration support. CircleCI is another option.
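To give you a flavor of what’s involved, configuring Travis CI for a Node project amounts to a .travis.yml file in your project root; a minimal sketch (the Node version and steps shown are just one reasonable choice) might look like this:

```yaml
language: node_js
node_js:
  - "12"
# Travis runs "npm test" by default for Node projects; overriding
# the script lets us run the linter as part of the build too.
script:
  - npm run lint
  - npm test
```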
If you’re working on a project on your own, you may not get much benefit from a CI server, but if you’re working on a team or an open source project, I highly recommend looking into setting up CI for your project.
This chapter covered a lot of ground, but I consider these essential real-world skills in any development framework. The JavaScript ecosystem is dizzyingly large, and if you’re new to it, it can be hard to know where to start. I hope this chapter pointed you in the right direction.
Now that we have some experience with these tools, we’ll turn our attention to some fundamentals of the Node and Express objects that bracket everything that happens in an Express application: the request and response objects.
1 If you are unfamiliar with await, I recommend this article by Tamas Piros.