Chapter 4. Tidying Up

In the previous two chapters, we were just experimenting: dipping our toes into the waters, so to speak. Before we proceed to more complex functionality, we’re going to do some housekeeping and build some good habits into our work.

In this chapter, we’ll start our Meadowlark Travel project in earnest. Before we start building the website itself, though, we’re going to make sure we have the tools we need to produce a high-quality product.

Tip

The running example in this book is not necessarily one you have to follow. If you’re eager to build your own website, you can follow the framework of the running example, modifying it accordingly, so that by the time you finish this book, you’ll have a finished website of your own!

File and Directory Structure

Structuring applications has spawned many a religious debate, and there’s no one right way to do it. However, there are some common conventions that are helpful to know about.

It’s typical to restrict the number of files in your project root: usually you’ll find configuration files (like package.json), a README.md file, and a handful of directories. Most source code goes under a directory often called src. For the sake of brevity, we won’t be using that convention in this book (nor does the Express scaffolding application, surprisingly). For real-world projects, you’ll probably eventually find that your project root gets cluttered if you’re putting source code there, and you’ll want to collect those files under a directory like src.

I’ve also mentioned that I prefer to name my main application file (sometimes called the entry point) after the project itself (meadowlark.js) as opposed to something generic like index.js, app.js, or server.js.

It’s largely up to you how to structure your application, and I recommend providing a road map to your structure in the README.md file (or a readme linked from it).

At minimum, I recommend you always have the following two files in your project root: package.json and README.md. The rest is up to your imagination.

Best Practices

The phrase best practices is one you hear thrown around a lot these days, and it means that you should “do things right” and not cut corners (we’ll talk about what this means specifically in a moment). No doubt you’ve heard the engineering adage that your options are “fast,” “cheap,” and “good,” and you can pick any two. The thing that’s always bothered me about this model is that it doesn’t take into account the value that accrues from doing things correctly. The first time you do something correctly, it may take five times as long as it would have to do it quick and dirty. The second time, though, it’s going to take only three times as long. By the time you’ve done it correctly a dozen times, you’ll be doing it almost as fast as the quick and dirty way.

I had a fencing coach who would always remind us that practice doesn’t make perfect; practice makes permanent. That is, if you do something over and over again, eventually it will become automatic, rote. That is true, but it says nothing about the quality of the thing you are practicing. If you practice bad habits, then bad habits become rote. Instead, you should follow the rule that perfect practice makes perfect. In that spirit, I encourage you to follow the rest of the examples in this book as if you were making a real-live website, as if your reputation and remuneration depended on the quality of the outcome. Use this book to not only learn new skills but to practice building good habits.

The practices we will be focusing on are version control and QA. In this chapter, we’ll be discussing version control, and we’ll discuss QA in the next chapter.

Version Control

I hope I don’t have to convince you of the value of version control (if I did, that might take a whole book itself). Broadly speaking, version control offers these benefits:

Documentation

Being able to go back through the history of a project to see the decisions that were made and the order in which components were developed can be valuable documentation: a technical history of your project.

Attribution

If you work on a team, attribution can be hugely important. Whenever you find something in code that is opaque or questionable, knowing who made that change can save you many hours. It could be that the comments associated with the change are sufficient to answer your questions, and if not, you’ll know who to talk to.

Experimentation

A good version control system enables experimentation. You can go off on a tangent, trying something new, without fear of affecting the stability of your project. If the experiment is successful, you can fold it back into the project, and if it is not successful, you can abandon it.

Years ago, I made the switch to distributed version control systems (DVCSs). I narrowed my choices down to Git and Mercurial and went with Git, because of its ubiquity and flexibility. Both are excellent and free version control systems, and I recommend you use one of them. In this book, we will be using Git, but you are welcome to substitute Mercurial (or another version control system altogether).

If you are unfamiliar with Git, I recommend Jon Loeliger’s excellent Version Control with Git (O’Reilly). Also, GitHub has a good listing of Git learning resources.

How to Use Git with This Book

First, make sure you have Git. Type git --version. If it doesn’t respond with a version number, you’ll need to install Git. See the Git documentation for installation instructions.

There are two ways to follow along with the examples in this book. One is to type out the examples yourself and follow along with the Git commands. The other is to clone the companion repository I am using for all of the examples and check out the associated files for each example. Some people learn better by typing out examples, while some prefer to just see and run the changes without having to type it all in.

If You’re Following Along by Doing It Yourself

We already have a very rough framework for our project: some views, a layout, a logo, a main application file, and a package.json file. Let’s go ahead and create a Git repository and add all those files.

First, we go to the project directory and initialize a Git repository there:

git init

Now before we add all the files, we’ll create a .gitignore file to help prevent us from accidentally adding things we don’t want to add. Create a text file called .gitignore in your project directory in which you can add any files or directories you want Git to ignore by default (one per line). It also supports wildcards. For example, if your editor creates backup files with a tilde at the end (like meadowlark.js~), you might put *~ in the .gitignore file. If you’re on a Mac, you’ll want to put .DS_Store in there. You’ll also want to put node_modules in there (for reasons that will be discussed soon). So for now, the file might look like this:

node_modules
*~
.DS_Store
Note

Entries in the .gitignore file also apply to subdirectories. So if you put *~ in the .gitignore in the project root, all such backup files will be ignored even if they are in subdirectories.

Now we can add all of our existing files. There are many ways to do this in Git. I generally favor git add -A, which is the most sweeping of all the variants. If you are new to Git, I recommend you either add files one by one (git add meadowlark.js, for example) if you want to commit only one or two files, or add all of your changes (including any files you might have deleted) using git add -A. Since we want to add all the work we’ve already done, we’ll use the following:

git add -A
Tip

Newcomers to Git are commonly confused by the git add command; it adds changes, not files. So if you’ve modified meadowlark.js and then type git add meadowlark.js, what you’re really doing is adding the changes you’ve made.

Git has a “staging area,” where changes go when you run git add. So the changes we’ve added haven’t actually been committed yet, but they’re ready to go. To commit the changes, use git commit:

git commit -m "Initial commit."

The -m "Initial commit." allows you to write a message associated with this commit. Git won’t even let you make a commit without a message, and for good reason. Always strive to make meaningful commit messages; they should briefly but clearly describe the work you’ve done.

If You’re Following Along by Using the Official Repository

To get the official repository for this book, run git clone:

git clone https://github.com/EthanRBrown/web-development-with-node-and-express-2e

This repository has a directory for each chapter that contains code samples. For example, the source code for this chapter can be found in the ch04 directory. The code samples in each chapter are generally numbered for ease of reference. Throughout the repository, I have liberally added README.md files containing additional notes about the samples.

Note

In the first version of this book, I took a different approach with the repository, with a linear history as if you were developing an increasingly sophisticated project. While this approach pleasantly mirrored the way a project in the real world might develop, it caused a lot of headache, both for me and for my readers. As npm packages changed, the code samples would change, and short of rewriting the entire history of the repo, there was no good way to update the repository or note the changes in the text. While the chapter-per-directory approach is more artificial, it allows the text to be synced more closely with the repository and also enables easier community contribution.

As this book is updated and improved, the repository will also be updated, and when it is, I will add a version tag so you can check out a version of the repository that corresponds to the version of the book you’re reading now. The current version of the repository is 2.0.0. I am roughly following semantic versioning principles here (more on this later in this chapter); the PATCH increment (the last number) represents minor changes that shouldn’t impact your ability to follow along with the book. That is, if the repo is at version 2.0.15, that should still correspond with this version of the book. However, if the MINOR increment (the second number) is different (2.1.0), that indicates that the content in the companion repo may have diverged from what you’re reading, and you may want to check out a tag starting with 2.0.


Note

If at any point you want to experiment, keep in mind that the tag you have checked out puts you in what Git calls a “detached HEAD” state. While you are free to edit any files, it is unsafe to commit anything you do without creating a branch first. So if you do want to base an experimental branch off of a tag, simply create a new branch and check it out, which you can do with one command: git checkout -b experiment (where experiment is the name of your branch; you can use whatever you want). Then you can safely edit and commit on that branch as much as you want.

npm Packages

The npm packages that your project relies on reside in a directory called node_modules. (It’s unfortunate that this is called node_modules and not npm_packages, as Node modules are a related but different concept.) Feel free to explore that directory to satisfy your curiosity or to debug your program, but you should never modify any code in this directory. In addition to that being bad practice, all of your changes could easily be undone by npm.

If you need to make a modification to a package your project depends on, the correct course of action would be to create your own fork of the package. If you do go this route and feel that your improvements would be useful to others, congratulations: you’re now involved in an open source project! You can submit your changes, and if they meet the project standards, they’ll be included in the official package. Contributing to existing packages and creating customized builds is beyond the scope of this book, but there is a vibrant community of developers out there to help you if you want to contribute to existing packages.

Two of the main purposes of the package.json file are to describe your project and to list its dependencies. Go ahead and look at your package.json file now. You should see something like this (the exact version numbers will probably be different, as these packages get updated often):

{
  "dependencies": {
    "express": "^4.16.4",
    "express-handlebars": "^3.0.0"
  }
}

Right now, our package.json file contains only information about dependencies. The caret (^) in front of a package version indicates that any version at or above the specified version—up to (but not including) the next major version—will work. For example, this package.json indicates that any version of Express at or above 4.16.4 but below 5.0.0 will work, so 4.16.5 and 4.99.9 would both satisfy the range, but 4.16.3 would not, nor would 3.4.7 or 5.0.0. This is the default version specificity when you use npm install, and is generally a pretty safe bet. The consequence of this approach is that if you want to move up to a newer major version, you will have to edit the file to specify the new version. Generally, that’s a good thing because it prevents changes in dependencies from breaking your project without your knowing about it. Version numbers in npm are parsed by a component called semver (for “semantic versioning”). If you want more information about versioning in npm, consult the Semantic Versioning Specification and this article by Tamas Piros.
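To make the caret rule concrete, here’s a rough sketch of the matching logic in plain JavaScript. This is illustrative only—npm’s real semver parser handles prereleases, the special rules for 0.x versions, and far more—but it captures the basic idea for versions with a major version of 1 or greater:

```javascript
// Illustrative only: a simplified check of whether a version
// satisfies a caret range like ^4.16.4. Ignores prereleases
// and the special semver rules for 0.x versions.
const satisfiesCaret = (version, range) => {
  const [vMajor, vMinor, vPatch] = version.split('.').map(Number)
  const [rMajor, rMinor, rPatch] = range.replace('^', '').split('.').map(Number)
  if (vMajor !== rMajor) return false            // different major version: never
  if (vMinor !== rMinor) return vMinor > rMinor  // newer minor: yes; older: no
  return vPatch >= rPatch                        // same minor: patch must be >=
}

console.log(satisfiesCaret('4.17.1', '^4.16.4')) // true
console.log(satisfiesCaret('4.16.3', '^4.16.4')) // false
console.log(satisfiesCaret('5.0.0', '^4.16.4'))  // false
```

In practice you’ll never write this yourself—npm does it for you—but seeing the rule spelled out makes it easier to predict what npm install will actually fetch.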

Note

The Semantic Versioning Specification states that software using semantic versioning must declare a “public API.” I’ve always found this wording to be confusing; what they really mean is “someone must care about interfacing with your software.” If you consider this in the broadest sense, it could really be construed to mean anything. So don’t get hung up on that part of the specification; the important details are in the format.

Since the package.json file lists all the dependencies, the node_modules directory is really a derived artifact. That is, if you were to delete it, all you would have to do to get the project working again would be to run npm install, which will re-create the directory and put all the necessary dependencies in it. It is for this reason that I recommend putting node_modules in your .gitignore file and not including it in source control. However, some people feel that your repository should contain everything necessary to run the project and prefer to keep node_modules in source control. I find that this is “noise” in the repository, and I prefer to omit it.

Note

As of version 5 of npm, an additional file, package-lock.json, will be created. Whereas package.json can be “loose” in its specification of dependency versions (with the ^ and ~ version modifiers), package-lock.json records the exact versions that were installed, which can be helpful if you need to re-create the exact dependency versions in your project. I recommend you check this file into source control and that you don’t modify it by hand. See the package-lock.json documentation for more information.

Project Metadata

The other purpose of the package.json file is to store project metadata, such as the name of the project, authors, license information, and so on. If you use npm init to initially create your package.json file, it will populate the file with the necessary fields for you, and you can update them at any time. If you intend to make your project available on npm or GitHub, this metadata becomes critical. If you would like more information about the fields in package.json, see the package.json documentation.

The other important piece of metadata is the README.md file. This file can be a handy place to describe the overall architecture of the website, as well as any critical information that someone new to the project might need. It is written in a text-based markup format called Markdown. Refer to the Markdown documentation for more information.
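For instance, a package.json with its metadata filled in might look something like the following. The name, description, and author values here are placeholders for illustration—substitute your own—and the fields shown (name, version, description, author, license, main, dependencies) are all standard package.json fields:

```json
{
  "name": "meadowlark",
  "version": "1.0.0",
  "description": "Website for Meadowlark Travel",
  "author": "Your Name <you@example.com>",
  "license": "MIT",
  "main": "meadowlark.js",
  "dependencies": {
    "express": "^4.16.4",
    "express-handlebars": "^3.0.0"
  }
}
```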

Node Modules

As mentioned earlier, Node modules and npm packages are related but different concepts. Node modules, as the name implies, offer a mechanism for modularization and encapsulation. npm packages provide a standardized scheme for storing, versioning, and referencing projects (which are not restricted to modules). For example, we import Express itself as a module in our main application file:

const express = require('express')

require is a Node function for importing a module. By default, Node looks for modules in the directory node_modules (it should be no surprise, then, that there’s an express directory inside of node_modules). However, Node also provides a mechanism for creating your own modules (you should never create your own modules in the node_modules directory). In addition to modules installed into node_modules via a package manager, there are more than 30 “core modules” provided by Node, such as fs, http, os, and path. To see the whole list, see this illuminating Stack Overflow question and refer to the official Node documentation.

Let’s see how we can modularize the fortune cookie functionality we implemented in the previous chapter.

First let’s create a directory to store our modules. You can call it whatever you want, but lib (short for “library”) is a common choice. In that folder, create a file called fortune.js (ch04/lib/fortune.js in the companion repo):

const fortuneCookies = [
  "Conquer your fears or they will conquer you.",
  "Rivers need springs.",
  "Do not fear what you don't know.",
  "You will have a pleasant surprise.",
  "Whenever possible, keep it simple.",
]

exports.getFortune = () => {
  const idx = Math.floor(Math.random()*fortuneCookies.length)
  return fortuneCookies[idx]
}

The important thing to note here is the use of the variable exports, which Node provides to every module (it may look like a global, but it is scoped to the module). If you want something to be visible outside of the module, you have to add it to exports. In this example, the function getFortune will be available from outside this module, but our array fortuneCookies will be completely hidden. This is a good thing: encapsulation makes for more robust, less error-prone code.

Note

There are several ways to export functionality from a module. We will be covering different methods throughout the book and summarizing them in Chapter 22.

Now in meadowlark.js, we can remove the fortuneCookies array (though there would be no harm in leaving it; it can’t conflict in any way with the array of the same name defined in lib/fortune.js). It is traditional (but not required) to specify imports at the top of the file, so at the top of the meadowlark.js file, add the following line (ch04/meadowlark.js in the companion repo):

const fortune = require('./lib/fortune')

Note that we prefix our module name with ./. This signals to Node that it should not look for the module in the node_modules directory; if we omitted that prefix, this would fail.

Now in our route for the About page, we can utilize the getFortune method from our module:

app.get('/about', (req, res) => {
  res.render('about', { fortune: fortune.getFortune() } )
})

If you’re following along, let’s commit those changes:

git add -A
git commit -m "Moved 'fortune cookie' into module."

You will find modules to be a powerful and easy way to encapsulate functionality, which will improve the overall design and maintainability of your project, as well as make testing easier. Refer to the official Node module documentation for more information.

Note

Node modules are sometimes called CommonJS (CJS) modules, in reference to an older specification that Node took inspiration from. The JavaScript language is adopting an official module mechanism, called ECMAScript Modules (ESM). If you’ve been writing JavaScript with React or another modern frontend framework, you may already be familiar with ESM, which uses import and export (instead of exports, module.exports, and require). For more information, see Dr. Axel Rauschmayer’s blog post “ECMAScript 6 modules: the final syntax”.

Conclusion

Now that we’re armed with some more information about Git, npm, and modules, we’re ready to discuss how we can produce a better product by employing good quality assurance (QA) practices in our coding.

I encourage you to keep in mind the following lessons from this chapter:

  • Version control makes the software development process safer and more predictable, and I encourage you to use it even for small projects; it builds good habits!

  • Modularization is an important technique for managing the complexity of software. In addition to the rich ecosystem of modules others have developed and made available through npm, you can package your own code in modules to better organize your project.

  • Node modules (also called CJS) use a different syntax than ECMAScript modules (ESM), and you may have to switch between the two syntaxes when you go between frontend and backend code. It’s a good idea to be familiar with both.
