Setting Up Node.js

Before getting started with Node.js, you must set up your development environment. While Node.js itself is easy to set up, there are several considerations, including whether to install Node.js using a package management system, how to satisfy the requirements for installing native-code Node.js packages, and which editor works best with Node.js. In the following chapters, we'll use this environment for development and non-production deployment.

In this chapter, we will cover the following topics:

  • How to install Node.js from source and prepackaged binaries on Linux, macOS, or Windows
  • How to install node package manager (npm) and some other popular tools 
  • The Node.js module system
  • Node.js and JavaScript language improvements from the ECMAScript committee

System requirements

Node.js runs on POSIX-like OSes, various UNIX derivatives (Solaris, for example), and UNIX-workalike OSes (such as Linux, macOS, and so on), as well as on Microsoft Windows. It can run on machines both large and small, including tiny ARM devices, such as Raspberry Pi—a microscale embeddable computer for DIY software/hardware projects.

Node.js is now available via package management systems, limiting the need to compile and install from the source.

Because many Node.js packages are written in C or C++, you must have a C compiler (such as GCC), Python 2.7 (or later), and the node-gyp package. Since Python 2 reached end-of-life at the start of 2020, the Node.js community is rewriting its tools for Python 3 compatibility. If you plan on using encryption in your networking code, you will also need the OpenSSL cryptographic library. Modern UNIX derivatives almost certainly come with these, and Node.js's configure script, used when installing from source, will detect their presence. If you need to install them, Python is available at http://python.org and OpenSSL is available at http://openssl.org.

Now that we have covered the requirements for running Node.js, let's learn how to install it.

Installing Node.js using package managers

The preferred method for installing Node.js is to use the versions available in package managers, such as apt-get or MacPorts. Package managers make your life easier by helping to maintain the current version of the software on your computer, ensuring that dependent packages are updated as necessary, all by typing a simple command, such as apt-get update. Let's go over installation from a package management system first.

For the official instructions on installing from package managers, go to https://nodejs.org/en/download/package-manager/.

Installing Node.js on macOS with MacPorts

The MacPorts project (http://www.macports.org/) has been packaging a long list of open source software for macOS for years, and they have packaged Node.js. The commands it manages are installed in /opt/local/bin by default. After you have installed MacPorts using the installer on its website, installing Node.js is simple, and the Node.js binaries become available in that directory:

$ port search nodejs npm
...
nodejs8 @8.16.2 (devel, net)
Evented I/O for V8 JavaScript

nodejs10 @10.16.3 (devel, net)
Evented I/O for V8 JavaScript

nodejs12 @12.13.0 (devel, net)
Evented I/O for V8 JavaScript

nodejs14 @14.0.0 (devel, net)
Evented I/O for V8 JavaScript
...


npm6 @6.14.4 (devel)
node package manager

$ sudo port install nodejs14 npm6
.. long log of downloading and installing prerequisites and Node
$ which node
/opt/local/bin/node
$ node --version
v14.0.0

If you have followed the directions for setting up MacPorts, the MacPorts directory is already in your PATH environment variable, so running the node, npm, or npx commands is simple. This proves that Node.js has been installed and that the installed version matches what you asked for.

MacPorts isn't the only tool for managing open source software packages on macOS.

Installing Node.js on macOS with Homebrew

Homebrew is another open source software package manager for macOS, which some say is the perfect replacement for MacPorts. It is available through their home page at http://brew.sh/. After installing Homebrew using the instructions on their website and ensuring that it is correctly set up, use the following code:

$ brew update
... long wait and lots of output
$ brew search node
==> Searching local taps...
node libbitcoin-node node-build node@8 nodeenv
leafnode llnode node node@10 node@12 nodebrew nodenv
==> Searching taps on GitHub...
caskroom/cask/node-profiler
==> Searching blacklisted, migrated and deleted formulae...

Then, install like this:

$ brew install node
...
==> Installing node
==> Downloading https://homebrew.bintray.com/bottles/node-14.0.0_1.high_sierra.bottle.tar.gz
########################... 100.0%
==> Pouring node-14.0.0_1.high_sierra.bottle.tar.gz
==> Caveats
Bash completion has been installed to:
/usr/local/etc/bash_completion.d
==> Summary
/usr/local/Cellar/node/14.0.0_1: 4,660 files, 60MB

Like MacPorts, Homebrew installs commands in a public directory, which defaults to /usr/local/bin. If you have followed the Homebrew instructions to add that directory to your PATH variable, run the Node.js command as follows:

$ node --version
v14.0.0

This proves that Node.js has been installed and that the installed version matches what you asked for.

Of course, macOS is only one of many operating systems we might use.

Installing Node.js on Linux, *BSD, or Windows from package management systems

Node.js is now available through most package management systems. Instructions on the Node.js website currently list packaged versions of Node.js for a long list of Linux distributions, as well as FreeBSD, OpenBSD, NetBSD, macOS, and even Windows. Visit https://nodejs.org/en/download/package-manager/ for more information.

For example, on Debian and other Debian-based Linux distributions (such as Ubuntu), use the following commands:

$ curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -
[sudo] password for david:

## Installing the NodeSource Node.js 14.x repo...


## Populating apt-get cache...

... much apt-get output
## Run `sudo apt-get install -y nodejs` to install Node.js 14.x and npm
## You may also need development tools to build native addons:
sudo apt-get install gcc g++ make
$ sudo apt-get install -y nodejs
... Much output
$ sudo apt-get install -y gcc g++ make build-essential

... Much output

This adds the NodeSource APT repository to the system, updates the package data, and prepares the system so that you can install Node.js packages. It also instructs us on how to install Node.js and the required compiler and developer tools.

To download other Node.js versions (this example shows version 14.x), modify the URL to suit.

The commands are installed in /usr/bin, and we can test whether the version downloaded is what we asked for:

$ node --version
v14.0.0

Windows is starting to become a place where Unix/Linux geeks can work, thanks to a new tool called the Windows Subsystem for Linux (WSL).

Installing Node.js in WSL

WSL lets you install Ubuntu, openSUSE, or SUSE Linux Enterprise on Windows. All three are available via the store built into Windows 10. You may need to update Windows for the installation to work. For the best experience, install WSL2, a major overhaul of WSL that offers improved integration between Windows and Linux.

Once installed, the Linux-specific instructions will install Node.js in the Linux subsystem.

The process may require elevated privileges on Windows.

Opening an administrator-privileged PowerShell on Windows

Some of the commands you'll run while installing tools on Windows must be executed in a PowerShell window with elevated privileges. We mention this because enabling WSL requires running a command in such a window.

The process is simple:

  1. In the Start menu, enter PowerShell in the application's search box. The resulting menu will list PowerShell.
  2. Right-click the PowerShell entry.
  3. The context menu that comes up will have an entry called Run as Administrator. Click on that.

The resulting command window will have administrator privileges and the title bar will say Administrator: Windows PowerShell.

In some cases, you will be unable to use Node.js from package management systems.

Installing the Node.js distribution from nodejs.org

The https://nodejs.org/en/ website offers prebuilt binaries for Windows, macOS, Linux, and Solaris. We can simply go to the website, click on the Install button, and run the installer. For systems with package managers, such as the ones we've just discussed, it's better to use the package management system because it's easier to stay up to date with the latest version. However, that doesn't suit everyone, for the following reasons:

  • Some will prefer to install a binary rather than deal with the package manager.
  • Their chosen system doesn't have a package management system.
  • The Node.js implementation in their package management system is out of date.

Simply go to the Node.js website. The page does its best to determine your OS and supplies the appropriate download. If you need something different, click on the DOWNLOADS link in the header for all possible downloads.

For macOS, the installer is a PKG file that gives the typical installation process. For Windows, the installer simply takes you through the typical install wizard process.

Once you are finished with the installer, you have command-line tools, such as node and npm, which you can run Node.js programs with. On Windows, you're supplied with a version of the Windows command shell preconfigured to work nicely with Node.js.

As you have just learned, most of us will be perfectly satisfied with installing prebuilt packages. However, there are times when we must install Node.js from a source.

Installing from the source on POSIX-like systems

Installing the prepackaged Node.js distributions is the preferred installation method. However, installing Node.js from a source is desirable in a few situations:

  • It can let you optimize the compiler settings as desired.
  • It can let you cross-compile, say, for an embedded ARM system.
  • You might need to keep multiple Node.js builds for testing.
  • You might be working on Node.js itself.

Now that you have a high-level view, let's get our hands dirty by mucking around in some build scripts. The general process follows the usual configure, make, and make install routine that you may have already performed with other open source software packages. If not, don't worry, we'll guide you through the process.

The official installation instructions are in README.md, contained in the source distribution at https://github.com/nodejs/node/blob/master/README.md.

Installing prerequisites

There are three prerequisites: a C compiler, Python, and the OpenSSL libraries. The Node.js compilation process checks for their presence and will fail if the C compiler or Python is not present. These sorts of commands will check for their presence:

$ cc --version
Apple LLVM version 10.0.0 (clang-1000.11.45.5)
Target: x86_64-apple-darwin17.7.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
$ python
Python 2.7.16 (default, Oct 16 2019, 00:35:27)
[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.31)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
Go to https://github.com/nodejs/node/blob/master/BUILDING.md for details on the requirements.

The specific method for installing these depends on your OS.

The Node.js build tools are in the process of being updated to support Python 3.x. Since Python 2.x reached end-of-life at the start of 2020, it is recommended that you use Python 3.x.

Before we can compile the Node.js source, we must have the correct tools installed, and on macOS there are a couple of special considerations.

Installing developer tools on macOS

Developer tools (such as GCC) are an optional installation on macOS. Fortunately, they're easy to acquire.

You start with Xcode, which is available for free through the Mac App Store. Simply search for Xcode and click on the Get button. Once you have Xcode installed, open a Terminal window and type the following:

$ xcode-select --install

This installs the Xcode command-line tools.

For additional information, visit http://osxdaily.com/2014/02/12/install-command-line-tools-mac-os-x/.

Now that we have the required tools installed, we can proceed with compiling the Node.js source.

Installing from the source for all POSIX-like systems

Compiling Node.js from the source follows this familiar process:

  1. Download the source from http://nodejs.org/download.
  2. Configure the source for building using ./configure.
  3. Run make, then make install.

The source bundle can be downloaded through your browser or as follows, substituting your preferred version:

$ mkdir src
$ cd src
$ wget https://nodejs.org/download/release/v14.0.0/node-v14.0.0.tar.gz
$ tar xvfz node-v14.0.0.tar.gz
$ cd node-v14.0.0

Now, we configure the source so that it can be built. This is just like with many other open source packages and there is a long list of options to customize the build:

$ ./configure --help

To cause the installation to land in your home directory, run it this way:

$ ./configure --prefix=$HOME/node/14.0.0
..output from configure  

If you're going to install multiple Node.js versions side by side, it's useful to put the version number in the path like this. That way, each version will sit in a separate directory. It will then be a simple matter of switching between Node.js versions by changing the PATH variable appropriately:

# On bash shell:
$ export PATH=${HOME}/node/VERSION-NUMBER/bin:${PATH}
# On csh
$ setenv PATH ${HOME}/node/VERSION-NUMBER/bin:${PATH}

A simpler way to install multiple Node.js versions is by using the nvm script, which will be described later.

If you want to install Node.js in a system-wide directory, simply leave off the --prefix option and it will default to installing in /usr/local.

After a moment, it'll stop and will likely have successfully configured the source tree for installation in your chosen directory. If this doesn't succeed, the error messages that are printed will describe what needs to be fixed. Once the configure script is satisfied, you can move on to the next step.

With the configure script satisfied, you compile the software:

$ make
.. a long log of compiler output is printed
$ make install

If you are installing on a system-wide directory, perform the last step this way instead:

$ make
$ sudo make install

Once installed, you should make sure that you add the installation directory to your PATH variable, as follows:

$ echo 'export PATH=$HOME/node/14.0.0/bin:${PATH}' >>~/.bashrc
$ . ~/.bashrc  

Alternatively, for csh users, use this syntax to make an exported environment variable:

$ echo 'setenv PATH $HOME/node/14.0.0/bin:${PATH}' >>~/.cshrc
$ source ~/.cshrc  

When the build is installed, it creates a directory structure, as follows:

$ ls ~/node/14.0.0/
bin   include   lib   share
$ ls ~/node/14.0.0/bin 
node npm npx

Now that we've learned how to install Node.js from the source on UNIX-like systems, we get to do the same on Windows.

Installing from the source on Windows

The BUILDING.md document referenced previously has the instructions. You can use the build tools from Visual Studio, or the full Visual Studio 2017 or 2019 product, along with a few additional tools listed in that document. Then, run the included .vcbuild script to perform the build.

We've learned how to install one Node.js instance, so let's now take it to the next level by installing multiple instances.

Installing multiple Node.js instances with nvm

Normally, you wouldn't install multiple versions of Node.js—doing so adds complexity to your system. But if you are hacking on Node.js itself or testing your software against different Node.js releases, you may want to have multiple Node.js installations. The method to do so is a simple variation on what we've already discussed.

Earlier, while discussing building Node.js from the source, we noted that you can install multiple Node.js instances in separate directories. It's only necessary to build from the source if you need a customized Node.js build; most folks will be satisfied with prebuilt Node.js binaries, which can also be installed in separate directories.

Switching between Node.js versions is simply a matter of changing the PATH variable (on POSIX systems), as in the following code, using the directory where you installed Node.js:

$ export PATH=/usr/local/node/VERSION-NUMBER/bin:${PATH}  

It starts to get a little tedious to maintain this after a while. For each release, you have to set up Node.js, npm, and any third-party modules you desire in your Node.js installation. Also, the command shown to change PATH is not quite optimal. Inventive programmers have created several version managers, such as nvm, to simplify managing multiple Node.js/npm releases and to provide commands that change PATH the smart way.

These version managers maintain multiple, simultaneous versions of Node.js and let you easily switch between versions. Installation instructions are available on their respective websites.

For example, with nvm, you can run commands such as these:

$ nvm ls
...
v6.4.0
...
v6.11.2
v8.9.3
v10.15.2
...
v12.13.1
...
v14.0.0
-> system
default -> 12.9.1 (-> v12.9.1)
node -> stable (-> v12.13.1) (default)
stable -> 12.13 (-> v12.13.1) (default)
$ nvm use 10
Now using node v10.15.2 (npm v6.4.1)
$ node --version
v10.15.2
$ nvm use 4.9
Now using node v4.9.1 (npm v2.15.11)
$ node --version
v4.9.1

$ nvm install 14
Downloading and installing node v14.0.0...
Downloading https://nodejs.org/dist/v14.0.0/node-v14.0.0-darwin-x64.tar.xz...
############... 100.0%
Computing checksum with shasum -a 256
Checksums matched!
Now using node v14.0.0 (npm v6.14.4)
$ node --version
v14.0.0
$ which node
/Users/david/.nvm/versions/node/v14.0.0/bin/node
$ /usr/local/bin/node --version
v13.13.0
$ /opt/local/bin/node --version
v13.13.0

In this example, we first listed the available versions. Then, we demonstrated how to switch between Node.js versions, verifying the version changed each time. We also installed and used a new version using nvm. Finally, we showed the directory where nvm installs Node.js packages versus Node.js versions that are installed using MacPorts or Homebrew.

This demonstrates that you can have Node.js installed system-wide, keep multiple private Node.js versions managed by nvm, and switch between them as needed. When new Node.js versions are released, they are simple to install with nvm, even if the official package manager for your OS hasn't yet updated its packages.

Installing nvm on Windows

Unfortunately, nvm doesn't support Windows. Fortunately, a couple of Windows-specific clones of the nvm concept exist, such as nvm-windows.

Another route is to use WSL. Because in WSL you're interacting with a Linux command line, you can use nvm itself. But let's stay focused on what you can do in Windows.

Many of the examples in this book were tested using the nvm-windows application. There are slight behavior differences but it acts largely the same as nvm for Linux and macOS. The biggest change is the version number specifier in the nvm use and nvm install commands.

With nvm for Linux and macOS, you can type a simple version number, such as nvm use 8, and it will automatically substitute the latest release of the named Node.js version. With nvm-windows, the same command acts as if you typed nvm use 8.0.0. In other words, with nvm-windows, you must use the exact version number. Fortunately, the list of supported versions is easily available using the nvm list available command.

Using a tool such as nvm simplifies the process of testing a Node.js application against multiple Node.js versions.

Now that we can install Node.js, we need to make sure we are installing any Node.js module that we want to use. This requires having build tools installed on our computer.

Requirements for installing native code modules

While we won't discuss native code module development in this book, we do need to make sure that they can be built. Some modules in the npm repository are native code and they must be compiled with a C or C++ compiler to build the corresponding .node files (the .node extension is used for binary native code modules).

The module will often describe itself as a wrapper for some other library. For example, the libxslt and libxmljs modules are wrappers around the C/C++ libraries of the same name. The module includes the C/C++ source code and when installed, a script is automatically run to do the compilation with node-gyp.

The node-gyp tool is a cross-platform command-line tool written in Node.js for compiling native add-on modules for Node.js. We've mentioned native code modules several times and it is this tool that compiles them for use with Node.js.

You can easily see this in action by running these commands:

$ mkdir temp
$ cd temp
$ npm install libxmljs libxslt  

This is done in a temporary directory, so you can delete it afterward. If your system does not have the tools installed to compile native code modules, you'll see error messages. Otherwise, you'll see a node-gyp execution in the output, followed by many lines of text obviously related to compiling C/C++ files.

The node-gyp tool has prerequisites similar to those for compiling Node.js from the source—namely, a C/C++ compiler, a Python environment, and other build tools, such as Git. For Unix, macOS, and Linux systems, those are easy to come by. For Windows, the node-gyp documentation describes which tools to install.

Normally, you don't need to worry about installing node-gyp. That's because it is installed behind the scenes as part of npm. That's done so that npm can automatically build native code modules.

Its GitHub repository contains documentation; go to https://github.com/nodejs/node-gyp.

Reading the node-gyp documentation in its repository will give you a clearer understanding of the compilation prerequisites discussed previously and of developing native code modules.

This is an example of a non-explicit dependency. It is best to explicitly declare all the things that a software package depends on. In Node.js, dependencies are declared in package.json so that the package manager (npm or yarn) can download and set up everything. But these compiler tools are set up by the OS package management system, which is outside the control of npm or yarn. Therefore, we cannot explicitly declare those dependencies.

We've just learned that Node.js supports modules written not just in JavaScript, but also in other programming languages. We've also learned how to support the installation of such modules. Next, we will learn about Node.js version numbers.

Choosing Node.js versions to use and the version policy

We just threw around so many different Node.js version numbers in the previous section that you may have become confused about which version to use. This book is targeted at Node.js version 14.x and it's expected that everything we'll cover is compatible with Node.js 10.x and any subsequent release.

Starting with Node.js 4.x, the Node.js team has followed a dual-track approach. The even-numbered releases (4.x, 6.x, 8.x, and so on) are what they're calling long term support (LTS), while the odd-numbered releases (5.x, 7.x, 9.x, and so on) are where current new feature development occurs. While the development branch is kept stable, the LTS releases are positioned as being for production use and will receive updates for several years.

At the time of writing, Node.js 12.x is the current LTS release; Node.js 14.x has been released and will eventually become the LTS release.

A major impact of each new Node.js release, beyond the usual performance improvements and bug fixes, is bringing in the latest V8 JavaScript engine release. In turn, this means bringing in more of the ES2015/2016/2017 features as the V8 team implements them. In Node.js 8.x, async/await functions arrived, and in Node.js 10.x, support for the standard ES6 module format arrived. In Node.js 14.x, that module format is fully supported.

A practical consideration is whether a new Node.js release will break your code. New language features are always being added as V8 catches up with ECMAScript, and the Node.js team sometimes makes breaking changes to the Node.js API. If you've tested on one Node.js version, will your code work on an earlier version? Will a Node.js change break some assumptions we made?

npm gives us a way to ensure that our packages execute on a compatible Node.js version: we can specify the compatible Node.js versions for a package in the package.json file (which we'll explore in Chapter 3, Exploring Node.js Modules).

We can add an entry to package.json as follows:

"engines": {
  "node": ">=8.x"
}

This means exactly what it implies—that the given package is compatible with Node.js version 8.x or later.
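Package managers can use the engines declaration to warn about or refuse an incompatible installation. As an additional safeguard, a script can also check the running version at startup. The following is a minimal sketch, not taken from the book's examples; the threshold and message are illustrative:

// A hypothetical startup guard that complements the engines field.
// process.versions.node holds the running version, such as "14.0.0".
const [major] = process.versions.node.split('.').map(Number);

if (major < 8) {
  console.error(`Node.js 8.x or later is required, found ${process.version}`);
  process.exit(1);
}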

Of course, your development environment(s) could have several Node.js versions installed. You'll need the version your software is declared to support, plus any later versions you wish to evaluate.

We have just learned how the Node.js community manages releases and version numbers. Our next step is to discuss which editor to use.

Choosing editors and debuggers for Node.js

Since Node.js code is JavaScript, any JavaScript-aware editor will be useful. Unlike some other languages that are so complex that an IDE with code completion is a necessity, a simple programming editor is perfectly sufficient for Node.js development.

Two editors are worth shouting out because they are written in Node.js: Atom and Microsoft Visual Studio Code. 

Atom (https://atom.io/) describes itself as a hackable editor for the 21st century. It is extendable by writing Node.js modules using the Atom API, and its configuration files are easily editable. In other words, it's hackable in the same way plenty of other editors have been—going back to Emacs—meaning you write a software module to add capabilities to the editor. The Electron framework was invented in order to build Atom, and it is a super-easy way of building desktop applications using Node.js.

Microsoft Visual Studio Code (https://code.visualstudio.com/) is a hackable editor (well, the home page says extensible and customizable, which means the same thing) that is also open source and implemented in Electron. However, it's not a hollow me-too editor, copying Atom while adding nothing of its own. Instead, Visual Studio Code is a solid programmer's editor in its own right, bringing interesting functionality to the table.

As for debuggers, there are several interesting choices. Starting with Node.js 6.3, the inspector protocol has made it possible to use the Google Chrome debugger. Visual Studio Code has a built-in debugger that also uses the inspector protocol.

For a full list of debugging options and tools, see https://nodejs.org/en/docs/guides/debugging-getting-started/.

Another task related to the editor is adding extensions to help with the editing experience. Most programmer-oriented editors allow you to extend the behavior and assist with writing the code. A trivial example is syntax coloring for JavaScript, CSS, HTML, and so on. Code completion extensions are where the editor helps you write the code. Some extensions scan code for common errors; often these extensions use the word lint. Some extensions help to run unit test frameworks. Since there are so many editors available, we cannot provide specific suggestions.  

For some, the choice of programming editor is a serious matter defended with fervor, so we carefully recommend that you use whatever editor you prefer, as long as it helps you edit JavaScript code. Next, we will learn about the Node.js commands and a little about running Node.js scripts.

Running and testing commands

Now that you've installed Node.js, we want to do two things—verify that the installation was successful and familiarize ourselves with the Node.js command-line tools and running simple scripts with Node.js. We'll also touch again on async functions and look at a simple example HTTP server. We'll finish off with the npm and npx command-line tools.

Using Node.js's command-line tools

The basic installation of Node.js includes two commands: node and npm. We've already seen the node command in action. It's used either for running command-line scripts or server processes. The other, npm, is a package manager for Node.js.

The easiest way to verify that your Node.js installation works is also the best way to get help with Node.js. Type the following command:

$ node --help
Usage: node [options] [ -e script | script.js | - ] [arguments]
node inspect script.js [arguments]

Options:
-v, --version print Node.js version
-e, --eval script evaluate script
-p, --print evaluate script and print result
-c, --check syntax check script without executing
-i, --interactive always enter the REPL even if stdin
does not appear to be a terminal
-r, --require module to preload (option can be repeated)
- script read from stdin (default; interactive mode if a tty)

... many more options

Environment variables:
NODE_DEBUG ','-separated list of core modules that should print debug information
NODE_DEBUG_NATIVE ','-separated list of C++ core debug categories that should print debug output
NODE_DISABLE_COLORS set to 1 to disable colors in the REPL
NODE_EXTRA_CA_CERTS path to additional CA certificates file
NODE_NO_WARNINGS set to 1 to silence process warnings
NODE_OPTIONS set CLI options in the environment via a space-separated list
NODE_PATH ':'-separated list of directories prefixed to the module search path
... many more environment variables

That was a lot of output but don't study it too closely. The key takeaway is that node --help provides a lot of useful information.

Note that there are options for both Node.js and V8 (not shown in the previous command line). Remember that Node.js is built on top of V8, which has its own universe of options that largely focus on details of bytecode compilation or the garbage collection and heap algorithms. Enter node --v8-options to see the full list of these options.

On the command line, you can specify options, a single script file, and a list of arguments to that script. We'll discuss script arguments further in the following section, Running a simple script with Node.js.

Running Node.js with no arguments drops you in an interactive JavaScript shell:

$ node
> console.log('Hello, world!');
Hello, world!
undefined  

Any code you can write in a Node.js script can be written here. The command interpreter gives a good terminal-oriented user experience and is useful for interactively playing with your code. You do play with your code, don't you? Good!

Running a simple script with Node.js

Now, let's look at how to run scripts with Node.js. It's quite simple; let's start by referring to the help message shown previously. The command-line pattern is just a script filename and some script arguments, which should be familiar to anyone who has written scripts in other languages.

Creating and editing Node.js scripts can be done with any text editor that deals with plain text files, such as VI/VIM, Emacs, Notepad++, Atom, Visual Studio Code, Jedit, BB Edit, TextMate, or Komodo. It's helpful if it's a programmer-oriented editor, if only for the syntax coloring.

For this and other examples in this book, it doesn't truly matter where you put the files. However, for the sake of neatness, you can start by making a directory named node-web-dev in the home directory of your computer and inside that, creating one directory per chapter (for example, chap02 and chap03).

First, create a text file named ls.js with the following content:

const fs = require('fs').promises;

async function listFiles() {
  try {
    const files = await fs.readdir('.');
    for (const file of files) {
      console.log(file);
    }
  } catch (err) {
    console.error(err);
  }
}

listFiles();

Next, run it by typing the following command:

$ node ls.js
ls.js

This is a pale and cheap imitation of the Unix ls command (as if you couldn't figure that out from the name!). The readdir function is a close analog to the Unix readdir system call used to list the files in a directory. On Unix/Linux systems, we can run the following command to learn more:

$  man 3 readdir

The man command, of course, lets you read manual pages and section 3 covers the C library.

Inside the function body, we read the directory and print its contents. Using require('fs').promises gives us a version of the fs module (filesystem functions) that returns Promises; it, therefore, works well in an async function. Likewise, the ES2015 for..of loop construct lets us loop over entries in an array in a way that works well in async functions.

By default, the fs module functions use the callback paradigm that was originally created for Node.js, as do most Node.js modules. Within async functions, it is more convenient if functions return Promises instead, so that the await keyword can be used. The util module provides a function, util.promisify, which generates a wrapper for old-style callback-oriented functions so that they return a Promise.
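To make that concrete, here is a brief sketch (not one of the chapter's scripts) that wraps the callback-style fs.readdir with util.promisify so that it can be awaited:

const util = require('util');
const fs = require('fs');

// util.promisify wraps the error-first-callback fs.readdir so that it
// returns a Promise, which can then be awaited in an async function.
const readdir = util.promisify(fs.readdir);

async function listFiles(dir) {
  const files = await readdir(dir);
  for (const file of files) {
    console.log(file);
  }
}

listFiles('.').catch(err => { console.error(err); });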

This script is hardcoded to list files in the current directory. The real ls command takes a directory name, so let's modify the script a little.

Command-line arguments land in a global array named process.argv. Therefore, we can modify ls.js, copying it as ls2.js (as follows) to see how this array works:

const fs = require('fs').promises;

async function listFiles() {
  try {
    var dir = '.';
    if (process.argv[2]) dir = process.argv[2];
    const files = await fs.readdir(dir);
    for (let fn of files) {
      console.log(fn);
    }
  } catch (err) {
    console.error(err);
  }
}

listFiles();

You can run it as follows:

$ pwd
/Users/David/chap02
$ node ls2 ..
chap01
chap02
$ node ls2
app.js
ls.js
ls2.js

We simply check whether a command-line argument is present with if (process.argv[2]). If it is, we override the value of the dir variable with dir = process.argv[2], and we then use that as the readdir argument:

$ node ls2.js /nonexistent
{ Error: ENOENT: no such file or directory, scandir '/nonexistent'
errno: -2,
code: 'ENOENT',
syscall: 'scandir',
path: '/nonexistent' }

If you give it a non-existent directory pathname, an error will be thrown and printed using the catch clause. 

Writing inline async arrow functions

There is a different way to write these examples that some feel is more concise. These examples were written as a regular function—with the function keyword—but with the async keyword in front. One of the features that came with ES2015 is the arrow function, which lets us streamline the code a little bit.

Combined with the async keyword, an async arrow function looks like this:

async () => {
  // function body
}

You can use this anywhere; for example, the function can be assigned to a variable or it can be passed as a callback to another function. When used with the async keyword, the body of the arrow function has all of the async function's behavior.

For the purpose of these examples, an async arrow function can be wrapped for immediate execution:

(async () => {
  // function body
})()

The final parenthesis causes the inline function to immediately be invoked.

Then, because async functions return a Promise, it is necessary to add a .catch block to catch errors. With all that, the example looks as follows:

const fs = require('fs').promises;

(async () => {
  var dir = '.';
  if (process.argv[2]) dir = process.argv[2];
  const files = await fs.readdir(dir);
  for (let fn of files) {
    console.log(fn);
  }
})().catch(err => { console.error(err); });

Whether this or the previous style is preferable is perhaps a matter of taste. However, you will find both styles in use and it is necessary to understand how both work.

When invoking an async function at the top level of a script, it is necessary to capture any errors and report them. Failure to catch and report errors can lead to mysterious problems that are hard to pin down. For the original version of this example, the errors were explicitly caught with a try/catch block. In this version, we catch errors using a .catch block.
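One common way to structure this, shown here as a sketch (the main function name is our own choice, not part of the example files), is to funnel the top-level logic through a single async function and attach one .catch handler that reports the error and sets a nonzero exit code:

async function main() {
  // application logic that may await, throw, or reject goes here
}

main().catch(err => {
  // Report the failure and set a nonzero exit code so that shells and
  // CI systems can see that the script failed.
  console.error(err);
  process.exitCode = 1;
});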

Before we had async functions, we had the Promise object and before that, we had the callback paradigm. All three paradigms are still used in Node.js, meaning you'll need to understand each.

Converting to async functions and the Promise paradigm

In the previous section, we discussed util.promisify and its ability to convert a callback-oriented function into one that returns a Promise. The latter plays well with async functions and therefore, it is preferable for functions to return a Promise.

To be more precise, util.promisify is to be given a function that uses the error-first-callback paradigm. The last argument of such functions is a callback function, whose first argument is interpreted as an error indicator, hence the phrase error-first-callback. What util.promisify returns is another function that returns a Promise. 

The Promise serves the same purpose as the error-first callback. If an error is indicated, the Promise is rejected, while if success is indicated, the Promise resolves with the result. As we see in these examples, the Promise is handled very nicely within an async function.

The Node.js ecosystem has a large body of functions that use the error-first-callback convention. The community has begun a conversion process where functions will return a Promise and possibly also take an error-first callback for API compatibility.

One of the new features in Node.js 10 is an example of such a conversion. Within the fs module is a submodule, named fs.promises, with the same API but producing Promise objects. We wrote the previous examples using that API.

Another choice is a third-party module, fs-extra. This module has an extended API beyond the standard fs module: its functions return a Promise if no callback function is provided, or else they invoke the callback. In addition, it includes several useful functions that are not in fs.

In the rest of this book, we will often use fs-extra because of those additional functions. For documentation on the module, go to https://www.npmjs.com/package/fs-extra.
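As a small sketch of the fs-extra style (the file and directory names here are hypothetical), its added functions, such as ensureDir and copy, return Promises when no callback is supplied and can therefore be awaited directly:

const fs = require('fs-extra');

async function backupReport() {
  // ensureDir and copy are fs-extra additions; with no callback supplied,
  // they return Promises that can be awaited.
  await fs.ensureDir('backup');
  await fs.copy('report.txt', 'backup/report.txt');
}

backupReport().catch(err => { console.error(err); });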

The util module has another function, util.callbackify, which does as the name implies—it converts a function that returns a Promise into one that uses a callback function.
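For example, a Promise-returning function such as fs.promises.readdir can be handed to util.callbackify to produce an error-first-callback version; this is a minimal sketch rather than one of the chapter's scripts:

const util = require('util');
const fs = require('fs').promises;

// util.callbackify wraps the Promise-returning fs.readdir in an
// error-first-callback interface for use by older calling code.
const readdirCallback = util.callbackify(fs.readdir);

readdirCallback('.', (err, files) => {
  if (err) console.error(err);
  else console.log(files);
});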

Now that we've seen how to run a simple script, let's look at a simple HTTP server.

Launching a server with Node.js

Many scripts that you'll run are server processes; we'll be running lots of these scripts later on. Since we're still trying to verify the installation and get you familiar with using Node.js, we want to run a simple HTTP server. Let's borrow the simple server script on the Node.js home page (http://nodejs.org).

Create a file named app.js, containing the following:

const http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello, World!\n');
}).listen(8124, '127.0.0.1');
console.log('Server running at http://127.0.0.1:8124');

Run it as follows:

$ node app.js
Server running at http://127.0.0.1:8124  

This is the simplest of web servers you can build with Node.js. If you're interested in how it works, flip forward to Chapter 4, HTTP Servers and Clients, Chapter 5, Your First Express Application, and Chapter 6, Implementing the Mobile-First Paradigm. For now, just type http://127.0.0.1:8124 in your browser to see the Hello, World! message.

A question to ponder is why this script doesn't exit, while ls.js did. In both cases, execution reaches the end of the script, yet the Node.js process keeps running for app.js and exits for ls.js.

The reason is the presence of active event listeners. Node.js always starts up an event loop, and in app.js, the listen function creates an event listener that implements the HTTP protocol. This listener keeps app.js running until you do something such as pressing Ctrl + C in the terminal window. In ls.js, nothing creates a long-running listener, so when ls.js reaches the end of its script, the node process exits.
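You can see the same effect without an HTTP server. In this small sketch (not part of the chapter's example files), a pending timer is an active handle, so the process stays alive until the timer fires and then exits:

// The pending timer keeps the event loop busy, so the process does not
// exit when it reaches the end of the script.
setTimeout(() => {
  console.log('Timer fired; nothing left to do, so the process exits.');
}, 3000);

console.log('End of script reached, but the process keeps running.');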

To carry out more complex tasks with Node.js, we must use third-party modules. The npm repository is the place to go.

Using npm, the Node.js package manager

Node.js, being a JavaScript interpreter with a few interesting asynchronous I/O libraries, is by itself a pretty basic system. One of the things that makes Node.js interesting is the rapidly growing ecosystem of third-party modules for Node.js.

At the center of that ecosystem is the npm module repository. While Node.js modules can be downloaded as source and assembled manually for use with Node.js programs, that's tedious to do and it's difficult to implement a repeatable build process. npm gives us a simpler method; npm is the de facto standard package manager for Node.js and it greatly simplifies downloading and using these modules. We will talk about npm at length in the next chapter.

The sharp-eyed among you will have noticed that npm is already installed via all the installation methods discussed previously. In the past, npm was installed separately, but today it is bundled with Node.js.

Now that we have npm installed, let's take it for a quick spin. The hexy program is a utility used for printing hex dumps of files. That's a very 1970s thing to do, but it is still extremely useful. It serves our purpose right now as it gives us something to quickly install and try out:

$ npm install -g hexy
/opt/local/bin/hexy -> /opt/local/lib/node_modules/hexy/bin/hexy_cmd.js
+ hexy@<version>
added 1 package in 1.107s

Adding the -g flag makes the module available globally, irrespective of the present working directory of your command shell. A global install is most useful when the module provides a command-line interface. When a package provides a command-line script, npm sets that up. For a global install, the command is installed correctly for use by all users of the computer.

Depending on how Node.js is installed for you, it may need to be run with sudo:

$ sudo npm install -g hexy

Once it is installed, you'll be able to run the newly installed program this way:

$ hexy --width 12 ls.js
00000000: 636f 6e73 7420 6673 203d 2072 const.fs.=.r
0000000c: 6571 7569 7265 2827 6673 2729 equire('fs')
00000018: 3b0a 636f 6e73 7420 7574 696c ;.const.util
00000024: 203d 2072 6571 7569 7265 2827 .=.require('
00000030: 7574 696c 2729 3b0a 636f 6e73 util');.cons
0000003c: 7420 6673 5f72 6561 6464 6972 t.fs_readdir
00000048: 203d 2075 7469 6c2e 7072 6f6d .=.util.prom
00000054: 6973 6966 7928 6673 2e72 6561 isify(fs.rea
00000060: 6464 6972 293b 0a0a 2861 7379 ddir);..(asy
0000006c: 6e63 2028 2920 3d3e 207b 0a20 nc.().=>.{..
00000078: 2063 6f6e 7374 2066 696c 6573 .const.files
00000084: 203d 2061 7761 6974 2066 735f .=.await.fs_
00000090: 7265 6164 6469 7228 272e 2729 readdir('.')
0000009c: 3b0a 2020 666f 7220 2866 6e20 ;...for.(fn.
000000a8: 6f66 2066 696c 6573 2920 7b0a of.files).{.
000000b4: 2020 2020 636f 6e73 6f6c 652e ....console.
000000c0: 6c6f 6728 666e 293b 0a20 207d log(fn);...}
000000cc: 0a7d 2928 292e 6361 7463 6828 .})().catch(
000000d8: 6572 7220 3d3e 207b 2063 6f6e err.=>.{.con
000000e4: 736f 6c65 2e65 7272 6f72 2865 sole.error(e
000000f0: 7272 293b 207d 293b rr);.});

The hexy command was installed as a global command, making it easy to run.

Again, we'll be doing a deep dive into npm in the next chapter. The hexy utility is both a Node.js library and a script for printing out these old-style hex dumps.

In the open source world, a perceived need often leads to creating an open source project. The folks who launched the Yarn project saw needs that weren't being addressed by npm and created an alternative package manager tool. They claim a number of advantages over npm, primarily in the area of performance. To learn more about Yarn, go to https://yarnpkg.com/.

For every example in this book that uses npm, there is a close equivalent command that uses Yarn.

For npm-packaged command-line tools, there is another, simpler way to use the tool.

Using npx to execute Node.js packaged binaries

Some packages in the npm repository are command-line tools, such as the hexy program we looked at earlier. Having to first install such a program before using it is a small hurdle. The sharp-eyed among you will have noticed that npx is installed alongside the node and npm commands when installing Node.js. This tool is meant to simplify running command-line tools from the npm repository by removing the need to first install the package.

The previous example could have been run this way:

$ npx hexy --width 12 ls.js

Under the covers, npx uses npm to download the package to a cache directory, unless the package is already installed in the current project directory. Because the package is then in a cache directory, it is only downloaded once.

There are a number of interesting options to this tool; to learn more, go to https://www.npmjs.com/package/npx.

We have learned a lot in this section about the command-line tools delivered with Node.js, as well as ran a simple script and HTTP server. Next, we will learn how advances in the JavaScript language affect the Node.js platform.

Advancing Node.js with ECMAScript 2015, 2016, 2017, and beyond 

In 2015, the ECMAScript committee released a long-awaited major update of the JavaScript language. The update brought many new features to JavaScript, such as Promises, arrow functions, and classes, and it set the stage for dramatically improving our ability to write clean, understandable JavaScript code.

The browser makers are adding those much-needed features, meaning the V8 engine is adding those features as well. These features are making their way to Node.js, starting with version 4.x.

To learn about the current status of ES2015/2016/2017/and so on in Node.js, visit https://nodejs.org/en/docs/es6/.

By default, only the ES2015, 2016, and 2017 features that V8 considers stable are enabled by Node.js. Further features can be enabled with command-line options. The almost-complete features are enabled with the --es_staging option. The website documentation gives more information.

The Node green website (http://node.green/) has a table that lists the status of a long list of features in Node.js versions.

The ES2019 language spec is published at https://www.ecma-international.org/publications/standards/Ecma-262.htm.

The TC-39 committee does its work on GitHub at https://github.com/tc39.

The ES2015 (and later) features make a big improvement to the JavaScript language. One feature, the Promise class, should mean a fundamental rethinking of common idioms in Node.js programming. In ES2017, a pair of new keywords, async and await, simplifies writing asynchronous code in Node.js, which should encourage the Node.js community to further rethink the common idioms of the platform.

There's a long list of new JavaScript features but let's quickly go over the two of them that we'll use extensively.

The first is a lighter-weight function syntax called the arrow function:

fs.readFile('file.txt', 'utf8', (err, data) => { 
  if (err) ...; // do something with the error 
  else ...;  // do something with the data 
}); 

This is more than the syntactic sugar of replacing the function keyword with the fat arrow. Arrow functions are lighter weight as well as being easier to read. The lighter weight comes at the cost of changing the value of this inside the arrow function. In regular functions, this has a unique value inside the function. In an arrow function, this has the same value as the scope containing the arrow function. This means that, when using an arrow function, we don't have to jump through hoops to bring this into the callback function because this is the same at both levels of the code.
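To make the difference concrete, here is a small illustration (the Counter class is made up for this sketch): because the arrow function passed to setInterval does not get its own this, this.count inside the callback still refers to the Counter instance.

class Counter {
  constructor() {
    this.count = 0;
  }
  start() {
    // The arrow function shares `this` with start(), so `this.count`
    // refers to the Counter instance inside the callback.
    setInterval(() => {
      this.count++;
      console.log(this.count);
    }, 1000);
  }
}

new Counter().start();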

The next feature is the Promise class, which is used for deferred and asynchronous computations. Deferred code execution to implement asynchronous behavior is a key paradigm for Node.js and it requires two idiomatic conventions:

  • The last argument to an asynchronous function is a callback function, which is called when an asynchronous execution is to be performed.
  • The first argument to the callback function is an error indicator.

While convenient, these conventions have resulted in multilayer code pyramids that can be difficult to understand and maintain:

doThis(arg1, arg2, (err, result1, result2) => { 
    if (err) ...; 
    else { 
         // do some work 
         doThat(arg2, arg3, (err2, results) => { 
              if (err2) ...; 
              else { 
                     doSomethingElse(arg5, err => { 
                             if (err) .. ; 
                             else ..; 
                     }); 
              } 
         }); 
    } 
}); 

You don't need to understand the code; it's just an outline of what happens in practice as we use callbacks. Depending on how many steps are required for a specific task, a code pyramid can get quite deep. Promises will let us unravel the code pyramid and improve reliability because error handling is more straightforward and easily captures all errors.

A Promise instance is created as follows:

function doThis(arg1, arg2) { 
    return new Promise((resolve, reject) => { 
        // execute some asynchronous code 
        if (errorIsDetected) return reject(errorObject); 
        // When the process is finished call this: 
        resolve(result1, result2); 
    }); 
}

Rather than passing in a callback function, the caller receives a Promise object. When properly utilized, the preceding pyramid can be coded as follows:

doThis(arg1, arg2)
.then(result => {
  // This can receive only one value, hence to
  // receive multiple values requires an object or array
  return doThat(arg2, arg3);
})
.then((results) => {
  return doSomethingElse(arg5);
})
.then(() => {
  // do a final something
})
.catch(err => {
  // errors land here
});

This works because the Promise class supports chaining if a then function returns a Promise object.

The async/await feature implements the promise of the Promise class to simplify asynchronous coding. This feature becomes active within an async function:

async function mumble() {
  // async magic happens here
}

An async arrow function is as follows: 

const mumble = async () => {
  // async magic happens here
};

To see how much of an improvement the async function paradigm gives us, let's recode the earlier example as follows:

async function doSomething(arg1, arg2, arg3, arg4, arg5) {
  const { result1, result2 } = await doThis(arg1, arg2);
  const results = await doThat(arg2, arg3);
  await doSomethingElse(arg5);
  // do a final something
  return finalResult;
}

Again, we don't need to understand the code but just look at its shape. Isn't this a breath of fresh air compared to the nested structure we started with?

The await keyword is used with a Promise. It automatically waits for the Promise to resolve. If the Promise resolves successfully, the value is returned; if it is rejected, the error is thrown. Results and errors are therefore handled in the usual synchronous manner.
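As a small sketch of that behavior (the readConfig function and file name are hypothetical), a rejected Promise from fs.promises.readFile surfaces as a thrown error that an ordinary try/catch block handles:

const fs = require('fs').promises;

async function readConfig(fname) {
  try {
    // If the Promise from readFile rejects, the error is thrown here and
    // lands in the catch block, just like a synchronous exception.
    const text = await fs.readFile(fname, 'utf8');
    return JSON.parse(text);
  } catch (err) {
    console.error(`Could not read ${fname}: ${err.message}`);
    throw err;
  }
}

readConfig('config.json').catch(() => { process.exitCode = 1; });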

This example also shows another ES2015 feature: destructuring. The fields of an object can be extracted using the following code:

const { value1, value2 } = {
  value1: "Value 1", value2: "Value 2", value3: "Value3"
};

This demonstrates having an object with three fields but only extracting two of the fields.

To continue our exploration of advances in JavaScript, let's take a look at Babel.

Using Babel to use experimental JavaScript features

The Babel transpiler is the leading tool for using cutting-edge JavaScript features or experimenting with new JavaScript features. Since you've probably never seen the word transpiler, it means to rewrite source code from one language to another. It is like a compiler in that Babel converts computer source code into another form, but instead of directly executable code, Babel produces JavaScript. That is, it converts JavaScript code into JavaScript code, which may not seem useful until you realize that Babel's output can target older JavaScript releases.

Put more simply, Babel can be configured to rewrite code with ES2015, ES2016, ES2017 (and so on) features into code conforming to the ES5 version of JavaScript. Since ES5 JavaScript is compatible with practically every web browser on older computers, a developer can write their frontend code in modern JavaScript then convert it to execute on older browsers using Babel.

To learn more about Babel, go to https://babeljs.io.

The Node Green website makes it clear that Node.js supports pretty much all of the ES2015, 2016, and 2017 features. Therefore, as a practical matter, we no longer need to use Babel for Node.js projects. If you are required to support an older Node.js release, you can use Babel to do so.

For web browsers, there is a much longer time lag between a set of ECMAScript features and when we can reliably use those features in browser-side code. It's not that the web browser makers are slow in adopting new features as the Google, Mozilla, and Microsoft teams are proactive about adopting the latest features. Apple's Safari team seems slow to adopt new features, unfortunately. What's slower, however, is the penetration of new browsers into the fleet of computers in the field. 

Therefore, modern JavaScript programmers need to familiarize themselves with Babel.

We're not ready to show example code for these features yet, but we can go ahead and document the setting up of the Babel tool. For further information on setup documentation, visit http://babeljs.io/docs/setup/ and click on the CLI button.

To get a brief introduction to Babel, we'll use it to transpile the scripts we saw earlier to run on Node.js 6.x. In those scripts, we used async functions, a feature that is not supported on Node.js 6.x. 

In the directory containing ls.js and ls2.js, type these commands:

$ npm install babel-cli \
    babel-plugin-transform-es2015-modules-commonjs \
    babel-plugin-transform-async-to-generator

This installs the Babel software, along with a couple of transformation plugins. Babel has a plugin system so that you can enable the transformations required by your project. Our primary goal in this example is converting the async functions shown earlier into Generator functions. Generators are a new sort of function introduced with ES2015 that form the foundation for the implementation of async functions.

Because Node.js 6.x does not have either the fs.promises function or util.promisify, we need to make some substitutions to create a file named ls2-old-school.js:

const fs = require('fs');

const fs_readdir = dir => {
  return new Promise((resolve, reject) => {
    fs.readdir(dir, (err, fileList) => {
      if (err) reject(err);
      else resolve(fileList);
    });
  });
};

async function listFiles() {
  try {
    let dir = '.';
    if (process.argv[2]) dir = process.argv[2];
    const files = await fs_readdir(dir);
    for (let fn of files) {
      console.log(fn);
    }
  } catch (err) { console.error(err); }
}

listFiles();

We have the same example we looked at earlier, but with a couple of changes. The fs_readdir function creates a Promise object then calls fs.readdir, making sure to either reject or resolve the Promise based on the result we get. This is more or less what the util.promisify function does.

Because fs_readdir returns a Promise, the await keyword can do the right thing and wait for the request to either succeed or fail. This code should run as is on Node.js releases, which support async functions. But what we're interested in—and the reason why we added the fs_readdir function—is how it works on older Node.js releases.

The pattern used in fs_readdir is what is required to use a callback-oriented function in an async function context.

Next, create a file named .babelrc, containing the following:

{
  "plugins": [
    "transform-es2015-modules-commonjs",
    "transform-async-to-generator"
  ]
}

This file instructs Babel to use the named transformation plugins that we installed earlier. As the name implies, it will transform the async functions to generator functions.

Because we installed babel-cli, a babel command is installed, such that we can type the following:

$ ./node_modules/.bin/babel --help

To transpile your code, run the following command:

$ ./node_modules/.bin/babel ls2-old-school.js -o ls2-babel.js 

This command transpiles the named file, producing a new file. The new file is as follows:

'use strict';

function _asyncToGenerator(fn) { return function ()
{ var gen = fn.apply(this, arguments);
return new Promise(function (resolve, reject)
{ function step(key, arg) { try { var info =
gen[key](arg); var value = info.value; } catch (error)
{ reject(error); return; } if (info.done) { resolve(value);
} else { return Promise.resolve(value).then(function (value)
{ step("next", value); }, function (err) { step("throw",
err); }); } } return step("next"); }); }; }

const fs = require('fs');

const fs_readdir = dir => {
  return new Promise((resolve, reject) => {
    fs.readdir(dir, (err, fileList) => {
      if (err) reject(err);
      else resolve(fileList);
    });
  });
};

_asyncToGenerator(function* () {
  var dir = '.';
  if (process.argv[2]) dir = process.argv[2];
  const files = yield fs_readdir(dir);
  for (let fn of files) {
    console.log(fn);
  }
})().catch(err => {
  console.error(err);
});

This code isn't meant to be easy for humans to read. Instead, the idea is that you edit the original source file and then convert it for your target JavaScript engine. The main thing to notice is that the transpiled code uses a generator function (the function* notation indicates a generator) in place of the async function, and the yield keyword in place of the await keyword. What a generator function is—and precisely what the yield keyword does—is not important here; the only thing to note is that yield is roughly equivalent to await and that the _asyncToGenerator function implements functionality similar to async functions. Otherwise, the transpiled code is fairly clean and looks rather similar to the original code.

The transpiled script is run as follows:

$ nvm use 4
Now using node v4.9.1 (npm v2.15.11)
$ node --version
v4.9.1
$ node ls2-babel

.babelrc
app.js
ls.js
ls2-babel.js
ls2-old-school.js
ls2.js
node_modules

In other words, it runs the same as the async version but on an older Node.js release. Using a similar process, you can transpile code written with modern ES2015 (and so on) constructions so it can run in an older web browser.

In this section, we learned about advances in the JavaScript language, especially async functions, and then learned how to use Babel to use those features on older Node.js releases or in older web browsers.

Summary

You learned a lot in this chapter about installing Node.js using its command-line tools and running a Node.js server. We also breezed past a lot of details that will be covered later in this book, so be patient.

Specifically, we covered downloading and compiling the Node.js source code, installing Node.js—either for development use in your home directory or for deployment in system directories—and installing npm, the de facto standard package manager used with Node.js. We also saw how to run Node.js scripts or Node.js servers. We then took a look at the new features in ES2015, 2016, and 2017. Finally, we looked at how to use Babel to implement those features in your code.

Now that we've seen how to set up a development environment, we're ready to start working on implementing applications with Node.js. The first step is to learn the basic building blocks of Node.js applications and modules, meaning taking a more careful look at Node.js modules, how they are used, and how to use npm to manage application dependencies. We will cover all of that in the next chapter.
