Chapter 2. Microservices in Node.js – Seneca and PM2 Alternatives

In this chapter, you will mainly learn about two frameworks, Seneca and PM2, and why they are important for building microservices. We will also get to know the alternatives to these frameworks in order to get a general understanding of what is going on in the Node.js ecosystem. In this chapter, we are going to focus on the following topics:

  • Need for Node.js: In this section, we are going to justify the choice of Node.js as a framework to build our microservices-oriented software. We will walk through the software stack required to use this awesome technology.
  • Seneca – a microservices framework: In this section, you will learn the basics of Seneca and why it is the right choice if we want to keep our software manageable. We will explain how to integrate Seneca with Express (the most popular web server in Node.js) in order to follow the industry standards.
  • PM2: In this section, you will learn about PM2, an excellent choice for running Node.js applications. Whatever problem you face when deploying your ecosystem of apps, PM2 is likely to have a solution for it.

Need for Node.js

In the previous chapter, I mentioned that I wasn't a big fan of Node.js in the past. The reason for this was that I wasn't prepared to cope with the level of standardization that JavaScript was undergoing.

JavaScript in the browser was painful. Cross-browser compatibility was always a problem and the lack of standardization didn't help to ease the pain.

Then Node.js came along. Thanks to its non-blocking nature (we will talk about this later in the chapter), it made creating highly scalable applications easy, and it was also easy to learn, being based on JavaScript, a well-known language.

Nowadays, Node.js is the preferred choice for a large number of companies across the world, as well as the number one choice for workloads that require non-blocking behavior on the server, such as web sockets.

In this book, we will primarily (but not only) use Seneca and PM2 as the frameworks for building and running microservices, but it does not mean that the alternatives are not good.

There are a few alternatives on the market, such as restify or Express for building applications, and forever or nodemon for running them. However, I find Seneca and PM2 to be the most appropriate combination for building microservices for the following reasons:

  • PM2 is extremely powerful regarding application deployments
  • Seneca is not only a framework to build microservices, but it is also a paradigm that reshapes what we know about object-oriented software

We will be using Express in a few examples in the chapters of this book and we will also discuss how to integrate Seneca in Express as a middleware.

However, before that, let's discuss some concepts around Node.js that will help us to understand those frameworks.

Installing Node.js, npm, Seneca, and PM2

Node.js is fairly easy to install. Depending on your system, there is an installer available that makes the installation of Node.js and npm (Node Package Manager) a fairly simple task. Simply double-click on it and follow the instructions. At the time of writing this book, there are installers available for Windows and OS X.

However, advanced users, especially DevOps engineers, may need to install Node.js and npm from source or binaries.

Note

Both Node.js and npm programs come bundled together in a single package that we can download for various platforms from the Node.js website (either sources or binaries):

https://nodejs.org/en/download/

For users of Chef, a popular configuration management tool for building servers, there are a few options available, but the most popular is the following recipe (for those unfamiliar with Chef, a recipe is essentially a script that installs or configures software on a server through Chef):

https://github.com/redguide/nodejs

At the time of writing this book, there are binaries available for Linux.

Learning npm

npm is a tool that comes with Node.js and enables you to pull dependencies from the Internet without worrying about their management. It can also be used to maintain and update dependencies, as well as create projects from scratch.

As you probably know, every node app comes with a package.json file. This file describes the configuration of the project (dependencies, versions, common commands, and so on). Let's see the following example:

{
  "name": "test-project",
  "version": "1.0.0",
  "description": "test project",
  "main": "index.js",
  "scripts": {
    "test": "grunt validate --verbose"
  },
  "author": "David Gonzalez",
  "license": "ISC"
}

The file itself is self-explanatory. There is an interesting section in the file—scripts.

In this section, we can specify the commands to run for different actions. In this case, if we run npm test from the terminal, npm will execute grunt validate --verbose.

Node applications are usually as easy to run as executing the following command:

node index.js

This assumes that you run the command from the root of your project and that the bootstrapping file is index.js. If this is not the case, the best thing you can do is add an entry to the scripts section in package.json, as follows:

"scripts": {
  "test": "grunt validate --verbose",
  "start": "node index.js"
},

As you can see, now we have two commands executing the same program:

node index.js
npm start

The benefits of using npm start are quite obvious—uniformity. No matter how complex your application is, npm start will always run it (if you have configured the scripts section correctly).

Let's install Seneca and PM2 on a clean project.

First, execute npm init in a new folder from the terminal after installing Node.js. You should get a prompt similar to the following image:

[Screenshot: the npm init prompt asking for the project parameters]

npm will ask you for a few parameters to configure your project, and once you are done, it writes a package.json file with content similar to the preceding code.

Now we need to install the dependencies; npm will do that for us. Just run the following command:

npm install --save seneca

Now, if you inspect package.json again, you can see that there is a new section called dependencies that contains an entry for Seneca:

"dependencies": {
  "seneca": "^0.7.1"
}

This means that from now on, our app can require the Seneca module and the require() function will be able to find it. There are a few variations of the --save flag, as follows:

  • --save: This saves the dependency in the dependencies section. It is available throughout the entire development life cycle.
  • --save-dev: This saves the dependency in the devDependencies section. It is only available in development and does not get deployed to production.
  • --save-optional: This adds the dependency to the optionalDependencies section (like --save), but lets npm continue if the dependency can't be found. It is then up to the app to handle the absence of this dependency.
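To make the three sections concrete, here is a sketch of how a package.json might look after using all three flags (the devDependencies and optionalDependencies entries and their versions are illustrative, not from the original project):

```json
{
  "dependencies": {
    "seneca": "^0.7.1"
  },
  "devDependencies": {
    "grunt": "^0.4.5"
  },
  "optionalDependencies": {
    "fsevents": "^1.0.0"
  }
}
```

Only the dependencies section is installed everywhere; devDependencies are skipped when installing for production, and a failure to install an optionalDependencies entry does not abort the install.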

Let's continue with PM2. Although it can be used as a library, PM2 is mainly a command-line tool, like ls or grep on any Unix system. npm does a great job of installing command-line tools:

npm install -g pm2

The -g flag instructs npm to install PM2 globally, so it is available system-wide rather than locally to the app. This means that when the previous command finishes, pm2 is available as a command in the console. If you run pm2 help in a terminal, you can see the help of PM2.

Our first program – Hello World

One of the most interesting concepts around Node.js is simplicity. You can learn Node.js in a few days and master it in a few weeks, as long as you are familiar with JavaScript. Code in Node.js tends to be shorter and clearer than in other languages:

var http = require('http');

var server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end("Hello World\n");
});

server.listen(8000);

The preceding code creates a server that listens on port 8000 for requests. If you don't believe it, open a browser and type http://127.0.0.1:8000 in the navigation bar, as shown in the following screenshot:

[Screenshot: the browser displaying Hello World served from http://127.0.0.1:8000]

Let's explain the code:

  • The first line loads the http module. Through the require() instruction, we ask Node.js to load the http module and assign the exports of this module to the http variable. Exporting is the mechanism that Node.js uses to expose functions and variables from inside a module to the outside world.
  • The second construction in the script creates the HTTP server. The http module exposes a method called createServer() that receives a function as a parameter (remember that JavaScript treats functions as first-class objects, so they can be passed as arguments to other functions), which, in the Node.js world, is called a callback. A callback is an action to be executed in response to an event. In this case, the event is the script receiving an HTTP request. Node.js makes heavy use of callbacks because of its threading model: your application always executes on a single thread, so waiting for an operation to complete must not block that thread; otherwise, your program would look stalled or hung and stop being responsive. We'll come back to this in Chapter 4, Writing Your First Microservice in Node.js.
  • In the next line, server.listen(8000) starts the server. From now on, every time our server receives a request, the callback passed to http.createServer() will be executed.

This is it. Simplicity is the key to Node.js programs. The code lets you get straight to the point without writing tons of classes, methods, and config objects that complicate what can, in the first instance, be done much more simply: writing a script that serves requests.

Node.js threading model

Programs written in Node.js are single-threaded. The impact of this is quite significant; in the previous example, if we have ten thousand concurrent requests, they will be queued and satisfied by the Node.js event loop (it will be further explained in Chapter 4, Writing Your First Microservice in Node.js and Chapter 6, Testing and Documenting Node.js Microservices) one by one.

At first glance, this sounds wrong. After all, modern CPUs can handle multiple requests in parallel due to their multicore nature. So, what is the benefit of executing everything in a single thread?

The answer to this question is that Node.js was designed to handle asynchronous processing. This means that in the event of a slow operation such as reading a file, instead of blocking the thread, Node.js lets the thread continue satisfying other events, and once the slow operation completes, the event loop executes the callback associated with the event, processing the response.

Sticking to the previous example, the createServer() method accepts a callback that will be executed in the event of an HTTP request, but meanwhile, the thread is free to keep executing other actions.
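The same ordering can be observed with a timer instead of an HTTP request. In the following sketch (the variable names are ours, not from the book), the slow operation is only scheduled, and the thread immediately moves on to the next statement:

```javascript
// The "slow operation" is only scheduled here; the thread is not blocked.
var order = [];

setTimeout(function () {
  // Runs later, once the event loop processes the timer event.
  order.push('slow operation finished');
}, 0);

// This synchronous statement runs first, even though it appears last:
order.push('thread kept working');

// At this point, only the synchronous push has happened; the callback
// is still waiting in the event queue.
console.log(order);
```

Even with a delay of 0 milliseconds, the callback never runs before the current synchronous code finishes, which is exactly why a single thread can stay responsive.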

The catch in this model is what Node.js developers call callback hell. The code gets complicated because every action that responds to a blocking operation has to be handled in a callback, like the function passed as a parameter to the createServer() method in the previous example.
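A contrived sketch of what callback hell looks like follows. The three functions are hypothetical stand-ins for real asynchronous operations (they invoke their callbacks directly to keep the example self-contained); note how each step only fits inside the previous step's callback, pushing the code further and further to the right:

```javascript
// Hypothetical stand-ins for asynchronous operations such as DB queries.
function loadUser(id, callback) {
  callback(null, { id: id, name: 'David' });
}

function loadOrders(user, callback) {
  callback(null, [{ total: 10 }, { total: 5 }]);
}

function sumTotals(orders, callback) {
  var sum = 0;
  for (var i = 0; i < orders.length; i++) {
    sum += orders[i].total;
  }
  callback(null, sum);
}

var grandTotal;

// Each step can only start inside the previous callback:
loadUser(1, function (err, user) {
  if (err) { throw err; }
  loadOrders(user, function (err, orders) {
    if (err) { throw err; }
    sumTotals(orders, function (err, total) {
      if (err) { throw err; }
      grandTotal = total; // 10 + 5 = 15
    });
  });
});
```

With only three steps the nesting is already noticeable; with error handling and branching at every level, real code degrades quickly, which is what the "hell" refers to.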

Modular organization best practices

The source code organization for big projects is always controversial. Different developers have different approaches to how to order the source code in order to keep the chaos away.

Some languages such as Java or C# organize the code in packages so that we can find source code files that are related inside a package. As an example, if we are writing a task manager software, inside the com.taskmanager.dao package we can expect to find classes that implement the data access object (DAO) pattern in order to access the database. In the same way, in the com.taskmanager.dao.domain.model package, we can find all the classes that represent model objects (usually tables) in our application.

This is a convention in Java and C#. If you are a C# developer, and you start working on an existing project, it only takes you a few days to get used to how the code is structured as the language enforces the organization of the source.

JavaScript

JavaScript was first designed to be run inside the browser. The code was supposed to be embedded in HTML documents so that the Document Object Model (DOM) could be manipulated to create dynamic effects. Take a look at the following example:

<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
  <title>Title of the document</title>
</head>
<body>
  Hello <span id="world">Mundo</span>
  <script type="text/javascript">
  document.getElementById("world").innerText = 'World';
  </script>
</body>
</html>

As you can see, if you load this HTML on a browser, the text inside the span tag with the id as world is replaced when the page loads.

In JavaScript, there is no concept of dependency management. JavaScript can be segregated from the HTML into its own file, but there is no way (for now) to include a JavaScript file into another JavaScript file.

This leads to a big problem. When the project contains dozens of JavaScript files, asset management becomes more of an art than an engineering effort.

The order in which you import the JavaScript files becomes important as the browser executes the JavaScript files as it finds them. Let's reorder the code in the previous example to demonstrate it, as follows:

<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
  <title>Title of the document</title>
  <script type="text/javascript">
    document.getElementById("world").innerText = 'World';
  </script>
</head>
<body>
  Hello <span id="world">Mundo</span>

</body>
</html>

Now, save this HTML in an index.html file and try to load it in any browser, as shown in the following image:

[Screenshot: the Chrome console showing the error produced by the reordered page]

In this case, I used Chrome, and the console shows an Uncaught TypeError: Cannot set property 'innerText' of null error on line 7.

Why is that happening?

As we explained earlier, the browser executes the code as it is found, and it turns out that when the browser executes the JavaScript, the world element does not exist yet.

Fortunately, Node.js has solved the dependency-loading problem using a very elegant and standard approach.

SOLID design principles

When talking about microservices, we always talk about modularity, and modularity always boils down to the following (SOLID) design principles:

  • Single responsibility principle
  • Open for extension, closed for modification
  • Liskov substitution
  • Interface segregation
  • Dependency inversion (inversion of control and dependency injection)

You want your code to be organized in modules. A module is an aggregation of code that does something simple, such as manipulating strings, and it does it well. The more functions (or classes, utilities, and so on) your module contains, the less cohesive it is, and we are trying to avoid that.

In Node.js, every JavaScript file is a module by default. We can also use folders as modules, but let's focus on files:

function contains(a, b) {
  return a.indexOf(b) > -1;
}

function stringToOrdinal(str) {
  var result = "";
  for (var i = 0, len = str.length; i < len; i++) {
    result += charToNumber(str[i]);
  }
  return result;
}

function charToNumber(char) {
  return char.charCodeAt(0) - 96;
}

module.exports = {
  contains: contains,
  stringToOrdinal: stringToOrdinal
}

The preceding code represents a valid module in Node.js. In this case, the module contains three functions, where two of them are exposed to the outside of the module.

In Node.js, this is done through the module.exports variable. Whatever you assign to this variable is visible to the calling code, while everything else stays private to the module, such as the charToNumber() function in this case.

So, if we want to use this module, we just need to require() it, as follows:

var stringManipulation = require("./string-manipulation");
console.log(stringManipulation.stringToOrdinal("aabb"));

This should output 1122.

Let's go back to the SOLID principles and see how our module looks:

  • Single responsibility principle: Our module only deals with strings
  • Open for extension, closed for modification: We can add more functions, but the ones that we have are correct and they can be used to build new functions in the module
  • Liskov substitution: We will skip this one, as the structure of the module is irrelevant to fulfil this principle
  • Interface segregation: JavaScript does not have a language-level interface construct like Java or C#, but in this module, we did expose an interface; the module.exports variable acts as a contract for the calling code, so a change in our implementation won't affect how the module is called
  • Dependency inversion: Here is where we fail, not fully, but enough to reconsider our approach

In this case, we require the module, and the only way to interact with it is through the global scope. If, inside the module, we want to interact with data from outside, the only possible option is to create a global variable (or function) prior to requiring the module, and then assume that it is always going to be in there.

Global variables are a big problem in Node.js. As you are probably aware, in JavaScript, if you omit the var keyword when declaring a variable, it is automatically global.

This, coupled with the fact that intentional global variables create a data coupling between modules (coupling is what we want to avoid at any cost), is the reason to find a better approach to how to define the modules for our microservices (or in general).

Let's restructure the code as follows:

function init(options) {
  
  function charToNumber(char) {
    return char.charCodeAt(0) - 96;
  }
  
  function StringManipulation() {
  }
  
  var stringManipulation = new StringManipulation();
  
  stringManipulation.contains = function(a, b) {
    return a.indexOf(b) > -1;
  };
  
  stringManipulation.stringToOrdinal = function(str) {
    var result = "";
    for (var i = 0, len = str.length; i < len; i++) {
      result += charToNumber(str[i]);
    }
    return result;
  }
  return stringManipulation;
}

module.exports = init;

This looks a bit more complicated, but once you get used to it, the benefits are enormous:

  • We can pass configuration parameters to the module (such as debugging information)
  • It avoids polluting the global scope: everything is wrapped inside a function, and we can enforce the use strict directive (which turns assignments to undeclared variables, that is, declarations without var, into errors)
  • Parameterizing a module makes it easy to mock behaviors and data for testing
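The first and third points can be sketched as follows. Here we reuse the init() pattern from the listing above, extended with a hypothetical debug option to show how configuration flows in; in a real project, init() would live in its own file and be pulled in with require():

```javascript
function init(options) {
  options = options || {}; // configuration supplied by the caller

  // Private helper, invisible outside the closure:
  function charToNumber(char) {
    return char.charCodeAt(0) - 96;
  }

  var stringManipulation = {};

  stringManipulation.stringToOrdinal = function (str) {
    var result = "";
    for (var i = 0, len = str.length; i < len; i++) {
      result += charToNumber(str[i]);
    }
    if (options.debug) {
      // The caller opted in to debugging information.
      console.log("stringToOrdinal(" + str + ") -> " + result);
    }
    return result;
  };

  return stringManipulation;
}

module.exports = init;

// Each caller gets its own, independently configured instance:
var quiet = init();
var verbose = init({ debug: true });

quiet.stringToOrdinal("aabb");   // "1122"
verbose.stringToOrdinal("abc");  // "123", and also logs the call
```

Because the configuration is just an argument, a test can pass in stubbed options (or stubbed collaborators) without touching any global state.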

In this book, we are going to write a good amount of code to model systems from the microservices perspective. We will try to stick to this pattern as much as we can so that we can see its benefits.

One of the libraries that we are going to use to build microservices, Seneca, follows this pattern, as do a large number of libraries that can be found on the Internet.
