Wiring plugins

The dream architecture of a software engineer is one with a small, minimal core that can be extended as needed through plugins. Unfortunately, this is not always easy to achieve, since most of the time it has a cost in terms of time, resources, and complexity. Nonetheless, it is always desirable to support some kind of external extensibility, even if limited to just some parts of the system. In this section, we are going to plunge into this fascinating world and focus on a two-fold problem:

  • Exposing the services of an application to a plugin
  • Integrating a plugin into the flow of the parent application

Plugins as packages

Often in Node.js, the plugins of an application are installed as packages into the node_modules directory of a project. There are two advantages to doing this. First, we can leverage the power of npm to distribute the plugin and manage its dependencies. Second, a package can have its own private dependency graph, which reduces the chances of conflicts and incompatibilities between dependencies, as opposed to letting the plugin use the dependencies of the parent project.

The following directory structure gives an example of an application with two plugins distributed as packages:

application 
'-- node_modules 
    |-- pluginA 
    '-- pluginB 

In the Node.js world, this is a very common practice. Some popular examples are express (http://expressjs.com) with its middleware, gulp (http://gulpjs.com), grunt (http://gruntjs.com), nodebb (http://nodebb.org), and docpad (http://docpad.org).

However, the benefits of using packages are not only limited to external plugins. In fact, one popular pattern is to build entire applications by wrapping their components into packages, as if they were internal plugins. So, instead of organizing the modules in the main package of the application, we can create a separate package for each big chunk of functionality and install it into the node_modules directory.

Tip

A package can be private and not necessarily available on the public npm registry. We can always set the private flag in the package.json to prevent accidental publication to npm. Then, we can commit the packages into a version control system such as git or leverage a private npm server to share them with the rest of the team.
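
As a minimal sketch (the package name is purely illustrative), the package.json of such an internal, private package might look like this:

{
  "name": "componentA",
  "version": "1.0.0",
  "private": true
}

With the private flag set, npm will refuse to publish the package, so it can only be shared through version control or a private registry.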

Why follow this pattern? First of all, convenience: people often find it impractical or too verbose to reference the local modules of a package using the relative path notation. Let's, for example, consider the following directory structure:

application 
|-- componentA 
|   '-- subdir 
|       '-- moduleA 
'-- componentB 
    '-- moduleB 

If we want to reference moduleB from moduleA, we have to write something like this:

require('../../componentB/moduleB'); 

Instead, we can leverage the properties of the resolving algorithm of require() (as we have studied it in Chapter 2, Node.js Essential Patterns) and put the entire componentB directory into a package. By installing it into the node_modules directory, we can then write something such as the following (from anywhere in the main package of the application):

require('componentB/moduleB'); 

The second reason for splitting a project into packages is, of course, reusability. A package can have its own private dependencies, and it forces the developer to think in terms of what to expose to the main application and what instead to keep private, with beneficial effects on the decoupling and information hiding of the entire application.

Tip

Pattern

Use packages as a means to organize your application, not just for distributing code in combination with npm.

The use cases we have just described make use of a package not just as a stateless, reusable library (like most of the packages on npm), but more as an integral part of a particular application, providing services, extending its functionality, or modifying its behavior. The main difference is that these types of packages are integrated inside an application rather than just used.

Note

For simplicity, we will use the term plugin to describe any package meant to integrate with a particular application.

As we will see, the common problem that we are going to face when deciding to support this type of architecture is exposing parts of the main application to plugins. In fact, we cannot rely on having only stateless plugins—which would, of course, be the ideal for perfect extensibility—because sometimes a plugin has to use some of the services of the parent application in order to carry out its tasks. This aspect might depend a lot on the technique used to wire modules in the parent application.

Extension points

There are literally infinite ways to make an application extensible. For example, some of the design patterns we studied in Chapter 6, Design Patterns, are meant exactly for this: using Proxy or Decorator we are able to change or augment the functionality of a service; with Strategy, we can swap parts of an algorithm; with middleware, we can insert processing units in an existing pipeline. Also, streams can provide great extensibility thanks to their composable nature.

On the other hand, EventEmitters allow us to decouple our components using events and the publish/subscribe pattern. Another important technique is to explicitly define some points in the application where new functionality can be attached or the existing one modified; these points are commonly known as hooks. To summarize, the most important ingredient in supporting plugins is a set of extension points.
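
To give a more concrete idea of what a hook might look like, here is a minimal sketch (the module and hook names are purely hypothetical) of an application maintaining a registry of hooks that plugins can attach handlers to:

//hooks.js - a hypothetical hook registry of the parent application 
const hooks = new Map(); 
 
exports.registerHook = (name, handler) => { 
  if(!hooks.has(name)) hooks.set(name, []); 
  hooks.get(name).push(handler);              //plugins attach new functionality here 
}; 
 
exports.runHook = (name, payload) => { 
  (hooks.get(name) || []).forEach(handler => handler(payload)); 
}; 
 
//a plugin could then register a handler like this: 
//require('./hooks').registerHook('user:created', user => console.log('Welcome', user.username)); 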

The way we wire our components also plays a decisive role because it can affect the way we expose the services of the application to the plugin. In this section, we are mainly going to focus on this aspect.

Plugin-controlled vs application-controlled extension

Before we go ahead and present some examples, it is important to understand the background of the technique we are going to use. There are mainly two approaches for extending the components of an application:

  • Explicit extension
  • Extension through Inversion of Control (IoC)

In the first case, we have a more specific component (the one providing the new functionality) explicitly extending the infrastructure, while in the second case, it is the infrastructure that controls the extension by loading, installing, or executing the new specific component. In the second scenario, the flow of control is inverted, as shown in the following image:

[Figure: Plugin-controlled vs application-controlled extension]

IoC is a very broad principle that can be applied not only to the problem of application extensibility. In fact, in more general terms it can be said that by implementing some form of IoC, instead of the custom code controlling the infrastructure, the infrastructure controls the custom code. With IoC, the various components of an application trade off their power of controlling the flow in exchange for an improved level of decoupling. This is also known as the Hollywood principle or "don't call us, we'll call you".

For example, a DI container is a demonstration of the IoC principle applied to the specific case of dependency management. The Observer pattern is another example of IoC applied to state management. Template, Strategy, State, and Middleware are also more localized manifestations of the same principle. The browser implements the IoC principle when dispatching UI events to the JavaScript code (it's not the JavaScript code actively polling the browser for events), and guess what, Node.js itself follows the IoC principle when controlling the execution of the various callbacks for us.

Note

To know more about the IoC principle, we advise you to study the topic directly from the words of its master, Martin Fowler, at http://martinfowler.com/bliki/InversionOfControl.html.

Applying this concept to the specific case of plugins, we can then identify two forms of extension:

  • Plugin-controlled extension
  • Application-controlled extension (IoC)

In the first case, it is the plugin that taps into the components of the application to extend them as needed, while in the second case, the control is in the hands of the application, which integrates the plugin into one of its extension points.

To give a quick example, let's consider a plugin that extends an Express application with a new route. Using a plugin-controlled extension, this would look like the following:

//in the application: 
const app = express(); 
require('thePlugin')(app); 
 
//in the plugin: 
module.exports = function plugin(app) { 
  app.get('/newRoute', function(req, res) {...}) 
}; 

If, instead, we want to use an application-controlled extension (IoC), the same preceding example would look like the following:

//in the application: 
const app = express(); 
const plugin = require('thePlugin')(); 
app[plugin.method](plugin.route, plugin.handler); 
 
//in the plugin: 
module.exports = function plugin() { 
  return { 
    method: 'get', 
    route: '/newRoute', 
    handler: function(req, res) {...} 
  } 
} 

In the last code fragment, we saw how the plugin is only a passive player in the extension process; the control is in the hands of the application, which implements the framework to receive the plugin.

Based on the preceding example, we can immediately identify a few important differences between the two approaches:

  • Plugin-controlled extension is more powerful and flexible, as often we have access to the internals of the application and we can move freely as if the plugin was actually a part of the application itself. However, this sometimes can be more of a liability than an advantage. In fact, any change in the application would have repercussions on the plugins more readily, requiring constant updates as the main application evolves.
  • Application-controlled extension requires a plugin infrastructure in the main application. With a plugin-controlled extension, the only requirement is that the components of the application are extensible in some way.
  • With a plugin-controlled extension, it becomes essential to share the internal services of the application with the plugin (in the preceding small example, the service to share was the app instance); otherwise, we would not be able to extend them. With an application-controlled extension, it might still be necessary to access some of the services of the application, not to extend but rather to use them. For example, we might want to query the db instance in our plugin or leverage the logger of the main application, just to name a few scenarios.

This last point should make us reflect on the importance of exposing the services of an application to its plugins; that's what we are mainly interested in exploring. The best way to do this is to show a practical example of a plugin-controlled extension, which requires minimal effort in terms of infrastructure and lets us focus on the problem of sharing the application's state with the plugins.

Implementing a logout plugin

Let's now start to work on a small plugin for our authentication server. With the way we originally created the application, it is not possible to explicitly invalidate a token; it simply becomes invalid when it expires. Now we want to add support for this feature, namely logout, and we want to do that by not modifying the code of the main application but rather delegating the task to an external plugin.

To support this new feature, we need to save each token in the database after it is created and then check for its existence every time we want to validate it. To invalidate a token, we simply need to remove it from the database.

To do this, we are going to use a plugin-controlled extension to proxy the calls to authService.login() and authService.checkToken(). We then need to decorate the authService with a new method called logout(). After doing this, we also want to register a new route against the main Express server to expose a new endpoint (/logout), which we can use to invalidate a token using an HTTP request.

We are going to implement the plugin we just described in four different variations:

  • Using hardcoded dependencies
  • Using dependency injection
  • Using a service locator
  • Using a DI container

Using hardcoded dependencies

The first variation of the plugin we are now going to implement covers the case in which our application mainly uses hardcoded dependencies for wiring its stateful modules. In this context, if our plugin lives in a package under the node_modules directory, to use the services of the main application we have to gain access to the parent package. We can do this in two ways:

  • Using require() and navigating to the application's root using relative or absolute paths.
  • Using require() by impersonating a module in the parent application—usually the module instantiating the plugin. This will allow us to easily gain access to all the services of the application by using require(), as if it was invoked by the parent application and not from the plugin.

The first technique is less robust as it assumes that the package is aware of the position of the main application. The module impersonation pattern can instead be used regardless of where the package is required from, and for this reason, this is the technique that we are going to use to implement the next demo.
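
Just for reference, the first technique would look something like the following (the relative path is hypothetical and depends entirely on where the plugin is installed with respect to the main application):

//in node_modules/authsrv-plugin-logout/index.js 
const authService = require('../../lib/authService');  //breaks if the package is moved or deduplicated 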

To build our plugin, we first need to create a new package under the node_modules directory, named authsrv-plugin-logout. Before we start coding, we need to create a minimal package.json to describe the package, filling in only the essential parameters (the complete path to the file is node_modules/authsrv-plugin-logout/package.json):

{ 
  "name": "authsrv-plugin-logout", 
  "version": "0.0.0" 
} 

Now we are ready to create the main module of our plugin. We will use the file index.js, as it is the default module that Node.js tries to load when requiring the package (if no main property is defined in the package.json). As always, the initial lines of the module are dedicated to loading the dependencies; pay attention to how we are going to require them (file node_modules/authsrv-plugin-logout/index.js):

const parentRequire = module.parent.require; 
 
const authService = parentRequire('./lib/authService'); 
const db = parentRequire('./lib/db'); 
const app = parentRequire('./app'); 
 
const tokensDb = db.sublevel('tokens'); 

The first line of code is what makes the difference. We obtain a reference to the require() function of the parent module, which is the one that loads the plugin. In our case, the parent is going to be the app module in the main application, and this means that every time we use parentRequire(), we are loading a module as if we were doing it from app.js.

The next step is creating a proxy for the authService.login() method. After studying this pattern in Chapter 6, Design Patterns, we should already know how it works:

const oldLogin = authService.login;                    //[1] 
authService.login = (username, password, callback) => { 
  oldLogin(username, password, (err, token) => {       //[2] 
    if(err) return callback(err);                      //[3] 
 
    tokensDb.put(token, {username: username}, () => { 
      callback(null, token); 
    }); 
  }); 
} 

In the preceding code, the steps performed are explained as follows:

  1. We first save a reference to the old login() method and we then override it with our proxied version.
  2. In the proxy function, we invoke the original login() method by providing a custom callback so that we can intercept the original return value.
  3. If the original login() returns an error, we simply forward it to the callback; otherwise, we save the token into the database.

Similarly, we need to intercept the calls to checkToken() so that we can add our custom logic:

const oldCheckToken = authService.checkToken; 
 
authService.checkToken = (token, callback) => { 
  tokensDb.get(token, function(err, res) { 
    if(err) return callback(err); 
 
    oldCheckToken(token, callback); 
  }); 
} 

This time, we first want to check whether the token exists in the database before giving control to the original checkToken() method. If the token is not found, the get() operation returns an error; this means that our token was invalidated and so we immediately return the error back to the callback.
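
Depending on the database API in use, we might want to be more selective and treat only a missing key as an invalidated token, while forwarding any other failure unchanged. Assuming a LevelUP-style API, where a missing key produces an error with a truthy notFound property, a sketch of this refinement could look like the following:

authService.checkToken = (token, callback) => { 
  tokensDb.get(token, (err, res) => { 
    if(err && err.notFound) return callback(new Error('Token invalidated')); //token was logged out 
    if(err) return callback(err);                                            //any other database error 
    oldCheckToken(token, callback); 
  }); 
} 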

To finalize the extension of the authService, we now need to decorate it with a new method, which we will use to invalidate a token:

authService.logout = (token, callback) => { 
  tokensDb.del(token, callback); 
} 

The logout() method is very simple: we just delete the token from the database.

Finally, we can attach a new route to the Express server to expose the new functionality through a web service:

app.get('/logout', (req, res, next) => { 
  authService.logout(req.query.token, function() { 
    res.status(200).send({ok: true}); 
  }); 
}); 

Now our plugin is ready to be attached to the main application. To do this we just need to go back to the main directory of the application and edit the app.js module:

// ... 
let app = module.exports = express(); 
app.use(bodyParser.json()); 
 
require('authsrv-plugin-logout'); 
 
app.post('/login', authController.login); 
app.all('/checkToken', authController.checkToken); 
// ... 

As we can see, to attach the plugin we only need to require it. As soon as this happens—during the startup of the application—the flow of control is given to the plugin, which in turn will extend the authService and the app modules, as we saw earlier.

Now our authentication server also supports the invalidation of tokens. We did that in a reusable way: the core of the application remained almost untouched, and we were able to easily apply the Proxy and Decorator patterns to extend its functionality.

We can now try to start the application again:

node app

Then, we can verify that the new /logout web service actually exists and works as expected. Using curl, we can now try to obtain a new token using /login:

curl -X POST -d '{"username": "alice", "password":"secret"}' http://localhost:3000/login -H "Content-Type: application/json"

Then, we can check whether the token is valid using /checkToken:

curl -X GET -H "Accept: application/json" http://localhost:3000/checkToken?token=<TOKEN HERE>

Then, we can pass the token to the /logout endpoint to invalidate it; with curl, this can be done with a command such as this:

curl -X GET -H "Accept: application/json" http://localhost:3000/logout?token=<TOKEN HERE>

Now, if we try to check the validity of the token again, we should get a negative response, confirming that our plugin is working perfectly.

Even with a small plugin like the one we just implemented, the advantages of supporting plugin-based extensibility are clear. We also learned how to gain access to the services of the main application from another package using module impersonation.

Note

The module impersonation pattern is used by quite a few NodeBB plugins; you might want to check a couple of them in order to have an idea of how this is used in a real application. These are the links to some notable examples:

nodebb-plugin-poll:

https://github.com/Schamper/nodebb-plugin-poll/blob/b4a46561aff279e19c23b7c635fda5037c534b84/lib/nodebb.js

nodebb-plugin-mentions:

https://github.com/julianlam/nodebb-plugin-mentions/blob/9638118fa7e06a05ceb24eb521427440abd0dd8a/library.js#L4-13

Module impersonation is, of course, a form of hardcoded dependency and shares its strengths and weaknesses. On the one hand, it allows us to access any service of the main application with little effort and minimal infrastructural requirements, but on the other hand, it creates a tight coupling, not only with a particular instance of a service but also with its location, which more easily exposes the plugin to changes and refactoring in the main application.

Exposing services using a service locator

Similar to module impersonation, the service locator is also a good choice if we want to expose all the components of an application to its plugins, but on top of that, it has a major advantage: a plugin can use the service locator to expose its own services to the application or even to other plugins.

Let's now refactor our logout plugin again to use a service locator. We'll refactor the main module of the plugin in the node_modules/authsrv-plugin-logout/index.js file:

module.exports = (serviceLocator) => { 
  const authService = serviceLocator.get('authService'); 
  const db = serviceLocator.get('db'); 
  const app = serviceLocator.get('app'); 
 
  const tokensDb = db.sublevel('tokens'); 
 
  const oldLogin = authService.login; 
  authService.login = (username, password, callback) => { 
    //...same as in the previous version 
  } 
 
  const oldCheckToken = authService.checkToken; 
  authService.checkToken = (token, callback) => { 
    //...same as in the previous version 
  } 
 
  authService.logout = (token, callback) => { 
    //...same as in the previous version 
  } 
 
  app.get('/logout', (req, res, next) => { 
    //...same as in the previous version 
  }); 
}; 

Now that our plugin receives the service locator of the parent application as the input, it can access any of its services as needed. This means that the application does not have to know in advance what the plugin is going to need in terms of dependencies; this is surely a major advantage when implementing a plugin-controlled extension.

The next step is to execute the plugin from the main application, and to do that, we have to modify the app.js module. We will use the version of the authentication server already based on the service locator pattern. The required changes are given in the following block of code:

// ... 
const svcLoc = require('./lib/serviceLocator')(); 
svcLoc.register(...); 
// ... 
 
svcLoc.register('app', app); 
const plugin = require('authsrv-plugin-logout'); 
plugin(svcLoc); 
 
// ... 

These changes enabled us to:

  • Register the app module itself in the service locator, as the plugin might want to have access to it
  • Require the plugin
  • Invoke the plugin's main function by providing the service locator as an argument

As we already said, the main strength of the service locator is that it provides a simple way to expose all the services of an application to its plugins, but it can also be used as a mechanism for sharing services from a plugin back into the parent application or even with other plugins. This last consideration is probably its biggest merit in the context of plugin-based extensibility.
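
For example, nothing prevents the logout plugin from registering a service of its own, which the parent application or other plugins can later retrieve. Assuming the locator offers the same register() and get() methods used by the parent application (the service name is purely illustrative), this could look like the following:

//in the plugin, after creating tokensDb 
serviceLocator.register('tokensDb', tokensDb);   //expose the tokens database to other components 
 
//later, in the application or in another plugin 
const tokensDb = serviceLocator.get('tokensDb'); 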

Exposing services using DI

Using DI to propagate services to a plugin is as easy as using it in the application itself. This pattern becomes almost a requirement if it's already the main method for wiring dependencies in the parent application, but nothing prevents us from using it when the prevalent form of dependency management is hardcoded dependencies or a service locator. DI is also an ideal choice when we want to support an application-controlled extension because it provides better control over what is shared with the plugin.

To test these assumptions, let's immediately try to refactor the logout plugin to use DI. The changes required are minimal, so let's start from the main module of the plugin (node_modules/authsrv-plugin-logout/index.js):

module.exports = (app, authService, db) => { 
  const tokensDb = db.sublevel('tokens'); 
 
  const oldLogin = authService.login; 
  authService.login = (username, password, callback) => { 
    //...same as in the previous version 
  } 
 
  const oldCheckToken = authService.checkToken; 
  authService.checkToken = (token, callback) => { 
    //...same as in the previous version 
  } 
 
  authService.logout = (token, callback) => { 
    //...same as in the previous version 
  } 
 
  app.get('/logout', (req, res, next) => { 
    //...same as in the previous version 
  }); 
}; 

All we did was wrap the plugin's code into a factory that receives the services of the parent application as input; the rest of it remains unchanged.

To complete our refactoring, we also need to change the way we attach the plugin from the parent application; let's then change that one line where we require the plugin in the app.js module:

// ... 
const plugin = require('authsrv-plugin-logout'); 
plugin(app, authService, db); 
// ... 

We intentionally didn't show how these dependencies were obtained. In fact, it doesn't really make any difference; any method will work equally well. We might use hardcoded dependencies or obtain the instances from factories or from a service locator—it doesn't really matter. This proves that DI is a flexible pattern for wiring plugins, one that can be used regardless of the way we wire the services in the parent application.

But the differences are much more profound. DI is definitely the cleanest way of providing a set of services to a plugin, but most importantly, it offers the best level of control over what's exposed to it, resulting in better information hiding and better protection against overly aggressive extensions. However, this can also be considered a drawback, because the main application can't always know what services a plugin is going to need, so we end up either injecting every service, which is impractical, or only a subset of them, for example, only the essential core services of the parent application. For this reason, DI is not the ideal choice if we mainly want to support plugin-controlled extensibility; however, the use of a DI container can easily solve these issues.

Note

Grunt (http://gruntjs.com), a task runner for Node.js, uses DI to provide each plugin with an instance of the core Grunt service. Each plugin can then extend it by attaching new tasks, using it to retrieve the configuration parameters, or running other tasks. A Grunt plugin looks like the following:

module.exports = function(grunt) {
  grunt.registerMultiTask('taskName', 'description',
    function(...) {...} 
  ); 
};

Exposing services using a DI container

Taking the previous example as a starting point, we can use a DI container in combination with our plugin by applying a small change to the app module, as shown in the following code:

// ... 
const diContainer = require('./lib/diContainer')(); 
diContainer.register(...); 
// ... 
//initialize the plugin 
diContainer.inject(require('authsrv-plugin-logout')); 
// ... 

After registering the factories or the instances of our application, all we have to do is instantiate the plugin, which is done by injecting its dependencies using the DI container. This way, each plugin can require its own set of dependencies without the parent application needing to know. All the wiring is again carried out automatically by the DI container.

Using a DI container also means that each plugin can potentially access any service of the application, reducing the information hiding and the control over what can be used or extended. A possible solution to this problem is to create a separate DI container registering only the services that we want to expose to plugins; this way, we can control what each plugin can see of the main application. This demonstrates that a DI container can also be a very good choice in terms of encapsulation and information hiding.
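
A minimal sketch of this idea, assuming the same diContainer module used for the rest of the application (and that the main container registers the authService and db services under those names, as in the previous examples), might look like the following:

// ... 
const pluginContainer = require('./lib/diContainer')(); 
//register only the services we want plugins to access 
pluginContainer.register('app', app); 
pluginContainer.register('authService', diContainer.get('authService')); 
pluginContainer.register('db', diContainer.get('db')); 
 
pluginContainer.inject(require('authsrv-plugin-logout')); 
// ... 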

This concludes our last refactoring of the logout plugin and the authentication server.
