© Anand Tamboli 2019
Anand Tamboli, Build Your Own IoT Platform, https://doi.org/10.1007/978-1-4842-4498-2_10

10. Rule Engine and Authentication

Anand Tamboli1 
(1)
Sydney, NSW, Australia
 

The rule engine is one of the most powerful blocks of our IoT platform or any IoT platform in general. The fact that anything happening in the connected environment of an IoT platform can trigger something else to happen makes the rule engine a critical block.

In this chapter, we will
  • Establish working logic for our rule engine

  • Build the relevant rule engine flow

  • Create APIs for rule management

  • Understand authentication logic and implement it

Start with the Rule Engine Logic

We can build a rule engine infrastructure in several ways. There is no right or wrong way per se; there is only smart and smarter, or efficient and more efficient. At the same time, the value of a rule engine is directly proportional to the simplicity it provides and the job it does to achieve the result. If you must build too many things to achieve results, the value of that build starts to diminish.

Keeping this in mind, we can build our rule engine in two ways. We can leverage the Node-RED flow-based approach and simply add a message stream listener, which distributes messages to several other nodes based on certain preprogrammed criteria and then takes further actions. The keyword here is preprogrammed. Yes, that means we always have to manually preprogram the rules before deploying them, which makes the approach somewhat rigid. What if we must add a new rule on the fly? Or activate or deactivate an existing rule based on another rule's output? These scenarios lead us to another approach: query-based rules. And this is what we will build now.

Creating a Database

The first step in building a query-based rule engine is to define the data schema and add a data table in our time-series data storage. Let’s create this rule table, as shown in Figure 10-1.
../images/474034_1_En_10_Chapter/474034_1_En_10_Fig1_HTML.jpg
Figure 10-1

Rule engine data table schema

We are naming it the ruleEngine table. In this data table, we have a primary key with an autoincrementing value, id, which is required to refer to any rule going forward. ruleName is the readable name of the rule. The active column is binary in nature and defines whether the rule is active. Two columns define a rule's logic: topicPattern and payloadPattern. These hold SQL LIKE-style patterns for rule qualification; when a rule qualifies, we invoke the webHook URL using the HTTP method defined in the method column. By default, method is set to GET.

There could be several possibilities in which we can use this structure or augment it. We can certainly add a few more columns that can hold various other actions to be executed, values to be changed, scripts to be executed, or other global variables to be affected. We can also do all that in another web service or script file located at the webHook URL. It is cleaner this way.
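
Since Figure 10-1 is not reproduced in text form here, the schema can be sketched as a CREATE TABLE statement along the following lines; the column types and sizes are assumptions inferred from the descriptions above, so adjust them to match your figure.

```sql
-- Sketch of the ruleEngine table (types and sizes are assumptions)
CREATE TABLE ruleEngine (
    id             INT NOT NULL AUTO_INCREMENT,   -- rule reference, autoincrementing
    ruleName       VARCHAR(64),                   -- readable name of the rule
    active         TINYINT(1) NOT NULL DEFAULT 0, -- 1 = active, 0 = inactive
    topicPattern   VARCHAR(1024),                 -- SQL LIKE pattern for the topic
    payloadPattern VARCHAR(1024),                 -- SQL LIKE pattern for the payload
    method         VARCHAR(16) NOT NULL DEFAULT 'GET', -- HTTP method for the webhook
    webHook        VARCHAR(2048),                 -- URL to invoke when the rule fires
    PRIMARY KEY (id)
);
```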

Once this table is created, a typical entry looks like the one shown in Figure 10-2.
../images/474034_1_En_10_Chapter/474034_1_En_10_Fig2_HTML.jpg
Figure 10-2

Typical rule entry in our ruleEngine table

Let's look at what we have added to the table. At the outset, we are naming this rule timestamp rule, and the same appears in the ruleName column. The next field, active, is set to 1, which means the rule is active. If we want to disable this rule, we simply set active to 0. The topicPattern field has a value of timestamp%, and the payloadPattern field is %; both are patterns similar to what you would use in SQL queries. From the rule's perspective, this means that the rule should execute if the received topic matches the pattern timestamp% (i.e., the topic starts with the word timestamp and can have anything after that). For the payload, the pattern is set to % (i.e., anything is acceptable in the payload). The last two fields define the webHook information: the method field defines the type of call, and the webHook field holds the actual URL to be invoked.
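
To make the pattern semantics concrete, the SQL LIKE matching used by these two columns can be emulated in a few lines of JavaScript. This helper is purely illustrative and is not part of any flow.

```javascript
// Emulate SQL LIKE matching as used by topicPattern and payloadPattern:
// '%' matches any run of characters, '_' matches exactly one character.
function likeMatch(pattern, value) {
    var regex = pattern
        .replace(/[.*+?^${}()|[\]\\]/g, '\\$&') // escape regex metacharacters
        .replace(/%/g, '.*')                    // SQL '%' -> regex '.*'
        .replace(/_/g, '.');                    // SQL '_' -> regex '.'
    return new RegExp('^' + regex + '$').test(value);
}

// The timestamp rule: topic must start with "timestamp", any payload qualifies
console.log(likeMatch('timestamp%', 'timestamp/device42')); // true
console.log(likeMatch('timestamp%', 'temperature/device42')); // false
console.log(likeMatch('%', 'any payload at all')); // true
```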

Building the Flow Sequence

With this sample entry ready, let’s create a sequence for in-flow rule execution. Figure 10-3 shows the created sequence.
../images/474034_1_En_10_Chapter/474034_1_En_10_Fig3_HTML.jpg
Figure 10-3

Rule engine flow sequence in combination with database listener

Note

While we already had a database listener, we have tied our rule sequence to the same listener. This will enable us to execute rules faster and closer to the time of message reception in the message stream.

Here we have connected the first function block, search rules , to the MQTT listener; thus, every time a new message is in the stream, this block will search for corresponding rules. The MySQL node helps fetch those rules from the ruleEngine data table. Once we have the rules available, we are invoking webhooks and sending the output of a webhook to the debug node for displaying the output on a debug console.
// Search rules
// Note: a single quote in the topic or payload would break this
// concatenated query; escape such input in a production setup.
msg.topic = "SELECT * FROM ruleEngine" +
            " WHERE" +
            " ('" + msg.topic + "' LIKE topicPattern)" +
            " AND" +
            " ('" + msg.payload + "' LIKE payloadPattern)" +
            " AND active=1";
return msg;
The preceding code snippet shows the code written in the search rules block. The query written implements a reverse search technique. In a normal scenario, the query searches for columns matching a pattern; however, in this case, we are searching for a pattern that matches columns. The query also checks for only active rules (i.e., active=1). The call webHook block receives the output of the query and has the following code in it.
// Call webhook
if(msg.payload.length !== 0)
{
    for(var i = 0; i < msg.payload.length; i++)
    {
        // clone the message so each rule gets its own object;
        // reusing msg across node.send calls can leak mutations
        var newMsg = RED.util.cloneMessage(msg);
        newMsg.method = msg.payload[i].method;
        newMsg.url = msg.payload[i].webHook;
        node.send([newMsg]);
    }
}

The preceding snippet seems slightly unusual because it does not contain a return msg; statement. The node first checks the length of the payload and proceeds only if that length is non-zero. If no rule matches the criteria, the payload is an empty array, and the flow stops there (because there is no return statement after the if clause). If rules do match, the payload holds an array of rule objects, one element per matching rule.

With the for loop, we ensure that all matching rules are executed in sequence (i.e., the rule that was created first is executed first). MySQL typically returns rows in primary-key order when no ORDER BY clause is given, so rules execute in order of creation; the lowest id executes first. To guarantee this ordering, you can add an explicit ORDER BY id ASC to the search query.

When we have a rule object, we assign an HTTP calling method and URL to it to be passed on to the HTTP request node. Then we send this packet through using a node.send statement. It forms an input to the HTTP request node, which has the settings shown in Figure 10-4.
../images/474034_1_En_10_Chapter/474034_1_En_10_Fig4_HTML.jpg
Figure 10-4

HTTP request node configuration

This means that the HTTP request node will execute an HTTP call and return the output as a parsed JSON object, which we are simply sending to the debug output.

At this stage, we have our rule engine flow ready, along with one rule added to the rule table. In Figure 10-5, the webhook that we added is POST ( www.in24hrs.xyz:1880/modifiedTime/rule-engine-works ). This is essentially our own data publish API call, which means that when a rule is executed, the /pub API is called and it publishes another message under the modifiedTime topic and the rule-engine-works payload.
../images/474034_1_En_10_Chapter/474034_1_En_10_Fig5_HTML.jpg
Figure 10-5

Two rules matching same criterion and one matching subsequent criterion

Testing the Rule Engine

We can test this in several ways, but the best way is to check it with the Paho MQTT utility because you are able to see the action live (i.e., when you publish a timestamp message, it is picked up and the rule engine searches for a rule). Since we already have a matching rule available, it will be executed and another message will be published. While you are on the Paho page, you see this second message coming in live, in almost no time.

To see how powerful our rule engine is, update the ruleEngine table with additional rules, as shown in Figure 10-5.

Here, two rules match the first criterion (rule id 1 and id 2); when rule 2 executes, it publishes the message rule-engine-working-again. The third rule is configured to check not topicPattern but payloadPattern, matching payloads that end with the word again. This means that our second rule triggers the third rule.

Check this again in the Paho utility. Upon publishing something on the timestamp topic, you should see three additional messages follow it.

Note

Remember to carefully craft your rules and check for circular references. If not handled carefully, these could result in a loop, which can lock access in no time.
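
One way to catch such loops before deploying is a quick offline check. The sketch below assumes each rule is annotated with the topic its webhook ultimately publishes (publishesTopic is a hypothetical field for this check; the ruleEngine table itself does not store it).

```javascript
// Convert a SQL LIKE pattern into an anchored regular expression
function likeToRegExp(pattern) {
    var escaped = pattern.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
                         .replace(/%/g, '.*')
                         .replace(/_/g, '.');
    return new RegExp('^' + escaped + '$');
}

// Depth-first search over the "rule i triggers rule j" graph;
// a back-edge during the search means a circular reference exists.
function hasCircularRules(rules) {
    var visiting = {}, visited = {};
    function dfs(i) {
        if (visiting[i]) return true;   // back-edge: cycle found
        if (visited[i]) return false;   // already cleared
        visiting[i] = true;
        for (var j = 0; j < rules.length; j++) {
            if (likeToRegExp(rules[j].topicPattern).test(rules[i].publishesTopic)
                && dfs(j)) return true;
        }
        visiting[i] = false;
        visited[i] = true;
        return false;
    }
    return rules.some(function (_, i) { return dfs(i); });
}

// Rule 1 publishes "modifiedTime"; rule 2 listens on it and publishes
// "timestamp" again, which re-triggers rule 1: a loop.
var loopy = [
    { topicPattern: 'timestamp%',    publishesTopic: 'modifiedTime' },
    { topicPattern: 'modifiedTime%', publishesTopic: 'timestamp' }
];
console.log(hasCircularRules(loopy)); // true
```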

Rule Management APIs

To manage rules easily, we will create three APIs that
  • Enable or disable a specified rule by ID

  • Enable or disable all the rules at once (handy when you want to stop all rules)

  • Create a new rule with the callback (fulfills the M7 requirement of our wish list)

Enable and Disable a Specific Rule

The first API flow sequence is followed by functional code, as shown in Figure 10-6.
../images/474034_1_En_10_Chapter/474034_1_En_10_Fig6_HTML.jpg
Figure 10-6

Activate or deactivate rule flow sequence

We have created two APIs: /rules/enable/:id and /rules/disable/:id for enabling and disabling rules by their ID. These two similar function blocks for creating a query differ in that one sets the active value to 1 while the other sets it to 0.
// Create query - /rules/enable/:id
msg.action = "enable";
msg.topic = "UPDATE ruleEngine" +
            " SET active=1" +
            " WHERE" +
            " id=" + msg.req.params.id + ";";
return msg;
The preceding code snippet is for enabling a rule. We create the query and also add one variable to the msg object, action="enable". This enables us to respond properly in the prepare response function block. Accordingly, the prepare response function has the following code.
// Prepare response
msg.payload = {
    "status": msg.action + " success"
};
return msg;
We are using an action variable from an upstream block to create a meaningful response. Since we already have one rule with id=1 in our table, we can activate or deactivate it with the following cURL.
# curl -X GET https://www.in24hrs.xyz:1880/rules/disable/1
Output
{"status":"disable success"}

Now if you check the database, the rule has an active=0 value.

Enable and Disable All Rules

The second API that we will create enables or disables all the rules at once. This is a straightforward creation with a minor difference: the query will not check for any id value, so it applies to all the rules. The flow sequence for this API is shown in Figure 10-7.
../images/474034_1_En_10_Chapter/474034_1_En_10_Fig7_HTML.jpg
Figure 10-7

Activate or deactivate all rules

// Create query - rules/enableAll
msg.action = "enable all";
msg.topic = "UPDATE ruleEngine SET active=1;";
return msg;

The preceding code snippet, which builds the enable-all query, is self-explanatory. The code written for prepare response is the same as the earlier API's.

Create a New Rule

Now we will create the third API to add a new rule. As mentioned earlier, we can use this API to create new rules and to have applications register a callback on the go. The flow sequence is shown in Figure 10-8.
../images/474034_1_En_10_Chapter/474034_1_En_10_Fig8_HTML.jpg
Figure 10-8

Register a call back (add new rule) flow sequence

The flow sequence follows a standard pattern and creates an endpoint, /rules/add/:rulename, where the remaining parameters to the API are passed in the POST body. The first function, create query, then inserts a new rule record in the ruleEngine table. Note that the active field has a default value of 0, which means that a newly created rule is inactive by default. The code snippets for both create query and prepare response are shown next.
// Create query
var ruleName = msg.req.params.rulename;
var topicPattern = msg.req.body.topicPattern;
var payloadPattern = msg.req.body.payloadPattern;
var method = msg.req.body.method;
var webHook = msg.req.body.webHook;
msg.topic = "INSERT INTO ruleEngine (ruleName, topicPattern, payloadPattern, method, webHook)" + " VALUES" +
            " ('" + ruleName + "', '" + topicPattern + "', '" +
            payloadPattern + "', '" + method + "', '" + webHook + "');";
return msg;
// Prepare response
if(msg.payload.affectedRows !== 0)
{
    msg.payload = {
        "status": "rule added",
        "ruleName": msg.req.params.rulename,
        "ruleId": msg.payload.insertId
    };
    return msg;
}
Once the rule is added to ruleEngine, we send its ID in the response. Upstream applications or devices can use this for further actions on rules. Deploy this flow sequence and then use cURL to test the functionality, as follows.
# curl -X POST "https://www.in24hrs.xyz:1880/rules/add/testRule" \
--data-urlencode "topicPattern=%stamp" \
--data-urlencode "payloadPattern=%1234%" \
--data-urlencode "method=GET" \
--data-urlencode "webHook=https://www.in24hrs.xyz:1880/sms/to/+1234567890/message/pattern detected"
Output 1
{"status":"rule added","ruleName":"testRule","ruleId":4}
# curl -X GET https://www.in24hrs.xyz:1880/rules/enable/4
Output 2
{"status":"enable success"}

First, we create a new rule called testRule, which triggers if the topic matches %stamp (which could be timestamp or anything else ending in stamp) and the payload contains the digit sequence 1234 (pattern %1234%). When this condition is met, the rule sends us an SMS using our SMS API. Before you add this rule, make sure that you have inserted a valid mobile number in place of the dummy one.

Upon successful creation, we get the output and its ID (which is 4 in this case.) Remember that this rule is still inactive, so we use our enable API and enable it. Once our rule is active, head over to the Paho MQTT utility and publish something on the timestamp topic. Make sure that the payload has the number sequence 1234 in it (e.g., payload = 1544312340320). If everything is set up correctly thus far, the specified mobile number will receive an SMS that says, “Pattern detected.”

We can also create an additional API to delete the rule from ruleEngine. It is not explained or demonstrated here; however, you can follow the same logic as in the /purge API, and create it yourself.

Building Another Rule Engine with Node-RED

While the rule engine that we built relies on the time-series database to function, we can also build a rule engine with the Node-RED interface alone. However, as I mentioned earlier, it will not be dynamic in nature and must be manually modified every time you want to change the rules, at least to a large extent. Moreover, this method does not check multiple input parameters at once (i.e., it can check topic content or payload content, but not both together). Checking both at once is the biggest advantage of our core rule engine, which utilizes time-series data storage. For known and fixed rules, however, the Node-RED approach is an effective and efficient alternative. Let's see how it can be utilized.

Refer to Figure 10-9, which shows a rule engine with the three rules configured; the fourth one is a default value.
../images/474034_1_En_10_Chapter/474034_1_En_10_Fig9_HTML.jpg
Figure 10-9

Node-RED-based rule engine

This construct continuously listens to the message stream and uses a switch node from Node-RED for matching every message with predefined conditions. These conditions could be as simplistic as if message-payload = something, to anything complex with regular expressions (a.k.a. RegEx).

I have created two rules to demonstrate how this type of rule engine can work (see Figure 10-5). This method, however, will not allow us to modify these rules programmatically. If this is okay with your type of application or domain, you should use this method because it offers a higher level of ease.

In fact, you can use both constructs together with an appropriate mix of rules to match your needs. This would be even more powerful than either method alone. The three rules created with this method are shown in Figure 10-10. You will see that their subsequent actions differ from one another.
../images/474034_1_En_10_Chapter/474034_1_En_10_Fig10_HTML.jpg
Figure 10-10

Rules with Node-RED

You can test this rule engine in the same manner as the earlier one. The debug sidebar in Figure 10-10 shows sample test results.

Adding Authentication to the Data API

In simple terms, authentication means confirming your own identity, while authorization means granting access to the system. Simply put, with authentication, the system verifies the user, accessing system, or application. With authorization, the system verifies if the requester has access to the requested resource(s).

Since our focus has been on the core of the platform all along, it makes sense to build in some level of authentication, even though this would ideally be handled by an upstream application. Specifically, we will add topic-based access control to the data access API, following the same logic we used when adding access controls to the MQTT broker configuration.

What Are Our Options?

There are several authentication and authorization methods, and many systems utilize customizations of a few major approaches. The following are three popular approaches.
  • Basic. With HTTP basic authentication , the user agent provides a username and password to prove their authenticity. This approach does not require cookies, sessions, or logins, and so forth. Information is provided in the HTTP header. As many experts would suggest, the biggest issue with basic authentication is that unless the SSL is fully enforced for security, the authentication is transmitted in open and insecure channels, and thus is rendered useless. The username and password combination are encoded in the base64 format before adding to the header. In general, this option is good at balancing system costs and performance.

  • API key. This was created as somewhat of a fix to basic authentication and is a relatively faster approach. A uniquely generated random value or code is assigned to each user for authentication, and usually, such a string is long enough that it cannot be easily guessed. Additionally, setting up this type of system is relatively easy, and controlling these keys, once generated, is even easier since they can be managed fully from the server side. While this method is better than basic authentication, it is not the best.

  • OAuth. OAuth combines authentication and authorization to allow more sophisticated scope and validity control. However, it involves an authentication server, a user, and the system performing a handshake to authenticate, which then leads to authorization. This is a stronger implementation from a security perspective; however, it is also a time-consuming and costly proposition. Whether it fits your purpose depends on what you plan to do!
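
For reference, this is what the basic scheme's base64 encoding looks like in Node.js; the credentials here are made up for illustration.

```javascript
// Build an HTTP Basic Authorization header (illustrative; dummy credentials).
var user = 'platform-user', pass = 's3cret';
var header = 'Basic ' + Buffer.from(user + ':' + pass).toString('base64');
console.log(header); // Basic cGxhdGZvcm0tdXNlcjpzM2NyZXQ=

// Decoding on the server side reverses the process; note that base64 is
// an encoding, not encryption, which is why SSL is essential here.
var decoded = Buffer.from(header.split(' ')[1], 'base64').toString('ascii');
console.log(decoded); // platform-user:s3cret
```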

Note

In general, the use of the API key method provides the best compromise between implementation costs, ease of usage, and performance overhead. We will use the API key method for authentication. I prefer to keep things as simple as possible, because every time you make the solution unnecessarily complex, you are also likely to leave a hole in it.

In our implementation, we are using authentication only for data access APIs because that is the critical piece of currency our platform may hold. When you implement your own platform, you can take the lead from this implementation and extend it to all other APIs as you see fit.

What Is the Plan?

To implement simplified API-key-based authentication and access control, we will make use of another data table in the time-series database, which holds API keys and related information. We then ensure that every time an API call is received for a data request, we check the supplied key and assert access control by modifying our base queries. In previous chapters, we provisioned most of the APIs for this usage in the form of authFilter. The minimally required data schema for authTable is shown in Figure 10-11.
../images/474034_1_En_10_Chapter/474034_1_En_10_Fig11_HTML.jpg
Figure 10-11

Authentication table data schema

As you can see, the user column is only for usernames, token holds actual API keys, and access holds our access control SQL condition. The field named details is for capturing information about the user and last-change holds the timestamp, which is automatically updated each time we update the record. The following is an explanation of the fields in a typical record entry in the table schema.
  • user. Test user 1. This can be any alphanumeric username because it is the only placeholder for our purpose.

  • token. A 64-byte random string generated from our own API, /randomcode/64. We could also use the /uuid API instead; either works, depending on the requirements. This string is used as a bearer token in the HTTP header when making API requests.

  • access. Here we are adding an actual query clause (so be careful what goes here): topic LIKE 'timestamp%'. This clause ensures that when authFilter is applied, the query matches only topics that start with the word timestamp. This is how we restrict test-user-1 to a limited set of topics. The default value for this field is 0, which evaluates as false in a SQL query and thus returns no records at the output.

  • details . This is test user 1, who has access to only timestamp topic data and any subtopics under that.

This is a somewhat unconventional way to use the API key, but it is perfectly legal from a programming and coding perspective.
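
Based on the field descriptions above, authTable could be created along these lines; the types and sizes are assumptions, so adjust them to match Figure 10-11. Note how the last-change column uses ON UPDATE CURRENT_TIMESTAMP so that MySQL refreshes it automatically each time the record changes.

```sql
-- Sketch of authTable (types and sizes are assumptions)
CREATE TABLE authTable (
    id            INT NOT NULL AUTO_INCREMENT,
    user          VARCHAR(64),                    -- alphanumeric username placeholder
    token         VARCHAR(128),                   -- API key (e.g., from /randomcode/64)
    access        VARCHAR(1024) NOT NULL DEFAULT '0', -- SQL clause used as authFilter
    details       VARCHAR(256),                   -- free-form notes about the user
    `last-change` TIMESTAMP DEFAULT CURRENT_TIMESTAMP
                  ON UPDATE CURRENT_TIMESTAMP,    -- auto-updated on every change
    PRIMARY KEY (id)
);
```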

Adding Authentication Middleware

To implement this logic, we are going to modify the Node-RED settings.js file ; and while we do that, we will need another Node.js module called mysql. Let’s first install the module via command line, as follows.
# npm i mysql -g
Output
updated 1 package in 0.553s

Now open the settings.js file and search for the httpNodeMiddleware section. As the comments for this section state, this property can be used to add a custom middleware function in front of all HTTP in nodes. This allows custom authentication to be applied to all HTTP in nodes, or any other sort of common request processing.

Remove comments from this code block and update it with the following code.
httpNodeMiddleware: function(req, res, next) {
    function getData(query, cbFunction){
        var connection = require('mysql').createConnection({
            host: 'localhost',
            user: '<your-database-username>',
            password: '<your-database-password>',
            database: 'tSeriesDB'
        });
        connection.query(query, function(err, rows, fields){
            if(err)
                cbFunction(false, err);
            else
                cbFunction(true, rows);
        });
        connection.end();
    }
    // get auth details from request header
    if(req.headers.authorization)
    {
        var auth = Buffer.from(req.headers.authorization, 'ascii').toString();
        // split the string at space-character
        // typical auth header is like Bearer <access-token>
        req.authType = auth.split(' ')[0];
        req.userToken = auth.split(' ')[1];
    }
    else
    {
        // take some actions if user is not authorized or provide only basic access
        req.authType = 'None';
        req.userToken = 'null';
    }
    getData("SELECT * FROM authTable WHERE token = '" + req.userToken.toString() + "' ORDER BY id DESC LIMIT 1;", function(code, data){
        // if data query is successful
        if(code === true)
        {
            // if authorization details are not available i.e. data.length === 0
            if(data.length === 0)
            {
                // set authFilter="0" if user authorization info is not available
                req.auth = false;
                req.authFilter= 0;
            }
            else
            {
                // use pass access string
                req.auth = true;
                req.authFilter = data[0].access;
            }
            // pass control to http node
            next();
        }
        else
        {
            // if there was an error, respond with 403 and terminate
            res.status(403).send("403: FORBIDDEN").end();
        }
    });
},

In the preceding code snippet, the middleware function does three main tasks.

First, it defines a user-defined function, getData, which uses the mysql library to connect to our time-series database and execute the supplied query. When the query completes or fails, the callback function is invoked with a status flag and the resulting rows (or the error).

Second, the middleware checks every HTTP API request for an authorization header. If one is present, the auth type and access token are extracted into request variables; if not, they default to None and null.

Once the token is captured, the middleware looks up the access string defined for that token in authTable. This access string, if found, is assigned to the authFilter variable, which we use later when building SQL queries. If no valid token was supplied, the filter is set to 0, which yields zero records from any query.
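
To see how authFilter shapes the final data query, here is a small sketch; the table name thingData and the exact query shape are assumptions based on the data APIs from earlier chapters, so map them to your own /get handler.

```javascript
// Sketch: how a /get/:topic handler could splice authFilter into its query.
// The access clause from authTable becomes an extra AND condition.
function buildDataQuery(topic, authFilter) {
    return "SELECT * FROM thingData" +
           " WHERE topic = '" + topic + "'" +
           " AND (" + authFilter + ")" +       // access clause from authTable
           " ORDER BY id DESC LIMIT 1;";
}

// Authorized request: the filter restricts results to permitted topics
console.log(buildDataQuery('timestamp', "topic LIKE 'timestamp%'"));
// Unauthorized request: authFilter is 0, which matches no rows at all
console.log(buildDataQuery('timestamp', '0'));
```

An access string such as "topic LIKE 'timestamp%' OR topic LIKE 'temperature%'" would widen a user's access to two topic trees while keeping the same query-building logic.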

Enable and Test Authentication

Update settings.js appropriately, save it, and then restart Node-RED. Since Node-RED might have been already running with the forever utility, it is easy to restart it. Simply check for the PID of this process with the forever list command, and then send another command: forever restart <PID>. This restarts your Node-RED instance, and it reloads the updated settings.js file with authentication middleware code.

To test, let’s first issue a simple cURL command without any authorization headers. With this command, we should get nothing in the output because authFilter is set to 0.
# curl -X GET https://www.in24hrs.xyz:1880/get/timestamp
Output 1
[]
# curl -X GET "https://www.in24hrs.xyz:1880/get/timestamp" -H "Authorization: Bearer <token>"
Output 2
[{"id":149,"topic":"timestamp","payload":"partial-match","timestamp":"1544691441.578"}]

It is functioning as expected, and we are getting data only when we send the Bearer token . Now try adding a few more users with different tokens and change their topic access with simple SQL syntax. If you wish to provide access to multiple topics, use the OR operator to build the authFilter condition.

Our Core Platform Is Ready Now

While the message router has been represented as a separate logical block, it is really integrated functionality that we built in multiple passes throughout the last few chapters. The MQTT message broker, database listener, rule engine, and REST APIs together form a functional message router.

Figure 10-12 shows the final set of blocks that we built; they are functional now.
../images/474034_1_En_10_Chapter/474034_1_En_10_Fig12_HTML.jpg
Figure 10-12

Our own IoT platform core is now fully ready and functional

The device manager and application/user management are essentially applications that can use our existing APIs for functioning. These applications can be developed and presented with a nice user interface for regular usage and device configurations to be attached to our IoT platform.

Summary

In this chapter, we created one of the critical blocks of our IoT platform. We also implemented authentication to our REST APIs, and thus secured them. Essentially, our core IoT platform is now ready.

In the next chapter, we see how to document our platform API and make it test-ready for developers. Going forward, this will also help you with regular testing of incremental changes in the platform, so that whenever you make a change to the API or message broker, or if you add a new API, testing can be done almost immediately in a convenient manner.
