© Joseph Faisal Nusairat 2020
J. F. NusairatRust for the IoThttps://doi.org/10.1007/978-1-4842-5860-6_6

6. Security

Joseph Faisal Nusairat1 
(1)
Scottsdale, AZ, USA
 

If we deployed our site as it's currently designed, it would not be very secure; right now, anyone could access our message queues and add data to them, or hit all of the endpoints in our microservice. This could somewhat work if you were running on a home network (although it would still be vulnerable to anyone who gets on your network). In fact, we don't even have users; this obviously would not make a great application in a multi-customer environment. Even as a home project, we'd be locked into one person.

Regardless, we’d be remiss if we didn’t discuss how to secure our application. There are many tools and methodologies to secure a modern website. For this book, I am going to focus on just two of them for our application:
  • REST/GraphQL layer

  • MQTT layer

The first layer is the set of endpoints exposed on the microservice, used either by internal applications talking to each other or by external entities wanting to gain information from that layer. We will be making use of the REST and GraphQL endpoints created so far. For those endpoints, we are going to add authentication checking, since all of our endpoints (except health) require a user authenticated via OAuth 2, which is fairly standard for any site. For the message queues, we will use SSL certs to secure the communication between the endpoints. These will be a set of X.509 certs that ensure the traffic is not only encrypted but also restricted to trusted endpoints. Don't worry if some of those terms are confusing; we will get into what everything means and how to glue it together in a bit.

What We Aren’t Covering

To make a truly secure site, there are many other items you would want to add. Security is its own department at most large companies, and they even conduct formal architecture reviews of applications to make sure they are secure. If you consider the data we will be storing (personal video data), security is VERY important, because you wouldn't want your or your customers' videos exposed to the Web. We won't be able to get into all the techniques used to secure an application. However, one of the biggest pieces of software that companies large and small make use of is monitoring software. Monitoring software can help you detect denial-of-service attacks and other attempts to penetrate your network. In addition, it is good for triage after an attack, to determine what vulnerabilities attackers were attempting to exploit and what they couldn't. There are many paid and open source solutions for this that you can explore on your own.

One of our biggest gaps is that communication between the microservices will not be over TLS; it will be over plain HTTP. This is far from ideal, but I didn't want to set up TLS communication between the endpoints in the book. In a bigger environment, this is where I'd recommend a service mesh like Istio, which can help control the flow of traffic to and between services in a secure manner. With tools like Istio, mutual TLS becomes automatic and is just another way to make sure your endpoint traffic is secure.

Goals

Alas, we won’t be covering either of those, but what we will cover will be these:
  1. Setting up and configuring an account with Auth0 to be used for security

  2. Setting up logged-in communication between a user and our services

  3. Creating authenticated connections between microservices

  4. Learning how to use certificates for MQTT

  5. Creating self-signed certificates

  6. Starting up eMQTT with self-signed certificates

  7. Using self-signed certificates between client and MQTT

Authenticate Endpoints

In this section, we are going to go through how to secure the endpoints in our microservices with authentication. This will prevent users or hackers from using the application to read or write data without authentication. In addition, it will let us know which user we are dealing with when creating media data and querying the data. For our application, we are going to use tried and true technologies: Open Authorization 2.0 (OAuth 2) authorization, with Auth0 providing the login portal tools. First, let's talk a little about how we got to using OAuth 2 and Auth0.

In the first 10–15 years of the World Wide Web's mainstream existence, most applications were web based, and more often than not those web applications were intrinsically tied to their backend servers, which generated the HTML for the application themselves. Everything was highly coupled. We used our own databases on our own servers; we'd create everything from scratch. This isn't much different from the days of the first computers, when each manufacturer had its own languages for its own computers. Everything on a site was generally coded uniquely for that site. And even when we used frameworks, the storing of data was still centralized to the application. However, this model of everything under one roof started to change, especially when it came to authentication.

As use of the Internet evolved, we started to have more complex websites and even mobile sites. Instead of creating one application that contained both the web code and the backend code, we started to segregate those units into their own more stand-alone parts. With the addition of mobile, this became another stand-alone application. Each of these applications still spoke to the same backend, but you were no longer generating HTML from that backend. And even within the backends, services had to communicate with other servers with knowledge of the user. All of this led to the creation of a standard, OAuth, to allow authentication and authorization across different systems.

As for the actual code for authentication, it's hard to implement securely enough that no one hacks it, and by its nature, it's fairly generic. Login across lines of business is relatively the same: you have a login and password, as well as the ability to recover a forgotten password. I've implemented this myself many times, and it's relatively repetitive. But doing it yourself, you have to constantly worry about security: what if someone were to hack your site? You would expose tons of customer emails, at which point you'd have to disclose, embarrassingly, that their info was exposed. However, if you don't manage it on your own, you lower the risk of storing any personally identifiable data; all an attacker would get is UUIDs they can't correlate, so there is less embarrassment and risk. This led to companies specializing in OAuth services with low subscription costs, and it just made more sense to spend money on it than to spend time.

Authorization vs. Authentication

These two concepts go hand in hand, and we will use them throughout this section as we discuss securing the endpoints for a given user. However, it's important not to conflate the terms. Let's define what Authentication (AuthN) and Authorization (AuthZ) mean.

Authorization

Authorization is the system that decides whether you should have access to a particular set of resources to perform tasks. It can decide whether a subject (in our case, usually the user) can access a particular endpoint and what they can do there. It is even used to determine whether two microservices can talk to each other, which helps secure your microservices. Authorization determines what permissions a particular subject is allowed. Most often, the system one uses for authorization is OAuth 2.

Authentication

Authentication is the process of identifying the subject, most often a user: determining who that subject is and associating a unique identifier with them. This id is then stored in your databases and used as a reference when calling the application. The backing store can be your Lightweight Directory Access Protocol (LDAP) system, your Azure AD, or even some homegrown system. Most often, the authentication system interoperates with OAuth 2; the best-known implementation of this is OpenID Connect.

OAuth 2

The OAuth framework has been around since about 2006, and the OAuth 2 spec came a few years later and has been in use ever since without any major changes. OAuth has become the industry standard for authorization and is specified in the OAuth 2 Authorization Framework.1

OAuth 2 allows applications to grant access to services and resources by using an intermediary service to authorize requests and grant user access. This lets you enable resource access without passing around unencrypted client credentials. Users receive a JSON Web Token (JWT) or an opaque token from an authorization server that can then be used to access various resources. These tokens can be passed to various other services as long as the token has not expired. The client doing the requesting doesn't even have to be aware on each call which resource the token belongs to.
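In practice, "passing the token to various services" just means sending it in an HTTP Authorization header. As a minimal sketch (the token value here is made up, and a real client would attach this via its HTTP library):

```rust
// Build the Authorization header name/value pair for a bearer token.
// The receiving service validates the token before serving the request.
fn bearer_header(token: &str) -> (String, String) {
    ("Authorization".to_string(), format!("Bearer {}", token))
}

fn main() {
    // A made-up token for illustration.
    let (name, value) = bearer_header("sHiy83rLMLEqrxydFMOxjLVyaxi-cv_z");
    println!("{}: {}", name, value);
}
```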

OpenID Connect

OpenID Connect is the authentication layer that sits on top of OAuth 2.2 OpenID Connect allows you to perform authentication, returning JWTs that are easily usable in the OAuth 2 world. These JWTs contain claims that tell you additional information about the user. Standard properties include name, gender, birth date, and so on. JWTs can also be signed. If they are, and you have the keys to confirm the signature, you can generally trust that data; if not, you can use the JWT to query an endpoint for this information instead.

Applying AuthZ and AuthN

Authentication is pretty standard for most applications, and you've probably never built an application without it, even if you didn't set it up yourself. The difference for this application may be how you authorize. In most applications you work with, there is a standard web portal or a forwarded portal via your mobile application. These allow authentication of the user through a standard username and password. And this works well on your iPhone or desktop because you have a full-size keyboard at all times. However, on most IoT devices, even those with a touch screen, we won't always want to make the user type out the username and password every time. It can be time-consuming and error-prone, causing a poor user experience.

There is another way, of course, and you've probably used it before even if you weren't aware of it at the time: the device authentication flow. With the device flow, instead of using the device to log in directly, you use a website. The device will prompt us to log in; it will then supply a URL and a device code. We then go to the website and enter the device code when prompted, then log in. In the meantime, our application pings the authorization server to see if the user has been authenticated. Once the user is authenticated, the system gets a response that the user is authorized, which includes an access token and optionally a user token. At that point, the device considers itself authenticated and continues performing whatever actions it needs to get the rest of the data.

In Figure 6-1, we have an authorization flow that shows how we can use authentication services to request data.
../images/481443_1_En_6_Chapter/481443_1_En_6_Fig1_HTML.jpg
Figure 6-1

Authorization flow with device

Auth0

When using OAuth 2 with OpenID Connect, there is a multitude of ways to put this all together. A common approach is to create your own authorization system but rely on an OpenID Connect provider to run the authentication flow. This is what's happening when you go to a website, are asked to log in, and see the redirects for Google, Facebook, and so on. One of the main reasons people use this approach is security and safety. People may not trust adding their username and password to your site, but they trust another. In addition, it means you don't have to remember a username and password for each site you use. It also takes away the onus of holding their user data; if your site gets compromised, it's best to have less personally identifiable data rather than more. However, many people still like to maintain control of the authorization needs, the OAuth 2 portion, partly because of how many frameworks are out there that easily interoperate with it.

For our application, we are going to use a provider that can handle both aspects for us: Auth0. And for our testing needs, Auth0 is free to use; even at small production quantities, it remains free. Auth0 allows us to choose between using the Google authentication model or its own built-in database. If you want more information on the OpenID Connect system, you can go here: https://auth0.com/docs/protocols/oidc.

For our system, we are going to use Auth0's built-in database (this cuts back on a few extra steps we'd have to take with setting up a flow with Google). The great thing about this is that we could even import an existing database from our system into Auth0 if we needed to. Our examples going forward will use the Auth0 authentication/authorization endpoints to run our user management and security checkpoints. However, most of the code we are writing would work with any provider; you may just have to adjust the parameters, since those can differ between providers, but the general flow is the same.

Setting Up Auth0
Let's start by walking through setting up Auth0; it's a fairly straightforward process, but it's always good to be on the same page. Head to https://auth0.com/ and click Sign Up. In Figure 6-2, you will need to pick your method for signing up; I just used my GitHub sign-in, but it's up to you.
../images/481443_1_En_6_Chapter/481443_1_En_6_Fig2_HTML.jpg
Figure 6-2

Start

Once signed in, you may need to pick a plan, although for most the free plan will have been selected automatically; in Figure 6-3, I'm picking the free plan. It is limited to 7K active users, but for a demonstration app, or even a beginning application, it should be more than enough.
../images/481443_1_En_6_Chapter/481443_1_En_6_Fig3_HTML.jpg
Figure 6-3

Click the START NOW for $0/month plan

Now we will start configuring the application. In Figure 6-4, you will take the first of two steps: decide on a subdomain name for your Auth0 app and the region where the majority of your clients will be located (you obviously can't use mine).
../images/481443_1_En_6_Chapter/481443_1_En_6_Fig4_HTML.jpg
Figure 6-4

Pick our domain name and region

Next, in Figure 6-5, choose the ACCOUNT TYPE; this part does not matter much, and I barely filled it in.
../images/481443_1_En_6_Chapter/481443_1_En_6_Fig5_HTML.jpg
Figure 6-5

Fill in more relative information

Once completed, we will get the dashboard that appears in Figure 6-6.
../images/481443_1_En_6_Chapter/481443_1_En_6_Fig6_HTML.jpg
Figure 6-6

The interactive dashboard for our Auth0 managed application

At this point, we have our dashboard and a login set up, but nothing that can actually interact with the application. We need to add two things: a user, and an application to use. The application will provide us a client id that our framework can interact with. Applications are independent of our user database, and we can have them interact with many or just a few of our users. Each application allows for different types of authentication flows, from a web flow, to a device flow, to service-to-service communication. Let's go through the process of creating an application that will work with device flows.

Create Authorization

In the left-hand navigation, you will see Applications; click the link and select CREATE; you will get a set of options like in Figure 6-7.
../images/481443_1_En_6_Chapter/481443_1_En_6_Fig7_HTML.jpg
Figure 6-7

List of application types we can create

There are four different options, and we can use two of them for our applications:
  • Native applications – The best way to describe these is any application that requires a login that is NOT from a website. This will be your mobile apps, Apple TV apps, and any device applications, which in our case are your IoT device apps.

  • Single-page web applications (SPAs)  – These are your React, Angular, and any modern web application. You’d likely use this as well if you wanted a support application, but the coding of this is out of scope for our book.

  • Regular web applications – These are more traditional, older style applications – your Java and ASP.NET apps. Many more traditional shops would use this approach, but to be fair, the SPAs are the more common way of creating web applications these days.

  • Machine to machine – These are for applications in your microservice world that do not have a user associated with them. This includes any batch application that needs to access endpoints, or any service-to-service calls that do not act on behalf of a user.

To begin, let's create the native application first; this will allow us to authenticate and test against a user. Enter any name you want and then go ahead and click Submit. Once you submit, you will be brought to the "Quick Start" page; select the second tab, "Settings", and it should look like Figure 6-8.
../images/481443_1_En_6_Chapter/481443_1_En_6_Fig8_HTML.jpg
Figure 6-8

Native application created

This shows that our native application has been created; here you will see the client ID and are able to view the secret. We will be using this client ID later when validating our login, so take note of it and copy it to a text editor, since we will need it shortly. There is one more interesting section to look at; scroll down to the bottom of the page and select "Advanced Settings" and then "Grant Types". I have that shown in Figure 6-9.
../images/481443_1_En_6_Chapter/481443_1_En_6_Fig9_HTML.jpg
Figure 6-9

Grant Types for our native authentication

These will be different for each application type (with some overlap); they are what make the application types unique from each other and suited to the different purposes they serve. As you can see, for this one the Device Code grant is what will allow us to use the device authorization flow as our authentication mechanism. Make sure it is selected; if not, your first query will return an error stating that the device code is not allowed for the client.

Take note of this screen; we will circle back to it in a bit when we make our first query. For now, let’s move on to creating a user.

Create User

We could use Google or Facebook authentication, but for the book, we are going to use the Auth0 existing authentication model. Let’s set up an example user to use. Head back to the main dashboard page and click “Users & Roles”; you will get a page like in Figure 6-10.
../images/481443_1_En_6_Chapter/481443_1_En_6_Fig10_HTML.jpg
Figure 6-10

Users to add to

In Figure 6-11, go ahead and create a user using your email address and whatever password you want.
../images/481443_1_En_6_Chapter/481443_1_En_6_Fig11_HTML.jpg
Figure 6-11

Creating a user

Once created, you will have a user registered and will be able to start authenticating against that user. Figure 6-12 shows the final screen when the user is created.
../images/481443_1_En_6_Chapter/481443_1_En_6_Fig12_HTML.jpg
Figure 6-12

User created screen

However, you'll notice that under the EMAIL header the user is marked as "pending"; go to your email account, and you will have a verification email waiting for you. Figure 6-13 has a sample of the email you should receive; click "VERIFY YOUR ACCOUNT".
../images/481443_1_En_6_Chapter/481443_1_En_6_Fig13_HTML.jpg
Figure 6-13

Email verification

That finishes your user setup. We now have our user set up in Auth0, as well as an application set up to make OpenID Connect authentication queries against, which will return a JWT we can use for OAuth 2 interactions.

Authenticating

We haven't focused on it yet in our application, but part of what we are going to have to do is associate a user with the records. After all, we are building this system to handle more than just our own personal device; we want it to handle a multitude of devices and users connecting to our system. What we are going to go over is how to use the command line to trigger the device flow authentication, authenticate, and then use the JWT that is returned to make subsequent user-authenticated calls to our retrieval service.

Device Flow

We went over the device flow earlier; now let's implement it. Later the device will make these calls, but for now let's call from the command line. We will make a call to the device/code endpoint on Auth0, https://rustfortheiot.auth0.com/oauth/device/code. This endpoint takes a few parameters:
  • client_id – If you recall when I mentioned we'd circle back to the client ID, here is where you need it. Since we can have multiple applications on the rustfortheiot tenant, this determines WHICH application we are choosing.

  • scope – A space-separated list defining what type of token, with what access, should be created.

The scopes help us define the authorization and what information will be returned for our use. You can include one or many of the scopes. In Table 6-1, we list what each provides for the authorization.
Table 6-1

Variety of scopes for Auth0

Scope

Description

openid

By default, the system gives us back an access token that can be traded in for user authentication information. But if we want a JWT that already has user info in it, we supply openid, and we get an id_token back as well.

offline_access

If this is going to be a longer-lived session (as with a device), we will need the ability to refresh our access token periodically; if not, the user will have to re-authenticate. With this scope, the response includes a refresh_token that can be used for re-authorizing.

profile

By default, the access_token can only be traded in for the subject; if you want the ability to retrieve more than that, supply the profile scope.

email

If all you need is the email, you can supply just the email scope to retrieve the user info for it.

We will use all those scopes in our example request so that we can see the full results. Let's put it all together in a command-line request. In Listing 6-1, we curl the device/code endpoint with our parameters. This gives us back a URL and a device code; we can go to the site and fill in the code to start the login process.
➔ curl --request POST \
  --url 'https://rustfortheiot.auth0.com/oauth/device/code' \
  --header 'content-type: application/x-www-form-urlencoded' \
  --data 'client_id=rsc1qu5My3QZuRPZHp5af5S0MBUcD7Jb' \
  --data scope='offline_access openid profile email'
{
    "device_code":"EINQsGUod_tIuDO05wW2kZ8q",
    "user_code":"KSPT-LWCW",
    "verification_uri":"https://rustfortheiot.auth0.com/activate",
    "expires_in":900,
    "interval":5,
    "verification_uri_complete":"https://rustfortheiot.auth0.com/activate?user_code=KSPT-LWCW"
}
Listing 6-1

Curl request to get a device code and URL

The JSON returned a few properties; let’s take a look at what these properties are:
  • device_code – This is a unique code we will use in subsequent calls to the system to receive back the access_token and to check whether the user is logged in.

  • user_code – This is the code the user will type in at the URI to identify the device that is trying to be authenticated. It's a short code so that it's easy for a person to remember and type into a web page.

  • verification_uri – This is the URI the user goes to in order to log in with the user code.

  • expires_in – This is the amount of time, in seconds, that the user has to log in before the device code expires; here it's 15 minutes – plenty of time.

  • interval – This is the interval, in seconds, at which you should recheck whether the user has been authenticated.

  • verification_uri_complete – This is the complete URI for verification; it isn't as necessary for a visual device authorization, but if your authorization is triggered by a text message or other means, it is useful to forward to the system.
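To give a feel for consuming this response on the device, here is a stdlib-only sketch that pulls a string field out of the JSON body. The naive substring search is purely for illustration (a real client would use a proper JSON parser such as serde_json), and the sample body is trimmed from Listing 6-1:

```rust
// Naive extraction of a string field from a flat JSON object.
// Illustration only -- use a real JSON parser in production code.
fn json_str_field<'a>(body: &'a str, key: &str) -> Option<&'a str> {
    let marker = format!("\"{}\":\"", key);
    let start = body.find(&marker)? + marker.len();
    let end = body[start..].find('"')? + start;
    Some(&body[start..end])
}

fn main() {
    let body = r#"{"device_code":"EINQsGUod_tIuDO05wW2kZ8q","user_code":"KSPT-LWCW","verification_uri":"https://rustfortheiot.auth0.com/activate","expires_in":900,"interval":5}"#;
    // The device would display these two values to the user...
    println!("{}", json_str_field(body, "user_code").unwrap());
    println!("{}", json_str_field(body, "verification_uri").unwrap());
    // ...and keep device_code for polling oauth/token.
}
```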

The preceding device_code is used to get the status of your login; this is what our device will use to check whether the user is authenticated. We make a call to oauth/token to determine whether or not a user is authenticated, passing in the preceding device_code and a grant_type of device_code. If you recall from earlier, the Device Code grant type was enabled for our native application, which is why we chose it.

We have to periodically check the server to see if the user has been authenticated; in Listing 6-2, we perform a request against the oauth/token endpoint to do this check. Note: We haven't actually authenticated yet, so we would expect it not to work.
➔ curl --request POST \
  --url 'https://rustfortheiot.auth0.com/oauth/token' \
  --header 'content-type: application/x-www-form-urlencoded' \
  --data grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Adevice_code \
  --data device_code=EINQsGUod_tIuDO05wW2kZ8q \
  --data 'client_id=rsc1qu5My3QZuRPZHp5af5S0MBUcD7Jb'
{
    "error":"authorization_pending",
    "error_description":"User has yet to authorize device code."
}
Listing 6-2

Curl request to check if the user has been authenticated

The grant_type is a URL-encoded representation of the string urn:ietf:params:oauth:grant-type:device_code and is part of the OAuth 2 spec for the device access token request.3 The endpoint has behaved as expected, giving us an authorization_pending response since we haven't been authenticated. Besides success, let's take a look at what other error conditions we may encounter, in Table 6-2.
Table 6-2

Various response errors for OAuth token

Code

Description

authorization_pending

The user has not attempted to authorize against the given user code.

slow_down

Your application is requesting the authorization status too often; slow down your requests.

expired_token

The token has expired before the user has been authorized. In our case, this means there was no authorization within 15 minutes.

access_denied

The user has been denied access to the given resource.
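The device-side handling of these response codes can be sketched as a small polling loop. Everything below is illustrative: the poll closure stands in for the real HTTP request against oauth/token, and adding five seconds on slow_down is just one reasonable backoff policy.

```rust
use std::time::Duration;

// Possible outcomes of one poll against oauth/token.
enum Poll {
    Pending,       // authorization_pending: user hasn't logged in yet
    SlowDown,      // slow_down: increase our polling interval
    Denied,        // access_denied or expired_token: give up
    Token(String), // success: the access_token
}

// Drive polling until we get a token or give up. `polls` stands in for
// the real HTTP request; `interval` starts at the server-suggested value.
fn wait_for_token(mut interval: Duration, polls: &mut dyn FnMut() -> Poll) -> Option<String> {
    for _ in 0..100 {
        match polls() {
            Poll::Pending => {} // keep waiting
            Poll::SlowDown => interval += Duration::from_secs(5),
            Poll::Denied => return None,
            Poll::Token(t) => return Some(t),
        }
        // In real code we would sleep here: std::thread::sleep(interval);
    }
    None
}

fn main() {
    // Simulated server responses, popped in reverse order:
    // Pending, then SlowDown, then the token.
    let mut responses = vec![
        Poll::Token("sHiy83rLMLEqrxydFMOxjLVyaxi-cv_z".into()),
        Poll::SlowDown,
        Poll::Pending,
    ];
    let mut poll = || responses.pop().unwrap();
    let token = wait_for_token(Duration::from_secs(5), &mut poll);
    println!("{}", token.unwrap());
}
```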

How do we get a successful response? Let's go back to the initial URL we were given and visit the site; in Figure 6-14, we go to https://rustfortheiot.auth0.com/activate and enter the user code.
../images/481443_1_En_6_Chapter/481443_1_En_6_Fig14_HTML.jpg
Figure 6-14

Authorizing the device

Here you enter the code and select "Confirm"; you will then be asked to log in with the username and password we set up earlier. If everything is valid, you will receive a confirmation message like in Figure 6-15.
../images/481443_1_En_6_Chapter/481443_1_En_6_Fig15_HTML.jpg
Figure 6-15

Confirming the authentication

You will notice I have added the Apress logo; you are able to customize the login screens in the Auth0 dashboard as well.

Let's go back to oauth/token and make another request now that we are authorized; in Listing 6-3, we get a standard OAuth 2 response that will be familiar to you if you've used OAuth 2 in the past.
{
    "access_token": "sHiy83rLMLEqrxydFMOxjLVyaxi-cv_z",
    "refresh_token": "hnsureBL2jfb62UINDmgjt4F6vZBp0etExeoDja5qGy1Y",
    "id_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1N...xTA8WsM3vxC0Hwy__2g",
    "scope": "openid profile offline_access",
    "expires_in": 86400,
    "token_type": "Bearer"
}
Listing 6-3

Using the previous curl here is the new response to the authenticated system

But for those who haven't, let's take a look at what each of these properties means. In Table 6-3, we break down each of the properties from the token. Also note that the id_token is usually much longer; I shortened it so it would take up less space.
Table 6-3

Our tokens for authentication

Token

Description

access_token

The access token is passed in the authorization phase to check the credentials of the calling application. It informs the API that the bearer of the token has been authorized to access the API.

refresh_token

When the access token has expired, the refresh token can be used to obtain a new access token without requiring the user to re-authenticate.

scope

The scopes the tokens have access to.

id_token

The ID token is used to cache and parse the user profile information after a successful authentication. The application can then use this data to personalize the user experience.

In our token retrieval, the access_token has a 24-hour life (computed from the expires_in); once that elapses, the application should use the refresh_token to get another access_token before making any more calls to the backend system. The refreshing of the token will be handled by the framework we use, so you won't have to code it on your own. But we will use all these tokens in our application.
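A simple way to reason about that 24-hour window is to record when the token was issued and compare the elapsed time against expires_in. This sketch, using only std::time, is merely an illustration of the bookkeeping a framework does for you; the 60-second early-refresh margin is an assumption, not anything Auth0 mandates:

```rust
use std::time::{Duration, Instant};

// Tracks an access token's issue time and lifetime.
struct TokenState {
    issued_at: Instant,
    expires_in: Duration,
}

impl TokenState {
    // Refresh slightly early (60s margin) so we never send an expired token.
    fn needs_refresh(&self, now: Instant) -> bool {
        now + Duration::from_secs(60) >= self.issued_at + self.expires_in
    }
}

fn main() {
    let issued = Instant::now();
    let state = TokenState { issued_at: issued, expires_in: Duration::from_secs(86_400) };
    // Fresh token: no refresh needed yet.
    println!("{}", state.needs_refresh(issued));
    // Pretend the full 24 hours have passed: time to use the refresh_token.
    println!("{}", state.needs_refresh(issued + Duration::from_secs(86_400)));
}
```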

The ID token is not strictly necessary to obtain user information; you could obtain the same information from the /userinfo endpoint by sending it the access token. However, the ID token saves us a call, and since it comes back from the authentication service we know, we know it's trusted data.

Processing the Tokens

Now that we have our tokens, what do we do with them? We parse them and use them for access and for deciding what to do for the user. In our system, every call will require a user, meaning we will need to translate the tokens to our user and use that for access. Let's take a look at each of the tokens we have, how we are going to use them, and the code for it.

ID Tokens
The first to go over is the ID token; as we mentioned before, ID tokens are only to be used by the authentication section of the application and only to retrieve more data about the user. You shouldn't use one as an access token and send it to other services. The token is listed in Listing 6-3 in the field id_token. The token is a JSON Web Token (JWT), and JWTs are easily decomposable; you can retrieve data from them without calling out to any other service. In fact, if you haven't used one before, there is a site, jwt.io, which can help you examine the contents of the token. Go ahead and take the contents of the token output above (from your own screen; it may be a bit hard to copy that entire token from a book) and paste it into jwt.io under the Encoded tab. You should get an output similar to Figure 6-16.
../images/481443_1_En_6_Chapter/481443_1_En_6_Fig16_HTML.jpg
Figure 6-16

The decomposition of the encoded id token

This reveals quite a bit of data, including the algorithm used to encode the signature, the subject, the expiration, and so on. Of course, the big question is how do you trust this data? Since it's a decomposable JWT, anyone can create one. There are two reasons we can trust it:
  1. You called this from a localized microservice authentication, and this was the direct response; hence, it wasn't given by any middleman service.

  2. We can use a public key from the authentication provider in our service to guarantee this originated from the resource we expected it to be from.

If you look back at Figure 6-16, look at "VERIFY SIGNATURE"; we have an RS256 public key for Auth0 that will be unique to your account. You can download the key set from https://rustfortheiot.auth0.com/.well-known/jwks.json (replace rustfortheiot with your domain). We can use this key to help decode the JSON in our application with JWKS (JSON Web Key Set). In our code, not only will we be able to decipher the JWT to get the subject or any other fields we want, but most importantly it guarantees the JWT came from the source we expected it to come from.
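Before we get to signature verification, it's worth seeing that the "decomposition" jwt.io performs on each JWT segment is just unpadded base64url decoding. Here is a stdlib-only round-trip sketch of that encoding; the claims payload is made up for illustration, and a real application would of course use a tested base64 crate rather than hand-rolling this:

```rust
// The base64url alphabet (RFC 4648): '-' and '_' instead of '+' and '/'.
const ALPHABET: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";

// Base64url-encode without padding, as JWT segments are encoded.
fn b64url_encode(data: &[u8]) -> String {
    let mut out = String::new();
    for chunk in data.chunks(3) {
        let b = [chunk[0], *chunk.get(1).unwrap_or(&0), *chunk.get(2).unwrap_or(&0)];
        let n = ((b[0] as u32) << 16) | ((b[1] as u32) << 8) | b[2] as u32;
        let idx = [(n >> 18) & 63, (n >> 12) & 63, (n >> 6) & 63, n & 63];
        for i in 0..(chunk.len() + 1) {
            out.push(ALPHABET[idx[i] as usize] as char);
        }
    }
    out
}

// Decode an unpadded base64url string back to bytes.
fn b64url_decode(s: &str) -> Vec<u8> {
    let vals: Vec<u32> = s
        .bytes()
        .map(|c| ALPHABET.iter().position(|&a| a == c).unwrap() as u32)
        .collect();
    let mut out = Vec::new();
    for chunk in vals.chunks(4) {
        let mut n = 0u32;
        for (i, &v) in chunk.iter().enumerate() {
            n |= v << (18 - 6 * i);
        }
        for i in 0..(chunk.len() - 1) {
            out.push(((n >> (16 - 8 * i)) & 0xFF) as u8);
        }
    }
    out
}

fn main() {
    // A made-up claims payload; real id_token payloads look similar.
    let claims = r#"{"sub":"auth0|5e8f1234"}"#;
    let encoded = b64url_encode(claims.as_bytes());
    let decoded = String::from_utf8(b64url_decode(&encoded)).unwrap();
    println!("{}", decoded);
}
```

Decoding alone gives you the claims but proves nothing about who produced them; that's what the signature check against the JWKS key is for.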

Programmatically Parse

Now, being able to run curl scripts and decode from the site is all well and good for testing and for verifying you have all the right permissions to make everything work. But let's dive into some code. We'll start with parsing an ID token with alcoholic_jwt.

We will need to use the JWKS to validate the token, and there were not many JWT parsing crates that allowed it. Luckily, I stumbled upon one that isn't often used for general JWT work but was designed specifically for JWKS validation: alcoholic_jwt. In Listing 6-4, I have added the crates we will use.
hyper = "0.10.16"
alcoholic_jwt = "1.0.0"
reqwest = "0.10.4"
http = "0.2.1"
Listing 6-4

Crates used to allow parsing of the User ID JWT

Let’s create a function that will take our JWT slice, the JWKS, and the authentication URL and validate the slice and return the user (which is stored in the subject). In Listing 6-5, we decode the JWT to receive the User ID from it.
use alcoholic_jwt::{JWKS, Validation, validate, token_kid};
fn parse_id_token(jwt_slice: &str, jwks: &JWKS, auth_url: &str) -> UserResult {
    debug!("JWT Slice :: {:?}", jwt_slice);
    // Several types of built-in validations are provided:
    let validations = vec![
        Validation::Issuer(format!("https://{}/", auth_url).into()), ①
        Validation::SubjectPresent, ②
        Validation::NotExpired, ③
    ];
    let kid = token_kid(&jwt_slice) ④
        .expect("Failed to decode token headers")
        .expect("No 'kid' claim present in token");
    let jwk = jwks.find(&kid).expect("Specified key not found in set");
    let user_id = validate(jwt_slice, jwk, validations)? ⑤
        .claims.get("sub").unwrap().to_string(); ⑥
    Ok(user_id) ⑦
}
async fn jwks_fetching_function(url: &str) -> JWKS { ⑧
    let jwks_json: String = {
        let url_jwks = format!("https://{}/.well-known/jwks.json", url);
        let res = reqwest::get(url_jwks.as_str()).await.unwrap();
        res.text().await.unwrap()
    };
    let jwks: JWKS = serde_json::from_str(jwks_json.as_str()).expect("Failed to decode");
    jwks
}
Listing 6-5

Parsing the user ID from the JWT

  • ① Validates the issuer claim against our Auth0 domain.

  • ② Sets to validate the subject is present since we need that to get the user id.

  • ③ Sets to validate that the token is not expired.

  • ④ Extracts the kid portion from the token.

  • ⑤ Uses the validations we created earlier to validate the token.

  • ⑥ Retrieves the subject from the claim.

  • ⑦ Returns the user id from the token.

  • ⑧ Function to retrieve the JWKS and parse it.

This code is not tied to any particular tier; it can be used on the backend to deliver content to a website or on our device to display the user on the screen. But let's move on to discussing the role the access token will play.

Access Tokens
The access token is what is sent between microservices. When service A is working with a user and needs to tell service B about that user, service B can verify the token to make sure the user is still active and can then trade the token with the authorization service for more information about the user, like an email address or other data. Using a token allows service B to be stand-alone and more secure, since a random access token sent by an outside service wouldn't validate. Having service B call back out to the authorization server also makes sure the token is still active and usable. Access tokens are sent to a server in the header, as either opaque tokens or JWTs, in the format
Authorization: Bearer <access_token>
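In Rust, pulling the token back out of such a header value is a one-liner. Here is a small hypothetical helper (not from the book's listings, which instead slice off the first 7 characters) that strips the Bearer prefix:

```rust
/// Extract the access token from an `Authorization` header value.
/// Returns None if the value is not a Bearer token.
fn bearer_token(header_value: &str) -> Option<&str> {
    header_value.strip_prefix("Bearer ")
}

fn main() {
    let header = "Bearer 5BPHIDN84ciNsY4PeOWRy080mB_4R69U";
    println!("{:?}", bearer_token(header));
}
```

Using `strip_prefix` rather than a fixed slice also rejects non-Bearer schemes instead of silently mangling them.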
For Auth0, we will be sending the tokens as opaque. Once your microservice or API gateway receives the token, it can check with the authorization server that this is a valid token and then continue with the request. We are also going to get the subject off the token and store it as a user id on the request so that our controllers can perform actions with it. On Auth0, the endpoint to get the user id from the opaque token is /userinfo. In Listing 6-6, we retrieve the data from the /userinfo endpoint with the access_token we previously retrieved.
➔ curl --request GET
  --url 'https://rustfortheiot.auth0.com/userinfo'
  --header 'Authorization: Bearer 5BPHIDN84ciNsY4PeOWRy080mB_4R69U'
  --header 'Content-Type: application/json'
{
    "sub":"auth0|5d45ceebede4920eb1a665f0",
    "nickname":"nusairat",
    "name":"[email protected]",
    "picture":"https://s.gravatar.com/avatar/05927361dbd43833337aa1e71fdd96ef?s=480&r=pg&d=https%3A%2F%2Fcdn.auth0.com%2Favatars%2Fnu.png",
    "updated_at":"2019-10-15T03:17:11.630Z",
    "email":"[email protected]",
    "email_verified":true
}
Listing 6-6

Retrieving the user data from the user info

You will notice we get more than the subject back; we also get the nickname, name, picture, and email. This is because earlier we asked not only for the openid scope but also the profile scope, which brings back more details for the access token. Let's now use this in our code to check the user id and their authorization level.

Note that the user info retrieval does not check the user's authorization status; it only gets the user.

In Listing 6-7, we take the request, parse out the token from the header, and use the token to retrieve the user info which will contain the subject.
fn parse_access_token(request: &Request,  auth_url: &str) -> UserResult {
    // Get the full Authorization header from the incoming request headers
    let auth_header = match request.headers.get::<Authorization<Bearer>>() { ①
        Some(header) => header,
        None => panic!("No authorization header found")
    };
    debug!("Auth Header :: {:?}", auth_header);
    let jwt = header::HeaderFormatter(auth_header).to_string(); ②
    debug!("JWT :: {:?}", jwt);
    let jwt_slice = &jwt[7..];
    debug!("JWT Slice :: {:?}", jwt_slice);
    let item = block_on(retrieve_user(jwt_slice, auth_url));
    Ok(item.unwrap())
}
#[derive(Deserialize, Debug)]
struct Auth0Result {
    iss: String,
    sub: String,
    aud: String
}
async fn retrieve_user(jwt: &str, auth_url: &str) -> Result<String, reqwest::Error> {
    use http::{HeaderMap, HeaderValue};
    let url = format!("https://{}/userinfo", auth_url);
    // headers
    let mut headers = HeaderMap::new();
    headers.insert("Authorization", HeaderValue::from_str(jwt).unwrap());
    headers.insert("Content-Type", HeaderValue::from_str("application/json").unwrap());
    let json = reqwest::Client::new() ③
                    .get(&url)
                    .headers(headers)
                    .send()
                    .await?
                    .json::<Auth0Result>()
                    .await?;
    Ok(json.sub)
}
Listing 6-7

Parsing the user ID from the access token

  • ① From the header retrieves the Authorization Bearer token, making sure it exists.

  • ② Converts the token to a string; this now would contain Bearer <access_token>.

  • ③ Calls out to the /userinfo endpoint to trade the token for user data.

Implement Authorization Check
Let’s implement this into our API code. With that set of code, we have discussed we can now retrieve the user info. But I don’t want to make these calls for each individual Iron action each time. In addition, I want to make sure that for certain calls an access token is ALWAYS supplied, and if not, the call should be rejected. Let’s take a look at our requirements to have our framework run automatic verification and injection for each service call:
  1.

    Use middleware to retrieve the user id from the access token.

     
  2.

    Have the middleware return an error if there is no access token.

     
  3.

    Only have this middleware called for certain endpoints.

     
We’ve created the middleware a few times now, so most of this code should look very familiar. We will start in Listing 6-8 with a struct AuthorizationCheck that we will instantiate in our routes to create the authorization middleware. This will take the authorization url as a parameter that we are going to set in the args.
use futures::executor::block_on;
pub struct AuthorizationCheck {
    jwks: JWKS,
    // static and this will never change once set
    auth_url: String
}
impl AuthorizationCheck {
    pub fn new(auth_url: &str) -> AuthorizationCheck {
        // Get the jwks
        let jwks = block_on(jwks_fetching_function(auth_url));
        AuthorizationCheck {
            jwks: jwks,
            auth_url: auth_url.to_string()
        }
    }
}
Listing 6-8

Creating the authorization check struct for our authorization

Now for the bigger set of functions; this pattern we've seen before. We are going to create the struct AuthorizedUser to hold the result of the parse_access_token function we created previously. That data will then be inserted into the request extensions (if you recall, this uses the type of the struct as the key in the map to find the data). And finally, we will create the UserIdRequest trait so that our controllers can retrieve the user id with the call request.get_user_id(). This code is laid out in Listing 6-9.
pub struct AuthorizedUser { ①
    user_id: String
}
impl AuthorizedUser {
    pub fn new(user_id: String) -> AuthorizedUser {
        AuthorizedUser {
            user_id: user_id
        }
    }
}
pub struct Value(AuthorizedUser);
impl typemap::Key for AuthorizedUser { type Value = Value; }
impl BeforeMiddleware for AuthorizationCheck {
    fn before(&self, req: &mut Request) -> IronResult<()> {
        let access_token = parse_access_token(&req, self.auth_url.as_str()); ②
        match  access_token {
            Ok(user_id) => {
                req.extensions.insert::<AuthorizedUser>(Value(AuthorizedUser::new(user_id)));
                Ok(())
            },
            Err(e) => {
                let error = Error::from(JwtValidation(e)); ③
                Err(IronError::new(error, Status::BadRequest))
            }
        }
    }
}
pub trait UserIdRequest { ④
    fn get_user_id(&self) -> String;
}
impl<'a, 'b> UserIdRequest for Request<'a, 'b> {
    fn get_user_id(&self) -> String {
        let user_value = self.extensions.get::<AuthorizedUser>().chain_err(|| "No user id, this should never happen").unwrap();
        let &Value(ref user) = user_value;
        // Clones it since we want to pass handling of it back
        user.user_id.clone()
    }
}
Listing 6-9

Creating the authorization middleware and the user id retrieval trait

  • ① The AuthorizedUser struct that will store the results of the parse tokens return.

  • ② The middleware call that will parse the token; if an Ok is returned, we extract the user id from the success and store it on the request.

  • ③ If not Ok, we return a JWT validation error back to the caller.

  • ④ The trait that will be applied to controllers to retrieve the user id.

This sets up all the middleware; now we just need to tie it into our router model. If you recall, earlier we divided our health and non-health calls into two different chains: /api and /healthz. We are going to have the authorization middleware run on the /api chain. In Listing 6-10, you can see the modified create_links method with the authorization check.
fn create_links(chain: &mut Chain, url: &str, auth_server: &str) {
    use crate::authorization::AuthorizationCheck;
    // Create the middleware for the diesel
    let diesel_middleware: DieselPg = DieselMiddleware::new(url).unwrap();
    // Authorization tier
    let auth_middleware = AuthorizationCheck::new(auth_server);
    // link the chain
    chain.link_before(auth_middleware);
    chain.link_before(diesel_middleware);
}
Listing 6-10

Modified create_links with authorization check, in file src/http.rs

As you can see, the healthz chain will have no extra middleware added, but our media and comment calls will.

Refresh Tokens
Finally, let’s discuss the refresh tokens. As we stated earlier, the access tokens have a time limit and will be up to our device application to know when they expire. When they do, we will have to obtain a new access token that we can use, along with a new id token as well. This is a relatively simple process where we once again call the ouath/token endpoint, except this time, we will pass in as the grant type refresh_token, so the server realizes we are passing in a refresh_token to it. In Listing 6-11, we make a curl call back to the server to get a new token.
➔ curl --request POST
  --url 'https://rustfortheiot.auth0.com/oauth/token'
  --header 'content-type: application/x-www-form-urlencoded'
  --data grant_type=refresh_token
  --data 'client_id=rsc1qu5My3QZuRPZHp5af5S0MBUcD7Jb'
  --data client_secret=C4YMZHE9dAFaEAysRH4rrao9YAjIKBM8-FZ4iCiN8G-MJjrq7O0alAn9qDoq3YF6
  --data refresh_token=hnsureBL2jfb62UINDmgjt4F6vZBp0etExeoDja5qGy1Y
  --data 'redirect_uri=undefined'
  {
      "access_token":"2JbKDWr5BBqT-j5i0lYp-nRbA1nrnfjP",
      "id_token":
        "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6Ik5qSXhOek0xTmpjd05rRTNOa1E1UlVSRE1rUXpPVFV5TXpBeE1FSTBRakpGTVRnME5rTTVOQSJ9.eyJpc3MiOiJodHRwczovL3J1c3Rmb3J0aGVpb3QuYXV0aDAuY29tLyIsInN1YiI6ImF1dGgwfDVkNDVjZWViZWRlNDkyMGViMWE2NjVmMCIsImF1ZCI6InJzYzFxdTVNeTNRWnVSUFpIcDVhZjVTME1CVWNEN0piIiwiaWF0IjoxNTcxMTkwNTU1LCJleHAiOjE1NzEyMjY1NTV9.km3QnC28qqWnwvhPVO2T2oW8O0EDUFilLUOgerRAas7YHihmFrYgSnovVHBmsWjTMKbHkPmX3RCevyOH-AwqZ1DdOe7ckcFopd-lChubpkegxFBEmhdGahNQS7xZWY8_JV3y4ytiLlwfgi6LvJaWJYk0bcFKg_Sn37X7UoJkZ4hzqOs82bxLKKV01_yLJHspYry9pt_9yokj0Mo77jlGU62oZbdHvUHdYqrxZDQOasLGlrkRMNrmG83A2U-QlAotIYBbO0KoeGBRG3lTg7Vd4RazlMim9WYHzqslEHV85ksUFGu_oXiIztgN4fZEjWzWNzweCxoDJsg4JHJ7AlW_cg",
      "scope":"openid offline_access",
      "expires_in":86400,
      "token_type":"Bearer"
}
Listing 6-11

Retrieve a new set of tokens using the refresh token as a basis

Now that we have the new set of tokens, use them and set your expiration again.
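When we later do this refresh from Rust rather than curl, the same form fields apply. Here is a hedged sketch (the helper function and the placeholder values are illustrative, not from the book's listings) of assembling the grant, which could then be posted with reqwest's .form(&params):

```rust
use std::collections::HashMap;

/// Build the form parameters for an OAuth refresh_token grant,
/// mirroring the fields in the curl call above.
fn refresh_grant_params<'a>(
    client_id: &'a str,
    client_secret: &'a str,
    refresh_token: &'a str,
) -> HashMap<&'static str, &'a str> {
    let mut params = HashMap::new();
    params.insert("grant_type", "refresh_token");
    params.insert("client_id", client_id);
    params.insert("client_secret", client_secret);
    params.insert("refresh_token", refresh_token);
    params
}

fn main() {
    let params = refresh_grant_params("my-client-id", "my-client-secret", "my-refresh-token");
    // e.g. reqwest::Client::new().post(token_url).form(&params).send().await
    println!("{:?}", params.get("grant_type"));
}
```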

You can revoke access tokens either via code or via the UI; you may use this in your application to handle situations where a device is stolen or compromised and you want to prevent that device from talking to the services.

Much of the code and the curl calls you see here will be incorporated when we get to Chapter 8 and start the device coding itself. But I didn't want to overly divide the concepts of authorization and authentication across two chapters. We will revisit some of these calls later, but for now, we have our authorization and authentication.

Securing MQTT

In our first pass of using MQTT for our application, we used TCP for the communication. The TCP connection to the message queue was not only insecure from an authentication perspective but, most importantly, unencrypted. This means that if anyone sniffed the packets being sent from our devices, they would see not only the payloads we sent but where we sent them. And before we switched to Protobuf, the payloads were even easier to view. Anyone could not only send data as any user but could also receive all the topics we are publishing, revealing customer data. This would be a major breach.

There are many ways of securing and encrypting the communication of a device or of your backend services. There are two main approaches to these problems:
  1.

    SSL (Secure Sockets Layer) – Like most backend applications, we use SSL to create secure communication between the client and the server. To do this, we will have to create certificates that the message queue server uses to serve SSL. In addition, we will have to create certs for each client that connects to our message queue server.

     
  2.

    Authentication – Another way to secure the site is authentication. This makes it so only a particular user can access the message queue. In fact, you can lock it down even further by only allowing the user access to specific topics.

     

There are a few ways to achieve this, but essentially the two are to let the message queue application handle it or to have a sidecar handle both. Letting the message queue handle it means using the tools built into the MQ for SSL and authentication. Most MQs out there, including the one we are using, have SSL handlers out of the box as well as authentication. The other way will make more sense in the next chapter when we talk about deployment: using a sidecar injector. A sidecar injector runs alongside our application and intercepts all requests to the service. You can use this to force all your calls to be SSL and authenticated. This can be especially useful in the authentication realm, but also if you are not entirely happy with the built-in SSL implementation. In addition, you could replace just one or the other piece with the customization.

If you have good expertise in SSL and authentication, then the sidecar may be for you; however, for our implementation, we are going to stick with using the MQ's SSL model.

Certificates

Certificates are used to encrypt the data communicated between two endpoints so that someone in the middle cannot read the data being transmitted, and also so that you can trust that whoever made the call is who you thought made the call. Certificates have been around since the early days of the Web, but in those days, people only used them to transmit credit cards and other highly sensitive pieces of data. Today almost every website uses them, and since 2014, Google gives your site a higher ranking for using them. There are essentially two types of certificates: certificate authority (CA) signed and self-signed.

Certificate Authority (CA) vs. Self-Signed Certificates

We will be talking about CA vs. self-signed certificates throughout the next two chapters. As a rule, we will use CA certs for our deployed environments and self-signed certs for our local environments. Certificate authority certs are certificates that are generated and signed by a recognized certificate authority. The main reason to use them is the guarantee that the domain you are connecting to is truly that domain, owned by the person you expect. There are various certificate authorities you can use; we will be using Let's Encrypt, which is free for the number of requests we need, and most applications have easy integration with it.

When deploying locally, we cannot (easily) use CA-signed certificates. Signed certificates are tied to a domain that can be resolved and are designed for QA, production, or any other deployed environment. However, we need to test locally against a local MQ and local microservices. To do that, we will use self-signed certificates, which allow us to create the certs and destroy them as needed.

You can deploy self-signed certificates to deployed environments, but then you will have to make sure your system is not enforcing that they are certificate-authority signed. You will notice the use of self-signed certificates on websites when the browser asks you to continue to a site whose cert is not CA certified.

Creating Server Certificates

For us, most of the certs we create will be used for our MQTT communication.

Before we start, there will be many different file extensions that we will use in this section; it’s good to get an overview of the differences:
  1.

    .PEM – This is an RFC 1421 through 1424 file format. PEM stands for Privacy Enhanced Mail; it came about as a way to securely send emails, but now it's used for a great many other security chains. This is a base64-encoded X509 certificate that can contain just the public certificate or the entire certificate chain, including public key, private key, and the root certificates.

     
  2.

    .KEY – The .key file is most commonly used for the private key; it is formatted as a PEM file containing just the private key, which we should never share.

     
  3.

    .CSR – This is an RFC 2986 specification in a PKCS10 format. CSR stands for Certificate Signing Request; it contains all the information needed to request a certificate to be signed by a certificate authority. The certificate authority digitally signs the request and returns the certificate with its public key to be used for digital certs.

     
Generate CA Root Certificate
First off, we need to generate the private key we are going to use for our self-signed CA root certificate. In Listing 6-12, we generate an RSA private key with a length of 2048 bits. We are going to name the cert RustIOTRootCA.
openssl genrsa -out RustIOTRootCA.key 2048
Listing 6-12

Generate an RSA private key

This is the private key, the one that, if this were a production system, you'd want to keep in a safe place. If someone else got hold of that key, they could compromise your identity. Usually, you use a CA provider to take care of the generated key.

Next, we are going to generate the root certificate from the key and give it an expiration of 1826 days, or roughly 5 years. It could in theory be longer, but 5 years is plenty of time for testing purposes. In Listing 6-13, we generate this certificate.
➔ openssl req -x509 -new -nodes -key RustIOTRootCA.key -sha256 -days 1826 -out RustIOTRootCA.pem ①
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) []:US ②
State or Province Name (full name) []:CA
Locality Name (eg, city) []:SF
Organization Name (eg, company) []:
Organizational Unit Name (eg, section) []:
Common Name (eg, fully qualified host name) []:localhost ③
Email Address []:
Listing 6-13

Generate an x509 certificate

  • ① Command to create the RustIOTRootCA cert using sha256 and creating an X509 certificate.

  • ② Add in a few fields like the country name and state.

  • ③ This is normally the fully qualified domain name; since we are running it from localhost, use that instead of a regular name.

The root cert is used as the start of your chain of trust. Certificates we generate after this will all use the root to verify authenticity up the chain. The root CA can be used to generate any certificate for our site.

Message Queue Server Cert
But first, let’s start with creating the cert for the message queue itself. We will generate the private key and the cert for the MQTT, much like we did before with similar commands even. The big difference will be now we have a root CA we can use as well. Like in the previous example, let’s start by generating the private key in Listing 6-14.
openssl genrsa -out EmqttIot.key 2048
Listing 6-14

Generate an RSA private key for the MQ

Now let’s use that private key to create a certificate request; the certificate request is used to create a message to the CA requesting a digitally signed certificate. Since we are performing this all as self-signed, we will create that certificate request, then turn around, and create the PEM. In Listing 6-15, we are creating our CSR for the MQTT.
➔ openssl req -new -key ./EmqttIot.key -out EmqttIot.csr ①
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) []:US
State or Province Name (full name) []:CA
Locality Name (eg, city) []:San Mateo
Organization Name (eg, company) []:Apress
Organizational Unit Name (eg, section) []:IoT
Common Name (eg, fully qualified host name) []:localhost ②
Email Address []:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
Listing 6-15

Generate a CSR for the MQ

  • ① Command to create our CSR from the key we created previously.

  • ② Marking the localhost since we are running this locally.

Now that we have a CSR and a private key, we can use them to request the actual certificate that we will use for the MQ. This will appear similar to the previous PEM creation, except this time we reference the root CA, as shown in Listing 6-16.
➔ openssl x509 -req -in ./EmqttIot.csr ①
    -CA RustIOTRootCA.pem ②
    -CAkey RustIOTRootCA.key ③
    -CAcreateserial ④
    -out EmqttIot.pem   ⑤
    -days 1826 -sha256   ⑥
Signature ok
subject=/C=US/ST=CA/L=SF/CN=localhost
Getting CA Private Key
Listing 6-16

Generate the certificate for the message queue

  • ① Command to create a x509 certificate using the previously created CSR as the basis for the request.

  • ② The Root CA certificate authority file we created previously.

  • ③ The private key for that CA that only the owner should have.

  • ④ This flag creates a file containing the serial number. This number is incremented each time you sign a certificate.

  • ⑤ Defining the file to output the certificate for the MQ to.

  • ⑥ Defining the days this certificate is active for and the cipher to use.

There will be two files generated from this request: the EmqttIot.pem that we requested and the RustIOTRootCA.srl serial number file that was used.

At this point, we have created our root CA and the certificate for our MQTT server; in addition, we no longer need the EmqttIot.csr that we created, and you can delete it now if you want. We can now revisit the MQTT server itself.

Updating the MQTT Server

In order to do this, we are going to have to deactivate our old MQTT server that we created since they will be sharing some port numbers. Go ahead and run docker stop mqtt-no-auth; this will turn off the previous MQTT server that did not have certificates.

For the MQTT server, we are going to make use of the certificates we just created to set up the SSL port on the MQTT server so that we can accept only SSL calls with a trusted chain. The EMQTT we are using supports the use of certificates out of the box; it will be up to us to configure them. By default, there is an EMQTT configuration file that is used when running the application, and the docker command we are using can update them with environmental variables.

We need to set two things. One is to set the certificates so that we have SSL connectivity. This will allow us to access the MQ so long as our client is using a trusted Root CA as well for communication. While this will make sure all our traffic is over SSL and thus encrypted, it would still allow anyone the ability to communicate with our endpoint as long as they had a CA certificate.

If you recall, we are also using this as a secure mechanism to control who the clients are; thus, we also need to tell the MQTT server to only accept connections with valid client-side certificates as well. Those certificates can only be generated if one has the Root CA private key, which should be just us.

Let’s look at what updates we will need to the configuration files; Listing 6-17 has our eventual settings.
listener.ssl.external.keyfile = /etc/certs/EmqttIot.key ①
listener.ssl.external.certfile = /etc/certs/EmqttIot.pem ②
listener.ssl.external.cacertfile = /etc/certs/RustIOTRootCA.pem ③
listener.ssl.external.verify = verify_peer ④
listener.ssl.external.fail_if_no_peer_cert = true ⑤
Listing 6-17

Example of the properties we need to set for our certificates to work

  • ① The private key file for the EMQTT server.

  • ② The public certificate for the EMQTT server.

  • ③ The public root CA certificate.

  • ④ Verifies the client-side identities by their certificates.

  • ⑤ Ensures that we only allow SSL if there is a verified client-side certificate.

Now this leads to two questions:
  1.

    Where do we place the files for the docker container to pick up?

     
  2.

    How do we tell docker to update the emqtt.conf file with those properties listed in Listing 6-17?

     

The first is relatively easy; we can use the -v flag in docker to mount a local directory into the docker image we are running. The second requires us to use a naming convention, recognized by the image, that converts environmental variables into updates to the properties file.

When updating a reference like listener.ssl.external.keyfile, it is converted to an environmental variable by prefixing it with EMQ_, uppercasing the entire string, and replacing each "." with a double underscore. Thus, from the example we would have EMQ_LISTENER__SSL__EXTERNAL__KEYFILE. This can be used for any of the EMQTT properties you want to adjust. In Listing 6-18, we have our docker command for emqtt-auth with the necessary environmental variable settings to run our secure EMQTT server.
docker run --restart=always -ti --name emqtt-auth --net=iot
-p 8883:8883 -p 18083:18083 -p 8083:8083 -p 8443:8443 -p 8084:8084 -p 8080:8080 ①
-v ~/book_certs:/etc/ssl/certs/ ②
-e EMQ_LISTENER__SSL__EXTERNAL__KEYFILE="/etc/ssl/certs/EmqttIot.key" ③
-e EMQ_LISTENER__SSL__EXTERNAL__CERTFILE="/etc/ssl/certs/EmqttIot.pem"
-e EMQ_LISTENER__SSL__EXTERNAL__CACERTFILE="/etc/ssl/certs/RustIOTRootCA.pem"
-e EMQ_LISTENER__SSL__EXTERNAL__VERIFY=verify_peer
-e EMQ_LISTENER__SSL__EXTERNAL__FAIL_IF_NO_PEER_CERT=true
-e "EMQ_LOG_LEVEL=debug"
-e "EMQ_ADMIN_PASSWORD=your_password"
-d devrealm/emqtt
Listing 6-18

Docker run to create an EMQTT server with SSL enabled and verify SSL turned on

  • ① Added the 8883 SSL port and removed the 1883 TCP port from being exposed since we no longer want users to connect via TCP.

  • ② Our local ~/book_certs directory is mounted as the docker image's /etc/ssl/certs/ directory.

  • ③ Referencing the certificate files from the mounted directory.
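The environment-variable naming convention described above is mechanical enough to sketch in a few lines of Rust (a hypothetical helper, just to illustrate the rule; it is nothing EMQTT itself ships):

```rust
/// Convert an EMQTT config key (e.g. "listener.ssl.external.keyfile")
/// to its docker environment-variable form: prefix with EMQ_,
/// uppercase everything, and replace each '.' with "__".
fn emq_env_var(config_key: &str) -> String {
    format!("EMQ_{}", config_key.to_uppercase().replace('.', "__"))
}

fn main() {
    println!("{}", emq_env_var("listener.ssl.external.keyfile"));
    // EMQ_LISTENER__SSL__EXTERNAL__KEYFILE
}
```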

We have our server up and running, so it's good to test that it's working as designed. The way we will test it is by checking that it gives us the correct error back. In Listing 6-19, we attempt to subscribe with just the RootCA.
➔ mosquitto_sub -t health/+ -h localhost -p 8883 -d --cafile ./RustIOTRootCA.pem  --insecure
Client mosq/rL5I4rEQ73Brv2ITSx sending CONNECT
OpenSSL Error: error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure
Error: A TLS error occurred.
Listing 6-19

Attempt to subscribe with the RootCA file

The error we get is sslv3 alert handshake failure; if you receive any other error, particularly certificate verify failed, it means you set up the certificate installation incorrectly. Now let's get the client certificates created, because while our server is set up to require certificates, our client does not have them yet.

Creating Client Certificates

Our final step is creating the client certificate; in the future, we will need to be able to create a client certificate for each client, and in our case, each client is a Raspberry Pi device. Since these are connected devices and we want the ability to control subscriptions, we will make the client certs last for only one month at a time. This way, we can control a bit better how long the device is able to access the server. And in theory, if we were doing monthly billing and they stopped paying, well, they wouldn't have access after that month.

But that will be done programmatically; for now, we are going to do this via the command line like the other certificates. Since this is a bit of a repeat, we will combine all three steps into one listing. Like before, we create a private key, create a CSR from that private key, and then use the root CA to create the certificate for the client. In Listing 6-20, we have those steps.
➔ openssl genrsa -out PiDevice.key 2048 ①
Generating RSA private key, 2048 bit long modulus
........................................+++
.........+++
e is 65537 (0x10001)
➔ openssl req -new -key ./PiDevice.key -out PiDevice.csr ②
...
-----
-----
Country Name (2 letter code) []:US
State or Province Name (full name) []:CA
Locality Name (eg, city) []:SF
Organization Name (eg, company) []:Apress
Organizational Unit Name (eg, section) []:IoT
Common Name (eg, fully qualified host name) []:localhost
Email Address []:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
➔ openssl x509 -req -in ./PiDevice.csr -CA RustIOTRootCA.pem -CAkey RustIOTRootCA.key -CAcreateserial -out PiDevice.pem -days 3650 -sha256 ③
Signature ok
subject=/C=US/ST=CA/L=SF/O=Apress/OU=IoT/CN=localhost
Getting CA Private Key
Listing 6-20

Create the client certificate from the Root CA

  • ① Creates the private key for the client.

  • ② Creates the CSR for the private key.

  • ③ Creates the client PEM using the private key created and the Root CA file that was created in the server section.

Pick a slightly different subject for your clients than for the Root Certificate; having the same subject on the client and the Root will cause issues. The issuer, of course, does need to match. You can double-check the settings of the certs you created with the command openssl x509 -in <filename> -subject -issuer -noout, which prints the subject and the issuer. The issuer should match across all certs, with each client having a different subject.
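If you want to sanity-check a client certificate against its Root CA (and try out the one-month lifetime mentioned earlier) without touching your real keys, a throwaway run might look like the following. All file names and subjects here are made up for the demo:

```shell
# Create a throwaway, short-lived Root CA (self-signed).
openssl genrsa -out demoCA.key 2048
openssl req -x509 -new -key demoCA.key -sha256 -days 1 \
    -subj "/C=US/O=Demo/CN=DemoRootCA" -out demoCA.pem

# Create a client key and CSR with a *different* subject than the root.
openssl genrsa -out demoClient.key 2048
openssl req -new -key demoClient.key \
    -subj "/C=US/O=Demo/CN=localhost" -out demoClient.csr

# Sign the client cert for 30 days (the one-month lifetime).
openssl x509 -req -in demoClient.csr -CA demoCA.pem -CAkey demoCA.key \
    -CAcreateserial -days 30 -sha256 -out demoClient.pem

# Verify the chain and inspect subject/issuer.
openssl verify -CAfile demoCA.pem demoClient.pem
openssl x509 -in demoClient.pem -subject -issuer -noout
```

The `openssl verify` line should print `demoClient.pem: OK`; if the subject of the client matched the root’s subject exactly, verification would fail with a self-signed certificate error.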

Now that we have created the certificates, let’s try to create a connection. In Listing 6-21, we create a connection using the new client certificate we created.
 ➔ mosquitto_sub -t health/+ -h localhost -p 8883 -d --key PiDevice.key --cert PiDevice.pem --cafile RustIOTRootCA.pem  --insecure
Client mosq/nHh8mJ922PEe6VeUSN sending CONNECT
Client mosq/nHh8mJ922PEe6VeUSN received CONNACK (0)
Client mosq/nHh8mJ922PEe6VeUSN sending SUBSCRIBE (Mid: 1, Topic: health/+, QoS: 0, Options: 0x00)
Client mosq/nHh8mJ922PEe6VeUSN received SUBACK
Subscribed (mid: 1): 0
Listing 6-21

Running a mosquitto subscription with the new client certificate

Now we have a secure connection to test against; however, we now have to update our actual code to switch from using a TCP connection to an SSL connection.

Creating Our New Message Queue Service

With our MQ running SSL, made slightly more secure by requiring a client key, and with TCP access shut down, the MQTT service we created in previous chapters will no longer work; at this point, the message queue will refuse its connections. We are going to have to convert our connection_method for the message queue to use SSL instead of TCP.

Luckily, this is relatively simple. Let’s start by defining what extra items we need:
  1. Root CA – The root CA we created for the site; this is the public certificate.

  2. Client cert – The client certificate that is generated from the public/private Root CA.

  3. Client key – The private key for that client certificate.
You can either use the PiDevice certificate we created previously or create a new certificate to use for the MQ service. I am not going to step through all the code, but wanted to highlight two areas. The first is we need to add to our config a few more references; in Listing 6-22, we added the preceding certs to the MqttClientConfig.
#[derive(Debug, Clone)]
pub struct MqttClientConfig {
    pub ca_crt:  String,
    pub server_crt: String,
    pub server_key: String,
    pub mqtt_server: String,
    pub mqtt_port: u16,
    // for the RPC
    pub rpc_server: Option<String>,
    pub rpc_port: Option<u16>,
}
Listing 6-22

Updating the MqttClientConfig; this will be in the file src/mqtt/mod.rs
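As a sketch of how this config might be populated, here is one way to build an MqttClientConfig from environment variables with local-development defaults. The variable names (MQTT_CA_CRT and friends) and the from_env helper are assumptions for illustration, not part of the book’s code:

```rust
use std::env;

#[derive(Debug, Clone)]
pub struct MqttClientConfig {
    pub ca_crt: String,
    pub server_crt: String,
    pub server_key: String,
    pub mqtt_server: String,
    pub mqtt_port: u16,
    // for the RPC
    pub rpc_server: Option<String>,
    pub rpc_port: Option<u16>,
}

impl MqttClientConfig {
    /// Hypothetical helper: read the cert paths and broker address from
    /// environment variables, falling back to local defaults (port 8883
    /// is the conventional MQTT-over-TLS port we configured earlier).
    pub fn from_env() -> MqttClientConfig {
        MqttClientConfig {
            ca_crt: env::var("MQTT_CA_CRT")
                .unwrap_or_else(|_| "RustIOTRootCA.pem".into()),
            server_crt: env::var("MQTT_CLIENT_CRT")
                .unwrap_or_else(|_| "PiDevice.pem".into()),
            server_key: env::var("MQTT_CLIENT_KEY")
                .unwrap_or_else(|_| "PiDevice.key".into()),
            mqtt_server: env::var("MQTT_SERVER")
                .unwrap_or_else(|_| "localhost".into()),
            mqtt_port: env::var("MQTT_PORT")
                .ok()
                .and_then(|p| p.parse().ok())
                .unwrap_or(8883),
            rpc_server: None,
            rpc_port: None,
        }
    }
}
```

Keeping the cert paths in configuration rather than hard-coding them makes it easy to swap in per-device certificates on each Raspberry Pi later.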

Now we need to apply those certificates to the client; before, we were using a TCP connection method that didn’t require any extra configuration. In Listing 6-23, we alter that to be a TLS connection using the certs provided.
pub fn create_client(config: &MqttClientConfig, name: &str)
    -> (MqttClient, Receiver<Notification>) {
    let ca = read(config.ca_crt.as_str());
    let server_crt = read(config.server_crt.as_str());
    let server_key = read(config.server_key.as_str());
    create_client_conn(config, ca, server_crt, server_key, name)
}
Listing 6-23

Updating create_client; this will be in the file src/mqtt/client.rs
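Listing 6-23 calls a read helper that isn’t shown in the excerpt. A minimal sketch of what such a helper might look like (an assumption on my part; the book’s actual helper may differ) simply loads the certificate or key file into a byte vector, since TLS-capable MQTT clients typically accept raw cert bytes:

```rust
use std::fs;

/// Hypothetical helper matching the `read(...)` calls in Listing 6-23:
/// load a certificate or key file into a byte vector, panicking with a
/// readable message if the path is wrong so misconfigured cert paths
/// fail loudly at startup.
fn read(path: &str) -> Vec<u8> {
    fs::read(path)
        .unwrap_or_else(|e| panic!("failed to read cert file {}: {}", path, e))
}
```

Failing fast here is deliberate: a client with a missing or unreadable cert can never connect to the TLS-only broker anyway, so there is no point continuing.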

And that is it; start up the application, and you can use the test calls we created earlier to send files to the MQTT broker and read them over TLS. We now have our message queue system communicating over secure channels instead of insecure ones.

Summary

In this chapter, we covered the very basics of security. I felt authentication was critical since it underpins just about any Internet-based application. The integration in our layers is less than we probably should do, but good enough for demonstration purposes; you will want to add more as you continue. The same goes for MQTT: certificate-based authentication is very common even for the most basic of devices. Remember, with your IoT device, when you first plug it in, you will want it to communicate with a server even without the person being authenticated. This could be to know if it’s active, if an update is required, and so on. We will do more callbacks to the authentication layer in Chapter 9 when we allow the user to authenticate with device flow on the Raspberry Pi.
