13

Advanced Topics

If you are reading this chapter – congratulations, you have reached the very final part of this book! We have discussed many topics related to microservice development, but some remain that are important to cover. The topics in this chapter span many areas, from observability and debugging to service ownership and security. You may find these topics useful at various points in time: some of them will be helpful once you have working services serving production traffic, while others will be useful while your services are still in active development.

In this chapter, we will cover the following topics:

  • Profiling Go services
  • Creating microservice dashboards
  • Frameworks
  • Storing microservice ownership data
  • Securing microservice communication with JWT

Let’s proceed to the first section of this chapter, which covers service profiling.

Technical requirements

To complete this chapter, you will need Go 1.11 or above. Additionally, you will need the tools introduced in the corresponding sections of this chapter, such as Graphviz and Docker.

You can find the code examples for this chapter on GitHub: https://github.com/PacktPublishing/microservices-with-go/tree/main/Chapter13.

Profiling Go services

In this section, we are going to review a technique called profiling, which involves collecting real-time performance data of a running process, such as a Go service. Profiling is a powerful technique that can help you analyze various types of service performance data:

  • CPU usage: Which operations used the most CPU power and what was the distribution of CPU usage among them?
  • Heap allocation: Which operations used heap (dynamic memory allocated in Go applications) and what amount of memory was used?
  • Call graph: In which order were service functions executed?

Profiling may help you in different situations:

  • Identifying CPU-intensive logic: At some point, you may notice that your service is consuming most of your CPU power. To understand this problem, you can collect the CPU profile – a graph showing the CPU usage of various service components, such as individual functions. Components that consume too much CPU power may indicate various issues, such as inefficient implementations or code bugs.
  • Capturing the service memory footprint: Similar to high CPU consumption, your service may be using too much memory (for example, to allocate too much data to the heap), resulting in occasional service crashes due to out-of-memory panics. Performing memory profiling may help you analyze the memory usage of various parts of your service and find components that have unexpectedly high memory usage.

Let’s illustrate how to profile Go services using the pprof tool, which is a part of the Go SDK. To visualize the results of the tool, you will need to install the Graphviz library: https://graphviz.org/.

We will use the metadata service that we implemented in Chapter 2 as an example. Open the metadata/cmd/main.go file and add the flag and net/http packages to the imports block, along with a blank import of the profiling package, _ "net/http/pprof", which registers the profiling endpoints on the default HTTP mux. Then, add the following code to the beginning of the main function, immediately after the logger initialization:

simulateCPULoad := flag.Bool("simulatecpuload", false,
    "simulate CPU load for profiling")
flag.Parse()
if *simulateCPULoad {
    go heavyOperation()
}
go func() {
    if err := http.ListenAndServe("localhost:6060", nil); err != nil {
        logger.Fatal("Failed to start profiler handler",
            zap.Error(err))
    }
}()

In the code we just added, we introduced an additional flag called simulatecpuload that will let us simulate a CPU-intensive operation for our profiling. We also started an HTTP handler that we will use to access the profiler data from the command line.

Now, let’s add another function to the same file that will run a continuous loop and execute a CPU-intensive operation. We will generate random 1,024-byte arrays and calculate their md5 hashes (you can read about the md5 operation in the comments of its Go package at https://pkg.go.dev/crypto/md5). This choice of logic is arbitrary: we could just as easily pick any other operation that consumes a noticeable share of the CPU load.

Add the following code to the main.go file that we just updated:

func heavyOperation() {
    for {
        // Generate a random 1,024-byte array and hash it to keep
        // the CPU busy. This requires the crypto/md5 and
        // crypto/rand packages in the imports block.
        token := make([]byte, 1024)
        rand.Read(token)
        md5.New().Write(token)
    }
}

Now, we are ready to test our profiling logic. Run the service with the --simulatecpuload argument:

go run *.go --simulatecpuload

Now, execute the following command:

go tool pprof "http://localhost:6060/debug/pprof/profile?seconds=5"

The command should take 5 seconds to complete. If it executes successfully, the pprof tool will be running, as shown here:

Type: cpu
Time: Sep 13, 2022 at 5:37pm (+05)
Duration: 5.14s, Total samples = 4.42s (85.92%)
Entering interactive mode (type "help" for commands,
    "o" for options)
(pprof)

Type web in the command prompt of the tool and press Enter. If everything worked well, you will be redirected to a browser window containing a CPU profile graph:

Figure 13.1 – Go CPU profile example

Let’s walk through the data from the graph to understand how to interpret it. Each node on the graph includes the following data:

  • Package name
  • Function name
  • Elapsed time and the total time of the execution

For example, the heavyOperation function took just 0.01 seconds, but all the operations that were executed in it (including all function calls inside it) took 4.39 seconds, taking most of the elapsed time.

If you walk through the graph, you will see the distribution of the elapsed time by sub-operations. In our case, heavyOperation executed two functions that got recorded by the CPU profiler: md5.Write and rand.Read. The md5.Write function took 2.78 seconds in total, while rand.Read took 1.59 seconds of the execution time. Level by level, you can analyze the calls and find the CPU-intensive functions.

When working with the CPU profiler data, notice the functions that take the most processing time. Such functions are illustrated as larger rectangles to help you find them. If you notice that some functions have unexpectedly high processing time, spend some time analyzing their code to see whether there is any opportunity to optimize them.

Now, let’s illustrate another example of profiler data. This time, we will be capturing a heap profile – a profile showing dynamic memory allocation by a Go process. Run the following command:

go tool pprof http://localhost:6060/debug/pprof/heap

Similar to the previous example, successfully executing this command should run the pprof tool, where we can execute a web command. The result will contain the following graph:

Figure 13.2 – Go heap profile example

This diagram is similar to the CPU profile. The last line inside each node shows the ratio between the memory used by the function and the total heap memory allocated by the process.

In our example, three high-level operations are consuming the heap memory:

  • api.serviceRegister: A function that registers a service via the Consul API
  • zap.NewProduction: Logger initialization via the zap library
  • trace.init: Initializes the tracing logic
Looking at the heap profiler data, it’s easy to find functions allocating an unexpectedly high amount of heap memory. Similar to CPU profiler graphs, heap profilers display the functions that have the highest heap allocation as larger rectangles, making it easier to visualize the most memory-consuming functions.

I suggest that you practice with the pprof tool and try the other operations it provides. Being able to profile Go applications is a highly valuable skill in production debugging that should help you optimize your services and solve different performance-related issues. The following are some other useful tips for profiling Go services:

  • You can profile Go tests without adding any extra logic to your code. Running the go test command with the -cpuprofile and -memprofile flags will capture the CPU and memory profiles of your logic, respectively.
  • The top command of the pprof tool is a convenient way of showing the top consumers in the loaded profile. There is also the top10 command, which shows the top 10 consumers.
  • Using the goroutine mode of the pprof tool, you can get a profile of all used goroutines, as well as their stack traces.
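To illustrate the first tip, here is a minimal sketch of a benchmark that exercises the same md5-hashing logic as heavyOperation. The function and benchmark names are our own, purely illustrative; place such code in a _test.go file:

```go
package main

import (
	"crypto/md5"
	"crypto/rand"
	"testing"
)

// hashRandomBlock mirrors the CPU-heavy logic from heavyOperation:
// it fills a 1,024-byte buffer with random data and hashes it.
func hashRandomBlock() [16]byte {
	token := make([]byte, 1024)
	rand.Read(token) // error ignored for the sake of the sketch
	return md5.Sum(token)
}

// BenchmarkHashRandomBlock can be profiled without any extra code:
//
//	go test -bench=. -cpuprofile cpu.out -memprofile mem.out
//	go tool pprof cpu.out
func BenchmarkHashRandomBlock(b *testing.B) {
	for i := 0; i < b.N; i++ {
		hashRandomBlock()
	}
}
```

Running the go tool pprof command on the resulting cpu.out or mem.out file opens the same interactive prompt as in our earlier examples.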

Now that we have covered the basics of Go profiling, let’s move on to the next topic of this chapter: service dashboarding.

Creating microservice dashboards

In the previous two chapters, we reviewed various ways of working with service metrics. In Chapter 11, we demonstrated how to collect the service metrics, while in Chapter 12, we showed you how to aggregate and query them using the Prometheus tool. In this section, we will describe one more way of accessing the metrics data that can help you explore your metrics and plot them as charts. The technique that we will cover is called dashboarding and is useful for visualizing various service metrics.

Let’s provide an example of a dashboard – a set of charts representing different metrics. The following figure shows the dashboard of a Go service containing some system-level metrics, such as the goroutine count, the number of Go threads, and allocated memory size:

Figure 13.3 – Go process dashboard example from the Grafana tool

Dashboards help visualize various types of data, such as time series datasets, allowing us to analyze service performance. The following are some other use cases for using dashboards:

  • Debugging: Being able to visualize various service performance metrics helps us identify service issues and notice any anomalies in system activity
  • Data correlation: Having a side-by-side representation of multiple service performance charts helps us find related events, such as an increase in server errors or a sudden drop in available memory

It’s a great practice to have a dashboard for each of your services, as well as some dashboards that span all services, to get some high-level system performance data, such as the number of active service instances, network throughput, and much more.

Let’s demonstrate how to set up an example dashboard for the Prometheus data that we collected in Chapter 12. For this, we will use the open source tool called Grafana, which has built-in support for various types of time series data and provides a convenient user interface for setting up different dashboards. Follow these instructions to set up a Grafana dashboard:

  1. Execute the following command to run the Grafana Docker image:
    docker run -d -p 3000:3000 grafana/grafana-oss

This command should fetch and run the open source version of Grafana (Grafana also comes in an enterprise version, which we won’t cover in this chapter) and expose port 3000 so that we can access it via HTTP.

Note

Similar to Prometheus, Grafana is also written in Go and is another example of a popular open source Go project widely used across the software development industry.

  2. Once you’ve run the preceding command, open http://localhost:3000 in your browser. This will lead you to the Grafana login page. By default, the Docker-based version of Grafana includes a user with admin as both its username and password, so you can use these credentials to log in.
  3. From the side menu, select Configuration:
Figure 13.4 – Grafana data source configuration menu

  4. On the Configuration page, click on the Data sources menu item, then click Add data source and choose Prometheus from the list of available data sources. Doing so will open a new page that displays Prometheus settings. In the HTTP section, set URL to http://host.docker.internal:9090, as shown in the following screenshot:
Figure 13.5 – Grafana configuration for a Prometheus data source

  5. Now, you can click the Save and test button at the bottom of the page, which should let you know whether the operation was successful. If you did everything well, Grafana should be ready to display your metrics from Prometheus.
  6. From the side menu, click on New dashboard:
Figure 13.6 – Grafana’s New dashboard menu item for dashboard creation

  7. This should open an empty dashboard page.
  8. Click on the Add a new panel button on this dashboard page; you will be redirected to the panel creation page.

A panel is a core element of a Grafana dashboard and its purpose is to visualize the provided dataset. To illustrate how to use it, let’s select our Prometheus data source and some of the metrics that it already has. On the panel view, choose Prometheus as the data source and, in the Metric field, find the process_open_fds element and select it. Now, click on the Run queries button; you should see the following view:

Figure 13.7 – Grafana panel view

We just configured the dashboard panel to display the process_open_fds time series stored in Prometheus. Each data point on the chart shows the value of the time series at a different time, displayed below the chart. On the right-hand panel, you can set the panel title to Open fd count. Now, save the dashboard by clicking the Apply button provided in the top menu. You will be redirected to the dashboard page.

In the top menu, you will find the Add panel button, which you can use to add a new panel to our dashboard. If you follow the same steps that we did for the previous panel and choose the go_gc_duration_seconds metric, you will add a new panel to the dashboard that will visualize the go_gc_duration_seconds time series from Prometheus.

The resulting dashboard should look like this:

Figure 13.8 – Example Grafana dashboard

We just created an example dashboard that has two panels that display some existing Prometheus metrics. You can use the same approach to create any dashboards for your services, as well as high-level dashboards showing the system-global metrics, such as the total number of API requests, network throughput, or the total number of all service instances.

Let’s provide some examples of metrics that can be useful for setting up a dashboard for an individual service. These include the Four Golden Signals, which we mentioned in Chapter 12:

  • Client error rate: The ratio between client errors (such as invalid or unauthenticated requests) and all requests to the service
  • Server error rate: The ratio between server errors (such as database write errors) and all requests to the service
  • API throughput: Number of API requests per second/minute
  • API latency: API request processing latency, usually measured in percentiles, such as p90/p95/p99 (you can learn about percentiles by reading this blog post: https://www.elastic.co/blog/averages-can-dangerous-use-percentile)
  • CPU utilization: Current usage of CPUs (100% means all CPUs are fully loaded)
  • Memory utilization: Ratio between used and total memory across all service instances
  • Network throughput: Total amount of network write/read traffic per second/minute

Depending on the operations performed by your service (for example, database writes or reads, cache usage, Kafka consumption, or production), you may wish to include additional panels that will help you visualize your service performance. Make sure that you cover all the high-level functionality of the service so that you can visually notice any service malfunctions on your dashboards.

The Grafana tool, which we used in our example, also supports lots of different visualization options, such as displaying tables, heatmaps, numerical values, and much more. We will not cover these features in this chapter, but you can get familiar with them by reading the official documentation: https://grafana.com/docs/. Using the full power of Grafana will help you set up excellent dashboards for your services, simplifying your debugging and performance analysis.

Now, let’s move on to the next section, where we will describe Go frameworks.

Frameworks

In Chapter 2, we covered the topic of the Go project structure, as well as some common patterns of organizing your Go code. The code organization principles that we described are generally based on conventions – written agreements or statements that define specific rules for naming and placing Go files. Some of the conventions that we followed were proposed by the authors of the Go language, while others are commonly used and proposed by authors of various Go libraries.

While conventions play an important role in establishing the common principles of organizing Go code, there are other ways of enforcing specific code structures. One such way is using frameworks, which we are going to cover in this section.

Generally speaking, frameworks are tools that establish a structure for various components of your code. Let’s take the following code snippet as an example:

package main

import (
    "fmt"
    "net/http"
)

func main() {
    http.HandleFunc("/echo",
        func(w http.ResponseWriter, _ *http.Request) {
            fmt.Fprintf(w, "Hi!")
        })
    if err := http.ListenAndServe(":8080", nil); err != nil {
        panic(err)
    }
}

Here, we are registering an HTTP handler function and letting it handle HTTP requests on the localhost:8080/echo endpoint. The code for our example is extremely simple, yet it does a lot of background work (you can check the source of the net/http package to see how complex the internal part of the HTTP handling logic is) to start an HTTP server, accept all incoming requests, and respond to them by executing the function provided by us. Most importantly, our code allows us to add additional HTTP handlers by following the same format of calling the http.HandleFunc function and passing handler functions to it. The net/http library that we used in our example established a structure for handling HTTP calls to various endpoints, acting as a framework for our Go application.

The authors of the net/http package made it possible to add HTTP endpoint handlers (via the http.HandleFunc function) by following a pattern called inversion of control (IoC). IoC is a way of organizing code in which some component (in our case, the net/http package) takes control of the execution flow by calling the components supplied to it (in our case, the function passed as an argument to http.HandleFunc). In our example, the moment we call the http.ListenAndServe function, the net/http package takes control of executing the HTTP handler functions: each time the HTTP server receives an incoming request, our function is called automatically.

IoC is the primary mechanism that allows most frameworks to establish a foundation for various parts of application code. In general, most frameworks work by taking control of an application, or a part of it, and handling routine operations, such as resource management (opening and closing incoming connections, writing and reading files, and so on), serialization and deserialization, and many more.
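To make the IoC pattern more concrete, here is a minimal sketch of a tiny “framework” of our own invention (all names are illustrative): the application only registers handler functions, while the dispatcher owns the execution loop and decides when to call them, just as net/http does with HTTP handlers:

```go
package main

import "fmt"

// handler is the callback type the application supplies.
type handler func(event string)

// dispatcher owns the execution flow: the application registers
// handlers, and the dispatcher decides when to invoke them.
type dispatcher struct {
	handlers map[string]handler
}

func newDispatcher() *dispatcher {
	return &dispatcher{handlers: map[string]handler{}}
}

// Handle registers a callback for an event, mirroring the shape
// of http.HandleFunc.
func (d *dispatcher) Handle(event string, h handler) {
	d.handlers[event] = h
}

// Run takes control: it drains the incoming events and calls the
// matching handlers automatically (inversion of control).
func (d *dispatcher) Run(events []string) {
	for _, e := range events {
		if h, ok := d.handlers[e]; ok {
			h(e)
		}
	}
}

func main() {
	d := newDispatcher()
	d.Handle("ping", func(event string) {
		fmt.Println("handled", event)
	})
	d.Run([]string{"ping", "unknown", "ping"})
}
```

Once Run is called, the application code no longer drives the flow; the dispatcher does, calling back into the registered functions.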

What are the primary use cases for using Go frameworks? We can list some of the most common ones:

  • Writing web servers: Similar to our example of an HTTP server, there can be other types of web servers handling requests to different endpoints using different protocols, such as Apache Thrift or gRPC.
  • Async event processing: There are libraries for various asynchronous communication tools, such as Apache Kafka, that help organize code in an IoC way by passing handler functions for various types of events (such as Kafka messages belonging to different topics), which get called automatically each time there is a new unprocessed message.

It is important to note that frameworks have some significant downsides:

  • Harder to debug and understand the execution flow: In addition to taking control of the execution flow, frameworks also perform lots of background work that is hidden from developers. Because of this, it is usually much harder to understand how your code is being executed, as well as to debug various issues, such as initialization errors (you may find more information on this in the following article: https://www.baeldung.com/cs/framework-vs-library).
  • Steeper learning curve: Frameworks generally require a good understanding of the logic and abstractions they provide. This requires developers to spend more time reading the related documentation or learning some key lessons in practice.
  • Harder to catch some trivial bugs via static checks: Frameworks often use dynamic code invocation libraries, such as reflect (https://pkg.go.dev/reflect). Such operations are performed when executing a program, making it hard to catch various types of issues, such as the incorrect implementation of interfaces or invalid naming.

When deciding on using a specific framework, you should do some analysis and compare the advantages it provides to you with the downsides it brings, especially in the long term. Many developers underestimate the complexity that frameworks bring to them or the other developers in their organizations: most frameworks perform a fair amount of magic to provide a convenient code structure to application developers. In general, you should always start from a simpler option (in our case, not using a particular framework) and only decide to use a framework if its benefits outweigh its downsides.

Now that we have discussed the topic of frameworks, let’s move on to the next section, where we will describe the different aspects of microservice ownership.

Storing microservice ownership data

One of the key benefits of using microservice architectures is the ability to distribute their development: each service can be developed and maintained by a separate team, and teams can be distributed across the globe. While the distributed development model helps different teams build various parts of their systems independently, it brings some new challenges, such as service ownership.

To illustrate the problem of service ownership, imagine that you are working in a company with thousands of microservices. One day, the security engineers of your company find out that there is a critical security vulnerability in a popular Go library that is used in most of the company’s services. How can you communicate with the right teams and find out who would be responsible for making the changes in each service?

There are numerous companies with thousands of microservices, and at that scale it becomes impossible to remember which team and which developers are responsible for each of them. For such companies, finding a solution to the service ownership problem becomes crucial.

Note

While we are discussing the ownership problem for microservices, the same principles apply to many other types of technological assets, such as Kafka topics and database tables.

How can we define service ownership?

There are many different ways of doing this, each important for specific use cases:

  • Accountability: Which person/entity is accountable for the service and who can act as the primary point of contact or the main authority for it?
  • Support: Who is going to provide support for the service, such as handling a bug report, feature request, or user question?
  • On-call: Who is currently on-call for the service? Who can we contact in case of an emergency issue?

As you can see, there are many ways of interpreting the word ownership, depending on the use case. Let’s look at some ways to define each role, starting with accountability: who should be accountable, or liable, for a service?

In most organizations, accountability is attributed to engineering managers: every engineering manager acts as the accountable individual for some unique domain. If you define a mapping between your services and the engineering managers responsible for them, anyone can easily find the relevant point of contact for a service, solving the accountability problem.

An alternative way of defining service accountability is to associate services with teams. However, there can be multiple issues with this:

  • Shared accountability does not always work: If you have multiple people that are responsible for a service, it becomes unclear who the final authority among them is.
  • A team is a loosely defined concept in many organizations: Unless you have a single, well-defined registry of teams in your company, it’s better to avoid referencing team names in your systems.

Now, let’s discuss the support aspect of ownership. Ideally, each service should have a mechanism for reporting any issues or bugs. Such a mechanism can take one of the following forms:

  • Support channel: The identifier or URL of a messaging channel for leaving support requests, such as a link to the relevant Google group, Slack channel, or any other similar tool.
  • Ticketing system URL: The URL to a system/page that allows you to create a support request ticket. Developers often use Atlassian Jira for this purpose.

If you provide such metadata for all your services, you will significantly simplify user support: all service users, such as other developers, will always know how to request support for them or report any bugs or other issues.

Let’s move on to the on-call ownership metadata. An easy solution to this is to link each service to its on-call rotation. If you use PagerDuty, you can store the relationships between service names and their corresponding PagerDuty rotation identifiers.

An example of the ownership metadata that we just described is as follows:

ownership:
    rating-service:
        accountable: [email protected]
        support:
            slack: rating-service-support-group
        oncall:
            pagerduty_rotation: SOME_ROTATION_ID

Our example is defined in YAML format, though it may be preferable to store this data in some system that would allow us to query or modify it via an API. This way, you can automatically submit new ownership changes (for example, when people leave the company and you want to reassign the ownership automatically). I would also suggest making the ownership data mandatory for all services. To enforce this, you can establish a service creation process that will request the ownership data before developers provision new services.
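As a sketch of what such a queryable system could look like behind its API, the following Go types model an in-memory ownership registry. All names here are our own, purely illustrative; a real system would persist the records and expose them over HTTP or gRPC:

```go
package main

import (
	"errors"
	"fmt"
)

// Ownership describes who is responsible for a service.
type Ownership struct {
	Accountable       string // e.g., an engineering manager's email
	SupportSlack      string // Slack channel for support requests
	PagerDutyRotation string // on-call rotation identifier
}

// Registry maps service names to their ownership records.
type Registry struct {
	records map[string]Ownership
}

func NewRegistry() *Registry {
	return &Registry{records: map[string]Ownership{}}
}

// Set registers or updates the ownership data for a service,
// e.g., when ownership is reassigned after someone leaves.
func (r *Registry) Set(service string, o Ownership) {
	r.records[service] = o
}

var ErrNotFound = errors.New("no ownership record for service")

// Get returns the ownership record for a service.
func (r *Registry) Get(service string) (Ownership, error) {
	o, ok := r.records[service]
	if !ok {
		return Ownership{}, ErrNotFound
	}
	return o, nil
}

func main() {
	r := NewRegistry()
	r.Set("rating-service", Ownership{
		SupportSlack:      "rating-service-support-group",
		PagerDutyRotation: "SOME_ROTATION_ID",
	})
	o, _ := r.Get("rating-service")
	fmt.Println(o.SupportSlack)
}
```

A mandatory-ownership policy could then be enforced by rejecting service provisioning requests whose name has no record in the registry.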

Now that we’ve discussed service ownership, let’s move on to the next section, where we will describe the basics of Go microservice security.

Securing microservice communication with JWT

In this section, we are going to review some basic concepts of microservice security, such as authentication and authorization. You will learn how to implement such logic in Go using a popular JSON Web Token (JWT) protocol.

Let’s start with one of the primary aspects of security: authentication. Authentication is the process of verifying someone’s identity, such as via user credentials. When you log into some system, such as Gmail, you generally go through the authentication process by providing your login details (username and password). The system that performs authentication performs verification by comparing the provided data with the existing records it stores. Verification can take one or multiple steps: some types of authentication, such as two-factor authentication, require some additional actions, such as verifying access to a phone number via SMS.

A successful authentication often results in granting the caller access to some resources, such as user data (for example, user emails in Gmail). Additionally, the server performing this authentication may provide a security token to the caller that can be used on subsequent calls to skip the verification process.

Another form of access control, known as authorization, involves specifying access rights to various resources. Authorization is often performed to check whether a user has permission to perform a certain action, such as viewing a specific admin page. Authorization is often performed by using a security token that was obtained during authentication, as illustrated in the following diagram:

Figure 13.9 – Authorization request providing a token

There are many different ways to implement authentication and authorization in microservices. Among the most popular protocols is JWT, a proposed internet standard for creating security tokens that can contain any number of facts about the caller’s identity, such as them being an administrator. Let’s review the basics of the protocol to help you understand how to use it in your services.

JWT basics

JWTs are generated by components that perform authentication or authorization. Each token consists of three parts: a header, a payload, and a signature. The payload is the main part of the token and it contains a set of claims – statements about the caller’s identity, such as a user identifier or a role in the system. The following code shows an example of a token payload:

{
    "name": "Alexander",
    "role": "admin",
    "iat": 1663880774
}

Our example payload contains three claims: the user’s name, role (admin in our example), and token issuance time (iat is a standard field name that is a part of the JWT protocol). Such claims could be used in various flows – for example, when checking whether a user has the admin role to access a system dashboard.

As a protection mechanism against modifications, each token contains a signature – a cryptographic function of its header, its payload, and a special value, called a secret, that is known only to the authentication server. The following pseudocode provides an example of token signature calculation:

HMACSHA256(
    base64UrlEncode(header) + "." +
    base64UrlEncode(payload),
    secret,
)

The algorithm that is used for creating a token signature is defined in a token header. The following JSON record provides an example of a header:

{
    "alg": "HS256",
    "typ": "JWT"
}

In our example, the token is using HMAC-SHA256, a cryptographic algorithm that is commonly used for signing JWTs. Our selection of HMAC-SHA256 is primarily due to its popularity; if you wish to learn about other signing algorithms, you can find a link to an overview of them in the Further reading section of the chapter.

The resulting JWT is a concatenation of the token’s header, payload, and signature, each encoded with the Base64url encoding. For example, the following value is a JWT that’s been created by combining the header and the payload from our code snippets, signed with a secret string called our-secret:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJuYW1lIjoiQWxleGFuZGVyIiwicm9sZSI6ImFkbWluIiwiaWF0IjoxNjYzODgwNzc0fQ.FqogLyrV28wR5po6SMouJ7qs2Y3m6gmpaPg6MUthWpQ

To practice JWT creation, I suggest using the JWT tool available at https://jwt.io to try encoding arbitrary JWTs and see the resulting token values.

Now that we have discussed the high-level details of JWT, let’s move on to the practical part of this section – implementing basic authentication and authorization in Go microservices using JWTs.

Implementing authentication and authorization with JWTs

In this section, we will provide some examples of implementing basic access control via authentication and authorization using Go.

Let’s start with the authentication process. A simple credential-based authentication flow can be summarized as follows:

  • The client initiating authentication would call a specified endpoint (for example, HTTPS POST /auth) while providing the user credentials, such as username and password.
  • The server handling authentication would verify the credentials and perform one of two actions:
    • Return an error if the credentials are invalid (for example, an HTTP error with a 401 code).
    • Return a successful response with a 200 code, containing a JWT, that is signed with the server’s secret.
  • If authentication is successful, the client can store the received token so that it can be used in the following requests.

Let’s illustrate how to implement the server logic for the authentication flow that we just described. To generate JWTs in our Go code, we will use the https://github.com/golang-jwt/jwt library.

The following code provides an example of handling an HTTP authentication request. It performs credential validation and returns a successful response with a signed JWT if the validation passes:

const secret = "our-secret"

func Authenticate(w http.ResponseWriter, req *http.Request) {
    username := req.FormValue("username")
    password := req.FormValue("password")
    if !validCredentials(username, password) {
        http.Error(w, "invalid credentials", http.StatusUnauthorized)
        return
    }
    token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
        "username": username,
        "iat":      time.Now().Unix(),
    })
    // HMAC signing methods expect a []byte key, not a string.
    tokenString, err := token.SignedString([]byte(secret))
    if err != nil {
        http.Error(w, "failed to create a token", http.StatusInternalServerError)
        return
    }
    fmt.Fprint(w, tokenString)
}

func validCredentials(username, password string) bool {
    // Implement your credential verification here.
    return false
}
func validCredentials(username, password string) bool {
    // Implement your credential verification here.
    return false
}

In the preceding code, we created a token using the jwt.NewWithClaims function. The token includes two fields:

  • username: Name of the authenticated user
  • iat: Time of token creation

The server code that we just created uses the secret value to sign the token. Without knowing the secret, it is impossible to modify the token undetected: the token signature allows the server to verify that the token has not been tampered with.

Now, let’s illustrate how the client can perform requests using the token that it receives after successfully authenticating:

func authorizationExample(token, operationURL string) error {
    req, err := http.NewRequest(http.MethodPost, operationURL, nil)
    if err != nil {
        return err
    }
    req.Header.Set("Authorization", "Bearer "+token)
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    // Handle response.
    return nil
}

In our example of an authorized operation, we added an Authorization header to the request, prepending the Bearer prefix to the token value. The Bearer prefix denotes a bearer token – a token that grants access to whoever presents (bears) it.

Let’s also provide the logic of a server handler that would handle such an authorized request and verify whether the provided token is correct:

func AuthorizedOperationExample(w http.ResponseWriter, req *http.Request) {
    authHeaderValue := req.Header.Get("Authorization")
    const bearerPrefix = "Bearer "
    if !strings.HasPrefix(authHeaderValue, bearerPrefix) {
        http.Error(w, "request does not contain an Authorization Bearer token", http.StatusUnauthorized)
        return
    }
    tokenString := strings.TrimPrefix(authHeaderValue, bearerPrefix)
    // Validate the token, making sure it was signed with the expected algorithm.
    token, err := jwt.Parse(tokenString, func(token *jwt.Token) (interface{}, error) {
        if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok {
            return nil, fmt.Errorf("unexpected signing method: %v", token.Header["alg"])
        }
        // HMAC validation expects a []byte key, not a string.
        return []byte(secret), nil
    })
    if err != nil {
        http.Error(w, "invalid token", http.StatusUnauthorized)
        return
    }
    claims, ok := token.Claims.(jwt.MapClaims)
    if !ok || !token.Valid {
        http.Error(w, "invalid token", http.StatusUnauthorized)
        return
    }
    username := claims["username"]
    fmt.Fprintf(w, "Hello, %s", username)
}

Let’s describe some highlights of the provided example:

  • We use the jwt.Parse function to parse the token and validate it. We return an error if the signature algorithm does not match HMAC-SHA256, which we used previously.
  • The parsed token contains the Claims field, which holds the claims from the token payload.
  • We use the username claim from the token payload in our function. Once we have successfully parsed the token and verified that it is valid, we can assume that the information in its payload has been securely passed to us and can trust it.

Now that we have provided examples of Go authentication and authorization using JWTs, let’s list some best practices for using JWTs to secure microservice communication:

  • Set a token expiration time: When issuing JWTs, it is useful to set the token expiration time (the exp JWT claim field) to avoid situations where users use old authorization records. By having an expiration time set in each token payload, you can verify it against authorization requests. For example, when a user authenticates as a system administrator, you can set a short token expiration time (for example, a few hours) to avoid situations where a former administrator can still perform critical actions in the system.
  • Include the token issuance time: Additional metadata, such as the token issuance time (the iat JWT claim field), can be useful in many practical situations. For example, if you identify a security breach that happened at a certain point in time, you can invalidate all access tokens that were issued before that moment by using the token issuance time metadata.
  • Use JWTs with HTTPS instead of HTTP: The HTTPS protocol encrypts request metadata, such as authorization request headers, preventing various types of security attacks. An example of such a security attack is a man-in-the-middle attack, which is when some third party (such as a hacker trying to obtain a user’s access token) captures network traffic to extract JWTs from request headers.
  • Prefer standard JWT claim fields to custom ones: When including metadata in the JWT payload, make sure that there is no standard field for the same purpose. You can find a list of standard JWT claim fields at https://en.wikipedia.org/wiki/JSON_Web_Token#Standard_fields.

The https://jwt.io/ website contains some additional tips on using JWTs, as well as an online tool for encoding and decoding JWTs, that you can use to debug your service communication.

Summary

With that, we have finished the last chapter of this book by reviewing lots of microservice development topics that were not included in the previous chapters. You learned how to profile Go services, create microservice dashboards so that you can monitor their performance, define and store microservice ownership data, and secure microservice communication with JWTs. I hope that you have found lots of interesting tips in this chapter that will help you build scalable, highly performant, and secure microservices.

The Go language keeps evolving, as well as the tooling for it. Each day, developers release new libraries and tools for it that can solve various microservice development problems that we described in this book. While this book provided you with lots of tips on Go microservice development, you should keep improving your skills and make your services simpler and easier to maintain.

I also want to thank you for reading this book. I hope you enjoyed reading it and gained lots of useful experience that will help you in mastering the art of Go microservice development. Let your Go microservices be highly performant, secure, and easy to maintain!

Further reading

To learn more about the topics that were covered in this chapter, take a look at the following resources:

We suggest that you use these resources to stay up to date with the latest news related to Go microservice development:
