If you are reading this chapter – congratulations, you have reached the very final part of this book! We have discussed many topics related to microservice development, but some remain that are important to cover. The topics in this chapter span many areas, from observability and debugging to service ownership and security. You may find these topics useful at various points in time: some of them will be helpful once you have working services serving production traffic, while others will be useful while your services are still in active development.
In this chapter, we will cover the following topics:
Let’s proceed to the first section of this chapter, which covers service profiling.
To complete this chapter, you will need Go 1.11 or above. Additionally, you will need the following tools:
You can find the code examples for this chapter on GitHub: https://github.com/PacktPublishing/microservices-with-go/tree/main/Chapter13.
In this section, we are going to review a technique called profiling, which involves collecting real-time performance data of a running process, such as a Go service. Profiling is a powerful technique that can help you analyze various types of service performance data:
Profiling may help you in different situations:
Let’s illustrate how to profile Go services using the pprof tool, which is a part of the Go SDK. To visualize the results of the tool, you will need to install the Graphviz library: https://graphviz.org/.
We will use the metadata service that we implemented in Chapter 2 as an example. Open the metadata/cmd/main.go file and add the flag package to the imports block, along with a blank import of the net/http/pprof package (_ "net/http/pprof"), which registers the /debug/pprof handlers on the default HTTP mux as a side effect. Then, add the following code to the beginning of the main function, immediately after the logger initialization:
simulateCPULoad := flag.Bool("simulatecpuload", false, "simulate CPU load for profiling")
flag.Parse()
if *simulateCPULoad {
	go heavyOperation()
}
go func() {
	if err := http.ListenAndServe("localhost:6060", nil); err != nil {
		logger.Fatal("Failed to start profiler handler", zap.Error(err))
	}
}()
In the code we just added, we introduced an additional flag called simulatecpuload that will let us simulate a CPU-intensive operation for our profiling. We also started an HTTP handler that we will use to access the profiler data from the command line.
Now, let’s add another function to the same file that will run a continuous loop and execute some CPU-intensive operations. We will generate random 1,024-byte arrays and calculate their md5 hashes (you can read about the md5 operation in the comments of its Go package at https://pkg.go.dev/crypto/md5). This choice of operation is arbitrary: we could just as easily pick any other logic that consumes a noticeable share of the CPU.
Add the following code to the main.go file that we just updated:
func heavyOperation() {
	for {
		// Generate a random 1,024-byte array and compute its md5
		// hash to keep the CPU busy.
		token := make([]byte, 1024)
		rand.Read(token)
		md5.New().Write(token)
	}
}
Now, we are ready to test our profiling logic. Run the service with the --simulatecpuload argument:
go run *.go --simulatecpuload
Now, execute the following command:
go tool pprof http://localhost:6060/debug/pprof/profile?seconds=5
The command should take 5 seconds to complete. If it executes successfully, the pprof tool will be running, as shown here:
Type: cpu
Time: Sep 13, 2022 at 5:37pm (+05)
Duration: 5.14s, Total samples = 4.42s (85.92%)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof)
Type web in the command prompt of the tool and press Enter. If everything worked well, you will be redirected to a browser window containing a CPU profile graph:
Figure 13.1 – Go CPU profile example
Let’s walk through the data from the graph to understand how to interpret it. Each node on the graph includes the following data:
For example, the heavyOperation function itself took just 0.01 seconds, but all the operations executed in it (including all function calls inside it) took 4.39 seconds, which accounts for most of the elapsed time.
If you walk through the graph, you will see the distribution of the elapsed time by sub-operations. In our case, heavyOperation executed two functions that got recorded by the CPU profiler: md5.Write and rand.Read. The md5.Write function took 2.78 seconds in total, while rand.Read took 1.59 seconds of the execution time. Level by level, you can analyze the calls and find the CPU-intensive functions.
When working with the CPU profiler data, notice the functions that take the most processing time. Such functions are illustrated as larger rectangles to help you find them. If you notice that some functions have unexpectedly high processing time, spend some time analyzing their code to see whether there is any opportunity to optimize them.
Now, let’s illustrate another example of profiler data. This time, we will be capturing a heap profile – a profile showing dynamic memory allocation by a Go process. Run the following command:
go tool pprof http://localhost:6060/debug/pprof/heap
Similar to the previous example, successfully executing this command should run the pprof tool, where we can execute a web command. The result will contain the following graph:
Figure 13.2 – Go heap profile example
This diagram is similar to the CPU profile. The last line inside each node shows the ratio between the memory used by the function and the total heap memory allocated by the process.
In our example, three high-level operations are consuming the heap memory:
Now that we have covered the basics of Go profiling, let’s move on to the next topic of this chapter: service dashboarding.
In the previous two chapters, we reviewed various ways of working with service metrics. In Chapter 11, we demonstrated how to collect the service metrics, while in Chapter 12, we showed you how to aggregate and query them using the Prometheus tool. In this section, we will describe one more way of accessing the metrics data that can help you explore your metrics and plot them as charts. The technique that we will cover is called dashboarding and is useful for visualizing various service metrics.
Let’s provide an example of a dashboard – a set of charts representing different metrics. The following figure shows the dashboard of a Go service containing some system-level metrics, such as the goroutine count, the number of Go threads, and allocated memory size:
Figure 13.3 – Go process dashboard example from the Grafana tool
Dashboards help visualize various types of data, such as time series datasets, allowing us to analyze service performance. The following are some other use cases for using dashboards:
It’s a great practice to have a dashboard for each of your services, as well as some dashboards that span all services, to get some high-level system performance data, such as the number of active service instances, network throughput, and much more.
Let’s demonstrate how to set up an example dashboard for the Prometheus data that we collected in Chapter 12. For this, we will use the open source tool called Grafana, which has built-in support for various types of time series data and provides a convenient user interface for setting up different dashboards. Follow these instructions to set up a Grafana dashboard:
docker run -d -p 3000:3000 grafana/grafana-oss
This command should fetch and run the open source version of Grafana (Grafana also comes in an enterprise version, which we won’t cover in this chapter) and expose port 3000 so that we can access it via HTTP.
Note
Similar to Prometheus, Grafana is also written in Go and is another example of a popular open source Go project widely used across the software development industry.
Figure 13.4 – Grafana data source configuration menu
Figure 13.5 – Grafana configuration for a Prometheus data source
Figure 13.6 – Grafana’s New dashboard menu item for dashboard creation
A panel is a core element of a Grafana dashboard and its purpose is to visualize the provided dataset. To illustrate how to use it, let’s select our Prometheus data source and some of the metrics that it already has. On the panel view, choose Prometheus as the data source and, in the Metric field, find the process_open_fds element and select it. Now, click on the Run queries button; you should see the following view:
Figure 13.7 – Grafana panel view
We just configured the dashboard panel to display the process_open_fds time series stored in Prometheus. Each data point on the chart shows the value of the time series at a different time, displayed below the chart. On the right-hand panel, you can set the panel title to Open fd count. Now, save the dashboard by clicking the Apply button provided in the top menu. You will be redirected to the dashboard page.
In the top menu, you will find the Add panel button, which you can use to add a new panel to our dashboard. If you follow the same steps that we did for the previous panel and choose the go_gc_duration_seconds metric, you will add a new panel to the dashboard that will visualize the go_gc_duration_seconds time series from Prometheus.
The resulting dashboard should look like this:
Figure 13.8 – Example Grafana dashboard
We just created an example dashboard that has two panels that display some existing Prometheus metrics. You can use the same approach to create any dashboards for your services, as well as high-level dashboards showing the system-global metrics, such as the total number of API requests, network throughput, or the total number of all service instances.
Let’s provide some examples of metrics that can be useful for setting up a dashboard for an individual service. This includes The Four Golden Signals, which we mentioned in Chapter 12:
Depending on the operations performed by your service (for example, database writes or reads, cache usage, Kafka consumption, or production), you may wish to include additional panels that will help you visualize your service performance. Make sure that you cover all the high-level functionality of the service so that you can visually notice any service malfunctions on your dashboards.
The Grafana tool, which we used in our example, also supports lots of different visualization options, such as displaying tables, heatmaps, numerical values, and much more. We will not cover these features in this chapter, but you can get familiar with them by reading the official documentation: https://grafana.com/docs/. Using the full power of Grafana will help you set up excellent dashboards for your services, simplifying your debugging and performance analysis.
Now, let’s move on to the next section, where we will describe Go frameworks.
In Chapter 2, we covered the topic of the Go project structure, as well as some common patterns of organizing your Go code. The code organization principles that we described are generally based on conventions – written agreements or statements that define specific rules for naming and placing Go files. Some of the conventions that we followed were proposed by the authors of the Go language, while others are commonly used and proposed by authors of various Go libraries.
While conventions play an important role in establishing the common principles of organizing Go code, there are other ways of enforcing specific code structures. One such way is using frameworks, which we are going to cover in this section.
Generally speaking, frameworks are tools that establish a structure for various components of your code. Let’s take the following code snippet as an example:
package main
import (
"fmt"
"net/http"
)
func main() {
http.HandleFunc("/echo",
func(w http.ResponseWriter, _ *http.Request) {
fmt.Fprintf(w, "Hi!")
})
if err := http.ListenAndServe(":8080", nil);
err != nil {
panic(err)
}
}
Here, we are registering an HTTP handler function and letting it handle HTTP requests on the localhost:8080/echo endpoint. The code for our example is extremely simple, yet it does a lot of background work (you can check the source of the net/http package to see how complex the internal part of the HTTP handling logic is) to start an HTTP server, accept all incoming requests, and respond to them by executing the function provided by us. Most importantly, our code allows us to add additional HTTP handlers by following the same format of calling the http.HandleFunc function and passing handler functions to it. The net/http library that we used in our example established a structure for handling HTTP calls to various endpoints, acting as a framework for our Go application.
The authors of the net/http package enabled adding additional HTTP endpoint handlers (registered via the http.HandleFunc function) by following a pattern called inversion of control (IoC). IoC is a way of organizing code in which some component (in our case, the net/http package) takes control of the execution flow by calling into other components (in our case, the function provided as an argument to http.HandleFunc). In our example, the moment we call the http.ListenAndServe function, the net/http package takes control of executing the HTTP handler functions: each time the HTTP server receives an incoming request, our function is called automatically.
IoC is the primary mechanism that allows most frameworks to establish a foundation for various parts of application code. In general, most frameworks work by taking control of an application, or a part of it, and handling routine operations, such as resource management (opening and closing incoming connections, writing and reading files, and so on), serialization and deserialization, and many more.
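To make the inversion-of-control idea concrete, here is a tiny, hypothetical "framework" of our own: application code registers handlers, and the framework owns the control flow and decides when to call back into them, just as net/http calls our handler functions. All names here are invented for illustration:

```go
package main

import "fmt"

// miniFramework is a toy illustration of inversion of control:
// application code registers handlers, and the framework decides
// when to invoke them.
type miniFramework struct {
	handlers map[string]func(input string) string
}

func newMiniFramework() *miniFramework {
	return &miniFramework{handlers: map[string]func(string) string{}}
}

// Handle registers application code with the framework, mirroring the
// role of http.HandleFunc in the net/http package.
func (f *miniFramework) Handle(name string, h func(string) string) {
	f.handlers[name] = h
}

// Dispatch is the framework-owned control flow: it picks the handler
// and calls back into application code.
func (f *miniFramework) Dispatch(name, input string) (string, bool) {
	h, ok := f.handlers[name]
	if !ok {
		return "", false
	}
	return h(input), true
}

func main() {
	f := newMiniFramework()
	f.Handle("echo", func(in string) string { return "echo: " + in })
	out, _ := f.Dispatch("echo", "hi")
	fmt.Println(out)
}
```

The application never calls its own handler directly — it only hands the function over and lets the framework drive, which is exactly the control flip that IoC describes.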
What are the primary use cases for using Go frameworks? We can list some of the most common ones:
It is important to note that frameworks have some significant downsides:
When deciding on using a specific framework, you should do some analysis and compare the advantages it provides to you with the downsides it brings, especially in the long term. Many developers underestimate the complexity that frameworks bring to them or the other developers in their organizations: most frameworks perform a fair amount of magic to provide a convenient code structure to application developers. In general, you should always start from a simpler option (in our case, not using a particular framework) and only decide to use a framework if its benefits outweigh its downsides.
Now that we have discussed the topic of frameworks, let’s move on to the next section, where we will describe the different aspects of microservice ownership.
One of the key benefits of using microservice architectures is the ability to distribute their development: each service can be developed and maintained by a separate team, and teams can be distributed across the globe. While the distributed development model helps different teams build various parts of their systems independently, it brings some new challenges, such as service ownership.
To illustrate the problem of service ownership, imagine that you are working in a company with thousands of microservices. One day, the security engineers of your company find out that there is a critical security vulnerability in a popular Go library that is used in most of the company’s services. How can you communicate with the right teams and find out who would be responsible for making the changes in each service?
In companies with thousands of microservices, it becomes impossible to remember which team and which developers are responsible for each of them, so finding a solution to the service ownership problem becomes crucial.
Note
While we are discussing the ownership problem for microservices, the same principles apply to many other types of technological assets, such as Kafka topics and database tables.
How can we define service ownership?
There are many different ways of doing this, each important for specific use cases:
As you can see, there are many ways of interpreting the word ownership, depending on the use case. Let’s look at some ways to define each role, starting with accountability: who should be accountable, or liable, for a service?
In most organizations, liability is attributed to engineering managers: every engineering manager acts as the accountable individual for some unique domain. If you define a mapping between your services and the engineering managers responsible for them, you solve the service accountability problem: anyone can easily find the relevant point of contact, such as the engineering manager that is liable for a given service.
An alternative way of defining service accountability is to associate services with teams. However, there can be multiple issues with this:
Now, let’s discuss the support aspect of ownership. Ideally, each service should have a mechanism for reporting any issues or bugs. Such a mechanism can take one of the following forms:
If you provide such metadata for all your services, you will significantly simplify user support: all service users, such as other developers, will always know how to request support for them or report any bugs or other issues.
Let’s move on to the on-call ownership metadata. An easy solution to this is to link each service to its on-call rotation. If you use PagerDuty, you can store the relationships between service names and their corresponding PagerDuty rotation identifiers.
An example of the ownership metadata that we just described is as follows:
ownership:
  rating-service:
    accountable: [email protected]
    support:
      slack: rating-service-support-group
    oncall:
      pagerduty_rotation: SOME_ROTATION_ID
Our example is defined in YAML format, though it may be preferable to store this data in some system that would allow us to query or modify it via an API. This way, you can automatically submit new ownership changes (for example, when people leave the company and you want to reassign the ownership automatically). I would also suggest making the ownership data mandatory for all services. To enforce this, you can establish a service creation process that will request the ownership data before developers provision new services.
Now that we’ve discussed service ownership, let’s move on to the next section, where we will describe the basics of Go microservice security.
In this section, we are going to review some basic concepts of microservice security, such as authentication and authorization. You will learn how to implement such logic in Go using a popular JSON Web Token (JWT) protocol.
Let’s start with one of the primary aspects of security: authentication. Authentication is the process of verifying someone’s identity, such as via user credentials. When you log into some system, such as Gmail, you generally go through the authentication process by providing your login details (username and password). The system that performs authentication performs verification by comparing the provided data with the existing records it stores. Verification can take one or multiple steps: some types of authentication, such as two-factor authentication, require some additional actions, such as verifying access to a phone number via SMS.
A successful authentication often results in granting the caller access to some resources, such as user data (for example, user emails in Gmail). Additionally, the server performing this authentication may provide a security token to the caller that can be used on subsequent calls to skip the verification process.
Another form of access control, known as authorization, involves specifying access rights to various resources. Authorization is often performed to check whether a user has permission to perform a certain action, such as viewing a specific admin page. Authorization is often performed by using a security token that was obtained during authentication, as illustrated in the following diagram:
Figure 13.9 – Authorization request providing a token
There are many different ways to implement authentication and authorization in microservices. Among the most popular protocols is JWT, a proposed internet standard for creating security tokens that can contain any number of facts about the caller’s identity, such as them being an administrator. Let’s review the basics of the protocol to help you understand how to use it in your services.
JWTs are generated by components that perform authentication or authorization. Each token consists of three parts: a header, a payload, and a signature. The payload is the main part of the token and it contains a set of claims – statements about the caller’s identity, such as a user identifier or a role in the system. The following code shows an example of a token payload:
{
  "name": "Alexander",
  "role": "admin",
  "iat": 1663880774
}
Our example payload contains three claims: the user’s name, role (admin in our example), and token issuance time (iat is a standard field name that is a part of the JWT protocol). Such claims could be used in various flows – for example, when checking whether a user has the admin role to access a system dashboard.
As a protection mechanism against modifications, each token contains a signature – a cryptographic function of its header, its payload, and a special value, called a secret, that is known only to the authentication server. The following pseudocode provides an example of token signature calculation:
HMACSHA256(
  base64UrlEncode(header) + "." +
  base64UrlEncode(payload),
  secret,
)
The algorithm that is used for creating a token signature is defined in a token header. The following JSON record provides an example of a header:
{
  "alg": "HS256",
  "typ": "JWT"
}
In our example, the token is using HMAC-SHA256, a cryptographic algorithm that is commonly used for signing JWTs. Our selection of HMAC-SHA256 is primarily due to its popularity; if you wish to learn about other signing algorithms, you can find a link to an overview of them in the Further reading section of the chapter.
The resulting JWT is a dot-separated concatenation of the token’s header, payload, and signature, each encoded with Base64url encoding. For example, the following value is a JWT that’s been created by combining the header and the payload from our code snippets, signed with a secret string called our-secret:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJuYW1lIjoiQWxleGFuZGVyIiwicm9sZSI6ImFkbWluIiwiaWF0IjoxNjYzODgwNzc0fQ.FqogLyrV28wR5po6SMouJ7qs2Y3m6gmpaPg6MUthWpQ
To practice JWT creation, I suggest using the JWT tool available at https://jwt.io to try encoding arbitrary JWTs and see the resulting token values.
Now that we have discussed the high-level details of JWT, let’s move on to the practical part of this section – implementing basic authentication and authorization in Go microservices using JWTs.
In this section, we will provide some examples of implementing basic access control via authentication and authorization using Go.
Let’s start with the authentication process. A simple credential-based authentication flow can be summarized as follows:
Let’s illustrate how to implement the server logic for the authentication flow that we just described. To generate JWTs in our Go code, we will use the https://github.com/golang-jwt/jwt library.
The following code provides an example of handling an HTTP authentication request. It performs credential validation and returns a successful response with a signed JWT if the validation passes:
const secret = "our-secret"

func Authenticate(w http.ResponseWriter, req *http.Request) {
	username := req.FormValue("username")
	password := req.FormValue("password")
	if !validCredentials(username, password) {
		http.Error(w, "invalid credentials", http.StatusUnauthorized)
		return
	}
	token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
		"username": username,
		"iat":      time.Now().Unix(),
	})
	// HMAC signing methods expect a []byte key, so convert the secret.
	tokenString, err := token.SignedString([]byte(secret))
	if err != nil {
		http.Error(w, "failed to create a token", http.StatusInternalServerError)
		return
	}
	fmt.Fprint(w, tokenString)
}

func validCredentials(username, password string) bool {
	// Implement your credential verification here.
	return false
}
In the preceding code, we created a token using the jwt.NewWithClaims function. The token includes two fields:
The server code that we just created is using the secret value to sign the token. Any modification of the token will be detected during verification: without knowing the secret, an attacker cannot produce a valid signature for the altered contents.
Now, let’s illustrate how the client can perform requests using the token that it receives after successfully authenticating:
func authorizationExample(token string, operationURL string) error {
	req, err := http.NewRequest(http.MethodPost, operationURL, nil)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	// Handle the response here.
	return nil
}
In our example of an authorized operation, we added an Authorization header to the request while using the token value with the Bearer prefix. The Bearer prefix defines a bearer token – a token that intends to give access to its bearer.
Let’s also provide the logic of a server handler that would handle such an authorized request and verify whether the provided token is correct:
func AuthorizedOperationExample(w http.ResponseWriter, req *http.Request) {
	authHeaderValue := req.Header.Get("Authorization")
	const bearerPrefix = "Bearer "
	if !strings.HasPrefix(authHeaderValue, bearerPrefix) {
		http.Error(w, "request does not contain an Authorization Bearer token", http.StatusUnauthorized)
		return
	}
	tokenString := strings.TrimPrefix(authHeaderValue, bearerPrefix)
	// Validate the token, making sure it was signed with the
	// expected algorithm family.
	token, err := jwt.Parse(tokenString,
		func(token *jwt.Token) (interface{}, error) {
			if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok {
				return nil, fmt.Errorf("unexpected signing method: %v", token.Header["alg"])
			}
			// HMAC validation expects a []byte key.
			return []byte(secret), nil
		})
	if err != nil {
		http.Error(w, "invalid token", http.StatusUnauthorized)
		return
	}
	claims, ok := token.Claims.(jwt.MapClaims)
	if !ok || !token.Valid {
		http.Error(w, "invalid token", http.StatusUnauthorized)
		return
	}
	// Make sure the claim is present and is a string before using it.
	username, ok := claims["username"].(string)
	if !ok {
		http.Error(w, "invalid token claims", http.StatusUnauthorized)
		return
	}
	fmt.Fprintf(w, "Hello, %s", username)
}
Let’s describe some highlights of the provided example:
Now that we have provided examples of Go authentication and authorization using JWTs, let’s list some best practices for using JWTs to secure microservice communication:
The https://jwt.io/ website contains some additional tips on using JWTs, as well as an online tool for encoding and decoding JWTs, that you can use to debug your service communication.
With that, we have finished the last chapter of this book by reviewing lots of microservice development topics that were not included in the previous chapters. You learned how to profile Go services, create microservice dashboards so that you can monitor their performance, define and store microservice ownership data, and secure microservice communication with JWTs. I hope that you have found lots of interesting tips in this chapter that will help you build scalable, highly performant, and secure microservices.
The Go language keeps evolving, as well as the tooling for it. Each day, developers release new libraries and tools for it that can solve various microservice development problems that we described in this book. While this book provided you with lots of tips on Go microservice development, you should keep improving your skills and make your services simpler and easier to maintain.
I also want to thank you for reading this book. I hope you enjoyed reading it and gained lots of useful experience that will help you in mastering the art of Go microservice development. Let your Go microservices be highly performant, secure, and easy to maintain!
To learn more about the topics that were covered in this chapter, take a look at the following resources:
We suggest that you use these resources to stay up to date with the latest news related to Go microservice development: