Adding instrumentation to your microservice

For any microservice, instrumentation is as vital as logging. The metrics package of Go Kit records statistics about your service's runtime behavior: counting the number of jobs processed, recording the duration of requests after they have finished, and so on. Like logging, instrumentation is implemented as a middleware that wraps the HTTP requests and collects metrics. To define it, simply add one more struct, similar to the logging middleware. Metrics are useless unless we monitor them. Prometheus is a metrics monitoring tool that can collect latency, the number of requests for a given service, and so on. Prometheus scrapes its data from the metrics that Go Kit generates.

You can download the latest stable version of Prometheus from the official downloads page (https://prometheus.io/download/). Before using Prometheus, make sure you install these packages, which Go Kit needs:

go get github.com/prometheus/client_golang/prometheus
go get github.com/prometheus/client_golang/prometheus/promhttp

Once these are installed, copy the logging service project from the previous section into a directory called encryptServiceWithInstrumentation. The directory layout stays exactly the same, except that we add one more file called instrumentation.go to the helpers directory and modify main.go to import the instrumentation middleware. The project structure looks like this:

├── helpers
│   ├── endpoints.go
│   ├── implementations.go
│   ├── instrumentation.go
│   ├── jsonutils.go
│   ├── middleware.go
│   └── models.go
└── main.go

Instrumentation measures the number of requests per service with a Counter and the latency of those requests with a Histogram. We create a middleware that holds these two measurements (request count and latency) and implements the functions for the given services. In those middleware functions, we call the Go Kit metrics API, which is backed by the Prometheus client, to increment the request count, record the latency, and so on. A request count is incremented like this:

// Prometheus counter via the Go Kit wrapper (kitprometheus) over the
// standard Prometheus client (stdprometheus)
c := kitprometheus.NewCounterFrom(stdprometheus.CounterOpts{
    Name: "request_count",
    // ... other options
}, []string{"method", "status_code"})
c.With("method", "MyMethod", "status_code", strconv.Itoa(code)).Add(1)

NewCounterFrom creates a new counter struct from the given counter options (the metric name and other details) and a list of label names. Then we call the With function on the struct with label/value pairs; here, the method name and the status code. This is the form Go Kit's Prometheus wrapper expects in order to generate the counter metric with the right labels. Finally, we increment the counter with the Add(1) function call.
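Latency is recorded in the same way, except that the metric is a Histogram-style metric and the elapsed time is observed rather than added. Here is a minimal sketch, assuming the same aliased imports used in main.go later in this section; the metric name and labels are illustrative, not part of the project:

// Create a Summary (it satisfies Go Kit's metrics.Histogram interface)
// from standard Prometheus options.
h := kitprometheus.NewSummaryFrom(stdprometheus.SummaryOpts{
    Name: "request_duration_seconds",
    Help: "Duration of requests in seconds.",
}, []string{"method", "error"})

begin := time.Now()
// ... handle the request ...
h.With("method", "MyMethod", "error", "false").Observe(time.Since(begin).Seconds())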

The newly added file instrumentation.go implementation looks like this:

package helpers

import (
    "context"
    "fmt"
    "time"

    "github.com/go-kit/kit/metrics"
)

// InstrumentingMiddleware is a struct representing the instrumentation middleware
type InstrumentingMiddleware struct {
    RequestCount   metrics.Counter
    RequestLatency metrics.Histogram
    Next           EncryptService
}

// Encrypt records the request count and latency, then delegates to the next service
func (mw InstrumentingMiddleware) Encrypt(ctx context.Context, key string, text string) (output string, err error) {
    defer func(begin time.Time) {
        lvs := []string{"method", "encrypt", "error", fmt.Sprint(err != nil)}
        mw.RequestCount.With(lvs...).Add(1)
        mw.RequestLatency.With(lvs...).Observe(time.Since(begin).Seconds())
    }(time.Now())
    output, err = mw.Next.Encrypt(ctx, key, text)
    return
}

// Decrypt records the request count and latency, then delegates to the next service
func (mw InstrumentingMiddleware) Decrypt(ctx context.Context, key string, text string) (output string, err error) {
    defer func(begin time.Time) {
        lvs := []string{"method", "decrypt", "error", fmt.Sprint(err != nil)}
        mw.RequestCount.With(lvs...).Add(1)
        mw.RequestLatency.With(lvs...).Observe(time.Since(begin).Seconds())
    }(time.Now())
    output, err = mw.Next.Decrypt(ctx, key, text)
    return
}

This follows exactly the same pattern as the logging middleware code. We created a struct with a few fields and attached the functions for both the encrypt and decrypt services. Inside each middleware function, we record two metrics: the request count and the latency. When a request passes through this middleware:

mw.RequestCount.With(lvs...).Add(1)

This line increments the counter. Now see the other line:

mw.RequestLatency.With(lvs...).Observe(time.Since(begin).Seconds())

This line observes the latency by calculating the difference between the time the request arrived and the time it finished (since the defer keyword is used, the closure runs only after the request/response cycle is completed). In simple words, the preceding middleware records the request count and latency in the metrics provided by the Prometheus client. Now let us modify our main.go file to look like this:

package main

import (
    "log"
    "net/http"
    "os"

    kitlog "github.com/go-kit/kit/log"
    kitprometheus "github.com/go-kit/kit/metrics/prometheus"
    httptransport "github.com/go-kit/kit/transport/http"
    stdprometheus "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"

    "github.com/narenaryan/encryptService/helpers"
)

func main() {
    logger := kitlog.NewLogfmtLogger(os.Stderr)
    fieldKeys := []string{"method", "error"}
    // Counter for the number of requests received, labeled by method and error
    requestCount := kitprometheus.NewCounterFrom(stdprometheus.CounterOpts{
        Namespace: "encryption",
        Subsystem: "my_service",
        Name:      "request_count",
        Help:      "Number of requests received.",
    }, fieldKeys)
    // Summary for the request latency, labeled by method and error
    requestLatency := kitprometheus.NewSummaryFrom(stdprometheus.SummaryOpts{
        Namespace: "encryption",
        Subsystem: "my_service",
        Name:      "request_latency_microseconds",
        Help:      "Total duration of requests in microseconds.",
    }, fieldKeys)
    // Chain the middleware: instrumentation wraps logging, which wraps the service
    var svc helpers.EncryptService
    svc = helpers.EncryptServiceInstance{}
    svc = helpers.LoggingMiddleware{Logger: logger, Next: svc}
    svc = helpers.InstrumentingMiddleware{RequestCount: requestCount, RequestLatency: requestLatency, Next: svc}
    encryptHandler := httptransport.NewServer(helpers.MakeEncryptEndpoint(svc),
        helpers.DecodeEncryptRequest,
        helpers.EncodeResponse)
    decryptHandler := httptransport.NewServer(helpers.MakeDecryptEndpoint(svc),
        helpers.DecodeDecryptRequest,
        helpers.EncodeResponse)
    http.Handle("/encrypt", encryptHandler)
    http.Handle("/decrypt", decryptHandler)
    // Expose the collected metrics for Prometheus to scrape
    http.Handle("/metrics", promhttp.Handler())
    log.Fatal(http.ListenAndServe(":8080", nil))
}

We import the Go Kit Prometheus package (kitprometheus) to initialize the metric templates, and the Prometheus client package (stdprometheus) to provide the option structs. We create the requestCount and requestLatency metric structs and pass them to our InstrumentingMiddleware, which is imported from helpers. Look at this line:

requestCount := kitprometheus.NewCounterFrom(stdprometheus.CounterOpts{
    Namespace: "encryption",
    Subsystem: "my_service",
    Name:      "request_count",
    Help:      "Number of requests received.",
}, fieldKeys)

This is how we create the metric that is assigned to the RequestCount field of the InstrumentingMiddleware struct defined in instrumentation.go. The Namespace, Subsystem, and Name options that we pass are joined into a single string when the metric is generated:

encryption_my_service_request_count

This is a uniquely identifiable metric name that tells us, "this is the request count for my microservice called encryption." There is one more interesting line we added to the server part of the code in main.go:

"github.com/prometheus/client_golang/prometheus/promhttp"
...
http.Handle("/metrics", promhttp.Handler())

This actually creates an endpoint that generates a page with the collected metrics. This page can be scraped (parsed) by Prometheus to store, plot, and display the metrics. If we run the program and make 5 HTTP requests to the encrypt service and 10 HTTP requests to the decrypt service, the metrics page reports the request counts and their latencies:

go run main.go # This starts the server

Make 5 curl requests to the encrypt service in a loop from another Bash shell (on Linux):

for i in 1 2 3 4 5; do curl -XPOST -d'{"key":"111023043350789514532147", "text": "I am A Message"}' localhost:8080/encrypt; done

{"message":"8/+JCfTb+ibIjzQtmCo=","error":""}
{"message":"8/+JCfTb+ibIjzQtmCo=","error":""}
{"message":"8/+JCfTb+ibIjzQtmCo=","error":""}
{"message":"8/+JCfTb+ibIjzQtmCo=","error":""}
{"message":"8/+JCfTb+ibIjzQtmCo=","error":""}

Make 10 curl requests in a loop to the decrypt service (the output is hidden for brevity):

for i in 1 2 3 4 5 6 7 8 9 10; do curl -XPOST -d'{"key":"111023043350789514532147", "message": "8/+JCfTb+ibIjzQtmCo="}' localhost:8080/decrypt; done

Now, visit the URL http://localhost:8080/metrics and you will see a page that the Prometheus Go client is generating for us. The content of the page will have this information:

# HELP encryption_my_service_request_count Number of requests received.
# TYPE encryption_my_service_request_count counter
encryption_my_service_request_count{error="false",method="decrypt"} 10
encryption_my_service_request_count{error="false",method="encrypt"} 5
# HELP encryption_my_service_request_latency_microseconds Total duration of requests in microseconds.
# TYPE encryption_my_service_request_latency_microseconds summary
encryption_my_service_request_latency_microseconds{error="false",method="decrypt",quantile="0.5"} 5.4538e-05
encryption_my_service_request_latency_microseconds{error="false",method="decrypt",quantile="0.9"} 7.6279e-05
encryption_my_service_request_latency_microseconds{error="false",method="decrypt",quantile="0.99"} 8.097e-05
encryption_my_service_request_latency_microseconds_sum{error="false",method="decrypt"} 0.000603101
encryption_my_service_request_latency_microseconds_count{error="false",method="decrypt"} 10
encryption_my_service_request_latency_microseconds{error="false",method="encrypt",quantile="0.5"} 5.02e-05
encryption_my_service_request_latency_microseconds{error="false",method="encrypt",quantile="0.9"} 8.8164e-05
encryption_my_service_request_latency_microseconds{error="false",method="encrypt",quantile="0.99"} 8.8164e-05
encryption_my_service_request_latency_microseconds_sum{error="false",method="encrypt"} 0.000284823
encryption_my_service_request_latency_microseconds_count{error="false",method="encrypt"} 5

As you can see, there are two types of metrics:

  • encryption_my_service_request_count
  • encryption_my_service_request_latency_microseconds

If you look at the number of requests to the encrypt and decrypt methods, they match the curl requests we made.

The encryption_my_service metrics include count and latency measurements for both the encrypt and decrypt operations. The method label tells us which operation the metrics are drawn from.
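Once Prometheus is scraping this endpoint (which we set up next), the method label lets us slice the numbers per operation. For example, a query such as the following, shown purely as an illustration, returns the per-method request rate over the last five minutes:

sum by (method) (rate(encryption_my_service_request_count[5m]))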

These kinds of metrics give us key insights, such as which operation is being used heavily and how the latency trends over time. But in order to see the data in action, you need to install the Prometheus server and write a configuration file that tells Prometheus to scrape metrics from your Go Kit service. For more information about creating targets (hosts generating metrics pages) in Prometheus, visit https://prometheus.io/docs/operating/configuration/.
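As a rough sketch of what that configuration could look like (the file name, job name, and scrape interval here are assumptions, not values from this project), a minimal scrape block in prometheus.yml might be:

# prometheus.yml -- minimal sketch; job name and interval are illustrative
scrape_configs:
  - job_name: 'encrypt_service'          # hypothetical job name
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:8080']      # our Go Kit service

Prometheus appends the default /metrics path to each target, so this points it at the page we exposed with promhttp.Handler().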

We can also pass data from Prometheus to Grafana, a graphing and monitoring tool for nice real-time charts of metrics. Go Kit provides many other features, such as service discovery. Scaling microservices is only possible if the system is loosely coupled, monitored, and optimized. 
