One of the greatest challenges in managing a microservices architecture is simply understanding the relationships between the individual components of the overall system. A single end-user transaction might flow through a dozen or more independently deployed microservices or pods, so discovering where performance bottlenecks occur along that path is valuable information.
Often the first thing to understand about your microservices architecture is specifically which microservices are involved in an end-user transaction. When many teams deploy dozens of microservices, all independently of one another, it can be challenging to understand the dependencies across that “mesh” of services. Istio’s Mixer comes “out of the box” with the ability to pull tracing spans from your distributed microservices. This means that tracing is programming-language agnostic, so you can use this capability in a polyglot world where different teams, each with its own microservice, can be using different programming languages and frameworks.
Although Istio supports both Zipkin and Jaeger, for our purposes we focus on Jaeger, which implements OpenTracing, a vendor-neutral tracing API. Jaeger was originally open sourced by the Uber Technologies team and is a distributed tracing system specifically focused on microservices architectures.
One important term to understand is span. Jaeger defines a span as “a logical unit of work in the system that has an operation name, the start time of the operation, and the duration. Spans can be nested and ordered to model causal relationships. An RPC call is an example of a span.”
Another important term to understand is trace. Jaeger defines a trace as “a data/execution path through the system, and can be thought of as a directed acyclic graph of spans.”
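To make these two terms concrete, here is a toy model (our own illustration, not Jaeger’s actual data model) that represents a trace as a tree of spans, where each child span models work caused by its parent, such as an RPC made while serving the parent operation:

```java
import java.util.ArrayList;
import java.util.List;

public class TraceModel {
    /** A simplified span: an operation name, start time, duration, and children. */
    static class Span {
        final String operationName;
        final long startMicros;
        final long durationMicros;
        final List<Span> children = new ArrayList<>();

        Span(String operationName, long startMicros, long durationMicros) {
            this.operationName = operationName;
            this.startMicros = startMicros;
            this.durationMicros = durationMicros;
        }

        /** Create a nested span caused by this one (e.g., an outbound RPC). */
        Span child(String op, long start, long duration) {
            Span c = new Span(op, start, duration);
            children.add(c);
            return c;
        }

        /** Total number of spans in this subtree, i.e., the size of the trace. */
        int spanCount() {
            int count = 1;
            for (Span c : children) {
                count += c.spanCount();
            }
            return count;
        }
    }

    /** A trace for a hypothetical request: customer calls preference, which calls recommendation. */
    public static Span exampleTrace() {
        Span root = new Span("customer GET /", 0, 500);
        Span pref = root.child("preference RPC", 50, 400);
        pref.child("recommendation RPC", 100, 250);
        return root;
    }
}
```

The nesting mirrors what you see in the Jaeger console: the root span covers the whole end-user request, and each nested span accounts for a portion of that time spent in a downstream service.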
You open the Jaeger console by using the following command:
minishift openshift service jaeger-query --in-browser
You can then select Customer from the drop-down list box and explore the traces found, as illustrated in Figure 6-1.
One important point to remember is that your application code must forward the OpenTracing headers with every outbound call:
x-request-id
x-b3-traceid
x-b3-spanid
x-b3-parentspanid
x-b3-sampled
x-b3-flags
x-ot-span-context
You can see an example of this concept in the customer class called HttpHeaderForwarderHandlerInterceptor in the accompanying sample code.
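The essential logic is simple: copy any tracing headers present on the inbound request onto each outbound request, so Jaeger can stitch the spans into a single trace. The following self-contained sketch illustrates the idea using plain `Map`s; the actual `HttpHeaderForwarderHandlerInterceptor` in the sample code does the equivalent with Spring’s request and header types:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TraceHeaderForwarder {
    // The tracing headers Istio expects each service to propagate.
    private static final List<String> TRACING_HEADERS = List.of(
            "x-request-id",
            "x-b3-traceid",
            "x-b3-spanid",
            "x-b3-parentspanid",
            "x-b3-sampled",
            "x-b3-flags",
            "x-ot-span-context");

    /**
     * Returns the headers to attach to an outbound call, containing only the
     * tracing headers found on the inbound request (other headers are dropped).
     */
    public static Map<String, String> forward(Map<String, String> inboundHeaders) {
        Map<String, String> outbound = new HashMap<>();
        for (String header : TRACING_HEADERS) {
            String value = inboundHeaders.get(header);
            if (value != null) {
                outbound.put(header, value);
            }
        }
        return outbound;
    }
}
```

If a service fails to propagate these headers, the downstream spans still appear in Jaeger, but as disconnected traces rather than children of the original request.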
By default, Istio’s configuration gathers telemetry data across the service mesh. Simply installing Prometheus and Grafana is enough to get started with this important capability; however, keep in mind that many other backend metrics/telemetry-collection services are supported. In Chapter 2, you saw the following four commands to install and expose the metrics system:
oc apply -f install/kubernetes/addons/prometheus.yaml
oc apply -f install/kubernetes/addons/grafana.yaml
oc expose svc grafana
oc expose svc prometheus
You can then launch the Grafana console using the minishift service command:
open "$(minishift openshift service grafana -u)/dashboard/db/istio-dashboard?var-source=All"
Make sure to select Istio Dashboard in the upper left of the Grafana dashboard, as demonstrated in Figure 6-2.
As of this writing, you do need to append ?var-source=All to the Grafana dashboard URL. This is likely to change in the future; watch the istio-tutorial for changes.
Here’s an example URL:
http://grafana-istio-system.192.168.99.101.nip.io/dashboard/db/istio-dashboard?var-source=All
Figure 6-3 shows the dashboard. You can also visit the Prometheus dashboard directly with the following command (note that this opens the URL in a browser for you; you could use --url instead of --in-browser to get just the URL):
minishift openshift service prometheus --in-browser