We explained load balancing in Chapter 7, Understanding the Istio Service Mesh. Linkerd uses a smart load balancing mechanism, which William Morgan has described as a latency-aware approach: each proxy keeps track of how quickly every endpoint is responding and steers traffic accordingly.
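In broad strokes, a latency-aware balancer keeps a smoothed latency estimate per endpoint and prefers the faster ones. The following toy sketch is our own illustration of that idea, not Linkerd's actual implementation; the class names, the `alpha` smoothing factor, and the power-of-two-choices selection are all assumptions made for the example:

```python
import random

class Endpoint:
    """A backend with an exponentially weighted moving average (EWMA) of latency."""
    def __init__(self, name, alpha=0.3):
        self.name = name
        self.alpha = alpha  # smoothing factor (assumed value for illustration)
        self.ewma = 0.0     # smoothed latency estimate, in milliseconds

    def record(self, latency_ms):
        # Standard EWMA update: recent samples are weighted by alpha.
        self.ewma = self.alpha * latency_ms + (1 - self.alpha) * self.ewma

def pick(endpoints, rng=random):
    # "Power of two choices": sample two endpoints at random and send the
    # request to whichever has the lower smoothed latency.
    a, b = rng.sample(endpoints, 2)
    return a if a.ewma <= b.ewma else b

# Toy run: one fast pod and one slow pod; requests drift to the fast one.
fast, slow = Endpoint("web-a"), Endpoint("web-b")
fast.record(2)   # fast.ewma becomes 0.6
slow.record(20)  # slow.ewma becomes 6.0
chosen = pick([fast, slow])  # always "web-a" here, as its EWMA is lower
```

Because the balancer continuously updates its latency estimates, slow or failing pods naturally receive less traffic without any operator configuration.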
Let's explore how load balancing is configured for the emojivoto application:
- Verify that the emojivoto microservices are running and that their pods are available:
$ kubectl get pods -n emojivoto
NAME READY STATUS RESTARTS AGE
emoji-697b575bd9-6487c 2/2 Running 0 29m
vote-bot-7bd97dfbdc-f8hfv 2/2 Running 0 29m
voting-6b4bf7494b-pxk5k 2/2 Running 0 29m
web-559684dbc5-9pmdf 2/2 Running 0 29m
- Scale the voting and web deployments from 1 to 2 replicas. We can scale to any number, as long as enough CPU and memory are available:
$ kubectl -n emojivoto scale deploy voting --replicas=2
deployment.extensions/voting scaled
$ kubectl -n emojivoto scale deploy web --replicas=2
deployment.extensions/web scaled
To recap, the emojivoto application has a vote-bot that continuously sends traffic to the application. The doughnut emoji has a built-in HTTP 404 error. The vote-bot sends 15% of its traffic to this emoji and picks the other emojis at random. Later in this chapter, we will debug this issue and determine its root cause.
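Since the bot routes a fixed share of requests to a known-bad endpoint, we can estimate a ceiling on the success rate with a quick calculation (our own arithmetic, not Linkerd output):

```python
# If 15% of the bot's requests target the doughnut emoji, which always
# returns HTTP 404, then at best about 85% of voting requests can succeed.
doughnut_share = 0.15
expected_success = 1 - doughnut_share
print(f"{expected_success:.0%}")  # prints 85%
```

Keep this figure in mind when reading the success rates reported in the following steps.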
- Now, let's check the deployments' stats using the linkerd CLI:
$ linkerd -n emojivoto stat deployments
NAME       MESHED   SUCCESS   RPS      LATENCY_P50   LATENCY_P95   LATENCY_P99   TCP_CONN
emoji      1/1      100.00%   2.0rps   1ms           2ms           2ms           3
vote-bot   1/1      -         -        -             -             -             -
voting     2/2      91.67%    1.0rps   1ms           1ms           1ms           6
web        2/2      95.76%    2.0rps   4ms           10ms          18ms          4
Notice the success rate, requests per second (RPS), and latency distribution percentiles; this is the aggregated information for each deployment as a whole. Linkerd provides these aggregated metrics out of the box, without any extra instrumentation.
- Check the aggregated information at the pod level for the web and voting pods:
$ linkerd -n emojivoto stat pods
NAME                        STATUS    MESHED   SUCCESS   RPS      LATENCY_P50   LATENCY_P95   LATENCY_P99   TCP_CONN
emoji-697b575bd9-6487c      Running   1/1      100.00%   2.0rps   1ms           1ms           1ms           3
vote-bot-7bd97dfbdc-f8hfv   Running   1/1      -         -        -             -             -             -
voting-6b4bf7494b-8znt2     Running   1/1      64.29%    0.5rps   1ms           1ms           1ms           3
voting-6b4bf7494b-pxk5k     Running   1/1      81.25%    0.5rps   1ms           2ms           2ms           3
web-559684dbc5-9pmdf        Running   1/1      84.13%    1.1rps   7ms           17ms          19ms          2
web-559684dbc5-l64dd        Running   1/1      90.91%    0.9rps   3ms           13ms          19ms          2
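To see how pod-level numbers roll up into the deployment-level view, we can aggregate them by request rate. This is our own arithmetic using the web pods' figures from the table above; note that the two stat snapshots were captured at different moments, so the result will not match the deployment table's 95.76% exactly:

```python
# A deployment's success rate is a request-weighted average of its pods':
# sum each pod's (rps * success) and divide by the total request rate.
pods = [
    {"rps": 1.1, "success": 0.8413},  # web-559684dbc5-9pmdf
    {"rps": 0.9, "success": 0.9091},  # web-559684dbc5-l64dd
]
total_rps = sum(p["rps"] for p in pods)                       # 2.0 rps
agg = sum(p["rps"] * p["success"] for p in pods) / total_rps
print(f"{agg:.2%}")  # prints 87.18%
```

The same weighting applies to the voting pods, whose 64.29% and 81.25% success rates at 0.5 rps each average out to the deployment-level figure.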
- Browse to http://dashboard.linkerd.local in your VM.
- Navigate to Resources | Pods | All. On the main console page, you will see HTTP metrics and TCP metrics.
- Filter the HTTP metrics by clicking the three vertical bars on the top right corner and typing emojivoto.
- Repeat the same for TCP metrics.
Initially, you may see traffic on only one web pod, but if you wait a few seconds, the traffic will balance across both pods automatically.
Notice that we didn't make any configuration changes to accomplish load balancing. This capability is offered out of the box.
Linkerd also provides a mechanism for aggregating metrics through a service profile, which gives a finer-grained view of traffic across services. We'll explore this in the next section.