Chapter 3. Traffic Control

As we’ve seen in previous chapters, Istio consists of a control plane and a data plane. The data plane is made up of proxies that live in the application architecture. We’ve been looking at a proxy-deployment pattern known as the sidecar, which means each application instance has its own dedicated proxy through which all network traffic travels before it gets to the application. These sidecar proxies can be individually configured to route, filter, and augment network traffic as needed. In this chapter, we take a look at a handful of traffic-control patterns that you can take advantage of via Istio. You might recognize these patterns as some of those practiced by the big internet companies like Netflix, Amazon, or Facebook.

Smarter Canaries

The concept of the canary deployment has become fairly popular in the last few years. The name “canary deployment” comes from the “canary in the coal mine” concept. Miners used to take a canary in a cage into the mines to detect whether there were any dangerous gases present because the canaries are more susceptible to poisonous gases than humans. The canary would not only provide nice musical songs to entertain the miners, but if at any point it collapsed off its perch, the miners knew to get out of the coal mine rapidly.

The canary deployment has similar semantics. With a canary deployment, you deploy a new version of your code to production, but you allow only a subset of traffic to reach it. Perhaps only beta customers, perhaps only internal employees of your organization, perhaps only iOS users, and so on. After the canary is out there, you can monitor it for exceptions, bad behavior, changes in Service-Level Agreement (SLA), and so forth. If it exhibits no bad behavior, you can begin to slowly deploy more instances of the new version of code. If it exhibits bad behavior, you can pull it from production. The canary deployment allows you to deploy faster but with minimal disruption should a “bad” code change be made.

By default, Kubernetes offers out-of-the-box round-robin load balancing across all the pods behind a service. If you want only 10% of all end-user traffic to hit your newest immutable container, you must have at least a 10-to-1 ratio of old pods to new pods. With Istio, you can be much more fine-grained: you can specify that only 2% of traffic, across only three pods, be routed to the latest version. Istio also lets you gradually increase overall traffic to the new version until all end users have been migrated over and the older versions of the app logic/code can be removed from the production environment.
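To make the contrast concrete, here is a minimal sketch of the plain-Kubernetes approach, in which the only way to shape the split is the replica ratio. The resource names, labels, and image tags here are illustrative, not taken from the tutorial source:

# One Service round-robins across every pod matching its selector,
# so a ~10% canary requires a 9:1 replica ratio between versions.
apiVersion: v1
kind: Service
metadata:
  name: recommendation
spec:
  selector:
    app: recommendation      # matches pods of both versions
  ports:
  - port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recommendation-v1
spec:
  replicas: 9                # nine old pods...
  selector:
    matchLabels:
      app: recommendation
      version: v1
  template:
    metadata:
      labels:
        app: recommendation
        version: v1
    spec:
      containers:
      - name: recommendation
        image: example/recommendation:v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recommendation-v2
spec:
  replicas: 1                # ...to one new pod = roughly 10% of traffic
  selector:
    matchLabels:
      app: recommendation
      version: v2
  template:
    metadata:
      labels:
        app: recommendation
        version: v2
    spec:
      containers:
      - name: recommendation
        image: example/recommendation:v2

With Istio, the weights live in routing configuration instead, so the traffic split is decoupled from the number of pods running each version.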

Traffic Routing

As we touched on previously, Istio allows much more fine-grained canary deployments. With Istio, you can specify routing rules that control the traffic to a deployment. Specifically, Istio uses a RouteRule resource to specify these rules. Let’s take a look at an example RouteRule:

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: recommendation-default
spec:
  destination:
    namespace: tutorial
    name: recommendation
  precedence: 1
  route:
  - labels:
      version: v1
    weight: 100

This RouteRule definition allows you to configure a percentage of traffic and direct it to a specific version of the recommendation service. In this case, 100% of traffic for the recommendation service will always go to pods matching the labels version: v1. The selection of pods here is very similar to the Kubernetes selector model for matching based on labels. So, any service within the service mesh that tries to communicate with the recommendation service will always be routed to v1 of the recommendation service.
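Because this selection works like a standard Kubernetes label selector, you can sanity-check which pods a given labels clause would match with a plain label query. This sketch assumes the tutorial's pods carry app and version labels, which is the usual convention:

# List the pods the RouteRule's "version: v1" clause would select
oc get pods -n tutorial -l app=recommendation,version=v1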

The routing behavior described here applies not just to ingress traffic (that is, traffic coming into the mesh); it applies to all interservice communication within the mesh. As we’ve illustrated in the example, these routing rules apply to services potentially deep within a service call graph. If you have a service deployed to Kubernetes that’s not part of the service mesh, it will not see these rules and will adhere to the default Kubernetes load-balancing behavior just mentioned.

Routing to Specific Versions of a Deployment

To illustrate more complex routing, and ultimately what a canary rollout would look like, let’s deploy v2 of our recommendation service. First, you need to make a small change to the source code for the recommendation service. Change the RESPONSE_STRING_FORMAT string in the com.redhat.developer.demos.recommendation.RecommendationVerticle class to something like this:

private static final String RESPONSE_STRING_FORMAT =
  "recommendation v2 from '%s': %d\n";

Now do a build and package of this code as v2:

cd recommendation

mvn clean package

docker build -t example/recommendation:v2 .

Finally, inject the Istio sidecar proxy and deploy this into Kubernetes:

oc apply -f <(istioctl kube-inject -f \
src/main/kubernetes/Deployment-v2.yml) -n tutorial

You can run oc get pods -w to watch the pods and wait until they all come up. You should see something like this when all of the pods are running successfully:

NAME                                 READY   STATUS    RESTARTS   AGE
customer-3600192384-fpljb            2/2     Running   0          17m
preference-243057078-8c5hz           2/2     Running   0          15m
recommendation-v1-60483540-9snd9     2/2     Running   0          12m
recommendation-v2-2815683430-vpx4p   2/2     Running   0          15s

At this point, if you curl the customer endpoint in a loop, you should see traffic load balanced across both versions of the recommendation service:

#!/bin/bash
while true
do curl customer-tutorial.$(minishift ip).nip.io
sleep .1
done

customer => preference => recommendation v1 from '2819441432-qsp25': 29
customer => preference => recommendation v2 from '99634814-sf4cl': 37
customer => preference => recommendation v1 from '2819441432-qsp25': 30
customer => preference => recommendation v2 from '99634814-sf4cl': 38
customer => preference => recommendation v1 from '2819441432-qsp25': 31
customer => preference => recommendation v2 from '99634814-sf4cl': 39

Now you can create your first RouteRule and route all traffic to only v1 of the recommendation service. Navigate to the root of the istio-tutorial source code you cloned earlier, and run the following command:

istioctl create -f istiofiles/route-rule-recommendation-v1.yml \
 -n tutorial

Now if you try to query the customer service, you should see all traffic routed to v1 of the service:

#!/bin/bash
while true
do curl customer-tutorial.$(minishift ip).nip.io
sleep .1
done

customer => preference => recommendation v1 from '1543936415': 1
customer => preference => recommendation v1 from '1543936415': 2
customer => preference => recommendation v1 from '1543936415': 3
customer => preference => recommendation v1 from '1543936415': 4
customer => preference => recommendation v1 from '1543936415': 5
customer => preference => recommendation v1 from '1543936415': 6

Canary release of recommendation v2

Now that all traffic is going to v1 of your recommendation service, you can initiate a canary release using Istio. The canary should take only 10% of the incoming live traffic. To do this, you need to specify a weighted routing rule in a new RouteRule that looks like this:

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: recommendation-v1-v2
spec:
  destination:
    namespace: tutorial
    name: recommendation
  precedence: 5
  route:
  - labels:
      version: v1
    weight: 90
  - labels:
      version: v2
    weight: 10

As you can see, this RouteRule sends 90% of the traffic to v1 and 10% to v2. An important thing to notice is the precedence value: here we set it to 5, which gives this rule higher precedence than the earlier rule that routes all traffic to v1. Try creating it and see what happens when you put load on the service:

istioctl create -f istiofiles/route-rule-recommendation-v1_and_v2.yml \
-n tutorial

If you start sending load against the customer service as in the previous steps, you should see that only a fraction of the traffic actually makes it to v2. This is a canary release. You should monitor your logs, metrics, and tracing systems to see whether the new release has introduced any unintended or unexpected negative behaviors into your system.
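One quick, admittedly crude way to verify the split is to sample a batch of responses and count how many came from each version. This sketch reuses the customer endpoint from the earlier loop:

#!/bin/bash
# Send 100 requests and count responses per version. With the 90/10
# RouteRule in place, expect roughly 90 v1 lines and 10 v2 lines.
URL=customer-tutorial.$(minishift ip).nip.io
for i in $(seq 1 100); do
  curl -s "$URL"
done | grep -o 'recommendation v[0-9]' | sort | uniq -c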

Continue rollout of recommendation v2

At this point, if no bad behaviors have surfaced, you should have a bit more confidence in v2 of your recommendation service. You might then want to increase the traffic to v2. You can do that with another RouteRule that looks like this:

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: recommendation-v1-v2
spec:
  destination:
    namespace: tutorial
    name: recommendation
  precedence: 5
  route:
  - labels:
      version: v1
    weight: 50
  - labels:
      version: v2
    weight: 50

With this RouteRule, you’re opening the traffic up to a 50/50 split between v1 and v2. Notice that the precedence is still the same value (5, just like the canary release) and that the route rule’s name is the same as the canary release’s (recommendation-v1-v2). Because you’re updating an existing rule rather than creating a new one, use the istioctl replace command:

istioctl replace -f \
istiofiles/route-rule-recommendation-v1_and_v2_50_50.yml -n tutorial

Now you should see the traffic behavior change in real time, with approximately half the traffic going to v1 of the recommendation service and half going to v2. The output should look something like the following:

customer => preference => recommendation v1 from '1543936415': 192
customer => preference => recommendation v2 from '3116548731': 37
customer => preference => recommendation v2 from '3116548731': 38
customer => preference => recommendation v1 from '1543936415': 193
customer => preference => recommendation v2 from '3116548731': 39
customer => preference => recommendation v2 from '3116548731': 40
customer => preference => recommendation v2 from '3116548731': 41
customer => preference => recommendation v1 from '1543936415': 194
customer => preference => recommendation v2 from '3116548731': 42

Finally, if everything continues to look good with this release, you can switch all of the traffic over to v2 of the recommendation service. To do that, install a RouteRule that routes all traffic to v2:

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: recommendation-default
spec:
  destination:
    namespace: tutorial
    name: recommendation
  precedence: 1
  route:
  - labels:
      version: v2
    weight: 100

You can replace the v1 route rule like this:

istioctl replace -n tutorial -f \
istiofiles/route-rule-recommendation-v2.yml

Note that the precedence for this route rule is set to 1, which means the traffic-control RouteRules you used in the previous steps, with their precedence values of 5, would still take priority. You next need to delete the canary/rollout route rule so that all traffic matches the v2 RouteRule:

istioctl delete routerule -n tutorial recommendation-v1-v2

Now you should see all traffic going to v2 of the recommendation service:

customer => preference => recommendation v2 from '3116548731': 308
customer => preference => recommendation v2 from '3116548731': 309
customer => preference => recommendation v2 from '3116548731': 310
customer => preference => recommendation v2 from '3116548731': 311
customer => preference => recommendation v2 from '3116548731': 312
customer => preference => recommendation v2 from '3116548731': 313
customer => preference => recommendation v2 from '3116548731': 314
customer => preference => recommendation v2 from '3116548731': 315

Restore route rules to v1

To clean up this section, replace the route rules to direct traffic back to v1 of the recommendation service:

istioctl replace -n tutorial -f \
istiofiles/route-rule-recommendation-v1.yml

Routing Based on Headers

You’ve seen how you can use Istio to do fine-grained routing based on service metadata. You can also use Istio to do routing based on request-level metadata: matching predicates let you set up route rules that apply only to requests meeting a specified set of criteria. For example, you might want to split traffic to a particular service based on geography, mobile device, or browser. Let’s see how to do that with Istio.

With Istio, you can use a match clause in the RouteRule to specify a predicate. For example, take a look at the following RouteRule:

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: recommendation-safari
spec:
  destination:
    namespace: tutorial
    name: recommendation
  precedence: 2
  match:
    request:
      headers:
        user-agent:
          regex: ".*Safari.*"
  route:
  - labels:
      version: v2

This rule uses a request header–based matching clause that will match only if the request includes “Safari” as part of the user-agent header. If the request matches the predicate, it will be routed to v2 of the recommendation service. Note that this also has a precedence that’s higher than the default route rule (recall, the default route rule routes every request to v1 and has a precedence of 1).

Install the rule:

$ istioctl create -f \
istiofiles/route-rule-safari-recommendation-v2.yml -n tutorial

Now let’s try it out. curl’s default user-agent header doesn’t match the Safari predicate, so you should still be routed to v1:

$  curl customer-tutorial.$(minishift ip).nip.io

customer => preference => recommendation v1 from '1543936415': 465

And if you pass in a user-agent header of Safari, you should be routed to v2:

$ curl -H 'user-agent: Safari' \
customer-tutorial.$(minishift ip).nip.io

customer => preference => recommendation v2 from '3116548731': 318

Cleaning up route rules

Before moving on, clean up all of the route rules you’ve installed. First, list the route rules you have using istioctl get:

$  istioctl get routerule -n tutorial

NAME                    KIND                                NAMESPACE
recommendation-default  RouteRule.v1alpha2.config.istio.io  tutorial
recommendation-safari   RouteRule.v1alpha2.config.istio.io  tutorial

Now you can delete them:

istioctl delete routerule recommendation-safari -n tutorial
istioctl delete routerule recommendation-default -n tutorial

Dark Launch

Dark launch can mean different things to different people. In essence, a dark launch is a deployment to production that goes unnoticed by customers. You might choose to release to a subset of users (like internal or nonpaying customers) while the broader user base does not see the release. Another option is to duplicate or mirror production traffic into a cluster that has the new deployment and see how it behaves compared to the live traffic. This way, you’re able to put production-quality requests into your new service without affecting any live traffic.

For example, you could say that recommendation v1 takes the live traffic while recommendation v2 is your new deployment. You can use Istio to mirror the traffic that goes to v1 into the v2 cluster. When Istio mirrors traffic, it does so in a fire-and-forget manner: the mirroring happens asynchronously, off the critical path of the live traffic, and Istio sends the mirrored request to the test cluster without waiting for or acting on its response. Let’s try this out.

The first thing you should do is make sure that there are no route rules currently being used:

istioctl get routerules -n tutorial

Let’s take a look at a RouteRule that configures mirroring:

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: recommendation-mirror
spec:
  destination:
    namespace: tutorial
    name: recommendation
  precedence: 2
  route:
  - labels:
      version: v1
    weight: 100
  - labels:
      version: v2
    weight: 0
  mirror:
    namespace: tutorial
    name: recommendation
    labels:
      version: v2

You can see that this directs all traffic to v1 of the recommendation service and none to v2. In the mirror clause, you specify the destination (namespace, service name, and version labels) that should receive the mirrored traffic.

Next, verify that you’re in the root directory of the istio-tutorial source files you cloned earlier, and run the following command:

istioctl create -f istiofiles/route-rule-recommendation-v1-mirror-v2.yml \
 -n tutorial

Now, in one terminal, tail the logs for the recommendation v2 service:

oc logs -f `oc get pods|grep recommendation-v2|awk '{ print $1 }'` \
 -c recommendation

In another window, you can send in a request:

$  curl customer-tutorial.$(minishift ip).nip.io
customer => preference => recommendation v1 from '1543936415': 466

You can see from the response that you’re hitting v1 of the recommendation service, as expected. If you watch the tail of the v2 logs, you’ll also see new entries appear as v2 processes the mirrored traffic.
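If you want more than an eyeball check, you can count the mirrored requests that reached v2. This sketch assumes each handled request produces a log line containing the v2 response string shown earlier; adjust the grep pattern to your service’s actual log format:

# Count requests the v2 pod has processed so far (assumes the service
# logs its "recommendation v2 ..." response string for each request)
oc logs `oc get pods | grep recommendation-v2 | awk '{ print $1 }'` \
 -c recommendation | grep -c 'recommendation v2'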

You can use mirrored traffic to do powerful prerelease testing, but it does not come without its own challenges. For example, a new version of a service might still need to communicate with a database or other collaborator services. For dealing with data in a microservices world, take a look at Edson Yanaga’s book Migrating to Microservices Databases. For a more detailed treatment on advanced mirroring techniques, you can take a look at Christian’s blog post “Advanced Traffic-shadowing Patterns for Microservices With Istio Service Mesh”.

Egress

By default, Istio directs all traffic originating in a service through the Istio proxy that’s deployed alongside it. This proxy evaluates its routing rules and decides how best to deliver the request. One nice thing about the Istio service mesh is that, by default, it blocks all outbound (outside of the cluster) traffic unless you explicitly create routing rules to allow traffic out. From a security standpoint, this is crucial. You can use Istio in both zero-trust networking architectures and traditional perimeter-based security architectures. In both cases, Istio helps protect against a nefarious agent gaining access to a single service and calling back out to a command-and-control system, which would give an attacker full access to the network. By blocking outgoing access by default and using routing rules to control not only internal traffic but any and all outgoing traffic, you can make your security posture more resilient to outside attacks, irrespective of where they originate.

To demonstrate, we will have you create a service that makes a call out to an external website, namely, httpbin.org, and see how it behaves in the service mesh.

From the root of the companion source code you cloned earlier, go to the egress/egresshttpbin folder. This is another Spring Boot Java application whose salient functionality is the following HTTP endpoint:

@RequestMapping
public String headers() {
    RestTemplate restTemplate = new RestTemplate();
    String url = "http://httpbin.org/headers";

    HttpHeaders httpHeaders = new HttpHeaders();
    HttpEntity<String> httpEntity =
        new HttpEntity<>("", httpHeaders);

    String responseBody;
    try {
        ResponseEntity<String> response =
            restTemplate.exchange(url, HttpMethod.GET,
                httpEntity, String.class);
        responseBody = response.getBody();
    } catch (Exception e) {
        responseBody = e.getMessage();
    }
    return responseBody + "\n";
}

This HTTP endpoint, when called, makes a call out to httpbin.org/headers, a service on the public internet that returns the list of headers that were sent to its HTTP GET /headers endpoint.
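For reference, a successful call to httpbin.org/headers returns a small JSON document echoing the request headers. The exact values will vary, but it looks something like this (when the request is routed through the Istio sidecar, you may also see injected tracing headers such as x-request-id and x-b3-traceid):

{
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "Java/1.8.0_131"
  }
}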

Now you can build, package, deploy, and expose this service:

$  cd egress/egresshttpbin

$  mvn clean package

$  docker build -t example/egresshttpbin:v1 .

$ oc apply -f <(istioctl kube-inject -f \
src/main/kubernetes/Deployment.yml)

$  oc create -f src/main/kubernetes/Service.yml
$  oc expose service egresshttpbin

Now you can try to query the service like this:

$  curl http://egresshttpbin-tutorial.$(minishift ip).nip.io

You should see a response like this:

404 Not Found

Dang! Your service cannot communicate with services on the public internet that live outside of the cluster!

Let’s go back to the root of your source code and create an egress rule that looks like this:

apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: httpbin-egress-rule
spec:
  destination:
    service: httpbin.org
  ports:
    - port: 80
      protocol: http

This EgressRule allows traffic to reach the outside internet, but only for the httpbin.org website. Now create the rule and try querying your service again:

istioctl create -f istiofiles/egress_httpbin.yml -n tutorial

You can list the egress rules like this:

$  istioctl get egressrule
NAME                    KIND                                 NAMESPACE
httpbin-egress-rule     EgressRule.v1alpha2.config.istio.io  tutorial

Now you can try to curl the service again:

curl http://egresshttpbin-tutorial.$(minishift ip).nip.io

Yay! It should work this time! The EgressRule allowed your service to reach httpbin.org on the public internet. If you run into a failure at this step, you can file a GitHub issue against the istio-tutorial.
