Traffic mirroring

Istio's live traffic mirroring capability is useful for shadowing traffic from a production service to a mirror service. Istio can mirror all of the traffic from one service to another, or only a portion of it. Importantly, mirroring happens without impacting the critical path of the original application.

Traffic mirroring in Istio is sometimes described as out of band because the mirroring is performed asynchronously by Istio's sidecar proxy, outside the critical path of the original request. Mirrored traffic is identified distinctly: the sidecar appends -shadow to the Host or Authority header of each mirrored request.
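
For example, if the original request is sent to httpbin:8000, the mirrored copy carries a host value along the following lines (illustrative only; verify the exact value against your Istio release):

"Host": "httpbin-shadow:8000"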

Let's understand traffic mirroring or shadowing through an example.

We will create two versions of the httpbin service and use their access logs to see which service receives the original traffic and which one receives the mirrored traffic:

  1. The following is the deployment for httpbin-v1. Review this script, which deploys the first version of the sample httpbin service:
# Script : 18-deploy-httpbin-v1.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpbin-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
        ports:
        - containerPort: 80
  2. Deploy httpbin-v1:
$ kubectl -n istio-lab apply -f 18-deploy-httpbin-v1.yaml
deployment.extensions/httpbin-v1 created
  3. The following is the deployment example for httpbin-v2:
# Script : 19-deploy-httpbin-v2.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpbin-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpbin
        version: v2
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
        ports:
        - containerPort: 80
  4. Deploy httpbin-v2:
$ kubectl -n istio-lab apply -f 19-deploy-httpbin-v2.yaml
deployment.extensions/httpbin-v2 created
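
Before moving on, you may want to confirm that both versions are up and running. A quick check such as the following lists the pods by their app label (pod names and ages will differ in your environment):

$ kubectl -n istio-lab get pods -l app=httpbin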

Next, create a Kubernetes httpbin service, which will load balance the traffic between httpbin-v1 and httpbin-v2. Notice that both deployments carry the app: httpbin label, which matches the selector used by the httpbin service:

# Script : 20-create-kubernetes-httpbin-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
  5. Deploy the httpbin service:
$ kubectl -n istio-lab apply -f 20-create-kubernetes-httpbin-service.yaml
service/httpbin created
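
If you would like to verify that the service selects both deployments, you can inspect the service and its endpoints; you should see one endpoint per httpbin pod (the IP addresses will differ in your environment):

$ kubectl -n istio-lab get svc httpbin
$ kubectl -n istio-lab get endpoints httpbin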

Let's take the Kubernetes load balancing between httpbin-v1 and httpbin-v2 out of the picture by using an Istio destination rule. The destination rule defines subsets, which the Istio virtual service will then use to direct 100% of the traffic to httpbin-v1. Define the destination rule that creates the subsets:

# Script : 21-create-destination-rules-subsets.yaml

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  6. Create the required destination rule:
$ kubectl -n istio-lab apply -f 21-create-destination-rules-subsets.yaml
destinationrule.networking.istio.io/httpbin created
  7. Define a virtual service to direct 100% of the traffic to subset v1:
# Script : 22-create-httpbin-virtual-service.yaml

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
  8. Create the virtual service:
$ kubectl -n istio-lab apply -f 22-create-httpbin-virtual-service.yaml
virtualservice.networking.istio.io/httpbin configured
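
If you want to confirm the routing configuration before sending any traffic, you can print both objects back out and check that the destination rule defines the v1 and v2 subsets and that the virtual service sends 100% of the traffic to subset v1:

$ kubectl -n istio-lab get destinationrule httpbin -o yaml
$ kubectl -n istio-lab get virtualservice httpbin -o yaml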

Now, we can send some traffic to httpbin. Before we do that, however, open two separate command-line windows so that we can tail the logs of both httpbin deployments.

  9. Use the first command-line window to tail the httpbin:v1 logs:
$ V1_POD=$(kubectl -n istio-lab get pod -l app=httpbin,version=v1 -o jsonpath={.items..metadata.name}) ; echo $V1_POD
httpbin-v1-b9985cc7d-4wmcf

$ kubectl -n istio-lab -c httpbin logs -f $V1_POD
[2019-04-24 01:01:56 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2019-04-24 01:01:56 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
[2019-04-24 01:01:56 +0000] [1] [INFO] Using worker: sync
[2019-04-24 01:01:56 +0000] [8] [INFO] Booting worker with pid: 8
  10. Use the second command-line window to tail the httpbin:v2 logs:
$ V2_POD=$(kubectl -n istio-lab get pod -l app=httpbin,version=v2 -o jsonpath={.items..metadata.name}) ; echo $V2_POD
httpbin-v2-5cdb74d4c7-mxtfm

$ kubectl -n istio-lab -c httpbin logs -f $V2_POD
[2019-04-24 01:01:56 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2019-04-24 01:01:56 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
[2019-04-24 01:01:56 +0000] [1] [INFO] Using worker: sync
[2019-04-24 01:01:56 +0000] [8] [INFO] Booting worker with pid: 8
  11. Open one more command-line window and run the following curl command from the ratings pod to send traffic to the httpbin service:
$ RATING_POD=$(kubectl -n istio-lab get pods -l app=ratings -o jsonpath='{.items..metadata.name}') ; echo $RATING_POD
ratings-v1-79b6d99979-k2j7t

$ kubectl -n istio-lab exec -it $RATING_POD -c ratings -- curl http://httpbin:8000/headers | python -m json.tool

{
    "headers": {
        "Accept": "*/*",
        "Content-Length": "0",
        "Host": "httpbin:8000",
        "User-Agent": "curl/7.38.0",
        "X-B3-Parentspanid": "58e256d2258d93de",
        "X-B3-Sampled": "1",
        "X-B3-Spanid": "ad58600dc4bf258a",
        "X-B3-Traceid": "4042bd191da4131058e256d2258d93de"
    }
}
  12. Switch back to the command-line windows that are tailing the v1 and v2 logs. You will notice an additional log line in the httpbin:v1 service; the httpbin:v2 service does not show any additional log lines:
[2019-08-02 13:04:14 +0000] [1] [INFO] Using worker: sync
[2019-08-02 13:04:14 +0000] [8] [INFO] Booting worker with pid: 8
127.0.0.1 - - [24/Apr/2019:01:35:55 +0000] "GET /headers HTTP/1.1" 200 303 "-" "curl/7.38.0"
  13. Now, let's mirror the traffic from v1 to v2. Modify the httpbin virtual service by adding a mirror to subset v2:
# Script : 23-mirror-traffic-between-v1-and-v2.yaml

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
    mirror:
      host: httpbin
      subset: v2
  14. Apply the modified virtual service. Run the following command from the third window and make sure that you have switched to the ~/istio/scripts/01-traffic-management directory (cd ~/istio/scripts/01-traffic-management):
$ kubectl -n istio-lab apply -f 23-mirror-traffic-between-v1-and-v2.yaml
virtualservice.networking.istio.io/httpbin configured
  15. Send the same traffic to the httpbin service again. Allow a few seconds for the rules to propagate; we should then see log lines appear in both the httpbin:v1 and httpbin:v2 pods:
$ kubectl -n istio-lab exec -it $RATING_POD -c ratings -- curl http://httpbin:8000/headers | python -m json.tool
  16. The first window, httpbin:v1, shows one more line in addition to the one we had already received:
127.0.0.1 - - [24/Apr/2019:01:46:34 +0000] "GET /headers HTTP/1.1" 200 303 "-" "curl/7.38.0"
127.0.0.1 - - [24/Apr/2019:01:48:30 +0000] "GET /headers HTTP/1.1" 200 303 "-" "curl/7.38.0"
  17. The second window, httpbin:v2, shows the new line:
127.0.0.1 - - [24/Apr/2019:01:48:30 +0000] "GET /headers HTTP/1.1" 200 343 "-" "curl/7.38.0"

While traffic is being mirrored, the response from httpbin:v2 is never returned to the caller: the purpose of mirroring is simply to copy the requests that go to v1 and send them to v2 as well. The sidecar proxy discards any response from httpbin:v2, as expected.
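
Recall from the start of this section that Istio can mirror either all of the traffic or only a portion of it. The following is a minimal sketch of a rule that mirrors roughly half of the requests; note that the mirrorPercentage field is only available in newer Istio releases, so check the VirtualService reference for the version you are running before relying on it:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
    mirror:
      host: httpbin
      subset: v2
    # Mirror only about 50% of the requests (newer Istio releases only)
    mirrorPercentage:
      value: 50.0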

Did you notice how easy it is to mirror traffic from one microservice to another? A very useful scenario is mirroring traffic from an edge service to a copy of the same application in a different namespace, or even in another Kubernetes cluster, where you can perform any type of testing, such as infrastructure testing or testing a different version of the complete application. There are many use cases, and you can enable this capability simply by making the necessary configuration changes.
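
For instance, mirroring to a copy of httpbin that runs in a different namespace only requires pointing the mirror host at that namespace's service. The following sketch assumes a hypothetical istio-lab-test namespace that already contains its own httpbin deployment and service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
    mirror:
      # Fully qualified service name in the hypothetical istio-lab-test namespace
      host: httpbin.istio-lab-test.svc.cluster.local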

You can press Ctrl + C in both command-line windows to stop tailing the logs of both pods.

Before we can move on to the next chapter, we will remove the restrictions that were set on the external traffic flow.
