Service mesh implementation using Istio
This chapter discusses the service mesh implementation using Istio and has the following sections:
9.1, "Overview"
9.2, "Role of the service mesh"
9.3, "Istio architecture"
9.4, "Installation of Istio and enabling the application for Istio"
9.5, "Service resiliency"
9.6, "Achieving E2E security for microservices using Istio"
9.1 Overview
When talking to many enterprise customers who are at the beginning of their application modernization journey, the question that often pops up is: “What about a service mesh?” This usually prompts many follow-up questions:
How should we implement traffic control (firewall rules) for services?
How do we control ingress and egress traffic for the Kubernetes cluster?
Which service registry should we use?
How do we implement load balancing between the microservices?
How do we integrate the microservices on Kubernetes with an existing ESB?
Can all traffic be encrypted?
There are a lot of articles on the internet about different service registry and discovery solutions, and about microservices and cloud-native programming frameworks. Many of them relate to a specific product or technology, but it is not obvious how these solutions relate to Kubernetes or to what extent they are applicable in hybrid, multi-cloud, enterprise-centric environments. Things that work fine for an enthusiastic cloud startup developer might not be easy for an enterprise developer who often operates in a highly regulated environment.
In the following section we briefly summarize the role of the service mesh and discuss which functions are fulfilled by Kubernetes itself and which ones need to be augmented by an additional solution. In the case of IBM Cloud Private, the product of choice is Istio.
9.2 Role of the service mesh
In traditional applications, the communication pattern was usually built into application code and service endpoint configuration was usually static. This approach does not work in dynamic cloud-native environments. Therefore, there is a requirement for an additional infrastructure layer that helps manage communication between complex applications consisting of a large number of distinct services. Such a layer is usually called a service mesh.
The service mesh provides several important capabilities like:
Service discovery
Load balancing
Encryption
Observability and traceability
Authentication and authorization
Support for the circuit breaker pattern
The following sections describe each of these functionalities in more detail.
9.2.1 Service registry
Modern, cloud-native applications are often highly distributed and use a large number of components or microservices that are loosely coupled. In dynamic cloud environments, the placement of any service instance can change at any time, so there is a need for an information repository that holds current information about which services are available and how they can be reached. This repository is called a service registry. There are several popular implementations of a service registry, for example Eureka developed by Netflix, Consul from HashiCorp, and Apache ZooKeeper. None of these solutions is Kubernetes-specific, and their functions to some extent overlap with what Kubernetes natively provides.
IBM Cloud Private uses Kubernetes technology as its foundation. In Kubernetes, services are of primary importance, and Kubernetes provides an implicit service registry using DNS. When a controller alters Kubernetes resources, for example starting or stopping some pods, it also updates the related service entries. Binding between pod instances and services that expose them is dynamic, using label selectors.
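As a minimal sketch (mirroring the catalog service that appears later in this chapter, not one of the chapter's example files), a Service definition like the following is registered in the cluster DNS as catalog.<namespace>.svc.cluster.local, and any pod that carries the matching label automatically becomes one of its endpoints:

apiVersion: v1
kind: Service
metadata:
  name: catalog
spec:
  selector:
    app: catalog     # any pod with this label is added as an endpoint
  ports:
  - port: 8000       # port that the service exposes
    targetPort: 8000 # port that the pods listen on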
Istio leverages Kubernetes service resources, and does not implement a separate service registry.
9.2.2 Service discovery
The process of locating a service instance is called service discovery. There are two types of service discovery:
Client-side discovery: The application that requests a service network location gets all service instances from the service registry and decides which one to contact. This approach is implemented, for example, by the Netflix Ribbon library.
Server-side discovery: The application sends a request to a proxy that routes the request to one of the available instances. This approach is used in the Kubernetes environment and in Istio.
9.2.3 Load balancing
When there are multiple instances of a target service available, the incoming traffic should be load balanced between them. Kubernetes natively implements this functionality, but Istio greatly enhances the available configuration options.
9.2.4 Traffic encryption
In Kubernetes, internal data traffic can either be all plain or all encrypted using IPSec. Istio can dynamically encrypt traffic to and from specific services based on policies, and it does not require any changes to the application code.
9.2.5 Observability and traceability
This capability is not implemented in standard Kubernetes, although it could be provided by the CNI network implementation. However, the Calico project used by IBM Cloud Private does not provide it. To trace traffic between applications, the applications must embed distributed tracing libraries, such as Zipkin. Istio implements this capability, allowing all traffic to be traced and visualized without any modification to the application code.
9.2.6 Access control
Kubernetes lets you define network policies that govern which pods can communicate, but the enforcement of these policies is done at the Container Network Interface (CNI) level. The Calico project used by IBM Cloud Private allows you to define firewall rules between pods based on IP and port combinations. Istio extends access control up to L7 (layer 7).
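For illustration, a minimal Kubernetes NetworkPolicy of the kind that Calico enforces might look as follows (the labels and port are hypothetical). Note that it can only express L3/L4 constraints such as pod selectors, protocols, and ports:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-user-to-catalog
spec:
  podSelector:
    matchLabels:
      app: catalog        # the policy protects catalog pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: user       # only pods labeled app=user may connect
    ports:
    - protocol: TCP
      port: 8000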
9.2.7 Circuit breaker pattern support
By default, Kubernetes does not provide this capability. It can be embedded in application code using, for example, Hystrix from Netflix OSS, but it can also be implemented at the proxy level, as in Istio or Consul.
9.3 Istio architecture
An Istio service mesh is logically split into a data plane and a control plane.
The data plane is composed of a set of intelligent proxies (Envoy) deployed as sidecars. These proxies mediate and control all network communication between microservices along with Mixer, a general-purpose policy and telemetry hub.
The control plane manages and configures the proxies to route traffic. Additionally, the control plane configures Mixers to enforce policies and collect telemetry.
9.3.1 Components
The following section describes the components of the planes.
Figure 9-1 shows the different components that make up each plane.
Figure 9-1 Istio architecture (image from the Istio documentation at https://istio.io/docs/concepts/what-is-istio/)
Envoy
Istio uses an extended version of the Envoy proxy. Envoy is a high-performance proxy developed in C++ to mediate all inbound and outbound traffic for all services in the service mesh. Istio leverages Envoy’s many built-in features, for example:
Dynamic service discovery
Load balancing
TLS termination
HTTP/2 and gRPC proxies
Circuit breakers
Health checks
Staged rollouts with percentage-based traffic split
Fault injection
Rich metrics
Envoy is deployed as a sidecar to the relevant service in the same Kubernetes pod. This deployment allows Istio to extract a wealth of signals about traffic behavior as attributes. Istio can, in turn, use these attributes in Mixer to enforce policy decisions, and send them to monitoring systems to provide information about the behavior of the entire mesh.
The sidecar proxy model also allows you to add Istio capabilities to an existing deployment with no need to rearchitect or rewrite code. You can read more about why Istio chose this approach in the Istio design goals documentation.
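Besides the automatic injection shown later in this chapter, the sidecar can also be added manually at deployment time. A sketch, assuming the istioctl tool is installed and deployment.yaml is your existing manifest:

# Render the manifest with the Envoy sidecar container added, then deploy it
istioctl kube-inject -f deployment.yaml | kubectl apply -f -

The kube-inject command rewrites the manifest to include the proxy container, leaving the original application container untouched.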
Mixer
Mixer is a platform-independent component. Mixer enforces access control and usage policies across the service mesh, and collects telemetry data from the Envoy proxy and other services. The proxy extracts request level attributes, and sends them to Mixer for evaluation. You can find more information about this attribute extraction and policy evaluation in Mixer Configuration documentation.
Mixer includes a flexible plug-in model. This model enables Istio to interface with a variety of host environments and infrastructure backends. Thus, Istio abstracts the Envoy proxy and Istio-managed services from these details.
Pilot
Pilot provides service discovery for the Envoy sidecars, traffic management capabilities for intelligent routing (e.g., A/B tests, canary deployments, etc.), and resiliency (timeouts, retries, circuit breakers, etc.).
Pilot converts high level routing rules that control traffic behavior into Envoy-specific configurations, and propagates them to the sidecars at run time. Pilot abstracts platform-specific service discovery mechanisms and synthesizes them into a standard format that any sidecar conforming with the Envoy data plane APIs can use. With this loose coupling, Istio can run on multiple environments, such as Kubernetes, Consul, or Nomad, while maintaining the same operator interface for traffic management.
Citadel
Citadel provides strong service-to-service and end-user authentication with built-in identity and credential management. You can use Citadel to upgrade unencrypted traffic in the service mesh. Using Citadel, operators can enforce policies based on service identity rather than on network controls. Starting from release 0.5, you can use Istio’s authorization feature to control who can access your services.
Galley
Galley validates user authored Istio API configuration on behalf of the other Istio control plane components. Over time, Galley will take over responsibility as the top-level configuration ingestion, processing and distribution component of Istio. It will be responsible for insulating the rest of the Istio components from the details of obtaining user configuration from the underlying platform (for example Kubernetes).
9.3.2 Istio functions
Istio has the following major functions.
Connect
Istio provides traffic management for services. The traffic management function includes:
Intelligent routing: The ability to perform traffic splitting and traffic steering over multiple versions of the service
Resiliency: The capability to increase microservices application performance and fault tolerance by performing resiliency tests, error and fault isolation, and failed service ejection.
Secure
Istio implements Role-based Access Control (RBAC), which allows a specific determination of which service can connect to which other services. Istio uses the Secure Production Identity Framework for Everyone (SPIFFE) to uniquely identify the ServiceAccount of a microservice and uses that to make sure that communication is allowed.
Control
Istio provides a set of policies that allow control to be enforced based on the data collected. This capability is performed by Mixer.
Observe
While enforcing the policy, Istio also collects the data that is generated by the Envoy proxies. The data can be collected as metrics into Prometheus or as tracing data that can be viewed through Jaeger and Kiali.
This chapter describes mainly the resiliency (see 9.5, “Service resiliency”) and security (see 9.6, “Achieving E2E security for microservices using Istio”) implementation using Istio, while intelligent routing is discussed in “Chapter 4. Manage your service mesh with Istio” of the IBM Redbooks publication IBM Cloud Private Application Developer's Guide, SG24-8441.
9.4 Installation of Istio and enabling the application for Istio
Istio can be enabled during IBM Cloud Private installation, or it can be installed after the cluster is set up. The IBM Cloud Private Knowledge Center provides good guidance; see https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.2/manage_cluster/istio.html for more details.
Here we introduce another approach: installing Istio using the command-line interface, which gives you more control over the options.
 
Tip: This approach can be used as a base in an air-gapped environment where you must use a local chart.
In addition, we will also demonstrate the sample bookinfo application deployed in IBM Cloud Private, where you must pay extra attention due to the enhanced security management features introduced in IBM Cloud Private version 3.1.1.
9.4.1 Install Istio with the helm command
Before the installation, we assume that you have downloaded the cloudctl, kubectl, and helm tools for your platform from the IBM Cloud Private dashboard.
1. Log in to IBM Cloud Private:
Run the command cloudctl login -a https://Your_ICP_Host:8443/. Log in with the admin ID and password. Upon successful login, the tool sets up the kubectl and Helm clients.
2. Next you will set up the Helm repository. Run the command in Example 9-1 to add a Helm repository that points to the management repository from IBM Cloud Private.
Example 9-1 Command to add a Helm repository
helm repo add --ca-file ~/.helm/ca.pem --key-file ~/.helm/key.pem --cert-file ~/.helm/cert.pem icp https://Your_ICP_Host:8443/mgmt-repo/charts
Note that the ca-file, the key-file, and the cert-file are created automatically when you perform the first step.
Refresh the repository with the helm repo update command.
3. Create the secrets for Grafana and Kiali. We enable Grafana and Kiali for this deployment, and the secret for the console login is required. Create the files as in Example 9-2 and Example 9-3.
Example 9-2 Secret object for Kiali
apiVersion: v1
kind: Secret
metadata:
  name: kiali
  namespace: istio-system
  labels:
    app: kiali
type: Opaque
data:
  username: YWRtaW4K
  passphrase: YWRtaW4K
Example 9-3 Secret object for Grafana
apiVersion: v1
kind: Secret
metadata:
  name: grafana
  namespace: istio-system
  labels:
    app: grafana
type: Opaque
data:
  username: YWRtaW4K
  passphrase: YWRtaW4K
Notice that the username and passphrase are base64 encoded. You can get the encoded value by running echo yourtext | base64.
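For example, a quick sketch of producing and checking the encoded values used above (note that echo appends a trailing newline, which becomes part of the encoded value):

echo admin | base64             # prints YWRtaW4K
echo YWRtaW4K | base64 --decode # prints admin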
4. Run the commands in Example 9-4 to apply the objects.
Example 9-4 Commands to apply the secret objects
kubectl apply -f grafana.yaml
kubectl apply -f kiali.yaml
5. Now you need to customize your settings. Create a YAML file, as shown in Example 9-5, to override the default settings of the Istio Helm chart. Save it as vars.yaml.
You can see the values.yaml file in the chart. The default chart tarball can be downloaded with the following command:
curl -k -LO https://<Your Cluster>:8443/mgmt-repo/requiredAssets/ibm-istio-1.0.5.tgz
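To review the defaults before overriding them, you can extract the tarball. A sketch, assuming the chart unpacks into an ibm-istio directory:

tar -xzf ibm-istio-1.0.5.tgz
less ibm-istio/values.yaml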
Example 9-5 Override the default settings of the Istio Helm chart
grafana:
  enabled: true
tracing:
  enabled: true
kiali:
  enabled: true
Here we enable Grafana, tracing, and Kiali, which are disabled by default.
6. Next you will deploy the Istio chart by running the command in Example 9-6.
Example 9-6 Deploy the Istio Helm chart
helm.icp install --name istio --namespace istio-system -f vars.yaml icp/ibm-istio --tls
Note that the CustomResourceDefinitions (CRDs) no longer need to be created separately before the deployment.
7. To validate the installation, run the command shown in Example 9-7.
Example 9-7 Validation
kubectl -n istio-system get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
grafana-cbc8c66bb-bqdll 1/1 Running 0 103m 172.20.72.180 10.93.221.105 <none>
istio-citadel-7cc85b9986-vk7nn 1/1 Running 0 103m 172.20.72.138 10.93.221.105 <none>
istio-egressgateway-79895bb8f7-k6zfw 1/1 Running 0 103m 172.20.121.208 10.93.221.68 <none>
istio-galley-77554979fc-j2qcg 1/1 Running 0 103m 172.20.72.189 10.93.221.105 <none>
istio-ingressgateway-56758bf968-4gfmt 1/1 Running 0 103m 172.20.61.77 10.171.37.135 <none>
istio-ingressgateway-56758bf968-zjzjz 1/1 Running 0 65m 172.20.121.209 10.93.221.68 <none>
istio-pilot-599f699d55-479ct 2/2 Running 0 103m 172.20.72.185 10.93.221.105 <none>
istio-policy-f8fcb8496-sgmck 2/2 Running 0 103m 172.20.72.184 10.93.221.105 <none>
istio-sidecar-injector-864d889459-zzlq2 1/1 Running 0 103m 172.20.72.190 10.93.221.105 <none>
istio-statsd-prom-bridge-75cc7c6c45-xq72c 1/1 Running 0 103m 172.20.72.131 10.93.221.105 <none>
istio-telemetry-665689b445-vfvqb 2/2 Running 0 103m 172.20.72.150 10.93.221.105 <none>
istio-tracing-694d9bf7b4-8tlhs 1/1 Running 0 103m 172.20.72.186 10.93.221.105 <none>
kiali-749cfd5f6-5kgjw 1/1 Running 0 103m 172.20.72.183 10.93.221.105 <none>
prometheus-77c5cc6dbd-h8bxv 1/1 Running 0 103m 172.20.72.187 10.93.221.105 <none>
All the pods under the namespace istio-system are running. Notice that other than the ingressgateway and egressgateway, which run on the proxy node, the rest of the services all run on the management node.
9.4.2 Enable application for Istio
With the release of IBM Cloud Private Version 3.1.1, many security enhancements are turned on by default. This section documents what extra configurations are required in order for your application to be managed by Istio.
The bookinfo application is a sample application that was developed by istio.io to demonstrate the various Istio features. We will deploy the application into a dedicated namespace, istio-exp, instead of the default namespace, which is the situation you are more likely to face in a real project.
1. Create the namespace as follows:
a. You can create the namespace with the Dashboard console. Go to Menu → Manage → Namespaces, then click Create Namespace. This action displays a dialog, as shown in Figure 9-2.
Figure 9-2 Create a namespace
b. Name the namespace istio-exp, then select ibm-privileged-psp as the Pod Security Policy.
c. Click Create.
 
Note: You can also create the namespace using the kubectl create ns istio-exp command.
d. Create a file named rolebinding.yaml with the content shown in Example 9-8.
Example 9-8 Rolebinding to bind service account to predefined cluster role
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ibm-privileged-clusterrole-rolebinding
  namespace: istio-exp
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ibm-privileged-clusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:istio-exp
e. Then run the command kubectl apply -f rolebinding.yaml.
When Istio injects the sidecar into the pod through an initContainer, it needs privileged rights. Therefore, we assign the ibm-privileged-clusterrole to the service accounts of the namespace. If you do not, you might see an error message like the one in Example 9-9.
Example 9-9 Error message when the proper PSP is not assigned
message: 'pods "details-v1-876bf485f-m84f8" is forbidden: unable to validate against
any pod security policy: [spec.initContainers[0].securityContext.capabilities.add:
Invalid value: "NET_ADMIN": capability may not be added]'
2. Next you will create the image policy.
a. The images for the bookinfo application are not in the whitelist. To run the application, create the file image.policy.yaml, as in Example 9-10.
Example 9-10 Image policy for the bookinfo app
apiVersion: securityenforcement.admission.cloud.ibm.com/v1beta1
kind: ImagePolicy
metadata:
  name: book-info-images-whitelist
  namespace: istio-exp
spec:
  repositories:
  - name: docker.io/istio/*
b. Then apply the policy with kubectl apply -f image.policy.yaml.
3. Now you will label the namespace for Istio injection. Run the command kubectl label namespace istio-exp istio-injection=enabled to label the target namespace, which enables automatic sidecar injection for the namespace.
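You can verify the label with the -L flag of kubectl get (a quick check; output similar to the following):

kubectl get namespace istio-exp -L istio-injection
NAME        STATUS   AGE   ISTIO-INJECTION
istio-exp   Active   1m    enabled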
4. Deploy the application by running the command:
kubectl -n istio-exp apply -f istio/istio-1.0.6/samples/bookinfo/platform/kube/bookinfo.yaml
5. Validate that the pods are running as shown in Example 9-11.
Example 9-11 Validate that the pods are running
kubectl -n istio-exp get pods
 
NAME READY STATUS RESTARTS AGE
details-v1-876bf485f-k58pb 2/2 Running 0 45m
productpage-v1-8d69b45c-6thb9 2/2 Running 0 45m
ratings-v1-7c9949d479-sbt8p 2/2 Running 0 45m
reviews-v1-85b7d84c56-pntvg 2/2 Running 0 45m
reviews-v2-cbd94c99b-hbpzz 2/2 Running 0 45m
reviews-v3-748456d47b-qtcs5 2/2 Running 0 45m
Notice the 2/2 in the READY column of the output. Sidecar injection adds the istio-proxy container to each pod, bringing the container count to two.
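To see which two containers a pod contains, you can query its spec. A sketch using the first pod name from the output above (expect the application container plus istio-proxy):

kubectl -n istio-exp get pod details-v1-876bf485f-k58pb \
  -o jsonpath='{.spec.containers[*].name}'
details istio-proxy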
9.4.3 Uninstallation
To uninstall Istio, run the command helm delete istio --tls --purge to delete and purge the release.
You will also need to delete the CustomResourceDefinition objects that are left over in this version. To delete these objects, run the command kubectl delete -f istio/ibm-mgmt-istio/templates/crds.yaml.
9.5 Service resiliency
In a distributed system, dealing with unexpected failures is one of the hardest problems to solve. For example, what happens when an instance of a microservice is unhealthy? Apart from detecting it, we need a mechanism to auto-correct it. With an appropriate liveness/readiness probe in the pod specification we can detect whether the pod is working correctly, and Kubernetes will restart it if it is not functioning properly.
But to achieve service resiliency, we need to address the following challenges:
How to handle services that are working, but taking too long to respond due to an environment issue?
How to handle services that respond only after a certain number of retries and within a certain amount of time?
How to stop the incoming traffic for some time (if the service has a problem), wait for the service to “heal” itself, and resume the inbound traffic automatically when it is working again?
How to set a timeout for a request landing on Kubernetes, so that a response is returned to the client within a definite time frame?
How to automatically remove a pod that has been unhealthy for quite some time?
How to load balance across pods to increase throughput and lower latency?
Kubernetes does not address these challenges out of the box; we need a service mesh for that. A service mesh provides critical capabilities including service discovery, load balancing, encryption, observability, traceability, authentication and authorization, and support for the circuit breaker pattern. There are different solutions for a service mesh, such as Istio, Linkerd, and Conduit. We will use the Istio service mesh to discuss how to achieve service resiliency.
Istio comes with several capabilities for implementing resilience within applications. Actual enforcement of these capabilities happens in the sidecar. These capabilities will be discussed in the following sections within the context of a microservices example.
To understand the Istio capabilities, we will use the example published on the following IBM Redbooks GitHub URL:
The details about the microservices setup are given in the readme file.
9.5.1 Retry
The retry capability of Istio can be used in the following two scenarios:
Transient, intermittent errors occur due to environment issues, for example network or storage problems.
The service or pod might have gone down only briefly.
With Istio's retry capability, a request can be retried a few times before the error truly has to be dealt with.
Example 9-12 shows the virtual service definition for the catalog service.
Example 9-12 Virtual service definition for the catalog service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog
spec:
  hosts:
  - catalog
  http:
  - route:
    - destination:
        host: catalog
    retries:
      attempts: 3
      perTryTimeout: 2s
1. Create a virtual service for the catalog. See Example 9-13.
Example 9-13 Create virtual service for catalog
root@scamp1:~/istio_lab# kubectl create -f catalog_retry.yaml
 
virtualservice.networking.istio.io/catalog created
2. Create a destination rule for all three microservices. See Example 9-14.
Example 9-14 Create a destination rule for all three microservices
root@scamp1:~/istio_lab# istioctl create -f destination_rule.yaml
 
Created config destination-rule/default/user at revision 2544618
Created config destination-rule/default/catalog at revision 2544619
Created config destination-rule/default/product at revision 2544621
3. Get cluster IP for the user microservice. See Example 9-15.
Example 9-15 Get the cluster IP for the user microservice
root@scamp1:~/istio_lab# kubectl get svc
 
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
catalog ClusterIP 10.0.0.31 <none> 8000/TCP 23h
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 23h
product ClusterIP 10.0.0.221 <none> 8000/TCP 23h
user ClusterIP 10.0.0.55 <none> 8000/TCP 23h
4. Launch the user microservice to check the responses. See Example 9-16.
Example 9-16 Launch the user microservice to check the responses
root@scamp1:~/istio_lab# ./execute.sh 10.0.0.55
 
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v2==>product::Able to fetch infromation from product service
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v2==>product::Able to fetch infromation from product service
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v2==>product::Able to fetch infromation from product service
user==>catalog:v2==>product::Able to fetch infromation from product service
user==>catalog:v2==>product::Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
5. As we can see from the output in Example 9-16, the user microservice is randomly calling different versions of the catalog microservice. Now add a bug in the catalog:v2 microservice and check how it behaves. See Example 9-17.
Example 9-17 Add a bug in the catalog:v2 microservice to make it unhealthy
root@scamp1:~/istio_lab# kubectl get po
 
NAME READY STATUS RESTARTS AGE
catalog-v1-5bf8c759b9-vbmv5 2/2 Running 0 1m
catalog-v2-547b5f6769-6qgzq 2/2 Running 0 1m
catalog-v3-569bd6c7d9-p9sgr 2/2 Running 0 1m
product-v1-747cf9f795-c4z5l 2/2 Running 0 1m
user-v1-6b5c74b477-cqr6b 2/2 Running 0 1m
 
root@scamp1:~/istio_lab# kubectl exec -it catalog-v2-547b5f6769-6qgzq -- curl localhost:8000/unhealthy
 
Defaulting container name to catalog.
service got unhealthy
We will launch the user microservice to determine how it responds. See Example 9-18.
Example 9-18 Start the user microservice
root@scamp1:~/istio_lab# ./execute.sh 10.0.0.55
user==>catalog:v1==>product:Able to fetch infromation from product service
catalog:v2 service not available
catalog:v2 service not available
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
catalog:v2 service not available
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
catalog:v2 service not available
catalog:v2 service not available
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
catalog:v2 service not available
As we can see in the output, catalog:v2 microservice is not responding and all calls made to it from the user microservice fail.
6. We will add retry logic to the virtualservice of the catalog and check how it behaves. See Example 9-19.
Example 9-19 Add retry logic in virtualservice of the catalog
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog
spec:
  hosts:
  - catalog
  http:
  - route:
    - destination:
        host: catalog
    retries:
      attempts: 3
      perTryTimeout: 2s
7. The previous virtualservice definition for the catalog makes sure that each call to the catalog service is attempted up to 3 times before failing. Next, create a virtual service for the catalog using the definition in Example 9-20.
Example 9-20 Create a virtual service for catalog
root@scamp1:~/istio_lab# istioctl create -f catalog_retry.yaml
 
Created config virtual-service/default/catalog at revision 2553113
8. Then, examine the output of starting the user microservice, as shown in Example 9-21.
Example 9-21 Launching the user microservice
root@scamp1:~/istio_lab# ./execute.sh 10.0.0.55
 
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v3==>product:Able to fetch infromation from product service
As we can see in Example 9-21, the user microservice no longer gets responses from catalog:v2. When a call to catalog:v2 fails, it is retried up to 2 more times, and the retried call lands on catalog:v1 or catalog:v3 and succeeds.
9.5.2 Timeout
Calls to services over a network can result in unexpected behavior, and we can only guess the cause: Has the service failed? Is it just slow? Is it unavailable? Waiting without any timeout uses resources unnecessarily, causes other systems to wait, and is usually a contributor to cascading failures. Your network traffic should always have timeouts in place, and you can achieve this goal with the timeout capability of Istio: the caller waits for only a defined number of seconds before giving up and failing the request.
1. First, delete any virtualservice and destinationrule definitions from the setup. Then add a 5-second delay to the response of the catalog:v2 microservice, as shown in Example 9-22.
Example 9-22 Add a 5-second delay to the response of the catalog:v2 microservice
root@scamp1:~/istio_lab# kubectl get po
 
NAME READY STATUS RESTARTS AGE
catalog-v1-5bf8c759b9-vbmv5 2/2 Running 0 1h
catalog-v2-547b5f6769-6qgzq 2/2 Running 0 1h
catalog-v3-569bd6c7d9-p9sgr 2/2 Running 0 1h
product-v1-747cf9f795-c4z5l 2/2 Running 0 1h
user-v1-6b5c74b477-cqr6b 2/2 Running 0 1h
 
root@scamp1:~/istio_lab# kubectl exec -it catalog-v2-547b5f6769-6qgzq -- curl localhost:8000/timeout
Defaulting container name to catalog.
delayed added in microservice
2. Try to launch the user microservice, as shown in Example 9-23.
Example 9-23 Launch user microservice
root@scamp1:~/istio_lab# ./execute.sh 10.0.0.55
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v2==>product::Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
As the script runs, you will notice that whenever a call goes to the catalog:v2 microservice, it takes more than 5 seconds to respond, which is unacceptable for service resiliency.
3. Now we will create a VirtualService for the catalog and add 1 second as the timeout value. This is shown in Example 9-24.
Example 9-24 Create a VirtualService for the catalog and add 1 second as timeout
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog
spec:
  hosts:
  - catalog
  http:
  - route:
    - destination:
        host: catalog
    timeout: 1.000s
4. Create a virtual service for the catalog as shown in Example 9-25.
Example 9-25 Create a virtual service for the catalog
root@scamp1:~/istio_lab# istioctl create -f catalog_timeout.yaml
Created config virtual-service/default/catalog at revision 2561104
5. We will now start the user microservice as shown in Example 9-26.
Example 9-26 Launch the user microservice
root@scamp1:~/istio_lab# ./execute.sh 10.0.0.55
user==>catalog:v3==>product:Able to fetch infromation from product service
upstream request timeout
upstream request timeout
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v3==>product:Able to fetch infromation from product service
upstream request timeout
user==>catalog:v1==>product:Able to fetch infromation from product service
upstream request timeout
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v3==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
upstream request timeout
user==>catalog:v1==>product:Able to fetch infromation from product service
user==>catalog:v1==>product:Able to fetch infromation from product service
upstream request timeout
user==>catalog:v1==>product:Able to fetch infromation from product service
As we can see in Example 9-26, whenever a call is made to catalog:v2, it times out after 1 second, according to the virtual service definition for the catalog.
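Retries and timeouts can also be combined in a single route. A sketch (a hypothetical variant, not one of the files used above) where the overall deadline caps the total time spent across all retry attempts:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog
spec:
  hosts:
  - catalog
  http:
  - route:
    - destination:
        host: catalog
    retries:
      attempts: 3
      perTryTimeout: 1s # budget for each individual attempt
    timeout: 2.5s       # overall deadline across all attempts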
9.5.3 Load balancer
All HTTP traffic bound to a service is automatically rerouted through Envoy. Envoy distributes the traffic across instances in the load balancing pool. Istio currently allows three load balancing modes: round robin, random, and weighted least request.
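The mode is selected per destination through a trafficPolicy. A sketch (a hypothetical rule, not one of the files used in this chapter) that switches the catalog service to random load balancing:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: catalog-lb
spec:
  host: catalog
  trafficPolicy:
    loadBalancer:
      simple: RANDOM   # ROUND_ROBIN and LEAST_CONN are the other simple modes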
9.5.4 Simple circuit breaker
Circuit breaking is an important pattern for creating resilient microservice applications. Circuit breaking allows you to write applications that limit the impact of failures, latency spikes, and other undesirable effects of network peculiarities.
We will first show an example where we do not define a circuit breaker rule, and see how the service behaves. See Example 9-27.
Example 9-27 Add a 5-second delay to the catalog:v2 microservice
root@scamp1:~/istio_lab/cb# kubectl get po
 
NAME READY STATUS RESTARTS AGE
catalog-v1-5bf8c759b9-fjzp8 2/2 Running 0 2m
catalog-v2-547b5f6769-h4rkr 2/2 Running 0 2m
catalog-v3-569bd6c7d9-7l4c4 2/2 Running 0 2m
product-v1-747cf9f795-zwgt4 2/2 Running 0 2m
user-v1-6b5c74b477-r84tn 2/2 Running 0 2m
 
root@scamp1:~/istio_lab/cb# kubectl exec -it catalog-v2-547b5f6769-h4rkr -- curl localhost:8000/timeout
 
delayed added in microservice
1. We have added a 5-second delay to the catalog:v2 microservice, so it will take a minimum of 5 seconds to respond. The ClusterIP for the catalog microservice in this setup is 10.0.0.31. Now create a destination rule for the catalog microservice. See Example 9-28.
Example 9-28 Create a destinationrule for the catalog microservice
root@scamp1:~/istio_lab/cb# cat destination_rule.yaml

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: catalog
spec:
  host: catalog
  subsets:
  - labels:
      version: v1
    name: version-v1
  - labels:
      version: v2
    name: version-v2
  - labels:
      version: v3
    name: version-v3

root@scamp1:~/istio_lab/cb# kubectl create -f destination_rule.yaml

destinationrule.networking.istio.io/catalog created
2. Next, create a virtual service for the catalog microservice that splits traffic across the three subsets. See Example 9-29.
Example 9-29 Create a virtual service for the catalog microservice
root@scamp1:~/istio_lab/cb# cat virtual_service_cb.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog
spec:
  hosts:
  - catalog
  http:
  - route:
    - destination:
        host: catalog
        subset: version-v1
      weight: 25
    - destination:
        host: catalog
        subset: version-v2
      weight: 50
    - destination:
        host: catalog
        subset: version-v3
      weight: 25

root@scamp1:~/istio_lab/cb# kubectl create -f virtual_service_cb.yaml

virtualservice.networking.istio.io/catalog created
3. Now, check how the catalog microservice responds to requests. Use the siege tool to make requests to the catalog microservice. In Example 9-30, 5 concurrent clients each send 8 requests to the catalog.
Example 9-30 Five concurrent clients each send eight requests to the catalog
root@scamp1:~/istio_lab/cb# siege -r 8 -c 5 -v 10.0.0.31:8000
 
** SIEGE 4.0.4
** Preparing 5 concurrent users for battle.
The server is now under siege...
HTTP/1.1 200 0.02 secs: 74 bytes ==> GET /
HTTP/1.1 200 0.03 secs: 74 bytes ==> GET /
HTTP/1.1 200 5.05 secs: 75 bytes ==> GET /
HTTP/1.1 200 5.06 secs: 75 bytes ==> GET /
HTTP/1.1 200 5.04 secs: 75 bytes ==> GET /
HTTP/1.1 200 0.03 secs: 74 bytes ==> GET /
HTTP/1.1 200 0.03 secs: 74 bytes ==> GET /
HTTP/1.1 200 0.02 secs: 74 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 74 bytes ==> GET /
HTTP/1.1 200 5.09 secs: 75 bytes ==> GET /
HTTP/1.1 200 0.02 secs: 74 bytes ==> GET /
HTTP/1.1 200 0.03 secs: 74 bytes ==> GET /
HTTP/1.1 200 0.02 secs: 74 bytes ==> GET /
HTTP/1.1 200 5.14 secs: 75 bytes ==> GET /
HTTP/1.1 200 0.02 secs: 74 bytes ==> GET /
HTTP/1.1 200 0.02 secs: 74 bytes ==> GET /
HTTP/1.1 200 0.03 secs: 74 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 74 bytes ==> GET /
HTTP/1.1 200 5.03 secs: 75 bytes ==> GET /
HTTP/1.1 200 5.05 secs: 75 bytes ==> GET /
HTTP/1.1 200 5.01 secs: 75 bytes ==> GET /
.
.
Transactions: 40 hits
Availability: 100.00 %
Elapsed time: 25.17 secs
Data transferred: 0.00 MB
Response time: 2.03 secs
Transaction rate: 1.59 trans/sec
Throughput: 0.00 MB/sec
Concurrency: 3.22
Successful transactions: 40
Failed transactions: 0
Longest transaction: 5.14
As we can see in Example 9-30, all requests to our system were successful, but it took some time to complete the test because the v2 instance of catalog was a slow performer. But suppose that in a production system this 5-second delay was caused by too many concurrent requests to the same instance. This might cause cascading failures in your system.
4. Now, add a circuit breaker that opens whenever more than one concurrent request is handled by the catalog microservice. See Example 9-31.
Example 9-31 Add a circuit breaker
root@scamp1:~/istio_lab/cb# cat destination_rule_cb.yaml

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: catalog
spec:
  host: catalog
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
      tcp:
        maxConnections: 1
    outlierDetection:
      baseEjectionTime: 120.000s
      consecutiveErrors: 1
      interval: 1.000s
      maxEjectionPercent: 100
  subsets:
  - labels:
      version: v1
    name: version-v1
  - labels:
      version: v2
    name: version-v2
  - labels:
      version: v3
    name: version-v3

kubectl apply -f destination_rule_cb.yaml

Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
destinationrule.networking.istio.io/catalog configured
5. Now that the circuit breaker for the catalog is in place, make requests to the catalog microservice again. Example 9-32 shows 5 concurrent clients each sending 8 requests to the catalog microservice.
Example 9-32 Make a request to the catalog microservice
root@scamp1:~/istio_lab/cb# siege -r 8 -c 5 -v 10.0.0.31:8000
 
** SIEGE 4.0.4
** Preparing 5 concurrent users for battle.
The server is now under siege...
HTTP/1.1 200 0.05 secs: 74 bytes ==> GET /
HTTP/1.1 200 0.05 secs: 74 bytes ==> GET /
HTTP/1.1 200 0.05 secs: 74 bytes ==> GET /
HTTP/1.1 200 0.05 secs: 74 bytes ==> GET /
HTTP/1.1 503 0.02 secs: 57 bytes ==> GET /
HTTP/1.1 503 0.04 secs: 57 bytes ==> GET /
HTTP/1.1 200 0.04 secs: 74 bytes ==> GET /
HTTP/1.1 200 0.05 secs: 74 bytes ==> GET /
HTTP/1.1 200 0.04 secs: 74 bytes ==> GET /
HTTP/1.1 200 0.02 secs: 74 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 74 bytes ==> GET /
HTTP/1.1 503 0.02 secs: 57 bytes ==> GET /
HTTP/1.1 503 0.00 secs: 57 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 74 bytes ==> GET /
HTTP/1.1 503 0.01 secs: 57 bytes ==> GET /
HTTP/1.1 200 0.04 secs: 74 bytes ==> GET /
HTTP/1.1 200 0.04 secs: 74 bytes ==> GET /
HTTP/1.1 200 0.03 secs: 74 bytes ==> GET /
HTTP/1.1 503 0.02 secs: 57 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 74 bytes ==> GET /
HTTP/1.1 200 5.05 secs: 75 bytes ==> GET /
HTTP/1.1 503 0.01 secs: 57 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 74 bytes ==> GET /
HTTP/1.1 200 0.02 secs: 74 bytes ==> GET /
In Example 9-32, you can see 503 errors being returned: whenever Istio detects more than one pending request being handled by the catalog microservice, it opens the circuit breaker and rejects the excess requests.
9.5.5 Pool ejection
Pool ejection is a strategy that works when you have a pool of pods to serve requests. If a request that was forwarded to a pod fails, Istio ejects this pod from the pool for a sleep window. After the sleep window is over, the pod is added to the pool again. This strategy makes sure that only functioning pods participate in the pool of instances. For more information, see the following link:
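The sleep window corresponds to the outlierDetection settings that were already shown in Example 9-31. As a focused sketch, a destination rule that ejects a pod for 120 seconds after a single failure might look like this:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: catalog
spec:
  host: catalog
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 1        # eject a pod after one failed request
      interval: 1.000s            # how often the proxies scan for failures
      baseEjectionTime: 120.000s  # the sleep window before the pod returns
      maxEjectionPercent: 100     # allow ejecting every pod in the pool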
9.6 Achieving E2E security for microservices using Istio
To achieve E2E security for microservices using Istio, take the following items into consideration.
9.6.1 Inbound traffic
For inbound traffic in a Kubernetes environment, the Kubernetes Ingress resource is used to specify services that should be exposed outside the cluster. In an Istio service mesh, a better approach (which works in both Kubernetes and other environments) is to use a different configuration model, namely the Istio gateway. A gateway allows Istio features such as monitoring and route rules to be applied to the traffic entering the cluster. The following example shows how to configure inbound traffic.
1. Find the ingress port and ingress host IP for the Kubernetes cluster as shown in Example 9-33.
Example 9-33 Find the ingress port and ingress host IP for the Kubernetes cluster
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
 
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
 
root@scamp1:~# echo $INGRESS_PORT
31380
 
root@scamp1:~# echo $SECURE_INGRESS_PORT
31390
 
root@scamp1:~# export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o 'jsonpath={.items[0].status.hostIP}')
 
root@scamp1:~# echo $INGRESS_HOST
XX.XX.XX.XX
2. Create an ingress gateway and attach it to the user virtual service, as shown in Example 9-34.
Example 9-34 Create an ingress gateway and attach it to the user virtual service
root@scamp1:~/istio_lab# cat gateway.yaml

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: app-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: app
spec:
  hosts:
  - "*"
  gateways:
  - app-gateway
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: user
        port:
          number: 8000

root@scamp1:~/istio_lab# kubectl create -f gateway.yaml
gateway.networking.istio.io/app-gateway created
virtualservice.networking.istio.io/app created
We have now created a virtual service configuration for the user service containing one route rule that allows traffic for path /. The gateways list specifies that only requests through your app-gateway are allowed. All other external requests will be rejected with a 404 response. Note that in this configuration, internal requests from other services in the mesh are not subject to these rules; instead, they default to round-robin routing. To apply these or other rules to internal calls, you can add the special value mesh to the list of gateways.
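For example, to apply the same route rule to in-mesh callers as well, the virtual service from Example 9-34 would list both gateways (a sketch of the modified resource):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: app
spec:
  hosts:
  - "*"
  gateways:
  - app-gateway
  - mesh            # reserved name: also apply these rules to internal traffic
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: user
        port:
          number: 8000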
3. Try to access the user service using the curl command shown in Example 9-35.
Example 9-35 Access the user service
root@scamp1:~/istio_lab# curl $INGRESS_HOST:$INGRESS_PORT
 
user==>catalog:v1==>product:Able to fetch infromation from product service
If you want to expose an HTTPS endpoint for inbound traffic, see the following link:
9.6.2 Outbound traffic
By default, Istio-enabled services are unable to access URLs outside of the cluster, because the pod uses iptables to transparently redirect all outbound traffic to the sidecar proxy, which only handles intra-cluster destinations.
This section describes how to configure Istio to expose external services to Istio-enabled clients. You will learn how to enable access to external services by defining ServiceEntry configurations, or alternatively, to bypass the Istio proxy for a specific range of IPs.
Configuring the external service
Using Istio ServiceEntry configurations, you can access any publicly accessible service from within your Istio cluster. Example 9-36 shows how to enable access to www.google.com.
Example 9-36 Access www.google.com
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: google
spec:
  hosts:
  - www.google.com
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: google
spec:
  hosts:
  - www.google.com
  tls:
  - match:
    - port: 443
      sni_hosts:
      - www.google.com
    route:
    - destination:
        host: www.google.com
        port:
          number: 443
      weight: 100
EOF
In Example 9-36 we created a serviceentry and a virtualservice to allow access to an external HTTPS service. Note that for TLS protocols, including HTTPS, the TLS virtualservice is required in addition to the serviceentry.
1. Create a ServiceEntry to allow access to an external HTTP service, as shown in Example 9-37.
Example 9-37 Create a ServiceEntry
root@scamp1:~/istio_lab# cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-ext
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL
EOF
2. Make requests to the external services. Make an HTTP outbound request to httpbin.org from the user pod, as shown in Example 9-38.
Example 9-38 HTTP outbound request to httpbin.org from the user pod
root@scamp1:~/istio_lab# export SOURCE_POD=$(kubectl get pod -l app=user -o jsonpath={.items..metadata.name})
 
root@scamp1:~/istio_lab# kubectl exec -it $SOURCE_POD -c user curl http://httpbin.org/headers
{
"headers": {
"Accept": "*/*",
"Cache-Control": "max-stale=0",
"Host": "httpbin.org",
"If-Modified-Since": "Thu, 14 Mar 2019 06:00:44 GMT",
"User-Agent": "curl/7.38.0",
"X-B3-Sampled": "1",
"X-B3-Spanid": "51c9d4d21b2e4f7c",
"X-B3-Traceid": "51c9d4d21b2e4f7c",
"X-Bluecoat-Via": "6f5b02aba0abb15e",
"X-Envoy-Decorator-Operation": "httpbin.org:80/*",
"X-Istio-Attributes": "CikKGGRlc3RpbmF0aW9uLnNlcnZpY2UubmFtZRINEgtodHRwYmluLm9yZwoqCh1kZXN0aW5hdGlvbi5zZXJ2aWNlLm5hbWVzcGFjZRIJEgdkZWZhdWx0CiQKE2Rlc3RpbmF0aW9uLnNlcnZpY2USDRILaHR0cGJpbi5vcmcKPQoKc291cmNlLnVpZBIvEi1rdWJlcm5ldGVzOi8vdXNlci12MS02YjVjNzRiNDc3LXd4dGN6LmRlZmF1bHQKKQoYZGVzdGluYXRpb24uc2VydmljZS5ob3N0Eg0SC2h0dHBiaW4ub3Jn"
}
}
Similarly, we can access https://www.google.com from any pod in the Kubernetes setup, because we created the relevant serviceentry and virtualservice for it.
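A quick way to verify this from the same pod (a sketch; -sI only fetches the response headers, and you should see an HTTP 200 status line):

root@scamp1:~/istio_lab# kubectl exec -it $SOURCE_POD -c user -- curl -sI https://www.google.com | head -1
HTTP/1.1 200 OK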
9.6.3 Mutual TLS authentication
Istio tunnels service-to-service communication through the client-side and server-side Envoy proxies. For a client to call a server, the steps followed are:
1. Istio reroutes the outbound traffic from a client to the client’s local sidecar Envoy.
2. The client-side Envoy starts a mutual TLS handshake with the server-side Envoy. During the handshake, the client-side Envoy also performs a secure naming check to verify that the service account presented in the server certificate is authorized to run the target service.
3. The client-side Envoy and the server-side Envoy establish a mutual TLS connection, and Istio forwards the traffic from the client-side Envoy to the server-side Envoy.
4. After the authorization, the server-side Envoy forwards the traffic to the server service through local TCP connections.
We will use an example for mutual TLS. See Example 9-39.
Example 9-39 Create mutual TLS mesh-wide policy
root@scamp1:~# cat <<EOF | kubectl apply -f -
> apiVersion: "authentication.istio.io/v1alpha1"
> kind: "MeshPolicy"
> metadata:
>   name: "default"
> spec:
>   peers:
>   - mtls: {}
> EOF
meshpolicy.authentication.istio.io/default created
This policy specifies that all workloads in the service mesh will only accept encrypted requests using TLS. As you can see, this authentication policy has the kind MeshPolicy. The name of the policy must be default, and it contains no targets specification (because it is intended to apply to all services in the mesh). At this point, only the receiving side is configured to use mutual TLS. If you run a curl command between Istio services (for example, those with sidecars), all requests fail with a 503 error code because the client side is still using plain text. See Example 9-40.
Example 9-40 Try to access the catalog microservice from the user microservice
root@scamp1:~# kubectl get svc
 
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
catalog ClusterIP 10.0.0.31 <none> 8000/TCP 4d
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 4d
product ClusterIP 10.0.0.221 <none> 8000/TCP 4d
sleep ClusterIP 10.0.0.249 <none> 80/TCP 5h
user ClusterIP 10.0.0.55 <none> 8000/TCP 4d
 
root@scamp1:~# kubectl get po
 
NAME READY STATUS RESTARTS AGE
catalog-v1-5bf8c759b9-pcphd 2/2 Running 0 1h
catalog-v2-547b5f6769-7sdkd 2/2 Running 0 1h
catalog-v3-569bd6c7d9-sgz8h 2/2 Running 0 1h
product-v1-747cf9f795-z5ps2 2/2 Running 0 1h
sleep-64cbb8bf78-2w85v 2/2 Running 0 1h
user-v1-6b5c74b477-svtj7 2/2 Running 0 1h
 
root@scamp1:~# kubectl exec -it user-v1-6b5c74b477-svtj7 -- curl 10.0.0.31:8000
 
Defaulting container name to user.
upstream connect error or disconnect/reset before headers
To configure the client side, you need to set destination rules to use mutual TLS. It is possible to use multiple destination rules, one for each applicable service (or namespace). However, it is more convenient to use a rule with the * wildcard to match all services so that the configuration is on par with the mesh-wide authentication policy. See Example 9-41.
Example 9-41 Set the destination rule to use the mutual TLS
root@scamp1:~# cat <<EOF | kubectl apply -f -
> apiVersion: "networking.istio.io/v1alpha3"
> kind: "DestinationRule"
> metadata:
>   name: "default"
>   namespace: "default"
> spec:
>   host: "*.local"
>   trafficPolicy:
>     tls:
>       mode: ISTIO_MUTUAL
> EOF
destinationrule.networking.istio.io/default created
Now, try to make a call to the catalog microservice from the user microservice. See Example 9-42.
Example 9-42 Make a call to the catalog microservice from the user microservice
root@scamp1:~# kubectl exec -it user-v1-6b5c74b477-svtj7 -- curl 10.0.0.31:8000
 
Defaulting container name to user.
user==>catalog:v2==>product::Able to fetch infromation from product service
9.6.4 White or black listing
Istio supports attribute-based whitelists and blacklists. Using Istio you can control access to a service based on any attributes that are available within Mixer.
White listing
The whitelist is a deny everything rule, except for the approved invocation paths:
1. In the following example, you white list calls from the user microservice to the catalog microservice. See Example 9-43.
Example 9-43 White listing calls from user microservice to catalog microservice
root@scamp1:~/istio_lab# cat catalog_whitelist.yaml

apiVersion: "config.istio.io/v1alpha2"
kind: listchecker
metadata:
  name: catalogwhitelist
spec:
  overrides: ["user"]
  blacklist: false
---
apiVersion: "config.istio.io/v1alpha2"
kind: listentry
metadata:
  name: catalogsource
spec:
  value: source.labels["app"]
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: checkfromuser
spec:
  match: destination.labels["app"] == "catalog"
  actions:
  - handler: catalogwhitelist.listchecker
    instances:
    - catalogsource.listentry
 
root@scamp1:~/istio_lab# istioctl replace -f catalog_whitelist.yaml
 
Updated config listchecker/default/catalogwhitelist to revision 3251839
Updated config listentry/default/catalogsource to revision 3251840
Updated config rule/default/checkfromuser to revision 3251841
2. We will try to make calls from the user microservice to the catalog microservice. See Example 9-44.
Example 9-44 Make calls from the user microservice to the catalog microservice
root@scamp1:~/istio_lab# kubectl get svc
 
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
catalog ClusterIP 10.0.0.31 <none> 8000/TCP 4d
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 4d
product ClusterIP 10.0.0.221 <none> 8000/TCP 4d
sleep ClusterIP 10.0.0.249 <none> 80/TCP 6h
user ClusterIP 10.0.0.55 <none> 8000/TCP 4d
 
root@scamp1:~/istio_lab# kubectl get po
 
NAME READY STATUS RESTARTS AGE
catalog-v1-5bf8c759b9-pcphd 2/2 Running 0 1h
catalog-v2-547b5f6769-7sdkd 2/2 Running 0 1h
catalog-v3-569bd6c7d9-sgz8h 2/2 Running 0 1h
product-v1-747cf9f795-z5ps2 2/2 Running 0 1h
sleep-64cbb8bf78-2w85v 2/2 Running 0 1h
user-v1-6b5c74b477-svtj7 2/2 Running 0 1h
 
root@scamp1:~/istio_lab# kubectl exec -it user-v1-6b5c74b477-svtj7 -- curl 10.0.0.31:8000
 
Defaulting container name to user.
user==>catalog:v3==>product:Able to fetch infromation from product service
As we can see in Example 9-44, the call from the user microservice to the catalog microservice succeeds.
3. Now we will try to make a call from the product microservice to the catalog microservice. See Example 9-45.
Example 9-45 Make a call from the product microservice to the catalog microservice
root@scamp1:~/istio_lab# kubectl exec -it product-v1-747cf9f795-z5ps2 -- curl 10.0.0.31:8000
 
Defaulting container name to product.
NOT_FOUND:catalogwhitelist.listchecker.default:product is not whitelisted
Example 9-45 shows that the call from the product microservice to the catalog microservice has failed, because we white listed only the user microservice. This is the desired result.
Black listing
A blacklist explicitly denies particular invocation paths.
1. In Example 9-46 we blacklist calls from the user microservice to the catalog microservice, so that the user microservice can no longer call the catalog microservice.
Example 9-46 Black listing the user microservice to the catalog microservice
root@scamp1:~/istio_lab# cat catalog_blacklist.yaml

apiVersion: "config.istio.io/v1alpha2"
kind: denier
metadata:
  name: denyuserhandler
spec:
  status:
    code: 7
    message: Not allowed
---
apiVersion: "config.istio.io/v1alpha2"
kind: checknothing
metadata:
  name: denyuserrequests
spec:
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: denycustomer
spec:
  match: destination.labels["app"] == "catalog" && source.labels["app"]=="user"
  actions:
  - handler: denyuserhandler.denier
    instances: [ denyuserrequests.checknothing ]
2. We will make a call from the user microservice to the catalog microservice as shown in Example 9-47.
Example 9-47 Make a call from the user microservice to the catalog microservice
root@scamp1:~/istio_lab# kubectl exec -it user-v1-6b5c74b477-svtj7 -- curl 10.0.0.31:8000
 
Defaulting container name to user
PERMISSION_DENIED:denyuserhandler.denier.default:Not allowed
Because we blacklisted calls from the user microservice to the catalog microservice, the call in Example 9-47 fails.
3. Make a call from the product microservice to the catalog microservice. See Example 9-48.
Example 9-48 Make a call from the product microservice to the catalog microservice
root@scamp1:~/istio_lab# kubectl exec -it product-v1-747cf9f795-z5ps2 -- curl 10.0.0.31:8000
 
Defaulting container name to product.
user==>catalog:v2==>product::Able to fetch infromation from product service
The previous call has succeeded because we did not blacklist the product microservice.
9.6.5 Istio authorization
Istio’s authorization feature, also known as role-based access control (RBAC), provides namespace-level, service-level, and method-level access control for services in an Istio mesh. It features:
Role-based semantics, which are simple and easy to use.
Service-to-service and end-user-to-service authorization.
Flexibility through custom properties support, for example conditions, in roles and role-bindings.
High performance, as Istio authorization is enforced natively on Envoy.
See the Istio authorization architecture in Figure 9-3.
Figure 9-3 Istio authorization architecture
Figure 9-3 shows the basic Istio authorization architecture. Operators specify Istio authorization policies by using .yaml files. When deployed, Istio saves the policies in the Istio Config Store. Pilot watches for changes to Istio authorization policies and fetches the updated authorization policies if it sees any changes. Pilot distributes Istio authorization policies to the Envoy proxies that are colocated with the service instances.
Each Envoy proxy runs an authorization engine that authorizes requests at run time. When a request comes to the proxy, the authorization engine evaluates the request context against the current authorization policies, and returns the authorization result, ALLOW or DENY.
1. The first thing to do is enable Istio authorization by using the RbacConfig object. See Example 9-49.
Example 9-49 Enable Istio authorization
root@scamp1:~/istio_lab# cat << EOF | kubectl apply -f -
> apiVersion: "rbac.istio.io/v1alpha1"
> kind: RbacConfig
> metadata:
>   name: default
> spec:
>   mode: 'ON_WITH_INCLUSION'
>   inclusion:
>     namespaces: ["default"]
> EOF
rbacconfig.rbac.istio.io/default created
2. Try to make a call to the user microservice. See Example 9-50.
Example 9-50 Make a call to the user microservice
root@scamp1:~/istio_lab# curl 10.0.0.55:8000
 
RBAC: access denied
By default, Istio uses a deny by default strategy, meaning that nothing is permitted until you explicitly define access control policy to grant access to any service.
3. Now, grant access for any user to any service of our mesh (user, catalog, or product), but only if the communication uses the GET method. See Example 9-51.
Example 9-51 Grant access to any user to any service
root@scamp1:~/istio_lab# cat << EOF | kubectl apply -f -
> apiVersion: "rbac.istio.io/v1alpha1"
> kind: ServiceRole
> metadata:
>   name: service-viewer
> spec:
>   rules:
>   - services: ["*"]
>     methods: ["GET"]
>     constraints:
>     - key: "destination.labels[app]"
>       values: ["user", "catalog", "product"]
> ---
> apiVersion: "rbac.istio.io/v1alpha1"
> kind: ServiceRoleBinding
> metadata:
>   name: bind-service-viewer
>   namespace: default
> spec:
>   subjects:
>   - user: "*"
>   roleRef:
>     kind: ServiceRole
>     name: "service-viewer"
> EOF
4. Now if you make a GET call to the user microservice, it should succeed as seen in Example 9-52.
Example 9-52 Make a GET call to the user microservice
root@scamp1:~/istio_lab# curl 10.0.0.55:8000
 
user==>catalog:v2==>product::Able to fetch infromation from product service
For more information about RBAC, see the following link:
 
