Service mesh deployment models

We have discussed service mesh solutions and how they are helping to realize the elusive goal of service resiliency. There are a few different deployment models:

  • Running one proxy instance per host/node
  • Leveraging the popular sidecar deployment of the service mesh proxy

Per-host proxy deployment pattern: In this deployment model, a single proxy instance runs on every host/node. As described previously, a host can be a VM or a bare-metal (BM) server; in Kubernetes terms, it is a worker node. Many services can run on a single host, and all of them send their outbound service requests to their destinations through the shared proxy instance. As shown in the following diagram, the proxy can be deployed as a DaemonSet on each of the participating hosts:
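As a sketch, the per-host pattern maps naturally onto a Kubernetes DaemonSet, which guarantees exactly one proxy pod per worker node. The manifest below is illustrative only; the proxy image, namespace, and port number are assumptions rather than details from any particular mesh product:

```yaml
# Hypothetical per-host proxy deployment (image, namespace, and port are assumptions)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: mesh-proxy
  namespace: mesh-system
spec:
  selector:
    matchLabels:
      app: mesh-proxy
  template:
    metadata:
      labels:
        app: mesh-proxy
    spec:
      hostNetwork: true        # share the node's network so local services can reach the proxy
      containers:
      - name: proxy
        image: envoyproxy/envoy:v1.27.0   # any mesh-capable proxy would do
        ports:
        - containerPort: 15001            # services on the node send their traffic here
```

Because a DaemonSet schedules one pod per node, scaling the cluster automatically scales the proxy tier with it.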

Each host runs one service and its instances, which communicate with services deployed on other hosts. The proxy acts as the intermediary between the distributed services to ensure the resiliency of service communications.

Sidecar proxy deployment pattern: In this model, one sidecar proxy is deployed per instance of every service. As mentioned previously, a microservice can have several instances to support failover and failback. This model suits deployments that use containers or Kubernetes. As a best practice, every container hosts and runs exactly one microservice; if a microservice has multiple instances, we need that many containers to host and manage them. If we deploy one sidecar proxy for each of those containers, the total number of containers is bound to escalate. The sidecar proxy must therefore have a small footprint; otherwise, performance may degrade. The alternative approach is to deploy one sidecar proxy per host so that the number of sidecar proxy containers stays low:
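A minimal sketch of the sidecar variant, assuming hypothetical image names: each service pod carries its own proxy container next to the application container, with tight resource limits to keep the sidecar's footprint small:

```yaml
# Hypothetical service pod with a sidecar proxy (image names are assumptions)
apiVersion: v1
kind: Pod
metadata:
  name: service-a
  labels:
    app: service-a
spec:
  containers:
  - name: app                      # the business microservice
    image: example/service-a:1.0   # hypothetical image
    ports:
    - containerPort: 8080
  - name: sidecar-proxy            # the mesh data-plane proxy, one per service instance
    image: envoyproxy/envoy:v1.27.0
    ports:
    - containerPort: 15001         # traffic in and out of the pod is redirected through this port
    resources:
      limits:                      # keep the sidecar's footprint small, as noted above
        memory: 128Mi
        cpu: 100m
```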

Sidecar pattern for service mesh: Services A, B, and C communicate with one another via their corresponding sidecar proxy instances. By default, the proxies handle only traffic inside the service mesh cluster, between the calling (downstream) and destination (upstream) services. To expose a service that is part of a service mesh to the outside world, you have to enable ingress traffic. Similarly, if a service depends on an external service, you may need to enable egress traffic:
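To make the ingress/egress distinction concrete, here is how Istio, one widely used open source mesh, models the two cases. The hostnames below are hypothetical; a Gateway admits outside traffic into the mesh, while a ServiceEntry registers an external dependency so egress traffic to it is permitted:

```yaml
# Ingress: expose a mesh service to the outside world (hostname is hypothetical)
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: public-gateway
spec:
  selector:
    istio: ingressgateway    # bind to Istio's standard ingress gateway workload
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "shop.example.com"
---
# Egress: register an external service the mesh is allowed to call
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-payments
spec:
  hosts:
  - payments.example.net     # hypothetical external dependency
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 443
    name: https
    protocol: TLS
```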

Any service mesh solution has to enable seamless and spontaneous interactions among all the participating microservices, and a number of vital capabilities are expected of it to accomplish service-to-service communication. The mesh topology widely used in computer networking is being replicated here in the services era to guarantee service stability, availability, and reliability. When services are individually resilient and elastic, their amalgamation becomes trustworthy and deterministic. Here is a list of the key characteristics of any standard service mesh solution.

Dynamic request routing: Routing rules and tables empower service mesh solutions to route service requests to a preferred version of a microservice running in different environments, such as development, testing, staging, and production. Dynamic request routing comes in handy for common deployment scenarios such as blue-green, canary, and A/B testing:
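For instance, a canary rollout can be expressed as a weighted routing rule. The sketch below uses Istio's VirtualService resource; the service name and subset labels are hypothetical:

```yaml
# Hypothetical canary rule: 90% of traffic to v1, 10% to the v2 canary
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews-canary
spec:
  hosts:
  - reviews              # hypothetical service name
  http:
  - route:
    - destination:
        host: reviews
        subset: v1       # stable version
      weight: 90
    - destination:
        host: reviews
        subset: v2       # canary version under evaluation
      weight: 10
```

Shifting the weights gradually from 90/10 toward 0/100 promotes the canary without redeploying either version.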

Control plane: The standard architecture of service mesh solutions has two separate planes that flexibly handle different tasks. A data-plane proxy is attached to every microservice instance. In some cases, the data plane is embedded in each pod, which typically comprises several containers that together accommodate a full-fledged application. If empowering each pod with a data-plane instance is unnecessary, every node runs a single data-plane instance instead. The control plane is the centralized monitoring and management module; it establishes and enforces policies.

Similarly, the other details the data plane needs in order to adapt to evolving situations are supplied by the control plane. We discuss the leading open source service mesh solutions and their various components in detail in the next chapter. The routing tasks of the data plane are activated and accelerated by the control plane. Similarly, service registration and discovery are performed by the control plane, which also manages the important task of load balancing:

Service discovery: In a microservices environment, each participating service has to register itself in a service registry so that other services can find it and bind to it dynamically. The service registry is a middleware application that maintains the pool of service instances so that service access and leverage become smooth and spontaneous.
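In Kubernetes, the built-in Service object plays this registry role: pods register implicitly through their labels, and consumers discover the instance pool through a stable DNS name. A minimal sketch, with hypothetical names:

```yaml
# Hypothetical registry entry: consumers resolve orders.default.svc.cluster.local
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders        # every pod carrying this label joins the instance pool
  ports:
  - port: 80           # the stable port consumers call
    targetPort: 8080   # the port the service instances actually listen on
```

Service mesh control planes typically consume this same registry rather than requiring services to register themselves explicitly.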

Load balancing: Balancing the load across services is important. Service requests are intelligently directed to service instances that are not overloaded. The combination of the control and data planes of service mesh solutions comes in handy in fulfilling this requirement and ensuring the high availability of services. There are several load balancing algorithms, and some service mesh solutions provide failure- and latency-aware load balancing capabilities.
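As one concrete example of selecting an algorithm, Istio lets you pick a load-balancing strategy per destination through a DestinationRule; the service name below is hypothetical:

```yaml
# Hypothetical policy: prefer the instance with the fewest active connections
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: orders-lb
spec:
  host: orders           # hypothetical service name
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN # one of the built-in strategies; ROUND_ROBIN and RANDOM also exist
```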

In the sidecar pattern, the functionality of the main container is considerably extended or enhanced by a sidecar container, yet there is no strong coupling between the two. Kubernetes has emerged as the key container orchestration platform, and pods are the primary building blocks of the Kubernetes specification. A sidecar container, which is a kind of utility container, is attached to a pod alongside its one or more application containers; in a service mesh, the sidecar container is the containerized version of the sidecar proxy, that is, the data plane. A sidecar container predominantly supports and empowers the main application containers. Sidecar containers are not standalone containers; they have to be paired with business-specific containers to be relevant. They are, however, highly reusable and can be attached to any number of pods and their application containers:

Here is an example of the sidecar pattern. The main container is a web server, which is empowered by a log-saver sidecar container. This sidecar collects the web server's logs from the local disk and streams them to a centralized log collector.
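The log-saver example can be sketched as a pod in which the two containers share a volume: the web server writes its logs there, and the sidecar reads the same directory and ships the logs onward. The shipper image name is hypothetical:

```yaml
# Web server plus log-saver sidecar sharing a log directory
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-saver
spec:
  volumes:
  - name: logs
    emptyDir: {}                # scratch space shared by both containers
  containers:
  - name: web                   # main container: serves traffic and writes logs locally
    image: nginx:1.25
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-saver             # sidecar: tails the shared directory, streams to a central collector
    image: example/log-shipper:1.0   # hypothetical shipper image
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
      readOnly: true            # the sidecar only reads; the web server owns the files
```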
