Challenges

Scheduling services can quickly become complicated, especially as our application and infrastructure grow. We need to balance efficiency, isolation, scalability, and performance, all while accounting for each application’s varying requirements. We need a service that can automate these decisions, and all the small complexities involved, in determining which machines in our cluster should run each instance of a service.

Efficiency/Density

Efficiency measures how well our infrastructure schedules services based on the resources available. In an ideal environment, services would be evenly distributed across multiple servers and there would be no wasted resources.

In the real world, each service has its own unique resource requirements, and not all nodes provide the same resources. One service might have low processor and memory requirements but need large amounts of storage. Another might need only a small amount of storage, but with high throughput: solid-state storage or a RAM disk rather than a traditional hard drive.

The scheduler needs to quickly identify the optimal placement of our services alongside other services on nodes in the cluster. On top of this, it must constantly account for resources that change as hardware is provisioned and nodes fail.

Isolation

In contrast to the distributed nature of scheduling, services depend heavily on isolation. Our services are designed to be created, deployed, and destroyed repeatedly without affecting the performance or availability of other services. Although services can communicate with one another, removing isolation and creating dependencies between them essentially defeats the purpose of a microservices architecture.

As an example, container-based solutions such as Docker use the Linux kernel’s cgroups feature to control resource consumption by specific processes. They also make use of kernel namespaces, which limit the scope of a process. This can greatly improve fault and resource isolation of services in a microservices architecture. In the event of an unexpected failure, a single service would not compromise the entire node.
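The cgroup v2 interface expresses these limits as values written to control files such as memory.max and cpu.max. The file names and value formats below come from cgroup v2; the helper functions themselves are a hypothetical sketch:

```python
# Illustrative helpers that translate human-readable limits into the
# string values the Linux cgroup v2 control files expect.
# (memory.max and cpu.max are real cgroup v2 files; these helper
# functions and their names are illustrative assumptions.)

_UNITS = {"K": 1024, "M": 1024**2, "G": 1024**3}

def memory_max(limit: str) -> str:
    """'512M' -> byte-count string suitable for memory.max."""
    if limit[-1] in _UNITS:
        return str(int(limit[:-1]) * _UNITS[limit[-1]])
    return limit  # already a byte count, or "max" for unlimited

def cpu_max(cores: float, period_us: int = 100_000) -> str:
    """0.5 cores -> '50000 100000' for cpu.max: the group may use
    the quota (microseconds of CPU) within each period."""
    return f"{int(cores * period_us)} {period_us}"
```

A container runtime performs this kind of translation for us; options such as Docker’s memory and CPU limits ultimately become values in the process group’s cgroup control files.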

Scalability

As application complexity grows, so does the complexity of the data center. Not only do we need to design our infrastructure around existing services, but we also need to consider how our infrastructure will scale to meet the demands of future services. The scheduler might need to manage a growing number of machines. Some schedulers can even grow and shrink the pool of virtual machines to match demand.

Performance

Performance problems can be indicative of a poor scheduling solution. The scheduler has to manage an extremely dynamic environment in which resources are changing, the services running on those resources are changing, and the load on those services is changing all the time. This can be complex, and maintaining optimal performance often requires a good monitoring solution.

Identifying the optimal resource for a task can take time, and sometimes it is more important that the task be scheduled quickly, to respond to an increase in demand or to a node failure.
