Orchestration

In the context of infrastructure and systems management, orchestration is a general term that usually covers cluster management, the scheduling of compute tasks, and the provisioning and de-provisioning of host machines. It includes automating resource allocation and distribution, with the goal of streamlining how computing tasks are built, deployed, and destroyed. In this case, the tasks we’re referring to are microservices, such as those deployed in Docker containers.

In this context, orchestration consists of node provisioning, cluster management, and container scheduling.

Provisioning is the process of bringing new nodes online and getting them ready to perform work. In addition to creating the virtual machine, this often involves initializing the node with cluster management software and adding it to a cluster. Provisioning also commonly includes resources other than compute, such as networking, data storage, monitoring, and other cloud provider services. Provisioning can be triggered manually by an administrator or by an automated scaling process that increases the size of the cluster pool.
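
To make the flow concrete, here is a minimal Python sketch of the provisioning steps, using an in-memory stand-in for the cloud provider and cluster. The Node, Cluster, and provision_node names are illustrative assumptions, not part of any real SDK.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    address: str
    cpu: int           # cores available for work
    memory_gb: int
    ready: bool = False

@dataclass
class Cluster:
    nodes: List[Node] = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        # In a real system this step installs the cluster agent on the node
        # and exchanges a join token before the node is marked ready.
        node.ready = True
        self.nodes.append(node)

def provision_node(cluster: Cluster, address: str, cpu: int = 4, memory_gb: int = 16) -> Node:
    # 1. Create the compute resource (the virtual machine).
    node = Node(address=address, cpu=cpu, memory_gb=memory_gb)
    # 2. Attach supporting resources here: networking, storage, monitoring.
    # 3. Join the node to the cluster so the scheduler can place work on it.
    cluster.add_node(node)
    return node

cluster = Cluster()
provision_node(cluster, "10.0.0.11")
provision_node(cluster, "10.0.0.12", cpu=8, memory_gb=32)
print([n.address for n in cluster.nodes if n.ready])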

Cluster management involves sending tasks to nodes, adding and removing nodes, and managing active processes. Typically, at least one machine acts as the cluster manager; these machines are responsible for delegating tasks, detecting failures, and synchronizing changes to the state of the application. Cluster management is very closely related to scheduling, and in many cases the same tool handles both.
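
One common way a cluster manager identifies failures is with heartbeats and a timeout. The Python sketch below shows that idea in its simplest form; real managers use more robust membership protocols (gossip, leader election), and the timeout and addresses here are illustrative assumptions.

import time
from typing import Dict, List

HEARTBEAT_TIMEOUT = 10.0  # seconds of silence before a node is considered failed

last_heartbeat: Dict[str, float] = {}

def record_heartbeat(node_address: str) -> None:
    # Each node agent reports in periodically; the manager records the time.
    last_heartbeat[node_address] = time.monotonic()

def failed_nodes() -> List[str]:
    # Nodes that have gone quiet for longer than the timeout are marked failed,
    # and the manager can reschedule their tasks onto healthy nodes.
    now = time.monotonic()
    return [addr for addr, seen in last_heartbeat.items() if now - seen > HEARTBEAT_TIMEOUT]

record_heartbeat("10.0.0.11")
record_heartbeat("10.0.0.12")
for addr in failed_nodes():
    print(f"node {addr} missed heartbeats; rescheduling its tasks")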

Scheduling is the process of running specific application tasks and services on specific nodes in the cluster. Scheduling defines how a service should be executed. Schedulers are responsible for comparing the service definition with the resources available in the cluster and determining the best way to run the service. Schedulers integrate closely with cluster managers because they need to be aware of each host and its available resources. For a more detailed explanation, see the section on Scheduling.
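
As a rough illustration of how a scheduler compares a service definition against available resources, the following Python sketch places each replica on the node with the most free CPU that can fit it. This is a simplified placement strategy under assumed resource figures; real schedulers weigh many more constraints, such as affinity rules, health, and port conflicts.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class NodeResources:
    address: str
    free_cpu: float
    free_memory_gb: float

@dataclass
class ServiceDefinition:
    name: str
    cpu: float          # per-replica requirements
    memory_gb: float
    replicas: int

def schedule(service: ServiceDefinition, nodes: List[NodeResources]) -> Dict[str, str]:
    # Place each replica on the candidate node with the most free CPU.
    placements: Dict[str, str] = {}
    for replica in range(service.replicas):
        candidates = [n for n in nodes
                      if n.free_cpu >= service.cpu and n.free_memory_gb >= service.memory_gb]
        if not candidates:
            raise RuntimeError(f"no node can fit replica {replica} of {service.name}")
        chosen = max(candidates, key=lambda n: n.free_cpu)
        chosen.free_cpu -= service.cpu
        chosen.free_memory_gb -= service.memory_gb
        placements[f"{service.name}-{replica}"] = chosen.address
    return placements

nodes = [NodeResources("10.0.0.11", 4, 16), NodeResources("10.0.0.12", 8, 32)]
print(schedule(ServiceDefinition("web", cpu=2, memory_gb=4, replicas=3), nodes))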


Service Connectivity

Although service discovery, application gateways, and network overlays are not strictly part of orchestration, we will cover them in this chapter because they are closely related to setting up a cluster and running services within it.


Service Discovery and Application Gateways are used to route traffic to the service instances deployed in the cluster. We typically have multiple instances of our service running on nodes within the cluster, as determined by the scheduler, and we need a way to discover their location and route traffic to them. Some container orchestration tools will provide this functionality.
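
As a simple illustration of the idea, the Python sketch below keeps an in-memory registry of service instances and resolves each request to the next instance in round-robin order. The service names and addresses are made up for the example; production systems keep the registry in a distributed store that the orchestrator updates as instances start, move, or stop.

import itertools

# In-memory stand-in for a service registry: service name -> instance addresses.
registry = {
    "users-service": ["10.0.0.11:8080", "10.0.0.12:8080"],
    "orders-service": ["10.0.0.12:9090"],
}

# One round-robin iterator per service spreads requests across its instances.
round_robin = {name: itertools.cycle(addresses) for name, addresses in registry.items()}

def resolve(service_name: str) -> str:
    # A gateway or client-side load balancer would call this before each request.
    return next(round_robin[service_name])

for _ in range(3):
    print(resolve("users-service"))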

Overlay networks enable us to create a single network that sits on top of multiple underlying networks. We can deploy containers and services to the overlay network, and they can communicate with other containers on the same overlay without having to worry about the complexities of the networks beneath.

Let’s start with provisioning and bootstrapping the virtual machine with the necessary cluster management and orchestration tools.
