What is container orchestration?

Container orchestration is the automated management, coordination, arrangement, and monitoring of computer resources so that containerized workloads can be provided to engineers with enterprise-grade quality and minimal setup time. The most popular container orchestration tools on the market are as follows:

  • Red Hat's OpenShift
  • Google's Kubernetes or AWS EKS
  • Apache Mesos and Marathon
  • CoreOS Tectonic
  • Docker Compose
  • OpenStack Magnum

Red Hat's OpenShift: Red Hat was an early entrant in the container orchestration market, and OpenShift builds on Kubernetes clusters. OpenShift is a container orchestration solution that helps deploy containerized microservices into pods. Red Hat developed a UI layer and integrated other products to provide this solution. It has the following components:

  • Web UI/dashboard/namespaces
  • Pods
  • Router
  • Services
  • Deployment management

You can configure any physical, virtual, or cloud servers under OpenShift. It uses Kubernetes nodes and Docker as the base container platform to form an OpenShift cluster. We can increase application capacity simply by increasing the pod count, which is done by modifying the replicas key-value pair in a YAML file. OpenShift provides the oc command, which is essentially a wrapper around kubectl commands.
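For example, scaling up is just a matter of raising the replicas value in the deployment configuration. The following is a minimal, illustrative fragment (the application name packtpub-app is hypothetical):

# packtpub-app deployment configuration (illustrative fragment)
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: packtpub-app
spec:
  replicas: 3        # increase this value to run more pods
  ...

The same change can be applied imperatively with oc scale dc/packtpub-app --replicas=3.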

The traffic flow in an OpenShift cluster starts from Route 53 (DNS) and continues on to an ELB (or skips this step if no load balancer is configured). Before traffic reaches OpenShift, the proxy is configured to redirect it to specific pod instances. The service module then routes your traffic to the correct pod, and finally to the container inside the pod.

The flow can be represented as follows: DNS → load balancer (optional) → proxy/router → services → pods → configured container inside the pod.
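The service and route steps of this chain can be wired up with a couple of oc commands. The following is a hedged sketch, assuming a deployment named packtpub-app listening on port 8080 (both names are hypothetical):

# Expose the deployment as an internal service
oc expose dc/packtpub-app --port=8080
# Expose the service through the router as an externally reachable route
oc expose svc/packtpub-app
# Show the generated route (the DNS name that external traffic will hit)
oc get route packtpub-app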

The following architecture diagram shows the various components of the OpenShift platform. These include the following:

  • Enterprise Container Hosts (any physical, virtual, or cloud-based server)
  • Container Orchestration and Management 
  • Application Lifecycle Management 
  • Container

This screenshot can be found at the following link: https://www.openshift.com/learn/what-is-openshift/.

An OpenShift cluster requires servers, on which it installs the required cluster components, such as Kubernetes, an internal load balancer, internal log management, security, and multi-tenancy functionality. These come as bundled software. The application lifecycle management layer is where you can use your own in-house tooling to deploy your application using the oc APIs or commands. We can also use any CI/CD tool, such as Jenkins.
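A Jenkins stage (or any other CI/CD tool) typically just drives the same oc commands. The following is a minimal sketch; the endpoint, project, and application names (openshift.example.com, packtpub-prod, packtpub-app) are placeholders:

# Log in with a service-account token rather than a user password
oc login https://openshift.example.com:8443 --token="$OC_TOKEN"
oc project packtpub-prod
# Trigger a new deployment and wait for it to finish
oc rollout latest dc/packtpub-app
oc rollout status dc/packtpub-app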

The following example code shows some of the most commonly used commands, including how to log in to your ACP (OpenShift) endpoint, access your pods, and log in to a running container:

# Packt Pub example: accessing a container running in an ACP cluster

# The following command logs you in to your OpenShift setup
oc login -u 'Shailender' -p 'YOURPASSWORD'
oc project <Project name to switch>

# The following command lists all of the containers running inside your pod,
# along with their names and resource specifications

oc describe pods packtpub-app-1-dvj4b | egrep -i "name|image:|started|mem|cpu"
oc describe pods | egrep -i "^name:"


# The following command opens a shell inside a container running under a pod

oc exec -it packtpub-app-1-dvj4b -c 'packtpub-webserver' -- bash

NOTE: packtpub-app-1-dvj4b is a randomly generated name, so it will change after each new deployment. Make sure you use the name of the currently running pod when running the preceding commands.

Take a look at the following screenshot, which shows the OpenShift dashboard:

The preceding screenshot shows almost all the features of this product. On the right-hand side, we can see all the menus that allow us to navigate to all of the previously mentioned components. The environments and projects are separated through namespaces. Here, Hello Openshift is our namespace or project name, under which we will deploy or configure our pods. In the circle, we can see the currently running pod count for a specific service that can be increased or decreased easily by pressing the up or down arrows.

We can clearly see the container names in key-value form. For example, in the postgresql pod, the value is POSTGRESQL. We can also see the image used to build each pod and the port number on which it is listening. We can automatically scale these pods using the scaling option. By default, we can only scale on a CPU threshold; there is no built-in option for scaling on memory usage or the number of network connections. If we want that functionality, we need to come up with our own custom solutions.
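The CPU-based scaling just described maps to a horizontal pod autoscaler, which can be created from the CLI as well as the dashboard. A minimal sketch, with the deployment name as a placeholder:

# Autoscale between 2 and 10 pods, targeting 75% average CPU utilization
oc autoscale dc/packtpub-app --min=2 --max=10 --cpu-percent=75
# Inspect the resulting autoscaler
oc get hpa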

Google's Kubernetes or AWS EKS: Kubernetes is the open source successor to Google's internal clustering solutions, Borg and Omega, and was donated by Google to the open source community, now governed by the CNCF, in 2014. Kubernetes is dominating the container orchestration market and far exceeds the Docker Swarm solution in terms of usage. Docker took the lead in the container market, whereas Kubernetes took the lead in the orchestration market. Kubernetes gives you the option to configure any container platform, which means you can use it to run Docker or rkt containers. Kubernetes is commonly known as K8s and is written in Go. A cluster includes both a master server and node servers (minions).

The master node runs the REST-based kube-apiserver service, which acts as the frontend to the Kubernetes cluster and consumes JSON. The internal workings of the Kubernetes cluster are handled by internal architectural components, including etcd (the cluster store), kube-controller-manager, and kube-scheduler. kubectl is the command that is used on a day-to-day basis to manage activities on this cluster.
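Typical day-to-day kubectl usage looks much like the oc examples shown earlier. A brief sketch; the namespace, pod, and deployment names are hypothetical:

kubectl get nodes                       # list cluster nodes (minions)
kubectl get pods -n my-namespace        # list pods in a namespace
kubectl describe pod my-app-1-abcde     # inspect a pod's events and spec
kubectl logs my-app-1-abcde             # view a container's logs
kubectl scale deployment my-app --replicas=3    # change the replica count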

Most third-party vendors and cloud providers have now started to provide Kubernetes as a service, and it can be found at almost all cloud vendors, including AWS, Google Cloud, Alibaba Cloud, and Microsoft Azure. It has similar components to those that we described for OpenShift:

This screenshot can be found at the following link: https://x-team.com/blog/introduction-kubernetes-architecture/.

Apache Mesos and Marathon: Marathon is a container orchestration solution for DC/OS. To deploy applications, it is common to use Mesos and Marathon in combination: Mesos manages the resource nodes, while Marathon uses its scheduler to deploy jobs. For more information, consult the following diagram:

This screenshot can be found at the following link: https://www.ericsson.com/research-blog/mesos-meetup-hosted-ericsson-research/.

CoreOS Tectonic: CoreOS developed its toolchain into a true container operating system with an orchestration solution. The following diagram shows how CoreOS combined various technologies in the Tectonic orchestration solution. The diagram contains some of the most important components, including the following:

  • The container image registry layer
  • The host layer on which your workload will run
  • The monitoring and security toolset
  • The container environment

This screenshot can be found at the following link: https://coreos.com/tectonic/.

It has the following components, which can be deployed on any cloud vendor. This helps us avoid vendor lock-in issues:

  • A Tectonic console interface
  • Prometheus as a monitoring solution
  • Kubernetes as an internal orchestration solution
  • Docker as a container engine 
  • CoreOS as the operating system

Docker Compose: Docker Compose is both an orchestrator and a command-line tool. docker-compose is used to configure your application; for example, a web server and a database server can be configured in a single file, as linked containers, and the docker-compose command is then used to run the setup. The following code shows how to create the .yml file and run the configured containers:

# vim packtpub-deployment.yml
version: '2'
services:
  packtpub-web:
    build: .
    ports:
      - "80:80"
  packtpub-postgres:
    image: "postgres:alpine"

# Run docker-compose from your project folder, pointing it at the file above
docker-compose -f packtpub-deployment.yml up

OpenStack Magnum: Another option in the private cloud setup market is OpenStack. Magnum is its container orchestration service; it builds on existing Docker and Kubernetes technologies, and you can interact with it through its APIs.

The left-hand side of the image shows you the compute capacity of your cluster, where you can configure any virtual or physical machine running on any container-supported platform. Docker runs as a container engine and containers work as specific service instances that Kubernetes or Swarm can use as worker nodes according to your scaling requirements.

On the right-hand side of the image, we can see that the Magnum components are running in conjunction with the OpenStack Heat template. Here, it interacts with OpenStack services, such as Glance, for image management; Cinder, a block storage service; Neutron, a network service; and Nova, a compute service. Whenever you require more capacity for your application, it interacts with the components on the left to spin up more Docker containers to fulfil the demand requirements through the Magnum conductor.

For cluster management, Magnum provides APIs through which we can interact with the Magnum client; an example of such a client is python-magnumclient:

This screenshot can be found at the following link: https://wiki.openstack.org/wiki/Magnum.
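As a rough sketch of how the client is used (the command shapes follow the python-magnumclient CLI, but the exact flags may vary between OpenStack releases; the template and cluster names are hypothetical):

# Install the client
pip install python-magnumclient
# List existing cluster templates and clusters
magnum cluster-template-list
magnum cluster-list
# Create a two-node cluster from an existing Kubernetes template
magnum cluster-create --name packtpub-k8s \
    --cluster-template k8s-template --node-count 2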

 
