Master

We know now that the Master is the brain of our cluster. We have the core API server, which maintains RESTful web services for querying and defining our desired cluster and workload state. It's important to note that the control plane only accesses the Master to initiate changes, not the nodes directly.

Additionally, the Master includes the scheduler. The replication controller/replica set works with the API server to ensure that the correct number of pod replicas are running at any given time. This exemplifies the desired state concept: if our replication controller/replica set defines three replicas and our actual state is two copies of the pod running, then the scheduler will be invoked to add a third pod somewhere in our cluster. The same is true if there are too many pods running in the cluster at any given time. In this way, K8s is always pushing toward the desired state.
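To make the desired state concept concrete, here is a minimal sketch of a replica set manifest that declares three replicas; the name, labels, and image are hypothetical:

    # Hypothetical ReplicaSet declaring a desired state of three replicas.
    # If only two matching pods exist, a third is created and scheduled;
    # if four exist, one is terminated.
    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: web-rs                # hypothetical name
    spec:
      replicas: 3                 # the desired state
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.25     # hypothetical image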

As discussed previously, we'll look more closely into each of the Master components. kube-apiserver has the job of providing the API for the cluster, acting as the front end of the control plane that the Master provides. In fact, the apiserver is exposed through a service specifically called kubernetes, and the API server itself is run by the kubelet as a static pod. That pod is configured via the kube-apiserver.yaml file, which lives in /etc/kubernetes/manifests/ on every Master node within your cluster.
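As a rough sketch, and with the caveat that the exact image, flags, and paths vary by Kubernetes version and installer, a heavily trimmed kube-apiserver.yaml static pod manifest looks something like this:

    # Trimmed sketch of /etc/kubernetes/manifests/kube-apiserver.yaml.
    # Real manifests carry many more flags (certificates, admission
    # plugins, and so on); the version tag here is illustrative.
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: k8s.gcr.io/kube-apiserver:v1.18.0
        command:
        - kube-apiserver
        - --etcd-servers=https://127.0.0.1:2379   # the cluster state store
        - --secure-port=6443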

kube-apiserver is a key piece of high availability in Kubernetes and, as such, it's designed to scale horizontally. We'll discuss how to construct highly available clusters later in this book, but suffice it to say that you'll need to run the kube-apiserver container on several Master nodes and place a load balancer in front of them.
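Clients then target the load balancer rather than any single Master. For example, a kubeconfig cluster entry might point at a hypothetical load-balanced address like this:

    # Hypothetical kubeconfig fragment: the server URL is the load
    # balancer's DNS name, which fronts every kube-apiserver replica.
    apiVersion: v1
    kind: Config
    clusters:
    - name: ha-cluster                          # hypothetical name
      cluster:
        server: https://api.example.com:6443    # load balancer, not one Master
        certificate-authority: /etc/kubernetes/pki/ca.crt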

Since we've already gone into some detail about the cluster state store, it's enough to note here that an instance of etcd runs on each of the Master nodes.
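On kubeadm-provisioned clusters, for example, etcd also runs as a static pod on each Master; the following heavily trimmed manifest is an illustrative sketch, not a definitive configuration:

    # Trimmed sketch of an etcd static pod manifest, as found at
    # /etc/kubernetes/manifests/etcd.yaml on a kubeadm cluster.
    apiVersion: v1
    kind: Pod
    metadata:
      name: etcd
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: etcd
        image: k8s.gcr.io/etcd:3.4.3            # version is illustrative
        command:
        - etcd
        - --data-dir=/var/lib/etcd              # where cluster state persists
        - --listen-client-urls=https://127.0.0.1:2379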

The next piece of the puzzle is kube-scheduler, which makes sure that every pod is assigned to a node for operation. The scheduler works with the API server to schedule workloads, in the form of pods, on the actual minion nodes. These pods include the various containers that make up our application stacks. By default, the basic Kubernetes scheduler spreads pods across the cluster and uses different nodes for matching pod replicas. Kubernetes also lets you specify, for each container, necessary resources, hardware and software policy constraints, affinity or anti-affinity rules, and data volume locality, so scheduling can be altered by these additional factors.
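To make those scheduling inputs concrete, here is a hedged sketch of a pod spec declaring resource requests and a node affinity rule; the disktype label and the amounts are hypothetical:

    # Hypothetical pod spec showing two scheduler inputs: resource
    # requests and a required node affinity on a made-up disktype label.
    apiVersion: v1
    kind: Pod
    metadata:
      name: data-pod
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: disktype               # hypothetical node label
                operator: In
                values: ["ssd"]
      containers:
      - name: app
        image: nginx:1.25                   # hypothetical image
        resources:
          requests:
            cpu: "500m"                     # the scheduler only places the pod
            memory: "256Mi"                 # on a node with this much free capacity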

The last two main pieces of the Master nodes are kube-controller-manager and cloud-controller-manager. As you might have guessed based on their names, while both of these services play an important part in container orchestration and scheduling, kube-controller-manager helps to orchestrate core internal components of Kubernetes, while cloud-controller-manager interacts with different vendors and their cloud provider APIs.

kube-controller-manager is actually a Kubernetes daemon that embeds the core control loops, otherwise known as controllers, that are included with Kubernetes:

  • The Node controller, which tracks node availability and responds when nodes go down
  • The Replication controller, which ensures that each replication controller object in the system has the correct number of pods
  • The Endpoints controller, which maintains endpoint records in the API, thereby managing DNS resolution for the pods backing any service that defines selectors (see the service sketch that follows)

In order to reduce the complexity of the controller components, they're all packaged and shipped within this single daemon, kube-controller-manager.
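For instance, the Endpoints controller springs into action whenever a service defines a selector; here is a minimal, hypothetical service to illustrate:

    # Hypothetical Service with a selector. The Endpoints controller
    # watches services like this and writes the IPs of matching pods
    # into an Endpoints object, which backs the service's DNS name.
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web          # pods carrying this label become endpoints
      ports:
      - port: 80
        targetPort: 8080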

cloud-controller-manager, on the other hand, pays attention to external components, and runs controller loops that are specific to the cloud provider your cluster is using. The original intent of this design was to decouple the internal development of Kubernetes from cloud-specific vendor code. This was accomplished through the use of plugins, which prevents Kubernetes from relying on code that is not inherent to its value proposition. We can expect future releases of Kubernetes to move vendor-specific code completely out of the Kubernetes code base; that code will then be maintained by the vendors themselves and called on by the Kubernetes cloud-controller-manager. This design removes the need for several pieces of Kubernetes, namely the kubelet, the Kubernetes controller manager, and the API server, to communicate with the cloud provider directly.
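As an illustrative sketch only, cloud-controller-manager is typically run on the Master nodes with the vendor selected via the --cloud-provider flag; the image, provider name, and paths below are assumptions, not a definitive deployment:

    # Trimmed, illustrative manifest for cloud-controller-manager; the
    # image, provider, and flags vary by vendor and Kubernetes version.
    apiVersion: v1
    kind: Pod
    metadata:
      name: cloud-controller-manager
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: cloud-controller-manager
        image: k8s.gcr.io/cloud-controller-manager:v1.18.0  # illustrative
        command:
        - cloud-controller-manager
        - --cloud-provider=gce      # which vendor's control loops to run
        - --kubeconfig=/etc/kubernetes/controller-manager.conf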
