How it works...

The recipes in this section showed you how to schedule pods on preferred locations, sometimes based on complex requirements.

In the Labeling nodes recipe, in Step 1, you can see that some standard labels have been applied to your nodes already. Here is a short explanation of what they mean and where they are used:

  • kubernetes.io/arch: This comes from the runtime.GOARCH parameter and is applied to nodes to identify where to run different architecture container images, such as x86, arm, arm64, ppc64le, and s390x, in a mixed architecture cluster.
  • kubernetes.io/instance-type: This is only useful if your cluster is deployed on a cloud provider. Instance types tell us a lot about the platform, especially for AI and machine learning workloads where you need to run some pods on instances with GPUs or faster storage options.
  • kubernetes.io/os: This is applied to nodes and comes from runtime.GOOS. It is probably less useful unless you have Linux and Windows nodes in the same cluster.
  • failure-domain.beta.kubernetes.io/region and /zone: These are also more useful if your cluster is deployed on a cloud provider or your infrastructure is spread across different failure domains. In a data center, they can be used to mark rack locations so that you can schedule pods on separate racks for higher availability. (In newer Kubernetes versions, these labels are replaced by topology.kubernetes.io/region and topology.kubernetes.io/zone.)
  • kops.k8s.io/instancegroup=nodes: This is the node label that's set to the name of the instance group. It is only used with kops clusters.
  • kubernetes.io/hostname: Shows the hostname of the worker.
  • kubernetes.io/role: This shows the role of the node in the cluster. Common values include node, which represents a worker node, and master, which indicates a control-plane node that is tainted by default so that regular workloads are not scheduled on it.
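A pod can target one of these standard labels directly with a nodeSelector. As a minimal illustration (the pod name and image here are only placeholders), the following spec restricts scheduling to arm64 nodes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: arm64-only        # placeholder name
spec:
  containers:
  - name: app
    image: nginx          # placeholder image
  nodeSelector:
    kubernetes.io/arch: arm64   # only nodes carrying this label are eligible
```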

In the Assigning pods to nodes using node and inter-pod affinity recipe, in Step 3, the node affinity rule says that the pod can only be placed on a node with a label whose key is environment and whose value is production.
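In the PodSpec, that hard requirement looks roughly like this (field names follow the standard node affinity API):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: environment      # node label key that must be present
          operator: In
          values:
          - production          # node label value that must match
```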

In Step 4, the affinity key/value requirement is preferred rather than required (preferredDuringSchedulingIgnoredDuringExecution). The weight field can be a value between 1 and 100. For every node that satisfies a preference, the scheduler adds that preference's weight to the node's score; the nodes with the highest total score are preferred.
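A soft preference of this kind is expressed as a weighted list. In this sketch, the weight of 50 is an arbitrary example value:

```yaml
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 50              # any value from 1 to 100; higher means stronger preference
      preference:
        matchExpressions:
        - key: environment
          operator: In
          values:
          - production
```

Unlike the required form, a pod with only this preference can still be scheduled on a node without the environment=production label if no matching node is available.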

Another detail that's used here is the In operator. Node affinity supports the following operators: In, NotIn, Exists, DoesNotExist, Gt, and Lt. You can read more about the operators by looking at the Scheduler affinities through examples link mentioned in the See also section.
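For example, Gt and Lt compare label values as integers, which is useful for numeric labels. The label key below is hypothetical and only serves to illustrate the operator:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: example.com/cpu-cores   # hypothetical numeric node label
          operator: Gt
          values:
          - "8"                        # only nodes labeled with a value greater than 8 match
```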

If selector and affinity rules are not well planned, they can easily prevent pods from being scheduled on your nodes. Keep in mind that if you specify both nodeSelector and nodeAffinity rules, both requirements must be met for the pod to be scheduled on the available nodes.

In Step 5, inter-pod affinity is used (podAffinity) to satisfy the requirement in PodSpec. In this recipe, podAffinity is requiredDuringSchedulingIgnoredDuringExecution. Here, matchExpressions says that a pod can only run on nodes where failure-domain.beta.kubernetes.io/zone matches the nodes where other pods with the app: mongodb label are running.
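Sketched in the PodSpec, that inter-pod affinity rule looks roughly like this; the topologyKey tells the scheduler which node label defines "the same place":

```yaml
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - mongodb                 # co-locate with pods carrying app=mongodb
      topologyKey: failure-domain.beta.kubernetes.io/zone   # "same place" = same zone
```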

In Step 6, the requirement is satisfied with podAntiAffinity using preferredDuringSchedulingIgnoredDuringExecution. Here, matchExpressions says that a pod should not run on nodes where failure-domain.beta.kubernetes.io/zone matches the nodes where other pods with the app: todo-dev label are running. The preference is given the maximum strength by setting weight to 100.
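The anti-affinity preference can be sketched as follows; note that in the preferred form, each term is wrapped in a podAffinityTerm alongside its weight:

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100               # maximum weight: avoid co-location as strongly as possible
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - todo-dev          # spread away from pods carrying app=todo-dev
        topologyKey: failure-domain.beta.kubernetes.io/zone
```

Because this is a preference rather than a hard rule, the scheduler will still place the pod in an occupied zone if no other zone has capacity.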
