Assigning pods to nodes using node and inter-pod affinity

In this recipe, we will learn how to expand the constraints we expressed in the previous recipe, Assigning pods to labeled nodes using nodeSelector, using the affinity and anti-affinity features.

Let's use a scenario-based approach to walk through the different affinity selector options. We will take the previous example, but this time with more complicated requirements:

  • todo-prod must be scheduled on a node with the environment:production label, and scheduling should fail if no such node is available.
  • todo-prod should run on a node that is labeled with failure-domain.beta.kubernetes.io/zone=us-east-1a or us-east-1b, but can run anywhere if the label requirement cannot be satisfied.
  • todo-prod must run in the same zone as mongodb, but should not run in the zone where todo-dev is running.
The requirements listed here are only examples that illustrate some of the affinity definition functionality. This is not the ideal way to configure this specific application, and the labels may be completely different in your environment.

The preceding scenario will cover both types of node affinity options (requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution). You will see these options later in our example. Let's get started:

  1. Create a copy of the Helm chart we used in the Manually scaling an application recipe to a new directory called todo-prod. We will edit the chart files later in order to specify the affinity rules:
$ cd src/chapter7/charts
$ mkdir todo-prod
$ cp -a node/* todo-prod/
$ cd todo-prod
  2. Edit the values.yaml file. To access it, use the following command:
$ vi values.yaml
  3. Replace the last line, affinity: {}, with the following code. This change will satisfy the first requirement we defined previously, meaning that the pod can only be placed on a node that has an environment label with the value production:
## Affinity for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
# affinity: {}
# Start of the affinity addition #1
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: environment
          operator: In
          values:
          - production
# End of the affinity addition #1
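
If none of your nodes carries the environment=production label yet, the required rule shown here will leave the pod stuck in the Pending state. As a quick check, you can label one of your nodes manually and verify it; the node name here is a placeholder for one of your own:
$ kubectl label nodes <your-node-name> environment=production
$ kubectl get nodes -l environment=production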

You can also specify more than one matchExpressions entry under nodeSelectorTerms. In that case, the pod can only be scheduled onto a node that satisfies all of the matchExpressions, which may reduce your chances of successful scheduling.
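
For instance, as a minimal sketch, a single nodeSelectorTerms entry with two matchExpressions requires a node that matches both expressions; the disktype label here is a hypothetical example:
requiredDuringSchedulingIgnoredDuringExecution:
  nodeSelectorTerms:
  - matchExpressions:
    - key: environment
      operator: In
      values:
      - production
    - key: disktype    # hypothetical second label; both expressions must match (logical AND)
      operator: In
      values:
      - ssd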

Although it may not be practical on large clusters, you can also schedule a pod on a specific node using the nodeName setting rather than nodeSelector and labels. In this case, add nodeName: yournodename to your deployment manifest in place of the nodeSelector setting.
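
A minimal sketch of this approach, assuming your cluster has a node named worker-1 and using a placeholder image:
apiVersion: v1
kind: Pod
metadata:
  name: todo-prod
spec:
  nodeName: worker-1    # assumed node name; placement is fixed to this node
  containers:
  - name: todo
    image: node:12      # placeholder image
Note that setting nodeName bypasses the scheduler entirely, so affinity rules are not evaluated for such a pod.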
  4. Now, add the following lines right under the preceding code addition. This addition will satisfy the second requirement we defined, meaning that nodes with the failure-domain.beta.kubernetes.io/zone label whose value is us-east-1a or us-east-1b will be preferred:
          - production
# End of the affinity addition #1
# Start of the affinity addition #2
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: failure-domain.beta.kubernetes.io/zone
          operator: In
          values:
          - us-east-1a
          - us-east-1b
# End of the affinity addition #2
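
Before relying on the zone labels, you can check which zone each of your nodes actually carries (cloud providers typically set this label automatically). The -L flag prints the label as an extra column:
$ kubectl get nodes -L failure-domain.beta.kubernetes.io/zone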
  5. For the third requirement, we will use the inter-pod affinity and anti-affinity functionalities. These let us constrain which nodes our pod is eligible to be scheduled on based on the labels of pods already running on those nodes, rather than on the labels of the nodes themselves. The following podAffinity requiredDuringSchedulingIgnoredDuringExecution rule will look for nodes where a pod with the app: mongodb label exists and use failure-domain.beta.kubernetes.io/zone as the topology key to determine where the pod is allowed to be scheduled:
          - us-east-1b
# End of the affinity addition #2
# Start of the affinity addition #3a
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - mongodb
      topologyKey: failure-domain.beta.kubernetes.io/zone
# End of the affinity addition #3a
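
After deployment, you can confirm which node, and therefore which zone, the MongoDB pod landed on, assuming it carries the app: mongodb label used by the rule above; the wide output includes the assigned node name:
$ kubectl get pods -l app=mongodb -o wide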
  6. Add the following lines to complete the requirements. This time, the podAntiAffinity preferredDuringSchedulingIgnoredDuringExecution rule will look for nodes where a pod with the app: todo-dev label exists and use failure-domain.beta.kubernetes.io/zone as the topology key:
      topologyKey: failure-domain.beta.kubernetes.io/zone
# End of the affinity addition #3a
# Start of the affinity addition #3b
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - todo-dev
        topologyKey: failure-domain.beta.kubernetes.io/zone
# End of the affinity addition #3b
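
Putting the four additions together, the complete affinity section in values.yaml should now read as follows:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: environment
          operator: In
          values:
          - production
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: failure-domain.beta.kubernetes.io/zone
          operator: In
          values:
          - us-east-1a
          - us-east-1b
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - mongodb
      topologyKey: failure-domain.beta.kubernetes.io/zone
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - todo-dev
        topologyKey: failure-domain.beta.kubernetes.io/zone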
  7. Edit the Chart.yaml file and change the chart name to its folder name. In this recipe, it's called todo-prod. After making these changes, the first two lines should look as follows:
apiVersion: v1
name: todo-prod
...
  8. Update the Helm dependencies and build them. The following commands will pull all the dependencies and build the Helm chart:
$ helm dep update && helm dep build

  9. Examine the chart for issues. If there are any issues with the chart files, the linting process will bring them up; otherwise, no failures should be found:
$ helm lint .
==> Linting .
Lint OK
1 chart(s) linted, no failures
  10. Install the To-Do application example using the following command. This Helm chart will deploy two pods, a Node.js service and a MongoDB service, this time following the detailed requirements we defined at the beginning of this recipe:
$ helm install . --name my-app7-prod --set serviceType=LoadBalancer
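
To see where the scheduler actually placed each pod, list the release's pods with the wide output, which includes the assigned node:
$ kubectl get pods -o wide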
  11. Check that all the pods have been scheduled on nodes labeled environment: production using the following command. You should find the my-app7-prod-todo-prod pod running on those nodes:
$ for n in $(kubectl get nodes -l environment=production --no-headers | cut -d " " -f1); do kubectl get pods --all-namespaces --no-headers --field-selector spec.nodeName=${n} ; done
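
If the todo-prod pod stays in the Pending state instead, the required affinity rules could not be satisfied; describing the pod shows the scheduler's reasoning under the Events section (the pod name here is a placeholder for your own):
$ kubectl describe pod <pod-name>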

In this recipe, you learned about advanced pod scheduling practices while using a number of primitives in Kubernetes, including nodeSelector, node affinity, and inter-pod affinity. Now, you will be able to configure a set of applications that are co-located in the same defined topology or scheduled in different zones so that you have better service-level agreement (SLA) times.
