CPU constraints example

Let's go ahead and create another namespace in which to hold our example:

kubectl create namespace cpu-low-area

Now, let's set up a LimitRange for CPU constraints, which are measured in millicpus (millicores). Requesting 500m means you're asking for 500 millicpus or millicores, which is equivalent to 0.5 in decimal notation. Whether you request 0.5 or 500m, you're asking for half of a CPU in whatever form your platform provides it (vCPU, core, hyperthread, or vCore).
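
For example, the following two container resource fragments request the same amount of CPU (these are illustrative fragments only, not complete manifests):

resources:
  requests:
    cpu: "500m"    # 500 millicores

resources:
  requests:
    cpu: "0.5"     # half a CPU, the same as 500m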

As we did previously, let's create a LimitRange for our CPU constraints:

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-demo-range
spec:
  limits:
  - max:
      cpu: "500m"
    min:
      cpu: "300m"
    type: Container

Now, save the manifest as cpu-constraint.yaml and create the LimitRange in the namespace:

kubectl create -f cpu-constraint.yaml --namespace=cpu-low-area

Once we create the LimitRange, we can inspect it. What you'll notice is that default and defaultRequest are set to the same value as the maximum, because we didn't specify them. Kubernetes falls back to max for both:

kubectl get limitrange cpu-demo-range --output=yaml --namespace=cpu-low-area

limits:
- default:
    cpu: 500m
  defaultRequest:
    cpu: 500m
  max:
    cpu: 500m
  min:
    cpu: 300m
  type: Container

This is the intended behavior. When additional containers are scheduled in this namespace, Kubernetes first checks whether the pod specifies a request and a limit. If it doesn't, the defaults are applied. Next, the controller confirms that the CPU request is at least the lower bound in the LimitRange, 300m, and then checks the upper bound to make sure that the container is not asking for more than 500m.
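
For reference, a minimal pod manifest that satisfies these constraints might look like the following. The pod name cpu-demo-range matches the commands used below; the container name, image, and filename are assumptions for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo-range
spec:
  containers:
  - name: cpu-demo-container
    image: nginx    # image chosen only as an example
    resources:
      limits:
        cpu: "500m"
      requests:
        cpu: "300m"

Assuming the manifest is saved as cpu-demo-pod.yaml, it can be created in the same namespace:

kubectl create -f cpu-demo-pod.yaml --namespace=cpu-low-area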

You can check the container's constraints by looking at the YAML output of the pod:

kubectl get pod cpu-demo-range --output=yaml --namespace=cpu-low-area

resources:
  limits:
    cpu: 500m
  requests:
    cpu: 300m

Now, don't forget to delete the pod:

kubectl delete pod cpu-demo-range --namespace=cpu-low-area
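
If you created the cpu-low-area namespace only for this exercise, you can remove it as well; deleting the namespace also removes the LimitRange inside it:

kubectl delete namespace cpu-low-area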