Memory limit example

Let's walk through an example. First, we'll create a namespace to house our memory limit:

master $ kubectl create namespace low-memory-area
namespace "low-memory-area" created

Once we've created the namespace, we can create a file that defines a LimitRange object, which lets us enforce default memory limits and requests for containers in that namespace. Create a file called memory-default.yaml with the following contents:

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container

And now, we can create it in the namespace:

master $ kubectl create -f memory-default.yaml --namespace=low-memory-area
limitrange "mem-limit-range" created
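
You can verify that the LimitRange is in place with kubectl describe, which prints the default request and limit we just configured:

master $ kubectl describe limitrange mem-limit-range --namespace=low-memory-area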

Let's create a pod without a memory limit in the low-memory-area namespace and see what happens.

Create the following low-memory-pod.yaml file:

apiVersion: v1
kind: Pod
metadata:
  name: low-mem-demo
spec:
  containers:
  - name: low-mem-demo
    image: redis

Then, we can create the pod with this command:

master $ kubectl create -f low-memory-pod.yaml --namespace=low-memory-area
pod "low-mem-demo" created

Let's see whether our resource constraints were added to the pod's container configuration, even though we didn't explicitly specify them in the pod manifest. Notice the memory limits now in place! We've removed some of the informational output for readability:

master $ kubectl get pod low-mem-demo --output=yaml --namespace=low-memory-area

Here's the output of the preceding command:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: memory request for container
      low-mem-demo; memory limit for container low-mem-demo'
  creationTimestamp: 2018-09-20T01:41:40Z
  name: low-mem-demo
  namespace: low-memory-area
  resourceVersion: "1132"
  selfLink: /api/v1/namespaces/low-memory-area/pods/low-mem-demo
  uid: 52610141-bc76-11e8-a910-0242ac11006a
spec:
  containers:
  - image: redis
    imagePullPolicy: Always
    name: low-mem-demo
    resources:
      limits:
        memory: 512Mi
      requests:
        memory: 256Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-t6xqm
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: node01
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-t6xqm
    secret:
      defaultMode: 420
      secretName: default-token-t6xqm
status:
  hostIP: 172.17.1.21
  phase: Running
  podIP: 10.32.0.3
  qosClass: Burstable
  startTime: 2018-09-20T01:41:40Z
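
Note that the LimitRanger plugin only fills in values that a container leaves unspecified. If a pod declares its own requests and limits, those take precedence over the namespace defaults. As a rough sketch (the pod name and values here are illustrative, not from the example above):

apiVersion: v1
kind: Pod
metadata:
  name: explicit-mem-demo
spec:
  containers:
  - name: explicit-mem-demo
    image: redis
    resources:
      # explicit values; the LimitRange defaults won't be applied
      requests:
        memory: 128Mi
      limits:
        memory: 384Mi

Creating this pod in the low-memory-area namespace would give it a 128Mi request and a 384Mi limit, rather than the namespace defaults.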

You can delete the pod with the following command:

master $ kubectl delete pod low-mem-demo --namespace=low-memory-area
pod "low-mem-demo" deleted

There are a lot of options for configuring resource limits. If you set a default memory limit in a LimitRange but don't specify a default request, the request defaults to the same value as the limit. That will look like the following in the pod's configuration:

resources:
  limits:
    memory: 4096Mi
  requests:
    memory: 4096Mi
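
That behavior comes from a LimitRange that specifies only a default limit. A minimal sketch of such an object (the name and value are illustrative):

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-only
spec:
  limits:
  # no defaultRequest here, so requests default to the 4096Mi limit
  - default:
      memory: 4096Mi
    type: Container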

In a cluster with diverse workloads and API-driven relationships between services, it's incredibly important to set memory limits on your containers and their corresponding applications in order to prevent misbehaving applications from disrupting your cluster. Services don't implicitly know about each other's resource needs, so they're very susceptible to resource exhaustion if you don't configure limits correctly.
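
To see enforcement in action, you can run a container that deliberately allocates more memory than its limit; the kubelet will terminate it, and the pod's status will show OOMKilled. Here's a rough sketch using the polinux/stress image that the Kubernetes documentation uses for its memory exercises (the pod name and values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: mem-hog-demo
spec:
  containers:
  - name: mem-hog-demo
    image: polinux/stress
    resources:
      requests:
        memory: 50Mi
      limits:
        memory: 100Mi
    command: ["stress"]
    # illustrative: stress tries to allocate 250M, well above the 100Mi limit
    args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]

Once you're done experimenting, deleting the namespace also cleans up the LimitRange and any remaining pods:

master $ kubectl delete namespace low-memory-area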
