Node pool

A node pool is a set of instances in GCP that share the same configuration. When we launch a cluster with the gcloud command, we pass --num-nodes=3 along with the rest of the arguments; three instances are then launched inside the same pool, sharing the same configuration:

# gcloud compute instance-groups list
NAME                                          LOCATION       SCOPE  NETWORK      MANAGED  INSTANCES
gke-my-k8s-cluster-default-pool-36121894-grp  us-central1-a  zone   k8s-network  Yes      3
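
The create command itself is not shown in this section; a minimal sketch, assuming the machine type, network, tags, and scopes used elsewhere in this recipe, would look like the following:

// launch a three-node cluster (flag values other than --num-nodes are assumptions based on this recipe)
# gcloud container clusters create my-k8s-cluster --num-nodes 3 --zone us-central1-a --machine-type f1-micro --network k8s-network --tags private --scopes=storage-rw,compute-ro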

Assume a heavy peak time is expected for your service. As a Kubernetes administrator, you might want to resize the node pool inside your cluster:

# gcloud container clusters resize my-k8s-cluster --size 5 --zone us-central1-a --node-pool default-pool
Pool [default-pool] for [my-k8s-cluster] will be resized to 5.
Do you want to continue (Y/n)? y
Resizing my-k8s-cluster...done.
Updated [https://container.googleapis.com/v1/projects/kubernetes-cookbook/zones/us-central1-a/clusters/my-k8s-cluster].
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-my-k8s-cluster-default-pool-36121894-04rv Ready <none> 6h v1.9.2-gke.1
gke-my-k8s-cluster-default-pool-36121894-71wg Ready <none> 6h v1.9.2-gke.1
gke-my-k8s-cluster-default-pool-36121894-8km3 Ready <none> 39s v1.9.2-gke.1
gke-my-k8s-cluster-default-pool-36121894-9j9p Ready <none> 31m v1.9.2-gke.1
gke-my-k8s-cluster-default-pool-36121894-9jmv Ready <none> 36s v1.9.2-gke.1

The resize command lets you scale both out and in. If the node count after resizing is lower than before, the scheduler migrates the pods onto the remaining available nodes.
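
Scaling in works the same way, just with a smaller target size; a sketch that shrinks the same pool back down:

// scale the default pool back in; pods on removed nodes are rescheduled onto the remaining nodes
# gcloud container clusters resize my-k8s-cluster --size 3 --zone us-central1-a --node-pool default-pool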

You can set compute resource boundaries for each container in the pod spec by declaring requests and limits. Assume we have a super nginx deployment which requires 1024 MB of memory:

# cat super-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: super-nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:
            memory: 1024Mi
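
Limits can be declared alongside requests in the same resources block; a minimal sketch (the limit value here is illustrative, not part of this recipe):

        resources:
          requests:
            memory: 1024Mi
          limits:
            memory: 1280Mi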

// create super nginx deployment
# kubectl create -f super-nginx.yaml
deployment "super-nginx" created

# kubectl get pods
NAME READY STATUS RESTARTS AGE
super-nginx-df79db98-5vfmv 0/1 Pending 0 10s
# kubectl describe po super-nginx-df79db98-5vfmv
Name: super-nginx-df79db98-5vfmv
Namespace: default
Node: <none>
Labels: app=nginx
pod-template-hash=89358654
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container nginx
Status: Pending
IP:
Controlled By: ReplicaSet/super-nginx-df79db98
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 11s (x5 over 18s) default-scheduler 0/5 nodes are available: 5 Insufficient memory.
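
To see how short the nodes fall, we can check the allocatable resources on any of the default-pool nodes (the grep filter is just a convenience to trim the output):

// inspect allocatable resources on one of the f1-micro nodes
# kubectl describe node gke-my-k8s-cluster-default-pool-36121894-04rv | grep -A 5 Allocatable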

The node size we created is f1-micro, which only has 0.6 GB of memory per node. This means the scheduler will never find a node with sufficient memory to run super-nginx. In this case, we can add nodes with more memory to the cluster by creating another node pool. We'll use n1-standard-1 as an example, which has 3.75 GB of memory:

// create a node pool named larger-mem-pool with n1-standard-1 instance type
# gcloud container node-pools create larger-mem-pool --cluster my-k8s-cluster --machine-type n1-standard-1 --num-nodes 2 --tags private --zone us-central1-a --scopes=storage-rw,compute-ro
...
Creating node pool larger-mem-pool...done.
Created [https://container.googleapis.com/v1/projects/kubernetes-cookbook/zones/us-central1-a/clusters/my-k8s-cluster/nodePools/larger-mem-pool].
NAME MACHINE_TYPE DISK_SIZE_GB NODE_VERSION
larger-mem-pool n1-standard-1 100 1.9.2-gke.1

// check node pools
# gcloud container node-pools list --cluster my-k8s-cluster --zone us-central1-a
NAME MACHINE_TYPE DISK_SIZE_GB NODE_VERSION
default-pool f1-micro 100 1.9.2-gke.1
larger-mem-pool n1-standard-1 100 1.9.2-gke.1

// check current nodes
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-my-k8s-cluster-default-pool-36121894-04rv Ready <none> 7h v1.9.2-gke.1
gke-my-k8s-cluster-default-pool-36121894-71wg Ready <none> 7h v1.9.2-gke.1
gke-my-k8s-cluster-default-pool-36121894-8km3 Ready <none> 9m v1.9.2-gke.1
gke-my-k8s-cluster-default-pool-36121894-9j9p Ready <none> 40m v1.9.2-gke.1
gke-my-k8s-cluster-default-pool-36121894-9jmv Ready <none> 9m v1.9.2-gke.1
gke-my-k8s-cluster-larger-mem-pool-a51c8da3-f1tb Ready <none> 1m v1.9.2-gke.1
gke-my-k8s-cluster-larger-mem-pool-a51c8da3-scw1 Ready <none> 1m v1.9.2-gke.1

Looks like we have two more powerful nodes. Let's see the status of our super nginx:

# kubectl get pods
NAME READY STATUS RESTARTS AGE
super-nginx-df79db98-5vfmv 1/1 Running 0 23m

It's running! The Kubernetes scheduler will always try to find a node with sufficient resources for a pod. In this case, two new nodes that can fulfill the resource requirement have been added to the cluster, so the pod is scheduled and runs:

// check the event of super nginx
# kubectl describe pods super-nginx-df79db98-5vfmv
...
Events:
Warning FailedScheduling 3m (x7 over 4m) default-scheduler 0/5 nodes are available: 5 Insufficient memory.
Normal Scheduled 1m default-scheduler Successfully assigned super-nginx-df79db98-5vfmv to gke-my-k8s-cluster-larger-mem-pool-a51c8da3-scw1
Normal SuccessfulMountVolume 1m kubelet, gke-my-k8s-cluster-larger-mem-pool-a51c8da3-scw1 MountVolume.SetUp succeeded for volume "default-token-bk8p2"
Normal Pulling 1m kubelet, gke-my-k8s-cluster-larger-mem-pool-a51c8da3-scw1 pulling image "nginx"
Normal Pulled 1m kubelet, gke-my-k8s-cluster-larger-mem-pool-a51c8da3-scw1 Successfully pulled image "nginx"
Normal Created 1m kubelet, gke-my-k8s-cluster-larger-mem-pool-a51c8da3-scw1 Created container
Normal Started 1m kubelet, gke-my-k8s-cluster-larger-mem-pool-a51c8da3-scw1 Started container

From the pod's events, we can trace the path it took: originally it could not be scheduled because no node had sufficient resources, and eventually it was assigned to the new node named gke-my-k8s-cluster-larger-mem-pool-a51c8da3-scw1.

To express a preference for scheduling pods onto certain nodes, Kubernetes provides nodeSelector. You can either use built-in node labels, such as beta.kubernetes.io/instance-type: n1-standard-1, in the pod spec, or use customized labels to achieve this. For more information, please refer to https://kubernetes.io/docs/concepts/configuration/assign-pod-node.
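
For instance, pinning super-nginx onto the larger nodes only requires a nodeSelector in the pod template; a minimal sketch using the built-in instance-type label mentioned above:

// excerpt of the pod template spec in super-nginx.yaml
    spec:
      nodeSelector:
        beta.kubernetes.io/instance-type: n1-standard-1
      containers:
      - name: nginx
        image: nginx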

Kubernetes also supports cluster autoscaler, which automatically resizes your cluster based on capacity when no node has sufficient resources to run the requested pods. To use it, we add --enable-autoscaling and specify the maximum and minimum node counts when we create the new node pool:

# gcloud container node-pools create larger-mem-pool --cluster my-k8s-cluster --machine-type n1-standard-1 --tags private --zone us-central1-a --scopes=storage-rw,compute-ro --enable-autoscaling --min-nodes 1 --max-nodes 5
...
Creating node pool larger-mem-pool...done.
Created [https://container.googleapis.com/v1/projects/kubernetes-cookbook/zones/us-central1-a/clusters/my-k8s-cluster/nodePools/larger-mem-pool].
NAME MACHINE_TYPE DISK_SIZE_GB NODE_VERSION
larger-mem-pool n1-standard-1 100 1.9.2-gke.1
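
If the node pool already exists, autoscaling can also be enabled afterwards with gcloud container clusters update, using the same bounds:

// turn on autoscaling for an existing node pool
# gcloud container clusters update my-k8s-cluster --enable-autoscaling --min-nodes 1 --max-nodes 5 --zone us-central1-a --node-pool larger-mem-pool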

After a few minutes, we can see there is a new node inside our cluster:

#  kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-my-k8s-cluster-default-pool-36121894-04rv Ready <none> 8h v1.9.2-gke.1
gke-my-k8s-cluster-default-pool-36121894-71wg Ready <none> 8h v1.9.2-gke.1
gke-my-k8s-cluster-default-pool-36121894-8km3 Ready <none> 1h v1.9.2-gke.1
gke-my-k8s-cluster-default-pool-36121894-9j9p Ready <none> 1h v1.9.2-gke.1
gke-my-k8s-cluster-default-pool-36121894-9jmv Ready <none> 1h v1.9.2-gke.1
gke-my-k8s-cluster-larger-mem-pool-a51c8da3-s6s6 Ready <none> 15m v1.9.2-gke.1

Now, let's change the replica count of our super-nginx from 1 to 4, by using kubectl edit or by updating and reapplying the deployment; a kubectl scale shortcut is sketched below.
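
A standard one-liner that achieves the same change, if you prefer not to edit the manifest:

// scale the super-nginx deployment to 4 replicas
# kubectl scale deployment super-nginx --replicas=4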

// check current pods
# kubectl get pods
NAME READY STATUS RESTARTS AGE
super-nginx-df79db98-5q9mj 0/1 Pending 0 3m
super-nginx-df79db98-72fcz 1/1 Running 0 3m
super-nginx-df79db98-78lbr 0/1 Pending 0 3m
super-nginx-df79db98-fngp2 1/1 Running 0 3m

We find there are two pods in a Pending state; let's check the node status:

// check nodes status
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-my-k8s-cluster-default-pool-36121894-04rv Ready <none> 8h v1.9.2-gke.1
gke-my-k8s-cluster-default-pool-36121894-71wg Ready <none> 8h v1.9.2-gke.1
gke-my-k8s-cluster-default-pool-36121894-9j9p Ready <none> 2h v1.9.2-gke.1
gke-my-k8s-cluster-larger-mem-pool-a51c8da3-d766 Ready <none> 4m v1.9.2-gke.1
gke-my-k8s-cluster-larger-mem-pool-a51c8da3-gtsn Ready <none> 3m v1.9.2-gke.1
gke-my-k8s-cluster-larger-mem-pool-a51c8da3-s6s6 Ready <none> 25m v1.9.2-gke.1

After a few minutes, we can see new members in our larger-mem-pool, and all of our pods are running:

// check pods status
# kubectl get pods
NAME READY STATUS RESTARTS AGE
super-nginx-df79db98-5q9mj 1/1 Running 0 3m
super-nginx-df79db98-72fcz 1/1 Running 0 3m
super-nginx-df79db98-78lbr 1/1 Running 0 3m
super-nginx-df79db98-fngp2 1/1 Running 0 3m

Cluster autoscaler comes in handy and is cost-effective. When the cluster is over-provisioned, the additional nodes in the node pool will be terminated automatically.
