Scale up and down manually with the kubectl scale command

Assume that today we'd like to scale our nginx Pods from two to four:

// kubectl scale --replicas=<expected_replica_num> deployment <deployment_name>
# kubectl scale --replicas=4 deployment my-nginx
deployment "my-nginx" scaled

Let's check how many Pods we have now:

# kubectl get pods
NAME                        READY     STATUS              RESTARTS   AGE
my-nginx-6484b5fc4c-9v7dc   1/1       Running             0          1m
my-nginx-6484b5fc4c-krd7p   1/1       Running             0          1m
my-nginx-6484b5fc4c-nsvzt   0/1       ContainerCreating   0          2s
my-nginx-6484b5fc4c-v68dr   1/1       Running             0          2s

We can see that two more Pods have been scheduled: one is already running, and the other is still being created. Eventually, all four Pods will be up and running, provided there are enough compute resources.
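
If you'd rather block until all replicas are ready instead of polling kubectl get pods, kubectl rollout status can watch the Deployment for you (the output below is illustrative):

// wait until the Deployment reaches the desired state
# kubectl rollout status deployment my-nginx
Waiting for rollout to finish: 3 of 4 updated replicas are available...
deployment "my-nginx" successfully rolled out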

kubectl scale (and kubectl autoscale, too!) also supports ReplicationControllers (RC) and ReplicaSets (RS). However, a Deployment is the recommended way to deploy Pods, as shown in the sketch below.
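
As an example, the same flags work against a ReplicaSet; the ReplicaSet name here is hypothetical, and the output is illustrative:

// scaling a ReplicaSet uses the same syntax (hypothetical RS name)
# kubectl scale --replicas=4 rs my-nginx-rs
replicaset "my-nginx-rs" scaled

Note that if a ReplicaSet is managed by a Deployment, the Deployment will revert a direct scale on it, so in that case you should scale the Deployment instead.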

We could also scale down with the same kubectl command, just by setting the replicas parameter lower:

// kubectl scale --replicas=<expected_replica_num> deployment <deployment_name>
# kubectl scale --replicas=2 deployment my-nginx
deployment "my-nginx" scaled

Now, we'll see that two Pods are scheduled for termination:

# kubectl get pods
NAME                        READY     STATUS        RESTARTS   AGE
my-nginx-6484b5fc4c-9v7dc   1/1       Running       0          1m
my-nginx-6484b5fc4c-krd7p   1/1       Running       0          1m
my-nginx-6484b5fc4c-nsvzt   0/1       Terminating   0          23s
my-nginx-6484b5fc4c-v68dr   0/1       Terminating   0          23s
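
Incidentally, kubectl scale also accepts the resource/name form; the following command is equivalent to the scale-down above:

// the resource/name form does the same thing
# kubectl scale deployment/my-nginx --replicas=2
deployment "my-nginx" scaled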

There is also a --current-replicas option, which specifies the expected number of current replicas as a precondition. If it doesn't match the actual count, Kubernetes refuses to scale, as follows:

// adding --current-replicas to precheck the condition for scaling
# kubectl scale --current-replicas=3 --replicas=4 deployment my-nginx
error: Expected replicas to be 3, was 2
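
Since the Deployment actually has two replicas at this point, passing --current-replicas=2 satisfies the precondition and the scale goes through:

// retry with the matching current replica count
# kubectl scale --current-replicas=2 --replicas=4 deployment my-nginx
deployment "my-nginx" scaled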