Manually scaling an application

When the usage of your application increases, it becomes necessary to scale the application up. Kubernetes is built to handle the orchestration of high-scale workloads.

Let's perform the following steps to understand how to manually scale an application:

  1. Change directories to /src/chapter7/charts/node, which is where the local clone of the example repository that you created in the Getting ready section can be found:
$ cd /src/chapter7/charts/node/
  2. Install the To-Do application example using the following command. This Helm chart will deploy two pods, including a Node.js service and a MongoDB service:
$ helm install . --name my-ch7-app
  3. Get the service IP of my-ch7-app-node to connect to the application. The following command will return an external address for the application:
$ export SERVICE_IP=$(kubectl get svc --namespace default my-ch7-app-node --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
$ echo http://$SERVICE_IP/
http://mytodoapp.us-east-1.elb.amazonaws.com/

  4. Open the address from Step 3 in a web browser. You will see a fully functional To-Do application.

  5. Check the status of the application using helm status. You will see the number of pods that have been deployed as part of the deployment in the AVAILABLE column:
$ helm status my-ch7-app
LAST DEPLOYED: Thu Oct 3 00:13:10 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME                READY  UP-TO-DATE  AVAILABLE  AGE
my-ch7-app-mongodb  1/1    1           1          9m9s
my-ch7-app-node     1/1    1           1          9m9s
...
  6. Scale the node pod to 3 replicas from its current scale of a single replica:
$ kubectl scale --replicas 3 deployment/my-ch7-app-node
deployment.extensions/my-ch7-app-node scaled
  7. Check the status of the application again and confirm that, this time, the number of available replicas is 3 and that the number of my-ch7-app-node pods in the v1/Pod section has increased to 3:
$ helm status my-ch7-app
...
RESOURCES:
==> v1/Deployment
NAME                READY  UP-TO-DATE  AVAILABLE  AGE
my-ch7-app-mongodb  1/1    1           1          26m
my-ch7-app-node     3/3    3           3          26m
...
==> v1/Pod(related)
NAME                                 READY  STATUS   RESTARTS  AGE
my-ch7-app-mongodb-5499c954b8-lcw27  1/1    Running  0         26m
my-ch7-app-node-d8b94964f-94dsb      1/1    Running  0         91s
my-ch7-app-node-d8b94964f-h9w4l      1/1    Running  3         26m
my-ch7-app-node-d8b94964f-qpm77      1/1    Running  0         91s
  8. To scale down your application, repeat Step 6, but this time with 2 replicas:
$ kubectl scale --replicas 2 deployment/my-ch7-app-node
deployment.extensions/my-ch7-app-node scaled
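
The scale-and-verify cycle above can be wrapped in a small shell helper. This is a sketch, not part of the chart: scale_and_wait is a hypothetical function name, and it uses kubectl rollout status to block until the new replicas report Ready instead of checking helm status by hand:

```shell
# Hypothetical helper: change the replica count of a deployment, then
# block until the rollout completes and all replicas report Ready.
scale_and_wait() {
  local deployment="$1" replicas="$2"
  kubectl scale --replicas "$replicas" "deployment/$deployment" || return 1
  kubectl rollout status "deployment/$deployment"
}

# Example: scale_and_wait my-ch7-app-node 3
```

Because kubectl rollout status exits non-zero on a failed rollout, the helper is also usable in CI scripts.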

With that, you've learned how to scale your application when needed. Of course, your Kubernetes cluster resources should be able to support growing workload capacities as well. You will use this knowledge to test the service healing functionality in the Auto-healing pods in Kubernetes recipe.
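
The checks you performed by eye with helm status can also be scripted: the ready-replica count is exposed directly in the Deployment's status. A minimal sketch, where replica_count is a hypothetical helper reading the standard .status.readyReplicas field:

```shell
# Hypothetical helper: print how many replicas of a deployment are Ready.
replica_count() {
  kubectl get deployment "$1" -o jsonpath='{.status.readyReplicas}'
}

# Example: after scaling down to 2 replicas, you would expect this to print 2.
# replica_count my-ch7-app-node
```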

The next recipe will show you how to autoscale workloads based on actual resource consumption instead of manual steps. 
