Updates and rollouts

Deployments allow for updating in a few different ways. First, there is the kubectl set command, which allows us to change the deployment configuration without redeploying manually. Currently, it only allows for updating the image, but as new versions of our application or container image are released, we will need to do this quite often.

Let's take a look using our deployment from the previous section. We should have three replicas running right now. Verify this by running the get pods command with a filter for our deployment:

$ kubectl get pods -l name=node-js-deploy

We should see three pods similar to those listed in the following screenshot:

Deployment Pod Listing

Take one of the pod names from our listing, substitute it into the following command in place of {POD_NAME_FROM_YOUR_LISTING}, and run the command:

$ kubectl describe pod/{POD_NAME_FROM_YOUR_LISTING} | grep Image:

We should see output like the following image, with the current image version of 0.1:

Current Pod Image

Now that we know what our current deployment is running, let's try to update to the next version. This can be achieved easily using the kubectl set command and specifying the new version, as shown here:

$ kubectl set image deployment/node-js-deploy node-js-deploy=jonbaier/pod-scaling:0.2

If all goes well, we should see text that says deployment "node-js-deploy" image updated displayed on the screen.

We can double-check the status using the following rollout status command:

$ kubectl rollout status deployment/node-js-deploy

We should see text confirming that the deployment was successfully rolled out. If instead we see a message about waiting for the rollout to finish, we may simply need to wait a moment, or alternatively check the logs for issues.
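
If the rollout does appear stuck, two quick ways to investigate (a sketch, using our example deployment's name) are to describe the deployment for recent events and to check its rollout history:

$ kubectl describe deployment/node-js-deploy
$ kubectl rollout history deployment/node-js-deploy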

Once it's finished, run the get pods command from earlier once more. This time, we will see new pods listed:

Deployment Pod Listing After Update

Once again, plug one of the pod names into the describe command we ran earlier. This time, we should see that the image has been updated to 0.2.
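
As a shortcut, rather than describing each pod one at a time, we can print every pod's image in a single pass using kubectl's jsonpath output. This is just a convenience; the label filter matches our earlier listing command:

$ kubectl get pods -l name=node-js-deploy -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'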

What happened behind the scenes is that Kubernetes rolled out a new version for us. It creates a new replica set with the new version. Once a pod from the new version is online and healthy, it kills one of the pods running the old version. It continues this behavior, scaling up the new version and scaling down the old version, until only the new pods are left.
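
We can observe this behavior directly by listing the replica sets that the deployment manages. The label filter below assumes the deployment copies the name=node-js-deploy label from the pod template onto the replica sets it creates:

$ kubectl get rs -l name=node-js-deploy

After the rollout completes, the old replica set remains with zero replicas alongside the new one; Kubernetes keeps it around so the rollout can be undone later.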

The following figure describes the workflow for your reference:

Deployment Lifecycle

It's worth noting that the deployment definition allows us to control the pod replacement method. There is a strategy.type field that defaults to RollingUpdate, which gives us the preceding behavior. Optionally, we can specify Recreate as the replacement strategy, and it will kill all the old pods first before creating the new versions.
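
As a sketch only, here is how that field might look in our deployment definition; the fragment omits the rest of the spec:

spec:
  strategy:
    # RollingUpdate is the default and replaces pods gradually;
    # Recreate kills all old pods before starting new ones,
    # which causes brief downtime
    type: Recreate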
