The previous chapter introduced two important Kubernetes objects: ReplicationController and ReplicaSet. At this point, you already know that they serve a similar purpose: maintaining a set of identical, healthy Pod replicas (copies). In fact, ReplicaSet is the successor of ReplicationController and, in recent versions of Kubernetes, should be used in its place.
Now, it is time to introduce the Deployment object, which provides easy scalability, rolling updates, and versioned rollbacks for your stateless Kubernetes applications and services. Deployment objects are built on top of ReplicaSets and they provide a declarative way of managing them – just describe the desired state in the Deployment manifest and Kubernetes will take care of orchestrating the underlying ReplicaSets in a controlled, predictable manner. Alongside StatefulSet, which will be covered in the next chapter, it is the most important workload management object in Kubernetes. This will be the bread and butter of your development and operations on Kubernetes! The goal of this chapter is to make sure that you have all the tools and knowledge you need to deploy your stateless application components using Deployment objects, as well as to safely release new versions of your components using rolling updates of deployments.
This chapter will cover the following topics:
For this chapter, you will need the following:
Kubernetes cluster deployment (local and cloud-based) and kubectl installation were covered in Chapter 3, Installing Your First Kubernetes Cluster.
You can download the latest code samples for this chapter from the official GitHub repository: https://github.com/PacktPublishing/The-Kubernetes-Bible/tree/master/Chapter11.
Kubernetes gives you out-of-the-box flexibility when it comes to running different types of workloads, depending on your use cases. Let's have a brief look at the supported workloads to understand where the Deployment object fits, as well as its purpose. When implementing cloud-based applications, you will generally need the following types of workloads:
With this brief summary regarding the different types of workloads in Kubernetes, we can dive deeper into managing stateless workloads using Deployment objects. In short, they provide declarative and controlled updates for Pods and ReplicaSets. You can declaratively perform operations such as the following by using them:
In this way, Deployment objects provide an end-to-end pipeline for managing your stateless components running in Kubernetes clusters. Usually, you will combine them with Service objects, as presented in Chapter 7, Exposing Your Pods with Services, to achieve high fault tolerance, health monitoring, and intelligent load balancing for traffic coming into your application.
Now, let's have a closer look at the anatomy of the Deployment object specification and how to create a simple example deployment in our Kubernetes cluster.
First, let's take a look at the structure of an example Deployment YAML manifest file, nginx-deployment.yaml, that maintains three replicas of an nginx pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      environment: test
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: nginx
        environment: test
    spec:
      containers:
        - name: nginx
          image: nginx:1.17
          ports:
            - containerPort: 80
As you can see, the structure of the Deployment spec is almost identical to that of a ReplicaSet, although it has a few extra parameters for configuring the strategy for rolling out new versions. The specification has four main components:
Important Note
The Deployment spec provides a high degree of reconfigurability to suit your needs. We recommend referring to the official documentation for all the details: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#deploymentspec-v1-apps.
To better understand the relationship of Deployment, its underlying child ReplicaSet, and Pods, please look at the following diagram:
As you can see, once you have defined and created a Deployment, it is not possible to change its selector. This is desired because otherwise, you could easily end up with orphaned ReplicaSets. There are two important actions that you can perform on existing Deployment objects:
Now, let's declaratively apply our example Deployment YAML manifest file, nginx-deployment.yaml, to the cluster using the kubectl apply command:
$ kubectl apply -f ./nginx-deployment.yaml --record
Using the --record flag is useful for tracking the changes that are made to the objects, as well as to inspect which commands caused these changes. You will then see an additional automatic annotation, kubernetes.io/change-cause, which contains information about the command.
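For example, after applying the manifest with --record, the Deployment carries an annotation roughly like the following sketch (the exact change-cause string depends on the command you ran):

```yaml
# Fragment of the Deployment metadata after using --record;
# the value reflects the kubectl invocation that caused the change
metadata:
  annotations:
    kubernetes.io/change-cause: kubectl apply --filename=./nginx-deployment.yaml --record=true
```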
Immediately after the Deployment object has been created, use the kubectl rollout command to track the status of your Deployment in real time:
$ kubectl rollout status deployment nginx-deployment-example
Waiting for deployment "nginx-deployment-example" rollout to finish: 0 of 3 updated replicas are available...
Waiting for deployment "nginx-deployment-example" rollout to finish: 0 of 3 updated replicas are available...
Waiting for deployment "nginx-deployment-example" rollout to finish: 0 of 3 updated replicas are available...
deployment "nginx-deployment-example" successfully rolled out
This is a useful command that can give us a lot of insight into what is happening with an ongoing Deployment rollout. You can also use the usual kubectl get or kubectl describe commands:
$ kubectl get deploy nginx-deployment-example
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment-example 3/3 3 3 6m21s
As you can see, the Deployment has been successfully created and all three Pods are now in the ready state.
Tip
Instead of typing deployment, you can use the deploy abbreviation when using kubectl commands.
You may also be interested in seeing the underlying ReplicaSets:
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-example-5549875c78 3 3 3 8m7s
Please take note of the additional generated hash, 5549875c78, in the name of our ReplicaSet; it is also the value of the pod-template-hash label that we mentioned earlier.
Lastly, you can see the pods in the cluster that were created by the Deployment object using the following command:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-example-5549875c78-5srkn 1/1 Running 0 11m
nginx-deployment-example-5549875c78-h5n76 1/1 Running 0 11m
nginx-deployment-example-5549875c78-mn5zn 1/1 Running 0 11m
Congratulations – you have created and inspected your first Kubernetes Deployment! Next, we will take a look at how Service objects are used to expose your Deployment to external traffic coming into the cluster.
Service objects were covered in detail in Chapter 7, Exposing Your Pods with Services, so in this section, we will provide a brief recap about the role of Services and how they are usually used with Deployments. Services are Kubernetes objects that allow you to expose your Pods, whether this is to other Pods in the cluster or end users. They are the crucial building blocks for highly available and fault-tolerant Kubernetes applications, since they provide a load balancing layer that actively routes incoming traffic to ready and healthy Pods.
The Deployment objects, on the other hand, provide Pod replication, automatic restarts when failures occur, easy scaling, controlled version rollouts, and rollbacks. But there is a catch: Pods that are created by ReplicaSets or Deployments have a finite life cycle. At some point, you can expect them to be terminated; then, new Pod replicas with new IP addresses will be created in their place. So, what if you have a Deployment running web server Pods that need to communicate with Pods that have been created as a part of another Deployment such as backend Pods? Web server Pods cannot assume anything about the IP addresses or the DNS names of backend Pods, as they may change over time. This issue can be resolved with Service objects, which provide reliable networking for a set of Pods.
In short, Services target a set of Pods, and this is determined by label selectors. These label selectors work on the same principle that you have learned about for ReplicaSets and Deployments. The most common scenario is exposing a Service for an existing Deployment by using the same label selector. The Service is responsible for providing a reliable DNS name and IP address, as well as for monitoring selector results and updating the associated Endpoint object with the current IP addresses of the matching Pods. For internal cluster communication, this is usually achieved using simple ClusterIP Services, whereas to expose them to external traffic, you can use the NodePort Service or, more commonly in cloud deployments, the LoadBalancer Service.
To visualize how Service objects interact with Deployment objects in Kubernetes, please look at the following diagram:
This diagram visualizes how any client Pod in the cluster can transparently communicate with the nginx Pods that are created by our Deployment object and exposed using the ClusterIP Service. ClusterIPs are essentially virtual IP addresses that are managed by the kube-proxy service (process) running on each Node. kube-proxy is responsible for all the clever routing logic in the cluster and ensures that the routing is entirely transparent to the client Pods – they do not need to be aware of whether they are communicating with the same Node, a different Node, or even an external component. The role of the Service object is to define a set of ready Pods that should be hidden behind a stable ClusterIP. Usually, internal clients will not call the Service Pods using the ClusterIP directly; instead, they will use a DNS short name, which is the same as the Service name; for example, nginx-service-example. This will be resolved to the ClusterIP by the cluster's internal DNS service. Alternatively, they may use a DNS Fully Qualified Domain Name (FQDN) in the form of <serviceName>.<namespaceName>.svc.<clusterDomain>; for example, nginx-service-example.default.svc.cluster.local.
Important Note
For LoadBalancer or NodePort Services that expose Pods to external traffic, the principle is similar: internally, they also provide a ClusterIP for internal communication. The difference is that they additionally configure components so that external traffic can be routed into the cluster.
Now that you're equipped with the necessary knowledge about Service objects and their interactions with Deployment objects, let's put what we've learned into practice!
In this section, we are going to expose our nginx-deployment-example Deployment using the nginx-service-example Service object, which is of the LoadBalancer type, by performing the following steps:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-example
spec:
  selector:
    app: nginx
    environment: test
  type: LoadBalancer
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
The label selector of the Service is the same as the one we used for our Deployment object. The Service specification instructs Kubernetes to expose our Deployment on port 80 of the cloud load balancer and to route traffic from that port to target port 80 on the underlying Pods.
Important Note
Depending on how your Kubernetes cluster is deployed, you may not be able to use the LoadBalancer type. In that case, you may need to use the NodePort type for this exercise or stick to the simple ClusterIP type and skip the part about external access. For local development deployments such as minikube, you will need to use the minikube service command to access your Service. You can find more details in the documentation: https://minikube.sigs.k8s.io/docs/commands/service/.
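In such cases, a NodePort variant of the Service could be sketched as follows (the nodePort value below is an assumption for illustration; if you omit it, Kubernetes picks a free port from the 30000–32767 range):

```yaml
# Hypothetical NodePort alternative to the LoadBalancer Service –
# exposes the Pods on a static port of every cluster Node
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-example
spec:
  selector:
    app: nginx
    environment: test
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
      nodePort: 30080  # must be in the 30000-32767 range if specified
```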
$ kubectl describe service nginx-service-example
...
LoadBalancer Ingress: 40.88.196.15
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 11m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 10m service-controller Ensured load balancer
Please note that creating the external cloud load balancer may take a bit of time, so you may not see an external IP address immediately. In this case, the external IP is 40.88.196.15.
Tip
You can use the svc abbreviation in the kubectl commands instead of typing service.
This shows how Services are used to expose Deployment Pods to external traffic. Now, let's perform an experiment that demonstrates how internal traffic is handled by Services for other client Pods:
$ kubectl run -i --tty busybox --image=busybox --rm --restart=Never -- sh
$ wget http://nginx-service-example && cat index.html
$ rm ./index.html
$ wget http://nginx-service-example.default.svc.cluster.local && cat index.html
Now, we will quickly show you how to achieve a similar result using imperative commands to create a Service for our Deployment object.
A similar effect can be achieved using the imperative kubectl expose command – a Service will be created for our Deployment object named nginx-deployment-example. Use the following command:
$ kubectl expose deployment --type=LoadBalancer nginx-deployment-example
service/nginx-deployment-example exposed
This will create a Service with the same name as the Deployment object; that is, nginx-deployment-example. If you would like to use a different name, as shown in the declarative example, you can use the --name=nginx-service-example parameter. Additionally, port 80, which will be used by the Service, will be the same as the one that was defined for the Pods. If you want to change this, you can use the --port=<number> and --target-port=<number> parameters.
Please note that this imperative command is recommended for use in development or debugging scenarios only. For production environments, you should leverage declarative Infrastructure-as-Code and Configuration-as-Code approaches as much as possible.
In Kubernetes, there are three types of probes that you can configure for each container running in a Pod:
All these probes are incredibly useful when you're configuring your Deployments – always try to predict possible life cycle scenarios for the processes running in your containers and configure the probes accordingly for your Deployments.
Probes can have different forms. For example, they can be running a command (exec) inside the container and verifying whether the exit code is successful. Alternatively, they can be HTTP GET requests (httpGet) to a specific endpoint of the container or attempting to open a TCP socket (tcpSocket) and checking if a connection could be established. Usually, httpGet probes are used on dedicated health endpoints (for liveness) or ready endpoints (for readiness) that are exposed by the process running in the container. These endpoints would encapsulate the logic of the actual health or readiness check.
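As a sketch, the three handler types could appear in a container spec as follows (the command, endpoints, and thresholds below are assumptions for demonstration only):

```yaml
# Illustrative probe handlers for a single container
livenessProbe:
  exec:                     # run a command; non-zero exit code = failure
    command: ["cat", "/tmp/healthy"]
readinessProbe:
  httpGet:                  # HTTP GET; non-2xx/3xx status code = failure
    path: /ready
    port: 80
startupProbe:
  tcpSocket:                # failure if a TCP connection cannot be opened
    port: 80
  failureThreshold: 30      # allow a slow-starting container up to
  periodSeconds: 10         # 30 * 10 s = 300 s to come up
```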
Please note that, by default, no probes are configured on containers running in Pods. Kubernetes will route Service traffic to Pod containers as soon as they have started successfully, and it will restart crashed containers according to the default Always restart policy. This means that it is your responsibility to figure out what type of probes and what settings you need for your particular case. You will also need to understand the possible consequences and caveats of incorrectly configured probes – for example, if your liveness probe is too restrictive and has timeouts that are too small, it may wrongfully restart your containers and decrease the availability of your application.
Now, let's demonstrate how you can configure a readiness probe on our Deployment and how it works in real time.
Important Note
If you are interested in the configuration details for other types of probes, please refer to the official documentation: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/. We have only covered the readiness probe in this section as it is the most important for interactions between Service objects and Deployment objects.
The nginx Deployment that we use is very simple and does not need any dedicated readiness probe. Instead, we will arrange the container's setup so that we can have the container's readiness probe fail or succeed on demand. The idea is to create an empty file called /usr/share/nginx/html/ready during container setup, which will be served on the /ready endpoint by nginx (just like any other file) and configure a readiness probe of the httpGet type to query the /ready endpoint for a successful HTTP status code. Now, by deleting or recreating the ready file using the kubectl exec command, we can easily simulate failures in our Pods that cause the readiness probe to fail or succeed.
Please follow these steps to configure and test the readiness probe:
$ kubectl delete deployment nginx-deployment-example
$ cp nginx-deployment.yaml nginx-deployment-readinessprobe.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-example
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: nginx
          image: nginx:1.17
          ports:
            - containerPort: 80
          command:
            - /bin/sh
            - -c
            - |
              touch /usr/share/nginx/html/ready
              echo "You have been served by Pod with IP address: $(hostname -i)" > /usr/share/nginx/html/index.html
              nginx -g "daemon off;"
          readinessProbe:
            httpGet:
              path: /ready
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 2
            timeoutSeconds: 10
            successThreshold: 1
            failureThreshold: 2
There are multiple parts changing in the Deployment manifest, all of which have been highlighted. First, we overrode the default container entrypoint using command and passed additional arguments. command is set to /bin/sh to execute a custom shell command. The additional arguments are constructed in the following way:
Important Note
Usually, you would perform such customization using a new Docker image, which inherits from the nginx:1.17 image as a base. The method shown here is being used for demonstration purposes and shows how flexible the Kubernetes runtime is.
The second set of changes we made in the YAML manifest for the Deployment were for the definition of readinessProbe, which is configured as follows:
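Annotating the readinessProbe stanza from the manifest, the parameters can be read as follows:

```yaml
# readinessProbe parameters from nginx-deployment-readinessprobe.yaml,
# annotated for reference
readinessProbe:
  httpGet:
    path: /ready          # endpoint queried by the kubelet
    port: 80
  initialDelaySeconds: 5  # wait 5 s after container start before probing
  periodSeconds: 2        # probe every 2 s
  timeoutSeconds: 10      # each probe attempt times out after 10 s
  successThreshold: 1     # one success marks the Pod as ready
  failureThreshold: 2     # two consecutive failures mark it as not ready
```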
To create the deployment, follow these steps:
$ kubectl apply -f ./nginx-deployment-readinessprobe.yaml --record
$ kubectl describe svc nginx-service-example
...
LoadBalancer Ingress: 52.188.43.251
...
Endpoints: 10.244.0.43:80,10.244.1.50:80,10.244.1.51:80
We will use 52.188.43.251 as the IP address in our examples. You can also see that the service has three Endpoints that map to our Deployment Pods, all of which are ready to serve traffic.
You have been served by Pod with IP address: 10.244.0.43
... (a few F5 hits later)
You have been served by Pod with IP address: 10.244.1.51
... (a few F5 hits later)
You have been served by Pod with IP address: 10.244.1.50
$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP ...
nginx-deployment-example-85cd4bb66f-94r4q 1/1 Running 0 11m 10.244.1.51 ...
nginx-deployment-example-85cd4bb66f-95bwd 1/1 Running 0 11m 10.244.1.50 ...
nginx-deployment-example-85cd4bb66f-ssccm 1/1 Running 0 11m 10.244.0.43 ...
$ kubectl exec -it nginx-deployment-example-85cd4bb66f-94r4q -- rm /usr/share/nginx/html/ready
$ kubectl describe svc nginx-service-example
...
Endpoints: 10.244.0.43:80,10.244.1.50:80
$ kubectl describe pod nginx-deployment-example-85cd4bb66f-94r4q
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 2m21s (x151 over 7m21s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 404
$ kubectl exec -it nginx-deployment-example-85cd4bb66f-95bwd -- rm /usr/share/nginx/html/ready
$ kubectl exec -it nginx-deployment-example-85cd4bb66f-ssccm -- rm /usr/share/nginx/html/ready
$ kubectl exec -it nginx-deployment-example-85cd4bb66f-ssccm -- touch /usr/share/nginx/html/ready
You have been served by Pod with IP address: 10.244.0.43
Congratulations – you have successfully configured and tested the readiness probe for your Deployment Pods! This should give you a good insight into how the probes work and how you can use them with Services that expose your Deployments.
Next, we will take a brief look at how you can scale your Deployments.
The beauty of Deployments is that you can almost instantly scale them up or down, depending on your needs. When the Deployment is exposed behind a Service, the new Pods will be automatically discovered as new Endpoints when you scale up, or automatically removed from the Endpoints list when you scale down. The steps for this are as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-example
spec:
  replicas: 10
  ...
$ kubectl apply -f ./nginx-deployment-readinessprobe.yaml --record
deployment.apps/nginx-deployment-example configured
$ kubectl describe deploy nginx-deployment-example
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 21s deployment-controller Scaled up replica set nginx-deployment-example-85cd4bb66f to 10
$ kubectl scale deploy nginx-deployment-example --replicas=10
deployment.apps/nginx-deployment-example scaled
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-example
spec:
  replicas: 2
  ...
$ kubectl apply -f ./nginx-deployment-readinessprobe.yaml --record
$ kubectl scale deploy nginx-deployment-example --replicas=2
If you describe the Deployment, you will see that this scaling down is reflected in the events:
$ kubectl describe deploy nginx-deployment-example
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 5s deployment-controller Scaled down replica set nginx-deployment-example-85cd4bb66f to 2
Deployment events are very useful if you want to know the exact timeline of scaling and the other operations that can be performed with the Deployment object.
Important Note
It is possible to autoscale your deployments using HorizontalPodAutoscaler. This will be covered in Chapter 20, Autoscaling Kubernetes Pods and Nodes.
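As a preview, a minimal HorizontalPodAutoscaler targeting our example Deployment could be sketched like this (the thresholds are assumptions for illustration; the details are covered in Chapter 20):

```yaml
# Sketch of an HPA that keeps average CPU utilization around 80%
# by scaling the Deployment between 2 and 10 replicas
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment-example-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment-example
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```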
Next, you will learn how to delete a Deployment from your cluster.
To delete a Deployment object, you can do two things:
To delete the Deployment object and its Pods, you can use the regular kubectl delete command:
$ kubectl delete deploy nginx-deployment-example
You will see that the Pods get terminated and that the Deployment object is then deleted.
Now, if you would like to delete just the Deployment object, you need to use the --cascade=orphan option for kubectl delete:
$ kubectl delete deploy nginx-deployment-example --cascade=orphan
After executing this command, if you inspect what pods are in the cluster, you will still see all the Pods that were owned by the nginx-deployment-example Deployment.
So far, we have only covered making one possible modification to a living Deployment – we have scaled up and down by changing the replicas parameter in the specification. However, this is not all we can do! It is possible to modify the Deployment's Pod template (.spec.template) in the specification and, in this way, trigger a rollout. This rollout may be caused by a simple change, such as changing the labels of the Pods, but it may be also a more complex operation when the container images in the Pod definition are changed to a different version. This is the most common scenario as it enables you, as a Kubernetes cluster operator, to perform a controlled, predictable rollout of a new version of your image and effectively create a new revision of your Deployment.
Your Deployment uses a rollout strategy, which can be specified in a YAML manifest using .spec.strategy.type. Kubernetes supports two strategies out of the box:
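For comparison with the RollingUpdate configuration used in our manifest, a Deployment using the Recreate strategy declares it like this:

```yaml
# Recreate terminates all old Pods before starting new ones;
# no rollingUpdate parameters apply to this strategy
spec:
  strategy:
    type: Recreate
```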
Tip
Consider the Deployment strategies as basic building blocks for more advanced Deployment scenarios. For example, if you are interested in Blue/Green Deployments, you can easily achieve this in Kubernetes by using a combination of Deployments and Services while manipulating label selectors. You can find out more about this in the official Kubernetes blog post: https://kubernetes.io/blog/2018/04/30/zero-downtime-deployment-kubernetes-jenkins/.
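As a sketch of the Blue/Green idea, you run two Deployments whose Pod templates differ only by a version label, and you switch traffic by changing the Service's label selector (the version label below is an assumption for illustration):

```yaml
# Service initially routing traffic to the "blue" Deployment's Pods;
# switching to "green" is a single change to the selector
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-example
spec:
  selector:
    app: nginx
    version: blue   # change to "green" to cut traffic over
  ports:
    - port: 80
      targetPort: 80
```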
Now, we will perform a rollout using the RollingUpdate strategy. The Recreate strategy, which is much simpler, can be exercised similarly.
First, let's recreate the Deployment that we used previously for our readiness probe demonstration:
$ cp nginx-deployment-readinessprobe.yaml nginx-deployment-rollingupdate.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-example
spec:
  replicas: 3
  ...
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    ...
    spec:
      containers:
        - name: nginx
          image: nginx:1.17
          ...
          readinessProbe:
            ...
$ kubectl apply -f ./nginx-deployment-rollingupdate.yaml --record
With the deployment ready in the cluster, we can start rolling out a new version of our application. We will change the image in the Pod template for our Deployment to a newer version and observe what happens during the rollout. To do this, follow these steps:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-example
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: nginx
          image: nginx:1.18
$ kubectl apply -f ./nginx-deployment-rollingupdate.yaml --record
deployment.apps/nginx-deployment-example configured
$ kubectl rollout status deployment nginx-deployment-example
Waiting for deployment "nginx-deployment-example" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment-example" rollout to finish: 2 of 3 updated replicas are available...
deployment "nginx-deployment-example" successfully rolled out
$ kubectl describe deploy nginx-deployment-example
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 3m56s deployment-controller Scaled up replica set nginx-deployment-example-85cd4bb66f to 3
Normal ScalingReplicaSet 3m22s deployment-controller Scaled up replica set nginx-deployment-example-54769f6df8 to 1
Normal ScalingReplicaSet 3m22s deployment-controller Scaled down replica set nginx-deployment-example-85cd4bb66f to 2
Normal ScalingReplicaSet 3m22s deployment-controller Scaled up replica set nginx-deployment-example-54769f6df8 to 2
Normal ScalingReplicaSet 3m5s deployment-controller Scaled down replica set nginx-deployment-example-85cd4bb66f to 0
Normal ScalingReplicaSet 3m5s deployment-controller Scaled up replica set nginx-deployment-example-54769f6df8 to 3
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-example-54769f6df8 3 3 3 6m44s
nginx-deployment-example-85cd4bb66f 0 0 0 7m18s
$ kubectl describe pods
...
Containers:
nginx:
Container ID: docker://a3ff03abf8f76e0d128f5561b6b8fd0c7a355f0fb8a4d3d9ef45ed9ee8adf23c
Image: nginx:1.18
This shows that we have indeed performed a rollout of the new nginx container image version!
Tip
You can change the Deployment container image imperatively using the kubectl set image deployment nginx-deployment-example nginx=nginx:1.18 --record command. This approach is only recommended for non-production scenarios, and it works well with imperative rollbacks.
Next, you will learn how to roll back a deployment.
If you are using a declarative model to introduce changes to your Kubernetes cluster and are committing each change to your source code repository, performing a rollback is very simple and involves just reverting the commit and applying the configuration again. Usually, the process of applying changes is performed as part of the CI/CD pipeline for the source code repository, instead of the changes being manually applied by an operator. This is the easiest way to manage Deployments, and this is generally recommended in the Infrastructure-as-Code and Configuration-as-Code paradigms.
Tip
One very good example of using a declarative model in practice is Flux (https://fluxcd.io/), which is a project that's currently incubating at CNCF. Flux is the core of the approach known as GitOps, which is a way of implementing continuous deployment for cloud-native applications. It focuses on a developer-centric experience when operating the infrastructure by using tools developers are already familiar with, including Git and continuous deployment tools.
The Kubernetes CLI still provides an imperative way to roll back a deployment using revision history. Imperative rollbacks can also be performed on Deployments that have been updated declaratively. Now, we will demonstrate how to use kubectl for rollbacks. Follow these steps:
$ kubectl set image deployment nginx-deployment-example nginx=nginx:1.19 --record
deployment.apps/nginx-deployment-example image updated
$ kubectl rollout status deployment nginx-deployment-example
...
deployment "nginx-deployment-example" successfully rolled out
$ kubectl rollout history deploy nginx-deployment-example
deployment.apps/nginx-deployment-example
REVISION CHANGE-CAUSE
1 kubectl apply --filename=./nginx-deployment-rollingupdate.yaml --record=true
2 kubectl apply --filename=./nginx-deployment-rollingupdate.yaml --record=true
3 kubectl set image deployment nginx-deployment-example nginx=nginx:1.19 --record=true
$ kubectl rollout history deploy nginx-deployment-example --revision=2
deployment.apps/nginx-deployment-example with revision #2
Pod Template:
...
Containers:
nginx:
Image: nginx:1.18
$ kubectl rollout undo deploy nginx-deployment-example
deployment.apps/nginx-deployment-example rolled back
$ kubectl rollout undo deploy nginx-deployment-example --to-revision=2
$ kubectl rollout status deploy nginx-deployment-example
Please note that you can also perform rollbacks on currently ongoing rollouts. This can be done in both ways; that is, declaratively and imperatively.
Tip
If you need to pause and resume the ongoing rollout of a Deployment, you can use the kubectl rollout pause deployment nginx-deployment-example and kubectl rollout resume deployment nginx-deployment-example commands.
Congratulations – you have successfully rolled back your Deployment. In the next section, we will provide you with a set of best practices for managing Deployment objects in Kubernetes.
This section will summarize known best practices when working with Deployment objects in Kubernetes. This list is by no means complete, but it is a good starting point for your journey with Kubernetes.
In the DevOps world, it is a good practice to stick to declarative models when introducing updates to your infrastructure and applications. This is at the core of the Infrastructure-as-Code and Configuration-as-Code paradigms. In Kubernetes, you can easily perform declarative updates using the kubectl apply command, which can be used on a single file or even a whole directory of YAML manifest files.
Tip
To delete objects, it is still better to use imperative commands. It is more predictable and less prone to errors. Declaratively deleting resources in your cluster is only useful in CI/CD scenarios, where the whole process is entirely automated.
The same principle also applies to Deployment objects. Performing a rollout or rollback when your YAML manifest files are versioned and kept in a source control repository is easy and predictable. Using the kubectl rollout undo and kubectl set image deployment commands is generally not recommended in production environments. Using these commands gets much more complicated when more than one person is working on operations in the cluster.
Using the Recreate strategy may be tempting as it provides instantaneous updates for your Deployments. However, at the same time, this will mean downtime for your end users. This is because all the existing Pods for the old revision of the Deployment will be terminated at once and replaced with the new Pods. There may be a significant delay before the new pods become ready, and this means downtime. This downtime can be easily avoided by using the RollingUpdate strategy in production scenarios.
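If zero downtime is the goal, the RollingUpdate parameters can be tuned so that an old Pod is only removed after its replacement is ready – a sketch of such a configuration:

```yaml
# RollingUpdate tuned for zero downtime: never drop below the
# desired replica count during a rollout
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep all desired replicas serving at all times
      maxSurge: 1         # add at most one extra Pod during the rollout
```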
It is possible to create Pods with labels that match the label selector of some existing Deployment. This can be done using bare Pods or another Deployment or ReplicaSet. This leads to conflicts, which Kubernetes does not prevent, and makes the existing deployment believe that it has created the other Pods. The results may be unpredictable and in general, you need to pay attention to how you label the resources in your cluster. We advise you to use semantic labeling here, which you can learn more about in the official documentation: https://kubernetes.io/docs/concepts/configuration/overview/#using-labels.
The liveness, readiness, and startup probes of your Pod containers can provide a lot of benefits but at the same time, if they have been misconfigured, they can cause outages, including cascading failures. You should always be sure that you understand the consequences of each probe going into a failed state and how it affects other Kubernetes resources, such as Service objects.
There are a couple of established best practices for readiness probes that you should consider:
Similar to readiness probes, there are a couple of guidelines on how and when you should use liveness probes:
These are the most important points concerning probes for Pods. Now, let's discuss how you should tag your container images.
Managing deployment rollbacks and inspecting the history of rollouts requires that we use good tagging for the container images. If you rely on the latest tag, performing a rollback will not be possible because this tag points to a different version of the image as time goes on. It is a good practice to use semantic versioning for your container images. Additionally, you may consider tagging the images with a source code hash, such as a Git commit hash, to ensure that you can easily track what is running in your Kubernetes cluster.
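To illustrate, the container image in a Pod template could be tagged along these lines (the registry, image name, and tags below are hypothetical):

```yaml
# Illustrative tagging schemes for traceable rollbacks
containers:
  - name: myapp
    image: registry.example.com/myapp:1.4.2            # semantic version
    # or, pinned to the exact source revision:
    # image: registry.example.com/myapp:1.4.2-3f4a9b1  # version + Git SHA
```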
If you are working on workloads that were developed on older versions of Kubernetes, you may notice that, starting with Kubernetes 1.16, you can't apply the Deployment to the cluster because of the following error:
error: unable to recognize "deployment": no matches for kind "Deployment" in version "extensions/v1beta1"
The reason for this is that, in version 1.16, the Deployment object was removed from the extensions/v1beta1 API group, in accordance with the API versioning policy. You should use the apps/v1 API group instead, where Deployment has been available as a stable resource since Kubernetes 1.9.
This also shows an important rule to follow when you work with Kubernetes: always follow the API versioning policy and try to upgrade your resources to the latest API groups when you migrate to a new version of Kubernetes. This will save you unpleasant surprises when the resource is eventually deprecated in older API groups.
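The fix for the error above is a one-line change at the top of the manifest; the rest of the spec stays the same:

```yaml
# Before (removed in Kubernetes 1.16):
# apiVersion: extensions/v1beta1
# After (stable in the apps group since Kubernetes 1.9):
apiVersion: apps/v1
kind: Deployment
```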
In this chapter, you learned how to work with stateless workloads and applications on Kubernetes using Deployment objects. First, you created an example Deployment and exposed its Pods using a Service object of the LoadBalancer type for external traffic. Next, you learned how to scale and manage Deployment objects in the cluster. The management operations we covered included rolling out a new revision of a Deployment and rolling back to an earlier revision in case of a failure. Lastly, we equipped you with a set of known best practices when working with Deployment objects.
The next chapter will extend this knowledge with details about managing stateful workloads and applications. While doing so, we will introduce a new Kubernetes object: StatefulSet.
For more information regarding Deployments and Services, please refer to the following Packt books: