Adding liveness probes to pods

Kubernetes uses liveness probes to decide when to restart a container. Liveness can be checked in three ways: by running a command inside the container and verifying that it exits with status 0, by opening a TCP socket to a specified port, or by sending an HTTP request to a specified path. In the HTTP case, if the path returns a success code, kubelet considers the container healthy. In this recipe, we will learn how to add an HTTP liveness probe to the example application. Let's perform the following steps to add liveness probes:

  1. Edit the minio.yaml file in the src/chapter7/autoheal/minio directory and add the following livenessProbe section right under the volumeMounts section, before volumeClaimTemplates. Your YAML manifest should look similar to the following. This probe will send an HTTP request to the /minio/health/live path on port 9000 every 20 seconds, after an initial delay of 120 seconds, to validate the application's health:
...
        volumeMounts:
        - name: data
          mountPath: /data
        #### Starts here
        livenessProbe:
          httpGet:
            path: /minio/health/live
            port: 9000
          initialDelaySeconds: 120
          periodSeconds: 20
        #### Ends here
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  volumeClaimTemplates:

For liveness probes that use HTTP requests to work, an application needs to expose unauthenticated health check endpoints. In our example, MinIO provides this through the /minio/health/live endpoint. If your workload doesn't have a similar endpoint, you may want to use liveness commands inside your pods to verify their health.
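A command-based probe for such a workload could be sketched as follows; kubelet considers the container healthy as long as the command exits with status 0. The command shown here is an illustrative assumption, not part of the MinIO manifest:

```yaml
livenessProbe:
  exec:
    command:            # any command that exits 0 while the app is healthy
    - cat
    - /tmp/healthy      # hypothetical file written by the application
  initialDelaySeconds: 120
  periodSeconds: 20
```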

  2. Deploy the application. It will create four pods:
$ kubectl apply -f minio.yaml
  3. Confirm the liveness probe by describing one of the pods. You will see a Liveness description similar to the following:
$ kubectl describe pod minio-0
...
Liveness: http-get http://:9000/minio/health/live delay=120s timeout=1s period=20s #success=1 #failure=3
...
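The timeout=1s, #success=1, and #failure=3 values in the output above are kubelet defaults, since our manifest did not set them. They map to probe fields in the Pod spec and can be tuned explicitly, for example:

```yaml
livenessProbe:
  httpGet:
    path: /minio/health/live
    port: 9000
  initialDelaySeconds: 120   # delay=120s: wait before the first probe
  periodSeconds: 20          # period=20s: interval between probes
  timeoutSeconds: 1          # timeout=1s: how long to wait for a response
  successThreshold: 1        # #success=1: one success marks the container healthy
  failureThreshold: 3        # #failure=3: restart after three consecutive failures
```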
  4. To test the liveness probe, we need to edit the minio.yaml file again. This time, set the livenessProbe port to 8000, a port on which the application is not listening and therefore cannot respond to the HTTP request. Repeat Steps 2 and 3: redeploy the application, then check the events in the pod description. You will see a minio failed liveness probe, will be restarted message in the events:
$ kubectl describe pod minio-0
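For reference, only the port changes in the modified section of minio.yaml:

```yaml
livenessProbe:
  httpGet:
    path: /minio/health/live
    port: 8000              # no listener on this port, so the probe fails
  initialDelaySeconds: 120
  periodSeconds: 20
```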
  5. You can confirm the restarts by listing the pods. You will see that every MinIO pod has been restarted multiple times due to its failing liveness status:
$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
minio-0   1/1     Running   4          12m
minio-1   1/1     Running   4          12m
minio-2   1/1     Running   3          11m
minio-3   1/1     Running   3          11m

In this recipe, you learned how to implement the auto-healing functionality for applications that are running in Kubernetes clusters.
