Deployment

We'll be building upon the Kubernetes test environment started previously. To begin the deployment, move into the correct path relative to the repository root, as follows:

cd ./chapter06/provision/kubernetes/

As access to the Kubernetes API is required, the role-based access control (RBAC) configuration for this deployment is quite extensive: it includes a Role, a RoleBinding, a ClusterRole, a ClusterRoleBinding, and a ServiceAccount. This manifest is available at ./kube-state-metrics/kube-state-metrics-rbac.yaml.
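The full set of rules lives in that file; as an illustration of its shape, the ClusterRole portion grants kube-state-metrics read-only access to the objects it exports, along these lines (the resource list below is a representative excerpt, not the complete manifest):

# Illustrative excerpt -- the complete RBAC manifest is in
# ./kube-state-metrics/kube-state-metrics-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-state-metrics
rules:
- apiGroups: [""]
  resources: ["pods", "nodes", "services", "endpoints", "namespaces"]
  verbs: ["list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "daemonsets", "replicasets", "statefulsets"]
  verbs: ["list", "watch"]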

The full manifest should be applied using the following command:

kubectl apply -f ./kube-state-metrics/kube-state-metrics-rbac.yaml
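kubectl prints one line per object it creates; given the objects listed previously, a successful apply should produce output resembling the following (object names here assume they match the filenames):

serviceaccount/kube-state-metrics created
role.rbac.authorization.k8s.io/kube-state-metrics created
rolebinding.rbac.authorization.k8s.io/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created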

We'll be creating a deployment for kube-state-metrics with just one instance since, in this case, there are no clustering or special deployment requirements:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: monitoring
spec:
  selector:
    matchLabels:
      k8s-app: kube-state-metrics
  replicas: 1
...

This deployment will run an instance of the kube-state-metrics exporter, along with the addon-resizer sidecar, which scales the exporter's resource requests dynamically as the cluster grows:

...
  template:
    spec:
      serviceAccountName: kube-state-metrics
      containers:
      - name: kube-state-metrics
        ...
      - name: addon-resizer
        ...
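The addon-resizer works by watching the number of nodes in the cluster and patching the exporter's CPU and memory requests as that number grows. The exact container spec is in the deployment manifest; a sketch of the typical flags, with illustrative values, looks like the following:

# Illustrative sketch -- the image tag and flag values here are
# examples, not necessarily those used in the repository manifest
- name: addon-resizer
  image: k8s.gcr.io/addon-resizer:1.8.4
  command:
  - /pod_nanny
  - --container=kube-state-metrics   # container whose requests are resized
  - --cpu=100m                       # base CPU request
  - --extra-cpu=1m                   # extra CPU per cluster node
  - --memory=100Mi                   # base memory request
  - --extra-memory=2Mi               # extra memory per cluster node
  - --deployment=kube-state-metrics  # deployment to patch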

The full deployment manifest can be applied using the following instruction:

kubectl apply -f ./kube-state-metrics/kube-state-metrics-deployment.yaml

We can follow the deployment status using the following:

kubectl rollout status deployment/kube-state-metrics -n monitoring
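When the rollout completes, the previous command reports that the deployment was successfully rolled out. A follow-up pod listing should then show both containers ready (the pod name suffix below is illustrative):

kubectl get pods -n monitoring -l k8s-app=kube-state-metrics

NAME                                  READY   STATUS    RESTARTS   AGE
kube-state-metrics-7d84d6f8bb-x2x9l   2/2     Running   0          1m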

After a successful deployment, we'll be creating a service for this exporter, this time with two ports: one for the Kubernetes API object metrics and another for the exporter's own internal metrics:

apiVersion: v1
kind: Service
metadata:
  name: kube-state-metrics
  namespace: monitoring
  labels:
    k8s-app: kube-state-metrics
  annotations:
    prometheus.io/scrape: 'true'
spec:
  type: NodePort
  ports:
  - {name: http-metrics, port: 8080, targetPort: http-metrics, protocol: TCP}
  - {name: telemetry, port: 8081, targetPort: telemetry, protocol: TCP}
  selector:
    k8s-app: kube-state-metrics
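Note that both targetPort values are port names rather than numbers; Kubernetes resolves them against identically named ports declared on the containers. For this to match, the kube-state-metrics container in the deployment is expected to declare its ports along these lines (an illustrative snippet):

ports:
- name: http-metrics   # Kubernetes API object metrics
  containerPort: 8080
- name: telemetry      # the exporter's own internal metrics
  containerPort: 8081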

The service manifest can be applied as follows:

kubectl apply -f ./kube-state-metrics/kube-state-metrics-service.yaml

With the service in place, we can validate both metrics endpoints using the following command:

minikube service kube-state-metrics -n monitoring

This will open two different browser tabs, one for each metrics endpoint:

Figure 6.5: The kube-state-metrics web interface
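If you'd rather validate from the command line, minikube can print the NodePort URLs instead of opening a browser, and curl can then fetch each endpoint (the IP address, ports, and sample output below are illustrative):

minikube service kube-state-metrics -n monitoring --url
http://192.168.99.100:30080
http://192.168.99.100:30081

curl -s http://192.168.99.100:30080/metrics | head -n 2
# HELP kube_pod_info Information about pod.
# TYPE kube_pod_info gauge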

Finally, it is time to configure Prometheus to scrape both endpoints using the ServiceMonitor manifest as shown here:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: kube-state-metrics
  name: kube-state-metrics
  namespace: monitoring
spec:
  endpoints:
  - interval: 30s
    port: http-metrics
  - interval: 30s
    port: telemetry
  selector:
    matchLabels:
      k8s-app: kube-state-metrics

It can now be applied using the following command:

kubectl apply -f ./kube-state-metrics/kube-state-metrics-servicemonitor.yaml

We can now validate the correct configuration of scrape targets in Prometheus, using the following instruction to open its web interface:

minikube service prometheus-service -n monitoring

Figure 6.6: Prometheus /targets endpoint showing kube-state-metrics targets for metrics and telemetry
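As a complementary check, querying the up metric in the expression browser should return one series per scraped endpoint, both with a value of 1. Assuming the Prometheus Operator's default behavior of deriving the job label from the service name, the query would be:

up{job="kube-state-metrics"}

The two resulting series are distinguishable by their endpoint label, which should carry the port names http-metrics and telemetry.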

Some interesting metrics from kube-state-metrics that can be used to keep an eye on your Kubernetes clusters are the following (example alerting expressions built on them are sketched after this list):

  • kube_pod_container_status_restarts_total, which can tell you whether a given pod is restarting in a loop
  • kube_pod_status_phase, which can be used to alert on pods that have been in a non-ready state for a long time
  • kube_<object>_status_observed_generation compared with kube_<object>_metadata_generation, which can give you a sense of when a given object has failed but hasn't been rolled back
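To make these suggestions concrete, the following sketch shows alerting-style rules built on those metrics; the thresholds, durations, and the use of kube_deployment for the generation comparison are illustrative choices, not recommendations:

groups:
- name: kube-state-metrics-examples
  rules:
  # More than three container restarts within 15 minutes
  - alert: PodRestartingInALoop
    expr: increase(kube_pod_container_status_restarts_total[15m]) > 3
    for: 10m
  # Pod stuck in a non-running phase
  - alert: PodInNonReadyState
    expr: kube_pod_status_phase{phase=~"Pending|Unknown|Failed"} == 1
    for: 15m
  # Rollout not observed by the controller: possibly failed without rollback
  - alert: DeploymentGenerationMismatch
    expr: kube_deployment_status_observed_generation != kube_deployment_metadata_generation
    for: 15m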