Using HorizontalPodAutoscaler without metrics adapter

If we do not create a metrics adapter, the Metrics Aggregator only knows about CPU and memory usage of containers and nodes. To make things more complicated, that information covers only the last few minutes. Since HPA is concerned only with Pods and the containers inside them, we are limited to two metrics. When we create an HPA, it will scale our Pods up or down whenever memory or CPU consumption of the containers that constitute those Pods is above or below the predefined thresholds.
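To make that concrete, what follows is a minimal sketch of such an HPA using the autoscaling/v2 API. The Deployment name (my-app), the replica bounds, and the target percentages are placeholders invented for illustration; the utilization targets are expressed as a percentage of the resource requests defined in the containers of the target Pods.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app                     # hypothetical name used only for illustration
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                   # the Deployment whose replicas will be adjusted
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80     # percentage of the containers' CPU requests
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80     # percentage of the containers' memory requests

With a definition like this one, the HPA keeps the number of replicas between two and five, adding Pods when the average CPU or memory utilization rises above eighty percent of the requests, and removing them when both drop sufficiently below that target.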

Metrics Server periodically fetches resource usage (CPU and memory) from the Kubelets running on the worker nodes.

Those metrics are passed to the Metrics Aggregator which, in this scenario, does not add any additional value. From there on, HPAs periodically consult the data in the Metrics Aggregator (through its API endpoint). When there is a discrepancy between the target values defined in an HPA and the actual values, the HPA changes the number of replicas of a Deployment or a StatefulSet. As we already know, changes to those controllers are carried out through ReplicaSets, which create and delete Pods, and each Pod is converted into containers by the Kubelet running on the node where it is scheduled.

Figure 5-1: HPA with out of the box setup (arrows show the flow of data)
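The API endpoint the HPAs consult is the aggregated metrics API (/apis/metrics.k8s.io/v1beta1). Metrics Server is plugged into it through an APIService object similar to the sketch below; the exact definition depends on how Metrics Server was installed, so treat the names and flags here as assumptions rather than as the manifest of any particular distribution.

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io            # the API group HPAs query for CPU and memory usage
  version: v1beta1
  service:
    name: metrics-server           # requests to the group are proxied to this Service
    namespace: kube-system         # assumed namespace; installations may differ
  insecureSkipTLSVerify: true      # many default installations skip TLS verification here
  groupPriorityMinimum: 100
  versionPriority: 100

Once that registration exists, a request like kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods returns the same data the HPA controller uses.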

Functionally, the flow we just described works well. The only problem is the data available in the Metrics Aggregator: it is limited to memory and CPU, and more often than not, that is not enough. So, we do not need to change the process, only to extend the data available to HPAs. We can do that through a metrics adapter.
