Cluster state

The second major piece of the Kubernetes architecture, the cluster state, is the etcd key-value store. etcd is consistent and highly available, and is designed to quickly and reliably give Kubernetes access to critical data about the cluster's current and desired state. etcd provides this distributed coordination of data through core concepts such as leader election and distributed locks. The Kubernetes API, via its API server, is in charge of updating the objects in etcd that correspond to the RESTful operations of the cluster. This is very important to remember: the API server is responsible for managing what goes into Kubernetes' picture of the world. Other components in the ecosystem watch for these changes in order to bring themselves into the desired state.
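To make this concrete, every object the API server persists lives under the /registry prefix in etcd. As a minimal sketch, assuming you have direct (and appropriately authenticated) access to one of the cluster's etcd endpoints and the v3 etcdctl client, you can list those keys; the key names shown here are illustrative:

$ ETCDCTL_API=3 etcdctl get /registry --prefix --keys-only
/registry/namespaces/default
/registry/pods/kube-system/kube-apiserver-node1
/registry/services/specs/default/kubernetes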

This is of particular importance because every component we've described in the Kubernetes Master, and those that we'll investigate on the nodes below, is stateless: their state is stored elsewhere, and that elsewhere is etcd.

Kubernetes doesn't take direct, imperative action to make things happen on the cluster; instead, the Kubernetes API, via the API server, writes into etcd what should be true, and the various pieces of Kubernetes then make it so. etcd exposes this data through a simple HTTP/JSON API, which makes it straightforward to interact with.
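As a sketch of that interface, here is how you might write and read back a key from the command line, assuming etcd v3.4 or later (whose gRPC gateway serves JSON under the /v3 path) listening on 127.0.0.1:2379; the v3 API base64-encodes keys and values, so Zm9v is foo and YmFy is bar:

$ curl -L http://127.0.0.1:2379/v3/kv/put -X POST -d '{"key": "Zm9v", "value": "YmFy"}'
$ curl -L http://127.0.0.1:2379/v3/kv/range -X POST -d '{"key": "Zm9v"}'

The second call returns a JSON response containing the base64-encoded key and value, along with revision metadata.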

etcd is also important to the Kubernetes security model because it sits at a very low layer of the Kubernetes system, which means that any component that can write data to etcd effectively has root on the cluster. Later on, we'll look into how the Kubernetes system is divided into layers in order to minimize this exposure. You can think of etcd as underlying Kubernetes, alongside other parts of the ecosystem such as the container runtime, an image registry, file storage, a cloud provider interface, and other dependencies that Kubernetes manages but does not take an opinionated perspective on.

In non-production Kubernetes clusters, you'll often see single-node instantiations of etcd, which save money on compute, simplify operations, and reduce complexity. Note, however, that production-ready clusters require a multi-master strategy with an odd number (2n+1) of etcd nodes, so that data is replicated effectively across masters and the cluster can tolerate failures while maintaining quorum. Check the etcd documentation for more information.
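To see why odd numbers matter: a cluster of 2n+1 members keeps quorum (n+1 votes) through the loss of any n members, so three members tolerate one failure and five tolerate two, while a fourth member adds no fault tolerance beyond what three provide. Against a running cluster, you can verify each member's health with etcdctl; the endpoint names here are hypothetical:

$ ETCDCTL_API=3 etcdctl --endpoints=https://etcd-0:2379,https://etcd-1:2379,https://etcd-2:2379 endpoint health
https://etcd-0:2379 is healthy: successfully committed proposal: took = 1.8ms
https://etcd-1:2379 is healthy: successfully committed proposal: took = 2.1ms
https://etcd-2:2379 is healthy: successfully committed proposal: took = 2.4ms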

If you're in front of your cluster, you can check the status of etcd by querying componentstatuses (or cs for short):

[node3 /]$ kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

Due to a bug in AKS, this will currently not work on Azure. You can track the following issue to see when it is resolved:

https://github.com/Azure/AKS/issues/173 (kubectl get componentstatus fails for scheduler and controller-manager)

If the etcd service were unhealthy, the output would look something like this:

[node3 /]$ kubectl get cs
NAME                 STATUS      MESSAGE                                                                                      ERROR
etcd-0               Unhealthy   Get http://127.0.0.1:2379/health: dial tcp 127.0.0.1:2379: getsockopt: connection refused
controller-manager   Healthy     ok
scheduler            Healthy     ok