How it works...

To verify our HA cluster, let's take a look at the pods in the kube-system namespace:

$ kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-etcd-6bnrk                         1/1     Running   0          1d
calico-etcd-p7lpv                         1/1     Running   0          1d
calico-kube-controllers-d554689d5-qjht2   1/1     Running   0          1d
calico-node-2r2zs                         2/2     Running   0          1d
calico-node-97fjk                         2/2     Running   0          1d
calico-node-t55l8                         2/2     Running   0          1d
kube-apiserver-master01                   1/1     Running   0          1d
kube-apiserver-master02                   1/1     Running   0          1d
kube-controller-manager-master01          1/1     Running   0          1d
kube-controller-manager-master02          1/1     Running   0          1d
kube-dns-6f4fd4bdf-xbfvp                  3/3     Running   0          1d
kube-proxy-8jk69                          1/1     Running   0          1d
kube-proxy-qbt7q                          1/1     Running   0          1d
kube-proxy-rkxwp                          1/1     Running   0          1d
kube-scheduler-master01                   1/1     Running   0          1d
kube-scheduler-master02                   1/1     Running   0          1d

These pods are working as system daemons: Kubernetes system services such as the API server, Kubernetes add-ons such as the DNS server, and the CNI components; here, we used Calico. But wait! If you take a closer look at the pods, you may wonder why the controller manager and the scheduler run on both masters. Shouldn't there be just a single one of each in an HA cluster?
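
If you want to double-check which master each control plane pod is scheduled on, adding the -o wide flag to the same command prints a NODE column for every pod:

// -o wide shows the node each pod is running on
$ kubectl get pod -n kube-system -o wide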

As we understood in the previous section, we should avoid having multiple controller managers and multiple schedulers working at the same time in a Kubernetes system. This is because they might try to take over the same requests simultaneously, which not only creates conflicts but also wastes computing power. In fact, when kubeadm boots up the whole system, it starts the controller manager and the scheduler with the leader-elect flag enabled by default:

// check flag leader-elect on master node
$ sudo cat /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    ...
    - --leader-elect=true
    ...
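
The scheduler's static pod manifest sits in the same directory, so a quick grep (assuming the default kubeadm manifest path) shows whether the flag is enabled there as well:

// the scheduler manifest should carry the same flag
$ sudo grep leader-elect /etc/kubernetes/manifests/kube-scheduler.yaml
    - --leader-elect=true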

You will find that the scheduler has also been set with leader-elect. Nevertheless, why is there still more than one pod of each? The truth is that, of the pods sharing the same role, only one is working while the other stays idle. We can get more detailed information by looking at the system endpoints:

// ep is the abbreviation of resource type "endpoints"
$ kubectl get ep -n kube-system
NAME                      ENDPOINTS                                      AGE
calico-etcd               192.168.122.201:6666,192.168.122.202:6666     1d
kube-controller-manager   <none>                                         1d
kube-dns                  192.168.241.67:53,192.168.241.67:53           1d
kube-scheduler            <none>                                         1d

// check endpoint of controller-manager with YAML output format
$ kubectl get ep kube-controller-manager -n kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master01_bf4e22f7-4f56-11e8-aee3-52540048ed9b","leaseDurationSeconds":15,"acquireTime":"2018-05-04T04:51:11Z","renewTime":"2018-05-04T05:28:34Z","leaderTransitions":0}'
  creationTimestamp: 2018-05-04T04:51:11Z
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "3717"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 5e2717b0-0609-11e8-b36f-52540048ed9b

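If you only want the lease information itself, a JSONPath query can print the annotation on its own; note that the dots inside the annotation key have to be escaped:

// print only the leader-election annotation
$ kubectl get ep kube-controller-manager -n kube-system \
    -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'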
Take the endpoint for kube-controller-manager, for example: there is no pod or service IP attached to it (the same goes for kube-scheduler), because this endpoint never forwards any traffic. Instead, it acts as a lock object for leader election; its annotation records the current lease holder, the lease duration, and the acquire and renewal times, and every renewal is simply an update to this object (which is why its resourceVersion keeps growing). According to the annotation of the kube-controller-manager endpoint, it is our first master that took control. Let's check the controller managers on both masters:

// your pod should be named kube-controller-manager-<HOSTNAME OF MASTER>
$ kubectl logs kube-controller-manager-master01 -n kube-system | grep "leader"
I0504 04:51:03.015151 1 leaderelection.go:175] attempting to acquire leader lease kube-system/kube-controller-manager...
...
I0504 04:51:11.627737 1 event.go:218] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"5e2717b0-0609-11e8-b36f-52540048ed9b", APIVersion:"v1", ResourceVersion:"187", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' master01_bf4e22f7-4f56-11e8-aee3-52540048ed9b became leader

As you can see, only one master works as the leader and handles the requests, while the other one keeps waiting, repeatedly trying to acquire the lease and doing nothing else.
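
You can confirm this from the standby master as well: its controller manager log should contain only repeated attempts to acquire the lease, without any 'became leader' event (the pod name follows the kube-controller-manager-<HOSTNAME OF MASTER> pattern, so adjust it to your environment):

// the standby only keeps trying to acquire the lease
$ kubectl logs kube-controller-manager-master02 -n kube-system | grep "leader"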

For a further test, let's remove the current leader pod to see what happens. If we simply delete one of these system pods with a kubectl request, a kubeadm-provisioned cluster creates a new one right away, since the kubelet guarantees that any application defined under the /etc/kubernetes/manifests directory is kept running. Therefore, to avoid this automatic recovery, we move the configuration file out of the manifest directory instead. This keeps the pod down long enough for it to give away its leadership:

// jump into the current leader master node
// temporarily move the configuration file out of the manifests directory
$ sudo mv /etc/kubernetes/manifests/kube-controller-manager.yaml ./
// check the endpoint
$ kubectl get ep kube-controller-manager -n kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master02_4faf95c7-4f5b-11e8-bda3-525400b06612","leaseDurationSeconds":15,"acquireTime":"2018-05-04T05:37:03Z","renewTime":"2018-05-04T05:37:47Z","leaderTransitions":1}'
  creationTimestamp: 2018-05-04T04:51:11Z
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "4485"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 5e2717b0-0609-11e8-b36f-52540048ed9b
subsets: null

The /etc/kubernetes/manifests directory is defined by the kubelet's --pod-manifest-path flag. Check /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, the systemd drop-in configuration file for kubelet, and the kubelet help messages for more details.
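
For instance, on the kubeadm version used here, a quick grep of that drop-in file should reveal how the manifest path is passed to the kubelet (newer kubeadm releases may set staticPodPath in the kubelet configuration file instead):

// check how the static pod path is configured for the kubelet
$ grep pod-manifest-path /etc/systemd/system/kubelet.service.d/10-kubeadm.conf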

Now, it is the other master's turn to wake up its controller manager and put it to work. To bring the old leader back, put the configuration file for the controller manager back in place.
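
This is just the reverse of the earlier move, run on the old leader master:

// put the manifest back under the kubelet's watch
$ sudo mv ./kube-controller-manager.yaml /etc/kubernetes/manifests/

Once the static pod has been recreated, its log shows that the old leader is now the one waiting for the lease: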

$ kubectl logs kube-controller-manager-master01 -n kube-system
I0504 05:40:10.218946 1 controllermanager.go:116] Version: v1.10.2
W0504 05:40:10.219688 1 authentication.go:55] Authentication is disabled
I0504 05:40:10.219702 1 insecure_serving.go:44] Serving insecurely on 127.0.0.1:10252
I0504 05:40:10.219965 1 leaderelection.go:175] attempting to acquire leader lease kube-system/kube-controller-manager...
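
Note that bringing the old leader back does not trigger another failover: the second master keeps renewing its lease and stays in charge, while the first one simply waits for its turn. If you check the endpoint again at this point, the holderIdentity in the annotation should still point to the second master, and leaderTransitions should remain at 1:

// the annotation should still name the second master as the holder
$ kubectl get ep kube-controller-manager -n kube-system -o yaml | grep holderIdentity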