Stacked nodes

To run a set of stacked nodes, you'll need to bootstrap the first control plane node with a kubeadm-conf-01.yaml template. Again, this example uses Calico, but you can configure the networking as you please. You'll need to substitute the following values with your own to make the example work:

  • LB_DNS
  • LB_PORT
  • CONTROL01_IP
  • CONTROL01_HOSTNAME

Open up a new file, kubeadm-conf-01.yaml, with your favorite IDE:

apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
apiServerCertSANs:
- "LB_DNS"
api:
  controlPlaneEndpoint: "LB_DNS:LB_PORT"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://CONTROL01_IP:2379"
      advertise-client-urls: "https://CONTROL01_IP:2379"
      listen-peer-urls: "https://CONTROL01_IP:2380"
      initial-advertise-peer-urls: "https://CONTROL01_IP:2380"
      initial-cluster: "CONTROL01_HOSTNAME=https://CONTROL01_IP:2380"
    serverCertSANs:
    - CONTROL01_HOSTNAME
    - CONTROL01_IP
    peerCertSANs:
    - CONTROL01_HOSTNAME
    - CONTROL01_IP
networking:
  podSubnet: "192.168.0.0/16"
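
One way to fill in the placeholders is a round of sed substitutions. The values below (lb.example.com, 6443, 10.0.0.11, cp01) are hypothetical stand-ins for your own load balancer and first node, and the demo runs against a two-line stand-in file purely for illustration; in practice you'd point the same sed invocation at the full kubeadm-conf-01.yaml:

```shell
# Hypothetical values -- replace with your own environment's details.
LB_DNS=lb.example.com
LB_PORT=6443
CONTROL01_IP=10.0.0.11
CONTROL01_HOSTNAME=cp01

# Two representative lines from the template, used here for illustration.
cat > demo-conf.yaml <<'EOF'
controlPlaneEndpoint: "LB_DNS:LB_PORT"
initial-cluster: "CONTROL01_HOSTNAME=https://CONTROL01_IP:2380"
EOF

# Substitute every placeholder in place (GNU sed assumed).
sed -i \
  -e "s/LB_DNS/${LB_DNS}/g" \
  -e "s/LB_PORT/${LB_PORT}/g" \
  -e "s/CONTROL01_IP/${CONTROL01_IP}/g" \
  -e "s/CONTROL01_HOSTNAME/${CONTROL01_HOSTNAME}/g" \
  demo-conf.yaml

cat demo-conf.yaml
```

Since none of the placeholder names is a substring of another, the substitution order doesn't matter here.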

Once you have this file, pass it to kubeadm init:

kubeadm init --config kubeadm-conf-01.yaml

Once this command completes, you'll need to copy the following certificates and files to the other control plane nodes:

/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/ca.key
/etc/kubernetes/pki/sa.key
/etc/kubernetes/pki/sa.pub
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-ca.key
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/etcd/ca.key
/etc/kubernetes/admin.conf
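
One way to script that copy step is to generate a small scp script from the list above. The host names cp02 and cp03 and the home-directory destination are assumptions to adjust for your environment. Note that the etcd CA pair is renamed to etcd-ca.crt and etcd-ca.key in transit so it doesn't collide with the cluster CA; the mv commands later in this section expect those names:

```shell
# Write (but don't yet run) a script that copies the certificates to the
# other control plane nodes. cp02/cp03 and the destination home directory
# are assumptions; substitute your own hosts and user.
cat > copy-certs.sh <<'EOF'
#!/bin/sh
set -e
for node in cp02 cp03; do
  scp /etc/kubernetes/pki/ca.crt             "${USER}@${node}:~/ca.crt"
  scp /etc/kubernetes/pki/ca.key             "${USER}@${node}:~/ca.key"
  scp /etc/kubernetes/pki/sa.key             "${USER}@${node}:~/sa.key"
  scp /etc/kubernetes/pki/sa.pub             "${USER}@${node}:~/sa.pub"
  scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}@${node}:~/front-proxy-ca.crt"
  scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}@${node}:~/front-proxy-ca.key"
  # Rename the etcd CA pair so it doesn't collide with the cluster CA.
  scp /etc/kubernetes/pki/etcd/ca.crt        "${USER}@${node}:~/etcd-ca.crt"
  scp /etc/kubernetes/pki/etcd/ca.key        "${USER}@${node}:~/etcd-ca.key"
  scp /etc/kubernetes/admin.conf             "${USER}@${node}:~/admin.conf"
done
EOF
chmod +x copy-certs.sh
```

Review copy-certs.sh, then run it as root (or under sudo) from the first control plane node.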

To move forward, we'll need another template file on the second node, kubeadm-conf-02.yaml, to create the second stacked node. As we did previously, you'll need to replace the following values with your own:

  • LB_DNS
  • LB_PORT
  • CONTROL02_IP
  • CONTROL02_HOSTNAME

Open up a new file, kubeadm-conf-02.yaml, with your favorite IDE:

apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
apiServerCertSANs:
- "LB_DNS"
api:
  controlPlaneEndpoint: "LB_DNS:LB_PORT"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://CONTROL02_IP:2379"
      advertise-client-urls: "https://CONTROL02_IP:2379"
      listen-peer-urls: "https://CONTROL02_IP:2380"
      initial-advertise-peer-urls: "https://CONTROL02_IP:2380"
      initial-cluster: "CONTROL01_HOSTNAME=https://CONTROL01_IP:2380,CONTROL02_HOSTNAME=https://CONTROL02_IP:2380"
      initial-cluster-state: existing
    serverCertSANs:
    - CONTROL02_HOSTNAME
    - CONTROL02_IP
    peerCertSANs:
    - CONTROL02_HOSTNAME
    - CONTROL02_IP
networking:
  podSubnet: "192.168.0.0/16"

Before running this template, you'll need to move the copied files over to the correct directories. Here's an example that should be similar on your system:

mkdir -p /etc/kubernetes/pki/etcd
mv /home/${USER}/ca.crt /etc/kubernetes/pki/
mv /home/${USER}/ca.key /etc/kubernetes/pki/
mv /home/${USER}/sa.pub /etc/kubernetes/pki/
mv /home/${USER}/sa.key /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf

Once you've copied those files over, you can run a series of kubeadm commands to absorb the certificates, and then bootstrap the second node:

kubeadm alpha phase certs all --config kubeadm-conf-02.yaml
kubeadm alpha phase kubelet config write-to-disk --config kubeadm-conf-02.yaml
kubeadm alpha phase kubelet write-env-file --config kubeadm-conf-02.yaml
kubeadm alpha phase kubeconfig kubelet --config kubeadm-conf-02.yaml
systemctl start kubelet

Once that's complete, you can add the node to the etcd cluster as well. You'll need to set some variables first, using the IP addresses of the virtual machines running your nodes:

export CONTROL01_IP=<YOUR_IP_HERE>
export CONTROL01_HOSTNAME=cp01H
export CONTROL02_IP=<YOUR_IP_HERE>
export CONTROL02_HOSTNAME=cp02H

Once you've set up those variables, run the following kubectl and kubeadm commands. First, add the certificates:

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl exec -n kube-system etcd-${CONTROL01_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CONTROL01_IP}:2379 member add ${CONTROL02_HOSTNAME} https://${CONTROL02_IP}:2380

Next, phase in the configuration for etcd:

kubeadm alpha phase etcd local --config kubeadm-conf-02.yaml

This command causes the etcd cluster to become unavailable for a short period, but that is by design. You can then deploy the remaining kubeconfig and control plane components, and mark the node as a master:

kubeadm alpha phase kubeconfig all --config kubeadm-conf-02.yaml
kubeadm alpha phase controlplane all --config kubeadm-conf-02.yaml
kubeadm alpha phase mark-master --config kubeadm-conf-02.yaml
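
The brief etcd outage mentioned above comes from quorum arithmetic: an etcd cluster of n members needs a majority, floor(n/2) + 1, of them up to serve writes, so the moment the second member is registered, quorum jumps from one to two and the cluster stalls until that member is actually running. A quick sketch of the rule:

```shell
# Quorum (majority) required for an etcd cluster of n members.
for n in 1 2 3 4 5; do
  echo "members=$n quorum=$(( n / 2 + 1 ))"
done
```

This is also why three stacked nodes, not two, is the minimum for real fault tolerance: at n=3 the cluster can survive the loss of one member, while at n=2 it cannot.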

We'll run through this once more with the third node, adding one more member to the initial-cluster value under etcd's extraArgs.

You'll need to create a third kubeadm-conf-03.yaml file on the third machine. Follow this template and substitute the variables, like we did previously:

apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
apiServerCertSANs:
- "LB_DNS"
api:
  controlPlaneEndpoint: "LB_DNS:LB_PORT"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://CONTROL03_IP:2379"
      advertise-client-urls: "https://CONTROL03_IP:2379"
      listen-peer-urls: "https://CONTROL03_IP:2380"
      initial-advertise-peer-urls: "https://CONTROL03_IP:2380"
      initial-cluster: "CONTROL01_HOSTNAME=https://CONTROL01_IP:2380,CONTROL02_HOSTNAME=https://CONTROL02_IP:2380,CONTROL03_HOSTNAME=https://CONTROL03_IP:2380"
      initial-cluster-state: existing
    serverCertSANs:
    - CONTROL03_HOSTNAME
    - CONTROL03_IP
    peerCertSANs:
    - CONTROL03_HOSTNAME
    - CONTROL03_IP
networking:
  podSubnet: "192.168.0.0/16"

You'll need to move the files again:

mkdir -p /etc/kubernetes/pki/etcd
mv /home/${USER}/ca.crt /etc/kubernetes/pki/
mv /home/${USER}/ca.key /etc/kubernetes/pki/
mv /home/${USER}/sa.pub /etc/kubernetes/pki/
mv /home/${USER}/sa.key /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf

And, once again, you'll need to run the following commands to bootstrap the node:

kubeadm alpha phase certs all --config kubeadm-conf-03.yaml
kubeadm alpha phase kubelet config write-to-disk --config kubeadm-conf-03.yaml
kubeadm alpha phase kubelet write-env-file --config kubeadm-conf-03.yaml
kubeadm alpha phase kubeconfig kubelet --config kubeadm-conf-03.yaml
systemctl start kubelet

And then, add the node to the etcd cluster once more. Set the variables again:

export CONTROL01_IP=<YOUR_IP_HERE>
export CONTROL01_HOSTNAME=cp01H
export CONTROL03_IP=<YOUR_IP_HERE>
export CONTROL03_HOSTNAME=cp03H

Next, register the new member and phase in the etcd configuration:

export KUBECONFIG=/etc/kubernetes/admin.conf

kubectl exec -n kube-system etcd-${CONTROL01_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CONTROL01_IP}:2379 member add ${CONTROL03_HOSTNAME} https://${CONTROL03_IP}:2380

kubeadm alpha phase etcd local --config kubeadm-conf-03.yaml

After that's complete, we can once again deploy the rest of the components of the control plane and mark the node as a master. Run the following commands:

kubeadm alpha phase kubeconfig all --config kubeadm-conf-03.yaml
kubeadm alpha phase controlplane all --config kubeadm-conf-03.yaml
kubeadm alpha phase mark-master --config kubeadm-conf-03.yaml

Great work!
