In this chapter, you will deploy a Kubernetes cluster on virtual machines (VMs) in Google Cloud.
Provisioning Compute Resources
You will install a cluster with a single control plane node. For this, you will need one virtual machine for the controller and several (here, two) virtual machines for the workers.
Full network connectivity among all the machines in the cluster is necessary. For this, you will create a Virtual Private Cloud (VPC) to host the cluster and define a subnet to provide addresses for the hosts.
From the Google Cloud Console, create a new project my-project; then, from a local terminal, log in and set the current region, zone, and project (you can also work from the Google Cloud Shell and skip the login step):
$ gcloud auth login
[...]
$ gcloud config set compute/region us-west1
Updated property [compute/region].
$ gcloud config set compute/zone us-west1-c
Updated property [compute/zone].
$ gcloud config set project my-project
Updated property [core/project].
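You can verify the active configuration at any time:
$ gcloud config list
[...]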
Create a dedicated Virtual Private Cloud (VPC):
$ gcloud compute networks create kubernetes-cluster --subnet-mode custom
Created [https://www.googleapis.com/compute/v1/projects/my-project/global/networks/kubernetes-cluster].
Create a subnet in the kubernetes-cluster VPC:
$ gcloud compute networks subnets create kubernetes \
    --network kubernetes-cluster \
    --range 10.240.0.0/24
Created [https://www.googleapis.com/compute/v1/projects/my-project/regions/us-west1/subnetworks/kubernetes].
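You can inspect the subnet you just created; the region is the one you set earlier:
$ gcloud compute networks subnets describe kubernetes \
    --region $(gcloud config get-value compute/region)
[...]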
Create firewall rules for internal communications (10.240.0.0/24 is the subnet of the hosts, and 10.244.0.0/16 is the Pod network CIDR you will pass to kubeadm init later):
$ gcloud compute firewall-rules create kubernetes-cluster-allow-internal \
    --allow tcp,udp,icmp \
    --network kubernetes-cluster \
    --source-ranges 10.240.0.0/24,10.244.0.0/16
Created [https://www.googleapis.com/compute/v1/projects/my-project/global/firewalls/kubernetes-cluster-allow-internal].
Create firewall rules for external communications (SSH, the Kubernetes API server on port 6443, and ICMP):
$ gcloud compute firewall-rules create kubernetes-cluster-allow-external \
    --allow tcp:22,tcp:6443,icmp \
    --network kubernetes-cluster \
    --source-ranges 0.0.0.0/0
Created [https://www.googleapis.com/compute/v1/projects/my-project/global/firewalls/kubernetes-cluster-allow-external].
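You can list the rules attached to the VPC to check them:
$ gcloud compute firewall-rules list --filter="network:kubernetes-cluster"
[...]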
Reserve a public IP address for the controller:
$ gcloud compute addresses create kubernetes-controller \
    --region $(gcloud config get-value compute/region)
Created [https://www.googleapis.com/compute/v1/projects/my-project/regions/us-west1/addresses/kubernetes-controller].
$ PUBLIC_IP=$(gcloud compute addresses describe kubernetes-controller \
    --region $(gcloud config get-value compute/region) \
    --format 'value(address)')
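You can display the reserved address to check that the variable is correctly set:
$ echo $PUBLIC_IP
[...]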
Create a VM for the controller:
$ gcloud compute instances create controller \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-1 \
    --private-network-ip 10.240.0.10 \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
    --address $PUBLIC_IP
Instance creation in progress for [controller]: [...]
Create VMs for the workers:
$ for i in 0 1; do
    gcloud compute instances create worker-${i} \
      --async \
      --boot-disk-size 200GB \
      --can-ip-forward \
      --image-family ubuntu-1804-lts \
      --image-project ubuntu-os-cloud \
      --machine-type n1-standard-1 \
      --private-network-ip 10.240.0.2${i} \
      --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
      --subnet kubernetes;
  done
Instance creation in progress for [worker-0]: [...]
Instance creation in progress for [worker-1]: [...]
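Instance creation is asynchronous (the --async flag). Before going on, you can check that the three VMs are running:
$ gcloud compute instances list
[...]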
Install Docker on the Hosts
Repeat these steps for the controller and each worker.
Connect to the host (here the controller):
$ gcloud compute ssh controller
Install the Docker service:
# Install packages to allow apt to use a repository over HTTPS
$ sudo apt-get update && sudo apt-get install -y \
    apt-transport-https ca-certificates curl software-properties-common
# Add Docker's official GPG key
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Add Docker apt repository
$ sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
# List available versions of Docker
$ apt-cache madison docker-ce
# Install Docker CE, for example version 5:19.03.12~3-0
$ sudo apt-get update && sudo apt-get install -y \
    docker-ce=5:19.03.12~3-0~ubuntu-bionic \
    docker-ce-cli=5:19.03.12~3-0~ubuntu-bionic
$ sudo apt-mark hold containerd.io docker-ce docker-ce-cli
# Set up the Docker daemon to use the systemd cgroup driver
$ cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
$ sudo mkdir -p /etc/systemd/system/docker.service.d
# Restart Docker and enable it to start at boot
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
$ sudo systemctl enable docker
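You can check that Docker is running and uses the systemd cgroup driver:
$ sudo docker info | grep -i cgroup
 Cgroup Driver: systemd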
Install kubeadm, kubelet, and kubectl on the Hosts
Repeat these steps for the controller and each worker.
Connect to the host (here the controller):
$ gcloud compute ssh controller
Install kubelet, kubeadm, and kubectl:
# Add GPG key
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
# Add Kubernetes apt repository
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
# Get Kubernetes apt repository data
$ sudo apt-get update
# List available versions of kubeadm
$ apt-cache madison kubeadm
# Install selected version (here 1.18.6-00)
$ sudo apt-get install -y kubelet=1.18.6-00 kubeadm=1.18.6-00 kubectl=1.18.6-00
$ sudo apt-mark hold kubelet kubeadm kubectl
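You can verify the installed versions; the output should match the version you selected:
$ kubeadm version -o short
v1.18.6
$ kubectl version --client --short
Client Version: v1.18.6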
Initialize the Control Plane Node
Run these steps on the controller only.
Initialize the cluster (this should take several minutes). The NumCPU preflight error is ignored because the n1-standard-1 machines have only one vCPU, while kubeadm normally requires two:
$ gcloud config set compute/zone us-west1-c # or your selected zone
Updated property [compute/zone].
$ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute instances describe controller \
    --zone $(gcloud config get-value compute/zone) \
    --format='get(networkInterfaces[0].accessConfigs[0].natIP)')
$ sudo kubeadm init \
    --pod-network-cidr=10.244.0.0/16 \
    --ignore-preflight-errors=NumCPU \
    --apiserver-cert-extra-sans=$KUBERNETES_PUBLIC_ADDRESS
At the end of the initialization, a message gives you the command to join workers to the cluster (a command starting with kubeadm join). Copy this command for later use.
Save the kubeconfig file generated by the installation in your home directory. It will give you admin access to the cluster:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
controller   NotReady   master   1m14s   v1.18.6
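The node is NotReady because no Pod network add-on is installed yet. You can also check that the API server is responding:
$ kubectl cluster-info
[...]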
Install the calico Pod network add-on:
$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Wait until the end of the installation. All Pods should have a Running status:
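$ kubectl get pods --all-namespaces --watch
[...]
The exact list of Pods depends on the Kubernetes and calico versions installed; press Ctrl-C to stop watching.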
When all the Pods are Running, the master node should be Ready:
$ kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
controller   Ready    master   2m23s   v1.18.6
Join the Workers
Repeat these steps for each worker.
Run the command you saved after running kubeadm init on the controller:
$ sudo kubeadm join 10.240.0.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
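Back on the controller, you can check that the workers have joined the cluster; the output should be similar to this (the ages will differ):
$ kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
controller   Ready    master   10m     v1.18.6
worker-0     Ready    <none>   2m5s    v1.18.6
worker-1     Ready    <none>   2m2s    v1.18.6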
If you didn’t save the command, you have to get the token and hash. On the controller, run:
$ kubeadm token list
TOKEN TTL EXPIRES
abcdef.ghijklmnopqrstuv 23h 2020-01-19T08:25:27Z
Tokens expire after 24 hours. If yours expired, you can create a new one:
$ kubeadm token create
bcdefg.hijklmnopqrstuvw
To obtain the hash value, you can run this command on the controller:
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt |
openssl rsa -pubin -outform der 2>/dev/null |
openssl dgst -sha256 -hex | sed 's/^.* //'
8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
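Alternatively, kubeadm can rebuild the whole join command for you: the following creates a new token and prints the matching kubeadm join command:
$ kubeadm token create --print-join-command
kubeadm join 10.240.0.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>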