Upgrading a cluster

To demonstrate how to upgrade the Kubernetes version, we'll first launch a cluster running version 1.8.7. For a detailed explanation of the parameters, refer to the previous recipes in this chapter. Input the following command:

// launch a cluster with the additional parameter --kubernetes-version 1.8.7
# kops create cluster --master-count 1 --node-count 2 \
  --zones us-east-1a,us-east-1b,us-east-1c \
  --node-size t2.micro --master-size t2.small \
  --topology private --networking calico --authorization=rbac \
  --cloud-labels "Environment=dev" --state $KOPS_STATE_STORE \
  --kubernetes-version 1.8.7 --name k8s-cookbook.net --yes

After a few minutes, we can see that the master and the nodes are up with version 1.8.7:

# kubectl get nodes 
NAME STATUS ROLES AGE VERSION
ip-172-20-44-128.ec2.internal Ready master 3m v1.8.7
ip-172-20-55-191.ec2.internal Ready node 1m v1.8.7
ip-172-20-64-30.ec2.internal Ready node 1m v1.8.7

In the following example, we'll walk through how to upgrade the Kubernetes cluster from 1.8.7 to 1.9.3 using kops. First, run the kops upgrade cluster command. kops will show us the latest version we can upgrade to:

# kops upgrade cluster k8s-cookbook.net --yes 
ITEM PROPERTY OLD NEW
Cluster KubernetesVersion 1.8.7 1.9.3
Updates applied to configuration. You can now apply these changes,
using `kops update cluster k8s-cookbook.net`

This indicates that the configuration has been updated and that we now need to update the cluster itself. First, run the command without --yes (dry-run mode) to check what will be modified:

// update cluster
# kops update cluster k8s-cookbook.net
...
Will modify resources:
LaunchConfiguration/master-us-east-1a.masters.k8s-cookbook.net
UserData
...
+ image: gcr.io/google_containers/kube-apiserver:v1.9.3
- image: gcr.io/google_containers/kube-apiserver:v1.8.7
...
+ image: gcr.io/google_containers/kube-controller-manager:v1.9.3
- image: gcr.io/google_containers/kube-controller-manager:v1.8.7
...
hostnameOverride: '@aws'
+ image: gcr.io/google_containers/kube-proxy:v1.9.3
- image: gcr.io/google_containers/kube-proxy:v1.8.7
logLevel: 2
kubeScheduler:
+ image: gcr.io/google_containers/kube-scheduler:v1.9.3
- image: gcr.io/google_containers/kube-scheduler:v1.8.7
...
Must specify --yes to apply changes

We can see that all of the components will move from v1.8.7 to v1.9.3 in the Auto Scaling launch configuration. After verifying that everything looks good, run the same command with the --yes parameter:

// run the same command with --yes 
# kops update cluster k8s-cookbook.net --yes
...
kops has set your kubectl context to k8s-cookbook.net
Cluster changes have been applied to the cloud.
Changes may require instances to restart: kops rolling-update cluster

In this case, we need to run a rolling update for the cluster:

# kops rolling-update cluster --yes
Using cluster from kubectl context: k8s-cookbook.net
NAME STATUS NEEDUPDATE READY MIN MAX NODES
master-us-east-1a NeedsUpdate 1 0 1 1 1
nodes NeedsUpdate 2 0 2 2 2
I0414 22:45:05.887024 51333 rollingupdate.go:193] Rolling update completed for cluster "k8s-cookbook.net"!

When performing the rolling update, kops cordons one node at a time, drains it, and terminates the instance. The Auto Scaling group then brings up a replacement node whose user data contains the updated Kubernetes component images. To avoid downtime, you should run multiple masters and multiple nodes as the baseline deployment.
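The status table that kops prints before rolling can also be consumed from a script, for example to confirm which instance groups still need a restart. A minimal sketch, using the sample table captured above as input (on a live cluster, you would pipe the output of `kops rolling-update cluster` run without --yes instead):

```shell
# Extract the instance groups whose STATUS column reads NeedsUpdate.
# The table below is the sample output shown above; replace the echo with a
# real `kops rolling-update cluster` invocation against a live cluster.
status='NAME STATUS NEEDUPDATE READY MIN MAX NODES
master-us-east-1a NeedsUpdate 1 0 1 1 1
nodes NeedsUpdate 2 0 2 2 2'

# Print only the group names that still need updating.
echo "$status" | awk '$2 == "NeedsUpdate" {print $1}'
```

This kind of filter is handy when rolling groups one at a time in a larger cluster.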

Once the rolling update is complete, we can check the node versions via kubectl get nodes:

# kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-20-116-81.ec2.internal Ready node 14m v1.9.3
ip-172-20-41-113.ec2.internal Ready master 17m v1.9.3
ip-172-20-56-230.ec2.internal Ready node 8m v1.9.3

All the nodes have been upgraded to 1.9.3!
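Beyond eyeballing the VERSION column, this check can be scripted. A minimal sketch, using the node list above as captured sample input (against a live cluster, you would pipe `kubectl get nodes --no-headers` instead):

```shell
# Count nodes whose VERSION column (field 5) has not reached the target
# release. The node list below is the sample output captured above; on a live
# cluster, replace the echo with `kubectl get nodes --no-headers`.
target="v1.9.3"
nodes='ip-172-20-116-81.ec2.internal Ready node 14m v1.9.3
ip-172-20-41-113.ec2.internal Ready master 17m v1.9.3
ip-172-20-56-230.ec2.internal Ready node 8m v1.9.3'

stale=$(echo "$nodes" | awk -v v="$target" '$5 != v' | wc -l)
echo "nodes not yet on $target: $stale"
```

A non-zero count means the rolling update has not yet replaced every instance.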
