First, create the namespace named apps. Then, create the ServiceAccount:
$ kubectl create namespace apps
$ kubectl create serviceaccount api-access -n apps
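As an optional sanity check (not part of the original exercise steps), verify that both objects exist:
$ kubectl get namespace apps
$ kubectl get serviceaccount api-access -n apps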
Alternatively, you can use the declarative approach. Create the namespace from the definition in the file apps-namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: apps
Create the namespace from the YAML file:
$ kubectl create -f apps-namespace.yaml
Create a new YAML file called api-serviceaccount.yaml with the following contents:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: api-access
  namespace: apps
Run the create command to instantiate the ServiceAccount from the YAML file:
$ kubectl create -f api-serviceaccount.yaml
Use the create clusterrole command to create the ClusterRole imperatively:
$ kubectl create clusterrole api-clusterrole --verb=watch,list,get --resource=pods
If you’d rather start with the YAML file, use the content shown in the file api-clusterrole.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: api-clusterrole
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["watch", "list", "get"]
Create the ClusterRole from the YAML file:
$ kubectl create -f api-clusterrole.yaml
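To confirm that the rules were registered as intended, you can describe the ClusterRole:
$ kubectl describe clusterrole api-clusterrole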
Use the create clusterrolebinding command to create the ClusterRoleBinding imperatively. Note that the command only expects the ClusterRole to bind and the subject to assign it to; the verbs and resources are already defined by the ClusterRole itself:
$ kubectl create clusterrolebinding api-clusterrolebinding --clusterrole=api-clusterrole --serviceaccount=apps:api-access
The declarative approach to the ClusterRoleBinding could look like the definition in the file api-clusterrolebinding.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: api-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: api-clusterrole
subjects:
- apiGroup: ""
  kind: ServiceAccount
  name: api-access
  namespace: apps
Create the ClusterRoleBinding from the YAML file:
$ kubectl create -f api-clusterrolebinding.yaml
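Before testing with raw API calls, a quick way to check the effective permissions is kubectl auth can-i combined with ServiceAccount impersonation. Given the rules defined above, you’d expect yes for the list operation and no for the delete operation:
$ kubectl auth can-i list pods --as=system:serviceaccount:apps:api-access -n rm
yes
$ kubectl auth can-i delete pods --as=system:serviceaccount:apps:api-access -n rm
no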
Execute the run command to create the Pods in the different namespaces. You will need to create the namespace rm before you can instantiate the Pod disposable.
$ kubectl run operator --image=nginx:1.21.1 --restart=Never --port=80 --serviceaccount=api-access -n apps
$ kubectl create namespace rm
$ kubectl run disposable --image=nginx:1.21.1 --restart=Never -n rm
The following YAML manifest shows the rm namespace definition stored in the file rm-namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: rm
The YAML representation of those Pods, stored in the file api-pods.yaml, could look as follows:
apiVersion: v1
kind: Pod
metadata:
  name: operator
  namespace: apps
spec:
  serviceAccountName: api-access
  containers:
  - name: operator
    image: nginx:1.21.1
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: disposable
  namespace: rm
spec:
  containers:
  - name: disposable
    image: nginx:1.21.1
Create the namespace and Pods from the YAML files:
$ kubectl create -f rm-namespace.yaml
$ kubectl create -f api-pods.yaml
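Optionally, confirm that both Pods transition into the “Running” status:
$ kubectl get pod operator -n apps
$ kubectl get pod disposable -n rm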
Determine the API server endpoint and the Secret access token of the ServiceAccount. You will need this information for making the API calls.
$ kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
https://192.168.64.4:8443
$ kubectl get secret $(kubectl get serviceaccount api-access -n apps -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' -n apps | base64 --decode
eyJhbGciOiJSUzI1NiIsImtpZCI6Ii1hOUhI...
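The auto-generated token Secret shown above exists because this cluster runs Kubernetes 1.21. Note that on clusters running version 1.24 or newer, ServiceAccounts no longer receive a long-lived token Secret automatically; there you would request a short-lived token instead:
$ kubectl create token api-access -n apps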
Open an interactive shell to the Pod named operator:
$ kubectl exec operator -it -n apps -- /bin/sh
Emit API calls for listing all Pods and for deleting the Pod disposable living in the rm namespace. You will find that the list operation is permitted but the delete operation isn’t.
# curl https://192.168.64.4:8443/api/v1/namespaces/rm/pods --header "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Ii1hOUhI..." --insecure
{
  "kind": "PodList",
  "apiVersion": "v1",
  ...
}
# curl -X DELETE https://192.168.64.4:8443/api/v1/namespaces/rm/pods/disposable --header "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Ii1hOUhI..." --insecure
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods \"disposable\" is forbidden: User \"system:serviceaccount:apps:api-access\" cannot delete resource \"pods\" in API group \"\" in the namespace \"rm\"",
  "reason": "Forbidden",
  "details": {
    "name": "disposable",
    "kind": "pods"
  },
  "code": 403
}
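As an alternative to pasting the token manually, you could read the credentials from inside the Pod: Kubernetes mounts the ServiceAccount token and the cluster’s CA certificate into every container by default. A minimal sketch from the operator shell, which also avoids the --insecure flag:
# TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# curl https://192.168.64.4:8443/api/v1/namespaces/rm/pods --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt --header "Authorization: Bearer $TOKEN"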
The solution to this sample exercise requires a lot of manual steps. The following commands do not render their output.
Open an interactive shell to the master node using Vagrant.
$ vagrant ssh k8s-master
Upgrade kubeadm to version 1.21.2 and apply it.
$ sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get install -y kubeadm=1.21.2-00 && sudo apt-mark hold kubeadm
$ sudo kubeadm upgrade apply v1.21.2
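If you want to see which target versions are available before committing to one, kubeadm can render an upgrade plan; running it ahead of the apply command is a common precaution:
$ sudo kubeadm upgrade plan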
Drain the node, upgrade the kubelet and kubectl, restart the kubelet, and uncordon the node.
$ kubectl drain k8s-master --ignore-daemonsets
$ sudo apt-get update && sudo apt-get install -y --allow-change-held-packages kubelet=1.21.2-00 kubectl=1.21.2-00
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
$ kubectl uncordon k8s-master
The version of the node should now say v1.21.2. Exit out of the node.
$ kubectl get nodes
$ exit
Open an interactive shell to the first worker node using Vagrant. Repeat all of the following steps for the other worker nodes.
$ vagrant ssh worker-1
Upgrade kubeadm to version 1.21.2 and apply it to the node.
$ sudo apt-get update && sudo apt-get install -y --allow-change-held-packages kubeadm=1.21.2-00
$ sudo kubeadm upgrade node
Drain the node, upgrade the kubelet and kubectl, restart the kubelet, and uncordon the node.
$ kubectl drain worker-1 --ignore-daemonsets
$ sudo apt-get update && sudo apt-get install -y --allow-change-held-packages kubelet=1.21.2-00 kubectl=1.21.2-00
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
$ kubectl uncordon worker-1
The version of the node should now say v1.21.2. Exit out of the node.
$ kubectl get nodes
$ exit
The solution to this sample exercise requires a lot of manual steps. The following commands do not render their output.
Open an interactive shell to the master node using Vagrant. That’s the node with the etcdctl command-line tool installed.
$ vagrant ssh k8s-master
Determine the parameters of the Pod etcd-k8s-master by describing it. Use the correct parameter values to create a snapshot file.
$ kubectl describe pod etcd-k8s-master -n kube-system
$ sudo ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /opt/etcd.bak
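As an optional integrity check, you can print the status of the snapshot file before restoring from it:
$ sudo ETCDCTL_API=3 etcdctl snapshot status /opt/etcd.bak --write-out=table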
Restore the backup from the snapshot file. Edit the etcd YAML manifest and change the value of spec.volumes.hostPath.path for the Volume named etcd-data to the restored data directory, /var/bak.
$ sudo ETCDCTL_API=3 etcdctl --data-dir=/var/bak snapshot restore /opt/etcd.bak
$ sudo vim /etc/kubernetes/manifests/etcd.yaml
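The relevant portion of the edited manifest could look as follows; the surrounding fields depend on your kubeadm version, but the essential change is pointing the etcd-data volume at the restored data directory:
volumes:
- hostPath:
    path: /var/bak
    type: DirectoryOrCreate
  name: etcd-data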
After a short while, the Pod etcd-k8s-master should transition back into the “Running” status. Exit out of the node.
$ kubectl get pod etcd-k8s-master -n kube-system
$ exit