Appendix - Mock CKA scenario-based practice test resolutions

Chapter 2 – Installing and Configuring Kubernetes Clusters

You have two virtual machines: master-0 and worker-0. Please complete the following mock scenarios.

Scenario 1

Install the latest version of kubeadm, then create a basic kubeadm cluster on the master-0 node, and get the node information.

  1. Update the apt package index, add the Google Cloud public signing key, and set up the Kubernetes apt repository by running the following instructions:

    sudo apt-get update

    sudo apt-get install -y apt-transport-https ca-certificates curl

    sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

    echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

  2. Start by updating the apt package index, then install kubelet and kubeadm:

    sudo apt-get update

    sudo apt-get install -y kubelet kubeadm

  3. At this point, if you haven’t installed kubectl yet, you can also install kubelet, kubeadm, and kubectl in one go:

    sudo apt-get update

    sudo apt-get install -y kubelet kubeadm kubectl

  4. Use the following command to pin the versions of the utilities you have installed:

    sudo apt-mark hold kubelet kubeadm kubectl

  5. As a regular user with sudo privileges on your master node machine, you can initialize the control plane by using the kubeadm init command as follows:

      sudo kubeadm init --pod-network-cidr=192.168.0.0/16

  6. After your Kubernetes control-plane is initialized successfully, you can execute the following commands to configure kubectl:

    mkdir -p $HOME/.kube

    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

    sudo chown $(id -u):$(id -g) $HOME/.kube/config
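
To get the node information, as the scenario requires, use the following command:

kubectl get nodes -o wide

Note that master-0 may report NotReady until a pod network add-on matching the --pod-network-cidr above (Calico, for example) has been installed.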

Scenario 2

SSH to worker-0 and join it to the master-0 node.

You can use the following command to join worker nodes to the Kubernetes cluster. This command can be used repeatedly each time you have new worker nodes to join, with the token that you acquired from the output of kubeadm init on the control plane:

sudo kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
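
If the token has expired or you no longer have the original join command, you can print a fresh one from the master-0 node:

kubeadm token create --print-join-command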

Scenario 3 (optional)

Set up a local minikube cluster, and schedule your first workload, called hello-packt.

Note

Check out the Installing and configuring Kubernetes cluster section in Chapter 2 to set up a single-node minikube cluster.

Let’s quickly run an app on the cluster called hello-packt using busybox (pod names must be lowercase RFC 1123 labels, so a name such as helloPackt would be rejected):

kubectl run hello-packt --image=busybox
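
Note that busybox exits immediately when it is given no command, so the pod may end up in CrashLoopBackOff; to keep it running for inspection, you can pass a command instead:

kubectl run hello-packt --image=busybox -- sleep 3600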

Chapter 3 – Maintaining Kubernetes Clusters

You have two virtual machines: master-0 and worker-0. Please complete the following mock scenarios.

Scenario 1

SSH to the master-0 node, check the current kubeadm version, and upgrade to the latest kubeadm version. Check the current kubectl version, and upgrade to the latest kubectl version.

Start by checking the current versions with the following commands once we’re on the master node:

   kubeadm version

   kubectl version  

Check out the latest available versions:

  apt update

  apt-cache madison kubeadm

Upgrade kubeadm using the following command:

apt-mark unhold kubeadm &&

apt-get update && apt-get install -y kubeadm=1.xx.x-00 &&

apt-mark hold kubeadm

Check if your cluster can be upgraded, and the available versions that your cluster can be upgraded to by using the following command:

    kubeadm upgrade plan

Use the following command to upgrade the control plane to the chosen version:

    kubeadm upgrade apply v1.xx.y
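
The scenario also asks to upgrade kubectl, which is upgraded the same way as the kubeadm package:

apt-mark unhold kubectl &&

apt-get update && apt-get install -y kubectl=1.xx.x-00 &&

apt-mark hold kubectl

You can then verify the new versions:

kubectl version

kubectl get nodes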

Scenario 2

SSH to the worker-0 node, check the current kubeadm version, and upgrade to the latest kubeadm version. Check the current kubelet version, and upgrade to the latest kubelet version.

Start by checking the current versions with the following commands once we’re on the worker node:

   kubeadm version

   kubelet --version

Check out the latest available versions:

  apt update

  apt-cache madison kubeadm

Upgrade kubeadm using the following command:

apt-mark unhold kubeadm &&

apt-get update && apt-get install -y kubeadm=1.xx.x-00 &&

apt-mark hold kubeadm

Upgrade the local kubelet configuration using the following command (on worker nodes, this command takes the place of kubeadm upgrade apply, which is only run on the control plane):

  sudo kubeadm upgrade node

Drain the node to evict its workloads and prepare it for maintenance, using the following command from a machine with kubectl access, such as master-0:

kubectl drain worker-0 --ignore-daemonsets

Upgrade the kubelet and kubectl packages using the following command:

apt-mark unhold kubelet kubectl &&

apt-get update && apt-get install -y kubelet=1.xx.x-00 kubectl=1.xx.x-00 &&

apt-mark hold kubelet kubectl

Restart the kubelet for the changes to take effect:

sudo systemctl daemon-reload

sudo systemctl restart kubelet

Finally, uncordon the worker node to bring it back into service; the node will then be shown as schedulable again:

kubectl uncordon worker-0

Scenario 3

SSH to the master-0 node, and back up the etcd store.

Use the following command to check the endpoint status:

sudo ETCDCTL_API=3 etcdctl endpoint status --endpoints=https://172.16.16.129:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --write-out=table

Use the following command to back up etcd:

sudo ETCDCTL_API=3 etcdctl snapshot save snapshotdb \
  --endpoints=https://172.16.16.129:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key
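
To verify the snapshot that was just written, you can check its status; note that in newer etcd releases this subcommand has moved to etcdutl:

sudo ETCDCTL_API=3 etcdctl snapshot status snapshotdb --write-out=table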

Scenario 4

SSH to the master-0 node, and restore the etcd store to the previous backup.

Restore etcd from the previous backup using the following command:

sudo ETCDCTL_API=3 etcdctl --endpoints 172.16.16.129:2379 snapshot restore snapshotdb
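
In practice, you would typically restore into a fresh data directory and point etcd at it; a minimal sketch, where the /var/lib/etcd-backup path is an assumption:

sudo ETCDCTL_API=3 etcdctl snapshot restore snapshotdb --data-dir=/var/lib/etcd-backup

Afterwards, update the etcd data hostPath in /etc/kubernetes/manifests/etcd.yaml to point at the new directory so that the static pod picks up the restored data.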

Chapter 4 – Application scheduling and lifecycle management

You have two virtual machines: master-0 and worker-0. Please complete the following mock scenarios.

Scenario 1

SSH to the worker-0 node, and provision a new pod called nginx with a single nginx container.

Use the following command:

kubectl run nginx --image=nginx:alpine
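
Verify that the pod is up and running:

kubectl get pod nginx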

Scenario 2

SSH to worker-0, and then scale the nginx deployment to 5 replicas.

Use the following command (note that scaling applies to a deployment; a standalone pod created with kubectl run cannot be scaled):

kubectl scale deployment nginx --replicas=5
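
Verify that the deployment has been scaled out to 5 replicas:

kubectl get deployment nginx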

Scenario 3

SSH to worker-0, create a ConfigMap with a username and password, then attach it to a new busybox pod.

Create a yaml definition called packt-cm.yaml to define the ConfigMap as follows:

  apiVersion: v1 
  kind: ConfigMap 
  metadata: 
    name: packt-configmap 
  data: 
    myKey: packtUsername 
    myFav: packtPassword

Use the following command to deploy the yaml manifest:

kubectl apply -f packt-cm.yaml

Verify the ConfigMap by using the following command:

kubectl get configmap

Once you have the ConfigMap ready, create a yaml definition file to configure the pod to consume the ConfigMap as follows:

apiVersion: v1 
kind: Pod 
metadata: 
  name: packt-configmap 
spec: 
  containers: 
  - name: packt-container 
    image: busybox 
    command: ['sh', '-c', "echo $(MY_VAR) && sleep 3600"] 
    env: 
    - name: MY_VAR 
      valueFrom: 
        configMapKeyRef: 
          name: packt-configmap 
          key: myKey

Use the following command to verify the ConfigMap value:

kubectl logs packt-configmap
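
You can also read the injected variable directly inside the running container:

kubectl exec packt-configmap -- sh -c 'echo $MY_VAR'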

Scenario 4

SSH to worker-0, and create an nginx pod with an initContainer called busybox.

Create a yaml definition called packt-pod.yaml shown as follows:

apiVersion: v1 
kind: Pod 
metadata: 
  name: packtpod 
  labels: 
    app: packtapp 
spec: 
  containers: 
  - name: packtapp-container 
    image: busybox:latest 
    command: ['sh', '-c', 'echo The packtapp is running! && sleep 3600'] 
  initContainers: 
  - name: init-pservice 
    image: busybox:latest 
    command: ['sh', '-c', 'until nslookup packtservice; do echo waiting for packtservice; sleep 2; done;'] 

Use the following command to deploy the yaml manifest:

kubectl apply -f packt-pod.yaml

Use the following command to see if the pod is up and running:

kubectl get pod packtpod
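
Note that the pod will stay in the Init state until a Service named packtservice becomes resolvable in cluster DNS. To let initialization complete, you can create one, for example:

kubectl create service clusterip packtservice --tcp=80:80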

Scenario 5

SSH to worker-0, and create a pod running an nginx container together with a busybox container in the same pod.

Create a yaml definition called packt-pod.yaml shown as follows:

apiVersion: v1 
kind: Pod 
metadata: 
  name: packt-multi-pod 
  labels: 
      app: multi-app 
spec: 
  containers: 
  - name: nginx 
    image: nginx 
    ports: 
    - containerPort: 80 
  - name: busybox-sidecar 
    image: busybox 
    command: ['sh', '-c', 'while true; do sleep 3600; done;']

Use the following command to deploy the yaml manifest:

kubectl apply -f packt-pod.yaml

Use the following command to see if the pod is up and running:

kubectl get pod packt-multi-pod
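
To confirm that both containers are running in the same pod, you can list the container names:

kubectl get pod packt-multi-pod -o jsonpath='{.spec.containers[*].name}'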

Chapter 5 – Demystifying Kubernetes Storage

You have two virtual machines: master-0 and worker-0. Please complete the following mock scenarios.

Scenario 1

Create a new PV called packt-data-pv with 2Gi of storage, and two persistent volume claims (PVCs) requesting 1Gi of local storage each.

Create a yaml definition called packt-data-pv.yaml for the persistent volume as follows (the hostPath source is an assumption; a PV needs some volume source, and any local path works for this exercise):

  apiVersion: v1 
  kind: PersistentVolume 
  metadata: 
    name: packt-data-pv
  spec: 
    storageClassName: local-storage 
    capacity: 
      storage: 2Gi 
    accessModes: 
      - ReadWriteOnce
    hostPath:
      path: /mnt/packt-data

Use the following command to deploy the yaml manifest:

kubectl apply -f packt-data-pv.yaml

Create a yaml definition called packt-data-pvc1.yaml for the first persistent volume claim as follows:

apiVersion: v1 
kind: PersistentVolumeClaim 
metadata: 
  name: packt-data-pvc1
spec: 
  storageClassName: local-storage 
  accessModes: 
    - ReadWriteOnce 
  resources: 
    requests: 
      storage: 1Gi

Create a yaml definition called packt-data-pvc2.yaml for the second persistent volume claim as follows:

apiVersion: v1 
kind: PersistentVolumeClaim 
metadata: 
  name: packt-data-pvc2
spec: 
  storageClassName: local-storage 
  accessModes: 
    - ReadWriteOnce 
  resources: 
    requests: 
      storage: 1Gi

Use the following command to deploy the yaml manifests:

kubectl apply -f packt-data-pvc1.yaml,packt-data-pvc2.yaml
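
Verify that the claims have been created and check their binding status:

kubectl get pvc

Note that a PV binds to a single PVC, so the 2Gi packt-data-pv can satisfy only one of the two 1Gi claims; the other remains Pending until another matching PV is available.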

Scenario 2

Provision a new pod called packt-storage-pod, and assign an available PV to this pod.

Create a yaml definition called packt-storage-pod.yaml shown as follows:

apiVersion: v1 
kind: Pod 
metadata: 
  name: packt-storage-pod
spec: 
  containers: 
    - name: busybox 
      image: busybox 
      command: ["/bin/sh", "-c", "while true; do sleep 3600; done"] 
      volumeMounts: 
      - name: temp-data 
        mountPath: /tmp/data 
  volumes: 
    - name: temp-data 
      persistentVolumeClaim: 
        claimName: packt-data-pvc1
  restartPolicy: Always

Use the following command to deploy the yaml manifest:

kubectl apply -f packt-storage-pod.yaml

Use the following command to see if the pod is up and running:

kubectl get pod packt-storage-pod
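
To confirm the volume is mounted, you can write and read a file through the mount path:

kubectl exec packt-storage-pod -- sh -c 'echo packt > /tmp/data/test.txt && cat /tmp/data/test.txt'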

Chapter 6 – Securing Kubernetes

You have two virtual machines: master-0 and worker-0. Please complete the following mock scenarios.

Scenario 1

Create a new service account named packt-sa in a new namespace called packt-ns.

Use the following commands to create the new namespace and a service account in it:

kubectl create namespace packt-ns

kubectl create sa packt-sa -n packt-ns
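
Verify the service account with the following command:

kubectl get sa packt-sa -n packt-ns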

Scenario 2

Create a Role named packt-role and bind it with the RoleBinding packt-rolebinding. Map the packt-sa service account with list and get permissions.

Use the following command to create a Role in the target namespace:

kubectl create role packt-role --verb=get --verb=list --resource=pods --namespace=packt-ns

Use the following command to create a RoleBinding in the target namespace that maps the packt-sa service account to the Role:

kubectl create rolebinding packt-rolebinding --role=packt-role --serviceaccount=packt-ns:packt-sa --namespace=packt-ns

To achieve the same result, you can create a yaml definition called packt-role.yaml:

apiVersion: rbac.authorization.k8s.io/v1 
kind: Role 
metadata: 
  namespace: packt-ns 
  name: packt-role
rules: 
- apiGroups: [""] 
  resources: ["pods"] 
  verbs: ["get", "list"]

Create another yaml definition called packt-rolebinding.yaml:

apiVersion: rbac.authorization.k8s.io/v1 
kind: RoleBinding 
metadata: 
  name: packt-rolebinding
  namespace: packt-ns 
subjects: 
- kind: ServiceAccount 
  name: packt-sa
  namespace: packt-ns
roleRef: 
  kind: Role 
  name: packt-role
  apiGroup: rbac.authorization.k8s.io

Use the following command to deploy the yaml manifests:

kubectl apply -f packt-role.yaml,packt-rolebinding.yaml

Verify the Role using the following command:

kubectl get roles -n packt-ns

Verify the rolebindings by using the following command:

kubectl get rolebindings -n packt-ns
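
You can also confirm that the binding gives the packt-sa service account the expected permissions:

kubectl auth can-i list pods --as=system:serviceaccount:packt-ns:packt-sa -n packt-ns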

Scenario 3

Create a new pod named packt-pod with the image busybox:1.28 in the namespace packt-ns. Expose port 80. Then assign the service account packt-sa to the pod.

Use the following command to create a deployment:

kubectl create deployment packtbusybox --image=busybox:1.28 -n packt-ns --port=80

Export the deployment information in yaml specification form (note that kubectl describe does not support -o yaml; kubectl get does):

kubectl get deployment packtbusybox -n packt-ns -o yaml > packt-busybox.yaml

Edit the yaml specification to reference the service account:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: packtbusybox
  namespace: packt-ns
spec:
  selector:
    matchLabels:
      app: packtbusybox
  template:
    metadata:
      labels:
        app: packtbusybox
    spec:
      serviceAccountName: packt-sa
      containers:
      - image: busybox:1.28
        name: packtbusybox
        command: ['sh', '-c', 'sleep 3600']
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/run/secrets/tokens
          name: vault-token
      volumes:
      - name: vault-token
        projected:
          sources:
          - serviceAccountToken:
              path: vault-token
              expirationSeconds: 7200
              audience: vault
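
Apply the edited manifest and confirm that the pod is running under the packt-sa service account:

kubectl apply -f packt-busybox.yaml

kubectl get pods -n packt-ns -o jsonpath='{.items[*].spec.serviceAccountName}'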

Check out the Implementing Kubernetes RBAC section in Chapter 6, Securing Kubernetes, for further information about how to implement RBAC.

Chapter 7 – Demystifying Kubernetes networking

You have two virtual machines: master-0 and worker-0. Please complete the following mock scenarios.

Scenario 1

Deploy a new deployment called nginx with the latest nginx image and 2 replicas, in a namespace called packt-app. The container is exposed on port 80. Create a ClusterIP service within the same namespace. Deploy a sandbox-nginx pod and make a call using curl to verify the connectivity to the nginx service.

Use the following commands to create the packt-app namespace and the nginx deployment in it:

kubectl create namespace packt-app

kubectl create deployment nginx --image=nginx --replicas=2 -n packt-app

Use the following command to expose the nginx deployment with a ClusterIP service in the target namespace:

kubectl expose deployment nginx --type=ClusterIP --port 8080 --name=packt-svc --target-port 80 -n packt-app

Use the following command to get the ClusterIP and port of the service:

kubectl get svc packt-svc -n packt-app -o wide

Use the following command to deploy a sandbox-nginx pod in the target namespace and verify the connectivity, replacing <cluster-ip> with the CLUSTER-IP value from the previous output:

kubectl run -it sandbox-nginx --image=nginx -n packt-app --rm --restart=Never -- curl -Is http://<cluster-ip>:8080

Scenario 2

Expose the nginx deployment with the NodePort service type; the container is exposed on port 80. Use the test-nginx pod to make a call using curl to verify the connectivity to the nginx service.

Use the following command to expose the nginx deployment with a NodePort service in the target namespace (if the packt-svc service from the previous scenario still exists, delete it first with kubectl delete svc packt-svc -n packt-app):

kubectl expose deployment nginx --type=NodePort --port 8080 --name=packt-svc --target-port 80 -n packt-app

Use the following command to get the internal IP of the node:

kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'

Use the following command to get the NodePort assigned to the service (shown in the PORT(S) column, for example 8080:31400/TCP):

kubectl get svc packt-svc -n packt-app -o wide

Use the following command to deploy a test-nginx pod in the target namespace, replacing the placeholders with the node's internal IP and the NodePort:

kubectl run -it test-nginx --image=nginx -n packt-app --rm --restart=Never -- curl -Is http://<node-internal-ip>:<node-port>

Scenario 3

Make a call using wget or curl from a machine on the same network as the node, to verify the connectivity with the nginx NodePort service through the correct port.

Call from another machine on the same network as the node using the following command, replacing the placeholders with the node's internal IP and the NodePort:

curl -Is http://<node-internal-ip>:<node-port>

Alternatively, we can use wget as in the following command:

wget http://<node-internal-ip>:<node-port>

Scenario 4

Use the sandbox-nginx pod to nslookup the IP address of the nginx NodePort service. See what is returned.

Use the following commands (the sandbox pod is given a long-running command so that it stays up for the exec calls):

kubectl run sandbox-nginx --image=busybox:latest -- sleep 3600

kubectl exec sandbox-nginx -- nslookup <ClusterIP of the nginx NodePort service>

Scenario 5

Use the sandbox-nginx pod to nslookup the DNS domain hostname of the nginx NodePort service. See what is returned.

Use the following command, reusing the sandbox-nginx pod from the previous scenario (service DNS names follow the <service>.<namespace>.svc.cluster.local format):

kubectl exec sandbox-nginx -- nslookup packt-svc.packt-app.svc.cluster.local

Scenario 6

Use the sandbox-nginx pod to nslookup the DNS domain hostname of the nginx pod. See what is returned.

Use the following command, replacing the placeholder with the pod's IP address written with dashes (pod DNS names follow the <pod-ip-with-dashes>.<namespace>.pod.cluster.local format, so a pod with the IP 10.1.0.9 becomes 10-1-0-9.packt-app.pod.cluster.local):

kubectl exec sandbox-nginx -- nslookup <pod-ip-with-dashes>.packt-app.pod.cluster.local

Chapter 8 – Monitoring and logging Kubernetes Clusters and Applications

You have two virtual machines: master-0 and worker-0. Please complete the following mock scenarios.

Scenario 1

List all the available pods in your current cluster and find which pods consume the most CPU. Write the result to the max-cpu.txt file.

Use the following command:

kubectl top pod --all-namespaces --sort-by=cpu > max-cpu.txt
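
Note that kubectl top relies on the metrics-server add-on; if the command fails, check that it is deployed (assuming the conventional deployment name in the kube-system namespace):

kubectl get deployment metrics-server -n kube-system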
