Chapter 13: Learning from Kubernetes CVEs

Common Vulnerabilities and Exposures (CVEs) are identifiers for publicly known security vulnerabilities and exposures found in popular applications. A CVE ID is made up of the CVE prefix, followed by the year and an ID number for the vulnerability. The CVE database is publicly available and is maintained by the MITRE Corporation. Each CVE entry includes a brief description of the issue, which is helpful for understanding its root cause and severity, but the entries do not include detailed technical information about the issue. CVEs help IT professionals coordinate and prioritize updates. Each CVE has a severity associated with it, assigned using the Common Vulnerability Scoring System (CVSS). It is recommended to patch high-severity CVEs immediately. Let's look at an example of a CVE entry on cve.mitre.org.

As you can see in the following screenshot, a CVE entry includes the ID, a brief description, references, the name of the CVE Numbering Authority (CNA), and the date on which the entry was created:

Figure 13.1 – MITRE entry for CVE-2018-18264

For security researchers and attackers, the most interesting part of a CVE entry is the References section. References for CVEs are links to blogs published by researchers covering the technical details of the issue, as well as links to issue descriptions and pull requests. Security researchers study the references to understand the vulnerability and develop mitigations for similar issues or for known issues that don't have a fix yet. Attackers, on the other hand, study the references to find unpatched variations of the issue.

In this chapter, we'll discuss four publicly known security vulnerabilities in Kubernetes. First, we will look at a path traversal issue, CVE-2019-11246. This issue allowed attackers to overwrite files on the client side, which could potentially lead to data exfiltration or code execution on the cluster administrator's machine. Next, we will discuss CVE-2019-1002100, which allowed users to cause Denial-of-Service (DoS) attacks on the API server. Then, we will discuss CVE-2019-11253, which allowed unauthenticated users to cause DoS attacks on kube-apiserver. Lastly, we will discuss CVE-2019-11247, which allowed users with namespaced privileges to modify cluster-wide resources. We will discuss mitigation strategies for each CVE. Upgrading to the latest version of Kubernetes and kubectl, which patches these vulnerabilities, should be your first priority; the latest stable version of Kubernetes can be found at https://github.com/kubernetes/kubernetes/releases. The mitigation strategies that we will discuss will also help strengthen your cluster against attacks of a similar nature. Finally, we will introduce kube-hunter, which can be used to scan Kubernetes clusters for known security vulnerabilities.

We will cover the following topics in this chapter:

  • The path traversal issue in kubectl cp—CVE-2019-11246
  • The DoS issue in JSON parsing—CVE-2019-1002100
  • The DoS issue in YAML parsing—CVE-2019-11253
  • The privilege-escalation issue in role parsing—CVE-2019-11247
  • Scanning known vulnerabilities using kube-hunter

The path traversal issue in kubectl cp – CVE-2019-11246

Developers often copy files to or from containers in a Pod for debugging. kubectl cp allows developers to copy files to or from a container in a Pod (by default, the copy is done to or from the first container in the Pod).

To copy files to a Pod, you can use the following:

kubectl cp /tmp/test <pod>:/tmp/bar

To copy files from a Pod, you can use the following:

kubectl cp <some-pod>:/tmp/foo /tmp/bar
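If the Pod runs more than one container or lives in a different namespace, you can specify these explicitly. The following is a minimal sketch; the namespace, Pod, and container names are placeholders:

kubectl cp <namespace>/<pod>:/tmp/foo /tmp/bar -c <container>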

When files are copied from a Pod, Kubernetes first creates a TAR archive of the files inside the container, copies the TAR archive to the client, and finally unpacks it on the client's machine. In 2018, researchers found a way to use kubectl cp to overwrite files on the client's host. If an attacker has access to a Pod, they could replace the original tar binary inside the container with a malicious one that produces archives containing entries with relative paths. When such a malformed TAR archive was copied to the host and extracted, the entries could escape the destination directory and overwrite files on the host. This could lead to data compromise and code execution on the host.

Let's look at an example where the attacker builds a TAR archive with two entries: regular.txt and foo/../../../../bin/ps. In this archive, regular.txt is the file that the user is expecting and ps is a malicious binary. If this archive is extracted to /home/user/admin, the second entry traverses out of the destination directory and overwrites the well-known ps binary in the bin folder. The first patch for this issue was incomplete, and attackers found a way to exploit the same problem using symlinks. Researchers then found a way to bypass the symlink fix as well; this variation was assigned CVE-2019-11246 and was finally addressed in kubectl versions 1.12.9, 1.13.6, and 1.14.2.
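To make the traversal concrete, the following sketch shows how an archive member can be given a path that climbs out of the extraction directory, using GNU tar's --transform option. The file names are purely illustrative, and this is not the exact technique used in the original exploit:

# Create a demo archive whose second member escapes the extraction directory.
mkdir -p demo && cd demo
echo "expected content" > regular.txt
cp /bin/true evil-ps     # stand-in for a malicious binary
tar --transform='s|^evil-ps$|foo/../../../../bin/ps|' -cf exploit.tar regular.txt evil-ps
tar -tf exploit.tar      # lists regular.txt and foo/../../../../bin/ps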

Mitigation strategy

You can use the following strategies to harden your cluster against this issue and issues similar to CVE-2019-11246 that haven't yet been found:

  • Always use the latest version of kubectl: You can find the latest stable version of the kubectl binary by using the following command:

    $ curl https://storage.googleapis.com/kubernetes-release/release/stable.txt

    v1.18.3

  • Use admission controllers to limit the use of kubectl cp: As we discussed in Chapter 7, Authentication, Authorization, and Admission Control, Open Policy Agent can be used as an admission controller. Let's look at a policy that denies calls to kubectl cp:

    deny[reason] {
      input.request.kind.kind == "PodExecOptions"
      input.request.resource.resource == "pods"
      input.request.subResource == "exec"
      input.request.object.command[0] == "tar"
      reason = sprintf("kubectl cp was detected on %v/%v by user: %v", [
        input.request.namespace,
        input.request.object.container,
        input.request.userInfo.username])
    }

    This policy denies any pod exec request whose command is tar, thereby disabling kubectl cp for all users (kubectl cp runs a tar binary inside the target container). You can update this policy to allow kubectl cp for specific users or groups.

  • Apply appropriate access controls to the client: If you are an administrator of a production cluster, there are many secrets on your work machine that attackers might want access to. Ideally, the machine used to administer the cluster should not be your everyday work laptop; having dedicated hardware that admins can SSH into to access the Kubernetes cluster is good practice. You should also ensure that any sensitive data on that machine has appropriate access controls.
  • Set the security context for all pods: As discussed in Chapter 8, Securing Kubernetes Pods, set readOnlyRootFilesystem in the securityContext of each container; this prevents attackers from tampering with files in the container's filesystem (for example, overwriting the /bin/tar binary):

    spec:
      containers:
      - name: app          # container name and image are illustrative
        image: ubuntu
        securityContext:
          readOnlyRootFilesystem: true

  • Use Falco rules to detect file modification: We discussed Falco in Chapter 11, Defense in Depth. Falco rules (which can be found at https://github.com/falcosecurity/falco/blob/master/rules/falco_rules.yaml) can be set up to do the following:

    Detect modification of a binary in a pod: Use Write below monitored dir in the default Falco rules to detect changes to the TAR binary:

    - rule: Write below monitored dir
      desc: an attempt to write to any file below a set of binary directories
      condition: >
        evt.dir = < and open_write and monitored_dir
        and not package_mgmt_procs
        and not coreos_write_ssh_dir
        and not exe_running_docker_save
        and not python_running_get_pip
        and not python_running_ms_oms
        and not google_accounts_daemon_writing_ssh
        and not cloud_init_writing_ssh
        and not user_known_write_monitored_dir_conditions
      output: >
        File below a monitored directory opened for writing (user=%user.name
        command=%proc.cmdline file=%fd.name parent=%proc.pname pcmdline=%proc.pcmdline gparent=%proc.aname[2] container_id=%container.id image=%container.image.repository)
      priority: ERROR
      tags: [filesystem, mitre_persistence]

    Detect the use of a vulnerable kubectl instance: kubectl versions 1.12.9, 1.13.6, and 1.14.2 contain the fix for this issue. Using an older client version will trigger the following rule:

    - macro: safe_kubectl_version
      condition: (jevt.value[/userAgent] startswith "kubectl/v1.15" or
                  jevt.value[/userAgent] startswith "kubectl/v1.14.3" or
                  jevt.value[/userAgent] startswith "kubectl/v1.14.2" or
                  jevt.value[/userAgent] startswith "kubectl/v1.13.7" or
                  jevt.value[/userAgent] startswith "kubectl/v1.13.6" or
                  jevt.value[/userAgent] startswith "kubectl/v1.12.9")

    # CVE-2019-1002101
    # Run kubectl version --client and if it does not say client version 1.12.9,
    # 1.13.6, or 1.14.2 or newer, you are running a vulnerable version.
    - rule: K8s Vulnerable Kubectl Copy
      desc: Detect any attempt vulnerable kubectl copy in pod
      condition: kevt_started and pod_subresource and kcreate and
                 ka.target.subresource = "exec" and ka.uri.param[command] = "tar" and
                 not safe_kubectl_version
      output: Vulnerable kubectl copy detected (user=%ka.user.name pod=%ka.target.name ns=%ka.target.namespace action=%ka.target.subresource command=%ka.uri.param[command] userAgent=%jevt.value[/userAgent])
      priority: WARNING
      source: k8s_audit
      tags: [k8s]
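To check which client version you are running, as the comment in the preceding rule suggests, you can use the following command:

kubectl version --client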

CVE-2019-11246 is a great example of why you need to keep track of security advisories and read through the technical details, so that you can add mitigation strategies to your cluster and stay protected if a variation of the issue is discovered. Next, we will look at CVE-2019-1002100, which can be used to cause a DoS condition on kube-apiserver.

The DoS issue in JSON parsing – CVE-2019-1002100

Patching is a commonly used technique for updating API objects at runtime, and developers use kubectl patch to do this. A simple example is adding a container to the Pod template of a Deployment:

spec:
  template:
    spec:
      containers:
      - name: db
        image: redis
The preceding patch file updates the Deployment's Pods to include a new Redis container. kubectl patch also allows patches to be supplied in JSON format (json-patch). The issue was in the JSON parsing code of kube-apiserver, which allowed an attacker to send a malformed json-patch request and cause a DoS condition in the API server. In Chapter 10, Real-Time Monitoring and Resource Management of a Kubernetes Cluster, we discussed the importance of the availability of services within Kubernetes clusters. The root cause of this issue was unchecked error conditions and unbounded memory allocation in kube-apiserver while processing patch requests.
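For reference, a json-patch request looks like the following. This is a minimal sketch; the Deployment name and patch contents are illustrative, and on a patched cluster this is simply a normal operation:

# Add a container to a Deployment using a JSON patch (names are illustrative).
kubectl patch deployment web --type='json' \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/-", "value": {"name": "db", "image": "redis"}}]'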

Mitigation strategy

You can use the following strategies to harden your cluster against this issue and issues similar to CVE-2019-1002100 that haven't yet been found:

  • Use resource monitoring tools in Kubernetes clusters: As discussed in Chapter 10, Real-Time Monitoring and Resource Management of a Kubernetes Cluster, resource-monitoring tools such as Prometheus and Grafana can help identify abnormally high memory consumption on the master node. Useful Prometheus queries to graph for kube-apiserver include the following:

    container_memory_max_usage_bytes{pod_name="kube-apiserver-xxx"}
    sum(rate(container_cpu_usage_seconds_total{pod_name="kube-apiserver-xxx"}[5m]))
    sum(rate(container_network_receive_bytes_total{pod_name="kube-apiserver-xxx"}[5m]))

    These queries graph the maximum memory usage of kube-apiserver, and its CPU and network usage rates over 5-minute windows. Any abnormality in these usage patterns can be a sign of an attack on kube-apiserver.

  • Set up high-availability Kubernetes masters: We learned about high-availability clusters in Chapter 11, Defense in Depth. High-availability clusters have multiple instances of Kubernetes components. If the load on one component is high, other instances can be used until the load is reduced or the first instance is restarted.

    Using kops, you can use --master-zones={zone1, zone2} to have multiple masters:

    kops create cluster k8s-clusters.k8s-demo-zone.com \
      --cloud aws \
      --node-count 3 \
      --zones $ZONES \
      --node-size $NODE_SIZE \
      --master-size $MASTER_SIZE \
      --master-zones $ZONES \
      --networking calico \
      --kubernetes-version 1.14.3 \
      --yes
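    Once the cluster is up, you can verify that multiple API server instances are running. The following check assumes the control-plane components run as Pods in the kube-system namespace, which is the default for kops:

    kubectl get pods -n kube-system | grep kube-apiserver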

    kube-apiserver-ip-172-20-43-65.ec2.internal              1/1     Running   4          4h16m
    kube-apiserver-ip-172-20-67-151.ec2.internal             1/1     Running   4          4h15m

    As you can see, there are multiple kube-apiserver pods running in this cluster.

  • Limit users' privileges using RBAC: User privileges should follow the principle of least privilege, which was discussed in Chapter 4, Applying the Principle of Least Privilege in Kubernetes. If a user does not need the patch verb on a resource, their Role should be updated so that they don't have it; you can verify a user's effective permissions with the check shown after this list.
  • Test your patches in the staging environment: Staging environments should be set up as a replica of the production environment. Developers are not perfect, so it's possible for a developer to create a malformed patch. If patches or updates to the cluster are tested in the staging environment, bugs in the patch can be found without disrupting the production services.
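kubectl auth can-i performs a dry-run authorization check and is a quick way to verify what a given user is allowed to do. The user and namespace below are illustrative:

kubectl auth can-i patch deployments --as=dev-user -n dev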

DoS is often considered a low-severity issue, but if it affects a core component of your cluster, you should take it seriously. DoS attacks on kube-apiserver can disrupt the availability of the whole cluster. Next, we will look at another DoS attack against the API server. This attack can be performed by unauthenticated users, making it more severe than CVE-2019-1002100.

A DoS issue in YAML parsing – CVE-2019-11253

XML bombs, or billion laughs attacks, are a well-known class of attacks against XML parsers. CVE-2019-11253 was a similar parsing issue in the handling of YAML files sent to kube-apiserver. If a YAML file sent to the server contains recursive references, kube-apiserver consumes excessive CPU while expanding them, which causes availability issues on the API server. In most cases, requests parsed by kube-apiserver are restricted to authenticated users, so unauthenticated users should not be able to trigger this issue. However, in Kubernetes versions preceding 1.14, there was an exception to this rule that allowed unauthenticated users to check whether they could perform an action using kubectl auth can-i.
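To illustrate the idea, a recursive-reference payload can be sketched with YAML anchors (&) and aliases (*), where each level multiplies the expansion work a naive parser has to perform. This is an illustration of the technique only, not the exact payload from the report:

# Illustrative "billion laughs"-style YAML written to a local file.
cat <<'EOF' > lol.yaml
a: &a ["lol", "lol", "lol", "lol", "lol", "lol", "lol", "lol", "lol"]
b: &b [*a, *a, *a, *a, *a, *a, *a, *a, *a]
c: &c [*b, *b, *b, *b, *b, *b, *b, *b, *b]
d: &d [*c, *c, *c, *c, *c, *c, *c, *c, *c]
e: &e [*d, *d, *d, *d, *d, *d, *d, *d, *d]
EOF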

This issue is similar to CVE-2019-1002100, but is more severe as unauthenticated users can also trigger this issue.

Mitigation strategy

You can use the following strategies to harden your cluster against this issue and issues similar to CVE-2019-11253 that haven't yet been found:

  • Use resource-monitoring tools in Kubernetes clusters: Similar to CVE-2019-1002100, resource-monitoring tools, such as Prometheus and Grafana, which we discussed in Chapter 10, Real-Time Monitoring and Resource Management of a Kubernetes Cluster, can help identify abnormally high resource consumption on the master node.
  • Enable RBAC: The vulnerability is caused by the improper handling of recursive entities in YAML files by kube-apiserver, combined with the ability of unauthenticated users to interact with kube-apiserver. We discussed RBAC in Chapter 7, Authentication, Authorization, and Admission Control. RBAC is enabled by default in current versions of Kubernetes; you can also enable it explicitly by passing --authorization-mode=RBAC to kube-apiserver. With RBAC enabled, unauthenticated users should not be allowed to interact with kube-apiserver, and for authenticated users, the principle of least privilege should be followed.
  • Disable auth can-i for unauthenticated users (for v1.14.x): Unauthenticated users should not be allowed to interact with kube-apiserver. In Kubernetes v1.14.x, you can disable auth can-i for unauthenticated users using the RBAC file at https://github.com/kubernetes/kubernetes/files/3735508/rbac.yaml.txt:

    kubectl auth reconcile -f rbac.yaml --remove-extra-subjects --remove-extra-permissions
    kubectl annotate --overwrite clusterrolebinding/system:basic-user rbac.authorization.kubernetes.io/autoupdate=false

    The second command disables auto-updates for the clusterrolebinding, which ensures that the changes are not overwritten when the API server restarts.

  • kube-apiserver should not be exposed to the internet: Allowing access to the API server only from trusted entities, using a firewall or VPC rules, is good practice.
  • Disable anonymous-auth: We discussed anonymous-auth as an option that should be disabled if possible in Chapter 6, Securing Cluster Components. Anonymous authentication is enabled by default in Kubernetes 1.16+ for legacy policy rules. If you are not using any legacy rules, it is recommended to disable anonymous-auth by passing --anonymous-auth=false to the API server (a quick way to check this flag is shown after this list).
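As a quick check of how these flags are set, you can inspect the API server's manifest. This sketch assumes a kubeadm-managed control plane, where kube-apiserver runs as a static Pod and the manifest lives at the kubeadm default path:

# Show the relevant flags currently passed to kube-apiserver.
grep -E -- '--(anonymous-auth|authorization-mode)' /etc/kubernetes/manifests/kube-apiserver.yaml
# To disable anonymous requests, make sure the command list includes:
#   - --anonymous-auth=false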

As we discussed earlier, a DoS attack on kube-apiserver can cause a disruption of services throughout the cluster. In addition to using the latest version of Kubernetes, which includes a patch for this issue, it is important to follow these mitigation strategies to avoid similar issues in your cluster. Next, we will discuss an issue in the authorization module that triggers privilege escalation for authenticated users.

The privilege escalation issue in role parsing – CVE-2019-11247

We discussed RBAC in detail in Chapter 7, Authentication, Authorization, and Admission Control. Roles and RoleBindings grant users the privileges to perform certain actions, and these privileges are namespaced. If a user needs a cluster-wide privilege, ClusterRoles and ClusterRoleBindings are used. This issue allowed users with namespaced privileges to view, modify, or delete cluster-scoped custom resources, because the API server served those resources as if they were namespaced. For example, configurations for admission controllers such as Open Policy Agent could be modified by users holding only a namespaced role.

Mitigation strategy

You can use the following strategies to harden your cluster against this issue and issues similar to CVE-2019-11247 that haven't yet been found:

  • Avoid wildcards in Roles and RoleBindings: Roles and ClusterRoles should be specific about resource names, verbs, and API groups. Using * in roles can give users access to resources that they should not have access to. This adheres to the principle of least privilege, which we discussed in Chapter 4, Applying the Principle of Least Privilege in Kubernetes; see the example after this list.
  • Enable Kubernetes auditing: We discussed auditing and audit policies for Kubernetes in Chapter 11, Defense in Depth. Kubernetes auditing can help identify any unintended actions in a Kubernetes cluster. In most cases, a vulnerability such as this will be used to modify and delete any additional controls within the cluster. You can use the following policy to identify instances of these kinds of exploits:

      apiVersion: audit.k8s.io/v1 # This is required.
      kind: Policy
      rules:
      - level: RequestResponse
        verbs: ["patch", "update", "delete"]
        resources:
        - group: ""
          resources: ["pods"]
        namespaces: ["kube-system", "monitoring"]

    This policy logs any instances of the deletion or modification of pods in kube-system or the monitoring namespace.
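Returning to the first point in this list, a least-privilege Role can be created with explicit verbs and resources rather than wildcards. The role name, namespace, user, and verbs below are illustrative:

kubectl create role pod-reader -n dev --verb=get --verb=list --verb=watch --resource=pods
kubectl create rolebinding pod-reader-binding -n dev --role=pod-reader --user=dev-user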

This issue is certainly an interesting one since it highlights that the security features provided by Kubernetes can also be harmful if they are misconfigured. Next, we will talk about kube-hunter, which is an open source tool to find any known security issues in your cluster.

Scanning for known vulnerabilities using kube-hunter

Security advisories and announcements (https://kubernetes.io/docs/reference/issues-security/security/) published by Kubernetes are the best way to keep track of new security vulnerabilities found in Kubernetes. The announcements and advisory emails can get a bit overwhelming, and it's always possible to miss an important vulnerability. To avoid these situations, a tool that periodically checks the cluster for known CVEs comes to the rescue. kube-hunter is an open source tool, developed and maintained by Aqua Security, that helps identify known security issues in your Kubernetes cluster.

The steps to set up kube-hunter are as follows:

  1. Clone the repository:

    $ git clone https://github.com/aquasecurity/kube-hunter

  2. Run the kube-hunter pod in your cluster:

    $ kubectl create -f job.yaml

  3. View the logs to find any issues with your cluster:

    $ kubectl get pods
    NAME                READY   STATUS              RESTARTS   AGE
    kube-hunter-7hsfc   0/1     ContainerCreating   0          12s
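    Once the Pod has finished running, you can retrieve the report from its logs; the Pod name is taken from the preceding output:

    $ kubectl logs kube-hunter-7hsfc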

    The following output shows a list of known vulnerabilities in Kubernetes v1.13.0:

Figure 13.2 – Results of kube-hunter

This screenshot highlights some of the issues discovered by kube-hunter for a Kubernetes v1.13.0 cluster. The issues found by kube-hunter should be treated as critical and should be addressed immediately.

Summary

In this chapter, we discussed the importance of CVEs. These publicly known identifiers are important for cluster administrators, security researchers, and attackers. We discussed the important aspects of CVE entries, which are maintained by MITRE. We then looked at four well-known CVEs and discussed the issue and the mitigation strategy for each CVE. As a cluster administrator, upgrading the kubectl client and Kubernetes version should always be your first priority. However, adding mitigation strategies to detect and prevent exploits caused by similar issues that have not been reported publicly is equally important. Finally, we discussed an open source tool, kube-hunter, which can be used to periodically identify issues in your Kubernetes cluster. This removes the overhead of cluster administrators keeping a close eye on security advisories and announcements by Kubernetes.

Now, you should be able to understand the importance of publicly disclosed vulnerabilities and how these advisories help strengthen the overall security posture of your Kubernetes cluster. Reading through these advisories will help you identify any problems in your cluster and help harden your cluster going forward.

Questions

  1. What are the most important parts of a CVE entry for cluster administrators, security researchers, and attackers?
  2. Why are client-side security issues such as CVE-2019-11246 important for a Kubernetes cluster?
  3. Why are DoS issues in the kube-apiserver treated as high-severity issues?
  4. Compare authenticated versus unauthenticated DoS issues in the API server.
  5. Discuss the importance of kube-hunter.
