CHAPTER 9
Kubernetes Compliance

This is the third DevSecOps tooling chapter and our first look at Kubernetes (kubernetes.io), the container orchestrator. Following an initial adoption of running containers to serve their applications, organizations soon realize, sometimes with horror, that it is almost impossible to pull together the many facets of a microservices architecture without some form of container orchestrator. And, while Kubernetes rises to that challenge and provides unparalleled levels of sophistication to herd multiple cat-like containers at once, there can be a steep learning curve for those who are new to it.

Hand in hand with that complexity come a number of new and often unseen security challenges, which means that organizations quickly need to learn about the potential impact of the attack vectors that Kubernetes exposes. In early releases, for example, there were known attacks on the API Server component of Kubernetes. The API Server is the interface between the many components of a Kubernetes cluster, and being able to authenticate to it with privileges offered attackers a treasure trove. In this chapter, we look at how to highlight the potential security issues that industry experts consider likely to cause your Kubernetes cluster problems.

Receiving regular reports about how secure your Kubernetes cluster is can be invaluable. And, running compliance tools within CI/CD pipelines is perfectly possible if they provide timely results and do not slow down your build processes. Using such tools also means that you are armed with the ability to detect salient changes in your security posture after configuration changes and version upgrades. If you have the ability to test a cluster's compliance against a lengthy checklist, then you gain pertinent information about numerous components that require your attention. In some cases, for the unseasoned cluster operator, this applies to aspects of a cluster that you previously were not even aware of. By testing your security compliance against the widely respected, industry-consensus-based CIS Benchmarks, you can receive useful reports to help focus where to put your efforts in relation to Kubernetes security. There are many CIS Benchmarks available (www.cisecurity.org/cis-benchmarks) that can offer invaluable insight into operating systems, applications, and cloud platforms, to name but a few. The website states that there are more than 100 configuration guidelines across more than 25 vendor product families to help organizations improve their security posture. The detail the benchmarks provide is hard to match, and from a Cloud Native perspective, you are encouraged to look up your relevant Linux distribution, Docker, Kubernetes, and cloud platform benchmarks just to get you started.

In this chapter, we will install kube-bench (github.com/aquasecurity/kube-bench) and run it against a micro cluster to offer an insight into what you might expect on a full-blown Kubernetes installation.

Mini Kubernetes

We will install Kubernetes for our lab environment locally using the sophisticated Minikube (github.com/kubernetes/minikube).

The first step is to install a kernel-based virtual machine (VM) platform, KVM (www.linux-kvm.org), on our local machine so that we can then install Minikube within a VM.

The command to install the core KVM packages is as follows (some may already be installed):

$ apt install qemu-kvm libvirt-bin bridge-utils virtinst virt-manager

Note that you will need x86 hardware with CPU extensions enabled for the virtualization, namely, Intel VT or AMD-V, for KVM to work. KVM provides a kernel module called kvm.ko, which is loaded into the local system to supply the core virtualization functionality. Then an Intel or AMD module (kvm-intel.ko or kvm-amd.ko, depending on your flavor of processor) is also loaded.
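If you want to confirm that your processor exposes those extensions and that the KVM modules have loaded correctly, two quick checks along these lines should suffice (a result of 0 from the first command suggests that virtualization support is absent or disabled in the BIOS):

$ egrep -c '(vmx|svm)' /proc/cpuinfo  # counts CPU flags for Intel VT (vmx) or AMD-V (svm)
$ lsmod | grep kvm                    # confirms the kvm module and its Intel/AMD counterpart are loaded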

Listing 9.1 shows some of the packages and their dependencies installed with KVM, which total only approximately 30MB of installation footprint. Once all of them are installed, KVM should shortly become available to us.

One of the main components is called virsh, which the manual describes as the program used as “the main interface for managing virsh guest domains.” Running on an Ubuntu 18.04 laptop (using Linux Mint on top), we can see the output of the nodeinfo option in Listing 9.2.
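To reproduce that output on your own machine, the command is simply the nodeinfo option passed to virsh:

$ virsh nodeinfo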

The installation of Minikube requires the following commands; at this point, choose which directory you want to install your main Minikube file into and assume you will run your commands from this directory, too:

$ curl -O https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
$ dpkg -i minikube_latest_amd64.deb
$ minikube version
minikube version: v1.12.1

For other flavors of Linux, Mac, or Windows, choose the relevant installation option from this page: minikube.sigs.k8s.io/docs/start.

We need to start up Minikube next with this command, as the root user:

$ minikube start --driver=none

In Listing 9.3 we can see the execution process for Minikube.

Note the warning about the none driver. It appears because root-level permissions are passed on to the Minikube environment if you install using this method. Be sure to tear down your virtual environment after running a lab like this.
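When you have finished with the lab, Minikube's delete command removes the cluster that the start command created:

$ minikube delete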

We need a copy of kubectl installed now, which is the userland binary used to interact with the API Server in our Kubernetes cluster. We can query an online text file to see the latest stable release version:

$ curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt # Get the latest version
v1.18.3

We can see that it is v1.18.3, so adjust the version in this next command to download the correct kubectl version (getting the version incorrect might mean that some features do not work and there's a higher risk of unexpected results):

$ curl -O https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubectl
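If you would like extra assurance that the binary arrived intact, the Kubernetes release bucket should also offer SHA-256 checksum files alongside the binaries; the checksum filename used here is an assumption, so adjust it if your download location differs:

$ curl -O https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubectl.sha256
$ echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check  # expect the output: kubectl: OK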

Next, we make the file executable and add it to our user's path so that we can check whether the download worked:

$ chmod +x kubectl # make the file executable
$ mv kubectl /usr/local/bin # put this file into our user's path

Verify that the versions match with the following command; in the slightly abbreviated output, look at both client and server versions:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3",
GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean",
BuildDate:"2020-05-20T12:52:00Z", GoVersion:"go1.13.9", Compiler:"gc",
Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3",
GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean",
BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc",
Platform:"linux/amd64"}

Now for the moment of truth. Let's see if we have a local Kubernetes lab running, with this command:

$ kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
kilo   Ready    master   3m   v1.18.3

That looks promising; we have a master node running.

Listing 9.4 shows the output from one more quick check to confirm that the cluster has all of its required components available.
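The exact command captured in Listing 9.4 may differ, but one quick way to run that sort of check is to list every pod in every namespace and confirm that the control plane pods in kube-system all report a Running status:

$ kubectl get pods --all-namespaces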

Let's turn to our Kubernetes compliance tool now and see if the cluster security can be improved.

Using kube-bench

Courtesy of the Cloud Native security company Aqua Security (www.aquasec.com), kube-bench is a piece of Open Source software that runs through the comprehensive CIS Benchmarks for Kubernetes. There is no direct correlation in terms of timelines between new releases of Kubernetes and the CIS Benchmarks, so bear that in mind if new features are not flagged as having issues by kube-bench. As an aside, any managed Kubernetes clusters—the likes of EKS on AWS (aws.amazon.com/eks)—that do not make their Control Plane visible obviously cannot be queried in quite the same way as other clusters by kube-bench. It is still worth trying such tools.

Let's see kube-bench in action now, using the Kubernetes lab that we built using Minikube.

To get started, we need to decide how to execute kube-bench. If we run it from inside a container (sharing the host's PID namespace), then it will have visibility of the host's process table for its queries. Or we could install kube-bench via a container onto the host directly. The third choice (barring compiling it from source) is to use one of the provided, oven-ready binaries. You will need to ensure that your CI/CD pipeline security is in good shape, as kube-bench will require elevated privileges to have the ability to delve deeply into your Kubernetes cluster.
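For reference, the first of those options is a one-liner; this sketch is based on the project's documentation and shares the host's PID namespace while mounting /etc and /var read-only so that kube-bench can inspect the relevant configuration files (adjust the mounts and version to suit your cluster):

$ docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro -t aquasec/kube-bench:latest master --version 1.18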

We will go down the second route and execute kube-bench from the host itself having installed a kube-bench container first. The command to do just that is as follows:

$ docker run --rm -v $(pwd):/host aquasec/kube-bench:latest install

In Listing 9.5 we can see what the installation process looks like via a container.

Having run that Docker command from within the /root directory, we can now see a binary waiting to be executed, located at /root/kube-bench. Also, in that directory we can see a cfg/ directory with the following files and directories listed:

cis-1.3  cis-1.4  cis-1.5  config.yaml  eks-1.0  gke-1.0
node_only.yaml  rh-0.7

Following the advice offered in Listing 9.5, let's execute kube-bench now with autodetection enabled to see if it can determine the version of Kubernetes. As we saw with the earlier get nodes command, we are running version v1.18.3, which we can declare at this stage too if the autodetect feature fails.

For a single Minikube master node, with the version declared explicitly, that command might look like this:

$ ./kube-bench master --version 1.18

Equally, you can adjust which version of the CIS Benchmarks kube-bench refers to. Choose a framework that you want to run kube-bench against with a command like this one:

$ ./kube-bench node --benchmark cis-1.5

However, for a Minikube master node we will use this command instead:

$ ./kube-bench master

Success! The output from that command is exceptionally long, but the end displays the following:

== Summary ==
41 checks PASS
13 checks FAIL
11 checks WARN
0 checks INFO

Let's look through some of the other findings, a section at a time. It is recommended that you download the latest CIS Benchmarks PDF yourself at this stage from the CIS site (www.cisecurity.org/benchmark/kubernetes). That way you can compare and contrast the sometimes-concise output from compliance tools, and when researching an issue, you might be able to glean information from the PDF to use in combination with the compliance tool in order to help remediate any problems.

Listing 9.6 shows some of the benchmark-matching output entries, starting from the top. Although you cannot see color in this book, on your screen red text signifies a FAIL; orange text is a WARN; green, as you might have already guessed, is a PASS; and blue is INFO.

As we can see within our Minikube installation, the etcd storage directory has come back with a FAIL. If we take note of that CIS Benchmark item number (1.1.12 in this case), then we can look further down the output and get the recommendations on how to remediate each issue.

In Listing 9.7 we can see what the clever kube-bench is reporting.

Let's look at another FAIL now. The layout of kube-bench's output is organized by Kubernetes component to keep things clear. As a result, all items under 1.1 relate to master node configuration, items under 1.2 concern the API Server, 1.3 is for the Controller Manager, and 1.4 refers to the Scheduler.

Under 1.2.16 a FAIL relating to the API Server states the following:

[FAIL] 1.2.16 Ensure that the admission control plugin
PodSecurityPolicy is set (Scored)

In Listing 9.8 we can see how to remedy such an issue.

It looks like by default Minikube does not enable PodSecurityPolicies (PSPs), which are cluster-wide security controls to limit the permissions that pods are granted. Note that there is more information about pod security policies in Chapter 20, “Workload Hardening.”
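If you want to experiment with that control in this lab, Minikube can pass extra flags to the API Server at startup; the following sketch enables only the PodSecurityPolicy admission plugin, and bear in mind that until suitable PSPs have been created and authorized, enabling the plugin can prevent new pods from starting:

$ minikube start --extra-config=apiserver.enable-admission-plugins=PodSecurityPolicy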

Moving on, under a Controller Manager FAIL (1.3.1) we can also see that a garbage collection issue has been raised:

[FAIL] 1.3.1 Ensure that the --terminated-pod-gc-threshold argument
is set as appropriate (Scored)

In Listing 9.9 we can see what might fix that.

As mentioned, if you are not sure about a certain issue, then there is usually enough information offered between the PDF of the CIS Benchmark and the compliance tool. Failing that, a quick search for --terminated-pod-gc-threshold online offered the following information for clarity from the Kubernetes site (kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager):

--terminated-pod-gc-threshold int32     Default: 12500

As we can see, the default allows up to 12,500 terminated pods to accumulate before garbage collection begins. Underneath that the Kubernetes site explains:

"Number of terminated pods that can exist before the terminated pod
garbage collector starts deleting terminated pods. If <= 0, the
terminated pod garbage collector is disabled."

Settings such as these are not only good housekeeping but can also potentially enable your Kubernetes cluster to survive certain types of stress events such as a misconfiguration, a race condition, or a denial-of-service attack of some description.
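On a kubeadm-built cluster the Controller Manager runs as a static pod, so one way to act on finding 1.3.1 is to add the flag to its manifest; the path and the threshold value of 10 below are illustrative rather than prescriptive:

$ vi /etc/kubernetes/manifests/kube-controller-manager.yaml
# then add the following flag to the container's command section:
#   --terminated-pod-gc-threshold=10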

In Listing 9.10 we can see an issue that was flagged about the Scheduler, namely, the --profiling argument, item 1.4.1, which should be disabled for production use. This is followed by the kube-bench recommendations on a fix for this issue.

According to an online search, --profiling is now deprecated as per this comment from the Kubernetes site (kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler):

DEPRECATED: enable profiling via web interface host:port/debug/pprof/

This option is really only useful for troubleshooting, and it can expose system and software information unnecessarily, so it can be safely switched off.
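As with the previous fix, on a kubeadm-built cluster the remediation amounts to adding a single flag to the Scheduler's static pod manifest; the path shown is the usual kubeadm location and may differ on your cluster:

$ vi /etc/kubernetes/manifests/kube-scheduler.yaml
# then add the following flag to the container's command section:
#   --profiling=false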

Troubleshooting

If for some reason your Kubernetes cluster does not play nicely with kube-bench, you can increase logging verbosity this way:

$ ./kube-bench master -v 3 --logtostderr

To assist further with the research of the recommendations provided by kube-bench if necessary, you can turn on debugging to provide full visibility of the compliance tool's testing methods. For example, Listing 9.11 shows one such set of tests that were run with debugging enabled.

As Listing 9.11 demonstrates, a nontrivial number of tests and lines of code are whirring away internally when kube-bench is executed. The following output shows it checking for specification file permissions; here a more restrictive 600 is preferred over the looser 644:

[PASS] 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Scored)
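To verify or tighten those permissions by hand, commands along these lines will do the job; the manifest path shown is the standard kubeadm location and may differ on your cluster:

$ stat -c %a /etc/kubernetes/manifests/kube-apiserver.yaml  # show the current octal permissions
$ chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml   # tighten to owner read/write only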

Automation

If you have automated the execution of kube-bench but want to omit certain tests to reduce alarms being triggered and therefore the level of report noise, you can enter the cfg/ directory mentioned earlier and tweak rulesets. Within that directory there are YAML files containing detailed information about each compliance test performed. In Listing 9.12 you can see an example for the etcd server.

Within Listing 9.12, if the line skip: true were added to the YAML file under the entry starting with text:, then this test would not be run, and its output would simply be shown as INFO, meaning it is reported for informational purposes only.
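The exact structure is visible in Listing 9.12, but as a rough sketch the edit described above amounts to something like this (the file and check chosen here are purely illustrative):

$ vi cfg/cis-1.5/etcd.yaml
# beneath the text: entry of the check you want to silence, add:
#   skip: true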

Summary

In this chapter, we set up a temporary Kubernetes lab using the clever Minikube, and then we ran a highly respected compliance checking tool, kube-bench, over its live, running config.

The speedy and comprehensive output provided by kube-bench is perfect for CI/CD pipeline tests. Even if tools like this are run only after each version upgrade, or executed daily on a schedule with any output containing FAILs forwarded to human eyes for evaluation, this type of approach to mitigating potentially serious issues before they occur is not to be ignored.
