In chapter 10, we discussed how to deploy and secure microservices on Docker containers. In a real production deployment, you don’t have only containers; containers are used within a container orchestration framework. Just as a container is an abstraction over the physical machine, the container orchestration framework is an abstraction over the network. Kubernetes is the most popular container orchestration framework to date.
Understanding the fundamentals of Kubernetes and its security features is essential to any microservices developer. We cover basic constructs of Kubernetes in appendix J, so if you’re new to Kubernetes, read that appendix first. Even if you’re familiar with Kubernetes, we still recommend you at least skim through appendix J, because the rest of this chapter assumes you have the knowledge contained in it.
In this section, we deploy the Docker container that we built in chapter 10 with the STS in Kubernetes. This Docker image is already published to Docker Hub as prabath/insecure-sts-ch10:v1. To deploy a container in Kubernetes, we first need to create a Pod. If you read appendix J, you learned that developers or DevOps engineers don't work directly with Pods but with Deployments. So, to create a Pod in Kubernetes, we need to create a Deployment.
A Deployment is a Kubernetes object that we represent in a YAML file. Let's create the following YAML file (listing 11.1) with the prabath/insecure-sts-ch10:v1 Docker image. The source code related to all the samples in this chapter is available in the GitHub repository at https://github.com/microservices-security-in-action/samples in the chapter11 directory. You can also find the same YAML configuration shown in the following listing in the chapter11/sample01/sts.deployment.yaml file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sts-deployment
  labels:
    app: sts
spec:
  replicas: 1        ❶
  selector:          ❷
    matchLabels:
      app: sts
  template:          ❸
    metadata:
      labels:
        app: sts
    spec:
      containers:
      - name: sts
        image: prabath/insecure-sts-ch10:v1
        ports:
        - containerPort: 8443
❶ Instructs Kubernetes to run one replica of the matching Pods
❷ This Deployment will carry a matching Pod as per the selector. This is an optional section, which can carry multiple labels.
❸ A template describes how each Pod in the Deployment should look. If you define a selector/matchLabels, the Pod definition must carry a matching label.
In this section, we create a Deployment in Kubernetes for the STS that we defined in the YAML file in listing 11.1. We assume you have access to a Kubernetes cluster. If not, follow the instructions in appendix J, section J.5, to create a Kubernetes cluster with Google Kubernetes Engine (GKE). Once you have access to a Kubernetes cluster, go to the chapter11/sample01 directory and run the following command from your local machine to create a Deployment for the STS:
> kubectl apply -f sts.deployment.yaml
deployment.apps/sts-deployment created
Use the following command to find all the Deployments in your Kubernetes cluster (under the current namespace). If everything goes well, you should see one replica of the STS up and running:
> kubectl get deployment sts-deployment
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
sts-deployment   1/1     1            1           12s
Not everything goes fine all the time; multiple things can go wrong. If Kubernetes complains about the YAML file, it could be due to an extra space or an error introduced when copying and pasting the content from the e-book. Rather than copying and pasting from the e-book, always use the corresponding sample file from the GitHub repo.
Also, in case you have doubts about your YAML file, you can use an online tool like YAML Lint (www.yamllint.com) to validate it, or use kubeval (www.kubeval.com), which is an open source tool. YAML Lint validates only the YAML file, while kubeval also validates your configurations against the Kubernetes schema.
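If you prefer to check a file locally before running kubectl, a few lines of Python can catch YAML syntax errors early. This is a hedged sketch, not a replacement for kubeval: it assumes the PyYAML package is installed (pip install pyyaml) and, like YAML Lint, validates only the YAML syntax, not the Kubernetes schema.

```python
# Minimal YAML syntax check, assuming PyYAML is available.
# Like YAML Lint, this validates syntax only, not the Kubernetes schema.
import yaml

def is_valid_yaml(text: str) -> bool:
    """Return True if the text parses as YAML, False otherwise."""
    try:
        yaml.safe_load(text)
        return True
    except yaml.YAMLError:
        return False

print(is_valid_yaml("replicas: 1"))   # a well-formed snippet
print(is_valid_yaml("replicas: [1"))  # an unclosed bracket fails
```

You could run such a check over sts.deployment.yaml before ever touching the cluster; kubeval remains the better tool when you also want schema validation.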
Even though the kubectl apply command executes successfully, when you run kubectl get deployments, it may show that none of your replicas are ready. The following three commands are quite useful in such cases:
The kubectl describe command shows a set of metadata related to the Deployment:
> kubectl describe deployment sts-deployment
The kubectl get events command shows all the events created in the current Kubernetes namespace. If something goes wrong while creating the Deployment, you'll notice a set of errors or warnings:
> kubectl get events
Another useful command in troubleshooting is kubectl logs. You can run this command against a given Pod. First, though, run kubectl get pods to find the name of the Pod you want to get the logs from, and then use the following command with the Pod name (sts-deployment-799fdff46f-hdp5s is the Pod name in the following command):
> kubectl logs sts-deployment-799fdff46f-hdp5s --follow
Once you identify the issue related to your Kubernetes Deployment, and if you need help to get that sorted out, you can either reach out to any of the Kubernetes community forums (https://discuss.kubernetes.io) or use the Kubernetes Stack Overflow channel (https://stackoverflow.com/questions/tagged/kubernetes).
In this section, we create a Kubernetes Service that exposes the STS outside the Kubernetes cluster. If you’re new to Kubernetes Services, remember to check appendix J.
Here, we use a Kubernetes Service of LoadBalancer type. If there are multiple replicas of a given Pod, the Service of LoadBalancer type acts as a load balancer. Usually, it’s an external load balancer provided by the Kubernetes hosting environment, and in our case, it’s the GKE. Let’s have a look at the YAML file to create the Service (listing 11.2). The same YAML file is available at chapter11/sample01/sts.service.yaml.
The Service listens on port 443 and forwards the traffic to port 8443. If you look at listing 11.1, you’ll notice that when we create the Deployment, the container that carries the STS microservice is listening on port 8443. Even though it’s not 100% accurate to say that a Service listens on a given port, it’s a good way to simplify what’s happening underneath. As we discussed in appendix J, what really happens when we create a Service is that each node in the Kubernetes cluster updates the corresponding iptables, so any request destined to a Service IP address/name and port will be dispatched to one of the Pods it backs.
apiVersion: v1
kind: Service
metadata:
  name: sts-service
spec:
  type: LoadBalancer
  selector:
    app: sts
  ports:
  - protocol: TCP
    port: 443
    targetPort: 8443
To create the Service in the Kubernetes cluster, go to the chapter11/sample01 directory and run the following command from your local machine:
> kubectl apply -f sts.service.yaml
service/sts-service created
Use the following command to find all the Services in your Kubernetes cluster (under the current namespace):
> kubectl get services
NAME          TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes    ClusterIP      10.39.240.1     <none>        443/TCP         134m
sts-service   LoadBalancer   10.39.244.238   <pending>     443:30993/TCP   20s
It takes Kubernetes a few minutes to assign an external IP address to the sts-service we just created. If you run the same command after a couple of minutes, you'll notice the following output, with an external IP address assigned to the sts-service:
> kubectl get services
NAME          TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes    ClusterIP      10.39.240.1     <none>        443/TCP         135m
sts-service   LoadBalancer   10.39.244.238   34.82.103.6   443:30993/TCP   52s
Now let's test the STS with the following curl command run from your local machine. This is exactly the same curl command we used in section 7.2. The IP address in the command is the external IP address corresponding to the sts-service from the previous command:
> curl -v -X POST --basic -u applicationid:applicationsecret \
  -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" \
  -k -d "grant_type=password&username=peter&password=peter123&scope=foo" \
  https://34.82.103.6/oauth/token
In this command, applicationid is the client ID of the web application, and applicationsecret is the client secret (both are hardcoded in the STS). If everything works, the STS returns an OAuth 2.0 access token, which is a JWT (or a JWS, to be precise):
{ "access_token":"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1NTEzMTIzNz YsInVzZXJfbmFtZSI6InBldGVyIiwiYXV0aG9yaXRpZXMiOlsiUk9MRV9VU0VSIl0sImp0aSI6I jRkMmJiNjQ4LTQ2MWQtNGVlYy1hZTljLTVlYWUxZjA4ZTJhMiIsImNsaWVudF9pZCI6ImFwcGxp Y2F0aW9uaWQiLCJzY29wZSI6WyJmb28iXX0.tr4yUmGLtsH7q9Ge2i7gxyTsOOa0RS0Yoc2uBuA W5OVIKZcVsIITWV3bDN0FVHBzimpAPy33tvicFROhBFoVThqKXzzG00SkURN5bnQ4uFLAP0NpZ6 BuDjvVmwXNXrQp2lVXl4lQ4eTvuyZozjUSCXzCI1LNw5EFFi22J73g1_mRm2jdEhBp1TvMaRKLB Dk2hzIDVKzu5oj_gODBFm3a1S-IJjYoCimIm2igcesXkhipRJtjNcrJSegBbGgyXHVak2gB7I07 ryVwl_Re5yX4sV9x6xNwCxc_DgP9hHLzPM8yz_K97jlT6Rr1XZBlveyjfKs_XIXgU5qizRm9mt5 xg", "token_type":"bearer", "refresh_token":"", "expires_in":5999, "scope":"foo", "jti":"4d2bb648-461d-4eec-ae9c-5eae1f08e2a2" }
Here, we talk to the STS running in Kubernetes over TLS. The STS uses the TLS certificates embedded in the prabath/insecure-sts-ch10:v1 Docker image, and the Kubernetes load balancer simply tunnels all the requests it gets to the corresponding container.
In section 11.1, we used a Docker image called prabath/insecure-sts-ch10:v1. We named it insecure-sts for a reason. In chapter 10, we had a detailed discussion of why this image is insecure: while creating it, we embedded all the keys, and the credentials to access those keys, into the image itself. Because this image is on Docker Hub, anyone with access to it can figure out all our secrets, and that's the end of the world! You can find the source code of this insecure STS in the chapter10/sample01 directory.
To make the Docker image secure, the first thing we need to do is to externalize all the keystores and credentials. In chapter 10, we discussed how to externalize the application.properties file (where we keep all the credentials) from the Docker image as well as the two keystore files (one keystore includes the key to secure the TLS communication, while the other keystore includes the key to sign JWT access tokens that the STS issues). We published this updated Docker image to Docker Hub as prabath/secure-sts-ch10:v1. To help you understand how this Docker image is built, the following listing repeats the Dockerfile from listing 10.4.
FROM openjdk:8-jdk-alpine
ADD target/com.manning.mss.ch10.sample01-1.0.0.jar com.manning.mss.ch10.sample01-1.0.0.jar
ENTRYPOINT ["java", "-jar", "com.manning.mss.ch10.sample01-1.0.0.jar"]
We've externalized the application.properties file. Spring Boot reads the location of the application.properties file from the SPRING_CONFIG_LOCATION environment variable, which is set to /opt/application.properties. So Spring Boot expects the application.properties file to be present in the /opt directory of the Docker container. Because our goal here is to externalize the application.properties file, we can't bake it into the container filesystem.
In chapter 10, we used Docker bind mounts, so Docker loads the application.properties file from the host machine and maps it to the /opt directory of the container filesystem. Following is the command we used in chapter 10 to run the Docker container with bind mounts (shown only for your reference; if you want to try it, follow the instructions in section 10.2.2):
> export JKS_SOURCE="$(pwd)/keystores/keystore.jks"
> export JKS_TARGET="/opt/keystore.jks"
> export JWT_SOURCE="$(pwd)/keystores/jwt.jks"
> export JWT_TARGET="/opt/jwt.jks"
> export APP_SOURCE="$(pwd)/config/application.properties"
> export APP_TARGET="/opt/application.properties"
> docker run -p 8443:8443 \
    --mount type=bind,source="$JKS_SOURCE",target="$JKS_TARGET" \
    --mount type=bind,source="$JWT_SOURCE",target="$JWT_TARGET" \
    --mount type=bind,source="$APP_SOURCE",target="$APP_TARGET" \
    -e KEYSTORE_SECRET=springboot \
    -e JWT_KEYSTORE_SECRET=springboot \
    prabath/secure-sts-ch10:v1
In the command, we use bind mounts to pass not only the application.properties file, but also the two keystore files. If you look at the keystore locations mentioned in the application.properties file (listing 11.4), Spring Boot looks for the keystore.jks and jwt.jks files inside the /opt directory of the container filesystem. Also, in this listing, you can see that we've externalized the keystore passwords. Now, Spring Boot reads the password of the keystore.jks file from the KEYSTORE_SECRET environment variable, and the password of the jwt.jks file from the JWT_KEYSTORE_SECRET environment variable, both of which we pass in the docker run command.
server.port: 8443
server.ssl.key-store: /opt/keystore.jks
server.ssl.key-store-password: ${KEYSTORE_SECRET}
server.ssl.keyAlias: spring
spring.security.oauth.jwt: true
spring.security.oauth.jwt.keystore.password: ${JWT_KEYSTORE_SECRET}
spring.security.oauth.jwt.keystore.alias: jwtkey
spring.security.oauth.jwt.keystore.name: /opt/jwt.jks
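To make the ${...} placeholders concrete, here is a rough Python sketch of the kind of resolution Spring Boot performs at startup: each placeholder is looked up in the environment. The resolve helper is hypothetical, purely for illustration, and not a Spring API.

```python
# Sketch of Spring-style ${VAR} placeholder resolution from environment
# variables. The resolve() helper is hypothetical, not part of Spring Boot.
import os
import re

def resolve(text: str) -> str:
    """Replace ${VAR} placeholders with values from the environment.

    Unknown variables are left untouched, so missing configuration
    is visible rather than silently blanked out.
    """
    return re.sub(
        r"\$\{(\w+)\}",
        lambda m: os.environ.get(m.group(1), m.group(0)),
        text,
    )

os.environ["KEYSTORE_SECRET"] = "springboot"
line = "server.ssl.key-store-password: ${KEYSTORE_SECRET}"
print(resolve(line))  # server.ssl.key-store-password: springboot
```

This is why the docker run command (and later the Kubernetes Deployment) must supply KEYSTORE_SECRET and JWT_KEYSTORE_SECRET: without them, the placeholders never resolve to real passwords.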
When you run a container in a Kubernetes environment, you can’t pass configuration files from your local filesystem as we did with Docker in section 11.2. Kubernetes introduces an object called ConfigMap to decouple configuration from containers or microservices running in a Kubernetes environment. In this section, you’ll learn how to represent the application.properties file, the keystore.jks file, the jwt.jks file, and the keystore passwords as ConfigMap objects.
A ConfigMap is not the ideal object to represent sensitive data like keystore passwords. In such cases, we use another Kubernetes object called Secret. In section 11.3, we’ll move keystore passwords from ConfigMap to a Kubernetes Secret. If you’re new to Kubernetes ConfigMaps, see appendix J for the details and to find out how it works internally.
Kubernetes lets you create a ConfigMap object with the complete content of a configuration file. Listing 11.5 shows the content of the application.properties file under the data element, with application.properties as the key. The name of the key must match the name of the file that we expect to be in the container filesystem. You can find the complete ConfigMap definition of the application.properties file in the chapter11/sample01/sts.configuration.yaml file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sts-application-properties-config-map
data: ❶
  application.properties: | ❷
    server.port: 8443
    server.ssl.key-store: /opt/keystore.jks
    server.ssl.key-store-password: ${KEYSTORE_SECRET}
    server.ssl.keyAlias: spring
    spring.security.oauth.jwt: true
    spring.security.oauth.jwt.keystore.password: ${JWT_KEYSTORE_SECRET}
    spring.security.oauth.jwt.keystore.alias: jwtkey
    spring.security.oauth.jwt.keystore.name: /opt/jwt.jks
❶ Creates a ConfigMap object of a file with a text representation
❷ The name of the key must match the name of the file we expect to be in the container filesystem.
Once we define the ConfigMap in a YAML file, we can use the kubectl client to create a ConfigMap object in the Kubernetes environment. We defer that until section 11.2.5, when we complete our discussion on the other three ConfigMap objects as well (in sections 11.2.3 and 11.2.4).
Kubernetes lets you create a ConfigMap object of a file with a text representation (listing 11.5) or with a binary representation. In listing 11.6, we use the binary representation option to create ConfigMaps for the keystore.jks and jwt.jks files. The base64-encoded content of the keystore.jks file is listed under the key keystore.jks, under the binaryData element. The name of the key must match the name of the file we expect to be in the /opt directory of the container filesystem.
You can find the complete ConfigMap definitions of the keystore.jks and jwt.jks files in the chapter11/sample01/sts.configuration.yaml file. Also, the keystore.jks and jwt.jks binary files are available in the chapter10/sample01/keystores directory, in case you'd like to do the file-to-base64 conversion yourself.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sts-keystore-config-map
binaryData: ❶
  keystore.jks: [base64-encoded-text]
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: sts-jwt-keystore-config-map
binaryData:
  jwt.jks: [base64-encoded-text]
❶ Creates a ConfigMap object of a file with a binary representation
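If you'd rather not rely on a shell one-liner, the base64 text for a binaryData entry can be produced with a few lines of Python. This is a sketch under stated assumptions: the demo.bin file and the to_base64 helper are stand-ins for illustration, not part of the book's sample repo.

```python
# Produce the base64 text for a ConfigMap binaryData entry.
import base64

def to_base64(path: str) -> str:
    """Read a binary file and return its base64-encoded text."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# In-memory stand-in for a keystore file; real JKS files start with
# the magic number 0xFEEDFEED.
with open("demo.bin", "wb") as f:
    f.write(b"\xfe\xed\xfe\xed")

print(to_base64("demo.bin"))  # /u3+7Q==
```

Paste the resulting string (for your real keystore.jks or jwt.jks file) as the value of the corresponding binaryData key.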
First, don’t do this in a production deployment! Kubernetes stores anything that you store in a ConfigMap in cleartext. To store credentials in a Kubernetes deployment, we use a Kubernetes object called Secret instead of a ConfigMap. We talk about Secrets later in section 11.3. Until then, we’ll define keystore credentials in a ConfigMap.
Listing 11.7 shows the definition of the sts-keystore-credentials ConfigMap. There, we pass the password to access the keystore.jks file under the KEYSTORE_PASSWORD key, and the password to access the jwt.jks file under the JWT_KEYSTORE_PASSWORD key, both under the data element. You can find the complete ConfigMap definition of the keystore credentials in the chapter11/sample01/sts.configuration.yaml file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sts-keystore-credentials
data:
  KEYSTORE_PASSWORD: springboot
  JWT_KEYSTORE_PASSWORD: springboot
In the chapter11/sample01/sts.configuration.yaml file, you'll find the definitions of all four ConfigMaps we've discussed in this section so far. You can use the following kubectl command from the chapter11/sample01 directory to create the ConfigMap objects in your Kubernetes environment:
> kubectl apply -f sts.configuration.yaml
configmap/sts-application-properties-config-map created
configmap/sts-keystore-config-map created
configmap/sts-jwt-keystore-config-map created
configmap/sts-keystore-credentials created
The following kubectl command lists all the ConfigMap objects available in your Kubernetes cluster (under the current namespace):
> kubectl get configmaps
NAME                                    DATA   AGE
sts-application-properties-config-map   1      50s
sts-keystore-config-map                 0      50s
sts-jwt-keystore-config-map             0      50s
sts-keystore-credentials                2      50s
In this section, we'll go through the changes we need to introduce to the Kubernetes Deployment that we created in listing 11.1 so that it reads values from the ConfigMaps we created in section 11.2.5. You'll find the complete updated definition of the Kubernetes Deployment in the chapter11/sample01/sts.deployment.with.configmap.yaml file.
We'll focus on two uses of ConfigMaps. In one, we read the content of a file from a ConfigMap and mount that file into the container filesystem; in the other, we read a value from a ConfigMap and set it as an environment variable in the container. The following listing shows the part of the Deployment object that carries the configuration related to the containers.
spec:
  containers:
  - name: sts
    image: prabath/secure-sts-ch10:v1
    imagePullPolicy: Always
    ports:
    - containerPort: 8443
    volumeMounts: ❶
    - name: application-properties ❷
      mountPath: "/opt/application.properties" ❸
      subPath: "application.properties"
    - name: keystore
      mountPath: "/opt/keystore.jks"
      subPath: "keystore.jks"
    - name: jwt-keystore
      mountPath: "/opt/jwt.jks"
      subPath: "jwt.jks"
    env: ❹
    - name: KEYSTORE_SECRET ❺
      valueFrom:
        configMapKeyRef:
          name: sts-keystore-credentials ❻
          key: KEYSTORE_PASSWORD ❼
    - name: JWT_KEYSTORE_SECRET
      valueFrom:
        configMapKeyRef:
          name: sts-keystore-credentials
          key: JWT_KEYSTORE_PASSWORD
  volumes:
  - name: application-properties ❽
    configMap:
      name: sts-application-properties-config-map ❾
  - name: keystore
    configMap:
      name: sts-keystore-config-map
  - name: jwt-keystore
    configMap:
      name: sts-jwt-keystore-config-map
❶ Defines the volume mounts used by this Kubernetes Deployment
❷ The name of the volume, which refers to the volumes section
❸ Location of the container filesystem to mount this volume
❹ Defines the set of environment variables read by the Kubernetes Deployment
❺ The name of the environment variable. This is the exact name you find in the application.properties file.
❻ The name of the ConfigMap to read the value for this environment variable
❼ The name of the key corresponding to the value we want to read from the corresponding ConfigMap
❽ The name of the volume. This is referred to by the name element under the volumeMounts section of the Deployment.
❾ The name of the ConfigMap, which carries the data related to the application.properties file
You can use the following kubectl command from the chapter11/sample01 directory to update the Kubernetes Deployment with the changes annotated in listing 11.8:
> kubectl apply -f sts.deployment.with.configmap.yaml
deployment.apps/sts-deployment configured
The Kubernetes Service we created in section 11.1.4 requires no changes. Make sure it's up and running with the correct IP address by using the kubectl get services command. Now let's test the STS with the following curl command run from your local machine:
> curl -v -X POST --basic -u applicationid:applicationsecret \
  -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" \
  -k -d "grant_type=password&username=peter&password=peter123&scope=foo" \
  https://34.82.103.6/oauth/token
In this command, applicationid is the client ID of the web application, and applicationsecret is the client secret. If everything works, the STS returns an OAuth 2.0 access token, which is a JWT (or a JWS, to be precise):
{ "access_token":"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1NTEzMTIzNz YsInVzZXJfbmFtZSI6InBldGVyIiwiYXV0aG9yaXRpZXMiOlsiUk9MRV9VU0VSIl0sImp0aSI6I jRkMmJiNjQ4LTQ2MWQtNGVlYy1hZTljLTVlYWUxZjA4ZTJhMiIsImNsaWVudF9pZCI6ImFwcGxp Y2F0aW9uaWQiLCJzY29wZSI6WyJmb28iXX0.tr4yUmGLtsH7q9Ge2i7gxyTsOOa0RS0Yoc2uBuA W5OVIKZcVsIITWV3bDN0FVHBzimpAPy33tvicFROhBFoVThqKXzzG00SkURN5bnQ4uFLAP0NpZ6 BuDjvVmwXNXrQp2lVXl4lQ4eTvuyZozjUSCXzCI1LNw5EFFi22J73g1_mRm2jdEhBp1TvMaRKLB Dk2hzIDVKzu5oj_gODBFm3a1S-IJjYoCimIm2igcesXkhipRJtjNcrJSegBbGgyXHVak2gB7I07 ryVwl_Re5yX4sV9x6xNwCxc_DgP9hHLzPM8yz_K97jlT6Rr1XZBlveyjfKs_XIXgU5qizRm9mt5 xg", "token_type":"bearer", "refresh_token":"", "expires_in":5999, "scope":"foo", "jti":"4d2bb648-461d-4eec-ae9c-5eae1f08e2a2" }
In Kubernetes, we can run more than one container in a Pod, but as a practice, we run only one application container. Along with an application container, we can also run one or more init containers. If you’re familiar with Java (or any other programming language), an init container in Kubernetes is like a constructor in a Java class. Just as the constructor in a Java class runs well before any other methods, an init container in a Pod must run and complete before any other application containers in the Pod start.
This is a great way to initialize a Kubernetes Pod. You can use an init container to pull any files the application needs (keystores, policies, configurations, and so on). Just as with any other application container, we can have more than one init container in a given Pod; but unlike application containers, each init container must run to completion before the next init container starts.
Listing 11.9 modifies the STS Deployment to load keystore.jks and jwt.jks files from a Git repository by using an init container instead of loading them from a ConfigMap object (as in listing 11.8). You can find the complete updated definition of the Kubernetes Deployment in the chapter11/sample01/sts.deployment.with.initcontainer.yaml file. The following listing shows part of the updated STS deployment, corresponding to the init container.
initContainers: ❶
- name: init-keystores
  image: busybox:1.28 ❷
  command: ❸
  - "/bin/sh"
  - "-c"
  - "wget ...sample01/keystores/jwt.jks -O /opt/jwt.jks && wget ...sample01/keystores/keystore.jks -O /opt/keystore.jks"
  volumeMounts: ❹
  - name: keystore ❺
    mountPath: "/opt/keystore.jks" ❻
    subPath: "keystore.jks" ❼
  - name: jwt-keystore
    mountPath: "/opt/jwt.jks"
    subPath: "jwt.jks"
❶ Lists out all the init containers
❷ The name of the Docker image used as the init container to pull the keystores from a Git repository
❸ The Docker container executes this command at startup. The jwt.jks and keystore.jks files are copied to the /opt directory of the container.
❹ Defines a volume mount, so that the keystores loaded by the init container can be used by other containers in the Pod
❺ Any container in the Pod that refers to the same volume mount must use the same name.
❻ The location in the container filesystem where this volume is mounted
❼ The subPath property specifies a subpath inside the referenced volume instead of its root.
We've created the init container with the busybox Docker image. Because the busybox container is configured as an init container, it runs before any other container in the Pod. Under the command element, we specify the program the busybox container should run. There, we fetch the keystore.jks and jwt.jks files from a Git repo and copy them to the /opt directory of the busybox container filesystem.
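The init container's job, fetch files from a URL and drop them where the application container expects them, can be sketched locally in Python. The file:// URL and temporary directory below are stand-ins so the sketch runs without a Git repo or a cluster; a real init container would use https URLs and write into the shared volume.

```python
# Local simulation of what the init container does: download keystore
# files and place them on the volume shared with the application container.
import os
import tempfile
import urllib.request

workdir = tempfile.mkdtemp()

# Stand-in for a keystore hosted in a Git repo
src = os.path.join(workdir, "jwt.jks")
with open(src, "wb") as f:
    f.write(b"dummy-keystore-bytes")

# What the init container does: fetch to the shared /opt volume
dst = os.path.join(workdir, "opt-jwt.jks")
urllib.request.urlretrieve("file://" + src, dst)

with open(dst, "rb") as f:
    print(f.read() == b"dummy-keystore-bytes")  # True
```

The key design point survives the simplification: the init container and the application container never talk directly; they only share a volume.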
The whole objective of the init container is to get the two keystores into the Docker container that runs the STS. To do that, we need two volume mounts; both volumes (keystore and jwt-keystore) are mapped to the /opt directory. Because we already have volume mounts with these two names (under the secure-sts container in the following listing), the two keystores are also visible to the secure-sts container filesystem.
volumeMounts:
- name: application-properties
  mountPath: "/opt/application.properties"
  subPath: "application.properties"
- name: keystore
  mountPath: "/opt/keystore.jks"
  subPath: "keystore.jks"
- name: jwt-keystore
  mountPath: "/opt/jwt.jks"
  subPath: "jwt.jks"
Finally, to support init containers, we need to make one more change to the original STS Deployment. Earlier, under the volumes element of the STS Deployment, we pointed to the corresponding ConfigMaps; now we need to point to a special volume type called emptyDir, as shown here. An emptyDir volume is created empty when Kubernetes creates the corresponding Pod, and the keystore files pulled from a Git repo by the init container populate it. You'll lose the content of an emptyDir volume when you delete the corresponding Pod:
volumes:
- name: application-properties
  configMap:
    name: sts-application-properties-config-map
- name: keystore
  emptyDir: {}
- name: jwt-keystore
  emptyDir: {}
Let's use the following kubectl command with the chapter11/sample01/sts.deployment.with.initcontainer.yaml file to update the STS Deployment to use init containers:
> kubectl apply -f sts.deployment.with.initcontainer.yaml
deployment.apps/sts-deployment configured
As we discussed in section 11.2.4, ConfigMap is not the right way of externalizing sensitive data in Kubernetes. Secret is a Kubernetes object, just like ConfigMap, that carries name/value pairs but is ideal for storing sensitive data. In this section, we discuss Kubernetes Secrets in detail and see how to update the STS Deployment with Kubernetes Secrets, instead of using ConfigMaps, to externalize keystore credentials.
Kubernetes provisions a Secret to each container of every Pod it creates. This is called the default token secret. To see the default token secret, run the following kubectl command:
> kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-l9fj8   kubernetes.io/service-account-token   3      10d
Listing 11.11 shows the structure of the default token secret returned by kubectl in YAML format. The name/value pairs under the data element carry the confidential data in base64-encoded format. The default token secret has three name/value pairs: ca.crt, namespace, and token. The listing shows only part of the values of ca.crt and token.
> kubectl get secret default-token-l9fj8 -o yaml
apiVersion: v1
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: default
    kubernetes.io/service-account.uid: ff3d13ba-d8ee-11e9-a88f-42010a8a01e4
  name: default-token-l9fj8
  namespace: default
type: kubernetes.io/service-account-token
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ...
  namespace: ZGVmYXVsdA==
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSX...
The value of ca.crt is, in fact, the root certificate of the Kubernetes cluster. You can use an online tool like Base64 Decode Online (https://base64.guru/converter/decode/file) to convert the base64-encoded text to a file. You'll see something similar to the following, which is the PEM-encoded root certificate of the Kubernetes cluster:
-----BEGIN CERTIFICATE-----
MIIDCzCCAfOgAwIBAgIQdzQ6l91oRfLI141a9hEPoTANBgkqhkiG9w0BAQsFADAv
MS0wKwYDVQQDEyRkMWJjZGU1MC1jNjNkLTQ5MWYtOTZlNi0wNTEwZDliOTI5ZTEw
HhcNMTkwOTE3MDA1ODI2WhcNMjQwOTE1MDE1ODI2WjAvMS0wKwYDVQQDEyRkMWJj
ZGU1MC1jNjNkLTQ5MWYtOTZlNi0wNTEwZDliOTI5ZTEwggEiMA0GCSqGSIb3DQEB
AQUAA4IBDwAwggEKAoIBAQChdg15gweIqZZraHBFH3sB9FKfv2lDZ03/MAq6ek3J
NJj+7huiJUy6PuP9t5rOiGU/JIvRI7iXipqc/JGMRjmMVwCmSv6D+5N8+JmvhZ4i
uzbjUOpiuyozRsmf3hzbwbcLbcA94Y1d+oK0TZ+lYs8XNhX0RCM+gDKryC5MeGnY
zqd+/MLS6zajG3qlGQAWn9XKClPpRDOJh5h/uNQs+r2Y9Uz4oi4shVUvXibwOHrh
0MpAt6BGujDMNDNRGH8/dK1CZ1EYJYoUaOTOeF21RSJ2y82AFS5eA17hSxY4j6x5
3ipQt1pe49j5m7QU5s/VoDGsBBge6vYd0AUL9y96xFUvAgMBAAGjIzAhMA4GA1Ud
DwEB/wQEAwICBDAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQB4
33lsGOSU2z6PKLdnZHrnnwZq44AH3CzCQ+M6cQPTU63XHXWcEQtxSDcjDTm1xZqR
qeoUcgCW4mBjdG4dMkQD+MuBUoGLQPkv5XsnlJg+4zRhKTD78PUEI5ZF8HBBX5Vt
+3IbrBelVhREuwDGClPmMR0/081ZlwLZFrbFRwRAZQmkEgCtfcOUGQ3+HLQw1U2P
xKFLx6ISUNSkPfO5pkBW6Tg3rJfQnfuKUPxUFI/3JUjXDzl2XLx7GFF1J4tW812A
T6WfgDvYS2Ld9o/rw3C036NtivdjGrnb2QqEosGeDPQOXs53sgFT8LPNkQ+f/8nn
G0Jk4TNzdxezmyyyvxh2
-----END CERTIFICATE-----
To get something meaningful out of this, you can use an online tool like the Report URI PEM decoder (https://report-uri.com/home/pem_decoder) to decode the PEM file, resulting in something similar to the following:
Common Name: d1bcde50-c63d-491f-96e6-0510d9b929e1
Issuing Certificate: d1bcde50-c63d-491f-96e6-0510d9b929e1
Serial Number: 77343A97DD6845F2C8D78D5AF6110FA1
Signature: sha256WithRSAEncryption
Valid From: 00:58:26 17 Sep 2019
Valid To: 01:58:26 15 Sep 2024
Key Usage: Certificate Sign
Basic Constraints: CA:TRUE
The token under the data element in listing 11.11 carries a JSON Web Token (see appendix B for details on JWTs). This JWT is itself base64 encoded. You can use an online tool like Base64 Encode and Decode (www.base64decode.org) to base64-decode the token, and an online JWT decoder like JWT.IO (http://jwt.io) to decode the JWT. The following shows the decoded payload of the JWT:
{
  "iss": "kubernetes/serviceaccount",
  "kubernetes.io/serviceaccount/namespace": "default",
  "kubernetes.io/serviceaccount/secret.name": "default-token-l9fj8",
  "kubernetes.io/serviceaccount/service-account.name": "default",
  "kubernetes.io/serviceaccount/service-account.uid": "ff3d13ba-d8ee-11e9-a88f-42010a8a01e4",
  "sub": "system:serviceaccount:default:default"
}
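You don't need an online tool to peek inside a JWT: its header and payload are just base64url-encoded JSON, so a few lines of Python do the job. No key is required because we only read the claims; a key matters only when verifying the signature. As an example, the sketch below decodes the header segment of the STS token shown earlier in this chapter.

```python
# Decode a JWT segment without any external service. JWT segments are
# base64url-encoded JSON with the trailing '=' padding stripped.
import base64
import json

def decode_jwt_part(part: str) -> dict:
    """Base64url-decode one JWT segment, re-adding stripped padding."""
    padded = part + "=" * (-len(part) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Header of the STS-issued token shown earlier in this chapter
header = "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9"
print(decode_jwt_part(header))  # {'alg': 'RS256', 'typ': 'JWT'}
```

The same function applied to the second segment of the service account token yields the payload shown above.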
Each container in a Kubernetes Pod has access to this JWT from the /var/run/secrets/kubernetes.io/serviceaccount directory in its own container filesystem. If you want to access the Kubernetes API server from a container, you can use this JWT for authentication. In fact, this JWT is bound to a Kubernetes service account. We discuss service accounts in detail in section 11.6.
In section 11.2, we updated the STS Deployment to use ConfigMaps to externalize configuration data. Even for keystore credentials, we used ConfigMaps instead of Secrets. In this section, we’re going to update the STS Deployment to use Secrets to represent keystore credentials. First, we need to define the Secret object as shown in listing 11.12. The complete definition of the Secret object is in the chapter11/sample01/sts.secrets.yaml file.
apiVersion: v1
kind: Secret
metadata:
  name: sts-keystore-secrets
stringData:
  KEYSTORE_PASSWORD: springboot
  JWT_KEYSTORE_PASSWORD: springboot
To create the Secret in the Kubernetes environment, run the following command from the chapter11/sample01 directory:
> kubectl apply -f sts.secrets.yaml
secret/sts-keystore-secrets created
In listing 11.12, we defined the keystore credentials under the stringData element. Another option is to define credentials under the data element; listing 11.16 (later in the chapter) has an example. When you define credentials under the data element, you need to base64-encode the values. If your credentials are mostly binary, like private keys, you need to use the data element; for text credentials, the stringData element is the preferred option.

Another important thing to notice is that Kubernetes designed the stringData element to be write-only. That means when you view a Secret you defined with stringData, it won't be returned under a stringData element; instead, Kubernetes base64-encodes the values and returns them under the data element. You can use the following kubectl command to list the definition of the Secret object we created in listing 11.12 in YAML format:
> kubectl get secret sts-keystore-secrets -o yaml
apiVersion: v1
kind: Secret
metadata:
  name: sts-keystore-secrets
data:
  KEYSTORE_PASSWORD: c3ByaW5nYm9vdA==
  JWT_KEYSTORE_PASSWORD: c3ByaW5nYm9vdA==
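Keep in mind that the data values are just the base64 encoding of the stringData values you supplied; base64 is an encoding, not encryption, so anyone who can read the Secret object can recover the plain text. A quick Python check of the round trip:

```python
import base64

# The value Kubernetes returns under the data element of the Secret
encoded = "c3ByaW5nYm9vdA=="

# Decoding yields the original stringData value
decoded = base64.b64decode(encoded).decode()
print(decoded)  # springboot

# Encoding the plain-text credential reproduces the stored value
assert base64.b64encode(b"springboot").decode() == encoded
```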
Now let’s see how to update the STS Deployment to use the Secret object we created. You can find the updated YAML configuration for the STS Deployment in the chapter11/sample01/sts.deployment.with.secrets.yaml file. The following listing shows part of the complete STS Deployment, which reads keystore credentials from the Secret object and populates the environment variables.
env:
- name: KEYSTORE_SECRET
  valueFrom:
    secretKeyRef:
      name: sts-keystore-secrets
      key: KEYSTORE_PASSWORD
- name: JWT_KEYSTORE_SECRET
  valueFrom:
    secretKeyRef:
      name: sts-keystore-secrets
      key: JWT_KEYSTORE_PASSWORD
Let's run the following kubectl command from chapter11/sample01 to update the STS Deployment:
> kubectl apply -f sts.deployment.with.secrets.yaml
deployment.apps/sts-deployment configured
You should pick Secrets over ConfigMaps for sensitive data because of the way Kubernetes internally handles Secrets. Kubernetes makes sure that the sensitive data represented as Secrets is accessible only to the Pods that need it, and even then, the Secrets are never written to disk, only kept in memory. The only place Kubernetes writes Secrets to disk is at the master node, where all the Secrets are stored in etcd (see appendix J), the Kubernetes distributed key-value store. From the Kubernetes 1.7 release onward, etcd can store Secrets in an encrypted format.
In this section, we're going to deploy the Order Processing microservice in Kubernetes. As shown in figure 11.1, the Order Processing microservice trusts the tokens issued by the STS, which we now have running in Kubernetes. Once the client application passes the JWT to the Order Processing microservice, the Order Processing microservice talks to the STS to retrieve its public key to validate the signature of the JWT. This is the only communication that happens between the Order Processing microservice and the STS. To be precise, the Order Processing microservice doesn't wait until it gets a request to talk to the STS; it talks to the STS at startup to get the public key and keeps it in memory.
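The fetch-once-at-startup behavior described above can be sketched as a small cache. This is a minimal illustration, not the book's actual Spring Security implementation; the fetch_key callable is a hypothetical stand-in for the HTTPS call to the STS's /oauth/token_key endpoint.

```python
from typing import Callable

class StsKeyCache:
    """Fetch the STS public key once at startup and serve it from memory."""

    def __init__(self, fetch_key: Callable[[], str]):
        self._fetch_key = fetch_key
        self._key = None

    def start(self) -> None:
        """Call the STS once, at service startup."""
        self._key = self._fetch_key()

    def verification_key(self) -> str:
        """Return the cached key; no call to the STS per request."""
        if self._key is None:
            raise RuntimeError("service not started: key not fetched yet")
        return self._key

# Simulate the STS endpoint with a counter so we can see it is hit only once.
calls = []
def fake_fetch() -> str:
    calls.append(1)
    return "-----BEGIN PUBLIC KEY-----..."  # placeholder, not a real key

cache = StsKeyCache(fake_fetch)
cache.start()
cache.verification_key()   # validated request 1
cache.verification_key()   # validated request 2
print(len(calls))  # 1 -- the STS was contacted only once, at startup
```

The trade-off of this design is that a key rotated at the STS isn't picked up until the service restarts (or refreshes the cache).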
In chapter 10, we explained how to run the Order Processing microservice as a Docker container. This is the Docker command we used in section 10.4, which externalized the application.properties file, the keystore (keystore.jks), the trust store (trust-store.jks), the keystore credentials, and the trust store credentials. You don’t need to run this command now; if you want to try it out, follow the instructions in chapter 10:
> export JKS_SOURCE="$(pwd)/keystores/keystore.jks"
> export JKS_TARGET="/opt/keystore.jks"
> export JWT_SOURCE="$(pwd)/keystores/jwt.jks"
> export JWT_TARGET="/opt/jwt.jks"
> export APP_SOURCE="$(pwd)/config/application.properties"
> export APP_TARGET="/opt/application.properties"
> docker run -p 8443:8443 --name sts --net manning-network \
--mount type=bind,source="$JKS_SOURCE",target="$JKS_TARGET" \
--mount type=bind,source="$JWT_SOURCE",target="$JWT_TARGET" \
--mount type=bind,source="$APP_SOURCE",target="$APP_TARGET" \
-e KEYSTORE_SECRET=springboot \
-e JWT_KEYSTORE_SECRET=springboot \
prabath/order-processing:v1
To deploy the Order Processing microservice in Kubernetes, we need to create a Kubernetes Deployment and a Service. This is similar to what we did before when deploying the STS in Kubernetes.
In this section, we create three ConfigMaps to externalize the application.properties file and the two keystores (keystore.jks and trust-store.jks), and a Secret to externalize the keystore credentials. Listing 11.14 shows the definition of the ConfigMap for the application.properties file. The value of security.oauth2.resource.jwt.key-uri in this listing carries the endpoint of the STS. Here the sts-service hostname is the name of the Kubernetes Service we created for the STS.
apiVersion: v1
kind: ConfigMap
metadata:
  name: orders-application-properties-config-map
data:
  application.properties: |
    server.port: 8443
    server.ssl.key-store: /opt/keystore.jks
    server.ssl.key-store-password: ${KEYSTORE_SECRET}
    server.ssl.keyAlias: spring
    server.ssl.trust-store: /opt/trust-store.jks
    server.ssl.trust-store-password: ${TRUSTSTORE_SECRET}
    security.oauth2.resource.jwt.key-uri: https://sts-service/oauth/token_key
    inventory.service: https://inventory-service/inventory
    logging.level.org.springframework=DEBUG
Listing 11.15 shows the ConfigMap definitions for the keystore.jks and trust-store.jks files. The binaryData element in each ConfigMap definition in this listing carries the base64-encoded text of the corresponding keystore file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: orders-keystore-config-map
binaryData:
  keystore.jks: [base64-encoded-text]
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: orders-truststore-config-map
binaryData:
  trust-store.jks: [base64-encoded-text]
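If you'd rather not use an online tool to produce the [base64-encoded-text] values for binaryData, a few lines of Python will do. The temporary file written here is a stand-in for your real keystore.jks; point the function at the actual file instead.

```python
import base64
import tempfile

def binary_data_value(path: str) -> str:
    """Return the base64 text to paste under a ConfigMap's binaryData key."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

# Demonstrate with a small temporary binary file standing in for keystore.jks
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"\x00\x01binary-keystore-bytes")
    path = tmp.name

print(binary_data_value(path))
```

Reading in binary mode ("rb") matters: keystores aren't text, and a text-mode read would corrupt the bytes before encoding.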
Listing 11.16 shows the Secret definition of the credentials for the keystore.jks and trust-store.jks files. The value of each key under the data element in this listing carries the base64-encoded text of the corresponding credentials. You can use the following command on a Mac terminal to generate the base64-encoded value of a given text:
> echo -n "springboot" | base64 c3ByaW5nYm9vdA==
apiVersion: v1
kind: Secret
metadata:
  name: orders-key-credentials
type: Opaque
data:
  KEYSTORE_PASSWORD: c3ByaW5nYm9vdA==
  TRUSTSTORE_PASSWORD: c3ByaW5nYm9vdA==
In the chapter11/sample02/order.processing.configuration.yaml file, you'll find ConfigMap and Secret definitions of all that we discussed in this section. You can use the following kubectl command from the chapter11/sample02 directory to create ConfigMap and Secret objects in your Kubernetes environment:
> kubectl apply -f order.processing.configuration.yaml
configmap/orders-application-properties-config-map created
configmap/orders-keystore-config-map created
configmap/orders-truststore-config-map created
secret/orders-key-credentials created
The following two kubectl commands list all the ConfigMap and Secret objects available in your Kubernetes cluster (under the current namespace):
> kubectl get configmaps
NAME                                       DATA   AGE
orders-application-properties-config-map   1      50s
orders-keystore-config-map                 0      50s
orders-truststore-config-map               0      50s

> kubectl get secrets
NAME                     DATA   AGE
orders-key-credentials   2      50s
In this section, we create a Deployment in Kubernetes for the Order Processing microservice that we defined in the order.processing.deployment.with.configmap.yaml file found in the chapter11/sample02 directory. You can use the following kubectl command from the chapter11/sample02 directory to create the Kubernetes Deployment:
> kubectl apply -f order.processing.deployment.with.configmap.yaml
deployment.apps/orders-deployment created
To expose the Kubernetes Deployment we created in section 11.4.2 for the Order Processing microservice, we also need to create a Kubernetes Service. You can find the definition of this Service in the chapter11/sample02/order.processing.service.yml file. Use the following kubectl command from the chapter11/sample02 directory to create the Kubernetes Service:
> kubectl apply -f order.processing.service.yml
service/orders-service created
Then use the following command to find all the Services in your Kubernetes cluster (under the current namespace). It takes a few minutes for Kubernetes to assign an external IP address to the orders-service we just created. After a couple of minutes, you'll notice the following output with an external IP address assigned to the Service. That's the IP address you should use to access the Order Processing microservice:
> kubectl get services
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)         AGE
kubernetes       ClusterIP      10.39.240.1     <none>          443/TCP         5d21h
orders-service   LoadBalancer   10.39.249.66    35.247.11.161   443:32401/TCP   72s
sts-service      LoadBalancer   10.39.255.168   34.83.188.72    443:31749/TCP   8m39s
Both the Kubernetes Services we created in this chapter for the STS and the Order Processing microservices are of LoadBalancer type. For a Service of the LoadBalancer type to work, Kubernetes uses an external load balancer. Since we run our examples in this chapter on GKE, GKE itself provides this external load balancer.
In this section, we test the end-to-end flow (figure 11.2, which is the same as figure 11.1, repeated here for convenience). We need to first get a token from the STS and then use it to access the Order Processing microservice. Now we have both microservices running on Kubernetes. Let's use the following curl command, run from your local machine, to get a token from the STS. Make sure you use the correct external IP address of the STS:
> curl -v -X POST --basic -u applicationid:applicationsecret \
-H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" \
-k -d "grant_type=password&username=peter&password=peter123&scope=foo" \
https://34.83.188.72/oauth/token
In this command, applicationid is the client ID of the web application, and applicationsecret is the client secret. If everything works, the STS returns an OAuth 2.0 access token, which is a JWT (or a JWS, to be precise):
{ "access_token":"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1NTEzMTIzNz YsInVzZXJfbmFtZSI6InBldGVyIiwiYXV0aG9yaXRpZXMiOlsiUk9MRV9VU0VSIl0sImp0aSI6I jRkMmJiNjQ4LTQ2MWQtNGVlYy1hZTljLTVlYWUxZjA4ZTJhMiIsImNsaWVudF9pZCI6ImFwcGxp Y2F0aW9uaWQiLCJzY29wZSI6WyJmb28iXX0.tr4yUmGLtsH7q9Ge2i7gxyTsOOa0RS0Yoc2uBuA W5OVIKZcVsIITWV3bDN0FVHBzimpAPy33tvicFROhBFoVThqKXzzG00SkURN5bnQ4uFLAP0NpZ6 BuDjvVmwXNXrQp2lVXl4lQ4eTvuyZozjUSCXzCI1LNw5EFFi22J73g1_mRm2jdEhBp1TvMaRKLB Dk2hzIDVKzu5oj_gODBFm3a1S-IJjYoCimIm2igcesXkhipRJtjNcrJSegBbGgyXHVak2gB7I07 ryVwl_Re5yX4sV9x6xNwCxc_DgP9hHLzPM8yz_K97jlT6Rr1XZBlveyjfKs_XIXgU5qizRm9mt5 xg", "token_type":"bearer", "refresh_token":"", "expires_in":5999, "scope":"foo", "jti":"4d2bb648-461d-4eec-ae9c-5eae1f08e2a2" }
Now try to invoke the Order Processing microservice with the JWT you got from the previous curl command. Set the JWT in the HTTP Authorization Bearer header using the following curl command, and invoke the Order Processing microservice. Because the JWT is a little lengthy, you can use a small trick with the curl command in this case: export the value of the JWT to an environment variable (TOKEN) and then use that environment variable in your request to the Order Processing microservice, as shown here:
> export TOKEN=jwt_access_token
> curl -k -H "Authorization: Bearer $TOKEN" https://35.247.11.161/orders/11

{
  "customer_id":"101021",
  "order_id":"11",
  "payment_method":{
    "card_type":"VISA",
    "expiration":"01/22",
    "name":"John Doe",
    "billing_address":"201, 1st Street, San Jose, CA"
  },
  "items":[
    {"code":"101","qty":1},
    {"code":"103","qty":5}
  ],
  "shipping_address":"201, 1st Street, San Jose, CA"
}
In this section, we introduce another microservice, the Inventory microservice, to our Kubernetes environment and see how service-to-service communication works (figure 11.3). Here, when you invoke the Order Processing microservice with a JWT obtained from the STS, the Order Processing microservice internally talks to the Inventory microservice.
Because the process of deploying the Inventory microservice on Kubernetes is similar to the process we followed when deploying the Order Processing microservice, we won't go into details. The only key difference is that the Kubernetes Service corresponding to the Inventory microservice is of the ClusterIP type (the default Service type) because we don't want external client applications to access it directly.
Let's run the following kubectl command from the chapter11/sample03 directory to create a Kubernetes Deployment for the Inventory microservice. This command creates a set of ConfigMaps, a Secret, a Deployment, and a Service:
> kubectl apply -f .
configmap/inventory-application-properties-config-map created
configmap/inventory-keystore-config-map created
configmap/inventory-truststore-config-map created
secret/inventory-key-credentials created
deployment.apps/inventory-deployment created
service/inventory-service created
Use the following command to find all the Services in your Kubernetes cluster (under the current namespace). Because the Inventory microservice is a Service of the ClusterIP type, you won't find an external IP address for it:
> kubectl get services
NAME                TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)
inventory-service   ClusterIP      10.39.251.182   <none>          443/TCP
orders-service      LoadBalancer   10.39.245.40    35.247.11.161   443:32078/TCP
sts-service         LoadBalancer   10.39.252.24    34.83.188.72    443:30288/TCP
Let's test the end-to-end flow (figure 11.3). We need to first get a token from the STS and then use it to access the Order Processing microservice. Now we have all three microservices running on Kubernetes. Let's use the following curl command, run from your local machine, to get a token from the STS. Make sure you use the correct external IP address of the STS:
> curl -v -X POST --basic -u applicationid:applicationsecret \
-H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" \
-k -d "grant_type=password&username=peter&password=peter123&scope=foo" \
https://34.83.188.72/oauth/token
In this command, applicationid is the client ID of the web application, and applicationsecret is the client secret. If everything works, the STS returns an OAuth 2.0 access token, which is a JWT (or a JWS, to be precise):
{ "access_token":"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1NTEzMTIzNz YsInVzZXJfbmFtZSI6InBldGVyIiwiYXV0aG9yaXRpZXMiOlsiUk9MRV9VU0VSIl0sImp0aSI6I jRkMmJiNjQ4LTQ2MWQtNGVlYy1hZTljLTVlYWUxZjA4ZTJhMiIsImNsaWVudF9pZCI6ImFwcGxp Y2F0aW9uaWQiLCJzY29wZSI6WyJmb28iXX0.tr4yUmGLtsH7q9Ge2i7gxyTsOOa0RS0Yoc2uBuA W5OVIKZcVsIITWV3bDN0FVHBzimpAPy33tvicFROhBFoVThqKXzzG00SkURN5bnQ4uFLAP0NpZ6 BuDjvVmwXNXrQp2lVXl4lQ4eTvuyZozjUSCXzCI1LNw5EFFi22J73g1_mRm2jdEhBp1TvMaRKLB Dk2hzIDVKzu5oj_gODBFm3a1S-IJjYoCimIm2igcesXkhipRJtjNcrJSegBbGgyXHVak2gB7I07 ryVwl_Re5yX4sV9x6xNwCxc_DgP9hHLzPM8yz_K97jlT6Rr1XZBlveyjfKs_XIXgU5qizRm9mt5 xg", "token_type":"bearer", "refresh_token":"", "expires_in":5999, "scope":"foo", "jti":"4d2bb648-461d-4eec-ae9c-5eae1f08e2a2" }
Now let's invoke the Order Processing microservice with the JWT you got from the previous curl command. Set the JWT in the HTTP Authorization Bearer header using the following curl command and invoke the Order Processing microservice. Because the JWT is a little lengthy, you can use a small trick with the curl command: export the JWT to an environment variable (TOKEN), then use that environment variable in your request to the Order Processing microservice:
> export TOKEN=jwt_access_token
> curl -v -k https://35.247.11.161/orders \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d @- << EOF
{
  "customer_id":"101021",
  "payment_method":{
    "card_type":"VISA",
    "expiration":"01/22",
    "name":"John Doe",
    "billing_address":"201, 1st Street, San Jose, CA"
  },
  "items":[
    {"code":"101","qty":1},
    {"code":"103","qty":5}
  ],
  "shipping_address":"201, 1st Street, San Jose, CA"
}
EOF
In the previous command, we do an HTTP POST to the Order Processing microservice, which causes the Order Processing microservice to talk to the Inventory microservice. In return, you won't get any JSON payload, only an HTTP 201 status code. When the Order Processing microservice talks to the Inventory microservice, the Inventory microservice prints the item codes in its logs. You can tail the logs with the following command, which includes the Pod name corresponding to the Inventory microservice:
> kubectl logs inventory-deployment-f7b8b99c7-4t56b --follow
Kubernetes uses two types of accounts for authentication and authorization: user accounts and service accounts. The user accounts aren’t created or managed by Kubernetes, while the service accounts are. In this section, we discuss how Kubernetes manages service accounts and associates those with Pods.
In appendix J, we talked about the high-level Kubernetes architecture and how a Kubernetes node communicates with the API server. Kubernetes uses service accounts to authenticate a Pod to the API server. A service account provides an identity to a Pod, and Kubernetes uses the ServiceAccount object to represent a service account. Let’s use the following command to list all the service accounts available in our Kubernetes cluster (under the default namespace):
> kubectl get serviceaccounts
NAME      SECRETS   AGE
default   1         11d
By default, at the time you create a Kubernetes cluster, Kubernetes also creates a service account for the default namespace. To find more details about the default service account, use the following kubectl command. It lists the service account definition in YAML format. There you can see that the default service account is bound to the default token secret that we discussed in section 11.3.1:
> kubectl get serviceaccount default -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2019-09-17T02:01:00Z"
  name: default
  namespace: default
  resourceVersion: "279"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: ff3d13ba-d8ee-11e9-a88f-42010a8a01e4
secrets:
- name: default-token-l9fj8
Kubernetes binds each Pod to a service account. You can have multiple Pods bound to the same service account, but you can't have multiple service accounts bound to the same Pod (figure 11.4). For example, when you create a Kubernetes namespace, by default Kubernetes creates a service account. That service account is assigned to all the Pods created in the same namespace (unless you create a Pod under a specific service account). Under each namespace, you'll find a service account called default.
In this section, we create a service account called ecomm and update the STS Deployment to use it. We want all the Pods running under the STS Deployment to run under the ecomm service account. Let's use the following kubectl command to create the ecomm service account:
> kubectl create serviceaccount ecomm
serviceaccount/ecomm created
At the time of creating the service account, Kubernetes also creates a token secret and attaches it to the service account. When we update the STS Deployment to run under the ecomm service account, all the Pods under the STS Deployment can use this token secret (which is a JWT) to authenticate to the API server. The following command shows the details of the ecomm service account in YAML format:
> kubectl get serviceaccount ecomm -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ecomm
  namespace: default
secrets:
- name: ecomm-token-92p7g
Now let's set the ecomm service account for the STS Deployment. The complete updated definition of the STS Deployment is in the chapter11/sample01/sts.deployment.with.service.account.yaml file. We're introducing these new changes on top of the STS Deployment created in section 11.3.2. As shown in the following listing, the only change is to add the serviceAccountName element under the spec element (corresponding to the Pod) of the Deployment.
spec:
  serviceAccountName: ecomm
  containers:
  - name: sts
    image: prabath/secure-sts-ch10:v1
    imagePullPolicy: Always
    ports:
    - containerPort: 8443
Let’s use the following command from the chapter11/sample01 directory to update the STS Deployment:
> kubectl apply -f sts.deployment.with.service.account.yaml
deployment.apps/sts-deployment configured
If you now run the kubectl describe pod command against the Pod Kubernetes created under the STS Deployment, you'll find that it uses the token secret Kubernetes automatically created for the ecomm service account.
If you don't specify a service account under the Pod spec of a Deployment (listing 11.17), Kubernetes runs all the corresponding Pods under the same default service account, created under the corresponding Kubernetes namespace.5
Note Having different service accounts for each Pod or for a group of Pods helps you isolate what each Pod can do with the Kubernetes API server. Also, it helps you enforce fine-grained access control for the communications among Pods.
This is one security best practice we should follow in a Kubernetes Deployment. Then again, even if you have different service accounts for different Pods, if you don’t enforce authorization checks at the API server, it adds no value. GKE enables role-based access control by default.
If your Kubernetes cluster doesn't enforce authorization checks, there's another option. If you don't want your Pod to talk to the API server at all, you can ask Kubernetes not to provision the default token secret to the corresponding Pod. Without the token secret, none of the Pods will be able to talk to the API server. To disable the default token provisioning, set the automountServiceAccountToken element to false under the Pod spec of the Deployment (listing 11.17).
Role-based access control (RBAC) in Kubernetes defines the actions a user or a service (a Pod) can perform in a Kubernetes cluster. A role, in general, defines a set of permissions or capabilities. Kubernetes has two types of objects to represent a role: Role and ClusterRole. The Role object represents capabilities associated with Kubernetes resources within a namespace, while ClusterRole represents capabilities at the Kubernetes cluster level.
Kubernetes defines two types of bindings to bind a role to one or more users (or services): RoleBinding and ClusterRoleBinding. The RoleBinding object represents a binding of namespaced resources to a set of users (or services) or, in other words, it binds a Role to a set of users (or services). The ClusterRoleBinding object represents a binding of cluster-level resources to a set of users (or services) or, in other words, it binds a ClusterRole to a set of users (or services). Let’s use the following command to list all the ClusterRoles available in your Kubernetes environment. The truncated output shows the ClusterRoles available in GKE by default:
> kubectl get clusterroles
NAME                                     AGE
admin                                    12d
cloud-provider                           12d
cluster-admin                            12d
edit                                     12d
gce:beta:kubelet-certificate-bootstrap   12d
gce:beta:kubelet-certificate-rotation    12d
gce:cloud-provider                       12d
kubelet-api-admin                        12d
system:aggregate-to-admin                12d
system:aggregate-to-edit                 12d
system:aggregate-to-view                 12d
To view the capabilities of a given ClusterRole, let's use the following kubectl command. The output in YAML format shows, under the rules section, that the cluster-admin role can perform any verb (or action) on any resource belonging to any API group. In fact, this role provides full access to the Kubernetes cluster:
> kubectl get clusterrole cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
Let’s use the following command to list all the ClusterRoleBindings available in your Kubernetes environment. The truncated output shows the ClusterRoleBindings available in GKE by default:
> kubectl get clusterrolebinding
NAME                                     AGE
cluster-admin                            12d
event-exporter-rb                        12d
gce:beta:kubelet-certificate-bootstrap   12d
gce:beta:kubelet-certificate-rotation    12d
gce:cloud-provider                       12d
heapster-binding                         12d
kube-apiserver-kubelet-api-admin         12d
kubelet-bootstrap                        12d
To view the users and services attached to a given ClusterRoleBinding, let's use the following kubectl command. The output of the command, in YAML, shows that under the roleRef section, cluster-admin refers to the cluster-admin ClusterRole, and under the subjects section, the system:masters group is part of the role binding. In other words, the cluster-admin ClusterRoleBinding binds the system:masters group to the cluster-admin ClusterRole, so anyone in the system:masters group has full access to the Kubernetes cluster:
> kubectl get clusterrolebinding cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
As we discussed in section 11.5, Kubernetes has two types of accounts, users and service accounts, and users aren't managed by Kubernetes. You can also use a construct called a group to group both users and service accounts. In this case, we have a group called system:masters.
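The ClusterRole and ClusterRoleBinding objects shown above are cluster-scoped. For comparison, here is a minimal sketch of a namespaced Role and RoleBinding that would let only the ecomm service account read Pods in the default namespace; the pod-reader names are illustrative and not part of the book's samples:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader            # illustrative name, not from the book's samples
rules:
- apiGroups: [""]             # "" is the core API group (Pods live here)
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: pod-reader-binding    # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  namespace: default
  name: ecomm
```

A binding like this follows the least-privilege principle: the service account can read Pods but nothing else, unlike the cluster-admin binding we look at in section 11.7.2.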
Kubernetes has a plugin architecture to authenticate and authorize requests. Once an authentication plugin completes authenticating a request, it returns the username and the group information of the corresponding account (a user or a service account) to the authorization plugin chain. How the authentication plugin finds the account's group information depends on how the plugin is implemented; Kubernetes doesn't need to maintain group information internally, and the authentication plugin can connect to any external source to find the account-to-group mapping. That said, Kubernetes also manages a set of predefined groups for service accounts. For example, the group system:serviceaccounts:default includes all the service accounts under the default namespace.
Let's go through a practical example to understand how Kubernetes uses groups. Some time ago, when the developers of Docker Desktop decided to add Kubernetes support, they wanted to promote all the service accounts in the Kubernetes environment to cluster admins. To facilitate that, they came up with a ClusterRoleBinding called docker-for-desktop-binding, which binds the cluster-admin ClusterRole to the group system:serviceaccounts. The system:serviceaccounts group is a built-in Kubernetes group that includes all the service accounts in the Kubernetes cluster. The following shows the definition of the docker-for-desktop-binding ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: docker-for-desktop-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
Let's say, for example, we need the STS to talk to the API server. Ideally, we'd do that in the STS code itself; because this is just an example, we'll use curl from a container that runs the STS. Use the following kubectl command to directly access the shell of an STS Pod. Because we have only one container in each Pod, we can simply use the Pod name (sts-deployment-69b99fc78c-j76tl) here:
> kubectl -it exec sts-deployment-69b99fc78c-j76tl sh
#
After you run the command, you end up with a shell prompt within the corresponding container. Also, we assume that you've followed along in section 11.6.1 and updated the STS Deployment so that it now runs under the ecomm service account.
Because we want to use curl to talk to the API server, we need to first install it with the following command in the STS container. And because the containers are immutable, if you restart the Pod during this exercise, you’ll need to install curl again:
# apk add --update curl && rm -rf /var/cache/apk/*
To invoke an API, we also need to pass the default token secret (which is a JWT) in the HTTP authorization header. Let's use the following command to export the token secret to the TOKEN environment variable. As we've previously mentioned, the default token secret is accessible to every container from the /var/run/secrets/kubernetes.io/serviceaccount/token file:
# export TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
The following curl command talks to the Kubernetes API server to list all the metadata associated with the current Pod. Here, we pass the default token secret, which we exported to the TOKEN environment variable, in the HTTP authorization header. Also, inside a Pod, Kubernetes itself populates the value of the HOSTNAME environment variable with the corresponding Pod name, and the kubernetes.default.svc hostname is mapped to the IP address of the API server running in the Kubernetes control plane:
# curl -k -v -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/api/v1/namespaces/default/pods/$HOSTNAME
In response to this command, the API server returns the HTTP 403 code, which means the ecomm service account isn't authorized to access this particular API. In fact, it's not only this specific API; the ecomm service account isn't authorized to access any of the APIs on the API server! That's the default behavior of GKE. Neither the default service account that Kubernetes creates for each namespace nor a custom service account you create is associated with any roles.
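The following Python sketch spells out what the curl command above assembles: the standard in-Pod token and CA mount paths, the API server hostname, and the bearer header. The pod_url and auth_headers helpers are our own illustration, not part of any Kubernetes client library; the commented-out requests call shows how you'd use them inside the Pod.

```python
# Standard in-Pod mount paths and the in-cluster API server hostname
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
CA_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
API_SERVER = "https://kubernetes.default.svc"

def pod_url(namespace: str, pod_name: str) -> str:
    """Build the API server URL for a Pod's metadata (illustrative helper)."""
    return f"{API_SERVER}/api/v1/namespaces/{namespace}/pods/{pod_name}"

def auth_headers(token: str) -> dict:
    """The same bearer header the curl command sends."""
    return {"Authorization": f"Bearer {token}"}

url = pod_url("default", "sts-deployment-69b99fc78c-j76tl")
print(url)

# Inside the Pod, you would read the real token and call, for example:
#   import requests
#   token = open(TOKEN_PATH).read()
#   requests.get(url, headers=auth_headers(token), verify=CA_PATH)
# Without an authorized service account, the server answers 403.
```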
Associating a service account with a ClusterRole gives that service account the permissions to do certain tasks authorized under the corresponding ClusterRole. There are two ways to associate the ecomm service account with a ClusterRole.
One way to associate a service account with a ClusterRole is to create a new ClusterRoleBinding; the other is to update an existing ClusterRoleBinding. In this section, we follow the first approach and create a new ClusterRoleBinding called ecomm-cluster-admin in GKE. You can find the definition of the ecomm-cluster-admin ClusterRoleBinding in the chapter11/sample01/ecomm.clusterrole.binding.yaml file (and in the following listing).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: ecomm-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  namespace: default
  name: ecomm
Let’s use the following command from the chapter11/sample01 directory to update the Kubernetes environment with the new ClusterRoleBinding:
> kubectl apply -f ecomm.clusterrole.binding.yaml
clusterrolebinding.rbac.authorization.k8s.io/ecomm-cluster-admin created
Now if you redo the exercise in section 11.7.1, you'll get a successful response from the API server, because the ecomm service account, now associated with a ClusterRole, is authorized to list all the metadata associated with the current Pod.
If you’d like to know more about the authorization model of Kubernetes, refer to the online documentation at https://kubernetes.io/docs/reference/access-authn-authz/authorization/.
Kubernetes uses ConfigMaps to externalize configurations from the application code, which runs in a container, but it’s not the correct way of externalizing sensitive data in Kubernetes.
The ideal way to store sensitive data in a Kubernetes environment is to use Secrets; Kubernetes stores the value of a Secret in its etcd distributed key-value store in an encrypted format.
Kubernetes dispatches Secrets only to the Pods that use them, and even in such cases, the Secrets are never written to disk, only kept in memory.
Each Pod, by default, is mounted with a token secret, which is a JWT. A Pod can use this default token secret to talk to the Kubernetes API server.
Kubernetes has two types of accounts: users and service accounts. The user accounts aren’t created or managed by Kubernetes, while the service accounts are.
By default, each Pod is associated with a service account (with the name default), and each service account has its own token secret.
It’s recommended that you always have different service accounts for different Pods (or for a group of Pods). This is one of the security best practices we should always follow in a Kubernetes Deployment.
If you have a Pod that doesn’t need to access the API server, it’s recommended that you not provision the token secret to such Pods.
Kubernetes uses Roles/ClusterRoles and RoleBindings/ClusterRoleBindings to enforce access control on the API server.
1. All the examples in this book use Google Cloud, which is more straightforward and hassle-free when trying out the examples than having your own local Kubernetes environment. If you still want to try out the examples locally, you can use either Docker Desktop or Minikube to set up a local, single-node Kubernetes cluster.
2. If you're new to the namespace concept in Kubernetes, check appendix J. All the samples in this chapter use the default namespace.
3. In addition to TLS tunneling, we can also do TLS termination at the Kubernetes load balancer. Then a new connection is created between the load balancer and the corresponding microservice.
4. To convert a binary file to a base64-encoded text file, you can use an online tool like Browserling (www.browserling.com/tools/file-to-base64).
5. The Pod spec in a Kubernetes Deployment object defines the parameters for the corresponding Pod.