Creating ServiceAccounts

Let's take a look at service accounts currently available in the default Namespace.

 1  kubectl get sa

The output is as follows.

NAME    SECRETS AGE
default 1       24m

At the moment, there is only one ServiceAccount called default. We already saw the limitations of that account. It is stripped of (almost) all privileges. If we check the other Namespaces, we'll notice that all of them have only the default ServiceAccount. Whenever we create a new Namespace, Kubernetes creates that account for us.
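If you'd like to confirm that claim yourself, listing the ServiceAccounts across all Namespaces should show exactly one default account per Namespace. This is only an optional sanity check.

 1  kubectl get sa --all-namespaces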

A note to minishift users
OpenShift is an exception. Unlike most other Kubernetes flavors, it creates a few additional ServiceAccounts in the default Namespace. Feel free to explore them later, once you learn more about ServiceAccounts and their relation to Roles.

We already established that we'll need to create new ServiceAccounts if we are ever to allow processes in containers to communicate with the Kube API. As the first exercise, we'll create an account that will enable us to view (almost) all the resources in the default Namespace. The definition is available in the sa/view.yml file.

 1  cat sa/view.yml

The output is as follows.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: view
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: view

The YAML defines two resources. The first one is the ServiceAccount named view. The ServiceAccount kind of resource is on par with Namespace in its simplicity. Excluding a few fields which we won't explore just yet, the only thing we can do with it is declare its existence.
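As an aside, a ServiceAccount this simple could also be generated imperatively. The command that follows is only an optional illustration; it prints the equivalent YAML without creating anything (on newer kubectl versions, the flag is spelled --dry-run=client).

 1  kubectl create serviceaccount view \
 2      --dry-run -o yaml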

The real magic is defined in the RoleBinding. Just as with RBAC users and groups, RoleBindings tie Roles to ServiceAccounts. Since our objective is to provide read-only permissions, and the pre-existing ClusterRole view already fulfills it, we did not need to create a new Role. Instead, we're binding the ClusterRole view to the ServiceAccount of the same name.

If you are already experienced with RBAC applied to users and groups, you probably noticed that ServiceAccounts follow the same pattern. The only substantial difference, from a YAML perspective, is that the kind of the subject is now ServiceAccount, instead of User or Group.
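For comparison, a hypothetical subjects entry granting the same role to a user would differ only in its kind and the required apiGroup field (the name jdoe is invented purely for illustration).

subjects:
- kind: User
  name: jdoe
  apiGroup: rbac.authorization.k8s.io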

Let's create the resources defined in sa/view.yml and observe the results.

 1  kubectl apply -f sa/view.yml --record

The output should show that the ServiceAccount and the RoleBinding were created.

Next, we'll list the ServiceAccounts in the default Namespace and confirm that the new one was created.

 1  kubectl get sa

The output is as follows.

NAME    SECRETS AGE
default 1       27m
view    1       6s

Let's take a closer look at the ServiceAccount view we just created.

 1  kubectl describe sa view

The output is as follows.

Name:        view
Namespace:   default
Labels:      <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"view","namespace":"default"}}
             kubernetes.io/change-cause=kubectl apply --filename=sa/view.yml --record=true
Image pull secrets: <none>
Mountable secrets:  view-token-292vm
Tokens:             view-token-292vm
Events:             <none>

There's not much to look at since, as we already saw from the definition, ServiceAccounts are only placeholders for bindings.

We should be able to get more information from the binding.

 1  kubectl describe rolebinding view

The output is as follows.

Name:         view
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1beta1","kind":"RoleBinding","metadata":{"annotations":{},"name":"view","namespace":"default"},"roleRef":{"ap...
              kubernetes.io/change-cause=kubectl apply --filename=sa/view.yml --record=true
Role:
  Kind:  ClusterRole
  Name:  view
Subjects:
  Kind            Name  Namespace
  ----            ----  ---------
  ServiceAccount  view

We can see that the RoleBinding view binds the ClusterRole view to a single subject, the ServiceAccount of the same name.

Now that we have a ServiceAccount with enough permissions to view (almost) all the resources, we'll create a Pod containing kubectl which we can use to explore permissions.

The definition is in the sa/kubectl-view.yml file.

 1  cat sa/kubectl-view.yml

The output is as follows.

apiVersion: v1
kind: Pod
metadata:
  name: kubectl
spec:
  serviceAccountName: view
  containers:
  - name: kubectl
    image: vfarcic/kubectl
    command: ["sleep"]
    args: ["100000"]

The only new addition is the serviceAccountName: view entry. It associates the Pod with the account.
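Once we create the Pod (next), we could optionally read that association back from the live object. This is only a convenience check, not part of the main flow; the command should print view.

 1  kubectl get pod kubectl \
 2      -o jsonpath="{.spec.serviceAccountName}"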

Let's create the Pod and describe it.

 1  kubectl apply \
 2      -f sa/kubectl-view.yml \
 3      --record
 4
 5  kubectl describe pod kubectl

The relevant parts of the output of the latter command are as follows.

Name: kubectl
...
Containers:
  kubectl:
    ...
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from view-token-292vm (ro)
...
Volumes:
  view-token-292vm:
    Type:       Secret (a volume populated by a Secret)
    SecretName: view-token-292vm
...

Since we declared that we want to associate the Pod with the account view, Kubernetes mounted a token to the container.

If we defined more containers in that Pod, all would have the same mount.

Further on, we can see that the mount is using a Secret. It contains the same file structure we observed earlier with the default account.
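If you'd like to re-confirm that structure, listing the mount from outside the Pod should show the familiar three files: ca.crt, namespace, and token. This check is optional.

 1  kubectl exec kubectl -- ls \
 2      /var/run/secrets/kubernetes.io/serviceaccount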

Let's see whether we can retrieve the Pods from the same Namespace.

 1  kubectl exec -it kubectl -- sh
 2
 3  kubectl get pods

We entered the shell of the container and listed the Pods.

The output is as follows.

NAME    READY STATUS  RESTARTS AGE
kubectl 1/1   Running 0        55s

Now that the Pod is associated with the ServiceAccount that has view permissions, we can indeed list the Pods. At the moment, we can see the kubectl Pod since it is the only one running in the default Namespace.

A note to minishift users
You should see three Pods instead of one. When we created the OpenShift cluster with minishift, it deployed a Docker Registry and a Router to the default Namespace.

Even though you probably know the answer, we'll confirm that we can indeed only view the resources and that all other operations are forbidden. We'll try to create a new Pod.

 1  kubectl run new-test \
 2      --image=alpine \
 3      --restart=Never \
 4      sleep 10000

The output is as follows.

Error from server (Forbidden): pods is forbidden: User 
"system:serviceaccount:default:view" cannot create pods in the
namespace "default"

As expected, we cannot create Pods. We bound the ClusterRole view to the ServiceAccount, and that role allows only read-only access to resources.
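Instead of provoking Forbidden errors, we could also ask the API server directly. The kubectl auth can-i subcommand performs the same authorization check and answers with a plain yes or no. Running it from inside the container is an optional detour; the response should be no.

 1  kubectl auth can-i create pods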

We'll exit the container and delete the resources we created before we continue exploring ServiceAccounts.

 1  exit
 2
 3  kubectl delete -f sa/kubectl-view.yml

Jenkins' Kubernetes plugin needs to have full permissions related to Pods. It should be able to create them, to retrieve logs from them, to execute processes, to delete them, and so on. The resources defined in sa/pods.yml should allow us just that.

 1  cat sa/pods.yml

The output is as follows.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: pods-all
  namespace: test1
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: pods-all
  namespace: test1
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/log"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: pods-all
  namespace: test1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pods-all
subjects:
- kind: ServiceAccount
  name: pods-all

Just as before, we are creating a ServiceAccount. This time we're naming it pods-all. Since none of the existing roles provide the types of privileges we need, we're defining a new one that grants full permissions over Pods, including their exec and log subresources.

Finally, the last resource is the binding that ties the two together. All three resources are, this time, defined in the Namespace test1.

Let's create the resources.

 1  kubectl apply -f sa/pods.yml \
 2      --record
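Before attaching the new account to a Pod, we could verify the binding from outside the cluster by impersonating the ServiceAccount. This optional check assumes your current user is privileged enough to use the --as flag; the answer should be yes.

 1  kubectl auth can-i create pods \
 2      -n test1 \
 3      --as=system:serviceaccount:test1:pods-all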

So far, we did not explore the effect of ServiceAccounts on Namespaces other than those where we created them, so we'll create another one called test2.

 1  kubectl create ns test2

Finally, we'll run kubectl as a Pod so that we can test the new account. Let's take a very quick look at the updated kubectl Pod definition.

 1  cat sa/kubectl-test1.yml

The output is as follows.

apiVersion: v1
kind: Pod
metadata:
  name: kubectl
  namespace: test1
spec:
  serviceAccountName: pods-all
  containers:
  - name: kubectl
    image: vfarcic/kubectl
    command: ["sleep"]
    args: ["100000"]

The only differences from the previous kubectl definition are in the namespace and the serviceAccountName. I'll assume that there's no need for an explanation. Instead, we'll proceed and apply the definition.

 1  kubectl apply \
 2      -f sa/kubectl-test1.yml \
 3      --record

Now we're ready to test the new account.

 1  kubectl -n test1 exec -it kubectl -- sh
 2
 3  kubectl get pods

We entered the kubectl container and retrieved the Pods.

The output is as follows.

NAME    READY STATUS  RESTARTS AGE
kubectl 1/1   Running 0        5m

We already experienced the same result when we used the ServiceAccount with the view role, so let's try something different and try to create a Pod.

 1  kubectl run new-test \
 2      --image=alpine \
 3      --restart=Never \
 4      sleep 10000

Unlike before, this time the pod "new-test" was created. We can confirm that by listing the Pods one more time.

 1  kubectl get pods

The output is as follows.

NAME     READY STATUS  RESTARTS AGE
kubectl  1/1   Running 0        6m
new-test 1/1   Running 0        17s

How about creating a Deployment?

 1  kubectl run new-test \
 2      --image=alpine sleep 10000

As you hopefully already know, if we execute a run command without the --restart=Never argument, Kubernetes creates a Deployment, instead of a Pod.

The output is as follows.

Error from server (Forbidden): deployments.extensions is forbidden: User "system:serviceaccount:test1:pods-all" cannot create deployments.extensions in the namespace "test1"

It's obvious that our ServiceAccount does not have any permissions aside from those related to Pods, so we were forbidden from creating a Deployment.

Let's see what happens if, for example, we try to retrieve the Pods from the Namespace test2. As a reminder, we are still inside a container that forms the Pod in the test1 Namespace.

 1  kubectl -n test2 get pods

The ServiceAccount was created in the test1 Namespace. Therefore only the Pods created in the same Namespace can be attached to the pods-all ServiceAccount. In this case, the principal thing to note is that the RoleBinding that gives us the permissions to, for example, retrieve the Pods, exists only in the test1 Namespace. The moment we tried to retrieve the Pods from a different Namespace, the API server responded with an error notifying us that we do not have permissions to list pods in the namespace "test2".

While user accounts are global, ServiceAccounts are namespaced. After all, a human user is almost always invoking the API from outside the cluster while the processes inside containers are always inside the Pods which are inside the Namespaces.
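If you prefer an explicit yes/no over error messages, the same distinction can be confirmed from inside the container with an optional pair of checks; the first should answer yes and the second no.

 1  kubectl auth can-i list pods -n test1
 2
 3  kubectl auth can-i list pods -n test2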

We'll exit the container and delete the resources we created, before we explore other aspects of ServiceAccounts.

 1  exit
 2
 3  kubectl delete -f sa/kubectl-test1.yml

We saw that, with the previous definition, we obtained permissions only within the same Namespace as the Pod attached to the ServiceAccount. In some cases that is not enough. Jenkins is a good use-case. We might decide to run the Jenkins master in one Namespace but run the builds in another. Or, we might create a pipeline that deploys a beta version of our application and tests it in one Namespace and, later on, deploys it as a production release in another. Surely there is a way to accommodate such needs.

Let's take a look at yet another YAML.

 1  cat sa/pods-all.yml

The output is as follows.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: pods-all
  namespace: test1
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: pods-all
  namespace: test1
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/log"]
  verbs: ["*"]
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: pods-all
  namespace: test2
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/log"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: pods-all
  namespace: test1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pods-all
subjects:
- kind: ServiceAccount
  name: pods-all
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: pods-all
  namespace: test2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pods-all
subjects:
- kind: ServiceAccount
  name: pods-all
  namespace: test1

We start by creating a ServiceAccount and a Role called pods-all in the test1 Namespace. Further on, we're creating the same role but in the test2 Namespace.

Finally, we're creating RoleBindings in both Namespaces. The only difference between the two is in a single line. The RoleBinding in the test2 Namespace has the subjects entry namespace: test1. With it, we are linking a binding from one Namespace to a ServiceAccount in another. As a result, we should be able to create a Pod in the test1 Namespace that will have full permissions to operate Pods in both Namespaces.

The two Roles could have had different permissions. They are the same (for now) only for simplicity. We could have simplified the definition by creating a single ClusterRole and saved ourselves from having two Roles (one in each Namespace). However, once we get back to Jenkins, we'll probably want different permissions in different Namespaces, so we're defining two Roles as practice.
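For reference, a sketch of that simpler, hypothetical alternative follows. A single ClusterRole would replace both Roles, and the two RoleBindings would reference it with kind: ClusterRole instead of kind: Role.

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: pods-all
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/log"]
  verbs: ["*"]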

Let's apply the new definition, create a kubectl Pod again, and enter inside its only container.

 1  kubectl apply -f sa/pods-all.yml \
 2      --record
 3
 4  kubectl apply \
 5      -f sa/kubectl-test2.yml \
 6      --record
 7
 8  kubectl -n test1 exec -it kubectl -- sh

There's probably no need to confirm again that we can retrieve the Pods from the same Namespace so we'll jump straight to testing whether we can now operate within the test2 Namespace.

 1  kubectl -n test2 get pods

The output states that no resources were found. If we did not have permission to view the Pods in that Namespace, we'd get the already familiar forbidden message instead.
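As an additional optional confirmation that the cross-Namespace binding works, we can ask for an explicit answer before creating anything; the response should be yes.

 1  kubectl auth can-i create pods -n test2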

To be on the safe side, we'll try to create a Pod in the test2 Namespace.

 1  kubectl -n test2 \
 2      run new-test \
 3      --image=alpine \
 4      --restart=Never \
 5      sleep 10000
 6
 7  kubectl -n test2 get pods

The output of the latter command is as follows.

NAME     READY STATUS  RESTARTS AGE
new-test 1/1   Running 0        18s

The Pod was created in the test2 Namespace. Mission accomplished. We can get out of the container and delete the Namespaces we created before we try to apply the knowledge about ServiceAccounts to Jenkins.

 1  exit
 2
 3  kubectl delete ns test1 test2