We still have one more pending issue that we can solve with ServiceAccounts. In the previous chapter, we tried to use the cvallance/mongo-k8s-sidecar container in the hope that it would dynamically create and manage a MongoDB replica set.
We failed because, at that time, we did not know how to create the permissions that would allow the side-car to do its job. Now we know better.
Let's take a look at an updated version of our go-demo-3 application.
cat sa/go-demo-3.yml
The relevant parts of the output are as follows.
...
apiVersion: v1
kind: ServiceAccount
metadata:
  name: db
  namespace: go-demo-3

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: db
  namespace: go-demo-3
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: db
  namespace: go-demo-3
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: db
subjects:
- kind: ServiceAccount
  name: db

---

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: db
  namespace: go-demo-3
spec:
  ...
  template:
    ...
    spec:
      serviceAccountName: db
      ...
Just as with Jenkins, we have a ServiceAccount, a Role, and a RoleBinding. Since the side-car only needs to list the Pods, this time the Role is more restrictive than the one we created for Jenkins. Further down, in the StatefulSet, we added the serviceAccountName: db entry that links the set with the account. By now, you should be familiar with all those resources. We're applying the same logic to the side-car as we did to Jenkins.
Since there's no need for a lengthy discussion, we'll move on and apply the definition.
kubectl apply \
    -f sa/go-demo-3.yml \
    --record
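If you'd like to confirm that the resources and the associated permissions are in place before looking at the Pods, an optional check (not part of the original walkthrough) could list the newly created RBAC resources and impersonate the db ServiceAccount to verify that it is indeed allowed to list Pods.

kubectl -n go-demo-3 \
    get sa,role,rolebinding

kubectl auth can-i list pods \
    -n go-demo-3 \
    --as system:serviceaccount:go-demo-3:db

The second command should output a simple yes.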
Next, we'll take a look at the Pods created in the go-demo-3 Namespace.
kubectl -n go-demo-3 \
    get pods
After a while, the output should be as follows.
NAME    READY STATUS  RESTARTS AGE
api-... 1/1   Running 1        1m
api-... 1/1   Running 1        1m
api-... 1/1   Running 1        1m
db-0    2/2   Running 0        1m
db-1    2/2   Running 0        1m
db-2    2/2   Running 0        54s
All the Pods are running, so it seems that, this time, the side-car did not have trouble communicating with the API.
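The side-car can reach the API because Kubernetes automatically mounts the token of the db ServiceAccount into the container. If you're curious, a quick optional check (not in the original walkthrough) can confirm that the credentials are mounted at the standard path inside the side-car container.

kubectl -n go-demo-3 exec db-0 \
    -c db-sidecar -- \
    ls /var/run/secrets/kubernetes.io/serviceaccount

The output should list ca.crt, namespace, and token.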
To be on the safe side, we'll output the logs of one of the side-car containers.
kubectl -n go-demo-3 \
    logs db-0 -c db-sidecar
The output, limited to the last entries, is as follows.
...
    {
      _id: 1,
      host: 'db-1.db.go-demo-3.svc.cluster.local:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      slaveDelay: 0,
      votes: 1
    },
    {
      _id: 2,
      host: 'db-2.db.go-demo-3.svc.cluster.local:27017'
    }
  ],
  settings: {
    chainingAllowed: true,
    heartbeatIntervalMillis: 2000,
    heartbeatTimeoutSecs: 10,
    electionTimeoutMillis: 10000,
    catchUpTimeoutMillis: 2000,
    getLastErrorModes: {},
    getLastErrorDefaults: { w: 1, wtimeout: 0 },
    replicaSetId: 5aef9e4c52b968b72a16ea5b
  }
}
The details behind the output are not that important. What matters is that there are no errors. The side-car managed to retrieve the information it needs from the Kubernetes API, and all that's left for us is to delete the Namespace and conclude the chapter.
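Before we do that, if you'd like one more confirmation that is independent of the side-car logs, you could ask MongoDB itself about the state of the replica set. The command that follows is an optional sketch, not part of the original walkthrough; it assumes that the MongoDB container in the StatefulSet is named db and that the mongo shell is available in the image.

kubectl -n go-demo-3 exec db-0 \
    -c db -- \
    mongo --eval 'rs.status()'

If everything is in order, the members list should contain all three db-* Pods, with one of them in the PRIMARY state.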
kubectl delete ns go-demo-3