Creating Namespaces dedicated to continuous deployment processes

If we are to accomplish a reasonable level of security for our pipelines, we need to run them in dedicated Namespaces. Our cluster already has RBAC enabled, so we'll need a ServiceAccount as well. Since security alone is not enough, we also need to make sure that our pipeline does not affect other applications. We'll accomplish that by creating a LimitRange and a ResourceQuota.

I believe that in most cases we should store everything an application needs in the same repository. That makes maintenance much simpler and enables the team in charge of that application to be in full control, even though that team might not have all the permissions to create the resources in a cluster.

We'll continue using the go-demo-3 repository. However, since we'll have to change a few things, it is better if you apply the changes to your fork and, maybe, push them back to GitHub later.

 1  open "https://github.com/vfarcic/go-demo-3"

If you're not familiar with GitHub, all you have to do is to log in and click the Fork button located in the top-right corner of the screen.

Next, we'll remove the local copy of the go-demo-3 repository (if you happen to have one) and clone your fork.

Make sure that you replace [...] with your GitHub username.

 1  cd ..
 2 
 3  rm -rf go-demo-3
 4
 5  export GH_USER=[...]
 6 
 7  git clone https://github.com/$GH_USER/go-demo-3.git
 8 
 9  cd go-demo-3  

The only thing left is to edit a few files. Please open the k8s/build.yml and k8s/prod.yml files in your favorite editor and replace all occurrences of vfarcic with your Docker Hub user.
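
If you prefer making those changes from the command line, a sed snippet similar to the one that follows should do the trick as well. The DH_USER variable is just a placeholder name used here for illustration, and on macOS you might need sed -i '' instead of sed -i.

 1  export DH_USER=[...]
 2
 3  sed -i "s/vfarcic/$DH_USER/g" \
 4      k8s/build.yml k8s/prod.yml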

The namespace dedicated for all building and testing activities of the go-demo-3 project is defined in the k8s/build-ns.yml file stored in the project repository.

 1  git pull
 2
 3  cat k8s/build-ns.yml

The output is as follows.

apiVersion: v1
kind: Namespace
metadata:
  name: go-demo-3-build

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: build
  namespace: go-demo-3-build

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: build
  namespace: go-demo-3-build
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: ServiceAccount
  name: build

---
    
apiVersion: v1
kind: LimitRange
metadata:
  name: build
  namespace: go-demo-3-build
spec:
  limits:
  - default:
      memory: 200Mi
      cpu: 0.2
    defaultRequest:
      memory: 100Mi
      cpu: 0.1
    max:
      memory: 500Mi
      cpu: 0.5
    min:
      memory: 10Mi
      cpu: 0.05
    type: Container

---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: build
  namespace: go-demo-3-build
spec:
  hard:
    requests.cpu: 2
    requests.memory: 3Gi
    limits.cpu: 3
    limits.memory: 4Gi
    pods: 15

If you are familiar with Namespaces, ServiceAccounts, LimitRanges, and ResourceQuotas, the definition should be fairly easy to understand.

We defined the go-demo-3-build Namespace, which we'll use for all our CDP tasks. It'll contain the ServiceAccount build bound to the ClusterRole admin. As a result, Pods running in that Namespace under that ServiceAccount will be able to do almost anything they want. It'll be their playground.

We also defined the LimitRange named build. It'll make sure to give sensible defaults to the Pods running in that Namespace. That way we can create Pods from which we'll build and test without worrying whether we forgot to specify the resources they need. After all, most of us do not know how much memory and CPU a build needs. The same LimitRange also contains minimum and maximum limits that should prevent users from specifying resource requests and limits that are too small or too big.
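
As an illustration of both points, consider the hypothetical Pod below (it is not part of the repository). It runs under the build ServiceAccount and, since it does not specify any resources, it would get the defaults from the LimitRange when it is admitted to the Namespace.

apiVersion: v1
kind: Pod
metadata:
  name: pipeline-step           # hypothetical name, for illustration only
  namespace: go-demo-3-build
spec:
  serviceAccountName: build     # the account we bound to the ClusterRole admin
  containers:
  - name: main
    image: alpine
    command: ["sleep", "3600"]
    # No resources are specified, so the LimitRange injects requests of
    # 100Mi memory / 0.1 CPU and limits of 200Mi memory / 0.2 CPU.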

Finally, since the capacity of our cluster is probably not unlimited, we defined a ResourceQuota that specifies the total amount of memory and CPU for requests and limits in that Namespace. We also defined that the maximum number of Pods running in that Namespace cannot be higher than fifteen.

If we end up with more Pods than the Namespace can accommodate, some will be pending until others finish their work and release the resources they were using.
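
Later on, when the pipeline is busy, a quick way to spot Pods held back by the quota is to list those that are still in the Pending phase, with a command similar to the one that follows.

 1  kubectl --namespace go-demo-3-build \
 2      get pods --field-selector status.phase=Pending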

It is very likely that the team behind the project will not have sufficient permissions to create new Namespaces. If that's the case, the team would need to let the cluster administrator know about the existence of that YAML. In turn, he (or she) would review the definition and, once convinced that it is safe, create the resources. For the sake of simplicity, you are that person, so please execute the command that follows.

 1  kubectl apply \
 2      -f k8s/build-ns.yml \
 3      --record

As you can see from the output, the go-demo-3-build Namespace was created together with a few other resources.
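
If you'd like to double-check what was created, the commands that follow list the resources in the new Namespace and describe the quota so we can see how much of it is used. They are optional and not part of the pipeline itself.

 1  kubectl --namespace go-demo-3-build get \
 2      serviceaccounts,rolebindings,limitranges,resourcequotas
 3
 4  kubectl --namespace go-demo-3-build \
 5      describe quota build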

Now that we have a Namespace dedicated to the lifecycle of our application, we'll create another one dedicated to our production releases.

 1  cat k8s/prod-ns.yml

The go-demo-3 Namespace is very similar to go-demo-3-build. The major difference is in the RoleBinding. Since we can assume that processes running in the go-demo-3-build Namespace will, at some moment, want to deploy a release to production, we created the RoleBinding build, whose subject is the ServiceAccount build from the go-demo-3-build Namespace.
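
We won't repeat the whole output here. As a rough sketch (the definition in your fork is the authoritative version), the RoleBinding in the go-demo-3 Namespace should look similar to the one that follows, with the subject pointing to the ServiceAccount from the other Namespace.

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: build
  namespace: go-demo-3
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: ServiceAccount
  name: build
  namespace: go-demo-3-build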

We'll apply this definition while still keeping our cluster administrator's hat.

 1  kubectl apply \
 2      -f k8s/prod-ns.yml \
 3      --record

Now we have two Namespaces dedicated to the go-demo-3 application. We are yet to figure out which tools we'll need for our continuous deployment pipeline.
