Did we do it?

We only partially succeeded in defining our continuous deployment stages. We did manage to execute all the necessary steps. We cloned the code, we ran unit tests, and we built the binary and the Docker image. We deployed the application under test without affecting the production release, and we ran functional tests. Once we confirmed that the application worked as expected, we updated production with the new release. The new release was deployed through rolling updates but, since it was the first release, we did not see their effect. Finally, we ran another round of tests to confirm that the rolling updates were successful and that the new release was integrated with the rest of the system.

You might be wondering why I said that "we only partially succeeded." We executed the full pipeline. Didn't we?

One of the problems we're facing is that our process can run only a single pipeline for an application. If another commit is pushed while our pipeline is in progress, it would need to wait in a queue. We cannot have a separate Namespace for each build since we'd need cluster-wide permissions to create Namespaces, and that would defeat the purpose of having RBAC. So, the Namespaces need to be created in advance. We might create a few Namespaces for building and testing, but that would still be sub-optimal. For now, we'll stick with a single Namespace and leave ourselves a pending task: figure out how to deploy multiple revisions of an application inside the Namespace given to us by the cluster administrator.
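
To make the RBAC constraint concrete: Namespaces are cluster-scoped resources, so a pipeline that creates them on the fly would need a ClusterRole bound across the whole cluster, not a Role confined to its own Namespace. A minimal sketch of the grant we would have to hand out (and prefer not to) follows; the ServiceAccount and Namespace names are illustrative.

# The cluster-wide permission a "Namespace per build" approach would require.
# Granting it to a pipeline's ServiceAccount undoes much of the isolation RBAC gives us.
kubectl create clusterrole pipeline-ns-creator \
    --verb=create --resource=namespaces

kubectl create clusterrolebinding pipeline-ns-creator \
    --clusterrole=pipeline-ns-creator \
    --serviceaccount=go-demo-3-build:default

# Confirm what the ServiceAccount can do; without the binding above, the answer is "no".
kubectl auth can-i create namespaces \
    --as=system:serviceaccount:go-demo-3-build:default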

Another problem is the horrifying usage of sed commands to modify the content of a YAML file. There must be a better way to parametrize the definition of an application. We'll try to solve that problem in the next chapter.
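
As a reminder of what we want to get away from, the substitution looked roughly like the command that follows. The file path, the tag, and the Namespace are illustrative rather than the exact values from the chapter.

TAG=1.0-beta   # hypothetical tag produced by the build

cat k8s/build.yml \
    | sed -e "s@:latest@:${TAG}@g" \
    | kubectl -n go-demo-3-build apply -f -

String replacement like that is fragile; it knows nothing about YAML structure and silently does nothing if the pattern changes.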

Once we start running multiple builds of the same application, we'll need to figure out how to remove the tools we create as part of our pipeline. Commands like kubectl delete pods --all will obviously not work if we plan to run multiple pipelines in parallel. We'll need to restrict the removal to the Pods spun up by the build we just finished, not all those in the Namespace. The CI/CD tools we'll use later might be able to help with this problem.
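
One likely direction is to label everything a build creates with a unique identifier and to delete by that label instead of using --all. The sketch below assumes such a label; the build ID, image, and Namespace are placeholders, not something we set up in this chapter.

BUILD_ID=17   # hypothetical identifier injected by the CI/CD tool

# Pods created by the pipeline would carry the identifier as a label...
kubectl -n go-demo-3-build run builder-$BUILD_ID \
    --image=alpine \
    --restart=Never \
    --labels=build=$BUILD_ID \
    -- sleep 3600

# ...so cleanup can target only this build's Pods instead of everything in the Namespace.
kubectl -n go-demo-3-build delete pods \
    -l build=$BUILD_ID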

We are missing quite a few steps in our pipeline. Those are issues we will not try to fix in this book. The steps we explored so far are common to almost all pipelines. We always run different types of tests, some of which are static (for example, unit tests), while others need a live application (for example, functional tests). We always need to build a binary or package our application. We need to build an image and deploy it to one or more locations. The rest of the steps differ from one case to another. You might want to send test results to SonarQube, or you might choose to make a GitHub release. If your images are meant to run on different operating systems or architectures (for example, Linux, Windows, or ARM), you might want to create a manifest list. You'll probably run some security scanning as well. The list of things you might do is almost unlimited, so I chose to stick with the steps that are very common and, in many cases, mandatory. Once you grasp the principles behind a well-defined, fully automated, and container-based pipeline executed on top of a scheduler, I'm sure you won't have a problem extending our examples to fit your particular needs.
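
To illustrate just one of those optional steps, a multi-platform manifest list can be assembled and pushed with the docker manifest commands, assuming the per-platform images have already been pushed to a registry. The tags below are hypothetical.

docker manifest create vfarcic/go-demo-3:1.0 \
    vfarcic/go-demo-3:1.0-linux-amd64 \
    vfarcic/go-demo-3:1.0-linux-arm64 \
    vfarcic/go-demo-3:1.0-windows-amd64

docker manifest push vfarcic/go-demo-3:1.0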

How about building Docker images? That is also one of the items on our to-do list. We shouldn't build them inside the Kubernetes cluster because mounting the Docker socket is a huge security risk and because we should not run anything without going through the Kube API. Our best bet, for now, is to build them outside the cluster. We are yet to discover how to do that effectively. I suspect that will be a very easy challenge.
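
To make the risk concrete, the sketch below shows the socket-mounting pattern we want to avoid. A container (or a Pod using an equivalent hostPath volume) that mounts the Docker socket controls the host's Docker daemon and, with it, every container on that node.

# Anything running in this container can list, stop, start, or create
# containers on the host, including privileged ones.
docker container run --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker:latest docker container ls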

One message I tried to convey is that everything related to an application should be in the same repository. That applies not only to the source code and tests, but also to the build scripts, the Dockerfile, and the Kubernetes definitions.

Outside of that application-related repository should be only the code and configuration that transcend a single application (for example, cluster setup). We'll continue using the same separation throughout the rest of the book. Everything required by go-demo-3 will be in the vfarcic/go-demo-3 (https://github.com/vfarcic/go-demo-3) repository. Cluster-wide code and configuration will continue living in vfarcic/k8s-specs (https://github.com/vfarcic/k8s-specs).

The logic behind the everything-an-application-needs-is-in-a-single-repository mantra is vital if we want to empower the teams to be in charge of their applications. It's up to those teams to choose how to do something, and it's everyone else's job to teach them the skills they need. With some other tools, such an approach would pose a big security risk and could put other teams in danger. However, Kubernetes provides quite a few tools that can help us avoid those risks without sacrificing the autonomy of the teams in charge of application development. We have RBAC and Namespaces. We have ResourceQuotas, LimitRanges, PodSecurityPolicies, NetworkPolicies, and quite a few other tools at our disposal.
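
As a single illustration of those guardrails, a cluster administrator can cap what a team's Namespace may consume, so a runaway build or application cannot starve everyone else. The Namespace name and the limits below are arbitrary examples.

kubectl -n go-demo-3-build create quota build \
    --hard=requests.cpu=2,requests.memory=3Gi,limits.cpu=4,limits.memory=6Gi,pods=15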

We can provide autonomy to the teams without sacrificing security and stability. We have the tools, and the only thing missing is to change our culture and processes.