Exploring the application's repository and preparing the environment

Before I wrote this chapter, I forked the vfarcic/go-demo-3 (https://github.com/vfarcic/go-demo-3) repository into vfarcic/go-demo-5 (https://github.com/vfarcic/go-demo-5). Even though most of the application's code is still the same, I thought it would be easier to apply and demonstrate the changes in a new repository instead of creating a new branch or doing some other workaround that would allow us to have both processes in the same repository. All in all, go-demo-5 is a copy of go-demo-3 on top of which I made some changes that we'll comment on soon.

Since we'll need to change a few configuration files and push them back to the repository, you should fork vfarcic/go-demo-5, just as you forked vfarcic/k8s-prod.

Next, we'll clone the repository before we explore the relevant files.

cd ..

git clone https://github.com/$GH_USER/go-demo-5.git

cd go-demo-5

The chart located in the helm directory is the same as the one we used in go-demo-3, so we'll skip commenting on it. Instead, we'll replace my Docker Hub user (vfarcic) with yours.

Before you execute the commands that follow, make sure you replace [...] with your Docker Hub user.

DH_USER=[...]

cat helm/go-demo-5/deployment-orig.yaml \
    | sed -e "s@vfarcic@$DH_USER@g" \
    | tee helm/go-demo-5/templates/deployment.yaml

In go-demo-3, the resources that define the Namespace, ServiceAccount, RoleBinding, LimitRange, and ResourceQuota were split between ns.yml and build-config.yml files.

I got tired of having them separated, so I joined them into a single file, build.yml. Other than that, the resources are the same as those we used before, so we'll skip commenting on them as well. The only difference is that the Namespace is now go-demo-5.

kubectl apply -f k8s/build.yml --record
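
If you'd like to double-check what we just created, kubectl can list the resources straight from the same file. This is only an optional sanity check.

kubectl get -f k8s/build.yml

You should see the Namespace, the ServiceAccount, the RoleBinding, the LimitRange, and the ResourceQuota we just discussed.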

Finally, the only thing left to set up in the environment we'll use for go-demo-5 is to install Tiller, just as we did before.

helm init --service-account build \
    --tiller-namespace go-demo-5-build
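
The pipeline will depend on Tiller, so it might be worth confirming that it rolled out before moving on. The command below assumes Helm created its usual tiller-deploy Deployment in the go-demo-5-build Namespace.

kubectl --namespace go-demo-5-build \
    rollout status deployment tiller-deploy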

The two key elements of our pipeline will be the Dockerfile and the Jenkinsfile. Let's explore the former first.

cat Dockerfile

The output is as follows.

FROM alpine:3.4
MAINTAINER Viktor Farcic <[email protected]>

RUN mkdir /lib64 && ln -s /lib/libc.musl-x86_64.so.1 /lib64/ld-linux-x86-64.so.2

EXPOSE 8080
ENV DB db
CMD ["go-demo"]

COPY go-demo /usr/local/bin/go-demo
RUN chmod +x /usr/local/bin/go-demo

You'll notice that we are not using multi-stage builds. That makes me sad, since I think they are one of the greatest additions to Docker's build process. The ability to run unit tests and build a binary served us well so far. The process was streamlined through a single docker image build command, it was documented in a single Dockerfile, and we did not have to sacrifice the size of the final image. So, why did I choose not to use them now?

We'll switch from building Docker images in a separate VM outside the cluster to using the Docker socket to build them on one of the Kubernetes worker nodes. That does reduce security (Docker on that node could be hijacked), and it can cause problems with Kubernetes (we'd be running containers without its knowledge). Yet, using the socket is somewhat easier, cleaner, and faster. Even though we explored this option through Shell commands, we did not use it in our Jenkins pipelines. So, I thought that you should experience both ways of building images in a Jenkins pipeline and choose for yourself which method fits your use-case better.
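
As a quick reminder of what "using the Docker socket" means in practice, here is a minimal sketch. Any container that mounts the node's /var/run/docker.sock talks directly to the Docker daemon of that node, so images built that way end up on the node itself. The docker:18.06 tag is only an example.

# Whatever runs inside this container uses the host's Docker daemon
docker container run --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker:18.06 \
    docker image ls

In the pipeline, a Jenkins agent container will do essentially the same thing, with the socket mounted from whichever worker node it happens to run on.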

The goal is to find the balance and gain experience that will let you decide what works best for you. There will be quite a few other changes further on. They all aim at giving you better insight into different ways of accomplishing the same goals. You will have to make a choice on how to combine them into the solution that works the best in your organization.

Going back to the reason for NOT using Docker's multi-stage builds... Given that we're about to use Docker on one of the worker nodes of the cluster, we depend on the Docker version running inside that cluster. At the time of this writing (August 2018), some Kubernetes clusters still use a Docker version that is more than a year old. If my memory serves me, multi-stage builds were added in Docker 17.05, and some Kubernetes flavors (even when on the latest version) still use Docker 17.03 or even older. Kops is a good example, even though it is not the only one. Release 1.9.x (the latest stable at the time of this writing) uses Docker 17.03. Since I'm committed to making all the examples in this book work in many different Kubernetes flavors, I had to remove multi-stage builds.

Check the Docker version in your cluster and, if it's 17.05 or newer, I strongly recommend you continue using multi-stage builds. They are too good a feature to ignore unless you really have to.
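
One way to check which Docker version your nodes run is to query the container runtime that Kubernetes reports for each node. The command below is just one option and assumes your kubectl is already pointing at the cluster in question.

kubectl get nodes \
    -o jsonpath="{.items[*].status.nodeInfo.containerRuntimeVersion}"

The output should contain one entry per node, in the form docker://[version].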

All in all, the Dockerfile assumes that we already executed our tests and that we built the binary. We'll see how to do that inside a Jenkins pipeline soon.
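
To make that assumption more concrete, the snippet that follows is a rough sketch of the kind of steps that have to happen before docker image build can use this Dockerfile. The golang:1.9 image, the mount path, and the 1.0-beta tag are placeholders I picked for illustration, not necessarily what the pipeline will use.

# Run the tests and build the binary inside a throw-away Go container
docker container run --rm \
    -v "$PWD":/go/src/go-demo \
    -w /go/src/go-demo \
    golang:1.9 \
    sh -c "go get -d -v -t && go test --run UnitTest ./... && go build -o go-demo"

# Only then can the Dockerfile copy the go-demo binary into the final image
docker image build -t $DH_USER/go-demo-5:1.0-beta .

The pipeline will do the equivalent inside Jenkins, so treat this only as a mental model of the order of operations.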

We'll explore the pipeline stored in the Jenkinsfile in the repository we cloned. However, before we do that, we'll go through the declarative pipeline syntax since that's the one we'll use in this chapter.
