Demystifying Declarative Pipeline through a practical example

Let's take a look at Jenkinsfile.orig, which we'll use as a base to generate a Jenkinsfile containing the correct address of the cluster and the GitHub user.

 1  cat Jenkinsfile.orig

The output is too big for us to explore in one go, so we'll comment on each section separately. First in line is the options block.

...
options {
  buildDiscarder logRotator(numToKeepStr: '5')
  disableConcurrentBuilds()
}
...

The first option will result in only the last five builds being preserved in history. Most of the time, there is no reason for us to keep all the builds we ever made. The last successful build of a branch is often the only one that matters. We set the number to five just to prove that I'm not cheap. By discarding old builds, we're ensuring that Jenkins will perform faster. Please note that the last successful build is kept even if, for example, the last five or more builds failed.

The second option disables concurrent builds. Each branch will have a separate job (just as in the previous chapter). If commits to different branches happen close to each other, Jenkins will process them in parallel by running builds for corresponding jobs. However, there is often no need for us to run multiple builds of the same job (branch) at the same time. In some cases, that can even produce adverse effects. With disableConcurrentBuilds, if we ever make multiple commits rapidly, they will be queued and executed sequentially.

It's up to you to decide whether those options are useful. If they are, use them. If they aren't, discard them. My mission was to show you a few of the many options we can use.
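
If you'd like to experiment, the same options block accepts quite a few other entries. The sketch that follows adds two we will not use in this chapter (timestamps assumes the Timestamper plugin is installed).

options {
  buildDiscarder logRotator(numToKeepStr: '5')
  disableConcurrentBuilds()
  // Abort a build that runs for more than thirty minutes
  timeout(time: 30, unit: 'MINUTES')
  // Prefix every line of the console output with a timestamp
  timestamps()
}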

The next block is agent.

...
agent {
  kubernetes {
    cloud "go-demo-5-build"
    label "go-demo-5-build"
    serviceAccount "build"
    yamlFile "KubernetesPod.yaml"
  }      
}
...

In our case, the agent block contains a kubernetes block. That is an indication that the pipeline should create a Pod based on a Kubernetes cloud configuration.

That is further refined with the cloud entry, which specifies that it must be the cloud config named go-demo-5-build. We'll create that cloud later. For now, we'll have to assume that it'll exist.

The benefit of that approach is that part of the agent information can be defined outside the pipeline, so other teams have fewer things to worry about in their Jenkinsfile. As an example, you will not see any mention of the Namespace where the build should create the Pod that acts as a Jenkins agent. That is defined elsewhere, and every build that uses go-demo-5-build will run in that same Namespace.

There is another, less apparent reason for using a cloud dedicated to the builds in the go-demo-5-build Namespace. Declarative syntax does not allow us to specify a Namespace, so we'll have to have at least as many cloud configurations as there are Namespaces.
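
For comparison, the Scripted syntax does not have that limitation. The fragment that follows is only a sketch of how a Namespace could be set through the kubernetes plugin's podTemplate step; it is not part of this chapter's pipeline.

podTemplate(
  cloud: "kubernetes",
  namespace: "go-demo-5-build",
  serviceAccount: "build",
  label: "go-demo-5-build",
  containers: [
    // A single container is enough to illustrate the point
    containerTemplate(name: "golang", image: "golang:1.12", command: "cat", ttyEnabled: true)
  ]
) {
  node("go-demo-5-build") {
    container("golang") {
      sh "go version"
    }
  }
}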

The label defines the prefix that will be used to name the Pods spun up by the builds based on this pipeline.

Next, we're defining serviceAccount as build. We already created that ServiceAccount inside the go-demo-5-build Namespace when we applied the configuration from build.yml. Now we're telling Jenkins that it should use it when creating Pods.

Finally, we changed the way we define the Pod that will act as a Jenkins agent. Instead of embedding the Pod definition inside Jenkinsfile, we're referencing an external file through yamlFile. My opinion on that feature is still divided. Having the Pod definition in Jenkinsfile (as we did in the previous chapter) allows us to inspect everything related to the job from a single location. On the other hand, moving the Pod definition to yamlFile enables us to focus on the flow of the pipeline and leave the lengthy Pod definition out of it. It's up to you to choose which approach you like more. We'll explore the content of KubernetesPod.yaml a bit later.
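
If you prefer everything in one place, the kubernetes block also accepts an inline yaml entry instead of yamlFile. The fragment below is only an illustration of that alternative, trimmed to a single container.

agent {
  kubernetes {
    cloud "go-demo-5-build"
    label "go-demo-5-build"
    serviceAccount "build"
    // Inline Pod definition instead of a reference to an external file
    yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: golang
    image: golang:1.12
    command: ["cat"]
    tty: true
"""
  }
}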

The next section in Jenkinsfile.orig is environment.

... 
environment { 
  image = "vfarcic/go-demo-5" 
  project = "go-demo-5" 
  domain = "acme.com" 
  cmAddr = "cm.acme.com" 
} 
... 

The environment block defines a few variables that we'll use in our steps. They are similar to those we used before, and they should be self-explanatory. Later on, we'll have to change vfarcic to your Docker Hub user and acme.com to the address of your cluster.

You should note that Declarative Pipeline allows us to use the variables defined in the environment block both as "normal" variables (for example, ${VARIABLE_NAME}) and as environment variables (for example, ${env.VARIABLE_NAME}).
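
As a quick illustration of that claim, both sh steps in the fragment below would print the same value. The fragment is only an example; it is not part of Jenkinsfile.orig.

...
steps {
  // Both forms resolve to "vfarcic/go-demo-5"
  sh "echo ${image}"
  sh "echo ${env.image}"
}
...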

Now we've reached the "meat" of the pipeline. The stages block contains three stage sub-blocks, with steps inside each.

 1  ... 
 2  stages { 
 3    stage("build") { 
 4      steps { 
 5        ... 
 6      } 
 7    } 
 8    stage("func-test") { 
 9      steps { 
10        ... 
11      } 
12    } 
13    stage("release") { 
14      steps { 
15        ... 
16      } 
17    } 
18  } 
19  ... 

Just as in the continuous deployment pipeline, we have build, func-test, and release stages. However, the deploy stage is missing. This time, we are NOT going to deploy a new release to production automatically. We'll need a manual intervention to do that. One possible way to accomplish that would be to add a deploy stage to the pipeline with an additional input step in front of its steps. It would pause the execution of the pipeline until we choose to click the button to proceed with the deployment to production. However, we will not take that approach. Instead, we'll opt for the GitOps principle, which we'll discuss later. For now, just remember that our pipeline's goal is to make a release, not to deploy it to production.
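
For completeness, a sketch of the approach we rejected follows. The deploy stage and its steps are hypothetical; we will not add them to our pipeline.

stage("deploy") {
  when {
    branch "master"
  }
  steps {
    // Pause until a human confirms the deployment
    input message: "Deploy this release to production?"
    container("helm") {
      // The actual deployment steps would go here
    }
  }
}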

Let us briefly go through each of the stages of the pipeline. The first one is the build stage.

 1  ...
 2  stage("build") {
 3    steps {
 4      container("golang") {
 5        script {
 6          currentBuild.displayName = new SimpleDateFormat("yy.MM.dd").format(new Date()) + "-${env.BUILD_NUMBER}"
 7        }
 8        k8sBuildGolang("go-demo")
 9      }
10      container("docker") {
11        k8sBuildImageBeta(image, false)
12      }
13    }
14  }
15  ...

The first set of steps of the build stage starts in the golang container. The first action is to customize the name of the build by changing the value of displayName. However, that is not allowed in Declarative Pipeline. Luckily, there is a way to bypass that limitation by defining a script block. Inside it, we can put any set of pipeline instructions we'd typically define in a Scripted Pipeline. A script block is a nifty way to temporarily switch from Declarative to Scripted Pipeline, which allows much more freedom and is not bound by Declarative's strict format rules.

There was no particular reason for using the golang container to set the displayName. We could have done it in any of the other containers available in our agent defined through yamlFile. The only reason we chose golang over any other lies in the next step.

Since, this time, our Dockerfile does not use multi-stage builds and, therefore, does not run unit tests nor build the binary needed for the final image, we have to run those steps separately. Given that the application is written in Go, we need its compiler, which is available in the golang container. The actual steps are defined in k8sBuildGolang.groovy (https://github.com/vfarcic/jenkins-shared-libraries/blob/master/vars/k8sBuildGolang.groovy) inside the same repository we used in the previous chapter. Feel free to explore it, and you'll see that it contains the same commands we used before inside the first stage of the multi-stage build defined in the go-demo-3 Dockerfile.
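
If you don't feel like opening the repository right now, the fragment below shows the general shape of such a shared library step. It is an approximation of k8sBuildGolang.groovy, not a copy of it.

// vars/k8sBuildGolang.groovy (approximation)
def call(goDir) {
  // Run the unit tests and build the Linux binary used by the Docker image
  sh "go get -d -v -t"
  sh "go test --cover -v ./... --run UnitTest"
  sh "CGO_ENABLED=0 GOOS=linux go build -v -o ${goDir}"
}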

Once the unit tests are executed and the binary is built, we switch to the docker container to build the image. This step is based on the same shared libraries we used before, just like most of the other steps in this pipeline. Since you're already familiar with them, I'll comment only if there is a substantial change in the way we utilize those libraries, or if we add a new one that we haven't used before. If you already forgot how those libraries work, please consult their code (*.groovy) or their corresponding helper files (*.txt) located in the vars directory of the jenkins-shared-libraries repository you already forked.

Let's move into the next stage.

 1  ...
 2  stage("func-test") {
 3    steps {
 4      container("helm") {
 5        k8sUpgradeBeta(project, domain, "--set replicaCount=2 --set dbReplicaCount=1")
 6      }
 7      container("kubectl") {
 8        k8sRolloutBeta(project)
 9      }
10      container("golang") {
11        k8sFuncTestGolang(project, domain)
12      }
13    }
14    post {
15      always {
16        container("helm") {
17          k8sDeleteBeta(project)
18        }
19      }
20    }
21  }
22  ...

The steps of the func-test stage are the same as those we used in the continuous deployment pipeline. The only difference is in the format of the blocks that surround them. We're jumping from one container to another and executing the same shared libraries as before.

The real difference is in the post section of the stage. It contains an always block that guarantees that the steps inside it will execute no matter the outcome of the steps in the stage. In our case, the post section has only one step that invokes the k8sDeleteBeta library, which deletes the installation of the release under test.

As you can see, the func-test stage we just explored is functionally the same as the one we used in the previous chapter when we defined the continuous deployment pipeline. However, I'd argue that the post section available in Declarative Pipeline is much more elegant and easier to understand than the try/catch/finally blocks we used inside the Scripted Pipeline. That would be even more evident if we used more complex post conditions, but we don't have a good use-case for them.
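
To make the comparison concrete, the Scripted equivalent of that cleanup would look roughly like the sketch below.

node("go-demo-5-build") {
  try {
    // ... upgrade, rollout, and functional test steps ...
  } finally {
    // Executed no matter the outcome of the steps above
    container("helm") {
      k8sDeleteBeta(project)
    }
  }
}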

It's time to move into the next stage.

 1  ... 
 2  stage("release") { 
 3    when { 
 4        branch "master" 
 5    } 
 6    steps { 
 7      container("docker") { 
 8        k8sPushImage(image, false) 
 9      } 
10      container("helm") { 
11        k8sPushHelm(project, "", cmAddr, true, true) 
12      } 
13    } 
14  } 
15  ... 

The release stage, just as its counterpart from the previous chapter, features the same step that tags and pushes the production release to Docker Hub (k8sPushImage), as well as the one that packages and pushes the Helm Chart to ChartMuseum (k8sPushHelm). The only difference is that the latter library invocation now uses two additional arguments. The third one, when set to true, replaces the image.tag value with the tag of the image built in the previous step. The fourth argument, also when set to true, fails the build if the version of the Chart is unchanged or, in other words, if it already exists in ChartMuseum. By combining those two, we guarantee that the image.tag value in the Chart matches the image we built, and that the version of the Chart is unique. The latter forces us to update the version manually. If we were doing continuous deployment, a manual update (or any other manual action) would be unacceptable. But continuous delivery does involve a human decision about when and what to deploy to production. We're just ensuring that the human action of changing the version of the Chart was indeed performed. Please open the source code of k8sPushHelm.groovy (https://github.com/vfarcic/jenkins-shared-libraries/blob/master/vars/k8sPushHelm.groovy) to check the code behind that library and compare it with the statements you just read.

You'll notice that there is a when statement above the steps. Generally speaking, it is used to limit the executions within a stage only to those cases that match the condition. In our case, that condition states that the stage should be executed only if the build is using a commit from the master branch. It is equivalent to the if ("${BRANCH_NAME}" == "master") block we used in the continuous deployment pipeline in the previous chapter. There are other conditions we could have used but, for our use-case, that one is enough.

You might want to explore other types of when conditions by going through the when statement documentation (https://jenkins.io/doc/book/pipeline/syntax/#when).
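
As a taste of what's available, the fragment below combines a few of the documented conditions. We won't use any of them beyond branch in this chapter.

when {
  allOf {
    // Run only on the master branch
    branch "master"
    // Skip the stage if the commit message asks us to
    not { changelog ".*skip ci.*" }
    // Run only when the RELEASE environment variable is set to "true"
    environment name: "RELEASE", value: "true"
  }
}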

You'll notice that we did not define a git or checkout scm step anywhere in our pipeline script. There's no need for that with Declarative Pipeline.

It is intelligent enough to know that we want to clone the code of the commit that initiated a build (through a Webhook, if we had one). When a build starts, cloning the code will be one of its first actions.
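
If you ever need to take control over that behavior, Declarative Pipeline lets us disable the implicit clone and perform it explicitly, as in the sketch below (not part of our Jenkinsfile.orig).

options {
  // Skip the clone that is normally performed when an agent is allocated
  skipDefaultCheckout()
}
...
steps {
  // Clone explicitly, only where we need the code
  checkout scm
}
...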

Now that we went through the content of the Jenkinsfile.orig file, we should go back to the referenced KubernetesPod.yaml file that defines the Pod that will be used as a Jenkins agent.

 1  cat KubernetesPod.yaml

The output is as follows.

apiVersion: v1
kind: Pod
spec:
  containers:
  - name: docker
    image: docker:18.06
    command: ["cat"]
    tty: true
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker-socket
  - name: helm
    image: vfarcic/helm:2.9.1
    command: ["cat"]
    tty: true
  - name: kubectl
    image: vfarcic/kubectl
    command: ["cat"]
    tty: true
  - name: golang
    image: golang:1.12
    command: ["cat"]
    tty: true
  volumes:
  - name: docker-socket
    hostPath:
      path: /var/run/docker.sock
      type: Socket

That Pod definition is almost the same as the one we used inside Jenkinsfile in the go-demo-3 repository. Apart from residing in a separate file, the only difference is in an additional container named docker. In this scenario, we are not using external VMs to build Docker images.

Instead, we have an additional container through which we can execute Docker-related steps. Since we want to execute Docker commands on the node, and avoid running Docker-in-Docker, we mounted /var/run/docker.sock as a volume.
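
As an example of what that enables, a hypothetical step like the one below would build an image using the node's Docker daemon through the mounted socket.

container("docker") {
  // The Docker CLI in this container talks to the host daemon over /var/run/docker.sock
  sh "docker image build -t ${image}:beta ."
}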

A note to minishift users
We need to relax security so that Pods are allowed to use the hostPath volume plug-in. Please execute the command that follows.
oc adm policy add-scc-to-user hostmount-anyuid -z build -n go-demo-5-build