Reusing pipeline snippets through global pipeline libraries

The pipeline we designed works as we expect. However, we'll have a problem on our hands if other teams start copying and pasting the same script for their pipelines. We'd end up with a lot of duplicated code that will be hard to maintain.

Most likely, it will get worse than simple duplication, since not all pipelines will be the same. Chances are each will be different, so copying and pasting will only be the first step. People will find the pipeline that is closest to what they're trying to accomplish, replicate it, and then change it to suit their needs. Some steps are likely to be the same for many (if not all) projects, while others will be specific to only one, or just a few, pipelines.

The more pipelines we design, the more patterns will emerge. Everyone might want to build Docker images with the same command, but with different arguments. Others might use Helm to install their applications, but will not (yet) have any tests to run (be nice, do not judge them). Someone might choose to use Rust (https://www.rust-lang.org/) for the new project, and the commands will be unique only to a single pipeline.

What we need to do is look for patterns. When we notice that a step, or a set of steps, is the same across multiple pipelines, we should convert that snippet into a library, just as we do when repetition appears in our application code. Those libraries need to be accessible to everyone who needs them, and they need to be flexible enough that their behavior can be adjusted to slightly different needs. We should be able to pass arguments to those libraries. What we truly need is the ability to create new pipeline steps tailored to our needs. Just as there is a general step git, we might want something like k8sUpgrade that performs Helm's upgrade command. We can accomplish that, and quite a few other things, through Jenkins' Global Pipeline Libraries.

We'll explore libraries through practical examples, so the first step is to configure them.

 1  open "http://$JENKINS_ADDR/configure"

Please search for Global Pipeline Libraries section of the configuration, and click the Add button.

Type my-library as the Name (it can be anything else) and master as the Default version. In our context, the latter defines the branch from which we'll load the libraries.

Next, we'll click the Load implicitly checkbox. As a result, the libraries will be available automatically to all the pipeline jobs.

Otherwise, each of our jobs would need an explicit @Library('my-library') instruction.
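As an illustration, if we had not checked Load implicitly, a pipeline would need to import the library explicitly before using its steps. A minimal sketch might look as follows (the step inside the stage assumes a function from the library we're about to explore).

```groovy
// Load the shared library explicitly; the part after @ pins the branch.
@Library('my-library@master') _

node {
  stage("build") {
    // Steps defined in the library's vars/ directory become available here.
    k8sBuildImageBeta("vfarcic/go-demo-3")
  }
}
```

With Load implicitly checked, the annotation is unnecessary, and the steps are available to every job automatically.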

Select Modern SCM from the Retrieval method section and select Git from Source Code Management.

We're almost done. The only thing left is to specify the repository from which Jenkins will load the libraries. I already created a repo with all the libraries we'll use (and a few others we won't need). However, the GitHub API has a limit to the number of requests that can be made per hour, so if you (and everyone else) use my repo, you might see some undesirable effects. My recommendation is to go to vfarcic/jenkins-shared-libraries (https://github.com/vfarcic/jenkins-shared-libraries) and fork it. Once the fork is created, copy the address from the Clone and download drop-down list, return to the Jenkins UI, and paste it into the Project Repository field.

We're finished with the configuration. Don't forget to click the Save button to persist the changes.

Figure 7-12: Jenkins Global Pipeline Libraries configuration screen

Let's take a closer look at the repository we'll use as the global pipeline library.

 1  export GH_USER=[...]
 2
 3  open "https://github.com/$GH_USER/jenkins-shared-libraries.git"

Please replace [...] with your GitHub user before opening the forked repository in a browser.

You'll see that the repository contains only a .gitignore file and the vars directory in the root. Jenkins' Global Pipeline Libraries use a naming convention to discover the functions we'd like to use. They can be in either the src or the vars folder. The former is rarely used these days, so we have only the latter.

If you enter the vars directory, you'll see quite a few *.groovy files mixed with a few *.txt files. We'll postpone exploring the latter group of files and concentrate on the Groovy files instead. We'll use those whose names start with k8s and oc (in case you're using OpenShift).

Please find the k8sBuildImageBeta.groovy file and open it. You'll notice that the code inside it is almost the same as the one we used in the build stage. There are a few differences though, so let's go through the structure of the shared functions. It'll be a concise explanation.

The name of the file (for example, k8sBuildImageBeta.groovy) becomes a pipeline step (for example, k8sBuildImageBeta). When we use such a step, Jenkins invokes the call function defined in that file. So, every Groovy file in vars needs to have a call function, even though additional internal functions can be defined as well. The call function can accept any number of arguments. If we continue with the same example, you'll see that call inside k8sBuildImageBeta.groovy has a single argument image. It could have been defined with an explicit type like String image, but in most cases there's no need for that. Groovy will figure out the type.

Inside the call function are almost the same steps as those we used inside the build stage. I copied and pasted them. The only modification to the steps was to replace hard-coded Docker image references with the image argument. Since we already know that Groovy interpolates arguments in a string when they are prefixed with the dollar sign ($) and optional curly braces ({ and }), our image argument became ${image}.
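To make the convention concrete, here is a minimal sketch of what a file like vars/k8sBuildImageBeta.groovy might contain. The sh command inside is illustrative only, not a copy of the actual repository's code.

```groovy
// vars/k8sBuildImageBeta.groovy
// The file name becomes the step name; Jenkins invokes call() when the step is used.
def call(image) {
  // The image argument is interpolated into the shell command.
  sh "docker image build -t ${image}:beta ."
}
```

A pipeline could then invoke it simply as k8sBuildImageBeta("vfarcic/go-demo-3"), with no knowledge of the commands hidden behind the step.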

Using arguments in these functions is essential, since arguments are what make them reusable across different pipelines. If k8sBuildImageBeta.groovy had the go-demo-3 image hard-coded, it would be useful to no one except those building the go-demo application.

The alternative would be to use environment variables and ditch arguments altogether. I've seen that pattern in many organizations, and I think it's horrible. It does not make clear what is needed to use the function. There are a few exceptions, though. My usage of environment variables is limited to those available to all builds. For example, ${env.BRANCH_NAME} is always available; one does not need to create it when writing a pipeline script. For everything else, please use arguments. That will be a clear indication to the users of those functions of what is required.
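As a hypothetical illustration of the difference, compare a function that silently depends on an environment variable with one that states its needs through its signature. Both bodies are made up for this example.

```groovy
// Anti-pattern: the caller cannot tell from the signature that
// env.IMAGE must be set before invoking the step.
def call() {
  sh "docker image push ${env.IMAGE}"
}

// Preferred: the required input is explicit, so the function
// documents its own contract and is trivially reusable.
def call(image) {
  sh "docker image push ${image}"
}
```

The second variant fails loudly and obviously when the argument is missing, while the first fails in ways that force users to read the function's source to understand what went wrong.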

I won't go through all the Groovy files that start with k8s (and oc) since they follow the same logic as k8sBuildImageBeta.groovy. They are copies of what we used in our pipeline, with the addition of a few arguments. So, instead of me going over all the functions, please take some time to explore them yourself. Return here once you're done, and we'll put those functions to good use and clarify a few other important aspects of Jenkins' shared libraries.

Before we continue, you might want to persist the changes we made to the Jenkins configuration. All the information about the shared libraries is stored in the org.jenkinsci.plugins.workflow.libs.GlobalLibraries.xml file. We just need to copy it.

 1  kubectl -n go-demo-3-jenkins cp \
 2      $JENKINS_POD:var/jenkins_home/org.jenkinsci.plugins.workflow.libs.GlobalLibraries.xml \
 3      cluster/jenkins/secrets

I already modified the template of the Jenkins Helm Chart to include the file we just copied. The next time you install Jenkins with Helm, all you have to do is add the jenkins.master.GlobalLibraries value. The full argument should be as follows.

 1  --set jenkins.master.GlobalLibraries=true

Now we can refactor our pipeline to use shared libraries and see whether that simplifies things.

 1  open "http://$JENKINS_ADDR/job/go-demo-3/configure"

If you are NOT using minishift, please replace the existing code with the content of the cdp-jenkins-lib.groovy (https://gist.github.com/vfarcic/e9821d0430ca909d68eecc7ccbb1825d) Gist.

If you are using minishift, please replace the existing code with the content of the cdp-jenkins-lib-oc.groovy (https://gist.github.com/vfarcic/ff6e0b04f165d2b26d326c116a7cc14f) Gist.

We'll explore only the differences from the previous iteration of the pipeline. They are as follows.

 1  ...
 2  env.PROJECT = "go-demo-3"
 3  env.REPO = "https://github.com/vfarcic/go-demo-3.git"
 4  env.IMAGE = "vfarcic/go-demo-3"
 5  env.DOMAIN = "acme.com"
 6  env.ADDRESS = "go-demo-3.acme.com"
 7  env.CM_ADDR = "cm.acme.com"
 8  env.CHART_VER = "0.0.1"
 9  ...
10    node("kubernetes") {
11      node("docker") {
12        stage("build") {
13          git "${env.REPO}"
14          k8sBuildImageBeta(env.IMAGE)
15        }
16      }
17      stage("func-test") {
18        try {
19          container("helm") {
20            git "${env.REPO}"
21            k8sUpgradeBeta(env.PROJECT, env.DOMAIN, "--set replicaCount=2 --set dbReplicaCount=1")
22          }
23          container("kubectl") {
24            k8sRolloutBeta(env.PROJECT)
25          }
26          container("golang") {
27            k8sFuncTestGolang(env.PROJECT, env.DOMAIN)
28          }
29        } catch(e) {
30          error "Failed functional tests"
31        } finally {
32          container("helm") {
33            k8sDeleteBeta(env.PROJECT)
34          }
35        }
36      }
37      stage("release") {
38        node("docker") {
39          k8sPushImage(env.IMAGE)
40        }
41        container("helm") {
42          k8sPushHelm(env.PROJECT, env.CHART_VER, env.CM_ADDR)
43        }
44      }
45      stage("deploy") {
46        try {
47          container("helm") {
48            k8sUpgrade(env.PROJECT, env.ADDRESS)
49          }
50          container("kubectl") {
51            k8sRollout(env.PROJECT)
52          }
53          container("golang") {
54            k8sProdTestGolang(env.ADDRESS)
55          }
56        } catch(e) {
57          container("helm") {
58            k8sRollback(env.PROJECT)
59          }
60        }
61      }
62    }
63  }

We have fewer environment variables since part of the logic for constructing the values is moved into the functions. The podTemplate is still the same, and the real differences are noticeable inside stages.

All the stages now contain fewer steps. Everything is much simpler since the logic, the steps, and the commands were moved to functions. All we're doing is treating those functions as simplified steps.

You might say that even though the pipeline is now much more straightforward, it is still not trivial. You'd be right. We could have replaced the steps with fewer, bigger functions. We could have had only four, like build, test, release, and deploy. However, that would reduce flexibility. Every team in our organization would need to build, test, release, and deploy in the same way, or skip the library and do the coding inside the pipeline. If the functions are too big, people must choose between adopting the whole process and not using them at all. By having very focused functions that do only one, or just a few, things, we gain more flexibility when combining them.

Good examples are the functions used in the deploy stage. If there were only one (for example, k8sDeploy), everyone would need to use Go to test. As it is now, a different team could choose to use the k8sUpgrade and k8sRollout functions but skip k8sProdTestGolang. Maybe their application is written in Rust, and they will need a separate function. Or, there might be only one project that uses Rust, and there's no need for a function since there is no repetition. The point is that teams should be able to re-use the libraries that fit their process and write whatever they're missing themselves.

From my experience, functions from Jenkins' global pipeline libraries should be small and with a single purpose. That way, we can combine them as if they are pieces of a puzzle, instead of continually adding complexity by trying to fit all the scenarios into one or a few libraries.

Another thing worth mentioning is that node and container blocks are not inside libraries. There are two reasons for that. First, I think it is easier to understand the flow of the pipeline (without going into libraries) when those blocks are there. The second and the much more important reason is that they are not allowed in a declarative pipeline. We are using scripted flavor only because a few things are missing in declarative. However, the declarative pipeline is the future, and you should be prepared to switch once those issues are resolved. I will refactor the code into declarative once that becomes an option.

Before we move forward, please replace the values of the environment variables to fit your situation. As a reminder, you most likely need to change vfarcic with your GitHub and Docker Hub users, and acme.com with the value of the environment variable ADDR available in your terminal session.

Once you're finished adjusting the values, please click the Save button to persist the changes. Click the Open Blue Ocean link from the left-hand menu, followed by the Run button. Go to the new build and wait until it is finished.

We refactored the pipeline to make it more readable and easier to maintain. We did not introduce new functionality, so the result of this build should be functionally the same as that of the build done with the prior iteration of the code. Let's confirm that.

Did we push a new image to Docker Hub?

 1  open "https://hub.docker.com/r/$DH_USER/go-demo-3/tags/"

The new image (with a few tags) was pushed.

How about Helm upgrades?

 1  helm ls \
 2      --tiller-namespace go-demo-3-build

The output is as follows.

NAME      REVISION UPDATED        STATUS   CHART           NAMESPACE
go-demo-3 2        Wed Jul 18 ... DEPLOYED go-demo-3-0.0.1 go-demo-3

We are now on the second revision, so that part seems to be working as expected. To be on the safe side, we'll check the history.

 1  helm history go-demo-3 \
 2      --tiller-namespace go-demo-3-build

The output is as follows.

REVISION UPDATED        STATUS     CHART           DESCRIPTION
1        Wed Jul 18 ... SUPERSEDED go-demo-3-0.0.1 Install complete
2        Wed Jul 18 ... DEPLOYED   go-demo-3-0.0.1 Upgrade complete 

The first revision was superseded by the second.

Our mission has been accomplished, but our pipeline is still not as it's supposed to be.
