Defining the functional testing stage

For the functional testing stage, the first step is to install the application under test. To avoid the potential problems of installing the same release twice, we'll use helm upgrade instead of install.
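The difference can be sketched with the commands below. They assume a cluster with Tiller running in the go-demo-3-build Namespace, so treat them as an illustration of the flags rather than something to run right now; the release and chart names are only examples.

```shell
# `helm install` fails if a release with the same name already exists.
# `helm upgrade -i` (--install) upgrades the release when it exists and
# installs it when it does not, so re-running the same build is safe.
helm upgrade go-demo-3 helm/go-demo-3 -i \
    --tiller-namespace go-demo-3-build
```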

As you already know, Helm only acknowledges that the resources are created, not that all the Pods are running. To mitigate that, we'll wait for rollout status before proceeding with tests.

Once the application is rolled out, we'll run the functional tests. Please note that, in this case, we will run only one set of tests. In a real-world scenario, there would probably be others like, for example, performance tests or front-end tests for different browsers.

When running multiple sets of different tests, consider using the parallel construct. More information can be found in the Parallelism and Distributed Builds with Jenkins (https://www.cloudbees.com/blog/parallelism-and-distributed-builds-jenkins) article.
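As a sketch only, independent test suites could run concurrently with the parallel step of scripted pipelines. The performance suite and the PerformanceTest filter below are hypothetical; only the functional branch corresponds to what our pipeline actually runs.

```groovy
stage("func-test") {
    // Each map entry becomes a concurrently executed branch.
    parallel(
        functional: {
            container("golang") {
                sh "go test ./... -v --run FunctionalTest"
            }
        },
        performance: { // hypothetical additional suite
            container("golang") {
                sh "go test ./... -v --run PerformanceTest"
            }
        }
    )
}
```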

Finally, we'll have to delete the chart we installed. After all, it's pointless to waste resources by running an application longer than we need it. In our scenario, as soon as the execution of the tests is finished, we'll remove the application under test. However, there is a twist. Jenkins, like most other CI/CD tools, will stop the execution on the first error. Since there is no guarantee that none of the steps in this stage will fail, we'll have to envelop all the steps inside a big try/catch/finally statement.

Figure 7-4: The essential steps of the functional stage

Before we move on and write a new version of the pipeline, we'll need an address that we'll use as the Ingress host of our application under test.

 1  export ADDR=$LB_IP.nip.io
 2
 3  echo $ADDR

Please copy the output of the echo. We'll need it soon.

Next, we'll open the job's configuration screen.

 1  open "http://$JENKINS_ADDR/job/go-demo-3/configure"

If you are NOT using minishift, please replace the existing code with the content of the cdp-jenkins-func.groovy (https://gist.github.com/vfarcic/4edc53d5dd11814651485c9ff3672fb7) Gist.

If you are using minishift, replace the existing code with the content of the cdp-jenkins-func-oc.groovy (https://gist.github.com/vfarcic/1661c2527eda2bfe1e35c77f448f7c34) Gist.

We'll explore only the differences between the two revisions of the pipeline. They are as follows.

 1  ...
 2  env.ADDRESS = "go-demo-3-${env.BUILD_NUMBER}-${env.BRANCH_NAME}.acme.com"
 3  env.CHART_NAME = "go-demo-3-${env.BUILD_NUMBER}-${env.BRANCH_NAME}"
 4  def label = "jenkins-slave-${UUID.randomUUID().toString()}"
 5
 6  podTemplate(
 7    label: label,
 8    namespace: "go-demo-3-build",
 9    serviceAccount: "build",
10    yaml: """
11  apiVersion: v1
12  kind: Pod
13  spec:
14    containers:
15    - name: helm
16      image: vfarcic/helm:2.9.1
17      command: ["cat"]
18      tty: true
19    - name: kubectl
20      image: vfarcic/kubectl
21      command: ["cat"]
22      tty: true
23    - name: golang
24      image: golang:1.12
25      command: ["cat"]
26      tty: true
27  """
28  ) {
29    node(label) {
30      node("docker") {
31        stage("build") {
32          ...
33        }
34      }
35      stage("func-test") {
36        try {
37          container("helm") {
38            git "${env.REPO}"
39            sh """helm upgrade \
40              ${env.CHART_NAME} \
41              helm/go-demo-3 -i \
42              --tiller-namespace go-demo-3-build \
43              --set image.tag=${env.TAG_BETA} \
44              --set ingress.host=${env.ADDRESS} \
45              --set replicaCount=2 \
46              --set dbReplicaCount=1"""
47          }
48          container("kubectl") {
49            sh """kubectl -n go-demo-3-build \
50              rollout status deployment \
51              ${env.CHART_NAME}"""
52          }
53          container("golang") { // Uses env ADDRESS
54            sh "go get -d -v -t"
55            sh """go test ./... -v \
56              --run FunctionalTest"""
57          }
58        } catch(e) {
59          error "Failed functional tests"
60        } finally {
61          container("helm") {
62            sh """helm delete \
63              ${env.CHART_NAME} \
64              --tiller-namespace go-demo-3-build \
65              --purge"""
66          }
67        }
68      }
69    }
70  }

We added a few new environment variables that will simplify the steps that follow. The ADDRESS will be used to provide a unique host for the Ingress of the application under test. The uniqueness is accomplished by combining the name of the project (go-demo-3), the build number, and the name of the branch. We used a similar pattern to generate the name of the Chart that will be installed. All in all, both the address and the Chart are unique for each release of each application, no matter the branch.
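The naming scheme can be illustrated outside Jenkins with plain shell interpolation. The values for BUILD_NUMBER and BRANCH_NAME below are made up; in a real build, Jenkins injects them.

```shell
# Stand-ins for the variables Jenkins injects into every build.
BUILD_NUMBER=2
BRANCH_NAME=master

# Same pattern as env.ADDRESS and env.CHART_NAME in the pipeline.
ADDRESS="go-demo-3-${BUILD_NUMBER}-${BRANCH_NAME}.acme.com"
CHART_NAME="go-demo-3-${BUILD_NUMBER}-${BRANCH_NAME}"

echo "$ADDRESS"    # go-demo-3-2-master.acme.com
echo "$CHART_NAME" # go-demo-3-2-master
```

Any combination of project, build number, and branch yields a different host and release name, which is what lets builds of different branches run side by side.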

We also defined the label variable with a unique value by adding a suffix based on a random UUID. Further down, when we define the podTemplate, we'll use the label to ensure that each build uses its own Pod.
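A rough shell analogue of that uniqueness trick is shown below. The pipeline's actual code uses Groovy's UUID.randomUUID(); combining the process ID with a random number is merely a stand-in for it.

```shell
# Unique-per-invocation label, analogous to the Groovy
# "jenkins-slave-${UUID.randomUUID().toString()}" expression.
LABEL="jenkins-slave-$$-${RANDOM}"
echo "$LABEL"
```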

The podTemplate itself is very similar to those we used on quite a few occasions. It'll be created in the go-demo-3-build Namespace dedicated to building and testing applications owned by the go-demo-3 team. The yaml contains the definition of a Pod that includes containers with helm, kubectl, and golang. Those are the tools we'll need to execute the steps of the functional testing stage.

A note to minishift users
Your version of the pipeline contains a few things that other Kubernetes users do not need. You'll notice that there is an additional container named oc in the podTemplate. Further down, in the func-test stage, we're using that container to create an Edge route that provides the same functionality as the Ingress controller used by other Kubernetes flavors.

The curious part is the way nodes (agents) are organized in this iteration of the pipeline. Everything is inside one big node(label) block. As a result, all the steps will be executed in one of the containers of the podTemplate. However, since we do not want the build steps to run inside the cluster, nested inside the podTemplate node is the same node("docker") block we are using for building and pushing Docker images.

The reason for using nested node blocks lies in Jenkins' ability to delete unused Pods. The moment the podTemplate node is closed, Jenkins will remove the associated Pod. To preserve the state we'll generate inside that Pod, we're making sure that it stays alive throughout the whole build by enveloping all the steps (even those running somewhere else) inside one colossal node(label) block.

Inside the func-test stage is a try block that contains all the steps (except cleanup). Each of the steps is executed inside a different container. We enter the helm container to clone the code and execute the helm upgrade command that installs the release under test. Next, we jump into the kubectl container to wait for the rollout status that confirms that the application is rolled out completely. Finally, we switch into the golang container to run our tests.

Please note that we are installing only two replicas of the application under test and one replica of the database. That's more than enough to validate whether it works as expected from the functional point of view. There's no need to have the same number of replicas as what we'll run in the production Namespace.

You might be wondering why we checked out the code for the second time. The reason is simple. In the first stage, we cloned the code inside the VM dedicated to (or dynamically created for) building Docker images. The Pod created through podTemplate does not have that code, so we had to clone it again. We did that inside the helm container since that's the first one we're using.

Why didn't we clone the code in all the containers of the Pod? After all, almost everything we do needs the code of the application. While that might not be true for the kubectl container (it only waits for the installation to roll out), it is undoubtedly true for golang. The answer lies in one of the podTemplate's "hidden" features. Among other things, it creates a volume and mounts it in all the containers of the Pod as the /workspace directory. That directory happens to be the default working directory inside those containers. So, the state created inside one of the containers exists in all the others, as long as we do not switch to a different folder.
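The effect of that shared volume can be mimicked locally. The temporary directory below only stands in for /workspace, and the echo for the git clone step; it is an analogy, not the plugin's mechanism.

```shell
# Two steps share one directory, just as the Pod's containers share the
# mounted /workspace volume: state written by the first step is visible
# to the second.
WORKSPACE=$(mktemp -d)                               # stands in for /workspace
( cd "$WORKSPACE" && echo "cloned code" > repo.txt ) # the "helm" container clones
( cd "$WORKSPACE" && cat repo.txt )                  # the "golang" container sees it
```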

The try block is followed by a catch block that is executed only if one of the steps throws an error. The only purpose of the catch block is to re-throw the error if there is one.

The sole purpose for using try/catch is in the finally block. In it, we are deleting the application we deployed. Since it executes no matter whether there was an error, we have a reasonable guarantee that we'll have a clean system no matter the outcome of the pipeline.

To summarize, the try block ensures that errors are caught. Without it, the pipeline would stop executing at the first sign of failure, and the release under test would never be removed. The catch block re-throws the error, and the finally block deletes the release no matter what happens.
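The same guarantee can be mimicked in plain shell with an EXIT trap. This is only an analogy; the echo commands stand in for the real helm and go test steps.

```shell
# A subshell function with an EXIT trap behaves like try/finally: the
# trap runs whether or not the commands inside fail.
run_stage() (
    trap 'echo "helm delete (cleanup)"' EXIT   # the "finally" part
    echo "go test (functional tests)"          # the "try" part
    false                                      # a failing test step
)
run_stage || echo "Failed functional tests"    # the "catch" part
```

Even though the simulated test step fails, the cleanup message is printed before the failure is reported, mirroring how the finally block deletes the release before the build is marked as failed.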

Before we test the new iteration of the pipeline, please replace the values of the environment variables to fit your situation. As a minimum, you'll need to change vfarcic to your GitHub and Docker Hub users, and you'll have to replace acme.com with the value stored in the environment variable ADDR in your terminal session.

Once finished with the changes, please click the Save button. Use the Open Blue Ocean link from the left-hand menu to switch to the new UI, click the Run button, and then click the row of the new build.

If you configured Jenkins to spin up new Docker nodes in AWS or GCP, it will take around a minute until the VM is created and operational.

Please wait until the build reaches the func-test stage and finishes the second step, the one that executes helm upgrade. Once the release under test is installed, switch to the terminal session to confirm that the new release is indeed installed.

 1  helm ls \
 2      --tiller-namespace go-demo-3-build

The output is as follows.

NAME             REVISION UPDATED        STATUS   CHART           NAMESPACE
go-demo-3-2-null 1        Tue Jul 17 ... DEPLOYED go-demo-3-0.0.1 go-demo-3-build

As we can see, Jenkins did initiate the process that resulted in the new Helm chart being installed in the go-demo-3-build Namespace.

To be on the safe side, we'll confirm that the Pods are running as well.

 1  kubectl -n go-demo-3-build \
 2      get pods

The output is as follows.

NAME                  READY STATUS  RESTARTS AGE
go-demo-3-2-null-...  1/1   Running 4        2m
go-demo-3-2-null-...  1/1   Running 4        2m
go-demo-3-2-null-db-0 2/2   Running 0        2m
jenkins-slave-...     4/4   Running 0        6m
tiller-deploy-...     1/1   Running 0        14m

As expected, the two Pods of the API and the one of the database are running together with the jenkins-slave Pod created by Jenkins.

Please return to the Jenkins UI and wait until the build is finished.

Figure 7-5: Jenkins build with the build and the functional testing stage

If everything works as we designed, the release under test was removed once the testing was finished. Let's confirm that.

 1  helm ls \
 2      --tiller-namespace go-demo-3-build

This time the output is empty, clearly indicating that the chart was removed.

Let's check the Pods one more time.

 1  kubectl -n go-demo-3-build \
 2      get pods

The output is as follows.

NAME              READY STATUS  RESTARTS AGE
tiller-deploy-... 1/1   Running 0        31m

Both the Pods of the release under test and the Jenkins agent are gone, leaving us only with Tiller. We defined the steps that remove the former, while the latter is removed by Jenkins automatically.

Let's move on to the release stage.
