Pipeline-as-code

We previously saw the benefits of representing configuration as code, primarily in the form of infrastructure as code files. The same motivations led to pipeline as code definitions: configuration that specifies the CI server's pipeline steps.

In the past, many CI servers such as Jenkins had to be configured manually. CI server jobs had to be laboriously clicked together to build up pipelines. In particular, rebuilding pipelines for new applications or their feature branches required cumbersome manual work.

Pipeline as code definitions specify the Continuous Delivery pipeline as part of the software project. The CI server builds up and executes the pipeline appropriately, following the script. This vastly simplifies defining and reusing project build pipelines.

Many CI servers support pipeline definitions as code. The most important aspect is that engineers understand the motivations and benefits behind this approach. The following shows examples for Jenkins, a widely used CI server in the Java ecosystem.

Users of Jenkins can craft pipelines in a Jenkinsfile, which is defined using a Groovy DSL. Groovy is an optionally typed, dynamic JVM language that is well suited for DSLs and scripts. Gradle build scripts use a Groovy DSL as well.

The following examples show the steps of a very simple pipeline of a Java enterprise project. The examples are meant to give a rough understanding of the executed process. For full information on Jenkinsfiles, their syntax and semantics, refer to the documentation.

The following shows an example Jenkinsfile, containing a basic pipeline definition.

node {
    prepare()

    stage('build') {
        build()
    }

    parallel failFast: false,
            'integration-test': {
                stage('integration-test') {
                    integrationTest()
                }
            },
            'analysis': {
                stage('analysis') {
                    analysis()
                }
            }

    stage('system-test') {
        systemTest()
    }

    stage('performance-test') {
        performanceTest()
    }

    stage('deploy') {
        deployProduction()
    }
}

// method definitions

The stage definitions refer to steps in the Jenkins pipeline. Since the Groovy script offers a full-fledged programming language, it is possible and advisable to apply clean code practices that produce readable code. Therefore, the contents of the specific steps are refactored into separate methods, all on the same level of abstraction.

The prepare() step, for example, encapsulates several executions to fulfill build prerequisites, such as checking out the build repository. The following code shows its method definition:

def prepare() {
    deleteCachedDirs()
    checkoutGitRepos()
    prepareMetaInfo()
}
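
How these helper methods look depends heavily on the project. As a minimal sketch, assuming the Jenkins Git plugin is available and using a placeholder repository URL and directory name, the sub-steps could resemble the following:

def deleteCachedDirs() {
    // remove leftover checkouts and build output from previous runs
    sh 'rm -rf cars'
}

def checkoutGitRepos() {
    // clone the application repository into the cars directory
    dir('cars') {
        git url: 'https://example.com/car-manufacture.git'
    }
}

def prepareMetaInfo() {
    // record basic information about this build run
    currentBuild.description = "build ${env.BUILD_NUMBER}"
}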

The build stage also encapsulates several sub-steps, from executing the Maven build, recording metadata and test results, to building the Docker images. The following code shows its method definition:

def build() {
    buildMaven()
    testReports()
    publishArtifact()
    addBuildMetaInfo()

    buildPushDocker(dockerImage, 'cars')
    buildPushDocker(databaseMigrationDockerImage, 'cars/deployment/database-migration')
    addDockerMetaInfo()
}
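
The same is true for the build sub-steps. A minimal sketch of some of them, assuming a Maven build, the JUnit plugin for recording test results, and Docker being available on the build node, could look as follows; the goals, report paths, and image handling are assumptions that depend on the project:

def buildMaven() {
    // build the project; the Maven goals depend on the project setup
    dir('cars') {
        sh 'mvn clean package'
    }
}

def testReports() {
    // record the Surefire test results of this build run
    junit 'cars/**/target/surefire-reports/*.xml'
}

def buildPushDocker(String image, String directory) {
    // build the image from the given directory and push it to the registry
    sh "docker build -t $image $directory"
    sh "docker push $image"
}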

These examples provide insight into how to define and encapsulate specific behavior into steps. Providing detailed Jenkinsfile examples is beyond the scope of this book. I will show the rough steps necessary to give an idea of which logical executions are required, and how to define them in pipeline scripts in a readable, productive way. The actual implementations, however, heavily depend on the project.

Jenkins pipeline definitions provide the possibility to include so-called pipeline libraries. These are predefined libraries that contain often-used functionality to simplify usage and reduce duplication across multiple projects. It is advisable to extract certain functionality, especially with regard to environment specifics, into company-specific library definitions.
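
As a sketch, a Jenkinsfile could load such a library at the top of the script. The library name, the image reference, and the functionality the library provides, here the buildPushDocker() step from the previous example, are assumptions:

@Library('company-pipeline-utils') _

node {
    stage('build') {
        // buildPushDocker() is provided by the shared pipeline library
        // instead of being redefined in every Jenkinsfile
        buildPushDocker('docker.example.com/car-manufacture:1', 'cars')
    }
}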

The following example shows the deployment of the car manufacture application to a Kubernetes environment. The deploy() method would be called from within the build pipeline when deploying a specific image and database schema version to a Kubernetes namespace:

def deploy(String namespace, String dockerImage, String databaseVersion) {
    echo "deploying $dockerImage to Kubernetes $namespace"

    updateDeploymentImages(dockerImage, namespace, databaseVersion)
    applyDeployment(namespace)
    watchRollout(namespace)
}

def updateDeploymentImages(String dockerImage, String namespace, String databaseVersion) {
    updateImage(dockerImage, "cars/deployment/$namespace/*.yaml")
    updateDatabaseVersion(databaseVersion, "cars/deployment/$namespace/*.yaml")

    dir('cars') {
        commitPush("[jenkins] updated $namespace image to $dockerImage" +
            " and database version $databaseVersion")
    }
}

def applyDeployment(namespace) {
    sh "kubectl apply --namespace=$namespace -f car-manufacture/deployment/$namespace/"
}

def watchRollout(namespace) {
    sh "kubectl rollout status --namespace=$namespace deployments car-manufacture"
}

This example updates and commits the Kubernetes YAML definitions in the VCS repository. The execution applies the infrastructure as code to the Kubernetes namespace and waits for the deployment to finish.
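
The referenced helper methods are, again, project specific. One simplified way to implement them, assuming GNU sed is available on the build node, the YAML files contain an image: key and a hypothetical databaseVersion: key, and the pipeline is allowed to push to the master branch, is the following:

def updateImage(String dockerImage, String files) {
    // replace the container image reference in the matching YAML files
    sh "sed -i 's|image: .*|image: $dockerImage|' $files"
}

def updateDatabaseVersion(String databaseVersion, String files) {
    // replace the (hypothetical) database schema version in the YAML files
    sh "sed -i 's|databaseVersion: .*|databaseVersion: $databaseVersion|' $files"
}

def commitPush(String message) {
    // commit and push the updated definitions; assumes the master branch
    sh "git add -A"
    sh "git commit -m '$message'"
    sh "git push origin master"
}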

These examples aim to give the reader an idea of how to integrate Continuous Delivery pipelines as pipeline as code definitions with a container orchestration framework such as Kubernetes. As mentioned earlier, it is also possible to make use of pipeline libraries to encapsulate often-used kubectl shell commands. Dynamic languages such as Groovy allow engineers to develop pipeline scripts in a readable way, treating them with the same effort as other code.
