© Banu Parasuraman 2023
B. Parasuraman, Practical Spring Cloud Function, https://doi.org/10.1007/978-1-4842-8913-6_3

3. CI/CD with Spring Cloud Function

Banu Parasuraman1  
(1)
Frisco, TX, USA
 

As you learned in Chapter 2, you can build a Spring Cloud Function and deploy it to multiple environments. You can use various manual methods such as the azure-functions:deploy Maven goal, the gcloud CLI, the AWS CLI, kubectl, and the Knative CLI. These manual approaches are not sustainable in an enterprise with many teams, many programmers, and a lot of code. It becomes a management nightmare if every team member uses their own method to build and deploy code. The process is also repeatable, and because it is repeatable, it is a natural candidate for automation.

This chapter explores ways to automate the deployment process, leveraging some popular approaches for automating your deploys. It uses GitHub Actions to deploy to AWS Lambda, Google Cloud Functions, and Azure Functions, and it integrates with ArgoCD to push to a Kubernetes/Knative environment. While you could use GitHub Actions alone for all environments, that would require custom scripting to push to Kubernetes. ArgoCD has built-in hooks to deploy to Kubernetes, which is the preferred way. More information on GitHub Actions can be found at https://github.com/features/actions, and information on ArgoCD can be found at https://argoproj.github.io/cd/.

Let’s dig a bit deeper into GitHub Actions and ArgoCD.

3.1 GitHub Actions

GitHub Actions is a CI/CD platform tightly integrated with GitHub; it allows you to create and trigger workflows from GitHub. So, if you are a fan of GitHub, you will really like this feature. When you sign up for GitHub, GitHub Actions is automatically integrated into your project, so you do not have to use a separate tool like Jenkins or CircleCI. Of course, this means that you are restricted to GitHub as your code repository. Creating a workflow is quite straightforward. You can create a workflow directly on the GitHub website by navigating to your project and clicking the New Workflow button on the Actions tab, as shown in Figure 3-1.


Figure 3-1

Creating a new GitHub Actions workflow

Upon clicking New Workflow, as shown in Figure 3-2, you will be taken to the workflow “marketplace,” where you can choose from the suggested flows or set up a workflow yourself. Click the Set Up a Workflow Yourself link to start creating a custom workflow.


Figure 3-2

Workflow marketplace to choose your workflow setup

The Set Up a Workflow Yourself window, as shown in Figure 3-3, will take you to the page where you can write the script to create your workflow.


Figure 3-3

Workflow page to create custom workflows

As you can see in Figure 3-3, the workflow points to a main.yaml file that is created in the .github/workflows directory under your root project folder. You can also create the same file in your IDE, and it will show up under the Actions tab once you commit the code. Listing 3-1 shows the sample code created for AWS Lambda.
name: CI
on:
  push:
    branches: [ "master" ]
jobs:
  build-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v2
        with:
          java-version: '8'
          distribution: 'temurin'
          cache: maven
      - uses: aws-actions/setup-sam@v1
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
      - name: Build with Maven
        run: mvn -B package --file pom.xml
      # sam package
      - run: sam package --template-file template.yaml --output-template-file packaged.yaml --s3-bucket payrollbucket
      # Run unit tests - specify unit tests here
      # sam deploy
      - run: sam deploy --no-confirm-changeset --no-fail-on-empty-changeset --stack-name payroll-aws --s3-bucket payrollbucket --capabilities CAPABILITY_IAM --region us-east-2
Listing 3-1

Workflow for Payroll Function to be Deployed on AWS Lambda

Let’s dive deep into the components of the YAML file.


Figure 3-4

Workflow code elaboration

This sets up the GitHub Actions workflow and the triggers. Now, every time you commit or push code to this project repository in GitHub, this workflow will execute. Figure 3-5 shows a sample execution of this workflow.


Figure 3-5

A successful execution of the workflow

3.2 ArgoCD

While GitHub Actions can push code to serverless environments such as Lambda, it lacks a good graphical representation of deployed code when it comes to the Kubernetes environment. Kubernetes is an orchestrator of containers and has a plethora of services that manage deployments. ArgoCD was created for Kubernetes. ArgoCD is a declarative CD (Continuous Delivery) tool, which means application definitions, configurations, and environments can be version controlled. ArgoCD, like GitHub Actions, uses Git repositories as the single source of truth. This approach is known as GitOps.

Declarative means configuration is guaranteed by a set of facts instead of by a set of instructions.

Declarative GitOps allows the programmers who created the application to control the configuration of the environment in which the application will run. This means the programmer does not have to rely on other teams, such as infrastructure or DevOps teams, to manage the pieces of the application. The programmers are in control, and this is a good thing.
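In practice, "declarative" means the desired state lives in a version-controlled manifest that ArgoCD continuously reconciles the cluster against. As a sketch, here is what an ArgoCD Application manifest for this chapter's payroll-h2 app could look like, mirroring the CLI flags used later in this chapter (the manifest itself is illustrative, not from the book's repo):

```yaml
# Hypothetical ArgoCD Application manifest; field values mirror the
# argocd app create command used later in this chapter.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payroll-h2
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/banup-kubeforce/payroll-h2.git
    targetRevision: HEAD
    path: knative
  destination:
    server: https://kubernetes.default.svc
    namespace: default
```

Because this file sits in Git, the environment's configuration is a set of version-controlled facts rather than a sequence of imperative commands.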

ArgoCD setup is mostly programmatic and relies on the underlying Kubernetes ConfigMaps. This is, as you can see, different from other tools like Jenkins.

Here is how I set up the ArgoCD environment.

Prerequisites:

Step 1: Create a namespace for ArgoCD.

Run the following command against your Kubernetes cluster:
$kubectl create namespace argocd


Figure 3-6

Create an ArgoCD namespace

Step 2: Install ArgoCD.
$kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
This will go through a lengthy process to install ArgoCD. Once the installation is successful, you can validate it by using the following command:
$kubectl get all -n argocd


Figure 3-7

View the status of an ArgoCD installation

Now you can see that the ArgoCD services are up and running. Notice that an external IP has not been associated with service/argocd-server.

Run the following command to attach a LoadBalancer to the argocd-server:
$kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
Now run the following command:
$kubectl get svc -n argocd


Figure 3-8

Take note of the external IP of argocd-server

You will see an external IP associated with argocd-server. This will allow you to connect to the argocd-server.

Before you start to use ArgoCD, you need to change the “admin” user password. You can use kubectl to read the initial secret associated with the “admin” user.

Run the following command:
$ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo


Figure 3-9

Get the password of the "admin" user

The output of this command is the password for the “admin” user.

You can now log in using a web browser. Navigate to the external IP of argocd-server (20.119.112.240 in this example) to change the password.

Log in to ArgoCD with the following command:
$argocd login 20.119.112.240


Figure 3-10

Log in to ArgoCD

You have successfully installed and connected to ArgoCD. Figure 3-11 shows a sample of the ArgoCD UI.


Figure 3-11

ArgoCD UI showing an app deployed

Now that you have learned about GitHub Actions and ArgoCD, you can move on to deploying your application and automating the CI/CD process.

3.3 Building a Simple Example with Spring Cloud Function

You will use the same example from Chapter 2, but instead of using the EmployeeConsumer interface, this example uses EmployeeSupplier. In order to do that, you need a prepopulated database. You’ll then query the database using a supplier function. You can find the code at https://github.com/banup-kubeforce/payroll-h2.
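To make the supplier idea concrete, here is a plain-Java sketch of the pattern behind EmployeeSupplier. This is not the project's actual code (which uses Spring Data JPA against the H2 table); it only illustrates that a supplier is a no-argument function that produces data when invoked, here returning the row seeded by Data.sql in-memory:

```java
import java.util.List;
import java.util.function.Supplier;

// Plain-Java sketch (not the actual payroll-h2 code) of the supplier
// pattern: a no-argument function that yields data when invoked.
public class EmployeeSupplierSketch {
    static class Employee {
        final String name, employeeId, email, salary;
        Employee(String name, String employeeId, String email, String salary) {
            this.name = name; this.employeeId = employeeId;
            this.email = email; this.salary = salary;
        }
    }

    // Mirrors the single row inserted by Data.sql; the real function
    // would query the H2 employee table via a JPA repository instead.
    static final Supplier<List<Employee>> employeeSupplier =
            () -> List.of(new Employee("banu", "001", "[email protected]", "10000"));

    public static void main(String[] args) {
        // Invoking the supplier is what a GET against the deployed function does
        System.out.println(employeeSupplier.get().get(0).name);
    }
}
```

A GET request against the deployed function simply triggers this invocation and serializes the result to JSON.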

Here are the required changes.

Step 1: Create scripts to populate the H2 database when the function starts up. Create a schema that creates the employee table. Store the script in a Schema.sql file, as shown in Listing 3-2.
DROP TABLE IF EXISTS employee;
CREATE TABLE employee (
 id INT AUTO_INCREMENT PRIMARY KEY,
 name varchar(250),
 employeeid varchar(250),
 email varchar(250),
 salary varchar(250)
);
Listing 3-2

Schema.sql

Populate the database with an INSERT statement. Then create a file called Data.sql and store the INSERT statement in it, as shown in Listing 3-3.
INSERT INTO employee (name, employeeid, email, salary) values
('banu','001','[email protected]','10000');
Listing 3-3

Data.sql

Add these two files to the resources folder of the main project, as shown in Figure 3-12.


Figure 3-12

Spring Boot project structure with data.sql and schema.sql

Modify application.properties as follows. Change spring.cloud.function.definition from employeeConsumer to employeeSupplier. This will route function calls to employeeSupplier. See Listing 3-4.
spring.cloud.function.definition=employeeSupplier
spring.datasource.url=jdbc:h2:mem:employee
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
spring.h2.console.enabled=true
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
spring.jpa.defer-datasource-initialization=true
Listing 3-4

Application.properties

Also add spring.jpa.defer-datasource-initialization=true to ensure that the data gets populated on startup.

No other code changes are required. It is important to note that the changes you made only affect the configuration file.

3.4 Setting Up a CI/CD Pipeline to Deploy to a Target Platform

As discussed in the introduction of this chapter, you’ll use two tools for the CI/CD process. GitHub Actions can be used as a tool for both CI (Continuous Integration) and CD (Continuous Deployment), while ArgoCD is a CD tool.

ArgoCD was designed for Kubernetes, so you can leverage this tool exclusively for Knative/Kubernetes deployment. You’ll use GitHub Actions for serverless environments such as Lambda.

Figure 3-13 shows the flow when deploying to serverless environments like AWS Lambda, Google Cloud Functions, and Azure Functions.

The process steps are as follows:
  1. Create code and push/commit code to GitHub.

  2. GitHub Actions senses the event trigger of the commit and starts the build and deploy process to the serverless environments defined in the actions script.

Figure 3-13

Deploying to serverless functions environments

Figure 3-14 shows the flow when deploying Spring Cloud Function to a Kubernetes environment with Knative configured.

The process steps are as follows:
  1. Create code and push/commit code to GitHub.

  2. GitHub Actions senses the event trigger of the commit, starts the build process, and deploys the created container image to Docker Hub.

  3. ArgoCD polls for changes in GitHub and triggers a “sync.” It then retrieves the container image from Docker Hub and deploys it to Knative on Kubernetes. See Figure 3-14.


Figure 3-14

Deploying to a Knative-Kubernetes environment

3.5 Deploying to the Target Platform

This section looks at the process of deploying the Spring Cloud Function to the target environments such as AWS Lambda, Google Cloud Functions, Azure Functions, and Knative on Kubernetes.

Here are the prerequisites for all the environments:
  • GitHub repository with code deployed to GitHub

  • Access and connection information to the environments

  • All Chapter 2 prerequisites for each of the environments. Refer to Chapter 2 for each environment

  • Use the successful deployments of Chapter 2 as a reference for each of the deploys

3.5.1 Deploying to AWS Lambda

Deploying to AWS Lambda requires using a SAM (Serverless Application Model) based GitHub Actions script. This section explains how to use SAM and GitHub Actions. There is no additional coding required.

Prerequisites:


Figure 3-15

Deploying Spring Cloud Function with GitHub Actions on AWS Lambda

Step 1: Spring Cloud Function code in GitHub.

Push the code to GitHub. You can pull the code from GitHub at https://github.com/banup-kubeforce/payroll-aws-h2. This code can be modified to your specs and deployed to the repository of your choice.

Step 2: Implement GitHub Actions with AWS SAM. AWS SAM (Serverless Application Model) is a framework for building serverless applications. More information can be found at https://aws.amazon.com/serverless/sam/.

AWS has a sample SAM-based actions script that is available in the GitHub marketplace that you can leverage. This script will execute the SAM commands.
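The sam package and sam deploy commands in these workflows assume a template.yaml at the project root, which the chapter does not show. A minimal sketch of what such a template could look like for a Spring Cloud Function on Lambda follows; the resource name, jar path, and sizing values are illustrative assumptions, not taken from the book's repo (the handler class is the standard Spring Cloud Function AWS adapter):

```yaml
# Hypothetical SAM template.yaml; resource name, CodeUri, and sizes
# are illustrative assumptions.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  PayrollFunction:
    Type: AWS::Serverless::Function
    Properties:
      # Generic Spring Cloud Function adapter for AWS Lambda
      Handler: org.springframework.cloud.function.adapter.aws.FunctionInvoker
      Runtime: java11
      CodeUri: target/payroll-aws-h2-0.0.1-SNAPSHOT-aws.jar
      MemorySize: 512
      Timeout: 30
```

The sam package step uploads the jar to the S3 bucket and rewrites CodeUri in packaged.yaml, which sam deploy then turns into a CloudFormation stack.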

The action code in Listing 3-5 can be created on the GitHub Actions dashboard or in your IDE.
name: CI
on:
  push:
    branches: [ "master" ]
jobs:
  build-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v2
        with:
          java-version: '11'
          distribution: 'temurin'
          cache: maven
      - uses: aws-actions/setup-sam@v1
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
      - name: Build with Maven
        run: mvn -B package --file pom.xml
      # sam package
      - run: sam package --template-file template.yaml --output-template-file packaged.yaml --s3-bucket payrollbucket
      # Run unit tests - specify unit tests here
      # sam deploy
      - run: sam deploy --no-confirm-changeset --no-fail-on-empty-changeset --stack-name payroll-aws --s3-bucket payrollbucket --capabilities CAPABILITY_IAM --region us-east-2
Listing 3-5

Workflow for Payroll Function to be Deployed on AWS Lambda

The secrets for these two elements can be stored in GitHub secrets:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
Figure 3-16 shows the place to store configuration secrets for GitHub Actions. It is under the Settings tab.


Figure 3-16

Actions secrets for credentials and configuration

Step 3: Execute GitHub Actions. Once the workflow is configured, the actions can be triggered from the GitHub console or through a code commit, as defined by the following code in the sam-pipeline.yaml file.
on:
  push:
    branches: [ "master" ]
GitHub Actions executes the steps outlined in the YAML file and deploys the function to AWS Lambda. Figure 3-17 shows a successful run.


Figure 3-17

A successful execution of GitHub Actions on AWS Lambda

Step 4: Verify that the function is up and running in AWS Lambda.


Figure 3-18

Function created after execution of GitHub Actions

Since the payroll-aws-h2 application exposes EmployeeSupplier, you can do a simple GET against the function to see whether it returns the data that was inserted into the database on Spring Cloud Function startup.


Figure 3-19

Testing if the function was successful. See the JSON response

In Figure 3-19, you can see that the test was successful and see a JSON result of what is in the database.

3.6 Deploying to GCP Cloud Functions

Deploying to GCP Cloud Functions using GitHub Actions is a bit intrusive, as you have to add a MANIFEST.MF file to the resources folder. See the code in GitHub.

Prerequisites:


Figure 3-20

GCP and GitHub Actions flow

Step 1: Spring Cloud Function code in GitHub. Push your code to GitHub. If you have cloned https://github.com/banup-kubeforce/payroll-gcp-h2.git, you have everything you need to push the code to your repository.

Step 2: Set up Cloud Functions actions.

Set up GitHub Actions to run the Cloud Functions command. You have two choices:
  • Use the deploy-cloud-functions runner

  • Use the gcloud CLI

Listing 3-6 shows the GitHub Actions file.
name: Google Cloud Functions
on:
  push:
    branches: [ "master" ]
jobs:
  build-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v2
        with:
          java-version: '11'
          distribution: 'temurin'
          cache: maven
      - name: Build with Maven
        run: mvn -B package --file pom.xml
      - id: 'auth'
        uses: 'google-github-actions/auth@v0'
        with:
          credentials_json: '${{ secrets.GCP_CREDENTIALS }}'
      - name: 'Set up Cloud SDK'
        uses: 'google-github-actions/setup-gcloud@v0'
      - name: 'Use gcloud CLI'
        run: 'gcloud functions deploy payroll-gcp --entry-point org.springframework.cloud.function.adapter.gcp.GcfJarLauncher --runtime java11 --trigger-http --source target/deploy --memory 512MB'
Listing 3-6

Workflow for Payroll Function to be Deployed on GCP Cloud Functions

Note that you will have to store your GCP_CREDENTIALS in the GitHub Secrets dashboard.

As in the previous example with AWS Lambda, note that the steps to check out the code, set up Java, and build with Maven are the same. For authentication and deployment, you use the Google Cloud CLI. The Set up Cloud SDK task downloads and sets up the gcloud CLI. You can use the same command-line script that you used when you deployed from a laptop in Chapter 2.

Step 3: Commit and push code to trigger the GitHub Actions. This trigger is defined in the actions code. In this example, any push or commit to the “master” branch will trigger the GitHub Actions.
on:
  push:
    branches: [ "master" ]
This can be done on the GitHub website or in the IDE. You can go to GitHub and commit a change by making a simple modification on the “master” branch. This will start the GitHub Actions flow.


Figure 3-21

GitHub Actions gets triggered by a code commit

Once the actions successfully complete the job, you can go to the Google Cloud Functions dashboard and test the function. Again, you execute a simple GET against the EmployeeSupplier function.

Step 4: Test the function.

Before you test the function, ensure that you pick the function to be invoked from an unauthenticated device such as your laptop. Once you’re done testing, remove the privilege to avoid unnecessary invocations.


Figure 3-22

Allow unauthenticated set for Payroll-gcp-h2

You can go to the console of your function in the Google Cloud Functions dashboard and execute the test. You do not have to provide any input; simply click the Test the Function button to execute the test. You will see the output of EmployeeSupplier in the Output section, as shown in Figure 3-23.


Figure 3-23

Successful output of the Spring Cloud Function test in GCP Cloud Functions

3.7 Deploying to Azure Functions

Spring Cloud Function on Azure Functions requires a bit of tweaking, as you learned in Chapter 2. This is because the configuration is not externalized, as it is with AWS Lambda or GCP Cloud Functions. This does not mean that you cannot deploy easily, but you have to understand how Azure Functions interprets and executes Spring Cloud Function code. See Chapter 2 for discussions around this issue; make sure that you execute and test locally before pushing to the Azure cloud.

Step 1: Spring Cloud Function code in GitHub. Push your code to GitHub. If you cloned https://github.com/banup-kubeforce/payroll-azure-h2.git, you have everything you need to push the code to your repository.


Figure 3-24

Flow of Spring Cloud Function deployment on Azure

Step 2: Set up Azure Function App Actions. See Listing 3-7.
name: Deploy Java project to Azure Function App
on:
  push:
    branches: [ "master" ]
# CONFIGURATION
# For help, go to https://github.com/Azure/Actions
#
# 1. Set up the following secrets in your repository:
#   AZURE_FUNCTIONAPP_PUBLISH_PROFILE
#
# 2. Change these variables for your configuration:
env:
  AZURE_FUNCTIONAPP_NAME: payroll-kubeforce-new      # set this to your function app name on Azure
  POM_XML_DIRECTORY: '.'                     # set this to the directory which contains pom.xml file
  POM_FUNCTIONAPP_NAME: payroll-kubeforce-new       # set this to the function app name in your local development environment
  JAVA_VERSION: '11'                      # set this to the java version to use
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    environment: dev
    steps:
    - name: 'Checkout GitHub Action'
      uses: actions/checkout@master
    - name: Setup Java Sdk ${{ env.JAVA_VERSION }}
      uses: actions/setup-java@v1
      with:
        java-version: ${{ env.JAVA_VERSION }}
    - name: 'Restore Project Dependencies Using Mvn'
      shell: bash
      run: |
        pushd './${{ env.POM_XML_DIRECTORY }}'
        mvn clean package
        popd
    - name: 'Run Azure Functions Action'
      uses: Azure/functions-action@v1
      id: fa
      with:
        app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }}
        package: './${{ env.POM_XML_DIRECTORY }}/target/azure-functions/${{ env.POM_FUNCTIONAPP_NAME }}'
        publish-profile: ${{ secrets.AZURE_FUNCTIONAPP_PUBLISH_PROFILE }}
Listing 3-7

Workflow for Payroll Function to be Deployed on Azure Functions

Step 3: Execute GitHub Actions. GitHub Actions is triggered by this setting in the actions file:
on:
  push:
    branches: [ "master" ]
Any commit or push to the “master” branch will trigger an execution of GitHub Actions; see Figure 3-25.


Figure 3-25

Successful deployment of Payroll Spring Cloud Function using GitHub Actions

After successfully deploying using GitHub Actions, you need to verify the deployment in the Azure Functions dashboard; see Figure 3-26.


Figure 3-26

Azure Functions dashboard showing the function employeeSupplier has been deployed

Click the employeeSupplier link to get to Figure 3-27.


Figure 3-27

Click the Get Function URL on the employeeSupplier dashboard


Figure 3-28

Get the URL of the function

The URL of the function is https://payroll-kubeforce.azurewebsites.net/api/employeeSupplier. Use this URL for testing.

Step 4: Testing. You will use an external testing tool to see if the deployed functions work. The tool you use here is Postman.

You can simply use a GET operation to test, as shown in Figure 3-29.


Figure 3-29

Successful test result with Postman

This completes the deployment of Spring Cloud Function on Azure Functions using GitHub Actions.

3.8 Deploying to Knative on Kubernetes

The CI/CD process for deploying Spring Cloud Function on Knative is similar for every Kubernetes cluster; the only change is the cluster name. This section uses ArgoCD (http://argoproj.github.io) for CD, even though you can achieve the same result with GitHub Actions. I found GitHub Actions a bit code-intensive for this; I also wanted to separate the CD process and have a good visual tool that shows the deployment. ArgoCD provides a good visual interface.

To have a common image repository for all the cloud environments, you'll use Docker Hub in this example. Docker Hub provides a good interface for managing images, and it is popular with developers. If you use ECR, GCR, or ACR instead, you risk vendor lock-in.

The prerequisites for deploying to any Kubernetes platform are the same:
  • Get the code from GitHub. You can use https://github.com/banup-kubeforce/payroll-h2.git or push your custom code

  • A Docker Hub account

  • A Dockerfile to push to Docker Hub

  • Actions code in your GitHub project

  • Access to a Kubernetes cluster with Knative configured

  • ArgoCD up and running

  • An app in ArgoCD that is configured to poll the GitHub project
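One of the prerequisites above is a Dockerfile, whose contents the chapter does not show. A minimal sketch for packaging the Spring Boot fat jar could look like the following; the jar name and base image are illustrative assumptions, not taken from the book's repo:

```dockerfile
# Hypothetical Dockerfile; jar name and base image are illustrative
FROM eclipse-temurin:11-jre
COPY target/payroll-h2-0.0.1-SNAPSHOT.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

The docker/build-push-action step in the workflow builds this image and pushes it to Docker Hub under the tags generated by the metadata action.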


Figure 3-30

Deployment flow for Spring Cloud Function with GitHub Actions and ArgoCD

Once you have the prerequisites set up, you can begin configuring an automated CI/CD pipeline. For this example implementation, you’ll use the code from GitHub at https://github.com/banup-kubeforce/payroll-h2.git.

Step 1: Spring Cloud Function code in GitHub. Push your code to GitHub. You can use the code for payroll-h2 in GitHub.

Step 2: Create a GitHub Action. Listing 3-8 shows the code for the action.
name: ci
on:
  push:
    branches:
      - 'main'
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v2
        with:
          java-version: '8'
          distribution: 'temurin'
          cache: maven
      -
        name: Login to DockerHub
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
        with:
          images: banupkubeforce/springcloudfunctions
      - name: Build with Maven
        run: mvn -B package --file pom.xml
      - name: Build and push Docker image
        uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
        with:
          context: .
          file: ./DockerFile
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
Listing 3-8

Workflow for Payroll Function Image to be Pushed to Docker Hub

This code creates a Docker image and pushes it to Docker Hub. You can store the username and password as secrets on the GitHub site (see Figure 3-31).


Figure 3-31

GitHub Secrets store for configurations and credentials

username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}

Step 3: Execute GitHub Actions to build and push the Docker image.

The execution of GitHub Actions can be triggered by a push/commit. The trigger is defined in the GitHub Actions YAML file:
on:
  push:
    branches:
      - 'main'


Figure 3-32

Successful run of GitHub Actions

Step 4: Configure ArgoCD.

Follow the steps outlined in the introduction of this chapter for ArgoCD. You need to connect to your cluster on your local machine before executing this command; see Listing 3-9.
$ argocd app create payroll-h2 --repo https://github.com/banup-kubeforce/payroll-h2.git --path knative --dest-server https://kubernetes.default.svc --dest-namespace default --upsert
Listing 3-9

ArgoCD Script to Create a Project and Point to the payroll-h2 Repo

This will create app payroll-h2 in ArgoCD, as shown in Figure 3-33.
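The --path knative argument points ArgoCD at a directory of Kubernetes manifests in the repo. The chapter does not show those manifests; a minimal sketch of a Knative Service such a directory might contain follows, reusing the image name from the GitHub Actions workflow (the tag and namespace are illustrative assumptions):

```yaml
# Hypothetical Knative Service manifest under the repo's knative/ path;
# image tag and namespace are illustrative.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: payroll-h2
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: banupkubeforce/springcloudfunctions:latest
```

On each sync, ArgoCD applies whatever is in this path, so updating the image tag in Git is what rolls out a new revision.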


Figure 3-33

Application payroll-h2 deployed on ArgoCD

Step 5: Sync the project in ArgoCD.

Now that you have created the app and it is pointing to the GitHub repository, make sure you have a connection to the repo, as shown in Figure 3-34. I connected to the repo using HTTPS. This will allow the app to poll for changes and trigger the flow to push the Docker image to the specified Kubernetes environment.


Figure 3-34

Connect payroll-h2 to the repo using HTTPS

You can also run a deployment manually by clicking SYNC, as shown in Figure 3-35.


Figure 3-35

The SYNC button on payroll-h2 app in ArgoCD

Figure 3-36 shows a successful sync process in ArgoCD.


Figure 3-36

Successful run of the sync showing the deployment flow

Step 6: Check if the function has been deployed. Navigate to the Kubernetes dashboard on the Azure Portal and verify that the service has been deployed. See Figure 3-37.


Figure 3-37

Successful deployment of Spring Cloud Function (payroll-h2) on Azure

Step 7: Testing. The best way to get the test URL is to connect to the cluster via the command line, as explained in Chapter 2.

Run $kn service list to get the URL for testing, as shown in Figure 3-38.


Figure 3-38

URL for testing payroll-h2

You can use Postman for testing. See Figure 3-39.


Figure 3-39

Successful execution of test against payroll-h2 deployed on AKS Knative

This completes the successful deployment using GitHub Actions and ArgoCD.

3.9 Summary

In this chapter, you learned how to set up some CI/CD tools to create an automated deployment for your Spring Cloud Function.

You learned how to trigger the deployment of functions on Lambda, Google Cloud Functions, and Azure Functions.

You also learned that you can combine the build of Docker images stored in Docker hub and ArgoCD to deploy the image to any Kubernetes cluster that is running Knative.

If you want to achieve “write-once deploy-anywhere,” you have to look at using Kubernetes and Knative. Spring Cloud Function is really a portable function.
