This is a fully hands-on chapter in which we will look at the end-to-end automation of an application and all its dependencies. The dependencies involve setting up the project repository, creating the Continuous Integration and Continuous Deployment (CI/CD) pipelines, provisioning dependent infrastructure resources, and so on. You will see the real power of Crossplane in automating every possible step, starting from the initial repository setup. We will go through the hands-on journey from the perspective of three personas: the platform developer creating the required XR/claim APIs, the application operator configuring the application deployment using the XR/claim, and the developer contributing to the application development. The platform developer persona is the key to the whole journey, so most of the content in this chapter will be from their perspective. Whenever required, we will explicitly mention the other personas. The hands-on journey will cover application, services, and infrastructure: all three aspects of automation with Crossplane.
The following are the topics covered in this chapter:
We will start with the requirement from the product team to explore the ways to automate.
We will start our high-level requirement story from the perspective of an imaginary organization, X. They are planning to develop a new e-commerce website named product-a. It has many modules, each functional at a different point in the customer journey, for example, cart, payment, and customer support. Each module requires independent release and scaling capabilities while sharing a standard website theme and a unified experience. The product architecture group has recommended a micro-frontend architecture with a separate deployment for each module in Kubernetes. They also suggested that an individual team develop the website framework, shared UI components, and cross-cutting concerns in the form of a library. The independent module teams can use these dependent libraries to build their features. The product team has recently heard about Crossplane and its ability to automate applications from end to end. They want to use the opportunity of developing a greenfield product to experiment with Crossplane and set up a high-velocity, reliable product development practice. They have reached out to the platform team, requesting help to develop a proof of concept (POC). The POC project will be the scope of our hands-on journey in this chapter. The following diagram represents what the product development team wants to achieve:
Information
Please note that both the requirements and solutions discussed in the chapter are not exhaustive. Our attempt here is to look for ways to approach automation from end to end, covering the entire application life cycle and its dependencies.
The following section explores one possible solution option from the perspective of a platform engineer using Crossplane.
We will approach the solution in three steps:
Information
We will create a template GitLab project with the dependent library and kick-start the micro-frontend development using a repository cloned from the base template repository.
The following diagram represents the complete solution:
The following stages cover the high-level solution in the preceding diagram in a bit more detail:
The rest of the chapter will investigate the details of how we configure Crossplane and implement the solution discussed. The following section will deep dive into the control plane setup required to implement the use case.
Information
The complete example is available at https://github.com/PacktPublishing/End-to-End-Automation-with-Kubernetes-and-Crossplane/tree/main/Chapter10/Hands-on-example.
This is the stage to install the required components into the Crossplane cluster. We will establish the necessary providers and respective configurations. The first step will be to install the GCP provider.
This is the same step we took in Chapter 3, Automating Infrastructure with Crossplane, with one deviation: how we create and use the GCP provider configuration. It is good practice to have an individual provider configuration for each product team, as it enhances security, auditing, policy compliance, governance, and so on when using the XR/claim APIs. Each product team and the platform team should create a different provider configuration referring to a separate GCP service account secret. The provider configuration will be named after the product (product-a), and a new namespace will be created with the same name. The compositions will be developed to dynamically refer to the provider configuration based on the claim namespace. This is one of the multi-tenancy patterns we discussed in Chapter 7, Extending and Scaling Crossplane. To finish the GCP setup, do the following:
The preceding steps will ensure that the GCP provider is fully set up. In the following section, we will look at the GitLab provider.
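For reference, the per-product GCP ProviderConfig created in the preceding steps might look like the following sketch. The project ID placeholder and the secret name are assumptions; refer to the book's repository for the exact manifest:

```yaml
apiVersion: gcp.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  # Named after the product so compositions can derive it
  # from the claim namespace
  name: product-a
spec:
  projectID: <YOUR_GCP_PROJECT_ID>
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      # A separate service account secret per product team
      name: gcp-credentials-product-a
      key: credentials
```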
We will use the GitLab provider to manage the micro-frontend repository and CI pipeline. The free account provided by GitLab is good enough to continue with our experiment. The provider setup is done in three steps:
# Create Kubernetes secret with the access token
kubectl create secret generic gitlab-credentials -n crossplane-system --from-literal=gitlab-credentials=<YOUR_ACCESS_TOKEN>
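With the secret in place, a ProviderConfig for the GitLab provider can reference it. The following is a sketch; check the exact schema against the provider version you install:

```yaml
apiVersion: gitlab.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  # Product-specific configuration, as with the GCP provider
  name: product-a
spec:
  baseURL: https://gitlab.com
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: gitlab-credentials
      key: gitlab-credentials
```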
We are done with the GitLab provider setup. The following section will look at the Helm and Kubernetes provider setup.
Both the Helm and Kubernetes providers help configure a remote Kubernetes cluster (or the local one); in our case, it is the remote Kubernetes cluster created for product-a. Both providers require credentials to access the remote cluster. The product-specific provider configuration will be created automatically for the remote cluster when we provision the cluster with our XR API. We will look at more details on this in the next section. For now, we will only install the providers. Execute Helm-Provider.yaml and k8s-Provider.yaml to install them. Refer to the following screenshot showing the installation of all the providers and their respective configuration setup:
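For illustration, the Helm ProviderConfig that our composition generates for the remote cluster follows this shape. This is a sketch based on the patches we will see later; the kubeconfig key name is an assumption:

```yaml
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  # Patched from the claim name, for example,
  # product-a-helm-provider-config
  name: product-a-helm-provider-config
spec:
  credentials:
    source: Secret
    secretRef:
      # Patched from the claim namespace and name
      namespace: product-a
      name: product-a-secret
      # The cluster connection secret stores the kubeconfig
      # under this key
      key: kubeconfig
```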
To run the setup yourself, use the following commands:
# GCP Provider
kubectl apply -f Step-1-ProviderSetup/Platform-OPS/GCP
kubectl apply -f Step-1-ProviderSetup/Platform-OPS/GCP/product-a
# Helm Provider
kubectl apply -f Step-1-ProviderSetup/Platform-OPS/Helm
# GitLab Provider
kubectl apply -f Step-1-ProviderSetup/Platform-OPS/Gitlab
kubectl apply -f Step-1-ProviderSetup/Platform-OPS/Gitlab/product-a
# Kubernetes Provider
kubectl apply -f Step-1-ProviderSetup/Platform-OPS/k8s
This takes us to the end of configuring the Crossplane control plane. All these activities are meant to be done by the platform team. In the following section, we will deep dive into setting up a remote Kubernetes cluster as a deployment environment for product-a.
In this step, we will automate the complete Kubernetes cluster creation and the configuration of cross-cutting concerns. We will develop an XR/claim API that does the following:
Let’s look at the XRD and composition to understand the API in detail (refer to the XRD and composition in the book’s GitHub repository). We will capture two mandatory parameters (node count and machine size). The size parameter takes either BIG or SMALL as an enum value. Inside the composition, we have composed five resources. The following is the list of resources and their purpose:
patches:
- fromFieldPath: spec.claimRef.namespace
  toFieldPath: spec.providerConfigRef.name
- fromFieldPath: spec.claimRef.name
  toFieldPath: metadata.name
- fromFieldPath: spec.claimRef.namespace
  toFieldPath: spec.writeConnectionSecretToRef.namespace
- fromFieldPath: spec.claimRef.name
  toFieldPath: spec.writeConnectionSecretToRef.name
  transforms:
  - type: string
    string:
      fmt: "%s-secret"
# Patches and readiness check from the Helm provider config
patches:
- fromFieldPath: spec.claimRef.namespace
  toFieldPath: spec.credentials.secretRef.namespace
- fromFieldPath: spec.claimRef.name
  toFieldPath: spec.credentials.secretRef.name
  transforms:
  - type: string
    string:
      fmt: "%s-secret"
- fromFieldPath: spec.claimRef.name
  toFieldPath: metadata.name
  transforms:
  - type: string
    string:
      fmt: "%s-helm-provider-config"
readinessChecks:
- type: None
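The two mandatory parameters mentioned earlier would be captured in the XRD schema roughly as follows. This is a sketch with assumed field names; refer to the book's repository for the actual XRD:

```yaml
# Excerpt of the cluster XRD version schema (field names assumed)
versions:
- name: v1alpha1
  served: true
  referenceable: true
  schema:
    openAPIV3Schema:
      type: object
      properties:
        spec:
          type: object
          properties:
            parameters:
              type: object
              properties:
                # Number of worker nodes in the node pool
                nodes:
                  type: integer
                # Machine size, mapped to a GCP machine type
                # inside the composition
                size:
                  type: string
                  enum:
                  - BIG
                  - SMALL
              required:
              - nodes
              - size
          required:
          - parameters
```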
Information
Note that the cluster creation XR/claim API example discussed here is not production ready. You should install other cross-cutting concerns using the Helm or Kubernetes provider. We have also omitted many fine-grained cluster configurations. Refer to https://github.com/upbound/platform-ref-gcp for a more detailed cluster configuration.
To establish and validate our cluster API into the control plane, execute the following commands:
# Install GCP Cluster XR/Claim API
kubectl apply -f Step-2-CreateProductTeamsKubernetesCluster/Platform-OPS
# Validate the health of installed API
kubectl get xrd
kubectl get composition
The platform team that manages the control plane will do the preceding operations. Refer to the following screenshot where the APIs are established:
As a next step, the application operator close to the product team can create the cluster using a claim configuration. The application operator will create a GKE cluster with the name product-a using the following commands:
# Create the GCP Cluster using a Claim object
kubectl apply -f Step-2-CreateProductTeamsKubernetesCluster/Application-OPS
# Validate the health of the GKE cluster and the Argo CD
kubectl get GCPCluster -n product-a
kubectl get release
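The claim applied by the preceding commands might look like the following sketch; the parameter names are assumed from the XRD description, and the actual claim is in the repository:

```yaml
apiVersion: learn.unified.devops/v1alpha1
kind: GCPCluster
metadata:
  name: product-a
  # The claim namespace selects the product-specific
  # provider configuration
  namespace: product-a
spec:
  parameters:
    nodes: 2
    size: SMALL
```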
Refer to the following screenshot where the GKE cluster and Helm releases are established:
We are all good with the cluster creation. We will discuss the next stage to onboard the micro-frontend repository in the following section.
At this stage, an XR/claim is developed to clone the template repository to create the new micro-frontend repository and CI pipeline. We can do this in two steps. First, we will configure GitLab, and then we’ll develop an XR/claim API.
We need to make the following one-time configurations in GitLab before we start the XR/claim API development:
Tip
Note that the group creation and user onboarding into the group can be automated. Consider doing that with Crossplane. An example of this is available at https://github.com/crossplane-contrib/provider-gitlab/tree/master/examples/groups.
We have all the components to develop our project onboarding XR/claim API. The following section will look at the details of the onboarding API.
If we look at the XRD (gitproject-xrd.yaml), we take in two parameters as inputs. The template’s name refers to the template repository from which we should be cloning, and the group ID will determine the GitLab group under which the repository will be created. You can get the group ID from the GitLab group details page or group settings page. These two parameters make the API generic, so it can be used across the organization. The newly created micro-frontend repo URL and an access token to work with the repository will be stored as connection Secrets. We can use these with Argo CD to read the repo. Our example doesn’t require the access token as the repository is public. It will be a simple composition to map the template name with a template URL, clone the repository into the specified group, and copy back the repository details into the Secret. The repository’s name will be referred to from the name of the claim object. To establish and validate the onboarding API into the control plane, execute the following commands:
# Install the onboarding API
kubectl apply -f Step-3-GitProjectOnboarding/Platform-OPS
# Validate the health of installed API
kubectl get xrd
kubectl get composition
Refer to the following screenshot, where the APIs are established:
As a final step in the onboarding stage, the application operator can onboard the repository and CI pipeline using a Claim configuration. The application operator will create a repository with the name micro-frontend-one using the following commands:
# Create claim and validate
kubectl apply -f Step-3-GitProjectOnboarding/Application-OPS
kubectl get gitproject -n product-a
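For reference, the claim in Step-3-GitProjectOnboarding/Application-OPS would look roughly like the following sketch; the template name and group ID values here are hypothetical:

```yaml
apiVersion: learn.unified.devops/v1alpha1
kind: GitProject
metadata:
  # The new repository takes the claim's name
  name: micro-frontend-one
  namespace: product-a
spec:
  parameters:
    # Hypothetical template name, mapped to a template
    # repository URL inside the composition
    template: micro-frontend-template
    # GitLab group ID from the group settings page
    groupId: "12345678"
```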
Refer to the following screenshot where the claims are created in GitLab:
You can go to the CI/CD section of the new repository and run the CI pipeline to see that the Docker images are created and pushed to Docker Hub. Developers can now make changes to the repository, and any new commit will automatically trigger the GitLab CI pipeline. In the following section, we will investigate the final stage, setting up CD and provisioning other infrastructure dependencies.
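The CI pipeline itself lives in the template repository and is not reproduced in this chapter. A hypothetical .gitlab-ci.yml producing the behavior described (an image named after the repository, pushed to Docker Hub) could look like this; the DOCKER_USER and DOCKER_PASSWORD CI/CD variables are assumptions:

```yaml
stages:
- build

build-image:
  stage: build
  image: docker:20.10
  services:
  - docker:20.10-dind
  script:
  # Assumed CI/CD variables configured in the GitLab project/group
  - docker login -u "$DOCKER_USER" -p "$DOCKER_PASSWORD"
  # CI_PROJECT_NAME is the repository name, so the image name
  # matches the claim name used later by the CD setup
  - docker build -t "$DOCKER_USER/$CI_PROJECT_NAME:latest" .
  - docker push "$DOCKER_USER/$CI_PROJECT_NAME:latest"
```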
The final stage is to automate the deployment dependencies for the micro-frontend. Automating the deployment dependencies means taking care of two aspects:
We will build a nested XR to satisfy the preceding requirement. XWebApplication will be the parent API, and XGCPdb will be the nested inner XR. The parent API captures the product Git group and database size as input. The micro-frontend name is another input, derived from the name of the claim. The parent composition will compose the Argo CD config and an XGCPdb resource (the inner XR). Refer to the application and database folders in our example repo to go through the XRD and composition of both XRs. The following are a few code snippets that are key to understanding the solution. In the Argo CD object, the following is the patch for the repository URL. We construct the GitLab URL from the group name and claim name (repository name). Look at the claim to see the actual input (Claim-Application.yaml). The following is the repository URL patch code:
- type: CombineFromComposite
  toFieldPath: spec.forProvider.manifest.spec.source.repoURL
  combine:
    variables:
    - fromFieldPath: spec.parameters.productGitGroup
    - fromFieldPath: spec.claimRef.name
    strategy: string
    string:
      fmt: "https://gitlab.com/%s/%s.git"
We dynamically patch the Kubernetes provider config name using a predictable naming strategy. The following is the code snippet for this:
- fromFieldPath: spec.claimRef.namespace
  toFieldPath: spec.providerConfigRef.name
  transforms:
  - type: string
    string:
      fmt: "%s-cluster-k8s-provider-config"
Another important patch is to bind the Docker image name dynamically. In our CI pipeline, we use the repository name as the Docker image name. As the claim name and the repository name are the same, we can use the claim name to dynamically construct the Docker image name. The following is the patch code snippet for this:
- fromFieldPath: spec.claimRef.name
  toFieldPath: spec.forProvider.manifest.spec.source.helm.parameters[0].value
  transforms:
  - type: string
    string:
      fmt: "arunramakani/%s"
source and destination are two key sections under the Argo CD config. This configuration provides information about the source of the Helm chart and how to deploy this in the destination Kubernetes cluster. The following is the code snippet for this:
source:
  # We just saw how this is patched
  repoURL: # To be patched
  # The branch in which Argo CD looks for changes
  # When the code is ready for release, move it to this branch
  targetRevision: HEAD
  # Folder in the repository in which Argo CD will look for automatic sync
  path: template-helm
  helm:
    # We will patch our claim name here
    releaseName: # To be patched
    parameters:
    - name: "image.repository"
      # We just saw how this is patched
      value: # To be patched
    - name: "image.tag"
      value: latest
    - name: "service.port"
      value: "3000"
destination:
  # Indicates that the target Kubernetes cluster is the same
  # local cluster in which Argo CD is running
  server: https://kubernetes.default.svc
  # Namespace in which the application is deployed
  namespace: # To be patched
To establish and validate our APIs in the control plane, execute the following commands:
kubectl apply -f Step-4-WebApplication/Platform-OPS/Application
kubectl apply -f Step-4-WebApplication/Platform-OPS/DB
kubectl get xrd
kubectl get composition
Refer to the following screenshot, where the APIs are established and validated:
Tip
Note that we did not configure any access token for Argo CD to access GitLab as it is a public repository. We will have private repositories in real life, and a token is required. Refer to https://argo-cd.readthedocs.io/en/release-1.8/operator-manual/declarative-setup/#repositories to see how to set up an access token. Again, this can be automated as a part of repository onboarding.
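For recent Argo CD versions, a repository credential is a labeled Secret along the lines of the following sketch (Argo CD 1.8, which the linked documentation covers, configures repositories in the argocd-cm ConfigMap instead):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: micro-frontend-one-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://gitlab.com/unified-devops-project-x/micro-frontend-one.git
  # GitLab accepts any non-empty username with a personal access token
  username: git
  password: <YOUR_ACCESS_TOKEN>
```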
As a final step in the application deployment automation stage, the application operator can provision the database as an infrastructure dependency and configure the CD setup using the following claim configuration:
apiVersion: learn.unified.devops/v1alpha1
kind: WebApplication
metadata:
  # Use the same name as the repository
  name: micro-frontend-one
  namespace: product-a
spec:
  compositionRef:
    name: web-application-dev
  parameters:
    # Group name in GitLab for product-a
    productGitGroup: unified-devops-project-x
    databaseSize: SMALL
The application operator will use the following commands:
# Apply the claim
kubectl apply -f Step-4-WebApplication/Application-OPS
# Verify the application status, including the database and ArgoCD config
kubectl get webapplications -n product-a
kubectl get XGCPdb
kubectl get object
Refer to the following screenshot, where the application infrastructure dependencies and CD configurations are provisioned:
Tip
We have used Argo CD and Helm chart deployment to handle application automation. We can replace Helm with KubeVela, combine Helm/KubeVela with Kustomize, or even use plain Kubernetes objects as required by your team. Even Argo CD can be replaced with other GitOps tools, such as Flux.
This takes us to the end of the hands-on journey to automate the application from end to end. Our micro-frontend example and its dependent database are up and running now. In the following section of this chapter, we will discuss the reasoning behind our XR/claim API boundaries.
We divided the end-to-end automation into four stages. We can ignore stage one, as it is about preparing the Crossplane control plane itself. It's essential to understand why we split the remaining work into three stages with four XR/claim APIs. The following are the ideas behind our API boundaries:
Tip
The repository URL and access token created with the onboarding API are required in the application API to set up CD. The onboarding API is a one-time activity, while the application API is used in every environment. If we have a different Crossplane for every environment (production, staging, and development), sharing the credentials across environments in an automated way could be challenging. Consider using an external key vault to sync the repository details from the onboarding API. Other Crossplane environments can synchronize these Secrets using tools such as External Secrets (https://external-secrets.io/v0.5.3/).
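As a sketch of that idea, an ExternalSecret in another Crossplane environment could sync the repository details from a shared vault; the SecretStore name and vault path below are hypothetical:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: micro-frontend-one-repo
  namespace: product-a
spec:
  refreshInterval: 1h
  secretStoreRef:
    # Hypothetical SecretStore pointing at the shared key vault
    name: vault-backend
    kind: SecretStore
  target:
    name: micro-frontend-one-repo
  data:
  - secretKey: repoURL
    remoteRef:
      # Hypothetical path where the onboarding environment
      # published the repository details
      key: product-a/micro-frontend-one
      property: repoURL
```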
This chapter discussed one of the approaches to handling the end-to-end automation of applications, infrastructure, and services. There are multiple patterns to approach end-to-end control plane-based automation using the ways we learned throughout the book. I can’t wait to see what unique ways you come up with. This chapter takes us to the end of learning Crossplane concepts and patterns and our hands-on journey.
In the final chapter, we will look at some inspirations to run a platform as a product. You will learn essential engineering practices that make our Crossplane platform team successful.