In the previous chapter, you got an overview of implementing GitOps workflows with Argo CD recipes. Argo CD is a popular and influential open source project that supports both simple use cases and more advanced ones. In this chapter, we discuss the topics you will need as you move forward in your GitOps journey and have to manage security, automation, and advanced deployment models for multicluster scenarios.
Security is a critical aspect of automation and DevOps. DevSecOps is the name for an approach where security is a shared responsibility throughout the entire IT lifecycle. Furthermore, the DevSecOps Manifesto specifies security as code in order to operate and contribute value with less friction. This goes in the same direction as GitOps principles, where everything is declarative.
On the other hand, this also poses the question of how to avoid storing unencrypted plain-text credentials in Git. As stated in the book Path to GitOps by Christian Hernandez, Argo CD currently provides two patterns to manage security in GitOps workflows:
Storing encrypted secrets in Git, such as with a Sealed Secret (see Recipe 8.1)
Storing secrets in external services or vaults, then storing only the reference to such secrets in Git (see Recipe 8.2)
The chapter then moves to advanced deployment techniques, showing how to manage webhooks with Argo CD (see Recipe 8.3) and with ApplicationSets (see Recipe 8.4). ApplicationSets is a component of Argo CD that allows managing deployments of many applications, repositories, or clusters from a single Kubernetes resource. In essence, it is a templating system for GitOps applications, ready to be deployed and synced to multiple Kubernetes clusters (see Recipe 8.5).
Last but not least, the book ends with a recipe on Progressive Delivery for Kubernetes with Argo Rollouts (Recipe 8.6), useful for deploying applications with advanced deployment techniques such as blue-green or canary.
Sealed Secrets is an open source project by Bitnami used to encrypt a Kubernetes Secret into a SealedSecret, a Kubernetes Custom Resource representing an encrypted object that is safe to store in Git.
Sealed Secrets uses public-key cryptography and consists of two main components:
A Kubernetes controller that holds the private and public keys used to decrypt and encrypt secrets and is responsible for reconciliation. The controller also supports automatic rotation of the private key and key expiration management in order to enforce re-encryption of secrets.
kubeseal, a CLI used by developers to encrypt their secrets before committing them to a Git repository.
The SealedSecret object is encrypted and decrypted only by the SealedSecret controller running in the target Kubernetes cluster. This operation is exclusive to this component; nobody else can decrypt the object. The kubeseal CLI allows the developer to take a normal Kubernetes Secret resource and convert it to a SealedSecret resource definition, as shown in Figure 8-1.
In your Kubernetes cluster with Argo CD, you can install the kubeseal CLI for your operating system from the GitHub project's releases. At the time of writing this book, we are using version 0.18.2.
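For example, on macOS you can install it with Homebrew, and on Linux you can download the release artifact; this is a sketch, as asset names vary between releases, so check the project's releases page:

# macOS (Homebrew)
brew install kubeseal

# Linux (adjust version and architecture as needed)
curl -OL https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.18.2/kubeseal-0.18.2-linux-amd64.tar.gz
tar -xvzf kubeseal-0.18.2-linux-amd64.tar.gz kubeseal
sudo install -m 755 kubeseal /usr/local/bin/kubeseal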
After you install the CLI, you can install the controller as follows:
kubectl create -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.18.2/controller.yaml
You should have output similar to the following:
serviceaccount/sealed-secrets-controller created
deployment.apps/sealed-secrets-controller created
customresourcedefinition.apiextensions.k8s.io/sealedsecrets.bitnami.com created
service/sealed-secrets-controller created
rolebinding.rbac.authorization.k8s.io/sealed-secrets-controller created
rolebinding.rbac.authorization.k8s.io/sealed-secrets-service-proxier created
role.rbac.authorization.k8s.io/sealed-secrets-service-proxier created
role.rbac.authorization.k8s.io/sealed-secrets-key-admin created
clusterrolebinding.rbac.authorization.k8s.io/sealed-secrets-controller created
clusterrole.rbac.authorization.k8s.io/secrets-unsealer created
As an example, let’s create a Secret for the Pac-Man game deployed in Chapter 5:
kubectl create secret generic pacman-secret \
  --from-literal=user=pacman \
  --from-literal=pass=pacman
You should have the following output:
secret/pacman-secret created
And here you can see the YAML representation:
kubectl get secret pacman-secret -o yaml
apiVersion: v1
data:
  pass: cGFjbWFu
  user: cGFjbWFu
kind: Secret
metadata:
  name: pacman-secret
  namespace: default
type: Opaque
Now, you can convert the Secret into a SealedSecret in this way:

kubectl get secret pacman-secret -o yaml | kubeseal -o yaml > pacman-sealedsecret.yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: pacman-secret
  namespace: default
spec:
  encryptedData:
    pass: AgBJR1AgZ5Gu5NOVsG1E8SKBcdB3QSDdzZka3RRYuWV7z8g7ccQ0dGc1suVOP8wX/ZpPmIMp8+urPYG62k4EZRUjuu/Vg2E1nSbsGBh9eKu3NaO6tGSF3eGk6PzN6XtRhDeER4u7MG5pj/+FXRAKcy8Z6RfzbVEGq/QJQ4z0ecSNdJmG07ERMm1Q+lPNGvph2Svx8aCgFLqRsdLhFyvwbTyB3XnmFHrPr+2DynxeN8XVMoMkRYXgVc6GAoxUK7CnC3Elpuy7lIdPwc5QBx9kUVfra83LX8/KxeaJwyCqvscIGjtcxUtpTpF5jm1t1DSRRNbc4m+7pTwTmnRiUuaMVeujaBco4521yTkh5iEPjnjvUt+VzK01NVoeNunqIazp15rFwTvmiQ5PAtbiUXpT733zCr60QBgSxPg31vw98+u+RcIHvaMIoDCqaXxUdcn2JkUF+bZXtxNmIRTAiQVQ1vEPmrZxpvZcUh/PPC4L/RFWrQWnOzKRyqLq9wRoSLPbKyvMXnaxH0v3USGIktmtJlGjlXoW/i+HIoSeMFS0mUAzOF5M5gweOhtxKGh3Y74ZDn5PbVA/9kbkuWgvPNGDZL924Dm6AyM5goHECr/RRTm1e22K9BfPASARZuGA6paqb9h1XEqyqesZgM0R8PLiyLuu+tpqydR0SiYLc5VltdjzpIyyy9Xmw6Aa3/4SB+4tSwXSUUrB5yc=
    user: AgBhYDZQzOwinetPceZL897aibTYp4QPGFvP6ZhDyuUAxOWXBQ7jBA3KPUqLvP8vBcxLAcS7HpKcDSgCdi47D2WhShdBR4jWJufwKmR3j+ayTdw72t3ALpQhTYI0iMYTiNdR0/o3vf0jeNMt/oWCRsifqBxZaIShE53rAFEjEA6D7CuCDXu8BHk1DpSr79d5Au4puzpHVODh+v1T+Yef3k7DUoSnbYEh3CvuRweiuq5lY8G0oob28j38wdyxm3GIrexa+M/ZIdO1hxZ6jz4edv6ejdZfmQNdru3c6lmljWwcO+0Ue0MqFi4ZF/YNUsiojI+781n1m3K/giKcyPLn0skD7DyeKPoukoN6W5P71OuFSkF+VgIeejDaxuA7bK3PEaUgv79KFC9aEEnBr/7op7HY7X6aMDahmLUc/+zDhfzQvwnC2wcj4B8M2OBFa2ic2PmGzrIWhlBbs1OgnpehtGSETq+YRDH0alWOdFBq1U8qn6QA8Iw6ewu8GTele3zlPLaADi5O6LrJbIZNlY0+PutWfjs9ScVVEJy+I9BGdyT6tiA/4v4cxH6ygG6NzWkqxSaYyNrWWXtLhOlqyCpTZtUwHnF+OLB3gCpDZPx+NwTe2Kn0jY0c83LuLh5PJ090AsWWqZaRQyELeL6y6mVekQFWHGfK6t57Vb7Z3+5XJCgQn+xFLkj3SIz0ME5D4+DSsUDS1fyL8uI=
  template:
    data: null
    metadata:
      creationTimestamp: null
      name: pacman-secret
      namespace: default
    type: Opaque
Now you can safely push your SealedSecret to your Kubernetes manifests repo and create the Argo CD application. Here's an example from this book's repository:
argocd app create pacman \
  --repo https://github.com/gitops-cookbook/pacman-kikd-manifests.git \
  --path 'k8s/sealedsecrets' \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default \
  --sync-policy auto
Check if the app is running and healthy:
argocd app list
You should get output similar to the following:
NAME    CLUSTER                         NAMESPACE  PROJECT  STATUS  HEALTH   SYNCPOLICY  CONDITIONS  REPO                                                           PATH               TARGET
pacman  https://kubernetes.default.svc  default    default  Synced  Healthy  <none>      <none>      https://github.com/gitops-cookbook/pacman-kikd-manifests.git  k8s/sealedsecrets
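As a sanity check, you can also confirm that the Sealed Secrets controller has decrypted the SealedSecret back into a regular Secret in the cluster:

kubectl get sealedsecret pacman-secret
kubectl get secret pacman-secret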
In Recipe 8.1 you saw how to manage encrypted data in Git following the GitOps declarative way, but how do you avoid storing credentials in Git altogether, even encrypted ones?
One solution is External Secrets, an open source project initially created by GoDaddy, which stores secrets in external services or vaults from different vendors and keeps only the reference to such secrets in Git.
Today, External Secrets supports systems such as AWS Secrets Manager, HashiCorp Vault, Google Secrets Manager, Azure Key Vault, and more. The idea is to provide a user-friendly abstraction for the external API that stores and manages the lifecycles of the secrets.
Under the hood, External Secrets is a Kubernetes controller that reconciles Secrets into the cluster from a Custom Resource that includes a reference to a secret in an external key management system. The Custom Resource SecretStore specifies the backend containing the confidential data and how it should be transformed into a Secret by defining a template, as you can see in Figure 8-2. The SecretStore holds the configuration to connect to the external secret manager.
Thus, the ExternalSecret objects can be safely stored in Git, as they do not contain any confidential information, just the references to the external services managing the credentials.
You can install External Secrets with a Helm Chart as follows. At the time of writing this book, we are using version 0.5.9:
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets \
  -n external-secrets \
  --create-namespace
You should get output similar to the following:
NAME: external-secrets
LAST DEPLOYED: Fri Sep  2 13:09:53 2022
NAMESPACE: external-secrets
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
external-secrets has been deployed successfully!

In order to begin using ExternalSecrets, you will need to set up a SecretStore
or ClusterSecretStore resource (for example, by creating a vault SecretStore).

More information on the different types of SecretStores and how to configure them
can be found in our GitHub page.
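Before moving on, you can verify the controller pods are running:

kubectl get pods -n external-secrets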
You can also install the External Secrets Operator with OLM from OperatorHub.io.
As an example, here is how to use one of the supported providers, HashiCorp Vault.
First, download and install HashiCorp Vault for your operating system and get your Vault token. Then create a Kubernetes Secret as follows:
export VAULT_TOKEN=<YOUR_TOKEN>
kubectl create secret generic vault-token \
  --from-literal=token=$VAULT_TOKEN \
  -n external-secrets
Then create a SecretStore as a reference to this external system:
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-secretstore
  namespace: default
spec:
  provider:
    vault:
      server: "http://vault.local:8200"
      path: "secret"
      version: "v2"
      auth:
        tokenSecretRef:
          name: "vault-token"
          key: "token"
          namespace: external-secrets
Hostname where your Vault is running
Name of the Kubernetes Secret containing the Vault token
Key addressing the value in the Kubernetes Secret containing the Vault token

Create the resource:
kubectl create -f vault-secretstore.yaml
Now you can create a Secret in your Vault as follows:
vault kv put secret/pacman-secrets pass=pacman
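To confirm the value landed in Vault, you can read it back (vault kv get prints the stored key/value data):

vault kv get secret/pacman-secrets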
And then reference it from an ExternalSecret as follows:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: pacman-externalsecrets
  namespace: default
spec:
  refreshInterval: "15s"
  secretStoreRef:
    name: vault-secretstore
    kind: SecretStore
  target:
    name: pacman-externalsecrets
  data:
  - secretKey: token
    remoteRef:
      key: secret/pacman-secrets
      property: pass
kubectl create -f pacman-externalsecrets.yaml
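After the refresh interval elapses (15 seconds in this example), the External Secrets controller should have created the target Secret. A quick check, using the resource names defined above:

kubectl get externalsecret pacman-externalsecrets
kubectl get secret pacman-externalsecrets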
Now you can deploy the Pac-Man game with Argo CD using External Secrets as follows:
argocd app create pacman \
  --repo https://github.com/gitops-cookbook/pacman-kikd-manifests.git \
  --path 'k8s/externalsecrets' \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default \
  --sync-policy auto
While Argo CD polls Git repositories every three minutes to detect changes to the monitored Kubernetes manifests, it also supports an event-driven approach with webhook notifications from popular Git servers such as GitHub, GitLab, or Bitbucket.
Argo CD webhooks are enabled in your Argo CD installation and available at the endpoint /api/webhooks.
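On Minikube, one way to make this endpoint reachable while testing is to port-forward the Argo CD API server; the 9090 mapping below is an assumption chosen to match the payload URL used later in this recipe:

kubectl port-forward svc/argocd-server -n argocd 9090:443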
To test webhooks with Argo CD on Minikube, you can use Helm to install a local Git server such as Gitea, an open source lightweight Git server written in Go:

helm repo add gitea-charts https://dl.gitea.io/charts/
helm install gitea gitea-charts/gitea
You should have output similar to the following:
helm install gitea gitea-charts/gitea"gitea-charts"
has been added to your repositories NAME: gitea LAST DEPLOYED: Fri Sep2
15
:04:042022
NAMESPACE: default STATUS: deployed REVISION:1
NOTES:1
. Get the application URL by running these commands:echo
"Visit http://127.0.0.1:3000 to use your application"
kubectl --namespace default port-forward svc/gitea-http3000
:3000
Log in to the Gitea server with the default credentials you find in the values.yaml file from the Helm Chart, or define new ones by overriding them.
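For instance, you can inspect the chart's default values to locate the admin credentials (the exact key names may differ between chart versions):

helm show values gitea-charts/gitea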
Import the Pac-Man manifests repo into Gitea.
Configure the Argo app:
argocd app create pacman-webhook \
  --repo http://gitea-http.default.svc:3000/gitea_admin/pacman-kikd-manifests.git \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default \
  --path k8s \
  --sync-policy auto
To add a webhook to Gitea, navigate to the top-right corner and click Settings. Select the Webhooks tab and configure it as shown in Figure 8-3:
Payload URL: http://localhost:9090/api/webhooks
Content type: application/json
You can omit the Secret for this example; however, it's best practice to configure secrets for your webhooks. Read more in the docs.
Save it and push your change to the repo on Gitea. You will see a new sync from Argo CD immediately after your push.
Argo CD supports the ApplicationSet resource to templatize an Argo CD Application resource.
It covers different use cases, but the most important are:
Use a Kubernetes manifest to target multiple Kubernetes clusters.
Deploy multiple applications from one or multiple Git repositories.
Since the ApplicationSet is a template file with placeholders to substitute at runtime, we need to feed it some values. For this purpose, ApplicationSet has the concept of generators. A generator is responsible for producing the parameters that are finally substituted into the template placeholders to render a valid Argo CD Application.
Create the following ApplicationSet:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: bgd-app
  namespace: argocd
spec:
  generators:
  - list:
      elements:
      - cluster: staging
        url: https://kubernetes.default.svc
        location: default
      - cluster: prod
        url: https://kubernetes.default.svc
        location: app
  template:
    metadata:
      name: '{{cluster}}-app'
    spec:
      project: default
      source:
        repoURL: https://github.com/gitops-cookbook/gitops-cookbook-sc.git
        targetRevision: main
        path: ch08/bgd-gen/{{cluster}}
      destination:
        server: '{{url}}'
        namespace: '{{location}}'
      syncPolicy:
        syncOptions:
        - CreateNamespace=true
Defines a generator
Sets the parameter values
Defines the Application resource as a template
cluster placeholder
url placeholder
Apply the previous file by running the following command:
kubectl apply -f bgd-application-set.yaml
When this ApplicationSet is applied to the cluster, Argo CD generates and automatically registers two Application resources. The first one is:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: staging-app
spec:
  project: default
  source:
    path: ch08/bgd-gen/staging
    repoURL: https://github.com/gitops-cookbook/gitops-cookbook-sc.git
    targetRevision: main
  destination:
    namespace: default
    server: https://kubernetes.default.svc
...
And the second one:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prod-app
spec:
  project: default
  source:
    path: ch08/bgd-gen/prod
    repoURL: https://github.com/gitops-cookbook/gitops-cookbook-sc.git
    targetRevision: main
  destination:
    namespace: app
    server: https://kubernetes.default.svc
...
Inspect the creation of both Application resources by running the following command:
# Remember to log in first
argocd login --insecure --grpc-web $argoURL --username admin --password $argoPass

argocd app list
And the output should be similar to the following (truncated):
NAME         CLUSTER                         NAMESPACE
prod-app     https://kubernetes.default.svc  app
staging-app  https://kubernetes.default.svc  default
Delete both applications by deleting the ApplicationSet file:
kubectl delete -f bgd-application-set.yaml
We've seen the simplest generator, but there are eight generators in total at the time of writing this book:

List generator
Generates Application definitions through a fixed list of clusters. (It's the one we've seen previously.)
Cluster generator
Similar to List, but based on the list of clusters defined in Argo CD.
Git generator
Generates Application definitions based on JSON/YAML properties files within a Git repository or based on the directory layout of the repository.
SCM Provider generator
Generates Application definitions from repositories within an organization.
Cluster Decision Resource generator
Generates Application definitions using duck-typing.
Pull Request generator
Generates Application definitions from open pull requests (covered later in this recipe).
Matrix generator
Combines the parameters of two separate generators.
Merge generator
Merges the parameters of two or more generators.
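As a quick illustration of the second entry, here is a minimal sketch of a Cluster generator that stamps out one Application per cluster registered in Argo CD; the application name, repository, and path are illustrative:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: bgd-all-clusters
  namespace: argocd
spec:
  generators:
  - clusters: {}                # one set of parameters per registered cluster
  template:
    metadata:
      name: '{{name}}-bgd'      # {{name}} and {{server}} are provided by the generator
    spec:
      project: default
      source:
        repoURL: https://github.com/gitops-cookbook/gitops-cookbook-sc.git
        targetRevision: main
        path: ch08/bgd-gen/staging
      destination:
        server: '{{server}}'
        namespace: default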
In the previous example, we created the Application objects from a fixed list of elements. This is fine when the number of configurable environments is small; in the example, two clusters refer to two Git folders (ch08/bgd-gen/staging and ch08/bgd-gen/prod).
In the case of multiple environments (which means various folders), we can use the Git generator to dynamically generate one Application per directory.
Let’s migrate the previous example to use the Git generator. As a reminder, the Git directory layout used was:
bgd-gen
├── staging
│   ├── ...yaml
└── prod
    ├── ...yaml
Create a new file of type ApplicationSet generating an Application for each directory of the configured Git repo:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-addons
  namespace: openshift-gitops
spec:
  generators:
  - git:
      repoURL: https://github.com/gitops-cookbook/gitops-cookbook-sc.git
      revision: main
      directories:
      - path: ch08/bgd-gen/*
  template:
    metadata:
      name: '{{path[0]}}{{path[2]}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/gitops-cookbook/gitops-cookbook-sc.git
        targetRevision: main
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
Configures the Git repository to read the layout
Initial path to start scanning directories
Application definition
The directory paths within the Git repository matching the path wildcard (staging or prod)
Directory path (full path)
The rightmost pathname
Apply the resource:
kubectl apply -f bgd-git-application-set.yaml
Argo CD creates two applications as there are two directories:
argocd app list
NAME         CLUSTER                         NAMESPACE
ch08prod     https://kubernetes.default.svc  prod
ch08staging  https://kubernetes.default.svc  staging
Also, this generator is handy when your application is composed of different components (service, database, distributed cache, email server, etc.), and the deployment files for each element are placed in separate directories. Or, for example, a repository with all the operators required to be installed in the cluster:
app
├── tekton-operator
│   ├── ...yaml
├── prometheus-operator
│   ├── ...yaml
└── istio-operator
    ├── ...yaml
Instead of reacting to directories, the Git generator can create Application objects with parameters specified in JSON/YAML files.
The following snippet shows an example JSON file:
{
  "cluster": {
    "name": "staging",
    "address": "https://1.2.3.4"
  }
}
This is an excerpt of the ApplicationSet reacting to these files:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
spec:
  generators:
  - git:
      repoURL: https://github.com/example/app.git
      revision: HEAD
      files:
      - path: "app/**/config.json"
  template:
    metadata:
      name: '{{cluster.name}}-app'
...
Finds all config.json files placed in all subdirectories of the app directory
Injects the values set in config.json
This ApplicationSet will generate one Application for each config.json file in the folders matching the path expression.
Use the pull request generator to automatically discover open pull requests within a repository and create an Application object.
Let's create an ApplicationSet reacting to any GitHub pull request annotated with the preview label created on the configured repository.
Create a new file named bgd-pr-application-set.yaml with the following content:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapps
  namespace: openshift-gitops
spec:
  generators:
  - pullRequest:
      github:
        owner: gitops-cookbook
        repo: gitops-cookbook-sc
        labels:
        - preview
      requeueAfterSeconds: 60
  template:
    metadata:
      name: 'myapp-{{branch}}-{{number}}'
    spec:
      source:
        repoURL: 'https://github.com/gitops-cookbook/gitops-cookbook-sc.git'
        targetRevision: '{{head_sha}}'
        path: ch08/bgd-pr
      project: default
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{branch}}-{{number}}'
GitHub pull request generator
Organization/user
Repository
Selects the target PRs
Polling interval in seconds to check whether there is a new PR (60 seconds)
Sets the name with the branch name and PR number
Sets the Git commit SHA
Apply the previous file by running the following command:
kubectl apply -f bgd-pr-application-set.yaml
Now, if you list the Argo CD applications, you'll see that none are registered. The reason is that there is no pull request yet in the repository labeled with preview:
argocd app list
NAME  CLUSTER  NAMESPACE  PROJECT  STATUS
Create a pull request against the repository and label it with preview.
In GitHub, the pull request window should be similar to Figure 8-4.
Wait for one minute until the ApplicationSet detects the change and creates the Application object.
Run the following command to inspect that the change has been detected and registered:
kubectl describe applicationset myapps -n argocd
...
Events:
  Type    Reason     Age                From                       Message
  ----    ------     ----               ----                       -------
  Normal  created    23s                applicationset-controller  created Application "myapp-lordofthejars-patch-1-1"
  Normal  unchanged  23s (x2 over 23s)  applicationset-controller  unchanged Application "myapp-lordofthejars-patch-1-1"
Check the registration of the Application to the pull request:
argocd app list
NAME                           CLUSTER                         NAMESPACE
myapp-lordofthejars-patch-1-1  https://kubernetes.default.svc  lordofthejars-patch-1-1
The Application object is automatically removed when the pull request is closed.
At the time of writing this book, the following pull request providers are supported:
GitHub
Bitbucket
Gitea
GitLab
The ApplicationSet controller polls every requeueAfterSeconds interval to detect changes, but it also supports webhook events. To configure it, follow Recipe 8.3, but enable sending pull request events in the Git provider too.
Use the Argo Rollouts project to roll out updates to an application.
Argo Rollouts is a Kubernetes controller providing advanced deployment techniques such as blue-green, canary, mirroring, dark canaries, and traffic analysis. It integrates with many Kubernetes projects like Ambassador, Istio, AWS Load Balancer Controller, NGINX, SMI, or Traefik for traffic management, and with projects like Prometheus, Datadog, and New Relic to perform the analysis that drives progressive delivery.
To install Argo Rollouts to the cluster, run the following command in a terminal window:
kubectl create namespace argo-rollouts
kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/download/v1.2.2/install.yaml
...
clusterrolebinding.rbac.authorization.k8s.io/argo-rollouts created
secret/argo-rollouts-notification-secret created
service/argo-rollouts-metrics created
deployment.apps/argo-rollouts created
Although it’s not mandatory, we recommend you install the Argo Rollouts Kubectl Plugin to visualize rollouts. Follow the instructions to install it. With everything in place, let’s deploy the initial version of the BGD application.
Argo Rollouts doesn't use the standard Kubernetes Deployment resource, but a specific new Kubernetes resource named Rollout. It's like a Deployment object, hence all its options are supported, but it adds some fields to configure the rolling update.
Let's deploy the first version of the application. We'll define the canary release process that Kubernetes executes during a rolling update, which in this case follows these steps:
Forward 20% of traffic to the new version.
Wait until a human decides to proceed with the process.
Forward 40%, 60%, 80% of the traffic to the new version automatically, waiting 30 seconds between every increase.
Create a new file named bgd-rollout.yaml with the following content:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: bgd-rollouts
spec:
  replicas: 5
  strategy:
    canary:
      steps:
      - setWeight: 20
      - pause: {}
      - setWeight: 40
      - pause: {duration: 30s}
      - setWeight: 60
      - pause: {duration: 30s}
      - setWeight: 80
      - pause: {duration: 30s}
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: bgd-rollouts
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: bgd-rollouts
    spec:
      containers:
      - image: quay.io/rhdevelopers/bgd:1.0.0
        name: bgd
        env:
        - name: COLOR
          value: "blue"
        resources: {}
Canary release strategy
List of steps to execute
Sets the canary traffic ratio
Pauses the rollout until it is manually resumed
Pauses the rollout for 30 seconds
Standard Deployment template definition
Apply the resource to deploy the application. Since there is no previous deployment, the canary part is ignored:
kubectl apply -f bgd-rollout.yaml
Currently, there are five pods as specified in the replicas field:
kubectl get pods
NAME                          READY  STATUS   RESTARTS  AGE
bgd-rollouts-679cdfcfd-6z2zf  1/1    Running  0         12m
bgd-rollouts-679cdfcfd-8c6kl  1/1    Running  0         12m
bgd-rollouts-679cdfcfd-8tb4v  1/1    Running  0         12m
bgd-rollouts-679cdfcfd-f4p7f  1/1    Running  0         12m
bgd-rollouts-679cdfcfd-tljfr  1/1    Running  0         12m
And using the Argo Rollouts Kubectl Plugin:
kubectl argo rollouts get rollout bgd-rollouts
Name:            bgd-rollouts
Namespace:       default
Status:          ✔ Healthy
Strategy:        Canary
  Step:          8/8
  SetWeight:     100
  ActualWeight:  100
Images:          quay.io/rhdevelopers/bgd:1.0.0 (stable)
Replicas:
  Desired:       5
  Current:       5
  Updated:       5
  Ready:         5
  Available:     5

NAME                                     KIND        STATUS     AGE  INFO
⟳ bgd-rollouts                           Rollout     ✔ Healthy  13m
└──# revision:1
   └──⧉ bgd-rollouts-679cdfcfd           ReplicaSet  ✔ Healthy  13m  stable
      ├──□ bgd-rollouts-679cdfcfd-6z2zf  Pod         ✔ Running  13m  ready:1/1
      ├──□ bgd-rollouts-679cdfcfd-8c6kl  Pod         ✔ Running  13m  ready:1/1
      ├──□ bgd-rollouts-679cdfcfd-8tb4v  Pod         ✔ Running  13m  ready:1/1
      ├──□ bgd-rollouts-679cdfcfd-f4p7f  Pod         ✔ Running  13m  ready:1/1
      └──□ bgd-rollouts-679cdfcfd-tljfr  Pod         ✔ Running  13m  ready:1/1
Let’s deploy a new version to trigger a canary rolling update.
Create a new file named bgd-rollout-v2.yaml with exactly the same content as the previous one, but change the environment variable COLOR value to green:
...
        name: bgd
        env:
        - name: COLOR
          value: "green"
        resources: {}
Apply the new resource and check how Argo Rollouts executes the rolling update. Assuming the filename used above:
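kubectl apply -f bgd-rollout-v2.yaml

List the pods again to check that 20% of the pods run the new version while the other 80% still run the old one: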
kubectl get pods
NAME                          READY  STATUS   RESTARTS  AGE
bgd-rollouts-679cdfcfd-6z2zf  1/1    Running  0         27m
bgd-rollouts-679cdfcfd-8c6kl  1/1    Running  0         27m
bgd-rollouts-679cdfcfd-8tb4v  1/1    Running  0         27m
bgd-rollouts-679cdfcfd-tljfr  1/1    Running  0         27m
bgd-rollouts-c5495c6ff-zfgvn  1/1    Running  0         13s
And do the same using the Argo Rollouts Kubectl Plugin:
kubectl argo rollouts get rollout bgd-rollouts
...
NAME                                     KIND        STATUS     AGE    INFO
⟳ bgd-rollouts                           Rollout     ॥ Paused   31m
├──# revision:2
│  └──⧉ bgd-rollouts-c5495c6ff           ReplicaSet  ✔ Healthy  3m21s  canary
│     └──□ bgd-rollouts-c5495c6ff-zfgvn  Pod         ✔ Running  3m21s  ready:1/1
└──# revision:1
   └──⧉ bgd-rollouts-679cdfcfd           ReplicaSet  ✔ Healthy  31m    stable
      ├──□ bgd-rollouts-679cdfcfd-6z2zf  Pod         ✔ Running  31m    ready:1/1
      ├──□ bgd-rollouts-679cdfcfd-8c6kl  Pod         ✔ Running  31m    ready:1/1
      ├──□ bgd-rollouts-679cdfcfd-8tb4v  Pod         ✔ Running  31m    ready:1/1
      └──□ bgd-rollouts-679cdfcfd-tljfr  Pod         ✔ Running  31m    ready:1/1
Remember that the rolling update process is paused until the operator executes a manual step to let the process continue. In a terminal window, run the following command:
kubectl argo rollouts promote bgd-rollouts
The rollout is promoted and continues with the following steps, substituting the old version's pods with new ones every 30 seconds:
kubectl get pods
NAME                          READY  STATUS   RESTARTS  AGE
bgd-rollouts-c5495c6ff-2g7r8  1/1    Running  0         89s
bgd-rollouts-c5495c6ff-7mdch  1/1    Running  0         122s
bgd-rollouts-c5495c6ff-d9828  1/1    Running  0         13s
bgd-rollouts-c5495c6ff-h4t6f  1/1    Running  0         56s
bgd-rollouts-c5495c6ff-zfgvn  1/1    Running  0         11m
The rolling update finishes with the new version progressively deployed to the cluster.
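While the update proceeds, you can follow it live and abort the rollout if the canary misbehaves; both subcommands are part of the Argo Rollouts Kubectl Plugin:

kubectl argo rollouts get rollout bgd-rollouts --watch
kubectl argo rollouts abort bgd-rollouts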
Kubernetes doesn't implement advanced deployment techniques natively. For this reason, Argo Rollouts uses the number of deployed pods to implement the canary release; with five replicas, for example, a setWeight of 20 means one pod runs the new version.
As mentioned before, Argo Rollouts integrates with Kubernetes products that offer advanced traffic management capabilities like Istio.
With Istio, traffic splitting is done at the infrastructure level instead of by adjusting replica numbers as in the first example.
Argo Rollouts integrates with Istio to execute a canary release, automatically updating the Istio VirtualService object. Assuming you already know Istio and have a Kubernetes cluster with Istio installed, you can integrate Argo Rollouts with Istio by setting the trafficRouting field of the Rollout resource to istio.
First, create a Rollout file with Istio configured:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: bgdapp
  labels:
    app: bgdapp
spec:
  strategy:
    canary:
      steps:
      - setWeight: 20
      - pause:
          duration: "1m"
      - setWeight: 50
      - pause:
          duration: "2m"
      canaryService: bgd-canary
      stableService: bgd
      trafficRouting:
        istio:
          virtualService:
            name: bgd
            routes:
            - primary
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: bgdapp
      version: v1
  template:
    metadata:
      labels:
        app: bgdapp
        version: v1
      annotations:
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - image: quay.io/rhdevelopers/bgd:1.0.0
        name: bgd
        env:
        - name: COLOR
          value: "blue"
        resources: {}
Canary section
Reference to a Kubernetes Service pointing to the new service version
Reference to a Kubernetes Service pointing to the old service version
Configures Istio
Reference to the VirtualService where the weight is updated
Name of the VirtualService
Route name within the VirtualService
Deploys the Istio sidecar container
Then, we create two Kubernetes Services pointing to the same deployment, used to redirect traffic to either the old or the new version. The following Kubernetes Service is used in the stableService field:
apiVersion: v1
kind: Service
metadata:
  name: bgd
  labels:
    app: bgdapp
spec:
  ports:
  - name: http
    port: 8080
  selector:
    app: bgdapp
And the canary one is the same but with a different name. It's the one used in the canaryService field:
apiVersion: v1
kind: Service
metadata:
  name: bgd-canary
  labels:
    app: bgdapp
spec:
  ports:
  - name: http
    port: 8080
  selector:
    app: bgdapp
Finally, create the Istio VirtualService that Argo Rollouts updates to shift the canary traffic between the services:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bgd
spec:
  hosts:
  - bgd
  http:
  - route:
    - destination:
        host: bgd
      weight: 100
    - destination:
        host: bgd-canary
      weight: 0
    name: primary
After applying these resources, we’ll get the first version of the application up and running:
kubectl apply -f bgd-virtual-service.yaml
kubectl apply -f service.yaml
kubectl apply -f service-canary.yaml
kubectl apply -f bgd-istio-rollout.yaml
When any update occurs on the Rollout object, the canary release will start as described in the Solution. Now, Argo Rollouts updates the bgd VirtualService weights automatically instead of playing with pod numbers.