Production

The default Auto DevOps pipeline deploys your code to production after the test stage finishes. Several environment variables are available to control the autoscaling of your replica pods. The heavy lifting in this phase is performed by the auto-deploy-app Helm chart. You can also provide your own chart by adding it to a chart directory in your project, or by setting the AUTO_DEVOPS_CHART variable, combined with the AUTO_DEVOPS_CHART_REPOSITORY variable, to point to the URL of your custom chart. The deploy job creates several things:

  • A deploy token
  • A Prometheus monitoring instance that's wired for your application
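The autoscaling and chart settings mentioned above can be set as CI/CD variables in your .gitlab-ci.yml. A sketch with placeholder values (PRODUCTION_REPLICAS, AUTO_DEVOPS_CHART, and AUTO_DEVOPS_CHART_REPOSITORY are variables recognized by Auto DevOps; the chart name and repository URL below are examples, not real endpoints):

```yaml
variables:
  PRODUCTION_REPLICAS: "3"                   # scale the production deployment to three pods
  AUTO_DEVOPS_CHART: my-org/auto-deploy-app  # placeholder: your custom chart
  AUTO_DEVOPS_CHART_REPOSITORY: https://charts.example.com  # placeholder: your chart repository
```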

Let's walk through the log file of such a deployment job:

Running with gitlab-runner 11.8.0 (4745a6f3)
on runner-gitlab-runner-7fd79f558b-2wx96 _drEv8rS
Using Kubernetes namespace: gitlab-managed-apps
Using Kubernetes executor with image alpine:latest ...

The job checks out the code, downloads the artifacts of the previous jobs, verifies that a Kubernetes domain is configured, and installs the dependencies needed for a minimal Helm run:

Checking out 08222854 as master...
Skipping Git submodules setup
Downloading artifacts for code_quality (477)...
Downloading artifacts from coordinator... ok id=477 responseStatus=200 OK token=zxQGxCFW
Downloading artifacts for license_management (478)...
Downloading artifacts from coordinator... ok id=478 responseStatus=200 OK token=HjYg-s1y
Downloading artifacts for container_scanning (481)...
Downloading artifacts from coordinator... ok id=481 responseStatus=200 OK token=hErz9aWj
$ # Auto DevOps variables and functions # collapsed multi-line command
$ check_kube_domain
$ install_dependencies
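Under the hood, check_kube_domain simply fails the job early when no base domain has been configured for the cluster. A minimal sketch, assuming the AUTO_DEVOPS_DOMAIN variable name used by this GitLab version (the real script may differ in detail):

```shell
# Hypothetical sketch of the check_kube_domain step: fail fast when no
# base domain is configured for the Kubernetes cluster.
check_kube_domain() {
  if [ -z "${AUTO_DEVOPS_DOMAIN:-}" ]; then
    echo "ERROR: AUTO_DEVOPS_DOMAIN is not set" >&2
    return 1
  fi
}
```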

The next step is to download the required chart (auto-deploy-app chart or custom):

$ download_chart
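download_chart roughly follows this decision order: prefer a chart directory bundled in the repository, fall back to a custom chart if AUTO_DEVOPS_CHART is set, and otherwise fetch the default auto-deploy-app chart from GitLab's chart repository. A sketch (the branching logic is the point; the exact helm flags are illustrative):

```shell
# Hypothetical sketch of the download_chart step.
download_chart() {
  if [ -d chart ]; then
    echo "Using project-local chart in ./chart"
  elif [ -n "${AUTO_DEVOPS_CHART:-}" ]; then
    helm fetch "$AUTO_DEVOPS_CHART" --repo "$AUTO_DEVOPS_CHART_REPOSITORY" --untar
  else
    helm fetch auto-deploy-app --repo https://charts.gitlab.io --untar
  fi
}
```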

Next, the job ensures that the target Kubernetes namespace exists, creating it if necessary:

$ ensure_namespace
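ensure_namespace is essentially a create-if-missing check against the cluster. A sketch, assuming KUBE_NAMESPACE carries the target namespace (as it does in GitLab's Kubernetes integration):

```shell
# Hypothetical sketch of the ensure_namespace step: create the target
# namespace only if it does not exist yet.
ensure_namespace() {
  kubectl describe namespace "$KUBE_NAMESPACE" >/dev/null 2>&1 || \
    kubectl create namespace "$KUBE_NAMESPACE"
}
```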

Now, it's time to initialize tiller (the Helm server):

$ initialize_tiller
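Notably, Tiller is not installed into the cluster here; it is started locally inside the CI job and the Helm client is pointed at it. A sketch (the port and flags follow the pattern Auto DevOps used at the time; treat the details as illustrative):

```shell
# Hypothetical sketch of the initialize_tiller step: run Tiller inside the
# CI job and configure the Helm client to talk to it.
initialize_tiller() {
  export HELM_HOST="localhost:44134"
  tiller -listen "$HELM_HOST" -alsologtostderr >/dev/null 2>&1 &
  helm init --client-only
}
```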

Here, a secret to access the registry is created:

$ create_secret
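create_secret builds a docker-registry pull secret from the job's deploy-token credentials so the cluster can pull the application image. A sketch, assuming the standard GitLab CI variables (CI_REGISTRY, CI_DEPLOY_USER, CI_DEPLOY_PASSWORD); the secret name and exact flags are illustrative:

```shell
# Hypothetical sketch of the create_secret step: (re)create a registry
# pull secret in the target namespace from the deploy-token credentials.
create_secret() {
  kubectl create secret -n "$KUBE_NAMESPACE" \
    docker-registry gitlab-registry \
    --docker-server="$CI_REGISTRY" \
    --docker-username="$CI_DEPLOY_USER" \
    --docker-password="$CI_DEPLOY_PASSWORD" \
    --dry-run -o yaml | kubectl replace -n "$KUBE_NAMESPACE" --force -f -
}
```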

Finally, the deployment can start:

$ deploy
secret "production-secret" deleted
secret/production-secret replaced
Deploying new release...
Release "production" has been upgraded.
Happy Helming! ...
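The deploy function itself boils down to a helm upgrade --install of the downloaded chart, with the image coordinates passed in as values. A sketch with illustrative flags (the real script passes many more values; CI_APPLICATION_REPOSITORY, CI_APPLICATION_TAG, and CI_ENVIRONMENT_SLUG are standard GitLab CI variables):

```shell
# Hypothetical sketch of the deploy step: install or upgrade the release,
# pointing the chart at the image built earlier in the pipeline.
deploy() {
  helm upgrade --install \
    --namespace "$KUBE_NAMESPACE" \
    --set image.repository="$CI_APPLICATION_REPOSITORY" \
    --set image.tag="$CI_APPLICATION_TAG" \
    "$CI_ENVIRONMENT_SLUG" chart/
}
```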

After the deployment, you will see feedback about the URL where the application is running. The hostname is created by combining the project name with the wildcard domain of the cluster:

NOTES:
Application should be accessible at: http://it-eventmanager.kubernetes.joustie.nl
Waiting for deployment "production" rollout to finish: 0 of 1 updated replicas are available...
deployment "production" successfully rolled out
$ delete canary
$ delete rollout
$ persist_environment_url
Uploading artifacts...
environment_url.txt: found 1 matching files
Uploading artifacts to coordinator... ok id=482 responseStatus=201 Created token=koT8yujj
Job succeeded

If you have configured kubectl to use the context of your GKE cluster, you can verify from the command line whether your deployments took place:

Joosts-iMac-Pro:Part3 joostevertse$ kubectl get pods --all-namespaces

The list of pods should show you the pods that were started:

NAMESPACE      NAME                                   READY   STATUS    RESTARTS   AGE
eventmanager   production-6b9db68f6f-hrwzv            1/1     Running   0          11h
eventmanager   production-postgres-5b5cf56747-xngbk   1/1     Running   0          11h

By default, a PostgreSQL instance is deployed as well, and you can fine-tune your installation to use it if you need to. You can find more information about that here: https://docs.gitlab.com/ee/topics/autodevops/#postgresql-database-support. There are also other pods in the list; they are all part of the deployment:

certmanager-cert-manager-6c8cd9f9bf-8kbf8                1/1     Running   0          11h
ingress-nginx-ingress-controller-ff666c548-n2s84         1/1     Running   0          11h
ingress-nginx-ingress-default-backend-677b99f864-bnk8c   1/1     Running   0          11h
runner-gitlab-runner-7fd79f558b-2wx96                    1/1     Running   0          11h
tiller-deploy-6586b57bcb-t6zql                           1/1     Running   0          11h
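The PostgreSQL deployment mentioned earlier is controlled through variables as well. A sketch of the relevant .gitlab-ci.yml settings (the values are placeholders; POSTGRES_ENABLED, POSTGRES_USER, POSTGRES_PASSWORD, and POSTGRES_DB are the variables documented by GitLab):

```yaml
variables:
  POSTGRES_ENABLED: "true"              # set to "false" to skip the bundled database
  POSTGRES_USER: eventmanager           # placeholder
  POSTGRES_PASSWORD: change-me          # placeholder
  POSTGRES_DB: eventmanager_production  # placeholder
```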

The eventmanager application can now be viewed by going to http://it-eventmanager.kubernetes.joustie.nl.

Now, we have a running application that is being tested and monitored. The next and final step is to run a performance check on the production environment. Again, we can use our Kubernetes cluster to spawn a test container for it and run performance tests on it, which is the subject of the next section.
