Using YAML values to customize Helm installations

We managed to customize Jenkins by setting ImageTag. What if we'd like to set CPU and memory? We should also add Ingress, and that would require a few annotations. If we add Ingress, we might want to change the Service type to ClusterIP and set HostName to our domain. We should also make sure that RBAC is used. Finally, the plugins that come with the Chart are probably not all the plugins we need.

Applying all those changes through --set arguments would end up as a very long command and would constitute an undocumented installation. We'll have to change tactics and switch to --values. But before we do all that, we need to generate a domain we'll use with our cluster.

We'll use nip.io (http://nip.io/) to generate valid domains. The service provides a wildcard DNS for any IP address. It extracts the IP from the nip.io subdomain and sends it back in the response. For example, if we generate 192.168.99.100.nip.io, it'll be resolved to 192.168.99.100. We can even add sub-subdomains like something.192.168.99.100.nip.io, and it would still be resolved to 192.168.99.100. It's a simple and awesome service that quickly became an indispensable part of my toolbox.
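To illustrate what the service does behind the scenes, the sketch below mimics nip.io's resolution locally. The `nip_ip` function is a hypothetical helper written for this illustration, not part of nip.io itself; it pulls the embedded IP out of a hostname the same way the nip.io DNS servers do on their side.

```shell
# Hypothetical helper: extract the last dotted-quad from a nip.io-style
# hostname, mimicking the resolution the nip.io DNS service performs.
nip_ip() {
    echo "$1" | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | tail -n 1
}

nip_ip "192.168.99.100.nip.io"            # 192.168.99.100
nip_ip "jenkins.192.168.99.100.nip.io"    # 192.168.99.100
nip_ip "something.192.168.99.100.nip.io"  # 192.168.99.100
```

No matter how many subdomains we prepend, the embedded IP is what gets resolved.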

The service will be handy with Ingress since it will allow us to generate separate domains for each application, instead of resorting to paths which, as you will see, are unsupported by many Charts. If our cluster is accessible through 192.168.99.100, we can have jenkins.192.168.99.100.nip.io and go-demo-3.192.168.99.100.nip.io.

We could use xip.io (http://xip.io/) instead. For the end users, there is no significant difference between the two. The main reason why we'll use nip.io instead of xip.io is integration with some of the tools. Minishift, for example, comes with Routes pre-configured to use nip.io.

Do not use nip.io, xip.io, or similar services for production. They are not a substitute for "real" domains, but a convenient way to generate them for testing purposes when your corporate domains are not easily accessible.

First things first... We need to find out the IP of our cluster, or the external LB if it is available. The commands that follow will differ from one cluster type to another.

Feel free to skip the sections that follow if you already know how to get the IP of your cluster's entry point.

If your cluster is running in AWS and was created with kops, we'll need to retrieve the hostname from the Ingress Service, and extract the IP from it. Please execute the commands that follow.

LB_HOST=$(kubectl -n kube-ingress \
    get svc ingress-nginx \
    -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")

LB_IP="$(dig +short $LB_HOST \
    | tail -n 1)"

If your cluster is running in AWS and was created as EKS, we'll need to retrieve the hostname from the Ingress Service, and extract the IP from it. Please execute the commands that follow.

LB_HOST=$(kubectl -n ingress-nginx \
    get svc ingress-nginx \
    -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")

LB_IP="$(dig +short $LB_HOST \
    | tail -n 1)"

If your cluster is running in Docker for Mac or Windows, the IP is 127.0.0.1, and all you have to do is assign it to the environment variable LB_IP. Please execute the command that follows.

LB_IP="127.0.0.1"

If your cluster is running in minikube, the IP can be retrieved using the minikube ip command. Please execute the command that follows.

LB_IP="$(minikube ip)"

If your cluster is running in GKE, the IP can be retrieved from the Ingress Service. Please execute the command that follows.

LB_IP=$(kubectl -n ingress-nginx \
    get svc ingress-nginx \
    -o jsonpath="{.status.loadBalancer.ingress[0].ip}")

Next, we'll output the retrieved IP to confirm that the commands worked, and generate a jenkins subdomain.

echo $LB_IP

HOST="jenkins.$LB_IP.nip.io"

echo $HOST

The output of the second echo command should be similar to the one that follows.

jenkins.192.168.99.100.nip.io

nip.io will resolve that address to 192.168.99.100, and we'll have a unique domain for our Jenkins installation. That way we can stop using different paths to distinguish applications in Ingress config. Domains work much better. Many Helm charts do not even have the option to configure unique request paths and assume that Ingress will be configured with a unique domain.
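As a quick sketch of that idea, the snippet below builds one nip.io domain per application from a single entry-point IP. The IP and the application names are examples, not values taken from your cluster.

```shell
# Example entry-point IP; in practice this would come from the LB_IP
# commands we executed earlier.
LB_IP="192.168.99.100"

# One unique domain per application, all resolving to the same IP.
for APP in jenkins go-demo-3; do
    echo "$APP.$LB_IP.nip.io"
done
# jenkins.192.168.99.100.nip.io
# go-demo-3.192.168.99.100.nip.io
```

Each application gets its own domain, while all of them point at the same Ingress controller.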

A note to minishift users
I did not forget about you. You already have a valid domain in the ADDR variable. All we have to do is assign it to the HOST variable. Please execute the command that follows.
HOST=$ADDR && echo $HOST
The output should be similar to jenkins.192.168.99.100.nip.io.

Now that we have a valid jenkins.* domain, we can try to figure out how to apply all the changes we discussed.

We already learned that we can inspect all the available values using the helm inspect command. Let's take another look.

helm inspect values stable/jenkins

The output, limited to the relevant parts, is as follows.

Master:
  Name: jenkins-master
  Image: "jenkins/jenkins"
  ImageTag: "lts"
  ...
  Cpu: "200m"
  Memory: "256Mi"
  ...
  ServiceType: LoadBalancer
  # Master Service annotations
  ServiceAnnotations: {}
  ...
  # HostName: jenkins.cluster.local
  ...
  InstallPlugins:
    - kubernetes:1.1
    - workflow-aggregator:2.5
    - workflow-job:2.15
    - credentials-binding:1.13
    - git:3.6.4
  ...
  Ingress:
    ApiVersion: extensions/v1beta1
    Annotations:
    ...
...
rbac:
  install: false
  ...

Everything we need to accomplish our new requirements is available through the values. Some of them are already filled with defaults, while others are commented out. When we look at all those values, it becomes clear that it would be impractical to try to re-define them all through --set arguments. We'll use --values instead. It will allow us to specify the values in a file.

I already prepared a YAML file with the values that will fulfill our requirements, so let's take a quick look at them.

cat helm/jenkins-values.yml

The output is as follows.

Master:
  ImageTag: "2.116-alpine"
  Cpu: "500m"
  Memory: "500Mi"
  ServiceType: ClusterIP
  ServiceAnnotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
  InstallPlugins:
    - blueocean:1.5.0
    - credentials:2.1.16
    - ec2:1.39
    - git:3.8.0
    - git-client:2.7.1
    - github:1.29.0
    - kubernetes:1.5.2
    - pipeline-utility-steps:2.0.2
    - script-security:1.43
    - slack:2.3
    - thinBackup:1.9
    - workflow-aggregator:2.5
  Ingress:
    enabled: true
    Annotations:
      nginx.ingress.kubernetes.io/ssl-redirect: "false"
      nginx.ingress.kubernetes.io/proxy-body-size: 50m
      nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
      ingress.kubernetes.io/ssl-redirect: "false"
      ingress.kubernetes.io/proxy-body-size: 50m
      ingress.kubernetes.io/proxy-request-buffering: "off"
  HostName: jenkins.acme.com
rbac:
  install: true

As you can see, the variables in that file follow the same format as those we output through the helm inspect values command. The only difference is in values, and the fact that helm/jenkins-values.yml contains only those that we are planning to change.

We defined that the ImageTag should be fixed to 2.116-alpine.

We specified that our Jenkins master will need half a CPU and 500 MB RAM. The default values of 0.2 CPU and 256 MB RAM are probably not enough. What we set is also low, but since we're not going to run any serious load (at least not yet), what we re-defined should be enough.

The Service type was changed to ClusterIP to better accommodate the Ingress resource we're defining further down.

If you are not using AWS, you can ignore ServiceAnnotations. They're telling ELB to use HTTP protocol.

Further down, we are defining the plugins we'll use throughout the book. Their usefulness will become evident in the next chapters.

The values in the Ingress section are defining the annotations that tell Ingress not to redirect HTTP requests to HTTPS (we don't have SSL certificates), as well as a few other less important options. We set both the old style (ingress.kubernetes.io) and the new style (nginx.ingress.kubernetes.io) of defining NGINX Ingress annotations. That way it'll work no matter which Ingress version you're using. The HostName is set to a value that obviously does not exist. I could not know your hostname in advance, so we'll overwrite it later on.

Finally, we set rbac.install to true so that the Chart knows that it should set the proper permissions.

Having all those variables defined at once might be a bit overwhelming. You might want to go through the Jenkins Chart documentation (https://hub.kubeapps.com/charts/stable/jenkins) for more info. In some cases, documentation alone is not enough, and I often end up going through the files that form the Chart. You'll get a grip on them with time. For now, the important thing to observe is that we can re-define any number of variables through a YAML file.

Let's install the Chart with those variables.

helm install stable/jenkins \
    --name jenkins \
    --namespace jenkins \
    --values helm/jenkins-values.yml \
    --set Master.HostName=$HOST

We used the --values argument to pass the contents of the helm/jenkins-values.yml file. Since we had to overwrite the HostName, we used --set. If the same value is defined through both --values and --set, --set always takes precedence.

A note to minishift users
The values define Ingress which does not exist in your cluster. If we'd create a set of values specific to OpenShift, we would not define Ingress. However, since those values are supposed to work in any Kubernetes cluster, we left them intact. Given that Ingress controller does not exist, Ingress resources will have no effect, so it's safe to leave those values.

Next, we'll wait for the jenkins Deployment to roll out and open its UI in a browser.

kubectl -n jenkins \
    rollout status deployment jenkins

open "http://$HOST"

The fact that we opened Jenkins through a domain defined in Ingress (or Route, in the case of OpenShift) tells us that the values were indeed used. We can double-check those currently defined for the installed Chart with the command that follows.

helm get values jenkins

The output is as follows.

Master:
  Cpu: 500m
  HostName: jenkins.18.220.212.56.nip.io
  ImageTag: 2.116-alpine
  Ingress:
    Annotations:
      ingress.kubernetes.io/proxy-body-size: 50m
      ingress.kubernetes.io/proxy-request-buffering: "off"
      ingress.kubernetes.io/ssl-redirect: "false"
      nginx.ingress.kubernetes.io/proxy-body-size: 50m
      nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
      nginx.ingress.kubernetes.io/ssl-redirect: "false"
  InstallPlugins:
  - blueocean:1.5.0
  - credentials:2.1.16
  - ec2:1.39
  - git:3.8.0
  - git-client:2.7.1
  - github:1.29.0
  - kubernetes:1.5.2
  - pipeline-utility-steps:2.0.2
  - script-security:1.43
  - slack:2.3
  - thinBackup:1.9
  - workflow-aggregator:2.5
  Memory: 500Mi
  ServiceAnnotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
  ServiceType: ClusterIP
rbac:
  install: true

Even though the order is slightly different, we can easily confirm that the values are the same as those we defined in helm/jenkins-values.yml. The exception is the HostName, which was overwritten through the --set argument.

Now that we explored how to use Helm to deploy publicly available Charts, we'll turn our attention towards development. Can we leverage the power behind Charts for our applications?

Before we proceed, please delete the Chart we installed as well as the jenkins Namespace.

helm delete jenkins --purge

kubectl delete ns jenkins