Using ChartMuseum

Just as Docker Registry (https://docs.docker.com/registry/) is a place where we can publish our container images and make them accessible to others, we can use a Chart repository to accomplish similar goals with our Charts.

A Chart repository is a location where packaged Charts can be stored and retrieved. We'll use ChartMuseum for that. There aren't many other solutions to choose from, so we can say that we picked it almost by default. That will change soon. I'm sure that Helm Charts will become integrated into general-purpose repositories. At the time of this writing (June 2018), Charts are already supported by JFrog's Artifactory (https://www.jfrog.com/confluence/display/RTF/Helm+Chart+Repositories). You could also build a repository yourself if you're adventurous.

All you'd need is a way to store an index.yaml file that catalogs all the Charts and an API that can be used to push and retrieve packages. Anything else would be a bonus, not a requirement.
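To illustrate how little that is, the sketch that follows shows how a do-it-yourself repository could be assembled with nothing but Helm's own tooling and a static file server. Both my-chart/ and charts.example.com are placeholders, not something we'll use.

# A do-it-yourself repository boils down to two artifacts (a sketch)
helm package my-chart/                             # produces my-chart-<version>.tgz
helm repo index . --url http://charts.example.com  # generates index.yaml
# Serve index.yaml and the .tgz files over HTTP, and you have a Chart repository.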

That's it. That's all the explanation you need, except a note that we'll go with the easiest solution. We won't build a Charts repository ourselves, nor are we going to pay for Artifactory. We'll use ChartMuseum.

ChartMuseum is already available in the official Helm repository. We'll add the stable repository to your Helm client, just in case you removed it accidentally.

 1  helm repo add stable \
 2      https://kubernetes-charts.storage.googleapis.com

You should see the output claiming that "stable" has been added to your repositories.
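If you'd like to double-check which repositories your Helm client already knows about, you can list them.

 1  helm repo list

The stable repository should be among the entries.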

Next, we'll take a quick look at the values available in chartmuseum.

 1  helm inspect values stable/chartmuseum

The output, limited to the relevant parts, is as follows.

...
image:
  repository: chartmuseum/chartmuseum
  tag: v0.7.0
  pullPolicy: IfNotPresent
env:
  open:
    ...
    DISABLE_API: true
...
  secret:
    # username for basic http authentication
    BASIC_AUTH_USER:
    # password for basic http authentication
    BASIC_AUTH_PASS:
...
resources: {}
#  limits:
#    cpu: 100m
#    memory: 128Mi
#  requests:
#    cpu: 80m
#    memory: 64Mi
...
persistence:
  enabled: false
...
## Ingress for load balancer
ingress:
  enabled: false
...
  # annotations:
  #   kubernetes.io/ingress.class: nginx
  #   kubernetes.io/tls-acme: "true"

  ## Chartmuseum Ingress hostnames
  ## Must be provided if Ingress is enabled
  ##
  # hosts:
  #   chartmuseum.domain.com:
  #     - /charts
  #     - /index.yaml
...

We can, and we will, change the image tag. We'll try to make that our practice with all installations. We'll always use a specific tag and leave latest for developers and others who might not be concerned with the stability of the system.

By default, access to the API is disabled through the DISABLE_API: true entry. We'll have to enable it if we are to interact with the API. We can also see that there are, among others, BASIC_AUTH_USER and BASIC_AUTH_PASS secrets which we can use if we'd like to provide basic HTTP authentication.

Please visit the ChartMuseum API documentation (https://github.com/helm/chartmuseum#api) if you're interested in more details.

Further down are the commented-out resources. We'll have to define them ourselves.

We'll need to persist the state of the application and make it accessible through Ingress. Both can be accomplished by changing the related enabled entries to true and, in the case of Ingress, by adding a few annotations and a host.

Now that we've gone through the values we're interested in, we can proceed with the practical parts. We'll need to define the address (domain) we'll use for ChartMuseum.

We already have the IP of the cluster (hopefully the IP of the external LB), and we can use it to create a nip.io domain, just as we did in the previous chapter.

 1  CM_ADDR="cm.$LB_IP.nip.io"

To be on the safe side, we'll echo the value stored in CM_ADDR, and check whether it looks OK.

 1  echo $CM_ADDR

In my case, the output is cm.18.221.122.90.nip.io.
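If the output is missing the IP part, the LB_IP variable from the previous chapter is no longer defined in your shell. How to retrieve it again depends on your platform. On a cluster where the Ingress controller's Service exposes an external IP, something along the lines of the command that follows might do; the ingress-nginx namespace and Service name are assumptions, so adapt them to your setup.

 1  LB_IP=$(kubectl -n ingress-nginx \
 2      get svc ingress-nginx \
 3      -o jsonpath="{.status.loadBalancer.ingress[0].ip}")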

I already prepared a file with all the values we'll want to customize. Let's take a quick look at it.

 1  cat helm/chartmuseum-values.yml

The output is as follows.

image:
  tag: v0.7.0
env:
  open:
    DISABLE_API: false
resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 80m
    memory: 64Mi
persistence:
  enabled: true
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  hosts:
  - name: cm.127.0.0.1.nip.io
    path: /

This is becoming monotonous, and that's OK. It should be that way. Installations should be boring and follow the same pattern. We found that pattern in Helm.

The chartmuseum-values.yml file defines the values we discussed. It sets the tag we'll use, and it enables the API. It defines the resources, and you already know that the values we're using should be taken with a lot of skepticism. In "real" production, the amount of memory and CPU your applications require will differ significantly from what we can observe in our examples. So we should always monitor our applications' real usage patterns and fine-tune the configuration instead of guessing.

We enabled persistence, and we'll use the default StorageClass, since we did not specify any explicitly.
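If you're curious which StorageClass will serve that claim, you can list the classes available in your cluster and look for the one marked as the default.

 1  kubectl get sc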

The ingress section defines the same annotations as those we used with the other Helm installations. It also defines a single host that will handle requests from all paths (/). Think of it as a reminder only. We cannot rely on the host in the chartmuseum-values.yml file since it likely differs from the nip.io address you defined, and I could not have predicted what it will be in your case. So, we'll overwrite that value with a --set argument.

Let's install the Chart.

 1  helm install stable/chartmuseum \
 2      --namespace charts \
 3      --name cm \
 4      --values helm/chartmuseum-values.yml \
 5      --set "ingress.hosts[0].name=$CM_ADDR" \
 6      --set env.secret.BASIC_AUTH_USER=admin \
 7      --set env.secret.BASIC_AUTH_PASS=admin

The Chart is installed. Instead of waiting in silence for all the Pods to start running, we'll briefly discuss security.

We defined the username and the password through --set arguments. They shouldn't be stored in helm/chartmuseum-values.yml since that would defeat the purpose of keeping them secret.
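If you'd also prefer to keep the credentials out of your shell history, one alternative is to place them in a separate values file that is excluded from version control and pass it to Helm with an additional --values argument. The snippet that follows is a sketch of such a file; the cm-creds.yml name is an assumption, not something the repository contains.

env:
  secret:
    BASIC_AUTH_USER: admin
    BASIC_AUTH_PASS: admin

Helm merges values files in the order they are specified, so passing a file like that after helm/chartmuseum-values.yml would have the same effect as the --set arguments we used.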

Personally, I believe that there's no reason to hide the Charts. They do not (or should not) contain anything confidential. The applications themselves are stored in a container registry. Even if someone decides to use our Charts, that person would not be able to deploy our images if our registry is configured to require authentication.

If that is not enough, and we do want to protect our Charts in addition to protecting the images, we should ask ourselves who should not be allowed to access them. If we want to prevent only outsiders from accessing our Charts, the fix is easy. We can put our cluster inside a VPN and make the domain accessible only to internal users. On the other hand, if we want to prevent even internal users from accessing our Charts, we can add basic HTTP authentication. We already saw the secret section when we inspected the values. We can set env.secret.BASIC_AUTH_USER and env.secret.BASIC_AUTH_PASS to enable basic authentication. That's what we did in our example.

If none of those methods is secure enough, we can implement the best security measure of all. We can disable access to all humans by removing Ingress and changing the Service type to ClusterIP. That would result in only processes running in Pods being able to access the Charts.
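Translated into values, such a locked-down setup might look like the snippet that follows. It is only a sketch; the service.type key is an assumption based on common chart conventions, so confirm it against helm inspect values stable/chartmuseum before relying on it.

ingress:
  enabled: false
service:
  type: ClusterIP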

A good example would be to allow Jenkins to push and pull the Charts, and no one else. Even though that approach is more secure, it does not provide access to the Charts to the people who might need it. Humans are the true users of ChartMuseum. For scripts, it is easy to know which repository contains the definitions they need and to clone the code, even if that is only for the purpose of retrieving Charts. Humans need a way to search for Charts, to inspect them, and to run them on their laptops or servers.

We opted for a middle solution. We set up basic authentication, which is better than no authentication, but still less secure than allowing access only to those within a VPN or disabling human access altogether.

A note to minishift users
OpenShift ignores Ingress resources, so we'll have to create a Route to accomplish the same effect. Please execute the command that follows.
oc -n charts create route edge --service cm-chartmuseum --hostname $CM_ADDR --insecure-policy Allow

By now, the resources we installed should be up-and-running. We'll confirm that just to be on the safe side.

 1  kubectl -n charts \
 2      rollout status deploy \
 3      cm-chartmuseum

The output should show that the deployment "cm-chartmuseum" was successfully rolled out.
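If you prefer to look at the individual Pods created by the Chart (their generated names will differ from one cluster to another), you can list them as well.

 1  kubectl -n charts get pods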

Next, we'll check whether the application is healthy.

 1  curl "http://$CM_ADDR/health"

The output is as follows.

{"healthy":true}

Now we can open ChartMuseum in a browser.

 1  open "http://$CM_ADDR"

You will be asked for a username and a password. Please use admin for both and click the Sign in button.

Figure 5-1: ChartMuseum's welcome screen

As you can see, there's not much of a UI to look at. We are supposed to interact with ChartMuseum through its API. If we need to visualize our Charts, we'll need to look for a different solution.

Let's see the index.

 1  curl "http://$CM_ADDR/index.yaml"

Since we did not specify the username and the password, we got {"error":"unauthorized"} as the output. We'll need to authenticate every time we want to interact with the ChartMuseum API.

Let's try again but, this time, with the authentication info.

 1  curl -u admin:admin \
 2      "http://$CM_ADDR/index.yaml"

The output is as follows.

apiVersion: v1
entries: {}
generated: "2018-06-02T21:38:30Z"

It should come as no surprise that there are no entries in the museum. We have not yet pushed a Chart. Before we do any pushing, we should add the new repository to our Helm client.

 1  helm repo add chartmuseum \
 2      http://$CM_ADDR \
 3      --username admin \
 4      --password admin

The output states that "chartmuseum" has been added to your repositories. From now on, all the Charts we store in our ChartMuseum installation will be available through our Helm client.

The only thing left is to start pushing Charts to ChartMuseum. We could do that by sending curl requests. However, there is a better way, so we'll skip HTTP requests and install a Helm plugin instead.
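For reference, the plugin is only a convenience on top of ChartMuseum's HTTP API. The raw equivalent, which we won't use in this walkthrough, would be to package the Chart with helm package and POST the resulting .tgz to the /api/charts endpoint; the sketch that follows illustrates the idea.

helm package ../go-demo-3/helm/go-demo-3/

curl -u admin:admin \
    --data-binary "@go-demo-3-0.0.1.tgz" \
    "http://$CM_ADDR/api/charts"

Instead of going down that route, we'll let the plugin do the work.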

 1  helm plugin install \
 2      https://github.com/chartmuseum/helm-push

The plugin added a new command, helm push. Let's give it a spin.

 1  helm push \
 2      ../go-demo-3/helm/go-demo-3/ \
 3      chartmuseum \
 4      --username admin \
 5      --password admin

The output is as follows.

Pushing go-demo-3-0.0.1.tgz to chartmuseum...
Done.

We pushed the Chart located in the ../go-demo-3/helm/go-demo-3/ directory to the chartmuseum repository. We can confirm that the push was indeed successful by retrieving the index.yaml file from the repository.

 1  curl "http://$CM_ADDR/index.yaml" 
 2      -u admin:admin

The output is as follows.

apiVersion: v1
entries:
  go-demo-3:
  - apiVersion: v1
    created: "2018-06-02T21:39:21Z"
    description: A silly demo based on API written in Go and MongoDB
    digest: d8443c78485e80644ff9bfddcf32cc9f270864fb50b75377dbe813b280708519
    home: http://www.devopstoolkitseries.com/
    keywords:
    - api
    - backend
    - go
    - database
    - mongodb
    maintainers:
    - email: [email protected]
      name: Viktor Farcic
    name: go-demo-3
    sources:
    - https://github.com/vfarcic/go-demo-3
    urls:
    - charts/go-demo-3-0.0.1.tgz
    version: 0.0.1
generated: "2018-06-02T21:39:28Z"

We can see that the go-demo-3 Chart is now in the repository. Most of the information comes from the Chart.yaml file we explored in the previous chapter.
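As a reminder, the relevant part of that Chart.yaml looks roughly like the snippet that follows; it is reconstructed from the index entry above, so it is limited to the fields shown there.

name: go-demo-3
version: 0.0.1
description: A silly demo based on API written in Go and MongoDB
home: http://www.devopstoolkitseries.com/
sources:
- https://github.com/vfarcic/go-demo-3
keywords:
- api
- backend
- go
- database
- mongodb
maintainers:
- name: Viktor Farcic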

Finally, we should validate that our local Helm client indeed sees the new Chart.

 1  helm search chartmuseum/

The output is probably disappointing. It states that no results were found. The problem is that, even though the Chart is stored in the ChartMuseum repository, we did not update the repository information cached locally by the Helm client. So, let's update it first.

 1  helm repo update

The output is as follows.

Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "chartmuseum" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. Happy Helming!

If you added more repositories to your Helm client, you might see a bigger output. Those additional repositories do not matter in this context. What does matter is that the chartmuseum was updated and that we can try to search it again.

 1  helm search chartmuseum/

This time, the output is not empty.

NAME                  CHART VERSION APP VERSION DESCRIPTION
chartmuseum/go-demo-3 0.0.1                     A silly demo...

Our Chart is now available in ChartMuseum, and we can access it with our Helm client. Let's inspect the Chart.

 1  helm inspect chartmuseum/go-demo-3

We won't go through the output since it is the same as the one we explored in the previous chapter. The only difference is that this time it is not retrieved from a Chart stored locally, but from ChartMuseum running inside our cluster. From now on, anyone with access to that repository can deploy the go-demo-3 application.

To be on the safe side, and to be fully confident in the solution, we'll deploy the Chart ourselves before announcing to everyone that they can use the new repository to install applications. Just as with the other applications, we'll start by defining the domain we'll use for go-demo-3.

 1  GD3_ADDR="go-demo-3.$LB_IP.nip.io"

Next, we'll output the address as a way to confirm that it looks OK.

 1  echo $GD3_ADDR

The output should be similar to go-demo-3.18.221.122.90.nip.io.

Now we can finally install the go-demo-3 Chart stored in the ChartMuseum repository running inside our cluster. We'll continue using upgrade with -i since that is more friendly to our yet-to-be-defined continuous deployment process.

 1  helm upgrade -i go-demo-3 \
 2      chartmuseum/go-demo-3 \
 3      --namespace go-demo-3 \
 4      --set image.tag=1.0 \
 5      --set ingress.host=$GD3_ADDR \
 6      --reuse-values

We can see from the first line of the output that the release "go-demo-3" does not exist, so Helm decided to install it, instead of doing the upgrade. The rest of the output is the same as the one you saw in the previous chapter. It contains the list of the resources created from the Chart as well as the post-installation instructions.

A note to minishift users
OpenShift ignores Ingress resources, so we'll have to create a Route to accomplish the same effect. Please execute the command that follows.
oc -n go-demo-3 create route edge --service go-demo-3 --hostname $GD3_ADDR --insecure-policy Allow

Next, we'll wait until the application is rolled out and confirm that we can access it.

 1  kubectl -n go-demo-3 \
 2      rollout status deploy go-demo-3
 3
 4  curl "http://$GD3_ADDR/demo/hello"

The latter command outputs the familiar hello, world! message, thus confirming that the application is up-and-running. The only thing left to learn is how to remove Charts from ChartMuseum. But, before we do that, we'll delete go-demo-3 from the cluster. We don't need it anymore.

 1  helm delete go-demo-3 --purge
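
If you want to confirm that the release is indeed gone, including its history, you can list all the releases, even the deleted ones.

 1  helm ls --all go-demo-3

The output should be empty since we purged the release.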

Unfortunately, there is no Helm plugin that will allow us to delete a chart from a repository, so we'll accomplish our mission using curl.

 1  curl -XDELETE \
 2      "http://$CM_ADDR/api/charts/go-demo-3/0.0.1" \
 3      -u admin:admin

The output is as follows.

{"deleted":true}

The chart is deleted from the repository.
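To be on the safe side, we could retrieve the index one more time and confirm that the entries are empty again.

 1  curl "http://$CM_ADDR/index.yaml" \
 2      -u admin:admin

The output should show the same empty entries we saw before we pushed the Chart.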

Now you know everything there is to know about ChartMuseum. OK, maybe you don't know everything you should know, but you do know the basics that will allow you to explore it further.

Now that you know how to push and pull Charts to and from ChartMuseum, you might still be wondering whether there is a UI that will allow us to visualize Charts. Read on.
