Chapter 10. Configuration and Secrets

If you want to keep a secret, you must also hide it from yourself.

George Orwell, 1984

It’s very useful to be able to separate the logic of your Kubernetes application from its configuration: that is, any values or settings that might change over the life of the application. Configuration values commonly include things like environment-specific settings, DNS addresses of third-party services, and authentication credentials.

While you could simply put these values directly into your code, that’s not a very flexible approach. For one thing, changing a configuration value would require a complete rebuild and redeploy of the application. It’s much better to separate these values out from the code and read them in from a file, or from environment variables.

Kubernetes provides a few different ways to help you manage configuration. One is to pass values to the application via environment variables in the Pod spec (see “Environment Variables”). Another is to store configuration data directly in Kubernetes, using the ConfigMap and Secret objects.

In this chapter we’ll explore ConfigMaps and Secrets in detail, and look at some practical techniques for managing configuration and secrets in applications, using the demo application as an example.

ConfigMaps

The ConfigMap is the primary object for storing configuration data in Kubernetes. You can think of it as a named set of key-value pairs. Once you have a ConfigMap, you can supply that data to an application either by creating a file in the Pod, or by injecting it into the Pod’s environment.

In this section, we’ll look at some different ways to get data into a ConfigMap, and then explore the various ways you can extract that data and feed it into your Kubernetes application.

Creating ConfigMaps

Suppose you want to create a YAML configuration file in your Pod’s filesystem named config.yaml, with the following contents:

autoSaveInterval: 60
batchSize: 128
protocols:
  - http
  - https

Given this set of values, how do you turn them into a ConfigMap resource that you can apply to Kubernetes?

One way is to specify that data, as literal YAML values, in the ConfigMap manifest. This is what the manifest for a ConfigMap object looks like:

apiVersion: v1
data:
  config.yaml: |
    autoSaveInterval: 60
    batchSize: 128
    protocols:
      - http
      - https
kind: ConfigMap
metadata:
  name: demo-config
  namespace: demo

You could create a ConfigMap by writing the manifest from scratch, and adding the values from config.yaml into the data section, as we’ve done in this example.

An easier way, though, is to let kubectl do some of the work for you. You can create a ConfigMap directly from a YAML file as follows:

kubectl create configmap demo-config --namespace=demo --from-file=config.yaml
configmap "demo-config" created
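If you only have a key or two, you can also create a ConfigMap without any file at all, using literal key-value pairs on the command line (the name and values here are just for illustration):

```shell
# Create a ConfigMap directly from literal key-value pairs:
kubectl create configmap demo-literal --namespace=demo \
    --from-literal=greeting=Hola \
    --from-literal=batchSize=128
```

Each --from-literal flag becomes one entry in the ConfigMap’s data section.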

To export the manifest file that corresponds to this ConfigMap, run:

kubectl get configmap/demo-config --namespace=demo --export -o yaml \
    >demo-config.yaml

This writes a YAML manifest representation of the cluster’s ConfigMap resource to the file demo-config.yaml. The --export flag strips out metadata we don’t need to keep in our infrastructure repo (see “Exporting Resources”). (Note that --export was deprecated in later versions of kubectl, and eventually removed; with a recent kubectl, omit the flag and delete the server-generated metadata fields yourself.)

Setting Environment Variables from ConfigMaps

Now that we have the required configuration data in a ConfigMap object, how do we then get that data into a container? Let’s look at a complete example using our demo application. You’ll find the code in the hello-config-env directory of the demo repo.

It’s the same demo application we’ve used in previous chapters that listens for HTTP requests and responds with a greeting (see “Looking at the Source Code”).

This time, though, instead of hard coding the string Hello into the application, we’d like to make the greeting configurable. So there’s a slight modification to the handler function to read this value from the environment variable GREETING:

func handler(w http.ResponseWriter, r *http.Request) {
	greeting := os.Getenv("GREETING")
	fmt.Fprintf(w, "%s, 世界\n", greeting)
}

Don’t worry about the exact details of the Go code; it’s just a demo. Suffice it to say that if the GREETING environment variable is present when the program runs, it will use that value when responding to requests. Whatever language you’re using to write applications, it’s a good bet that you’ll be able to read environment variables with it.

Now, let’s create the ConfigMap object to hold the greeting value. You’ll find the manifest file for the ConfigMap, along with the modified Go application, in the hello-config-env directory of the demo repo.

It looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  greeting: Hola

In order to make this data visible in the container’s environment, we need to modify the Deployment slightly. Here’s the relevant part of the demo Deployment:

spec:
  containers:
    - name: demo
      image: cloudnatived/demo:hello-config-env
      ports:
        - containerPort: 8888
      env:
        - name: GREETING
          valueFrom:
            configMapKeyRef:
              name: demo-config
              key: greeting

Note that we’re using a different container image tag from that in previous examples (see “Image Identifiers”). The :hello-config-env tag gets us the version of the demo application that reads the GREETING variable: cloudnatived/demo:hello-config-env.

The second point of interest is the env section. Remember from “Environment Variables” that you can create environment variables with literal values by adding a name/value pair.

We still have name here, but instead of value, we’ve specified valueFrom. This tells Kubernetes that, rather than taking a literal value for the variable, it should look elsewhere to find the value.

configMapKeyRef tells it to reference a specific key in a specific ConfigMap. The name of the ConfigMap to look at is demo-config, and the key we want to look up is greeting. We created this data with the ConfigMap manifest, so it should now be available to read into the container’s environment.

If the ConfigMap doesn’t exist, the Deployment won’t be able to run (its Pod will show a status of CreateContainerConfigError).
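If you’d rather have the container start even when the ConfigMap or key is missing, the Kubernetes API lets you mark the reference as optional; the variable will simply be absent from the environment:

```yaml
env:
  - name: GREETING
    valueFrom:
      configMapKeyRef:
        name: demo-config
        key: greeting
        optional: true   # missing ConfigMap or key no longer blocks startup
```

This pairs well with application code that supplies a sensible default for unset variables.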

That’s everything you need to make the updated application work, so go ahead and deploy the manifests to your Kubernetes cluster. From the demo repo directory, run the following command:

kubectl apply -f hello-config-env/k8s/
configmap "demo-config" created
deployment.extensions "demo" created

As before, to see the application in your web browser, you’ll need to forward a local port to the Pod’s port 8888:

kubectl port-forward deploy/demo 9999:8888
Forwarding from 127.0.0.1:9999 -> 8888
Forwarding from [::1]:9999 -> 8888

(We didn’t bother creating a Service this time; while you’d use a Service with a real production app, for this example we’ve just used kubectl to forward the local port directly to the demo Deployment.)

If you point your web browser to http://localhost:9999/ you should see, if all is well:

Hola, 世界

Setting the Whole Environment from a ConfigMap

While you can set one or two environment variables from individual ConfigMap keys, as we saw in the previous example, that could get tedious for a large number of variables.

Fortunately, there’s an easy way to take all the keys from a ConfigMap and turn them into environment variables, using envFrom:

spec:
  containers:
    - name: demo
      image: cloudnatived/demo:hello-config-env
      ports:
        - containerPort: 8888
      envFrom:
        - configMapRef:
            name: demo-config

Now every setting in the demo-config ConfigMap will be a variable in the container’s environment. Because in our example ConfigMap the key is called greeting, the environment variable will also be named greeting (in lowercase). To make your environment variable names uppercase when you’re using envFrom, change them in the ConfigMap.

You can also set other environment variables for the container in the normal way, using env; either by putting the literal values in the manifest file, or using a ConfigMapKeyRef as in our previous example. Kubernetes allows you to use either env, envFrom, or both at once, to set environment variables.

If a variable set in env has the same name as one set in envFrom, it will take precedence. For example, if you set the variable GREETING in both env and a ConfigMap referenced in envFrom, the value specified in env will override the one from the ConfigMap.
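For example, this spec (a sketch based on our demo manifests) sets GREETING both ways; the container sees the literal value from env:

```yaml
env:
  - name: GREETING
    value: Bonjour       # this literal value wins
envFrom:
  - configMapRef:
      name: demo-config  # any GREETING key here is overridden by env
```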

Using Environment Variables in Command Arguments

While it’s useful to be able to put configuration data into a container’s environment, sometimes you need to supply it as command-line arguments for the container’s entrypoint instead.

You can do this by sourcing the environment variables from the ConfigMap, as in the previous example, but using the special Kubernetes syntax $(VARIABLE) to reference them in the command-line arguments.

In the hello-config-args directory of the demo repo, you’ll find this example in the deployment.yaml file:

spec:
  containers:
    - name: demo
      image: cloudnatived/demo:hello-config-args
      args:
        - "-greeting"
        - "$(GREETING)"
      ports:
        - containerPort: 8888
      env:
        - name: GREETING
          valueFrom:
            configMapKeyRef:
              name: demo-config
              key: greeting

Here we’ve added an args field for the container spec, which will pass our custom arguments to the container’s default entrypoint (/bin/demo).

Kubernetes replaces anything of the form $(VARIABLE) in a manifest with the value of the environment variable VARIABLE. Since we’ve created the GREETING variable and set its value from the ConfigMap, it’s available for use in the container’s command line.
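If you ever need a literal $(...) in an argument, rather than having Kubernetes expand it, escape the dollar sign by doubling it; $$(VAR) is passed through as the literal text $(VAR):

```yaml
args:
  - "-greeting"
  - "$$(GREETING)"   # the container receives the literal string $(GREETING)
```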

When you apply these manifests, the value of GREETING will be passed to the demo app in this way:

kubectl apply -f hello-config-args/k8s/
configmap "demo-config" configured
deployment.extensions "demo" configured

You should see the effect in your web browser:

Salut, 世界

Creating Config Files from ConfigMaps

We’ve seen a couple of different ways of getting data from Kubernetes ConfigMaps into applications: via the environment, and via the container command line. More complex applications, however, often expect to read their configuration from files on disk.

Fortunately, Kubernetes gives us a way to create such files directly from a ConfigMap. First, let’s change our ConfigMap so that instead of a single key, it stores a complete YAML file (which happens to only contain one key, but it could be a hundred, if you like):

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  config: |
    greeting: Buongiorno

Instead of setting the key greeting, as we did in the previous example, we’re creating a new key called config, and assigning it a block of data (the pipe symbol | in YAML introduces a literal block: everything indented beneath it is treated as a single multiline string). This is the data:

greeting: Buongiorno

It happens to be valid YAML, but don’t be confused by that; it could be JSON, TOML, plain text, or any other format. Whatever it is, Kubernetes will eventually write the whole block of data, as is, to a file on our container.

Now that we’ve stored the necessary data, let’s deploy it to Kubernetes. In the hello-config-file directory of the demo repo, you’ll find the Deployment template, containing:

spec:
  containers:
    - name: demo
      image: cloudnatived/demo:hello-config-file
      ports:
        - containerPort: 8888
      volumeMounts:
      - mountPath: /config/
        name: demo-config-volume
        readOnly: true
  volumes:
  - name: demo-config-volume
    configMap:
      name: demo-config
      items:
      - key: config
        path: demo.yaml

Looking at the volumes section, you can see that we create a Volume named demo-config-volume, from the existing demo-config ConfigMap.

In the container’s volumeMounts section, we mount this volume on the mountPath: /config/, select the key config, and write it to the path demo.yaml. The result of this will be that Kubernetes will create a file in the container at /config/demo.yaml, containing the demo-config data in YAML format:

greeting: Buongiorno
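Incidentally, the items list is optional. If you leave it out, Kubernetes creates one file in the mount directory for every key in the ConfigMap, named after the key:

```yaml
volumes:
  - name: demo-config-volume
    configMap:
      name: demo-config
      # No items list: every key becomes a file, so this ConfigMap
      # would produce /config/config rather than /config/demo.yaml.
```

The items list is useful when you want to rename the files, or expose only a subset of the keys.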

The demo application will read its config from this file on startup. As before, apply the manifests using this command:

kubectl apply -f hello-config-file/k8s/
configmap "demo-config" configured
deployment.extensions "demo" configured

You should see the results in your web browser:

Buongiorno, 世界

If you want to see what the ConfigMap data looks like in the cluster, run the following command:

kubectl describe configmap/demo-config
Name:         demo-config
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1",
"data":{"config":"greeting: Buongiorno\n"},"kind":"ConfigMap","metadata":
{"annotations":{},"name":"demo-config","namespace":"default...

Data
====
config:
greeting: Buongiorno

Events:  <none>

If you update a ConfigMap and change its values, the corresponding file (/config/demo.yaml in our example) will be updated automatically. Some applications may autodetect that their config file has changed and reread it; others may not.

One option is to redeploy the application to pick up the changes (see “Updating Pods on a Config Change”), but this may not be necessary if the application has a way to trigger a live reload, such as a Unix signal (for example SIGHUP) or running a command in the container.
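For instance, assuming your application reloads its config on SIGHUP and runs as process 1 in the container, you could trigger a reload like this (demo-pod-name is a placeholder for your actual Pod name):

```shell
# Send SIGHUP to the container's main process to request a config reload:
kubectl exec demo-pod-name -- kill -HUP 1
```

Whether this works depends entirely on how the application handles signals.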

Updating Pods on a Config Change

Suppose you have a Deployment running in your cluster, and you want to change some values in its ConfigMap. If you’re using a Helm chart (see “Helm: A Kubernetes Package Manager”) there’s a neat trick to have it automatically detect a config change and reload your Pods. Add this annotation to your Deployment spec:

checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") .
    | sha256sum }}

Because the Deployment template now includes a hash sum of the config settings, if these settings change, then so will the hash. When you run helm upgrade, Helm will detect that the Deployment spec has changed, and restart all the Pods.
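In context, the annotation belongs on the Pod template inside the Deployment, so that a changed hash forces a rollout of the Pods. In a Helm chart it might look like this (assuming the chart keeps its ConfigMap in configmap.yaml):

```yaml
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```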

Kubernetes Secrets

We’ve seen that the Kubernetes ConfigMap object provides a flexible way of storing and accessing configuration data in the cluster. However, most applications have some config data that is secret and sensitive, such as passwords or API keys. While we could use ConfigMaps to store these, that’s not an ideal solution.

Instead, Kubernetes provides a special type of object intended to store secret data: the Secret. Let’s see an example of how to use it with the demo application.

First, here’s the Kubernetes manifest for the Secret (see hello-secret-env/k8s/secret.yaml):

apiVersion: v1
kind: Secret
metadata:
  name: demo-secret
stringData:
  magicWord: xyzzy

In this example, the secret key is magicWord, and the secret value is the word xyzzy (a very useful word in computing). As with a ConfigMap, you can put multiple keys and values into a Secret. Here, just to keep things simple, we’re only using one key-value pair.
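The stringData field is a convenience: it lets you write the value in plain text, and Kubernetes base64-encodes it for you on the way in. An equivalent manifest using the data field supplies the value already base64-encoded:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret
data:
  magicWord: eHl6enk=   # base64 encoding of "xyzzy"
```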

Using Secrets as Environment Variables

Just like ConfigMaps, Secrets can be made visible to containers by putting them into environment variables, or mounting them as a file on the container’s filesystem. In this example, we’ll set an environment variable to the value of the Secret:

spec:
  containers:
    - name: demo
      image: cloudnatived/demo:hello-secret-env
      ports:
        - containerPort: 8888
      env:
        - name: GREETING
          valueFrom:
            secretKeyRef:
              name: demo-secret
              key: magicWord

We set the environment variable GREETING exactly as we did when using a ConfigMap, except that now it’s a secretKeyRef instead of a configMapKeyRef (see “Setting Environment Variables from ConfigMaps”).

Run the following command in the demo repo directory to apply these manifests:

kubectl apply -f hello-secret-env/k8s/
deployment.extensions "demo" configured
secret "demo-secret" created

As before, forward a local port to the Deployment so you can see the results in your web browser:

kubectl port-forward deploy/demo 9999:8888
Forwarding from 127.0.0.1:9999 -> 8888
Forwarding from [::1]:9999 -> 8888

Browse to http://localhost:9999/ and you should see:

The magic word is "xyzzy"

Writing Secrets to Files

In this example we’ll mount the Secret on the container as a file. You’ll find the code for this example in the hello-secret-file folder of the demo repo.

In order to mount the Secret in a file on the container, we use a Deployment like this:

spec:
  containers:
    - name: demo
      image: cloudnatived/demo:hello-secret-file
      ports:
        - containerPort: 8888
      volumeMounts:
        - name: demo-secret-volume
          mountPath: "/secrets/"
          readOnly: true
  volumes:
    - name: demo-secret-volume
      secret:
        secretName: demo-secret

Just as we did in “Creating Config Files from ConfigMaps”, we create a Volume (demo-secret-volume in this example), and mount it on the container in the volumeMounts section of the spec. The mountPath is /secrets, and Kubernetes will create one file in this directory for each of the key-value pairs defined in the Secret.

We’ve only defined one key-value pair in the example Secret, named magicWord, so this manifest will create the read-only file /secrets/magicWord on the container, and the contents of the file will be the secret data.
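By default, secret files are created with fairly permissive modes; if your application is picky about permissions, you can set a file mode on the secret volume (0400 here is just an example):

```yaml
volumes:
  - name: demo-secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400   # files readable only by their owner
```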

If you apply this manifest in the same way as for the previous example, you should see the same results:

The magic word is "xyzzy"

Reading Secrets

In the previous section we were able to use kubectl describe to see the data inside the ConfigMap. Can we do the same with a Secret?

kubectl describe secret/demo-secret
Name:         demo-secret
Namespace:    default
Labels:       <none>
Annotations:
Type:         Opaque

Data
====
magicWord:  5 bytes

Notice that this time, the actual data is not shown. Kubernetes Secrets are Opaque, which means they’re not shown in kubectl describe output, in log messages, or in the terminal. This prevents secret data being exposed accidentally.

You can see an obfuscated version of the secret data by using kubectl get with YAML output format:

kubectl get secret/demo-secret -o yaml
apiVersion: v1
data:
  magicWord: eHl6enk=
kind: Secret
metadata:
...
type: Opaque

base64

What’s that eHl6enk=? That doesn’t look much like our original secret data. In fact, it’s a base64 representation of the Secret. Base64 is a scheme for encoding arbitrary binary data as a character string.

Because the secret data could be nonprintable binary data (for example, a TLS encryption key), Kubernetes Secrets are always stored in base64 format.

The text eHl6enk= is the base64-encoded version of our secret word xyzzy. You can verify this using the base64 --decode command in the terminal:

echo "eHl6enk=" | base64 --decode
xyzzy

So although Kubernetes protects you from accidentally printing secret data to the terminal, or in log files, if you have permission to read the Secrets in a particular namespace, you can get the data in base64 format and then decode it.

If you need to base64-encode some text (for instance, to add it to a Secret), use the base64 tool without any arguments. Note that echo appends a trailing newline, which would be encoded along with your text; use echo -n to encode exactly the string you typed:

echo -n xyzzy | base64
eHl6enk=

Access to Secrets

Who can read or edit Secrets? That’s controlled by the Kubernetes access control mechanism, RBAC, which we’ll talk about in much more detail in “Introducing Role-Based Access Control (RBAC)”. If you’re using a cluster that doesn’t support RBAC or doesn’t have it enabled, then all Secrets are accessible to any user or any container. (You absolutely shouldn’t be running any cluster in production without RBAC, as we’ll explain.)

Encryption at Rest

What about someone with access to the etcd database where all Kubernetes information is stored? Could they access the secret data, even without API permissions to read the Secret object?

From Kubernetes version 1.7 onwards, encryption at rest is supported. That means that the secret data in the etcd database is actually stored encrypted on disk, and unreadable even to someone who can access the database directly. Only the Kubernetes API server has the key to decrypt this data. In a properly configured cluster, encryption at rest should be enabled.

You can check whether encryption at rest is enabled in your cluster by running:

kubectl describe pod -n kube-system -l component=kube-apiserver | grep encryption
      --experimental-encryption-provider-config=...

If you don’t see the experimental-encryption-provider-config flag, then encryption at rest is not enabled. (If you’re using Google Kubernetes Engine, or some other managed Kubernetes services, your data is encrypted using a different mechanism and you won’t see this flag. Check with your Kubernetes provider to find out whether etcd data is encrypted or not.)

Keeping Secrets

Sometimes you’ll have Kubernetes resources that you never want to be deleted from the cluster, such as a particularly important Secret. Using a Helm-specific annotation, you can prevent a resource from being removed:

kind: Secret
metadata:
  annotations:
    "helm.sh/resource-policy": keep

Secrets Management Strategies

In the example in the previous section, our secret data was protected against unauthorized access once it was stored in the cluster. But the secret data was represented in plain text in our manifest files.

You should never expose secret data like this in files that are committed to source control. So how do you manage and store secret data securely before it’s applied to the Kubernetes cluster?

Whatever tool or strategy you choose for managing secrets in your applications, you’ll need it to answer at least the following questions:

  1. Where do you store secrets so that they are highly available?

  2. How do you make secrets available to your running applications?

  3. What needs to happen to your running applications when you rotate or change secrets?

In this section we’ll look at three of the most popular secrets management strategies, and examine how each of them tackles these questions.

Encrypt Secrets in Version Control

The first option for secrets management is to store your secrets directly in code, in version control repositories, but in encrypted form, and decrypt them at deploy time.

This is probably the simplest choice. Secrets are put directly into source code repos, but never in plain text. Instead, they are encrypted in a form that can only be decrypted with a certain trusted key.

When you deploy the application, the secrets are decrypted just before the Kubernetes manifests are applied to the cluster. The application can then read and use the secrets just like any other configuration data.

Encrypting secrets in version control lets you review and track changes to secrets, just as you would changes to application code. And so long as your version control repositories are highly available, your secrets will be highly available as well.

To change or rotate secrets, just decrypt them in your local copy of the source, update them, re-encrypt, and commit the change to version control.

While this strategy is simple to implement and has no dependencies except the key and the encryption/decryption tool (see “Encrypting Secrets with Sops”), there’s one potential drawback. If the same secret is used by multiple applications, they all need a copy of it in their source code. This means rotating the secret is more work, because you have to make sure you’ve found and changed all instances of it.

There is also a serious risk of accidentally committing plain-text secrets to version control. Mistakes do happen, and even with private version control repositories, any secret so committed should be considered compromised, and you should rotate it as soon as possible. You may want to restrict access to the encryption key to only certain individuals, rather than handing it out to all developers.

Nonetheless, the encrypt secrets in source code strategy is a good starting point for small organizations with noncritical secrets. It’s relatively low-touch and easy to set up, while still being flexible enough to handle multiple apps and different types of secret data. In the final section of this chapter, we’ll outline some options for encryption/decryption tools you can use to do this, but first, let’s briefly describe the other secrets management strategies.

Store Secrets Remotely

Another option for secrets management is to keep them in a file (or multiple files) in a remote, secure file storage, such as an AWS S3 bucket, or Google Cloud Storage. When you deploy an individual application, the files would be downloaded, decrypted, and provided to the application. This is similar to the encrypt secrets in version control option, except that instead of living in the source code repo, the secrets are stored centrally. You can use the same encryption/decryption tool for both strategies.

This solves the problem of secrets being duplicated across multiple code repos, but it does need a little extra engineering and coordination to pull the relevant secrets file down at deploy time. This gives you some of the benefits of a dedicated secrets management tool, but without having to set up and manage an extra software component, or refactoring your apps to talk to it.

Because your secrets are not in version control, though, you’ll need a process to handle changing secrets in an orderly way, ideally with an audit log (who changed what, when, and why), and some kind of change control procedure equivalent to a pull request review and approval.

Use a Dedicated Secrets Management Tool

While the encrypt secrets in source code and keep secrets in a bucket strategies are fine for most organizations, at very large scale you may need to think about using a dedicated secrets management tool, such as Hashicorp’s Vault, Square’s Keywhiz, AWS Secrets Manager, or Azure’s Key Vault. These tools handle securely storing all of your application secrets in one central place in a highly available way, and can also control which users and service accounts have permissions to add, remove, change, or view secrets.

In a secrets management system, all actions are audited and reviewable, making it easier to analyze security breaches and prove regulatory compliance. Some of these tools also provide the ability to automatically rotate secrets on a regular basis, which is not only a good idea in any case, but is also required by many corporate security policies.

How do applications get their data from a secrets management tool? One common way is to use a service account with read-only access to the secrets vault, so that each application can only read the secrets it needs. Developers can have their own individual credentials, with permission to read or write secrets for only the applications that they’re responsible for.

While a central secrets management system is the most powerful and flexible option available, it also adds significant complexity to your infrastructure. As well as setting up and running the secrets vault, you will need to add tooling or middleware to each application and service that consumes secrets. While applications can be refactored or redesigned to access the secrets vault directly, this may be more expensive and time-consuming than simply adding a layer in front of them that gets secrets and puts them in the application’s environment or config file.

Of the various options, one of the most popular is Vault, from Hashicorp.

Recommendations

While, at first glance, a dedicated secrets management system such as Vault might seem to be the logical choice, we don’t recommend you start with this. Instead, try out a lightweight encryption tool such as Sops (see “Encrypting Secrets with Sops”), encrypting secrets directly in your source code.

Why? Well, you may not actually have that many secrets to manage. Unless your infrastructure is very complex and interdependent, which you should be avoiding anyway, any individual application should only need one or two pieces of secret data: API keys and tokens for other services, for example, or database credentials. If a given app really needs a great many different secrets, you might consider putting them all in a single file and encrypting that instead.

We take a pragmatic approach to secrets management, as we have with most issues throughout this book. If a simple, easy-to-use system solves your problem, start there. You can always switch to a more powerful or complicated setup later. It’s often hard to know at the beginning of a project exactly how much secret data will be involved, and if you’re not sure, choose the option that gets you up and running most quickly, without limiting your choices in the future.

That said, if you know from the outset that there are regulatory or compliance restrictions on your handling of secret data, it’s best to design with that in mind, and you will probably need to look at a dedicated secrets management solution.

Encrypting Secrets with Sops

Assuming that you’re going to do your own encryption, at least to start with, you’ll need an encryption tool that can work with your source code and data files. Sops (short for secrets operations), from the Mozilla project, is an encryption/decryption tool that can work with YAML, JSON, or binary files, and supports multiple encryption backends, including PGP/GnuPG, Azure Key Vault, AWS’s Key Management Service (KMS), and Google’s Cloud KMS.

Introducing Sops

Let’s introduce Sops by showing what it does. Rather than encrypting the whole file, Sops encrypts only the individual secret values. For example, if your plain-text file contains:

password: foo

when you encrypt it with Sops, the resulting file will look like this:

password: ENC[AES256_GCM,data:p673w==,iv:YY=,aad:UQ=,tag:A=]

This makes it easy to edit and review code, especially in pull requests, without needing to decrypt the data in order to understand what it is.

Visit the Sops project home page for installation and usage instructions.

In the remainder of this chapter, we’ll run through some examples of using Sops, see how it works with Kubernetes, and add some Sops-managed secrets to our demo app. But first, we should mention that other secrets encryption tools are available. If you’re already using a different tool, that’s fine: as long as you can encrypt and decrypt secrets within plain-text files in the same way as Sops does, use whichever tool works best for you.

We’re fans of Helm, as you’ll know if you’ve read this far, and if you need to manage encrypted secrets in a Helm chart, you can do that with Sops using the helm-secrets plug-in. When you run helm upgrade or helm install, helm-secrets will decrypt your secrets for deployment. For more information about Sops, including installation and usage instructions, consult the GitHub repo.

Encrypting a File with Sops

Let’s try out Sops by encrypting a file. As we mentioned, Sops doesn’t actually handle encryption itself; it delegates that to a backend such as GnuPG (a popular open source implementation of the Pretty Good Privacy, or PGP, protocol). We’ll use Sops with GnuPG in this example to encrypt a file containing a secret. The end result will be a file that you can safely commit to version control.

We won’t get into the details of how PGP encryption works, but just know that, like SSH and TLS, it’s a public key cryptosystem. Instead of encrypting data with a single key, it actually uses a pair of keys: one public, one private. You can safely share your public key with others, but you should never give out your private key.

Let’s generate your key pair now. First, install GnuPG, if you haven’t got it already.

Once that’s installed, run this command to generate a new key pair:

gpg --gen-key

Once your key has been successfully generated, make a note of the Key fingerprint (the string of hex digits): this uniquely identifies your key, and you’ll need it in the next step.
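If you forget to note it down, you can display the fingerprint again at any time with gpg --list-secret-keys --fingerprint. One wrinkle: gpg prints the fingerprint in spaced groups of four hex digits, and Sops expects it with the spaces removed. A quick way to strip them (using the same example fingerprint we'll use below):

```shell
# gpg prints fingerprints in spaced groups of four hex digits;
# strip the spaces before passing the result to Sops's --pgp flag.
# (This fingerprint is just an example, not a real key.)
echo "E0A9 AF92 4D5A 0C12 3F32  108E AF3A A2B4 935E A0AB" | tr -d ' '
# → E0A9AF924D5A0C123F32108EAF3AA2B4935EA0AB
```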

Now that you have a key pair, let’s encrypt a file using Sops and your new PGP key. You’ll also need Sops installed on your machine, if it isn’t already. There are binaries available for download, or you can install it with Go:

go get -u go.mozilla.org/sops/cmd/sops
sops -v
sops 3.0.5 (latest)

Now let’s create a test secret file to encrypt:

echo "password: secret123" > test.yaml
cat test.yaml
password: secret123

And finally, use Sops to encrypt it. Pass your key fingerprint to the --pgp switch, with the spaces removed, like this:

sops --encrypt --in-place --pgp E0A9AF924D5A0C123F32108EAF3AA2B4935EA0AB test.yaml
cat test.yaml
password: ENC[AES256_GCM,data:Ny220Ml8JoqP,iv:HMkwA8eFFmdUU1Dle6NTpVgy8vlQu/
6Zqx95Cd/+NL4=,tag:Udg9Wef8coZRbPb0foOOSA==,type:str]
sops:
  ...

Success! Now the test.yaml file is encrypted securely: the value of password is scrambled and can only be decrypted with your private key. You will also notice that Sops added some metadata to the bottom of the file, so that it will know how to decrypt it in the future.

Another nice feature of Sops is that only the value of password is encrypted, so the YAML format of the file is preserved, and you can see that the encrypted data is labeled password. If you have a long list of key-value pairs in your YAML file, Sops will encrypt only the values, leaving the keys alone.
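For instance, a file with several key-value pairs would come out looking something like this after encryption (we’ve elided the encrypted data for brevity):

```yaml
username: ENC[AES256_GCM,data:...,tag:...,type:str]
password: ENC[AES256_GCM,data:...,tag:...,type:str]
```

This is what makes encrypted files reviewable in pull requests: a diff shows which keys changed, even though the values themselves are opaque.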

To make sure that we can get the encrypted data back, and to check that it matches what we put in, run:

sops --decrypt test.yaml
You need a passphrase to unlock the secret key for
user: "Justin Domingus <[email protected]>"
2048-bit RSA key, ID 8200750F, created 2018-07-27 (main key ID 935EA0AB)
Enter passphrase: *highly secret passphrase*

password: secret123

Remember the passphrase that you chose when you generated your key pair? We hope so, because you need to type it in now! If you remembered it right, you will see the decrypted value of password: secret123.

Now that you know how to use Sops, you can encrypt any sensitive data in your source code, whether that’s application config files, Kubernetes YAML resources, or anything else.

When it comes time to deploy the application, use Sops in decrypt mode to produce the plain-text secrets that you need (but remember to delete the plain-text files, and don’t check them in to version control!).
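One cheap safeguard against accidentally committing a plain-text secrets file is to grep for the ENC[ marker before committing. This isn’t a Sops feature, just a convention you could wire into a pre-commit hook; the here-doc below simulates an encrypted file (modeled on the Sops output above) so the example is self-contained:

```shell
# Simulate an encrypted secrets file (sample data modeled on Sops output).
cat > test.yaml <<'EOF'
password: ENC[AES256_GCM,data:Ny220Ml8JoqP,tag:Udg9Wef8coZRbPb0foOOSA==,type:str]
EOF

# Crude check: at least one value carries the Sops ENC[ marker.
if grep -q 'ENC\[' test.yaml; then
  echo "looks encrypted"
else
  echo "PLAIN TEXT - do not commit!"
fi
```

Running this prints "looks encrypted"; the same check against a plain-text file would print the warning instead.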

In the next chapter, we’ll show you how to use Sops this way with Helm charts. You can not only decrypt secrets when deploying your application with Helm, but also use different sets of secrets, depending on the deployment environment: for example, staging versus production (see “Managing Helm Chart Secrets with Sops”).

Using a KMS Backend

If you are using Amazon KMS or Google Cloud KMS for key management in the cloud, you can also use them with Sops. Using a KMS key works exactly the same way as in our PGP example, but the metadata stored in the file will be different: the sops: section at the bottom might look something like this:

sops:
  kms:
  - created_at: 1441570389.775376
    enc: CiC....Pm1Hm
    arn: arn:aws:kms:us-east-1:656532927350:key/920aff2e...

Just like with our PGP example, the key ID (arn:aws:kms...) is embedded in the file so that Sops knows how to decrypt it later.

Summary

Configuration and secrets are among the topics that people ask us about the most in relation to Kubernetes. We’re glad to be able to devote a chapter to them, and to outline some ways you can connect your applications with the settings and data they need.

The most important things we’ve learned:

  • Separate your configuration data from application code and deploy it using Kubernetes ConfigMaps and Secrets. That way, you don’t need to redeploy your app every time you change a password.

  • You can get data into ConfigMaps by writing it directly in your Kubernetes manifest file, or use kubectl to convert an existing YAML file into a ConfigMap spec.

  • Once data is in a ConfigMap, you can insert it into a container’s environment, or into the command-line arguments of its entrypoint. Alternatively, you can write the data to a file that is mounted on the container.

  • Secrets work just like ConfigMaps, except that the data is encrypted at rest, and obfuscated in kubectl output.

  • A simple, flexible way to manage secrets is to store them directly in your source code repo, but encrypt them using Sops or another text-based encryption tool.

  • Don’t overthink secrets management, especially at first. Start with something simple that’s easy to set up for developers.

  • Where secrets are shared by many applications, you can store them (encrypted) in a cloud bucket, and fetch them at deploy time.

  • For enterprise-level secrets management, you’ll need a dedicated service such as Vault. But don’t start with Vault, because you may end up not needing it. You can always move to Vault later.

  • Sops is an encryption tool that works with key-value files like YAML and JSON. It can get its encryption key from a local GnuPG keyring, or cloud key management services like Amazon KMS and Google Cloud KMS.
