Chapter 14. The Ecosystem

In this chapter, we take a look at the wider Kubernetes ecosystem; that is, software in the Kubernetes incubator and related projects such as Helm and kompose.

14.1 Installing Helm, the Kubernetes Package Manager

Problem

You do not want to write all the Kubernetes manifests by hand. Instead, you would like to be able to search for a package in a repository and download and install it with a command-line interface.

Solution

Use Helm. Helm is the Kubernetes package manager; it defines a Kubernetes package as a set of manifests and some metadata. The manifests are actually templates. The values in the templates are filled when the package is instantiated by Helm. A Helm package is called a chart.
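For example, a deployment template in a chart references values that live in the chart's values.yaml file and get filled in at install time. A minimal sketch (the image and tag value names are illustrative, not taken from any particular chart):

# templates/deployment.yaml (fragment of the pod spec)
spec:
  containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image }}:{{ .Values.tag }}"

# values.yaml (defaults, overridable at install time)
image: nginx
tag: "1.7.9"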

Helm has a client-side CLI called helm and a server called tiller. You interact with charts using helm, and tiller runs within your Kubernetes cluster as a regular Kubernetes deployment.

You can build Helm from source or download it from the GitHub release page, extract the archive, and move the helm binary into your $PATH. For example, on macOS, for the v2.7.2 release of Helm, do this:

$ wget https://storage.googleapis.com/kubernetes-helm/helm-v2.7.2-darwin-amd64.tar.gz

$ tar -xvf helm-v2.7.2-darwin-amd64.tar.gz

$ sudo mv darwin-amd64/helm /usr/local/bin

$ helm version

Now that the helm command is in your $PATH, you can use it to start the server-side component, tiller, on your Kubernetes cluster. Here we use Minikube as an example:

$ kubectl get nodes
NAME       STATUS    AGE       VERSION
minikube   Ready     4m        v1.7.8

$ helm init
$HELM_HOME has been configured at /Users/sebgoa/.helm.

Tiller (the helm server side component) has been installed into your Kubernetes
Cluster. Happy Helming!

$ kubectl get pods --all-namespaces | grep tiller
kube-system   tiller-deploy-1491950541-4kqxx   0/1  ContainerCreating  0  1s

You’re all set now and can install one of the over 100 packages available.

14.2 Using Helm to Install Applications

Problem

You’ve installed the helm command (see Recipe 14.1), and now you would like to search for charts and deploy them.

Solution

By default, Helm comes with some chart repositories configured. These repositories are maintained by the community; you can read more about them on GitHub. There are over 100 charts available.

For example, let’s assume you would like to deploy Redis. You can search for redis in the Helm repositories and then install it. Helm will take the chart and create an instance of it called a release.

First, verify that tiller is running and that you have the default repositories configured:

$ kubectl get pods --all-namespaces | grep tiller
kube-system   tiller-deploy-1491950541-4kqxx   1/1   Running   0   3m

$ helm repo list
NAME   	URL
stable 	http://storage.googleapis.com/kubernetes-charts

You can now search for a Redis package:

$ helm search redis
NAME                    	VERSION	DESCRIPTION
stable/redis            	0.5.1  	Open source, advanced key-value store. It ...
testing/redis-cluster   	0.0.5  	Highly available Redis cluster with multiple...
testing/redis-standalone 0.0.1   Standalone Redis Master
stable/sensu            	0.1.2  	Sensu monitoring framework backed by the ...
testing/example-todo    	0.0.6  	Example Todo application backed by Redis

And use helm install to create a release like so:

$ helm install stable/redis

Helm will create all the Kubernetes objects defined in the chart; for example, a secret (see Recipe 8.2), a PVC (see Recipe 8.5), a service (see Recipe 5.1), and/or a deployment. Together, these objects make up a Helm release that you can manage as a single unit.

The end result is that you will have a redis pod running:

$ helm ls
NAME           REVISION	 UPDATED                   STATUS  	 CHART        ...
broken-badger  1         Fri May 12 11:50:43 2017  DEPLOYED  redis-0.5.1  ...

$ kubectl get pods
NAME                                   READY     STATUS    RESTARTS   AGE
broken-badger-redis-4040604371-tcn14   1/1       Running   0          3m
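Because a release is managed as a single unit, you can also customize it at install time and remove it when you no longer need it. For example (persistence.enabled is an assumption about the redis chart's parameters; use helm inspect values stable/redis to see the real keys):

$ helm install stable/redis --set persistence.enabled=false

$ helm delete broken-badger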

To learn more about Helm charts and how to create your own charts, see Recipe 14.3.

14.3 Creating Your Own Chart to Package Your Application with Helm

Problem

You have written an application with multiple Kubernetes manifests and would like to package it as a Helm chart.

Solution

Use the helm create and helm package commands.

With helm create, you can generate the skeleton of your chart. Issue the command in your terminal, specifying the name of your chart. For example, to create an oreilly chart:

$ helm create oreilly
Creating oreilly

$ tree oreilly/
oreilly/
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── ingress.yaml
│   └── service.yaml
└── values.yaml

2 directories, 7 files

If you have already written all your manifests, you can copy them into the templates/ directory and delete what the scaffolding created. If you want to templatize your manifests, write the values that need to be substituted into them in the values.yaml file. Edit the chart metadata in Chart.yaml, and if you have any dependent charts, put them in the charts/ directory.
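For reference, the generated Chart.yaml holds only a few lines of metadata and will look roughly like this for the oreilly chart (the description and version are the scaffolded defaults, which you are free to edit):

apiVersion: v1
name: oreilly
description: A Helm chart for Kubernetes
version: 0.1.0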

You can test your chart locally by running:

$ helm install ./oreilly

Finally, you can package it with helm package oreilly/. This will generate a tarball of your chart, copy it to a local chart repository, and generate a new index.yaml file for your local repository. Look into the ~/.helm directory and you should see something similar to the following:

$ ls -l ~/.helm/repository/local/
total 16
-rw-r--r--  1 sebgoa  staff   379 Dec 16 21:25 index.yaml
-rw-r--r--  1 sebgoa  staff  1321 Dec 16 21:25 oreilly-0.1.0.tgz

A helm search oreilly should now return your local chart:

$ helm search oreilly
NAME         	VERSION	DESCRIPTION
local/oreilly	0.1.0  	A Helm chart for Kubernetes
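You can then install it from the local repository like any other chart:

$ helm install local/oreilly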

See Also

14.4 Converting Your Docker Compose Files to Kubernetes Manifests

Problem

You’ve started using containers with Docker and written some Docker compose files to define your multicontainer application. Now you would like to start using Kubernetes, and wonder if and how you can reuse your Docker compose files.

Solution

Use kompose, a CLI tool that converts your Docker compose files into Kubernetes manifests.

To start, download kompose from the GitHub release page and move it to your $PATH, for convenience.

For example, on macOS, do the following:

$ wget https://github.com/kubernetes-incubator/kompose/releases/download/v1.6.0/kompose-darwin-amd64

$ sudo mv kompose-darwin-amd64 /usr/local/bin/kompose

$ sudo chmod +x /usr/local/bin/kompose

$ kompose version
1.6.0 (ae4ef9e)

Given the following Docker compose file that starts a redis container:

version: '2'

services:
  redis:
    image: redis
    ports:
    - "6379:6379"

You can automatically convert this into Kubernetes manifests with the following command:

$ kompose convert --stdout

The manifests will be printed to stdout, and you will see a Kubernetes service and a deployment as a result. To create these objects on your cluster directly, use the Docker Compose-compatible up command like so:

$ kompose up
Warning

Some Docker compose directives are not converted to Kubernetes. In this case, kompose prints out a warning informing you that the conversion did not happen.

In general this does not cause problems, but it is possible that the conversion will not result in a fully working manifest in Kubernetes. This is expected, as this type of transformation cannot be perfect. However, it gets you close to a working Kubernetes manifest. Most notably, handling volumes and network isolation typically requires manual, custom work on your side.
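For the simple redis service above, the generated objects are roughly as follows (a sketch, not verbatim kompose output; labels and apiVersions vary by kompose version):

apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    service: redis
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        service: redis
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379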

Discussion

The main kompose commands are convert, up, and down. You can view detailed help for each command in the CLI using the --help option.

By default, kompose converts each Docker service into a Kubernetes deployment and an associated service. You can also specify the use of a DaemonSet (see Recipe 7.3), or you can target OpenShift-specific objects such as a DeploymentConfig.
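The exact flag names depend on your kompose version; with the release used here, something along these lines selects a different controller type or the OpenShift provider (treat the flags as assumptions and confirm with kompose convert --help):

$ kompose convert --daemon-set

$ kompose convert --provider openshift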

14.5 Creating a Kubernetes Cluster with kubicorn

Problem

You want to create a Kubernetes cluster on AWS.

Solution

Use kubicorn to create and manage Kubernetes clusters on AWS. Since kubicorn currently doesn’t provide binary releases, you need to have Go installed for the following to work.

First, install kubicorn and make sure that Go (version 1.8 or later) is available. Here, we’re using a CentOS environment.

$ go version
go version go1.8 linux/amd64

$ yum group install "Development Tools"

$ yum install ncurses-devel

$ go get github.com/kris-nova/kubicorn
...
Create, Manage, Image, and Scale Kubernetes infrastructure in the cloud.

Usage:
  kubicorn [flags]
  kubicorn [command]

Available Commands:
  adopt       Adopt a Kubernetes cluster into a Kubicorn state store
  apply       Apply a cluster resource to a cloud
  completion  Generate completion code for bash and zsh shells.
  create      Create a Kubicorn API model from a profile
  delete      Delete a Kubernetes cluster
  getconfig   Manage Kubernetes configuration
  help        Help about any command
  image       Take an image of a Kubernetes cluster
  list        List available states
  version     Verify Kubicorn version

Flags:
  -C, --color         Toggle colorized logs (default true)
  -f, --fab           Toggle colorized logs
  -h, --help          help for kubicorn
  -v, --verbose int   Log level (default 3)

Use "kubicorn [command] --help" for more information about a command.

Once you have the kubicorn command installed, you can create the cluster resources by selecting a profile and verifying whether the resources are properly defined:

$ kubicorn create --name k8scb --profile aws
2017-08-14T05:18:24Z [✔]  Selected [fs] state store
2017-08-14T05:18:24Z [✿]  The state [./_state/k8scb/cluster.yaml] has been...

$ cat _state/k8scb/cluster.yaml
SSH:
  Identifier: ""
  metadata:
    creationTimestamp: null
  publicKeyPath: ~/.ssh/id_rsa.pub
  user: ubuntu
cloud: amazon
kubernetesAPI:
  metadata:
    creationTimestamp: null
  port: "443"
location: us-west-2
...
Note

The default resource profile we’re using assumes you have a key pair in ~/.ssh named id_rsa (private key) and id_rsa.pub (public key). If this is not the case, you might want to change this. Also, note that the default region used is Oregon, us-west-2.
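If you don’t have such a key pair yet, you can generate one that matches the profile’s defaults with:

$ ssh-keygen -t rsa -f ~/.ssh/id_rsa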

To continue, you need an AWS Identity and Access Management (IAM) user with the following policies attached: AmazonEC2FullAccess, AutoScalingFullAccess, and AmazonVPCFullAccess. If you don’t have such an IAM user, now is a good time to create one.1

One last thing you need to do for kubicorn to work is set the credentials of the IAM user you’re using (see previous step) as environment variables, as follows:

$ export AWS_ACCESS_KEY_ID=***************************
$ export AWS_SECRET_ACCESS_KEY=*****************************************

Now you’re in a position to create the cluster, based on the resource definitions above as well as the AWS access you provided:

$ kubicorn apply --name k8scb
2017-08-14T05:45:04Z [✔]  Selected [fs] state store
2017-08-14T05:45:04Z [✔]  Loaded cluster: k8scb
2017-08-14T05:45:04Z [✔]  Init Cluster
2017-08-14T05:45:04Z [✔]  Query existing resources
2017-08-14T05:45:04Z [✔]  Resolving expected resources
2017-08-14T05:45:04Z [✔]  Reconciling
2017-08-14T05:45:07Z [✔]  Created KeyPair [k8scb]
2017-08-14T05:45:08Z [✔]  Created VPC [vpc-7116a317]
2017-08-14T05:45:09Z [✔]  Created Internet Gateway [igw-e88c148f]
2017-08-14T05:45:09Z [✔]  Attaching Internet Gateway [igw-e88c148f] to VPC ...
2017-08-14T05:45:10Z [✔]  Created Security Group [sg-11dba36b]
2017-08-14T05:45:11Z [✔]  Created Subnet [subnet-50c0d919]
2017-08-14T05:45:11Z [✔]  Created Route Table [rtb-8fd9dae9]
2017-08-14T05:45:11Z [✔]  Mapping route table [rtb-8fd9dae9] to internet gate...
2017-08-14T05:45:12Z [✔]  Associated route table [rtb-8fd9dae9] to subnet ...
2017-08-14T05:45:15Z [✔]  Created Launch Configuration [k8scb.master]
2017-08-14T05:45:16Z [✔]  Created Asg [k8scb.master]
2017-08-14T05:45:16Z [✔]  Created Security Group [sg-e8dca492]
2017-08-14T05:45:17Z [✔]  Created Subnet [subnet-cccfd685]
2017-08-14T05:45:17Z [✔]  Created Route Table [rtb-76dcdf10]
2017-08-14T05:45:18Z [✔]  Mapping route table [rtb-76dcdf10] to internet gate...
2017-08-14T05:45:19Z [✔]  Associated route table [rtb-76dcdf10] to subnet ...
2017-08-14T05:45:54Z [✔]  Found public IP for master: [34.213.102.27]
2017-08-14T05:45:58Z [✔]  Created Launch Configuration [k8scb.node]
2017-08-14T05:45:58Z [✔]  Created Asg [k8scb.node]
2017-08-14T05:45:59Z [✔]  Updating state store for cluster [k8scb]
2017-08-14T05:47:13Z [✿]  Wrote kubeconfig to [/root/.kube/config]
2017-08-14T05:47:14Z [✿]  The [k8scb] cluster has applied successfully!
2017-08-14T05:47:14Z [✿]  You can now `kubectl get nodes`
2017-08-14T05:47:14Z [✿]  You can SSH into your cluster ssh -i ~/.ssh/id_rsa ...

Although you don’t see the beautiful coloring here, the last four lines of output are green and tell you that everything has been successfully set up. You can also verify this by visiting the Amazon EC2 console in a browser, as shown in Figure 14-1.

Figure 14-1. Screenshot of Amazon EC2 console, showing two nodes created by kubicorn

Now, do as instructed in the last output line of the kubicorn apply command and ssh into the cluster:

$ ssh -i ~/.ssh/id_rsa ubuntu@34.213.102.27
The authenticity of host '34.213.102.27 (34.213.102.27)' can't be established.
ECDSA key fingerprint is ed:89:6b:86:d9:f0:2e:3e:50:2a:d4:09:62:f6:70:bc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '34.213.102.27' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-1020-aws x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

75 packages can be updated.
32 updates are security updates.


To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@ip-10-0-0-52:~$ kubectl get all -n kube-system
NAME                                           READY     STATUS
po/calico-etcd-qr3f1                           1/1       Running
po/calico-node-9t472                           2/2       Running
po/calico-node-qlpp6                           2/2       Running
po/calico-policy-controller-1727037546-f152z   1/1       Running
po/etcd-ip-10-0-0-52                           1/1       Running
po/kube-apiserver-ip-10-0-0-52                 1/1       Running
po/kube-controller-manager-ip-10-0-0-52        1/1       Running
po/kube-dns-2425271678-zcfdd                   0/3       ContainerCreating
po/kube-proxy-3s2c0                            1/1       Running
po/kube-proxy-t10ck                            1/1       Running
po/kube-scheduler-ip-10-0-0-52                 1/1       Running

NAME              CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
svc/calico-etcd   10.96.232.136   <none>        6666/TCP        4m
svc/kube-dns      10.96.0.10      <none>        53/UDP,53/TCP   4m

NAME                              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/calico-policy-controller   1         1         1            1           4m
deploy/kube-dns                   1         1         1            0           4m

NAME                                     DESIRED   CURRENT   READY     AGE
rs/calico-policy-controller-1727037546   1         1         1         4m
rs/kube-dns-2425271678                   1         1         0         4m

When you’re done, tear down the Kubernetes cluster like so (note that this may take a couple of minutes):

$ kubicorn delete --name k8scb
2017-08-14T05:53:38Z [✔]  Selected [fs] state store
Destroying resources for cluster [k8scb]:
2017-08-14T05:53:41Z [✔]  Deleted ASG [k8scb.node]
...
2017-08-14T05:55:42Z [✔]  Deleted VPC [vpc-7116a317]

Discussion

While kubicorn is a rather young project, it is fully functional, and you can also use it to create clusters on Azure and DigitalOcean.

It does require you to have Go installed as it doesn’t ship binaries (yet), but it’s very flexible in terms of configuration and also rather intuitive to handle, especially if you have an admin background.

See Also

14.6 Storing Encrypted Secrets in Version Control

Problem

You want to store all your Kubernetes manifests in version control and safely share them (even publicly), including secrets.

Solution

Use sealed-secrets. Sealed-secrets is a Kubernetes controller that decrypts secrets that were encrypted on the client side with the controller’s public key and creates the corresponding in-cluster Secret objects (see Recipe 8.2).

Your sensitive information is encrypted into a SealedSecret object, a custom resource defined via a CRD (see Recipe 13.4). A SealedSecret is safe to store under version control and to share, even publicly. Once a SealedSecret is created on the Kubernetes API server, the controller decrypts it and creates the corresponding Secret object (which is only base64-encoded).

To get started, download the latest release of the kubeseal binary. This will allow you to encrypt your secrets:

$ GOOS=$(go env GOOS)

$ GOARCH=$(go env GOARCH)

$ wget https://github.com/bitnami/sealed-secrets/releases/download/v0.5.1/kubeseal-$GOOS-$GOARCH

$ sudo install -m 755 kubeseal-$GOOS-$GOARCH /usr/local/bin/kubeseal

Then create the SealedSecret CRD and launch the controller:

$ kubectl create -f https://github.com/bitnami/sealed-secrets/releases/download/v0.5.1/sealedsecret-crd.yaml

$ kubectl create -f https://github.com/bitnami/sealed-secrets/releases/download/v0.5.1/controller.yaml

The result will be that you have a new custom resource and a new pod running in the kube-system namespace:

$ kubectl get customresourcedefinitions
NAME                        AGE
sealedsecrets.bitnami.com   34s

$ kubectl get pods -n kube-system | grep sealed
sealed-secrets-controller-867944df58-l74nk   1/1       Running   0          38s

You are now ready to start using sealed-secrets. First, generate a generic secret manifest:

$ kubectl create secret generic oreilly --from-literal=password=root -o json \
                                        --dry-run > secret.json

$ cat secret.json
{
    "kind": "Secret",
    "apiVersion": "v1",
    "metadata": {
        "name": "oreilly",
        "creationTimestamp": null
    },
    "data": {
        "password": "cm9vdA=="
    }
}
Tip

To create a manifest but not create the object on the API server, use the --dry-run option. This will print your manifest to stdout. If you want YAML, use the -o yaml option; and if you want JSON, use -o json.

Then use the kubeseal command to generate the new custom SealedSecret object:

$ kubeseal < secret.json > sealedsecret.json

$ cat sealedsecret.json
{
  "kind": "SealedSecret",
  "apiVersion": "bitnami.com/v1alpha1",
  "metadata": {
    "name": "oreilly",
    "namespace": "default",
    "creationTimestamp": null
  },
  "spec": {
    "data": "AgDXiFG0V6NKF8e9k1NeBMc5t4QmfZh3QKuDORAsFNCt50wTwRhRLRAQOnz0sDk..."
  }
}

You can now store sealedsecret.json safely in version control. Only the private key stored in the sealed-secret controller can decrypt it. Once you create the SealedSecret object, the controller will detect it, decrypt it, and generate the corresponding secret:

$ kubectl create -f sealedsecret.json
sealedsecret "oreilly" created

$ kubectl get sealedsecret
NAME      AGE
oreilly   5s

$ kubectl get secrets
NAME       TYPE    DATA      AGE
...
oreilly    Opaque  1         5s
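To double-check that the generated secret really contains your data, decode the value (the password key and root value come from the example above):

$ kubectl get secret oreilly -o jsonpath='{.data.password}' | base64 --decode
root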

14.7 Deploying Functions with kubeless

Problem

You want to deploy Python, Node.js, Ruby, or PowerShell functions to Kubernetes without having to build a Docker container. You also want to be able to call those functions via HTTP or by sending events to a message bus.

Solution

Use the Kubernetes-native serverless solution kubeless.

kubeless uses a CustomResourceDefinition (see Recipe 13.4) to define Function objects and a controller to deploy these functions inside pods within a Kubernetes cluster.

While kubeless offers quite advanced capabilities, in this recipe we show a basic example: deploying a Python function that returns the JSON payload you send it.

First, create a kubeless namespace and launch the controller. To do so, you can get the manifest that is released with every version on the GitHub release page. From that same release page, also download the kubeless binary:

$ kubectl create ns kubeless

$ curl -sL https://github.com/kubeless/kubeless/releases/download/v0.3.1/kubeless-rbac-v0.3.1.yaml | kubectl create -f -

$ wget https://github.com/kubeless/kubeless/releases/download/v0.3.1/kubeless_darwin-amd64.zip

$ unzip kubeless_darwin-amd64.zip

$ sudo cp bundles/kubeless_darwin-amd64/kubeless /usr/local/bin

Within the kubeless namespace, you will see three pods: the controller that watches the Function custom resources, and the Kafka and ZooKeeper pods. The latter two are only needed for functions that are triggered by events; for HTTP-triggered functions, you only need the controller to be in the running state:

$ kubectl get pods -n kubeless
NAME                                  READY     STATUS    RESTARTS   AGE
kafka-0                               1/1       Running   0          6m
kubeless-controller-9bff848c4-gnl7d   1/1       Running   0          6m
zoo-0                                 1/1       Running   0          6m

To try kubeless, write the following Python function in a file called post.py:

def handler(context):
    print context.json
    return context.json

You can then deploy this function to Kubernetes using the kubeless CLI. The function deploy command takes several arguments: the --runtime option specifies the language the function is written in; the --trigger-http option specifies that the function is triggered via HTTP(S) calls; the --handler option specifies the name of the function, prefixed with the basename of the file the function is stored in; and the --from-file option specifies the file that contains the function:

$ kubeless function deploy post-python --trigger-http \
                                       --runtime python2.7 \
                                       --handler post.handler \
                                       --from-file post.py
INFO[0000] Deploying function...
INFO[0000] Function post-python submitted for deployment
INFO[0000] Check the deployment status executing 'kubeless function ls post-python'

$ kubeless function ls
NAME         NAMESPACE  HANDLER       RUNTIME    TYPE  TOPIC
post-python  default    post.handler  python2.7  HTTP

$ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
post-python-5bcb9f7d86-d7nbt   1/1     Running   0          6s

The kubeless controller detected the new Function object and created a deployment for it. The function code is stored in a config map (see Recipe 8.3) and injected into the running pod at runtime. Then the function is callable via HTTP. The following shows these few objects:

$ kubectl get functions
NAME          AGE
post-python   2m

$ kubectl get cm
NAME          DATA      AGE
post-python   3         2m

$ kubectl get deployments
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
post-python   1         1         1            1           2m
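Since the function body is carried in that config map, you can inspect it directly if you're curious (output omitted here):

$ kubectl get cm post-python -o yaml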

To call the function you can use the kubeless function call command, like so:

$ kubeless function call post-python --data '{"oreilly":"function"}'
{"oreilly": "function"}
Note

kubeless can be used for much more than basic HTTP-triggered functions. Use the --help option to explore the CLI: kubeless --help.

1 AWS Identity and Access Management User Guide, “Creating an IAM User in Your AWS Account”.
