Realizing container orchestration

We've now seen which challenges container orchestration frameworks tackle. This section shows you the core concepts of Kubernetes, a solution originally developed by Google to run its workloads. At the time of writing this book, Kubernetes has enormous momentum and is also the basis for other orchestration solutions, such as OpenShift by Red Hat. I chose this solution because of its popularity, but also because I believe it does the job of orchestration very well. However, the important point is less about comprehending the chosen technology than the motivations and concepts behind it.

Kubernetes runs and manages Linux containers in a cluster of nodes. The Kubernetes master node orchestrates the worker nodes, which do the actual work, that is, run the containers. Software engineers control the cluster using the API provided by the master node, via a web-based GUI or a command-line tool.

The running cluster consists of so-called resources of specific types. The core resource types of Kubernetes are pods, deployments, and services. A pod is an atomic workload unit that runs one or more Linux containers. This means the application runs in a pod.

Pods can be started and managed as standalone, single resources. However, it usually makes more sense not to specify separate pods directly but to define a deployment, which encapsulates and manages the running pods. Deployments provide production-ready functionality, such as scaling pods up and down or performing rolling updates. They are responsible for reliably running our applications in the specified versions.

A system defines services in order to connect to running applications, from outside of the cluster or from other containers. Services provide the logical abstraction described in the last section, embracing a set of pods. All pods that run a specific application are abstracted by a single service, which routes the traffic to active pods. The combination of services routing to active pods and deployments managing the rolling update of versions enables zero-downtime deployments. Applications are always accessed via services, which direct traffic to the corresponding pods.

All core resources are unique within a Kubernetes namespace. Namespaces encapsulate aggregates of resources and can be used to model different environments. For example, services that point to external systems outside of the cluster can be configured differently in different namespaces. The applications that use these external systems always use the same logical service name, which is directed to different endpoints.
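Namespaces are themselves resources. As a minimal sketch, a namespace for a hypothetical test environment could be defined as follows:

```yaml
---
kind: Namespace
apiVersion: v1
metadata:
  name: test
---
```

All subsequent resource definitions are then created within that namespace.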

Kubernetes supports defining resources as infrastructure as code (IaC), using JSON or YAML files. YAML is a human-readable data serialization format and a superset of JSON. It became the de facto standard within Kubernetes.

The following code snippet shows the definition of a service of the hello-cloud application:

---
kind: Service
apiVersion: v1
metadata:
  name: hello-cloud
spec:
  selector:
    app: hello-cloud
  ports:
    - port: 8080
---

The example specifies a service that directs traffic on port 8080 toward the hello-cloud pods, which are defined by the deployment.

The following shows the hello-cloud deployment:

---
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: hello-cloud
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-cloud
    spec:
      containers:
      - name: hello-cloud
        image: docker.example.com/hello-cloud:1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /
            port: 8080
        readinessProbe:
          httpGet:
            path: /hello-cloud/resources/hello
            port: 8080
      restartPolicy: Always
---

The deployment specifies one pod replica from the given template with the provided Docker image. As soon as the deployment is created, Kubernetes tries to satisfy the pod specification by starting a container from the image and testing the container's health using the specified probes.

The container image docker.example.com/hello-cloud:1 includes the enterprise application which was built and distributed to a Docker registry earlier.

All of these resource definitions are applied to the Kubernetes cluster, either using the web-based GUI or the CLI.
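Using the kubectl CLI, for example, the definitions could be applied from their YAML files; the file names here are assumptions:

```shell
kubectl apply -f hello-cloud-service.yaml
kubectl apply -f hello-cloud-deployment.yaml
```

The apply command creates the resources if they don't exist yet, or updates them to match the file contents.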

After creating both the deployment and the service, the hello-cloud application is accessible from within the cluster via the service. To access it from outside of the cluster, a route needs to be defined, for example using an ingress. Ingress resources route traffic to services using specific rules. The following shows an example ingress resource that makes the hello-cloud service available:

---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: hello-cloud
spec:
  rules:
  - host: hello.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-cloud
          servicePort: 8080
---

These resources now specify the whole application, which is deployed onto a Kubernetes cluster, accessible from the outside, and abstracted in a logical service inside of the cluster. If other applications need to communicate with the application, they can do so via the Kubernetes-internal, resolvable hello-cloud DNS hostname and port 8080.
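For illustration, from a shell inside another container in the same namespace, the application could be reached via the service name; the path matches the readiness probe defined earlier:

```shell
curl http://hello-cloud:8080/hello-cloud/resources/hello
```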

The following diagram shows an example setup of the hello-cloud application with a replica of three pods that runs in a Kubernetes cluster of two nodes:

Besides service lookup using logical names, some applications still need additional configuration. Therefore, Kubernetes, like other orchestration technologies, offers the ability to insert files and environment variables into containers dynamically at runtime. The concept of config maps, key-value-based configuration, is used for this. The contents of config maps can be made available as files that are dynamically mounted into a container. The following defines an example config map, specifying the contents of a properties file:

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: hello-cloud-config
data:
  application.properties: |
    hello.greeting=Hello from Kubernetes
    hello.name=Java EE
---

The config map is used to mount its contents as files into containers. The config map's keys are used as file names, mounted into a directory, with the values representing the file contents. The pod definitions specify the usage of config maps mounted as volumes. The following shows the previous deployment definition of the hello-cloud application, using hello-cloud-config in a mounted volume:

---
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: hello-cloud
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-cloud
    spec:
      containers:
      - name: hello-cloud
        image: docker.example.com/hello-cloud:1
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: config-volume
          mountPath: /opt/config
        livenessProbe:
          httpGet:
            path: /
            port: 8080
        readinessProbe:
          httpGet:
            path: /hello-cloud/resources/hello
            port: 8080
      volumes:
      - name: config-volume
        configMap:
          name: hello-cloud-config
      restartPolicy: Always
---

The deployment defines a volume that references the hello-cloud-config config map. The volume is mounted at the path /opt/config, resulting in all key-value pairs of the config map being inserted as files in this directory. With the config map demonstrated previously, this would result in an application.properties file containing the entries for the keys hello.greeting and hello.name. The application expects this file to reside at this location at runtime.
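On the application side, the mounted file can be read like any other properties file. The following is a minimal sketch, assuming the mount path from the deployment above; the class and method names are illustrative, not part of the hello-cloud application:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;

public class GreetingConfig {

    // location of the config map volume mounted by the deployment
    static final Path DEFAULT_PATH = Paths.get("/opt/config/application.properties");

    // loads a properties file from the given path
    static Properties load(Path path) throws IOException {
        Properties properties = new Properties();
        try (InputStream in = Files.newInputStream(path)) {
            properties.load(in);
        }
        return properties;
    }

    public static void main(String[] args) throws IOException {
        // only present inside a container with the volume mounted
        if (Files.exists(DEFAULT_PATH)) {
            Properties config = load(DEFAULT_PATH);
            System.out.println(config.getProperty("hello.greeting"));
        }
    }
}
```

Since the file is mounted at runtime, the same container image runs unchanged in every environment, with only the config map contents differing.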

Separate environments will specify different contents for the config maps, depending on the desired configuration values. Configuring applications using dynamic files is one approach. It is also possible to inject and override specific environment variables. The following code snippet demonstrates this as well. This approach is advisable when the number of configuration values is limited:

# similar to previous example
# ...
        image: docker.example.com/hello-cloud:1
        imagePullPolicy: IfNotPresent
        env:
        - name: GREETING_HELLO_NAME
          valueFrom:
            configMapKeyRef:
              name: hello-cloud-config
              key: hello.name
        livenessProbe:
# ...

Applications also need to configure credentials, used for example to authorize against external systems or to access databases. These credentials are ideally configured in a different place than non-critical configuration values. Besides config maps, Kubernetes therefore also includes the concept of secrets. These are similar to config maps, also representing key-value pairs, but are stored as Base64-encoded data, which obfuscates them from humans without encrypting them. Secrets and their contents are typically not serialized as infrastructure as code, since the credentials should not be accessible without restriction.
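Instead of being checked into version control, secrets are usually created directly against the cluster, for example via the kubectl CLI; the placeholder value here is, of course, hypothetical:

```shell
kubectl create secret generic hello-cloud-secret \
  --from-literal=topsecret=<credential-value>
```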

A common practice is to make credentials accessible in containers using environment variables. The following code snippet shows how to include a value configured in secret hello-cloud-secret into the hello-cloud application:

# similar to previous example
# ...
        image: docker.example.com/hello-cloud:1
        imagePullPolicy: IfNotPresent
        env:
        - name: TOP_SECRET
          valueFrom:
            secretKeyRef:
              name: hello-cloud-secret
              key: topsecret
        livenessProbe:
# ...

The environment variable TOP_SECRET is created by referencing the topsecret key in the secret hello-cloud-secret. This environment variable is available at container runtime and can be used by the running process.

Some applications packaged in containers cannot run as purely stateless applications. Databases are a typical example of this. Since containers are discarded after their processes have exited, the contents of their file systems are gone as well. Services such as databases, however, need persistent state. To solve this issue, Kubernetes includes persistent volumes. As the name suggests, these volumes are available beyond the life cycle of individual pods. Persistent volumes dynamically make files and directories available that are used within the pod and retained after it has exited.

Persistent volumes are backed by network-attached storage or cloud storage offerings, depending on the cluster installation. They make it possible to run storage services such as databases in container orchestration clusters as well. However, as a general piece of advice, persistent state in containers should be avoided where possible.
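Pods typically request storage via a persistent volume claim, which Kubernetes satisfies from the available persistent volumes. The following is a minimal sketch; the name and requested size are assumptions:

```yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hello-cloud-storage
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
```

A pod definition can then reference this claim in a volume, analogous to the config map volume shown earlier.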

The YAML IaC definitions are kept under version control in the application repository. The next chapter covers how to apply the file contents to a Kubernetes cluster as part of a Continuous Delivery pipeline.
