Managing applications

At the time of this book's writing, new software has emerged that hopes to tackle the problem of managing Kubernetes applications holistically. As application installation and ongoing management grow more complex, software such as Helm aims to ease the pain by letting cluster operators create, version, publish, and share application installation and configuration with other operators. You may have also heard the term GitOps, an approach that uses Git as the single source of truth from which all Kubernetes instances can be managed.

While we'll dive deeper into Continuous Integration and Continuous Delivery (CI/CD) in the next chapter, let's see what advantages package management can offer within the Kubernetes ecosystem. First, it's important to understand what problem we're trying to solve. Helm and programs like it have a lot in common with package managers such as apt, yum, rpm, dpkg, Aptitude, and Zypper. These tools helped users cope during the early days of Linux, when programs were distributed simply as source code, with installation documents, configuration files, and the remaining moving pieces left to the operator to set up. These days, of course, Linux distributions ship a great many pre-built packages, made available to the user community for consumption on their operating system of choice. In many ways, we're in those early days of software management for Kubernetes, with many different methods for installing software at many different layers of the Kubernetes system. But are there other reasons for wanting a GNU/Linux-style package manager for Kubernetes? Perhaps you feel confident that, by using containers, or Git and configuration management, you can manage applications on your own.

Keep in mind that there are several important dimensions to consider when it comes to application management in a Kubernetes cluster:

  1. You want to be able to leverage the experience of others. When you install software in your cluster, you want to take advantage of the expertise of the teams that built the software you're running, or of experts who've configured it to perform at its best.
  2. You want a repeatable, auditable method of maintaining the application-specific configuration of your cluster across environments. It's difficult to manage environment-specific settings, such as memory limits, using simpler tools like cURL, or within a makefile or other package compilation tools.

In short, we want to take advantage of the expertise of the ecosystem when deploying technologies such as databases, caching layers, web servers, key/value stores, and the other software you're likely to run on your Kubernetes cluster. There are a lot of potential players in this part of the ecosystem, such as Landscaper (https://github.com/Eneco/landscaper), Kubepack (https://github.com/kubepack/pack), Flux (https://github.com/weaveworks/flux), Armada (https://github.com/att-comdev/armada), and helmfile (https://github.com/roboll/helmfile). In this section in particular, we're going to look at Helm (https://github.com/helm/helm), which has recently been accepted into the CNCF as an incubating project, and its approach to the problems we've described here.
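To make this concrete, here is a minimal sketch of the kind of workflow Helm enables. The chart, release name, and values file shown are purely illustrative, and the exact command syntax varies between Helm versions:

    # Refresh the list of charts available from the configured repositories
    $ helm repo update
    # Search the community repository for a packaged MySQL chart
    $ helm search mysql
    # Install the chart as a named release, layering in a hypothetical
    # environment-specific overrides file, values-staging.yaml
    $ helm install stable/mysql --name staging-db -f values-staging.yaml

In this model, the chart captures the packaging expertise of its maintainers, while the values file keeps your environment-specific configuration versioned and auditable alongside the rest of your code.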
