What Kubernetes isn't

The most succinct, and currently the best, view of the Kubernetes ecosystem, at a level that's digestible both for individuals running their own small-scale clusters and for executives trying to grasp the ecosystem's massive scope, is the Cloud Native Trail Map, shown here:

The trail map helps us break down all of the efforts to support Kubernetes that are going on outside of the core container-centric management environment we alluded to in the preceding section. Beyond networking, storage, and compute, there are many moving pieces that must work together for complex, microservice-based, cloud-native applications to run at scale. What else is needed to support the Kubernetes PaaS system?

You should treat each of these layers as a choice: pick one technology (or several, if you want to build a proof of concept before deciding) and see how it works.

For example, let's take containerization: at this point, it's table stakes to run your application as a containerized workload, but it may take your organization time to re-architect its applications, or to learn how to write Dockerfiles and build cloud-native applications.

There are traditionally 6Rs involved in moving your application to the cloud or to a container orchestration and scheduling platform.

Here's a diagram of the 6Rs referenced in the preceding tip box, which you can use when updating your applications:

While this 6Rs formula was intended for considering a move to the cloud, it's also very useful when migrating to containers. Keep in mind that not all of your applications will be well suited to running in containers (Retain), while some of them should be retired outright or replaced with off-the-shelf or open source alternatives (Retire). A good way to start moving into containerized workloads is to simply drop a large monolithic application, such as a Java .war file or a Python program, directly into a container and let it run as is (Rehost). To achieve the maximum benefits of containerization, and to take advantage of the cutting-edge features of Kubernetes, you'll most likely need to explore refactoring and rearchitecting your application (Refactor).

The next area of focus for anyone running a platform is Continuous Integration and Continuous Delivery (CI/CD). You'll need to manage both your infrastructure and your applications as code in order to provide seamless rollouts, updates, and testing. In this new world, infrastructure code and application code are both first-class citizens.

Observability and analysis are also important in this realm of highly complex software systems that control both infrastructure and applications. The CNCF groups its solutions into sandbox, incubating, and graduated stages:

  • Sandbox: OpenMetrics is designed to create a common standard, building on Prometheus, for transmitting metrics at scale. OpenMetrics uses a standard text format, as well as protocol buffers, to serialize structured data in a language- and platform-neutral manner.
  • Incubating: Here, we see Fluentd, Jaeger, and OpenTracing. Fluentd has been around for some time now, and will be familiar to folks who've used the Elasticsearch, Logstash, Kibana (ELK) stack: it's an open source data collector that allows you to unify logs from disparate sources. Jaeger helps operators monitor and resolve issues in complex, distributed systems by providing tracing that can unearth problems in modern microservice systems. Similarly to OpenMetrics, OpenTracing is an effort to build a standard for distributed tracing across microservices and OSS. As our systems become more deeply interconnected through APIs that know nothing of each other's internals, it is ever more important to be able to introspect the connections between them.
  • Graduated: Along with Kubernetes, Prometheus remains the only other project to have graduated within the CNCF at the time of writing. Prometheus is a monitoring and alerting system built around its own time series database (with adapters for remote storage) that tracks and displays system status. A minimal instrumentation sketch follows this list.
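
To make the metrics pipeline concrete, here is a minimal sketch of instrumenting a Go service with the Prometheus client library, github.com/prometheus/client_golang. The metric name app_requests_total and the port are illustrative choices for this example, not anything mandated by Prometheus:

    package main

    import (
        "log"
        "net/http"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    // requests counts HTTP requests by path; the metric name is our own choice.
    var requests = prometheus.NewCounterVec(
        prometheus.CounterOpts{
            Name: "app_requests_total",
            Help: "Total HTTP requests served, labeled by path.",
        },
        []string{"path"},
    )

    func main() {
        prometheus.MustRegister(requests)

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            requests.WithLabelValues(r.URL.Path).Inc()
            w.Write([]byte("ok"))
        })

        // Prometheus scrapes this endpoint using the text exposition
        // format that OpenMetrics builds on.
        http.Handle("/metrics", promhttp.Handler())
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

Point a Prometheus server's scrape configuration at :8080/metrics and the counter appears automatically, in the same exposition format that OpenMetrics is standardizing.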

Service mesh and discovery is the next step along the Cloud Native Trail Map. This tier can be thought of as an additional capability set layered on top of base Kubernetes, which itself provides the following:

  • A single Kubernetes API control plane (exercised in the client-go sketch after this list)
  • An authentication and authorization model
  • A namespaced, predictable, cluster-scoped resource description scheme
  • A container scheduling and orchestration domain
  • A pod-to-pod and ingress network routing domain
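
As a quick illustration of the first capability, the sketch below lists pods through that single API control plane using the official Go client, k8s.io/client-go. The kubeconfig path and namespace are assumptions for the example, and method signatures vary slightly between client-go releases (the context argument shown here was added in v0.18):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build client configuration from a local kubeconfig (path assumed).
        config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // Every resource, in every namespace, is reached through the same
        // authenticated API surface; here we list pods in "default".
        pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Println(p.Namespace, p.Name)
        }
    }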

The three products in this portion of the map are CoreDNS, Envoy, and Linkerd. CoreDNS replaces kube-dns in your cluster, and provides the ability to chain multiple plugins together for deeper, customizable lookup functionality; it will soon replace kube-dns as the default DNS provider for Kubernetes. Envoy is a service proxy that is built into the popular Istio product; Istio is a control plane that uses the Envoy binary as its data plane to provide common capabilities across a set of software or services. Envoy provides the foundational capabilities for a service mesh that runs alongside the applications on Kubernetes, adding a layer of resilience in the form of circuit breaking, rate limiting, load balancing, service discovery, routing, and application introspection via metrics and logging. Linkerd covers much of the same ground as Envoy, as it's also a data plane for the service mesh.
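
To give a feel for one of those resilience capabilities, circuit breaking, here is a toy Go sketch of the pattern. This illustrates the general technique only; it is not how Envoy or Linkerd actually implement it, and every name in it is invented for the example:

    package main

    import (
        "errors"
        "fmt"
        "sync"
        "time"
    )

    var ErrOpen = errors.New("circuit open: failing fast")

    // Breaker trips after `threshold` consecutive failures and then
    // rejects calls for `cooldown`, protecting a sick upstream service.
    type Breaker struct {
        mu        sync.Mutex
        failures  int
        threshold int
        cooldown  time.Duration
        openUntil time.Time
    }

    func (b *Breaker) Call(fn func() error) error {
        b.mu.Lock()
        if time.Now().Before(b.openUntil) {
            b.mu.Unlock()
            return ErrOpen // fail fast instead of piling load on the upstream
        }
        b.mu.Unlock()

        err := fn()

        b.mu.Lock()
        defer b.mu.Unlock()
        if err != nil {
            b.failures++
            if b.failures >= b.threshold {
                b.openUntil = time.Now().Add(b.cooldown) // trip the breaker
                b.failures = 0
            }
            return err
        }
        b.failures = 0 // any success resets the count
        return nil
    }

    func main() {
        b := &Breaker{threshold: 3, cooldown: 5 * time.Second}
        flaky := func() error { return errors.New("upstream timeout") }
        for i := 0; i < 5; i++ {
            fmt.Println(b.Call(flaky)) // calls 4 and 5 return ErrOpen
        }
    }

A mesh applies the same idea inside the proxy, on every pod-to-pod call, so that application code doesn't have to implement it.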

Networking is the next building block that we can add to the Kubernetes ecosystem. The Container Network Interface (CNI) is one of several interfaces currently being developed within the CNCF ecosystem. Multiple options for Kubernetes cluster networking are being developed in order to cope with the complex feature requirements that modern applications have. Current options include the following:

  • Calico
  • Flannel
  • Weave Net
  • Cilium
  • Contiv
  • SR-IOV
  • Knitter

The Kubernetes team also provides a core set of plugins for the system that manage IP address allocation and interface creation.

Read more about the standard plugins at https://github.com/containernetworking/plugins/.

The GitHub project homepage describes CNI as follows:

CNI (Container Network Interface), a Cloud Native Computing Foundation project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins. CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted. Because of this focus, CNI has a wide range of support and the specification is simple to implement.

For more information on the Cloud Native Computing Foundation, visit https://www.cncf.io/.
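
To make the quoted contract concrete, here is a hedged Go sketch of how a container runtime might invoke CNI plugins through the project's libcni library. The plugin directory, configuration file, and network namespace paths are conventional defaults rather than requirements, and the AddNetworkList signature shown (with a context argument) matches newer libcni releases:

    package main

    import (
        "context"
        "fmt"

        "github.com/containernetworking/cni/libcni"
    )

    func main() {
        // Where plugin binaries live; /opt/cni/bin is the usual default.
        conf := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)

        // Load a network configuration list from the conventional directory.
        netconf, err := libcni.ConfListFromFile("/etc/cni/net.d/10-mynet.conflist")
        if err != nil {
            panic(err)
        }

        // Describe the container to be wired into the network.
        rt := &libcni.RuntimeConf{
            ContainerID: "example-container",
            NetNS:       "/var/run/netns/example", // container's network namespace
            IfName:      "eth0",
        }

        // ADD connects the container; DelNetworkList would tear it down,
        // removing the allocated resources the specification mentions.
        result, err := conf.AddNetworkList(context.TODO(), netconf, rt)
        if err != nil {
            panic(err)
        }
        fmt.Println(result)
    }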

There isn't currently a lot of activity in the distributed database portion of the trail map, simply because most of the workloads that run on Kubernetes today tend to be stateless. One project currently incubating, named Vitess, is attempting to provide a horizontal scaling model for the ever-popular MySQL database system. To scale MySQL across the pod-structured infrastructure of Kubernetes, the makers of Vitess focus on sharding MySQL's data store so that it can be distributed among the nodes of the cluster. In this fashion it resembles NoSQL systems, which likewise rely on data being replicated and spread out over several nodes. Vitess has been used at scale at YouTube since 2011, and is a promising technology for those looking to venture deeper into stateful workloads on Kubernetes.
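
The core idea behind that sharding model fits in a few lines of Go. This is a deliberately simplified illustration of hash-based shard routing, not Vitess's actual algorithm (Vitess routes rows through keyspace IDs and vindexes):

    package main

    import (
        "fmt"
        "hash/fnv"
    )

    // shardFor hashes a row key and maps it to one of n shards, so each
    // shard (and the node hosting it) holds only a slice of the data.
    func shardFor(key string, shards int) int {
        h := fnv.New32a()
        h.Write([]byte(key))
        return int(h.Sum32()) % shards
    }

    func main() {
        for _, user := range []string{"alice", "bob", "carol"} {
            fmt.Printf("user %q -> shard %d\n", user, shardFor(user, 4))
        }
    }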

For those operators who are pushing the limits of the Kubernetes system, there are several high-performance options for speeding up communication between services. gRPC is a Remote Procedure Call (RPC) framework developed by Google to help clients and servers communicate transparently. gRPC is available in many languages, including C++, Java, Python, Go, Ruby, C#, Node.js, and more. It uses Protocol Buffers (protobufs) and is based on the simple concept that a service should expose methods that can be called from another remote service; by defining these methods and their parameters in code, gRPC allows large, complex applications to be built in pieces. NATS is a messaging system that provides publish/subscribe, request/reply, and distributed queueing functionality, allowing the implementation of a highly scalable and secure foundation for inter-process communication (IPC).
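
A gRPC example needs protobuf-generated stubs, so the sketch below shows the second option, NATS, whose Go client (github.com/nats-io/nats.go) is self-contained. The subject names and the local server address are assumptions for the example:

    package main

    import (
        "fmt"
        "time"

        "github.com/nats-io/nats.go"
    )

    func main() {
        // Connect to a local NATS server (nats://127.0.0.1:4222 by default).
        nc, err := nats.Connect(nats.DefaultURL)
        if err != nil {
            panic(err)
        }
        defer nc.Close()

        // Publish/subscribe: every subscriber on a subject gets the message.
        nc.Subscribe("orders.created", func(m *nats.Msg) {
            fmt.Printf("received: %s\n", m.Data)
        })
        nc.Publish("orders.created", []byte("order #42"))

        // Request/reply: a responder answers a directed request.
        nc.Subscribe("time.now", func(m *nats.Msg) {
            m.Respond([]byte(time.Now().Format(time.RFC3339)))
        })
        reply, err := nc.Request("time.now", nil, time.Second)
        if err != nil {
            panic(err)
        }
        fmt.Printf("reply: %s\n", reply.Data)

        // Give the asynchronous subscriber a moment to print (sketch only).
        time.Sleep(100 * time.Millisecond)
    }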

The container runtime portion of the trail map is an area where there's been some contention. There are currently two options in the CNCF: containerd and rkt. These two technologies do not currently conform to the Container Runtime Interface (CRI), a newer standard that attempts to create a shared understanding of what a container runtime should do. There are a few examples outside of the CNCF that currently conform to the CRI standard:

  • CRI-O
  • Docker CRI shim
  • Frakti
  • rkt

There are also interesting players, such as Kata Containers, which is compliant with Open Container Initiative (OCI) standards and seeks to offer containers running on lightweight virtual machines, using technology from Hyper's runV and Intel's Clear Containers. Here, Kata replaces the traditional runC runtime in order to give each container a lightweight VM with its own mini-kernel.

The last piece of the trail map puzzle is software distribution, which is covered by Notary and the TUF framework, tools designed to aid in the secure distribution of software. Notary is a client/server framework that allows people to build trust over arbitrary collections of data. In short, publishers can sign content and then send it to consumers, who use public key cryptography to validate both the publisher's identity and the integrity of the data.
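
That sign-then-verify flow is ordinary public key cryptography. The sketch below illustrates the principle with Go's standard library Ed25519 package; it is not Notary's actual wire format or key hierarchy, and the signed payload is invented for the example:

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "fmt"
    )

    func main() {
        // The publisher generates a keypair; the public key is distributed
        // to consumers out of band (in Notary, via the trust server).
        pub, priv, err := ed25519.GenerateKey(rand.Reader)
        if err != nil {
            panic(err)
        }

        // The publisher signs the content it ships.
        content := []byte("sha256 digest of the image manifest")
        sig := ed25519.Sign(priv, content)

        // A consumer verifies both the publisher's identity and the
        // integrity of the content in a single check.
        fmt.Println("valid:", ed25519.Verify(pub, content, sig))
    }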

Notary is built on The Update Framework (TUF), a specification for securely updating a software system. TUF is also used to deliver secure over-the-air (OTA) updates to automobiles.
