Extending Kubernetes Infrastructure

Kubernetes clusters run on actual servers, whether bare metal or virtual machines, and interact with the infrastructure systems running on those servers. Extension points for the infrastructure layer are still evolving and not yet fully standardized; however, they can be grouped as follows:

  • Server: The Kubernetes node components interact with container runtimes such as Docker and containerd. Kubernetes is designed to work with any container runtime that implements the Container Runtime Interface (CRI) specification. CRI consists of protocol buffer definitions, a gRPC API, and libraries that define the interaction between Kubernetes and the container runtime; a minimal client sketch follows this list.
  • Network: Kubernetes and its container architecture require high-performance networking that is decoupled from the container runtime. The connections between containers and network interfaces are defined through the Container Network Interface (CNI) abstraction, which consists of a set of interfaces for adding containers to and removing them from the Kubernetes network; a plugin skeleton is sketched below.
  • Storage: Storage for Kubernetes resources is provided by storage plugins that communicate with cloud providers or the host system. For instance, a Kubernetes cluster running on AWS can easily obtain storage from AWS and attach it to its StatefulSets. Operations such as provisioning and consuming storage for container workloads are standardized under the Container Storage Interface (CSI), and any storage plugin implementing CSI can be used as a storage provider in Kubernetes; a client sketch appears at the end of this section.
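
To make the CRI interaction concrete, the following Go sketch connects to a container runtime's CRI endpoint and calls the Version RPC. It is a minimal illustration rather than production code: the containerd socket path is an assumption (CRI-O, for example, listens on /var/run/crio/crio.sock), and errors are simply surfaced with panic.

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed socket path for containerd; adjust for your runtime.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Version is the simplest CRI call: it reports the runtime's name
        // and the CRI API version it implements.
        client := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("runtime: %s %s, CRI API: %s\n",
            resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
    }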

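The CNI side can be illustrated with a skeleton plugin built on the reference libraries from the containernetworking/cni project. This sketch assumes the v1.0/v1.1 line of that library (later releases replace skel.PluginMain with skel.PluginMainFuncs) and performs no real network setup; a working plugin would create and configure interfaces inside the container's network namespace in cmdAdd and tear them down in cmdDel.

    package main

    import (
        "encoding/json"
        "fmt"

        "github.com/containernetworking/cni/pkg/skel"
        "github.com/containernetworking/cni/pkg/types"
        current "github.com/containernetworking/cni/pkg/types/100"
        "github.com/containernetworking/cni/pkg/version"
    )

    // cmdAdd is invoked when a container is added to the network.
    func cmdAdd(args *skel.CmdArgs) error {
        conf := types.NetConf{}
        if err := json.Unmarshal(args.StdinData, &conf); err != nil {
            return fmt.Errorf("failed to parse network configuration: %w", err)
        }
        // A real plugin would create an interface inside args.Netns here
        // and assign it an IP address (often by delegating to an IPAM plugin).
        result := &current.Result{CNIVersion: conf.CNIVersion}
        return types.PrintResult(result, conf.CNIVersion)
    }

    // cmdDel is invoked when a container is removed from the network.
    func cmdDel(args *skel.CmdArgs) error {
        // A real plugin would release IPs and delete interfaces here.
        return nil
    }

    func cmdCheck(args *skel.CmdArgs) error { return nil }

    func main() {
        skel.PluginMain(cmdAdd, cmdCheck, cmdDel, version.All, "example CNI plugin")
    }

The resulting binary would typically be placed in the node's CNI binary directory (such as /opt/cni/bin) and referenced from a network configuration file under /etc/cni/net.d.
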
The Kubernetes infrastructure can thus be extended to work with container runtimes implementing CRI, network providers compliant with CNI, and storage providers implementing CSI.
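
As with CRI, the CSI contract is a set of gRPC services (Identity, Controller, and Node) served over a Unix socket, so it can be probed with a small Go client. The sketch below calls the Identity service's GetPluginInfo RPC; the socket path and driver name are assumptions that depend on how a particular driver is deployed.

    package main

    import (
        "context"
        "fmt"
        "time"

        "github.com/container-storage-interface/spec/lib/go/csi"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        // Assumed socket path; kubelet-registered drivers usually expose a
        // socket under /var/lib/kubelet/plugins/<driver-name>/.
        conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/example.csi.driver/csi.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Every CSI driver must implement the Identity service; GetPluginInfo
        // returns the driver's name and vendor version.
        identity := csi.NewIdentityClient(conn)
        info, err := identity.GetPluginInfo(ctx, &csi.GetPluginInfoRequest{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("CSI driver: %s %s\n", info.GetName(), info.GetVendorVersion())
    }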
