Overlay Networking

It would be nice if all the containers across the cluster were addressable with their own IP addresses and we didn’t have to deal with dynamic ports or port conflicts. Kubernetes even requires each pod to have its own IP address, assigned from a range of IPs on the node where the pod is placed. We still need some service discovery mechanism to know which IP address a service instance is at, but we don’t have to worry about port conflicts on the hosts. In some environments, however, it’s not possible to assign enough IP addresses to each host, and we need to manage the assignment of IP address ranges across the environment.

We can create an overlay network on top of the existing infrastructure network that can route requests between containers distributed across multiple nodes. This enables us to assign an IP address to each container and connect to services on standard, well-known ports. This reduces the complexity of port mapping and the need to treat ports as a resource on host machines when scheduling work in the cluster.

Benefits:

Basic DNS use: We can use DNS features to find containers in the network, and thus do not need to write additional code to discover the assigned host ports the service is running on.

Avoids host port resource conflicts: This eliminates port conflicts in situations where we might want to schedule multiple tasks on the same node that need to expose the same port.

Simpler support for connectivity with legacy code: In some situations, legacy code can make it difficult or nearly impossible to use a different port.

Networking management: Although it is necessary to configure and manage an overlay network, it is often easier to manage the deployment and configuration of an overlay network than to deal with host IP ranges and service port mappings.

An overlay network can be extremely useful, especially with a large cluster or clusters spanning multiple data centers. There are, however, concerns with the additional performance overhead and the need to install and manage another service on the cluster nodes.

Figure 5.8 provides a visual representation of an overlay network created on top of the existing infrastructure network, enabling us to route traffic to services within the cluster. As we can see, the host machine is running a multi-homed gateway service that is bound to port 80 of the host machine on 10.0.0.4 and connected to the overlay network. The gateway service can proxy inbound requests to the order service on port 80 at 172.16.20.1, and the order service can connect to an instance of the catalog service at either 172.16.20.2 or 172.16.30.1 on well-known port 80. Any of the name resolution options can be used for service discovery, including those that come with some of the overlay technologies we will cover here.


FIGURE 5.8: Service lookup using a proxy

The nice thing about this approach is that each container is now directly addressable, and although it can add a little complexity to the networking configuration, it significantly simplifies container lookup and management. All our services can be deployed listening on the well-known HTTP port 80, each at its very own IP address. We can use well-known ports for the various services; it’s easy to expose multiple ports on a container; and it can simplify service discovery.
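To make this more concrete, the sketch below shows roughly how the deployment in Figure 5.8 could be expressed with plain Docker commands, using Docker’s overlay driver (covered in the next section). The network name and image names are hypothetical; only the gateway publishes a host port, while the other services simply listen on port 80 at their own overlay IP addresses.

$ docker network create --driver overlay app-net                   # hypothetical overlay network
$ docker run -d --net=app-net --name=order order-svc               # listens on port 80 at its own IP
$ docker run -d --net=app-net --name=catalog catalog-svc           # another service can also use port 80
$ docker run -d --net=app-net --name=gateway -p 80:80 gateway-img  # only the gateway binds host port 80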

Technologies

There are a number of technologies available in the market that can be used to create and manage an overlay network. Each of these technologies provides a different set of features and comes with trade-offs that need to be considered.

Docker Networking

The Docker engine includes a built-in multi-host networking feature that provides Software Defined Networking (SDN) for containers. The Docker networking feature creates an overlay network using kernel-mode Open vSwitch (OVS) and Virtual Extensible LAN (VXLAN) encapsulation. The Docker networking feature requires a key/value (KV) store to create and manage the VXLAN mesh between the various nodes. The KV store is pluggable and currently supports the popular ZooKeeper, etcd, and Consul stores.
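As a rough sketch of how this is wired up (the Consul address, interface name, subnet, and network name below are assumptions, and the exact daemon invocation varies by Docker version), each Docker daemon is pointed at the shared KV store, and the overlay network is then created once and becomes visible on every node:

# On each node, point the Docker daemon at the shared KV store (Consul assumed here).
$ dockerd --cluster-store=consul://10.0.0.10:8500 --cluster-advertise=eth0:2376

# On any one node, create the overlay network; it is then available cluster-wide.
$ docker network create --driver overlay --subnet=172.16.20.0/24 app-net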

The Docker networking feature also provides service discovery features that make all containers on the same overlay network aware of each other. Because multi-host networking is built into the Docker engine, we do not have to deal with deploying a network overlay to all the host nodes. In true Docker fashion, this is pluggable, and we can replace it with something like Weave, which offers more advanced features.
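For example, with the hypothetical app-net network and catalog container from the earlier sketch, any container attached to the same overlay network can resolve its peers by name, with no extra discovery code:

$ docker run --rm -it --net=app-net alpine ping -c 1 catalog   # resolves the catalog container by name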

Weave

Weaveworks Weave Net (http://weave.works) is a platform for connecting Docker containers, regardless of where they’re located. Weave uses a peering system to discover and network containers running on separate hosts, without the need to manually configure networking. Weave creates two containers on the host machine: a router container, which captures traffic intended for containers managed by Weave; and a DNS discovery container, which provides automatic DNS discovery for Weave containers.

Registering a Docker container with Weave assigns it a DNS entry and makes it available to other containers on the network. You can reference a Weave container from another simply by using its Weave-assigned DNS name. Weave also enables multiple containers to share the same name for load balancing, fault tolerance, hot-swapping, and redundancy, and it supports additional features such as traffic encryption, host network integration, and application isolation.
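A minimal sketch of bringing up Weave Net on two hosts follows (the peer address and image name are assumptions; see the Weave documentation for the complete workflow). Weave is launched on each host, the hosts are peered, and the Docker client is pointed at Weave’s proxy so that new containers are attached to the Weave network and registered in DNS automatically:

host1$ weave launch
host2$ weave launch 10.0.0.4                      # peer with the first host (address assumed)
host2$ eval $(weave env)                          # route docker commands through the Weave proxy
host2$ docker run -d --name=catalog catalog-svc   # now resolvable as catalog.weave.local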

There are some benefits to running Weave Net in place of Docker networking. Weave Net does not require a KV store, so there is no additional software to install and manage for that purpose. It’s more resilient to network partitions, offers simpler support for cross-site deployments, and provides a more robust service discovery option. It’s also a great option for Kubernetes, Mesos, and other container-centric schedulers.

Flannel

Flannel is a virtual mesh network that assigns a subnet to each container host. Flannel removes the need to map ports to containers by giving each host a pool of IP addresses, which can be allocated to individual containers. Flannel is a CoreOS project, but can be built for multiple Linux distributions.

A basic Flannel network encapsulates IP frames in UDP packets. Flannel can use different back ends, including VXLAN, Amazon VPC routes, and Google Compute Engine routes.
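A minimal sketch of a Flannel setup follows (the etcd endpoint and address range are assumptions). The cluster-wide address range and back end are written to etcd, and the flannel daemon on each host then leases a per-host subnet out of that range for its containers:

# Store the cluster-wide network configuration in etcd (VXLAN back end assumed).
$ etcdctl set /coreos.com/network/config '{"Network": "10.1.0.0/16", "Backend": {"Type": "vxlan"}}'

# Run the flannel daemon on each host; it leases a subnet such as 10.1.15.0/24 for that host.
$ flanneld --etcd-endpoints=http://10.0.0.10:2379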

Project Calico as an Alternative to Overlay

Project Calico (http://www.projectcalico.org) seeks to simplify data center networking by providing a purely IP-based solution. Working over both IPv4 and IPv6, Calico supports container-based solutions such as Docker, as well as OpenStack virtual machines.

With Calico, each container is given a unique IP address. Each host runs an agent (known as “Felix”) that manages the routing and address details for the containers on that host. Using the Border Gateway Protocol, each host routes data directly to and from each container without the need for overlays, tunneling, or Network Address Translation. Access Control Lists can grant or limit public access to individual containers, isolate workloads, or enforce security policies.

This approach can offer better scale and performance because it avoids the overhead of VXLAN tunnels, and it simplifies network troubleshooting because packets are not encapsulated.
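As an illustration only (this assumes Calico’s Docker network plugin and IPAM driver are installed on the hosts, and the network and image names are hypothetical), a Calico-backed network can be created through the same Docker networking commands, after which each container receives a directly routable IP address with no encapsulation:

# Create a Docker network backed by Calico instead of a VXLAN overlay.
$ docker network create --driver calico --ipam-driver calico-ipam frontend-net
$ docker run -d --net=frontend-net --name=catalog catalog-svc   # receives a Calico-routed IP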
