Chapter 3. Docker Multihost Networking

As long as you’re using Docker on a single host, the techniques introduced in the previous chapter are really all you need. However, if the capacity of a host is not sufficient to sustain the workload, you will either need to buy a bigger box (scale up) or add more machines of the same type (scale out).

In the latter case, you end up with a network of machines (i.e., a cluster). Now a number of questions arise: How do containers on different hosts talk to each other? How do you control communication among containers, and between containers and the outside world? How do you keep state, such as IP address assignments, consistent across the cluster? What are the integration points with the existing networking infrastructure? What about security policies?

In order to address these questions, we will review technologies for Docker multihost networking in this chapter.1

Tip

For the options discussed in this chapter, please do remember that Docker subscribes to a “batteries included but replaceable” paradigm. By that I mean that there will always be default functionality (like networking or service discovery) that you can swap out for alternatives.

Overlay

In March 2015, Docker, Inc., acquired the software-defined networking (SDN) startup SocketPlane and rebranded it as the Docker Overlay Driver; this is the default for multihost networking in Docker 1.9 and above. The Overlay Driver extends the normal bridge mode with peer-to-peer communication and uses a pluggable key-value store backend to distribute cluster state, supporting Consul, etcd, and ZooKeeper.
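To make this concrete, here is a minimal sketch of setting up an overlay network with Docker 1.9. It assumes a Consul instance reachable at consul-host:8500 and that eth0 is the interface the hosts use to reach each other; the network and container names are invented for the example:

    # On each Docker host, point the daemon at the shared key-value store:
    docker daemon \
      --cluster-store=consul://consul-host:8500 \
      --cluster-advertise=eth0:2375

    # On any one host, create the overlay network; it becomes visible cluster-wide:
    docker network create -d overlay --subnet=10.0.9.0/24 mh-net

    # Containers launched on different hosts with --net=mh-net can now
    # reach each other directly:
    docker run -d --net=mh-net --name=web nginx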

Flannel

CoreOS flannel is a virtual network that gives a subnet to each host for use with container runtimes. Each container (or pod, in the case of Kubernetes) gets a unique, routable IP inside the cluster, and flannel supports a range of backends, such as VXLAN, AWS VPC, and the default layer 2 UDP overlay network. The advantage of flannel is that it reduces the complexity of doing port mapping. For example, Red Hat’s Project Atomic uses flannel.
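flannel reads its configuration from etcd. A minimal sketch, assuming etcd runs locally and using an invented cluster network of 10.1.0.0/16 (here with the VXLAN backend rather than the UDP default):

    # Store the network configuration under flannel's default etcd key:
    etcdctl set /coreos.com/network/config \
      '{ "Network": "10.1.0.0/16", "Backend": { "Type": "vxlan" } }'

    # Start the flannel daemon; it leases a subnet for this host:
    flanneld --etcd-endpoints=http://127.0.0.1:2379

    # flannel writes its lease to /run/flannel/subnet.env; the Docker
    # daemon's bridge can then be configured from it (e.g., --bip).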

Weave

Weaveworks Weave creates a virtual network that connects Docker containers deployed across multiple hosts. Applications use the network just as if the containers were all plugged into the same network switch, with no need to configure port mappings and links. Services provided by application containers on the Weave network can be made accessible to the outside world, regardless of where those containers are running. Similarly, existing internal systems can be exposed to application containers irrespective of their location. Weave can traverse firewalls and operate in partially connected networks. Traffic can be encrypted, allowing hosts to be connected across an untrusted network. You can learn more about Weave’s discovery features in Alvaro Saurin’s “Automating Weave Deployment on Docker Hosts with Weave Discovery”.
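To give you a feel for the workflow, here is a minimal sketch using the weave command-line tool; the peer address 192.168.1.10 and the container are placeholders:

    # On the first host, start the Weave router:
    weave launch

    # On each additional host, start Weave and peer it with the first host:
    weave launch 192.168.1.10

    # Point the Docker client at Weave's proxy so new containers join the network:
    eval $(weave env)

    # Containers started now can reach each other across hosts:
    docker run -d --name=web nginx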

Project Calico

Metaswitch’s Project Calico uses standard IP routing and networking tools to provide a layer 3 solution; to be precise, it relies on the venerable Border Gateway Protocol (BGP), originally defined in RFC 1105 and currently specified as BGP-4 in RFC 4271. In contrast, most other networking solutions, including Weave, build an overlay network by encapsulating layer 2 traffic into a higher layer. Calico’s primary operating mode requires no encapsulation and is designed for datacenters where the organization has control over the physical network fabric.
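The no-encapsulation idea can be illustrated with plain Linux routing. The following is not Calico’s actual tooling, just a sketch of what a layer 3 approach boils down to, with invented addresses: each host owns a container subnet, and every other host installs an ordinary route to it:

    # Host A (192.168.1.10) owns the container subnet 10.65.1.0/24;
    # host B (192.168.1.11) owns 10.65.2.0/24.

    # On host A, route B's container subnet via B's physical address:
    ip route add 10.65.2.0/24 via 192.168.1.11

    # On host B, the mirror image:
    ip route add 10.65.1.0/24 via 192.168.1.10

In Calico, a BGP agent on each host distributes exactly these routes automatically, so no tunnels or encapsulation are needed.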

Open vSwitch

Open vSwitch is a multilayer virtual switch designed to enable network automation through programmatic extension while supporting standard management interfaces and protocols, such as NetFlow, IPFIX, LACP, and 802.1ag. In addition, it is designed to support distribution across multiple physical servers, quite similar to VMware’s vNetwork distributed vSwitch or Cisco’s Nexus 1000V.
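In a Docker multihost setup, Open vSwitch is typically used to bridge containers on two hosts through a tunnel. A minimal sketch, assuming Open vSwitch is installed and 192.168.1.11 is the peer host’s address:

    # Create an OVS bridge for the containers on this host:
    ovs-vsctl add-br ovs-br0

    # Add a GRE tunnel port pointing at the peer host:
    ovs-vsctl add-port ovs-br0 gre0 \
      -- set interface gre0 type=gre options:remote_ip=192.168.1.11

    # Repeat on the peer with remote_ip set to this host's address; containers
    # attached to ovs-br0 on both sides then share one layer 2 segment.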

Pipework

Pipework was created by Jérôme Petazzoni, a rather well-known Docker engineer, and promises to be “software-defined networking for Linux containers.” It lets you connect containers in arbitrarily complex scenarios using cgroups and namespaces, and works with LXC containers as well as with Docker. Given Docker, Inc.’s acquisition of SocketPlane and the introduction of the Overlay Driver (see “Overlay”), we will have to see how, if at all, these activities will consolidate.
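Pipework’s basic usage is a one-liner; the bridge name and addresses below are invented for the example:

    # Start a container without any Docker-configured networking:
    CID=$(docker run -d --net=none --name=db postgres)

    # Attach it to the host bridge br1 with a static address:
    pipework br1 $CID 192.168.5.10/24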

OpenVPN

OpenVPN, another OSS project that has a commercial offering, allows you to create virtual private networks (VPNs) using TLS. These VPNs can also be used to securely connect containers to each other over the public Internet. If you want to try out a Docker-based setup, I suggest taking a look at DigitalOcean’s great walk-through tutorial “How To Run OpenVPN in a Docker Container on Ubuntu 14.04”.
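Heavily abbreviated, the tutorial’s approach, based on the kylemanna/openvpn image, looks roughly like this; the server name and volume are placeholders, and the ovpn_* helper scripts are part of that image’s documented interface, not standard Docker commands:

    # Generate the server configuration into a data volume:
    docker run -v ovpn-data:/etc/openvpn --rm kylemanna/openvpn \
      ovpn_genconfig -u udp://vpn.example.com

    # Initialize the PKI (prompts for a CA passphrase):
    docker run -v ovpn-data:/etc/openvpn --rm -it kylemanna/openvpn ovpn_initpki

    # Run the OpenVPN server; NET_ADMIN is required to manage the tun device:
    docker run -v ovpn-data:/etc/openvpn -d -p 1194:1194/udp \
      --cap-add=NET_ADMIN kylemanna/openvpn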

Future Docker Networking

In the recently released Docker version 1.9, a new docker network command was introduced. With it, containers can dynamically connect to other networks, each potentially backed by a different network driver. The default multihost network driver is Overlay (discussed earlier in “Overlay”).
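The subcommands look as follows; the network and container names are examples:

    # List the networks known to this Docker host:
    docker network ls

    # Create a new network using the default bridge driver:
    docker network create backend

    # Dynamically attach a running container to it, and detach it again:
    docker network connect backend web
    docker network disconnect backend web

    # Inspect the network, including its member containers:
    docker network inspect backend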

In order to gather more hands-on experience, I suggest experimenting with these commands yourself; the Docker docs’ dedicated section on multihost networking is a good starting point.

Wrapping It Up

In this chapter, we reviewed multihost networking options, and we close out with a brief discussion on other aspects you should be aware of in this context:

IPVLAN

The Linux kernel version 3.19 introduced an IP-per-container feature. This assigns each container on a host a unique and (worldwide) routable IP address. Effectively, IPVLAN takes a single network interface and creates multiple virtual network interfaces off it; unlike with macvlan, these share the parent’s MAC address and are distinguished by their IP addresses. This relatively recent feature, contributed by Mahesh Bandewar of Google, is conceptually similar to the macvlan driver but more flexible because it can operate at both L2 and L3. If your Linux distro ships a kernel of version 3.19 or later, you’re in luck; otherwise, you cannot yet benefit from this feature.
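With a recent enough kernel and iproute2, you can experiment with IPVLAN directly. A minimal sketch, assuming eth0 as the parent interface and using a named network namespace to stand in for a container:

    # Create a network namespace to play the role of a container:
    ip netns add c1

    # Create an IPVLAN interface in L3 mode on top of eth0 and move it there:
    ip link add link eth0 name ipvl0 type ipvlan mode l3
    ip link set ipvl0 netns c1

    # Assign an address and bring the interface up inside the namespace:
    ip netns exec c1 ip addr add 10.0.5.2/24 dev ipvl0
    ip netns exec c1 ip link set ipvl0 up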

IP address management (IPAM)

One of the bigger challenges concerning multihost networking is the allocation of IP addresses to containers in a cluster.2
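Docker 1.9’s libnetwork lets you influence IPAM when creating a network; a minimal sketch with invented address ranges:

    # Constrain the address space the network allocates container IPs from:
    docker network create -d overlay \
      --subnet=10.10.0.0/16 \
      --ip-range=10.10.1.0/24 \
      --gateway=10.10.1.1 \
      ipam-demo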

Orchestration tool compatibility

Most, if not all, of the multihost networking solutions discussed in this chapter are effectively co-processes that wrap the Docker API and configure the networking for you. This means that before you select one, you should check for compatibility issues with the container orchestration tool you’re using. More on this topic in Chapter 5.

IPv4 versus IPv6

To date, most Docker deployments use standard IPv4, but IPv6 is seeing some uptake; Docker has supported IPv6 since v1.5 (released in February 2015). The ever-growing address shortage in IPv4 land might encourage more IPv6 deployments down the line, which would also do away with NAT; however, it is unclear when exactly the tipping point will be reached.
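Enabling IPv6 comes down to two daemon flags; the sketch below uses a documentation prefix (2001:db8::/32 is reserved for examples), which you would replace with a prefix routed to your host:

    # Start the daemon with IPv6 enabled and a prefix for container addresses:
    docker daemon --ipv6 --fixed-cidr-v6="2001:db8:1::/64"

    # Containers now additionally receive an IPv6 address from that prefix:
    docker run -it busybox ifconfig eth0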

At this point in time, you should have a good understanding of the low-level and Docker networking options and challenges. We now move on to the next layer in the stack: service discovery.

1 For some background on this topic, read DOCKER-8951, which contains a detailed discussion of the problem statement; also, the Docker docs now have a dedicated section on multihost networking.

2 For some background on IPAM, check out the “Mesos Networking” talk from MesosCon 2015 in Seattle; it contains an excellent discussion of the problem statement and potential approaches to solving it.
