Networking in OpenStack

The OpenStack Networking Guide, https://docs.openstack.org/ocata/networking-guide/index.html, is a detailed and authoritative source for networking in OpenStack. The guide provides a high-level overview of the technology, its dependencies, and deployment examples. Here is an overview and summary of networking in OpenStack:

  • OpenStack networking handles the creation and management of a virtual network infrastructure, including networks, switches, subnets, and routers for devices managed by OpenStack services.
  • Additional services, such as firewalls or VPNs, can also be provided in the form of plugins.
  • OpenStack networking consists of the neutron-server, a database for persistent network information storage, and plugin agents. The agents handle the interaction between Linux networking mechanisms, external devices, and SDN controllers.
  • Neutron exposes an API and integrates with the horizon dashboard, giving administrator and tenant users an interface for creating network services.
  • The main virtualization underpinning is the Linux namespace. A Linux namespace is a way of scoping a particular set of identifiers, including those for networking and processes. A process or network device running within a namespace can only see and communicate with other processes and network devices in the same namespace.
  • Each network namespace has its own routing table, IP address space, and iptables rules.
  • The Modular Layer 2 (ML2) neutron plugin is a framework that allows OpenStack networking to use a variety of layer 2 networking technologies, such as VXLAN and Open vSwitch. Different mechanisms can simultaneously serve different ports of the same virtual network.
  • An L2 agent provides layer 2 network connectivity to OpenStack resources and typically runs on each compute and network node. Available agents are the Open vSwitch agent, Linux bridge agent, SR-IOV NIC switch agent, and MacVTap agent.
L2 Agents (source: https://docs.openstack.org/ocata/networking-guide/config-ml2.html)
  • An L3 agent offers advanced layer 3 services, such as virtual routers and floating IPs. It requires an L2 agent running in parallel.
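To make the namespace scoping concrete, here is a minimal Python sketch of the idea; it does not create real namespaces with `ip netns`, and the namespace and interface names below are invented for illustration. It shows how two namespaces can hold overlapping IP address spaces, because a destination is resolved only against the routing table of one namespace:

```python
from ipaddress import ip_network, ip_address

# Toy model: each namespace carries its own routing table, so identical
# CIDRs can coexist without conflict (names are illustrative, not real netns).
namespaces = {
    "qrouter-a": {"routes": {ip_network("192.168.1.0/24"): "qr-a"}},
    "qrouter-b": {"routes": {ip_network("192.168.1.0/24"): "qr-b"}},  # same CIDR, no clash
}

def lookup(ns_name, dest):
    """Resolve a destination only against the routing table of one namespace."""
    table = namespaces[ns_name]["routes"]
    for net, iface in table.items():
        if ip_address(dest) in net:
            return iface
    return None  # unreachable from this namespace

print(lookup("qrouter-a", "192.168.1.5"))  # -> qr-a
print(lookup("qrouter-b", "192.168.1.5"))  # -> qr-b
```

The same isolation is what lets every tenant router in OpenStack reuse private address ranges without colliding with other tenants.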

OpenStack Networking service neutron includes the following components:

  • API server: This component includes support for layer 2 networking and IP address space management, as well as an extension for a layer 3 router construct that enables routing between a layer 2 network and gateways to external networks.
  • Plugin and agents: These components create subnets, provide IP addressing, and connect and disconnect ports.
  • Messaging queue: This queue accepts RPC requests between agents to complete API operations.
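As a rough illustration of the IP address management work the plugin and agents perform, the following Python sketch carves host addresses out of a subnet, reserving the first address for the gateway. The subnet, the gateway convention, and the helper name are assumptions for the sake of the example, not neutron's actual implementation:

```python
from ipaddress import ip_network

# Sketch of subnet creation and IP allocation: hand out host addresses
# in order, skipping the gateway. All values are illustrative.
subnet = ip_network("10.0.0.0/24")
gateway = next(subnet.hosts())          # first usable host, 10.0.0.1
pool = (h for h in subnet.hosts() if h != gateway)

def allocate_port():
    """Return the next free fixed IP for a new port on this subnet."""
    return next(pool)

print(gateway)          # 10.0.0.1
print(allocate_port())  # 10.0.0.2
print(allocate_port())  # 10.0.0.3
```

Real deployments track allocations in the neutron database so that released addresses can be reused, but the basic task is the same.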
Compute and Network Node (source: https://docs.openstack.org/ocata/networking-guide/intro-os-networking.html)

Network connectivity for instances is achieved through the combination of provider and tenant networks:
  • The provider network offers layer 2 connectivity to instances, with optional support for DHCP and metadata services. It connects to the existing layer 2 network in the data center, typically using VLANs. Only an administrator can manage provider networks.
  • Tenant networks are self-service networks that use a tunneling mechanism, such as VXLAN or GRE. They are entirely virtual and self-contained. For a tenant network to reach the provider network or an external network, a virtual router is used.
  • From a tenant network to the provider network, source network address translation (SNAT) is performed on the virtual router.
  • From the provider network to a tenant network, destination network address translation (DNAT) with a floating IP is performed on the virtual router.
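The two NAT directions can be sketched with a toy Python model of the virtual router's translation tables. All addresses and function names below are illustrative, not OpenStack APIs:

```python
# Toy model of the virtual router's NAT behavior between a tenant
# network and the provider network. Addresses are illustrative.
ROUTER_EXTERNAL_IP = "203.0.113.10"           # router gateway on the provider network
floating_ips = {"203.0.113.25": "10.0.0.5"}   # floating IP -> instance fixed IP

def snat(src_ip):
    """Outbound tenant traffic: rewrite the source to the router's external IP."""
    return ROUTER_EXTERNAL_IP

def dnat(dst_ip):
    """Inbound provider traffic: rewrite a floating IP to the instance's fixed IP."""
    return floating_ips.get(dst_ip, dst_ip)

print(snat("10.0.0.7"))      # 203.0.113.10
print(dnat("203.0.113.25"))  # 10.0.0.5
```

In a real deployment these rewrites are installed as iptables rules inside the router's network namespace.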

Using the Linux bridge as an example, let's take a look at how the provider network is mapped to the physical network:

Linux Bridge Tenant Network (source: https://docs.openstack.org/ocata/networking-guide/deploy-lb-selfservice.html)

The following diagram shows a more detailed link topology for interface 2:

Linux Bridge Provider Network (source: https://docs.openstack.org/ocata/networking-guide/deploy-lb-provider.html)

Let us take a closer look at the preceding topology:

  • Interface 1 is a physical interface with an IP address dedicated to management purposes; this is typically an isolated VLAN in the physical infrastructure. The controller node receives the network management API call and, through the management interface on the compute node, configures the DHCP agent and metadata agent for the tenant network instance.
  • Interface 2 on the compute node is dedicated as a layer 2 trunk interface to the physical network infrastructure. Each instance has its own dedicated Linux bridge with separate iptables rules and virtual Ethernet ports. Each instance's traffic is tagged with a particular VLAN ID for separation and trunked through interface 2. In this deployment, the instance traffic traverses the same VLAN as the provider network.
  • Interface 3 provides an additional overlay option for tenant networks if the physical network supports VXLAN. In the compute node, instead of tagging with a VLAN ID, the Linux bridge uses a VXLAN VNI for identification. The traffic then traverses the physical network to reach the network node over the same VXLAN tunnel, which connects to the router namespace. The router namespace has a separate virtual connection to a Linux bridge attached to interface 2 for the provider network.
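One reason VXLAN overlays scale beyond VLAN-based segmentation is the size of the identifier space: the 802.1Q VLAN ID is a 12-bit field (with IDs 0 and 4095 reserved), while the VXLAN VNI is 24 bits. A quick calculation:

```python
# Segment ID space: 802.1Q VLAN ID (12 bits, 0 and 4095 reserved)
# versus VXLAN VNI (24 bits).
vlan_ids = 2**12 - 2
vxlan_vnis = 2**24
print(vlan_ids)    # 4094 usable VLANs
print(vxlan_vnis)  # 16777216 possible VNIs
```

This is why large multi-tenant clouds favor a tunneling overlay for tenant networks while reserving VLANs for the provider network.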

Here is an example of how a layer 3 agent can interact with the BGP router in the provider network, which in turn peers with the external network:

BGP Dynamic Routing (source: https://docs.openstack.org/ocata/networking-guide/config-bgp-dynamic-routing.html)
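As a rough sketch of what the figure describes, the L3 agent's BGP speaker essentially advertises each tenant subnet prefix, plus a host route per floating IP, with the router's provider-network gateway address as the next hop. The router, subnet, and floating IP values below are invented for illustration:

```python
from ipaddress import ip_network

# Illustrative inputs: a router gateway on the provider network,
# two tenant subnets behind it, and one floating IP.
router_gateway_ip = "203.0.113.10"
tenant_subnets = [ip_network("10.0.0.0/24"), ip_network("10.0.1.0/24")]
floating_ips = ["203.0.113.25"]

def build_advertisements():
    """Build (prefix, next_hop) pairs the BGP speaker would announce."""
    routes = [(str(net), router_gateway_ip) for net in tenant_subnets]
    routes += [(fip + "/32", router_gateway_ip) for fip in floating_ips]
    return routes

for prefix, next_hop in build_advertisements():
    print(prefix, "via", next_hop)
```

With these announcements, upstream routers learn to reach tenant subnets and floating IPs without static routes pointing at the network node.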

In the case of a Linux bridge with BGP routing, the L3 gateway resides on the network node, which can directly or indirectly peer with the external network. In the next section, we will take a look at some of the command syntax for OpenStack configuration.
