External network access

Every project will have at least one network to launch instances on, built just as we built the network in the previous section. Whenever a new project is created, those same steps will need to be repeated for that project. All projects, however, will share one network that provides access to the outside world. Let's work through creating this external network.

Preparing a network

Earlier, we discussed how Neutron is an API layer that manages virtual networking resources. The preparation for external network access differs from one Neutron plugin to another, so talk to your networking vendor about your specific implementation. In general, what this preparation accomplishes is connecting the networking node to a set of externally routable IP addresses. External just means external to, or outside of, the OpenStack cluster. These may be a pool within your company's 10.0.0.0/8 network or a pool of IP addresses public to the Internet. The project network IP addresses are not routable outside the cloud. The floating IP addresses allocated from the external network are, and they are mapped to the project IP addresses on the instances to provide access to the instances from outside of your OpenStack deployment. This mapping is accomplished using Network Address Translation (NAT) rules.
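
To make this concrete, here is the kind of NAT rule set the Neutron L3 agent programs into a router's network namespace once a floating IP has been associated with an instance. The router UUID, the floating IP 192.0.2.105, and the fixed IP 10.1.0.5 below are placeholders for illustration; yours will differ. On the node hosting the router, you can inspect rules similar to the following (output abridged):

overcloud-controller-0# ip netns list
overcloud-controller-0# ip netns exec qrouter-{router uuid} iptables -t nat -S | grep 192.0.2.105
-A neutron-l3-agent-PREROUTING -d 192.0.2.105/32 -j DNAT --to-destination 10.1.0.5
-A neutron-l3-agent-float-snat -s 10.1.0.5/32 -j SNAT --to-source 192.0.2.105

The DNAT rule translates inbound traffic addressed to the floating IP into the instance's project IP, and the SNAT rule rewrites the instance's outbound traffic to use the floating IP.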

Since we are using Open vSwitch (OVS) for this deployment, let's take a look at how OVS was set up by Triple-O when it was installed. Start by looking at the virtual switches defined on the control node. To do this, you will need the control node's IP address, which you can get from the undercloud: source the stackrc file, then ask Nova for a list of servers. We will do this again when we create instances in Chapter 5, Instance Management:

undercloud# source stackrc
undercloud# openstack server list
undercloud# ssh heat-admin@{ctlplane ip address}
overcloud-controller-0# sudo -i
overcloud-controller-0# ovs-vsctl show
5cd37026-4a3b-4601-b936-0143b0ef1545
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth0"
            Interface "eth0"
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-c000020c"
            Interface "vxlan-c000020c"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.0.2.13", out_key=flow, remote_ip="192.0.2.12"}
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "tap39fb599f-25"
            tag: 1
            Interface "tap39fb599f-25"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    ovs_version: "2.4.0"

In this output, you can see three bridges. You can think of each of these exactly as you would think of a physical switch: a network appliance with a set of ports to plug Ethernet cables into. A port is simply something plugged into one of these virtual switches. Each bridge has an internal port named after itself, and the bridges are connected to one another with patch ports: br-int is patched to br-tun through the patch-tun/patch-int pair, and to br-ex through the int-br-ex/phy-br-ex pair. You can also see the VXLAN tunnel established between the control node and the compute node on br-tun.

br-int is known as the integration bridge and is used for local attachments to OVS. br-tun is the tunnel bridge, used to establish tunnels between nodes. br-ex is the external bridge, and it is the one we need to focus on, because it is patched to a physical interface on the control node: in the previous example, you can see eth0 as a port on br-ex. This is the device on your control node that can route traffic to the external pool of IP addresses. When traffic flows through this Ethernet device, it must reach OVS rather than the host's own network stack. To make sure this happens, the IP address associated with the Ethernet device must be moved off the device and onto the br-ex OVS bridge.
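
If you want to confirm how a particular device is wired up without reading the full show output, ovs-vsctl can answer targeted questions. For example, list the bridges, list the ports attached to br-ex, and ask which bridge eth0 belongs to:

overcloud-controller-0# ovs-vsctl list-br
br-ex
br-int
br-tun
overcloud-controller-0# ovs-vsctl list-ports br-ex
eth0
phy-br-ex
overcloud-controller-0# ovs-vsctl port-to-br eth0
br-ex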

Next, look at the IP configuration of the host:

overcloud-controller-0# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 00:2c:8c:00:3a:af brd ff:ff:ff:ff:ff:ff
    inet6 fe80::22c:8cff:fe00:3aaf/64 scope link 
       valid_lft forever preferred_lft forever
3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether 66:87:85:12:cf:b7 brd ff:ff:ff:ff:ff:ff
4: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 00:2c:8c:00:3a:af brd ff:ff:ff:ff:ff:ff
    inet 192.0.2.13/24 brd 192.0.2.255 scope global dynamic br-ex
       valid_lft 69570sec preferred_lft 69570sec
    inet 192.0.2.10/32 scope global br-ex
       valid_lft forever preferred_lft forever
    inet6 fe80::22c:8cff:fe00:3aaf/64 scope link 
       valid_lft forever preferred_lft forever
5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether d6:c2:f6:fd:42:44 brd ff:ff:ff:ff:ff:ff
6: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether 2e:6e:05:fb:07:42 brd ff:ff:ff:ff:ff:ff

Notice here that the IP address you connected to with ssh is not on eth0; it is on br-ex. The other OVS bridges also appear as virtual network devices. In addition, there are network configuration files in /etc/sysconfig/network-scripts/; inspect the ifcfg-eth0 and ifcfg-br-ex files and note how they reference each other and OVS.
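
For reference, on a Triple-O deployed node these two files typically look something like the following. The exact contents generated for your deployment may differ, but the key settings are the OVS device types and the cross-reference between the two devices:

overcloud-controller-0# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
overcloud-controller-0# cat /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE=ovs
OVSBOOTPROTO=dhcp
OVSDHCPINTERFACES=eth0

TYPE=OVSPort and OVS_BRIDGE attach eth0 to br-ex, while TYPE=OVSBridge and OVSBOOTPROTO put the DHCP-acquired IP address on the bridge instead of the physical device, matching what you saw in the ip addr output.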

Note

Triple-O does this networking setup for you. Packstack does not.

Creating an external network

Now that you have explored the network plumbing that connects OpenStack to the externally routable IP pool, it is time to tell OpenStack about this set of resources it will manage. Because an external network is a general-purpose resource, it must be created by the administrator.

Exit your control node and source your overcloudrc file on your undercloud node so that you can create the external network as a privileged user. Then, create the external network, as shown in the following commands:

undercloud# neutron net-create --tenant-id service external --router:external=True
undercloud# neutron subnet-create --tenant-id service external 192.0.2.0/24 \
    --disable-dhcp --allocation-pool start=192.0.2.100,end=192.0.2.199

You will notice a few things here. First, the project that the network and subnet are created in is the service project. As mentioned in Chapter 2, Identity Management, all resources are created in a project, and general-purpose resources like these are no exception. They are put into the service project because users do not have direct access to the networks in that project, so they cannot create instances attached directly to the external network. That restriction matters because the underlying virtual networking infrastructure is not structured to let directly attached instances work properly. Second, the network is marked as external. Third, note the allocation pool: the nodes themselves use addresses below 100 (you saw 192.0.2.10 and 192.0.2.13 on the control node earlier), so the pool starts at 192.0.2.100 to keep those addresses out of it. This way, OpenStack will not allocate an IP address that is already in use by one of the nodes. Finally, DHCP is disabled. If DHCP were enabled, OpenStack would try to start and attach a dnsmasq service to the external network, which could conflict with a DHCP service already running outside of OpenStack on that network.
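
You can confirm that these options took effect by showing the new network and subnet; in the output, the network's router:external field should be True and the subnet's enable_dhcp field should be False:

undercloud# neutron net-show external
undercloud# neutron subnet-list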

The final step to make this network accessible to the instances in your OpenStack cloud is setting the project router's gateway to the external network. Let's do that for the router created earlier, as shown in the following command:

undercloud# neutron router-gateway-set my_router external
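
To verify the gateway, show the router and look at its external_gateway_info field, which should now reference the external network's ID. As a quick end-to-end check, you can also allocate a floating IP from the pool; associating floating IPs with instances is covered in Chapter 5, Instance Management:

undercloud# neutron router-show my_router
undercloud# neutron floatingip-create external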