Load-balancing control services

When more compute services are added to the cluster, OpenStack's scheduler distributes new instances across them appropriately. When new control or network services are added, however, traffic has to be deliberately directed to them. OpenStack has nothing built in to distribute traffic across the API services, but a load-balancing service called HAProxy can do this for us. HAProxy can run anywhere it can reach the endpoints being balanced: on its own node, or on a node that already has part of OpenStack installed on it. TripleO runs HAProxy on each of the control nodes.
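As a quick check, on a TripleO-deployed control node you can confirm that HAProxy is running and see which ports it is listening on. This is a minimal sketch; the exact service name and output depend on your distribution and TripleO version, and newer releases may run HAProxy in a container:

    # Confirm the HAProxy service is active on a control node
    systemctl status haproxy

    # List the listening sockets owned by HAProxy (the frontends)
    ss -tlnp | grep haproxy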

HAProxy has the concepts of frontends and backends. A frontend is where HAProxy listens for incoming traffic, and a backend defines where that traffic is sent and balanced. When a user makes an API call to one of the OpenStack services, the HAProxy frontend assigned to that service receives the request, and HAProxy decides which backend server should handle it. There will be one backend server entry for each control node, because the service should be running on each of them. The backend has no knowledge of HAProxy and serves traffic as if it were being called directly by the end user. The frontend and backend definitions are configured in /etc/haproxy/haproxy.cfg; have a look at this file if you are interested in learning more about HAProxy's configuration.
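To make this concrete, here is a minimal sketch of what a frontend and backend pair might look like in /etc/haproxy/haproxy.cfg. The addresses, node names, and the choice of the Keystone API on port 5000 are illustrative assumptions, not values from a real deployment:

    # Hypothetical frontend: listens on the virtual IP for Keystone API traffic
    frontend keystone_public
        bind 192.168.1.100:5000
        default_backend keystone_api

    # Hypothetical backend: one server entry per control node
    backend keystone_api
        balance roundrobin
        server controller-0 192.168.1.11:5000 check
        server controller-1 192.168.1.12:5000 check
        server controller-2 192.168.1.13:5000 check

The check option tells HAProxy to health-check each server, so traffic is only sent to control nodes where the service is actually responding.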

The control services can be load-balanced this way because they are stateless applications, meaning that no information is carried over between requests. Each time a user makes a request to an OpenStack API endpoint, their authentication token is validated, the request is processed, and the connection is closed.
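As an illustration, the same token can be reused across consecutive requests even though HAProxy may route each one to a different control node. The virtual IP and the Nova endpoint below are assumptions made for the sake of the example:

    # Request a token once, then reuse it; each request may land on a
    # different backend, and all succeed because no state is shared
    TOKEN=$(openstack token issue -f value -c id)
    curl -H "X-Auth-Token: $TOKEN" http://192.168.1.100:8774/v2.1/servers
    curl -H "X-Auth-Token: $TOKEN" http://192.168.1.100:8774/v2.1/servers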
