A load balancer is a common system that uses the exported resources pattern in Puppet. Load balancers forward traffic across multiple nodes, providing both high availability through redundancy and performance via horizontal scaling. Load balancers like HAProxy also make it possible to design applications that direct users to a data center closer to them for better performance.
The load balancer itself will receive a traditional configuration, while every member of the balancer will export a resource to be consumed by the load balancer. The load balancer then uses each entry from each exported resource to build a combined configuration file.
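As a reminder of the mechanics before we apply them to HAProxy, here is a minimal, illustrative sketch (the file resource and tag name are arbitrary, not part of our load balancer setup): a resource declared with @@ is exported to PuppetDB rather than applied locally, and a <<| |>> collector on another node realizes every matching export:

# On each member node: export the resource instead of applying it locally
@@file { "/etc/motd.d/${::hostname}":
  content => "Member ${::ipaddress}\n",
  tag     => 'motd_members',
}

# On the collecting node: realize every export matching the tag
File <<| tag == 'motd_members' |>>

Each member contributes one export, and the collecting node's catalog ends up containing all of them, which is exactly how the load balancer will assemble its member list.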
We'll need to create two profiles to support this use case: one for the load balancer and one for each balancer member. The balancer member profile is a simple exported resource that declares the listening service it belongs to and reports the node's hostname, IP address, and available ports to the load balancer. The loadbalancer profile will configure a very simple default load balancer and a listening service to provide forwarding on, and, most importantly, collect all configurations from the exported resources:
class profile::balancermember {
  @@haproxy::balancermember { 'haproxy':
    listening_service => 'myapp',
    ports             => ['80', '443'],
    server_names      => $::hostname,
    ipaddresses       => $::ipaddress,
    options           => 'check',
  }
}

class profile::loadbalancer {
  include haproxy

  haproxy::listen { 'myapp':
    ipaddress => $::ipaddress,
    ports     => ['80', '443'],
  }

  Haproxy::Balancermember <<| listening_service == 'myapp' |>>
}
We'll then want to place these profiles on two separate hosts. In the following example, I'm placing the balancermember profile on the appserver node and the loadbalancer profile on the haproxy node. We'll continue expanding on our site.pp from before, adding code as we go along:
#site.pp
include profile::etchosts
node 'haproxy' {
include profile::loadbalancer
}
node 'appserver' {
include profile::balancermember
}
# Provided so nodes don't fail to classify anything
node default { }
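Note that exported resources only work when the Puppet server is connected to PuppetDB; without it, the @@ declaration and the collector have nowhere to store or query exports. If your server isn't already set up this way (the puppetlabs-puppetdb module normally manages this for you), the relevant settings live in the server's puppet.conf:

# puppet.conf on the Puppet server
[master]
storeconfigs         = true
storeconfigs_backend = puppetdb

Also keep in mind the ordering: the appserver must complete a run first so its exported resource lands in PuppetDB, and only then will a run on haproxy have anything to collect.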
In the following sample, the load balancer had already been configured as a load balancer, but had no balancer members to forward to. The appserver had since completed a run and reported its exported haproxy configuration to PuppetDB. Finally, the HAProxy server collects and realizes that resource, placing it as lines in its configuration file and enabling the forwarding of traffic from haproxy to the appserver:
[root@haproxy vagrant]# puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Loading facts
Info: Caching catalog for haproxy
Info: Applying configuration version '1529036882'
Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/content:
--- /etc/haproxy/haproxy.cfg 2018-06-15 04:27:25.398339144 +0000
+++ /tmp/puppet-file20180615-17937-6bt84x 2018-06-15 04:28:05.100339144 +0000
@@ -27,3 +27,5 @@
bind 10.0.2.15:443
balance roundrobin
option tcplog
+ server appserver 10.0.2.15:80 check
+ server appserver 10.0.2.15:443 check
Info: Computing checksum on file /etc/haproxy/haproxy.cfg
Info: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]: Filebucketed /etc/haproxy/haproxy.cfg to puppet with sum dd6721741c30fbed64eccf693e92fdf4
Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/content: content changed '{md5}dd6721741c30fbed64eccf693e92fdf4' to '{md5}b819a3af31da2d0e2310fd7d521cbc76'
Info: Haproxy::Config[haproxy]: Scheduling refresh of Haproxy::Service[haproxy]
Info: Haproxy::Service[haproxy]: Scheduling refresh of Service[haproxy]
Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Service[haproxy]/Service[haproxy]: Triggered 'refresh' from 1 event
Notice: Applied catalog in 0.20 seconds
When a user requests port 80 (HTTP) or port 443 (HTTPS) on the HAProxy server, it will automatically forward the traffic to our appserver. If we had multiple app servers, it would even split the load between them, allowing for horizontal scaling.
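To see that scaling in action, adding a second member requires no changes to the load balancer at all; it is just another node with the same profile (the hostname appserver2 here is an assumption about your environment). After it completes a run and exports its resource, the next run on haproxy will collect it and add a second server line to haproxy.cfg:

# site.pp addition for a second balancer member
node 'appserver2' {
  include profile::balancermember
}

This is the payoff of the exported resources pattern: membership in the pool is driven by classification, not by hand-editing the load balancer's configuration.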