How Puppet code may change in the future

Now, hold on: stop and think again about what we've seen in this chapter, and particularly in the last section: variables that define our infrastructure can be dynamically populated according to the hosts that have specific classes or resources. If we add hosts that provide these services, the other hosts can start using them automatically.

This is exactly what we need in order to build dynamic and elastic environments with Puppet, where new services are made available to other nodes, which are then configured to use them.

For example, to manage a load balancer's configuration, we can populate a variable with the IP addresses of all the nodes that include the apache class, using the query_nodes function from the puppetdbquery module, and use it in the ERB template that builds the configuration file:

$web_servers_ip = query_nodes('Class[apache]', 'ipaddress')
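The template can then iterate over this array to build the pool of backend servers. A minimal sketch of what the ERB template might contain, assuming an HAProxy-style configuration (the backend name, port, and server names are hypothetical):

# Fragment of the load balancer's ERB template (HAProxy syntax assumed)
backend web_pool
  balance roundrobin
<% @web_servers_ip.each_with_index do |ip, i| -%>
  server web<%= i %> <%= ip %>:80 check
<% end -%>

The check keyword makes HAProxy health-check each member, which also helps with the race conditions discussed later in this section.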

This is a simple case that probably doesn't fit real scenarios, where different Apache web servers are likely to serve different roles on different machines, but it gives an idea of the approach.

In other cases, we might need to know the value of a fact from a given node and use it on another node. In the following example, cluster_id might be a custom fact that returns an ID generated on the db01 host; its value can then be used on another host (a cluster member) to configure it accordingly:

$db01_facts = query_facts('hostname=db01', ['cluster_id'])
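The query_facts function returns a hash keyed by the certname of each matching node, so the value we want has to be extracted from it; assuming, hypothetically, that the node's certname is db01.example.com:

# $db01_facts looks like: { 'db01.example.com' => { 'cluster_id' => '<generated id>' } }
$shared_cluster_id = $db01_facts['db01.example.com']['cluster_id']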

It's important to understand the kind of data we find and query on PuppetDB: what we retrieve with the puppetdbquery module, for example, are the resources contained in the last catalog compiled by the Puppet Master and stored for each node. We are not sure whether these resources have been applied successfully (we would have to query the events endpoint for that, which is currently not possible with this module), and we are not sure that the expected services are actually available.
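For reference, the events endpoint can be queried directly through PuppetDB's HTTP API; a minimal sketch, assuming a recent PuppetDB listening on localhost:8080 (endpoint paths vary across PuppetDB versions, and the node name is hypothetical):

curl -G http://localhost:8080/pdb/query/v4/events \
  --data-urlencode 'query=["and", ["=", "certname", "web01.example.com"], ["=", "latest_report?", true]]'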

Also consider how frequently Puppet runs on our nodes, as this determines the infrastructure's convergence time: if the interval is too large, the infrastructure adapts slowly to new changes; if it is too small, we run a greater risk of race conditions, where a catalog that exposes a new service for a node has not yet been applied but is already being used to configure other nodes to rely on a service that might not be configured yet.
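The interval is controlled by the runinterval setting in the agents' puppet.conf; a minimal sketch (the value shown, 30 minutes expressed in seconds, is Puppet's default):

# puppet.conf on the agent nodes
[agent]
  # How often the agent requests and applies a catalog.
  # Shorter intervals speed up convergence but increase the
  # risk of the race conditions described above.
  runinterval = 1800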

These are mostly hypothetical edge cases, which we can tackle in the same way we would manage a temporarily faulty service, for example, by excluding non-responsive servers from a load-balancing pool; just be aware of them.
