Using Consul service discovery

For this chapter, we configured Consul as an example service discovery system in our virtual machine-based test environment; Consul is quite simple to set up, which makes it a good fit for our example. It works by having an agent running in client mode on each node and an odd number of agents running in server mode that maintain the service catalog. The services available on the client nodes are communicated directly to the server nodes, while cluster membership is propagated using a gossip protocol (random peer-to-peer message passing) between every node in the cluster. Since our main objective is to showcase Prometheus service discovery using Consul, we configured our test environment with an agent running in development mode, which enables an in-memory server to play around with. This setup, of course, completely disregards security, scalability, data safety, and resilience; documentation on how to properly configure Consul is available at https://learn.hashicorp.com/consul/ and should be taken into account when deploying and maintaining Consul in production environments.

To poke around how this is set up in the test environment, we need to connect to the instance running Consul:

vagrant ssh consul

From here, we can start to explore how Consul is set up. For example, the following snippet shows the systemd unit file in use, where we can see the relevant configuration flags: the agent runs in development mode and binds its ports to the instance's external-facing IP address:

vagrant@consul:~$ systemctl cat consul.service 
...
[Service]
User=consul
ExecStart=/usr/bin/consul agent \
    -dev \
    -bind=192.168.42.11 \
    -client=192.168.42.11 \
    -advertise=192.168.42.11
...
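
Since cluster membership is propagated through gossip, a quick way to see which nodes this agent knows about (not part of the original setup, but handy while poking around) is the consul members command, pointed at the API address the agent is bound to; in our single-node development setup, it should list only the consul instance itself:

vagrant@consul:~$ consul members -http-addr=http://192.168.42.11:8500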

If we run ss and filter its output to only show lines belonging to Consul, we can find all the ports it's using:

vagrant@consul:~$ sudo /bin/ss -lnp | grep consul
udp UNCONN 0 0 192.168.42.11:8301 0.0.0.0:* users:(("consul",pid=581,fd=8))
udp UNCONN 0 0 192.168.42.11:8302 0.0.0.0:* users:(("consul",pid=581,fd=6))
udp UNCONN 0 0 192.168.42.11:8600 0.0.0.0:* users:(("consul",pid=581,fd=9))
tcp LISTEN 0 128 192.168.42.11:8300 0.0.0.0:* users:(("consul",pid=581,fd=3))
tcp LISTEN 0 128 192.168.42.11:8301 0.0.0.0:* users:(("consul",pid=581,fd=7))
tcp LISTEN 0 128 192.168.42.11:8302 0.0.0.0:* users:(("consul",pid=581,fd=5))
tcp LISTEN 0 128 192.168.42.11:8500 0.0.0.0:* users:(("consul",pid=581,fd=11))
tcp LISTEN 0 128 192.168.42.11:8502 0.0.0.0:* users:(("consul",pid=581,fd=12))
tcp LISTEN 0 128 192.168.42.11:8600 0.0.0.0:* users:(("consul",pid=581,fd=10))

As we can see, Consul listens on a lot of ports, both TCP and UDP. The port we're interested in is the one serving the HTTP API, which defaults to TCP port 8500. If we open a web browser to http://192.168.42.11:8500, we will see something similar to the following:

Figure 12.7: Consul web interface displaying its default configuration

There's a single service configured by default, which is the Consul service itself.
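
The same information can be obtained directly from the HTTP API; listing the service catalog at this point should return only the consul service, with output along these lines:

vagrant@consul:~$ curl -s http://192.168.42.11:8500/v1/catalog/services
{"consul":[]}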

To make this example more interesting, we also have consul_exporter (an exporter provided by the Prometheus project) deployed in the consul instance. This exporter doesn't require any additional configuration on Consul's side, so it should just work. We can find the configuration used to run this service in the systemd unit file, like so:

vagrant@consul:~$ systemctl cat consul-exporter.service 
...
[Service]
User=consul_exporter
ExecStart=/usr/bin/consul_exporter --consul.server=consul:8500
...

The source code and installation files for the consul_exporter are available at https://github.com/prometheus/consul_exporter.

To validate that the exporter is correctly contacting Consul and parsing its metrics, we can run the following instruction:

vagrant@consul:~$ curl -qs localhost:9107/metrics | grep "^consul"
consul_catalog_service_node_healthy{node="consul",service_id="consul",service_name="consul"} 1
consul_catalog_services 1
consul_exporter_build_info{branch="HEAD",goversion="go1.10.3",revision="75f02d80bbe2191cd0af297bbf200a81cbe7aeb0",version="0.4.0"} 1
consul_health_node_status{check="serfHealth",node="consul",status="critical"} 0
consul_health_node_status{check="serfHealth",node="consul",status="maintenance"} 0
consul_health_node_status{check="serfHealth",node="consul",status="passing"} 1
consul_health_node_status{check="serfHealth",node="consul",status="warning"} 0
consul_raft_leader 1
consul_raft_peers 1
consul_serf_lan_members 1
consul_up 1

The exporter sets the consul_up metric to 1 when it can successfully connect and collect metrics from Consul. We can also see the consul_catalog_services metric, which is telling us that Consul knows about one service, matching what we've seen in the web interface.

We can now disconnect from the consul instance and connect to the prometheus one using the following commands:

exit
vagrant ssh prometheus

If we take a look at the Prometheus server configuration, we will find the following:

vagrant@prometheus:~$ cat /etc/prometheus/prometheus.yml 
...

- job_name: 'consul_sd'
  consul_sd_configs:
    - server: http://consul:8500
      datacenter: dc1
  relabel_configs:
    - source_labels: [__meta_consul_service]
      target_label: job
...

This configuration allows Prometheus to connect to the Consul API address (available at http://192.168.42.11:8500) and, by means of relabel_configs, rewrite the job label so that it matches the service name (as exposed in the __meta_consul_service label). If we inspect the Prometheus web interface, we can find the following information:

Figure 12.8: Prometheus /service-discovery endpoint showing Consul default service
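
The __meta_consul_service label is only one of the metadata labels exposed by the Consul service discovery; others include __meta_consul_node, __meta_consul_dc, and __meta_consul_tags. As a hedged sketch that isn't part of our test environment's configuration, keeping the Consul node name as its own label (the consul_node label name here is purely illustrative) could look like this:

...
  relabel_configs:
    - source_labels: [__meta_consul_service]
      target_label: job
    - source_labels: [__meta_consul_node]
      target_label: consul_node
...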

Now, the fun part: let's add a scrape target for consul_exporter automatically by defining it as a service in Consul. A JSON payload with a Consul service configuration is provided in this chapter's resources, so we can add it via the Consul API. The payload can be found at the following path:

vagrant@prometheus:~$ cat /vagrant/chapter12/configs/consul_exporter/payload.json 
{
  "ID": "consul-exporter01",
  "Name": "consul-exporter",
  "Tags": [
    "consul",
    "exporter",
    "prometheus"
  ],
  "Address": "consul",
  "Port": 9107
}

Using the following instruction, we'll add this new service to Consul's service catalog via the HTTP API:

vagrant@prometheus:~$ curl --request PUT \
    --data @/vagrant/chapter12/configs/consul_exporter/payload.json \
    http://consul:8500/v1/agent/service/register

After running this command, we can validate that the new service was added by having a look at the Consul web interface, which will show something like the following:

Figure 12.9: Consul web interface showing the consul-exporter service
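
The same check can be done from the command line by querying the agent's service list through the API (a quick verification, not part of the original walkthrough); the response should now include the consul-exporter01 entry we just registered:

vagrant@prometheus:~$ curl -s http://consul:8500/v1/agent/services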

Finally, we can inspect the Prometheus /service-discovery endpoint and check that we have a new target, proving that the Consul service discovery is working as expected:

Figure 12.10: Prometheus /service-discovery endpoint showing consul-exporter target

If we consult the consul_catalog_services metric once again, we can see that it has changed to 2. Since we're now collecting the consul_exporter metrics in Prometheus, we can query its current value using promtool:

vagrant@prometheus:~$ promtool query instant http://localhost:9090 'consul_catalog_services'
consul_catalog_services{instance="consul:9107", job="consul-exporter"} => 2 @[1555252393.681]
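
To confirm that the new target is not only being discovered but also scraped successfully, we can additionally query its up metric (shown here as a command only; it should return 1 for the consul-exporter job):

vagrant@prometheus:~$ promtool query instant http://localhost:9090 'up{job="consul-exporter"}'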

Consul tags can be used in relabel_configs to configure scrape jobs for services that have different requirements, such as changing the metrics path when a given tag is present, or marking with a tag whether to scrape using HTTPS. The __meta_consul_tags label value has the comma separator at the beginning and at the end to make matching easier; this way, you don't need to special-case your regular expression depending on where in the string the tag you're trying to match appears. An example of this at work could be:

...
relabel_configs:
  - source_labels: [__meta_consul_tags]
    regex: .*,exporter,.*
    action: keep
...

This would only keep services registered in Consul with the exporter tag, discarding everything else.
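
Along the same lines, and purely as a sketch that isn't part of this test environment, a tag could also drive the scrape scheme: the following rule switches scraping to HTTPS for any service carrying a hypothetical https tag by setting the special __scheme__ label whenever that tag matches:

...
relabel_configs:
  - source_labels: [__meta_consul_tags]
    regex: .*,https,.*
    target_label: __scheme__
    replacement: https
...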
