Configuring agents

Once Consul is installed, the very first task we need to perform is configuring the Consul agent. Every node in the Kubernetes cluster that runs containerized services deploys a Consul agent. The agent performs health checks and gathers metrics for the infrastructure, the platform, and the services running within Kubernetes. The agent itself does not store the service catalog or key-value data; that state is held by the Consul servers.
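As a rough illustration, assuming the official consul-helm chart is used for the installation, a values snippet along the following lines runs a client agent on every Kubernetes node (as a DaemonSet) alongside a set of server pods. The exact values are placeholders to adapt to your own deployment:

# values.yaml (illustrative) for the consul-helm chart
client:
  enabled: true        # run a Consul client agent on every Kubernetes node
server:
  replicas: 3          # number of Consul server pods
  bootstrapExpect: 3   # servers to wait for before bootstrapping the cluster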

If there are multiple Kubernetes clusters, the Consul agents can communicate with multiple Consul servers, as long as an agent is installed in each Kubernetes cluster. The Consul servers are where all the data is stored, and one server is elected as the leader that coordinates the cluster.
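For example, an agent can be pointed at a set of servers through the retry_join setting in its configuration file, so that it keeps retrying until it reaches one of them. The server addresses here are placeholders:

{
  "retry_join": ["10.0.0.10", "10.0.0.11", "10.0.0.12"]
}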

The Consul agent is Consul's core process and is responsible for maintaining server/client membership, the service registry, health checks, address queries, and more. The agent runs on every node within a cluster or data center, for both servers and clients. Server nodes take part in the Raft consensus protocol, while all nodes participate in the Serf gossip protocol.
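Once agents are running, this membership and consensus state can be inspected from any node; for instance (output omitted):

$ consul members                      # Serf membership: all servers and clients
$ consul members -wan                 # WAN gossip pool (servers across data centers)
$ consul operator raft list-peers     # Raft peers (server nodes only)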

It's good practice to deploy server nodes on dedicated machines to avoid high latency and slow response times, because servers carry a heavier resource workload than client nodes. As we mentioned earlier, there are far more client nodes than servers because clients are lightweight and simply forward requests to the servers.

We can spin up a server or client node with the Consul CLI, either by passing command-line options or by supplying configuration files written in HashiCorp Configuration Language (HCL) or JavaScript Object Notation (JSON).
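As a minimal sketch, the following commands start a server node and a client node from the CLI; the paths, addresses, and server count are illustrative:

# Server node: wait for three servers before bootstrapping the cluster
$ consul agent -server -bootstrap-expect=3 -data-dir=/opt/consul -config-dir=/etc/consul.d

# Client node: join the cluster by pointing at an existing member
$ consul agent -data-dir=/opt/consul -retry-join=10.0.0.10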

Take a look at the following example of a Consul configuration file, which has been taken from https://www.consul.io/docs/agent/options.html#configuration-files:

{
  "datacenter": "remote-location",
  "data_dir": "/opt/consul",
  "log_level": "INFO",
  "node_name": "server1",
  "addresses": {
    "https": "0.0.0.0"
  },
  "ports": {
    "https": 8501
  },
  "key_file": "/etc/pki/tls/private/my.key",
  "cert_file": "/etc/pki/tls/certs/my.crt",
  "ca_file": "/etc/pki/tls/certs/ca-bundle.crt"
}

From the preceding JSON, we can see that an agent named server1 has been configured with a data center, an HTTPS listener address and port, and the TLS key, certificate, and CA files.
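Since Consul also accepts HCL, the same configuration could be expressed roughly as follows:

datacenter = "remote-location"
data_dir   = "/opt/consul"
log_level  = "INFO"
node_name  = "server1"

addresses {
  https = "0.0.0.0"
}

ports {
  https = 8501
}

key_file  = "/etc/pki/tls/private/my.key"
cert_file = "/etc/pki/tls/certs/my.crt"
ca_file   = "/etc/pki/tls/certs/ca-bundle.crt"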

The consul agent command starts the agent, which maintains node membership, runs health checks, announces services, responds to queries, and much more.

The following is some sample output after executing the consul agent command:

$ consul agent -data-dir=/opt/consul
==> Starting Consul agent...
==> Consul agent running!
Node name: 'MyLaptop'
Datacenter: 'dc1'
Server: false (bootstrap: false)
Client Addr: 127.0.0.1 (HTTP: 8500, DNS: 8600)
Cluster Addr: 192.168.108.141 (LAN: 8301, WAN: 8302)
==> Log data will now stream in as it occurs:
[INFO] serf: EventMemberJoin: MyLaptop.local 192.168.108.141
...

The five main messages that the preceding Consul agent command displays are as follows:

  • Node name: This is the hostname of the machine where the Consul agent was started.
  • Datacenter: This tags the data center where the Consul agent is configured to run. Consul supports multiple data centers, but each node belongs to exactly one, and the datacenter parameter defines that value. In the preceding example, the Consul agent is running in a single-node environment, so it defaults to dc1 as the data center.
  • Server: This value indicates whether the Consul agent is running in client or server mode: true means server mode, while false means client mode. A server can also run in bootstrap mode. Since client nodes are stateless and rely on server nodes for state information, bootstrapping allows the initial server nodes to elect a leader and form the cluster.
  • Client Addr: This is the local address the Consul agent binds for its client interfaces, including the HTTP and DNS ports. The address and ports can be changed in the agent configuration, and CLI commands can be pointed at a non-default address with the -http-addr flag.
  • Cluster Addr: This is the cluster address, together with the LAN and WAN ports, used for communication between Consul agents. These ports should be kept consistent across the cluster, and they must be unique when multiple agents run on the same host. Each of these values can be overridden at startup, as shown in the example after this list.
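For example, the node name, data center, client address, and cluster (bind) address can all be set when the agent starts; the addresses and names here are illustrative:

$ consul agent -data-dir=/opt/consul \
    -node=server1 \
    -datacenter=dc1 \
    -client=0.0.0.0 \
    -bind=192.168.108.141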

Running the Consul agent in a cluster involves a life cycle of interactions among its nodes, and it's important to understand these interactions to see how a cluster manages its members. When a Consul agent first starts, it isn't aware of any other nodes in the cluster. It discovers them by being added to the cluster, either through the join command or through auto-join configuration. The first interaction is a gossip message, which notifies all the nodes within the cluster that a new node has joined.
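For example, a newly started agent can be added to an existing cluster manually, after which gossip propagates the new member; the address is a placeholder for any existing member:

$ consul join 192.168.108.142        # join via any existing member's address
$ consul members                     # verify the new node appears in the member list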

If a node leaves a cluster, the cluster marks that node as left and deregisters its services from the catalog. If the departing Consul agent is a server, replication to it stops. The process of keeping the cluster limited to active, running nodes by removing failed and left members is called reaping. By default, reaping happens 72 hours after a node fails, a window chosen to accommodate cluster outages, downtime, and so on.
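A node can leave gracefully, or a failed node can be transitioned to the left state manually instead of waiting for reaping; the node name below is hypothetical:

$ consul leave                       # gracefully leave the cluster from the local agent
$ consul force-leave node-name-1     # force a failed node into the left state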

Now that the Consul agents (clients and servers) have been configured, either through CLI options or through configuration files, we can look at the service discovery process and the service catalog.
