Consul server in a VM

One of the most beneficial features of Consul is that you can build a hybrid service mesh that spans multiple data centers, Kubernetes clusters, VMs, and bare-metal servers.

Though it serves no practical purpose on a single node, we will run Consul at the VM level and join it to the Kubernetes cluster, which is running three Consul servers, purely to demonstrate Consul's method of service discovery across heterogeneous environments. In our demonstration environment, which runs on a single VM node, we'll simulate a VM alongside a Kubernetes cluster running three Consul servers. Let's get started:

  1. Find out the endpoints for consul-server:
$ kubectl -n consul get ep
NAME                          ENDPOINTS                                                                     AGE
consul-connect-injector-svc   192.168.230.218:8080                                                          47m
consul-dns                    192.168.230.219:8600,192.168.230.237:8600,192.168.230.245:8600 + 5 more...    47m
consul-server                 192.168.230.219:8301,192.168.230.237:8301,192.168.230.245:8301 + 21 more...   47m
consul-ui                     192.168.230.219:8500,192.168.230.237:8500,192.168.230.245:8500                47m
Note that + 5 more... and + 21 more... in the preceding output indicate additional entries that you can see by using the kubectl -n consul describe ep consul-server command. The endpoint IP addresses may be different in your case.

The Kubernetes consul-server service points to three Consul pods, and Kubernetes will do the load balancing for us. The fully qualified domain name of this service is consul-server.consul.svc.cluster.local, which resolves to the cluster pod addresses in a round-robin fashion.

Note that read and write requests for the Consul server can be routed to any server in a round-robin fashion. Any Consul server can fulfill a read operation, but all writes are forwarded to the leader server. The leader writes the information to a distributed key-value store to maintain the state of the cluster.

Let's assume that you have VMs running on other machines and you want to join those VMs to the Consul cluster. To do this, you need to create an ingress rule that will forward an external domain name (say, consul.example.com) to the consul-server.consul.svc.cluster.local service. At the VM level, you can then run a command such as consul join <address of the Consul server>. The Consul server can run in VMs, on bare-metal servers, or, as in our example, in a Kubernetes cluster.
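Rather than running consul join manually each time the agent starts, the VM's agent can be configured to join the cluster automatically at startup using Consul's retry_join setting. The following is a minimal sketch of such an agent configuration; consul.example.com and the /opt/consul data directory are assumptions for this illustration, so adjust them to match your environment:

```json
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul",
  "retry_join": ["consul.example.com"]
}
```

With retry_join, the agent keeps retrying the join on startup until it succeeds, which is more robust than a one-shot consul join when the servers may not be reachable yet.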

  2. Now, query the node names using the REST API:
$ curl -s localhost:8500/v1/catalog/nodes | json_reformat
[
    {
        "ID": "1a36a121-9810-887f-78e0-30721fab90c5",
        "Node": "consul-server-0",
        "Address": "192.168.230.219",
        "Datacenter": "dc1",
        "TaggedAddresses": {
            "lan": "192.168.230.219",
            "wan": "192.168.230.219"
        },
        "Meta": {
            "consul-network-segment": ""
        },
        "CreateIndex": 12,
        "ModifyIndex": 14
    },
...
  3. Check the members of the Consul cluster from inside one of the Kubernetes Consul pods:
$ kubectl -n consul exec -it consul-server-0 -- consul members
Node                     Address               Status  Type    Build  Protocol  DC   Segment
consul-server-0          192.168.230.219:8301  alive   server  1.6.1  2         dc1  <all>
consul-server-1          192.168.230.245:8301  alive   server  1.6.1  2         dc1  <all>
consul-server-2          192.168.230.237:8301  alive   server  1.6.1  2         dc1  <all>
osc01.servicemesh.local  192.168.230.249:8301  alive   client  1.6.1  2         dc1  <default>
  4. Check the same from the VM:
$ consul members
Node                     Address               Status  Type    Build  Protocol  DC   Segment
consul-server-0          192.168.230.219:8301  alive   server  1.6.1  2         dc1  <all>
consul-server-1          192.168.230.245:8301  alive   server  1.6.1  2         dc1  <all>
consul-server-2          192.168.230.237:8301  alive   server  1.6.1  2         dc1  <all>
osc01.servicemesh.local  192.168.230.249:8301  alive   client  1.6.1  2         dc1  <default>

Note that the list of Consul members includes Kubernetes nodes as well as the VMs that are running the Consul agent.
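Tabular output such as this is easy to post-process with standard tools. As a hypothetical sketch, the following filters the member list down to server-type nodes only; the sample output from the previous step is embedded in a variable here so that the snippet is self-contained, but in practice you would pipe consul members directly into awk:

```shell
# Sample `consul members` output from the previous step, captured in a
# variable; normally you would pipe the live command output instead.
members='consul-server-0          192.168.230.219:8301  alive  server  1.6.1  2  dc1  <all>
consul-server-1          192.168.230.245:8301  alive  server  1.6.1  2  dc1  <all>
consul-server-2          192.168.230.237:8301  alive  server  1.6.1  2  dc1  <all>
osc01.servicemesh.local  192.168.230.249:8301  alive  client  1.6.1  2  dc1  <default>'

# Column 4 is the member type; keep only the servers and print their names.
servers=$(printf '%s\n' "$members" | awk '$4 == "server" { print $1 }')
echo "$servers"
```

The same idea works for any status check, for example counting members whose status is not alive as a simple health probe.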

  5. Use the consul info command to view the configuration information of the Consul cluster from inside one of the Consul servers in the Kubernetes environment:
$ kubectl -n consul exec -it consul-server-0 -- consul info
agent:
    check_monitors = 0
    check_ttls = 0
    checks = 0
    services = 0
build:
    prerelease =
    revision = 34eff659
    version = 1.6.1
consul:
    acl = disabled
    bootstrap = false
    known_datacenters = 1
    leader = false
    leader_addr = 10.1.230.238:8300
    server = true
raft:
    applied_index = 8267
    commit_index = 8267
    fsm_pending = 0
    last_contact = 85.424007ms
    last_log_index = 8267
    last_log_term = 403
    last_snapshot_index = 0
...

The preceding output shows information about various Consul server components, such as the LAN and WAN gossip pools and the Raft protocol, as well as their metrics. The consul info command can also be executed from the VM and will produce similar output for that agent.

Consul provides an HTTP API for the consul info command and other commands. Please refer to https://www.consul.io/api for details about HTTP APIs.
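Because the HTTP API returns plain JSON, its responses can be post-processed even where json_reformat is unavailable. As a hypothetical sketch, the following extracts the Node values from a /v1/catalog/nodes payload; a trimmed copy of the sample response from step 2 is embedded here so the snippet is self-contained, but in practice you would pipe curl -s localhost:8500/v1/catalog/nodes directly:

```shell
# Trimmed copy of the /v1/catalog/nodes response from step 2, embedded so
# the sketch is self-contained; normally you would pipe the curl output.
nodes_json='[{"ID":"1a36a121-9810-887f-78e0-30721fab90c5","Node":"consul-server-0","Address":"192.168.230.219","Datacenter":"dc1"}]'

# Isolate every "Node" key/value pair with grep, then keep the value
# between the quotes with cut.
node_names=$(printf '%s' "$nodes_json" | grep -o '"Node":"[^"]*"' | cut -d'"' -f4)
echo "$node_names"
```

For anything beyond quick one-liners, a proper JSON processor such as jq is a better choice, since pattern matching on raw JSON is fragile.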