Installing Consul

To install Consul, follow these steps:

  1. Switch to the scripts directory for this chapter:
$ cd ~/consul/scripts # Switch to scripts for this exercise

  2. Create a new Consul cluster.

We will run three Consul servers in our Kubernetes environment, even though we only have one node. Define the input parameters for the Consul Helm chart so that three servers can run on a single node:

# Script : 02-consul-values.yaml

global:
  datacenter: dc1
  image: "consul:1.6.1"
  imageK8S: "hashicorp/consul-k8s:0.9.1"

server:
  enabled: true
  replicas: 3
  bootstrapExpect: 0
  affinity: ''
  storage: 2Gi
  disruptionBudget:
    enabled: true
    maxUnavailable: 0

client:
  enabled: true
  grpc: true

dns:
  enabled: true

ui:
  enabled: true
The following portion of the 02-consul-values.yaml script specifies the parameters for the Consul Connect sidecar proxy install:

connectInject:
  enabled: true
  imageEnvoy: "envoyproxy/envoy:v1.10.0"
  default: true
  centralConfig:
    enabled: true
    defaultProtocol: "http"
    proxyDefaults: |
      {
        "envoy_dogstatsd_url": "udp://127.0.0.1:9125"
      }

If we need to sync services between Kubernetes and Consul, we could define an additional parameter, syncCatalog, in the preceding values.yaml file:

# Sync Kubernetes and Consul services
syncCatalog:
  enabled: true
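
If you do enable syncCatalog, one way to confirm that the sync is working (after the Helm install in the next step) is to list the Consul catalog from one of the servers; synced Kubernetes services should start showing up alongside the consul service itself:

# Only applies if syncCatalog.enabled is true; run after the helm install
$ kubectl -n consul exec consul-server-0 -- consul catalog services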

Note that a data center typically runs only three or five Consul servers, but there can be hundreds of Consul agents. Using the preceding values.yaml file, we are defining three Consul servers. An actual Kubernetes environment may have hundreds of nodes, but only three or five of them will host Consul servers; the other nodes will be running Consul clients.

We are setting connectInject.enabled to true so that the Envoy sidecar proxy is injected into each service's pods when they are created.
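
Because default is also set to true, every new pod gets a sidecar unless it explicitly opts out. A minimal sketch of opting a workload out (the Deployment name, labels, and image here are hypothetical) is to annotate its pod template:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app            # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: legacy-app
  template:
    metadata:
      labels:
        app: legacy-app
      annotations:
        # Skip Consul Connect sidecar injection for this workload only
        consul.hashicorp.com/connect-inject: "false"
    spec:
      containers:
        - name: legacy-app
          image: nginx:1.17   # placeholder image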

  3. Run the following helm install command to create the Consul cluster by installing the Consul servers, clients, and the Consul Connect injector service. The injector will be used to inject sidecar proxies into the pods; a quick release check is shown after the command:
$ helm install ~/consul-helm-${CONSUL_HELM_VERSION}/ --name consul \
--namespace consul --set fullnameOverride=consul -f ./02-consul-values.yaml
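
Since we are using Helm 2 (implied by the --name flag), you can confirm that the release deployed and see the Kubernetes resources it created with the following:

# Show the release status and the resources created by the chart
$ helm status consul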
  4. Make sure that the persistent volume claims are bound to the persistent volumes that you created previously:
$ kubectl -n consul get pvc
NAME                          STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-consul-consul-server-0   Bound    consul-data-0   2Gi        RWO                           105s
data-consul-consul-server-1   Bound    consul-data-1   2Gi        RWO                           104s
data-consul-consul-server-2   Bound    consul-data-2   2Gi        RWO                           103s

Since we are running a single-node VM, it is difficult to run three replicas of a Kubernetes StatefulSet that would normally spread across nodes. In a production Kubernetes cluster, each replica would run on a separate node. We have simulated this by running all three replicas in a single VM, setting the affinity variable to null (an empty string) in the Helm values.yaml file. We created three persistent volumes ahead of time, using the VM's filesystem through a no-provisioner storage class (local persistent volumes became generally available in Kubernetes 1.14). In an actual production Kubernetes environment, you would use cloud-native storage providers, such as portworx.io, robin.io, or rook.io, or any other storage vendor that provides a Container Storage Interface (CSI)-enabled driver to connect to their dedicated storage.
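
As a reference, a pre-created volume of this kind looks roughly like the following. This is only a sketch: the storage class name, host path, and node name are assumptions and must match whatever you used when preparing the VM (and how the chart's volume claims request storage).

# Hypothetical no-provisioner StorageClass plus one of the three local
# PersistentVolumes (consul-data-0); names and paths are assumptions
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: consul-data-0
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage   # assumed; align with the chart's storage class settings
  local:
    path: /var/lib/consul0          # assumed directory on the VM's filesystem
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-k8s-node       # replace with your node's hostname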
  5. Also, make sure that all Consul servers are in the READY 1/1 state and have a status of Running:
$ kubectl -n consul get pods
NAME                                                           READY   STATUS    RESTARTS   AGE
consul-6frhx                                                   1/1     Running   0          19m
consul-connect-injector-webhook-deployment-699976587d-wrzcw   1/1     Running   0          19m
consul-server-0                                                1/1     Running   0          19m
consul-server-1                                                1/1     Running   0          19m
consul-server-2                                                1/1     Running   0          19m

Now you have deployed Consul in a Kubernetes environment, simulating three Consul servers in a single VM.

Both the Consul servers and clients have been installed in your Kubernetes environment. Let's check their deployments. The Consul servers are deployed as a StatefulSet, like so:

$ kubectl -n consul get sts
NAME            READY   AGE
consul-server   3/3     4h43m
Note that the Consul server replicas were set to three in values.yaml, and hence three Consul servers are running. The persistent volume claims were created at the start of the install process and bound to the persistent volumes that we created beforehand.
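
You can also list the persistent volumes themselves (they are cluster-scoped, so no namespace flag is needed) to confirm that all three are bound:

# All three pre-created volumes should show a STATUS of Bound
$ kubectl get pv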

Run ls -l /var/lib/consul? on the VM to verify the data that was generated by each Consul server.

Each Consul server's node ID is persisted in a node-id file, so even if a server is rescheduled on another node and gets a new IP address, this does not cause an issue. The Consul servers are normally created with an anti-affinity rule so that they are placed on different nodes. However, for this demonstration VM environment, we disabled the anti-affinity rule by setting server.affinity to null in values.yaml so that we could create all three Consul servers on the same Kubernetes node.
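
To read a server's persisted node ID without logging into the VM, you can cat the node-id file from inside the pod. The /consul/data path is the default data directory of the official Consul image, so it should hold here, but treat it as an assumption for your setup:

# Print the node ID persisted by the first Consul server
$ kubectl -n consul exec consul-server-0 -- cat /consul/data/node-id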

  6. Check the version of Consul running in Kubernetes, like so:
$ kubectl -n consul exec -it consul-server-0 -- consul version
Consul v1.6.1
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
  7. Out of the three Consul servers, go through the server log of any one of them to ascertain which server is the leader (an alternative check using the Consul CLI follows the log output):
$ kubectl -n consul logs consul-server-0 | grep -i leader
2019/08/26 15:50:52 [INFO] raft: Node at 192.168.230.233:8300 [Follower] entering Follower state (Leader: "")
2019/08/26 15:51:00 [ERR] agent: failed to sync remote state: No cluster leader
2019/08/26 15:51:09 [INFO] consul: New leader elected: consul-server-1
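
Instead of scanning the logs, you can also ask the cluster directly. These are standard Consul CLI commands: the first prints the Raft peers and marks the current leader, and the second lists all server and client agents that have joined the cluster:

# Show the Raft peers and which server is currently the leader
$ kubectl -n consul exec consul-server-0 -- consul operator raft list-peers

# List every server and client agent in the cluster
$ kubectl -n consul exec consul-server-0 -- consul members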
  8. The Consul clients are installed as a DaemonSet so that one runs on every Kubernetes node, like so:
$ kubectl -n consul get ds
NAME     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
consul   1         1         1       1            1           <none>          4h59m

This shows the Consul client on the sole node of our demonstration environment.
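
To see where each agent landed, list the pods along with their node placement. On a multi-node cluster, the DaemonSet's DESIRED count would match the number of schedulable nodes, with one client pod per node:

# Show which Kubernetes node each Consul pod is scheduled on
$ kubectl -n consul get pods -o wide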


If we set global.enabled to false and client.enabled to true in a values.yaml file, only the client components will be installed in Kubernetes. The clients then join an existing Consul cluster through the join property. In other words, we can extend each Kubernetes node so that it joins an existing Consul cluster by creating a values.yaml file like so:
global:
  enabled: false
client:
  enabled: true
  join:
    - "provider=my-cloud config=val ..."

Now, let's connect the Consul DNS server to Kubernetes.
