Setting up EFK with kubespray

Kubespray has a configuration flag that controls whether EFK is enabled. By default, it is disabled, so you need to enable it with the following steps:

  1. Open <kubespray dir>/inventory/mycluster/group_vars/k8s-cluster.yml.
  2. Around line number 152 in the k8s-cluster.yml file, change the value of efk_enabled to true:
# Monitoring apps for k8s
efk_enabled: true
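Incidentally, this edit can also be scripted rather than made by hand; the following is a minimal sketch using sed against a scratch copy of the file (the /tmp path and file contents are illustrative, not the real inventory):

```shell
# Create a scratch copy standing in for group_vars/k8s-cluster.yml
# (illustrative only; point sed at your real inventory file instead).
cat > /tmp/k8s-cluster.yml <<'EOF'
# Monitoring apps for k8s
efk_enabled: false
EOF

# Flip the flag from false to true, as this step describes.
sed -i 's/^efk_enabled: false/efk_enabled: true/' /tmp/k8s-cluster.yml

# Confirm the change took effect.
grep '^efk_enabled' /tmp/k8s-cluster.yml
# efk_enabled: true
```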
  3. Run the ansible-playbook command to update your Kubernetes cluster:
$ ansible-playbook -b -i inventory/mycluster/hosts.ini cluster.yml
  4. Check whether the Elasticsearch, Fluentd, and Kibana Pods' STATUS has become Running; if a Pod stays in the Pending state for more than 10 minutes, run kubectl describe pod <Pod name> to check its status. In most cases, you will see an error saying insufficient memory; if so, you need to add more Nodes or increase the available RAM:
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   calico-node-9wnwn                           1/1     Running   0          2m
kube-system   calico-node-jg67p                           1/1     Running   0          2m
kube-system   elasticsearch-logging-v1-776b8b856c-97qrq   1/1     Running   0          1m
kube-system   elasticsearch-logging-v1-776b8b856c-z7jhm   1/1     Running   0          1m
kube-system   fluentd-es-v1.22-gtvzg                      1/1     Running   0          49s
kube-system   fluentd-es-v1.22-h8r4h                      1/1     Running   0          49s
kube-system   kibana-logging-57d98b74f9-x8nz5             1/1     Running   0          44s
kube-system   kube-apiserver-master-1                     1/1     Running   0          3m
kube-system   kube-controller-manager-master-1            1/1     Running   0          3m
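To spot stuck Pods quickly, you can filter the kubectl get pods output by its STATUS column. The sketch below runs awk over a captured sample of that output (the heredoc variable stands in for the live command, and the Pending line is a made-up example):

```shell
# Sample lines standing in for 'kubectl get pods --all-namespaces' output;
# on a live cluster, pipe the real command into awk instead.
pods='kube-system elasticsearch-logging-v1-776b8b856c-97qrq 1/1 Running 0 1m
kube-system fluentd-es-v1.22-gtvzg 1/1 Pending 0 49s'

# Column 4 is STATUS; print the name and status of anything not Running,
# which tells you which Pods to inspect with 'kubectl describe pod <name>'.
echo "$pods" | awk '$4 != "Running" {print $2, $4}'
# fluentd-es-v1.22-gtvzg Pending
```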
  5. Check the Kibana log to see whether the status has become green:
$ kubectl logs -f kibana-logging-57d98b74f9-x8nz5 --namespace=kube-system
ELASTICSEARCH_URL=http://elasticsearch-logging:9200
server.basePath: /api/v1/proxy/namespaces/kube-system/services/kibana-logging
{"type":"log","@timestamp":"2018-03-25T05:11:10Z","tags":["info","optimize"],"pid":5,"message":"Optimizing and caching bundles for kibana and statusPage. This may take a few minutes"}

(wait for around 5 minutes)

{"type":"log","@timestamp":"2018-03-25T05:17:55Z","tags":["status","plugin:[email protected]","info"],"pid":5,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2018-03-25T05:17:58Z","tags":["status","plugin:[email protected]","info"],"pid":5,"state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}
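Rather than eyeballing the JSON log lines, the state field can be pulled out with sed; a small sketch against one captured line (the line below is abbreviated from the log output above):

```shell
# One Kibana status line, abbreviated from the log output above;
# with a live Pod you would pipe 'kubectl logs ...' in instead.
line='{"type":"log","@timestamp":"2018-03-25T05:17:58Z","pid":5,"state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow"}'

# Extract the "state" field; Kibana is ready once this prints green.
echo "$line" | sed -n 's/.*"state":"\([a-z]*\)".*/\1/p'
# green
```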
  6. Run kubectl cluster-info, confirm Kibana is running, and capture the URL of Kibana:
$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Elasticsearch is running at http://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Kibana is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kibana-logging/proxy
KubeDNS is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
  7. To access the Kibana WebUI remotely from your machine, the easiest approach is ssh port forwarding from your machine to the Kubernetes master:
$ ssh -L 8080:127.0.0.1:8080 <Kubernetes master IP address>
  8. Access the Kibana WebUI from your machine using the following URL: http://localhost:8080/api/v1/namespaces/kube-system/services/kibana-logging/proxy.
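The Kibana URL used here is just the apiserver's service-proxy path; assembling it from its parts makes the pattern clear (localhost:8080 assumes the ssh tunnel from the previous step, and the namespace and service name come from the cluster-info output):

```shell
# Build the apiserver service-proxy URL for Kibana.
# localhost:8080 assumes the ssh tunnel from the previous step.
api="http://localhost:8080"
ns="kube-system"
svc="kibana-logging"
kibana_url="${api}/api/v1/namespaces/${ns}/services/${svc}/proxy"
echo "$kibana_url"
# http://localhost:8080/api/v1/namespaces/kube-system/services/kibana-logging/proxy
```

The same pattern gives the Elasticsearch URL by swapping in svc="elasticsearch-logging".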

Now you can access Kibana from your machine. You also need to configure an index pattern; make sure the index name is set to the default value, logstash-*, then click the Create button:
