Configuring UI access

Configuring web UI access to Grafana, Prometheus, Kiali, and Jaeger can be done in various ways, as follows:

  • Using the kubectl port-forward command to forward the pod's port to your local machine.
  • Configuring a NodePort and accessing the UI through hostIP:NodePort, or configuring an Istio virtual service.
  • Using the istioctl dashboard command to open the web UI.

The first two approaches are well documented in the Kubernetes documentation. In this section, we will show Istio's approach of defining a virtual service to access each UI. In a real-world situation, you would use a DNS server to resolve the names, but in our case, we are going to use the /etc/hosts file to resolve them. Let's get started:

  1. Edit the /etc/hosts file and add entries for the following additional hosts:
$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.142.101 osc01.servicemesh.local osc01
192.168.142.249 bookinfo.istio.io bookinfo
192.168.142.249 httpbin.istio.io httpbin
192.168.142.249 grafana.istio.io grafana
192.168.142.249 prometheus.istio.io prometheus
192.168.142.249 kiali.istio.io kiali
192.168.142.249 jaeger.istio.io jaeger

Here, we've added four additional hosts for Grafana, Prometheus, Kiali, and Jaeger, all resolving to the same IP address.

In a real deployment, you would use a domain name that a DNS server resolves to the IP address of the Ingress gateway. In the preceding code, we are using /etc/hosts to resolve our made-up hostnames to the IP address of our Ingress gateway.
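If you need to confirm which IP address the Ingress gateway is exposed on, you can query its service directly. The following is a minimal sketch; depending on your environment, the address may appear under EXTERNAL-IP (when a load balancer is available) or you may need to use a node IP together with a NodePort instead:

# The EXTERNAL-IP of this service (192.168.142.249 in our setup) is the
# address that the /etc/hosts entries should point to. If EXTERNAL-IP shows
# <none> or <pending>, use <nodeIP>:<NodePort> instead.
$ kubectl -n istio-system get svc istio-ingressgateway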

Now, let's define virtual services for these hosts. We will point each one to its respective Kubernetes service on the appropriate port.

  2. Check the services and note the port numbers that these telemetry web UIs are running on:
$ kubectl -n istio-system get svc | grep -E "grafana|prometheus|kiali|jaeger"
grafana            ClusterIP   10.99.238.230    <none>   3000/TCP                     45h
jaeger-agent       ClusterIP   None             <none>   5775/UDP,6831/UDP,6832/UDP   45h
jaeger-collector   ClusterIP   10.105.138.178   <none>   14267/TCP,14268/TCP          45h
jaeger-query       ClusterIP   10.104.117.150   <none>   16686/TCP                    45h
kiali              ClusterIP   10.104.122.142   <none>   20001/TCP                    45h
prometheus         ClusterIP   10.108.236.193   <none>   9090/TCP                     45h

We need port numbers for Grafana, Jaeger, Kiali, and Prometheus. These will be 3000, 16686, 20001, and 9090, respectively. 
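If you prefer to pull a port number out programmatically rather than reading it from the table, you can use kubectl's jsonpath output. This is just a convenience sketch; the service names come from the output shown previously:

# Print the port that the grafana service listens on (3000 in our case).
$ kubectl -n istio-system get svc grafana -o jsonpath='{.spec.ports[0].port}'
3000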

  3. Define the virtual service for Grafana:
# Script : 01-create-vs-grafana-jaeger-prometheus.yaml

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
spec:
  hosts:
  - grafana.istio.io
  gateways:
  - mygateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: grafana.istio-system.svc.cluster.local
        port:
          number: 3000
...
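Note that the virtual service binds to a gateway named mygateway, which was created earlier for the Ingress gateway. If you do not already have it, the following is a minimal sketch of what such a gateway could look like; the port, protocol, and wildcard host are assumptions and should match the gateway you actually use:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway    # attach to the default Istio Ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"                    # or list grafana.istio.io, prometheus.istio.io, and so on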
  4. The definitions of the virtual services for Prometheus, Jaeger, and Kiali can be seen in the same 01-create-vs-grafana-jaeger-prometheus.yaml script.
  5. Create all the necessary virtual services:
$ kubectl -n istio-system apply -f 01-create-vs-grafana-jaeger-prometheus.yaml 
virtualservice.networking.istio.io/grafana created
virtualservice.networking.istio.io/prometheus created
virtualservice.networking.istio.io/jaeger created
virtualservice.networking.istio.io/kiali created
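Before moving on, it is worth confirming that all four virtual services exist; a quick check (the exact output columns vary by kubectl and Istio version):

# You should see grafana, prometheus, jaeger, and kiali listed.
$ kubectl -n istio-system get virtualservices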
  6. The virtual service route information is pushed to each sidecar proxy in the mesh. First, let's check this using the istioctl command, and then through the sidecar proxy's internal web UI:
$ export INGRESS_HOST=$(kubectl -n istio-system get pods -l app=istio-ingressgateway -o jsonpath='{.items..metadata.name}') ; echo $INGRESS_HOST 
istio-ingressgateway-688d5886d-vsd8k

$ istioctl proxy-config route $INGRESS_HOST.istio-system -o json

...
"name": "prometheus.istio.io:80",
...
"routes": [
{
"match": {
"prefix": "/"
},
"route": {
"cluster": "outbound|9090||prometheus.istio-system.svc.cluster.local",
...
  7. In the output of the istioctl command, scroll up and locate the entry labeled "name": "prometheus.istio.io:80". Check and validate that the route rules for this virtual host, including the "cluster" entry, have been pushed to the sidecar proxy. Instead of scrolling, you can also filter the output, as shown next.
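The following is a simple sketch of such a filter using grep; the number of context lines is arbitrary and only needs to be large enough to show the matching route and its target cluster:

$ istioctl proxy-config route $INGRESS_HOST.istio-system -o json | \
    grep -A 12 '"name": "prometheus.istio.io:80"'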
  8. Let's check the same through the sidecar proxy's internal web UI. Note that port 15000 is the management port for the sidecar proxy:
$ kubectl -n istio-system port-forward $INGRESS_HOST 15000

From inside the VM, open a browser, go to http://localhost:15000, and click on config_dump. Scroll all the way down to view the route information that was pushed to the sidecar proxy. You can also fetch the same dump from the command line, as shown next.
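While the port-forward is still running, the same configuration dump can be pulled with curl; a minimal sketch (the grep pattern is only an example):

# Fetch Envoy's configuration dump through the admin port and look for the
# hostnames we configured.
$ curl -s http://localhost:15000/config_dump | grep istio.io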

  9. Press Ctrl + C in the command-line window to stop port forwarding. The same routing rules are pushed to all the sidecars in the istio-lab namespace. The following code shows the routing rules from the sidecar of the ratings service:
$ RATING_POD=$(kubectl -n istio-lab get pods -l app=ratings -o jsonpath='{.items[0].metadata.name}') ; echo $RATING_POD
ratings-v1-79b6d99979-k2j7t

$ kubectl -n istio-lab port-forward $RATING_POD 15000

Browse to http://localhost:15000/config_dump and scroll down to verify that the routing rules for the telemetry virtual services have been pushed to this sidecar as well.

  10. Press Ctrl + C to stop port forwarding from the command-line window.

The sidecar proxy web UIs are local to the cluster. Configuring external web UI access, as shown here, is appropriate when you need to expose the web UIs to users who may not have access to kubectl. If you do have access to the Kubernetes cluster from your Windows or Mac machine, you can use the kubectl port-forward command and access the web UI at localhost:<portNumber>. istioctl also provides a dashboard command that you can use to open the web UIs.
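For example, to reach Grafana from a workstation that has kubectl access but no /etc/hosts entries, you could forward the Grafana service port locally. This is a minimal sketch using the service port we noted earlier:

# Forward local port 3000 to the grafana service in the cluster, then browse
# to http://localhost:3000.
$ kubectl -n istio-system port-forward svc/grafana 3000:3000

Now, let's take a look at two examples of the istioctl dashboard command.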

  11. Show the web UI for any control plane pod:
$ INGRESS_HOST=$(kubectl -n istio-system get pods -l app=istio-ingressgateway -o jsonpath='{.items[0].metadata.name}') ; echo $INGRESS_HOST
istio-ingressgateway-688d5886d-vsd8k

$ istioctl dashboard controlz $INGRESS_HOST.istio-system

http://localhost:39284
  12. Now, you can open the web UI at http://localhost:39284. Here, you will see the ControlZ web UI for the pod.

  13. Open Envoy's admin dashboard for a microservice:
$ RATING_POD=$(kubectl -n istio-lab get pods -l app=ratings -o jsonpath='{.items[0].metadata.name}') ; echo $RATING_POD
ratings-v1-79b6d99979-k2j7t

$ istioctl dashboard envoy $RATING_POD.istio-lab

http://localhost:41010

Envoy's admin dashboard opens in the browser at this address.
  14. Similarly, you can open a dashboard for Grafana, Jaeger, Kiali, and Prometheus like so:
$ istioctl dashboard grafana

$ istioctl dashboard jaeger

$ istioctl dashboard prometheus

$ istioctl dashboard kiali

Now that we've gained web access to the tools, we will look at Prometheus's built-in metrics collection. Istio's components expose built-in Prometheus endpoints that allow metrics data to be scraped from the different components.
