How it works…

Under the hood, gcloud creates a Kubernetes cluster with three nodes, along with a controller manager, scheduler, and an etcd cluster with two members. We can also see that the master is launched with several services, including a default backend used by the controller, heapster (used for monitoring), KubeDNS for DNS services in the cluster, a dashboard for the Kubernetes UI, and metrics-server for resource usage metrics.
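
We can list these add-ons ourselves; a quick check from the command line (the exact names and versions vary with the GKE release):

// list the add-on services deployed in the kube-system namespace
# kubectl get services --namespace=kube-system
// kubectl cluster-info also prints the proxied URL of each add-on,
// including the kubernetes-dashboard URL used below
# kubectl cluster-info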

We saw that kubernetes-dashboard has a URL; let's try to access it:

Forbidden to access Kubernetes dashboard

We got HTTP 403 Forbidden. Where do we get access and the credentials, though? One way is to run a proxy via the kubectl proxy command, which binds the master's API server to local 127.0.0.1:8001:

# kubectl proxy
Starting to serve on 127.0.0.1:8001

After that, when we access http://127.0.0.1:8001/ui, it'll be redirected to http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy.
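
While the proxy is running, we can verify from another terminal that requests are being forwarded to the API server; a minimal check:

// query the API server version through the local proxy
# curl http://127.0.0.1:8001/version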

Since Kubernetes 1.7, the dashboard has supported user authentication based on a bearer token or Kubeconfig file:

Logging in to the Kubernetes dashboard
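
One common way to obtain a bearer token on this version of Kubernetes is to create a dedicated service account and read the token from its auto-generated secret; a sketch (the dashboard-admin name and the cluster-admin binding are illustrative choices here, not steps from this recipe):

// create a service account for dashboard login
# kubectl --namespace=kube-system create serviceaccount dashboard-admin
// grant it cluster-admin; a narrower role is advisable in production
# kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:dashboard-admin
// print the bearer token from the auto-generated secret
# kubectl --namespace=kube-system describe secret \
  $(kubectl --namespace=kube-system get secret | grep dashboard-admin | awk '{print $1}')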

You could create a user and bind it to the current context (please refer to the Authentication and authorization recipe in Chapter 8, Advanced Cluster Administration). Just for convenience, we can check whether we have any existing users. Firstly, we need to know our current context name. A context is a combination of cluster information, a user for authentication, and a namespace:

// check our current context name
# kubectl config current-context
gke_kubernetes-cookbook_us-central1-a_my-k8s-cluster
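
If more than one context is configured, we can also list them all; the asterisk in the CURRENT column marks the active one:

// list all configured contexts
# kubectl config get-contexts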

After we know the context name, we can inspect it via kubectl config view --minify, which limits the output to the current context:

// show only the configuration related to the current context
# kubectl config view --minify
current-context: gke_kubernetes-cookbook_us-central1-a_my-k8s-cluster
kind: Config
preferences: {}
users:
- name: gke_kubernetes-cookbook_us-central1-a_my-k8s-cluster
  user:
    auth-provider:
      config:
        access-token: $ACCESS_TOKEN
        cmd-args: config config-helper --format=json
        cmd-path: /Users/chloelee/Downloads/google-cloud-sdk-2/bin/gcloud
        expiry: 2018-02-27T03:46:57Z
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp

We find that a default user already exists in our cluster; using its $ACCESS_TOKEN as the bearer token, we can get a glimpse of the Kubernetes console.
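
If you need the token value on the command line, two ways to print it are sketched below (the users[0] index assumes the GKE user is the first entry in your kubeconfig, and gcloud must be installed):

// read the cached token from the kubeconfig; --raw avoids redaction
# kubectl config view --raw -o jsonpath='{.users[0].user.auth-provider.config.access-token}'
// or ask gcloud for a fresh access token
# gcloud auth print-access-token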

Kubernetes dashboard overview

Our cluster in GKE is up and running! Let's see if we can run a simple deployment on it:

# kubectl run nginx --image nginx --replicas=2
deployment "nginx" created
# kubectl get pods
NAME                   READY     STATUS    RESTARTS   AGE
nginx-8586cf59-x27bj   1/1       Running   0          12s
nginx-8586cf59-zkl8j   1/1       Running   0          12s
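
On this version of Kubernetes, kubectl run creates a Deployment, which in turn manages a ReplicaSet backing the two pods; we can verify both objects:

// the deployment and its replica set back the two nginx pods
# kubectl get deployments,replicasets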

Let's check our Kubernetes dashboard:

Workloads in Kubernetes dashboard

Hurray! The deployment is created, and as a result, two pods have been scheduled and created.
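
When we are done experimenting, deleting the deployment also removes its replica set and pods:

// clean up the example deployment
# kubectl delete deployment nginx
deployment "nginx" deleted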
