How it works...

On the network protocol stack, the Kubernetes Service relies on the transport layer, working together with the overlay network and kube-proxy. The overlay network of Kubernetes builds up a cluster network by allocating a subnet lease out of a pre-configured address space and storing the network configuration in etcd; kube-proxy, on the other hand, forwards traffic sent to a Service to its backend Pods (the endpoints of the Service) through iptables settings.
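If you want to observe that forwarding layer yourself, the following is a minimal sketch, assuming kube-proxy runs in its default iptables mode and that you have shell access to a node; the kubernetes Service used here exists by default in every cluster:

// list the Pod IPs registered as endpoints behind a Service
$ kubectl get endpoints kubernetes
// on a node, dump the NAT rules that kube-proxy programs for Services
$ sudo iptables -t nat -L KUBE-SERVICES -n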

Proxy-mode and Service

kube-proxy currently has three modes with different implementation methods: userspace, iptables, and ipvs. The mode affects how client requests reach certain Pods through the Kubernetes Service:
  • userspace: kube-proxy opens a random port, called a proxy port, for each Service on the local node, then updates the iptables rules, which capture any request sent to the Service and forward it to the proxy port. In the end, any message sent to the proxy port is passed to the Pods covered by the Service. This mode is less efficient, since the traffic has to go through kube-proxy in userspace to be routed to a Pod.
  • iptables: As with the userspace mode, iptables rules are required to redirect client traffic, but there is no proxy port as a mediator; packets are forwarded by the kernel directly to a backend Pod. This is faster, but the liveness of the Pods needs to be taken care of: by default, there is no way for a request to retry another Pod if the target one fails. To avoid accessing an unhealthy Pod, health-checking the Pods and updating the iptables rules in time is necessary.
  • ipvs: ipvs is a beta feature in Kubernetes v1.9. In this mode, kube-proxy uses the netlink interface to create IPVS rules that map the Service to its backend set. The ipvs mode takes care of the downsides of both userspace and iptables; it is even faster, since the routing rules are stored in a hash table structure in kernel space, and more reliable, since kube-proxy keeps checking the consistency of the IPVS rules. ipvs also provides multiple load-balancing options.
The system picks the optimal and stable one as the default setting for kube-proxy; currently, that is the iptables mode.
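To confirm which mode your cluster is actually using, the commands below are one possible check, assuming a kubeadm-style cluster in which kube-proxy reads its configuration from the kube-proxy ConfigMap and runs as a DaemonSet labeled k8s-app=kube-proxy in the kube-system Namespace:

// check the mode field of the kube-proxy configuration (an empty value means the default, iptables)
$ kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode
// the kube-proxy startup logs also usually state which proxier is in use
$ kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i proxier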

When a Pod tries to communicate with a Service, it can find the Service through environment variables or a DNS host lookup. Let's give it a try in the following scenario of accessing a Service from inside a Pod:

// run a Pod first, and ask it to be alive 600 seconds
$ kubectl run my-1st-centos --image=centos --restart=Never -- sleep 600
pod "my-1st-centos" created
// run a Deployment of nginx and its Service exposing port 8080 for nginx
$ kubectl run my-nginx --image=nginx --port=80
deployment.apps "my-nginx" created
$ kubectl expose deployment my-nginx --port=8080 --target-port=80 --name="my-nginx-service"
service "my-nginx-service" exposed
// run another pod
$ kubectl run my-2nd-centos --image=centos --restart=Never -- sleep 600
pod "my-2nd-centos" created
// Check the environment variables on both Pods.
$ kubectl exec my-1st-centos -- /bin/sh -c export
$ kubectl exec my-2nd-centos -- /bin/sh -c export

You will find that only the Pod my-2nd-centos has additional variables showing information for the Service my-nginx-service, as follows:

export MY_NGINX_SERVICE_PORT="tcp://10.104.218.20:8080"
export MY_NGINX_SERVICE_PORT_8080_TCP="tcp://10.104.218.20:8080"
export MY_NGINX_SERVICE_PORT_8080_TCP_ADDR="10.104.218.20"
export MY_NGINX_SERVICE_PORT_8080_TCP_PORT="8080"
export MY_NGINX_SERVICE_PORT_8080_TCP_PROTO="tcp"
export MY_NGINX_SERVICE_SERVICE_HOST="10.104.218.20"
export MY_NGINX_SERVICE_SERVICE_PORT="8080"

This is because the Service-related environment variables are injected only at Pod creation time and are not updated afterwards; only Pods created after the Service can access it through environment variables. With this ordering-dependent constraint, pay attention to creating your Kubernetes resources in a proper sequence if they have to interact with each other in this way. The keys of the environment variables representing the Service host are formed as <SERVICE NAME>_SERVICE_HOST, and the Service port as <SERVICE NAME>_SERVICE_PORT. In the preceding example, the dashes in the name are also converted to underscores:

// For my-2nd-centos, get the Service information from the environment variables
$ kubectl exec my-2nd-centos -- /bin/sh -c 'curl $MY_NGINX_SERVICE_SERVICE_HOST:$MY_NGINX_SERVICE_SERVICE_PORT'
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

Nevertheless, if the kube-dns add-on is installed, which is the DNS server in the Kubernetes system, any Pod in the same Namespace can access the Service, no matter when the Service was created. The hostname of the Service is formed as <SERVICE NAME>.<NAMESPACE>.svc.cluster.local, where cluster.local is the default cluster domain defined when booting kube-dns:

// access my-nginx-service through the A record provided by kube-dns
$ kubectl exec my-1st-centos -- /bin/sh -c 'curl my-nginx-service.default.svc.cluster.local:8080'
$ kubectl exec my-2nd-centos -- /bin/sh -c 'curl my-nginx-service.default.svc.cluster.local:8080'
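If you would like to inspect the DNS record itself rather than curl the Service, here is a small sketch; it assumes that getent is available in the centos image (nslookup is not installed by default) and that the Pod runs in the same default Namespace, so the short Service name also resolves through the search domains in /etc/resolv.conf:

// resolve the A record of the Service from inside a Pod
$ kubectl exec my-1st-centos -- /bin/sh -c 'getent hosts my-nginx-service.default.svc.cluster.local'
// within the same Namespace, the short name works as well
$ kubectl exec my-1st-centos -- /bin/sh -c 'curl my-nginx-service:8080'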