Migrations, multicluster, and more

As we've seen so far, Kubernetes offers a high level of flexibility and customization for creating a service abstraction around the containers running in your cluster. However, there may be times when you want to point to something outside your cluster.

An example of this would be working with legacy systems or applications running on another cluster. In the case of legacy systems, this is a perfectly good strategy for migrating to Kubernetes and containers in general. We can begin by managing the service endpoints in Kubernetes while stitching the stack together using the K8s orchestration concepts. Additionally, we can start bringing over pieces of the stack, such as the frontend, one at a time as the organization refactors its applications for microservices and/or containerization.

To allow access to non-pod-based applications, the service construct lets you use endpoints that are outside the cluster. Kubernetes actually creates an Endpoints resource every time you create a service that uses selectors. The Endpoints object keeps track of the pod IPs in the load-balancing pool. You can see this by running the get endpoints command, as follows:

$ kubectl get endpoints

You should see something similar to the following:

NAME         ENDPOINTS
http-pd      10.244.2.29:80,10.244.2.30:80,10.244.3.16:80
kubernetes   10.240.0.2:443
node-js      10.244.0.12:80,10.244.2.24:80,10.244.3.13:80

You'll note entries for all the services we currently have running on our cluster. For most services, the endpoints are simply the IPs of the pods running in an RC. As I mentioned previously, Kubernetes does this automatically based on the selector. As we scale the replicas in a controller with matching labels, Kubernetes updates the endpoints automatically.
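For contrast, a selector-based service might look like the following sketch. The node-js name is taken from the services listed previously, but the app: node-js label is an assumption; yours will depend on how the pods were labeled:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-js
spec:
  selector:
    app: node-js        # pods carrying this label are added to the Endpoints object
  ports:
  - name: http
    protocol: TCP
    port: 80
```

Because spec.selector is present here, Kubernetes creates and maintains the matching Endpoints object for us. Next, we will omit the selector and manage the endpoints by hand.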

If we want to create a service for something that is not a pod, and therefore has no labels to select, we can easily do this with both a service definition, nodejs-custom-service.yaml, and an endpoint definition, nodejs-custom-endpoint.yaml, as follows:

apiVersion: v1
kind: Service
metadata:
  name: custom-service
spec:
  type: LoadBalancer
  ports:
  - name: http
    protocol: TCP
    port: 80

apiVersion: v1
kind: Endpoints
metadata:
  name: custom-service
subsets:
- addresses:
  - ip: <X.X.X.X>
  ports:
  - name: http
    port: 80
    protocol: TCP

In the preceding example, you'll need to replace <X.X.X.X> with a real IP address that the new service can point to. In my case, I used the public load balancer IP from the node-js-multi service we created earlier in the ingress-example.yaml listing. Note that the Service carries no selector, so Kubernetes will not manage the Endpoints object; the two resources are linked simply by sharing the name custom-service. Go ahead and create these resources now.
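Creating them follows the usual pattern; the filenames below match the two definitions given earlier:

```shell
$ kubectl create -f nodejs-custom-service.yaml
$ kubectl create -f nodejs-custom-endpoint.yaml
```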

If we now run a get endpoints command, we will see this IP address at port 80, which is associated with the custom-service endpoint. Furthermore, if we look at the service details, we will see the IP listed in the Endpoints section:

$ kubectl describe service/custom-service

We can test out this new service by opening the custom-service external IP in a browser.
