Migrations, multicluster, and more

As you've seen so far, Kubernetes offers a high level of flexibility and customization to create a service abstraction around your containers running in the cluster. However, there may be times where you want to point to something outside your cluster.

An example of this would be working with legacy systems or applications running on another cluster. In the case of the former, this is a perfectly good strategy for migrating to Kubernetes and containers in general: we can begin to manage the service endpoints in Kubernetes while stitching the stack together using the K8s orchestration concepts. Additionally, we can even start bringing over pieces of the stack, such as the frontend, one at a time as the organization refactors applications for microservices and/or containerization.

To allow access to non-pod-based applications, the services construct lets you use endpoints that live outside the cluster. Kubernetes actually creates an Endpoints resource every time you create a service that uses selectors. The Endpoints object keeps track of the pod IPs in the load balancing pool. You can see this by running a get endpoints command, as follows:

$ kubectl get endpoints

You should see something similar to this:

NAME         ENDPOINTS
http-pd      10.244.2.29:80,10.244.2.30:80,10.244.3.16:80
kubernetes   10.240.0.2:443
node-js      10.244.0.12:80,10.244.2.24:80,10.244.3.13:80

You'll note an entry for every service currently running on our cluster. For most services, the endpoints are simply the IPs of the pods running in an RC. As I mentioned, Kubernetes does this automatically based on the selector. As we scale the replicas in a controller with matching labels, Kubernetes updates the endpoints automatically.
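We can watch this happen live. As a quick sketch (assuming the node-js replication controller from earlier in the chapter is still running), scale the controller and check the endpoints again:

$ kubectl scale rc node-js --replicas=5
$ kubectl get endpoints node-js

The ENDPOINTS column should now list five pod IP:port pairs instead of three, with no change needed to the service itself.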

If we want to create a service for something that is not a pod, and therefore has no labels to select on, we can easily do so with a service definition and a matching endpoints definition, as follows:

apiVersion: v1
kind: Service
metadata:
  name: custom-service
spec:
  type: LoadBalancer
  ports:
  - name: http
    protocol: TCP
    port: 80

Listing 3-10: nodejs-custom-service.yaml

apiVersion: v1
kind: Endpoints
metadata:
  name: custom-service
subsets:
- addresses:
  - ip: <X.X.X.X>
  ports:
  - name: http
    port: 80
    protocol: TCP

Listing 3-11: nodejs-custom-endpoint.yaml

In the preceding example, you'll need to replace <X.X.X.X> with a real IP address that the new service can point to. In my case, I used the public load balancer IP from the node-js-multi service we created earlier in listing 3-6. Note that because the service and the Endpoints object share the name custom-service, Kubernetes associates them automatically. Go ahead and create these resources now.
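Using the filenames from listings 3-10 and 3-11, creating both resources looks like this:

$ kubectl create -f nodejs-custom-service.yaml
$ kubectl create -f nodejs-custom-endpoint.yaml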

If we now run a get endpoints command, we will see this IP address at port 80 associated with the custom-service endpoint. Further, if we look at the service details, we will see the IP listed in the Endpoints section:

$ kubectl describe service/custom-service

We can test out this new service by opening the custom-service external IP from a browser.
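We can also test it from the command line. As a sketch (assuming your cloud provider has finished provisioning the external IP, which can take a minute or two), grab the IP and curl it:

$ kubectl get service custom-service
$ curl http://<EXTERNAL-IP>/

Replace <EXTERNAL-IP> with the address shown in the EXTERNAL-IP column; you should get the same response the backing application would return if hit directly.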
