Elastic Load Balancer as a LoadBalancer Service

Let's create a LoadBalancer Service with Pods underneath, building on what we learned in Chapter 3, Playing with Containers:

# cat aws-service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
  selector:
    run: nginx

In the preceding template, we declared an nginx Deployment with three replicas and associated it with a LoadBalancer Service. The Service will direct packets to container port 80:

# kubectl create -f aws-service.yaml 
deployment.apps "nginx" created
service "nginx" created

Let's describe our nginx Service:

# kubectl describe svc nginx
Name: nginx
Namespace: default
Labels: <none>
Annotations: <none>
Selector: run=nginx
Type: LoadBalancer
IP: 100.68.35.30
LoadBalancer Ingress: a9da4ef1d402211e8b1240ef0c7f25d3-1251329976.us-east-1.elb.amazonaws.com
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 31384/TCP
Endpoints: 100.124.40.196:80,100.99.102.130:80,100.99.102.131:80
Session Affinity: None
External Traffic Policy: Cluster
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  EnsuringLoadBalancer  2m    service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   2m    service-controller  Ensured load balancer

After the Service is created, we'll find that the AWS CloudProvider has provisioned a Classic Load Balancer with the endpoint a9da4ef1d402211e8b1240ef0c7f25d3-1251329976.us-east-1.elb.amazonaws.com. We can check its detailed settings via the AWS command-line interface (https://aws.amazon.com/cli/).
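
The same hostname can also be read directly from the Service status without leaving kubectl; the following one-liner is just a convenience, assuming the Service lives in the default namespace:

# kubectl get svc nginx -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
a9da4ef1d402211e8b1240ef0c7f25d3-1251329976.us-east-1.elb.amazonaws.com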

To install the AWS CLI, you can use pip on macOS or Linux (pip install awscli); Windows users will have to download the installer from the official website.
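
A rough sketch of the setup steps follows; aws configure will prompt for the access key, secret key, default region (us-east-1 in this example), and output format, all of which depend on your own AWS account:

# pip install awscli
# aws --version
# aws configure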

The general form of an AWS CLI command is aws [options] <command> <subcommand> [<subcommand> ...] [parameters]. To list load balancers, we'll use aws elb describe-load-balancers as the main command. The --load-balancer-names parameter filters load balancers by name, and the --output parameter can be set to text, json, or table:

# aws elb describe-load-balancers --load-balancer-names a9da4ef1d402211e8b1240ef0c7f25d3 --output text
LOADBALANCERDESCRIPTIONS a9da4ef1d402211e8b1240ef0c7f25d3-1251329976.us-east-1.elb.amazonaws.com Z35SXDOTRQ7X7K 2018-04-14T20:30:45.990Z a9da4ef1d402211e8b1240ef0c7f25d3-1251329976.us-east-1.elb.amazonaws.com a9da4ef1d402211e8b1240ef0c7f25d3 internet-facing vpc-07374a7c
AVAILABILITYZONES us-east-1a
AVAILABILITYZONES us-east-1b
AVAILABILITYZONES us-east-1c
HEALTHCHECK 2 10 TCP:31384 5 6
INSTANCES i-03cafedc27dca591b
INSTANCES i-060f9d17d9b473074
LISTENER 31384 TCP 80 TCP
SECURITYGROUPS sg-3b4efb72
SOURCESECURITYGROUP k8s-elb-a9da4ef1d402211e8b1240ef0c7f25d3 516726565417
SUBNETS subnet-088f9d27
SUBNETS subnet-e7ec0580
SUBNETS subnet-f38191ae
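
The text output above is dense. If you only care about one part of the configuration, such as the listeners, the --query option (a JMESPath expression) can narrow it down. Here is a sketch reusing the load balancer name from the previous command:

# aws elb describe-load-balancers \
    --load-balancer-names a9da4ef1d402211e8b1240ef0c7f25d3 \
    --query 'LoadBalancerDescriptions[0].ListenerDescriptions' \
    --output json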

If we access port 80 of this ELB endpoint, we'll see the nginx welcome page:

Access ELB endpoint to access LoadBalancer Service
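
The same check can be made from the command line; a plain curl against the ELB hostname should return the default nginx page once the DNS record has propagated (a minimal sketch, using the hostname from our Service):

# curl -s http://a9da4ef1d402211e8b1240ef0c7f25d3-1251329976.us-east-1.elb.amazonaws.com | grep title
<title>Welcome to nginx!</title>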

Behind the scenes, the AWS CloudProvider creates an AWS Elastic Load Balancer and configures its ingress rules and listeners according to the Service we just defined. The following is a diagram of how the traffic gets into the Pods:

An illustration of the Kubernetes and AWS resources for a Service with the LoadBalancer type

The external load balancer receives requests and forwards them to the EC2 instances using a round-robin algorithm. Within Kubernetes, the traffic enters the Service via the NodePort and then goes through Service-to-Pod communication. For more information about external-to-Service and Service-to-Pod communication, you can refer to Chapter 3, Playing with Containers.
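
To see the pieces of that path for yourself, the NodePort and the Pod endpoints can be read back from the cluster; the values below are taken from the describe output earlier, and the age column will naturally differ in your environment:

# kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'
31384
# kubectl get endpoints nginx
NAME    ENDPOINTS                                                AGE
nginx   100.124.40.196:80,100.99.102.130:80,100.99.102.131:80   5m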
