How to do it...

Firstly, we need to ensure that we have our tooling configured correctly as per the instructions in the Getting Ready section.

To confirm, we should have the following environment variables defined in our working shell:

export PATH=~/Library/Python/<version>/bin/:$PATH
export AWS_ACCESS_KEY_ID=xxxxxxxx
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxx
export AWS_DEFAULT_REGION=us-west-2

For this recipe, the examples will use us-west-2 as the AWS region. Any region may be used; however, we suggest using a region in which no resources have previously been deployed.
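
As a quick sanity check (this assumes the AWS CLI is installed, which the recipe itself does not strictly require), we can confirm that the credentials in our shell are valid:

$ aws sts get-caller-identity

This should print the account ID and user ARN associated with our access key.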

Let's enter the micro folder (as we left it in the previous recipe), and create a cluster directory:

$ cd micro
$ mkdir cluster

The cluster folder will store our local cluster configuration information.

Instructions for creating a cluster with kops are provided at: https://kubernetes.io/docs/getting-started-guides/kops/. We'll walk through the process here and provide additional direction and explanations.

First, we need to register a domain with Route53 (Route53 is the AWS DNS service, which also acts as a domain registrar).

Let's open the AWS console and navigate to the Route53 control panel. Next, we enter a domain name into the textbox provided and hit the check button. If the domain is available, proceed to register it. This will cost 12 USD to complete (at the time of writing).

During the process we need to provide an email address as the administrative and technical contact for the domain. AWS will validate this address by sending an email. We must be sure to click the validation link in the email.

For the purposes of this recipe any valid domain name can be used. In the examples, we have used nodecookbookdeployme.com as our throwaway domain name.

Next, we need to create an S3 bucket (S3 is the AWS object storage service). Let's navigate to the S3 control panel from the AWS console and create an empty bucket. For this example, we have used nodecookbookdemo as our bucket name.
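
As an alternative to the console (again assuming the AWS CLI is installed), the bucket can also be created from the command line:

$ aws s3 mb s3://nodecookbookdemo --region us-west-2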

Once we have created the bucket, we need to set an environment variable in our shell to tell kops where to read and write its cluster configuration.

We do this by setting the following variable:

export KOPS_STATE_STORE=s3://<bucket name> 

Replace <bucket name> with the name of the bucket we just created. We are now ready to create our configuration:

$ kops create cluster --zones=<desired zone> <domain name> 

Here we substitute our own zone and domain name. For example, using us-west-2c as our zone, we would run the following command:

$ kops create cluster --zones=us-west-2c nodecookbookdeployme.com 
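
Should we be unsure which zones are available in our chosen region, the AWS CLI (if installed) can list them:

$ aws ec2 describe-availability-zones --region us-west-2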

This command will generate a cluster configuration for us and write it to our S3 bucket. We can inspect this by navigating to the bucket and viewing the files that kops created. Note that at this point kops has only created the configuration. No other resources have been created in AWS.
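
We can also confirm this from the command line; the following should list our newly defined (but not yet deployed) cluster:

$ kops get clusters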

To actually deploy the cluster, run the following:

$ kops update cluster <domain name> --yes 

This causes kops to create a cluster for us on AWS. Operations include booting up machine instances and deploying the Kubernetes components to those instances. Note that this command will take several minutes to complete. We can check our cluster status using this command:

$ kops validate cluster <domain name> 
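
Validation will fail until the instances have booted and Kubernetes is up. As an optional convenience, on systems where the watch utility is available, we can poll until the cluster reports as ready:

$ watch -n 30 kops validate cluster <domain name>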

Once our cluster is up and running on AWS, it's time to deploy our system.

When we created the cluster, kops created a file for us called kubeconfig in the cluster directory. To control our cluster using kubectl, we need to point our local tools to this configuration. We can do this using the KUBECONFIG environment variable:

export KUBECONFIG=<path to kubeconfig> 

Once we have this environment variable set, we can run kubectl as in our previous recipes, except it will now point to our AWS Kubernetes cluster rather than our local minikube instance.
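
For example, assuming we are still in the micro folder and kops wrote the file into our cluster directory as described above, this might look as follows:

export KUBECONFIG=$(pwd)/cluster/kubeconfig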

Let's run the following to check that everything is working correctly:

$ kubectl get namespaces
NAME          STATUS    AGE
default       Active    3h
kube-system   Active    3h

This may take a few seconds to complete. Note that there will be no micro namespace reported as we have yet to create this in our AWS cluster.

In fact, let's now go ahead and create this:

$ cd ../deployment # assuming cwd is micro/cluster
$ kubectl create -f namespace.yml
$ kubectl get namespaces
NAME          STATUS    AGE
default       Active    3h
kube-system   Active    3h
micro         Active    1h

Now that we've registered our namespace with our Kubernetes cluster on AWS, we need to make it the default namespace for kubectl.

Let's open up the kubeconfig file and locate the context entry.

In our example, the context is uswest2.nodecookbookdeployme.com, which we can pass to kubectl config set-context to configure the kubectl tool's default namespace, like so:

$ kubectl config set-context uswest2.nodecookbookdeployme.com --namespace=micro 
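
We can verify that the change took effect by listing our contexts; the micro namespace should now appear in the NAMESPACE column for our cluster's context:

$ kubectl config get-contexts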

Now that we have kubectl configured locally we can go ahead and deploy our system containers.

Interestingly, because of the way we structured our build and deployment scripts, our Jenkins build process should work without change: the kubectl command that our builds invoke now points at our AWS cluster rather than at minikube.

Let's open the Jenkins control panel and deploy the infrastructure project to spin up our Mongo and Redis containers.

Once this project has deployed, run the following:

$ kubectl get deployments
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mongo     1         1         1            1           2h
redis     1         1         1            1           2h
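
As an additional sanity check, we can confirm that the underlying pods are running:

$ kubectl get pods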

Now that the infrastructure is deployed, we can go ahead and deploy the rest of our system's containers. We can do this by manually triggering a build from our Jenkins server for each of adderservice, auditservice, and eventservice. Once these builds have completed, all of our services should be registered:

$ kubectl get services
NAME           CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
adderservice   100.67.229.222   <nodes>       8080:30860/TCP    2h
auditservice   100.70.233.161   <nodes>       8081:30212/TCP    2h
eventservice   100.66.88.128    <nodes>       8082:31917/TCP    2h
mongo          100.67.19.86     <nodes>       27017:30940/TCP   2h
redis          100.68.54.205    <nodes>       6379:31896/TCP    2h

GitHub triggers
We could at this point re-enable our GitHub triggers from the previous recipe to provide a fully automated build pipeline into AWS.

Finally, we need to deploy our webapp project. This is our frontend into the system and we need to make a small tweak before deploying. We deployed our services as the NodePort type, which exposes services in a point-to-point manner using a direct IP address and port number for each service instance.

However, for the webapp service, which is a public-facing layer, deploying with the LoadBalancer type allows for a more scalable deployment.

Let's go ahead and configure the build to run our webapp service instance in LoadBalancer mode.

First, let's remove our existing webapp service and deployment:

$ kubectl delete service webapp
$ kubectl delete deployment webapp

Next, let's copy deployment/service-template.yml to deployment/service-template-lb.yml:

$ cp service-template.yml service-template-lb.yml 

We'll modify service-template-lb.yml, so that the type is LoadBalancer, as follows:

apiVersion: v1
kind: Service
metadata:
  name: _NAME_
  labels:
    run: _NAME_
spec:
  ports:
  - port: _PORT_
    name: main
    protocol: TCP
    targetPort: _PORT_
  selector:
    run: _NAME_
  type: LoadBalancer

Next, we'll edit the micro/webapp/build.sh so it uses our new service-template-lb.yml file:

#!/bin/bash
source ~/.bashrc

GITSHA=$(git rev-parse --short HEAD)

case "$1" in
  container)
    sudo -u <user> docker build -t webapp:$GITSHA .
    sudo -u <user> docker tag webapp:$GITSHA <user>/webapp:$GITSHA
    sudo -i -u <user> docker push <user>/webapp:$GITSHA
  ;;
  deploy)
    sed -e s/_NAME_/webapp/ -e s/_PORT_/3000/ \
      < ../deployment/service-template-lb.yml > svc.yml
    sed -e s/_NAME_/webapp/ -e s/_PORT_/3000/ \
      -e s/_IMAGE_/<user>\/webapp:$GITSHA/ \
      < ../deployment/deployment-template.yml > dep.yml
    sudo -i -u <user> kubectl apply -f $(pwd)/svc.yml
    sudo -i -u <user> kubectl apply -f $(pwd)/dep.yml
  ;;
  *)
    echo 'invalid build command'
    exit 1
  ;;
esac
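
If we want to sanity-check the template expansion before committing, we can run the service substitution by hand from the webapp folder and inspect the generated YAML on stdout (this is the same command the deploy step runs, minus the redirect to svc.yml):

$ sed -e s/_NAME_/webapp/ -e s/_PORT_/3000/ < ../deployment/service-template-lb.yml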

Once we have made these changes, we need to commit them to our GitHub repository.

If our Jenkins server is set to trigger on commit, a build will start automatically. Otherwise, we can navigate to the webapp project in Jenkins and manually trigger a build (refer to the Creating a deployment pipeline recipe for details).

Once the rebuild is complete, we can check that the updates to our cluster were successful:

$ kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
adderservice   1         1         1            1           22h
auditservice   1         1         1            1           22h
eventservice   1         1         1            1           22h
mongo          1         1         1            1           22h
redis          1         1         1            1           22h
webapp         1         1         1            1           21m
$ kubectl get services
NAME           CLUSTER-IP       EXTERNAL-IP        PORT(S)           AGE
adderservice   100.67.229.222   <nodes>            8080:30860/TCP    22h
auditservice   100.70.233.161   <nodes>            8081:30212/TCP    22h
eventservice   100.66.88.128    <nodes>            8082:31917/TCP    22h
mongo          100.67.19.86     <nodes>            27017:30940/TCP   22h
redis          100.68.54.205    <nodes>            6379:31896/TCP    22h
webapp         100.65.39.113    a0ac218271915...   3000:31108/TCP    22m

We can see that our webapp service now has a different EXTERNAL-IP field. Let's check this out:

$ kubectl describe service webapp 
Name:                  webapp
Namespace:             micro
Labels:                run=webapp
Annotations:           kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"run":"webapp"},"name":"webapp","namespace":"micro"},"spec":{"ports":[{"name"...
Selector:              run=webapp
Type:                  LoadBalancer
IP:                    100.65.39.113
LoadBalancer Ingress:  a0ac21827191511e78d220ae28f9af81-1027644718.us-west-2.elb.amazonaws.com
Port:                  main 3000/TCP
NodePort:              main 31108/TCP
Endpoints:             100.96.1.7:3000
Session Affinity:      None

From this, we can observe that Kubernetes has created an Elastic Load Balancer (ELB) within AWS for us. We can now access our system through this balancer by pointing a browser at the ELB address; in this example, that is http://a0ac21827191511e78d220ae28f9af81-1027644718.us-west-2.elb.amazonaws.com:3000/add (the address will be similar in form but unique to each deployment). The add screen will load as before, and we can also reach our audit service at the usual /audit route.
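
We can also verify the endpoint from the command line with curl (substituting our own ELB address):

$ curl http://a0ac21827191511e78d220ae28f9af81-1027644718.us-west-2.elb.amazonaws.com:3000/add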

Excellent! We now have a fully automated build pipeline to our system running on AWS!

We will be inspecting this system in the following There's more... section, but please note that AWS will bill us for the instance time and other resources used, such as the ELB.

To remove the system from AWS at any time and stop incurring costs, run the following:

$ kops delete cluster <domain name> --yes 
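
The state-store bucket we created is not removed by kops. Once we are sure the cluster is gone, and if we no longer need the bucket, it can be removed too (assuming the AWS CLI is installed):

$ aws s3 rb s3://<bucket name> --force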