4. Amazon EC2 Container Service

Concepts

Although every container orchestration framework uses its own vocabulary for its concepts, they all try to solve the same issues. That's why you'll find similarities between Amazon EC2 Container Service (ECS) and Kubernetes, but also some differences in how they solve those issues. One strong point in favor of ECS is that, being a native AWS (Amazon Web Services) technology, it is tightly integrated with the other AWS components, such as VPC (Virtual Private Cloud), EC2, ELBs (Elastic Load Balancers), and Route 53. The main issue with ECS is that it is coupled to AWS: currently you can't run an ECS cluster on other cloud providers, which makes sense since it's an Amazon technology.

Let’s review the main concepts in ECS.

Container Instance

A container instance is just a regular EC2 instance that is attached to the cluster. You can create a cluster using a tool called the ECS CLI (Command Line Interface), declare the number of instances you want, and they will be attached automatically. You can also use the graphical interface in the ECS panel and follow a wizard to create a new cluster. Finally, if you prefer to create your instances by hand, you have to initialize them with some extra user data so that they join the cluster. In this book we will be using the ECS CLI tool to create our cluster.
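For reference, here is a minimal sketch of that extra user data, assuming you launch an ECS-optimized AMI (whose ECS agent reads /etc/ecs/ecs.config); the cluster name is the only required value:

#!/bin/bash
# point the ECS agent at our cluster so the instance registers itself
echo ECS_CLUSTER=ecs-cluster >> /etc/ecs/ecs.config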

Task Definition

A task definition is the template where we declare a group of containers that will run on the same instance. You'll probably be running just one container per task definition, but it's good to know that you can run more than one if you wish. Think of a task definition as a skeleton for a task; the actual instances of the task definition run within the cluster and are managed by a task scheduler.

Following is an example task definition for our application:

{
    "family": "webapp",
    "containerDefinitions": [
        {
            "name": "webapp",
            "image": "pacuna/webapp:345opj",
            "cpu": 500,
            "memory": 500,
            "portMappings": [
                {
                    "containerPort": 80,
                    "hostPort": 80,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "environment": [
                {
                    "name": "PASSENGER_APP_ENV",
                    "value": "production"
                }
            ]
        }
    ]
}

We start by declaring a family. This family, along with a revision number, identifies a task definition. Every time you make a change to the same task definition, you create a new revision. Then, when you want to run a task, you can specify the family and revision number; if you don't, ECS will run the latest active revision. Next come the container definitions. The elements are pretty similar to a Docker Compose declaration: we have to indicate the image, the ports, and the environment variables we want to use. We also have to specify the requirements for this task (e.g., the CPU units and the memory limit for the container), which helps us control the resources in our instances.

In order to run a task definition, you have to register it and then run it specifying the family and revision number. To register the task, you can use the following command:

$ aws ecs register-task-definition --cli-input-json file://webapp.json

We can directly pass the JSON (JavaScript Object Notation) file with the task definition. That will return the latest revision number assigned to it by ECS.
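If you later need to check which revision is the latest active one for a family, you can ask ECS directly; a quick, optional check:

$ aws ecs describe-task-definition --task-definition webapp --query 'taskDefinition.revision'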

Then you can run the task with

$ aws ecs run-task --task-definition webapp:1 --cluster ecs-cluster

Here you indicate the family and revision number, along with the name of the cluster where you want to run the task.

Another important aspect of tasks is that we can override a definition and run a task using a custom entry point. This will help us to run tasks that live in our application, like migrations or assets-related tasks.

We can declare a task override with

{
  "containerOverrides": [
    {
      "name": "webapp",
      "command": ["bin/rails", "db:migrate", "RAILS_ENV=production"],
      "environment": [
        {
          "name": "PASSENGER_APP_ENV",
          "value": "production"
        }
      ]
    }
  ]
}

And run it with

$ aws ecs run-task --task-definition webapp --overrides file://migrate-overrides.json --cluster ecs-cluster

That’s going to run a new task using the same definition we had for our main application, but with a custom entry point that’s going to migrate the database. After that, the task will finish. This is similar to what we did in Kubernetes with the concept of Jobs .

Service

The service is the element that gives us a static endpoint for our running tasks. Normally it'll be associated with a load balancer so you can balance the traffic among your replicas. The service is also in charge of maintaining the desired number of running tasks. If you read about the concepts in Kubernetes, the ECS service is like a mix between a Replica Set and a Kubernetes Service.

If we want to associate an ELB with our service, we have to create it first. Then we can specify the ELB identifier in the service definition in the following way:

{
    "cluster": "ecs-cluster",
    "serviceName": "webapp-service",
    "taskDefinition": "webapp:1",
    "loadBalancers": [
        {
            "loadBalancerName": "webapp-load-balancer",
            "containerName": "webapp",
            "containerPort": 80
        }
    ],
    "desiredCount": 3,
    "role": "ecsServiceRole"
}

We need to specify a name for the service, the task definition we want to use to run the tasks, the ELB specifications, and the desired number of tasks that the service should keep running. We also have to specify the role that ECS created for our services.

Once we have this template, we can launch the service with the following:

$ aws ecs create-service --cli-input-json file://webapp-service.json

If later we create a new task definition revision and we want to update the associated service, we can run the following:

$ aws ecs update-service --service webapp-service --task-definition webapp --desired-count 3 --cluster ecs-cluster

Since we didn’t specify a revision number for the task, ECS will pick up the latest active revision and use that definition to run the tasks.

Configuring the ECS-CLI

Currently there are two ways to create an ECS cluster: you can use the graphical interface from the AWS panel for ECS, or you can use the ECS CLI tool. We are going to focus on the latter for launching our cluster. Either choice will give you the same result, but keep in mind that it can be a little more tedious if you decide to configure a cluster by yourself: you'll need to launch instances with the initial user data and security groups, among other configurations.

The ECS CLI is pretty easy to install. You can find more details in the official documentation ( http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_CLI_installation.html ), but basically you'll need to run one command.

The version I’m currently running is

$ ecs-cli --version

Output:

ecs-cli version 0.4.4 (7e1376e)

If you have the AWS CLI tool already configured, meaning you have your AWS access keys configured for the client, you can make use of the ECS CLI tool immediately. If not, you can follow the instructions provided at http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-quick-configuration .
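For reference, the interactive configuration looks roughly like the following; the key values shown here are placeholders:

$ aws configure
AWS Access Key ID [None]: <your-access-key-id>
AWS Secret Access Key [None]: <your-secret-access-key>
Default region name [None]: us-east-1
Default output format [None]: json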

Creating the Cluster Using the Amazon ECS CLI

Before we launch the cluster, we'll need a key pair to access the nodes. You can use an existing key or create and save a new one with the following command:

$ aws ec2 create-key-pair --key-name ecs-cluster-pem --query 'KeyMaterial' --output text > ecs-cluster-pem.pem

Move that PEM to a safe place. You won’t be able to download it again later.
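It's also a good idea to restrict the key file's permissions right away; otherwise ssh will refuse to use it when we access the nodes later:

$ chmod 400 ecs-cluster-pem.pem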

Now we can use the configure command to configure and save our future cluster information. This command will create and save the configuration in a file so we don’t have to specify all the data when running future commands. We only need to pass the name we want for our cluster.

$ ecs-cli configure --cluster ecs-cluster --region us-east-1

Output:

INFO[0000] Saved ECS CLI configuration for cluster (ecs-cluster)

The configuration file is located at ~/.ecs/config. If you inspect that file, you'll see that none of our AWS credentials are actually there. You could also have passed those tokens to the configure command, and they would have been saved in the file. But since we already have our credentials configured for the AWS CLI, the ECS CLI knows where to find them when they aren't passed as arguments.

Now that we have the configuration ready for the cluster, we can launch it along with some nodes. Let's use two t2.medium nodes. We only need to pass the key pair we created previously and the number and size of the nodes; the ECS CLI will use the saved configuration to get our cluster information.

$ ecs-cli up --keypair ecs-cluster-pem --capability-iam --size 2 --instance-type t2.medium

Output:

INFO[0002] Created cluster                    cluster=ecs-cluster
INFO[0003] Waiting for your cluster resources to be created
INFO[0004] Cloudformation stack status        stackStatus=CREATE_IN_PROGRESS
INFO[0066] Cloudformation stack status        stackStatus=CREATE_IN_PROGRESS
INFO[0128] Cloudformation stack status        stackStatus=CREATE_IN_PROGRESS
INFO[0190] Cloudformation stack status        stackStatus=CREATE_IN_PROGRESS
INFO[0251] Cloudformation stack status        stackStatus=CREATE_IN_PROGRESS

This can take a while, and even after the command finishes, you'll have to wait for the instances to be initialized so they can join the cluster. Besides launching the nodes, this command will create a VPC and all the security resources needed to keep our cluster safe.
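While you wait, you can optionally watch the CloudFormation stack the ECS CLI creates behind the scenes. The stack name below assumes the default prefix the tool uses (you'll see it again later as a tag on the VPC):

$ aws cloudformation describe-stacks --stack-name amazon-ecs-cli-setup-ecs-cluster --query 'Stacks[0].StackStatus'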

Let’s use the AWS CLI to ask about the clusters in our account.

$ aws ecs --region us-east-1 describe-clusters

Output:

{
    "failures": [],
    "clusters": [
        {
            "pendingTasksCount": 0,
            "clusterArn": "arn:aws:ecs:us-east-1:586421825777:cluster/default",
            "status": "INACTIVE",
            "activeServicesCount": 0,
            "registeredContainerInstancesCount": 0,
            "clusterName": "default",
            "runningTasksCount": 0
        }
    ]
}

There you can see that the cluster we just launched was actually created, but it doesn’t have any instances registered yet. That’s because these are still initializing.

Wait for a few minutes, and let’s run another command in order to get information about that specific cluster we just launched.

$ aws ecs --region us-east-1 describe-clusters --clusters ecs-cluster

Output:

{
    "clusters": [
        {
            "status": "ACTIVE",
            "clusterName": "ecs-cluster",
            "registeredContainerInstancesCount": 2,
            "pendingTasksCount": 0,
            "runningTasksCount": 0,
            "activeServicesCount": 0,
            "clusterArn": "arn:aws:ecs:us-east-1:586421825777:cluster/ecs-cluster"
        }
    ],
    "failures": []
}

OK, that’s better. Now you can see that the cluster has a status of active and it has two instances registered. You can also see more information about the tasks and services in our cluster. Don’t worry about that yet. We’ll see that later when we start to run containers in the cluster.

DB Configuration

AWS ECS doesn’t currently have native support for DNS (Domain Name System) discovery like Kubernetes has. This is a big issue when you want to run a lot of different services in the cluster and even bigger if lots of those services need to be non-public. This is because the typical form for service discovery in ECS is done via load balancing. So every time you create a service that you need to be discoverable by other services, you need to create a load balancer for it. This can be expensive if you’re running a micro-service architecture. In Kubernetes we solve this problem via DNS discovery. You can launch services like databases without exposing them to the Internet. They will have an internal IP (Internet Protocol) address and also an alias so the other services can communicate with them.

Another problem with ECS is that there's no support for associating cloud storage with the instances. You can mount volumes from the containers to the instances, but that doesn't work if your container jumps from one instance to another, which is what typically happens during deployments. Again, Kubernetes supports persistence by integrating with the cloud provider's native storage objects, like EBS (Elastic Block Store) in the case of AWS. That storage is always external to the cluster and the nodes, so it doesn't matter if a new container is launched on a different node than before, because the data lives outside the cluster.

These two reasons (no native support for DNS discovery and no support for storage objects) are why I recommend that you don't use ECS to run containers that need persistence: things like databases, search engines, caches, and so on.

Fortunately for us, we don't have to set up a server with a database engine and configure the whole thing ourselves, since AWS has a very good database-as-a-service offering called RDS (Relational Database Service). This service allows us to launch a database server that comes already configured, which our containerized web application can connect to.

Creating an RDS Resource

The only complexity in creating an RDS resource is that we have to make sure to create it inside the same VPC that the ECS CLI created for our cluster. That's a much more secure approach than launching a server open to the Internet: we want to run a database that's only accessible from within the same VPC.

First, let’s collect some data that we may need during the execution of the following commands. These are our VPC ID and the ID of one of the VPC’s subnets.

Let’s describe all of our current VPC IDs and the first tag.

$ aws ec2 describe-vpcs --region us-east-1 --query="Vpcs[*].{ID:VpcId,tags:Tags[0]}"

Output:

[
    {
        "ID": "vpc-e61cec82",
        "tags": null
    },
    {
        "ID": "vpc-a0e9c0c7",
        "tags": {
            "Value": "amazon-ecs-cli-setup-ecs-cluster",
            "Key": "aws:cloudformation:stack-name"
        }
    }
]

You can see I have two VPCs and obviously the second one is the one that was created for our cluster. It has a tag with a value that contains the name of the cluster. The ID for this VPC is vpc-a0e9c0c7. Remember to save yours.

Now we can get the subnet IDs for this VPC with the following command:

$ aws ec2 describe-subnets --filters "Name=vpc-id,Values=vpc-a0e9c0c7" --query="Subnets[*].SubnetId"

Output:

[
    "subnet-3a09f717",
    "subnet-e0906cbb "
]

You should also save the values you got for your VPC.

Finally, we also want the security group ID that ECS CLI assigned to our VPC. We want to launch the database (DB) server with this same security group so the communication can be configured more easily. Let’s get our security groups and filter by our VPC ID.

$ aws ec2 describe-security-groups --filters="Name=vpc-id,Values=vpc-a0e9c0c7" --query="SecurityGroups[*].{Description:Description,ID:GroupId}"

Output:

[
    {
        "Description": "ECS Allowed Ports",
        "ID": "sg-bbe6b3c1"
    },
    {
        "Description": "default VPC security group",
        "ID": "sg-efe6b395"
    }
]

Right now I have two security groups for that VPC, but you can see one that says “ECS Allowed Ports.” That’s the one I want to use for the DB. Save that security group ID for our following command.

We need all these identifiers to create our RDS in our VPC. In fact, before launching the server, we have to create a DB Subnet Group. This group is just a collection of subnets designated for the RDS DB instance in our VPC. We got our subnet IDs because they’re necessary to create this DB subnet group.

In the following command, replace the subnet IDs with the ones you got previously. You can leave the rest of the arguments as they are.

$ aws rds create-db-subnet-group --db-subnet-group-name webapp-postgres-subnet --subnet-ids subnet-3a09f717 subnet-e0906cbb --db-subnet-group-description "Subnet for PostgreSQL"

Output:

{
    "DBSubnetGroup": {
        "Subnets": [
            {
                "SubnetStatus": "Active",
                "SubnetIdentifier": "subnet-3a09f717",
                "SubnetAvailabilityZone": {
                    "Name": "us-east-1c"
                }
            },
            {
                "SubnetStatus": "Active",
                "SubnetIdentifier": "subnet-e0906cbb",
                "SubnetAvailabilityZone": {
                    "Name": "us-east-1a"
                }
            }
        ],
        "VpcId": "vpc-a0e9c0c7",
        "DBSubnetGroupDescription": "Subnet for PostgreSQL",
        "SubnetGroupStatus": "Complete",
        "DBSubnetGroupArn": "arn:aws:rds:us-east-1:586421825777:subgrp:webapp-postgres-subnet",
        "DBSubnetGroupName": "webapp-postgres-subnet"
    }
}

The following command will create a DB server for us. Here we are going to create a database named webapp_production, with a master username webapp and a password mysecretpassword. We also indicate the instance class of the server and the size of the associated storage, as well as the engine we want for our database; in our case we are using postgres. Finally, we pass the identifier of the previously created DB subnet group and the VPC security group ID so this server is created in the same VPC as our cluster.

$ aws rds create-db-instance --db-name webapp_production --db-instance-identifier webapp-postgres \
--allocated-storage 20 --db-instance-class db.t2.medium --engine postgres \
--master-username webapp --master-user-password mysecretpassword --db-subnet-group-name webapp-postgres-subnet \
--vpc-security-group-ids sg-bbe6b3c1

Output:

{
    "DBInstance": {
        "PubliclyAccessible": false,
        "MasterUsername": "webapp",
        "MonitoringInterval": 0,
        "LicenseModel": "postgresql-license",
        "VpcSecurityGroups": [
            {
                "Status": "active",
                "VpcSecurityGroupId": "sg-bbe6b3c1"
            }
        ],
        "CopyTagsToSnapshot": false,
        "OptionGroupMemberships": [
            {
                "Status": "in-sync",
                "OptionGroupName": "default:postgres-9-5"
            }
        ],
        "PendingModifiedValues": {
            "MasterUserPassword": "****"
        },
        "Engine": "postgres",
        "MultiAZ": false,
        "DBSecurityGroups": [],
        "DBParameterGroups": [
            {
                "DBParameterGroupName": "default.postgres9.5",
                "ParameterApplyStatus ": "in-sync"
            }
        ],
        "AutoMinorVersionUpgrade": true,
        "PreferredBackupWindow": "04:05-04:35",
        "DBSubnetGroup": {
            "Subnets": [
                {
                    "SubnetStatus": "Active",
                    "SubnetIdentifier": "subnet-3a09f717",
                    "SubnetAvailabilityZone": {
                        "Name": "us-east-1c"
                    }
                },
                {
                    "SubnetStatus": "Active",
                    "SubnetIdentifier": "subnet-e0906cbb",
                    "SubnetAvailabilityZone": {
                        "Name": "us-east-1a"
                    }
                }
            ],
            "DBSubnetGroupName": "webapp-postgres-subnet",
            "VpcId": "vpc-a0e9c0c7",
            "DBSubnet GroupDescription": "Subnet for PostgreSQL",
            "SubnetGroupStatus": "Complete"
        },
        "ReadReplicaDBInstanceIdentifiers": [], "AllocatedStorage": 20,
        "DBInstanceArn": "arn:aws:rds:us-east-1:586421825777:db:webapp-postgres",
        "BackupRetentionPeriod": 1,
        "DBName": "webapp_production",
        "PreferredMaintenanceWindow": "tue:06:28-tue:06:58",
        "DBInstanceStatus": "creating",
        "EngineVersion": "9.5.2",
        "DomainMemberships": [],
        "StorageType": "standard",
        "DbiResourceId": "db-3QR6VRBU4OIDKYCA73XFNINCH4",
        "CACertificateIdentifier": "rds-ca-2015",
        "StorageEncrypted": false,
        "DBInstanceClass": "db.t2.medium",
        "DbInstancePort": 0,
        "DBInstanceIdentifier": "webapp-postgres"
    }
}

You can query the API to get the current DB instance status with the following command:

$ aws rds describe-db-instances --db-instance-identifier webapp-postgres --query 'DBInstances[*].{Status:DBInstanceStatus}'
[
    {
        "Status": "creating"
    }
]

Wait for a few minutes until the DB is ready and the status becomes available.

[
     {
        "Status": "available "
     }
]

Once the server is available, we can query the end point AWS gave to the DB:

$ aws rds describe-db-instances --db-instance-identifier webapp-postgres --query 'DBInstances[*].{URL:Endpoint.Address}'

Output:

[
    {
        "URL": "webapp-postgres.caxygd3nh0bk.us-east-1.rds.amazonaws.com"
    }
]

Great! Now we have our database ready. But before we configure it in our application, we have to do one more thing. Right now we are using the cluster's VPC security group, which currently should only have an inbound rule for port 80. What we want is to allow all traffic between elements that live inside this VPC.

We can create a custom rule for this with the following command. Remember to change the ID with the security group ID you got previously:

$ aws ec2 authorize-security-group-ingress --group-id sg-bbe6b3c1 --protocol all --port all --source-group sg-bbe6b3c1

Let’s also add a rule for accessing the nodes via ssh from anywhere. This can be useful for debugging and diagnosing container errors.

$ aws ec2 authorize-security-group-ingress --group-id sg-bbe6b3c1 --protocol tcp --port 22 --cidr 0.0.0.0/0

Now our nodes should be able to access this database. You won't be able to connect to it from outside the cluster. That could be an inconvenience if you like to inspect your databases with a graphical user interface (GUI), but it's also a very strong security measure that helps prevent attacks.
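If you do need to inspect the database by hand, one option is to ssh into one of the cluster nodes (we just opened port 22) and run a psql client from a container there. A rough sketch, assuming the ECS-optimized AMI's ec2-user, a node public IP as a placeholder, and the postgres:9.5 image:

$ ssh -i ecs-cluster-pem.pem ec2-user@<node-public-ip>
$ docker run --rm -it postgres:9.5 psql -h webapp-postgres.caxygd3nh0bk.us-east-1.rds.amazonaws.com -U webapp webapp_production

psql will prompt for the master password we set when creating the instance.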

Now we can configure the production database credentials in our application's config/database.yml.

production:
  <<: *default
  host: webapp-postgres.caxygd3nh0bk.us-east-1.rds.amazonaws.com
  database: webapp_production
  username: webapp
  password: mysecretpassword

Let’s rebuild our Docker image and add a tag for ECS using our latest commit

$ git add .
$ git commit -m 'add database credentials for rds'
$ LC=$(git rev-parse --short HEAD)
$ docker build -t pacuna/webapp:ecs-${LC} .

And push it to Docker Hub:

$ docker push pacuna/webapp:ecs-${LC}            

And that’s it for the configuration. Now it’s time to build the templates to deploy the application in the cluster.

Creating the Task Definition

If you followed the instructions to deploy with Kubernetes, you saw that we created an independent kube folder to keep the Kubernetes templates. Now let's create a folder to keep the ECS templates. Run the following commands in the root of the project:

$ mkdir ecs
$ mkdir ecs/task-definitions

Now we can create a skeleton task definition for our application with the following command:

$ aws ecs register-task-definition --generate-cli-skeleton > ecs/task-definitions/webapp.json

That’s going to create an empty task definition which we can fill out with our data.

Let’s clean up the file a little bit and add the information for our first task definition.

Open the file and replace its content with the following:

{
    "family": "webapp",
    "containerDefinitions": [
        {
            "name": "webapp",
            "image": "pacuna/webapp:ecs-0eddbb1",
            "cpu": 500,
            "memory": 500,
            "portMappings": [
                {
                    "containerPort": 80,
                    "hostPort": 80,
                    "protocol ": "tcp"
                }
            ],
            "essential": true,
            "environment": [
                {
                    "name": "PASSENGER_APP_ENV",
                    "value": "production"
                }
            ]
        }
    ]
}

For the image, you should use the Docker image we created after we set up the database credentials. Its tag should be ecs- followed by the last commit hash.

Now we can register this file with

$ aws ecs register-task-definition --cli-input-json file://ecs/task-definitions/webapp.json

Output:

{
    "taskDefinition": {
        "status": "ACTIVE",
        "family": "webapp",
        "volumes": [],
        "taskDefinitionArn": "arn:aws:ecs:us-east-1:586421825777:task-definition/webapp:1",
        "containerDefinitions": [
            {
                "environment": [
                    {
                        "name": "PASSENGER_APP_ENV",
                        "value": "production"
                    }
                ],
                "name": "webapp",
                "mountPoints": [],
                "image": "pacuna/webapp:ecs-0eddbb1",
                "cpu": 500,
                "portMappings": [
                    {
                        "protocol": "tcp",
                        "containerPort": 80,
                        "hostPort": 80
                    }
                ],
                "memory": 500,
                "essential": true,
                "volumesFrom": []
            }
        ],
        "revision": 1
    }
}

The important part of the output is the family name we declared in the JSON file and the revision number. Every time we modify a task definition we create a new revision. In this case I got revision 1 because it’s the first time I registered this task.

We can run this task with

$ aws ecs run-task --task-definition webapp:1 --cluster ecs-cluster

Output:

{
    "failures": [],
    "tasks": [
        {
            "taskArn": "arn:aws:ecs:us-east-1:586421825777:task/1133a3a7-811c-4672-ab95-2c343930825c",
            "overrides": {
                "containerOverrides": [
                    {
                        "name": "webapp"
                    }
                ]
            },
            "lastStatus": "PENDING",
            "containerInstanceArn": "arn:aws:ecs:us-east-1:586421825777:container-instance/edd5e9a7-7f29-43a6-98c1-b7d7dd912cd2",
            "createdAt": 1475113475.222,
            "clusterArn": "arn:aws:ecs:us-east-1:586421825777:cluster/ecs-cluster",
            "desiredStatus": "RUNNING",
            "taskDefinitionArn": "arn:aws:ecs:us-east-1:586421825777:task-definition/webapp:1",
            "containers": [
                {
                    "containerArn": "arn:aws:ecs:us-east-1:586421825777:container/2d4aa3c7-48d7-40c1-9b30-8938566c2b0c",
                    "taskArn": "arn:aws:ecs:us-east-1:586421825777:task/1133a3a7-811c-4672-ab95-2c343930825c",
                    "lastStatus": "PENDING",
                    "name": "webapp"
                }
            ]
        }
    ]
}

That command is going to run the container declared in the task definition somewhere in the cluster. If we want to get more info about our task, first we have to get its identifier.

$ aws ecs list-tasks --cluster ecs-cluster

Output:

{
    "taskArns": [
        "arn:aws:ecs:us-east-1:586421825777:task/1133a3a7-811c-4672-ab95-2c343930825c"
    ]
}

Now we can use that identifier to get the details of the task.

$ aws ecs describe-tasks --tasks arn:aws:ecs:us-east-1:586421825777:task/1133a3a7-811c-4672-ab95-2c343930825c --cluster ecs-cluster

Output:

{
    "failures": [],
    "tasks": [
        {
            "taskArn": "arn:aws:ecs:us-east-1:586421825777:task/1133a3a7-811c-4672-ab95-2c343930825c",
            "overrides": {
                "containerOverrides": [
                    {
                        "name": "webapp"
                    }
                ]
            },
            "lastStatus": "RUNNING",
            "containerInstanceArn": "arn:aws:ecs:us-east-1:586421825777:container-instance/edd5e9a7-7f29-43a6-98c1-b7d7dd912cd2",
            "createdAt": 1475113475.222,
            "clusterArn": "arn:aws:ecs:us-east-1:586421825777:cluster/ecs-cluster",
            "startedAt": 1475113521.285,
            "desiredStatus": "RUNNING",
            "taskDefinitionArn": "arn:aws:ecs:us-east-1:586421825777:task-definition/webapp:1",
            "containers": [
                {
                    "containerArn": "arn:aws:ecs:us-east-1:586421825777:container/2d4aa3c7-48d7-40c1-9b30-8938566c2b0c",
                    "taskArn": "arn:aws:ecs:us-east-1:586421825777:task/1133a3a7-811c-4672-ab95-2c343930825c",
                    "lastStatus": "RUNNING",
                    "name": "webapp",
                    "networkBindings": [
                        {
                            "protocol": "tcp",
                            "bindIP": "0.0.0.0",
                            "containerPort": 80,
                            "hostPort": 80
                        }
                    ]
                }
            ]
        }
    ]
}

We can see that the last status is RUNNING. If in your case it says PENDING, just wait until the image is pulled and the container starts. Now, we know our application is not going to work out of the box: we have our database created, but we haven't run the migrations yet. In Kubernetes we used the concept of Jobs for this kind of work. Here we are going to run a task that overrides the task definition, which means we can run this same task but override its entry point, which is exactly what we need.
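As an aside, if you'd rather poll just that lastStatus field while you wait instead of reading the whole JSON, you can filter the describe call:

$ aws ecs describe-tasks --tasks arn:aws:ecs:us-east-1:586421825777:task/1133a3a7-811c-4672-ab95-2c343930825c --cluster ecs-cluster --query 'tasks[*].lastStatus'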

Let’s create a new JSON file that’s going to contain the overrides for the migrations,

$ touch ecs/task-definitions/migrate-overrides.json

And add the following to it:

{
  "containerOverrides": [
    {
      "name": "webapp",
      "command": ["bin/rails", "db:migrate", "RAILS_ENV=production"],
      "environment": [
        {
          "name": "PASSENGER_APP_ENV",
          "value": "production "
        }
      ]
    }
  ]
}

As you can see, there's nothing crazy going on there. We are just setting a new entry point and making sure the environment variables are preserved.

In order to run a task with these overrides, we can run the following command:

$ aws ecs run-task --task-definition webapp:1 --overrides file://ecs/task-definitions/migrate-overrides.json --cluster ecs-cluster

Output:

{
    "failures": [],
    "tasks": [
        {
            "taskArn": "arn:aws:ecs:us-east-1:586421825777:task/865d7bcc-55c3-4c2b-993f-4bbd32c1f63f",
            "overrides": {
                "containerOverrides": [
                    {
                        "environment": [
                            {
                                "name": "PASSENGER_APP_ENV",
                                "value": "production"
                            }
                        ],
                        "command": [
                            "bin/rails",
                            "db:migrate",
                            "RAILS_ENV=production"
                        ],
                        "name": "webapp"
                    }
                ]
            },
            "lastStatus": "PENDING",
            "containerInstanceArn": "arn:aws:ecs:us-east-1:586421825777:container-instance/931096f1-4e01-4baf-8cfc-73a80e5b1ead",
            "createdAt": 1475113727.013,
            "clusterArn": "arn:aws:ecs:us-east-1:586421825777:cluster/ecs-cluster",
            "desiredStatus": "RUNNING",
            "taskDefinitionArn": "arn:aws:ecs:us-east-1:586421825777:task-definition/webapp:1",
            "containers": [
                {
                    "containerArn": "arn:aws:ecs:us-east-1:586421825777:container/2b547545-1bed-483b-bb4a-dbc73e9de3ad",
                    "taskArn": "arn:aws:ecs:us-east-1:586421825777:task/865d7bcc-55c3-4c2b-993f-4bbd32c1f63f",
                    "lastStatus": "PENDING",
                    "name": "webapp"
                }
            ]
        }
    ]
}

That’s going to take our same task definition, override its entry point so the migrations are executed, and then stop the task.

Now we just need to get the IP address of the node that’s running our application. This can be a little bit tedious to do using the AWS CLI, but don’t worry. Later we’ll create a service with an associated load balancer that’s going to give us a static DNS for our container.

Following are the steps to get the IP address of the node that’s running the task:

  1. Get our task identifier:

    $ aws ecs list-tasks --cluster ecs-cluster

    {
        "taskArns": [
            "arn:aws:ecs:us-east-1:586421825777:task/1133a3a7-811c-4672-ab95-2c343930825c"
        ]
    }
  2. Describe the task with that identifier filtered by the container instance identifier:

    $ aws ecs describe-tasks --tasks arn:aws:ecs:us-east-1:586421825777:task/1133a3a7-811c-4672-ab95-2c343930825c --cluster ecs-cluster --query="tasks[*].containerInstanceArn"


    [
        "arn:aws:ecs:us-east-1:586421825777:container-instance/edd5e9a7-7f29-43a6-98c1-b7d7dd912cd2"
    ]
  3. Using the instance identifier, get the instance ID:

    $ aws ecs describe-container-instances --container-instances arn:aws:ecs:us-east-1:586421825777:container-instance/edd5e9a7-7f29-43a6-98c1-b7d7dd912cd2 --query="containerInstances[*].ec2InstanceId" --cluster ecs-cluster

    [
        "i-69f4a17f"
    ]
  4. Using that ID, get the IP address by using the EC2 API:

    $ aws ec2 describe-instances --instance-ids i-69f4a17f --query="Reservations[0].Instances[0].PublicIpAddress"

    "54.164.16.149"

Now, we know the container maps its port 80 to port 80 of the host, so we can cURL that IP address on one of our endpoints to see if it's actually responding.

$ curl -I http://54.164.16.149/articles

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Status: 200 OK
Cache-Control: max-age=0, private, must-revalidate
ETag: W/"4f53cda18c2baa0c0354bb5f9a3ecbe5"
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-Runtime: 0.011589
X-Request-Id: 049c50bd-4f6e-4170-956c-8a085750edb8
Date: Thu, 29 Sep 2016 01:51:54 GMT
X-Powered-By: Phusion Passenger 5.0.29
Server: nginx/1.10.1 + Phusion Passenger 5.0.29

Great! Our application is up and running with no errors.

In the next section we’ll create a service that’s going to keep our task alive 24/7 and also create a load balancer so we can have a static address for our application .

Creating a Service for Our Application

As I mentioned previously, one of the cool features of ECS Services is that you can attach a load balancer to them. This will allow us to update our service and task definitions during deployments while keeping the address of the service.

One important thing is that we need a path where the load balancer can check whether the container the requests are being routed to is responding correctly. By default, the load balancer will hit the root path of our application. Since we don't have any action taking care of that route, production will respond with a not-found status code, which will cause the health check to fail.

Let’s fix that by quickly adding an action that responds with a 200 status code for that path. In the routes file add the following route:

get '/', to: "pages#welcome"

Create a new controller

$ touch app/controllers/pages_controller.rb

And add the following to it:

class PagesController < ApplicationController
  def welcome
    render json: {message: 'hello!'}, status: 200
  end
end

And that’s it. Now our base path will respond with a 200 status code if the container starts correctly and the load balancer will be attached with no problems.

We can create a load balancer in the VPC with the following command (replace with your cluster's security group ID and the subnet IDs of your VPC):

$ aws elb create-load-balancer --load-balancer-name webapp-load-balancer --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" --subnets subnet-3a09f717 subnet-e0906cbb --security-groups sg-bbe6b3c1

Output:

{
     "DNSName": "webapp-load-balancer-1711291190.us-east-1.elb.amazonaws.com"
}
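If you want to be explicit about the health check the ELB runs against that root path, you can also configure it directly. The interval and threshold values below are just illustrative:

$ aws elb configure-health-check --load-balancer-name webapp-load-balancer --health-check Target=HTTP:80/,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2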

Now we can use that load balancer when creating the service.

First, let’s stop the task that’s running our application. For that, we’ll need its identifier.

$ aws ecs list-tasks --cluster ecs-cluster

Output:

{
    "taskArns": [
        "arn:aws:ecs:us-east-1:586421825777:task/1133a3a7-811c-4672-ab95-2c343930825c"
    ]
}

And now we can run the stop-task command.

$ aws ecs stop-task --task arn:aws:ecs:us-east-1:586421825777:task/1133a3a7-811c-4672-ab95-2c343930825c --cluster ecs-cluster

Output:

{
    "task": {
        "taskArn": "arn:aws:ecs:us-east-1:586421825777:task/1133a3a7-811c-4672-ab95-2c343930825c",
        "overrides": {
            "containerOverrides": [
                {
                    "name": "webapp"
                }
            ]
        },
        "lastStatus": "RUNNING",
        "containerInstanceArn": "arn:aws:ecs:us-east-1:586421825777:container-instance/edd5e9a7-7f29-43a6-98c1-b7d7dd912cd2",
        "createdAt": 1475113475.222,
        "clusterArn": "arn:aws:ecs:us-east-1:586421825777:cluster/ecs-cluster",
        "startedAt": 1475113521.285,
        "desiredStatus": "STOPPED",
        "stoppedReason": "Task stopped by user",
        "taskDefinitionArn": "arn:aws:ecs:us-east-1:586421825777:task-definition/webapp:1",
        "containers": [
            {
                "containerArn": "arn:aws:ecs:us-east-1:586421825777:container/2d4aa3c7-48d7-40c1-9b30-8938566c2b0c",
                "taskArn": "arn:aws:ecs:us-east-1:586421825777:task/1133a3a7-811c-4672-ab95-2c343930825c",
                "lastStatus": "RUNNING",
                "name": "webapp",
                "networkBindings": [
                    {
                        "protocol": "tcp",
                        "bindIP": "0.0.0.0",
                        "containerPort": 80,
                        "hostPort": 80
                    }
                ]
            }
        ]
    }
}

If we list the tasks again, we shouldn’t see anything.

$ aws ecs list-tasks --cluster ecs-cluster

Output:

{
    "taskArns": []
}

Let’s create a folder for our services.

$ mkdir ecs/services

And now let’s use the following command to generate a service skeleton:

$ aws ecs create-service --generate-cli-skeleton > ecs/services/webapp-service.json

Open the generated file and replace the values with the following:

{
    "cluster": "ecs-cluster",
    "serviceName": "webapp-service",
    "taskDefinition": "webapp:1",
    "loadBalancers": [
        {
            "loadBalancerName": "webapp-load-balancer",
            "containerName": "webapp",
            "containerPort": 80
        }
    ],
    "desiredCount": 1,
    "role": "ecsServiceRole"
}

And then create the service with

$ aws ecs create-service --cli-input-json file://ecs/services/webapp-service.json

Output:

{
    "service": {
        "status": "ACTIVE",
        "taskDefinition": "arn:aws:ecs:us-east-1:586421825777:task-definition/webapp:1",
        "pendingCount": 0,
        "loadBalancers": [
            {
                "containerName": "webapp",
                "containerPort": 80,
                "loadBalancerName": "webapp-load-balancer"
            }
        ],
        "roleArn": "arn:aws:iam::586421825777:role/ecsServiceRole",
        "createdAt": 1475114557.793,
        "desiredCount": 2,
        "serviceName": "webapp-service ",
        "clusterArn": "arn:aws:ecs:us-east-1:586421825777:cluster/ecs-cluster",
        "serviceArn": "arn:aws:ecs:us-east-1:586421825777:service/webapp-service",
        "deploymentConfiguration": {
            "maximumPercent": 200,
            "minimumHealthyPercent": 100
        },
        "deployments": [
             {
                "status": "PRIMARY",
                "pendingCount": 0,
                "createdAt": 1475114557.793,
                "desiredCount": 1,
                "taskDefinition": "arn:aws:ecs:us-east-1:586421825777:task-definition/webapp:1",
                "updatedAt": 1475114557.793,
                "id": "ecs-svc/9223370561740218014",
                "runningCount": 0
            }
        ],
        "events": [],
        "runningCount": 0
    }
}
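Before hitting the load balancer, you can optionally verify that the service has actually brought a task up; a filtered describe call keeps the output short:

$ aws ecs describe-services --cluster ecs-cluster --services webapp-service --query 'services[*].{desired:desiredCount,running:runningCount,status:status}'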

Now we can try to hit the ELB URL (uniform resource locator) to see if it's actually routing requests to our application. Remember that we got the DNS name in the output of the create-load-balancer command:

$ curl -I webapp-load-balancer-1711291190.us-east-1.elb.amazonaws.com/articles

HTTP/1.1 200 OK
Cache-Control: max-age=0, private, must-revalidate
Content-Type: application/json; charset=utf-8
Date: Thu, 29 Sep 2016 02:03:33 GMT
ETag: W/"4f53cda18c2baa0c0354bb5f9a3ecbe5"
Server: nginx/1.10.1 + Phusion Passenger 5.0.29
Status: 200 OK
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Powered-By: Phusion Passenger 5.0.29
X-Request-Id: 5e670c6e-dde7-48bd-bb61-00d5598946b4
X-Runtime: 0.012451
X-XSS-Protection: 1; mode=block
Connection: keep-alive

The cool thing about the ELB is that it balances the load among the number of tasks you define as your desired count; it wouldn't be able to do that without a service. In our case we're using just one replica, since we only have two nodes available. The problem is that this type of ELB won't allow us to have two containers exposing the same port on the same instance, and every time we update a service, a new container starts in the cluster before the old one dies. So during a deployment there will always be a moment where two containers are running and exposing the same port. In a real scenario where you need high availability for your application, you'll want a cluster with many more nodes and services that keep a higher number of replicas running at the same time.
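If you do get stuck on that port conflict during an update on a small cluster, one possible workaround is to relax the service's deployment configuration so ECS is allowed to stop the old task before starting the new one. The values below are illustrative, and this trades a brief gap in availability for being able to deploy with limited capacity:

$ aws ecs update-service --service webapp-service --cluster ecs-cluster --deployment-configuration maximumPercent=100,minimumHealthyPercent=0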

Let’s try to create an article to see if everything is working correctly.

$ curl -H "Content-Type: application/json" -X POST -d '{"title":"the title","body":"The body"}' http://webapp-load-balancer-1711291190.us-east-1.elb.amazonaws.com/articles

Output:

{"id":1,"title":"the title","body":"The body","created_at":"2016-09-29T02:04:32.267Z","updated_at":"
2016-09-29T02:04:32.267Z","slug":"the-title"}%

Cool! So we have our DB and our load balancer working correctly.

In the next section we will see how to run updates to our application and how we can automate those updates with a little bit of bash scripting.

Running Updates to Our Application

Just as we did with Kubernetes, we want to use a couple of automated scripts to run updates for our application. We have to automate two things: migrating our database in case there are new migrations, and, after building a new version of our image, creating a new task definition revision with that new tag and updating our service to use it.

First, let’s create a new deploy folder inside the ecs folder to keep these scripts and two empty files: one for pushing and one to migrate.

$ mkdir ecs/deploy
$ touch ecs/deploy/push.sh
$ touch ecs/deploy/migrate.sh
$ chmod +x ecs/deploy/*

Add the following to the push.sh file:

#!/bin/sh                
set -x


LC=$(git rev-parse --short HEAD)
docker build -f Dockerfile -t pacuna/webapp:${LC} .
docker push pacuna/webapp:${LC}


# replace LAST_COMMIT with the latest commit hash and output the result to a tmp file
sed "s/webapp:LAST_COMMIT/webapp:$LC/g" ecs/task-definitions/webapp.json > webapp.json.tmp

# register the new task definition and delete the tmp file
aws ecs register-task-definition --cli-input-json file://webapp.json.tmp
rm webapp.json.tmp


# update the service
aws ecs update-service --service webapp-service --task-definition webapp --desired-count 1 --cluster ecs-cluster

As we did with Kubernetes, we are tagging our newest image with the latest commit hash. Then we use sed to replace the LAST_COMMIT placeholder in our task definition template with the latest image tag. We output the result to a temp file, register a new task definition with it, and then update the service. When we update the service, it is not necessary to pass the revision number along with the family name; if we don't pass one, the service will use the latest active revision, which will be the one we are deploying at that moment. If you look at the desired count, you'll see that we are only deploying one replica. That's because we currently have a replica running in the cluster using port 80, and we cannot deploy two more replicas since we only have two instances in this cluster and this type of load balancer (classic) doesn't support dynamic port mapping, which would allow several containers to expose the same container port through different ports on the load balancer. If you want to have more replicas running, you should add more instances to the cluster.

Now we have to modify our task definition and add the placeholder. Open ecs/task-definitions/webapp.json and modify the image name to

"image": "pacuna/webapp:LAST_COMMIT"

And that’s it for the push.sh file. Let’s go with migrate.sh. Add the following to the ecs/deploy/migrate.sh:

#!/bin/sh              
set -x
aws ecs run-task --task-definition webapp --overrides file://ecs/task-definitions/migrate-overrides.json --cluster ecs-cluster

Pretty simple. We are just running the same command we ran to migrate our database earlier. Again, we can skip the revision number and ECS will use the latest active revision. This simple command will run our latest migrations if there are any.

Now you can test those scripts by committing your changes

$ git add -A
$ git commit -m 'add deploy scripts for ecs'

And executing the files.

$ ./ecs/deploy/push.sh

++ git rev-parse --short HEAD
+ LC=1cc4379
+ docker build -f Dockerfile -t pacuna/webapp:1cc4379 .
Sending build context to Docker daemon 24.47 MB
Step 1 : FROM phusion/passenger-ruby23:0.9.19
 ---> 6841e494987f
 (truncated)
...
+ docker push pacuna/webapp:1cc4379
The push refers to a repository [docker.io/pacuna/webapp]
692dc46ce0e9: Pushed
b3e13bc936af: Pushing [======================>           ] 10.46 MB/23.17 MB
25c9e91cead0: Pushed
7570593f934a: Pushing [====================>             ] 9.678 MB/23.17 MB
(truncated)
...
{
    "service": {
        "status": "ACTIVE",
        "taskDefinition": "arn:aws:ecs:us-east-1:586421825777:task-definition/webapp:8",
        "pendingCount": 0,
        "loadBalancers": [
            {
                "containerName": "webapp",
                "containerPort": 80,
                "loadBalancerName": "webapp-load-balancer"
            }
        ],
...

And then migrate

$ ./ecs/deploy/migrate.sh

{
    "failures": [],
    "tasks": [
        {
            "taskArn": "arn:aws:ecs:us-east-1:586421825777:task/a7128bec-43e9-4773-8425-698419d86db3",
            "overrides": {
                "containerOverrides": [
                    {
                        "environment": [
                            {
                                "name": "PASSENGER_APP_ENV",
                                "value": "production"
                            }
                        ],
                        "command": [
                            "bin/rails",
                            "db:migrate",
                            "RAILS_ENV=production"
                        ],
                        "name": "webapp"
                    }
                ]
            },
            "lastStatus": "PENDING",
            "containerInstanceArn": "arn:aws:ecs:us-east-1:586421825777:container-instance/edd5e9a7-7f29-43a6-98c1-b7d7dd912cd2",
            "createdAt": 1475198720.749,
            "clusterArn": "arn:aws:ecs:us-east-1:586421825777:cluster/ecs-cluster",
            "desiredStatus": "RUNNING",
            "taskDefinitionArn": "arn:aws:ecs:us-east-1:586421825777:task-definition/webapp:8",
            "containers": [
                {
                    "containerArn": "arn:aws:ecs:us-east-1:586421825777:container/bf8c369b-666a-42c6-8a3b-b57cca400c1b",
                    "taskArn": "arn:aws:ecs:us-east-1:586421825777:task/a7128bec-43e9-4773-8425-698419d86db3",
                    "lastStatus": "PENDING",
                    "name": "webapp"
                }
            ]
        }
    ]
}

And that’s it! Later we are going to use these scripts for building a pipeline with Jenkins. Jenkins will execute the scripts after every deploy we make to a GitHub branch.

Summary

After this chapter, you should be able to run your Rails application using Amazon ECS. You should also be able to see the main differences between ECS and Kubernetes and choose the solution that makes the most sense for you. You can see that both technologies are pretty strong and have mostly conceptual differences. Although Kubernetes has better integration in some areas, like volumes and DNS, ECS is tightly integrated with all the AWS components, which makes it a very solid choice if you run your system on AWS. Just as we did in Chapter 3, we ended this chapter by creating a couple of automation scripts to run deploys in an efficient and structured way.

In Chapter 5, we’ll take automation to the next level by using a continuous integration server to run our deployments along with our test suite.
