The road to automated infrastructure deployment

Now that we know what we want, how can we do it? Luckily for us, as hinted in the previous list, Ansible can do this part for us; we just need to write a couple of configuration files. AWS is quite complex in this area, though, so it will not be as simple as just starting an instance, since we want an isolated VPC environment. However, since we will only be managing one server, we do not need to worry much about inter-VPC networking, which makes things a bit easier.

We first need to consider all the steps that will be required. Some of these will be quite foreign to most readers, as AWS is rather complex and most developers do not usually work on networking, but they are the minimum steps necessary to have an isolated VPC without clobbering the default settings of your account:

  • Set up the VPC for a specific virtual network.
  • Create a subnet and tie it to the VPC. Without this, our machines will not be able to use the network.
  • Set up a virtual Internet gateway, attach it to the VPC, and add a routing table that sends addresses outside the VPC through it. If we do not do this, the machines will not be able to reach the Internet.
  • Set up a security group (firewall) whitelisting the ports through which we want to reach our server (the SSH and HTTP ports). By default all ports are blocked, so this makes sure the launched instances are reachable.
  • Finally, provision the VM instance using the configured VPC for networking.

To tear down everything, we will need to do the same thing, but just in reverse.

First, we need some variables that will be shared across both deploy and teardown playbooks. Create a group_vars/all file in the same directory as the big Ansible example that we have been working on in this chapter:

# Region that will accompany all AWS-related module usages
aws_region: us-west-1

# ID of our Packer-built AMI
cluster_node_ami: ami-a694a8c6

# Key name that will be used to manage the instances. Do not
# worry about what this is right now - we will create it in a bit
ssh_key_name: swarm_key

# Define the internal IP network for the VPC
swarm_vpc_cidr: "172.31.0.0/16"
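
The ssh_key_name variable refers to an EC2 key pair that we will create shortly, so you can safely keep reading. Purely as a hedged preview, a minimal standalone playbook like the following could create it with the ec2_key module; the local destination path is an assumption, not one of this chapter's files:

- hosts: localhost
  connection: local
  gather_facts: False

  tasks:
    # Creates the key pair if it does not exist; AWS returns the
    # private key material only once, on first creation
    - name: Creating the swarm key pair
      ec2_key:
        region: "{{ aws_region }}"
        name: "{{ ssh_key_name }}"
      register: swarm_key_result

    # Save the private key so we can SSH into the instances later
    # (the destination path here is just an assumption)
    - name: Saving the private key
      copy:
        content: "{{ swarm_key_result.key.private_key }}"
        dest: "~/.ssh/{{ ssh_key_name }}.pem"
        mode: "0600"
      when: swarm_key_result.key.private_key is defined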

Now we can write our deploy.yml in the same directory as packer.json, using some of those variables.

The difficulty of this deployment is scaling up significantly from our previous examples, and there is no good way to concisely cover all the information spread across dozens of AWS, networking, and Ansible topics, but here are links to the modules we will use that, if possible, you should read before proceeding:
https://docs.ansible.com/ansible/latest/ec2_vpc_net_module.html
https://docs.ansible.com/ansible/latest/set_fact_module.html
https://docs.ansible.com/ansible/latest/ec2_vpc_subnet_module.html
https://docs.ansible.com/ansible/latest/ec2_vpc_igw_module.html
https://docs.ansible.com/ansible/latest/ec2_vpc_route_table_module.html
https://docs.ansible.com/ansible/latest/ec2_group_module.html
https://docs.ansible.com/ansible/latest/ec2_module.html
- hosts: localhost
  connection: local
  gather_facts: False

  tasks:
    - name: Setting up VPC
      ec2_vpc_net:
        region: "{{ aws_region }}"
        name: "Swarm VPC"
        cidr_block: "{{ swarm_vpc_cidr }}"
      register: swarm_vpc

    - set_fact:
        vpc: "{{ swarm_vpc.vpc }}"

    - name: Setting up the subnet tied to the VPC
      ec2_vpc_subnet:
        region: "{{ aws_region }}"
        vpc_id: "{{ vpc.id }}"
        cidr: "{{ swarm_vpc_cidr }}"
        resource_tags:
          Name: "Swarm subnet"
      register: swarm_subnet

    - name: Setting up the gateway for the VPC
      ec2_vpc_igw:
        region: "{{ aws_region }}"
        vpc_id: "{{ vpc.id }}"
      register: swarm_gateway

    - name: Setting up routing table for the VPC network
      ec2_vpc_route_table:
        region: "{{ aws_region }}"
        vpc_id: "{{ vpc.id }}"
        lookup: tag
        tags:
          Name: "Swarm Routing Table"
        subnets:
          - "{{ swarm_subnet.subnet.id }}"
        routes:
          - dest: 0.0.0.0/0
            gateway_id: "{{ swarm_gateway.gateway_id }}"

    - name: Setting up security group / firewall
      ec2_group:
        region: "{{ aws_region }}"
        name: "Swarm SG"
        description: "Security group for the swarm"
        vpc_id: "{{ vpc.id }}"
        rules:
          - cidr_ip: 0.0.0.0/0
            proto: tcp
            from_port: 22
            to_port: 22
          - cidr_ip: 0.0.0.0/0
            proto: tcp
            from_port: 80
            to_port: 80
        rules_egress:
          - cidr_ip: 0.0.0.0/0
            proto: all
      register: swarm_sg

    - name: Provisioning cluster node
      ec2:
        region: "{{ aws_region }}"
        image: "{{ cluster_node_ami }}"
        key_name: "{{ ssh_key_name }}"
        instance_type: "t2.medium"
        group_id: "{{ swarm_sg.group_id }}"
        vpc_subnet_id: "{{ swarm_subnet.subnet.id }}"
        source_dest_check: no
        assign_public_ip: yes
        monitoring: no
        instance_tags:
          Name: cluster-node
        wait: yes
        wait_timeout: 500

What we are doing here closely matches our earlier plan, but now we have concrete deployment code to match against it:

  1. We set up the VPC with the ec2_vpc_net module.
  2. We create our subnet and associate it with the VPC using the ec2_vpc_subnet module.
  3. The virtual Internet gateway for our cloud is created with ec2_vpc_igw.
  4. The routing table created with ec2_vpc_route_table then sends any addresses that are not within the same network through the Internet gateway.
  5. The ec2_group module is used to enable ingress and egress networking, but only port 22 (SSH) and port 80 (HTTP) are allowed in.
  6. Finally, our EC2 instance is created within the newly configured VPC with the ec2 module.
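
With the playbook and group_vars/all in place, running the deployment should be a single ansible-playbook invocation. As a rough sketch, assuming Ansible and boto are installed and that you pass your AWS credentials through the environment (one of the ways the EC2 modules can pick them up):

$ export AWS_ACCESS_KEY_ID="<your key id>"
$ export AWS_SECRET_ACCESS_KEY="<your secret key>"
$ ansible-playbook deploy.yml

If everything is configured correctly, after the wait_timeout window you should be able to see a running cluster-node instance in the us-west-1 region.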

As we mentioned earlier, the teardown should be very similar but in reverse, with a lot more state: absent arguments. Let's put the following in destroy.yml in the same folder:

- hosts: localhost
  connection: local
  gather_facts: False

  tasks:
    - name: Finding VMs to delete
      ec2_remote_facts:
        region: "{{ aws_region }}"
        filters:
          "tag:Name": "cluster-node"
      register: deletable_instances

    - name: Deleting instances
      ec2:
        region: "{{ aws_region }}"
        instance_ids: "{{ item.id }}"
        state: absent
        wait: yes
        wait_timeout: 600
      with_items: "{{ deletable_instances.instances }}"
      when: deletable_instances is defined

    # v2.0.0.2 doesn't have ec2_vpc_net_facts so we have to fake it to get VPC info
    - name: Finding route table info
      ec2_vpc_route_table_facts:
        region: "{{ aws_region }}"
        filters:
          "tag:Name": "Swarm Routing Table"
      register: swarm_route_table

    - set_fact:
        vpc: "{{ swarm_route_table.route_tables[0].vpc_id }}"
      when: swarm_route_table.route_tables | length > 0

    - name: Removing security group
      ec2_group:
        region: "{{ aws_region }}"
        name: "Swarm SG"
        state: absent
        description: ""
        vpc_id: "{{ vpc }}"
      when: vpc is defined

    - name: Deleting gateway
      ec2_vpc_igw:
        region: "{{ aws_region }}"
        vpc_id: "{{ vpc }}"
        state: absent
      when: vpc is defined

    - name: Deleting subnet
      ec2_vpc_subnet:
        region: "{{ aws_region }}"
        vpc_id: "{{ vpc }}"
        cidr: "{{ swarm_vpc_cidr }}"
        state: absent
      when: vpc is defined

    - name: Deleting route table
      ec2_vpc_route_table:
        region: "{{ aws_region }}"
        vpc_id: "{{ vpc }}"
        state: absent
        lookup: tag
        tags:
          Name: "Swarm Routing Table"
      when: vpc is defined

    - name: Deleting VPC
      ec2_vpc_net:
        region: "{{ aws_region }}"
        name: "Swarm VPC"
        cidr_block: "{{ swarm_vpc_cidr }}"
        state: absent

If the deploy playbook was readable to you, this one should be easy to understand: as we mentioned, it just runs the same steps in reverse, removing any infrastructure pieces we created.
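
Tearing everything down is then the mirror-image invocation (again assuming your AWS credentials are available in the environment):

$ ansible-playbook destroy.yml

Since every task either filters for the resources we tagged or uses state: absent with a when: vpc is defined guard, running it against an already-cleaned account should simply skip the tasks rather than fail.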
