Now that we have been introduced to what Ansible looks like and how it operates, it is time to do something practical with it. What we will do at this point is make an Ansible deployment configuration to apply some of the system tweaks we covered in the previous chapter and have Docker ready for us on the machine after running the playbook.
First, we need to create the directory structure that will hold our deployment files. We will call our main role swarm_node, and since the whole machine is going to be just a swarm node, we will give our top-level deployment playbook the same name:
$ # First we create our deployment source folder and move there
$ mkdir ~/ansible_deployment
$ cd ~/ansible_deployment/
$ # Next we create the directories we will need
$ mkdir -p roles/swarm_node/files roles/swarm_node/tasks
$ # Make a few placeholder files
$ touch roles/swarm_node/tasks/main.yml \
        swarm_node.yml \
        hosts
$ # Let's see what we have so far
$ tree
.
├── hosts
├── roles
│   └── swarm_node
│       ├── files
│       └── tasks
│           └── main.yml
└── swarm_node.yml

4 directories, 3 files
Now let's add the following content to the top-level swarm_node.yml. This will be the main entry point for Ansible and it basically just defines target hosts and roles that we want to be run on them:
---
- name: Swarm node setup
  hosts: all
  become: True

  roles:
    - swarm_node
What we are doing here should be mostly obvious:
- hosts: all: Run this on all servers defined in the inventory file. Generally, this would be a group or a specific DNS name, but since we will only have a single target machine, all is fine.
- become: True: Since we use SSH to run things on the target and the SSH user is usually not root, we need to tell Ansible that it must elevate privileges with sudo for the commands we run. If the user requires a password for sudo, you can provide it when invoking the playbook with the ansible-playbook -K flag, but we will be using AWS instances later in the chapter, which do not require one.
- roles: swarm_node: This is the list of roles we want to apply to the targets, which for now is just a single one called swarm_node. The name must match a folder name in roles/.
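The hosts inventory file we created earlier is still empty at this point. As a minimal sketch, assuming a single target machine at a placeholder address, it could contain just a single line in Ansible's INI inventory format:

```ini
# hosts - Ansible inventory (the IP below is a placeholder;
# replace it with your own machine's IP or DNS name)
192.168.56.101
```

With more machines, you would list one host per line, optionally under [group] headings that the playbook's hosts: key could then reference instead of all.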
Next, we will define the system-tweaking configuration files that we covered in the previous chapter for things like increasing the maximum file descriptor count, ulimits, and a couple of others. Add the following files and their respective content to the roles/swarm_node/files/ folder:
- conntrack.conf:
net.netfilter.nf_conntrack_tcp_timeout_established = 43200
net.netfilter.nf_conntrack_max = 524288
- file-descriptor-increase.conf:
fs.file-max = 1000000
- socket-buffers.conf:
net.core.optmem_max = 40960
net.core.rmem_default = 16777216
net.core.rmem_max = 16777216
net.core.wmem_default = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 87380 16777216
- ulimit-open-files-increase.conf:
root soft nofile 65536
root hard nofile 65536
* soft nofile 65536
* hard nofile 65536
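Since a malformed line in a sysctl drop-in is silently ignored or rejected at load time, it can be worth sanity-checking that every line in the sysctl-style files follows the key = value shape before shipping them. A quick sketch (the regex and the temporary path are assumptions for illustration):

```shell
#!/bin/sh
# Write out the conntrack settings from above, then verify that every
# line looks like "dotted.key = value"; grep -Ev prints any offending
# lines, so an empty result (exit status 1) means the file is clean.
cat > /tmp/conntrack.conf <<'EOF'
net.netfilter.nf_conntrack_tcp_timeout_established = 43200
net.netfilter.nf_conntrack_max = 524288
EOF

if grep -Ev '^[a-z0-9._]+ = [0-9 ]+$' /tmp/conntrack.conf; then
  echo "malformed lines found"
else
  echo "conntrack.conf OK"   # this is what we expect to see here
fi
```

The same check applies to socket-buffers.conf and file-descriptor-increase.conf; ulimit-open-files-increase.conf uses the limits.d format instead, so it would need a different pattern.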
With those added, our tree should look a bit more like this now:
.
├── hosts
├── roles
│   └── swarm_node
│       ├── files
│       │   ├── conntrack.conf
│       │   ├── file-descriptor-increase.conf
│       │   ├── socket-buffers.conf
│       │   └── ulimit-open-files-increase.conf
│       └── tasks
│           └── main.yml
└── swarm_node.yml
With most of the files in place, we can now finally move on to the main configuration file, roles/swarm_node/tasks/main.yml. In it, we will lay out our configuration steps one by one using Ansible's modules and DSL to:
- apt-get dist-upgrade the image for security
- Apply various improvements to machine configuration files in order to perform better as a Docker host
- Install Docker
To make the following Ansible configuration easier to follow, keep this general task structure in mind; it underpins each discrete step we will use and becomes second nature after you have seen it a couple of times:
- name: A descriptive step name that shows in output
  module_name:
    module_arg1: arg_value
    module_arg2: arg2_value
    module_array_arg3:
      - arg3_item1
      ...
  ...
You can also find the documentation for each specific module we use here:
- https://docs.ansible.com/ansible/latest/apt_module.html
- https://docs.ansible.com/ansible/latest/copy_module.html
- https://docs.ansible.com/ansible/latest/lineinfile_module.html
- https://docs.ansible.com/ansible/latest/command_module.html
- https://docs.ansible.com/ansible/latest/apt_key_module.html
- https://docs.ansible.com/ansible/latest/apt_repository_module.html
Let us see what that main installation playbook (roles/swarm_node/tasks/main.yml) should look like:
---
- name: Dist-upgrading the image
  apt:
    upgrade: dist
    force: yes
    update_cache: yes
    cache_valid_time: 3600

- name: Fixing ulimit through limits.d
  copy:
    src: "{{ item }}.conf"
    dest: /etc/security/limits.d/90-{{ item }}.conf
  with_items:
    - ulimit-open-files-increase

- name: Fixing ulimits through pam_limits
  lineinfile:
    dest: /etc/pam.d/common-session
    state: present
    line: "session required pam_limits.so"

- name: Ensuring server-like kernel settings are set
  copy:
    src: "{{ item }}.conf"
    dest: /etc/sysctl.d/10-{{ item }}.conf
  with_items:
    - socket-buffers
    - file-descriptor-increase
    - conntrack

# Bug: https://github.com/systemd/systemd/issues/1113
- name: Working around netfilter loading order
  lineinfile:
    dest: /etc/modules
    state: present
    line: "{{ item }}"
  with_items:
    - nf_conntrack_ipv4
    - nf_conntrack_ipv6

# The command module does not go through a shell, so we use the
# shell module here to get the output redirection
- name: Increasing max connection buckets
  shell: echo '131072' > /sys/module/nf_conntrack/parameters/hashsize

# Install Docker
- name: Fetching Docker's GPG key
  apt_key:
    keyserver: hkp://pool.sks-keyservers.net
    id: 58118E89F3A912897C070ADBF76221572C52609D

- name: Adding Docker apt repository
  apt_repository:
    repo: 'deb https://apt.dockerproject.org/repo {{ ansible_distribution | lower }}-{{ ansible_distribution_release | lower }} main'
    state: present

- name: Installing Docker
  apt:
    name: docker-engine
    state: present
    update_cache: yes
    cache_valid_time: 3600
In this file, we sequentially ordered the steps that take the machine from its base state to a system fully capable of running Docker containers, using some of the core Ansible modules and the configuration files we created earlier. One thing that might not be very obvious is our use of variables such as {{ ansible_distribution | lower }}: here, we are taking Ansible facts (https://docs.ansible.com/ansible/latest/playbooks_variables.html) gathered about the system we are running on and passing them through the Jinja2 lower() filter to ensure that the values are lowercase. By doing this for the repository endpoint, we can use the same configuration on almost any deb-based server target without trouble, as the variables will be substituted with the appropriate values.
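To make the substitution concrete, here is a rough shell approximation of what those two facts and the lower filter produce; the fact values used are the ones Ansible reports on an Ubuntu 16.04 host (ansible_distribution is Ubuntu, ansible_distribution_release is xenial, which is already lowercase, so the filter merely guarantees the casing):

```shell
#!/bin/sh
# Approximate "{{ ... | lower }}" with tr, then assemble the repo line
# exactly the way the apt_repository task template does.
distribution="Ubuntu"
release="xenial"
echo "deb https://apt.dockerproject.org/repo $(echo "$distribution" | tr '[:upper:]' '[:lower:]')-$(echo "$release" | tr '[:upper:]' '[:lower:]') main"
# prints: deb https://apt.dockerproject.org/repo ubuntu-xenial main
```

On a Debian host the same template would instead resolve to a debian-<release> suffix, which is exactly why the facts-based approach is portable across deb-based distributions.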
At this point, the only thing we would need to do to apply this configuration to a machine is to add the server's IP or DNS name to the hosts file and run the playbook with ansible-playbook <options> swarm_node.yml. But since we want to run this on Amazon infrastructure, we will stop here and see how we can take these configuration steps and from them create an Amazon Machine Image (AMI), from which we can start any number of identical, fully configured Elastic Compute Cloud (EC2) instances.
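For reference, a typical invocation against the inventory we created might look like the following sketch; the remote user name is an assumption that depends on how your target machine is provisioned (ubuntu is the default user on Ubuntu cloud images):

```shell
# Check the playbook for syntax errors before touching any machine
ansible-playbook -i hosts --syntax-check swarm_node.yml

# Run the playbook for real: -i points at our inventory and -u sets
# the remote SSH user; add -K if that user needs a sudo password
ansible-playbook -i hosts -u ubuntu swarm_node.yml
```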