Managing a public cloud infrastructure

The management of public cloud infrastructures with Ansible is no more difficult than the management of OpenStack, which we covered earlier. In general, for any IaaS provider supported by Ansible, there is a three-step process to getting it working:

  1. Establish which Ansible modules are available to support the cloud provider.
  2. Install any prerequisite software or libraries on the Ansible host.
  3. Define the playbook and run it against the infrastructure provider.

There are dynamic inventory scripts readily available for most providers, too, and we have already demonstrated two in this book:

  • ec2.py was discussed in Chapter 1, The System Architecture and Design of Ansible.
  • openstack_inventory.py was demonstrated earlier in this chapter.

Let's take a look at Amazon Web Services (AWS), and specifically, their EC2 offering. We can boot up a new server from an image of our choosing, using exactly the same high-level process that we did with OpenStack earlier. However, as I'm sure you will have guessed by now, we have to use an Ansible module that offers specific EC2 support. Let's build up the playbook. First of all, our initial play will once again run from the local host, as this will be making the calls to EC2 to boot up our new server:

---
- name: boot server
  hosts: localhost
  gather_facts: false

Next, we will use the ec2 module in place of the os_server module to boot up our desired server. This code is really just an example: normally, just like with our os_server example, you would not include the secret keys in the playbook, but would store them in a vault somewhere (a sketch of that vault-based alternative follows the example below):

  tasks:
    - name: boot the server
      ec2:
        access_key: XXXXXXXXXXXXXXXXX
        secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
        keypair: mastery-key
        group: default
        type: t2.medium
        image: "ami-000848c4d7224c557"
        region: eu-west-2
        instance_tags: "{'ansible_group':'mastery_server', 'Name':'mastery1'}"
        exact_count: 1
        count_tag:
          ansible_group: "mastery_server"
        wait: true
        user_data: |
          #!/bin/bash
          sudo dnf install -y python python2-dnf
      register: newserver
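
As mentioned above, you would not normally hardcode the keys. A minimal sketch of the vault-based alternative (the filename and variable names here are hypothetical) keeps them in a file encrypted with ansible-vault and loads it through vars_files:

  vars_files:
    # Created with, for example: ansible-vault create aws_credentials.yml
    - aws_credentials.yml
  tasks:
    - name: boot the server
      ec2:
        access_key: "{{ aws_access_key }}"
        secret_key: "{{ aws_secret_key }}"
        # ...the remaining parameters are unchanged from the task above...
      register: newserver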
The ec2 module requires the Python boto library to be installed on the Ansible host. The method for this will vary between operating systems, but on our CentOS 7 demo host, it was installed using the sudo yum install python-boto command.

The preceding code is intended to perform the same job as our os_server example, and while it looks similar at a high level, there are many differences. Hence, it is essential to read the module documentation whenever working with a new module, in order to understand precisely how to use it. Of specific interest, do note the following:

  • The ec2 module creates a new virtual machine every time it is run, unless you set the exact_count parameter in conjunction with the count_tag parameter (referencing a tag set in the instance_tags line); the short sketch after this list illustrates this.
  • The user_data field can be used to send a post-creation script to the new VM; this is incredibly useful when initial configuration is needed immediately, before Ansible can connect. In this case, we use it to install the Python prerequisites required to install ImageMagick later on.
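
To illustrate the first point, here is a hedged sketch (not from the original example; the newservers register name is hypothetical) of how re-running the same task with a higher exact_count scales the tagged group rather than creating duplicates:

    - name: ensure two mastery servers exist
      ec2:
        # ...same credentials, image, region, and networking parameters as above...
        instance_tags: "{'ansible_group':'mastery_server', 'Name':'mastery1'}"
        exact_count: 2
        count_tag:
          ansible_group: "mastery_server"
        wait: true
      register: newservers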

Next, we can obtain the public IP address of our newly created server by using the newserver variable that we registered in the last task. However, note the different variable structure, as compared to the way that we accessed this information when using the os_server module (again, always refer to the documentation):

    - name: show floating ip
      debug:
        var: newserver.tagged_instances[0].public_ip

Another key difference between the ec2 module and the os_server module is that ec2 does not wait for SSH connectivity to become available before completing; thus, we must define a task specifically for this purpose to ensure that our playbook doesn't fail later on due to a lack of connectivity:

    - name: Wait for SSH to come up
      wait_for_connection:
        timeout: 320
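
The same goal can also be achieved by polling the SSH port directly from the control host with the older wait_for module; a hedged sketch, useful where wait_for_connection is not available:

    - name: Wait for SSH port to open
      wait_for:
        host: "{{ newserver.tagged_instances[0].public_ip }}"
        port: 22
        timeout: 320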

Once this task has completed, we will know that our host is alive and responding to SSH, so we can proceed to using add_host to add this new host to the inventory, and then install ImageMagick just like we did before (the image used here is the same Fedora 29 cloud-based image used in the OpenStack example):

    - name: add new server
      add_host:
        name: "mastery1"
        ansible_ssh_host: "{{ newserver.tagged_instances[0].public_ip }}"
        ansible_ssh_user: "fedora"

- name: configure server
  hosts: mastery1
  gather_facts: false

  tasks:
    - name: install imagemagick
      dnf:
        name: "ImageMagick"
      become: "yes"

Putting all of this together and running the playbook should result in something like the following screenshot. Note that I have turned SSH host key checking off, to prevent the SSH transport agent from asking about adding the host key on the first run, which would cause the playbook to hang and wait for user intervention, something that we don't want here:
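
Host key checking can be switched off globally by setting host_key_checking = False in the [defaults] section of ansible.cfg, or by exporting ANSIBLE_HOST_KEY_CHECKING=False in the environment. A hedged alternative is to scope it to just the dynamically added host through its connection variables, for example:

    - name: add new server
      add_host:
        name: "mastery1"
        ansible_ssh_host: "{{ newserver.tagged_instances[0].public_ip }}"
        ansible_ssh_user: "fedora"
        # Pass an SSH option for this host only, rather than disabling checks globally
        ansible_ssh_common_args: "-o StrictHostKeyChecking=no"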

As we have seen here, we can achieve the same result on a different cloud provider, using only a subtly different playbook. The key here is to read the documentation that comes with each module and ensure that both the parameters and return values are correctly referenced.

We could apply this methodology to Azure, Google Cloud, or any of the other cloud providers that Ansible ships with support for. If we wanted to repeat this example on Azure, we would need to use the azure_rm_virtualmachine module. The documentation for this module states that we need Python 2.7 or newer (already part of our CentOS 7 demo machine) and the azure Python module, version 2.0.0 or higher. On CentOS 7, there was no RPM for the latter dependency, so it was installed using the following commands (the first installs the azure Python module, and the second installs Ansible's Azure support):

sudo pip install azure
sudo pip install ansible[azure]

With these prerequisites satisfied, we can build up our playbook again. Note that with Azure, multiple authentication methods are possible. For the sake of simplicity, I am using the Azure Active Directory credentials that I created for this demo; however, to enable this, I also had to install the official Azure CLI utility and log in using the following:

az login

This ensures that your Ansible host is trusted by Azure. In practice, you would set up a service principal that removes the need for this; however, doing so is beyond the scope of this book (a sketch of what the service principal variant might look like follows the boot task later in this section). To continue with this example, we set up the header of our playbook like before:

---
- name: boot server
  hosts: localhost
  gather_facts: false
  vars:
    vm_password: Password123!

Note that this time, we will store a password for our new VM in a variable; normally, we would do this in a vault, but that is left as an exercise for the reader. From here, we use the azure_rm_virtualmachine module to boot up our new VM. To make use of a Fedora 29 image for continuity with the previous examples, I had to go to the image marketplace on Azure, which requires some additional parameters, such as plan, to be defined. To enable the use of this image with Ansible, I first had to find it, and then accept the author's terms, using the az command-line utility with these commands:

az vm image list --offer fedora --all --output table
az vm image show --urn tunnelbiz:fedora:fedora29:1.0.0
az vm image accept-terms --urn tunnelbiz:fedora:fedora29:1.0.0

I also had to create the resource group and network that the VM would use; these are very much Azure-specific steps, and are beyond the scope of this book. However, once all of that was created, I was then able to write the following playbook code to boot up our Azure-based Fedora 29 image:

  tasks:
    - name: boot the server
      azure_rm_virtualmachine:
        ad_user: [email protected]
        password: xxxxxxx
        resource_group: mastery
        name: mastery1
        admin_username: fedora
        admin_password: "{{ vm_password }}"
        vm_size: Standard_B1s
        image:
          offer: fedora
          publisher: tunnelbiz
          sku: fedora29
          version: latest
        plan:
          name: fedora29
          product: fedora
          publisher: tunnelbiz
      register: newserver
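
As mentioned earlier, a service principal removes the need for the interactive az login; once one exists, the same task could authenticate with its credentials instead of an AD user and password. A hedged sketch of just the parameters that would change (the variable names are hypothetical and would normally come from a vault):

    - name: boot the server (service principal variant)
      azure_rm_virtualmachine:
        client_id: "{{ azure_client_id }}"
        secret: "{{ azure_secret }}"
        subscription_id: "{{ azure_subscription_id }}"
        tenant: "{{ azure_tenant }}"
        # ...all other parameters as in the task above...
      register: newserver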

Like before, we obtain the public IP address of our image (note the complex variable required to access this; a shorter, set_fact-based variant is sketched after these tasks), ensure that SSH access is working, and then use add_host to add the new VM to our runtime inventory:

    - name: show floating ip
      debug:
        var: newserver.ansible_facts.azure_vm.properties.networkProfile.networkInterfaces[0].properties.ipConfigurations[0].properties.publicIPAddress.properties.ipAddress

    - name: Wait for SSH to come up
      wait_for_connection:
        delay: 1
        timeout: 320

    - name: add new server
      add_host:
        name: "mastery1"
        ansible_ssh_host: "{{ newserver.ansible_facts.azure_vm.properties.networkProfile.networkInterfaces[0].properties.ipConfigurations[0].properties.publicIPAddress.properties.ipAddress }}"
        ansible_ssh_user: "fedora"
        ansible_ssh_pass: "{{ vm_password }}"
        ansible_become_pass: "{{ vm_password }}"
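
Since that return path is long, a hedged alternative is to capture it once with set_fact immediately after the boot task and then reference the shorter fact (the azure_public_ip name is hypothetical) in the debug and add_host tasks:

    - name: capture the public IP address
      set_fact:
        azure_public_ip: "{{ newserver.ansible_facts.azure_vm.properties.networkProfile.networkInterfaces[0].properties.ipConfigurations[0].properties.publicIPAddress.properties.ipAddress }}"

    - name: add new server
      add_host:
        name: "mastery1"
        ansible_ssh_host: "{{ azure_public_ip }}"
        ansible_ssh_user: "fedora"
        ansible_ssh_pass: "{{ vm_password }}"
        ansible_become_pass: "{{ vm_password }}"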

Azure allows for either password- or key-based authentication for SSH on Linux VMs; we're using password-based authentication here for simplicity. Also, note the newly utilized ansible_become_pass connection variable, as the Fedora 29 image that we are using will prompt for a password when sudo is used, potentially blocking execution. Finally, with this work complete, we install ImageMagick, like before:

- name: configure server
  hosts: mastery1
  gather_facts: false

  tasks:
    - name: install python
      raw: "dnf install -y python python2-dnf"
      become: "yes"

    - name: install imagemagick
      dnf:
        name: "ImageMagick"
      become: "yes"

Let's take a look at this in action:

The output is very similar to before, demonstrating that we can easily perform the same actions on a different cloud platform with just a little effort spent learning how the relevant modules work. This section is by no means definitive, given the number of platforms and operations that Ansible supports, but we hope that it gives you an idea of the process required to integrate Ansible with a new cloud platform. Next, we will look at using Ansible to interact with Docker containers.
