Baking images with Packer

Packer was released back in 2013 with the goal of simplifying, automating, and codifying the image creation process. It removes the pain of baking images for different platforms by replacing many manual steps with a single JSON template fed to the CLI. Like Terraform, it is written in the Go programming language. Installing it is a piece of cake--just download the archive for your operating system from https://www.packer.io/downloads.html and extract the binary to a directory available in your $PATH environment variable. Then, verify your installation:

$> packer -v
0.12.0

You are all set up to bake images! To do so, just run packer build my_template.json. It won't work yet, of course, because we don't have a template. Create the base.json file and let's start filling it in. Our goal is to bake a CentOS 7 AMI with all packages updated and Puppet installed.

The only required section in the template is the builders array. Builders are configuration blocks, one for each provider that you want to bake an image for. Each provider is different, and each requires some kind of API authorization, a few network details, and so on. Some examples of builders are AWS AMIs, Google Compute Engine images, and VirtualBox. We will continue with AWS, using the amazon-ebs builder. There are two other AWS builders in Packer that are more advanced and not required for our exercise.

Note

Each Packer template can have multiple builders defined, which allows you to bake an image for multiple providers at once.

The configuration for the amazon-ebs builder looks as follows:

{ 
  "builders": [ 
    { 
      "type": "amazon-ebs", 
      "ami_name": "centos-7-base-puppet-{{timestamp}}", 
      "region": "eu-central-1", 
      "source_ami": "ami-9bf712f4", 
      "instance_type": "t2.micro", 
      "ssh_username": "centos", 
      "ssh_pty": true 
    } 
  ] 
} 

In case you don't have a default VPC in your account, you will also need to specify the vpc_id and subnet_id keys. There is no need to configure a security group or a key pair if you don't want to: Packer will create them when they are not specified and destroy them after the build is done. Go ahead and start the build:
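For accounts without a default VPC, the extra keys slot into the same builder block. The IDs below are placeholders for illustration, not real resources:

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "ami_name": "centos-7-base-puppet-{{timestamp}}",
      "region": "eu-central-1",
      "source_ami": "ami-9bf712f4",
      "instance_type": "t2.micro",
      "ssh_username": "centos",
      "ssh_pty": true,
      "vpc_id": "vpc-11111111",
      "subnet_id": "subnet-22222222"
    }
  ]
}
```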

$> packer build base.json
amazon-ebs output will be in this color.
==> amazon-ebs: Prevalidating AMI Name...
    amazon-ebs: Found Image ID: ami-9bf712f4
==> amazon-ebs: Creating temporary keypair: 
    packer_582c19ae-62d9-5ffd-06c3-ae22db9e7d3c
==> amazon-ebs: Creating temporary security group for this instance...
==> amazon-ebs: Authorizing access to port 22 the 
    temporary security group...
==> amazon-ebs: Launching a source AWS instance...
    amazon-ebs: Instance ID: i-f00f954d
==> amazon-ebs: Waiting for instance (i-f00f954d) to become ready...
==> amazon-ebs: Waiting for SSH to become available...
==> amazon-ebs: Connected to SSH!
==> amazon-ebs: Stopping the source instance...
==> amazon-ebs: Waiting for the instance to stop...
==> amazon-ebs: Creating the AMI: centos-7-base-puppet-1479285166
    amazon-ebs: AMI: ami-a0d114cf
==> amazon-ebs: Waiting for AMI to become ready...
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Cleaning up any extra volumes...
==> amazon-ebs: Destroying volume (vol-52edcfd8)...
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
Build 'amazon-ebs' finished.
==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
eu-central-1: ami-a0d114cf

If you read through this log, you will start to appreciate the work Packer does. There are so many steps that would take tens of minutes to do by hand, only to create an AMI that is no different from the source image! With Packer, it's just 15 lines of JSON that you can put into source control, version, and collaborate on.

Note this part: "ami_name": "centos-7-base-puppet-{{timestamp}}". Here, the internal Packer variable timestamp is used. It's very handy for naming your AMIs. We can also define our own variables:

{ 
  "variables": { 
    "environment": "production", 
    "prefix": "{{ env `AMI_NAME_PREFIX` }}" 
  }, 
  "builders": [ 
    { 
      "ami_name": "{{ user `prefix` }}centos-7-base-puppet-{{ user `environment` }}-{{timestamp}}", 
      "type": "amazon-ebs", 
      ... 
} 

Just like with Terraform, there are many ways to supply these variables. You could do it inline:

$> packer build -var 'environment=development' base.json

You could also store them in a file, as follows:

{ 
  "prefix": "packt" 
} 

Then, you could use it via a command-line argument:

$> packer build -var-file=variables.json base.json

You could also supply them via environment variables, as long as you configured the variable with the env function, like the "prefix" variable shown earlier. To verify that the template is correct before running the build, you can use the validate command:

$> packer validate base.json
Template validated successfully.
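As for the "prefix" variable defined with the env function earlier, its value is read from an environment variable at build time. A hypothetical session could look like this (the prefix value is just an example):

```shell
# AMI_NAME_PREFIX feeds the "prefix" user variable via {{ env `AMI_NAME_PREFIX` }}
export AMI_NAME_PREFIX="packt-"
packer build base.json
```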

Our template is pretty useless, though: it just repackages an existing AMI! To do some real configuration of what goes into this AMI, we should use provisioners. Packer has been around for quite some time now, so it has much better support for various provisioners than Terraform. It even has built-in Puppet provisioners (masterless and with a Puppet server), two types of Ansible provisioners, Salt support, and many others. We will stick with the simple remote shell provisioner, though I encourage you to try different ones out.

You can also configure multiple provisioners per template. For example, you could upload configuration files with the file provisioner and then copy them to the needed locations with a remote shell provisioner. This is not an uncommon use case: Packer's file provisioner can't use sudo privileges, so if you need to upload a system service configuration, you need to do it in two steps. Add the following provisioner configuration right after builders:

... 
], 
  "provisioners": [ 
    { 
      "type": "shell", 
      "inline": [ 
        "sudo yum update -y", 
        "sudo rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm", 
        "sudo yum install puppet -y" 
      ] 
    } 
  ] 
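The two-step upload described earlier could look like the following sketch; the my-app.service file name and paths are made up for illustration:

```json
"provisioners": [
  {
    "type": "file",
    "source": "my-app.service",
    "destination": "/tmp/my-app.service"
  },
  {
    "type": "shell",
    "inline": [
      "sudo mv /tmp/my-app.service /etc/systemd/system/my-app.service"
    ]
  }
]
```

The file provisioner drops the file into a path writable by the SSH user, and the shell provisioner then moves it into place with sudo.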

Unfortunately, Packer doesn't have a substitute for Terraform's handy plan command. To test whether your template works, you have to run the build. But given that a nonexistent AMI can't do much harm to the infrastructure, the only downside is cost--Packer creates EC2 instances in order to create the image, and these instances cost money.

Run packer build base.json again and get a cup of coffee--it takes a while for the build to finish. You probably don't want to do this manually in the future. Packer is a perfect fit for a Continuous Integration server such as Jenkins or GitLab CI. Ideally, you should even try to architect a complete pipeline that builds the image, tests it, and rolls it out to production. But let's not overcomplicate things right now.
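As a rough sketch of what such a CI job could run: validate the template, build it, and capture the resulting AMI ID from Packer's machine-readable output. Treat the exact field positions in the log parsing as an assumption to verify against your Packer version:

```shell
# Hypothetical CI steps; assumes packer is on PATH and AWS credentials are set
set -e
packer validate base.json
packer build -machine-readable base.json | tee build.log
# The artifact line looks like: <ts>,amazon-ebs,artifact,0,id,eu-central-1:ami-...
AMI_ID=$(grep ',artifact,0,id,' build.log | cut -d, -f6 | cut -d: -f2)
echo "Baked ${AMI_ID}"
```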

After a little while, Packer will report to you about the success of the build:

Build 'amazon-ebs' finished.
==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
eu-central-1: ami-12d3167d

With that, our Packer 101 is finished. Just like Terraform, the tool is focused on doing exactly one job, and it needs some tooling around it to be productive. One option is to use HashiCorp Atlas--a paid service that wraps Packer and Terraform and provides hosting for your templates. Another option is, as usual, the DIY approach.

Again, you had to learn a bit of Packer because it's the tool that works best in tandem with Terraform. It's also the tool that makes Immutable Infrastructure efforts much more enjoyable. Without further ado, let's get back to Terraform and teach it how to update servers in an immutable fashion!
