Chapter 9. Bringing it together: deployment

This chapter covers

  • Automating your deployment to AWS
  • Deploying your application to your AWS infrastructure
  • Pushing incremental changes to your application

Your development cluster is set up, your production cluster is set up, and you’ve architected a full application stack for performance scaling and availability. Now, you have to figure out how you’re going to tackle deployment. There are many, many models for constructing deployment mechanics, and many options for continuous integration systems, task schedulers, and build systems. After chapter 8, you have two major systems to rely on: CoreOS and AWS. You want to be able to do something that fits both systems without creating deep dependencies between them.

In this chapter, you’ll create a fairly generic workflow; it cuts a few corners to stay generic and to avoid bringing more components into the system. When you start building this kind of pipeline for your own applications, you’ll probably use a variety of other tools, but this example provides the basic inputs of a deployment system that you can plug into your tooling.

Ultimately, this means you’re going to create something in AWS that eventually flips a value in etcd, but you want to do it without

  • Building out new EC2 VMs to do orchestration
  • Running an agent on your CoreOS nodes that only works in AWS

I’ve established these constraints for a couple of reasons. First, adding infrastructure that becomes a dependency of other infrastructure is a bit of an antipattern for the twelve-factor methodology; it tightly couples two systems. Second, your AWS deployments should be functionally similar to how deployments work in your local Vagrant cluster, to reduce “it works on my machine” syndrome.

Software development in your organization undoubtedly goes through some kind of lifecycle. Services that support the software may go through a similar process. In the process you followed in chapters 6 and 7 to architect your system, the final results were Docker containers and service-unit files. You want to reliably (and with some amount of abstraction) deploy that software and those services into your brand-new production cluster, as rapidly as you did in your local development environment.

In this chapter, you’ll add/build some new things:

  • A pipeline in AWS that sets a value in etcd to give the sidekick its deployment context
  • A gateway so you can execute that trigger remotely (from Docker Hub webhooks)
  • Modified sidekicks to get deployment context from etcd

The big takeaway will be how to deploy software to a CoreOS cluster running in AWS with a single touchpoint that doesn’t require you to directly interact with fleet. Figure 9.1 shows what you’ll build: you’ll use AWS Lambda, AWS API Gateway, and Docker Hub to initiate a sidekick-controlled deploy via etcd.

Figure 9.1. Deployment pipeline

9.1. New CloudFormation objects

Yes, you get to add more objects to the ever-growing CloudFormation template!

  • An input parameter that serves as a pseudo API deploy key
  • An output key that you can drop into Docker Hub’s webhook configuration
  • A Lambda function that sets a key on your internal etcd load balancer
  • An API Gateway configuration to create the endpoint for the webhook

In this chapter, I assume you’ve completed the previous chapter and understand, for example, that a Parameter object goes into the Parameters section of the CloudFormation file. You’ll create the parameter and output objects first, and at the end of the section, you’ll run an update-stack command.

Note

If you’re in doubt about your YAML, the completed CloudFormation template for this chapter is, as with the last chapter, available in the book’s code repository (code/ch9/ch9-cfn-cluster.yml).

9.1.1. Parameter and output

You’ll add one new parameter and one new output object. They’re related, as you’ll see.

Listing 9.1. Parameters
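The new parameter is a single string that serves as the deploy key. As a rough sketch (the name DeployKey and the length constraint are assumptions; the repository version may differ):

DeployKey:
  Type: String
  Description: Pseudo API key; a long random string that becomes part of the webhook URL
  MinLength: 32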

Listing 9.2. Outputs
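The matching output assembles the public URL. A sketch, assuming the REST API resource is named DeployApi (as in the API Gateway sketches later in this section) and the stage is named prod:

DeployHook:
  Description: URL to put in Docker Hub web hook
  Value: !Sub "https://${DeployApi}.execute-api.${AWS::Region}.amazonaws.com/prod/${DeployKey}"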

The final product from the modifications of your CloudFormation stack is a URL that causes your CoreOS cluster to update the web application. Unfortunately, the Docker Hub webhook feature doesn’t come from a specific IP block and doesn’t support custom headers, so you’re forced here to make this URL public. The (admittedly less-than-secure) solution is to use the URL like an API key with a reasonable amount of entropy, which would be extremely difficult to guess. Because you’re using AWS API Gateway, though, if you wanted to add a more secure trigger—for example, from your CI system—doing so would be trivial.

Here’s the worst-case scenario, given the way you’re going to construct this. If someone brute-forced—for example, https://<YOUR API GATEWAY>/prod/iheph6un2ropiodei7kamo7eegoo2kethai3cohfaicaegae4ea8ahheriedoo1w—and figured out the right payload, the worst they could do would be to cause the service to restart very quickly. When you hook up APIs like this in the real world, security is your responsibility. As mentioned in the introduction to this chapter, this is an interaction you may want to control with another system that’s more appropriate to your workflow and security requirements. You’ll get started with the Resources next.

9.1.2. AWS Lambda

AWS Lambda is a newer service in AWS that lets you run snippets of code (Node.js, Python, or Java) in reaction to events in AWS. It has an interesting pricing model: you’re charged in units of 100 ms for how long it takes a task to complete. This makes it great for quick, task-oriented operations like asynchronous deployments. You’re going to set up the event emitter (AWS API Gateway) in the next subsection, but you’ll set up your Lambda function here.

First, you need a new IAM role for Lambda so that it can do things in your VPCs.

Listing 9.3. Lambda role
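A sketch of such a role; the managed policy shown is what lets a Lambda function attach to VPC subnets (the logical name LambdaRole is an assumption):

LambdaRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal: { Service: [ lambda.amazonaws.com ] }
          Action: [ "sts:AssumeRole" ]
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole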

Now let’s get into the Lambda function. The short script is inlined here, much like cloud-config from chapter 8. Lambda doesn’t support inlining Java, and unlike Python, JavaScript isn’t whitespace-sensitive; this example uses Node.js so you don’t have to worry about newlines or indentation in the Code field.

Listing 9.4. Lambda function
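As a rough sketch of the shape of this function (the etcd key path, the production tag, and the chapter 8 logical names referenced below are assumptions; see the repository for the full listing):

DeployLambda:
  Type: AWS::Lambda::Function
  DependsOn: [ LambdaRole ]
  Properties:
    Handler: index.handler
    Runtime: nodejs4.3            # book-era runtime; use whatever Node.js runtime your account supports
    Timeout: 10
    Role: !GetAtt [ LambdaRole, Arn ]
    VpcConfig:                    # logical names here are assumptions; use your chapter 8 security group and subnets
      SecurityGroupIds: [ !Ref CoreOSSecurityGroup ]
      SubnetIds: [ !Ref SubnetA, !Ref SubnetB ]
    Code:
      ZipFile: !Sub |
        var http = require('http');
        exports.handler = function(event, context) {
          // Only act when the production tag is pushed to Docker Hub
          if (!event.push_data || event.push_data.tag !== 'production') {
            return context.succeed('ignored');
          }
          // PUT a new value to the deploy key via the internal etcd load balancer
          var req = http.request({
            host: '${EtcdLoadBalancer.DNSName}',   // internal etcd ELB from chapter 8 (logical name assumed)
            port: 2379,
            path: '/v2/keys/deploy/web',           // key path is an assumption; it must match your sidekick
            method: 'PUT',
            headers: { 'Content-Type': 'application/x-www-form-urlencoded' }
          }, function(res) {
            res.on('data', function() {});
            res.on('end', function() { context.succeed('deploy triggered'); });
          });
          req.on('error', function(err) { context.fail(err); });
          req.write('value=' + Date.now());
          req.end();
        };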

There’s a little complexity here, but nothing you haven’t seen before if you read the code for the sidekick units in previous chapters. The only difference here is that you’re interacting with etcd without the help of a handy Node.js library (just the built-in http module), and you’re hitting the internal etcd load balancer you created in chapter 8 rather than hitting a node directly. You’re basically keying the deployment on a particular Docker tag. If that tag is pushed to Docker Hub, this code pushes a new value to an etcd key.

Note

If you want to read more about the Lambda options for CloudFormation, you can find the documentation at http://mng.bz/57Br.

Finish by giving API Gateway permission to invoke the Lambda function.

Listing 9.5. Lambda permission
LambdaPermission:
  Type: AWS::Lambda::Permission
  DependsOn: [ DeployLambda ]
  Properties:
    Action: "lambda:InvokeFunction"
    FunctionName: !GetAtt [ DeployLambda, Arn ]
    Principal: apigateway.amazonaws.com

9.1.3. API Gateway

API Gateway lets you trigger Lambda functions and pass any HTTP parameters along with them. You’ll add only one resource with one method, but it requires a bunch of discrete resources to work, so you have to include a good deal of boilerplate configuration to set up this endpoint.

Listing 9.6. Rest API and resource
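A sketch, assuming the logical names DeployApi and DeployResource used throughout these examples. The path part of the child resource is the DeployKey parameter, which is what turns the URL itself into a pseudo API key:

DeployApi:
  Type: AWS::ApiGateway::RestApi
  Properties:
    Name: coreos-deploy-hook
DeployResource:
  Type: AWS::ApiGateway::Resource
  DependsOn: [ DeployApi ]
  Properties:
    RestApiId: !Ref DeployApi
    ParentId: !GetAtt [ DeployApi, RootResourceId ]
    PathPart: !Ref DeployKey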

The base resource is a lot like the base resource in the web load balancer in chapter 8; it doesn’t do much of anything on its own until you attach API Gateway resources, methods, deployments, and stages.

Now, you need to define a POST method for this resource and attach it to the Lambda.

Listing 9.7. POST method
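A sketch of the method, wired to the Lambda with a standard (non-proxy) AWS integration; the invocation URI below is the generic Lambda one, and the resource names again follow the earlier sketches:

DeployMethod:
  Type: AWS::ApiGateway::Method
  DependsOn: [ DeployLambda, DeployResource ]
  Properties:
    RestApiId: !Ref DeployApi
    ResourceId: !Ref DeployResource
    HttpMethod: POST
    AuthorizationType: NONE
    Integration:
      Type: AWS
      IntegrationHttpMethod: POST
      Uri: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${DeployLambda.Arn}/invocations
      IntegrationResponses:
        - StatusCode: 200
    MethodResponses:
      - StatusCode: 200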

Next, you construct a “deployment” for API Gateway.

Listing 9.8. Deployment
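A sketch; the deployment must depend on the method, and the prod stage name matches the URL in the DeployHook output:

DeployDeployment:
  Type: AWS::ApiGateway::Deployment
  DependsOn: [ DeployMethod ]
  Properties:
    RestApiId: !Ref DeployApi
    StageName: prod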

Your API Gateway should be ready to go, and you can move on to updating your stack.

9.1.4. Updating your stack

The command to update your stack is similar to the one you used to create it:
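For example, assuming your stack is named coreos-cluster (substitute your own stack name and your chapter 8 parameters):

$ aws cloudformation update-stack \
    --stack-name coreos-cluster \
    --template-body file://ch9-cfn-cluster.yml \
    --capabilities CAPABILITY_IAM \
    --parameters \
      ParameterKey=DeployKey,ParameterValue=<LONG RANDOM STRING> \
      ParameterKey=<each existing chapter 8 parameter>,UsePreviousValue=true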

Once that’s finished, look at the outputs and take note of the generated API Gateway URL:
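For example (again assuming the stack name):

$ aws cloudformation describe-stacks \
    --stack-name coreos-cluster \
    --query 'Stacks[0].Outputs'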

Test this endpoint:
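One way to simulate Docker Hub’s POST is with curl; the payload below is trimmed to the one field the Lambda sketch inspects:

$ curl -X POST \
    -H 'Content-Type: application/json' \
    -d '{"push_data":{"tag":"production"}}' \
    https://<YOUR API GATEWAY HOST>/prod/<YOUR DEPLOY KEY>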

What you’ve built here is essentially a pathway into your CoreOS etcd cluster. You can follow or extend this pattern to build any kind of administrative tooling to interact with your cluster, effectively giving you the ability to build a custom API for specifically managing your services. You can take this further and build more robust authentication and authorization systems into API Gateway, as well as add more interesting functionality to your Lambdas. For example, you could build a Lambda to fire up more compute workers or run a search on your Couchbase or any other data system.

You can finally move on to the initial deployment of your software and test the deployment trigger. The next section will briefly describe the new web sidekick to orchestrate the deployment, and go over pushing out all of your service files.

9.2. Deploying the app!

You’re ready to get started on the actual deployment of your application. But not so fast: the first thing you have to do is create a new sidekick unit file for the web that can react to etcd events to redeploy your web application. If you apply this pattern to your own applications, you’ll have to make deployment sidekicks for any of them that you want to deploy automatically. Let’s get this out of the way first, and then move on to deploying the application.

9.2.1. Web sidekick

You built a lot of sidekick functionality in chapters 4 and 7. Here you’ll add one more sidekick, attached to the state of the web@ unit template. Like your other sidekicks, it should run on the same machine as the web instance it’s bound to. Call the new sidekick [email protected].

Listing 9.9. code/ch9/webapp/[email protected]
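A sketch of one way to build it: watch the etcd key the Lambda writes to, and restart the bound web instance whenever it changes (the key path is an assumption and must match the Lambda; the repository has the full listing):

[Unit]
Description=Watches etcd and redeploys web@%i when the deploy key changes
After=web@%i.service

[Service]
Restart=always
ExecStart=/usr/bin/etcdctl exec-watch /deploy/web -- /usr/bin/systemctl restart web@%i.service

[X-Fleet]
MachineOf=web@%i.service

Restarting web@%i is enough to redeploy because, as you’ll see next, the web unit pulls the production tag every time it starts.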

Also tweak the [email protected] file a little so that you’re pulling the production tag.

Listing 9.10. code/ch9/webapp/[email protected]
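The relevant change is the image tag on the pull and run lines. Assuming the image from the Docker Hub example later in this chapter (use your own repository, and keep the container name and run flags exactly as they were in your chapter 7 unit):

ExecStartPre=/usr/bin/docker pull mattbailey/ch6-web:production
ExecStart=/usr/bin/docker run --name web-%i <your existing flags> mattbailey/ch6-web:production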

Now you can start up your services in your AWS cluster!

9.2.2. Initial deployment

Make sure your local fleetctl is set up properly to use your AWS cluster:
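One way to do this is to tunnel over SSH through any node in the cluster, using the key pair from chapter 8:

$ ssh-add ~/.ssh/<your key pair>.pem
$ export FLEETCTL_TUNNEL=<PUBLIC IP OF ANY CLUSTER NODE>
$ fleetctl list-machines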

Also, set the etcd key for the workers to fetch some Twitter data:
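The key and value depend on how you configured the workers in chapter 7; as a hypothetical example, if they watch /config/worker/keyword for a search term, you can set it through any node:

$ fleetctl ssh <MACHINE ID> etcdctl set /config/worker/keyword coreos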

Now, change directory into where you have all your service units, and spin them all up:
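For example (the instance counts and the full set of units are whatever you built in chapters 6–9):

$ cd code/ch9/webapp        # or wherever your unit files live
$ fleetctl start web@1.service webapp-deploy@1.service   # repeat for each instance and your other services
$ fleetctl list-units       # wait for everything to reach active/running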

Next, confirm that your application is up and running by hitting the ELB with curl. You can fetch the ELB hostname with the AWS CLI:
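For example, with the classic ELB from chapter 8:

$ aws elb describe-load-balancers \
    --query 'LoadBalancerDescriptions[*].[LoadBalancerName,DNSName]' \
    --output text
$ curl -I http://<YOUR ELB DNS NAME>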

Also, start up the workers from chapter 7 for a bit—but remember to stop them, because they’ll quickly be rate limited:
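Assuming the worker unit template from chapter 7 is named worker@ (adjust the name and instance count to what you used):

$ fleetctl start worker@1.service worker@2.service
$ fleetctl stop worker@1.service worker@2.service    # once they’ve fetched some data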

You can now visit the load balancer in your browser to see the same site you deployed to your dev environment back in chapter 7. If you visit http://<YOUR ELB>:8091/index.html, you should be able to access your Couchbase admin panel.

You should be starting to see the big picture of deploying a complex application out to your infrastructure in AWS by combining the tools and commands you’ve learned in the previous chapters. In the next section, you’ll make a change to your web app and test your automated deployment.

9.3. Automated deployment

This section goes over how to use the Lambda hook you set up earlier. Keep handy the URL that came out of the Outputs request for your stack in section 9.1.4:

DeployHook (“URL to put in Docker Hub web hook”):
  https://<YOUR API GATEWAY HOST>/prod/eivi1leecojai3fephievie1ohsuo6sheenga2chaip8oph5doo5bethohg2uv6i

If you want to follow along with this example, you’ll obviously have to use your own Docker Hub account, along with your own published web app.

9.3.1. Docker Hub setup

Go to https://hub.docker.com, and open the webhook configuration for your repository. For example, mine is at https://hub.docker.com/r/mattbailey/ch6-web/~/settings/webhooks/ (see figure 9.2). Click the + to add a new hook.

Figure 9.2. Add a webhook.

Note

This example uses Docker Hub primarily because it’s an easy pathway to set up and isn’t another service you have to construct for this example. Anything could exist in its place: a CI system, task-execution system, GitHub hook, Slack command, and so on.

Name your webhook, and then paste in the API Gateway URL (see figure 9.3) and click Save.

Figure 9.3. Save the webhook.

You’re ready to make some modifications to your app and automatically deploy it when it’s pushed to Docker Hub. Let’s give it a shot.

9.3.2. Pushing a change

Let’s make a simple style change so it’s obvious what you’ve done. In the index.html file, add the following new line after the <script> tag for socket.io:
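Any obvious visual change will do; a hypothetical line that produces the white-on-black result described below is a simple inline style:

<style>body { background-color: black; color: white; }</style>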

Save the file, and build and run your Docker image. Then, push it to Docker Hub, using the production tag:
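Assuming your repository is <your account>/ch6-web (substitute your own Docker Hub account and image name; optionally run the image locally first with your app’s usual flags to check the change):

$ docker build -t <your account>/ch6-web:production .
$ docker push <your account>/ch6-web:production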

The rest should be automatic. Go back to your website and reload a few times; after maybe a minute, your site should appear with white text on a black background. Congratulations! You’ve set up an automated deployment pipeline.

You can easily integrate this kind of workflow into the continuous-integration or source-control hooks you normally use. For example, you could have CircleCI or Jenkins build the Docker image on a push to a GitHub branch and then push it to Docker Hub to trigger this deploy. Now, instead of manually destroying and re-creating services with fleetctl to deploy a new version, you can keep your hands off your CoreOS cluster after the initial deployment, until you want to remove or add services. This is the point at which your CoreOS system becomes more self-service for developers; you can continue to add automation around CoreOS to remove a lot of the human error involved in running robust services.

The final chapter looks at the long-term maintenance of this deployment, how to tune the infrastructure for scale, and what’s on the horizon for CoreOS.

9.4. Summary

  • Use AWS’s features to automate as much as you can in your CoreOS cluster.
  • Beware of tightly coupling AWS systems: note that you didn’t make your Lambda function directly interact with fleet.
  • Use etcd as a loose-coupling abstraction point: for example, you should be able to trigger any automation with nothing more than a curl call to etcd.
  • Don’t forget to consider security and authorization constraints in your real-world pipelines.
  • Tune your stack outputs. CloudFormation can provide a lot of useful information to help with automation.
  • The final step to close the loop for your implementation is to integrate your CI tools and source-control system.