Deploying APIs

Once we've written an API implementation and declared the infrastructure that should be created, how do we deploy it? When deploying things, we need options to support a software development lifecycle, so that we can safely promote artifacts through the various environments using an appropriate release method. We'll be covering Continuous Integration/Continuous Delivery pipelines in a later chapter. For this section, however, we need to understand the native capabilities and features of Amazon API Gateway when it comes to deployment.

The way that API Gateway separates deployments of an API is by using stages. To be clear, we're talking about separating deployments of a single API here, not creating entirely new instances. Stages can be used to move changes through an API's lifecycle or to enable different settings (such as caching and throttling) for different deployments. You can use as many or as few stages as you like (10 is the default soft limit) and use them for whatever you need. For example, you could use them to implement versioning for your API. The syntax looks like this:

https://<domain-name>/<stage-name>/<resource-path>

For example, versioning could be implemented as https://api.mycompany.com/v1/widgets and https://api.mycompany.com/v2/widgets.
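If you wanted a stage per version, each one is created with the create-stage command and pointed at an existing deployment. Here's a minimal sketch, assuming a deployment has already been made (the placeholder values are illustrative):

aws apigateway create-stage \
--rest-api-id <api-id> \
--stage-name v1 \
--deployment-id <deployment-id>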

Another feature that comes with using stages is stage variables. These work sort of like environment variables for your APIs. You can store things for configuration or extra context and then reference them using the curly-brace syntax, for example, ${stageVariables.myVar}. You might use a stage variable to abstract an element that changes across lifecycle environments, which allows you to reuse code and inject environment-specific values at deploy time.
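As a sketch of how that might look in practice, the following command sets a hypothetical backendUrl variable on a dev stage using update-stage (the variable name and value are purely illustrative):

aws apigateway update-stage \
--rest-api-id <api-id> \
--stage-name dev \
--patch-operations op=replace,path=/variables/backendUrl,value=dev-backend.example.internal

An HTTP integration could then reference ${stageVariables.backendUrl} in its endpoint URL, so each stage points at its own backend.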

Notice here that I haven't mentioned an example of how you could use stages as lifecycle environments—that is, dev, uat, or prod. You might be tempted to do this, but a better idea is to have separate instances of the API for each environment. Often, in larger deployments, these instances might even be in different accounts. Separating environments into their own instances allows you to apply different security guard rails, administrator access controls, and configuration elements such as log levels.

Okay, we've already run through what the console looks like for building APIs, so let's fire up our command line and run through a deployment using the CLI. This command assumes the resources, methods, integrations, and stage have been created:

aws apigateway create-deployment \
--rest-api-id <api-id> \
--stage-name <stage-name> \
--description 'My first deployment'

As far as deployment commands go, this is fairly straightforward. This will deploy any changes you have made to the specified stage. 
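To confirm that the deployment was registered, you can list the API's deployment history, which is also where a meaningful --description value pays off:

aws apigateway get-deployments \
--rest-api-id <api-id>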

Releasing code to production can often be a risky endeavor. If it goes wrong, then real users can be negatively affected, which could impact customer churn and revenue. Thankfully, with the create-deployment command, the deployment is pushed without downtime and existing requests won't be disconnected. But there's always a possibility that your backend code implementation has a problem or hits an edge case that wasn't expected or tested. Surely, there's a less risky method of releasing code? Well, there is. It's called a canary release: a method of rolling out the deployment slowly so that it's available to a handful of users first. Vital service metrics can then be monitored using your tooling, and you can decide either to roll back the deployment if error rates rapidly increase, or to roll the change out to the rest of the user base once you're confident in it.

So, how does this work with API Gateway? When you have deployed an initial release to a production stage and you want to make a change with a canary release, you need to create the canary release in the stage configuration. This will give you options for the percentage of traffic that is split across the base release and the new release:

Figure: The API Gateway console showing the options for creating a canary release for the dev stage of an API
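You can also create the canary straight from the CLI at deployment time. As a sketch (the traffic percentage here is arbitrary), create-deployment accepts a --canary-settings option that routes a share of traffic to the new deployment:

aws apigateway create-deployment \
--rest-api-id <api-id> \
--stage-name <stage-name> \
--canary-settings percentTraffic=10.0 \
--description 'My first canary deployment'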

API traffic is split at random to maintain the configured ratio. Once the canary is deployed, you can adjust its settings in the same place in the console or via the CLI. Let's see an example of updating the traffic percentage to 25% using the CLI:

aws apigateway update-stage \
--rest-api-id <api-id> \
--stage-name <stage-name> \
--patch-operations op=replace,path=/canarySettings/percentTraffic,value=25.0

There is one more step to finally promote the canary release so that it becomes the new production base release; the first sketch below shows what that looks like, and the AWS documentation has further examples. Once a deployment has been completed, there is a cool feature that helps you to kickstart the consumption of your API: generating an SDK for it, which you might want to publish alongside your API specification on your developer portal. This can be done through the console or the CLI, and API Gateway will generate SDKs in a supported language: Java, JavaScript, Java for Android, Objective-C or Swift for iOS, and Ruby. The second sketch below shows the CLI route.
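First, promoting the canary. This is a sketch based on the patch operations described in the AWS documentation: the stage's base deploymentId is replaced with the canary's deployment ID, and the canary settings are then removed:

aws apigateway update-stage \
--rest-api-id <api-id> \
--stage-name <stage-name> \
--patch-operations op=replace,path=/deploymentId,value=<canary-deployment-id> op=remove,path=/canarySettings

Second, generating an SDK. The following sketch fetches a JavaScript SDK for a stage and writes it to a local ZIP file (the filename is arbitrary):

aws apigateway get-sdk \
--rest-api-id <api-id> \
--stage-name <stage-name> \
--sdk-type javascript \
sdk.zip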

We've covered some of the details of deploying APIs to API Gateway. Later on, we're going to cover an application framework that combines building, testing, and deployment to create a really useful toolset. We'll also cover in depth how this framework can be integrated into a CI/CD pipeline in Chapter 8, CI/CD with Serverless Framework. Stay tuned for that!

Once we have deployed our API, we then have to switch our focus to managing the API while it is in use. The next section will teach us about the benefits of throttling and managing the number of requests to our service.
