Appendix G. Serverless Framework and SAM

This appendix covers

  • An overview of the Serverless Framework 1.x
  • An overview of the Serverless Application Model

Automation and continuous delivery are important if you’re building anything on a cloud platform such as AWS. If you take a serverless approach, it becomes even more critical because you end up having more services, more functions, and more things to configure. You need to be able to script your entire application, run tests, and deploy it automatically. The only time you should deploy Lambda functions manually or self-configure API Gateway is while you learn. But once you begin working on real serverless applications, you need to be able to script everything and have a repeatable, automated, and robust way of provisioning your system. In this appendix, we introduce Serverless Framework and the Serverless Application Model (SAM) to help you organize and deploy serverless applications.

Serverless Framework is an all-encompassing tool that can help to define, test, and deploy serverless applications to AWS. It’s supported by a full-time team at Serverless, Inc., and a number of open source contributors from all over the world. It’s a tool that’s used with great success by many companies worldwide to manage their serverless applications.

SAM is an extension to CloudFormation, developed by AWS. It allows users to script their Lambda functions, API Gateway APIs, and DynamoDB tables using a simple syntax and then deploy them using CloudFormation commands and know-how that they already have.

G.1. Serverless Framework

The Serverless Framework (https://serverless.com) is an MIT open source framework that’s actively developed and maintained by a full-time team. At its essence, it allows users to define a serverless application—including Lambda functions and API Gateway APIs—and then deploy it using a command-line interface (CLI). It helps you organize and structure serverless applications, which is of great benefit as you begin to build larger systems, and it’s fully extensible via its plugin system.

G.1.1. Installation

Serverless Framework is a Node.js CLI tool, so the first thing you need to do is to install Node.js on your machine. Refer to appendix B for instructions on installation of Node.js.

Note

Serverless Framework runs on Node.js v4 or higher, so make sure you pick a recent Node.js version.

You can verify that Node.js is installed successfully by running the command node --version in a terminal window. You should see the corresponding Node.js version number printed out. Next, open your terminal and run npm install -g serverless to install Serverless Framework. When the installation process completes, you can verify that Serverless Framework is installed successfully by running the command serverless from your terminal.

Credentials

The Serverless Framework needs access to your AWS account so that it can create and manage resources on your behalf. To let the Serverless Framework access your AWS account, you’re going to create an IAM user with admin access, which can configure the services in your AWS account. This IAM user will have its own set of AWS access keys.

Note

Normally, in a production environment, we’d recommend reducing the permissions of the IAM user that the Framework uses. Unfortunately, the Framework’s functionality is growing so fast that we don’t yet have a definitive list of the permissions it needs. Consider using a separate AWS account in the interim if you can’t get permission to use your organization’s primary AWS accounts.

Follow these steps:

  1. Create or log in to your Amazon Web Services account and go to the Identity & Access Management (IAM) page.
  2. Click Users and then Create New Users; enter a name, like serverless-admin, in the first field to remind you that this user belongs to the Framework.
  3. Select Programmatic Access and click Next: Permissions.
  4. Select Attach Existing Policies Directly and search for “AdministratorAccess.” Select the administrator access policy and click Next: Review.
  5. Click Create User.
  6. On the next page, you’ll see the access key ID and the secret access key. Save them to a temporary file. You can also download a CSV file with the keys. Click Close when finished.

You can configure the Serverless Framework to use your AWS API key and secret key using this command from a terminal:

serverless config credentials --provider aws --key [ACCESS_KEY] --secret [SECRET_KEY]
AWS credentials

Running serverless config credentials stores the credentials under a default AWS profile at the following location on your computer: ~/.aws/credentials. If you followed our previous chapters, you might already have keys for the lambda-upload user in the credentials file. Running the previous command will overwrite your existing keys.

There are two ways you can deal with this: instead of overwriting your lambda-upload keys, you can either add AdministratorAccess permissions to the lambda-upload user or add multiple credentials to ~/.aws/credentials, like so:

[default]
aws_access_key_id=[ACCESS_KEY]
aws_secret_access_key=[SECRET_KEY]

[serverless]
aws_access_key_id=[ACCESS_KEY]
aws_secret_access_key=[SECRET_KEY]

Then add a profile setting to your provider configuration in serverless.yml:

service: new-service
provider:
  name: aws
  runtime: nodejs4.3
  profile: serverless
Services

A service is the Framework’s unit of organization. You can think of it as a project (though you can have multiple services for a single project or application). A service is where you define your functions, the events that trigger them, and the resources your functions use, all in a single file called serverless.yml, as shown in the following listing.

Listing G.1. Service—serverless.yml
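The body of listing G.1 isn’t reproduced in this extract. As a rough sketch of what a service definition looks like (the service name, function name, handler, and event are illustrative, not taken from the original listing):

```yaml
service: users                  # the service (project) name — illustrative

provider:
  name: aws
  runtime: nodejs4.3

functions:                      # your Lambda functions
  usersCreate:
    handler: handler.create     # exported from handler.js
    events:                     # events that trigger this function
      - http:
          path: users/create
          method: post
```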

The point of a service is to keep your functions and all of their dependencies together in one unit. When you deploy with the Framework by running serverless deploy, everything in serverless.yml is deployed at once.

Plugins

You can overwrite or extend the functionality of the Framework using plugins. Every serverless.yml can contain a plugins: property, which features the plugins the service uses (see the following listing).

Listing G.2. Plugins—serverless.yml
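The body of listing G.2 is missing from this extract. A hedged sketch of a plugins: property follows; the plugin names are placeholders:

```yaml
plugins:
  - serverless-plugin-one       # placeholder plugin names
  - serverless-plugin-two
```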

G.1.2. Beginning Serverless Framework

As we’ve mentioned, in the Serverless Framework, a service is like a project. It’s where you define your AWS Lambda functions, the events that trigger them, and any AWS infrastructure resources they require.

Organization

In the beginning of an application, many people use a single service to define all of the functions, events, and resources for that project, as shown in the next listing.

Listing G.3. Your application
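The listing body is missing here; the single-service layout it describes would look roughly like this (names are illustrative):

```
myApp/
  serverless.yml    # all functions, events, and resources for the project
  handler.js
```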

But as your application grows, you can break it out into multiple services. Some people organize their services by workflows or data models and group the functions related to those workflows and data models together in the service, as shown here.

Listing G.4. Your application
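Again the listing body is missing; a multi-service layout grouped by data model might look roughly like this (the workflow names are illustrative):

```
myApp/
  users/
    serverless.yml  # functions, events, and resources for the users workflow
    handler.js
  posts/
    serverless.yml  # functions, events, and resources for the posts workflow
    handler.js
```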

This makes sense because related functions usually use common infrastructure resources, and you want to keep those functions and resources together as a single unit of deployment for better organization and separation of concerns.

Creation

To create a service, use the create command. You must pass in a template that determines the runtime (for example, Node.js or Python) you want to write the service in. You can also pass in a path to create a directory and autoname your service:

serverless create --template aws-nodejs --path myService

The following runtimes are available in the Serverless Framework for AWS Lambda:

  • aws-nodejs
  • aws-python
  • aws-java-gradle
  • aws-java-maven
  • aws-scala-sbt
Getting help

You can run serverless to see a list of available commands and then run the command serverless <command-name> --help to get more information about each command. Considerable information about the Framework is also available online at https://serverless.com/framework/docs/.

Scaffolding

You’ll see the following files in your working directory:

  • serverless.yml
  • handler.js

Each service configuration is managed in the serverless.yml file. The main responsibilities of this file are as follows:

  • Declare a serverless service
  • Define one or multiple functions in the service
  • Define the provider the service will be deployed to (and the runtime, if provided)
  • Define custom plugins to be used
  • Define events that trigger each function to execute (for example, HTTP requests)
  • Define a set of resources (deployed as part of an AWS CloudFormation stack) required by the functions in this service
  • Allow events listed in the events section to automatically create the resource required for the event upon deployment
  • Allow flexible configuration using Serverless variables

You can see the name of the service, the provider configuration, and the first function inside the functions definition, which points to the handler.js file. Any further service configuration will be done in this file, as shown in the following listing.

Listing G.5. A more complete serverless.yml example
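The listing body isn’t reproduced in this extract. A sketch of a fuller serverless.yml, under the assumption that it combines the elements listed above (all names are illustrative), might look like this:

```yaml
service: users

provider:
  name: aws
  runtime: nodejs4.3
  stage: dev
  region: us-east-1

plugins:
  - serverless-plugin-one       # placeholder plugin name

functions:
  hello:
    handler: handler.hello      # exported from handler.js
    events:
      - http:
          path: hello
          method: get

resources:
  Resources: {}                 # raw CloudFormation resources can go here
```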

Every serverless.yml translates to a single AWS CloudFormation template, and a CloudFormation stack is created from that resulting CloudFormation template. The handler.js file contains your function code. The function definition in serverless.yml will point to this handler.js file and the function will be exported here.

Local and remote development

The Serverless Framework offers a command to invoke your AWS Lambda functions remotely on AWS after they’ve been uploaded. Additionally, the Framework allows you to run your AWS Lambda functions locally via a powerful emulator, so you don’t have to re-upload your functions every time you want to run your code. You can do this by running a few commands.

This command runs your functions locally:

serverless invoke local --function myFunction

This command runs your functions remotely:

serverless invoke --function myFunction

You can pass data into both commands via the following options:

--path lib/data.json
--data "hello world"
--data '{"a":"bar"}'

You can also pass data in from standard input:

node dataGenerator.js | serverless invoke local --function functionName

G.1.3. Using the Serverless Framework

The Serverless Framework was designed to provision your AWS Lambda functions, events, and infrastructure resources safely and quickly. It does this via a couple of methods designed for different types of deployments.

Deploy all

The following command is the main way of doing deployments with the Serverless Framework:

serverless deploy

Use this command when you’ve updated your function, event, or resource configuration in serverless.yml and you want to deploy that change (or multiple changes at the same time) to Amazon Web Services. The Serverless Framework translates all syntax in serverless.yml to a single AWS CloudFormation template. By depending on CloudFormation for deployments, users of the Serverless Framework get the safety and reliability of CloudFormation. At a high level, these steps take place when the serverless deploy command is run:

  1. An AWS CloudFormation template is created from your serverless.yml.
  2. If a stack has not yet been created, it’s created with no resource except an S3 bucket, which will store zip files of your function code.
  3. The code of your functions is then packaged into zip files.
  4. Zip files of your functions’ code are uploaded to your code S3 bucket.
  5. Any IAM roles, functions, events, and resources are added to the AWS CloudFormation template.
  6. The CloudFormation stack is updated with the new CloudFormation template.

Use serverless deploy in your CI/CD systems because it’s the safest method of deployment. You can print the progress during the deployment if you use verbose mode, as follows:

serverless deploy --verbose

This method defaults to the dev stage and the us-east-1 region. But you can change the default stage and region in your serverless.yml file by setting the stage and region properties inside a provider object, as the following example shows.

Listing G.6. Regions and stages
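The listing body is missing from this extract; given the surrounding text, the provider configuration it shows would be along these lines:

```yaml
provider:
  name: aws
  runtime: nodejs4.3
  stage: production      # overrides the default dev stage
  region: eu-central-1   # overrides the default us-east-1 region
```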

You can also deploy to different stages and regions by passing in flags, as shown in the following command:

serverless deploy --stage production --region eu-central-1
Deploy function

The serverless deploy function method doesn’t touch your AWS CloudFormation stack. Instead, it overwrites the zip file of the current function on AWS. This method is much faster than running vanilla serverless deploy, because it doesn’t rely on CloudFormation:

serverless deploy function --function myFunction

The Framework packages the targeted AWS Lambda function into a zip file. That zip file is uploaded to your S3 bucket using the same name as the previous function, which the CloudFormation stack is pointing to. Use this when you’re developing and want to test on AWS, because it’s much faster. During development, people often run this command several times, as opposed to serverless deploy, which is run only when larger infrastructure provisioning is required.

G.1.4. Packaging

Sometimes you might like to have more control over your function artifacts and how they’re packaged. You can use the package and exclude configuration for this.

Exclude/include

Exclude allows you to define globs that will be excluded from the resulting artifact. If you want to include files, you can use a glob pattern prefixed with !, such as !reinclude-me/**. Serverless will run the glob patterns in order. For example, the next listing shows how to exclude all node_modules but then re-include a specific module (in this case, node-fetch).

Listing G.7. Exclude
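The listing body isn’t reproduced here. A sketch of the pattern the text describes, excluding node_modules and then re-including node-fetch, might look like this:

```yaml
package:
  exclude:
    - node_modules/**                 # exclude all modules...
    - '!node_modules/node-fetch/**'   # ...then re-include node-fetch
```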

Artifact

For complete control over the packaging process, you can specify your own zip file for your service. Serverless won’t zip your service if it’s configured, so include and exclude will be ignored. An example of this is shown in the next listing.

Listing G.8. Artifact
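The listing body is missing; a minimal artifact configuration, with an illustrative path, would look roughly like this:

```yaml
service: my-service
package:
  artifact: path/to/my-artifact.zip   # Serverless skips zipping entirely
```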

Packaging functions separately

If you want even more control over your functions during deployment, you can configure them to be packaged independently. This allows you to optimize the way they’re deployed. To enable individual packaging, set individually to true in the service-wide packaging settings. Then, for every function, you can use the same include/exclude/artifact config options as you can service-wide. The include/exclude options will be merged with the service-wide options to create one include/exclude config per function during packaging (see the following listing).

Listing G.9. Packaging functions separately
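The listing body is missing from this extract. A sketch of individual packaging, assuming illustrative function and file names, might look like this:

```yaml
service: my-service
package:
  individually: true      # package every function separately
  exclude:
    - '**'                # service-wide: exclude everything by default
functions:
  hello:
    handler: handler.hello
    package:
      include:
        - handler.js      # merged with the service-wide exclude above
```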

G.1.5. Testing

Testing a serverless architecture can be challenging for several reasons, including the following:

  • Your architecture is highly dependent on multiple third-party services, which require their own tests.
  • Those third-party services are cloud-based services and are inherently tricky to test locally.
  • Asynchronous, event-driven workflows are especially complicated to emulate and test.

Because of these issues, we suggest the following testing strategy:

  • Write your business logic in a way that separates it from AWS Lambda’s API.
  • Write unit tests to verify that the business logic is working well.
  • Write integration tests to verify integrations with other services (for example, AWS services) are working correctly.
Example

Let’s take a simple Node.js function as an example. The responsibility of this function is to save a user into a database and send a welcome email. See the following listing for the implementation.

Listing G.10. The mailer function

There are two main problems with this function:

  • The business logic isn’t separated from the third-party services it uses, making it hard to test. An example of this is that the business logic is dependent on how AWS Lambda passes in data (the event object).
  • Testing this function requires running a DB instance and mail server.

First, the business logic should be separated. A side benefit of this is that it will matter less whether the logic is running in AWS Lambda, Google Cloud Functions, or a traditional HTTP server. You’ll separate the business logic first, as shown in the next listing.

Listing G.11. Mailer function business logic
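The listing body isn’t reproduced here. A sketch of the extracted Users class, with db and mailer injected so that mocks can be passed in during tests:

```javascript
// Business logic extracted into a plain class; db and mailer are
// injected, so unit tests can pass in mocks instead of real services.
class Users {
  constructor(db, mailer) {
    this.db = db;
    this.mailer = mailer;
  }

  save(email, callback) {
    const user = { email: email, created_at: Date.now() };
    this.db.saveUser(user, (err) => {
      if (err) return callback(err);
      this.mailer.sendWelcomeEmail(email, callback);
    });
  }
}

module.exports = Users;
```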

The Users class is separate and more easily testable, and it doesn’t require running any of the external services. Instead of real DB and mailer objects, you can pass mocks and assert if saveUser and sendWelcomeEmail have been called with proper arguments. You should have as many unit tests as possible and run them on every code change. Of course, passing unit tests doesn’t mean your function is working as expected. That’s why you also need integration tests. After extracting all of the business logic to a separate module, all that’s left is a simple handler function, as shown in the next listing.

Listing G.12. Mailer handler function

The code in listing G.12 is responsible for setting up dependencies, injecting them, and calling business logic functions. This code will be changed less often. To make sure the function is working as expected, integration tests should be run against the deployed function. They should invoke the function (serverless invoke) with the fixture email address, check if the user is actually saved to the DB, and check if email was received.

G.1.6. Plugins

A plugin is custom JavaScript code that creates new, or extends existing, commands within the Serverless Framework. The Serverless Framework’s architecture is merely a group of plugins that are provided in the core. If you (or your organization) have a specific workflow, you can install a prewritten plugin or write a plugin to customize the Framework to your needs. External plugins are written exactly the same way as the core plugins.

Installing plugins

External plugins are added on a per-service basis and are not applied globally. Make sure you’re in your service’s root directory; then install the corresponding plugin with the help of npm by running the following command:

npm install --save custom-serverless-plugin

You need to tell Serverless that you want to use the plugin inside your service. You do this by adding the name of the plugin to the plugins section in the serverless.yml file, as shown in the following listing. The custom section in the serverless.yml file is the place where you can add necessary configurations for your plugins (the plugin’s author or documentation will tell you if you need to add anything there).

Listing G.13. Adding plugins
plugins:
  - custom-serverless-plugin

custom:
  customkey: customvalue
Load order

Keep in mind that the order in which you define your plugins matters. Serverless first loads all the core plugins and then the custom plugins in the order in which you’ve defined them, as shown in the following listing.

Listing G.14. Load order
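The listing body is missing here; a sketch of the load order the text describes, with placeholder plugin names:

```yaml
plugins:
  - serverless-plugin-one   # loaded first (after all core plugins)
  - serverless-plugin-two   # loaded second
```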

Writing plugins

These are the three concepts you need to know when authoring plugins:

  • Command— CLI configuration, commands, subcommands, options
  • LifecycleEvent— An event that happens sequentially when the command is run
  • Hook— Code that runs when a LifecycleEvent takes place during a command

A command can be called by a user (for example, serverless deploy); it has no logic, but simply defines the CLI configuration (for example, command, subcommands, and parameters) and the lifecycle events for the command. Every command defines its own lifecycle events, as shown in the next listing.

Listing G.15. Creating a serverless plugin
'use strict';

class MyPlugin {
  constructor() {
    this.commands = {
      deploy: {
        lifecycleEvents: [
          'resource',
          'functions'
        ]
      },
    };
  }
}

module.exports = MyPlugin;

Listing G.15 defines two lifecycle events. But for each event, additional before and after events are created. Therefore, the following six lifecycle events exist in that example:

  • before:deploy:resource
  • deploy:resource
  • after:deploy:resource
  • before:deploy:functions
  • deploy:functions
  • after:deploy:functions

The name of the command in front of a lifecycle event is used for hooks. A hook binds code to any lifecycle event from any command, as the next listing shows.

Listing G.16. Hooks in a serverless plugin
'use strict';

class Deploy {
  constructor() {
    this.commands = {
      deploy: {
        lifecycleEvents: [
          'resource',
          'functions'
        ]
      },
    };
    this.hooks = {
      'before:deploy:resource': this.beforeDeployResources,
      'deploy:resource': this.deployResources,
      'after:deploy:functions': this.afterDeployFunctions
    };
  }

  beforeDeployResources() {
    console.log('Before Deploy Resource');
  }

  deployResources() {
    console.log('Deploy Resource');
  }

  afterDeployFunctions() {
    console.log('After Deploy functions');
  }
}

module.exports = Deploy;

Each command can have multiple options. Options are passed in with a double dash (--) like this:

serverless deploy function --function functionName

Option shortcuts are passed in with a single dash (-) like this:

serverless deploy function -f functionName

The options object will be passed in as the second parameter to the constructor of your plugin. In it, you can optionally add a shortcut property, as well as a required property. The Framework will return an error if a required option is not included, as shown in the following listing.

Listing G.17. Options in a plugin
'use strict';

class Deploy {
  constructor(serverless, options) {
    this.serverless = serverless;
    this.options = options;
    this.commands = {
      deploy: {
        lifecycleEvents: [
          'functions'
        ],
        options: {
          function: {
            usage: 'Specify the function you want to deploy '
              + '(for example, "--function myFunction")',
            shortcut: 'f',
            required: true
          }
        }
      },
    };
    this.hooks = {
      'deploy:functions': this.deployFunction.bind(this)
    };
  }

  deployFunction() {
    console.log('Deploying function: ', this.options.function);
  }
}

module.exports = Deploy;

The serverless instance that enables access to global service config during runtime is passed in as the first parameter to the plugin constructor, shown in the next listing.

Listing G.18. Accessing the global service config
'use strict';

class MyPlugin {
  constructor(serverless, options) {
    this.serverless = serverless;
    this.options = options;
    this.commands = {
      log: {
        lifecycleEvents: [
          'serverless'
        ],
      },
    };
    this.hooks = {
      'log:serverless': this.logServerless.bind(this)
    };
  }

  logServerless() {
    console.log('Serverless instance: ', this.serverless);
  }
}

module.exports = MyPlugin;

Command names need to be unique. If you load two commands and both want to specify the same command (for example, you have an integrated command deploy and an external command also wants to use deploy), the Serverless CLI will print an error and exit. If you want to have your own deploy command, you need to name it something different, like myCompanyDeploy, so it doesn’t clash with existing plugins.

G.1.7. Examples

Here are a few examples with the Serverless Framework that you can try for yourself.

REST API

In this example, you’re going to create a simple REST API with a single HTTP endpoint using the Serverless Framework. The following serverless.yml (listing G.19) will deploy a single AWS Lambda function, create an AWS API Gateway REST API with an HTTP endpoint, and then connect the two. Listing G.20 shows the implementation of the Lambda function. You can deploy this easily with the serverless deploy command.

Listing G.19. Simple REST API—serverless.yml
service: serverless-simple-http-endpoint
provider:
  name: aws
  runtime: nodejs4.3
functions:
  currentTime:
    handler: handler.endpoint
    events:
      - http:
          path: ping
          method: get
Listing G.20. Simple REST API—handler.js
'use strict';

module.exports.endpoint = (event, context, callback) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: `Hello, the current time is ${new Date().toTimeString()}.`
    }),
  };
  callback(null, response);
};
IoT event

This example demonstrates how to set up an IoT rule on the AWS IoT platform to send events to a Lambda function. You can use this to react to any IoT events with an AWS Lambda function. You can deploy this easily with the serverless deploy command. The following listing shows the implementation for serverless.yml, and listing G.22 shows the Lambda function.

Listing G.21. IoT Event—serverless.yml
service: aws-node-iot-event
provider:
  name: aws
  runtime: nodejs4.3
functions:
  log:
    handler: handler.log
    events:
      - iot:
          sql: "SELECT * FROM 'mybutton'"
Listing G.22. IoT event—handler.js
module.exports.log = (event, context, callback) => {
  console.log(event);
  callback(null, {});
};
Scheduled

Listing G.23 is an example of an AWS Lambda function that runs on a schedule like a cron job. You can deploy this easily with the serverless deploy command. This listing shows the implementation of the serverless.yml file, whereas listing G.24 shows the implementation of the function.

Listing G.23. Scheduled—serverless.yml
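The listing body is missing from this extract; a sketch of a scheduled-function configuration (service name and rate are illustrative) would look like this:

```yaml
service: scheduled-cron-example
provider:
  name: aws
  runtime: nodejs4.3
functions:
  cron:
    handler: handler.run
    events:
      - schedule: rate(1 minute)   # cron(...) expressions also work
```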

Listing G.24. Scheduled—handler.js
module.exports.run = (event, context) => {
  const time = new Date();
  console.log(`Your cron function "${context.functionName}" ran at ${time}`);
};
Amazon Alexa skill

The following example demonstrates how to create your own Alexa skill using AWS Lambda. First, you need to register your skill in the Amazon Alexa Developer Portal (https://developer.amazon.com/edw/home.html). To do this, you need to define the available intents and then connect them to a Lambda function (https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/getting-started-guide). You can define and update this Lambda function with Serverless and deploy with the serverless deploy command. The following listing shows the implementation of serverless.yml, and listing G.26 shows the implementation of the function written in Python.

Listing G.25. Alexa skill—serverless.yml
service: aws-python-alexa-skill
provider:
  name: aws
  runtime: python2.7
functions:
  luckyNumber:
    handler: handler.lucky_number
    events:
      - alexaSkill
Listing G.26. Alexa skill—handler.py
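The listing body is missing from this extract. The following is a sketch of what a lucky_number handler might look like; the response shape follows the Alexa skill response format, and the speech text is illustrative:

```python
import random


def lucky_number(event, context):
    # A real skill would inspect event['request']['intent'] and its
    # slots; this sketch just returns a random number as speech.
    number = random.randint(1, 100)
    return {
        'version': '1.0',
        'response': {
            'outputSpeech': {
                'type': 'PlainText',
                'text': 'Your lucky number is {}'.format(number)
            },
            'shouldEndSession': True
        }
    }
```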

G.2. Serverless Application Model

AWS CloudFormation (https://aws.amazon.com/cloudformation) is an AWS service that allows you to create and provision AWS resources and services like EC2, S3, DynamoDB, and Lambda. You define resources in a text file called a template, and CloudFormation creates and deploys them for you. CloudFormation helps you deal with dependencies and the order in which resources are provisioned. It’s a core tool for automation of infrastructure within AWS and something serious solution architects and infrastructure gurus can’t do without. And, frankly, without scripting and automating infrastructure, you’re not using AWS to its full potential anyway. CloudFormation, or its third-party alternative, Terraform (https://www.terraform.io), is something you should know.

It turns out, however, that defining serverless applications made with Lambda, API Gateway, and DynamoDB can be complex and time-consuming if you do it directly in CloudFormation. It’s understandable, too: CloudFormation is older than Lambda and API Gateway and wasn’t designed and optimized for serverless applications. Thankfully, the team responsible for Lambda and API Gateway saw this and came up with the Serverless Application Model (SAM).

SAM (https://aws.amazon.com/about-aws/whats-new/2016/11/introducing-the-aws-serverless-application-model/) allows you to use a simpler syntax to define serverless applications. CloudFormation can process a SAM template and transform it to standard CloudFormation syntax (something the Serverless Framework does too). It’s amazing to see how elegant and succinct SAM is, compared with regular CloudFormation templates (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/transform-section-structure.html). We encourage you to take a good look at SAM if you’re going to automate your infrastructure and use CloudFormation. Use the simpler model and your future self will thank you for it.

G.2.1. Getting started

To begin writing a SAM template, create a new JSON or YAML CloudFormation template with an AWSTemplateFormatVersion at the top. Next, you need to include a transform statement at the root of the template (under the template format version). The transform tells CloudFormation which version of SAM is used and how to process the template. The transform section for YAML templates must be Transform: AWS::Serverless-2016-10-31, and for JSON it must be "Transform" : "AWS::Serverless-2016-10-31".

If a transform isn’t specified, CloudFormation won’t know how to process SAM (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/transform-section-structure.html). The current SAM specification (https://github.com/awslabs/serverless-application-model) defines three overarching resource types that can be used within a SAM template:

  • AWS::Serverless::Function (Lambda function)
  • AWS::Serverless::Api (API Gateway)
  • AWS::Serverless::SimpleTable (DynamoDB table)

The specification also defines a number of event source types for Lambda including S3, SNS, Kinesis, DynamoDB, API Gateway, a CloudWatch event, and more. And it allows you to specify additional properties such as environment variables for a function. Let’s get into a quick example right now to see how SAM and CloudFormation can help you script and deploy Lambda functions. It’s important to note that SAM may have limitations at this stage. At the time of writing, for example, an existing S3 bucket couldn’t be specified as an event source. The bucket would have to be created in the template to be used as an event source for Lambda. By the time you read this book, SAM will have been improved, so take a look at https://github.com/awslabs/serverless-application-model before you begin.

G.2.2. Example with SAM

To work through this exercise, you must have the AWS CLI installed on your computer. If you don’t have it installed, refer to appendix B for more details. You’ll be invoking CLI commands, so your IAM user (it’s lambda-upload if you’ve been following the 24-Hour Video application) needs the right permissions for CloudFormation. The user must have permissions to interact with CloudFormation and S3 for artifact uploads and additional permissions to do what CloudFormation is trying to accomplish. The setup of these permissions is outside the scope of this appendix, but we encourage you to go to https://aws.amazon.com/cloudformation/aws-cloudformation-articles-and-tutorials/ for tutorials and examples. If you just want to experiment and learn, you can give lambda-upload full administrator rights (you might have done it while going through the previous section already), but don’t forget to revoke them as soon as you’ve finished.

Assuming you have the CLI installed and lambda-upload has the right permissions, in a new directory create a file called index.js. This will be the Lambda function you’re going to deploy using SAM. Copy the following code to this file. The Lambda function itself is trivial. It retrieves an environment variable called HELLO_SAM and then uses it as a parameter to the callback function.

Listing G.27. Basic Lambda function

In the same folder as index.js, create a new file called sam_template.yaml and copy the next listing to it.

Listing G.28. SAM template
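The listing body isn’t reproduced here. Given the surrounding text (a function.zip artifact and a HELLO_SAM environment variable), a sketch of the SAM template might look like this; the logical resource name and variable value are illustrative:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloSamFunction:              # logical name — illustrative
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs4.3
      CodeUri: ./function.zip    # the zip you create in the next step
      Environment:
        Variables:
          HELLO_SAM: Hello from SAM!
```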

Open the directory that contains your Lambda function and zip index.js into an archive called function.zip. Make sure you call it function.zip because this is what your SAM template specifies. You also need to create an S3 bucket that will contain artifacts like the Lambda function, which CloudFormation will deploy. Jump into the S3 console and create a new bucket in N. Virginia (us-east-1). Call this bucket something akin to serverless-artifacts (your bucket name will have to be unique). Jump back into the terminal and run the command given in the next listing.

Listing G.29. CloudFormation package
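The listing body is missing from this extract; the package command it describes would look roughly like the following (the bucket name matches the one created above; yours will differ):

```
aws cloudformation package \
    --template-file sam_template.yaml \
    --output-template-file output_template.yaml \
    --s3-bucket serverless-artifacts
```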

The CloudFormation package command carries out two important actions. It uploads your zip file with the Lambda function to S3 and creates a new template that points to the uploaded file. Now you can execute the CloudFormation deploy command to create your Lambda function. Here’s the command you need to run from the terminal.

Listing G.30. CloudFormation deploy
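The listing body is missing here; a sketch of the deploy command follows. The stack name is illustrative, and CAPABILITY_IAM is required because the template creates an IAM role for the function:

```
aws cloudformation deploy \
    --template-file output_template.yaml \
    --stack-name hello-sam-stack \
    --capabilities CAPABILITY_IAM
```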

If everything goes well, you should see a message in the terminal window that your stack was successfully created/updated. You can jump into the Lambda console and take a look at your new function. Don’t forget to check that the environment variable was created too. If you want to learn more about SAM, check out https://aws.amazon.com/blogs/compute/introducing-simplified-serverless-application-deployment-and-management/ and https://docs.aws.amazon.com/lambda/latest/dg/serverless-deploy-wt.html for further information and examples.

G.3. Summary

Serverless Framework and SAM are tools you can use to organize and deploy your serverless applications. At this stage, Serverless Framework is a more fully featured system with many useful plugins and a strong community. If you choose it, you won’t go wrong. But that doesn’t mean that you shouldn’t keep an eye on SAM. The mere fact that it is supported by AWS means a lot, so watch it as it grows and matures.

The one thing you might have noticed is that we haven’t addressed non-AWS services. Supporting hybrid environments is difficult, and neither Serverless Framework nor SAM will be of much help (although Serverless Framework is moving quickly to support multiple vendors and compute offerings such as Azure Functions, OpenWhisk, and Google Cloud Functions). For now, however, you’ll either have to stay entirely within AWS or do extra work (which may involve additional scripting) if you wish to support non-AWS services.
