© Miguel A. Calles 2020
M. A. Calles, Serverless Security, https://doi.org/10.1007/978-1-4842-6100-2_12

12. Additional Considerations

Miguel A. Calles
La Habra, CA, USA

In this chapter, we will review additional topics to consider in our projects. They are based on situations from projects using the Serverless Framework and on cybersecurity concepts. The topics are in no particular order and were reserved for this penultimate1 chapter so we could share additional thoughts without disrupting the main messages of the previous chapters.

Balancing Security and Other Requirements

While working on our projects, we might find contention among security engineers, software developers, and end users. Developers and end users might perceive security requirements and processes as overbearing and burdensome. Security engineers might create overly burdensome and complex processes to reduce risk and address the concern that no one can be trusted. If the burden grows too great, developers and users will eventually find ways to circumvent security policies. Rather than implement still more security policies and processes, we should consider a balanced approach.

As security engineers, our goal is to protect our application and business from security threats. We should avoid implementing any security measure that would cost the company more than the attack or breach it prevents. For example, we should not spend $1 million to protect a $5 asset. Conversely, we would gladly invest $1 million to address a security risk that would cost the business $1 billion if it materialized. We should also avoid password requirements that users perceive as too difficult. For example, we should not require passwords with a minimum of 64 mixed characters when requiring a minimum of 12 mixed characters with multi-factor authentication enabled would achieve similar results. Alternatively, we might consider adding passwordless authentication2 to an application that stores no sensitive data (e.g., payment information, social security numbers, mailing addresses, etc.). Sometimes more security does not mean better security, and simpler processes can provide greater protection.

Continuous Integration/Continuous Delivery

We should consider having a CI/CD pipeline to perform automated checks that someone might forget to run manually. As helpful as checklists and manual inspections are for remembering the steps to take, individuals are still prone to forgetting to use them. A CI/CD pipeline allows us to evaluate our source code and deploy it only after all automated checks have completed. We can require that the automated checks pass before the source code is accepted, merged, and deployed. These checks include unit tests, code coverage, static code analysis,3 dynamic application security testing (DAST), and package and dependency vulnerability checks.4 When the checks pass and the source code is ready to deploy, the CI/CD pipeline should automatically deploy it to a nonproduction environment. Doing this exercises the deployment process and provides an environment where we can perform interactive application security testing (IAST) by using the application and trying to find issues. Having a CI/CD pipeline gives us a repeatable process and, with the proper automated checks, can improve our security posture.

We must remember to secure and maintain our CI/CD pipeline. Some CI/CD pipelines use a dedicated server, container orchestration software, or third-party solutions. We should keep our dedicated servers and containers up to date with the latest operating system updates, deployment software updates, and security settings. When we deploy our source code, the CI/CD pipeline needs credentials to deploy to AWS, Azure, or Google Cloud. We must take measures to protect these credentials, as discussed in Chapter 8, and ensure they are least privileged, as mentioned in Chapter 6. If a malicious user gained access to our CI/CD environment, that person could obtain any plaintext secrets (e.g., passwords and AWS access keys) and might trigger delete processes that could remove our data and application. We should consider the CI/CD pipeline an asset that needs protection from potential threats.
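To make this concrete, the following is a minimal sketch of a pipeline definition, assuming a Node.js project deployed with the Serverless Framework and GitHub Actions; the script names, stage name, and secret names are placeholders for illustration rather than a prescribed setup.

# Sketch of a GitHub Actions workflow: run the automated checks, then deploy
# to a nonproduction stage only if every check passes. Script, stage, and
# secret names are assumptions for this example.
name: ci-cd
on:
  push:
    branches: [main]
jobs:
  check-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      - run: npm test                          # unit tests (and coverage, if configured)
      - run: npm run lint                      # static code analysis
      - run: npm audit --audit-level=high      # dependency vulnerability check
      - run: npx serverless deploy --stage dev # deploy to a nonproduction environment
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

Storing the deployment credentials as pipeline secrets, and scoping them to a least-privileged deployment role, keeps them out of the repository and limits the damage if the pipeline itself is compromised.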

Source Control

Most software projects use some type of source control software. What is less common is securing the repositories we create in it. We should set up our source control software and repositories with security in mind.

These security practices will help us protect our source code:
  • Use end-to-end encryption (e.g., SSH) for data in transit.

  • Limit access to the server or service through user accounts.

  • Enable two-factor or multi-factor authentication when possible.

  • Cryptographically sign commits with asymmetric keys.

  • Keep the source control software (and any servers hosting it) up to date.

  • Use hooks to verify the source code does not contain sensitive information before committing.5

  • Consider encrypting the repository.

  • Use ignore files to prevent committing unwanted files and files containing secrets (e.g., environment files).

Following these practices helps protect against a malicious actor obtaining or modifying our code.
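As one way to implement the hook-based check in the list above, the following is a minimal sketch of a pre-commit configuration that scans staged files for secrets. It assumes the pre-commit framework and the detect-secrets hook; the pinned revision and baseline filename are examples only.

# Sketch of a .pre-commit-config.yaml that blocks commits containing
# likely secrets. Tool choice, revision, and baseline name are assumptions.
repos:
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.4.0              # pin to an actual release in practice
    hooks:
      - id: detect-secrets
        args: ['--baseline', '.secrets.baseline']

After running pre-commit install, the hook runs on every commit; combined with an ignore file for environment files, it reduces the chance of secrets ever reaching the repository.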

Serverless Framework Plugins

Serverless Framework plugins enhance how we use the Serverless Framework when deploying our application. We might want plugins that optimize our deployments, improve our security, add functionality, and so on. We can find these plugins in the Serverless Plugins Directory6 or on the Serverless GitHub page.7

When we find a plugin we are considering, we should inspect its source code. Most plugins are open source, so we can review them freely. Even so, the author might be the only person who has ever inspected a given plugin, and we might install one that weakens our security posture. For example, one plugin uses the JavaScript “eval” command, and it is possible to create a new file on the file system by injecting a command.8 Understanding how plugins work is essential because they run with the same privileges used to deploy the Serverless configuration, so a rogue command coded into a plugin could cause an undesired change.

We might instead want to build custom plugins. Writing our own plugins reduces the risk of malicious code, assuming we rely only on credible npm packages. We can use plugins without publishing them to npm or the Serverless Plugins Directory by running them locally. Private plugins can enhance the application deployment and check our configuration against our policies. Custom plugins are handy but do require a time investment.
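As a sketch of how a private plugin can be wired in without publishing it, the following serverless.yml fragment loads a plugin from a local directory using the Framework's localPath option for service-local plugins; the directory and plugin names are made up for illustration.

# Sketch of a serverless.yml fragment that loads a custom plugin from the
# project instead of npm. Directory and plugin names are hypothetical.
plugins:
  localPath: './custom-plugins'
  modules:
    - policy-checks-plugin   # e.g., verifies the configuration against our policies

Plugins installed from npm are listed under plugins as well; in both cases, they run with the same privileges as the deployment itself, so the same scrutiny applies.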

Serverless Configuration Sizes

We should consider keeping our Serverless configurations small by splitting them into multiple configurations. We might want to have one Serverless configuration for each of the following items:
  • Database

  • Object storage

  • Group of functions (or microservice)

  • Application component

Keeping our Serverless configurations small reduces our security attack surface. For example, in one project, a group of AWS Lambda functions accepted events from an Amazon API Gateway. One Lambda function had flawed logic that sent numerous requests back to its API Gateway. The flawed logic overloaded the API Gateway, which began throttling requests. Eventually, the API Gateway stopped accepting requests altogether, and none of the other functions could trigger. In effect, the function accidentally performed a Denial of Service (DoS) attack against the API Gateway and denied access to all the functions. Had the application kept all its Lambda functions in one Serverless configuration file, the entire application would have experienced the DoS.

Another example is someone accidentally deleting a database while trying to delete the functions. Putting the database and functions into separate Serverless configurations prevents inadvertently modifying the database when adding, modifying, or removing the functions. Separating the resources allows us to be mindful when altering each Serverless configuration file.
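The following is a minimal sketch of that separation, with the database and the functions in two hypothetical configuration files; the service, table, and handler names are illustrative only.

# Sketch: database-only configuration (e.g., database/serverless.yml).
service: orders-database
provider:
  name: aws
resources:
  Resources:
    OrdersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: orders-${opt:stage, 'dev'}
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: orderId
            AttributeType: S
        KeySchema:
          - AttributeName: orderId
            KeyType: HASH

# Sketch: functions-only configuration (e.g., orders-api/serverless.yml)
# that refers to the table by name instead of defining it.
service: orders-api
provider:
  name: aws
  environment:
    ORDERS_TABLE: orders-${opt:stage, 'dev'}
functions:
  createOrder:
    handler: src/createOrder.handler
    events:
      - http:
          path: orders
          method: post

Removing or redeploying the orders-api stack cannot touch the table, because the table lives in its own stack.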

Optimizing Functions

We discussed monitoring our functions in Chapter 11. We can use that monitoring data, along with the test results gathered while initially developing a function, to optimize it. We should attempt to optimize our functions when we first create them and periodically after that.

Inefficient functions have speed, cost, and security implications. Functions with inefficient code, large package sizes, and lengthy synchronous executions might have long execution times and produce slow responses in the application. The longer a function takes to execute, the more it costs. Improper CPU and memory sizing also hurts: an undersized function runs longer, while an overprovisioned one costs more. And if a single input can force a long execution time, an attacker can potentially create a DoS attack by sending numerous such inputs to the same function. We should take steps to ensure our functions execute as efficiently as possible.

Some steps we can take to optimize our functions include
  • Limit the scope of work in the function to a particular task.

  • Functions should only accept input from one event source instead of multiple.

  • Functions should limit interactions to one resource and one action (e.g., only write data to one database table).

  • Generate test data or use monitoring data to find the optimal amount of CPU and memory.

  • Set the shortest execution timeout that allows the function to finish executing.

  • Reduce the function package size by using packaging tools (e.g., webpack9).

  • Avoid using global variables in the function code and writing to temporary file systems because that data can persist between executions of a warm function and may result in unexpected behavior.

Keeping functions small and optimized improves the application's speed, reduces cost, and makes the application more resilient to DoS attacks.
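As a sketch of several of these steps in a serverless.yml, the fragment below uses a single event source, right-sized memory, a short timeout, and per-function packaging; the values and names are assumptions to tune for your own workload.

# Sketch of an optimized function definition. Memory, timeout, and names
# are illustrative and should be tuned with test and monitoring data.
plugins:
  - serverless-webpack     # bundles the code to shrink the package size

package:
  individually: true       # package each function on its own

functions:
  writeOrder:
    handler: src/writeOrder.handler
    memorySize: 256        # sized from monitoring data
    timeout: 6             # shortest timeout that still lets it finish
    events:
      - http:              # a single event source
          path: orders
          method: post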

Fault Trees

We should plan for outages. We mentioned DoS attacks earlier in this chapter and service outages in Chapter 11. We can also experience outages caused by third-party integrations, unexpected changes in our function inputs, expired SSL certificates, and various other issues. Developing a fault tree can help us identify potential failures, and we can use it to develop mitigations.

These are some example failure scenarios and how we might mitigate them:
  • The timer service that triggers the functions we use to send bills to our customers becomes nonoperational. We could set up a backup timer service in a different region of the same cloud provider, set up a backup timer service with a different cloud provider, or set up a small server to act as a backup timer.

  • Pricing data inputs suddenly change their format to report the price as a string instead of a number. We could make the input validation flexible enough to accept strings that contain numbers and cast them to numbers.

  • A third-party integration stops providing data or starts rejecting inputs. We could keep track of which data records and system events we have not yet processed and attempt to process them again when the third-party integration is functional.

We should do the best we can to plan for failures and determine how those failures might affect our application and security posture.
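To sketch the first mitigation above, a backup copy of the billing timer could be deployed to a second region with its own small configuration; the service name, schedule, and region below are hypothetical.

# Sketch of a backup timer configuration deployed to a second region,
# for example with "serverless deploy --region us-west-2 --stage prod".
service: billing-timer-backup
provider:
  name: aws
  region: ${opt:region, 'us-west-2'}
functions:
  sendBills:
    handler: src/sendBills.handler
    events:
      - schedule: rate(1 day)   # backup schedule; coordinate with the primary timer

The handler would also need to check whether the primary timer already ran so that customers are not billed twice.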

Key Takeaways

We explored some additional considerations based on situations from projects using serverless environments. We started by discussing how to balance security with other requirements: we want to avoid implementing security that exceeds the business value and burdens developers and end users, and we should remember that simple solutions might provide sufficient protection. We continued by addressing development topics that affect our security posture, such as Continuous Integration/Continuous Delivery pipelines and source control. We then shifted to the optimizations and security concerns involved in defining Serverless configuration files and using Serverless Framework plugins. Finally, we provided ways to optimize serverless functions and discussed why we should use a fault tree to identify potential failures. These topics might be worthwhile to include in your security assessment.
