Chapter 6
IN THIS CHAPTER
Building awareness for cloud application security
Exploring the fundamentals of the Software Development Lifecycle (SDLC)
Examining common SDLC methodologies
Learning how to apply security throughout the SDLC
Securely using and integrating third-party software components
Exploring cloud application architecture
Learning how to control identity and access management for your applications
In this chapter, you explore the most important application security concerns that exist in cloud environments. I provide an overview of the Secure Software Development Lifecycle process and then move into specific technologies for managing your cloud applications securely and effectively. Domain 4 represents 17 percent of the CCSP certification exam.
Cloud application development is a rapidly growing field thanks to the number of organizations that continue to migrate their applications and data to cloud-based infrastructures. As these migrations happen, it’s up to the well-informed cloud security professional to guide organizations through requirements definition, policy generation, and application development. Although the process of securing cloud-based applications shares many similarities with on-premise solutions, as a CCSP candidate, you must be mindful of techniques and methodologies that are specific to cloud application security.
While you may be surprised to see a section on training and awareness in the Cloud Application Security chapter, think for a minute about how critical application development and deployment is in cloud environments and keep in mind the potential impacts associated with insecure code being deployed across a vast cloud infrastructure. As a CCSP candidate, it’s important that you’re familiar with the basics of cloud application development and that you have a strong understanding of common pitfalls and vulnerabilities that exist throughout the Software Development Lifecycle.
The key difference between application development in the cloud and in traditional IT models is that cloud development relies heavily on the usage of APIs. While it may sound like a type of beer, an API, or Application Programming Interface, is a software-to-software communication link that allows two applications (like a client and a server) to interact with one another over the Internet. APIs include the set of programming instructions and standards necessary for a client application to interact with some web-based server application to perform actions or retrieve needed information.
Because cloud environments are strictly web-accessible, CSPs make APIs available to cloud developers to allow them to access, manage, and control cloud resources. There are several types of API formats, but the most commonly used are Simple Object Access Protocol (SOAP) and Representational State Transfer (REST).
SOAP and REST both work across HTTP, and both rely on established rules that their users have agreed to follow — but that’s just about where the similarities end; they are quite different in approach and implementation.
SOAP is a protocol that was designed to ensure that programs built in different languages or platforms could communicate with each other in a structured manner. SOAP encapsulates data in a SOAP envelope and then uses HTTP or other protocols to transmit the encapsulated data. A major limitation of SOAP is that it only allows usage of XML-formatted data. While SOAP was long the standard solution for web service interfaces, today it is typically used to support legacy applications, or where REST is not technically feasible to use.
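To make the envelope idea concrete, here's a minimal sketch in Python using the standard library's XML tools. The request contents (`GetUserRequest`, `UserId`) are hypothetical, but the Envelope/Body structure and namespace are standard SOAP 1.1:

```python
import xml.etree.ElementTree as ET

# A minimal, illustrative SOAP 1.1 envelope: XML-formatted data wrapped
# in an Envelope/Body structure. The operation and fields are hypothetical.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
envelope = f"""<?xml version="1.0"?>
<soap:Envelope xmlns:soap="{SOAP_NS}">
  <soap:Body>
    <GetUserRequest>
      <UserId>12345</UserId>
    </GetUserRequest>
  </soap:Body>
</soap:Envelope>"""

def extract_body(xml_text):
    """Parse a SOAP envelope and return the first element inside the Body."""
    root = ET.fromstring(xml_text)
    body = root.find(f"{{{SOAP_NS}}}Body")
    return list(body)[0]

request = extract_body(envelope)
print(request.tag)                    # the operation being invoked
print(request.find("UserId").text)    # the request parameter
```

Notice how much ceremony surrounds a simple request; that structural overhead is one reason REST has largely displaced SOAP for new development.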
REST is a standard that addresses SOAP’s shortcomings as a complicated, slow, and rigid protocol. REST is not a protocol, but rather a software architecture style applied to web applications. REST is a flexible architectural scheme that can use SOAP as the underlying protocol, if desired, and supports data formats such as JSON, XML, or YAML. Part of the convenience behind REST is that RESTful services use standard URLs and HTTP methods (like GET, POST, and DELETE) to manage resources.
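As a quick illustration of that URL-plus-method convention, the sketch below builds (but never sends) four RESTful requests against a hypothetical resource URL, mapping the usual create/read/update/delete operations onto HTTP methods:

```python
from urllib.request import Request

# Hypothetical RESTful resource; requests are constructed but never sent.
BASE = "https://api.example.com/users"

# RESTful services map CRUD operations onto standard HTTP methods and URLs.
create = Request(BASE, data=b'{"name": "Alice"}', method="POST")
read   = Request(f"{BASE}/42", method="GET")
update = Request(f"{BASE}/42", data=b'{"name": "Bob"}', method="PUT")
delete = Request(f"{BASE}/42", method="DELETE")

for req in (create, read, update, delete):
    print(req.get_method(), req.full_url)
```

The same URL (`/users/42`) identifies the resource in every case; only the HTTP method changes, which is what makes RESTful APIs so easy to reason about and secure consistently.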
Understanding the basics of APIs is important for every cloud professional and is especially important when preparing for the CCSP exam. Make sure that you understand which types of APIs your cloud provider offers and thoroughly consider any limitations and impact to your organization’s cloud application security.
Being able to prepare for, identify, and understand cloud-based application issues is a tremendous skill that every cloud security professional should develop. As a CCSP, it’s your responsibility to help your organization maintain awareness of potential and actual risks when migrating to or developing applications in the cloud. Failure to do so may result in application vulnerabilities that ultimately lead to unsuccessful projects, unnecessary expenses, reputational damage for your company, or worse. The issues I discuss in this section are some of the most common cloud application development pitfalls.
Traditional on-premise systems and applications are designed, implemented, and optimized to run in traditional data centers. These applications have likely not been developed with cloud environments or services in mind, and so when migrating to the cloud, application functionality and security may not be exactly the same as what you’re used to in your on-prem environments. As a cloud security professional, you must assess your on-prem applications with the expectation that they have not been developed to run in the cloud. This expectation helps ensure that proper precautions are taken when designing (or redesigning) application security controls during the migration.
In many instances, on-premise applications are developed in such a way that they depend on very specific security controls as implemented in your organization’s data centers. When these applications are migrated to the cloud, some of these controls may need to be reconfigured or redeveloped to accommodate your new cloud-based solution. Remember that many CSPs offer modern security technologies that are routinely updated to remain cutting-edge. It’s very likely that an application that’s been hosted in a 20-year-old data center infrastructure is not properly configured for a simple lift and shift to the cloud.
In a traditional IT environment, your organization maintains full control over the entire infrastructure — you have complete access to servers, networking devices, and all other hardware and software in your data center. When moving to a cloud environment, your developers and administrators no longer have this level of control, and in some cases, the difference can be drastic. Performing system and application integrations without full access to all supporting systems and services can be very complicated. Cloud customers are left to rely on the CSP for assistance, which may not only extend integration timelines but also increase project costs. Using the cloud provider’s APIs, where possible, can help minimize integration risk and reduce the overall complexity of development. APIs are a CSP’s way of providing just enough control to the customer to facilitate integration and management of their cloud-based applications.
When developing cloud-based applications, it is essential that developers and system administrators understand the nuances of cloud environments and the potential pitfalls that these nuances may present during application development. The key factors that organizations should consider before cloud-based application development ultimately come down to multitenancy, shared responsibility, and customer control/access. It’s important that you consider these factors to avoid common pitfalls, such as lack of required isolation (for example, as required by certain compliance frameworks) and lack of sufficient control over supporting resources. Fully understanding what deployment model and service model your application is being developed for is a critical step toward planning for and addressing potential development challenges in your cloud environment.
As a field, application development has evolved over the years into some generally accepted best practices, principles, and methodologies. Mature organizations maintain well-documented policies and procedures that guide their development teams through the SDLC. Following these policies and procedures helps developers efficiently create their applications while minimizing security risks.
As developers move to different environments, like the cloud, some of their tried-and-true methodologies don’t work the way they always have. As such, organizations often find themselves with a lack of thorough documentation focused on secure development in cloud environments. Although many CSPs provide guidance to their customers, documentation often lags their speed of innovation — service updates and new releases may make existing documentation obsolete or incomplete. Ultimately, it’s up to each organization (led by their fearless CCSP) to understand how cloud architectures influence their SDLC and to accurately document policies, procedures, and best practices that take into account the additional concerns and considerations associated with cloud development.
As discussed throughout this book, cloud environments demonstrate certain essential characteristics (on-demand self-service, resource pooling, broad network access, and so on) that help provide the goodness of cloud computing when properly understood and utilized. Having a firm understanding of these characteristics and the vulnerabilities inherent in cloud computing is critical for developers building cloud-based applications.
The categories of common cloud vulnerabilities described in this section are each associated with one or more of the key cloud computing characteristics I discuss in Chapter 3: on-demand self-service, broad network access, resource pooling (and multitenancy), rapid elasticity, and measured service. The following sections discuss each of these in detail.
The cloud characteristic on-demand self-service means that cloud users can access cloud services whenever they need them. This sort of ongoing access makes identity and access management a critical concern for developers. It is essential that applications are developed and implemented with the principle of least privilege in mind, providing each user access to only the resources that she requires to do her job.
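The principle of least privilege can be boiled down to a deny-by-default permission check. This sketch (with hypothetical role and permission names) grants each role an explicit allow-list and refuses everything else, including unknown roles:

```python
# A minimal least-privilege sketch: each role gets an explicit allow-list
# of permissions, and anything not listed is denied. Role names and
# permission strings here are hypothetical.
ROLE_PERMISSIONS = {
    "viewer":  {"report:read"},
    "analyst": {"report:read", "report:create"},
    "admin":   {"report:read", "report:create", "report:delete", "user:manage"},
}

def is_authorized(role, permission):
    """Deny by default: unknown roles and unlisted permissions are refused."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("analyst", "report:create"))   # allowed
print(is_authorized("viewer", "report:delete"))    # denied
```

The important design choice is the default: access is granted only by an explicit entry, so a typo or missing role fails closed rather than open.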
In addition to managing users’ roles and access within applications, developers should be mindful of using weak authentication and authorization methods that allow users to bypass built-in security measures, such as least privilege. Developers should enforce strong authentication by requiring periodic re-authentication and multifactor authentication where possible. For even more assurance, developers and cloud architects should consider implementing a Zero Trust architecture, as discussed in Chapter 5.
Another (even scarier) vulnerability is that of unauthorized access to the management plane. Some level of access to a management interface is usually required for cloud users to access their cloud resources. Without proper configuration, this access presents a higher risk in cloud environments than in traditional IT infrastructures with very few management interfaces. Cloud developers and administrators must ensure that management APIs are tightly secured and closely monitored.
The cloud characteristic broad network access means that cloud resources are accessible by users via a network, using standard network protocols (TCP, IP, and so on). In most cases, the network being used is the Internet, which must always be considered an untrusted network. As such, common Internet-related vulnerabilities like Denial of Service and man-in-the-middle attacks are major considerations in cloud computing. A well-designed system (and its applications) includes controls that detect and prevent misconfigurations that can lead to these types of vulnerabilities being exploited.
The cloud characteristic resource pooling and the related term multitenancy mean that the data of one cloud customer is likely to share resources with another, unrelated cloud customer. Although CSPs generally have strong logical separation between tenants and their data, the very nature of sharing physical machines presents some level of risk to cloud customers. In many cases, legal or regulatory requirements can be satisfied only by demonstrating appropriate separation between tenants’ data. Developers can help enforce logical separation between tenants by encrypting data at rest (and in transit) using strong encryption algorithms. Where encryption is not possible, developers should explore data masking and other obfuscation techniques.
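Masking is simple to sketch. The example below (a common pattern, not tied to any particular standard) obfuscates a payment card number so that only the last four digits survive; note that masking is a fallback, not a substitute for encryption at rest:

```python
def mask_pan(pan):
    """Mask a payment card number, keeping only the last four digits.

    A simple obfuscation sketch for cases where full encryption isn't
    available; masking complements, not replaces, encryption at rest.
    """
    digits = pan.replace(" ", "").replace("-", "")
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_pan("4111 1111 1111 1234"))  # ************1234
```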
The cloud characteristic of rapid elasticity means that systems scale up and down as necessary to support changing customer demand. In order to provide this functionality, CSPs build large, geographically dispersed infrastructures that support users where they are. With this dispersion comes another legal and compliance vulnerability related to data location. While some cloud providers allow you to restrict your data to certain geographic locations, most CSPs do not currently provide this ability for all their services. When certain regulations require that data remains in specific regions, developers must ensure that their applications store and process regulated data only in compliant cloud services.
The cloud characteristic measured service means that you pay for what you use — nothing more and nothing less. Where this approach can go wrong is when system and application vulnerabilities allow unauthorized parties to misuse cloud resources and run up the bill. An example is crypto jacking, which is a form of malware that steals computing resources and uses them to mine for Bitcoin or other cryptocurrencies. Proper monitoring should be built into all systems and applications to detect cloud misuse as soon as possible.
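One simple (and admittedly toy) monitoring approach is to flag any period whose usage jumps far above the recent baseline, since a sudden spike can indicate resource misuse such as cryptojacking. The units and threshold below are purely illustrative:

```python
# Toy measured-service monitor: flag any hour whose compute usage greatly
# exceeds the average of all preceding hours. Real monitoring would use
# proper anomaly detection; the multiplier here is a hypothetical threshold.
def flag_anomalies(hourly_usage, multiplier=3.0):
    """Return indices of hours whose usage exceeds `multiplier` times
    the average of all preceding hours."""
    flagged = []
    for i in range(1, len(hourly_usage)):
        baseline = sum(hourly_usage[:i]) / i
        if hourly_usage[i] > multiplier * baseline:
            flagged.append(i)
    return flagged

# CPU-hours consumed per hour; the spike in the final hour gets flagged.
print(flag_anomalies([10, 12, 11, 13, 90]))  # [4]
```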
Streamlined and secure application development requires a consistent methodology and a well-defined process of getting from concept to finished product. SDLC is the series of steps that is followed to build, modify, and maintain computing software.
Your organization’s business requirements should be a key consideration whenever you develop new software or even when you modify existing applications. You should make sure that you have a firm understanding of your organization’s goals (overall and specific to your project) and knowledge of the end-user’s needs and expectations.
It’s important to gather input from as many stakeholders as possible as early as possible to support the success of your application. Gathering requirements from relevant leaders and business units across your organization is crucial to ensuring that you don’t waste development cycles on applications or features that don’t meet the needs of your business.
These business requirements are a critical input into the SDLC.
While the SDLC process has multiple different variations, it most commonly includes the steps, or phases, shown in Figure 6-1: Planning, Defining, Designing, Developing, Testing, Deploying, and Maintaining.
The Planning phase is the most fundamental stage of the SDLC and is sometimes called Requirements Gathering. During this initial phase, the project scope is established and high-level requirements are gathered to support the remaining lifecycle phases. The project team should work with senior leadership and all project stakeholders to create the overall project timeline and identify project costs and resources required.
During the Planning phase, you must consider all requirements and desired features and conduct a cost-benefit analysis to determine the potential financial impact versus the proposed value to the end-user. Using all the information that you gather during this phase, you should then validate the economical and technical feasibility of proceeding with the project.
The Planning phase is where risks should initially be identified. Your project team should consider what may go wrong and how you can mitigate, or lower, the impact of those risks. For example, imagine that you’re building an online banking application. As part of the Planning phase, you should not only consider all functional requirements of such an application, but also security and compliance requirements, such as satisfying PCI DSS. Consider what risks currently exist within your organization (or your cloud environment) that might get in the way of demonstrating PCI DSS and then plan ways to address those risks.
You may also see this phase referred to as Requirements Analysis. During the Defining phase, you use all the business requirements, feasibility studies, and stakeholder input from the Planning phase to document clearly defined product requirements. Your product requirements should provide full details of the specific features and functionality of your proposed application. These requirements will ultimately feed your design decisions, so they need to be as thorough as possible.
In addition, during this phase you must define the specific hardware and software requirements for your development team — identify what type of dev environment is needed, designate your programming language, and define all technical resources needed to complete the project.
The Designing phase is where you take your product requirements and software specifications and turn them into an actual design plan, often called a design specification document. This design plan is then used during the next phase to guide the actual development and implementation of your application.
During the Designing phase, your developers, systems architects, and other technical staff create the high-level system and software design to meet each identified requirement. Your mission during this phase is to design the overall software architecture and create a plan that identifies the technical details of your application’s design. In cloud development, this phase includes defining the required amount of CPU cores, RAM, and bandwidth, while also identifying which cloud services are required for full functionality of your application. This component is critical because it may identify a need for your organization to provision additional cloud resources. Your design should define all software components that need to be created, interconnections with third-party systems, the front-end user interface, and all data flows (both within the application and between users and the application).
At this stage of the SDLC, you should also conduct threat modeling exercises and integrate your risk mitigation decisions (from the Planning phase) into your formal designs. In other words, you want to fully identify potential risks and plan how your design will mitigate them. I cover threat modeling in the aptly titled “Threat modeling” section in this chapter.
Software developers, rejoice! After weeks or even months of project planning, you can finally write some code! During this phase of the SDLC, your development team breaks up the work documented in previous steps into pieces (or modules) that are coded individually. Database developers create the required data storage architecture, front-end developers create the interface that users will interact with, and back-end developers code all the behind-the-scenes inner-workings of the application. This phase is typically the longest of the SDLC, but if the previous steps are followed carefully, it can be the least complicated part of the whole process.
During this phase, developers should conduct peer reviews of each other’s code to check for flaws, and each individual module should be unit tested to verify its functionality prior to being rolled into the larger project. Some development teams skip this part and struggle mightily to debug flaws once an application is completed.
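Unit testing a module before integration can be as lightweight as the sketch below, which uses Python's built-in `unittest` framework against a hypothetical input-normalization function:

```python
import unittest

def normalize_username(raw):
    """Hypothetical module under test: trim whitespace and lowercase."""
    if not raw or not raw.strip():
        raise ValueError("username must not be empty")
    return raw.strip().lower()

class TestNormalizeUsername(unittest.TestCase):
    def test_trims_and_lowercases(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_rejects_empty_input(self):
        with self.assertRaises(ValueError):
            normalize_username("   ")

# Run the module's tests before rolling it into the larger project.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestNormalizeUsername)
result = unittest.TextTestRunner(verbosity=2).run(suite)
```

Note that the tests verify failure behavior (the empty-input case) as well as the happy path; security flaws often hide in exactly the inputs a module was never expected to receive.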
In addition to conducting functional testing of each module, the time is right to begin security testing. Your organization should conduct static code analysis and security scanning of each module before integration into the project. Failure to do so may allow individual software vulnerabilities to get lost in the overall codebase, and multiple individual security flaws may combine to present a larger aggregate risk, or combined risk.
Once the code is fully developed, the application enters the Testing phase. During this phase, application testers seek to verify whether the application functions as desired and according to the documented requirements; the ultimate goal here is to uncover all flaws within the application and report those flaws to the developers for patching. This cyclical process continues until all product requirements have been validated and all flaws have been fixed.
With a completed application in hand, security testers have more tools at their disposal to uncover security flaws. Instead of relying solely on static code analysis, testers can use dynamic analysis to identify flaws that occur only when the code is executed. Static analysis and dynamic analysis are further discussed in the “Security testing methodologies” section of this chapter.
Once the application has passed the Testing phase, it is ready to be deployed for customer use. There are often multiple stages of deployment (Alpha, Beta, and General Availability are common ones), each with its own breadth of deployment (for example, alpha releases tend to be deployed to select customers, whereas general availability means it’s ready for everyone).
Once applications have been tested and successfully deployed, they enter a maintenance phase where they’re continually monitored and updated. During the Maintaining phase, the production software undergoes an ongoing cycle of the SDLC process, where security patches and other updates go through the same planning, defining, designing, developing, testing, and deploying activities discussed in the preceding sections.
Many SDLC models include a separate phase for disposal or termination, which happens when an application is no longer needed or supported. From a security perspective, you should keep in mind that data (including portions of applications) may remain in cloud environments even after deletion. Consult your contracts and SLAs for commitments that your CSP makes for data deletion and check out Chapter 4 for more on secure data deletion.
Although the steps within the SDLC remain largely constant, several SDLC methodologies, or models, exist, and each approaches these steps in slightly different ways. Two of the most commonly referenced and used methodologies are waterfall and agile.
Waterfall is the oldest and most straightforward SDLC methodology. In this model, you complete one phase and then continue on to the next — you move in sequential order, flowing through every step of the cycle from beginning to end. Each phase of this model relies on successful completion of the previous phase; there’s no going back, because… well, because waterfalls don’t flow up.
Some advantages of the waterfall methodology include its simplicity and ease of management, its clearly defined phases with concrete deliverables and milestones, and its predictability for projects whose requirements are fixed and well understood up front.
Some challenges that come with waterfall include its rigidity (once a phase is complete, there’s no going back to make changes), the fact that working software isn’t produced until late in the lifecycle, and the risk that flaws discovered during late-stage testing are costly to fix.
Agile is more of the new kid on the block, having been introduced in the 1990s. In this model, instead of proceeding in a linear and sequential fashion, development and testing activities occur simultaneously and cyclically.
Application development is separated into sprints that produce a succession of releases that each improves upon the previous release. With the agile model, the goal is to move quickly and to fail fast — create your first release, test it, fix it, and create your next release fast!
Some advantages of the agile methodology include rapid and incremental releases, the flexibility to adapt to changing requirements, continuous feedback from stakeholders and end-users, and earlier detection of flaws thanks to ongoing testing.
Some challenges that come with agile include less predictable schedules and budgets, lighter documentation than more formal methodologies, a heavy reliance on experienced and highly engaged team members, and the potential for scope creep as requirements continually evolve.
Applying the SDLC to your cloud application development requires an understanding of common application vulnerabilities, cloud-specific risks, and the use of threat modeling to assess the impact of those risks. This section guides you through securely applying the SDLC process to your cloud development initiatives.
The Open Web Application Security Project (OWASP) is an online community with a wealth of helpful projects and resources. I cover some of its helpful logging-related resources in Chapter 4, but one of the most famous projects is OWASP Top 10, which identifies the most critical security risks to web applications. This list is particularly relevant to cloud applications, which are inherently web-based.
As of this writing, OWASP Top 10 was last updated in 2017. The top ten web application security risks outlined by OWASP are injection; broken authentication; sensitive data exposure; XML External Entities (XXE); broken access control; security misconfiguration; cross-site scripting (XSS); insecure deserialization; using components with known vulnerabilities; and insufficient logging and monitoring.
The following sections describe these risks in detail.
Injection attacks refer to a broad class of attacks in which a malicious actor sends untrusted commands or input to an application. Vulnerable applications process the untrusted input as part of a valid command or query, which then alters the course of the application’s execution. In doing so, injection attacks can give an attacker control over an application’s program flow, grant an attacker unauthorized access to data, or even allow full system compromise. It’s no wonder that this type of vulnerability ranks at the top of the OWASP Top 10.
Common injection attacks include SQL injection, code injection, and cross-site scripting, discussed later in this chapter. These attacks are not only dangerous, but also very widespread. Many freely available tools make exploiting these common vulnerabilities simple, even for inexperienced hackers.
Applications can be protected against injection attacks by restricting privileges for high-risk actions and by performing input validation. Input validation is the process of ensuring that all input fields are properly checked and approved by the application prior to processing the input. Input validation requires locking down your application code to allow only expected input types and values and filtering any suspicious or untrusted inputs.
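These two defenses pair naturally. The sketch below (using an in-memory SQLite database and a hypothetical `users` table) first validates input against an allow-list pattern, then passes it to a parameterized query so the database driver treats it strictly as data, never as SQL:

```python
import re
import sqlite3

def validate_username(value):
    """Allow-list validation: accept only 3-20 letters, digits, or
    underscores; reject everything else before it reaches the query."""
    if not re.fullmatch(r"[A-Za-z0-9_]{3,20}", value):
        raise ValueError("invalid username")
    return value

# Hypothetical application database, created in memory for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def lookup_role(username):
    # Parameterized query: the '?' placeholder keeps input out of the SQL
    # text entirely, defeating classic injection payloads.
    row = conn.execute(
        "SELECT role FROM users WHERE name = ?",
        (validate_username(username),),
    ).fetchone()
    return row[0] if row else None

print(lookup_role("alice"))  # admin
# A classic payload like "alice' OR '1'='1" fails validation before the
# query ever runs, and would be harmless in the parameterized query anyway.
```

Defense in depth is the point: either control alone blocks the textbook attack, but together they also catch mistakes in whichever layer is implemented imperfectly.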
Broken authentication is a vulnerability that allows an attacker to capture or bypass an application’s authentication mechanisms, allowing the attacker to assume the identity of the attacked user, thus granting the attacker the same privileges as that user.
Broken authentication can occur in several ways. It can be as obvious as an application allowing weak passwords that are easily guessed or as obscure as an application not terminating an authenticated session when a browser is closed. In the latter example, imagine that you’re using a public computer to check your bank account (generally not advised, but bear with me). Instead of clicking the “Sign out” button, you simply close your browser. If the banking site is not programmed to time out upon browser closure, then the next user of that machine could potentially open the same browser and still be authenticated to your account.
Developers can do a few things to protect applications from broken authentication. Some recommendations include requiring multifactor authentication, disallowing default or easily guessed credentials, limiting failed login attempts, and invalidating sessions after logout or a period of inactivity.
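The session-expiry idea from the banking example reduces to a small check. This sketch (the 15-minute limit is a hypothetical policy choice) treats any session idle longer than the timeout as invalid, forcing re-authentication:

```python
import time

SESSION_TIMEOUT_SECONDS = 15 * 60   # hypothetical 15-minute idle limit

def session_is_valid(last_activity, now=None):
    """Expire a session once its idle period exceeds the timeout.

    `last_activity` and `now` are Unix timestamps; `now` defaults to the
    current time and is injectable here to make the logic testable.
    """
    now = time.time() if now is None else now
    return (now - last_activity) <= SESSION_TIMEOUT_SECONDS
```

The server enforces the timeout regardless of what the browser does, so a user who closes the window without signing out is still protected once the idle limit passes.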
Sensitive data exposure is exactly what it sounds like. Many web applications collect, store, and use sensitive user information — data like user credentials, PII, and credit card data. Some of these web applications do not properly secure this sensitive information, which can lead to exposure to unauthorized parties.
Many of the data protection principles you learn about throughout this book apply here. Web applications should enforce encryption at rest and in transit, especially where sensitive data exists. Applications should also check for and enforce secure communications methods when exchanging sensitive data with browsers.
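For stored credentials specifically, a slow, salted key-derivation function means that even a breached datastore doesn't hand attackers usable passwords. A minimal sketch with Python's standard library (the iteration count is illustrative; follow current guidance in production):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a slow, salted hash so stored credentials aren't exposed
    in usable form even if the datastore is breached."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the derivation and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess", salt, digest))                         # False
```

The constant-time comparison (`hmac.compare_digest`) matters too: a naive `==` can leak timing information that helps an attacker guess the stored value byte by byte.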
An XML External Entity (XXE) attack occurs when XML input containing a reference to an external entity is processed by a weakly configured XML parser that resolves the entity. The deep technical details of XXE are outside the scope of this book, but you should understand that XXE attacks may lead to data theft, port scanning, Denial of Service, and more.
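As one illustration of the defense, Python's built-in `xml.etree.ElementTree` parser does not resolve external entities, so a classic XXE payload fails to parse instead of leaking the referenced file (other parsers and languages may need explicit hardening):

```python
import xml.etree.ElementTree as ET

# An illustrative XXE payload: the external entity tries to pull in a
# local file if a permissive parser expands it.
malicious = """<?xml version="1.0"?>
<!DOCTYPE data [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<data>&xxe;</data>"""

# ElementTree refuses to expand the entity and raises ParseError,
# so the file contents never enter the document.
try:
    ET.fromstring(malicious)
    print("parsed (parser expanded the entity!)")
except ET.ParseError as err:
    print("rejected:", err)
```

The general lesson carries across platforms: disable document type definitions and external entity resolution in whatever XML parser your application uses.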
I introduce access control in Chapter 3, and you learn about it in detail throughout this book. In short, access control is the set of policies and mechanisms that ensures users aren’t able to act outside of their intended permissions. Broken access control is, of course, failure of access control mechanisms to properly limit or control access to resources. Broken access control includes things like unauthorized privilege escalation, bypassing access controls by modifying the URL or other settings, and manipulating metadata to gain unauthorized access.
Prevention of broken access control begins during the Testing phase, but continues well into the Maintaining phase. Static and dynamic analysis techniques can help identify weak access control mechanisms, but security teams should also conduct penetration tests on systems that process sensitive information. In addition, enforcing a deny by default policy, validating application inputs, and performing periodic checks of a user’s privilege can all help mitigate risks associated with broken access control. Finally, do not forget that detection is just as important as prevention; you must log and continually monitor access to your application to enable quick detection and remediation of broken access control.
Security misconfiguration is pretty straightforward; it’s when systems or applications are not properly or securely configured. Examples of security misconfiguration include leaving default accounts and passwords enabled, running unnecessary features or services, exposing overly detailed error messages to users, and failing to apply security patches or hardening settings.
Preventing security misconfiguration starts with the unit testing that you conduct in the Developing phase and continues through testing and into the Maintaining phase. It’s essential that you have strong configuration management practices in place to monitor and manage configurations across all your systems and applications.
Cross-site scripting, or XSS, is a specific variant of injection attacks that targets web applications. XSS enables an attacker to inject untrusted code (like a malicious script) into a web page or application. When an unsuspecting user navigates to the infected web page, the untrusted code is then executed in the user’s browser using their permissions. XSS acts as a vehicle for an attacker to deliver malicious code to anyone who navigates to the infected application. The infected code can manipulate the output of the original website, redirect the user to a malicious site, give the attacker control over the user’s web session, or even leverage the browser’s permissions to access information on the user’s local machine. As you can imagine, the potential damage caused by an XSS vulnerability is huge, and it remains one of the top security concerns for cloud developers.
As with the rest of the family of injection attacks, cross-site scripting is primarily protected by input validation and sanitization. As a cloud security professional, make sure that your applications check all input for malicious code.
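Output encoding is the other half of the defense: any user-supplied text embedded in a page should be escaped so injected markup renders as harmless text. A minimal sketch (the comment-rendering function is hypothetical):

```python
import html

def render_comment(user_input):
    """Encode user-supplied text before embedding it in HTML so that any
    injected script is displayed as text rather than executed."""
    return "<p>" + html.escape(user_input) + "</p>"

print(render_comment('<script>alert("xss")</script>'))
# <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

The browser receives `&lt;script&gt;` rather than a `<script>` tag, so the payload is shown to the user as literal text instead of running in their session.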
Insecure deserialization occurs when an application or API takes an untrusted stream of bytes and reconstructs it into a potentially malicious file. One of the ways that malware masks itself is by breaking itself down to avoid signature detection and then relying on some later process to reconstruct it. Insecure deserialization can be used to perform a wide array of attacks and can also lead to remote code execution.
Developers should ensure that applications and APIs accept only serialized data from trusted sources, if at all.
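One practical pattern is to prefer a data-only format (such as JSON) over formats that reconstruct arbitrary objects (such as Python's pickle), and to validate the result against an allow-list of expected fields. The field names below are hypothetical:

```python
import json

# Expected structure for the hypothetical payload: field name -> type.
ALLOWED_FIELDS = {"username": str, "age": int}

def safe_load(payload):
    """Deserialize untrusted input defensively: JSON only (no object
    reconstruction), then reject unexpected fields or types."""
    obj = json.loads(payload)
    if not isinstance(obj, dict):
        raise ValueError("expected a JSON object")
    for key, value in obj.items():
        if key not in ALLOWED_FIELDS or not isinstance(value, ALLOWED_FIELDS[key]):
            raise ValueError(f"unexpected field or type: {key}")
    return obj

print(safe_load('{"username": "alice", "age": 30}'))
```

Unlike `pickle.loads`, `json.loads` can only ever produce plain data (dicts, lists, strings, numbers), so there is no code path for an attacker's bytes to reconstruct executable objects.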
This vulnerability occurs when your application is built on one or more vulnerable frameworks, modules, libraries, or other software components. While each component may have limited privileges on its own, the potential risk increases once it is integrated into your application. Using components with known vulnerabilities may indirectly impact other parts of your application and may even compromise sensitive data.
The best protection from this vulnerability is vigilant updating and patching of all components within your application. Your application is only as secure as its weakest link; failing to patch one component’s security flaws makes your entire application vulnerable to attack.
Insufficient logging exists when systems and applications fail to capture, maintain, and protect all auditable events. Events that should be logged include privileged access, login failures, and other events I discuss in Chapters 4 and 5. The auditable events must be captured in logs and stored in a system separate from the system being audited to ensure that the logs are not compromised if the system itself is compromised. Also, be sure to maintain log data in accordance with any regulatory and contractual requirements.
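A lightweight sketch of the idea, using Python's standard `logging` module: security-relevant events go to a dedicated audit logger whose handler ships them elsewhere. Here an in-memory list stands in for the separate log system, and the event names are hypothetical:

```python
import logging

# Route security-relevant events to a dedicated audit logger. In
# production, the handler would forward records to a separate log
# system; a simple list stands in for that destination here.
audit_log = logging.getLogger("audit")
audit_log.setLevel(logging.INFO)

captured = []

class ListHandler(logging.Handler):
    def emit(self, record):
        captured.append(self.format(record))

handler = ListHandler()
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
audit_log.addHandler(handler)

# Examples of auditable events: login failures and privileged access.
audit_log.info("login_failure user=%s src=%s", "alice", "203.0.113.7")
audit_log.info("privileged_access user=%s action=%s", "bob", "user:delete")
print(len(captured), "auditable events recorded")
```

Because the handler is the only link between the application and the log destination, swapping the list for a syslog or cloud-logging handler moves the records off-host without changing any of the code that emits events.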
Insufficient monitoring occurs when logged events are not sufficiently monitored or integrated into incident response activities. This vulnerability may allow attackers to maintain persistence, pivot to other systems, and cause additional harm that may be prevented with early detection. The best prevention against insufficient monitoring is to develop and maintain a comprehensive strategy for monitoring logs and taking action on important security events.
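The logging-plus-monitoring pattern above can be sketched with Python's standard logging module. The in-memory stream here stands in for a separate log system (so a compromised host can't scrub its own audit trail), and the three-failure alert threshold is an arbitrary example, not a recommendation.

```python
import logging
from io import StringIO

# Route auditable events to a dedicated destination, separate from the
# system being audited.
audit_stream = StringIO()
audit_log = logging.getLogger("audit")
handler = logging.StreamHandler(audit_stream)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
audit_log.addHandler(handler)
audit_log.setLevel(logging.INFO)

def record_login(user: str, success: bool) -> None:
    """Capture an auditable event: both successes and failures are logged."""
    if success:
        audit_log.info("login success user=%s", user)
    else:
        audit_log.warning("login failure user=%s", user)

# Minimal "monitoring": alert when failures for one user cross a threshold
for _ in range(3):
    record_login("mallory", success=False)
failures = audit_stream.getvalue().count("login failure user=mallory")
alert = failures >= 3
```

In practice this role is played by a SIEM, which aggregates logs centrally and triggers incident response workflows on patterns like the repeated failures above.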
You probably realize that a great deal of overlap occurs between application security in the cloud and application security in traditional data center environments. Despite the similarities, it’s important that you take note of the nature of cloud computing and how cloud architectures contribute to a unique risk landscape. The Cloud Security Alliance (CSA) routinely publishes a fantastic guide that outlines the top risks in cloud environments. I cover CSA’s 2019 “Egregious Eleven” in Chapter 3.
I won’t go into the specifics of each risk again, but you should definitely check out Chapter 3 if you haven’t already. What’s important to remember is that your risks change depending upon your cloud service category. For PaaS, risks like insufficient identity, credential, access, and key management, as well as limited cloud usage visibility, are bigger concerns because you, as a cloud customer, have a lower level of access and control than you do in IaaS environments. For application developers in IaaS environments, your risk is skewed more toward misconfiguration and inadequate change control, as well as insider threat, because your users and applications generally have a higher level of access, which poses a higher level of risk if misused. When considering cloud-specific risks, make sure that you take into account how your service category affects your application’s risk posture.
Quality assurance, or QA, is the process of ensuring software quality through validation and verification activities. The role of QA in software development is to ensure that applications conform to requirements and to quickly identify any risks. QA is not testing, but rather an umbrella field that includes testing, guidance, and oversight activities throughout the entire SDLC.
QA professionals are an integral part of any application development project and should work with developers, cloud architects, and project managers to ensure a quality product is designed, developed, and delivered to the customer.
Threat modeling is a technique by which you can identify potential threats to your application and identify suitable countermeasures for defense. Threats may be related to overall system vulnerabilities or an absence of necessary security controls. You can use threat modeling to help securely develop software or to help reduce risk in an already deployed application.
There are numerous approaches to threat modeling, but two of the most commonly used are called STRIDE and PASTA.
STRIDE is a model developed by a team at Microsoft in 1999 to help identify and classify computer security threats. The name itself is a mnemonic for six categories of security threats. STRIDE stands for
Spoofing: Pretending to be another user, system, or component
Tampering: Maliciously modifying data or code
Repudiation: Denying having performed an action
Information disclosure: Exposing data to parties not authorized to see it
Denial of service: Degrading or preventing legitimate use of a system
Elevation of privilege: Gaining capabilities without proper authorization
Most people would be surprised to hear that spaghetti and linguini can help secure their cloud environments. I would be surprised, too — that’s just silly! The Process for Attack Simulation and Threat Analysis (PASTA) is a risk-based threat model, developed in 2012, that supports dynamic threat analysis. The PASTA methodology integrates business objectives with technical requirements, application risks, and attack modeling. This attacker-centric perspective of the application produces a mitigation strategy that includes threat enumeration, impact analysis, and scoring.
The PASTA methodology has seven stages:
Define objectives.
During this step, you define key business objectives and critical security and compliance requirements. In addition, you perform a preliminary business impact analysis (BIA) that identifies potential business impact considerations.
Define technical scope.
You can’t protect something until you know it exists and needs protecting. During this step, you document the boundaries of the technical environment and identify the scope of all technical assets that need threat analysis. In addition to the application boundaries, you must identify all infrastructure, application, and software dependencies. The goal is to capture a high-level, but comprehensive, view of all servers, hosts, devices, applications, protocols, and data that need to be protected.
Perform application decomposition.
This step requires you to focus on understanding the data flows between your assets (in other words, the application components) and identify all application entry points and trust boundaries. You should leave this step with a clear understanding of all data sources, the parties that access those data sources, and all use cases for data access within your application — basically, who should perform what actions on which components of your application.
Complete a threat analysis.
In this step, you review threat data from within your environment (SIEM feeds, WAF logs, and so on) as well as externally available threat intelligence that is related to your application (for example, if you run a banking app, numerous resources are available to learn about emergent cyber threats to financial services companies). You should be seeking to understand threat-attack scenarios that are relevant to your specific application, environment, and data. At the end of this stage, you should have a list of the most likely attack vectors for your given application.
Conduct a vulnerability analysis.
During this step, you focus on identifying all vulnerabilities within your code and correlating them to the threat-attack scenarios identified in Step 4. You should be reviewing your OS, database, network, and application scans, as well as all dynamic and static code analysis results, to enumerate and score existing vulnerabilities. The primary output of this stage is a correlated mapping of all threat-attack vectors to existing vulnerabilities and impacted assets.
Model attacks.
During this stage, you simulate attacks that could exploit identified vulnerabilities from Step 5. This step helps determine the true likelihood and impact of each identified attack vector. After this step, you should have a strong understanding of your application’s attack surface (for example, what bad things could happen to which assets within your application environment).
Conduct a risk and impact analysis.
During this final stage, you take everything you’ve learned in the previous stages and refine your BIA. You also prioritize risks that need remediation and build a risk mitigation strategy to identify countermeasures for all residual risks.
The final phase of the SDLC involves maintaining an application after deployment for the full lifetime of the application. A big part of ongoing software maintenance is configuration management and application versioning. Configuration management is the process of tracking and controlling configuration changes to systems and software. Versioning is the process of creating and managing multiple releases of an application, each with the same general function but incrementally improved or otherwise updated.
Configuration management is a major consideration for any development team in any environment. Ensuring that systems and applications remain properly configured and in harmony with one another is an important challenge. In cloud environments, where systems freely spin up and down and resources can be rapidly provisioned on the fly, configuration management becomes an even greater concern for developers and security professionals alike. Whereas traditional data center environments usually involve configuration updates being made directly on each server, cloud environments operate at massive scale that makes this task nearly impossible — and cloud customers typically lack the access or control to directly manage these systems anyway. Instead, in cloud environments, address configuration management by building and managing software images that are updated, tested, and deployed throughout the customer’s cloud environment. To maintain consistent configuration management and software versions, cloud developers should generally seek to use automated tools and processes.
For tracking source code changes throughout the SDLC, developers can use version-control tools like Git (https://git-scm.com) or Apache Subversion (https://subversion.apache.org). Both of these tools are open source version-control systems that are used by large and small organizations to manage their code development and releases.
A bevy of open source and commercial tools are available for maintaining system configurations and software versions. Aside from the tools and features built into most CSP offerings, developers often flock to solutions like Ansible (https://www.ansible.com), Puppet (https://puppet.com), and Chef (https://www.chef.io). These tools enable a process known as Infrastructure as Code (IaC) that allows developers to view and manipulate their IT environments directly from lines of code using a programming or configuration language. Developers can use these tools to monitor and maintain system and application configurations, which allows centralized configuration management across their entire environment.
Many other code and configuration management tools (both open source and commercial) are available, including options offered directly by some CSPs. Your organization should carefully consider your business and technical needs to determine which tool(s) work best for your software development.
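The desired-state model those tools share can be sketched in a few lines. This is a conceptual illustration only; the resource names are invented, and real IaC tools like Ansible, Puppet, and Chef implement this model at scale with their own declarative languages.

```python
# Declare the target configuration once, then converge each server
# toward it idempotently.
DESIRED_STATE = {
    "ntp_server": "time.example.internal",
    "ssh_root_login": "disabled",
    "tls_min_version": "1.2",
}

def converge(current: dict, desired: dict) -> dict:
    """Return the changes needed to bring a server to the desired state.

    Running it twice produces no further changes -- this idempotency is
    what makes automated configuration management safe to repeat across
    thousands of cloud instances.
    """
    return {k: v for k, v in desired.items() if current.get(k) != v}

server = {"ntp_server": "pool.ntp.org", "tls_min_version": "1.2"}
changes = converge(server, DESIRED_STATE)
server.update(changes)
```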
Having a mature SDLC process is really important. Testing, auditing, and verifying that your SDLC process is producing secure applications that function as intended is just as important. In this section, you learn about functional testing and explore various application security testing methodologies.
Functional testing is a type of software testing that evaluates individual functions, features, or components of an application rather than the complete application as a whole. Functional testing is considered black box testing and works by feeding the tested function an appropriate input and comparing the output against functional requirements. This type of testing does not evaluate the actual source code or the processing within the application, but instead is concerned only with the results of the processing. Because functional testing is used to test various aspects of an application, the types of tests are wide-ranging. Examples of some functional tests include unit testing, component testing, integration testing, regression testing, and user acceptance testing, among others.
Functional testing within cloud environments has all of the same considerations as traditional data center environments and then some. Because you’re operating in an environment with shared responsibility (between the CSP and cloud customer), developers must perform functional testing to evaluate the application’s compliance with all legal and regulatory obligations. You must consider how multitenancy, geographic distribution, and other cloud-specific attributes impact your specific testing needs.
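Here's what the black box approach looks like in miniature, using Python's built-in unittest framework. The discount function and its requirements are invented for illustration; the point is that each test feeds inputs and compares outputs against requirements without ever inspecting the implementation.

```python
import unittest

def apply_discount(price_cents: int, percent: int) -> int:
    """Function under test: per the (assumed) requirement, results round
    down and discounts must fall between 0 and 100 percent."""
    if not 0 <= percent <= 100:
        raise ValueError("discount out of range")
    return price_cents * (100 - percent) // 100

class DiscountFunctionalTest(unittest.TestCase):
    """Black-box checks: inputs in, outputs compared to requirements."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(1000, 25), 750)

    def test_boundary_values(self):
        self.assertEqual(apply_discount(1000, 0), 1000)
        self.assertEqual(apply_discount(1000, 100), 0)

    def test_invalid_input_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(1000, 150)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(DiscountFunctionalTest)
)
```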
Before deployment and on an ongoing basis, cloud developers should use several application security testing methodologies to find and remediate weaknesses in their applications. For the most part, the methodologies described in the following sections align with security testing practices in traditional data center environments, but practical application of each methodology may differ due to the characteristics of cloud architectures.
Static application security testing (SAST), or static code analysis, is a security testing technique that involves assessing the security of application code without executing it. SAST is a white box test that involves examining source code or application binaries to detect structural vulnerabilities within the application. SAST tools and processes can help you detect things like memory overflows that are otherwise hard for humans to detect. Because they analyze source code, your development team must be sure to find and use a SAST tool that works with your particular development environment and your application’s programming language.
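To make the idea concrete, here is a toy static analyzer in the spirit of SAST: it walks the parsed source tree, without ever executing the code, and flags calls to dangerous sinks like eval(). Real SAST tools apply thousands of such rules plus data-flow analysis; this sketch shows only the core mechanic.

```python
import ast

# Function names treated as dangerous sinks for this toy analyzer
DANGEROUS_CALLS = {"eval", "exec"}

def static_scan(source: str) -> list:
    """Return (line number, function name) for each dangerous call found,
    determined purely by inspecting the code's structure."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

sample = "user_input = input()\nresult = eval(user_input)\n"
findings = static_scan(sample)
```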
Dynamic application security testing (DAST), or dynamic code analysis, involves assessing the security of code during execution. DAST seeks to uncover vulnerabilities by running an application and simulating an attack against it. By examining the application’s reaction, you are able to determine whether it’s vulnerable. For cloud applications, DAST scanners run against web URLs or REST APIs and search for vulnerabilities like injections, XSS flaws, and so on. DAST scanners use applications in a similar manner as a typical user and often require application credentials in order to run.
DAST is considered a black box test because testing is performed strictly from outside the application, with no intimate knowledge of the application’s code or inner workings.
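The core of one common DAST check, a reflected-XSS probe, looks something like this. A real scanner would send the marker payload over HTTP to each input of a target URL; here the application responses are simulated strings so the mechanic stands on its own.

```python
import html

# A distinctive marker payload the scanner submits to the running application
PAYLOAD = '<script>alert("dast-probe-7f3a")</script>'

def response_is_vulnerable(response_body: str) -> bool:
    """True if the payload comes back verbatim (it would execute in a
    browser), False if the application encoded or stripped it."""
    return PAYLOAD in response_body

# Simulated responses from two versions of the same search page
vulnerable_page = f"<p>You searched for: {PAYLOAD}</p>"
safe_page = f"<p>You searched for: {html.escape(PAYLOAD)}</p>"
results = (response_is_vulnerable(vulnerable_page),
           response_is_vulnerable(safe_page))
```

Note that the scanner reaches its verdict purely from the application's observable behavior, which is exactly what makes DAST a black box technique.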
Vulnerability scanning is the process of assessing an application or system for known weaknesses. This process usually involves using a tool to run tests on servers, networks, or applications that look for signatures that match known malware, misconfigurations, and other system vulnerabilities. Vulnerability scan tools typically generate reports that list all discovered vulnerabilities, rated by severity (for example, high, moderate, low). In cloud environments, your service category (IaaS, PaaS, or SaaS) impacts what your responsibility for scanning is. For all service categories, the CSP is responsible for scanning (and patching) the underlying cloud infrastructure. For IaaS deployments, customers are typically responsible for vulnerability scanning their virtual machine instances and database instances. SaaS customers generally leave the vulnerability management activities up to their cloud provider, while PaaS customers’ responsibilities vary based on the types of PaaS services in use. You should consult your CSP’s customer responsibility matrix, user guide, or other relevant documentation to determine what responsibility you have for conducting vulnerability scans.
Penetration testing (or pentesting) is the process of conducting a simulated attack on a system or application in order to discover exploitable vulnerabilities. A pentest may be a white box test, but is usually a black box exercise during which the tester uses tools and procedures similar to those of a malicious attacker. The objective of a penetration test is for the good guys to discover exploitable vulnerabilities before the bad guys do. In doing so, pentests provide insights into high-risk security flaws within an application and highlight the potential impact if those flaws were exploited.
A key aspect of software development is understanding your development environment and the components that make up your software application. Using verified secure software is critical in any environment, but even more important in cloud environments that are often comprised of or connected with many different components that are not completely within your control. In this section, you explore the use of approved APIs, management of your cloud development supply chain, and the benefits and risks associated with open source software.
In cloud computing, APIs are powerful mechanisms by which cloud providers expose functionality to developers. APIs provide cloud developers an interface they can use to programmatically access and control cloud services and resources. With great power comes great responsibility, and APIs are a great example of that. The security of APIs plays a big role in the overall security of cloud environments and their applications. Consuming or leveraging unapproved APIs can lead to insecure applications and compromised data.
As a CCSP, you must ensure that your organization builds a formal process for testing and approving all APIs prior to use. Any significant changes to an API, whether vendor updates or security vulnerabilities, should prompt additional review of the API before further use. API testing should ensure that the API is secured appropriately depending upon the type of API it is. Testing an API’s security includes ensuring that the REST or SOAP API uses secure access methods, enables sufficient logging, and encrypts communications where applicable.
It is increasingly common for companies to integrate pieces of code or entire applications from other organizations into their own applications. Cloud applications, in particular, tend to be composed of multiple different external components and API calls. They often leverage software or data sources from one or more cloud providers as well as other external sources. It is essential that organizations consider the security implications whenever they use software components outside of their organizational control.
In many cases, developers rely on third-party software components that they don’t have complete understanding of; they may need the functionality that an external component offers, but haven’t validated that the component has been securely developed and tested in accordance with the organization’s policies and requirements. It is critical that your organization assess all external services, applications, and components to validate their secure design and proper functioning before integrating into your own applications.
While supply-chain management is focused on securely managing your use of third-party applications, you should also assess your organization’s use of third parties to manage parts of your software. Examples include third-party patch management, third-party encryption software, and third-party access management solutions. Third-party software management goes both ways: You must carefully assess your organization’s implementation of external software and also perform due diligence on your use of third-party providers who help manage your own software, including cloud providers.
Open source software is widely used by individuals and organizations alike. In cloud environments, developers heavily rely on open source applications, libraries, and tools to build their own software. Open source software is often considered to be more secure than closed source software because its source code is publicly available and heavily reviewed and tested by the community. Popular open source software often garners so much attention and scrutiny that security bugs are found and patched much quicker than in proprietary software peers.
Despite the popular belief that open source software offers many security benefits, some organizations (government agencies, for example) are a little more skeptical and cautious when it comes to open source software. Every organization should carefully assess any software component — open source or proprietary — and determine its suitability for application development and usage.
Developing cloud applications involves more than a development environment and your application code. Cloud application architecture requires supplemental security components from your cloud infrastructure and a combination of technologies like cryptography, sandboxing, and application virtualization. You can explore these concepts throughout this section.
I introduce the topic of defense-in-depth in Chapter 2, and it’s a critical theme throughout much of this book. When developing applications, it’s important not to rely solely on the application itself for security. Following a defense-in-depth approach, your application architecture should include multiple layers of security controls that protect different aspects of your applications in different ways. The additional layers of security components serve to supplement the security already built into your application development.
Firewalls are a core security component in both traditional IT environments and cloud infrastructures. These foundational components are traditionally physical devices located at strategic points throughout a network to limit and control the flow of traffic for security purposes. In cloud environments, however, customers aren’t able to just walk into a CSP’s data center and install their own firewalls. As such, cloud customers rely on virtual firewalls to manage traffic to, from, and within their networks and applications. Most CSPs offer virtualized firewall functionality, and many vendors of traditional firewall appliances now produce software-based firewalls for cloud environments. These virtual firewalls can be used with any cloud service model (IaaS, PaaS, or SaaS) and can be managed by the customer, CSP, or a third party.
A web application firewall (WAF) is a security appliance or application that monitors and filters HTTP traffic to and from a web application. Unlike regular firewalls, WAFs are layer-7 devices that are actually able to understand and inspect HTTP traffic and can be used to apply rules to communication between clients and the application server. WAFs are typically used to protect against XSS, SQL injection, and other application vulnerabilities listed in the OWASP Top 10 (discussed in the “Common vulnerabilities during development” section of this chapter).
WAFs are highly configurable, and their rules must be carefully developed to fit your specific application and use-case; an overly sensitive WAF can lead to inadvertent Denial of Service, while weak WAF rules may not filter bad traffic. Cloud security professionals and application developers must work together to ensure that WAF rules are configured for security without loss of functionality.
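A miniature WAF rule set illustrates the tuning trade-off just described. The two patterns below are deliberately crude, invented for this sketch; notice how the SQL injection rule correctly flags an attack string while letting a legitimate value containing an apostrophe through, which is exactly the balance WAF rule authors must strike.

```python
import re

# Layer-7 pattern matching on request parameters (illustrative rules only)
RULES = [
    ("sqli", re.compile(r"('|\")\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE)),
    ("xss", re.compile(r"<\s*script", re.IGNORECASE)),
]

def inspect_request(params: dict):
    """Return the name of the first rule any parameter value trips,
    or None if the request should be allowed through to the app."""
    for value in params.values():
        for name, pattern in RULES:
            if pattern.search(value):
                return name
    return None

blocked = inspect_request({"user": "admin' OR 1=1"})
allowed = inspect_request({"user": "o'brien"})
```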
Malware protection dates back to the earliest days of the Internet when every business and personal computer needed a good antivirus program to keep it safe from the latest Trojan horse or backdoor virus. Things have evolved quite a bit since then, but the fundamental purpose of malware protection remains the same. In modern computing, malware protection is often coupled with threat intelligence and protection. Together, malware and threat protection help intelligently discover zero day vulnerabilities and other threats to cloud applications before they become exploited. A good malware and threat protection solution correlates your cloud environment’s existing log infrastructure with other data sources, including externally provided threat intelligence. In doing so, these solutions help organizations proactively identify high-risk users, actions, and configurations that could lead to data loss or compromise if undetected. Companies like Palo Alto Networks, NortonLifeLock (formerly Symantec), and others offer malware and threat protection solutions for cloud-based applications.
Encryption is a central component of every cloud security strategy, as you read throughout this book. In cloud application architectures, encryption plays a huge role in securing data at rest and data in transit.
Application encryption at rest involves encrypting sensitive data at the file/object, database, volume, or entire instance level. Encryption at the file/object or database level allows customers to encrypt only their most sensitive information or data that has specific regulatory requirements around encryption. Volume encryption is similar to disk encryption in noncloud environments and involves encrypting the entire volume (or drive) and all of its contents. Instance encryption protects the entire virtual machine, its volumes, and all of its data; instance encryption protects all of an application’s data, both at runtime and when the instance is at rest on disk.
Encryption in transit typically involves either TLS or VPN technologies; both are discussed in Chapter 2. TLS encrypts traffic within an application and between an application server and a client’s browser. Using TLS helps maintain the confidentiality and integrity of data as it moves across a network. A VPN creates a secure network tunnel between the client and the application, effectively bringing the client’s machine into the trusted boundary of the application. VPNs may use the TLS protocol, but take security a step further by creating a private channel for all communications rather than merely encrypting individual data components.
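On the client side, TLS protection starts with a properly configured context. Python's `ssl.create_default_context()` enables certificate verification and hostname checking by default; pinning a minimum protocol version, as sketched here, keeps legacy protocols out of the picture.

```python
import ssl

# A client-side TLS context with sensible defaults: certificates are
# verified against trusted CAs and hostnames are checked.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0/1.1

# A client would then wrap its socket before talking to the application:
#   with socket.create_connection((host, 443)) as sock:
#       with context.wrap_socket(sock, server_hostname=host) as tls:
#           ...  # all traffic on this channel is now encrypted
```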
Sandboxing is the process of isolating an application from other applications and resources by placing it in a separate environment (the sandbox). By isolating the application, errors or security vulnerabilities in that application are also isolated within the sandbox, thus protecting the rest of the environment from harm. Sandboxes can either mirror the full production environment or be limited to a stripped-down set of resources and are commonly used to run untrusted (or untested) code and applications in a safe manner. Sandboxing is tremendously important in cloud environments, where customers don’t have the ability to physically separate resources.
Application virtualization and orchestration are key concepts that center around bundling and using application components, but with different purposes.
Application virtualization is the process of encapsulating (or bundling) an application into a self-contained package that is isolated from the underlying operating system on which it is executed. This form of sandboxing allows the application to run on a system without needing to be installed on that system, which enables running the target application on virtually any system — even ones with operating systems that the application wasn’t built to run on. From the user’s perspective, the application works just as if it were running on its native OS, much like hypervisors trick virtual machines into thinking they’re running directly on hardware.
Application virtualization benefits cloud users by providing the ability to test applications in a known environment without posing risk to the rest of the environment. In addition, application virtualization allows applications to run in environments that they couldn’t function in natively — for example running Windows applications in Mac, or vice versa. Another notable benefit to cloud customers is that application virtualization uses fewer resources than virtual machines, as only the bare minimum resources needed to operate the application are bundled in the virtualized application.
It should come as no surprise that where there are benefits, there are also drawbacks or things to consider. Developers should be aware that applications that require heavy integration with the OS or underlying hardware are not suitable for virtualization. Additionally, application virtualization adds considerable software licensing challenges — both the virtualized application and its host system must be correctly licensed.
Application (or service) orchestration is the process of bundling and integrating two or more applications or services to automate a process. Orchestration involves configuring, managing, and coordinating a workflow between multiple systems and software components in an automated fashion. The objective of orchestration is to use automation to align your technology stack with a particular set of business needs or requirements. By automating the configuration and management of disparate applications and services, orchestration allows organizations to spend less time managing important, yet time-intensive, tasks.
Orchestration can be used to automate many different processes. In cloud, orchestration can be used to provision resources, create virtual machines, and several other tasks and workflows. Several CSPs offer cloud orchestration services, with AWS CloudFormation being among the most popular.
Managing and controlling access to your application and its data is front and center when it comes to application security. Identity and access management (IAM) solutions help you uniquely identify users, assign appropriate permissions to those users, and grant or deny access to those users, based on their permissions. Several components make up an IAM solution. I introduce the foundations of identification, authentication, and authorization in Chapter 5. In this section, you explore these topics further.
The concept of identity federation (discussed in Chapter 5) is pivotal in cloud environments, where customers often manage user identities across multiple systems (on-prem and cloud-based). Federated identity means that a user’s (or system’s) identity on one system is linked with their identity on one or more other systems. A federated identity system allows reciprocal trust access across unrelated systems and between separate organizations.
Federated identity management is enabled by having a common set of policies, standards, and specifications that member organizations share. This common understanding forms the basis for the reciprocal trust between each organization and establishes mutually agreed-upon protocols for each organization to communicate with one another. Organizations use multiple common standards (or data formats) to meet their federated identity goals. SAML, OAuth, and OpenID are the most common, and are discussed in the following sections.
Security Assertion Markup Language, or SAML, is an XML-based open standard used to share authentication and authorization information between identity providers and service providers. In short, SAML is a markup language (that’s the ML) used to make security assertions (there’s the SA) about a party’s identity and access permissions. In a federated system, the service provider (or the application being accessed) redirects the user’s access request to an identity provider. The identity provider then sends the service provider SAML assertions that include all the information needed for the service provider to identify and authorize the user’s access.
SAML is managed by a global nonprofit consortium known as OASIS (or the Organization for the Advancement of Structured Information Standards), which adopted SAML 2.0 in 2005.
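To make the assertion concept tangible, here is a heavily trimmed SAML-style assertion and a parser for it. Real SAML 2.0 assertions are digitally signed and carry far more metadata (conditions, audiences, timestamps); the subject and attribute values below are invented for illustration.

```python
import xml.etree.ElementTree as ET

SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

# The identity provider asserts who the subject is and what they may do...
assertion_xml = f"""
<Assertion xmlns="{SAML_NS}">
  <Subject><NameID>alice@example.com</NameID></Subject>
  <AttributeStatement>
    <Attribute Name="role"><AttributeValue>editor</AttributeValue></Attribute>
  </AttributeStatement>
</Assertion>
"""

def parse_assertion(xml_text: str) -> dict:
    """...and the service provider parses those assertions to identify
    and authorize the user without holding credentials itself."""
    root = ET.fromstring(xml_text)
    ns = {"saml": SAML_NS}
    return {
        "subject": root.findtext("saml:Subject/saml:NameID", namespaces=ns),
        "role": root.findtext(
            "saml:AttributeStatement/saml:Attribute/saml:AttributeValue",
            namespaces=ns),
    }

identity = parse_assertion(assertion_xml)
```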
OAuth is an open standard that applications can use to provide clients with secure access delegation. In other words, OAuth works over HTTPS (secure) and issues access tokens rather than using credentials (like username and password) to authorize applications, devices, APIs, and so on. You might see OAuth in action with applications like Google or Facebook, which use OAuth to allow you to share certain information about your account with third parties without sharing your credentials with that third party.
OAuth 2.0 was released in 2012 and is the latest version of the OAuth framework. It’s important to note that OAuth 1.0 and OAuth 2.0 are completely different, cannot be used together, and are not backward compatible with one another.
OpenID is an open standard and a decentralized authentication protocol that allows users to authenticate to participating applications (known as relying parties). OpenID allows users to log in to multiple separate web applications using just one set of credentials. Those credentials may be username and password, smart cards, or other forms of authentication. Relying parties that participate in the OpenID community thus are able to manage user identification and authorization without operating their own IAM systems.
Cloud developers can leverage the OpenID standard as a free identification and authentication mechanism for their applications. In doing so, developers allow users of their application to sign in using an existing account and credentials.
The OpenID Foundation is a nonprofit standards development organization that oversees and promotes the OpenID framework. The final version of the original OpenID Authentication standard is OpenID 2.0, which was published in 2007; it has since been largely superseded by OpenID Connect, an authentication layer that the foundation built on top of OAuth 2.0.
In a federated system, an identity provider is a trusted third-party organization that stores user identities and authenticates your credentials to prove your identity to other services and applications. If you’ve ever visited a retail website and been prompted to “Sign in with Facebook,” then you have seen a real-life identity provider in action. In this example, Facebook serves as the online store’s trusted identity provider and uses your Facebook account info to authenticate you on behalf of that retailer. Instead of Facebook passing your account info to the retailer, it uses your verified Facebook credentials to tell the retailer that you are who you say you are. This verification saves you the trouble of creating a new account just to buy that pair of jeans and saves the retailer the trouble of storing and securing your account information; everybody wins!
Tons of identity providers work on-prem and in the cloud. Some popular identity providers include (in no particular order) Okta, Ping Identity, Microsoft’s Active Directory Federation Services and Azure AD, and the social login providers Google and Facebook.
Using a trusted identity provider can offer a lot of security benefits. Not only does it offload the need for your application to manage user identities, but it also provides a centralized audit trail for all access to your application; reliable identity providers keep a historical record of all access events, which is a major benefit when demonstrating compliance with various regulatory requirements. In addition, a good identity provider provides robust security around its identity management systems, allowing your development team to focus more on creating great applications and less on foundational access security. Whether your organization uses an identity provider or manages identities internally, it’s important that you give strong consideration to application identity management as part of your cloud security strategy.
Single sign-on, commonly referred to as SSO, is an access control property that allows a single user authentication to be used to sign on to multiple separate but related applications. SSO allows a user to authenticate a single time to a centralized identity provider and then use tokens from that authentication to access other applications and systems without repeatedly signing in.
In the bad old days of the early Internet, it was common for organizations to require users to manage separate accounts for their desktops, email accounts, time-keeping systems, and so on. In many cases, each system had different password complexity or password rotation requirements. This setup not only wasted users’ time, but also led to forgotten passwords — and even worse, written-down passwords! SSO is a saving grace for users and help desks alike.
Google applications are a great demonstration of SSO in action. When you sign in to your Google account, you’re able to access Gmail, Drive, YouTube, and all other Google services, without having to sign in again and again. Google apps are a pure example of single sign-on.
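The mechanics can be sketched as follows: the user authenticates once to the identity provider, receives a signed token, and each participating application verifies that token instead of asking for credentials again. The Python toy below is purely illustrative — the class, secret, and usernames are all invented, and real SSO deployments use standards such as SAML or OpenID Connect rather than this homegrown scheme:

```python
import hashlib
import hmac
import json

class ToyIdentityProvider:
    """Toy SSO identity provider: signs a token once at login;
    every participating app verifies it with the shared secret."""
    def __init__(self, signing_secret: bytes):
        self._secret = signing_secret
        self._users = {"alice": "correct horse battery staple"}

    def sign_on(self, username: str, password: str) -> str:
        if self._users.get(username) != password:
            raise PermissionError("bad credentials")
        payload = json.dumps({"sub": username}).encode()
        sig = hmac.new(self._secret, payload, hashlib.sha256).hexdigest()
        return f"{payload.hex()}.{sig}"

    def verify(self, token: str) -> dict:
        payload_hex, sig = token.split(".")
        payload = bytes.fromhex(payload_hex)
        expected = hmac.new(self._secret, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            raise PermissionError("invalid token")
        return json.loads(payload)

# One login, reused by two "applications" that trust the same provider:
idp = ToyIdentityProvider(b"shared-signing-secret")
token = idp.sign_on("alice", "correct horse battery staple")
mail_user = idp.verify(token)["sub"]   # mail app accepts the token
drive_user = idp.verify(token)["sub"]  # drive app accepts the same token
```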
Multifactor authentication (MFA) is an authentication method requiring a user to present two or more factors (forms of evidence) to the authentication mechanism; the factors come in three categories: knowledge (something you know), possession (something you have), and inherence (something you are).
Two-factor authentication (2FA) is the standard application of MFA and should really be the default access method for sensitive systems and applications, as well as all privileged access. Most cloud providers and many third-party access management platforms support 2FA. In addition to passwords, they usually require “something you have,” such as a one-time code from an authenticator app, a hardware token, or a registered mobile device.
There was a time not long ago when popular belief was that the cloud was inherently insecure. That belief has mostly been dispelled, as mature CSPs have demonstrated an ability to secure systems and data better than many other organizations. The one issue that continues to haunt security professionals, including those in cloud security, is user error. Enter the CASB! A cloud access security broker, or CASB (pronounced kaz-bee), is a software application that sits between cloud users and cloud services and applications, while actively monitoring all cloud usage and implementing centralized controls to enforce security (see Figure 6-2). A CASB may be used to mitigate high-risk security events or to prevent such events altogether by enforcing security policies, stopping malware, and alerting security teams of potential security events.
A CASB can serve many purposes, but at a minimum, a CASB addresses four pillars: visibility, compliance, data security, and threat protection.
From a security perspective, most CASBs are able to enforce policies related to authentication and authorization (including SSO), logging, encryption, malware prevention, and more.
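As an illustration, you can think of a CASB’s enforcement point as a rules engine sitting in the request path between users and cloud services. The Python sketch below is a toy model — the policy rules, service names, and users are all invented for the example — that allows or blocks each cloud request against a centralized policy and logs every decision for audit:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CloudRequest:
    user: str
    service: str      # e.g. "crm", "file-share"
    action: str       # e.g. "download", "share-externally"
    mfa_passed: bool

class ToyCasb:
    """Toy policy enforcement point: every request to a cloud service
    is checked against centralized rules before being forwarded, and
    every decision is recorded for audit."""
    def __init__(self):
        self.audit_log = []
        # Invented example rules: external sharing from the file-share
        # service is banned outright, and downloads require MFA.
        self.blocked_actions = {("file-share", "share-externally")}
        self.mfa_required_actions = {"download"}

    def enforce(self, req: CloudRequest) -> bool:
        allowed = True
        if (req.service, req.action) in self.blocked_actions:
            allowed = False
        elif req.action in self.mfa_required_actions and not req.mfa_passed:
            allowed = False
        self.audit_log.append((req.user, req.service, req.action, allowed))
        return allowed

casb = ToyCasb()
ok = casb.enforce(CloudRequest("alice", "crm", "download", mfa_passed=True))
blocked = casb.enforce(
    CloudRequest("bob", "file-share", "share-externally", mfa_passed=True))
```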
The CASB market has exploded in recent years. Some popular names in the space include Microsoft, McAfee, Netskope, and Bitglass.
The three primary types of CASB solutions are forward proxy, reverse proxy, and API-based deployments.