DOMAIN 4
Cloud Application Security

CLOUD APPLICATION SECURITY CAN be a neglected part of cloud security. Often, security focuses only on the controls associated with identity and access management (IAM), networking, servers, and other infrastructure components. But if the application software running on these components is insecure, then the entire enterprise is insecure. This chapter will discuss the processes needed to secure the software through the application development lifecycle.

ADVOCATE TRAINING AND AWARENESS FOR APPLICATION SECURITY

Secure software development begins with the development of a culture of security and the implementation of a secure software development lifecycle (SSDLC). Without an appropriate SSDLC, efforts to develop secure applications are likely to fail. A secure culture is not developed through occasional efforts but through regular, deliberate action. The three parts identified by the Software Assurance Forum for Excellence in Code (SAFECode) are executive support and engagement, program design and implementation, and program sustainment and measurement. Training and awareness are important steps in developing a security culture and in the development of secure cloud applications, an approach sometimes called DevSecOps.

Cloud Development Basics

According to the Cloud Security Alliance (CSA) and SAFECode, developing collective responsibility is essential, and challenging, when building a security-conscious application development program. This effort can be broken down into these three parts:

  • Security by design: This implies that security is part of every step in the process. Security is not a bolt-on after application release or in response to a security flaw, but part of the process from application feasibility to retirement.
  • Shared security responsibility: The idea is that security is the responsibility of everyone from the most junior member of the team to senior management. No one says that security is not their responsibility. In fact, security is the result of individual responsibility and trust.
  • Security as a business objective: Security is not something in addition to what we do. Instead, secure computing is what we do. Security is part of all business objectives.

Each of these basic concepts requires a clear understanding of organizational culture, security vulnerabilities, and organizational vision and goals.

Common Pitfalls

The first pitfall is not getting senior management's initial and ongoing support. Efforts to instill a security culture are unlikely to be successful unless senior leadership supports these efforts—through their approval and adherence to security policy as well as funding and staffing security initiatives that are part of organizational goals.

The next pitfall is failing to understand organizational culture. Even companies in the same industry have very different cultures. Building and sustaining a security culture requires careful planning that recognizes an organization's culture and then designs program-level efforts that work within that culture. In addition, program-level efforts must be reviewed regularly to ensure that they remain aligned with business objectives and cultural realities.

Another pitfall is staffing. The size of an organization may limit the number of security experts available. These experts may then serve as resources to multiple development processes rather than as members of a single team. If the organization is large, development teams may use security experts as shared resources or may include them as team members. In either situation, it can be challenging to ensure that the development of security-aware applications is consistent across the organization.

Not having a framework for secure software development will also cause difficulties. A mature organization will have a well-defined process that integrates security as part of each step. The lack of a framework or process leads to ad hoc security and does not support a security-aware organization.

Finally, in times of budget constraints, the training budget is often trimmed of nonessential training. It is easy to cut training without immediate impact. However, a security-conscious organization will find ways to continue security awareness training and secure computing practices. There are many options that can reduce training costs while retaining many training options.

Potential security training options include subscriptions to online security courses, as well as online asynchronous and synchronous training. Each of these options eliminates travel expenses. This can trim the budget significantly, while maintaining training operations. One other option is to bring the trainer to the work location. If a large number of people need the same training, bringing the trainer to you rather than sending the people to training can be cost-effective. These options can keep training active while trimming the budget.

Common Cloud Vulnerabilities

Common cloud vulnerabilities include data breaches, loss of data integrity, insecure application programming interfaces (APIs), and denial-of-service (DoS) attacks. Each of these is present because of the extensive use of networks in cloud computing. Anytime an application is accessed or transmits data over the Internet, it is operating in a hostile environment.

Two organizations that provide information on security threats are the CSA and the Open Web Application Security Project (OWASP). Both regularly publish research on top threats in cloud computing and related technologies, and both organizations are worth regular review. The top four risks from each organization are provided as an example of their work.

CSA Top Threats to Cloud Computing

For the past several years, the CSA (cloudsecurityalliance.org) has published the top threats to cloud computing. The number of threats varies, and the publications are entertainingly named. From 2016 to 2018, it was the Treacherous 12. From 2019 to the present, it is the Egregious 11. Regardless of the name, these lists are a great security resource and should be reviewed each year. Here are the top four threats identified in 2020's Egregious 11:

  • Data breaches
  • Misconfiguration and inadequate change control
  • Lack of cloud security architecture and strategy
  • Insufficient identity, credential, access, and key management

None of these threats should be surprising. Protection of data, which may be put at risk through misconfiguration, poor access control, and other failures, tops the list. The top items on the list also suggest that cloud security needs to become more mature in many organizations.

OWASP Top 10

The OWASP Top 10 (owasp.org) is a periodically updated list of the top threats in web application security. The latest version is from 2017; the previous version was released in 2013. The top 10 security risks lead to a variety of issues, including data breach, loss of data integrity, and DoS. Essentially, each item of the CIA triad can be affected by one or more of these risks. The following were the top four in 2017:

  • Injection flaws, including SQL, NoSQL, OS, and LDAP injection
  • Broken authentication
  • Sensitive data exposure
  • XML external entities

The most disturbing thing about this list is that the items on the list rarely change. Sometimes all that changes is the order. For example, numbers 1 and 2 in 2013 and numbers 1 and 2 in 2017 are the same. In four years, the top risks were unchanged. Number 3 in 2013 moved to number 7 in 2017, while number 6 became number 3.

We know what the risks are: organizations like the CSA and OWASP publish them. As expected, the overlap between the lists is high. Now it is the responsibility of security professionals to address these risks.

DESCRIBE THE SECURE SOFTWARE DEVELOPMENT LIFECYCLE PROCESS

The software development lifecycle (SDLC) has been well understood for many years. The SSDLC enhances the SDLC process. The SDLC has several phases: Requirements, Design, Development, Testing, Deployment, and Operations and Maintenance (O&M). In the SSDLC, these phases are enhanced to include specific security-focused steps that enable security by design. There are many resources for implementing an SSDLC, including the Microsoft Security Development Lifecycle (SDL) and NIST SP 800-160, Systems Security Engineering. Two resources that this section will discuss are the NIST Secure Software Development Framework and the OWASP Software Assurance Maturity Model (SAMM). This section will also explore business requirements, phases, and methodologies related to the SSDLC.

NIST Secure Software Development Framework

Similar to the popular NIST Cybersecurity Framework (CSF), the NIST Secure Software Development Framework (SSDF) defines and describes secure software development practices (nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.04232020.pdf). This framework is useful for developing secure traditional IT systems, as well as Industrial Control Systems (ICS), IoT systems, and cyber-physical systems (CPS).

The SSDF can be adapted to existing SDLCs, supports the use of modern software development techniques such as agile, and leverages guidelines and standards from other organizations.

The SSDF is organized into these four groups:

  • Prepare the Organization: This includes people, processes, and technology.
  • Protect the Software: Protect it from tampering and unauthorized access.
  • Produce Well-Secured Software: This means software with minimal vulnerabilities.
  • Respond to Vulnerabilities: This includes preventing them in future releases.

OWASP Software Assurance Maturity Model

The OWASP SAMM can also be integrated into an existing SDLC. SAMM (owaspsamm.org) consists of these four steps:

  1. Assess the current security posture.
  2. Define a strategy (security target).
  3. Implement a roadmap.
  4. Offer prescriptive advice on activity implementation.

SAMM provides assessment tools and guidance to improve an organization's security posture and supports the development of secure software and systems.

Business Requirements

Mature software development shops utilize an SDLC because it saves money and supports repeatable, quality software development. Studies show that the later in the development process an issue is found, the more expensive it is to fix. Adding security to the SDLC benefits secure software development. While the SSDLC adds some up-front cost to the development of new software applications and the modification of existing applications, identifying security vulnerabilities early lowers overall development costs. The expected return is software solutions that are more secure against attack, reducing the exposure of sensitive business and customer data. Bolting on security after the coding or deployment phases simply increases the cost of that security while limiting its effectiveness.

However, an SSDLC is fully successful only if the integration of security into an organization's existing SDLC is required for all development efforts. Only when secure development is a requirement of a business will security by design occur consistently.

Phases and Methodologies

There are many models for SDLCs, from linear and sequential approaches such as waterfall to iterative and incremental approaches such as spiral, agile, and most recently Development and Operations (DevOps), which increases the speed and frequency of deployment.

There are several primary stages to any SDLC, including an SSDLC. These are as follows:

  • Requirements: This phase includes all parts of the planning and can include feasibility studies, gathering business and security requirements, and some high-level design of the software solution.
  • Design: The design step starts as a high-level design and becomes increasingly detailed as the stage progresses. The design must include the security requirements identified in the requirements phase. Design can also include the design of testing requirements, including test cases and acceptance thresholds, to ensure all business and security requirements are met. Test cases should be tied to specific requirements identified in the requirements stage, and tests should be developed to verify every requirement.
  • Development: The coding phase is the creation of the software components as well as the integration or build of the entire solution. Unit testing is generally part of the coding phase.
  • Testing: This phase is the initial testing of the solution built as well as more focused testing that occurs to validate the solution before the final phase.
  • Deployment: Deployment is the work associated with the initial release of the software. Part of the effort in this stage is to ensure that default configurations conform to security requirements and best practices, including configuration of APIs and IAM. These steps reduce the risk of credential compromise and protect the processes for account creation, maintenance, and deletion. Finally, a securely developed application service will often use multiple cloud services and APIs when deployed. It is good practice to consider each service that is approved for use by a business and create a standard configuration guide for each of those services, ensuring a secure configuration that adheres to company standards.
  • O&M: This is often the longest phase as it encompasses everything that happens after the release of a software solution. This stage includes any operational, monitoring, and maintenance needs of the solution.

It is easy to see how each phase of the SDLC can be divided into ever finer phases. But these steps capture the flow of the work. At each step, security requirements are an important part of the overall solution. Following these steps leads to a consistent and repeatable process. These phases are supported in older development methodologies such as the waterfall and spiral models as well as more modern methodologies such as agile and DevSecOps.

APPLY THE SECURE SOFTWARE DEVELOPMENT LIFECYCLE

SSDLC is a collection of best practices focused on adding security to the standard SDLC. Applying an SSDLC process requires dedicated effort at each phase of the SDLC, from requirements gathering to deployment and maintenance. An SSDLC requires a change of mindset by the development teams, focusing on security at each phase of the project instead of just focusing on functionality.

Identifying security issues early reduces the risk of discovering security vulnerabilities late in the development process and minimizes the impact when they are found.

Having an SSDLC is beneficial only if it is implemented and used consistently, and it does not eliminate traditional security tests, such as penetration tests. Instead, it empowers developers to build secure applications from the very beginning of a project. Additionally, some standards and regulations, such as the General Data Protection Regulation (GDPR), Payment Card Industry (PCI) standards, and ISO/IEC 27001, require security (data safeguards) to be incorporated into the development process.

Avoid Common Vulnerabilities During Development

Common vulnerabilities and risks are listed in many places. Perhaps the most widely used list of risks in web-based development is the OWASP Top 10, which is updated regularly to ensure the most common risks are known to developers. By learning this list, developers can actively design and develop systems with fewer vulnerabilities. The 2017 OWASP Top 10 list is discussed in the “Common Cloud Vulnerabilities” section. The first five vulnerabilities are the following:

  • Injection: SQL, NoSQL, LDAP, and OS injection errors allow the attacker to run unintended commands and access data without authorization.
  • Broken authentication: When authentication and session management are incorrectly implemented, it leads to compromise of passwords, keys, and tokens.
  • Sensitive data exposure: Web applications and APIs may poorly protect sensitive data, both at rest and in transit when encryption is not used.
  • XML external entities: Evaluation of external entities in XML documents may disclose internal files, allow remote code execution, and lead to DoS attacks.
  • Broken access control: Poor access control may allow authenticated users to view unauthorized and sensitive data, execute unauthorized functions, change access rights, and so on.
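The first item on the list, injection, is the most directly preventable in code. The standard defense is to bind user input as query parameters rather than concatenating it into the query string. A minimal sketch using Python's built-in sqlite3 module (the table, columns, and data are illustrative, not from any real system):

```python
import sqlite3

# In-memory database with an illustrative users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user(username: str):
    # Parameterized query: the input is bound as data via the ? placeholder,
    # never concatenated into the SQL string, so a payload such as
    # "' OR '1'='1" cannot alter the query's structure.
    cur = conn.execute(
        "SELECT username, role FROM users WHERE username = ?", (username,)
    )
    return cur.fetchall()

print(find_user("alice"))        # [('alice', 'admin')]
print(find_user("' OR '1'='1"))  # [] -- the injection attempt matches nothing
```

The same placeholder technique applies to NoSQL, LDAP, and OS command interfaces: keep the command structure fixed and pass untrusted input only through the API's data-binding mechanism.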

This OWASP Top 10 list does not change significantly between releases and has a few common themes, including misconfiguration, improper use of tools, and poor input validation. Each of these can lead to data breaches, account takeover, and disclosure of sensitive data. Keeping this list in mind aids the identification of security vulnerabilities and the development of code with fewer of them, which in turn leads to secure software and systems.

Cloud-Specific Risks

There are a number of additional risks that apply specifically to cloud computing. Specific information on cloud security issues is provided in the Egregious 11 from the CSA (cloudsecurityalliance.org). The Egregious 11 includes the following:

  1. Data breaches
  2. Misconfiguration and inadequate change control
  3. Lack of cloud security architecture and strategy
  4. Insufficient identity, credential, access and key management
  5. Account hijacking
  6. Insider threat
  7. Insecure interfaces and APIs
  8. Weak control plane
  9. Metastructure and applistructure failures
  10. Limited cloud usage visibility
  11. Abuse and nefarious use of cloud services

Cloud-specific vulnerabilities identified by the CSA's Egregious 11 can lead to violations of one or more of the CIA triad requirements of confidentiality, integrity, and availability. The risk of data breach, compromise, and exposure of sensitive data can lead to violations of regulations such as HIPAA and may lead to fines and other penalties. At the least, a well-publicized data breach can lead to a loss of reputation and potentially a loss of business and market share. Loss of data integrity and availability have similar consequences.

Risks beyond those discussed by the CSA will exist for specific companies at different levels of security maturity, in various industries, and in different geographic locations. Each organization must know their specific additional requirements.

The first few CSA Egregious 11 security issues will provide the framework for our discussion. Each organization should review all the CSA security issues and geographic, industry, and regulatory requirements as relevant.

CSA Security Issue 1: Data Breaches

Confidential and restricted data is a key target of hackers. It can lead to loss of trade secrets and other intellectual property (IP), strategic processes, and customer data. This can harm the organization's reputation, lead to regulatory fines, and expose the company to legal and contractual liability.

In the requirements gathering phase, the data to be protected and the users that need access are considered. Involvement of a security architect can be beneficial at this stage.

In the design and development phases, specific strategies to protect the data can be developed, such as data encryption and methods of authentication and authorization. Using existing infrastructure to provide these services leverages existing company assets.

During testing, each requirement, including security requirements, must be exercised. Traditional testing can be extended with penetration testing and vulnerability scanning in our effort to develop secure software and systems.

During deployment, the default setting in your system must be set for maximum security. The default state for any deployment must be a secure state. During O&M, organizations may modify the default settings and introduce vulnerabilities. This potential must be monitored and reported to ensure the system remains in a secure state and any changes are made with full knowledge of potential risks. In addition, monitoring for new vulnerabilities, threats, and the resulting risks is part of the O&M process. It is also during O&M that incident response and regular IAM reviews become part of the process.

CSA Security Issue 2: Misconfiguration and Inadequate Change Control

At deployment and during O&M, securely developed software can become insecure. The areas to watch are data storage and transmission, excessive permissions, default credentials and settings, and standard security controls being disabled.

Storage will be considered at the requirements, design, development, and test stages. But at deployment and O&M, the actual storage, software, and systems must be configured and monitored. The default configuration must be secure. Any changes to the default configuration must be monitored and the user provided with warnings when changed. A cloud environment should be configured to enforce these configurations.

This issue is made even more serious by the reduced visibility of a cloud environment. When all components are on premises, the infrastructure is known and easily reconfigured, monitored, and maintained. In the cloud, some of this visibility and control transfers to the cloud service provider (CSP). Extra steps must be taken to use the cloud services to maintain as much visibility and control as possible.

Many developers are new to the cloud environment, and even experienced developers can make errors in configuration that expose systems and data to unauthorized users. This exposure, when occurring in a cloud environment, can lead to worldwide exposure of sensitive data. Continuous automated scanning of cloud resources can prevent or mitigate changes in real time.
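The continuous automated scanning described above can be as simple as comparing each resource's observed settings against a secure baseline and flagging any drift. A minimal sketch in Python; the setting names and values are hypothetical and not tied to any specific provider's API:

```python
# Hypothetical secure baseline for a cloud storage resource. In practice,
# the keys would come from your provider's configuration API and the
# baseline from your organization's standard configuration guide.
SECURE_BASELINE = {
    "public_access_blocked": True,
    "encryption_at_rest": True,
    "versioning_enabled": True,
}

def audit_config(observed: dict) -> list[str]:
    """Return findings wherever observed settings drift from the baseline."""
    findings = []
    for key, required in SECURE_BASELINE.items():
        if observed.get(key) != required:
            findings.append(f"{key}: expected {required}, found {observed.get(key)}")
    return findings

# A misconfigured resource: public access is open, versioning is unset.
bucket = {"public_access_blocked": False, "encryption_at_rest": True}
for finding in audit_config(bucket):
    print(finding)
```

Run on a schedule (or triggered by configuration-change events), such a check surfaces drift in near real time; remediation can then be manual via change control or automated for high-severity findings such as public exposure.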

Change control is a key part of preventing configuration changes that make a system less secure. Security expertise on the change control committee is an important step in the secure software process. In addition to preventing the introduction of new vulnerabilities, change control prevents the reintroduction of previously corrected vulnerabilities.

CSA Security Issue 3: Lack of Cloud Security Architecture and Strategy

Companies are moving to the cloud with increasing speed. Often the migration outpaces an organization's ability and preparation for cloud deployment. It is advisable to pause and ensure that the organization has a security-focused architecture in place and has developed a cloud strategy before cloud development and migration begins. If the strategy and architecture are determined after cloud development or migration, similar to adding security to an application that is already deployed, the ability to secure software and systems and bring them into compliance will be more expensive and less effective.

The size of the organization does not affect this issue. Organizations of all sizes need a cloud security architecture and strategy to migrate, develop, and operate in the cloud securely and safely. Not doing so can have severe operational and financial repercussions. This control exists prior to the phases of the SSDLC and impacts each phase by requiring that all phases conform to the organization's security architecture and strategy policies.

CSA Security Issue 4: Insufficient Identity, Credential, Access, and Key Management

This issue includes several concerns such as a scalable IAM system, the use of multifactor authentication, protection and rotation of cryptographic keys, protection of credentials, and enforcement of password policies.

Addressing this issue begins at the requirements phase. Here, the data and IAM considerations are first addressed. A requirement for strong passwords, cryptographic protections, multifactor authentication, and so on should be part of the requirements. The design phase continues this process by ensuring that all requirements are part of the design. Development then implements all parts of the secure design, and the test phase ensures that all requirements are met, fully designed, and correctly implemented.

During deployment and O&M, it is important to consider that a cloud solution may have thousands of individuals capable of establishing accounts through the on-demand self-service model over time. In a business environment, the turnover will require the creation of new accounts and the disabling and eventual deletion of other accounts. The use of the cloud service must be carefully configured and monitored to prevent escalation of privileges and access to sensitive data by unauthorized individuals.
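One concrete piece of the credential-protection requirement above is never storing passwords in recoverable form. A common approach, shown here as a sketch using Python's standard library (the iteration count and password strings are illustrative), is a salted, slow key-derivation function:

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A per-user random salt defeats precomputed (rainbow-table) attacks;
    # a high iteration count slows offline brute-force attempts.
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

The same principle of protecting, salting, and rotating secrets extends to API keys and cryptographic keys, which in cloud deployments are typically held in a managed key or secrets service rather than in application code.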

Quality Assurance

In a traditional development process, quality assurance (QA) was the testing phase. Separate from the development team, QA was the check before deployment that ensured that requirements were met and that the code was bug-free. QA was also part of the configuration management process, testing patches prior to deployment. In many organizations, that remains the role of QA.

In the modern DevOps or DevSecOps, QA is part of the process and is embedded within the DevOps team. QA at this point isn't about the application service developed. Instead, QA is centered around service delivery of the application service developed. QA occurs at each phase, ensuring continuous improvement and quality tracking. Testing (often automated testing) is tied to both functional and security requirements developed in the requirements phase and specified by the security architecture and strategy.

For QA to be effective, testing should go beyond functional and requirements testing: QA should be involved in load testing, performance testing, stress testing, and vulnerability management. To be even more effective, testing can be automated.

In some DevOps environments, it is the QA team that pushes out the code. So, that team is responsible for ensuring that the code delivered to the customer through the cloud environment is quality code and is both defect-free and secure. The QA team then takes a pivotal role in developing secure software and systems. Doing this requires that they have visibility into all phases of the process.

Threat Modeling

Threat modeling has five major steps that must be integrated into the SSDLC. The process can be greatly enhanced by using a threat model, such as STRIDE. These steps should be performed early in the SSDLC (in the requirements phase) and updated throughout the lifecycle. These steps are as follows:

  1. Define security requirements. If requirements are known, clear, and focused, the remaining steps are simpler to implement. Vague and broad requirements are difficult to work to and achieve.
  2. Create application overview. In this step, we look at the application architecture, application characteristics, and the users of the system to more easily identify threats.
  3. Identify threats. When we understand the characteristics of our application and our security requirements, identification of threats is more easily accomplished.
  4. Mitigate threats. Once we identify threats, we can start identifying controls that will aid in threat mitigation.
  5. Validate threat mitigation. It is important to monitor and review controls to ensure they adequately address identified threats and reduce risk below the organization's risk appetite.

Throughout the phases of the SSDLC, these five steps are refined. One way to identify security requirements is through the use of a threat model, such as the STRIDE model. STRIDE is an acronym for Spoofing, Tampering, Repudiation, Information disclosure, DoS, and Elevation of privilege. These are common threat categories to consider in this process. See Table 4.1.

TABLE 4.1 The STRIDE Model

LETTER | THREAT                  | PROPERTY VIOLATED | DEFINITION
S      | Spoofing identity       | Authentication    | Pretending to be someone or something else
T      | Tampering with data     | Integrity         | Modifying data or memory
R      | Repudiation             | Nonrepudiation    | Claiming to not have done something
I      | Information disclosure  | Confidentiality   | Providing information to unauthorized parties
D      | Denial of service       | Availability      | Exhausting resources needed for a service
E      | Elevation of privilege  | Authorization     | Allowing someone to perform unauthorized tasks
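Table 4.1 is simple enough to encode as data, which lets lightweight threat-modeling tooling map each identified threat category to the security property it puts at risk. A sketch (the function name is illustrative):

```python
# The STRIDE model from Table 4.1, encoded as a mapping from threat
# category to the security property each category violates.
STRIDE = {
    "Spoofing identity": "Authentication",
    "Tampering with data": "Integrity",
    "Repudiation": "Nonrepudiation",
    "Information disclosure": "Confidentiality",
    "Denial of service": "Availability",
    "Elevation of privilege": "Authorization",
}

def property_at_risk(threat: str) -> str:
    """Look up which security property an identified threat violates."""
    return STRIDE.get(threat, "Unknown threat category")

print(property_at_risk("Tampering with data"))  # Integrity
```

During the "identify threats" and "mitigate threats" steps, such a mapping helps ensure each finding is tied to the property, and therefore the class of control, that addresses it.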

Software Configuration Management and Versioning

The purpose of software configuration management (SCM) and versioning is to manage software assets. This can be challenging, as software is almost always developed in a team setting. These teams may be geographically dispersed and working in multiple time zones, and multiple people may be making changes to both configuration and source code files. Managing all of this is essential to ensure that changes are current and accurate.

SCM is important as it can make rolling back changes possible. For example, if a deployed version has a significant flaw, it is possible to redeploy an earlier version of the software while addressing this flaw in development. Versioning can also maintain copies of the configuration and software for deployment to different machines and operating system versions. There may also be different versions for different customers, different countries, different regulations, and so on.

A key role of SCM occurs at deployment and during O&M release management (updates or patches). Prior to formal SCM tools, there were instances where the wrong version of software was released. This can be a costly error, exposing the organization to reputational, operational, regulatory, and contractual issues.

Additionally, SCM allows for configuration audits and reviews by providing the artifacts necessary to ensure processes are followed. Compliance with requirements may be related to an organization's policies, regulatory requirements, or contractual obligations. The ability to quickly and accurately perform audits is an important role of SCM.

Configuration management and versioning is a common practice in all software development environments and is aided through the use of a configuration management database (CMDB). It may be possible to federate the CMDB with both on-premises and cloud-based solutions. The federated CMDB synchronizes across multiple systems and can store the database on premises or in the cloud.

Another approach is to use a single CMDB in which each system's application services are stored separately but managed through a single corporate process.

It is also possible to combine both approaches in a hybrid manner. Each company will have to decide how to manage configuration management. Configuration management is as important for on-premises development as it is for software development in the cloud.

APPLY CLOUD SOFTWARE ASSURANCE AND VALIDATION

Software assurance defines the level to which software is free from vulnerabilities and operates as intended. This assurance is a level of confidence, as the absence of errors cannot be proven. We can also test compliance with requirements, but the possibility exists that a software solution will exhibit unintended behaviors as well. We use methods like an SSDLC to design security into the software solution from the beginning and to implement testing that ensures security goals are met and the software functions as designed and according to requirements.

In the following sections, functional testing is described first. These tests ensure that the software meets functional requirements by doing what it was designed to do. Security testing is discussed next. This testing validates that the software meets its security requirements and operates within the organization's security architecture and strategy. The goal of this testing is to determine whether the software is secure. Again, this is a confidence level regarding the security of the software system developed prior to deployment.

Functional Testing

Functional testing is used to test that the functional specifications of the system, linked to system requirements, are met. The execution of a robust set of test cases, linked to functional requirements, will create a level of confidence that the software operates as intended.

There are many ways to test software. However, there are some common tests that are consistently used leading up to functional testing. The primary categories of testing that lead up to functional testing are unit testing, integration testing, and usability testing.

  • Unit testing: This is testing by a developer on modules being developed as part of a larger system. All paths through the module need to be tested.
  • Integration testing: As modules are combined, integration testing ensures that the modules work together. As additional modules are added, we get ever closer to functional testing.
  • Usability testing: This testing uses customers in a production-like environment to get feedback on the interaction between the user and the system.
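The unit-testing step above can be sketched with Python's built-in `unittest` framework. The function under test is a hypothetical example; the point is one test case per path through the module:

```python
import unittest

def classify_password(password: str) -> str:
    """Toy function under test: classify password strength (illustrative only)."""
    if len(password) < 8:
        return "weak"
    if any(c.isdigit() for c in password) and any(c.isupper() for c in password):
        return "strong"
    return "medium"

class TestClassifyPassword(unittest.TestCase):
    # One test per path through the module, as unit testing requires.
    def test_short_password_is_weak(self):
        self.assertEqual(classify_password("abc"), "weak")

    def test_mixed_password_is_strong(self):
        self.assertEqual(classify_password("Secret123"), "strong")

    def test_plain_password_is_medium(self):
        self.assertEqual(classify_password("lowercaseonly"), "medium")
```

Running `python -m unittest` in the module's directory executes all three path tests; a CI/CD pipeline would run them on every commit.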

As modules are tested, integrated (and tested again), and refined with user feedback, we reach the point where we are ready to perform functional testing on the entire system. When conducting functional testing, there are important considerations that include the following:

  • Testing must be realistic: Many development shops have Dev, Test, Stage, and Prod environments. These environments are called lanes in some organizations. The Dev or development environment can be set up to suit the developers’ needs. However, for the greatest assurance, the Test and Stage environments should be set up as closely as possible to the Prod environment. In many cases, the Test and Stage environments will have live data or older copies of live data to ensure functional requirements are met. Once development is complete, the application moves to the Test environment for testing. Upon successful testing, the application will move to the Stage environment for configuration management and potentially additional testing. Once the next release cycle occurs, the software in the Stage environment moves into production. Any environment with live data (current or older) must be protected as well as the Prod environment to prevent data loss.
  • Acceptance: Testing must be sufficient to guarantee that the application service meets the requirements of the customer and the organization (sometimes they are the same). This means that testing must be designed to exercise all requirements.
  • Bug free: Testing must be sufficient to have reasonable assurance that there are no major bugs in the software. If there are any remaining bugs, they need to be small, rare, and inconsequential.

Once the system passes functional testing, it is ready to follow a QA process to deploy the system. Once deployed, enhancements and bugs will lead to further development. It is important to use the SSDLC process for all further development. This leads to another form of testing, which is regression testing.

Regression testing is done during the maintenance phase of software development to ensure that modifications to the software application (for example, to fix bugs or enhance the software) do not reduce current functionality, add new vulnerabilities, or reintroduce previous bugs and vulnerabilities that have been fixed.

Testing is the way we obtain the confidence or assurance that our software is free of vulnerabilities and functions as required and designed. Testing allows for quality assurance. Adequate testing is important. This requires that adequate time be allocated to find issues, fix them, and test again in an iterative process. In addition, automated testing tools can improve the efficiency and completeness of testing. In a continuous integration/continuous deployment (CI/CD) environment, automated testing becomes a required feature.

Security Testing Methodologies

Security testing is conducted to provide assurance that the organization's security strategy and architecture are followed and that all security requirements have been met. Testing is usually one of three types:

  • White-box testing: Tests the internal structures of the software. This requires access to the software. Static application security testing (SAST) is a form of white-box testing.
  • Gray-box testing: Tests a system with limited information about the application. The tester does not have access to the code but will have knowledge of things such as algorithms and architectures. It is primarily used in integration and penetration testing.
  • Black-box testing: Tests a system with no knowledge of the code, algorithms, or architecture. Dynamic Application Security Testing (DAST) is a form of black-box testing.

There are common tests used in security testing. These happen at different stages of the development process. These include the following:

  • Static Application Security Testing (SAST): This test is able to do a static analysis of source code. Source code is available for internally developed software systems. Static testing will not find all vulnerabilities. SAST is a good initial test to eliminate common vulnerabilities that can be found in this manner. As the code is known, this is a form of white-box testing. SAST tests can be run prior to deployment once a testable amount of code is available and can be run throughout the remaining steps in the SSDLC.
  • Dynamic Application Security Testing (DAST): This tool is used primarily as a web application vulnerability scanner. It is a form of black-box testing. DAST is known for having poor risk coverage, unclear reporting, and slow performance. So, it should not be the only testing tool used. When used, it should be used as early in the development process as practical. Once an application is deployed, a DAST is not your best choice.
  • Interactive Application Security Testing (IAST): IAST is newer than SAST and DAST and provides a gray-box testing approach. IAST places an agent within an application and performs real-time analysis of application traffic and performance, detecting potential security issues. It can also analyze code as well as runtime behavior, HTTP/HTTPS traffic, frameworks, components, and back-end connections. IAST can be used at every phase of the SSDLC.
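A minimal illustration of what a SAST tool does, assuming Python source as the target: walk the parsed syntax tree and flag call sites that match a known-risky pattern. Real SAST products apply far larger rule sets plus data-flow analysis; this sketch shows only the core idea:

```python
import ast

# Calls commonly flagged by static analyzers as dangerous (illustrative subset).
RISKY_CALLS = {"eval", "exec", "compile"}

def scan_source(source: str) -> list:
    """Tiny SAST-style pass: walk the AST and flag risky call sites."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

sample = "x = eval(user_input)\nprint(x)\n"
findings = scan_source(sample)  # flags the eval() call on line 1
```

Because the analysis needs only source code, it can run as soon as a testable amount of code exists, which is exactly when SAST is recommended above.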

Another tool often discussed with SAST, DAST, and IAST is Runtime Application Self-Protection (RASP). RASP is less a test and more of a security tool. RASP runs on a server and works whenever the application is running. RASP intercepts all calls to and from the application and validates all data requests. The application can be wrapped in RASP, which provides additional system security. In a layered defense, this is an additional layer and should not replace secure development and testing.

Security testing provides assurance that the software has been developed securely. While SAST and DAST may have their place, if you could use only one security testing tool, IAST would be the best choice. However, new methodologies are always being developed, and care should be taken to consider and use new tools as they become stable and available to improve your security assurance testing.

USE VERIFIED SECURE SOFTWARE

The only type of software that a security-conscious organization should use is software that has been verified as secure. Verification generally comes from a third party that performs testing on software and validates that it has no discernible vulnerabilities. When there are no verified secure options, a customer must do their own due diligence to ensure security. In this section, we will discuss some major components of secure software.

Approved Application Programming Interfaces

API development and deployment in custom applications requires the same SSDLC as other software development projects. The requirements can specify the methods used in the API to monitor access. API access monitoring is often done through authentication or keys. If not securely developed, a custom API can be vulnerable, leading to the compromise of the system it fronts. Deployment should focus on API configuration and automate the monitoring of that configuration.

APIs can control access to software or application services in a SaaS solution, to back-end services in a PaaS solution, or even to computing, storage, and other infrastructure components in an IaaS solution. For each of these, an approved API is important to ensure security to the system components with which we are interacting. In addition, when possible, enforcing the use of APIs to create a minimum number of methods for accessing an application service simplifies monitoring and protection of these application services.

A CSP or other vendor will provide an API or services that allow the use of an API. It is important to use the APIs as defined and to configure them carefully. An organization can develop approved configurations for APIs used commonly within that customer's organization, and policy can enforce the use of those standard configurations.
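Automating the standard-configuration check described above can be as simple as diffing a deployed API configuration against the approved baseline. The setting names here are hypothetical placeholders, not any CSP's actual API options:

```python
# Hypothetical approved baseline for an API gateway configuration.
APPROVED_BASELINE = {
    "require_auth": True,
    "tls_only": True,
    "rate_limit_per_minute": 600,
    "logging_enabled": True,
}

def audit_api_config(config: dict) -> list:
    """Report every setting that drifts from the approved baseline."""
    return [
        f"{key}: expected {expected!r}, found {config.get(key)!r}"
        for key, expected in APPROVED_BASELINE.items()
        if config.get(key) != expected
    ]

deployed = {"require_auth": True, "tls_only": False,
            "rate_limit_per_minute": 600, "logging_enabled": True}
violations = audit_api_config(deployed)  # flags the disabled TLS requirement
```

Run on a schedule, a check like this turns the policy of standard API configurations into continuous, automated monitoring.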

In addition, vulnerability scanning of APIs can test adherence to standards and can provide assurance that known vulnerabilities do not exist. It is, of course, impossible to ensure that unknown vulnerabilities, such as zero-day vulnerabilities, do not exist.

Supply-Chain Management

There are two parts to supply chain management. First, it can refer to the needs of a CSP and potentially the customer to use third parties to provide services. For example, the CSP may rely on other vendors to provide services used by their customers. Management of this supply chain is a form of vendor risk management. When creating relationships with these vendors, both operational and security concerns should be addressed.

Additionally, traditional supply chain management is moving increasingly to the cloud. As the life of many organizations is tightly coupled with their supply chain management, the risks of cloud computing are important to consider. However, as the supply chain becomes increasingly global and the sourcing of goods requires primary and secondary sources, cloud computing extends the reach and benefit of supply chain management. Cloud computing can optimize infrastructure to provide operational and financial benefits.

In recent years, supply chain risk has become an increasingly common theme. An earthquake in one country will affect the availability of a resource in another. Large-scale cyberattacks like the SolarWinds hack can impact critical infrastructure. A global pandemic leads to shortages worldwide. Even a simple problem, like a ship blocking the Suez Canal, can interrupt the global supply in unexpected ways.

So, supply-chain management has a complex set of risks that include cloud computing. Cloud computing cannot overcome the risks of a pandemic or a ship running aground in the Suez Canal. But, securing the supply-chain management software in the cloud and securely connecting vendors globally through cloud services reduces the IT-related risk.

Third-Party Software Management

The use of third-party software adds additional risk. A third party may have limited access to your systems but will often have direct access to some portion of your data. If this is sensitive data, a careful review is necessary and should involve the vendor management office (VMO) if your organization has one. Specific language regarding security should be part of every vendor contract.

The typical issues that are addressed include the following:

  • Where in the cloud is the software running? Is this on a well-known CSP, or does the provider use their own cloud service?
  • Is the data encrypted at rest and in transit, and what encryption technology is used?
  • How is access management handled?
  • What event logging can you receive?
  • What auditing options exist?

In addition to basic security questions, a review of the third party's SOC 2 report, recent vulnerability scans and penetration tests, and security and privacy policies will provide an assessment of the security maturity of the organization and whether you should entrust them with your sensitive data. While you may delegate some processes, you cannot delegate responsibility for your data.

Another risk that will occur with some third-party vendors is fourth-party risk. Fourth party refers to a third party's third party, such as if your vendor uses a separate, independent vendor to provide you a service. For example, when a SaaS solution uses an independent CSP for some of their storage needs, your risks include the risks associated with the SaaS solution as well as any additional risks created by the CSP they use. In essence, your infrastructure (computing, storage, and so on) is hosted on the Internet, and any services used by these parties increase the perimeter of the risk that must be considered.

There are advantages to using third-party services. A third party may supply software to address in-house needs when there is no in-house expertise. Third-party solutions can also provide cost and tax advantages. However, these resources must be understood and managed just as much as in-house application solutions.

Validated Open-Source Software

All software, including open-source software (OSS), must be validated in a business environment. Some argue that open-source software is more secure because the source code is available to review, and many eyes are upon it. However, large and complex solutions are not simple to review. So, validation through sandbox testing, vulnerability scans, and third-party verification is required.

The common belief that there is less risk from OSS because it is inexpensive shows an incomplete understanding of risk. Risk is about the asset that is being protected, not about the cost of the software used. In most businesses, data is a primary asset. Losing your data through inexpensive software does not lessen the cost associated with the data breach and exfiltration. OSS must follow the same risk-based steps of verification that commercial software undergoes.

When using OSS, there are steps you can take to validate this resource. The easiest method is to use well-known and well-supported products in the OSS space. For example, there are many versions of Linux available for use, but not all versions are equal. A well-supported version with a proven track record is preferable to a less known and less supported version.

One validation method is to perform code analysis on the open-source code. The advantage of OSS is that the code is available. SAST tools find security vulnerabilities in the code. Static analysis alone provides some assurance but will not get you all of the way there. IAST can be used in conjunction with SAST: an agent runs on the application server and analyzes traffic and execution flow to provide real-time detection of security issues.

These methods can also be utilized together. You can use a well-known and well-supported OSS, perform SAST to reveal initial vulnerabilities, and then implement IAST for real-time detection of additional security issues.

COMPREHEND THE SPECIFICS OF CLOUD APPLICATION ARCHITECTURE

The traditional application architecture is a three-tier client-server model. In cloud computing, we have some additional choices. These include microservices, cloud native, serverless, and cloud-based architectures.

A microservice application architecture designs a complex system as a collection of services and data. This follows the long-standing software engineering principle of cohesion: each microservice performs a single business function. Each microservice can use the language and tools appropriate for its development, and microservices can be combined as needed to provide a complex system. Microservice architectures are a natural fit for containers running on virtual machines or physical machines, so they are well suited to the cloud. Containers are managed through services such as Kubernetes (K8s) and Docker.

A cloud native architecture is for applications deployed to the cloud. These applications exploit the cloud computing delivery model, can run in any type of cloud (public, private, community, or hybrid), and can speed getting applications to market. A cloud native architecture can be deployed through DevOps and CI/CD processes and can use microservices and containers.

Serverless environments use an event-driven architecture. Events trigger and communicate between decoupled services. Because they are serverless, these architectures scale well using a Representational State Transfer (REST) API or event triggers.

Cloud-based architectures are well suited to building and deploying web applications. Using an API Gateway, a secure API is a front door to web applications providing access to data and business logic.

Considering these cloud architectures, there are a number of tools and services that can support the security needs of both new software solutions as well as legacy devices. These services provide enhanced security and deal directly with common security issues. These services may be supplemental services that protect certain parts of the application architecture, encryption services that protect data at rest or in motion to ensure confidentiality of the data, methods to test securely, and services that tie all these services, web services, and application services together.

Supplemental Security Components

Supplemental security components provide services to your cloud environment that solve specific security concerns. For example, database monitoring works to ensure the integrity of our databases, while XML firewalls support application services through XML messages. The sections that follow describe each supplemental security service and the problem it solves.

Web Application Firewall

A web application firewall (WAF) protects HTTP/HTTPS applications from common attacks. Usually, a WAF protects an Internet-facing application, but it can also be used internally on an intranet. The WAF can be a hardware device, a software device, or both. The WAF monitors GET and POST requests. The requests are then compared to configured rules. A WAF may look for specific signatures or apply heuristics.

By filtering HTTP/HTTPS traffic, a WAF helps protect against SQL injection, cross-site scripting (XSS), cross-site request forgery, and other attacks. The WAF specifically addresses attacks on application services from external sources.

The WAF differs from an Intrusion Detection System (IDS), which monitors specific traffic patterns on a network. A WAF works at the application level and focuses on specific web application traffic and is often employed as a proxy, with one or more websites or web applications protected behind the WAF. The CSPs and third-party vendors provide many WAF options.
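As a toy sketch of the signature matching a WAF performs on GET and POST requests, the filter below compares request parameter values against two simplified attack signatures. Production rule sets, such as the OWASP Core Rule Set, are far more extensive:

```python
import re

# Simplified signatures for two attack classes a WAF commonly blocks.
SIGNATURES = {
    "sql_injection": re.compile(r"('|\")\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE),
    "xss": re.compile(r"<\s*script\b", re.IGNORECASE),
}

def inspect_request(params: dict) -> list:
    """Return the signature names matched by any request parameter value."""
    matches = []
    for value in params.values():
        for name, pattern in SIGNATURES.items():
            if pattern.search(value):
                matches.append(name)
    return matches

hits = inspect_request({"user": "alice", "q": "' OR 1=1"})  # classic tautology probe
```

A request producing any hit would be dropped or logged before it reaches the protected web application, which is exactly the proxy position described above.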

Database Activity Monitoring

Database activity monitoring (DAM) refers to a set of tools that supports the identification and reporting of fraudulent or suspicious behavior in the databases used by your application services. This real-time monitoring may use but is independent of native DBMS auditing and logging tools. DAM analyzes and reports on suspicious activity and alerts on anomalies. In addition to application monitoring and protecting from web attacks, DAM also provides privileged user monitoring.

Like other services, there are third-party vendors providing DAM services and CSPs providing services that are configured for their database offerings. These tools do more than monitor database usage. They can monitor privileged use, data discovery, data classification, and other database needs. Some DAM toolsets also provide assistance in compliance to contractual and regulatory requirements such as PCI DSS, HIPAA, and GDPR.

Like all services provided by third parties and CSPs, these tools change over time, adding breadth and functionality, and sometimes even changing the service name. These toolsets are designed to provide cloud native database activity monitoring and work with the major CSPs. DAM tools can be deployed inline to monitor traffic like a WAF or IDS. They can also be used as a detective tool to scan log data and identify issues.
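A detective DAM rule can be sketched as a scan over database log entries; this example flags privileged statements run outside business hours. The log schema and the hour thresholds are hypothetical:

```python
from datetime import datetime

def flag_suspicious_queries(log_entries: list) -> list:
    """DAM-style rule: privileged statements executed outside business hours."""
    alerts = []
    for entry in log_entries:
        hour = entry["timestamp"].hour
        off_hours = hour < 7 or hour >= 19          # outside 07:00-19:00
        verb = entry["statement"].split()[0].upper()
        privileged = verb in {"DROP", "GRANT", "ALTER"}
        if privileged and off_hours:
            alerts.append(entry)
    return alerts

log = [
    {"user": "app", "statement": "SELECT * FROM orders",
     "timestamp": datetime(2023, 5, 1, 23, 5)},
    {"user": "dba", "statement": "DROP TABLE audit_log",
     "timestamp": datetime(2023, 5, 1, 2, 14)},
]
alerts = flag_suspicious_queries(log)  # only the off-hours DROP is flagged
```

Real DAM products correlate many such rules with user context and baselines, but each rule reduces to a predicate over activity records like this one.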

Extensible Markup Language Firewalls

While XML services are beneficial for application integration, security is a concern when deploying them. Extensible Markup Language (XML) provides a standard way to exchange data between applications. However, XML can also be abused through XML external entity (XXE) processing, one of the OWASP Top 10 vulnerabilities.

XML firewalls work at the application layer to protect XML-based applications and APIs over HTTP, HTTPS, and other messaging protocols. XML messaging and APIs between these services are an area of security concern, and an XML firewall can address it. All service requests pass through the XML firewall. Because an XML firewall must inspect traffic, it is generally implemented as a proxy standing in front of the web application server. An XML firewall can implement complex security rules through Extensible Stylesheet Language Transformations (XSLT).

A number of common web-based attacks can be launched through XML. These attacks include SQL injection and cross-site scripting (XSS). This is done through misuse of input fields and can be prevented through data validation and verification on input fields and schema verification. The use of an XML firewall can support the security needs of an application but should not be a substitute for developing secure software and systems. Instead, it should be an added level of protection. An XML firewall can benefit legacy code that was not designed with security. This becomes a compensating control until the development and deployment of a secure system. By dropping inappropriate traffic, it can also decrease the likelihood of DoS attacks. Firewall as a Service is one of the many cloud services provided by vendors for the major CSPs.
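One defense an XML firewall can apply against XXE is rejecting payloads that declare a DTD or entity before they ever reach the parser. A minimal sketch of that pre-parse check:

```python
import re

# Patterns indicating DTD or entity declarations, the vehicle for XXE attacks.
DTD_PATTERN = re.compile(r"<!(DOCTYPE|ENTITY)", re.IGNORECASE)

def reject_external_entities(xml_payload: str) -> bool:
    """Return True if the payload is safe to forward (no DTD/entity declarations)."""
    return DTD_PATTERN.search(xml_payload) is None

malicious = ('<?xml version="1.0"?>'
             '<!DOCTYPE foo [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>'
             '<foo>&xxe;</foo>')
benign = "<order><id>42</id></order>"
```

Dropping such payloads at the proxy complements, but does not replace, configuring the application's XML parser to refuse external entity resolution.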

Application Programming Interface Gateway

An API gateway controls traffic to your application's back-end services. The services provided by an API gateway include rate limiting, access logging, and authorization enforcement. For secure computing, there should be a limited number of doors into your application service. For example, some SaaS providers provide an API to access your account from your PC, Apple device, or Android device. These gateways control the way a user accesses and interacts with the SaaS solution and allow this traffic to be secured and monitored. API gateways provide authentication and key validation services that control who may access the service, ensuring confidentiality of data.

Amazon Web Services (AWS) provides this service through Amazon API Gateway, which offers both RESTful APIs for serverless computing and WebSocket APIs for real-time communication. Google Cloud provides an API gateway for REST APIs to provide serverless computing and a consistent and scalable interface. Azure API Management provides a REST-based API for legacy systems. Essentially, all CSPs provide an API gateway that allows the customer to monitor and control access to data and services for their workforce, partners, and customers in a way that provides a layer of protection and access control.
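Rate limiting, one of the gateway services mentioned above, is commonly implemented as a token bucket: each request spends a token, and tokens refill at a fixed rate, allowing short bursts while capping sustained throughput. A minimal per-client sketch:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter of the kind an API gateway applies per client."""

    def __init__(self, capacity: int, refill_per_second: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_second = refill_per_second
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, capped at bucket capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_per_second)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, refill_per_second=1.0)
results = [bucket.allow() for _ in range(5)]  # burst of 5 back-to-back requests
```

The first three requests pass; the rest are rejected until tokens refill. A gateway keeps one bucket per API key, which is why key validation and rate limiting pair naturally.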

Cryptography

Cryptography is a key technology for encrypting and protecting sensitive data in the cloud. Encryption is the first line of defense for data confidentiality. Encryption requires the use of keys, and management of those keys is a critical security concern: a compromised key compromises the data it protects. There are three primary parts of encryption: data at rest, data in motion, and key management.

Encryption for data at rest is a standard practice for all sensitive information. Many CSP services have encryption as a standard option. Some CSPs provide standard APIs to allow encryption to be added to any CSP service or customer application. CSPs generally also provide encryption options for their storage and database services. Naturally, encryption tools are standard in all large CSPs.

Data-in-motion encryption is accomplished in standard ways to include TLS, HTTPS, and VPNs. The ability to work with standard secure data transmission methods is provided by all mature CSPs. In addition, larger CSPs can accommodate data sharing between software solutions, even in multiple regions, without ever transiting the public Internet. However, this promise of not transiting the Internet does not necessarily mean that data is transiting a trusted network. Even when staying off the Internet, encrypted data transmission should be expected.
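On the client side, enforcing encrypted data in motion can start with a properly configured TLS context. Python's standard `ssl` module shows the settings that matter (certificate validation, hostname checking, and a modern protocol floor):

```python
import ssl

# create_default_context() enables certificate validation and hostname
# checking by default; we additionally refuse legacy protocol versions.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# A client would then wrap its socket with this context before sending data,
# e.g. context.wrap_socket(sock, server_hostname="api.example.com")
```

The hostname `api.example.com` is a placeholder. The same principle applies even on a CSP's private backbone: as noted above, staying off the public Internet does not make the transport trusted, so TLS should be expected regardless.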

With all of this encryption, there are encryption keys to manage. The major CSPs all provide key management services (KMS) as do some third-party vendors. The first decision is who will manage the keys. Solutions exist to allow the vendor to manage the keys. However, for sensitive data, it is preferable that the customer use a commercial KMS rather than the vendor in order to improve key security. This provides a separation of duties between the service managing the encrypted data and the service managing encryption keys.

Sandboxing

A sandbox is an environment with restricted connectivity and functionality. Sandboxes provide two primary security benefits: sandboxes for developers and sandboxes for secure execution of code. Both provide benefits to the cloud customer.

Sandboxes for developers allow the development of code and, more importantly, the testing of code in an isolated environment. A sandbox is a temporary environment to build and test code. Any problems generated will not impact anything outside of the sandbox. This is also a suitable environment for developers new to cloud application development to try and test services provided by cloud providers. This is a safe way to learn and test cloud tools and services.

Sandboxes for secure evaluation of code protect you from malicious code as well as poorly developed or misconfigured code. The purpose of the sandbox is to allow the execution and evaluation of code without impacting the rest of the customer's (or CSP's) environment. This is especially valuable when executing code that may be malware. The customer can evaluate the effect of the code and can determine whether it is dangerous to run in the customer's nonsandbox environment.

Application Virtualization and Orchestration

Virtualization of infrastructure is an essential core technology for cloud computing, and application virtualization through containerization is part of several cloud application architectures such as microservices. For example, a customer may run a virtualized desktop or container on their personal laptop, a virtual machine, or a mobile device for business applications. The entire device is not virtualized, but those applications run separately from the host OS, providing secure access to the business application and data.

Application virtualization through containers allows an application to be packaged with all dependencies together. The container can then be subject to strict configuration management, patch management, and repeatable build processes. These containers can segregate organizational software and data from the user's device in a BYOD environment. This may allow a remote wipe capability the employer can use to remove corporate access, applications, and data without impacting the remainder of an employee's personal device. Application virtualization can make the trend for BYOD more secure for the employer and safer for the employee.

Container orchestration is commonly done through K8s. There are many containerization technologies, but K8s was purpose-built for container orchestration. Containers provide security to the organization through configuration management, patching, and dependency update support.

Containers do not come without a cost. The containerization technology used must be configured and patched. If the technology is compromised, so is the container. The orchestration software must also be patched and configured following secure computing policy and standards set by the organization.

Cloud orchestration allows a customer to manage their cloud resources centrally in an efficient and cost-effective manner. This is especially important in a multicloud environment. Management of the complexity of corporate cloud needs will only increase as the move to the cloud accelerates. Orchestration allows the automation of workflows and management of accounts, and the deployment of cloud applications, containerized applications, and services in a way that manages cost and enforces corporate policy in the cloud.

DESIGN APPROPRIATE IDENTITY AND ACCESS MANAGEMENT SOLUTIONS

Identity and access management solutions encompass a range of activities. The IAM solution properly begins with the provisioning of users. Provisioning includes the creation of credentials as well as authorization for the systems to which a user needs access. IAM solutions also include the ongoing maintenance of access, such as adding and deleting access to systems as user roles change within the organization. Finally, an IAM solution includes the deprovisioning of a user when an account is no longer needed.

IAM solutions also perform the identification and validation of users and provide access to systems. Once an IAM solution identifies and authenticates a user, it can perform authorization. Authorization determines which resources an authenticated user can access and can use role-based, attribute-based, or another access control model. In addition, many IAM providers also support password management, while other solutions can be integrated with a customer's current on-premises authentication systems.

There are a variety of options for cloud-based IAM. This section will discuss the methods of authentication available and the security issues that exist with each of these.

A variety of protocols are available to IAM solutions, including OAuth 2.0, SAML, and LDAP. Each protocol provides different capabilities. For example, OAuth 2.0 was developed to provide authorization for web applications and mobile devices. SAML is an XML-based standard well suited for exchanging authentication data between an identity provider and a service provider. LDAP is designed to work well with directory services, such as Active Directory (AD). Which protocol is used varies by authentication provider and use case and is a choice determined by each business.
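The integrity guarantee at the heart of these token-based protocols can be illustrated with an HMAC-signed claims payload. This is a simplified sketch in the spirit of the signed tokens that OAuth 2.0 flows pass around, not a conformant JWT implementation; the key and claim names are hypothetical:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"shared-signing-key"  # hypothetical key held by the identity provider

def issue_token(claims: dict) -> str:
    """Sign a claims payload so any tampering is detectable."""
    payload = base64.urlsafe_b64encode(json.dumps(claims, sort_keys=True).encode())
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + signature

def verify_token(token: str):
    """Return the claims if the signature checks out, else None."""
    payload, _, signature = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return None  # tampered or forged token
    return json.loads(base64.urlsafe_b64decode(payload))

token = issue_token({"sub": "alice", "scope": "read"})
claims = verify_token(token)  # valid token yields the original claims
```

A service provider holding the shared key can accept such a token without a round trip to the identity provider, which is what makes federation and SSO scale.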

Federated Identity

Federated identity is related to single sign-on (SSO). In a federated identity, a particular digital ID allows access across multiple systems. With federated identity, a digital ID can access applications across CSPs (a multicloud) and on-premise resources. Federation also allows SSO for systems across multiple organizations. These may be subsidiaries of the same company, multiple CSPs, cloud vendors, or multiple organizations.

There are security challenges with federated identities. The primary issue is that once someone compromises a digital ID on one system, the ID is compromised on all systems that are federated. This makes the security of IAM systems equally important on all federated systems. The IAM system for each cloud provider and cloud vendor, as well as the on-premises IAM system, must be equally protected and monitored for malicious use.

The drawback to federated identity is similar to SSO: when one system is compromised, all federated systems are compromised. The complicating factor is that the level of trust between systems of multiple organizations may not be the same as between systems of a single organization. Additionally, no single organization controls the system protection across all organizations or the vetting of users in each organization.

Identity Providers

Identity providers can be CSP services or the services of a third party. In a multicloud environment, a third-party solution may be the best choice as it provides a single solution to the customer. Identity providers do not replace other cloud security tools, such as a Cloud Access Security Broker (CASB), but work together to provide a layered security defense. In this case, the IAM ensures that users are authenticated and their access to cloud resources is authorized. The CASB monitors and protects those cloud resources.

The major CSPs provide IAM services, including Azure Active Directory, AWS Identity and Access Management, and Google Cloud Identity and Access Management. There are also many good third-party choices for identity management, offered as identity as a service (IDaaS), a specialized form of SaaS. A third-party IDaaS can be especially advantageous in a multicloud environment.

Single Sign-On

Single sign-on (SSO) allows a user, after authenticating once, to access all authorized systems that fall under a single IAM system. The user can move freely between systems without reauthenticating each time. It is important to monitor all access within an SSO system: if a digital ID is compromised, all systems within that IAM environment that the user has access to are also compromised.

The advantage is that SSO limits the number of credentials that must be changed when compromised and allows simpler centralized monitoring and access maintenance. When each user has multiple credentials, monitoring is more complex, and when a user's access must be modified or removed, it is easy to miss one or more sets of credentials. Access and use of all identities within an SSO environment must be monitored for malicious activity, and the organization must have processes in place to respond to it.
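The mechanics can be sketched with a shared signing key: the IAM system issues one signed token after authentication, and any service that trusts the key accepts the token without prompting the user again. This is a simplified illustration under an assumed token format; real SSO systems exchange SAML assertions or OIDC tokens, not this hypothetical layout:

```python
import base64
import hashlib
import hmac
import json
import time

SSO_KEY = b"shared-idp-signing-key"  # hypothetical key held by the IAM system

def issue_token(user, ttl=3600):
    """The IAM system signs one token after a single authentication."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"sub": user, "exp": time.time() + ttl}).encode()
    )
    sig = base64.urlsafe_b64encode(
        hmac.new(SSO_KEY, payload, hashlib.sha256).digest()
    )
    return (payload + b"." + sig).decode()

def verify_token(token):
    """Any service trusting the key verifies the token without reauthenticating
    the user. Returns the claims, or None if tampered or expired."""
    payload_b64, sig_b64 = token.encode().split(b".")
    expected = base64.urlsafe_b64encode(
        hmac.new(SSO_KEY, payload_b64, hashlib.sha256).digest()
    )
    if not hmac.compare_digest(sig_b64, expected):
        return None  # signature mismatch: token was altered
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims if claims["exp"] > time.time() else None

token = issue_token("alice")
print(verify_token(token)["sub"])  # both "services" accept the same token
```

The sketch also shows the risk the text describes: any service holding a valid token for a compromised identity will accept it until it expires or is revoked, which is why monitoring every system in the SSO environment matters.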

SSO is extended through the use of federation. Federation allows SSO for resources across multiple IAM systems, such as multiple cloud and on-premises environments. This increases the risk posed by a compromised identity, as the number of systems that may be compromised is greater, and it substantially increases the importance of monitoring for and responding to malicious activity.

Multifactor Authentication

Multifactor authentication (MFA), when used appropriately, adds a level of security to standard user IDs and passwords. Two-factor authentication (2FA) and MFA are often used interchangeably, but there is a subtle difference: 2FA uses exactly two factors, while MFA may use two or more. 2FA is thus a subset of MFA.

Authentication factors are as follows:

  • Something you know: This includes answers to security questions and identification of previously selected photos/pictures, PINs, and passwords.
  • Something you have: Examples include a hardware token, smartphone, or a card, such as a debit card or smart card.
  • Something you are: This category generally refers to biometrics, such as fingerprint, facial recognition, or iris scans. Of the three factor types, biometrics is the most challenging to implement reliably and at a reasonable cost.

Often what is described as MFA is simply multiple instances of the same factor. An example is when a system requires a user ID, a password, and the answer to a security question—these are all things you know. This is single-factor authentication masquerading as MFA.

One use of MFA is to limit the potential damage caused by a compromised account in an SSO or federated system. If a token from something you have is required when moving from one federated system or resource to another, the convenience of SSO or federation decreases slightly while security increases. The method chosen is a balanced decision based on the available options, their cost, the value of the asset being protected, and the organization's risk tolerance.

Another approach would be to simply require a token from a device periodically throughout the day; for example, every two hours. This limits the time a compromised identity can be used. The additional burden of authenticating two to three times a day may be an acceptable price to pay to limit the damage of a compromised identity in a federated or SSO environment.
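The periodic token check described above is commonly implemented with a time-based one-time password (TOTP, RFC 6238): the server and the user's device derive the same short-lived code from a shared secret and the current time step. A minimal sketch, checked against the RFC's published test vectors:

```python
import hashlib
import hmac
import struct

def totp(secret, at_time, step=30, digits=8, digest=hashlib.sha1):
    """Time-based one-time password (RFC 6238) for the 'something you have'
    factor. The code changes every `step` seconds."""
    counter = int(at_time // step)                 # current time step
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    mac = hmac.new(secret, msg, digest).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: SHA-1, 8 digits, ASCII secret, T = 59 seconds
print(totp(b"12345678901234567890", 59))  # "94287082"
```

Because the code is valid only for one time step, a stolen code is far less useful to an attacker than a stolen password, which is what makes periodic TOTP checks an effective damage limiter in an SSO or federated environment.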

Cloud Access Security Broker

A CASB is an important addition to cloud security. A CASB sits between the cloud application or server and the customer. This service, which may be software- or hardware-based, monitors activity, enforces corporate security policies, and mitigates security events through identification, notification, and prevention. As part of a layered security strategy, a CASB is not meant to replace firewalls, IDS/IPS systems, or similar security systems; rather, it enhances the security those other devices provide.

A CASB that enforces policy must see user activity with the application. CASBs may be inline, sitting in the traffic path, or out-of-band, using the cloud provider's APIs to receive cloud traffic for security analysis. The two modes are somewhat analogous to an IPS and an IDS: an inline CASB can block traffic to enforce policy, while an API-based CASB monitors for violations and analyzes cloud traffic broadly. Inline CASBs may in turn be agent-based or agentless.

Agent-based CASBs face friction from users when a bring your own device (BYOD) policy is in place. When the user owns the device, it may not be possible to install an agent on it, and even with the user's permission, the variety of mobile devices may exceed the variety of available agents, so a compatible agent may not exist. In addition, an agent may noticeably degrade the performance of a user's mobile device. When the organization owns the devices, an agent-based approach can be more effective.

An agentless CASB uses an API on the cloud resources to inspect traffic to and from those resources. This allows access to all cloud resources to be monitored regardless of endpoint ownership. It also can limit inspection to organizational data, eliminating some privacy concerns. A further advantage is that agentless CASBs can be deployed quickly and maintained more easily.
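The API-based inspection described above can be sketched as a policy scan over audit events. The event shape and the policy thresholds here are hypothetical, standing in for whatever a real provider's audit API returns:

```python
# Hypothetical audit records, shaped like what a cloud provider's activity-log
# API might return to an out-of-band CASB.
EVENTS = [
    {"user": "alice", "action": "download", "resource": "hr/payroll.xlsx", "bytes": 50_000_000},
    {"user": "bob", "action": "share_external", "resource": "eng/design.pdf", "bytes": 120_000},
    {"user": "carol", "action": "read", "resource": "wiki/onboarding.md", "bytes": 4_000},
]

def find_violations(events, max_download=10_000_000, blocked_actions=("share_external",)):
    """Flag events that break corporate policy: oversized downloads and
    externally shared files. A real CASB applies far richer policies
    (DLP patterns, anomaly scores, geolocation, and so on)."""
    violations = []
    for e in events:
        if e["action"] == "download" and e["bytes"] > max_download:
            violations.append((e["user"], "oversized download", e["resource"]))
        if e["action"] in blocked_actions:
            violations.append((e["user"], "blocked action", e["resource"]))
    return violations

for v in find_violations(EVENTS):
    print(v)
```

Because the scan runs against log data rather than live traffic, it detects and reports violations after the fact, which is the IDS-like trade-off of the API-based approach.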

SUMMARY

Cloud application security requires many things working together. The first step is the policy decision to develop secure applications; all other decisions flow from it. Once this decision is made, an SSDLC must be adopted, with training on secure software development and on the tools and processes used in this approach to application development. The SSDLC must then be implemented, with development leading to assurance and validation activities that confirm the developed solutions are secure. The next step is to develop mechanisms that ensure verified software is distributed securely and that modified software can be detected, preventing the insertion of malicious code into your securely developed software solutions. Finally, the software must be deployed using the architected secure solution, implementing secure API gateways, XML firewalls, web application firewalls, DAM, IAM, CASB, and other tools as necessary. These tools monitor the applications in use, with the ability to respond to, and potentially prevent, malicious activity.
