CHAPTER 7

Legal, Risk, and Compliance

This chapter covers the following topics in Domain 6:

•   Legal risks and controls relating to cloud computing

•   eDiscovery and forensic requirements

•   Laws pertaining to PII

•   Audit types

•   Audit processes in a cloud environment

•   Risk management as it pertains to the cloud

•   Outsourcing to a cloud and overseeing the contract

Cloud computing presents many unique challenges on the legal and policy front because it often crosses jurisdictional lines that have different rules and regulations regarding data privacy and protection. Auditing any IT system is a critical and sensitive process, but a cloud environment presents unique challenges and requirements for auditing, as a cloud customer will not have full access to systems or processes in the same manner they would in a traditional data center. Risk management also poses unique challenges in a cloud, as it expands the realm of operations and systems for an organization, and the realities of a cloud environment, especially multitenancy, introduce additional risks and complexities. In this chapter, we will also touch on the requirements for managing outsourcing and contracts with cloud providers.

Articulate Legal Requirements and Unique Risks Within the Cloud Environment

Cloud environments often cross jurisdictional lines and create a wealth of complex issues regarding applicable laws and regulations from the policy side, as well as technological issues for data collection and discovery requirements.

Conflicting International Legislation

With the global nature of computing services and applications, it is almost certain that international boundaries and jurisdictions will be crossed from both policy and technological perspectives. When operating within a global framework, a security professional runs into a multitude of jurisdictions and requirements that often are not clearly applicable or are in contention with one another. These requirements can include the location of the users and the type of data they enter into the systems; the laws governing the organization that owns the applications and any regulatory requirements it may have; and the laws and regulations of the jurisdiction housing the IT resources and where the data is actually stored, which might itself span multiple jurisdictions.

With systems that span jurisdictions, it is inevitable that conflicts will arise when incidents occur, especially concerning the laws requiring reporting and information collection, preservation, and disclosure. Many times the resolution is not clear, because no international authority with full jurisdictional control currently exists to mediate such problems, so these incidents can become very complex and difficult legal matters to resolve. In many instances, legal proceedings in multiple jurisdictions are required, which may or may not produce complementary outcomes and rulings, making orders and investigations increasingly difficult, especially with the move toward cloud computing.

Evaluation of Legal Risks Specific to Cloud Computing

Cloud computing adds an extra layer of complexity from a legal perspective. With a traditional data center, the organization will own and control the environment, systems, and resources, as well as the data that is housed within all of them. Even in environments where support and hosting services are contracted out to a third party, the contracting organization will still hold control over its own systems and data, and will typically be physically segregated within a data center, both from a network standpoint and in cages on the data center floor. This makes the legal requirements, and the parties to them, very clear for any issues that arise.

With a cloud environment, the cloud customer is reliant on the cloud provider that actually owns and operates the overall system and services. The main difference that distinguishes a cloud environment from a legal perspective is the concept of multitenancy—that is, having cloud customers share the same physical hardware and systems. This makes the cloud provider contractually bound not only to your organization but also to all the other organizations using the same hosting environment. It precludes the cloud provider from simply capturing systems and turning over any and all data to investigators or regulatory agencies, because the provider will need to ensure that no data or logs from other customers are captured and potentially exposed to additional parties.

Regardless of where a system and its services are hosted, an organization is legally responsible for all data it uses and stores. When a cloud provider is used for these services, the contract will need to ensure that the cloud provider accepts and complies with the same regulatory and legal requirements that pertain to the cloud customer. This includes jurisdictional requirements based on the locations of customers or data, but also regulatory requirements such as HIPAA and Sarbanes-Oxley (SOX) that apply to applications and their data, based on the type of application and what the data is used for, regardless of the specific physical location of the data and the services that contain and process it.

Legal Framework and Guidelines

Given the complexities of cloud environments and their geographically disparate nature, any systems and applications hosted within a cloud will be subject to a variety of different laws and legal controls.

In the United States, there are myriad federal and state laws and regulations that a system must conform to. On the federal level, there are regulations such as HIPAA and SOX, as well as federal regulations based on the particular type of commerce and the data involved. For any systems that interact with federal agencies in any way, there are extensive requirements under the Federal Information Security Management Act (FISMA) for compliance with security controls required by the federal government, depending on the classification of the system and the data it uses. On the state level, additional requirements can be placed on systems, also dependent on the type of commerce involved and the type of data being used and stored. Specific requirements can also arise from contractual language and the specific rights and responsibilities required within the jurisdictions.

Other countries have their own regulatory and legal requirements that may or may not be similar in nature to those of the United States. The most prominent of these requirements are put forth by the EU, which has a very strong focus on personal privacy and data protection. In fact, many countries and jurisdictions around the world, even those outside the EU's or the United States' jurisdiction, have adopted the EU or U.S. guidelines and regulatory requirements as a model, with the EU guidelines being by far the more common choice.

While not directly legal in nature, many standards organizations have adopted rules that mirror or closely align with actual legal requirements. This applies to regulatory and standardization models such as the Payment Card Industry Data Security Standard (PCI DSS) and the various ISO/IEC standards. These typically dictate specific requirements for security controls as well as operational procedures and policies for handling specific types of data and applications. They are usually included as contractual requirements at the organization level as well as between the cloud customer and the cloud provider.

eDiscovery

eDiscovery is the process of searching, identifying, collecting, and securing electronic data and records, to ultimately be used in either criminal or civil legal proceedings. With a traditional data center, the collection and identification processes are typically easier and less complex because physical systems are known and can be easily isolated or brought offline and preserved. With a cloud environment, all systems and data are virtualized, which introduces additional challenges and complexity.

Within a traditional data center, when a request or requirement for eDiscovery is received, it is substantially easier to determine the scope and systems involved because they will all be under tight control and typically on-premises, with well-known configurations. The security team, working with a company’s legal and privacy teams, can present the eDiscovery request to the application and operations teams, determine which servers and systems are involved, and begin the data collection and preservation processes, in most cases without much need for external involvement. Within a cloud environment, this information is scattered across virtual machines and storage devices, which may be dispersed across different physical data centers and jurisdictions as well. A further complication is that these systems host multiple tenants, making isolation and collection more complex, because the privacy and confidentiality of other tenants’ data must be preserved as well. Within a cloud environment, systems cannot simply be physically isolated and preserved.

Legal Issues

A few aspects of eDiscovery from a legal perspective make bringing a cloud environment into compliance more complex. These legal realities are found in the United States in the Federal Rules of Civil Procedure (FRCP) and the Federal Rules of Evidence (FRE). These rules must be followed for evidence to be identified, collected, and preserved in a manner that will make it admissible in court proceedings.

One major aspect is that eDiscovery is focused on information that is in the “possession, custody, or control” of an organization, per Rule 34(a) of the U.S. Federal Rules of Civil Procedure. For a traditional data center, this is a very easy issue to address because all information is on dedicated servers and storage for the particular company or organization. Even in instances where data center space is leased, each client will have its systems and assets on isolated physical servers, contained in locked cages particular to its own systems and protected from other clients’ systems. When information is contained in a virtualized environment, especially a cloud environment, questions about control and possession can become legally pertinent between the cloud customer and cloud provider. The big question as to who actually possesses and controls the information is something that should be articulated in the contract and terms of service with the cloud provider. While this is a complex question for private clouds, it is even more so in public clouds, which are much larger, open to anyone, and in many instances contain large numbers of tenants.

A second major issue from the FRCP is that data custodians are assumed and expected to have full knowledge of the internal design and architecture of their systems and networks. With a traditional data center, where full ownership and design knowledge are already established, this is trivial for an organization to comply with. With a cloud environment, even with an IaaS implementation, the cloud customer will have only rudimentary knowledge of the underlying systems and networks. With a PaaS implementation, and certainly with a SaaS implementation, the cloud customer’s knowledge will be extremely limited. This is another crucial area where the contract and SLA between the cloud customer and cloud provider will have to establish the roles and responsibilities, as well as the required timelines, for support of eDiscovery requirements and any other legal or regulatory requirements to which the organization is subjected.

Conducting eDiscovery in the Cloud

With the need to rely on the cloud provider to assist or conduct any eDiscovery investigations and respond to orders, it is vital for the cloud customer and their security personnel to have a good relationship with the support teams of the cloud provider. This relationship should be cultivated from the early days of the contract and hosting arrangement so that familiarity with processes and procedures can be established and understood while not under the pressure of an actual incident or order. Part of this relationship should also entail gaining a rudimentary understanding of the underlying cloud fabric and infrastructure. Although this will not be comprehensive or detailed for investigations, it will lay the groundwork and serve as a basis of understanding to enable more rapid and efficient work when an actual order is being processed.

Part of the contract should articulate where the potential hosting areas for data and applications are within the cloud environment. When data discovery and collection operations are being conducted, it is very possible in a cloud that data will reside in multiple jurisdictions, and quite likely in different countries that may have drastically different laws and requirements. Having an understanding of possible scenarios, with contingency plans in place, allows for a more structured approach to eDiscovery orders and provides a basis for starting them with various facts and templates already prepared. Understanding different laws, jurisdictions, and expectations is vital for the Cloud Security Professional to effectively conduct and ensure eDiscovery. Depending on the nature, scope, and jurisdictions impacted by the eDiscovery order, it is also possible that some requirements will clash with each other and need to be reconciled in a manner that respects local laws while enabling the organization to comply with the order.

The exact approach and method for eDiscovery in the cloud will be determined by the contractual requirements between the cloud customer and cloud provider, and will largely be driven by the cloud model employed. Regardless of the cloud model used, the cloud provider will have to play a central role in any eDiscovery process. While the cloud customer has the greatest level of control, access, and visibility within an IaaS implementation, even under that model the cloud customer will be limited in their collection and isolation abilities. In most instances, access to the toolsets and utilities needed for eDiscovery collection will be limited to the cloud provider, because more extensive access to the underlying systems and management tools will be necessary. With a PaaS implementation, the burden for eDiscovery compliance will largely fall on the cloud provider, because access from the cloud customer will be very limited in nature, and access to any sort of management toolsets or utilities will likely be nonexistent. With a SaaS implementation, the cloud customer will need to rely completely on the cloud provider for eDiscovery compliance. Although the cloud customer might have a degree of data access, and perhaps even the ability to export data from the application, it will almost certainly not be in a format that would be acceptable as evidence under an eDiscovery order.

eDiscovery Against the Cloud Provider

While the discussion and focus thus far have been on eDiscovery of cloud customers, their applications, and their data, there is also the perspective of eDiscovery orders against the cloud providers themselves. In this instance, the eDiscovery requirements would be in regard to the cloud environment itself, which has the potential to impact many different cloud customers and data sets. Depending on the scope and requirements of the eDiscovery order against the cloud provider, there may be the need to turn over data or physical assets from the environment, which could include systems and data that are part of, or directly impact, cloud customers. Due to this reality, the Cloud Security Professional will need to ensure that the contract with the cloud provider contains language describing how such issues will be approached, addressed, and handled by the cloud provider, should the need emerge.

ISO/IEC 27050

ISO/IEC 27050 strives to establish an internationally accepted standard for eDiscovery processes and best practices. It encompasses all steps of the eDiscovery process, including the identification, preservation, collection, processing, review, analysis, and the final production of the requested data archive. It attempts to address the enormous volume of data that organizations produce and hold, the complex and disparate methods by which this data is stored and processed, as well as the challenges with how quickly and easily electronic data and records can be produced, stored, shared, and destroyed. With cloud computing very often crossing jurisdictional boundaries—either with the customer base or the distributed nature of cloud computing and the cloud provider using geographically diverse hosting locations—an internationally published and accepted standard will give both cloud providers and cloud customers a contractual component and basis for standardization. This allows cloud providers to structure their support and operations around the framework and best practices put forward, and to advertise that compliance to cloud customers and potential customers.

CSA Guidance

The Cloud Security Alliance has published, as part of its “CSA Security Guidance” series, a publication that addresses eDiscovery titled “Domain 3: Legal Issues, Contracts, and Electronic Discovery.” This document outlines specific cloud-based concerns for eDiscovery, issues specific to the cloud provider and cloud customer, and approaches to take to ensure compliance with eDiscovery orders, along with the pitfalls and challenges that accompany them.

A big challenge with eDiscovery compliance is that orders for data collection and preservation are focused on data that an organization owns and controls. With a traditional data center, this is much clearer because the organization controls and owns the systems—from the hardware and network levels, all the way through the application level. With a cloud environment, even with IaaS, the cloud customer only owns and controls a subset of data, with the cloud provider maintaining ownership and control over the rest, usually without any access provided automatically to the cloud customer. With this setup, in many jurisdictions, one eDiscovery order may be required for the cloud customer and another for the cloud provider, in order to get the entire set of data. The contract between the cloud customer and cloud provider should clearly delineate the responsibilities of both parties as far as data ownership and collection are concerned, and what mechanisms and requests are necessary should an eDiscovery order be received. In some jurisdictions, or depending on the contracts and expectations of other tenants within the same cloud, separate court orders or eDiscovery orders may be necessary to gain access to the infrastructure-level data and logs.

Another big factor is the additional time that may be required in a cloud environment for data preservation and collection. This is due to the complexities of a cloud environment, but also due to the particular tools and access levels available to the cloud customer. In a traditional data center, the system administrators and privileged staff will have full access to the entire system and all components of it, including the network layer. With full access, a wide assortment of eDiscovery tools and utilities are available, both open source and proprietary. With a cloud environment, many of these tools will not be available to the cloud customer due to access restrictions, and may need to be loaded and executed by a cloud administrative user instead. Accounting for the time it will take to make separate requests to the cloud provider for information and assistance, the cloud customer will need to be prepared to properly respond to a requesting authority as to how long data collection will take, as well as know when to request extensions should time requirements prove to be too aggressive.

Cost is also a big factor in regard to the resources required for the preservation of data—namely, storage costs within a cloud. Depending on the size and scope of an eDiscovery order, the possibility exists that substantial storage space could be required to collect and consolidate all of the data, especially if it involves forensically valid full system images or binary data, which does not carry the benefit of compression. The SLA and contract between the cloud provider and cloud customer should spell out how costs and operations are handled during eDiscovery orders, as well as the time limitations, if any, on that temporary data use and preservation. Apart from storage costs, some cloud environments are also built on pricing models that incorporate the amount of data transferred into and out of the resources consumed by the cloud customer. If large amounts of data need to be transferred out to fulfill the eDiscovery order, the SLA and contract should spell out how those costs are handled under the circumstances and whether the handling differs at all from the typical pricing and metering that applies to the cloud customer.
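
To make the scale of these costs concrete, the following back-of-the-envelope calculation in Python uses entirely hypothetical prices and data volumes; actual rates vary widely by provider, region, and contract:

# Hypothetical prices; real cloud pricing varies by provider, tier, and region.
STORAGE_PER_GB_MONTH = 0.02   # assumed $/GB-month for preservation storage
EGRESS_PER_GB = 0.09          # assumed $/GB for data transferred out

preserved_gb = 50_000         # 50 TB of uncompressed forensic images
months_held = 6               # assumed retention period pending the proceeding
exported_gb = 10_000          # 10 TB delivered to the requesting party

storage_cost = preserved_gb * STORAGE_PER_GB_MONTH * months_held
egress_cost = exported_gb * EGRESS_PER_GB
print(f"Preservation storage: ${storage_cost:,.2f}")  # $6,000.00
print(f"Egress for export:    ${egress_cost:,.2f}")   # $900.00

Even under these modest assumptions, preservation and export introduce costs well outside normal operations, which is why the contract and SLA should address them explicitly.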

A major factor from a legal standpoint, as well as one that will be of critical importance to the cloud customer, is notification upon the receipt of any subpoena or eDiscovery order. The contract should include requirements for notification by the cloud provider to the cloud customer upon the receipt of such an order. This serves a few important purposes. First, it keeps communication and trust open between the cloud provider and cloud customers. More importantly, though, it allows the cloud customer to potentially challenge the order if they feel they have the grounds or desire to do so. Without immediate notification by the cloud provider upon the receipt of such an order, a challenge by the cloud customer against the requesting body may be eliminated or made more difficult. Of course, there may be instances where a legal order specifically precludes the cloud provider from notifying the cloud customer; upon receipt of such an order, its terms will override the terms of the contract and require compliance by the cloud provider.

Forensics Requirements

Forensics is the application of scientific and methodical processes to identify, collect, preserve, analyze, and summarize/report digital information and evidence. It is one of the most powerful tools and concepts available to a security professional for determining the exact nature, method, and scope of a security incident within any application or system. With a traditional data center, where the organization has full access and control over systems (especially the physical aspect of those systems), forensic collection and analysis are far simpler and more straightforward than in a cloud environment. Determining the location of the data and systems involved within a traditional data center will be far easier, as will isolating and preserving systems or data during collection and analysis.

Within a cloud environment, not only is determining the exact location of systems and data far more complex, but the methods of isolation and preservation also differ significantly from a traditional server model. Because forensic data collection occurs at the management or administrative level of a system, the involvement and cooperation of the cloud provider is absolutely essential, regardless of the specific cloud hosting model employed by the cloud customer.

With forensics in the cloud, the complexities and challenges spelled out previously for eDiscovery also apply. The particular challenge with forensics is that cloud providers may be unable or unwilling to provide information of this nature to a cloud customer because doing so might violate the agreements, privacy, or confidentiality of other cloud customers that are tenants within the same systems. With eDiscovery, given the nature of court orders and subpoenas, cloud providers are required to provide information to comply with the order. With forensics, if the request comes only from the cloud customer for their own investigations or purposes, it is quite possible the cloud provider will decline to provide such information if there are concerns about the impact on other tenants. This is a primary reason why it is vital for the Cloud Security Professional to ensure that contractual requirements empower the cloud customer to obtain the information they might need; otherwise, they must accept the risk and reality of not being able to provide the same level of forensic investigation as they would in a traditional data center environment.

Understand Privacy Issues

Privacy issues can vary greatly among different jurisdictions. This variation can pertain to the types of records and information protected as well as the required controls and notifications that apply.

Difference Between Contractual and Regulated Personally Identifiable Information (PII)

Whether a system and its data are hosted in a traditional data center model or in a cloud hosting model, the owner of the application is responsible for the security of any PII data that is processed by or stored within their application and related services. While the concept of PII is widely understood, regardless of jurisdictional or legal requirements, there are two main types of PII, and they have differing approaches and requirements.

Contractual PII

Contractual PII has specific requirements for the handling of sensitive and personal information, as defined at the contractual level. These specific requirements will typically document required handling procedures and policies for dealing with PII. The requirements may take the form of specific security controls and configurations, required policies or procedures, or limitations on who may be granted authorized access to data and systems. Because these requirements are part of the contract, auditing and enforcement mechanisms will be in place to ensure compliance. Failure to follow contractual PII requirements can lead to contractual penalties or a loss of business for an organization.

Regulated PII

Regulated PII has requirements put forth by specific laws or regulations. Unlike contractual PII, where a violation can lead to contractual penalties, a violation of regulated PII can lead to fines or even criminal charges in some jurisdictions. PII regulations can depend on the jurisdiction that applies to the hosting location or application, or on specific legislation based on the particular industry or type of data used. Regulated PII will typically have requirements for reporting any compromise of data, either to an official government entity or possibly to the impacted users directly.

Protected Health Information (PHI)

Protected health information (PHI) is a special subset of PII that applies to any entity defined under the United States HIPAA laws. It pertains to any information that can be tied to an individual regarding their past, present, or future health status, and covers all data that is created, collected, transmitted, or maintained by an entity covered under the law. This can include a wide range of data, such as account numbers, diagnoses, and test results, as well as the PII that connects them to the individual.

Country-Specific Legislation Related to PII and Data Privacy

While many countries have laws that protect and regulate PII and data privacy, there can be significant variance among various jurisdictions as to what is required or allowed. The following is not an exhaustive or complete list of all countries and regulations; instead, it is a sampling of the major and most prominent jurisdictions and their respective regulations.

United States

The United States lacks a single law at the federal level addressing data security and privacy, but there are multiple federal laws that deal with different industries and types of data that the Cloud Security Professional needs to be aware of. It is also important to note that, unlike many other countries, the United States has very few laws on housing data within specific geographic areas, so data can often be housed on systems outside the United States, even though the individuals the data pertains to reside within, and the applications are accessed from within, the borders of the United States.

The Gramm-Leach-Bliley Act (GLBA)   The Gramm-Leach-Bliley Act (GLBA), as it is commonly called, based on the names of the lead sponsors and authors of the act, is officially known as the Financial Services Modernization Act of 1999. It is specifically focused on PII as it relates to financial institutions. There are three specific components of GLBA, covering various areas of use, on top of a general requirement that all financial institutions must provide all users and customers with a written copy of their privacy policies and practices, including with whom and for what reasons their information may be shared with other entities. The first component is the Financial Privacy Rule, which regulates the overall collection and disclosure of the financial information of customers and users. The second component is the Pretexting Provision, which prevents an organization from accessing, or attempting to access, PII based on false representation or pretexts to customers or potential customers. The last component is the Safeguards Rule, which places a requirement and burden on financial institutions to enact adequate security controls to protect the privacy and personal information of their customers.

The Health Insurance Portability and Accountability Act of 1996 (HIPAA)   HIPAA requires the U.S. Department of Health and Human Services to publish and enforce regulations pertaining to electronic health records and identifiers used between patients, providers, and insurance companies. It is focused on the security controls and confidentiality of medical records, rather than on the specific technologies used, so long as those technologies meet the requirements of the regulations.

The Sarbanes-Oxley Act (SOX)   SOX is not an act that pertains to privacy or IT security directly; rather, it regulates accounting and financial practices used by organizations. It was passed to protect stakeholders and shareholders from improper practices and errors, and it sets forth rules for compliance, as regulated and enforced by the Securities and Exchange Commission (SEC). The main influence on IT systems and operations is the requirements it sets for data retention, specifically in regard to what types of records must be preserved and for how long. This will impact IT systems as far as the requirements for data preservation and the ability to read the preserved data, and in particular with cloud computing, the need to ensure that all required data can be accessed and preserved by the cloud customer, or the acceptance of that burden by the cloud provider. SOX also places substantial requirements—in some cases well beyond those already required from other regulatory or certification requirements—on virtually all aspects of a financial system’s operations and controls. These apply to networks, physical access, access controls, disaster recovery, and any other aspect of operations or security controls.

European Union (EU)

The EU has some of the most stringent and specific requirements as far as data privacy and protection of the confidentiality of PII are concerned, as well as strict requirements that the data not be shared or exported beyond its borders to any jurisdiction that does not have adequate and similar protections and regulations of its own. This can have an enormous impact on cloud hosting, as it places the burden on an organization to know exactly where its processes or data will be housed at all times, and to ensure that it does not violate the EU requirements for geographic and jurisdictional hosting. This can be very difficult in large cloud implementations, which often span many geographic areas.

General Data Protection Regulation (GDPR)   The GDPR (EU 2016/679) is a regulation and law, covering the European Union and the European Economic Area, pertaining to data protection and privacy. The GDPR is a uniform law throughout the EU and covers all countries, citizens, and areas under its jurisdiction, regardless of where the data is created, processed, or stored. The law places the burden for the technical and operational controls needed for its protection and enforcement on the entities that use and store the data.

Under the GDPR, organizations must make known to users what data they are collecting and for what purpose, whether it will be shared with any third parties, and what their policies are for data retention. The GDPR grants individuals the right to obtain a copy of the data that an organization is storing about them, as well as the right to request deletion of that data in most instances.

Under Article 33 of the GDPR, data controllers are required to notify the applicable supervisory authorities within 72 hours of becoming aware of any data breach or leak of personal and private information. Importantly, this applies in instances where the data is readable and usable by a malicious party, but not in cases where the data is obfuscated or encrypted.

The GDPR does provide exemptions for law enforcement and national security agencies.

Russia

Effective as of September 1, 2015, Russian Federal Law 242-FZ (with its effective date set by Law 526-FZ) establishes that any collecting, storing, or processing of personal information or data on Russian citizens must be done from systems and databases that are physically located within the Russian Federation.

Differences Among Confidentiality, Integrity, Availability, and Privacy

The three core aspects of security are confidentiality, integrity, and availability. As more data and services have moved online, especially with the explosion of mobile computing and apps that utilize sensitive information, privacy has become a fourth key aspect. All four work closely together, and depending on the particular specifics of an application and the data it utilizes, the relative importance of each will vary.

Confidentiality

In short, confidentiality involves the steps and effort taken to limit access to sensitive or private information. The main goal of confidentiality is to ensure that sensitive information is not made available or leaked to parties that should not have access to it, while at the same time ensuring that those with appropriate need and authorization can access the same information in a manner commensurate with their needs and confidentiality requirements; the latter is also part of the availability principle.

When it comes to granting access to appropriate users, there can be different levels of access; in other words, confidentiality is not an all-or-nothing granting of data access to users. Based on the particular data sets and needs, users can have different access levels or face more stringent security requirements, even within a single application, based on different data types and classifications. With some applications, even for users approved to access particular data fields or sets, there may be further technological limitations or restrictions on access. For example, a privileged user may have access to an application and sensitive data within it, but certain data fields may carry the requirement that the user connect through a particular VPN or a particular internal network. Other data within the application may be available even outside those requirements; the enhanced restrictions are applied only to particular portions of the application or particular data sets or fields.
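
As a minimal illustration of this kind of tiered access, the following Python sketch applies a stricter network requirement to certain fields; the field classifications, role name, and internal address range are all hypothetical assumptions, not a prescribed design:

import ipaddress

# Hypothetical classification: fields that demand an internal/VPN connection.
RESTRICTED_FIELDS = {"ssn", "salary"}
INTERNAL_NETWORK = ipaddress.ip_network("10.0.0.0/8")  # assumed VPN/internal range

def can_read_field(user_roles: set, field: str, client_ip: str) -> bool:
    """Allow only authorized roles, and require that restricted
    fields be read from the internal network or VPN."""
    if "data_reader" not in user_roles:
        return False
    if field in RESTRICTED_FIELDS:
        return ipaddress.ip_address(client_ip) in INTERNAL_NETWORK
    return True

# The same privileged user can read 'email' from anywhere,
# but 'ssn' only from the assumed internal range.
print(can_read_field({"data_reader"}, "email", "203.0.113.7"))  # True
print(can_read_field({"data_reader"}, "ssn", "203.0.113.7"))    # False
print(can_read_field({"data_reader"}, "ssn", "10.1.2.3"))       # True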

While a main focus of confidentiality revolves around technological requirements or particular security methods, an important and often overlooked aspect of safeguarding data confidentiality is appropriate and comprehensive training for those with access to it. Training should be focused on the safe handling of sensitive information overall, including best practices for network activities as well as the physical security of the device or workstation used to access the application. Training should also focus on specific organizational practices and policies for data access, especially if any aspect of data storage or persistence is a factor with the specific application or types of data it utilizes. Although it should be evident, best practices such as strong passwords, at a minimum, should be stressed in general security training specifically geared toward confidentiality. Training should also include awareness of social-engineering attacks to fill in the gaps beyond technological security measures.

Integrity

Whereas confidentiality focuses on the protection of sensitive data from inappropriate disclosure, integrity is focused on the trustworthiness of data and the prevention of unauthorized modification or tampering. The same focus on the importance of access controls applies to integrity as well, but with more of an emphasis on the ability to write and modify data, rather than simply the ability to read the data. A prime consideration for maintaining integrity is an emphasis on the change management and configuration management aspects of operations, so that any and all modifications are predictable, tracked, logged, and verified, whether they are performed by actual human users or systems processes and scripts. All systems should be implemented in such a manner that all data modification operations and commands are logged, including information as to who made the modification.
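
The following is a minimal sketch, in Python, of what such modification logging could look like; the record store, field names, and actor identifier are hypothetical placeholders rather than a prescribed design:

import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

def update_record(db: dict, record_id: str, field: str, new_value, actor: str):
    """Apply a change and log who changed what, when, and the old/new values."""
    old_value = db.get(record_id, {}).get(field)
    db.setdefault(record_id, {})[field] = new_value
    audit_log.info(
        "%s | actor=%s | record=%s | field=%s | old=%r | new=%r",
        datetime.now(timezone.utc).isoformat(), actor, record_id, field,
        old_value, new_value,
    )

db = {}
update_record(db, "cust-42", "address", "1 Main St", actor="jsmith")

In a real system, the audit trail would be written to tamper-evident, append-only storage rather than a standard log, so that the integrity of the log itself can be trusted.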

With many systems, especially where downloading of data, executable code, or packages is permitted or desired, the ability to verify the integrity of those downloads is very important to the end users. To facilitate this verification, technologies such as checksums are in widespread use. With a checksum, a hash value is computed over the specific package, and that known hash is published by the entity offering the data. The user downloading the data can then perform the same hashing operation on the file they have downloaded and compare the result to the published value from the vendor. Any discrepancy in values is an immediate indication that the file, as downloaded by the user, is not the same file being offered by the vendor. This is especially important with scripts or executable code, to ensure that no malware has been injected into the package and that no modifications have been made by a malicious actor at some point in the process.
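
As a concrete sketch of that verification step, the following Python example computes a SHA-256 digest of a downloaded file and compares it to the vendor's published value; the file name and published hash shown here are hypothetical placeholders:

import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical value: the vendor publishes this hash alongside the download.
PUBLISHED_HASH = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

downloaded = sha256_of_file("package.tar.gz")  # hypothetical file name
if downloaded == PUBLISHED_HASH:
    print("Integrity verified: digests match.")
else:
    print("WARNING: digest mismatch; the file may have been altered.")

Any single-bit change to the file produces a completely different digest, which is what makes this comparison a reliable tamper indicator.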

NOTE    A common misconception is that integrity and confidentiality are always used together or correlated for a data set. However, this is not always the case. For example, the medicare.gov website offers downloads of quality rating databases for researchers or anyone else who might desire the data. These are complete sets of data, the same as that used by the tools on the website. As such, these data sets have no level of confidentiality attached to them at all. However, their integrity is extremely important. While the data is freely and completely available to anyone, altering the data could have an enormous impact on healthcare facilities or providers. Consider a hospital that is graded on a one-to-five-star quality rating. A hospital that has a high rating, such as a 4 or 5, could find itself facing considerable negative publicity should the website display a rating of 1 or 2 stars, due to either inadvertent or purposeful modification of the intended data.

Availability

Availability is often not understood as part of security because it is typically viewed as an operations concern. However, in order for a system to be considered secure and to operate in a secure manner, it must provide robust accessibility to those who are authorized to have it, and do so in a trustworthy manner. A few different components make up the concept of availability.

The first and most obvious consideration is the availability of the live production system. This is accomplished through redundancy of hardware, networks, and data storage systems, as well as of supporting systems such as authentication and authorization mechanisms. Apart from redundancy, in order to ensure availability, production systems must also have adequate resources to meet their demands within acceptable performance metrics. If the system is overloaded or unable to handle attacks such as a denial of service, then redundancy will not matter in the larger scheme of things, because the system will be unavailable to the users and services that rely on it or its data. The ability to mitigate such attacks is also a prime reason why availability falls under security concepts. Redundancy is also vital for proper maintenance and patching of systems. Without redundancy that allows those functions to be performed without incurring downtime, organizations will delay vital security and system patching until downtime is more palatable to management. Doing so can expose systems and data to higher risk because known security vulnerabilities are left in place.

Disaster recovery and business continuity are also vital concerns with availability. Within security, they ensure the protection of data and the continuity of business operations. If data is destroyed or compromised, having regular backup systems in place, as well as the ability to perform disaster recovery in the event of a major or widespread problem, will allow operations to continue within an amount of downtime and data loss acceptable to management, while also ensuring that sensitive data is protected and persisted in the event of a loss or corruption of data systems or physical storage systems. Compared to traditional data centers, cloud services offer unique opportunities for distributed data models and independence from particular physical locations and assets, making system reliability and disaster recovery options more streamlined and robust.

Privacy

Privacy used to be considered part of confidentiality, but with the prevalence of online services and especially mobile computing, the need for a separate category for privacy has become imperative. The growth of strong personal privacy laws in many jurisdictions has also created the necessity for a unique focus. The concept of privacy overall relates to an individual’s control over their own information and activities, versus the information that an organization would have in its own data stores, which is governed under confidentiality.

A central concept of privacy is the right to browse online anonymously, but also the right to be forgotten by a system once you are done using it. This includes any information about yourself, your means of access, your location, and so on. The most stringent protections and privacy requirements currently in force are those of the European Union, with other jurisdictions and regulatory requirements being less stringent or less strictly enforced. With many applications needing information about a user’s location, device, or client used for access, there is a constant battle between utilizing and storing the information needed to conduct services and data access, while at the same time enabling users to retain control over their own privacy and personal information in a way that complies with regulatory requirements.

Standard Privacy Requirements

Standards are established by industry groups or regulatory bodies to set common configurations, expectations, operational requirements, and definitions. They form a strong body of understanding and collaboration across jurisdictional boundaries, and they allow users and cloud customers to evaluate services and cloud providers based on external and independent criteria, with the understanding that the awarding of a certification ensures that standards have been met and verified by an external party.

ISO/IEC 27018

ISO/IEC 27018 is an international standard for privacy in cloud computing. It was first published in 2014, is part of the broader ISO/IEC 27000 family of standards, and is a certification that cloud providers can adhere to. The standard is focused on five key principles:

•   Communication   Any events that could impact the security and privacy of data within the cloud environment are clearly documented in detail, as well as conveyed to the cloud customers per requirements.

•   Consent   Despite all information being on systems that are owned and controlled by the cloud provider, no data or information about the cloud customer can be used in any way, including for advertising, without the express consent of the cloud customer. This also extends to users and cloud customers being able to use the cloud resources without having to consent to such use of their information as a precondition of service.

•   Control   Despite being in a cloud environment, where the cloud provider owns and controls the actual infrastructure and storage systems, the cloud customer retains complete and full control over their data within the environment at all times.

•   Transparency   With the lack of full control within a cloud environment by the cloud customer, the cloud provider bears the responsibility for informing them about where their data and processes reside, as well as any potential exposure to support staff, and especially to any subcontractors.

•   Independent and yearly audit   To assure cloud customers and users as to its certification and protection of data privacy, the cloud provider must undergo a yearly assessment and audit by a third party.

Generally Accepted Privacy Principles (GAPP)

GAPP is a privacy standard, focused on managing and preventing risks to privacy, that was developed by a joint privacy task force of the American Institute of Certified Public Accountants (AICPA) and the Canadian Institute of Chartered Accountants (CICA). The standard contains ten main privacy principles, as well as over 70 privacy objectives and associated methods for measuring and evaluating criteria. These ten generally accepted privacy principles form a basis for security and privacy best practices for an organization:

•   Management   The organization clearly documents, reviews, and communicates its privacy policies and procedures to the necessary parties, and establishes official measures and criteria for accountability.

•   Notice   Whether by regulation, law, or best practices, the organization publishes and makes available to interested or required parties its privacy policies, including what information it collects, stores, shares, and securely protects or destroys after use.

•   Choice and consent   With any systems or applications that collect or use personal information, the choice is clearly presented to the user to decide if they want to disclose their information. This ideally will require active consent on behalf of the user to share information, but it should also make clear to users what limitations on the use and processing of information through the application or system they will face if they choose not to disclose information.

•   Collection   The organization has policies and procedures in place to ensure that any personal information that is collected is used only for the express purposes stated and known to the user, and any additional use is prohibited without additional informed consent.

•   Use, retention, and disposal   Any personal information that is collected, and collected only after affirmative consent, is only used for the purposes stated, and as soon as it is no longer needed, the information is securely removed following security best practices or any requirements from regulation or law.

•   Access   As required by regulation or policy, the organization makes available to an individual the personal information that it has collected on them for review, and then allows for any modifications, updates, or removal requests.

•   Disclosure to third parties   With many modern applications, external services or components are often used and integrated throughout an application. Where personal information needs to be shared with third parties in this manner, that notice is made to the user and their consent is required for such a disclosure.

•   Security for privacy   Any information that is used or stored by a system or application is protected with stringent security measures to ensure its confidentiality.

•   Quality   Whereas security protects the confidentiality of sensitive and personal information within a system or application, the quality principle is focused on integrity and ensuring that the organization has accurate and correct information on individuals that use it, and that it is correctly processed and used.

•   Monitoring and enforcement   As with any policy or best practice, proactive and accurate monitoring is required to ensure it is applied correctly and enforced. This also includes having processes in place to resolve compliance problems or complaints and disputes from users about the use of their information.

Understand Audit Processes, Methodologies, and Required Adaptations for a Cloud Environment

Many audit practices and requirements are the same whether the system is deployed in a traditional data center model or within a cloud environment. Regulations and laws are written in a way that is agnostic to the underlying hosting model, and they focus on the ways applications must be secured and data protected. However, because a cloud environment has unique qualities, features, and challenges, different approaches and strategies are needed to successfully conduct audits within it.

The Cloud Controls Matrix, published by the Cloud Security Alliance, provides a detailed approach and framework for cloud customers, with a focus on controls that are pertinent and applicable to a cloud environment. Many of the certifications that are well known throughout the IT industry are also very applicable and adaptable to cloud environments.

Internal and External Audit Controls

With any data center and application, there will be a variety of both internal and external controls that are vital and required for security. These controls must be audited and evaluated regularly to ensure continued applicability as well as compliance. Internal audits can be used to ensure corporate policies and mandates are being properly executed and adhered to for the satisfaction of management. They are also useful for gauging the efficiency and effectiveness of internal policies and procedures, and for allowing management to find new ways to expand the implementation of controls and policies, or to rectify problems that are causing additional costs or overhead. Internal audits are also very useful for planning future expansions of services or upgrades within the environment.

Independent external audits of controls will be necessary for customer assurance and compliance with regulatory or certification programs. An external audit will evaluate the IT system and policy controls, but will not address the same types of issues that an internal audit will, such as operating efficiency, costs, and design or expansion plans.

Impact of Audit Requirements

The use of a cloud environment will likely have a profound effect on how auditors have traditionally operated and conducted audits. A typical component of many legacy audits is having the audit team on location at the data center or physically on the same network, where they can directly scan and probe systems, without going over the public network. However, in a cloud environment, this is not possible because the organization will not have the level of control or access over the physical environment; coupled with the reality of geographic distribution of cloud environments, this makes being on site virtually impossible. Another big difference with the cloud environment is the use of virtual servers and images, which can and will change often over time. This makes repeated audits or later verification significantly more difficult, as the state of a system in its current form may look substantially different than when the original audit was conducted. What’s more, the virtual machine that was originally tested may no longer exist at all.

Identify Assurance Challenges of Virtualization and Cloud

Virtual machines and cloud environments pose enormous challenges to auditing and scanning as compared to traditional data centers. Many application audits within a virtualized environment will span many virtual machines and often different geographic locations. Even those within the same physical location will span multiple different physical servers, with different hypervisors controlling some subset of the overall environment. The challenge presented is how to audit and ensure compliance without testing the entire environment, which can also be very fluid and change rapidly. This is further complicated with a lack of access to the physical environment within a cloud, where the cloud customer and auditors working on behalf of the cloud customer will have very limited access, or even no access at all, to the underlying physical environment.

In order to map out an audit plan, an auditor also needs to have full and complete documentation as to the structure and architecture of a system or application. While this is very easy to do within a traditional data center, it poses a significant challenge within a cloud environment. The cloud customer will not be aware of the underlying architecture, and almost no cloud provider will be willing to disclose this information either, because doing so exposes information and security controls that pertain to all the other tenants within the same cloud environment. The main strategy to deal with this is to rely on audits and certifications for the underlying environment that are done in conjunction with the cloud provider, and thus can be used by all tenants as the basis for their own audits.

Audits of the underlying cloud environment will test and validate the security hardening and configurations of the physical assets and their associated systems, such as hypervisors. The cloud provider can then publish to their customers, or to the public, the audit reports and some information about their baseline configurations. The vast majority of information and audit reports will likely be restricted to only current customers or potential customers, and only after the signing of nondisclosure agreements to protect the underlying environment and other tenants. The use of accepted industry standard certifications will also give customers an immediate insight into the type of testing and controls in place without the need to see specific audit reports and results, as the standards and evaluation criteria are public and well known.

Apart from providing audit reports and certifications, a cloud provider can build customer assurance by conducting patching and upgrades in a timely manner. With security risks and vulnerabilities changing and emerging on a continual basis, this is an important area that needs constant attention. Although this is largely governed through the use of contracts and SLAs, the verification of adherence is the important part. Therefore, the cloud provider will need to establish a program that is satisfactory to its customers to provide this assurance.

Types of Audit Reports

Several different audit reports have been standardized throughout the industry. Although they differ somewhat in approach and audience, they share a similar design and serve a similar purpose.

SAS

The Statement on Auditing Standards (SAS) Number 70, commonly known as SAS 70, is a standard published by the American Institute of Certified Public Accountants (AICPA) and is intended to provide guidance for auditors specifically when analyzing service organizations. Within this context, the definition of a “service organization” is intended to include those that provide outsourced services to an organization, where the services provided impact data and processes within their controlled and secured environment (in this case, the specific application is in regard to hosted data centers). It is also commonly known as a “service auditor’s examination.”

There are two types of reports under SAS 70. A Type 1 report is focused on the auditor’s evaluation of the service organization’s declarations and the security controls it has put in place, along with the auditor’s opinion on the effectiveness of the design of the controls in meeting the objectives stated by the service organization. A Type 2 audit includes the same information and evaluation, but adds the auditor’s further evaluations and opinions on the effectiveness of the controls in actually meeting the service organization’s control objectives. So whereas a Type 1 audit focuses on how effective the design of the controls is in meeting objectives, the Type 2 audit adds a qualitative assessment of the actual effectiveness of the controls as implemented.

There are multiple reasons why an organization would undergo an SAS 70 audit. A primary reason is to present audit reports to current or potential customers as to the state of controls designed and implemented. The report serves as an independent, outside evaluation of the environment as a way of gaining the approval of customers and providing assurance. This is the intended application of the audit reports and their design. However, many service organizations have expanded their use beyond this for other purposes as well. Many regulatory and legal systems require audits and reports from organizations as to their controls and effectiveness. In some instances, these requirements are based on the organization providing evidence that it is implementing adequate oversight for data protection and privacy, and these types of reports can be used to fulfill and meet those requirements. Examples include regulatory schemes such as Sarbanes-Oxley (SOX), as well as frameworks such as ITIL and COBIT, but there are many others where these types of audit reports can fulfill requirements for providing assurances with independent oversight.

NOTE    SAS 70 reports were the industry standard for many years, but were replaced in 2011 by the SSAE 16 reports, which are covered in the next section. However, because they are so well known and were standardized in the industry, the information is provided here for historical context, as well as to show the evolution of auditing and oversight with regulatory requirements. The Cloud Security Professional should have a sound understanding of the past use and intent of SAS 70 reports, even though they have been deprecated.

SSAE

The Statements on Standards for Attestation Engagements (SSAE) 16 standard replaced SAS 70 in 2011 and was itself updated to SSAE 18 on May 1, 2017; SSAE 18 is the standard most organizations in the United States now use. Rather than focusing on specific control sets, the SSAE 18 is focused on auditing methods. Much like the SAS 70 reports, the SSAE 18 reports are largely used to help satisfy regulatory requirements, such as Sarbanes-Oxley (SOX), for auditing and oversight of financial systems. The main change from SSAE 16 to SSAE 18 was the addition of a concrete requirement for organizations to conduct formal third-party vendor management programs and implement a formal annual risk assessment process.

Whereas the SAS 70 was known as the “service auditor’s examination,” the SSAE 18 reports are known as System and Organization Controls (SOC) reports (called Service Organization Control reports under SSAE 16), and there are three different types: SOC 1, SOC 2, and SOC 3.

SOC 1   SOC 1 reports are effectively the direct replacement for SAS 70 reports and are focused specifically on internal controls as they relate to financial reporting. SOC 1 reports are considered restricted-use reports, in that they are intended for a small and limited scope of controls auditing and are not intended to be expanded into greater use. For uses beyond financial reporting, the SOC 2 and SOC 3 reports should be used instead.

The audience for these restricted-use reports is defined as follows:

•   The management and stakeholders of the company or organization having the SOC 1 reports done, known in this instance as the “service organization.”

•   The clients of the service organization having the reports done.

•   The auditors of the financial organization having the SOC 1 reports done. This is where the use of SOC 1 reports to assist in compliance with regulations, such as SOX, comes into play.

SOC 1 reports, like SAS 70 reports, also have two subtypes:

•   Type 1 reports   These are focused on the policies and procedures at a specific and set point in time. They evaluate the suitability of the design of an organization’s controls and verify that the controls were in place at that specific time.

•   Type 2 reports   These cover the same policies and procedures as Type 1 reports, but also evaluate their operating effectiveness over a period of at least six consecutive months, rather than at a finite point in time.

To instill confidence in the viability of the audit reports, most users and external organizations will only accept Type 2 reports, as they are more comprehensive than the single-point-in-time Type 1 report.

SOC 2: Trust Services Criteria   SOC 2 reports expand greatly on SOC 1 reports and apply to a broad range of service organizations and types, whereas SOC 1 reports are restricted to controls relevant to financial reporting. The SOC 2 is modeled around four broad areas: policies, communications, procedures, and monitoring. The basis of SOC 2 reporting is a model that incorporates “principles.” With the last update to SOC 2 in 2017, five principles were established. Under the guidelines, the security principle must be included with any of the following four to form a complete report:

•   Availability   The system has requirements and expectations for uptime and accessibility, and it is able to meet those requirements within parameters set by contract or expectation.

•   Confidentiality   The system contains confidential or sensitive information, and information is properly safeguarded to the extent required by regulation, law, or contract.

•   Processing integrity   The system processes information, and it does so in a manner that is accurate, verified, and done only by authorized parties.

•   Privacy   The system uses, collects, or stores personal and private information, and does so in a manner that conforms to the organization’s stated privacy policy, as well as any pertinent regulations, laws, or standards requirements.

The security principle itself is then made up of seven categories:

•   Change management   How an organization determines what changes are needed as well as how they are approved, implemented, tested, and verified. The goal is to ensure that all changes are done in a methodical and controlled manner with appropriate approvals, with safeguards to prevent unauthorized changes.

•   Communications   How an organization communicates all aspects of its operations to stakeholders and users, including policies, procedures, outages, system statuses, or any other contractually obligated or expected communications.

•   Logical and physical access controls   How an organization implements controls related to physical and logical access to systems and applications, including policies and procedures related to the granting, authorizing, and revoking of access.

•   Monitoring of controls   How an organization oversees and verifies the controls it has implemented, ensuring their correct configuration and application, as well as looking for methods to improve upon them.

•   Organization and management   How the organization is structured and managed, as well as oversight over individual personnel. This includes how personnel are selected, verified, and supervised as they perform their job duties within their environment.

•   Risk management and design and implementation of controls   How an organization handles risks that may impact its systems and data—from identifying, evaluating, and responding to them, to, in some cases, accepting them.

•   System operations   How an organization implements and monitors all of its IT systems and applications, as well as ensures they are running properly and performing to expectations or requirements.

Similar to SOC 1 reports, the SOC 2 reports are considered “restricted use” for the internal review of the organization. They also contain two subtypes:

•   Type 1 reports   These reports are focused on the systems of a service organization, coupled with the design of the security controls for them and an evaluation as to their suitability from a design and intent standpoint.

•   Type 2 reports   These reports are based on the design and application of security controls on the service organization’s systems and an evaluation as to their effectiveness from an operational standpoint.

SOC 3   SOC 3 reports are similar to SOC 2 reports in scope, design, and structure. The main difference between SOC 2 and SOC 3 is the intended audience. Whereas SOC 2 reports are meant to be internal or restricted to an organization or regulatory oversight body, SOC 3 reports are designed for general use. This basically means that SOC 3 reports will not contain sensitive or proprietary information that a service organization would be unwilling to release for open review.

ISAE

The International Standard on Assurance Engagements (ISAE) 3402, published by the International Auditing and Assurance Standards Board (IAASB), defines reports that are very similar in nature and structure to the SOC reports and are likewise designed as a replacement for the SAS 70 reports. Given that they serve the same function and have mostly the same structure, the largest difference in use between the SOC and ISAE reports is that SOC is mostly used in the United States, whereas ISAE is used more internationally.

Similar to SOC reports, the ISAE reports have two subtypes:

•   Type 1 reports   These are aligned with SOC Type 1 reports, in that they are based on a snapshot of a single point in time.

•   Type 2 reports   These are aligned with SOC Type 2 reports in scope and intent, and typically cover a period of six months to show the management and use of controls over that period of time.

Restrictions of Audit Scope Statements

Before any audit can be undertaken, it is vital for the organization and its auditors to define the scope of the audits, as well as any restrictions on what is covered by and subjected to the audit process and testing. This is done during the initial stages of the audit process, where an audit scope statement will be developed.

The audit scope statement is produced by the organization, and it serves to define for the auditors exactly what will be covered and required as part of the audit. This will incorporate organizational goals and expectations, as well as any audit requirements specified by regulation or law. The audit scope typically includes the following items:

•   Statement of purpose   An overall summation and definition for the purpose of the audit. This serves as the basis for all aspects of the audit, as well as the audience and focus of the final reports.

•   Scope of audit   This defines what systems, applications, services, or types of data are to be covered within the scope of the audit. It is an affirmative statement of inclusion, informing the auditors of the structure and configuration of the items to be audited, but it can also list any exclusions or scope limitations. Limitations can apply broadly to the entire audit or exclude certain types of data or queries.

•   Reasons and goals for audit   There can be more than one reason for an audit, such as internal management oversight, assurance for stakeholders or users, or compliance with regulations or laws.

•   Requirements for the audit   This defines how the audit is to be conducted, what tools or technologies are to be used, and to what extent they are to be used. Different tools and technologies will test systems and applications to different levels of impact or comprehensiveness, and it is vital to have an agreed-upon approach, as well as to prepare and monitor any systems and applications during testing.

•   Audit criteria for assessment   This defines how the audit will measure and quantify results. It is vital for the organization and auditors to clearly understand what type and scale of rating system will be used.

•   Deliverables   This defines what will be produced as a result of the audit. The main deliverable will of course be the actual report, but what format or structure the report is presented in needs to be defined. The organization may have specific format or file type requirements, or regulatory requirements may specify exact formats or data types for submission and processing. This area also includes what parties are to receive the audit report.

•   Classification of audit   This defines the sensitivity level and any confidentiality requirements of the audit report and any information or documents used during the preparation or execution of the audit. This can be either organizationally confidential or officially classified by the government as Confidential, Secret, or Top Secret.

Apart from the audit scope statement, the limitations and restrictions on the audit scope are also very important. These define exactly what is subjected to the audit by placing limits on where the auditors can test and how far they can expand based on what they discover during the course of testing. They are also very important for limiting the impact on current systems and operations, ensuring that the audit does not negatively affect systems or data. Most audits are more focused on operational design, policies, and procedures than on actual technical testing and evaluations.

If actual technical testing is to be performed, to limit and restrict impact on systems and operations, it is important to declare times when audit testing can be done, as well as what types of methods are to be used. If an audit is expected to cause increased and noticeable load on a system, leading to performance degradation or user impact, then it is vital to schedule the audit for off-peak times and when system utilization is at its lowest. If possible, testing should always be done against nonproduction systems with nonproduction data. This will remove the possibility of data corruption or impact on users as far as the system’s availability and performance are concerned. In most instances, organizations will strictly prohibit any such testing done against live or production systems, and regulations may prevent testing against any systems that contain real or sensitive data, especially if the testing could lead to any type of potential data leakage or exposure.
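
To make these ideas concrete, the following is a minimal sketch of how an audit scope statement and its testing restrictions might be captured in machine-readable form, with a simple guard that enforces the restrictions before any technical test runs. All field names, system names, and time windows here are hypothetical illustrations, not part of any standard schema.

# Hypothetical machine-readable audit scope statement and restrictions.
audit_scope = {
    "purpose": "Annual compliance audit of customer-facing services",
    "in_scope": ["billing-app", "customer-portal"],
    "excluded": ["hr-system"],              # explicit scope limitation
    "technical_testing": {
        "allowed_targets": ["staging"],     # never live/production systems
        "allowed_window_utc": (2, 5),       # off-peak hours, 02:00-05:00 UTC
    },
}

def testing_permitted(system: str, environment: str, hour_utc: int) -> bool:
    """Check a proposed technical test against the declared scope restrictions."""
    rules = audit_scope["technical_testing"]
    start, end = rules["allowed_window_utc"]
    return (
        system in audit_scope["in_scope"]
        and environment in rules["allowed_targets"]
        and start <= hour_utc < end
    )

print(testing_permitted("billing-app", "staging", 3))     # True
print(testing_permitted("billing-app", "production", 3))  # False: live system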

Gap Analysis

A gap analysis is a crucial step that is performed after all information has been gathered, tested, and verified through the auditing process. This information comes from reviews of documentation, tools for discovery of IT systems and configurations, interviews with key personnel and stakeholders, and the actual audit testing to verify the information provided through these processes. The desired configuration or requirements can come from a variety of sources, including organization policy, contractual requirements, regulatory or certification requirements, and applicable legal requirements.

The gap analysis is then performed to determine if the results found from information discovery and testing match with the configuration standards and policies. Any resulting deviation from them will be considered a finding, or a “gap” between the desired state of a system or its operations and the actual verified current state.
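
As a rough illustration, a configuration-level gap analysis can be thought of as a comparison between the desired baseline and the verified current state, with every deviation recorded as a finding. The sketch below shows this idea using hypothetical setting names; real audits compare far richer evidence than flat key-value settings.

# Each deviation between the desired baseline and the verified current
# state is recorded as a finding (a "gap").
baseline = {                    # desired state (policy, contract, regulation)
    "tls_min_version": "1.2",
    "password_min_length": 14,
    "audit_logging": "enabled",
}

current = {                     # verified state from discovery and testing
    "tls_min_version": "1.0",
    "password_min_length": 14,
    "audit_logging": "disabled",
}

findings = [
    {"control": key, "expected": expected, "actual": current.get(key)}
    for key, expected in baseline.items()
    if current.get(key) != expected
]

for f in findings:
    print(f"GAP: {f['control']}: expected {f['expected']!r}, found {f['actual']!r}")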

EXAM TIP    Remember that a gap analysis and the presentation of audit findings should always be from an impartial and independent actor. While many organizations conduct internal audits, these should be considered for their own purposes or fact-finding efforts, and should never be used for certifications or compliance. Only findings done by someone external and independent, who has no financial or other interests in the results of the audit, should be considered valid and trustworthy. Under virtually all regulatory and certification programs, only audits performed by an independent actor, sometimes with certification requirements of their own, will be accepted as valid for the purposes of compliance.

Audit Planning

The overall audit plan falls into a series of four steps, each with important and sequenced components that drive the overall process to meet the objectives and requirements.

Here are the four steps in the audit plan:

•   Define objectives

•   Define scope

•   Conduct the audit

•   Lessons learned and analysis

Define Objectives

Defining the objectives of the audit includes several steps that lay the groundwork for the entire audit process and drive the process of defining the scope for actually conducting the audit. This step clearly defines and articulates the official objectives of the audit and produces a document attesting to them. The objectives take into account management’s priorities and risk acceptance to ensure they are aligned with what is desired from the audit. They also define the format or formats of the audit report and any other deliverables that will be produced as a result of it. Based on the requirements and the scale of the audit, this process also defines the number of staff needed to conduct the audit, not only on behalf of the auditing group, but also the systems and application teams that need to be available to produce data, answer interview questions, and run scripts or grant whatever access the audit team needs.

EXAM TIP    Make sure you understand that audits can be conducted for different purposes and audiences. In some instances, depending on the regulatory requirements, there may be several different audits that seemingly overlap. This is especially true with government contracts, where many different regulations and agencies share responsibility for oversight of security and enforcement of policies. It is possible that remediating some findings will put other audits in conflict over similar requirements; therefore, some coordination and negotiation may be necessary.

Define Scope

The definition of the audit scope is one of the most important (and most tedious) aspects of the audit process. This is where a very detailed set of rules and information gathering occurs that ultimately and completely drives the entire audit process. If done correctly, a well-defined and detailed scope makes the actual performance of the audit straightforward and efficient, as well as ensures that it successfully meets the goals and objectives set forth by the organization concerning the purpose for conducting the audit.

The following are key concepts and information points for an audit scope. While it is impossible to fully encapsulate all aspects of all audits, because systems, applications, and objectives can differ wildly, this list forms the basis for the overwhelming majority of audits and points of consideration:

•   Audit steps and procedures   The process and procedures for the actual audit will be clearly documented and agreed upon. This is the primary grouping for all aspects of the audit plan, and the other components support and carry out the purpose and initiatives from it. The overall steps and sequence of the audit will be defined and broken into stages for a methodical approach, ranging from information gathering, to the actual audit, reviewing results, making recommendations, and enacting responsive changes or actions based on findings.

•   Change management   During the audit, the change management process and controls will be evaluated for effectiveness and documentation. Any changes since the last audit should be evaluated for their effectiveness in meeting their goals, and a sampling of change requests should be verified to ensure that the process has been followed. This should include systems and technological changes as well as operational and policy changes and how they are handled throughout the entire process.

•   Communications   There are a few different aspects to communications in regard to audits, and they are absolutely crucial at all levels to ensure the audit is completed successfully, smoothly, and efficiently. A major component of the communications plan, done early on, is the gathering and documenting of the key points of contact, as well as backups, for all parties involved. This should include the audit team, the cloud customer, the cloud provider, and all support staff who will be assisting with the audit under each group. The communications plan also documents the methods of communications to be used and their frequency. Note that these may differ by audience and party involved. To ensure timely responses and progression of the audit, escalation contacts and plans should be included as part of the communications plan.

•   Criteria and metrics   The metrics used to evaluate the effectiveness of controls, as well as the methodology for their evaluation, need to be fully understood, agreed upon by all parties, and clearly documented in the audit plan. As part of this component, it is also important to verify that any metrics and criteria are consistent with contractual and SLA requirements, especially in a cloud environment where the cloud customer does not have full control over or access to all available metrics and data points.

•   Physical access and location   Where the audit will be conducted from and what level of access will be used or required are key components of audit planning. Many applications or systems have controls in place to limit where connections can be made from, especially if means of access other than those used by typical users are to be utilized by the auditors. Some systems also have geographic restrictions based on the type of data, such as United States federal government contractors having requirements that they must physically be located and present within the borders of the United States to work on or interact with the systems. It is also important to define how the audit team will be located to conduct their tests. Will they be located together as a team or work remotely? Will the staff from the organization be present or just be available as a resource when needed?

•   Previous audits   When a new audit is being conducted, it is imperative to review previous audits for any high or critical findings. The new audit should comprehensively test those previous findings to verify they have been mitigated and officially closed. Under any regulatory or auditing requirements, repeat findings are typically considered very serious, so special attention should always be paid to previous findings to verify their final disposition.

•   Remediation   After the audit has been completed and all findings documented and reported, the organization must develop a plan to address and remediate all findings. The plan to remediate can be to fix the actual finding, to put in other compensating controls to reduce the level of the finding, or for management to accept the risk and leave the finding in place. Not all remediation decisions will be up to management, as regulation or law may dictate the approach that must be taken, depending on the type of data and its classification level. The audit plan should also articulate the process that will be used to document and track findings for remediation.

•   Reporting   The audit plan must clearly define and document the requirements for the final report, including its format, how it will be delivered, and how it will be housed long term. The organization will either demand that reports be delivered in its own format and then later processed and packaged into the official report copy, once resolved findings have been verified and all working language from the course of the audit is cleaned up, or require a final polished report from the auditors themselves once all clarifications and closures have been completed. The audit plan will also document who should receive reports, what information will be in them, and the required dates for their dissemination to all parties.

•   Scale and inclusion   The systems, applications, components, and operations to be included within the scope of the audit are vital to planning its actual execution. At this point, everything will be clearly documented as to what is included, as well as any boundaries and limitations. Within a cloud environment, this is highly dependent on the cloud service model used, as that will dictate how deep and far the audit can go given the access an auditor or cloud customer has. This also includes exactly which computing resources are subject to the audit, such as storage, processing, and memory.

•   Timing   With any system and operation, there are likely to be time restrictions and limitations as to when testing can be done. Audits should not be scheduled during peak times of the year, when systems and staff face heavy demands based on the organization and its services; this could include holiday periods, cyclical peak usage times, special initiatives, and periods of system upgrades or new rollouts. Likewise, any tests against live systems should avoid peak processing times and user load during the day or week, deferring instead to off-hours and lower utilization times to avoid impacting current operations or users.

Conduct the Audit

Once the audit plan has been completed and signed off by all parties, the actual audit will be conducted based on the agreed parameters and timelines. While the audit is being conducted, it is important to monitor the timelines, staff requirements, and any potential impact on systems. Although all effort to minimize user impact would have been taken during the audit planning process, there can still be unintended or unforeseen consequences while the actual audit is being conducted. If such circumstances arise, management will have to work closely with the audit team to determine if the audit will proceed as planned, or if some changes need to be made to the audit plan to mitigate further or continued negative impact on users or systems.

Lessons Learned and Analysis

Once the audit has been completed, management and systems staff will analyze the process and findings to determine what lessons have been learned and how continued improvement processes can be applied to further harden systems or improve upon processes and operations. There are multiple areas for focus and analysis after an audit:

•   Audit duplication or overlap   Since many organizations undergo multiple audits from different auditors, an analysis should be done after the audit has been completed to compare the scope of each and identify any overlap or duplication. With some audits that are mandated by regulation, there may be nothing the organization can do about duplication. However, with other types of audits, it might be possible to alter the scope or get the auditors to accept specific evaluations of controls from other audits, rather than repeating the same tests. This will lead to cost savings, both in the cost of the audit and in the staff time that must be dedicated during an audit, as well as reduced potential systems impact and downtime.

•   Data collection processes   The first time most audits are performed, there is a considerable amount of manual data collection and processing. As audits are completed, methods for automation and proactive collection should be explored wherever possible; these can give an organization a better handle on its data and processes, and also make future and additional audits much more expedient and efficient. It is also very important to look at the specific data elements that were collected for analysis, with the intent of determining whether more data was collected than was needed and can be eliminated in future efforts.

•   Report evaluation   After the report has been completed and submitted, it should be reviewed for structure and content as well. It should be evaluated for how findings and information are presented, and in what format, to ensure that management and stakeholders can effectively use them to make improvements in systems or processes, and also that users or regulatory groups will find them useful and appropriate for their needs.

•   Scope and limitations analysis   Once the audit has been completed and reports submitted, the original scope and limitations documents should be reviewed to determine if they were correct and appropriate. During many audits it will become clear that some items included in the scope were not really necessary, or that some things not included, or even explicitly excluded, should have been included. This process will give an organization a solid scope and limitations document already prepared for future audits, enabling a more efficient updating process rather than starting from scratch each time. If possible, the document should be incorporated into the change management process so that any appropriate changes can be made in an iterative and incremental manner, rather than as a wholesale concentrated effort when needed.

•   Staff and expertise   Any audit requires a considerable investment of staff, with particular skill sets, to complete the audit, including the collection and presentation of data. The number of staff involved and dedicated to the audit, as well as their particular skill sets, should be evaluated against the needs of future audits. In many instances, adjustments can be made in either skill sets or numbers to lower the costs for, and impact on, the operations of an organization during audits.

Internal Information Security Management System (ISMS)

An internal information security management system (ISMS) includes the policies that establish a formal program in an organization, focused on reducing threats and risks against its IT resources and data in regard to confidentiality, integrity, and availability. The main purposes for an ISMS are minimizing risks, protecting the organization’s reputation, ensuring business continuity and operations, and reducing potential liability from the exposure of security incidents. The protection of reputation and the higher degree of confidence it can bring applies to users, customers, and stakeholders of any system or application.

In order for an ISMS to have measurable validity and accepted structure, it should be formed and implemented along the lines of an accepted and established standard, such as ISO/IEC 27001, which specifically outlines steps to create an ISMS for any organization. To start an ISMS, an organization must first define its security policies, which should be in place already from normal operations and audit requirements. This leads into the formulation of an ISMS scope, which must have management support and be matched specifically to the organization’s goals and objectives.

The most important part of the ISMS is then to undertake a risk assessment and determine how to address those risks to meet the requirements and expectations of management or regulations. The risk assessment and risk management will be highly subjective and specifically tuned to each system or application based on the type of data and management oversight and expectations, as well as any applicable regulatory requirements or laws. Without taking into consideration the full range of risk requirements and expectations, an organization is likely to end up with an ISMS that is either ineffective or does not meet requirements to bring value to the organization.

Once a comprehensive risk assessment has been completed and management decisions have been made concerning how to approach the organization’s risk, the selection and implementation of controls can be made to address the risk. Because there are so many possible permutations of risk, and because risk is so subjective to a specific system, application, or organization, no two risk assessment or risk management plans will look the same. The selection and implementation of additional or different controls will be geared toward the specific instance and the risk appetite of management.

In order for an ISMS to be successful, several factors need to be consistent across organizations, regardless of their specific considerations or unique needs and requirements. The full and continued support of management and stakeholders will be absolutely crucial with any security implementation or policy. The ISMS must be applied consistently and uniformly across the entire organization, in order to fully implement and adhere to the overall strategy and risk appetite. All business processes and management processes must include the ISMS as a key and central component to maintain consistency and standardization. Whenever business demands or policies change, the ISMS must be included, adjusted, and adapted as needed to remain consistent. The personnel of the organization, as well as any outsiders or subcontractors who work on the organization’s systems, must be fully trained and aware of the ISMS and how it is implemented and designed. Finally, as with any type of management policy, the ISMS must be a continuous process, and one that is implemented in a way that complements and supports the staff and operations, rather than preventing or obstructing them.

Internal Information Security Controls System

The ISO/IEC 27001:2013 standard puts forth a series of domains that are established as a framework for assisting with a formal risk assessment program. These domains cover virtually all areas of IT operations and procedures, making ISO/IEC 27001:2013 one of the most widely used standards in the world.

Here are the domains that comprise ISO/IEC 27001:2013:

•   A.5   Management

•   A.6   Organization

•   A.7   Personnel

•   A.8   Assets

•   A.9   Access Control

•   A.10   Cryptography

•   A.11   Physical Security

•   A.12   Operations Security

•   A.13   Network Security

•   A.14   Systems Security

•   A.15   Supplier/Vendor Relationships

•   A.16   Incident Management

•   A.17   Business Continuity

•   A.18   Compliance

EXAM TIP    While ISO/IEC 27001:2013 is the most widely used international standard, make sure you are aware of other standards that have their own domain sets that organizations either choose to or are required to use. These can include PCI DSS for payment card data, HIPAA for healthcare, and FedRAMP for U.S. federal government assets. There are many others, depending on the jurisdiction and type of application or system as well as the data that it uses.

Policies

Policies document and articulate in a formal manner the desired or required systems and operations standards for any IT system or organization. They are crucial to correctly and securely implementing any system or application, and they are the basis for how an organization operates and governs its activities, hiring practices, authorization for access, and compliance. An organization, especially a large one, can have a large number of policies put in place following a granular model; together they form a cohesive overall program and framework to govern its systems and activities, and they complement and depend on each other.

Organizational policies govern how an organization is structured and how it operates. They are written with the goals of efficiency and profit, as well as protecting the organization’s reputation, limiting its legal liabilities, and safeguarding the data it collects and processes. These policies form the basis for the implementation of IT and functional policies to govern the actual detailed operations and activities of an organization. Of great importance to any organization is the minimization of legal liabilities and exposure. This is where the focus on security and privacy comes into play, but it also shows the importance of hiring and authorization policies and practices. Without strong control over hiring staff who are trusted and verified, and without granting access only to those individuals who are trusted based on management’s or regulatory standards, the entire organization and its data can be placed at immediate risk from malicious actors, or even from those who are merely sloppy with their security duties and practices.

IT policies govern all aspects of IT systems and assets within an organization. These include access control, data classification, backup and recovery, business continuity and disaster recovery, vendor access, segregation of duties, network and system policies, and virtually any aspect of an IT organization that you can think of. Some of the most prominent policies that even nontechnical staff are well aware of are password policies and Internet use policies, and while many will complain about restrictions and requirements in both of these areas, they represent the first and most visible layer of security within many organizations, and are crucial to the success of the security program overall.

With the introduction of cloud computing, the need for making modifications to current policies or crafting new policies has become very important. Many policies that organizations already have for operating within a traditional data center will require substantial modification or addition to work within a cloud environment, as the underlying structure and realities of a cloud are far different from operating in a controlled and private physical environment. With the lack of control over assets and access, policies will need to be augmented to allow for cloud provider access and any compensating changes needed for the realities of multitenancy.

Identification and Involvement of Relevant Stakeholders

With any IT system or operation, the proper and correct identification of stakeholders is absolutely vital. With the correct list of stakeholders, proper communications can be ensured to reach all relevant and important parties in a timely manner. To properly identify stakeholders, multiple audiences need to be addressed. The most obvious are the support members of the organization itself, including management. With a cloud environment, this will also need to include the cloud provider, as it will be responsible for many facets of the IT services hosting and implementation, and will have access, insight, and responsibilities that extend far past the limited scope of the cloud customer. Apart from the actual organization and other support services it uses, stakeholders include users and consumers of the system or application. These groups are crucially dependent on the availability and security of the services they are leveraging, and will need to be informed of any changes or risk exposure, as this will directly impact their own systems or initiatives as well. Lastly, depending on the type of data and any pertinent regulations, there may be additional bodies or auditors who need to be kept in continual communication as well.

Once the proper stakeholders have been identified, their level of involvement must be documented and developed. A key challenge is determining where and when each stakeholder needs to be involved or sent communications. Any IT environment, especially a cloud environment, will likely be highly complex, with many different sections or processes that need their own communications process and have different stakeholders. The challenge then is also to determine, in a highly complex and integrated environment, which issues affect which stakeholders when the impact is felt across multiple levels or components. Regulations, contracts, and SLAs will also heavily influence how communications and stakeholder involvement play out, especially in regard to the timeliness of involvement.

Specialized Compliance Requirements for Highly Regulated Industries

With any environment or system there will be jurisdictional requirements for security and privacy, as well as audit requirements and, oftentimes, reporting and communications requirements. However, for specific types of data or systems, there are additional requirements that pertain to the specific data and the processes that involve it, regardless of physical location or hosting model used. With highly sensitive data models, such as healthcare, financial, and government systems, there are additional compliance requirements in the form of HIPAA, PCI, and NIST/FedRAMP, respectively. When hosting in a cloud environment, all of these regulations still apply the same as in a traditional data center, and with cloud being a much newer technology, not all regulatory requirements have been fully updated or adapted to cloud environments. In such cases, there may be some specific changes to the configurations needed to comply, or waivers for some requirements may need to be documented and requested. However, most major regulatory systems have been either partially or fully updated for cloud at this point, so compliance is now largely a matter of auditing rather than adapting and engineering.

Impact of Distributed IT Model

Modern applications, especially mobile and web-based applications, are a stark departure from the traditional server model, where there are presentation, application, and data zones, and the lines between each and the communications channels are obvious and well understood. Modern applications rely on very complex systems that are built on a variety of different components and technologies, many times located across several geographic areas, and they rely on web service calls and APIs rather than direct network connections, function calls, and tight integration. The introduction of cloud computing has made this complexity even more pronounced, as the reliance on consumable services, rather than owned and maintained systems, has increased at a rapid pace. While this has made building systems and scaling systems far cheaper and easier than ever before, it has also introduced significant changes for security, auditing, and compliance.

With distributed models, security is always a top challenge and concern. With the reliance on so many different components, many of which are external web services and features, it is impossible for an organization to have a full grasp on security at every layer of an application as it would in a traditional data center, where it has full control and ownership over the entire system. With this type of configuration and distribution, audit reports and certifications of systems are even more vital, as they serve as one of the few ways the provider and owner of a web service can promote confidence and verify security to external parties. The stated and published privacy policies of a web service and provider are also very important, and here again audit reports can serve to promote confidence in the provider’s controls and adherence to policies. When choosing to use an external web service or provider, the Cloud Security Professional must ensure that SLAs and other agreements are in place to handle security incidents should they arise, and must have a clear understanding of what levels of support and services will be offered by that provider should a security incident occur.

Another complication that arises with a reliance on external services is the coordination of versions, upgrades, patching, and compatibility of features and APIs. With large external services that potentially have thousands of systems and users dependent on them, it is impossible for a service provider to meet all requests and demands, especially in regard to the timeliness of patches and upgrades. The lack of control over the patching and upgrade cycle for components, some of which may be core and central components of an application, can lead to versioning problems and configuration mismatches, thus exposing the organization to increased risk.

With a distributed model, communication can also be a big challenge for many of the same reasons. Identifying stakeholders and key communications needs, and keeping that information updated, is always a challenge with any organization or system, but with the reliance on outside resources and components, the difficulty of having proper and timely communications becomes even more pronounced. With application owners now becoming consumers of other services, they will no longer have the type of internal control and management authority that they would in a traditional data center, and instead must rely on SLAs and other mechanisms for communication and assurance.

Perhaps the biggest and most impactful difference with a distributed model is the crossing of jurisdictional and geographical boundaries. With a traditional data center model, the location of data and services is well known and static, with any potential moves being under the control and initiative of the organization. This allows for extensive research and planning for any potential move, including the vetting of jurisdictional changes and requirements, allowing management to make an informed decision before authorizing and approving any such change. An organization can also purposely choose the locations of data centers and data storage to take advantage of beneficial jurisdictional requirements. With a distributed cloud model, or a reliance on external services, an organization loses some (or even all) control over where its systems are located at any specific point in time. Cloud services can and will move between geographical locations, and with external services, it may be completely unknown where the services are located and operated from; because these services are not under the control of the organization and its management, they can move at any time or span multiple jurisdictions in an ever-changing dynamic.

Understand Implications of Cloud to Enterprise Risk Management

As with any other component of IT services and management, cloud computing will have a definite impact on an organization’s risk management program and practices, adding an additional layer of complexity that must be taken into account.

Assess Provider’s Risk Management

With a move to any hosted environment, it is crucial to understand and evaluate the risk management program and processes of the provider. Since the cloud customer will be housing its services and data within that environment, the risk management processes and risk acceptance of the hosting provider will directly impact the security and risk management programs of the customer as well. This is another area where certification is valuable for the cloud provider, because it will largely tell the cloud customer the type of risk management programs and policies in place, and what levels of risk are allowed to be accepted or are required to be mitigated.

Difference Between Data Owner/Controller vs. Data Custodian/Processor

To understand risk as it pertains to data, it is important to first understand the different roles and responsibilities as they pertain to data. In many organizations, these roles have formal titles and dedicated responsibilities. At the same time, many regulatory and certification bodies require specific individuals to be named as data owner and data custodian, and they bear responsibility for their jobs and roles when it comes to liability, auditing, and oversight. For these purposes, the roles can be defined as follows, with some variance allowed depending on the particular organization and audience for the definition:

•   Data owner   The data owner is considered the individual who bears responsibility for controlling the data and determining the appropriate controls for it, as well as the appropriate use of it. In some cases, there may be an additional role of a data steward to oversee access requests and the utilization of the data, ensuring that organizational policies are being adhered to and access requests have the correct approvals.

•   Data custodian   The data custodian is anyone who processes and consumes data that is owned or controlled by the data owner, and must adhere to policies and oversight while conducting business with the use of the data.

The data owners and custodians work with the management of the organization to establish the overall risk profile for their systems and applications. The risk profile will document and identify the level of risk that management is willing to take, as well as how risk is evaluated and approved for appropriate use requests. With a cloud environment, the risk profile can become far more complicated given the number of systems and services that are outside the control of the organization, not to mention the reliance on mitigating technologies and systems needed to achieve the same level of security controls in a cloud that a dedicated traditional data center would enjoy.

The willingness of management to take and accept strategic risks forms the organization’s risk appetite, which is the overall culture of security and how much allowance there is for using specific systems and services when coupled with the classification of data that is being used. An understanding of an organization’s risk appetite will allow systems staff and application managers to make quick decisions for development and operations, without having to consult management for every decision. This allows for efficient operations while adhering to the organization’s overall security and privacy strategy.

Risk Treatment

Risk is a measure of the threats that an organization, its IT systems, and its data face. A risk assessment evaluates the organization’s overall vulnerability level, the threats it faces, the likelihood that a threat will succeed, and any mitigation efforts that can be undertaken to further minimize the risk level. Ultimately, the organization can try to decrease and minimize the level of risk facing a system, or it will have to accept that level of risk.

Framing Risk

Framing risk sets the stage for the rest of the risk management processes and serves as its basis. At this stage, the organization will determine what risk and levels it wants to evaluate, based on the unique characteristics of its systems, the requirements of its data, and any specific details about implementations or specific threats it wants to determine. Decisions made at this point will guide how the risk assessment is approached and its overall scope.

Assessing Risk

The formal risk assessment process involves several steps of measurement and evaluation, combined with assigning numerical values to categories, ranging from low risk to high risk, typically on a scale of 1 to 5, with 5 being the highest risk.

The first step is to determine the specific threats the organization and its systems face. This entails analyzing the type of data and its sensitivity to determine its value to an attacker and the types of efforts attackers will employ in an attempt to compromise it. The threat analysis will also focus on what kinds of groups and attackers are determined to compromise the systems, which in turn determines the sophistication of likely attacks and the types of methods employed during them.

The second part is to assess the vulnerabilities of a particular system. Vulnerabilities can be internal or external in nature, and both should be included with the assessment. Internal vulnerabilities focus on configuration issues, known weaknesses in security measures, particular software used, or any other type of item an organization is aware of. External vulnerabilities can include natural and environmental issues facing a data center and operations, as well as social engineering attacks.

The third key component to a risk assessment is evaluating the potential harm that an attack can cause to an organization’s systems, data, operations, or reputation. This is based on the vulnerabilities and threats already established and the potential damage that could be caused by them.

The last main component is the likelihood of a successful attack and the harm it could cause. While there will be a variety of possible vulnerabilities identified, not all carry the same likelihood of being successfully exploited by an attack, and even within individual attacks, there can be varying degrees of exploitation and damage.
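
The following is a minimal sketch of how these four components might be reduced to a single ordinal rating, assuming the simple likelihood-times-impact model on the 1-to-5 scale mentioned earlier. The banding thresholds and category names are illustrative assumptions, not part of any formal standard.

def risk_rating(likelihood: int, impact: int) -> str:
    """Rate a threat/vulnerability pair by likelihood of successful
    exploitation and potential harm, each scored 1 (lowest) to 5 (highest)."""
    score = likelihood * impact   # ranges from 1 to 25
    if score >= 15:
        return "high"
    if score >= 8:
        return "moderate"
    return "low"

# Example: a vulnerability judged very likely to be exploited (4) with
# severe potential harm (5) rates as high risk.
print(risk_rating(4, 5))   # high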

TIP    An often overlooked aspect of potential harm is the loss of reputation to an organization. The focus tends to be on the loss of uptime or business, but impaired services or news stories about data leaks or systems problems can do far more damage over the long term than short-duration outages.

After the four main components have been identified and documented, the actual assessment of risk and testing can be performed. The actual tests performed fall into one of two categories: qualitative or quantitative.

Qualitative Assessments   A qualitative risk assessment is done against nonnumerical data and is descriptive in nature, rather than data driven. Qualitative assessments are typically used when an organization does not have the money, time, data, or sophistication to perform a full quantitative assessment. They typically involve the review of documentation for operational procedures and system designs, as well as interviews with system maintainers, developers, and security personnel. Once all data has been gathered from interviews and documentation and technical specifications have been reviewed, the findings of threats, vulnerabilities, impact, and likelihood can be matched to the system implementations and operational procedures. From this synthesis, reports can be generated for management with a risk evaluation and categorization, from low to high, for areas such as risk, likelihood, and damage possibilities.

Quantitative Assessments   Quantitative assessments are typically used when an organization has the money, time, data, and sophistication to perform them. Quantitative assessments are data driven, where hard values can be determined and used for comparison and calculative measure. Whereas quantitative assessment is considered its own type, qualitative assessment is always part of a quantitative one, because not all data can be numerical, and key aspects of operations and systems will be missed if only numerical methods are utilized.

The following measures and calculations form the basis of quantitative assessments:

•   SLE   The single loss expectancy value. The SLE is defined as the difference between the original value of an asset and the remaining value of the asset after a single successful exploit. It is calculated by multiplying the asset value in dollars by what is called the exposure factor, which is the loss due to a successful exploit as a percentage.

•   ARO   The annualized rate of occurrence value. The ARO is an estimated number of the times a threat will successfully exploit a given vulnerability over the course of a single year.

•   ALE   The annualized loss expectancy value. The ALE is the value of the SLE multiplied by the ARO, so the ALE = SLE × ARO.

By calculating the ALE and assigning a numerical dollar value to the expected loss from a single vulnerability, the organization can now use that value to determine the cost–benefit implications of available countermeasures. If the ALE is higher than the cost to mitigate the vulnerability, it makes sense for the organization to spend the money on mitigation. However, if the ALE is less than the cost of mitigation, the organization may soundly decide to accept the risk and the cost of dealing with successful exploits, as doing so will cost less than the mitigation. Management can use the ALE to make sound judgments on investments in security countermeasures for their systems.

For example, an organization may calculate the SLE for the failure of a particular feature of its application at $50,000 in loss of business based on its usage and the time it will take to recover. Based on analysis and trends, the organization anticipates that such an outage will occur potentially three times a year. This means on an annual basis, the organization can expect to lose $150,000 in revenue from these outages. If there is a piece of additional software that could mitigate this outage, and the cost to license the software is $100,000 per year, then this would be a sound investment as the cost to mitigate is lower than the cost of the outages. However, if the software to mitigate were to cost $200,000 per year, then management may soundly decide to accept the risk as the costs to prevent it would be greater than the loss of revenue. Of course these considerations must take into account potential loss of reputation and trust, above and beyond just the loss of revenue during the outages.
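
The arithmetic in this example is simple enough to capture directly in code. The sketch below reproduces the figures from the example above; the asset value and exposure factor are hypothetical numbers chosen only to yield the $50,000 SLE, and the decision logic deliberately ignores the intangibles, such as reputation, that still must be weighed.

def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = asset value (dollars) x exposure factor (fraction lost per exploit)."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE x ARO (expected successful exploits per year)."""
    return sle * aro

# Hypothetical asset value and exposure factor that reproduce the example's SLE.
sle = single_loss_expectancy(asset_value=250_000, exposure_factor=0.20)
ale = annualized_loss_expectancy(sle, aro=3)    # three expected outages per year
print(f"SLE: ${sle:,.0f}  ALE: ${ale:,.0f}")    # SLE: $50,000  ALE: $150,000

for mitigation_cost in (100_000, 200_000):
    decision = "mitigate" if mitigation_cost < ale else "consider accepting the risk"
    print(f"Mitigation at ${mitigation_cost:,}/year: {decision}")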

CAUTION    The ALE, while useful for evaluating the spending of funds to mitigate risk, also has to be considered within the context of any data that is covered under regulatory rules. While the cost to mitigate a vulnerability may outweigh the direct cost to the organization of dealing with an exploit if it occurs, regulatory rules can impose stiff fines or even bar the organization from operating contracts or services that use protected data. The potential cost of exploits under regulatory requirements must also be heavily factored into any cost–benefit analysis.

Responding to Risk

After the identification and evaluation of risk, combined with the potential mitigation efforts and their costs, an organization must decide on the appropriate course of action for each risk. There are four main categories for risk responses, as detailed next.

Accept the Risk   An organization may opt to simply accept the risk of a particular exploit and the threats posed against it. This occurs after a thorough risk assessment and an evaluation of the costs of mitigation. Where the cost to mitigate outweighs the cost of accepting the risk and dealing with any possible consequences, an organization may opt to simply deal with an exploit when and if it occurs. In most instances, the decision to accept a risk will only be permitted for low-level risks, never for moderate or high risks. The decision to accept a risk must be taken very seriously, as the consequences of a successful exploit may transcend simple dollar values in remediation; it could have a lasting impact on an organization's reputation and user base. Any decision to accept risk must be clearly documented, with official approval granted by management.

Avoid the Risk   An organization may opt to take measures to ensure that a risk is never realized, rather than accepting or mitigating it. This typically involves a decision to not utilize certain services or systems. For example, if a company decides that integrating direct purchasing through its website, rather than requiring faxed or phone orders, will pose a significant risk to its systems and data protections, the company can opt to not enable ordering from its website at all. While this obviously could lead to significant loss of revenue and customers, it allows an organization to avoid the risk altogether. This is typically not a solution that an organization will undertake, with the exception of very minor feature sets of systems or applications, where the disabling or removal will not pose a significant impediment to the users or operations.

Transfer the Risk   Risk transfer is the process of having another entity assume the risk from the organization. A prime example is an insurance policy that covers the financial costs of a successful exploit. However, risk cannot always be transferred to another entity, and insurance will not address every aspect of risk transference: the direct financial costs may be transferable, but the loss of the organization's reputation is not. Also, under some regulations, risk cannot be transferred at all, because the business owner bears final responsibility for any exploits resulting in the loss of privacy or confidentiality of data, especially personal data.

Mitigate the Risk   Risk mitigation is the strategy most commonly expected and understood. Through risk mitigation, an organization takes steps—sometimes involving the spending of money on new systems or technologies—to fix and prevent any exploits from happening. This can involve taking steps to totally eliminate a particular risk or taking steps to lower the likelihood of an exploit or the impact of a successful exploit. The decision to undertake risk mitigation will heavily depend on the calculated cost–benefit analysis from the assessments.

NOTE    As a Cloud Security Professional, you will need to be aware of the concept of residual risk. In short, no matter what efforts an organization takes, and no matter how much money it decides to spend, it will never reach a point where all risks are mitigated or removed. The remaining risk falls into the category of residual risk.

Monitoring Risk

Once risks have been identified, and possible ways to deal with them have been analyzed and decided upon, they will need to be tracked and monitored continually. The first main focus of risk monitoring is an ongoing process of evaluation to determine whether the same threats and vulnerabilities still exist in the same form as when the assessment was undertaken. With the rapidly changing world of IT systems and services, both threats and vulnerabilities will be in a constant state of flux. Understanding this changing dynamic of risks, threats, and vulnerabilities will enable the risk monitoring process to evaluate whether risk mitigation strategies are still effective. It also serves as a formal process to monitor changing regulatory requirements and to verify that the current risk evaluations and mitigations still satisfy them.

Different Risk Frameworks

Three prominent risk frameworks pertain to cloud and are in widespread use throughout the IT world: ENISA, ISO/IEC 31000:2018, and NIST.

European Network and Information Security Agency (ENISA)

In 2012, ENISA published a general framework for risk management in regard to cloud computing titled “Cloud Computing: Benefits, Risks, and Recommendations for Information Security.” This publication outlines a series of 35 risks that organizations face, as well as a “Top 8” list of risks based on their probability of occurrence and potential impact on an organization.

ISO/IEC 31000:2018

The ISO/IEC 31000:2018 standard focuses on risk management from the perspectives of designing and implementing a risk management program. Unlike other ISO/IEC standards, it is not intended to serve as a certification path or program, but merely as a guide and a framework. It advocates for risk management to be a central and integral component of an organization's overall IT strategy and implementations, similar to how security or change management is integrated throughout all processes and policies. The 2018 update provides additional guidance beyond previous versions for a strategic approach, and places more emphasis on the inclusion of senior management in the risk management process.

The ISO/IEC 31000:2018 standard puts forth 11 principles of risk management:

•   Risk management as a practice should create and protect value for an organization.

•   To be successful and comprehensive, risk management should be an integral component of all aspects and processes of an organization.

•   Risk management should be a component of all decisions that an organization makes to ensure that all potential problems are considered and evaluated.

•   With all organizations there is uncertainty at all times. Risk management can be utilized to mitigate and minimize the impact of uncertainty.

•   To be effective, risk management must be fully integrated and efficient in providing information and analysis; it cannot slow down the processes or business of an organization.

•   As with any type of decision making, risk management needs to ensure that it is using accurate and complete data; otherwise, any evaluations will be inaccurate and possibly counterproductive.

•   While there are multiple general frameworks for risk management, they must be tailored to the particular needs and realities of each organization to be effective.

•   While risk management is always heavily focused on IT systems and technologies, it is essential to consider the impact and risk of human elements as well.

•   To instill confidence in staff, users, and customers, an organization’s risk management processes and policies should be transparent and visible.

•   With IT systems and operations being highly dynamic, the risk management program needs to be responsive, flexible, and adaptive.

•   As with all aspects of an organization, risk management should be focused on making continual improvements in operations and efficiency.

National Institute of Standards and Technology (NIST)

NIST published Special Publication 800-146, titled “Cloud Computing Synopsis and Recommendations,” in 2012. Among many other aspects of cloud computing and security, it has components focused on risk in a cloud environment and recommendations for analyzing it. This document serves as the U.S. counterpart to the ENISA publication and pertains to U.S. federal government computing resources. While it officially applies only to computing resources of the U.S. federal government, it can serve as a general reference for other computing systems, and many organizations and regulatory bodies around the world use it as a guide and framework.

Metrics for Risk Management

Risks are almost always rated and presented as a score that balances the impact of a successful exploit against the likelihood of occurrence. Through the application of additional controls or configuration changes, the risk level can often be lowered to a point where it can be accepted by management. In addition, many organizational policies or certification requirements will only allow the acceptance of risks at certain levels.

Here are the most commonly used categories of risk, from least to most severe; a short scoring sketch follows the list:

•   Minimal

•   Low

•   Moderate

•   High

•   Critical
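
As a rough illustration of how such ratings are derived, the following Python sketch assumes a simple 1-to-5 scale for impact and likelihood and multiplies the two into a score. Both the scales and the thresholds are assumptions made for illustration; none of the frameworks discussed in this chapter mandates specific values.

def risk_category(impact, likelihood):
    # Combine 1-5 impact and 1-5 likelihood scores into a single risk score,
    # then map score ranges to the categories listed above. The thresholds
    # are illustrative, not prescribed by NIST, ENISA, or ISO/IEC 31000:2018.
    score = impact * likelihood
    if score <= 2:
        return "Minimal"
    if score <= 6:
        return "Low"
    if score <= 12:
        return "Moderate"
    if score <= 20:
        return "High"
    return "Critical"

print(risk_category(impact=2, likelihood=1))   # Minimal
print(risk_category(impact=4, likelihood=3))   # Moderate
print(risk_category(impact=5, likelihood=5))   # Critical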

Assessment of the Risk Environment

To adequately evaluate risk in a cloud environment, multiple levels need to be assessed. The particular application, system, or service is the first component, and it involves an analysis similar to that for hosting in a traditional data center, with cloud-specific aspects added into the mix. The cloud provider must also be evaluated for risk, based on its track record, stability, focus, financial health, and future direction.

Understand Outsourcing and Cloud Contract Design

Many organizations have shifted IT resources to outsourced and hosting models for many years now. This typically has involved an organization not owning and controlling its own data center, but instead leasing space, and possibly also contracting for support services with a hosting and IT services organization. Economies of scale have made it cheaper for many organizations to lease space within data centers owned by others, where the physical facilities and requirements can be built out on a larger scale, with the cost shared by all the customers, rather than each organization needing to build and control its own physical data center. With leased space, an organization still has full control over its systems and operations, as its systems and equipment are physically isolated from those of other customers. However, with outsourcing to cloud environments, there is significant additional complexity to contracts that needs to be addressed, and many organizations likely lack expertise and experience in managing cloud contracts.

NOTE    With traditional data centers, an organization has to build out systems with sufficient capacity to handle its highest expected load, because it is cumbersome and expensive to add capacity and not realistically feasible to do so for short periods of time. Cloud environments alleviate that problem, but they also introduce new complexities to contracts and funding models. This is especially true with government contracts, where strict funding is set in advance, and many contract models do not support the elastic nature of cloud, especially with auto-scaling, where additional costs are fluid and potentially unpredictable.

Business Requirements

Before an organization can consider moving systems or applications into a cloud environment, it must first ensure that it has a comprehensive understanding of how its systems are currently built and configured, and how they relate to and interact with other systems. This understanding will form the basis for an evaluation as to whether a system or application is even appropriate for a cloud environment. It is very possible that current systems or applications may need extensive code changes or configuration retooling before they will work effectively in a cloud environment, and a move may also have enormous impacts on compliance programs and requirements.

This analysis forms the basis for articulating and documenting specific business requirements for a cloud environment. It enables an organization to begin to explore cloud offerings with an eye toward meeting those requirements. As with any outsourcing endeavor, it is likely that a vendor will not meet all business requirements completely, and a thorough gap analysis will need to be performed by both the operations staff and the security staff of the organization to determine the level of risk associated with a move toward cloud. The risk must then be evaluated by management to determine whether it fits within acceptable levels. During this analysis, the organization's business continuity and disaster recovery plans should also be considered: what impact, positive or negative, would a move toward cloud have on them, and what degree of updating and modification would they require? These are all substantial costs in both time and money to an organization, and they must be considered beyond just the actual costs of hosting and computing resources.

As with any contract, SLAs are of vital importance between the organization and its service providers, but in a cloud environment they carry an even greater level of importance. With a traditional data center, an organization has broad access to its systems and equipment, and many aspects that would fall under a cloud SLA can be handled by its own personnel, with its own management fully in control over the allocation of staff resources. In a cloud environment, however, not only is the cloud provider fully responsible for its own infrastructure, but the customer will not have the same level of access, nor will the cloud provider answer solely to them as a customer, because the cloud provider will potentially have a large number of other customers as well. With this in mind, the SLA is vitally important to ensure the cloud provider allocates sufficient resources to respond to problems and gives those problems the prioritization needed to meet management's requirements.

Vendor Management

Once a decision has been reached to move toward a cloud solution, the process of selecting a cloud provider must be approached with care and exhaustive evaluation. Cloud computing is still a relatively new technology on the IT landscape, and as such, many companies are scrambling to offer cloud solutions and become part of the explosive growth in the industry. While the level of competition is certainly beneficial to any customer, it also means that many new players are emerging that do not have long-established reputations or track records of performance to evaluate. It is crucial to ensure that any vendor being considered is stable and reputable enough to be entrusted with critical business systems and sensitive data. The last thing any organization wants is to give critical business systems to a cloud provider that is not mature enough to handle the growth and operational demands, or to a small startup that might not be around when the contract has run its course. Although major IT companies offer cloud services and have extensive track records in the industry, the startups trying to gain a share of the explosion in cloud services may offer very attractive pricing or options in an attempt to establish a strong presence, based on the number (or importance) of customers using their services.

Many factors need to be evaluated before selecting a cloud provider. The following is a list of core factors an organization must consider in the selection process:

•   What is the reputation of the cloud provider? Is it a long-established IT company or a newer startup? Is cloud computing its core business, or is it something the company has added along the way, like a side project? What is the financial situation and future outlook for the company?

•   How does the company manage its cloud services? Are the services all handled by staff employed by the company, or does it contract out support services and their management? Is its support model or strategy likely to change in the future during your contract with the provider?

•   What types of certifications does the company have? Does it have current cloud, security, or operations certifications? Does it intend to get additional ones? Do the certifications it has match up with the regulatory requirements of your business? Is it willing to obtain additional certifications if required by a cloud customer?

•   Where are the facilities located for the cloud provider? What jurisdictional requirements would apply? Are there limitations based on the specifics of the application or its data that would preclude specific cloud providers based in certain physical locations?

•   How does the company handle security incidents? What is its process and track record with incidents, and is it willing to provide statistics or examples of them? Has it had publicly disclosed or high-profile security incidents or loss of availability?

•   Does the cloud provider build its platform on standards and flexibility, or does it focus a lot more on proprietary configurations? How easy is it to move between cloud providers or move to a different one if desired by management? Will you get locked in with a specific provider and limit your options?

Apart from the major operational and technological questions involved in selection, certifications will play a very prominent role, especially as regulatory compliance becomes increasingly important and public. The Common Criteria framework, standardized as ISO/IEC 15408, serves as a strong starting point for evaluating the security posture of any service provider. By using such standards and their associated certifications, a potential cloud customer can be assured that a provider is adhering to a well-understood and verified set of security controls, and can use these independent certifications as a way of comparing different providers against a common baseline. Certifications can also help ensure that a cloud customer will be in full compliance with any regulatory requirements it may be subject to; depending on the type of application and data, particular certifications may even be a required component.

TIP    A common issue that arises between cloud providers and cloud customers involves the certifications the cloud provider holds or is willing to obtain. In many instances, a cloud provider may already have in place all the controls and procedures needed to meet the requirements of a particular certification, but has not actually obtained it, or may even be unwilling to do so. A cloud customer should not assume that a cloud provider will be agreeable to obtaining additional certifications, as they require considerable expense and staff time. A Cloud Security Professional must remember that most cloud environments contain a large number of customers, and the cloud provider must balance the needs and requirements of each against the others. The cloud provider must take the best approach for its business model and its customers overall, even if that means it is not able to meet the needs of some specific customers.

Contract Management

As with any service arrangement or hosting situation, a sound and structured approach to contract management is a necessity. Being in a cloud environment does not make contract management any easier; in many cases, it makes it more complex and involved. A thorough understanding of the organization's needs and expectations has to be developed through a discovery and selection process before the contract is drawn up. Those requirements, combined with specific regulatory or legal requirements, will form the basis of the initial contract draft. The main components and considerations for the contract are as follows (note that this is not an exhaustive list, and depending on the system, application, data, or organization, there may be additional elements in the specific contract):

•   Access to systems   How both users and the cloud customer will get access to systems and data is a key contract and SLA component. This includes multifactor authentication requirements, as well as supported identity providers and systems. From the cloud customer perspective, this defines what level of administrative or privileged access they are granted, depending on the cloud model used and the services offered by the cloud provider.

•   Backup and disaster recovery   How systems and data will be backed up and how the cloud provider will implement disaster recovery is a key contract component. The disaster recovery requirements of the cloud customer must also be specified. This includes the acceptable times for recovery, as well as to what extent the systems must be available during a disaster situation. The contract should also document how backup and disaster recovery procedures will be tested and verified, as well as the frequency of the verifications and testing.

•   Data retention and disposal   How long data will be preserved and ultimately disposed of is a crucial piece to a contract. This is even more important in a cloud contract, as the cloud customer will be dependent on the cloud provider for backup and recovery systems to a degree they would not be with a traditional data center. The contract must clearly document how long data is to be preserved and in what format, as well as the acceptable methodologies and technologies used for data sanitizing after removal. Upon secure removal and sanitization, the contract should document what level of proof must be presented to the cloud customer by the cloud provider to meet either regulatory requirements or their own internal policies.

•   Definitions   The contract should include agreed-upon definitions of any terms and technologies. Although it may seem obvious to people in the industry what various terms mean, having them in the contract formalizes the definitions and ensures that everyone has the same understanding. Without doing this, issues could arise where expectations are different between both parties, with no clear recourse if the contract has not clarified the terms and set expectations.

•   Incident response   The contract defines how the cloud provider will handle incident response for any security or operational incidents, as well as how communication will be provided to the cloud customer. This forms the basis of an SLA between the cloud provider and cloud customer, and it documents the cooperation between the incident response teams on both sides.

•   Litigation   From time to time, the cloud customer could be the subject of legal action, which would likely also impact the cloud provider and require its involvement. The contract will document the responsibilities on behalf of both parties in such a situation, as well as the required response times and potential liabilities.

•   Metrics   The contract should clearly define what criteria are to be measured as far as system performance and availability are concerned, as well as the agreement between the cloud customer and cloud provider as to how the criteria will be collected, measured, quantified, and processed.

•   Performance requirements   The contract, and more specifically the SLA, will document and set specific performance requirements of the system and its availability, and this will form the basis for determining the success of and compliance with the contract terms. The specific performance requirements may also impact the ultimate system design and allocated resources as well. The performance requirements will also set expectations for the response time for system requests and support requests.

•   Regulatory and legal requirements   This section details any specific certifications that are required, or any specific set of laws or regulations that the cloud customer is subject to. This includes the specific steps taken by the cloud provider to comply with these requirements, as well as the response to legal orders such as eDiscovery, and the process that the cloud provider will use to audit and verify compliance. The cooperation of the cloud provider for application- and system-specific audits that will be done by the cloud customer is also included in this section of the contract.

•   Security requirements   This includes the technology systems and operational procedures the cloud provider has in place, but also requirements for the cloud provider's personnel who work on the cloud customer's systems and have access to their systems and data. This includes background checks and employee policies, and may have an impact on cloud providers from a regulatory standpoint. For example, contractors for U.S. federal government contracts typically must have all personnel located within the United States, and for some cloud providers, this may be an eliminating factor.

•   Termination   Should the need arise, the contract must clearly define the terms under which a party can terminate the contract, and what conditions are required as part of doing so. This will typically include specific processes for formal notice of nonperformance, remedy steps and timelines that can be taken, as well as potential penalties and termination costs, depending on the reasons for and timing of such action.

Executive Vendor Management

For managing outsourced and cloud services, the SLA is crucial in documenting the expectations and responsibilities of all parties involved. The SLA dictates specific requirements for uptime, support, response times, incident management, and virtually all operational facets of the contract. The SLA also articulates specific penalties for noncompliance and the impact they will have on the overall contract and satisfactory performance of it.
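
When reviewing uptime commitments in an SLA, it often helps to convert the availability percentage into an allowed-downtime budget. The short Python sketch below does that conversion; the 99.9 and 99.99 percent figures are assumed examples, not numbers taken from this chapter.

def downtime_budget_minutes(availability_pct, days=30):
    # Convert an availability percentage into the downtime, in minutes,
    # that the SLA permits over the given period (30 days by default).
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability_pct / 100)

# An assumed 99.9 percent monthly commitment allows about 43 minutes of
# downtime; an assumed 99.99 percent commitment allows only about 4 minutes.
print(round(downtime_budget_minutes(99.9), 1))    # 43.2
print(round(downtime_budget_minutes(99.99), 1))   # 4.3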

Supply-Chain Management

With modern applications built on a myriad of different components and services, the supply chain of any system or application is rapidly expanding to a scale far outside a single organization. This complexity makes the security of systems increasingly difficult to maintain, as a breach of any component of the supply chain can impact all other components, or the overall system or application itself. No matter how stringent the security controls and policies in place, attackers will find the weakest component and attempt to use it to gain further access. This means the Cloud Security Professional must worry not only about systems under their control, but also about the security posture and exposure of all components and external services being leveraged.

The best approach to take from the perspective of the Cloud Security Professional is to fully document and understand each component being used and to perform an analysis as to the extent of exposure and vulnerability each represents to the overall application. This allows for a risk assessment for each component as well as for the possibility of designing additional controls or verifications for each connection point to ensure data and processes are operating securely and not performing actions outside of their intended purpose.
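
As a starting point for the documentation and analysis just described, a simple component inventory with an exposure rating can be maintained and reviewed. The Python sketch below shows one minimal way to structure such a register; the fields and example entries are hypothetical.

from dataclasses import dataclass

@dataclass
class Component:
    name: str          # the component or external service
    supplier: str      # who provides or operates it
    data_access: str   # what data it can touch, e.g., "none" or "customer PII"
    exposure: str      # assessed exposure level: "Low", "Moderate", or "High"

# Hypothetical inventory entries, for illustration only.
inventory = [
    Component("payment gateway", "third-party vendor", "customer PII", "High"),
    Component("content delivery network", "cloud provider", "none", "Low"),
]

# Flag components whose connection points warrant additional controls or
# verification, per the analysis described above.
for component in inventory:
    if component.exposure == "High":
        print(f"Review controls and connection points for: {component.name}")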

Exercise

The company you work for is based in the United States but is looking to expand services to European countries. However, management is very happy with the current setup and systems and does not want to operate additional hosting from other locations during this expansion, as doing so would increase the complexity of its systems.

1.   What issues from a legal and regulatory standpoint will this type of situation incur?

2.   What technical concerns and issues will likely come into play with this scenario?

3.   How would you update and augment the risk management program of the organization to account for this additional expansion?

4.   What new policies or procedures are likely to be needed with this effort from a regulatory standpoint?

Chapter Review

In this chapter, you learned about the various laws, regulations, and complexities of hosting in a cloud environment and how they pertain to data protection and privacy. You learned about how eDiscovery and forensics work in a cloud environment, as well as who has authority and responsibility for conducting them. We talked about how audits are defined and performed in a cloud environment, and their importance for ensuring confidence in security and operations within a cloud environment. We also talked about how risk management programs are crucial to any organization, and what unique factors cloud computing brings to a risk management program, as well as the unique challenges of managing outsourced contracts with cloud services.

Questions

1.   Which of the following regulations specifies the length of time that financial records must be kept?

A.   HIPAA

B.   EU

C.   SOX

D.   Safe Harbor

2.   Which type of audit report would be suited for the general public to review to ensure confidence in a system or application?

A.   SAS 70

B.   SOC 1

C.   SOC 2

D.   SOC 3

3.   Which of the following would not be part of an audit scope statement?

A.   Deliverables

B.   Cost

C.   Certifications

D.   Exclusions

4.   Which of the following would be appropriate to include in an audit restriction?

A.   Time when scans can be run

B.   Type of device the auditors use

C.   Length of audit report

D.   Training of auditors

5.   What is the correct sequence for audit planning?

A.   Define objectives, define scope, conduct audit, lessons learned

B.   Define scope, conduct audit, prepare report, remediate findings

C.   Conduct audit, prepare report, remediate findings, verify remediation

D.   Define objectives, conduct audit, prepare report, management approval

6.   Which of the following is not a domain of ISO/IEC 27001:2013?

A.   Personnel

B.   Systems

C.   Network

D.   E-mail

7.   Which of the following is not a specialized regulatory requirement for data?

A.   HIPAA

B.   FIPS 140-2

C.   PCI

D.   FedRAMP

8.   Which of the following is the best definition of “risk profile”?

A.   An organization’s willingness to take risk

B.   A publication with statistics on risks taken by an organization

C.   A measure of risks and possibility of successful exploit

D.   An audit report on an organization’s risk culture

9.   Which of the following is responsible for data content and business rules within an organization?

A.   Data custodian

B.   Data steward

C.   Database administrator

D.   Data curator

10.   When can risk be fully mitigated?

A.   After a SOC 2 audit

B.   When in compliance with SOX

C.   When using a private cloud

D.   Never

11.   Which of the following shows the correct names and order of risk ratings?

A.   Minimal, Low, Moderate, High, Critical

B.   Low, Moderate, High, Critical, Catastrophic

C.   Mitigated, Low, Moderate, High, Critical

D.   Low, Medium, Moderate, High, Critical

12.   Which of the following is not one of the major risk frameworks?

A.   NIST

B.   ENISA

C.   GAPP

D.   ISO/IEC 31000:2018

13.   What does ENISA stand for?

A.   European National Information Systems Administration

B.   European Network and Information Security Agency

C.   European Network Intrusion Security Aggregation

D.   European Network and Information Secrecy Administration

14.   Where do Russian data privacy laws allow for data on Russian citizens to reside?

A.   Anywhere that conforms to Russian security policies

B.   Russian or EU hosting facilities

C.   Any country that was part of the Soviet Union

D.   Russian data centers

15.   In a cloud environment, who is responsible for collecting data in response to an eDiscovery order?

A.   The cloud customer

B.   The cloud provider

C.   The data owner

D.   The cloud customer and cloud provider

Questions and Answers

1.   Which of the following regulations specifies the length of time that financial records must be kept?

A.   HIPAA

B.   EU

C.   SOX

D.   Safe Harbor

C. SOX specifies how long financial records must be kept and preserved, as well as many other regulations for transparency and confidentiality protection. The EU and HIPAA guidelines are for European Union privacy protections and healthcare data protection, respectively. The Safe Harbor program is not a series of regulations, but rather a voluntary program to bridge the gap between privacy rules and laws of the United States and Europe.

2.   Which type of audit report would be suited for the general public to review to ensure confidence in a system or application?

A.   SAS 70

B.   SOC 1

C.   SOC 2

D.   SOC 3

D. SOC 3 audit reports are meant for general consumption and to be shared with a wider and open audience. The other types of audit reports listed—SAS 70, SOC 1, and SOC 2—are all restricted use audit reports that are only used internally, with current customers, or with regulators.

3.   Which of the following would not be part of an audit scope statement?

A.   Deliverables

B.   Cost

C.   Certifications

D.   Exclusions

B. Cost would not be part of an audit scope statement. The audit scope statement covers the breadth and depth of the audit, as well as the timing and tool sets used to conduct it. Cost is not part of the audit scope at this level, nor part of the planning and discussion between management and the auditors. The deliverables, exclusions, and certifications covered or required are all part of the audit scope statement.

4.   Which of the following would be appropriate to include in an audit restriction?

A.   Time when scans can be run

B.   Type of device the auditors use

C.   Length of audit report

D.   Training of auditors

A. Specifying the time when scans can be run would be appropriate for an audit restriction, so as to ensure they do not impact production operations or users. The type of devices that the auditors use, the length of the final report and deliverables, as well as the particular training of the auditors, would not be part of audit restrictions.

5.   What is the correct sequence for audit planning?

A.   Define objectives, define scope, conduct audit, lessons learned

B.   Define scope, conduct audit, prepare report, remediate findings

C.   Conduct audit, prepare report, remediate findings, verify remediation

D.   Define objectives, conduct audit, prepare report, management approval

A. Define objectives, define scope, conduct audit, lessons learned is the correct sequence for audit planning. The other options contain steps that are out of order, steps that are really subsections of the main phases, or incorrect choices.

6.   Which of the following is not a domain of ISO/IEC 27001:2013?

A.   Personnel

B.   Systems

C.   Network

D.   E-mail

D. E-mail is not a domain covered under ISO/IEC 27001:2013. The other options—personnel, systems, and network—are all separate and distinct domains under the standard.

7.   Which of the following is not a specialized regulatory requirement for data?

A.   HIPAA

B.   FIPS 140-2

C.   PCI

D.   FedRAMP

B. FIPS 140-2 is a certification and accreditation program for cryptographic modules, not a specialized regulatory requirement for data. HIPAA for healthcare records and systems, PCI for credit cards and financial systems, and FedRAMP for United States federal government cloud systems are all specialized regulatory requirements.

8.   Which of the following is the best definition of “risk profile”?

A.   An organization’s willingness to take risk

B.   A publication with statistics on risks taken by an organization

C.   A measure of risks and possibility of successful exploit

D.   An audit report on an organization’s risk culture

A. The “risk profile” is an organization’s willingness to take risks and how they evaluate and weigh those risks. The other choices are all incorrect with regard to what a risk profile is.

9.   Which of the following is responsible for data content and business rules within an organization?

A.   Data custodian

B.   Data steward

C.   Database administrator

D.   Data curator

B. The data steward is responsible for overseeing data content and ensuring that applicable policies are applied to access controls, as well as for ensuring that appropriate approvals have been obtained before access is granted. Data custodian is another term for the data owner. Although the data custodian (owner) has overall responsibility for data and its protection within a system or application, the data steward is the one who handles the actual operations processes of granting access and ensuring policies are followed. The database administrator role is a technical position that does the actual administration of a database, but is not responsible for setting policy or granting access to data. Data curator is an extraneous answer.

10.   When can risk be fully mitigated?

A.   After a SOC 2 audit

B.   When in compliance with SOX

C.   When using a private cloud

D.   Never

D. Risk can never be fully mitigated within any system or application. Risk can be lowered and largely mitigated, but can never be fully mitigated, as any system with users and access will always have some degree of possible successful exploit.

11.   Which of the following shows the correct names and order of risk ratings?

A.   Minimal, Low, Moderate, High, Critical

B.   Low, Moderate, High, Critical, Catastrophic

C.   Mitigated, Low, Moderate, High, Critical

D.   Low, Medium, Moderate, High, Critical

A. Minimal, Low, Moderate, High, Critical are the correct names and order of risk ratings for a system or application, based on the classification of the data and the specific threats and vulnerabilities that apply to it. The other answers are either out of order or contain invalid names for risk ratings.

12.   Which of the following is not one of the major risk frameworks?

A.   NIST

B.   ENISA

C.   GAPP

D.   ISO/IEC 31000:2018

C. The Generally Accepted Privacy Principles (GAPP) is not a major framework for risk to a system or application, but is instead focused on principles of privacy risks. NIST, ENISA, and ISO/IEC 31000:2018 are all specifically focused on systems, threats, and risks facing them directly.

13.   What does ENISA stand for?

A.   European National Information Systems Administration

B.   European Network and Information Security Agency

C.   European Network Intrusion Security Aggregation

D.   European Network and Information Secrecy Administration

B. ENISA stands for European Network and Information Security Agency.

14.   Where do Russian data privacy laws allow for data on Russian citizens to reside?

A.   Anywhere that conforms to Russian security policies

B.   Russian or EU hosting facilities

C.   Any country that was part of the Soviet Union

D.   Russian data centers

D. Russian laws require data on Russian citizens to be kept in Russian data centers only, based on Russian Law 526-FZ, which became effective September 1, 2015. It requires specifically that any collection, storing, or processing of personal information or data on Russian citizens must be done on systems that are physically located within the political borders of the Russian Federation.

15.   In a cloud environment, who is responsible for collecting data in response to an eDiscovery order?

A.   The cloud customer

B.   The cloud provider

C.   The data owner

D.   The cloud customer and cloud provider

D. In a cloud environment, both the cloud provider and cloud customer are responsible for collecting data in response to an eDiscovery order. Depending on the scope of the eDiscovery order, as well as the specific cloud service category used, the degree to which it falls on either party will differ somewhat.
