Chapter 6
Legal, Risk, and Compliance

The cloud offers companies and individuals access to vast amounts of computing power at economies of scale made possible only by distributed architectures. However, those same distributed architectures can bring a unique set of risks and legal challenges to companies due to the geographic distribution of the cloud infrastructure. The cloud, similar to the Internet, allows data to flow freely across national borders and facilitates storage and processing in data centers worldwide. Determining what laws apply to cloud computing environments is an ongoing challenge. The cloud service provider (CSP) can be based in one country, operate data centers across multiple countries, and serve customers in even more countries. In this situation, overlapping legal requirements introduce risk, and relying on a third-party CSP adds further risks of its own.

Articulating Legal Requirements and Unique Risks within the Cloud Environment

Legal and compliance requirements are more complex for cloud computing than they were for traditional on-premises information systems. With data and compute power spread across countries and continents, international disputes have dramatically increased. Various countries and regions have taken differing approaches to governing data privacy, intellectual property protection, and law enforcement methods. These types of disputes existed before cloud computing, but the transborder data flow and processing enabled by the cloud emerged before legal frameworks were written to deal with these scenarios. To prepare for these challenges, a cloud security professional must be aware of the legal requirements and unique risks presented by cloud computing architectures.

Conflicting International Legislation

Using distributed cloud services provides benefits for redundancy and data integrity and can even offer performance benefits by moving information systems closer to the end users. This can, however, lead to physical infrastructure, business operations, and customers that are all governed by completely separate and sometimes conflicting laws.

For example, the European Union (EU) is governed by a wide-ranging data privacy law known as the General Data Protection Regulation (GDPR). A Brazilian company that handles or stores data of EU citizens is obligated to comply, even though it is not based in the EU. Further complicating matters, each EU member state has its own privacy laws—although they align with GDPR, there can be subtle variations, such as timeframes and procedures for reporting security breaches to the member state data protection authority.

Although cloud security practitioners are not expected to be legal professionals as well, it is important to be aware of the various laws and regulations that govern cloud computing. Laws can introduce risks to a business, such as fines, penalties, or even a loss of the ability to do business in a certain place. It is important to identify such risks and make recommendations to mitigate them just like any other risk.

As an example, GDPR forbids the transfer of data to countries that lack adequate privacy protections; a mitigation to avoid a GDPR fine might involve building an application instance in an EU cloud region and preventing transfer of the data outside the EU. However, some countries require companies operating in that country to respond to law enforcement actions, such as a warrant to turn over data. In this situation, the GDPR restriction on data transfers might be violated in order to comply with another country's legal requirements.

Because of the international nature of cloud offerings and customers, cloud practitioners must be aware of multiple sets of laws and regulations and the risks introduced by conflicting legislation across jurisdictions. These conflicts may include the following:

  • Copyright and intellectual property law, particularly the jurisdictions that companies need to deal with (local versus international) to protect and enforce their IP protections
  • Safeguards and security controls required for privacy compliance, particularly details of data residency or the ability to move data between countries, as well as varying requirements of due care in different jurisdictions
  • Data breaches and their aftermath, particularly breach notification
  • International import/export laws, particularly technologies that may be sensitive or illegal under various international agreements

Craig Mundie, the former chief of Microsoft's research and strategy divisions, explained it in these terms:

People still talk about the geopolitics of oil. But now we have to talk about the geopolitics of technology. Technology is creating a new type of interaction of a geopolitical scale and importance. … We are trying to retrofit a governance structure which was derived from geographic borders. But we live in a borderless world.

In simple terms, a cloud security practitioner must be familiar with a number of legal arenas when evaluating risks associated with a cloud computing environment. This does not mean, however, that they must be legal experts. As with many aspects of security, legal compliance requires collaboration; in this case, legal counsel should be part of the evaluation of any cloud-specific risks, legal requests, and the company's response to these.

Evaluation of Legal Risks Specific to Cloud Computing

The cloud offers computing capabilities that were unheard of a decade ago. CSPs can offer content delivery options to host data within a few hundred miles of almost any human being on Earth, offer novel architectures like microservices, and make computing power cheaper than it has ever been. Customers are not limited by political borders when accessing services from cloud providers, but this flexibility introduces a new set of risks. Legal, regulatory, and compliance risks in the cloud can be significant for certain types of data or industries.

Storing or processing data in multiple countries introduces legal and regulatory challenges. Cloud computing customers may be impacted by one or more of the following:

  • Differing legal requirements: For example, state and provincial laws in the United States and Canada have different requirements for data breach notifications, such as timeframes. In addition, there are federal laws governing certain types of privacy data in these countries, which have separate breach reporting requirements. In building out incident response plans, security practitioners must account for the types of data they handle and the location of their users, as well as ensure that any incident communications procedures meet these legal reporting obligations.
  • Different legal systems and frameworks in different countries: In some countries there is clear, written legislation in place, while in others legal precedent is more important. Precedent refers to the judgments in past cases and is subject to change over time with less advance notice than updates to legislation. Security practitioners will need to get input from legal counsel to ensure that the different types of legal systems and evolving requirements are understood and to take appropriate action to help the organization respond accordingly.
  • Conflicting laws: The EU GDPR and the U.S. Clarifying Lawful Overseas Use of Data (CLOUD) Act can leave an organization in a legal mess. GDPR forbids transfer of EU citizen data without adequate protections, while the CLOUD Act requires U.S.-based companies to respond to legal requests for data regardless of where the data is physically located. Simply locating physical infrastructure in another country is not enough to avoid potential legal consequences in this scenario—different corporate structures may be required to avoid the risk of one country's law enforcement action leading to legal issues in another.
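The differing-timeframes point above can be made operational: an incident response plan can encode each jurisdiction's notification window as data and compute the reporting deadline from the moment a breach is discovered. In this sketch, the 72-hour GDPR supervisory-authority window is real (GDPR Article 33), but the other entries, the jurisdiction labels, and all names are hypothetical placeholders; actual deadlines must come from legal counsel.

```python
from datetime import datetime, timedelta

# Notification windows in hours per jurisdiction. The EU-GDPR value reflects
# GDPR Art. 33; the others are illustrative placeholders, not legal advice.
NOTIFICATION_WINDOWS_HOURS = {
    "EU-GDPR": 72,
    "US-STATE-EXAMPLE": 72,    # placeholder for a state-level requirement
    "CA-EXAMPLE": 120,         # placeholder for a provincial requirement
}

def notification_deadline(jurisdiction: str, discovered_at: datetime) -> datetime:
    """Return the latest time by which a breach must be reported."""
    hours = NOTIFICATION_WINDOWS_HOURS[jurisdiction]
    return discovered_at + timedelta(hours=hours)

discovered = datetime(2024, 3, 1, 9, 0)
print(notification_deadline("EU-GDPR", discovered))  # 2024-03-04 09:00:00
```

Keeping the windows in a single table that legal counsel can review separates the legal facts from the incident response tooling that consumes them.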

Legal Frameworks and Guidelines

Cloud security practitioners should be aware of the legal frameworks that affect cloud computing environments. The following frameworks are the products of multinational organizations working together to identify key priorities in the security of information systems and data privacy.

The Organisation for Economic Co-operation and Development

The Organisation for Economic Co-operation and Development (OECD) publishes guidelines covering both privacy and security. (See www.oecd.org/sti/ieconomy/privacy-guidelines.htm.) The OECD guidelines are echoed in European privacy law in many instances. The basic privacy principles of the OECD guidelines include the following:

  • Collection limitation principle: There should be limits on the collection of personal data as well as consent from the data subject.
  • Data quality principle: Personal data should be accurate, complete, and kept up-to-date.
  • Purpose specification principle: The purpose of data collection should be specified, and data use should be limited to these stated purposes.
  • Use limitation principle: Data should not be used or disclosed without the consent of the data subject or by the authority of law.
  • Security safeguards principle: Personal data must be protected by reasonable security safeguards against unauthorized access, destruction, use, or disclosure.
  • Openness principle: Policies and practices about personal data should be freely disclosed, including the identity of data controllers.
  • Individual participation principle: Individuals have the right to know if data is collected on them, access any personal data that might be collected, and obtain or destroy personal data if desired.
  • Accountability principle: A data controller should be accountable for compliance with all measures and principles.

In addition to the basic principles of privacy, there are two overarching themes reflected in the OECD guidelines: first, a focus on using risk management to approach privacy protection, and second, the concept that privacy has a global dimension that must be addressed by international cooperation and interoperability. The OECD council adopted guidelines in September 2015, which provide guidance on the following:

  • National privacy strategies: Privacy requires a national strategy coordinated at the highest levels of government. Security practitioners should be aware of the common elements of national privacy strategies based on the OECD suggestions and work to help their organizations design privacy programs that can be used across multiple jurisdictions. Additionally, they should help their organization design a privacy strategy using business requirements that deliver a good cost-benefit outcome.
  • Data security breach notification: Both the relevant authorities and the affected individuals must be notified of a data breach. CCSPs should be aware that multiple authorities may be involved in notifications and that the obligation to notify authorities and individuals rests with the data controller.
  • Privacy management programs: These are operational mechanisms to implement privacy protection. A CCSP should be familiar with the programs defined in the OECD guidelines.

Asia-Pacific Economic Cooperation Privacy Framework

The Asia-Pacific Economic Cooperation (APEC) forum is an intergovernmental body consisting of 21 member economies in the Pacific Rim, and it publishes the APEC Privacy Framework. The full framework text is available here: apec.org/Publications/2017/08/APEC-Privacy-Framework-(2015). The goal of this framework is to promote a consistent approach to information privacy protection. The framework is based on nine principles:

  • Preventing harm: An individual has a legitimate expectation of privacy, and information protection should be designed to prevent the misuse of personal information.
  • Collection limitation: Collection of personal data should be limited to the intended purposes of collection and should be obtained by lawful and fair means with notice and consent of the individual. As an example, an organization running a marketing operation should not collect Social Security or national identity numbers, as they are not required for sending marketing materials.
  • Notice: Information controllers should provide clear and obvious statements about the personal data that they are collecting and their policies around use of the data. This notice should be provided at the time of collection. You are undoubtedly familiar with the banners and pop-ups in use at many websites to notify users of what data is being collected.
  • Use of personal information: Personal information collected should be used only to fulfill the purposes of collection and other compatible or related purposes except: a) with the consent of the individual whose personal information is collected; b) when necessary to provide a service or product requested by the individual; or c) by the authority of law and other legal instruments, proclamations, and pronouncements of legal effect.
  • Integrity of personal information: Personal information should be accurate, complete, and kept up-to-date to the extent necessary for the purposes of use.
  • Choice and consent: Where appropriate, individuals should be provided with clear, prominent, easily understandable, accessible, and affordable mechanisms to exercise choice in relation to the collection, use, and disclosure of their personal information.
  • Security safeguards: Personal information controllers should protect personal information that they hold with appropriate safeguards against risks, such as loss or unauthorized access to personal information or unauthorized destruction, use, modification, or disclosure of information or other misuses.
  • Access and correction: Individuals should be able to obtain from the personal information controller confirmation of whether the personal information controller holds personal information about them, and have access to information held about them, challenge the accuracy of information relating to them, and have the information rectified, completed, amended, or deleted.
  • Accountability: A personal information controller should be accountable for complying with measures. Companies must identify who is responsible for complying with these privacy principles.

General Data Protection Regulation

The EU GDPR is perhaps the most far-reaching and comprehensive set of laws ever written to protect data privacy. Full details can be found at gdpr.eu/what-is-gdpr. Within the EU, the GDPR mandates privacy for individuals, defines companies' duties to protect personal data, and prescribes punishments for companies violating these laws. GDPR fines for violating personal privacy can be massive at 20 million euros or 4 percent of global revenue (whichever is greater). For this reason alone, security practitioners must be familiar with these laws and the effects that they have on any company operating within, housing data in, or doing business with citizens of the 27-nation bloc. GDPR came into force in May 2018 and incorporated many of the same principles as the previous EU Data Protection Directive.

GDPR formally defines many roles related to privacy and security, such as the data subject, controller, and processor. The data subject is defined as an “identified or identifiable natural person,” or, more simply, a person. There is a subtle distinction between a data controller and a data processor, which recognizes that not all organizations involved in the use and processing of personal data have the same degree of responsibility.

  • A data controller under GDPR determines the purposes and means of processing personal data.
  • A data processor is the body responsible for processing the data on behalf of the controller.

In cloud environments, the data controller is often the cloud customer, while the data processor is the CSP. The cloud customer provides services to their customers and utilizes the CSP's infrastructure to process the data. Both the controller and processor have responsibilities to ensure that privacy data is adequately protected, but the data controller retains most legal liability. If the CSP suffers a data breach, the data controller is likely to be fined if they cannot prove they took adequate steps to mitigate the risk associated with a CSP breach. This drives security decisions such as encrypting all data stored in the cloud, which reduces the impact of unauthorized access.

The GDPR is a massive set of regulations that covers almost 90 pages of details and requires significant effort to achieve compliance. Security practitioners must be familiar with the broad areas of the law if it applies to their organization, but ultimately organizations should consult an attorney to identify requirements. Legal counsel or an outside auditor may also be needed to ensure that operations are compliant. GDPR encompasses the following main areas:

  • Data protection principles: If a company processes data, it must do so according to seven protection and accountability principles.
    • Lawfulness, fairness, and transparency: Processing must be lawful, fair, and transparent to the data subject.
    • Purpose limitation: The organization must process data for the legitimate purposes specified explicitly to the data subject when it was collected.
    • Data minimization: The organization should collect and process only as much data as absolutely necessary for the purposes specified.
    • Accuracy: The organization must keep personal data accurate and up-to-date.
    • Storage limitation: The organization may store personally identifying data for only as long as necessary for the specified purpose.
    • Integrity and confidentiality: Processing must be done in such a way as to ensure appropriate security, integrity, and confidentiality (for example, by using encryption).
    • Accountability: The data controller is responsible for being able to demonstrate GDPR compliance with all of these principles.
  • Data security: Companies are required to handle data securely by implementing appropriate technical measures and to consider data protection as part of any new product or activity.
  • Data processing: GDPR limits when it is legal to actually process user data. There are very specific instances when this is allowed under the law.
  • Consent: Strict rules are in place for how a user is to be notified that data is being collected.
  • Personal privacy: The GDPR implements a litany of privacy rights for data subjects, which gives individuals far greater rights over the data that companies collect about them. These include the rights to be informed, to access, to rectification, to restriction of processing, and to data portability, as well as the well-known “right to be forgotten,” which allows the end user to revoke consent and request deletion of all personal data.

Additional Legal Frameworks

Cloud services have added complexity to well-established legal frameworks that were put into place before cloud computing was developed. The CSP is a vital third party to many organizations, but one that introduces complexity to existing security and privacy risk mitigation. Legal liability usually does not transfer to the CSP as a service provider; instead, the cloud consumer is responsible for evaluating whether a service provider offers adequate security controls. These evaluations are often done in light of laws or regulations that govern data security, such as the following:

  • Health Insurance Portability and Accountability Act (HIPAA): This 1996 U.S. law regulates the privacy and control of health information data. It mandates privacy, security, and breach notification, as well as strict management of third parties processing healthcare data through contracts known as business associate agreements (BAAs).
  • Payment Card Industry Data Security Standard (PCI DSS): This is an industry standard for companies that store, process, or transmit payment card data. The network accessibility of cloud services has actually made PCI DSS compliance easier, as large third-party processors can offer their services to smaller organizations. This shifts the burden of PCI DSS compliance to the processor, who has more scale and resources available.
  • Privacy Shield: The United States lacks a federal-level privacy law comparable to the EU GDPR, so companies with EU citizen data face hurdles when transferring that data from the EU to the United States. Various frameworks have existed to address this problem, such as Safe Harbor and its successor Privacy Shield. However, both have run into legal issues in EU courts due to disclosures regarding how the U.S. government handles data on non-U.S. citizens. This unsettled legal environment requires security practitioners and legal counsel to continuously evaluate the legality and potential risks associated with data transfers.
  • Sarbanes-Oxley Act (SOX): This law was enacted in 2002 and sets requirements for U.S. public companies to protect financial data when stored and used. It is intended to protect shareholders of the company as well as the general public from accounting errors or fraud within enterprises. This act specifically applies to publicly traded companies and is enforced by the Securities and Exchange Commission (SEC). It is applicable to cloud practitioners in particular because it specifies what records must be stored and for how long, internal control reports, and formal data security policies.

Laws and Regulations

The cloud represents a dynamic and changing environment, and monitoring/reviewing legal requirements is essential to staying compliant. All contractual obligations and acceptance of requirements by contractors, partners, legal teams, and third parties should be subject to periodic review. When it comes to compliance, words have very specific meanings, which impact how security programs must be defined and implemented. For example, many security practitioners treat law and regulation as interchangeable terms. However, there are very different consequences, and therefore very different risks, associated with noncompliance.

Understanding the compliance requirements that your organization is subject to is vital for a security practitioner. The outcome of noncompliance can vary greatly, including imprisonment, fines, litigation, loss of a contract, or a combination of these. To understand this landscape, it is helpful to recognize the difference between data protections required by laws, regulations, or contractual obligations:

  • Statutory requirements are required by law.
    • Country-level laws such as HIPAA in the United States, GDPR and associated member state laws like the German Bundesdatenschutzgesetz, and the Personal Information Protection and Electronic Documents Act (PIPEDA) in Canada
    • Industry-specific laws, such as the Family Educational Rights and Privacy Act (FERPA) in the United States, which deals with privacy rights for students
    • State, provincial, or regional data privacy laws, such as the Personal Information Protection Act (PIPA) in British Columbia, Canada, or the Stop Hacks and Improve Electronic Data Security (SHIELD) Act in New York
  • Regulatory requirements may also be required by law, but refer to rules issued by a regulatory body that is appointed by a government entity. In this case, the guidance does not come from the legislation itself but from a regulatory body that interprets the law and issues rules, requirements, or guidance on compliance.
    • In the United States, legal requirements exist for the security of sensitive data handled by contractors to the U.S. government. For example, the Federal Information Security Modernization Act (FISMA) requires government agencies to implement security programs. The actual guidance for these programs is published by the National Institute of Standards and Technology, such as the security control catalog in NIST Special Publication 800-53. These controls have evolved to govern the security and privacy controls in cloud environments as part of the Federal Risk and Authorization Management Program (FedRAMP).
    • At the state level, several states use regulatory bodies to implement their cybersecurity requirements. One example is the New York State Department of Financial Services (NY DFS) cybersecurity framework for the financial industry (23 NYCRR 500).
    • There are numerous international requirements dealing with cloud legal requirements, including APEC and GDPR. Member states of these international blocs may implement their laws and regulations aligned to these standards, but with some variations such as reporting timeframes.
  • Contractual requirements are required by a legal contract between private parties. These are not laws, and breaching these requirements cannot result in imprisonment, but can result in financial penalties, litigation, and termination of contracts. PCI DSS is a common example: to process payment card transactions, the payment card issuers require compliance with the standard. Vendor risk management programs often specify a set of security controls or a compliance framework that must be implemented by a vendor, such as Service Organization Controls (SOC), Generally Accepted Privacy Principles (GAPP), Center for Internet Security (CIS) Critical Security Controls (CSC), or the Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM). The organizations that publish these standards do not mandate compliance, but contracts with business partners do. In this case, the organization must provide evidence of compliance or risk losing business with that partner.

It is vital to consider the legal and contractual issues that apply to how a company collects, stores, processes, and ultimately deletes data. Few companies or entities can ignore the need for compliance with national and international laws, since noncompliance can result in loss of the ability to do business. Perhaps not surprisingly, company officers have a vested interest in complying with these laws. Laws and regulations are specific in who is responsible for the protection of information, and many laws identify penalties for individuals who knowingly support noncompliance. Federal laws like HIPAA spell out that senior officers within a company are responsible for (and liable for) the protection of data. International regulations like GDPR and state regulations such as NYCRR 500 identify the role of a data protection officer or chief information security officer and outline their culpability for negligence that leads to a data breach.

As a cloud consumer, security practitioners are responsible for identifying all legal, regulatory, and contractual obligations and ensuring that the CSP is able to meet those requirements. If not, using the CSP's services or infrastructure introduces significant risks. No matter who is hosting data or services, the data controller (usually the cloud customer) is ultimately accountable for effective security controls, privacy protections, and compliance with legal, regulatory, and contractual obligations.

eDiscovery

When a crime is committed, law enforcement or other agencies may perform eDiscovery using forensic practices to gather evidence about the crime that has been committed, which can be used to prosecute the guilty parties. eDiscovery is defined as any process in which electronic data is pursued, located, secured, and searched with the intent of using it as evidence in a civil or criminal legal case. In a typical eDiscovery case, computing data might be reviewed offline, with the equipment powered off or viewed with a static image, or online with the equipment powered on and accessible.

In the cloud environment, almost all eDiscovery cases will be done online due to the nature of distributed computing and the difficulty in taking those systems offline, though some offline analysis is also possible. Virtual machine (VM) images or snapshots may be analyzed without powering on the VM itself. Forensics, especially cloud forensics, is a highly specialized field that relies on expert technicians to perform the detailed investigations required for discovery while not compromising the potential evidence and chain of custody. Not all security practitioners will possess this set of skills, so it is important to identify where in-house resources can perform certain actions and where highly trained external resources are required. This may involve the use of dedicated digital forensics and incident response (DFIR) personnel or law enforcement.

eDiscovery Considerations and Challenges in the Cloud

Cloud computing's essential characteristics add significant complexity to eDiscovery. A SaaS app in the cloud distributed across 100 countries is exponentially more complex to investigate than a simple server cluster in a traditional data center. An organization investigating an incident may lack the ability to compel the CSP to turn over vital information needed to investigate, or the information may be housed in a country where jurisdictional issues make the data more difficult to access. Even if information is available, it may not be sufficient to support the investigation, and maintaining a chain of custody is more difficult since there are more entities involved in the process.

When considering a cloud vendor, eDiscovery should be considered as a security requirement during the selection and contract negotiation phases. Once a CSP is chosen, it's important to proactively gather information that might be relevant in an investigation or discovery situation. This includes contact information, escalation procedures, and any relevant stakeholders to such a process. This type of information may logically fit into an incident response plan or similar documentation.

Data residency and system architecture are other important considerations for eDiscovery in the cloud and can be handled proactively when designing or deploying a system or business process. For example, if a platform is likely to receive law enforcement requests to hand over data, that data should not be stored in a country that restricts the organization's ability to honor such a request. Distributed cloud services can reduce availability risks by providing global replication and failover abilities but can introduce additional legal risks due to competing laws and jurisdictions.
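The data residency consideration above can be handled proactively with a simple policy check at design or provisioning time: map each category of data to the regions where it may legally be stored, and refuse anything else. The data categories and region names below are hypothetical examples for illustration, not any CSP's actual identifiers, and the real mapping would be defined with legal counsel.

```python
# Illustrative data-residency guard. Categories and regions are assumed
# names; a real deployment would derive this table from legal review.
ALLOWED_REGIONS = {
    "eu_personal_data": {"eu-west-1", "eu-central-1"},
    "public_marketing": {"eu-west-1", "us-east-1", "ap-southeast-2"},
}

def residency_allowed(data_category: str, region: str) -> bool:
    """Return True only if this category of data may be stored in this region."""
    return region in ALLOWED_REGIONS.get(data_category, set())

# EU personal data stays in EU regions; anything unlisted is denied by default.
assert residency_allowed("eu_personal_data", "eu-central-1")
assert not residency_allowed("eu_personal_data", "us-east-1")
```

Denying by default (an unknown category maps to an empty set) means a new data type must be explicitly reviewed before it can be stored anywhere.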

Cloud security practitioners must inform their organizations of any risks and required due care and due diligence related to the use of cloud computing. This may involve working with legal counsel to identify requirements, documenting the steps needed and steps taken to meet those requirements, and also performing oversight functions like audits and assessments to measure compliance. This can make the process of eDiscovery easier by ensuring that the organization is prepared in the event of a discovery process, rather than finding itself unable to conduct necessary activities to investigate.

eDiscovery Frameworks

DFIR practitioners working in a cloud environment can utilize a variety of frameworks and tools to conduct investigations. Some of these include proactive practices similar to traditional on-premises investigations, such as identifying vital data, logging it, and centralizing it in a platform where analysis can take place. CSPs may not preserve essential data for the required period of time to support historical investigations or may not even log data relevant to support an investigation. This shifts the burden of recording and preserving potential evidence onto the consumers, who must identify and implement their own data collection. For example, the CSP is unlikely to log application-level data related to microservices usage, such as whether the user-requested action completed successfully or was terminated. The CSP is concerned only with whether the microservice was available and the length of time it was used, which is critical for CSP billing.
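To close the logging gap described above, a cloud consumer can emit its own structured audit event for each user-requested action, capturing the outcome that the CSP's billing-oriented logs will not record. The field names and schema in this sketch are illustrative assumptions, not a standard; in practice these events would be forwarded to a centralized, retention-controlled platform to support later eDiscovery.

```python
import json
import logging
from datetime import datetime, timezone

# Application-level audit logging owned by the cloud consumer, since the
# CSP typically records only availability and usage duration for billing.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("app.audit")

def log_action(user_id: str, action: str, outcome: str) -> str:
    """Emit one structured audit event and return it as a JSON string."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "action": action,
        "outcome": outcome,  # e.g., "completed" or "terminated"
    }
    line = json.dumps(event)
    logger.info(line)
    return line

record = log_action("u-1001", "export-report", "completed")
```

Structured JSON events are easy to search and preserve, which supports both incident investigation and any chain-of-custody documentation a legal matter may require.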

Frameworks designed to assist with planning for eDiscovery include some cloud-specific guidance, as well as more general guidance for any information system. These include the following:

  • ISO 27050: This standard is applicable to any information system and deals with practices and procedures needed to collect data, perform analysis, and present the findings of analysis to support investigations or theories. It is not cloud-specific but covers many common tasks associated with digital forensics that are applicable in any environment.
  • Cloud Security Alliance (CSA): Although it does not publish a dedicated cloud forensics and eDiscovery framework, the CSA has cross-mapped relevant ISO standards to cloud computing environments. This mapping includes several key practices that extend forensics to cloud computing and can be accessed here: downloads.cloudsecurityalliance.org/initiatives/imf/Mapping-the-Forensic-Standard-ISO-IEC-27037-to-Cloud-Computing.pdf.
  • NIST: Common issues and solutions needed to address DFIR in cloud environments are the focus of NISTIR 8006, “Cloud Computing Forensic Science Challenges,” which can be found here: csrc.nist.gov/publications/detail/nistir/8006/final.

Forensics Requirements

Digital forensics and eDiscovery requirements for many legal controls are greatly complicated by the cloud environment. Unlike on-premises systems, it can be difficult or impossible to perform physical search and seizure of cloud resources such as storage or hard drives. ISO/IEC and CSA provide guidance to cloud security practitioners on best practices for collecting digital evidence and conducting forensics investigations in cloud environments.

As discussed, DFIR is a highly specialized field within security and should be performed only by qualified personnel. Untrained or unskilled personnel performing forensics run the risk of destroying or altering evidence, rendering it useless for investigation and prosecution. All security practitioners should be familiar with the following standards, even if they do not specialize in forensics:

  • Cloud Security Alliance: The CSA Security Guidance Domain 3: Legal Issues: Contracts and Electronic Discovery highlights some of the legal aspects raised by cloud computing. This provides cloud security practitioners with guidance on negotiating contracts with cloud service providers in regard to eDiscovery, searchability, and preservation of data.
  • ISO/IEC 27037:2012: This provides guidelines for the handling of digital evidence, which include the identification, collection, acquisition, and preservation of data related to a specific case.
  • ISO/IEC 27041:2015: This provides guidance on mechanisms for ensuring that methods and processes used in the investigation of information security incidents are “fit for purpose.” Cloud consumers and security practitioners should pay close attention to the sections on how vendor and third-party testing can be used to assist with assurance processes.
  • ISO/IEC 27042:2015: This standard is a guideline for the analysis and interpretation of digital evidence. Security practitioners can use these methods to demonstrate proficiency and competence within an investigative team.
  • ISO/IEC 27043:2015: The security techniques document covers incident investigation principles and processes. This can help a security practitioner build processes for various types of investigations, including unauthorized access, data corruption, system crashes, information security breaches, and other digital investigations.

Understand Privacy Issues

The Internet age has brought with it an unprecedented flow of information. Everything from financial transactions to academic research to cat videos can be shared around the world. However, not all of this data is equally valuable, and some categories of information can be used to cause real-world harm if they fall into the wrong hands. Private information is of particular importance because tampering with or stealing this data can have serious consequences for the subject of the information. These consequences may include identity theft, discrimination, or even death in cases where unpopular or illegal speech or viewpoints are involved.

Privacy is defined as the state of being free from observation by others, and it is often discussed alongside security. The two fields are not the same, however. Privacy is often codified into laws and regulations as an individual's right, which organizations must uphold when they collect, store, or process the individual's information. Security practitioners often implement security controls as part of privacy compliance, such as encryption to reduce the impact of a data breach or incident response procedures that include mandatory reporting to victims of a data breach.

Data that is considered private is also often useful for identifying an individual, meaning the data can be associated with a single person or entity. A single data point in a larger data set can therefore make the difference between merely sensitive data, which might warrant some protection but does not carry the risk of fines or legal penalties, and private data, which requires protection. Information about a medical procedure is not generally considered private on its own, but if the patient's insurance number is associated with the procedure records, the data can now uniquely identify an individual and their health condition. In many jurisdictions that information is regulated by privacy laws.
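The linkage problem can be illustrated with a deliberately simple sketch. The two data sets below are entirely hypothetical; the point is that a seemingly de-identified record becomes regulated private data the moment it can be joined to an identifier:

```python
# Two hypothetical data sets: procedure records that look de-identified, and a
# membership roster that maps insurance numbers to named individuals.
procedures = [
    {"insurance_no": "INS-1001", "procedure": "cardiac stent"},
    {"insurance_no": "INS-1002", "procedure": "knee replacement"},
]
members = {
    "INS-1001": "Alice Example",
    "INS-1002": "Bob Example",
}

# Joining on the shared identifier re-identifies each patient together with
# their health condition; it is this combination that privacy laws regulate.
reidentified = {
    members[row["insurance_no"]]: row["procedure"] for row in procedures
}
```

Removing or tokenizing the insurance number before the procedure data is shared breaks this join and is one common de-identification control.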

Difference between Contractual and Regulated Private Data

It is important to understand what types of data an organization is processing, where it is being processed, and any associated requirements such as contractual obligations. In any cloud computing environment, the legal responsibility for data privacy and protection rests with the cloud consumer, who may enlist the services of a cloud service provider (CSP) to gather, store, and process that data. The data controller is always responsible for ensuring that the requirements for protection and compliance are met, whether the data is processed in on-premises systems or a cloud solution. When third parties like a CSP are involved, it is important to ensure that contracts with the CSP stipulate data privacy, security, and protection requirements.

There are a number of terms that describe data requiring protection, and specific types of data may have unique protection requirements. It is essential to understand what type of data is being handled, such as personally identifiable information (PII), a widely recognized classification of data that is almost universally regulated. NIST Special Publication 800-122 defines PII as follows:

“any information about an individual maintained by an agency, including (1) any information that can be used to distinguish or trace an individual's identity, such as name, social security number, date and place of birth, mother's maiden name, or biometric records; and (2) any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information.”

While NIST is a U.S. body, the definition of PII is similar to other standards and frameworks, such as GDPR. Different frameworks explicitly identify certain data points that may be unique, but in general any information that can be used to uniquely identify an individual is considered PII. Security practitioners should be aware of the types of data their organization handles and associated regulatory or contractual obligations.

Protected health information (PHI) is a U.S.-specific subset of PII and is codified under HIPAA. Data that relates to a patient's health, treatment, or billing for medical services that could identify a patient is PHI. When this data is electronically stored, it must be adequately secured by controls such as unique user accounts for every user, strong passwords and MFA, least privilege-based access controls, and auditing all access and changes to a patient's PHI data.
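A minimal sketch of the least-privilege and auditing controls described above might look like the following. The role names, permissions, and record identifiers are hypothetical, and a real system would back this with authenticated accounts and MFA:

```python
from datetime import datetime, timezone

# Hypothetical role-to-permission mapping enforcing least privilege on PHI.
ROLE_PERMISSIONS = {
    "physician": {"read", "update"},
    "billing_clerk": {"read"},
    "receptionist": set(),
}

access_audit = []  # every access attempt is recorded, whether allowed or not

def access_phi(user: str, role: str, record_id: str, action: str) -> bool:
    """Return True if the role permits the action, auditing the attempt."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    access_audit.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "record": record_id,
        "action": action,
        "allowed": allowed,
    })
    return allowed

ok = access_phi("dr_lee", "physician", "rec-001", "update")
denied = access_phi("front_desk", "receptionist", "rec-001", "read")
```

Recording denied attempts alongside permitted ones satisfies the HIPAA expectation that all access to a patient's PHI is auditable, not just successful access.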

Table 6.1 summarizes types of private data and how they differ.

TABLE 6.1 Types of Private Data

Personally identifiable information (PII): Information that, when used alone or with other relevant data, can identify an individual. PII may contain direct identifiers, such as a name or address, that can identify a person uniquely. It may also include indirect or quasi-identifiers, such as place of birth or date of birth, which can be combined with other quasi-identifiers to successfully recognize an individual. PII is legally defined in and regulated by numerous privacy laws, and additional PII data elements may be defined in contracts along with required protections.

Protected or personal health information (PHI): Medical histories, test and laboratory results, mental health conditions, insurance information, and other data that a healthcare professional collects to identify an individual and determine appropriate care. In the United States, PHI is explicitly identified in and regulated by HIPAA. Covered entities, which store or handle PHI, are defined in the law, and any third parties they do business with are also required to safeguard the information. These requirements are passed on to the third parties via business associate agreements (BAAs), which are contractual requirements for handling any PHI that is shared.

Payment data (PCI DSS): Bank and credit card data used by payment processors to conduct point-of-sale transactions. Payment data is governed by contracts with payment card issuers such as Mastercard and Visa. To accept and process payment card transactions, merchants must agree to implement the required protections specified in PCI DSS.

Regulated Private Data

The biggest differentiator between contractual and regulated data is that the requirements to protect regulated data flow from legal and statutory requirements. Both PII and PHI data are subject to regulation, and the disclosure, loss, or altering of these data can subject a company (and individuals) to statutory penalties including fines and imprisonment. Organizations can be fined for mishandling or failing to protect private data under laws like HIPAA or GDPR, and in some cases individuals may also be held accountable if they are found to be negligent.

Regulations are put into place by governments and government-empowered agencies to protect entities and individuals from risks. In addition, they force providers and processors to take appropriate measures to ensure that protections are in place while identifying penalties for lapses in procedures and processes. In some industries, the regulators also perform audit and oversight functions to ensure that organizations are meeting their regulatory obligations. Security practitioners must understand their regulatory environment and work to implement the required controls and facilitate any oversight activities by regulators.

One of the major differentiators between contracted and regulated privacy data is in breach reporting. A data breach of regulated data (or unintentional loss of confidentiality of data through theft or negligence) is covered by regional and country laws around the world. As an example, in some U.S. states there are financial penalties for data breaches and no requirement to report to state regulators, while in others there are very specific timeframes for reporting. Knowing the overlapping requirements and ensuring that policies and procedures are adequate to meet those requirements will require collaboration between the security team and others in the organization, such as the legal team. The International Association of Privacy Professionals (iapp.org) publishes several guides that highlight privacy regulations and requirements across different countries.

There are several risks associated with regulated privacy data. Most obviously, a breach can lead to severe consequences for individuals, as some information can be used by malicious parties for identity theft or extortion. A large dating site serving people seeking extramarital affairs suffered a data breach of its user information, and individuals were blackmailed with the threat of their use of the service being publicized; sadly, some of the affected individuals committed suicide rather than face embarrassment. From an organizational standpoint, fines and penalties are another major privacy regulation risk. As an example, the ridesharing company Uber paid $148 million in a 2018 settlement with U.S. state attorneys general over a 2016 data breach and the associated cover-up of the incident.

Contractual Private Data

Contractual obligations to safeguard data can be used as a method to enforce data safeguards throughout a supply chain, such as with a HIPAA BAA for subprocessors. This method can also be used to provide safeguards for data that does not have a legal or regulatory need for protection but is nonetheless valuable. Examples include business confidential information, intellectual property, and other nonpublic information that an organization creates or uses.

Contracts are used to provide governance for relationships with third parties, such as vendors, service providers, and business partners. They are relevant to security professionals because they provide a way to communicate the requirements for handling data and also offer some risk mitigation if a breach occurs. Contracts are similar to nondisclosure agreements: they identify the types of data being shared, the required safeguards, and the legal measures available if either party fails to meet the agreement. A contractual obligation can be a proactive mitigation, similar to security policies that define approved data handling activities, as well as a reactive one; if either party breaches the contract, remedies such as the right to pursue legal action are specified in the contract.

The major difference between contractual and regulated private data is the level of control the organization can exert. Regulatory frameworks often specify the exact controls that must be in place, while organizations are free to write and negotiate their own contracts. As with other areas, this is a critical point of collaboration between the security team and other departments, such as legal or contract management.

Components of a Contract

Outsourcing to a CSP does not transfer risk away from the cloud consumer, as they remain the data owner and must ensure that it is adequately protected. Key elements of contracts to enforce security should be defined based on the data owner's responsibilities and include the following:

  • Scope of data processing: The CSP must have a clear definition of the permitted forms of data processing. For example, data collected on user interactions for a cloud customer should not be used by the CSP to inform new interface designs, unless the data subjects have given explicit consent for this.
  • Subcontractors: It is important to know exactly where all processing, transmission, storage, and use of data will take place and whether any part of the process might be undertaken by a subcontractor to the cloud service provider. If a subcontractor will be used in any phase of data handling, it is vital that the customer is contractually informed of this development. The contract should bar the CSP from using any unapproved subcontractors.
  • Deletion of data: How data is to be removed or deleted from a cloud service provider is important and helps protect against unintentional data disclosure. Cloud environments make physical destruction of storage media difficult, so contracts must spell out methods for secure data deletion if that is a service provided by the CSP. Due to the nature of the cloud, this may be handled by the consumer organization with a method such as cryptoshredding, which is easier to implement.
  • Data security controls: If data requires a level of security controls when it is processed, stored, or transmitted, that same level of security control must be ensured in a contract with a cloud service provider. Ideally, the level of data security controls would exceed what is required (as is often the case for cloud service providers). Typical security controls include encryption of data while in transit and while at rest, access control and auditing, layered security approaches, and defense-in-depth measures.
  • Physical location of data: Many privacy frameworks specify where data can be stored or processed, and this requirement must be passed on to any data processors to ensure compliance. Contract clauses specifying the methods, locations, and approved purposes for data transfer or storage should be included. Many CSPs shift this responsibility to the customer by offering a variety of geolocated services and allowing the customer to choose those in a region or geography that matches their compliance needs.
  • Return or surrender of data: It is important to include termination agreements and requirements, such as specifying what the CSP is and is not allowed to do with data after termination of a contract.
  • Audits: Right to audit clauses should be included to allow a data owner or an independent consultant to audit compliance with the agreed-upon security practices. In practice, most CSPs do not allow all customers to perform their own audits, but instead publish security and privacy documentation such as ISO 27001 or SOC 2 Type II reports for all customers. If these audits identify a weakness or deficiency that increases risk, the contract should specify that the customer can terminate the contract with no penalties.
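The cryptoshredding option mentioned above can be illustrated with a deliberately simplified sketch: data is stored only in encrypted form, and securely discarding the key renders the remaining ciphertext unrecoverable. The XOR routine below is a toy stand-in for a real algorithm such as AES-GCM and must not be used for actual protection:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'encryption'; a real system would use AES-GCM or similar."""
    return bytes(d ^ k for d, k in zip(data, key))

record = b"patient record 12345"
key = secrets.token_bytes(len(record))   # per-record data-encryption key

ciphertext = xor_cipher(record, key)     # only ciphertext is stored in the cloud
recovered = xor_cipher(ciphertext, key)  # XOR is symmetric, so this decrypts

# Cryptoshredding: securely discarding the key makes the stored ciphertext
# permanently unrecoverable, which is effectively deletion -- no access to the
# CSP's physical media is required.
key = None
```

Because deletion depends only on destroying a key the consumer controls, this approach works even when the CSP cannot guarantee physical destruction of storage media.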

Country-Specific Legislation Related to Private Data

An individual's right to privacy, and therefore their right to have their data handled in a secure and confidential way, varies widely by country and culture. The global nature of the cloud and many services, such as social media apps, means that organizations face a much broader range of privacy compliance obligations. As a result, security practitioners need to be aware of the broad range of statutory and regulatory obligations around data privacy. These are based on the citizenship of a data subject, rather than the location of the organization's operations, and govern many aspects such as the jurisdiction for any legal proceedings.

There are many different attitudes and expectations of privacy in countries around the world. In some more authoritarian regimes, the data privacy rights of the individual are almost nonexistent. In other societies, data privacy is considered a fundamental right. Security practitioners handling privacy data should be aware of these requirements and seek qualified legal opinion on how these requirements govern their organization's operations.

There are hundreds of country- and region-specific privacy laws and regulations, and an exhaustive analysis is outside the scope of this reference guide. The first task that a security practitioner should perform when addressing privacy is identifying all relevant laws and regulations that govern the data their organization handles. Legal firms that specialize in international privacy laws exist, and should be engaged as needed to provide the necessary guidance.

The European Union (GDPR)

The 27-member EU has one of the most robust privacy frameworks in the world. The right to personal and data privacy is strictly regulated and actively enforced in Europe, and it is enshrined in European law in many ways. Under Article 8 of the European Convention on Human Rights (ECHR), a person has a right to respect for his “private and family life, his home and his correspondence,” with some exceptions. The GDPR additionally designates special categories of personal data, including race or ethnic origin, political affiliations or opinions, religious or philosophical beliefs, and information regarding a person's sex life or sexual orientation.

In the European Union, PII covers both facts and opinions about an individual. Individuals are guaranteed certain privacy rights as data subjects. Significant areas of Chapter 3 of the GDPR (see gdpr.eu/tag/chapter-3) include the following on data subject privacy rights:

  • The right to be informed
  • The right of access
  • The right to rectification
  • The right to erasure (the so-called right to be forgotten)
  • The right to restrict processing
  • The right to data portability
  • The right to object
  • Rights in relation to automated decision making and profiling

Australia

Australian privacy law was originally published in 1988, with a revision in 2014 and republication with minor updates in 2021. It provides a solid foundation of privacy rights similar to GDPR, and the updates were designed to address evolving privacy rights driven by GDPR as well as issues associated with international data transfers and cloud computing. The foundational principles can be found at oaic.gov.au/privacy/australian-privacy-principles, while the full Privacy Act can be found here: legislation.act.gov.au/a/2014-24/default.asp.

Under the Australian Privacy Act, organizations may process data belonging to Australian citizens offshore, but the transferring entity (the data owner) must ensure that the receiver of the data holds and processes it in accordance with the principles of Australian privacy law. As discussed in the previous section, this is commonly achieved through contracts that require recipients to maintain or exceed the data owner's privacy standards. An important consequence under Australian privacy law is that the entity transferring the data out of Australia remains responsible for any data breaches by or on behalf of the recipient entities, meaning significant potential liability for any company doing business in Australia under current rules.

The United States

Data privacy laws in the United States generally date back to fair information practice guidelines that were developed by the precursor to the Department of Health & Human Services (HHS). (See Ware, Willis H. (1973, August). “Records, Computers and the Rights of Citizens,” Rand. Retrieved from www.rand.org/content/dam/rand/pubs/papers/2008/P5077.pdf.) These principles include the following concepts:

  • For all data collected, there should be a stated purpose.
  • Information collected from an individual cannot be disclosed to other organizations or individuals unless specifically authorized by law or by consent of the individual.
  • Records kept on an individual should be accurate and up-to-date.
  • There should be mechanisms for individuals to review data about them, to ensure accuracy. This may include periodic reporting.
  • Data should be deleted when it is no longer needed for the stated purpose.
  • Transmission of personal information to locations where “equivalent” personal data protection cannot be assured is prohibited.
  • Some data is too sensitive to be collected (such as sexual orientation or religion) unless there are extreme circumstances.

Perhaps the defining feature of U.S. data privacy law is its fragmentation. There is no overarching law regulating data protection in the United States; in fact, the word privacy does not appear in the U.S. Constitution. However, each of the 50 states and the U.S. territories has now enacted some form of data privacy legislation, most commonly breach notification laws.

There are few restrictions on the transfer of PII or PHI out of the United States, a fact that makes it relatively easy for companies to engage cloud providers and store data in other countries. The Federal Trade Commission (FTC) and other regulatory bodies do hold companies accountable to U.S. laws and regulations for data after it leaves the physical jurisdiction of the United States. U.S.-regulated companies are liable for the following:

  • Personal data exported out of the United States
  • Processing of personal data by subcontractors based overseas
  • Protection of data handled by subcontractors after it leaves the United States

Several important international agreements and U.S. federal statutes deal with PII. The Privacy Shield agreement is a framework that regulates the transatlantic movement of PII for commercial purposes between the United States and the European Union. Federal laws worth review include HIPAA, GLBA, SOX, and the Stored Communications Act, all of which impact how the United States regulates privacy and data. At the state level, it is worth reviewing the California Consumer Privacy Act (CCPA), the strongest state privacy law in the nation.

U.S. CLOUD Act

After multiple requests for electronic evidence led to lengthy court battles regarding jurisdiction, the United States passed the CLOUD Act. This provided a framework for bilateral agreements between countries in support of law enforcement requests for access to data. In one well-known case, Microsoft received a legal request to hand over data that was stored in European data centers, but due to privacy regulation in the EU, the company was unable to honor the request.

Under the CLOUD Act, U.S. law enforcement agencies and any counterparts in a corresponding country with an agreement in place can issue requests for data. CSPs may honor these requests without fear of violating privacy regulations. This solves some of the problems that cloud computing creates by allowing easy flow of data across national borders. The full text of the law and supporting resources can be found here: justice.gov/dag/cloudact.

Privacy Shield

Unlike the GDPR, which is a set of regulations that affect companies doing business in the EU or with citizens of the EU, Privacy Shield is an international agreement between the United States and the European Union that allows the transfer of personal data from the European Economic Area (EEA) to the United States by U.S.-based companies. Organizations can pursue a certification of their privacy practices under the framework, which enables them to demonstrate sufficient protections to allow for data processing in the United States.

This agreement replaced the previous Safe Harbor framework, which was invalidated by the European Court of Justice in October 2015. Privacy Shield itself faced ongoing legal challenges in EU courts, driven largely by the absence of a comprehensive U.S. federal privacy law governing data belonging to non-U.S. citizens, and the agreement was struck down by the same court in 2020. Organizations with existing obligations may continue to operate under the framework but should seek legal guidance to stay informed of the changing requirements.

Adherence to Privacy Shield does not make U.S. companies GDPR-compliant, but it allows the company to transfer personal data out of the EEA into infrastructure hosted in the United States. Under Privacy Shield, organizations self-certify to the U.S. Department of Commerce and publicly commit to comply with the seven principles of the agreement. Those seven principles are as follows:

  • Notice: Organizations must publish privacy notices containing specific information about their participation in the Privacy Shield Framework; their privacy practices; and the use, collection, and sharing of EU residents' data with third parties.
  • Choice: Organizations must provide a mechanism for individuals to opt out of having personal information disclosed to a third party or used for a different purpose than that for which it was provided. Opt-in consent is required for sharing sensitive information with a third party or its use for a new purpose.
  • Accountability for onward transfer: Organizations must enter into contracts with third parties or agents who will process personal data for and on behalf of the organization, which require them to process or transfer personal data in a manner consistent with the Privacy Shield principles.
  • Security: Organizations must take reasonable and appropriate measures to protect personal data from loss, misuse, unauthorized access, disclosure, alteration, and destruction, while accounting for risks involved and nature of the personal data.
  • Data integrity and purpose limitation: Organizations must take reasonable steps to limit processing to the purposes for which it was collected and ensure that personal data is accurate, complete, and current.
  • Access: Organizations must provide a method by which the data subjects can request access to and correct, amend, or delete information the organization holds about them.
  • Recourse, enforcement, and liability: This principle addresses the recourse for individuals affected by noncompliance, consequences to organizations for noncompliance, and means for verifying compliance.

The Health Insurance Portability and Accountability Act of 1996

The HIPAA legislation of 1996 defined what comprises personal health information, mandated national standards for electronic health record keeping, and established national identifiers for providers, insurers, and employers. Under HIPAA, PHI may be stored by cloud service providers provided that the data is protected in adequate ways. Under HIPAA there are separate rules for privacy, security, and breach notification, as well as specifications for how these requirements flow down to third parties. HIPAA-covered entities are those organizations that collect or generate PHI, while their third parties are known as business associates and must enter into a formal agreement that defines their obligations for safeguarding the PHI.

The Gramm-Leach-Bliley Act (GLBA) of 1999

This U.S. federal law requires financial institutions to explain how they share and protect their customers' private information. GLBA is widely considered one of the most robust federal information privacy and security laws, but it is very narrowly targeted to financial services firms. This act consists of three main sections.

  • The Financial Privacy Rule, which regulates the collection and disclosure of private financial information
  • The Safeguards Rule, which stipulates that financial institutions must implement security programs to protect such information
  • The Pretexting provisions, which prohibit the practice of pretexting (accessing private information using false pretenses)

The act also requires financial institutions to give customers written privacy notices that explain their information-sharing practices. GLBA explicitly identifies security measures such as access controls, encryption, segmentation of duties, monitoring, training, and testing of security controls.

The Stored Communication Act of 1986

The Stored Communication Act (SCA), enacted as Title II of the Electronic Communication Privacy Act, created privacy protection for electronic communications such as email or other digital communications stored on the Internet. In many ways, this act extends the Fourth Amendment of the U.S. Constitution—the people's right to be “secure in their persons, houses, papers, and effects, against unreasonable searches and seizures”—to the electronic landscape. It outlines that private data is protected from unauthorized access or interception (by private parties or the government).

State-Level Laws

The United States is made up of at least 51 smaller governments: one for each of the 50 states, plus Washington, D.C., which governs some of its own affairs. The federal government provides services and legislation for affairs between states and for issues that are not state-specific, such as regulating PHI. All states have enacted some privacy legislation, and in the wake of GDPR some states have begun to implement privacy laws with similar requirements.

As with the multitude of international privacy laws, competent legal counsel should be engaged to identify any and all state-specific security and privacy requirements that must be met. States like California, with the California Consumer Privacy Act (CCPA), and New York's SHIELD Act provide a robust set of privacy rights, protections, and defined terms.

One major difference among the various state-level laws is the means of recourse available to data subjects. Some states provide what is known as a private right of action, meaning an individual who believes a company violated their privacy rights can sue the organization directly. Other states provide only a collective right of action: a designated body, such as the state attorney general, may pursue legal action against an organization for violating privacy laws, but individual data subjects cannot sue directly. Risks associated with legal action should be assessed carefully. A single individual lawsuit is unlikely to have a large impact, and a state government with relatively lax enforcement is also unlikely to pose a significant risk; a state with strict enforcement, however, is much more likely to pursue action, raising the likelihood and possibly the impact of any legal action.

Jurisdictional Differences in Data Privacy

Cloud computing resources enable global placement of infrastructure for processing and storing data, which brings with it challenges related to complying with overlapping, and often conflicting, privacy laws. Different laws and regulations may apply depending on the location of the data subject, the data collector, the cloud service provider, subcontractors processing data, and the company headquarters of any of the entities involved. Security practitioners must be aware of these challenges and ensure that their risk assessments adequately capture these risks. Mitigation activities should be implemented to ensure compliance, and consultation with legal professionals during the construction of any cloud-based services is essential.

Legal concerns can prevent the utilization of a cloud services provider, add to costs and time to market, and drive changes to the technical architectures required to deliver services. Nevertheless, it is vital never to sacrifice compliance for convenience when evaluating services, as doing so increases risk. In 2020, the video conferencing service Zoom was found to be routing video calls through servers in China in instances when no call participants were based there. This revelation caused an uproar throughout the user community and led many customers to abandon the platform out of privacy concerns: see theguardian.com/uk-news/2020/apr/24/uk-government-told-not-to-use-zoom-because-of-china-fears. In this case, the impact was mainly reputational, as customers left because their data would not enjoy the same privacy protections within China as in other nations. While lost business can be hard to quantify, many privacy frameworks also impose fines or other regulatory action for noncompliance.

Standard Privacy Requirements

With so many concerns and potential harm from privacy violations, entrusting data to a CSP can be daunting. Fortunately, there are industry standards that address the privacy aspects of cloud computing for customers. International organizations such as ISO/IEC have codified privacy controls for the cloud. Adherence to the privacy requirements outlined in ISO 27018 enables cloud customers to trust their providers.

ISO 27018 was published in July 2014 as an extension of the controls defined in ISO 27002, part of the ISO 27000 family of standards, and was most recently updated in 2019. Security practitioners can use certification against the ISO 27000 series as assurance of adherence to key privacy principles, and CSPs can publish details of their ISO certification to provide assurance to their customers. Major cloud service providers such as Microsoft, Google, and Amazon maintain ISO 27000 compliance, which includes the following key principles:

  • Consent: Personal data obtained by a CSP may not be used for marketing purposes unless expressly permitted by the data subject. A customer should be able to use a service without being required to provide this consent.
  • Control: Customers shall have explicit control of their own data and how that data is used by the CSP.
  • Transparency: CSPs must inform customers of where their data resides and any subcontractors that might process personal data.
  • Communication: Auditing should be in place, and any incidents should be communicated to customers.
  • Audit: Companies must subject themselves to an independent audit on an annual basis.

Privacy and security concerns can generate conflict when monitoring is used to inspect network traffic or system usage. Organizations may have a legitimate need to observe what their users are doing, for example to identify users who violate policy by visiting inappropriate websites or sending protected data outside the organization's control. Monitoring tools can be useful, but the privacy rights of the users may conflict with this monitoring. In some jurisdictions, providing notice that a system is monitored is sufficient, while in others it is illegal to perform monitoring without a specific, documented reason. It is important to ensure that the monitoring strategy does not create a breach of privacy protections the users are entitled to.

Generally Accepted Privacy Principles

Generally Accepted Privacy Principles (GAPP) is a framework of privacy principles originally published by a task force of professional accountants in the United States and Canada. It is now widely incorporated into the SOC 2 framework as an optional criterion, meaning organizations that pursue a SOC 2 audit can include their privacy controls if appropriate based on the type of services they provide. Similar to ISO 27018, which is an optional extension of the controls defined in ISO 27002, the privacy criteria in SOC 2 provide objectives, which can be met by an organization's security controls. An audit of these controls results in a report that can be shared with customers or potential customers, who can use it to assess a service provider's ability to protect sensitive data.

GAPP is a set of standards for the appropriate protection and management of personal data. There are 10 main privacy principles, grouped into the following categories:

  • Management: The organization defines, documents, communicates, and assigns accountability for its privacy policies and procedures.
  • Notice: The organization provides notice of its privacy policies and procedures. The organization identifies the purposes for which personal information is collected, used, and retained.
  • Choice and consent: The organization describes the choices available to the individual. The organization secures implicit or explicit consent regarding the collection, use, and disclosure of the personal data.
  • Collection: Personal information is collected only for the purposes identified in the notice provided to the individual.
  • Use, retention, and disposal: The personal information is limited to the purposes identified in the notice the individual consented to. The organization retains the personal information only for as long as needed to fulfill the purposes or as required by law. After this period, the information is disposed of appropriately and permanently.
  • Access: The organization provides individuals with access to their personal information for review or update.
  • Disclosure to third parties: Personal information is disclosed to third parties only for the identified purposes and with implicit or explicit consent of the individual.
  • Security for privacy: Personal information is protected against both physical and logical unauthorized access.
  • Quality: The organization maintains accurate, complete, and relevant personal information that is necessary for the purposes identified.
  • Monitoring and enforcement: The organization monitors compliance with its privacy policies and procedures. It also has procedures in place to address privacy-related complaints and disputes.
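The use, retention, and disposal principle above lends itself to automation. The following is a minimal sketch of a retention check; the purposes, retention periods, and function names are illustrative assumptions, not drawn from GAPP itself:

```python
from datetime import date, timedelta

# Hypothetical retention periods (in days) per stated collection purpose.
RETENTION_DAYS = {"billing": 365 * 7, "support": 365 * 2, "marketing": 180}

def is_due_for_disposal(purpose: str, collected_on: date, today: date) -> bool:
    """Return True when a record has outlived the retention period tied to
    the purpose identified in the notice provided to the individual."""
    limit = RETENTION_DAYS.get(purpose)
    if limit is None:
        # Unknown purpose: fail loudly rather than silently retaining data.
        raise ValueError(f"no retention rule for purpose: {purpose}")
    return today - collected_on > timedelta(days=limit)

# A marketing record collected a year ago has exceeded its 180-day limit.
print(is_due_for_disposal("marketing", date(2023, 1, 1), date(2024, 1, 1)))  # True
```

In practice such a check would be one input to a disposal workflow, since legally mandated retention (as noted in the principle above) can override the purpose-based limit.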

Standard Privacy Rights under GDPR

GDPR codifies specific roles, such as a data controller and data subject, as well as rights and responsibilities for each role. Specifically, the rights of the data subject are enumerated and must be met by any data collector or processor. These rights are outlined in Chapter 3 of the GDPR (“Rights of the Data Subject”) and consist of 12 articles detailing those rights:

  • Transparent information, communication, and modalities for the exercise of the rights of the data subject
  • Information to be provided where personal data are collected from the data subject
  • Information to be provided where personal data have not been obtained from the data subject
  • Right of access by the data subject
  • Right to rectification
  • Right to erasure (“right to be forgotten”)
  • Right to restriction of processing
  • Notification obligation regarding rectification or erasure of personal data or restriction of processing
  • Right to data portability
  • Right to object
  • Automated individual decision making, including profiling
  • Restrictions

The complete language for the GDPR data subject rights can be found at gdpr.eu/tag/chapter-3.

Privacy Impact Assessments

Assessing the impact of systems and business processes is a familiar task—a business impact assessment (BIA) is a crucial element of performing continuity and resilience planning. Similarly, a privacy impact assessment (PIA) is designed to identify the privacy data being collected, processed, or stored by a system, and assess the effects that a breach of that data might have. Several privacy laws explicitly require PIAs as a planning tool for identifying and implementing required privacy controls, including GDPR and HIPAA.

Conducting a PIA typically begins when a system or process is being evaluated, though evolving privacy regulation often necessitates assessment of existing systems. The first step is to define a scope of the PIA, such as a single system or an organizational unit. Once the scope is defined, the types of data collected and data flow throughout the target system must be documented. These are critical for the next phase, which is analysis, since the types of data often dictate the required protections.

For example, a system that handles sensitive personal data like health records or financial transactions is regulated by privacy legislation that mandates specific security controls. The culmination of the PIA process is a documented impact assessment detailing the information in use, consequences of a breach or mishandling of the data, and required controls. These may include identifying a data or system owner and assigning them responsibility for ensuring that required controls are implemented, choosing technologies that offer required security capabilities, and architecting systems to meet the requirements. From a cloud security perspective, this may drive decisions about which CSP to use, which specific services can or cannot be used, and even whether the proposed system is appropriate for cloud hosting at all.

Methods of gathering information when conducting the PIA can include questionnaires and interviews with relevant staff, such as system architects, administrators, or even project leaders. Diagrams of systems, networks, or data flows can be created and are a useful tool when defining what data is being handled and where it exists during different lifecycle phases. This dictates the type and manner of controls implemented—for example, a system that does not archive data will not need any controls in place for data retention. Some regulatory frameworks mandate retention, however, so understanding if the data in question is regulated by one of these frameworks is an essential part of the analysis.
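The analysis phase described above, deriving required controls from the types of data in scope, can be sketched as a simple lookup. The data categories and control names here are illustrative assumptions; real obligations come from the applicable laws, not this table:

```python
# Hypothetical mapping from data category to controls a PIA might require.
REQUIRED_CONTROLS = {
    "health": {"encryption_at_rest", "access_logging", "retention_policy"},
    "financial": {"encryption_at_rest", "access_logging"},
    "contact": {"access_logging"},
}

def pia_control_set(data_categories):
    """Union of controls required by every data category found in scope."""
    controls = set()
    for category in data_categories:
        controls |= REQUIRED_CONTROLS.get(category, set())
    return controls

# The inventory would come from the data-flow documentation step.
inventory = ["contact", "health"]
print(sorted(pia_control_set(inventory)))
# ['access_logging', 'encryption_at_rest', 'retention_policy']
```

The value of the sketch is the shape of the analysis: the most sensitive data category in scope drives the control baseline for the whole system.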

The IAPP has published guides and resources related to privacy efforts like PIAs. More details on the functioning and creation of PIA processes can be found here: iapp.org/resources/article/privacy-impact-assessment.

Understanding Audit Process, Methodologies, and Required Adaptations for a Cloud Environment

The word audit can be daunting, as many IT professionals have undergone audits that can feel invasive, and in the context of government taxing agents an audit is never a pleasant experience. Complexity and uncertainty can make the process highly unpleasant, and the rigorous and time-consuming processes that must be followed exactly are error prone. However, audits are an essential part of verifying compliance and effectiveness of security controls. A well-architected audit strategy and security controls that are properly designed to provide necessary information proactively can make audits much less burdensome.

Auditing in a cloud environment presents additional challenges when compared to traditional on-premises requirements. This section will detail some of the controls, impacts, reports, and planning processes for a cloud environment and how these preparations may differ from noncloud environments. It is important for cloud security professionals to work in concert with other key areas of the business to successfully navigate the journey to and in cloud computing. Since the cloud and IT services are utilized by and affect the whole organization, it is vital to coordinate efforts with legal counsel, compliance, finance, and executive leadership.

A key element of a well-designed audit strategy is a security control framework that helps the organization map its internal controls to a variety of compliance frameworks. Auditors are looking for evidence of compliance, so controls that are aligned with the relevant compliance obligations make the task much easier. There are multiple control sets that can be used to achieve this purpose, such as the CSA Cloud Controls Matrix (CCM) and the Secure Controls Framework (SCF). The frameworks can be found at cloudsecurityalliance.org/research/cloud-controls-matrix and at securecontrolsframework.com. Both frameworks identify key security controls and activities, as well as compliance framework mappings that show how the controls satisfy compliance objectives.
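The mapping idea can be sketched as a reverse lookup: given a compliance framework, find which internal controls produce evidence for it. The control names and objective IDs below are illustrative assumptions, not actual CCM or SCF identifiers:

```python
# Hypothetical internal controls mapped to the compliance objectives
# they satisfy (IDs are made up for illustration).
CONTROL_MAPPINGS = {
    "AC-01 access reviews": {"SOC 2 CC6.1", "ISO 27001 A.9"},
    "CR-02 data encryption": {"SOC 2 CC6.7", "ISO 27001 A.10", "HIPAA 164.312"},
    "BC-03 backup testing": {"ISO 27001 A.17"},
}

def controls_for(framework_prefix: str):
    """List internal controls providing evidence for a given framework."""
    return sorted(
        control
        for control, objectives in CONTROL_MAPPINGS.items()
        if any(obj.startswith(framework_prefix) for obj in objectives)
    )

print(controls_for("SOC 2"))  # controls an auditor would sample for SOC 2
```

A single well-maintained mapping like this lets one control implementation satisfy several frameworks at once, which is exactly the efficiency the CCM and SCF mappings aim to provide.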

Internal and External Audit Controls

Audits help organizations communicate important details of their security and privacy controls, including the adequacy of control design and whether the controls are in place and achieving the desired level of risk mitigation. External auditors provide a trusted source of information that allows this information to be communicated with outside parties; CSPs can engage a third-party auditor to conduct a review and share that report with potential customers to earn new business. The auditor in this case is unbiased, so the level of trust in the report is higher than with an internal auditor, whose employment by the audited organization creates at least a perceived conflict of interest.

Internal audit and compliance functions also play a key role in managing and assessing risk for both CSPs and cloud customers. External audits perform a vital function in evaluating controls but are typically expensive and happen relatively infrequently. An internal audit function can provide more continuous monitoring of control effectiveness and also brings more inside knowledge of the organization's operations. This can uncover issues that an outsider might miss, and the more frequent review schedule allows the organization to catch and fix any issues before they show up on a formal audit report.

An internal auditor acts as a “trusted advisor” as an organization takes on new risks. In general, this role works with IT to offer a proactive approach with a balance of consultative and assurance services. An internal auditor can engage with relevant stakeholders to educate the customer about cloud computing risks, such as security, privacy, contractual clarity, business continuity planning (BCP) and disaster recovery planning (DRP), compliance with legal and jurisdictional issues, etc. They fulfill this role both proactively, when projects begin, and also reactively, as they conduct audits of existing systems or processes and report on any weaknesses.

An internal audit can also mitigate risk by examining cloud architectures to provide insights into an organization's cloud governance, data classifications, identity and access management effectiveness, regulatory compliance, privacy compliance, and cyber threats. While more frequent audit schedules can create an operational burden, the rapidly evolving nature of cloud computing means that risks can change significantly in a short period of time. Waiting for the next annual external audit may allow risks to exist for much longer than desirable.

It is a best practice for an internal auditor to maintain independence from both the cloud customer and the cloud provider, even though they may be employed by one of these organizations. The auditor is not “part of the team” but rather an independent entity who can provide facts without fear of reprisal. To achieve this, most internal audit teams report to a different executive than their IT counterparts. Controls in place around the audit function typically focus on this separation of duties and minimizing potential for conflict of interest.

Security controls may also be evaluated by external auditors, and in many compliance frameworks the engagement of a third-party, unbiased auditor is required. An external auditor, by definition, is not employed by but does work on behalf of the firm being audited. This is similar to financial audits, which require an objective third-party auditor to review financial statements. External auditors are generally barred from offering advisory services due to the potential conflict of interest, so controls in place for selecting and interacting with the auditors must account for this requirement.

Other controls that should be in place for audits include the following:

  • Timing: When audits must be conducted will likely be driven by business requirements, especially legal and regulatory frameworks that require reporting on or before a specific date. Contractual obligations may also drive this decision, as new customers may require proof of security controls within a specific timeframe after signing a contract. Audits happen regularly; these requirements are often gathered once, and then a recurring schedule is set based on them—for example, providing an initial SOC 2 Type II report to a new customer within 18 months of signing the contract, and then annually thereafter.
  • Requirements for internal/external audit: Some legal and regulatory frameworks simply require an independent auditor, while others explicitly require a competent third-party auditor. Understanding these requirements is crucial to designing an audit approach. Even if an organization must engage an external auditor, internal audits can be used to perform spot checks or continuous monitoring that complements the external auditor's work.
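The timing pattern described above, an initial report deadline followed by a recurring cadence, can be sketched as a small scheduling helper. The 18-month and annual figures mirror the example in the text; the function itself is an illustrative assumption:

```python
from datetime import date

def next_audit_due(contract_signed: date, initial_months: int = 18,
                   recurring_months: int = 12, completed: int = 0) -> date:
    """Due date for the next report: initial_months after signing for the
    first report, then every recurring_months thereafter."""
    months = initial_months + recurring_months * completed
    # Naive month arithmetic; adequate for a planning sketch, but a real
    # tool should handle month-end days (for example, the 31st).
    year, month = divmod(contract_signed.month - 1 + months, 12)
    return date(contract_signed.year + year, month + 1, contract_signed.day)

signed = date(2023, 1, 15)
print(next_audit_due(signed))               # 2024-07-15: first report due
print(next_audit_due(signed, completed=1))  # 2025-07-15: first annual refresh
```

Gathering the contractual and regulatory deadlines once and computing the schedule from them, rather than tracking each obligation ad hoc, is what keeps a recurring audit calendar manageable.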

Impact of Audit Requirements

The requirement to conduct audits can have a large procedural and financial impact on a company. In a cloud computing context, the types of audits required are impacted largely by a company's business sector, the types of data being collected and processed, and the variety of laws and regulations that these business activities subject a company to. In addition, customer requirements can be a significant driver, especially for organizations providing SaaS that is built on another CSP's infrastructure. While some elements of the security program are covered by the infrastructure CSP's controls and audit reports, the SaaS provider is responsible for implementing controls over their activities and providing an audit report showing how they are implemented.

Some entities operate in heavily regulated industries subject to numerous auditing requirements, such as banks or critical infrastructure providers. Others may be data processors with international customers, such as big tech companies like Apple, Facebook, Google, and Microsoft. This significantly increases the scope and complexity of the audit program, due to overlapping and sometimes conflicting requirements.

The dynamic and quickly evolving nature of cloud computing demands changes to processes associated with audits. For example, auditors must rethink some traditional methods that were used to collect evidence needed during an audit. As an example, consider the problems of data storage, virtualization, and dynamic failover.

  • Is a data set representative of the entire user population? Cloud computing allows for the relatively easy distribution of data to take advantage of geographic proximity to end users, so obtaining a representative sample may be difficult since the data and system architecture can be dispersed.
  • Physical servers are relatively easy to identify and locate. Virtualization adds a layer of challenge in ensuring that an audited server is, in fact, the same system over time.
  • Dynamic failover presents an additional challenge to auditing operations for compliance to a specific jurisdiction. A system that is operating under normal conditions in a particular region can be compliant with legal obligations, but a disaster that causes failover to another region could be a violation of the regulatory requirements.

Identify Assurance Challenges of Virtualization and Cloud

The cloud is made possible by virtualization technologies. Abstracting the physical servers that power the cloud from the virtual servers that provide cloud services allows for the necessary dynamic environments that make cloud computing powerful and cost-effective. Furthermore, the underlying virtualization technologies that power the cloud are changing rapidly. Even a seasoned systems administrator who has worked with VMware or Microsoft's Hyper-V may struggle with the inherent complexity of massively scalable platforms such as AWS, Google Cloud, or Azure.

Migrating from on-premises to cloud hosting fundamentally changes the practice of risk management, which presents challenges to gaining the necessary assurance that controls are in place and reducing risk to an acceptable level. An on-premises system audit can be conducted by an organization using their own personnel. That same audit is likely impossible in a cloud environment for a number of reasons. CSPs rarely allow customers to perform their own audits of the CSP's facilities, and even if an auditor could gain access, finding the specific physical hardware hosting a cloud system may be impossible. This means that assurance must come from third-party-issued reports rather than direct observation, shifting the process to more of a supply chain or vendor risk management activity.

Depending on the cloud architecture employed, a cloud security professional must perform multiple layers of auditing. Elements of both the hypervisor and VMs themselves must be inspected to obtain assurance during the audit. It is vital for the auditor to understand the architecture that a cloud provider is using for virtualization and ensure that both hypervisors and virtual host systems are hardened and up-to-date. Change logs are especially important in a cloud environment to create an audit trail as well as an alerting mechanism for identifying when systems may have been altered in inappropriate ways, whether accidental or intentional.

Because of the shared responsibility model, some elements of auditing will be shared by the CSP and the cloud customer. Audits of controls over the hypervisor will usually be the purview of the CSP, since they control and manage the relevant hardware. VMs deployed on top of that hardware are usually under the direct control of the cloud customer, so assurance activities must be performed by either customer personnel or their third-party auditor. This is more complicated than auditing an on-premises environment where one organization has complete control over the infrastructure. Audit standards, as discussed in the next section, have evolved to deal with this complexity by specifying which controls are owned by the audited organization, and which are inherited from another provider.

Types of Audit Reports

Any audit, whether internal or external, will produce a report focused either on the organization or on the organization's relationship with an outside entity or entities. In a cloud relationship, oftentimes the ownership of security controls designed to reduce risk resides with a cloud service provider. An audit of the cloud service provider can identify if there are any gaps between what is contractually specified and what controls the provider has in place.

SOC, SSAE, and ISAE

The American Institute of CPAs (AICPA) provides a suite of audit and assurance standards that are widely used to report on controls in place at a service organization, such as a CSP. This includes standards for auditors to use when conducting audit activities, as well as specifics for report formats and details that customers can use to understand the risks associated with using a CSP's services. The various report types are detailed in Table 6.2.

TABLE 6.2 AICPA Service Organization Control (SOC) Reports

Table source: Adapted from AICPA SOC Reports

SOC 1
  Users: User entities and the CPAs that audit their financial statements
  Concerns: Effect of the controls at the service organization on the user entities' financial statements
  Details required: Systems, controls, and tests performed by the service auditor and results of tests

SOC 2
  Users: Broad range of users who need detailed information and assurance about controls at a service organization
  Concerns: Security, availability, and processing integrity of the systems the service organization uses to process users' data and the confidentiality and privacy of the information processed by these systems
  Details required: Systems, controls, and tests performed by the service auditor and results of tests

SOC 3
  Users: Broad range of users who need information and assurance about controls but do not need the detailed information provided in a SOC 2 report
  Concerns: Security, availability, and processing integrity of the systems the service organization uses to process users' data and the confidentiality and privacy of the information processed by these systems
  Details required: Referred to as a “Trust Services Report,” SOC 3 reports are general use and can be freely distributed, unlike SOC 2 reports, which usually require nondisclosure

The differences between the SOC reports are as follows:

  • SOC 1: These reports deal mainly with financial controls and are intended to be used primarily by CPAs who audit an entity's financial statements. Business partners may find these reports useful to gauge the financial stability of partner organizations, but this is usually an operational risk rather than a security risk concern.
  • SOC for Service Organizations: Trust Services Criteria (SOC 2): This is a report on “Controls at a Service Organization Relevant to Security, Availability, Processing Integrity, Confidentiality, or Privacy.” These reports are intended to meet the needs of a broad range of users who need detailed information and assurance about the controls at a service organization relevant to security, availability, and processing integrity of the systems the service organization uses to process users' data and the confidentiality and privacy of the information processed by these systems. Put simply, a SOC 2 report can show customers how well a CSP's controls are designed and whether they are operating as intended to reduce risk. These reports can play an important role in the following:
    • Organizational oversight
    • Vendor management programs
    • Internal corporate governance and risk management processes
    • Regulatory oversight

      There are two types of reports for these engagements:

    • Type I: Report on the fairness of the presentation of management's description of the service organization's system and the suitability of the design of the controls to achieve the related control objectives included in the description as of a specified date. Type I reports are only a review of control design but do not test the effectiveness of controls; as such they provide less assurance regarding the service provider's ability to safeguard data.
    • Type II: Report on the fairness of the presentation of management's description of the service organization's system and the suitability of the design and operating effectiveness of the controls to achieve the related control objectives included in the description throughout a specified period. When conducting a Type II audit the auditor performs tests of the controls, so the report provides greater assurance that the provider is effectively addressing risks.
  • Service Organization Controls 3 (SOC 3): SOC 3 reports are considered general use and can be freely distributed, as sensitive details that are captured in a SOC 2 Type II have been removed. These contain only the auditor's general opinions and nonsensitive data, unlike a SOC 2 which usually contains sensitive system and business process details. A SOC 3 may be shared publicly, while most organizations require a nondisclosure agreement (NDA) in order to access a SOC 2 report.

AICPA definitions of the SOC report types can be found on the AICPA website.

The Statement on Standards for Attestation Engagements (SSAE) is a set of standards defined by the AICPA to be used when conducting audits and generating SOC reports. The most current version (SSAE 18) was made effective in May 2017 and added additional sections and controls to further enhance the content and quality of SOC reports. It is primarily used by auditors when conducting SOC audits rather than service providers or customers.

The International Auditing and Assurance Standards Board (IAASB) issues the International Standards on Assurance Engagements (ISAE). These are similar to the AICPA's SSAE standards, though there are some differences between the two. A security professional should always consult the relevant business departments to determine which audit report(s) will be used when assessing cloud systems. Although SOC 2 is a standard defined by a U.S. body, it has become something of a de facto global standard: the major CSPs are U.S.-based and adopted these reports early, as did many of their largest customers. ISAE 3402 is roughly equivalent to SOC 1, while ISAE 3000 engagements are commonly used to produce SOC 2-style reports outside the United States; the major CSPs offer audit reports under both regimes. As a cloud provider or customer, it is important for a security practitioner to understand the relevant types of reports they need to either consume from their CSPs or provide to their customers.

CSA

The Security Trust Assurance and Risk (STAR) program from CSA can be used by cloud service providers, cloud customers, or auditors and consultants to demonstrate compliance to a desired level of assurance. STAR consists of two levels, which provide increasing assurance to customers:

  • Level 1: Self-assessment is a complimentary offering that documents the security controls provided by the CSP, which helps customers assess the security of cloud providers they currently use or are considering using.
  • Level 2: Third-party audit requires the CSP to engage an independent auditor to evaluate the CSP's controls against the CSA standard. This can be done as a stand-alone report or incorporated into other audits such as SOC 2 or ISO 27001. The controls are evaluated against the CSA CCM objectives, and the audit report is then submitted to the CSA registry for customers to access.

Since CSA is an industry group comprising cloud providers and major customers, it is focused specifically on cloud computing security risks and controls. A Level 1 STAR is a weak form of assurance, as an organization's self-assessment is not as rigorous as a third-party audit conducted by a trained, qualified auditor. More details on the registry and assurance requirements can be found at cloudsecurityalliance.org/star.

Restrictions of Audit Scope Statements

Audit scope statements are an essential part of an audit report. They provide the reader with details on what was included in the audit and what was not—if the reader is using a service that was not included in the scope of the audit, then the report provides nothing useful for making a risk decision. Learning to read audit reports and extract these important details is key to gaining assurance regarding the security controls in place at a CSP or other service provider.

Determining the scope of an audit is usually a joint activity performed by the organization being audited and their auditor. Several frameworks, such as SOC 2 and ISO 27001, include guidance on defining the scope of the audit, specifying which parts of the organization and services are included. The final scope is documented by the auditor in the resulting report and should be used by any consumers when determining if the services they are evaluating have been audited.

An audit scope statement generally includes the following information:

  • Statement of purpose and objectives
  • Scope of audit and explicit exclusions
  • Type of audit
  • Security assessment requirements
  • Assessment criteria and rating scales
  • Criteria for acceptance
  • Expected deliverables
  • Classification (for example, secret, top secret, or public)

Any audit must have parameters set to ensure that the efforts are focused on relevant areas that can be effectively audited. These parameters are commonly known as audit scope restrictions. Why limit the scope of an audit? Audits are expensive endeavors that can engage highly trained (and highly paid) content experts. The auditing of systems can also affect system performance and, in some cases, require downtime of production systems.

Large organizations with multiple service offerings may also restrict the scope of an audit to a specific service or set of services for a variety of reasons. A newly created service may not have all relevant controls implemented, so an audit is largely useless until the service is complete and controls are implemented. In other cases, it may be a deliberate decision to exclude certain services from being audited, as the cost of implementing controls and auditing to verify their effectiveness is too high relative to the revenue the service generates.

Scope restrictions are of particular importance to security professionals. They can spell out the operational components of an audit, such as the acceptable times and time periods (for example, days and hours of the week), types of testing that will be conducted, and which systems or services are to be audited. Carefully crafting scope restrictions can ensure that production systems are not adversely impacted by an auditor's activity, and it is vital to ensure that systems that customers need assurance for are included in the scope. Scoping can also be a means of controlling costs related to compliance. For example, an audit on HIPAA compliance should include only systems that handle PHI; otherwise, the auditor will charge for time spent auditing systems that have no valid reason to be audited.

Gap Analysis

As a precursor to a formal audit process, an organization may find a gap analysis a useful starting point. Gap analyses lack the rigor of a formal audit and can be a quick check of compliance, which is useful for organizations preparing to undergo a formal audit for the first time. They can also be useful when assessing the impact of changes to regulatory or compliance frameworks, which introduce new or modified requirements. A gap analysis identifies where the organization does not meet these changed requirements and provides important information to help remediate these gaps.

The main purpose of a gap analysis is to compare the organization's current practices against a specified framework and identify the gaps between the two. These may be performed by either internal or external parties, and the choice of which to use will be driven by the cost and need for objectivity. If a gap analysis is being performed against a business function, the first step is to identify a relevant industry-standard framework to compare business activities against. In information security, this usually means a standard such as ISO 27002 (best-practice recommendations for information security management). Another common comparison framework used as a cybersecurity benchmark is the NIST cybersecurity framework.

A gap analysis can be conducted against almost any business function, from strategy and staffing to information security. The common steps generally consist of the following:

  • Establishing the need for the analysis and gaining management support for the efforts.
  • Defining scope, objectives, and relevant frameworks.
  • Identifying the current state of the department or area, generally through research and interviews with employees.
  • Reviewing evidence and supporting documentation, including the verification of statements and data.
  • Identifying the “gaps” between the framework and reality. This highlights the risks to the organization.
  • Preparing a report detailing the findings and getting sign-off from the appropriate company leaders.
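At its core, the comparison step in these activities is a set difference between what a framework requires and what the organization actually has in place. The sketch below illustrates this; the control identifiers and descriptions are hypothetical placeholders, not drawn from any real framework:

```python
# Minimal gap-analysis sketch: compare the controls an organization has
# implemented against a framework's required controls. The control IDs and
# descriptions are hypothetical, not taken from any real framework.
framework_controls = {
    "AC-1": "Access control policy",
    "AC-2": "Account management",
    "IR-1": "Incident response policy",
    "CP-1": "Contingency planning policy",
}

# Current state, as established through research and employee interviews.
implemented = {"AC-1", "IR-1"}

# The "gaps" are required controls with no corresponding implementation.
gaps = {cid: desc for cid, desc in framework_controls.items()
        if cid not in implemented}

for cid, desc in sorted(gaps.items()):
    print(f"GAP {cid}: {desc}")
```

Each resulting gap would then be evaluated against known risks and prioritized for remediation in the final report.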

Since a gap analysis provides measurable deficiencies and, in some cases, needs to be signed off by senior leadership, it can be a powerful tool for an organization to identify weaknesses in their efforts for compliance. It is also useful as a planning and prioritization tool, as any identified gaps can be evaluated against known risks. The gaps that correspond to risks should be prioritized first, since closing them also supports the organization's overall security risk management strategy.

Audit Planning

Any audit, whether related to financial reporting, compliance, or cloud computing security risk management, must be carefully planned and organized. This helps ensure that the results of the audit are relevant to the organization and contain useful information that can be used to help the organization improve on any identified weaknesses or deficiencies.

The audit process can generally be broken down into four phases, starting with audit planning. During this phase, the organization must perform several tasks, including the following:

  • Document and define audit program objectives: This process must be a collaborative effort and begins with internal planning to determine what systems or processes are to be audited and what standards are to be used. In many scenarios, this will be done many months before the audit, since time is needed to implement the controls the chosen standard requires.
  • Perform a gap analysis or readiness assessment: A mock or mini audit, usually performed by internal personnel, can be useful in assessing the organization's ability to successfully undergo a full audit. In the process of implementing security and compliance controls, it is possible to overlook key tasks, and the changing nature of an organization can also render existing controls ineffective. Identifying and fixing these issues before a formal audit helps to ensure that the audit report does not contain unfavorable findings.
  • Define audit objectives and deliverables: Once the organization is ready to undergo an audit, it is important to identify the expected outputs from the audit. These may include a report that can be shared with customers, data to be shared with leadership, and action items for security or compliance teams to address, among others. Many audit frameworks dictate the deliverables, such as a SOC 2 Type II, which always results in a SOC 2 report, or FedRAMP, which results in an authorization to operate (ATO).
  • Identify auditors and qualifications: Compliance and audit frameworks usually specify the type of auditor required, such as a partially independent internal auditor or completely independent third-party auditor. Frameworks that rely on third-party auditors often specify a standard and issue credentials to auditors authorized to perform specific types of audits, such as a CPA who can perform a SOC audit. Security practitioners must ensure that they engage an auditor with appropriate skills, credentials, and training.
  • Identify scope and restrictions: Once the auditor is chosen, they will usually work with the organization to define a scope and any restrictions or exclusions. This might include scoping the audit to just a set of systems or organizational units and is usually documented formally in the audit report. This allows consumers of the report to understand whether the systems or services they are accessing have been audited and whether any issues or deficiencies exist.

Once the audit planning process is completed, the actual work of the audit begins. After planning, there are three major phases of an audit, which include the following activities:

  • Audit fieldwork: This is the actual work the auditors perform to assess the organization, including examining evidence, interviewing organization personnel, and testing controls.
  • Audit reporting: The report writing begins as the auditors conduct their fieldwork, as they capture notes and any findings. The formal audit report is typically provided in a draft form to allow the organization to challenge any incorrect information. Once agreed upon, the final audit report is issued.
  • Audit follow-up: Various activities may be conducted after the audit, including addressing any identified weaknesses. Some auditors will perform retesting of fixed controls and issue an update or addendum to an audit report to indicate the organization's actions that addressed the original finding. This is useful, as consumers of the audit report will naturally ask what the organization has done to address identified findings, since they represent risks.

In many organizations, audit is a continuous process. This is often structured into business activities to provide an ongoing view into how an organization is meeting compliance and regulatory goals. As part of the audit planning process, scheduling and coordinating these audits can be challenging but is essential to prevent audits from adding too much operational overhead. Cloud security practitioners can utilize audits as a way to monitor the status of their compliance programs and therefore the status of their risk mitigation strategies.

Internal Information Security Management System

An information security management system (ISMS) is a systematic approach to information security consisting of processes, technology, and people designed to help protect and manage an organization's information. The ISO 27001 standard directly addresses the need for and approaches to implementing an ISMS, starting with an explanation of what the ISMS is and how it should align with other organizational processes:

The information security management system preserves the confidentiality, integrity, and availability of information by applying a risk management process and gives confidence to interested parties that risks are adequately managed.

It is important that the information security management system is part of and integrated with the organization's processes and overall management structure and that information security is considered in the design of processes, information systems, and controls. It is expected that an information security management system implementation will be scaled in accordance with the needs of the organization.

This International Standard can be used by internal and external parties to assess the organization's ability to meet the organization's own information security requirements.

Source: ISO/IEC 27001:2013

Information technology — Security techniques — Information security management systems — Requirements

An ISMS is a powerful risk management tool and is most often implemented at medium or large organizations where there is a formal need to quantify risk, develop and execute strategies to mitigate it, and provide formal reporting on the status of these risk mitigation efforts. It gives both internal and external stakeholders additional confidence in the security measures in place at the company.

Though the function of an ISMS can vary from industry to industry, there are a number of benefits to implementation that hold true across all industries.

  • Security of data in multiple forms: An ISMS can help protect data in all forms, whether the organization relies on hard-copy, on-premises, or cloud-based information systems.
  • Resilience to cyberattacks: Having an ISMS can make an organization more resilient to cyberattacks, because the risk of these attacks is known and formal processes exist to mitigate them. As the threat landscape evolves, the ISMS's continuous improvement efforts and ongoing risk management activities help the organization to respond effectively.
  • Central information security management: An ISMS will put in place centralized frameworks for managing information, reducing shadow systems, and easing the burden of data protection. Organizations with multiple units or dispersed authority can benefit from a centralized source of information security risk management by sharing best practices and resources.
  • Formal risk management: Having a codified set of processes and procedures in place can reduce operational risks in a number of areas, including information security, business continuity, and adapting to evolving security threats. Although an ISMS does not explicitly address operational issues like financial risk, it can help address operational risks like system interruptions. Most organizations rely heavily on technology systems, and a security risk management framework that addresses availability also addresses operational risks.

As with any major organizational element, an ISMS requires buy-in from company leadership to be effective. For CSPs an ISMS can provide a single organizational function for addressing risks that customers will ask about, such as the security of the data they put into the cloud and availability of systems hosted in the CSP's environments. For cloud customers, their own internal ISMS is the implementation point for all the security controls discussed throughout this book, including risk management activities associated with migrating to and using cloud computing.

Internal Information Security Controls System

As a companion to an ISMS, a system of information security controls provides guidance for mitigating the risks identified as part of the ISMS's risk management processes. Often known as control frameworks, these are considered best practices guidance that can give the organization a starting point when addressing their identified risks. As with all shared resources, some modifications may be required.

Scoping controls refers to identifying which controls in the framework apply to the organization and which do not. There may be controls that deal with business processes, system types, or even technologies that are not in use in an organization. Tailoring is a process of taking the applicable controls and matching them to the organization's specific circumstances, such as removing any guidance for Windows systems if an organization is exclusively Linux based. To use a clothing analogy, scoping refers to excluding the sections of a store that sell clothes designed for other age groups—adults are unlikely to find anything wearable in the children's department! Once an appropriate outfit is selected, tailoring can ensure that it fits your individual body type, resulting in the best fit.
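As a rough illustration of the difference between the two activities, the sketch below scopes a hypothetical control catalog by platform applicability and then tailors a parameter of one in-scope control. The catalog entries, tags, and parameter names are all invented for this example:

```python
# Sketch of scoping and tailoring a control catalog. The entries, platform
# tags, and parameters below are invented for illustration only.
catalog = [
    {"id": "C-01", "title": "Disk encryption", "applies_to": {"linux", "windows"}},
    {"id": "C-02", "title": "Windows registry hardening", "applies_to": {"windows"}},
    {"id": "C-03", "title": "Password policy", "applies_to": {"linux", "windows"},
     "params": {"min_length": 8}},
]

environment = {"linux"}  # this organization is exclusively Linux based

# Scoping: keep only controls that can apply to the environment.
scoped = [c for c in catalog if c["applies_to"] & environment]

# Tailoring: adjust the parameters of in-scope controls to fit the
# organization's specific circumstances.
for control in scoped:
    if control["id"] == "C-03":
        control["params"]["min_length"] = 14  # stricter than the baseline

print([c["id"] for c in scoped])  # the Windows-only control is scoped out
```

Scoping removes what cannot apply; tailoring adjusts what remains so that it fits, just as in the clothing analogy.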

There are a number of control frameworks to choose from and various reasons to choose one or another. Organizations implementing an ISO 27001 ISMS will find the ISO 27002 controls very easy to use, since they are designed to fit together. Other control frameworks include NIST Special Publication 800-53, the NIST Cybersecurity Framework (CSF), the Secure Controls Framework, and the CSA CCM.

In addition to providing a set of standardized control activities, these frameworks may also provide guidance and processes for the tailoring and implementation of activities needed to meet the objectives. For instance, the NIST CSF organizes controls based on their intended risk mitigation functions.

  • Identify
  • Protect
  • Detect
  • Respond
  • Recover

For example, Identify controls are useful for identifying threats and risks, while Protect controls are designed to proactively mitigate identified risks. Once controls are in place, the Detect category includes controls related to detecting whether a security incident has occurred, while Respond and Recover focus on mitigating the impact and returning to normal operations. More information on the NIST CSF can be found at nist.gov/cyberframework/online-learning/five-functions.

Policies

Policies are a key part of any data security strategy. They give users a way to understand requirements and give the organization a way to enforce those requirements systematically. Employees and management are made aware of their roles and responsibilities via policies, which are a means for organizations to govern activities occurring during the course of operations. Policies are an important piece of standardizing practices in an organization.

From a cloud computing perspective, policies can be an important tool to govern migration to and use of cloud resources. While cloud computing offers significant benefits like cost savings, it can also introduce unexpected or unwanted risk. Policies communicate expectations such as acceptable use of cloud services, helping to ensure that the organization balances the benefits realized via cloud computing without taking on unacceptable risks.

Policies are a formal and high-level document that should be approved by the organization's management. They support strategic goals and initiatives and generally do not contain highly specific details like system configurations or step-by-step procedures. Without formal management approval and proper education for relevant stakeholders, policies will be ineffective, so it is important for security practitioners to devote adequate attention to them.

Organizational Policies

Companies use policies to outline rules and guidelines, which are usually complemented by other documentation such as procedures, job aids, etc. Policies make employees aware of the organization's views and values on specific issues and what actions will occur if they are not followed. As an example, organizations typically define policies related to proper use of company resources like expense reimbursements and travel. These specify how and when employees can seek reimbursement and what rules they must follow when booking travel to ensure that the company complies with relevant accounting and fiduciary laws.

Policies are a proactive risk mitigation tool designed to reduce the likelihood of risks, such as the following:

  • Financial losses
  • Loss of data
  • Reputational damage
  • Statutory and regulatory compliance issues
  • Abuse or misuse of computing systems and resources

Functional Policies

A functional policy is a set of standardized definitions for employees that describe how they are to make use of systems or data. Functional policies typically guide specific activities crucial to the organization, such as appropriate handling of data, vulnerability management, and so on.

One common policy at many organizations is a data classification policy, which communicates what types of data the organization handles and what protections must be in place. Other policies, such as cloud computing and acceptable use policies, can provide guidance for appropriate handling of data on employee workstations and cloud services based on the data's classification level. This might include requirements for applying encryption or even designate specific classification levels whose data is not to be processed in the cloud at all.

Functional policies generally codify requirements identified in the ISMS and often align with the families of controls in security frameworks. The following, while not an exhaustive list, identifies several common policies that organizations might find useful:

  • Data classification: Identifies types of data and how each should be handled
  • Network services: How issues such as remote access and network security are handled
  • Vulnerability scanning: Routines and limitations on internal scanning and penetration testing
  • Patch management: How equipment is patched and on what schedule
  • Acceptable use: What is and is not acceptable to do on company hardware and networks
  • Email use: What is and is not acceptable to do on company email accounts
  • Passwords and access management: Password complexity, expiration, reuse, requirements for MFA, and requirements for use of access management tools such as a password manager
  • Incident response: How incidents are handled, and requirements for defining an incident response plan
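Some functional policies lend themselves to automated checks. As a hedged sketch, a password policy like the one listed above might be verified programmatically; the specific rules here (a 14-character minimum and four character classes) are illustrative assumptions, not a universal standard:

```python
import re

# Sketch of a programmatic password policy check. The rules (14-character
# minimum, four character classes) are illustrative assumptions; actual
# requirements come from the organization's password policy.
def meets_policy(password: str, min_length: int = 14) -> bool:
    if len(password) < min_length:
        return False
    required_classes = [
        r"[a-z]",          # lowercase letter
        r"[A-Z]",          # uppercase letter
        r"[0-9]",          # digit
        r"[^A-Za-z0-9]",   # symbol
    ]
    return all(re.search(pattern, password) for pattern in required_classes)

print(meets_policy("short1!A"))           # too short under this policy
print(meets_policy("Correct-Horse-42x"))  # satisfies all illustrative rules
```

Encoding the policy in code allows it to be enforced consistently at account creation or password change, rather than relying on users to read the document.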

Cloud Computing Policies

The ease of deploying cloud resources has led to a significant problem known as shadow IT, which is any IT service or information system that exists without the organization's formal knowledge. In an organization that uses SharePoint for file sharing and collaboration, a single team signing up for and using Dropbox to share files is an example of shadow IT. The controls in place for data security in SharePoint are unlikely to be applied to Dropbox, since the service was not formally approved and secured by the organization's IT department. Shadow IT can also create financial risks, as the organization's IT spending becomes harder to measure when multiple teams are involved.

Cloud services should not be exempt from organizational policy application. These policies will define the requirements that users must adhere to in order to make use of the services and may dictate specific cloud services that are approved for various uses. Because of the ease of provisioning cloud services, many organizations have specific policies in place that discourage or prohibit the use of cloud services by individuals outside of central IT oversight.

Since cloud computing is outside the direct control of the organization, policies may be written to guide the selection and use of cloud environments, rather than being used to govern the day-to-day activities of internal employees. When evaluating policies and how they should be applied to the cloud, security practitioners should address major areas of risk such as the following:

  • Password policies: If an organization has password policies around length, complexity, expiration, or MFA, it is important to ensure that these same requirements are met by a cloud service provider.
  • Remote access: Cloud services are inherently remote accessible, so organizations that previously prohibited or limited remote work will need to create remote access policies that apply to a large group of users (possibly even the whole organization). Secure remote network access can be cumbersome, requiring tools such as a VPN, while cloud computing often uses standard technologies like browser encryption to provide security. Ensuring that the correct tools are deployed and expectations are set for their use is a key element of a remote access policy.
  • Encryption: Policies about encryption strength and when encryption is required should identify where and how these apply to cloud services. Key escrow can be an important aspect of policy to focus on, as well as minimum acceptable encryption algorithms and key lengths.
  • Data backup and failover: Policies on data retention and backup must be enforced on cloud providers. Cloud services that offer built-in high availability and data replication features can make this process much easier, but selecting or architecting these solutions appropriately should be guided by the policy requirements.
  • Third-party access: What third parties might have access to data stored with the CSP? Can this access be logged and audited? The answers to these questions could introduce risks, so the policy provisions on third-party access should be used as requirements when choosing a CSP.
  • Separation of duties: Cloud services, especially SaaS, can introduce new user management models that could impact the organization's access controls, including separation of duties and minimum necessary access.
  • Incident response: Incidents in the cloud are more complicated to investigate due to other parties who must be included, such as the CSP and any other tenants who might be affected by an incident. Policies and response plans should be updated to include these other stakeholders, coordination required, and any testing modifications that must be made due to a changed environment.
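Some of these policy areas reduce to simple, checkable rules. For instance, the encryption requirements described above can be expressed as an allow-list comparison; the algorithm names and minimum key lengths below are illustrative assumptions about one organization's policy, not a recommendation:

```python
# Sketch: check a CSP's advertised encryption settings against policy
# minimums. Algorithm names and key lengths are illustrative assumptions
# about one organization's policy, not a recommendation.
POLICY_MINIMUMS = {"AES": 256, "RSA": 3072}  # minimum key length in bits

def encryption_compliant(algorithm: str, key_bits: int) -> bool:
    minimum = POLICY_MINIMUMS.get(algorithm)
    if minimum is None:
        return False  # algorithms not on the allow-list are rejected
    return key_bits >= minimum

print(encryption_compliant("AES", 128))   # below the policy minimum
print(encryption_compliant("RSA", 4096))  # meets the policy minimum
print(encryption_compliant("3DES", 168))  # not on the allow-list
```

A check like this could be run against each candidate CSP's documented capabilities during vendor selection, turning policy provisions into concrete evaluation criteria.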

In some instances, a cloud service provider cannot meet a company's requirements when it comes to adhering to a specific policy. If this happens, it is important to consider the risks of using the provider, and any deviations from policy should be carefully documented. All policy exceptions should be treated as risks, which require compensating controls to mitigate. If the threat landscape changes significantly, these risks may increase above the organization's tolerance, which will necessitate action such as finding a new CSP or moving back to on-premises hosting.

Identification and Involvement of Relevant Stakeholders

One key challenge in the audit process is the inclusion of any relevant stakeholders. This includes the organization's management who will likely be paying for the audit, security practitioners who will be responsible for facilitating the audit, and employees who will be called upon to provide evidence to auditors in the form of documentation, artifacts, or sitting for interviews.

Cloud computing environments can include more stakeholders than on-premises systems, because there can be multiple CSPs involved. For instance, a SaaS application may introduce both the SaaS vendor as well as their infrastructure provider, where an on-premises environment would involve only the organization's internal IT department. When it comes to performing audits, certain challenges can arise from these complicated supply chains.

It is important to both identify and involve all relevant stakeholders. If this is not done, any audit performed risks missing important details and information the auditors need to uncover potential weaknesses. This applies even without additional vendors or stakeholders—auditors will need access to relevant personnel such as system administrators and management inside the organization. When auditing a cloud system, stakeholders from the CSP may need to be informed or involved.

To identify relevant stakeholders, cloud security practitioners must perform some potentially challenging tasks, including the following:

  • Defining the enterprise architecture currently used to deliver services, including all service providers.
  • Identifying any contractual obligations or requirements that impact audits, such as a limitation on the right to audit or resources provided by the CSP that can be used by customers when performing an audit. Most CSPs publish their own audit reports that detail controls under their purview, such as physical and environmental. Cloud customers may carve these requirements out of their audit scope and instead rely on the findings of the CSP's auditors.

Specialized Compliance Requirements for Highly Regulated Industries

Responsibility for compliance with any relevant regulations ultimately rests with the cloud consumer, and organizations that migrate to the cloud do not absolve themselves of risks associated with their information systems. Some industries have cloud-specific regulatory or compliance guidance, and some have extensive regulatory frameworks due to the sensitivity of the data handled by that industry. This significantly impacts the work of security practitioners, who may find their entire job description dictated by the compliance requirements.

Many CSPs have compliance-focused cloud service offerings, which meet the requirements of specific regulatory or legal frameworks. An organization's cloud computing strategy should be designed with regulatory compliance in mind, including mandating the use of compliant cloud service offerings. The cloud customer is unlikely to perform their own audit of a CSP and instead will rely on the CSP's published audit reports to gain assurance that the CSP's services implement adequate protections to meet the regulatory requirements.

Highly regulated industries typically involve highly sensitive data, such as health or financial information, or provide services that make them critical infrastructure, such as power and other utility providers. Organizations in these industries need to be aware of the regulations governing their operations and ensure that their strategy for using cloud computing enables them to be compliant. Examples of these regulatory frameworks include the following:

  • North American Electric Reliability Corporation Critical Infrastructure Protection (NERC/CIP): In the United States and Canada, organizations involved with power generation and distribution must regulate their operations according to the CIP standards. This includes the use of any cloud computing resources, which must meet requirements like maintaining adequate security protections to prevent disruption of power generation and delivery.
  • HIPAA and the Health Information Technology for Economic and Clinical Health (HITECH) Act: Both HIPAA and HITECH deal with PHI and implement specific requirements for security and privacy protections, as well as breach notification requirements. While cloud computing is not specifically addressed, these laws do identify required controls that must be in place for any system handling PHI. Cloud customers should verify their chosen CSP's ability to meet these requirements before processing any PHI in the cloud.
  • PCI: PCI DSS specifies protections for payment card transaction data. Similar to HIPAA, it does not specifically address the use of cloud computing, but any CSP chosen must be able to meet the PCI DSS standards for data security and privacy. Reviewing the CSP's audit reports to gain assurance is a key task for a security practitioner.

Since public CSPs do not generally allow individual customers to perform audits, organizations in highly regulated industries may seek out a different cloud deployment model. If enough organizations need cloud computing, creating a community cloud might be a feasible option. Since the user community shares the same regulatory requirements, the community cloud can be specifically designed to meet those needs. This simplifies the task of compliance, and any audits performed on the cloud will be specific to the industry-specific regulations, which will make security activities easier for all customers of that community cloud.

Impact of Distributed Information Technology Model

Cloud computing enables distributed IT service delivery, with systems that can automatically replicate data and provide services from data centers around the globe. Auditing such a complex environment requires significant modifications from traditional computing models, where it was possible to point to a specific data center and specific server rack where data or systems were hosted.

One obvious impact of this distributed model is the additional geographic locations auditors must consider when performing an audit. An important term in audits is sampling, which is the act of picking a subset of the system's physical infrastructure to inspect. For example, when performing a configuration audit on a system with 100 web servers, an auditor might pull configuration information for 20 of them to perform checks. The time needed to audit all 100 is prohibitive, so reviewing 20 percent is adequate to determine if the organization's configuration management policy is being followed.

Now expand this problem from 100 servers in a few data centers in one country: auditors now face hundreds of data centers in many different countries. Further complicating the issue is virtualization, which means the virtual servers that are part of a specific system could exist on any one of thousands of hardware clusters around the world—and their location can change almost instantly. This is an obvious benefit for system availability but makes the process of sampling much more difficult.
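The sampling idea described above might look like the following sketch, where a fixed fraction of a hypothetical server inventory is drawn at random and checked against a configuration baseline; the inventory and the TLS baseline are both invented for illustration:

```python
import random

# Sketch of audit sampling: draw a random 20 percent of a server inventory
# and check each sampled server against a configuration baseline. The
# inventory and the TLS baseline are hypothetical.
servers = [{"name": f"web-{i:03d}", "tls_min": "1.2"} for i in range(100)]
servers[7]["tls_min"] = "1.0"  # plant one nonconforming configuration

rng = random.Random(42)  # fixed seed so the sample is reproducible
sample = rng.sample(servers, k=len(servers) // 5)  # 20 of 100 servers

findings = [s["name"] for s in sample if s["tls_min"] != "1.2"]
print(f"Sampled {len(sample)} servers; {len(findings)} finding(s)")
```

In practice an auditor's sampling methodology is more rigorous than a single random draw, but the underlying idea is the same: inspect a representative subset rather than the full population.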

CSPs have found ways to collect evidence that provides auditors with sufficient assurance that they have collected a representative sample. This can include continuous monitoring strategies that capture information on a frequent enough basis to supply the auditor with sufficient, competent evidence. However, the cost of audits with geographically distributed systems will be greater, as the auditors may have to perform physical site inspections that require travel.

Legal jurisdiction issues can also complicate the process of conducting audits. While auditing itself is rarely illegal, the logistics of auditors traveling to and gaining access to facilities can add complexity to the audit process. For example, doing business in some countries requires visas and work permits, which must be approved before an audit can take place. As with other aspects of cloud security, practitioners should coordinate with appropriate legal resources to determine any needs related to international auditing.

Understand Implications of Cloud to Enterprise Risk Management

If you compare how IT services were provisioned two decades ago to how they are done today, you would see a completely different landscape. In the dot-com era, provisioning systems took experts days or weeks to build out hardware, operating systems, and applications, and that assumed a facility was available—if not, building out a data center could take months. Companies spent millions of dollars on physical infrastructure in the form of data centers, wide area networks, and physical server hardware. Today, anyone can provision a multitier web application in a few minutes or sign up for a SaaS application in mere seconds.

This shift in how IT services are provisioned has significantly altered enterprise risk management practices. In the past, the thought of “accidentally” provisioning a server simply did not exist, much less the scenario of spinning up infrastructure in another country or legal jurisdiction. Today that scenario is not only possible but also highly likely in many organizations, which opens them up to new risks. These require new management and mitigation strategies, approaches, and tools.

It is vital for both cloud customers and CSPs to understand not only how enterprise risk management has changed, but also how it continues to evolve as more organizations adopt cloud computing and novel cloud services emerge. New strategies can be employed for risk mitigation, and new ways of assessing, evaluating, and communicating about risk are needed.

Assess Provider’s Risk Management Programs

Prior to establishing a relationship with a cloud provider, a cloud customer needs to analyze the risks associated with adopting that provider's services. The goal of this is the same as performing risk assessments for on-premises infrastructure, but the method of gaining assurance is different. Rather than performing a direct audit, the customer must rely on their supply chain risk management (SCRM) processes. Just as cloud computing shifts IT control away from the customer to the CSP, SCRM requires new approaches to gaining assurance.

First and foremost in SCRM is evaluating whether a supplier has a risk management program in place, and if so whether the risks identified by that program are being adequately mitigated. Unlike traditional risk management activities, where the organization can directly review their own processes and procedures, SCRM may require an indirect approach. The major CSPs do not permit direct customer audits or assessments, so cloud customers must review audit reports furnished by the CSP to gain the information needed.

SOC 2, ISO 27001, FedRAMP, and CSA STAR have all been discussed in previous sections of this chapter. CSPs will engage a qualified third-party auditor to perform an audit and issue a report using one or more of these frameworks, and possibly others, depending on the markets the CSP is trying to win business in. Some customers are required to choose CSPs that are compliant with a particular framework, such as U.S. government agencies that must use FedRAMP-accredited CSPs. Nonregulated organizations may be able to choose a CSP that provides an audit report that offers adequate assurance, such as picking a CSA STAR Level 1 CSP. Although the Level 1 self-assessment provides only low assurance, the lower cost associated with a less-audited environment may be appropriate for lower-sensitivity data.

When reviewing an audit report, there are several key elements to focus on. These include any scoping information or description of the audit target. Some compliance frameworks, such as SOC 2, allow audits to be very narrowly scoped. If the CSP's SOC 2 audit did not cover a specific service a customer wants to use, then the report provides no assurance value for that service. Also important to review are any findings, weaknesses, or deficiencies identified in the report, as these represent inadequate or nonfunctional risk mitigations. If a finding applies to a service the customer is not using, it can be ignored, but if it does impact a service in use, then that risk is inherited by the customer. This may drive changes, such as enhanced customer-side controls, tracking the CSP's mitigation and resolution efforts, or even migrating to another CSP altogether.
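The triage logic for audit findings can be sketched simply: only findings that touch services the customer actually uses represent inherited risk. The finding identifiers, service names, and severities below are hypothetical:

```python
# Hypothetical findings extracted from a CSP's audit report.
findings = [
    {"id": "F-01", "service": "object-storage", "severity": "high"},
    {"id": "F-02", "service": "managed-dns",    "severity": "medium"},
    {"id": "F-03", "service": "vm-compute",     "severity": "low"},
]

# Services this customer actually consumes from the CSP.
services_in_use = {"object-storage", "vm-compute"}

# Findings against services in use are inherited risk; the rest can be ignored.
inherited = [f for f in findings if f["service"] in services_in_use]
ignorable = [f for f in findings if f["service"] not in services_in_use]

print([f["id"] for f in inherited])   # findings requiring customer action
```

In practice the inherited list would feed the customer's own risk register and drive tracking of the CSP's remediation efforts.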

There are resources that can help organizations build out or enhance their SCRM program. NIST has a resource library that includes working groups, publications, and other resources, available here: csrc.nist.gov/Projects/cyber-supply-chain-risk-management. ISO 28000:2022 specifies a management system for security and resilience, with a particular focus on supply chain security. It extends concepts found in the ISO 31000:2018 standard, which focuses on enterprise risk management. Both standards provide guidance for identifying and assessing a supplier's security controls, policies and procedures, and the effectiveness of their security risk mitigations.

Two other important aspects to consider when evaluating a CSP's risk management program include the company's risk profile and risk appetite. Risk profile describes the risk present in the organization based on all the identified risks and any associated mitigations in place. For example, technology startups typically have much higher risk than established financial services firms. This is due to several factors, including the age of the company and maturity of their risk management programs, as well as the varying amount of regulation each industry faces.

Risk appetite describes the amount of risk an organization is willing to accept without mitigating—once again a startup is likely to accept more risk than a bank, simply due to the resources required to mitigate those risks. A cash-strapped startup cannot staff up a risk department, so it must accept more operational and technology risk. Both of these factors should be considered by any cloud customer when evaluating a provider's risk management program. These details may be provided in audit reports as part of the provider's description of their ISMS, or in other documentation like security whitepapers.

Differences between Data Owner/Controller vs. Data Custodian/Processor

An important distinction in data management is the difference between the data owner (data controller) and the data custodian (data processor). While these nuanced definitions may seem unnecessary, they do have implications for managing risks associated with privacy data. It is helpful to start with some definitions:

  • A data subject is the individual or entity that is the subject of the personal data.
  • A data controller is the person (or company) that determines the purposes for which, and the way in which, personal data is processed. This entity owns the data and, importantly, risks associated with any breaches of the data.
  • The data processor is anyone who processes personal data on behalf of the data controller. This entity is a custodian of data, who is charged with implementing protections at the direction of the data controller.

For example, let's say BikeCo sells bicycles and allows users to provide personal data online to fulfill orders. They use a CSP, called CloudWheelz, to host their website and customer database, as well as an online payment processor, Circle, to handle payment card transactions. In this case, any customer is the data subject, and BikeCo is the data controller. Both CloudWheelz and Circle are data processors that act on behalf of the data owner. In the event that either CloudWheelz or Circle suffers a data breach, BikeCo is still legally liable for the data. If BikeCo has not taken reasonable steps to ensure that its data processors implemented adequate protections, regulatory agencies are likely to assess significant penalties.

The distinctions are important for regulatory and legal reasons. Data processors are responsible for the safe and private custody, transport, and storage of data according to business agreements. Data owners are legally responsible (and liable) for the safety and privacy of the data under most international laws. When data controllers use processors, they must ensure that security requirements follow the data. This is often achieved via contract clauses that specify data protection and handling requirements, breach notification timelines, and possibly risk transfer such as the requirement for the processor to carry insurance that helps defray costs associated with a security incident.

Regulatory Transparency Requirements

A cloud security professional should be aware of the transparency requirements imposed on data controllers by various regulations and laws around the world. Many of these were written before cloud computing was as pervasive as it is today, so it is also important to stay informed about changes, as well as new regulations that come into force and impact the organization. Many legal firms will provide this kind of guidance as a service, and in-house legal counsel can also be a useful resource to identify regulatory requirements and any changes needed to come into compliance.

The following is a short, noncomprehensive list of several important regulatory frameworks that require transparency related to data security and privacy. Security practitioners in organizations regulated by these frameworks must be aware of the frameworks and work to implement the required controls. Whether their organization acts as a data owner or a data processor, cloud security professionals must be aware of all relevant regulatory requirements.

Breach Notification

Most recent privacy laws include mandatory breach notification. If an organization suffers a data breach, it is obligated to provide notification of that breach. There are some variations among the laws, mainly around the timing of the notification and who must be notified. Some regulations require notification within a specified time period after a suspected breach, while others are less strict and require notification only for a confirmed breach. Similarly, some regulations require notification only to the affected data subjects, while others require notification to a regulatory official such as a governmental data privacy official.

Regulations that require breach notification include, but are not limited to, GDPR, HIPAA (as amended by the HITECH Act), GLBA, and PIPEDA. In addition to these, numerous regional, state, and provincial regulations require data breach notification. Cloud security professionals should identify all relevant regulatory frameworks their organization is subject to, and build processes to ensure that obligations are met for notifying affected data subjects.

Incident response plans and procedures should include relevant information about the time period for reporting, as well as the required contacts in the event of a data breach. They should also include guidance on when it is necessary to contact specific data privacy officials. For example, a data breach that affects only data subjects located outside the EU does not need to be reported to EU data protection officials, since the GDPR does not apply to those data subjects.
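Reporting windows can be encoded directly into incident response tooling so deadlines are computed rather than looked up under pressure. The 72-hour GDPR window (Article 33, notification to the supervisory authority) and the 60-day HITECH window for notifying affected individuals are real figures, but the structure below is an illustrative sketch; exact obligations should always be confirmed with counsel:

```python
from datetime import datetime, timedelta

# Illustrative reporting windows; confirm exact obligations with legal counsel.
reporting_windows = {
    "GDPR":  timedelta(hours=72),   # Art. 33: supervisory authority
    "HIPAA": timedelta(days=60),    # HITECH: affected individuals
}

def notification_deadlines(detected_at, frameworks):
    """Return the notification deadline for each applicable framework."""
    return {fw: detected_at + reporting_windows[fw] for fw in frameworks}

detected = datetime(2023, 3, 1, 9, 0)
for fw, deadline in notification_deadlines(detected, ["GDPR", "HIPAA"]).items():
    print(fw, deadline.isoformat())
```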

Sarbanes-Oxley Act

If a company is publicly traded in the United States, it is subject to transparency requirements under the Sarbanes-Oxley Act (SOX) of 2002. Specifically, as data owners, these companies should consider the following:

  • Section 802: It is a crime to destroy, change, or hide documents to prevent their use in official legal processes.
  • Section 804: Companies must keep audit-related records for a minimum of five years.

SOX compliance is often an issue with both data breaches and ransomware incidents at publicly traded companies. Losing compliance-related data to external actors does not relieve a company of its legal obligations.

GDPR and Transparency

For companies doing business in the European Union or with citizens of the EU, transparency requirements under the GDPR are laid out in Article 12 (see gdpr-info.eu/art-12-gdpr). The exact language states that a data controller (data owner) “must be able to demonstrate that personal data are processed in a manner transparent to the data subject.” The obligations for transparency begin at the data collection stage and apply “throughout the lifecycle of processing.”

The GDPR stipulates that communication to data subjects must be “concise, transparent, intelligible and easily accessible, and use clear and plain language.” Achieving this task may not be the responsibility of a security practitioner, but security should be present when requirements are developed for user interfaces and language presented to users. Legal counsel may also be involved to ensure that the requirements under GDPR are met by any system or application designs.

Meeting the requirement for transparency also requires processes for providing data subjects with access to their data. This process may be owned by customer-facing resources such as a support team, with input required from legal and security to ensure that the procedures meet the GDPR requirements.

In simple terms, this means that plain language must be used to explain why data is being collected and what it is being used for. Similar language is included in other privacy regulations, so building a robust process for providing transparent information is a requirement for many security teams.

Risk Treatment

Risk treatment is the practice of modifying risk, usually to lower it, which can be achieved in a number of ways. Risk treatment begins with identifying and assessing risks, typically by measuring the likelihood and impact of their occurrence. Not all risks can be treated equally, so risk management usually prioritizes those risks that are higher impact and likelihood first. These risk assessment procedures should be documented as part of the organization's ISMS.
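The assess-and-prioritize step can be sketched with a simple qualitative register. The risk names, 1-5 scales, and multiplicative scoring below are a common convention, not a mandated method:

```python
# Minimal qualitative risk register: score = likelihood x impact (1-5 scales),
# then rank so the highest-scoring risks are treated first.
risks = [
    {"name": "CSP region outage",         "likelihood": 2, "impact": 5},
    {"name": "Misconfigured storage ACL", "likelihood": 4, "impact": 4},
    {"name": "Insider data theft",        "likelihood": 1, "impact": 5},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

prioritized = sorted(risks, key=lambda r: r["score"], reverse=True)
print([r["name"] for r in prioritized])   # highest-priority risk first
```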

Once risks are identified, risk treatments should be selected to reduce the likelihood or impact (or both). Treatments that reduce likelihood are proactive and sometimes known as safeguards, while treatments that reduce the impact after a risk has occurred are reactive and known as countermeasures. These are collectively referred to as controls, which should be selected using a cost-benefit analysis to achieve acceptable risk mitigation at an acceptable cost.

There are four main approaches to treat risk, and many organizations will use more than one treatment option for the same risk. The options are as follows:

  • Avoid: The organization can avoid risk altogether by not engaging in a particular activity, such as not doing business with EU citizens to avoid GDPR compliance fines. However, this also means losing out on new customers, so it is not unusual for this treatment option to go unused.
  • Transfer: The organization can transfer risk to another organization, which is typically an insurance company. An organization's insurance policy pays out in the event of a cyber incident, which helps to offset the financial impact. Insurance carriers are in the business of managing risk, and it is unlikely a carrier will offer insurance if the organization cannot show they have adequate risk mitigations in place. Therefore, risk transfer is often used in conjunction with risk mitigation.
  • Mitigate: The organization implements controls to reduce the likelihood and impact of the identified risks. It is common for multiple controls to be layered in a risk mitigation, such as proactive access controls to prevent unauthorized system access, and data encryption to prevent an attacker from reading any data they do gain access to. This reduces the impact of a breach.
  • Accept: Any risk that is not avoided, mitigated, or transferred is accepted by default. Mitigated risks should be evaluated to determine if the residual risk, that is, the risk that remains after the control is implemented, falls below the organization's risk tolerance. If the residual risk remains too high, other mitigations or risk transfer should be implemented.

It is important to remember that risks are never entirely eliminated. Mitigations and transfer reduce the risks, and cloud security practitioners should keep this in mind as they perform tasks like evaluating CSPs and selecting security controls for implementation.
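The residual-risk check can be illustrated with a simple multiplicative model. The effectiveness factor and tolerance threshold below are assumptions for illustration, not a standard formula:

```python
def residual_risk(inherent_risk, control_effectiveness):
    """Risk remaining after a mitigating control is applied.

    control_effectiveness is the fraction of risk the control removes
    (0.0 = no effect, 1.0 = fully effective) -- an illustrative model.
    """
    return inherent_risk * (1.0 - control_effectiveness)

risk_tolerance = 5.0   # hypothetical organizational threshold
inherent = 16.0        # e.g., likelihood x impact from a risk register

remaining = residual_risk(inherent, control_effectiveness=0.75)
print(remaining)                                          # 4.0
print("accept" if remaining <= risk_tolerance else "treat further")
```

Note that `remaining` never reaches zero unless the control is assumed perfectly effective, which mirrors the point above: risks are reduced, never eliminated.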

Risk Frameworks

Similar to security control frameworks, there are several risk management frameworks available for security practitioners to use as guides when designing a risk management program. Many of these are published by the same bodies that publish the control frameworks, and they are complementary. Organizations designing an ISO 27001 ISMS can easily utilize the relevant ISO standard for designing a security risk management program. These standards are known as risk management frameworks (RMFs).

It is important to note that risk management is not only a security activity. Other departments typically pursue risk management as well, which means the security team might be required to work in a collaborative way when conducting risk assessment and management. Some organizations may find a single risk management function to be useful, so executives have a single view of risk and associated metrics. Other organizations may allow different departments to conduct risk management differently, especially if there are conflicting regulations that govern the activities.

In the cloud computing arena, a cloud security professional should be familiar with the ISO 31000:2018 guidance standard, the European Network and Information Security Agency (ENISA)'s cloud computing risk assessment tool, and NIST standards such as 800-146, “Cloud Computing Synopsis and Recommendations,” and 800-37, “Risk Management Framework for Information Systems and Organizations: A System Lifecycle Approach for Security and Privacy.”

ISO 31000

ISO 31000 contains several standards related to building and running a risk management program. ISO 31000:2018, “Risk management — Guidelines,” provides the foundation of an organization's RMF, while IEC 31010:2019, “Risk management — Risk assessment techniques,” provides guidance on conducting a risk assessment. The related ISO GUIDE 73:2009, “Risk management — Vocabulary,” provides a standard set of terminology used throughout the other documents and is useful for defining elements of the risk management program.

This ISO standard provides generic recommendations for the design, implementation, and review of risk management processes within an organization. The 2018 update provides more strategic guidance and redefines risk from the concept of a “probability of loss” to a more holistic view of risk as the “effect of uncertainty on objectives,” recasting risk as either a negative or positive effect. ISO 31000 recommends the following steps in planning for risk:

  • Avoiding the risk by deciding not to start or continue with the activity that gives rise to the risk
  • Accepting or increasing the risk to pursue an opportunity
  • Removing the risk source
  • Changing the likelihood
  • Changing the consequences
  • Sharing the risk with another party or parties (including contracts and risk financing)
  • Retaining the risk by informed decision

ISO 31000 is a detailed framework but is not designed to be used in certification (there is no such thing as “ISO 31000 certified”). Adopting this framework will require extensive management conformity to accountability standards as well as strategic policy implementation, communication, and review practices. Documents and supporting resources can be found here: iso.org/iso-31000-risk-management.html.

ENISA

As a rough equivalent to the U.S. NIST, ENISA produces useful resources related to information and cybersecurity aligned with EU government objectives and programs. The “Cloud Computing Risk Assessment” is one of these documents and provides details of cloud-specific risks that organizations should be aware of and plan for when designing cloud computing systems.

This guide identifies various categories of risks and recommendations for organizations to consider when evaluating cloud computing. These include research recommendations to advance the field of cloud computing, legal risks, and security risks. Examples of the security risks identified include the following:

  • Loss of governance: Gaps in the security defenses caused by differences in the understanding of responsibility between the client and the CSP.
  • Isolation failure: The potential failures caused by lack of separation in storage, memory, and other hardware between cloud clients.
  • Compliance risk: Using a CSP introduces new challenges to achieving and maintaining certifications.
  • Management interface compromise: Management interfaces for cloud environments provide an additional attack vector.
  • Data protection: Ensuring that the CSP handles personal data in a lawful way.
  • Insecure data deletion: Secure deletion of data in the cloud is complicated by its distributed nature.
  • Malicious insiders: Adding a CSP introduces high-risk-access individuals who can compromise cloud architectures and data.

The full document can be accessed here: enisa.europa.eu/publications/cloud-computing-risk-assessment.

NIST

Although a U.S. government agency, NIST publishes well-regarded information security standards that are free to download and may be used by any organization. NIST Special Publication (SP) 800-146, “Cloud Computing Synopsis and Recommendations,” provides definitions of various cloud computing terms. These include the service and deployment models like SaaS and public cloud, which were discussed in Chapter 1. Although not a dedicated risk management standard, the various risks and benefits associated with different deployment and service models are discussed. These can be an important input to any discussion of cloud computing risk, and the document may be found here: csrc.nist.gov/publications/detail/sp/800-146/final.

NIST also publishes an RMF, documented in NIST Special Publication 800-37. This document specifies the RMF to be used by U.S. government federal agencies and is often applied to organizations providing goods and services to these agencies. Although it shares some terminology with the ISO 31000 standard, the NIST RMF is specifically designed to address security and privacy risks. The RMF is flexible and can be applied at multiple levels of an organization, including the system level, an organizational unit level, or across the entire organization. The full publication and supporting documents are located at csrc.nist.gov/publications/detail/sp/800-37/rev-2/final.

Metrics for Risk Management

There are some key cybersecurity metrics that companies can track to present measurable data to company stakeholders. Each organization should evaluate its strategy, risks, and management requirements for data when designing a metrics program. Some metrics that are commonly tracked include the following:

  • Patching levels: How many devices are fully patched and up-to-date? This is a useful proxy for risk, as unpatched devices often contain exploitable vulnerabilities.
  • Time to deploy patches: How many devices receive required patches in the defined timeframes? This is a useful measure of how effective a patch management program is at reducing the risk of known vulnerabilities.
  • Intrusion attempts: How many times have unknown actors tried to breach cloud systems? Increased intrusion attempts can be an indicator of increased risk likelihood.
  • Mean time to detect (MTTD), mean time to contain (MTTC), and mean time to resolve (MTTR): How long does it take for security teams to become aware of a potential security incident, contain the damage, and resolve the incident? Inadequate tools or resources for reactive risk mitigation can increase the impact of risks occurring.

Metrics provide vital information for decision makers in the organization. Metrics that are within expected parameters indicate risk mitigations that are operating effectively and keeping risk at an acceptable level. Metrics that deviate from expected parameters, such as MTTD increasing, can indicate that existing risk mitigations are no longer effective and should be reviewed.

Assessment of Risk Environment

The cloud has become a critical operating component for many organizations, so it is crucial to identify and understand the risks posed by a CSP. Cloud providers are subject to risks similar to other service providers, but since they provide a critical service to many organizations, the impact of these risks is increased. It is important to consider a number of questions when evaluating a cloud service, vendor, or infrastructure provider.

  • Is the provider subject to takeover or acquisition?
  • How financially stable is the provider?
  • In what legal jurisdiction(s) are the provider's offices located?
  • Are there outstanding lawsuits against the provider?
  • What pricing protections are in place for services contracted?
  • How will a provider satisfy any regulatory or legal compliance requirements?
  • What does failover, backup, and recovery look like for the provider?

It can be a daunting challenge for any cloud customer to perform due diligence on their provider. However, since the customer organization still holds legal accountability, it is a vital step in selecting and using a vendor. Designing a SCRM program to assess CSP or vendor risks is a due diligence practice, and actually performing the assessment is an example of due care. As a data controller, any organization that uses cloud services without adequately reviewing and mitigating the risks is likely to be found negligent should a breach occur.

There are frameworks for evaluating vendor and infrastructure risks, which provide guidance on designing and executing the required processes. Some of these are general technology risk management frameworks, while others are specifically designed for cloud computing.

ISO 15408-1:2009: The Common Criteria

The Common Criteria for Information Technology Security Evaluation is an international standard for information security certification. The evaluation process is designed to establish a level of confidence in a product or platform's security features through a quality assurance process.

Common Criteria (CC) evaluation is done through testing laboratories where the product or platform is evaluated against a standard set of criteria. This includes the Target of Evaluation (ToE) and Protection Profiles (PP), which describe the system evaluated and specific security services the product offers. The result is an Evaluation Assurance Level (EAL), which defines how robust the security capabilities are in the evaluated product.

Most CSPs do not have Common Criteria evaluations covering their entire environments, but many cloud-based products do. One common example relevant to security practitioners is security tools designed to be deployed in virtual environments like the cloud. Defining a desired EAL allows the organization to select products that have been independently verified against a standardized set of criteria.
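Using a minimum EAL as a selection filter can be sketched as follows; the product names and claimed EALs are invented for illustration:

```python
# Hypothetical product catalog with claimed Evaluation Assurance Levels.
products = [
    {"name": "CloudFW",  "eal": 4},
    {"name": "NetScan",  "eal": 2},
    {"name": "VaultHSM", "eal": 5},
]

# Organization's minimum assurance requirement for this product class.
minimum_eal = 4

qualified = [p["name"] for p in products if p["eal"] >= minimum_eal]
print(qualified)   # products meeting the EAL requirement
```

In practice the claimed EALs would be verified against the official certified products list rather than vendor marketing material.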

A product that has undergone CC evaluation cannot be considered totally secure. The ToE specifies the configuration of the product, and failure to configure a system to the same specification as the ToE can result in a less-secure state. Similarly, newly discovered vulnerabilities could lead to loss of security in a system even if it is properly configured. EALs are useful as a selection tool but are not an absolute guarantee of security.

An up-to-date list of certified products can be found at commoncriteriaportal.org/products.

CSA STAR

When evaluating the risks in a specific CSP or other cloud service, the CSA STAR can be a useful method for ascertaining risks. The CSA STAR contains evaluations of cloud services against the CSA's cloud-specific controls (the CCM), and organizations have the flexibility to select self-assessed or third-party-assessed cloud services. Organizations that are not regulated by other frameworks and that make extensive use of the cloud may find this a lightweight but useful risk management framework.

Since the registry of certified providers is publicly available, the STAR program makes it easy for a company to assess the relative risk of a provider and should certainly be consulted when assessing any new CSP.

EU Cybersecurity Certification Scheme on Cloud Services

ENISA has published a standard for certifying the cybersecurity practices present in cloud environments. The framework, known as EUCS, defines a set of evaluation criteria for various cloud service and deployment models, with the goal of producing security evaluation results that allow comparison of the security posture across different cloud providers. The standard is still under development as of 2022, but the draft scheme can be found here: enisa.europa.eu/publications/eucs-cloud-service-scheme.

The EUCS defines several elements needed to support certification, many of which are similar to the Common Criteria. This includes assurance levels, necessary information for assurance reviews and tests, and a process for self-assessments. Newly discovered vulnerabilities are explicitly identified as an area of concern, and the scheme identifies a process for handling such vulnerabilities and updating any relevant certification documentation as needed. In addition, the scheme identifies conformance assessment body (CAB) criteria that detail the requirements an organization must meet to perform evaluations and issue certifications under EUCS.

Understand Outsourcing and Cloud Contract Design

Outsourcing refers to using a party outside the organization to perform services or deliver goods. Outsourcing can allow organizations to take advantage of higher-skilled resources that may be difficult or expensive to hire internally or take advantage of shared resources that benefit from an economy of scale. CSPs provide this type of outsourcing; by pooling and sharing resources, organizations gain access to a globally distributed network of data centers that would be prohibitively expensive for all but the largest multinational companies.

When entering into an outsourcing arrangement, organizations utilize a variety of legal agreements and must also perform oversight and monitoring functions to validate compliance with the agreed terms. Cloud security professionals are well served by understanding key contractual provisions that provide risk management options for the specific CSPs that are used by their organizations.

Business Requirements

Before executing a contract with a CSP, it is important for any business to fully understand its own needs in order to select a CSP that can adequately meet them. The evolution of the cloud means that more and more IT functions can use cloud computing. Once an organization deems cloud computing fit for its needs, the process of codifying those needs and identifying CSPs that meet them can begin. In legal terms, a cloud customer and a CSP enter into a master service agreement (MSA), a contract that establishes the general terms under which the parties will conduct business and the services to be provided.

Many organizations will have standardized contract templates, and the task of creating and maintaining these is usually not assigned to the security team. Legal counsel is most often responsible for these contracts, but input from the security team is essential to ensure that security requirements make it into these templates. Common areas of security that should be addressed in contracts include any compliance requirements the customer is passing along to the CSP, as well as important processes and parameters the CSP must meet, such as the duty to inform the customer of a breach within a specific time period after detection.

Another important legal document that may be required is a statement of work (SOW). SOWs are usually created after an MSA has been executed and govern a specific unit of work. For example, the agreement to use a CSP's services at specific prices would be documented in the MSA. A SOW could be issued under this MSA detailing requirements, expectations, and deliverables for a major project, such as paying the CSP to assist with a migration from on-premises to cloud hosting.

The greater specificity of a SOW allows for more granular security requirements specific to that unit of work, such as physically secured transport and secure handling of the hard drives the CSP uses to migrate the customer's data into its systems. If this activity is performed only during the initial migration, these requirements do not belong in the overall MSA, since the physical data transfer is not part of the ongoing services the CSP provides.

The final legal document where business requirements can be captured is the service level agreement (SLA), which specifies levels of service the CSP is obligated to provide. SLAs measure common aspects of service delivery like uptime and throughput and are often tied to system requirements the organization needs in order to function properly.

The SLA is a legally binding agreement, and if the CSP fails to provide the specified levels of service, the customer usually has recourse options defined. These may include refunds or credits for the service and possibly the ability to terminate the contract without penalties. While this is a dramatic option, if a CSP is unable to meet the organization's required service levels, then the organization should be free to seek out another provider. Contracts often contain penalties for early termination without cause, so monitoring the service levels and properly documenting any shortcomings is essential, as this often enables the customer to terminate the contract with cause and avoid the termination fee.

Key SLA Requirements

Service level agreements can be a key factor in avoiding potential issues once a contract is in place. Service metrics, such as uptime or quality of service, can be included in the SLA. An uptime of 99 percent may seem adequate, but that level of allowed downtime equals 87.6 hours a year. For a nonessential system, this may be acceptable, but a mission-critical system that must be available 24/7 cannot tolerate that much downtime!
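
The downtime figures above fall out of simple arithmetic. As a minimal sketch, the following converts an uptime guarantee into the maximum downtime it allows per year (the function name is illustrative, not from any standard):

```python
# Convert an uptime guarantee (expressed as a percentage) into the
# maximum downtime it allows over a given period. The 87.6 hours/year
# figure for 99 percent uptime cited above falls out of this formula.

HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

def allowed_downtime_hours(uptime_pct, period_hours=HOURS_PER_YEAR):
    """Maximum downtime (in hours) permitted by an uptime guarantee."""
    return period_hours * (1 - uptime_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime allows {allowed_downtime_hours(pct):.2f} hours/year of downtime")
```

Each additional "nine" of availability cuts the allowed downtime by a factor of ten, which is why moving from 99 percent to 99.99 percent is such a dramatic change in service expectations.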

SLAs should be written to ensure that the organization's service level requirements (SLRs) are met, and SLAs are best suited for defining recurring, discrete, measurable items the parties agree upon. This is in contrast to nonrecurring items that are better suited to a contract, such as agreed prices for specific services. Examples of these requirements, and common elements documented in SLAs, include the following:

  • Uptime guarantees
  • SLA violation penalties
  • SLA violation penalty exclusions and limitations
  • Suspension of service clauses
  • Provider liability
  • Data protection and management
  • Disaster recovery and recovery point objectives
  • Security and privacy notifications and timeframes
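
To illustrate how uptime guarantees and violation penalties interact in practice, the following sketch compares measured uptime against a guarantee and computes a service credit. The tier schedule and helper names are hypothetical, invented for this example rather than drawn from any real CSP contract:

```python
# Hypothetical service-credit schedule: if measured uptime falls below
# the contractual guarantee, the customer earns a credit expressed as a
# percentage of the monthly fee. All tiers here are illustrative only.
CREDIT_TIERS = [
    (99.9, 0.0),    # guarantee met: no credit owed
    (99.0, 10.0),   # minor shortfall
    (95.0, 25.0),   # major shortfall
    (0.0, 100.0),   # severe outage: full credit
]

def service_credit_pct(measured_uptime_pct):
    """Return the credit percentage owed for a measured uptime level."""
    for floor, credit in CREDIT_TIERS:
        if measured_uptime_pct >= floor:
            return credit
    return 100.0

def measured_uptime(total_minutes, downtime_minutes):
    """Uptime as a percentage of the measurement period."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

# Example: 90 minutes of downtime in a 30-day month (43,200 minutes)
# misses a 99.9 percent guarantee and earns a 10 percent credit.
up = measured_uptime(30 * 24 * 60, 90)
print(f"Measured uptime: {up:.3f}% -> credit: {service_credit_pct(up)}% of monthly fee")
```

This kind of monitoring and documentation is what enables a customer to claim credits, or to terminate with cause, when service levels are missed.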

Vendor Management

As discussed previously, the process of managing risk is complicated when parts of the organization's IT infrastructure exist outside the organization's direct control. The practices of supply chain risk management (SCRM) and vendor management overlap significantly, though in many cases vendor management will include more activities related to operational risks.

Vendor management concerns existed for traditional, on-premises infrastructure, but the activities required for cloud computing necessitate different processes and approaches. Selecting a vendor for on-premises hardware might have been a once-and-done activity, since hardware would be expected to last for a defined period of time; at the end of its useful life, assessment of replacement vendors would be conducted.

Cloud computing requires more continuous management activities, since it involves outsourcing ongoing organizational processes and infrastructure to a service provider. This redefined relationship requires a great deal of trust and communication with vendors. Cloud professionals need strong project and people management skills to be successful when performing activities such as the following:

  • Assess vendors: Security practitioners should participate in the initial selection process for a CSP, which involves assessing security risks present in the CSP and related services. Once a CSP has been selected, ongoing assessments should be conducted at a specified frequency. For many customers, this process will entail reviewing security reports like a SOC 2 on an annual basis after the CSP has undergone its yearly audit.
  • Assess vendor lock-in risks: This assessment requires knowledge not only of the CSP's offerings but also of the architecture and strategy the customer organization intends to use. Simply moving physical servers into virtual cloud-based equivalents is unlikely to create lock-in risk, as all CSPs offer basic IaaS that can host a virtualized server. Using any unique CSP offerings, such as artificial intelligence/machine learning (AI/ML) platforms, can result in a system that is dependent on that specific CSP. If the CSP suffers a breach or discontinues the service, the customer organization has no effective means of mitigating that risk, short of completely rebuilding the system from the ground up using another CSP's offerings.
  • Assess vendor viability: This is often a process that is not conducted by the security team, as it deals with operational risk. Customers assume significant risk if a CSP is hosting mission-critical systems but is unable to continue their operations, which could be caused by issues like bankruptcy. Assessing the viability of vendors may involve reviews of public information like financial statements, the CSP's performance history and reputation, or even formal reports like a SOC 1, which identifies potential weaknesses that could impact the CSP's ability to continue operations.
  • Explore escrow options: Escrow is a legal term used when a trusted third party holds something on behalf of two or more other parties, such as a bank holding money on behalf of the individuals buying and selling a home. In IT services, escrow is often used to hold sensitive material like source code or encryption keys. Exposure of that material to unauthorized parties could be damaging, but release may be necessary in extreme circumstances. For example, a CSP that performed custom software development may wish to protect the intellectual property of its source code, but if it goes out of business, its customers are left with an unmaintainable system. In this scenario, an escrow provider could hold a copy of the source code and release it to customers in the event the provider is no longer in business.

Contract Management

The management of cloud contracts is a core business activity that is central to any ongoing relationship with a CSP. Organizations must employ adequate governance structures to monitor contract terms and performance and be aware of outages and any violations of stated agreements. A standards body known as the OMG Cloud Working Group publishes a useful guide to cloud service agreements, including defining and enforcing contracts, managing SLAs, and building programs to govern these service arrangements. The guide can be found here: omg.org/cloud/deliverables/Practical-Guide-to-Cloud-Service-Agreements.pdf.

Contract Clauses

There are a number of specific elements that should be considered when engaging a CSP or other cloud provider. A contract clause is a specific article of related information that specifies the agreement between the contracting parties. Examples of clauses include language related to the customer's obligation to pay and any security requirements the customer expects the service provider to meet, such as implementing industry-standard security.

Writing and reviewing contract clauses may be outside the scope of a security professional's job, but understanding the function of these clauses and important considerations that should be addressed in the contract is important. This necessitates collaboration between security practitioners and legal counsel, especially as contract negotiations often involve very advanced legal knowledge. Some common contract clauses that should be considered for any CSP or other data service provider include the following:

  • Right to audit: The customer can request the right to audit the service provider to ensure compliance with the security requirements agreed in the contract. Many CSPs do not accept these clauses due to the burden it would create on them to facilitate these audits, so the clauses are often written to allow the CSP's standard audits (e.g., SOC 2, ISO 27001 certification) to be used in place of a customer-performed audit.
  • Metrics: Not all contracts will specify metrics, but if there are specific indicators that the service provider must provide to the customer, they can be documented in a contract.
  • Definitions: A contract is a legal agreement between multiple parties, and it is essential that all parties share a common understanding of the terms and expectations. Defining key terms like security, privacy, and compliance, as well as specifying key practices like breach notifications provided within 24 hours of detection, can avoid misunderstandings.
  • Termination: Termination refers to ending the contractual agreement. This clause will typically define conditions under which either party may terminate the contract and require notice that must be given, and it may specify consequences if the contract is terminated early. Failure to provide the services agreed on or failure to pay is often defined in this clause, providing both the CSP and the customer a way out of the contract without penalties.
  • Litigation: This is an area where legal counsel must be consulted, as agreeing to terms for litigation can severely restrict the organization's ability to pursue damages if something goes wrong. For example, some providers require the use of arbitration instead of a court trial, which has different rules and may offer fewer options for the customer to recover damages.
  • Assurance: Defining assurance requirements sets expectations for both the provider and customer. Many contracts specify that a provider must furnish a SOC 2 or equivalent to the customer on an annual basis, since the customer needs that document to gain assurance that the provider's risk management is adequate.
  • Compliance: Any customer compliance requirements that flow to the provider must be documented and agreed upon in the contract. Data controllers that use cloud providers as data processors must ensure that adequate security safeguards are available for that data, and documenting the requirements in a contract is an example of exercising due care.
  • Access to cloud/data: Clauses dealing with customer access can be used to avoid risks associated with vendor lock-in. For example, it could be catastrophic if a customer informs the provider that a contract will not be renewed and the provider deletes all that customer's data. Contract clauses guaranteeing the customer's right to access its data provide protection against this risk and may specify legal recourse if access is not available.

Cyber Risk Insurance

Cyber risk insurance is designed to help an organization reduce the financial impact of risk by transferring it to an insurance carrier. In the event of a security incident, the insurance carrier can help offset associated costs, such as digital forensics and investigation, data recovery, system restoration, and even covering legal or regulatory fines associated with the incident. As discussed previously, cyber insurance carriers are in the business of risk management, so they are unlikely to offer coverage to an organization that is lacking in security controls designed to mitigate some of the risk.

Cyber insurance requires organizations to pay a premium for the insurance plan, and most plans have a limit of coverage that caps how much the insurance carrier pays. There may also be sublimits, which cap the amount that will be paid for specific types of incidents such as ransomware or phishing. It is important to understand what type of coverage is best suited to your organization's unique operating circumstances, and an insurance broker can be a useful resource when investigating insurance options. Factors to discuss with a broker include the amount of coverage needed, different types of coverage such as business interruption or cyber extortion, and security controls that the insurance carrier requires such as MFA. The broker can help to ensure that the insurance coverage is appropriate to an organization's unique circumstances and may be able to save money by eliminating unnecessary elements of the policy.
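
The interaction between the overall limit and a sublimit can be made concrete with a short sketch. The dollar figures and the `covered_payout` helper are hypothetical, not drawn from any real policy, and deductibles are omitted for simplicity:

```python
def covered_payout(loss, policy_limit, sublimit=None):
    """Amount the carrier pays for one incident, capped first by any
    applicable sublimit and always by the overall policy limit.
    (Deductibles and coinsurance are omitted for simplicity.)"""
    cap = policy_limit if sublimit is None else min(sublimit, policy_limit)
    return min(loss, cap)

# Hypothetical policy: $5M overall limit with a $1M ransomware sublimit.
print(covered_payout(3_000_000, 5_000_000))             # general incident: full $3M loss covered
print(covered_payout(3_000_000, 5_000_000, 1_000_000))  # ransomware: payout capped at the $1M sublimit
```

As the example shows, a sublimit can leave the organization exposed to a large uncovered loss even when the headline policy limit looks generous, which is exactly the sort of gap a broker can help identify.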

Cyber risk insurance usually covers costs associated with the following:

  • Investigation: Costs associated with the forensic investigation to determine the extent of an incident. This often includes costs for third-party investigators.
  • Direct business losses: Direct monetary losses associated with downtime or data recovery, overtime for employees, and, oftentimes, reputational damages to the organization.
  • Recovery costs: These may include costs associated with replacing hardware or provisioning temporary cloud environments during contingency operations. They may also include services like forensic data recovery or negotiations with attackers to assist in recovery.
  • Legal notifications: Costs associated with privacy and breach notifications required by relevant laws.
  • Lawsuits: Policies can be written to cover losses and payouts due to class action or other lawsuits against a company after a cyber incident.
  • Extortion: Coverage that pays ransomware demands is growing in popularity. This may include direct payments to attackers to preserve data privacy or restore the company's access to its data.
  • Food and related expenses: Incidents often require employees to work extended hours or travel to contingency sites. Costs associated with the incident response, including catering and lodging, may be covered, even though they are not usually thought of as IT costs!

Supply Chain Management

Supply chain attacks are increasing in frequency and severity and have been on this track for some time. Back in 2013 the retail company Target was breached via a vendor with weak security controls. In 2020, governments and major companies around the world were impacted by an attack against a popular SolarWinds network monitoring tool that was shipped to customers with compromised code. The popular open-source package registry npm has come under repeated attack, since so many of the open-source software (OSS) packages it hosts are incorporated into other software tools used by organizations all over the world.

Managing risk in the supply chain focuses on both operational risks, to ensure that suppliers are capable of providing the needed services, and security risks. This includes ensuring that suppliers have adequate risk management programs in place to address the risks that they face. Without these controls, risks that impact your organization's suppliers can easily turn into risks that impact your organization. If a major CSP does not enforce environmental controls and server equipment begins to fail, this translates to loss of availability for all of the CSP's customers.

The supply chain should always be considered in any business continuity or disaster recovery planning. The same concepts of understanding dependencies, identifying single points of failure, and prioritizing services for restoration are important to apply to the entire supply chain. Proactive measures including contract language and assurance processes can be used to quantify the risks associated with using suppliers like CSPs, as well as the effectiveness of these suppliers' risk management programs.

ISO 27036

The ISO 27000 family of standards has been discussed in many areas of this reference guide, and there is a specific standard dedicated to supply chain cybersecurity risk management. ISO/IEC 27036 provides a set of practices and guidance for managing cybersecurity risks in supplier relationships. This standard is particularly useful for organizations that use ISO 27001 for building an ISMS or ISO 31000 for risk management, as it builds on concepts found in those standards.

ISO 27036 comprises four parts, including the following:

  • ISO/IEC 27036-1:2021, “Cybersecurity — Supplier relationships — Part 1: Overview and concepts,” which provides an overview and foundation for a supply chain management capability.
  • ISO/IEC 27036-2:2014, “Information technology — Security techniques — Information security for supplier relationships — Part 2: Requirements,” which provides a set of best practices and techniques for designing and implementing the supply chain management function.
  • ISO/IEC 27036-3:2013, “Information technology — Security techniques — Information security for supplier relationships — Part 3: Guidelines for information and communication technology supply chain security,” which is of particular concern for security practitioners, as it lays out practices and techniques specific to managing security risks in the supply chain.
  • ISO/IEC 27036-4:2016, “Information technology — Security techniques — Information security for supplier relationships — Part 4: Guidelines for security of cloud services,” which is the most relevant to cloud security practitioners. This standard deals with practices and requirements for managing supply chain security risk specific to cloud computing and CSPs.

ISO 27036, like other ISO standards, is not a free resource. Additional resources worth reviewing include NISTIR 8276, “Key Practices in Cyber Supply Chain Risk Management: Observations from Industry”; NIST SP 800-161, “Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations”; and the 2015 ENISA publication “Supply Chain Integrity: An overview of the ICT supply chain risks and challenges, and vision for the way forward.”

Summary

A cloud security professional must be constantly aware of the legal and compliance issues inherent in migrating and maintaining systems in the cloud. Understanding the legal requirements, privacy issues, audit challenges, and how these relate to risk and contracts with cloud providers is a must for any company taking advantage of cloud services. A cloud security professional must also be well versed in the frameworks provided by professional organizations such as ENISA, NIST, ISO, and the CSA. All information security activities are tied back to business risks, since security should always be aligned to the needs of the organization. Understanding, assessing, and mitigating these risks is critical to any business strategy, and requires collaboration between the security professional and other teams. Cloud security professionals should understand the role that IT plays in this larger picture and have the communication and people skills to involve the appropriate business, legal, and risk decision makers across the organization.
