The cloud offers companies and individuals access to vast amounts of computing power at economies of scale made possible only by distributed architectures. However, those same distributed architectures can bring a unique set of risks and legal challenges to companies due to the geographic distribution of the cloud infrastructure. The cloud, like the Internet, allows data to flow freely across national borders and facilitates storage and processing in data centers worldwide. Determining what laws apply to cloud computing environments is an ongoing challenge. The cloud service provider (CSP) can be based in one country, operate data centers across multiple countries, and serve customers in even more countries. In this situation, overlapping legal requirements introduce risk, and using a third-party CSP can introduce additional risks.
Legal and compliance requirements are more complex for cloud computing than they were for traditional on-premises information systems. With data and compute power spread across countries and continents, international disputes have dramatically increased. Various countries and regions have taken differing approaches to governing data privacy, intellectual property protection, and law enforcement methods. These types of disputes existed before cloud computing, but the transborder data flow and processing enabled by the cloud emerged before legal frameworks were written to deal with these scenarios. To prepare for these challenges, a cloud security professional must be aware of the legal requirements and unique risks presented by cloud computing architectures.
Using distributed cloud services provides benefits for redundancy and data integrity and can even offer performance benefits by moving information systems closer to the end users. This can, however, lead to physical infrastructure, business operations, and customers that are all governed by completely separate and sometimes conflicting laws.
For example, the European Union (EU) is governed by a wide-ranging data privacy law known as the General Data Protection Regulation (GDPR). A Brazilian company that handles or stores data of EU citizens is obligated to comply, even though it is not based in the EU. Further complicating matters, each EU member state has its own privacy laws; although they align with GDPR, there can be subtle variations, such as timeframes and procedures for reporting security breaches to the member state data protection authority.
Although cloud security practitioners are not expected to be legal professionals as well, it is important to be aware of the various laws and regulations that govern cloud computing. Laws can introduce risks to a business, such as fines, penalties, or even a loss of the ability to do business in a certain place. It is important to identify such risks and make recommendations to mitigate them just like any other risk.
As an example, GDPR forbids the transfer of data to countries that lack adequate privacy protections; a mitigation to avoid a GDPR fine might involve building an application instance in an EU cloud region and preventing transfer of the data outside the EU. However, some countries require companies operating in that country to respond to law enforcement actions, such as a warrant to turn over data. In this situation, the GDPR restriction on data transfers might be violated in order to comply with another country's legal requirements.
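The residency mitigation described above can be enforced programmatically before any infrastructure is provisioned. The sketch below is a minimal, hypothetical policy check; the region names and classification labels are illustrative and not tied to any real CSP's naming scheme:

```python
# Hypothetical residency policy check: block deployments that would place
# regulated EU personal data outside approved EU regions.
EU_APPROVED_REGIONS = {"eu-west-1", "eu-central-1", "eu-north-1"}  # example names

def validate_residency(data_classification: str, target_region: str) -> bool:
    """Return True if storing data of this classification in target_region
    complies with the (simplified) residency policy."""
    if data_classification == "eu-personal-data":
        return target_region in EU_APPROVED_REGIONS
    return True  # unregulated data may be stored anywhere under this policy

assert validate_residency("eu-personal-data", "eu-west-1")
assert not validate_residency("eu-personal-data", "us-east-1")
```

A check like this could run in a deployment pipeline, rejecting a configuration that would replicate regulated data into a non-approved jurisdiction before the transfer ever occurs.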
Because of the international nature of cloud offerings and customers, cloud practitioners must be aware of multiple sets of laws and regulations and the risks introduced by conflicting legislation across jurisdictions. These conflicts may include the following:
Craig Mundie, the former chief of Microsoft's research and strategy divisions, explained it in these terms:
People still talk about the geopolitics of oil. But now we have to talk about the geopolitics of technology. Technology is creating a new type of interaction of a geopolitical scale and importance…. We are trying to retrofit a governance structure which was derived from geographic borders. But we live in a borderless world.
In simple terms, a cloud security practitioner must be familiar with a number of legal arenas when evaluating risks associated with a cloud computing environment. This does not mean, however, that they must be legal experts. As with many aspects of security, legal compliance requires collaboration; in this case, legal counsel should be part of the evaluation of any cloud-specific risks, legal requests, and the company's response to these.
The cloud offers computing capabilities that were unheard of a decade ago. CSPs can offer content delivery options to host data within a few hundred miles of almost any human being on Earth, offer novel architectures like microservices, and make computing power cheaper than it has ever been. Customers are not limited by political borders when accessing services from cloud providers, but this flexibility introduces a new set of risks. Legal, regulatory, and compliance risks in the cloud can be significant for certain types of data or industries.
Storing or processing data in multiple countries introduces legal and regulatory challenges. Cloud computing customers may be impacted by one or more of the following:
Cloud security practitioners should be aware of the legal frameworks that affect the cloud computing environments. The following frameworks are the products of multinational organizations working together to identify key priorities in the security of information systems and data privacy.
The Organisation for Economic Co-operation and Development (OECD) publishes widely referenced privacy and security guidelines (see www.oecd.org/sti/ieconomy/privacy-guidelines.htm). The OECD guidelines are echoed in European privacy law in many instances. The basic principles of privacy in the OECD guidelines include the following:
In addition to the basic principles of privacy, there are two overarching themes reflected in the OECD guidelines: first, a focus on using risk management to approach privacy protection, and second, the concept that privacy has a global dimension that must be addressed by international cooperation and interoperability. The OECD council adopted guidelines in September 2015, which provide guidance in the following areas:
The Asia-Pacific Economic Cooperation (APEC) is an intergovernmental forum consisting of 21 member economies in the Pacific Rim, and it publishes the APEC Privacy Framework. The full framework text is available at apec.org/Publications/2017/08/APEC-Privacy-Framework-(2015). The goal of this framework is to promote a consistent approach to information privacy protection. The framework is based on nine principles.
The EU GDPR is perhaps the most far-reaching and comprehensive set of laws ever written to protect data privacy. Full details can be found at gdpr.eu/what-is-gdpr. Within the EU, the GDPR mandates privacy for individuals, defines companies' duties to protect personal data, and prescribes punishments for companies violating these laws. GDPR fines for violating personal privacy can be massive: 20 million euros or 4 percent of global revenue, whichever is greater. For this reason alone, security practitioners must be familiar with these laws and the effects they have on any company operating within, housing data in, or doing business with citizens of the 27 member states. GDPR came into force in May 2018 and incorporated many of the principles of the previous EU Data Protection Directive.
GDPR formally defines many roles related to privacy and security, such as the data subject, controller, and processor. The data subject is defined as an “identified or identifiable natural person,” or, more simply, a person. There is a subtle distinction between a data controller and a data processor, which recognizes that not all organizations involved in the use and processing of personal data have the same degree of responsibility.
In cloud environments, the data controller is often the cloud customer, while the data processor is the CSP. The cloud customer provides services to their customers and utilizes the CSP's infrastructure to process the data. Both the controller and processor have responsibilities to ensure that privacy data is adequately protected, but the data controller retains most legal liability. If the CSP suffers a data breach, the data controller is likely to be fined if they cannot prove they took adequate steps to mitigate the risk associated with a CSP breach. This drives security decisions such as encrypting all data stored in the cloud, which reduces the impact of unauthorized access.
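The encryption decision mentioned above can be illustrated with client-side encryption, where the controller holds the key and the CSP stores only ciphertext, so a breach of the provider's storage does not expose readable personal data. This is a minimal sketch using the third-party Python `cryptography` package; in practice the key would be held in a customer-managed KMS or HSM rather than generated inline:

```python
from cryptography.fernet import Fernet  # third-party "cryptography" package

# The controller keeps the key; the CSP stores only ciphertext.
key = Fernet.generate_key()          # in production: a customer-managed KMS/HSM key
f = Fernet(key)

record = b'{"subject_id": "12345", "detail": "example personal data"}'
ciphertext = f.encrypt(record)       # what actually gets uploaded to the CSP

assert f.decrypt(ciphertext) == record   # controller can still recover the data
assert record not in ciphertext          # plaintext is not visible in the stored object
```

The design choice here is that the processor never holds both the key and the data, which is what limits the controller's liability exposure if the processor is breached.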
The GDPR is a massive set of regulations that covers almost 90 pages of details and requires significant effort to achieve compliance. Security practitioners must be familiar with the broad areas of the law if it applies to their organization, but ultimately organizations should consult an attorney to identify requirements. Legal counsel or an outside auditor may also be needed to ensure that operations are compliant. GDPR encompasses the following main areas:
Cloud services have added complexity to well-established legal frameworks that were put into place before cloud computing was developed. The CSP is a vital third party to many organizations, but one that introduces complexity to existing security and privacy risk mitigation. As a service provider, legal liability usually does not transfer to CSPs; instead, the cloud consumer is responsible for evaluating if a service provider offers adequate security controls. These evaluations are often done in light of laws or regulations that govern data security, such as the following:
The cloud represents a dynamic and changing environment, and monitoring/reviewing legal requirements is essential to staying compliant. All contractual obligations and acceptance of requirements by contractors, partners, legal teams, and third parties should be subject to periodic review. When it comes to compliance, words have very specific meanings, which impact how security programs must be defined and implemented. For example, many security practitioners treat law and regulation as interchangeable terms. However, there are very different consequences, and therefore very different risks, associated with noncompliance.
Understanding the compliance requirements that your organization is subject to is vital for a security practitioner. The outcome of noncompliance can vary greatly, including imprisonment, fines, litigation, loss of a contract, or a combination of these. To understand this landscape, it is helpful to recognize the difference between data protections required by laws, regulations, or contractual obligations:
It is vital to consider the legal and contractual issues that apply to how a company collects, stores, processes, and ultimately deletes data. Few companies or entities can ignore the need for compliance with national and international laws, since noncompliance can result in loss of the ability to do business. Perhaps not surprisingly, company officers have a vested interest in complying with these laws. Laws and regulations are specific in who is responsible for the protection of information, and many laws identify penalties for individuals who knowingly support noncompliance. Federal laws like HIPAA spell out that senior officers within a company are responsible for (and liable for) the protection of data. International regulations like GDPR and state regulations such as NYCRR 500 identify the role of a data protection officer or chief information security officer and outline their culpability for negligence that leads to a data breach.
As a cloud consumer, security practitioners are responsible for identifying all legal, regulatory, and contractual obligations, and ensuring that the CSP is able to meet those requirements. If not, using the CSP's services or infrastructure introduces significant risks. No matter who is hosting data or services, the data controller (usually the cloud customer) is ultimately accountable for effective security controls, privacy protections, and compliance with legal, regulatory, and compliance obligations.
When a crime is committed, law enforcement or other agencies may perform eDiscovery using forensic practices to gather evidence, which can be used to prosecute the guilty parties. eDiscovery is defined as any process in which electronic data is pursued, located, secured, and searched with the intent of using it as evidence in a civil or criminal legal case. In a typical eDiscovery case, computing data might be reviewed offline, with the equipment powered off or viewed via a static image, or online, with the equipment powered on and accessible.
In the cloud environment, almost all eDiscovery cases will be done online due to the nature of distributed computing and the difficulty in taking those systems offline, though some offline analysis is also possible. Virtual machine (VM) images or snapshots may be analyzed without powering on the VM itself. Forensics, especially cloud forensics, is a highly specialized field that relies on expert technicians to perform the detailed investigations required for discovery while not compromising the potential evidence and chain of custody. Not all security practitioners will possess this set of skills, so it is important to identify where in-house resources can perform certain actions and where highly trained external resources are required. This may involve the use of dedicated digital forensics and incident response (DFIR) personnel or law enforcement.
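One proactive chain-of-custody practice that applies directly to snapshot-based analysis is cryptographically hashing evidence at acquisition time, so any later alteration can be detected by re-hashing. This is an illustrative sketch of the idea, not a complete forensic procedure; the snapshot bytes are a stand-in for a real image file:

```python
import hashlib

def evidence_fingerprint(image_bytes: bytes) -> str:
    """Return a SHA-256 hex digest of an acquired evidence image."""
    return hashlib.sha256(image_bytes).hexdigest()

# Stand-in for the raw bytes of an acquired VM snapshot or disk image.
snapshot = b"...raw VM disk image bytes..."
original = evidence_fingerprint(snapshot)      # recorded at acquisition time

assert evidence_fingerprint(snapshot) == original          # untouched evidence verifies
assert evidence_fingerprint(snapshot + b"x") != original   # any tampering changes the hash
```

Recording the digest (and who computed it, and when) alongside the evidence is what lets a later reviewer confirm the image analyzed is the image acquired.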
Cloud computing's essential characteristics add significant complexity to eDiscovery. A SaaS app in the cloud distributed across 100 countries is exponentially more complex to investigate than a simple server cluster in a traditional data center. An organization investigating an incident may lack the ability to compel the CSP to turn over vital information needed to investigate, or the information may be housed in a country where jurisdictional issues make the data more difficult to access. Even if information is available, it may not be sufficient to support the investigation, and maintaining a chain of custody is more difficult since there are more entities involved in the process.
When considering a cloud vendor, eDiscovery should be considered as a security requirement during the selection and contract negotiation phases. Once a CSP is chosen, it's important to proactively gather information that might be relevant in an investigation or discovery situation. This includes contact information, escalation procedures, and any relevant stakeholders to such a process. This type of information may logically fit into an incident response plan or similar documentation.
Data residency and system architecture are other important considerations for eDiscovery in the cloud and can be handled proactively when designing or deploying a system or business process. For example, if a platform is likely to receive law enforcement requests to hand over data, that data should not be stored in a country that restricts the organization's ability to honor such a request. Distributed cloud services can reduce availability risks by providing global replication and failover abilities but can introduce additional legal risks due to competing laws and jurisdictions.
Cloud security practitioners must inform their organizations of any risks and required due care and due diligence related to the use of cloud computing. This may involve working with legal counsel to identify requirements, documenting the steps needed and steps taken to meet those requirements, and also performing oversight functions like audits and assessments to measure compliance. This can make the process of eDiscovery easier by ensuring that the organization is prepared in the event of a discovery process, rather than finding itself unable to conduct necessary activities to investigate.
DFIR practitioners working in a cloud environment can utilize a variety of frameworks and tools to conduct investigations. Some of these include proactive practices similar to traditional on-premises investigations, such as identifying vital data, logging it, and centralizing it in a platform where analysis can take place. CSPs may not preserve essential data for the required period of time to support historical investigations or may not even log data relevant to support an investigation. This shifts the burden of recording and preserving potential evidence onto the consumers, who must identify and implement their own data collection. For example, the CSP is unlikely to log application-level data related to microservices usage, such as whether the user-requested action completed successfully or was terminated. The CSP is concerned only with whether the microservice was available and the length of time it was used, which is critical for CSP billing.
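The logging gap described above can be closed by the consumer emitting its own application-level audit events and shipping them to a central analysis platform. The following is a hedged, minimal sketch; the field names are hypothetical and the SIEM/log-pipeline destination is left as a comment:

```python
import json
import logging
import sys
from datetime import datetime, timezone

# The CSP records only availability and usage duration for billing; whether a
# user-requested action completed or was terminated must be logged by the
# cloud consumer itself.
audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.StreamHandler(sys.stdout))

def log_user_action(user: str, action: str, outcome: str) -> str:
    """Emit a structured audit event and return it as a JSON string."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "outcome": outcome,  # "completed" or "terminated" -- detail the CSP won't record
    }
    line = json.dumps(event)
    audit.info(line)         # in production, forward to a SIEM or log pipeline
    return line

entry = json.loads(log_user_action("alice", "export-report", "completed"))
assert entry["outcome"] == "completed"
```

Centralizing events like these in advance is what makes a later historical investigation possible at all, since the CSP's own retention will not cover them.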
Frameworks designed to assist with planning for eDiscovery include some cloud-specific guidance, as well as more general guidance for any information system. These include the following:
downloads.cloudsecurityalliance.org/initiatives/imf/Mapping-the-Forensic-Standard-ISO-IEC-27037-to-Cloud-Computing.pdf

csrc.nist.gov/publications/detail/nistir/8006/final

Digital forensics and eDiscovery requirements for many legal controls are greatly complicated by the cloud environment. Unlike on-premises systems, it can be difficult or impossible to perform physical search and seizure of cloud resources such as storage or hard drives. ISO/IEC and CSA provide guidance to cloud security practitioners on best practices for collecting digital evidence and conducting forensics investigations in cloud environments.
As discussed, DFIR is a highly specialized field within security and should be performed only by qualified personnel. Untrained or unskilled personnel performing forensics run the risk of destroying or altering evidence, which renders it useless for investigation and prosecution. All security practitioners should be familiar with the following standards, even if they do not specialize in forensics:
The Internet age has brought with it an unprecedented amount of information flow. Everything from financial transactions to academic research to cat videos can be shared around the world. However, not all of this data is equally valuable. Some categories of information can be used to cause real-world harm if they fall into the wrong hands. Private information is of particular importance, because in many cases tampering with or stealing this data can have serious consequences for the subject of the information. These consequences may include identity theft, discrimination, or even death in cases where unpopular or illegal speech or viewpoints are involved.
Privacy is defined as the state of being free from observation by others, and it is often discussed alongside security. The two fields are not the same, however. Privacy is often codified into laws and regulations as an individual's right, which organizations must uphold when they collect, store, or process the individual's information. Security practitioners often implement security controls as part of privacy compliance, such as encryption to reduce the impact of a data breach or incident response procedures that include mandatory reporting to victims of a data breach.
Data that is considered private is also often useful for identifying an individual, meaning the data can be associated with a single person or entity. This can lead to situations where a single data point in a larger data set makes the difference between private data, which needs protection, and sensitive data, which might warrant some protection but does not carry the risk of fines or legal penalties. Information about a medical procedure is not, on its own, tied to a particular person, but if the patient's insurance number is associated with the procedure records, the data can now be used to uniquely identify an individual and their health condition. In many jurisdictions that information is regulated by privacy laws.
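The re-identification risk described above can be shown with a toy join: neither record set identifies a health condition by name on its own, but one shared field links them. All values here are fabricated for illustration:

```python
# Procedure records carry no name; member records carry no health data.
procedures = [{"insurance_no": "INS-001", "procedure": "cardiac stent"}]
members = [{"insurance_no": "INS-001", "name": "J. Doe"}]

# Joining on the single shared field produces a named person tied to a
# health condition -- data that now falls under privacy regulation.
linked = [
    {**m, **p}
    for m in members
    for p in procedures
    if m["insurance_no"] == p["insurance_no"]
]

assert linked[0]["name"] == "J. Doe"
assert linked[0]["procedure"] == "cardiac stent"
```

This is why quasi-identifiers matter in classification decisions: the protection a data set needs depends not only on its own fields but on what it can be joined against.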
It is important to understand what types of data an organization is processing, where it is being processed, and any associated requirements such as contractual obligations. In any cloud computing environment, the legal responsibility for data privacy and protection rests with the cloud consumer, who may enlist the services of a cloud service provider (CSP) to gather, store, and process that data. The data controller is always responsible for ensuring that the requirements for protection and compliance are met, whether the data is processed in on-premises systems or a cloud solution. When third parties like a CSP are involved, it is important to ensure that contracts with the CSP stipulate data privacy, security, and protection requirements.
There are a number of terms that describe data that requires protection, and the specific types of data may have unique protection requirements. It is essential to understand what type of data is being handled, such as personally identifiable information (PII). This is a widely recognized classification of data that is almost universally regulated. PII is defined by the NIST standard 800-122 as follows:
“any information about an individual maintained by an agency, including (1) any information that can be used to distinguish or trace an individual's identity, such as name, social security number, date and place of birth, mother's maiden name, or biometric records; and (2) any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information.”
While NIST is a U.S. body, the definition of PII is similar to other standards and frameworks, such as GDPR. Different frameworks explicitly identify certain data points that may be unique, but in general any information that can be used to uniquely identify an individual is considered PII. Security practitioners should be aware of the types of data their organization handles and associated regulatory or contractual obligations.
Protected health information (PHI) is a U.S.-specific subset of PII and is codified under HIPAA. Data that relates to a patient's health, treatment, or billing for medical services that could identify a patient is PHI. When this data is electronically stored, it must be adequately secured by controls such as unique user accounts for every user, strong passwords and MFA, least privilege-based access controls, and auditing all access and changes to a patient's PHI data.
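Two of those controls, least-privilege role checks and auditing of every access attempt, can be sketched together. The roles and permission names below are invented for illustration; a real HIPAA implementation involves far more than this:

```python
# Hypothetical least-privilege model for electronic PHI: each unique user
# account maps to a role granted only the permissions it needs, and every
# access attempt (allowed or denied) is recorded for audit.
ROLE_PERMISSIONS = {
    "physician": {"read_phi", "update_phi"},
    "billing_clerk": {"read_billing"},
}
access_log = []  # audit trail of all access attempts

def can_access(user_role: str, permission: str, user: str) -> bool:
    """Check a permission against the role and log the attempt either way."""
    allowed = permission in ROLE_PERMISSIONS.get(user_role, set())
    access_log.append({"user": user, "permission": permission, "allowed": allowed})
    return allowed

assert can_access("physician", "read_phi", "dr_smith")
assert not can_access("billing_clerk", "read_phi", "pat_jones")
assert len(access_log) == 2  # denied attempts are audited too
```

Logging the denied attempt is as important as logging the granted one, since audit requirements cover changes and access attempts alike.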
Table 6.1 summarizes types of private data and how they differ.
TABLE 6.1 Types of Private Data
| Data Type | Definition |
|---|---|
| Personally identifiable information (PII) | Information that, when used alone or with other relevant data, can identify an individual. PII may contain direct identifiers, such as a name or address, that can identify a person uniquely. It may also include indirect or quasi-identifiers, such as place of birth or date of birth, which could be combined with other quasi-identifiers to successfully recognize an individual. PII is legally defined in and regulated by numerous privacy laws, and additional PII data elements may be defined in contracts along with required protections. |
| Protected or personal health information (PHI) | PHI includes medical histories, test and laboratory results, mental health conditions, insurance information, and other data that a healthcare professional collects to identify an individual and determine appropriate care. In the United States, PHI is explicitly identified in and regulated by HIPAA. Covered entities, which store or handle PHI, are defined in the law, and any third parties they do business with are also required to safeguard the information. These requirements are passed on to the third parties via business associate agreements (BAAs), which are contractual requirements for handling any PHI that is shared. |
| Payment data (PCI DSS) | Bank and credit card data used by payment processors to conduct point-of-sale transactions. Payment data is governed by contracts with payment card issuers such as Mastercard and Visa. To accept and process payment card transactions, merchants must agree to implement the required protections specified in PCI DSS. |
The biggest differentiator between contractual and regulated data is that the requirements to protect regulated data flow from legal and statutory requirements. Both PII and PHI data are subject to regulation, and the disclosure, loss, or altering of these data can subject a company (and individuals) to statutory penalties including fines and imprisonment. Organizations can be fined for mishandling or failing to protect private data under laws like HIPAA or GDPR, and in some cases individuals may also be held accountable if they are found to be negligent.
Regulations are put into place by governments and government-empowered agencies to protect entities and individuals from risks. In addition, they force providers and processors to take appropriate measures to ensure that protections are in place while identifying penalties for lapses in procedures and processes. In some industries, the regulators also perform audit and oversight functions to ensure that organizations are meeting their regulatory obligations. Security practitioners must understand their regulatory environment and work to implement the required controls and facilitate any oversight activities by regulators.
One of the major differentiators between contracted and regulated privacy data is in breach reporting. A data breach of regulated data (or unintentional loss of confidentiality of data through theft or negligence) is covered by regional and country laws around the world. As an example, in some U.S. states there are financial penalties for data breaches and no requirement to report to state regulators, while in others there are very specific timeframes for reporting. Knowing the overlapping requirements and ensuring that policies and procedures are adequate to meet those requirements will require collaboration between the security team and others in the organization, such as the legal team. The International Association of Privacy Professionals (iapp.org) publishes several guides that highlight privacy regulations and requirements across different countries.
There are several risks associated with regulated privacy data. A breach can lead to severe consequences for individuals: some information could be used by malicious parties to perform identity theft or extortion. A large dating site serving people who wanted to have extramarital affairs suffered a data breach of its user information, and individuals were blackmailed to prevent their use of the service from being publicized. Sadly, some of the affected individuals committed suicide rather than face embarrassment. From an organizational standpoint, fines and penalties are another major privacy regulation risk. As an example, the ridesharing company Uber was forced to pay $148 million in fines to U.S. state regulators for a 2016 data breach and associated cover-up of the incident.
Contractual obligations to safeguard data can be used as a method to enforce data safeguards throughout a supply chain, such as with a HIPAA BAA for subprocessors. This method can also be used to provide safeguards for data that does not have a legal or regulatory need for protection but is nonetheless valuable. Examples include business confidential information, intellectual property, and other nonpublic information that an organization creates or uses.
Contracts are used to provide governance for relationships with third parties, such as vendors, service providers, and business partners. They are relevant to security professionals, because they provide a way to communicate the requirements for handling this data and also provide some risk mitigation if a breach occurs. Contracts are similar to nondisclosure agreements, which identify the types of data being shared, required safeguards, and legal measures that may be pursued if either party fails to meet the specified agreement. A contractual obligation can be a proactive mitigation, similar to security policies that define approved data handling activities, as well as a reactive mitigation. If either party breaches the contract, methods such as the right to pursue legal action are specified in the contract.
The major difference between contractual and regulated private data is the level of control the organization can exert. Regulatory frameworks often specify the exact controls that must be in place, while organizations are free to write and negotiate their own contracts. As with other areas, this is a critical point of collaboration between the security team and other departments, such as legal or contract management.
Outsourcing to a CSP does not transfer risk away from the cloud consumer, as they remain the data owner and must ensure that it is adequately protected. Key elements of contracts to enforce security should be defined based on the data owner's responsibilities and include the following:
An individual's right to privacy, and therefore their right to have their data handled in a secure and confidential way, varies widely by country and culture. The global nature of the cloud and many services, such as social media apps, means that organizations face a much broader range of privacy compliance obligations. As a result, security practitioners need to be aware of the broad range of statutory and regulatory obligations around data privacy. These are based on the citizenship of a data subject, rather than the location of the organization's operations, and govern many aspects such as the jurisdiction for any legal proceedings.
There are many different attitudes and expectations of privacy in countries around the world. In some more authoritarian regimes, the data privacy rights of the individual are almost nonexistent. In other societies, data privacy is considered a fundamental right. Security practitioners handling privacy data should be aware of these requirements and seek qualified legal opinion on how these requirements govern their organization's operations.
There are hundreds of country- and region-specific privacy laws and regulations, and an exhaustive analysis is outside the scope of this reference guide. The first task that a security practitioner should perform when addressing privacy is identifying all relevant laws and regulations that govern the data their organization handles. Legal firms that specialize in international privacy laws exist, and should be engaged as needed to provide the necessary guidance.
The 27-member EU has one of the most robust privacy frameworks in the world. The right to personal and data privacy is strictly regulated and actively enforced in Europe, and it is enshrined into European law in many ways. In Article 8 of the European Convention on Human Rights (ECHR), a person has a right to a “private and family life, his home and his correspondence,” with some exceptions. Some additional types of private data under the GDPR include information such as race or ethnic origin, political affiliations or opinions, religious or philosophical beliefs, and information regarding a person's sex life or sexual orientation.
In the European Union, PII covers both facts and opinions about an individual. Individuals are guaranteed certain privacy rights as data subjects. Significant areas of Chapter 3 of the GDPR (see gdpr.eu/tag/chapter-3) include the following data subject privacy rights:
Australian privacy law was originally published in 1988, with a revision in 2014 and republication with minor updates in 2021. It provides a solid foundation of privacy rights similar to GDPR, and the updates were designed to address evolving privacy rights driven by GDPR as well as issues associated with international data transfers and cloud computing. The foundational principles can be found at oaic.gov.au/privacy/australian-privacy-principles, while the full Privacy Act can be found at legislation.act.gov.au/a/2014-24/default.asp.
Under the Australian Privacy Act, organizations may process data belonging to Australian citizens offshore, but the transferring entity (the data owner) must ensure that the receiver of the data holds and processes it in accordance with the principles of Australian privacy law. As discussed in the previous section, this is commonly achieved through contracts that require recipients to maintain or exceed the data owner's privacy standards. An important consequence under Australian privacy law is that the entity transferring the data out of Australia remains responsible for any data breaches by or on behalf of the recipient entities, meaning significant potential liability for any company doing business in Australia under current rules.
Data privacy laws in the United States generally date back to fair information practice guidelines that were developed by the precursor to the Department of Health & Human Services (HHS). (See Ware, Willis H. (1973, August). “Records, Computers and the Rights of Citizens,” Rand. Retrieved from www.rand.org/content/dam/rand/pubs/papers/2008/P5077.pdf
.) These principles include the following concepts:
Perhaps the defining feature of U.S. data privacy law is its fragmentation. There is no overarching law regulating data protection in the United States. In fact, the word privacy is not included in the U.S. Constitution. However, there are now data privacy laws in each of the 50 states as well as U.S. territories.
There are few restrictions on the transfer of PII or PHI out of the United States, a fact that makes it relatively easy for companies to engage cloud providers and store data in other countries. The Federal Trade Commission (FTC) and other regulatory bodies do hold companies accountable to U.S. laws and regulations for data after it leaves the physical jurisdiction of the United States. U.S.-regulated companies are liable for the following:
Several important international agreements and U.S. federal statutes deal with PII. The Privacy Shield agreement is a framework that regulates the transatlantic movement of PII for commercial purposes between the United States and the European Union. Federal laws worth review include HIPAA, GLBA, SOX, and the Stored Communications Act, all of which impact how the United States regulates privacy and data. At the state level, it is worth reviewing the California Consumer Privacy Act (CCPA), the strongest state privacy law in the nation.
After multiple requests for electronic evidence led to lengthy court battles over jurisdiction, the United States passed the CLOUD Act, which provides a framework for bilateral agreements between countries in support of law enforcement requests for access to data. In one well-known case, Microsoft received a legal request to hand over data stored in European data centers but, due to privacy regulation in the EU, was unable to honor the request.
Under the CLOUD Act, U.S. law enforcement agencies and any counterparts in a corresponding country with an agreement in place can issue requests for data. CSPs may honor these requests without fear of violating privacy regulations. This solves some of the problems that cloud computing creates by allowing easy flow of data across national borders. The full text of the law and supporting resources can be found here: justice.gov/dag/cloudact
.
Unlike the GDPR, which is a set of regulations that affect companies doing business in the EU or with citizens of the EU, Privacy Shield is an international agreement between the United States and the European Union that allows the transfer of personal data from the European Economic Area (EEA) to the United States by U.S.-based companies. Organizations can pursue a certification of their privacy practices under the framework, which enables them to demonstrate sufficient protections to allow for data processing in the United States.
This agreement replaced the previous Safe Harbor framework, which was invalidated by the European Court of Justice in October 2015. The Privacy Shield agreement faced ongoing legal challenges in EU courts due to the absence of federal-level privacy laws in the United States governing data belonging to non-U.S. citizens, and the agreement was itself struck down in 2020 by the Schrems II decision. Organizations with existing obligations may continue to operate under the framework's principles but should seek legal guidance to stay informed of the changing requirements.
Adherence to Privacy Shield does not make U.S. companies GDPR-compliant, but it allows the company to transfer personal data out of the EEA into infrastructure hosted in the United States. Under Privacy Shield, organizations self-certify to the U.S. Department of Commerce and publicly commit to comply with the seven principles of the agreement. Those seven principles are as follows:
The HIPAA legislation of 1996 defined what constitutes protected health information (PHI), mandated national standards for electronic health record keeping, and established national identifiers for providers, insurers, and employers. Under HIPAA, PHI may be stored by cloud service providers provided that the data is adequately protected. HIPAA contains separate rules for privacy, security, and breach notification, as well as specifications for how these requirements flow down to third parties. HIPAA-covered entities are those organizations that collect or generate PHI, while their third parties are known as business associates and must enter into a formal agreement that defines their obligations for safeguarding PHI.
This U.S. federal law requires financial institutions to explain how they share and protect their customers' private information. GLBA is widely considered one of the most robust federal information privacy and security laws, but it is very narrowly targeted to financial services firms. This act consists of three main sections.
The act also requires financial institutions to give customers written privacy notices that explain their information-sharing practices. GLBA explicitly identifies security measures such as access controls, encryption, segmentation of duties, monitoring, training, and testing of security controls.
The Stored Communications Act (SCA), enacted as Title II of the Electronic Communications Privacy Act, created privacy protections for electronic communications such as email or other digital communications stored on the Internet. In many ways, this act extends the Fourth Amendment of the U.S. Constitution—the people's right to be “secure in their persons, houses, papers, and effects, against unreasonable searches and seizures”—to the electronic landscape. It specifies that private data is protected from unauthorized access or interception, whether by private parties or the government.
The United States is made up of at least 51 smaller governments: one for each of the 50 states plus Washington, D.C., which governs some of its own affairs. The federal government provides services and legislation for affairs between states and on issues that are not state-specific, such as regulating PHI. All states have enacted some privacy legislation, and in the wake of GDPR some states have begun to implement privacy laws with similar requirements.
As with the multitude of international privacy laws, competent legal counsel should be engaged to identify any and all state-specific security and privacy requirements that must be met. States like California, with the California Consumer Privacy Act (CCPA), and New York's SHIELD Act provide a robust set of privacy rights, protections, and defined terms.
One major difference among the various state-level laws is the means of recourse available to data subjects. Some states provide what is known as a private right of action, meaning an organization can be sued by an individual who feels the company violated their privacy rights. In states that provide a collective right of action instead, only a designated entity, such as the attorney general or another designated office within the state government, can pursue legal action against an organization for violating privacy laws; individual data subjects cannot sue the organization directly. Risks associated with legal action should be assessed carefully. A single individual lawsuit is unlikely to have a large impact, and a state government with relatively lax enforcement is also unlikely to pose a significant risk. However, a state with strict enforcement is much more likely to pursue action, raising both the likelihood and potentially the impact of any legal action.
Cloud computing resources enable global placement of infrastructure for processing and storing data, which brings with it challenges related to complying with overlapping, and often conflicting, privacy laws. Different laws and regulations may apply depending on the location of the data subject, the data collector, the cloud service provider, subcontractors processing data, and the company headquarters of any of the entities involved. Security practitioners must be aware of these challenges and ensure that their risk assessments adequately capture these risks. Mitigation activities should be implemented to ensure compliance, and consultation with legal professionals during the construction of any cloud-based services is essential.
Legal concerns can prevent the use of a cloud services provider, add cost and time to market, and drive changes to the technical architectures required to deliver services. Nevertheless, compliance must never be sacrificed for convenience when evaluating services, as doing so increases risk. In 2020, the video conferencing service Zoom was found to be routing video calls through servers in China even when no call participants were based there. This revelation caused an uproar throughout the user community and led many customers to abandon the platform out of privacy concerns: see theguardian.com/uk-news/2020/apr/24/uk-government-told-not-to-use-zoom-because-of-china-fears
. In this case, the impact was mainly reputational: customers abandoned the platform out of concern that their data would not enjoy the same privacy protections within China as in other nations. While lost business can be hard to quantify, many privacy frameworks also impose fines or other regulatory action for noncompliance.
With so many concerns and potential harms from privacy violations, entrusting data to a CSP can be daunting. Fortunately, there are industry standards that address the privacy aspects of cloud computing for customers. International organizations such as ISO/IEC have codified privacy controls for the cloud. Adherence to the privacy requirements outlined in ISO 27018 enables cloud customers to trust their providers.
ISO 27018 was published in July 2014 as part of the ISO 27000 family of standards and was most recently updated in 2019. Security practitioners can use certification of ISO 27000 compliance as assurance of adherence to key privacy principles, and CSPs can publish details of their ISO certification to provide assurance to their customers. Major cloud service providers such as Microsoft, Google, and Amazon maintain ISO 27000 compliance, which includes the following key principles:
Privacy and security concerns can generate conflict when monitoring is used to inspect network traffic or system usage. Organizations may have a legitimate need to observe what their users are doing, for example to identify users who violate policy by visiting inappropriate websites or sending protected data outside the organization's control. Monitoring tools can be useful, but the privacy rights of the users may conflict with this monitoring. In some jurisdictions, providing notice that a system is monitored is sufficient, while in others it is illegal to perform monitoring without a specific, documented reason. It is important to ensure that the monitoring strategy does not create a breach of privacy protections the users are entitled to.
Generally Accepted Privacy Principles (GAPP) is a framework of privacy principles originally published by a task force of professional accountants in the United States and Canada. It is now widely incorporated into the SOC 2 framework as an optional criterion, meaning organizations that pursue a SOC 2 audit can include their privacy controls if appropriate based on the type of services they provide. Similar to ISO 27018, which is an optional extension of the controls defined in ISO 27002, the privacy criteria in SOC 2 provide objectives, which can be met by an organization's security controls. An audit of these controls results in a report that can be shared with customers or potential customers, who can use it to assess a service provider's ability to protect sensitive data.
GAPP is a set of standards for the appropriate protection and management of personal data. There are 10 main privacy principles, grouped into the following categories:
GDPR codifies specific roles, such as a data controller and data subject, as well as rights and responsibilities for each role. Specifically, the rights of the data subject are enumerated and must be met by any data collector or processor. These rights are outlined in Chapter 3 of the GDPR (“Rights of the Data Subject”) and consist of 12 articles detailing those rights:
The complete language for the GDPR data subject rights can be found at gdpr.eu/tag/chapter-3
.
Assessing the impact of systems and business processes is a familiar task; a business impact assessment (BIA) is a crucial element of continuity and resilience planning. Similarly, a privacy impact assessment (PIA) is designed to identify the personal data being collected, processed, or stored by a system and to assess the effects that a breach of that data might have. Several privacy laws, including GDPR and HIPAA, explicitly require PIAs as a planning tool for identifying and implementing required privacy controls.
Conducting a PIA typically begins when a system or process is being evaluated, though evolving privacy regulation often necessitates assessment of existing systems. The first step is to define a scope of the PIA, such as a single system or an organizational unit. Once the scope is defined, the types of data collected and data flow throughout the target system must be documented. These are critical for the next phase, which is analysis, since the types of data often dictate the required protections.
For example, a system that handles sensitive personal data like health records or financial transactions is regulated by privacy legislation that mandates specific security controls. The culmination of the PIA process is a documented impact assessment detailing the information in use, consequences of a breach or mishandling of the data, and required controls. These may include identifying a data or system owner and assigning them responsibility for ensuring that required controls are implemented, choosing technologies that offer required security capabilities, and architecting systems to meet the requirements. From a cloud security perspective, this may drive decisions about which CSP to use, which specific services can or cannot be used, and even whether the proposed system is appropriate for cloud hosting at all.
Methods of gathering information when conducting the PIA can include questionnaires and interviews with relevant staff, such as system architects, administrators, or even project leaders. Diagrams of systems, networks, or data flows can be created and are a useful tool when defining what data is being handled and where it exists during different lifecycle phases. This dictates the type and manner of controls implemented—for example, a system that does not archive data will not need any controls in place for data retention. Some regulatory frameworks mandate retention, however, so understanding if the data in question is regulated by one of these frameworks is an essential part of the analysis.
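The inventory-and-analysis steps described above can be sketched in code. This is a minimal illustration only: the data categories, regulation names, and control identifiers below are invented assumptions, not a standard taxonomy.

```python
# Hypothetical PIA data inventory sketch. Categories, regulations, and
# control names are illustrative assumptions, not a standard taxonomy.
from dataclasses import dataclass

# Map data categories to the regulations and baseline controls that are
# assumed (for illustration) to apply to each category.
CATEGORY_RULES = {
    "health": {"regulations": ["HIPAA"], "controls": ["encryption-at-rest", "access-logging"]},
    "financial": {"regulations": ["GLBA"], "controls": ["encryption-at-rest", "segregation-of-duties"]},
    "eu-personal": {"regulations": ["GDPR"], "controls": ["consent-tracking", "right-to-erasure"]},
}

@dataclass
class DataElement:
    name: str
    category: str
    retained: bool  # whether the system archives this data (drives retention controls)

def assess(elements):
    """Return the regulations and baseline controls implicated by the inventory."""
    regulations, controls = set(), set()
    for e in elements:
        rule = CATEGORY_RULES.get(e.category)
        if rule:
            regulations.update(rule["regulations"])
            controls.update(rule["controls"])
        if e.retained:
            controls.add("retention-schedule")
    return sorted(regulations), sorted(controls)

inventory = [
    DataElement("patient_record", "health", retained=True),
    DataElement("session_ip", "eu-personal", retained=False),
]
regs, ctrls = assess(inventory)
print(regs)   # laws the PIA flags for legal review
print(ctrls)  # baseline controls to document in the assessment
```

A real PIA would of course rely on legal analysis rather than a lookup table, but structuring the inventory this way makes the link between data types and required protections explicit, including the point above that a system which does not retain data needs no retention controls.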
The IAPP has published guides and resources related to privacy efforts like PIAs. More details on the functioning and creation of PIA processes can be found here: iapp.org/resources/article/privacy-impact-assessment
.
The word audit can be daunting: many IT professionals have undergone audits that feel invasive, and in the context of government tax agencies an audit is never a pleasant experience. Complexity and uncertainty can make the process highly unpleasant, and rigorous, time-consuming procedures that must be followed exactly are error prone. However, audits are an essential part of verifying the compliance and effectiveness of security controls. A well-architected audit strategy, supported by security controls designed to provide the necessary information proactively, can make audits far less burdensome.
Auditing in a cloud environment presents additional challenges when compared to traditional on-premises requirements. This section will detail some of the controls, impacts, reports, and planning processes for a cloud environment and how these preparations may differ from noncloud environments. It is important for cloud security professionals to work in concert with other key areas of the business to successfully navigate the journey to and in cloud computing. Since the cloud and IT services are utilized by and affect the whole organization, it is vital to coordinate efforts with legal counsel, compliance, finance, and executive leadership.
A key element of a well-designed audit strategy is a security control framework that helps the organization map its internal controls to a variety of compliance frameworks. Auditors look for evidence of compliance, so controls that are aligned with the relevant compliance obligations make the task much easier. There are multiple control sets that can be used for this purpose, such as the CSA Cloud Controls Matrix (CCM) and the Secure Controls Framework (SCF). The frameworks can be found at cloudsecurityalliance.org/research/cloud-controls-matrix
and at securecontrolsframework.com
. Both frameworks identify key security controls and activities, as well as compliance framework mappings that show how the controls satisfy compliance objectives.
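The idea of mapping internal controls to multiple compliance frameworks can be illustrated with a small sketch. The control names and requirement IDs below are invented placeholders, not actual CCM, SCF, ISO, or SOC identifiers.

```python
# Illustrative control-to-framework mapping. All control names and
# requirement IDs here are invented placeholders for demonstration.
INTERNAL_CONTROLS = {
    "AC-01 access reviews": {"iso27001": ["A.9.2"], "soc2": ["CC6.2"]},
    "CR-01 data encryption": {"iso27001": ["A.10.1"], "soc2": ["CC6.7"]},
    "LG-01 centralized logging": {"soc2": ["CC7.2"]},
}

def coverage(framework, required_ids):
    """Report which of a framework's requirements existing controls satisfy."""
    satisfied = set()
    for mappings in INTERNAL_CONTROLS.values():
        satisfied.update(mappings.get(framework, []))
    return {
        "covered": sorted(satisfied & set(required_ids)),
        "gaps": sorted(set(required_ids) - satisfied),
    }

# One internal control set answers questions for multiple audits.
print(coverage("iso27001", ["A.9.2", "A.10.1", "A.12.4"]))
```

The design point is that a single internal control, implemented once, can produce evidence for several audits at the same time, which is exactly the value that mapped frameworks like the CCM and SCF provide.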
Audits help organizations communicate important details of their security and privacy controls, including the adequacy of control design and whether the controls are in place and achieving the desired level of risk mitigation. External auditors provide a trusted source of information that allows this information to be communicated with outside parties; CSPs can engage a third-party auditor to conduct a review and share that report with potential customers to earn new business. The auditor in this case is unbiased, so the level of trust in the report is higher than with an internal auditor, whose job security may depend on the results of the report.
Internal audit and compliance functions also play a key role in managing and assessing risk for both CSPs and cloud customers. External audits perform a vital function in evaluating controls but are typically expensive and happen relatively infrequently. An internal audit function can provide more continuous monitoring of control effectiveness and brings more inside knowledge of the organization's operations. This can uncover issues that an outsider might miss, and the more frequent review schedule allows the organization to catch and fix issues before they show up on a formal audit report.
An internal auditor acts as a “trusted advisor” as an organization takes on new risks. In general, this role works with IT to offer a proactive approach with a balance of consultative and assurance services. An internal auditor can engage with relevant stakeholders to educate the customer about cloud computing risks, such as security, privacy, contractual clarity, business continuity planning (BCP) and disaster recovery planning (DRP), compliance with legal and jurisdictional issues, etc. They fulfill this role both proactively, when projects begin, and also reactively, as they conduct audits of existing systems or processes and report on any weaknesses.
An internal audit can also mitigate risk by examining cloud architectures to provide insights into an organization's cloud governance, data classifications, identity and access management effectiveness, regulatory compliance, privacy compliance, and cyber threats. While more frequent audit schedules can create an operational burden, the rapidly evolving nature of cloud computing means that risks can change significantly in a short period of time. Waiting for the next annual external audit may allow risks to exist for much longer than desirable.
It is a best practice for an internal auditor to maintain independence from both the cloud customer and the cloud provider, even though they may be employed by one of these organizations. The auditor is not “part of the team” but rather an independent entity who can provide facts without fear of reprisal. To achieve this, most internal audit teams report to a different executive than their IT counterparts. Controls in place around the audit function typically focus on this separation of duties and minimizing potential for conflict of interest.
Security controls may also be evaluated by external auditors, and in many compliance frameworks the engagement of a third-party, unbiased auditor is required. An external auditor, by definition, is not employed by but does work on behalf of the firm being audited. This is similar to financial audits, which require an objective third-party auditor to review financial statements. External auditors are generally barred from offering advisory services due to the potential conflict of interest, so controls in place for selecting and interacting with the auditors must account for this requirement.
Other controls that should be in place for audits include the following:
The requirement to conduct audits can have a large procedural and financial impact on a company. In a cloud computing context, the types of audits required are impacted largely by a company's business sector, the types of data being collected and processed, and the variety of laws and regulations that these business activities subject a company to. In addition, customer requirements can be a significant driver, especially for organizations providing SaaS that is built on another CSP's infrastructure. While some elements of the security program are covered by the infrastructure CSP's controls and audit reports, the SaaS provider is responsible for implementing controls over their activities and providing an audit report showing how they are implemented.
Some entities operate in heavily regulated industries subject to numerous auditing requirements, such as banks or critical infrastructure providers. Others may be data processors with international customers, such as big tech companies like Apple, Facebook, Google, and Microsoft. This significantly increases the scope and complexity of the audit program, due to overlapping and sometimes conflicting requirements.
The dynamic and quickly evolving nature of cloud computing demands changes to processes associated with audits. For example, auditors must rethink some traditional methods that were used to collect evidence needed during an audit. As an example, consider the problems of data storage, virtualization, and dynamic failover.
The cloud is made possible by virtualization technologies. Abstracting the physical servers that power the cloud from the virtual servers that provide cloud services allows for the dynamic environments that make cloud computing powerful and cost-effective. Furthermore, the underlying virtualization technologies that power the cloud are changing rapidly. Even a seasoned systems administrator who has worked with VMware or Microsoft's Hyper-V may struggle with the inherent complexity of massively scalable platforms such as AWS, Google Cloud, or Azure.
Migrating from on-premises to cloud hosting fundamentally changes the practice of risk management, which presents challenges to gaining the necessary assurance that controls are in place and reducing risk to an acceptable level. An on-premises system audit can be conducted by an organization using their own personnel. That same audit is likely impossible in a cloud environment for a number of reasons. CSPs rarely allow customers to perform their own audits of the CSP's facilities, and even if an auditor could gain access, finding the specific physical hardware hosting a cloud system may be impossible. This means that assurance must come from third-party-issued reports rather than direct observation, shifting the process to more of a supply chain or vendor risk management activity.
Depending on the cloud architecture employed, a cloud security professional must perform multiple layers of auditing. Elements of both the hypervisor and the VMs themselves must be inspected to obtain assurance during the audit. It is vital for the auditor to understand the architecture that a cloud provider is using for virtualization and to ensure that both hypervisors and virtual host systems are hardened and up-to-date. Change logs are especially important in a cloud environment, both to create an audit trail and to serve as an alerting mechanism for identifying when systems may have been altered inappropriately, whether accidentally or intentionally.
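The role of change logs as an audit trail and alerting mechanism can be sketched as a simple check that flags any logged change lacking an approved change ticket. The log fields and ticket format below are assumptions for illustration, not any particular CSP's log schema.

```python
# Minimal change-log audit sketch: flag logged changes that have no
# approved change ticket. Field names and ticket IDs are assumptions.
change_log = [
    {"resource": "vm-web-01", "action": "resize", "ticket": "CHG-1001"},
    {"resource": "hypervisor-3", "action": "patch", "ticket": "CHG-1002"},
    {"resource": "vm-db-02", "action": "firewall-change", "ticket": None},
]
approved_tickets = {"CHG-1001", "CHG-1002"}

def unapproved_changes(log, approved):
    """Return log entries with no matching approved change ticket."""
    return [e for e in log if e["ticket"] not in approved]

for entry in unapproved_changes(change_log, approved_tickets):
    print(f"ALERT: {entry['action']} on {entry['resource']} has no approved ticket")
```

A check like this serves both purposes noted above: run on demand, the log is an audit trail; run continuously, the unmatched entries become alerts for potentially inappropriate changes.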
Because of the shared responsibility model, some elements of auditing will be shared by the CSP and the cloud customer. Audits of controls over the hypervisor will usually be the purview of the CSP, since they control and manage the relevant hardware. VMs deployed on top of that hardware are usually under the direct control of the cloud customer, so assurance activities must be performed by either customer personnel or their third-party auditor. This is more complicated than auditing an on-premises environment where one organization has complete control over the infrastructure. Audit standards, as discussed in the next section, have evolved to deal with this complexity by specifying which controls are owned by the audited organization, and which are inherited from another provider.
Any audit, whether internal or external, will produce a report focused either on the organization or on the organization's relationship with an outside entity or entities. In a cloud relationship, oftentimes the ownership of security controls designed to reduce risk resides with a cloud service provider. An audit of the cloud service provider can identify if there are any gaps between what is contractually specified and what controls the provider has in place.
The American Institute of CPAs (AICPA) provides a suite of audit and assurance standards that are widely used to report on controls in place at a service organization, such as a CSP. This includes standards for auditors to use when conducting audit activities, as well as specifics for report formats and details that customers can use to understand the risks associated with using a CSP's services. The various report types are detailed in Table 6.2.
TABLE 6.2 AICPA Service Organization Control (SOC) Reports
Table source: Adapted from AICPA SOC Reports
Report | Users | Concerns | Details Required |
---|---|---|---|
SOC 1 | User entities and the CPAs that audit their financial statements | Effect of the controls at the service organization on the user entities' financial statements | Systems, controls, and tests performed by the service auditor and results of tests |
SOC 2 | Broad range of users who need detailed information and assurance about controls at a service organization | Security, availability, and processing integrity of the systems the service organization uses to process users' data and the confidentiality and privacy of the information processed by these systems | Systems, controls, and tests performed by the service auditor and results of tests |
SOC 3 | Broad range of users who need information and assurance about controls but do not have the need for detailed information provided in a SOC 2 report | Security, availability, and processing integrity of the systems the service organization uses to process users' data and the confidentiality and privacy of the information processed by these systems | Referred to as a “Trust Services Report,” SOC 3 reports are general use reports and can be freely distributed, unlike SOC 2 reports, which usually require a nondisclosure agreement |
The differences between the SOC reports are as follows:
There are two types of reports for these engagements:
AICPA definitions of SOC controls can be found at the following locations:
aicpa.org/interestareas/frc/assuranceadvisoryservices/aicpasoc1report.html
aicpa.org/interestareas/frc/assuranceadvisoryservices/aicpasoc2report.html
aicpa.org/interestareas/frc/assuranceadvisoryservices/aicpasoc3report.html
The Statement on Standards for Attestation Engagements (SSAE) is a set of standards defined by the AICPA to be used when conducting audits and generating SOC reports. The most current version (SSAE 18) was made effective in May 2017 and added additional sections and controls to further enhance the content and quality of SOC reports. It is primarily used by auditors when conducting SOC audits rather than service providers or customers.
The International Auditing and Assurance Standards Board (IAASB) issues the International Standard on Assurance Engagements (ISAE). These standards are similar to the AICPA's SSAE, though there are differences between the two. A security professional should always consult the relevant business departments to determine which audit report(s) will be used when assessing cloud systems. Although SOC 2 is a standard defined by a U.S. body, it has become something of a de facto global standard; while cloud computing is global, the major CSPs are U.S.-based and implemented these standards for other large tech companies that are also U.S.-based. The ISAE 3402 standard is roughly equivalent to the SOC 1, while ISAE 3000 engagements are the international counterpart to SOC 2; the major CSPs offer audit reports under both sets of standards. As a cloud provider or customer, it is important for a security practitioner to understand the relevant types of reports they need to either consume from their CSPs or provide to their customers.
The Security Trust Assurance and Risk (STAR) certification program from CSA can be used by cloud service providers, cloud customers, or auditors and consultants to demonstrate compliance to a desired level of assurance. STAR consists of two levels of certification, which provide increasing levels of assurance to customers:
Since CSA is an industry group comprising cloud providers and major customers, it is focused specifically on cloud computing security risks and controls. A Level 1 STAR is a weak form of assurance, as an organization's self-assessment is not as rigorous as a third-party audit conducted by a trained, qualified auditor. More details on the registry and assurance requirements can be found at cloudsecurityalliance.org/star
.
Audit scope statements are an essential part of an audit report. They provide the reader with details on what was included in the audit and what was not—if the reader is using a service that was not included in the scope of the audit, then the report provides nothing useful for making a risk decision. Learning to read audit reports and extract these important details is key to gaining assurance regarding the security controls in place at a CSP or other service provider.
Determining the scope of an audit is usually a joint activity performed by the organization being audited and their auditor. Several frameworks, such as SOC 2 and ISO 27001, include guidance on defining the scope of the audit, specifying which parts of the organization and services are included. The final scope is documented by the auditor in the resulting report and should be used by any consumers when determining if the services they are evaluating have been audited.
An audit scope statement generally includes the following information:
Any audit must have parameters set to ensure that efforts are focused on relevant areas that can be effectively audited. These parameters are commonly known as audit scope restrictions. Why limit the scope of an audit? Audits are expensive endeavors that engage highly trained (and highly paid) subject-matter experts, and auditing systems can affect system performance or, in some cases, require downtime of production systems.
Large organizations with multiple service offerings may also restrict the scope of an audit to a specific service or set of services for a variety of reasons. A newly created service may not have all relevant controls implemented, so an audit is largely useless until the service is complete and controls are implemented. In other cases, it may be a deliberate decision to exclude certain services from being audited, as the cost of implementing controls and auditing to verify their effectiveness is too high relative to the revenue the service generates.
Scope restrictions are of particular importance to security professionals. They can spell out the operational components of an audit, such as the acceptable times and time periods (for example, days and hours of the week), types of testing that will be conducted, and which systems or services are to be audited. Carefully crafting scope restrictions can ensure that production systems are not adversely impacted by an auditor's activity, and it is vital to ensure that systems that customers need assurance for are included in the scope. Scoping can also be a means of controlling costs related to compliance. For example, an audit on HIPAA compliance should only include systems that handle PHI; otherwise, the auditor will charge for time spent auditing systems that should have no valid reason to be audited.
As a precursor to a formal audit process, an organization may find a gap analysis a useful starting point. Gap analyses lack the rigor of a formal audit and can serve as a quick check of compliance, which is useful for organizations preparing to undergo a formal audit for the first time. They can also be useful when assessing the impact of changes to regulatory or compliance frameworks that introduce new or modified requirements. A gap analysis identifies where the organization does not meet these changed requirements and provides important information to help remediate these gaps.
The main purpose of a gap analysis is to compare the organization's current practices against a specified framework and identify the gaps between the two. These may be performed by either internal or external parties, and the choice of which to use will be driven by the cost and need for objectivity. If a gap analysis is being performed against a business function, the first step is to identify a relevant industry-standard framework to compare business activities against. In information security, this usually means a standard such as ISO 27002 (best-practice recommendations for information security management). Another common comparison framework used as a cybersecurity benchmark is the NIST cybersecurity framework.
A gap analysis can be conducted against almost any business function, from strategy and staffing to information security. The common steps generally consist of the following:
Since a gap analysis provides measurable deficiencies and, in some cases, needs to be signed off by senior leadership, it can be a powerful tool for an organization to identify weaknesses in their efforts for compliance. It is also useful as a planning and prioritization tool, as any identified gaps can be evaluated against known risks. The gaps that correspond to risks should be prioritized first, since closing them also supports the organization's overall security risk management strategy.
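The comparison at the heart of a gap analysis can be sketched in a few lines of code. This is a minimal illustration, not a real tool: the control IDs, implemented-control set, and risk register entries below are invented for the example, and real frameworks contain far richer detail.

```python
# Minimal gap-analysis sketch: compare implemented controls against a
# framework checklist and flag the gaps. Control IDs are illustrative,
# not taken from any real standard.

# Hypothetical subset of framework requirements
framework_controls = {
    "AC-1": "Access control policy",
    "IR-1": "Incident response plan",
    "CP-1": "Contingency planning",
    "RA-1": "Risk assessment process",
}

# Controls the organization has actually implemented
implemented = {"AC-1", "RA-1"}

# Risks already tracked in the risk register, keyed by related control
known_risks = {"IR-1": "No documented breach response"}

def find_gaps(framework, in_place, risks):
    """Return gaps, listing risk-linked gaps first for prioritization."""
    gaps = [cid for cid in framework if cid not in in_place]
    # Gaps tied to known risks should be remediated first (False sorts first)
    return sorted(gaps, key=lambda cid: cid not in risks)

gaps = find_gaps(framework_controls, implemented, known_risks)
print(gaps)  # ['IR-1', 'CP-1'] -- the risk-linked gap sorts first
```

The sort key encodes the prioritization principle from the text: gaps that correspond to known risks are surfaced ahead of the rest.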
Any audit, whether related to financial reporting, compliance, or cloud computing security risk management, must be carefully planned and organized. This helps ensure that the results of the audit are relevant to the organization and contain useful information that can be used to help the organization improve on any identified weaknesses or deficiencies.
The audit process can generally be broken down into four phases, starting with audit planning. During this phase, the organization must perform several tasks, including the following:
Once the audit planning process is completed, the actual work of the audit begins. After planning, there are three major phases of an audit, which include the following activities:
In many organizations, audit is a continuous process. This is often structured into business activities to provide an ongoing view into how an organization is meeting compliance and regulatory goals. As part of the audit planning process, scheduling and coordinating these audits can be challenging but is essential to prevent audits from adding too much operational overhead. Cloud security practitioners can utilize audits as a way to monitor the status of their compliance programs and therefore the status of their risk mitigation strategies.
An information security management system (ISMS) is a systematic approach to information security consisting of processes, technology, and people designed to help protect and manage an organization's information. The ISO 27001 standard directly addresses the need for and approaches to implementing an ISMS, starting with an explanation of what the ISMS is and how it should align with other organizational processes:
The information security management system preserves the confidentiality, integrity, and availability of information by applying a risk management process and gives confidence to interested parties that risks are adequately managed.
It is important that the information security management system is part of and integrated with the organization's processes and overall management structure and that information security is considered in the design of processes, information systems, and controls. It is expected that an information security management system implementation will be scaled in accordance with the needs of the organization.
This International Standard can be used by internal and external parties to assess the organization's ability to meet the organization's own information security requirements.
Source: ISO/IEC 27001:2013
Information technology — Security techniques — Information security management systems — Requirements
An ISMS is a powerful risk management tool and is most often implemented at medium or large organizations where there is a formal need to quantify risk, develop and execute strategies to mitigate it, and provide formal reporting on the status of these risk mitigation efforts. It gives both internal and external stakeholders additional confidence in the security measures in place at the company.
Though the function of an ISMS can vary from industry to industry, there are a number of benefits to implementation that hold true across all industries.
As with any major organizational element, an ISMS requires buy-in from company leadership to be effective. For CSPs an ISMS can provide a single organizational function for addressing risks that customers will ask about, such as the security of the data they put into the cloud and availability of systems hosted in the CSP's environments. For cloud customers, their own internal ISMS is the implementation point for all the security controls discussed throughout this book, including risk management activities associated with migrating to and using cloud computing.
As a companion to an ISMS, a system of information security controls provides guidance for mitigating the risks identified as part of the ISMS's risk management processes. Often known as control frameworks, these are considered best practices guidance that can give the organization a starting point when addressing their identified risks. As with all shared resources, some modifications may be required.
Scoping controls refers to identifying which controls in the framework apply to the organization and which do not. There may be controls that deal with business processes, system types, or even technologies that are not in use in an organization. Tailoring is a process of taking the applicable controls and matching them to the organization's specific circumstances, such as removing any guidance for Windows systems if an organization is exclusively Linux based. To use a clothing analogy, scoping refers to excluding the sections of a store that sell clothes designed for other age groups—adults are unlikely to find anything wearable in the children's department! Once an appropriate outfit is selected, tailoring can ensure that it fits your individual body type, resulting in the best fit.
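The scoping-then-tailoring sequence can be illustrated with a small sketch. The control entries, platforms, and the baseline string below are assumptions made up for the example; a real framework would carry much more structure.

```python
# Sketch of scoping (dropping non-applicable controls) and tailoring
# (adapting what remains to the environment). All entries are invented.

controls = [
    {"id": "C-01", "topic": "workstation hardening", "platform": "windows"},
    {"id": "C-02", "topic": "server hardening",      "platform": "linux"},
    {"id": "C-03", "topic": "mainframe security",    "platform": "mainframe"},
]

environment = {"linux"}  # platforms actually in use at this organization

# Scoping: keep only controls that apply to this organization
scoped = [c for c in controls if c["platform"] in environment]

# Tailoring: adjust the applicable controls to local specifics
for c in scoped:
    c["baseline"] = "internal Linux hardening standard"  # assumed convention

print([c["id"] for c in scoped])  # ['C-02']
```

Scoping removes whole departments of the "store," in the clothing analogy; tailoring then adjusts what is left to fit.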
There are a number of control frameworks to choose from and various reasons to choose one or another. Organizations implementing an ISO 27001 ISMS will find the ISO 27002 controls very easy to use, since they are designed to fit together. Other control frameworks include NIST Special Publication 800-53, the NIST Cybersecurity Framework (CSF), the Secure Controls Framework, and the CSA CCM.
In addition to providing a set of standardized control activities, these frameworks may also provide guidance and processes for the tailoring and implementation of activities needed to meet the objectives. For instance, the NIST CSF organizes controls based on their intended risk mitigation functions.
For example, Identify controls are useful for identifying threats and risks, while Protect controls are designed to proactively mitigate identified risks. Once controls are in place, the Detect category includes controls related to detecting whether a security incident has occurred, while Respond and Recover focus on mitigating the impact and returning to normal operations. More information on the NIST CSF can be found at nist.gov/cyberframework/online-learning/five-functions.
Policies are a key part of any data security strategy. They give users a way to understand requirements and give the organization a way to enforce those requirements systematically. Employees and management are made aware of their roles and responsibilities via policies, which organizations use to govern activities occurring during the course of operations. Policies are an important piece of standardizing practices in an organization.
From a cloud computing perspective, policies can be an important tool to govern migration to and use of cloud resources. While cloud computing offers significant benefits like cost savings, it can also introduce unexpected or unwanted risk. Policies communicate expectations such as acceptable use of cloud services, helping to ensure that the organization balances the benefits realized via cloud computing without taking on unacceptable risks.
Policies are a formal and high-level document that should be approved by the organization's management. They support strategic goals and initiatives and generally do not contain highly specific details like system configurations or step-by-step procedures. Without formal management approval and proper education for relevant stakeholders, policies will be ineffective, so it is important for security practitioners to devote adequate attention to them.
Companies use policies to outline rules and guidelines, which are usually complemented by other documentation such as procedures, job aids, etc. Policies make employees aware of the organization's views and values on specific issues and what actions will occur if they are not followed. As an example, organizations typically define policies related to proper use of company resources like expense reimbursements and travel. These specify how and when employees can seek reimbursement and what rules they must follow when booking travel to ensure that the company complies with relevant accounting and fiduciary laws.
Policies are a proactive risk mitigation tool designed to reduce the likelihood of risks, such as the following:
A functional policy is a set of standardized definitions for employees that describe how they are to make use of systems or data. Functional policies typically guide specific activities crucial to the organization, such as appropriate handling of data, vulnerability management, and so on.
One common policy at many organizations is a data classification policy, which communicates what types of data the organization handles and what protections must be in place. Other policies, such as cloud computing and acceptable use policies, can provide guidance for appropriate handling of data on employee workstations and cloud services based on the data's classification level. This might include requirements for applying encryption, or even designate classification levels whose data is not to be processed in the cloud at all.
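A classification-driven handling rule can be expressed as a simple lookup. The classification levels and the rules attached to them below are assumptions invented for illustration; each organization defines its own levels in its data classification policy.

```python
# Illustrative classification-policy lookup: levels and handling rules
# here are assumptions, not taken from any specific standard.

policy = {
    "public":       {"cloud_allowed": True,  "encrypt": False},
    "internal":     {"cloud_allowed": True,  "encrypt": True},
    "confidential": {"cloud_allowed": True,  "encrypt": True},
    "restricted":   {"cloud_allowed": False, "encrypt": True},
}

def may_store_in_cloud(classification):
    """Return True only if policy permits cloud storage for this level."""
    return policy[classification]["cloud_allowed"]

print(may_store_in_cloud("restricted"))  # False -- stays on-premises
print(may_store_in_cloud("internal"))    # True, but encryption is required
```

Encoding the policy as data rather than scattered if-statements makes it easy to review the rules against the written policy document.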
Functional policies generally codify requirements identified in the ISMS and often align with the families of controls in security frameworks. The following, while not an exhaustive list, identifies several common policies that organizations might find useful:
The ease of deploying cloud resources has led to a significant problem known as shadow IT, which is any IT service or information system that exists without formal knowledge of the organization. In an organization that uses SharePoint for filesharing and collaboration, a single team signing up for and using Dropbox to share files is an example of shadow IT. The controls in place for data security in SharePoint are unlikely to be applied to Dropbox, since the service was not formally approved and secured by the organization's IT department. Shadow IT can also create financial risks, as the organization's IT spending becomes harder to measure when multiple teams are involved.
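One common way to surface shadow IT is to compare SaaS domains observed in egress or proxy logs against the organization's approved-service list. The sketch below assumes such a log exists; the domains are examples only.

```python
# Sketch of spotting shadow IT from egress logs: compare observed SaaS
# domains against the approved-service list. Domains are examples only.

approved = {"sharepoint.com", "office365.com"}

observed_domains = [
    "sharepoint.com",
    "dropbox.com",      # not approved -> potential shadow IT
    "office365.com",
    "dropbox.com",
]

def unapproved_services(domains, allowlist):
    """Return sorted unique observed domains missing from the allowlist."""
    return sorted(set(domains) - allowlist)

print(unapproved_services(observed_domains, approved))  # ['dropbox.com']
```

A hit on this list is a starting point for investigation, not proof of wrongdoing: the service may simply need to be brought under IT oversight.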
Cloud services should not be exempt from organizational policy application. These policies will define the requirements that users must adhere to in order to make use of the services and may dictate specific cloud services that are approved for various uses. Because of the ease of provisioning cloud services, many organizations have specific policies in place that discourage or prohibit the use of cloud services by individuals outside of central IT oversight.
Since cloud computing is outside the direct control of the organization, policies may be written to guide the selection and use of cloud environments, rather than being used to govern the day-to-day activities of internal employees. When evaluating policies and how they should be applied to the cloud, security practitioners should address major areas of risk such as the following:
In some instances, a cloud service provider cannot meet a company's requirements when it comes to adhering to a specific policy. If this happens, it is important to consider the risks of using the provider, and any deviations from policy should be carefully documented. All policy exceptions should be treated as risks, which require compensating controls to mitigate. If the threat landscape changes significantly, these risks may increase above the organization's tolerance, which will necessitate action such as finding a new CSP or moving back to on-premises hosting.
One key challenge in the audit process is the inclusion of any relevant stakeholders. This includes the organization's management who will likely be paying for the audit, security practitioners who will be responsible for facilitating the audit, and employees who will be called upon to provide evidence to auditors in the form of documentation, artifacts, or sitting for interviews.
Cloud computing environments can include more stakeholders than on-premises systems, because there can be multiple CSPs involved. For instance, a SaaS application may introduce both the SaaS vendor and their infrastructure provider, where an on-premises environment would involve only the organization's internal IT department. When it comes to performing audits, certain challenges can arise from these complicated supply chains.
It is important to both identify and involve all relevant stakeholders. If this is not done, any audit performed risks missing important details and information the auditors need to uncover potential weaknesses. This applies even without additional vendors or stakeholders—auditors will need access to relevant personnel such as system administrators and management inside the organization. When auditing a cloud system, stakeholders from the CSP may need to be informed or involved.
To identify relevant stakeholders, some key challenges that cloud security practitioners face include the following:
Responsibility for compliance with any relevant regulations ultimately rests with the cloud consumer; organizations that migrate to the cloud do not absolve themselves of the risks associated with their information systems. Some industries have cloud-specific regulatory or compliance guidance, and some have extensive regulatory frameworks due to the sensitivity of the data they handle. This significantly impacts the work of security practitioners, who may find their entire job description dictated by compliance requirements.
Many CSPs have compliance-focused cloud service offerings, which meet the requirements of specific regulatory or legal frameworks. An organization's cloud computing strategy should be designed with regulatory compliance in mind, including mandating the use of compliant cloud service offerings. The cloud customer is unlikely to perform their own audit of a CSP and instead will rely on the CSP's published audit reports to gain assurance that the CSP's services implement adequate protections to meet the regulatory requirements.
Highly regulated industries typically involve highly sensitive data, such as health or financial information, or provide services that make them critical infrastructure, such as power and other utility providers. Organizations in these industries need to be aware of the regulations governing their operations and ensure that their strategy for using cloud computing enables them to be compliant. Examples of these regulatory frameworks include the following:
Since public CSPs do not generally allow individual customers to perform audits, organizations in highly regulated industries may seek out a different cloud deployment model. If enough organizations need cloud computing, creating a community cloud might be a feasible option. Since the user community shares the same regulatory requirements, the community cloud can be specifically designed to meet those needs. This simplifies the task of compliance, and any audits performed on the cloud will be specific to the industry-specific regulations, which will make security activities easier for all customers of that community cloud.
Cloud computing enables distributed IT service delivery, with systems that can automatically replicate data and provide services from data centers around the globe. Auditing such a complex environment requires significant modifications from traditional computing models, where it was possible to point to a specific data center and specific server rack where data or systems were hosted.
One obvious impact of this distributed model is the additional geographic locations auditors must consider when performing an audit. An important term in audits is sampling, which is the act of picking a subset of the system's physical infrastructure to inspect. For example, when performing a configuration audit on a system with 100 web servers, an auditor might pull configuration information for 20 of them to perform checks. The time needed to audit all 100 is prohibitive, so reviewing 20 percent is adequate to determine if the organization's configuration management policy is being followed.
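The 100-server example above can be sketched with the standard library's sampling routine. The server names are placeholders; a fixed seed is used only so the sample is reproducible for review.

```python
import random

# Sampling sketch: pull a reproducible 20-server (20%) sample from a
# fleet of 100 web servers for a configuration review, as in the
# example above. Server names are placeholders.

servers = [f"web-{n:03d}" for n in range(100)]

rng = random.Random(42)           # fixed seed so the sample is repeatable
sample = rng.sample(servers, 20)  # 20 of 100 = a 20% sample, no repeats

print(len(sample))  # 20
```

`random.sample` draws without replacement, so no server is inspected twice; auditors may instead use stratified or risk-weighted sampling when the population is not homogeneous.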
Now expand this problem from 100 servers in a few data centers in one country: auditors now face hundreds of data centers in many different countries. Further complicating the issue is virtualization, which means the virtual servers that are part of a specific system could exist on any one of thousands of hardware clusters around the world—and their location can change almost instantly. This is an obvious benefit for system availability but makes the process of sampling much more difficult.
CSPs have found ways to collect evidence that provides auditors with sufficient assurance that they have collected a representative sample. This can include continuous monitoring strategies that capture information on a frequent enough basis to supply the auditor with sufficient, competent evidence. However, the cost of audits with geographically distributed systems will be greater, as the auditors may have to perform physical site inspections that require travel.
Legal jurisdiction issues can also complicate the process of conducting audits. While the process of auditing is not necessarily illegal, issues associated with auditors traveling and gaining access to facilities can add complexity to the audit process. For example, gaining access and approval to do business in some countries requires visas and work permits, which must be approved before an audit can take place. As with other aspects of cloud security, practitioners should coordinate with appropriate legal resources to determine any needs related to international auditing.
If you compare how IT services were provisioned two decades ago to how they are done today, you would see a completely different landscape. In the dot-com era, provisioning systems took experts days or weeks to build out hardware, operating systems, and applications, and that assumed a facility was available—if not, building out a data center could take months. Companies spent millions of dollars on physical infrastructure in the form of data centers, wide area networks, and physical server hardware. Today, anyone can provision a multitier web application in a few minutes or sign up for a SaaS application in mere seconds.
This shift in how IT services is provisioned has significantly altered enterprise risk management practices. In the past, the thought of “accidentally” provisioning a server simply did not exist, much less the scenario of spinning up infrastructure in another country or legal jurisdiction. Today that scenario is not only possible but also highly likely in many organizations, which opens them up to new risks. These require new management and mitigation strategies, approaches, and tools.
It is vital for both cloud customers and CSPs to understand not only how enterprise risk management has changed, but also how it continues to evolve as more organizations adopt cloud computing and novel cloud services emerge. New strategies can be employed for risk mitigation, and new ways of assessing, evaluating, and communicating about risk are needed.
Prior to establishing a relationship with a cloud provider, a cloud customer needs to analyze the risks associated with adopting that provider's services. The goal of this is the same as performing risk assessments for on-premises infrastructure, but the method of gaining assurance is different. Rather than performing a direct audit, the customer must rely on their supply chain risk management (SCRM) processes. Similar to shifting IT control away from the customer to the CSP, SCRM requires new approaches.
First and foremost in SCRM is evaluating whether a supplier has a risk management program in place, and if so whether the risks identified by that program are being adequately mitigated. Unlike traditional risk management activities, where the organization can directly review their own processes and procedures, SCRM may require an indirect approach. The major CSPs do not permit direct customer audits or assessments, so cloud customers must review audit reports furnished by the CSP to gain the information needed.
SOC 2, ISO 27001, FedRAMP, and CSA STAR have all been discussed in previous sections of this chapter. CSPs will engage a qualified third-party auditor to perform an audit and issue a report using one or more of these frameworks, and possibly others, depending on the markets the CSP is trying to win business in. Some customers are required to choose CSPs that are compliant with a particular framework, such as U.S. government agencies that must use FedRAMP-accredited CSPs. Nonregulated organizations may be able to choose a CSP that provides an audit report that offers adequate assurance, such as picking a CSA STAR Level 1 CSP. Although the Level 1 self-assessment provides only low assurance, the lower cost associated with a less-audited environment may be appropriate for lower-sensitivity data.
When reviewing an audit report, there are several key elements of the report to focus on. These include any scoping information or description of the audit target. Some compliance frameworks allow audits to be very narrowly scoped, such as SOC 2. If the CSP's SOC 2 audit did not cover a specific service a customer wants to use, then the audit finding does not provide any value. Also important to review are any findings, weaknesses, or deficiencies identified in the report, as these represent inadequate or nonfunctional risk mitigations. If the risk applies to a service the customer is not using, the finding can be ignored, but if it does impact a service in use, then that risk is inherited by the customer. This may drive changes, such as enhanced customer-side controls, tracking the CSP's mitigation and resolution efforts, or even migrating to another CSP altogether.
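The triage rule described above, in which only findings against services the customer actually uses represent inherited risk, can be sketched as a filter. Service names and findings below are invented for illustration.

```python
# Sketch of triaging audit-report findings: only findings against
# services the customer actually uses represent inherited risk.
# Service names and issues are invented examples.

findings = [
    {"service": "object-storage", "issue": "Weak key rotation"},
    {"service": "managed-dns",    "issue": "Stale access reviews"},
    {"service": "vm-compute",     "issue": "Patch SLA missed"},
]

services_in_use = {"object-storage", "vm-compute"}

# Findings outside the services in use can be noted and ignored;
# the rest are risks the customer inherits and must track.
inherited = [f for f in findings if f["service"] in services_in_use]

print([f["issue"] for f in inherited])
```

Each inherited finding then feeds the customer's own risk register, where it may drive compensating controls or tracking of the CSP's remediation.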
There are resources that can help organizations build out or enhance their SCRM program. NIST maintains a resource library that includes working groups, publications, and other resources, available at csrc.nist.gov/Projects/cyber-supply-chain-risk-management. ISO 28000:2022 specifies a security management system for security and resilience, with a particular focus on supply chain management. It extends concepts found in the ISO 31000:2018 standard, which focuses on enterprise risk management. Both standards provide guidance for identifying and assessing a supplier's security controls, policies and procedures, and the effectiveness of their security risk mitigations.
Two other important aspects to consider when evaluating a CSP's risk management program include the company's risk profile and risk appetite. Risk profile describes the risk present in the organization based on all the identified risks and any associated mitigations in place. For example, technology startups typically have much higher risk than established financial services firms. This is due to several factors, including the age of the company and maturity of their risk management programs, as well as the varying amount of regulation each industry faces.
Risk appetite describes the amount of risk an organization is willing to accept without mitigating—once again a startup is likely to accept more risk than a bank simply due to the resources required to mitigate those risks. A cash-strapped startup cannot staff up a risk department, so it must accept more operational and technology risk. Both of these factors should be considered by any cloud customers when evaluating a provider's risk management program. These details may be provided in audit reports as part of the provider's description of their ISMS, or in other documentation like security whitepapers.
An important distinction in data is the difference between the data owner (data controller) and the data custodian (data processor). While these nuanced definitions may seem unneeded, they do have implications for managing risks associated with privacy data. It is helpful to start with some definitions:
For example, let's say BikeCo sells bicycles and allows users to provide personal data online to fulfill orders. They use a CSP, called CloudWheelz, to host their website and customer database, as well as an online payment processor, Circle, to handle payment card transactions. In this case, any customer is the data subject, and BikeCo is the data controller. Both CloudWheelz and Circle are data processors that act on behalf of the data owner. In the event that either CloudWheelz or Circle suffers a data breach, BikeCo is still legally liable for the data. If they have not taken adequate steps to ensure that their data processors implemented adequate protections, regulatory agencies are likely to assess significant penalties.
The distinctions are important for regulatory and legal reasons. Data processors are responsible for the safe and private custody, transport, and storage of data according to business agreements. Data owners are legally responsible (and liable) for the safety and privacy of the data under most international laws. When data controllers use processors, they must ensure that security requirements follow the data. This is often achieved via contract clauses that specify data protection and handling requirements, breach notification timelines, and possibly risk transfer such as the requirement for the processor to carry insurance that helps defray costs associated with a security incident.
A cloud security professional should be aware of the transparency requirements imposed on data controllers by various regulations and laws around the world. Many of these were written before cloud computing was as pervasive as it is today, so it is also important to stay informed about changes, as well as new regulations that come into force and impact the organization. Many legal firms will provide this kind of guidance as a service, and in-house legal counsel can also be a useful resource to identify regulatory requirements and any changes needed to come into compliance.
The following is a short and noncomprehensive list of several important regulatory frameworks that require transparency related to data security and privacy. Security practitioners in organizations regulated by these frameworks must be aware of the frameworks and work to implement the required controls. As a data owner or processor, cloud security professionals must be aware of all relevant regulatory requirements.
Most recent privacy laws include mandatory breach notification. If an organization suffers a data breach, it is obligated to provide notification of that breach. There are some variations among the laws, mainly around the timing of the notification and who must be notified. Some regulations require notification within a specified time period of a suspected breach, while others are less strict and require notification only for a confirmed breach. Similarly, some regulations require notification only to the affected data subjects, while others require notification to a regulatory official such as a governmental data privacy official.
Regulations that require breach notification include, but are not limited to, GDPR, HIPAA (as amended by the HITECH Act), GLBA, and PIPEDA. In addition to these, numerous regional, state, and provincial regulations require data breach notification. Cloud security professionals should identify all relevant regulatory frameworks their organization is subject to, and build processes to ensure that obligations are met for notifying affected data subjects.
Incident response plans and procedures should include relevant information about the time period for reporting, as well as the required contacts in the event of a data breach. They should also include guidance on when it is necessary to contact specific data privacy officials. For example, a data breach that affects only U.S. citizens does not need to be reported to any data protection officials in the EU, since the GDPR does not apply to those data subjects.
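Notification windows like these can be encoded directly into incident response tooling so deadlines are computed automatically from the discovery time. In the sketch below, the 72-hour GDPR window (Article 33) reflects the real regulation; the HIPAA entry and the contact descriptions are simplified placeholders, and an actual plan must be verified against counsel.

```python
from datetime import datetime, timedelta

# Sketch of tracking breach-notification deadlines in an incident
# response plan. GDPR's 72-hour window (Article 33) is real; the other
# entries and contacts are simplified placeholders for illustration.

notification_rules = {
    "GDPR":  {"hours": 72,      "contact": "EU supervisory authority"},
    "HIPAA": {"hours": 60 * 24, "contact": "HHS and affected individuals"},
}

def notify_by(regulation, discovered_at):
    """Deadline for reporting a breach discovered at the given time."""
    window = timedelta(hours=notification_rules[regulation]["hours"])
    return discovered_at + window

discovered = datetime(2024, 3, 1, 9, 0)
print(notify_by("GDPR", discovered))  # 2024-03-04 09:00:00
```

Keeping the rules in one table makes it straightforward to review them whenever a regulation changes or a new jurisdiction applies.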
If a company is publicly traded in the United States, they are subject to transparency requirements under the Sarbanes-Oxley Act (SOX) of 2002. Specifically, as data owners, these companies should consider the following:
SOX compliance is often an issue in both data breaches and ransomware incidents at publicly traded companies. The fact that compliance-relevant data was lost to external actors does not relieve a company of its legal obligations.
For companies doing business in the European Union or with citizens of the EU, transparency requirements under the GDPR are laid out in Article 12 (see gdpr-info.eu/art-12-gdpr
). The exact language states that a data controller (data owner) “must be able to demonstrate that personal data are processed in a manner transparent to the data subject.” The obligations for transparency begin at the data collection stage and apply “throughout the lifecycle of processing.”
The GDPR stipulates that communication to data subjects must be “concise, transparent, intelligible and easily accessible, and use clear and plain language.” Achieving this may not be the responsibility of a security practitioner, but security should be represented when requirements are developed for user interfaces and the language presented to users. Legal counsel may also be involved to ensure that the requirements under GDPR are met by any system or application designs.
Meeting the requirement for transparency also requires processes for providing data subjects with access to their data. This process may be owned by customer-facing resources such as a support team, with input required from legal and security to ensure that the procedures meet the GDPR requirements.
In simple terms, this means that plain language must be used to explain why data is being collected and what it is being used for. Similar language is included in other privacy regulations, so building a robust process for providing transparent information is a requirement for many security teams.
Risk treatment is the practice of modifying risk, usually to lower it, which can be achieved in a number of ways. Risk treatment begins with identifying and assessing risks, typically by measuring the likelihood and impact of their occurrence. Not all risks can be treated equally, so risk management usually prioritizes those risks that are higher impact and likelihood first. These risk assessment procedures should be documented as part of the organization's ISMS.
Once risks are identified, risk treatments should be selected to reduce the likelihood or impact (or both). Treatments that reduce likelihood are proactive and sometimes known as safeguards, while treatments that reduce the impact after a risk event has occurred are reactive and known as countermeasures. These are collectively referred to as controls, which should be selected using a cost-benefit analysis to achieve acceptable risk mitigation at an acceptable cost.
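The prioritization described above can be illustrated with a simple likelihood-times-impact scoring sketch; the risk names and the 1-to-5 ordinal scores here are purely illustrative:

```python
# Sketch of likelihood x impact risk scoring, assuming a 1-5 ordinal
# scale for each factor. Risks and scores are fabricated examples.
risks = [
    {"name": "CSP region outage", "likelihood": 2, "impact": 5},
    {"name": "Misconfigured storage bucket", "likelihood": 4, "impact": 4},
    {"name": "Insider data theft", "likelihood": 1, "impact": 5},
]

# Compute a composite score for each risk.
for risk in risks:
    risk["score"] = risk["likelihood"] * risk["impact"]

# Treat the highest-scoring risks first.
for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{risk["name"]}: {risk["score"]}')
```

Real risk registers often use more nuanced scales and qualitative inputs, but the underlying prioritization logic is the same.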
There are four main approaches to treating risk, and many organizations will use more than one treatment option for the same risk. The options are risk avoidance, risk mitigation, risk transfer, and risk acceptance.
It is important to remember that risks are never entirely eliminated. Mitigation and transfer only reduce risk, and cloud security practitioners should keep this in mind as they perform tasks like evaluating CSPs and selecting security controls for implementation.
Similar to security control frameworks, there are several risk management frameworks available for security practitioners to use as guides when designing a risk management program. Many of these are published by the same bodies that publish the control frameworks, and they are complementary. Organizations designing an ISO 27001 ISMS can easily utilize the relevant ISO standard for designing a security risk management program. These standards are known as risk management frameworks (RMFs).
It is important to note that risk management is not only a security activity. Other departments typically pursue risk management as well, which means the security team might be required to work in a collaborative way when conducting risk assessment and management. Some organizations may find a single risk management function to be useful, so executives have a single view of risk and associated metrics. Other organizations may allow different departments to conduct risk management differently, especially if there are conflicting regulations that govern the activities.
In the cloud computing arena, a cloud security professional should be familiar with the ISO 31000:2018 guidance standard, the European Network and Information Security Agency (ENISA)'s cloud computing risk assessment tool, and NIST standards such as SP 800-146, “Cloud Computing Synopsis and Recommendations,” and SP 800-37, “Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy.”
The ISO 31000 family contains several standards related to building and running a risk management program. ISO 31000:2018, “Risk management — Guidelines,” provides the foundation of an organization's RMF, while IEC 31010:2019, “Risk management — Risk assessment techniques,” provides guidance on conducting a risk assessment. The related ISO Guide 73:2009, “Risk management — Vocabulary,” provides a standard set of terminology used throughout the other documents and is useful for defining elements of the risk management program.
This ISO standard provides generic recommendations for the design, implementation, and review of risk management processes within an organization. The 2018 update provides more strategic guidance and redefines risk from the concept of a “probability of loss” to a more holistic view of risk as the “effect of uncertainty on objectives,” recasting risk as having either a negative or positive effect. ISO 31000 recommends the following steps in planning for risk:
ISO 31000 is a detailed framework but is not designed to be used for certification (there is no such thing as being “ISO 31000 certified”). Adopting this framework requires extensive management commitment to accountability, as well as strategic policy implementation, communication, and review practices. Documents and supporting resources can be found here: iso.org/iso-31000-risk-management.html
.
As a rough equivalent to the U.S. NIST, ENISA produces useful resources related to information and cybersecurity aligned with EU government objectives and programs. The “Cloud Computing Risk Assessment” is one of these documents and provides details of cloud-specific risks that organizations should be aware of and plan for when designing cloud computing systems.
This guide identifies various categories of risks and recommendations for organizations to consider when evaluating cloud computing. These include research recommendations to advance the field of cloud computing, legal risks, and security risks. Examples of the security risks identified include the following:
The full document can be accessed here: enisa.europa.eu/publications/cloud-computing-risk-assessment
.
Although a U.S. government agency, NIST publishes well-regarded information security standards that are free to download and may be used by any organization. NIST Special Publication (SP) 800-146, “Cloud Computing Synopsis and Recommendations,” provides definitions of various cloud computing terms. These include the service and deployment models like SaaS and public cloud, which were discussed in Chapter 1. Although not a dedicated risk management standard, the various risks and benefits associated with different deployment and service models are discussed. These can be an important input to any discussion of cloud computing risk, and the document may be found here: csrc.nist.gov/publications/detail/sp/800-146/final
.
NIST also publishes an RMF, documented in NIST Special Publication 800-37. This document specifies the RMF to be used by U.S. government federal agencies and is often applied to organizations providing goods and services to these agencies. Although it shares some terminology with the ISO 31000 standard, the NIST RMF is specifically designed to address security and privacy risks. The RMF is flexible and can be applied at multiple levels of an organization, including the system level, an organizational unit level, or across the entire organization. The full publication and supporting documents are located at csrc.nist.gov/publications/detail/sp/800-37/rev-2/final
.
There are some key cybersecurity metrics that companies can track to present measurable data to company stakeholders. Each organization should evaluate its strategy, risks, and management requirements for data when designing a metrics program. Some metrics that are commonly tracked include the following:
Metrics provide vital information for decision makers in the organization. Metrics that are within expected parameters indicate risk mitigations that are operating effectively and keeping risk at an acceptable level. Metrics that deviate from expected parameters, such as MTTD increasing, can indicate that existing risk mitigations are no longer effective and should be reviewed.
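Metrics such as mean time to detect (MTTD) and mean time to resolve (MTTR) can be computed directly from incident records. The following sketch, using fabricated timestamps, shows one way to calculate both:

```python
from datetime import datetime
from statistics import mean

# Illustrative incident records; the timestamps are fabricated.
incidents = [
    {"occurred": datetime(2023, 1, 10, 8, 0),
     "detected": datetime(2023, 1, 10, 14, 0),
     "resolved": datetime(2023, 1, 11, 2, 0)},
    {"occurred": datetime(2023, 2, 3, 9, 0),
     "detected": datetime(2023, 2, 3, 11, 0),
     "resolved": datetime(2023, 2, 3, 20, 0)},
]

def mttd_hours(records):
    """Mean time to detect: average of (detected - occurred), in hours."""
    return mean((r["detected"] - r["occurred"]).total_seconds() / 3600
                for r in records)

def mttr_hours(records):
    """Mean time to resolve: average of (resolved - detected), in hours."""
    return mean((r["resolved"] - r["detected"]).total_seconds() / 3600
                for r in records)

print(mttd_hours(incidents))  # 4.0
print(mttr_hours(incidents))  # 10.5
```

Tracking these values over time, rather than in isolation, is what makes deviations from expected parameters visible to decision makers.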
The cloud has become a critical operating component for many organizations, so it is crucial to identify and understand the risks posed by a CSP. Cloud providers are subject to risks similar to other service providers, but since they provide a critical service to many organizations, the impact of these risks is increased. A number of questions should be considered when evaluating a cloud service, vendor, or infrastructure provider.
It can be a daunting challenge for any cloud customer to perform due diligence on their provider. However, since the customer organization still holds legal accountability, it is a vital step in selecting and using a vendor. Designing an SCRM program to assess CSP or vendor risks is a due diligence practice, and actually performing the assessment is an example of due care. As a data controller, any organization that uses cloud services without adequately reviewing and mitigating the risks is likely to be found negligent should a breach occur.
There are frameworks for evaluating vendor and infrastructure risks, which provide guidance on designing and executing the required processes. Some of these are general technology risk management frameworks, while others are specifically designed for cloud computing.
The Common Criteria for Information Technology Security Evaluation is an international standard for information security certification. The evaluation process is designed to establish a level of confidence in a product or platform's security features through a quality assurance process.
Common Criteria (CC) evaluation is done through testing laboratories where the product or platform is evaluated against a standard set of criteria. This includes the Target of Evaluation (ToE) and Protection Profiles (PP), which describe the system evaluated and specific security services the product offers. The result is an Evaluation Assurance Level (EAL), which defines how robust the security capabilities are in the evaluated product.
Most CSPs do not have Common Criteria evaluations over their entire environments, but many cloud-based products do. One common example relevant to security practitioners is security tools designed to be deployed in virtual environments like the cloud. Defining a desired EAL is useful when evaluating security products, as it allows the organization to select products that have been independently verified against a standardized set of criteria.
A product that has undergone CC evaluation cannot be considered totally secure. The ToE specifies the configuration of the product, and failure to configure a system to the same specification as the ToE can result in a less-secure state. Similarly, newly discovered vulnerabilities could lead to loss of security in a system even if it is properly configured. EALs are useful as a selection tool but are not an absolute guarantee of security.
An up-to-date list of certified products can be found at commoncriteriaportal.org/products
.
When evaluating the risks in a specific CSP or other cloud service, the CSA STAR can be a useful method for ascertaining risks. The CSA STAR contains evaluations of cloud services against the CSA's cloud-specific controls (the CCM), and organizations have the flexibility to select self-assessed or third-party-assessed cloud services. Organizations that are not regulated by other frameworks and that make extensive use of the cloud may find this a lightweight but useful risk management framework.
Since the registry of certified providers is publicly available, the STAR program makes it easy for a company to assess the relative risk of a provider and should certainly be consulted when assessing any new CSP.
ENISA has published a standard for certifying the cybersecurity practices present in cloud environments. The framework, known as EUCS, defines a set of evaluation criteria for various cloud service and deployment models, with the goal of producing security evaluation results that allow comparison of the security posture across different cloud providers. The standard is still under development as of 2022, but the draft scheme can be found here: enisa.europa.eu/publications/eucs-cloud-service-scheme
.
The EUCS defines several elements needed to support certification, and many are similar to the Common Criteria. These include assurance levels, necessary information for assurance reviews and tests, and a process for self-assessments. Newly discovered vulnerabilities are explicitly identified as an area of concern, and the scheme identifies a process for handling such vulnerabilities and updating any relevant certification documentation as needed. In addition, the scheme identifies conformity assessment body (CAB) criteria that detail the requirements an organization must meet to perform evaluations and issue certifications under EUCS.
Outsourcing refers to using a party outside the organization to perform services or deliver goods. Outsourcing can allow organizations to take advantage of higher-skilled resources that may be difficult or expensive to hire internally or take advantage of shared resources that benefit from an economy of scale. CSPs provide this type of outsourcing; by pooling and sharing resources, organizations gain access to a globally distributed network of data centers that would be prohibitively expensive for all but the largest multinational companies.
When entering into an outsourcing arrangement, organizations utilize a variety of legal agreements and must also perform oversight and monitoring functions to validate compliance with the agreed terms. Cloud security professionals are well served by understanding key contractual provisions that provide risk management options for the specific CSPs that are used by their organizations.
Before executing a contract with a CSP, it is important for any business to fully understand its own business needs in order to select a CSP that can adequately meet those needs. The evolution of the cloud means that more and more IT functions can use cloud computing. Once an organization deems cloud computing fit for its needs, the process of codifying those needs and identifying CSPs that meet them can begin. In legal terms, a cloud customer and a CSP enter into a master service agreement (MSA), an overarching contract that establishes the general terms governing the service relationship between the parties.
Many organizations will have standardized contract templates, and the task of creating and maintaining these is usually not assigned to the security team. Legal counsel is most often responsible for these contracts, but input from the security team is essential to ensure that security requirements make it into these templates. Common areas of security that should be addressed in contracts include any compliance requirements the customer is passing along to the CSP, as well as important processes and parameters the CSP must meet, such as the duty to inform the customer of a breach within a specific time period after detection.
Another important legal document that may be required is a statement of work (SOW). SOWs are usually created after an MSA has been executed and govern a specific unit of work. For example, the agreement to use a CSP's services at specific prices would be documented in the MSA. A SOW could be issued under this MSA detailing requirements, expectations, and deliverables for a major project, such as paying the CSP to assist with a migration from on-premises to cloud hosting.
The greater specificity of a SOW allows for more granular security requirements that are specific to that unit of work, such as the use of physically secured transport and secure handling of the hard drives the CSP uses to migrate the customer's data into its systems. If this activity is performed only during the initial migration, these requirements do not make sense in the overall MSA, since the physical data transfer is not part of the ongoing services the CSP provides.
The final legal document where business requirements can be captured is the service level agreement (SLA), which specifies levels of service the CSP is obligated to provide. SLAs measure common aspects of service delivery like uptime and throughput and are often tied to system requirements the organization needs in order to function properly.
The SLA is a legally binding agreement, and if the CSP fails to provide the specified levels of service, the customer usually has recourse options defined. These may include refunds or credits for the service and possibly the ability to terminate the contract without penalties. While this is a dramatic option, if a CSP is unable to meet the organization's required service levels, then the organization should be free to seek out another provider. Contracts often contain penalties for early termination without cause, so monitoring the service levels and properly documenting any shortcomings is essential, as this often enables the customer to terminate the contract with cause and avoid the termination fee.
Service level agreements can be a key factor in avoiding potential issues once a contract is in place. Service metrics, such as uptime or quality of service, can be included in this section of a contract. An uptime of 99 percent may seem adequate, but that level of allowed downtime would be equal to 87.6 hours a year. For a nonessential system, this may be acceptable, but a mission-critical system that must be available 24/7 cannot tolerate that much downtime!
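The downtime arithmetic above is easy to generalize; this short sketch converts an SLA uptime percentage into the downtime it permits per year:

```python
# Convert an SLA uptime percentage into the downtime it allows per year.
HOURS_PER_YEAR = 365 * 24  # 8760

def allowed_downtime_hours(uptime_percent: float) -> float:
    """Hours of downtime per year permitted by a given uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_percent / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime allows {allowed_downtime_hours(sla):.2f} hours/year")
```

Each additional "nine" reduces the allowed downtime by a factor of ten, which is why seemingly small differences in SLA percentages matter so much for mission-critical systems.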
SLAs should be written to ensure that the organization's service level requirements (SLRs) are met, and SLAs are best suited for defining recurring, discrete, measurable items the parties agree upon. This is in contrast to nonrecurring items that are better suited to a contract, such as agreed prices for specific services. Examples of these requirements, and common elements documented in SLAs, include the following:
As discussed previously, the process of managing risk is complicated when parts of the organization's IT infrastructure exist outside the organization's direct control. The practices of SCRM and vendor management overlap significantly, though in many cases vendor management will include more activities related to operational risks.
Vendor management concerns existed for traditional, on-premises infrastructure, but the activities required for cloud computing necessitate different processes and approaches. Selecting a vendor for on-premises hardware might have been a once-and-done activity, since hardware would be expected to last for a defined period of time; at the end of its useful life, assessment of replacement vendors would be conducted.
Cloud computing requires more continuous management activities, since it involves outsourcing ongoing organizational processes and infrastructure to a service provider. This redefined relationship requires a great deal of trust and communication with vendors. Cloud professionals need strong project and people management skills to be successful when performing activities such as the following:
The management of cloud contracts is a core business activity that is central to any ongoing relationship with a CSP. Organizations must employ adequate governance structures to monitor contract terms and performance and be aware of outages and any violations of stated agreements. A standards body known as the OMG Cloud Working Group publishes a useful guide to cloud service agreements, including defining and enforcing contracts, managing SLAs, and building programs to govern these service arrangements. The guide can be found here: omg.org/cloud/deliverables/Practical-Guide-to-Cloud-Service-Agreements.pdf
.
There are a number of specific elements that should be considered when engaging a CSP or other cloud provider. A contract clause is a specific provision that sets out part of the agreement between the contracting parties. Examples of clauses include language related to the customer's obligation to pay and any security requirements the customer expects the service provider to meet, such as implementing industry-standard security controls.
Writing and reviewing contract clauses may be outside the scope of a security professional's job, but understanding the function of these clauses and important considerations that should be addressed in the contract is important. This necessitates collaboration between security practitioners and legal counsel, especially as contract negotiations often involve very advanced legal knowledge. Some common contract clauses that should be considered for any CSP or other data service provider include the following:
Cyber risk insurance is designed to help an organization reduce the financial impact of risk by transferring it to an insurance carrier. In the event of a security incident, the insurance carrier can help offset associated costs, such as digital forensics and investigation, data recovery, system restoration, and even covering legal or regulatory fines associated with the incident. As discussed previously, cyber insurance carriers are in the business of risk management, so they are unlikely to offer coverage to an organization that is lacking in security controls designed to mitigate some of the risk.
Cyber insurance requires organizations to pay a premium for the insurance plan, and most plans have a limit of coverage that caps how much the insurance carrier pays. There may also be sublimits, which cap the amount that will be paid for specific types of incidents such as ransomware or phishing. It is important to understand what type of coverage is best suited to your organization's unique operating circumstances, and an insurance broker can be a useful resource when investigating insurance options. Factors to discuss with a broker include the amount of coverage needed, different types of coverage such as business interruption or cyber extortion, and security controls that the insurance carrier requires such as MFA. The broker can help to ensure that the insurance coverage is appropriate to an organization's unique circumstances and may be able to save money by eliminating unnecessary elements of the policy.
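The interaction of deductibles, sublimits, and the overall policy limit can be sketched with simple arithmetic; the figures and the payout model here are illustrative and not taken from any real policy:

```python
# Sketch of how a deductible, a per-category sublimit, and the overall
# policy limit cap a claim payout. All figures are illustrative.
def claim_payout(loss: int, deductible: int, sublimit: int,
                 policy_limit: int) -> int:
    """Payout = covered loss capped by the sublimit and the policy limit."""
    covered = max(loss - deductible, 0)
    return min(covered, sublimit, policy_limit)

# A $2M ransomware loss under a policy with a $50k deductible, a $500k
# ransomware sublimit, and a $1M overall limit pays out only $500k.
print(claim_payout(2_000_000, 50_000, 500_000, 1_000_000))  # 500000
```

Walking through this kind of calculation with a broker helps clarify how much residual financial risk the organization retains after transfer.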
Cyber risk insurance usually covers costs associated with the following:
Supply chain attacks are increasing in frequency and severity and have been on this track for some time. Back in 2013 the retail company Target was breached via a vendor with weak security controls. In 2020, governments and major companies around the world were impacted by an attack against the popular SolarWinds network monitoring tool, which was shipped to customers with compromised code. The popular open-source package registry npm has come under repeated attack, since so many open-source software (OSS) packages it hosts are incorporated into other software tools used by organizations all over the world.
Managing risk in the supply chain focuses on both operational risks, to ensure that suppliers are capable of providing the needed services, and security risks. This includes ensuring that suppliers have adequate risk management programs in place to address the risks that they face. Without these controls, risks that impact your organization's suppliers can easily turn into risks that impact your organization. If a major CSP does not enforce environmental controls and server equipment begins to fail, this translates to loss of availability for all of the CSP's customers.
The supply chain should always be considered in any business continuity or disaster recovery planning. The same concepts of understanding dependencies, identifying single points of failure, and prioritizing services for restoration are important to apply to the entire supply chain. Proactive measures including contract language and assurance processes can be used to quantify the risks associated with using suppliers like CSPs, as well as the effectiveness of these suppliers' risk management programs.
The ISO 27000 family of standards has been discussed in many areas of this reference guide, and there is a specific standard dedicated to supply chain cybersecurity risk management. ISO 27036:2021 provides a set of practices and guidance for managing cybersecurity risks in supplier relationships. This standard is particularly useful for organizations that use ISO 27001 for building an ISMS or ISO 31000 for risk management, as it builds on concepts found in those standards.
ISO 27036 comprises four parts:
ISO 27036, like other ISO standards, is not a free resource. Additional resources worth reviewing include NISTIR 8276, “Key Practices in Cyber Supply Chain Risk Management: Observations from Industry”; NIST SP 800-161, “Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations”; and the 2015 ENISA publication “Supply Chain Integrity: An overview of the ICT supply chain risks and challenges, and vision for the way forward.”
A cloud security professional must be constantly aware of the legal and compliance issues inherent in migrating and maintaining systems in the cloud. Understanding the legal requirements, privacy issues, audit challenges, and how these relate to risk and contracts with cloud providers is a must for any company taking advantage of cloud services. A cloud security professional must also be well versed in the frameworks provided by professional organizations such as ENISA, NIST, ISO, and the CSA. All information security activities are tied back to business risks, since security should always be aligned to the needs of the organization. Understanding, assessing, and mitigating these risks is critical to any business strategy, and requires collaboration between the security professional and other teams. Cloud security professionals should understand the role that IT plays in this larger picture and have the communication and people skills to involve the appropriate business, legal, and risk decision makers across the organization.